Periodic Splines and Gaussian Processes for the Resolution of Linear Inverse Problems
Abstract
This paper deals with the resolution of inverse problems in a periodic setting or, in other terms, the reconstruction of periodic continuous-domain signals from their noisy measurements. We focus on two reconstruction paradigms: variational and statistical. In the variational approach, the reconstructed signal is the solution of an optimization problem that establishes a tradeoff between fidelity to the data and smoothness conditions via a quadratic regularization associated with a linear operator. In the statistical approach, the signal is modeled as a stationary random process defined from a Gaussian white noise and a whitening operator; one then looks for the optimal estimator in the mean-square sense. We give a generic form of the reconstructed signals for both approaches, allowing for a rigorous comparison of the two. We fully characterize the conditions under which the two formulations yield the same solution, which is a periodic spline in the case of sampling measurements. We also show that this equivalence between the two approaches remains valid on simulations for a broad class of problems. This extends the practical range of applicability of the variational method.
I Introduction
This paper deals with inverse problems: one aims at recovering an unknown signal from its corrupted measurements. More specifically, the motivation of this work is the reconstruction of an unknown continuous-domain and periodic signal $f$ from its noisy measurements $y_m = \nu_m(f) + \epsilon_m$ for $m = 1, \ldots, M$, where the $\nu_m$ are measurement functions. The goal is then to build an output signal $\tilde{f}$ that is as close as possible to $f$.
I-A Inverse Problems in the Continuous Domain
Inverse problems are often formulated in the discrete domain [1, 2, 3, 4, 5]. This is motivated by the need to manipulate digital data on computers. Nevertheless, many naturally occurring signals depend on continuous variables (e.g., time or position). This leads us to attempt recovering a signal that depends on the continuous variable $t$. In contrast with the classical discrete setting, our search space for this reconstructed signal is thus infinite-dimensional [6]. Moreover, we choose a regularization based on true derivatives (as opposed to finite differences) to impose some smoothness on the reconstructed signal, a concept that is absent in the discrete setting.
When considering continuous-domain reconstruction methods, a majority of works, typically in machine learning, deal with sampling measurements. The goal is then to recover $f$ from its (possibly noisy) values at fixed locations $t_1, \ldots, t_M$. In order to investigate a more general version of inverse problems, we shall consider generalized measurements [7, 8]. They largely exceed the sampling case and include Fourier sampling or convolution (e.g., MRI, X-ray tomography [9, 10]). Our only requirement is that the measurements depend linearly on, and evolve continuously with, the unknown signal up to some additive noise, so that $y_m = \nu_m(f) + \epsilon_m$.
I-B Variational vs. Statistical Methods
In the discrete domain, two standard strategies are used to reconstruct an input signal $x$ from its noisy measurements $y = Hx + \epsilon$, where $H$ models the acquisition process [5]. The first approach is deterministic and can be traced back to the '60s with Tikhonov's seminal work [11]. The ill-posedness of the problem usually imposes the addition of a regularizer. By contrast, Wiener filtering is based on the stochastic modeling of the signals of interest and the optimal estimation of the targeted signal. This paper generalizes these ideas for the reconstruction of continuous signals from their discrete measurements.
In the variational setting, the reconstructed signal is a solution to an optimization problem that imposes some smoothness conditions [12]. More precisely, the optimization problem may take the form
$$\tilde{f} = \arg\min_{f} \; \sum_{m=1}^{M} \big(y_m - \nu_m(f)\big)^2 + \lambda \|\mathrm{L} f\|_{L_2}^2, \qquad (1)$$
where $\mathrm{L}$ is a linear operator. The first term in (1) controls the data fidelity. The regularization term $\|\mathrm{L} f\|_{L_2}^2$ constrains the function to satisfy certain smoothness properties (for this reason, the variational approach is sometimes called a smoothing approach). The parameter $\lambda > 0$ in (1) quantifies the tradeoff between the fidelity to the data and the regularization constraint.
In the statistical setting, the signal is modeled as a random process and is optimally reconstructed using estimation theory [13]. More precisely, one assumes that the continuous-domain signal is the realization of a stochastic process $s$ and that the samples are given by $y_m = \nu_m(s) + \epsilon_m$, where $\epsilon_m$ is a random perturbation and $\nu_m$ a linear measurement function. In this case, one specifies the reconstructed signal as the optimal statistical estimator in the mean-square sense
$$\tilde{s} = \arg\min_{\tilde{s} = g(\mathbf{y})} \; \mathbb{E}\big[\|s - \tilde{s}\|_{L_2}^2\big], \qquad (2)$$
where the estimators are computed from the generalized samples $y_1, \ldots, y_M$. The solution depends on the measurement functions $\nu_m$ and on the stochastic models specified for $s$ and $\epsilon$. In our case, the random process $s$ is characterized by a linear operator $\mathrm{L}$ that is assumed to have a whitening effect (it transforms $s$ into a periodic Gaussian white noise), while the perturbation $\epsilon$ is i.i.d. Gaussian.
I-C Periodic and General Setting
The variational and statistical approaches have been extensively studied for continuous-domain signals defined on the infinitely supported real line. However, it is often assumed in practice that the input signals are periodic. In fact, a standard computational approach in signal processing is to extend signals of otherwise bounded support by periodization. Periodic signals also arise naturally in applications such as the parametric representation of closed curves [14, 15, 16]. This has motivated the development of signal-processing tools and techniques specialized to periodic signals in sampling theory, error analysis, wavelets, stochastic modeling, and curve representation [17, 18, 19, 20, 21, 22, 23].
In this paper, we develop the theory of the variational and statistical approaches for periodic continuousdomain signals in a very general context, including the following aspects:

We consider a broad class of measurement functions, with the only assumptions that they are linear and continuous.

Both methods refer to an underlying linear operator that affects the smoothness properties of the reconstruction. We deal with a very broad class of linear operators acting on periodic functions.

We consider possibly nonquadratic data fidelity terms in the smoothing approach.
I-D Related Works
The topics investigated in this paper have already received some attention in the literature, mostly in the nonperiodic setting.
Reconstruction over the Real Line
Optimization problems of the form (1) appear in many fields and receive different names, including inverse problems in image processing [5], representer theorems in machine learning [24], or sometimes interpolation elsewhere. Schoenberg was the first to show the connection between (1) and spline theory [25]. Since then, this has been extended to other operators [26] and to the interpolation of the derivative of the signal [27, 28]. Many recent methods deal with non-quadratic regularization, especially those aimed at the reconstruction of sparse discrete [29, 30] or continuous signals [6, 31, 32, 33]. We discuss this aspect more extensively in Section VI-B.
A statistical framework requires the specification of the noise and of the signal stochastic model. The signal is then estimated from its measurements. A classical measure of the quality of an estimator is the mean-square error. This criterion is minimized by the minimum mean-square error (MMSE) estimator [13, 34]. The theory has been developed mostly for Gaussian processes and in the context of sampling measurements [35]. We are especially interested in innovation models, for which one assumes that the signal can be whitened (i.e., transformed into a white noise) by the application of a linear operator [36, 37]. Non-periodic models have been studied in many situations, including the random processes associated with differential [38, 39] or fractional operators [40]. Extensions to non-Gaussian models are extensively studied by Unser and Tafti [41].
The statistical and variational frameworks are deeply connected. It is remarkable that the solution of either problem can be expressed as a spline function related to the linear operator involved in the regularization (variational approach) or in the whitening (statistical approach). Wahba has shown that the two approaches are strictly equivalent in the case of stationary Gaussian models [42]. This equivalence has also been recognized by several authors since then, as shown by Berlinet and Thomas-Agnan [35], and Unser and Blu [43]. In the nonstationary case, this equivalence is no longer valid, and the existence of connections has received less attention.
Reconstruction of Periodic Signals
Some strong practical concerns have motivated the need for an adaptation of the theory to the periodic setting. Important contributions in that direction have been proposed. Periodic splines are constructed and applied to sampling problems by Schoenberg [44] and Golomb [45]. The smoothing spline approach is studied in the periodic setting by Wahba [42] for derivative operators of any order. Although the periodic extension of the classical theory is briefly mentioned by several authors [35, 42, 46], we are not aware of a global treatment. Providing a general analysis in the periodic setting is precisely what we propose in this paper.
I-E Outline and Main Contributions
Section II contains the main notations and tools for periodic functions and operators. In Section III, we state the periodic representer theorem (Theorem 1). It fully specifies the form of the solution in the variational approach in a very general setting. For the specific case of sampling measurements, we show that this solution is a periodic spline (Proposition 5). Section IV is dedicated to the statistical approach. We introduce a class of periodic stationary processes (the Gaussian bridges) for which we specify the MMSE estimator in the case of generalized linear measurements (Theorem 2). We also provide a theoretical comparison between the variational and statistical approaches by reformulating the MMSE estimation as the solution of a new optimization problem (Proposition 7). This highlights the strict equivalence of the two approaches for invertible operators and extends known results from sampling to generalized linear measurements. For noninvertible operators, we complete our analysis with simulations in Section V. In particular, we give empirical evidence of the practical relevance of the variational approach for the reconstruction of periodic stationary signals. We provide in Section VI a comparison between our results in the periodic setting and the known results over the real line. Finally, we conclude in Section VII. All the proofs have been postponed to the Appendix sections.
II Mathematical Background for Periodic Signals
Throughout the paper, we consider periodic functions and random processes. Without loss of generality, the period can always be normalized to one. Moreover, we identify a periodic function over $\mathbb{R}$ with its restriction to a single period, chosen to be $[0, 1)$. We use the symbols $f$, $s$, and $\tilde{s}$ to specify a function, a random process, and an estimator of $s$, respectively.
We call $\mathcal{S}$ the space of periodic functions that are infinitely differentiable, $\mathcal{S}'$ the space of periodic generalized functions (dual of $\mathcal{S}$), and $L_2$ the Hilbert space of square-integrable periodic functions associated with the norm $\|f\|_{L_2}^2 = \int_0^1 |f(t)|^2 \,\mathrm{d}t$. Working with $\mathcal{S}'$ allows us to deal with functions with no pointwise interpretation, such as the Dirac comb $\Sha$ defined by
$$\Sha(t) = \sum_{k \in \mathbb{Z}} \delta(t - k), \qquad (3)$$
where $\delta$ is the Dirac impulse. The duality product between an element $f \in \mathcal{S}'$ and a smooth function $\varphi \in \mathcal{S}$ is denoted by $\langle f, \varphi \rangle$. For instance, $\langle \Sha, \varphi \rangle = \varphi(0)$ for every $\varphi \in \mathcal{S}$. When the two real functions are in $L_2$, we simply have the usual scalar product $\langle f, g \rangle = \int_0^1 f(t) g(t) \,\mathrm{d}t$. All these concepts are extended to complex-valued functions in the usual manner, with the convention that $\langle f, g \rangle = \int_0^1 f(t) \overline{g(t)} \,\mathrm{d}t$ for square-integrable functions. The complex sinusoids are denoted by $e_k(t) = \mathrm{e}^{2\pi \mathrm{i} k t}$ for any $k \in \mathbb{Z}$ and $t \in \mathbb{R}$. Any periodic generalized function $f$ can be expanded as
$$f = \sum_{k \in \mathbb{Z}} \widehat{f}[k] \, e_k, \qquad (4)$$
where the $\widehat{f}[k]$ are the Fourier coefficients of $f$, given by $\widehat{f}[k] = \langle f, e_k \rangle = \int_0^1 f(t)\, \mathrm{e}^{-2\pi \mathrm{i} k t} \,\mathrm{d}t$. Finally, the convolution between two periodic functions $f$ and $g$ is given by
$$(f * g)(t) = \int_0^1 f(\tau)\, g(t - \tau) \,\mathrm{d}\tau. \qquad (5)$$
If $f, g \in L_2$, we have that $\widehat{(f * g)}[k] = \widehat{f}[k]\, \widehat{g}[k]$.
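The convolution theorem for periodic functions can be checked numerically. The following is a minimal sketch, not part of the paper: it assumes a unit period, approximates Fourier coefficients by the FFT of grid samples, and uses test functions of our own choosing.

```python
import numpy as np

n = 4096
t = np.arange(n) / n                             # one period [0, 1)

def coeffs(x, K):
    """Fourier coefficients x_hat[k], k = -K..K, of grid samples x."""
    c = np.fft.fft(x) / n
    return np.concatenate([c[-K:], c[:K + 1]])   # reorder to k = -K..K

f = np.cos(2 * np.pi * t)
g = np.cos(2 * np.pi * t) + 1.0

# Periodic convolution (f * g)(t) = int_0^1 f(tau) g(t - tau) dtau, via the FFT.
conv = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))) / n

# Convolution theorem: (f * g)^[k] = f_hat[k] g_hat[k].
assert np.allclose(coeffs(conv, 3), coeffs(f, 3) * coeffs(g, 3), atol=1e-9)
```

For trigonometric polynomials such as these, the grid-based coefficients are exact up to round-off, so the identity holds to machine precision.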
II-A Linear and Shift-Invariant Operators
Let $\mathrm{L}$ be a linear, shift-invariant (LSI), and continuous operator from $\mathcal{S}$ to $\mathcal{S}'$. The shift invariance implies the existence of a sequence $(\widehat{L}[k])_{k \in \mathbb{Z}}$ such that
$$\mathrm{L}\{e_k\} = \widehat{L}[k]\, e_k \qquad (6)$$
for any $k \in \mathbb{Z}$. We call $(\widehat{L}[k])_{k \in \mathbb{Z}}$ the frequency response of the operator $\mathrm{L}$; it is also given by
$$\widehat{L}[k] = \langle \mathrm{L}\{\Sha\}, e_k \rangle. \qquad (7)$$
The sequence $(\widehat{L}[k])_{k \in \mathbb{Z}}$ is the Fourier series of the periodic generalized function $\mathrm{L}\{\Sha\}$, and is therefore of slow growth [47, Chapter VII]. This implies that $\mathrm{L}$, a priori defined from $\mathcal{S}$ to $\mathcal{S}'$, actually continuously maps $\mathcal{S}$ into itself. This is a significant difference with the non-periodic setting; we discuss this point in the conclusion in Section VII. Therefore, one can extend $\mathrm{L}$ by duality from $\mathcal{S}'$ to $\mathcal{S}'$. Then, for every $f \in \mathcal{S}'$, we easily obtain from (6) that
$$\mathrm{L}\{f\} = \sum_{k \in \mathbb{Z}} \widehat{L}[k]\, \widehat{f}[k]\, e_k. \qquad (8)$$
The null space of $\mathrm{L}$ is $\mathcal{N}_{\mathrm{L}} = \{f \in \mathcal{S}' : \mathrm{L}\{f\} = 0\}$. We shall only consider operators whose null space is finite-dimensional, in which case $\mathcal{N}_{\mathrm{L}}$ can only be made of linear combinations of the sinusoids $e_k$ at frequencies that are annihilated by $\mathrm{L}$. We state this fact in Proposition 1 and prove it in Appendix A.
Proposition 1.
Let $\mathrm{L}$ be a continuous LSI operator. If $\mathrm{L}$ has a finite-dimensional null space of dimension $N$, then the null space is of the form
$$\mathcal{N}_{\mathrm{L}} = \mathrm{span}\{e_{k_1}, \ldots, e_{k_N}\}, \qquad (9)$$
where the $k_1, \ldots, k_N \in \mathbb{Z}$ are distinct.
From (6) and (9), we deduce that $e_k \in \mathcal{N}_{\mathrm{L}}$ if and only if $\widehat{L}[k] = 0$, that is, $k = k_n$ for some $n$. In the following, we consider real-valued operators. In that case, we have the Hermitian symmetry $\widehat{L}[-k] = \overline{\widehat{L}[k]}$. Moreover, $e_k \in \mathcal{N}_{\mathrm{L}}$ if and only if $e_{-k} \in \mathcal{N}_{\mathrm{L}}$. The orthogonal projection of $f$ on the null space is given by
$$\mathrm{Proj}_{\mathcal{N}_{\mathrm{L}}}\{f\} = \sum_{n=1}^{N} \widehat{f}[k_n]\, e_{k_n}. \qquad (10)$$
Let $f, g \in L_2$. Expanding both functions as in (4) and using the orthonormality of the sinusoids $e_k$ yields the Parseval relation
$$\langle f, g \rangle = \sum_{k \in \mathbb{Z}} \widehat{f}[k]\, \overline{\widehat{g}[k]}. \qquad (11)$$
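The Fourier-domain action (8), the null-space projection (10), and the Parseval relation (11) can be illustrated numerically. A sketch with the derivative operator $\mathrm{D}$ as an example (frequency response $2\pi\mathrm{i}k$, null space spanned by the constants); the grid size and test function are our own choices:

```python
import numpy as np

n = 4096
t = np.arange(n) / n
k = np.concatenate([np.arange(0, n // 2), np.arange(-n // 2, 0)])  # FFT frequency order

f = 2.0 + np.cos(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
f_hat = np.fft.fft(f) / n

# LSI operator in the Fourier domain: the derivative D has frequency response
# L_hat[k] = 2*pi*i*k, so its null space is spanned by e_0 (the constants).
L_hat = 2j * np.pi * k
Df = np.real(np.fft.ifft(L_hat * f_hat * n))
assert np.allclose(Df, -2 * np.pi * np.sin(2 * np.pi * t)
                       + 3 * np.pi * np.cos(6 * np.pi * t), atol=1e-8)

# Projection of f onto the null space of D: keep only the annihilated frequency k = 0.
proj = np.real(f_hat[0]) * np.ones(n)
assert np.allclose(proj, 2.0)

# Parseval: ||f||^2 in L2 equals the sum of |f_hat[k]|^2.
assert np.isclose(np.mean(f ** 2), np.sum(np.abs(f_hat) ** 2))
```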
II-B Periodic Splines
Historically, splines are functions defined to be piecewise polynomials [48]. A spline is hence naturally associated with the derivative operator of a given order [49] in the sense that, for a fixed order $N_0$, a spline function $f$ satisfies $\mathrm{D}^{N_0} f = \sum_m a_m \delta(\cdot - t_m)$, with $\mathrm{D}^{N_0}$ the $N_0$th derivative. Splines have been extended to differential [50, 51, 52, 53], fractional [26, 54] or, more generally, spline-admissible operators [41]. We adapt here this notion to the periodic setting, where the Dirac impulse is replaced by the Dirac comb $\Sha$.
Definition 1.
Consider an LSI operator $\mathrm{L}$ with finite-dimensional null space. We say that a function $f$ is a periodic $\mathrm{L}$-spline if
$$\mathrm{L}\{f\} = \sum_{m=1}^{M} a_m\, \Sha(\cdot - t_m) \qquad (12)$$
for some integer $M \geq 1$, weights $a_1, \ldots, a_M \in \mathbb{R}$, and distinct knot locations $t_1, \ldots, t_M \in [0, 1)$.
Periodic splines play a crucial role in the variational and statistical approaches for the resolution of inverse problems in the periodic setting. We represent some periodic splines associated with different operators in Figure 1.
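Definition 1 can be made concrete by synthesizing a periodic spline in the Fourier domain: off the null-space frequencies, the coefficients of $f$ are those of the weighted Dirac combs divided by the frequency response. Below is a sketch (not from the paper) for $\mathrm{L} = \mathrm{D}^2$, whose periodic splines are the piecewise-linear periodic functions; the truncation order $K$ and the knots are illustrative choices.

```python
import numpy as np

def periodic_spline(a, knots, L_hat, K=200):
    """Zero-mean periodic L-spline solving L f = sum_m a[m] Sha(. - knots[m]),
    built in the Fourier domain. Assumes the null space of L is spanned by e_0
    and that sum(a) == 0."""
    ks = np.concatenate([np.arange(-K, 0), np.arange(1, K + 1)])   # skip k = 0
    rhs = np.exp(-2j * np.pi * np.outer(ks, knots)) @ a            # Dirac-comb coefficients
    coef = rhs / L_hat(ks)                                         # f_hat[k] for k != 0
    return lambda t: np.real(np.exp(2j * np.pi * np.outer(np.atleast_1d(t), ks)) @ coef)

# D^2-splines are the periodic piecewise-linear functions.
f = periodic_spline(np.array([1.0, -1.0]), np.array([0.25, 0.75]),
                    lambda k: (2j * np.pi * k) ** 2)
v = f(np.array([0.4, 0.5, 0.6]))
# f is linear between the two knots: its second difference vanishes (up to truncation).
assert abs(v[0] - 2 * v[1] + v[2]) < 1e-2
```

The weight constraint $\sum_m a_m = 0$ is needed here because the frequency $k = 0$ is excluded from the synthesis.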
III Periodic Representer Theorem
We now consider a continuous LSI operator $\mathrm{L}$ with finite-dimensional null space $\mathcal{N}_{\mathrm{L}}$. Let $\boldsymbol{\nu} = (\nu_1, \ldots, \nu_M)$ be the vector of the linear measurement functions. They usually are of the form $\nu_m = \Sha(\cdot - t_m)$ for time-domain sampling problems. Here, we consider general linear measurements to include any kind of inverse problem. In this section, our goal is to recover a function $f$ from observed data $y_1, \ldots, y_M$ such that $y_m \approx \nu_m(f)$. To do so, we consider the variational problem
$$\min_{f} \; \sum_{m=1}^{M} E\big(y_m, \nu_m(f)\big) + \lambda \|\mathrm{L} f\|_{L_2}^2, \qquad (13)$$
where $E$ is a strictly convex and continuous function called the cost function. This function controls the fidelity to the data. Special attention will be given to the quadratic data fidelity of the form
$$E(y, z) = (y - z)^2. \qquad (14)$$
We give the solution of (13) over the appropriate space of periodic functions in Theorem 1. To derive this solution, we first introduce and characterize the space of functions on which (13) is well-defined.
III-A Search Space
The optimization problem (13) deals with functions $f$ such that $\mathrm{L} f$ is square-integrable, which leads us to introduce the space $\mathcal{H}_{\mathrm{L}} = \{f \in \mathcal{S}' : \mathrm{L} f \in L_2\}$. Due to (11), we have that
$$\mathcal{H}_{\mathrm{L}} = \Big\{ f \in \mathcal{S}' : \sum_{k \in \mathbb{Z}} |\widehat{L}[k]|^2\, |\widehat{f}[k]|^2 < \infty \Big\}. \qquad (15)$$
Similar constructions have been developed for functions over $\mathbb{R}$ or for sequences by Unser et al. [32, 55]. We now identify a natural Hilbertian structure on $\mathcal{H}_{\mathrm{L}}$. If $\mathrm{L}$ is invertible, then $\mathcal{H}_{\mathrm{L}}$ inherits the Hilbert-space structure of $L_2$ via the norm $\|f\|_{\mathcal{H}_{\mathrm{L}}} = \|\mathrm{L} f\|_{L_2}$. However, when $\mathrm{L}$ has a nontrivial null space, $f \mapsto \|\mathrm{L} f\|_{L_2}$ is only a seminorm, in which case there exists $f \neq 0$ (any element of the null space of $\mathrm{L}$) such that $\|\mathrm{L} f\|_{L_2} = 0$. To obtain a bona fide norm, we complete the seminorm with a special treatment for the null-space components in Proposition 2.
Proposition 2.
Let $\mathrm{L}$ be a continuous LSI operator whose finite-dimensional null space $\mathcal{N}_{\mathrm{L}}$ is given by (9). We fix $\epsilon > 0$. Then, $\mathcal{H}_{\mathrm{L}}$ is a Hilbert space for the inner product
$$\langle f, g \rangle_{\mathcal{H}_{\mathrm{L}}} = \langle \mathrm{L} f, \mathrm{L} g \rangle + \epsilon^2 \sum_{n=1}^{N} \widehat{f}[k_n]\, \overline{\widehat{g}[k_n]}. \qquad (16)$$
III-B Periodic Reproducing-Kernel Hilbert Space
Reproducing-kernel Hilbert spaces (RKHS) are Hilbert spaces on which the evaluation maps $f \mapsto f(t_0)$ are well-defined, linear, and continuous. In this section, we answer the question of when the Hilbert space $\mathcal{H}_{\mathrm{L}}$ associated with an LSI operator $\mathrm{L}$ with finite-dimensional null space is an RKHS. This property is relevant to us because periodic function spaces that are RKHS are precisely the ones for which one can use sampling measurement functions of the form $\nu_m = \Sha(\cdot - t_m)$ in (13).
Definition 2.
Let $\mathcal{H}$ be a Hilbert space of periodic functions and $\mathcal{H}'$ be its dual. Then, we say that $\mathcal{H}$ is an RKHS if the shifted Dirac comb $\Sha(\cdot - t_0)$ belongs to $\mathcal{H}'$ for any $t_0 \in [0, 1)$.
This implies that any element of an RKHS has a pointwise interpretation as a function. As is well known, for any RKHS $\mathcal{H}$ there exists a unique function $h$ such that $h(\cdot, t_0) \in \mathcal{H}$ and $f(t_0) = \langle f, h(\cdot, t_0) \rangle_{\mathcal{H}}$ for every $t_0 \in [0, 1)$ and $f \in \mathcal{H}$. We call $h$ the reproducing kernel of $\mathcal{H}$.
Proposition 3.
The proof is given in Appendix C. Note that the reproducing kernel only depends on the difference $(t - t_0)$ of its arguments.
III-C Periodic Representer Theorem
Now that we have defined the search space of the optimization problem (13), we derive the representer theorem that gives the explicit form of its unique periodic solution.
Theorem 1.
We consider the optimization problem

$$\min_{f \in \mathcal{H}_{\mathrm{L}}} \; \sum_{m=1}^{M} E\big(y_m, \nu_m(f)\big) + \lambda \|\mathrm{L} f\|_{L_2}^2, \qquad (19)$$

where

$E$ is strictly convex and continuous;

$\mathrm{L}$ is an LSI operator with finite-dimensional null space;

$\nu_1, \ldots, \nu_M \in \mathcal{H}_{\mathrm{L}}'$ are linear measurement functionals;

$y_1, \ldots, y_M \in \mathbb{R}$ are the observed data; and

$\lambda > 0$ is a tuning parameter.
The proof of Theorem 1 is given in Appendix D. The optimal solution depends on finitely many coefficients, but a linear constraint reduces the number of effective degrees of freedom. When $E$ is quadratic of the form (14), the solution is made explicit in Proposition 4.
Proposition 4.
The proof is given in Appendix E. In the case of sampling measurements, we show moreover in Proposition 5 that the optimal solution is a periodic spline in the sense of Definition 1. We recall that such measurements are valid as soon as the search space $\mathcal{H}_{\mathrm{L}}$ is an RKHS, a situation that has been fully characterized in Proposition 3.
Proposition 5.
The proof is given in Appendix F.
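For sampling measurements, a quadratic cost, and an invertible operator, the machinery of this section reduces to a kernel linear system: the solution is a combination of shifted reproducing kernels whose coefficients solve $(\mathbf{G} + \lambda \mathbf{I})\mathbf{c} = \mathbf{y}$. The sketch below assumes this standard form; the operator $\mathrm{Id} + \mathrm{D}$, the truncation order, and all numerical values are our own illustrative choices, not the explicit formulas of Propositions 4 and 5.

```python
import numpy as np

L_hat = lambda k: 1.0 + 2j * np.pi * k        # invertible stand-in operator Id + D
ks = np.arange(-200, 201)

def h(tau):
    """Reproducing kernel: h_hat[k] = 1 / |L_hat[k]|^2 (valid since L is invertible)."""
    tau = np.atleast_1d(np.asarray(tau, float))
    return np.real(np.exp(2j * np.pi * np.outer(tau, ks)) @ (1.0 / np.abs(L_hat(ks)) ** 2))

def smoothing_spline(tm, y, lam):
    """Minimizer of sum_m (y[m] - f(tm[m]))^2 + lam * ||L f||^2 for sampling
    measurements: f = sum_m c[m] h(. - tm[m]) with (G + lam I) c = y."""
    M = len(tm)
    G = h((tm[:, None] - tm[None, :]).ravel()).reshape(M, M)   # Gram matrix of the kernel
    c = np.linalg.solve(G + lam * np.eye(M), y)
    return lambda t: h(np.subtract.outer(np.atleast_1d(t), tm).ravel()).reshape(-1, M) @ c

tm = np.array([0.1, 0.35, 0.6, 0.85])
y = np.sin(2 * np.pi * tm) + np.array([0.05, -0.025, 0.015, -0.01])
f = smoothing_spline(tm, y, lam=1e-3)
# With a small lam, the solution nearly interpolates the noisy samples.
assert np.max(np.abs(f(tm) - y)) < 0.1
```

Increasing `lam` pulls the reconstruction toward smoother functions, at the price of a larger data misfit.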
IV Periodic Processes and MMSE
In this section, we change perspective and consider the following statistical problem: given noisy measurements of a zero-mean and real periodic Gaussian process $s$, we are looking for the optimal estimator (for the mean-square error) of the complete process over $[0, 1)$.
IV-A Non-Periodic Setting
In a non-periodic setting, it is usual to consider stochastic models where the random process $s$ is a solution to the stochastic differential equation [41]
$$\mathrm{L}\{s\} = w, \qquad (23)$$
where $\mathrm{L}$ is a linear differential operator and $w$ a continuous-domain (non-periodic) Gaussian white noise. When the null space of the operator $\mathrm{L}$ is nontrivial, it is necessary to add boundary conditions so that the law of the process is uniquely defined.
IV-B Gaussian Bridges
In the periodic setting, the construction of periodic Gaussian processes has to be adapted. We first introduce the notion of periodic Gaussian white noise, exploiting the fact that the law of a zero-mean periodic Gaussian process $s$ is fully characterized by its covariance function
$$c_s(t, \tau) = \mathbb{E}\big[s(t)\, s(\tau)\big]. \qquad (24)$$
Definition 3.
A periodic Gaussian white noise $w$ is a zero-mean periodic Gaussian process whose covariance function is $c_w(t, \tau) = \Sha(t - \tau)$.
For any periodic real function $\varphi \in L_2$, the random variable $\langle w, \varphi \rangle$ is therefore Gaussian with mean $0$ and variance $\|\varphi\|_{L_2}^2$. Moreover, $\langle w, \varphi_1 \rangle$ and $\langle w, \varphi_2 \rangle$ are independent if and only if $\langle \varphi_1, \varphi_2 \rangle = 0$. Hence, the Fourier coefficients $\widehat{w}[k] = \langle w, e_k \rangle$ of the periodic Gaussian white noise satisfy the following properties:

$\widehat{w}[-k] = \overline{\widehat{w}[k]}$ for every $k$;

$\widehat{w}[0]$ is a real Gaussian variable with mean $0$ and variance $1$;

$\mathbb{E}[\widehat{w}[k]] = 0$ and $\mathbb{E}[|\widehat{w}[k]|^2] = 1$ for every $k$;

for $k \neq 0$, the real and imaginary parts of $\widehat{w}[k]$ are independent, each with variance $1/2$;

the $\widehat{w}[k]$, $k \geq 0$, are independent.
Put differently, for any nonzero frequency $k$, the coefficient $\widehat{w}[k]$ follows a complex normal distribution with mean $0$, covariance $1$, and pseudo-covariance $0$ [56].
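These properties suggest a direct way to draw the Fourier coefficients of a periodic Gaussian white noise. A sketch under the unit-variance convention used above; the function name and sample sizes are our own choices.

```python
import numpy as np

def white_noise_coeffs(K, rng):
    """Fourier coefficients w_hat[k], k = -K..K, of a periodic Gaussian white
    noise (unit-variance convention): w_hat[0] is real N(0, 1); for k > 0,
    w_hat[k] has independent N(0, 1/2) real and imaginary parts, so that
    E|w_hat[k]|^2 = 1 and E[w_hat[k]^2] = 0; and w_hat[-k] = conj(w_hat[k])."""
    pos = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    return np.concatenate([np.conj(pos[::-1]), [rng.standard_normal()], pos])

rng = np.random.default_rng(0)
W = np.stack([white_noise_coeffs(8, rng) for _ in range(20000)])
k = np.arange(-8, 9)
# Empirically: unit second moment, and zero pseudo-covariance off k = 0.
assert np.allclose(np.mean(np.abs(W) ** 2, axis=0), 1.0, atol=0.1)
assert np.allclose(np.mean(W[:, k != 0] ** 2, axis=0), 0.0, atol=0.1)
```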
[Figure: Gaussian bridges associated with different operators.]
When $\mathrm{L}$ has a nontrivial null space, there is no hope to construct a periodic process solution of (23) with $w$ a periodic Gaussian white noise. Indeed, the operator $\mathrm{L}$ kills the null-space frequencies, which contradicts the fact that $\widehat{w}[k_n] \neq 0$ almost surely. One should adapt (23) accordingly by giving a special treatment to the null-space frequencies. We propose here to consider a new class of periodic Gaussian processes: the Gaussian bridges. Given some operator $\mathrm{L}$ and $\epsilon > 0$, we set
$$\mathrm{L}_\epsilon\{f\} = \mathrm{L}\{f\} + \epsilon\, \mathrm{Proj}_{\mathcal{N}_{\mathrm{L}}}\{f\}, \qquad (25)$$
where $\mathrm{Proj}_{\mathcal{N}_{\mathrm{L}}}$ is given by (10). Note that $\mathrm{L}_\epsilon = \mathrm{L}$ when the null space of $\mathrm{L}$ is trivial. Moreover, we remark that
$$\|\mathrm{L}_\epsilon f\|_{L_2}^2 = \|f\|_{\mathcal{H}_{\mathrm{L}}}^2, \qquad (26)$$
where $\|\cdot\|_{\mathcal{H}_{\mathrm{L}}}$ is the norm induced by the inner product (16) (with the same $\epsilon$).
Definition 4.
A Gaussian bridge is a periodic Gaussian process $s$, solution to the stochastic differential equation

$$\mathrm{L}_\epsilon\{s\} = w, \qquad (27)$$

with $w$ a periodic Gaussian white noise and $\mathrm{L}_\epsilon$ given by (25) for some LSI operator $\mathrm{L}$ with finite-dimensional null space and some $\epsilon > 0$. When the null space is trivial, the parameter $\epsilon$ is immaterial and is omitted from the notation.
The Gaussian-bridge terminology is inspired by the Brownian bridge, which can be viewed as a periodic version of the Brownian motion.
Proposition 6.
The covariance function of the Gaussian bridge is
(28) 
where $h$ is defined in (18). It implies that
(29) 
In particular, we have that
(30) 
IV-C Measurement Model and MMSE Estimator
For this section, we restrict ourselves to operators $\mathrm{L}$ for which the native space $\mathcal{H}_{\mathrm{L}}$ is an RKHS. In that case, using (30) and (18), the Gaussian bridge satisfies
$$\mathbb{E}\big[\|s\|_{L_2}^2\big] = \int_0^1 c_s(t, t)\, \mathrm{d}t, \qquad (31)$$
which is finite according to (17). Therefore, the Gaussian bridge is (almost surely) square-integrable.
The observed data are assumed to be generated as

$$y_m = \nu_m(s) + \epsilon_m, \quad m = 1, \ldots, M, \qquad (32)$$

where $s$ is a Gaussian bridge (see Definition 4), $\boldsymbol{\nu} = (\nu_1, \ldots, \nu_M)$ is a vector of linear measurement functions, and the $\epsilon_m$ are independent Gaussian random perturbations with variance $\sigma^2$. Given the measurements in (32), we want to find the estimator $\tilde{s}$ of the Gaussian bridge $s$, imposing that it minimizes the mean-square error $\mathbb{E}[\|s - \tilde{s}\|_{L_2}^2]$.
Theorem 2.
The proof is given in Appendix H. Theorem 2 can be seen as a generalization of the classical Wiener filtering, designed for discrete signals, to the hybrid case where the input signal lives in the (periodic) continuous domain and the (finite-dimensional) measurements are discrete. A leading theme of this paper is that the form of the MMSE estimator is very close to that of the solution of the representer theorem with a quadratic cost function. This connection is exploited in Section IV-D.
IV-D MMSE Estimation as a Representer Theorem
The MMSE estimator given in Theorem 2 can be interpreted as the solution of the optimization problem described in Proposition 7.
Proposition 7.
The proof of Proposition 7 follows the same steps as those of Theorem 1 (form of the minimizer for the periodic representer theorem) and Proposition 4 (explicit formulas in terms of the system matrix for the coefficient vectors), with significant simplifications that are detailed in Appendix I. Proposition 7 has obvious similarities with Theorem 1, but it also adds new elements.

Proposition 7 gives an interpretation of the MMSE estimator of a Gaussian bridge given its measurements as the solution to an optimization problem. This problem is very close to the periodic representer theorem (Theorem 1) for a quadratic cost function. However, (34) differs from (19) because the regularization also penalizes the null-space frequencies.

If the null space is trivial, then the two solutions coincide, as expressed in (36). This means that Theorem 1 (smoothing approach) and Theorem 2 (statistical approach) correspond to the same reconstruction method. This equivalence is well-known for stationary processes on $\mathbb{R}$ in the case of time-domain sampling measurements [42]. Our results extend this to the periodic setting and to the case of generalized linear measurements.

If the null space is nontrivial, then Theorem 1 and Proposition 7 yield different reconstructions. In particular, this implies that one cannot interpret the optimizer in Theorem 1 as the MMSE estimator of a Gaussian bridge. Yet, the solutions get closer and closer as $\epsilon \to 0$. In Section V, we investigate this situation more deeply.
V Quality of the Estimators on Simulations
We consider the linear estimator $\tilde{s}_{\lambda, \epsilon}$ of $s$ given the measurements $\mathbf{y} = (y_1, \ldots, y_M)$, built from the coefficients specified in Proposition 4. To simplify notations, we shall omit $\epsilon$ when it is immaterial and write $\tilde{s}_{\lambda}$. Each pair $(\lambda, \epsilon)$ gives an estimator. In particular, if $s$ is a Gaussian bridge, then the MMSE estimator is obtained for a suitable choice of $(\lambda, \epsilon)$, according to Theorem 2. The mean-square error (MSE) of $\tilde{s}$ over $I$ experiments is computed as $\frac{1}{I}\sum_{i=1}^{I} \|s_i - \tilde{s}_i\|_{L_2}^2$, where the $s_i$ are independent realizations of $s$ that yield new noisy measurements $\mathbf{y}_i$ and $\tilde{s}_i$ is the estimator based on $\mathbf{y}_i$. We define the normalized mean-square error (NMSE) by
$$\mathrm{NMSE} = \frac{\frac{1}{I}\sum_{i=1}^{I} \|s_i - \tilde{s}_i\|_{L_2}^2}{\frac{1}{I}\sum_{i=1}^{I} \|s_i\|_{L_2}^2}. \qquad (37)$$
In this section, we first detail the generation of Gaussian bridges (Section V-A). We then investigate the role of the parameters $\lambda$ (Section V-B) and $\epsilon$ (Section V-C) on the quality of the estimator. We primarily focus on time-domain sampling measurements with $\nu_m = \Sha(\cdot - t_m)$, where the $t_m$ are in $[0, 1)$.
V-A Generation of Gaussian Bridges
We first fix the operator $\mathrm{L}$ with null space of dimension $N$ and a value of $\epsilon > 0$. Then, we generate the Fourier coefficients $\widehat{w}[k]$, $|k| \leq K_{\max}$, of a Gaussian white noise according to Definition 3. Finally, we compute the Gaussian bridge as
$$s(t) = \frac{1}{\epsilon} \sum_{n=1}^{N} \widehat{w}[k_n]\, e_{k_n}(t) + \sum_{\substack{|k| \leq K_{\max} \\ k \notin \{k_1, \ldots, k_N\}}} \frac{\widehat{w}[k]}{\widehat{L}[k]}\, e_k(t). \qquad (38)$$
Since the expansion is truncated at $K_{\max}$, (38) provides a mere approximation of the Gaussian bridge. However, the approximation error can be made arbitrarily small by taking $K_{\max}$ large enough. In Figure 2, we generate realizations for four values of $\epsilon$. For small values of $\epsilon$, the null-space component dominates. When $\epsilon$ increases, the null-space component has a weaker influence.
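A realization of the truncated expansion (38) can be synthesized directly in the Fourier domain: white-noise coefficients are divided by $\widehat{L}[k]$ off the null space and by $\epsilon$ on the null-space frequencies. The sketch below uses our unit-variance noise convention and $\mathrm{L} = \mathrm{D}$ as an illustrative operator; the grid and truncation order are arbitrary choices.

```python
import numpy as np

def gaussian_bridge(t, L_hat, null_ks, eps, K, rng):
    """Truncated Gaussian-bridge realization in the spirit of (38):
    s_hat[k] = w_hat[k] / L_hat[k] off the null space, and w_hat[k] / eps
    on the null-space frequencies."""
    ks = np.arange(-K, K + 1)
    # Hermitian-symmetric white-noise coefficients (real-valued process).
    pos = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    w = np.concatenate([np.conj(pos[::-1]), [rng.standard_normal()], pos])
    gain = np.empty(2 * K + 1, dtype=complex)
    null = np.isin(ks, null_ks)
    gain[null] = 1.0 / eps                     # dominant for small eps
    gain[~null] = 1.0 / L_hat(ks[~null])
    return np.real(np.exp(2j * np.pi * np.outer(t, ks)) @ (gain * w))

rng = np.random.default_rng(1)
t = np.arange(512) / 512
# L = D has null space spanned by e_0: small eps gives a dominant constant component.
s = gaussian_bridge(t, lambda k: 2j * np.pi * k, null_ks=[0], eps=0.1, K=256, rng=rng)
assert s.shape == (512,) and np.all(np.isfinite(s))
```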
V-B Influence of $\lambda$
We evaluate the influence of the parameter $\lambda$ for the case of an invertible operator. In this case, we have that $\mathrm{L}_\epsilon = \mathrm{L}$ (since $\mathcal{N}_{\mathrm{L}} = \{0\}$), which simplifies (25). Hence, the parameter $\epsilon$ is immaterial and we denote by $\tilde{s}_\lambda$ the estimator associated with $\lambda$. We fix the number of measurements $M$ and the noise variance $\sigma^2$.
Time-Domain Sampling Measurements. We generated realizations of the Gaussian bridge. From each one, we extracted noisy measurements. We then computed the estimators $\tilde{s}_\lambda$ for $\lambda$ ranging over a set of values obtained by uniform sampling of an interval. The plot of the NMSE (approximated according to (37)) as a function of $\lambda$ is given in Figure 3(a). The minimum error is obtained for the value of $\lambda$ predicted by Theorem 2. This result validates the theory presented in Theorem 2. Actually, when $\lambda$ is small, the estimator interpolates the noisy measurements while, for a large $\lambda$, the estimator tends to oversmooth the curve. The MMSE estimator makes an optimal tradeoff between fitting the data and smoothing the curve. These observations about $\lambda$ retain their validity for other operators, including noninvertible ones.
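The experiment described above can be sketched end to end for an invertible operator: draw one bridge realization, sample it with noise, and compare smoothing-spline estimators for several values of $\lambda$. This is an illustrative reimplementation, not the paper's code; the operator $\mathrm{Id} + \mathrm{D}$, $M = 16$, $\sigma^2 = 0.1$, and the grid of $\lambda$ are stand-in choices.

```python
import numpy as np

rng = np.random.default_rng(2)
L_hat = lambda k: 1.0 + 2j * np.pi * k        # invertible stand-in operator Id + D
ks = np.arange(-128, 129)

def h(tau):
    """Reproducing kernel for invertible L: h_hat[k] = 1 / |L_hat[k]|^2."""
    tau = np.atleast_1d(np.asarray(tau, float))
    return np.real(np.exp(2j * np.pi * np.outer(tau, ks)) @ (1.0 / np.abs(L_hat(ks)) ** 2))

# One realization of the bridge (trivial null space, so no eps term).
pos = (rng.standard_normal(128) + 1j * rng.standard_normal(128)) / np.sqrt(2)
w = np.concatenate([np.conj(pos[::-1]), [rng.standard_normal()], pos])
t = np.arange(256) / 256
s = np.real(np.exp(2j * np.pi * np.outer(t, ks)) @ (w / L_hat(ks)))

M, sigma2 = 16, 0.1                            # stand-in experiment parameters
tm = np.arange(M) / M                          # sampling locations (on the grid)
y = s[::256 // M] + np.sqrt(sigma2) * rng.standard_normal(M)

G = h((tm[:, None] - tm[None, :]).ravel()).reshape(M, M)
Ht = h((t[:, None] - tm[None, :]).ravel()).reshape(t.size, M)

errs = []
for lam in [1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0]:
    c = np.linalg.solve(G + lam * np.eye(M), y)    # smoothing-spline weights
    errs.append(np.mean((Ht @ c - s) ** 2))        # squared error on the grid
# The error curve is typically U-shaped in lam, as described in the text.
assert np.all(np.isfinite(errs))
```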
Fourier-Domain Sampling Measurements.
We consider complex exponential measurement functionals $\nu_m = e_{k_m}$, where the sampled frequencies $k_m$ are in $\mathbb{Z}$.
We consider the measurements $y_m = \widehat{s}[k_m] + \epsilon_m$. Note that these measurement functionals are complex, which calls for a slight adaptation of the framework presented so far:

$\mathbb{E}[\epsilon_m] = 0$;

$\mathbb{E}[|\epsilon_m|^2] = \sigma^2$ for every $m$;

$\mathbb{E}[\epsilon_m^2] = 0$ for every $m$;

the real and imaginary parts of $\epsilon_m$ are independent, each with variance $\sigma^2 / 2$; and

the $\epsilon_m$ are independent.

This means that $\epsilon_m$ follows a complex normal distribution with mean $0$, covariance $\sigma^2$, and pseudo-covariance $0$ for every $m$.
We repeated the experiment done with the time-domain sampling using exactly the same procedure and parameters. The experimental curve of the evolution of the NMSE with $\lambda$ is given in Figure 3(b). Again, the minimum is obtained for the predicted value of $\lambda$. We now want to compare this curve to the theoretical one.
For the Fourier-sampling case, we were also able to derive the corresponding closed-form formula for the NMSE (37).
Proposition 8.
Let $s$ be a Gaussian bridge associated with an invertible operator $\mathrm{L}$, and let $y_m = \widehat{s}[k_m] + \epsilon_m$, $m = 1, \ldots, M$, with $k_m$ the sampled frequencies and $\epsilon_m$ a complex Gaussian noise with variance $\sigma^2$ as above. Then, the MSE of the estimator $\tilde{s}_\lambda$ is given by
(39) 
where $h$ is the reproducing kernel of $\mathcal{H}_{\mathrm{L}}$.
The proof is given in Appendix J. Note that $\widehat{h}[k]$ is real-valued and strictly positive for every $k$. From (39), we also recover the property that the optimum is reached for the value of $\lambda$ given in Theorem 2, since each of the terms that appear in the first sum is minimized for this value of $\lambda$.
The theoretical curve is given in Figure 3(b) and is in good agreement with the experimental curve. We explain the slight variation by the fact that (37) is only an estimation of the theoretical NMSE.
V-C Influence of $\epsilon$
In this section, we only consider noninvertible operators, since the invertible case has already been addressed in Section IV-D (see (36)). In order to evaluate the specific influence of $\epsilon$, we fix the other parameters. We generated realizations of a Gaussian bridge and, from each one, we extracted noisy measurements. We repeated this for several operators and several values of $\epsilon$ and of the noise variance. For each case, we compared the resulting estimators, together with the solution in (20), seen here as an additional estimator. The corresponding NMSEs (see (37)) are given in Table II. We make four observations.
1) In each case, the best result is obtained with the MMSE estimator, as expected. This is in line with the fact that the functional (19) minimized in Theorem 1 corresponds to (34) with $\epsilon = 0$.
2) For small values of (i.e., or ), we see that