Inverse Problems in a Bayesian Setting†

† Partly supported by the Deutsche Forschungsgemeinschaft (DFG) through SFB 880.
Abstract
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) — the propagation of uncertainty through a computational (forward) model — are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. We give a detailed account of this approach via conditional expectation, various approximations, and the construction of filters. Together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update in the form of a filter is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and non-linear Bayesian update in the form of a filter on some examples.
Keywords: inverse identification, uncertainty quantification, Bayesian update, parameter identification, conditional expectation, filters, functional and spectral approximation
Classification: (MSC2010) 62F15, 65N21, 62P30, 60H15, 60H25, 74G75, 80A23, 74C05 — (PACS2010) 46.65.+g, 46.35.+z, 44.10.+i — (ACM1998) G.1.8, G.3, J.2
1 Introduction
Inverse problems deal with the determination of parameters in computational models, by comparing the prediction of these models with either real measurements or observations, or other, presumably more accurate, computations. These parameters can typically not be observed or measured directly, only other quantities which are somehow connected to the one for which the information is sought. But it is typical that we can compute what the observed response should be, under the assumption that the unknown parameters have a certain value. The difference between the predicted or forecast response and the actual observation is then obviously a measure for how well these parameters were identified.
There are different ways of attacking the problem of parameter identification theoretically and numerically. One way is to define some measure of discrepancy between predicted observation and the actual observation. Then one might use optimisation algorithms to make this measure of discrepancy as small as possible by changing the unknown parameters. Classical least squares approaches start from this point. The parameter values where a minimum is attained are then usually taken as the ‘best’ values and regarded as close to the ‘true’ values.
One of the problems is that on the one hand the measure of discrepancy is chosen somewhat arbitrarily, and on the other hand the minimum is often not unique. This means that there are many parameter values which explain the observations in a ‘best’ way. To obtain a unique solution, some kind of ‘niceness’ of the optimal solution is required, or mathematically speaking, for the optimal solution some regularity is enforced, typically in competition with the discrepancy measure to be minimised. This optimisation approach hence leads to regularisation procedures, a good overview of which is given by [5].
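As a minimal numerical illustration of this regularisation idea, consider a made-up linear forward model with more unknowns than observations, so that plain least squares has no unique minimiser. All names (`G`, `alpha`, `q_reg`) and sizes are assumptions of this sketch, not taken from the text:

```python
import numpy as np

# Hypothetical linear forward model G mapping parameters q to predicted
# observations; 5 observations for 8 unknowns makes the plain least-squares
# problem ill-posed (many minimisers).
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 8))
q_true = rng.standard_normal(8)
y_obs = G @ q_true                         # noise-free data for the demonstration

# Tikhonov regularisation: minimise ||G q - y||^2 + alpha*||q||^2, which has
# a unique minimiser even though ||G q - y||^2 alone does not.
alpha = 1e-3
q_reg = np.linalg.solve(G.T @ G + alpha * np.eye(8), G.T @ y_obs)

residual = float(np.linalg.norm(G @ q_reg - y_obs))
```

For small `alpha` the regularised solution reproduces the data well while biasing the answer towards the smallest-norm parameter vector — the ‘niceness’ enforced in competition with the discrepancy.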
Here we take another tack, and base our approach on the Bayesian idea of updating the knowledge about something like the unknown parameters in a probabilistic fashion according to Bayes’s theorem. In order to apply this, the knowledge about the parameters has to be described in a Bayesian way through a probabilistic model [16], [41], [40]. As it turns out, such a probabilistic description of our previous knowledge can often be interpreted as a regularisation, thus tying these differing approaches together.
The Bayesian way is on one hand difficult to tackle — i.e. to find a computational way of doing it — and on the other hand often becomes computationally very demanding. One way the Bayesian update may be achieved computationally is through sampling. On the other hand, we shall here use a functional approximation setting to address such stochastic problems. See [26] for a synopsis on our approach to such parametric problems.
It is well-known that such a Bayesian update is in fact closely related to conditional expectation [2], [11], and this will be the basis of the method presented. For these and other probabilistic notions see for example [30] and the references therein.
The functional approximation approach towards stochastic problems is explained e.g. in [24]. These approximations are in the simplest case known as Wiener’s so-called homogeneous or polynomial chaos expansion [43], which are polynomials in independent Gaussian RVs — the ‘chaos’ — and which can also be used numerically in a Galerkin procedure [10], [25], [24]. This approach has been generalised to other types of RVs [44]. It is a computational variant of white noise analysis, which means analysis in terms of independent RVs, hence the term ‘white noise’ [14], [15], [13]; see also [25], [33], and [8] for results on stochastic regularity relevant here. Here we describe a computational extension of this approach to the inverse problem of Bayesian updating, see also [28], [35], [29], [34].
To be more specific, let us consider the following situation: we are investigating some physical system which is modelled by an evolution equation for its state:

(1) $\frac{\mathrm{d}}{\mathrm{d}t} u(t) = A(q; u(t)) + f(q; t), \qquad u(0) = u_0,$

where $u(t) \in \mathcal{U}$ describes the state of the system at time $t$ lying in a Hilbert space $\mathcal{U}$ (for the sake of simplicity), $A$ is a — possibly non-linear — operator modelling the physics of the system, and $f$ is some external influence (action / excitation / loading). Both $A$ and $f$ may involve some noise — i.e. a random process — so that (1) is a stochastic evolution equation.
Assume that the model depends on some parameters $q \in \mathcal{Q}$, which are uncertain. These may actually include the initial conditions for the state, $u_0$. To have a concrete example of Eq. (1), consider the diffusion equation

(2) $\frac{\partial}{\partial t} u(x,t) - \operatorname{div}\big(\kappa(x)\,\nabla u(x,t)\big) = f(x,t), \qquad x \in \mathcal{G},$

with appropriate boundary and initial conditions, where $\mathcal{G} \subset \mathbb{R}^d$ is a suitable domain. The diffusing quantity $u$ is e.g. heat or concentration, and the term $f$ models sinks and sources. Similar examples will be used for the numerical experiments in Section 5 and Section 6. Here $\mathcal{U} \subseteq H^1(\mathcal{G})$, the subspace of the Sobolev space satisfying the essential boundary conditions, and we assume that the diffusion coefficient $\kappa$ is uncertain. The parameters could be the positive diffusion coefficient field $\kappa$, but for reasons to be explained fully later, we prefer to take $q := \log \kappa$, and assume $q \in \mathcal{Q} := L^2(\mathcal{G})$.
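A crude 1-D steady-state analogue of Eq. (2) can illustrate why the parametrisation $q = \log\kappa$ is convenient: $\kappa = \exp(q)$ stays positive for any real-valued $q$. The following finite-difference forward solver is a sketch under assumed names and discretisation choices, not the paper’s code:

```python
import numpy as np

def forward(q_faces, f=1.0):
    """Finite-difference solve of -(exp(q) u')' = f on (0,1), u(0) = u(1) = 0.

    q_faces holds log-diffusivity values at the n+1 cell faces, so that
    kappa = exp(q) is positive whatever real values q takes.
    """
    kappa = np.exp(np.asarray(q_faces, dtype=float))
    n = len(kappa) - 1                     # number of interior nodes
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = kappa[i] + kappa[i + 1]
        if i > 0:
            A[i, i - 1] = -kappa[i]
        if i < n - 1:
            A[i, i + 1] = -kappa[i + 1]
    return np.linalg.solve(A / h**2, np.full(n, f))

# For kappa = 1 (q = 0) and f = 1 the exact solution u(x) = x(1-x)/2 is
# quadratic, which the central difference scheme reproduces exactly.
u = forward(np.zeros(50))                  # 50 faces -> 49 interior nodes
x = np.arange(1, 50) / 50.0
```

The inverse problem of the text would then be: given a few entries of `u`, infer the face values `q_faces`.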
The updating methods have to be well defined and stable in a continuous setting, as otherwise one can not guarantee numerical stability with respect to the PDE discretisation refinement, see [40] for a discussion of related questions. Due to this we describe the update before any possible discretisation in the simplest Hilbert space setting.
On the other hand, no harm will result for the basic understanding if the reader wants to view the occurring spaces as finite dimensional Euclidean spaces. Now assume that we observe a function of the state $y = Y(q; u)$, and from this observation we would like to identify the corresponding $q$. In the concrete example Eq. (2) this could be the value of $u$ at some points $x_j \in \mathcal{G}$. This is called the inverse problem, and as the mapping $q \mapsto y$ is usually not invertible, the inverse problem is ill-posed. Embedding this problem of finding the best $q$ in a larger class by modelling our knowledge about it with the help of probability theory, then in a Bayesian manner the task becomes to estimate conditional expectations, e.g. see [16], [41], [40], and the references therein. The problem now is well-posed, but at the price of ‘only’ obtaining probability distributions on the possible values of $q$, which now is modelled as a $\mathcal{Q}$-valued random variable (RV). On the other hand one naturally also obtains information about the remaining uncertainty. Predicting what the measurement $y$ should be from some assumed $q$ is computing the forward problem. The inverse problem is then approached by comparing the forecast from the forward problem with the actual information.
Since the parameters of the model to be estimated are uncertain, all relevant information may be obtained via their stochastic description. In order to extract information from the posterior, most estimates take the form of expectations w.r.t. the posterior. These expectations — mathematically integrals, numerically to be evaluated by some quadrature rule — may be computed via asymptotic, deterministic, or sampling methods. In our review of current work we follow our recent publications [28], [35], [29], [34].
One often used technique is a Markov chain Monte Carlo (MCMC) method [21], [9], constructed such that the asymptotic distribution of the Markov chain is the Bayesian posterior distribution; for further information see [34] and the references therein.
These approaches require a large number of samples in order to obtain satisfactory results. The main idea here is to perform the Bayesian update directly on the polynomial chaos expansion (PCE) without any sampling [28], [35], [26], [29], [34]. This idea has appeared independently in [1] in a simpler context, whereas in [37] it appears as a variant of the Kalman filter (e.g. [17]). A PCE for a push-forward of the posterior measure is constructed in [27].
From this short overview it may already have become apparent that the update may be seen abstractly in two different ways. Regarding the uncertain parameters

(3) $q : \Omega \to \mathcal{Q}$ as a random variable on a probability space $(\Omega, \mathfrak{A}, \mathbb{P})$,

where the set of elementary events is $\Omega$, $\mathfrak{A}$ a $\sigma$-algebra of events, and $\mathbb{P}$ a probability measure, one set of methods performs the update by changing the probability measure $\mathbb{P}$ and leaving the mapping $q(\omega)$ as it is, whereas the other set of methods leaves the probability measure unchanged and updates the function $q(\omega)$. In any case, the push-forward measure $q_*\mathbb{P}$ on $\mathcal{Q}$ defined by $q_*\mathbb{P}(E) := \mathbb{P}(q^{-1}(E))$ for a measurable subset $E \subseteq \mathcal{Q}$ is changed from prior to posterior. For the sake of simplicity we assume here that $\mathcal{Q}$ — the set containing possible realisations of $q$ — is a Hilbert space. If the parameter $q$ is a RV, then so is the state $u$ of the system Eq. (1). In order to avoid a profusion of notation, unless there is a possibility of confusion, we will denote the random variables $q$, $f$, $u$ which now take values in the respective spaces with the same symbol as the previously deterministic quantities in Eq. (1).
In our overview [34] on spectral methods in identification problems, we show that Bayesian identification methods [16], [41], [11], [40] are a good way to tackle the identification problem, especially when these latest developments in functional approximation methods are used. In the series of papers [35], [26], [29], [34], Bayesian updating has been used in a linearised form, strongly related to the Gauss-Markov theorem [20], in ways very similar to the well-known Kalman filter [17]. These similarities will be used to construct an abstract linear filter, which we term the Gauss-Markov-Kalman filter (GMKF). This turns out to be a linearised version of conditional expectation. Here we want to extend this to a non-linear form, and show some examples of linear (LBU) and non-linear (QBU) Bayesian updates.
The organisation of the remainder of the paper is as follows: in Section 2 we review the Bayesian update — classically defined via conditional probabilities — and recall the link between conditional probability measures and conditional expectation. In Section 3, first we point out in which way — through the conditional expectation — the posterior measure is characterised by Bayes’s theorem, and we point out different possibilities. Often, one does not only want a characterisation of the posterior measure, but actually an RV which has the posterior measure as push-forward or distribution measure. Some of the well-known filtering algorithms start from this idea. Again by means of the conditional expectation, some possibilities of constructing such an RV are explored, leading to ‘filtering’ algorithms.
In most cases, the conditional expectation can not be computed exactly. We show how the abstract version of the conditional expectation is translated into the possibility of real computational procedures, and how this leads to various approximations, also in connection with the previously introduced filters.
We show how to approximate the conditional expectation up to any desired polynomial degree, not only the linearised version [20], [17] which was used in [28], [35], [26], [29], [34]. This representation in monomials is probably numerically not very advantageous, so we additionally show a version which uses general function bases for approximation.
The numerical realisation in terms of a functional or spectral approximation — here we use the well-known Wiener-Hermite chaos — is briefly sketched in Section 4. In Section 5 we then show some computational examples with the linear version (LBU), whereas in Section 6 we show how to compute with the non-linear or quadratic (QBU) version. Some concluding remarks are offered in Section 7.
2 Bayesian Updating
Here we shall describe the framework in which we want to treat the problem of Bayesian updating, namely a dynamical system with time-discrete observations and updates. After introducing the setting in Subsection 2.1, we recall Bayes’s theorem in Subsection 2.2 in the formulation of Laplace, as well as its formulation in the special case where densities exist, e.g. [2]. The next Subsection 2.3 treats the more general case and its connection with the notion of conditional expectation, as it was established by Kolmogorov, e.g. [2]. This notion will be the basis of our approach to characterise a RV which corresponds to the posterior measure.
2.1 Setting
In the setting of Eq. (1) consider the following problem: one makes observations $\hat y_n$ at times $0 < t_1 < t_2 < \dots$, and from these one would like to infer what $q$ (and possibly $u(t)$) is. In order to include a possible identification of the state $u(t_n)$, we shall define a new variable $x = (u, q)$, which we would thus like to identify:

Assume that $U$ is the flow or solution operator of Eq. (1), i.e. $u(t_{n+1}) = U(t_{n+1}, t_n;\, u(t_n), q, f)$, where $u(t_n)$ is the initial condition at time $t_n$. We then look at the operator which advances the variable $x_n = (u(t_n), q) \in \mathcal{X}$ at time $t_n$ to $x_{n+1} = (u(t_{n+1}), q)$ at $t_{n+1}$, where the Hilbert space $\mathcal{X} = \mathcal{U} \times \mathcal{Q}$ carries the natural inner product implied from $\mathcal{U}$ and $\mathcal{Q}$,

or a bit more generally encoded in an operator $\hat f$:

(4) $x_{n+1} = \hat f(x_n, w_n, n).$

This is a discrete time step advance map, for example of the dynamical system Eq. (1), where a random ‘error’ term $w_n$ is included, which may be used to model randomness in the dynamical system per se, or possible discretisation errors, or both, or similar things. Most dynamical — and also quasi-static and stationary systems, considering different loadings as a sequence in some pseudo-time — can be put in the form Eq. (4) when observed at discrete points in time. Obviously, for fixed model parameters like $q$ in Eq. (1) the evolution is trivial and does not change anything, but Eq. (4) allows us to model everything in one formulation.
Often the dependence on the random term is assumed to be linear, so that one has

(5) $x_{n+1} = f(x_n) + \varepsilon\, S\, w_n,$

where the scalar $\varepsilon \ge 0$ explicitly measures the size of the random term $w_n$, which is now assumed to be discrete white noise of unit variance and zero mean, and possible correlations are introduced via the linear operator $S$.
But one can not observe the entity $q$ or $x = (u, q)$ directly — like in Plato’s cave allegory we can only see a ‘shadow’ — here denoted by a vector $y$ — of it, formally given by a ‘measurement operator’

(6) $Y : \mathcal{X} \to \mathcal{Y};\qquad x = (u, q) \mapsto y = Y(x) = Y(q; u),$

where for the sake of simplicity we assume $\mathcal{Y}$ to be a Hilbert space.
Typically one considers also some observational ‘error’ $\epsilon$, so that the observation may be expressed as

(7) $\hat y = H(Y(q; u), \epsilon),$

where similarly as before $\epsilon$ is a discrete white noise process, and the observer map $H$ resp. $H \circ Y$ combines the ‘true’ quantity $y = Y(q; u)$ to be measured with the error $\epsilon$, to give the observation $\hat y$.
Translating this into the notation of the discrete dynamical system Eq. (4), one writes

(8) $\hat y_{n+1} = \hat h(x_{n+1}, \epsilon_{n+1}),$

where again the operator $\hat h$ is often assumed to be linear in the noise term, so that one has, similarly to Eq. (5),

(9) $\hat y_{n+1} = h(x_{n+1}) + \sigma\, S_{\epsilon}\, \epsilon_{n+1},$

with a scalar $\sigma \ge 0$ measuring the size of the observational noise $\epsilon_{n+1}$ and a linear operator $S_{\epsilon}$ introducing possible correlations.
The mappings $Y$ in Eq. (6), $H$ in Eq. (7), $\hat h$ in Eq. (8), resp. $h$ in Eq. (9) are usually not invertible and hence the problem is called ill-posed. One way to address this is via regularisation (see e.g. [5]), but here we follow a different track. Modelling our lack of knowledge about $q$ and $u$ in a Bayesian way [41] by replacing them with a $\mathcal{Q}$- resp. $\mathcal{U}$-valued random variable (RV), the problem becomes well-posed [40]. But of course one is looking now at the problem of finding a probability distribution that best fits the data; and one also obtains a probability distribution, not just one pair $(q, u)$.
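The discrete set-up of Eqs. (4), (5), (8), (9) can be made concrete with a toy example: a scalar state that relaxes towards an unknown parameter, trivial dynamics for the parameter itself, and noisy observations of the state. All concrete numbers and names below are assumptions of this sketch:

```python
import numpy as np

# Toy instance of Eqs. (4)/(5) and (8)/(9): the combined variable x = (u, q)
# holds a scalar state u and the unknown parameter q; the dynamics of q are
# trivial (q stays what it is).
rng = np.random.default_rng(1)
q_true = 0.7

def f(x):
    """Deterministic part of the advance map: u relaxes towards q, q is kept."""
    u, q = x
    return np.array([0.5 * u + q, q])

def h(x):
    """Measurement operator: only the state u is observed."""
    return x[0]

eps = 0.05     # size of the system noise, cf. Eq. (5)
sigma = 0.1    # size of the observation noise, cf. Eq. (9)

x = np.array([0.0, q_true])
observations = []
for n in range(200):
    x = f(x) + eps * np.array([rng.standard_normal(), 0.0])    # Eq. (5)
    observations.append(h(x) + sigma * rng.standard_normal())  # Eq. (9)

# In the long run u fluctuates about q/(1 - 0.5) = 2*q_true = 1.4, so the
# time average of the (later) observations is close to that value.
avg = float(np.mean(observations[50:]))
```

The identification task of the text is the reverse direction: given only `observations`, infer `q_true`.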
We shall allow $\mathcal{Q}$ to be an infinite-dimensional space, as well as $\mathcal{Y}$; although practically in any real situation only finitely many components are measured. But by allowing for the infinite-dimensional case, we can treat the case of partial differential equations — PDE models — like Eq. (1) directly and not just their discretisations, as is often done, and we only use arguments which are independent of the number of observations. In particular this prevents hidden dependencies on local compactness, the dimension of the model, or the number of measurements, and the possible breakdown of computational procedures as these dimensions grow, as the procedures will be designed for the infinite-dimensional case. The procedure practically performed in a real computation on a finite-dimensional model and a finite-dimensional observation may then be seen as an approximation of the infinite-dimensional case, and analysed as such.
Here we focus on the use of a Bayesian approach inspired by the ‘linear Bayesian’ approach of [11] in the framework of ‘white noise’ analysis [13], [14], [15], [22], [43], [44]. Please observe that although the unknown ‘truth’ may be a deterministic quantity, the model for the observed quantity involves randomness, and it therefore becomes a RV as well.
To complete the mathematical setup we assume that $(\Omega, \mathfrak{A}, \mathbb{P})$ is a measure space with $\sigma$-algebra $\mathfrak{A}$ and with a probability measure $\mathbb{P}$, and that $q : \Omega \to \mathcal{Q}$ and similarly $f$, $u$, and $y$ are random variables (RVs). The corresponding expectation will be denoted by $\bar q := \mathbb{E}(q)$, giving the mean $\bar q$ of the random variable $q$. The quantity $\tilde q := q - \bar q$ is the zero-mean or fluctuating part of the RV $q$.
The space of vector valued RVs, say $Q := L^2(\Omega, \mathbb{P}; \mathcal{Q})$, will for simplicity only be considered in the form $Q = \mathcal{Q} \otimes \mathcal{S}$, where $\mathcal{Q}$ is a Hilbert space with inner product $\langle \cdot, \cdot \rangle_{\mathcal{Q}}$, $\mathcal{S}$ is a Hilbert space of scalar RVs — here we shall simply take $\mathcal{S} = L^2(\Omega, \mathbb{P})$ — with inner product $\langle \cdot, \cdot \rangle_{\mathcal{S}}$, and the tensor product signifies the Hilbert space completion with the scalar product as usually defined for elementary tensors $q_1 \otimes s_1,\, q_2 \otimes s_2$ with $q_1, q_2 \in \mathcal{Q}$ and $s_1, s_2 \in \mathcal{S}$ by

$\langle q_1 \otimes s_1,\, q_2 \otimes s_2 \rangle_Q := \langle q_1, q_2 \rangle_{\mathcal{Q}}\, \langle s_1, s_2 \rangle_{\mathcal{S}},$

and extended to all of $Q$ by linearity.

Obviously, we may also consider the expectation not only as a linear operator $\mathbb{E} : Q \to \mathcal{Q}$, but, as $\mathcal{Q}$ is isomorphic to the subspace of constants $\mathcal{Q} \otimes \operatorname{span}\{\chi_{\Omega}\} \subset Q$ — where $\chi_{\Omega}$ is the RV constantly equal to unity — also as an orthogonal projection onto that subspace, and we have the orthogonal decomposition

$Q = \big(\mathcal{Q} \otimes \operatorname{span}\{\chi_{\Omega}\}\big) \oplus Q_0,$

where $Q_0 := \big(\mathcal{Q} \otimes \operatorname{span}\{\chi_{\Omega}\}\big)^{\perp}$ is the zero-mean subspace, so that

$\tilde q = q - \mathbb{E}(q) \in Q_0.$

Later, the covariance operator between two Hilbert-space valued RVs will be needed. The covariance operator between two RVs $q$ and $y$ is denoted by

$C_{qy} := \mathbb{E}(\tilde q \otimes \tilde y) : \mathcal{Y} \to \mathcal{Q}.$

For $q = y$ it is also often written as $C_q := C_{qq}$.
2.2 Recollection of Bayes’s theorem
Bayes’s theorem is commonly accepted as a consistent way to incorporate new knowledge into a probabilistic description [16], [41], and its present mathematical form is due to Laplace, so that a better denomination would be the Bayes-Laplace theorem.
The elementary textbook statement of the theorem is about conditional probabilities

(10) $\mathbb{P}(I_q \mid M_y) = \frac{\mathbb{P}(I_q \cap M_y)}{\mathbb{P}(M_y)} = \frac{\mathbb{P}(M_y \mid I_q)}{\mathbb{P}(M_y)}\, \mathbb{P}(I_q),$

where $I_q$ is some measurable subset of possible $q$’s, and the measurable subset $M_y$ is the information provided by the measurement. Here the conditional probability $\mathbb{P}(I_q \mid M_y)$ is called the posterior probability, $\mathbb{P}(I_q)$ is called the prior probability, the conditional probability $\mathbb{P}(M_y \mid I_q)$ is called the likelihood, and $\mathbb{P}(M_y)$ is called the evidence. The Eq. (10) is only valid when the set $M_y$ has non-vanishing probability measure, and becomes problematic when $\mathbb{P}(M_y)$ approaches zero, cf. [16], [32]. This arises often when $M_y = \{\hat y\}$ is a one-point set representing a measured value $\hat y$, as such sets have typically vanishing probability measure. In fact the well-known Borel-Kolmogorov paradox has led to numerous controversies and shows the possible ambiguities [16]. Typically the posterior measure is singular w.r.t. the prior measure, precluding a formulation in densities. Kolmogorov’s resolution of this situation shall be sketched later.
One well-known very special case where the formulation in densities is possible, which has particular requirements on the likelihood, is when — as here — $\mathcal{Q}$ is a metric space, and there is a background measure $\mu$ on $(\mathcal{Q}, \mathfrak{B}_{\mathcal{Q}})$ — $\mathfrak{B}_{\mathcal{Q}}$ is the Borel-$\sigma$-algebra of $\mathcal{Q}$ — and similarly a background measure $\nu$ on $(\mathcal{Y}, \mathfrak{B}_{\mathcal{Y}})$, and the RVs $q$ and $y$ have probability density functions (pdf) $\pi_q(q)$ w.r.t. $\mu$ and $\pi_y(y)$ w.r.t. $\nu$ resp., and a joint density $\pi_{qy}(q, y)$ w.r.t. $\mu \otimes \nu$. Then the theorem may be formulated as ([41] Ch. 1.5, [32], [16])

(11) $\pi_{q|y}(q \mid y) = \frac{\pi_{qy}(q, y)}{Z_s(y)} = \frac{\pi_{y|q}(y \mid q)}{Z_s(y)}\, \pi_q(q),$

where naturally the marginal density $Z_s(y) := \int_{\mathcal{Q}} \pi_{qy}(q, y)\, \mu(\mathrm{d}q)$ ($Z_s$ from German Zustandssumme) is a normalising factor such that the conditional density $\pi_{q|y}(\cdot \mid y)$ integrates to unity w.r.t. $q$. In this case the limiting case where $\mathbb{P}(M_y)$ vanishes may be captured via the metric [32], [16]. The joint density

$\pi_{qy}(q, y) = \pi_{y|q}(y \mid q)\, \pi_q(q)$

may be factored into the likelihood function $\pi_{y|q}(y \mid q)$ and the prior density $\pi_q(q)$; $Z_s(y) = \pi_y(y)$ is, like $\pi_q$, a marginal density. These terms in the second equality in Eq. (11) are in direct correspondence with those in Eq. (10). Please observe that the model for the RV $\epsilon$ representing the error in Eq. (8) determines the likelihood function. To require the existence of the joint density is quite restrictive. As Eq. (8) shows, $\hat y$ is a function of $x$, and a joint density on $\mathcal{X} \times \mathcal{Y}$ will generally not be possible as $(x, \hat y)$ are most likely on a submanifold; but the situation of Eq. (9) is one possibility where a joint density may be established. The background measures are typically in finite dimensions the Lebesgue measure on $\mathbb{R}^d$, or more generally Haar measures on locally compact Lie groups [39]. Most computational approaches determine the pdfs [23], [40], [18].
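For a scalar $q$ with a Gaussian prior and Gaussian observational error, Eq. (11) can be evaluated numerically on a grid; the conjugate-Gaussian closed form serves as a check. The densities and numbers below are made up for this illustration:

```python
import numpy as np

# Grid evaluation of Eq. (11): prior pi_q = N(0,1); likelihood from the
# (assumed) model y_hat = q + error with error ~ N(0, 0.5^2).
q = np.linspace(-6.0, 6.0, 4001)
dq = q[1] - q[0]

prior = np.exp(-0.5 * q**2) / np.sqrt(2 * np.pi)
y_hat = 1.0                                # the observed value
s2 = 0.5**2                                # error variance
likelihood = np.exp(-0.5 * (y_hat - q) ** 2 / s2) / np.sqrt(2 * np.pi * s2)

Z = np.sum(likelihood * prior) * dq        # evidence Z_s(y_hat)
posterior = likelihood * prior / Z         # Eq. (11)

post_mean = float(np.sum(q * posterior) * dq)
# Conjugate-Gaussian reference: posterior mean is y_hat/(1 + s2) = 0.8.
```

This grid approach is exactly the kind of density-based computation that breaks down in high dimensions, motivating the conditional-expectation route taken next.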
However, to avoid the critical cases alluded to above, Kolmogorov already defined conditional probabilities via conditional expectation, e.g. see [2]. Given the conditional expectation operator $\mathbb{E}(\cdot \mid M_y)$, the conditional probability is easily recovered as $\mathbb{P}(I_q \mid M_y) = \mathbb{E}(\chi_{I_q} \mid M_y)$, where $\chi_{I_q}$ is the characteristic function of the subset $I_q$. It may be shown that this extends the simpler formulation described by Eq. (10) or Eq. (11) and is the more fundamental notion, which we examine next. Its definition will lead directly to practical computational procedures.
2.3 Conditional expectation
The easiest point of departure for conditional expectation [2] in our setting is to define it not just for one piece of measurement $M_y$ — which may not even be possible unambiguously — but for sub-$\sigma$-algebras $\mathfrak{S} \subseteq \mathfrak{A}$ on $\Omega$. A sub-$\sigma$-algebra $\mathfrak{S}$ is a mathematical description of a reduced possibility of randomness — the smallest sub-$\sigma$-algebra $\{\emptyset, \Omega\}$ allows only the constants — as it contains fewer events than the full $\sigma$-algebra $\mathfrak{A}$. The connection with a measurement is to take $\mathfrak{S} = \sigma(\hat y)$, the $\sigma$-algebra generated by the measurement $\hat y$ from Eq. (8). These are all events which are consistent with possible observations of some value for $\hat y$. This means that the observation of $\hat y$ allows only a certain ‘fineness’ of information to be obtained, and this is encoded in the sub-$\sigma$-algebra $\mathfrak{S}$.
2.3.1 Scalar random variables
For scalar RVs — functions $r(\omega)$ of $\omega \in \Omega$ with finite variance, i.e. elements of $\mathcal{S} := L^2(\Omega, \mathfrak{A}, \mathbb{P})$ — the subspace corresponding to the sub-$\sigma$-algebra $\mathfrak{S}$ is a closed subspace $\mathcal{S}_{\mathfrak{S}} := L^2(\Omega, \mathfrak{S}, \mathbb{P})$ [2] of the full space $\mathcal{S}$. One example of such a scalar RV is the function

$\chi_{I_q}(q(\omega))$

mentioned at the end of Subsection 2.2, used to define the conditional probability of the subset $I_q$ once a conditional expectation operator is defined: $\mathbb{P}(I_q \mid \mathfrak{S}) := \mathbb{E}(\chi_{I_q} \mid \mathfrak{S})$.
Definition 1.
For scalar functions of $\omega$ — scalar RVs $r$ — in $\mathcal{S}$, the conditional expectation $\mathbb{E}(\cdot \mid \mathfrak{S})$ is defined as the orthogonal projection onto the closed subspace $\mathcal{S}_{\mathfrak{S}}$, so that $\mathbb{E}(r \mid \mathfrak{S}) := P_{\mathcal{S}_{\mathfrak{S}}}(r) \in \mathcal{S}_{\mathfrak{S}}$, e.g. see [2].
The question is now how to characterise this subspace $\mathcal{S}_{\mathfrak{S}}$, in order to make it more accessible for possible numerical computations. In this regard, note that the Doob-Dynkin lemma [2] assures us that if a RV $s$ — like $\mathbb{E}(r \mid \mathfrak{S})$ — is in the subspace $\mathcal{S}_{\mathfrak{S}}$ with $\mathfrak{S} = \sigma(\hat y)$, then $s = \varphi(\hat y)$ for some $\varphi \in L^0(\mathcal{Y})$, the space of measurable scalar functions on $\mathcal{Y}$. We state this key fact and the resulting new characterisation of the conditional expectation in
Proposition 2.
The subspace $\mathcal{S}_{\mathfrak{S}}$ is given by

(12) $\mathcal{S}_{\mathfrak{S}} = \{\varphi(\hat y) \mid \varphi \in L^0(\mathcal{Y});\ \varphi(\hat y) \in \mathcal{S}\}.$

The conditional expectation of a scalar RV $r \in \mathcal{S}$, being the orthogonal projection, minimises the distance to the original RV over the whole subspace:

(13) $\|r - \mathbb{E}(r \mid \mathfrak{S})\|_{\mathcal{S}} = \min\{\|r - \tilde r\|_{\mathcal{S}} \mid \tilde r \in \mathcal{S}_{\mathfrak{S}}\} = \|r - P_{\mathcal{S}_{\mathfrak{S}}}(r)\|_{\mathcal{S}},$

where $P_{\mathcal{S}_{\mathfrak{S}}}$ is the orthogonal projector onto $\mathcal{S}_{\mathfrak{S}}$. The Eq. (12) and Eq. (13) imply the existence of an optimal map $\phi_r \in L^0(\mathcal{Y})$ such that

(14) $\mathbb{E}(r \mid \mathfrak{S}) = P_{\mathcal{S}_{\mathfrak{S}}}(r) = \phi_r(\hat y).$

In Eq. (13), one may equally well minimise the square of the distance, the loss-function

(15) $\beta_r(\tilde r) := \tfrac{1}{2}\, \|r - \tilde r\|_{\mathcal{S}}^2.$

Taking the vanishing of the first variation / Gâteaux derivative of the loss-function Eq. (15) as a necessary condition for a minimum leads to a simple geometrical interpretation: the difference between the original scalar RV $r$ and its projection has to be perpendicular to the subspace:

(16) $\forall \tilde r \in \mathcal{S}_{\mathfrak{S}}:\quad \langle r - \mathbb{E}(r \mid \mathfrak{S}),\, \tilde r \rangle_{\mathcal{S}} = 0.$

Rephrasing Eq. (13) with account to Eq. (16) and Eq. (15) leads for the optimal map $\phi_r$ to

(17) $\phi_r = \operatorname*{arg\,min}_{\varphi \in L^0(\mathcal{Y})} \tfrac{1}{2}\, \|r - \varphi(\hat y)\|_{\mathcal{S}}^2,$

and the orthogonality condition of Eq. (17) which corresponds to Eq. (16) leads to

(18) $\forall \varphi \in L^0(\mathcal{Y}):\quad \langle r - \phi_r(\hat y),\, \varphi(\hat y) \rangle_{\mathcal{S}} = 0.$
Proof.
The Eq. (12) is a direct statement of the Doob-Dynkin lemma [2], and the Eq. (13) is equivalent to the definition of the conditional expectation being an orthogonal projection in $\mathcal{S}$ — actually an elementary fact of Euclidean geometry.

The existence of the optimal map $\phi_r$ in Eq. (14) is a consequence of the minimisation of a continuous, coercive, and strictly convex function — the norm Eq. (13) — over the closed set $\mathcal{S}_{\mathfrak{S}}$ in the complete space $\mathcal{S}$. The equivalence of minimising the norm Eq. (13) and the loss-function Eq. (15) is elementary, which is restated in Eq. (17). ∎
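The characterisation just proved — the conditional expectation as the minimiser of $\mathbb{E}|r - \varphi(\hat y)|^2$ over functions $\varphi$ of the measurement — can be mimicked numerically by least-squares regression over a finite-dimensional function class, here polynomials in $\hat y$ up to degree three. The Gaussian toy model below is an assumption of this sketch, chosen because the exact conditional expectation is then known to be linear:

```python
import numpy as np

# Discrete analogue of Eqs. (13)-(18): project r onto the span of
# {1, y, y^2, y^3} by least squares, i.e. minimise the sample version of
# E|r - phi(y)|^2.  Assumed model: q ~ N(0,1), y = q + 0.5*noise, r = q.
rng = np.random.default_rng(2)
N = 200_000
q = rng.standard_normal(N)
y = q + 0.5 * rng.standard_normal(N)
r = q

Phi = np.vander(y, 4)                      # columns y^3, y^2, y, 1
coef, *_ = np.linalg.lstsq(Phi, r, rcond=None)
phi_of_y = Phi @ coef                      # the fitted map phi_r(y)

# (q, y) are jointly Gaussian, so the exact conditional expectation is the
# linear map E(q|y) = y/(1 + 0.25) = 0.8*y; the regression recovers it.
rms_err = float(np.sqrt(np.mean((phi_of_y - 0.8 * y) ** 2)))
```

With a non-Gaussian model the higher-degree terms would carry weight, which is precisely the non-linear update discussed later in the paper.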
2.3.2 Vector valued random variables
Now assume that the quantity to be conditioned is a function of $q$ which takes values in a vector space $\mathcal{V}$, i.e. a $\mathcal{V}$-valued RV $v$, where $\mathcal{V}$ is a Hilbert space. Two simple examples are given by the conditional mean where $v = q$ with $\mathcal{V} = \mathcal{Q}$, and by the conditional variance where one takes $v = \tilde q \otimes \tilde q$, where $\mathcal{V} = \mathcal{Q} \otimes \mathcal{Q}$. The Hilbert tensor product $V = \mathcal{V} \otimes \mathcal{S}$ is again needed for such vector valued RVs, where a bit more formalism is required, as we later want to take linear combinations of RVs, but with linear operators as ‘coefficients’ [20], and this is most clearly expressed in a component-free fashion in terms of invariance, where we essentially follow [3], [4]:
Definition 3.
Let $V_s$ be a subspace of $V = \mathcal{V} \otimes \mathcal{S}$. The subspace is called linearly closed, $\mathrm{L}$-closed, or $\mathrm{L}$-invariant, iff $V_s$ is closed, and $\forall v \in V_s$ and $\forall L \in \mathrm{L}(\mathcal{V})$ — the bounded linear operators on $\mathcal{V}$ — it holds that $Lv \in V_s$.
In finite dimensional spaces one can just apply the notions for the scalar case in section 2.3.1 component by component, but this is not possible in the infinite dimensional case. Of course the vectorial description here collapses to the scalar case upon taking $\mathcal{V} = \mathbb{R}$. From [3] one has the following
Proposition 4.
It is obvious that the whole space $V$ is linearly closed, and that for a linearly closed subspace its orthogonal complement is also linearly closed. Clearly, for a closed subspace $\mathcal{S}_s \subseteq \mathcal{S}$, the tensor space $\mathcal{V} \otimes \mathcal{S}_s$ is linearly closed, and hence the space of constants $\mathcal{V} \otimes \operatorname{span}\{\chi_{\Omega}\}$ is linearly closed, as well as its orthogonal complement, the subspace of zero-mean RVs.
Let $v$ be a $\mathcal{V}$-valued RV, and denote by

(19) $\mathcal{V}_v := \overline{\operatorname{span}}\, v(\Omega) \subseteq \mathcal{V} \quad\text{and}\quad \sigma(v) := v^{-1}(\mathfrak{B}_{\mathcal{V}})$

the closure of the span of the image of $v$ and the $\sigma$-algebra generated by $v$, where $\mathfrak{B}_{\mathcal{V}}$ is the Borel-$\sigma$-algebra of $\mathcal{V}$. Denote the closed subspace generated by $\sigma(v)$ by $\mathcal{S}_v := L^2(\Omega, \sigma(v), \mathbb{P}) \subseteq \mathcal{S}$. Let $L_v := \overline{\operatorname{span}}\{Lv \mid L \in \mathrm{L}(\mathcal{V})\}$, the linearly closed subspace generated by $v$, and finally denote by $V_v := \operatorname{span}\{v\}$, the one-dimensional ray and hence closed subspace generated by $v$. Obviously it holds that

$V_v \subseteq L_v \subseteq \mathcal{V} \otimes \mathcal{S}_v \subseteq V,$

and $L_v$ is linearly closed according to Proposition 4.
Definition 5.
Let $V_a$, $V_b$ be subspaces of $V$, and $v_a, v_b \in V$ two RVs.

The two subspaces are weakly orthogonal or simply just orthogonal, denoted by $V_a \perp V_b$, iff it holds that $\forall v_a \in V_a, \forall v_b \in V_b:\ \langle v_a, v_b \rangle_V = 0$.

A RV $v_a$ is weakly orthogonal or simply just orthogonal to the subspace $V_b$, denoted by $v_a \perp V_b$, iff $\operatorname{span}\{v_a\} \perp V_b$, i.e. it holds that $\forall v_b \in V_b:\ \langle v_a, v_b \rangle_V = 0$.

Two RVs $v_a, v_b$ are weakly orthogonal or, as usual, simply just orthogonal, denoted by $v_a \perp v_b$, iff $\operatorname{span}\{v_a\} \perp \operatorname{span}\{v_b\}$, i.e. $\langle v_a, v_b \rangle_V = 0$.

The two subspaces $V_a$ and $V_b$ are strongly orthogonal or $\mathrm{L}$-orthogonal, iff they are linearly closed — Definition 3 — and it holds that $\forall v_a \in V_a, \forall v_b \in V_b:\ \mathbb{E}(v_a \otimes v_b) = 0$. This is denoted by $V_a \mathbin{\perp\!\!\!\perp} V_b$, and in other words the covariance vanishes: $C_{v_a v_b} = 0$ for all $v_a \in V_a$, $v_b \in V_b$.

The RV $v_a$ is strongly orthogonal to a linearly closed subspace $V_b$, denoted by $v_a \mathbin{\perp\!\!\!\perp} V_b$, iff $L_{v_a} \mathbin{\perp\!\!\!\perp} V_b$, i.e. it holds that $\forall v_b \in V_b:\ \mathbb{E}(v_a \otimes v_b) = 0$.

The two RVs $v_a, v_b$ are strongly orthogonal or simply just uncorrelated, denoted by $v_a \mathbin{\perp\!\!\!\perp} v_b$, iff $L_{v_a} \mathbin{\perp\!\!\!\perp} L_{v_b}$, i.e. $C_{v_a v_b} = 0$.

Let $\mathfrak{B}, \mathfrak{C} \subseteq \mathfrak{A}$ be two sub-$\sigma$-algebras. They are independent, denoted by $\mathfrak{B} \mathbin{\perp\!\!\!\perp_{\mathbb{P}}} \mathfrak{C}$, iff the closed subspaces of zero-mean RVs in $\mathcal{S}$ generated by them are orthogonal in $\mathcal{S}$.

The two subspaces $V_a$ and $V_b$ are stochastically independent, denoted by $V_a \mathbin{\perp\!\!\!\perp_{\mathbb{P}}} V_b$, iff the $\sigma$-algebras generated are: $\sigma(V_a) \mathbin{\perp\!\!\!\perp_{\mathbb{P}}} \sigma(V_b)$.

The two RVs $v_a, v_b$ are stochastically independent, denoted by $v_a \mathbin{\perp\!\!\!\perp_{\mathbb{P}}} v_b$, iff $\sigma(v_a) \mathbin{\perp\!\!\!\perp_{\mathbb{P}}} \sigma(v_b)$.
Proposition 6.
Obviously strong orthogonality implies weak orthogonality: $V_a \mathbin{\perp\!\!\!\perp} V_b \Rightarrow V_a \perp V_b$. It is equally obvious that for any two closed subspaces $\mathcal{S}_a, \mathcal{S}_b \subseteq \mathcal{S}$, the condition $\mathcal{S}_a \perp \mathcal{S}_b$ implies that the tensor product subspaces are strongly orthogonal:

$(\mathcal{V} \otimes \mathcal{S}_a) \mathbin{\perp\!\!\!\perp} (\mathcal{V} \otimes \mathcal{S}_b).$

This implies that for a closed subspace $\mathcal{S}_s \subseteq \mathcal{S}$ the subspaces $\mathcal{V} \otimes \mathcal{S}_s$ and its orthogonal complement $\mathcal{V} \otimes \mathcal{S}_s^{\perp}$ are linearly closed and strongly orthogonal.
Proposition 7.
Let $v_a, v_b \in V$ be two zero-mean RVs. Then stochastic independence implies strong orthogonality (uncorrelatedness): $v_a \mathbin{\perp\!\!\!\perp_{\mathbb{P}}} v_b \Rightarrow C_{v_a v_b} = 0$.

Strong orthogonality in general does not imply independence, and orthogonality does not imply strong orthogonality, unless $\mathcal{V}$ is one-dimensional.

If $V_b$ is linearly closed, then $v_a \perp V_b \Rightarrow v_a \mathbin{\perp\!\!\!\perp} V_b$.
From this we obtain the following:
Lemma 8.
Set $V_{\mathfrak{S}} := \mathcal{V} \otimes \mathcal{S}_{\mathfrak{S}}$ for the $\mathcal{V}$-valued RVs with finite variance on the sub-$\sigma$-algebra $\mathfrak{S} = \sigma(\hat y)$, representing the new information.

Then $V_{\mathfrak{S}}$ is $\mathrm{L}$-invariant or strongly closed, and for any zero-mean RV $v \in V$:

(20) $(v \perp V_{\mathfrak{S}}) \;\Leftrightarrow\; (v \mathbin{\perp\!\!\!\perp} V_{\mathfrak{S}}).$

In addition, it holds — even if $v$ is not zero mean — that

(21) $\big(v - P_{V_{\mathfrak{S}}}(v)\big) \mathbin{\perp\!\!\!\perp} V_{\mathfrak{S}},$

where $P_{V_{\mathfrak{S}}}$ is the orthogonal projector onto $V_{\mathfrak{S}}$.
Proof.
Extending the scalar case described in section 2.3.1, instead of the space

$\mathcal{S} = L^2(\Omega, \mathfrak{A}, \mathbb{P})$

and its subspace generated by the measurement

$\mathcal{S}_{\mathfrak{S}} = L^2(\Omega, \sigma(\hat y), \mathbb{P}),$

one now considers the space Eq. (22) and its subspace Eq. (23)

(22) $V = \mathcal{V} \otimes \mathcal{S} = L^2(\Omega, \mathfrak{A}, \mathbb{P}; \mathcal{V}),$

(23) $V_{\mathfrak{S}} = \mathcal{V} \otimes \mathcal{S}_{\mathfrak{S}} = L^2(\Omega, \sigma(\hat y), \mathbb{P}; \mathcal{V}).$
The conditional expectation in the vector-valued case is defined completely analogously to the scalar case, see Definition 1:

Definition 9.

For $\mathcal{V}$-valued functions of $\omega$ — $\mathcal{V}$-valued RVs $v$ — in $V$, the conditional expectation $\mathbb{E}(\cdot \mid \mathfrak{S})$ is defined as the orthogonal projection onto the closed subspace $V_{\mathfrak{S}}$, so that $\mathbb{E}(v \mid \mathfrak{S}) := P_{V_{\mathfrak{S}}}(v) \in V_{\mathfrak{S}}$.
From this one may derive a characterisation of the conditional expectation similar to Proposition 2.
Theorem 10.
The subspace $V_{\mathfrak{S}}$ is given by

(24) $V_{\mathfrak{S}} = \{\varphi(\hat y) \mid \varphi \in L^0(\mathcal{Y}; \mathcal{V});\ \varphi(\hat y) \in V\}.$

The conditional expectation of a vector-valued RV $v \in V$, being the orthogonal projection, minimises the distance to the original RV over the whole subspace:

(25) $\|v - \mathbb{E}(v \mid \mathfrak{S})\|_V = \min\{\|v - \tilde v\|_V \mid \tilde v \in V_{\mathfrak{S}}\} = \|v - P_{V_{\mathfrak{S}}}(v)\|_V,$

where $P_{V_{\mathfrak{S}}}$ is the orthogonal projector onto $V_{\mathfrak{S}}$. The Eq. (24) and Eq. (25) imply the existence of an optimal map $\phi_v \in L^0(\mathcal{Y}; \mathcal{V})$ such that

(26) $\mathbb{E}(v \mid \mathfrak{S}) = P_{V_{\mathfrak{S}}}(v) = \phi_v(\hat y).$

In Eq. (25), one may equally well minimise the square of the distance, the loss-function

(27) $\beta_v(\tilde v) := \tfrac{1}{2}\, \|v - \tilde v\|_V^2.$

Taking the vanishing of the first variation / Gâteaux derivative of the loss-function Eq. (27) as a necessary condition for a minimum leads to a simple geometrical interpretation: the difference between the original vector-valued RV $v$ and its projection has to be perpendicular to the subspace $V_{\mathfrak{S}}$:

(28) $\forall \tilde v \in V_{\mathfrak{S}}:\quad \langle v - \mathbb{E}(v \mid \mathfrak{S}),\, \tilde v \rangle_V = 0.$

Rephrasing Eq. (25) with account to Eq. (28) and Eq. (27) leads for the optimal map $\phi_v$ to

(29) $\phi_v = \operatorname*{arg\,min}_{\varphi \in L^0(\mathcal{Y}; \mathcal{V})} \tfrac{1}{2}\, \|v - \varphi(\hat y)\|_V^2,$

and the orthogonality condition of Eq. (29) which corresponds to Eq. (28) leads to

(30) $\forall \varphi \in L^0(\mathcal{Y}; \mathcal{V}):\quad \langle v - \phi_v(\hat y),\, \varphi(\hat y) \rangle_V = 0.$

In addition, as $V_{\mathfrak{S}}$ is linearly closed, one obtains the useful statement

(31) $\big(v - \mathbb{E}(v \mid \mathfrak{S})\big) \mathbin{\perp\!\!\!\perp} V_{\mathfrak{S}},$

or rephrased in terms of the covariance:

(32) $\forall \tilde v \in V_{\mathfrak{S}}:\quad C_{(v - \mathbb{E}(v \mid \mathfrak{S}))\, \tilde v} = 0.$
Proof.
Completely analogous to the scalar case, Proposition 2. ∎
Already in [17] it was noted that the conditional expectation is the best estimate not only for the loss function ‘distance squared’, as in Eq. (15) and Eq. (27), but for a much larger class of loss functions under certain distributional constraints. However for the quadratic loss function this is valid without any restrictions.
Requiring the derivative of the quadratic loss function in Eq. (15) and Eq. (27) to vanish may also be characterised by the LaxMilgram lemma, as one is minimising a quadratic functional over the vector space , which is closed and hence a Hilbert space. For later reference, this result is recollected in
Theorem 11.
In the scalar case, there is a unique minimiser $\mathbb{E}(r \mid \mathfrak{S}) = P_{\mathcal{S}_{\mathfrak{S}}}(r)$ to the problem in Eq. (13), and it is characterised by the orthogonality condition Eq. (16)

(33) $\forall \tilde r \in \mathcal{S}_{\mathfrak{S}}:\quad \langle r - \mathbb{E}(r \mid \mathfrak{S}),\, \tilde r \rangle_{\mathcal{S}} = 0.$

The minimiser is unique as an element of $\mathcal{S}_{\mathfrak{S}}$, but the mapping $\phi_r$ in Eq. (17) may not necessarily be. It also holds that

(34) $\|r\|_{\mathcal{S}}^2 = \|\mathbb{E}(r \mid \mathfrak{S})\|_{\mathcal{S}}^2 + \|r - \mathbb{E}(r \mid \mathfrak{S})\|_{\mathcal{S}}^2.$

As in the scalar case, in the vector-valued case there is a unique minimiser $\mathbb{E}(v \mid \mathfrak{S}) = P_{V_{\mathfrak{S}}}(v)$ to the problem in Eq. (25), which satisfies the orthogonality condition Eq. (28)

(35) $\forall \tilde v \in V_{\mathfrak{S}}:\quad \langle v - \mathbb{E}(v \mid \mathfrak{S}),\, \tilde v \rangle_V = 0,$

which is equivalent to the strong orthogonality condition Eq. (31)

(36) $\forall \tilde v \in V_{\mathfrak{S}}:\quad \mathbb{E}\big((v - \mathbb{E}(v \mid \mathfrak{S})) \otimes \tilde v\big) = 0.$

The minimiser is unique as an element of $V_{\mathfrak{S}}$, but the mapping $\phi_v$ in Eq. (29) may not necessarily be. It also holds that

(37) $\|v\|_V^2 = \|\mathbb{E}(v \mid \mathfrak{S})\|_V^2 + \|v - \mathbb{E}(v \mid \mathfrak{S})\|_V^2.$
Proof.
It is all already contained in Proposition 2 resp. Theorem 10. Except for Eq. (36), this is just a rephrasing of the LaxMilgram lemma, as the bilinear functional—in this case the inner product—is naturally coercive and continuous on the subspace , which is closed and hence a Hilbert space. The only novelty here are the Eq. (34) and Eq. (37) which follow from Pythagoras’s theorem. ∎
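The Pythagoras relations Eq. (34) resp. Eq. (37) are easy to check by Monte Carlo in a case where the conditional expectation is known exactly; the model below is assumed purely for the illustration:

```python
import numpy as np

# Check of the Pythagoras relation Eq. (34): with P r := E(r|y) one has
# E(r^2) = E((P r)^2) + E((r - P r)^2).  Assumed model with known conditional
# expectation: y ~ N(0,1), r = y^2 + e, e ~ N(0,1) independent => E(r|y) = y^2.
rng = np.random.default_rng(3)
N = 1_000_000
y = rng.standard_normal(N)
r = y**2 + rng.standard_normal(N)
Pr = y**2                                  # exact conditional expectation

lhs = float(np.mean(r**2))                 # E(r^2) = E(y^4) + 1 = 4
rhs = float(np.mean(Pr**2) + np.mean((r - Pr) ** 2))
# lhs and rhs agree up to Monte Carlo error.
```

The residual term $\mathbb{E}\,|r - \mathbb{E}(r \mid \mathfrak{S})|^2$ is exactly the remaining uncertainty after conditioning, which is why these relations reappear when assessing the updates in the examples.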
3 Characterising the posterior
The information contained in the Bayesian update is encoded in the conditional expectation, which by itself only characterises the distribution of the posterior. A few different ways of characterising the distribution via the conditional expectation are sketched in Subsection 3.1. But in many situations, notably in the setting of Eq. (4) or Eq. (5), with the observations according to Eq. (8) or Eq. (9), we want to construct a new RV to serve as an approximation to the solution of Eq. (4) or Eq. (5). This then is a filter, and a few possibilities will be given in Subsection 3.2.
3.1 The posterior distribution measure
It was already mentioned at the beginning of Section 2.3.1 that the scalar function may be used to characterise the conditional probability distribution of . Indeed, if for a RV one defines:
(38) 
one has completely characterised the posterior distribution, a version of which is under certain conditions—[2], [32], [16]—a measure on , the image space of the RV .
One may also recall that the characteristic function, in the probabilistic sense, of a RV , namely
completely characterises the distribution of the RV . As we assume that is a Hilbert space, we may identify with its dual space , and in this case take as defined on . If now a conditional expectation operator is given, it may be used to define the conditional characteristic function:
(39) 
This again completely characterises the posterior distribution.
Another possible way, actually encompassing the previous two, is to look at all functions , and compute—when they are defined and finite—the quantities
(40) 
again completely characterising the posterior distribution. The two previous examples show that not all functions of with finite conditional expectation are needed. The first example uses the set of functions
whereas the second example uses the set
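The first characterisation, via conditional expectations of indicator functions as in Eq. (38), lends itself to a simple numerical sketch (not from the paper): in a hypothetical linear-Gaussian model, where the posterior is known in closed form, the conditional probability P(x ≤ a | y) can be estimated by projecting the indicator RV onto polynomials of the measurement. The model, the polynomial degree, and all numbers are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n = 400_000

# Hypothetical linear-Gaussian model with closed-form posterior:
# x ~ N(0,1), y = x + N(0, 0.25) noise, hence x | y ~ N(0.8*y, 0.2).
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)

def cond_prob(a, y_hat, degree=3):
    """Estimate P(x <= a | y = y_hat) by projecting the indicator RV
    1_{x <= a} onto polynomials of y, in the spirit of Eq. (38)."""
    Phi = np.vander(y, degree + 1)
    coef, *_ = np.linalg.lstsq(Phi, (x <= a).astype(float), rcond=None)
    return (np.vander(np.atleast_1d(y_hat), degree + 1) @ coef)[0]

est = cond_prob(0.0, 0.0)
exact = 0.5 * (1 + erf((0.0 - 0.8 * 0.0) / (sqrt(0.2) * sqrt(2))))
print(est, exact)   # both close to 0.5
```

The same projection applied to cos(tx) and sin(tx) instead of the indicator would estimate the conditional characteristic function of Eq. (39).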
3.2 A posterior random variable — filtering
In the context of a situation like in Eq. (4) resp. Eq. (5), which represents the unknown system and state vector , and where one observes according to Eq. (8) resp. Eq. (9), one wants to have an estimating or tracking model system, with a state estimate for which would in principle obey Eq. (4) resp. Eq. (5) with the noise set to zero—as one only knows the structure of the system as given by the maps resp. but not the initial condition nor the noise. The observations can be used to correct the state estimate , as will be shown shortly. The state estimate will be computed via Bayesian updating. But the Bayesian theory, as explained above, only characterises the posterior distribution; and there are many random variables which might have a given distribution. To obtain a RV which can be used to predict the next state through the estimate one may use a filter based on Bayesian theory. The main vehicle for this will be the notion of conditional expectation as described in the previous Section 2. As we will first consider only one update step, the time index will be dropped for the sake of ease of notation: The true state is , its forecast is , and the forecast of the measurement is , whereas the observation is .
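As a minimal numerical sketch (not from the paper) of one such correction step, consider the update in its simplest linear, scalar form, where the optimal map is restricted to linear functions of the measurement and reduces to the familiar Gauss–Markov gain. The identity observation operator and all numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical scalar setup: forecast ensemble x_f of the state, a
# forecast y_f of the measurement (identity observation plus noise),
# and the actually observed value y_hat.
x_f = 1.0 + rng.normal(size=n)            # prior / forecast RV
y_f = x_f + 0.5 * rng.normal(size=n)      # forecast of the measurement
y_hat = 2.0                               # the observed value

# Linear filter: the optimal linear map has the Gauss-Markov gain
# K = Cov(x_f, y_f) / Var(y_f); the correction uses the innovation.
K = np.cov(x_f, y_f)[0, 1] / np.var(y_f)
x_a = x_f + K * (y_hat - y_f)             # assimilated RV

print(np.mean(x_f), np.mean(x_a))  # posterior mean pulled toward the data
```

For this linear-Gaussian example the assimilated ensemble reproduces the exact posterior mean and variance; the nonlinear update developed below replaces the linear map by a more general (e.g. polynomial) one.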
To recall, according to Definition 9, the Bayesian update is defined via the conditional expectation through a measurement —which will for the sake of simplicity be denoted just by —of a valued RV , and is simply the orthogonal projection onto the subspace in Eq. (24),
which is given by the optimal map from Eq. (26), characterised by Eq. (32), where we have added an index to signify that this is the optimal map for the conditional expectation of the RV .
The linearly closed subspace induces an orthogonal decomposition
where the orthogonal projection onto is given by . Hence a RV in like can be decomposed accordingly as
(41) 
This Eq. (41) is the starting point for the updating. A measurement will inform us about the component in , namely , while we leave the component orthogonal to it unchanged: . Adding these two terms then gives an updated or assimilated RV :
(42) 
where is the forecast and is the innovation. For and one has the following result:
Proposition 12.
The assimilated RV from Eq. (42) has the correct conditional expectation