A Galerkin least squares approach for photoacoustic tomography
Funding: The authors gratefully acknowledge the support of the Tyrolean Science Fund (TWF).
Technikerstrasse 13, A6020 Innsbruck, Austria
Abstract
The development of fast and accurate image reconstruction algorithms is a central aspect of computed tomography. In this paper we address this issue for photoacoustic computed tomography (PAT) in circular geometry. We investigate the Galerkin least squares method for that purpose. For approximating the function to be recovered we use subspaces of translation invariant spaces generated by a single function. This includes many systems that have previously been employed in PAT, such as generalized Kaiser-Bessel basis functions or the natural pixel basis. By exploiting an isometry property of the forward problem we are able to efficiently set up the Galerkin equation for a wide class of generating functions and to devise efficient algorithms for its solution. We establish a convergence analysis and present numerical simulations that demonstrate the efficiency and accuracy of the derived algorithm.
Key words: Photoacoustic imaging, computed tomography, Galerkin least squares method, Kaiser-Bessel functions, Radon transform, least squares approach.
AMS subject classification: 65R32, 45Q05, 92C55.
1 Introduction
Photoacoustic tomography (PAT) is an emerging noninvasive tomographic imaging modality that allows high resolution imaging with high contrast. Applications range from breast screening in patients to whole body imaging of small animals [4, 45, 27, 58]. The basic principle of PAT is as follows. If a semitransparent sample is illuminated with a short optical pulse, then part of the optical energy is absorbed inside the sample (see Figure 1.1). This causes a rapid thermoelastic expansion, which in turn induces an acoustic pressure wave. The pressure wave is measured outside of the sample and used for reconstructing an image of the interior.
In this paper we work with the standard model of PAT, where the acoustic pressure $p \colon \mathbb{R}^d \times (0,\infty) \to \mathbb{R}$ solves the standard wave equation

(1.1) $\partial_t^2 p(x,t) - \Delta p(x,t) = 0$ for $(x,t) \in \mathbb{R}^d \times (0,\infty)$, with $p(x,0) = f(x)$ and $\partial_t p(x,0) = 0$.

Here $d$ is the spatial dimension, $f$ the absorbed energy distribution, $\Delta$ the spatial Laplacian, and $\partial_t$ the derivative with respect to the time variable $t$. The speed of sound is assumed to be constant and has been rescaled to one. We further suppose that $f$ vanishes outside an open ball $B_R \subseteq \mathbb{R}^d$. The goal of PAT is to recover the function $f$ from measurements of $p$ on the boundary $\partial B_R$. Evaluation of $\mathbf{W} f := p|_{\partial B_R \times (0,\infty)}$ is referred to as the direct problem, and the problem of reconstructing $f$ from (possibly approximate) knowledge of $\mathbf{W} f$ as the inverse problem of PAT. The cases $d = 2$ and $d = 3$ are of actual relevance in PAT (see [29, 7]).
In recent years, several solution methods for the inverse problem of PAT have been derived. These approaches can be classified into direct methods on the one hand and iterative (model based) approaches on the other. Direct methods are based on explicit formulas for inverting the forward operator that can be implemented numerically. This includes time reversal (see [8, 24, 12, 43, 54]), Fourier domain algorithms (see [1, 20, 31, 46, 53, 59]), and explicit reconstruction formulas of the backprojection type (see [2, 11, 12, 15, 16, 18, 19, 30, 32, 41, 42, 61]). Model based iterative approaches, on the other hand, rely on a discretization of the forward problem together with numerical methods for solving the resulting system of linear equations. Existing iterative approaches use interpolation based discretizations (see [9, 47, 48, 52, 62]) or approximations by radially symmetric basis functions (see [56, 57]). Recently, iterative schemes using a continuous domain formulation of the adjoint have also been studied, see [3, 5, 17]. Direct methods are numerically efficient and robust, and have a complexity similar to that of numerically evaluating the forward problem. Iterative methods are typically slower, since the forward and adjoint problems have to be evaluated repeatedly. However, iterative methods have the advantage of being flexible, as one can easily add regularization terms and incorporate measurement characteristics such as finite sampling, finite bandwidth and finite detector size (see [9, 22, 51, 55, 56, 60]). Additionally, iterative methods tend to be more accurate in the case of noisy data.
1.1 Proposed Galerkin least squares approach
In this paper we develop a Galerkin approach for PAT that combines advantages of direct and model based approaches. Our method comes with a clear convergence theory, sharp error estimates and an efficient implementation. The Galerkin least squares method for the equation $\mathbf{W} f = g$ consists in finding a minimizer of the restricted least squares functional,

(1.2) $\| \mathbf{W} h - g \|^2 \to \min$ over $h \in \mathcal{V}_n$,

where $\mathcal{V}_n$ is a finite dimensional reconstruction space and $\| \cdot \|$ an appropriate Hilbert space norm. If $(v_i)_{i \in \Lambda_n}$ is a basis of $\mathcal{V}_n$, then $f_n = \sum_{i \in \Lambda_n} x_i v_i$, where $x = (x_i)_{i \in \Lambda_n}$ is the unique solution of the Galerkin equation

(1.3) $\sum_{j \in \Lambda_n} \langle \mathbf{W} v_j, \mathbf{W} v_i \rangle \, x_j = \langle g, \mathbf{W} v_i \rangle$ for all $i \in \Lambda_n$.

We call the matrix $(\langle \mathbf{W} v_j, \mathbf{W} v_i \rangle)_{i,j \in \Lambda_n}$ the (discrete) imaging matrix.
In general, both the computation of the imaging matrix and the solution of the Galerkin equation can be numerically expensive. In this paper we demonstrate that for the inverse problem of PAT, both steps can be implemented efficiently. These observations are based on the following:

Isometry. The forward operator satisfies the isometry property $\langle \mathbf{W} v_j, \mathbf{W} v_i \rangle = \langle v_j, v_i \rangle$ (see Lemma 2.1), so the entries of the imaging matrix can be computed without numerically solving the wave equation.

Shift invariance. If, additionally, we take the basis functions as translates $v_i = \varphi(\cdot - i)$ of a single generating function $\varphi$, then $\langle v_i, v_j \rangle = \langle \varphi(\cdot - (i - j)), \varphi \rangle$ depends only on the difference $i - j$.

Consequently, only the inner products $\langle \varphi(\cdot - k), \varphi \rangle$ have to be computed in our Galerkin approach, as opposed to all pairwise inner products $\langle \mathbf{W} v_j, \mathbf{W} v_i \rangle$ required in the general case. Further, the resulting shift invariant structure of the system matrix allows the Galerkin equation to be solved efficiently.
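To make the two observations concrete, the following one dimensional sketch (entirely our own illustration: the Gaussian generator, the grid and the right-hand side are stand-in assumptions, not choices made in the paper) builds one row of the Gram matrix of translates, expands it into the full symmetric Toeplitz imaging matrix, and solves the resulting Galerkin system:

```python
import numpy as np

def generator(x, h=1.0):
    # Hypothetical radially decaying generator (a Gaussian stand-in for a
    # Kaiser-Bessel function); this specific choice is our assumption.
    return np.exp(-(x / h) ** 2)

def gram_row(n_basis, grid, h=1.0):
    # Inner products <phi(. - k), phi> for k = 0, ..., n_basis - 1,
    # approximated by a Riemann sum on `grid`.
    dx = grid[1] - grid[0]
    phi0 = generator(grid, h)
    return np.array([np.sum(generator(grid - k, h) * phi0) * dx
                     for k in range(n_basis)])

n = 8
grid = np.linspace(-20.0, 20.0, 4001)
row = gram_row(n, grid)

# Shift invariance: the Gram matrix entry A[i, j] = <v_j, v_i> depends only
# on i - j, so A is symmetric Toeplitz and a single row determines it.
idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
A = row[idx]

# Thanks to the isometry property, the right-hand side only requires the
# inner products <g, W v_i>; here b is a made-up placeholder vector.
b = np.ones(n)
x = np.linalg.solve(A, b)
```

Because the matrix is symmetric Toeplitz, only one row of inner products is computed and stored; a dedicated Toeplitz solver such as `scipy.linalg.solve_toeplitz` could replace the generic dense solve.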
Note that shift invariant spaces are frequently employed in computed tomography and include spline spaces, spaces of bandlimited functions, and spaces generated by Kaiser-Bessel functions. In this paper we especially use Kaiser-Bessel functions, which are often considered the most suitable basis functions for computed tomography [23, 34, 39, 44]. For use in PAT they were first proposed in [56]. We are not aware of existing Galerkin approaches for tomographic image reconstruction exploiting isometry and shift invariance. However, we anticipate that similar methods can be derived for other tomographic problems where an isometry property is known (such as X-ray based CT [28, 40]). We further note that our approach is closely related to the method of approximate inverse, which has frequently been applied to computed tomography [21, 35, 36, 37, 49, 50]. Instead of approximating the unknown function in a prescribed reconstruction space, the method of approximate inverse recovers prescribed moments of the unknown function and is in this sense dual to the Galerkin approach.
1.2 Outline
The rest of this article is organized as follows. In Section 2 we apply the Galerkin least squares method to the inverse problem of PAT. Using the isometry property we derive a simple characterization of the Galerkin equation in Theorem 2.2, and we establish a convergence and stability result for the Galerkin least squares method applied to PAT (see Theorem 2.3). In Section 3 we study shift invariant spaces for computed tomography. As the main result of that section we derive an estimate for the approximation error of elements from a shift invariant space. In Section 4 we present details of the Galerkin approach using subspaces of shift invariant spaces. In Section 5 we present numerical studies using our Galerkin approach and compare it to related approaches from the literature. The paper concludes with a summary and a short outlook in Section 6.
2 Galerkin approach for PAT
Throughout the following, suppose $d \geq 2$, let $B_R \subseteq \mathbb{R}^d$ denote the open ball with radius $R > 0$ centered at the origin, and let $L^2(B_R)$ denote the Hilbert space of all square integrable functions $f \colon \mathbb{R}^d \to \mathbb{R}$ which vanish outside $B_R$. For two measurable functions $g_1, g_2 \colon \partial B_R \times (0,\infty) \to \mathbb{R}$ we write

(2.1) $\langle g_1, g_2 \rangle := \int_{\partial B_R} \int_0^\infty g_1(z,t) \, g_2(z,t) \, \mathrm{d}t \, \mathrm{d}S(z)$,

provided that the integral exists. We further denote by $L^2(\partial B_R \times (0,\infty))$ the Hilbert space of all functions $g$ with $\langle g, g \rangle < \infty$.
2.1 PAT and the wave equation
For initial data $f \in C_c^\infty(B_R)$ consider the wave equation (1.1). The solution of (1.1) restricted to the boundary of $B_R$ is denoted by $\mathbf{W} f := p|_{\partial B_R \times (0,\infty)}$, and the associated operator $\mathbf{W} \colon C_c^\infty(B_R) \subseteq L^2(B_R) \to L^2(\partial B_R \times (0,\infty))$ is defined by $f \mapsto \mathbf{W} f$.
Lemma 2.1 (Isometry and continuous extension of $\mathbf{W}$).

1. For all $f, h \in C_c^\infty(B_R)$ we have $\langle \mathbf{W} f, \mathbf{W} h \rangle = \langle f, h \rangle$.

2. $\mathbf{W}$ uniquely extends to a bounded linear operator $\mathbf{W} \colon L^2(B_R) \to L^2(\partial B_R \times (0,\infty))$.

3. For all $f, h \in L^2(B_R)$ we have $\langle \mathbf{W} f, \mathbf{W} h \rangle = \langle f, h \rangle$.
Proof.
Item 1: See [11, Equation (1.16)] for even $d$ and [12, Equation (1.16)] for odd $d$. (Note that the isometry identities in [11, 12] are stated for the wave equation with different initial conditions, and therefore at first glance look different from Item 1.)
Items 2, 3: Item 1 implies that $\mathbf{W}$ is bounded with respect to the norms of $L^2(B_R)$ and $L^2(\partial B_R \times (0,\infty))$ and defined on a dense subspace of $L^2(B_R)$. Consequently it uniquely extends to a bounded operator on all of $L^2(B_R)$. The continuity of the inner product finally shows the isometry property on $L^2(B_R)$.
∎
We call $\mathbf{W}$ the acoustic forward operator. PAT is concerned with the inverse problem of estimating $f$ from potentially noisy and approximate knowledge of $g = \mathbf{W} f$. In this paper we use the Galerkin least squares method for that purpose.
2.2 Application of the Galerkin method
Let $(\mathcal{V}_n)_{n \in \mathbb{N}}$ and $(\mathcal{W}_n)_{n \in \mathbb{N}}$ be families of finite dimensional subspaces of $L^2(B_R)$ and $L^2(\partial B_R \times (0,\infty))$, respectively, with $\dim \mathcal{V}_n = \dim \mathcal{W}_n$. Further let $\mathbf{Q}_n$ denote the orthogonal projection on $\mathcal{W}_n$ and suppose $g = \mathbf{W} f$. The Galerkin method for solving $\mathbf{W} f = g$ defines the approximate solution as the solution $f_n \in \mathcal{V}_n$ of

(2.2) $\mathbf{Q}_n \mathbf{W} f_n = \mathbf{Q}_n g$.

In this paper we consider the special case where $\mathcal{W}_n = \mathbf{W}(\mathcal{V}_n)$, in which case the solution of (2.2) is referred to as the Galerkin least squares method. The name comes from the fact that in this case the Galerkin solution can be uniquely characterized as the minimizer of the least squares functional over $\mathcal{V}_n$,

(2.3) $f_n = \operatorname{arg\,min} \{ \| \mathbf{W} h - g \|^2 : h \in \mathcal{V}_n \}$.
Because $h \mapsto \| \mathbf{W} h - g \|^2$ is a quadratic functional on a finite dimensional space and $\mathbf{W}$ is injective, (2.3) possesses a unique solution. Together with the isometry property we obtain the following characterizations of the least squares Galerkin method for PAT.
Theorem 2.2 (Characterizations of the Galerkin least squares method).
For $f_n \in \mathcal{V}_n$ and $g = \mathbf{W} f$ the following are equivalent:

1. $f_n$ solves the Galerkin equation (2.2) with $\mathcal{W}_n = \mathbf{W}(\mathcal{V}_n)$;

2. $f_n$ minimizes the least squares functional (2.3);

3. For an arbitrary basis $(v_i)_{i \in \Lambda_n}$ of $\mathcal{V}_n$, we have $f_n = \sum_{i \in \Lambda_n} x_i v_i$, where

(2.4) $\mathbf{A}_n x = b_n$

with $\mathbf{A}_n = (\langle v_j, v_i \rangle)_{i,j \in \Lambda_n}$ and $b_n = (\langle g, \mathbf{W} v_i \rangle)_{i \in \Lambda_n}$;

4. $\langle f_n, v \rangle = \langle f, v \rangle$ for all $v \in \mathcal{V}_n$;

5. $f_n = \mathbf{P}_n f$, where $\mathbf{P}_n$ denotes the orthogonal projection on $\mathcal{V}_n$.
Proof.
The equivalence of the first two characterizations is a standard property of least squares methods, and expanding $f_n$ in a basis of $\mathcal{V}_n$ turns the Galerkin equation into a finite linear system whose matrix entries simplify via the isometry property $\langle \mathbf{W} v_j, \mathbf{W} v_i \rangle = \langle v_j, v_i \rangle$. Moreover, for $v \in \mathcal{V}_n$ the isometry property gives $\langle g, \mathbf{W} v \rangle = \langle \mathbf{W} f, \mathbf{W} v \rangle = \langle f, v \rangle$, so the Galerkin equation is equivalent to $\langle f_n, v \rangle = \langle f, v \rangle$ for all $v \in \mathcal{V}_n$, which is precisely the characterization of the orthogonal projection of $f$ onto $\mathcal{V}_n$. ∎
In general, evaluating all matrix entries $\langle \mathbf{W} v_j, \mathbf{W} v_i \rangle$ can be difficult. For many basis functions an explicit expression for $\mathbf{W} v_i$ is not available, including the natural pixel basis, spaces defined by linear interpolation, and spline spaces. Hence $\mathbf{W} v_i$ has to be evaluated numerically, which is time consuming and introduces additional errors. Even if $\mathbf{W} v_i$ is given explicitly, the inner products $\langle \mathbf{W} v_j, \mathbf{W} v_i \rangle$ have to be computed numerically and stored; for a large number of basis elements this can be problematic and time consuming. In contrast, by using the isometry property, in our approach we only have to compute the inner products $\langle v_j, v_i \rangle$. Further, in computed tomography it is common to take the basis elements as translates of a single function $\varphi$. In such a situation the inner products satisfy $\langle v_i, v_j \rangle = \langle \varphi(\cdot - (i - j)), \varphi \rangle$, and therefore only a small fraction of all inner products actually have to be computed.
2.3 Convergence and stability analysis
As another consequence of the isometry property we derive linear error estimates for the Galerkin approach to PAT. We consider noisy data $g^\delta \in L^2(\partial B_R \times (0,\infty))$ known to satisfy

(2.5) $\| g^\delta - \mathbf{W} f \| \leq \delta$

for some noise level $\delta \geq 0$ and unknown $f \in L^2(B_R)$. For noisy data we define the Galerkin least squares solution by

(2.6) $f_n^\delta := \operatorname{arg\,min} \{ \| \mathbf{W} h - g^\delta \|^2 : h \in \mathcal{V}_n \}$.
We then have the following convergence and stability result.
Theorem 2.3 (Convergence and stability of the Galerkin method for PAT).
Let $f \in L^2(B_R)$, let $g^\delta$ satisfy (2.5), and let $f_n^\delta$ be defined by (2.6). Then we have the error estimate

(2.7) $\| f_n^\delta - f \| \leq \| (\mathrm{Id} - \mathbf{P}_n) f \| + \delta$,

where $\mathbf{P}_n$ denotes the orthogonal projection on $\mathcal{V}_n$.
Proof.
We start with the noise free case $\delta = 0$. The definition of $f_n := f_n^0$ and the isometry property of $\mathbf{W}$ yield

$\forall v \in \mathcal{V}_n : \langle f_n, v \rangle = \langle \mathbf{W} f_n, \mathbf{W} v \rangle = \langle g, \mathbf{W} v \rangle = \langle \mathbf{W} f, \mathbf{W} v \rangle = \langle f, v \rangle$.

This shows $f_n = \mathbf{P}_n f$ and yields (2.7) for $\delta = 0$. Here and below we use $\mathbf{P}_V$ to denote the orthogonal projection on a closed subspace $V$.

Now consider the case of arbitrary $\delta \geq 0$. Because $\mathbf{W}(L^2(B_R))$ is closed we can write $g^\delta = \mathbf{W} u + \xi$, where $u \in L^2(B_R)$ and $\xi$ satisfies $\xi \perp \mathbf{W}(L^2(B_R))$; the isometry property gives $\| u - f \| = \| \mathbf{W} u - \mathbf{W} f \| \leq \| g^\delta - \mathbf{W} f \| \leq \delta$. Following the case $\delta = 0$ and using that $\xi \perp \mathbf{W}(\mathcal{V}_n)$ one verifies that $f_n^\delta = \mathbf{P}_n u$. Therefore, by the triangle inequality and the isometry property of $\mathbf{W}$ we obtain

$\| f_n^\delta - f \| \leq \| \mathbf{P}_n u - \mathbf{P}_n f \| + \| \mathbf{P}_n f - f \| \leq \| u - f \| + \| (\mathrm{Id} - \mathbf{P}_n) f \|$.

Together with $\| u - f \| \leq \delta$ this concludes the proof. ∎
The error estimate in Theorem 2.3 consists of two terms: the first term depends on the approximation properties of the space $\mathcal{V}_n$ and the second term on the noise level $\delta$. As easily verified, both terms are optimal and cannot be improved. The second term shows stability of our Galerkin least squares approach. Under the reasonable assumption that the spaces $\mathcal{V}_n$ satisfy the denseness property

$\forall f \in L^2(B_R) : \lim_{n \to \infty} \| (\mathrm{Id} - \mathbf{P}_n) f \| = 0$,

the derived error estimate further implies convergence of the Galerkin approach.
3 Shift invariant spaces in computed tomography
In many tomographic and signal processing applications, natural spaces for approximating the underlying function are subspaces of shift invariant spaces. In this paper we consider spaces that are generated by translated and scaled versions of a single function $\varphi \in L^2(\mathbb{R}^d)$,

(3.1) $\mathcal{V}(\varphi, h) := \overline{\operatorname{span}} \{ \varphi_{h,k} : k \in \mathbb{Z}^d \}$.

Here $\operatorname{span}$ denotes the linear hull, the bar stands for the closure with respect to $L^2(\mathbb{R}^d)$ of a set, and

(3.2) $\varphi_{h,k} := h^{-d/2} \, \varphi(\cdot / h - k)$ for $k \in \mathbb{Z}^d$.

We have chosen the normalization of the generating functions in such a way that $\| \varphi_{h,k} \| = \| \varphi \|$ for all $h > 0$ and $k \in \mathbb{Z}^d$. In this section we derive conditions under which any function $f \in L^2(B_R)$ can be approximated by elements in $\mathcal{V}(\varphi, h)$. Further, we present examples of generating functions that are relevant for (photoacoustic) computed tomography.
Any tomographic reconstruction method uses, either explicitly or implicitly, a particular discrete reconstruction space. This is obvious for any iterative procedure, as it requires a finite dimensional representation of the forward operator that can be evaluated numerically. However, direct methods also use an underlying discrete image space. For example, standard filtered backprojection algorithms usually reconstruct samples of a bandlimited approximation of the unknown function; in this situation, the underlying discrete signal space consists of bandlimited functions. In this paper we allow more general shift invariant spaces.
The following properties of the generating function $\varphi$ and the spaces $\mathcal{V}(\varphi, h)$ have been reported desirable for tomographic applications (see [44, 56]):

1. $\varphi$ has “small” spatial support;

2. $\varphi$ is rotationally invariant;

3. $(\varphi_{h,k})_{k \in \mathbb{Z}^d}$ is a Riesz basis of $\mathcal{V}(\varphi, h)$;

4. $\varphi$ satisfies the so called partition of unity property.

Conditions 1 and 2 are desirable from a computational point of view and often help to derive efficient reconstruction algorithms. Properties 3 and 4 are of a more fundamental nature, as these conditions imply that any function $f \in L^2(B_R)$ can be approximated arbitrarily well by elements in $\mathcal{V}(\varphi, h)$ as $h \to 0$ (with $\varphi$ kept fixed; the so called stationary case). In [44] it has been pointed out that the properties 1-4 cannot be simultaneously fulfilled. This implies that when taking $\varphi$ independent of $h$, the spaces $\mathcal{V}(\varphi, h)$ have a limited approximation capability, in the sense that for a typical function $f$ the approximation error does not converge to zero as $h \to 0$ while $\varphi$ is kept fixed.
Despite these negative results, radially symmetric basis functions enjoy great popularity in computed tomography (see, for example, [23, 33, 34, 39, 44, 57, 56]). In this paper we therefore propose to also allow the generating function to be variable. Under reasonable assumptions we show that the approximation error converges to zero as $h \to 0$. This convergence in particular holds for radially symmetric generating functions having some decay in Fourier space, including generalized Kaiser-Bessel functions, which are the most popular choice in tomographic image reconstruction.
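As a concrete illustration, a generalized Kaiser-Bessel function can be evaluated along the lines below. This is a sketch under our own parameter naming (order `m`, taper `alpha` and support radius `a`); the paper's exact parametrization may differ:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def kaiser_bessel(r, m=2, alpha=10.0, a=1.0):
    # Generalized Kaiser-Bessel function: radially symmetric, supported on
    # |r| <= a, normalized so that its value at r = 0 equals one.
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    inside = np.abs(r) <= a
    w = np.sqrt(1.0 - (r[inside] / a) ** 2)
    out[inside] = (w ** m) * iv(m, alpha * w) / iv(m, alpha)
    return out

vals = kaiser_bessel(np.array([0.0, 0.5, 1.0]))
```

For `m >= 1` the function vanishes smoothly at the support boundary, which is one reason Kaiser-Bessel functions are favored over a sharp pixel basis.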
3.1 Riesz bases of shift invariant spaces
Recall that the family $(\varphi(\cdot - k))_{k \in \mathbb{Z}^d}$ is called a Riesz basis of $\mathcal{V}(\varphi, 1)$ if there exist constants $0 < A \leq B < \infty$ such that

(3.3) $\forall c \in \ell^2(\mathbb{Z}^d) : A \| c \|^2 \leq \| \sum_{k \in \mathbb{Z}^d} c_k \varphi(\cdot - k) \|^2 \leq B \| c \|^2$,

where $\| c \|^2 = \sum_{k \in \mathbb{Z}^d} |c_k|^2$ is the squared norm of $c$. A Riesz basis of $\mathcal{V}(\varphi, 1)$ can equivalently be defined as a linearly independent frame, and the constants $A$ and $B$ are the lower and upper frame bounds, respectively. In the following we write $\hat{\varphi}$ for the $d$ dimensional Fourier transform, defined by $\hat{\varphi}(\xi) = \int_{\mathbb{R}^d} \varphi(x) \, e^{-i \langle x, \xi \rangle} \, \mathrm{d}x$ for $\varphi \in L^1(\mathbb{R}^d) \cap L^2(\mathbb{R}^d)$ and extended to $L^2(\mathbb{R}^d)$ by continuity.
The following two lemmas are well known in the one dimensional case (see [38, Theorem 3.4]). Due to page limitations, and because the general case is shown analogously, the proofs of the lemmas are omitted.
Lemma 3.1 (Riesz basis property).
The family $(\varphi(\cdot - k))_{k \in \mathbb{Z}^d}$ is a Riesz basis of $\mathcal{V}(\varphi, 1)$ with frame bounds $A$ and $B$, if and only if

(3.4) $A \leq \sum_{k \in \mathbb{Z}^d} | \hat{\varphi}(\xi + 2 \pi k) |^2 \leq B$ for a.e. $\xi \in \mathbb{R}^d$.
Proof.
Follows the lines of [38, Theorem 3.4]. ∎
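The criterion (3.4) can be checked numerically by truncating the periodized Fourier sum. The sketch below (our own example: the Gaussian generator and the truncation length are assumptions, not choices from the paper) estimates the frame bounds for a univariate Gaussian:

```python
import numpy as np

def fourier_gaussian(xi):
    # Fourier transform of phi(x) = exp(-x^2 / 2), namely
    # sqrt(2 * pi) * exp(-xi^2 / 2).
    return np.sqrt(2 * np.pi) * np.exp(-xi ** 2 / 2)

def frame_bounds(xi_grid, n_terms=20):
    # Estimate A and B in (3.4) as the min/max over xi of the truncated
    # 2*pi-periodization sum_k |phi_hat(xi + 2*pi*k)|^2.
    s = np.zeros_like(xi_grid)
    for k in range(-n_terms, n_terms + 1):
        s += np.abs(fourier_gaussian(xi_grid + 2 * np.pi * k)) ** 2
    return s.min(), s.max()

xi = np.linspace(-np.pi, np.pi, 2001)  # one period of the sum suffices
A, B = frame_bounds(xi)
```

For the Gaussian the lower bound is positive but very small, i.e. its translates form a Riesz basis with a poor lower frame bound.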
The following lemma implies that for any Riesz basis one can construct an orthonormal basis of $\mathcal{V}(\varphi, 1)$ that is again generated by translated and scaled versions of a single function $\varphi^\circ$.
Lemma 3.2 (Orthonormalization).
Let $(\varphi(\cdot - k))_{k \in \mathbb{Z}^d}$ be a Riesz basis of $\mathcal{V}(\varphi, 1)$.

1. $(\varphi(\cdot - k))_{k \in \mathbb{Z}^d}$ is orthonormal if and only if $\sum_{k \in \mathbb{Z}^d} | \hat{\varphi}(\xi + 2 \pi k) |^2 = 1$ for a.e. $\xi \in \mathbb{R}^d$.

2. $(\varphi^\circ(\cdot - k))_{k \in \mathbb{Z}^d}$ is an orthonormal basis of $\mathcal{V}(\varphi, 1)$, where $\varphi^\circ$ is defined by

(3.5) $\hat{\varphi}^\circ(\xi) := \hat{\varphi}(\xi) \, \Bigl( \sum_{k \in \mathbb{Z}^d} | \hat{\varphi}(\xi + 2 \pi k) |^2 \Bigr)^{-1/2}$.
Proof.
Follows the lines of [38, Theorem 3.4]. ∎
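The orthonormalization step divides the Fourier transform of the generator by the square root of its periodized energy. A minimal numerical sketch (again with a Gaussian stand-in generator and a truncated periodization, both our own assumptions):

```python
import numpy as np

def phi_hat(xi):
    # Fourier transform of the Gaussian generator phi(x) = exp(-x^2 / 2)
    # (an example stand-in; the paper's generator may differ).
    return np.sqrt(2 * np.pi) * np.exp(-xi ** 2 / 2)

def periodized_energy(xi, n_terms=20):
    # Truncated 2*pi-periodization sum_k |phi_hat(xi + 2*pi*k)|^2.
    return sum(np.abs(phi_hat(xi + 2 * np.pi * k)) ** 2
               for k in range(-n_terms, n_terms + 1))

def phi_hat_orth(xi):
    # Fourier transform of the orthonormalized generator, following the
    # normalization in Lemma 3.2.
    return phi_hat(xi) / np.sqrt(periodized_energy(xi))

# Sanity check: the periodized energy of the orthonormalized generator is
# identically one, so its integer translates are orthonormal.
xi = np.linspace(-np.pi, np.pi, 101)
check = sum(np.abs(phi_hat_orth(xi + 2 * np.pi * k)) ** 2
            for k in range(-20, 21))
```

Up to truncation error, `check` equals one on the whole period, which is exactly the orthonormality criterion of Lemma 3.2.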
According to Lemma 3.2, for theoretical purposes one may assume that the considered basis of $\mathcal{V}(\varphi, 1)$ is already orthogonal. From a practical point of view, however, it may be more convenient to work with the original non-orthogonal basis: the function $\varphi$ may have additional properties, such as small support or radial symmetry, that are not shared by the orthonormalized generator $\varphi^\circ$; moreover, $\varphi^\circ$ may not be known analytically.
3.2 The approximation error
We now investigate the approximation error in shift invariant spaces,

(3.6) $\| f - \mathbf{P}_{\mathcal{V}(\varphi, h)} f \|$,

as well as its asymptotic properties. Here and in the following, $\mathbf{P}_{\mathcal{V}(\varphi, h)}$ denotes the orthogonal projection on $\mathcal{V}(\varphi, h)$. It is given by $\mathbf{P}_{\mathcal{V}(\varphi, h)} f = \sum_{k \in \mathbb{Z}^d} \langle f, \varphi^\circ_{h,k} \rangle \, \varphi^\circ_{h,k}$, where $(\varphi^\circ_{h,k})_{k \in \mathbb{Z}^d}$ is any orthonormal basis of $\mathcal{V}(\varphi, h)$. For the stationary case where $\varphi$ is kept fixed, the following theorem has been obtained in [6].
Theorem 3.3 (The approximation error).
Let $(\varphi(\cdot - k))_{k \in \mathbb{Z}^d}$ be a Riesz basis of $\mathcal{V}(\varphi, 1)$ and define

(3.7) $e_\varphi(\xi) := 1 - \dfrac{| \hat{\varphi}(\xi) |^2}{\sum_{k \in \mathbb{Z}^d} | \hat{\varphi}(\xi + 2 \pi k) |^2}$.

Then, for every $f \in L^2(\mathbb{R}^d)$ with $\hat{f} \in L^1(\mathbb{R}^d)$ we have

(3.8) $\| f - \mathbf{P}_{\mathcal{V}(\varphi, h)} f \| = \Bigl( (2\pi)^{-d} \int_{\mathbb{R}^d} | \hat{f}(\xi) |^2 \, e_\varphi(h \xi) \, \mathrm{d}\xi \Bigr)^{1/2} + \mathcal{R}(f, \varphi, h),$
where the remainder can be estimated as
(3.9) 
Proof.
Let denote the orthonormal basis of the space as constructed in Lemma 3.2. Further, for every define and define functions by its Fourier representation
Then we have and .
Now for every , we investigate the approximation error . We have . Further,
where is the th Fourier coefficient of the periodization of the function .
Therefore we obtain
Next notice that for and we have . Therefore we can estimate
Together with the triangle inequality and the Cauchy-Schwarz inequality for sums we obtain
Here the sum is convergent because . After recalling that , the above estimate yields (3.8). ∎
Note that the remainder in Theorem 3.3 satisfies as . Consequently, for every sequence we have if and
By Lebesgue’s dominated convergence theorem this holds if the integrand converges to zero almost everywhere. In the following theorem we consider two possible sequences for which this is the case. Note that Item 1 in that theorem is well known (see, for example, [38]), while Item 2 is, to the best of our knowledge, new.
Theorem 3.4 (Asymptotic behavior of the approximation error).
Let $\varphi \in L^2(\mathbb{R}^d)$.

1. Suppose that $\hat{\varphi}$ is continuous. Then, for almost every $\xi \in \mathbb{R}^d$ we have $e_\varphi(h \xi) \to 0$ as $h \to 0$ if and only if

(3.10) $\hat{\varphi}(0) \neq 0$ and $\hat{\varphi}(2 \pi k) = 0$ for all $k \in \mathbb{Z}^d \setminus \{0\}$.

Equation (3.10) is called the partition of unity property.

Suppose as for some . Let and be bounded sequences in with as . Then
(3.11)