
# Optimal Experimental Design for Constrained Inverse Problems††thanks: Submitted to the editors June 30, 2019. \fundingThis work was partially supported by NSF DMS 1522599 (L. Ruthotto) and NSF DMS 1723005 (J. Chung and M. Chung).

Lars Ruthotto, Department of Mathematics and Computer Science, Emory University, Atlanta, GA (http://www.mathcs.emory.edu/~lruthot/).    Julianne Chung, Department of Mathematics, Computational Modeling and Data Analytics Division, Academy of Integrated Science, Virginia Tech, Blacksburg, VA (http://www.math.vt.edu/people/jmchung/).    Matthias Chung, Department of Mathematics, Computational Modeling and Data Analytics Division, Academy of Integrated Science, Virginia Tech, Blacksburg, VA (http://www.math.vt.edu/people/mcchung/).
###### Abstract

In this paper, we address the challenging problem of optimal experimental design (OED) for constrained inverse problems. We consider two OED formulations that allow us to reduce the experimental costs by minimizing the number of measurements. The first formulation assumes a fine discretization of the design parameter space and uses sparsity-promoting regularization to obtain an efficient design. The second formulation parameterizes the design and seeks optimal placement for these measurements by solving a small-dimensional optimization problem. We consider both problems in a Bayes risk as well as an empirical Bayes risk minimization framework. For the unconstrained inverse state problem, we exploit the closed-form solution of the inner problem to efficiently compute derivatives for the outer OED problem. The empirical formulation does not require an explicit solution of the inverse problem and therefore allows us to integrate constraints efficiently. A key contribution is an efficient optimization method for solving the resulting, typically high-dimensional, bilevel optimization problem using derivative-based methods. To overcome the non-differentiability of active set methods for inequality-constrained problems, we use a relaxed interior point method. To address the growing computational complexity of empirical Bayes OED, we parallelize the computation over the training models. Numerical examples and illustrations from tomographic reconstruction, for various data sets and under different constraints, demonstrate the impact of constraints on the optimal design and highlight the importance of OED for constrained problems.

**Keywords:** experimental design, constrained optimization, tomographic reconstruction, Bayes risk, empirical Bayes risk

**AMS subject classifications:** 62K05, 65F22, 80M50

## 1 Introduction

The ability to optimally configure or design an experimental setup, known as optimal experimental design (OED), can yield significant benefits in a wide range of scientific and engineering applications [4, 32]. Only recently has the focus shifted from OED for well-posed problems to OED for ill-posed inverse problems, where an unresolved challenge is how to efficiently include constraints on the state parameters (i.e., the solutions of the inverse problem) [16, 17]. In this work, we investigate the impact of state constraints on the optimal design, and we propose a unified OED framework for linear inverse problems with linear equality and inequality constraints in the context of tomographic reconstruction.

Before introducing the OED problem, we first describe the discrete inverse problem. Given a design parameter that describes the experiment setup, the observations are given as

$$d(p) = M(p)\, f_{\rm true} + \varepsilon(p), \tag{1}$$

where $f_{\rm true}$ is the exact model that we wish to reconstruct, $M(p)$ is the design-dependent forward operator describing the parameter-to-observation map, and $\varepsilon(p)$ represents additive noise. Some examples of $M(p)$ include the map from image to sinogram in tomography (see, e.g., [21, 27, 28]) or the map onto an observation space of the solution of a partial or ordinary differential equation (see, e.g., [3, 12, 15]). In this work we assume that the measurements depend on the underlying design $p$ of the experiment (e.g., $p$ may determine the positions of the sources and/or detectors or represent the times at which measurements are taken). The noise can come from various sources, e.g., measurement errors, modeling errors, and numerical rounding errors. For simplicity we assume that $\varepsilon(p)$ is normally distributed with zero mean and known symmetric positive definite covariance matrix for any design $p$.

We focus on ill-posed inverse problems where regularization in the form of prior knowledge is required to compute stable, reasonable approximations of $f_{\rm true}$ [20]. Following a Bayesian framework [6, 26], we treat $f_{\rm true}$ as a random variable with a truncated multivariate normal distribution with probability density given as

$$\pi(f_{\rm true}) = \begin{cases} c\, e^{-\frac{\gamma^2}{2}(f_{\rm true}-\mu)^\top \Gamma_f^{-1}(f_{\rm true}-\mu)}, & \text{if } C_{\rm e} f_{\rm true} - c_{\rm e} = 0 \ \text{ and } \ C_{\rm i} f_{\rm true} - c_{\rm i} \geq 0,\\[1mm] 0, & \text{otherwise}, \end{cases}$$

with appropriate normalizing constant $c$ and given positive definite covariance matrix $\gamma^{-2}\Gamma_f$ and mean $\mu$. Here, the pairs $(C_{\rm e}, c_{\rm e})$ and $(C_{\rm i}, c_{\rm i})$ define linear equality and inequality constraints on $f_{\rm true}$, respectively. For example, bound constraints correspond to stacking the identity matrix and its negative in $C_{\rm i}$ and collecting the lower and the negated upper bounds in $c_{\rm i}$ (e.g., non-negativity constraints are reasonable when reconstructing density images). Furthermore, setting $C_{\rm e} = \mathbf{1}^\top$, where $\mathbf{1}$ is a vector of all ones, allows one to fix the integral (or mass) of $f_{\rm true}$, which might be helpful in applications such as emission tomography. A wide range of prior knowledge can be included to estimate $f_{\rm true}$ in this formulation. The unconstrained case reduces to a simple Gaussian distribution on the entire domain [7] and can provide insight in the Bayes formulation.
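As a small illustration, the bound and mass constraints above can be assembled as follows; this is a sketch with hypothetical bounds $0 \le f \le 1$ and an illustrative mass value, since only the stacking pattern is prescribed in the text:

```python
import numpy as np

n = 4
# Bound constraints 0 <= f <= 1 written as C_i f - c_i >= 0: stack +I for the
# lower bounds and -I for the (negated) upper bounds. The bound values 0 and 1
# are illustrative choices.
C_i = np.vstack([np.eye(n), -np.eye(n)])
c_i = np.concatenate([np.zeros(n), -np.ones(n)])
# Mass (equality) constraint: fix the total intensity to an illustrative value.
C_e = np.ones((1, n))
c_e = np.array([2.0])

f = np.array([0.5, 0.5, 1.0, 0.0])          # a feasible image
assert (C_i @ f - c_i >= 0).all()
assert np.allclose(C_e @ f, c_e)
```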

For a fixed design $p$, the maximum a-posteriori (MAP) estimate $\hat f(p)$ provides an estimator for (1) and can be obtained by minimizing the negative log-likelihood of the posterior probability distribution function, i.e.,

$$\hat f(p) = \operatorname*{argmin}_{f}\ \tfrac{1}{2}\|M(p)f - d(p)\|_2^2 + \tfrac{\alpha^2}{2}\|L(f-\mu)\|_2^2 \quad\text{subject to}\quad C_{\rm e}f - c_{\rm e} = 0 \ \text{ and } \ C_{\rm i}f - c_{\rm i} \geq 0, \tag{2}$$

where the regularization operator $L$ and the parameter $\alpha > 0$ encode the prior. In general, obtaining $\hat f(p)$ requires solving a constrained optimization problem, which in this case is a convex quadratic programming problem. In the absence of inequality constraints, the optimal solution to (2) depends linearly on the data, which can be used for efficient optimization of the design parameter; see [16]. In the presence of inequality constraints, however, the optimality condition becomes nonlinear in the data and, thus, solving (2) becomes computationally challenging. We assume that a unique minimizer of the constrained optimization problem exists.

In this work we consider optimal design problems that aim at selecting an experimental design that not only leads to the most accurate estimates of $f_{\rm true}$ but also minimizes experimental costs. Finding a balance between these often conflicting goals can be challenging. For example, in tomographic reconstruction, obtaining more projection data may lead to higher resolution image reconstructions, but more scans may lead to more harmful radiation to the patient, longer scan times, and higher operational costs.

For our OED problem, the goal is to obtain design parameters that minimize the Bayes risk, i.e., the expected value of the mean squared reconstruction errors, while simultaneously minimizing measurement costs. This requires solving an optimization problem of the form,

$$\min_{p \in \Omega}\ \mathbb{E}\,\tfrac{1}{2}\big\|\hat f(p) - f_{\rm true}\big\|_2^2 + R(p), \tag{3}$$

where $\Omega$ is the set of feasible design parameters, $\mathbb{E}$ is the expected value (here, $f_{\rm true}$ and $\varepsilon$ are random variables), and the functional $R$ encodes the measurement costs. Two formulations in the context of tomographic reconstruction will be described in Section 2.

The literature on OED methodologies is vast, with a range of techniques tailored to various optimality criteria in both Bayesian [11] and non-Bayesian settings. Classic OED works include [4, 31, 32], but a recent large effort has been made to develop efficient algorithms for obtaining designs that minimize a loss function of the Fisher information matrix and to new applications, e.g., in biology and exploratory drilling [16, 17, 18, 30]. Many of these approaches follow an empirical Bayes risk minimization framework and exploit the case where the estimated model parameters, here $f_{\rm true}$, depend linearly on the observables. This is not necessarily the case for the constrained inverse problems of interest here. Extensions to nonlinear problems have been considered in [2, 14, 22], and an approach based on consistent Bayesian inference was developed in [5] and used for OED in [37].

The primary goal in many design problems is to enforce sparse sampling in the design parameters. Sparsity-enforcing regularization in the context of OED was considered in [1, 14, 18, 19], but no additional state constraints were considered. The main contribution of this work is the inclusion of state constraints in OED frameworks, which is a critical yet missing piece in the shift from OED for well-posed to OED for ill-posed problems. Two open questions are: (1) How does the inclusion of state constraints impact the optimal design? (2) How does one efficiently incorporate constraints in an OED framework? In this paper, we address both of these questions by developing and investigating a unified OED framework for constrained inverse problems.

An overview of this paper is as follows. In Section 2 we describe two problem formulations for OED in the context of tomographic reconstruction, one of which assumes a fine discretization of the parameter space and enforces sparsity of the design and the other assumes a fixed number of design parameters and seeks optimal locations for these measurements. In Section 3 we investigate various problem formulations for the OED problems that demonstrate the range of problems and constraints that can be addressed. We describe efficient computational approaches in Section 4. In Section 5, we demonstrate the impact of state constraints on the design, which is something that has not been investigated before, and we provide numerical results that demonstrate the effectiveness of our approach. Conclusions are provided in Section 6.

## 2 Motivating Application: Tomographic Reconstruction

Although the methods described in this paper extend to more general problems and applications, we focus our discussion on two problem setups in the context of tomographic reconstruction.

Tomography is a widely used imaging technique where penetrating waves (e.g., x-ray or acoustic waves) traverse an object and are collected or detected [21, 27, 28]. Oftentimes, multiple projections are made as the wave source rotates around the object. Then, given these observed measurements, the goal of the inverse problem is to reconstruct the interior properties (e.g., densities) of the object at different locations; see Figure 1.

Mathematically, we can model the transmission process for one projection via a (sparse) matrix $T$, whose number of rows equals the number of rays. Then the noise-free projection data obtained by rotating the source by $\theta$ degrees clockwise can be modeled as $TR(\theta)f$, where $R(\theta)$ rotates the object by $\theta$ degrees counterclockwise. Next we describe two OED problems.
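The rotate-then-project structure can be sketched on a toy image; this is an illustration only, exact for multiples of 90 degrees (general angles would require an interpolating rotation operator), with the summing matrix $T$ applied implicitly:

```python
import numpy as np

# Toy parallel-beam projections: a projection at angle theta is modeled as
# T R(theta) f, i.e., rotate the image, then sum along rays. Here T sums
# along columns, and np.rot90 provides an exact rotation for multiples of 90.
img = np.zeros((8, 8))
img[2:5, 3:7] = 1.0                          # an axis-aligned rectangle

def project(image, theta):
    k = int(theta) // 90                     # multiples of 90 degrees only
    return np.rot90(image, k).sum(axis=0)    # sum along vertical rays

p0, p90 = project(img, 0), project(img, 90)
assert p0.sum() == p90.sum() == img.sum()    # each projection preserves mass
```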

#### OED Problem A

In the first scenario, we assume that a fine discretization $\theta_1, \dots, \theta_\ell$ of the set of projection angles is given and aim at identifying the angles that provide the most important measurements. To this end, we introduce the design parameters $p$, whose components encode the importance of each measurement. Let $P$ be the matrix whose rows are the standard basis vectors associated with the non-zero components of $p$. Then, $M(p)$ and $d(p)$ are given by

$$M(p) = P\,(I_\ell \otimes T)\begin{bmatrix} R(\theta_1)\\ \vdots\\ R(\theta_\ell)\end{bmatrix}, \quad\text{and}\quad d(p) = P\,d, \tag{4}$$

where $d$ contains the measurements from all $\ell$ projection angles. Notice that $P$ essentially extracts the rows corresponding to the non-zero components of $p$.

Since we would like to keep the number of projection angles low, we incorporate a sparsity-inducing prior on the design parameters. That is, we use the $\ell_1$-norm with a regularization parameter $\beta$ large enough so that we obtain a relaxation of the $\ell_0$ "norm" and enforce sparsity in the design parameters [8, 9]. In the Bayesian framework, this is equivalent to imposing a Laplace prior distribution on $p$ (see, e.g., [23, 25]). Furthermore, we assume $p \geq 0$ (non-negative orthant) such that continuous optimization methods can be used.

In summary, OED problem A can be written as the bilevel optimization problem

$$\begin{aligned} \min_{p \geq 0}\ & \mathbb{E}\,\tfrac{1}{2}\big\|\hat f(p)-f_{\rm true}\big\|_2^2 + \beta\|p\|_1 & (5)\\ \text{subject to}\ & \hat f(p) = \operatorname*{argmin}_f\ \tfrac{1}{2}\|M(p)f-d(p)\|_2^2 + \tfrac{\alpha^2}{2}\|L(f-\mu)\|_2^2 & (6)\\ & \qquad\text{subject to}\ C_{\rm e}f - c_{\rm e} = 0 \ \text{ and } \ C_{\rm i}f - c_{\rm i} \geq 0, \end{aligned}$$

where zero values in $p$ correspond to zeroing out measurements for the corresponding angle. After computing such an $\ell_1$-regularized design, the solution can be used to identify important components, and a second optimization can be done to optimize the weights of the non-zero components (see [16]). While the sparsity of the design generally depends on the choice of $\beta$ (e.g., the larger $\beta$, the fewer non-zero elements in the design vector), a relevant issue in some applications is identifying $\beta$ so that a design with a given number of measurements is obtained.

#### OED Problem B

In the second scenario, we present a method that optimizes the design parameters for a fixed number of measurements. Suppose $p$ contains $\ell$ design parameters; e.g., for tomography it contains the angles corresponding to the locations of the sources. Then, in the inverse problem,

$$M(p) = (I_\ell \otimes T)\,R(p), \quad\text{where}\quad R(p) = \begin{bmatrix} R(p_1)\\ \vdots\\ R(p_\ell)\end{bmatrix}, \tag{7}$$

and $d(p)$ collects the corresponding measurements. Assuming that no additional regularization is required for $p$ (other than the above-described reparameterization), OED problem B reads

$$\begin{aligned} \min_{p \in \Omega}\ & \mathbb{E}\,\tfrac{1}{2}\big\|\hat f(p)-f_{\rm true}\big\|_2^2 & (8)\\ \text{subject to}\ & \hat f(p) = \operatorname*{argmin}_f\ \tfrac{1}{2}\|M(p)f-d(p)\|_2^2 + \tfrac{\alpha^2}{2}\|L(f-\mu)\|_2^2 & (9)\\ & \qquad\text{subject to}\ C_{\rm e}f - c_{\rm e} = 0 \ \text{ and } \ C_{\rm i}f - c_{\rm i} \geq 0. \end{aligned}$$

Note that mathematically OED problems A and B differ in the particular choice of $M(p)$ and in the sparsity regularization term. We refer to problems (5) and (8) as the design problems and problems (6) and (9) as the underlying inverse or state problems.

## 3 Design Problem Formulations

In this section, we consider OED problems (5) and (8) and discuss two problem formulations, each of which may have advantageous properties depending on problem assumptions and constraints.

#### Bayes risk minimization

Notice that the expected values in (5) and (8) are defined in terms of the distributions of $f_{\rm true}$ and $\varepsilon$. For problems where such knowledge is available or can be well approximated (e.g., by the sample mean or sample covariance), we investigate Bayes risk minimization for both design problems. This approach assumes that no inequality constraints are included in the inverse state problems (6) and (9). While the theory can be extended to equality-constrained problems, we consider unconstrained inverse problems for simplicity. Under these assumptions the MAP estimate is given by

$$\hat f(p) = Q(p)^{-1}\left(M(p)^\top\big(M(p)f_{\rm true} + \varepsilon(p)\big) + \alpha^2 L^\top L\mu\right), \tag{10}$$

where $Q(p) = M(p)^\top M(p) + \alpha^2 L^\top L$. Then the design objective (for convenience omitting the design costs expressed by $R$) can be written as

$$\begin{aligned} J(p) &= \tfrac{1}{2}\,\mathbb{E}\left\|Q(p)^{-1}\left(M(p)^\top\big(M(p)f_{\rm true}+\varepsilon(p)\big) + \alpha^2 L^\top L\mu\right) - f_{\rm true}\right\|_2^2\\ &= \tfrac{1}{2}\,\mathbb{E}\left\|\big(Q(p)^{-1}M(p)^\top M(p) - I_n\big)f_{\rm true} + Q(p)^{-1}\big(M(p)^\top\varepsilon(p)+\alpha^2 L^\top L\mu\big)\right\|_2^2. \end{aligned}$$

Let $K(p) = Q(p)^{-1}M(p)^\top M(p) - I_n$ and denote by $\operatorname{tr}(\cdot)$ the trace of a matrix; then, by utilizing the quadratic form property of expectations along with the above assumptions, we get

$$\begin{aligned} 2J(p) =\ & \mathbb{E}\Big(f_{\rm true}^\top K(p)^\top K(p)f_{\rm true} + 2f_{\rm true}^\top K(p)^\top Q(p)^{-1}\big(M(p)^\top\varepsilon(p)+\alpha^2 L^\top L\mu\big)\\ & \quad + \big(M(p)^\top\varepsilon(p)+\alpha^2 L^\top L\mu\big)^\top Q(p)^{-\top}Q(p)^{-1}\big(M(p)^\top\varepsilon(p)+\alpha^2 L^\top L\mu\big)\Big)\\ =\ & \mu^\top K(p)^\top K(p)\mu + \gamma^{-2}\operatorname{tr}\!\big(K(p)^\top K(p)\Gamma_f\big) + 2\alpha^2\mu^\top K(p)^\top Q(p)^{-1}L^\top L\mu\\ & \quad + \sigma^{-2}\operatorname{tr}\!\big(M(p)Q(p)^{-\top}Q(p)^{-1}M(p)^\top\big) + \alpha^4\mu^\top L^\top L Q(p)^{-\top}Q(p)^{-1}L^\top L\mu. \end{aligned}$$

Notice that, since $K(p) = -\alpha^2 Q(p)^{-1}L^\top L$, the first, third, and fifth terms sum up to zero. Thus, we get

$$J(p) = \tfrac{\gamma^{-2}}{2}\big\|K(p)L^{-1}\big\|_F^2 + \tfrac{\sigma^{-2}}{2}\big\|Q(p)^{-1}M(p)^\top\big\|_F^2 = \tfrac{\sigma^{-2}}{2}\big\|M_\alpha^\dagger(p)\big\|_F^2,$$

where $M_\alpha(p)$ is the matrix obtained by stacking $M(p)$ on top of $\alpha L$, the $\dagger$ in the last equation denotes the Moore-Penrose pseudoinverse, and $\|\cdot\|_F$ denotes the Frobenius norm. Recall that the goal of OED is to find design parameters that minimize $J$. Thus, the significance of this result is that, in the Bayes formulation with an unconstrained state problem, the optimal design parameters will correspond to a regularized coefficient matrix that has the smallest pseudoinverse in the Frobenius norm sense (with an additional regularization term for OED problem A). Due to the nonlinear dependence on the design parameters, the design problem does not admit a closed-form solution; however, numerical methods and computational simplifications can be used to obtain optimal parameters (see Section 4).
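The closed-form Bayes risk derived in this section can be verified numerically. The following sketch checks the identity for random $M$ and an invertible $L$, under the assumption $\alpha = \gamma/\sigma$ (not stated explicitly in the text, but consistent with the cancellation in the derivation); all sizes and values are illustrative:

```python
import numpy as np
rng = np.random.default_rng(2)

# Check: (gamma^-2/2)||K L^-1||_F^2 + (sigma^-2/2)||Q^-1 M^T||_F^2
#      = (sigma^-2/2)||pinv([M; alpha*L])||_F^2,   with alpha = gamma/sigma.
n, gamma, sigma = 5, 2.0, 0.1
alpha = gamma / sigma
M = rng.standard_normal((8, n))
L = rng.standard_normal((n, n)) + 3 * np.eye(n)   # invertible regularizer
Q = M.T @ M + alpha**2 * L.T @ L
K = np.linalg.solve(Q, M.T @ M) - np.eye(n)       # K(p) from the text
lhs = 0.5 / gamma**2 * np.sum((K @ np.linalg.inv(L))**2) \
    + 0.5 / sigma**2 * np.sum(np.linalg.solve(Q, M.T)**2)
rhs = 0.5 / sigma**2 * np.sum(np.linalg.pinv(np.vstack([M, alpha * L]))**2)
assert np.isclose(lhs, rhs)
```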

#### Empirical Bayes risk minimization

For problems where distributions of $f_{\rm true}$ and $\varepsilon$ may be unknown or not obtainable, we consider empirical Bayes risk design problems, where training or calibration data are used to approximate the expected value. Such data are often readily available and, in the case of tomography, provide a clear understanding of how images may look. Stochastic programming methods such as sample average approximation (SAA) or stochastic approximation (SA) can be used to incorporate these training data in OED problems A and B. Solving stochastic optimization problems often requires computationally intensive techniques in order to obtain a good approximation of the expected value [10, 13, 34]. Here we consider an SAA approach, but remark that for very large numbers of training data, this approach may not be feasible. However, a benefit of this formulation, compared to the Bayes risk minimization procedure described above, is the ability to incorporate constraints on the state problem and take advantage of existing constrained optimization algorithms.

We follow the empirical Bayes approach in [14, 16, 17] that treats given training data as samples from the distribution and uses the sample mean to approximate the expected value. Note that, due to the presence of the regularization, the estimator given by (2) will be biased unless the true solution is in the nullspace of $L$. Thus, design choices cannot solely be based on properties of the forward operator (see also the discussion in [16]).

Assume that we are given a set of training data that consists of true models $f_{\rm true}^{(i)}$, $i = 1, \dots, N$. For a fixed design parameter $p$, we can simulate datasets using (1) and obtain reconstructions $\hat f^{(i)}(p)$ by solving the constrained inversion problem (2) using those data. Then, we can approximate the Bayes risk OED problem (3) with the following empirical Bayes risk OED problem,

$$\min_{p \in \Omega}\ J_N(p) = \frac{1}{N}\sum_{i=1}^N \tfrac{1}{2}\big\|\hat f^{(i)}(p) - f^{(i)}_{\rm true}\big\|_2^2 + R(p), \quad\text{subject to $\hat f^{(i)}(p)$ solving (2)}, \quad i = 1,\dots,N. \tag{11}$$

The bilevel optimization problem (11) could be solved as one large constrained optimization problem. Instead, we follow a technique commonly used in the PDE-constrained optimization literature and eliminate the constraints by solving for $\hat f^{(i)}(p)$, $i = 1, \dots, N$, yielding the reduced problem. Although both approaches have their merits, the reduced problem might be more attractive here. Furthermore, the reduced problem allows for parallel computing, since the constraints can be eliminated independently.
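As a small illustration of the SAA idea for the unconstrained state problem (where the closed-form MAP estimate is available), the following sketch approximates the Bayes risk by a sample average over simulated training models; all dimensions and parameter values are illustrative:

```python
import numpy as np
rng = np.random.default_rng(1)

# SAA sketch: approximate E[0.5*||fhat - f_true||^2] by a sample average over
# N training models and simulated noisy data. Here the prior is N(0, I) and
# fhat is the unconstrained Tikhonov MAP estimate; M, alpha, sigma are
# illustrative choices.
n, m, N, alpha, sigma = 6, 10, 500, 0.5, 0.1
M = rng.standard_normal((m, n))
Q = M.T @ M + alpha**2 * np.eye(n)

risks = []
for _ in range(N):
    f_true = rng.standard_normal(n)               # training model (sample)
    d = M @ f_true + sigma * rng.standard_normal(m)
    fhat = np.linalg.solve(Q, M.T @ d)            # closed-form MAP estimate
    risks.append(0.5 * np.sum((fhat - f_true)**2))
J_N = np.mean(risks)                              # empirical Bayes risk
assert J_N > 0
```

For this unconstrained linear case the sample average can be compared against the exact risk $\tfrac12(\|K\|_F^2 + \sigma^2\|Q^{-1}M^\top\|_F^2)$, which is how the sketch can be sanity-checked.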

Assuming that $\Omega$ is closed and convex, (11) can be solved, for example, using a projected steepest descent or projected Gauss-Newton method (see also [17]). In both cases, we need to compute the gradient of the design objective function,

$$\nabla_p J_N(p) = \frac{1}{N}\sum_{i=1}^N \nabla_p\hat f^{(i)}\,\big(\hat f^{(i)}(p) - f^{(i)}_{\rm true}\big) + \nabla_p R, \tag{12}$$

where $\nabla_p\hat f^{(i)}$ contains the derivatives of the reconstructed image with respect to the design parameters. For OED problem A, where $R(p) = \beta\|p\|_1$ and $p \geq 0$, computing the gradient for the regularization term is straightforward. However, the more challenging computation involves the sensitivity matrix $\nabla_p\hat f^{(i)}$, which is solver-dependent and outlined in the following section.

## 4 Computational approaches for OED

In this section, we describe efficient computational approaches for computing solutions to OED problems A and B.

#### Bayes risk minimization

We begin with computational simplifications that can be used for the Bayes risk minimization problem. First, we consider the case where $L = I_n$ (standard Tikhonov regularization). Assume $M(p)$ has rank $r$ and let $M(p) = U(p)\Sigma(p)V(p)^\top$ be the SVD, where the non-zero diagonal elements of $\Sigma(p)$ are $\sigma_1(p) \geq \cdots \geq \sigma_r(p) > 0$. Then, with some algebraic manipulations, it can be shown that the design objective (again omitting the regularization term for convenience) can be written as

$$J(p) = \tfrac{\sigma^{-2}}{2}\left\|\big(\Sigma(p)^\top\Sigma(p)+\alpha^2 I_n\big)^{-1}\begin{bmatrix}\Sigma(p)^\top & \alpha I_n\end{bmatrix}\right\|_F^2 = \tfrac{\sigma^{-2}}{2}\left(\sum_{i=1}^r \frac{1}{\sigma_i(p)^2+\alpha^2} + \frac{n-r}{\alpha^2}\right).$$

Note that the second term should not be ignored, since the rank of $M(p)$ may depend on $p$.
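A quick numerical sanity check of this formula, using a random rank-deficient operator (here $r = 4 < n = 5$, generically of full row rank) and illustrative values of $\alpha$ and $\sigma$:

```python
import numpy as np
rng = np.random.default_rng(0)

# The closed form via singular values must match the Frobenius norm of the
# pseudoinverse of the stacked matrix [M; alpha*I]; sigma, alpha illustrative.
n, alpha, sigma = 5, 0.3, 0.1
M = rng.standard_normal((4, n))              # rank r = 4 < n = 5
s = np.linalg.svd(M, compute_uv=False)       # generically all non-zero
r = len(s)
J_svd = 0.5 / sigma**2 * (np.sum(1.0 / (s**2 + alpha**2)) + (n - r) / alpha**2)
M_alpha = np.vstack([M, alpha * np.eye(n)])
J_pinv = 0.5 / sigma**2 * np.sum(np.linalg.pinv(M_alpha)**2)
assert np.isclose(J_svd, J_pinv)
```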

For the general case where $L \neq I_n$, the generalized SVD (GSVD) could be used to simplify the objective function as

$$J(p) = \tfrac{\sigma^{-2}}{2}\left\|X(p)^{-1}\big(\Sigma(p)^\top\Sigma(p)+\alpha^2\Psi(p)^\top\Psi(p)\big)^{-1/2}\right\|_F^2,$$

where $M(p) = U(p)\Sigma(p)X(p)$ and $L = V(p)\Psi(p)X(p)$, with $U(p)$ and $V(p)$ orthogonal matrices, $\Sigma(p)$ and $\Psi(p)$ diagonal matrices, and $X(p)$ a nonsingular matrix [36]. A simpler approach for moderate problem sizes is to compute the singular values of $M_\alpha(p)$, denoted by $\sigma_{\alpha,i}(p)$, in which case

$$J(p) = \tfrac{\sigma^{-2}}{2}\sum_{i=1}^n \big(\sigma_{\alpha,i}(p)\big)^{-2}.$$

Next, we exploit the specific structure of OED problem A to analyze the dependence on $p$. Assume that all $p_i \neq 0$ so that $P = I$; then

$$M(p) = \big(\operatorname{diag}(p)\otimes I_{n_r}\big)\begin{bmatrix} TR(\theta_1)\\ \vdots\\ TR(\theta_\ell)\end{bmatrix} = \begin{bmatrix} p_1\,TR(\theta_1)\\ \vdots\\ p_\ell\,TR(\theta_\ell)\end{bmatrix}.$$

The singular values of $M(p)$ are given in the following lemma.

###### Lemma 1

Let $p_i \neq 0$ for $i = 1, \dots, \ell$; then the unsorted squared singular values of $M(p)$ are given by $\sum_{i=1}^{\ell} p_i^2\,\sigma_j\big(TR(\theta_i)\big)^2$, where $\sigma_j(TR(\theta_i))$ is the $j$-th singular value of $TR(\theta_i)$.

###### Proof

The result follows from the fact that the eigenvalues of $M(p)^\top M(p) = \sum_{i=1}^{\ell} p_i^2\, R(\theta_i)^\top T^\top T R(\theta_i)$ are given as the corresponding sums.

Thus, if we let $q = p \odot p$ (the entrywise squares of $p$) and define $\Pi$ such that the $i$-th column of $\Pi$ contains the squares of the singular values of $TR(\theta_i)$, then the squares of the singular values of $M(p)$ are given in the vector $\Pi q$. This formulation enables us to compute the gradient as

$$\nabla_p J(p) = -\sigma^{-2}\operatorname{diag}(p)\,\Pi^\top h_\alpha + \beta\operatorname{sign}(p), \tag{13}$$

where $h_\alpha$ is the vector whose $k$-th element is $\big((\Pi(p\odot p))_k + \alpha^2\big)^{-2}$. Notice that a critical point occurs if $\sigma^{-2}\operatorname{diag}(p)\,\Pi^\top h_\alpha = \beta\operatorname{sign}(p)$ (ignoring the non-differentiability of $\|p\|_1$ at $0$), but this system does not admit a closed-form solution. Thus, to obtain optimal design parameters for OED problem A, a nonlinear solver must be used.
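To sanity-check the gradient formula (13), the following sketch builds a toy problem in which each $TR(\theta_i)$ is diagonal, so the squared singular values of $M(p)$ are exactly $\Pi(p \odot p)$ by construction; all parameter values are illustrative:

```python
import numpy as np
rng = np.random.default_rng(3)

# Finite-difference check of (13) for
# J(p) = (sigma^-2/2) * sum_k 1/((Pi (p*p))_k + alpha^2) + beta*||p||_1.
ell, n, alpha, sigma, beta = 3, 4, 0.5, 0.2, 0.01
D = rng.uniform(0.5, 2.0, size=(ell, n))       # D[i] = diagonal of T R(theta_i)
Pi = (D**2).T                                  # column i: squared sing. values

def J(p):
    sv2 = Pi @ (p * p)                         # squared singular values of M(p)
    return 0.5 / sigma**2 * np.sum(1.0 / (sv2 + alpha**2)) + beta * np.sum(np.abs(p))

def gradJ(p):
    h = 1.0 / (Pi @ (p * p) + alpha**2)**2     # the vector h_alpha
    return -1.0 / sigma**2 * np.diag(p) @ Pi.T @ h + beta * np.sign(p)

p = rng.uniform(0.5, 1.5, size=ell)            # strictly positive design
g_fd = np.array([(J(p + 1e-6 * e) - J(p - 1e-6 * e)) / 2e-6 for e in np.eye(ell)])
assert np.allclose(gradJ(p), g_fd, atol=1e-4)
```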

#### Empirical Bayes risk minimization

In the absence of a closed form expression for the MAP estimate (e.g., due to the presence of inequality constraints) we follow an SAA approach and use gradient-based optimization methods to solve the empirical Bayes risk problem (11). To ensure differentiability, we propose using interior point methods for solving the constrained inverse state problems (6) and (9). Although active set methods could also be used for solving problems such as (2), we prefer interior point methods because the ultimate goal is fast optimization of the design, and interior point methods enable fast sensitivity computations, i.e., methods for computing the derivatives of the reconstructed solution with respect to the (possibly relaxed) design parameters. Before deriving the sensitivities we describe the interior point method used in our experiments, which is a standard primal dual interior point method for quadratic programming based on Mehrotra’s predictor-corrector approach; a more detailed description can be found, e.g., in [29, Ch.16].

We first rewrite the constrained optimization problem as a quadratic program

$$\min_f\ \tfrac{1}{2} f^\top Q(p) f + b(p)^\top f \quad\text{subject to}\quad C_{\rm e}f - c_{\rm e} = 0, \quad C_{\rm i}f - c_{\rm i} \geq 0, \tag{14}$$

where $Q(p) = M(p)^\top M(p) + \alpha^2 L^\top L$ and $b(p) = -\big(M(p)^\top d(p) + \alpha^2 L^\top L\mu\big)$. To deal with the linear inequality constraints, we introduce slack variables $s \geq 0$, yielding the equivalent problem

$$\min_{f,s}\ \tfrac{1}{2} f^\top Q(p) f + b(p)^\top f \quad\text{subject to}\quad C_{\rm e}f - c_{\rm e} = 0, \quad C_{\rm i}f - c_{\rm i} - s = 0, \quad s \geq 0. \tag{15}$$

This is a convex quadratic optimization problem, whose objective function is strictly convex if $Q(p)$ is positive definite. The Lagrangian is given by

$$\mathcal{L}(f,\lambda_{\rm e},s,\lambda_{\rm i}) = \tfrac{1}{2} f^\top Q(p) f + b(p)^\top f - \lambda_{\rm e}^\top\big(C_{\rm e}f - c_{\rm e}\big) - \lambda_{\rm i}^\top\big(C_{\rm i}f - c_{\rm i} - s\big). \tag{16}$$

Necessary and sufficient conditions for a global minimizer are the KKT conditions. Here, we consider the perturbed conditions for some centrality parameter $\delta \geq 0$,

$$F(f,\lambda_{\rm e},s,\lambda_{\rm i},\delta,\mu) = \begin{bmatrix} Q(p)f + b(p) - C_{\rm e}^\top\lambda_{\rm e} - C_{\rm i}^\top\lambda_{\rm i}\\ C_{\rm e}f - c_{\rm e}\\ C_{\rm i}f - c_{\rm i} - s\\ S\Lambda_{\rm i}e - \delta\mu e \end{bmatrix} = 0, \qquad \lambda_{\rm i}, s \geq 0. \tag{17}$$

Here, $S = \operatorname{diag}(s)$ and $\Lambda_{\rm i} = \operatorname{diag}(\lambda_{\rm i})$, $e$ is a vector of all ones, and $\mu = s^\top\lambda_{\rm i}/m_{\rm i}$ is a complementarity measure (it is zero at a KKT point), where $m_{\rm i}$ is the number of inequality constraints. In interior point methods, the goal is to iteratively approximate a root of $F$ using Newton's method and some line search that ensures strict positivity of the slack and the Lagrange multiplier associated with the inequality constraints (i.e., $s > 0$ and $\lambda_{\rm i} > 0$).

At the $k$-th iteration of the interior point method, we denote the current iterates as $(f^k, \lambda_{\rm e}^k, s^k, \lambda_{\rm i}^k)$, linearize the optimality conditions (17), and solve the linear system

$$\begin{bmatrix} \nabla_f F & \nabla_{\lambda_{\rm e}}F & \nabla_s F & \nabla_{\lambda_{\rm i}}F \end{bmatrix}\begin{bmatrix} \Delta f\\ \Delta\lambda_{\rm e}\\ \Delta s\\ \Delta\lambda_{\rm i}\end{bmatrix} = -\begin{bmatrix} r_{\rm d}(p)\\ r_{\rm e}\\ r_{\rm i}\\ S^k\Lambda_{\rm i}^k e - \delta\mu e\end{bmatrix}, \tag{18}$$

where

$$\begin{bmatrix} \nabla_f F & \nabla_{\lambda_{\rm e}}F & \nabla_s F & \nabla_{\lambda_{\rm i}}F \end{bmatrix} = \begin{bmatrix} Q(p) & -C_{\rm e}^\top & 0 & -C_{\rm i}^\top\\ C_{\rm e} & 0 & 0 & 0\\ C_{\rm i} & 0 & -I_{m_{\rm i}} & 0\\ 0 & 0 & \Lambda_{\rm i} & S \end{bmatrix}. \tag{19}$$

Here, the dual residual is

$$r_{\rm d}(p) = Q(p)f^k + b(p) - C_{\rm e}^\top\lambda_{\rm e}^k - C_{\rm i}^\top\lambda_{\rm i}^k$$

and the primal residuals for the equality and inequality constraints are, respectively,

$$r_{\rm e} = C_{\rm e}f^k - c_{\rm e} \quad\text{and}\quad r_{\rm i} = C_{\rm i}f^k - c_{\rm i} - s^k.$$

The crucial components of the primal-dual interior point method include an efficient linear solver for (18), the step length selection (here, we use the largest step size in $(0,1]$ that ensures that both $s$ and $\lambda_{\rm i}$ remain sufficiently far from the boundary), and the choice of the centrality parameter $\delta$. For the latter, we use Mehrotra's predictor-corrector approach as described in [29]. First, in the predictor step, we compute an affine scaling step (i.e., we solve (18) for $\delta = 0$) and perform a line search. Then, in the corrector step, we compute the final direction by solving (18) for

$$\delta = \left(\frac{\big(s + \alpha^{\rm aff}\Delta s^{\rm aff}\big)^\top\big(\lambda_{\rm i} + \alpha^{\rm aff}\Delta\lambda_{\rm i}^{\rm aff}\big)}{m_{\rm i}\,\mu}\right)^3.$$

Thus, each iteration requires two linear solves.
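To make these steps concrete, here is a minimal sketch of a Mehrotra-style predictor-corrector method for the special case $C_{\rm i} = I$, $c_{\rm i} = 0$ (non-negativity) with no equality constraints, so the slack is $s = f$; it is an illustration of the scheme, not the implementation used in the paper:

```python
import numpy as np

def qp_nonneg_ipm(Q, b, tol=1e-9, max_iter=50):
    """Sketch: Mehrotra-style predictor-corrector interior point method for
    min_f 0.5*f'Qf + b'f subject to f >= 0 (slack s = f; lam are the
    multipliers of the non-negativity constraints)."""
    n = len(b)
    f, lam = np.ones(n), np.ones(n)

    def newton_step(r_d, r_comp, f, lam):
        # linearized KKT system, cf. the Jacobian structure in the text
        KKT = np.block([[Q, -np.eye(n)], [np.diag(lam), np.diag(f)]])
        d = np.linalg.solve(KKT, -np.concatenate([r_d, r_comp]))
        return d[:n], d[n:]

    def step_len(v, dv):
        # largest step keeping v strictly positive (fraction to the boundary)
        neg = dv < 0
        return min(1.0, 0.99 * np.min(-v[neg] / dv[neg])) if neg.any() else 1.0

    for _ in range(max_iter):
        r_d = Q @ f + b - lam                 # dual residual
        mu = f @ lam / n                      # complementarity measure
        if np.linalg.norm(r_d) < tol and mu < tol:
            break
        # predictor (affine scaling) step: centrality parameter delta = 0
        df_a, dl_a = newton_step(r_d, f * lam, f, lam)
        a_aff = min(step_len(f, df_a), step_len(lam, dl_a))
        mu_aff = (f + a_aff * df_a) @ (lam + a_aff * dl_a) / n
        delta = (mu_aff / mu) ** 3            # Mehrotra's centrality parameter
        # corrector step with second-order and centering terms
        df, dl = newton_step(r_d, f * lam + df_a * dl_a - delta * mu, f, lam)
        a = min(step_len(f, df), step_len(lam, dl))
        f, lam = f + a * df, lam + a * dl
    return f, lam

# usage: a small non-negative least-squares instance, Q = A'A, b = -A'd
rng = np.random.default_rng(0)
A, d_obs = rng.standard_normal((8, 5)), rng.standard_normal(8)
f, lam = qp_nonneg_ipm(A.T @ A + 0.1 * np.eye(5), -A.T @ d_obs)
assert f.min() > 0 and lam.min() > 0          # iterates stay strictly interior
assert np.abs(f * lam).max() < 1e-6           # complementarity at the solution
```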

#### Computing sensitivities

In order to enable fast optimization of the design parameters (i.e., optimization for the outer problem), we need to differentiate the solutions of the quadratic program (15) with respect to the design parameters. To do this, we use implicit differentiation of the optimality condition (17). Let $(\hat f(p), \lambda_{\rm e}(p), s(p), \lambda_{\rm i}(p))$ be the computed solution to the quadratic programming problem. Then, we are interested in computing the sensitivity matrix $J_{\hat f}(p)$ such that $\hat f(p + \Delta p) \approx \hat f(p) + J_{\hat f}(p)\,\Delta p$ for all sufficiently small perturbations $\Delta p$.

To this end, we differentiate both sides of (17) around the current KKT point with respect to $p$ and obtain

$$\begin{aligned} 0 &= \nabla_p F\big(\hat f(p), \lambda_{\rm e}(p), s(p), \lambda_{\rm i}(p), \delta, \mu\big)\\ &= \nabla_p F\big(\hat f(p), \lambda_{\rm e}, s, \lambda_{\rm i}, \delta, \mu\big) + \begin{bmatrix} \nabla_{\hat f}F & \nabla_{\lambda_{\rm e}}F & \nabla_s F & \nabla_{\lambda_{\rm i}}F \end{bmatrix}\begin{bmatrix} J_{\hat f}(p)\\ J_{\lambda_{\rm e}}(p)\\ J_s(p)\\ J_{\lambda_{\rm i}}(p)\end{bmatrix}. \end{aligned}$$

Assuming that the linear system in (19) is invertible (that is, there is a unique KKT point) we obtain

$$\begin{bmatrix} J_{\hat f}(p)\\ J_{\lambda_{\rm e}}(p)\\ J_s(p)\\ J_{\lambda_{\rm i}}(p)\end{bmatrix} = -\begin{bmatrix} Q(p) & -C_{\rm e}^\top & 0 & -C_{\rm i}^\top\\ C_{\rm e} & 0 & 0 & 0\\ C_{\rm i} & 0 & -I_{m_{\rm i}} & 0\\ 0 & 0 & \Lambda_{\rm i} & S \end{bmatrix}^{-1}\begin{bmatrix} \nabla_p\big(Q(p)\hat f(p) + b(p)\big)\\ 0\\ 0\\ 0\end{bmatrix}.$$

In the case of OED problem A with all $p_i \neq 0$ (so that $P = I$), we have

$$\nabla_p\big(Q(p)f + b(p)\big) = 2\begin{bmatrix} R(\theta_1)^\top T^\top & & \\ & \ddots & \\ & & R(\theta_\ell)^\top T^\top \end{bmatrix}\big(M(p)f - d(p)\big).$$

Computing the complete sensitivity matrix of interest requires one linear solve per design parameter and thus might be infeasible when the number of design parameters is large. Thus, as is also common in PDE parameter estimation [15], we provide matrix-free implementations that compute matrix-vector products $J_{\hat f}(p)\,v$ and $J_{\hat f}(p)^\top w$ at the cost of one linear solve per training sample.
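As an illustration of the implicit differentiation above, the following sketch computes the sensitivity of an equality-constrained reconstruction with respect to a scalar design parameter and checks it against finite differences; the design-to-operator map $M(p) = p\,M_0$ is a hypothetical stand-in, not from the text:

```python
import numpy as np
rng = np.random.default_rng(4)

# Sensitivity of fhat(p) via implicit differentiation of the KKT system,
# for the equality-constrained case (no inequalities, so the system is linear).
n, m, alpha = 5, 8, 0.7
M0 = rng.standard_normal((m, n))
d = rng.standard_normal(m)
Ce = np.ones((1, n)); ce = np.array([1.0])     # mass constraint 1^T f = 1

def solve_kkt(p):
    Q = p**2 * (M0.T @ M0) + alpha**2 * np.eye(n)
    b = -p * (M0.T @ d)
    KKT = np.block([[Q, -Ce.T], [Ce, np.zeros((1, 1))]])
    sol = np.linalg.solve(KKT, np.concatenate([-b, ce]))
    return sol[:n], Q

p = 1.3
f, Q = solve_kkt(p)
# derivative of the stationarity residual Q(p) f + b(p) at fixed f
dres_dp = 2 * p * (M0.T @ M0) @ f - M0.T @ d
KKT = np.block([[Q, -Ce.T], [Ce, np.zeros((1, 1))]])
sens = np.linalg.solve(KKT, np.concatenate([-dres_dp, np.zeros(1)]))[:n]
# compare with a central finite difference of the reconstruction
f_fd = (solve_kkt(p + 1e-6)[0] - solve_kkt(p - 1e-6)[0]) / 2e-6
assert np.allclose(sens, f_fd, atol=1e-5)
```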

For some applications, and in particular for OED problem B, it may be beneficial to re-parameterize the angles contained in $p$ by taking a reference angle $\delta_1$ and non-negative increments $\delta_2, \dots, \delta_n$, such that $p_j = \sum_{k=1}^{j}\delta_k$. This results in the following additional constraints

$$\delta_1 - a \geq 0, \qquad \delta_j \geq 0 \quad\text{for } j = 2,\dots,n, \qquad\text{and}\qquad b - \sum_{j=1}^n \delta_j \geq 0,$$

where $a$ is the lower and $b$ the upper bound for the angles, and the derivatives are given by

$$\frac{\partial p}{\partial \delta} = \begin{bmatrix} 1 & 0 & 0 & \dots & 0\\ 1 & 1 & 0 & \dots & 0\\ \vdots & & \ddots & & \vdots\\ 1 & 1 & 1 & \dots & 1 \end{bmatrix}.$$
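In code, this reparameterization and its Jacobian are simply a cumulative sum and a lower-triangular matrix of ones; the bounds $a$, $b$ and the increment values below are illustrative:

```python
import numpy as np

# Angle reparameterization p = cumsum(delta) with illustrative bounds a, b.
nA, a, b = 4, 10.0, 170.0
delta = np.array([20.0, 30.0, 40.0, 50.0])
p = np.cumsum(delta)                       # p_j = delta_1 + ... + delta_j
dp_ddelta = np.tril(np.ones((nA, nA)))     # Jacobian dp/ddelta
assert np.allclose(dp_ddelta @ delta, p)
# the additional constraints from the text are satisfied for this delta
assert delta[0] - a >= 0 and (delta[1:] >= 0).all() and b - delta.sum() >= 0
```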

## 5 Numerical Experiments

In this section, we provide various examples for both OED problems A and B, using the tomography reconstruction problems described in Section 2. We consider various assumptions (including different training data sets) and investigate how different constraints on the imaging state problem may affect the optimal design parameters.

### 5.1 Implementation

Our numerical framework for minimizing the empirical Bayes risk is implemented as an add-on to the parameter estimation package jInv [33]. To benefit from jInv's existing capabilities, the routines for solving the lower-level problem and computing sensitivities are implemented as a module, extending the abstract forward problem type. In particular, this allows parallel and distributed-memory evaluation of the constrained inverse problems for different training data. The design problem is formulated and solved using the misfit, regularization, and optimization methods provided in jInv. Our module will be made freely available.

### 5.2 Data sets and Constraints

We consider four sets of training data, shown in Figure 2 and described below. To obtain each projection, we use model (1) with parallel rays, where the number of rays equals the image size in pixels.

Rectangles:

The data set consists of binary images showing randomly spaced and sized rectangles. The rectangles' edges are aligned with the coordinate axes. Note that, given the knowledge of the binary constraints, exact reconstruction is possible using two measurements whose angles are aligned with the coordinate axes.

Pentagons:

A second data set consists of binary images of pentagons. As before, the training data consist of discrete images. The location and the size of the pentagons change randomly from image to image; however, the angles between the edges and the coordinate axes are the same. Thus, the object can be reconstructed exactly with appropriate constraints and five projection angles.

Shapes:

As a non-binary example, we consider a synthetically generated dataset containing discrete images that are obtained by evaluating a random smooth function on a randomly chosen supporting set.

Phantom:

As a more realistic example, we generate a training data set consisting of gray-valued images resembling random variations of the Modified Shepp-Logan phantom [24, 35]. The Shepp-Logan phantom resembles basic head characteristics: exterior, skull, left and right ventricles, as well as tumors. We randomly varied head features, such as skull size, head and ventricle size and orientation, intensity, and number of tumors. Our MATLAB implementation will be publicly available. Here, we discretize the data on a regular grid.

These data sets can be used directly in empirical Bayes risk minimization OED problems for approximating the expected value, as discussed in Section 3. Furthermore, we can treat the images as samples from some underlying distributions and either use a prior assumption on the covariance or a sample approximation to solve the Bayes OED problems.

The main interest of this work is to investigate how different constraints on the imaging problem may affect the optimal design parameters. Therefore, we consider four common choices of constraints in (2).

unconstrained:

No constraints are imposed on the discrete image $f$. In this case, the optimality condition of (2) is a linear system with a positive definite matrix and (for small-scale problems) is solved using a Cholesky factorization. For this case, we compare the optimal Bayes design with the optimal empirical Bayes design.

equality constraints:

We assume that the sum of the intensity values equals a known constant. To this end, we set $C_{\rm e} = \mathbf{1}^\top$ and let $c_{\rm e}$ be the known total intensity. Similar to the previous case, the optimality condition of (2) is linear and requires solving a saddle point problem. To this end, we use an LU factorization.

non-negativity:

A physically meaningful constraint in many imaging problems is to enforce reconstructed intensity values to be non-negative. In our formulation, we use $C_{\rm i} = I_n$ and $c_{\rm i} = 0$.

bound constraints:

In addition to non-negativity, we enforce an upper bound and restrict the intensity values of each pixel by stacking $\pm I_n$ in $C_{\rm i}$ and collecting the lower and negated upper bounds in $c_{\rm i}$.

Since the most plausible choice of constraint will depend on the particular application, our framework supports a variety of constraints. Due to the relative simplicity of OED problem B (e.g., no additional design regularization and potentially far fewer parameters to optimize), we begin with some investigations on the impact of lower-level constraints on the overall optimal experimental design.

### 5.3 OED Results for Problem B

For simplicity and for visualization purposes, we consider OED problem B with two projection angles; that is, we aim to find the two projection angles for which the resulting reconstructions minimize the mean squared error. Note that computing reconstructions in this case is highly under-determined.

We first investigate the OED objective function for projection angles sampled in one-degree intervals. Note that no regularization is included in the outer optimization problem of OED problem B, and thus the objective function measures the mean squared error of the reconstruction. We begin with no constraints on the inner problem and provide the Bayes risk values in Figure 3 for various covariance matrices. First, we provide the Bayes risk for an uncorrelated prior and mark the location of its minimum. Then, for both the rectangles and phantoms data sets, we generated 1,000,000 sample images and computed the sample covariance matrices (with a small regularization to ensure positive definiteness). Using the computed factors, we computed Bayes risks and provide them along with their corresponding minima in Figure 3. In general, we see that designs with redundant projection angles correspond to higher Bayes risk, which seems intuitive for tomography. However, it is evident that the choice of the prior covariance does affect the optimal design.

Next, we provide the empirical Bayes risk (i.e., the values of the sampled objective function) for all four training data sets and four options for constraints in Figure 4. The two best designs are marked with red dots. For all training data, the design improvement is most pronounced for the box-constrained lower-level problem. Further, for the pentagon example, the global optima are obtained at different points for the unconstrained/equality-constrained and the inequality-constrained problems.

We investigate the impact of the value of the regularization parameter, , on the reconstruction error for the optimal designs identified in the previous steps. To this end, we compute the MSE for 20 logarithmically equally spaced values of between and , using the optimal projection angles determined in the previous step. We see that for the unconstrained and equality-constrained problems, the MSE is less sensitive to the choice of and thus the global minima are difficult to identify. For the inequality-constrained problems, substantial improvement of the design can be obtained. For all data sets, we found that the box constraints resulted in smaller MSE values overall. Since a good choice of may not be available a priori, one could consider an approach that incorporates as a design parameter, so that the optimal design also includes optimizing for .
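A sweep of this kind can be sketched in a few lines: evaluate the MSE of the Tikhonov reconstruction on a logarithmic grid of regularization values and pick the minimizer. The operator, training data, and grid endpoints below are illustrative placeholders (the paper's endpoints were not recovered here).

```python
# Toy sweep over 20 log-spaced regularization parameters.
import numpy as np

rng = np.random.default_rng(2)
n = 16
A = rng.standard_normal((8, n))           # stand-in forward operator
training = [rng.uniform(0.0, 1.0, n) for _ in range(50)]

def mse(lam):
    # Closed-form Tikhonov reconstruction matrix for this lam.
    H = np.linalg.inv(A.T @ A + lam * np.eye(n)) @ A.T
    return np.mean([np.sum((H @ (A @ x) - x) ** 2) for x in training])

lams = np.logspace(-6, 2, 20)             # assumed grid endpoints
errors = [mse(l) for l in lams]
lam_best = lams[int(np.argmin(errors))]
```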

### 5.4 OED Results for Problem A

Given the training data and a fine discretization of the tomography operator, we generate data by evaluating the forward problem and adding 0.1% Gaussian white noise. Then, the OED problem is solved twice. First, we reduce the number of rows in by solving OED problem A with an -regularizer on the measurement parameters. Second, we adjust the values of the non-zero weights by re-solving the OED problem without regularization. This procedure resembles the method introduced in [16].
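The two-stage sparsify-then-reweight idea can be illustrated on a generic quadratic surrogate for the design objective. In this hedged sketch, stage 1 runs proximal gradient (ISTA) with an l1 penalty to drive most measurement weights to zero, and stage 2 re-optimizes only the surviving weights without regularization, removing the shrinkage bias. The model `f(w) = 0.5||M w - d||^2` and all data are hypothetical stand-ins for the paper's design objective.

```python
# Two-stage sparse weight selection on a toy quadratic objective.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(3)
M = rng.standard_normal((40, 25))
w_true = np.zeros(25)
w_true[[2, 7, 19]] = [1.0, 0.5, 2.0]      # sparse ground-truth weights
d = M @ w_true

# Stage 1: ISTA for min_w 0.5||M w - d||^2 + alpha ||w||_1.
alpha = 5.0
L = np.linalg.norm(M, 2) ** 2             # Lipschitz constant of the gradient
w = np.zeros(25)
for _ in range(500):
    w = soft_threshold(w - M.T @ (M @ w - d) / L, alpha / L)

# Stage 2: re-solve the unregularized problem on the detected support.
support = np.flatnonzero(np.abs(w) > 1e-8)
w_final = np.zeros(25)
w_final[support] = np.linalg.lstsq(M[:, support], d, rcond=None)[0]
```

Larger `alpha` yields a smaller support, mirroring the trade-off between the number of projections and the MSE discussed in the next paragraph.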

We first investigate the impact of the sparsity parameter on the design. In the top row of Figure 6, we provide the number of projections for various values of , and as expected, we see that a larger sparsity parameter yields fewer projections. The more interesting result is in the second row, where we provide the MSE as a function of . Here we see that even with fewer projections, reconstructions obtained by imposing constraints correspond to smaller MSE values.

In Figures 7 and 8, we provide four sample reconstructions and error images for each data set and constraint. We observe that, overall, fewer projections are required and smaller reconstruction errors are possible if box or non-negativity constraints are included in the lower-level problem. This distinction is most prominent for the data sets of binary images, where only a few projections are needed.

Intuitively, the optimal angles for the rectangle images should be 0 and 90 degrees, and the optimal angles for the pentagon images should be 27, 63, 99, 135, and 171 degrees. This is because the training images all share the same orientation, and angles orthogonal to the edges may be considered optimal (see Figure 2). In Figure 7, we see that for the rectangles, the non-negative and box-constrained OED parameters are and degrees, respectively, and for the pentagons, and degrees, respectively. These results illustrate the benefit of incorporating proper constraints to improve the optimal experimental design.

## 6 Conclusions

In this work, we consider optimal experimental design problems in the context of tomographic reconstruction and investigate the impact of state constraints on the optimal design. We examine two problem formulations for enforcing sparse sampling in the design parameters, where OED problem A employs a sparsity-enforcing regularization term and OED problem B achieves a desired level of sparsity by construction. We investigate Bayes risk and empirical Bayes risk minimization techniques for OED. For problems with known or well-approximated (e.g., from very large data sets) mean and covariance matrix, a reformulation of the Bayes risk can lead to efficient methods to obtain optimal designs for the unconstrained case. However, the empirical risk minimization framework allows for the incorporation of state constraints. The primary challenge in efficiently optimizing the empirical problem is computing derivatives of the reconstructed image with respect to the design parameters. We obtain these by using implicit differentiation of the KKT conditions within interior point methods and by exploiting parallel computing in Julia. Our numerical results on various data sets demonstrate that including state constraints does indeed impact the optimal design, in that fewer projections are required and smaller MSE values can be obtained. Items for future work include considering integer or binary constraints on the design parameters, extensions to nonlinear inverse problems, and extensions to other applications.

## Acknowledgments

This work was initiated as a part of the SAMSI Program on Optimization 2016-2017. The material was based upon work partially supported by the National Science Foundation under Grant DMS-1127914 to the Statistical and Applied Mathematical Sciences Institute. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

## References

• [1] A. Alexanderian, N. Petra, G. Stadler, and O. Ghattas, A-optimal design of experiments for infinite-dimensional Bayesian linear inverse problems with regularized $\ell_0$-sparsification, SIAM Journal on Scientific Computing, 36 (2014), pp. A2122–A2148.
• [2] A. Alexanderian, N. Petra, G. Stadler, and O. Ghattas, A fast and scalable method for A-optimal design of experiments for infinite-dimensional Bayesian nonlinear inverse problems, SIAM Journal on Scientific Computing, 38 (2016), pp. A243–A272.
• [3] S. R. Arridge, Optical tomography in medical imaging, Inverse Problems, 15 (1999), pp. R41–R93, https://doi.org/10.1088/0266-5611/15/2/022, http://dx.doi.org/10.1088/0266-5611/15/2/022.
• [4] A. Atkinson, A. Donev, and R. Tobias, Optimum experimental designs, with SAS, vol. 34, Oxford University Press, 2007.
• [5] T. Butler, J. Jakeman, and T. Wildey, A consistent Bayesian formulation for stochastic inverse problems based on push-forward measures, arXiv preprint arXiv:1704.00680, (2017).
• [6] D. Calvetti and E. Somersalo, An Introduction to Bayesian Scientific Computing, vol. 2 of Ten Lectures on Subjective Computing, Springer Science & Business Media, New York, NY, Nov. 2007, https://doi.org/10.1007/978-0-387-73394-4, http://link.springer.com/10.1007/978-0-387-73394-4.
• [7] D. Calvetti and E. Somersalo, An Introduction to Bayesian Scientific Computing: Ten Lectures on Subjective Computing, vol. 2, Springer Science & Business Media, 2007.
• [8] E. J. Candes, J. K. Romberg, and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Communications on pure and applied mathematics, 59 (2006), pp. 1207–1223.
• [9] E. J. Candes and T. Tao, Decoding by linear programming, IEEE transactions on information theory, 51 (2005), pp. 4203–4215.
• [10] B. P. Carlin and T. A. Louis, Bayes and empirical Bayes methods for data analysis, Statistics and Computing, 7 (1997), pp. 153–154.
• [11] K. Chaloner and I. Verdinelli, Bayesian experimental design: A review, Statistical Science, (1995), pp. 273–304.
• [12] M. Cheney, D. Isaacson, and J. C. Newell, Electrical impedance tomography, SIAM review, (1999), https://doi.org/10.1137/S0036144598333613, http://epubs.siam.org/doi/abs/10.1137/S0036144598333613.
• [13] J. Chung, M. Chung, J. T. Slagel, and L. Tenorio, Stochastic Newton and quasi-Newton methods for large linear least-squares problems, arXiv preprint arXiv:1702.07367, (2017).
• [14] M. Chung and E. Haber, Experimental design for biological systems, SIAM Journal on Control and Optimization, 50 (2012), pp. 471–489.
• [15] E. Haber, Computational methods in geophysical electromagnetics, Mathematics in Industry (Philadelphia), Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2015, http://www.ams.org/mathscinet-getitem?mr=MR3307518.
• [16] E. Haber, L. Horesh, and L. Tenorio, Numerical methods for experimental design of large-scale linear ill-posed inverse problems, Inverse Problems, 24 (2008), pp. 055012–17, https://doi.org/10.1088/0266-5611/24/5/055012, http://stacks.iop.org/0266-5611/24/i=5/a=055012?key=crossref.1514bfdbc22f7155acb9adafa0ad38ae.
• [17] E. Haber, L. Horesh, and L. Tenorio, Numerical methods for the design of large-scale nonlinear discrete ill-posed inverse problems, Inverse Problems, 26 (2010), p. 025002, https://doi.org/10.1088/0266-5611/26/2/025002, http://stacks.iop.org/0266-5611/26/i=2/a=025002?key=crossref.190a70b366e17a256cd9842ad074a5fc.
• [18] E. Haber, Z. Magnant, C. Lucero, and L. Tenorio, Numerical methods for A-optimal designs with a sparsity constraint for ill-posed inverse problems, Computational Optimization and Applications, 52 (2012), p. 293.
• [19] E. Haber, K. Van Den Doel, and L. Horesh, Optimal design of simultaneous source encoding, Inverse Problems in Science and Engineering, 23 (2015), pp. 780–797.
• [20] P. C. Hansen, Discrete inverse problems: Insight and algorithms, SIAM, 2010.
• [21] J. Hsieh, Computed tomography: principles, design, artifacts, and recent advances, vol. 114, SPIE press, 2003.
• [22] X. Huan and Y. M. Marzouk, Simulation-based optimal Bayesian experimental design for nonlinear systems, Journal of Computational Physics, 232 (2013), pp. 288–317.
• [23] H. Huang, E. Haber, L. Horesh, and J. Seo, Optimal estimation of l1 regularization prior from a regularized empirical Bayesian risk standpoint, Inverse Problems and Imaging, (2013).
• [24] A. K. Jain, Fundamentals of digital image processing, Prentice-Hall, Inc., 1989.
• [25] N. L. Johnson and S. Kotz, Distributions in Statistics: Continuous Multivariate Distributions, John Wiley & Sons, New York, 1972.
• [26] J. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems, vol. 160 of Applied Mathematical Sciences, Springer Science & Business Media, New York, Mar. 2006, https://doi.org/10.1007/b138659, http://link.springer.com/10.1007/b138659.
• [27] A. C. Kak and M. Slaney, Principles of computerized tomographic imaging, SIAM, 2001.
• [28] F. Natterer, The mathematics of computerized tomography, SIAM, 2001.
• [29] J. Nocedal and S. Wright, Numerical Optimization, Springer Series in Operations Research and Financial Engineering, Springer Science & Business Media, Dec. 2006, https://doi.org/10.1007/978-0-387-40065-5, http://link.springer.com/10.1007/978-0-387-40065-5.
• [30] W. Nowak, F. De Barros, and Y. Rubin, Bayesian geostatistical design: Task-driven optimal site investigation when the geostatistical model is uncertain, Water Resources Research, 46 (2010).
• [31] A. Pázman, Foundations of optimum experimental design, vol. 14, Springer, 1986.
• [32] F. Pukelsheim, Optimal design of experiments, SIAM, 2006.
• [33] L. Ruthotto, E. Treister, and E. Haber, jInv – a flexible Julia package for PDE parameter estimation, SIAM J. Sci. Comput., (2017), http://arxiv.org/abs/1606.07399.
• [34] A. Shapiro, D. Dentcheva, and A. Ruszczyński, Lectures on stochastic programming: modeling and theory, SIAM, 2009.
• [35] L. A. Shepp and B. F. Logan, The Fourier reconstruction of a head section, IEEE Transactions on Nuclear Science, 21 (1974), pp. 21–43.
• [36] G. W. Stewart, Matrix Algorithms: Volume II: Eigensystems, SIAM, 2001.
• [37] S. N. Walsh, T. M. Wildey, and J. D. Jakeman, Optimal experimental design using a consistent Bayesian approach, arXiv preprint arXiv:1705.09395, (2017).