Weak Recovery Thresholds for TomoPIV

Critical Parameter Values and Reconstruction Properties of Discrete Tomography: Application to Experimental Fluid Dynamics

Stefania Petra, Christoph Schnörr
Image and Pattern Analysis Group, University of Heidelberg, Speyerer Str. 6, 69115 Heidelberg, Germany
{petra,schnoerr}@math.uni-heidelberg.de, iwr.ipa.uni-heidelberg.de

Andreas Schröder
Institute of Aerodynamics and Flow Technology, German Aerospace Center, Bunsenstr. 10, 37073 Göttingen, Germany
andreas.schroeder@dlr.de, dlr.de/as

We analyze representative ill-posed scenarios of tomographic PIV with a focus on conditions for unique volume reconstruction. Based on sparse random seedings of a region of interest with small particles, the corresponding systems of linear projection equations are probabilistically analyzed in order to determine (i) the possibility of unique reconstruction in terms of the imaging geometry and the critical sparsity parameter, and (ii) the sharpness of the transition to non-unique reconstruction with ghost particles when the sparsity parameter is chosen improperly. The sparsity parameter directly relates to the seeding density used for PIV in experimental fluid dynamics, which is chosen empirically to date. Our results provide a basic mathematical characterization of the PIV volume reconstruction problem that is an essential prerequisite for any algorithm used to actually compute the reconstruction. Moreover, we connect the sparse volume function reconstruction problem from few tomographic projections to major developments in compressed sensing.

Key words and phrases:
compressed sensing, underdetermined systems of linear equations, sparsity, large deviation, tail bound, algebraic reconstruction, TomoPIV
Support by the German Research Foundation (DFG) is gratefully acknowledged, grant SCHN457/11.

1. Introduction

Motivated by an application from fluid dynamics [9], we investigate conditions for a highly underdetermined nonnegative system of linear equations to have a unique nonnegative solution, provided it is sparse. The sought solution is a sparse 3D image of particles immersed in a fluid, known only from its projection. This projection represents the simultaneous 2D images captured by a few camera sensors from different viewing directions, see Fig. 1. The reconstruction of the 3D image from the 2D images employs a standard algebraic image reconstruction model, which assumes that the image consists of an array of unknowns (cells, voxels), and sets up algebraic equations for the unknowns in terms of measured projection data. The latter are the pixel entries in the recorded 2D images that represent the integration of the original 3D light intensity distribution along each pixel's line of sight. The number of cameras is limited, typically 3 to 6 and most often 4. As a consequence, the reconstruction problem becomes severely ill-posed.

Thus, we consider a huge and severely underdetermined linear system


with the following properties: a very sparse nonnegative measurement matrix whose column vectors all have constant small support,


and a nonnegative -sparse solution vector . While equals the number of cameras, is related to the particle density (equal, in the present work; proportional, in practice). We also consider the discretization (or resolution) parameter , and relate it to the number of discretization cells and the number of measurements:


We will answer the following question: what is the maximal number of particles, depending on the image resolution parameter , that can be localized uniquely? Formally, we want to relate the exact recovery of from its noiseless measurements to the sparsity and to the dimensions of the projection matrix . Moreover, we investigate critical values of the sparsity parameter such that most -sparse nonnegative solutions are the unique nonnegative solution of (1.1) with high probability.
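To make the setting concrete, the following sketch builds a small binary projection matrix for a toy 2D geometry with three ray directions (rows, columns, and diagonals of a square grid). This is an illustrative simplification, not the hexagonal geometry analyzed in Section 3; it only demonstrates the two structural properties stated above: far fewer measurements than cells, and constant column support equal to the number of viewing directions.

```python
import numpy as np

def projection_matrix(n):
    """Binary projection matrix for an n x n grid of cells and three
    ray directions (rows, columns, diagonals) -- a toy geometry, not
    the exact setup analyzed in the paper."""
    N = n * n
    rays = []
    for r in range(n):                    # horizontal rays
        a = np.zeros(N); a[n * r:n * r + n] = 1; rays.append(a)
    for c in range(n):                    # vertical rays
        a = np.zeros(N); a[c::n] = 1; rays.append(a)
    for s in range(2 * n - 1):            # diagonal rays (r + c = s)
        a = np.zeros(N)
        for r in range(max(0, s - n + 1), min(n, s + 1)):
            a[n * r + (s - r)] = 1
        rays.append(a)
    return np.vstack(rays)

A = projection_matrix(8)
print(A.shape)             # (31, 64): far fewer rays than cells
print(set(A.sum(axis=0)))  # each cell is met by exactly 3 rays
```

The system is underdetermined by roughly a factor of two already in 2D; in the 3D setups considered later, the ratio of unknowns to measurements grows with the resolution parameter.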


Figure 1. Typical camera arrangements: in circular configuration (right) or all in line (left).

1.1. Related Work

Research on compressed sensing [5, 3] focuses on properties of underdetermined linear systems that guarantee exact recovery of sparse or compressible signals from measurements . Donoho and Tanner [7, 8] have computed sharp reconstruction thresholds for random measurement matrices, such that for a given signal length and number of measurements , the maximal sparsity value which guarantees perfect reconstruction can be determined explicitly. The authors derived their results by connecting them to a problem from geometric probability, namely that points in general position in can be linearly separated [15]. This holds with probability for , and with if and , where


The authors show in [7, Thm. 1.10] that the probability of uniqueness of a -sparse nonnegative vector equals , provided satisfies certain conditions which do not hold in the application considered here. Likewise, again exploiting Wendel's theorem, Mangasarian and Recht [11] showed that a binary solution is most likely unique if , provided that comes from a centrosymmetric distribution. Unfortunately, the underlying distribution in our setting lacks symmetry with respect to the origin. However, we recently showed [13] for a three-camera scenario that there are thresholds on sparsity (i.e., density of the particles) below which exact recovery succeeds and above which it fails with high probability. These explicit thresholds depend on the number of measurements (recording pixels in the camera arrays). The current work investigates further geometries and focuses on an average-case analysis of conditions under which uniqueness of can be expected with high probability. A corresponding tail bound implies a weak threshold effect and a criterion for adequately choosing the value of the sparsity parameter .

1.2. Notation

denotes the cardinality of a finite set and for . We will denote by and the set of -sparse vectors. The corresponding sets of non-negative vectors are denoted by and , respectively. The support of a vector , , is the set of indices of non-vanishing components of .

For a finite set , the set denotes the union of all neighbors of elements of where the corresponding relation (graph) will be clear from the context.

denotes the -th column vector of a matrix . For given index sets , matrix denotes the submatrix of with rows and columns indexed by and , respectively. denote the respective complement sets. Similarly, denotes a subvector of .

denotes the expectation operation applied to a random variable and the probability to observe an event .

2. Graph Related Properties of Tomographic Projection Matrices

Recent trends in compressed sensing [2, 16] tend to replace random dense matrices by adjacency matrices of "high-quality" expander graphs. Explicit constructions of such expanders exist, but are quite involved. However, random binary matrices with non-replicated columns that have entries equal to , perform numerically extremely well, even if is small, as shown in [2]. In [10, 12] it is shown that perturbing the elements of adjacency matrices of expander graphs with low expansion can also improve performance.

2.1. Preliminaries

For simplicity, we restrict ourselves to situations where the intersection lengths of the projection rays corresponding to each camera with each discretization cell are all equal. Thus, we can assume that the entries of are binary. It will be useful to denote the set of cells by and the set of rays by . The incidence relation between cells and rays is then given by


for all , . Thus, cells and rays correspond to columns and rows of .

This gives the equivalent representation in terms of a bipartite graph with left and right vertices and , and edges iff . has constant left-degree equal to the number of projecting directions.

For any non-negative measurement matrix and the corresponding graph, the set

contains all neighbors of . The same notation applies to neighbors of subsets of right nodes. Further, we will call any non-negative matrix an adjacency matrix, based on the incidence relation of its non-zero entries.

If is the non-negative adjacency matrix of a bipartite graph with constant left degree , the perturbed matrix is computed by uniformly perturbing the non-zero entries to obtain , and by subsequently normalizing all column vectors of . In practice, such a perturbation can be implemented by discretizing the image with radial basis functions of unequal size and choosing their locations on an irregular grid.
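A minimal sketch of this perturbation, assuming a uniform multiplicative perturbation of the nonzero entries; the distribution and its range `eps` are illustrative choices, not values specified in the text.

```python
import numpy as np

def perturb(A, eps=0.3, seed=0):
    """Uniformly perturb the nonzero entries of a binary adjacency
    matrix and renormalize all columns, as described in the text.
    eps is a free parameter chosen for illustration only."""
    rng = np.random.default_rng(seed)
    B = A.astype(float).copy()
    mask = B > 0
    B[mask] *= rng.uniform(1.0 - eps, 1.0 + eps, size=mask.sum())
    return B / np.linalg.norm(B, axis=0)   # unit-norm columns

A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])
B = perturb(A)
assert np.allclose(np.linalg.norm(B, axis=0), 1.0)   # normalized columns
assert ((B > 0) == (A > 0)).all()                    # sparsity pattern kept
```

Note that the perturbation changes only the values of the nonzero entries, so the bipartite graph (and hence all expansion properties) of the matrix is unchanged.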

The following class of graphs plays a key role in the present context and in the field of compressed sensing in general.

Definition 2.1.

A -unbalanced expander is a bipartite simple graph with constant left-degree such that for any with , the set of neighbors of has at least size .

Recovery of a -sparse nonnegative solution via an -unbalanced expander was derived in [14]. It requires the smallest expansion constant compared to other similar results in the literature.

Theorem 2.1.

Let be the adjacency matrix of a -unbalanced expander and . Then for any -sparse vector with , the solution set is a singleton.
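The expansion property of Definition 2.1 can be checked by brute force on very small instances. The sketch below uses a generic (k, delta) parameterization, since the precise constants of the definition are rendered only symbolically above; the check is exponential in k and meant solely for toy graphs.

```python
import numpy as np
from itertools import combinations

def is_expander(A, k, delta):
    """Brute-force test of (k, delta)-unbalanced expansion for the
    bipartite graph of a binary matrix A (columns = left nodes,
    rows = right nodes): every set S of at most k columns must have
    at least (1 - delta) * d * |S| neighboring rows."""
    d = int(A[:, 0].sum())                     # constant left degree
    for size in range(1, k + 1):
        for S in combinations(range(A.shape[1]), size):
            n_neigh = np.count_nonzero(A[:, S].sum(axis=1))
            if n_neigh < (1 - delta) * d * size:
                return False
    return True

A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])        # 6-cycle as bipartite graph, d = 2
print(is_expander(A, 2, 0.25))   # True : every column pair has 3 neighbors
print(is_expander(A, 2, 0.0))    # False: 3 < 2 * 2 for pairs
```

For realistic tomographic matrices the combinatorial check is infeasible, which is precisely why the following sections pursue a probabilistic, average-case analysis instead.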

Now let denote the tomographic projection matrix, and consider a subset of columns and a corresponding -sparse vector . Then has support , and we may remove the subset of rows from the linear system corresponding to . Moreover, based on the observation , we know that


We continue by formalizing the system reduction just described.

Definition 2.2.

The reduced system corresponding to a given non-negative vector ,


results from by choosing the subsets of rows and columns




Note that for a vector and the bipartite graph induced by the measurement matrix , we have the correspondence (cf. (2.2))

We further define




The following proposition asserts that solving the reduced system (2.3) will always recover the support of the solution to the original system.

Proposition 2.2.

[13, Prop. 5.1] Let and have nonnegative entries only, and let and be defined by (2.6) and (2.7), respectively. Then


Consequently, we can restrict the linear system to the subset of columns , and only consider properties of this reduced systems.
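The system reduction of Definition 2.2 is simple to implement: rows with a zero measurement are removed, together with every column (cell) incident with such a row, since those cells are provably empty. A minimal sketch, assuming a dense array representation:

```python
import numpy as np

def reduce_system(A, b, tol=1e-12):
    """Reduced system of Definition 2.2 (sketch): keep the rows with a
    nonzero measurement and drop every cell (column) incident with a
    zero-measurement ray -- such cells are provably empty."""
    nonzero = b > tol
    zero_rows = np.flatnonzero(~nonzero)
    keep_cols = np.flatnonzero(A[zero_rows].sum(axis=0) == 0)
    return A[np.ix_(np.flatnonzero(nonzero), keep_cols)], b[nonzero], keep_cols

A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0])        # 1-sparse ground truth
A_red, b_red, cols = reduce_system(A, A @ x)
print(cols)          # only cell 0 survives the reduction
print(A_red.shape)   # (2, 1): a small, here even overdetermined, system
```

In this toy example the reduction alone already pins down the support; in general, uniqueness depends on the dimensions of the reduced system, which the following sections analyze probabilistically.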

2.2. Guaranteed Uniqueness

Uniqueness of is guaranteed if all equally or less sparse vectors supported on induce overdetermined reduced systems with .

Proposition 2.3.

[13, Th. 3.4] Let be the adjacency matrix of a bipartite graph such that for all random subsets of left nodes, the set of neighbors of satisfies


Then, for any -sparse nonnegative vector , the solution set is a singleton.

For perturbed matrices, uniqueness is already guaranteed for square reduced systems, and thus for less sparse scenarios (larger values of the sparsity parameter).

Proposition 2.4.

[13, Th. 3.4] Let be the adjacency matrix of a bipartite graph such that for all subsets of left nodes, the set of neighbors of satisfies


Then, for any -sparse vector , there exists a perturbation of such that the solution set is a singleton.

Recovery via perturbed underdetermined reduced systems is possible, and our numerical results from Section 5 suggest the following.

Conjecture 2.5.

Let be the adjacency matrix of a bipartite graph such that for all subsets of left nodes, the set of neighbors of satisfies


Then, for any -sparse vector , there exists a perturbation of such that the solution set is a singleton.

The consequences of Propositions 2.3, 2.4 and Conjecture 2.5 are investigated in the following sections 3.2 and 4.2 by working out critical values of the sparsity parameter for which the respective conditions are satisfied with high probability.

3. 3 Cameras - Left Degree equals 3

In this section, we analyze the imaging set-up depicted in Figure 2, left panel, which also represents typical 3D scenarios encountered in practice with a coarse resolution only along the third coordinate, as shown by Figure 2, center panel.

Figure 2. Sketch of a 3-camera setup in 2D. Left: The hexagonal area discretized into equally sized cells projected onto 3 1D cameras. The resulting projection matrix is underdetermined, with and , where is the number of cells on each hexagon edge. Middle: This geometry can easily be extended to 3D by extending both cameras and volume by one dimension, thus representing scenarios of practical relevance when cameras are aligned on a line. Right: When considering a square area along with three projection directions (two orthogonal, one diagonal), one obtains a projection matrix with analogous reconstruction properties. This projection matrix equals, up to scaling, the previous projection matrix corresponding to the hexagonal area if we remove the cells in the marked corners along with the incident rays.

3.1. Imaging Geometry

The cell centers of the hexagonal cells that partition the region of interest are given by lattice points corresponding to integer linear combinations of two vectors ,


for the index set


with problem size that we assume (in this section) to be an odd number for simplicity. The number of projections , where corresponds to the rays of direction , is


The number of cells incident with projection rays ranges over the interval


from the periphery towards the center. Thus, indexing with each projection ray along any particular direction , from one side of the hexagon across the center towards the opposite side, the number of cells incident with ray is


The total number of cells is




Accordingly, the system of equations representing the imaging geometry depicted by Figure 2, left panel, has dimensions


Note that . For further reference, we define the quantities


and list some further relations and approximations for large ,


3.2. Dimensions of Reduced Systems

We estimate the expected dimensions (2.5) of the reduced system (2.3) based on uniformly selecting cells at random locations.

To each projection ray , we associate a binary random variable taking the value if none of the cells is incident with ray , and otherwise. We call the event a zero measurement.

We are interested in the random variable


that determines the number of projection rays not incident with any of the cells, that is, the number of zero measurements. We set


Hence, is the expected size of the support of the measurement vector .

Remark 3.1.

Note that random variables are not independent because different projection rays may intersect. This dependency does not affect the expected value of , but it does affect the deviation of observed values of from its expected value – cf. Section 3.3.

Remark 3.2.

We do not assume in the derivation below that different cells are selected. In fact, a single cell may be occupied by more than a single particle in practice, because real particles are very small relative to the discretization cells. The imaging optics enlarges the appearance of particles, and the action of physical projection rays is adequately represented by linear superposition.

Definition 3.1 (Sparsity Parameter).

We refer to the number introduced above as sparsity parameter. Thus, highly sparse scenarios correspond to low values .

Lemma 3.1.

The expected number of zero measurements is


For , has a Bernoulli distribution with


For independent trials, we have (cf. Remark 3.2)


By the linearity of expectations and (3.15), we obtain (3.13),


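The displayed formula (3.13) is not reproduced in this version; as reconstructed from the proof, the expected number of zero measurements for k independently and uniformly selected cells is E[Z] = sum_j (1 - c_j/n)^k, where c_j denotes the number of cells covered by ray j. The following Monte Carlo check uses a toy two-direction geometry as an illustrative stand-in for the hexagonal setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy geometry: 4 x 4 grid, horizontal and vertical rays only
# (m = 8 rays, n = 16 cells, every ray covers c_j = 4 cells).
A = np.zeros((8, 16))
for r in range(4):
    A[r, 4 * r:4 * r + 4] = 1        # horizontal ray r
    A[4 + r, r::4] = 1               # vertical ray r

m, n = A.shape
c = A.sum(axis=1)
k = 3                                # sparsity parameter

# Lemma 3.1 as reconstructed from its proof: ray j stays a zero
# measurement iff none of the k independently, uniformly selected
# cells lies on it, so E[Z] = sum_j (1 - c_j / n) ** k.
exact = np.sum((1.0 - c / n) ** k)   # here 8 * 0.75**3 = 3.375

trials = 20000
Z = 0
for _ in range(trials):
    cells = rng.integers(0, n, size=k)
    Z += np.count_nonzero(A[:, cells].sum(axis=1) == 0)
print(abs(Z / trials - exact) < 0.1)   # Monte Carlo agrees
```

The agreement holds although the per-ray indicator variables are dependent (cf. Remark 3.1): linearity of expectation does not require independence.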
We discuss a few specific scenarios depending on the sparsity parameter .

No particles:

For , we obviously have

High sparsity:

By (3.10), we have , hence . Thus, for large problem sizes and small values of ,

By (3.7), , hence


This approximation says that for sufficiently small values of each randomly selected cell can be expected to create 3 independent measurements, which just reflects the fact that each cell is met by three projection rays.

Less high sparsity:

For increasing values of , higher-order terms can no longer be ignored, due to the increasing number of projection rays meeting several particles. Taking the second-order term into account, we obtain in an analogous way


which is a fairly tight upper bound for values of and that are relevant to applications.
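The first-order behavior, an expected support of roughly 3k nonzero measurements for small k, can be checked numerically. The sketch below uses the ray lengths of a toy square geometry with three directions (rows, columns, diagonals), an illustrative stand-in; the exact hexagonal ray lengths are given in Section 3.1.

```python
import numpy as np

# Ray lengths c_j for a g x g grid with three ray directions
# (rows, columns, diagonals) -- a toy stand-in for the hexagonal
# geometry, used only to illustrate the first-order behavior.
g = 20
c = np.array([g] * g + [g] * g
             + [min(s + 1, 2 * g - 1 - s) for s in range(2 * g - 1)])
n = g * g                            # total number of cells

for k in (1, 4, 16, 64):
    # expected number of nonzero measurements, m - E[Z]
    support = np.sum(1.0 - (1.0 - c / n) ** k)
    print(k, support / k)   # close to 3 while k << n, then decreasing
```

For k = 1 the ratio equals the left degree 3 exactly; as k grows, rays increasingly meet several particles and the ratio drops below 3, which is exactly the higher-order effect described above.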

We next consider the expected number of cells supporting the set according to (2.4). We denote this expected number by


and by the expected size of the complement.

Let denote the partition of all projection rays by the three directions. For each cell , there are three unique rays , incident with . Furthermore, for and some ray , let denote the set of rays that intersect with . As before, denotes the number of cells covered by projection ray .

Proposition 3.2.

For a given sparsity parameter , the expected number of cells that can be recognized as empty based on the observations of random variables , is


Each cell intersects with three projection rays . Hence, given the rays corresponding to zero measurements, a cell can be recognized as empty if one, two, or three rays from the set belong to this set.

We therefore determine separately the expected number of removable cells (i) due to individual rays corresponding to zero measurements, (ii) due to all pairs of rays that intersect and correspond to zero measurements, and (iii) due to all triples of rays that intersect and correspond to zero measurements. The estimate (3.20a) then results from the inclusion-exclusion principle that combines these numbers so as to avoid overcounting, to obtain the desired estimate corresponding to the union of these events.

Consider each projection ray for any fixed direction . Because these rays do not intersect, the expected number of cells that can be removed based on the observation , is


by the linearity of expectations and (3.15). Due to the symmetry of the setup, this number is the same for each direction . Hence we multiply by in (3.20a).

Consider next pairs of directions . For fixed, the expected number of empty cells based on a zero measurement corresponding to some ray and all rays intersecting with , is


The linearity of expectations and gives (3.20c). Due to symmetry, we have to multiply by in (3.20a).

Finally, the expected number of empty cells that correspond to observed zero measurements along all three projection directions, is


which equals (3.20d). ∎

An immediate consequence of Lemma 3.1 and Prop. 3.2 is

Corollary 3.3.

For a given value of the sparsity parameter , the expected dimensions of the reduced system (2.3) are


with given by (3.13) and (3.20).

3.3. A Tail Bound

We are interested in how sharply the random number of zero measurements peaks around its expected value given by (3.13).

Because the random variables , are not independent due to the intersection of projection rays, we apply the following classical inequality, which bounds the deviation of a random variable from its expected value based on martingales, that is, on sequences of random variables defined on a finite probability space satisfying


where denotes an increasing sequence of σ-fields in with being -measurable.

Theorem 3.4 (Azuma’s Inequality [1, 4]).

Let be a sequence of random variables such that for each ,


Then, for all and any ,


Let , denote the σ-field generated by the collection of subsets of that correspond to all possible events after having observed randomly selected cells. We set . Because observing cell just further partitions the current state based on the previously observed cells, by possibly removing some ray (or rays) from the set of zero measurements, we have a nested sequence (filtration) of the set of all subsets of .

Based on this, for a fixed value of the sparsity parameter , we define the sequence of random variables


where , are the random variables specifying the expected number of zero measurements after having observed randomly selected cells, conditioned on the subset of events determined by the observation of randomly selected cells. Consequently, due to the absence of any information, , and is just the observed number of zero measurements. The sequence is a martingale by construction satisfying , that is, condition (3.25).
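The martingale property of this sequence can be verified exactly on a toy geometry by averaging over all equally likely choices of the next cell. The conditional expectation below is reconstructed from the construction above and is an illustrative sketch, not the paper's exact setup.

```python
import numpy as np

# Toy geometry: 4 x 4 grid, horizontal and vertical rays
# (m = 8 rays, n = 16 cells, every ray covers 4 cells).
A = np.zeros((8, 16))
for r in range(4):
    A[r, 4 * r:4 * r + 4] = 1
    A[4 + r, r::4] = 1
m, n = A.shape
c = A.sum(axis=1)
k = 5

def X(zero_rays, i):
    """Doob martingale X_i: expected final number of zero measurements,
    given that the rays in `zero_rays` are still unhit after i observed
    cells (a reconstruction of the conditional expectation used above)."""
    return sum((1.0 - c[j] / n) ** (k - i) for j in zero_rays)

# Verify E[X_{i+1} | state after i cells] = X_i exactly, by averaging
# over all n equally likely choices of the next cell.
state = {0, 1, 4, 6}                 # zero rays left after i = 2 cells
avg = sum(X({j for j in state if A[j, cell] == 0}, 3)
          for cell in range(n)) / n
print(abs(avg - X(state, 2)) < 1e-12)   # True: a martingale
```

The identity holds because each ray survives the next cell independently with probability 1 - c_j/n, which reduces the exponent k - i by exactly one, mirroring the reduction step in the proof of Proposition 3.5.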

Proposition 3.5.

Let be the expected number of zero measurements for a given sparsity parameter , given by (3.13). Then, for any ,


Let denote the subset of rays with zero measurements after the random selection of cells. For the remaining trials, the probability that no cell incident with some ray will be selected is


with given by (3.14). Consequently, by the linearity of expectations, the expected number of zero measurements, given the number of zero measurements after the selection of cells, is


Now suppose we observe the random selection of the -th cell. We distinguish two possible cases.

  1. Cell is not incident with any ray . Then the number of zero measurements remains the same, and



  2. Cell is incident with one, two or three rays contained in . Let denote the set after removing these rays. Then

    Furthermore, since and ,

    Further upper bounding by dropping the second sum shows that the resulting first term is still smaller than the bound (3.33).

As a result, we consider the larger bound (3.33) of these two cases and compute

Applying Theorem 3.4 completes the proof. ∎

Remark 3.3.

Expanding the r.h.s. of (3.29) around in terms of the variable shows


This indicates appropriate choices for large but finite problem sizes occurring in applications, so as to bound the deviation of from its expected value. As a result, for such choices of , our analysis based on expected values of the key system parameters will hold in applications with high probability.

3.4. Critical Sparsity Values and Recovery

We derived the expected number of nonzero measurements (2.5) induced by random -sparse vectors and the corresponding expected number of non-redundant cells . The tail bound, Prop. 3.5, guarantees that the dimensions of the reduced systems concentrate around the derived expected values, explaining the threshold effects of unique recovery from sparse tomographic measurements.

We now introduce some further notation and discuss the implications of Section 2.2 for exact recovery of . Let and be the expected dimensions of the reduced system induced by a random -sparse nonnegative vector as detailed in Corollary 3.3. Let and denote by and the sparsity values, depending on , which solve the equations


In what follows, the phrase with high probability refers to values of the sparsity parameter for which random supports concentrate around the crucial expected value according to Prop. 3.5, thus yielding a desired threshold effect.
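The critical sparsity values can also be located numerically by scanning k and estimating the expected reduced dimensions by Monte Carlo: once the estimated number of surviving columns exceeds the number of surviving rows, overdetermined reduced systems, and hence recovery in the sense of Proposition 2.3, can no longer be expected. The geometry below is again an illustrative toy stand-in, not the hexagonal setup.

```python
import numpy as np

rng = np.random.default_rng(4)

def reduced_dims(A, k, trials=1000):
    """Monte Carlo estimate of the expected reduced-system dimensions
    for k uniformly selected cells: rows with nonzero measurements and
    columns not incident with any zero-measurement row."""
    m, n = A.shape
    rows = cols = 0
    for _ in range(trials):
        hit = A[:, rng.integers(0, n, size=k)].sum(axis=1) > 0
        rows += np.count_nonzero(hit)
        cols += np.count_nonzero(A[~hit].sum(axis=0) == 0)
    return rows / trials, cols / trials

# Toy geometry: 8 x 8 grid, rays along rows, columns and diagonals.
g = 8
A = np.zeros((2 * g + 2 * g - 1, g * g))
for r in range(g):
    A[r, g * r:g * r + g] = 1
    A[g + r, r::g] = 1
for s in range(2 * g - 1):
    for r in range(max(0, s - g + 1), min(g, s + 1)):
        A[2 * g + s, g * r + (s - r)] = 1

for k in (2, 6, 10, 14):
    rows, cols = reduced_dims(A, k)
    print(k, rows, cols,
          "overdetermined" if rows >= cols else "underdetermined")
```

For small k the reduced system is overdetermined on average (roughly 3k rows against roughly k columns), while for large k almost no cells can be removed; the crossover is the numerical analogue of the critical sparsity value defined above.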

Proposition 3.6.

The system , with measurement matrix , admits unique recovery of -sparse non-negative vectors with high probability, if


For perturbed systems, we have the following.

Proposition 3.7.

The system , with perturbed measurement matrix , admits unique recovery of -sparse non-negative vectors with high probability, if satisfies condition .

If Conjecture 2.5 holds, uniqueness of would be guaranteed if . Finally, we comment on the maximal sparsity threshold , for the case that reduced systems follow a distribution symmetric with respect to the origin and columns are in general position.


Figure 3. Imaging setup with 4 cameras corresponding to the image planes, shown as two pairs in the left and center panel, respectively. Right panel: Cell centers projected onto the first image plane are shown as dots for the case . The cube is discretized into cells and projected along rays.

4. 4 Cameras - Left Degree equals 4

We consider the imaging set-up depicted by Figure 3 and conduct a probabilistic analysis of its recovery properties, analogous to Section 3.2. This scenario is straightforward to realize and should also be particularly relevant to practical applications.

4.1. Imaging Geometry

Each coordinate of the unit cube is discretized into the intervals , resulting in voxels with coordinates


There are 4 sets of parallel projection rays corresponding to the normals of the image planes depicted in Fig. 3,


We denote the set of projection rays and its partition corresponding to the 4 directions


Each set contains projection rays whose measurements yield a projection image with pixels. We index and denote the pixels by , and the projection rays through these pixels by

For each cell indexed by according to (4.1), we represent the corresponding pixels after a suitable transformation by


The cardinalities of the projection rays, i.e., the numbers of cells covered by each projection ray, are


We observe the symmetries


and define


because does not vary with . Summing up the cells covered by all rays along the first direction, for example, we obtain by (4.5a), (4.6) and (4.7),

We set


We conduct next for this setup the analysis analogous to Section 3.2, in order to compute the expected size of the reduced system (2.5) for random -sparse vectors .

4.2. Dimensions of Reduced Systems

We first compute the expected number of measurements (2.5) as a function of the sparsity parameter .

Lemma 4.1.

The expected number of non-zero measurements is