
# Active Mean Fields for Probabilistic Image Segmentation: Connections with Chan-Vese and Rudin-Osher-Fatemi Models

Marc Niethammer (University of North Carolina at Chapel Hill, Department of Computer Science and Biomedical Research Imaging Center (BRIC)).    Kilian M. Pohl (Center for Health Sciences, SRI International and Department of Psychiatry and Behavioral Sciences, Stanford University).    Firdaus Janoos (Two Sigma Investments).    William M. Wells III (Harvard Medical School and Brigham and Women’s Hospital).
**Abstract.**

Segmentation is a fundamental task for extracting semantically meaningful regions from an image. The goal of segmentation algorithms is to accurately assign object labels to each image location. However, image noise, shortcomings of algorithms, and image ambiguities cause uncertainty in label assignment. Estimating the uncertainty in label assignment is important in multiple application domains, such as segmenting tumors from medical images for radiation treatment planning. One way to estimate these uncertainties is through the computation of posteriors of Bayesian models, which is computationally prohibitive for many practical applications. On the other hand, most computationally efficient methods fail to estimate label uncertainty. We therefore propose in this paper the Active Mean Fields (AMF) approach, a technique based on Bayesian modeling that uses a mean-field approximation to efficiently compute a segmentation and its corresponding uncertainty. Based on a variational formulation, the resulting convex model combines any label-likelihood measure with a prior on the length of the segmentation boundary. A specific implementation of that model is the Chan–Vese segmentation model (CV), in which the binary segmentation task is defined by a Gaussian likelihood and a prior regularizing the length of the segmentation boundary. Furthermore, the Euler–Lagrange equations derived from the AMF model are equivalent to those of the popular Rudin-Osher-Fatemi (ROF) model for image denoising. Solutions to the AMF model can thus be implemented by directly utilizing highly-efficient ROF solvers on log-likelihood ratio fields. We qualitatively assess the approach on synthetic data as well as on real natural and medical images. For a quantitative evaluation, we apply our approach to the icgbench dataset.

**Keywords.** Segmentation, mean-field approximation, Rudin-Osher-Fatemi model, Chan-Vese model

## 1 Introduction

Image segmentation approaches rarely provide measures of segmentation label uncertainty. In fact, most existing and probabilistically-motivated segmentation approaches only compute the maximum a posteriori (MAP) solution [34, 35, 8, 20, 44]. Using these models to segment ambiguous boundaries is troublesome, especially for applications where confidence in object boundaries impacts analysis. For example, many radiation treatment plans base dose distribution on the boundaries of tumors segmented from medical images with low contrast [37]. This can be problematic, as segmentation variability can have a substantial effect on radiation treatment; Martin et al. [37] report that such variability caused mean observer tumor control probability (i.e., the probability to control or eradicate a tumor at a given dose) to range from (22.6 ± 11.9)% to (33.7 ± 0.6)% between six participating physicians in a study of intensity-modulated radiation therapy (IMRT) of 4D-CT-based non-small cell lung cancer radiotherapy. The precision of the planning could be improved around highly-confident tumor boundaries [37, 30], thereby reducing the risk of damaging healthy tissue in those areas. As significant information about label uncertainty is contained in the posterior distribution, it is natural to go beyond determining a MAP solution and instead to either compute the posterior distribution itself or a computationally efficient approximation.

This paper develops such a method for an efficient approximation of the posterior distribution on labels. Furthermore, it connects this method to the Rudin-Osher-Fatemi (ROF) model for image denoising [51, 57, 3] and previously existing level-set segmentation approaches [42], in particular the Chan-Vese segmentation model [15]. Due to these connections we can (i) make use of the efficient solvers for the ROF model to approximate the posterior distribution on labels and (ii) compute the solution to the Chan-Vese model through the MAP realization of our approximation to the posterior distribution, i.e., our model is more general and subsumes the Chan-Vese model. In contrast to the implicit style of active-contour methods that represent labels by way of zero level-sets, such as the classical formulation of the Chan-Vese model, we use a dense logit (“log odds”) representation of label probabilities. This is akin to the convex approaches for active contours [2], but in a probabilistic formulation.

### 1.1 Motivations

Beyond optimal labelings, posterior distributions on labelings offer some advantages. For example, in many instances, one wishes to obtain information about segmentation confidence; or in change detection, distributions can help to determine whether an observed apparent change may be due to chance. Furthermore, probabilistic models on latent label fields can be useful for constructing more ambitious systems that, e.g., perform simultaneous segmentation and atlas registration [49]. However, the computation of posterior distributions is typically costly. Conversely, the computation of deterministic segmentation results, as for example by the popular active-contour approaches, is inexpensive and has been shown to be effective. Hence, we were motivated to merge both technologies to obtain an active-contour inspired segmentation approach capable of estimating posterior distributions efficiently.

In previous work [50], we described an Active Mean Fields (AMF) approach to image segmentation that used a variational mean field (VMF) approach along with a logit representation to construct a segmentation system similar to the one described here. This method empirically generated accurate segmentations and converged well, but used a different, and more awkward, approximation of the expected value of the length functional. In the present work, we use a linearization approach via the Plefka approximation. Using this approximation has profound consequences, as it allows us to make connections to the Chan-Vese [15] segmentation model and the ROF denoising model [51] in the continuous space. This connection in turn makes possible the efficient implementation of the AMF model through approaches used for ROF denoising. Hence, the overall model is convex, easy to implement, and fast. Furthermore, we show good approximation properties in comparison to the “exact” distribution.

### 1.2 Contributions

This paper makes the following contributions:

• It derives an AMF approach that allows a computationally efficient estimation of the posterior distribution of the segmentation label map, based on the VMF approximation, for binary image segmentation regularized via a boundary-length prior.

• It establishes strong connections between the proposed AMF model, active-contour models and total-variation (TV) denoising. In particular, the model retains the global optimality of convex active-contours while estimating a level-set function that has a direct interpretation as an approximate posterior on the segmentation. This is in contrast to level-set techniques which use the zero level-set only as a means for representing the object boundary with no (probabilistic) interpretation of the non-zero level-sets.

• It demonstrates how the Rudin-Osher-Fatemi (ROF) TV denoising model can be used to efficiently compute solutions of the AMF model. Hence, given the widespread availability of high-performance ROF-solvers, the AMF model is very simple to implement and will be immediately usable by the community with little effort.

### 1.3 Background

The earliest and simplest probabilistic image segmentation approaches frequently used pixel-wise independent Maximum Likelihood (ML) or MAP classifiers [56], which could be as simple as image thresholding. Better performance, in the face of noise, motivated the use of regularization, or prior probability models on the label fields that discouraged fragmentation [4], leading to the widespread application of Markov Random Field (MRF) models [28, 59]. Image segmentation with MRF models was initially thought to be computationally complex, which motivated approximations, including the mean field approach from statistical physics [31, 17]. More recently, fast solvers have appeared using graph-cuts, belief propagation or linear programming techniques that yield globally optimal solutions for certain energy functions [54].

Typically, the segmentation problem is posed as the minimization of an energy or negative log-likelihood that incorporates an image likelihood and a regularization term on the boundaries of segmented objects. This regularization may be specified either: (i) directly on the boundary (explicitly as a parametric curve or surface, or implicitly through the use of level-set functions); or (ii) by representing objects via indicator functions, where discontinuities in those functions identify boundaries. The direct boundary representation is attractive because it reduces complexity as only objects of co-dimension one need to be dealt with (i.e., a curve in 2D, a surface in 3D, etc.). The price for this reduction in complexity is that, usually, minimization becomes non-convex, and hence can get trapped in poor local minima in the absence of good initializations. In the snakes approach [32], a popular example of explicit boundary representation, the boundary curve represented by control points is evolved such that it captures the object of interest (for example, by getting attracted to edges) while assuring regularity of the boundary by penalizing rapid boundary changes through elasticity and rigidity terms. Although computationally efficient, explicit parametric representations cannot easily deal with topological changes and have numerical issues due to their fixed object parameterization (e.g. when an initial curve grows or shrinks drastically). Furthermore, though not an intrinsic problem of explicit parameterizations, such methods are typically not geometric, making evolutions dependent on curve parameterizations.

In contrast, level-set representations [42, 36] of active-contour methods [10, 33] do not suffer from these topological and parameterization issues. These methods use implicit representations of the label-field, where an object’s boundary is, for example, represented through the zero level-set of a function. Whereas a parametric boundary representation is evolved directly, for example by moving its associated control points, for a level-set representation the level-set function is evolved, which indirectly implies an evolution of the segmentation boundary. Specifically, an evolution equation is imposed on the level-set function such that its zero level-set moves as desired. As the level-set function is (by construction) either strictly positive or negative (depending on convention used) inside the object and strictly negative or positive on the outside of the object, a labeling can be obtained by simple thresholding. Level-set approaches for image segmentation make use of advanced numerical methods to solve the associated partial differential equations [42, 53]. To assure boundary regularity, segmentation energies typically penalize boundary length or surface area.

While initial curve and surface evolution methods focused on energy minimization based on boundary regularity and boundary misfit, later approaches such as the Chan-Vese model [15], added terms that encoded statistics about the regions inside and outside the segmentation boundary. Such region-based models can be as simple as homoscedastic (i.e., same variance) Gaussian likelihoods with specified (but distinct) means for foreground and background respectively, as in the Chan-Vese case. They can also be much more complex such as trying to maximally separate intensity or feature distributions inside and outside an object [26]. Overall, a large variety of region-based approaches exist, providing great modeling freedom [20]. While region-based models are less sensitive to initialization, they are still non-convex when combined with weighted curve-length terms for regularization. Hence, a global optimum cannot be guaranteed by numerical optimization for such formulations. The dependency on curve and surface initializations popularized the formulation of energy minimization methods which can find a global energy optimum. One such approach is to reformulate an energy minimization problem as a problem defined over an appropriately chosen graph.

In the context of image segmentation, the idea is to create a graph with added source and sink nodes in such a way that a minimum cut of the graph implies a variable configuration which minimizes the original image segmentation energy [7]. For a large class of binary segmentation problems, these graph-cut approaches allow for the efficient computation of globally optimal solutions through max-flow algorithms [34]. In particular, discrete versions of the active-contour and Chan-Vese models (with fixed means) can be formulated. To avoid computing trivial solutions for the boundary-only active contours, graph-cut formulations add seed-points, specifying fixed background and foreground pixels or voxels (in 3D). While conceptually attractive, graph-cut approaches suffer from the need to build the graphs and the necessity to specify discrete neighborhood structures which may negatively affect the regularity of the obtained solution by creating so-called metrication artifacts.

Recently, the focus has shifted away from level-set and graph representations to formulations of active contours and related models by means of indicator functions [2, 8] defined in the continuum and allowing for convex formulations. These methods are closely related to segmentation via graph-cuts, but avoid the construction of graphs and can alleviate metrication artifacts. A key insight here is that the boundary-length or area term can be formulated through the total variation of an indicator function. This regularization formulation becomes convex when followed by relaxation of the indicator function to the interval $[0,1]$. Hence these approaches strike an attractive balance between Partial Differential Equation (PDE)-based level-set formulations and the global properties of graph-cut methods. As they are specified via PDEs, highly accurate and fast implementations for these solvers are available [47]. As these convex formulations make use of TV terms, they are conceptually related to TV image denoising. The use of TV regularization for denoising was pioneered by Rudin, Osher and Fatemi (the ROF model [51]). The ROF model uses quadratic (i.e., $L^2$) coupling to the image intensities and TV for edge-preserving noise removal [9]. Approaches with $L^1$ coupling yielding a form of geometric scale-space have also been proposed [13]. As we will see, our proposed approach will be closely related to these modern TV regularization and denoising approaches.

Segmentation approaches based on energy-optimization as discussed above typically either have a probabilistic interpretation (as negative log-likelihoods) or have been explicitly derived from probabilistic considerations. The reader is referred to Cremers et al. [20] for a review of recent developments in probabilistic level-set segmentation. All these techniques, while probabilistic in nature, seek optimal labels and do not directly provide information about the posterior distribution or uncertainty in their solutions. In contrast, our proposed AMF approach will approximate posterior distributions from which one can infer a segmentation and corresponding uncertainty.

### 1.4 AMF Segmentation Approach

AMF segmentation is a Bayesian approach, which results in a posterior distribution on the label map. The AMF approach combines explicit representations of label likelihoods with a boundary length prior. As we will show, our approach makes strong connections to ROF-denoising, and convex active-contour as well as probabilistic active-contour formulations.

In prior work, Monte-Carlo approaches, which require sampling, have been used to characterize posterior distributions on segmentations [24, 18, 45]. In addition, the Monte-Carlo approach is quite general with respect to statistical modeling assumptions, so that it could be applied to the likelihood and regularity terms of our segmentation tasks. Approximations are then only caused by the sampling. A potential drawback of such a Monte-Carlo approach is that an accurate estimation might require the generation of a large number of samples, which can be time consuming.

In contrast to the Monte-Carlo approach, our mean-field approximation is based on a factorized distribution that is quick to compute, but which is a relatively severe approximation. A potential drawback of our method is that samples drawn from the approximated posterior can have an unnatural fragmented appearance. However, our experimental results reveal that the approximation is surprisingly accurate (in terms of correlation of the posterior probabilities and the segmentation area), when compared to the exact model using much slower Gibbs sampling.
In summary, the primary advantages of our approach are speed, simplicity, and leverage of existing convex solver technology. We show in Section 3.2 that applying ROF denoising to the logit field of label probabilities results in a “denoised” logit transform from which a label probability function can easily be obtained through a sigmoid transformation. Given an ROF solver, the AMF model can thus be implemented in one line of source code. Furthermore, the AMF model provides a good approximation of the posterior of the segmentation under a curve-length prior, as we experimentally show in Section 5.3.
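To make this concrete, the summary above can be sketched in a few lines of NumPy. The ROF solver below is a deliberately naive gradient-descent scheme on an ε-smoothed TV term, standing in for the highly-efficient solvers mentioned above (e.g., primal-dual methods); the function names and the `lam` parameter (playing the role of vλ) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rof_denoise(f, lam=1.0, n_iter=200, dt=0.1, eps=1e-3):
    """Naive gradient-descent ROF solver: approximately minimizes
    0.5*||u - f||^2 + lam * TV_eps(u), with an eps-smoothed TV term."""
    u = f.copy()
    for _ in range(n_iter):
        # forward differences with replicate (Neumann-like) boundaries
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        # divergence of the normalized gradient (the curvature term);
        # np.roll gives an approximate backward difference at the border
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + dt * (f - u + lam * div)
    return u

def amf_segment(psi, lam=1.0):
    """AMF sketch: ROF-denoise the logit field psi, then apply a sigmoid
    to obtain approximate posterior foreground probabilities."""
    phi = rof_denoise(psi, lam=lam)        # "denoised" logits
    return 1.0 / (1.0 + np.exp(-phi))      # theta = sigmoid(phi)
```

With a production ROF solver (for instance `skimage.restoration.denoise_tv_chambolle`), `amf_segment` indeed collapses to essentially one line; note that the mapping between `lam` and that solver's weight parameter depends on the solver's objective convention.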

### 1.5 Structure of the Article

In Section 2, we specify a discrete-space probabilistic formulation of segmentation with the goal of finding the posterior distribution of labels, given an input image. We use the VMF approach, along with a linearization approximation that simplifies the problem. This results in an optimization problem for determining the parameters of an approximation to the posterior distribution on pixel labels. In the style of Chan and Vese [15] and many others, we shift from discrete to continuous space facilitating use of the calculus of variations for the optimization problem, yielding the Euler-Lagrange equations for the AMF model.

In Section 3, we show that the AMF Euler-Lagrange equations for the zero level-set correspond to those of a special case of the Chan-Vese model [15], and that the AMF “approximate posterior” has the same mode, or MAP realization, as the exact posterior distribution. Subsequently we show that the AMF Euler-Lagrange equations have the same form as those of the ROF model of image denoising, and we discuss methods that may be used for solving AMF.

Section 4 describes other important properties of AMF. We show that for a one-parameter family of realizations, the approximated and exact posteriors agree ratiometrically, and we explore their agreement for more general realizations. In addition, we show that the AMF problem is convex, and is unbiased in a particular sense.

Section 5 shows the experimental results on examples that include intensity ambiguities. It also demonstrates the quality of the AMF in approximating the true posterior via Gibbs sampling. Furthermore, Section 5 discusses AMF results for real ultrasound images of the heart, the prostate, a common test image in computer vision, and on a large collection of images from the icgbench segmentation dataset [52].

Finally, Section 6 concludes with a summary and an outlook on future work. Detailed derivations of the approximation properties can be found in the appendix.

## 2 Active Mean Fields (AMF)

This section introduces the basic discrete-space probabilistic model (Section 2.1), that includes a conventional conditionally independent likelihood term and a prior that penalizes the boundary length of the labeling. The VMF approach is used (Section 2.2), along with a Plefka approximation (Section 2.3), to construct a factorized distribution that, given image data, approximates the posterior distribution on labelings. The resulting optimization problem for determining the parameters of the variational distribution has a KL-divergence data attachment term and a TV regularizer. The objective function is converted to continuous space (Section 2.4), yielding the Euler-Lagrange equations of the AMF model (Section 2.5), that involve logit label probabilities and likelihoods along with a curvature term.

In the following sections, we use upper-case $P$ and $Q$ to represent probability mass functions and lower-case $p$ and $q$ to represent probability density functions.

### 2.1 Original Probability Model

Define the space of images as a compact domain $X \subset \mathbb{R}^2$ indexed by $x$, and let $I_X$ denote the indices of the lattice of image pixels. (Our theory also holds for higher dimensions, i.e., $X \subset \mathbb{R}^n$. We discuss our approach in $\mathbb{R}^2$ for simplicity and hence talk about pixels. In 3D, for example, we would deal with voxel grids and would need to compute a 3D variant of total variation, but the overall results hold unchanged.) Furthermore, $z$ denotes a binary random field defined on the pixel lattice whose realizations are the binary labelings of a real-valued image $y$ on the pixel lattice; given the image pixel index $i \in I_X$, $z_i$ and $y_i$ are the corresponding quantities specific to pixel $i$. For convenience, we write $p(y_i \mid z_i)$ with $z_i \in \{0,1\}$, where the definition of $p(y_i \mid z_i)$ is problem specific and is assumed to be given (for example, specified parametrically or obtained through kernel density estimation on a given feature space; we will not address this issue here). Now, if we make the usual assumption that the likelihood term, i.e., the probability density of observing intensities conditioned on labels, is conditionally independent and identically distributed (iid), i.e.,

 (1) $p(y \mid z) = \prod_{i \in I_X} p(y_i \mid z_i),$

then the corresponding log-likelihood, defined with respect to the logit transform of the pixel-specific image likelihood,

 (2) $\psi_i \triangleq \ln \frac{p(y_i \mid 1)}{p(y_i \mid 0)},$

is

 (3) $\ln p(y \mid z) = \sum_{i \in I_X} \ln p(y_i \mid z_i) = \sum_{i \in I_X} z_i \psi_i + \ln p(y_i \mid 0).$
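The decomposition in Eqn. (3) can be checked numerically. The sketch below assumes Gaussian pixel likelihoods purely for illustration (the derivation itself does not require this choice):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=8)               # observed intensities
z = rng.integers(0, 2, size=8)       # a binary labeling

def log_gauss(y, mu, sigma=1.0):
    """Gaussian log-density, used as an illustrative pixel likelihood."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (y - mu)**2 / (2 * sigma**2)

mu0, mu1 = 0.0, 1.0
# direct evaluation of ln p(y|z)
direct = np.sum(np.where(z == 1, log_gauss(y, mu1), log_gauss(y, mu0)))
# decomposition via the logit field psi_i = ln p(y_i|1) - ln p(y_i|0)
psi = log_gauss(y, mu1) - log_gauss(y, mu0)
decomposed = np.sum(z * psi + log_gauss(y, mu0))
assert np.isclose(direct, decomposed)
```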

Next, we apply a prior that penalizes the length of the boundaries of the label map,

 (4) $P(z) \propto \exp(-\lambda L(z)).$

Here, $\lambda$ is a positive constant. The larger $\lambda$, the more irregular segmentation boundaries are penalized and therefore discouraged. We defer discussion of the length functional $L(z)$ to Section 2.4.

By Bayes’ rule, the posterior probability of the label map given the image is

 (5) $P(z \mid y) \propto p(y \mid z)\,P(z),$ so that

 (6) $\ln P(z \mid y) = \sum_{i \in I_X} z_i \psi_i + \ln p(y_i \mid 0) - \lambda L(z) + \mathrm{const}.$

Here the constant is equal to the log-partition function of the prior distribution. This constant is not easily computed, as it requires a sum over all of the configurations of $z$.
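For a tiny label field, the posterior of Eqn. (6) and its partition constant can be computed by exhaustive enumeration, which illustrates exactly why this becomes infeasible at image scale. The 1D length stand-in and the numbers below are illustrative; the constant term $\ln p(y_i \mid 0)$ is omitted since it cancels in the normalization.

```python
import itertools
import numpy as np

psi = np.array([1.5, -0.5, 2.0, -1.0])   # illustrative logit likelihoods
lam = 0.5

def length(z):
    """1D stand-in for the boundary-length functional: count transitions."""
    return np.sum(np.abs(np.diff(z)))

configs = [np.array(b, dtype=float)
           for b in itertools.product([0, 1], repeat=len(psi))]
# unnormalized posterior weights, cf. Eqn. (6)
w = np.array([np.exp(np.sum(z * psi) - lam * length(z)) for z in configs])
posterior = w / w.sum()                  # dividing out the partition function
assert np.isclose(posterior.sum(), 1.0)
# brute force needs 2^n configurations: hopeless for real images
assert len(configs) == 2**len(psi)
```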

### 2.2 Variational Mean-Field Approximation

As mentioned above, the partition function cannot easily be computed. In the variational mean-field (VMF) approach [58], we approximate the posterior distribution $P(z \mid y)$ via a simpler variational distribution $Q$ by minimizing the distance between $Q$ and $P$ (here, in a Kullback-Leibler sense – see details below). The explicit computation of the integrals involved in the partition function (for continuous variables) can thereby be avoided. Specifically, the mean-field method approximates the joint distribution of a countable family of random variables as a product of univariate distributions. The VMF approximation is widely used in machine learning and other fields [58].

For the binary segmentation problem, we define the mean-field approximation of the posterior distribution as a field of independent Bernoulli random variables, defined on the lattice with probabilities $\theta_i$, which constitute the random field $z$:

 (7) $Q(z; \theta) \triangleq \prod_{i \in I_X} \theta_i^{z_i} (1 - \theta_i)^{1 - z_i}$

 (8) $\phantom{Q(z; \theta)} = \exp\left\{\sum_{i \in I_X} \left[z_i \phi_i + \ln(1 - \theta_i)\right]\right\},$

where $\phi_i \triangleq \ln \frac{\theta_i}{1 - \theta_i}$. Next, $Q$ is parameterized so that it minimizes the KL-divergence with respect to the original probability model, i.e.,

 (9) $\hat{\theta}^* \triangleq \operatorname*{arg\,min}_{\theta}\ \mathrm{KL}\left[Q(z;\theta)\,\|\,P(z \mid y)\right]$

 (10) $\phantom{\hat{\theta}^*} = \operatorname*{arg\,min}_{\theta}\ \mathbb{E}_Q\left[\ln Q(z;\theta) - \ln P(z \mid y)\right]$

 (11) $\phantom{\hat{\theta}^*} = \operatorname*{arg\,min}_{\theta}\ \mathbb{E}_Q\left[\ln Q(z;\theta) - \ln p(y \mid z) + \lambda L(z)\right].$

With minor abuse of KL-divergence notation:

 (12) $\hat{\theta}^* = \operatorname*{arg\,min}_{\theta}\ \mathrm{KL}\left[Q(z;\theta)\,\|\,p(y \mid z)\right] + \mathbb{E}_Q\left[\lambda L(z)\right].$

In other words, the VMF approximation selects the parameters of the factorized variational distribution such that (i) local image likelihood information, $p(y \mid z)$, is captured while at the same time (ii) considering the expected value of the segmentation boundary length (which is a global property that regularizes the solution).
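The equivalence of the product form (Eqn. (7)) and the exponential form (Eqn. (8)) of $Q$, with $\phi_i$ the logit of $\theta_i$, can be verified by enumerating a small field (the $\theta$ values below are arbitrary illustrations):

```python
import itertools
import numpy as np

theta = np.array([0.2, 0.7, 0.5, 0.9])
phi = np.log(theta / (1 - theta))        # logits phi_i

def Q_product(z):
    """Product form of the factorized Bernoulli field, Eqn. (7)."""
    return np.prod(theta**z * (1 - theta)**(1 - z))

def Q_exp(z):
    """Exponential form of the same distribution, Eqn. (8)."""
    return np.exp(np.sum(z * phi + np.log(1 - theta)))

total = 0.0
for bits in itertools.product([0, 1], repeat=len(theta)):
    z = np.array(bits)
    assert np.isclose(Q_product(z), Q_exp(z))   # (7) == (8)
    total += Q_product(z)
assert np.isclose(total, 1.0)                   # Q is a proper distribution
```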

### 2.3 Plefka’s Approximation

Although minimizing the KL-divergence term in Eqn. (12) with respect to $\theta$ is relatively straightforward, minimizing $\mathbb{E}_Q[L(z)]$ is generally not, as it entails a sum over all configurations of $z$. In the mean-field literature, difficult expectations of functions of random fields have been approximated using Plefka’s method [46].

Noting that $\mathbb{E}_Q[z] = \theta$ according to Eqn. (7), and that the first-order Taylor expansion of the curve-length function around a configuration $z^*$ is $L(z) \approx L(z^*) + (z - z^*) \cdot \nabla L(z^*)$, Plefka’s approximation states that

 (13) $\mathbb{E}_Q[L(z)] \approx L(z^*) + \left(\mathbb{E}_Q[z] - z^*\right) \cdot \nabla L(z^*) \approx L(\mathbb{E}_Q[z]) = L(\theta),$

so that an approximation of Eqn. (12) is \optarxiv

 (14) $\hat{\theta} \triangleq \operatorname*{arg\,min}_{\theta}\ \mathrm{KL}\left[Q(z;\theta)\,\|\,p(y \mid z)\right] + \lambda L(\theta),$

where $L(\theta)$ denotes the length functional evaluated at the mean field $\theta = \mathbb{E}_Q[z]$.

Assuming $L$ is convex, the Plefka approximation of Eqn. (14) is a lower bound to the original objective function of Eqn. (12), as Jensen’s inequality states $L(\mathbb{E}_Q[z]) \leq \mathbb{E}_Q[L(z)]$. While this is not directly useful for our purposes, there has been some work on “converse Jensen inequalities” [23] that may provide useful bound relationships. In the end, approximations are justified by the quality of their results, such as the favorable properties highlighted in Section 4.
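Jensen's inequality, and hence the lower-bound property of the Plefka approximation, can be checked exactly on a small field by enumerating all $2^n$ configurations, here with a 1D total-variation stand-in for the length functional (the $\theta$ values are illustrative):

```python
import itertools
import numpy as np

theta = np.array([0.1, 0.2, 0.8, 0.9, 0.5, 0.3])   # Bernoulli parameters

def tv1d(u):
    """1D total variation, a convex stand-in for the length functional L."""
    return np.sum(np.abs(np.diff(u)))

exact = 0.0
for bits in itertools.product([0, 1], repeat=len(theta)):
    z = np.array(bits, dtype=float)
    q = np.prod(theta**z * (1 - theta)**(1 - z))   # Q(z; theta), Eqn. (7)
    exact += q * tv1d(z)                            # E_Q[L(z)], by enumeration

plefka = tv1d(theta)                                # L(E_Q[z]) = L(theta)
assert plefka <= exact + 1e-12                      # Jensen: a lower bound
```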

### 2.4 Continuous Variant of Variational Problem

In the previous section, we showed how the problem of computing the posterior distribution of a label-field under an (unspecified) boundary-length prior results in solving the optimization problem of Eqn. (14). To solve this problem using computationally efficient PDE optimization techniques, we first replace the random-field defined on a discrete lattice by one defined on continuous space.

Expanding Eqn. (14) by using the definition of the log-likelihood (Eqn. (3)) and of $Q$ (Eqn. (8)), we get:

 (16) $\hat{\theta} = \operatorname*{arg\,min}_{\theta}\ \mathbb{E}_Q\left[\sum_{i \in I_X} z_i \phi_i + \ln(1 - \theta_i) - z_i \psi_i - \ln p(y_i \mid 0)\right] + \lambda L(\theta)$

 (18) $\phantom{\hat{\theta}} = \operatorname*{arg\,min}_{\theta}\ \sum_{i \in I_X} \left[\theta_i \phi_i + \ln(1 - \theta_i) - \theta_i \psi_i - \ln p(y_i \mid 0)\right] + \lambda L(\theta)$

 (19) $\phantom{\hat{\theta}} = \operatorname*{arg\,min}_{\theta}\ \sum_{i \in I_X} \left[\theta_i \phi_i + \ln(1 - \theta_i) - \theta_i \psi_i\right] + \lambda L(\theta),$

where the last step drops $\ln p(y_i \mid 0)$, which does not depend on $\theta$.

To solve the above equation by extending to the continuum, the logit transform of the likelihood is now defined as

 (20) $\psi(x) \triangleq \ln \frac{p(y(x) \mid z(x) = 1)}{p(y(x) \mid z(x) = 0)},$

where $x$ denotes the location (i.e., the continuous equivalent of the index $i$), and $y(x)$, $z(x)$, and $\theta(x)$ are the corresponding values of $y_i$, $z_i$, and $\theta_i$ at location $x$. Similarly, the continuous variant of the logit transform of the variational probability function, $\theta(x)$, is defined as

 (21) $\phi(x) \triangleq \ln \frac{\theta(x)}{1 - \theta(x)}.$

Now, if we denote by $v$ the area of a lattice element and replace the summation over the lattice with integration over $X$, then Eqn. (19) becomes, in continuous space,

 (22) $\hat{\theta} = \operatorname*{arg\,min}_{\theta}\ v^{-1} \int_X \theta(x)\left(\phi(x) - \psi(x)\right) + \ln(1 - \theta(x))\,dx + \lambda L(\theta).$

By the co-area formula [5], the length of the boundaries of a binary image defined on the continuum is equal to its total variation:

 (25) $L(z) = \mathrm{TV}[z(x)] = \int_X \|\nabla z(x)\|_2\,dx,$

where $\|\cdot\|_2$ is the 2-norm and $\nabla z$ is the (weak) gradient of $z$. (The weak gradient $\nabla z$ is defined by $\int_X \nabla z(x) \cdot \varphi(x)\,dx = -\int_X z(x)\,\nabla \cdot \varphi(x)\,dx$ for any smooth, compactly supported test function $\varphi$; in the case of $z$ being an element of a convex set, $\mathrm{TV}[z]$ is convex.)
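The co-area identity can be sanity-checked on a discrete binary image: the isotropic discrete TV of an axis-aligned square indicator matches its perimeter up to a small corner-discretization error. A minimal NumPy sketch:

```python
import numpy as np

# binary indicator of a 10x10 axis-aligned square on a 20x20 grid
z = np.zeros((20, 20))
z[5:15, 5:15] = 1.0

# isotropic discrete TV with forward differences,
# cropped to a common shape before combining
gx = np.diff(z, axis=1)[:-1, :]
gy = np.diff(z, axis=0)[:, :-1]
tv = np.sum(np.sqrt(gx**2 + gy**2))

# the square's perimeter is 4 * 10 = 40; the discrete TV matches it
# up to a small corner error (one corner pixel contributes sqrt(2), not 2)
assert abs(tv - 40.0) < 1.0
```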

Therefore, putting it all together, the continuous variant of the variational problem is:

 (27) $\hat{\theta} = \operatorname*{arg\,min}_{\theta}\ \int_X \theta(x)\left(\phi(x) - \psi(x)\right) + \ln(1 - \theta(x)) + v\lambda\,\|\nabla\theta(x)\|_2\,dx,$

which we call the Active Mean Field approximation. Note that $\phi$ depends on $\theta$ according to Eqn. (21).

### 2.5 Euler-Lagrange (EL) Equations

Defining the curvature operator

 (28) $\kappa(\theta(x)) \triangleq \nabla \cdot \left(\frac{\nabla\theta(x)}{\|\nabla\theta(x)\|_2}\right),$

the Euler-Lagrange equation describing the stationary points of Eqn. (27) is given by:

 (29) $\phi(x) - \psi(x) - v\lambda\,\kappa(\theta(x)) = 0.$

This can be derived as follows: Expanding $\phi$ according to Eqn. (21), we obtain the objective function

 (30) $E(\theta) = \int_X -\theta(x)\psi(x) + \theta(x)\ln\theta(x) + (1 - \theta(x))\ln(1 - \theta(x)) + v\lambda\,\|\nabla\theta(x)\|_2\,dx.$

The variation of $E$ is [55]

 (31) $\delta E(\theta; \delta\theta) \triangleq \left.\frac{\partial E(\theta + \epsilon\,\delta\theta)}{\partial \epsilon}\right|_{\epsilon = 0},$

where $\epsilon \in \mathbb{R}$, $\delta\theta$ denotes an admissible perturbation of $\theta$, and $\partial/\partial\epsilon$ denotes the partial derivative with respect to $\epsilon$. The variation becomes

 (32) $\delta E(\theta; \delta\theta) = \int_X -\psi(x)\,\delta\theta(x) + \ln\frac{\theta(x)}{1 - \theta(x)}\,\delta\theta(x) + v\lambda\,\frac{\nabla\theta(x) \cdot \nabla\delta\theta(x)}{\|\nabla\theta(x)\|_2}\,dx.$

Integration by parts assuming Neumann boundary conditions and using Eqn. (21) results in

 (33) $\delta E(\theta; \delta\theta) = \int_X \left(\phi(x) - \psi(x) - v\lambda\,\nabla \cdot \left(\frac{\nabla\theta(x)}{\|\nabla\theta(x)\|_2}\right)\right)\delta\theta(x)\,dx.$

As the variation needs to vanish for all admissible perturbations at optimality, we obtain the Euler-Lagrange equation

 (34) $\phi(x) - \psi(x) - v\lambda\,\kappa(\theta(x)) = 0.$

According to Eqn. (21), $\phi$ is obtained from $\theta$ through a logit transform. Consequently, we can obtain $\theta$ from $\phi$ via the sigmoid function

 (35) $\sigma(x) \triangleq (1 + \exp(-x))^{-1}$

as $\theta(x) = \sigma(\phi(x))$. The sigmoid function, $\sigma$, is strictly monotonically increasing (i.e., $\sigma'(x) > 0$), so that

 (36) $\nabla\theta(x) = \sigma'(\phi(x))\,\nabla\phi(x)$

and

 (37) $\frac{\nabla\theta(x)}{\|\nabla\theta(x)\|_2} = \frac{\sigma'(\phi(x))\,\nabla\phi(x)}{\|\sigma'(\phi(x))\,\nabla\phi(x)\|_2} = \frac{\nabla\phi(x)}{\|\nabla\phi(x)\|_2}.$

Hence, the Euler-Lagrange equation can be rewritten as

 (38) $\phi(x) - \psi(x) - v\lambda\,\kappa(\phi(x)) = 0.$

In summary, the distribution $Q(z;\theta)$ approximates the “exact” distribution $P(z \mid y)$ in the KL-divergence sense when $\phi$ (the logit transform of the parameter $\theta$) satisfies the Euler-Lagrange equation of the AMF model; we will refer to Eqn. (38) as the “AMF Equation.” As the objective function is strictly convex (see Section 4.2) in $\theta$, the stationary point is the unique global optimum.
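The invariance used in Eqn. (37), namely that the unit-normalized gradients of $\theta$ and of its logit $\phi$ coincide wherever $\sigma'(\phi) > 0$, can be checked numerically with central differences on a smooth, hypothetical logit field:

```python
import numpy as np

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
phi = lambda x, y: np.sin(x) + 0.5 * np.cos(y)   # illustrative logit field
theta = lambda x, y: sigmoid(phi(x, y))

def num_grad(f, x, y, h=1e-5):
    """Numerical gradient via central differences."""
    return np.array([(f(x + h, y) - f(x - h, y)) / (2 * h),
                     (f(x, y + h) - f(x, y - h)) / (2 * h)])

g_phi = num_grad(phi, 0.3, 0.7)
g_theta = num_grad(theta, 0.3, 0.7)
# the normalized gradients coincide, as in Eqn. (37)
assert np.allclose(g_phi / np.linalg.norm(g_phi),
                   g_theta / np.linalg.norm(g_theta), atol=1e-6)
```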

## 3 Connections to Chan-Vese and ROF

In this section we establish the connection between the AMF model and the Chan-Vese segmentation model (Section 3.1) as well as the ROF denoising model (Section 3.2). In particular, we show that the Chan-Vese Euler-Lagrange equations correspond to those of the zero level-set of the AMF model, so a Chan-Vese segmentation can be obtained as the zero level-set of the AMF solution. We also show that the AMF Euler-Lagrange equations (Eqn. (38)) have the same form as those of the ROF model. Therefore, the solver technologies that have been developed for the ROF model may be deployed for AMF.

### 3.1 Connection to Chan-Vese

To derive the connection between the AMF and the Chan-Vese approach, we introduce the energy for the generalized Chan-Vese model based on a relaxed indicator function (i.e., θ(x)∈[0,1] instead of θ(x)∈{0,1}), which, according to [8], can be written as

 (39) Ecv(θ)=∫X−θ(x)ψ(x)+vλ∥∇θ(x)∥2 dx,

with the first part of the function being the data term and the second term regularizing the boundary length. Such a length prior is essential to encourage large, contiguous segmentation areas. The importance of the length prior becomes especially clear in the context of the Mumford-Shah model [39], of which the Chan-Vese model is a special case. In the absence of a length prior, the Mumford-Shah approach will assign each pixel in regions with constant image intensity to its own (separate) parcel. The standard Chan-Vese model [16] (without the area prior of this model) can be recovered from Eqn. (39) for the special case that the class conditional intensity model is Gaussian, i.e., p(y(x)|z(x)=0)=N(y(x);μ0,σ0²) and p(y(x)|z(x)=1)=N(y(x);μ1,σ1²). In this case:

 (40) ψ(x)≜ln(σ0/σ1)−(y(x)−μ0)²/(2σ0²)+(y(x)−μ1)²/(2σ1²),

and the corresponding Chan-Vese energy becomes:

 (41) Egausscv(θ)=∫X−θ(x)(ln(σ0/σ1)−(y(x)−μ0)²/(2σ0²)+(y(x)−μ1)²/(2σ1²))+vλ∥∇θ(x)∥2 dx.

The means of the Gaussians (μ0, μ1) are estimated jointly in the standard Chan-Vese model [15] and the standard deviations are assumed to be fixed and identical. In contrast, in the generalized Chan-Vese model (Eqn. (39)), the parameters of ψ are typically assumed to be fixed and are not jointly estimated. This assures the convexity of the overall model. However, if desired, these parameters can also be estimated; a simple approach would be an alternating optimization strategy. Note that the Chan-Vese segmentation model of Eqn. (41) becomes Otsu thresholding [43] if the length prior is disregarded (vλ=0). Hence, unlike Chan-Vese segmentation, Otsu thresholding cannot suppress image fragmentation and irregularity.
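
As a quick numerical illustration of this special case (a sketch assuming SciPy's `norm.logpdf`; the means and standard deviation below are illustrative values), the pointwise decision boundary without the length prior lies at the midpoint of the two class means when the variances are equal:

```python
import numpy as np
from scipy.stats import norm

mu0, mu1, s = 0.2, 0.8, 0.1
y = np.linspace(0.0, 1.0, 1001)
# log-likelihood ratio between the two Gaussian class models
llr = norm.logpdf(y, mu1, s) - norm.logpdf(y, mu0, s)
# without the length prior (v*lambda = 0) the label flips where llr = 0,
# which for equal variances is the midpoint of the means
crossing = y[np.argmin(np.abs(llr))]
assert abs(crossing - 0.5) < 1e-2
```

This pointwise thresholding behavior is what the length prior of Eqn. (39) regularizes away.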

The Euler-Lagrange equations of the generalized Chan-Vese energy (Eqn. (39)) are:

 (42) −ψ(x)−vλκ(θ(x))=0.

This is identical to the AMF Euler-Lagrange equation (Eqn. (34)) at the zero level-set, where ϕ(x)=0. By construction, the zero level-set of a level-set implementation of the generalized Chan-Vese model has to agree with the solution obtained from the Euler-Lagrange equations of the generalized Chan-Vese model using indicator functions, as both minimize the same energy function using different parameterizations. Consequently, the zero level-sets of both the AMF model and the level-set implementation of Chan-Vese also need to agree.

In contrast to the generalized Chan-Vese model described above, the original Chan-Vese model of [15], formulated as a curve evolution approach, is characterized by an energy function with an additional term penalizing the area inside the segmentation boundary, of the form

 (43) Earea(C)=νArea(inside(C)),

where C denotes the curve defining the boundary of the segmentation, ν is a non-negative constant weighting the area influence, and Area(inside(C)) simply denotes the area enclosed by C. For implementation purposes, C is implicitly represented by the zero level-set of a level-set function ϕ. The corresponding Euler-Lagrange equation is, on the zero level-set [15],

 (44) −ψ(x)−vλκ(ϕ(x))+ν=0 .

Examining the ν level-set of the AMF model (Eqn. (38)), i.e., the set where ϕ(x)=ν, we notice that this level-set satisfies the same Euler-Lagrange equation as the zero level-set of the Chan-Vese model with a specified non-zero value of ν. In other words, the level-sets of the dense AMF solution provide a family of solutions for the Chan-Vese problem for a continuum of values of the area penalty.

Note that such area penalties cannot effectively be added in the indicator-function-based approaches to the Chan-Vese active-contour models proposed by Appleton et al. [2] and Bresson et al. [8]. The goal of these models is to capture a binary segmentation result through a relaxed indicator function (i.e., θ(x)∈[0,1] instead of θ(x)∈{0,1}). However, it can be shown [41] that in certain instances this relaxation produces undesirable segmentation results when combined with an area penalty.
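
The family-of-solutions property is straightforward to exploit in practice: given a dense AMF solution θ∗, thresholding it at σ(ν) yields the segmentation corresponding to area penalty ν. A minimal sketch (the disk-shaped logit field is an illustrative assumption, not from the paper):

```python
import numpy as np
from scipy.special import expit

# synthetic dense AMF solution: a smooth logit field phi,
# positive inside a disk of radius 0.5
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
phi = 0.5 - np.hypot(x, y)
theta_star = expit(phi)

# thresholding theta* at sigma(nu) is the same as thresholding phi at nu;
# a larger area penalty nu shrinks the segmented region
areas = [np.sum(theta_star > expit(nu)) for nu in (-0.2, 0.0, 0.2)]
assert areas[0] > areas[1] > areas[2]
```

One dense AMF solve thus provides Chan-Vese segmentations for a whole range of area penalties at no extra cost.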

### 3.2 Connection between AMF and ROF Models

In their seminal paper, Rudin, Osher and Fatemi [51] proposed a denoising method for, e.g., intensity images u0,

 (45) u∗(x)=argminu∫X∥∇u(x)∥2 dx s.t. ∫X(u(x)−u0(x))2 dx=σ2 ,

where σ² denotes the variance of the image noise. As discussed by Vogel and Oman [57], this is equivalent to the following unconstrained problem,

 (47) u∗(x)=argminu[∫X(u(x)−u0(x))2dx+α2∫X∥∇u(x)∥2dx] ,

for a suitable choice of α. They refer to this formulation as “TV penalized least squares.”

The corresponding Euler-Lagrange equation is

 (49) u(x)−u0(x)−ακ(u(x))=0 .

For α=vλ, this equation has the same form as the Euler-Lagrange equation of the AMF model of Eqn. (38), which is

 (50) ϕ(x)−ψ(x)−vλκ(ϕ(x))=0 .

In this equivalence, the denoised intensity image of the ROF model, u, corresponds to the logit parameter field of the AMF distribution, ϕ, while the noisy input intensity image of the ROF model, u0, corresponds to the logit-transformed label probabilities of the AMF problem, ψ. Furthermore, if the class conditional intensity model is homoscedastic Gaussian, then ψ (from Eqn. (40)) is linear in the observed intensity. The AMF solution is thus equivalent to solving an ROF problem that effectively denoises the logit-transformed approximation of the posterior label likelihoods.
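
The linearity claim is easy to check numerically: for a homoscedastic Gaussian model the quadratic terms of the log-likelihood ratio cancel. A small sketch (assuming SciPy; the means μ0=0, μ1=1 and standard deviation are illustrative values):

```python
import numpy as np
from scipy.stats import norm

y = np.linspace(-2.0, 2.0, 401)
s = 0.3  # homoscedastic: equal class standard deviations
psi = norm.logpdf(y, 1.0, s) - norm.logpdf(y, 0.0, s)
# the (y - mu)^2 terms cancel, so psi is linear in the intensity y:
# its discrete second differences vanish
assert np.allclose(np.diff(psi, 2), 0.0, atol=1e-10)
```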

Because of the equivalence of the Euler-Lagrange equations of the AMF and the ROF models, the considerable technology developed for solving the ROF model may be applied to the AMF model. In particular, a globally optimal solution (see Section 4.2 for a proof of the convexity of this model) of the AMF model can be computed by the ROF denoising approach. In other words, given an ROF solver (ROFsolve) that minimizes

 (51) EROF(u;u0,α)=∫X(u(x)−u0(x))2+α2∥∇u∥2 dx

such that

 (52) u∗≜arg minu EROF(u;u0,α)=ROFsolve(u0,α),

solving the AMF problem for a given ψ and vλ then simply becomes

 (53) θ∗=σ(ROFsolve(ψ,vλ)).

Eqn. (53) is the central result concerning the implementation of our method as it connects the optimal AMF solution to a straightforward ROF denoising problem.
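
Eqn. (53) translates directly into a few lines of code. The sketch below is an illustration, not the authors' implementation: it uses scikit-image's `denoise_tv_chambolle` as the ROF solver and SciPy's `expit` as the sigmoid; how the solver's `weight` parameter maps onto vλ depends on the solver's internal scaling, so the value used here is illustrative:

```python
import numpy as np
from scipy.special import expit
from scipy.stats import norm
from skimage.restoration import denoise_tv_chambolle

def amf_segment(image, mu0, sigma0, mu1, sigma1, weight):
    # psi: log-likelihood ratio of foreground (mu1) vs background (mu0)
    psi = norm.logpdf(image, mu1, sigma1) - norm.logpdf(image, mu0, sigma0)
    # ROF / TV denoising of psi gives the optimal logit field phi
    phi = denoise_tv_chambolle(psi, weight=weight)
    # theta = sigma(phi): per-pixel foreground probabilities
    return expit(phi)

# toy example: noisy two-region image
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img += rng.normal(scale=0.4, size=img.shape)

theta = amf_segment(img, mu0=0.0, sigma0=0.4, mu1=1.0, sigma1=0.4, weight=2.0)
seg = theta > 0.5  # zero level-set of phi: the binary segmentation
```

The returned θ is a full probability map, so the segmentation uncertainty is available for free alongside the binary labeling.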

## 4 Additional Properties of AMF

We now summarize some approximation properties of AMF (Section 4.1), show the objective function to be convex (Section 4.2), and show that AMF is unbiased in a specific sense (Section 4.3).

### 4.1 Approximation Properties

Our goal is an efficient yet accurate approximation, Q, to the exact posterior distribution p(z|y) for general realizations of z. To show that Q is in fact a good approximation, we study its properties here. For convenience, we only summarize the results of some of the approximation properties of the AMF model and refer to the appendix for mathematical details. In particular, the appendix shows that

• The zero level-set of ϕ is the boundary of the most probable realization of Q and is the MAP realization under the exact posterior p(z|y). This is not generally the case for mean field approximations.

• Because the log partition function of the prior is not easily computed we compare the probability ratios Q(z)/Q(ẑ) with p(z|y)/p(ẑ|y), where ẑ is the most probable realization under both distributions according to a). These probability ratios are not only in agreement for the zero level-set, but also for realizations that are bounded by any level-set of ϕ.

• The probability ratios approximately agree for realizations whose boundary normals are close in direction to ∇ϕ.

• If neither a) nor b) hold, the probability ratio for Q will be larger than that for the exact posterior, i.e., Q underestimates the length penalty associated with the prior.

### 4.2 Convexity

A nice property of the AMF model is that its energy is strictly convex and therefore has a unique global minimizer. This is in contrast to the TV-based segmentation models [2, 8], which are convex (but not strictly so) and may therefore have multiple optima.

To show convexity, we consider the continuum formulation of AMF, which can be rewritten as a function of θ as:

 (54) Eamf(θ)=∫X−θψ+vλ∥∇θ∥2+(1−θ)ln(1−θ)+θlnθ dx

where dependencies on space are dropped only for notational convenience (i.e., θ≜θ(x) and ψ≜ψ(x)) and we expressed ϕ in terms of θ. The term

 (56) ∫X−θψ+vλ∥∇θ∥2 dx

is convex in θ, as the first summand is linear in θ, the 2-norm is convex, ∇ is a linear operator, and both terms are summed with a positive weight vλ. To see that the rest of the integrand is also convex, consider a function of the form

 (57) f(θ)=(1−θ)ln(1−θ)+θln(θ).

Differentiating twice yields

 (58) f′′(θ)=1/(θ(1−θ))>0 for θ∈(0,1).

Therefore, f is strictly convex. Because the sum of convex and strictly convex functions is strictly convex, the overall AMF energy is strictly convex in θ and therefore has a unique global minimizer (see [6] for details on convexity-preserving operations). In particular, we note that for a non-informative data term, i.e., when pixels are locally equally likely to be foreground or background (ψ(x)=0), θ(x)=1/2 is the globally optimal solution. For the related standard TV segmentation model [2], which would only minimize Eqn. (56), any constant solution would be a global minimizer.
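
The strict convexity of the entropy term can also be checked numerically: the discrete second differences of f are positive and, scaled by the grid spacing, match f′′(θ)=1/(θ(1−θ)). A small sketch (assuming NumPy; the grid is an illustrative choice away from the endpoints, where f′′ diverges):

```python
import numpy as np

theta = np.linspace(0.01, 0.99, 981)
f = (1 - theta) * np.log(1 - theta) + theta * np.log(theta)

# discrete second differences of a strictly convex function are positive
d2 = np.diff(f, 2)
assert np.all(d2 > 0)

# and d2 / h^2 approximates f''(theta) = 1 / (theta (1 - theta))
h = theta[1] - theta[0]
assert np.allclose(d2 / h**2, (1.0 / (theta * (1 - theta)))[1:-1], rtol=1e-2)
```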

### 4.3 Unbiased in Homogeneous Regions

In this section we analyze the behavior of the AMF estimator over homogeneous (i.e., constant intensity) patches of an image. The AMF objective function, Eqn. (27), can be written:

 (59) θ̂=argminθ∫X KL[Q(z(x);θ(x))||p(y(x)|z(x))]dx+vλTV[z] .

Now, for a patch of constant intensity, i.e., constant y(x) and hence constant ψ(x), the optimum will be attained at