Active Mean Fields for Probabilistic Image Segmentation: Connections with Chan–Vese and Rudin–Osher–Fatemi Models
Segmentation is a fundamental task for extracting semantically meaningful regions from an image. The goal of segmentation algorithms is to accurately assign object labels to each image location. However, image noise, shortcomings of algorithms, and image ambiguities cause uncertainty in label assignment. Estimating this uncertainty is important in multiple application domains, such as segmenting tumors from medical images for radiation treatment planning. One way to estimate these uncertainties is through the computation of posteriors of Bayesian models, which is computationally prohibitive for many practical applications. On the other hand, most computationally efficient methods fail to estimate label uncertainty. We therefore propose in this paper the Active Mean Fields (AMF) approach, a technique based on Bayesian modeling that uses a mean-field approximation to efficiently compute a segmentation and its corresponding uncertainty. Based on a variational formulation, the resulting convex model combines any label-likelihood measure with a prior on the length of the segmentation boundary. A specific implementation of that model is the Chan–Vese segmentation model (CV), in which the binary segmentation task is defined by a Gaussian likelihood and a prior regularizing the length of the segmentation boundary. Furthermore, the Euler–Lagrange equations derived from the AMF model are equivalent to those of the popular Rudin–Osher–Fatemi (ROF) model for image denoising. Solutions to the AMF model can thus be implemented by directly utilizing highly efficient ROF solvers on log-likelihood ratio fields. We qualitatively assess the approach on synthetic data as well as on real natural and medical images. For a quantitative evaluation, we apply our approach to the icgbench dataset.
Keywords: segmentation, mean-field approximation, Rudin–Osher–Fatemi model, Chan–Vese model
1 Introduction
Image segmentation approaches rarely provide measures of segmentation label uncertainty. In fact, most existing and probabilistically motivated segmentation approaches only compute the maximum a posteriori (MAP) solution [34, 35, 8, 20, 44]. Using these models to segment ambiguous boundaries is troublesome, especially for applications where confidence in object boundaries impacts analysis. For example, many radiation treatment plans base dose distribution on the boundaries of tumors segmented from medical images with low contrast [37]. This can be problematic, as segmentation variability can have a substantial effect on radiation treatment; Martin et al. [37] report that such variability caused the mean observer tumor control probability (i.e., the probability to control or eradicate a tumor at a given dose) to range from (22.6 ± 11.9)% to (33.7 ± 0.6)% between six participating physicians in a study of intensity-modulated radiation therapy (IMRT) for 4D-CT-based non-small cell lung cancer radiotherapy. The precision of the planning could be improved around highly confident tumor boundaries [37, 30], thereby reducing the risk of damaging healthy tissue in those areas. As significant information about label uncertainty is contained in the posterior distribution, it is natural to go beyond determining a MAP solution and instead to either compute the posterior distribution itself or a computationally efficient approximation.
This paper develops such a method for an efficient approximation of the posterior distribution on labels. Furthermore, it connects this method to the Rudin–Osher–Fatemi (ROF) model for image denoising [51, 57, 3] and to previously existing level-set segmentation approaches [42], in particular the Chan–Vese segmentation model [15]. Due to these connections we can (i) make use of the efficient solvers for the ROF model to approximate the posterior distribution on labels and (ii) compute the solution to the Chan–Vese model through the MAP realization of our approximation to the posterior distribution; i.e., our model is more general and subsumes the Chan–Vese model. In contrast to the implicit style of active-contour methods that represent labels by way of zero level-sets, such as the classical formulation of the Chan–Vese model, we use a dense logit (“log-odds”) representation of label probabilities. This is akin to the convex approaches for active contours [2], but in a probabilistic formulation.
1.1 Motivations
Beyond optimal labelings, posterior distributions on labelings offer some advantages. For example, in many instances, one wishes to obtain information about segmentation confidence; or, in change detection, distributions can help to determine whether an observed apparent change may be due to chance. Furthermore, probabilistic models on latent label fields can be useful for constructing more ambitious systems that, e.g., perform simultaneous segmentation and atlas registration [49]. However, the computation of posterior distributions is typically costly. Conversely, the computation of deterministic segmentation results, as for example by the popular active-contour approaches, is inexpensive and has been shown to be effective. Hence, we were motivated to merge both technologies, to obtain an active-contour-inspired segmentation approach capable of estimating posterior distributions efficiently.
In previous work [50], we described an Active Mean Fields (AMF) approach to image segmentation that used a variational mean-field (VMF) approach along with a logit representation to construct a segmentation system similar to the one described here. This method empirically generated accurate segmentations and converged well, but used a different, and more awkward, approximation of the expected value of the length functional. In the present work, we use a linearization approach via the Plefka approximation. Using this approximation has profound consequences, as it allows us to make connections to the Chan–Vese segmentation model [15] and the ROF denoising model [51] in the continuous space. This connection in turn makes possible the efficient implementation of the AMF model through approaches used for ROF denoising. Hence, the overall model is convex, easy to implement, and fast. Furthermore, we show good approximation properties in comparison to the “exact” distribution.
1.2 Contributions
The main contributions of this article are:

It derives an AMF approach that allows a computationally efficient estimation of the posterior distribution of the segmentation label map based on the VMF approximation for binary image segmentation regularized via a boundary length prior.

It establishes strong connections between the proposed AMF model, active-contour models, and total-variation (TV) denoising. In particular, the model retains the global optimality of convex active contours while estimating a level-set function that has a direct interpretation as an approximate posterior on the segmentation. This is in contrast to level-set techniques which use the zero level-set only as a means for representing the object boundary, with no (probabilistic) interpretation of the nonzero level-sets.

It demonstrates how the Rudin–Osher–Fatemi (ROF) TV denoising model can be used to efficiently compute solutions of the AMF model. Hence, given the widespread availability of high-performance ROF solvers, the AMF model is very simple to implement and will be immediately usable by the community with little effort.
1.3 Background
The earliest and simplest probabilistic image segmentation approaches frequently used pixelwise independent maximum likelihood (ML) or MAP classifiers [56], which could be as simple as image thresholding. Better performance in the face of noise motivated the use of regularization, or prior probability models on the label fields that discouraged fragmentation [4], leading to the widespread application of Markov random field (MRF) models [28, 59]. Image segmentation with MRF models was initially thought to be computationally complex, which motivated approximations, including the mean-field approach from statistical physics [31, 17]. More recently, fast solvers have appeared using graph cuts, belief propagation, or linear programming techniques that yield globally optimal solutions for certain energy functions [54].
Typically, the segmentation problem is posed as the minimization of an energy or negative log-likelihood that incorporates an image likelihood and a regularization term on the boundaries of segmented objects. This regularization may be specified either (i) directly on the boundary (explicitly as a parametric curve or surface, or implicitly through the use of level-set functions), or (ii) by representing objects via indicator functions, where discontinuities in those functions identify boundaries. The direct boundary representation is attractive because it reduces complexity, as only objects of codimension one need to be dealt with (i.e., a curve in 2D, a surface in 3D, etc.). The price for this reduction in complexity is that minimization usually becomes non-convex, and hence can get trapped in poor local minima in the absence of good initializations. In the snakes approach [32], a popular example of explicit boundary representation, the boundary curve, represented by control points, is evolved such that it captures the object of interest (for example, by getting attracted to edges) while assuring regularity of the boundary by penalizing rapid boundary changes through elasticity and rigidity terms. Although computationally efficient, explicit parametric representations cannot easily deal with topological changes and have numerical issues due to their fixed object parameterization (e.g., when an initial curve grows or shrinks drastically). Furthermore, though not an intrinsic problem of explicit parameterizations, such methods are typically not geometric, making evolutions dependent on curve parameterizations.
In contrast, level-set representations [42, 36] of active-contour methods [10, 33] do not suffer from these topological and parameterization issues. These methods use implicit representations of the label field, where an object’s boundary is, for example, represented through the zero level-set of a function. Whereas a parametric boundary representation is evolved directly, for example by moving its associated control points, for a level-set representation the level-set function is evolved, which indirectly implies an evolution of the segmentation boundary. Specifically, an evolution equation is imposed on the level-set function such that its zero level-set moves as desired. As the level-set function is (by construction) either strictly positive or negative (depending on the convention used) inside the object and of the opposite sign outside the object, a labeling can be obtained by simple thresholding. Level-set approaches for image segmentation make use of advanced numerical methods to solve the associated partial differential equations [42, 53]. To assure boundary regularity, segmentation energies typically penalize boundary length or surface area.
While initial curve and surface evolution methods focused on energy minimization based on boundary regularity and boundary misfit, later approaches, such as the Chan–Vese model [15], added terms that encode statistics about the regions inside and outside the segmentation boundary. Such region-based models can be as simple as homoscedastic (i.e., equal-variance) Gaussian likelihoods with specified (but distinct) means for foreground and background, respectively, as in the Chan–Vese case. They can also be much more complex, for example trying to maximally separate intensity or feature distributions inside and outside an object [26]. Overall, a large variety of region-based approaches exist, providing great modeling freedom [20]. While region-based models are less sensitive to initialization, they are still non-convex when combined with weighted curve-length terms for regularization. Hence, a global optimum cannot be guaranteed by numerical optimization for such formulations. The dependency on curve and surface initializations popularized the formulation of energy minimization methods which can find a global energy optimum. One such approach is to reformulate an energy minimization problem as a problem defined over an appropriately chosen graph.
In the context of image segmentation, the idea is to create a graph with added source and sink nodes in such a way that a minimum cut of the graph implies a variable configuration which minimizes the original image segmentation energy [7]. For a large class of binary segmentation problems, these graph-cut approaches allow for the efficient computation of globally optimal solutions through max-flow algorithms [34]. In particular, discrete versions of the active-contour and Chan–Vese models (with fixed means) can be formulated. To avoid computing trivial solutions for the boundary-only active contours, graph-cut formulations add seed points, specifying fixed background and foreground pixels or voxels (in 3D). While conceptually attractive, graph-cut approaches suffer from the need to build the graphs and the necessity to specify discrete neighborhood structures, which may negatively affect the regularity of the obtained solution by creating so-called metrication artifacts.
Recently, the focus has shifted away from level-set and graph representations to formulations of active contours and related models by means of indicator functions [2, 8] defined in the continuum and allowing for convex formulations. These methods are closely related to segmentation via graph cuts, but avoid the construction of graphs and can alleviate metrication artifacts. A key insight here is that the boundary-length or area term can be formulated through the total variation of an indicator function. This regularization formulation becomes convex upon relaxation of the indicator function to the interval $[0,1]$. Hence these approaches strike an attractive balance between partial differential equation (PDE)-based level-set formulations and the global properties of graph-cut methods. As they are specified via PDEs, highly accurate and fast implementations for these solvers are available [47]. As these convex formulations make use of TV terms, they are conceptually related to TV image denoising. The use of TV regularization for denoising was pioneered by Rudin, Osher, and Fatemi (the ROF model [51]). The ROF model uses quadratic (i.e., $L^2$) coupling to the image intensities and TV for edge-preserving noise removal [9]. Approaches with $L^1$ coupling, yielding a form of geometric scale-space, have also been proposed [13]. As we will see, our proposed approach is closely related to these modern TV regularization and denoising approaches.
Segmentation approaches based on energy optimization as discussed above typically either have a probabilistic interpretation (as negative log-likelihoods) or have been explicitly derived from probabilistic considerations. The reader is referred to Cremers et al. [20] for a review of recent developments in probabilistic level-set segmentation. All these techniques, while probabilistic in nature, seek optimal labels and do not directly provide information about the posterior distribution or uncertainty in their solutions. In contrast, our proposed AMF approach approximates posterior distributions from which one can infer a segmentation and its corresponding uncertainty.
1.4 AMF Segmentation Approach
AMF segmentation is a Bayesian approach, which results in a posterior distribution on the label map. The AMF approach combines explicit representations of label likelihoods with a boundary-length prior. As we will show, our approach has strong connections to ROF denoising and to convex as well as probabilistic active-contour formulations.
In prior work, Monte-Carlo approaches have been used to characterize posterior distributions on segmentations, which require sampling [24, 18, 45]. In addition, the Monte-Carlo approach is quite general with respect to statistical modeling assumptions, so that it could be applied to the likelihood and regularity terms of our segmentation tasks. Approximations are then only caused by the sampling.
A potential drawback of such a Monte-Carlo approach is that an accurate estimation might require the generation of a large number of samples, which can be time-consuming.
In contrast to the Monte-Carlo approach, our mean-field approximation is based on a factorized distribution that is quick to compute, but which is a relatively severe approximation. A potential drawback of our method is that samples drawn from the approximated posterior can have an unnaturally fragmented appearance. However, our experimental results reveal that the approximation is surprisingly accurate (in terms of correlation of the posterior probabilities and the segmentation area) when compared to the exact model using much slower Gibbs sampling.
In summary, the primary advantages of our approach are speed, simplicity, and leverage of existing convex solver technology. We show in Section 3.2 that using ROF denoising on the logit field of label probabilities results in a “denoised” logit transform from which a label probability function can easily be obtained through a sigmoid transformation. Given an ROF solver, the AMF model can thus be implemented in one line of source code. Furthermore, the AMF model provides a good approximation of the posterior of the segmentation under a curve-length prior, as we experimentally show in Section 5.3.
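Given an ROF solver, this recipe can be sketched in a few lines. The snippet below is a minimal illustration, not the implementation used in the paper: it assumes a homoscedastic Gaussian likelihood with purely illustrative parameters (`mu0`, `mu1`, `sigma`), and it uses a small Chambolle-style dual-projection iteration (with periodic boundary handling, for brevity) as a stand-in ROF solver; any off-the-shelf ROF solver could be substituted.

```python
import numpy as np

def rof_denoise(f, lam, n_iter=200, tau=0.125):
    # Stand-in ROF solver (Chambolle-style dual projection):
    # approximately minimizes 0.5 * ||u - f||^2 + lam * TV(u).
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        div_p = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        u = f - lam * div_p                    # current primal estimate
        gx = np.roll(u, -1, axis=0) - u        # forward-difference gradient of u
        gy = np.roll(u, -1, axis=1) - u
        norm = 1.0 + (tau / lam) * np.sqrt(gx ** 2 + gy ** 2)
        px = (px - (tau / lam) * gx) / norm    # projected dual update
        py = (py - (tau / lam) * gy) / norm
    div_p = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return f - lam * div_p

def amf_posterior(image, mu0, mu1, sigma, lam):
    # Logit (log-likelihood-ratio) field of a homoscedastic Gaussian model.
    r = ((image - mu0) ** 2 - (image - mu1) ** 2) / (2.0 * sigma ** 2)
    phi = rof_denoise(r, lam)                  # "denoise" the logit field
    return 1.0 / (1.0 + np.exp(-phi))          # sigmoid -> label probabilities

rng = np.random.default_rng(0)
img = np.full((64, 64), 0.2)
img[16:48, 16:48] = 0.8                        # bright square on dark background
img += 0.1 * rng.standard_normal(img.shape)
q = amf_posterior(img, mu0=0.2, mu1=0.8, sigma=0.2, lam=1.0)
```

Thresholding `q` at 0.5 recovers a Chan–Vese-style binary segmentation, while `q` itself retains the per-pixel label uncertainty.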
1.5 Structure of the Article
In Section 2, we specify a discrete-space probabilistic formulation of segmentation with the goal of finding the posterior distribution of labels given an input image. We use the VMF approach, along with a linearization approximation that simplifies the problem. This results in an optimization problem for determining the parameters of an approximation to the posterior distribution on pixel labels. In the style of Chan and Vese [15] and many others, we shift from discrete to continuous space, facilitating use of the calculus of variations for the optimization problem and yielding the Euler–Lagrange equations for the AMF model.
In Section 3, we show that the AMF Euler–Lagrange equations for the zero level-set correspond to those of a special case of the Chan–Vese model [15], and that the AMF “approximate posterior” has the same mode, or MAP realization, as the exact posterior distribution. Subsequently, we show that the AMF Euler–Lagrange equations have the same form as those of the ROF model of image denoising, and we discuss methods that may be used for solving AMF.
Section 4 describes other important properties of AMF. We show that for a oneparameter family of realizations, the approximated and exact posteriors agree ratiometrically, and we explore their agreement for more general realizations. In addition, we show that the AMF problem is convex, and is unbiased in a particular sense.
Section 5 shows the experimental results on examples that include intensity ambiguities. It also demonstrates the quality of the AMF in approximating the true posterior via Gibbs sampling. Furthermore, Section 5 discusses AMF results for real ultrasound images of the heart, the prostate, a common test image in computer vision, and on a large collection of images from the icgbench segmentation dataset [52].
Finally, Section 6 concludes with a summary and an outlook on future work. Detailed derivations of the approximation properties can be found in the appendix.
2 Active Mean Fields (AMF)
This section introduces the basic discrete-space probabilistic model (Section 2.1), which includes a conventional conditionally independent likelihood term and a prior that penalizes the boundary length of the labeling. The VMF approach is used (Section 2.2), along with a Plefka approximation (Section 2.3), to construct a factorized distribution that, given image data, approximates the posterior distribution on labelings. The resulting optimization problem for determining the parameters of the variational distribution has a KL-divergence data-attachment term and a TV regularizer. The objective function is converted to continuous space (Section 2.4), yielding the Euler–Lagrange equations of the AMF model (Section 2.5), which involve logit label probabilities and likelihoods along with a curvature term.
In the following sections, we use uppercase letters (e.g., $P$ and $Q$) to represent probability mass functions and lowercase letters (e.g., $p$ and $q$) to represent probability density functions.
2.1 Original Probability Model
Define the space of images as a compact domain $\Omega \subset \mathbb{R}^2$ indexed by $x$, and let $\mathcal{I}$ denote the indices of the lattice of image pixels. (Our theory also holds for higher dimensions, i.e., $\Omega \subset \mathbb{R}^d$; we discuss our approach in 2D for simplicity and hence talk about pixels. In 3D, for example, we would deal with voxel grids and would need to compute a 3D variant of total variation, but the overall results hold unchanged.) Furthermore, $L$ denotes a binary random field defined on the pixel lattice whose realizations are the binary labelings of a real-valued image $I$ on the pixel lattice; given the image pixel index $i$, $L_i$ and $I_i$ are the corresponding quantities specific to pixel $i$. For convenience, we write $p(I_i \mid L_i = c)$ with $c \in \{0, 1\}$, where the definition of this likelihood is problem specific and is assumed to be given (for example, specified parametrically or obtained through kernel density estimation on a given feature space; we will not address this issue here). Now, if we make the usual assumption that the likelihood term, i.e., the probability density of observing intensities conditioned on labels, is conditionally independent and identically distributed (iid), i.e.,
(1)  $p(I \mid L) = \prod_{i \in \mathcal{I}} p(I_i \mid L_i),$
then the corresponding log-likelihood, defined with respect to the logit transform of the pixel-specific image likelihood
(2)  $r_i := \ln \frac{p(I_i \mid L_i = 1)}{p(I_i \mid L_i = 0)},$
is
(3)  $\ln p(I \mid L) = \sum_{i \in \mathcal{I}} \big[ L_i\, r_i + \ln p(I_i \mid L_i = 0) \big].$
Next, we apply a prior that penalizes the length of the boundaries of the label map,
(4)  $P(L) = \frac{1}{Z} \exp\big( -\lambda\, \mathrm{Len}(L) \big).$
Here, $\lambda > 0$ is a constant and $Z$ is the normalizing partition function. The larger $\lambda$, the more irregular segmentation boundaries are penalized and therefore discouraged. We defer discussion of the length functional $\mathrm{Len}(\cdot)$ to Section 2.4.
By Bayes’ rule, the posterior probability of the label map given the image is
(5)  $P(L \mid I) = \frac{p(I \mid L)\, P(L)}{p(I)},$
so that
(6)  $\ln P(L \mid I) = \sum_{i \in \mathcal{I}} \big[ L_i\, r_i + \ln p(I_i \mid L_i = 0) \big] - \lambda\, \mathrm{Len}(L) - \ln Z - \ln p(I).$
Here the constant $\ln Z$ is equal to the log-partition function of the prior distribution. This constant is not easily computed, as it requires a sum over all of the configurations of $L$.
2.2 Variational Mean-Field Approximation
As mentioned above, the partition function cannot easily be computed. In the variational mean-field (VMF) approach [58], we approximate the posterior distribution $P(L \mid I)$ via a simpler variational distribution $Q$ by minimizing the distance between $Q$ and $P$ (here, in a Kullback–Leibler sense; see details below). The explicit computation of the integrals involved in the partition function (for continuous variables) can thereby be avoided. Specifically, the mean-field method approximates the joint distribution of a countable family of random variables as a product of univariate distributions. The VMF approximation is widely used in machine learning and other fields [58].
For the binary segmentation problem, we define the mean-field approximation of the posterior distribution as a field of independent Bernoulli random variables defined on the lattice with probabilities $q = (q_i)_{i \in \mathcal{I}}$, which constitute the random field $L$:
(7)  $Q(L; q) = \prod_{i \in \mathcal{I}} Q_i(L_i; q_i),$
(8)  $Q_i(L_i; q_i) = q_i^{L_i} (1 - q_i)^{1 - L_i},$
where $q_i \in (0, 1)$. Next, $Q$ is parameterized so that it minimizes the KL-divergence with respect to the original probability model, i.e.,
(9)  $\hat{q} = \operatorname*{arg\,min}_{q} \; \mathrm{KL}\big( Q(\cdot\,; q) \,\|\, P(\cdot \mid I) \big)$
(10)  $\phantom{\hat{q}} = \operatorname*{arg\,min}_{q} \; \mathbb{E}_Q[\ln Q(L; q)] - \mathbb{E}_Q[\ln P(L \mid I)]$
(11)  $\phantom{\hat{q}} = \operatorname*{arg\,min}_{q} \; \sum_{i \in \mathcal{I}} \big[ q_i \ln q_i + (1 - q_i) \ln(1 - q_i) - q_i\, r_i \big] + \lambda\, \mathbb{E}_Q[\mathrm{Len}(L)].$
With minor abuse of KL-divergence notation:
(12)  $\hat{q} = \operatorname*{arg\,min}_{q} \; \sum_{i \in \mathcal{I}} \mathrm{KL}\big( q_i \,\|\, \sigma(r_i) \big) + \lambda\, \mathbb{E}_Q[\mathrm{Len}(L)],$
where $\mathrm{KL}(q_i \,\|\, \sigma(r_i))$ denotes the KL-divergence between Bernoulli distributions with the indicated parameters and $\sigma(\cdot)$ is the sigmoid function. In other words, the VMF approximation selects the parameters of the factorized variational distribution such that (i) local image likelihood information, $\sigma(r_i)$, is captured while at the same time (ii) considering the expected value of the segmentation boundary length (which is a global property that regularizes the solution).
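Concretely, the data-attachment part of the objective is a sum of per-pixel KL divergences between Bernoulli distributions, which is trivial to evaluate for a factorized field. A small sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def bernoulli_kl(q, p, eps=1e-12):
    # Sum over the lattice of KL( Bern(q_i) || Bern(p_i) ).
    q = np.clip(q, eps, 1.0 - eps)  # guard against log(0)
    p = np.clip(p, eps, 1.0 - eps)
    return np.sum(q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p)))

rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=(32, 32))   # sigmoid of the likelihood logits
q = np.clip(p + 0.1, 0.0, 1.0)               # a perturbed variational field
```

The divergence vanishes exactly when the variational parameters match the local likelihood term, which is what the minimizer reduces to when the length prior is switched off.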
2.3 Plefka’s Approximation
Although minimizing the KL-divergence term in Eqn. (12) with respect to $q$ is relatively straightforward, minimizing $\mathbb{E}_Q[\mathrm{Len}(L)]$ is generally not, as it entails a sum over all configurations of $L$. In the mean-field literature, difficult expectations of functions of random fields have been approximated using Plefka’s method [46].
Noting that $\mathbb{E}_Q[L_i] = q_i$ according to Eqn. (7), and that the first-order Taylor expansion of the curve length function with respect to $L$ around $q$ is $\mathrm{Len}(L) \approx \mathrm{Len}(q) + \langle \nabla \mathrm{Len}(q), L - q \rangle$, Plefka’s approximation states that
(13)  $\mathbb{E}_Q[\mathrm{Len}(L)] \approx \mathrm{Len}\big( \mathbb{E}_Q[L] \big) = \mathrm{Len}(q),$
so that an approximation of Eqn. (12) is
(14)  $\hat{q} = \operatorname*{arg\,min}_{q} \; \sum_{i \in \mathcal{I}} \mathrm{KL}\big( q_i \,\|\, \sigma(r_i) \big) + \lambda\, \mathrm{Len}(q),$
where $\mathrm{Len}(q)$ denotes the length functional evaluated on the real-valued field $q = (q_i)$.
Assuming $\mathrm{Len}$ is convex, the Plefka approximation of Eqn. (14) is a lower bound to the original objective function of Eqn. (12), as Jensen’s inequality states that $\mathrm{Len}(\mathbb{E}_Q[L]) \leq \mathbb{E}_Q[\mathrm{Len}(L)]$. While this is not directly useful for our purposes, there has been some work on “converse Jensen inequalities” [23] that may provide useful bound relationships. In the end, approximations are justified by the quality of their results, such as the favorable properties highlighted in Section 4.
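This Jensen bound can be checked numerically. The sketch below uses the discrete isotropic total variation as a stand-in for the length functional (anticipating Section 2.4) and compares the Plefka value $\mathrm{Len}(\mathbb{E}_Q[L])$ against a Monte-Carlo estimate of $\mathbb{E}_Q[\mathrm{Len}(L)]$ under a factorized Bernoulli field; all names and parameters are illustrative.

```python
import numpy as np

def tv(u):
    # Isotropic discrete total variation (stand-in for the length functional).
    dx = np.diff(u, axis=0, append=u[-1:, :])
    dy = np.diff(u, axis=1, append=u[:, -1:])
    return np.sqrt(dx ** 2 + dy ** 2).sum()

rng = np.random.default_rng(1)
q = rng.uniform(0.1, 0.9, size=(32, 32))     # Bernoulli means q_i
samples = rng.random((500, 32, 32)) < q      # 500 draws of L ~ prod_i Bern(q_i)
mc_expectation = np.mean([tv(s.astype(float)) for s in samples])
plefka = tv(q)                               # Plefka: E[TV(L)] ~ TV(q)
```

For convex TV, Jensen guarantees `plefka <= mc_expectation`; the size of the gap gives a feel for how crude the linearization is for a given field `q`.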
2.4 Continuous Variant of Variational Problem
In the previous section, we showed how the problem of computing the posterior distribution of a label field under an (unspecified) boundary-length prior results in solving the optimization problem of Eqn. (14). To solve this problem using computationally efficient PDE optimization techniques, we first replace the random field defined on a discrete lattice by one defined on continuous space.
Expanding Eqn. (14) by using the definition of the log-likelihood (Eqn. (3)) and of $Q_i$ (Eqn. (8)), we get:
(15)–(19)
$\hat{q} = \operatorname*{arg\,min}_{q} \; \sum_{i \in \mathcal{I}} \big[ q_i \ln q_i + (1 - q_i) \ln(1 - q_i) \big] - \sum_{i \in \mathcal{I}} \big[ q_i \ln p(I_i \mid L_i = 1) + (1 - q_i) \ln p(I_i \mid L_i = 0) \big] + \lambda\, \mathrm{Len}(q)$
$\phantom{\hat{q}} = \operatorname*{arg\,min}_{q} \; \sum_{i \in \mathcal{I}} \big[ q_i \ln q_i + (1 - q_i) \ln(1 - q_i) - q_i\, r_i \big] + \lambda\, \mathrm{Len}(q),$
where terms independent of $q$ have been dropped.
To solve the above problem by extending to the continuum, the logit transform of the likelihood is now defined as
(20)  $r(x) := \ln \frac{p(I(x) \mid L(x) = 1)}{p(I(x) \mid L(x) = 0)},$
where $x \in \Omega$ denotes the location (i.e., the continuous equivalent of the index $i$), and $I(x)$, $L(x)$, and $q(x)$ are the corresponding values of $I_i$, $L_i$, and $q_i$ at location $x$. Similarly, the continuous variant of the logit transform of the variational probability function, $q(x)$, is defined as
(21)  $\varphi(x) := \ln \frac{q(x)}{1 - q(x)}.$
Now, if we denote by $a$ the area of a lattice element and replace the summation over the lattice with integration over $\Omega$, then Eqn. (19) becomes, in continuous space,
(22)–(24)  $\hat{q} = \operatorname*{arg\,min}_{q} \; \int_\Omega \big[ q(x) \ln q(x) + (1 - q(x)) \ln(1 - q(x)) - q(x)\, r(x) \big] \, dx + \lambda\, \mathrm{Len}(q),$
where the constant lattice-element area $a$ has been absorbed into $\lambda$.
By the coarea formula [5], the length of the boundaries of a binary image defined on the continuum is equal to its total variation:
(25)  $\mathrm{Len}(q) = \mathrm{TV}(q) = \int_\Omega \| \nabla q(x) \|_2 \, dx,$
where $\|\cdot\|_2$ is the 2-norm and $\nabla q$ is the (weak) gradient of $q$. (More generally, $\mathrm{TV}(q)$ is defined as $\sup \big\{ \int_\Omega q\, \mathrm{div}\, \psi \, dx : \psi \in C_c^1(\Omega; \mathbb{R}^2),\ \|\psi\|_\infty \leq 1 \big\}$ over test functions $\psi$; as a supremum of linear functions of $q$, $\mathrm{TV}$ is convex.)
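The coarea identity can be illustrated numerically: for a binary indicator image, the discrete total variation approximates the boundary length. A minimal sketch (forward finite differences with a replicated boundary, so the last row/column contributes no difference):

```python
import numpy as np

def total_variation(u):
    # Isotropic (2-norm) total variation with forward differences.
    dx = np.diff(u, axis=0, append=u[-1:, :])
    dy = np.diff(u, axis=1, append=u[:, -1:])
    return np.sqrt(dx ** 2 + dy ** 2).sum()

u = np.zeros((64, 64))
u[20:40, 20:40] = 1.0          # axis-aligned square of side 20 pixels
tv_val = total_variation(u)    # close to the perimeter 4 * 20 = 80
```

For axis-aligned boundaries the discrete TV essentially counts boundary edges; the isotropic 2-norm (rather than an anisotropic 1-norm) is what keeps the estimate close to Euclidean length for curved boundaries.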
Therefore, putting it all together, the continuous variant of the variational problem is:
(26)–(27)  $\hat{q} = \operatorname*{arg\,min}_{q} \; E(q), \qquad E(q) = \int_\Omega \big[ q \ln q + (1 - q) \ln(1 - q) - q\, r \big] \, dx + \lambda\, \mathrm{TV}(q),$
which we call the Active Mean Field approximation. Note that $q$ depends on $\varphi$ according to Eqn. (21).
2.5 Euler–Lagrange (EL) Equations
Defining the curvature operator
(28)  $\kappa(u) := \mathrm{div}\!\left( \frac{\nabla u}{\|\nabla u\|_2} \right),$
the Euler–Lagrange equation describing the stationary points of Eqn. (27) is given by:
(29)  $\varphi(x) - r(x) - \lambda\, \kappa(q)(x) = 0.$
This can be derived as follows. Writing the integrand of Eqn. (27) as $e(q) := q \ln q + (1 - q) \ln(1 - q) - q\, r$, the objective function is
(30)  $E(q) = \int_\Omega e\big(q(x)\big) \, dx + \lambda\, \mathrm{TV}(q).$
The variation of $E$ is [55]
(31)  $\delta E(q; \delta q) = \int_\Omega e_q\big(q(x)\big)\, \delta q(x) \, dx + \lambda \int_\Omega \frac{\nabla q \cdot \nabla \delta q}{\|\nabla q\|_2} \, dx,$
where $\delta q$ denotes an admissible perturbation of $q$ and $e_q$ denotes the partial derivative of $e$ with respect to $q$. Since, by Eqn. (21), $e_q(q) = \ln q - \ln(1 - q) - r = \varphi - r$, the variation becomes
(32)  $\delta E(q; \delta q) = \int_\Omega \big( \varphi - r \big)\, \delta q \, dx + \lambda \int_\Omega \frac{\nabla q \cdot \nabla \delta q}{\|\nabla q\|_2} \, dx.$
Integration by parts, assuming Neumann boundary conditions, results in
(33)  $\delta E(q; \delta q) = \int_\Omega \left[ \varphi - r - \lambda\, \mathrm{div}\!\left( \frac{\nabla q}{\|\nabla q\|_2} \right) \right] \delta q \, dx.$
As the variation needs to vanish for all admissible perturbations at optimality, we obtain the Euler–Lagrange equation
(34)  $\varphi(x) - r(x) - \lambda\, \kappa(q)(x) = 0.$
According to Eqn. (21), $\varphi$ is obtained from $q$ through a logit transform. Consequently, we can obtain $q$ from $\varphi$ via the sigmoid function
(35)  $\sigma(y) := \frac{1}{1 + e^{-y}},$
as $q = \sigma(\varphi)$. The sigmoid function $\sigma$ is monotonically increasing (i.e., $\sigma'(y) = \sigma(y)(1 - \sigma(y)) > 0$), so that
(36)  $\nabla q = \sigma'(\varphi)\, \nabla \varphi$
and
(37)  $\frac{\nabla q}{\|\nabla q\|_2} = \frac{\nabla \varphi}{\|\nabla \varphi\|_2}, \qquad \text{hence} \qquad \kappa(q) = \kappa(\varphi).$
Hence, the Euler–Lagrange equation can be rewritten as
(38)  $\hat{\varphi}(x) - r(x) - \lambda\, \kappa(\hat{\varphi})(x) = 0.$
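The key step above is that the sigmoid's derivative is strictly positive, $\sigma'(y) = \sigma(y)(1 - \sigma(y))$, so the gradients of $q$ and $\varphi$ are positively parallel and the curvature terms agree. The derivative identity is easy to check numerically:

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

y = np.linspace(-6.0, 6.0, 101)
h = 1e-6
numeric = (sigmoid(y + h) - sigmoid(y - h)) / (2.0 * h)  # central difference
analytic = sigmoid(y) * (1.0 - sigmoid(y))               # sigma' = sigma*(1 - sigma)
```

Positivity of `analytic` on the whole line is what makes the unit normals of the level-sets of $q$ and $\varphi$ coincide.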
In summary, the distribution $Q(L; \hat{q})$ approximates the “exact” distribution $P(L \mid I)$ in the KL-divergence sense when $\hat{\varphi}$ (the logit transform of the parameter $\hat{q}$) satisfies the Euler–Lagrange equation of the AMF model; we will refer to Eqn. (38) as the “AMF equation.” As the objective function is strictly convex (see Section 4.2) in $q$, the stationary point is the unique global optimum.
3 Connections to Chan–Vese and ROF
In this section we establish the connection between the AMF model and the Chan–Vese segmentation model (Section 3.1) as well as the ROF denoising model (Section 3.2). In particular, we show that the Chan–Vese Euler–Lagrange equations correspond to those of the zero level-set of the AMF model, so a Chan–Vese segmentation can be obtained as the zero level-set of the AMF solution. We also show that the AMF Euler–Lagrange equations (Eqn. (38)) have the same form as those of the ROF model. Therefore, the solver technologies that have been developed for the ROF model may be deployed for AMF.
3.1 Connection to Chan–Vese
To derive the connection between the AMF and the Chan–Vese approach, we introduce the energy for the generalized Chan–Vese model based on a relaxed indicator function $u$ (i.e., $u(x) \in [0,1]$), which, according to [8], can be written as
(39)  $E_{CV}(u) = -\int_\Omega \big[ u(x) \ln p(I(x) \mid 1) + (1 - u(x)) \ln p(I(x) \mid 0) \big] \, dx + \lambda\, \mathrm{TV}(u),$
with the first part of the functional being the data term and the second term regularizing the boundary length. Such a length prior is essential to encourage large, contiguous segmentation areas. The importance of the length prior becomes especially clear in the context of the Mumford–Shah model [39], of which the Chan–Vese model is a special case. In the absence of a length prior, the Mumford–Shah approach will assign each pixel in regions with constant image intensity to its own (separate) parcel. The standard Chan–Vese model [16] (without the area prior of this model) can be recovered from Eqn. (39) for the special case that the class-conditional intensity model is Gaussian, i.e., $p(I \mid 1) = \mathcal{N}(\mu_1, \sigma^2)$ and $p(I \mid 0) = \mathcal{N}(\mu_0, \sigma^2)$. In this case:
(40)  $r(x) = \frac{(I(x) - \mu_0)^2 - (I(x) - \mu_1)^2}{2\sigma^2},$
and the corresponding Chan–Vese energy becomes:
(41)  $E_{CV}(u) = \frac{1}{2\sigma^2} \int_\Omega \big[ u(x) (I(x) - \mu_1)^2 + (1 - u(x)) (I(x) - \mu_0)^2 \big] \, dx + \lambda\, \mathrm{TV}(u) + \mathrm{const}.$
The means of the Gaussians ($\mu_1$, $\mu_0$) are estimated jointly in the standard Chan–Vese model [15], and the standard deviations are assumed to be fixed and identical. In contrast, in the generalized Chan–Vese model (Eqn. (39)), the parameters of the likelihood are typically assumed to be fixed and are not jointly estimated. This assures the convexity of the overall model. However, if desired, these parameters can also be estimated; a simple approach would be an alternating optimization strategy. Note that the Chan–Vese segmentation model of Eqn. (41) becomes Otsu thresholding [43] if the length prior is disregarded ($\lambda = 0$). Hence, unlike Chan–Vese segmentation, Otsu thresholding cannot suppress image fragmentation and irregularity.
The Euler–Lagrange equation of the generalized Chan–Vese energy (Eqn. (39)) is:
(42)  $-r(x) - \lambda\, \kappa(u)(x) = 0.$
This is identical to the AMF Euler–Lagrange equation (Eqn. (34)) at the zero level-set $\varphi(x) = 0$. By construction, the zero level-set of a level-set implementation of the generalized Chan–Vese model has to agree with the solution obtained from the Euler–Lagrange equations of the generalized Chan–Vese model using indicator functions, as both minimize the same energy function, just using different parameterizations. Consequently, the zero level-sets of both the AMF model and the level-set implementation of Chan–Vese also need to agree.
In contrast to the generalized Chan–Vese model described above, the original Chan–Vese model of [15], formulated as a curve-evolution approach, is characterized by an energy function with an additional term (penalizing segmentations with large areas) of the form
(43)  $\nu\, \mathrm{Area}\big( \mathrm{in}(C) \big),$
where $C$ denotes the curve defining the boundary of the segmentation, $\nu$ is a nonnegative constant to weight the area influence, and $\mathrm{Area}(\mathrm{in}(C))$ simply denotes the area enclosed by $C$. For implementation purposes, $C$ is implicitly represented by the zero level-set of a level-set function $\Phi$. The corresponding Euler–Lagrange equation is, on the zero level-set [15],
(44)  $r(x) - \nu + \lambda\, \kappa(\Phi)(x) = 0.$
Examining the level-set $\hat{\varphi}(x) = c$ of the AMF model (Eqn. (38)), so that $r(x) - c + \lambda\, \kappa(\hat{\varphi})(x) = 0$ along this level-set, we notice that this level-set satisfies the same Euler–Lagrange equation as the zero level-set of the Chan–Vese model with the specified nonzero value $\nu = c$. In other words, the level-sets of the dense AMF solution provide a family of solutions for the Chan–Vese problem for a continuum of values of the area penalty.
Note that such area penalties cannot effectively be added in the indicator-function-based approaches to the Chan–Vese active-contour models proposed by Appleton et al. [2] and Bresson et al. [8]. The goal of these models is to capture a binary segmentation result through a relaxed indicator function taking values in $[0,1]$ instead of $\{0,1\}$. However, it can be shown [41] that in certain instances this relaxation produces undesirable segmentation results when combined with an area penalty.
3.2 Connection between AMF and ROF Models
In their seminal paper, Rudin, Osher, and Fatemi [51] proposed a denoising method for, e.g., intensity images $f$,
(45) $\min_u \int_\Omega |\nabla u|\,dx \quad \text{s.t.} \quad \int_\Omega (u - f)^2\,dx = \sigma^2,$
where $\sigma^2$ is an estimate of the noise variance. As discussed by Vogel and Oman [57], this is equivalent to the following unconstrained problem,
(47) $\min_u \int_\Omega |\nabla u|\,dx + \frac{\lambda}{2} \int_\Omega (u - f)^2\,dx,$
for a suitable choice of the weight $\lambda$. They refer to this formulation as “TV penalized least squares.”
The corresponding Euler–Lagrange equation is
(49) $-\operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right) + \lambda\,(u - f) = 0.$
For $\lambda = 1/\gamma$, this equation has the same form as the Euler–Lagrange equation of the AMF model of Eqn. (38), which is
(50) $-\operatorname{div}\!\left(\frac{\nabla \mu}{|\nabla \mu|}\right) + \frac{1}{\gamma}\,(\mu - f) = 0.$
In this equivalence, the denoised intensity image of the ROF model, $u$, corresponds to the logit parameter field of the AMF distribution, $\mu$, while the noisy input intensity image of the ROF model, $f$, corresponds to the logit-transformed label probabilities in the AMF problem. Furthermore, if the class-conditional intensity model is a homoscedastic Gaussian, then $f$ (from Eqn. (40)) is linear in the observed intensity. The AMF solution is thus equivalent to solving an ROF problem that effectively denoises the logit-transformed approximation of the posterior label likelihoods.
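To illustrate the last point, consider the log-likelihood-ratio (logit) field for a homoscedastic Gaussian model: the quadratic intensity terms of the two log-densities cancel, leaving an affine function of the intensity. A small sketch (the symbols `c0`, `c1`, and `sigma` are our illustrative choices, not the paper's notation):

```python
import numpy as np

def logit_field(image, c0, c1, sigma):
    """log p(I | foreground) - log p(I | background) for two Gaussians with
    means c1 (foreground) and c0 (background) and shared std sigma.
    The I^2 terms cancel, so the result is affine (linear) in the intensity."""
    return (c1 - c0) * (image - 0.5 * (c0 + c1)) / sigma**2
```

Positive values favor the foreground label; under the correspondence above, this field plays the role of the noisy input $f$ in the ROF problem.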
Because of the equivalence of the Euler–Lagrange equations of the AMF and the ROF models, the considerable technology developed for solving the ROF model may be applied to the AMF model. In particular, a globally optimal solution of the AMF model (see Section 4.2 for a proof of the convexity of this model) can be computed by ROF denoising. In other words, given an ROF solver (ROFsolve) that minimizes
(51) $E_{\mathrm{ROF}}(u) = \int_\Omega |\nabla u|\,dx + \frac{\lambda}{2} \int_\Omega (u - f)^2\,dx$
such that
(52) $u^* = \mathrm{ROFsolve}(f, \lambda) = \operatorname{argmin}_u E_{\mathrm{ROF}}(u),$
solving the AMF problem for a given $f$ and $\gamma$ then simply becomes
(53) $\mu^* = \mathrm{ROFsolve}(f, 1/\gamma).$
Eqn. (53) is the central result concerning the implementation of our method as it connects the optimal AMF solution to a straightforward ROF denoising problem.
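As a concrete illustration, the following NumPy sketch implements Chambolle's projection algorithm, one standard ROF solver that could fill the role of ROFsolve; the function names, fixed iteration count, and the choice of this particular solver are ours, and any ROF solver could be substituted:

```python
import numpy as np

def _grad(u):
    # forward differences with Neumann boundary conditions
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def _div(px, py):
    # negative adjoint of _grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def rof_solve(f, lam, n_iter=200, tau=0.125):
    """Chambolle's dual projection algorithm for
       min_u  TV(u) + lam/2 * ||u - f||^2   (dual step size tau <= 1/8)."""
    theta = 1.0 / lam
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = _grad(_div(px, py) - f / theta)
        norm = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - theta * _div(px, py)
```

Applied to a logit field with the weight mapping of Eqn. (53), the returned field is the AMF logit solution, and thresholding it at zero yields the segmentation boundary.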
4 Additional Properties of AMF
We now summarize some approximation properties of AMF (Section 4.1), show the objective function to be convex (Section 4.2), and show that AMF is unbiased in a specific sense (Section 4.3).
4.1 Approximation Properties
Our goal is an efficient yet accurate approximation to the exact posterior distribution for general realizations of the label field. To show that the AMF approximation is in fact a good one, we study its properties here. For convenience, we only summarize some of the approximation properties of the AMF model and refer to the appendix for the mathematical details. In particular, the appendix shows that

a) The zero level-set of $\mu$ is the boundary of the most probable realization under the approximating distribution, and this realization is also the MAP realization under the exact distribution. This is not generally the case for mean-field approximations.

b) Because the log partition function of the prior is not easily computed, we compare the probability ratio under the approximating distribution with that under the exact distribution, taken with respect to the most probable realization under both distributions according to a). These probability ratios are not only in agreement for the zero level-set, but also for realizations that are bounded by any level-set of $\mu$.

c) The probability ratios approximately agree for realizations whose boundary normals are close in direction to those of the level-sets of $\mu$.

d) If neither a) nor b) holds, the probability ratio under the approximation will be larger than that under the exact distribution, i.e., the approximation underestimates the length penalty associated with the prior.
4.2 Convexity
A nice property of the AMF model is that its energy is strictly convex, and therefore it has a unique global minimizer. This is in contrast to TV-based segmentation models [2, 8], which are generally convex (but not strictly so) and may therefore have multiple, non-unique optima.
To show convexity, we consider the continuum formulation of AMF, which can be rewritten as a function of $\mu$ as:
(54) 
where dependencies on space are dropped only for notational convenience and the energy is expressed in terms of $\mu$. The term
(56) 
is convex in $\mu$: the first summand is linear in $\mu$, the 2-norm is convex, $\nabla$ is a linear operator, and both terms enter the sum with a positive weight. To see that the rest of the integrand is also convex, consider a function of the form
(57) 
which implies that
(58) 
Therefore, the remaining part of the integrand is strictly convex. Because the sum of convex and strictly convex functions is strictly convex, the overall AMF energy is strictly convex in $\mu$ and therefore has a unique global minimizer (see [6] for details on convexity-preserving operations). In particular, we note that for a non-informative data term, i.e., when pixels are locally equally likely to be foreground or background, the constant field $\mu \equiv 0$ is the globally optimal solution. For the related standard TV segmentation model [2], which would only minimize Eqn. (56), any constant solution would be a global minimizer.
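Strict convexity can also be checked numerically on a discretization. The sketch below uses a 1-D discrete stand-in energy, a TV term plus a quadratic data term matching the ROF-equivalent form used earlier; it is an illustrative analogue of the argument, not the exact AMF integrand:

```python
import numpy as np

def discrete_energy(mu, f, gamma=1.0, lam=1.0):
    """1-D discrete stand-in: gamma * TV(mu) + lam/2 * ||mu - f||^2.
    TV is convex; the quadratic data term makes the sum strictly convex."""
    tv = np.abs(np.diff(mu)).sum()
    data = 0.5 * ((mu - f) ** 2).sum()
    return gamma * tv + lam * data

# midpoint-convexity check: E((m1+m2)/2) < (E(m1)+E(m2))/2 whenever m1 != m2
rng = np.random.default_rng(1)
f = rng.normal(size=32)
m1, m2 = rng.normal(size=32), rng.normal(size=32)
mid = discrete_energy(0.5 * (m1 + m2), f)
assert mid < 0.5 * (discrete_energy(m1, f) + discrete_energy(m2, f))
```

The strict inequality comes entirely from the quadratic term; for the TV term alone the midpoint inequality holds only weakly, which mirrors the non-strict convexity of the pure TV segmentation models noted above.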
4.3 Unbiased in Homogeneous Regions
In this section we analyze the behavior of the AMF estimator over homogeneous (i.e., constant-intensity) patches of an image. The AMF objective function, Eqn. (27), can be written:
(59) 
Now, for a patch of constant intensity, the optimum will be attained at