
# Discriminative Parameter Estimation for Random Walks Segmentation

Pierre-Yves Baudin    Danny Goodman    Puneet Kumar    Noura Azzabou    Pierre G. Carlier    Nikos Paragios    M. Pawan Kumar
###### Abstract

The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the images, instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.

Affiliations: (1) Center for Visual Computing, École Centrale Paris, FR; (2) Université Paris-Est, LIGM (UMR CNRS), École des Ponts ParisTech, FR; (3) Équipe Galen, INRIA Saclay, FR; (4) Institute of Myology, Paris, FR; (5) CEA, I2BM, MIRCen, IdM NMR Laboratory, Paris, FR; (6) UPMC University Paris 06, Paris, FR

## 1 Introduction

The Random Walks (RW) algorithm is one of the most popular techniques for segmentation in medical imaging [5]. Although it was initially proposed for interactive settings, recent years have witnessed the development of fully automated extensions. In addition to the contrast information employed in the original formulation [5], the automated extensions incorporate prior information based on appearance [4] and shape [1].

It has been empirically observed that the accuracy of the RW algorithm relies heavily on the relative weighting between the various contrast and prior terms. Henceforth, we refer to the relative weights of the various terms in the RW objective function as parameters. At present, researchers either rely on a user to hand-tune the parameters or on exhaustive cross-validation [1, 4]. However, both these approaches quickly become infeasible as the number of terms in the RW objective function increases.

In contrast to the RW literature, the problem of parameter estimation has received considerable attention in the case of discrete models such as CRFs [9]. Recent years have witnessed the emergence of structured-output support vector machine (Structured SVM) as one of the most effective discriminative frameworks for supervised parameter estimation [10, 11]. Given a training dataset that consists of pairs of input and their ground-truth output, structured SVM minimizes the empirical risk of the inferred output with respect to the ground-truth output. The risk is defined by a user-specified loss function that measures the difference in quality between two given outputs.

We would like to discriminatively learn the parameters of the RW formulation. To this end, a straightforward application of structured SVM would require a training dataset that consists of pairs of inputs as well as their ground-truth outputs—in our case, the optimal probabilistic segmentation. In other words, we require a human to provide us with the output of the RW algorithm for the best set of parameters. This is an unreasonable demand, since knowledge of the optimal probabilistic segmentation is as difficult to acquire as hand-tuning the parameters itself. Thus we cannot directly use structured SVM to estimate the desired parameters.

In order to handle the above difficulty, we propose a novel formulation for discriminative parameter estimation in the RW framework. Specifically, we learn the parameters using a weakly supervised dataset that consists of pairs of medical acquisitions and their hard segmentations. Unlike probabilistic segmentations, hard segmentations can be obtained easily from human annotators. We treat the optimal probabilistic segmentation that is compatible with the hard segmentation as a latent variable. Here, compatibility refers to the fact that the probability of the ground-truth label (as specified by the hard segmentation) should be greater than the probability of all other labels for each pixel/voxel. The resulting representation allows us to learn the parameters using the latent SVM formulation [3, 8, 12].

While latent SVM does not result in a convex optimization problem, its local optimum solution can be obtained using the iterative concave-convex procedure (CCCP) [13]. The CCCP involves solving a structured SVM problem, which lends itself to efficient optimization. In order to make the overall algorithm computationally feasible, we propose a novel efficient approach for annotation consistent inference (ACI) based on dual decomposition [2, 7]. We demonstrate the benefit of our learning framework over a baseline structured SVM using a challenging dataset of real 3D MRI volumes.

## 2 Preliminaries

We will assume that the input $x$ is a 3D volume. We denote the $i$-th voxel of $x$ as $x(i)$, and the set of all voxels as $V$. In a hard segmentation, each voxel is assigned a label $s \in S$ (for example, the index of a muscle). We will use $z$ to represent the human annotation (that is, the class labels of the voxels in $V$) in binary form:

$$z(i,s) = \begin{cases} 1 & \text{if voxel } i \in V \text{ is of class } s \in S, \\ 0 & \text{otherwise.} \end{cases} \tag{1}$$

In other words, the binary form of the annotation specifies a delta distribution over the putative labels for each voxel. Our training dataset $\mathcal{D}$ is a collection of $N$ training images and hard segmentations: $\mathcal{D} = \{(x_k, z_k),\, k = 1, \dots, N\}$. Note that we use the subscript $k$ to denote the input index within a dataset, and the parenthetical $(i)$ to denote a voxel within a particular input.
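The binary encoding of Eq. (1) is simply a one-hot representation of the label map. A minimal NumPy sketch (the function name is ours, not from the paper):

```python
import numpy as np

def to_binary_annotation(labels, num_classes):
    """Convert a hard segmentation (one class index per voxel) into the
    binary form z of Eq. (1): z[i, s] = 1 iff voxel i has class s."""
    labels = np.asarray(labels).ravel()
    z = np.zeros((labels.size, num_classes))
    z[np.arange(labels.size), labels] = 1.0  # delta distribution per voxel
    return z

# A 4-voxel example with 3 classes (class 0 playing the role of background).
z = to_binary_annotation([0, 2, 1, 2], num_classes=3)
```

Each row of `z` sums to one, since every voxel carries exactly one ground-truth label.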

### 2.1 Random Walks Segmentation

The RW algorithm provides a probabilistic—or soft—segmentation of an input $x$, which we denote by $y$, that is,

$$y(i,s) = \Pr[\text{voxel } i \text{ is of class } s], \quad \forall i \in V, \forall s \in S. \tag{2}$$

When using one contrast term and one prior model, the RW algorithm amounts to minimizing the following convex quadratic objective functional:

$$E(y,x) = y^\top L(x)\, y + w_{\text{prior}} \left\| y - y^0 \right\|^2_{\Omega^0(x)} \tag{3}$$

$$\phantom{E(y,x)} = y^\top L(x)\, y + E_{\text{prior}}(y,x). \tag{4}$$

Here, $y^0$ is a reference prior probabilistic segmentation dependent on appearance [4] or shape [1], and $\Omega^0(x)$ is a diagonal matrix that specifies a voxel-wise weighting scheme for the prior term. The term $L(x)$ refers to a combinatorial Laplacian matrix defined on a neighborhood system based on the adjacency of the voxels. It is a block-diagonal matrix—one block per label—with all blocks identical, where the entries of each block use the typical Gaussian kernel formulation (see [5]). The relative weight $w_{\text{prior}}$ is the parameter of the above RW framework. The above problem is convex, and can be optimized efficiently by solving a sparse linear system of equations. We refer the reader to [1, 5] for further details.
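Setting the gradient of Eq. (3) to zero gives the sparse linear system $(L(x) + w_{\text{prior}}\,\Omega^0)\, y_s = w_{\text{prior}}\,\Omega^0 y^0_s$, solved independently per label. A small illustrative sketch on a 1D chain of pixels (the function names and the toy image are ours; a real implementation works on a 3D grid):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def grid_laplacian(image, beta=10.0):
    """Combinatorial Laplacian of a 1D chain of pixels with Gaussian edge
    weights w_ij = exp(-beta * (x_i - x_j)^2), as in Grady [5]."""
    n = image.size
    d = np.exp(-beta * np.diff(image.ravel()) ** 2)  # weights of chain edges
    W = sp.diags([d, d], [1, -1], shape=(n, n))      # symmetric adjacency
    return sp.csc_matrix(sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W)

def rw_with_prior(L, y0, omega, w_prior):
    """Minimize y'Ly + w_prior * ||y - y0||^2_Omega: the stationarity
    condition is (L + w_prior * Omega) y_s = w_prior * Omega * y0_s per label."""
    A = sp.csc_matrix(L + w_prior * sp.diags(omega))
    return np.column_stack(
        [spsolve(A, w_prior * omega * y0[:, s]) for s in range(y0.shape[1])]
    )

image = np.array([0.1, 0.15, 0.8, 0.85])  # two homogeneous intensity regions
y0 = np.array([[0.9, 0.1], [0.5, 0.5], [0.5, 0.5], [0.1, 0.9]])  # prior segmentation
y = rw_with_prior(grid_laplacian(image), y0, omega=np.ones(4), w_prior=0.5)
```

Because $L\mathbf{1} = 0$ and $y^0$ sums to one per voxel, the per-label solutions automatically form a valid probability distribution at each voxel.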

### 2.2 Parameters and Feature Vectors

In the above description of the RW algorithm, we restricted ourselves to a single Laplacian and a single prior. However, our goal is to enable the use of numerous Laplacians and priors. To this end, let $\{L_\alpha(x)\}$ denote a known family of Laplacian matrices and $\{E_\beta(\cdot, x)\}$ denote a known family of prior energy functionals. In section 4, we will specify the family of Laplacians and priors used in our experiments. We denote the general form of a linear combination of Laplacians and prior terms as:

$$L(x;w) = \sum_\alpha w_\alpha L_\alpha(x), \qquad E_{\text{prior}}(\cdot, x; w) = \sum_\beta w_\beta E_\beta(\cdot, x), \qquad w \ge 0. \tag{5}$$

Each prior term $E_\beta$ is of the form:

$$E_\beta(y,x) = \left\| y - y^\beta \right\|^2_{\Omega^\beta(x)}, \tag{6}$$

where $y^\beta$ is the $\beta$-th reference segmentation and $\Omega^\beta(x)$ is the corresponding voxel-wise weighting matrix (both of which are known). We denote the set of all parameters as $w = \{w_\alpha\} \cup \{w_\beta\}$. Clearly, the RW energy (4) is linear in $w$, and can therefore be formulated as:

$$E(y,x;w) = y^\top L(x;w)\, y + E_{\text{prior}}(y,x;w) \tag{7}$$

$$\phantom{E(y,x;w)} = w^\top \psi(x,y), \tag{8}$$

where $\psi(x,y)$ is known as the joint feature vector of $x$ and $y$. Note that by restricting the parameters to be non-negative (that is, $w \ge 0$), we ensure that the energy functional remains convex.
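Concretely, each coordinate of $\psi(x,y)$ is either a Laplacian term $y^\top L_\alpha(x)\, y$ or a prior term $E_\beta(y,x)$, so that Eq. (8) holds. A minimal sketch (function names are ours; all quantities are assumed precomputed from $x$):

```python
import numpy as np

def joint_feature_vector(y, laplacians, priors):
    """psi(x, y) from Eq. (8): one coordinate per Laplacian (y' L_a y) and
    one per prior term (||y - y_b||^2 weighted voxel-wise by Omega_b).
    `laplacians` is a list of (n*S, n*S) matrices; `priors` is a list of
    (y_b, omega_b) pairs with y_b of shape (n, S)."""
    yv = y.ravel()
    feats = [yv @ L @ yv for L in laplacians]
    feats += [np.sum(omega * (y - yb) ** 2) for yb, omega in priors]
    return np.array(feats)

def energy(w, psi):
    """RW energy as a linear function of the parameters: E = w' psi."""
    return w @ psi

# Tiny illustration: 2 voxels, 2 labels, one Laplacian and one prior.
L1 = np.kron(np.array([[1., -1.], [-1., 1.]]), np.eye(2))  # one identical block per label
y = np.array([[1.0, 0.0], [0.0, 1.0]])
prior = (np.full((2, 2), 0.5), np.ones((2, 1)))            # uniform reference, unit weights
psi = joint_feature_vector(y, [L1], [prior])
w = np.array([1.0, 0.2])
```

Since the energy is linear in $w$ for any fixed $y$, learning the parameters reduces to learning a linear weighting of fixed features, which is exactly what the SVM machinery below requires.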

### 2.3 Loss Function

As mentioned earlier, we would like to estimate the parameters $w$ by minimizing the empirical risk over the training samples. The risk is specified using a loss function $\Delta$ that measures the difference between two segmentations. In this work, we define the loss function as the fraction of incorrectly labeled voxels. Formally, let $\hat{y}$ denote the underlying hard segmentation of the soft segmentation $y$, that is, $\hat{y}(i,s) = \delta\big(y(i,s) = \max_{s'} y(i,s')\big)$, where $\delta$ is the Kronecker function. The loss function is defined as

$$\Delta(z, y) = 1 - \frac{1}{|V|}\, \hat{y}^\top z, \tag{9}$$

where $V$ is the set of all voxels, and $|\cdot|$ denotes the cardinality of a set.
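Eq. (9) can be sketched directly: harden $y$ by taking the per-voxel argmax, then count agreements with $z$ (the function name is ours):

```python
import numpy as np

def loss(z, y):
    """Fraction of incorrectly labeled voxels, Eq. (9):
    Delta(z, y) = 1 - (1/|V|) * yhat' z, where yhat hardens y by per-voxel argmax."""
    n, S = z.shape
    yhat = np.zeros_like(z)
    yhat[np.arange(n), y.argmax(axis=1)] = 1.0  # hard segmentation of y
    return 1.0 - (yhat * z).sum() / n           # yhat' z counts correct voxels

z = np.array([[1., 0.], [0., 1.], [0., 1.]])
y = np.array([[0.9, 0.1], [0.4, 0.6], [0.7, 0.3]])  # last voxel is mislabeled
```

Here two of the three voxels are labeled correctly, so the loss is $1/3$.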

## 3 Parameter Estimation Using Latent SVM

Given a dataset $\mathcal{D} = \{(x_k, z_k),\, k = 1, \dots, N\}$, which consists of inputs $x_k$ and their hard segmentations $z_k$, we would like to estimate parameters $w$ such that the resulting inferred segmentations are accurate. Here, the accuracy is measured using the loss function $\Delta$. Formally, let $y_k(w)$ denote the soft segmentation obtained by minimizing the energy functional for the $k$-th training sample, that is,

$$y_k(w) = \operatorname*{argmin}_{y} \; w^\top \psi(x_k, y). \tag{10}$$

We would like to learn the parameters $w$ such that the empirical risk is minimized. In other words, we would like to estimate the parameters such that

$$w^* = \operatorname*{argmin}_{w} \; \frac{1}{N} \sum_k \Delta\big(z_k, y_k(w)\big). \tag{11}$$

The above objective function is highly non-convex in $w$, which makes it prone to bad local minimum solutions. To alleviate this deficiency, it can be shown that the following latent SVM formulation minimizes a regularized upper bound on the risk for the set of samples $\{(x_k, z_k)\}$:

$$\min_{w \ge 0} \; \lambda \|w\|^2 + \lambda' \|w - w_0\|^2 + \frac{1}{N} \sum_k \xi_k, \tag{12}$$

$$\text{s.t.} \quad \min_{y_k:\, \Delta(z_k, y_k) = 0} w^\top \psi(x_k, y_k) \le w^\top \psi(x_k, \bar{y}_k) - \Delta(z_k, \bar{y}_k) + \xi_k, \quad \forall \bar{y}_k, \forall k,$$

where the slack variable $\xi_k$ represents the upper bound of the risk for the $k$-th training sample. Note that we have added two regularization terms for the parameters $w$. The first term, $\|w\|^2$, weighted by the hyperparameter $\lambda$, ensures that we do not overfit to the training samples. The second term, $\|w - w_0\|^2$, weighted by the hyperparameter $\lambda'$, ensures that we do not obtain a solution that is very far away from our initial estimate $w_0$. The reason for including this term is that our upper bound on the empirical risk may not be sufficiently tight. Thus, if we do not encourage our solution to lie close to the initial estimate, it may drift towards an inaccurate set of parameters. In section 4, we show the empirical effect of the hyperparameters $\lambda$ and $\lambda'$ on the accuracy of the parameters.

While the upper bound on the empirical risk derived above is not convex, it was shown to be a difference of two convex functions in [12]. This observation allows us to obtain a local minimum or saddle point solution using the CCCP algorithm [12, 13], outlined in Algorithm 1, which iteratively improves the parameters starting with an initial estimate $w_0$. It consists of two main steps at each iteration: (i) step 3, which involves estimating a compatible soft segmentation for each training sample—known as annotation consistent inference (ACI); and (ii) step 4, which involves updating the parameters by solving problem (13). In the following subsections, we provide efficient algorithms for both steps.
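The alternation between the two steps of Algorithm 1 can be sketched as a generic CCCP loop; the two problem-specific steps (ACI and the convex parameter update) are supplied by the caller, and the names below are ours:

```python
import numpy as np

def cccp(w0, impute_step, update_step, max_iter=50, tol=1e-6):
    """Skeleton of the CCCP loop for latent SVM. `impute_step(w)` performs
    annotation consistent inference for every training sample (linearizing
    the concave part at the current w); `update_step(ys)` solves the
    resulting convex structured-SVM problem and returns new parameters."""
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iter):
        ys = impute_step(w)     # step 3: impute compatible soft segmentations
        w_new = update_step(ys) # step 4: convex parameter update
        if np.linalg.norm(w_new - w) < tol:  # converged to a local optimum
            return w_new
        w = w_new
    return w

# Toy illustration of the alternation with contrived steps: the update
# moves halfway toward a fixed target, so the loop converges to it.
target = np.array([1.0, 2.0])
w = cccp(np.zeros(2), impute_step=lambda w: w,
         update_step=lambda ys: (ys + target) / 2)
```

The toy steps stand in for the real ACI and parameter-update subroutines described in sections 3.1 and 3.2; the control flow is the same.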

### 3.1 Annotation Consistent Inference

Given an input $x$ and its hard segmentation $z$, ACI requires us to find the soft segmentation $y$ with the minimum energy, under the constraint that it should be compatible with $z$ (see step 3 of Algorithm 1). We denote the ground-truth label of a voxel $i$ by $s_i$, that is, $z(i, s_i) = 1$, and the set of all voxels by $V$. Using our notation, ACI can be formally specified as

$$\min_{y \in \mathcal{C}(V)} \; y^\top L(x;w)\, y + E_{\text{prior}}(y, x; w). \tag{14}$$

Here, $\mathcal{C}(V)$ is the set of all compatible probabilistic segmentations, that is, the set of $y$ such that

$$y(i,s) \ge 0, \quad \forall i \in V, \forall s \in S, \tag{15}$$

$$\sum_{s \in S} y(i,s) = 1, \quad \forall i \in V, \tag{16}$$

$$y(i, s_i) \ge y(i, s), \quad \forall i \in V, \forall s \in S. \tag{17}$$

Constraints (15) and (16) ensure that $y$ is a valid probabilistic segmentation. The last set of constraints (17) ensures that $y$ is compatible with $z$. Note that in the absence of constraints (17), the above problem can be solved efficiently using the RW algorithm. However, since the ACI problem requires the additional set of compatibility constraints, we need to develop a novel efficient algorithm to solve the above convex optimization problem. To this end, we exploit the powerful dual decomposition framework [2, 7]. Briefly, we divide the above problem into a set of smaller subproblems defined using overlapping subsets of variables. Each subproblem can be solved efficiently using a standard convex optimization package. In order to obtain the globally optimal solution of the original problem, we pass messages between subproblems until they agree on the values of all the shared variables. For details on the ACI algorithm, please refer to the supplementary materials of this paper.
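For a tiny problem, the constraint set (15)-(17) can be illustrated by solving Eq. (14) directly as a constrained QP with an off-the-shelf solver; this brute-force sketch is ours and only illustrates the feasible set, not the paper's dual-decomposition algorithm, which is what makes ACI tractable at volume scale:

```python
import numpy as np
from scipy.optimize import minimize

def aci_small(L, labels, S):
    """Annotation consistent inference, Eq. (14), as a direct constrained QP.
    `L` is an (n*S, n*S) Laplacian (prior terms omitted for brevity),
    `labels[i]` is the ground-truth label s_i of voxel i."""
    n = len(labels)
    energy = lambda yv: yv @ L @ yv
    # (16): probabilities sum to one at each voxel
    cons = [{"type": "eq", "fun": lambda yv, i=i: yv.reshape(n, S)[i].sum() - 1}
            for i in range(n)]
    # (17): compatibility, y(i, s_i) >= y(i, s) for every other label s
    for i, si in enumerate(labels):
        for s in range(S):
            if s != si:
                cons.append({"type": "ineq",
                             "fun": lambda yv, i=i, si=si, s=s:
                                 yv.reshape(n, S)[i, si] - yv.reshape(n, S)[i, s]})
    y0 = np.full(n * S, 1.0 / S)  # uniform, feasible starting point
    res = minimize(energy, y0, bounds=[(0, 1)] * (n * S),  # (15)
                   constraints=cons, method="SLSQP")
    return res.x.reshape(n, S)

# Two adjacent voxels with conflicting ground-truth labels.
L_vox = np.array([[1.0, -1.0], [-1.0, 1.0]])
L = np.kron(L_vox, np.eye(2))  # block structure: one identical block per label
y = aci_small(L, labels=[0, 1], S=2)
```

With conflicting labels on a strongly connected edge, the compatibility constraints force the soft segmentation toward the uniform distribution at both voxels, which is exactly the kind of tension the learned parameters must trade off.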

### 3.2 Parameter Update

Having generated a compatible soft segmentation for each sample, the parameters can now be efficiently updated by solving problem (13) for the fixed set of soft segmentations $\{y_k^*\}$. This problem can be solved efficiently using the popular cutting-plane method (for details on this algorithm, please refer to [6]). Briefly, the method starts by specifying no constraints for any of the training samples. At each iteration, it finds the most violated constraint for each sample, and updates the parameters, until the increase in the objective function is less than a small epsilon.

In this work, since our loss function is not concave, we approximate the most violated constraint by the predicted segmentation, that is,

$$\bar{y} = \operatorname*{argmin}_{y} \; w^\top \psi(x, y). \tag{18}$$

The above problem is solved efficiently using the RW algorithm.
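The cutting-plane loop described above can be sketched as follows. This is our own simplified one-slack variant for a single training sample, with the working-set QP solved by a generic solver rather than the specialized machinery of [6]; `find_violated` plays the role of Eq. (18) plus the loss term:

```python
import numpy as np
from scipy.optimize import minimize

def solve_qp(constraints, d, lam):
    """min_{w>=0, xi>=0} lam*||w||^2 + xi over the working set:
    each constraint (a, b) encodes a'w + b <= xi."""
    obj = lambda v: lam * v[:d] @ v[:d] + v[d]
    cons = [{"type": "ineq", "fun": lambda v, a=a, b=b: v[d] - (a @ v[:d] + b)}
            for a, b in constraints]
    res = minimize(obj, np.zeros(d + 1), bounds=[(0, None)] * (d + 1),
                   constraints=cons, method="SLSQP")
    return res.x[:d], res.x[d]

def cutting_plane(psi_gt, find_violated, d, lam=0.1, eps=1e-4, max_iter=50):
    """Repeatedly add the most violated constraint
    w'psi_gt <= w'psi_bar - delta + xi, then re-solve the QP."""
    working, w, xi = [], np.zeros(d), 0.0
    for _ in range(max_iter):
        psi_bar, delta = find_violated(w)
        a, b = psi_gt - psi_bar, delta     # violation margin: a'w + b - xi
        if a @ w + b <= xi + eps:          # no sufficiently violated constraint left
            return w
        working.append((a, b))
        w, xi = solve_qp(working, d, lam)
    return w

psi_gt = np.array([1.0, 0.0])               # psi(x, y*) for the imputed segmentation
candidates = [(np.array([0.0, 1.0]), 1.0),  # hypothetical wrong labeling, loss 1
              (np.array([1.0, 0.0]), 0.0)]  # the imputed labeling itself, loss 0
def find_violated(w):
    return max(candidates, key=lambda c: (psi_gt - c[0]) @ w + c[1])

w = cutting_plane(psi_gt, find_violated, d=2, lam=0.1)
```

The learned weights penalize the feature active in the wrong labeling, so its energy exceeds that of the imputed segmentation by the required margin.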

## 4 Experiments

Dataset. The dataset consists of 30 MRI volumes of the thigh region. The various segments correspond to 4 different muscle groups together with the background class. We randomly split the dataset into 80% for training and 20% for testing. In order to reduce the training time for both our method and the baselines, we divide each volume into smaller subvolumes.

Laplacians and Prior Terms. We use 4 different Laplacians (generated with different weighting functions). Furthermore, we use two shape priors based on [1] and one appearance prior based on [4]. This results in a total of 7 parameters to be estimated.

Methods. The main hypothesis of our work is that it is important to represent the unknown optimal soft segmentation using latent variables. Thus we compare our method with a baseline structured SVM that replaces the latent variables with the given hard segmentations. In other words, our baseline estimates the parameters by solving problem (13), where the imputed soft segmentations are replaced by the hard segmentations $z_k$. During our experiments, we found that replacing the hard segmentation with a pseudo soft segmentation based on the distance transform systematically decreased the loss of the output. Thus the method referred to as "Baseline" uses a structured SVM with distance-transform "softened" segmentations.

Results. Fig. 1 shows the test loss for three different methods: (i) the initial hand-tuned parameters $w_0$; (ii) the baseline structured SVM with distance transforms; and (iii) our proposed approach using latent SVM. As can be seen from Fig. 1, latent SVM provides significantly better results than the baselines—even when using the distance transform. For the 4 × 5 hyperparameter settings that we report (that is, different values of $\lambda$ and $\lambda'$), latent SVM is significantly better than structured SVM in 15 cases, and significantly worse in only 2 cases. Note that latent SVM provides the best results for very small values of $\lambda'$, which indicates that the upper bound on the empirical risk is tight. As expected, for sufficiently large values of $\lambda'$, all the methods provide similar results. For the best settings of the corresponding hyperparameters, latent SVM obtains the lowest percentage of incorrectly labeled voxels among the three methods. Fig. 2 shows some example segmentations for the various methods.

## 5 Discussion

We proposed a novel discriminative learning framework to estimate the parameters of the probabilistic RW segmentation algorithm. We represented the optimal soft segmentation that is compatible with the hard segmentation of each training sample as a latent variable. This allowed us to formulate the problem of parameter estimation using latent SVM, which minimizes a difference-of-convex upper bound on the empirical risk of prediction. Using a challenging clinical dataset of MRI volumes, we demonstrated the efficacy of our approach over the baseline method that replaces the latent variables with the given hard segmentations. The latent SVM framework can also be used to estimate parameters from partial hard segmentations. Such an approach would allow us to scale the size of the training dataset by orders of magnitude.

## References

• [1] Baudin, P.Y., Azzabou, N., Carlier, P., Paragios, N.: Prior knowledge, random walks and human skeletal muscle segmentation. In: MICCAI (2012)
• [2] Bertsekas, D.: Nonlinear Programming. Athena Scientific (1999)
• [3] Felzenszwalb, P., McAllester, D., Ramanan, D.: A discriminatively trained, multiscale, deformable part model. In: CVPR (2008)
• [4] Grady, L.: Multilabel random walker image segmentation using prior models. In: CVPR (2005)
• [5] Grady, L.: Random walks for image segmentation. PAMI (2006)
• [6] Joachims, T., Finley, T., Yu, C.N.: Cutting-plane training of structural SVMs. Machine Learning (2009)
• [7] Komodakis, N., Paragios, N., Tziritas, G.: MRF optimization via dual decomposition: Message-passing revisited. In: ICCV (2007)
• [8] Smola, A., Vishwanathan, S., Hoffman, T.: Kernel methods for missing variables. In: AISTATS (2005)
• [9] Szummer, M., Kohli, P., Hoiem, D.: Learning CRFs using graph cuts. In: ECCV (2008)
• [10] Taskar, B., Guestrin, C., Koller, D.: Max-margin Markov networks. In: NIPS (2003)
• [11] Tsochantaridis, I., Hofmann, T., Joachims, T., Altun, Y.: Support vector machine learning for interdependent and structured output spaces. In: ICML (2004)
• [12] Yu, C.N., Joachims, T.: Learning structural SVMs with latent variables. In: ICML (2009)
• [13] Yuille, A., Rangarajan, A.: The concave-convex procedure (CCCP). Neural Computation (2003)