Convex variational methods on graphs for multiclass segmentation of high-dimensional data and point clouds

Abstract

Graph-based variational methods have recently been shown to be highly competitive for various classification problems of high-dimensional data, but are inherently difficult to handle from an optimization perspective. This paper proposes a convex relaxation for a certain set of graph-based multiclass data segmentation models involving a graph total variation term, region homogeneity terms, supervised information and certain constraints or penalty terms acting on the class sizes. Particular applications include semi-supervised classification of high-dimensional data and unsupervised segmentation of unstructured 3D point clouds. Theoretical analysis shows that the convex relaxation closely approximates the original NP-hard problems, and these observations are also confirmed experimentally. An efficient duality-based algorithm is developed that handles all constraints on the labeling function implicitly. Experiments on semi-supervised classification indicate consistently higher accuracies than related non-convex approaches, and considerably so when the training data are not uniformly distributed among the data set. The accuracies are also highly competitive against a wide range of other established methods on three benchmark datasets. Experiments on 3D point clouds acquired by a LaDAR in outdoor scenes demonstrate that the scenes can be accurately segmented into object classes such as vegetation, the ground plane and human-made structures.

1 Introduction

The graphical framework has become a popular setting for the classification [100] and filtering [31] of high-dimensional data. Some of the best performing classification algorithms are based on solving variational problems on graphs [82]. In simple terms, these algorithms attempt to group the data points into classes in such a way that pairs of data points with different class memberships are as dissimilar as possible with respect to a certain feature. In order to avoid the computational complexity of working with fully connected graphs, approximations such as those based on spectral graph theory [10] or nearest neighbors [33] are typically employed. For example, [10] and [67] employ spectral approaches along with the Nyström extension method [37] to efficiently calculate the eigendecomposition of a dense graph Laplacian. Works such as [33] use the “nearest neighbor” approach to sparsify the graph for computational efficiency. Variational problems on graphs have also become popular for the processing of 3D point clouds [33].

When the classification task is cast as the minimization of the similarity between point pairs with different class memberships, extra information is necessary to avoid the trivial global minimizer of value zero in which all points are assigned to the same class. In semi-supervised classification methods, a small set of the data points is given as training data in advance, and their class memberships are imposed as hard constraints in the optimization problem. In unsupervised classification methods, one typically enforces that the class sizes do not deviate too far from each other; examples include the normalized cut [78] and Cheeger ratio cut problems [26].

Most of the computational methods for semi-supervised and unsupervised classification obtain the solution by computing a local minimizer of a non-convex energy functional. Examples of such algorithms are those based on phase fields [10] and the MBO scheme [67]. PDEs on graphs for semi-supervised classification also include the Eikonal equation [30] and tug-of-war games related to the infinity-Laplacian equation [34]. Unsupervised problems with class size constraints are inherently the most difficult to handle from an optimization viewpoint, as the convex envelope of the problem has a trivial constant function as a minimizer [78]. Various ways of simplifying the energy landscape have been proposed [45]. Our recent work [65] showed that semi-supervised classification problems with two classes could be formulated in a completely convex framework, and also presented efficient algorithms that could obtain global minimizers.

Image segmentation is a special classification problem in which the objective is to assign each pixel to a region. Algorithms based on energy minimization are among the most successful image segmentation methods, and they have historically been divided into ‘region-based’ and ‘contour-based’ methods.

Region-based methods attempt to find a partition of the image so that the pixels within each region as a whole are as similar as possible. Additionally, some regularity is imposed on the region boundaries to favor spatial grouping of the pixels. The similarity is usually measured in the statistical sense. In the simplest case, the pixels within each region should be similar to the mean intensity of each region, as proposed in the Chan-Vese [22] and Mumford-Shah [71] models. Contour-based methods [62] instead seek the best suited locations of the region boundaries, typically at locations of large jumps in the image intensities, indicating the interface between two objects.

More recently, it has been shown that the combination of region- and contour-based terms in the energy function can give qualitatively very good results [14], especially when non-local operators are used in the contour terms [40]. There now exist efficient algorithms for solving the resulting optimization problems that can avoid getting stuck in a local minimum, including both combinatorial optimization algorithms [11] and more recent convex continuous optimization algorithms [14]. The latter have been shown to be advantageous in several respects, such as requiring less memory and having a greater potential for parallel implementation on graphics processing units (GPUs), but special care is needed in the case of non-local variants of the differential operators (e.g. [76]).

Most of the current data segmentation methods [82] can be viewed as ‘contour-based’, since they seek an optimal location of the boundaries of each region. Region-based variational image segmentation models with two classes were generalized to graphs for data segmentation in [59] and for 3D point cloud segmentation in [59], in a convex framework. The region terms could be constructed directly from the point geometry and/or from a color vector defined at the points. Concrete examples of the latter were used for experiments on point cloud segmentation. Region terms have also been proposed in the context of Markov random fields for 3D point cloud segmentation [2], where the weights were learned from training data using associative Markov networks. The independent preprint [94] proposed to use region terms for multiclass semi-supervised classification in a convex manner, where the region terms were inferred from the supervised points by diffusion.

Contributions

This paper proposes a convex relaxation and an efficient algorithmic optimization framework for a general set of graph-based data classification problems that exhibit non-trivial global minimizers. It extends the convex approach for semi-supervised classification with two classes given in our previous work [65] to a much broader range of problems, including multiple classes, a novel and more practically useful incorporation of class size information, and a novel unsupervised segmentation model for 3D point clouds acquired by a LaDAR.

The same basic relaxation for semi-supervised classification also appeared in the independent preprint [94]. The major distinctions of this work compared to [94] are: we also incorporate class size information in the convex framework; we give a mathematical and experimental analysis of the close relation between the convex relaxed and original problems; we propose a different duality-based ‘max-flow’-inspired algorithm; we incorporate information from the supervised points in a different way; and we consider unsupervised segmentation of 3D point clouds.

The contributions can be summarized more specifically as follows:

  • We specify a general set of classification problems that are suitable for being approximated in a convex manner. The general set of problems involves minimization of a multiclass graph cut term together with supervised constraints, region homogeneity terms and novel constraints or penalty terms acting on the class sizes. Special cases include semi-supervised classification of high-dimensional data and unsupervised segmentation of 3D point clouds.

  • A convex relaxation is proposed for the general set of problems and its approximation properties are analyzed thoroughly in theory and experiments. This extends the work on multiregion image segmentation [98] to data clustering on graphs, and to cases where there are constraints or penalty terms acting on the class sizes. Since either the introduction of multiple classes or size constraints causes the general problem to become NP-hard, the relaxation can (probably) not be proven to be exact. Instead, conditions are derived for when an exact global minimizer can be obtained from a dual solution of the relaxed problem. The strongest conditions are derived in case there are no constraints on the class sizes, but the theoretical results in both cases show that very close approximations are expected. These theoretical results also agree well with experimental observations.

  • The convex relaxed problems are formulated as equivalent dual problems that are structurally similar to the ‘max-flow’ problem over the graph. This extends our work [65] to multiple classes, and the work on image segmentation proposed in [96] to data clustering on graphs. We use a conceptually different proof than [65], which relates ‘max-flow’ to another, more direct dual formulation of the problem. Furthermore, it is shown that the size constraints and penalty term can also be incorporated naturally in the max-flow problem by modifying the flow conservation condition, such that there should be a constant flow excess at each node.

  • As in our previous work [65], an augmented Lagrangian algorithm is developed based on the new ‘max-flow’ dual formulations of the problems. A key advantage compared to related primal-dual algorithms [21] in imaging, such as the one considered in the preprint [94], is that all constraints on the labeling function are handled implicitly. This includes constraints on the class sizes, which are dealt with by separate dual variables indicating the flow excess at the nodes. Consequently, projections onto the constraint set of the labeling function, which tend to decrease the accuracy and put strict restrictions on the step sizes, are avoided.

  • We propose an unsupervised segmentation model, within the general framework, for unstructured 3D point clouds acquired by a LaDAR. It extends the models of [59] to multiple classes and gives concrete examples of region terms constructed purely from geometrical information of the unlabeled points, in order to distinguish classes such as vegetation, the ground plane and human-made structures in an outdoor scene. We also propose a graph total variation term that favors alignment of the region boundaries along “edges” indicated by discontinuities in the normal vectors or the depth coordinate. In contrast to [2], our model does not rely on any training data.

  • Extensive experimental evaluations on semi-supervised classification indicate consistently higher accuracies than related local minimization approaches, and considerably so when the training data are not uniformly distributed among the data set. The accuracies are also highly competitive against a wide range of other established methods on three benchmark datasets, and can be improved further if an estimate of the approximate class sizes is given in advance. Experiments on 3D point clouds acquired by a LaDAR in outdoor scenes demonstrate that the scenes can be accurately segmented into object classes such as vegetation, the ground plane and regular structures. The experiments also demonstrate fast and highly accurate convergence of the algorithms, and show that the approximation difference between the convex and original problems vanishes or becomes extremely low in practice.

Organization

This paper starts by formulating the general set of problems mathematically in Section 2. Section 3 formulates a convex relaxation of the general problem and analyzes the quality of the relaxation from a dual perspective. Section 4 reformulates the dual problem as a ‘max-flow’-type problem and derives an efficient algorithm. Applications to semi-supervised classification of high-dimensional data are presented in Section 5.1, and applications to segmentation of unstructured 3D point clouds are described in Section 5.2, including specific constructions of each term in the general model. Section 5 also presents a detailed experimental evaluation for both applications.

2 Data segmentation as energy minimization over a graph

Assume we are given a set of data points. In order to formulate the segmentation of the data points as a minimization problem, the points are first organized in an undirected graph $G = (V, E)$: each data point is represented by a node of $V$, and the edges $E$ consist of pairs of data points. A weight function $w(x, y)$ on the edges measures the similarity between data points $x$ and $y$: a high value of $w(x, y)$ indicates that $x$ and $y$ are similar, and a low value indicates that they are dissimilar. A popular choice for the weight function is the Gaussian

$$w(x, y) = \exp\Big( - \frac{d(x, y)^2}{\sigma^2} \Big),$$

where $d(x, y)$ is the distance, in some sense, between $x$ and $y$. For example, the distance between two 3D points is naturally their Euclidean distance. In order to reduce the computational burden of working with fully connected graphs, one often only considers the set of edges where $w$ is largest. For instance, edges may be constructed between each vertex in $V$ and its $k$ nearest neighbors: for each $x \in V$, one constructs an edge $(x, y)$ for the $k$ points $y$ with the shortest distance to $x$. Such a graph can be constructed efficiently by using kd-trees [9]. Note that the number of edges incident to some nodes in the resulting graph may be larger than $k$, due to the symmetry of the undirected graph, as illustrated in Figure ?, where $k = 2$. The construction of the graph itself provides a basic segmentation of the nodes; for instance, in Figure ? it can be observed that the graph contains 3 different connected components. This fact has been exploited in basic graph-based classification methods [1].
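For concreteness, the following is a minimal sketch of this construction using a kd-tree, as in [9]; the function and parameter names (build_knn_graph, sigma) are ours, not the paper's:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix

def build_knn_graph(points, k=10, sigma=1.0):
    """Build a k-nearest-neighbor graph with Gaussian edge weights.

    points: (N, d) array of data points.
    Returns a sparse, symmetric (N, N) weight matrix.
    """
    n_pts = len(points)
    tree = cKDTree(points)
    # Query k+1 neighbors: the nearest "neighbor" of each point is itself.
    dists, idx = tree.query(points, k=k + 1)
    rows = np.repeat(np.arange(n_pts), k)
    cols = idx[:, 1:].ravel()
    # Gaussian similarity: w(x, y) = exp(-d(x, y)^2 / sigma^2).
    vals = np.exp(-dists[:, 1:].ravel() ** 2 / sigma ** 2)
    W = coo_matrix((vals, (rows, cols)), shape=(n_pts, n_pts)).tocsr()
    # Symmetrize; a node may then end up with more than k incident edges.
    return W.maximum(W.T)
```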

In several recent works, the classification problem has been formulated as finding an optimal partition of the nodes in the graph $G$. The most basic objective function can be formulated as
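follows, in standard graph-cut notation (reconstruction ours):

$$\min_{V_1, \dots, V_n} \; \sum_{i=1}^{n} C(V_i, V_i^c) \quad \text{s.t.} \quad \bigcup_{i=1}^{n} V_i = V, \;\; V_i \cap V_j = \emptyset \;\; \text{for } i \neq j, \qquad \text{with} \quad C(A, B) = \sum_{x \in A, \, y \in B} w(x, y),$$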

where the constraint imposes that there should be no vacuum or overlap between the subsets $V_1, \dots, V_n$. If $B = A^c$, then $C(A, B)$ is the so-called “graph cut” [69]. The motivation behind the model is to group the vertices into classes in such a way that pairs of vertices with different class memberships are as dissimilar as possible, as indicated by a low value of $w$.

2.1 Size constraints and supervised constraints

Extra assumptions are necessary to avoid the trivial global minimizer in which $V_i = V$ for some $i$ and $V_j = \emptyset$ for all $j \neq i$. There are two common ways to incorporate extra assumptions. In semi-supervised classification problems, the class membership of a small set of the data points is given as training data in advance by the constraints

$$T_i \subset V_i, \qquad i = 1, \dots, n,$$

where $T_i$ is the set of “training” points known to belong to class $i$. For notational convenience, the set of all class indices $\{1, \dots, n\}$ is denoted by $I$ in the rest of this paper.

In unsupervised classification problems, one usually assumes that the regions should have approximately equal sizes. The simplest way to achieve this is to impose that each class should have a given size $s_i$:

$$|V_i| = s_i, \qquad i = 1, \dots, n,$$

where the sizes $s_i$ sum to the total number of nodes. For simplicity, we focus on the case where $|V_i|$ is the number of nodes in $V_i$. As an alternative, $|V_i|$ could be the sum of the degrees of the nodes in $V_i$, where the degree of a node is the number of edges incident to that node. If size constraints are introduced, the problem cannot generally be solved exactly due to NP-hardness. This will be discussed in more detail in Section 3.

Usually, a more flexible option is preferred, in which the energy function is modified such that partitions of equal sizes have lower energy. In the case of two classes, the energy becomes $C(S, S^c)$, where $S = V_1$ and $S^c = V_2$. Several different ways of normalizing the energy by the class sizes have been proposed, which can be summarized as follows:
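with $\|\cdot\|$ denoting the chosen notion of class size (reconstruction ours):

$$\frac{C(S, S^c)}{\|S\| \, \|S^c\|} \qquad \text{and} \qquad \frac{C(S, S^c)}{\min\big(\|S\|, \, \|S^c\|\big)}.$$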

The expression on the left is called the ratio cut when $\|\cdot\|$ counts the number of nodes, and the normalized cut when $\|\cdot\|$ is the sum of the node degrees. The expression on the right is called the Cheeger ratio cut with the node-counting norm.

The energy functions are highly non-convex, but ways to simplify the energy landscape have been proposed [16] in order to reduce the number of local minima.

2.2 New flexible constraint and penalty term on class sizes

In this paper, we aim to provide a broader set of constraints and penalty terms acting on the class sizes that can be handled in a completely convex manner. They are designed to achieve the same net result as the ratio energies, namely promoting classes of equal sizes, but in a completely convex way; they can also promote any other size relations between the classes. We will consider flexible size constraints of the form
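written here with $\underline{s}_i$ and $\overline{s}_i$ for the lower and upper bounds (symbols ours):

$$\underline{s}_i \;\le\; |V_i| \;\le\; \overline{s}_i, \qquad i = 1, \dots, n,$$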

where $\overline{s}_i$ is an upper bound on the size of class $i$ and $\underline{s}_i$ is a lower bound. Such types of constraints have previously been proposed for image segmentation in [52]. In case one only knows an estimate of the expected class sizes, such constraints can be used to enforce the sizes to lie within some interval around those estimates. To be well defined, it is obviously required that $\underline{s}_i \le \overline{s}_i$ for each $i$ and that the bounds are compatible with the total number of nodes. Note that if $\underline{s}_i = \overline{s}_i = s_i$, the constraint becomes equivalent to the exact size constraint $|V_i| = s_i$.

To avoid imposing absolute upper and lower bounds on the class sizes, we also propose appending a piecewise-linear penalty term to the energy function, defined as
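a form consistent with Figure ? and with the limit behavior described next (reconstruction ours):

$$P_\gamma(V_i) \;=\; \gamma \max\big(|V_i| - \overline{s}_i, \; 0\big) \;+\; \gamma \max\big(\underline{s}_i - |V_i|, \; 0\big),$$

which vanishes when $\underline{s}_i \le |V_i| \le \overline{s}_i$ and grows linearly with slope $\gamma$ outside this interval.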

An illustration of $P_\gamma$ is given in Figure ?. In the limit $\gamma \to \infty$, the penalty term becomes an equivalent representation of the hard constraints. Note that quadratic or higher-order penalty terms, although convex, are not well suited for the convex relaxation, because they tend to encourage non-binary values of the labeling functions. In fact, we believe the set of constraints and penalty terms given here is complete when it comes to being suited for completely convex relaxations.

One major contribution of this paper is an efficient algorithmic framework that naturally handles size constraints of the above form and the penalty term $P_\gamma$, with almost no additional computational effort.

Illustration of penalty term P_\gamma(V_i).

2.3 Region homogeneity terms

The classification problem involves the minimization of an energy on the boundary of the classes. The energy is minimized when the data points on each side of the boundary are as dissimilar as possible. These classification models are therefore similar to edge-based image segmentation models, which align the boundary of the regions along edges in the image where the intensity changes sharply. By contrast, region-based image segmentation models, such as the “Chan-Vese” model, incorporate region homogeneity terms into the energy function that measure how well each pixel fits each region.

Example of segmentation of a graph of 2D points (with number of neighbors k=2) into regions of low density (yellow), high degree of correlation of coordinates between neighboring points (red), medium correlation (blue) and low correlation (green). Dashed edges indicate those that contribute to the energy.

A graph extension of variational segmentation problems with two classes was formulated in [59], using a non-local total variation term together with a region term promoting homogeneity of a vertex function. The vertex function could be constructed directly from the point geometry and/or from external information, such as a color vector defined at each point. We extend the general problem to multiple classes and optional constraints as follows:
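with region costs $f_i(x)$ measuring how poorly point $x$ fits class $i$ (notation ours), a natural form of this energy is

$$\min_{V_1, \dots, V_n} \; \sum_{i=1}^{n} \Big\{ \sum_{x \in V_i} f_i(x) \; + \; C(V_i, V_i^c) \Big\},$$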

under optional supervised constraints and/or size constraints/penalty term. In [59] the region terms were defined in terms of a general vertex function, which could depend on a color vector or the point geometry. Experimental results on point clouds were shown for the case where this function was a color vector defined at each point. In this work, we give concrete constructions of the region terms for point cloud segmentation based purely on the geometry of the 3D points themselves. For example, the eigenvalues and eigenvectors obtained from a local PCA around each point carry useful information for describing the homogeneity within each class. Concrete examples are given in Section 5.2. An illustrative example is given in Figure ?, where each node is a 2D point and the region terms have been constructed to distinguish points with different statistical relations to their neighboring points.

The independent preprint [94] proposed to use region terms in the energy function for semi-supervised classification, where the region terms were inferred from the supervised points by diffusion. In contrast, the region terms in this work do not rely on any supervised points, but are, as mentioned, only specified and demonstrated for the application of 3D point cloud segmentation.

3 Convex relaxation of the minimization problem and analysis based on duality

In this section, the classification problems are formulated as optimization problems in terms of binary functions instead of sets. The binary representations are used to derive convex relaxations. First, some essential mathematical concepts are introduced, such as various differential operators on graphs. These concepts are used extensively to formulate the binary and convex problems and the algorithms.

3.1 Differential operators on graphs

Our definitions of operators on graphs are based on the theory in [43], where more information can be found.

Consider two Hilbert spaces, $\mathcal{V}$ and $\mathcal{E}$, associated with the set of vertices and the set of edges, respectively, and equipped with inner products and norms weighted by powers of the node degrees and edge weights. Here, the degree $d(x)$ of a node $x$ is its number of incident edges, and $w$ is the weighting function.

From these definitions, we can define a gradient operator and the Dirichlet energy. The divergence is defined as the negative adjoint of the gradient, where the symmetry of the undirected graph is exploited in the derivation of the operator. Using the divergence, a family of total variations can be defined, as well as a family of graph Laplacians.
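For concreteness, one common convention within this framework is the following (a reconstruction under this assumption; the exponents $r$ and $q$ parameterize the family, and sign conventions vary):

$$\langle u, v \rangle_{\mathcal{V}} = \sum_{x \in V} u(x)\, v(x)\, d(x)^{r}, \qquad \langle \psi, \phi \rangle_{\mathcal{E}} = \frac{1}{2} \sum_{(x,y) \in E} \psi(x,y)\, \phi(x,y)\, w(x,y)^{2q-1},$$

$$(\nabla_w u)(x,y) = w(x,y)^{1-q} \big( u(y) - u(x) \big), \qquad (\operatorname{div}_w \psi)(x) = \frac{1}{2\, d(x)^{r}} \sum_{y : (x,y) \in E} w(x,y)^{q} \big( \psi(x,y) - \psi(y,x) \big),$$

$$TV_w(u) = \max\Big\{ \langle \operatorname{div}_w \psi, \, u \rangle_{\mathcal{V}} \; : \; \psi \in \mathcal{E}, \; \|\psi\|_{\infty} \le 1 \Big\} = \frac{1}{2} \sum_{(x,y) \in E} w(x,y)^{q} \, \big| u(y) - u(x) \big|, \qquad \Delta_w u = \operatorname{div}_w (\nabla_w u).$$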

3.2 Binary formulation of the energy minimization problem

A partition of $V$ satisfying the no vacuum and overlap constraint

can be described by a binary vector function $u = (u_1, \dots, u_n) \colon V \to \{0,1\}^n$, defined by $u_i(x) = 1$ if $x \in V_i$ and $u_i(x) = 0$ otherwise.

In other words, $u(x) = e_i$ if and only if $x \in V_i$, where $e_i$ is the unit vector which is $1$ at the $i$th component and $0$ at all other components. The no vacuum and overlap constraint can be expressed in terms of $u$ as $\sum_{i=1}^{n} u_i(x) = 1$ for all $x \in V$.

Moreover, note that the cut term of the energy can be naturally related to the total variation of $u$ for binary $u$. In fact,
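for the unnormalized convention ($q = 1$) and with $u_i$ the binary indicator of $V_i$,

$$TV_w(u_i) \;=\; \frac{1}{2} \sum_{(x,y) \in E} w(x,y) \, \big| u_i(x) - u_i(y) \big| \;=\; C(V_i, \, V_i^c),$$

so the multiclass cut energy equals the sum of the total variations of the indicator functions.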

This connection between the two terms was used in several recent works to derive, within the graphical framework, efficient unsupervised algorithms for clustering. For example, [16] formulates rigorous convergence results for two methods that solve the relaxed Cheeger cut problem using non-local total variation. Moreover, [82] provides a continuous relaxation of the Cheeger cut problem and derives an efficient algorithm for finding good cuts: the authors relate the Cheeger cut to total variation, and then present a split-Bregman approach for solving the problem. In [88], the continuum limit of total variation on point clouds was derived.

The general set of problems can now be formulated in terms of $u$ as the minimization of the region and total variation terms over

the set of binary functions indicating the partition; the superscript in our notation for this set stands for “primal”. The optional size constraints can be imposed in terms of $u$ as

$$\underline{s}_i \;\le\; \sum_{x \in V} u_i(x) \;\le\; \overline{s}_i, \qquad i = 1, \dots, n,$$

since $|V_i| = \sum_{x \in V} u_i(x)$. The size penalty term can be imposed by appending the term $\sum_{i=1}^{n} P_\gamma\big(\sum_{x \in V} u_i(x)\big)$ to the energy function.

In the case of semi-supervised classification, the data term takes the form
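of a weighted fidelity term; one concrete form consistent with the description below (reconstruction ours) is

$$\sum_{i=1}^{n} \sum_{x \in V} f_i(x)\, u_i(x), \qquad f_i(x) = \eta(x) \big( 1 - u_i^0(x) \big),$$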

where $u_i^0$ is a binary function taking the value $1$ on $T_i$ and $0$ elsewhere, and $\eta$ is a function that takes a large constant value $\eta_0$ on supervised points and zero elsewhere. If $\eta_0$ is chosen sufficiently large, it can be guaranteed that the solution satisfies the supervised constraints. The algorithm presented in this work does not require the selection of an appropriate value for $\eta_0$, as the ideal case $\eta_0 = \infty$ can be handled naturally without introducing numerical instabilities.

Region homogeneity terms can be imposed through the choice of the functions $f_i$. More generally, region homogeneity terms and supervised data points can be combined by adding the two corresponding contributions in the definition of each $f_i$.

The total variation term is defined as in Section 3.1.

If the number of supervised points is very low and there is no additional region term, the global minimizer may become the trivial solution in which one of the classes covers the whole data set, except for the supervised points of the other classes. This tends to occur when less than a small threshold fraction of the points is supervised. As in our previous work [65], this problem can be countered by increasing the number of edges incident to supervised points in comparison to other points. Doing so increases the cost of the trivial solution without significantly influencing the desired global minimizer. An alternative, proposed in the preprint [94], is to create region terms in a pre-processing step by diffusing information from the supervised points into their neighbors.

3.3 Convex relaxation of the energy minimization problem

Due to the binary constraints, the problem is non-convex. As in several recent works on variational image segmentation [98] and MRF optimization [51], we replace the binary indicator constraint set by the convex unit simplex, i.e. the set of functions $u \colon V \to [0,1]^n$ with $\sum_{i=1}^{n} u_i(x) = 1$ for all $x \in V$.

Hence, we are interested in solving the following convex relaxed problem

under optional size constraints or penalty term. In the case of $n = 2$ classes and no size constraints, the relaxation is exact, as proven for image segmentation in [80] and for classification problems on graphs in [59]. For $n > 2$, the problem becomes equivalent to a multiway cut problem, which is known to be NP-hard [28]. If size constraints are imposed, the problem becomes NP-hard even when $n = 2$, as it becomes equivalent to a knapsack problem [64] in the special case of no TV term.

In this paper we are interested in using the convex relaxation to solve the original problem approximately. Under certain conditions, the convex relaxation gives an exact global minimizer of the original problem. For instance, it can be straightforwardly shown that:

Let $E$ denote the energy function, with or without the size penalty term. Since the binary set is contained in the unit simplex, the minimum of the relaxed problem is a lower bound on the minimum of the binary problem. Therefore, if a minimizer of the relaxed problem is binary, then it is also a global minimizer of the original binary problem. The size constraints can be regarded as a special case by choosing $\gamma = \infty$.

If the computed solution of the relaxed problem is not completely binary, one way to obtain an approximate binary solution that exactly indicates the class membership of each point is to select the binary function as the nearest vertex in the unit simplex by thresholding.
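Concretely, the nearest simplex vertex is the one-hot vector at the largest component, so the scheme reduces to a row-wise argmax; a minimal sketch (names ours):

```python
import numpy as np

def threshold_to_simplex_vertex(u):
    """Map each row of u (an N x n matrix with rows in the unit simplex)
    to the nearest simplex vertex, i.e. a one-hot class indicator.
    Ties between maximal components are broken by the first index."""
    u_bin = np.zeros_like(u)
    u_bin[np.arange(u.shape[0]), np.argmax(u, axis=1)] = 1.0
    return u_bin
```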

As an alternative to this thresholding scheme, binary solutions of the convex relaxation can also be obtained from a dual solution, which has a more solid theoretical justification if certain conditions are fulfilled. The dual problem also gives insight into why the convex relaxation is expected to closely approximate the original problem. This is the topic of the next section.

3.4 Analysis of the convex relaxation through a dual formulation

We will now derive theoretical results which indicate that the multiclass problem is closely approximated by the convex relaxation. The following results extend those given in [7] from image domains to graphs. In contrast to [7], we also incorporate size constraints or penalty terms in the analysis. In fact, the strongest results, given near the end of the section, are only valid for problems without such size constraints/terms. This observation agrees well with our experiments, although in both cases very close approximations are obtained.

We start by deriving an equivalent dual formulation of the relaxed problem. Note that this dual problem is different from the “max-flow” type dual problem on graphs proposed in our previous work [65] for the case of two classes. Its main purpose is theoretical analysis, not algorithmic development; its relation to flow maximization will be the subject of the next section. Dual formulations on graphs have also been proposed in [46] for variational multiscale decomposition of graph signals.

By using the dual definition of the total variation, the problem with size penalty term can be expressed in primal-dual form as

where the constraint set is as defined above. It will be shown that the size constraints or penalty term can be implicitly incorporated by introducing additional dual variables, one per class, as

The primal-dual problem satisfies all the conditions of the mini-max theorem (see e.g. Chapter 6, Proposition 2.4 of [32]): the constraint sets for the primal and dual variables are compact and convex, and the energy function is convex l.s.c. in the primal variable for fixed dual variables, and concave u.s.c. in the dual variables for fixed primal variable. This implies the existence of at least one primal-dual solution (saddle point) with finite energy value.

For a given labeling function, the terms involving the additional dual variables and the class sizes can be rearranged as

Consider the three choices above for the constraint set of the additional dual variables. In the first case, the class sizes do not contribute to the energy. In the second case, the two terms above summed together are exactly equal to the size penalty term. In the third case, the constraint set of the additional dual variables is no longer compact, but we can apply Sion's generalization of the mini-max theorem [79], which allows either the primal or the dual constraint set to be non-compact. It follows that if the size constraints were not satisfied, the energy would be infinite, contradicting the existence of a primal-dual solution.

From the mini-max theorems, it also follows that the inf and sup operators can be interchanged as follows:

For notational convenience, we denote the pointwise unit simplex by the set of vectors in $[0, 1]^n$ whose components sum to one.

For an arbitrary vector $v \in \mathbb{R}^n$, observe that the minimum of $\langle s, v \rangle$ over all $s$ in the unit simplex equals the minimal component of $v$.

Therefore, the inner minimization can be solved analytically at each position $x$, and we obtain the dual problem

Assuming a solution of the dual problem has been obtained, the following theorem characterizes the corresponding primal variable.

Since all conditions of the mini-max theorem [32] are satisfied (c.f. the proof of Theorem ?), there must exist a maximizer of the dual problem and a minimizer of the primal problem which together constitute a solution of the primal-dual problem (see e.g. [32]). For an arbitrary vector in the unit simplex, the inner product with a given cost vector is bounded below by the minimal component of that cost vector. Therefore, at each point $x$, the minimizer must satisfy this bound with equality,

otherwise the primal-dual energy would exceed the dual energy, contradicting strong duality. The above expression can be further decomposed as follows

Since each component of the cost vector is bounded below by its minimal value, the above can only be true provided $u_i(x) = 0$ for every index $i$ whose component is not minimal.

If the minimal component is unique, it follows directly that $u(x)$ must be the corresponding indicator vector $e_i$.

If the minimal component is unique at every point $x$, then the corresponding primal solution is contained in the binary set. By Proposition ?, it is then a global minimizer of the original problem.

It can also be shown that an exact binary primal solution exists if there are two non-unique minimal components of the cost vector, but this result only holds in the case where there are no constraints acting on the class sizes.

A constructive proof of Theorem ? is given in Appendix A.

If the cost vector has three or more minimal components, it cannot in general be expected that a corresponding binary primal solution exists, reflecting the fact that one can probably not obtain an exact solution to the NP-hard problem in general by a convex relaxation. Experiments indicate that this happens very rarely, if ever, in practice for the classification problems considered here.

As an alternative thresholding scheme, the binary solution can be selected based on this characterization after a dual solution of the convex relaxation has been obtained. If there are multiple minimal components of the cost vector, one can select an arbitrary one of those indices, just as for the ordinary thresholding scheme. Experiments will demonstrate and compare both schemes in Section 5.

4 ‘Max-flow’ formulation of the dual problem and algorithm

A drawback of the dual model is the non-smoothness of the objective function, which is also a drawback of the original primal formulation of the convex relaxation. This section reformulates the dual model in a form structurally similar to a max-flow problem, which is smooth and facilitates the development of a very efficient algorithm based on augmented Lagrangian theory.

The resulting dual problem can be seen as a multiclass variant of the max-flow model proposed in our work [65] for two classes, and a graph analogue of the max-flow model given for image domains in [96]. Note that our derivations differ conceptually from [96], because we directly utilize the dual problem derived in the last section. Furthermore, the new flexible size constraint and penalty term are incorporated naturally in the max-flow problem by a modified flow conservation condition, which states that there should be a constant flow excess at each node. The amount of flow excess is expressed with a few additional optimization variables in the algorithm, which can be optimized over with very little additional computational cost.

4.1 ‘Max-flow’ reformulation of the dual problem

We now derive alternative dual and primal-dual formulations of the convex relaxed problem that are more beneficial for computations. The algorithm will be presented in the next section.

By introducing an auxiliary variable, the dual problem can be reformulated as follows:

By adding another set of auxiliary variables, the constraints can be formulated as

for all nodes and all classes. Rearranging the terms in the constraints, and using the definition of the infinity norm, leads to the ‘max-flow’ model subject to the flow capacity and conservation constraints.

The problem with these constraints is structurally similar to a max-flow problem over $n$ copies of the graph, one for each class. The aim of the max-flow problem is to maximize the flow from a source vertex to a sink vertex, subject to flow capacities on the edges and flow conservation at each node. The source variable can be regarded as the flow on the edges from the source to each vertex in each of the subgraphs, which have unbounded capacities. The remaining variables can be regarded as the flow and the capacity on the edge from each vertex of subgraph $i$ to the sink. The last constraint is the flow conservation condition. Observe that in the case of size constraints/terms, instead of being conserved, there should be a constant excess flow at each node of each subgraph. The objective function is a measure of the total amount of flow in the graph.

Utilizing results from Section 3.4, we now show that the convex relaxation is the equivalent dual problem to the max-flow problem .

The equivalence between the primal-dual problem and the max-flow problem follows directly, as the labeling function is an unconstrained Lagrange multiplier for the flow conservation constraint. Existence of the Lagrange multipliers follows because: 1) the max-flow energy is upper bounded, since the problem is equivalent to the dual problem, which by Theorem ? admits a solution; and 2) the constraints are linear, and hence differentiable.

The equivalence between the primal-dual problem, the max-flow problem and the convex relaxed problem now follows: by Proposition ?, the ‘max-flow’ problem is equivalent to the dual problem. By Theorem ?, the dual problem is equivalent to the convex relaxed problem, with size constraints, size penalty term or no size constraints, depending on the choice of constraint set for the additional dual variables.

Note an important distinction between this primal-dual problem and the primal-dual problem derived in the last section: the primal variable is unconstrained here, and the simplex constraint is handled implicitly. Since it may not be obvious from the proof how the constraints on the labeling function are encoded in the primal-dual problem, we give some further insight, written out in the two identities below. For a given labeling function, the maximization with respect to the source flow at a point shows that if the labeling does not satisfy the sum-to-one constraint there, the primal-dual energy is infinite, contradicting boundedness from above. In a similar manner, the optimization with respect to the sink flows shows that the energy would be infinite if the labeling violated the non-negativity constraints, while if the labeling equals the indicator function of class $i$, the value is exactly the pointwise cost of assigning the point to class $i$.
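In formulas (a sketch in our own notation, with $p_s(x)$ the source flow, $p_i(x)$ the sink flow of class $i$, and $f_i$ the pointwise assignment cost):

$$\sup_{p_s(x) \in \mathbb{R}} \; p_s(x) \Big( 1 - \sum_{i=1}^{n} u_i(x) \Big) = \begin{cases} 0, & \text{if } \sum_{i} u_i(x) = 1, \\ +\infty, & \text{otherwise}; \end{cases} \qquad \sup_{p_i(x) \, \le \, f_i(x)} \; p_i(x) \, u_i(x) = \begin{cases} f_i(x) \, u_i(x), & \text{if } u_i(x) \ge 0, \\ +\infty, & \text{otherwise}. \end{cases}$$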

4.2 Augmented Lagrangian max-flow algorithm

This section derives an efficient algorithm which exploits the fact that all constraints on the labeling function are handled implicitly in the primal-dual problem. The algorithm is based on augmented Lagrangian theory, where the labeling function is updated as a Lagrange multiplier by a gradient step in each iteration. Since no subsequent projection of the labeling function is necessary, the algorithm tolerates a wide range of step sizes and converges with high accuracy. The advantages of related ‘max-flow’ algorithms for ordinary 2D imaging problems over e.g. Arrow-Hurwicz type primal-dual algorithms have been demonstrated in [6].

From the primal-dual problem, we first construct the augmented Lagrangian functional:

An augmented Lagrangian algorithm for optimizing the above functional is given below; it alternates between maximizing over the dual (flow) variables and updating the Lagrange multiplier (the labeling function).

Note that if there are no constraints on the class sizes, the auxiliary size variables remain zero at every iteration. The algorithm can in this case be simplified by fixing them to zero and ignoring all steps involving them.



Initialize the flow variables, the auxiliary size variables and the labeling function. For each iteration $k$ until convergence:

  • Optimize the edge flows, for each class, with the other variables fixed.

  • Optimize the source flow with the other variables fixed.

  • Optimize the sink flows, for each class, with the other variables fixed.

  • Optimize the first auxiliary size variable, for each class, with the other variables fixed.

  • Optimize the second auxiliary size variable, for each class, with the other variables fixed.

  • Update the Lagrange multiplier (the labeling function), for each class.


The edge-flow subproblem can be solved by a few steps of the projected gradient method as follows:

each gradient ascent step is followed by the projection operator $p \mapsto \operatorname{sgn}(p) \min(|p|, c)$,

where sgn is the sign function and $c$ is the corresponding edge capacity. There are extended convergence theories for the augmented Lagrangian method in the case when one of the subproblems is solved inexactly; see e.g. [35]. In our experience, one gradient ascent iteration leads to the fastest overall speed of convergence.
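A minimal sketch of this step in code (variable names, the step size tau and the gradient g are ours):

```python
import numpy as np

def project_to_capacity(p, cap):
    """Projection used after each gradient ascent step: truncate each
    edge flow so that |p| <= cap, i.e. p -> sgn(p) * min(|p|, cap)."""
    return np.sign(p) * np.minimum(np.abs(p), cap)

# One projected gradient-ascent step on the edge flows:
#   p = project_to_capacity(p + tau * g, cap)
```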

The source- and sink-flow subproblems can be solved pointwise in closed form.

Consider now the subproblems (Equation 40) and (Equation 41) for the auxiliary size variables. If no constraints are given on these variables, the maximizers over the sum of the concave quadratic terms can be computed as the average of the maximizers of each individual term. Since the objective function is concave, and the maximization variable is a constant over the nodes, an exact solution of the constrained maximization problem can then be obtained by projecting this average onto the constraint set.

Algorithm 1 is suitable for parallel implementation on GPUs, since the subproblems at each substep can be solved pointwise, independently of each other, using simple floating-point arithmetic. The update formula for the edge flows only requires access to the values of neighboring nodes at the previous iterate. As discussed in Section 2, the number of neighbors may vary across the nodes of the graph, so special considerations should be taken when declaring memory. For simplicity, we have implemented the algorithm on CPU for the experimental evaluation.

5 Applications and experiments

We now focus on specific applications of the convex framework. Experimental results on semi-supervised classification of high-dimensional data are presented in Section 5.1. Section 5.2 proposes specific terms in the general model for segmentation of unstructured 3D point clouds, and presents experimental results on LaDAR data acquired in outdoor scenes. In both cases, we give a thorough examination of the accuracy of the results, the tightness of the convex relaxations, and the convergence properties of the algorithms.

A useful quality measure of the convex relaxation is the extent to which the computed solution is binary. Proposition ? indicates that if the computed solution is completely binary, it is also an exact global minimizer of the original non-convex problem. Let the thresholded solution be obtained by modifying each row of the computed solution to the closest vertex in the unit simplex, according to the scheme above. As a quality measure of the solution at each iteration of Algorithm 1, we calculate the average difference between the solution and its thresholded version as follows:

where $N$ is the number of nodes and $n$ is the number of classes. We call this quantity the “binary difference” at iteration $k$. Naturally, we want it to become as low as possible as the algorithm converges.
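A minimal sketch of this measure in code (the exact normalization over the $N \cdot n$ entries is our assumption):

```python
import numpy as np

def binary_difference(u):
    """Average difference between u (N x n) and its thresholded,
    one-hot version, used as a per-iteration quality measure."""
    u_bin = np.zeros_like(u)
    u_bin[np.arange(u.shape[0]), np.argmax(u, axis=1)] = 1.0
    return np.abs(u - u_bin).sum() / u.size
```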

5.1 Semi-supervised classification results

In this section, we describe the semi-supervised classification results obtained using the algorithm, with and without the size constraints or penalty term.

We compare the accuracy of the results with respect to the ground truth. The results are also compared against other local minimization approaches in terms of the final total variation energies:
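A natural reading of this energy (reconstruction ours) is the sum of the graph total variation terms over the classes,

$$E^{TV}(u) \;=\; \sum_{i=1}^{n} TV_w(u_i),$$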

where $n$ is the number of classes. A lower value is better. The energy contribution from the fidelity term is ignored because the solutions satisfy the supervised constraints by construction, thus giving zero contribution from those terms.

To compute the weights for the data sets, we use the Zelnik-Manor and Perona weight function [74], a Gaussian with local scaling:

$$\hat{w}(x, y) = \exp\Big( - \frac{d(x, y)^2}{\tau(x)\, \tau(y)} \Big),$$

where $d(x, y)$ is a distance measure between vertices $x$ and $y$, and $\tau(x)$ is the distance between vertex $x$ and its $M$th closest neighbor. If $y$ is not among the nearest neighbors of $x$, then $\hat{w}(x, y)$ is set to 0. After the graph is computed, we symmetrize it so that $w(x, y) = w(y, x)$.

Here, $M$ is a parameter to be chosen. The distance measure will be defined more specifically for each application.
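Putting the pieces together, a sketch of this weight construction (the function name and the elementwise-maximum symmetrization rule are our choices):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix

def zelnik_manor_perona_graph(points, k=10, m=7):
    """Locally scaled weights: w(x, y) = exp(-d(x, y)^2 / (tau(x) tau(y))),
    with tau(x) the distance from x to its m-th nearest neighbor (m <= k),
    and w = 0 outside the k nearest neighbors."""
    n_pts = len(points)
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)  # column 0 is the point itself
    tau = dists[:, m]                         # local scale per point
    rows = np.repeat(np.arange(n_pts), k)
    cols = idx[:, 1:].ravel()
    vals = np.exp(-dists[:, 1:].ravel() ** 2 / (tau[rows] * tau[cols]))
    W = coo_matrix((vals, (rows, cols)), shape=(n_pts, n_pts)).tocsr()
    return W.maximum(W.T)                     # symmetrize
```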

We run the minimization procedure until the following stopping criterion is satisfied:

where the comparison is made against the labeling from the previous iteration, and the value of the tolerance varies depending on the data set.

All experiments were performed on a 2.4 GHz Intel Core 2 Quad CPU. We initialize the data term to a large constant (in our case, 500) at each point that is a supervised point of any class but class $i$, and to zero otherwise, for each class $i$. The flow and auxiliary variables are initialized to zero, and the labeling function is initialized to $1/n$ for each class, where $n$ is the number of classes. The value of the parameter $c$ is specified for each data set below.

In the following, we give details about the setup and results for each dataset, before drawing some general conclusions at the end.

MNIST

The MNIST data set [57], affiliated with the Courant Institute of New York University, consists of 70,000 images of handwritten digits 0 through 9. Some of the images in the database are shown in Figure ?. The objective is, of course, to assign the correct digit to each image; thus, this is a 10-class segmentation problem.

We construct the graph as follows: each image is a node, described by a feature vector of the pixel intensity values of the image. These feature vectors are used to compute the weights for pairs of nodes. The weight matrix is computed using the Zelnik-Manor and Perona weight function with local scaling using the $M$th closest neighbor. We note that preprocessing of the data is not needed to obtain an accurate classification, and we do not perform any. The parameter $c$ used was 0.05.

Examples of digits from the MNIST database.

The average accuracy results over 100 different runs with randomly chosen supervised points are shown in Table ? for the case of no size constraints. We note that the new approaches reach consistently higher accuracies and lower energies than related local minimization approaches, and that incorporation of size information can improve the accuracies further. The computation times are highly efficient, although not quite as fast as MBO, which uses only 10 iterations to solve the problem in an approximate manner. The plots of the binary difference versus iteration, depicted in Figure ?, show that the binary difference converges to an extremely small number.

The results for the data set are visualized in Figure ?. For the visualization, we use the first and the sixth eigenvectors of the graph Laplacian; each eigenvector has one component per node of the data set. One way to visualize a classification of a data set such as MNIST, which consists of a collection of images, is to plot the values of one eigenvector of the graph Laplacian against another, using colors to differentiate the classes in a given segmentation. In this case, the plots in Figure ? graph the values of the first versus the sixth eigenvector for the nodes of classes 4 and 9 only. The blue and red regions represent nodes of classes 4 and 9, respectively, and the green region represents misclassified points.

Moreover, we compare our results to those of other methods in Table ?, where our method's name is written in bold. Note that algorithms such as linear and nonlinear classifiers, boosted stumps, support vector machines, and both neural and convolutional nets are all supervised learning approaches, which use around 60,000 of the images as a training set (about 86% of the data) and 10,000 images for testing. In contrast, we use only a small percentage of the data as supervised training points, and obtain classification results that are competitive with or better than those of some of the best methods. Moreover, note that no preprocessing was performed on the data, as was needed for some of the methods we compare with; we worked with the raw data directly.

Three Moons Data Set

We created a synthetic data set, called the three moons data set, to test our method. The set is constructed as follows. First, consider three half circles in $\mathbb{R}^2$: two top half circles of unit radius, and a bottom half circle of larger radius positioned between and below them. A thousand points are sampled from each of the three half circles and embedded in a high-dimensional space by adding Gaussian noise to each of the components of each embedded point. The goal is to segment the three circles, using a small number of supervised points from each class; thus, this is a 3-class segmentation problem. The noise and the fact that the points are embedded in high-dimensional space make this difficult.

We construct the graph as follows: each point is a node, described by the feature vector consisting of the coordinates of the point. These feature vectors are used to compute the distance component of the weight function for pairs of nodes. The weight matrix is computed using the Zelnik-Manor and Perona weight function with local scaling using the $M$th closest neighbor. The parameter $c$ used was 0.1.

The results for the data set are visualized in Figure ?, and the accuracies are shown in Table ?. This is the only data set where the proposed approach obtained a lower accuracy than MBO. For this particular example, the global minimizer does not seem to be the best in terms of accuracy, which is a fault of the model rather than of the optimization procedure.

COIL

We evaluated our performance on the benchmark COIL data set [73] from the Columbia University Image Library. This is a set of color images of 100 objects, taken at different angles. The red channel of each image was downsampled to 16 × 16 pixels by averaging over blocks of 8 × 8 pixels. Then, 24 of the objects were randomly selected and partitioned into six classes. Discarding a number of images from each class leaves 250 per class, giving a data set of 1500 data points and 6 classes.

We construct the graph as follows: each image is a node. We apply PCA to project each image onto 241 principal components; these components form the feature vectors, which are used to calculate the distance component of the weight function. The weight matrix is computed using the Zelnik-Manor and Perona weight function with local scaling using the $M$th closest neighbor. The parameter $c$ used was 0.03.

The resulting accuracies are shown in Table ?, indicating that our method outperforms local minimization approaches and is comparable to or better than some of the other best existing methods. The results for the data set are visualized in Figure ?; the procedure used is similar to that of the MNIST visualization. The plots in the figure graph the values of the first versus the third eigenvector of the graph Laplacian, with the classes of the resulting segmentation labeled by different colors.

Landsat Satellite data set

We also evaluated our performance on the Landsat satellite data set, obtained from the UCI Machine Learning Repository [4]. This is a hyperspectral data set composed of sets of multi-spectral values of pixels in 3 × 3 neighborhoods in a satellite image; the portions of the electromagnetic spectrum covered include the near-infrared. The goal is to predict the classification of the central pixel in each element of the data set. The six classes are red soil, cotton crop, grey soil, damp grey soil, soil with vegetation stubble, and very damp grey soil. There are 6435 nodes in the data set.

We construct the graph as follows. The UCI website provides a 36-dimensional feature vector for each node, which is used to calculate the distance component of the weight function. The weight matrix is computed using the Zelnik-Manor and Perona weight function with local scaling using the $M$th closest neighbor. The parameter $c$ used was 0.3.

Table ? includes a comparison of our method with some of the best methods (most of them cited in [70]). One can see that our results have higher accuracy. We note that, except for the GL and MBO algorithms, all other algorithms we compare with on the Landsat satellite data are supervised learning methods, which use 80% of the data for training and 20% for testing. Our method was able to outperform these algorithms while using a very small percentage of the data set (10%) as supervised points. Even with 5.6% supervised points, it outperforms all but one of the aforementioned methods.

Non-uniform distribution of supervised points

In all previous experiments, the supervised points were sampled randomly from all the data points. To test the algorithms in more challenging scenarios, we introduce some bias in the sampling of the supervised points, which is also a more realistic situation in practice. We used two data sets for this test: the MNIST data set and the COIL data set.

In the case of the MNIST data set, we chose the supervised points non-randomly for the digits 4 and 9 only. To obtain the non-randomness, we allowed a point to be chosen as supervised only if it had a particular range of values for the second eigenvector. This resulted in a biased distribution of the supervised points. For the max-flow algorithm, the overall accuracy and the accuracy on digits 4 and 9 remained close to each other, while the non-convex MBO algorithm [39] obtained a comparable overall accuracy but a noticeably lower accuracy on digits 4 and 9. The MBO method was also a bit more unstable in its accuracy with respect to different distributions of the supervised points. The max-flow algorithm was very stable, with a very small standard deviation over a set of accuracies for different supervised point distributions.

In the case of the COIL data set, we chose the supervised points non-randomly for two of the classes, with the non-randomness achieved in the same way as for the MNIST data set. The results were similar: the max-flow algorithm retained comparable accuracy overall and on the two biased classes, while the MBO algorithm [39] obtained a comparable overall accuracy but a noticeably lower accuracy on the two biased classes.

These results are summarized in Table ? and visualized in Figures ? and ? for the MNIST and COIL data sets, respectively.

Experiments with size constraints and penalty term

The exact size constraints can improve the accuracies if knowledge of the exact class sizes is available. However, it is not realistic to have exact knowledge of the class sizes in practice, and this was the motivation behind developing the flexible constraints and the penalty term. In order to simulate the case where only an estimate of the class sizes is known, we perturb the exact class sizes by a random number ranging between 1 and 20 of