
# Belief Propagation Min-Sum Algorithm for Generalized Min-Cost Network Flow

Andrii Riazanov, Yury Maximov and Michael Chertkov

*The work was supported by funding from the U.S. Department of Energy's Office of Electricity as part of the DOE Grid Modernization Initiative.

Andrii Riazanov is with the Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA. riazanov@cs.cmu.edu

Yury Maximov is with Skolkovo Institute of Science and Technology, Center for Energy Systems, and Los Alamos National Laboratory, Theoretical Division T-4 & CNLS, Los Alamos, NM 87544, USA. yury@lanl.gov

Michael Chertkov is with Skolkovo Institute of Science and Technology, Center for Energy Systems, and Los Alamos National Laboratory, Theoretical Division T-4, Los Alamos, NM 87544, USA. chertkov@lanl.gov
###### Abstract

Belief Propagation algorithms are instruments used broadly to solve graphical model optimization and statistical inference problems. In the general case of a loopy graphical model, Belief Propagation is a heuristic that is quite successful in practice, even though its empirical success typically lacks theoretical guarantees. This paper extends the short list of special cases where correctness and/or convergence of a Belief Propagation algorithm is proven.

We generalize the formulation of the Min-Cost Network Flow problem by relaxing the flow conservation (balance) constraints and then prove that the Belief Propagation algorithm converges to the exact result.

## I Introduction

Belief Propagation algorithms were designed to solve optimization and inference problems in graphical models. Since a variety of problems from different fields of science (communication, statistical physics, machine learning, computer vision, signal processing, etc.) can be formulated in the language of graphical models, Belief Propagation algorithms have attracted great research interest over the last decade [1, 2]. These algorithms belong to the class of message-passing heuristics: distributed, iterative algorithms with little computation performed per iteration.

Two types of problems on graphical models are of the greatest interest: computing the marginal distribution of a random variable, and finding the assignment that maximizes the likelihood. The Sum-Product and Min-Sum algorithms were designed to solve these two problems within the Belief Propagation framework. Originally, the Sum-Product algorithm was formulated on trees ([3, 4, 5]), for which it implements the idea of dynamic programming in the message-passing setting, where variable nodes transmit messages between each other along the edges of the graphical model. However, these algorithms show surprisingly good performance even when applied to graphical models of non-tree structure ([6, 7, 8, 9, 10]). Since Belief Propagation algorithms can be naturally implemented and parallelized using the simple idea of message passing, these instruments are widely used for finding approximate solutions to optimization and inference problems.

However, despite the good practical performance of this heuristic in many cases, the theoretical grounding of these algorithms remains (to some extent) unexplored, so there are no actual proofs that the algorithms give correct (or even approximately correct) answers for many problem statements. That is why one important task is to delineate the scope of problems to which these algorithms can indeed be applied, and to justify their practical usage.

In [11] the authors proved that the Belief Propagation algorithm gives correct answers for the Min-Cost Network Flow problem, regardless of whether the underlying graph is a tree. Moreover, pseudo-polynomial time convergence was proven for these problems under additional conditions (uniqueness of the solution and integral input). This work significantly extended the set of problems for which Belief Propagation algorithms are justified.

In this paper we formulate an extension of the Min-Cost Network Flow problem, which we refer to as the Generalized Min-Cost Network Flow problem (GMNF). This problem statement is much broader than the original formulation, but we extend the ideas of [11] to prove that Belief Propagation algorithms also give correct answers for this generalization. This extension may find many applications in various fields of study, since the GMNF problem is a general linear programming problem with additional constraints on the cycles of the underlying graph (more precisely, on the coefficients of the corresponding vertices), which can be natural in some practical formulations.

## II Generalized Min-Cost Network Flow

### II-A Problem statement

Let $G = (V, E)$ be a directed graph, where $V$ is the set of vertices and $E$ is the set of edges, $|V| = n$, $|E| = m$. For any vertex $v \in V$ we denote by $E_v$ the set of edges incident to $v$, and by $a_{ev} \neq 0$ the coefficient related to the pair $(v, e)$, such that $a_{ev} > 0$ if $e$ is an out-arc with respect to $v$ (i.e. $e = (v, w)$ for some vertex $w$), and $a_{ev} < 0$ if $e$ is an in-arc with respect to $v$ (i.e. $e = (w, v)$ for some vertex $w$).

For any vertex $v$ and edges $e_1, e_2 \in E_v$ incident to $v$ we define $\delta(v, e_1, e_2) \triangleq \bigl|a_{e_1 v}/a_{e_2 v}\bigr|$. Then we consider the following property of the graph:

###### Definition II.1

The graph $G$ is called ratio-balanced if for every non-directed cycle $C = (v_1, e_1, v_2, e_2, \dots, v_k, e_k, v_1)$, which consists of vertices $v_1, \dots, v_k$ and edges $e_1, \dots, e_k$, it holds (with the convention $e_0 = e_k$):

$$\prod_{i=1}^{k}\delta(v_i, e_{i-1}, e_i) = \delta(v_1, e_k, e_1)\cdot\delta(v_2, e_1, e_2)\cdot\ldots\cdot\delta(v_{k-1}, e_{k-2}, e_{k-1})\cdot\delta(v_k, e_{k-1}, e_k) = 1. \tag{1}$$

Here by a non-directed cycle we mean that for every consecutive pair of vertices $v_i$, $v_{i+1}$ it holds that either $(v_i, v_{i+1}) \in E$ or $(v_{i+1}, v_i) \in E$. It is not hard to verify that it suffices for equation (1) to hold only for every simple non-directed cycle of $G$, since then equation (1) can easily be deduced for an arbitrary non-directed cycle.

To check that a given graph is ratio-balanced, one then needs to check whether (1) holds for every simple cycle. If $m$ is the number of edges and $s$ is the number of simple cycles of $G$, one obviously needs at least $\Omega(s)$ time to iterate through all simple cycles. In fact, an efficient algorithm for enumerating all simple cycles was introduced in [12]. Then, to check whether a graph is ratio-balanced, one may use this algorithm to iterate through all simple cycles and check (1) for each of them.
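As an aside, the product in (1) telescopes: each $\delta$ contributes the absolute coefficient of the shared edge at one endpoint over the other, so ratio balance can also be verified without enumerating cycles, by checking that the per-edge log-ratios form a vertex potential. The sketch below illustrates this observation; the function name, data layout, and the potential-based check itself are ours, not from the paper:

```python
import math
from collections import deque

def is_ratio_balanced(n, edges, tol=1e-9):
    """Check the ratio-balance property (Definition II.1) without cycles.

    `edges` is a list of (u, v, a_u, a_v): an edge between vertices u and v
    of an n-vertex graph, with nonzero coefficients a_{e,u} and a_{e,v}.
    Around any cycle the product of delta(v_i, e_{i-1}, e_i) telescopes into
    the product of |a_{e,head}| / |a_{e,tail}| over the cycle's edges, so
    ratio balance holds iff the log-ratio of each edge is a difference of
    vertex potentials -- checkable by one BFS per connected component.
    """
    adj = [[] for _ in range(n)]
    for u, v, au, av in edges:
        w = math.log(abs(av)) - math.log(abs(au))  # log-ratio when going u -> v
        adj[u].append((v, w))
        adj[v].append((u, -w))

    pot = [None] * n
    for s in range(n):
        if pot[s] is not None:
            continue
        pot[s] = 0.0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v, w in adj[u]:
                if pot[v] is None:
                    pot[v] = pot[u] + w
                    queue.append(v)
                elif abs(pot[v] - (pot[u] + w)) > tol:
                    return False  # some cycle violates equation (1)
    return True
```

This runs in $O(n + m)$ time, in contrast to the cycle-enumeration approach, because a violation of (1) on any cycle shows up as an inconsistent potential on some non-tree edge.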

We formulate the Generalized Min-Cost Network Flow problem for a ratio-balanced graph $G$ as follows:

$$\begin{aligned}
\text{minimize}\quad & \sum_{e\in E} c_e x_e & \text{(GMNF)}\\
\text{subject to}\quad & \sum_{e\in E_v} a_{ev}\, x_e = f_v, \quad \forall v\in V,\\
& 0 \le x_e \le u_e, \quad \forall e\in E.
\end{aligned}$$

Here the first set of constraints are balance constraints, which must hold for each vertex. The second set of constraints consists of capacity constraints on each edge of $G$. The coefficients $c_e$ and $u_e$, defined for each edge $e$, are called the cost and the capacity of the edge, respectively. Any assignment of $x$ in this problem which satisfies the balance and capacity constraints is referred to as a flow. Finally, the objective function is called the total cost of the flow.
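As a concrete illustration, (GMNF) is an ordinary linear program and can be handed to any LP solver. The toy instance below (a ratio-balanced directed 3-cycle with costs, capacities, and imbalances invented for the example) uses `scipy.optimize.linprog` as an off-the-shelf solver:

```python
import numpy as np
from scipy.optimize import linprog

# A toy ratio-balanced instance: the directed 3-cycle 0 -> 1 -> 2 -> 0.
# All coefficients are +-1, so this reduces to an ordinary flow instance;
# the a_{ev} entries generalize the usual incidence-matrix signs.
edges = [(0, 1), (1, 2), (2, 0)]
a = {  # a[(edge, vertex)]: positive for out-arcs, negative for in-arcs
    (0, 0): 1.0, (0, 1): -1.0,
    (1, 1): 1.0, (1, 2): -1.0,
    (2, 2): 1.0, (2, 0): -1.0,
}
c = [2.0, 1.0, 3.0]   # edge costs c_e
u = [5.0, 5.0, 5.0]   # edge capacities u_e
f = [1.0, 0.0, -1.0]  # required imbalances f_v

# Assemble the balance constraints sum_{e in E_v} a_{ev} x_e = f_v.
A_eq = np.zeros((3, 3))
for (eid, v), coef in a.items():
    A_eq[v, eid] = coef

res = linprog(c, A_eq=A_eq, b_eq=f,
              bounds=[(0.0, ue) for ue in u], method="highs")
```

For this instance the unique optimum is $x = (1, 1, 0)$ with total cost $3$: the balance constraints force $x_1 = x_0$ and $x_2 = x_0 - 1$, so the cost $6x_0 - 3$ is minimized at $x_0 = 1$.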

### II-B Definitions and properties

For a given (GMNF) problem on the graph $G$ and a flow $x$ on this graph, the residual network $G_x$ is defined as follows: $G_x$ has the same vertex set as $G$, and for each edge $e = (u, v) \in E$, if $x_e < u_e$ then $e = (u, v)$ is an arc in $G_x$ with the cost $c_e$ and coefficients $a_{eu}, a_{ev}$. Finally, if $x_e > 0$ then there is an arc $\bar e = (v, u)$ in $G_x$ with the cost $-c_e$ and coefficients $-a_{eu}, -a_{ev}$. It is not hard to see that $G_x$ is ratio-balanced whenever $G$ is, since only the absolute values of the coefficients occur in the definition of this property.
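The residual construction can be sketched directly; the tuple layout below is our own choice, not fixed by the paper:

```python
def residual_network(edges, cost, a, cap, x, eps=1e-12):
    """Residual network G_x of a flow x.

    For each edge e = (u, v) of G: if x_e < u_e, G_x keeps a forward arc
    with the original cost and coefficients; if x_e > 0, G_x also gets a
    reverse arc with cost -c_e and negated coefficients.  Arcs are
    returned as (tail, head, cost, a_tail, a_head) tuples.
    """
    arcs = []
    for eid, (u, v) in enumerate(edges):
        au, av = a[(eid, u)], a[(eid, v)]
        if x[eid] < cap[eid] - eps:              # capacity not saturated
            arcs.append((u, v, cost[eid], au, av))
        if x[eid] > eps:                          # some flow can be undone
            arcs.append((v, u, -cost[eid], -av, -au))
    return arcs
```

A saturated edge contributes only its reverse arc, an empty edge only its forward arc, and any edge with intermediate flow contributes both.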

Then for each directed cycle $C = (v_1, e_1, v_2, \dots, v_k, e_k, v_1)$, we define the cost of this cycle as follows:

$$c(C) \triangleq c_1 + \delta(v_2,e_1,e_2)\Bigl(c_2 + \delta(v_3,e_2,e_3)\bigl(c_3 + \dots + \delta(v_k,e_{k-1},e_k)\,c_k\bigr)\cdots\Bigr) = c_1 + \sum_{i=2}^{k} c_i \prod_{j=2}^{i}\delta(v_j, e_{j-1}, e_j)$$

It is easy to see that $c(C)$ is properly defined whenever the graph is ratio-balanced.

Then we define $c_{\min}(x) \triangleq \min_C c(C)$, where the minimum is taken over all directed cycles $C$ in the residual network $G_x$.
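The nested form of $c(C)$ unrolls into the running-product formula, which can be evaluated in a single pass. A minimal sketch (storing the coefficients in a dict keyed by `(edge, vertex)` is our own convention):

```python
def delta(a, v, e1, e2):
    """delta(v, e1, e2) = |a_{e1 v} / a_{e2 v}| for edges e1, e2 incident to v."""
    return abs(a[(e1, v)] / a[(e2, v)])

def cycle_cost(cost, a, vertices, edge_ids):
    """Cost c(C) of a directed cycle C = (v_1, e_1, v_2, ..., v_k, e_k, v_1):

        c(C) = c_1 + sum_{i=2}^{k} c_i * prod_{j=2}^{i} delta(v_j, e_{j-1}, e_j).

    `vertices` = [v_1, ..., v_k], `edge_ids` = [e_1, ..., e_k], and `a` is
    a dict keyed by (edge, vertex) holding the coefficients a_{ev}.
    """
    total = cost[edge_ids[0]]
    prod = 1.0
    for i in range(1, len(edge_ids)):
        prod *= delta(a, vertices[i], edge_ids[i - 1], edge_ids[i])
        total += cost[edge_ids[i]] * prod
    return total
```

When all coefficients have absolute value one (ordinary network flow), every $\delta$ equals one and $c(C)$ reduces to the plain sum of edge costs along the cycle.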

###### Lemma II.1

If (GMNF) has a unique solution $x^*$, then $c(C) > 0$ for every directed cycle $C$ in $G_{x^*}$, i.e. $c_{\min}(x^*) > 0$.

{proof}

We will show that for every directed cycle $C$ in $G_{x^*}$ we can push additional flow through the edges of this cycle in such a way that the linear constraints of (GMNF) are still satisfied, while the total cost changes by $\varepsilon\, c^{x^*}(C)$ for some $\varepsilon > 0$.

Let $C = (v_1, e_1, v_2, e_2, \dots, v_k, e_k, v_1)$, where $e_i$ connects $v_i$ and $v_{i+1}$. From the definition of the residual network it follows that we can change the flow on every edge of the cycle by some positive quantity such that the capacity constraints are still satisfied. Let us push additional flow $\varepsilon_1 = \varepsilon > 0$ through $e_1$. In order to satisfy the balance constraint at $v_2$, we need to adjust the flow on $e_2$ by some $\varepsilon_2$. Each of $e_1, e_2$ either belongs to $E$ itself, or its opposite does. If $e_i \in E$, we say that it is a 'direct' arc; otherwise it is an 'opposite' arc. Then there are four cases:

1. $e_1, e_2$ are direct arcs. Then the new flow on edge $e_1$ is $x_{e_1} + \varepsilon$. In order to satisfy the balance constraint at the vertex $v_2$, it must hold that $a_{e_1 v_2}\varepsilon + a_{e_2 v_2}\varepsilon_2 = 0$. Since both $e_1$ and $e_2$ are direct, $e_1 = (v_1, v_2)$ and $e_2 = (v_2, v_3)$, and thus, by definition, $a_{e_1 v_2} < 0$ and $a_{e_2 v_2} > 0$. Therefore, we have $\varepsilon_2 = \bigl|a_{e_1 v_2}/a_{e_2 v_2}\bigr|\,\varepsilon = \delta(v_2, e_1, e_2)\,\varepsilon > 0$.

2. $e_1$ is direct, $e_2$ is opposite. Then again $\varepsilon_2 = \delta(v_2, e_1, e_2)\,\varepsilon$, but now 'pushing' the flow through $e_2$ (as an edge of the residual network) means decreasing the flow on its opposite $\bar e_2 \in E$. The same equalities hold, since the coefficients of $e_2$ are the negated coefficients of $\bar e_2$. As $e_2$ is an opposite arc, we should push the additional amount $-\varepsilon_2$ through $\bar e_2$.

3. $e_1$ is opposite, $e_2$ is direct -- similar to case 2.

4. $e_1, e_2$ are opposite arcs -- similar to case 1.

So, if we push $\varepsilon$ through $e_1$, we need to push $\varepsilon_2 = \delta(v_2, e_1, e_2)\,\varepsilon$ through $e_2$ to keep the balance at $v_2$. Then, analogously, to maintain the balance at $v_3$, we need to push an additional $\varepsilon_3 = \delta(v_3, e_2, e_3)\,\varepsilon_2$ through $e_3$. Consequently adjusting the balance at all the vertices of $C$, we find that to keep the balance at $v_k$, we need to push $\varepsilon_k = \varepsilon \prod_{i=2}^{k}\delta(v_i, e_{i-1}, e_i)$ through $e_k$. It now suffices to show that the balance at $v_1$ is also satisfied. Indeed, if we push $\varepsilon_k$ through $e_k$, then we need to push $\delta(v_1, e_k, e_1)\,\varepsilon_k = \varepsilon \prod_{i=1}^{k}\delta(v_i, e_{i-1}, e_i) = \varepsilon$ (since $G$ is ratio-balanced) through $e_1$, and that is exactly the amount we assumed to push at the beginning of this proof. So we indeed push a consistent flow through all the edges of $C$ in such a way that the balance constraints at all the vertices are satisfied. It only remains to note that $\varepsilon$ can be taken as small as needed to satisfy all the capacity constraints along the cycle as well. The resulting change of the total cost is

$$\sum_{i=1}^{k} c^x_i \varepsilon_i = c^x_1\varepsilon + c^x_2\,\varepsilon\,\delta(v_2,e_1,e_2) + c^x_3\,\varepsilon\,\delta(v_2,e_1,e_2)\,\delta(v_3,e_2,e_3) + \dots + c^x_k\,\varepsilon\prod_{i=2}^{k}\delta(v_i,e_{i-1},e_i) = \varepsilon\cdot c^x(C).$$

Now it is obvious that if $c^{x^*}(C) \le 0$ for some cycle $C$ in $G_{x^*}$, we can change the flow $x^*$ in this way so that the total cost does not increase. It means that either $x^*$ is not an optimal flow, or it is not the unique solution of (GMNF).

Next we define the cost of a directed path in $G$ or in a residual network $G_x$:

###### Definition II.2

Let $S = (v_1, e_1, v_2, e_2, \dots, v_{k-1}, e_{k-1}, v_k)$ be a directed path. Then the cost of this path is defined as

$$l(S) \triangleq c_1 + \delta(v_2,e_1,e_2)\Bigl(c_2 + \delta(v_3,e_2,e_3)\bigl(\cdots(c_{k-2} + \delta(v_{k-1},e_{k-2},e_{k-1})\,c_{k-1})\cdots\bigr)\Bigr) = c_1 + \sum_{i=2}^{k-1} c_i \prod_{j=2}^{i}\delta(v_j, e_{j-1}, e_j)$$

We also define the 'reducer' of the path $S$ as follows:

$$t(S) \triangleq \min_{j=2,\dots,k-1}\ \prod_{i=2}^{j}\delta(v_i, e_{i-1}, e_i)$$
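Both quantities can be computed in one pass over the path by maintaining the running product of $\delta$'s. A minimal sketch, reusing our convention of storing the coefficients $a_{ev}$ in a dict keyed by `(edge, vertex)`:

```python
def path_cost_and_reducer(cost, a, vertices, edge_ids):
    """Cost l(S) and reducer t(S) of a directed path
    S = (v_1, e_1, v_2, ..., e_{k-1}, v_k) with at least two edges:

        l(S) = c_1 + sum_{i=2}^{k-1} c_i * prod_{j=2}^{i} delta(v_j, e_{j-1}, e_j),
        t(S) = min over j of the same partial products,

    where delta(v, e1, e2) = |a_{e1 v} / a_{e2 v}|.
    """
    l = cost[edge_ids[0]]
    prod, t = 1.0, float("inf")
    for i in range(1, len(edge_ids)):
        v = vertices[i]
        prod *= abs(a[(edge_ids[i - 1], v)] / a[(edge_ids[i], v)])
        l += cost[edge_ids[i]] * prod
        t = min(t, prod)
    return l, t
```

For ordinary flow coefficients ($|a_{ev}| = 1$) every partial product equals one, so $l(S)$ is the plain sum of edge costs and $t(S) = 1$.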

To prove the main result of this paper we will use the following crucial lemma:

###### Lemma II.2

Let $G$ be any ratio-balanced graph, or the residual network of some ratio-balanced graph (as we already mentioned, the residual network is also ratio-balanced in this case). Let $S = (v_1, e_1, \dots, v_p, e_p, \dots, e_{k-1}, v_k)$ be a directed path in $G$, and let $C = (v'_1, e'_1, \dots, v'_m, e'_m, v'_1)$ be a directed cycle with $v'_1 = v_p$ and $c(C) \ge 0$. Let $R$ be the path $S$ with the cycle $C$ inserted at $v_p$, i.e. $R = (v_1, e_1, \dots, v_p, e'_1, v'_2, \dots, v'_m, e'_m, v_p, e_p, \dots, v_k)$. Then $l(R) \ge l(S) + T\, c(C)$, where $T$ is the minimum of the reducers among all directed paths in $G$.

{proof}
$$\begin{aligned}
l(R) ={}& c_1 + \delta(v_2,e_1,e_2)\bigl(c_2 + \delta(v_3,e_2,e_3)(\cdots)\bigr)\\
={}& \Bigl(c_1 + \sum_{i=2}^{p-1}\Bigl[c_i\prod_{j=2}^{i}\delta(v_j,e_{j-1},e_j)\Bigr]\Bigr)
+ \Bigl(\prod_{j=2}^{p-1}\delta(v_j,e_{j-1},e_j)\Bigr)\,\delta(v_p,e_{p-1},e'_1)\,
\underbrace{\Bigl(c'_1 + \sum_{i=2}^{m}\Bigl[c'_i\prod_{j=2}^{i}\delta(v'_j,e'_{j-1},e'_j)\Bigr]\Bigr)}_{c(C)}\\
&+ \Bigl(\prod_{j=2}^{p-1}\delta(v_j,e_{j-1},e_j)\Bigr)\,\delta(v_p,e_{p-1},e'_1)\,
\Bigl(\prod_{j=2}^{m}\delta(v'_j,e'_{j-1},e'_j)\Bigr)\,\delta(v_p,e'_m,e_p)\,
\Bigl(c_p + \sum_{i=p+1}^{k-1}\Bigl[c_i\prod_{j=p+1}^{i}\delta(v_j,e_{j-1},e_j)\Bigr]\Bigr)\\
\ge{}& \Bigl(c_1 + \sum_{i=2}^{p-1}\Bigl[c_i\prod_{j=2}^{i}\delta(v_j,e_{j-1},e_j)\Bigr]\Bigr) + T\,c(C)
+ \Bigl(\prod_{j=2}^{p-1}\delta(v_j,e_{j-1},e_j)\Bigr)\,
\biggl|\frac{a_{e_{p-1}v_p}}{a_{e'_1 v_p}}\biggr|\,
\biggl|\frac{a_{e'_1 v'_1}}{a_{e'_m v'_1}}\biggr|\,
\biggl|\frac{a_{e'_m v_p}}{a_{e_p v_p}}\biggr|\,
\Bigl(c_p + \sum_{i=p+1}^{k-1}\Bigl[c_i\prod_{j=p+1}^{i}\delta(v_j,e_{j-1},e_j)\Bigr]\Bigr)\\
={}& \Bigl(c_1 + \sum_{i=2}^{p-1}\Bigl[c_i\prod_{j=2}^{i}\delta(v_j,e_{j-1},e_j)\Bigr]\Bigr)
+ \prod_{j=2}^{p}\delta(v_j,e_{j-1},e_j)\,
\Bigl(c_p + \sum_{i=p+1}^{k-1}\Bigl[c_i\prod_{j=p+1}^{i}\delta(v_j,e_{j-1},e_j)\Bigr]\Bigr) + T\,c(C)\\
={}& c_1 + \sum_{i=2}^{p-1}\Bigl[c_i\prod_{j=2}^{i}\delta(v_j,e_{j-1},e_j)\Bigr]
+ \sum_{i=p}^{k-1}\Bigl[c_i\prod_{j=2}^{i}\delta(v_j,e_{j-1},e_j)\Bigr] + T\,c(C)\\
={}& l(S) + T\,c(C).
\end{aligned}$$

Here the inequality uses $c(C) \ge 0$ together with the fact that $\bigl(\prod_{j=2}^{p-1}\delta(v_j,e_{j-1},e_j)\bigr)\,\delta(v_p,e_{p-1},e'_1)$ is a partial product of $\delta$'s along a directed path and is therefore at least $T$; the subsequent simplification uses the ratio-balance identity $\prod_{j=2}^{m}\delta(v'_j,e'_{j-1},e'_j) = \bigl(\delta(v'_1,e'_m,e'_1)\bigr)^{-1}$ and $v'_1 = v_p$, under which the three ratios above multiply to $\delta(v_p,e_{p-1},e_p)$.

## III Belief Propagation algorithm for GMNF

### III-A Min-Sum algorithm

Algorithm 1 represents the Belief Propagation Min-Sum algorithm for (GMNF), following [11]. In the algorithm, the functions $\phi_e$ and $\psi_v$ are the variable and factor functions, respectively, defined for $e \in E$ and $v \in V$ as follows:

$$\phi_e(z) = \begin{cases} c_e z_e, & \text{if } 0 \le z_e \le u_e,\\ +\infty, & \text{otherwise;}\end{cases}\qquad
\psi_v(z) = \begin{cases} 0, & \text{if } \sum_{e\in E_v} a_{ev} z_e = f_v,\\ +\infty, & \text{otherwise.}\end{cases}$$
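Written out directly, these local functions are one-liners; a minimal sketch (the function names, dict-based argument layout, and numerical tolerance are our own choices):

```python
INF = float("inf")

def phi(z_e, c_e, u_e):
    """Variable function phi_e: linear edge cost inside the capacity
    interval [0, u_e], +infinity outside of it."""
    return c_e * z_e if 0 <= z_e <= u_e else INF

def psi(z, a_v, f_v, tol=1e-9):
    """Factor function psi_v: 0 when the generalized balance constraint
    sum_{e in E_v} a_{ev} z_e = f_v holds (up to `tol`), +infinity otherwise.
    `z` and `a_v` are dicts keyed by the edges incident to v."""
    return 0.0 if abs(sum(a_v[e] * z[e] for e in a_v) - f_v) <= tol else INF
```

The hard constraints enter the Min-Sum objective only through these $+\infty$ penalties, so any assignment with finite total value is automatically a feasible flow.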

We refer the reader to [11] for more details, intuition, and justification of the Belief Propagation algorithm for general optimization problems, linear programs, and Min-Cost Network Flow in particular.

### Iii-B Computation trees

One of the important notions used for proving correctness and/or convergence of BP algorithms is the computation tree ([11, 13, 14, 15]), called the unwrapped tree in some sources. The idea behind this construction is the following: for a fixed edge $e$ of the graph $G$, one builds a tree $T^N_e$ of depth $N$, such that performing $N$ iterations of BP on the graph $G$ gives the same estimate of the flow on $e$ as the optimal solution of an appropriately defined (GMNF) problem on the computation tree $T^N_e$.

Since the proof of our result is based on the computation-tree approach, in this subsection we describe the construction in detail. We use the same notation for the computation tree as in [11] (Section 5).

In this paper we consider computation trees corresponding to edges of $G$. We say that the duplicate $e'_0$ of $e$ is the "root" of the $N$-level computation tree $T^N_e$. Each vertex or edge of $T^N_e$ is a duplicate of some vertex or edge of $G$. Define the mapping $\Gamma$ such that if $v'$ is a duplicate of $v$, then $\Gamma(v') = v$. In other words, this function maps each duplicate in $T^N_e$ to its original in $G$.

The easiest way to describe the construction is inductive. Let $e = (u, v)$. The tree $T^1_e$ consists of two vertices $u'$, $v'$, such that $\Gamma(u') = u$, $\Gamma(v') = v$, and an edge $e'_0 = (u', v')$. We say that $u', v'$ belong to level $1$ of $T^1_e$. Note that for any two vertices of the tree connected by an edge, their originals in the initial graph are also connected by an edge. This property will hold for all trees $T^N_e$. Now assume that we have defined a tree $T^N_e$ with this property. Denote by $L(T^N_e)$ the set of leaves of $T^N_e$ (vertices which are connected by an edge with exactly one other vertex). For any leaf $w'$, denote by $p(w')$ the vertex with which $w'$ is connected by an edge (so either $(w', p(w'))$ or $(p(w'), w')$ is an edge of $T^N_e$). We now build $T^{N+1}_e$ by extending the tree as follows: for every $w' \in L(T^N_e)$, let $w = \Gamma(w')$, and consider the set $N(w) \setminus \{\Gamma(p(w'))\}$, where $N(w)$ is the set of neighbors of $w$ in $G$. Then for every vertex $z \in N(w) \setminus \{\Gamma(p(w'))\}$, add a new vertex $z'$ to $T^{N+1}_e$ together with an edge $(w', z')$ if $(w, z) \in E$, or an edge $(z', w')$ if $(z, w) \in E$, and set $\Gamma(z') = z$. Also set the level of $z'$ to be equal to $N + 1$.

So the tree $T^{N+1}_e$ contains $T^N_e$ as an induced subtree, and also contains vertices at level $N+1$, which are connected to leaves of $T^N_e$ (in fact, it is easy to see that the new vertices are connected only with leaves from level $N$). From the construction, one may see that for any edge $(u', v')$ of $T^{N+1}_e$ it holds that $(\Gamma(u'), \Gamma(v')) \in E$. In fact, any vertex of $T^{N+1}_e$ with level less than $N+1$ is a local copy of the corresponding vertex of $G$. More precisely: let $v' \in T^{N+1}_e$ have level at most $N$, and denote $v = \Gamma(v')$. Then for any vertex $w$ such that either $(v, w) \in E$ or $(w, v) \in E$, there exists exactly one vertex $w'$ adjacent to $v'$ such that $\Gamma(w') = w$, and $w'$ is connected with $v'$ in the same way (direction) as $w$ and $v$ are connected in $G$. It is then clear that we can extend the mapping $\Gamma$ to edges by setting $\Gamma((u', v')) = (\Gamma(u'), \Gamma(v'))$. Now for every vertex $v'$ of the tree and an incident edge $e'$, we can define the coefficient $a_{e'v'} \triangleq a_{\Gamma(e')\Gamma(v')}$. We also set the costs and capacities on the computation tree in correspondence with the initial graph, so $c_{e'} = c_{\Gamma(e')}$ and $u_{e'} = u_{\Gamma(e')}$.
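The inductive construction above can be sketched as an iterative unrolling. The sketch below is our own illustration (levels counted from 0 at the two root duplicates, edges stored as index pairs), not the paper's notation; it assumes no self-loops or parallel edges:

```python
def computation_tree(n, edges, root_eid, levels):
    """Unroll an n-vertex graph G around the duplicate of edge `root_eid`.

    `edges[i] = (u, v)` is a directed edge of G.  Returns `nodes`, a list
    of (gamma, level) pairs where gamma maps each duplicate back to its
    original vertex of G, and `tree_edges`, a list of (parent, child, eid)
    triples; the original direction of a duplicate is recovered via eid.
    """
    incident = [[] for _ in range(n)]
    for eid, (u, v) in enumerate(edges):
        incident[u].append(eid)
        incident[v].append(eid)

    u0, v0 = edges[root_eid]
    nodes = [(u0, 0), (v0, 0)]                 # the two root duplicates
    tree_edges = [(0, 1, root_eid)]            # duplicate of the root edge
    frontier = [(0, root_eid), (1, root_eid)]  # (tree node, edge it came from)

    for level in range(1, levels + 1):
        new_frontier = []
        for node, via_eid in frontier:
            g = nodes[node][0]                 # original vertex of this copy
            for eid in incident[g]:
                if eid == via_eid:
                    continue                   # never go back along the parent edge
                u, v = edges[eid]
                other = v if u == g else u     # the other endpoint in G
                nodes.append((other, level))
                child = len(nodes) - 1
                tree_edges.append((node, child, eid))
                new_frontier.append((child, eid))
        frontier = new_frontier
    return nodes, tree_edges
```

For the directed triangle, one level of unrolling around an edge yields four duplicates (the two root copies plus one new copy behind each of them), matching the hand construction.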

Now assume that a (GMNF) problem is stated for a graph $G$. We define the induced (GMNF$^N_e$) problem on the computation tree $T^N_e$ in the following way. Let $V^0(T^N_e)$ be the set of vertices of $T^N_e$ with levels less than $N$. Then consider the problem:

$$\begin{aligned}
\text{minimize}\quad & \sum_{\tilde e\in E(T^N_e)} c_{\tilde e}\, x_{\tilde e} & \text{(GMNF$^N_e$)}\\
\text{subject to}\quad & \sum_{\tilde e\in E_{v'}} a_{\tilde e v'}\, x_{\tilde e} = f_{v'}, \quad \forall v'\in V^0(T^N_e),\\
& 0 \le x_{\tilde e} \le u_{\tilde e}, \quad \forall \tilde e\in E(T^N_e).
\end{aligned}$$

Roughly speaking, (GMNF$^N_e$) is just a usual (GMNF) problem on the computation tree, except that there are no balance constraints for the vertices of the last level. Keeping in mind that the computation tree is locally equivalent to the initial graph, and that the Min-Sum algorithm belongs to the message-passing heuristics, which means that the algorithm works locally at each step, one can intuitively guess that BP for (GMNF$^N_e$) works quite similarly to BP for the initial (GMNF). This reasoning is formalized in the following lemma from [11].

###### Lemma III.1

Let $\hat{x}^N_e$ be the value produced by BP for (GMNF) at the end of iteration $N$ as the estimate of the flow on edge $e$. Then there exists an optimal solution $y^*$ of (GMNF$^N_e$) such that $\hat{x}^N_e = y^*_{e'_0}$, where $e'_0$ is the root edge of $T^N_e$.

Though this lemma was proven in [11] only for the ordinary Min-Cost Network Flow problem, where $a_{ev} \in \{-1, +1\}$ for all incident pairs, its proof does not rely on the values of these coefficients at any point, which allows us to extend it to arbitrary values of these coefficients.

### Iii-C Main results

We will now use Lemma III.1 to prove our main result on the correctness of BP Min-Sum for (GMNF). The following theorem is a generalization of Theorem 4.1 from [11], and our proof shares the ideas of the original proof.

Let $x^*$ be a solution of (GMNF), and denote by $\hat{x}^N$ the estimate of the flow after $N$ iterations of Algorithm 1.

###### Theorem III.2

Suppose (GMNF) has a unique solution $x^*$. Define $L$ to be the maximum absolute value of the cost of a simple directed path in $G_{x^*}$, and $T$ as the minimum of the reducers among all such paths. Then $\hat{x}^N = x^*$ for any $N \ge N_0$, where the threshold $N_0$ depends only on $L$, $T$, and $c_{\min}(x^*)$.

{proof}

Suppose, to the contrary, that there exist $N$ and $e \in E$ such that $\hat{x}^N_e \neq x^*_e$. By Lemma III.1, there exists an optimal solution $y^*$ of (GMNF$^N_e$) such that $\hat{x}^N_e = y^*_{e'_0}$, where $e'_0$ is the root edge of $T^N_e$, and thus $y^*_{e'_0} \neq x^*_e$. Then, without loss of generality, assume $y^*_{e'_0} > x^*_e$. We will show that it is possible to adjust $y^*$ in such a way that its total cost decreases, which contradicts the optimality of $y^*$.

Let $e'_0 = (v'_\alpha, v'_\beta)$ be the root edge of the computation tree $T^N_e$. Since $y^*$ is a feasible solution of (GMNF$^N_e$) and $x^*$ is a feasible solution of (GMNF):

$$f_{\Gamma(v'_\alpha)} = \sum_{\tilde e\in E_{v'_\alpha}} a_{\tilde e v'_\alpha}\, y^*_{\tilde e} = a_{e'_0 v'_\alpha}\, y^*_{e'_0} + \sum_{\tilde e\in E_{v'_\alpha}\setminus e'_0} a_{\tilde e v'_\alpha}\, y^*_{\tilde e},$$
$$f_{\Gamma(v'_\alpha)} = \sum_{\tilde e\in E_{v'_\alpha}} a_{\tilde e v'_\alpha}\, x^*_{\Gamma(\tilde e)} = a_{e'_0 v'_\alpha}\, x^*_{e} + \sum_{\tilde e\in E_{v'_\alpha}\setminus e'_0} a_{\tilde e v'_\alpha}\, x^*_{\Gamma(\tilde e)}.$$

Since the nodes and the edges of the computation tree are copies of nodes and edges of $G$, $a_{\tilde e v'_\alpha} = a_{\Gamma(\tilde e)\Gamma(v'_\alpha)}$. Subtracting the above equalities and using $y^*_{e'_0} \neq x^*_e$, it follows that there exists an edge $\tilde e_1$ incident to $v'_\alpha$ in $T^N_e$ such that $a_{\tilde e_1 v'_\alpha}\bigl(y^*_{\tilde e_1} - x^*_{\Gamma(\tilde e_1)}\bigr)$ is nonzero and of the sign opposite to $a_{e'_0 v'_\alpha}\bigl(y^*_{e'_0} - x^*_e\bigr)$. If $y^*_{\tilde e_1} > x^*_{\Gamma(\tilde e_1)}$, we say that $\tilde e_1$ has the same orientation as $e'_0$; otherwise, we say that $\tilde e_1$ has the opposite orientation, and $y^*_{\tilde e_1} < x^*_{\Gamma(\tilde e_1)}$. Using similar arguments, we find an edge incident to $v'_\beta$ satisfying a similar condition. Then we can apply the same reasoning to the other endpoints of these edges, using the balance constraints and the inequalities between the components of $y^*$ and $x^*$ at the corresponding vertices. In the end, we obtain a non-directed path $P$ starting and ending at leaves of $T^N_e$ such that for every edge $\tilde e \in P$ one of two cases holds:

• $y^*_{\tilde e} > x^*_{\Gamma(\tilde e)}$. Then $\tilde e$ has the same orientation as $e'_0$.

• $y^*_{\tilde e} < x^*_{\Gamma(\tilde e)}$. Then $\tilde e$ and $e'_0$ have opposite orientations.

Note that the capacity constraints coincide for corresponding edges of (GMNF$^N_e$) and (GMNF), and since $y^*$ is feasible for (GMNF$^N_e$), for every $\tilde e$ it holds that $0 \le y^*_{\tilde e} \le u_{\tilde e}$. Then, for any $\tilde e \in P$ with $y^*_{\tilde e} > x^*_{\Gamma(\tilde e)}$, we have $x^*_{\Gamma(\tilde e)} < u_{\Gamma(\tilde e)}$, which means that $\Gamma(\tilde e)$ is an arc of the residual network $G_{x^*}$ (by the definition of the residual network). Next, let $y^*_{\tilde e} < x^*_{\Gamma(\tilde e)}$, and thus $x^*_{\Gamma(\tilde e)} > 0$. Again, by the definition of the residual network,