
# A Weighted Linear Matroid Parity Algorithm

*A preliminary version of this paper has appeared in Proceedings of the 49th Annual ACM Symposium on Theory of Computing (STOC 2017), pp. 264–276.*

Satoru Iwata (Department of Mathematical Informatics, University of Tokyo, Tokyo 113-8656, Japan. E-mail: iwata@mist.i.u-tokyo.ac.jp) and Yusuke Kobayashi (Research Institute for Mathematical Sciences, Kyoto University, Kyoto 606-8502, Japan. E-mail: yusuke@kurims.kyoto-u.ac.jp)
###### Abstract

The matroid parity (or matroid matching) problem, introduced as a common generalization of matching and matroid intersection problems, is so general that it requires an exponential number of oracle calls. Nevertheless, Lovász (1980) showed that this problem admits a min-max formula and a polynomial algorithm for linearly represented matroids. Since then, efficient algorithms have been developed for the linear matroid parity problem.

In this paper, we present a combinatorial, deterministic, polynomial-time algorithm for the weighted linear matroid parity problem. The algorithm builds on a polynomial matrix formulation using the Pfaffian and adopts a primal-dual approach based on the augmenting path algorithm of Gabow and Stallmann (1986) for the unweighted problem.

## 1 Introduction

The matroid parity problem [22] (also known as the matchoid problem [20] or the matroid matching problem [24]) was introduced as a common generalization of matching and matroid intersection problems. In the general case, it requires an exponential number of independence oracle calls [19, 26], and a PTAS has been developed only recently [23]. Nevertheless, Lovász [24, 26, 27] showed that the problem admits a min-max theorem for linear matroids and presented a polynomial algorithm that is applicable if the matroid in question is represented by a matrix.

Since then, efficient combinatorial algorithms have been developed for this linear matroid parity problem [12, 33, 34]. Gabow and Stallmann [12] developed an augmenting path algorithm with the aid of a linear algebraic trick, which was later extended to the linear delta-matroid parity problem [14]. Orlin and Vande Vate [34] provided an algorithm that solves this problem by repeatedly solving matroid intersection problems coming from the min-max theorem. Later, Orlin [33] improved the running time bound of this algorithm. The current best deterministic running time bound due to [12, 33] is $O(nr^{\omega})$, where $n$ is the cardinality of the ground set, $r$ is the rank of the linear matroid, and $\omega$ is the matrix multiplication exponent, which is at most $2.38$. These combinatorial algorithms, however, tend to be complicated.

An alternative approach that leads to simpler randomized algorithms is based on an algebraic method. This approach originated with Lovász [25], who formulated the linear matroid parity problem as rank computation of a skew-symmetric matrix that contains independent parameters. Substituting randomly generated numbers for these parameters enables us to compute the optimal value with high probability. A straightforward adaptation of this approach requires $O(n)$ iterations to find an optimal solution. Cheung, Lau, and Leung [3] have improved this algorithm to run in $O(nr^{\omega-1})$ time, extending the techniques of Harvey [16] developed for matching and matroid intersection.

While matching and matroid intersection algorithms [7, 9] have been successfully extended to their weighted versions [8, 10, 18, 21], no polynomial algorithm had been known for the weighted linear matroid parity problem for more than three decades. Camerini, Galbiati, and Maffioli [2] developed a randomized pseudopolynomial algorithm for the weighted linear matroid parity problem by introducing a polynomial matrix formulation that extends the matrix formulation of Lovász [25]. This algorithm was later improved by Cheung, Lau, and Leung [3]; the resulting complexity, however, remained pseudopolynomial. Tong, Lawler, and Vazirani [39] observed that the weighted matroid parity problem on gammoids can be solved in polynomial time by reduction to the weighted matching problem. As a relaxation of the matroid matching polytope, Vande Vate [41] introduced the fractional matroid matching polytope. Gijswijt and Pap [15] devised a polynomial algorithm for optimizing linear functions over this polytope. The polytope was shown to be half-integral, however, so the algorithm does not necessarily yield an integral solution.

This paper presents a combinatorial, deterministic, polynomial-time algorithm for the weighted linear matroid parity problem. To do so, we combine the algebraic approach with the augmenting path technique, together with the use of node potentials. The algorithm builds on a polynomial matrix formulation, which naturally extends the one discussed in [13] for the unweighted problem. The algorithm employs a modification of the augmenting path search procedure for the unweighted problem by Gabow and Stallmann [12]. It adopts a primal-dual approach without writing an explicit LP description. The correctness proof for the optimality is based on the idea of combinatorial relaxation for polynomial matrices due to Murota [31]. The algorithm is shown to require a polynomial number of arithmetic operations. This leads to a strongly polynomial algorithm for linear matroids represented over a finite field. For linear matroids represented over the rational field, one can exploit our algorithm to solve the problem in polynomial time.

Independently of the present work, Gyula Pap has obtained another combinatorial, deterministic, polynomial-time algorithm for the weighted linear matroid parity problem based on a different approach.

The matroid matching theory of Lovász [27] in fact deals with a more general class of matroids that enjoy the double circuit property. Dress and Lovász [6] showed that algebraic matroids satisfy this property. Subsequently, Hochstättler and Kern [17] showed the same phenomenon for pseudomodular matroids. The min-max theorem follows for this class of matroids. To design a polynomial algorithm, however, one has to establish how to represent those matroids in a compact manner. Extending this approach to the weighted problem is left for possible future investigation.

The linear matroid parity problem finds various applications: structural solvability analysis of passive electric networks [30], pinning down planar skeleton structures [28], and maximum genus cellular embedding of graphs [11]. We describe below two interesting applications of the weighted matroid parity problem in combinatorial optimization.

A $T$-path in a graph is a path between two distinct vertices in the terminal set $T$. Mader [29] showed a min-max characterization of the maximum number of openly disjoint $T$-paths. The problem can be equivalently formulated in terms of $\mathcal{S}$-paths, where $\mathcal{S}$ is a partition of $T$ and an $\mathcal{S}$-path is a $T$-path between two different components of $\mathcal{S}$. Lovász [27] formulated the problem as a matroid matching problem and showed that one can find a maximum number of disjoint $\mathcal{S}$-paths in polynomial time. Schrijver [37] has described a more direct reduction to the linear matroid parity problem.

The disjoint $\mathcal{S}$-paths problem has been extended to path packing problems in group-labeled graphs [4, 5, 35]. Tanigawa and Yamaguchi [38] have shown that these problems also reduce to the matroid matching problem with the double circuit property. Yamaguchi [42] gives a characterization of the groups for which those problems reduce to the linear matroid parity problem.

As a weighted version of the disjoint $\mathcal{S}$-paths problem, it is quite natural to think of finding disjoint $\mathcal{S}$-paths of minimum total length. It is not immediately clear that this problem reduces to the weighted linear matroid parity problem. A recent paper of Yamaguchi [43] clarifies that this is indeed the case. He also shows that the reduction results on the path packing problems on group-labeled graphs extend to the weighted version as well.

The weighted linear matroid parity is also useful in the design of approximation algorithms. Prömel and Steger [36] provided an approximation algorithm for the Steiner tree problem. Given an instance of the Steiner tree problem, construct a hypergraph on the terminal set such that each hyperedge corresponds to a terminal subset of cardinality at most three and regard the shortest length of a Steiner tree for the terminal subset as the cost of the hyperedge. The problem of finding a minimum cost spanning hypertree in the resulting hypergraph can be converted to the problem of finding a minimum spanning tree in a 3-uniform hypergraph, which is a special case of the weighted parity problem for graphic matroids. The minimum spanning hypertree thus obtained costs at most 5/3 of the optimal value of the original Steiner tree problem, and one can construct a Steiner tree from the spanning hypertree without increasing the cost. Thus they gave a 5/3-approximation algorithm for the Steiner tree problem via weighted linear matroid parity. This is a very interesting approach that suggests further use of weighted linear matroid parity in the design of approximation algorithms, even though the performance ratio is larger than the current best one for the Steiner tree problem [1].

## 2 The Minimum-Weight Parity Base Problem

Let $A$ be a matrix of row-full rank over an arbitrary field $\mathbf{K}$, with row set $U$ and column set $V$. Assume that both $|U|$ and $|V|$ are even. The column set $V$ is partitioned into pairs, called lines. Each $v \in V$ has its mate $\bar{v}$ such that $\{v, \bar{v}\}$ is a line. We denote by $L$ the set of lines, and suppose that each line $\ell \in L$ has a weight $w_\ell$.

The linear dependence of the column vectors naturally defines a matroid $\mathbf{M}(A)$ on $V$. Let $\mathcal{B}$ denote its base family. A base $B \in \mathcal{B}$ is called a parity base if it consists of lines. As a weighted version of the linear matroid parity problem, we will consider the problem of finding a parity base of minimum weight, where the weight of a parity base is the sum of the weights of the lines in it. We denote the optimal value by $\zeta(A, L, w)$. This problem generalizes finding a minimum-weight perfect matching in graphs and a minimum-weight common base of a pair of linear matroids on the same ground set.
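As a concrete illustration (not the paper's algorithm), the minimum-weight parity base problem can be solved by brute force on a tiny instance over GF(2): enumerate unions of lines of the right cardinality and keep the cheapest one whose columns form a base. All instance data below are hypothetical.

```python
from itertools import combinations

def rank_gf2(vectors):
    """Rank of GF(2) vectors given as int bitmasks (Gaussian elimination)."""
    basis = {}  # leading-bit position -> reduced vector
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in basis:
                v ^= basis[lead]
            else:
                basis[lead] = v
                break
    return len(basis)

def min_weight_parity_base(columns, lines, weights):
    """Try every union of lines; keep the cheapest whose columns form a base."""
    r = rank_gf2(list(columns.values()))  # rank of the whole matroid
    best = None
    for k in range(len(lines) + 1):
        for subset in combinations(range(len(lines)), k):
            verts = [v for i in subset for v in lines[i]]
            if len(verts) != r:
                continue
            if rank_gf2([columns[v] for v in verts]) == r:
                cost = sum(weights[i] for i in subset)
                if best is None or cost < best[0]:
                    best = (cost, subset)
    return best  # (weight, indices of chosen lines) or None

# Hypothetical instance: 2x6 matrix over GF(2), columns as 2-bit masks.
columns = {0: 0b01, 1: 0b10, 2: 0b01, 3: 0b01, 4: 0b01, 5: 0b11}
lines = [(0, 1), (2, 3), (4, 5)]
weights = {0: 5, 1: 1, 2: 3}
```

Here line 1 is dependent, so the cheapest parity base is line 2 with weight 3; the exponential enumeration is exactly what the algorithm of this paper avoids.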

As another weighted version of the matroid parity problem, one can think of finding a matching (independent parity set) of maximum weight. This problem can be easily reduced to the minimum-weight parity base problem.

Associated with the minimum-weight parity base problem, we consider a skew-symmetric polynomial matrix in a variable $\theta$ defined by

$$\Phi_A(\theta)=\begin{pmatrix}O&A\\-A^\top&D(\theta)\end{pmatrix},$$

where $D(\theta)$ is a block-diagonal matrix in which each block is a $2\times 2$ skew-symmetric polynomial matrix $\begin{pmatrix}0&\tau_\ell\theta^{w_\ell}\\-\tau_\ell\theta^{w_\ell}&0\end{pmatrix}$ corresponding to a line $\ell\in L$. Assume that the coefficients $\tau_\ell$ are independent parameters (or indeterminates).

For a skew-symmetric matrix $\Phi$ whose rows and columns are indexed by $W$, the support graph of $\Phi$ is the graph with vertex set $W$ and edge set $\{(u,v)\mid\Phi_{uv}\neq 0\}$. We denote by $\mathrm{Pf}\,\Phi$ the Pfaffian of $\Phi$, which is defined as follows:

$$\mathrm{Pf}\,\Phi=\sum_{M}\sigma_M\prod_{(u,v)\in M}\Phi_{uv},$$

where the sum is taken over all perfect matchings $M$ in the support graph and $\sigma_M$ takes $+1$ or $-1$ in a suitable manner; see [28]. It is well known that $\det\Phi=(\mathrm{Pf}\,\Phi)^2$ and $\mathrm{Pf}(S\Phi S^\top)=\det S\cdot\mathrm{Pf}\,\Phi$ for any square matrix $S$.
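The matching expansion and the identity $\det\Phi=(\mathrm{Pf}\,\Phi)^2$ can be checked directly on a small numeric example; the following sketch implements the standard recursive expansion of the Pfaffian along the first remaining row (the example matrix is arbitrary).

```python
def pfaffian(M, idx=None):
    """Pfaffian of a skew-symmetric matrix via expansion along the first remaining index."""
    if idx is None:
        idx = list(range(len(M)))
    if not idx:
        return 1
    i0, rest = idx[0], idx[1:]
    total, sign = 0, 1
    for pos, j in enumerate(rest):
        total += sign * M[i0][j] * pfaffian(M, rest[:pos] + rest[pos + 1:])
        sign = -sign
    return total

def det(M):
    """Determinant by Laplace expansion along the first row (fine for tiny matrices)."""
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

Phi = [[ 0,  1,  2, 3],
       [-1,  0,  4, 5],
       [-2, -4,  0, 6],
       [-3, -5, -6, 0]]
# Pf = a01*a23 - a02*a13 + a03*a12 = 6 - 10 + 12 = 8, and det = Pf^2 = 64.
```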

We have the following lemma that associates the optimal value of the minimum-weight parity base problem with $\deg_\theta\mathrm{Pf}\,\Phi_A(\theta)$.

###### Lemma 2.1.

The optimal value of the minimum-weight parity base problem is given by

$$\zeta(A,L,w)=\sum_{\ell\in L}w_\ell-\deg_\theta\mathrm{Pf}\,\Phi_A(\theta).$$

In particular, if $\mathrm{Pf}\,\Phi_A(\theta)=0$ (i.e., $\deg_\theta\mathrm{Pf}\,\Phi_A(\theta)=-\infty$), then there is no parity base.

###### Proof.

We split $\Phi_A(\theta)$ into $\Psi_A$ and $\Delta(\theta)$ such that

$$\Phi_A(\theta)=\Psi_A+\Delta(\theta),\qquad \Psi_A=\begin{pmatrix}O&A\\-A^\top&O\end{pmatrix},\qquad \Delta(\theta)=\begin{pmatrix}O&O\\O&D(\theta)\end{pmatrix}.$$

The row and column sets of these skew-symmetric matrices are indexed by $W=U\cup V$. By [32, Lemma 7.3.20], we have

$$\mathrm{Pf}\,\Phi_A(\theta)=\sum_{X\subseteq W}\pm\,\mathrm{Pf}\,\Psi_A[W\setminus X]\cdot\mathrm{Pf}\,\Delta(\theta)[X],$$

where each sign is determined by the choice of $X$, $\Psi_A[W\setminus X]$ is the principal submatrix of $\Psi_A$ whose rows and columns are both indexed by $W\setminus X$, and $\Delta(\theta)[X]$ is defined in a similar way. One can see that $\mathrm{Pf}\,\Delta(\theta)[X]\neq 0$ if and only if $X\subseteq V$ and $X$ (or, equivalently, $V\setminus X$) is a union of lines. One can also see for $X\subseteq V$ that $\mathrm{Pf}\,\Psi_A[W\setminus X]\neq 0$ if and only if $A[U,V\setminus X]$ is nonsingular, which means that $B=V\setminus X$ is a base of $\mathbf{M}(A)$. Thus, we have

$$\mathrm{Pf}\,\Phi_A(\theta)=\sum_{B}\pm\,\mathrm{Pf}\,\Psi_A[U\cup B]\cdot\mathrm{Pf}\,\Delta(\theta)[V\setminus B],$$

where the sum is taken over all parity bases $B$. Note that no term is canceled out in the summation, because each term contains a distinct set of independent parameters. For a parity base $B$, we have

$$\deg_\theta\bigl(\mathrm{Pf}\,\Psi_A[U\cup B]\cdot\mathrm{Pf}\,\Delta(\theta)[V\setminus B]\bigr)=\sum_{\ell\subseteq V\setminus B}w_\ell=\sum_{\ell\in L}w_\ell-\sum_{\ell\subseteq B}w_\ell,$$

which implies that the minimum weight of a parity base is $\sum_{\ell\in L}w_\ell-\deg_\theta\mathrm{Pf}\,\Phi_A(\theta)$. ∎

Note that Lemma 2.1 does not immediately lead to a (randomized) polynomial-time algorithm for the minimum-weight parity base problem, because computing the degree of the Pfaffian of a skew-symmetric polynomial matrix is not easy. Indeed, the algorithms in [2, 3] for the weighted linear matroid parity problem compute the degree of the Pfaffian of another skew-symmetric polynomial matrix, which results in pseudopolynomial complexity.
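Lemma 2.1 can be verified numerically on a toy instance. The sketch below represents the entries of $\Phi_A(\theta)$ as coefficient lists, computes the Pfaffian by matching expansion, and evaluates $\sum_\ell w_\ell-\deg_\theta\mathrm{Pf}\,\Phi_A(\theta)$. The parameters $\tau_\ell$ are replaced by fixed generic constants, which is safe for this particular hand-picked instance (the two surviving terms have distinct degrees, so no cancellation occurs).

```python
def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def pmul(p, q):
    if not p or not q:
        return []
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def pdeg(p):
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d  # -1 plays the role of deg 0 = -infinity

def pf(M, idx=None):
    """Pfaffian by matching expansion; entries are coefficient lists in theta."""
    if idx is None:
        idx = list(range(len(M)))
    if not idx:
        return [1]
    i0, rest = idx[0], idx[1:]
    total, sign = [], 1
    for pos, j in enumerate(rest):
        term = pmul(M[i0][j], pf(M, rest[:pos] + rest[pos + 1:]))
        total = padd(total, [sign * c for c in term])
        sign = -sign
    return total

# Tiny instance: A is 2x4, lines are columns {0,1} and {2,3} with weights 4 and 1.
A = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
w = [4, 1]
tau = [2, 3]   # the "independent parameters", replaced by generic constants
N = 6          # |U| + |V| = 2 + 4
Phi = [[[] for _ in range(N)] for _ in range(N)]
for u in range(2):
    for v in range(4):
        Phi[u][2 + v] = [A[u][v]]
        Phi[2 + v][u] = [-A[u][v]]
for l in range(2):
    a, b = 2 + 2 * l, 3 + 2 * l
    mono = [0] * w[l] + [tau[l]]   # tau_l * theta^{w_l}
    Phi[a][b] = mono
    Phi[b][a] = [-c for c in mono]

zeta = sum(w) - pdeg(pf(Phi))
# Both lines are bases, so the minimum-weight parity base has weight 1 = zeta.
```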

## 3 Algorithm Outline

In this section, we describe the outline of our algorithm for solving the minimum-weight parity base problem.

We regard the column set $V$ as a vertex set. The algorithm works on a vertex set $V^*$ that includes $V$ together with some new vertices generated during the execution. The algorithm keeps a nested (laminar) collection $\Lambda=\{H_1,\dots,H_\lambda\}$ of vertex subsets of $V^*$ such that $H_i\cap V$ is a union of lines for each $H_i\in\Lambda$. The indices satisfy that, for any two members $H_i$ and $H_j$ with $i<j$, either $H_i\subsetneq H_j$ or $H_i\cap H_j=\emptyset$ holds. Each member of $\Lambda$ is called a blossom. The algorithm maintains a potential $p:V^*\to\mathbb{R}$ and a nonnegative variable $q:\Lambda\to\mathbb{R}_{\geq 0}$, which are collectively called dual variables.

We note that although $p$ and $q$ are called dual variables, they do not correspond to dual variables of an LP relaxation of the minimum-weight parity base problem. Indeed, this paper explicitly presents neither an LP formulation nor a min-max formula for the minimum-weight parity base problem. We will show instead that one can obtain a parity base $B$ that admits feasible dual variables $p$ and $q$, which provide a certificate for the optimality of $B$.

The algorithm starts with splitting the weight $w_\ell$ into $p(u)$ and $p(v)$ for each line $\ell=\{u,v\}$, i.e., $w_\ell=p(u)+p(v)$. Then it executes the greedy algorithm for finding a base $B$ with minimum value of $p(B)=\sum_{v\in B}p(v)$. If $B$ is a parity base, then $B$ is obviously a minimum-weight parity base. Otherwise, there exists a line exactly one of whose two vertices belongs to $B$. Such a line is called a source line and each vertex in a source line is called a source vertex. A line that is not a source line is called a normal line.
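The greedy initialization can be sketched as follows for a linear matroid over GF(2); the instance, the vertex names, and the weight split are hypothetical.

```python
def greedy_min_base(columns, weight):
    """Scan vertices in increasing split weight; keep a vertex iff it enlarges the span."""
    basis = {}  # leading-bit position -> reduced GF(2) vector (int bitmask)
    B = []
    for v in sorted(columns, key=lambda x: weight[x]):
        vec = columns[v]
        while vec:
            lead = vec.bit_length() - 1
            if lead not in basis:
                basis[lead] = vec
                B.append(v)
                break
            vec ^= basis[lead]  # reduce against the current basis
    return B

# Lines {a, a'} and {b, b'}; columns are GF(2) bitmask vectors.
columns = {"a": 0b01, "a'": 0b10, "b": 0b01, "b'": 0b11}
weight = {"a": 1, "a'": 3, "b": 2, "b'": 2}
B = greedy_min_base(columns, weight)
# Here B contains exactly one vertex of each line, so both lines are source lines
# and the main primal-dual loop of the algorithm would have to run.
```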

The algorithm initializes $\Lambda:=\emptyset$ and proceeds by iterations of primal and dual updates, keeping dual feasibility. In each iteration, the algorithm applies breadth-first search to find an augmenting path. In the meantime, the algorithm sometimes detects a new blossom and adds it to $\Lambda$. If an augmenting path $P$ is found, the algorithm updates $B$ along $P$. This will reduce the number of source lines by two. If the search procedure terminates without finding an augmenting path, the algorithm updates the dual variables to create new tight edges. The algorithm repeats this process until $B$ becomes a parity base. Then $B$ is a minimum-weight parity base. See Fig. 1 for a flowchart of our algorithm.

The rest of this paper is organized as follows.

In Section 4, we introduce new vertices and operations attached to blossoms. We describe some properties of blossoms kept in the algorithm, which we denote (BT1) and (BT2).

The feasibility of the dual variables is defined in Section 5. The dual feasibility is denoted by (DF1)–(DF3). We also describe several properties of feasible dual variables that are used in other sections.

In Section 6, we show that a parity base that admits feasible dual variables attains the minimum weight. The proof is based on the polynomial matrix formulation of the minimum-weight parity base problem given in Section 2. Combining this with some properties of the dual variables and the duality of the maximum-weight matching problem, we show the optimality of such a parity base.

In Section 7, we describe a search procedure for an augmenting path. We first define an augmenting path, and then we describe our search procedure. Roughly, our procedure finds a part of the augmenting path outside the blossoms. The routing in each blossom is determined by a prescribed vertex set that satisfies some conditions, which we denote (BR1)–(BR5). Note that the search procedure may create new blossoms.

The validity of the procedure is shown in Section 8. We show that the output of the procedure is an augmenting path by using the properties (BR1)–(BR5) of the routing in each blossom. We also show that creating a new blossom does not violate the conditions (BT1), (BT2), (DF1)–(DF3), and (BR1)–(BR5).

In Section 9, we describe how to update the dual variables when the search procedure terminates without finding an augmenting path. We obtain new tight edges by updating the dual variables, and repeat the search procedure. We also show that if we cannot obtain new tight edges, then the instance has no feasible solution, i.e., there is no parity base.

If the search procedure succeeds in finding an augmenting path $P$, the algorithm updates the base $B$ along $P$. The details of this process are presented in Section 10. Basically, we replace the base $B$ with the symmetric difference of $B$ and $P$. In addition, since there exist new vertices corresponding to the blossoms, we update them carefully to keep the conditions (BT1), (BT2), and (DF1)–(DF3). In order to define a new routing in each blossom, we apply the search procedure in each blossom, which enables us to keep the conditions (BR1)–(BR5).

Finally, in Section 11, we describe the entire algorithm and analyze its running time. We show that our algorithm solves the minimum-weight parity base problem in polynomial time when $\mathbf{K}$ is a finite field of fixed order. When $\mathbf{K}=\mathbb{Q}$, it is not obvious that a direct application of our algorithm runs in polynomial time. However, we show that the minimum-weight parity base problem over $\mathbb{Q}$ can be solved in polynomial time by applying our algorithm over a sequence of finite fields.

## 4 Blossoms

In this section, we introduce buds and tips attached to blossoms and construct auxiliary matrices that will be used in the definition of dual feasibility.

Each blossom contains at most one source line. A blossom that contains a source line is called a source blossom. A blossom with no source line is called a normal blossom. Let $\Lambda_{\rm S}$ and $\Lambda_{\rm N}$ denote the sets of source blossoms and normal blossoms, respectively. Then, $\Lambda=\Lambda_{\rm S}\cup\Lambda_{\rm N}$. Let $\lambda$ denote the number of blossoms in $\Lambda$.

Each normal blossom $H_i\in\Lambda_{\rm N}$ has a pair of associated vertices $b_i$ and $t_i$ outside $V$, which are called the bud and the tip of $H_i$, respectively. The pair $\{b_i,t_i\}$ is called a dummy line. We denote by $T$ the set of all buds and tips, and the vertex set on which the algorithm works is $V^*=V\cup T$. The tip $t_i$ is contained in $H_i$, whereas the bud $b_i$ is outside $H_i$. For every $H_j\in\Lambda$ with $H_i\subsetneq H_j$, we have $b_i\in H_j$ if and only if $t_i\in H_j$. Similarly, for every $H_j\in\Lambda$ with $H_j\subsetneq H_i$ or $H_j\cap H_i=\emptyset$, we have $b_i\notin H_j$ and $t_i\notin H_j$. Thus, each normal blossom is of odd cardinality. The algorithm keeps a subset $B^*\subseteq V^*$ such that $B^*\cap V=B$ and $|B^*\cap\{b_i,t_i\}|=1$ for each $H_i\in\Lambda_{\rm N}$. This implies that $|B^*|=|B|+|\Lambda_{\rm N}|$.

Recall that $U$ is the row set of $A$. The fundamental cocircuit matrix with respect to a base $B$ is a matrix $C$ with row set $B$ and column set $V\setminus B$ obtained by $C=A[U,B]^{-1}A[U,V\setminus B]$. In other words, $(I\ \ C)$ is obtained from $A$ by identifying $U$ with $B$, applying row transformations, and changing the ordering of columns. For a subset $Z\subseteq V$, we have $B\triangle Z\in\mathcal{B}$ if and only if $C[Z\cap B,Z\setminus B]$ is nonsingular. Here, $\triangle$ denotes the symmetric difference. Then the following lemma characterizes the fundamental cocircuit matrix with respect to a base obtained by such an exchange.
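The construction of $C$ and the exchange characterization can be checked on a small rational example; the following sketch (instance data hypothetical) computes the fundamental cocircuit matrix by row reduction and compares the two nonsingularity conditions.

```python
from fractions import Fraction

def det(M):
    """Determinant over the rationals by Gaussian elimination."""
    n = len(M)
    M = [[Fraction(x) for x in row] for row in M]
    d = Fraction(1)
    for k in range(n):
        piv = next((i for i in range(k, n) if M[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            d = -d
        d *= M[k][k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    return d

def cocircuit_matrix(A, B, rest):
    """Row-reduce [A_B | A_rest] so the B-part becomes I; what remains is C."""
    r = len(A)
    M = [[Fraction(A[i][j]) for j in B + rest] for i in range(r)]
    for k in range(r):
        piv = next(i for i in range(k, r) if M[i][k] != 0)
        M[k], M[piv] = M[piv], M[k]
        M[k] = [x / M[k][k] for x in M[k]]
        for i in range(r):
            if i != k and M[i][k] != 0:
                f = M[i][k]
                M[i] = [x - f * y for x, y in zip(M[i], M[k])]
    return [row[r:] for row in M]

A = [[1, 0, 1, 1],
     [0, 1, 1, 0]]
B, rest = [0, 1], [2, 3]
C = cocircuit_matrix(A, B, rest)

def exchange_is_base(X, Y):
    """Is B triangle (X ∪ Y) a base?  (X ⊆ B, Y ⊆ V∖B)"""
    cols = [c for c in B if c not in X] + Y
    return det([[A[i][j] for j in cols] for i in range(len(A))]) != 0

def cosubmatrix_nonsingular(X, Y):
    return det([[C[B.index(x)][rest.index(y)] for y in Y] for x in X]) != 0
```

For every pair $(X, Y)$ the two predicates agree, which is exactly the stated characterization.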

###### Lemma 4.1.

Suppose that $C$ is in the form of $C=\begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix}$, where the rows and columns of $\alpha$ are indexed by $X\subseteq B$ and $Y\subseteq V\setminus B$, respectively, and $\alpha$ is nonsingular. Then

$$C':=\begin{pmatrix}\alpha^{-1}&\alpha^{-1}\beta\\-\gamma\alpha^{-1}&\delta-\gamma\alpha^{-1}\beta\end{pmatrix}$$

is the fundamental cocircuit matrix with respect to $B\triangle(X\cup Y)$.

###### Proof.

In order to obtain the fundamental cocircuit matrix with respect to $B\triangle(X\cup Y)$, we apply elementary row transformations to $(I\ \ C)$ so that the columns corresponding to $B\triangle(X\cup Y)$ form the identity matrix. Hence, the obtained matrix is

$$\begin{pmatrix}\alpha^{-1}&0\\-\gamma\alpha^{-1}&I\end{pmatrix}\begin{pmatrix}I&0&\alpha&\beta\\0&I&\gamma&\delta\end{pmatrix}=\begin{pmatrix}\alpha^{-1}&0&I&\alpha^{-1}\beta\\-\gamma\alpha^{-1}&I&0&\delta-\gamma\alpha^{-1}\beta\end{pmatrix},$$

which shows that $C'$ is the fundamental cocircuit matrix with respect to $B\triangle(X\cup Y)$. ∎

This operation converting $C$ to $C'$ is called pivoting around $\alpha=C[X,Y]$. We have the following property on the nonsingularity of their submatrices.

###### Lemma 4.2.

Let $C$ and $C'$ be the fundamental cocircuit matrices with respect to $B$ and $B':=B\triangle(X\cup Y)$, respectively. Then, for any $Z\subseteq V$, $C[Z\cap B,Z\setminus B]$ is nonsingular if and only if $C'[Z'\cap B',Z'\setminus B']$ is nonsingular, where $Z':=Z\triangle(X\cup Y)$.

###### Proof.

Consider the matrix $(I\ \ C)$ whose column set is equal to $V$. Then, $C[Z\cap B,Z\setminus B]$ is nonsingular if and only if the columns of $(I\ \ C)$ indexed by $B\triangle Z$ form a nonsingular matrix. This is equivalent to the condition that the corresponding columns of $A$ form a nonsingular matrix, i.e., $B\triangle Z$ is a base. Since $B\triangle Z=B'\triangle Z'$, the same argument applied to $(I\ \ C')$ shows that this is also equivalent to the condition that $C'[Z'\cap B',Z'\setminus B']$ is nonsingular. ∎
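Lemma 4.1 can be verified numerically: pivoting $C$ around a nonsingular block must reproduce the cocircuit matrix of the new base computed from scratch. The instance below is hypothetical.

```python
from fractions import Fraction

def inv(M):
    """Matrix inverse over the rationals by Gauss-Jordan elimination."""
    n = len(M)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for k in range(n):
        piv = next(i for i in range(k, n) if M[i][k] != 0)
        M[k], M[piv] = M[piv], M[k]
        M[k] = [x / M[k][k] for x in M[k]]
        for i in range(n):
            if i != k and M[i][k] != 0:
                f = M[i][k]
                M[i] = [x - f * y for x, y in zip(M[i], M[k])]
    return [row[n:] for row in M]

def mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def pivot(C, k):
    """Lemma 4.1: pivot around the leading k x k block alpha of C = [[alpha, beta], [gamma, delta]]."""
    a = [row[:k] for row in C[:k]]
    b = [row[k:] for row in C[:k]]
    g = [row[:k] for row in C[k:]]
    d = [row[k:] for row in C[k:]]
    ai = inv(a)
    gai = mul(g, ai)
    schur = [[x - y for x, y in zip(rd, rs)] for rd, rs in zip(d, mul(gai, b))]
    top = [ra + rb for ra, rb in zip(ai, mul(ai, b))]
    bot = [[-x for x in rg] + rs for rg, rs in zip(gai, schur)]
    return top + bot

# Cocircuit matrix of the base B = {0, 1} of A, columns ordered as {2, 3}:
A = [[1, 0, 1, 1],
     [0, 1, 1, 0]]
C = [[Fraction(1), Fraction(1)],
     [Fraction(1), Fraction(0)]]

# Pivot around the 1x1 block: exchanges column 0 out and column 2 in.
Cpiv = pivot(C, 1)

# Recompute the cocircuit matrix of the new base B' = {2, 1} directly from A.
AB = [[A[i][j] for j in (2, 1)] for i in range(2)]
AR = [[A[i][j] for j in (0, 3)] for i in range(2)]
Cdirect = mul(inv(AB), AR)
```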

The algorithm keeps a matrix $C^*$ whose row and column sets are $B^*$ and $V^*\setminus B^*$, respectively. The matrix $C^*$ is obtained from $C$ by attaching additional rows/columns corresponding to $T$, and then pivoting around the dummy lines. In other words, the matrix obtained from $C^*$ by pivoting around the dummy lines contains $C$ as a submatrix (see (BT1) below). If the row and column sets of $C^*$ are clear, for a vertex set $Z\subseteq V^*$, we denote $C^*[Z]:=C^*[Z\cap B^*,Z\setminus B^*]$.

In our algorithm, the matrix $C^*$ satisfies the following properties.

(BT1)

Let $\tilde{C}$ be the matrix obtained from $C^*$ by pivoting around the dummy lines. Then, $\tilde{C}[B,V\setminus B]$ is the fundamental cocircuit matrix with respect to $B$.

(BT2)

Each normal blossom $H_i\in\Lambda_{\rm N}$ satisfies the following.

• If $b_i\in B^*$ and $t_i\notin B^*$, then $C^*_{b_i t_i}\neq 0$, $C^*_{b_i v}=0$ for any $v\notin H_i$ with $v\neq t_i$, and $C^*_{u t_i}=0$ for any $u\notin H_i$ with $u\neq b_i$ (see Fig. 2).

• If $t_i\in B^*$ and $b_i\notin B^*$, then $C^*_{t_i b_i}\neq 0$, $C^*_{t_i v}=0$ for any $v\notin H_i$ with $v\neq b_i$, and $C^*_{u b_i}=0$ for any $u\notin H_i$ with $u\neq t_i$.

## 5 Dual Feasibility

In this section, we define feasibility of the dual variables and show their properties. Our algorithm for the minimum-weight parity base problem is designed so that it keeps the dual feasibility.

Recall that a potential $p:V^*\to\mathbb{R}$ and a nonnegative variable $q:\Lambda\to\mathbb{R}_{\geq 0}$ are called dual variables. A blossom $H_i$ is said to be positive if $q(H_i)>0$. For distinct vertices $u,v\in V^*$ and for $H_i\in\Lambda$, we say that the pair $(u,v)$ crosses $H_i$ if exactly one of $u$ and $v$ belongs to $H_i$. For distinct $u,v\in V^*$, we denote by $I_{uv}$ the set of indices $i$ such that $(u,v)$ crosses $H_i$. We introduce the set $F^*$ of ordered vertex pairs defined by

$$F^*:=\{(u,v)\mid u\in B^*,\ v\in V^*\setminus B^*,\ C^*_{uv}\neq 0\}.$$

For distinct $u,v\in V^*$, we define

$$Q_{uv}:=\sum_{i\in I_{uv}}q(H_i).$$

The dual variables are called feasible with respect to $B^*$ and $\Lambda$ if they satisfy the following.

(DF1)

$p(u)+p(v)=w_\ell$ for every line $\ell=\{u,v\}$.

(DF2)

$p(v)-p(u)\geq Q_{uv}$ for every $(u,v)\in F^*$.

(DF3)

$p(v)-p(u)=q(H_i)$ for every $H_i\in\Lambda_{\rm N}$ and its dummy line $\{b_i,t_i\}=\{u,v\}$ with $u\in B^*$.

If no confusion may arise, we omit $B^*$ and $\Lambda$ when we discuss dual feasibility.

Note that if $\Lambda=\emptyset$, then $F^*$ corresponds to the nonzero entries of $C$, and $Q_{uv}=0$ holds for every pair $(u,v)$. This implies that (DF2) holds if $B$ is a base minimizing $p(B)$, because $p(v)\geq p(u)$ for any $(u,v)\in F^*$. We also note that (DF3) holds trivially if $\Lambda=\emptyset$. Therefore, $p$ and $q$ are feasible if $p$ satisfies (DF1), $\Lambda=\emptyset$, and $B$ minimizes $p(B)$ in $\mathcal{B}$. This ensures that the initial setting of the algorithm satisfies the dual feasibility.

We now show some properties of feasible dual variables.

###### Lemma 5.1.

Suppose that $p$ and $q$ are feasible dual variables. Let $X\subseteq V^*$ be a vertex subset such that $C^*[X]$ is nonsingular. Then, we have

$$p(X\setminus B^*)-p(X\cap B^*)\geq\sum\{q(H_i)\mid H_i\in\Lambda,\ \text{$|X\cap H_i|$ is odd}\},$$

where $p(Y)$ denotes $\sum_{v\in Y}p(v)$ for a vertex set $Y$.
###### Proof.

Since $C^*[X]$ is nonsingular, there exists a perfect matching $\{(u_1,v_1),\dots,(u_\mu,v_\mu)\}$ between $X\cap B^*$ and $X\setminus B^*$ such that $u_j\in X\cap B^*$, $v_j\in X\setminus B^*$, and $C^*_{u_j v_j}\neq 0$ for $j=1,\dots,\mu$. The dual feasibility implies that $p(v_j)-p(u_j)\geq Q_{u_j v_j}$ for $j=1,\dots,\mu$. Combining these inequalities, we obtain

$$p(X\setminus B^*)-p(X\cap B^*)\geq\sum_{j=1}^{\mu}Q_{u_j v_j}=\sum_{j=1}^{\mu}\sum_{i\in I_{u_j v_j}}q(H_i).\qquad(1)$$

If $|X\cap H_i|$ is odd, there exists an index $j$ such that $(u_j,v_j)$ crosses $H_i$, which shows that the coefficient of $q(H_i)$ on the right-hand side of (1) is at least $1$. This completes the proof. ∎

We now consider the tightness of the inequality in Lemma 5.1. Let $G^*$ be the undirected graph with vertex set $V^*$ and edge set $F^*$, where we regard $F^*$ as a set of unordered pairs. An edge $(u,v)$ with $u\in B^*$ and $v\in V^*\setminus B^*$ is said to be tight if $p(v)-p(u)=Q_{uv}$. We say that a matching $M$ is consistent with a blossom $H_i$ if at most one edge in $M$ crosses $H_i$. We say that a matching $M$ is tight if every edge of $M$ is tight and $M$ is consistent with every positive blossom $H_i\in\Lambda$. As the proof of Lemma 5.1 clarifies, if there exists a tight perfect matching in the subgraph $G^*[X]$ of $G^*$ induced by $X$, then the inequality of Lemma 5.1 is tight. Furthermore, in such a case, every perfect matching in $G^*[X]$ must be tight, which is stated as follows.

###### Lemma 5.2.

For a vertex set $X\subseteq V^*$, if $G^*[X]$ has a tight perfect matching, then any perfect matching in $G^*[X]$ is tight.

When $q(H_i)=0$ for some $H_i\in\Lambda$, one can delete $H_i$ from $\Lambda$ without violating the dual feasibility. In fact, removing such a source blossom does not affect the dual feasibility, (BT1), and (BT2). If $H_i$ is a normal blossom, then apply the pivoting operation around the entry of $C^*$ indexed by $b_i$ and $t_i$ to $C^*$, remove $b_i$ and $t_i$ from $V^*$, and remove $H_i$ from $\Lambda$. This process is referred to as expanding $H_i$.

###### Lemma 5.3.

If $q(H_i)=0$ for some $H_i\in\Lambda$, the dual variables remain feasible and (BT1) and (BT2) hold after $H_i$ is expanded.

###### Proof.

We only consider the case when $b_i\in B^*$ and $t_i\notin B^*$, since we can deal with the case of $t_i\in B^*$ and $b_i\notin B^*$ in the same way. Let $C^*$ be the original matrix and $\tilde{C}$ be the matrix obtained after $H_i$ is expanded. Let $F^*$ (resp. $\tilde{F}$) be the set of ordered vertex pairs corresponding to the nonzero entries of $C^*$ (resp. $\tilde{C}$).

Suppose that $p$ and $q$ are feasible with respect to $C^*$. In order to show that $p$ and $q$ are feasible with respect to $\tilde{C}$, it suffices to consider (DF2), since (DF1) and (DF3) are obvious. Suppose that $(u,v)\in\tilde{F}$. If $C^*_{uv}\neq 0$, then $p(v)-p(u)\geq Q_{uv}$ by the dual feasibility with respect to $C^*$. Otherwise, we have $C^*_{uv}=0$ and $\tilde{C}_{uv}\neq 0$. By Lemma 4.1, $\tilde{C}_{uv}=C^*_{uv}-C^*_{u t_i}(C^*_{b_i t_i})^{-1}C^*_{b_i v}$, and hence $C^*_{uv}=0$ and $\tilde{C}_{uv}\neq 0$ imply that $C^*_{u t_i}\neq 0$ and $C^*_{b_i v}\neq 0$. Then, by the dual feasibility with respect to $C^*$, we obtain

$$p(v)-p(b_i)\geq Q_{b_i v},\qquad p(t_i)-p(u)\geq Q_{u t_i}.$$

Furthermore, we have $p(b_i)-p(t_i)\geq Q_{t_i b_i}$ by (DF3) and $q(H_i)=0$. By combining these inequalities, we obtain $p(v)-p(u)\geq Q_{uv}$. This shows that (DF2) holds with respect to $\tilde{C}$.

By the definition of $\tilde{C}$, it is obvious that $\tilde{C}$ satisfies (BT1).

To show (BT2), let $H_j$ be a normal blossom that is different from $H_i$. Suppose that $b_j\in B^*$ and $t_j\notin B^*$. We consider the following cases separately.

• If $H_i\cap H_j=\emptyset$, then $b_j,t_j\notin H_i$, and hence $\tilde{C}_{b_j v}=C^*_{b_j v}$ for any $v$. In particular, $\tilde{C}_{b_j t_j}=C^*_{b_j t_j}$.

• If $H_i\subsetneq H_j$, then the pivoting changes only entries whose row and column indices are both in $H_i\subseteq H_j$, and hence $\tilde{C}_{b_j v}=C^*_{b_j v}$ for any $v\notin H_j$. In particular, $\tilde{C}_{b_j t_j}=C^*_{b_j t_j}$.

• If $H_j\subsetneq H_i$, then we have that $C^*_{b_j t_i}=0$ and $C^*_{b_i t_j}=0$ by (BT2) applied to $H_j$.

In every case, we have that $\tilde{C}_{b_j v}=C^*_{b_j v}$ for any $v\notin H_j$, and $\tilde{C}_{u t_j}=C^*_{u t_j}$ for any $u\notin H_j$. Therefore, $\tilde{C}_{b_j t_j}\neq 0$, $\tilde{C}_{b_j v}=0$ for any $v\notin H_j$ with $v\neq t_j$, and $\tilde{C}_{u t_j}=0$ for any $u\notin H_j$ with $u\neq b_j$. We can deal with the case when $t_j\in B^*$ and $b_j\notin B^*$ in a similar way. This shows that $\tilde{C}$ satisfies (BT2). ∎

## 6 Optimality

In this section, we show that if we obtain a parity base $B$ and feasible dual variables $p$ and $q$, then $B$ is a minimum-weight parity base.

Note again that although $p$ and $q$ are called dual variables, they do not correspond to dual variables of an LP relaxation of the minimum-weight parity base problem. Our optimality proof is based on the algebraic formulation of the problem (Lemma 2.1) and the duality of the maximum-weight matching problem.

###### Theorem 6.1.

If $B$ is a parity base and there exist feasible dual variables $p$ and $q$, then $B$ is a minimum-weight parity base.

###### Proof.

Since the optimal value of the minimum-weight parity base problem is represented with $\deg_\theta\mathrm{Pf}\,\Phi_A(\theta)$ as shown in Lemma 2.1, we evaluate the value of $\deg_\theta\mathrm{Pf}\,\Phi_A(\theta)$, assuming that we have a parity base $B$ and feasible dual variables $p$ and $q$.

Recall that $A$ is transformed to $(I\ \ C)$ by applying row transformations and column permutations, where $C$ is the fundamental cocircuit matrix with respect to the base $B$ obtained by $C=A[U,B]^{-1}A[U,V\setminus B]$. Note that the identity submatrix gives a one-to-one correspondence between $U$ and $B$, and the row set of $(I\ \ C)$ can be regarded as $B$. We now apply the same row transformations and column permutations to $\Phi_A(\theta)$, and then apply also the corresponding column transformations and row permutations to obtain a skew-symmetric polynomial matrix $\Phi'_A(\theta)$, that is,

$$\Phi'_A(\theta)=\begin{pmatrix}O&(I\ \ C)\\-(I\ \ C)^\top&D'(\theta)\end{pmatrix},$$

where the rows (and columns) are ordered as $U$, $B$, $V\setminus B$, and $D'(\theta)$ is a skew-symmetric matrix obtained from $D(\theta)$ by applying the row and column permutations simultaneously. Note that $\mathrm{Pf}\,\Phi'_A(\theta)=\pm\mathrm{Pf}\,\Phi_A(\theta)$, where the sign is determined by the ordering of $V$.

We now consider the following skew-symmetric matrix:

$$\Phi^*_A(\theta)=\begin{pmatrix}O&A^*\\-A^{*\top}&D^*(\theta)\end{pmatrix},$$

where $A^*$ is the matrix with row set $U^*$ and column set $V\cup(T\setminus B^*)$ whose column indexed by $v\in B$ is the unit vector corresponding to $v\in B^*$ and whose columns indexed by $(V\setminus B)\cup(T\setminus B^*)$ form $C^*$, and $D^*(\theta)$ coincides with $D'(\theta)$ on the rows and columns in $V$ and has zero entries elsewhere. Here, the row and column sets of $\Phi^*_A(\theta)$ are both indexed by $W^*=U^*\cup V\cup(T\setminus B^*)$, where $U^*$ is the row set of $C^*$, which can be identified with $B^*$. Then, we have the following claim.

###### Claim 6.2.

It holds that $\deg_\theta\mathrm{Pf}\,\Phi^*_A(\theta)=\deg_\theta\mathrm{Pf}\,\Phi_A(\theta)$.

###### Proof.

Since $C^*$ satisfies (BT1), we can transform $A^*$ by elementary row transformations into a matrix whose columns indexed by $V$ form $(I\ \ C)$. We apply the same row transformations and their corresponding column transformations to $\Phi^*_A(\theta)$. Then, we obtain a skew-symmetric matrix $\hat{\Phi}_A(\theta)$ whose principal submatrix indexed by the rows identified with $B$ together with $V$ is $\Phi'_A(\theta)$, while the remaining rows and columns are matched with each other by constant nonsingular blocks that do not involve $\theta$, and hence $\deg_\theta\mathrm{Pf}\,\hat{\Phi}_A(\theta)=\deg_\theta\mathrm{Pf}\,\Phi'_A(\theta)$. Since simultaneous row and column transformations change the Pfaffian only by a nonzero constant factor, we have that

$$\deg_\theta\mathrm{Pf}\,\Phi^*_A(\theta)=\deg_\theta\mathrm{Pf}\,\hat{\Phi}_A(\theta)=\deg_\theta\mathrm{Pf}\,\Phi'_A(\theta)=\deg_\theta\mathrm{Pf}\,\Phi_A(\theta),$$

which completes the proof. ∎

In what follows, we evaluate $\deg_\theta\mathrm{Pf}\,\Phi^*_A(\theta)$. Construct a graph $\Gamma^*$ with vertex set $W^*$ and edge set $E^*=\{(u,v)\mid(\Phi^*_A(\theta))_{uv}\neq 0\}$. Each edge $(u,v)\in E^*$ has a weight $\deg_\theta(\Phi^*_A(\theta))_{uv}$. Then it can be easily seen by the definition of the Pfaffian that the maximum weight of a perfect matching in $\Gamma^*$ is at least $\deg_\theta\mathrm{Pf}\,\Phi^*_A(\theta)$. Let us recall that the dual linear program of the maximum-weight perfect matching problem on $\Gamma^*$ is formulated as follows.

$$\begin{array}{lll}\text{Minimize}&\displaystyle\sum_{v\in W^*}\pi(v)-\sum_{Z\in\Omega}\xi(Z)&\\[1ex]\text{subject to}&\displaystyle\pi(u)+\pi(v)-\sum_{Z\in\Omega_{uv}}\xi(Z)\geq\deg_\theta(\Phi^*_A(\theta))_{uv}&(\forall(u,v)\in E^*),\qquad(6)\\[1ex]&\xi(Z)\geq 0&(\forall Z\in\Omega),\end{array}$$

where $\Omega$ is the family of odd subsets of $W^*$ of cardinality at least three and $\Omega_{uv}=\{Z\in\Omega\mid|Z\cap\{u,v\}|=1\}$ (see, e.g., [37, Theorem 25.1]). In what follows, we construct a feasible solution of this linear program. Its objective value provides an upper bound on the maximum weight of a perfect matching in $\Gamma^*$, and consequently serves as an upper bound on $\deg_\theta\mathrm{Pf}\,\Phi^*_A(\theta)$.

Since $U^*$ can be identified with $B^*$, we can naturally define a bijection $\eta:B^*\to U^*$. We define $\pi:W^*\to\mathbb{R}$ by

$$\pi(v)=\begin{cases}p(v)&\text{if }v\in V\cup(T\setminus B^*),\\-p(\eta^{-1}(v))&\text{if }v\in U^*.\end{cases}$$

For each $H_i\in\Lambda$, we introduce $Z_i:=(H_i\setminus(T\cap B^*))\cup\eta(H_i\cap B^*)$ and set $\xi(Z_i):=q(H_i)$ (see Fig. 3). Since $H_i$ is of odd cardinality and there is no source line in $H_i$, we see that

$$|Z_i|=|H_i\setminus(T\cap B^*)|+|H_i\cap B^*|=|H_i|+|H_i\cap B|$$

is odd and $|Z_i|\geq 3$. Define $\xi(Z):=0$ for any other $Z\in\Omega$. We now show the following claim.

###### Claim 6.3.

The dual variables $\pi$ and $\xi$ defined as above form a feasible solution of the linear program (6).

###### Proof.

Suppose that $(u,v)\in E^*$. If $u,v\in V$ and $\{u,v\}$ is a line $\ell$, then (DF1) shows that $\pi(u)+\pi(v)=w_\ell$, where $w_\ell=\deg_\theta(\Phi^*_A(\theta))_{uv}$. Since $|Z_i\cap\{u,v\}|$ is even for any $i$, this shows (6). If $u\in U^*$ and $v\in B$, then $(\Phi^*_A(\theta))_{uv}\neq 0$ implies that $u=\eta(v)$, and hence $\pi(u)+\pi(v)=0=\deg_\theta(\Phi^*_A(\theta))_{uv}$, which shows (6) as $|Z_i\cap\{u,v\}|$ is even for any $i$.

The remaining case of $(u,v)\in E^*$ is when $u\in U^*$ and $v\in(V\setminus B)\cup(T\setminus B^*)$. That is, it suffices to show that $(\pi,\xi)$ satisfies (6) if $C^*_{u'v}\neq 0$, where $u'=\eta^{-1}(u)\in B^*$. By the definition of $\pi$, we have $\pi(u)+\pi(v)=p(v)-p(u')$. By the definition of $Z_i$, we have $Z_i\in\Omega_{uv}$ if and only if $i\in I_{u'v}$, which shows that

$$\sum_{i:Z_i\in\Omega_{uv}}\xi(Z_i)=\sum_{i\in I_{u'v}}q(H_i).$$

Since $(u',v)\in F^*$, by (DF2), we have

$$p(v)-p(u')\geq Q_{u'v}=\sum_{i\in I_{u'v}}q(H_i).$$

Thus, we obtain

$$\pi(u)+\pi(v)-\sum_{i:Z_i\in\Omega_{uv}}\xi(Z_i)\geq 0=\deg_\theta(\Phi^*_A(\theta))_{uv},$$

which shows that $(\pi,\xi)$ satisfies (6). ∎

The objective value of this feasible solution is

$$\sum_{v\in W^*}\pi(v)-\sum_{i=1}^{\lambda}\xi(Z_i)=\sum_{v\in V\setminus B}p(v)+\sum_{v\in T\setminus B^*}p(v)-\sum_{v\in T\cap B^*}p(v)-\sum_{i=1}^{\lambda}\xi(Z_i)=\sum_{v\in V\setminus B}p(v)=\sum_{\ell\subseteq V\setminus B}w_\ell,\qquad(3)$$

where the first equality follows from the definition of $\pi$, the second one follows from the definition of $\xi$ and (DF3), and the third one follows from (DF1). By the weak duality of the maximum-weight matching problem, we have

$$\sum_{v\in W^*}\pi(v)-\sum_{i=1}^{\lambda}\xi(Z_i)\geq(\text{maximum weight of a perfect matching in }\Gamma^*)\geq\deg_\theta\mathrm{Pf}\,\Phi^*_A(\theta)=\deg_\theta\mathrm{Pf}\,\Phi_A(\theta).\qquad(4)$$

On the other hand, Lemma 2.1 shows that any parity base $B'$ satisfies that

 ∑ℓ⊆B′wℓ≥∑ℓ∈L