
# Randomized Strategies for Robust Combinatorial Optimization

Yasushi Kawase Tokyo Institute of Technology, Tokyo, Japan. kawase.y.ab@m.titech.ac.jp Hanna Sumita Tokyo Metropolitan University, Tokyo, Japan. sumita@tmu.ac.jp
###### Abstract

In this paper, we study the following robust optimization problem. Given an independence system and a set of candidate objective functions, we choose an independent set, and then an adversary chooses one objective function, knowing our choice. Our goal is to find a randomized strategy (i.e., a probability distribution over the independent sets) that maximizes the expected objective value in the worst case. To solve the problem, we propose two types of schemes for designing approximation algorithms. One scheme is for the case when the objective functions are linear. It first finds an approximately optimal aggregated strategy and then retrieves a desired solution with little loss of the objective value. The approximation ratio depends on a relaxation of the independence system polytope. As applications, we provide approximation algorithms for a knapsack constraint and for matroid intersection by developing appropriate relaxations and retrievals. The other scheme is based on the multiplicative weights update (MWU) method. A key technique is to introduce a new concept called $\lambda$-reductions of the objective functions with parameter $\lambda$. We show that our scheme outputs a nearly $\alpha$-approximate solution if there exists an $\alpha$-approximation algorithm for a subproblem defined by $\lambda$-reductions. This improves the approximation ratios in previous results. Using this result, we provide approximation algorithms for the case when the objective functions are submodular, and for the cardinality robustness of the knapsack problem.

## 1 Introduction

This paper addresses robust combinatorial optimization. Let $E$ be a finite ground set, and let $n$ be a positive integer. Suppose that we are given set functions $f_1,\dots,f_n\colon 2^E\to\mathbb{R}_+$ and an independence system $\mathcal{I}\subseteq 2^E$. The functions $f_1,\dots,f_n$ represent possible scenarios. The worst case value for $X\in\mathcal{I}$ across all scenarios is defined as $\min_{k\in[n]} f_k(X)$, where $[n]=\{1,\dots,n\}$. We focus on a randomized strategy for the robust optimization problem, i.e., a probability distribution over $\mathcal{I}$. Let $\Delta(\mathcal{I})$ and $\Delta_n$ denote the sets of probability distributions over $\mathcal{I}$ and $[n]$, respectively. The worst case value for a randomized strategy $p\in\Delta(\mathcal{I})$ is defined as $\min_{k\in[n]}\sum_{X\in\mathcal{I}} p_X\cdot f_k(X)$. The aim of this paper is to solve the following robust optimization problem:

$$\max\ \min_{k\in[n]}\sum_{X\in\mathcal{I}} p_X\cdot f_k(X)\quad\text{s.t.}\quad p\in\Delta(\mathcal{I}). \tag{1}$$

There exists a lot of previous work on a deterministic strategy for (1), that is, on finding $X\in\mathcal{I}$ that maximizes the worst case value. We are motivated by the following two merits of focusing on a randomized strategy. The first one is that randomization can improve the worst case value dramatically. Suppose that $n=2$, $E=\{e\}$, $\mathcal{I}=\{\emptyset,\{e\}\}$, $f_1(\emptyset)=f_2(\{e\})=1$, and $f_1(\{e\})=f_2(\emptyset)=0$. Then the maximum worst case value of a deterministic strategy is $0$, while that of a randomized strategy is $1/2$ (choose $\emptyset$ and $\{e\}$ with probability $1/2$ each). The second merit is that a randomized strategy can be found more easily than a deterministic one. It is known that finding a deterministic solution is hard even in simple settings [1, 28]. In particular, as we will see later (Theorem 2.2), computing a deterministic solution with the maximum worst case value is NP-hard, even to approximate, even for linear objectives subject to a cardinality constraint. In contrast, the randomized version of this problem is polynomial-time solvable (see Theorem 3.4).
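The gap between deterministic and randomized strategies in such two-scenario instances can be checked by brute force. The sketch below uses a hypothetical payoff matrix `F` (rows are scenarios, columns are the independent sets of a one-element ground set); the instance and all names are illustrative, not taken from the paper.

```python
from fractions import Fraction

# Hypothetical toy instance: E = {e}, I = {emptyset, {e}}, two scenarios.
# Columns index the independent sets [emptyset, {e}].
F = [[1, 0],   # f_1
     [0, 1]]   # f_2

# Deterministic strategy: pick the single independent set with the best
# worst-case value over the scenarios.
det = max(min(F[k][x] for k in range(2)) for x in range(2))

# Randomized strategy: choose {e} with probability p; the worst case
# min(1 - p, p) is maximized at p = 1/2.
p = Fraction(1, 2)
rand = min((1 - p) * F[k][0] + p * F[k][1] for k in range(2))
print(det, rand)   # the randomized value strictly beats the deterministic one
```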

It is worth noting that we can regard the optimal value of (1) as the game value in a two-person zero-sum game where one player (algorithm) selects a feasible solution and the other player (adversary) selects a possible scenario .

An example of the robust optimization problem appears in (zero-sum) security games, which model the interaction between a system defender and a malicious attacker to the system. The model and its game-theoretic solutions have various applications in the real world: the Los Angeles International Airport randomizes the deployment of its limited security resources; the Federal Air Marshals Service randomizes the allocation of air marshals to flights; the United States Coast Guard uses recommended randomized patrolling strategies; and many other agencies. In this game, we are given a set of targets $E$. The defender selects a set of targets $X\in\mathcal{I}$, and then the attacker selects a target $k\in E$. The utility of the defender is $1$ if $k\in X$ and $0$ if $k\notin X$. Then, we can interpret the game as the robust optimization problem with $n=|E|$, where $f_k(X)=1$ if $k\in X$ and $f_k(X)=0$ if $k\notin X$ for each $k\in E$. Most of the literature has focused on the computation of a Stackelberg equilibrium, which is equivalent to (1).

Another example of (1) is to compute the cardinality robustness for the maximum weight independent set problem [23, 19, 25, 37, 31]. The problem is to choose an independent set of size at most $k$ with as large a total weight as possible, where the cardinality bound $k$ is not known in advance. For each independent set $X$, we denote the total weight of the $k$ heaviest elements in $X$ by $f_k(X)$. The problem is also described as the following zero-sum game. First, the algorithm chooses an independent set $X\in\mathcal{I}$, and then the adversary (or nature) chooses a cardinality bound $k$, knowing $X$. The payoff of the algorithm is $f_k(X)/\max_{Y\in\mathcal{I}} f_k(Y)$. For $\alpha\in[0,1]$, an independent set $X$ is said to be $\alpha$-robust if $f_k(X)\ge\alpha\cdot\max_{Y\in\mathcal{I}} f_k(Y)$ for any $k$. Then, our goal is to find a randomized strategy that maximizes the robustness. We refer to this problem as the maximum cardinality robustness problem. It is formulated as (1) by normalizing each $f_k$ by $\max_{Y\in\mathcal{I}} f_k(Y)$.
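To make the definitions concrete, the following sketch computes $f_k$ (weight of the $k$ heaviest elements) and the robustness of each single independent set by brute force, on a small hypothetical knapsack-type independence system; the instance and all names are invented for illustration.

```python
from itertools import combinations

# Hypothetical knapsack-type independence system: capacity 2.
size = {"a": 2, "b": 1, "c": 1}
weight = {"a": 3, "b": 2, "c": 2}
E = list(size)
I = [X for r in range(len(E) + 1) for X in combinations(E, r)
     if sum(size[e] for e in X) <= 2]

def f(k, X):
    """Total weight of the k heaviest elements of X."""
    return sum(sorted((weight[e] for e in X), reverse=True)[:k])

def robustness(X):
    """min over k of f_k(X) / max_Y f_k(Y): the alpha for which X is alpha-robust."""
    return min(f(k, X) / max(f(k, Y) for Y in I)
               for k in range(1, len(E) + 1))

best = max(I, key=robustness)
print(best, robustness(best))
```

On this instance the single heavy item `a` is the most robust deterministic choice: it is optimal for $k=1$ but loses to $\{b,c\}$ for $k\ge 2$.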

Since (1) can be regarded as the problem of computing the game value of a two-person zero-sum game, the most standard way to solve (1) is to use linear programming (LP). In fact, it is known that we can compute the exact game value in polynomial time with respect to the numbers of deterministic (pure) strategies of both players (see, e.g., [39, 5] for details). However, in our setting, direct use of the LP formulation does not give an efficient algorithm, because the set of deterministic strategies for the algorithm is $\mathcal{I}$, whose cardinality may be exponentially large, and hence the numbers of variables and constraints in the LP formulation are exponentially large.

Another known way to solve (1) is the multiplicative weights update (MWU) method. The MWU method is an algorithmic technique that maintains a distribution on a certain set of interest and updates it iteratively by multiplying the probability masses of elements by suitably chosen factors, based on feedback obtained by running another algorithm on the current distribution. MWU is a simple but powerful method that is used in wide areas such as game theory, machine learning, computational geometry, and optimization. Freund and Schapire showed that MWU can be used to calculate the approximate value of a two-person zero-sum game under some conditions. More precisely, if (i) the adversary has a polynomial number of deterministic strategies and (ii) the algorithm can compute a best response, then MWU gives a polynomial-time algorithm to compute the game value up to an additive error of $\epsilon$ for any fixed constant $\epsilon>0$. For $q\in\Delta_n$, we call $X\in\mathcal{I}$ a best response for $q$ if $X\in\arg\max_{Y\in\mathcal{I}}\sum_{k\in[n]} q_k\cdot f_k(Y)$. Krause et al. and Chen et al. extended this result to the case when the algorithm can only compute an $\alpha$-best response, i.e., an $\alpha$-approximate solution for $\max_{Y\in\mathcal{I}}\sum_{k\in[n]} q_k\cdot f_k(Y)$. They provided a polynomial-time algorithm that finds an $\alpha$-approximation of the game value up to an additive error of $\epsilon$ for any fixed constant $\epsilon>0$. This implies an approximation ratio of $\alpha-\epsilon/\nu^*$, where $\nu^*$ is the optimal value of (1). Their algorithms require pseudo-polynomial time to obtain an $(\alpha-\epsilon)$-approximate solution for a fixed constant $\epsilon>0$. In this paper, we improve their technique to find such a solution in polynomial time.
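The best-response dynamic described above can be sketched in a few lines. This is a generic Freund–Schapire-style loop on a toy payoff matrix, not the paper's refined algorithm; the step size `eta` and the instance are illustrative assumptions.

```python
import math

# Illustrative MWU on a toy zero-sum game with payoffs in [0, 1].  The
# adversary keeps weights over the n scenarios and down-weights scenarios
# on which the algorithm's best response scores well; the uniform mixture
# of the best responses approaches the game value.
F = [[1, 0],    # f_k(X) for X in [emptyset, {e}]
     [0, 1]]
n, m, T = 2, 2, 1000
eta = math.sqrt(math.log(n) / T)        # illustrative step size
w = [1.0] * n
plays = [0] * m
for _ in range(T):
    total = sum(w)
    q = [wi / total for wi in w]
    # best response: the pure strategy maximizing the mixed objective
    x = max(range(m), key=lambda j: sum(q[k] * F[k][j] for k in range(n)))
    plays[x] += 1
    for k in range(n):                  # penalize high-payoff scenarios
        w[k] *= math.exp(-eta * F[k][x])
p = [c / T for c in plays]
value = min(sum(p[j] * F[k][j] for j in range(m)) for k in range(n))
print(value)
```

On this symmetric instance the loop alternates between the two pure strategies, so the averaged mixture attains the game value $1/2$.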

The main results of this paper are two general schemes for solving (1), based on LP and on MWU, both of which rely on solving certain subproblems. Therefore, when we want to solve a specific class of the problem (1), it suffices to solve the corresponding subproblem. As consequences of our results, we obtain (approximation) algorithms for (1) in which the objective functions and the constraint belong to well-known classes in combinatorial optimization, such as submodular functions and knapsack, matroid, and $\mu$-matroid intersection constraints.

### Related work

While there are still few papers on randomized strategies for robust optimization problems, algorithms to find a deterministic strategy have been intensively studied in various settings. See also the survey papers [1, 28]. Krause et al. focused on the problem $\max_{X}\min_{k\in[n]} f_k(X)$, where the $f_k$'s are monotone submodular functions. Those authors showed that this problem is NP-hard even to approximate, and provided an algorithm that outputs a set of larger size whose objective value is at least as good as the optimal value. Orlin et al. provided constant-factor approximation algorithms for the robust problem $\max_{X\in\mathcal{I}}\min_{Z\subseteq X:|Z|\le\tau} f(X\setminus Z)$, where $f$ is a monotone submodular function.

Kakimura et al. proved that the deterministic version of the maximum cardinality robustness problem is weakly NP-hard but admits an FPTAS. Since Hassin and Rubinstein introduced the notion of cardinality robustness, many papers have investigated the value of the maximum cardinality robustness [23, 19, 25]. Matuschke et al. introduced randomized strategies for cardinality robustness, and they presented a randomized strategy with $1/\ln 4$-robustness for a certain class of independence systems. Kobayashi and Takazawa focused on independence systems defined from the knapsack problem, and exhibited two randomized strategies whose robustness guarantees depend on the exchangeability $\mu$ of the independence system.

When $n=1$, the deterministic version of the robust optimization problem (1) is exactly the classical problem $\max_{X\in\mathcal{I}} f_1(X)$. For the monotone submodular function maximization problem, there exist $(1-1/e)$-approximation algorithms under a knapsack constraint or a matroid constraint [8, 15], and there exists a $1/(\mu+\epsilon)$-approximation algorithm under a $\mu$-matroid intersection constraint for any fixed $\mu\ge 2$ and $\epsilon>0$. For the unconstrained non-monotone submodular function maximization problem, there exists a $1/2$-approximation algorithm, and this is best possible [14, 7]. As for the case when the objective function is linear, the knapsack problem admits an FPTAS.

### Our results

##### LP-based algorithm

We focus on the case when all the objective functions are linear. In a known LP formulation for zero-sum games, each variable corresponds to the probability that each $X\in\mathcal{I}$ is chosen. Because $|\mathcal{I}|$ may be exponentially large, we use another LP formulation of (1). The number of variables is reduced by taking as a variable the probability that each element of $E$ is chosen. The feasible region is the independence system polytope, that is, the convex hull of the characteristic vectors of the sets in $\mathcal{I}$. Although our LP formulation still has an exponential number of constraints, we can use the result by Grötschel, Lovász, and Schrijver that if we can efficiently solve the separation problem for the polytope of the feasible region, then we can efficiently solve the LP by the ellipsoid method. Since the solution of the LP is an aggregated strategy for (1), we must retrieve a randomized strategy from it. To do this, we use their result again: we can efficiently represent the optimal vector as a convex combination of extreme points (vertices) of the polytope. Consequently, there exists a polynomial-time algorithm for (1) when $\mathcal{I}$ is a matroid (or a matroid intersection), because a matroid (intersection) polytope admits a polynomial-time separation algorithm. As another application, we also provide a polynomial-time algorithm for the robust shortest path problem by using the dominant of the $s$–$t$ path polytope.

Moreover, we extend our scheme to deal with the case when the separation problem is NP-hard. For many combinatorial optimization problems, such as the knapsack problem and the $\mu$-matroid intersection problem ($\mu\ge 3$), the existence of an efficient algorithm for the separation problem is still unknown. A key point in dealing with such cases is to use a slight relaxation of the independence system polytope. We show that if we can efficiently solve the separation problem for the relaxed polytope, then we can efficiently compute an approximation of the optimal value of (1). The approximation ratio equals the gap between the original polytope and the relaxed polytope. The most difficult point is the translation of the optimal LP solution into a randomized strategy, because the optimal solution may not belong to the original feasible region, and we can no longer use the result of Grötschel, Lovász, and Schrijver. Instead, we compute a randomized strategy approximately. We demonstrate our extended scheme for the knapsack constraint and the $\mu$-matroid intersection constraint by developing appropriate relaxations and retrievals for them. As results, we obtain a PTAS for the knapsack constraint and an approximation algorithm, with a ratio depending on $\mu$, for the $\mu$-matroid intersection constraint.

The merit of the LP-based algorithm compared with the MWU-based one is that the LP-based algorithm is applicable to the case when the set of possible objective functions is given by a half-space representation of a polytope. The problem (1) corresponds to the case where the set of possible objective functions is given as the convex hull of linear functions (i.e., a vertex representation). Since a vertex representation can be transformed into a half-space representation (by an extended formulation, as we will describe later), (1) with a half-space representation is a generalization of the original problem. On the other hand, the transformation of a half-space representation into a vertex one is expensive because the number of vertices may be exponentially large. The two representations of a polytope have different uses, and hence it is important that the LP-based algorithm can deal with both.

##### MWU-based algorithm

We improve the technique of [34, 11] to obtain an approximation algorithm based on the MWU method. Their algorithms adopt the value of $f_k$ for the update, but this may lead to slow convergence when $f_k(X)$ is small for some $k$. To overcome this drawback, we make the convergence rate per iteration faster by introducing a novel concept called a $\lambda$-reduction. For a nonnegative function $f$, a function $\hat f$ is called a $\lambda$-reduction of $f$ if (i) $\hat f(X)$ is always at most $\lambda$ and (ii) $\hat f(X)=f(X)$ for any $X$ such that $f(X)$ is at most $\lambda$. We assume that for some polynomially bounded $\lambda$, there exists an $\alpha$-approximation algorithm that solves $\max_{X\in\mathcal{I}}\sum_{k\in[n]} q_k\cdot\hat f_k(X)$ for any $q\in\Delta_n$ and $\lambda$, where $\hat f_k$ is a $\lambda$-reduction of $f_k$ for each $k\in[n]$. By using the approximation algorithm as a subroutine and by setting the value of $\lambda$ appropriately, we show that, for any fixed constant $\epsilon>0$, our scheme gives an $(\alpha-\epsilon)$-approximate solution in polynomial time with respect to the input size and $1/\epsilon$. We remark that the support size of the output may be as large as the number of iterations. Without loss of the objective value, we can find a sparse solution whose support size is at most $n+1$ by using an LP.
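One natural reading of this reduction concept is truncation at the level parameter (written `lam` below): the truncated function never exceeds the parameter and agrees with the original wherever the original is small. This is only a sketch of properties (i) and (ii); the paper's actual reductions are tailored to stay inside the relevant function class (e.g., linear to linear), which plain truncation does not guarantee.

```python
# Sketch of the truncation idea: cap f at the level lam.  The capped
# function never exceeds lam ((i)) and agrees with f wherever f itself is
# at most lam ((ii)).  Truncating a monotone submodular function at a
# constant also preserves monotone submodularity.
def reduce_at(f, lam):
    return lambda X: min(f(X), lam)

f = lambda X: sum(X)          # toy objective on sets of numbers
g = reduce_at(f, 5)
assert g({1, 2}) == f({1, 2}) == 3   # (ii): small values are untouched
assert g({4, 5}) == 5                # (i): large values are capped at lam
```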

The merit of the MWU-based algorithm is its applicability to a wide class of robust optimization problems. We also demonstrate our scheme for various optimization problems. For any $\lambda$, we show that a linear function has a $\lambda$-reduction to a linear function, a monotone submodular function has a $\lambda$-reduction to a monotone submodular function, and a non-monotone submodular function has a $\lambda$-reduction to a submodular function. Therefore, we can construct subroutines based on existing work. Consequently, for the linear case, we obtain an FPTAS for (1) subject to the knapsack constraint and a nearly $1/\mu$-approximation algorithm subject to the $\mu$-matroid intersection constraint. For the monotone submodular case, we obtain nearly $(1-1/e)$-approximation algorithms for the knapsack or matroid constraint, and an approximation algorithm for the $\mu$-matroid intersection constraint. For the non-monotone submodular case, we derive a nearly $1/2$-approximation algorithm for (1) without a constraint.

An important application of our MWU-based scheme is the maximum cardinality robustness problem. For independence systems defined from the knapsack problem, we obtain an FPTAS for the maximum cardinality robustness problem. To construct the subroutine, we give a gap-preserving reduction of the subproblem to the knapsack problem, which admits an FPTAS. We also show that the maximum cardinality robustness problem is NP-hard.

We remark that both schemes produce a randomized strategy, but the schemes themselves are deterministic. Our results are summarized in Table 1.

##### Organization of this paper

The rest of this paper is organized as follows. In Section 2, we fix notations and give a precise description of our problem. In Section 3, we explain the basic scheme of the LP-based algorithms and then extend the result to the knapsack constraint case and the $\mu$-matroid intersection constraint case. In Section 4, we explain the MWU-based algorithms.

## 2 Preliminaries

##### Linear and submodular functions

Throughout this paper, we consider set functions $f\colon 2^E\to\mathbb{R}_+$ with $f(\emptyset)=0$. We say that a set function $f$ is submodular if $f(X)+f(Y)\ge f(X\cup Y)+f(X\cap Y)$ holds for all $X,Y\subseteq E$ [18, 32]. In particular, a set function $f$ is called linear (modular) if $f(X)+f(Y)=f(X\cup Y)+f(X\cap Y)$ holds for all $X,Y\subseteq E$. A linear function is represented as $f(X)=\sum_{e\in X} w(e)$ for some weights $w\in\mathbb{R}^E$. A function $f$ is said to be monotone if $f(X)\le f(Y)$ for all $X\subseteq Y$. A linear function is monotone if and only if its weights are nonnegative ($w(e)\ge 0$ for all $e\in E$).
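These definitions can be verified by brute force on small ground sets. The helper below is an illustrative check, not part of the paper's algorithms.

```python
from itertools import combinations

def subsets(E):
    return [frozenset(S) for r in range(len(E) + 1)
            for S in combinations(E, r)]

def is_submodular(f, E):
    """Brute-force check of f(X) + f(Y) >= f(X | Y) + f(X & Y)."""
    P = subsets(E)
    return all(f(X) + f(Y) >= f(X | Y) + f(X & Y) for X in P for Y in P)

E = {1, 2, 3}
assert is_submodular(lambda X: sum(X), E)          # linear: equality always holds
assert is_submodular(lambda X: min(sum(X), 4), E)  # truncated linear is submodular
```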

##### Independence system

Let $E$ be a finite ground set. An independence system is a set system $(E,\mathcal{I})$ with the following properties: (I1) $\emptyset\in\mathcal{I}$, and (I2) $X\subseteq Y\in\mathcal{I}$ implies $X\in\mathcal{I}$. A set in $\mathcal{I}$ is said to be independent, and an inclusion-wise maximal independent set is called a base. The class of independence systems is wide; it includes matroids, $\mu$-matroid intersections, and families of knapsack solutions.

A matroid is an independence system additionally satisfying (I3) $X,Y\in\mathcal{I}$ with $|X|<|Y|$ implies the existence of $e\in Y\setminus X$ such that $X\cup\{e\}\in\mathcal{I}$. All bases of a matroid have the same cardinality, which is called the rank of the matroid and is denoted by $\rho(E)$. An example of a matroid is a uniform matroid $(E,\mathcal{I})$, where $\mathcal{I}=\{X\subseteq E:|X|\le r\}$ for some nonnegative integer $r$. Note that the rank of this uniform matroid is $\min\{r,|E|\}$. Given two matroids $(E,\mathcal{I}_1)$ and $(E,\mathcal{I}_2)$, the matroid intersection of them is defined by $(E,\mathcal{I}_1\cap\mathcal{I}_2)$. Similarly, given $\mu$ matroids $(E,\mathcal{I}_1),\dots,(E,\mathcal{I}_\mu)$, the $\mu$-matroid intersection is defined by $(E,\bigcap_{i=1}^{\mu}\mathcal{I}_i)$.

Given an item set $E$ with a size $s(e)>0$ and a value $v(e)\ge 0$ for each $e\in E$, and a capacity $C>0$, the knapsack problem is to find a subset $X$ of $E$ that maximizes the total value $\sum_{e\in X} v(e)$ subject to the knapsack constraint $\sum_{e\in X} s(e)\le C$. Each subset satisfying the knapsack constraint is called a knapsack solution. Let $\mathcal{I}$ be the family of knapsack solutions. Then, $(E,\mathcal{I})$ is an independence system.
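Properties (I1) and (I2) for the family of knapsack solutions can be checked directly on a toy instance; the sizes and capacity below are invented for illustration.

```python
from itertools import combinations

# Hypothetical sizes and capacity; enumerate the family of knapsack solutions.
size, capacity = {"a": 2, "b": 3, "c": 4}, 5
E = list(size)
I = {frozenset(X) for r in range(len(E) + 1) for X in combinations(E, r)
     if sum(size[e] for e in X) <= capacity}

# (I1): the empty set is a knapsack solution.
assert frozenset() in I
# (I2): downward closed - every subset of a solution is again a solution.
assert all(frozenset(Y) in I
           for X in I for r in range(len(X) + 1) for Y in combinations(X, r))
```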

##### Robust optimization problem

Let $E$ be a finite ground set, and let $n$ be a positive integer. Given set functions $f_1,\dots,f_n\colon 2^E\to\mathbb{R}_+$ and an independence system $\mathcal{I}\subseteq 2^E$, our task is to solve

$$\max_{p\in\Delta(\mathcal{I})}\ \min_{k\in[n]}\ \sum_{X\in\mathcal{I}} p_X\cdot f_k(X).$$

For each $k\in[n]$, we denote $X^*_k\in\arg\max_{X\in\mathcal{I}} f_k(X)$ and assume that $f_k(X^*_k)>0$. We assume that the functions are given by oracles, i.e., for a given $X\subseteq E$, we can query an oracle about the values $f_1(X),\dots,f_n(X)$. Recall that $\Delta(\mathcal{I})$ and $\Delta_n$ denote the sets of probability distributions over $\mathcal{I}$ and $[n]$, respectively.

By von Neumann’s minimax theorem, it holds that

$$\max_{p\in\Delta(\mathcal{I})}\min_{k\in[n]}\sum_{X\in\mathcal{I}} p_X\cdot f_k(X)=\min_{q\in\Delta_n}\max_{X\in\mathcal{I}}\sum_{k\in[n]} q_k\cdot f_k(X). \tag{2}$$

This leads to the following proposition, which will be used later.

###### Proposition 2.1.

Let $\nu^*$ denote the optimal value of (1). It holds that $\min_{k\in[n]} f_k(X^*_k)/n\ \le\ \nu^*\ \le\ \min_{k\in[n]} f_k(X^*_k)$.

###### Proof.

The upper bound follows from

$$\nu^*=\min_{q\in\Delta_n}\max_{X\in\mathcal{I}}\sum_{k\in[n]} q_k\cdot f_k(X)\ \le\ \min_{k\in[n]}\max_{X\in\mathcal{I}} f_k(X)=\min_{k\in[n]} f_k(X^*_k).$$

For the lower bound, let $p^*$ be a probability distribution such that $p^*_{X^*_k}=1/n$ for each $k\in[n]$. Then we have

$$\nu^*=\max_{p\in\Delta(\mathcal{I})}\min_{k\in[n]}\sum_{X\in\mathcal{I}} p_X\cdot f_k(X)\ \ge\ \min_{k\in[n]}\sum_{X\in\mathcal{I}} p^*_X\cdot f_k(X)\ \ge\ \min_{k\in[n]} f_k(X^*_k)/n. \qquad\blacksquare$$

This implies that we can find a $(1/n)$-approximate solution by just computing $X^*_1,\dots,X^*_n$.
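The $1/n$-approximation suggested by the proposition is easy to implement by brute force on a toy instance; the cardinality-2 constraint and the weights below are hypothetical.

```python
from itertools import combinations

# Hypothetical instance: |X| <= 2 and two linear scenarios.  Compute an
# optimal set for each scenario separately and mix them uniformly.
E = ["a", "b", "c"]
w = {1: {"a": 4, "b": 1, "c": 0}, 2: {"a": 0, "b": 1, "c": 4}}
I = [X for r in range(3) for X in combinations(E, r)]   # all sets of size <= 2

def f(k, X):
    return sum(w[k][e] for e in X)

Xstar = {k: max(I, key=lambda X: f(k, X)) for k in w}
upper = min(f(k, Xstar[k]) for k in w)      # upper bound on the optimum
# worst-case value of the uniform mixture over the per-scenario optima
value = min(sum(f(k, Xstar[j]) for j in w) / len(w) for k in w)
assert value >= upper / len(w)              # the 1/n guarantee
print(upper, value)
```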

We prove that, even in an easy case, computing the optimal worst case value among deterministic solutions is strongly NP-hard, even to approximate. To prove this, we reduce from the hitting set problem, which is known to be NP-hard. Given subsets $T_1,\dots,T_n$ of a ground set $E$ and a positive integer $r$, the hitting set problem is to find a subset $X\subseteq E$ such that $|X|\le r$ and $X\cap T_k\ne\emptyset$ for all $k\in[n]$.

###### Theorem 2.2.

It is NP-hard to compute

$$\max_{X\in\mathcal{I}}\ \min_{k\in[n]}\ f_k(X) \tag{3}$$

even when the objective functions are linear and $\mathcal{I}$ is given by a uniform matroid. Moreover, there exists no polynomial-time approximation algorithm for the problem unless P=NP.

###### Proof.

Let $(E,\,T_1,\dots,T_n,\,r)$ be an instance of the hitting set problem. We construct an instance of (3) as follows. The constraint is defined so that $\mathcal{I}$ is the rank-$r$ uniform matroid. Note that $\mathcal{I}$ is the family of subsets of $E$ with at most $r$ elements. Each objective function is defined by $f_k(X)=|X\cap T_k|$, which is linear.

If there exists a hitting set $X$ with $|X|\le r$, then $\min_{k\in[n]} f_k(X)\ge 1$, which implies that the optimal value of (3) is at least $1$. On the other hand, if no $X\in\mathcal{I}$ is a hitting set, then $\min_{k\in[n]} f_k(X)=0$ for all $X\in\mathcal{I}$, meaning that the optimal value of (3) is $0$. Therefore, even deciding whether the optimal value of (3) is positive or zero is NP-hard. Thus, there exists no polynomial-time approximation algorithm for the problem unless P=NP. ∎
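The reduction in this proof can be replayed on a tiny instance; the assertion below checks that the optimal value of (3) is positive exactly when a hitting set of the given budget exists (the sets `T` and budget `r` are invented).

```python
from itertools import combinations

# Replay the reduction: f_k(X) = |X & T_k| with the rank-r uniform matroid.
E = {1, 2, 3, 4}
T = [{1, 2}, {2, 3}, {4}]          # invented hitting set instance
r = 2
I = [set(X) for s in range(r + 1) for X in combinations(E, s)]

opt = max(min(len(X & Tk) for Tk in T) for X in I)
has_hitting_set = any(all(X & Tk for Tk in T) for X in I)
assert (opt > 0) == has_hitting_set     # positive value iff a hitting set exists
print(opt)
```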

## 3 LP-based Algorithms

In this section, we present a computation scheme for the robust optimization problem (1) with linear functions $f_1,\dots,f_n$, i.e., $f_k(X)=\sum_{e\in X} w_{ke}$ for each $k\in[n]$. Here, $w_{ke}\ge 0$ holds for all $k\in[n]$ and $e\in E$ since we assume that the functions are nonnegative. A key technique is the separation problem for an independence system polytope. The independence system polytope of $\mathcal{I}$ is the polytope defined as $P(\mathcal{I})=\mathrm{conv}\{\chi(X): X\in\mathcal{I}\}$, where $\chi(X)$ is the characteristic vector of $X$ in $\{0,1\}^E$, i.e., $\chi(X)_e=1$ if and only if $e\in X$. For a probability distribution $p\in\Delta(\mathcal{I})$, we can get a point $x\in P(\mathcal{I})$ such that $x=\sum_{X\in\mathcal{I}} p_X\cdot\chi(X)$. Then, $x_e$ means the probability that $e$ is chosen when we select an independent set according to the probability distribution $p$. Conversely, for a point $x\in P(\mathcal{I})$, there exists $p\in\Delta(\mathcal{I})$ such that $x=\sum_{X\in\mathcal{I}} p_X\cdot\chi(X)$ by the definition of $P(\mathcal{I})$. Given $\hat x\in\mathbb{Q}^E$, the separation problem for $P(\mathcal{I})$ is to either assert $\hat x\in P(\mathcal{I})$ or find a vector $d\in\mathbb{Q}^E$ such that $d^\top\hat x>d^\top x$ for all $x\in P(\mathcal{I})$.

The rest of this section is organized as follows. In Section 3.1, we prove that we can solve (1) in polynomial time if there is a polynomial-time algorithm to solve the separation problem for $P(\mathcal{I})$. We list classes of independence systems that admit a polynomial-time separation algorithm in Section 3.2. In Section 3.3, we tackle the case when it is hard to construct a separation algorithm for $P(\mathcal{I})$. We show that we can obtain an approximate solution when we slightly relax $P(\mathcal{I})$. Moreover, we deal with the setting where the objective functions are given by a polytope in Section 3.4, and consider nearly linear functions in Section 3.5.

### 3.1 Basic scheme

We observe that the optimal robust value of (1) is the same as the optimal value of the following linear program (LP):

$$\max\ \nu\quad\text{s.t.}\quad \nu\le\sum_{e\in E} w_{ke}\, x_e\ \ (\forall k\in[n]),\qquad x\in P(\mathcal{I}). \tag{4}$$
###### Lemma 3.1.

When $f_1,\dots,f_n$ are linear, the optimal value of (4) is equal to that of (1).

###### Proof.

Let $p^*$ be an optimal solution of (1) and let $\nu^*$ be its objective value. Let $x^*$ be the vector such that $x^*_e=\sum_{X\in\mathcal{I}:e\in X} p^*_X$ for each $e\in E$. Note that $x^*=\sum_{X\in\mathcal{I}} p^*_X\,\chi(X)$. Then $x^*\in P(\mathcal{I})$ holds by the definition of $P(\mathcal{I})$. Thus, the optimal value of (4) is at least

$$\min_{k\in[n]}\sum_{e\in E} w_{ke}\, x^*_e=\min_{k\in[n]}\sum_{e\in E}\,\sum_{X\in\mathcal{I}:e\in X} p^*_X\cdot w_{ke}=\min_{k\in[n]}\sum_{X\in\mathcal{I}}\sum_{e\in X} p^*_X\cdot w_{ke}=\min_{k\in[n]}\sum_{X\in\mathcal{I}} p^*_X\cdot f_k(X)=\nu^*.$$

On the other hand, let $(\nu',x')$ be an optimal solution of (4). As $x'\in P(\mathcal{I})$, there exists a $p'\in\Delta(\mathcal{I})$ such that $x'=\sum_{X\in\mathcal{I}} p'_X\,\chi(X)$. Then we have

$$\nu^*=\max_{p\in\Delta(\mathcal{I})}\min_{k\in[n]}\sum_{X\in\mathcal{I}} p_X\cdot f_k(X)\ \ge\ \min_{k\in[n]}\sum_{X\in\mathcal{I}} p'_X\cdot f_k(X)=\min_{k\in[n]}\sum_{X\in\mathcal{I}}\sum_{e\in X} p'_X\cdot w_{ke}=\min_{k\in[n]}\sum_{e\in E} w_{ke}\sum_{X\in\mathcal{I}:e\in X} p'_X=\min_{k\in[n]}\sum_{e\in E} w_{ke}\cdot x'_e\ \ge\ \nu'. \qquad\blacksquare$$

Thus, an optimal solution of (1) can be obtained by the following two-step scheme.

1. compute an optimal solution of LP (4), which we denote by $x^*$,

2. compute $p\in\Delta(\mathcal{I})$ such that $x^*=\sum_{X\in\mathcal{I}} p_X\,\chi(X)$.

It is trivial that if $|\mathcal{I}|$ is bounded by a polynomial in $|E|$ and $n$, then we can obtain $p$ by replacing $x\in P(\mathcal{I})$ with the constraints $x=\sum_{X\in\mathcal{I}} p_X\,\chi(X)$ and $p\in\Delta(\mathcal{I})$ in (4) and solving the resulting LP. In general, we can solve the two problems in polynomial time by the ellipsoid method when we have a polynomial-time algorithm for the separation problem for $P(\mathcal{I})$. This is due to the following theorems given by Grötschel, Lovász, and Schrijver.
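For a rank-$r$ uniform matroid, $P(\mathcal{I})$ has the compact description $0\le x\le 1$, $\sum_{e} x_e\le r$, so step 1 can be written as an explicit LP. The sketch below assumes SciPy's `linprog` is available; the two-scenario instance is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# LP (4) on the rank-1 uniform matroid over E = {a, b} with scenarios
# w_1 = (1, 0) and w_2 = (0, 1); here P(I) is exactly
# {0 <= x <= 1, x_a + x_b <= 1}.  Variables: (nu, x_a, x_b), maximize nu.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
A_ub = np.vstack([np.hstack([np.ones((2, 1)), -W]),  # nu - w_k . x <= 0
                  [[0.0, 1.0, 1.0]]])                # x_a + x_b <= rank 1
b_ub = np.array([0.0, 0.0, 1.0])
res = linprog(c=[-1.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, 1), (0, 1)], method="highs")
nu, x = res.x[0], res.x[1:]
print(nu, x)
```

The optimum is $\nu=1/2$ at $x=(1/2,1/2)$, which decomposes as $\tfrac12\chi(\{a\})+\tfrac12\chi(\{b\})$, exactly the retrieval of step 2.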

###### Theorem 3.2 ().

Let $P\subseteq\mathbb{R}^E$ be a polytope. If the separation problem for $P$ can be solved in polynomial time, then we can solve a linear program over $P$ in polynomial time.

###### Theorem 3.3 ().

Let $P\subseteq\mathbb{R}^E$ be a polytope. If the separation problem for $P$ can be solved in polynomial time, then there exists a polynomial-time algorithm that, for any vector $x\in P$, computes affinely independent vertices $x_1,\dots,x_t$ of $P$ ($t\le|E|+1$) and positive reals $\lambda_1,\dots,\lambda_t$ with $\sum_{i=1}^{t}\lambda_i=1$ such that $x=\sum_{i=1}^{t}\lambda_i x_i$.

Therefore, we obtain the following general result.

###### Theorem 3.4.

If $f_1,\dots,f_n$ are linear and there is a polynomial-time algorithm to solve the separation problem for $P(\mathcal{I})$, then we can solve the linear robust optimization problem (1) in polynomial time.

### 3.2 Independence system polytopes with separation algorithms

Here we list classes of independence systems for which the separation problem admits a polynomial-time algorithm. For more details on the following representations of polytopes, see the references.

##### Matroid constraint

Suppose that $(E,\mathcal{I})$ is a matroid with rank function $\rho$. Then, we can write

$$P(\mathcal{I})=\left\{x\in[0,1]^E\ \middle|\ \sum_{e\in U} x_e\le\rho(U)\ \ (\forall U\subseteq E)\right\}.$$

The separation problem for $P(\mathcal{I})$ is solvable in strongly polynomial time by Cunningham’s algorithm. Thus, we can solve the linear robust optimization problem (1) subject to a matroid constraint.
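For the special case of a rank-$r$ uniform matroid, the separation problem is elementary: among candidate sets $U$ of a given cardinality, the most violated one consists of the largest coordinates, so one sort suffices. The sketch below illustrates this special case only (it is not Cunningham's algorithm, which handles general matroids).

```python
# Rank-r uniform matroid: the rank constraints say that, for every U,
# sum_{e in U} x_e <= min(|U|, r).  The most violated U of each size
# consists of the largest coordinates.  Box constraints 0 <= x <= 1 are
# assumed to be checked separately.
def separate_uniform(x, r):
    """Return a violated set U, or None if all rank cuts hold."""
    order = sorted(x, key=x.get, reverse=True)
    total = 0.0
    for i, e in enumerate(order, start=1):
        total += x[e]
        if total > min(i, r) + 1e-9:
            return set(order[:i])
    return None

assert separate_uniform({"a": 0.5, "b": 0.5}, r=1) is None
assert separate_uniform({"a": 0.9, "b": 0.6}, r=1) == {"a", "b"}
```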

##### Matroid intersection

Let $(E,\mathcal{I}_1)$ and $(E,\mathcal{I}_2)$ be matroids with rank functions $\rho_1$ and $\rho_2$, respectively. Then, for the matroid intersection $\mathcal{I}=\mathcal{I}_1\cap\mathcal{I}_2$, we can write

$$P(\mathcal{I})=\left\{x\in[0,1]^E\ \middle|\ \sum_{e\in U} x_e\le\rho_i(U)\ \ (\forall U\subseteq E,\ i=1,2)\right\}$$

and hence the separation problem for $P(\mathcal{I})$ is solvable in strongly polynomial time by Cunningham’s algorithm. Thus, we can solve the linear robust optimization problem (1) subject to a matroid intersection constraint. We remark that matroid intersection includes bipartite matchings and arborescences in directed graphs, and hence we can also solve the robust maximum weight bipartite matching problem and the robust maximum weight arborescence problem.

##### Shortest s–t path

We explain that our scheme also works for the set of $s$–$t$ paths, although it does not form an independence system. We are given a directed graph $G=(V,E)$, a source $s$, a destination $t$, and lengths $\ell_k(e)\ge 0$ for $k\in[n]$ and $e\in E$. Let $\mathcal{I}$ be the set of $s$–$t$ paths, and let $f_k(X)=\sum_{e\in X}\ell_k(e)$ for $X\in\mathcal{I}$. Then, our task is to find a probability distribution over $s$–$t$ paths that minimizes $\max_{k\in[n]}\sum_{X\in\mathcal{I}} p_X\cdot f_k(X)$. We mention that the deterministic version of this problem is NP-hard even in restricted cases.

Since the longest path problem is NP-hard, we cannot expect an efficient separation algorithm for $P(\mathcal{I})$. However, if we extend the path polytope to its dominant, it becomes tractable. The dominant of $P(\mathcal{I})$ is defined as the set of vectors $x$ with $x\ge y$ for some $y\in P(\mathcal{I})$, and it is described by the cut inequalities

$$P^{\uparrow}(\mathcal{I})=\left\{x\in\mathbb{R}^E_+\ \middle|\ \sum_{e\in\delta^+(U)} x_e\ge 1\ \ (\forall U\subseteq V\ \text{with}\ s\in U,\ t\notin U)\right\},$$

where $\delta^+(U)$ denotes the set of arcs leaving $U$. The separation problem for this polyhedron can be solved in polynomial time by solving a minimum $s$–$t$ cut problem, and hence we can obtain

$$x^*\in\arg\min_{x\in P^{\uparrow}(\mathcal{I})\cap[0,1]^E}\ \max_{k\in[n]}\sum_{e\in E}\ell_k(e)\, x_e.$$

Moreover, since $\ell_k(e)\ge 0$ for all $k\in[n]$ and $e\in E$, the minimum over the dominant equals the minimum over $P(\mathcal{I})$, and hence we can obtain an optimal solution of the robust shortest path problem from $x^*$.
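Membership in the dominant reduces to one max-flow computation: $x\ge 0$ satisfies all cut inequalities iff the maximum $s$–$t$ flow under arc capacities $x$ is at least one. A minimal Edmonds–Karp sketch on an invented instance:

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp on residual capacities; mutates cap."""
    flow = 0.0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:            # BFS for an augmenting path
            u = q.popleft()
            for v, c in list(cap[u].items()):
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:                       # push flow, update residuals
            cap[u][v] -= aug
            cap[v][u] += aug
        flow += aug

def capacities(x):
    cap = defaultdict(lambda: defaultdict(float))
    for (u, v), xe in x.items():
        cap[u][v] = xe
    return cap

# Invented point: two s-t paths carrying 1/2 each - inside the dominant,
# since every s-t cut has capacity at least one.
x = {("s", "a"): 0.5, ("s", "b"): 0.5, ("a", "t"): 0.5, ("b", "t"): 0.5}
assert abs(max_flow(capacities(x), "s", "t") - 1.0) < 1e-9
```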

### 3.3 Relaxation of the polytope

We present an approximation scheme for the case when the separation problem for $P(\mathcal{I})$ is hard to solve. Recall that $f_k(X)=\sum_{e\in X} w_{ke}$, where $w_{ke}\ge 0$ for $k\in[n]$ and $e\in E$.

We modify the basic scheme as follows. First, instead of solving the separation problem for $P(\mathcal{I})$, we solve the one for a relaxation $\hat P(\mathcal{I})$ of $P(\mathcal{I})$. For a polytope $P$ and a positive number $\alpha\le 1$, we denote $\alpha P=\{\alpha x: x\in P\}$. We call a polytope $\hat P(\mathcal{I})$ an $\alpha$-relaxation of $P(\mathcal{I})$ if it holds that

$$\alpha\hat P(\mathcal{I})\ \subseteq\ P(\mathcal{I})\ \subseteq\ \hat P(\mathcal{I}).$$

Then we solve

$$\max_{x\in\hat P(\mathcal{I})}\ \min_{k\in[n]}\ \sum_{e\in E} w_{ke}\, x_e \tag{5}$$

instead of LP (4), and obtain an optimal solution $\hat x$.

Next, we retrieve a randomized strategy from $\hat x$. Here, if $\hat x$ is an optimal solution of (5), then $\alpha\hat x$ is an $\alpha$-approximate solution of LP (4), because

$$\max_{x\in P(\mathcal{I})}\min_{k\in[n]}\sum_{e\in E} w_{ke}\, x_e\ \le\ \max_{x\in\hat P(\mathcal{I})}\min_{k\in[n]}\sum_{e\in E} w_{ke}\, x_e=\min_{k\in[n]}\sum_{e\in E} w_{ke}\,\hat x_e=\frac{1}{\alpha}\cdot\min_{k\in[n]}\sum_{e\in E} w_{ke}(\alpha\hat x_e).$$

As $\alpha\hat x\in P(\mathcal{I})$, there exists $p\in\Delta(\mathcal{I})$ such that $\alpha\hat x=\sum_{X\in\mathcal{I}} p_X\,\chi(X)$. However, the retrieval of such a probability distribution may be computationally hard, because the separation problem for $P(\mathcal{I})$ is hard to solve. Hence, we relax the problem and compute $p^*\in\Delta(\mathcal{I})$ such that $\sum_{X\in\mathcal{I}:e\in X} p^*_X\ge\beta\hat x_e$ for all $e\in E$, where $\beta\le\alpha$. Then, $p^*$ is a $\beta$-approximate solution of (1), because

$$\max_{p\in\Delta(\mathcal{I})}\min_{k\in[n]}\sum_{X\in\mathcal{I}} p_X\cdot f_k(X)\ \le\ \min_{k\in[n]}\sum_{e\in E} w_{ke}\,\hat x_e=\frac{1}{\beta}\,\min_{k\in[n]}\sum_{e\in E} w_{ke}(\beta\hat x_e)\ \le\ \frac{1}{\beta}\,\min_{k\in[n]}\sum_{e\in E}\,\sum_{X\in\mathcal{I}:e\in X} p^*_X\, w_{ke}=\frac{1}{\beta}\,\min_{k\in[n]}\sum_{X\in\mathcal{I}} p^*_X\cdot f_k(X).$$

Thus the basic scheme is modified as the following approximation scheme:

1. compute an optimal solution $\hat x$ of LP (5),

2. compute $p^*\in\Delta(\mathcal{I})$ such that $\sum_{X\in\mathcal{I}:e\in X} p^*_X\ge\beta\hat x_e$ for each $e\in E$.

###### Theorem 3.5.

Suppose that $f_1,\dots,f_n$ are linear. If there exists a polynomial-time algorithm to solve the separation problem for an $\alpha$-relaxation $\hat P(\mathcal{I})$ of $P(\mathcal{I})$, then an $\alpha$-approximation of the optimal value of (1) can be computed in polynomial time. In addition, if there exists a polynomial-time algorithm that finds $p^*\in\Delta(\mathcal{I})$ with $\sum_{X\in\mathcal{I}:e\in X} p^*_X\ge\beta\hat x_e$ for any $\hat x\in\hat P(\mathcal{I})$, then a $\beta$-approximate solution of (1) can be found in polynomial time.

We remark that we can combine the result in Section 3.4 with this theorem.

In the subsequent sections, we apply Theorem 3.5 to two important cases, where $\mathcal{I}$ is defined from a knapsack constraint or a $\mu$-matroid intersection. For this purpose, we develop appropriate relaxations of $P(\mathcal{I})$ and retrieval procedures.

#### 3.3.1 Relaxation of a knapsack polytope

Let $E$ be a set of items with a size $s(e)$ for each $e\in E$. Without loss of generality, we assume that the knapsack capacity is one, and $s(e)\in(0,1]$ for all $e\in E$. Let $\mathcal{I}$ be the family of knapsack solutions, i.e., $\mathcal{I}=\{X\subseteq E:\sum_{e\in X} s(e)\le 1\}$.

It is known that $P(\mathcal{I})$ admits a polynomial size relaxation scheme (PSRS), i.e., for any fixed $\epsilon>0$, there exists a $(1-\epsilon)$-relaxation of $P(\mathcal{I})$ given through a linear program of polynomial size.

###### Theorem 3.6 (Bienstock).

Let $\epsilon>0$. There exist a polytope $P_\epsilon(\mathcal{I})$ and its extended formulation with $|E|^{O(1/\epsilon)}$ variables and constraints such that

$$(1-\epsilon)P_\epsilon(\mathcal{I})\ \subseteq\ P(\mathcal{I})\ \subseteq\ P_\epsilon(\mathcal{I}).$$

Thus, an optimal solution $\hat x$ of (5) with $\hat P(\mathcal{I})=P_\epsilon(\mathcal{I})$ can be computed in polynomial time. The remaining task is to compute $p^*\in\Delta(\mathcal{I})$ such that $\sum_{X\in\mathcal{I}:e\in X} p^*_X\ge\frac{\kappa}{\kappa+1}\hat x_e$ for each $e\in E$, where $\kappa=\lceil 1/\epsilon\rceil$. We give an algorithm for this task.

###### Lemma 3.7.

There exists a polynomial-time algorithm that computes $p^*\in\Delta(\mathcal{I})$ such that $\sum_{X\in\mathcal{I}:e\in X} p^*_X\ge\frac{\kappa}{\kappa+1}\hat x_e$ for each $e\in E$.

###### Proof.

To obtain such a probability distribution, we review Bienstock’s relaxation scheme. Let $\kappa=\lceil 1/\epsilon\rceil$ and let $\mathcal{S}_i=\{S\in\mathcal{I}:|S|=i\}$ for $i\in[\kappa]$. Then, the constraints of $P_\epsilon(\mathcal{I})$ are given as follows:

$$\begin{align}
& x_e=\sum_{i=1}^{\kappa}\sum_{S\in\mathcal{S}_i} y^S_e && (\forall e\in E), \tag{6}\\
& y^S_e=y^S_0 && (\forall S\in\textstyle\bigcup_{i=1}^{\kappa}\mathcal{S}_i,\ \forall e\in S), \tag{7}\\
& y^S_e=0 && (\forall S\in\textstyle\bigcup_{i=1}^{\kappa-1}\mathcal{S}_i,\ \forall e\in E\setminus S), \tag{8}\\
& y^S_e\le y^S_0 && (\forall S\in\textstyle\bigcup_{i=1}^{\kappa}\mathcal{S}_i,\ \forall e\in E\setminus S), \tag{9}\\
& \sum_{e\in E} s(e)\, y^S_e\le y^S_0 && (\forall S\in\mathcal{S}_\kappa), \tag{10}\\
& y^S_e=0 && (\forall S\in\mathcal{S}_\kappa,\ \forall e\in E\setminus S: s(e)>\min_{e'\in S} s(e')), \tag{11}\\
& y^S_e\ge 0 && (\forall S\in\textstyle\bigcup_{i=1}^{\kappa}\mathcal{S}_i,\ \forall e\in E\cup\{0\}), \tag{12}\\
& \sum_{i=1}^{\kappa}\sum_{S\in\mathcal{S}_i} y^S_0=1. \tag{13}
\end{align}$$

Intuitively, $y^S$ corresponds to the knapsack solution $S$ if $|S|<\kappa$, and corresponds to a (fractional) knapsack solution whose $\kappa$ largest items are exactly $S$ if $|S|=\kappa$.

Let $\hat x$ be an optimal solution of (5) with $\hat P(\mathcal{I})=P_\epsilon(\mathcal{I})$, and let $\hat y$ satisfy (6)–(13). For each $S\in\mathcal{S}_\kappa$, we define

$$Q_S=\left\{y\in\mathbb{R}^E\ \middle|\ \begin{array}{l}
\sum_{e\in E} s(e)\, y_e\le 1,\\
y_e=1\ (\forall e\in S),\\
y_e=0\ (\forall e\in E\setminus S: s(e)>\min_{e'\in S} s(e')),\\
0\le y_e\le 1\ (\forall e\in E\setminus S: s(e)\le\min_{e'\in S} s(e'))
\end{array}\right\}.$$

Let us denote $\hat y^S=(\hat y^S_e)_{e\in E}/\hat y^S_0$ for each $S\in\mathcal{S}_\kappa$ with $\hat y^S_0>0$. Then, by (7) and (9)–(12), we have $\hat y^S\in Q_S$. Also, by Theorem 3.3, we can compute a convex combination representation of $\hat y^S$ with at most $|E|+1$ vertices of $Q_S$ for each $S$ with $\hat y^S_0>0$. Suppose that $\hat y^S=\sum_{i\in[t_S]}\lambda_i\tilde y_i$, where $t_S\le|E|+1$, each $\tilde y_i$ is a vertex of $Q_S$, $\lambda_i>0$ ($i\in[t_S]$), and $\sum_{i\in[t_S]}\lambda_i=1$.

Let $\tilde y$ be a vertex of $Q_S$ that is not integral. Then, there exists exactly one item $e^*\in E\setminus S$ such that $0<\tilde y_{e^*}<1$. Let $T=S\cup\{e^*\}$. Then, it holds that

$$\frac{\kappa}{\kappa+1}\,\tilde y\ \le\ \sum_{e\in S\cup\{e^*\}}\frac{1}{\kappa+1}\,\chi(T\setminus\{e\}),$$

where each $T\setminus\{e\}$ is a knapsack solution: for $e=e^*$ we have $T\setminus\{e^*\}=S\in\mathcal{I}$, and for $e\in S$ we have $s(e^*)\le\min_{e'\in S} s(e')\le s(e)$, so that $\sum_{e'\in T\setminus\{e\}} s(e')\le\sum_{e'\in S} s(e')\le 1$.

Now, we are ready to construct the probability distribution $p^*$. Let us define

$$p^*=\sum_{i=1}^{\kappa-1}\sum_{S\in\mathcal{S}_i}\hat y^S_0\,\chi(S)\ +\ \sum_{S\in\mathcal{S}_\kappa:\,\hat y^S_0>0}\hat y^S_0\sum_{i\in[t_S]:\,\tilde y\dots}$$