
# Derandomization for k-submodular maximization

Hiroki Oshima Department of Mathematical Informatics,
Graduate School of Information Science and Technology, The University of Tokyo
July 20, 2019
###### Abstract

Submodularity is one of the most important properties in combinatorial optimization, and k-submodularity is a generalization of submodularity. Maximization of a k-submodular function is NP-hard, and approximation algorithms have been studied. For monotone k-submodular functions, [Iwata, Tanigawa, and Yoshida 2016] gave a randomized k/(2k−1)-approximation algorithm. In this paper, we give a deterministic algorithm by derandomizing that algorithm. Our algorithm achieves a k/(2k−1)-approximation and runs in polynomial time.

## 1 Introduction

A set function f: 2^V → ℝ is submodular if, for any X, Y ⊆ V, f(X) + f(Y) ≥ f(X ∩ Y) + f(X ∪ Y). Submodularity is one of the most important properties in combinatorial optimization. The rank functions of matroids and cut capacity functions of networks are submodular. Submodular functions can be seen as a discrete version of convex functions.
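As a quick sanity check of this inequality, the following sketch brute-forces it for the cut function of a small graph; the graph itself is an arbitrary illustration:

```python
from itertools import chain, combinations

V = [0, 1, 2, 3]
# Edges of a small undirected graph; cut functions of graphs are submodular.
E = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]

def cut(X):
    """Number of edges with exactly one endpoint in X."""
    S = set(X)
    return sum((u in S) != (v in S) for u, v in E)

def subsets(ground):
    return chain.from_iterable(combinations(ground, r) for r in range(len(ground) + 1))

# Check f(X) + f(Y) >= f(X ∩ Y) + f(X ∪ Y) for all pairs X, Y ⊆ V.
ok = all(
    cut(X) + cut(Y) >= cut(set(X) & set(Y)) + cut(set(X) | set(Y))
    for X in subsets(V) for Y in subsets(V)
)
print(ok)  # True
```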

For submodular function minimization, [4] showed the first polynomial-time algorithm. Combinatorial strongly polynomial algorithms were shown by [5] and [8]. On the other hand, submodular function maximization is NP-hard and we can only use approximation algorithms. Let an input function for maximization be f, a maximizer of f be o, and an output of an algorithm be s. The approximation ratio of the algorithm is defined as f(s)/f(o) for deterministic algorithms and E[f(s)]/f(o) for randomized algorithms. A randomized version of the Double Greedy algorithm in [2] achieves a 1/2-approximation. [3] showed that a (1/2 + ε)-approximation requires exponentially many value oracle queries. This implies that the Double Greedy algorithm is one of the best algorithms in terms of the approximation ratio. [1] showed a derandomized version of the randomized Double Greedy algorithm, and their algorithm achieves a 1/2-approximation.

k-submodularity is an extension of submodularity. A k-submodular function is defined as below.

###### Definition 1

Let k be a positive integer and (k+1)^V = {(X_1, ..., X_k) | X_i ⊆ V (i = 1, ..., k), X_i ∩ X_j = ∅ (i ≠ j)}. A function f: (k+1)^V → ℝ is called k-submodular if we have

 f(x)+f(y)≥f(x⊓y)+f(x⊔y)

for any x = (X_1, ..., X_k) and y = (Y_1, ..., Y_k) in (k+1)^V. Note that

 x⊓y = (X1∩Y1,...,Xk∩Yk)  and x⊔y = (X1∪Y1∖⋃i≠1(Xi∪Yi),...,Xk∪Yk∖⋃i≠k(Xi∪Yi)).

It is a submodular function if k = 1. It is called a bisubmodular function if k = 2.

Maximization for k-submodular functions is also NP-hard, and approximation algorithms have been studied. An input of the problem is a nonnegative k-submodular function. Note that, for any k-submodular function f and any c ∈ ℝ, the function f + c is k-submodular. An output of the problem is a vector s ∈ (k+1)^V. Let an input k-submodular function be f, a maximizer of f be o, and an output of an algorithm be s. Then we define the approximation ratio of the algorithm as f(s)/f(o) for deterministic algorithms, and E[f(s)]/f(o) for randomized algorithms. For bisubmodular functions, [6] and [9] showed that the algorithm for submodular functions in [2] can be extended. [9] analyzed an extension for k-submodular functions. They showed a randomized max{1/3, 1/(1+a)}-approximation algorithm with a = max{1, √((k−1)/4)} and a deterministic 1/3-approximation algorithm. Now we have a 1/2-approximation algorithm shown in [7]. In particular, for monotone k-submodular functions, [7] gave a k/(2k−1)-approximation algorithm. They also showed that any ((k+1)/(2k) + ε)-approximation algorithm requires exponentially many value oracle queries.

In this paper, we give a deterministic approximation algorithm for monotone k-submodular maximization. It achieves a k/(2k−1)-approximation and runs in polynomial time. Our algorithm is a derandomized version of the algorithm for monotone functions in [7]. We also note that the derandomization scheme is from [1], where it was used for the Double Greedy algorithm.

## 2 Preliminary

Define a partial order ⪯ on (k+1)^V for x = (X_1, ..., X_k) and y = (Y_1, ..., Y_k) as follows:

 x⪯ydef⟺Xi⊆Yi(i=1,...,k).

Also, for x = (X_1, ..., X_k) ∈ (k+1)^V, e ∉ ⋃_{l=1}^k X_l, and i ∈ {1, ..., k}, define

 Δe,if(x)=f(X1,...,Xi−1,Xi∪{e},Xi+1,...,Xk)−f(X1,...,Xk).

A monotone k-submodular function is k-submodular and satisfies f(x) ≤ f(y) for any x and y in (k+1)^V with x ⪯ y.

The property of k-submodularity can be written in another form.

###### Theorem 2.1

([9] THEOREM 7) A function f: (k+1)^V → ℝ is k-submodular if and only if f is orthant submodular and pairwise monotone.

Note that orthant submodularity means that f satisfies

 Δ_{e,i}f(x) ≥ Δ_{e,i}f(y)  (x, y ∈ (k+1)^V, x ⪯ y, e ∉ ⋃_{l=1}^k Y_l, i ∈ {1, ..., k}),

and pairwise monotonicity means that f satisfies

 Δ_{e,i}f(x) + Δ_{e,j}f(x) ≥ 0  (x ∈ (k+1)^V, e ∉ ⋃_{l=1}^k X_l, i, j ∈ {1, ..., k} (i ≠ j)).

To analyze k-submodular functions, it is often convenient to identify (k+1)^V with {0, 1, ..., k}^V. A vector x ∈ {0, 1, ..., k}^V is associated with (X_1, ..., X_k) ∈ (k+1)^V by X_i = {e ∈ V | x(e) = i}.
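To make this identification concrete, the sketch below encodes a solution as a vector in {0, 1, ..., k}^V (with 0 meaning "unassigned"), implements x ⊓ y and x ⊔ y from Definition 1 in this encoding, and brute-forces the k-submodular inequality for a toy function. The particular function, a concave nondecreasing function of the number of assigned elements, is an illustrative example, not one from the paper:

```python
from itertools import product

k, n = 2, 3  # k positions, ground set V = {0, ..., n-1}

def meet(x, y):
    # e stays in position i only if both x and y place it there (X_i ∩ Y_i).
    return tuple(xi if xi == yi else 0 for xi, yi in zip(x, y))

def join(x, y):
    # e goes to position i if x or y places it there and no other position
    # claims it (X_i ∪ Y_i minus elements appearing in other positions).
    out = []
    for xi, yi in zip(x, y):
        if xi == yi:
            out.append(xi)
        elif xi == 0 or yi == 0:
            out.append(xi or yi)
        else:          # conflicting nonzero positions: e is dropped entirely
            out.append(0)
    return tuple(out)

def f(x):
    # A concave nondecreasing function of the number of assigned elements;
    # such functions are monotone k-submodular.
    t = sum(1 for xi in x if xi != 0)
    return [0, 2.0, 3.0, 3.5][t]

ok = all(
    f(x) + f(y) >= f(meet(x, y)) + f(join(x, y))
    for x in product(range(k + 1), repeat=n)
    for y in product(range(k + 1), repeat=n)
)
print(ok)  # True
```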

## 3 Existing randomized algorithms

### 3.1 Algorithm framework

In this section, we see the framework used to maximize k-submodular functions (Algorithm 1 of [7]). [6] and [9] used it with specific distributions.

Algorithm 1 is not used only for monotone functions. However, in this paper, we use it only for monotone functions.
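In outline, the framework visits the elements of V one by one and assigns each element a position drawn from a distribution over its marginal gains. The sketch below is a minimal rendering of this outline; the function names and the argmax choice used in the example are illustrative placeholders, not the specific distributions of [6], [7], or [9]:

```python
import random

def greedy_framework(V, k, f, choose_probs):
    """Framework of Algorithm 1: assign every element a position in {1, ..., k}.

    `f` maps a dict {element: position} to a real value, and `choose_probs`
    maps the marginal-gain vector (y_1, ..., y_k) to probabilities summing to 1.
    """
    s = {}
    for e in V:
        base = f(s)
        y = []
        for i in range(1, k + 1):
            s[e] = i
            y.append(f(s) - base)  # y_i = marginal gain of putting e in position i
        del s[e]
        p = choose_probs(y)
        s[e] = random.choices(range(1, k + 1), weights=p)[0]
    return s

# Toy example: a modular objective; put all probability on the best position.
weights = {("a", 1): 3.0, ("a", 2): 1.0, ("b", 1): 0.5, ("b", 2): 2.0}
f = lambda s: sum(weights[(e, i)] for e, i in s.items())
argmax_probs = lambda y: [1.0 if yi == max(y) else 0.0 for yi in y]

print(greedy_framework(["a", "b"], 2, f, argmax_probs))  # {'a': 1, 'b': 2}
```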

Now we define some variables in order to analyze Algorithm 1. Let o be an optimal solution, and write s^{(j)} for the solution s at the j-th iteration. Let other variables be as follows:

 o^{(j)} = (o ⊔ s^{(j)}) ⊔ s^{(j)},  t^{(j−1)}(e) = { o^{(j)}(e) (e ≠ e^{(j)}); 0 (e = e^{(j)}) },
 y^{(j)}_i = Δ_{e^{(j)},i}f(s^{(j−1)}),  a^{(j)}_i = Δ_{e^{(j)},i}f(t^{(j−1)}).

Algorithm 1 satisfies the following lemma.

###### Lemma 1

([7] LEMMA 2.1.)
Let c ∈ ℝ_{≥0}. Conditioning on s^{(j−1)}, suppose that

 ∑_{i=1}^k (a^{(j)}_{i*} − a^{(j)}_i) p^{(j)}_i ≤ c ∑_{i=1}^k y^{(j)}_i p^{(j)}_i

holds for each j with 1 ≤ j ≤ n, where i* = o(e^{(j)}). Then E[f(s)] ≥ f(o)/(1 + c).

### 3.2 A randomized algorithm for monotone functions

In [7], a randomized k/(2k−1)-approximation algorithm for monotone k-submodular functions (Algorithm 2) is shown.

Algorithm 2 runs in polynomial time. The approximation ratio of Algorithm 2 satisfies the theorem below.

###### Theorem 3.1

([7] THEOREM 2.2.) Let o be a maximizer of a monotone k-submodular function f and let s be the output of Algorithm 2. Then E[f(s)] ≥ (k/(2k−1)) f(o).

In the proof of this theorem (see [7]), the inequality of Lemma 1 is proved with c = 1 − 1/k. We get a^{(j)}_i ≥ 0 from monotonicity, and a^{(j)}_{i*} ≤ y^{(j)}_{i*} from orthant submodularity. Hence, the inequality

 ∑_{i≠i*} y^{(j)}_{i*} p^{(j)}_i ≤ (1 − 1/k) ∑_{i=1}^k y^{(j)}_i p^{(j)}_i (1)

is used. The inequality of Lemma 1 is satisfied when the inequality (1) is valid.

## 4 Deterministic algorithm

In this section, we give a polynomial-time deterministic algorithm for maximizing monotone k-submodular functions. Our algorithm is Algorithm 3, a derandomized version of Algorithm 2. We note that the derandomization scheme of this algorithm is from [1].

In the algorithm, we construct a distribution D_j over solutions whose support has polynomial size. Then the algorithm outputs the best solution in the support of D_n. We can see that the right-hand side of (2) in Algorithm 3 is the expected value of the left-hand side of (1) for s ∼ D_{j−1} with p^{(j)}_i = p_{i,s}; this is because ∑_{i≠i*} y_{i*}(s) p_{i,s} = (1 − p_{i*,s}) y_{i*}(s). Also, the left-hand side of (2) is the expected value of the right-hand side of (1) with p^{(j)}_i = p_{i,s}. From (3) and (4), the D_j defined in (5) is indeed a distribution.
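The update (5) itself is simple bookkeeping: each solution s in the support of D_{j−1}, carrying weight q_s, spreads its weight over the children obtained by assigning e^{(j)}, with the child for position i receiving q_s · p_{i,s}. A minimal sketch, where the tuple encoding and all names are assumptions for illustration:

```python
def update_distribution(D_prev, e, p, k):
    """Build D_j from D_{j-1} = {solution: weight}; solutions are tuples over
    {0, ..., k}, and p[(i, s)] = p_{i,s}. The child of s with element e put
    into position i receives weight q_s * p_{i,s}."""
    D_next = {}
    for s, q in D_prev.items():
        for i in range(1, k + 1):
            pi = p.get((i, s), 0.0)
            if pi == 0.0:
                continue  # extreme point solutions keep most p_{i,s} at zero
            child = s[:e] + (i,) + s[e + 1:]
            D_next[child] = D_next.get(child, 0.0) + q * pi
    return D_next

D0 = {(0, 0): 1.0}                           # D_0: the all-zero solution
p = {(1, (0, 0)): 0.75, (2, (0, 0)): 0.25}   # hypothetical LP solution; sums to 1 by (3)
D1 = update_distribution(D0, 0, p, 2)
print(D1)  # {(1, 0): 0.75, (2, 0): 0.25}
```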

Algorithm 3 achieves the same approximation ratio as Algorithm 2.

###### Theorem 4.1

Let o be a maximizer of a monotone nonnegative k-submodular function f and let s be the output of Algorithm 3. Then f(s) ≥ (k/(2k−1)) f(o).

###### Proof

We consider the j-th iteration. From (5), we get

 E_{s∼D_{j−1}}[∑_{i=1}^k p_{i,s} y_i(s)] = E_{s∼D_{j−1}}[∑_{i=1}^k p_{i,s} (f(s_{e^{(j)},i}) − f(s))] (6)
 = E_{s∼D_{j−1}}[∑_{i=1}^k p_{i,s} f(s_{e^{(j)},i}) − f(s)]
 = E_{s'∼D_j}[f(s')] − E_{s∼D_{j−1}}[f(s)].

Now, we consider f(o[s]). Define the variables as follows:

 r(e) = { o[s](e) (e ≠ e^{(j)}); 0 (e = e^{(j)}) },  a_i(s) = Δ_{e^{(j)},i}f(r).

Then we have

 f(o[s]) − f(o[s_{e^{(j)},i}]) = a_{i*}(s) − a_i(s)  (i* = o(e^{(j)})). (7)

From monotonicity and orthant submodularity of , we have

 ai∗(s)−ai(s)≤yi∗(s). (8)

From (7) and (8), we get

 E_{s∼D_{j−1}}[f(o[s])] − E_{s'∼D_j}[f(o[s'])] = E_{s∼D_{j−1}}[f(o[s]) − ∑_{i=1}^k p_{i,s} f(o[s_{e^{(j)},i}])] (9)
 = E_{s∼D_{j−1}}[∑_{i=1}^k p_{i,s} (f(o[s]) − f(o[s_{e^{(j)},i}]))]
 = E_{s∼D_{j−1}}[∑_{i≠i*} p_{i,s} (a_{i*}(s) − a_i(s))]
 ≤ E_{s∼D_{j−1}}[∑_{i≠i*} p_{i,s} y_{i*}(s)]
 = E_{s∼D_{j−1}}[(1 − p_{i*,s}) y_{i*}(s)].

The probabilities p_{i,s} satisfy (2) for all i* ∈ {1, ..., k}. Hence we obtain

 (1 − 1/k)(E_{s'∼D_j}[f(s')] − E_{s∼D_{j−1}}[f(s)]) ≥ E_{s∼D_{j−1}}[f(o[s])] − E_{s'∼D_j}[f(o[s'])] (10)

from (6) and (9). By summing (10) over j = 1, ..., n, we get

 (1 − 1/k)(E_{s'∼D_n}[f(s')] − E_{s∼D_0}[f(s)]) ≥ E_{s∼D_0}[f(o[s])] − E_{s'∼D_n}[f(o[s'])]. (11)

Note that f(s) = f(0) and f(o[s]) = f(o) for s ∼ D_0, and o[s'] = s' for s' ∼ D_n. Now we have

 f(o) ≤ (2 − 1/k) E_{s'∼D_n}[f(s')] − (1 − 1/k) f(0) ≤ (2 − 1/k) E_{s'∼D_n}[f(s')] ≤ (2 − 1/k) f(s),

where the last inequality holds because the output s is the best solution in the support of D_n. Hence f(s) ≥ (k/(2k−1)) f(o).

The algorithm performs a polynomial number of value oracle queries.

###### Theorem 4.2

Algorithm 3 returns a solution after O(k²n²) value oracle queries.

###### Proof

Algorithm 3 uses the value oracle to calculate the values y_i(s). At the j-th iteration, the number of such values is k|D_{j−1}|, where |D_{j−1}| denotes the size of the support of D_{j−1}. From (5), |D_j| equals the number of nonzero p_{i,s}. Then we have to bound |D_{j−1}| at the j-th iteration.

By the definition, (p_{i,s}) is an extreme point solution of (2), (3), and (4). Note that we can get a feasible solution by setting p_{i,s} to be the distribution of Algorithm 2 for each s. We can also see that the feasible region of (2), (3), and (4) is bounded. Then some extreme point solution exists.

Let m = |D_{j−1}|. Since there are km variables p_{i,s}, at least km constraints are tight at any extreme point solution, and (3) consists of m equalities. (2) has k inequalities and (4) has km inequalities. Then, at least km − m − k inequalities of (4) are tight. Hence, the number of nonzero p_{i,s} is at most m + k.

Now we have |D_j| ≤ |D_{j−1}| + k. We can also see |D_0| = 1, so |D_{j−1}| ≤ (j − 1)k + 1 ≤ jk + 1. Then the number of value oracle queries is

 ∑_{j=1}^n k|D_{j−1}| ≤ ∑_{j=1}^n k(jk + 1) = O(k²n²).
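As a sanity check on this bound, ∑_{j=1}^n k(jk + 1) = k²n(n+1)/2 + kn = O(k²n²); a short computation confirming the closed form (the values of n and k are arbitrary):

```python
def query_bound(n, k):
    # Upper bound on value oracle queries: sum over iterations j = 1, ..., n
    # of k * (jk + 1), using |D_{j-1}| <= jk + 1.
    return sum(k * (j * k + 1) for j in range(1, n + 1))

n, k = 10, 3
closed_form = k * k * n * (n + 1) // 2 + k * n
print(query_bound(n, k) == closed_form)  # True
```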

In our algorithm, we have to search for an extreme point solution. We can do this by solving an LP with some objective function. If we use a polynomial-time LP algorithm, our algorithm is polynomial not only in the number of queries but also in the number of arithmetic operations. The simplex method is not proved to be a polynomial-time method; however, it is practical. Our algorithm needs only an extreme point solution, so obtaining a basic feasible solution is enough. Hence we can use the first phase of the two-phase simplex method to find an extreme point solution.

## 5 Conclusion

We showed a derandomized algorithm for monotone k-submodular maximization. It achieves a k/(2k−1)-approximation and runs in polynomial time.

One of the open problems is a faster method for finding an extreme point solution of the linear formulation. For submodular functions, [1] showed that greedy methods are effective, because their formulation has the form of a fractional knapsack problem. Our formulation is similar to theirs, and it can be seen as an LP relaxation of a multidimensional knapsack problem. However, no methods faster than general LP algorithms are known for it. The number of constraints in our formulation depends on k and the number of iterations. It is therefore difficult to find an extreme point faster.

Constructing a deterministic algorithm for nonmonotone functions is also an important open problem. For nonmonotone functions, we have pairwise monotonicity instead of monotonicity. In such a situation, y_i(s) can be negative for some i. If we try to use the same derandomizing method, the number of constraints in the linear formulation and the size of the support of D_j can become exponential, so the algorithm cannot finish in polynomial time.

## References

• [1] N. Buchbinder and M. Feldman. Deterministic algorithms for submodular maximization problems. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, pages 392–403. SIAM, 2016.
• [2] N. Buchbinder, M. Feldman, J. Seffi, and R. Schwartz. A tight linear time (1/2)-approximation for unconstrained submodular maximization. SIAM Journal on Computing, 44(5):1384–1402, 2015.
• [3] U. Feige, V. S. Mirrokni, and J. Vondrak. Maximizing non-monotone submodular functions. SIAM Journal on Computing, 40(4):1133–1153, 2011.
• [4] M. Grötschel, L. Lovász, and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica, 1(2):169–197, 1981.
• [5] S. Iwata, L. Fleischer, and S. Fujishige. A combinatorial strongly polynomial algorithm for minimizing submodular functions. Journal of the ACM, 48(4):761–777, 2001.
• [6] S. Iwata, S. Tanigawa, and Y. Yoshida. Bisubmodular function maximization and extensions. Technical report, Technical Report METR 2013-16, The University of Tokyo, 2013.
• [7] S. Iwata, S. Tanigawa, and Y. Yoshida. Improved approximation algorithms for k-submodular function maximization. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, pages 404–413. SIAM, 2016.
• [8] A. Schrijver. A combinatorial algorithm minimizing submodular functions in strongly polynomial time. Journal of Combinatorial Theory, Series B, 80(2):346–355, 2000.
• [9] J. Ward and S. Živný. Maximizing k-submodular functions and beyond. ACM Trans. Algorithms, 12(4):47:1–47:26, August 2016.