A Reduction for Optimizing Lattice Submodular Functions with Diminishing Returns
Abstract
A function $f : \{0, 1, \ldots, B\}^E \to \mathbb{R}_+$ is DR-submodular if it satisfies $f(x + \chi_e) - f(x) \ge f(y + \chi_e) - f(y)$ for all $x \le y$ and all $e \in E$. Recently, the problem of maximizing a DR-submodular function subject to a budget constraint $\|x\|_1 \le r$ as well as additional constraints has received significant attention [6, 7, 5, 8].
In this note, we give a generic reduction from the DR-submodular setting to the submodular setting. The running time of the reduction and the size of the resulting submodular instance depend only logarithmically on $B$. Using this reduction, one can translate the results for unconstrained and constrained submodular maximization to the DR-submodular setting for many types of constraints in a unified manner.
1 Introduction
Recently, constrained submodular optimization has attracted a lot of attention as a common abstraction of a variety of tasks in machine learning, ranging from feature selection and exemplar clustering to sensor placement. Motivated by use cases where there is a large budget of identical items, a generalization of submodular optimization to the integer lattice was proposed by [6]. Previously, submodular functions had been generalized to lattices via the lattice submodular property. A function $f$ is lattice submodular if, for all $x, y$,
$$f(x) + f(y) \ge f(x \vee y) + f(x \wedge y),$$
where $\vee$ and $\wedge$ denote the coordinate-wise maximum and minimum.
In the generalization due to [6, 7], a function $f$ is DR-submodular if it satisfies
$$f(x + \chi_e) - f(x) \ge f(y + \chi_e) - f(y)$$
for all $x \le y$ and all $e \in E$ (the diminishing return property), where $\chi_e$ is the vector in $\mathbb{Z}^E$ that has a $1$ in the coordinate corresponding to $e$ and $0$ in all other coordinates.
It can be shown that any DR-submodular function is also lattice submodular (but the reverse direction is not necessarily true). As with submodular functions, the applications can be formulated as maximizing a DR-submodular function subject to constraints, such as a budget constraint $\|x\|_1 \le r$. While it is straightforward to reduce the optimization of a DR-submodular function with a budget constraint to the optimization of a submodular function with $nB$ items (by making $B$ copies of every item), the goal of [6] is to find algorithms for this setting with running time logarithmic in $B$, rather than the running time polynomial in $B$ that follows from the straightforward reduction. Following [6], there have been several works extending problems involving submodular functions to the DR-submodular setting [7, 5, 8].
In this note, we give a generic reduction from the DR-submodular setting to the submodular setting. The running time of the reduction and the size of the resulting submodular instance depend only logarithmically on $B$. Using this reduction, one can translate the results for unconstrained and constrained submodular maximization to the DR-submodular setting for many types of constraints in a unified manner.
2 The Reduction
Lemma 1.
For any $B \in \mathbb{Z}_+$, there is a decomposition $B = b_1 + b_2 + \cdots + b_k$ with $k = O(\log B)$ so that, for any $b \in \{0, 1, \ldots, B\}$, there is a way to express $b$ as the sum of a subset of the multiset $\{b_1, \ldots, b_k\}$.
Proof.
Let $a_m a_{m-1} \cdots a_0$ be the binary representation of $B$, with $a_m = 1$ (so $m = \lfloor \log_2 B \rfloor$). Let $b_j = 2^{j-1}$ for $1 \le j \le m$, and let $b_{m+1} = B - (2^m - 1)$. It is clear that $k = m + 1 = O(\log B)$ and $\sum_{j=1}^{k} b_j = (2^m - 1) + b_{m+1} = B$.
Consider an arbitrary number $b \le B$. If $b \le 2^m - 1$, then $b$ can be written as a sum of a subset of $\{b_1, \ldots, b_m\}$ (just $b$'s binary representation). Otherwise $b \ge 2^m$, and since $b_{m+1} \le 2^m$ we have $0 \le b - b_{m+1} \le B - b_{m+1} = 2^m - 1$, so we can take $b_{m+1}$ together with the binary representation of $b - b_{m+1}$. ∎
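The decomposition and the two-case representation argument can be sketched as follows (an illustrative sketch only; the helper names `decompose` and `represent` are ours, not from the text):

```python
def decompose(B):
    """Lemma 1 decomposition: pieces 1, 2, ..., 2^(m-1) plus the remainder
    B - (2^m - 1), where m = floor(log2 B). The pieces sum to B, there are
    O(log B) of them, and their subset sums cover every b in {0, ..., B}."""
    pieces, p = [], 1
    while 2 * p <= B:           # p runs over 1, 2, ..., 2^(m-1)
        pieces.append(p)
        p *= 2
    pieces.append(B - (p - 1))  # remainder piece; here p = 2^m
    return pieces

def represent(b, pieces):
    """Return a sub-multiset of `pieces` summing to b (the proof's two cases)."""
    powers, rem = pieces[:-1], pieces[-1]
    chosen = []
    if b > sum(powers):         # case b >= 2^m: must use the remainder piece
        chosen.append(rem)
        b -= rem
    for p in reversed(powers):  # binary representation of what is left
        if p <= b:
            chosen.append(p)
            b -= p
    assert b == 0
    return chosen
```

For example, `decompose(5)` yields the multiset `[1, 2, 2]`, whose subset sums are exactly $\{0, 1, \ldots, 5\}$.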
Corollary 2.
For any $B \in \mathbb{Z}_+$ and any $0 < \delta \le 1$, there is a way to write $B = b_1 + \cdots + b_k$ with $k = O(\log B + 1/\delta)$ so that $b_i \le \delta B$ for all $i$ and, for any $b \in \{0, 1, \ldots, B\}$, there is a way to express $b$ as the sum of a subset of the multiset $\{b_1, \ldots, b_k\}$.
Proof.
We start with the decomposition of the above lemma and refine it until the condition $b_i \le \delta B$ is satisfied for every $i$. As long as there exists some $b_i > \delta B$, replace $b_i$ with the two new numbers $\lceil b_i / 2 \rceil$ and $\lfloor b_i / 2 \rfloor$. This preserves the subset-sum property: in any representation that uses $b_i$, we may use the two halves instead. Every term produced by a replacement has value at least $\delta B / 2$ (up to rounding), and since the terms always sum to $B$, the number of replacement steps is at most $O(1/\delta)$. Thus, the number of terms in the final decomposition is at most $O(\log B + 1/\delta)$. ∎
The reduction.
Suppose we need to optimize $f$ over the domain $\{0, 1, \ldots, B\}^E$. By the above lemma, we can write $B = b_1 + \cdots + b_k$ with $k = O(\log B)$ so that any number at most $B$ can be written as a sum of a subset of the $b_i$'s. Let $V = E \times \{1, \ldots, k\}$. Consider a function $g$ defined on the ground set $V$ as follows. Consider $S \subseteq V$. Let $x_S \in \mathbb{Z}_+^E$ be the vector with $x_S(e) = \sum_{i : (e, i) \in S} b_i$, and define $g(S) = f(x_S)$.
By Lemma 1, for any vector $x \in \{0, \ldots, B\}^E$, there is a set $S \subseteq V$ such that $x_S(e) = x(e)$ for all $e \in E$. Thus, the collection $\{x_S : S \subseteq V\}$ captures all of $\{0, \ldots, B\}^E$. Next, we show that $g$ is submodular.
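The construction of $g$ can be sketched on a toy instance as follows (a minimal illustration; the separable concave $f$ below is our own example of a DR-submodular function, and the helper is not from the text):

```python
def reduced_function(f, E, B):
    """Build g(S) = f(x_S) on the ground set V = E x {1, ..., k}, where
    x_S(e) is the sum of b_i over pairs (e, i) in S and b_1, ..., b_k are
    the Lemma 1 pieces of B. `f` takes a dict mapping e to a value."""
    pieces, p = [], 1
    while 2 * p <= B:
        pieces.append(p)
        p *= 2
    pieces.append(B - (p - 1))
    V = [(e, i) for e in E for i in range(len(pieces))]

    def g(S):
        x = {e: 0 for e in E}
        for (e, i) in S:
            x[e] += pieces[i]
        return f(x)

    return g, V
```

On a small ground set one can exhaustively verify the set-function inequality $g(S) + g(T) \ge g(S \cup T) + g(S \cap T)$, matching Lemma 3 below.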
Lemma 3.
The function $g$ is submodular.
Proof.
Consider two sets $S \subseteq T \subseteq V$ and an arbitrary element $(e, i) \in V$ that is not in $T$. Let $x$ be defined as $x = x_S$ and $y$ be defined as $y = x_T$; note that $x(e') \le y(e')$ for all $e' \in E$. We have $g(S \cup \{(e, i)\}) - g(S) = f(x + b_i \chi_e) - f(x)$ and $g(T \cup \{(e, i)\}) - g(T) = f(y + b_i \chi_e) - f(y)$. By applying the diminishing return property $b_i$ times, we have
$$f(x + b_i \chi_e) - f(x) \ge f(y + b_i \chi_e) - f(y).$$
∎
3 Modeling Constraints
We are interested in maximizing $f$ subject to constraints. In this section, we show how to translate constraints on the vector $x$ into constraints on the set $S$ in the reduced instance.
Cardinality constraint.
The constraint $\|x\|_1 \le r$ with $r \in \mathbb{Z}_+$ can be translated to
$$\sum_{(e, i) \in S} b_i \le r.$$
By applying Corollary 2 (with a suitable choice of $\delta$), we map a cardinality constraint to a knapsack constraint in which all weights are at most an $\epsilon$ fraction of the budget.
Knapsack constraint.
The knapsack constraint $\sum_{e \in E} c(e)\, x(e) \le \beta$ can be translated to
$$\sum_{(e, i) \in S} c(e)\, b_i \le \beta.$$
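Both translations assign the reduced item $(e, i)$ the weight $c(e) \cdot b_i$, with $c \equiv 1$ recovering the cardinality case; a small sketch (helper name ours):

```python
def reduced_weights(c, pieces):
    """Weight of reduced item (e, i) is c[e] * b_i; the translated constraint
    requires the total weight of S to be at most the budget. A cardinality
    constraint is the special case c[e] = 1 for every e."""
    return {(e, i): c[e] * b for e in c for i, b in enumerate(pieces)}
```

By construction, the total weight of a set $S$ equals $\sum_e c(e)\, x_S(e)$, so feasible sets correspond exactly to feasible vectors.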
General constraints.
Consider the problem $\max\{f(x) : x \in \mathcal{C}\}$, where $\mathcal{C} \subseteq \{0, 1, \ldots, B\}^E$ denotes the set of all solutions that satisfy the constraints.
We can apply algorithmic frameworks from the submodular setting — such as the frameworks based on continuous relaxations and rounding [9, 3] — to the DR-submodular setting as follows. Let $P \subseteq [0, B]^E$ be a relaxation of $\mathcal{C}$ that satisfies the following conditions:

$P$ is downward-closed: if $0 \le x \le y$ and $y \in P$, then $x \in P$.

There is a separation oracle for $P$: given $x$, the oracle either correctly decides that $x \in P$ or otherwise returns a hyperplane separating $x$ from $P$, i.e., a vector $w$ and a scalar $\theta$ such that $\langle w, x \rangle > \theta$ and $\langle w, z \rangle \le \theta$ for all $z \in P$.
We apply Lemma 1 (or Corollary 2) to obtain the multiset $\{b_1, \ldots, b_k\}$ such that, for any vector $x \in \{0, \ldots, B\}^E$, there is a set $S \subseteq V$ such that $x_S(e) = x(e)$ for all $e \in E$. Define the linear map $A : [0, 1]^V \to [0, B]^E$, where $(Ap)(e)$ is computed according to $(Ap)(e) = \sum_{i=1}^{k} b_i\, p(e, i)$. Let $g$ be the submodular function given by the reduction. Let $G$ be the multilinear extension of $g$:
$$G(p) = \mathbb{E}[g(R(p))],$$
where $R(p)$ is a random set that contains each element $(e, i) \in V$ independently at random with probability $p(e, i)$.
Thus we obtain the following fractional problem: $\max\{G(p) : Ap \in P,\ p \in [0, 1]^V\}$. As shown in the following lemma, we can use the separation oracle for $P$ to maximize a linear objective $\langle d, p \rangle$, where $d \in \mathbb{R}^V$, subject to the constraints $Ap \in P$ and $p \in [0, 1]^V$.
Lemma 4.
Using the separation oracle for $P$ and an algorithm such as the ellipsoid method, for any vector $d \in \mathbb{R}^V$, one can find in polynomial time a vector $p$ that maximizes $\langle d, p \rangle$ subject to $Ap \in P$ and $p \in [0, 1]^V$.
Proof.
It suffices to verify that the separation oracle for $P$ allows us to separate over $Q = \{p \in [0, 1]^V : Ap \in P\}$. To this end, let $p$ be a vector in $\mathbb{R}^V$. Separation for the constraint $p \in [0, 1]^V$ can be done trivially by checking whether every coordinate of $p$ is in $[0, 1]$. Thus, we focus on separation for the constraint $Ap \in P$. Using the separation oracle for $P$, we can check whether $Ap \in P$. If yes, then we are done. Otherwise, the oracle returns $w$ and $\theta$ such that $\langle w, Ap \rangle > \theta$ and $\langle w, z \rangle \le \theta$ for all $z \in P$. Let $w'$ be $A^\top w$, where $A^\top$ is the adjoint of $A$, i.e., $\langle A^\top w, q \rangle = \langle w, Aq \rangle$ for all $q$. Then $\langle w', p \rangle = \langle w, Ap \rangle > \theta$. Now let $q$ be a vector in $[0, 1]^V$ such that $Aq \in P$. We have $\langle w', q \rangle = \langle w, Aq \rangle \le \theta$. Thus $(w', \theta)$ is a hyperplane separating $p$ from $Q$. ∎
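The adjoint step in the proof is just a reindexing: $w'(e, i) = b_i\, w(e)$, so that $\langle w', p \rangle = \langle w, Ap \rangle$ by linearity. A small numeric sketch (function names ours):

```python
def apply_A(p, pieces, E):
    """(Ap)(e) = sum_i b_i * p[(e, i)]."""
    return {e: sum(pieces[i] * p[(e, i)] for i in range(len(pieces)))
            for e in E}

def pull_back(w, pieces, E):
    """The adjoint: w' = A^T w, i.e., w'[(e, i)] = b_i * w[e], which
    satisfies <w', p> = <w, Ap> for every p."""
    return {(e, i): pieces[i] * w[e] for e in E for i in range(len(pieces))}
```

The identity $\langle A^\top w, p \rangle = \langle w, Ap \rangle$ holds exactly, which is all the proof needs to push a separating hyperplane for $P$ back to one for $Q$.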
Since we can solve $\max\{\langle d, p \rangle : Ap \in P,\ p \in [0, 1]^V\}$, where $d \in \mathbb{R}^V$, we can approximately solve the fractional problem using the (measured) Continuous Greedy algorithm or local search [9, 3, 4].
We note that in some settings, such as when $P$ is a polymatroid polytope, one can optimize linear objectives over $\{p \in [0, 1]^V : Ap \in P\}$ directly and more efficiently, without resorting to the ellipsoid method.
Some examples of results.
Using the reduction above, we immediately get algorithms for maximizing DR-submodular functions subject to various types of constraints. We include a few examples below.
Theorem 5.
There is a $1/2$-approximation algorithm for unconstrained DR-submodular maximization with running time $O(n \log B)$.
Proof.
By the reduction of Lemma 1, it suffices to solve an unconstrained submodular maximization instance on a ground set of size $O(n \log B)$; the result then follows by running the linear-time $1/2$-approximation algorithm of [2]. ∎
Theorem 6.
There is a $(1 - 1/e - \epsilon)$-approximation algorithm for maximizing a monotone DR-submodular function subject to a cardinality constraint, with running time polynomial in $n$, $\log B$, and $1/\epsilon$, where $n = |E|$.
Proof.
If $r \le 1/\epsilon$, the result follows via the trivial reduction of making $\min\{B, r\}$ copies of every item. Next, we consider the case $r > 1/\epsilon$. By the reduction using Corollary 2, we need to solve a submodular maximization problem with a knapsack constraint where all weights are at most $\epsilon$ times the budget and there are $O(n(\log B + 1/\epsilon))$ items. The result follows from applying the Density Greedy algorithm with either descending thresholds or lazy evaluations [1]. ∎
Theorem 7.
There is an $\alpha$-approximation algorithm for maximizing a DR-submodular function subject to a polymatroid constraint with running time that is polynomial in $n$ and $\log B$, where $\alpha = 1 - 1/e$ if the function is monotone and $\alpha = 1/e$ otherwise.
Proof.
Let $P$ be the polymatroid polytope. We apply Lemma 1 (or Corollary 2) to obtain the multiset $\{b_1, \ldots, b_k\}$ such that, for any vector $x \in \{0, \ldots, B\}^E$, there is a set $S \subseteq V$ such that $x_S(e) = x(e)$ for all $e \in E$. Let $g$ be the submodular function given by the reduction. Let $G$ be the multilinear extension of $g$.
Since we can separate over $P$ in polynomial time using a submodular minimization algorithm, it follows from Lemma 4 that we can optimize any linear (in $p$) objective $\langle d, p \rangle$ subject to $Ap \in P$ and $p \in [0, 1]^V$, where $d \in \mathbb{R}^V$. Therefore, using the measured Continuous Greedy algorithm, we can find an $\alpha$-approximate fractional solution $p$ to the problem $\max\{G(p) : Ap \in P,\ p \in [0, 1]^V\}$, where $\alpha = 1 - 1/e$ for monotone functions and $\alpha = 1/e$ for non-monotone functions. Similarly to [8], we can round the resulting fractional solution without any loss in the approximation. Let $x$ be defined as $x = Ap$. Define $F$ as follows: for any $y \in [0, B]^E$, let $F(y) = \mathbb{E}[f(Y)]$, where $Y$ is a random integral vector with independent coordinates and with $Y(e)$ equal to $\lceil y(e) \rceil$ with probability $y(e) - \lfloor y(e) \rfloor$ and to $\lfloor y(e) \rfloor$ otherwise. Note that, on each unit cell of $[0, B]^E$, $F$ is the multilinear extension of a submodular function agreeing with $f$ on the integral points.
First, one can show that $F(x) \ge G(p)$ via a hybrid argument. For $0 \le i \le n$, let $Z_i$ be a random integral vector whose first $i$ coordinates are distributed according to $x_{R(p)}$ (that is, constructing a randomized rounding $R(p)$ of $p$ and then converting it to an integral vector in $\{0, \ldots, B\}^E$) and whose last $n - i$ coordinates are picked randomly among $\{\lfloor x(e) \rfloor, \lceil x(e) \rceil\}$ so that the expectation is $x(e)$. Note that $\mathbb{E}[f(Z_0)] = F(x)$ and $\mathbb{E}[f(Z_n)] = G(p)$, and we will show that $\mathbb{E}[f(Z_{i-1})] \ge \mathbb{E}[f(Z_i)]$. Indeed, for all coordinates other than the $i$-th, $Z_{i-1}$ and $Z_i$ are identically distributed, so we can couple the randomness so that they agree on those coordinates. Let $Z_{-i}$ be $Z_i$ with the $i$-th coordinate zeroed out, and define the single-variable function $\varphi(t) = \mathbb{E}[f(Z_{-i} + t \chi_{e_i})]$, where $e_i$ is the $i$-th coordinate. Define $\hat{\varphi}$ to be the piecewise linear function that agrees with $\varphi$ on integral points and does not have any breakpoints other than the integral points. By the DR property of $f$, we have that $\hat{\varphi}$ is concave. Thus, $\mathbb{E}[f(Z_i)] = \mathbb{E}[\hat{\varphi}(Z_i(e_i))] \le \hat{\varphi}(\mathbb{E}[Z_i(e_i)]) = \hat{\varphi}(x(e_i))$. On the other hand, because $\hat{\varphi}$ is linear on $[\lfloor x(e_i) \rfloor, \lceil x(e_i) \rceil]$, we have $\mathbb{E}[f(Z_{i-1})] = \hat{\varphi}(x(e_i))$.
Next, as done in [8, Lemma 13], one can show that the constraints $y \in P$, $y \in [0, B]^E$ are equivalent to a matroid polytope. Thus, one can round $x$ to an integral vector without losing any value in $F$ using strategies such as pipage rounding or swap rounding. ∎
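The key inequality in the hybrid step is that, when the piecewise-linear interpolation $\hat{\varphi}$ is concave, the two-point rounding of a coordinate dominates any other mean-preserving integral distribution. This can be checked numerically with the concave toy function $\varphi(t) = \sqrt{t}$ (our own example, not from the text):

```python
def two_point_expectation(phi, x):
    """E[phi(X)] for the rounding X in {floor(x), ceil(x)} with mean x.
    This equals the piecewise-linear interpolation of phi evaluated at x."""
    lo = int(x)          # floor, for x >= 0
    frac = x - lo
    return (1 - frac) * phi(lo) + frac * phi(lo + 1)

def expectation(phi, dist):
    """E[phi(T)] for an integral distribution dist: value -> probability."""
    return sum(q * phi(t) for t, q in dist.items())
```

For any concave $\varphi$ this dominance is exactly Jensen's inequality applied to $\hat{\varphi}$, as in the proof above.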
Footnotes
 Let $\rho : 2^E \to \mathbb{R}_+$ be a monotone submodular function with $\rho(\emptyset) = 0$. The polymatroid associated with $\rho$ is the polytope $\{x \in \mathbb{R}_{\ge 0}^E : \sum_{e \in T} x(e) \le \rho(T) \text{ for all } T \subseteq E\}$.
References
 Ashwinkumar Badanidiyuru and Jan Vondrák. Fast algorithms for maximizing submodular functions. In Proceedings of the TwentyFifth Annual ACMSIAM Symposium on Discrete Algorithms (SODA), pages 1497–1514, 2014.
 Niv Buchbinder, Moran Feldman, Joseph Naor, and Roy Schwartz. A tight linear time (1/2)approximation for unconstrained submodular maximization. SIAM J. Comput., 44(5):1384–1402, 2015.
 Chandra Chekuri, Jan Vondrák, and Rico Zenklusen. Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM J. Comput., 43(6):1831–1879, 2014.
 Moran Feldman, Joseph Naor, and Roy Schwartz. A unified continuous greedy algorithm for submodular maximization. In Proceedings of the 52nd Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 570–579, 2011.
 Takanori Maehara, Akihiro Yabe, and Ken-ichi Kawarabayashi. Budget allocation problem with multiple advertisers: A game theoretic view. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pages 428–437, 2015.
 Tasuku Soma, Naonori Kakimura, Kazuhiro Inaba, and Ken-ichi Kawarabayashi. Optimal budget allocation: Theoretical guarantee and efficient algorithm. In Proceedings of the 31st International Conference on Machine Learning (ICML), pages 351–359, 2014.
 Tasuku Soma and Yuichi Yoshida. A generalization of submodular cover via the diminishing return property on the integer lattice. In Advances in Neural Information Processing Systems (NIPS), pages 847–855, 2015.
 Tasuku Soma and Yuichi Yoshida. Maximizing monotone submodular functions over the integer lattice. In International Conference on Integer Programming and Combinatorial Optimization (IPCO), pages 325–336, 2016.
 Jan Vondrák. Optimal approximation for the submodular welfare problem in the value oracle model. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing (STOC), pages 67–74, 2008.