On Scenario Aggregation to Approximate Robust Optimization Problems

André Chassein, Fachbereich Mathematik, University of Kaiserslautern, Germany
Marc Goerigk, Department of Management Science, Lancaster University, United Kingdom
Abstract

As most robust combinatorial min-max and min-max regret problems with discrete uncertainty sets are NP-hard, research into approximation algorithms and approximability bounds has been a fruitful area of recent work. A simple and well-known approximation algorithm is the midpoint method, where one takes the average over all scenarios and solves a problem of nominal type. Despite its simplicity, this method still gives the best-known bound for a wide range of problems, such as the robust shortest path or the robust assignment problem.

In this paper we present a simple extension of the midpoint method based on scenario aggregation, which improves the current best $K$-approximation result to an $(\varepsilon K)$-approximation for any desired $\varepsilon > 0$, where $K$ denotes the number of scenarios. Our method can be applied to min-max as well as min-max regret problems.

Keywords: robust optimization; approximation algorithms; min-max regret

1 Introduction

We consider uncertain optimization problems of the form

$$\min_{x \in X} f(x,c),$$

where $X$ is the set of feasible solutions, and $f(\cdot,c)$ is an uncertain objective function that depends on a cost scenario $c$ from some uncertainty set $\mathcal{U}$. Two popular approaches to reformulate such an uncertain problem to a robust counterpart are min-max optimization

$$\min_{x \in X} \max_{c \in \mathcal{U}} f(x,c)$$

and min-max regret optimization

$$\min_{x \in X} \max_{c \in \mathcal{U}} \big( f(x,c) - f^*(c) \big),$$

where $f^*(c) = \min_{y \in X} f(y,c)$ is used as an additional normalization term. In particular for combinatorial problems, where $X \subseteq \{0,1\}^n$ and $f(x,c) = c^\top x$, problems of this type have received significant attention in the research literature; see, e.g., the surveys [KY97, ABV09, KZ16] on this topic. In this paper we focus on the case of discrete uncertainty, i.e., the uncertainty set is of the form $\mathcal{U} = \{c^1,\ldots,c^K\}$.

For most combinatorial problems where the deterministic version can be solved in polynomial time (e.g., shortest path, spanning tree, selection), both robust counterparts turn out to be NP-hard. Therefore, the approximability of such problems has been analyzed (see, e.g., [ABV07]).

A popular approximation algorithm, due to its generality and simplicity, is the midpoint method (see, e.g., [CG15]). The idea is to define a new scenario $\hat{c} = \frac{1}{K}\sum_{k=1}^{K} c^k$, which is the average of all scenarios in the uncertainty set, and to solve a nominal problem with respect to these costs. This method is known to be a $K$-approximation algorithm for both min-max and min-max regret optimization. In the case of interval uncertainty, this approach even gives a 2-approximation [KZ06, Con12]. Quite surprisingly, this is still the best known approximation guarantee for several problems, see Table 1. In column "Fix $K$" we denote whether an FPTAS is known for the problem with a fixed number of scenarios $K$.
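To make the procedure concrete, the following sketch applies the midpoint method to a toy min-max selection problem (choose $p$ out of $n$ items); the greedy nominal solver and the instance data are illustrative assumptions and not part of the original method description.

```python
import numpy as np

def midpoint_method(scenarios, nominal_solver):
    """Aggregate all scenarios into their average and solve the nominal problem.

    scenarios: array of shape (K, n) with one cost vector per scenario.
    nominal_solver: function mapping a single cost vector to a feasible solution.
    """
    c_mid = scenarios.mean(axis=0)          # midpoint scenario \hat{c}
    return nominal_solver(c_mid)

def worst_case(x, scenarios):
    """Min-max objective value of solution x over the discrete scenario set."""
    return max(float(c @ x) for c in scenarios)

# Toy selection problem: pick p = 2 out of n = 4 items (hypothetical data).
def select_cheapest(c, p=2):
    x = np.zeros(len(c))
    x[np.argsort(c)[:p]] = 1                # the nominal problem is solved greedily
    return x

rng = np.random.default_rng(0)
U = rng.integers(1, 10, size=(3, 4))        # K = 3 scenarios, n = 4 items

x_mid = midpoint_method(U, select_cheapest)
print("midpoint solution:", x_mid, "worst case:", worst_case(x_mid, U))
```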

Problem          LB             UB             Fix K

Min-Max
Shortest Path                                  ✓
Spanning Tree                                  ✓
s-t Cut
Assignment
Selection                                      ✓
Knapsack                        -              ✓

Regret
Shortest Path                                  ✓
Spanning Tree                                  ✓
s-t Cut
Assignment
Selection                                      ✓
Knapsack         not approx.    not approx.

Table 1: Current best known approximation guarantees (UB) for an unbounded number of scenarios $K$ and best known inapproximability results (LB); see [KZ16]. A check mark in column "Fix $K$" indicates that an FPTAS is known for the problem with a fixed number of scenarios $K$.

In this paper a simple improvement of the midpoint approach is presented, where the basic idea is not to aggregate all scenarios into a single scenario, but into a sufficiently small set of scenarios instead. We show that if the min-max problem with a constant number of scenarios is sufficiently approximable, then there is a polynomial-time $(\varepsilon K)$-approximation for any constant $\varepsilon > 0$. With a slight modification, this also holds for min-max regret. This result hence improves all entries of Table 1 where the best-known approximation is $K$ and the column "Fix $K$" is checked. Interestingly, this also leads to the first approximation algorithm for min-max knapsack problems with unbounded $K$.

Note that this method is not a PTAS. While a PTAS exists for most of these problems when $K$ is fixed, its runtime is exponential in $K$. Our approach remains polynomial in $K$, but does not give a constant approximation guarantee.

The remainder of this paper is structured as follows. In Section 2 we present our improved approximation algorithm in the case of min-max robustness, and discuss its application to min-max regret in Section 3. We describe a small computational experiment on our approach in Section 4 before we conclude the paper in Section 5.

2 Min-Max Approximation

In this section, we show how to improve the $K$-approximation algorithm for the min-max problem to an $(\varepsilon K)$-approximation algorithm for any constant $\varepsilon > 0$, if a 2-approximation is available for a fixed number of scenarios. The basic idea is the following. Let us assume we have $K = 16$ scenarios. Solving the robust problem with all 16 scenarios would yield a 1-approximation (i.e., an optimal solution). Solving the problem with only one aggregated scenario gives a 16-approximation. We show that intermediate scenario aggregations also yield intermediate approximation guarantees (see Figure 1).

Figure 1: Basic aggregation scheme.

Now let us assume we would like to have an 8-approximation algorithm. We could aggregate to two scenarios and solve the resulting problem exactly. However, solving a min-max problem with only two scenarios is usually already NP-hard. Hence, we aggregate to four scenarios instead (which would give a 4-approximation if solved exactly, which is more than we need), and solve this problem with an algorithm that guarantees a 2-approximation. In total, this method then yields an 8-approximation. In the following, we explain the details of this procedure.

For simplicity, we assume here that $K$ is a power of two, but our results readily extend to any $K$; as usual for such problems, all cost coefficients are assumed to be nonnegative. Let any partition of $\mathcal{U} = \{c^1,\ldots,c^K\}$ into sets $\mathcal{U}_1,\ldots,\mathcal{U}_{K/2}$ of cardinality two be given. For each $i$, set $\hat{c}^i = \frac{1}{2}\sum_{c \in \mathcal{U}_i} c$, i.e., $\hat{c}^i$ is the midpoint scenario of scenario set $\mathcal{U}_i$. We write $\hat{\mathcal{U}} = \{\hat{c}^1,\ldots,\hat{c}^{K/2}\}$ for the resulting aggregated scenario set.
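A minimal sketch of one such aggregation level is given below, assuming (as in Figure 1) that consecutive scenarios are paired; the function name is ours.

```python
import numpy as np

def aggregate_pairs(scenarios):
    """Aggregate consecutive pairs of scenarios into their midpoints.

    scenarios: array of shape (K, n) with K even.
    Returns an array of shape (K/2, n) containing the midpoint scenarios.
    """
    K = scenarios.shape[0]
    assert K % 2 == 0, "number of scenarios must be even"
    # Pair scenario 2i with scenario 2i+1 and take the average of each pair.
    return 0.5 * (scenarios[0::2] + scenarios[1::2])

# Example: aggregating K = 4 scenarios down to 2 and then to 1 (the midpoint).
U = np.array([[4., 0.], [0., 4.], [2., 2.], [6., 2.]])
U1 = aggregate_pairs(U)      # 2 aggregated scenarios
U2 = aggregate_pairs(U1)     # the single midpoint scenario of Figure 1
print(U1, U2, sep="\n")
```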

Lemma 1.

Let $\hat{x}$ be an optimal solution for the min-max problem with scenario set $\hat{\mathcal{U}}$. Then, $\hat{x}$ is a 2-approximation for the min-max problem with scenario set $\mathcal{U}$.

Proof.

Let $\hat{x}$ be an optimal solution for $\hat{\mathcal{U}}$, and $x^*$ an optimal solution for $\mathcal{U}$. Let $k$ be the index of the worst-case scenario in $\mathcal{U}$ with respect to $\hat{x}$, and choose $i$ such that $c^k \in \mathcal{U}_i$. Then

$$\max_{c\in\mathcal{U}} f(\hat{x},c) = f(\hat{x},c^k) \le 2 f(\hat{x},\hat{c}^i) \le 2 \max_{\hat{c}\in\hat{\mathcal{U}}} f(\hat{x},\hat{c}) \le 2 \max_{\hat{c}\in\hat{\mathcal{U}}} f(x^*,\hat{c}) \le 2 \max_{c\in\mathcal{U}} f(x^*,c),$$

where the first inequality uses that costs are nonnegative, the third inequality uses the optimality of $\hat{x}$ for $\hat{\mathcal{U}}$, and the last inequality holds because every $\hat{c}^i$ is the average of two scenarios from $\mathcal{U}$. ∎
We repeatedly apply Lemma 1 to reduce the number of scenarios. Denote by $\mathcal{U}^0 = \mathcal{U}$ the original scenario set containing all $K$ scenarios. After the first level of aggregation we end up with scenario set $\mathcal{U}^1$ containing $K/2$ scenarios. Repeating the aggregation process, we create sets $\mathcal{U}^\ell$ for $\ell$ from $0$ to $\log_2 K$.

Corollary 2.

Applying Lemma 1 repeatedly, we get a scenario set $\mathcal{U}^\ell$ with $K/2^\ell$ scenarios such that solving the min-max problem with respect to $\mathcal{U}^\ell$ gives a $2^\ell$-approximation for the min-max problem with scenario set $\mathcal{U}$.

We present an instance where the approximation guarantee obtained in Corollary 2 is tight for the min-max shortest path problem. Let $K$ be the number of scenarios and $K'$ the number of aggregated scenarios that are used. Consider the instance of the shortest path problem presented in Figure 2. The top path is divided into $K'$ blocks of $K/K'$ edges each. All edges in the $i$-th block have cost $1$ in scenario $(i-1)\cdot K/K' + 1$ and cost $0$ in all other scenarios. Hence, the objective value of the top path is equal to $K/K'$. The cost structure for the bottom path is different: the $k$-th edge of the bottom path has cost $1$ in the $k$-th scenario and cost $0$ in all other scenarios. Hence, the objective value of the bottom path is $1$. Consider the aggregation scheme as in Figure 1, which aggregates consecutive scenarios. For both paths it holds that after the aggregation every edge of the $i$-th block has cost $K'/K$ in the $i$-th aggregated scenario and cost $0$ in all other aggregated scenarios. Hence, both paths are identical with respect to the aggregated scenarios, and the optimal solution of the aggregated problem may consist of the top instead of the bottom path. This leads to a gap of $K/K'$.

Figure 2: An instance of the min-max shortest path problem for which the approximation guarantee of Corollary 2 is tight. All edges share a similar cost structure: the cost is $1$ in exactly one scenario and $0$ in all other scenarios. We represent this by the unit vectors $e_k$.
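The gap can also be checked numerically. The following sketch builds the per-scenario costs of both paths for the hypothetical choice $K = 8$ and $K' = 2$ and confirms that the two paths are indistinguishable after aggregation, while their true worst-case values differ by the factor $K/K'$.

```python
import numpy as np

K, Kp = 8, 2                      # K scenarios aggregated into K' = Kp scenarios
block = K // Kp                   # number of edges per block

def unit(k):                      # cost vector e_k of an edge: 1 in scenario k, else 0
    e = np.zeros(K)
    e[k] = 1.0
    return e

# Top path: all edges of block i carry the same unit vector (first scenario of the block).
top = [unit(i * block) for i in range(Kp) for _ in range(block)]
# Bottom path: the k-th edge carries unit vector e_k.
bottom = [unit(k) for k in range(K)]

def worst_case(path, num_groups):
    """Worst-case path cost after aggregating consecutive groups of scenarios."""
    costs = np.sum(path, axis=0)                          # cost of the path per scenario
    groups = costs.reshape(num_groups, -1).mean(axis=1)   # aggregate by averaging
    return groups.max()

print(worst_case(top, K), worst_case(bottom, K))      # original: K/K' vs. 1
print(worst_case(top, Kp), worst_case(bottom, Kp))    # aggregated: both equal 1
```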
Lemma 3.

A solution that is an $\alpha$-approximation for the min-max problem with scenario set $\mathcal{U}^\ell$ is also an $(2^\ell \cdot \alpha)$-approximation for the min-max problem with scenario set $\mathcal{U}$.

Proof.

Analogously to the proof of Lemma 1. ∎

We can now state the main result of this section.

Theorem 4.

Let a constant $\varepsilon > 0$ be given. If there exists a 2-approximation algorithm for the min-max problem with a fixed number of scenarios, then there exists a polynomial-time algorithm that gives an $(\varepsilon K)$-approximation for the min-max problem.

Proof.

Let $\varepsilon > 0$ be constant. We choose $\ell = \lfloor \log_2(\varepsilon K / 2) \rfloor$, so that $2 \cdot 2^\ell \le \varepsilon K$. According to Corollary 2, we construct the set $\mathcal{U}^\ell$ with $K/2^\ell$ scenarios. Using the 2-approximation algorithm for the min-max problem with a fixed number of scenarios, we find a 2-approximation for $\mathcal{U}^\ell$. Using Lemma 3, we conclude that this solution is a $(2\cdot 2^\ell)$-approximation, and hence an $(\varepsilon K)$-approximation, for the min-max problem with scenario set $\mathcal{U}$. Note that the running time of this procedure is polynomial, since the value of $\varepsilon$ and, therefore, also the number of remaining scenarios $K/2^\ell \le \lceil 4/\varepsilon \rceil$, is fixed. ∎
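A compact sketch of the resulting procedure is given below. The 2-approximation oracle for a fixed number of scenarios is problem specific and is therefore passed in as a parameter; its name, and the use of consecutive pairing, are illustrative assumptions.

```python
import math
import numpy as np

def eps_k_approximation(scenarios, eps, solve_fixed_k):
    """(eps * K)-approximation for the min-max problem via scenario aggregation.

    scenarios:      array of shape (K, n), K a power of two, nonnegative costs.
    eps:            desired constant epsilon > 0.
    solve_fixed_k:  2-approximation oracle for the min-max problem with a
                    constant number of scenarios (problem specific).
    """
    U = np.asarray(scenarios, dtype=float)
    # Aggregate until at most ceil(4/eps) scenarios remain (Corollary 2 / Theorem 4).
    target = max(1, math.ceil(4.0 / eps))
    while U.shape[0] > target:
        U = 0.5 * (U[0::2] + U[1::2])   # one level of pairwise aggregation
    return solve_fixed_k(U)             # 2-approximation on the aggregated set
```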

Corollary 5.

For the min-max shortest path, spanning tree, selection, and knapsack problems with an unbounded number of scenarios $K$, there exists a polynomial-time $(\varepsilon K)$-approximation algorithm for any fixed $\varepsilon > 0$.

As examples, let us consider the min-max selection and shortest path problems. For both problems we need a 2-approximation algorithm for the min-max problem with a fixed number of scenarios.

  • For selection, there exists an FPTAS that finds, for a fixed number of scenarios, a $(1+\delta)$-approximation in polynomial time (see [KZ07]). Hence, using Theorem 4, an $(\varepsilon K)$-approximation is possible in polynomial time by aggregating to a constant number of scenarios and approximating the resulting problem with a factor of $2$, i.e., choosing $\delta = 1$.

  • For shortest path, there likewise exists an FPTAS that finds, for a fixed number of scenarios, a $(1+\delta)$-approximation in polynomial time, which makes it possible to find an $(\varepsilon K)$-approximation in polynomial time by aggregating to a constant number of scenarios and approximating the resulting problem with a factor of $2$, i.e., choosing $\delta = 1$; the required parameter choice is sketched in the calculation below.
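The trade-off behind both items can be summarized in one short calculation, a sketch assuming the straightforward generalization of Lemma 3 to an aggregation into $M$ equally sized groups; the concrete numbers are only illustrative.

```latex
% Aggregating K scenarios into M groups of size K/M and solving the aggregated
% problem with a (1+\delta)-approximation yields an overall guarantee of
% (1+\delta) * (K/M). To reach an (\varepsilon K)-approximation it suffices that
\[
  (1+\delta)\cdot\frac{K}{M} \;\le\; \varepsilon K
  \quad\Longleftrightarrow\quad
  M \;\ge\; \frac{1+\delta}{\varepsilon}.
\]
% With \delta = 1 (a factor-2 approximation of the aggregated problem),
% M = \lceil 2/\varepsilon \rceil aggregated scenarios suffice; for example,
% \varepsilon = 1/4 leads to M = 8.
```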

3 Min-Max Regret Approximation

To translate the results obtained for min-max to min-max regret problems we need to modify the aggregation procedure, as the following example shows.

Example 6.

Consider the min-max regret shortest path instance shown in Figure 3(a). There are four scenarios. An optimal solution is to take the path in the middle. If we aggregate the first two and the last two scenarios, we arrive at the instance shown in Figure 3(b). Here, an optimal solution with respect to the aggregated scenarios is to take the top path; but its true regret in the original instance is more than twice the optimal regret. Hence, in this example, simple aggregation does not yield a 2-approximation as in the case of the min-max objective function.

(a) Instance with four scenarios.

(b) Instance with aggregated scenarios.
Figure 3: An example instance where aggregating from four to two scenarios does not preserve the 2-approximation guarantee for min-max regret.

Instead of simply aggregating pairs of scenarios and solving a min-max regret problem on this new scenario set, we consider the following problem

$$\min_{x \in X} \max_{i=1,\ldots,K/2} \Big( f(x,\hat{c}^i) - \frac{1}{2}\sum_{c \in \mathcal{U}_i} f^*(c) \Big). \qquad (*)$$

Note that we do not use the objective $\min_{x\in X} \max_{i} \big( f(x,\hat{c}^i) - f^*(\hat{c}^i) \big)$, as would be usual for min-max regret. Further generalizing, we call a problem

$$\min_{x \in X} \max_{i=1,\ldots,M} \big( f(x,c^i) - z_i \big)$$

with arbitrary values $z_1,\ldots,z_M$ a generalized min-max regret problem.
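As an illustration of how the data of problem (*) is assembled, the sketch below computes the midpoint scenarios $\hat{c}^i$ and the offsets $z_i$ from the original scenarios, assuming consecutive pairing and access to a problem-specific nominal solver; all function names are ours.

```python
import numpy as np

def build_generalized_regret_data(scenarios, nominal_opt_value):
    """Build the aggregated scenarios and offsets z_i of problem (*).

    scenarios:          array of shape (K, n), K even.
    nominal_opt_value:  function c -> min_{y in X} f(y, c), the nominal optimum.
    Returns (c_hat, z), where c_hat[i] is the midpoint of pair i and z[i] is the
    average of the nominal optima of the two scenarios in that pair.
    """
    c_hat = 0.5 * (scenarios[0::2] + scenarios[1::2])
    z = np.array([0.5 * (nominal_opt_value(scenarios[2 * i]) +
                         nominal_opt_value(scenarios[2 * i + 1]))
                  for i in range(scenarios.shape[0] // 2)])
    return c_hat, z

def generalized_regret(x, c_hat, z):
    """Objective value of x in the generalized min-max regret problem."""
    return max(float(c @ x) - zi for c, zi in zip(c_hat, z))
```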

Lemma 7.

An optimal solution of the generalized min-max regret problem (*) on $\hat{\mathcal{U}}$ is a 2-approximation for the min-max regret problem with scenario set $\mathcal{U}$.

Proof.

Let $\hat{x}$ be optimal for problem (*), and $x^*$ optimal for the original min-max regret problem with uncertainty set $\mathcal{U}$. Again, denote by $k$ the index of the worst-case scenario in $\mathcal{U}$ with respect to $\hat{x}$, and choose $i$ such that $c^k \in \mathcal{U}_i = \{c^k, c^{k'}\}$. Writing $g(x) = \max_{j} \big( f(x,\hat{c}^j) - \frac{1}{2}\sum_{c\in\mathcal{U}_j} f^*(c) \big)$ for the objective of (*), we obtain

$$
\begin{aligned}
\max_{c\in\mathcal{U}}\big(f(\hat{x},c)-f^*(c)\big)
 &= f(\hat{x},c^k)-f^*(c^k)\\
 &\le \big(f(\hat{x},c^k)-f^*(c^k)\big)+\big(f(\hat{x},c^{k'})-f^*(c^{k'})\big)\\
 &= 2\Big(f(\hat{x},\hat{c}^i)-\tfrac{1}{2}\big(f^*(c^k)+f^*(c^{k'})\big)\Big)\\
 &\le 2\,g(\hat{x}) \;\le\; 2\,g(x^*)
 \;\le\; 2\max_{c\in\mathcal{U}}\big(f(x^*,c)-f^*(c)\big),
\end{aligned}
$$

where the first inequality uses that regret values are nonnegative, and the last inequality holds because each term $f(x^*,\hat{c}^j) - \frac{1}{2}\sum_{c\in\mathcal{U}_j} f^*(c)$ equals the average of the regrets of $x^*$ over the two scenarios in $\mathcal{U}_j$. ∎
Note that the arguments used in the proof of Lemma 7 can be generalized to the case where scenarios are aggregated repeatedly, i.e., we aggregate to sets with more than two elements. Similar to Corollary 2 we obtain:

Corollary 8.

Let an aggregated scenario set $\hat{\mathcal{U}} = \{\hat{c}^1,\ldots,\hat{c}^M\}$ be given, where each of the scenarios is defined as $\hat{c}^i = \frac{1}{|\mathcal{U}_i|}\sum_{c\in\mathcal{U}_i} c$ for a partition $\mathcal{U}_1,\ldots,\mathcal{U}_M$ of $\mathcal{U}$ into sets of equal cardinality $K/M$. Then the optimal solution of the generalized min-max regret problem

$$\min_{x\in X} \max_{i=1,\ldots,M} \Big( f(x,\hat{c}^i) - \frac{1}{|\mathcal{U}_i|}\sum_{c\in\mathcal{U}_i} f^*(c) \Big)$$

yields a $(K/M)$-approximation of the min-max regret problem with scenario set $\mathcal{U}$.

Similar to min-max, we can use a 2-approximation for the generalized min-max regret problem with a fixed number of scenarios to obtain an $(\varepsilon K)$-approximation for the min-max regret problem. Using the same arguments as in the proof of Theorem 4, we obtain:

Theorem 9.

Let a constant $\varepsilon > 0$ be given. If there exists a 2-approximation algorithm for the generalized min-max regret problem with a fixed number of scenarios, then there exists a polynomial-time algorithm that gives an $(\varepsilon K)$-approximation for the min-max regret problem.

Note that in our construction of the generalized problem, the values $z_i$ are nonnegative and satisfy $z_i \le \min_{y\in X} f(y,\hat{c}^i)$. Hence, we can use the same proof as in [ABV07], based on the FPTAS for the multi-objective spanning tree problem, to show that there is an FPTAS for our generalized min-max regret spanning tree problem with a fixed number of scenarios. The same approach applies to the min-max regret selection problem.

Furthermore, we can modify any such generalized min-max regret shortest path problem by adding an edge from $s$ to $t$ with cost $z_i$ in scenario $i$. We create an additional scenario in which the cost of each original edge is $0$, and the cost of the new edge is a sufficiently large value $M$. As the new edge then attains the optimal value $z_i$ in each scenario $i$, while the additional scenario ensures that it is never used by a solution of small regret, we can solve a classic min-max regret problem on this instance, giving the same objective value as before. Hence, the FPTAS for min-max regret shortest path (see [ABV07]) can also be applied to our generalized problem.
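As an illustration of this reduction, the following sketch performs the described graph modification on a simple edge-list representation; the data layout and the choice of the large value are our own assumptions.

```python
def to_classic_regret_instance(edges, costs, z, s, t, big_m=None):
    """Reduce a generalized min-max regret shortest path instance to a classic one.

    edges: list of (u, v) pairs.
    costs: list over scenarios; costs[i][e] is the cost of edge e in scenario i.
    z:     offsets z_i of the generalized problem, with z[i] <= shortest path value
           in scenario i.
    Returns the modified edge list and cost matrix with one extra edge and one
    extra scenario, on which classic min-max regret has the same optimal value.
    """
    if big_m is None:
        # An upper bound on any possible regret in the original scenarios, plus one.
        big_m = 1 + len(edges) * max(max(c) for c in costs)
    new_edges = edges + [(s, t)]                      # direct edge from s to t
    new_costs = [list(c) + [z[i]] for i, c in enumerate(costs)]
    extra = [0.0] * len(edges) + [big_m]              # extra scenario: original edges cost 0
    return new_edges, new_costs + [extra]
```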

Corollary 10.

For the min-max regret shortest path, spanning tree, and selection problems with an unbounded number of scenarios $K$, there exists a polynomial-time $(\varepsilon K)$-approximation algorithm for any fixed $\varepsilon > 0$.

4 Computational Experiments

In this section, we present a small test of the proposed aggregation method on randomly generated instances. As benchmark problem we use the shortest path problem. We define a complete layered graph with a fixed number of layers and a fixed width, together with a scenario set of randomly generated scenarios whose number is a power of two. The cost of each edge is chosen uniformly at random for each scenario. In the first step, we begin with the full set of scenarios. Next, we halve the number of scenarios by aggregating them pairwise. This is repeated until we end up with a single scenario. In each step, we solve the min-max shortest path problem with the corresponding set of scenarios by solving an integer programming formulation with CPLEX. Note that in the last step, where the uncertainty set consists of only a single scenario, only a classic shortest path problem needs to be solved. At the end, we evaluate the performance of the computed paths by computing their worst-case cost with respect to the original set of scenarios. To make the results comparable, we divide the performance of each solution by the performance of the optimal solution.
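The experimental protocol can be summarized by the following sketch, in which the instance data and the exact min-max solver (in our experiments an integer programming formulation solved with CPLEX) are abstracted behind an illustrative function parameter.

```python
import numpy as np

def aggregation_profile(scenarios, solve_min_max):
    """Relative worst-case performance for every level of pairwise aggregation.

    scenarios:     array of shape (K, n), K a power of two.
    solve_min_max: function mapping a scenario matrix to a feasible solution
                   (e.g., via an integer programming formulation).
    """
    def worst_case(x, S):
        return max(float(c @ x) for c in S)

    opt = worst_case(solve_min_max(scenarios), scenarios)   # exact min-max value
    ratios, U = [], scenarios
    while True:
        x = solve_min_max(U)
        ratios.append((U.shape[0], worst_case(x, scenarios) / opt))
        if U.shape[0] == 1:
            break
        U = 0.5 * (U[0::2] + U[1::2])    # halve the number of scenarios
    return ratios
```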

The aggregation scheme presented in Figure 1 always aggregates two consecutive scenarios. This choice is arbitrary, and any other aggregation rule can be used in practice to improve the performance of the method. Besides the aggregation of consecutive scenarios, we also tested aggregating similar scenarios. To this end, we computed a minimum-cost perfect matching between the scenarios, where the cost of matching scenario $c^i$ with scenario $c^j$ is set to the Euclidean distance between $c^i$ and $c^j$; see the sketch after this paragraph.
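A minimal sketch of this similarity-based pairing is given below; it assumes NetworkX's min_weight_matching routine for the minimum-cost perfect matching, and the surrounding function name is ours.

```python
import numpy as np
import networkx as nx

def pair_similar_scenarios(scenarios):
    """Pair scenarios by a minimum-cost perfect matching w.r.t. Euclidean distance.

    scenarios: array of shape (K, n) with K even.
    Returns an array of shape (K/2, n) with the midpoint of each matched pair.
    """
    K = scenarios.shape[0]
    G = nx.Graph()
    for i in range(K):
        for j in range(i + 1, K):
            G.add_edge(i, j, weight=float(np.linalg.norm(scenarios[i] - scenarios[j])))
    matching = nx.min_weight_matching(G)   # set of pairs covering all scenarios
    return np.array([0.5 * (scenarios[i] + scenarios[j]) for i, j in matching])
```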

The results of the experiment, averaged over 1000 instances, are shown in Figure 4.

Figure 4: The horizontal axis gives the number of scenarios that are used at the different aggregation levels; the vertical axis shows the relative worst-case performance of the corresponding solutions. The red solid line shows the performance when aggregating similar scenarios, and the blue dashed line shows the performance of the consecutive aggregation scheme.

It can be seen that the more involved aggregation rule based on scenario similarity does indeed give better results for intermediate aggregation levels. For full or no aggregation, the aggregation rule is of course irrelevant. Note also that the relative performance of the aggregated solutions is far below the theoretical performance guarantee. Interestingly, for the aggregation scheme based on similarity, there seems to be a roughly linear relationship between the number of scenarios and the relative worst-case performance.

5 Conclusions

The midpoint method is a central approximation algorithm in robust optimization. Despite its simplicity, it has been the best-known method for several classic combinatorial problems. In this paper we presented a simple variant of the method, where the uncertainty set is not aggregated into a single scenario, but into a sufficiently small set of scenarios instead. This reduced scenario set is then approximated using, e.g., an FPTAS for discrete uncertainty of constant size. Our approach can be used to find polynomial-time $(\varepsilon K)$-approximations for any constant $\varepsilon > 0$, thus improving several currently best known approximability results.

Our results hold for any aggregation scheme. However, for practical purposes, aggregating similar scenarios is reasonable, so as to preserve the structure of the uncertainty set as far as possible. To quantify this effect, we performed a computational experiment using random shortest path instances. Our results indicate that the approximation ratios observed on these instances are considerably smaller than the theoretical bounds suggest, and that aggregating similar scenarios does indeed improve the quality of solutions.

References

  • [ABV07] H. Aissi, C. Bazgan, and D. Vanderpooten. Approximation of min–max and min–max regret versions of some combinatorial optimization problems. European Journal of Operational Research, 179(2):281–290, 2007.
  • [ABV09] H. Aissi, C. Bazgan, and D. Vanderpooten. Min–max and min–max regret versions of combinatorial optimization problems: A survey. European Journal of Operational Research, 197(2):427–438, 2009.
  • [CG15] A. Chassein and M. Goerigk. A new bound for the midpoint solution in minmax regret optimization with an application to the robust shortest path problem. European Journal of Operational Research, 244(3):739–747, 2015.
  • [Con12] E. Conde. On a constant factor approximation for minmax regret problems using a symmetry point scenario. European Journal of Operational Research, 219(2):452–457, 2012.
  • [KY97] P. Kouvelis and G. Yu. Robust Discrete Optimization and Its Applications. Kluwer Academic Publishers, 1997.
  • [KZ06] A. Kasperski and P. Zieliński. An approximation algorithm for interval data minmax regret combinatorial optimization problems. Information Processing Letters, 97(5):177–180, 2006.
  • [KZ07] A. Kasperski and P. Zieliński. Approximation of min-max (regret) combinatorial optimization problems under discrete scenario representation. Technical report, Institute of Mathematics and Computer Science, Wroclaw University of Science and Technology, PRE 7, 2007.
  • [KZ16] A. Kasperski and P. Zieliński. Robust discrete optimization under discrete and interval uncertainty: A survey. In Robustness Analysis in Decision Aiding, Optimization, and Analytics, pages 113–143. Springer, 2016.