Game Tree Search in a Robust Multistage Optimization Framework: Exploiting Pruning Mechanisms

Michael Hartisch and Ulf Lorenz
University of Siegen, Siegen, Germany
{michael.hartisch, ulf.lorenz}@uni-siegen.de
Abstract

We investigate pruning in search trees of so-called quantified integer linear programs (QIPs). QIPs consist of a set of linear inequalities and a minimax objective function, where some variables are existentially and others are universally quantified. They can be interpreted as two-person zero-sum games between an existential and a universal player on the one hand, or as multistage optimization problems under uncertainty on the other hand. Solutions are so-called winning strategies for the existential player that specify how to react to moves of the universal player - i.e. certain assignments of universally quantified variables - in order to certainly win the game.

QIPs can be solved with the help of game tree search that is enhanced with non-chronological back-jumping. We develop and theoretically substantiate pruning techniques based upon (algebraic) properties similar to pruning mechanisms known from linear programming and quantified Boolean formulas. The presented Strategic Copy-Pruning mechanism allows the existence of a strategy to be deduced implicitly in linear time (by static examination of the QIP matrix) without explicitly traversing the strategy itself. We show that the implementation of our findings can massively speed up the search process.

Keywords: Planning Algorithms, Combinatorial Search and Optimization, Game Playing, Heuristic Search, Planning under Uncertainty, Robust Optimization, Quantified Integer Programming

Introduction

Mixed-integer linear programming (MIP) [34] is the state-of-the-art technique for computer-aided optimization of real-world problems. Nowadays, commercial top solvers are able to solve large MIPs of practical size, but companies observe an increasing danger of disruptions that prevent them from acting as planned. One reason is that input data is often assumed to be deterministic and exactly known when decisions have to be made, but in reality it is often afflicted with uncertainty. Examples are flight and travel times, throughput times or arrival times of externally produced goods. Thus, there is a need for planning and deciding under uncertainty. Uncertainty, however, often pushes problems that are in the complexity class P or NP up to PSPACE [27]. Therefore, NP-complete integer programs are not appropriate for such problems. Prominent solution paradigms for optimization under uncertainty are Stochastic Programming [6], Robust Optimization [4, 22, 15], Dynamic Programming [3], Sampling [16] and of course POMDPs [26]. Relatively unexplored are the abilities of linear programming extensions for PSPACE-complete problems. In the early 2000s the idea of universally quantified variables, as they are used in quantified constraint satisfaction problems [13], was picked up again [37], coining the term quantified integer program (QIP). Quantified integer programming is a direct, very formal extension of integer linear programming (IP), making QIPs applicable in a very natural way [10, 11]. They allow robust multistage optimization, extending the two/three-stage approach of Robust Optimization [4]. Multistage models, in contrast to two/three-stage models, allow more precise planning strategies, as uncertain events typically do not all occur at the same time (delays in timetables, changed cost estimates for edges in a graph).

A solution of a QIP is a strategy - in the game tree search sense [29], see Definition 3 - for assigning existentially quantified variables such that some linear constraint system is fulfilled. By adding a minimax objective function, the aim is to find the best strategy [23]. As is not unusual in the context of optimization under uncertainty [4, 5], a polyhedral uncertainty set can be used [17]. Two different ways of tackling a QIP are known: on the one hand, the so-called deterministic equivalent program can be built, similar to the ones known from stochastic programming [38], and solved using standard integer programming solvers; on the other hand, the more direct approach is to conduct a game tree search [2, 33, 12, 35]. We are interested in utilizing game solving techniques in combination with linear programming techniques. Recently a solver for quantified mixed integer programs was made available as open source. This solver combines techniques known from game tree search, linear programming and quantified Boolean formulas [9].

An optimization task is often split up into two parts: finding the optimal solution itself and proving that no better solution can exist. It turned out that applying backjumping techniques as utilized by QBF solvers [41] and cutting planes as commonly used in integer programming [24] is also highly beneficial for QIPs in order to assess that no (better) strategy can exist in certain subtrees. This subtask even becomes simpler with an increasing number of universally quantified variables. However, finding a solution, which we call a winning strategy, proved to be more difficult. At first glance, it seems that the exponential number of leaves belonging to a strategy must be traversed explicitly. This is certainly true in the worst case. However, as practical game trees are often structured irregularly, there are typically "difficult" parts of a game tree where a very deliberate substrategy must be found, but also other parts that are really easy to master. In this paper we present a procedure, called strategic copy-pruning (SCP), that is capable of recognizing such easily masterable subtrees, which makes it possible to implicitly deduce the existence of a winning strategy therein. In experiments, SCP often allows the existence of a winning strategy to be concluded with a linear number of algebraic operations. In particular, in those cases it is not necessary to examine an exponential number of leaves.

The effect of SCP is reinforced if the sequence of variable assignments predicted as optimal by minimax for both sides, called the principal variation [8], is traversed in an early stage of the tree search. Detecting and verifying this particular variable assignment is essential in order to obtain the objective value. This, of course, is not as easy as it sounds, but having reasonable knowledge of which universal variable assignments are particularly vicious can massively boost the search process. Several heuristics exist to analyze and find such promising moves in a game tree search environment [1, 33, 30, 40].

Of course there are publications specifically dealing with pruning and backjumping techniques, both in the area of game tree search [39, 21, 18] and of quantified Boolean formulas (QBF) [14, 7]. Moreover, there are Kawano's simulation [20], SSS* [36], MTD(f) [30, 31] and (nega)scout [32]. However, none of the above cover what we do here.

The paper is organized as follows: first, basic definitions and notations regarding QIPs are presented. Then two pruning techniques for the QIP game tree search are introduced and examined theoretically: first, the well-known monotonicity [7] of variables is recapitulated; second, as our main result, we derive from already found strategies the existence of winning strategies in other branches, in such a way that these branches do not need to be investigated explicitly. Finally, the conducted experiments are presented.

Preliminaries: Basics of Quantified Integer Programming

Let $n \in \mathbb{N}$ be the number of variables and $x = (x_1, \ldots, x_n)^\top \in \mathbb{Z}^n$ a vector of variables ($\mathbb{Z}$, $\mathbb{N}$ and $\mathbb{Q}$ denote the set of integers, natural numbers, and rational numbers, respectively). For each variable $x_j$ its domain $\mathcal{L}_j$ with lower and upper bounds $l_j, u_j \in \mathbb{Z}$, $l_j \le u_j$, is given by $\mathcal{L}_j = \{ y \in \mathbb{Z} : l_j \le y \le u_j \}$. The domain of the entire variable vector is described by $\mathcal{L} = \{ x \in \mathbb{Z}^n : x_j \in \mathcal{L}_j \text{ for all } j \}$, i.e. each variable must obey its domain. Let $Q = (Q_1, \ldots, Q_n) \in \{\exists, \forall\}^n$ denote the vector of quantifiers. We call $\mathcal{E} = \{ j : Q_j = \exists \}$ the set of existential variables and $\mathcal{A} = \{ j : Q_j = \forall \}$ the set of universal variables. Further, each maximal consecutive subsequence in $Q$ consisting of identical quantifiers is called a quantifier block, with $B_i$ denoting the $i$-th block. Let $\beta \le n$ denote the number of blocks, and thus $\beta - 1$ is the number of quantifier changes. The variable vector of variable block $B_i$ will be referred to as $x^{(i)}$.

Definition 1 (Quantified Integer Linear Program (QIP)).

Let $A \in \mathbb{Q}^{m \times n}$ and $b \in \mathbb{Q}^m$ for $m \in \mathbb{N}$, and let $\mathcal{L}$ and $Q$ be given as described above. Let $c \in \mathbb{Q}^n$ be the vector of objective coefficients and let $c^{(i)}$ denote the vector of coefficients belonging to block $B_i$. Let the term $Q \circ x \in \mathcal{L}$ with the component-wise binding operator $\circ$ denote the quantification vector $(Q_1 x_1 \in \mathcal{L}_1, \ldots, Q_n x_n \in \mathcal{L}_n)$ such that every quantifier $Q_j$ binds the variable $x_j$ to its domain $\mathcal{L}_j$. We call

$z = \min_{x^{(1)} \in \mathcal{L}^{(1)}} \Big( c^{(1)\top} x^{(1)} + \max_{x^{(2)} \in \mathcal{L}^{(2)}} \big( c^{(2)\top} x^{(2)} + \min_{x^{(3)} \in \mathcal{L}^{(3)}} ( \cdots ) \big) \Big) \quad \text{s.t.} \quad Q \circ x \in \mathcal{L} : A x \le b \qquad (*)$

a QIP with objective function (for a minimizing existential player).

In the following, we will only consider binary QIPs, i.e. $l_j = 0$ and $u_j = 1$ for all $j \in \{1, \ldots, n\}$. This requirement, however, does not constitute a restriction, as any QIP instance can be converted artificially through binarization.

A QIP instance can be interpreted as a two-person zero-sum game between an existential player setting the existentially quantified variables and a universal player setting the universally quantified variables, with payoff $c^\top x$. The variables are set in consecutive order according to the variable sequence. Consequently, we say that a player makes the move $x_j = \hat{x}_j$ if she fixes the variable $x_j$ to $\hat{x}_j$. At each such move, the corresponding player knows the settings of $x_1, \ldots, x_{j-1}$ before taking her decision $x_j$. Each fixed vector $x \in \mathcal{L}$, that is, when the existential player has fixed the existential variables and the universal player has fixed the universal variables, is called a game. If $x$ satisfies the linear constraint system $Ax \le b$, the existential player pays $c^\top x$ to the universal player. If $x$ does not satisfy $Ax \le b$, we say the existential player loses and the payoff will be $+\infty$. This is a small deviation from conventional zero-sum games; since this is only a matter of interpretation, the consequences are not discussed further. The chronological order of the variable blocks given by $Q$ can be represented using a game tree.

Definition 2 (Game Tree).

Let $G$ be an edge-labeled finite directed tree with a set of nodes $V = V_\exists \cup V_\forall \cup V_L$, a set of edges $E$ and a vector of edge labels. Each inner level of the tree consists either only of nodes from $V_\exists$ or only of nodes from $V_\forall$, with the root node at level 0 belonging to the player of the first variable block. The leaf nodes (nodes without children) are from $V_L$. The $j$-th variable is represented by the inner nodes at depth $j-1$. Each edge connects a node in some level $j$ to a node in level $j+1$. Outgoing edges from a node in level $j$ represent the moves of the player at the current node for variable $x_{j+1}$; the corresponding edge labels encode the variable assignments of these moves.

Thus, a path from the root to a leaf represents a game of the QIP, and the sequence of edge labels encodes its moves and hence the assignment of the corresponding variables. The most relevant term for describing solutions is the so-called strategy.

Definition 3 ((Existential) Strategy).

A strategy (for the assignment of existential variables) is a subtree $S$ of the game tree $G$. In $S$, each node from $V_\exists$ has exactly one child, and each node from $V_\forall$ has as many children as in $G$, i.e. as many as there are values in the corresponding variable domain.

In the following, the word strategy will always refer to an existential strategy. Universal strategies can be defined similarly but are not needed in our context. A strategy is called a winning strategy if all paths from the root node to a leaf represent a vector $x$ such that $Ax \le b$. A QIP is called feasible if (*) is true (see Definition 1), i.e. if a winning strategy for the assignment of existential variables exists. If there is more than one winning strategy for the existential player, the objective function aims for a certain (the "best") one. The value of a strategy is given by its minimax value, which is the maximum objective value at its leaves [29]. Hence, the objective value of a feasible QIP is the minimax value of the root node, i.e. the minimax value of the optimal winning strategy. Note that a leaf not fulfilling $Ax \le b$ can be represented by the value $+\infty$. The minimax value for any node is recursively defined as follows:

Definition 4 (Minimax Value).

Let $S$ be a subtree of a game tree $G$ of a QIP as in Definition 2. Let $\tilde{x}_v$ denote the variable assignment corresponding to a leaf node $v$, defined by the edge labels of the path from the root to $v$. For any node $v$ of $S$ the minimax value is recursively defined by

$\textit{minimax}(v, S) = \begin{cases} c^\top \tilde{x}_v, & \text{if } v \in V_L \text{ and } A \tilde{x}_v \le b,\\ +\infty, & \text{if } v \in V_L \text{ and } A \tilde{x}_v \not\le b,\\ \min\{\, \textit{minimax}(v', S) : v' \text{ child of } v \text{ in } S \,\}, & \text{if } v \in V_\exists,\\ \max\{\, \textit{minimax}(v', S) : v' \text{ child of } v \text{ in } S \,\}, & \text{if } v \in V_\forall. \end{cases}$

For $S = G$ the value of any node is the outcome if the remaining variables are assigned optimally starting from this node, i.e. the outcome of optimal play by both players, whereas $\textit{minimax}(v, G) = +\infty$ implies that there exists no (existential) strategy below $v$ to ensure $Ax \le b$.

Hence, the value of a strategy $S$ is the minimax value of its root node and is defined by the principal variation (PV) [8], i.e. the sequence of variable assignments chosen during optimal play in $S$. From now on, $\textit{minimax}(v)$ will refer to the outcome of optimal play by both players in the entire game tree $G$, i.e. Definition 4 is used with $S = G$.
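To make the recursion in Definition 4 concrete, the following minimal Python sketch evaluates the minimax value of a binary QIP by explicit enumeration of the full game tree (the function name minimax_value and the dense numpy representation are illustrative choices, not part of the original formulation):

import numpy as np

INF = float("inf")

def minimax_value(A, b, c, quants, x=()):
    """Minimax value of the node reached by the partial assignment x.

    A, b, c: constraint matrix, right-hand side and objective of a 0/1-QIP.
    quants:  one entry per variable, 'E' (existential) or 'A' (universal).
    A leaf evaluates to c^T x if Ax <= b holds, and to +infinity otherwise.
    """
    j = len(x)
    if j == len(quants):                       # leaf: a fully played game
        xv = np.array(x)
        return float(c @ xv) if np.all(A @ xv <= b) else INF
    children = (minimax_value(A, b, c, quants, x + (v,)) for v in (0, 1))
    return min(children) if quants[j] == 'E' else max(children)

This explicit evaluation touches all $2^n$ leaves; the pruning techniques developed below aim at avoiding exactly this exhaustive traversal.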

Example 1.

Let us consider a small QIP with binary variables and a given constraint system.

The minimax value of the root node of the corresponding game tree is determined by the principal variation, i.e. the sequence of variable assignments chosen during optimal play by both players. The inner node at level 1 resulting from the alternative (non-PV) assignment of the first variable has minimax value $+\infty$, i.e. after this assignment there exists no winning strategy.

Theoretical Analysis

A quantified integer program can be solved using its deterministic equivalent program [38], which is an integer program with exponentially increased size, or via the more direct approach: a game tree search. During such a game tree search we are interested in quickly evaluating or estimating the minimax value of nodes, i.e. we want to examine the optimal (existential) strategy of the corresponding subtree. In order to speed up the search process, limiting the number of subtrees that need to be explored is extremely beneficial. Such pruning operations are applied in many search based algorithms, e.g. the alpha-beta algorithm [21], branch-and-bound [25] and DPLL [41]. In the following, we will present two approaches that allow pruning in a QIP game tree search, and thus in a strategic optimization task.
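To give a rough sense of the blow-up of the deterministic equivalent program mentioned above (an illustrative back-of-the-envelope count, not taken from the original formulation): every later existential block must be duplicated once for each universal history that can precede it, so for binary universal variables the number of variables in the deterministic equivalent grows roughly like

$\sum_{i:\, Q_{B_i} = \exists} |B_i| \cdot 2^{\#\{\text{universal variables in blocks } B_1, \ldots, B_{i-1}\}},$

i.e. exponentially in the number of universal variables, which is why the direct game tree search is attractive.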

In the case of QIPs a rather simple argument exists such that certain variable assignments never need to be checked, as they are worse than their counterparts. This concept of monotone variables is well known in the field of quantified Boolean formulas [7] and integer programming [25]. We briefly present the consequences for QIPs before we deal with our main result, Theorem 3.

Definition 5 (Monotone Variable).

A variable $x_j$ of a QIP is called monotone if it occurs with only positive or only negative sign in the matrix $A$ and the objective $c$, i.e. if the entries of $A_{\cdot,j}$ and $c_j$ are either all non-negative or all non-positive ($A_{\cdot,j}$ denotes the $j$-th column and $A_{i,\cdot}$ the $i$-th row of $A$).

Theorem 1.

Let variable $x_j$, $j \in \{1, \ldots, n\}$, of a binary QIP be monotone with all non-negative entries. For any two leaves $v_{\hat{x}}$ and $v_{\check{x}}$ of the game tree, represented by fixed variable vectors $\hat{x}$ and $\check{x}$ that coincide in all entries except for $\hat{x}_j = 0$ and $\check{x}_j = 1$, it is $\textit{minimax}(v_{\hat{x}}) \le \textit{minimax}(v_{\check{x}})$.

Proof 1.

If $A\hat{x} \not\le b$, it is $\textit{minimax}(v_{\hat{x}}) = +\infty$. Hence, some constraint $i$ exists with $A_{i,\cdot}\,\hat{x} > b_i$. Due to the monotonicity of variable $x_j$ it is $A_{i,j} \ge 0$ and hence $A_{i,\cdot}\,\check{x} \ge A_{i,\cdot}\,\hat{x} > b_i$, i.e. $\textit{minimax}(v_{\check{x}}) = +\infty$. If, on the other hand, $A\hat{x} \le b$, it is $\textit{minimax}(v_{\hat{x}}) = c^\top \hat{x} \le c^\top \check{x} \le \textit{minimax}(v_{\check{x}})$ due to $c_j \ge 0$.

Theorem 2.

Let variable $x_j$, $j \in \{1, \ldots, n\}$, of a binary QIP be monotone with all non-negative entries. For any node $v$ at depth $j-1$ and its two successors $v_0$ and $v_1$, representing the assignments $x_j = 0$ and $x_j = 1$, respectively, it holds that $\textit{minimax}(v_0) \le \textit{minimax}(v_1)$.

Proof 2.

Let there be an optimal winning strategy for the subtree of $v_1$. Due to Theorem 1, the copy of this strategy (with $x_j = 0$ instead of $x_j = 1$) is also a winning strategy for the subtree of $v_0$, with all leaf values being smaller than or equal to the corresponding leaf values of the strategy at $v_1$. Hence, $\textit{minimax}(v_0) \le \textit{minimax}(v_1)$. If there is no winning strategy for the subtree of $v_1$, it is obviously $\textit{minimax}(v_0) \le +\infty = \textit{minimax}(v_1)$.

Using this easily verifiable monotonicity allows us to omit certain subtrees a priori, since solving the subtree of the sibling node is guaranteed to yield the desired minimax value. Obviously, similar results can be achieved for monotone variables with non-positive entries.
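As a minimal illustration of how cheaply Definition 5 and Theorem 2 can be exploited (a sketch in Python with illustrative names, not Yasol's implementation), monotonicity is a pure sign check on a column of A and the corresponding objective entry:

import numpy as np

def monotone_sign(A, c, j):
    """Return +1 (-1) if variable j is monotone with only non-negative
    (non-positive) entries in column A[:, j] and in c[j]; None otherwise."""
    col = np.append(A[:, j], c[j])
    if np.all(col >= 0):
        return +1
    if np.all(col <= 0):
        return -1
    return None

For a variable with sign +1, Theorem 2 states that the branch $x_j = 0$ is at least as good for the (minimizing) existential player as $x_j = 1$; hence at an existential node only $x_j = 0$ and at a universal node only $x_j = 1$ needs to be explored, and symmetrically for sign -1.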

In contrast to this usage of prior knowledge, we also want to gather deep knowledge during the search process: strategies found in certain subtrees can be useful in order to rapidly assess the minimax value of related subtrees. The idea is based upon the observation that typically a distinct and crafty strategy is required in only a rather small part of the game tree in order to ensure the fulfillment of the constraint system: in the right-hand subtree of Figure 1 it suffices to find a fulfilling existential variable assignment for only one scenario (universal variable assignment) and reuse it in the other branches.

Figure 1: Illustrative strategy for which one universal assignment entails a simple winning strategy: regardless of future universal decisions, the existential variables can be set in a certain simple way, e.g. the existential decisions in the dashed ellipse are all the same. The other universal assignment, on the other hand, compels a more clever strategy, e.g. the existential decisions in the dotted ellipse differ depending on previous universal decisions.
Theorem 3.

[Strategic Copy-Pruning (SCP)] 
Let $k \in \{1, \ldots, n\}$ with $Q_k = \forall$ and let $\hat{x}_1, \ldots, \hat{x}_{k-1}$ be a fixed variable assignment of the variables $x_1, \ldots, x_{k-1}$. Let $v$ be the corresponding node in the game tree. Let $v_0$ and $v_1$ be the two children of $v$ corresponding to the variable assignments $x_k = 0$ and $x_k = 1$ of the universal variable $x_k$, respectively. Let there be an optimal winning strategy for the subtree below $v_{\tilde{x}_k}$ with minimax value $\tilde{v}$ defined by the variable assignment $\tilde{x}$ (extending $\hat{x}_1, \ldots, \hat{x}_{k-1}$), i.e. $\textit{minimax}(v_{\tilde{x}_k}) = \tilde{v} = c^\top \tilde{x}$. If the minimax value of the copied strategy for the subtree below $v_{1-\tilde{x}_k}$ - obtained by adoption of the future (i.e. belonging to variable blocks after the one containing $x_k$) existential variable assignments as in $\tilde{x}$ - is not larger than $\tilde{v}$, and if this copied strategy constitutes a winning strategy, then $\textit{minimax}(v) = \tilde{v}$. Formally: if both

$c_k (1 - 2\tilde{x}_k) + \sum_{j > k,\, Q_j = \forall} \big( \max\{0, c_j\} - c_j \tilde{x}_j \big) \;\le\; 0$   (1)

and

$A_{i,\cdot}\, \tilde{x} + A_{i,k} (1 - 2\tilde{x}_k) + \sum_{j > k,\, Q_j = \forall} \big( \max\{0, A_{i,j}\} - A_{i,j} \tilde{x}_j \big) \;\le\; b_i$   (2)

for all constraints $i \in \{1, \ldots, m\}$, then $\textit{minimax}(v) = \tilde{v}$.

For clarification, note that Condition (1) ensures that the change in the minimax value of the copied strategy, resulting from flipping $x_k$ and using the worst-case assignment of the remaining future universal variables, is not positive, i.e. that its minimax value is still smaller than or equal to $\tilde{v}$. Condition (2) verifies that every constraint is satisfied in each leaf of the copied strategy by ensuring the fulfillment of each constraint in its specific worst-case scenario.

Proof 3.

If (2) is satisfied, there automatically exists a winning strategy for the subtree corresponding to $x_k = 1 - \tilde{x}_k$ with root node $v_{1-\tilde{x}_k}$, since for any future universal variable assignment the assignment of the upcoming existential variables as in $\tilde{x}$ fulfills the constraint system. Further, the minimax value of this copied strategy is smaller than or equal to $\tilde{v}$ due to Condition (1).

Hence, the (still unknown) optimal strategy for the subtree below $v_{1-\tilde{x}_k}$ has a minimax value smaller than or equal to $\tilde{v}$, i.e. $\textit{minimax}(v_{1-\tilde{x}_k}) \le \tilde{v} = \textit{minimax}(v_{\tilde{x}_k})$. Therefore, with Definition 4, $\textit{minimax}(v) = \max\{\textit{minimax}(v_0), \textit{minimax}(v_1)\} = \tilde{v}$.

Note that, since $A\tilde{x} \le b$, Condition (2) is trivially fulfilled for any constraint $i$ with $A_{i,j} = 0$ for all $j \ge k$ with $Q_j = \forall$, i.e. constraints that are not influenced by $x_k$ or future universal variables do not need to be examined. Hence, only a limited number of constraints needs to be checked in case of a sparse matrix. Further, note that (1) is fulfilled if $c_j = 0$ for all $j \ge k$ with $Q_j = \forall$, i.e. if neither $x_k$ nor the future universal variables have a direct effect on the objective value. In particular, if $c = 0$, i.e. it is a satisfiability problem rather than an optimization problem, Condition (1) can be neglected as it is always fulfilled.
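The two conditions can be evaluated directly from A, b, c and the known assignment $\tilde{x}$. The following Python sketch checks Conditions (1) and (2) as stated above for a single universal variable (0-based index k here); the function name and the dense matrix representation are illustrative, and the check is deliberately unoptimized:

import numpy as np

def scp_conditions_hold(A, b, c, quants, x_tilde, k):
    """Check Conditions (1) and (2) of Theorem 3 for the universal variable with
    (0-based) index k, given the PV assignment x_tilde of the sibling subtree.

    The copied strategy flips x_k and keeps all later existential assignments as
    in x_tilde, while later universal variables are taken at their worst case."""
    n = len(quants)
    flip = 1 - 2 * x_tilde[k]                   # objective/constraint change of flipping x_k
    fut_univ = [j for j in range(k + 1, n) if quants[j] == 'A']

    # Condition (1): worst-case objective of the copied strategy stays <= c^T x_tilde.
    delta_obj = c[k] * flip + sum(max(0.0, c[j]) - c[j] * x_tilde[j] for j in fut_univ)
    if delta_obj > 0:
        return False

    # Condition (2): every constraint holds in its worst-case leaf of the copied strategy.
    lhs = A @ np.asarray(x_tilde, dtype=float)  # constraint values along the known PV leaf
    for i in range(A.shape[0]):
        worst = lhs[i] + A[i, k] * flip + \
                sum(max(0.0, A[i, j]) - A[i, j] * x_tilde[j] for j in fut_univ)
        if worst > b[i]:
            return False
    return True

If the check succeeds, the minimax value of the whole universal node is settled by the already known value $c^\top \tilde{x}$, exactly as stated in Theorem 3.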

The theoretical result from Theorem 3 must be implemented cautiously. For a brief explanation of Algorithm 1 consider Figure 2 representing the final four variables of a QIP with strictly alternating quantifiers.

Figure 2: Illustrative game tree: Circular nodes are existential decision nodes, rectangular nodes are universal decision nodes and pentagonal nodes are leaves. The dashed lines indicate that those underlying subtrees might be omitted if Theorem 3 applies.

We assume the search has found a fulfilling variable assignment (represented by node G) whose assignment of the final variable block is optimal with regard to the objective. If the requirements of Theorem 3 are fulfilled at the universal node directly above G, its minimax value is determined and we do not have to calculate the sibling subtree explicitly, as the existence of a winning strategy below it is ensured. If this attempt is successful, the application of Theorem 3 at the next universal node further up would be attractive. However, one must ensure that the existential assignment made in between is indeed optimal at this stage. If this optimality cannot be guaranteed, but Conditions (1) and (2) are fulfilled at that universal node, we can still conclude the existence of a winning strategy for the corresponding subtree, but we cannot yet specify its minimax value exactly. However, storing this information in the form of a bound can be advantageous (see line 20 of Algorithm 1).

Data: variable assignment $\tilde{x}$, objective value $\tilde{v}$
1       $v$ = last universal node on the current path;
2       $i$ = index of the variable associated with $v$;
3       mode = Pruning;
4       while $v$ is not the root do
5             if $x_i$ is monotone and $\tilde{x}_i$ is set accordingly then
6                   mark the untraversed sibling branch as finished; goto line 23;
7             end if
8             if $Q_i = \exists$ then  // $x_i$ is an existential variable
9                   if the optimality of $\tilde{x}_i$ at this node cannot be guaranteed or is unknown then
10                        mode = BoundUpdate; goto line 23;
11                  end if
12            else  // $x_i$ is a universal variable
13                  if Condition (1) is violated then return;
14                  for each constraint $r$ with $A_{r,i} \ne 0$ do
15                        if Condition (2) is violated for $r$ then return;
16                  end for
17                  if mode = Pruning then
18                        mark the untraversed child of $v$ as finished;
19                  else
20                        update the bound on the minimax value of the untraversed child of $v$;
21                  end if
22            end if
23            $v$ = predecessor($v$);
24            $i$ = index of the variable associated with $v$;
25      end while
Algorithm 1: RecycleStrategy($\tilde{x}$, $\tilde{v}$)

Note that as soon as the first universal node is found during this backtracking for which Condition (1) or (2) is violated, Algorithm 1 stops. Further, note that if Condition (2) is fulfilled at some universal node, then for the next universal node above, Condition (2) only needs to be re-checked for those constraints in which the variable corresponding to the node of interest occurs (line 14 of Algorithm 1). This allows a very fast verification of Condition (2) if the matrix $A$ is sparse, making Theorem 3 practically applicable. An outline of our implementation is given in Algorithm 1, which either prunes subtrees, i.e. marks nodes as finished ("mode = Pruning" as long as the "optimal winning strategy" condition of Theorem 3 is met, see line 9), or updates the bounds on the minimax value of universal nodes ("mode = BoundUpdate" as soon as Theorem 3 cannot be applied anymore). The presented function is invoked when, for fixed earlier variables, the optimal assignment of the final variable block has been found, as exemplarily described above. The variable assignment $\tilde{x}$ and the corresponding objective value $\tilde{v}$ are the input of Algorithm 1.

Note that checking Condition (1) (line 13 of Algorithm 1) requires only a linear number of operations, while line 15 is executed once for each constraint in which the current universal variable occurs, and each such check is linear in the number of non-zero entries of the corresponding row. Hence, Algorithm 1 has an overall runtime that is linear in the input size (the size of the matrix A). In our experiments, where each of the universal variables occurs in only a few rows and the matrix is sparse, the runtime of the heuristic is negligible.
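For readers who prefer executable pseudocode, the control flow of Algorithm 1 can be summarized by the following Python sketch; the callables conditions_hold, is_optimal_here, is_monotone_consistent, mark_finished and update_bound are placeholders for the checks and bookkeeping described above, not part of Yasol's actual interface:

def recycle_strategy(path, quants, conditions_hold, is_optimal_here,
                     is_monotone_consistent, mark_finished, update_bound):
    """Walk the current path bottom-up, as in Algorithm 1 (simplified sketch).

    path: variable indices along the path from the root to the examined leaf.
    While the optimality precondition of Theorem 3 holds we prune, i.e. mark the
    untraversed sibling subtrees as finished; afterwards we only update bounds."""
    mode = "Pruning"
    for j in reversed(path):
        if is_monotone_consistent(j):      # monotone variable already at its preferred value:
            mark_finished(j)               # the sibling branch never has to be explored
            continue
        if quants[j] == 'E':               # existential variable
            if not is_optimal_here(j):     # optimality not guaranteed above this point
                mode = "BoundUpdate"
        else:                              # universal variable
            if not conditions_hold(j):     # Condition (1) or (2) violated: stop the whole pass
                return
            if mode == "Pruning":
                mark_finished(j)           # Theorem 3 applies: the sibling subtree is settled
            else:
                update_bound(j)            # only a bound on the sibling's minimax value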

Example 2.

Let us consider the QIP whose optimal winning strategy is depicted in Figure 3; the min/max alternation in the objective and the binary variable domains are omitted in the following discussion.

Starting at the root node of the corresponding game tree, we can immediately omit one of the two subtrees due to the monotonicity of the corresponding variable. Keep in mind that the result of Theorem 3 is particularly beneficial if the search process of a QIP solver first examines the principal variation, i.e. the variable assignment defining the actual minimax value.

Figure 3: Optimal winning strategy for the stated QIP. Circular nodes are existential decision nodes (from $V_\exists$), rectangular nodes are universal decision nodes and pentagonal nodes are leaves. The values given in the leaves constitute the objective value corresponding to the variable assignment along the path from the root to this leaf. The dashed lines indicate that those existential decisions were simply copied from the path drawn thicker.

Assume the search process follows the path drawn thick in Figure 3, i.e. the principal variation. The final existential assignment on this path is optimal, as its alternative would violate the second constraint; this fixes the minimax value of the deepest existential node on the path. On the way up in the search tree we then want to determine the minimax value of the universal node above it. As Conditions (1) and (2) are fulfilled there, Theorem 3 applies, which means we have (easily) verified a winning strategy in the untraversed sibling subtree with minimax value smaller than or equal to the value of the thick path. At the existential node above, the alternative assignment is obviously to the detriment of the existential player, because the second constraint would become unfulfillable. At the upper universal node we once again try to apply Theorem 3 by copying the existential decisions of the thick path to the not yet investigated subtree. As Conditions (1) and (2) are fulfilled, this attempt is successful. Note that by applying Theorem 3 the minimax values of the subtrees below the copied branches are not known exactly: we only obtain upper bounds, whereas a better strategy exists in one of these subtrees, obtainable by a different existential assignment there.
Hence, by finding the principal variation first (thick path), applying Theorem 2 at the root, Theorem 3 at the two universal nodes and some further reasoning from linear programming at the intermediate existential node, the minimax value at the root node was found to be 4, together with the corresponding optimal first-stage solution.

Theorem 3 can particularly come into effect if the branching decisions at universal nodes result in rather vicious scenarios, i.e. in variable assignments restricting the constraint system and maximizing the objective value. Hence, the applicability of the presented results largely depends on the implemented diving and sorting heuristic.

Solver, Experiments and Results

The open source solver Yasol [9] (we accessed the sources at http://www.q-mip.org), which is used to analyze the theoretical findings, combines two well-known search mechanisms: the alpha-beta algorithm [21], traditionally used in a game tree search environment, and a generalization of the DPLL algorithm [41], used to solve SAT problems. We extended the main search algorithm to a scout algorithm [32]. The solver proceeds in two phases in order to find optimal solutions of 0/1-QIP instances.

  • Phase 1 (Feasibility Phase): The instance’s feasibility is determined, i.e. it is checked whether the instance has any solution at all. During this phase, the solver acts like a QBF solver [7, 41] with some extra abilities. Technically it performs a null window search [28].

  • Phase 2 (Optimization Phase): The solution space is explored in order to find the provable optimal solution. The (nega)scout algorithm is enhanced by non-chronological backtracking and backward implication [41, 14].

We enhanced this solver in two different ways:

  • The detection of monotone variables (MONO) was implemented and their properties exploited during the game tree search.

  • The adoption of existing winning strategies (strategic copy-pruning (SCP)) from one branch of a universal node to another was realized.

The SCP-enhancement (made possible by Theorem 3) can be switched on and off in both phases separately.

The instances used to study the effect of the presented results are runway scheduling problems under uncertainty modeled as QIPs. They were created following the ideas presented in [19]. The task is to find a b-matching: all airplanes must be assigned to exactly one time slot, while one time slot can take in at most a given number of airplanes. Furthermore, the airplanes must land within uncertain time windows (sets of time slots). Reasons for such variations in the arrival time might be adjusted airspeed (due to weather) or operational problems. Hence, we are interested in an initial matching plan that can be fixed cheaply if the mandatory time windows for some planes do not contain the initially scheduled time slot. The testset contains 29 instances (the studied benchmark instances and a brief explanation can be found at http://www.q-mip.org/index.php?id=41), varying in the number of planes, the number of time slots, the type of allowed disturbances, the number of universal blocks and the cost function. In terms of the sizes of the (solved feasible) instances, this results in 100-300 existential variables, 10-30 universal variables and 50-100 constraints.
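As a rough sketch of the deterministic core of such a model (an illustration based on the description above; the uncertain time windows and the recovery costs from [19] are not spelled out here), with binary variables $x_{p,t}$ indicating that plane $p$ is assigned to slot $t$ and slot capacities $b_t$, the b-matching conditions read

$\sum_{t} x_{p,t} = 1 \;\; \text{for every plane } p, \qquad \sum_{p} x_{p,t} \le b_t \;\; \text{for every time slot } t, \qquad x_{p,t} \in \{0,1\}.$

The universally quantified variables then model which time slots remain admissible for each plane, and the later existential stages encode the (cheapest) repair of the initial assignment.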

In Table 1 the number of solved instances is displayed for different settings. For each instance a maximum of one hour solution time was provided. All experiments were executed on a PC with an Intel i7-4790 (3.6 GHz) processor and 32GB RAM.

MONO   SCP         # solved
off    off         14
off    only feas   16
off    only opt    21
off    both        24
on     off         23
on     only feas   24
on     only opt    25
on     both        25
Table 1: Number of solved instances dependent on the solver setting: exploitation of monotone variables (MONO) and strategic copy-pruning (SCP) in different phases of the solver.

If neither of the presented procedures is used, 14 out of 29 instances are solved. Without taking advantage of the monotonicity, SCP can be beneficial in either solution phase regarding the number of solved instances. If applied in both phases, the number of solved instances increases to 24. When additionally exploiting the monotonicity, the number of solved instances increases to 25. However, SCP turns out to be somewhat disadvantageous in the feasibility phase: even though an additional instance is solved (24) compared to the setting with SCP turned off (23), the average solution time increases. In Table 2 the average time needed for the 23 instances solved by all versions with monotonicity detection turned on is displayed.

SCP setting   off       only feas   only opt   both
avg. time     84.17s    101.70s     25s        32s
Table 2: Average time needed for the 23 instances solved by all four settings with activated monotonicity detection.

Four instances were not solved at all. These instances have more than 100 universal variables and more than 10000 existential variables. However, there are also infeasible instances of the same magnitude that are solved within seconds. Nonetheless, detecting a contradiction leading to infeasibility can obviously be much faster than finding and ensuring an optimal strategy with more than $2^{100}$ leaves.

The best setting is to use SCP only in the optimization phase while exploiting variable monotonicity, because additionally using SCP in the feasibility phase slightly increases the average solution time. Our conjecture is that this is due to biasing effects. Experiments conducted on a QBF test collection of 797 instances taken from www.qbflib.org (QBF instances can easily be converted into the QIP format) also show positive effects of SCP. With the setting 'MONO on' and 'SCP off', 644 instances are solved. If SCP is turned on in both phases (it actually is only invoked in the feasibility phase, as no optimization phase is conducted), 674 instances can be solved. Further, the solution time on the instances solved in both cases decreased by 15% when SCP is used.

In order to assess the performance results, we also built the deterministic equivalent program (DEP) of each of the 29 runway scheduling instances and tried to solve the resulting integer program using CPLEX 12.6.1.0, a standard MIP solver. Only six of the 29 instances were solved this way within the same amount of time (one hour), while for 14 instances not even the construction of the corresponding DEP could be finished, some of them because of the limited memory of 32 GB RAM.

Conclusion

We introduced the concept of strategic copy-pruning (SCP) during game tree search for quantified integer programs, which are robust multistage optimization problems. SCP makes it possible to omit certain subtrees during the game tree search by implicitly verifying the existence of a strategy in linear time: finding a single leaf and applying SCP can be sufficient to guarantee an optimal strategy in a subtree. This stands in contrast to existing algorithms such as Kawano's simulation, (nega)scout, SSS* and MTD(f), in which the existence of a strategy is proven by traversing it explicitly. In addition to the theoretical results, we presented how those findings can be applied in a game tree search environment. Even though the general theoretical result implies linear computing time in the number of non-zero elements of the constraint matrix, the presented partial realization of SCP as well as the sparsity of matrices allow high-speed pruning. Experiments showed that utilizing the presented approach in the open source solver Yasol resulted in a massive boost in both the number of solved instances and the solution time on a particular testset. Because of the strictly formal framework provided by QIPs, we were able to derive the SCP procedure. It would be interesting to see whether SCP can be transferred to other areas of optimization under uncertainty.

References

  • [1] S. Akl and M. Newborn. The principal continuation and the killer heuristic. In Proceedings of the 1977 annual conference, ACM ’77, Seattle, Washington, USA, pages 466–473, 1977.
  • [2] V. Allis. Searching for Solutions in Games and Artificial Intelligence. PhD thesis, Maastricht, Maastricht, The Netherlands, 1994.
  • [3] R. Bellman. Dynamic Programming. Dover Publications, Incorporated, 2003.
  • [4] A. Ben-Tal, L. E. Ghaoui, and A. Nemirovski. Robust Optimization. Princeton University Press, 2009.
  • [5] D. Bertsimas, D. Brown, and C. Caramanis. Theory and applications of robust optimization. SIAM Rev., 53(3):464–501, 2011.
  • [6] J. Birge and F. Louveaux. Introduction to Stochastic Programming. Springer Publishing Company, Incorporated, 2nd edition, 2011.
  • [7] M. Cadoli, M. Schaerf, A. Giovanardi, and M. Giovanardi. An algorithm to evaluate quantified boolean formulae and its experimental evaluation. Journal of Automated Reasoning, 28(2):101–142, 2002.
  • [8] M. Campbell and T. Marsland. A comparison of minimax tree search algorithms. Artificial Intelligence, 20(4):347 – 367, 1983.
  • [9] T. Ederer, M. Hartisch, U. Lorenz, T. Opfer, and J. Wolf. Yasol: An open source solver for quantified mixed integer programs. In Advances in Computer Games - 15th International Conference, ACG 2017, pages 224–233, 2017.
  • [10] T. Ederer, U. Lorenz, A. Martin, and J. Wolf. Quantified linear programs: A computational study. In Proceedings of the 19th European Conference on Algorithms, ESA’11, pages 203–214, Berlin, Heidelberg, 2011. Springer-Verlag.
  • [11] T. Ederer, U. Lorenz, T. Opfer, and J. Wolf. Multistage optimization with the help of quantified linear programming. In Operations Research Proceedings 2014, pages 369–375. Springer, 2016.
  • [12] R. Feldmann, P. Mysliwietz, and B. Monien. Distributed game tree search on a massively parallel system, pages 270–288. Springer Berlin Heidelberg, Berlin, Heidelberg, 1992.
  • [13] R. Gerber, W. Pugh, and M. Saksena. Parametric dispatching of hard real-time tasks. IEEE Trans. Computers, 44(3):471–479, 1995.
  • [14] E. Giunchiglia, M. Narizzano, and A. Tacchella. Backjumping for quantified boolean logic satisfiability. Artificial Intelligence, 145(1):99–120, 2003.
  • [15] M. Goerigk and A. Schöbel. Algorithm Engineering in Robust Optimization, pages 245–279. Springer International Publishing, Cham, 2016.
  • [16] A. Gupta, M. Pál, R. Ravi, and A. Sinha. Boosted sampling: Approximation algorithms for stochastic optimization. In Proceedings of the Thirty-sixth Annual ACM Symposium on Theory of Computing, STOC ’04, pages 417–426, New York, NY, USA, 2004. ACM.
  • [17] M. Hartisch, T. Ederer, U. Lorenz, and J. Wolf. Quantified integer programs with polyhedral uncertainty set. In Computers and Games - 9th International Conference, CG 2016, pages 156–166, 2016.
  • [18] T. Hauk, M. Buro, and J. Schaeffer. Rediscovering *-minimax search. In H. J. van den Herik, Y. Björnsson, and N. S. Netanyahu, editors, Computers and Games, pages 35–50, Berlin, Heidelberg, 2006. Springer Berlin Heidelberg.
  • [19] A. Heidt, H. Helmke, M. Kapolke, F. Liers, and A. Martin. Robust runway scheduling under uncertain conditions. Journal of Air Transport Management, 56:28 – 37, 2016.
  • [20] Y. Kawano. Using similar positions to search game trees. Games of No Chance, 29:193–202, 1996.
  • [21] D. Knuth and R. Moore. An analysis of alpha-beta pruning. Artificial Intelligence, 6(4):293 – 326, 1975.
  • [22] C. Liebchen, M. Lübbecke, R. Möhring, and S. Stiller. The Concept of Recoverable Robustness, Linear Programming Recovery, and Railway Applications, pages 1–27. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009.
  • [23] U. Lorenz and J. Wolf. Solving multistage quantified linear optimization problems with the alpha–beta nested benders decomposition. EURO Journal on Computational Optimization, 3(4):349–370, 2015.
  • [24] H. Marchand, A. Martin, R. Weismantel, and L. Wolsey. Cutting planes in integer and mixed integer programming. Discrete Applied Mathematics, 123(1):397 – 446, 2002.
  • [25] G. Nemhauser and L. Wolsey. Integer and Combinatorial Optimization. Wiley-Interscience, New York, NY, USA, 1988.
  • [26] D. T. Nguyen, A. Kumar, and H. C. Lau. Collective multiagent sequential decision making under uncertainty. In Proceedings of the Thirty-First National Conference on Artificial Intelligence, AAAI’17. AAAI Press, 2017.
  • [27] C. Papadimitriou. Games against nature. Journal of Computer and System Sciences, 31(2):288 – 301, 1985.
  • [28] J. Pearl. Scout: A simple game-searching algorithm with proven optimal properties. In Proceedings of the First AAAI Conference on Artificial Intelligence, AAAI’80, pages 143–145. AAAI Press, 1980.
  • [29] W. Pijls and A. de Bruin. Game tree algorithms and solution trees. Theoretical Computer Science, 252(1):197 – 215, 2001.
  • [30] A. Plaat, J. Schaeffer, W. Pijls, and A. de Bruin. Best-first fixed-depth minimax algorithms. Artif. Intell., 87(1-2):255–293, Nov. 1996.
  • [31] A. Plaat, J. Schaeffer, W. Pijls, and A. de Bruin. Exploiting graph properties of game trees. In Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 1, AAAI’96, pages 234–239. AAAI Press, 1996.
  • [32] A. Reinefeld. An improvement to the scout tree search algorithm. ICGA Journal, 6(4):4–14, 1983.
  • [33] J. Schaeffer. The history heuristic and alpha-beta search enhancements in practice. IEEE Trans. Pattern Anal. Mach. Intell., 11(11):1203–1212, 1989.
  • [34] A. Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons, Inc., New York, NY, USA, 1986.
  • [35] D. Silver, A. Huang, C. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529:484–503, 2016.
  • [36] G. C. Stockman. A minimax algorithm better than alpha-beta? Artificial Intelligence, 12(2):179–196, 1979.
  • [37] K. Subramani. Analyzing selected quantified integer programs. In D. Basin and M. Rusinowitch, editors, Automated Reasoning, pages 342–356, Berlin, Heidelberg, 2004. Springer Berlin Heidelberg.
  • [38] R.-B. Wets. Stochastic programs with fixed recourse: The equivalent deterministic program. SIAM Review, 16(3):309–339, 1974.
  • [39] M. Winands, H. van den Herik, J. Uiterwijk, and E. van der Werf. Enhanced forward pruning. Information Sciences, 175(4):315 – 329, 2005.
  • [40] M. Winands, E. van der Werf, H. van den Herik, and J. Uiterwijk. The relative history heuristic. In Computers and Games, 4th International Conference, CG 2004, pages 262–272, 2004.
  • [41] L. Zhang. Searching for Truth: Techniques for Satisfiability of Boolean Formulas. PhD thesis, Princeton, NJ, USA, 2003.