Feedback-Based Tree Search for Reinforcement Learning

Daniel R. Jiang    Emmanuel Ekwedike    Han Liu

Inspired by recent successes of Monte-Carlo tree search (MCTS) in a number of artificial intelligence (AI) application domains, we propose a model-based reinforcement learning (RL) technique that iteratively applies MCTS on batches of small, finite-horizon versions of the original infinite-horizon Markov decision process. The terminal condition of the finite-horizon problems, or the leaf-node evaluator of the decision tree generated by MCTS, is specified using a combination of an estimated value function and an estimated policy function. The recommendations generated by the MCTS procedure are then provided as feedback in order to refine, through classification and regression, the leaf-node evaluator for the next iteration. We provide the first sample complexity bounds for a tree search-based RL algorithm. In addition, we show that a deep neural network implementation of the technique can create a competitive AI agent for the popular multi-player online battle arena (MOBA) game King of Glory.

MCTS, reinforcement learning, tree search

1 Introduction

Monte-Carlo tree search (MCTS), introduced in Coulom (2006) and surveyed in detail by Browne et al. (2012), has received attention in recent years for its successes in gameplay artificial intelligence (AI), culminating in the Go-playing AI AlphaGo (Silver et al., 2016). MCTS seeks to iteratively build the decision tree associated with a given Markov decision process (MDP) so that attention is focused on “important” areas of the state space, assuming a given initial state (or root node of the decision tree). The intuition behind MCTS is that if rough estimates of state or action values are given, then it is only necessary to expand the decision tree in the direction of states and actions with high estimated value. To accomplish this, MCTS utilizes the guidance of leaf-node evaluators (either a policy function rollout (Chaslot et al., 2006), a value function evaluation (Campbell et al., 2002; Enzenberger, 2004), or a mixture of both (Silver et al., 2016)) to produce estimates of downstream values once the tree has reached a certain depth (Browne et al., 2012). The information from the leaf nodes is then backpropagated up the tree. The performance of MCTS depends heavily on the quality of the policy/value approximations (Gelly & Silver, 2007); at the same time, the successes of MCTS in Go show that MCTS improves upon a given policy when the policy is used for leaf evaluation, and in fact, it can be viewed as a policy improvement operator (Silver et al., 2017). In this paper, we study a new feedback-based framework, wherein MCTS updates its own leaf-node evaluators using observations generated at the root node.

MCTS is typically viewed as an online planner, where a decision tree is built starting from the current state as the root node (Chaslot et al., 2006, 2008; Hingston & Masek, 2007; Maîtrepierre et al., 2008; Cazenave, 2009; Méhat & Cazenave, 2010; Gelly & Silver, 2011; Gelly et al., 2012; Silver et al., 2016). The standard goal of MCTS is to recommend an action for the root node only. After the action is taken, the system moves forward and a new tree is created from the next state (statistics from the old tree may be partially saved or completely discarded). MCTS is thus a “local” procedure (in that it only returns an action for a given state) and is inherently different from value function approximation or policy function approximation approaches where a “global” policy (one that contains policy information about all states) is built. In real-time decision-making applications, it is more difficult to build an adequate “on-the-fly” local approximation than it is to use a pre-trained global policy in the short amount of time available for decision-making. For games like Chess or Go, online planning using MCTS may be appropriate, but in games where fast decisions are necessary (e.g., Atari or MOBA video games), tree search methods are too slow (Guo et al., 2014). The proposed algorithm is intended to be used in an off-policy fashion during the reinforcement learning (RL) training phase. Once the training is complete, the policies associated with leaf-node evaluation can be implemented to make fast, real-time decisions without any further need for tree search.

Main Contributions. These characteristics of MCTS motivate our proposed method, which attempts to leverage the local properties of MCTS into a training procedure that iteratively builds a global policy across all states. The idea is to apply MCTS on batches of small, finite-horizon versions of the original infinite-horizon Markov decision process (MDP). A rough summary is as follows: (1) initialize an arbitrary value function and a policy function; (2) start (possibly in parallel) a batch of MCTS instances, limited in search-depth, initialized from a set of sampled states, while incorporating a combination of the value and policy function as leaf-node evaluators; (3) update both the value and policy functions using the latest MCTS root node observations; (4) repeat from step (2). This method exploits the idea that an MCTS policy is better than either of the leaf-node evaluator policies alone (Silver et al., 2016), yet improved leaf-node evaluators also improve the quality of MCTS (Gelly & Silver, 2007). The primary contributions of this paper are summarized below.

  1. We propose a batch, MCTS-based RL method that operates on continuous state, finite action MDPs and exploits the idea that leaf-evaluators can be updated to produce a stronger tree search using previous tree search results. Function approximators are used to track policy and value function approximations, where the latter is used to reduce the length of the tree search rollout (oftentimes, the rollout of the policy becomes a computational bottleneck in complex environments).

  2. We provide a full sample complexity analysis of the method and show that with large enough sample sizes and sufficiently large tree search effort, the performance of the estimated policies can be made close to optimal, up to some unavoidable approximation error. To our knowledge, batch MCTS-based RL methods have not been theoretically analyzed.

  3. An implementation of the feedback-based tree search algorithm using deep neural networks is tested on the recently popular MOBA game King of Glory (a North American version of the same game is titled Arena of Valor). The result is a competitive AI agent for the 1v1 mode of the game.

2 Related Work

The idea of leveraging tree search during training was first explored by Guo et al. (2014) in the context of Atari games, where MCTS was used to generate offline training data for a supervised learning (classification) procedure. The authors showed that by using the power of tree search offline, the resulting policy was able to outperform the deep Q-network (DQN) approach of Mnih et al. (2013). A natural next step is to repeatedly apply the procedure of Guo et al. (2014). In building AlphaGo Zero, Silver et al. (2017) extends the ideas of Guo et al. (2014) into an iterative procedure, where the neural network policy is updated after every episode and then reincorporated into tree search. The technique was able to produce a superhuman Go-playing AI (and improves upon the previous AlphaGo versions) without any human replay data.

Our proposed algorithm is a provably near-optimal variant (and in some respects, generalization) of the AlphaGo Zero algorithm. The key differences are the following: (1) our theoretical results cover a continuous, rather than finite, state space setting, (2) the environment is a stochastic MDP rather than a sequential deterministic two player game, (3) we use batch updates, (4) the feedback of previous results to the leaf-evaluator manifests as both policy and value updates rather than just the value (as Silver et al. (2017) does not use policy rollouts).

Anthony et al. (2017) proposes a general framework called expert iteration that combines supervised learning with tree search-based planning. The methods described in Guo et al. (2014), Silver et al. (2017), and the current paper can all be (at least loosely) expressed under the expert iteration framework. However, no theoretical insights were given in any of these previous works and our paper intends to fill this gap by providing a full theoretical analysis of an iterative, MCTS-based RL algorithm. Our analysis relies on the concentrability coefficient idea of Munos (2007) for approximate value iteration and builds upon the work on classification based policy iteration (Lazaric et al., 2016), approximate modified policy iteration (Scherrer et al., 2015), and fitted value iteration (Munos & Szepesvári, 2008).

Sample complexity results for MCTS are relatively sparse. Teraoka et al. (2014) gives a high probability upper bound on the number of playouts needed to achieve $\epsilon$-accuracy at the root node for a stylized version of MCTS. More recently, Kaufmann & Koolen (2017) provided high probability bounds on the sample complexity of two other variants of MCTS. In this paper, we do not require any particular implementation of MCTS, but make a generic assumption on its accuracy that is inspired by these results.

3 Problem Formulation

Consider a discounted, infinite-horizon MDP with a continuous state space $S$ and finite action space $A$. For all $(s, a) \in S \times A$, the reward function satisfies $r(s, a) \in [0, r_{\max}]$. The transition kernel, which describes transitions to the next state given current state $s$ and action $a$, is written $P(\cdot \,|\, s, a)$ — a probability measure over $S$. Given a discount factor $\gamma \in (0, 1)$, the value function $V^{\pi}$ of a policy $\pi$ starting in state $s \in S$ is given by

$$V^{\pi}(s) = \mathbf{E}\Bigl[\, \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, \pi(s_t)) \,\Big|\, s_0 = s \Bigr],$$

where $s_t$ is the state visited at time $t$. Let $\Pi$ be the set of all stationary, deterministic policies (i.e., mappings from state to action). The optimal value function is obtained by maximizing over all policies: $V^{*}(s) = \sup_{\pi \in \Pi} V^{\pi}(s)$.

Both $V^{\pi}$ and $V^{*}$ are bounded by $V_{\max} = r_{\max} / (1 - \gamma)$. We let $B(S; V_{\max})$ be the set of bounded, real-valued functions mapping $S$ to $[0, V_{\max}]$. We frequently make use of the shorthand operator $T_{\pi}$, where the quantity $(T_{\pi} J)(s)$ is interpreted as the reward gained by taking an action according to $\pi$, receiving the reward $r(s, \pi(s))$, and then receiving an expected terminal reward according to the argument $J$:

$$(T_{\pi} J)(s) = r(s, \pi(s)) + \gamma \int_{S} J(s')\, P(ds' \,|\, s, \pi(s)).$$
It is well-known that $V^{\pi}$ is the unique fixed-point of $T_{\pi}$, meaning $T_{\pi} V^{\pi} = V^{\pi}$ (Puterman, 2014). The Bellman operator $T$ is similarly defined using the maximizing action:

$$(T J)(s) = \max_{a \in A} \Bigl[\, r(s, a) + \gamma \int_{S} J(s')\, P(ds' \,|\, s, a) \Bigr].$$
It is also known that $V^{*}$ is the unique fixed-point of $T$ (Puterman, 2014) and that acting greedily with respect to the optimal value function produces an optimal policy:

$$\pi^{*}(s) \in \arg\max_{a \in A} \Bigl[\, r(s, a) + \gamma \int_{S} V^{*}(s')\, P(ds' \,|\, s, a) \Bigr].$$
We use the notation $T_{\pi}^{d}$ to mean $d$ compositions of the mapping $T_{\pi}$, e.g., $T_{\pi}^{2} J = T_{\pi}(T_{\pi} J)$. Lastly, let $P_{\pi}$ denote the transition kernel induced by a policy $\pi$ and let $\rho$ be a distribution over $S$. We define left and right versions of the operator $P_{\pi}$:

$$(\rho P_{\pi})(ds') = \int_{S} P(ds' \,|\, s, \pi(s))\, \rho(ds), \qquad (P_{\pi} J)(s) = \int_{S} J(s')\, P(ds' \,|\, s, \pi(s)).$$

Note that $P_{\pi} J \in B(S; V_{\max})$ whenever $J \in B(S; V_{\max})$, and $\rho P_{\pi}$ is another distribution over $S$.
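On a finite discretization, the operators above are straightforward to compute. The sketch below applies $T$ repeatedly on a hypothetical two-state, two-action MDP (the numbers are purely illustrative, not from the paper) and converges to the fixed point $V^{*}$.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP used only to illustrate the operators.
gamma = 0.9
r = np.array([[1.0, 0.0],            # r[s, a]
              [0.0, 2.0]])
P = np.array([[[0.8, 0.2],           # P[a, s, s']: next-state distribution
               [0.1, 0.9]],
              [[0.5, 0.5],
               [0.3, 0.7]]])

def bellman_policy(J, pi):
    """(T_pi J)(s) = r(s, pi(s)) + gamma * E[J(s') | s, pi(s)]."""
    return np.array([r[s, pi[s]] + gamma * P[pi[s], s] @ J
                     for s in range(len(J))])

def bellman(J):
    """(T J)(s) = max_a [ r(s, a) + gamma * E[J(s') | s, a] ]."""
    Q = np.array([[r[s, a] + gamma * P[a, s] @ J for a in range(P.shape[0])]
                  for s in range(len(J))])
    return Q.max(axis=1)

# Value iteration: repeated application of T converges to the unique
# fixed point V* because T is a gamma-contraction.
J = np.zeros(2)
for _ in range(2000):
    J = bellman(J)
```

Because $T$ is a $\gamma$-contraction, the iterate satisfies the fixed-point property $TJ \approx J$ to machine precision after enough sweeps.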

4 Feedback-Based Tree Search Algorithm

We now formally describe the proposed algorithm. The parameters are as follows. Let $\tilde{\Pi}$ be a space of approximate policies and $\mathcal{F} \subseteq B(S; V_{\max})$ be a space of approximate value functions (e.g., classes of neural network architectures). We let $\pi_k \in \tilde{\Pi}$ be the policy function approximation (PFA) and $V_k \in \mathcal{F}$ be the value function approximation (VFA) at iteration $k$ of the algorithm. Parameters subscripted with ‘0’ are used in the value function approximation (regression) phase and parameters subscripted with ‘1’ are used in the tree search phase. The full description of the procedure is given in Figure 1, using the notation $T_a = T_{\pi_a}$, where $\pi_a$ maps all states to the action $a$. We now summarize the two phases, VFA (Steps 2 and 3) and MCTS (Steps 4, 5, and 6).

VFA Phase. Given a policy $\pi_k$, we wish to approximate its value by fitting a function $V_k \in \mathcal{F}$ via regression on states sampled from a distribution $\rho_0$. Without a VFA, every leaf evaluation would require repeatedly performing rollouts initiated from leaf nodes of the decision tree. Because repeating full rollouts during tree search is expensive, the idea is that a VFA obtained from a one-time regression on a single set of rollouts can drastically reduce the computation needed for leaf evaluation. For each sampled state, we estimate its value using $M_0$ full rollouts, which can be obtained using the absorption time formulation of an infinite horizon MDP (Puterman, 2014, Proposition 5.3.1).
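The regression targets can be formed by simple Monte Carlo averaging of discounted returns; a minimal sketch follows, assuming a hypothetical simulator `step(s, a) -> (reward, next_state, done)` (the names and the horizon cap are ours, not the paper's):

```python
def mc_value_estimate(s0, policy, step, gamma, m_rollouts, max_steps=1000):
    """Average of m_rollouts discounted returns of `policy` from state s0.

    The absorption-time view of the infinite-horizon MDP lets each rollout
    terminate in finite time; we also cap the horizon for safety.
    """
    total = 0.0
    for _ in range(m_rollouts):
        s, ret, discount = s0, 0.0, 1.0
        for _ in range(max_steps):
            reward, s, done = step(s, policy(s))
            ret += discount * reward
            discount *= gamma
            if done:
                break
        total += ret
    return total / m_rollouts
```

Each `(state, estimate)` pair produced this way becomes one training point for the regression step.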

MCTS Phase. On every iteration $k$, we sample a set of $N_1$ i.i.d. states from a distribution $\rho_1$ over $S$. From each state, a tree search algorithm is executed for a fixed number of iterations on a search tree of maximum depth $d$. We assume here that the leaf evaluator is a general function of the PFA and VFA from the previous phase, $\pi_k$ and $V_k$. The results of the tree search are piped into a classification subroutine, which fits a new policy $\pi_{k+1}$ using classification (from continuous states to discrete actions) on the new data. As discussed more in Assumption 4, the classification step uses one-step rollouts to compute a loss function.

  1. Sample a set of $N_0$ i.i.d. states from $\rho_0$ and $N_1$ i.i.d. states from $\rho_1$.
  2. Compute a sample average of $M_0$ independent rollouts of $\pi_k$ for each state sampled from $\rho_0$. See Assumption 1.
  3. Use regression on these estimates to obtain a value function $V_k$. See Assumption 1.
  4. From each state sampled from $\rho_1$, run tree search to maximum depth $d$ with a leaf evaluator built from $\pi_k$ and $V_k$. Return the estimated root value of each state. See Assumptions 2 and 3.
  5. For each sampled state and each action, create an estimate of the downstream value by averaging $M_1$ sampled transitions. See Assumption 4.
  6. Solve a cost-sensitive classification problem to obtain the next policy $\pi_{k+1}$, with costs measured using the tree search values and the estimates from Step 5. See Assumption 4.
  7. Increment $k$ and return to Step 1.

Figure 1: Feedback-Based Tree Search Algorithm
Figure 2: Illustration of the Feedback Loop

The illustration given in Figure 2 shows the interactions (and feedback loop) of the basic components of the algorithm: (1) a set of tree search runs initiated from a batch of sampled states (triangles), (2) leaf evaluation using the current PFA and VFA during tree search, and (3) updated PFA and VFA obtained from the tree search results.
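The outer loop of the procedure can be sketched as follows. All subroutine names (`rollout_value`, `regress`, `tree_search`, `classify`) are placeholders for the implementation-specific choices formalized in Section 5, not part of the paper's notation.

```python
def feedback_tree_search(sample_rho0, sample_rho1, rollout_value, regress,
                         tree_search, classify, pi0, K, N0, N1):
    """One outer loop of the feedback-based scheme (illustrative sketch).

    Each iteration: (i) estimate the current policy's value on states drawn
    from rho0 and fit a VFA by regression, (ii) run depth-limited tree search
    from states drawn from rho1 using the PFA/VFA as leaf evaluator, and
    (iii) fit the next policy by classification on the root recommendations.
    """
    pi = pi0
    for k in range(K):
        # VFA phase: Monte-Carlo estimates of V^{pi_k}, then regression.
        states0 = [sample_rho0() for _ in range(N0)]
        targets = [rollout_value(pi, s) for s in states0]
        vfa = regress(states0, targets)
        # MCTS phase: root-node action recommendations from tree search.
        states1 = [sample_rho1() for _ in range(N1)]
        recs = [tree_search(s, pi, vfa) for s in states1]
        # Feedback: classification fits the next PFA to the recommendations.
        pi = classify(states1, recs)
    return pi
```

The feedback loop is visible in the code: `pi` and `vfa` produced by one iteration parameterize the leaf evaluator of the next.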

5 Assumptions

Figure 1 shows the algorithm written with general subroutines for regression, leaf evaluation, tree search, and classification, allowing for variations in implementation suited for different problems. However, our analysis assumes specific choices and properties of these subroutines, which we describe now. The regression step solves a least absolute deviation problem to minimize an empirical version of

$$\| V - V^{\pi_k} \|_{1, \rho_0} = \int_{S} \bigl| V(s) - V^{\pi_k}(s) \bigr| \, \rho_0(ds),$$

as described in the first assumption.

Assumption 1 (Regression Subroutine).

For each $k$, the states $s_1, \ldots, s_{N_0}$ are drawn i.i.d. from $\rho_0$. Let $\hat{V}^{\pi_k}(s_i)$ be an estimate of $V^{\pi_k}(s_i)$ computed from $M_0$ rollouts, and let $V_k$, the VFA resulting from the regression step, be obtained via least absolute deviation regression:

$$V_k \in \arg\min_{V \in \mathcal{F}} \; \frac{1}{N_0} \sum_{i=1}^{N_0} \bigl| V(s_i) - \hat{V}^{\pi_k}(s_i) \bigr|.$$
There are many ways that may be defined. The standard leaf evaluator for MCTS is to simulate a default or “rollout” policy (Browne et al., 2012) until the end of the game, though in related tree search techniques, authors have also opted for a value function approximation (Campbell et al., 2002; Enzenberger, 2004). It is also possible to combine the two approximations: Silver et al. (2016) uses a weighted combination of a full rollout from a pre-trained policy and a pre-trained value function approximation.

Assumption 2 (Leaf Evaluation Subroutine).

Our approach uses a partial rollout of length $h$ and a value estimation at the end. The leaf evaluator produces unbiased observations of

$$(T_{\pi_k}^{h} V_k)(s),$$

where $s$ is the leaf state and $T_{\pi_k}^{h}$ denotes $h$ compositions of $T_{\pi_k}$.

Assumption 2 is motivated by our MOBA game, on which we observed that even short rollouts (as opposed to simply using a VFA) are immensely helpful in determining local outcomes (e.g., dodging attacks, eliminating minions, health regeneration). At the same time, we found that numerous full rollouts simulated using the relatively slow and complex game engine are far too time-consuming within tree search.
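Such a leaf evaluator (partial rollout plus VFA bootstrap) can be sketched in a few lines, assuming a generic simulator `step(s, a) -> (reward, next_state)`; all names here are illustrative:

```python
def evaluate_leaf(s, policy, vfa, step, h, gamma):
    """One-sample estimate of (T_pi^h V)(s): roll the policy forward
    h steps, then bootstrap with the value function approximation."""
    total, discount = 0.0, 1.0
    for _ in range(h):
        reward, s = step(s, policy(s))
        total += discount * reward
        discount *= gamma
    return total + discount * vfa(s)
```

With a stochastic simulator this returns an unbiased sample of the $h$-step rollout value, matching the role of the leaf evaluator in Assumption 2.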

We also need to make an assumption on the sample complexity of the tree search subroutine, of which there are many possible variations (Chaslot et al., 2006; Coulom, 2006; Kocsis & Szepesvári, 2006; Gelly & Silver, 2007; Couëtoux et al., 2011a, b; Al-Kanj et al., 2016; Jiang et al., 2017). Particularly relevant to our continuous-state setting are tree expansion techniques called progressive widening and double progressive widening, proposed in Couëtoux et al. (2011a), which have proven successful in problems with continuous state/action spaces. To our knowledge, analysis of the sample complexity is only available for stylized versions of MCTS on finite problems, like Teraoka et al. (2014) and Kaufmann & Koolen (2017). Theorems from these papers show upper bounds on the number of iterations needed so that with high probability (greater than $1 - \delta$), the value at the root node is accurate within a tolerance of $\epsilon$. Fortunately, there are ways to discretize continuous state MDPs that enjoy error guarantees, such as Bertsekas (1975), Dufour & Prieto-Rumeau (2012), or Saldi et al. (2017). These error bounds can be combined with the MCTS guarantees of Teraoka et al. (2014) and Kaufmann & Koolen (2017) to produce a sample complexity bound for MCTS on continuous problems. The next assumption captures the essence of these results (and if desired, can be made precise for specific implementations through the references above).

Assumption 3 (Tree Search Subroutine).

Consider a $d$-stage, finite-horizon subproblem of (1) with terminal value function evaluated by the leaf evaluator and initial state $s$. Let the result of tree search be denoted $\hat{U}(s)$, and let $U^{*}(s)$ be the optimal value of the subproblem. We assume that there exists a function $m(\epsilon, \delta)$ such that if $m(\epsilon, \delta)$ iterations of tree search are used, the inequality $|\hat{U}(s) - U^{*}(s)| \le \epsilon$ holds with probability at least $1 - \delta$.

Now, we are ready to discuss the classification subroutine. Our goal is to select a policy that closely mimics the performance of the tree search result, similar to practical implementations in existing work (Guo et al., 2014; Silver et al., 2017; Anthony et al., 2017). The question is: given a candidate $\pi \in \tilde{\Pi}$, how do we measure “closeness” to the tree search policy? We take inspiration from previous work in classification-based RL and use a cost-based penalization of classification errors (Langford & Zadrozny, 2005; Li et al., 2007; Lazaric et al., 2016). Since the tree search output is an approximation of the performance of the improved policy, we should try to select a policy with similar performance. To estimate the performance of some candidate policy $\pi$, we use a one-step rollout and evaluate the downstream cost using $V_k$.

Assumption 4 (Classification Subroutine).

For each sampled state $s_i$ and action $a \in A$, let $\hat{Q}_k(s_i, a)$ be an estimate of the value of the state-action pair $(s_i, a)$ using $M_1$ samples.

Let $\pi_{k+1}$, the result of the classification step, be obtained by minimizing the discrepancy between the tree search result and the estimated value of the policy under the approximations:

$$\pi_{k+1} \in \arg\min_{\pi \in \tilde{\Pi}} \; \frac{1}{N_1} \sum_{i=1}^{N_1} \Bigl[ \hat{U}(s_i) - \hat{Q}_k\bigl(s_i, \pi(s_i)\bigr) \Bigr],$$

where $s_1, \ldots, s_{N_1}$ are i.i.d. samples from $\rho_1$.
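The cost-sensitive selection can be sketched over a small finite candidate class; all names below (`q_hat`, `tree_values`) are illustrative placeholders for the estimates in Assumption 4:

```python
def classification_loss(pi, states, tree_values, q_hat):
    """Empirical loss: average gap between the tree search value of each
    state and the estimated value of the candidate policy's action there."""
    return sum(u - q_hat(s, pi(s))
               for s, u in zip(states, tree_values)) / len(states)

def classify(candidates, states, tree_values, q_hat):
    """Pick the candidate policy with the smallest empirical loss."""
    return min(candidates,
               key=lambda pi: classification_loss(pi, states, tree_values, q_hat))
```

Unlike 0/1 classification error, this loss penalizes a wrong action in proportion to how much value it gives up relative to the tree search recommendation.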

An issue that arises during the analysis is that even though we can control the distribution from which states are sampled, this distribution is transformed by the transition kernel of the policies used for rollout/lookahead. Let us now introduce the concentrability coefficient idea of Munos (2007) (and used subsequently by many authors, including Munos & Szepesvári (2008), Lazaric et al. (2016), Scherrer et al. (2015), and Haskell et al. (2016)).

Assumption 5 (Concentrability).

Consider any sequence of policies $\pi_1, \ldots, \pi_m \in \Pi$. Suppose we start in distribution $\rho_1$ and that the state distribution attained after applying the $m$ policies in succession, $\rho_1 P_{\pi_1} \cdots P_{\pi_m}$, is absolutely continuous with respect to $\nu$. We define an $m$-step concentrability coefficient

$$A_m = \sup_{\pi_1, \ldots, \pi_m} \Bigl\| \frac{d\bigl(\rho_1 P_{\pi_1} \cdots P_{\pi_m}\bigr)}{d\nu} \Bigr\|_{\infty}$$

and assume that $A_m < \infty$. Similarly, we assume that $\rho_0 P_{\pi_1} \cdots P_{\pi_m}$ is absolutely continuous with respect to $\nu$ and assume that

$$B_m = \sup_{\pi_1, \ldots, \pi_m} \Bigl\| \frac{d\bigl(\rho_0 P_{\pi_1} \cdots P_{\pi_m}\bigr)}{d\nu} \Bigr\|_{\infty}$$

is finite for any $m$.

The concentrability coefficient describes how the state distribution changes after $m$ steps of arbitrary policies and how it relates to a given reference distribution. Assumptions 1-5 are used for the remainder of the paper.

6 Sample Complexity Analysis

Before presenting the sample complexity analysis, let us consider an idealized algorithm that generates a sequence of policies, each obtained by a $d$-step greedy lookahead on the value of its predecessor, with no error. It is proved in Bertsekas & Tsitsiklis (1996, pp. 30-31) that such a sequence converges to an optimal policy in the finite state and action setting. Our proposed algorithm in Figure 1 can be viewed as approximately satisfying this iteration in a continuous state space setting, where tree search plays the role of the lookahead and evaluation of the previous policy uses a combination of accurate rollouts (due to the partial rollouts of Assumption 2) and fast VFA evaluations (due to the regression step). The sample complexity analysis requires the effects of all errors to be systematically analyzed.

For some iteration $K$, our goal is to develop a high probability upper bound on the expected suboptimality, over an initial state distribution $\rho$, of the performance of policy $\pi_K$, written as $\mathbf{E}_{\rho}\bigl[V^{*}(s) - V^{\pi_K}(s)\bigr]$. Because there is no requirement to control errors with probability one, bounds in expectation over $\rho$ tend to be much more useful in practice than ones in the traditional maximum norm. Notice that:


where the left-hand-side is the loss function used in the classification step from Assumption 4. It turns out that we can relate the right-hand-side (albeit under a different distribution) to the expected suboptimality after $K$ iterations, as shown in the following lemma. Full proofs of all results are given in the supplementary material.


Lemma 1 (Loss to Performance Relationship).

The expected suboptimality of $\pi_K$ can be bounded in terms of the losses accumulated over iterations $k < K$, discounted and re-weighted by the concentrability coefficients of Assumption 5.

Figure 3: Various Errors Analyzed in Lemma 3

From Lemma 1, we see that the expected suboptimality at iteration $K$ can be upper bounded by the suboptimality of the initial policy (in maximum norm) plus a discounted and re-weighted version of the loss accumulated over prior iterations. Hypothetically, if the loss were small for all iterations and all states, then the suboptimality of $\pi_K$ would converge linearly to zero. Hence, we may refer to this quantity as the “true loss,” the target term to be minimized at iteration $k$. We now have a starting point for the analysis: if (5) can be made precise, then the result can be combined with Lemma 1 to provide an explicit bound on the expected suboptimality. The various errors that we incur when relating the objective of the classification step to the true loss include the error due to regression using functions in $\mathcal{F}$; the error due to sampling the state space according to $\rho_0$ and $\rho_1$; the error of estimating action values using the sample average of one-step rollouts; and of course, the error due to tree search.

We now give a series of lemmas that help us carry out the analysis. In the algorithmic setting, the policy is a random quantity that depends on the samples collected in previous iterations; however, for simplicity, the lemmas that follow are stated from the perspective of a fixed policy or fixed value function approximation rather than the random iterates $\pi_k$ or $V_k$. Conditioning arguments will be used when invoking these lemmas (see supplementary material).

Lemma 2 (Propagation of VFA Error).

Consider a policy $\pi$ and a value function $V \in \mathcal{F}$. Analogous to (4), define the corresponding operators with $V$ in place of $V^{\pi}$. Then, under Assumption 5, we have the bounds:

  1. ,

  2. .

The lemma above addresses the fact that instead of using $V^{\pi_k}$ directly, the classification and tree search steps only have access to the estimates $V_k$ and $T_{\pi_k}^{h} V_k$ ($h$ steps of rollout with an evaluation of $V_k$ at the end), respectively. Note that propagation of the error in $V_k$ is discounted by $\gamma^{h}$ or $\gamma^{h+1}$, and since the lemma converts between the sampling distribution and $\nu$, it is also impacted by the concentrability coefficients of Assumption 5.

Let $h_{\tilde{\Pi}}$ be the VC-dimension of the class of binary classifiers $\tilde{\Pi}$ and let $h_{\mathcal{F}}$ be the pseudo-dimension of the function class $\mathcal{F}$. The VC-dimension is a measure of the capacity of $\tilde{\Pi}$, and the notion of a pseudo-dimension is a generalization of the VC-dimension to real-valued functions (see, e.g., Pollard (1990), Haussler (1992), Mohri et al. (2012) for definitions of both). Similar to Lazaric et al. (2016) and Scherrer et al. (2015), we will present results for the case of two actions, i.e., $|A| = 2$. The extension to multiple actions is possible by performing an analysis along the lines of Lazaric et al. (2016, Section 6). We now quantify the error illustrated in Figure 3. Define also the sum of the coefficients from Lemma 2, which appears in the bound below.

Lemma 3.

Suppose the regression sample size is

and the sample size , for estimating the regression targets, is

Furthermore, there exist constants , , , and , such that if and are large enough to satisfy

and if , then

with probability at least .

Sketch of Proof.

By adding and subtracting terms, applying the triangle inequality, and invoking Lemma 2, we see that:

Here, the error is split into two terms. The first depends on the sample and the history through while the second term depends on the sample and the history through . We can thus view as fixed when analyzing the first term and as fixed when analyzing the second term (details in the supplementary material). The first term contributes the quantity in the final bound with additional estimation error contained within . The second term contributes the rest. See Figure 3 for an illustration of the main proof steps. ∎

The first two terms on the right-hand-side are related to the approximation power of $\tilde{\Pi}$ and $\mathcal{F}$ and can be considered unavoidable. We upper-bound these terms by maximizing over policies, in effect removing the dependence on the random sequence of iterates in the analysis of the next theorem. We define:

two terms that are closely related to the notion of inherent Bellman error (Antos et al., 2008; Munos & Szepesvári, 2008; Lazaric et al., 2016; Scherrer et al., 2015; Haskell et al., 2017). Also, let , which was assumed to be finite in Assumption 5.

Theorem 1.

Suppose the sample size requirements of Lemma 3 are satisfied with and replacing and , respectively. Then, the suboptimality of the policy can be bounded as follows:

with probability at least .

Search Depth. How should the search depth $d$ be chosen? Theorem 1 shows that as $d$ increases, fewer iterations are needed to achieve a given accuracy; however, the effort required of tree search (i.e., the sample complexity from Assumption 3) grows exponentially in $d$. At the other extreme (small $d$), more iterations are needed and the “fixed cost” of each iteration of the algorithm (i.e., sampling, regression, and classification — all of the steps that do not depend on $d$) becomes more prominent. For a given problem and algorithm parameters, these computational costs can each be estimated and Theorem 1 can serve as a guide to selecting an optimal $d$.

7 Case Study: King of Glory MOBA AI

We implemented Feedback-Based Tree Search within a new and challenging environment, the recently popular MOBA game King of Glory by Tencent (the game is also known as Honor of Kings and a North American release of the game is titled Arena of Valor). Our implementation of the algorithm is one of the first attempts to design an AI for the 1v1 version of this game.

Figure 4: Screenshot from 1v1 King of Glory

Figure 5: Number of Frames to Defeat Marksman Heroes

Game Description. In King of Glory, players are divided into two opposing teams and each team has a base located on opposite corners of the game map (similar to other MOBA games, like League of Legends or Dota 2). The bases are guarded by towers, which can attack the enemies when they are within a certain attack range. The goal of each team is to overcome the towers and eventually destroy the opposing team’s “crystal,” located at the enemy’s base. For this paper, we only consider the 1v1 mode, where each player controls a primary “hero” alongside less powerful game-controlled characters called “minions.” These units guard the path to the crystal and will automatically fire (weak) attacks at enemies within range. Figure 4 shows the two heroes and their minions; the upper-left corner shows the map, with the blue and red markers pinpointing the towers and crystals.

Experimental Setup. The state variable of the system is taken to be a 41-dimensional vector containing information obtained directly from the game engine, including hero locations, hero health, minion health, hero skill states, and relative locations to various structures. There are 22 actions, including move, attack, heal, and special skill actions, some of which are associated with (discretized) directions. The reward function is designed to mimic reward shaping (Ng et al., 1999) and uses a combination of signals including health, kills, damage dealt, and proximity to crystal. We trained five King of Glory agents, using the hero DiRenJie:

  1. The “FBTS” agent is trained using our feedback-based tree search algorithm, with each iteration consisting of 50 games. The search depth and rollout length were held fixed throughout training, and each tree search call ran for 400 iterations.

  2. The second agent is labeled “NR” for no rollouts. It uses the same parameters as the FBTS agent except no rollouts are used. At a high level, this bears some similarity to the AlphaGo Zero algorithm (Silver et al., 2017) in a batch setting.

  3. The “DPI” agent uses the direct policy iteration technique of Lazaric et al. (2016). There is no value function and no tree search (due to computational limitations, more iterations are possible when tree search is not used).

  4. We then have the “AVI” agent, which implements approximate value iteration (De Farias & Van Roy, 2000; Van Roy, 2006; Munos, 2007; Munos & Szepesvári, 2008). This algorithm can be considered a batch version of DQN (Mnih et al., 2013).

  5. Lastly, we consider an “SL” agent trained via supervised learning on a dataset of approximately 100,000 state/action pairs of human gameplay data. Notably, the policy architecture used here is consistent with the previous agents.

In fact, both the policy and value function approximations are consistent across all agents; they use fully-connected neural networks with five and two hidden layers, respectively, and SELU (scaled exponential linear unit) activation (Klambauer et al., 2017). The initial policy takes random actions: move (w.p. 0.5), directional attack (w.p. 0.2), or a special skill (w.p. 0.3). Besides biasing the move direction toward the forward direction, no other heuristic information is used by the initial policy. The tree search subroutine was chosen to be a variant of UCT (Kocsis & Szepesvári, 2006) that is more amenable to parallel simulations: instead of using the argmax of the UCB scores, we sample actions according to the distribution obtained by applying softmax to the UCB scores.
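The softmax selection variant can be sketched as follows; this is a simplified, illustrative version (the exploration constant, temperature, and parallelization details are our assumptions, not values from the paper):

```python
import math
import random

def softmax_ucb_select(counts, values, c=1.0, temperature=1.0):
    """Sample a child according to softmax over UCB scores, instead of the
    usual argmax; this diversifies actions across parallel simulations."""
    total = sum(counts)
    scores = [q + c * math.sqrt(math.log(total + 1) / (n + 1))
              for q, n in zip(values, counts)]
    # Softmax over the UCB scores defines the sampling distribution.
    exps = [math.exp(s / temperature) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    r, cum = random.random(), 0.0
    for a, p in enumerate(probs):
        cum += p
        if r < cum:
            return a
    return len(probs) - 1
```

With a low temperature this behaves like standard UCT; higher temperatures spread parallel simulations across near-tied actions.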

In the practical implementation of the algorithm, the regression step uses a cosine proximity loss while the classification step uses a negative log-likelihood loss, differing from the theoretical specifications. Due to the inability to “rewind” or “fast-forward” the game environment to arbitrary states, the sampling distribution $\rho_0$ is implemented by first taking random actions (for a random number of steps) to arrive at an initial state and then following $\pi_k$ until the end of the game. To reduce correlation during value approximation, we discard a portion of the states encountered in these trajectories. For $\rho_1$, we follow the policy while occasionally injecting noise (in the form of random actions and random switches to the default policy) to reduce correlation. During rollouts, we use the internal AI for the hero DiRenJie as the opponent.

Results. As the game is nearly deterministic, our primary testing methodology is to compare the agents’ effectiveness against a common set of opponents chosen from the internal AIs. We also added the internal DiRenJie AI as a “sanity check” baseline agent. To select the test opponents, we played the internal DiRenJie AI against other internal AIs (i.e., other heroes) and selected six heroes of the marksman type that the internal DiRenJie AI is able to defeat. Each of our agents, including the internal DiRenJie AI, was then played against every test opponent. Figure 5 shows the length of time, measured in frames, for each agent to defeat the test opponents (a value of 20,000 frames is assigned if the opponent won). Against the set of common opponents, FBTS significantly outperforms DPI, AVI, SL, and the internal AI. However, FBTS only slightly outperforms NR on average (which is perhaps not surprising, as NR is the only other agent that also uses MCTS). Our second set of results helps to visualize head-to-head battles played between FBTS and the four baselines (all of which were won by FBTS): Figure 6 shows the ratio of the FBTS agent’s gold to its opponent’s gold as a function of time. Gold is collected throughout the game as heroes deal damage and defeat enemies, so a ratio above 1.0 (above the red region) indicates good relative performance by FBTS. As the figure shows, each game ends with FBTS holding a clear gold advantage.

Figure 6: In-game Behavior

8 Conclusion & Future Work

In this paper, we provide a sample complexity analysis for feedback-based tree search, an RL algorithm based on repeatedly solving finite-horizon subproblems using MCTS. Our primary methodological avenues for future work are (1) to analyze a self-play variant of the algorithm and (2) to consider related techniques in multi-agent domains (see, e.g., Hu & Wellman (2003)). The implementation of the algorithm in the 1v1 MOBA game King of Glory provided us encouraging results against several related algorithms; however, significant work remains for the agent to become competitive with humans.


Acknowledgments

We sincerely appreciate the helpful feedback from four anonymous reviewers, which helped to significantly improve the paper. We also wish to thank our colleagues at Tencent AI Lab, particularly Carson Eisenach and Xiangru Lian, for assistance with the test environment and for providing the SL agent. The first author is very grateful for the support from Tencent AI Lab through a faculty award.


References

  • Al-Kanj et al. (2016) Al-Kanj, Lina, Powell, Warren B, and Bouzaiene-Ayari, Belgacem. The information-collecting vehicle routing problem: Stochastic optimization for emergency storm response. arXiv preprint arXiv:1605.05711, 2016.
  • Anthony et al. (2017) Anthony, Thomas, Tian, Zheng, and Barber, David. Thinking fast and slow with deep learning and tree search. In Advances in Neural Information Processing Systems, pp. 5366–5376, 2017.
  • Antos et al. (2008) Antos, András, Szepesvári, Csaba, and Munos, Rémi. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71(1):89–129, 2008.
  • Bertsekas (1975) Bertsekas, Dimitri P. Convergence of discretization procedures in dynamic programming. IEEE Transactions on Automatic Control, 20(3):415–419, 1975.
  • Bertsekas & Tsitsiklis (1996) Bertsekas, Dimitri P and Tsitsiklis, John N. Neuro-dynamic Programming. Athena Scientific, Belmont, MA, 1996.
  • Browne et al. (2012) Browne, Cameron B, Powley, Edward, Whitehouse, Daniel, Lucas, Simon M, Cowling, Peter I, Rohlfshagen, Philipp, Tavener, Stephen, Perez, Diego, Samothrakis, Spyridon, and Colton, Simon. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012.
  • Campbell et al. (2002) Campbell, Murray, Hoane Jr, A Joseph, and Hsu, Feng-hsiung. Deep blue. Artificial Intelligence, 134(1-2):57–83, 2002.
  • Cazenave (2009) Cazenave, Tristan. Nested Monte-Carlo search. In International Joint Conference on Artificial Intelligence, pp. 456–461, 2009.
  • Chaslot et al. (2006) Chaslot, Guillaume, Saito, Jahn-Takeshi, Uiterwijk, Jos WHM, Bouzy, Bruno, and van den Herik, H Jaap. Monte-Carlo strategies for computer Go. In 18th Belgian-Dutch Conference on Artificial Intelligence, pp. 83–90, 2006.
  • Chaslot et al. (2008) Chaslot, Guillaume, Bakkes, Sander, Szita, Istvan, and Spronck, Pieter. Monte-Carlo tree search: A new framework for game AI. In AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 2008.
  • Couëtoux et al. (2011a) Couëtoux, Adrien, Hoock, Jean-Baptiste, Sokolovska, Nataliya, Teytaud, Olivier, and Bonnard, Nicolas. Continuous upper confidence trees. In International Conference on Learning and Intelligent Optimization, pp. 433–445. Springer, 2011a.
  • Couëtoux et al. (2011b) Couëtoux, Adrien, Milone, Mario, Brendel, Mátyás, Doghmen, Hassan, Sebag, Michele, and Teytaud, Olivier. Continuous rapid action value estimates. In Asian Conference on Machine Learning, pp. 19–31, 2011b.
  • Coulom (2006) Coulom, Rémi. Efficient selectivity and backup operators in Monte-Carlo tree search. In International Conference on Computers and Games, pp. 72–83, 2006.
  • De Farias & Van Roy (2000) De Farias, D Pucci and Van Roy, Benjamin. On the existence of fixed points for approximate value iteration and temporal-difference learning. Journal of Optimization theory and Applications, 105(3):589–608, 2000.
  • Dufour & Prieto-Rumeau (2012) Dufour, François and Prieto-Rumeau, Tomás. Approximation of Markov decision processes with general state space. Journal of Mathematical Analysis and Applications, 388(2):1254–1267, 2012.
  • Enzenberger (2004) Enzenberger, Markus. Evaluation in go by a neural network using soft segmentation. In Advances in Computer Games, pp. 97–108. Springer, 2004.
  • Gelly & Silver (2007) Gelly, Sylvain and Silver, David. Combining online and offline knowledge in UCT. In Proceedings of the 24th International Conference on Machine learning, pp. 273–280, 2007.
  • Gelly & Silver (2011) Gelly, Sylvain and Silver, David. Monte-Carlo tree search and rapid action value estimation in computer Go. Artificial Intelligence, 175(11):1856–1875, 2011.
  • Gelly et al. (2012) Gelly, Sylvain, Kocsis, Levente, Schoenauer, Marc, Sebag, Michele, Silver, David, Szepesvári, Csaba, and Teytaud, Olivier. The grand challenge of computer Go: Monte Carlo tree search and extensions. Communications of the ACM, 55(3):106–113, 2012.
  • Guo et al. (2014) Guo, Xiaoxiao, Singh, Satinder, Lee, Honglak, Lewis, Richard L, and Wang, Xiaoshi. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems, pp. 3338–3346, 2014.
  • Haskell et al. (2016) Haskell, William B, Jain, Rahul, and Kalathil, Dileep. Empirical dynamic programming. Mathematics of Operations Research, 41(2):402–429, 2016.
  • Haskell et al. (2017) Haskell, William B, Jain, Rahul, Sharma, Hiteshi, and Yu, Pengqian. An empirical dynamic programming algorithm for continuous MDPs. arXiv preprint arXiv:1709.07506, 2017.
  • Haussler (1992) Haussler, David. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100(1):78–150, 1992.
  • Hingston & Masek (2007) Hingston, Philip and Masek, Martin. Experiments with Monte Carlo Othello. In IEEE Congress on Evolutionary Computation, pp. 4059–4064. IEEE, 2007.
  • Hu & Wellman (2003) Hu, Junling and Wellman, Michael P. Nash Q-learning for general-sum stochastic games. Journal of Machine Learning Research, 4(Nov):1039–1069, 2003.
  • Jiang et al. (2017) Jiang, Daniel R, Al-Kanj, Lina, and Powell, Warren B. Monte carlo tree search with sampled information relaxation dual bounds. arXiv preprint arXiv:1704.05963, 2017.
  • Kaufmann & Koolen (2017) Kaufmann, Emilie and Koolen, Wouter. Monte-Carlo tree search by best arm identification. In Advances in Neural Information Processing Systems, pp. 4904–4913, 2017.
  • Klambauer et al. (2017) Klambauer, Günter, Unterthiner, Thomas, Mayr, Andreas, and Hochreiter, Sepp. Self-normalizing neural networks. In Advances in Neural Information Processing Systems, pp. 972–981, 2017.
  • Kocsis & Szepesvári (2006) Kocsis, Levente and Szepesvári, Csaba. Bandit based Monte-Carlo planning. In European Conference on Machine Learning, pp. 282–293, 2006.
  • Langford & Zadrozny (2005) Langford, John and Zadrozny, Bianca. Relating reinforcement learning performance to classification performance. In Proceedings of the 22nd International Conference on Machine Learning, pp. 473–480, 2005.
  • Lazaric et al. (2016) Lazaric, Alessandro, Ghavamzadeh, Mohammad, and Munos, Rémi. Analysis of classification-based policy iteration algorithms. Journal of Machine Learning Research, 17(19):1–30, 2016.
  • Li et al. (2007) Li, Lihong, Bulitko, Vadim, and Greiner, Russell. Focus of attention in reinforcement learning. Journal of Universal Computer Science, 13(9):1246–1269, 2007.
  • Maîtrepierre et al. (2008) Maîtrepierre, Raphaël, Mary, Jérémie, and Munos, Rémi. Adaptative play in Texas hold’em poker. In European Conference on Artificial Intelligence, 2008.
  • Méhat & Cazenave (2010) Méhat, Jean and Cazenave, Tristan. Combining UCT and nested Monte Carlo search for single-player general game playing. IEEE Transactions on Computational Intelligence and AI in Games, 2(4):271–277, 2010.
  • Mnih et al. (2013) Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis, Wierstra, Daan, and Riedmiller, Martin. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
  • Mohri et al. (2012) Mohri, Mehryar, Rostamizadeh, Afshin, and Talwalkar, Ameet. Foundations of Machine Learning. MIT Press, 2012.
  • Munos (2007) Munos, Rémi. Performance bounds in L_p-norm for approximate value iteration. SIAM Journal on Control and Optimization, 46(2):541–561, 2007.
  • Munos & Szepesvári (2008) Munos, Rémi and Szepesvári, Csaba. Finite-time bounds for fitted value iteration. Journal of Machine Learning Research, 9(May):815–857, 2008.
  • Ng et al. (1999) Ng, Andrew Y, Harada, Daishi, and Russell, Stuart. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the 16th International Conference on Machine Learning, pp. 278–287, 1999.
  • Pollard (1990) Pollard, David. Empirical processes: Theory and applications. In NSF-CBMS Regional Conference Series in Probability and Statistics, pp. i–86. JSTOR, 1990.
  • Puterman (2014) Puterman, Martin L. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014.
  • Saldi et al. (2017) Saldi, Naci, Yüksel, Serdar, and Linder, Tamás. On the asymptotic optimality of finite approximations to Markov decision processes with Borel spaces. Mathematics of Operations Research, 42(4):945–978, 2017.
  • Scherrer et al. (2015) Scherrer, Bruno, Ghavamzadeh, Mohammad, Gabillon, Victor, Lesner, Boris, and Geist, Matthieu. Approximate modified policy iteration and its application to the game of Tetris. Journal of Machine Learning Research, 16(Aug):1629–1676, 2015.
  • Silver et al. (2016) Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
  • Silver et al. (2017) Silver, David, Schrittwieser, Julian, Simonyan, Karen, Antonoglou, Ioannis, Huang, Aja, Guez, Arthur, Hubert, Thomas, Baker, Lucas, Lai, Matthew, Bolton, Adrian, et al. Mastering the game of Go without human knowledge. Nature, 550(7676):354, 2017.
  • Teraoka et al. (2014) Teraoka, Kazuki, Hatano, Kohei, and Takimoto, Eiji. Efficient sampling method for Monte Carlo tree search problem. IEICE Transactions on Information and Systems, 97(3):392–398, 2014.
  • Van Roy (2006) Van Roy, Benjamin. Performance loss bounds for approximate value iteration with state aggregation. Mathematics of Operations Research, 31(2):234–244, 2006.
