A Micro-Objective Perspective of Reinforcement Learning

Changjian Li
Department of Electrical and Computer Engineering
University of Waterloo
Waterloo, ON N2L3G1
changjian.li@uwaterloo.ca
&Krzysztof Czarnecki
Department of Electrical and Computer Engineering
University of Waterloo
Waterloo, ON N2L3G1
k2czarne@uwaterloo.ca
Abstract

The standard reinforcement learning (RL) formulation considers the expectation of the (discounted) cumulative reward. This is limiting in applications where we are concerned not only with the expected performance, but also with the distribution of the performance. In this paper, we introduce micro-objective reinforcement learning, an alternative RL formalism that overcomes this issue. In this new formulation, an RL task is specified by a set of micro-objectives, which are constructs that specify the desirability or undesirability of events. In addition, micro-objectives allow prior knowledge in the form of temporal abstraction to be incorporated into the global RL objective. The generality of this formalism, and its relations to single/multi-objective RL and hierarchical RL, are discussed.

Keywords: reinforcement learning; Markov decision process

Acknowledgements

The authors would like to thank Sean Sedwards, Jaeyoung Lee and other members of Waterloo Intelligent Systems Engineering Lab (WISELab) for discussions.


1 Introduction and Related Work

The RL formulation commonly adopted in the literature aims to maximize the expected return (discounted cumulative reward), which is sufficient if all we are concerned with is the expectation. In many practical problems, however, especially in risk-sensitive applications, we care not only about the expectation but also about the distribution of the return. For example, in autonomous driving, driving well in expectation is not enough; we need to guarantee that the risk of collision is below a certain acceptable level. As another example, there might be two investment plans with the same expected return but different variance, and depending on the investor, one plan might be more attractive than the other. As a simplified abstraction, we consider the following Markov decision process (MDP) with only one non-absorbing state $s_0$, which is the state where the investment decision is to be made. There are two actions, $a_1$ and $a_2$, corresponding to the two investment plans. From $s_0$, if $a_1$ is taken, there is probability $p_1$ of getting a profit of $c_1$ (entering absorbing state $s_1$), and probability $1-p_1$ of getting a loss of $c_2$ (entering absorbing state $s_2$). If $a_2$ is taken, there is probability $p_2$ of getting a profit of $c_3$ (entering absorbing state $s_3$), and probability $1-p_2$ of getting a loss of $c_4$ (entering absorbing state $s_4$). The reward function is therefore as follows:

$$r(s_0, a_1, s_1) = c_1, \quad r(s_0, a_1, s_2) = -c_2, \quad r(s_0, a_2, s_3) = c_3, \quad r(s_0, a_2, s_4) = -c_4. \qquad (1)$$

The reward function is zero onwards once an absorbing state is reached. Suppose the two plans have the same expected return, i.e., $p_1 c_1 - (1-p_1) c_2 = p_2 c_3 - (1-p_2) c_4$, but the potential loss $c_2$ of plan $a_1$ is much larger than the potential loss $c_4$ of plan $a_2$. The investor might not be able to afford a loss as large as $c_2$, in which case $a_2$ is preferable to $a_1$. Unfortunately, the expected-return formulation provides no mechanism to differentiate these two actions. Furthermore, any mixture policy of $a_1$ and $a_2$ (mixing $a_1$ and $a_2$ with some probability) also has the same expected return.
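To make the abstraction concrete, the following sketch instantiates the two-action MDP with hypothetical numbers (the probabilities and profit/loss values are assumptions, chosen only so that both actions have the same expected return) and compares the two return distributions.

```python
# Hypothetical outcome distributions for the two investment actions.
# The numbers are illustrative assumptions: both actions have the same
# expected return (500), but very different worst-case losses.
outcomes = {
    "a1": [(0.5, +2000.0), (0.5, -1000.0)],  # (probability, return)
    "a2": [(0.5, +1000.0), (0.5,     0.0)],
}

for action, dist in outcomes.items():
    mean = sum(p * r for p, r in dist)
    worst = min(r for _, r in dist)
    print(f"{action}: expected return = {mean:+.1f}, worst case = {worst:+.1f}")

# Both actions maximize the expected return equally well, so an
# expected-return objective cannot express "never lose more than X".
```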

Two approaches have been discussed in the literature to tackle this issue. One is to shape the reward so that the expected returns of the policies are no longer the same (Koenig and Simmons, 1994). For instance, we can assign a disproportionately more negative reward to losses that exceed what the investor can afford, so that $a_1$ is no longer as attractive as $a_2$ in expectation.

Although in this simple case the reward shaping is fairly straightforward, in more complex tasks such as autonomous driving, where there are many conflicting aspects, it is a challenge to choose a reward that properly balances the expected overall performance and the risk of each aspect.

The second approach is to use an alternative formulation that considers more than just the expected return. Several methods (Sato et al., 2001; Sherstan et al., 2018; Tamar et al., 2013) have been proposed to estimate the variance of the return in addition to its expectation. While this alleviates the issue by taking variance into account, distributions can differ even if both the expectation and the variance are the same. Yu et al. (Yu et al., 1998) considered the problem of maximizing the probability of receiving a return that is greater than a certain threshold. Geibel and Wysotzki (Geibel and Wysotzki, 2011) considered constrained MDPs (Altman, 1999) in which the discounted probabilities of reaching error states (unacceptable states) are constrained. However, both of these formulations are designed for a specific type of application, and lack the generality required of an alternative RL formalism.

In this paper, we propose to solve this issue by restricting the return to a distribution that is entirely determined by its mean, namely the Bernoulli distribution. To motivate this idea, observe that in any RL task, we are essentially concerned with a set of events, some desirable, some undesirable, to different extents. For example, in the task of autonomous driving, collision is an undesirable event; running a red light is another event, still undesirable but not as much; driving within the speed limit is a desirable event, and so on. Instead of associating each event with a reward and evaluating a policy by the total reward it accumulates (as in conventional RL), we can think of all the events as a whole, and evaluate a policy based on the combination of events it would lead to. The return is now only an indicator of an event, and is restricted to binary values: 1 if the event happens, and 0 if it does not. Given a policy, the return of each micro-objective is thus a Bernoulli random variable, whose mean is both the value function and the probability of event occurrence under the policy. Following this view, a task can be specified by a predefined set of events, together with a partial order that allows comparison between different combinations of event probabilities. The goal is to find the policies that result in the most desirable combinations of probabilities (to be exact, the combinations of probabilities that are not less desirable than any other combination).

This can be illustrated with the investment example. Instead of defining a reward function as in Eq. 1, we define entering $s_1$, $s_2$, $s_3$ and $s_4$ as four events. If action $a_1$ is taken, the probabilities of the four events occurring are $p_1$, $1-p_1$, $0$ and $0$; if action $a_2$ is taken, the probabilities are $0$, $0$, $p_2$ and $1-p_2$. Apart from the events, a partial order on the probability vectors is also defined to specify which probability vector is more desirable. Let $v_1$, $v_2$, $v_3$ and $v_4$ denote the expected returns of the four events. If we order probability vectors by the weighted sum $c_1 v_1 - c_2 v_2 + c_3 v_3 - c_4 v_4$, we arrive at a formulation equivalent to the standard RL formulation with the reward specified by Eq. 1. If, however, we want the probability of incurring the unaffordable loss $c_2$ to be less than a certain threshold $\delta$, we can simply redefine the partial order so that any probability vector with $v_2 > \delta$ is smaller than any probability vector with $v_2 \le \delta$.
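The same example can be phrased directly in terms of event probabilities. The sketch below reuses the hypothetical numbers from the previous snippet and ranks the event-probability vectors with a threshold-style preference (implemented as a total-order key rather than a genuine partial order, for brevity): a vector is rejected if the probability of the unaffordable loss exceeds a tolerance $\delta$, and feasible vectors are ranked by expected profit. The tolerance and monetary values are assumptions.

```python
# Probabilities of the four events (entering s1, s2, s3, s4) under each action.
# Numbers are illustrative assumptions matching the earlier sketch.
event_probs = {
    "a1": (0.5, 0.5, 0.0, 0.0),   # (P[s1], P[s2], P[s3], P[s4])
    "a2": (0.0, 0.0, 0.5, 0.5),
}
payoffs = (+2000.0, -1000.0, +1000.0, 0.0)  # monetary outcome attached to each event
delta = 0.1                                  # max tolerated P[unaffordable loss], i.e. P[s2]

def preference_key(v):
    """Infeasible if P[s2] > delta; feasible vectors are ranked by expected profit."""
    feasible = v[1] <= delta
    expected_profit = sum(p * c for p, c in zip(v, payoffs))
    return (feasible, expected_profit)

best = max(event_probs, key=lambda a: preference_key(event_probs[a]))
print("preferred action:", best)  # a2: same expected profit as a1, but acceptable risk
```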

2 Background

2.1 MDP and Reinforcement Learning

The standard RL problem is often formulated in terms of a (single-objective) Markov decision process (MDP), which can be represented by a six-tuple $(S, A, P, \mu, \gamma, r)$, where $S$ is a finite set of states; $A$ is a finite set of actions; $P(s'|s,a)$ is the transition probability from state $s$ to state $s'$ taking action $a$; $\mu$ is the initial state distribution; $\gamma \in [0,1]$ is the discount factor; and $r(s,a,s')$ is the reward for taking action $a$ in state $s$ and arriving at state $s'$. The goal is to find the maximal expected discounted cumulative reward $\max_{\pi \in \Pi} \mathbb{E}_{\mu}^{\pi}\left[\sum_{t} \gamma^{t} r(s_t, a_t, s_{t+1})\right]$, where $\gamma^{t}$ denotes $\gamma$ to the power of $t$, and $\Pi$ denotes the set of policies we would like to consider. In the most general case, a policy can be history-dependent and random, in the form of $\pi = (d_0, d_1, \ldots)$, where a decision rule $d_t(a \mid s, h_t)$ is the probability of taking action $a$ in state $s$ with history $h_t$. A history $h_t = (s_0, a_0, d_0, \ldots, s_{t-1}, a_{t-1}, d_{t-1})$ is a sequence of past states, actions and decision rules. A policy is said to be deterministic if $d_t(a \mid s, h_t) = 1$ for only one action, in which case we can use the simplified notation $a = d_t(s, h_t)$. Correspondingly, if the policy is deterministic, a history can be represented with $h_t = (s_0, a_0, \ldots, s_{t-1}, a_{t-1})$. A policy is said to be stationary if the decision rule only depends on the current state $s_t$, and does not change with time, i.e., $d_t = d$ for all $t$. The sets of all history-dependent random policies, history-dependent deterministic policies, stationary random policies, and stationary deterministic policies are denoted by $\Pi^{HR}$, $\Pi^{HD}$, $\Pi^{SR}$ and $\Pi^{SD}$, respectively. We call a task an episodic task with horizon $T$ if the state space is augmented with time, $\gamma = 1$, and the task terminates after at most $T$ time steps. The expected return following a policy $\pi$ starting from the initial state distribution $\mu$ is called the value function, which is denoted as $v^{\pi}(\mu)$. For a single-objective MDP, there exists a stationary deterministic optimal policy, that is, $\max_{\pi \in \Pi^{SD}} v^{\pi}(\mu) = \max_{\pi \in \Pi^{HR}} v^{\pi}(\mu)$.
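As a minimal illustration of this objective, the sketch below estimates $v^{\pi}(\mu)$ for a stationary deterministic policy by Monte Carlo rollouts on a toy two-state MDP; the transition probabilities, rewards and policy are hypothetical.

```python
import random

# Toy MDP: two states, two actions. P[s][a] is a list of (prob, next_state, reward).
# All numbers are hypothetical, for illustration only.
P = {
    0: {0: [(0.9, 0, 0.0), (0.1, 1, 1.0)],
        1: [(0.5, 0, 0.0), (0.5, 1, 1.0)]},
    1: {0: [(1.0, 1, 0.0)],
        1: [(1.0, 1, 0.0)]},
}
gamma, horizon, episodes = 0.95, 200, 5000

def step(state, action):
    """Sample (next_state, reward) from the transition table."""
    u, acc = random.random(), 0.0
    for prob, nxt, rew in P[state][action]:
        acc += prob
        if u <= acc:
            return nxt, rew
    return nxt, rew  # guard against floating-point round-off

def policy(state):  # a stationary deterministic policy
    return 1 if state == 0 else 0

total = 0.0
for _ in range(episodes):
    s, ret, disc = 0, 0.0, 1.0
    for _ in range(horizon):
        s, rew = step(s, policy(s))
        ret += disc * rew
        disc *= gamma
    total += ret
print("estimated v^pi(mu) =", total / episodes)
```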

2.2 Multi-objective Reinforcement Learning

In some cases (Roijers et al., 2014), it is preferable to consider different aspects of a task as separate objectives. Multi-objective reinforcement learning is concerned with multi-objective Markov decision processes (MOMDPs) $(S, A, P, \mu, \{\gamma_i\}_{i=1}^{k}, \{r_i\}_{i=1}^{k})$, where $S$, $A$, $P$ and $\mu$ are the state space, action space, transition probability and initial state distribution as in single-objective MDPs; now there are $k$ pairs of discount factors and rewards $(\gamma_i, r_i)$, one for each objective. The value function for the $i$th objective is defined as $v_i^{\pi}(\mu) = \mathbb{E}_{\mu}^{\pi}\left[\sum_{t} \gamma_i^{t} r_i(s_t, a_t, s_{t+1})\right]$. Let $\mathbf{v}^{\pi}(\mu) = (v_1^{\pi}(\mu), \ldots, v_k^{\pi}(\mu))$ be the value functions for all objectives, and $V = \{\mathbf{v}^{\pi}(\mu) : \pi \in \Pi\}$ be the set of all value functions realizable by policies in $\Pi$; $\preceq$ is a partial order defined on $V$. Multi-objective RL aims to find the policies $\pi$ such that $\mathbf{v}^{\pi}(\mu)$ is a maximal element of $V$ (a maximal element of a subset $V$ of some partially ordered set is an element of $V$ that is not smaller than any other element in $V$), which we refer to as the optimal policies. We say a policy $\pi$ strictly dominates policy $\pi'$ if $\mathbf{v}^{\pi'}(\mu) \preceq \mathbf{v}^{\pi}(\mu)$ and $\mathbf{v}^{\pi}(\mu) \neq \mathbf{v}^{\pi'}(\mu)$. A commonly adopted partial order for multi-objective RL is the element-wise order $\mathbf{u} \preceq \mathbf{v} \iff u_i \le v_i$ for all $i$, in which case the set of maximal elements is also called the Pareto frontier of $V$. Episodic tasks have not been widely discussed in the context of multi-objective RL, and most existing literature assumes an infinite-horizon setting. Although for a single-objective MDP the optimal value can be attained by a deterministic stationary policy, this is in general not true for multi-objective MDPs. White (White, 1982) showed that history-dependent deterministic policies can Pareto dominate stationary deterministic policies; Chatterjee et al. (Chatterjee et al., 2006) proved, for the discounted case, that stationary random policies suffice for Pareto optimality.
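For the element-wise partial order, the maximal elements of a finite set of realizable value vectors (the Pareto frontier) can be computed directly, as in the sketch below; the value vectors are hypothetical.

```python
def pareto_frontier(points):
    """Return the maximal elements of `points` under the element-wise order."""
    def strictly_dominates(v, u):
        return all(vi >= ui for ui, vi in zip(u, v)) and v != u
    return [u for u in points if not any(strictly_dominates(v, u) for v in points)]

# Hypothetical value vectors realized by four policies (two objectives each).
V = [(0.9, 0.1), (0.6, 0.6), (0.5, 0.7), (0.4, 0.4)]
print(pareto_frontier(V))  # (0.4, 0.4) is dominated by (0.6, 0.6); the rest are maximal
```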

3 Micro-Objective Reinforcement Learning

In standard multi-objective RL, there is no restriction on the reward function of each objective. Each objective can itself be a ‘macro’ objective that involves multiple aspects. This makes multi-objective RL subject to the same issue single-objective RL has: only the expectation of return is considered for each objective. Conceptually, micro-objective RL is multi-objective RL at its extreme: each micro-objective is concerned with one and only one aspect — the occurrence of an event. An event can be ‘entering a set of goal/error states’, ‘taking certain actions in certain states’, or ‘entering a set of states at certain time steps’, etc., but ultimately can be represented by a set of histories. If the history up to the current time step is in the set, we say that the event happens.

3.1 Micro-objectives

At the core of the micro-objective formulation is a new form of value function $v_i^{\pi}(\mu)$. Denoting the set of all possible histories by $\mathcal{H}$, $\mathcal{T}_i \subseteq \mathcal{H}$ is the set of histories that corresponds to the occurrence of the event, which we call the termination set of a micro-objective. $\mathcal{I}_i \subseteq \mathcal{H}$ is also a set of histories, which we call the initiation set. The terminologies are deliberately chosen to resemble those of options (Sutton et al., 1999), and as we will see, this form of value function is indeed connected to options. Independent of the task, a micro-objective has its own initiation and termination. A micro-objective initiates if it is not currently active and the history is in $\mathcal{I}_i$, at which point an associated timer $\tau_i$ is also initiated. A micro-objective terminates if it is currently active and the history is in $\mathcal{T}_i$, upon which a return of 1 is received. It also terminates if $\tau_i$ exceeds the micro-objective horizon $T_i$ or the task terminates, upon which a return of 0 is received; in either case $\tau_i$ is reset. Note that $t$ and $T$ are the time step and time horizon for the task, whereas $\tau_i$ and $T_i$ are the time step and time horizon for the micro-objective. $v_i^{\pi}(\mu)$ is defined as the expected return the $i$th micro-objective receives starting from initial state distribution $\mu$ following policy $\pi$. For example, suppose that the task always starts from $s_0$ (i.e., $\mu(s_0) = 1$), and the micro-objective is active three times (in sequence) before task termination if policy $\pi$ is followed, with returns of 1, 0 and 1, respectively; then $v_i^{\pi}(\mu)$ is $\frac{1+0+1}{3} = \frac{2}{3}$. Although this particular form of value function is similar to the generalized value function (GVF) proposed by Sutton et al. (Sutton et al., 2011), in the sense that both have their own initiation, termination and return, it is a rather different concept. Unlike a GVF, which is associated with a target policy and can be interpreted as the answer to a question regarding the target policy, the value function of a micro-objective is parameterized by the global control policy, and is an evaluation of the global policy with respect to one aspect of the task. This becomes clearer when we consider the fact that the value function for a micro-objective is conditioned on the initial state distribution of the task. As a result, the value functions of the micro-objectives appear in the global RL objective, while it is not obvious how GVFs could be used in the RL task specification.
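A minimal sketch of the bookkeeping described above, under the assumption that initiation and termination can be tested by predicates over the history and that each activation yields a binary return (the class and attribute names are hypothetical):

```python
class MicroObjective:
    """Tracks activation, a per-activation timer, and the binary per-activation returns."""

    def __init__(self, in_initiation_set, in_termination_set, horizon):
        self.in_initiation_set = in_initiation_set    # predicate: history -> bool
        self.in_termination_set = in_termination_set  # predicate: history -> bool
        self.horizon = horizon                        # T_i, the micro-objective horizon
        self.active = False
        self.timer = 0
        self.returns = []                             # one 0/1 return per activation

    def observe(self, history, task_done=False):
        """Call once per task time step with the history so far."""
        if not self.active:
            if self.in_initiation_set(history):       # initiate and start the timer
                self.active, self.timer = True, 0
            return
        self.timer += 1
        if self.in_termination_set(history):          # the event occurred
            self.returns.append(1)
            self.active = False
        elif self.timer > self.horizon or task_done:  # timed out, or the task ended
            self.returns.append(0)
            self.active = False
```

Aggregating the recorded returns over many episodes would then give an estimate of $v_i^{\pi}(\mu)$.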

Formally, a micro-objective RL problem is an episodic task represented by the tuple $(S, A, P, \mu, S_{\mathrm{term}}, T, O_1, \ldots, O_k, \preceq)$, where $S$, $A$, $P$ and $\mu$ are the state space, action space, transition probabilities, and initial state distribution as usual. $S_{\mathrm{term}}$ is the set of terminal states of the task, and $T$ is the time horizon. The task terminates whenever $t = T$, or a state in $S_{\mathrm{term}}$ is reached. $O_1$ to $O_k$ are the micro-objectives as described above. Let $\mathbf{v}^{\pi}(\mu) = (v_1^{\pi}(\mu), \ldots, v_k^{\pi}(\mu))$ be the value functions for all micro-objectives, and $V = \{\mathbf{v}^{\pi}(\mu) : \pi \in \Pi\}$ be the set of all value functions realizable by policies in $\Pi$; $\preceq$ is a partial order defined on $V$. Similar to multi-objective RL, the goal is to find the policies $\pi$ such that $\mathbf{v}^{\pi}(\mu)$ is a maximal element of $V$. However, the multi-objective RL formulation introduced in Section 2.2 does not subsume micro-objective RL. For one thing, there is no notion of objective termination in multi-objective RL.
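Collecting the pieces, the problem specification might be represented as follows (a sketch with hypothetical names; the partial order is encoded as a comparison function over value vectors):

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Set

@dataclass
class MicroObjectiveRLProblem:
    states: Sequence                 # S
    actions: Sequence                # A
    transition: Callable             # P(s' | s, a)
    initial_dist: Callable           # mu(s)
    terminal_states: Set             # task-terminating states
    horizon: int                     # T, the task horizon
    micro_objectives: Sequence       # O_1, ..., O_k (e.g., MicroObjective instances)
    preceq: Callable                 # partial order on value vectors: (v, v_prime) -> bool
```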

3.2 Connections to Hierarchical RL

Hierarchical RL (Barto and Mahadevan, 2003) refers to RL paradigms that exploit temporal abstraction to facilitate learning, where a higher-level policy selects 'macro' actions that in turn 'call' some lower-level actions. Notable hierarchical RL approaches include the options formalism (Sutton et al., 1999), hierarchical abstract machines (HAM) (Parr and Russell, 1997), the MAXQ framework (Dietterich, 2000), and feudal RL (Dayan and Hinton, 1992). However, in none of these frameworks does the specification for temporal abstraction appear in the global RL objective. As a result, there is no clear measure of how well each temporally abstracted action should be learned, and how important it is compared to the goal of the high-level task. The micro-objective formulation is designed to allow temporal abstraction to be expressed as micro-objectives, therefore building a bridge between 'objectives' and 'options'. To see this, recall that a micro-objective has an initiation set $\mathcal{I}_i$ and a termination set $\mathcal{T}_i$, corresponding to the initiation set and termination condition of an option. If the initiation set and the goal states (options do not need to have goal states; this is an intentional oversimplification to illustrate the idea) of an option coincide with the initiation set and the termination set of a micro-objective, then the micro-objective can be thought of as a measure of how important this option is.

[Figure 1: A task with three hypothetical options. $s_A$ is the initial state, and $s_G$ is the goal state. There are two possible scenarios. In the first scenario, reaching the intermediate states $S_B$ is important, in which case both $o_{AB}$ and $o_{BG}$ must be learned well; in the second scenario, $S_B$ is just some guidance for exploration, in which case only $o_{AG}$ needs to be learned well. The importance of learning each option can be represented by micro-objectives.]

To be more concrete, consider the following task, where the initial state is $s_A$, the goal state is $s_G$, and $S_B$ is a set of intermediate states. $o_{AG}$ is a hypothetical option (it might not exist, and is to be learned if needed) that takes the agent from $s_A$ to $s_G$, and might or might not pass through $S_B$. Similarly, $o_{AB}$ is a hypothetical option from $s_A$ to $S_B$, and $o_{BG}$ is a hypothetical option from $S_B$ to $s_G$. Such a task can be a taxi agent at location $s_A$ driving a passenger from a pick-up location in $S_B$ to a destination $s_G$, in which case we can define two micro-objectives $O_{AB}$ and $O_{BG}$, corresponding to $o_{AB}$ and $o_{BG}$ respectively. In this example, a micro-objective for $o_{AG}$ is not needed, because passing through $S_B$ is required. The value function is thus $\mathbf{v}^{\pi}(\mu) = (v_{AB}^{\pi}(\mu), v_{BG}^{\pi}(\mu))$, and the partial order can be defined element-wise. Another possible scenario for such a task would be an agent navigating through a maze, where $S_B$ is only a heuristic. In this case, the only important micro-objective is $O_{AG}$, which corresponds to option $o_{AG}$; $O_{AB}$ and $O_{BG}$ are only there to help exploration. Let $\mathbf{v}^{\pi}(\mu) = (v_{AG}^{\pi}(\mu), v_{AB}^{\pi}(\mu), v_{BG}^{\pi}(\mu))$; the partial order can then be defined so that value vectors are compared only by the goal-reaching component $v_{AG}^{\pi}(\mu)$.
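The two scenarios differ only in the partial order placed on the same micro-objective value functions, as the sketch below illustrates (the predicates and orders are hypothetical; histories are taken to be sequences of visited states):

```python
# Event predicates for the hypothetical options, over histories of visited states.
def reached(target):
    return lambda history: history[-1] == target

# Taxi scenario: both "reach the pick-up location" (O_AB) and "reach the
# destination" (O_BG) matter, so value vectors are compared element-wise.
def taxi_preceq(v, v_prime):
    return all(a <= b for a, b in zip(v, v_prime))

# Maze scenario: only reaching the goal (O_AG) matters; the intermediate
# micro-objectives merely guide exploration and are ignored by the order.
def maze_preceq(v, v_prime, goal_index=0):
    return v[goal_index] <= v_prime[goal_index]
```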

3.3 Generality

We briefly discuss the generality of the micro-objective RL formulation. Since the state space, action space and transition probabilities are the same as in an MDP, our only concern is whether micro-objectives, together with the partial order, can imply an arbitrary optimal policy. If for any stationary deterministic policy $\pi^*$ and any Markov dynamics $(S, A, P, \mu)$, there exists a partial order $\preceq$ and a set of micro-objectives such that $\pi^*$ is optimal, then we can cast any single-objective RL problem into an equivalent micro-objective problem. Now we show that micro-objective RL can indeed imply an arbitrary stationary deterministic policy $\pi^*$. Let $s^{(1)}, \ldots, s^{(n)}$ be an enumeration of the finite state space $S$; we define $n$ micro-objectives $O_1, \ldots, O_n$, which, with abuse of notation, we write as $O_j = (s^{(j)}, \pi^*(s^{(j)}))$. In other words, for each state we define an event: taking $\pi^*(s^{(j)})$ in $s^{(j)}$. If we further define the partial order as the element-wise order on $(v_1^{\pi}(\mu), \ldots, v_n^{\pi}(\mu))$, then $\pi^*$ is optimal for this micro-objective RL task. Therefore, given enough micro-objectives, the micro-objective formulation is at least as general as the standard RL formulation.
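The construction used in this argument is mechanical; a sketch is below (the history format and helper names are hypothetical, and the element-wise order follows the reconstruction above):

```python
# Given a target stationary deterministic policy pi_star: state -> action,
# build one micro-objective event per state: "take pi_star(s) in s".
# Histories are assumed to be sequences of (state, action) pairs.
def build_events(states, pi_star):
    def took_prescribed_action(s):
        return lambda history: history[-1] == (s, pi_star[s])
    return [took_prescribed_action(s) for s in states]

def elementwise_preceq(v, v_prime):
    return all(a <= b for a, b in zip(v, v_prime))

# The argument in the text is that pi_star is then a maximal element of the
# realizable value vectors under elementwise_preceq, i.e., optimal.
```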

Although we have shown above that micro-objective RL is able to imply any stationary deterministic policy, it remains interesting whether and how an 'objective' in single/multi-objective RL can be directly translated to one or more 'micro-objective(s)'. Consider the $i$th objective of multi-objective RL (in the case of single-objective RL, $i = 1$) with reward $r_i$ and discount factor $\gamma_i$. If we define the following (countably infinite) set of micro-objectives:

$$\mathcal{I}_{t,s,a,s'} = \mathcal{H}_t, \qquad \mathcal{T}_{t,s,a,s'} = \{ h \in \mathcal{H} : (s_t, a_t, s_{t+1}) = (s, a, s') \}, \qquad t \ge 0,\ (s, a, s') \in S \times A \times S,$$

where $\mathcal{H}_t$ denotes the set of all possible histories until (but excluding) time $t$, then the value function for the $i$th objective of the original single/multi-objective RL can be written in terms of the above micro-objectives as:

$$v_i^{\pi}(\mu) = \sum_{t=0}^{\infty} \sum_{s, a, s'} \gamma_i^{t}\, r_i(s, a, s')\, v_{t, s, a, s'}^{\pi}(\mu),$$

since $v_{t, s, a, s'}^{\pi}(\mu)$ is the probability that the transition $(s, a, s')$ occurs at time $t$ under $\pi$, thus casting an 'objective' as a set of 'micro-objectives'.
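Under the assumptions of this reconstruction, the identity can be checked numerically on a small Markov chain (the chain, rewards and discount below are hypothetical, with the actions folded into a fixed policy): summing $\gamma_i^t r_i$ weighted by the probability that the corresponding transition occurs at time $t$ recovers the ordinary value.

```python
import numpy as np

# Hypothetical two-state Markov chain induced by a fixed policy
# (actions are folded into the chain, so transitions are over (s, s') only).
M = np.array([[0.8, 0.2],
              [0.1, 0.9]])        # M[s, s'] = P(s' | s) under the policy
R = np.array([[0.0, 1.0],
              [0.5, 0.0]])        # R[s, s'] = reward of the transition
mu = np.array([1.0, 0.0])         # initial state distribution
gamma, T = 0.9, 2000              # discount and truncation horizon

# Ordinary value: sum_t gamma^t * E[R(s_t, s_{t+1})].
r_s = (M * R).sum(axis=1)         # expected one-step reward in each state
v_standard = sum((gamma ** t) * (mu @ np.linalg.matrix_power(M, t)) @ r_s
                 for t in range(T))

# Decomposed value: sum over t and (s, s') of gamma^t * R[s, s'] * P(s_t=s, s_{t+1}=s'),
# where P(s_t=s, s_{t+1}=s') plays the role of a micro-objective value function.
v_micro = 0.0
for t in range(T):
    occupancy = mu @ np.linalg.matrix_power(M, t)            # P(s_t = s)
    v_micro += (gamma ** t) * (occupancy[:, None] * M * R).sum()

print(v_standard, v_micro)        # the two agree up to truncation error
```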

3.4 Optimal Policies

In this section, we discuss the optimal policies for micro-objective RL. We first show that stationary random policies can strictly dominate stationary deterministic policies. Consider the following example (Figure 2(a)) where there are three states $s_0$, $s_1$ and $s_2$. From $s_0$ there are two actions $a_1$ and $a_2$: $a_1$ always leads to state $s_1$ and $a_2$ always leads to state $s_2$. The episode always starts from $s_0$, i.e., $\mu(s_0) = 1$. There are two micro-objectives, reaching $s_1$ and reaching $s_2$, whose value functions are denoted, with abuse of notation, as $v_1^{\pi}$ and $v_2^{\pi}$, respectively. The partial order is defined as $\mathbf{v}^{\pi} \preceq \mathbf{v}^{\pi'} \iff \min(v_1^{\pi}, v_2^{\pi}) \le \min(v_1^{\pi'}, v_2^{\pi'})$. For this task, the stationary random policy that selects $a_1$ and $a_2$ with equal probability strictly dominates any stationary deterministic policy, which always selects either $a_1$ or $a_2$.
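A quick numerical check of this example, under the min-based preference reconstructed above (which is itself an assumption about the intended partial order):

```python
# Value vectors (v1, v2) = (P[reach s1], P[reach s2]) for candidate policies.
policies = {
    "always a1":    (1.0, 0.0),
    "always a2":    (0.0, 1.0),
    "50/50 random": (0.5, 0.5),
}
score = lambda v: min(v)  # the assumed min-based preference
for name, v in policies.items():
    print(f"{name}: v = {v}, min = {score(v)}")
# Only the random policy achieves a positive minimum, so it strictly dominates
# both deterministic policies under this order.
```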

[Figure 2: (a), (b): Two example tasks showing that the optimal policies for micro-objective RL are in general history-dependent and random.]

Now we show that history-dependent deterministic policies can strictly dominate stationary policies. Consider the example (Figure 2(b)) where there are five states, $s_1$ to $s_5$. The episode starts from $s_1$ or $s_2$ with equal probability, i.e., $\mu(s_1) = \mu(s_2) = 0.5$. The next state for both $s_1$ and $s_2$ is always $s_3$, regardless of the action taken. From $s_3$ there are two actions $a_1$ and $a_2$, leading to $s_4$ and $s_5$ with certainty, respectively. There are two micro-objectives: to go from $s_1$ to $s_4$, and to go from $s_2$ to $s_5$. Again we abbreviate the value functions of the two micro-objectives as $v_1^{\pi}$ and $v_2^{\pi}$, respectively. The partial order is defined element-wise, i.e., $\mathbf{v}^{\pi} \preceq \mathbf{v}^{\pi'} \iff v_1^{\pi} \le v_1^{\pi'}$ and $v_2^{\pi} \le v_2^{\pi'}$. In this task, the only distinction between policies is the choice of action in state $s_3$. It is not hard to see that the history-dependent policy that takes action $a_1$ in $s_3$ if the initial state is $s_1$, and takes action $a_2$ in $s_3$ if the initial state is $s_2$, strictly dominates any stationary policy.
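A similar check for this example, under the element-wise order assumed above; the value vectors can be enumerated directly since the only choice is the action taken in $s_3$:

```python
# (v1, v2) = (P[go from s1 to s4], P[go from s2 to s5]); the episode starts in
# s1 or s2 with probability 0.5 each, and the only choice is the action in s3.
policies = {
    "always a1 in s3":   (0.5, 0.0),
    "always a2 in s3":   (0.0, 0.5),
    "uniform in s3":     (0.25, 0.25),
    "history-dependent": (0.5, 0.5),   # a1 if the episode started in s1, else a2
}

def strictly_dominates(v, u):
    return all(a >= b for a, b in zip(v, u)) and v != u

best = policies["history-dependent"]
print(all(strictly_dominates(best, v)
          for name, v in policies.items() if name != "history-dependent"))  # True
```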

By combining the above two examples, we can construct an example where a history-dependent random policy strictly dominates history-dependent deterministic policies. Therefore the optimal policies for micro-objective RL are in general history-dependent and random.

4 Conclusions

We introduced micro-objective RL, a general RL formalism that not only addresses the limitation of the standard RL formulation, in which only the expectation of the return is considered, but also allows temporal abstraction to be incorporated into the global RL objective. Intuitively, this micro-objective paradigm bears more resemblance to how humans perceive a task: it is hard for a human to tell what reward they receive at a certain time, but it is relatively easy to tell how good a particular combination of events is. Ongoing research topics include effective algorithms for micro-objective RL, and compact representations of similar micro-objectives.

References

  • Altman [1999] Eitan Altman. Constrained Markov Decision Processes. Chapman & Hall/CRC, 1999. ISBN 9780849303821.
  • Barto and Mahadevan [2003] Andrew G. Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(1-2):41–77, 2003. doi: 10.1023/A:1022140919877. URL https://doi.org/10.1023/A:1022140919877.
  • Chatterjee et al. [2006] Krishnendu Chatterjee, Rupak Majumdar, and Thomas A. Henzinger. Markov decision processes with multiple objectives. In STACS 2006, 23rd Annual Symposium on Theoretical Aspects of Computer Science, Marseille, France, February 23-25, 2006, Proceedings, pages 325–336, 2006. doi: 10.1007/11672142_26. URL https://doi.org/10.1007/11672142_26.
  • Dayan and Hinton [1992] Peter Dayan and Geoffrey E. Hinton. Feudal reinforcement learning. In Advances in Neural Information Processing Systems 5, [NIPS Conference, Denver, Colorado, USA, November 30 - December 3, 1992], pages 271–278, 1992. URL http://papers.nips.cc/paper/714-feudal-reinforcement-learning.
  • Dietterich [2000] Thomas G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Intell. Res., 13:227–303, 2000. doi: 10.1613/jair.639. URL https://doi.org/10.1613/jair.639.
  • Geibel and Wysotzki [2011] Peter Geibel and Fritz Wysotzki. Risk-sensitive reinforcement learning applied to control under constraints. CoRR, abs/1109.2147, 2011. URL http://arxiv.org/abs/1109.2147.
  • Koenig and Simmons [1994] Sven Koenig and Reid G. Simmons. Risk-sensitive planning with probabilistic decision graphs. In Proceedings of the 4th International Conference on Principles of Knowledge Representation and Reasoning (KR’94). Bonn, Germany, May 24-27, 1994., pages 363–373, 1994.
  • Parr and Russell [1997] Ronald Parr and Stuart J. Russell. Reinforcement learning with hierarchies of machines. In Advances in Neural Information Processing Systems 10, [NIPS Conference, Denver, Colorado, USA, 1997], pages 1043–1049, 1997. URL http://papers.nips.cc/paper/1384-reinforcement-learning-with-hierarchies-of-machines.
  • Roijers et al. [2014] Diederik Marijn Roijers, Peter Vamplew, Shimon Whiteson, and Richard Dazeley. A survey of multi-objective sequential decision-making. CoRR, abs/1402.0590, 2014. URL http://arxiv.org/abs/1402.0590.
  • Sato et al. [2001] Makoto Sato, Hajime Kimura, and Shigenobu Kobayashi. TD algorithm for the variance of return and mean-variance reinforcement learning. Transactions of the Japanese Society for Artificial Intelligence, 16(3):353–362, 2001. doi: 10.1527/tjsai.16.353.
  • Sherstan et al. [2018] Craig Sherstan, Dylan R. Ashley, Brendan Bennett, Kenny Young, Adam White, Martha White, and Richard S. Sutton. Comparing direct and indirect temporal-difference methods for estimating the variance of the return. In Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, UAI 2018, Monterey, California, USA, August 6-10, 2018, pages 63–72, 2018. URL http://auai.org/uai2018/proceedings/papers/35.pdf.
  • Sutton et al. [1999] Richard S. Sutton, Doina Precup, and Satinder P. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artif. Intell., 112(1-2):181–211, 1999. doi: 10.1016/S0004-3702(99)00052-1. URL https://doi.org/10.1016/S0004-3702(99)00052-1.
  • Sutton et al. [2011] Richard S. Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M. Pilarski, Adam White, and Doina Precup. Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011), Taipei, Taiwan, May 2-6, 2011, Volume 1-3, pages 761–768, 2011. URL http://portal.acm.org/citation.cfm?id=2031726&CFID=54178199&CFTOKEN=61392764.
  • Tamar et al. [2013] Aviv Tamar, Dotan Di Castro, and Shie Mannor. Temporal difference methods for the variance of the reward to go. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pages 495–503, 2013. URL http://jmlr.org/proceedings/papers/v28/tamar13.html.
  • White [1982] D. J. White. Multi-objective infinite-horizon discounted Markov decision processes. Journal of Mathematical Analysis and Applications, 89(2):639–647, 1982. ISSN 0022-247X. doi: 10.1016/0022-247X(82)90122-6. URL http://www.sciencedirect.com/science/article/pii/0022247X82901226.
  • Yu et al. [1998] Stella X. Yu, Yuanlie Lin, and Pingfan Yan. Optimization models for the first arrival target distribution function in discrete time. Journal of Mathematical Analysis and Applications, 225(1):193–223, 1998. ISSN 0022-247X. doi: 10.1006/jmaa.1998.6015. URL http://www.sciencedirect.com/science/article/pii/S0022247X98960152.