Partner Approximating Learners (PAL): Simulation-Accelerated Learning with Explicit Partner Modeling in Multi-Agent Domains

Abstract

Mixed cooperative-competitive control scenarios, such as human-machine interaction in which the interacting partners pursue individual goals, are very challenging for reinforcement learning agents. In order to contribute towards intuitive human-machine collaboration, we focus on problems in the continuous state and control domain where no explicit communication is considered and the agents do not know the others’ goals or control laws but only sense their control inputs retrospectively. Our proposed framework combines a partner model learned from online data with a reinforcement learning agent that is trained in a simulated environment including this partner model. Thus, we overcome drawbacks of independent learners and, in addition, benefit from a reduced amount of real-world data required for reinforcement learning, which is vital in the human-machine context. We finally analyze an example that demonstrates the merits of our proposed framework, which learns fast due to the simulated environment and adapts to the continuously changing partner by means of the partner approximation.

Reinforcement Learning, Mixed Cooperative-Competitive Control, Machine Learning in Control, Opponent Modeling

I Introduction

In numerous control problems such as robotics, intelligent manufacturing plants and highly-automated driving, several so-called agents (e.g. machines and/or humans) are involved and need to adapt to each other in order to improve their behavior. Allowing the agents to pursue individual, not necessarily opposing, goals by means of individual reward structures leads to mixed cooperative-competitive [1] reinforcement learning (RL) problems. Although, especially in control problems, the system dynamics is often known, the reward structures and control laws of other agents are usually unknown to each agent. If the agents adapt their behavior during runtime, this leads to non-stationary environments from the point of view of each agent. Thus, rather than being ignorant concerning the presence of other agents, it is advisable to consider their influence explicitly [2]. Another challenge arising when control tasks are learned by means of RL is the lack of data efficiency, as large amounts of real-world data are required in order to obtain decent performance. Major successes in training an agent in simulated environments rather than solely based on real data have been reported by [3] and [4]. In simulated environments, powerful hardware can be used to speed up simulations and thus increase the rate of interactions without the risk of erratic exploration. However, in the multi-agent case, RL based solely on simulations would not appropriately consider the other agents’ non-stationary behavior.

In this work, we focus on control problems in continuous state and control spaces where no explicit communication or reward sharing is available but the agents are solely able to sense or deduce the other agents’ control inputs after they have been applied. In order to account for the above-mentioned challenges, we propose a general framework that combines the merits of maintaining a partner model that is constantly updated based on real data with learning in a simulated environment. More precisely, each agent approximates a partner model that incorporates the aggregated controls of all other agents. This approximation is constantly updated based on real data in order to capture their changing behavior. Then, each agent simulates a virtual replica of the real control loop containing the system model, the partner approximation and his own control law. In this virtual simulation, the agent updates his control law by means of RL methods in order to improve the performance w.r.t. his reward function. The control law learned in the virtual environment is then transferred to the real control loop and the partner approximation is updated again in order to capture the other agents’ changes and reaction. That way, potentially non-stationary partners are steadily approximated and explicitly considered by the RL agent learning in a simulated environment.

In the next section, we define our problem and the concept of partner approximating learners (PAL). Then, we place our framework in the context of related work before we propose our main topology. Finally, we give an example choice of the components and analyze our method by means of a swing-up task of an inverted pendulum.

II Problem and partner approximating learner definition

Consider a discrete-time system controlled by $N$ agents that is given in nonlinear state space representation $x_{k+1} = f\big(x_k, u_k^{(1)}, \dots, u_k^{(N)}\big)$, where $x_k \in \mathcal{X} \subseteq \mathbb{R}^{n}$ and $u_k^{(i)} \in \mathcal{U}_i \subseteq \mathbb{R}^{m_i}$ are the continuous state and continuous control of agent $i$. From the point of view of agent $i$, let $u_k^{(-i)}$ be the aggregated control input of all other agents. Then, agent $i$ aims to adapt his control law $u_k^{(i)} = \mu_i(x_k)$ in order to maximize his long-term discounted reward

$$G_i = \sum_{k=0}^{\infty} \gamma_i^{k}\, r_i\big(x_k, u_k^{(i)}, u_k^{(-i)}\big). \tag{1}$$

In this mixed cooperative-competitive setting with continuous state and control spaces, no explicit communication between the agents is allowed. Each agent maintains a model $\hat{f}$ of the system dynamics, either as a result of model design or approximated via e.g. recurrent neural networks. Furthermore, each agent $i$ senses the partners’ controls $u_{k-1}^{(-i)}$, i.e. after a one-step delay, as well as the system state $x_k$, and is aware of his own reward function $r_i$ and discount factor $\gamma_i \in [0,1)$, but has no knowledge of the other agents’ reward functions $r_j$ and control laws $\mu_j$, $j \neq i$. Based on this problem definition, the notion of partner approximating learners (PAL) is given as follows.

Definition 1

A partner approximating learner (PAL) is an RL agent in a multi-agent setting acting in continuous state and control spaces that

  • does not explicitly communicate with other agents and does not know their reward structures and control laws

  • is able to sense or deduce the other agents’ control inputs after they have been applied

  • maintains and updates a model of the other agents’ aggregated control law (partner model) based on real data

  • updates his control law based on simulated data while explicitly incorporating the partner model.
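These four properties suggest a small set of operations that any PAL implementation has to provide. The following Python sketch is purely illustrative; the class and method names (PartnerApproximatingLearner, observe, update_partner_model, improve_in_simulation, act) are hypothetical and not part of the paper.

```python
class PartnerApproximatingLearner:
    """Hypothetical interface sketch of a PAL; all names are illustrative only."""

    def __init__(self, system_model, reward_fn, gamma):
        self.system_model = system_model   # known or approximated dynamics f_hat
        self.reward_fn = reward_fn         # own reward function r_i
        self.gamma = gamma                 # own discount factor gamma_i
        self.partner_model = None          # approximation of the partners' aggregated control law
        self.policy = None                 # own control law mu_i

    def observe(self, state, partner_controls):
        """Store the retrospectively sensed partner controls (one-step delay)."""
        raise NotImplementedError

    def update_partner_model(self):
        """Supervised update of the partner model from the stored real data."""
        raise NotImplementedError

    def improve_in_simulation(self, num_steps):
        """Run RL in the internal simulation built from system_model and partner_model."""
        raise NotImplementedError

    def act(self, state):
        """Evaluate the current control law mu_i(x_k) for the real system."""
        raise NotImplementedError
```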

In the following section, we outline related work before our proposed framework is introduced.

III Related work

The idea of incorporating simulated data from a system model into the learning process of an RL agent was proposed in the Dyna architecture [5]. Extensions of this concept that use a simulated environment rather than learning solely from real data marked a breakthrough in coping with the sample complexity of high-dimensional function approximators in continuous control tasks. One example is the use of Normalized Advantage Functions (NAF) with Imagination Rollouts [6], which not only allows for continuous state and control spaces, but also accelerates learning by means of model-based simulated data that is additionally fed into the replay buffer. Another example is given by [4], where a complex dexterous hand manipulation task has successfully been learned in simulation based on Proximal Policy Optimization (PPO) [7] and transferred to a physical robot.

On the multi-agent side, independent learners have shown limited performance [2] due to the non-stationarity of the environment. In fully cooperative settings, optimistic learning such as hysteretic Q-learning [8] has been proposed, assuming that all agents tend to improve collective rewards. Other approaches require explicit communication [9, 10] or share actor parameters [11, 12]. Partner modeling strives to avoid the disadvantages of independent learners without the necessity of communication or parameter sharing. In Self Other-Modeling [13], the agent updates his belief of the partners’ hidden goals and predicts the others’ control inputs based on his own control law. In the work of [14], a maximum-likelihood approach is used to predict the partners’ future controls based on previous controls in finite state and control spaces under the requirement that the payoff matrix is known to all agents. Multi-Agent Deep Deterministic Policy Gradient (MADDPG) [1] is a remarkable extension of DDPG [15] and thus allows continuous state and control spaces. MADDPG uses centralized training with decentralized execution, so the Q-function of each agent depends not only on the state and his own control but also on the controls of all other agents. In order to remove the assumption of knowing all agents’ control laws, it is suggested in [1, Section 4.2] to infer the control laws of other agents.

Usually, multi-agent RL algorithms do not explicitly assume knowledge of the system dynamics and therefore learn only from observed data. In contrast, our framework benefits from known system dynamics, which is often available in control engineering as a result of model design or can be approximated, and requires real data solely to update the partner model, whereas the RL agent is able to explore and generate large amounts of data in a virtual environment. Our framework, proposed in the next section, incorporates partner modeling into the paradigm of accelerating learning with simulated data and can therefore be interpreted as extending powerful mechanisms such as the Dyna architecture [5] or Imagination Rollouts [6] to the multi-agent case.

IV Proposed adaptive mixed cooperative-competitive controller

In this section, we introduce the topology of the proposed Partner Approximating Learner framework (PAL-framework), which can be used with various partner identification and RL algorithms due to its modularity. We refer to all controllers implemented in the PAL-framework as Partner Approximating Learners (PALs). Our framework consists of three main components that can be seen in Fig. 1: the identification, which approximates all partners’ aggregated control law with a model $\hat{h}_i$; the internal simulation, where RL is used to improve the third component; and the control law itself, which is applied to the real physical system to be controlled. In the following, the components are explained in more detail.

Fig. 1: Structure of the proposed framework. Each agent identifies an aggregated partner model from online data, optimizes his control law based on the partner model and system model by means of RL in the internal simulation and transfers the learned control law to the controller in reality.

IV-A Online partner identification with experience replay

To be able to improve the own control law $\mu_i$ toward a higher long-term reward $G_i$, the behavior of the partners must be taken into account. We therefore continuously identify and improve a model $\hat{h}_i$ of the partners’ aggregated control law in order to predict the aggregated control input $u_k^{(-i)}$ of all partners from the current state $x_k$. Note, however, that this control law is not necessarily fixed and might change, e.g. because the partners are learning as well. Thus, the model $\hat{h}_i$ should be a flexible and powerful function approximator in order to accurately capture a wide range of possible partner control laws.

Supervised learning algorithms typically require a lot of training data before any useful approximation of the target is obtained. Due to the fact that the data has to be obtained from interactions of the partners with the system, the rate of new information about the partners’ behavior is quite low. Additionally, using only the newest set of input-output data for training leads to a high variance in the direction of the applied updates to the models, which often leads to unstable learning algorithms [16]. Both the relative scarcity of data and the high variance of updates also prevented the use of deep neural networks in RL for many years. A remedy for both problems was introduced to deep RL by [16] in the form of experience replay (ER). Instead of training on only the latest experience, samples are chosen from the replay buffer uniformly at random (u.a.r.) and form the mini-batch, which is used for training.

Because both online identification and deep RL exhibit these problems, we adapt experience replay for use in online identification. To this end, we save the input-output data of the partners, i.e. pairs $(x_k, u_k^{(-i)})$, as experiences in an identification buffer $\mathcal{D}^{\mathrm{id}}$. To update the approximate partner model $\hat{h}_i$, we pick experiences from the buffer and use a supervised learning algorithm that is appropriate for the specific task. The size of the buffer should be large enough to have a high chance of holding information about different regions of the state space and thus capturing nonlinearities in the identification step. Limiting it in size is, however, not only a memory requirement, but also helps to discard experiences that are outdated and thus do not capture the current behavior of the potentially changing partner. Even improved ER algorithms, which differ in the way the experiences are drawn from $\mathcal{D}^{\mathrm{id}}$, such as prioritized experience replay (PER) [17] and combined experience replay (CER) [22], can be used directly as long as they do not take the reward of an experience into account (there is no reward associated with the input-output data of the partner). To use PER, the priorities are weighted according to the prediction error rather than the TD error. Fig. 2 shows the different components of the identification part of the controller.
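As a concrete illustration of this identification step, the following hedged Python sketch stores (state, aggregated partner control) pairs and performs a CER-style mini-batch update; the buffer capacity, batch size and the supervised update itself (passed in as a callable) are illustrative placeholders rather than the paper's choices.

```python
import random
from collections import deque

import numpy as np


class IdentificationBuffer:
    """Hypothetical identification buffer with combined experience replay (CER).

    Experiences are (state, aggregated partner control) pairs; no reward is stored.
    Capacity and batch size are illustrative, not the paper's values.
    """

    def __init__(self, capacity=10_000, batch_size=20):
        self.buffer = deque(maxlen=capacity)  # oldest experiences are discarded automatically
        self.batch_size = batch_size

    def add(self, state, partner_control):
        self.buffer.append((np.asarray(state, dtype=float),
                            np.asarray(partner_control, dtype=float)))

    def sample_cer(self):
        """CER: always include the newest experience and fill the rest of the
        mini-batch uniformly at random from the whole buffer."""
        newest = self.buffer[-1]
        rest = random.sample(list(self.buffer), min(self.batch_size - 1, len(self.buffer)))
        batch = [newest] + rest
        states = np.stack([s for s, _ in batch])
        controls = np.stack([u for _, u in batch])
        return states, controls


def identification_step(buffer, supervised_update, state, sensed_partner_control):
    """One online identification step: store the newly sensed data, then perform a
    supervised update of the partner model on a replayed mini-batch (e.g. one
    gradient step minimizing the mean squared prediction error)."""
    buffer.add(state, sensed_partner_control)
    states, controls = buffer.sample_cer()
    supervised_update(states, controls)
```

Replacing sample_cer by purely uniform sampling or by a prediction-error-weighted scheme would yield plain ER or a PER-like variant, respectively.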

Fig. 2: Online identification. Each time step $k$, the state $x_k$ and the aggregated control input $u_k^{(-i)}$ of the partners are stored in the identification buffer $\mathcal{D}^{\mathrm{id}}$. A mini-batch is formed by picking experiences using an applicable experience replay algorithm and the model $\hat{h}_i$ of the partners’ behavior is improved.

IV-B Internal simulation

The core idea of the PAL-framework lies within the internal simulation that the controller runs in order to improve its control law. It consists of two parts, a virtual replica of the real control loop and an RL agent acting on this replica.

Virtual replica

In order to capture the interactions of the real control loop, the three components of “reality” in Fig. 1 have to be known. The controller’s behavior and the system dynamics are both known, while the partners’ behavior is not. This is where the approximate partner model $\hat{h}_i$ (see Section IV-A) is used. We are now able to simulate the behavior of the real control loop offline and typically much faster, with no wear of the hardware and without cumbersome and costly RL on the physical system.

Reinforcement learning algorithm

Fig. 3: An RL agent improves his control law in the internal simulation based on the partner model and system dynamics.

With a simulation of the real control loop at hand, RL can be applied in a straightforward way when the system model and the approximate partner model are combined into a single Markov Decision Process (MDP) with state space $\mathcal{X}$, action space $\mathcal{U}_i$, system dynamics $\tilde{x}_{t+1} = \hat{f}\big(\tilde{x}_t, \tilde{u}_t^{(i)}, \hat{h}_i(\tilde{x}_t)\big)$, reward function $r_i$ and discount factor $\gamma_i$, where $t$ denotes the time step in the simulation. In this auxiliary MDP, the RL agent chooses simulated controls $\tilde{u}_t^{(i)}$ and obtains the resulting simulated state $\tilde{x}_{t+1}$ of the simulated system. In addition, the agent experiences a reward $r_i\big(\tilde{x}_t, \tilde{u}_t^{(i)}, \hat{h}_i(\tilde{x}_t)\big)$. Based on these experiences, which are usually stored in a replay buffer $\mathcal{D}^{\mathrm{RL}}$, the agent improves his control law. The complete setup of the simulated control loop can be seen in Fig. 3 for the example of an actor-critic RL agent, where the critic estimates the long-term reward and the actor represents the control law. Note that for some RL algorithms, the partner model may additionally be used directly by the RL agent, e.g. in the case of MADDPG [1].
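This auxiliary MDP can be wrapped so that any single-agent RL algorithm can interact with it. The following hedged sketch shows one possible wrapper, with the system model, partner model, reward function and initial-state sampler passed in as placeholder callables.

```python
import numpy as np


class InternalSimulation:
    """Hypothetical wrapper of the auxiliary MDP used in the internal simulation.

    The (known or approximated) system model and the identified partner model are
    combined so that, from the RL agent's point of view, only its own control
    remains as the action. All callables are placeholders for the concrete models.
    """

    def __init__(self, system_model, partner_model, reward_fn, init_state_sampler):
        self.system_model = system_model        # x_{t+1} = f_hat(x_t, u_own, u_partner)
        self.partner_model = partner_model      # u_partner = h_hat(x_t)
        self.reward_fn = reward_fn              # r_i(x_t, u_own, u_partner)
        self.init_state_sampler = init_state_sampler
        self.state = None

    def reset(self):
        self.state = np.asarray(self.init_state_sampler(), dtype=float)
        return self.state

    def step(self, own_control):
        partner_control = self.partner_model(self.state)   # predicted partner reaction
        reward = self.reward_fn(self.state, own_control, partner_control)
        next_state = self.system_model(self.state, own_control, partner_control)
        self.state = np.asarray(next_state, dtype=float)
        return self.state, reward
```

An off-the-shelf algorithm such as DDPG then simply calls reset and step on this object instead of interacting with the physical system.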

IV-C Control law

The control law learned in simulation can then be used as the control law of the controller acting on the physical system (i.e. “reality” in Fig. 1). The representation of the control law that acts on the physical system therefore depends on the kind of RL agent that is used in the internal simulation. Since the problem is formulated in discrete time, $\mu_i$ is evaluated at every time step $k$ to calculate the control $u_k^{(i)} = \mu_i(x_k)$, which is applied for the duration of the next time step.

V Experiments

In this section, we give the example system that is used in order to demonstrate the effectiveness of the proposed topology, define concrete algorithms for the experiments and discuss results.

V-A Example system

Because of its relevance both in the control theory [19] and machine learning [20] literature, a pendulum swing-up task is selected. To easily and reproducibly test the potential of the proposed controller, the “real”, i.e. physical, system is replaced by a separate simulation, not to be confused with the internal simulation implemented by PALs. The pendulum has a two-dimensional state space consisting of an angle $\theta$, where $\theta = 0$ is defined to be the upright position, and an angular velocity $\omega$, and two agents are able to control the pendulum simultaneously. Both control variables $u^{(1)}$ and $u^{(2)}$, which represent torques applied to the pendulum, are clipped to a symmetric range that is too small for a direct swing-up, which necessitates a swing-up motion of the pendulum. The pendulum model is based on the pendulum from OpenAI Gym [21] and modified to additionally allow a second agent to apply torque to the pendulum. At first, the goal is for both controllers to swing up and hold the pendulum vertically; later we shift the goal to an inclined position. On reset, the pendulum starts at a random state drawn from bounded angle and angular velocity intervals, which means it has some potential and/or kinetic energy at initialization. The nonlinear system equations are given by

$$\omega_{k+1} = \omega_k + \left(-\frac{3g}{2l}\sin(\theta_k + \pi) + \frac{3}{m l^2}\big(u_k^{(1)} + u_k^{(2)}\big)\right)\Delta t, \qquad \theta_{k+1} = \theta_k + \omega_{k+1}\,\Delta t,$$

where $g$ is the gravitational acceleration, $m$ the pendulum mass, $l$ its length and $\Delta t$ the sampling time. In the following, concrete algorithms will be chosen to implement PALs.
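Before turning to those algorithms, the update equations above can be illustrated with a short simulation step. This is a hedged sketch following the OpenAI Gym pendulum convention ($\theta = 0$ upright) with a second torque input added; the numeric defaults for g, m, l and dt are Gym's values, and the clipping limits are illustrative rather than the values used in the experiments.

```python
import numpy as np


def pendulum_step(theta, omega, u1, u2,
                  g=10.0, m=1.0, l=1.0, dt=0.05, u_max=1.0, omega_max=8.0):
    """One Euler step of the two-agent pendulum (Gym convention, theta = 0 upright).

    g, m, l and dt are Gym's default parameters; u_max and omega_max are illustrative
    clipping limits and not necessarily the values used in the experiments."""
    u1 = np.clip(u1, -u_max, u_max)
    u2 = np.clip(u2, -u_max, u_max)
    omega_next = omega + (-3.0 * g / (2.0 * l) * np.sin(theta + np.pi)
                          + 3.0 / (m * l ** 2) * (u1 + u2)) * dt
    omega_next = np.clip(omega_next, -omega_max, omega_max)
    theta_next = theta + omega_next * dt
    return theta_next, omega_next
```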

V-B DDPG-PAL

To approximate the partners, we use a multilayer perceptron (MLP) as the model $\hat{h}_i$ in order to capture highly nonlinear partner control laws. To train this partner model, we use CER [22], as it uses new information right when it becomes available and is fairly robust to the size of the replay buffer.

For the RL agents, the Deep Deterministic Policy Gradient (DDPG) algorithm [15] is chosen. This makes the use of continuous state and control spaces possible. Because of the actor-critic nature of the DDPG algorithm, the control law can also easily be used on the real system, since it is directly available in the form of the actor. We will refer to this specific PAL implementation as DDPG-PAL. The choices of optimizers, learning rates and other hyperparameters are given in the supplementary details in Appendix A.
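Putting the pieces together, the alternation between acting on the real system, online identification and DDPG training in the internal simulation can be summarized as follows. This is a hedged sketch of one possible schedule; all object and method names (real_env, agent, ident_buffer, partner_model, internal_sim, ddpg_update, ...) as well as the number of simulated steps per real step are illustrative placeholders, not the paper's implementation.

```python
def run_ddpg_pal(real_env, agent, ident_buffer, partner_model, internal_sim,
                 num_real_steps, sim_steps_per_real_step=50):
    """Hypothetical outer loop of one DDPG-PAL agent; names and schedule are illustrative."""
    state = real_env.reset()
    for _ in range(num_real_steps):
        # 1) Act on the real system with the current actor (control law).
        own_control = agent.act(state, explore=False)
        next_state, partner_control = real_env.step(own_control)  # partner control sensed retrospectively

        # 2) Online identification: store real data, update the partner model (e.g. via CER).
        ident_buffer.add(state, partner_control)
        partner_model.supervised_update(*ident_buffer.sample_cer())

        # 3) Internal simulation: improve actor and critic against the latest partner model.
        sim_state = internal_sim.reset()
        for _ in range(sim_steps_per_real_step):
            sim_control = agent.act(sim_state, explore=True)     # e.g. with exploration noise
            sim_next, reward = internal_sim.step(sim_control)
            agent.replay_buffer.add(sim_state, sim_control, reward, sim_next)
            agent.ddpg_update()                                  # one actor-critic update step
            sim_state = sim_next

        state = next_state
```

In particular, the real control loop only supplies identification data, while all reward-driven learning happens on simulated transitions.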

V-C Examined controller setups

In order to examine the functionality of DDPG-PALs, the internal simulation, the partner approximation and the RL agent have to work properly. To examine whether all of these components contribute to the proper functioning, several experiments are conducted and presented in the following. Since we are focusing on interacting agents, both controllers are learning. The metrics that are reported are averaged over ten test runs and the plots are from one of the two runs that were closest to the median. The four different setups that are examined are defined as follows.

Baseline (no internal simulation; no explicit identification)

The direct but naive way of using RL for a cooperative swing-up task follows the independent learner paradigm (cf. [2]). In this case, both the controller and its partner are regular DDPG agents interacting with the same physical environment without using an internal simulation. In order not to withhold information that the DDPG-PAL possesses, the baseline agents can measure the delayed output of each other and treat it like a third state of the system. For the agent, this can reduce the perceived non-stationarity of the MDP containing an adaptive partner [1].

Oblivious DDPG-agents in a simulated environment (using an internal simulation; identification disabled)

Since both agents, while initialized differently, have the same goal, it might be possible for them to achieve the swing-up without knowledge of the other controller. To test if the identification is indeed improving the agents’ performance, we use the internal simulation while disabling the identification. This results in each controller learning in an internal simulation which only incorporates the system model. They learn as if there were no partner influencing the system, with no way of realizing that there is, which is why we call them oblivious DDPG-agents.

DDPG-PALs (using an internal simulation and identification)

In order to improve both learning time and quality through simulated experience and a partner model, we use DDPG-PAL for both partners. Therefore, this controller setup represents an example of our proposed PAL-architecture. For the scenarios above, the goal of swinging up the pendulum and holding it upright is expressed with the same reward function for both agents, i.e. $r_1 = r_2$. It punishes the control effort, the deviation from the vertical position $\theta = 0$ and the angular velocity $\omega$.
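For orientation, such a reward can be written as a weighted sum of penalty terms. In the following hedged sketch, both the quadratic form and the weights are illustrative assumptions, not the exact reward used in the experiments.

```python
import numpy as np


def cooperative_reward(theta, omega, u1, u2, q_theta=1.0, q_omega=0.1, q_u=0.001):
    """Hypothetical swing-up reward (theta = 0 upright): penalizes the angular deviation,
    the angular velocity and both agents' control effort. The quadratic form and the
    weights are illustrative assumptions, not the paper's exact reward."""
    theta = (theta + np.pi) % (2.0 * np.pi) - np.pi   # wrap the angle to [-pi, pi)
    return -(q_theta * theta ** 2 + q_omega * omega ** 2 + q_u * (u1 ** 2 + u2 ** 2))
```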

DDPG-PALs with different reward functions

While the aforementioned settings serve to examine the advantages of the PAL-architecture, the fourth experiment uses partners with different reward functions. This is motivated by the case of human-robot collaboration, where, although goals are typically aligned, different humans might prefer different ways of achieving the goal. As an example, imagine the task for a human to transport a piece of equipment from one location to another with the support of a robot. Understandably, taller people might have different preferences regarding the height it should be transported at compared to shorter people. A suitable robot controller would ideally both realize and account for those preferences and thus learn to support different human partners differently when transporting the piece of equipment, as long as this aligns with its own goals.

To mimic this situation, we use two DDPG-PALs with slightly different reward functions. Here, the machine controller (agent 1), trying to cooperate with the partner (agent 2), uses the reward

(2)

Thus, agent 1 has two optima for the pendulum position. Note that each of these optima is an angle at which the negative reward caused by the angular deviation and the constant control effort required to hold this position are balanced.

(3)

This means he would like to swing up the pendulum and hold it at an inclined target position. The optimum of agent 1 that deviates from this target thus leads to a lower reward for agent 2. To ease the swing-up for this task, the limits of both control inputs are widened.

V-D Results

As depicted in Fig. 4, the baseline DDPG-agents swing the pendulum up to the vertical position for the first time only after a comparatively long training time. In addition to taking relatively long until the first successful swing-up is performed, holding the pendulum upright is very unstable and it can be seen that the pendulum tips over multiple times with no significant improvement.

Fig. 4: Cooperative swing-up with two baseline agents. Note the shifted time axis.
Fig. 5: Two oblivious DDPG-agents performing the swing-up task without knowledge of each other.

Considering the oblivious DDPG-agents, Fig. 5 shows that even without the identification a swing-up can generally be learned much faster in the simulated environment compared to the baseline. However, because the impact of the partner is ignored, the pendulum cannot be held upright for longer periods of time. Over all runs, this leads to a considerably higher average reward per second during the initial training phase than for the baseline, which performs clearly worse as the pendulum is not held in the upright position at all during this time.

Fig. 6 reveals that the swing-up is successful after only a short training time when using DDPG-PALs. In addition, it can be seen that the pendulum is held upright more stably compared to the baseline and the oblivious DDPG-agents and is easily re-erected after tipping over. The cooperating DDPG-PALs achieve the highest average reward per second of the examined setups, which is a significant improvement compared to the oblivious DDPG-agents that do not include the partner model.

Fig. 6: Both agents learn using DDPG-PAL including internal simulations.
Fig. 7: Two DDPG-PALs with different goals agree on the optimum that suits both.

These results show that not only the internal simulation, but also the identification significantly improves the results.

For DDPG-PALs with different reward functions, Fig. 7 shows that swinging up the pendulum is also achieved quite fast. Early on, the pendulum is held vertically, which already leads to a fairly high reward. This vertical position is, however, not the optimum for either of the controllers. Right after tipping over, they agree on the inclined position that suits both agents.

V-E Discussion

The results above indicate that the desired behavior can successfully be learned by PALs. In order to make broader claims about the applicability, especially in the case of PALs with different reward functions, it is necessary to show that agent 1 has indeed learned to prefer the optimum that also suits agent 2 over his other optimum, even though this does not follow directly from his reward function $r_1$. Instead, preferring this position is better for agent 1 because agent 2 is uncooperative at the other optimum, which leads to a lower reward for agent 1 in the region around it. The preferences of agent 1 can not only be found by experimentation, but also explicitly in the DDPG critic, i.e. the action-value function $Q_1(x, u^{(1)})$, that is used in the internal simulation. Removing the dependency on the control (i.e. action) by inserting the actor, we get the state-value function $V_1(x) = Q_1\big(x, \mu_1(x)\big)$, which allows us to compare the values that the agent assigns to the two optimal positions (both with zero angular velocity). As a reference, we perform ten runs where agent 1 has his partner approximation disabled, i.e. follows the oblivious-agent mechanism. In this case, the average state values of the two positions differ only marginally, meaning that the controller does not significantly prefer one of the states over the other. Furthermore, note that these values are solely based on the estimation of the agent in the internal simulation and not on actual rewards. With partner approximation disabled, the MDP in the internal simulation is less complex because the influence of the partner is missing. This leads to the agent estimating higher rewards than he would actually get when acting in the real world, where the partner influences the system as well (cf. the oblivious DDPG-agents that suffer from the lack of an appropriate partner model).

When using the full DDPG-PAL algorithm with partner approximation, the distinction becomes much more significant: the state value of the optimum at which agent 2 cooperates is clearly higher than that of the other optimum, which reflects reality much better, where the latter position is effectively penalized through the partner’s behavior. Thus, the agent has developed an understanding of the situation. This indicates that PALs can indeed learn the preferences of their partner and subsequently improve the control law towards goal-oriented cooperation.
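Such a comparison can be carried out directly on the trained networks. The following hedged PyTorch sketch assumes an actor and a critic with the usual DDPG calling conventions, i.e. actor(state) and critic(state, action); the two candidate angles are placeholders for the two optimal positions and not the paper's values.

```python
import torch


@torch.no_grad()
def state_value(actor, critic, state):
    """V(x) = Q(x, mu(x)): evaluate the critic at the action proposed by the actor.
    Assumes the usual DDPG interfaces actor(state) and critic(state, action)."""
    state = torch.as_tensor(state, dtype=torch.float32).unsqueeze(0)
    action = actor(state)
    return critic(state, action).item()


def compare_candidate_states(actor, critic, theta_a, theta_b):
    """Compare the values assigned to the two candidate equilibria (zero angular
    velocity); theta_a and theta_b are placeholders, not the paper's angles."""
    v_a = state_value(actor, critic, [theta_a, 0.0])
    v_b = state_value(actor, critic, [theta_b, 0.0])
    return v_a - v_b
```

A positive difference then indicates that the agent assigns a higher value to the first candidate state.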

VI Conclusion

This work introduces a framework named Partner Approximating Learners (PAL-framework) which combines learning the partners’ behavior in mixed cooperative-competitive settings under restricted information with deep RL in a simulated environment. The framework offers two major benefits over independent learners merely training on online data. On the one hand, maintaining and constantly updating an explicit model of the partners’ aggregated control law takes their influence into consideration and allows the agents to adapt to each other. On the other hand, PALs learn in a simulated environment where the current partner model is explicitly used. Thus, PALs reduce wear on the system and explore the state space more safely, while relying on the latest partner model. After proposing our framework, we show its merits by means of a pendulum swing-up example. Here, utilizing the simulated environment rather than simply working on online data significantly speeds up learning. Furthermore, maintaining a partner model improves the performance. Finally, two DDPG-PALs, where one is indifferent between two states and the other prefers one state over the other, successfully assess the situation and agree on a reasonable solution despite the challenging setting of the agents having different reward functions.

Appendix A Supplementary Details

The hyperparameters of the identification algorithm as well as the RL agent are given below.

hyperparameter value
time steps between ident. updates 1
learning episodes per ident. update 4
number of hidden layers 3
neurons per hidden layer 16
size of identification buffer last of “reality”
experience replay CER [22]
training data per ident. update of buffer
mini batch size 20
initial weights hidden layer u.a.r.
initial weights output layer u.a.r.
activation function hidden layer sigmoid
activation function output layer linear
optimizer Adam; no gradient clipping, decay, fuzz factor or AMSGrad
error metric MSE
size of validation set 0
shuffle mini batch before training true
TABLE I: Hyperparameters of the identification.
hyperparameter value
time steps between RL updates 2
size of replay buffer last of “reality”
length episode RL training simulated time,
number of hidden layers A/C 3
neurons per hidden layer actor 16
neurons per hidden layer critic 32
activation f. hidden layer A/C sigmoid
activation f. output layer A/C linear
initial weights all layers A/C u.a.r.
optimizer A/C Adam with gradient clipping; no decay, fuzz factor or AMSGrad
experience replay u.a.r. [15]
discount factor
batch size
warm up A/C 100
error metric MAE
target network update rate
exploration Ornstein-Uhlenbeck process
TABLE II: Hyperparameters of the DDPG agents. Here, A/C stands for “actor and critic”

Footnotes

  1. These authors contributed equally to this work.
  2. This work has been submitted to IEEE for possible publication.

References

  1. R. Lowe, Y. Wu, A. Tamar, J. Harb, O. P. Abbeel, and I. Mordatch, “Multi-agent actor-critic for mixed cooperative-competitive environments,” in Advances in Neural Information Processing Systems, pp. 6379–6390, 2017.
  2. L. Matignon, G. J. Laurent, and N. Le Fort-Piat, “Independent reinforcement learners in cooperative markov games: A survey regarding coordination problems,” The Knowledge Engineering Review, vol. 27, no. 1, pp. 1–31, 2012.
  3. A. Brokaw, “Google hooked 14 robot arms together so they can help each other learn,” 2016.
  4. M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder, L. Weng, and W. Zaremba, “Learning dexterous in-hand manipulation,” arXiv preprint arXiv:1808.00177, 2018.
  5. R. S. Sutton, “Dyna, an integrated architecture for learning, planning, and reacting,” ACM SIGART Bulletin, vol. 2, no. 4, pp. 160–163, 1991.
  6. S. Gu, T. Lillicrap, I. Sutskever, and S. Levine, “Continuous deep q-learning with model-based acceleration,” in International Conference on Machine Learning, pp. 2829–2838, 2016.
  7. J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” CoRR, vol. abs/1707.06347, 2017.
  8. L. Matignon, G. J. Laurent, and N. Le Fort-Piat, “Hysteretic q-learning: An algorithm for decentralized reinforcement learning in cooperative multi-agent teams,” in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 64–69, 2007.
  9. J. Foerster, I. A. Assael, N. d. Freitas, and S. Whiteson, “Learning to communicate with deep multi-agent reinforcement learning,” 30th Conference on Neural Information Processing Systems, 2016.
  10. S. Sukhbaatar, A. Szlam, and R. Fergus, “Learning multiagent communication with backpropagation,” in Advances in Neural Information Processing Systems 29, pp. 2244–2252, Curran Associates, Inc, 2016.
  11. J. K. Gupta, M. Egorov, and M. Kochenderfer, “Cooperative multi-agent control using deep reinforcement learning,” in Autonomous Agents and Multiagent Systems (G. Sukthankar and J. A. Rodriguez-Aguilar, eds.), vol. 10642 of Lecture Notes in Computer Science, pp. 66–83, Cham: Springer International Publishing, 2017.
  12. M. J. Hausknecht, Cooperation and Communication in Multiagent Deep Reinforcement Learning. PhD thesis, The University of Texas at Austin, USA, 2016.
  13. R. Raileanu, E. Denton, A. Szlam, and R. Fergus, “Modeling others using oneself in multi-agent reinforcement learning,” arXiv preprint arXiv:1802.09640, 2018.
  14. J. Foerster, R. Y. Chen, M. Al-Shedivat, S. Whiteson, P. Abbeel, and I. Mordatch, “Learning with opponent-learning awareness,” in Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’18, pp. 122–130, 2018.
  15. T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” arXiv preprint arXiv:1509.02971, 2015.
  16. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” 2013.
  17. T. Schaul, J. Quan, I. Antonoglou, and D. Silver, “Prioritized experience replay,” in International Conference on Learning Representations, 2016.
  18. K. Zhang, H. Zhang, G. Xiao, and H. Su, “Tracking control optimization scheme of continuous-time nonlinear system via online single network adaptive critic design method,” Neurocomputing, vol. 251, pp. 127–135, 2017.
  19. K. J. Åström and K. Furuta, “Swinging up a pendulum by energy control,” Automatica, vol. 36, no. 2, pp. 287–295, 2000.
  20. S. Adam, L. Busoniu, and R. Babuska, “Experience replay for real-time reinforcement learning control,” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 2, pp. 201–212, 2012.
  21. G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, “OpenAI Gym,” arXiv preprint arXiv:1606.01540, 2016.
  22. S. Zhang and R. S. Sutton, “A deeper look at experience replay,” arXiv preprint arXiv:1712.01275, 2017.