Regularization Matters in Policy Optimization

Abstract

Deep Reinforcement Learning (Deep RL) has been receiving increasingly more attention thanks to its encouraging performance on a variety of control tasks. Yet, conventional regularization techniques in training neural networks (e.g., $L_2$ regularization, dropout) have been largely ignored in RL methods, possibly because agents are typically trained and evaluated in the same environment, and because the deep RL community focuses more on high-level algorithm designs. In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks. Interestingly, we find conventional regularization techniques applied to the policy networks can often bring large improvement, especially on harder tasks. We also compare with the widely used entropy regularization and find $L_2$ regularization to be generally better. Our findings are further shown to be robust against variations in training hyperparameters. We further study regularizing different components and find that regularizing only the policy network is typically best. We hope our study provides guidance for future practices in regularizing policy optimization algorithms.

1 Introduction

The use of regularization methods to prevent overfitting is a key technique in successfully training neural networks. Perhaps the most widely recognized regularization methods in deep learning are $L_2$ regularization (also known as weight decay) and dropout (Srivastava et al., 2014). These techniques are standard practice in supervised learning tasks across many domains. Major tasks in computer vision, e.g., image classification (Krizhevsky et al., 2012; He et al., 2016) and object detection (Ren et al., 2015; Redmon et al., 2016), use $L_2$ regularization as a default option. In natural language processing, for example, the Transformer (Vaswani et al., 2017) uses dropout, and the popular BERT model (Devlin et al., 2018) uses $L_2$ regularization. In fact, it is rare to see state-of-the-art neural models trained without any regularization in a supervised setting.

However, in deep reinforcement learning (deep RL), these conventional regularization methods are largely absent or underutilized in past research, possibly because in most cases we are maximizing the return on the same task used in training. In other words, there is no generalization gap from the training environment to the test environment (Cobbe et al., 2018). Heretofore, researchers in deep RL have focused on high-level algorithm design and largely overlooked issues related to network training, including regularization. For popular policy optimization algorithms such as Trust Region Policy Optimization (TRPO) (Schulman et al., 2015), Proximal Policy Optimization (PPO) (Schulman et al., 2017), and Soft Actor Critic (SAC) (Haarnoja et al., 2018), conventional regularization methods were not considered. Even in popular codebases such as the OpenAI Baseline (Dhariwal et al., 2017), $L_2$ regularization and dropout were not incorporated.

Instead, the most commonly used regularization in the RL community is an entropy regularization term that penalizes high-certainty outputs from the policy network, encouraging more exploration during training and preventing the agent from overfitting to certain actions. Entropy regularization was first introduced by Williams and Peng (1991) and is now used by many contemporary algorithms (Mnih et al., 2016; Schulman et al., 2017; Teh et al., 2017; Farebrother et al., 2018).

In this work, we take an empirical approach to assess the conventional paradigm that omits common regularization when training deep RL models. We study agent performance on the current task (the environment on which the agent is trained), rather than its generalization ability to different environments as in many recent works (Zhang et al., 2018a; Zhao et al., 2019; Farebrother et al., 2018; Cobbe et al., 2018). We specifically focus our study on policy optimization methods, which are becoming increasingly popular and have achieved top performance on various tasks. We evaluate four popular policy optimization algorithms, namely SAC, PPO, TRPO, and the synchronous version of Advantage Actor Critic (A2C), on multiple continuous control tasks. Various conventional regularization techniques are considered, including $L_1$/$L_2$ weight regularization, dropout, weight clipping (Arjovsky et al., 2017), and Batch Normalization (BN) (Ioffe and Szegedy, 2015). We compare the performance of these regularization techniques against training without regularization, as well as against entropy regularization.

Surprisingly, even though the training and testing environments are the same, we find that many of the conventional regularization techniques, when imposed on the policy networks, can still improve the performance, sometimes significantly. Among those regularizers, $L_2$ regularization, perhaps the simplest one, tends to be the most effective overall and generally outperforms entropy regularization. $L_1$ regularization and weight clipping can boost performance in many cases. Dropout and Batch Normalization tend to bring improvements only for off-policy algorithms. Additionally, all regularization methods tend to be more effective on more difficult tasks. We also verify our findings over a wide range of training hyperparameters and network sizes, and the results suggest that imposing proper regularization can sometimes save the effort of tuning other training hyperparameters. We further study which part of the policy optimization system should be regularized, and conclude that generally regularizing only the policy network suffices, as imposing regularization on value networks usually does not help. Finally, we discuss and analyze possible reasons for some experimental observations. Our main contributions can be summarized as follows:

  • We provide, to our best knowledge, the first comprehensive study of common regularization methods in policy optimization, which have been largely ignored in the deep RL literature.

  • We find conventional regularizers can be effective on continuous control tasks (especially the harder ones), with statistical significance. Remarkably, simple regularizers ($L_2$, $L_1$, weight clipping) could perform better than the more widely used entropy regularization, with $L_2$ generally the best. BN and dropout can only help in off-policy algorithms.

  • We experiment with multiple randomly sampled training hyperparameters for each algorithm and confirm our findings still hold.

  • We study which part of the network(s) should be regularized. The key lesson is to regularize the policy network but not the value network.

2 Related Works

Regularization in Deep RL. Conventional regularization methods have rarely been applied in deep RL. One rare case of such use is in Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2016), where Batch Normalization is applied to all layers of the actor and some layers of the critic, and $L_2$ regularization is applied to the critic.

Some recent studies have developed more complicated regularization approaches to continuous control tasks. Cheng et al. (2019) regularizes the stochastic action distribution using a control prior. The regularization weight at a given state is adjusted based on the temporal difference (TD) error. Galashov et al. (2019) introduces a default policy that receives limited information as a regularizer, which accelerates convergence and improves performance. Parisi et al. (2019) uses TD error regularization to penalize inaccurate value estimation and Generalized Advantage Estimation (GAE) (Schulman et al., 2016) regularization to penalize GAE variance. However, most of these regularizations are rather complicated (Galashov et al., 2019), catered to certain algorithms (Parisi et al., 2019), or need prior information (Cheng et al., 2019). Also, these techniques consider regularizing the output of the network, while conventional regularization methods mostly directly regularize the parameters. In this work, we focus on studying these simpler but under-utilized regularization methods.

Generalization in Deep RL typically refers to how the model performs in an environment different from the one it is trained on. The generalization gap can come from different modes/levels/difficulties of a game (Farebrother et al., 2018), simulation vs. real world (Tobin et al., 2017), parameter variations (Pattanaik et al., 2018), or different random seeds in environment generation (Zhang et al., 2018b). A number of methods have been designed to address this issue, e.g., training the agent over multiple domains/tasks (Tobin et al., 2017; Rajeswaran et al., 2017), adversarial training (Tobin et al., 2017), designing model architectures (Srouji et al., 2018), adaptive training (Duan et al., 2016), etc. Meta RL approaches (Finn et al., 2017; Gupta et al., 2018; Al-Shedivat et al., 2017) try to learn generalizable agents by training on many environments drawn from the same family/distribution. There are also comprehensive studies on RL generalization with interesting findings (Zhang et al., 2018a, b; Zhao et al., 2019; Packer et al., 2018), e.g., algorithms performing better in the training environment could perform worse under domain shift (Zhao et al., 2019).

Recently, several studies have investigated the effect of conventional regularization on generalization. Farebrother et al. (2018) shows that in Deep Q-Networks (DQN), $L_2$ regularization and dropout can sometimes bring benefits when evaluated on the same Atari game with mode variations. Cobbe et al. (2018) shows that $L_2$ regularization, dropout, data augmentation, and Batch Normalization can improve generalization performance, but to a lesser extent than entropy regularization and $\epsilon$-greedy exploration. Different from those studies, we focus on regularization's effect in the same environment, a more direct goal compared with generalization, yet one for which conventional regularization is under-explored.

3 Regularization Methods

There are in general two kinds of common approaches for imposing regularization. One is to discourage complex models (e.g., weight regularization, weight clipping), and the other is to inject certain noise in network activations (e.g., dropout and Batch Normalization). Here we briefly introduce the methods we investigate in our experiments.

$L_2$ / $L_1$ Weight Regularization. Large weights are usually believed to be a sign of overfitting to the training data, since the function they represent tends to be complex. One can encourage small weights by adding a loss term penalizing the norm of the weight vector. Suppose $L(\theta)$ is the original empirical loss we want to minimize. SGD updates the model on a mini-batch of training samples: $\theta \leftarrow \theta - \alpha \nabla_\theta L(\theta)$, where $\alpha$ is the learning rate. When applying $L_2$ regularization, we add an additional $L_2$-norm squared loss term $\frac{\lambda}{2}\|\theta\|_2^2$ to the training objective. Thus the SGD step becomes $\theta \leftarrow \theta - \alpha \nabla_\theta L(\theta) - \alpha\lambda\theta$. Similarly, in the case of $L_1$ weight regularization, the additional loss term is $\lambda\|\theta\|_1$, and the SGD step becomes $\theta \leftarrow \theta - \alpha \nabla_\theta L(\theta) - \alpha\lambda\,\mathrm{sign}(\theta)$.
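
For concreteness, the following is a minimal PyTorch sketch of adding these penalty terms to a policy loss. It is illustrative only (our experiments use the adopted codebases), and the strengths `l2_coef` and `l1_coef` are placeholder values that would need tuning.

```python
import torch

def regularized_policy_loss(policy_net, policy_loss, l2_coef=1e-4, l1_coef=0.0):
    """Augment the policy loss with L2 / L1 penalties on the policy weights.

    The gradient of 0.5 * l2_coef * ||theta||^2 is l2_coef * theta, matching the
    extra weight-decay term in the SGD step above.
    """
    l2_term = sum(p.pow(2).sum() for p in policy_net.parameters())
    l1_term = sum(p.abs().sum() for p in policy_net.parameters())
    return policy_loss + 0.5 * l2_coef * l2_term + l1_coef * l1_term
```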

Weight Clipping. Weight clipping is a simple operation: after each gradient update step, each individual weight $w$ is clipped to the range $[-c, c]$, where $c$ is a hyperparameter. This can be formally described as $w \leftarrow \max(\min(w, c), -c)$. In Wasserstein GANs (Arjovsky et al., 2017), weight clipping is used to enforce the Lipschitz continuity constraint. This plays an important role in stabilizing the training of GANs (Goodfellow et al., 2014), which were notoriously hard to train and often suffered from "mode collapse" before. Weight clipping can also be seen as a regularizer, since it reduces the complexity of the model space by preventing any weight's magnitude from being larger than $c$.
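
A sketch of the corresponding operation in PyTorch is shown below. Note that in our actual setup only the MLP part of the on-policy policy networks is clipped (Appendix B), whereas this illustrative version clips all parameters, and `clip_value` is a placeholder hyperparameter.

```python
import torch

@torch.no_grad()
def clip_weights(net, clip_value=0.5):
    """Clip every parameter of `net` to [-clip_value, clip_value] in place."""
    for p in net.parameters():
        p.clamp_(-clip_value, clip_value)

# Typical usage after each gradient update:
#   loss.backward()
#   optimizer.step()
#   clip_weights(policy_net, clip_value=0.5)
```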

Dropout. Dropout (Srivastava et al., 2014) is one of the most successful regularization techniques developed specifically for neural networks. The idea is to randomly deactivate a certain percentage of neurons during training; during testing, a rescaling is applied to ensure the scale of the activations is the same as in training. One explanation for its effectiveness in reducing overfitting is that it can prevent "co-adaptation" of neurons. Another explanation is that dropout acts as implicit model ensembling, because during training a different model is sampled to fit each mini-batch of data.

Batch Normalization. Batch Normalization (BN) (Ioffe and Szegedy, 2015) was invented to address the problem of "internal covariate shift". It performs the following transformation: $\hat{x} = \frac{x - \mu_B}{\sigma_B}, \; y = \gamma\hat{x} + \beta$, where $\mu_B$ and $\sigma_B$ are the mean and standard deviation of input activations over the mini-batch $B$, and $\gamma$ and $\beta$ are trainable affine transformation parameters (scale and shift), which provide the possibility of linearly transforming normalized activations back to any scale. BN greatly accelerates convergence and improves accuracy, and it has become a standard component, especially in convolutional networks. BN also acts as a regularizer (Ioffe and Szegedy, 2015): since the statistics $\mu_B$ and $\sigma_B$ depend on the current batch, BN subtracts and divides by different values in each iteration. This stochasticity can encourage subsequent layers to be robust against such input variation.
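
The sketch below illustrates how these two noise-injecting regularizers might be inserted into a small policy MLP, with BN placed before and dropout after each activation as in our implementation (Appendix B). The layer sizes and dropout rate are placeholders; in our experiments each regularizer is evaluated separately, and the two are combined here only to keep the example short.

```python
import torch.nn as nn

class RegularizedPolicyMLP(nn.Module):
    """Illustrative two-hidden-layer Gaussian-mean policy with BN and dropout."""
    def __init__(self, obs_dim, act_dim, hidden=64, drop_p=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.BatchNorm1d(hidden), nn.Tanh(), nn.Dropout(drop_p),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.Tanh(), nn.Dropout(drop_p),
        )
        self.mean_head = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        return self.mean_head(self.body(obs))

policy = RegularizedPolicyMLP(obs_dim=17, act_dim=6)
policy.train()  # policy update: batch BN statistics, dropout active
policy.eval()   # trajectory sampling: running BN statistics, dropout disabled
```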

Entropy Regularization. In a policy optimization framework, the policy network is used to model a conditional distribution over actions, and entropy regularization is widely used to prevent the learned policy from overfitting to one or a few actions. More specifically, at each step the output distribution of the policy network is encouraged to have high entropy. The policy entropy is calculated at each step as $H\big(\pi(\cdot\,|\,s_t)\big) = -\mathbb{E}_{a_t\sim\pi(\cdot|s_t)}\big[\log\pi(a_t\,|\,s_t)\big]$, where $(s_t, a_t)$ is the state-action pair. Then the per-sample entropy is averaged within the batch of state-action pairs to obtain the regularization term $L_H = \frac{1}{N}\sum_t H\big(\pi(\cdot\,|\,s_t)\big)$. A coefficient $\lambda$ is also needed, and $\lambda L_H$ is added to the policy objective to be maximized during policy updates. Entropy regularization also encourages exploration due to increased randomness in actions, which can lead to better performance in the long run.
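
For the diagonal Gaussian policies used in continuous control, the entropy has a closed form. A minimal PyTorch sketch (not tied to any particular codebase; the coefficient is a placeholder) is:

```python
import torch
from torch.distributions import Normal

def entropy_regularization(action_mean, action_log_std, coef=0.01):
    """Mean policy entropy over a batch of states, scaled by a coefficient.

    The returned term is added to the policy objective (maximized), or
    equivalently subtracted from the policy loss (minimized).
    """
    dist = Normal(action_mean, action_log_std.exp())
    per_state_entropy = dist.entropy().sum(dim=-1)  # sum over action dimensions
    return coef * per_state_entropy.mean()          # average over the batch

# policy_loss = -surrogate_objective - entropy_regularization(mean, log_std)
```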

4 Experiments

4.1 Settings

Figure 1: Reward vs. timesteps, for four algorithms (columns) and four environments (rows).

Algorithms. We evaluate the six regularization methods introduced in Section 3 using four popular policy optimization algorithms, namely, A2C (Mnih et al., 2016), TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017), and SAC (Haarnoja et al., 2018). The first three algorithms are on-policy while the last one is off-policy. For the first three algorithms, we adopt the code from OpenAI Baseline (Dhariwal et al., 2017), and for SAC, we use the official implementation at (Haarnoja, 2018). Our code can be accessed at https://github.com/xuanlinli17/po-rl-regularization.

Tasks. The algorithms with different regularizers are tested on nine continuous control tasks: Hopper, Walker, HalfCheetah, Ant, Humanoid, and HumanoidStandup from the MuJoCo simulation environment (Todorov et al., 2012); and Humanoid, AtlasForwardWalk, and HumanoidFlagrun from the more challenging RoboSchool (OpenAI) suite. Among the MuJoCo tasks, agents for Hopper, Walker, and HalfCheetah are easier to learn, while Ant, Humanoid, and HumanoidStandup are relatively harder (larger state-action space, requiring more training samples). The three RoboSchool tasks are even harder than all the MuJoCo tasks, as they require more timesteps to converge. To better understand how different regularization methods work at different difficulty levels, we roughly categorize the first three environments as "easy" tasks and the last six as "hard" tasks.

Training. On MuJoCo tasks, we keep all training hyperparameters unchanged from the adopted codebases. Since hyperparameters for the RoboSchool tasks are not included in the original codebases, we briefly tune the hyperparameters for each algorithm before applying any regularization (details in Appendix C). For details on tuning the regularization strength, please see Appendix B. The results shown in this section are obtained by regularizing only the policy network; a further study on this is presented in Section 6. We run each experiment independently with five seeds, and use the average return over the final episodes of training as the result. Each regularization method is evaluated independently, with all other regularizers turned off. We refer to the result without any regularization as the baseline. For BN and dropout, we use training mode when updating the network and test mode when sampling trajectories.

Applying a regularizer induces little computation overhead during training: BN and dropout add a small amount of training time, while $L_2$, $L_1$, weight clipping, and entropy regularization add a negligible amount. We used up to 16 NVIDIA Titan Xp GPUs and 96 Intel Xeon E5-2667 CPUs, and all experiments took roughly 57 days with resources fully utilized.

Note that entropy regularization is still applicable to SAC, even though SAC already incorporates entropy maximization in its reward. In our experiments, we add the entropy regularization term to the policy loss function in Equation (12) of Haarnoja et al. (2018). Meanwhile, policy network dropout is not applicable to TRPO: during policy updates, different neurons in the old and new policy networks are dropped out, causing different shifts in the old and new action distributions given the same state, which violates the trust region constraint. In this case, the algorithm fails to perform any update from the network initialization.

Reg \ Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Entropy 33.3 100.0 77.8 0.0 50.0 33.3 0.0 33.3 22.2 33.3 50.0 44.4 16.7 58.3 44.4
$L_2$ 0.0 50.0 33.3 0.0 66.7 44.4 33.3 83.3 66.7 66.7 66.7 66.7 25.0 66.7 52.8
$L_1$ 0.0 50.0 33.3 0.0 66.7 44.4 33.3 66.7 55.6 33.3 50.0 44.4 16.7 58.3 44.4
Weight Clip 0.0 16.7 11.1 33.3 33.3 33.3 33.3 66.7 55.6 33.3 16.7 22.2 25.0 33.3 30.6
Dropout 0.0 0.0 0.0 N/A N/A N/A 33.3 50.0 44.4 66.7 50.0 55.6 33.3 33.3 33.3
BatchNorm 0.0 0.0 0.0 0.0 0.0 0.0 0.0 16.7 11.1 33.3 50.0 44.4 8.3 16.7 13.9
Table 1: Percentage (%) of environments where the final performance “improves” with regularization, by our definition in Section 4.2.
Reg \Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Baseline 0.296 -0.171 -0.015 0.280 0.102 0.161 0.244 -0.541 -0.280 -0.220 -0.472 -0.388 0.150 -0.271 -0.130
Entropy 1.140 1.010 1.060 0.158 0.304 0.256 0.432 -0.247 -0.021 0.320 -0.156 0.002 0.512 0.229 0.323
$L_2$ 0.534 0.930 0.798 0.511 0.388 0.429 0.302 0.757 0.606 0.362 0.245 0.284 0.427 0.580 0.529
$L_1$ 0.154 0.426 0.335 0.306 0.572 0.483 0.272 0.764 0.600 0.191 -0.173 -0.052 0.231 0.397 0.342
Weight Clip 0.221 0.242 0.235 0.282 0.489 0.420 0.340 0.625 0.530 -0.360 -0.093 -0.182 0.121 0.316 0.251
Dropout -1.160 -1.180 -1.170 N/A N/A N/A -0.119 -0.471 -0.354 0.346 0.485 0.438 -0.309 -0.389 -0.362
BatchNorm -1.190 -1.260 -1.240 -1.540 -1.850 -1.750 -1.470 -0.887 -1.080 -0.639 0.165 -0.103 -1.210 -0.959 -1.040
Table 2: The average $z$-score for each regularization method. Note that a negative $z$-score does not necessarily mean the method hurts performance, because it could still be higher than the baseline's. Scores within 0.01 of the highest are shown in bold.
Reg \Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Entropy 0.008 0.000 0.000 0.652 0.160 0.467 0.614 0.210 0.218 0.262 0.240 0.103 0.051 0.000 0.000
$L_2$ 0.373 0.000 0.000 0.432 0.049 0.049 0.860 0.000 0.000 0.207 0.010 0.005 0.102 0.000 0.000
$L_1$ 0.544 0.002 0.019 0.923 0.000 0.009 0.936 0.000 0.000 0.353 0.252 0.137 0.621 0.000 0.000
Weight Clip 0.775 0.018 0.087 0.993 0.017 0.065 0.791 0.000 0.000 0.760 0.209 0.409 0.869 0.000 0.000
Dropout 0.000 0.000 0.000 N/A N/A N/A 0.273 0.799 0.737 0.213 0.002 0.001 0.015 0.425 0.048
BatchNorm 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.095 0.000 0.384 0.023 0.247 0.000 0.000 0.000
Table 3: P-values from Welch's t-test comparing the $z$-scores of regularization methods and the baseline.

4.2 Results

Training curves. We plot the training curves on four environments (rows) and four algorithms (columns) in Figure 1. Figures for the remaining five environments are deferred to Appendix K. Different colors denote different regularization methods (e.g., black denotes the baseline), and shaded areas denote the standard deviation range. Notably, these conventional regularizers can frequently boost performance across different tasks and algorithms, demonstrating that regularization in deep RL deserves careful study. Interestingly, in some cases where the baseline (with the default hyperparameters in the codebase) does not converge to a reasonable solution, e.g., A2C Ant and PPO Humanoid, imposing some regularization can make training converge to a high reward level. Another observation is that BN always significantly hurts the baseline for on-policy algorithms; the reason is discussed later. For the off-policy SAC algorithm, dropout and BN sometimes bring large improvements on hard tasks such as AtlasForwardWalk and RoboschoolHumanoid.

How often do regularizations help?  To quantitatively measure the effectiveness of the regularizations on each algorithm across different tasks, we define the condition under which a regularization is said to "improve" upon the baseline in a certain environment. Denote the baseline mean return over five seeds on an environment as $\mu_b$, and the mean and standard deviation of the return obtained with a certain regularization method over five seeds as $\mu_r$ and $\sigma_r$. We say the performance is "improved" by the regularization if $\mu_r - \sigma_r > \max(\mu_b, T)$, where $T$ is the minimum return threshold of an environment. The threshold serves to ensure the return is at least at a reasonable level; we use one threshold for HumanoidStandup, whose reward scale differs from that of the other tasks, and a common threshold for all other tasks.
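
In code, this check reads as in the following sketch (the threshold values themselves are environment-specific and omitted here):

```python
import numpy as np

def is_improved(reg_returns, baseline_returns, min_threshold):
    """Returns True if a regularizer "improves" upon the baseline: the mean
    minus one standard deviation of its five final returns must exceed both
    the baseline mean and the environment's minimum return threshold."""
    mu_r, sigma_r = np.mean(reg_returns), np.std(reg_returns)
    mu_b = np.mean(baseline_returns)
    return mu_r - sigma_r > max(mu_b, min_threshold)
```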

The result is shown in Table 1. Perhaps the most significant observation is that $L_2$ regularization improves upon the baseline most often. The A2C algorithm is an exception, where entropy regularization is the most effective. $L_1$ regularization behaves similarly to $L_2$ regularization, but is outperformed by the latter. Weight clipping's usefulness is highly dependent on the algorithm and environment: although in total it only helps 30.6% of the time, it can sometimes outperform entropy regularization by a large margin, e.g., on TRPO Humanoid and PPO Humanoid, as shown in Figure 1. BN is not useful at all in the three on-policy algorithms (A2C, TRPO, and PPO). Dropout is not useful in A2C at all, and sometimes helps in PPO. However, BN and dropout can be useful in SAC. All regularization methods generally improve more often when used on harder tasks, perhaps because on easier ones the baseline is often already strong enough to reach high performance.

Note that under our definition, not "improving" does not indicate that the regularization hurts the performance. If we instead define "hurting" as $\mu_r + \sigma_r < \mu_b$ (the minimum return threshold is not considered here), then the total percentage of hurting cases is 0.0% for $L_2$, 2.8% for $L_1$, 5.6% for weight clipping, 44.4% for dropout, 66.7% for BN, and 0.0% for entropy. In other words, within our tuning range, $L_2$ and entropy regularization never hurt when given appropriate strengths. For BN and dropout, we also note that almost all hurting cases are in on-policy algorithms, except one case for BN in SAC. In sum, all regularizations in our study very rarely hurt the performance, except for BN/dropout in on-policy methods.

Reg \Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Baseline 0.491 -0.051 0.165 0.147 0.142 0.144 0.340 -0.270 -0.026 -0.005 -0.249 -0.152 0.243 -0.107 0.033
Entropy 0.417 0.515 0.476 0.193 0.264 0.236 0.135 -0.139 -0.030 0.209 -0.123 0.010 0.239 0.129 0.173
$L_2$ 0.076 0.820 0.522 0.361 0.479 0.432 0.515 0.863 0.724 0.019 0.274 0.172 0.243 0.609 0.463
$L_1$ 0.534 0.709 0.639 0.244 0.512 0.405 0.444 0.771 0.641 0.115 0.065 0.085 0.334 0.514 0.442
Weight Clip 0.454 0.497 0.480 0.485 0.409 0.440 0.226 0.520 0.402 -0.503 -0.004 -0.203 0.165 0.356 0.280
Dropout -0.237 -1.070 -0.737 N/A N/A N/A -0.915 -0.833 -0.866 0.005 -0.098 -0.057 -0.382 -0.667 -0.553
BatchNorm -1.740 -1.420 -1.540 -1.430 -1.810 -1.660 -0.745 -0.912 -0.845 0.160 0.135 0.145 -0.938 -1.000 -0.976
Table 4: The average $z$-score for each regularization method, under five sampled hyperparameter settings.
Reg \Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Entropy 0.576 0.000 0.000 0.712 0.140 0.191 0.230 0.364 0.975 0.279 0.397 0.179 0.956 0.000 0.006
$L_2$ 0.002 0.000 0.000 0.117 0.000 0.000 0.307 0.000 0.000 0.894 0.001 0.006 0.998 0.000 0.000
$L_1$ 0.767 0.000 0.000 0.404 0.000 0.000 0.550 0.000 0.000 0.514 0.042 0.045 0.254 0.000 0.000
Weight Clip 0.770 0.000 0.000 0.006 0.005 0.000 0.514 0.000 0.000 0.019 0.135 0.692 0.368 0.000 0.000
Dropout 0.000 0.000 0.000 N/A N/A N/A 0.000 0.000 0.000 0.957 0.287 0.407 0.000 0.000 0.000
BatchNorm 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.386 0.047 0.032 0.000 0.000 0.000
Table 5: P-values from Welch's t-test comparing the $z$-scores of regularization and baseline, under five sampled hyperparameter settings.

How much do regularizations improve? For each algorithm and environment (for example, PPO on Ant), we compute a $z$-score for each regularization method and the baseline: we treat the results produced by all regularizations (including the baseline) over all five seeds as one population, and calculate each method's average $z$-score from its five final results (positively clipped). The $z$-score, also known as the "standard score", is the signed fractional number of standard deviations by which a data point lies above the population mean. For each algorithm and environment, a regularizer's $z$-score roughly measures its relative performance among all methods. The $z$-scores are then averaged over environments of a certain difficulty (easy/hard), and the results are shown in Table 2. In terms of the average improvement margin, we can draw mostly the same observations as from the improvement frequency (Table 1): $L_2$ tops the average $z$-score most often, and by a large margin in total; entropy regularization is best paired with A2C; dropout and BN are only useful in the off-policy SAC algorithm; the improvement over the baseline is larger on hard tasks. Notably, for all algorithms, every regularization on average outperforms the baseline on hard tasks, except dropout and BN in on-policy algorithms. On hard tasks, besides $L_2$, $L_1$ and weight clipping also score higher than entropy in total.
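
A sketch of this computation for one (algorithm, environment) pair is given below, where `returns_by_method` maps each method (including the baseline) to its five final returns:

```python
import numpy as np

def average_z_scores(returns_by_method):
    """Pool the (positively clipped) final returns of all methods as the
    population, then report each method's mean z-score over its five runs."""
    pooled = np.concatenate([np.clip(r, 0, None) for r in returns_by_method.values()])
    mu, sigma = pooled.mean(), pooled.std()
    return {method: float(((np.clip(r, 0, None) - mu) / sigma).mean())
            for method, r in returns_by_method.items()}
```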

Is the improvement statistically significant? For each regularization method, we collect the $z$-scores produced by all seeds and all environments of a certain difficulty (e.g., for $L_2$ on PPO and hard environments, we have 6 envs $\times$ 5 seeds = 30 $z$-scores), and perform Welch's t-test (two-sample t-test with unequal variance) against the corresponding $z$-scores produced by the baseline. The resulting p-values are presented in Table 3. Note that whether the significance indicates improvement or harm depends on the relative mean $z$-scores in Table 2; for example, for BN and dropout in on-policy algorithms, statistical significance denotes harm, while in most other cases it denotes improvement. From the results, we observe that the improvement is statistically significant for hard tasks in general, with only a few exceptions. In total, $L_2$, $L_1$, entropy, and weight clipping are all statistically significantly better than the baseline. For Welch's t-test between entropy regularization and other regularizers, see Appendix F. For more metrics of comparison (e.g., average ranking, min-max scaled reward), see Appendix G.
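
The test itself is standard; with SciPy it can be sketched as:

```python
from scipy import stats

def welch_p_value(reg_z_scores, baseline_z_scores):
    """Two-sample t-test with unequal variances (Welch's t-test) on the pooled
    z-scores (e.g., 6 hard environments x 5 seeds = 30 values per method)."""
    _, p_value = stats.ttest_ind(reg_z_scores, baseline_z_scores, equal_var=False)
    return p_value
```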

5 Robustness with Hyperparameter Changes

In the previous section, the experiments were conducted mostly with the default hyperparameters in the codebases we adopt, which are not necessarily optimal. For example, the PPO Humanoid baseline performs poorly with the default hyperparameters, not converging to a reasonable solution. Meanwhile, it is known that RL algorithms are very sensitive to hyperparameter changes (Henderson et al., 2018). Thus, our findings could be vulnerable to such variations. To further confirm our findings, we evaluate the regularizations under a variety of hyperparameter settings. For each algorithm, we sample five hyperparameter settings for the baseline and apply each regularization to all of them. Due to the heavy computation budget, we only evaluate on five MuJoCo environments: Hopper, Walker, Ant, Humanoid, and HumanoidStandup. Under our sampled hyperparameters, poorly performing baselines are mostly improved significantly. See Appendices D and L for details on the sampling and the training curves.

Similar to Tables 2 and 3, the $z$-scores and p-values are shown in Tables 4 and 5. For improvement percentages analogous to Table 1, please refer to Appendix E. We note that our main findings discussed in Section 4 still hold. Interestingly, compared to the previous section, $L_2$, $L_1$, and weight clipping all tend to outperform entropy regularization by larger margins.

Figure 2: Final reward vs. single hyperparameter change. “Rollout Timesteps” refers to the number of state-action samples used for training between policy updates.

To better visualize the robustness against changes of hyperparameters, we show in Figure 2 the results when a single hyperparameter is varied. We note that certain regularizations can consistently improve the baseline under different hyperparameters. In these cases, proper regularization can ease the hyperparameter tuning process, as it brings the performance of baselines with suboptimal hyperparameters above that of baselines with better ones.

6 Policy and Value Network Regularization

Reg\Alg A2C TRPO PPO SAC TOTAL
Pol Val P+V Pol Val P+V Pol Val P+V Pol Val P+V Pol Val P+V
$L_2$ 50.0 0.0 16.7 50.0 16.7 33.3 66.7 16.7 66.7 66.7 33.3 33.3 58.3 16.7 37.5
$L_1$ 50.0 16.7 50.0 33.3 0.0 33.3 66.7 0.0 50.0 33.3 33.3 33.3 45.8 12.5 41.7
Weight Clip 16.7 0.0 16.7 50.0 33.3 16.7 66.7 0.0 66.7 33.3 16.7 16.7 41.7 8.3 29.2
Dropout 0.0 16.7 0.0 N/A 33.3 N/A 66.7 33.3 50.0 50.0 0.0 0.0 38.9 20.8 16.7
BatchNorm 16.7 16.7 16.7 0.0 16.7 0.0 16.7 0.0 50.0 33.3 16.7 0.0 16.7 12.5 16.7
Table 6: Percentage (%) of environments where performance ”improves” when regularized on policy / value / policy and value networks.

Our experiments so far only impose regularization on the policy network. To investigate the relationship between policy and value network regularization, we compare four options: 1) no regularization; regularizing 2) the policy network, 3) the value network, and 4) both policy and value networks. For 2) and 3) we tune the regularization strengths independently, and then use the appropriate strengths from each for 4) (more details in Appendix B). We evaluate all four algorithms on the six MuJoCo tasks and present the improvement percentages in Table 6. Note that entropy regularization is not applicable to the value network. For detailed training curves, please refer to Appendix M.

We observe that, generally, regularizing only the policy network is the option that most often improves performance, for almost all algorithms and regularizations. Regularizing the value network alone does not bring improvement as often as the other options. Regularizing both networks is better than regularizing the value network alone, but worse than regularizing only the policy network.

7 Analysis and Conclusion

Figure 3: Reward distribution (frequency vs. reward value) over 100 trajectories. Regularized models generalize to unseen samples more stably with high reward.

Figure 4: Reward comparison between regularization and baseline with different amount of training samples. Regularized models can reach similar performance as baseline with less data, demonstrating their stronger generalization ability.

Why does regularization benefit policy optimization? In RL, when we train and evaluate on the same environment, there is no generalization gap across different environments. However, there is still generalization between samples: the agent is trained only on the limited trajectories it has experienced, which cannot cover the whole state-action space of the environment. A successful policy needs to generalize from seen samples to unseen ones, which potentially makes regularization necessary. This might also explain why regularization tends to be more helpful on harder tasks: they have larger state spaces, so the portion of the space visited during training tends to be smaller; overfitting to this smaller portion causes more serious issues, and regularization may help more.

To support the argument above, we take agents trained with and without regularization, evaluate them on 100 different trajectories, and plot the reward distributions over trajectories in Figure 3. These trajectories consist of samples unseen during training, since the state space is continuous and it is impossible to traverse exactly the same trajectories as in training. For the baseline, some trajectories yield relatively high rewards while others yield low rewards, showing that the baseline cannot stably generalize to unseen examples; for regularized models, the rewards are concentrated at a high level, showing that they generalize to unseen samples more stably. This suggests that conventional regularization can improve the model's generalization to a larger portion of unseen samples.

We also compare the reward under varying numbers of training samples/timesteps, since learning well from fewer samples is closely related to generalization ability. From the results in Figure 4, we find that regularized models need far fewer training samples to reach the same level of reward as the baseline. This again suggests that regularized models have better generalization ability than the baseline.

Why do BN and dropout work only with off-policy algorithms? One finding in our experiments is that BN and dropout can sometimes improve the off-policy algorithm SAC, but mostly hurt on-policy algorithms. We hypothesize two possible reasons: 1) For both BN and dropout, training mode is used to update the network and testing mode is used to sample actions during interaction with the environment, leading to a discrepancy between the sampling policy and the optimization policy (the same holds if we always use training mode). For on-policy algorithms, if such discrepancy is large, it can cause severe "off-policy issues" that hurt or even crash the optimization process, since the theory behind these algorithms requires the data to be "on policy", i.e., the data-sampling and optimization policies must be the same. For off-policy algorithms, this discrepancy is not an issue, since they sample data from a replay buffer and do not require the two policies to be the same. 2) BN can be sensitive to input distribution shifts, since the mean and std statistics depend on the input; if the input distribution changes too quickly during training, the mapping functions of the BN layers can also change quickly, which can destabilize training. One piece of evidence is that in supervised learning, when transferring an ImageNet-pretrained model to other vision datasets, the BN layers are sometimes kept fixed (Yang et al., 2017) and only the other layers are trained. In off-policy algorithms, the sample distribution changes relatively slowly, since samples are always drawn from the whole replay buffer holding cumulative data; in on-policy algorithms, we always use samples generated by the latest policy, and this faster-changing input distribution could be harmful to BN. Previously, BN has also been shown to be useful in DDPG (Lillicrap et al., 2016), an off-policy algorithm.

In summary, we conducted the first comprehensive study of regularization methods with multiple policy optimization algorithms. We found that conventional regularizations ($L_2$, $L_1$, weight clipping) can be effective at improving performance, sometimes more so than the widely used entropy regularization. BN and dropout can be useful, but only for off-policy algorithms. Our findings were confirmed under multiple sampled hyperparameter settings. Further experiments showed that, in general, the best practice is to regularize the policy network but not the value network or both.

Appendix A Policy Optimization Algorithms

The policy optimization family is one of the most popular classes of methods for solving reinforcement learning problems. These algorithms directly parameterize and optimize the policy to maximize cumulative reward. Below, we give a brief introduction to the algorithms evaluated in our work.

A2C.

Sutton et al. (2000) developed the policy gradient approach, which updates the parametric policy by gradient ascent on the expected return. However, the gradient estimated in this way suffers from high variance. Asynchronous Advantage Actor Critic (A3C) (Mnih et al., 2016) alleviates this problem by introducing a function approximator for values and replacing Q-values with advantage values. A3C also utilizes multiple actors to parallelize training. The only difference between A2C and A3C is that, in a single training iteration, A2C waits for the parallel actors to finish sampling trajectories before updating the neural network parameters, while A3C updates asynchronously.

TRPO.

Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) constrains each policy update within a trust region defined by a KL divergence bound, which guarantees policy improvement during training. Though TRPO is promising at obtaining reliable performance, approximately enforcing the KL constraint is computationally heavy.

PPO.

Proximal Policy Optimization (PPO)  (Schulman et al., 2017) simplifies TRPO and improves computational efficiency by developing a surrogate objective that involves clipping the probability ratio to a reliable region, so that the objective can be optimized using first-order methods.

SAC.

Soft Actor Critic (SAC) (Haarnoja et al., 2018) optimizes the maximum entropy objective (Ziebart et al., 2008), which differs from the objective of the on-policy methods above. SAC combines soft policy iteration, which maximizes the maximum entropy objective, with clipped double Q-learning (Fujimoto et al., 2018), which prevents overestimation bias, in the actor and critic updates, respectively.

Appendix B Implementation and Tuning for Regularization Methods

As mentioned in the paper, in Section 4 we only regularize the policy network; in Section 6, we investigate regularizing both policy and value networks.

For $L_2$ and $L_1$ regularization, we add $\frac{\lambda}{2}\|\theta\|_2^2$ and $\lambda\|\theta\|_1$, respectively, to the loss of the policy network or value network of each algorithm (for SAC's value regularization, we apply regularization only to the value network $V$ and not also to the two $Q$ networks). The $L_2$ and $L_1$ losses are applied to all the weights of the policy or value network. For A2C, TRPO, and PPO, we tune the strength $\lambda$ over one fixed range for $L_2$ and another for $L_1$; for SAC, we tune $\lambda$ over correspondingly larger ranges (see the discussion of SAC's reward scale below).

For weight clipping, the OpenAI Baseline implementation of the policy network for A2C, TRPO, and PPO outputs the mean of the policy action from a two-layer fully connected network (MLP), while the log standard deviation of the policy action is represented by a standalone trainable vector. We find that weight clipping performs much better when applied only to the weights of the MLP than when applied only to the logstd vector or to both. Thus, for these three algorithms, the policy network weight clipping results shown in all the sections above come from clipping only the MLP part of the policy network. In the SAC implementation, on the other hand, both the mean and the log standard deviation come from the same MLP, and there is no standalone log standard deviation vector, so we apply weight clipping to all the weights of the MLP. For all algorithms, we tune the policy network clipping range over a fixed set of values. For the value network, the MLP produces a single output, the estimated value of a state, so we clip all the weights of the MLP; for A2C, TRPO, and PPO, we tune the value network clipping range over a fixed set of values. For SAC, we only clip the value network $V$ and do not clip the two $Q$ networks, for simplicity; we tune its clipping range over larger values because its weights have larger magnitude.

For BatchNorm and dropout, we apply BatchNorm before the activation function of each hidden layer, and dropout immediately after the activation function. When the policy or value network performs an update using minibatches of trajectory data or of replay buffer data, we use the train mode of the regularization and update the running mean and standard deviation. When the policy is sampling trajectories from the environment, we use the test mode and normalize data with the existing running mean and standard deviation. For Batch Normalization/dropout on the value network, only training mode is used, since the value network does not participate in sampling trajectories. Note that adding policy network dropout to TRPO causes the KL divergence constraint to be violated almost every time during the policy network update. Thus, policy network dropout causes training to fail on TRPO, as the policy network cannot be updated.

For entropy regularization, we add $-\lambda L_H$ to the policy loss (equivalently, $\lambda L_H$ to the maximized objective), where $L_H$ is the mean policy entropy defined in Section 3. The coefficient $\lambda$ is tuned over one range for A2C, TRPO, and PPO, and over a separate range for SAC. Note that for SAC, our entropy regularization is added directly to the optimization objective (Equation 12 in Haarnoja et al. (2018)) and is different from the original maximum entropy objective inside the reward term.

Note that for the three on-policy algorithms (A2C, TRPO, PPO) we use the same tuning ranges; the only exception is the off-policy SAC. The reason SAC's tuning ranges are different is that SAC uses a hyperparameter that controls the scaling of the reward signal, while A2C, TRPO, and PPO do not. In the original implementation of SAC, the reward signals are pre-tuned to be scaled up by an environment-specific factor. Also, unlike A2C, TRPO, and PPO, SAC uses unnormalized rewards, because, according to the original paper, the policy becomes almost uniform if the reward magnitude is small. For these reasons, the reward magnitude in SAC is much higher than in A2C, TRPO, and PPO, so the policy and value network losses have larger magnitudes and the appropriate regularization strengths become higher. Considering SAC's much larger reward magnitude, we selected a different range of regularization hyperparameters for SAC before running the full set of experiments.

The optimal policy network regularization strength we selected for each algorithm and environment used in Section 4 can be seen in the legends of Appendix M. In addition to the results with environment-specific strengths presented in Section 4, we also present the results when the regularization strength is fixed across all environments for the same algorithm. The results are shown in Appendix H.

In Section 6, to investigate the effect of regularizing both policy and value networks, we combine the tuned optimal policy and value network regularization strengths. The detailed training curves are presented in Appendix M.

As a side note, when training A2C, TRPO, and PPO on the HalfCheetah environment, the results have very large variance. Thus, for each regularization method, after we obtain the best strength, we rerun it with another five seeds to produce the final results in Tables 1 and 2.

Appendix C Default Hyperparameter Settings for Baselines

Training timesteps. For A2C, TRPO, and PPO, we use environment-specific training-timestep budgets: one for Hopper, Walker, and HalfCheetah; one for Ant, Humanoid (MuJoCo), and HumanoidStandup; one for Humanoid (RoboSchool); and one for AtlasForwardWalk and HumanoidFlagrun. For SAC, since its simulation speed is much slower than that of A2C, TRPO, and PPO (SAC updates its policy and value networks with a minibatch of replay buffer data at every timestep), and since it takes fewer timesteps to converge, we use smaller, environment-specific budgets: one for Hopper and Walker; one for HalfCheetah and Ant; one for Humanoid and HumanoidStandup; and one for the RoboSchool environments.

Hyperparameters for RoboSchool. The original PPO paper (Schulman et al., 2017) provides hyperparameters for the Roboschool tasks, so we apply the same hyperparameters to our training, except that instead of linearly annealing the log standard deviation of the action distribution, we let it be learned by the algorithm, as implemented in OpenAI Baseline (Dhariwal et al., 2017). For TRPO, due to its proximity to PPO, we copy PPO's hyperparameters whenever they exist in both algorithms, and then tune the value network update step size. For A2C, we keep the original hyperparameters and tune the number of actors and the number of timesteps each actor samples between consecutive policy updates. For SAC, we tune the reward scale.

The detailed hyperparameters used in our baselines for both MuJoCo and RoboSchool are listed in Tables 7-10.

Table 7: Baseline hyperparameter setting for A2C MuJoCo and RoboSchool tasks.
Table 8: Baseline hyperparameter setting for TRPO Mujoco and RoboSchool tasks. The original OpenAI implementation does not support multiple actors sampling trajectories at the same time, so we modified the code to support this feature and accelerate training.
Table 9: Baseline hyperparameter setting for PPO MuJoCo and RoboSchool tasks.

Hyperparameter Value
Hidden layer size
Number of hidden layers
Samples per batch
Replay buffer size
Learning rate constant
Discount factor ()
Target smoothing coefficient ()
Target update interval
Reward Scaling
(Hopper, Walker, HalfCheetah, Ant)
(MuJoCo Humanoid and all RoboSchool tasks)
(HumanoidStandup)
Table 10: Baseline hyperparameter setting for SAC.

Appendix D Hyperparameter Sampling Details

In Section 5, we present results based on five sampled hyperparameter settings per algorithm. To obtain these variations, we vary the learning rates and the hyperparameters to which each algorithm is particularly sensitive. For A2C, TRPO, and PPO, we vary the number of rollout timesteps between consecutive policy updates by varying the number of actors or the number of trajectory-sampling timesteps per actor. For SAC, we vary the reward scale and the target smoothing coefficient.

More concretely, for A2C, we sample the learning rate (with linear decay), the number of trajectory-sampling timesteps per actor (nsteps), and the number of actors (nenvs), each from a set of candidate values. For TRPO, we sample the learning rate of the value network (vf_stepsize) and the number of trajectory-sampling timesteps per actor (nsteps); the policy update uses conjugate gradient descent and is controlled by the maximum KL divergence. For PPO, we sample the learning rate, the number of actors (nenvs), and the probability ratio clipping range (cliprange). For SAC, we sample the learning rate, the target smoothing coefficient ($\tau$), and the reward scale from small, default, and large modes, where each environment's default reward scale is decreased or increased accordingly. The sampled hyperparameter settings 1-5 for each algorithm are listed in Tables 11-14.
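
For illustration, such settings could be drawn as in the following sketch; all candidate values below are hypothetical placeholders and do not reproduce the actual sets used in our experiments.

```python
import random

# Purely illustrative candidate sets; the actual candidate values are
# algorithm-specific and are listed (per algorithm) in Tables 11-14.
PPO_CANDIDATES = {
    "learning_rate": [1e-4, 3e-4, 1e-3],   # hypothetical placeholders
    "nenvs": [4, 8, 16],                    # hypothetical placeholders
    "cliprange": [0.1, 0.2, 0.3],           # hypothetical placeholders
}

def sample_setting(candidates, seed=None):
    """Draw one hyperparameter setting by sampling each entry independently."""
    rng = random.Random(seed)
    return {name: rng.choice(values) for name, values in candidates.items()}

# Example: five sampled settings for PPO.
settings = [sample_setting(PPO_CANDIDATES, seed=i) for i in range(5)]
```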

Learning rate Nsteps Nenvs
Baseline
Hyperparam. 1
Hyperparam. 2
Hyperparam. 3
Hyperparam. 4
Hyperparam. 5
Table 11: Sampled hyperparameter settings for A2C.
Vf_stepsize Nsteps
Baseline
Hyperparam. 1
Hyperparam. 2
Hyperparam. 3
Hyperparam. 4
Hyperparam. 5
Table 12: Sampled hyperparameter settings for TRPO.
Learning rate Nenvs Cliprange
Baseline linear
Hyperparam. 1 linear
Hyperparam. 2 constant
Hyperparam. 3 linear
Hyperparam. 4 constant
Hyperparam. 5 linear
Table 13: Sampled hyperparameter settings for PPO
Learning rate Mode
Baseline default
Hyperparam. 1 small
Hyperparam. 2 large
Hyperparam. 3 small
Hyperparam. 4 small
Hyperparam. 5 default
Table 14: Sampled hyperparameter settings for SAC

Appendix E Hyperparameter Experiment Improvement Percentage Result

We provide the percentage-of-improvement results in Table 15 as a complement to Table 4, for the experiments with multiple sampled hyperparameters.


Reg \ Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Entropy 20.0 40.0 32.0 0.0 26.7 16.0 10.0 33.3 24.0 60.0 13.3 32.0 22.5 28.3 26.0
$L_2$ 20.0 60.0 44.0 10.0 40.0 28.0 20.0 86.7 60.0 10.0 40.0 28.0 15.0 56.7 40.0
$L_1$ 10.0 53.3 36.0 10.0 46.7 32.0 10.0 86.7 56.0 20.0 26.7 24.0 12.5 53.3 37.0
Weight Clip 0.0 46.7 28.0 40.0 46.7 44.0 10.0 73.3 48.0 0.0 33.3 20.0 12.5 50.0 35.0
Dropout 20.0 0.0 8.0 N/A N/A N/A 0.0 40.0 24.0 0.0 20.0 12.0 6.7 20.0 14.7
BatchNorm 0.0 0.0 0.0 10.0 0.0 4.0 10.0 33.3 24.0 20.0 20.0 20.0 10.0 13.3 12.0
Table 15: Percentage (%) of environments where the final performance ”improves” when using regularization, under five randomly sampled training hyperparameters for each algorithm.

Reg \Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
$L_2$ 0.052 0.558 0.066 0.221 0.585 0.209 0.661 0.000 0.000 0.894 0.069 0.122 0.580 0.001 0.016
$L_1$ 0.001 0.000 0.000 0.569 0.051 0.069 0.620 0.000 0.001 0.669 0.933 0.753 0.060 0.097 0.827
Weight Clip 0.005 0.000 0.000 0.643 0.273 0.248 0.785 0.000 0.002 0.047 0.806 0.365 0.017 0.420 0.421
Dropout 0.000 0.000 0.000 N/A N/A N/A 0.067 0.426 0.124 0.932 0.013 0.029 0.000 0.000 0.000
BatchNorm 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.003 0.000 0.011 0.155 0.598 0.000 0.000 0.000
Table 16: P-values from Welch's t-test comparing the $z$-scores of entropy regularization and other regularizers, under the default hyperparameter setting.

Reg \Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
$L_2$ 0.005 0.004 0.589 0.250 0.004 0.007 0.022 0.000 0.000 0.334 0.007 0.169 0.959 0.000 0.000
$L_1$ 0.405 0.040 0.041 0.689 0.001 0.013 0.068 0.000 0.000 0.628 0.202 0.525 0.234 0.000 0.000
Weight Clip 0.760 0.863 0.959 0.028 0.112 0.007 0.586 0.000 0.000 0.002 0.449 0.105 0.397 0.000 0.037
Dropout 0.000 0.000 0.000 N/A N/A N/A 0.000 0.000 0.000 0.314 0.850 0.564 0.000 0.000 0.000
BatchNorm 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.804 0.169 0.329 0.000 0.000 0.000
Table 17: P-values from Welch's t-test comparing the $z$-scores of entropy regularization and other regularizers, under five sampled hyperparameter settings for each policy optimization algorithm.

Appendix F Statistical Significance Test Comparing Entropy Regularization with other Regularizations

As a complement to Table 3 in Section 4 and Table 5 in Section 5, we present the p-values from Welch's t-test comparing the $z$-scores of entropy regularization with other regularizers in Tables 16 and 17. Note that whether the significance indicates improvement or harm relative to entropy regularization depends on the relative mean $z$-scores in Table 2 (default hyperparameter setting) and Table 4 (sampled hyperparameter settings). We observe that, in total, $L_2$ improves significantly over entropy in both the default and the sampled hyperparameter settings, while $L_1$ and weight clipping are significantly better than entropy under the sampled hyperparameter settings. In general, the improvement over entropy is statistically more significant on hard tasks.

Appendix G Additional Metrics for Evaluation of Performance

G.1 Ranking all regularizers


Reg \ Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Baseline 3.33 4.50 4.11 3.33 4.67 4.22 3.00 6.00 5.00 4.33 5.83 5.33 3.50 5.25 4.67
Entropy 1.00 1.50 1.33 4.67 3.00 3.56 3.00 4.17 3.78 3.00 3.83 3.55 2.92 3.13 3.06
$L_2$ 2.67 1.50 1.89 1.33 2.83 2.33 3.00 2.17 2.45 3.00 2.67 2.78 2.50 2.29 2.36
$L_1$ 4.33 3.67 3.89 2.67 2.17 2.34 3.33 2.67 2.89 3.67 4.83 4.44 3.50 3.34 3.39
Weight Clip 3.67 3.83 3.78 3.00 2.33 2.55 3.00 2.50 2.67 4.33 4.17 4.22 3.50 3.21 3.31
Dropout 6.00 6.00 6.00 N/A N/A N/A 5.67 4.67 5.00 3.33 3.17 3.22 5.00 4.61 4.74
BatchNorm 7.00 7.00 7.00 6.00 6.00 6.00 7.00 5.83 6.22 6.33 3.50 4.44 6.58 5.58 5.92
Table 18: The average rank of mean return for different regularization methods under the default hyperparameter setting. $L_2$ regularization tops the ranking for most algorithms and environment difficulties.

Reg \ Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Baseline 2.70 4.13 3.65 3.70 3.40 3.50 3.00 5.53 4.69 4.20 5.00 4.73 3.40 4.52 4.14
Entropy 3.50 2.93 3.12 3.60 3.47 3.51 4.30 4.40 4.37 3.10 4.47 4.01 3.63 3.82 3.75
$L_2$ 4.40 2.27 2.98 2.50 2.53 2.52 1.90 1.80 1.83 3.50 2.73 2.99 3.08 2.33 2.58
$L_1$ 2.70 2.53 2.59 3.10 2.27 2.55 2.80 2.20 2.40 3.70 4.00 3.90 3.08 2.75 2.86
Weight Clip 3.30 3.13 3.19 2.20 3.33 2.95 3.70 2.87 3.15 5.80 4.27 4.78 3.75 3.40 3.52
Dropout 4.40 6.07 5.51 N/A N/A N/A 6.10 5.33 5.59 4.20 4.27 4.25 4.90 5.22 5.12
BatchNorm 7.00 6.93 6.95 5.90 6.00 5.97 6.20 5.80 5.93 3.50 3.27 3.35 5.65 5.50 5.55
Table 19: The average rank in the mean return for different regularization methods, under five randomly sampled training hyperparameters for each algorithm.

We compute the "average ranking" metric to compare the relative effectiveness of different regularization methods. Note that the average ranking of different methods across a set of tasks/datasets has been adopted as a metric before, e.g., in Ranftl et al. (2019) and Knapitsch et al. (2017). Here, we rank the performance of all the regularization methods, together with the baseline, for each algorithm and task, and present the average ranks in Tables 18 and 19. For each environment, the methods are ranked by their mean return (averaged over 5 random seeds), and the ranks are then averaged over environments. From Tables 18 and 19, we observe that, except for BN and dropout in on-policy algorithms, all regularizations on average outperform the baseline. Again, $L_2$ regularization is the strongest in most cases, and similar observations can be made as in the previous tables. For every algorithm, the baseline ranks lower on harder tasks than on easier ones; in total, it ranks 3.50 on easier tasks and 5.25 on harder tasks. This indicates that regularization is more effective when the tasks are harder.
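
A sketch of this metric is given below, where `mean_returns` maps each environment to the per-method mean return (over five seeds); rank 1 denotes the best method.

```python
import numpy as np
from scipy.stats import rankdata

def average_ranks(mean_returns):
    """Rank methods within each environment by mean return (rank 1 = highest),
    then average the ranks over environments."""
    methods = sorted(next(iter(mean_returns.values())).keys())
    per_env_ranks = [rankdata([-scores[m] for m in methods])  # negate: highest return -> rank 1
                     for scores in mean_returns.values()]
    return dict(zip(methods, np.mean(per_env_ranks, axis=0)))
```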

G.2 Comparison and Significance Testing with Scaled Rewards

Min-max scaling is a linear mapping that transforms values in the range $[x_{\min}, x_{\max}]$ to $[0, 1]$ via $x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$. For each environment and policy optimization algorithm (for example, PPO on Ant), we compute a "scaled reward" for each regularization method and the baseline, using the maximum mean return obtained by any method (including the baseline) as $x_{\max}$ and the minimum as $x_{\min}$, on positively clipped returns; the resulting values are reported on a 0-100 scale. We then average the scaled rewards over environments of a certain difficulty (easy/hard). We present the results under the default hyperparameter setting in Tables 20-22 and under the sampled hyperparameter settings in Tables 23-25. To analyze whether a regularizer significantly improves over the baseline and whether conventional regularizers significantly improve over entropy, we perform Welch's t-test on the scaled rewards, using the same approach as for the $z$-scores. Our observations are similar to those made in Sections 4 and 5.
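
Under the description above (best method mapped to 100, worst to 0), the scaled reward for one (algorithm, environment) pair could be computed as in the following sketch:

```python
import numpy as np

def scaled_rewards(mean_returns):
    """mean_returns: dict mapping each method (incl. baseline) to its mean
    final return. Returns min-max scaled rewards on a 0-100 scale, computed
    on positively clipped returns."""
    methods = list(mean_returns.keys())
    vals = np.clip(np.array([mean_returns[m] for m in methods], dtype=float), 0, None)
    lo, hi = vals.min(), vals.max()
    scaled = 100.0 * (vals - lo) / (hi - lo)
    return dict(zip(methods, scaled))
```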


Reg \Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Entropy 100.00 93.00 95.31 86.20 84.30 84.96 95.20 57.20 69.89 94.90 89.00 90.97 94.09 80.88 85.28
$L_2$ 74.80 90.20 85.06 97.20 87.20 90.56 88.00 93.20 91.45 97.10 92.40 93.94 89.28 90.74 90.25
$L_1$ 58.80 70.60 66.69 91.90 95.50 94.28 90.00 93.50 92.32 95.00 89.70 91.48 83.62 87.33 86.19
Weight Clip 61.20 65.30 63.89 90.70 89.60 89.96 92.70 88.40 89.83 91.60 89.20 89.99 84.02 83.11 83.42
Dropout 0.85 9.05 6.32 N/A N/A N/A 76.20 42.50 53.69 97.00 96.20 96.49 58.02 49.25 52.17
BatchNorm 0.00 6.32 4.21 21.80 12.90 15.85 25.70 30.70 29.07 88.20 92.90 91.35 33.93 35.72 35.12
Baseline 64.10 48.90 53.95 91.80 77.90 82.53 89.30 47.90 61.66 90.50 86.40 87.81 83.93 65.26 71.49
Table 20: Scaled rewards for each regularization method under the default hyperparameter setting.

Reg \Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Entropy 0.010 0.000 0.000 0.643 0.327 0.661 0.673 0.343 0.356 0.232 0.538 0.247 0.074 0.000 0.000
$L_2$ 0.378 0.000 0.000 0.504 0.162 0.141 0.916 0.000 0.000 0.113 0.065 0.016 0.268 0.000 0.000
$L_1$ 0.621 0.009 0.055 0.997 0.002 0.032 0.953 0.000 0.000 0.261 0.259 0.120 0.994 0.000 0.000
Weight Clip 0.801 0.042 0.136 0.938 0.096 0.224 0.802 0.000 0.000 0.797 0.462 0.439 0.982 0.000 0.000
Dropout 0.000 0.000 0.000 N/A N/A N/A 0.293 0.563 0.336 0.117 0.001 0.000 0.000 0.000 0.000
BatchNorm 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.054 0.000 0.600 0.034 0.154 0.000 0.000 0.000
Table 21: P-values from Welch's t-test comparing the scaled rewards of regularization methods and the baseline, under the default hyperparameter setting.

Reg \Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
0.066 0.581 0.069 0.247 0.697 0.287 0.493 0.000 0.002 0.699 0.302 0.280 0.416 0.003 0.070
0.002 0.000 0.000 0.625 0.071 0.090 0.663 0.000 0.002 0.736 0.734 0.859 0.052 0.054 0.774
Weight Clip 0.006 0.000 0.000 0.693 0.487 0.422 0.838 0.000 0.008 0.160 0.881 0.726 0.065 0.536 0.509
Dropout 0.000 0.000 0.000 N/A N/A N/A 0.094 0.142 0.054 0.723 0.024 0.029 0.000 0.000 0.000
BatchNorm 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.006 0.000 0.036 0.219 0.906 0.000 0.000 0.000
Table 22: P-values from Welch’s t-test comparing the scaled rewards of entropy regularization and the other regularizers, under the default hyperparameter setting.

Reg \Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Baseline 86.80 60.80 71.20 87.90 88.30 88.10 93.70 66.60 77.40 92.80 86.80 89.20 90.30 75.60 81.50
Entropy 83.80 81.20 82.20 86.80 91.00 89.30 89.80 65.80 75.40 97.50 87.50 91.50 89.50 81.40 84.60
69.90 92.80 83.60 93.60 95.40 94.70 96.40 90.60 92.90 93.30 95.50 94.60 88.30 93.60 91.50
89.10 88.70 88.80 89.90 97.10 94.20 95.20 89.00 91.50 92.70 91.70 92.10 91.70 91.60 91.70
Weight Clip 85.50 81.30 83.00 96.40 96.60 96.50 91.30 84.10 87.00 86.70 91.60 89.60 90.00 88.40 89.00
Dropout 59.30 21.90 36.90 N/A N/A N/A 71.20 57.60 63.00 94.10 89.30 91.20 74.90 56.30 63.70
BatchNorm 0.00 14.90 8.96 41.70 47.90 45.40 70.60 53.00 60.00 96.10 92.60 94.00 52.10 52.10 52.10
Table 23: Scaled rewards for each regularization method under the five sampled hyperparameter settings.

Reg \Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Entropy 0.586 0.000 0.005 0.737 0.181 0.514 0.218 0.870 0.565 0.055 0.690 0.184 0.634 0.006 0.040
0.003 0.000 0.002 0.090 0.000 0.000 0.396 0.000 0.000 0.859 0.000 0.002 0.323 0.000 0.000
0.709 0.000 0.000 0.514 0.000 0.000 0.631 0.000 0.000 0.982 0.027 0.116 0.486 0.000 0.000
Weight Clip 0.804 0.000 0.002 0.007 0.000 0.000 0.470 0.000 0.002 0.083 0.049 0.831 0.864 0.000 0.000
Dropout 0.000 0.000 0.000 N/A N/A N/A 0.000 0.027 0.000 0.655 0.264 0.260 0.000 0.000 0.000
BatchNorm 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.003 0.000 0.192 0.022 0.009 0.000 0.000 0.000
Table 24: P-values from Welch’s t-test comparing the scaled rewards of each regularization method and the baseline, under the five sampled hyperparameter settings.

Reg \Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
0.009 0.004 0.690 0.065 0.011 0.003 0.034 0.000 0.000 0.074 0.000 0.045 0.572 0.000 0.000
0.372 0.047 0.044 0.363 0.000 0.004 0.084 0.000 0.000 0.101 0.064 0.720 0.244 0.000 0.000
Weight Clip 0.748 0.986 0.824 0.006 0.003 0.000 0.636 0.000 0.000 0.001 0.104 0.324 0.769 0.000 0.001
Dropout 0.001 0.000 0.000 N/A N/A N/A 0.000 0.068 0.000 0.168 0.469 0.868 0.000 0.000 0.000
BatchNorm 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.009 0.000 0.520 0.049 0.140 0.000 0.000 0.000
Table 25: P-values from Welch’s t-test comparing the scaled rewards of entropy regularization and the other regularizers, under the five sampled hyperparameter settings.

Reg \ Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Entropy 33.3 66.7 55.6 0.0 33.3 22.2 0.0 16.7 11.1 0.0 16.7 11.1 8.3 33.3 25.0
0.0 50.0 33.3 0.0 50.0 33.3 33.3 66.7 55.6 33.3 33.3 33.3 16.7 50.0 38.9
0.0 33.3 22.2 0.0 50.0 33.3 33.3 50.0 44.4 33.3 33.3 33.3 16.7 41.7 33.3
Weight clipping 0.0 0.0 0.0 33.3 33.3 33.3 33.3 50.0 44.4 33.3 0.0 11.1 25.0 20.8 22.2
Dropout 0.0 0.0 0.0 N/A N/A N/A 33.3 50.0 44.4 66.7 16.7 33.3 33.3 22.2 25.9
BatchNorm 0.0 0.0 0.0 0.0 0.0 0.0 0.0 16.7 11.1 33.3 50.0 44.4 8.3 16.7 13.9
Table 26: Percentage (%) of environments in which applying the regularization “improves” performance over the baseline. For each algorithm, a single strength for each regularization is applied across all environments.

Reg \ Alg A2C TRPO PPO SAC TOTAL
Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total Easy Hard Total
Baseline 0.558 0.078 0.238 0.375 0.218 0.270 0.377 -0.392 -0.136 0.153 -0.137 -0.040 0.366 -0.058 0.083
Entropy 1.070 0.858 0.928 0.246 0.330 0.302 0.023 -0.250 -0.159 -0.031 -0.219 -0.156 0.327 0.180 0.229
0.816 1.050 0.971 0.330 0.437 0.401 0.388 0.677 0.581 -0.327 0.080 -0.055 0.302 0.561 0.475
0.385 -0.010 0.122 0.216 0.630 0.492 0.315 0.438 0.397 -0.137 -0.149 -0.145 0.195 0.227 0.216
Weight Clip -0.871 0.143 -0.195 0.375 0.191 0.253 0.387 0.364 0.371 -0.189 -0.355 -0.300 -0.075 0.086 0.032
Dropout -0.960 -1.010 -0.991 N/A N/A N/A -0.048 -0.194 -0.145 0.614 0.331 0.425 -0.131 -0.290 -0.237
BatchNorm -0.996 -1.110 -1.070 -1.540 -1.810 -1.720 -1.440 -0.643 -0.909 -0.083 0.449 0.272 -1.020 -0.778 -0.857
Table 27: The average z-score for different regularization methods. For each algorithm, a single strength for each regularization is applied across all environments.

Appendix H Regularization with a Single Strength

In previous sections, we tune the strength of each regularization for every algorithm and environment, as described in Appendix B. Here we instead restrict each regularization method to a single strength per algorithm, shared across environments. The results are shown in Tables 26 and 27, and the selected strengths are listed in Table 28. We see that L2 regularization is still generally the best-performing method, with SAC as an exception, where BN is better. This can be explained by the fact that in SAC the reward scaling coefficient differs across environments, which potentially causes the optimal L2 and L1 strengths to vary a lot between environments, whereas BN does not have a strength parameter.


Reg \ Alg A2C TRPO PPO SAC
Entropy 1.0
Weight clipping 0.2 0.2 0.2 0.3
Dropout 0.05 0.05 0.05 0.2
BatchNorm True True True True
Table 28: The fixed single regularization strengths that are used in each algorithm to obtain results in Table 26 and Table 27.

Appendix I Regularizing with both L2 and Entropy

We also investigate the effect of combining L2 regularization with entropy regularization, given that applying either of them alone yields a performance improvement. We apply L2 regularization and entropy regularization together, each at its optimal strength, and compare with applying either one alone. From Figure 5, we find that the performance increases for PPO HumanoidStandup, stays approximately the same for TRPO Ant, and decreases for A2C HumanoidStandup. Thus, the benefits of the two regularizers are not always additive. A possible explanation is that the algorithms already achieve good performance with only L2 regularization or only entropy regularization, and further improvement is limited by the intrinsic capabilities of the algorithms.
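For reference, the snippet below is a minimal sketch of how an L2 penalty on the policy network and an entropy bonus might be combined in a single policy-gradient loss; the function, coefficient values, and tensor names are illustrative assumptions, not the exact objective used by each algorithm.

```python
import torch

def policy_loss_with_l2_and_entropy(policy_net, log_probs, advantages,
                                    entropy, l2_coef=1e-4, ent_coef=0.01):
    """Policy-gradient surrogate plus both regularizers (illustrative sketch).

    log_probs, advantages, entropy: tensors produced by the rollout / policy head.
    l2_coef and ent_coef play the role of the per-regularizer strengths tuned in
    the paper, but the default values here are placeholders.
    """
    pg_loss = -(log_probs * advantages.detach()).mean()  # vanilla policy-gradient term
    l2_penalty = sum((p ** 2).sum() for p in policy_net.parameters())
    # Add the L2 penalty; subtract the entropy bonus (higher entropy -> lower loss).
    return pg_loss + l2_coef * l2_penalty - ent_coef * entropy.mean()
```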

Figure 5: The effect of combining L2 regularization with entropy regularization. For PPO HumanoidStandup, we use the third randomly sampled hyperparameter setting. For A2C HumanoidStandup and TRPO Ant, we use the baseline hyperparameter setting as in Section 4.

Appendix J Comparing Regularization with Fixed Weight Decay (AdamW)

Figure 6: Comparison between L2 regularization and fixed weight decay (AdamW). For PPO Humanoid and HumanoidStandup, we use the third randomly sampled hyperparameter setting.

For the Adam optimizer (Kingma and Ba, 2015), “fixed weight decay” (AdamW in Loshchilov and Hutter (2019)) differs from L2 regularization in that the gradient of the L2 penalty is not added to the gradient of the original loss; instead, the weights are directly “decayed” at the end of each gradient update. For Adam these two procedures behave very differently (see Loshchilov and Hutter (2019) for details). In this section, we compare the effect of adding L2 regularization with that of using AdamW, with PPO on Humanoid and HumanoidStandup. The results are shown in Figure 6. As with L2 regularization, we briefly tune the strength of the weight decay in AdamW and use the optimal value. We find that while both L2 regularization and AdamW can significantly improve performance over the baseline, AdamW tends to perform slightly worse than L2 regularization.
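The distinction between the two procedures can be seen in the following PyTorch sketch, which contrasts adding an L2 penalty to the loss under Adam with using AdamW's decoupled weight_decay; the model, surrogate loss, and coefficient values are placeholders rather than our training code.

```python
import torch

model = torch.nn.Linear(64, 8)   # stand-in for the policy network

def surrogate_loss(out):         # placeholder for the RL objective
    return out.pow(2).mean()

# (a) L2 regularization: the penalty is added to the loss, so its gradient
#     passes through Adam's adaptive moment estimates.
opt_l2 = torch.optim.Adam(model.parameters(), lr=3e-4)
opt_l2.zero_grad()
loss = surrogate_loss(model(torch.randn(32, 64)))
loss = loss + 1e-4 * sum(p.pow(2).sum() for p in model.parameters())
loss.backward()
opt_l2.step()

# (b) AdamW: the weights are decayed directly in the update step, decoupled
#     from the gradient of the loss (Loshchilov and Hutter, 2019).
opt_adamw = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=1e-2)
opt_adamw.zero_grad()
surrogate_loss(model(torch.randn(32, 64))).backward()
opt_adamw.step()
```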

Appendix K Additional Training Curves Under Default Hyperparameters

Figure 7: Reward vs. timesteps, for four algorithms (columns) and five environments (rows).

As a complement to Figure 1 in Section 4, we plot the training curves of the other five environments in Figure 7.

Appendix L Training Curves for Hyperparameter Experiments

In this section, we plot the full training curves for the experiments in Section 5, with five sampled hyperparameter settings for each algorithm, in Figures 8 to 11. The strength of each regularization is tuned over the range given in Appendix B.

Figure 8: Training curves of A2C regularizations under five randomly sampled hyperparameters.

Figure 9: Training curves of TRPO regularizations under five randomly sampled hyperparameters.

Figure 10: Training curves of PPO regularizations under five randomly sampled hyperparameters.

Figure 11: Training curves of SAC regularizations under five randomly sampled hyperparameters.

Appendix M Training Curves for Policy vs. Value Experiments

We plot the training curves for our study in Section 6 on policy and value network regularization in Figures 12-15.

Figure 12: The interaction between policy and value network regularization for A2C. The optimal policy regularization and value regularization strengths are listed in the legends. Results of regularizing both policy and value networks are obtained by combining the optimal policy and value regularization strengths.

Figure 13: The interaction between policy and value network regularization for TRPO.

Figure 14: The interaction between policy and value network regularization for PPO.

Figure 15: The interaction between policy and value network regularization for SAC.

References

  1. Continuous adaptation via meta-learning in nonstationary and competitive environments. arXiv preprint arXiv:1710.03641.
  2. Wasserstein GAN. arXiv preprint arXiv:1701.07875.
  3. Control regularization for reduced variance reinforcement learning. arXiv preprint arXiv:1905.05380.
  4. Quantifying generalization in reinforcement learning. arXiv preprint arXiv:1812.02341.
  5. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  6. OpenAI Baselines. GitHub: https://github.com/openai/baselines
  7. RL2: fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779.
  8. Generalization and regularization in DQN. arXiv preprint arXiv:1810.00123.
  9. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1126–1135.
  10. Addressing function approximation error in actor-critic methods. In Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 80, pp. 1587–1596.
  11. Information asymmetry in KL-regularized RL. In International Conference on Learning Representations.
  12. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  13. Meta-reinforcement learning of structured exploration strategies. In Advances in Neural Information Processing Systems, pp. 5302–5311.
  14. Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pp. 1856–1865.
  15. Soft actor-critic. GitHub: https://github.com/haarnoja/sac
  16. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  17. Deep reinforcement learning that matters. In Thirty-Second AAAI Conference on Artificial Intelligence.
  18. Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
  19. Adam: a method for stochastic optimization.
  20. Tanks and Temples: benchmarking large-scale scene reconstruction. ACM Transactions on Graphics 36 (4).
  21. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105.
  22. Continuous control with deep reinforcement learning. In International Conference on Learning Representations.
  23. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
  24. Decoupled weight decay regularization. In International Conference on Learning Representations.
  25. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928–1937.
  26. Roboschool: open-source software for robot simulation, integrated with OpenAI Gym. GitHub: https://github.com/openai/roboschool
  27. Assessing generalization in deep reinforcement learning. arXiv preprint arXiv:1810.12282.
  28. TD-regularized actor-critic methods. Machine Learning, pp. 1–35.
  29. Robust deep reinforcement learning with adversarial attacks. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 2040–2042.
  30. Towards generalization and simplicity in continuous control. In Advances in Neural Information Processing Systems, pp. 6550–6561.
  31. Towards robust monocular depth estimation: mixing datasets for zero-shot cross-dataset transfer. arXiv preprint arXiv:1907.01341.
  32. You Only Look Once: unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788.
  33. Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pp. 91–99.
  34. Trust region policy optimization. In International Conference on Machine Learning, pp. 1889–1897.
  35. High-dimensional continuous control using generalized advantage estimation. In International Conference on Learning Representations.
  36. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
  37. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15 (1), pp. 1929–1958.
  38. Structured control nets for deep reinforcement learning. In International Conference on Machine Learning, pp. 4749–4758.
  39. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pp. 1057–1063.
  40. Distral: robust multitask reinforcement learning. In Advances in Neural Information Processing Systems, pp. 4496–4506.
  41. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 23–30.
  42. MuJoCo: a physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033.
  43. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
  44. Function optimization using connectionist reinforcement learning algorithms. Connection Science 3 (3), pp. 241–268.
  45. A faster PyTorch implementation of Faster R-CNN. GitHub: https://github.com/jwyang/faster-rcnn.pytorch
  46. A dissection of overfitting and generalization in continuous reinforcement learning. arXiv preprint arXiv:1806.07937.
  47. A study on overfitting in deep reinforcement learning. arXiv preprint arXiv:1804.06893.
  48. Investigating generalisation in continuous deep reinforcement learning. arXiv preprint arXiv:1902.07015.
  49. Maximum entropy inverse reinforcement learning. In Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 3, AAAI’08, pp. 1433–1438.