Reinforcement Learning with Perturbed Rewards

Abstract

Recent studies have shown that reinforcement learning (RL) models are vulnerable in various noisy scenarios. For instance, the observed reward channel is often subject to noise in practice (e.g., when rewards are collected through sensors), and is therefore not credible. In addition, for applications such as robotics, a deep reinforcement learning (DRL) algorithm can be manipulated to produce arbitrary errors by receiving corrupted rewards. In this paper, we consider noisy RL problems with perturbed rewards, where the perturbation can be modeled by a reward confusion matrix. We develop a robust RL framework that enables agents to learn in noisy environments where only perturbed rewards are observed. Our solution framework builds on existing RL/DRL algorithms and is the first to address the biased noisy reward setting without any assumptions on the true reward distribution (e.g., the zero-mean Gaussian noise assumed in previous works). The core ideas of our solution include estimating a reward confusion matrix and defining a set of unbiased surrogate rewards. We prove the convergence and sample complexity of our approach. Extensive experiments on different DRL platforms show that policies trained with our estimated surrogate rewards achieve higher expected rewards, and converge faster than existing baselines. For instance, the state-of-the-art PPO algorithm obtains 84.6% and 80.8% improvements in average score over five Atari games, with error rates of 10% and 30%, respectively.

Introduction

Designing a suitable reward function plays a critical role in building reinforcement learning models for real-world applications. Ideally, one would want to customize reward functions to achieve application-specific goals [10]. In practice, however, it is difficult to design a reward function that produces credible rewards in the presence of noise. This is because the output from any reward function is subject to multiple kinds of randomness:

  • Inherent Noise. For instance, sensors on a robot will be affected by physical conditions such as temperature and lighting, and therefore will report back noisy observed rewards.

  • Application-Specific Noise. In machine teaching tasks [27], when an RL agent receives feedback or instructions, different human instructors might provide drastically different feedback, which leads to biased rewards for the machine.

  • Adversarial Noise. Huang et al. have shown that by adding adversarial perturbations to each frame of the game, they can mislead pre-trained RL policies arbitrarily.

Assuming an arbitrary noise model makes solving this noisy RL problem extremely challenging. Instead, we focus on a specific noisy reward model, which we call perturbed rewards, in which the noise applied to the observed rewards is learnable: the perturbed rewards are generated via a confusion matrix that flips the true reward to another level according to a certain distribution. This is not a very restrictive setting [6] to start with, even considering that the noise could be adversarial: for instance, adversaries can manipulate sensors by reversing the reward value.

In this paper, we develop a robust framework, aided by an unbiased reward estimator, that enables an RL agent to learn in a noisy environment while observing only perturbed rewards. The main challenge is that the observed rewards are likely to be biased, and in RL or DRL the accumulated errors could amplify the reward estimation error over time. To the best of our knowledge, this is the first work addressing robust RL in the biased-reward setting (existing work needs to assume an unbiased noise distribution). We do not require any assumption on the knowledge of the true reward distribution or adversarial strategies, other than the fact that the noise is generated according to a reward confusion matrix. We address the issue of estimating the reward confusion matrices by proposing an efficient and flexible estimation module for settings with deterministic rewards.

Everitt et al. provided preliminary studies of this noisy reward problem and gave some general negative results. The authors proved a No Free Lunch theorem: without any assumptions about the reward corruption, all agents can be misled. Our results do not contradict theirs, as we consider a stochastic noise generation model (one that leads to a set of perturbed rewards).

We analyze the convergence and sample complexity of the policy trained using our proposed method based on surrogate rewards, using Q-Learning as an example. We then conduct extensive experiments on OpenAI Gym [2] and show that the proposed reward-robust RL method achieves comparable performance with the policy trained using the true rewards. In some cases, our method even achieves higher cumulative reward; this was surprising to us at first, but we conjecture that the inserted noise together with our noise-removal unbiased estimator adds another layer of exploration, which proves to be beneficial in some settings.

Our contributions are summarized as follows: (1) We formulate and generalize the idea of defining a simple but effective unbiased estimator for true rewards in the reinforcement learning setting. The proposed estimator helps guarantee convergence to the optimal policy even when the RL agents only have noisy observations of the rewards. (2) We analyze the convergence to the optimal policy and the finite sample complexity of our reward-robust RL methods, using Q-Learning as the example. (3) Extensive experiments on OpenAI Gym show that our proposed algorithms perform robustly even at high noise rates. Code is available online: https://github.com/wangjksjtu/rl-perturbed-reward.

Related Work

Robust Reinforcement Learning

It is known that RL algorithms are vulnerable in noisy environments [13]. Recent studies  [12, 21, 24] show that learned RL policies can be easily misled with small perturbations in observations. The presence of noise is very common in real-world environments, especially in robotics-relevant applications [4, 27]. Consequently, robust RL algorithms have been widely studied, aiming to train a robust policy that is capable of withstanding perturbed observations [48, 34, 9] or transferring to unseen environments [36, 7]. However, these algorithms mainly focus on noisy vision observations, instead of observed rewards. Some early works [33, 31, 44, 37] on noisy reward RL rely on the knowledge of unbiased noise distribution, which limits their applicability to more general biased rewards settings. A couple of recent works [23, 38] have looked into a parallel question of training robust RL algorithms with uncertainty in models.

Learning with Noisy Data

Learning appropriately with biased data has received quite a bit of attention in recent machine learning studies [32, 41, 42, 45, 51, 28]. The idea of this line of work is to define unbiased surrogate loss functions that recover the true loss using knowledge of the noise. Our work is the first to formally extend this idea to the reinforcement learning setting, both theoretically and empirically. Our quantitative understanding provides practical insights for implementing reinforcement learning algorithms in noisy environments.

Problem Formulation and Preliminaries

In this section, we define our problem of learning from perturbed rewards in reinforcement learning. Throughout this paper, we will use perturbed reward and noisy reward interchangeably, considering that the noise could come from both intentional perturbation and natural randomness. In what follows, we formulate our Markov Decision Process (MDP) and reinforcement learning (RL) problem with perturbed rewards.

Reinforcement Learning: The Noise-Free Setting

Our RL agent interacts with an unknown environment and attempts to maximize the total of its collected rewards. The environment is formalized as a Markov Decision Process (MDP), denoted as $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{P}, \gamma \rangle$. At each time $t$, the agent in state $s_t \in \mathcal{S}$ takes an action $a_t \in \mathcal{A}$, which returns a reward $r(s_t, a_t, s_{t+1})$ (which we will also shorthand as $r_t$), and leads to the next state $s_{t+1} \in \mathcal{S}$ according to a transition probability kernel $\mathcal{P}$. $\mathcal{P}$ encodes the probability $\mathbb{P}_a(s_{t+1} \mid s_t)$, and is commonly unknown to the agent. The agent's goal is to learn the optimal policy, a conditional distribution $\pi(a \mid s)$ that maximizes the state's value function. The value function calculates the cumulative reward the agent is expected to receive given it would follow the current policy $\pi$ after observing the current state $s_t$: $V^{\pi}(s) = \mathbb{E}\big[\sum_{k=0}^{\infty} \gamma^{k} r_{t+k} \mid s_t = s\big]$, where $0 < \gamma \le 1$ is a discount factor ($\gamma = 1$ indicates an undiscounted MDP setting [40, 43, 15]). Intuitively, the agent evaluates how preferable each state is, given the current policy. From the Bellman Equation, the optimal value function is given by $V^{*}(s) = \max_{a \in \mathcal{A}} \sum_{s_{t+1}} \mathbb{P}_a(s_{t+1} \mid s_t = s)\big[r_t + \gamma V^{*}(s_{t+1})\big]$. It is a standard practice for RL algorithms to learn a state-action value function, also called the Q-function. The Q-function denotes the expected cumulative reward if the agent chooses action $a$ in the current state $s$ and follows $\pi$ thereafter: $Q^{\pi}(s, a) = \mathbb{E}\big[\sum_{k=0}^{\infty} \gamma^{k} r_{t+k} \mid s_t = s, a_t = a\big]$.

Perturbed Reward in RL

In many practical settings, the RL agent does not observe the reward feedback perfectly. We consider the following MDP with perturbed reward, denoted as $\tilde{\mathcal{M}} = \langle \mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{C}, \mathcal{P}, \gamma \rangle$: instead of observing $r_t$ at each time $t$ directly (following its action), our RL agent only observes a perturbed version of $r_t$, denoted as $\tilde{r}_t$. For most of our presentation, we focus on the cases where $\mathcal{R}$ and $\tilde{\mathcal{R}}$ are finite sets; but our results generalize to the continuous reward settings with discretization techniques.

The generation of $\tilde{r}$ follows a certain function $\mathcal{C}: \mathcal{S} \times \mathcal{R} \rightarrow \tilde{\mathcal{R}}$. To keep our presentation focused, we consider the following state-independent flipping error rates model: if the rewards are binary (consider $r_+$ and $r_-$), $\tilde{r}$ can be characterized by the noise rate parameters $e_+, e_-$: $e_+ = \mathbb{P}(\tilde{r} = r_- \mid r = r_+)$, $e_- = \mathbb{P}(\tilde{r} = r_+ \mid r = r_-)$. When the signal levels are beyond binary, suppose there are $M$ outcomes in total, denoted as $[R_0, R_1, \dots, R_{M-1}]$. $\tilde{r}$ will be generated according to the following confusion matrix $C_{M \times M}$, where each entry $c_{j,k} = \mathbb{P}(\tilde{r} = R_k \mid r = R_j)$ indicates the flipping probability for generating a perturbed outcome. Again we would like to note that we focus on settings with finite reward levels for most of our paper, but we provide discussions later on how to handle continuous rewards.
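
To make this noise model concrete, the following minimal sketch (our own illustration, not the paper's released code; the helper name perturb_reward and the example levels are assumptions) samples a perturbed reward from the confusion-matrix row of the true reward level.

import numpy as np

# Reward levels [R_0, ..., R_{M-1}] and a row-stochastic confusion matrix C,
# where C[j, k] = P(observed reward = R_k | true reward = R_j).
REWARD_LEVELS = np.array([-1.0, 0.0, 1.0])
C = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

def perturb_reward(true_reward, levels=REWARD_LEVELS, confusion=C, rng=np.random):
    """Sample a perturbed (noisy) reward from the confusion-matrix row of the true level."""
    j = int(np.argmin(np.abs(levels - true_reward)))   # index of the true reward level
    k = rng.choice(len(levels), p=confusion[j])        # sample the observed level index
    return levels[k]

print([perturb_reward(1.0) for _ in range(5)])  # the true reward +1 is flipped ~20% of the time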

In this paper, we also generalize our solution to the case where the noise rates (i.e., the reward confusion matrices) are unknown, for settings in which the reward for each (state, action) pair is deterministic. This differs from the assumption of known noise rates adopted in many supervised learning works [32]; instead, we estimate the confusion matrices within our framework.

Learning with Perturbed Rewards

In this section, we first introduce an unbiased estimator for binary rewards in our reinforcement learning setting when the error rates are known. This idea is inspired by [32], but we will extend the method to the multi-outcome, as well as the continuous reward settings.

Unbiased Estimator for True Reward

With knowledge of the noise rates (reward confusion matrices), we are able to establish an unbiased approximation of the true reward, in a similar way as done in [32]. We will call such a constructed unbiased reward a surrogate reward. To give an intuition, we start by replicating the results for binary rewards in our RL setting:

Lemma 1.

Let $r \in \{r_-, r_+\}$ be bounded. Then, if we define,

$\hat{r}(s_t, a_t, s_{t+1}) := \begin{cases} \dfrac{(1 - e_-)\, r_+ - e_+\, r_-}{1 - e_+ - e_-}, & \text{if } \tilde{r} = r_+,\\ \dfrac{(1 - e_+)\, r_- - e_-\, r_+}{1 - e_+ - e_-}, & \text{if } \tilde{r} = r_-, \end{cases} \qquad (1)$

we have, for any $r$, $\mathbb{E}_{\tilde{r} \mid r}[\hat{r}] = r$.

In the standard supervised learning setting, the above property guarantees convergence: as more training data are collected, the empirical surrogate risk converges to its expectation, which equals the expectation of the true risk (due to the unbiased estimator). This is also the intuition for why we replace the reward terms with surrogate rewards in our RL algorithms.
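
As a concrete illustration of Eqn. (1), here is a minimal sketch of the binary surrogate reward (our own illustrative code; the function name and the numeric example are assumptions, not the paper's implementation):

def surrogate_reward_binary(observed, r_plus, r_minus, e_plus, e_minus):
    """Unbiased surrogate reward for binary rewards, following Eqn. (1).

    e_plus  = P(observe r_minus | true reward is r_plus)
    e_minus = P(observe r_plus  | true reward is r_minus)
    Requires e_plus + e_minus < 1 so that the estimator is well defined.
    """
    denom = 1.0 - e_plus - e_minus
    if observed == r_plus:
        return ((1.0 - e_minus) * r_plus - e_plus * r_minus) / denom
    return ((1.0 - e_plus) * r_minus - e_minus * r_plus) / denom

# Sanity check of unbiasedness: E[surrogate | true reward = r_plus] should equal r_plus.
e_p, e_m, r_p, r_m = 0.2, 0.1, 1.0, -1.0
expected = ((1 - e_p) * surrogate_reward_binary(r_p, r_p, r_m, e_p, e_m)
            + e_p * surrogate_reward_binary(r_m, r_p, r_m, e_p, e_m))
print(expected)  # ~1.0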

The above idea can be generalized to the multi-outcome setting in a fairly straightforward way. Define $\hat{\mathbf{R}} := [\hat{R}_0, \hat{R}_1, \dots, \hat{R}_{M-1}]^{\top}$, where $\hat{R}_k$ denotes the value of the surrogate reward when the observed reward is $R_k$. Let $\mathbf{R} := [R_0, R_1, \dots, R_{M-1}]^{\top}$ be the bounded reward vector with $M$ values. We have the following results:

Lemma 2.

Suppose the confusion matrix $C$ is invertible. With $\hat{\mathbf{R}}$ defined as:

$\hat{\mathbf{R}} = C^{-1}\mathbf{R}, \qquad (2)$

we have, for any true reward level $r = R_j$, $\mathbb{E}_{\tilde{r} \mid r}[\hat{r}] = r$.
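
A short sketch of the multi-outcome construction in Eqn. (2) follows (illustrative code under our own naming; the example confusion matrix is an assumption): the surrogate values are obtained by inverting the confusion matrix, and the surrogate assigned to an observation is the entry corresponding to the observed level.

import numpy as np

def surrogate_reward_table(C, reward_levels):
    """Return R_hat = C^{-1} R (Eqn. (2)); entry k is the surrogate value to use
    whenever the observed reward equals reward_levels[k]."""
    return np.linalg.inv(C) @ np.asarray(reward_levels, dtype=float)

levels = np.array([-1.0, 0.0, 1.0])
C = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
R_hat = surrogate_reward_table(C, levels)

# Unbiasedness check: for each true level j, sum_k C[j, k] * R_hat[k] equals levels[j].
print(C @ R_hat)  # ~[-1.  0.  1.]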

Continuous reward

When the reward signal is continuous, we discretize it into $M$ intervals and view each interval as a reward level, with its value approximated by its middle point. As $M$ increases, this quantization error can be made arbitrarily small. Our method is then the same as the solution for the multi-outcome setting, except that rewards are replaced with their discretized counterparts. Note that the finer the quantization, the smaller the quantization error, but the bigger the reward confusion matrix we have to learn. This trade-off can be addressed empirically.
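
A simple quantization helper could look like the following sketch (our own illustration; the equal-width bins and the midpoint convention are assumptions following the description above):

import numpy as np

def discretize_reward(r, low, high, num_bins):
    """Map a continuous reward in [low, high] to one of num_bins equal-width intervals,
    returning (level_index, level_value), where the level value is the interval midpoint."""
    edges = np.linspace(low, high, num_bins + 1)
    idx = int(np.clip(np.digitize(r, edges) - 1, 0, num_bins - 1))
    return idx, 0.5 * (edges[idx] + edges[idx + 1])

# Example: a Pendulum-like reward in [-17, 0] quantized into 17 levels.
print(discretize_reward(-3.4, low=-17.0, high=0.0, num_bins=17))  # (13, -3.5)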

So far we have assumed knowledge of the confusion matrices and have not restricted our solution to any specific setting. We will address this additional estimation issue, focusing on deterministic reward settings, and present our complete algorithm therein.

Convergence and Sample Complexity: Q-Learning

We now analyze the convergence and sample complexity of our surrogate-reward-based RL algorithms (assuming the confusion matrix $C$ is known), taking Q-Learning as an example.

Convergence guarantee

First, the convergence guarantee is stated in the following theorem:

Theorem 1.

Given a finite MDP, denoted as $\tilde{\mathcal{M}}$, the Q-Learning algorithm with surrogate rewards, given by the update rule,

$Q_{t+1}(s_t, a_t) = (1 - \alpha_t)\, Q_t(s_t, a_t) + \alpha_t \big[\hat{r}_t + \gamma \max_{b \in \mathcal{A}} Q_t(s_{t+1}, b)\big], \qquad (3)$

converges w.p.1 to the optimal Q-function as long as $\sum_t \alpha_t = \infty$ and $\sum_t \alpha_t^2 < \infty$.

Note that the term on the right-hand side of Eqn. (3) uses the surrogate reward estimated via Eqn. (1) or Eqn. (2). Theorem 1 states that agents will converge to the optimal policy w.p.1 when replacing the rewards with surrogate rewards, despite the noise in the observed rewards. This result is not surprising: though the surrogate rewards introduce larger variance, their unbiasedness grants us convergence. In other words, the perturbation of the reward does not affect the convergence guarantee of Q-Learning with surrogate rewards.
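
To make the update rule in Eqn. (3) concrete, here is a minimal tabular Q-Learning sketch with the observed reward replaced by its surrogate (illustrative code only; the environment interface, hyperparameters and function name are assumptions, not the paper's implementation):

import numpy as np

def q_learning_with_surrogate(env, C, reward_levels, num_episodes=500,
                              alpha=0.1, gamma=0.99, epsilon=0.1, seed=0):
    """Tabular Q-Learning with the observed noisy reward replaced by the unbiased
    surrogate (Eqn. (3) update with Eqn. (2) rewards). Assumes a classic Gym-style
    env with discrete observation/action spaces and the old reset/step interface."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    levels = np.asarray(reward_levels, dtype=float)
    R_hat = np.linalg.inv(C) @ levels          # surrogate value per observed level

    for _ in range(num_episodes):
        s, done = env.reset(), False
        while not done:
            a = env.action_space.sample() if rng.random() < epsilon else int(np.argmax(Q[s]))
            s_next, r_tilde, done, _ = env.step(a)
            k = int(np.argmin(np.abs(levels - r_tilde)))   # index of the observed reward level
            r_hat = R_hat[k]                               # unbiased surrogate reward
            target = r_hat + gamma * np.max(Q[s_next]) * (not done)
            Q[s, a] += alpha * (target - Q[s, a])          # Eqn. (3)
            s = s_next
    return Q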

Sample complexity

To establish our sample complexity results, we first introduce a generative model following previous literature [18, 19, 17]. This is a practical MDP setting to simplify the analysis.

Definition 1.

A generative model $G(\mathcal{M})$ for an MDP $\mathcal{M}$ is a sampling model which takes a state-action pair $(s_t, a_t)$ as input, and outputs the corresponding reward $r(s_t, a_t)$ and the next state $s_{t+1}$ randomly with probability $\mathbb{P}_a(s_{t+1} \mid s_t)$.

Exact value iteration is impractical if the agents follow the generative model above exactly [15]. Consequently, we introduce a Phased Q-Learning algorithm, similar to the ones presented in [15, 18], for the convenience of proving our sample complexity results. We briefly outline Phased Q-Learning as follows; the complete description (Algorithm 2) can be found in Appendix A.

Definition 2.

The Phased Q-Learning algorithm takes $m$ samples per phase by calling the generative model $G(\tilde{\mathcal{M}})$. It uses the collected samples to estimate the transition probability $\mathbb{P}$ and then updates the estimated value function once per phase. Calling the generative model $G(\tilde{\mathcal{M}})$ means that surrogate rewards are returned and used to update the value function.

The sample complexity of Phased Q-Learning is given as follows:

Theorem 2.

(Upper Bound) Let $\hat{r}$ be the bounded surrogate reward, and let $C$ be an invertible reward confusion matrix with $\det(C)$ denoting its determinant. For an appropriate choice of $m$, the Phased Q-Learning algorithm calls the generative model $G(\tilde{\mathcal{M}})$ $O\big(\frac{|\mathcal{S}||\mathcal{A}|T}{\epsilon^2(1-\gamma)^2\det(C)^2}\log\frac{|\mathcal{S}||\mathcal{A}|T}{\delta}\big)$ times in $T$ epochs, and returns a policy $\pi$ such that for all states $s$, $|V^{\pi}(s) - V^{*}(s)| \le \epsilon$, w.p. at least $1 - \delta$.

Theorem 2 states that, to guarantee convergence to the optimal policy, the number of samples needed is no more than $O(1/\det(C)^2)$ times the number needed when the RL agent observes the true rewards perfectly. This additional constant is the price we pay for the noise present in our learning environment. When the noise level is high, $\det(C)$ is small and we expect a much larger factor; otherwise, in a low-noise regime, Q-Learning can be very efficient with surrogate rewards [19]. Note that Theorem 2 gives the upper bound in the discounted MDP setting; a similar upper bound holds for the undiscounted setting ($\gamma = 1$). This result is not surprising, as Phased Q-Learning helps smooth out the noise in rewards across consecutive steps. We will experimentally test how the bias removal step performs without explicit phases.

While the surrogate reward guarantees unbiasedness, we pay a price in increased variance at each learning step, which in turn delays convergence (as also evidenced in the sample complexity bound). It can be verified that the variance of the surrogate reward is bounded when $C$ is invertible, and that it is always higher than the variance of the true reward. This is summarized in the following theorem:

Theorem 3.

Let $r \in [0, R_{\max}]$ be a bounded reward and let the confusion matrix $C$ be invertible. Then the variance of the surrogate reward $\hat{r}$ is bounded, and it is never smaller than the variance of the true reward: $\mathbb{V}(r) \le \mathbb{V}(\hat{r}) < \infty$.

To give an intuition for the bound, when the rewards are binary the variance of the surrogate reward scales with $1/(1 - e_+ - e_-)^2$. As $e_+ + e_- \rightarrow 1$, the variance becomes unbounded and the proposed estimator is no longer effective, nor will it be well defined.

Variance reduction

In practice, there is a bias-variance trade-off that can be tuned via a linear combination of the noisy reward $\tilde{r}$ and the surrogate reward $\hat{r}$, i.e., $r^{\text{proxy}} = \eta\,\tilde{r} + (1 - \eta)\,\hat{r}$, by choosing an appropriate $\eta$. Other variance reduction techniques for RL in noisy environments, for instance [37], can be combined with our proposed bias removal technique as well. We test them in the experiment section.
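
A sketch of this trade-off is given below (our own illustration; the weight name eta follows the notation above and is otherwise an assumption):

def proxy_reward(r_tilde, r_hat, eta=0.3):
    """Linear bias-variance trade-off: eta = 1 keeps the raw noisy reward (biased,
    low variance); eta = 0 keeps the unbiased surrogate (higher variance)."""
    return eta * r_tilde + (1.0 - eta) * r_hat

One possible design choice is to anneal eta toward 0 during training, once the confusion-matrix estimate (and hence the surrogate) becomes reliable.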

Estimation of Confusion Matrices

In the previous solutions, we assumed knowledge of the reward confusion matrices in order to compute the surrogate reward. This knowledge is often not available in practice. Estimating these confusion matrices is challenging without any ground truth reward information; however, we would like to remark that efficient algorithms have been developed to estimate confusion matrices in supervised learning settings [1, 26, 20, 11]. The idea in these algorithms is to dynamically refine the error rates based on aggregated rewards. Note that this approach is similar to the inference methods for aggregating crowdsourced labels in the literature [3, 16, 25]. We adapt this idea to our reinforcement learning setting, as detailed below.

The estimation procedure applies only to the case of deterministic rewards, not stochastic ones. The reason is that we use repeated observations to refine an estimate of the ground truth reward, which is then leveraged to estimate the confusion matrix. With uncertainty in the true reward, it is not possible to distinguish a clean case with a stochastic true reward from a perturbed case in which a deterministic true reward has noise added by a confusion matrix.

1:  Input: learning rate $\alpha$, discount factor $\gamma$, exploration rate $\epsilon$
2:  Output: value function $Q$, estimated confusion matrix $\tilde{C}$
3:  Initialize value function $Q(s, a)$ arbitrarily.
4:  while $Q$ is not converged do
5:     Initialize state $s \in \mathcal{S}$, observed reward set $\tilde{\Omega}$
6:     Set confusion matrix $\tilde{C}$ as the identity matrix
7:     while $s$ is not terminal do
8:         Choose $a$ from $s$ using the policy derived from $Q$
9:         Take action $a$, observe $s'$ and noisy reward $\tilde{r}$
10:        if collecting enough $\tilde{r}$ for all $(s, a)$ pairs then
11:            Get predicted true reward $\bar{r}$ using majority voting
12:            Re-estimate $\tilde{C}$ based on $\bar{r}$ and $\tilde{r}$ (using Eqn. 5)
13:        end if
14:        Obtain surrogate reward $\dot{r}$ (using Eqn. 2 with $\tilde{C}$)
15:        Update $Q$ using the surrogate reward $\dot{r}$
16:        $s \leftarrow s'$
17:     end while
18:  end while
19:  return $Q$ and $\tilde{C}$
Algorithm 1 Reward Robust RL (sketch)

At each training step, the RL agent collects the noisy reward and the current state-action pair into the observed set $\tilde{\Omega}$. Then, for each pair $(s, a)$ in $\tilde{\Omega}$, the agent predicts the true reward $\bar{r}(s, a)$ based on accumulated historical observations of rewards for the corresponding state-action pair via, e.g., averaging (majority voting). Finally, with the predicted true reward and the accuracy (error rate) for each state-action pair, the estimated reward confusion matrix $\tilde{C}$ is given by

$\bar{r}(s, a) = R_j, \quad \text{where } j = \operatorname*{arg\,max}_{k}\ \#\big[\tilde{r}(s, a) = R_k\big], \qquad (4)$
$\tilde{c}_{j,k} = \frac{\#\big[\bar{r}(s, a) = R_j,\ \tilde{r}(s, a) = R_k\big]}{\#\big[\bar{r}(s, a) = R_j\big]}, \qquad (5)$

where $\#[\cdot]$ above denotes the number of state-action pairs that satisfy the condition in the set of observed rewards (see Algorithms 1 and 3); $\bar{r}(s, a)$ and $\tilde{r}(s, a)$ denote the predicted true reward (using majority voting) and the observed reward when the state-action pair is $(s, a)$. We break potential ties in Eqn. (4) uniformly at random. The above procedure of updating $\tilde{C}$ continues indefinitely as more observations arrive. Our final definition of surrogate reward replaces the known reward confusion matrix $C$ in Eqn. (2) with our estimated one, $\tilde{C}$. We denote this estimated surrogate reward as $\dot{r}$.
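
The estimation step in Eqns. (4)-(5) can be sketched as follows (illustrative code with hypothetical names; it assumes deterministic true rewards per state-action pair, and uses one possible per-observation counting convention):

import numpy as np
from collections import Counter, defaultdict

def estimate_confusion_matrix(observations, reward_levels):
    """observations: dict mapping (state, action) -> list of observed noisy rewards.
    Returns an estimated confusion matrix C_tilde in the spirit of Eqns. (4)-(5):
    the majority vote per (s, a) is the predicted true reward, and each row is the
    empirical distribution of observed levels given that prediction."""
    levels = list(reward_levels)
    M = len(levels)
    counts = np.zeros((M, M))
    for (s, a), rewards in observations.items():
        r_bar = Counter(rewards).most_common(1)[0][0]      # Eqn. (4): majority voting
        j = levels.index(r_bar)
        for r_obs in rewards:
            counts[j, levels.index(r_obs)] += 1
    C_tilde = np.eye(M)                                    # identity rows for unseen levels
    for j in range(M):
        if counts[j].sum() > 0:
            C_tilde[j] = counts[j] / counts[j].sum()       # Eqn. (5): row-wise normalization
    return C_tilde

# Toy usage: the true reward +1 of one state-action pair is flipped to -1 twice out of ten.
obs = defaultdict(list)
obs[(0, 1)] = [1, 1, 1, -1, 1, 1, -1, 1, 1, 1]
print(estimate_confusion_matrix(obs, reward_levels=[-1, 1]))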

(a) Q-Learning
(b) CEM
(c) SARSA
(d) DQN
(e) DDQN
Figure 1: Learning curves of five RL algorithms on the CartPole game with true rewards ($r$), noisy rewards ($\tilde{r}$) and estimated surrogate rewards ($\dot{r}$). Note that the noise rates are unknown to the agents and each experiment is repeated 10 times with different random seeds. We plot the 10% to 90% percentile area with its mean highlighted. Full results are in Appendix D (Figure 6).

We present our framework (Reward Robust RL) in Algorithm 1. Note that the algorithm is rather generic: any existing RL algorithm can be plugged into our reward-robust version, with the only change being to replace the rewards with our estimated surrogate rewards.

Experimental Results

In this section, we conduct extensive experiments to evaluate the noisy reward robust RL mechanism with different games, under various noise settings. Due to the space limit, more experimental results can be found in Appendix D.

Experimental Setup

Environments and RL Algorithms

To fully test the performance under different environments, we evaluate the proposed robust reward RL method on two classic control games (CartPole, Pendulum) and seven Atari 2600 games (AirRaid, Alien, Carnival, MsPacman, Pong, Phoenix, Seaquest), which encompass a large variety of environments as well as rewards. Specifically, the rewards can be unary (CartPole), binary (most Atari games), multivariate (Pong) and even continuous (Pendulum). A set of state-of-the-art RL algorithms is evaluated while training under different amounts of noise (see Table 3). For each game and algorithm, unless otherwise stated, three policies are trained with different random initializations to decrease the variance.

(a) DDPG (symmetric)
(b) DDPG (rand-one)
(c) DDPG (rand-all)
(d) NAF (rand-all)
Figure 2: Learning curves of DDPG and NAF on the Pendulum game with true rewards ($r$), noisy rewards ($\tilde{r}$) and surrogate rewards ($\hat{r}$). Both symmetric and asymmetric noise settings are used in the experiments, and each experiment is repeated 3 times with different random seeds. Full results are in Appendix D (Figure 9).

Reward Post-Processing

For each game and RL algorithm, we test the performance of learning with true rewards, noisy rewards and surrogate rewards. Both symmetric and asymmetric noise settings with different noise levels are tested. For symmetric noise, the confusion matrices are symmetric. For asymmetric noise, two types of random noise are tested: 1) rand-one, where each reward level can only be perturbed into one other particular reward; 2) rand-all, where each reward can be perturbed into any other reward, via adding a random noise matrix. To measure the amount of noise with respect to a confusion matrix, we define the weight of noise $\omega$ in Appendix B; the larger $\omega$ is, the higher the noise rates are.
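
For concreteness, the three noise families could be generated as in the following sketch (our own illustration; the exact construction and the role of the weight omega in the paper's Appendix B may differ):

import numpy as np

def symmetric_noise(M, omega):
    """Keep each level w.p. 1 - omega and spread omega uniformly over the other levels."""
    C = np.full((M, M), omega / (M - 1))
    np.fill_diagonal(C, 1.0 - omega)
    return C

def rand_one_noise(M, omega, rng):
    """Each level is perturbed into exactly one other (randomly chosen) level."""
    C = np.eye(M) * (1.0 - omega)
    for j in range(M):
        C[j, rng.choice([k for k in range(M) if k != j])] = omega
    return C

def rand_all_noise(M, omega, rng):
    """Each level may be perturbed into any other level, with random proportions."""
    C = np.eye(M) * (1.0 - omega)
    for j in range(M):
        w = rng.random(M)
        w[j] = 0.0
        C[j] += omega * w / w.sum()
    return C

rng = np.random.default_rng(0)
print(symmetric_noise(3, 0.2))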

Robustness Evaluation

CartPole

The goal in CartPole is to prevent the pole from falling by controlling the cart's direction and velocity. The reward is +1 for every step taken, including the termination step. When the cart or pole deviates too much or the episode length exceeds 200, the episode terminates. Due to the unary reward in CartPole, a corrupted reward of -1 is added as the unexpected error. As a result, the reward space is extended to $\{-1, +1\}$. Five algorithms, Q-Learning [53], CEM [47], SARSA [46], DQN [50] and DDQN [52], are evaluated.

Noise Rate | Reward | Q-Learn | CEM | SARSA | DQN | DDQN | DDPG | NAF
 | $\tilde{r}$ | 170.0 | 98.1 | 165.2 | 187.2 | 187.8 | -1.03 | -4.48
 | $\hat{r}$ | 165.8 | 108.9 | 173.6 | 200.0 | 181.4 | -0.87 | -0.89
 | $\dot{r}$ | 181.9 | 99.3 | 171.5 | 200.0 | 185.6 | -0.90 | -1.13
 | $\tilde{r}$ | 134.9 | 28.8 | 144.4 | 173.4 | 168.6 | -1.23 | -4.52
 | $\hat{r}$ | 149.3 | 85.9 | 152.4 | 175.3 | 198.7 | -1.03 | -1.15
 | $\dot{r}$ | 161.1 | 82.2 | 159.6 | 186.7 | 200.0 | -1.05 | -1.36
 | $\tilde{r}$ | 56.6 | 19.2 | 12.6 | 17.2 | 11.8 | -8.76 | -7.35
 | $\hat{r}$ | 177.6 | 87.1 | 151.4 | 185.8 | 195.2 | -1.09 | -2.26
 | $\dot{r}$ | 172.1 | 83.0 | 174.4 | 189.3 | 191.3 | |
Table 1: Average scores of various RL algorithms on CartPole and Pendulum with noisy rewards ($\tilde{r}$) and surrogate rewards under known ($\hat{r}$) or estimated ($\dot{r}$) noise rates. Note that the last two columns, DDPG (rand-one) and NAF (rand-all), are on Pendulum, while the others are on CartPole.
Figure 3: Learning curves of PPO on the Pong-v4 game with true rewards ($r$), noisy rewards ($\tilde{r}$) and surrogate rewards ($\hat{r}$). The noise rate increases from 0.6 to 0.9, with a step of 0.1. Full results are in Appendix D (Figure 10).
Noise Rate | Reward | Lift (%) | Alien | Carnival | Phoenix | MsPacman | Seaquest
0.1 | $\tilde{r}$ | | 1835.1 | 1239.3 | 4609.0 | 1709.1 | 849.2
0.1 | $\hat{r}$ | 70.4% | 1737.0 | 3966.8 | 7586.4 | 2547.3 | 1610.6
0.1 | $\dot{r}$ | 84.6% | 2844.1 | 5515.0 | 5668.8 | 2294.5 | 2333.9
0.3 | $\tilde{r}$ | | 538.2 | 919.9 | 2600.3 | 1109.6 | 408.7
0.3 | $\hat{r}$ | 119.8% | 1668.6 | 4220.1 | 4171.6 | 1470.3 | 727.8
0.3 | $\dot{r}$ | 80.8% | 1542.9 | 4094.3 | 2589.1 | 1591.2 | 262.4
 | $\tilde{r}$ | | 495.2 | 380.3 | 126.5 | 491.6 | 0.0
 | $\hat{r}$ | 757.4% | 1805.9 | 4088.9 | 4970.4 | 1447.8 | 492.5
 | $\dot{r}$ | 648.9% | 1618.0 | 4529.2 | 2792.1 | 1916.7 | 328.5
Table 2: Average scores of PPO on five selected games with noisy rewards ($\tilde{r}$) and surrogate rewards under known ($\hat{r}$) or estimated ($\dot{r}$) noise rates.
(a) Q-Learning
(b) CEM
(c) SARSA
(d) DQN
(e) DDQN
Figure 4: Learning curves of five reward-robust RL algorithms on the CartPole game with true rewards ($r$), noisy rewards ($\tilde{r}$), sample-mean noisy rewards, estimated surrogate rewards ($\dot{r}$) and sample-mean estimated surrogate rewards. Full results are in Appendix D (Figure 8).

In Figure 1, we show that our estimator successfully produces meaningful surrogate rewards that adapt the underlying RL algorithms to the noisy settings, without any assumption on the true distribution of rewards. As the noise rate increases (from 0.1 to 0.9), the models trained with noisy rewards converge more slowly due to larger biases. However, we observe that the models (DQN and DDQN) always converge to the best score of 200 with the help of surrogate rewards.

In some circumstances (slight noise; see Figures 1(b) and 1(c)), the surrogate rewards even lead to faster convergence. This points to an interesting observation: learning with surrogate rewards sometimes even outperforms learning with the true rewards. We conjecture that adding noise and then removing the bias (or moderate noise) introduces implicit exploration. This may also explain why algorithms with estimated confusion matrices sometimes lead to better results than those with the known $C$ in some cases (Table 1).

Pendulum

The goal in Pendulum is to keep a frictionless pendulum standing up. Different from the CartPole setting, the rewards in Pendulum are continuous and non-positive: the closer the reward is to zero, the better the performance achieved. For simplicity, we first discretize the continuous reward into 17 intervals, with the value of each interval approximated by its maximum point. After the quantization step, the surrogate rewards can be estimated using the multi-outcome extension (Lemma 2).

We experiment with two popular algorithms, DDPG [22] and NAF [8], in this game. In Figure 2, both algorithms perform well with surrogate rewards under different amounts of noise. In most cases, the biases were corrected in the long run, even when the amount of noise is extensive. The quantitative scores on CartPole and Pendulum are given in Table 1, where the scores are averaged over the last 30 episodes. Our reward-robust method is able to achieve good scores consistently.

Atari

We validate our algorithm on seven Atari 2600 games using the state-of-the-art PPO algorithm [39]. The games are chosen to cover a variety of environments. The rewards in the Atari games are clipped. We leave the detailed settings to Appendix B.

Results for PPO on Pong-v4 in the symmetric noise setting are presented in Figure 3. More results on other Atari games and noise settings are given in Appendix D. Similar to previous results, our surrogate estimator performs consistently well and helps PPO converge to the optimal policy. Table 2 shows the average scores of PPO on five selected Atari games under different amounts of noise (symmetric and asymmetric). In particular, agents with surrogate rewards obtain significant improvements in average score, even at high noise rates. For the cases with unknown $C$ ($\dot{r}$ in Table 2), due to the large state space (image inputs) in confusion matrix estimation, we embed the frames, consider adjacent frames within a batch as the same state, and set the memory size for states to 1,000. Please refer to Appendix B for details.

Compatible with Variance Reduction Techniques

As illustrated in Theorem 3, our surrogate rewards introduce larger variance while providing unbiased estimation, which is likely to decrease the stability of RL algorithms. Apart from the linear combination idea (a linear trade-off), some variance reduction techniques from statistics (e.g., correlated sampling) can also be applied to our method. In particular, Romoff et al. [37] proposed to use a reward estimator to compensate for stochastically corrupted reward signals. It is worth noting that their method is designed for variance reduction under zero-mean noise, and is no longer effective in the more general perturbed-reward setting. However, it is possible to integrate their method with our reward-robust RL framework, because surrogate rewards provide an unbiasedness guarantee.

To verify this idea, we repeat the CartPole experiments with a variance reduction step added for the estimated surrogate rewards. Following Romoff et al. [37], we adopt the sample mean as a simple approximator during training. As shown in Figure 4, the models with only the variance reduction technique (red lines) suffer from large regret, and in general do not converge to the optimal policies. Nevertheless, the variance reduction step helps surrogate rewards (purple lines) achieve faster convergence or better performance in multiple cases. Similarly, Table 4 in Appendix C provides quantitative results showing that our surrogate reward benefits from variance reduction techniques ("ours + VRT"), especially when the noise rate is high.
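
A minimal sketch of the sample-mean smoothing used here, applied on top of the estimated surrogate rewards (our own illustration; the sliding-window form and the window length are assumptions, since the original sequence length is not recoverable from the text):

from collections import deque

class SampleMeanReward:
    """Running sample mean over a sliding window of recent surrogate rewards,
    used as a simple variance reduction step on top of the unbiased estimator."""
    def __init__(self, window=10):
        self.buffer = deque(maxlen=window)

    def __call__(self, r_hat):
        self.buffer.append(r_hat)
        return sum(self.buffer) / len(self.buffer)

# Usage during training: smooth = SampleMeanReward(window=10); r_train = smooth(r_hat)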

Conclusions

Improving the robustness of RL in settings with perturbed and noisy rewards is important, given that such noise is common when exploring real-world scenarios, e.g., due to sensor errors. In addition, in adversarial environments, perturbed rewards could be leveraged to mislead RL agents. Different robust RL algorithms have been proposed, but they either focus only on noisy observations or require strong assumptions on an unbiased noise distribution for the observed rewards. In this paper, we propose the first simple yet effective RL framework for dealing with biased noisy rewards. The convergence guarantee and finite sample complexity of Q-Learning (or its variants) with estimated surrogate rewards are provided. To validate the effectiveness of our approach, extensive experiments are conducted on OpenAI Gym, showing that surrogate rewards successfully rescue models from misleading rewards even at high noise rates. We believe this work will further shed light on exploring robust RL approaches under noisy reward observations in real-world environments.

Acknowledgement

This work was supported by National Science Foundation award CCF-1910100 and DARPA award ASED-00009970.

Appendix A Proofs

Proof of Lemma 1.

For simplicity, we shorthand $r(s_t, a_t, s_{t+1})$ as $r$, and let $r_+, r_-$ denote the two reward levels and $\hat{r}_+, \hat{r}_-$ the corresponding surrogate ones:

$\mathbb{E}_{\tilde{r} \mid r}[\hat{r}] = \mathbb{P}(\tilde{r} = r_+ \mid r)\,\hat{r}_+ + \mathbb{P}(\tilde{r} = r_- \mid r)\,\hat{r}_-. \qquad (6)$

When $r = r_+$, from the definition in Lemma 1:

$\mathbb{E}_{\tilde{r} \mid r = r_+}[\hat{r}] = (1 - e_+)\,\hat{r}_+ + e_+\,\hat{r}_-.$

Taking the definition of surrogate rewards in Eqn. (1) into Eqn. (6), we have

$\mathbb{E}_{\tilde{r} \mid r = r_+}[\hat{r}] = (1 - e_+)\,\frac{(1 - e_-)\,r_+ - e_+\,r_-}{1 - e_+ - e_-} + e_+\,\frac{(1 - e_+)\,r_- - e_-\,r_+}{1 - e_+ - e_-} = r_+.$

Similarly, when $r = r_-$, it also verifies $\mathbb{E}_{\tilde{r} \mid r = r_-}[\hat{r}] = r_-$. ∎

Proof of Lemma 2.

The idea of constructing an unbiased estimator is easily adapted to multi-outcome reward settings by writing out the conditions for the unbiasedness property (s.t. $\mathbb{E}_{\tilde{r} \mid r}[\hat{r}] = r$). For simplicity, we shorthand $\hat{r}(\tilde{r} = R_k)$ as $\hat{R}_k$ in the following proofs. Similar to Lemma 1, we need to solve the following set of equations to obtain $\hat{\mathbf{R}}$:

$\sum_{k=0}^{M-1} c_{j,k}\,\hat{R}_k = R_j, \quad j = 0, 1, \dots, M-1,$

where $\hat{R}_k$ denotes the value of the surrogate reward when the observed reward is $R_k$. Define $\mathbf{R} := [R_0, R_1, \dots, R_{M-1}]^{\top}$ and $\hat{\mathbf{R}} := [\hat{R}_0, \hat{R}_1, \dots, \hat{R}_{M-1}]^{\top}$; then the above equations are equivalent to $C \cdot \hat{\mathbf{R}} = \mathbf{R}$. If the confusion matrix $C$ is invertible, we obtain the surrogate reward: $\hat{\mathbf{R}} = C^{-1}\mathbf{R}$.

According to the above definition, for any true reward level $r = R_j$, we have

$\mathbb{E}_{\tilde{r} \mid r = R_j}[\hat{r}] = \sum_{k=0}^{M-1} c_{j,k}\,\hat{R}_k = R_j. \qquad ∎$

Furthermore, the probabilities of observing the surrogate rewards can be written as follows:

$\hat{p}_k = \sum_{j=0}^{M-1} p_j\, c_{j,k},$

where $\hat{p}_k$ and $p_j$ represent the probabilities of occurrence for the surrogate reward $\hat{R}_k$ and the true reward $R_j$, respectively.

Corollary 1.

Let $\hat{p}_k$ and $p_j$ denote the probabilities of occurrence for the surrogate reward $\hat{R}_k$ and the true reward $R_j$. Then the surrogate reward satisfies,

$\sum_{k=0}^{M-1}\hat{p}_k\,\hat{R}_k = \sum_{j=0}^{M-1} p_j\, R_j. \qquad (7)$
Proof of Corollary 1.

From Lemma 2, we have $\hat{\mathbf{R}} = C^{-1}\mathbf{R}$ and $\hat{\mathbf{p}} = C^{\top}\mathbf{p}$, where $\hat{\mathbf{p}} := [\hat{p}_0, \dots, \hat{p}_{M-1}]^{\top}$ and $\mathbf{p} := [p_0, \dots, p_{M-1}]^{\top}$.

Consequently,

$\sum_{k=0}^{M-1}\hat{p}_k\,\hat{R}_k = \hat{\mathbf{p}}^{\top}\hat{\mathbf{R}} = (C^{\top}\mathbf{p})^{\top} C^{-1}\mathbf{R} = \mathbf{p}^{\top} C\, C^{-1}\mathbf{R} = \mathbf{p}^{\top}\mathbf{R} = \sum_{j=0}^{M-1} p_j\, R_j. \qquad ∎$

To establish Theorem 1, we need an auxiliary result (Lemma 3) from stochastic approximation, which is widely adopted in convergence proofs for Q-Learning [14, 49].

Lemma 3.

The random process $\{\Delta_t\}$ taking values in $\mathbb{R}^n$ and defined as

$\Delta_{t+1}(x) = (1 - \alpha_t(x))\,\Delta_t(x) + \alpha_t(x)\,F_t(x)$

converges to zero w.p.1 under the following assumptions:

  • $0 \le \alpha_t \le 1$, $\sum_t \alpha_t(x) = \infty$ and $\sum_t \alpha_t^2(x) < \infty$;

  • $\|\mathbb{E}[F_t(x) \mid \mathcal{F}_t]\|_{W} \le \gamma\,\|\Delta_t\|_{W}$, with $\gamma < 1$;

  • $\mathrm{Var}[F_t(x) \mid \mathcal{F}_t] \le K\,(1 + \|\Delta_t\|_{W}^2)$, for some constant $K > 0$.

Here $\mathcal{F}_t$ stands for the past at step $t$; $\alpha_t(x)$ is allowed to depend on the past insofar as the above conditions remain valid. The notation $\|\cdot\|_{W}$ refers to some weighted maximum norm.

Proof of Lemma 3.

See previous literature [14, 49]. ∎

Proof of Theorem 1.

For simplicity, we abbreviate $s_t$, $s_{t+1}$, $a_t$, $r_t$, $\hat{r}_t$, $\alpha_t$, and $Q_t(s_t, a_t)$ as $s$, $s'$, $a$, $r$, $\hat{r}$, $\alpha$, and $Q$, respectively.

Subtracting the quantity $Q^{*}(s, a)$ from both sides of Eqn. (3), and letting $\Delta_t(s, a) := Q_t(s, a) - Q^{*}(s, a)$:

$\Delta_{t+1}(s, a) = (1 - \alpha)\,\Delta_t(s, a) + \alpha\big[\hat{r} + \gamma \max_{b \in \mathcal{A}} Q_t(s', b) - Q^{*}(s, a)\big].$

Let $F_t(s, a) := \hat{r} + \gamma \max_{b \in \mathcal{A}} Q_t(s', b) - Q^{*}(s, a)$.

In consequence, using the unbiasedness of the surrogate reward ($\mathbb{E}_{\tilde{r} \mid r}[\hat{r}] = r$),

$\mathbb{E}[F_t(s, a) \mid \mathcal{F}_t] = \sum_{s' \in \mathcal{S}} \mathbb{P}_a(s' \mid s)\big[r + \gamma \max_{b \in \mathcal{A}} Q_t(s', b) - Q^{*}(s, a)\big] = (\mathbf{H} Q_t)(s, a) - Q^{*}(s, a),$

where $\mathbf{H}$ denotes the Bellman optimality operator.

Finally, since $\mathbf{H}$ is a $\gamma$-contraction and $Q^{*} = \mathbf{H} Q^{*}$,

$\|\mathbb{E}[F_t(s, a) \mid \mathcal{F}_t]\|_{\infty} \le \gamma\,\|Q_t - Q^{*}\|_{\infty} = \gamma\,\|\Delta_t\|_{\infty}.$

Because $\hat{r}$ is bounded, it can be clearly verified that

$\mathrm{Var}[F_t(s, a) \mid \mathcal{F}_t] \le K\,(1 + \|\Delta_t\|_{W}^2)$

for some constant $K$. Then, due to Lemma 3, $\Delta_t$ converges to zero w.p.1, i.e., $Q_t(s, a)$ converges to $Q^{*}(s, a)$. ∎

The procedure of Phased -Learning is described as Algorithm 2:

  Input: $G(\tilde{\mathcal{M}})$: generative model of $\tilde{\mathcal{M}}$, $T$: number of iterations.
  Output: $\hat{V}$: value function, $\hat{\pi}$: policy function.
  Set $\hat{V}_0(s) = 0$ for all $s \in \mathcal{S}$
  for $t = 0, 1, \dots, T-1$ do
     Call $G(\tilde{\mathcal{M}})$ $m$ times for each state-action pair $(s, a)$.
     Set $\hat{\mathbb{P}}_a(s' \mid s) = \frac{\#[(s, a) \rightarrow s']}{m}$ and $\hat{V}_{t+1}(s) = \max_{a \in \mathcal{A}}\big[\frac{1}{m}\sum_{i=1}^{m}\hat{r}_i(s, a) + \gamma \sum_{s'}\hat{\mathbb{P}}_a(s' \mid s)\,\hat{V}_t(s')\big]$
  end for
  return $\hat{V}_T$ and $\hat{\pi}(s) = \operatorname*{arg\,max}_{a \in \mathcal{A}}\big[\frac{1}{m}\sum_{i=1}^{m}\hat{r}_i(s, a) + \gamma \sum_{s'}\hat{\mathbb{P}}_a(s' \mid s)\,\hat{V}_T(s')\big]$
Algorithm 2 Phased Q-Learning

Note that here $\hat{\mathbb{P}}_a(s' \mid s)$ is the estimated transition probability, which is different from $\hat{p}_k$ in Eqn. (7).

To obtain the sample complexity results, the range of our surrogate reward needs to be known. Assuming the true reward is bounded in $[0, R_{\max}]$, Lemma 4 below states that the surrogate reward is also bounded when the confusion matrices are invertible:

Lemma 4.

Let $r \in [0, R_{\max}]$ be bounded, where $R_{\max}$ is a constant; suppose $C$, the confusion matrix, is invertible with its determinant denoted as $\det(C)$. Then the surrogate reward satisfies

$|\hat{r}| \le \frac{M!\, R_{\max}}{|\det(C)|}. \qquad (8)$
Proof of Lemma 4.

From Eqn. (2), we have,

$\hat{\mathbf{R}} = C^{-1}\mathbf{R} = \frac{1}{\det(C)}\,\mathrm{adj}(C)\,\mathbf{R},$

where $\mathrm{adj}(C)$ is the adjugate matrix of $C$ and $\det(C)$ is the determinant of $C$. It is known from linear algebra that,

$\mathrm{adj}(C)_{k,j} = (-1)^{k+j}\, M_{j,k},$

where $M_{j,k}$ is the determinant of the matrix that results from deleting row $j$ and column $k$ of $C$. Therefore, $M_{j,k}$ is also bounded:

$|M_{j,k}| = \Big|\sum_{\sigma}\mathrm{sgn}(\sigma)\prod_{i} c_{i,\sigma(i)}\Big| \le \sum_{\sigma}\prod_{i}|c_{i,\sigma(i)}| \le (M-1)!,$

where the sum is computed over all permutations $\sigma$ of the set $\{0, 1, \dots, M-2\}$; $c_{i,\sigma(i)}$ is the corresponding element of the submatrix; $\mathrm{sgn}(\sigma)$ returns a value that is $+1$ whenever the reordering given by $\sigma$ can be achieved by successively interchanging two entries an even number of times, and $-1$ whenever it cannot.

Consequently,

$|\hat{R}_k| = \frac{1}{|\det(C)|}\Big|\sum_{j}\mathrm{adj}(C)_{k,j}\, R_j\Big| \le \frac{M\,(M-1)!\, R_{\max}}{|\det(C)|} = \frac{M!\, R_{\max}}{|\det(C)|}. \qquad ∎$

Proof of Theorem 2.

From Hoeffding's inequality, we obtain:

$\mathbb{P}\Big(\Big|\frac{1}{m}\sum_{i=1}^{m} r_i - \mathbb{E}[r]\Big| \ge \epsilon_1\Big) \le 2\exp\Big(-\frac{2 m\,\epsilon_1^2}{R_{\max}^2}\Big),$

because $r$ is bounded within $[0, R_{\max}]$. In the same way, $\hat{r}$ is bounded by $\frac{M!\, R_{\max}}{|\det(C)|}$ from Lemma 4. We then have,