Modern deep learning methods provide an effective means to learn good representations. However, is a good representation itself sufficient for sample-efficient reinforcement learning? This question is largely unexplored, and the extant body of literature mainly focuses on conditions which permit efficient reinforcement learning, with little understanding of the conditions that are necessary. This work provides strong negative results for reinforcement learning methods with function approximation for which a good representation (feature extractor) is known to the agent, focusing on natural representational conditions relevant to value-based learning and policy-based learning. For value-based learning, we show that even if the agent has a highly accurate linear representation, the agent still needs to sample exponentially many trajectories in order to find a near-optimal policy. For policy-based learning, we show that even if the agent's linear representation is capable of perfectly representing the optimal policy, the agent still needs to sample exponentially many trajectories in order to find a near-optimal policy.
These lower bounds highlight the fact that having a good (value-based or policy-based) representation in and of itself is insufficient for efficient reinforcement learning. In particular, these results provide new insights into why the existing provably efficient reinforcement learning methods rely on further assumptions, which are often model-based in nature. Additionally, our lower bounds imply exponential separations in the sample complexity between 1) value-based learning with a perfect representation and value-based learning with a good-but-not-perfect representation, 2) value-based learning and policy-based learning, 3) policy-based learning and supervised learning, and 4) reinforcement learning and imitation learning.
Modern reinforcement learning (RL) problems are often challenging due to the huge state space. To tackle this challenge, function approximation schemes are often employed to provide a compact representation, so that reinforcement learning can generalize across states. A common paradigm is to first use a feature extractor to transform the raw input to features (a succinct representation) and then apply a linear predictor on top of the features. Traditionally, the feature extractor is often handcrafted (sutton2018reinforcement), while more modern methods often train a deep neural network to extract features. The hope of this paradigm is that, if there exists a good low dimensional (linear) representation, then efficient reinforcement learning is possible.
Empirically, combining various RL function approximation algorithms with neural networks for feature extraction has led to tremendous successes on various tasks (mnih2015human; schulman2015trust; schulman2017proximal). A major problem, however, is that these methods often require a large number of samples to learn a good policy. For example, deep Q-networks require millions of samples to solve certain Atari games (mnih2015human). Here, one may wonder if there are fundamental statistical limitations on such methods, and, if so, under what conditions it would be possible to efficiently learn a good policy.
In the supervised learning context, it is well known that empirical risk minimization is a statistically efficient method when using a low-complexity hypothesis space (shalev2014understanding), e.g., a hypothesis space with bounded VC dimension. For example, a polynomial number of samples suffices for learning a near-optimal d-dimensional linear classifier, even in the agnostic setting (here we only study the sample complexity and ignore the computational complexity). In contrast, in the more challenging RL setting, we seek to understand if efficient learning is possible (say, from a sample complexity perspective) when we have access to an accurate (and compact) parametric representation, e.g., our policy class contains a near-optimal policy or our value function hypothesis class accurately approximates the true value functions. In particular, this work focuses on the following question:
Is a good representation sufficient for sample-efficient reinforcement learning?
This question is largely unexplored: the extant body of literature mainly focuses on conditions which are sufficient for efficient reinforcement learning, and there is little understanding of which conditions are necessary. The challenge in reinforcement learning is that it is not evident how agents can leverage the given representation to efficiently find a near-optimal policy, for reasons related to the exploration-exploitation trade-off; there is no direct analogue of empirical risk minimization in the reinforcement learning context.
Many recent works have provided polynomial upper bounds under various sufficient conditions, and in what follows we list a few examples. For value-based learning, the work of wen2013efficient showed that for deterministic systems (MDPs where both the reward and the transition are deterministic), if the optimal Q-function can be perfectly predicted by linear functions of the given features, then the agent can learn the optimal policy exactly with a polynomial number of samples. Recent work (jiang2017contextual) further showed that if the Bellman rank, a certain complexity measure, is bounded, then the agent can learn a near-optimal policy efficiently. For policy-based learning, agarwal2019optimality gave polynomial upper bounds which depend on a parameter that measures the difference between the initial distribution and the distribution induced by the optimal policy.
Our contributions. This work gives, perhaps surprisingly, strong negative results for this question. The main results are sample complexity lower bounds that are exponential in the planning horizon for value-based and policy-based algorithms with given good representations. (Our results extend easily to infinite-horizon MDPs with discount factors by replacing the planning horizon H with 1/(1 − γ), where γ is the discount factor; we omit the discussion of discounted MDPs for simplicity.) A summary of previous upper bounds along with our new lower bounds is provided in Table 1. These lower bounds include:
For value-based learning, we show that even if the Q-functions of all policies can be approximated, in a worst-case sense, by linear functions of the given representation with approximation error δ = Ω(√(H/d)), where d is the dimension of the representation and H is the planning horizon, the agent still needs to sample an exponential number of trajectories to find a near-optimal policy.
For policy-based learning, we show that even if the optimal policy can be perfectly predicted by a linear function of the given representation with a strictly positive margin, the agent still requires an exponential number of trajectories to find a near-optimal policy.
These lower bounds hold even in deterministic systems and even if the agent knows the transition model. Furthermore, these negative results also apply to the case where Q*, the optimal state-action value function, can be accurately approximated by a linear function. Since the class of linear functions is a strict subset of many more complicated function classes, including neural networks in particular, our negative results imply lower bounds for these more complex function classes as well.
Our results highlight a few conceptual insights:
Efficient RL may require the representation to encode model information (transition and reward). Under (implicit) model-based assumptions, there exist upper bounds that can tolerate approximation error (jiang2017contextual; yang2019sample; sun2019model).
Since our lower bounds apply even when the agent knows the transition model, the hardness is not due to the difficulty of exploration in the standard sense. The unknown reward function is sufficient to make the problem exponentially difficult.
Our lower bounds are not due to the agent’s inability to perform efficient supervised learning, since our assumptions do admit polynomial sample complexity upper bounds if the data distribution is fixed.
Our lower bounds are not pathological in nature and suggest that these concerns may arise in practice. In a precise sense, almost all feature extractors induce a hard MDP instance in our construction (see Section 4.3).
Instead, one interpretation is that the hardness is due to a distribution mismatch in the following sense: the agent does not know which distribution to use for minimizing a (supervised) learning error (see kakade2003sample for discussion), and even a known transition model is not information-theoretically sufficient to reduce the sample complexity.
Furthermore, our work implies several exponential separations on the sample complexity between: 1) value-based learning with a perfect representation and value-based learning with a good-but-not-perfect representation, 2) value-based learning and policy-based learning, 3) policy-based learning and supervised learning and 4) reinforcement learning and imitation learning. We provide more details in Section 6.
| Query Oracle | RL | Generative Model | Known Transition |
| --- | --- | --- | --- |
| **Previous Upper Bounds** | | | |
| Exact Linear Q* + DetMDP (wen2013efficient) | ✓ | ✓ | ✓ |
| Exact Linear Q* + Bellman Rank (jiang2017contextual) | ✓ | ✓ | ✓ |
| Exact Linear Q* + Low Var + Gap (du2019provably) | ✓ | ✓ | ✓ |
| Exact Linear Q* + Gap (Open Problem / Theorem B.1) | ? | ✓ | ✓ |
| Exact Linear Q^π for all π (Open Problem / Theorem C.1) | ? | ✓ | ✓ |
| **Lower Bounds (this work)** | | | |
| Approx. Linear Q* + DetMDP (Theorem 4.1) | ✗ | ✗ | ✗ |
| Approx. Linear Q^π for all π + DetMDP (Theorem 4.1) | ✗ | ✗ | ✗ |
| Exact Linear π* + Margin + Gap + DetMDP (Theorem 4.2) | ✗ | ✗ | ✗ |
| Exact Linear Q* (Open Problem) | ? | ? | ? |
2 Related Work
A summary of previous upper bounds, together with the lower bounds proved in this work, is provided in Table 1. Some key assumptions are formally stated in Section 3 and Section 4. Our lower bounds highlight that classical complexity measures in supervised learning, including small approximation error and margin, and standard assumptions in reinforcement learning, including a strictly positive optimality gap and deterministic systems, are not enough for efficient RL with function approximation. We need additional assumptions, e.g., ones used in previous upper bounds, for efficient RL.
2.1 Previous Lower Bounds
Existing exponential lower bounds, to our knowledge, construct unstructured MDPs with an exponentially large state space and reduce a bandit problem with exponentially many arms to an MDP (krishnamurthy2016pac; sun2017deeply). However, these lower bounds do not immediately apply to MDPs whose transition models, value functions, or policies can be approximated by natural function classes, e.g., linear functions, neural networks, etc. The current work gives the first set of lower bounds for RL with linear function approximation (which therefore also hold for super-classes of linear functions, such as neural networks).
2.2 Previous Upper Bounds
We divide previous algorithms (with provable guarantees) into three classes: those that utilize uncertainty-based bonuses (e.g., UCB variants or Thompson sampling variants); approximate dynamic programming variants; and direct policy search-based methods (such as Conservative Policy Iteration (CPI) (kakade2003sample)) or policy gradient methods. The first class of methods includes those based on witness rank, Bellman rank, and the Eluder dimension, while the latter two classes of algorithms make assumptions either on concentrability coefficients or on distribution mismatch coefficients (see agarwal2019optimality; Scherrer:API for discussions).
Uncertainty bonus-based algorithms. We now discuss existing theoretical results on value-based learning with function approximation. wen2013efficient showed that in deterministic systems, if the optimal Q-function is within a pre-specified function class which has bounded Eluder dimension (of which the class of linear functions is a special case), then the agent can learn the optimal policy using a polynomial number of samples. This result was recently generalized by du2019provably to handle stochastic rewards and low-variance transitions, at the cost of requiring a strictly positive optimality gap. As listed in Table 1, it is an open problem whether the condition that the optimal Q-function is linear is by itself sufficient for efficient RL.
li2011knows proposed a Q-learning algorithm which requires a Knows-What-It-Knows (KWIK) oracle. However, it is in general unknown how to implement such an oracle in practice. jiang2017contextual proposed the concept of Bellman rank to characterize the sample complexity of value-based learning methods and gave an algorithm that has polynomial sample complexity in terms of the Bellman rank, though the proposed algorithm is not computationally efficient. Bellman rank is bounded for a wide range of problems, including MDPs with a small number of hidden states, linear MDPs, LQR, etc. Later work gave computationally efficient algorithms for certain special cases (dann2018polynomial; du2019provably; yang2019sample; jin2019provably). Recently, witness rank, a generalization of Bellman rank to model-based methods, was studied in sun2019model.
Approximate dynamic programming-based algorithms. We now discuss approximate dynamic programming-based results characterized in terms of the concentrability coefficient. While classical approximate dynamic programming results typically require ℓ∞-bounded errors, the notion of concentrability (originally due to munos2005error) permits sharper bounds in terms of average-case function approximation error, provided that the concentrability coefficient is bounded (e.g., see munos2005error; szepesvari2005finite; antos2008learning; geist2019theory). Under the assumption that this problem-dependent parameter is bounded, munos2005error; szepesvari2005finite and antos2008learning provided sample complexity and error bounds for approximate dynamic programming methods when there is a data collection policy (under which value-function fitting occurs) that induces a finite concentrability coefficient. The assumption that the concentrability coefficient is finite is in fact quite limiting. See chen2019information for a more detailed discussion of this quantity.
Direct policy search-based algorithms. Stronger guarantees than those for approximate dynamic programming-based algorithms can be obtained with direct policy search-based methods, where instead of having a bounded concentrability coefficient, one only needs a bounded distribution mismatch coefficient. The latter assumption requires the agent to have access to a "good" initial state distribution (e.g., a measure which has coverage over where an optimal policy tends to visit); note that this assumption does not place restrictions on the class of MDPs. There are two classes of algorithms that fall into this category. First, there is Conservative Policy Iteration (kakade2002approximately), along with Policy Search by Dynamic Programming (PSDP) (NIPS2003_2378) and other boosting-style policy search-based methods (scherrer2014local; Scherrer:API), which have guarantees in terms of a bounded distribution mismatch ratio. Second, more recently, agarwal2019optimality showed that policy gradient styles of algorithms also have comparable guarantees; those results also directly imply the learnability results for the "Exact Linear Q^π for all π" row in Table 1. Similar guarantees can be obtained with CPI (and its variants) under comparable assumptions.
Throughout this paper, for a given integer n, we use [n] to denote the set {1, 2, …, n}.
3.1 Episodic Reinforcement Learning
Let M = (S, A, P, R, H) be a Markov Decision Process (MDP), where S is the state space, A is the action space whose size is bounded by a constant, P : S × A → Δ(S) is the transition function, which takes a state-action pair and returns a distribution over states, R : S × A → Δ(ℝ) is the reward distribution, and H is the planning horizon. Without loss of generality, we assume a fixed initial state s_1. (Some papers assume the initial state is sampled from a distribution μ; this is equivalent to assuming a fixed initial state s_0 with P(s_0, a) = μ for all a ∈ A, in which case our state s_1 plays the role of the initial states in their assumption.) A policy π : S → Δ(A) prescribes a distribution over actions for each state. A policy π induces a (random) trajectory s_1, a_1, r_1, s_2, a_2, r_2, …, s_H, a_H, r_H, where a_1 ∼ π(s_1), r_1 ∼ R(s_1, a_1), s_2 ∼ P(s_1, a_1), a_2 ∼ π(s_2), etc. To streamline our analysis, for each h ∈ [H], we use S_h to denote the set of states at level h, and we assume the sets S_h do not intersect with each other. We also assume Σ_{h=1}^{H} r_h ∈ [0, 1] almost surely. Our goal is to find a policy π that maximizes the expected total reward E[Σ_{h=1}^{H} r_h]. We use π* to denote the optimal policy. We say a policy π is ε-optimal if its expected total reward is at least that of π* minus ε.
In this paper we prove lower bounds for deterministic systems, i.e., MDPs with a deterministic transition function P : S × A → S and a deterministic reward function R : S × A → [0, 1]. In this setting, P and R can be regarded as functions instead of distributions. Since deterministic systems are special cases of general stochastic MDPs, lower bounds proved in this paper still hold for more general MDPs.
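To make the episodic setting above concrete, the following minimal Python sketch (our own illustration, not part of the paper; all class and method names are hypothetical) implements a finite-horizon deterministic MDP and a policy rollout:

```python
# Hypothetical sketch of the episodic deterministic-MDP formalism from this
# section; the class and method names are our own, not from the paper.
from typing import Callable, Dict, Tuple

State, Action = str, int

class DeterministicMDP:
    def __init__(self,
                 transition: Dict[Tuple[State, Action], State],
                 reward: Dict[Tuple[State, Action], float],
                 init_state: State,
                 horizon: int):
        self.P, self.R = transition, reward
        self.s1, self.H = init_state, horizon

    def rollout(self, policy: Callable[[State, int], Action]) -> float:
        """Play one episode and return the total reward sum_h r_h."""
        s, total = self.s1, 0.0
        for h in range(1, self.H + 1):
            a = policy(s, h)
            total += self.R.get((s, a), 0.0)
            s = self.P.get((s, a), s)  # stay put if transition undefined
        return total

# A 2-level toy chain: only the path s1 -> s2b, then action 0, pays off.
mdp = DeterministicMDP(
    transition={("s1", 0): "s2a", ("s1", 1): "s2b"},
    reward={("s1", 1): 0.0, ("s2b", 0): 1.0},
    init_state="s1", horizon=2)
print(mdp.rollout(lambda s, h: 1 if s == "s1" else 0))  # -> 1.0
```

The rollout returns the episodic return Σ_h r_h, which is exactly the quantity a learner tries to maximize in this section.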
3.2 Q-function, V-function and Optimality Gap
An important concept in RL is the Q-function. Given a policy π, a level h ∈ [H] and a state-action pair (s, a) ∈ S_h × A, the Q-function is defined as Q^π_h(s, a) = E[Σ_{h'=h}^{H} r_{h'} | s_h = s, a_h = a, π]. For simplicity, we denote Q*_h = Q^{π*}_h. It will also be useful to define the value function of a given state s ∈ S_h as V^π_h(s) = E[Σ_{h'=h}^{H} r_{h'} | s_h = s, π]. For simplicity, we denote V*_h = V^{π*}_h. Throughout the paper, for the Q-functions Q^π and Q* and the value functions V^π and V*, we may omit h from the subscript when it is clear from the context.
In addition to these definitions, we list below an important assumption, the optimality gap assumption, which is widely used in the reinforcement learning and bandit literature. To state the assumption, we first define the gap function as gap_h(s, a) = V*_h(s) − Q*_h(s, a). Now we formally state the assumption.
Assumption 3.1 (Optimality Gap).
There exists ρ > 0 such that gap_h(s, a) ≥ ρ for all (h, s, a) with gap_h(s, a) > 0.
Here, ρ is the smallest reward-to-go difference between the best set of actions and the rest. Recently, du2019qlearning gave a provably efficient Q-learning algorithm based on this assumption, and simchowitz2019non showed that under this condition the agent only incurs logarithmic regret in the tabular setting.
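As a concrete illustration of these definitions (our own sketch; the toy MDP below is invented), Q*_h, V*_h and gap_h(s, a) = V*_h(s) − Q*_h(s, a) can be computed by backward induction:

```python
# Backward-induction sketch for Q*_h, V*_h and gap_h(s,a) = V*_h(s) - Q*_h(s,a)
# on a small deterministic MDP; the data layout here is our own illustration.
# states_per_level[h] lists the states at level h (levels are 1-indexed).
states_per_level = {1: ["s1"], 2: ["L", "R"]}
actions = [0, 1]
P = {("s1", 0): "L", ("s1", 1): "R"}          # level-1 transitions
R = {("s1", 0): 0.0, ("s1", 1): 0.0,
     ("L", 0): 0.0, ("L", 1): 0.25, ("R", 0): 1.0, ("R", 1): 0.0}
H = 2

Q, V = {}, {}
for h in range(H, 0, -1):                     # dynamic programming, last level first
    for s in states_per_level[h]:
        for a in actions:
            nxt = V[(h + 1, P[(s, a)])] if h < H else 0.0
            Q[(h, s, a)] = R[(s, a)] + nxt
        V[(h, s)] = max(Q[(h, s, a)] for a in actions)

gap = {(h, s, a): V[(h, s)] - Q[(h, s, a)] for (h, s, a) in Q}
print(V[(1, "s1")])        # optimal value from the initial state -> 1.0
print(gap[(1, "s1", 0)])   # suboptimality of going left at the root -> 0.75
```

Here the smallest strictly positive gap over all (h, s, a) plays the role of ρ in Assumption 3.1.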
3.3 Query Models
Here we discuss three possible query oracles interacting with the MDP.
RL: The most basic and weakest query oracle for an MDP is the standard reinforcement learning oracle, where the agent can interact with the MDP only by choosing actions and observing the next state and the reward.
Generative Model: A stronger query model assumes the agent can transit to any state (kearns2002near; kakade2003sample; sidford2018near). This query model is available in certain robotic applications where one can control the robot to reach the target state.
Known Transition: The strongest query model we consider is one in which the agent can not only transit to any state but also knows the whole transition function. In this model, only the reward is unknown.
In this paper, we will prove lower bounds for the strongest Known Transition query oracle. Therefore, our lower bounds also apply to RL and Generative Model query oracles.
4 Main Results
In this section we formally present our lower bounds. We also discuss proof ideas in Section 4.3.
4.1 Lower Bound for Value-based Learning
We first present our lower bound for value-based learning. A common assumption is that the Q-function can be predicted well by a linear function of the given features (representation) (bertsekas1996neuro). Formally, the agent is given a feature extractor φ : S × A → ℝ^d, which can be hand-crafted or a pre-trained neural network, that transforms a state-action pair to a d-dimensional embedding. The following assumption states that the given feature extractor can be used to predict the Q-function, with approximation error at most δ, using a linear function.
Assumption 4.1 (Q* Realizability).
There exist θ_1, …, θ_H ∈ ℝ^d and δ ≥ 0 such that for any h ∈ [H] and any (s, a) ∈ S_h × A, |Q*_h(s, a) − ⟨θ_h, φ(s, a)⟩| ≤ δ.
Here δ is the approximation error, which indicates the quality of the representation. If δ = 0, then the Q*-function can be perfectly predicted by a linear function of φ. In general, δ becomes smaller as we increase the dimension d of φ, since a larger dimension usually gives more expressive power. When the feature extractor is strong enough, previous papers (chen2019information; farahmand2011regularization) assume that linear functions of φ can approximate the Q-function of any policy.
Assumption 4.2 (Value Completeness).
There exists δ ≥ 0 such that for any h ∈ [H] and any policy π, there exists θ^π_h ∈ ℝ^d such that for any (s, a) ∈ S_h × A, |Q^π_h(s, a) − ⟨θ^π_h, φ(s, a)⟩| ≤ δ.
In the theoretical reinforcement learning literature, Assumption 4.2 is often called the (approximate) policy completeness assumption. This assumption is crucial in proving polynomial sample complexity guarantee for value iteration type of algorithms (chen2019information; farahmand2011regularization).
The following theorem shows that when δ = Ω(√(H/d)), the agent needs to sample an exponential number of trajectories to find a near-optimal policy.
Theorem 4.1 (Exponential Lower Bound for Value-based Learning).
There exists a family of MDPs and a feature extractor φ that satisfy Assumption 4.2 with δ = Ω(√(H/d)), such that any algorithm that returns a 1/2-optimal policy with probability at least 0.9 needs to sample Ω(2^H) trajectories.
Note this lower bound also applies to MDPs that satisfy Assumption 4.1, since Assumption 4.2 is a strictly stronger assumption. We would like to emphasize that since the class of linear functions is a subclass of more complicated function classes, e.g., neural networks, our lower bound also holds for these function classes.
4.2 Lower Bound for Policy-based Learning
Next we present our lower bound for policy-based learning. This class of methods uses function approximation for the policy and uses optimization techniques, e.g., policy gradient, to find the optimal policy. In this paper, we focus on linear policies on top of a given representation. A linear policy is a policy of the form π(s) = argmax_{a ∈ A} ⟨θ, φ(s, a)⟩, where φ : S × A → ℝ^d is a given feature extractor and θ ∈ ℝ^d is the linear coefficient. Note that applying policy gradient to a softmax parameterization of the policy is indeed an attempt to find the optimal policy among linear policies.
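A minimal sketch of such a linear policy (our own illustration; the features and weights below are invented toy values):

```python
# Sketch of a linear policy pi(s) = argmax_a <theta, phi(s, a)>; the feature
# map and weight vector here are illustrative, not the paper's construction.
import numpy as np

def linear_policy(theta, phi, state, actions):
    """Greedy action of a linear policy on top of features phi(s, a)."""
    scores = [theta @ phi(state, a) for a in actions]
    return actions[int(np.argmax(scores))]

# Toy features: action a contributes the a-th standard basis vector of R^2.
phi = lambda s, a: np.eye(2)[a]
theta = np.array([0.2, 0.9])          # prefers action 1 in every state
print(linear_policy(theta, phi, "any_state", [0, 1]))  # -> 1
```

With these toy features the policy ignores the state entirely; the constructions later in the paper make the features state-dependent so that θ can encode which action is optimal where.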
Similar to value-based learning, a natural assumption for policy-based learning is that the optimal policy is realizable. (Unlike value-based learning, it is hard to define completeness for policy-based learning with function approximation, since not every policy has the above linear form.)
Assumption 4.3 (π* Realizability).
For any h ∈ [H], there exists θ_h ∈ ℝ^d that satisfies, for any s ∈ S_h, π*(s) = argmax_{a ∈ A} ⟨θ_h, φ(s, a)⟩.
Here we discuss another assumption. For learning a linear classifier in the supervised learning setting, one can reduce the sample complexity significantly if the optimal linear classifier has a margin.
Assumption 4.4 (π* Realizability + Margin).
We assume φ satisfies ‖φ(s, a)‖_2 = 1 for any (s, a). For any h ∈ [H], there exist θ_h ∈ ℝ^d with ‖θ_h‖_2 = 1 and Δ > 0 such that for any s ∈ S_h, there is a unique optimal action π*(s), and for any a ≠ π*(s), ⟨θ_h, φ(s, π*(s))⟩ ≥ ⟨θ_h, φ(s, a)⟩ + Δ.
Here we restrict the linear coefficients and features to have unit norm for normalization. Note that Assumption 4.4 is strictly stronger than Assumption 4.3. Now we present our result for linear policy.
Theorem 4.2 (Exponential Lower Bound for Policy-based Learning).
There exists a family of MDPs and a feature extractor φ that satisfy Assumption 4.4 (π* realizability with margin) and Assumption 3.1 (optimality gap), such that any algorithm that returns a 1/2-optimal policy with probability at least 0.9 needs to sample 2^{Ω(H)} trajectories.
Compared with Theorem 4.1, Theorem 4.2 is even more pessimistic, in the sense that even with a perfect representation with benign properties (gap and margin), the agent still needs an exponential number of samples. It also suggests that policy-based learning can be very different from supervised learning.
4.3 Proof Ideas
The binary tree hard instance.
All our lower bounds are proved via reductions from the following hard instance. In this instance, both the transition and the reward are deterministic, and there are two actions, a_1 and a_2. There are H levels of states, which form a full binary tree of depth H. Playing action a_1 transits a state to its left child, while playing action a_2 transits a state to its right child. There are 2^{H−1} states in level H, and thus 2^H − 1 states in total. Among all the states in level H, there is only one state with reward 1; for all other states in the MDP, the corresponding reward value is 0. Intuitively, to find a 1/2-optimal policy for such MDPs, the agent must enumerate all possible states in level H to find the state with reward 1. Doing so intrinsically induces a sample complexity of 2^{Ω(H)}. This intuition is formalized in Theorem 5.1 using Yao's minimax principle (yao1977probabilistic).
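The construction can be made concrete with a short simulation (our own sketch; the node numbering, with node i having children 2i and 2i + 1, matches the convention used later in Section 5.1):

```python
# The binary-tree hard instance sketched above: a full binary tree of depth H
# with a single rewarding leaf. Node i has children 2i and 2i + 1; everything
# else here is our own scaffolding.
import random

def build_tree_mdp(H, rewarding_leaf):
    first_leaf = 2 ** (H - 1)            # leaves are nodes 2^(H-1) .. 2^H - 1
    assert first_leaf <= rewarding_leaf < 2 ** H
    return {"H": H, "leaf": rewarding_leaf}

def run_episode(mdp, actions):
    """Follow H-1 actions (0 = left, 1 = right) from the root; reward at the leaf."""
    node = 1
    for a in actions:
        node = 2 * node + a
    return 1.0 if node == mdp["leaf"] else 0.0

H = 6
mdp = build_tree_mdp(H, rewarding_leaf=random.randrange(2 ** (H - 1), 2 ** H))
# Brute force: enumerating the leaves takes up to 2^(H-1) episodes.
episodes = 0
for leaf in range(2 ** (H - 1), 2 ** H):
    path = [(leaf >> k) & 1 for k in reversed(range(H - 1))]
    episodes += 1
    if run_episode(mdp, path) == 1.0:
        break
print(episodes <= 2 ** (H - 1))  # -> True
```

Any strategy, not just this brute-force sweep, must in the worst case try a constant fraction of the 2^{H−1} leaves, which is what the INDEX-QUERY reduction formalizes.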
Lower bound for value-based learning.
We now show how to construct a set of features so that Assumptions 4.1-4.2 hold. Our main idea is to utilize the following fact regarding the identity matrix: the ε-rank of the n × n identity matrix I_n is O(log n / ε²). Here for a matrix A, its ε-rank (a.k.a. approximate rank) is defined to be min{rank(B) : ‖A − B‖_∞ ≤ ε}, where we use ‖·‖_∞ to denote the entry-wise ℓ∞ norm of a matrix. The upper bound O(log n / ε²) was first proved in alon2009perturbed using the Johnson-Lindenstrauss Lemma (johnson1984extensions), and we also provide a proof in Lemma 5.1. The concept of ε-rank has wide applications in theoretical computer science (alon2009perturbed; barak2011rank; alon2013approximate; alon2014cover; chen2019classical), but to our knowledge, this is the first time that it appears in reinforcement learning.
This fact can be alternatively stated as follows: there exists Φ ∈ ℝ^{n×d} with d = O(log n / ε²) such that ‖ΦΦ^⊤ − I_n‖_∞ ≤ ε. We interpret each row of Φ as the feature of a state in the binary tree. By construction of Φ, the features of states in the binary tree have a nice property: (i) each feature vector has unit norm and (ii) different feature vectors are nearly orthogonal. Using this set of features, we can now show that Assumptions 4.1 and 4.2 hold. Here we prove that Assumption 4.1 holds as an example, and prove that the other assumptions also hold in the formal proof. To prove Assumption 4.1, we note that in the binary tree hard instance, for each level h, only a single state s satisfies V*_h(s) = 1, and all other states satisfy V*_h(s) = 0. We simply take θ_h to be the feature of the state with V* = 1. Since all feature vectors are nearly orthogonal, Assumption 4.1 holds.
Since the above fact regarding the ε-rank of the identity matrix can be proved by simply taking each row of Φ to be a random unit vector, our lower bound reveals another intriguing (yet pessimistic) aspect of Assumptions 4.1 and 4.2: for the binary tree instance, almost all feature extractors induce a hard MDP instance. This again suggests that a good representation itself may not necessarily lead to efficient RL and that additional assumptions (e.g., on the reward distribution) could be crucial.
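This random-unit-vector construction is easy to check numerically (the sizes and thresholds below are our own illustrative choices):

```python
# Empirical check of the epsilon-rank fact used above: n random unit vectors in
# d = O(log n / eps^2) dimensions have pairwise inner products of order
# sqrt(log n / d), so their Gram matrix is entry-wise close to the identity.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1024, 600                     # d on the order of (log n) / eps^2
Phi = rng.standard_normal((n, d))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)   # rows are unit vectors

G = Phi @ Phi.T                      # Gram matrix, should be near-identity
off_diag = np.abs(G - np.eye(n)).max()
print(off_diag < 0.3)                # -> True (rows are nearly orthogonal)
```

Because a generic random draw already works, "almost all" feature extractors of this dimension give rise to the hard instance, as claimed above.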
Lower bound for policy-based learning.
It is straightforward to construct a set of feature vectors for the binary tree instance so that Assumption 4.3 holds, even if d = 2. We set φ(s, a) to be e_1 if a = a_1 and e_2 if a = a_2. For each level h, for the unique state s in level h with V*_h(s) = 1, we set θ_h to be e_1 if π*(s) = a_1 and e_2 if π*(s) = a_2. With this construction, Assumption 4.3 holds.
To prove the lower bound under Assumption 4.4, we use a new reward function for states in level H in the binary tree instance above, so that there exists a unique optimal action for each state in the MDP. See Figure 1 for an example with H = 3 levels of states. Another nice property of the new reward function is that for all states we always have V*(s) ≥ 1/2. Now, we define 2^{H−1} different new MDPs as follows: for each state s in level H, we change its original reward (defined in Figure 1) to 2. An exponential sample complexity lower bound for these MDPs can be proved using the same argument as for the original binary tree hard instance, and we now show this set of MDPs satisfies Assumption 4.4. We first show in Lemma 5.2 that there exists a set V of unit vectors with |V| = 2^{Ω(d)}, so that for each v ∈ V, there exists a hyperplane that separates v and V ∖ {v}, and all vectors in V have distance at least Δ to the hyperplane. Equivalently, for each v ∈ V we can always define a linear function f_v so that f_v(v) ≥ Δ and f_v(u) ≤ −Δ for all u ∈ V ∖ {v}. This can be proved using standard lower bounds on the size of ε-nets. Now we simply use vectors in V as features of states. By construction of the reward function, for each level h, there can only be two possible cases for the optimal policy π*: either π*(s) = a_1 for all states s in level h, or π*(ŝ) = a_2 for a unique state ŝ and π*(s) = a_1 for all s ≠ ŝ. In both cases, we can easily define a linear function with margin to implement the optimal policy π*, and thus Assumption 4.4 holds. Notice that this proof critically relies on d = Ω(H), so that we can utilize the curse of dimensionality to construct a large set of vectors as features.
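The recursive reward construction with unique optimal actions can be sketched as follows (the exact decrement 2^{-(h+1)} per right turn is our own illustrative choice, consistent with the description above):

```python
# Sketch of the modified reward for the margin lower bound: assign V* = 1 at
# the root and, at each level h, give the right child a slightly smaller value,
# so every state has a unique optimal action (go left) and every leaf keeps
# V* >= 1/2. The exact decrements are our illustrative choice.
def leaf_rewards(H):
    values = {1: 1.0}                     # node 1 is the root; children are 2i, 2i+1
    for node in range(1, 2 ** (H - 1)):   # all internal nodes (levels 1..H-1)
        h = node.bit_length()             # level of this node (1-indexed)
        values[2 * node] = values[node]                        # left keeps the value
        values[2 * node + 1] = values[node] - 2.0 ** (-h - 1)  # right loses 2^-(h+1)
    return {leaf: values[leaf] for leaf in range(2 ** (H - 1), 2 ** H)}

r = leaf_rewards(H=4)
print(min(r.values()) > 0.5)              # -> True: all leaf values exceed 1/2
print(len(set(r.values())) == len(r))     # -> True: all leaf values are distinct
```

Changing one leaf's reward to 2 then flips the optimal action along the path to that leaf only, which produces the "a_1 everywhere, or a_2 at a unique state" dichotomy used above.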
5 Formal Proofs of Lower Bounds
In this section we present formal proofs of our lower bounds. We first introduce the INDEX-QUERY problem, which will be useful in our lower bound arguments.
Definition 5.1 (Index-Query).
In the INDQ_n problem, there is an underlying integer i* ∈ [n]. The algorithm sequentially (and adaptively) outputs guesses i_1, i_2, … and queries whether i_t = i*. The goal is to output i*, using as few queries as possible.
Definition 5.2 (δ-correct algorithms).
For a real number δ ∈ (0, 1), we say a randomized algorithm A is δ-correct for INDQ_n if for any underlying integer i* ∈ [n], with probability at least 1 − δ, A outputs i*.
The following theorem states the query complexity of INDQ_n for δ-correct algorithms; its proof is provided in Section A.1.
Theorem 5.1. Any δ-correct algorithm for INDQ_n requires at least (1 − δ)n queries in the worst case.
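A small numerical illustration of why on the order of (1 − δ)n queries are needed (our own sketch): after T distinct guesses, an algorithm can have identified the hidden index for at most T of the n possible instances.

```python
# Sanity check of the INDEX-QUERY bound: for any fixed guessing order, at most
# T of the n instances are solved within T queries, so most instances are
# expensive. The instance sizes here are our own illustration.
def queries_needed(guess_order, i_star):
    """Number of queries a fixed guessing order spends before hitting i_star."""
    return guess_order.index(i_star) + 1

n = 64
guess_order = list(range(n))          # any deterministic strategy is some order
costs = [queries_needed(guess_order, i) for i in range(n)]
print(max(costs) == n)                                  # -> True: worst case is n
print(sum(c <= n // 2 for c in costs) == n // 2)        # -> True: only n/2 are cheap
```

Averaging over a uniformly random i* (Yao's principle) turns this counting fact into the worst-case lower bound for randomized algorithms stated above.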
5.1 Proof of Lower Bound for Value-based Learning
Lemma 5.1. For any ε ∈ (0, 1/2) and any n, there exists a set of vectors {v_1, …, v_n} ⊂ ℝ^d with d = O(log n / ε²) such that
⟨v_i, v_i⟩ = 1 for all i ∈ [n];
|⟨v_i, v_j⟩| ≤ ε for any i, j ∈ [n] with i ≠ j.
Now we give the construction of the hard MDP instances. We first define the transitions and the reward functions. In the hard instances, both the rewards and the transitions are deterministic. There are H levels of states, and level h ∈ [H] contains 2^{h−1} distinct states. Thus, there are 2^H − 1 states in the MDPs. We use s_1, s_2, …, s_{2^H−1} to name these states. Here, s_1 is the unique state in level 1, s_2 and s_3 are the two states in level 2, and s_4, s_5, s_6, s_7 are the four states in level 3, etc. There are two different actions, a_1 and a_2, in the MDPs. For a state s_i in level h with h < H, playing action a_1 transits s_i to s_{2i} and playing action a_2 transits s_i to s_{2i+1}, where s_{2i} and s_{2i+1} are both states in level h + 1. See Figure 2 for an example with H = 3.
In our hard instances, R(s, a) = 0 for all (s, a) pairs except for a unique state ŝ in level H − 1 and a unique action â, for which R(ŝ, â) = 1. It is convenient to define r(s) = R(s', a') if playing action a' transits s' to s. For our hard instances, we have r(s) = 1 for a unique node in level H and r(s) = 0 for all other nodes.
Now we define the feature map φ. We invoke Lemma 5.1 to get a set of unit vectors {v_1, v_2, …} ⊂ ℝ^d, one for each state, with d = O(H/ε²). For each state s_i, φ(s_i, a_1) is defined to be v_{2i}, and φ(s_i, a_2) is defined to be v_{2i+1}. This finishes the definition of the MDPs. We now show that no matter which state in level H satisfies r(s) = 1, the resulting MDP always satisfies Assumption 4.2.
Verifying Assumption 4.2.
By construction, for each level h, there is a unique state ŝ_h in level h and action â_h such that Q*_h(ŝ_h, â_h) = 1. For all other pairs (s, a) such that s ≠ ŝ_h or a ≠ â_h, it is satisfied that Q*_h(s, a) = 0. For a given level h and policy π, we take θ^π_h to be Q^π_h(ŝ_h, â_h) · φ(ŝ_h, â_h). Now we show that |⟨θ^π_h, φ(s, a)⟩ − Q^π_h(s, a)| ≤ ε for all states s in level h and all a ∈ A.
- Case I: s = ŝ_h and a = â_h.
In this case, we have ⟨θ^π_h, φ(s, a)⟩ = Q^π_h(ŝ_h, â_h) · ⟨φ(ŝ_h, â_h), φ(ŝ_h, â_h)⟩ = Q^π_h(s, a), since the vectors from Lemma 5.1 have unit norm. The error is 0.
- Case II: s = ŝ_h and a ≠ â_h.
In this case, by the second property of the set in Lemma 5.1 and the fact that φ(s, a) ≠ φ(ŝ_h, â_h), we have |⟨θ^π_h, φ(s, a)⟩| ≤ |Q^π_h(ŝ_h, â_h)| · ε ≤ ε. Meanwhile, we have Q^π_h(s, a) = 0.
- Case III: s ≠ ŝ_h.
In this case, we again have Q^π_h(s, a) = 0 and, as in Case II, |⟨θ^π_h, φ(s, a)⟩| ≤ ε.
Finally, we prove that any algorithm that solves these MDP instances and succeeds with probability at least 0.9 needs to sample at least Ω(2^H) trajectories. We do so by providing a reduction from INDQ_{2^{H−1}} to solving these MDPs. Suppose we have an algorithm for solving these MDPs; we show that such an algorithm can be transformed into one that solves INDQ_{2^{H−1}}. For a specific choice of i* in INDQ_{2^{H−1}}, there is a corresponding MDP instance in which r(s) = 1 for the i*-th state s in level H.
Notice that for all the MDPs that we are considering, the transitions and features are always the same. Thus, the only thing that the learner needs to learn by interacting with the environment is the reward value. Since the reward value is non-zero only for states in level H, each time the algorithm for solving the MDP samples a trajectory that ends at a state s in level H, we query whether s is the i*-th state in level H, and return reward value 1 if so and 0 otherwise. If the algorithm is guaranteed to return a 1/2-optimal policy, then it must be able to find the state with r(s) = 1, and hence to solve INDQ_{2^{H−1}}.
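The reduction can be sketched as follows (the interfaces are our own illustration of the argument, not the paper's formal proof):

```python
# Sketch of the reduction in the proof: wrap an INDEX-QUERY instance so that the
# MDP learner's trajectories are answered by queries "is this leaf i*?".
class IndexQueryOracle:
    def __init__(self, i_star):
        self._i_star, self.queries = i_star, 0

    def query(self, i):
        self.queries += 1
        return i == self._i_star

def reward_of_trajectory(leaf, oracle):
    """Reward revealed to the MDP learner: 1 iff the episode ends at leaf i*."""
    return 1.0 if oracle.query(leaf) else 0.0

# A learner that tries leaves one by one translates directly into queries, so
# finding the rewarding leaf among N leaves costs up to N queries/episodes.
N, oracle = 32, IndexQueryOracle(i_star=17)
found = next(leaf for leaf in range(N) if reward_of_trajectory(leaf, oracle) == 1.0)
print((found, oracle.queries))        # -> (17, 18)
```

Since each sampled trajectory reveals exactly one oracle answer, the (1 − δ)n query lower bound for INDEX-QUERY transfers directly to a trajectory lower bound for the MDP learner.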
5.2 Proof of Lower Bound for Policy-based Learning
Warmup: Lower Bound for Linear Policy without Margin.
To present the hardness results, we first give the construction of the hard instances. The transitions and reward functions of these MDP instances are exactly the same as those in Section 5.1. The main difference is in the definition of the feature map φ; here d = 2. For this lower bound, we define φ(s, a) = e_1 if a = a_1 and φ(s, a) = e_2 if a = a_2. By construction, these MDPs satisfy Assumption 3.1 with ρ = 1. We now show that no matter which state in level H satisfies r(s) = 1 (recall that r(s) = R(s', a') if playing action a' transits s' to s; moreover, for the instances in Section 5.1, we have r(s) = 1 for a unique node in level H and r(s) = 0 for all other nodes), the resulting MDP always satisfies Assumption 4.3.
Verifying Assumption 4.3.
Recall that for each level , there is a unique state in level and action , such that . For all other pairs such that or , we have . We simply take to be if , and take to be if .
Using the same lower bound argument (by reducing INDEX-QUERY to MDPs), we have the following theorem.
Proof of Theorem 4.2
Lemma 5.2. Let be a positive integer and be a real number. Then there exists a set of points of size such that for every point ,
Now we are ready to prove Theorem 4.2.
Proof of Theorem 4.2.
We define a set of deterministic MDPs. The transitions of these hard instances are exactly the same as those in Section 5.1. The main difference is in the definition of the feature map and the reward function. Again in the hard instances, for all in the first levels. Using the terminology in Section 5.1, we have for all states in the first levels. Now we define for states in level . We do so by recursively defining the optimal value function . The initial state in level satisfies . For each state in the first levels, we have and . For each state in the level , we have and . This implies that . In fact, this implies a stronger property that each state has a unique optimal action. See Figure 1 for an example with .
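The recursive definition of the optimal values can be illustrated by backward induction on a small tree. This is a hedged sketch: the leaf rewards below are illustrative stand-ins chosen so that every state has a unique optimal action, not the exact values used in the construction above.

```python
# Backward induction on a depth-H deterministic binary tree.
# State (h, s): the s-th node at level h; its children are (h+1, 2s), (h+1, 2s+1).

def backward_induction(H, leaf_reward):
    V = {(H, s): leaf_reward(s) for s in range(2 ** H)}  # optimal values at leaves
    Q = {}
    for h in range(H - 1, -1, -1):          # levels H-1 down to 0
        for s in range(2 ** h):
            q0 = V[(h + 1, 2 * s)]          # Q*(state, action 0)
            q1 = V[(h + 1, 2 * s + 1)]      # Q*(state, action 1)
            Q[(h, s)] = (q0, q1)
            V[(h, s)] = max(q0, q1)
    return V, Q

H = 4
V, Q = backward_induction(H, leaf_reward=lambda s: s / 2 ** H)
# With distinct leaf rewards, every internal state has a unique optimal action.
assert all(q0 != q1 for (q0, q1) in Q.values())
assert V[(0, 0)] == (2 ** H - 1) / 2 ** H   # optimal value from the root
```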
To define different MDPs, for each state in level of the MDP defined above, we define a new MDP by changing from its original value to . This also affects the definition of the optimal function for states in the first levels. In particular, for each level , we have changed the value of a unique state in level from its original value (at most ) to . By doing so we have defined different MDPs. See Figure 3 for an example with .
Now we define the feature function . We invoke Lemma 5.2 with and . Since is sufficiently small, we have . We use to denote an arbitrary subset of with cardinality . By Lemma 5.2, for any , the distance between and the convex hull of is at least . Thus, there exists a hyperplane which separates and , and for all points , the distance between and is at least . Equivalently, for each point , there exists and such that , and the linear function satisfies and for all . Given the set , we construct a new set , where . Thus for all . Clearly, for each , there exists a vector such that and for all . It is also clear that . We take and .
We now show that all the MDPs constructed above satisfy the linear policy assumption. Namely, we show that for any state in level , after changing to be 1, the resulting MDP satisfies the linear policy assumption. As in Section 5.1, for each level , there is a unique state in level and action , such that . For all other pairs such that or , we have . For each level , if , then we take and , and all other entries in are zeros. If , we use to denote the vector formed by the first coordinates of . By construction, we have . We take in this case. In either case, we have . Now for each level , if , then for all states in level , we have . In this case, and for all states in level , and thus Assumption 4.4 is satisfied. If , then and for all states in level . By construction, we have for all states in level , since and do not have a common non-zero entry. We also have and for all states in level . Finally, we normalize all and so that they all have unit norm. Since for all pairs before normalization, Assumption 4.4 is still satisfied after normalization.
Finally, we prove that any algorithm that solves these MDP instances and succeeds with probability at least needs to sample at least trajectories. We do so by providing a reduction from INDEX-QUERY to solving these MDPs. Suppose we have an algorithm for solving these MDPs; we show that such an algorithm can be transformed into one that solves INDEX-QUERY. For a specific choice of in INDEX-QUERY, there is a corresponding MDP instance with
Notice that for all the MDPs under consideration, the transitions and features are always the same. Thus, the only thing the learner needs to learn by interacting with the environment is the reward function. Since the reward is non-zero only for states in level , each time the algorithm for solving the MDP samples a trajectory that ends at a state in level , we query whether that state corresponds to the hidden index in INDEX-QUERY, and return reward value 1 if it does and its original reward value otherwise. If the algorithm is guaranteed to return a near-optimal policy, then it must be able to find .
Perfect representation vs. good-but-not-perfect representation. For value-based learning in deterministic systems, wen2013efficient showed a polynomial sample complexity upper bound when the representation can perfectly predict the -function. In contrast, if the representation can only approximate the -function, then the agent requires an exponential number of trajectories. This exponential separation demonstrates a provable exponential benefit of having a better representation.
Value-based learning vs. policy-based learning. Note that if the optimal -function can be perfectly predicted by the provided representation, then the optimal policy can also be perfectly predicted using the same representation. Since wen2013efficient showed a polynomial sample complexity upper bound when the representation can perfectly predict the -function, our lower bound on policy-based learning demonstrates that the ability to predict the -function is much stronger than the ability to predict the optimal policy.
Supervised learning vs. reinforcement learning. For policy-based learning, if the planning horizon , the problem reduces to learning a linear classifier, for which polynomial sample complexity upper bounds exist. For longer planning horizons, the agent needs to learn linear classifiers sequentially. Our lower bound on policy-based learning shows that the sample complexity dependence on the planning horizon is exponential.
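For intuition on the horizon-one endpoint of this comparison: learning a single linear policy is ordinary linear classification. A minimal sketch, assuming linearly separable data; the perceptron here is a stand-in for any polynomial-sample classifier and is not an algorithm from the paper.

```python
# With planning horizon 1, policy-based learning is linear classification.
# Perceptron on linearly separable data (a margin is assumed to exist).

def perceptron(samples, dim, epochs=100):
    w = [0.0] * dim
    for _ in range(epochs):
        converged = True
        for x, y in samples:  # labels y in {-1, +1}
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]  # mistake-driven update
                converged = False
        if converged:
            break
    return w

# Toy separable data: the label is the sign of the first coordinate.
data = [([1.0, 0.3], 1), ([0.8, -0.5], 1),
        ([-1.0, 0.2], -1), ([-0.7, -0.4], -1)]
w = perceptron(data, dim=2)
assert all(y * sum(wi * xi for wi, xi in zip(w, x)) > 0 for x, y in data)
```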
Imitation learning vs. reinforcement learning. In imitation learning (IL), the agent can observe trajectories induced by the optimal policy (the expert). If the optimal policy is linear in the given representation, it can be shown that the simple behavior cloning algorithm requires only a polynomial number of samples to find a near-optimal policy (ross2011reduction). Our Theorem 4.2 shows that if the agent cannot observe the expert's behavior, then it requires an exponential number of samples. Therefore, our lower bound shows an exponential separation between policy-based RL and IL when function approximation is used.
6.2 Lower Bounds for Model-based Learning
Finally, we remark that using the technique for proving the lower bound for value-based learning, we can obtain a lower bound for “linear MDPs,” in which the transition probability matrix can be approximated by a linear function of the representation. Section D shows that if the transition matrix is only approximated in the sense, then the agent still requires an exponential number of samples. We note that such an approximation of the transition matrix may be a weak condition. Under the stronger condition that the transition matrix can be approximated well in total variation distance ( distance), there exist polynomial sample complexity upper bounds that can tolerate approximation error (yang2019sample; yang2019reinforcement; jin2019provably).
The authors would like to thank Yuping Luo, Wenlong Mou, Mengdi Wang and Yifan Wu for insightful discussions. Simon S. Du is supported by National Science Foundation (Grant No. DMS-1638352) and the Infosys Membership. Ruosong Wang is supported in part by NSF grant IIS-1763562 and Office of Naval Research grant N000141812861. Sham M. Kakade acknowledges funding from the Washington Research Foundation Fund for Innovation in Data-Intensive Discovery; the NSF award CCF 1740551; and the ONR award N00014-18-1-2247.
Appendix A Technical Proofs
a.1 Proof of Theorem 5.1
The proof is a straightforward application of Yao's minimax principle (yao1977probabilistic). We provide the full proof for completeness.
Consider an input distribution where is drawn uniformly at random from . Suppose there is a -correct algorithm for with worst-case query complexity such that . By averaging, there is a deterministic algorithm with worst-case query complexity , such that
We may assume that the sequence of queries made by is fixed. This is because (i) is deterministic, and (ii) before correctly guesses , all responses that receives are the same (i.e., all guesses are incorrect). We use to denote the sequence of queries made by . Notice that is the worst-case query complexity of . If , then there exist distinct such that will never guess , and will be incorrect when equals , which implies
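The counting step in this argument is easy to sanity-check numerically. This is a toy sketch with arbitrarily chosen n and T: a deterministic algorithm whose query sequence is fixed can succeed on at most T of the n possible hidden indices.

```python
# A deterministic INDEX-QUERY algorithm with a fixed sequence of T guesses
# succeeds only when the hidden index appears among those guesses, so its
# success probability under a uniform hidden index is at most T / n.
n, T = 1024, 100
queries = set(range(T))  # any fixed set of T distinct guesses
successes = sum(1 for i_star in range(n) if i_star in queries)
assert successes == T        # every guess covers exactly one hidden index
assert successes / n <= T / n
```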
a.2 Proof of Lemma 5.1
We need the following tail inequality for random unit vectors, which will be useful for the proof of Lemma 5.1.
Lemma A.1 (Lemma 2.2 in dasgupta2003elementary).
For a random unit vector in and , we have
In particular, when , we have
Proof of Lemma 5.1.
Let be a set of independent random unit vectors in with . We will prove that with probability at least , satisfies the two desired properties as stated in Lemma 5.1. This implies the existence of such set .
It is clear that for all , since each is drawn from the unit sphere. We now prove that for any with , with probability at least , we have . Notice that this is sufficient to prove the lemma, since by a union bound over all the possible pairs of , this implies that satisfies the two desired properties with probability at least .
Now, we prove that for two independent random unit vectors and in with , with probability at least , we have . By rotational invariance, we may assume that is a standard basis vector, i.e., and for all . Notice that is then the magnitude of the first coordinate of . We finish the proof by invoking Lemma A.1 and taking . ∎
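The concentration used here can be checked numerically. A hedged sketch (the dimension and set size below are arbitrary choices, not the lemma's parameters): independent random unit vectors in moderately high dimension have small pairwise inner products, which is the phenomenon Lemma 5.1 exploits.

```python
import math
import random

def random_unit_vector(d, rng):
    """Sample uniformly from the unit sphere by normalizing a Gaussian vector."""
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

rng = random.Random(0)
d, m = 200, 50
vecs = [random_unit_vector(d, rng) for _ in range(m)]
max_inner = max(
    abs(sum(a * b for a, b in zip(u, v)))
    for i, u in enumerate(vecs) for v in vecs[i + 1:]
)
# Pairwise inner products concentrate around ~1/sqrt(d), far below 1/2.
assert max_inner < 0.5
```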
a.3 Proof of Lemma 5.2
Proof of Lemma 5.2.
Consider a -packing of size on the -dimensional unit sphere (for the existence of such a packing, see, e.g., lorentz1966metric). Let be the origin. For two points , we use to denote the length of the line segment between them. Note that every two points satisfy .
To prove the lemma, it suffices to show that satisfies the property . Consider a point , and let be a hyperplane that is perpendicular to (notice that is also a vector) and separates from every other point in . We let the distance between and be as large as possible, i.e., contains a point of . Since lies on the unit sphere and is a -packing, is at least away from every point on the spherical cap not containing that is defined by the cutting plane . More formally, let be the intersection point of the line segment and . Then
Indeed, by symmetry, ,
where . Notice that the distance between and the convex hull of is lower bounded by the distance between and , which is given by . Consider the triangles defined by . We have (note that lies inside ). By the Pythagorean theorem, we have
Solving the above three equations for , we obtain