Centralized Conflict-free Cooperation for Connected and Automated Vehicles at Intersections by Proximal Policy Optimization

Abstract

Connected vehicles will change the modes of future transportation management and organization, especially at intersections. There are mainly two categories of coordination methods at unsignalized intersections, i.e., centralized and distributed methods. Centralized coordination methods require large computational resources because a central controller optimizes the trajectories of all approaching vehicles, whereas in distributed methods each approaching vehicle has an individual controller that optimizes its own trajectory using the motion information and conflict relationships of its neighboring vehicles, which avoids heavy computation but requires sophisticated manual design. In this paper, we propose a centralized conflict-free cooperation method for multiple connected vehicles at unsignalized intersections using reinforcement learning (RL), which naturally relieves the online computation burden by training offline. We first incorporate a prior model into the proximal policy optimization (PPO) algorithm to accelerate the learning process. We then present the design of the state, action and reward to formulate centralized cooperation as an RL problem. Finally, we train a coordination policy with our model-accelerated PPO (MA-PPO) in a simulated setting and analyze the results. The results show that the proposed method improves the traffic efficiency of the intersection while ensuring that no collision occurs.

Connected and automated vehicles, centralized coordination method, reinforcement learning, traffic intersection

I Introduction

The increasing demand for mobility poses great challenges to road transport. The connected and automated vehicle is attracting extensive attention due to its potential to benefit traffic safety, efficiency, and economy [19]. A widely studied, but also simplified, version of connected vehicle cooperation is the platoon control system on highways. Platoon control aims to ensure that a group of connected vehicles in the same lane move at a harmonized longitudinal speed while maintaining desired inter-vehicle spaces [33, 8, 18]. As a typical scenario in urban areas, the intersection is more complex and challenging for multi-vehicle coordination than the highway. At an intersection, vehicles enter from different entrances, cross their specific trajectories in the intersection zone, and leave at different exits. The complex conflict relationships between vehicles result in complicated vehicle decisions to avoid collisions. Hence, a careful design is needed to guarantee traffic safety while improving traffic efficiency. To resolve multi-vehicle coordination at intersections, several studies focus on traffic signal design schemes. Goodall et al. (2013) developed a decentralized fully adaptive traffic control algorithm to optimize traffic signal timing [9]. Feng et al. (2015) presented a real-time adaptive signal phase allocation algorithm using connected vehicle data, which optimizes the phase sequence and duration by solving a two-level optimization problem [7]. These traffic-signal-based coordination methods can ensure traffic safety, but they may result in inefficient intersection management. Hence, researchers have started to focus on signal-free methods for intersection coordination.

Currently, there are mainly two types of methods for coordination at unsignalized intersections, i.e., centralized and distributed coordination methods. Centralized coordination methods utilize the global information of the whole intersection to centrally control every vehicle. Dresner and Stone (2008) treated drivers and intersections as autonomous agents in a multi-agent system and built a new reservation-based approach around a detailed communication protocol [5]. Lee and Park (2012) eliminated potential overlaps of vehicular trajectories coming from all conflicting approaches at the intersection, then sought a safe maneuver for every vehicle approaching the intersection and manipulated each of them [17]. Dai et al. (2016) formulated an intersection control model and transformed it into a convex optimization problem, with consideration of safety and efficiency [2]. However, these centralized coordination methods suffer from a heavy computational burden since they coordinate approaching vehicles by optimizing all of their trajectories with a centralized controller.

In distributed coordination methods, there is no central controller; instead, a distributed controller in each approaching vehicle optimizes its own trajectory considering the motion and conflict relationships with its neighboring vehicles. Ahmane et al. proposed a model based on Timed Petri Nets with Multipliers (TPNM) and used it to design the control policy through structural analysis [1]. Xu et al. proposed a conflict-free geometry topology and a communication topology to transform the two-dimensional vehicle cluster at the intersection into a one-dimensional virtual platoon, and then designed a distributed feedback controller [35]. Distributed coordination methods alleviate the heavy computational requirement through distributed computation; however, they require careful design of sophisticated dynamic models and complicated communication relationships.

One of the most fundamental goals in artificial intelligence is learning new skills, especially from high-dimensional sensor input. Reinforcement learning (RL) gradually learns a better policy from trial-and-error interaction with the environment, which is highly similar to how humans learn and has the potential to address a large number of complex problems [29]. Recently, significant progress has been made on a variety of problems by combining advances in deep learning and RL. Mnih et al. (2015) proposed the Deep Q-learning Network (DQN) and attained human-level performance on Atari video games with raw pixels as input [22]. Silver et al. combined RL and tree search to master the game of Go and produced two famous programs, AlphaGo and AlphaGo Zero, defeating the best human champions [27, 28]. Since DQN is only suitable for problems with discrete action spaces, the Deep Deterministic Policy Gradient (DDPG) algorithm was proposed to solve continuous control problems [20]. Because vanilla policy gradient methods suffer from poor data efficiency and robustness, Trust Region Policy Optimization (TRPO) was proposed [24]. However, TRPO is not compatible with architectures that include noise and rarely implements parameter sharing between the policy and value function. Proximal Policy Optimization (PPO) was proposed as an updated version of TRPO, which alternates between sampling data through interaction and optimizing a "surrogate" objective function using stochastic gradient ascent [26].

RL is poised to tackle the autonomous driving problem because of its super-human potential. Most existing RL research on autonomous driving focuses on the intelligence of a single vehicle driving in relatively simple traffic scenarios. DQN was initially used to control high-frequency discrete steering actions of a vehicle [32, 23]. After the Asynchronous Advantage Actor-Critic (A3C) method was proposed, some studies adopted this framework to accelerate learning and maintain training stability [21, 12, 4]. Owing to its advantage in long-horizon credit assignment, hierarchical reinforcement learning was used for both high-level maneuver selection and low-level motion control in the decision making of self-driving cars [13]. Besides, other researchers successfully applied DDPG to autonomous driving, realizing control of continuous acceleration, steering and braking actions [15, 34].

In this paper, we employ RL for centralized control of multiple connected vehicles to realize autonomous collision-free passing at unsignalized intersections. We first formulate the state space, action space and reward function in the RL framework, and then train the policy with a distributed PPO algorithm. Besides, to enhance sample efficiency and accelerate the training process, we incorporate a prior model into the PPO algorithm. Since we train a centralized controller, there is no need to elaborately design the complex components used by distributed controllers. Moreover, RL trains offline and infers online, so it naturally relieves the online computation burden. Our results show that the learned policy is able to increase driving safety and traffic efficiency at intersections.

The rest of this paper is organized as follows. Section II introduces preliminaries of the Markov decision process and policy gradient methods. Section III proposes model-accelerated PPO (MA-PPO), an improvement built on the original PPO and model-based RL methods. Section IV presents our problem statement and formulation, and Section V describes the experimental settings and results. Section VI concludes this work.

II Preliminaries of RL

Consider an infinite-horizon discounted Markov decision process (MDP), defined by the tuple $(\mathcal{S}, \mathcal{A}, P, r, \rho_0, \gamma)$, where $\mathcal{S}$ is a finite set of states, $\mathcal{A}$ is a finite set of actions, $P: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ is the transition probability distribution, $r: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the reward function, $\rho_0$ is the distribution of the initial state $s_0$, and $\gamma \in (0,1)$ is the discount factor.

Let $\pi: \mathcal{S} \times \mathcal{A} \to [0,1]$ denote a stochastic policy. We seek the optimal policy $\pi^*$ that has the maximum value function $v_\pi(s)$ for all $s \in \mathcal{S}$, where the value function is the expected sum of discounted rewards from a state when following policy $\pi$:

$$v_\pi(s) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \gamma^t r_t \,\Big|\, s_0 = s\Big] \qquad (1)$$

where $r_t = r(s_t, a_t)$ and $\mathbb{E}_\pi[\cdot]$ denotes the expectation over trajectories generated by $\pi$ for short. Similarly, we use the following standard definition of the state-action value function $q_\pi$:

$$q_\pi(s, a) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \gamma^t r_t \,\Big|\, s_0 = s, a_0 = a\Big] \qquad (2)$$

II-A Vanilla policy gradient

In practice, finding the optimal policy for every state is impractical for large state spaces; thus we consider parameterized policies $\pi_\theta(a \mid s)$ with parameter vector $\theta$. For the same reason, the state-value function is parameterized as $V_w(s)$ with parameter vector $w$.

Policy optimization methods seek the optimal $\theta$ that maximizes the average performance of the policy,

$$J(\theta) = \mathbb{E}_{s_0 \sim \rho_0,\, a_t \sim \pi_\theta}\Big[\sum_{t=0}^{\infty} \gamma^t r_t\Big] \qquad (3)$$

where $a_t \sim \pi_\theta(\cdot \mid s_t)$ and $s_{t+1} \sim P(\cdot \mid s_t, a_t)$.

Vanilla methods optimize (3) by the stochastic policy gradient [30], shown in (4):

$$\nabla_\theta J(\theta) = \sum_{s} \rho_{\pi_\theta}(s) \sum_{a} \nabla_\theta \pi_\theta(a \mid s)\, q_{\pi_\theta}(s, a) \qquad (4)$$

$$\rho_{\pi_\theta}(s) = \sum_{t=0}^{\infty} \gamma^t \Pr(s_t = s) \qquad (5)$$

where $\Pr(s_t = s)$ is the state distribution at time $t$ and $\rho_{\pi_\theta}$ is called the discounted visitation frequency, which in practice is usually replaced with the stationary state distribution under $\pi_\theta$, denoted by $d_{\pi_\theta}$ [31]. Combined with the likelihood ratio and baseline techniques [29], we can write (4) in expectation form:

$$\nabla_\theta J(\theta) = \mathbb{E}_{s \sim d_{\pi_\theta},\, a \sim \pi_\theta}\big[\nabla_\theta \log \pi_\theta(a \mid s)\, A_{\pi_\theta}(s, a)\big] \qquad (6)$$

where $A_{\pi_\theta}(s, a)$ is the advantage function, which can be estimated by several methods [25].

II-B Trust region method

While the vanilla policy gradient is simple to implement, it often leads to destructively large policy updates. TRPO maximizes a lower bound of (3), i.e. (7), to guarantee performance improvement:

$$\max_\theta \; L_{\theta_{\text{old}}}(\theta) - C\, D_{\mathrm{KL}}^{\max}(\pi_{\theta_{\text{old}}}, \pi_\theta) \qquad (7)$$

However, it is hard to choose a single penalty coefficient $C$ that performs well across different problems, so TRPO instead uses the trust region constraint shown in (8):

$$\max_\theta \; L_{\theta_{\text{old}}}(\theta) \quad \text{s.t.} \quad \bar{D}_{\mathrm{KL}}(\pi_{\theta_{\text{old}}}, \pi_\theta) \le \delta \qquad (8)$$

where we denote $L_{\theta_{\text{old}}}(\theta) = \mathbb{E}\big[\tfrac{\pi_\theta(a \mid s)}{\pi_{\theta_{\text{old}}}(a \mid s)} A_{\pi_{\theta_{\text{old}}}}(s, a)\big]$, $\bar{D}_{\mathrm{KL}}$ is the average KL divergence over states, and $D_{\mathrm{KL}}^{\max}$ is its maximum over states. TRPO can be regarded as a natural policy gradient method [14]. It finds the steepest policy gradient in the space normed by the Fisher information matrix rather than the Euclidean space, which helps reduce the impact of policy parameterization when calculating the gradient and stabilizes the learning process.

III Model accelerated PPO

III-A Proximal policy optimization

In this paper, the PPO algorithm is employed as our baseline. It is inspired by TRPO and has two main differences, i.e., an unconstrained surrogate objective function and generalized advantage estimation.

Unconstrained surrogate objective function

From the theory of TRPO, a stable policy update requires penalizing the deviation of the policy based on the unconstrained optimization (9),

$$L^{\mathrm{CPI}}(\theta) = \mathbb{E}_t\Big[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)} \hat{A}_t\Big] \qquad (9)$$

PPO instead constructs a lower bound of (9) that directly removes the incentive for the policy distribution to deviate too much. Its objective is (10):

$$L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\Big[\min\big(r_t(\theta)\hat{A}_t,\; \mathrm{clip}(r_t(\theta), 1 - \epsilon, 1 + \epsilon)\hat{A}_t\big)\Big] \qquad (10)$$

where $r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}$ is the probability ratio. When $\hat{A}_t > 0$, under objective (9) $r_t(\theta)$ tends to grow far above 1 to make the objective as large as possible, which leads to unstable learning, while the PPO objective (10) removes this incentive by clipping $r_t(\theta)$ within $[1 - \epsilon, 1 + \epsilon]$ and taking the minimum of the original and clipped terms. The same holds when $\hat{A}_t < 0$.
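To make the clipping mechanism concrete, the following sketch (our illustration, not the authors' implementation) computes the clipped surrogate loss of (10) in PyTorch; the tensor names and the default clip range are illustrative assumptions.

```python
import torch

def ppo_clip_loss(logp_new: torch.Tensor,
                  logp_old: torch.Tensor,
                  adv: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    """Negative of the clipped surrogate objective (10), to be minimized."""
    ratio = torch.exp(logp_new - logp_old)                        # r_t(theta)
    unclipped = ratio * adv                                       # original surrogate term
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    return -torch.min(unclipped, clipped).mean()                  # maximize (10) <=> minimize its negative
```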

Generalized advantage estimation

The advantage function is necessary for the policy gradient calculation, and it can be estimated by (11):

$$\hat{A}_t = \hat{q}(s_t, a_t) - V_w(s_t) \qquad (11)$$

where $\hat{q}(s_t, a_t)$ is the action-value function estimated from samples and $V_w(s_t)$ is the state-value function approximation. TRPO and A3C use the Monte-Carlo method to construct $\hat{q}(s_t, a_t)$ as in (12):

$$\hat{q}(s_t, a_t) = \sum_{l=0}^{\infty} \gamma^l r_{t+l} \qquad (12)$$

It is an unbiased estimate of $q_\pi(s_t, a_t)$ but suffers from high variance. Actor-critic methods use one-step TD to form $\hat{q}(s_t, a_t)$ as in (13):

$$\hat{q}(s_t, a_t) = r_t + \gamma V_w(s_{t+1}) \qquad (13)$$

which has low variance but is biased. Generalized advantage estimation is essentially the same as TD($\lambda$), except that it uses a linear combination of $n$-step TD estimates to estimate the advantage instead of the value. The backward view of TD($\lambda$) is shown in (14):

$$\hat{A}_t = \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l} \qquad (14)$$

where $\delta_t$ is the TD error,

$$\delta_t = r_t + \gamma V_w(s_{t+1}) - V_w(s_t) \qquad (15)$$
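For concreteness, a minimal sketch of the backward recursion (14)–(15) for a finite-length rollout is given below; the array names and the bootstrap convention (a value array one element longer than the rewards) are our assumptions.

```python
import numpy as np

def compute_gae(rewards: np.ndarray, values: np.ndarray,
                gamma: float = 0.99, lam: float = 0.95):
    """rewards has length T; values has length T+1 (last entry bootstraps the final state)."""
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # TD error, eq. (15)
        gae = delta + gamma * lam * gae                          # backward recursion of eq. (14)
        advantages[t] = gae
    returns = advantages + values[:-1]                           # TD(lambda) targets for the critic
    return advantages, returns
```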

Compared with TRPO, PPO is much simpler and faster to implement because it only involves first-order optimization, and it converges better owing to the use of generalized advantage estimation. However, PPO is an on-policy method and inevitably has high sample complexity.

III-B Model-based RL

Recent model-free reinforcement learning algorithms have proposed incorporating given or learned dynamics models as a source of additional data in order to reduce sample complexity. Generally, there are two ways to use a model: value gradient methods, and using the model for imagination rollouts.

Value gradient methods link together the policy, model, and reward function to compute an analytic policy gradient by backpropagating the reward along a trajectory [3, 10, 11]. A major limitation of this approach is that the dynamics model can only be used to retrieve information already present in the observed data, so although the variance is lower, the actual improvement in efficiency is relatively small. Alternatively, the given or learned model can be used for imagination rollouts. This usage can be naturally incorporated into the model-free RL framework; however, learned models tend to overfit the experience data and produce large errors over long horizons [16, 6].

III-C Model accelerated PPO (MA-PPO)

PPO is a model-free, on-policy RL algorithm. Model-free means it knows nothing about the environment and can only learn from interactions with it. As a result, it inevitably requires a large amount of experience data despite its excellent final performance. Besides, the training speed is limited by interaction with the real world or a simulator. Even worse, the on-policy property makes experience produced by previously trained policies useless, which aggravates the sample inefficiency. This motivates us to accelerate PPO.

Basically, there are two ways to reduce sample complexity. The first is incorporating off-policy data into the learning process, i.e., reusing experience collected during training. The second is using a given or learned dynamics model. We argue that off-policy data cannot be used here due to state distribution mismatch. Assume that the off-policy data are generated by another policy $\mu$; we rewrite the PPO objective (10) as (16):

$$L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_{s \sim d_\mu,\, a \sim \mu}\Big[\frac{d_{\pi_{\theta_{\text{old}}}}(s)}{d_\mu(s)}\, \frac{\pi_{\theta_{\text{old}}}(a \mid s)}{\mu(a \mid s)}\, \min\big(r_t(\theta)\hat{A}_t,\; \mathrm{clip}(r_t(\theta), 1 - \epsilon, 1 + \epsilon)\hat{A}_t\big)\Big] \qquad (16)$$

To obtain the correct gradient from off-policy data, we must correct not only the action distribution by the action probability ratio $\frac{\pi_{\theta_{\text{old}}}(a \mid s)}{\mu(a \mid s)}$, but also the state distribution by the stationary probability ratio $\frac{d_{\pi_{\theta_{\text{old}}}}(s)}{d_\mu(s)}$. However, the stationary probability ratio is hard to estimate, which leads to distribution mismatch and hinders the use of off-policy data both in theory and in practice. As a result, we only employ a model to accelerate PPO.

In the field of centralized control at intersections, a dynamics model is available from prior human knowledge, so we construct the model ourselves rather than learning one. To combine the model with the PPO algorithm naturally, we adopt the second type of model usage from Section III-B, i.e., imagination rollouts. MA-PPO is shown in Algorithm 1.

  Randomly initialize the critic network $V_w(s)$ and the actor $\pi_\theta(a \mid s)$ with weights $w$ and $\theta$; set the GAE parameter $\lambda$, total timesteps $T$, inner iteration $M$, batch size $B$, minibatch size $b$, and epoch $K$
  for iteration = 1, 2, ... do
     Run policy $\pi_\theta$ in the environment for $T$ timesteps, collecting $\{s_t, a_t, r_t\}$
     Estimate advantages $\hat{A}_t = \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}$
     Estimate TD($\lambda$) targets $\hat{V}_t = \hat{A}_t + V_w(s_t)$
     $\pi_{\theta_{\text{old}}} \leftarrow \pi_\theta$
     for epoch = 1, ..., $K$ do
        $J(\theta) = \mathbb{E}_t\big[\min\big(r_t(\theta)\hat{A}_t,\; \mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t\big)\big]$
        Update $\theta$ by a gradient method w.r.t. $J(\theta)$
        $L(w) = \sum_t \big(\hat{V}_t - V_w(s_t)\big)^2$
        Update $w$ by a gradient method w.r.t. $L(w)$
     end for
     for model iteration = 1, ..., $M$ do
        Run policy $\pi_\theta$ in the model for $T$ timesteps, collecting $\{s_t, a_t, r_t\}$
        Estimate advantages $\hat{A}_t$
        Estimate TD($\lambda$) targets $\hat{V}_t$
        $\pi_{\theta_{\text{old}}} \leftarrow \pi_\theta$
        for epoch = 1, ..., $K$ do
           Update $\theta$ by a gradient method w.r.t. $J(\theta)$
           Update $w$ by a gradient method w.r.t. $L(w)$
        end for
     end for
  end for
Algorithm 1 MA-PPO
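For readers who prefer code, the following high-level sketch mirrors the structure of Algorithm 1; the helpers `collect_rollout`, `compute_gae` (as sketched earlier) and `ppo_update` are hypothetical placeholders for the sampling, advantage-estimation and update steps described above, and the environment and prior model are assumed to expose the same stepping interface.

```python
def ma_ppo(actor, critic, env, model, iterations, T=2048,
           inner_iters=1, epochs=10, minibatch_size=64):
    for _ in range(iterations):
        # 1) learn from rollouts in the simulation environment
        batch = collect_rollout(actor, critic, env, T)                 # {"obs", "act", "rew", "val", ...}
        batch["adv"], batch["ret"] = compute_gae(batch["rew"], batch["val"])
        ppo_update(actor, critic, batch, epochs, minibatch_size)       # clipped-surrogate + value regression

        # 2) learn from additional rollouts generated by the prior kinematics model
        for _ in range(inner_iters):
            batch = collect_rollout(actor, critic, model, T)
            batch["adv"], batch["ret"] = compute_gae(batch["rew"], batch["val"])
            ppo_update(actor, critic, batch, epochs, minibatch_size)
```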

IV Problem statement and formulation

IV-A Problem statement

In this paper, we focus on the typical 4-direction intersection shown in Fig. 1. Each direction is denoted by its location in the figure, i.e., up (U), down (D), left (L) and right (R). We only consider vehicles within a certain distance of the intersection center. The intersection is unsignalized and each entrance or exit is assumed to have only one lane; as a result, there are 4 entrance lanes in total. A vehicle at each entrance is allowed to turn right, go straight or turn left. Thus there are 12 types of vehicles, denoted by their entrance and exit, i.e., DR, DU, DL, RU, RL, RD, LD, LR, LU, UL, UD, and UR. Their numbers and meanings are listed in Table I. All possible conflict relations are also illustrated in the figure and can be categorized into three classes: crossing conflicts (red dots), converging conflicts (purple dots), and diverging conflicts (pink dots). To simplify the problem, we choose 8 of the 12 vehicle modes to cover the main conflict modes in our experiment, namely DR, DL, RU, RL, LD, LU, UL and UD, as shown in Fig. 2. From the figure, we can summarize all the distinct conflict types it contains, which are shown in Fig. 3.


Type Number Meaning
DR 1 From ‘Down’ turn right to ‘Right’
DU 2 From ‘Down’ go straight to ‘Up’
DL 3 From ‘Down’ turn left to ‘Left’
RU 4 From ‘Right’ turn right to ‘Up’
RL 5 From ‘Right’ go straight to ‘Left’
RD 6 From ‘Right’ turn left to ‘Down’
LD 7 From ‘Left’ turn right to ‘Down’
LR 8 From ‘Left’ go straight to ‘Right’
LU 9 From ‘Left’ turn left to ‘Up’
UL 10 From ‘Up’ turn right to ‘Left’
UD 11 From ‘Up’ go straight to ‘Down’
UR 12 From ‘Up’ turn left to ‘Right’
TABLE I: Different types of vehicle

We adopt the following assumptions. First, all vehicles are equipped with positioning and velocity-measurement devices, so we can gather location and motion information once they enter the zone of interest around the intersection. Second, all approaching vehicles are assumed to be automated vehicles, so each vehicle can strictly follow the desired acceleration, control its speed, and pass the intersection automatically. Additionally, there is at most one vehicle of each type in each entrance lane, but the order of the different types is stochastic.

IV-B RL formulation

We now transform our problem into an RL problem by defining the state space, action space and reward function, which are the basic elements of RL.

State and action space

By our assumption, we need to control at most 8 vehicles at a time, i.e., 2 vehicles of different types at each entrance. Vector forms are used for both the state and the action, which are respectively the concatenation of each vehicle's state and control in a fixed order, as shown in (17):

$$s = [s_{DR}, s_{DL}, s_{RU}, s_{RL}, s_{LD}, s_{LU}, s_{UL}, s_{UD}], \qquad a = [a_{DR}, a_{DL}, a_{RU}, a_{RL}, a_{LD}, a_{LU}, a_{UL}, a_{UD}] \qquad (17)$$

where $s_*$ and $a_*$ denote the state and action of the vehicle of type $*$.

Fig. 1: Intersection scenario settings

Fig. 2: Vehicles modes chosen for experiment

Fig. 3: Typical modes of conflict in experiment

Fig. 4: State formulation

The state of each vehicle should contain its position and velocity information. Intuitively, we could form the state as a tuple of coordinates and velocity, i.e., $(x, y, v)$, where $(x, y)$ is the position and $v$ is the velocity. However, in our task formulation every vehicle has a fixed path corresponding to its type, so this representation contains redundant information. Besides, for continuous states it is desirable to reduce the state dimension to speed up learning and enhance stability. Observing that all paths cross the intersection, we further compress the state of each vehicle to $(d, v)$, where $d$ is the distance between the vehicle and the center of its path. Note that $d$ is positive when the vehicle is heading for the center and negative when it is leaving. The state formulation is shown in Fig. 4.

For the action space, we choose the acceleration of each vehicle. In total, a 16-dimensional state space and an 8-dimensional action space are constructed.
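A small sketch of this state formulation is given below: each vehicle is reduced to a signed distance-velocity pair and the joint vectors are concatenations over the 8 chosen types as in (17); the function and variable names are our own illustrative choices.

```python
import numpy as np

VEHICLE_TYPES = ["DR", "DL", "RU", "RL", "LD", "LU", "UL", "UD"]

def vehicle_state(dist_to_center: float, leaving: bool, velocity: float) -> np.ndarray:
    """Per-vehicle state (d, v): d > 0 while approaching the path center, d < 0 after passing it."""
    d = -dist_to_center if leaving else dist_to_center
    return np.array([d, velocity])

def joint_state(per_vehicle: dict) -> np.ndarray:
    """16-dim joint state: concatenation of (d, v) over the 8 vehicle types, as in (17)."""
    return np.concatenate([vehicle_state(*per_vehicle[t]) for t in VEHICLE_TYPES])

# usage: joint_state({"DR": (25.0, False, 6.0), "DL": (30.0, False, 5.5), ...})
```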

Reward settings

The reward function is designed with safety, efficiency and task completion in mind. First of all, the task is episodic, with two types of termination: a collision occurs, or all vehicles pass the intersection. To discourage collisions, a large negative reward is given if one happens. To encourage efficiency, a small negative reward is given at every time step. To encourage task completion, a positive reward is given whenever a vehicle passes the intersection, and a large positive reward is given when all vehicles have passed. All reward settings are listed in Table II.


Reward items Reward
Collision -50
Step reward -1
Some vehicle passes 10
All vehicles pass 50
TABLE II: Reward settings
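The items in Table II can be combined into a single step reward as in the sketch below; whether the step penalty is also applied on the terminal step, and the flag names, are our assumptions rather than details stated in the text.

```python
def step_reward(collision: bool, num_newly_passed: int, all_passed: bool) -> float:
    reward = -1.0                       # step penalty to encourage efficiency
    if collision:
        return reward - 50.0            # large penalty; the episode terminates
    reward += 10.0 * num_newly_passed   # bonus for each vehicle that passes at this step
    if all_passed:
        reward += 50.0                  # completion bonus; the episode terminates
    return reward
```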

IV-C Algorithm architecture

In this section, we illustrate how to apply the MA-PPO algorithm to this centralized control problem.

Model construction

MA-PPO learns from data generated by both the simulation and the model. The simulation incorporates the true dynamics of the environment, i.e., a kinematics module with noise, but interacting with it takes too much time. MA-PPO accelerates the learning process by incorporating a prior model that also generates data for learning. The model is a simple kinematics model: given the current position, velocity and expected acceleration of each vehicle, its next position and velocity can be inferred.
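A minimal sketch of such a kinematics step is shown below, assuming a simple point-mass update along each fixed path; the time step, speed limit and variable names are our assumptions.

```python
import numpy as np

def kinematics_step(d: np.ndarray, v: np.ndarray, a: np.ndarray,
                    dt: float = 0.1, v_max: float = 15.0):
    """d: signed distance to the path center for each vehicle (m), positive while approaching;
    v: velocity (m/s); a: commanded acceleration (m/s^2)."""
    v_next = np.clip(v + a * dt, 0.0, v_max)          # no reversing, bounded speed
    d_next = d - (v + v_next) / 2.0 * dt              # distance shrinks as the vehicle advances
    return d_next, v_next
```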

Overall architecture

The learning algorithm for this RL problem consists of two main parts: the MA-PPO learner and the worker. The worker is in charge of fetching the updated policy from the learner and using it to collect experience data from the simulation or from the kinematics model. The MA-PPO learner then uses the experience data from the worker to update the value and policy networks by backpropagation, and finally sends the updated policy back to the worker for the next iteration. The overall architecture is shown in Fig. 5.

Fig. 5: Overall architecture of algorithm

V Experiments

V-A Experimental settings

In this section, we train and test MA-PPO and the original PPO on the set of vehicles mentioned above, in which there are two vehicles of different types at each entrance of the intersection; thus we have 8 vehicles in total. These vehicles are chosen to cover all the conflict modes shown in Fig. 3. The initial positions of all vehicles are random, and multiple vehicles enter the intersection from different entrances, follow their trajectories in the intersection zone, and leave at different exits. The central controller is capable of controlling the acceleration of all vehicles to adjust their speeds and positions so as to ensure traffic safety and efficiency, i.e., all vehicles pass through the intersection as quickly as possible without collision. For the results, the training processes of MA-PPO and PPO are shown and compared to illustrate our improvement over PPO, and we also visualize the behavior of the policy at the beginning and at the end of training in simulation to show what the trained policy has learned.

V-B Implementation details

We employ multilayer perceptrons with two hidden layers as the approximate functions of the actor and critic. Both the actor and the critic have 128 units in each hidden layer; the actor has 16 output units, giving the mean and standard deviation of a Gaussian distribution for each vehicle, while the critic has a single output unit for the state value. Note that the actor and critic networks share no parameters. We use Adam as the optimizer. For MA-PPO, we collect a batch of 2048 transitions per iteration and perform 10 epochs of minibatch updates with minibatch size 64. For the model simulation loop, we set the inner iteration $M = 1$. Besides, we train both PPO and MA-PPO with 5 random seeds to eliminate the impact of randomness. The complete parameter settings are listed in Table III.


Parameter Value
Discount factor γ 0.99
GAE parameter λ 0.95
Clip range ε 0.2
Total timesteps
Inner iteration M 1
Seed number 5
Batch size 2048
Minibatch size 64
Epoch 10
Learning rate 0.0003 (linearly annealed to 0)
Hidden layer number 2
Hidden units number 128
Adam
TABLE III: Hyperparameters of experiment
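The sketch below illustrates the network architecture described above (two hidden layers of 128 units, a Gaussian policy head for the 8 accelerations, and a scalar value head) in PyTorch; the activation function and any other details not stated in the text are our assumptions.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim=16, action_dim=8, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * action_dim),   # 16 outputs: mean and log-std per vehicle
        )

    def forward(self, s):
        mean, log_std = self.net(s).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp())

class Critic(nn.Module):
    def __init__(self, state_dim=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),                # scalar state value
        )

    def forward(self, s):
        return self.net(s).squeeze(-1)
```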

We use parallel workers to improve exploration, stabilize the policy gradient and thus speed up the learning process. Concretely, 16 parallel workers learn simultaneously. In each iteration, every worker interacts with its own environment and collects a batch of 2048 timesteps of data, then computes a local gradient on the first minibatch; the global gradient is obtained by averaging the local gradients of all workers. Each worker updates its parameters by applying the global gradient with Adam, then takes the next minibatch, and so on.
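A sketch of this synchronous update is given below; `allreduce_mean` stands in for whatever communication primitive is actually used to average gradients across the 16 workers, and the function name is our own.

```python
import torch

def averaged_gradient_step(model, loss, optimizer, allreduce_mean):
    """Compute a local gradient, average it over all workers, and apply the same Adam step."""
    optimizer.zero_grad()
    loss.backward()                               # local gradient on this worker's minibatch
    for p in model.parameters():
        if p.grad is not None:
            p.grad.copy_(allreduce_mean(p.grad))  # global gradient = mean over workers
    optimizer.step()                              # identical update on every worker
```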

V-C Results and discussion

In this section, we show the performance of our algorithm at the intersection and analyze the empirical results.

(a) Mean episode reward
(b) Mean episode length
Fig. 6: Training process of MA-PPO and PPO

Fig. 6(a) shows the mean episode reward of MA-PPO and PPO during training. Both MA-PPO and PPO reach the highest reward of about 50, which means that all 8 vehicles pass through the intersection successfully. Compared with PPO, MA-PPO converges at around 500 iterations, while PPO needs nearly 1000 iterations, showing that MA-PPO converges about twice as fast as PPO.

Fig. 6(b) shows the change of the mean episode length during training. The episode lengths of both MA-PPO and PPO first increase rapidly and then decrease to a similar value. This can be explained as follows: at the beginning, the temporary policy mainly focuses on avoiding collisions because a collision corresponds to a large negative reward. At that stage, one reasonable policy is to let the vehicles with no conflicting trajectories pass through the intersection, such as RL and LD, or DR and UD, while the other vehicles wait for the next collision-free chance, which leads to long episodes. However, such a policy is too conservative and suffers from poor efficiency because every step incurs a reward of -1. Therefore, the policy subsequently optimizes this process to avoid long waiting times, leading to the decrease of the mean episode length. Besides, MA-PPO converges faster than the original PPO in terms of the mean episode length, showing the same trend as Fig. 6(a).

(a) Episode with collision
(b) Distance vs Timestep
(c) Velocity vs Timestep
(d) Action vs Timestep
Fig. 7: Example of a colliding episode in the experiment, including the distance to the intersection center, the velocity, and the action taken by each vehicle during the whole episode

Fig. 7 visualizes one episode in the 20th iteration of the training process. At this point, VEH5 (mode: LD) has passed through the intersection successfully, but VEH4 (mode: RL) and VEH8 (mode: UD) collide at the last step of the episode. From Fig. 7(b) we can see that all 8 vehicles are approaching the center of the intersection, yet almost none of them decelerate to avoid a collision except VEH2. During the last few steps before the collision, the velocities of VEH4 and VEH8 maintain their trends without significant change. In contrast, VEH2 anticipates a collision ahead, and the policy begins to control its acceleration to avoid another collision. In this episode, the agent still receives a reward of 10 because VEH2 passes successfully. We can conclude that, at this point, the learned policy cannot coordinate all vehicles successfully, and some vehicles such as VEH4 and VEH8 have not yet learned an effective policy for this intersection situation.

Fig. 8 shows a successful example in which the central decision agent has learned a good policy after 1000 iterations of training. At this moment, VEH3 (mode: RU) has already passed through the intersection successfully. VEH8 slows down from the beginning until step 26 to wait for VEH7 to turn right. Besides, VEH2 (mode: DL) has to wait for VEH7, which is closer to the center of the intersection. Also, VEH4 (mode: RL) has to wait for VEH2 to turn left and therefore decreases its acceleration. The agent learns a human-like policy: it detects potential collisions according to the distances to the center of the intersection and assigns the order in which vehicles pass through. One reasonable explanation for VEH8 waiting and passing last is that it is farther from the center of the intersection than any other vehicle, as shown in Fig. 8(b). After step 26, once VEH7, VEH2 and VEH4 have passed the central area of the intersection, VEH8 starts to speed up and then passes the intersection successfully. As shown in Fig. 8(d), the action of VEH8 remains between -2 and -1 from the initial time to time-step 26, after which it accelerates rapidly, which also illustrates that VEH8 has learned a waiting policy to avoid collision.

(a) Episode without collision
(b) Distance vs Timestep
(c) Velocity vs Timestep
(d) Action vs Timestep
Fig. 8: Successful example in which all vehicles pass the intersection in the experiment, including the distance to the intersection center, the velocity, and the action taken by each vehicle during the whole episode

VEH5 and VEH6 have similar velocity curves, both of which show large changes in velocity. At the beginning, they slow down and keep a low velocity until VEH2, VEH3 and VEH4 pass the intersection. After time-step 25, both of them begin to speed up and pass the intersection, because with no potential collision left around the intersection area, a larger acceleration reduces the accumulated negative step reward. On the other hand, the velocity curves of VEH2, VEH3 and VEH4 demonstrate that they learn to speed up so that they can pass the intersection quickly. In Fig. 8(b), compared with the other vehicles, the distances of VEH5 and VEH6 decrease slowly at first, and their curves become steeper after time-step 25, which also shows that VEH5 and VEH6 learn a waiting policy to avoid collision.

In conclusion, the results show that RL-based control can handle the intersection situation with multiple vehicles, considering not only collision avoidance but also passing efficiency. Unlike hand-crafted rules for intersection control, our algorithm coordinates vehicles from different directions according to their velocities and distances to the center of the intersection. Our RL-based method is expected to show greater advantages when there are more vehicles, where hand-crafted rules may not work or it is difficult to find the optimal coordination for all vehicles. Besides, we use a prior model to accelerate the learning process and obtain a clear acceleration effect, which shows the importance of a prior model in the learning algorithm.

VI Conclusion

In this paper, we employ a reinforcement learning method to solve centralized conflict-free cooperation for connected and automated vehicles at intersections, which has long been regarded as a challenging problem due to its large scale and high dimensionality. We use the PPO algorithm, which has state-of-the-art performance on several benchmarks, as our baseline, and we propose MA-PPO to enhance sample efficiency and speed up the learning process. A typical 4-direction intersection containing 8 different vehicle modes is studied. We find that our method is more sample-efficient than PPO and that the learned driving policy shows intelligent behaviors that increase driving safety and traffic efficiency, which indicates that RL is promising for centralized cooperative driving at intersections.

VII Acknowledgments

This work is partially supported by the International Science and Technology Cooperation Program of China under 2016YFE0102200. Special thanks are given to TOYOTA for funding this study. We would like to thank Mr. Jingliang Duan and Mr. Zhengyu Liu for their valuable suggestions throughout this research.

References

  1. M. Ahmane, A. Abbas-Turki, F. Perronnet, J. Wu, A. El Moudni, J. Buisson and R. Zeo (2013) Modeling and controlling an isolated urban intersection based on cooperative vehicles. Transportation Research Part C: Emerging Technologies 28, pp. 44–62.
  2. P. Dai, K. Liu, Q. Zhuge, E. H. Sha, V. C. S. Lee and S. H. Son (2016) Quality-of-experience-oriented autonomous intersection control in vehicular networks. IEEE Transactions on Intelligent Transportation Systems 17 (7), pp. 1956–1967.
  3. M. Deisenroth and C. E. Rasmussen (2011) PILCO: a model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 465–472.
  4. A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez and V. Koltun (2017) CARLA: an open urban driving simulator. arXiv preprint arXiv:1711.03938.
  5. K. Dresner and P. Stone (2008) A multiagent approach to autonomous intersection management. Journal of Artificial Intelligence Research 31, pp. 591–656.
  6. V. Feinberg, A. Wan, I. Stoica, M. I. Jordan, J. E. Gonzalez and S. Levine (2018) Model-based value estimation for efficient model-free reinforcement learning. arXiv preprint arXiv:1803.00101.
  7. Y. Feng, K. L. Head, S. Khoshmagham and M. Zamanipour (2015) A real-time adaptive signal control in a connected vehicle environment. Transportation Research Part C: Emerging Technologies 55, pp. 460–473.
  8. F. Gao, X. Hu, S. E. Li, K. Li and Q. Sun (2018) Distributed adaptive sliding mode control of vehicular platoon with uncertain interaction topology. IEEE Transactions on Industrial Electronics 65 (8), pp. 6352–6361.
  9. N. J. Goodall, B. L. Smith and B. Park (2013) Traffic signal control with connected vehicles. Transportation Research Record 2381 (1), pp. 65–72.
  10. I. Grondman (2015) Online model learning algorithms for actor-critic control.
  11. N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez and Y. Tassa (2015) Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pp. 2944–2952.
  12. M. Jaritz, R. De Charette, M. Toromanoff, E. Perot and F. Nashashibi (2018) End-to-end race driving with deep reinforcement learning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 2070–2075.
  13. J. Duan, S. E. Li, Y. Guan, Q. Sun and B. Cheng (2019) Hierarchical reinforcement learning for self-driving decision-making without reliance on labeled driving data. IET Intelligent Transport Systems.
  14. S. M. Kakade (2002) A natural policy gradient. In Advances in Neural Information Processing Systems, pp. 1531–1538.
  15. A. Kendall, J. Hawke, D. Janz, P. Mazur, D. Reda, J. Allen, V. Lam, A. Bewley and A. Shah (2018) Learning to drive in a day. arXiv preprint arXiv:1807.00412.
  16. T. Kurutach, I. Clavera, Y. Duan, A. Tamar and P. Abbeel (2018) Model-ensemble trust-region policy optimization. arXiv preprint arXiv:1802.10592.
  17. J. Lee and B. Park (2012) Development and evaluation of a cooperative vehicle intersection control algorithm under the connected vehicles environment. IEEE Transactions on Intelligent Transportation Systems 13 (1), pp. 81–90.
  18. S. E. Li, X. Qin, K. Li, J. Wang and B. Xie (2017) Robustness analysis and controller synthesis of homogeneous vehicular platoons with bounded parameter uncertainty. IEEE/ASME Transactions on Mechatronics 22 (2), pp. 1014–1025.
  19. S. E. Li, S. Xu, X. Huang, B. Cheng and H. Peng (2015) Eco-departure of connected vehicles with V2X communication at signalized intersections. IEEE Transactions on Vehicular Technology 64 (12), pp. 5439–5449.
  20. T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver and D. Wierstra (2015) Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
  21. V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver and K. Kavukcuoglu (2016) Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928–1937.
  22. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg and D. Hassabis (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529–533.
  23. Z. Ruiming, L. Chengju and C. Qijun (2018) End-to-end control of kart agent with deep reinforcement learning. In 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1688–1693.
  24. J. Schulman, S. Levine, P. Abbeel, M. Jordan and P. Moritz (2015) Trust region policy optimization. In International Conference on Machine Learning, pp. 1889–1897.
  25. J. Schulman, P. Moritz, S. Levine, M. Jordan and P. Abbeel (2015) High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438.
  26. J. Schulman, F. Wolski, P. Dhariwal, A. Radford and O. Klimov (2017) Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
  27. D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam and M. Lanctot (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529 (7587), pp. 484.
  28. D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai and A. Bolton (2017) Mastering the game of Go without human knowledge. Nature 550 (7676), pp. 354.
  29. R. S. Sutton and A. G. Barto (2018) Reinforcement learning: an introduction. MIT Press.
  30. R. S. Sutton, D. A. McAllester, S. P. Singh and Y. Mansour (2000) Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pp. 1057–1063.
  31. P. Thomas (2014) Bias in natural actor-critic algorithms. In International Conference on Machine Learning, pp. 441–448.
  32. P. Wolf, C. Hubschneider, M. Weber, A. Bauer, J. Härtl, F. Dürr and J. M. Zöllner (2017) Learning how to drive in a real world simulation with deep Q-networks. In 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 244–250.
  33. Y. Wu, S. E. Li, J. Cortés and K. Poolla (2019) Distributed sliding mode control for nonlinear heterogeneous platoon systems with positive definite topologies. IEEE Transactions on Control Systems Technology.
  34. X. Xiong, J. Wang, F. Zhang and K. Li (2016) Combining deep reinforcement learning and safety based control for autonomous driving. arXiv preprint arXiv:1612.00147.
  35. B. Xu, S. E. Li, Y. Bian, S. Li, X. J. Ban, J. Wang and K. Li (2018) Distributed conflict-free cooperation for multiple connected vehicles at unsignalized intersections. Transportation Research Part C: Emerging Technologies 93, pp. 322–334.