Computing Systems for Autonomous Driving: State-of-the-Art and Challenges

The recent proliferation of computing technologies, e.g., sensors, computer vision, machine learning, and hardware acceleration, and the broad deployment of communication mechanisms, e.g., DSRC, C-V2X, and 5G, have pushed the horizon of autonomous driving, which automates the decision and control of vehicles by leveraging perception results from multiple sensors. The key to the success of these autonomous systems is making reliable decisions in real time. However, accidents and fatalities caused by early deployed autonomous vehicles still arise from time to time: the real traffic environment remains too complicated for current autonomous driving computing systems to understand and handle. In this paper, we present the state-of-the-art computing systems for autonomous driving, including seven performance metrics and nine key technologies, followed by eleven challenges and opportunities to realize autonomous driving. We hope this paper will gain attention from both the computing and automotive communities and inspire more research in this direction.

I Introduction

Recently, with vast improvements in computing technologies, e.g., sensors, computer vision, machine learning, and hardware acceleration, and the wide deployment of communication mechanisms, e.g., Dedicated Short-Range Communications (DSRC), Cellular V2X (C-V2X), and 5G, autonomous driving techniques have attracted massive attention from both the academic and automotive communities. According to [66], the global autonomous driving market is expected to grow to $173.15B by 2030. Many companies have made enormous investments in this domain, including Waymo, GM Cruise, Argo AI, Tesla, Baidu, Uber, etc. Several fleets of Society of Automotive Engineers (SAE) Autonomy Level 4 vehicles have been deployed for testing in the United States and China [1, 32].

To achieve autonomous driving, the essential task is to make the vehicle understand the environment correctly and perform safe controls in real time. Rich sensors, including cameras, LiDAR, radar, Inertial Measurement Units (IMU), Global Navigation Satellite System (GNSS) receivers, and sonar, as well as powerful computation devices, are installed on the vehicle [33, 205, 15, 72, 127]. This design makes the autonomous vehicle a truly powerful "computer on wheels." In addition to hardware, the rapid development of deep learning algorithms for object/lane detection, simultaneous localization and mapping (SLAM), and vehicle control also promotes the real deployment and prototyping of autonomous vehicles [167, 145, 202, 235]. The autonomous vehicle's computing systems are defined to cover everything excluding the vehicle's mechanical parts, including sensors, computation, communication, storage, power management, and the full software stack. Plenty of algorithms and systems are designed to process sensor data and make reliable decisions in real time.

However, news of fatalities caused by early developed autonomous vehicles (AVs) arises from time to time. As of August 2020, five self-driving car fatalities had happened under level-2 autonomous driving: four involving Tesla and one involving Uber [122]. Table I summarizes the date, place, company, and reason for each of these five fatalities. The first two Tesla fatalities happened in 2016 because neither the Autopilot system nor the driver detected the truck ahead: in one case under thick haze, and in the other mistaking the truck for open sky. In another Tesla incident, in 2018, Autopilot failed to recognize the highway divider and crashed into it. The most recent Tesla fatality, in 2019, happened because the system failed to recognize a semitrailer. The Uber fatality happened because the autonomous driving system failed to recognize a pedestrian jaywalking at night.

In summary, all four incidents from Tesla are due to perception failure, while Uber's incident happened because of a failure to predict human behavior. Another fact to pay attention to is that, currently, the field testing of level-2 autonomous driving vehicles mostly happens in places with good weather and lighting conditions, such as Arizona and Florida. The real traffic environment is too complicated for current autonomous driving systems to understand and handle. The objectives of level-4 and level-5 autonomous driving require colossal improvements of the computing systems for autonomous vehicles.

This paper presents the state-of-the-art computing systems for autonomous driving, including seven performance metrics and nine key technologies, followed by eleven challenges and opportunities to realize autonomous driving. The remaining parts of this paper are organized as follows. Section II discusses the reference architecture of the computing systems for autonomous driving. In Section III, we show the metrics used in the evaluation of the computing system. Section IV discusses the key technologies for autonomous driving. Section V presents the possible challenges and opportunities. Finally, this paper concludes in Section VI.

Date | Place | Company | Reason
20 Jan. 2016 | Handan, Hebei, China | Tesla | failed to recognize a truck under thick haze
07 May 2016 | Williston, Florida, USA | Tesla | mistook a truck for open sky
18 Mar. 2018 | Tempe, Arizona, USA | Uber | failed to recognize a jaywalking pedestrian at night
23 Mar. 2018 | Mountain View, California, USA | Tesla | failed to recognize the highway divider
01 Mar. 2019 | Delray Beach, Florida, USA | Tesla | failed to recognize a semitrailer
TABLE I: List of fatalities caused by Level-2 autonomous driving vehicles.

II Reference Architecture

As an essential part of the whole autonomous driving vehicle, the computing system plays a significant role in the whole pipeline of driving autonomously. There are two types of designs for computing systems on autonomous vehicles: modular and end-to-end. Modular design decouples localization, perception, control, etc. into separate modules, making it possible for people with different backgrounds to work together [211, 27, 2, 21, 186]. End-to-end design is largely motivated by the development of artificial intelligence. Compared with modular design, an end-to-end system relies purely on machine learning techniques to process the sensor data and generate control commands for the vehicle [29, 142, 24, 216, 175, 85]. Although the end-to-end approach promises to decrease the modular design's error propagation and computation complexity, there is no real deployment and testing of it yet [225]. As most prototypes are still modular, we choose the modular design as the basis of the computing system reference architecture. Figure 1 shows a representative reference architecture of the computing system on autonomous vehicles. Generally, the computing system for autonomous driving vehicles can be divided into computation, communication, storage, security and privacy, and power management. Each part covers four layers: sensors, operating system (OS), middleware, and applications. The following paragraphs discuss the corresponding components.

For safety, one of the essential tasks is to enable the "computer" to understand the road environment and send correct control messages to the vehicle. The whole pipeline starts with the sensors. Plenty of sensors can be found on an autonomous driving vehicle: camera, LiDAR, radar, GPS/GNSS, ultrasonic, inertial measurement unit (IMU), etc. These sensors capture real-time environment information for the computing system, like the eyes of a human being. The operating system (OS) plays a vital role between hardware devices (sensors, computation, communication) and applications. Within the OS, drivers are the bridges between the software and hardware devices; the network module provides abstract communication interfaces; the scheduler manages the competition for all resources; the file system provides an abstraction over storage resources. For safety-critical scenarios, the operating system must satisfy real-time requirements.

As the middle layer between applications and operating systems [179], middleware provides usability and programmability to develop and improve systems more effectively. Generally, middleware supports publish/subscribe, remote procedure call (RPC) or service, time synchronization, and multi-sensor collaboration. A typical example of a middleware system is the Robot Operating System (ROS) [162]. On top of the operating system and middleware, several applications, including object/lane detection, SLAM, prediction, planning, and vehicle control, are implemented to generate control commands and send them to the vehicle's drive-by-wire system. Inside the vehicle, several Electronic Control Units (ECUs) are used to control the brake, steering, etc., connected via the Controller Area Network (CAN bus). In addition to processing the data from on-board sensors, the autonomous driving vehicle is also supposed to communicate with other vehicles, traffic infrastructure, pedestrians, etc. as complementary sources of information.

Fig. 1: Representative reference architecture of the computing system for autonomous driving.

III Metrics for the Computing System

According to the report about autonomous driving technology from the National Science & Technology Council (NSTC) and the United States Department of Transportation (USDOT) [48], ten technology principles are designed to foster research, development, and integration of AVs and guide consistent policy across the U.S. Government. These principles cover safety, security, cybersecurity, privacy, data security, mobility, accessibility, etc. Corresponding to the autonomous driving principles, we define several metrics to evaluate the computing system’s effectiveness.

Accuracy Accuracy is defined to evaluate the difference between the detected/processed results and the ground truth. Take object detection and lane detection for example: Intersection over Union (IOU) and mean Average Precision (mAP) are used to quantify the difference between the detected bounding boxes of objects/lanes and their real positions [50, 63]. For vehicle control, accuracy is the difference between the expected brake/steering commands and the vehicle's actual controls.
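As a concrete illustration, IOU for axis-aligned bounding boxes can be computed as below; the corner-coordinate box format [x1, y1, x2, y2] is an assumption for illustration, not the exact convention of the cited benchmarks:

```python
def iou(box_a, box_b):
    # Boxes are [x1, y1, x2, y2] with (x1, y1) the upper-left corner.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection overlapping half of a 10x10 ground-truth box:
print(iou([0, 0, 10, 10], [5, 0, 15, 10]))  # -> 0.3333333333333333
```

A perfect detection gives IOU = 1; benchmarks typically count a detection as correct when IOU exceeds a threshold such as 0.5.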

Timeliness Safety is always the highest priority. Autonomous driving vehicles should be able to control themselves autonomously in real time. According to [96], if the vehicle is driving at 40 km per hour in an urban area and wants the control to take effect every 1 meter, the whole pipeline's desired response time should be less than 90 ms. To satisfy this desired response time, each module in the computing system must finish before its deadline.
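The 90 ms figure follows directly from the speed and the control granularity; a minimal sketch of the arithmetic (the function name is illustrative):

```python
def pipeline_deadline_ms(speed_kmh, control_interval_m):
    # Time available to traverse one control interval, in milliseconds.
    speed_ms = speed_kmh * 1000.0 / 3600.0   # km/h -> m/s
    return control_interval_m / speed_ms * 1000.0

# At 40 km/h with control effective every 1 m, the pipeline must
# respond within about 90 ms:
print(round(pipeline_deadline_ms(40, 1)))  # -> 90
```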

Power Since the on-board battery powers the whole computing system, the computing system's power dissipation can be a big issue. For electric vehicles, the computing system's power dissipation for autonomous driving reduces the vehicle's mileage by up to 30% [9]. In addition to mileage, heat dissipation is another issue caused by high power usage. Currently, the NVIDIA Drive PX Pegasus provides 320 INT8 TOPS of AI computational power within a 500-watt budget [138]. Adding the power budget of sensors, communication devices, etc., the total power dissipation will be higher than 1000 watts. The power budget remains a significant obstacle to producing real autonomous driving vehicles.

Cost Cost is one of the essential factors that affect the broad deployment of autonomous vehicles. According to [188, 68], the cost of a level 4 autonomous driving vehicle can reach $300,000, of which the sensors, computing devices, and communication devices account for almost $200,000. In addition to hardware cost, operator training and vehicle maintenance costs of AVs, such as insurance, parking, and repair, are also higher than for traditional vehicles.

Reliability To guarantee the safety of the vehicle, reliability is a big concern. On one hand, the worst-case execution time may exceed the deadline; interruptions or emergency stops should be applied in such cases. On the other hand, failures happen in sensors, computing/communication devices, algorithms, and systems integration. How to handle these potential failures is also an essential part of the design of the computing system.

Privacy As the vehicle captures a massive amount of sensor data from the environment, vehicle data privacy becomes a big issue. For example, the pedestrian’s face and the license plate captured by the vehicle’s camera should be masked as soon as possible. Furthermore, who owns the driving data is also an important issue, which requires the system’s support for data access, storage, and communication.

Security The security of the on-board computing system is essential to the success of autonomous driving since, ultimately, the computing system is responsible for the driving process. Cyber attacks can be launched against any part of the computing system. We divide security into four aspects: sensing security, communication security, data security, and control security. We envision that the on-board computing system will have to pass a certain level of security testing before being deployed in real products.

IV Key Technologies

Fig. 2: A typical example of a computing system for autonomous driving.

An autonomous vehicle involves multiple subjects, including computing systems, machine learning, communication, robotics, mechanical engineering, and systems engineering, to integrate different technologies and innovations. Figure 2 shows a typical example of an autonomous driving vehicle called Hydra, developed by The CAR Lab at Wayne State University [200]. An NVIDIA Drive PX2 is used as the vehicle computation unit (VCU). Multiple sensors, including six cameras, six radars, one LiDAR, one GNSS antenna, and one DSRC antenna, are installed for sensing and connected to the VCU. The CAN bus is used to transmit messages between the different ECUs controlling steering, throttle, shifting, brake, etc. Between the NVIDIA Drive PX2 and the vehicle's CAN bus, a drive-by-wire system is deployed as an actuator of the vehicle control commands from the computing system. Besides, a power distribution system provides extra power for the computing system. It is worth noting that the computing system's power consumption is non-negligible in modern AVs [124]. In this section, we summarize several key technologies and discuss their state-of-the-art.

IV-A Sensors


Camera

In terms of usability and cost, cameras are the most popular sensors on autonomous driving vehicles. The camera image gives straightforward 2D information, making it useful in tasks like object classification and lane tracking. Besides, the range of the camera can vary from several centimeters to nearly one hundred meters. The relatively low cost and commercial availability also contribute to cameras' wide deployment on real autonomous driving vehicles. However, since cameras rely on light, their images are degraded by low lighting or bad weather conditions; the usability of the camera decreases significantly under heavy fog, rain, and snow. Besides, the data volume from the camera is also a challenge: on average, one camera can produce 20-40 MB of data per second.


Radar

Radar stands for Radio Detection and Ranging: it detects objects and measures their range using radio waves. Radar measures the Time of Flight (TOF) of the reflected signal and calculates distance and speed from it. Generally, the working frequency of a vehicle radar system is 24 GHz or 77 GHz. Compared with 24 GHz, 77 GHz offers higher accuracy in distance and speed detection; it also has a smaller antenna size and suffers less interference. For 24 GHz radar, the maximum detection range is 70 meters, while it increases to 200 meters for 77 GHz radar. According to [33], the price of Continental's long-range radar can be around $3,000, which is higher than a camera's price. However, compared with a camera, radar is less affected by weather and low-lighting environments, making it very useful in applications like object detection and distance estimation. Its data size is also smaller than the camera's: each radar produces 10-100 KB per second.
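The TOF-based range and speed arithmetic described above can be sketched as follows; the function names and the sample timing value are illustrative, not taken from any radar datasheet:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance_m(round_trip_s):
    # One-way distance: the radio pulse travels to the target and back,
    # so the range is half the round-trip distance.
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def radial_speed_ms(range1_m, range2_m, dt_s):
    # Relative (radial) speed estimated from two consecutive range
    # measurements taken dt_s seconds apart.
    return (range2_m - range1_m) / dt_s

# An echo returning after ~467 ns corresponds to a target about 70 m
# away, near the 24 GHz radar's maximum range:
print(round(tof_distance_m(467e-9)))  # -> 70
```

Production radars estimate speed from the Doppler shift rather than differencing ranges, but the two-measurement version above conveys the idea.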


LiDAR

Similar to radar, LiDAR's distance information is also calculated from the TOF. The difference is that LiDAR scans with a laser, while radar uses longer-wavelength electromagnetic waves. LiDAR consists of a laser generator and a high-accuracy laser receiver. It generates a three-dimensional image of its surroundings, so it is widely used to detect both static and moving objects. LiDAR shows good performance over a range from several centimeters to 200 meters, with centimeter-level distance accuracy. It is widely used in object detection, distance estimation, edge detection, Simultaneous Localization and Mapping (SLAM) [232, 202], and High-Definition (HD) map generation [116, 105, 51, 235]. However, in terms of cost, LiDAR is less competitive than other sensors. According to [205], the 16-line Velodyne LiDAR costs almost $8,000, while the Velodyne VLS-128E costs over $100,000. The high cost restricts the wide deployment of LiDAR on autonomous vehicles and contributes to the vehicles' high overall cost. LiDAR can generate almost 10-70 MB of data per second, a huge amount for the computing platform to process in real time.

Ultrasonic sensor

The ultrasonic sensor detects distance using ultrasound, a sound wave with a frequency higher than 20 kHz; the distance is again measured via TOF. The ultrasonic sensor's data size is close to the radar's, 10-100 KB per second. Besides, the ultrasonic sensor performs well in bad weather and low-lighting environments. It is also much cheaper than the camera and radar: the price of an ultrasonic sensor is typically less than $100. Its shortcoming is a maximum range of only 20 meters, which limits its application to short-range detection like parking assistance.

Metrics | Human | Camera | Radar | LiDAR | Ultrasonic
Technique | - | Light | Electromagnetic wave | Laser reflection | Ultrasound
Sensing range | 0-200m | 0-100m | 1cm-200m (77GHz); 1cm-70m (24GHz) | 0.7-200m | 0-20m
Cost | - | $500 | $3,000 | $100,000 | $100
Data per second | - | 20-40MB | 10-100KB | 10-70MB | 10-100KB
Bad weather functionality | Fair | Poor | Good | Fair | Good
Low lighting functionality | Poor | Fair | Good | Good | Good
Application scenarios | Object detection; object classification; edge detection; lane tracking | Object classification; edge detection; lane tracking | Object detection; distance estimation | Object detection; distance estimation; edge detection | Object detection; distance estimation
TABLE II: Comparisons of camera, radar, LiDAR, and ultrasonic sensor.


GPS/GNSS and IMU

Besides sensing and perceiving the surrounding environment, localization is also a significant task running on top of the autonomous driving system. In the localization system of the autonomous vehicle, the Global Positioning System (GPS), Global Navigation Satellite System (GNSS), and Inertial Measurement Unit (IMU) are widely deployed. GNSS is the general name for all satellite navigation systems, including GPS developed by the US, Galileo from Europe, and the BeiDou Navigation Satellite System (BDS) [15] from China. The accuracy of GPS can vary from several centimeters to several meters depending on the observation values and processing algorithms applied [72]. GPS's strengths are its low cost and the fact that its error does not accumulate over time. Its drawbacks are that the GPS deployed on current vehicles only has meter-level accuracy and that it requires an unobstructed view of the sky, so it does not work in environments like tunnels. Besides, the GPS sensing data updates only every 100 ms, which is not enough for real-time vehicle localization.

IMU stands for inertial measurement unit, which consists of gyroscopes and accelerometers. Gyroscopes measure the angular speed around the three axes to calculate the carrier's orientation, while the accelerometer measures the linear acceleration along the three axes and can be used to calculate the carrier's speed and position. The strength of the IMU is that it does not require an unobstructed view of the sky; the drawback is that its accuracy is low and its error accumulates over time. The IMU can be a complementary sensor to GPS because it updates every 5 ms and works properly in environments like tunnels. Usually, a Kalman filter is applied to combine the sensing data from GPS and IMU to obtain fast and accurate localization results [127].
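The GPS-IMU fusion idea can be illustrated with a deliberately simplified one-dimensional Kalman filter; real systems fuse a full 3D state, and all gains and noise values below are illustrative assumptions:

```python
class GpsImuFuser:
    # Simplified 1-D Kalman filter: IMU-derived velocity drives the
    # predict step (every 5 ms); GPS position drives the update step
    # (every 100 ms). Noise values are illustrative only.
    def __init__(self, x0, p0, q, r):
        self.x = x0   # position estimate (m)
        self.p = p0   # estimate variance
        self.q = q    # process noise density (models IMU drift)
        self.r = r    # GPS measurement noise variance

    def predict(self, velocity, dt):
        self.x += velocity * dt
        self.p += self.q * dt

    def update(self, gps_position):
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (gps_position - self.x)
        self.p *= 1.0 - k
        return self.x

fuser = GpsImuFuser(x0=0.0, p0=1.0, q=0.01, r=4.0)
for _ in range(20):                    # 20 IMU predict steps of 5 ms
    fuser.predict(velocity=10.0, dt=0.005)
est = fuser.update(gps_position=1.2)   # GPS fix arrives after 100 ms
# est lies between the IMU dead-reckoned 1.0 m and the GPS's 1.2 m.
```

In practice the predict step would first integrate IMU acceleration into velocity; here velocity is taken as given to keep the sketch short.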

Table II compares the camera, radar, LiDAR, and ultrasonic sensors with human beings. From the comparison, we can conclude that although humans have strengths in sensing range and cover more application scenarios than any single sensor, the combination of all the sensors can do a better job than a human, especially in bad weather and low-lighting conditions.

IV-B Data Source

Data characteristics

As listed before, various sensors, such as GPS, IMU, camera, LiDAR, and radar, are equipped on AVs, and they generate hundreds of megabytes of data per second that are fed to different autonomous driving algorithms. The data in AVs can be classified into two categories: real-time data and historical data. Typically, real-time data is transmitted by a messaging system with the Pub/Sub pattern in most AV solutions, enabling different applications to access the same data simultaneously. Historical data includes application data and real-time data that has been persisted, where structured data (e.g., GPS readings) is stored in a database and unstructured data (e.g., video) is stored as files.
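The Pub/Sub pattern mentioned above can be sketched with a minimal in-process message bus; the topic name and message fields below are illustrative, not taken from any specific AV middleware:

```python
from collections import defaultdict

class MessageBus:
    # Minimal in-process Pub/Sub sketch: several applications subscribe
    # to a topic, and each receives every message published on it.
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
received = []
bus.subscribe("/sensor/gps", lambda m: received.append(("planner", m)))
bus.subscribe("/sensor/gps", lambda m: received.append(("logger", m)))
bus.publish("/sensor/gps", {"lat": 42.35, "lon": -83.06})
# Both subscribers observe the same GPS message.
```

Middleware such as ROS implements the same pattern across processes and machines, with serialization and transport underneath.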

Dataset and Benchmark

Autonomous driving datasets are collected by survey fleet vehicles driving on the road, and they provide the training data for research in machine learning, computer vision, and vehicle control. Several popular datasets provide benchmarks, which are quite useful in autonomous driving systems and algorithms design. Here are a few popular datasets: (1) KITTI: As one of the most famous autonomous driving datasets, the KITTI [61] dataset covers stereo, optical flow, visual odometry, 3D object detection, and 3D tracking. It provides several benchmarks, such as stereo, flow, scene, optical flow, depth, odometry, object tracking [84], road, and semantics [57]. (2) Cityscapes: For the semantic understanding of urban street scenes, the Cityscapes [194] dataset includes 2D semantic segmentation at the pixel level, instance level, and panoptic semantic labeling, and provides corresponding benchmarks for them. (3) BDD100K: As a large-scale and diverse driving video database, BDD100K [223] consists of 100,000 videos and covers different weather conditions and times of the day. (4) DDD17: As the first end-to-end dynamic and active-pixel vision sensor (DAVIS) driving dataset, DDD17 [20] has more than 12 hours of DAVIS sensor data under different scenarios and weather conditions, as well as vehicle control information like steering, throttle, and brake.


Data labeling is an essential step in a supervised machine learning task, and the quality of the training data determines the quality of the model. Here are a few different types of annotation methods: (1) Bounding boxes: the most commonly used annotation method (rectangular boxes) in object detection tasks to define the location of the target object, determined by the x- and y-axis coordinates of the upper-left and lower-right corners of the rectangle. (2) Polygonal segmentation: since objects are not always rectangular, polygonal segmentation is another annotation approach where complex polygons define the object's shape and location in a much more precise way. (3) Semantic segmentation: a pixel-wise annotation where every pixel in an image is assigned to a class; it is primarily used in cases where environmental context is essential. (4) 3D cuboids: these provide 3D representations of objects, allowing models to distinguish features like volume and position in 3D space. (5) Key-point and landmark annotation: used to detect small objects and shape variations by creating dots across the image. As to annotation software, MakeSense.AI [135], LabelImg [110], VGG Image Annotator [206], LabelMe [111], Scalable [178], and RectLabel [164] are popular image annotation tools.
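As a small illustration of bounding-box formats, the corner representation described above can be converted to the [x, y, width, height] form that some annotation tools expect (the function name is hypothetical):

```python
def corners_to_xywh(box):
    # Convert an [x1, y1, x2, y2] corner annotation (upper-left and
    # lower-right corners) into [x, y, width, height].
    x1, y1, x2, y2 = box
    return [x1, y1, x2 - x1, y2 - y1]

print(corners_to_xywh([10, 20, 110, 70]))  # -> [10, 20, 100, 50]
```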

IV-C Algorithms

Plenty of algorithms are deployed in the computing system for sensing, perception, localization, prediction, and control. In this part, we present state-of-the-art work on algorithms including object detection, lane detection, localization and mapping, prediction and planning, and vehicle control.

Object detection

Accurate object detection under challenging scenarios is essential for real-world deep learning applications in AVs [139]. In general, it is widely accepted that the development of object detection algorithms has gone through two typical phases: (1) the conventional object detection phase, and (2) the deep learning supported object detection phase [237]. Viola-Jones detectors [207], the Histogram of Oriented Gradients (HOG) feature descriptor [34], and the Deformable Part-based Model (DPM) [52] are typical traditional object detection algorithms. Although today's most advanced approaches have far exceeded the accuracy of traditional methods, many dominant algorithms are still deeply influenced by their valuable insights, such as hybrid models, bounding box regression, etc. As to deep learning-based object detection approaches, the state-of-the-art methods include the Regions with CNN features (RCNN) series [65, 170, 63, 77], the Single Shot MultiBox Detector (SSD) series [129, 55], and the You Only Look Once (YOLO) series [165, 166, 167]. Girshick et al. first introduced deep learning into the object detection field by proposing RCNN in 2014 [63, 64]. Later on, Fast RCNN [65] and Faster RCNN [170] were developed to accelerate detection speed. In 2015, the first one-stage object detector, YOLO, was proposed [165]. Since then, the YOLO series algorithms have been continuously improved; for example, YOLOv3 [167] is one of the most popular approaches, and YOLOv4 [23] is the latest version of the series. To address the trade-off between speed and accuracy, Liu et al. proposed SSD [129] in 2015, which introduces regression techniques for object detection. Then, RetinaNet was proposed in 2017 [121] to further improve detection accuracy by introducing a new loss function that reshapes the standard cross-entropy loss.

Lane detection

Performing accurate lane detection in real time is a crucial function of advanced driver-assistance systems (ADAS) [145], since it enables AVs to drive within the road lanes correctly to avoid collisions, and it supports the subsequent trajectory planning and lane departure warning.

Traditional lane detection approaches (e.g., [25, 39, 89, 92, 195, 214]) aim to detect lane segments based on diverse handcrafted cues, such as color-based features [30], the structure tensor [130], the bar filter [197], and ridge features [131]. This information is usually combined with a Hough transform [123, 234] and particle or Kalman filters [101, 35, 197] to detect lane markings. Then, post-processing methods are leveraged to filter out misdetections and classify lane points to output the final lane detection results [81]. In general, however, these approaches are prone to robustness issues under road scene variations, e.g., changing from a city scene to a highway scene, and they struggle to achieve reasonable accuracy in challenging scenarios without clear visual clues.

Recently, deep learning-based segmentation approaches have dominated the lane detection field with more accurate performance [69]. For instance, VPGNet [114] proposes a multi-task network for lane marking detection. To better exploit the visual information of lane markings, SCNN [154] applies a novel convolution operation that aggregates information across dimensions by processing sliced features and adding them together. To accelerate detection speed, lightweight DNNs have been proposed for real-time applications; for example, self-attention distillation (SAD) [85] adopts an attention distillation mechanism. Besides, other methods such as sequential prediction and clustering have also been introduced. In [118], a long short-term memory (LSTM) network is presented to handle the lane's long, thin line structure. Similarly, FastDraw [158] predicts the lane's direction at the pixel level. In [87], lane detection is formulated as a binary clustering problem, and the method proposed in [86] also uses a clustering approach. Later on, a 3D form of lane detection [59] was introduced to handle the non-flat ground issue.

Localization and mapping

Localization and mapping are fundamental to autonomous driving. Localization is responsible for finding the ego-position relative to a map [108]. Mapping constructs multi-layer high-definition (HD) maps [91] for path planning. Therefore, the accuracy of localization and mapping affects the feasibility and safety of path planning. Currently, GPS-IMU based localization methods are widely used in navigation software like Google Maps. However, GPS-IMU systems cannot fulfill the accuracy required for urban automated driving [204].

Currently, systems that use a pre-built HD map are more practical and accurate. There are three main types of HD maps: landmark-based, point cloud-based, and vision-based. Landmarks such as poles, curbs, signs, and road markers can be detected with LiDAR [76] or cameras [191]. Landmark searching consumes less computation than the point cloud-based approach but fails in scenarios where landmarks are insufficient. The point cloud contains detailed information about the environment, with thousands of points from LiDAR [236] or cameras [190]. Iterative closest point (ICP) [18] and the normal distributions transform (NDT) [19] are two algorithms used in point cloud-based HD map generation. Both utilize numerical optimization to calculate the best match: ICP iteratively selects the closest point pairs to compute the alignment, while NDT represents the map as a combination of normal distributions and uses maximum likelihood estimation to search for the match. NDT's computational complexity is lower than ICP's [134], but it is not as robust. Vision-based HD maps are another direction that has recently become more and more popular, although computational overhead limits their application in real systems. Several methods for matching maps with 2D camera images, as well as matching 2D images to 3D structure, have been proposed [213, 157, 137].
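One ICP iteration, matching points and solving the rigid transform via SVD, can be sketched as follows; this 2D toy version illustrates the idea only and is not the production ICP/NDT code used in HD mapping:

```python
import numpy as np

def icp_step(source, target):
    # One ICP iteration: match each source point to its nearest target
    # point, then solve for the best rigid transform (R, t) via SVD.
    dists = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[dists.argmin(axis=1)]
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t

# Align a translated copy of a small point cloud back onto the original.
target = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
source = target + np.array([0.3, -0.2])
for _ in range(5):
    source = icp_step(source, target)
# source now coincides with target: the translation is recovered.
```

Real implementations use k-d trees for the nearest-neighbor search and operate on 3D scans with many thousands of points.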

In contrast, simultaneous localization and mapping (SLAM) [26] builds the map and localizes the vehicle at the same time. SLAM can be divided into LiDAR-based and camera-based SLAM. Among LiDAR-based SLAM algorithms, LOAM [229] runs in real time. IMLS-SLAM [38] focuses on reducing accumulated drift by utilizing a scan-to-model matching method. Cartographer [78], a SLAM package from Google, improves performance by using sub-maps and loop closure while supporting both 2D and 3D LiDAR. Compared with LiDAR-based SLAM, camera-based SLAM approaches use frame-to-frame matching, of which there are two types: feature-based and direct matching. Feature-based methods [143, 192, 180] extract features and track them to calculate the motion of the camera. Since features are sparse in the image, feature-based methods are also called sparse visual SLAM. Direct matching [146, 46, 109], called dense visual SLAM, matches on dense original information in the image, such as color and depth from an RGB-D camera. The inherent properties of feature-based methods make them faster but cause failures in texture-less environments. Dense SLAM solves the issues of sparse SLAM at the cost of higher computational complexity. For situations that lack computation resources, semi-dense SLAM methods [47, 163], which apply direct matching only to a subset of pixels, have been proposed. Besides the above methods, deep learning is also utilized for feature extraction [221], motion estimation [119], and long-term localization [60].

Prediction and planning

The prediction module evaluates the driving behaviors of the surrounding vehicles and pedestrians for risk assessment [225]. Hidden Markov models (HMMs) have been used to predict a target vehicle's future behavior and to detect unsafe lane-change events [62, 218].

Planning means finding feasible routes on the map from the origin to the destination. GPS navigation systems are known as global planners [14] that plan a feasible global route, but they do not guarantee safety. In this context, local planners are developed [67], which can be divided into three groups: (1) graph-based planners, which give the best path to the destination; (2) sampling-based planners, which randomly scan the environment and only find a feasible path; and (3) interpolating curve planners, which are proposed to smooth the path. A* [75] is a heuristic extension of Dijkstra's algorithm that preferentially searches nodes toward the destination, but it does not consider the vehicle's motion constraints, so paths generated by A* cannot always be executed by the vehicle. To remedy this problem, hybrid A* [141] generates a drivable curve between each pair of nodes instead of a jerky line. Sampling-based planners [94] randomly select nodes in the graph for search, reducing the search time. Among them, the Rapidly-exploring Random Tree (RRT) [112] is the most commonly used method for automated vehicles. As an extension of RRT, RRT* [173, 95] searches for optimal paths satisfying real-time constraints. How to balance the sampling size against computation efficiency is a big challenge for sampling-based planners. Graph-based and sampling-based planners can achieve optimal or sub-optimal but jerky paths, which can then be smoothed with interpolating curve planners.
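As an illustration of how a graph-based planner preferentially searches toward the destination, the sketch below implements textbook A* on a small occupancy grid; the grid size and obstacle layout are arbitrary examples. As the text notes, the resulting grid path is not directly drivable, which is exactly the gap hybrid A* fills by replacing straight grid segments with feasible curves.

```python
import heapq

def a_star(blocked, start, goal, size=5):
    """Grid A* with 4-connected moves; `blocked` is a set of cells.

    The Manhattan-distance heuristic biases the search toward the
    goal; the returned path is a shortest grid path or None.
    """
    def h(c):
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        f, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nxt not in blocked and nxt not in seen
                    and 0 <= nxt[0] < size and 0 <= nxt[1] < size):
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

blocked = {(1, 1), (1, 2), (1, 3)}       # a wall with gaps at the edges
path = a_star(blocked, (0, 0), (4, 4))
print(len(path) - 1)                     # number of moves: 8
```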

Vehicle control

Vehicle control connects the autonomous driving computing system and the drive-by-wire system. It adjusts the steering angle and maintains the desired speed to follow the trajectories produced by the planning module. Typically, vehicle control is accomplished with two controllers: a lateral controller and a longitudinal controller. Controllers must handle rough and curvy roads and quickly varying surface types, such as gravel, loose sand, and mud puddles [83], which are not considered by vehicle planners. The output commands are calculated from the vehicle state and the trajectory by a control law. There are various control laws, such as fuzzy control [5, 43], PID control [13, 160], Stanley control [83], and model predictive control (MPC) [31, 224, 90]. PID control creates outputs based on the proportional, integral, and derivative terms of its inputs. Fuzzy control accepts continuous values between 0 and 1, instead of either 1 or 0, so its outputs respond continuously to the inputs. Stanley control follows the reference path by minimizing the heading angle and cross-track error using a nonlinear control law. MPC performs a finite-horizon optimization to identify the control command. Since it can handle various constraints and use past and current errors to predict more accurate solutions, MPC has been used to solve hard control problems like following overtaking trajectories [40]. Controllers derive control laws depending on the vehicle model; kinematic and dynamic bicycle models are most commonly used. In [104], a comparison is presented to determine which of these two models is more suitable for MPC in terms of forecast error and computational overhead.
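The PID control law above can be written directly from its definition. The sketch below is a minimal longitudinal-speed example against a toy first-order vehicle model; the gains, set point, and plant model are arbitrary illustrative values, not tuned for any real vehicle.

```python
class PID:
    """Textbook PID control law: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical longitudinal control: track a 20 m/s set point against a
# toy vehicle model where the commanded throttle maps directly to acceleration.
pid = PID(kp=0.8, ki=0.1, kd=0.05)
speed, dt = 0.0, 0.1
for _ in range(600):                 # simulate 60 seconds
    throttle = pid.update(20.0 - speed, dt)
    speed += throttle * dt
print(round(speed, 2))               # settles near the 20 m/s set point
```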

IV-D Computation Hardware

To support real-time processing of data from various sensors, powerful computing hardware is essential to autonomous vehicles' safety. Currently, a variety of computing hardware with different designs has appeared on the automotive and computing markets. In this section, we show several representative designs based on the Graphics Processing Unit (GPU), Digital Signal Processor (DSP), Field-Programmable Gate Array (FPGA), and Application-Specific Integrated Circuit (ASIC).

NVIDIA DRIVE AGX is the newest solution from NVIDIA, unveiled at CES 2018 [138]. NVIDIA claims AGX is the world's most powerful System-on-Chip (SoC), ten times more powerful than the NVIDIA Drive PX2 platform. Each DRIVE AGX consists of two Xavier cores, and each Xavier has a custom 8-core CPU and a 512-core Volta GPU. DRIVE AGX is capable of 320 trillion operations per second (TOPS) of processing performance.

Zynq UltraScale+ MPSoC is an automotive-grade product from Xilinx [215]. It is an FPGA-based device designed for autonomous driving, integrating a 64-bit quad-core ARM Cortex-A53 and a dual-core ARM Cortex-R5. This scalable solution claims to deliver the right performance per watt with safety and security [45].

Texas Instruments' TDA provides a DSP-based solution for autonomous driving. A TDA3x SoC consists of two C66x floating-point VLIW DSP cores with a Vision AccelerationPac. Besides, each TDA3x SoC has dual ARM Cortex-M4 image processors. The vision accelerator is designed to speed up processing functions on images. Compared with an ARM Cortex-A15 CPU, the TDA3x SoC provides an eight-fold acceleration on computer vision tasks with less power consumption [198].

Mobileye EyeQ5 is the leading ASIC-based solution to support fully autonomous (Level 5) vehicles [201]. EyeQ5 is designed with 7nm-FinFET semiconductor technology and provides 24 TOPS of computation capability within a 10-watt power budget. The TPU is Google's AI accelerator ASIC, built mainly for neural networks and machine learning [6]. TPU v3 is the newest release, providing 420 teraflops of computation on a single board.

IV-E Storage

The data captured by an autonomous vehicle grows rapidly, typically between 20TB and 40TB per day, per vehicle [54]. The data comes from cameras (20 to 40 MB/s), sonar (10 to 100 KB/s), radar (10 to 100 KB/s), and LiDAR (10 to 70 MB/s) [36, 199]. Storing data securely and efficiently can accelerate overall system performance. Take object detection as an example: historical data can contribute to improving detection precision through machine learning algorithms. Map generation can also benefit from stored data by updating traffic and road conditions appropriately. Besides, the sensor data can be utilized to ensure public safety and to predict and prevent crime. The biggest challenge is to ensure that sensors collect the right data and that it is processed immediately, stored securely, and transferred to other technologies in the chain, such as Road-Side Units (RSUs), cloud data centers, and even third-party users [231]. More importantly, creating a hierarchical storage architecture and workflow that enables smooth data access and computing is still an open question for the future development of autonomous vehicles.
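A back-of-the-envelope calculation shows how per-sensor rates add up to tens of terabytes per day. Only the per-sensor rate ranges come from the figures cited above; the sensor counts below are hypothetical assumptions for illustration.

```python
# Hypothetical sensor suite; per-sensor rates use the upper end of the
# ranges cited above, and the counts are illustrative assumptions.
rates_mb_s = {
    "camera": 40.0,   # per camera, upper end of 20-40 MB/s
    "lidar": 70.0,    # per LiDAR, upper end of 10-70 MB/s
    "radar": 0.1,     # per radar, upper end of 10-100 KB/s
    "sonar": 0.1,     # per sonar, upper end of 10-100 KB/s
}
counts = {"camera": 6, "lidar": 2, "radar": 4, "sonar": 4}

total_mb_s = sum(rates_mb_s[s] * counts[s] for s in rates_mb_s)
tb_per_day = total_mb_s * 86400 / 1e6   # MB/s over 24 hours -> TB
print(f"{total_mb_s:.1f} MB/s, {tb_per_day:.1f} TB/day")  # → 380.8 MB/s, 32.9 TB/day
```

Under these assumptions the aggregate falls inside the 20-40 TB/day range reported in [54].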

In [172], a computational storage system called HydraSpace is proposed to tackle the storage issue for autonomous vehicles. HydraSpace is designed with a multi-layered storage architecture and practical compression algorithms to manage the sensor pipe data. OpenVDAP is a full-stack edge-based data analytic platform for connected and autonomous vehicles (CAVs) [231]. It envisions four types of future CAV applications: autonomous driving, in-vehicle infotainment, real-time diagnostics, and third-party applications like traffic information collectors and SafeShareRide [126]. A hierarchical storage system called the driving data integrator (DDI) is proposed in OpenVDAP to provide sensor-aware and application-aware data storage and processing [231].

IV-F Real-Time Operating Systems

According to the automation level definitions from the Society of Automotive Engineers (SAE) [183], vehicle automation increases from level 2 to level 5, where level 5 requires full automation, meaning the vehicle can drive in any environment without human help. To make the vehicle run in a safe mode, how to perceive the environment and make decisions in real time becomes a big challenge. That is why real-time operating systems (RTOSs) have become a hot topic in the design and implementation of autonomous driving systems.

An RTOS is widely used in the embedded systems of ECUs to control the vehicle's throttle, brake, etc. QNX and VxWorks are two representative commercial RTOSs widely used in the automotive industry. The QNX kernel contains only CPU scheduling, inter-process communication, interrupt redirection, and timers. Everything else runs as a user process, including a special process known as "proc", which performs process creation and memory management in conjunction with the microkernel [79]. VxWorks is designed for embedded systems requiring real-time, deterministic performance and, in many cases, safety and security certification [208]. VxWorks supports multiple architectures, including Intel, POWER, and ARM, and uses real-time kernels for mission-critical applications subject to real-time constraints, guaranteeing a response within pre-defined time constraints.
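The kind of guarantee an RTOS scheduler can give is illustrated by the classic Liu and Layland sufficient test for rate-monotonic scheduling: a set of n periodic tasks is guaranteed to meet all deadlines if its total utilization stays below n(2^{1/n} − 1). The task parameters below are invented for illustration and are not drawn from any cited system.

```python
def rm_schedulable(tasks):
    """Liu & Layland sufficient test for rate-monotonic scheduling.

    `tasks` is a list of (worst_case_exec_time, period) pairs in the
    same time unit. Returns utilization, the bound, and the verdict.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

# Hypothetical task set: LiDAR processing (20 ms every 100 ms),
# camera pipeline (10 ms every 33 ms), control loop (2 ms every 10 ms).
u, b, ok = rm_schedulable([(20, 100), (10, 33), (2, 10)])
print(f"U={u:.3f}, bound={b:.3f}, schedulable={ok}")  # → U=0.703, bound=0.780, schedulable=True
```

Note the test is sufficient but not necessary: task sets above the bound may still be schedulable and require an exact response-time analysis.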

RTLinux is a microkernel-based operating system that supports hard real-time constraints [222]. The RTLinux scheduler allows full preemption: compared with applying a low-latency preemption patch to Linux, RTLinux allows preemption of the whole Linux system. RTLinux makes it possible to run real-time critical tasks together with Linux [177].

NVIDIA DRIVE OS is a foundational software stack from NVIDIA, which consists of an embedded RTOS, a hypervisor, NVIDIA CUDA libraries, NVIDIA TensorRT, and other components needed to accelerate machine learning algorithms [149].

IV-G Middleware Systems

Robotic systems, such as autonomous vehicle systems, often involve multiple services, with many dependencies. A middleware is required to facilitate communications between different autonomous driving services.

Most existing autonomous driving solutions utilize the Robot Operating System (ROS) [162]. Specifically, ROS is a communication middleware that facilitates communications between different modules of an autonomous vehicle system. ROS supports four communication methods: topic, service, action, and parameter. ROS2 is a promising middleware developed to make communications more efficient, reliable, and secure [171]. However, most packages and tools for sensor data processing are currently still based on ROS.
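The topic-based communication pattern at the heart of ROS can be sketched in a few lines. This is not the ROS API; it is a minimal stand-in showing why publish/subscribe decouples autonomous driving modules: the publisher does not know who consumes its messages.

```python
from collections import defaultdict

class TopicBus:
    """Minimal ROS-style publish/subscribe bus (illustrative only).

    Real ROS adds typed messages, serialization, discovery, and
    inter-process transport; this sketch shows only the topic-callback
    pattern that decouples publishers from subscribers.
    """

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []
bus.subscribe("/lidar/points", received.append)   # e.g., the planner module
bus.subscribe("/lidar/points", received.append)   # e.g., the mapping module
bus.publish("/lidar/points", {"seq": 1, "points": []})
print(len(received))  # → 2: one message delivered to both subscribers
```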

The Autoware Foundation is a non-profit organization supporting open-source projects enabling self-driving mobility [212]. Autoware.AI is developed based on ROS, and it is the world's first "all-in-one" open-source software for autonomous driving technology. Apollo Cyber [11] is another open-source middleware, developed by Baidu. Apollo aims to accelerate the development, testing, and deployment of autonomous vehicles. Apollo Cyber is a high-performance runtime framework heavily optimized for high concurrency, low latency, and high throughput in autonomous driving.

In the traditional automotive industry, the runtime environment layer in the AUTomotive Open System ARchitecture (AUTOSAR) [10] can be seen as middleware. Many companies develop their own middleware to support AUTOSAR. However, there are few independent open-source middleware projects today because middleware is a core technology of commercial vehicle companies. Auto companies prefer to provide middleware as a component of a complete autonomous driving solution.

IV-H V2X Communication

In addition to obtaining information from on-board sensors, the recent proliferation of communication mechanisms, e.g., DSRC, C-V2X, and 5G, has enabled autonomous vehicles to obtain information from other vehicles, from infrastructure like traffic lights and RSUs, and from pedestrians.


Long-Term Evolution (LTE) is a transitional technology between 3G and 4G [125], providing downlink peak rates of 300 Mbit/s and uplink peak rates of 75 Mbit/s. Fourth-generation (4G) communications specify 1 Gbit/s for stationary reception and 100 Mbit/s for mobile reception. For the next-generation mobile communication, the average 5G download speed experienced by the fastest U.S. users reached 494.7 Mbps on Verizon, 17.7 times faster than 4G. According to Verizon's early report, the latency of 5G is less than 30 ms, 23 ms faster than average 4G metrics. However, 5G still has the following challenges: system complexity, high costs, and poor penetration through obstacles.


Dedicated Short-Range Communication (DSRC) [99] is a type of vehicle-to-everything (V2X) communication protocol specially designed for connected vehicles. DSRC is based on the IEEE 802.11p standard, and its working frequency is 5.9 GHz. Fifteen message types are defined in the SAE J2735 standard [187], covering information such as the vehicle's position, map information, and emergency warnings [99]. Limited by the available bandwidth, DSRC messages are small and sent at low frequency. However, DSRC provides reliable communication even when the vehicle is driving at 120 miles per hour.


Cellular vehicle-to-everything (C-V2X) combines the traditional V2X network with the cellular network, bringing the mature network assistance and commercial services of 4G/5G into autonomous driving. Like DSRC, C-V2X also operates in the common 5.9 GHz spectrum [7]. Different from the CSMA-CA used in DSRC, C-V2X avoids contention overhead by using semi-persistent transmission with relative energy-based selection. Besides, the performance of C-V2X improves seamlessly with upgrades of the cellular network. Generally, C-V2X is more suitable for V2X scenarios where cellular networks are widely deployed.

IV-I Security and Privacy

With the increasing electronification of vehicles and their reliance on a wide variety of technologies, such as sensing and machine learning, the security of AVs has expanded from the hardware damage of traditional vehicles to comprehensive security requiring multi-domain knowledge. Here, we introduce several security problems strongly associated with AVs, along with current attack methods and common countermeasures. In addition to the security and privacy issues discussed below, AV systems should also take care of many other security issues in other domains, such as patching vulnerabilities of hardware or software systems and detecting intrusions [210].

Sensing security

As the eyes of autonomous vehicles, sensors are essential to secure. Typically, jamming attacks and spoofing attacks are the two primary attacks on various sensors [169, 217]. For example, a spoofing attack generates an interference signal, resulting in a fake obstacle captured by the vehicle [220]. Besides, GPS also suffers from spoofing attacks [227]. Therefore, protection mechanisms are needed for sensor security. Randomized signals and redundant sensors are usually used to protect signal-reflection sensors [185, 156], including LiDAR and radar. The GPS receiver can check signal characteristics [106] and authenticate data sources [150] to prevent attacks. Sensor data fusion is also an effective defense mechanism.

Communication security

Communication security includes two aspects: internal communication and outside communication. Currently, internal communication buses like CAN, LIN, and FlexRay face severe security threats [107, 49, 147]. Cryptography is a frequently used technology to keep the transmitted data confidential, integral, and authenticated [189]. However, the usage of cryptography is limited by its high computational cost on resource-constrained ECUs. Therefore, another attempt is to use a gateway to prevent unallowed access [100]. Outside communication has been studied in VANETs with V2V, V2R, and V2X communications [144, 161, 4]. Cryptography is the primary tool: most approaches build a trusted key distribution and management scheme, and vehicles use assigned keys to authenticate vehicles and data.
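As a sketch of how cryptography can authenticate in-vehicle messages, the example below attaches a truncated HMAC and a freshness counter to a CAN-style frame. The key, identifiers, and frame layout are hypothetical; real schemes (e.g., AUTOSAR SecOC) must additionally fit the MAC and counter into the limited frame size, which is part of the computational and bandwidth cost discussed above.

```python
import hashlib
import hmac

KEY = b"ecu-shared-secret"          # hypothetical pre-shared key between ECUs

def sign_frame(can_id, payload, counter):
    """Compute a truncated HMAC over ID + payload + freshness counter."""
    msg = can_id.to_bytes(2, "big") + payload + counter.to_bytes(4, "big")
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:8]

def verify_frame(can_id, payload, counter, tag):
    """Constant-time check that the frame's MAC matches."""
    return hmac.compare_digest(sign_frame(can_id, payload, counter), tag)

tag = sign_frame(0x1A0, b"\x10\x27\x00\x00", counter=42)
print(verify_frame(0x1A0, b"\x10\x27\x00\x00", 42, tag))   # True: authentic
print(verify_frame(0x1A0, b"\xff\x27\x00\x00", 42, tag))   # False: tampered payload
print(verify_frame(0x1A0, b"\x10\x27\x00\x00", 43, tag))   # False: stale counter (replay)
```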

Data security

Data security refers to preventing data leakage during transmission and storage. The former has been discussed under communication security, where various cryptographic approaches are proposed to protect data in different scenarios [233, 58]. Cryptography is also a significant technology for securing data storage, such as encrypted databases [159] and file systems [22]. Besides, access control technology [176], widely used in modern operating systems, protects stored data from another perspective. An access control framework [230] has been proposed for AVs to protect both real-time and historical in-vehicle data, with different access control models.

Control security

With vehicles' electronification, users can open the door with an electronic key and control their vehicles through an application or voice. However, this also leads to new attack surfaces with various attack methods, such as jamming attacks, replay attacks, and relay attacks [169]. For example, an attacker could capture the communication between the key and the door and replay it to open the door [93]. Also, for vehicles that support voice control, attackers could successfully control the vehicle with voices that humans cannot hear [228]. Some of these attacks can be classified under sensing security, communication security, or data security, and can be addressed by the corresponding protection mechanisms.


Privacy

An attacker can learn user privacy by analyzing user data. For example, by analyzing vehicle control data, the driver's identity can be recognized [136]. Thus, the most straightforward but hard protection is to prevent data from being obtained by an attacker, through measures such as access control and data encryption. Another way is data desensitization, including anonymization and differential privacy [117].

V Challenges and Opportunities

From the review of the current key technologies of computing systems for autonomous driving, we find that there are still many challenges and open issues in the research and development of L4 or L5 autonomous vehicles. In this section, we summarize eleven remaining challenges and discuss the opportunities with our visions for autonomous driving.

V-A Multi-Sensor Data Synchronization

Data on an autonomous vehicle comes from various sources: its own sensors, other vehicles' sensors, RSUs, and even social media. One big challenge in handling such a variety of data sources is how to synchronize them.

For example, a camera usually produces 30-60 frames per second, while LiDAR's point cloud frequency is 10 Hz. For applications like 3D object detection, which require camera frames and point clouds at the same time, should the storage system do synchronization beforehand, or should the application developer do it? This issue becomes more challenging considering that the timestamp accuracy of different sensors falls into different granularities. For example, for vehicles that use the network time protocol (NTP) for time synchronization, the timestamp difference can be as large as 100 ms [140, 71]. Some sensors with a built-in GNSS antenna achieve nanosecond-level time accuracy, while other sensors take their timestamps from the host machine's system time, whose accuracy is at the millisecond level. Since the accuracy of time synchronization affects the safety of vehicle control, handling sensor data with different frequencies and timestamp accuracies is still an open question.
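The synchronization question can be made concrete with a simple nearest-timestamp pairing of a 30 Hz camera and a 10 Hz LiDAR. The 10 ms clock offset and the tolerance values below are illustrative; the point is that the usable tolerance is bounded by the timestamp accuracy discussed above.

```python
def pair_nearest(camera_ts, lidar_ts, tolerance):
    """Pair each LiDAR timestamp with the nearest camera timestamp.

    A post-hoc synchronization sketch: a pair is accepted only if the
    two timestamps differ by at most `tolerance` seconds.
    """
    pairs = []
    for lt in lidar_ts:
        ct = min(camera_ts, key=lambda t: abs(t - lt))
        if abs(ct - lt) <= tolerance:
            pairs.append((ct, lt))
    return pairs

camera = [i / 30 + 0.01 for i in range(30)]   # 30 Hz camera, 10 ms clock offset
lidar = [i / 10 for i in range(10)]           # 10 Hz LiDAR, 1 second of data
print(len(pair_nearest(camera, lidar, tolerance=0.017)))   # → 10 matched pairs
print(len(pair_nearest(camera, lidar, tolerance=0.005)))   # → 0: offset exceeds tolerance
```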

V-B Failure Detection and Diagnostics

Today's AVs are equipped with multiple sensors, including LiDARs, radars, and GPS [219]. Although these sensors provide a robust and complete description of the surrounding area, some open problems related to failure detection remain to be solved. Here, we list and discuss four failure detection challenges: (1) Definition of sensor failure: there is no standard, agreed-upon universal definition of sensor failure [174]. We must propose and categorize standards for sensor failures to support failure detection with proper methods. (2) Sensor failure: more importantly, there is no comprehensive and reliable study on sensor failure detection, which is extremely dangerous since most self-driving applications rely on the data produced by these sensors [152]. If some sensors encounter a failure, collisions and environmental catastrophes may happen. (3) Sensor data failure: in real application scenarios, even when the sensors themselves are working correctly, the generated data may still not reflect the actual scenario and may report wrong information [203]. For instance, the camera may be blocked by unknown objects such as leaves or mud, or the radar may deviate from its original fixed position due to wind force. In this context, sensor data failure detection is very challenging. (4) Algorithm failure: in challenging scenarios with severe occlusion and extreme lighting conditions, such as night, rainy days, and snowy days, deploying and executing state-of-the-art algorithms cannot guarantee ideal outputs [196]. For example, lane markings often fail to be detected at night by algorithms that cannot explicitly utilize prior information such as the rigidity and smoothness of lanes [193], whereas humans can easily infer their positions and fill in the occluded parts of the context. Therefore, how to develop advanced algorithms to further improve detection accuracy is still a big challenge.
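As a toy example of sensor data failure detection, the sketch below flags a camera feed whose consecutive frames are nearly identical, as might happen when the lens is blocked by mud or leaves. Real monitors would use richer cues and cross-sensor consistency checks; frames here are plain lists of pixel values, and the threshold is an arbitrary illustrative choice.

```python
def frozen_frame_ratio(frames, threshold=1e-6):
    """Fraction of consecutive frame pairs that are (nearly) identical.

    A naive data-failure heuristic: a blocked or stuck camera tends to
    produce long runs of unchanged frames, so a high ratio is suspicious.
    """
    if len(frames) < 2:
        return 0.0
    frozen = 0
    for prev, cur in zip(frames, frames[1:]):
        mean_abs_diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if mean_abs_diff < threshold:
            frozen += 1
    return frozen / (len(frames) - 1)

healthy = [[i + j for j in range(8)] for i in range(10)]   # scene keeps changing
stuck = [[5] * 8 for _ in range(10)]                       # frozen camera feed
print(frozen_frame_ratio(healthy), frozen_frame_ratio(stuck))  # → 0.0 1.0
```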

For a complex system with rich sensors and hardware devices, failures can happen everywhere. How to tolerate failures and diagnose their root causes becomes a significant challenge. One example is the diagnosis of lane controller systems from Google [113]: the idea is to determine the root cause of malfunctions by comparing the actual steering corrections applied with those predicted by a virtual dynamics module.

V-C How to Deal with Normal-Abnormal?

Normal-abnormal refers to scenarios that are normal in daily life but abnormal in autonomous driving datasets. Typically, there are three cases of normal-abnormal: adverse weather, emergency maneuvers, and work zones.

Adverse weather

One of the most critical issues in the development of AVs is the poor performance under adverse weather conditions, such as rain, snow, fog, and hail, because the equipped sensors (e.g., LiDAR, radar, camera, and GPS) can be significantly affected by extreme weather. The work of [226] characterized the effect of rainfall on millimeter-wave (mm-wave) radar and proved that under heavy rainfall conditions, the detection range of millimeter-wave radar can be reduced by as much as 45%. Filgueira et al. [53] pointed out that as rain intensity increases, the detected LiDAR intensity attenuates. Meanwhile, Bernardin et al. [16] proposed a methodology to quantitatively estimate the loss of visual performance due to rainfall. Most importantly, experimental results show that, compared to training on narrow cases and scenarios, using diverse data sets to train object detection networks does not necessarily improve the performance of these networks [82]. However, there is currently no research providing a systematic and unified method to reduce the impact of weather on the various sensors used in AVs. Therefore, there is an urgent need for novel deep learning networks with sufficient capability to cope with safe autonomous driving under severe weather conditions.

Emergency maneuvers

In emergency situations, such as a road collapse, brake failure, a tire blowout, or the sudden appearance of a previously "invisible" pedestrian, the maneuvering of an AV may need to reach its operating limits to avoid collisions. However, such collision avoidance actions usually conflict with stabilization actions aimed at preventing the vehicle from losing control, and in the end they may cause collision accidents. In this context, some research has been done to guarantee safe driving for AVs in emergency situations. For example, Hilgert et al. proposed a path planning method for emergency maneuvers based on elastic bands [80]. A method [56] is proposed to determine the minimum distance at which obstacles cannot be avoided at a given speed. Guo et al. [74] discussed dynamic control design for automated driving, with particular emphasis on coordinated steering and braking control in emergency avoidance. Nevertheless, how an autonomous vehicle can safely respond to different classes of emergencies with on-board sensors is still an open problem.
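The minimum-distance idea can be illustrated with the elementary stopping-distance formula d = v·t_r + v²/(2a), combining the distance covered during the system's reaction time with the braking distance at a given deceleration. The numbers below are illustrative assumptions, not values from [56].

```python
def min_stopping_distance(speed_mps, decel_mps2, reaction_time_s):
    """Reaction distance plus braking distance: v*t_r + v^2 / (2a)."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * decel_mps2)

# Illustrative numbers: 30 m/s (108 km/h), 8 m/s^2 peak deceleration,
# 0.1 s perception-to-actuation latency (all assumptions for the example).
d = min_stopping_distance(30.0, 8.0, 0.1)
print(round(d, 2))  # → 59.25 meters
```

An obstacle appearing closer than this distance cannot be avoided by braking alone, which is why steering-based avoidance and its conflict with vehicle stabilization matter in the emergency cases above.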

Work zone

Work zone recognition is another challenge for autonomous driving systems to overcome. For most drivers, a work zone means congestion and delays to the driving plan. Many projects have been launched to reduce and eliminate work zone injuries and deaths among construction workers and motorists; one such project summarizes recent years of work zone crashes and supplies training programs to increase public awareness of the importance of work-zone safety. Seo [184] proposed a machine learning-based method to improve the recognition of work zone signs. Developers from Kratos Defense & Security Solutions [12] presented an autonomous truck that safely passes through a work zone. Their system relies on V2V communications to connect the self-driving vehicle with a leader vehicle: the self-driving vehicle accepts navigation data from the leader vehicle and travels along its route while keeping a pre-defined distance. Until now, work zones remain a threat to drivers' and workers' safety but have not attracted much attention from autonomous driving researchers. There are still significant gaps in this research field waiting for researchers to explore and tackle.

V-D Cyberattack Protection

Attacks and defenses are always opposites, and absolute security does not exist. Emerging CAVs face many security challenges, such as replay attacks that simulate a vehicle's electronic key and spoofing attacks that make a vehicle detour [93, 169]. With the integration of new sensors, devices, technologies, infrastructures, and applications, the attack surface of CAVs is further expanded.

Many attacks aim at one part of the CAV system and can be defended against by fusing several other views. For example, a roadblock spoofed to the radars could be corrected with camera data. Thus, how to systematically build such a protection system is the first challenge for CAV systems. The protection system is expected to detect potential attacks, evaluate the system's security status, and recover from attacks.

Besides, some novel attack methods deserve attention. Recently, attacks have been proposed to trick machine learning algorithms [28]. For example, a photo can be used instead of a real person to pass face recognition, and a note-sized photo posted on the forehead can make machine learning algorithms fail to detect faces [103]. Thus, how to defend against attacks on machine learning algorithms is a challenge for CAV systems.

Furthermore, some new technologies could be used to enhance the security of CAV systems. With the development of quantum computing technology, existing cryptography standards may no longer protect data, communication, and systems. Thus, designing post-quantum cryptography [17] and architectures is a promising topic for CAVs and ITS infrastructure.

Also, we notice that hardware-assisted trusted execution environments (TEEs) [148] can improve system security by providing an isolated and trusted execution environment for applications. However, a TEE has limited physical memory, and execution performance drops sharply as total memory usage increases. Therefore, how to split the system components and place the critical parts inside the TEE with high security is still a design and implementation challenge.

V-E Vehicle Operating System

The vehicle operating system is expected to abstract the hardware resources for higher-layer middleware and autonomous driving applications. In vehicle operating system development, one of the biggest challenges is compatibility with the vehicle's embedded system. Take Autoware as an example: although it is a full-stack vehicle operating system providing a rich set of self-driving modules composed of sensing, computing, and actuation capabilities, its usage is still limited to several commercial vehicles with a small set of supported sensors [97]. On a modern automobile, as many as 70 electronic control units (ECUs) are installed for various subsystems, communicating via the CAN bus. For the sake of system security and commercial interests, most vehicles' CAN protocols are not open-sourced, which is the main obstacle to developing a unified vehicle operating system.

AUTOSAR (AUTomotive Open System ARchitecture) is a standardization initiative of leading automotive manufacturers and suppliers founded in the autumn of 2003 [10]. AUTOSAR is promising for narrowing the gap toward an open-source vehicle operating system. However, most automobile companies are reluctant to open-source their vehicle operating systems, restricting the availability of AUTOSAR to the general research and education community. There is still a strong demand for a robust, open-source vehicle operating system for AVs.

V-F Energy Consumption

With rich sensors and powerful computing devices installed on the vehicle, energy consumption becomes a big issue. Take the NVIDIA Drive PX Pegasus as an example: it delivers 320 INT8 TOPS of AI computational power within a 500-watt budget. If we add external devices like sensors, communication antennas, storage, and the battery system, the total energy consumption would be larger than 1000 W [138]. Furthermore, if a duplicate system is installed for the reliability of the autonomous driving applications, the total power dissipation could approach 2000 W.

How to handle such a tremendous amount of power dissipation is not only a problem for the battery management system; it is also a problem for the heat dissipation system. What makes this issue more severe are the size limitations and automotive-grade requirements from the vehicle's perspective. How to make the computing system of an autonomous vehicle energy efficient is still an open challenge. E2M [124], an energy-efficient middleware, tackles this problem by managing and scheduling deep learning applications to save energy on the computing device. However, according to the profiling results, most of the energy is consumed by the vehicle's motors. Energy-efficient autonomous driving requires co-design across battery cells, energy management systems, and autonomous vehicle computing systems.
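A rough calculation shows what a computing load of this magnitude means for an electric vehicle. Only the ~1000 W full-system figure comes from the discussion above; the duplication factor, pack size, and trip length below are illustrative assumptions.

```python
# Back-of-the-envelope energy budget for the on-board computing system.
compute_w = 1000        # full computing system incl. sensors (figure from the text)
duplicate = 2           # assumed redundant system for reliability
battery_kwh = 100       # hypothetical EV battery pack capacity
trip_hours = 4          # hypothetical trip duration

compute_kwh = compute_w * duplicate * trip_hours / 1000   # Wh -> kWh
share = compute_kwh / battery_kwh
print(f"{compute_kwh:.1f} kWh, {share:.0%} of the pack")  # → 8.0 kWh, 8% of the pack
```

Even under these generous assumptions the computing system consumes a single-digit share of the pack, consistent with the observation that the motors dominate; the harder constraints are heat dissipation and automotive-grade packaging.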

V-G Cost

In the United States, the average cost to build a traditional non-luxury vehicle is roughly $30,000, while for an AV the total cost is around $250,000 [115]. AVs need an abundance of hardware equipment to support their normal functions. The additional hardware required for AVs includes, but is not limited to, communication devices, computing equipment, the drive-by-wire system, an extra power supply, and various sensors such as cameras, LiDAR, and radar. In addition, to ensure an AV's reliability and safety, backups of these hardware devices may be necessary [182]. For example, if the main battery fails, the vehicle should have a backup power source to keep the computing systems running and move the vehicle.

The cost of building an autonomous vehicle is already very high, not to mention the maintenance cost, e.g., diagnostics and repair. High maintenance costs lead to declining consumer demand and undesirable profitability for vehicle manufacturers. Companies like Ford and GM have already cut their low-profit production lines to save costs [73, 132].

Indeed, the cost of computing systems for AVs currently in the research and development stage is very high. However, we expect that with the maturity of the technologies and the emergence of alternative solutions, the price will ultimately drop to a level that individuals can afford. Take the battery packs of electric vehicles (EVs) as an example: when the first mass-market EVs were introduced in 2010, their battery packs were estimated at US $1,000 per kilowatt-hour (kWh). However, Tesla's Model 3 battery pack costs $190 per kilowatt-hour, and the battery pack of General Motors' 2017 Chevrolet Bolt is estimated to cost $205 per kilowatt-hour. In six years, the price per kilowatt-hour dropped by more than 70% [151]. Also, Waymo claims to have successfully reduced the cost of the experimental version of its high-end LiDAR to approximately $7,500. Tesla, which uses only radar instead of LiDAR, says its autonomous vehicle equipment costs around $8,000 [115]. In addition to reducing hardware costs, we believe that optimizing the computing software in an AV can also help reduce the cost to a great extent.

V-H How to Benefit from Smart Infrastructure?

Smart infrastructure combines sensors, computing platforms, and communication devices with the physical traffic infrastructure [98]. It is expected to enable AVs to achieve more efficient and reliable perception and decision making. Typically, AVs could benefit from smart infrastructure in three aspects: (1) Service provider: It is a struggle for an AV to find a parking space in a parking lot. By deploying sensors like RFID on the smart infrastructure, parking services can be handled quickly [153]. As the infrastructure becomes a provider of parking services, it is possible to schedule service requests to achieve maximum usage. Meanwhile, AVs can reduce the time and computation spent searching for services. (2) Traffic information sharing: Traffic information is essential to safe driving; its absence causes traffic congestion or even accidents. Roadside Units (RSUs) are deployed to provide traffic information to passing vehicles through V2X communications. Besides, RSUs are also used to monitor road conditions using various on-board sensors like cameras and LiDARs [3]. The collected data is used for various tasks, including weather warnings, map updating, road event detection, and covering the blind spots of AVs. (3) Task offloading: Various algorithms run on the vehicle to keep driving safe. Handling all workloads in real time requires a tremendous amount of computation and power, which is infeasible on a battery-powered vehicle [120]. Therefore, offloading heavy computation workloads to the infrastructure has been proposed to accelerate the computation and save energy. However, a feasible offloading framework must offload computations to the infrastructure while ensuring timing predictability [41]. Therefore, how to schedule the order of offloaded workloads remains a challenge in benefiting from smart infrastructure.
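The offloading trade-off in aspect (3) can be sketched with a minimal latency model: offloading pays off only when the round trip, upload, and edge compute times together beat local compute and still meet the task's deadline. This is an illustrative sketch, not the scheduler of any cited framework; all task parameters and the `choose_target` helper are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles the workload needs
    input_bits: float  # sensor data that must be uploaded to the RSU
    deadline_s: float  # hard real-time deadline in seconds

def choose_target(task: Task, local_hz: float, edge_hz: float,
                  uplink_bps: float, rtt_s: float) -> str:
    """Pick where to run a workload: 'edge', 'local', or 'infeasible'.

    Offloading wins only when the total remote latency (round trip +
    upload + edge compute) beats local compute AND still meets the
    deadline, preserving timing predictability.
    """
    local_t = task.cycles / local_hz
    remote_t = rtt_s + task.input_bits / uplink_bps + task.cycles / edge_hz
    if remote_t < local_t and remote_t <= task.deadline_s:
        return "edge"
    if local_t <= task.deadline_s:
        return "local"
    return "infeasible"

# A 2-Gcycle detection task with 1 MB of input, 100 Mbps uplink, 20 ms RTT:
task = Task(cycles=2e9, input_bits=8e6, deadline_s=1.0)
print(choose_target(task, local_hz=1e9, edge_hz=1e10,
                    uplink_bps=1e8, rtt_s=0.02))  # edge (0.3 s vs. 2.0 s locally)
```

A real scheduler must additionally account for energy, contention at the RSU, and variable wireless bandwidth, which is precisely where the open challenge lies.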

V-I Dealing with Human Drivers

According to NHTSA data collected from all 50 states and the District of Columbia, 37,461 lives were lost on U.S. roads in 2016, and 94% of crashes were associated with “a human choice or error” [168]. Although autonomous driving is proposed to replace human drivers with computers/machines for safety purposes, human-driven vehicles will never disappear. How to enable the computers/machines in AVs to interact with human drivers thus becomes a big challenge.

Compared with a human driver, machines are generally better suited for tasks like vehicle control and multi-sensor data processing, while the human driver maintains an advantage in perceiving and sensing the environment [181]. One of the fundamental reasons is that a machine cannot think like a human: current machine learning-based approaches cannot handle situations that are not captured in the training dataset. For example, in the SAE driving automation taxonomy, one of the critical differences between level 2 and levels 3/4/5 is whether the vehicle can make decisions like overtaking or lane changing by itself [128]. In such instances, interacting with other human drivers becomes a big challenge, because human drivers can make mistakes or violate traffic rules.

Many works focus on obtaining more accurate speed and control predictions of the surrounding vehicles to handle machine-human interaction [88, 62]. Deep reinforcement learning shows promising performance in complex scenarios requiring interaction with other vehicles [118, 175]. However, these approaches are either simulation-based or demonstrated only in limited scenarios. Another promising direction for tackling machine-human interaction is V2X communications: compared with predicting other vehicles' behavior, communicating safety information directly is more accurate [37].

V-J Experimental Platform

The deployment of autonomous driving algorithms or prototypes requires complex tests and evaluations in a real environment, which makes the experimental platform one of the fundamental parts of conducting research and development. However, the cost of building and maintaining an autonomous vehicle is enormous: a real autonomous vehicle can cost up to $250,000, and maintaining it requires parking, insurance, and auto maintenance, let alone the laws and regulations to consider for field testing.

Given these limitations and problems, many autonomous driving simulators and open-source prototypes have been proposed for research and development purposes. dSPACE provides an end-to-end simulation environment for sensor data processing and scenario-based testing with RTMaps and VEOS [44]. The Automated Driving Toolbox is MathWorks software that provides algorithms and tools for designing, simulating, and testing ADAS and autonomous driving systems [8]. In addition to these commercial products, there are also open-source projects like CARLA and Gazebo for urban driving or robotics simulation [102, 42].

Another promising direction is to develop affordable autonomous driving research and development platforms. Several experimental platforms have been quite successful in indoor or low-speed scenarios. HydraOne is an open-source experimental platform for indoor autonomous driving that provides full-stack programmability for autonomous driving algorithm developers and system developers [209]. DragonFly is another example, which supports self-driving at speeds below 40 miles per hour at a price of less than $40,000 [155].

V-K Physical Worlds Coupling

Autonomous driving is a typical cyber-physical system [70], in which the computing systems and the physical world have to work closely and smoothly together. With a human driver, the driver's feel is naturally coupled with the vehicle's control actions: for example, a driver who dislikes abrupt stops can press the brake gradually. In autonomous driving, the control algorithm determines the speed of braking and accelerating. We envision that differing human feelings, coupled with complex traffic environments, bring an unprecedented challenge to vehicle control in autonomous driving. Take turning left as an example: how fast should the drive-by-wire system turn 90 degrees? An ideal control algorithm for turning left should consider many factors, such as the friction of the road surface, the vehicle's current speed, weather conditions, and the movement range, as well as passenger comfort, if possible. Cross-layer design and optimization among perception, control, vehicle dynamics, and drive-by-wire systems might be a promising direction [133].
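Two of the factors above, road friction and passenger comfort, already bound how fast a turn can be taken: lateral acceleration v²/r must stay below both the tire friction limit μg and whatever comfort threshold the passenger tolerates. A minimal sketch of this coupling follows; the comfort value of 2.0 m/s² is an illustrative assumption, not a standard:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_turn_speed(radius_m: float, mu: float, a_comfort: float = 2.0) -> float:
    """Upper bound on speed (m/s) through a turn of a given radius.

    Limited both by tire friction (v^2/r <= mu * g) and by a
    passenger-comfort cap on lateral acceleration (v^2/r <= a_comfort).
    The default comfort cap is an illustrative assumption.
    """
    a_limit = min(mu * G, a_comfort)
    return math.sqrt(a_limit * radius_m)

# Dry asphalt (mu ~ 0.9) vs. icy road (mu ~ 0.15), 10 m turn radius:
print(max_turn_speed(10, 0.9))   # comfort-limited: sqrt(2.0 * 10) ~ 4.47 m/s
print(max_turn_speed(10, 0.15))  # friction-limited on ice, ~3.84 m/s
```

On dry pavement the comfort cap binds first, while on ice the friction limit takes over — illustrating why the same 90-degree turn demands different control actions in different weather.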

VI Conclusion

The recent proliferation of computing and communication technologies like machine learning, hardware acceleration, DSRC, C-V2X, and 5G has dramatically promoted autonomous driving. Complex computing systems are designed to leverage the sensors and computation devices to understand traffic environments correctly in real time. However, fatalities involving early-deployed autonomous vehicles arise from time to time, revealing the big gap between current computing systems and the robust systems expected for level-4/level-5 full autonomous driving. In this paper, we present the state-of-the-art computing systems for autonomous driving, including seven performance metrics, nine key technologies, and eleven challenges and opportunities to realize the vision of autonomous driving. We hope this paper will bring these challenges to the attention of both the computing and automotive communities.


Technical Report: CAR-TR-2020-009


  1. (2020)(Website) External Links: Link Cited by: §I.
  2. N. Akai, L. Y. Morales, T. Yamaguchi, E. Takeuchi, Y. Yoshihara, H. Okuda, T. Suzuki and Y. Ninomiya (2017) Autonomous driving based on accurate localization using multilayer LiDAR and dead reckoning. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pp. 1–6. Cited by: §II.
  3. A. Al-Dweik, R. Muresan, M. Mayhew and M. Lieberman (2017-04) IoT-based multifunctional scalable real-time enhanced road side unit for intelligent transportation systems. pp. 1–6. External Links: Document Cited by: §V-H.
  4. I. Ali, A. Hassan and F. Li (2019) Authentication and privacy schemes for vehicular ad hoc networks (VANETs): a survey. Vehicular Communications 16, pp. 45 – 61. External Links: ISSN 2214-2096, Document, Link Cited by: §IV-I2.
  5. S. Allou, Z. Youcef and B. Aissa (2017-12) Fuzzy logic controller for autonomous vehicle path tracking. pp. 328–333. External Links: Document Cited by: §IV-C5.
  6. (2019)(Website) External Links: Link Cited by: §IV-D.
  7. 5GAA (2016) The case for cellular V2X for safety and cooperative driving. White Paper, November 16. Cited by: §IV-H3.
  8. (2020)(Website) External Links: Link Cited by: §V-J.
  9. (2019)(Website) External Links: Link Cited by: §III.
  10. AUTOSAR website. Note: [Online] Cited by: §IV-G, §V-E.
  11. Baidu Apollo Cyber. Note: [Online] External Links: Link Cited by: §IV-G.
  12. A. Barwacz (2019)(Website) External Links: Link Cited by: §V-C3.
  13. A. Baskaran, A. Talebpour and S. Bhattacharyya (2020-01) End-to-end drive by-wire PID lateral control of an autonomous vehicle. pp. 365–376. External Links: ISBN 978-3-030-32519-0, Document Cited by: §IV-C5.
  14. H. Bast, D. Delling, A. Goldberg, M. Müller-Hannemann, T. Pajor, P. Sanders, D. Wagner and R. Werneck (2015-04) Route planning in transportation networks. pp. . Cited by: §IV-C4.
  15. (2019)(Website) External Links: Link Cited by: §I, §IV-A5.
  16. F. Bernardin, R. Bremond, V. Ledoux, M. Pinto, S. Lemonnier, V. Cavallo and M. Colomb (2014) Measuring the effect of the rainfall on the windshield in terms of visual performance. Accident Analysis & Prevention 63, pp. 83–88. Cited by: §V-C1.
  17. D. J. Bernstein and T. Lange (2017) Post-quantum cryptography. Nature 549, pp. 188–194. External Links: ISSN 7671 Cited by: §V-D.
  18. P. Besl and H.D. McKay (1992-03) A method for registration of 3-D shapes.. Pattern Analysis and Machine Intelligence, IEEE Transactions on 14, pp. 239–256. External Links: Document Cited by: §IV-C3.
  19. P. Biber and W. Straßer (2003-11) The normal distributions transform: a new approach to laser scan matching. Vol. 3, pp. 2743 – 2748 vol.3. External Links: ISBN 0-7803-7860-1, Document Cited by: §IV-C3.
  20. J. Binas, D. Neil, S. Liu and T. Delbruck (2017) DDD17: end-to-end DAVIS driving dataset. arXiv preprint arXiv:1711.01458. Cited by: §IV-B2.
  21. M. Birdsall (2014) Google and ITE: The road ahead for self-driving cars. Institute of Transportation Engineers. ITE Journal 84 (5), pp. 36. Cited by: §II.
  22. M. Blaze (1993) A cryptographic file system for UNIX. In Proceedings of the 1st ACM Conference on Computer and Communications Security, CCS ’93, New York, NY, USA, pp. 9–16. External Links: ISBN 0897916298, Link, Document Cited by: §IV-I3.
  23. A. Bochkovskiy, C. Wang and H. M. Liao (2020) YOLOv4: optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934. Cited by: §IV-C1.
  24. M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller and J. Zhang (2016) End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316. Cited by: §II.
  25. A. Borkar, M. Hayes and M. T. Smith (2011) A novel lane detection system with efficient ground truth generation. IEEE Transactions on Intelligent Transportation Systems 13 (1), pp. 365–374. Cited by: §IV-C2.
  26. G. Bresson, Z. Alsayed, L. Yu and S. Glaser (2017-09) Simultaneous localization and mapping: a survey of current trends in autonomous driving. IEEE Transactions on Intelligent Vehicles PP, pp. 1–1. External Links: Document Cited by: §IV-C3.
  27. A. Broggi, M. Buzzoni, S. Debattisti, P. Grisleri, M. C. Laghi, P. Medici and P. Versari (2013) Extensive tests of autonomous driving technologies. IEEE Transactions on Intelligent Transportation Systems 14 (3), pp. 1403–1415. Cited by: §II.
  28. Y. Cao, C. Xiao, B. Cyr, Y. Zhou, W. Park, S. Rampazzi, Q. A. Chen, K. Fu and Z. M. Mao (2019) Adversarial sensor attack on LiDAR-based perception in autonomous driving. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, CCS ’19, New York, NY, USA, pp. 2267–2281. External Links: ISBN 9781450367479, Link, Document Cited by: §V-D.
  29. C. Chen, A. Seff, A. Kornhauser and J. Xiao (2015) Deepdriving: learning affordance for direct perception in autonomous driving. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2722–2730. Cited by: §II.
  30. K. Chiu and S. Lin (2005) Lane detection using color-based segmentation. In IEEE Proceedings. Intelligent Vehicles Symposium, 2005., pp. 706–711. Cited by: §IV-C2.
  31. W. Choi, H. Nam, B. Kim and C. Ahn (2020-02) Model predictive control for evasive steering of autonomous vehicle. pp. 1252–1258. External Links: ISBN 978-3-030-38076-2, Document Cited by: §IV-C5.
  32. S. O. A. V. S. Committee (2018) Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE International: Warrendale, PA, USA. Cited by: §I.
  33. (2017)(Website) External Links: Link Cited by: §I, §IV-A2.
  34. N. Dalal and B. Triggs (2005) Histograms of oriented gradients for human detection. In 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR’05), Vol. 1, pp. 886–893. Cited by: §IV-C1.
  35. R. Danescu and S. Nedevschi (2009) Probabilistic lane tracking in difficult road scenarios using stereovision. IEEE Transactions on Intelligent Transportation Systems 10 (2), pp. 272–282. Cited by: §IV-C2.
  36. Data storage is the key to autonomous vehicles’ future. Note: [Online] 2019-12-30 Cited by: §IV-E.
  37. R. Deng, B. Di and L. Song (2019) Cooperative collision avoidance for overtaking maneuvers in cellular V2X-based autonomous driving. IEEE Transactions on Vehicular Technology 68 (5), pp. 4434–4446. Cited by: §V-I.
  38. J. Deschaud (2018-05) IMLS-SLAM: scan-to-model matching based on 3D data. pp. 2480–2485. External Links: Document Cited by: §IV-C3.
  39. H. Deusch, J. Wiest, S. Reuter, M. Szczot, M. Konrad and K. Dietmayer (2012) A random finite set approach to multiple lane detection. In 2012 15th International IEEE Conference on Intelligent Transportation Systems, pp. 270–275. Cited by: §IV-C2.
  40. S. Dixit, U. Montanaro, S. Fallah, M. Dianati, D. Oxtoby, T. Mizutani and A. Mouzakitis (2018-11) Trajectory planning for autonomous high-speed overtaking using MPC with terminal set constraints. pp. . External Links: Document Cited by: §IV-C5.
  41. Z. Dong, W. Shi, G. Tong and K. Yang (2020-02) Collaborative autonomous driving: vision and challenges. pp. . External Links: Document Cited by: §V-H.
  42. A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez and V. Koltun (2017) CARLA: an open urban driving simulator. arXiv preprint arXiv:1711.03938. Cited by: §V-J.
  43. I. Emmanuel (2017-03) Fuzzy logic-based control for autonomous vehicle: a survey. International Journal of Education and Management Engineering 7, pp. 41–49. External Links: Document Cited by: §IV-C5.
  44. (2020)(Website) External Links: Link Cited by: §V-J.
  45. (2020)(Website) External Links: Link Cited by: §IV-D.
  46. F. Endres, J. Hess, J. Sturm, D. Cremers and W. Burgard (2014-02) 3-D mapping with an RGB-D camera. Robotics, IEEE Transactions on 30, pp. 177–187. External Links: Document Cited by: §IV-C3.
  47. J. Engel, T. Schoeps and D. Cremers (2014-09) LSD-SLAM: large-scale direct monocular SLAM. Vol. 8690, pp. 1–16. External Links: Document Cited by: §IV-C3.
  48. (2020)(Website) External Links: Link Cited by: §III.
  49. J. M. Ernst and A. J. Michaels (2018) LIN bus security analysis. In IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics Society, Vol. , pp. 2085–2090. Cited by: §IV-I2.
  50. M. Everingham, L. Van Gool, C. K. Williams, J. Winn and A. Zisserman (2010) The pascal visual object classes (voc) challenge. International journal of computer vision 88 (2), pp. 303–338. Cited by: §III.
  51. N. Fairfield and C. Urmson (2011) Traffic light mapping and detection. In 2011 IEEE International Conference on Robotics and Automation, pp. 5421–5426. Cited by: §IV-A3.
  52. P. Felzenszwalb, D. McAllester and D. Ramanan (2008) A discriminatively trained, multiscale, deformable part model. In 2008 IEEE conference on computer vision and pattern recognition, pp. 1–8. Cited by: §IV-C1.
  53. A. Filgueira, H. González-Jorge, S. Lagüela, L. Díaz-Vilariño and P. Arias (2017) Quantifying the influence of rain in LiDAR performance. Measurement 95, pp. 143–148. Cited by: §V-C1.
  54. Flood of data will get generated in autonomous cars. Note: [Online] 2020-2-18 Cited by: §IV-E.
  55. C. Fu, W. Liu, A. Ranga, A. Tyagi and A. C. Berg (2017) DSSD: deconvolutional single shot detector. arXiv preprint arXiv:1701.06659. Cited by: §IV-C1.
  56. J. Funke, M. Brown, S. M. Erlien and J. C. Gerdes (2017) Collision avoidance and stabilization for autonomous vehicles in emergency scenarios. IEEE Transactions on Control Systems Technology 25 (4), pp. 1204–1216. Cited by: §V-C2.
  57. A. Garcia-Garcia, S. Orts-Escolano, S. Oprea, V. Villena-Martinez and J. Garcia-Rodriguez (2017) A review on deep learning techniques applied to semantic segmentation. arXiv preprint arXiv:1704.06857. Cited by: §IV-B2.
  58. S. Garg, A. Singh, K. Kaur, G. S. Aujla, S. Batra, N. Kumar and M. S. Obaidat (2019) Edge computing-based security framework for big data analytics in VANETs. IEEE Network 33 (2), pp. 72–81. Cited by: §IV-I3.
  59. N. Garnett, R. Cohen, T. Pe’er, R. Lahav and D. Levi (2019) 3D-LaneNet: end-to-end 3D multiple lane detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2921–2930. Cited by: §IV-C2.
  60. A. Gawel, C. Don, R. Siegwart, J. Nieto and C. Cadena (2018-07) X-View: graph-based semantic multi-view localization. IEEE Robotics and Automation Letters 3, pp. 1687 – 1694. External Links: Document Cited by: §IV-C3.
  61. A. Geiger, P. Lenz, C. Stiller and R. Urtasun (2013) Vision meets robotics: the KITTI dataset. The International Journal of Robotics Research 32 (11), pp. 1231–1237. Cited by: §IV-B2.
  62. X. Geng, H. Liang, B. Yu, P. Zhao, L. He and R. Huang (2017) A scenario-adaptive driving behavior prediction approach to urban autonomous driving. Applied Sciences 7 (4), pp. 426. Cited by: §IV-C4, §V-I.
  63. R. Girshick, J. Donahue, T. Darrell and J. Malik (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 580–587. Cited by: §III, §IV-C1.
  64. R. Girshick, J. Donahue, T. Darrell and J. Malik (2015) Region-based convolutional networks for accurate object detection and segmentation. IEEE transactions on pattern analysis and machine intelligence 38 (1), pp. 142–158. Cited by: §IV-C1.
  65. R. Girshick (2015) Fast R-CNN. In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448. Cited by: §IV-C1.
  66. (2018)(Website) External Links: Link Cited by: §I.
  67. D. Gonzalez Bautista, J. Pérez, V. Milanes and F. Nashashibi (2015-11) A review of motion planning techniques for automated vehicles. IEEE Transactions on Intelligent Transportation Systems, pp. 1–11. External Links: Document Cited by: §IV-C4.
  68. (2017)(Website) External Links: Link Cited by: §III.
  69. R. Gopalan, T. Hong, M. Shneier and R. Chellappa (2012) A learning approach towards detection and tracking of lane markings. IEEE Transactions on Intelligent Transportation Systems 13 (3), pp. 1088–1098. Cited by: §IV-C2.
  70. D. Goswami, R. Schneider, A. Masrur, M. Lukasiewycz, S. Chakraborty, H. Voit and A. Annaswamy (2012) Challenges in automotive cyber-physical systems design. In 2012 International Conference on Embedded Computer Systems (SAMOS), pp. 346–354. Cited by: §V-K.
  71. (2020)(Website) External Links: Link Cited by: §V-A.
  72. (2019)(Website) External Links: Link Cited by: §I, §IV-A5.
  73. G. Guilford (2018-04) Ford can only afford to give up on cars because of American protectionism. Note: Cited by: §V-G.
  74. J. Guo, P. Hu and R. Wang (2016) Nonlinear coordinated steering and braking control of vision-based autonomous vehicles in emergency obstacle avoidance. IEEE Transactions on Intelligent Transportation Systems 17 (11), pp. 3230–3240. Cited by: §V-C2.
  75. P. Hart, N. Nilsson and B. Raphael (1972-12) A formal basis for the heuristic determination of minimum cost paths. Intelligence/sigart Bulletin - SIGART 37, pp. 28–29. External Links: Document Cited by: §IV-C4.
  76. A. Hata and D. Wolf (2014-10) Road marking detection using LiDAR reflective intensity data and its application to vehicle localization. pp. 584–589. External Links: Document Cited by: §IV-C3.
  77. K. He, G. Gkioxari, P. Dollár and R. Girshick (2017) Mask R-CNN. In Proceedings of the IEEE international conference on computer vision, pp. 2961–2969. Cited by: §IV-C1.
  78. W. Hess, D. Kohler, H. Rapp and D. Andor (2016-05) Real-time loop closure in 2D LiDAR SLAM. pp. 1271–1278. External Links: Document Cited by: §IV-C3.
  79. D. Hildebrand (1992) An architectural overview of QNX.. In USENIX Workshop on Microkernels and Other Kernel Architectures, pp. 113–126. Cited by: §IV-F.
  80. J. Hilgert, K. Hirsch, T. Bertram and M. Hiller (2003) Emergency path planning for autonomous vehicles using elastic band theory. In Proceedings 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), Vol. 2, pp. 1390–1395 vol.2. Cited by: §V-C2.
  81. A. B. Hillel, R. Lerner, D. Levi and G. Raz (2014) Recent progress in road and lane detection: a survey. Machine vision and applications 25 (3), pp. 727–745. Cited by: §IV-C2.
  82. M. Hnewa and H. Radha (2020) Object detection under rainy conditions for autonomous vehicles. arXiv preprint arXiv:2006.16471. Cited by: §V-C1.
  83. G. Hoffmann, C. Tomlin, M. Montemerlo and S. Thrun (2007-08) Autonomous automobile trajectory tracking for off-road driving: controller design, experimental validation and racing. pp. 2296 – 2301. External Links: Document Cited by: §IV-C5.
  84. J. Hong Yoon, C. Lee, M. Yang and K. Yoon (2016) Online multi-object tracking via structural constraint event aggregation. In Proceedings of the IEEE Conference on computer vision and pattern recognition, pp. 1392–1400. Cited by: §IV-B2.
  85. Y. Hou, Z. Ma, C. Liu and C. C. Loy (2019) Learning lightweight lane detection CNNs by self attention distillation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1013–1021. Cited by: §II, §IV-C2.
  86. Y. Hou (2019) Agnostic lane detection. arXiv preprint arXiv:1905.03704. Cited by: §IV-C2.
  87. Y. Hsu, Z. Xu, Z. Kira and J. Huang (2018) Learning to cluster for proposal-free instance segmentation. In 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. Cited by: §IV-C2.
  88. C. Hubmann, M. Becker, D. Althoff, D. Lenz and C. Stiller (2017) Decision making for autonomous driving considering interaction and uncertain prediction of surrounding vehicles. In 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 1671–1678. Cited by: §V-I.
  89. J. Hur, S. Kang and S. Seo (2013) Multi-lane detection in urban driving environments using conditional random fields. In 2013 IEEE Intelligent Vehicles Symposium (IV), pp. 1297–1302. Cited by: §IV-C2.
  90. H. Jafarzadeh and C. Fleming (2019-08) Learning model predictive control for connected autonomous vehicles. pp. . External Links: Document Cited by: §IV-C5.
  91. K. Jiang, D. Yang, C. Liu, T. Zhang and Z. Xiao (2019) A flexible multi-layer map model designed for lane-level route planning in autonomous vehicles. Engineering 5 (2), pp. 305 – 318. External Links: ISSN 2095-8099, Document, Link Cited by: §IV-C3.
  92. H. Jung, J. Min and J. Kim (2013) An efficient lane detection algorithm for lane departure detection. In 2013 IEEE Intelligent Vehicles Symposium (IV), pp. 976–981. Cited by: §IV-C2.
  93. S. Kamkar (2015) Drive it like you hacked it: new attacks and tools to wirelessly steal cars. In Presentation at DEFCON, Cited by: §IV-I4, §V-D.
  94. S. Karaman and E. Frazzoli (2011-06) Sampling-based algorithms for optimal motion planning. International Journal of Robotic Research - IJRR 30, pp. 846–894. External Links: Document Cited by: §IV-C4.
  95. S. Karaman, M. Walter, A. Perez, E. Frazzoli and S. Teller (2011-06) Anytime motion planning using the RRT*. pp. 1478–1483. External Links: Document Cited by: §IV-C4.
  96. S. Kato, E. Takeuchi, Y. Ishiguro, Y. Ninomiya, K. Takeda and T. Hamada (2015) An open approach to autonomous vehicles. IEEE Micro 35 (6), pp. 60–68. Cited by: §III.
  97. S. Kato, S. Tokunaga, Y. Maruyama, S. Maeda, M. Hirabayashi, Y. Kitsukawa, A. Monrroy, T. Ando, Y. Fujii and T. Azumi (2018) Autoware on board: enabling autonomous vehicles with embedded systems. In 2018 ACM/IEEE 9th International Conference on Cyber-Physical Systems (ICCPS), pp. 287–296. Cited by: §V-E.
  98. Keith (2016) Smart infrastructure: getting more from strategic assets. University of Cambridge. Cited by: §V-H.
  99. J. B. Kenney (2011) Dedicated short-range communications (DSRC) standards in the United States. Proceedings of the IEEE 99 (7), pp. 1162–1182. Cited by: §IV-H2.
  100. J. H. Kim, S. Seo, N. Hai, B. M. Cheon, Y. S. Lee and J. W. Jeon (2015) Gateway framework for in-vehicle networks based on CAN, FlexRay, and ethernet. IEEE Transactions on Vehicular Technology 64 (10), pp. 4472–4486. Cited by: §IV-I2.
  101. Z. Kim (2008) Robust lane detection and tracking in challenging scenarios. IEEE Transactions on Intelligent Transportation Systems 9 (1), pp. 16–26. Cited by: §IV-C2.
  102. N. Koenig and A. Howard (2004) Design and use paradigms for gazebo, an open-source multi-robot simulator. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)(IEEE Cat. No. 04CH37566), Vol. 3, pp. 2149–2154. Cited by: §V-J.
  103. S. Komkov and A. Petiushko (2019) AdvHat: real-world adversarial attack on ArcFace face ID system. External Links: 1908.08705 Cited by: §V-D.
  104. J. Kong, M. Pfeiffer, G. Schildbach and F. Borrelli (2015-06) Kinematic and dynamic vehicle models for autonomous driving control design. pp. 1094–1099. External Links: Document Cited by: §IV-C5.
  105. K. Konolige (2004) Large-scale map-making. In AAAI, pp. 457–463. Cited by: §IV-A3.
  106. A. Konovaltsev, M. Cuntz, C. Hättich and M. Meurer (2013-09) Autonomous spoofing detection and mitigation in a GNSS receiver with an adaptive antenna array. In ION GNSS+ 2013, External Links: Link Cited by: §IV-I1.
  107. K. Koscher, A. Czeskis, F. Roesner, S. Patel, T. Kohno, S. Checkoway, D. McCoy, B. Kantor, D. Anderson, H. Shacham and S. Savage (2010) Experimental security analysis of a modern automobile. In 2010 IEEE Symposium on Security and Privacy, Vol. , pp. 447–462. Cited by: §IV-I2.
  108. S. Kuutti, S. Fallah, K. Katsaros, M. Dianati, F. Mccullough and A. Mouzakitis (2018-03) A survey of the state-of-the-art localisation techniques and their potentials for autonomous vehicle applications. IEEE Internet of Things Journal PP, pp. 1–1. External Links: Document Cited by: §IV-C3.
  109. M. Labbé and F. Michaud (2018-10) RTAB-Map as an open-source LiDAR and visual simultaneous localization and mapping library for large-scale and long-term online operation. Journal of Field Robotics 36, pp. . External Links: Document Cited by: §IV-C3.
  110. (Website) External Links: Link Cited by: §IV-B3.
  111. (Website) External Links: Link Cited by: §IV-B3.
  112. S. LaValle and J. Kuffner (1999-01) Randomized kinodynamic planning.. Vol. 20, pp. 473–479. External Links: Document Cited by: §IV-C4.
  113. J. Lee and B. B. Litkouhi (2015-October 27) System diagnosis in autonomous driving. Google Patents. Note: US Patent 9,168,924 Cited by: §V-B.
  114. S. Lee, J. Kim, J. Shin Yoon, S. Shin, O. Bailo, N. Kim, T. Lee, H. Seok Hong, S. Han and I. So Kweon (2017) Vpgnet: vanishing point guided network for lane and road marking detection and recognition. In Proceedings of the IEEE international conference on computer vision, pp. 1947–1955. Cited by: §IV-C2.
  115. S. LeVine (2017-03) What it really costs to turn a car into a self-driving vehicle. Note: Cited by: §V-G, §V-G.
  116. J. Levinson, M. Montemerlo and S. Thrun (2007) Map-based precision vehicle localization in Urban environments.. In Robotics: Science and Systems, Vol. 4, pp. 1. Cited by: §IV-A3.
  117. H. Li, D. Ma, B. Medjahed, Y. S. Kim and P. Mitra (2019-04) Analyzing and preventing data privacy leakage in connected vehicle services. SAE Int. J. Adv. & Curr. Prac. in Mobility 1 (), pp. 1035–1045. External Links: Link, Document Cited by: §IV-I5.
  118. J. Li, X. Mei, D. Prokhorov and D. Tao (2016) Deep neural network for structural prediction and lane detection in traffic scene. IEEE transactions on neural networks and learning systems 28 (3), pp. 690–703. Cited by: §IV-C2, §V-I.
  119. K. Lianos, J. Schönberger, M. Pollefeys and T. Sattler (2018-09) VSO: visual semantic odometry. pp. . Cited by: §IV-C3.
  120. L. Lin, X. Liao, H. Jin and P. Li (2019-07) Computation offloading toward edge computing. Proceedings of the IEEE 107, pp. 1584–1607. External Links: Document Cited by: §V-H.
  121. T. Lin, P. Goyal, R. Girshick, K. He and P. Dollár (2017) Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988. Cited by: §IV-C1.
  122. (2020)(Website) External Links: Link Cited by: §I.
  123. G. Liu, F. Wörgötter and I. Markelić (2010) Combining statistical hough transform and particle filter for robust lane detection and tracking. In 2010 IEEE Intelligent Vehicles Symposium, pp. 993–997. Cited by: §IV-C2.
  124. L. Liu, J. Chen, M. Brocanelli and W. Shi (2019) E2M: an energy-efficient middleware for computer vision applications on autonomous mobile robots. In Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, pp. 59–73. Cited by: §IV, §V-F.
  125. L. Liu, Y. Yao, R. Wang, B. Wu and W. Shi (2020) Equinox: a road-side edge computing experimental platform for CAVs. In 2020 International Conference on Connected and Autonomous Driving (MetroCAD), pp. 41–42. Cited by: §IV-H1.
  126. L. Liu, X. Zhang, M. Qiao and W. Shi (2018) SafeShareRide: edge-based attack detection in ridesharing services. In 2018 IEEE/ACM Symposium on Edge Computing (SEC), pp. 17–29. Cited by: §IV-E.
  127. S. Liu, L. Li, J. Tang, S. Wu and J. Gaudiot (2017) Creating autonomous vehicle systems. Synthesis Lectures on Computer Science 6 (1), pp. i–186. Cited by: §I, §IV-A5.
  128. S. Liu, L. Liu, J. Tang, B. Yu, Y. Wang and W. Shi (2019) Edge computing for autonomous driving: opportunities and challenges. Proceedings of the IEEE 107 (8), pp. 1697–1716. Cited by: §V-I.
  129. W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu and A. C. Berg (2016) SSD: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §IV-C1.
  130. H. Loose, U. Franke and C. Stiller (2009) Kalman particle filter for lane recognition on rural roads. In 2009 IEEE Intelligent Vehicles Symposium, pp. 60–65. Cited by: §IV-C2.
  131. A. López, J. Serrat, C. Canero, F. Lumbreras and T. Graf (2010) Robust lane markings detection and road geometry computation. International Journal of Automotive Technology 11 (3), pp. 395–407. Cited by: §IV-C2.
  132. A. Luft (2020-07) The Chevrolet Sonic’s days are numbered. Note: Cited by: §V-G.
  133. C. Lv, X. Hu, A. Sangiovanni-Vincentelli, Y. Li, C. M. Martinez and D. Cao (2018) Driving-style-based codesign optimization of an automated electric vehicle: a cyber-physical system approach. IEEE Transactions on Industrial Electronics 66 (4), pp. 2965–2975. Cited by: §V-K.
  134. M. Magnusson, A. Nuchter, C. Lorken, A. Lilienthal and J. Hertzberg (2009-05) Evaluation of 3D registration reliability and speed - a comparison of ICP and NDT. pp. 3907 – 3912. External Links: Document Cited by: §IV-C3.
  135. (Website) External Links: Link Cited by: §IV-B3.
  136. F. Martinelli, F. Mercaldo, A. Orlando, V. Nardone, A. Santone and A. K. Sangaiah (2020) Human behavior characterization for driving style recognition in vehicle system. Computers & Electrical Engineering 83, pp. 102504. External Links: ISSN 0045-7906, Document, Link Cited by: §IV-I5.
  137. C. Mcmanus, W. Churchill, A. Napier, B. Davis and P. Newman (2013-05) Distraction suppression for vision-based pose estimation at city scales. External Links: Document Cited by: §IV-C3.
  138. (2018)(Website) External Links: Link Cited by: §III, §IV-D, §V-F.
  139. C. Michaelis, B. Mitzkus, R. Geirhos, E. Rusak, O. Bringmann, A. S. Ecker, M. Bethge and W. Brendel (2019) Benchmarking robustness in object detection: autonomous driving when winter is coming. arXiv preprint arXiv:1907.07484. Cited by: §IV-C1.
  140. D. Mills (1992) RFC1305: network time protocol (version 3) specification, implementation. RFC Editor. Cited by: §V-A.
  141. M. Montemerlo, J. Becker, S. Bhat, H. Dahlkamp, D. Dolgov, S. Ettinger, D. Haehnel, T. Hilden, G. Hoffmann, B. Huhnke, D. Johnston, S. Klumpp, D. Langer, A. Levandowski, J. Levinson, J. Marcil, D. Orenstein, J. Paefgen, I. Penny and S. Thrun (2008-09) Junior: the Stanford entry in the urban challenge. Journal of Field Robotics 25, pp. 569–597. External Links: Document Cited by: §IV-C4.
  142. U. Muller, J. Ben, E. Cosatto, B. Flepp and Y. L. Cun (2006) Off-road obstacle avoidance through end-to-end learning. In Advances in neural information processing systems, pp. 739–746. Cited by: §II.
  143. R. Mur-Artal and J. Tardos (2016-10) ORB-SLAM2: an open-source SLAM system for monocular, stereo and RGB-D cameras. IEEE Transactions on Robotics PP, pp. . External Links: Document Cited by: §IV-C3.
  144. A. Nanda, D. Puthal, J. J. P. C. Rodrigues and S. A. Kozlov (2019) Internet of autonomous vehicles communications security: overview, issues, and directions. IEEE Wireless Communications 26 (4), pp. 60–65. Cited by: §IV-I2.
  145. D. Neven, B. De Brabandere, S. Georgoulis, M. Proesmans and L. Van Gool (2018) Towards end-to-end lane detection: an instance segmentation approach. In 2018 IEEE intelligent vehicles symposium (IV), pp. 286–291. Cited by: §I, §IV-C2.
  146. R. Newcombe, S. Lovegrove and A. Davison (2011-11) DTAM: dense tracking and mapping in real-time. pp. 2320–2327. External Links: Document Cited by: §IV-C3.
  147. D. K. Nilsson, U. E. Larson, F. Picasso and E. Jonsson (2009) A first simulation of attacks in the automotive network communications protocol flexray. In Proceedings of the International Workshop on Computational Intelligence in Security for Information Systems CISIS’08, E. Corchado, R. Zunino, P. Gastaldo and Á. Herrero (Eds.), Berlin, Heidelberg, pp. 84–91. Cited by: §IV-I2.
  148. Z. Ning, F. Zhang, W. Shi and W. Shi (2017) Position paper: challenges towards securing hardware-assisted execution environments. In Proceedings of the Hardware and Architectural Support for Security and Privacy, HASP ’17, New York, NY, USA. External Links: ISBN 9781450352666, Link, Document Cited by: §V-D.
  149. (2020)(Website) External Links: Link Cited by: §IV-F.
  150. B. W. O’Hanlon, M. L. Psiaki, J. A. Bhatti, D. P. Shepard and T. E. Humphreys (2013) Real-time GPS spoofing detection via correlation of encrypted signals. NAVIGATION 60 (4), pp. 267–278. External Links: Document, Link, Cited by: §IV-I1.
  151. Union of Concerned Scientists (2018-03) Electric vehicle batteries: materials, cost, lifespan. Cited by: §V-G.
  152. A. Orrick, M. McDermott, D. M. Barnett, E. L. Nelson and G. N. Williams (1994) Failure detection in an autonomous underwater vehicle. In Proceedings of IEEE Symposium on Autonomous Underwater Vehicle Technology (AUV’94), pp. 377–382. Cited by: §V-B.
  153. Z. Pala and N. Inanc (2007-10) Smart parking applications using RFID technology. pp. 1 – 3. External Links: ISBN 978-975-01566-0-1, Document Cited by: §V-H.
  154. X. Pan, J. Shi, P. Luo, X. Wang and X. Tang (2018) Spatial as deep: spatial CNN for traffic scene understanding. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §IV-C2.
  155. (2018)(Website) External Links: Link Cited by: §V-J.
  156. J. Petit, B. Stottelaar, M. Feiri and F. Kargl (2015) Remote attacks on automated vehicles sensors: experiments on camera and LiDAR. Black Hat Europe 11, pp. 2015. Cited by: §IV-I1.
  157. Q. Pham, M. A. Uy, B. Hua, D. T. Nguyen, G. Roig and S. Yeung (2020-04) LCD: learned cross-domain descriptors for 2D-3D matching. Proceedings of the AAAI Conference on Artificial Intelligence 34, pp. 11856–11864. External Links: Document Cited by: §IV-C3.
  158. J. Philion (2019) FastDraw: addressing the long tail of lane detection by adapting a sequential prediction network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 11582–11591. Cited by: §IV-C2.
  159. R. A. Popa, C. M. S. Redfield, N. Zeldovich and H. Balakrishnan (2011) CryptDB: protecting confidentiality with encrypted query processing. In Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles, SOSP ’11, New York, NY, USA, pp. 85–100. External Links: ISBN 9781450309776, Link, Document Cited by: §IV-I3.
  160. M. Prexl, N. Zunhammer and U. Walter (2019-11) Motion prediction for teleoperating autonomous vehicles using a PID control model. pp. 133–138. External Links: Document Cited by: §IV-C5.
  161. F. Qu, Z. Wu, F. Wang and W. Cho (2015) A security and privacy review of VANETs. IEEE Transactions on Intelligent Transportation Systems 16 (6), pp. 2985–2996. Cited by: §IV-I2.
  162. M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler and A. Y. Ng (2009) ROS: an open-source robot operating system. In ICRA workshop on open source software, Vol. 3, pp. 5. Cited by: §II, §IV-G.
  163. H. Rebecq, T. Horstschaefer, G. Gallego and D. Scaramuzza (2016-12) EVO: a geometric approach to event-based 6-DOF parallel tracking and mapping in real-time. IEEE Robotics and Automation Letters PP, pp. . External Links: Document Cited by: §IV-C3.
  164. (Website) External Links: Link Cited by: §IV-B3.
  165. J. Redmon, S. Divvala, R. Girshick and A. Farhadi (2016) You Only Look Once: unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788. Cited by: §IV-C1.
  166. J. Redmon and A. Farhadi (2017) YOLO9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7263–7271. Cited by: §IV-C1.
  167. J. Redmon and A. Farhadi (2018) YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767. Cited by: §I, §IV-C1.
  168. USDOT (2016) Fatal traffic crash data. Cited by: §V-I.
  169. K. Ren, Q. Wang, C. Wang, Z. Qin and X. Lin (2020) The security of autonomous driving: threats, defenses, and future directions. Proceedings of the IEEE 108 (2), pp. 357–372. Cited by: §IV-I1, §IV-I4, §V-D.
  170. S. Ren, K. He, R. Girshick and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §IV-C1.
  171. (2020)(Website) External Links: Link Cited by: §IV-G.
  172. R. Wang, L. Liu and W. Shi (2020-12) HydraSpace: computational data storage for autonomous vehicles. In IEEE International Conference on Collaboration and Internet Computing (CIC), Vision Track, Cited by: §IV-E.
  173. J. Ryu, D. Ogay, S. Bulavintsev, H. Kim and J. Park (2013-01) Development and experiences of an autonomous vehicle for high-speed navigation and obstacle avoidance. Vol. 466, pp. 105–116. External Links: Document Cited by: §IV-C4.
  174. G. Sabaliauskaite, L. S. Liew and J. Cui (2018) Integrating autonomous vehicle safety and security analysis using STPA method and the six-step model. International Journal on Advances in Security 11 (1&2), pp. 160–169. Cited by: §V-B.
  175. A. E. Sallab, M. Abdou, E. Perot and S. Yogamani (2017) Deep reinforcement learning framework for autonomous driving. Electronic Imaging 2017 (19), pp. 70–76. Cited by: §II, §V-I.
  176. R. S. Sandhu and P. Samarati (1994-Sep.) Access control: principle and practice. IEEE Communications Magazine 32 (9), pp. 40–48. External Links: Document, Link, ISSN 1558-1896 Cited by: §IV-I3.
  177. H. Sato and T. Yakoh (2000) A real-time communication mechanism for RTLinux. In 2000 26th Annual Conference of the IEEE Industrial Electronics Society. IECON 2000. 2000 IEEE International Conference on Industrial Electronics, Control and Instrumentation. 21st Century Technologies, Vol. 4, pp. 2437–2442. Cited by: §IV-F.
  178. (Website) External Links: Link Cited by: §IV-B3.
  179. R. E. Schantz and D. C. Schmidt (2002) Middleware. Encyclopedia of Software Engineering. Cited by: §II.
  180. D. Schlegel, M. Colosi and G. Grisetti (2018-05) ProSLAM: graph SLAM from a programmer’s perspective. pp. 1–9. External Links: Document Cited by: §IV-C3.
  181. B. Schoettle (2017) Sensor fusion: a comparison of sensing capabilities of human drivers and highly automated vehicles. University of Michigan. Cited by: §V-I.
  182. D. Sedgwick (2017-02) When driverless cars call for backup. Cited by: §V-G.
  183. (2019)(Website) External Links: Link Cited by: §IV-F.
  184. Y. Seo, J. Lee, W. Zhang and D. Wettergreen (2014-08) Recognition of highway workzones for reliable autonomous driving. IEEE Transactions on Intelligent Transportation Systems 16, pp. 1–11. External Links: Document Cited by: §V-C3.
  185. H. Shin, D. Kim, Y. Kwon and Y. Kim (2017) Illusion and dazzle: adversarial optical channel exploits against LiDARs for automotive applications. In Cryptographic Hardware and Embedded Systems – CHES 2017, W. Fischer and N. Homma (Eds.), Cham, pp. 445–467. Cited by: §IV-I1.
  186. H. Somerville, P. Lienert and A. Sage (2018) Uber’s use of fewer safety sensors prompts questions after Arizona crash. Business news, Reuters. Cited by: §II.
  187. S. S. V. Standard (2009) Dedicated Short Range Communications (DSRC) message set dictionary. SAE International, November. Cited by: §IV-H2.
  188. Steve LeVine (2017)(Website) External Links: Link Cited by: §III.
  189. D. R. Stinson and M. Paterson (2018) Cryptography: theory and practice. CRC press. Cited by: §IV-I2.
  190. B. Su, J. Ma, Y. Peng and M. Sheng (2016-10) Algorithm for RGBD point cloud denoising and simplification based on k-means clustering. 28, pp. 2329–2334 and 2341. Cited by: §IV-C3.
  191. J. Suhr, J. Jang, D. Min and H. Jung (2016-08) Sensor fusion-based low-cost vehicle localization system for complex urban environments. IEEE Transactions on Intelligent Transportation Systems 18, pp. 1–9. External Links: Document Cited by: §IV-C3.
  192. S. Sumikura, M. Shibuya and K. Sakurada (2019-10) OpenVSLAM: a versatile visual SLAM framework. pp. 2292–2295. External Links: ISBN 978-1-4503-6889-6, Document Cited by: §IV-C3.
  193. Y. Tamai, T. Hasegawa and S. Ozawa (1996) The ego-lane detection under rainy condition. In World Congress on Intelligent Transport Systems (3rd: 1996: Orlando Fla.). Intelligent transportation: realizing the future: abstracts of the Third World Congress on Intelligent Transport Systems, Cited by: §V-B.
  194. A. Tampuu, M. Semikin, N. Muhammad, D. Fishman and T. Matiisen (2020) A survey of end-to-end driving: architectures and training methods. arXiv preprint arXiv:2003.06404. Cited by: §IV-B2.
  195. H. Tan, Y. Zhou, Y. Zhu, D. Yao and K. Li (2014) A novel curve lane detection based on improved river flow and RANSAC. In 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), pp. 133–138. Cited by: §IV-C2.
  196. L. Tang, Y. Shi, Q. He, A. W. Sadek and C. Qiao (2020) Performance test of autonomous vehicle LiDAR sensors under different weather conditions. Transportation research record 2674 (1), pp. 319–329. Cited by: §V-B.
  197. Z. Teng, J. Kim and D. Kang (2010) Real-time lane detection by using multiple cues. In ICCAS 2010, pp. 2334–2337. Cited by: §IV-C2.
  198. Texas Instruments TDA. Note: accessed 2018-12-28. Cited by: §IV-D.
  199. The basics of LiDAR - light detection and ranging - remote sensing. Note: accessed 2020-2-18. Cited by: §IV-E.
  200. (2020)(Website) External Links: Link Cited by: §IV.
  201. (2020)(Website) External Links: Link Cited by: §IV-D.
  202. S. Thrun and J. J. Leonard (2008) Simultaneous localization and mapping. In Springer handbook of robotics, pp. 871–889. Cited by: §I, §IV-A3.
  203. Y. Tian, K. Pei, S. Jana and B. Ray (2018) Deeptest: automated testing of deep-neural-network-driven autonomous cars. In Proceedings of the 40th international conference on software engineering, pp. 303–314. Cited by: §V-B.
  204. C. Urmson, J. Anhalt, M. Clark, T. Galatali, J. Gonzalez, J. Gowdy, A. Gutierrez, S. Harbaugh, M. Johnson-Roberson, P. Koon, K. Peterson and B. Smith (2004-01) High speed navigation of unrehearsed terrain: red team technology for grand challenge 2004. Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, Tech. Rep. CMU-RI-04-37. Cited by: §IV-C3.
  205. (2019)(Website) External Links: Link Cited by: §I, §IV-A3.
  206. (Website) External Links: Link Cited by: §IV-B3.
  207. P. Viola and M. Jones (2001) Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition. CVPR 2001, Vol. 1, pp. I–I. Cited by: §IV-C1.
  208. VxWorks. Note: accessed 2018-12-28. Cited by: §IV-F.
  209. Y. Wang, L. Liu, X. Zhang and W. Shi (2019) HydraOne: an indoor experimental research and education platform for CAVs. In 2nd USENIX Workshop on Hot Topics in Edge Computing (HotEdge 19), Cited by: §V-J.
  210. C. Warrender, S. Forrest and B. Pearlmutter (1999) Detecting intrusions using system calls: alternative data models. In Proceedings of the 1999 IEEE Symposium on Security and Privacy (Cat. No.99CB36344), Vol. , pp. 133–145. Cited by: §IV-I.
  211. J. Wei, J. M. Snider, J. Kim, J. M. Dolan, R. Rajkumar and B. Litkouhi (2013) Towards a viable autonomous driving research platform. In 2013 IEEE Intelligent Vehicles Symposium (IV), pp. 763–770. Cited by: §II.
  212. (2020)(Website) External Links: Link Cited by: §IV-G.
  213. R. Wolcott and R. Eustice (2014-10) Visual localization within LiDAR maps for automated urban driving. IEEE International Conference on Intelligent Robots and Systems, pp. 176–183. External Links: Document Cited by: §IV-C3.
  214. P. Wu, C. Chang and C. H. Lin (2014) Lane-mark extraction for automobiles under complex conditions. Pattern Recognition 47 (8), pp. 2756–2767. Cited by: §IV-C2.
  215. (2019)(Website) External Links: Link Cited by: §IV-D.
  216. H. Xu, Y. Gao, F. Yu and T. Darrell (2017) End-to-end learning of driving models from large-scale video datasets. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2174–2182. Cited by: §II.
  217. E. Yağdereli, C. Gemci and A. Z. Aktaş (2015) A study on cyber-security of autonomous and unmanned vehicles. The Journal of Defense Modeling and Simulation 12 (4), pp. 369–381. External Links: Document, Link, Cited by: §IV-I1.
  218. S. Yamazaki, C. Miyajima, E. Yurtsever, K. Takeda, M. Mori, K. Hitomi and M. Egawa (2016) Integrating driving behavior and traffic context through signal symbolization. In 2016 IEEE Intelligent Vehicles Symposium (IV), pp. 642–647. Cited by: §IV-C4.
  219. C. Yan, W. Xu and J. Liu (2016) Can you trust autonomous vehicles: contactless attacks against sensors of self-driving vehicle. DEF CON 24 (8), pp. 109. Cited by: §V-B.
  220. C. Yan, W. Xu and J. Liu (2016) Can you trust autonomous vehicles: contactless attacks against sensors of self-driving vehicle. Vol. 24, pp. 109. Cited by: §IV-I1.
  221. S. Yang, Y. Song, M. Kaess and S. Scherer (2016-10) Pop-up SLAM: semantic monocular plane SLAM for low-texture environments. pp. 1222–1229. External Links: Document Cited by: §IV-C3.
  222. V. Yodaiken (1999) The RTLinux manifesto. In Proc. of the 5th Linux Expo, Cited by: §IV-F.
  223. F. Yu, H. Chen, X. Wang, W. Xian, Y. Chen, F. Liu, V. Madhavan and T. Darrell (2020) BDD100K: a diverse driving dataset for heterogeneous multitask learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2636–2645. Cited by: §IV-B2.
  224. J. Yu, X. Guo, X. Pei, Z. Chen and M. Zhu (2019-02) Robust model predictive control for path tracking of autonomous vehicle. External Links: Document Cited by: §IV-C5.
  225. E. Yurtsever, J. Lambert, A. Carballo and K. Takeda (2020-03) A survey of autonomous driving: common practices and emerging technologies. IEEE Access PP, pp. 1–1. External Links: Document Cited by: §II, §IV-C4.
  226. S. Zang, M. Ding, D. Smith, P. Tyler, T. Rakotoarivelo and M. A. Kaafar (2019) The impact of adverse weather conditions on autonomous vehicles: how rain, snow, fog, and hail affect the performance of a self-driving car. IEEE Vehicular Technology Magazine 14 (2), pp. 103–111. Cited by: §V-C1.
  227. K. C. Zeng, S. Liu, Y. Shu, D. Wang, H. Li, Y. Dou, G. Wang and Y. Yang (2018-08) All your GPS are belong to us: towards stealthy manipulation of road navigation systems. In 27th USENIX Security Symposium (USENIX Security 18), Baltimore, MD, pp. 1527–1544. External Links: ISBN 978-1-939133-04-5, Link Cited by: §IV-I1.
  228. G. Zhang, C. Yan, X. Ji, T. Zhang, T. Zhang and W. Xu (2017) DolphinAttack: inaudible voice commands. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS ’17, New York, NY, USA, pp. 103–117. External Links: ISBN 9781450349468, Link, Document Cited by: §IV-I4.
  229. J. Zhang and S. Singh (2014-07) LOAM: lidar odometry and mapping in real-time. pp. . External Links: Document Cited by: §IV-C3.
  230. Q. Zhang, H. Zhong, J. Cui, L. Ren and W. Shi (2020) AC4AV: a flexible and dynamic access control framework for connected and autonomous vehicles. IEEE Internet of Things Journal (), pp. 1–1. Cited by: §IV-I3.
  231. Q. Zhang, Y. Wang, X. Zhang, L. Liu, X. Wu, W. Shi and H. Zhong (2018) OpenVDAP: an open vehicular data analytics platform for CAVs. In 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), pp. 1310–1320. Cited by: §IV-E, §IV-E.
  232. Z. Zhang, S. Liu, G. Tsai, H. Hu, C. Chu and F. Zheng (2018) Pirvs: an advanced visual-inertial SLAM system with flexible sensor fusion and hardware co-design. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–7. Cited by: §IV-A3.
  233. H. Zhong, L. Pan, Q. Zhang and J. Cui (2019) A new message authentication scheme for multiple devices in intelligent connected vehicles based on edge computing. IEEE Access 7 (), pp. 108211–108222. Cited by: §IV-I3.
  234. S. Zhou, Y. Jiang, J. Xi, J. Gong, G. Xiong and H. Chen (2010) A novel lane detection based on geometrical model and gabor filter. In 2010 IEEE Intelligent Vehicles Symposium, pp. 59–64. Cited by: §IV-C2.
  235. J. Ziegler, P. Bender, M. Schreiber, H. Lategahn, T. Strauss, C. Stiller, T. Dang, U. Franke, N. Appenrodt and C. G. Keller (2014) Making bertha drive—an autonomous journey on a historic route. IEEE Intelligent transportation systems magazine 6 (2), pp. 8–20. Cited by: §I, §IV-A3.
  236. I. Zolanvari, S. Ruano, A. Rana, A. Cummins, A. Smolic, R. Da Silva and M. Rahbar (2019-09) DublinCity: annotated LiDAR point cloud and its applications. Cited by: §IV-C3.
  237. Z. Zou, Z. Shi, Y. Guo and J. Ye (2019) Object detection in 20 years: a survey. arXiv preprint arXiv:1905.05055. Cited by: §IV-C1.