Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward

Adnan Qayyum, Muhammad Usama, and Junaid Qadir
Information Technology University (ITU), Punjab, Lahore, Pakistan
Ala Al-Fuqaha
Hamad Bin Khalifa University (HBKU), Doha, Qatar
Abstract

Connected and autonomous vehicles (CAVs) will form the backbone of future next-generation intelligent transportation systems (ITS), providing travel comfort and road safety along with a number of value-added services. Such a transformation—which will be fuelled by concomitant advances in technologies for machine learning (ML) and wireless communications—will enable a future vehicular ecosystem that is better featured and more efficient. However, there are lurking security problems related to the use of ML in such a critical setting, where an incorrect ML decision may not only be a nuisance but can lead to loss of precious lives. In this paper, we present an in-depth overview of the various challenges associated with the application of ML in vehicular networks. In addition, we formulate the ML pipeline of CAVs and present various potential security issues associated with the adoption of ML methods. In particular, we focus on the perspective of adversarial ML attacks on CAVs and outline a solution to defend against adversarial attacks in multiple settings.

Connected and autonomous vehicles, machine/deep learning, adversarial machine learning, adversarial perturbation, perturbation detection, robust machine learning.

I Introduction

In recent years, connected and autonomous vehicles (CAVs) have emerged as a promising area of research. Connected vehicles are an important component of intelligent transportation systems (ITS) in which vehicles communicate with each other and with communications infrastructure to exchange safety messages and other critical information (e.g., traffic and road conditions). One of the main driving forces behind CAVs is the advancement of machine learning (ML) methods, particularly deep learning (DL), which are used for decision making at different levels. Unlike conventional connected vehicles, autonomous (self-driving) vehicles have two important characteristics, namely, automation capability and cooperation (connectivity) [5]. In future smart cities, CAVs are expected to have a profound impact on the vehicular ecosystem and society.

The phenomenon of connected vehicles is realized through a technology known as vehicular networks or vehicular ad-hoc networks (VANETs) [6]. Over the years, various configurations of connected vehicles have been developed, including the use of dedicated short-range communications (DSRC) in the United States and ITS-G5 in Europe, both based on the IEEE 802.11p standard. However, a recent study [7] has shown many limitations of such systems, such as (1) short-lived infrastructure-to-vehicle (I2V) connections, (2) non-guaranteed quality of service (QoS), and (3) unbounded channel access delay. To address such limitations, the 3rd Generation Partnership Project (3GPP) has initiated work focused on leveraging the high penetration rate of long term evolution (LTE) and 5G cellular networks to support vehicle-to-everything (V2X) services [8]. The purpose of developing V2X technology is to enable communication between all entities encountered in the road environment, including vehicles, communications infrastructure, pedestrians, cyclists, etc.

The impressive ability of ML/DL to leverage increasingly accessible data, along with advances in other concomitant technologies (such as wireless communications), seems set to enable autonomous and self-organizing connected vehicles in the future. In addition, future vehicular networks will evolve from conventional to autonomous vehicles and will enable ubiquitous Internet access on vehicles. ML will have a predominant role in building the perception system of autonomous and semi-autonomous connected vehicles.

Fig. 1: Outline of the paper

Despite the development of different configurations of connected vehicles, they are still vulnerable to various security issues, and there are various automotive attack surfaces that can be exploited [9]. The threat is getting worse with the development of fully autonomous vehicles, as autonomous vehicles are being equipped with many sensors such as cameras, RADAR, LIDAR, and mechanical control units. These sensors share critical sensory information with onboard devices through the CAN bus and with other nearby vehicles as well. The backbone of self-driving vehicles is the onboard intelligent processing capability that uses the data collected through the sensory system. This data can be used for many other purposes, e.g., obtaining information about vehicle kinetics, traffic flow, road conditions, and network conditions. Such data can potentially be used for improving the performance of the vehicular ecosystem through adaptive data-driven decision making, but it can also be used to accomplish various destructive objectives. Therefore, ensuring data integrity and security is critically important to avoid various risks and attacks on CAVs.

It is common for the perception and control systems of CAVs to be built using ML/DL methods. However, ML/DL techniques have recently been found to be vulnerable to carefully crafted adversarial perturbations [10], and different physical-world attacks have been successfully performed on the vision system of autonomous cars [11, 12]. This has raised many privacy and security concerns about the use of such methods, particularly for security-critical applications like CAVs. In this paper, we aim to highlight various security issues associated with the use of ML and present a review of the adversarial ML literature mainly focusing on CAVs. In addition, we present a taxonomy of possible solutions for mitigating adversarial ML attacks, together with open research issues related to autonomous vehicles, connected vehicles, and ML.

ML in general, and DL schemes in particular, perform exceptionally well at learning hidden patterns from data. DL schemes such as deep neural networks (DNN) have surpassed human-level performance in many perception and detection tasks by accurately learning from a large corpus of training data and classifying/predicting with high accuracy on unseen real-world test examples. Because DL schemes produce outstanding results, they have been used in many real-world security-sensitive tasks, such as the perception system of self-driving cars and anomaly and intrusion detection in vehicular networks. ML/DL schemes are designed for benign and stationary environments, where it is assumed that the training and test data belong to the same statistical distribution. This assumption is flawed in real-world applications, where training and test data can have different statistical distributions, which opens the door for adversaries to compromise ML/DL-based systems. Furthermore, the lack of interpretability of the learning process, imperfections in the training process, and discontinuities in the input-output relationship of DL schemes also provide incentives for adversaries to fool the deployed ML/DL system [13].

Year | Authors | Publisher | Papers Cited | Focused Area
2014 | Mejri et al. [1] | Elsevier Vehicular Communications | 69 | Security of vehicular networks
2016 | Gardiner et al. [2] | ACM Computing Surveys (CSUR) | 40 | Security of ML for malware classification
2018 | Chakraborty et al. [3] | arXiv | 79 | Adversarial attacks and defenses
2018 | Akhtar et al. [4] | IEEE Access | 195 | Adversarial attacks and defenses in computer vision
2018 | Siegel et al. [14] | IEEE Transactions on ITS | 198 | Survey on connected vehicles’ landscape
2018 | Hussain et al. [6] | IEEE Communications Surveys and Tutorials (COMST) | 230 | Autonomous cars: research results, issues and future challenges
2019 | Yuan et al. [15] | IEEE Transactions on Neural Networks and Learning Systems (TNNLS) | 146 | Adversarial attacks and defenses for deep learning systems
2019 | Wang et al. [16] | arXiv | 128 | Adversarial ML attacks and defenses in the text domain
2019 | Our Paper | — | 239 | Security of CAVs and ML
TABLE I: Comparison of this paper with existing survey and review papers on the security of machine learning (ML) and connected and autonomous vehicles (CAVs), in terms of their coverage of conventional challenges, threat models, adversarial ML, robust ML solutions, autonomous vehicles, connected vehicles, and open research issues.

Contributions of this Paper: In this paper, we build upon the existing literature available on CAVs and present a comprehensive review of that literature. The following are the major contributions of this study.

  1. We formulate the ML pipeline of CAVs and describe in detail various security challenges that arise with the increasing adoption of ML techniques in CAVs, specifically emphasizing the challenges posed by adversarial ML;

  2. We present a taxonomy of various threat models and highlight the generalization of attack surfaces for general ML, autonomous, and connected vehicle applications;

  3. We review existing adversarial ML attacks with a special emphasis on their relevance for CAVs;

  4. We review robust ML approaches and provide a taxonomy of these approaches with a special emphasis on their relevance for CAVs; and

  5. Finally, we highlight various open research problems that require further investigation.

Organization of the Paper: The organization of this paper is depicted in Figure 1. The history, introduction, and various challenges associated with connected and automated vehicles (CAVs) are presented in Section II. Section III presents an overview of the ML pipeline in CAVs. A detailed overview of adversarial ML and its threats to CAVs is presented in Section IV. An outline of various solutions to robustify applications of ML, along with common methods and recommendations for evaluating robustness, is presented in Section V. Section VI presents open research problems on the use of ML in the context of CAVs. Finally, we conclude the paper in Section VII. A summary of the salient acronyms used in this paper is presented in Table II for convenience.

BSM Basic Safety Message
BFGS Broyden–Fletcher–Goldfarb–Shanno Algorithm
CAN Controller Area Network
CAVs Connected and Automated (Autonomous) Vehicles
CIFAR Canadian Institute for Advanced Research
CNN Convolutional Neural Network
C&W Carlini and Wagner Algorithm
DARPA Defense Advanced Research Projects Agency
DL Deep Learning
DNN Deep Neural Network
ECUs Electronic Control Units
FGSM Fast Gradient Sign Method
GAN Generative Adversarial Networks
GPS Global Positioning System
GTSDB German Traffic Sign Detection Benchmark
GTSRB German Traffic Sign Recognition Benchmark
IoV Internet of Vehicles
JSMA Jacobian-based Saliency Map Attack
L-BFGS Limited-memory BFGS
LIDAR LIght Detection and Ranging
LISA Laboratory for Intelligent & Safe Automobiles
LSTM Long Short-Term Memory
MC/DC Modified Condition/Decision Coverage
ML Machine Learning
MNIST Modified National Institute of Standards and Technology
ODD Operational Design Domain
RADAR RAdio Detection And Ranging
RL Reinforcement Learning
RSU Road-Side Unit
SAE Society of Automotive Engineers
SVM Support Vector Machine
VANETs Vehicular Ad-hoc Networks
V2I Vehicle to Infrastructure
V2V Vehicle to Vehicle
V2X Vehicle to Everything
VGG (Oxford University’s) Visual Geometry Group
YOLO You Only Look Once (Classifier)
TABLE II: List of Acronyms

II Connected and Autonomous Vehicles (CAVs): History, Introduction, and Challenges

In this section, we provide the history, introduction, and background of CAVs, along with the different conventional and security challenges associated with them.

II-A Autonomous Vehicles and Levels of Automation

Fig. 2: The taxonomy of the levels of automation in driving.

The Society of Automotive Engineers (SAE) has defined a taxonomy of driving automation that is organized into six levels. The potential of driving automation at each level is described next and depicted in Figure 2. Moreover, according to a recent scientometric and bibliometric review of autonomous vehicles [17], different naming conventions have been used over the years to refer to autonomous vehicles. These names are illustrated in Figure 3; note that the year denotes the publication year of the first paper mentioning the corresponding name.

Fig. 3: The illustration of different naming conventions used for referring to autonomous vehicles over the years; the year denotes the publication year of the first paper mentioning the corresponding name. We see that the self-driving car is not an entirely new concept and has been referred to by a number of terms. (Source: [17])
  • Level 0 No automation: all driving tasks and major systems are controlled by a human driver;

  • Level 1 Function-specific automation: provides limited driver assistance, e.g., lateral or longitudinal motion control;

  • Level 2 Partial driving automation: at least two primary control functions are combined to perform an action, e.g., lane keeping assistance and adaptive cruise control;

  • Level 3 Conditional driving automation: enables limited self-driving automation, i.e., allows the driver to temporarily divert their attention from driving to perform another activity, but the presence of a driver who can retake control within a few seconds is always required;

  • Level 4 High driving automation: an automated driving system performs all dynamic tasks of driving, e.g., monitoring of the environment and motion control. However, the driver is capable of getting full control of the vehicle’s safety-critical functions under certain scenarios;

  • Level 5 Self-driving automation: an automated driving system performs all dynamic functions of driving and monitors the nearby environment for the entire trip, without any human intervention at any time.

The SAE defines the operational design domain (ODD) for the safe operation of autonomous vehicles as “the specific conditions under which a given driving automation system or feature thereof is designed to function, including, but not limited to, driving modes” [18]. The ODD refers to the domain of operation that an autonomous vehicle is designed to handle. An ODD representing an ability to drive in good weather conditions is quite different from an ODD that embraces all kinds of weather and lighting conditions. The SAE recommends that the ODD should be monitored at run-time to gauge whether the autonomous vehicle is in a situation that it was designed to safely handle.

II-B Development of Autonomous Vehicles: Historical Overview

Self-driving vehicles, especially at the lower levels of automation (referring to the taxonomy of automation presented in Figure 2), have existed for a long time. In 1925, Francis Houdina presented a remote-controlled car famously known as the American Wonder. At the 1939-1940 New York World’s Fair, General Motors’ Futurama exhibit showcased aspects of what we today call the self-driving car. The first work on the design and development of self-driving vehicles was initiated by General Motors and RCA in the early 1950s [19], followed by the work of Prof. Robert Fenton at The Ohio State University from 1964-80.

In 1986, Ernst Dickmanns at the Bundeswehr University Munich designed a robotic van that could drive autonomously without traffic, and by 1987 the van drove at speeds of up to 60 km/h. His group also started the development of video image processing to recognize driving scenes [20], which was followed by a demonstration performed under the Eureka Prometheus project. The super-smart vehicle systems (SSVS) programs in Europe [21] and Japan [22] were also based on the earlier work of Dickmanns. In 1992, four vehicles drove in a convoy using magnetic markers on the road for relative positioning; a similar test was repeated in 1997 with eight vehicles using radar systems and V2V communications. This work paved the way for modern adaptive cruise control and automated emergency braking systems. This R&D work was followed by initiatives such as the PATH Program by Caltrans and the University of California, which began in 1986; in particular, work on self-driving gained huge popularity with the demonstrations of the National Automated Highway Systems Consortium (NAHSC) during 1994-98 [23], and this momentum lasted until 2003.

In 2002, the Defense Advanced Research Projects Agency (DARPA) announced its grand challenge for autonomous vehicles. The first episode was held in 2004, where very few cars were able to navigate more than a few miles through the Mojave desert; no team completed the 140-mile course, with Carnegie Mellon University’s (CMU) car traveling the farthest at nearly seven miles. In 2005, the second episode of the DARPA grand challenge was held, in which five out of twenty-three teams made it to the finish line; this time Stanford University’s vehicle “Stanley” won the challenge. In the third episode of the DARPA grand challenge in 2007, universities were invited to demonstrate autonomous vehicles on busy roads, to shift the perception of the public and of the tech and automobile industries about the design and feasibility of autonomous vehicles.

In 2007, Google hired the team leads of the Stanford and CMU autonomous vehicle projects and started pushing its self-driving car design onto public roads. By 2010, Google’s self-driving car had navigated approximately 140 thousand miles on the roads of California in its quest to achieve the target of 10 million miles by 2020. In 2013, VisLab (a spin-off company of the University of Parma) successfully completed an international autonomous driving challenge by driving two orange vans 15,000 km, with minimal driver interventions, from the University of Parma in Italy to Shanghai in China. A year later, in 2014, Volvo demonstrated the road train concept, in which one vehicle controls several other vehicles behind it in order to avoid road congestion. In 2016, Tesla began commercial sales of cars with highway-speed intelligent cruise control requiring minimal human intervention.

In October 2018, Google’s self-driving car program successfully achieved the 10-million-mile target. The main aim of Google’s self-driving car program is to halve the number of deaths caused by traffic accidents, and to date, they are working towards achieving this ambitious goal. It is expected that by 2020 the state departments of motor vehicles (DMV) may permit self-driving cars on highways with special lanes and control settings. By 2025, it is expected that public transportation will also become driverless, and by 2030 it is foreseen that we will have level-5 autonomous vehicles (https://bit.ly/2Kei9ci). A timeline for the development of autonomous vehicles over the past decades is depicted in Figure 4.

Fig. 4: The timeline for the development of autonomous vehicles.
Fig. 5: The basic system architecture of connected vehicles having three types of communications: vehicle-to-vehicle (V2V), infrastructure-to-infrastructure (I2I), and infrastructure-to-vehicle (I2V).
Fig. 6: Autonomous vehicle’s major sensor types, their range, and position (figure adapted from [24]).

II-C Introduction to Connected and Autonomous Vehicles (CAVs)

The term connected vehicles refers to the technologies, services, and applications that together enable inter-vehicle connectivity. In the connected vehicles setting, vehicles are equipped with a wide variety of onboard sensors that communicate with each other via the CAN bus, and with nearby communication infrastructure and vehicles (as illustrated in Figure 5). The applications of connected vehicles include everything from traffic safety, roadside assistance, infotainment, efficiency, telematics, and remote diagnostics to autonomous vehicles and GPS. In general, connected vehicles can be regarded as a cooperative intelligent transport system [25] and a fundamental component of the Internet of Vehicles (IoV) [26]. A review of truck platooning automation projects that formulate the settings of connected vehicles (described earlier), together with various sensors (i.e., RADAR, LIDAR, localization, laser scanners, etc.) and computer vision techniques, is presented in [27]. The key purpose of initiating and investigating such projects is to reduce energy consumption and personnel costs through automated operation of following vehicles. Furthermore, it has been suggested in the literature that throughput on urban roads can be doubled using vehicle platooning [28].

Fig. 7: The systematic software workflow of autonomous vehicles. The nuts and bolts of all important operational blocks of the software workflow are depicted to provide the reader with a better understanding of the system design involved in developing a state-of-the-art autonomous vehicle.

CAVs are an emerging area of research that is drawing substantial attention from both academia and industry. The idea of connected vehicles has been conceptualized to enable inter-vehicle communications to provide better traffic flow, road safety, and a greener vehicular environment while reducing fuel consumption and travel cost. There are two types of nodes in a network of connected vehicles: (1) vehicles having onboard units (OBUs), and (2) roadside wireless communication infrastructure or roadside units (RSUs). The basic configuration of a vehicular network is shown in Figure 5. There are three modes of communication in such networks: vehicle-to-vehicle (V2V), infrastructure-to-infrastructure (I2I), and vehicle-to-infrastructure (V2I). Besides these, there are two more types of communication—vehicle-to-pedestrian (V2P) and vehicle-to-everything (V2X)—that are expected to become part of the future connected vehicular ecosystem.

In modern vehicles, self-contained embedded systems called electronic control units (ECUs) are used to digitally control a heterogeneous combination of components (such as brakes, lighting, entertainment, and the drivetrain/powertrain) [29]. There are more than 100 such embedded ECUs in a car, executing about 100 million lines of code, and they are interconnected to control and provide different functionalities such as acceleration, steering, and braking [30]. The security of ECUs can be compromised, and remote attacks can be realized to gain control of the vehicle, as illustrated in [29].

Modern CAVs utilize a number of onboard sensors, including proximity, short, medium, and long range sensors. While each of these sensors works in its dedicated range, they can act together to detect objects and obstacles over a wide range. The major types of sensors deployed in autonomous vehicles and their sensing ranges are shown in Figure 6 and are briefly discussed next; a small configuration sketch follows the list.

  • Proximity Sensors (5m): Ultrasonic sensors are proximity sensors designed to detect nearby obstacles when the car is moving at low speed; in particular, they provide parking assistance.

  • Short Range Sensors (30m): There are two types of short-range sensors: (1) forward and backward cameras and (2) short-range radars (SRR). Forward cameras assist in traffic sign recognition and lane departure warning, backward cameras provide parking assistance, and SRRs help in blind spot detection and cross-traffic alerts.

  • Medium Range Sensors (80-160m): The LIDAR and medium-range radars (MRR) are designed with a medium range and are used for pedestrian detection and collision avoidance.

  • Long Range Sensors (250m): Long range radars (LRR) enable adaptive cruise control (ACC) at high speeds in conjunction with the information collected from internal sensors and from other vehicles and nearby RSU [31].
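To make the ranges above concrete, the following sketch encodes them as a simple lookup table and selects the sensors whose nominal range covers an object at a given distance. The numeric values are approximations taken from the list above and Figure 6, not a definitive sensor specification.

```python
# Illustrative sensor ranges (metres); values are approximate and for illustration only.
SENSOR_RANGES_M = {
    "ultrasonic": 5,              # proximity / parking assistance
    "camera": 30,                 # traffic signs, lane departure
    "short_range_radar": 30,      # blind spot detection, cross-traffic alert
    "lidar": 160,                 # pedestrian detection, collision avoidance
    "medium_range_radar": 160,
    "long_range_radar": 250,      # adaptive cruise control
}

def sensors_covering(distance_m: float):
    """Return the sensors whose nominal range covers an object at the given distance."""
    return [name for name, rng in SENSOR_RANGES_M.items() if distance_m <= rng]

print(sensors_covering(100))      # e.g. ['lidar', 'medium_range_radar', 'long_range_radar']
```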

The software design of autonomous vehicles utilizing ML/DL schemes is divided into five interconnected modules, namely: environmental perception, mapping, planning, control, and system supervision. The software modules take input from the sensor block of the autonomous vehicle and output intelligent actuator control commands. Figure 7 highlights the software design of autonomous vehicles and also identifies the sensory input required by each software module to perform its designated task.

II-D Security-Related Challenges in Developing Robust CAVs

Modern vehicles are controlled by complex distributed computer systems comprising millions of lines of code executing on tens of heterogeneous processors with rich connectivity provided by internal networks (e.g., CAN) [9]. While this structure has offered significant efficiency, safety, and cost benefits, it has also created the opportunity for new attacks. Ensuring the integrity and security of vehicular systems is crucial, as they are intended to provide road safety and are essentially life critical. Vehicular networks are designed using a combination of different technologies and there are various attack surfaces which can be broadly classified into internal and external attacks. Different types of attacks on vehicular networks are described below.

II-D1 Application Layer Attacks

The application layer attacks affect the functionality of a specific vehicular application such as beaconing and message spoofing. Application layer attacks can be broadly classified as integrity or authenticity attacks and are briefly described below.

  • Integrity Attacks: In the message fabrication attack, the adversary continuously listens to the wireless medium and upon receiving each message, fabricates its content accordingly and rebroadcasts it to the network. Modification of each message may have a different effect on the system state and depends solely on the design of the longitudinal control system. A comprehensive survey on attacks on the fundamental security goals, i.e., confidentiality, integrity, and availability can be found in [32].

    In the spoofing attack, the adversary imitates another vehicle in the network to inject falsified messages into the target vehicle or a specific vehicle preceding the target. Therefore, the physical presence of the attacker close to the target vehicle is not necessarily required. In a recent study [33], the use of onboard ADAS sensors is proposed for the detection of location spoofing attacks in vehicular networks. A similar type of attack in a vehicular network is the GPS spoofing/jamming attack [34], in which an attacker transmits false location information by generating strong GPS signals from a satellite simulator. In addition, a thief can use an integrated GPS/GSM jammer to prevent a vehicle’s anti-theft system from reporting the vehicle’s actual location [35].

    In the replay attack, the adversary stores a message received by one of the network’s nodes and tries to replay it later to achieve malicious goals [36]. The replayed message contains old information that can cause different hazards to both the vehicular network and its nodes. For example, consider a message replaying attack by a malicious vehicle that is attempting to jam traffic [37].

  • Authenticity Attacks: Authenticity is another major challenge in vehicular networks, which refers to protecting the vehicular network from inside and outside malicious vehicles (possessing falsified identities) by denying their access to the system [38]. There are two types of authenticity attacks, namely, the Sybil attack and impersonation attacks [39]. In a Sybil attack, a malicious vehicle assumes many fake identities [40], and in an impersonation attack, the adversary exploits a legitimate vehicle to obtain network access and performs malicious activities. For example, a malicious vehicle can impersonate a few non-malicious vehicles to broadcast falsified messages [41]. This type of attack is also known as the masquerading attack.

To avoid application layer attacks, various cryptographic approaches can be effectively leveraged, especially when the attacker is a malicious outsider [1]. For instance, digital signatures can be used to ensure message integrity and to protect messages against unauthorized use [42]. In addition, digital signatures can potentially provide both data-level and entity-level authentication. Moreover, to prevent replay attacks, a timestamp-based random number (nonce) can be embedded within messages. While the aforementioned methods are general, there are other unprecedented challenges related to vehicular network implementation, deployment, and standardization. For example, protection against security threats becomes more challenging in the presence of a trusted compromised vehicle with a valid certificate. In such cases, data-driven anomaly detection methods can be used [43, 44]. A survey on anomaly detection for enhancing the security of connected vehicles is presented in [45].
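As a concrete illustration of the integrity and anti-replay measures mentioned above, the following sketch signs a basic safety message and verifies it on receipt. It is a minimal sketch using a shared-key HMAC with a timestamp and nonce; deployed V2X systems instead rely on certificate-based digital signatures (e.g., IEEE 1609.2), so the key, message fields, and freshness window used here are purely illustrative.

```python
import hmac, hashlib, json, time, secrets

SHARED_KEY = b"demo-key"        # illustrative only; real V2X uses PKI-based signatures
FRESHNESS_WINDOW_S = 1.0        # reject messages older than this
_seen_nonces = set()            # cache of recently seen nonces (would be pruned in practice)

def sign_bsm(payload: dict) -> dict:
    """Attach a timestamp, a random nonce, and an HMAC tag to a basic safety message."""
    msg = dict(payload, timestamp=time.time(), nonce=secrets.token_hex(8))
    body = json.dumps(msg, sort_keys=True).encode()
    msg["tag"] = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return msg

def verify_bsm(msg: dict) -> bool:
    """Check integrity (HMAC) and freshness (timestamp + nonce) of a received message."""
    tag = msg.get("tag", "")
    body = json.dumps({k: v for k, v in msg.items() if k != "tag"}, sort_keys=True).encode()
    if not hmac.compare_digest(tag, hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()):
        return False            # fabricated or modified message
    if abs(time.time() - msg["timestamp"]) > FRESHNESS_WINDOW_S:
        return False            # stale message (possible replay)
    if msg["nonce"] in _seen_nonces:
        return False            # replayed message
    _seen_nonces.add(msg["nonce"])
    return True

bsm = sign_bsm({"vehicle_id": "V42", "lat": 31.52, "lon": 74.35, "speed_mps": 12.4})
print(verify_bsm(bsm))          # True on first receipt
print(verify_bsm(bsm))          # False: same nonce, so it is treated as a replay
```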

II-D2 Network Layer Attacks

Network layer attacks differ from application layer attacks in that they can be launched in a distributed manner. One prominent example of such attacks on vehicular systems is the use of vehicular botnets to attempt a denial of service (DoS) or distributed denial of service (DDoS) attack. The potential of a vehicular network-based botnet attack on autonomous vehicles is presented in [46]. The study demonstrates that such an attack can cause severe physical congestion on hot-spot road segments, resulting in increased trip duration for vehicles in the target area. Another way to realize a DoS attack is network jamming, which disrupts the communications network over a small or large geographic area. As discussed earlier, current configurations of vehicular networks are based on the IEEE 802.11p standard, which uses a single control channel (CCH) with multiple service channels (SCH) and can be attacked through single-channel jamming or multi-channel jamming that sweeps between all channels. Various conventional techniques can be adopted to mitigate network layer attacks, such as frequency hopping and channel or technology switching. The coalition or platooning attack is a similar type of attack in which a group of compromised vehicles cooperates to perform malicious activities such as blocking or interrupting communications between legitimate vehicles.

II-D3 System Level Attacks

Attacks on the vehicle’s hardware and software are known as system level attacks and can be performed either by malicious insiders at development time or by outsiders exploiting unattended vehicular access. Such attacks are more serious in nature, as they can cause damage even in the presence of deployed state-of-the-art security measures and secure end-to-end communications [47]. For instance, if the onboard hardware or software system of a vehicle is maliciously modified, then the information exchanged between the vehicle and communication systems will be inaccurate, and the overall performance and security of the vehicular network will be compromised. In [48], the authors investigated a non-invasive sensor spoofing attack on a car’s anti-lock braking system such that the braking system mistakenly reports a specific velocity.

II-D4 Privacy Breaches

In vehicular networks, vehicles periodically broadcast safety messages that contain critical information such as vehicle identity, current location, velocity, and acceleration. The adversary can exploit such information by attempting an eavesdropping attack, which is a type of passive attack and is more difficult to detect. Therefore, preserving the privacy of vehicles and drivers is of utmost importance. Privacy preservation allows vehicles to communicate with each other without disclosing their identities, which is accomplished by masking their identities, e.g., using pseudonyms. In vehicular networks, knowing the origin of a message is crucial for authentication purposes; therefore, vehicles should be equipped with privacy-preserving authentication mechanisms ensuring that communication among vehicles (V2V) and with infrastructure (V2I) is confidential. However, inter-vehicular communication can be eavesdropped upon by anyone within radio range, e.g., a malicious vehicle can collect and misuse confidential information. Similarly, an attacker can construct location profiles of vehicles by establishing a connection with the RSU. Therefore, even pseudonymous or completely anonymous schemes in vehicular networks remain vulnerable to privacy breaches [49].

II-D5 Sensor Attacks

Although the sensors of autonomous vehicles are designed to be resilient to environmental noise, such as acoustic interference from nearby objects and vehicles, they cannot resist intentionally injected noise, which can be used to realize various attacks such as jamming and spoofing.

II-D6 Attacks on the Perception System

The perception system of self-driving vehicles is developed using various computer vision techniques, including modern ML/DL-based methods, for identifying objects, e.g., pedestrians, traffic signs, and symbols. The perception system of a self-driving vehicle is highly vulnerable to physical-world and adversarial attacks. For example, suppose we are learning a controller to predict the steering angle of an autonomous car as a function of the vision-based input (captured into a feature vector x). The adversary may introduce a small manipulation (i.e., x is modified into x + δ) such that the predicted steering angle for x + δ is maximally distant from the optimal angle.
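The following sketch illustrates such an attack on a steering-angle regressor. It assumes a toy, untrained PyTorch model and uses a single FGSM-style gradient step that pushes the prediction away from the model’s own clean output (used here as a proxy for the optimal angle); it is not a reconstruction of any specific published attack.

```python
import torch
import torch.nn as nn

# Toy stand-in for a vision-based steering model: camera frame -> steering angle.
steering_model = nn.Sequential(
    nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(1),
)

def steering_attack(model, x, eps=0.01):
    """One FGSM-style step that pushes the predicted angle away from the clean prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    clean_angle = model(x).detach()                       # prediction on the unperturbed frame
    deviation = (model(x_adv) - clean_angle).abs().sum()  # quantity the adversary maximizes
    deviation.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

frame = torch.rand(1, 3, 64, 64)                          # placeholder camera frame in [0, 1]
adv_frame = steering_attack(steering_model, frame)
print(steering_model(frame).item(), steering_model(adv_frame).item())
```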

II-D7 Intrusion Detection

The detection of malicious activities is one of the major challenges of VANETs. Intrusion detection systems enable the identification of various types of attacks being performed on the system, e.g., sink-hole and black-hole attacks. Without such a system, communication in vehicular networks is highly vulnerable to numerous attacks such as selective forwarding, rushing, and Sybil attacks. To detect the selective forwarding attack, a trust-based method utilizing local and global detection (mutual monitoring among nodes and detection of abnormal driving patterns) is presented in [50]. Ali et al. proposed a system for intelligent intrusion detection of gray-hole and rushing attacks [51].
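As a minimal illustration of data-driven intrusion detection, the sketch below fits an unsupervised anomaly detector (scikit-learn's IsolationForest) on features of benign beacon traffic and flags messages with implausible rates or kinematics. The features and thresholds are assumptions chosen for illustration, not those of the cited systems.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Illustrative features per message: [inter-arrival time (s), speed (m/s), heading change (rad)]
normal = np.column_stack([
    rng.normal(0.1, 0.01, 1000),   # regular 10 Hz beaconing
    rng.normal(15.0, 2.0, 1000),   # plausible urban speeds
    rng.normal(0.0, 0.05, 1000),   # small heading changes
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[0.001, 90.0, 3.0]])   # flooding rate plus implausible kinematics
print(detector.predict(suspicious))            # -1 => flagged as anomalous
```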

II-D8 Certificate Revocation

The security mechanism of vehicular networks is based on a trusted certification authority (CA) that manages the identities and credentials of vehicles by issuing valid certificates to them. Vehicles are essentially unable to operate in the system without a valid certificate, and a certificate’s validity expires after a certain amount of time. The revocation process is administratively challenging due to issues such as the identification of nodes with illegitimate behavior and the need to change the registered domain. Moreover, it is necessary to restrain malicious nodes by revoking their certificates to prevent them from attacking the system. To tackle this problem, three different certificate revocation protocols have been proposed in [52].

Fig. 8: The machine learning (ML) pipeline of CAVs comprising four major modules: (1) perception; (2) prediction; (3) planning; and (4) control.

II-E Non-Security-Related Challenges in Deploying CAVs

The phenomenon of connected vehicles is realized using vehicular networks, which face various challenges that need to be addressed for their efficient deployment in the longer term. Various challenges associated with vehicular networks are described below.

II-E1 High Mobility of Nodes

The large-scale mobility of vehicles in vehicular networks results in a highly dynamic topology, raising several challenges for the communication network [53]. In addition, the dynamic nature of traffic can lead to a partitioned network having isolated clusters of nodes [54]. As the connections between vehicles and nearby RSUs are short-lived, the wireless channel coherence time is short. This makes accurate real-time channel estimation more challenging at the receiver end and necessitates the design of dynamic and robust resource management protocols that can efficiently utilize available resources while adapting to variations in vehicular density [55].

II-E2 Heterogeneous and Stringent QoS Requirements

In vehicular networks, there are different modes of communication that can be broadly categorized into V2V and V2I communications. In V2V communications, vehicles exchange safety-critical information (e.g., information beacons, road and traffic conditions) among each other, known as basic safety messages (BSM). This communication, which can be performed periodically or when triggered by some event, requires high reliability and is sensitive to delay [56]. In V2I communications, on the other hand, vehicles communicate with nearby communications infrastructure to get support for route planning, traffic information, and operational data, and to access entertainment services; this requires more bandwidth and frequent access to the Internet, e.g., for downloading high-quality maps and accessing infotainment services. Therefore, the heterogeneous and stringent QoS requirements of VANETs cannot be simultaneously met with traditional wireless design approaches.

II-E3 Learning Dynamics of Vehicular Networks

As discussed above, vehicular networks exhibit high dynamicity; thus, to meet the real-time and stringent requirements of vehicular networks, historical data-driven predictive strategies can be adopted, e.g., traditional methods like hidden Markov models (HMM) and Bayesian methods [56]. In addition to traditional ML methods, more sophisticated DL models can be used; for example, recurrent neural networks (RNN) and long short-term memory (LSTM) have been shown to be beneficial for time-series data and can potentially be used for modeling the temporal dynamics of vehicular networks.
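As a small illustration of the latter point, the following PyTorch sketch trains an LSTM to make one-step-ahead predictions on a synthetic time series standing in for measured traffic flow or link load; the architecture, data, and training setup are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    """One-step-ahead predictor for a vehicular time series (e.g., traffic flow or link load)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next value from the last hidden state

series = torch.sin(torch.linspace(0, 20, 200))   # synthetic stand-in for measured traffic flow
window = 20
xs = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
ys = series[window:].unsqueeze(-1)

model = TrafficLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                              # tiny training loop for illustration
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(xs), ys)
    loss.backward()
    opt.step()
print(loss.item())                                # training error on the synthetic series
```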

II-E4 Network Congestion Control

Vehicular networks are geographically unbounded and can span a city, several cities, or even countries. This unbounded nature leads to the challenge of network congestion [57]. Traffic density is higher in urban areas than in rural areas, particularly during rush hours, which can lead to network congestion issues.

II-E5 Time Constraints

The efficient application of vehicular networks requires hard real-time guarantees because it lays the foundation for many other applications and services that require strict deadlines [58], for example, traffic flow prediction [59], traffic congestion control [60], and path planning [61]. Therefore, safety messages should be broadcast within an acceptable time by either vehicles or RSUs.

Authors | Application | Methodology
Yao et al. [62] | Location prediction based scheduling and routing | Hidden Markov models
Xue et al. [63] | Location prediction based scheduling and routing | Variable-order Markov models
Zeng et al. [64] | Location prediction based scheduling and routing | Recursive least squares
Karami et al. [65] | Network congestion control | Feed-forward neural network
Taherkhani et al. [66] | Network congestion control | k-means clustering
Li et al. [67] | Load balancing | Reinforcement learning
Taylor et al. [68] | Network security | LSTM
Zheng et al. [69] | Virtual resource allocation | Reinforcement learning
Atallah et al. [70, 71] | Resource management | Reinforcement learning
Ye et al. [57] | Distributed resource management | Reinforcement learning
Kim et al. [72] | Vehicle trajectory prediction | Reinforcement learning
TABLE III: Overview of machine learning (ML)-based research on different vehicular network applications

III The ML Pipeline in CAVs

The driving task elements of self-driving vehicles that can benefit from ML can be broadly categorized into the following four major components (as shown in Figure 8).

  1. Perception: assists in perceiving the nearby environment and recognizing objects;

  2. Prediction: predicting the actions of perceived objects, i.e., how environmental actors such as vehicles and pedestrians will move;

  3. Planning: route planning of vehicle, i.e., how to reach from point A to B;

  4. Decision Making & Control: making decisions relating to vehicle movement, i.e., how to make the longitudinal and lateral decisions to control and steer the vehicle.

These components are combined to develop a feedback system enabling self-driving without any human intervention. This ML pipeline can then facilitate autonomous real-time decisions by leveraging insights from the diverse types of data (e.g., vehicles’ behavioral patterns, network topology, vehicles’ locations, and kinetics information) that can be easily gathered by CAVs.
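The sketch below wires these four stages into a single feedback loop. Every function body is a deliberately simplistic stand-in (constant-velocity prediction, a threshold-based planner, and a proportional controller) for the learned models discussed in the rest of this section; it is meant only to show how the modules compose.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class DetectedObject:
    kind: str            # e.g. "pedestrian", "vehicle", "stop_sign"
    distance_m: float
    speed_mps: float

def perceive(frame: Dict) -> List[DetectedObject]:
    # Stand-in for the ML perception stack (camera/LIDAR/RADAR fusion).
    return [DetectedObject(**o) for o in frame["detections"]]

def predict(objects: List[DetectedObject], horizon_s: float = 2.0) -> List[float]:
    # Constant-velocity stand-in for a learned motion-prediction model.
    return [o.distance_m - o.speed_mps * horizon_s for o in objects]

def plan(predicted_gaps: List[float], cruise_mps: float = 13.9) -> float:
    # Target speed: slow down if any actor is predicted to come within 10 m.
    return 0.0 if any(g < 10.0 for g in predicted_gaps) else cruise_mps

def control(current_mps: float, target_mps: float, kp: float = 0.5) -> float:
    # Simple proportional controller as a placeholder for the learned/low-level controller.
    return kp * (target_mps - current_mps)        # throttle (+) or brake (-) command

frame = {"detections": [{"kind": "pedestrian", "distance_m": 18.0, "speed_mps": 1.5}]}
objects = perceive(frame)
command = control(current_mps=12.0, target_mps=plan(predict(objects)))
print(command)
```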

In the remainder of this section, we will discuss some of the most prominent applications of ML-based methods for performing these tasks (a summary is presented in Table III).

III-A Applications of ML for the Perception Task in CAVs

Different ML techniques, particularly DL models, have been widely used for developing the perception system of autonomous vehicles [73]. In addition to using video cameras as the primary vision sensors, these vehicles also use other sensors, e.g., RADAR and LIDAR, to detect different events in the car’s surroundings. The surrounding environment of an autonomous vehicle is perceived in two stages [74]. In the first stage, the whole road is scanned for the detection of changes in driving conditions such as traffic signs and lights, pedestrian crossings, and other obstacles. In the second stage, knowledge about the other vehicles is acquired. In [75], a CNN model is trained to develop a direct-perception representation for autonomous vehicles.
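A minimal example of such a perception model is sketched below: a small, untrained convolutional classifier over camera crops with 43 output classes, matching the GTSRB traffic-sign benchmark (an assumption made here for illustration); production perception stacks are, of course, far larger and fused across sensors.

```python
import torch
import torch.nn as nn

# Minimal CNN classifier in the spirit of the perception models cited above;
# 43 output classes as in the GTSRB traffic-sign benchmark (illustrative choice).
perception_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 43),
)

batch = torch.rand(4, 3, 32, 32)        # placeholder camera crops
logits = perception_net(batch)
print(logits.argmax(dim=1))             # predicted traffic-sign classes
```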

III-B Applications of ML for the Prediction Task in CAVs

In CAVs, the accurate and timely prediction of different events encountered in driving scenes is another important task that is mainly accomplished using different ML and DL algorithms. For instance, autonomous vehicles use DL models for the detection and localization of obstacles [76] and different objects (e.g., vehicles, pedestrians, and bikes) [77], for modeling their behavior (e.g., tracking pedestrians along the way [78]), and for traffic sign [79] and traffic light recognition [80]. Other prediction tasks in CAVs that involve the application of ML/DL methods are vehicle trajectory and location prediction [81], efficient and intelligent wireless communication [82], and traffic flow prediction and modeling [83]. Moreover, ML schemes have also been used for the prediction of uncertainties in autonomous driving conditions [84].

III-C Applications of ML for the Planning Task in CAVs

CAVs are equipped with onboard data processing capabilities and intelligently process the data collected from heterogeneous sensors for efficient route planning and other optimized operations using different ML and DL techniques. The key goal of route planning is to reach the destination in as little time as possible while avoiding traffic congestion, potholes, and other vehicles, navigating via GPS, and consuming as little fuel as possible. In the literature, motion planning of autonomous vehicles is studied along three dimensions: (1) finding a path to the destination point; (2) searching for the fastest manoeuvre; and (3) determining the most feasible trajectory [85]. Moreover, to avoid collisions between vehicles, predicting the trajectories of other vehicles is a crucial task when planning the trajectory of an autonomous vehicle [86]. For instance, Li et al. presented a hybrid approach to model uncertainty in vehicle trajectory prediction for CAV applications using deep learning and kernel density estimation [87].

Fig. 9: The illustration of the generalization of attack surfaces in ML systems: generic model (top), autonomous vehicles model (middle), and connected vehicles model (bottom).

III-D Applications of ML for the Decision Making and Control Task in CAVs

In recent years, DL-based algorithms have been extensively used for the control of autonomous vehicles and are refined through millions of kilometers of test drives. For instance, Bojarski et al. presented a CNN-based end-to-end learning framework for self-driving cars [88]. The model was able to drive the car on local roads with or without markings, and on highways, using a small amount of training data. In a similar study, a CNN was trained for end-to-end learning of lane keeping for autonomous cars [89]. Researchers have now started utilizing deep reinforcement learning (RL) for action selection and decision making in driving conditions [90]. Bouton et al. proposed a generic approach to enforce probabilistic guarantees on an RL agent, deriving an exploration strategy that restricts the agent to choose only among actions that satisfy a desired probabilistic specification criterion prior to training [91]. Moreover, human-like speed control of autonomous vehicles using deep RL with double Q-learning is presented in [92], which uses scenes generated from naturalistic driving data for learning. In [93], the authors presented an integrated framework that uses a deep RL-based approach for the dynamic orchestration of networking, caching, and computing resources for connected vehicles.
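The idea of restricting exploration to actions that meet a probabilistic safety specification, in the spirit of [91], can be sketched as follows. Here the per-state-action satisfaction probabilities are assumed to be pre-computed (e.g., by model checking), and the Q-table, threshold, and fallback rule are illustrative choices rather than the cited method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 3
Q = np.zeros((n_states, n_actions))
# P_safe[s, a]: assumed pre-computed probability that action a in state s
# satisfies the safety specification (e.g., obtained via model checking).
P_safe = rng.uniform(0.5, 1.0, size=(n_states, n_actions))
LAMBDA = 0.9                                    # required probability of satisfying the spec

def safe_actions(state):
    allowed = np.flatnonzero(P_safe[state] >= LAMBDA)
    return allowed if allowed.size else np.array([P_safe[state].argmax()])  # fall back to safest

def act(state, eps=0.1):
    allowed = safe_actions(state)
    if rng.random() < eps:                      # explore only within the allowed set
        return rng.choice(allowed)
    return allowed[Q[state, allowed].argmax()]  # exploit only within the allowed set

print(act(state=3))
```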

In addition, ML-based methods have been used for many other applications in CAVs. For example, in adaptive traffic flow, smart infrastructure integrates V2V signals from moving cars to optimize speed limits, traffic-light timing, and the number of lanes in each direction on the basis of the actual traffic load. Traffic flow can be further improved in CAVs by using cooperative adaptive cruise control technology [94]. Also, vehicles can take advantage of cruise control and save fuel by following one another in the form of vehicle platoons. Moreover, DL-based methods have been proposed for intrusion detection for the in-vehicle security of the CAN bus [95]. An overview of intelligent and connected vehicles, with current and future perspectives, is presented in [96].

Autonomous vehicles are evolving through four stages of development. The first stage includes passive warning and convenience systems such as front- and backward-facing cameras, cross-traffic warning mechanisms, and radar for blind-spot detection. These warning systems use different computer vision and machine learning techniques to perceive the surroundings of the vehicle on the road and to recognize traffic signs and static and moving objects. In the second stage, these systems are used to assist the active control system of the vehicle while parking and braking, and to prevent backing over unseen objects. In the third stage, the vehicle is equipped with some semi-autonomous operations; as the vehicle may behave unexpectedly, the driver in the seat should be able to resume control. In the final stage, the vehicle is designed to perform fully autonomous operations.

Connected and autonomous vehicles together formulate the settings of the self-driving vehicular network, and there is a strong synergy between them [5]. In addition, autonomous vehicles are an important component of future vehicular networks and are equipped with complex sensory equipment. Autonomous vehicular networks are predictive of and adaptive to their environments and are designed with two fundamental goals, i.e., autonomy and interactivity. The first goal enables the network to monitor, plan, and control itself, and the latter ensures that the infrastructure is transparent and friendly to interact with.

The deployment of ML in CAVs entails the following stages:

  1. Data Collection: Input data is collected using sensors or from other digital repositories. In autonomous vehicles, input data is collected using a complex sensory network, e.g., cameras, RADAR, GPS, etc. (see Figure 6); in a connected vehicular ecosystem, there is also inter-vehicle information communication.

  2. (Pre-)Processing: The heterogeneous data (video imagery, network, and traffic information, etc.) collected by the sensors is then digitally processed and appropriate features (e.g., traffic signs information and traffic flow information, etc.) are extracted.

  3. Model Training: Using the extracted features from the input data, a ML model is trained to recognize and distinguish between different objects and events encountered in the driving environment, e.g., recognizing moving objects such as pedestrians, vehicles, and cyclists, and distinguishing between traffic signs, e.g., a stop sign versus a speed limit sign.

  4. Decision or Action: A decision or an action (e.g., stopping the car at the stop sign and predicting traffic flow based on the knowledge acquired by the vehicular network) is performed according to the learned knowledge and underlying system.

Fig. 10: The taxonomy of adversarial examples, perturbation methods, and benchmarks (datasets and models).

We present an illustration of the generalization of attack surfaces in ML systems from generic models to the more specific cases of autonomous and connected vehicles in Figure 9. As we shall discuss later in the paper, each of these stages is vulnerable to adversarial intrusion, since an adversary can try to manipulate the data collection and processing system, or tamper with the model or its outputs.

IV Adversarial ML Attacks and the Adversarial ML Threat for CAVs

A comprehensive overview of adversarial ML in the context of CAVs is presented in this section.

IV-A Adversarial Examples

Formally, adversarial examples are defined as inputs to a deployed ML/DL model that are created by an attacker by adding an imperceptible perturbation to the actual input in order to compromise the integrity of the ML/DL model. An adversarial sample x* is created by adding a small, carefully crafted perturbation δ to the correctly classified sample x. The perturbation δ is calculated by iteratively approximating the optimization problem given in Eq. 1 until the crafted adversarial example is classified by the ML classifier f(·) into the targeted class t. A taxonomy of adversarial examples, perturbation methods, and benchmarks is presented in Figure 10.

x* = x + δ,   where   δ = arg min_{δ'} { ||δ'|| : f(x + δ') = t }     (1)

IV-A1 Adversarial Attacks

An adversarial attack affecting the training phase of the learning process is termed a poisoning attack, where the attacker compromises the learning process of the ML/DL scheme by manipulating the training data [97], whereas an adversarial attack on the inference phase of the learning process is termed an evasion attack, where the attacker manipulates the test data or real-time inputs of the deployed model to produce a false result [98]. Usually, the examples used for fooling ML/DL schemes at inference time are called adversarial examples.

IV-A2 Adversarial Perturbations

The crafting of adversarial perturbations falls into three major categories, namely, local search, combinatorial optimization, and convex relaxation; this division is based on how the objective function given in Eq. 1 is solved. Local search is the most common method of generating adversarial perturbations, where adversarial examples are generated by solving the objective function in Eq. 1 to obtain a lower bound on the adversarial perturbation using gradient-based methods. A prime example of local-search adversarial example crafting is the fast gradient sign method (FGSM), where an adversarial example is created by taking a step in the direction of the gradient [99]. In another study, the authors demonstrated that adversarial images are very easy to construct using evolutionary algorithms or gradient ascent [100]. Combinatorial optimization finds the exact solution of the optimization problem in Eq. 1; a major shortcoming of this method is that its computational complexity grows with the number of examples in the dataset. Recently, Khalil et al. [101] launched a successful adversarial attack based on combinatorial and integer programming on binarized neural networks, but the performance of the proposed attack degrades as the size and dimensionality of the data increase. Convex relaxation has also recently been used to generate [102] and defend [103] against adversarial examples by calculating an upper bound on the objective function in Eq. 1.
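A minimal sketch of FGSM is given below, assuming a toy untrained PyTorch classifier and random data; it shows only the single signed-gradient step described in [99], not a full attack pipeline.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    """One-step FGSM: perturb the input in the direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy linear classifier and random data, purely to exercise the function.
clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
x_adv = fgsm(clf, x, y)
print((clf(x).argmax(1) != clf(x_adv).argmax(1)).float().mean())  # fraction of flipped predictions
```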

IV-A3 Different Aspects of Perturbations

Adversarial examples are designed to look like the original inputs and to be imperceptible to humans; in this regard, the addition of only small perturbations is of utmost importance. However, the literature suggests that even a one-pixel perturbation is often sufficient to fool a deep model trained for a classification task [104]. Here we analyze different aspects of adversarial perturbations.

  • Perturbation Scope: Adversarial perturbations are generated with two scopes: (1) perturbations crafted for each legitimate input and (2) universal perturbations for a complete dataset, i.e., a single perturbation applied to every original clean sample. To date, most studies have considered the first scope of adversarial perturbations.

  • Perturbation Limitation: Similarly, there are two types of limitations: optimizing the system at a low perturbation scale, and optimizing it at a low perturbation scale under constrained optimization.

  • Perturbation Measurement: The magnitude of the perturbation is mainly measured using three norms: ℓ0, ℓ2, and ℓ∞. In ℓ2-norm-based attacks, the attacker aims to minimize the squared error between the original and the adversarial example; the ℓ2 norm measures the Euclidean distance between the adversarial example and the original sample, and such attacks add a very small amount of noise to the adversarial sample. ℓ∞-norm-based attacks are perhaps the simplest type of attack; they aim to limit or minimize the maximum change over all pixels in the adversarial example, and this constraint forces only very small changes to each pixel. ℓ0-norm-based attacks work by minimizing the number of perturbed pixels in an image, forcing modifications to only a very few pixels. (A small example of computing these norms follows this list.)
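The sketch below computes these three norms for a toy perturbation, which makes the differences between them concrete: ℓ0 counts how many components change, ℓ2 measures the overall Euclidean size, and ℓ∞ measures the largest single change.

```python
import numpy as np

x = np.random.rand(32, 32, 3)            # clean image
x_adv = x.copy()
x_adv[10:12, 10:12, :] += 0.05           # small localized perturbation
delta = (x_adv - x).ravel()

l0 = np.count_nonzero(delta)             # number of modified components (pixels x channels)
l2 = np.linalg.norm(delta, ord=2)        # Euclidean size of the perturbation
linf = np.abs(delta).max()               # largest change to any single component
print(l0, l2, linf)
```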

Fig. 11: The taxonomy of various types of threat models used in the literature to design adversarial ML attacks. This figure also provides the information needed by a defender to ensure the robustness of an ML-based autonomous system.

To ensure a tightly constrained action space for the adversary, the imperceptibility of perturbations is important when developing an attack. Considering two important questions, namely (1) what constraints are placed on the attacker’s “starting point”? and (2) where did this initial example come from?, Gilmer et al. identified the following salient settings (described below) for adversarial perturbations [105].

  • Indistinguishable Perturbation: The attacker does not get to select a starting point; rather, it is given a draw from the data distribution and introduces a perturbation into the input sample that is indistinguishable to a human.

  • Content-Preserving Perturbation: The attacker does not get to select a starting point; rather, it is given a draw from the data distribution and may create any perturbation as long as the original content of the sample is preserved.

  • Non-suspicious Input: The attacker can generate any desired perturbed input sample as long as it does not appear suspicious to a human.

  • Content-Constrained Input: The attacker can generate any desired perturbed input sample as long as it maintains some content payload, i.e., it must be a picture of a dog but not necessarily of a particular dog. This includes payload-constrained input, where human perception might not matter; rather, the intended function of the input example must remain intact.

  • Unconstrained Input: There is no constraint on the input, and the attacker can produce any type of input example to get the desired output or behavior from the system.

Iv-A4 Adversarial ML Benchmarks

In this section, we describe the benchmark datasets and victim ML models used for evaluating adversarial examples. Researchers mostly adopt inconsistent approaches and report the performance of attacks on diverse datasets and victim models. The widely used benchmark datasets and victim models are described below.

  • Datasets: MNIST [106], CIFAR-10 [107], and ImageNet [108] are the widely used datasets in adversarial ML research and are also regarded as the standard deep learning datasets.

  • Victim Models: The widely used victim ML/DL models for evaluating adversarial examples are LeNet [106], AlexNet [109], VGG [110], GoogLeNet [111], CaffeNet [112], and ResNet [113].

Iv-B Threat Models for Adversarial ML Attacks on CAVs

Threat modeling is the procedure of answering a few common and straightforward questions about the system being developed or deployed from a hypothetical attacker's point of view. Threat modeling is a fundamental component of security analysis and requires that some fundamental questions related to the threat be addressed [114]. In particular, a threat model should identify:

  • the system principals: what is the system and who are the stakeholders?

  • the system goals: what does the system intend to do?

  • the system adversities: what potential bad things can happen due to adverse situations or motivated adversaries?

  • the system invariants: what must be always true about the system even if bad things happen?

The key goal of threat modeling is to optimize the security of the system by determining security goals, identifying potential threats and vulnerabilities, and developing countermeasures to prevent or mitigate their effects on the system. Answering these questions requires careful logical thought and significant expertise and time.

As the focus of this paper is on highlighting the potential vulnerabilities of using ML techniques in CAVs, the scope of our study is restricted to the adversarial ML threat in CAVs. In the remainder of this section, we discuss the various facets of the adversarial ML threat in CAVs (a taxonomy aggregating these issues is illustrated in Figure 11).

Iv-B1 Adversarial Attack Type

In the literature, attacks on learning systems are generally categorized along three dimensions [115]:

  • Influence: It distinguishes causative attacks (trying to gain control over the training data) from exploratory attacks (exploiting misclassifications of the model without affecting the training process).

  • Specificity: It distinguishes targeted attacks, aimed at a specific instance, from indiscriminate attacks.

  • Security Violation: It is concerned with violations of the integrity of assets and of the availability of the service.

The first dimension describes the capabilities of the adversary, i.e., whether the attacker has the ability to affect learning by poisoning the training data or, instead, exploits the model by sending new samples and observing their responses to obtain the intended behavior. The second axis indicates the specific intentions of the attacker, i.e., whether the attacker is interested in a targeted attack on one particular sample or aims to cause the learned model to fail in an indiscriminate fashion. The third dimension details the types of security violation an attacker can cause, e.g., the attacker may aim to get harmful messages through a filter as false negatives, or realize a denial of service by causing benign samples to be misclassified as false positives.

Iv-B2 Adversarial Knowledge

Based on the knowledge available to the adversary, adversarial ML attacks are divided into three types, namely, white-box, gray-box, and black-box attacks. White-box attacks assume complete knowledge of the underlying ML model, including information about the optimization technique, the trained ML model, the model architecture, activation functions, hyper-parameters, layer weights, and training data. Gray-box attacks assume partial knowledge of the targeted model, whereas black-box attacks assume the adversary has zero knowledge of, and no access to, the underlying ML model and the training data. The black-box setting reflects real-world conditions, where little information about the targeted ML/DL scheme is available; in such cases, the adversary acts as a normal user and tries to infer from the output of the ML system. Black-box adversarial attacks make use of the transferability property of adversarial examples, which assumes that adversarial examples created for one ML/DL model will also affect other models trained on datasets with a distribution similar to that of the original model [116].
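
A minimal sketch of the transferability idea is given below, under the assumption that the attacker owns a locally trained substitute model and can only observe the predictions of the target model; both networks here are untrained placeholders. Adversarial examples are crafted in white-box mode on the substitute and then replayed against the black-box target.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.05):
    """Single gradient-sign step crafted with full (white-box) access to `model`."""
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Hypothetical models: the attacker only has gradients of the substitute.
substitute = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
target = nn.Sequential(nn.Flatten(), nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10))

x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))

# Craft on the substitute (white-box), then replay against the black-box target.
x_adv = fgsm(substitute, x, y)
with torch.no_grad():
    flipped = (target(x_adv).argmax(1) != target(x).argmax(1)).float().mean()
print(f"fraction of target predictions changed by transfer: {flipped.item():.2f}")
```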

Iv-B3 Adversarial Capabilities

Adversarial capabilities are important to identify in security practice, as they define the strength of the adversary to compromise the security of the system. In general, an adversary can be stronger or weaker depending on its knowledge of and access to the system. Adversarial capabilities dictate how and what type of attacks an adversary can realize, using what type of attack vector, on which attack surface. Attacks can be launched at two main phases, namely, inference and training. Inference-time attacks are exploratory attacks that do not modify the underlying model; instead, they influence it to produce incorrect outputs, and they vary with the availability of system knowledge. Training-time attacks aim at tampering with the model itself or influencing its learning process, and they involve two types of attack methods [117]: in the first, adversarial examples are injected into the training data, and in the second, the training data is directly modified.

Iv-B4 Adversarial Specificity

Another classification of adversarial attacks is based on the specificity of the adversarial examples, i.e., targeted versus non-targeted attacks. Attacks in which adversarial perturbations are added to compromise the performance of a specific class in the data are known as targeted adversarial attacks. Targeted adversarial attacks are launched to create targeted misclassification (e.g., a specific road sign will be misclassified by the self-driving vehicle while the rest of the road sign classification system functions correctly) or source/target misclassification (e.g., a certain road traffic sign will always be classified into a pre-determined wrong class by the road sign classifier of a self-driving vehicle). In contrast, adversarial perturbations created to deteriorate the performance of the model irrespective of the class of data are known as non-targeted adversarial attacks. Non-targeted attacks are launched to reduce classification confidence (e.g., a traffic sign that was previously detected with high confidence will be detected with lower confidence) or to cause misclassification (e.g., a road traffic sign will be classified into any class other than its original one).
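
The distinction can be made concrete in how the attack objective is formed: an untargeted perturbation ascends the loss of the true label, whereas a targeted perturbation descends the loss of an attacker-chosen label. The sketch below is our own single-step illustration with a placeholder model and hypothetical class indices.

```python
import torch
import torch.nn as nn

def perturb(model, x, y, eps=0.03, targeted=False):
    """One gradient-sign step: ascend the true-label loss (untargeted)
    or descend the target-label loss (targeted)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    step = -eps if targeted else eps          # descend vs. ascend
    return (x + step * x.grad.sign()).clamp(0, 1).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
x = torch.rand(1, 1, 28, 28)
true_label = torch.tensor([3])
target_class = torch.tensor([7])              # hypothetical attacker-chosen class

x_untargeted = perturb(model, x, true_label)                     # any misclassification
x_targeted = perturb(model, x, target_class, targeted=True)      # push towards class 7
```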

Iv-B5 Adversarial Falsification

The adversary can attempt two types of falsification attacks, namely, false positive attacks and false negative attacks [15]. In the first type, the adversary generates a negative sample that is misclassified as a positive one. Assume such an attack has been launched on the image classification system of an autonomous vehicle: a false positive would be an adversarial image, unrecognizable to humans, that is nonetheless predicted with high confidence to belong to a class to which it does not belong. Conversely, in a false negative attack, the adversary generates a positive sample that is misclassified as a negative one; in adversarial ML, this type of attack is referred to as an evasion attack.

Iv-B6 Attack Frequency

Adversarial attacks can be single-step or consist of an iterative optimization process. Compared to single-step attacks, iterative adversarial attacks are stronger; however, they require frequent interactions (queries) with the ML system and consequently require a large amount of time and computational resources to generate efficiently.
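
A sketch of the iterative case, in the spirit of BIM/PGD and with placeholder model, data, and bounds: a small gradient-sign step is repeated several times while the accumulated perturbation is projected back into an $\ell_\infty$ ball of radius epsilon.

```python
import torch
import torch.nn as nn

def iterative_attack(model, x, y, eps=0.03, alpha=0.005, steps=10):
    """Repeat small gradient-sign steps and project back into the eps-ball (l_inf)."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Projection: keep the total perturbation within [-eps, eps] and pixels in [0, 1].
        x_adv = torch.min(torch.max(x_adv, x_orig - eps), x_orig + eps).clamp(0, 1)
    return x_adv.detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
x_adv = iterative_attack(model, x, y)
```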

Iv-B7 Adversarial Goals

The last component of threat modeling is the articulation of the adversary's goals. The classical approach to modeling adversarial goals considers the adversary's desire to impact confidentiality, integrity, and availability (known as the CIA model), with privacy as a fourth, equally important dimension [117].

Year | Authors | Method | Adversarial Knowledge | Adversarial Specificity | Perturbation Scope | Perturbation Norm | Attack Learning
2014 | Szegedy et al. [13] | L-BFGS | White box | Targeted | Image specific | $\ell_2$ | One shot
2015 | Goodfellow et al. [99] | FGSM | White box | Targeted | Image specific | $\ell_\infty$ | One shot
2016 | Kurakin et al. [118] | BIM & ILCM | White box | Non-targeted | Image specific | $\ell_\infty$ | Iterative
2016 | Papernot et al. [10] | JSMA | White box | Targeted | Image specific | $\ell_0$ | Iterative
2016 | Moosavi et al. [119] | DeepFool | White box | Non-targeted | Image specific | $\ell_2$, $\ell_\infty$ | Iterative
2017 | Carlini et al. [120] | C&W attacks | White box | Targeted | Image specific | $\ell_0$, $\ell_2$, $\ell_\infty$ | Iterative
2017 | Moosavi et al. [121] | Uni. perturbations | White box | Non-targeted | Universal | $\ell_2$, $\ell_\infty$ | Iterative
2017 | Sarkar et al. [122] | UPSET | Black box | Targeted | Universal | — | Iterative
2017 | Sarkar et al. [122] | ANGRI | Black box | Targeted | Image specific | — | Iterative
2017 | Cisse et al. [123] | Houdini | Black box | Targeted | Image specific | $\ell_2$, $\ell_\infty$ | Iterative
2018 | Baluja et al. [124] | ATNs | White box | Targeted | Image specific | — | Iterative
2019 | Su et al. [104] | One-pixel | Black box | Non-targeted | Image specific | $\ell_0$ | Iterative
TABLE IV: Summary of the state-of-the-art attacks

Iv-C Review of Existing Adversarial ML Attacks

Iv-C1 Adversarial ML Attacks on Conventional ML Schemes

A pioneering work on adversarial ML was performed by Dalvi et al. [125] in 2004, where they proposed a minimum-distance evasion of linear classifiers and tested their attack on a spam classification system, highlighting the threat of adversarial examples. A similar contribution was made by Lowd et al. [126] in 2005, where they proposed an adversarial classifier reverse engineering technique for constructing adversarial attacks on classification problems. In 2006, Barreno et al. [127] discussed the security of ML in adversarial environments and provided a taxonomy of attacks on ML schemes along with potential defenses against them. In 2010, Huang et al. [128] provided the first consolidated review of adversarial ML, where they discussed the limitations on classifiers and adversaries in real-world settings. Biggio et al. [97] proposed a poisoning attack on support vector machines (SVMs) to increase their test error; their attack successfully altered the test error of SVMs with both linear and non-linear kernels. The same authors also proposed an evasion attack in which a gradient-based approach was used to evade PDF malware detectors [98], tested on SVMs and simple neural networks.

Year | Company | Cause of the accident | Damages | System failure
2014 | Hyundai | Weather (rainfall) | Car crashed | Camera object detection failure
2016 | Google Waymo | Speed estimation failure | Car crashed into the bus | Dynamic object movement detection failure
2016 | Tesla | Image classification and image contrast failure | Car crashed into the neighboring truck and the driver was killed | Camera's detection and classification suite failure
2017 | Uber | Overreaction to an unseen event (nearby accident) | Car crashed | Lack of robustness in control system
2017 | General Motors | Stuck in a dilemma (lane change decision reversal) | Car knocked over a motorcyclist | Coordination and state estimation failure
2018 | Uber | Confusion in the software decision system and safety system failure | Killed a person on the road | Failure of planning and perception system
TABLE V: Accidents caused by self-driving vehicles due to unintended adversarial conditions

Iv-C2 Adversarial ML Attacks on DNNs

Adversarial ML attacks on DNNs were first observed by Szegedy et al. [13], who demonstrated that DNNs can be fooled by minimally perturbing their input images at test time; the proposed attack was gradient-based and crafted minimum-distance adversarial examples to fool image classifiers. Another gradient-based attack was proposed by Goodfellow et al. [99]. In this attack, they formulated adversarial ML as a min-max problem, and adversarial examples were produced by calculating the lower bound on the adversarial perturbation. This method was termed FGSM and is still considered a very effective algorithm for creating adversarial examples. Adversarial training was also introduced in the same paper as a defense mechanism against adversarial examples. Kurakin et al. [118] highlighted the fragility of ML/DL schemes in real-world settings by generating adversarial examples from images taken with a cell phone camera. The adversarial samples were created using the basic iterative method (BIM), an extended version of FGSM, and the resulting adversarial examples were able to fool state-of-the-art image classifiers. In [129], the authors demonstrated that rotation and translation alone are sufficient to fool state-of-the-art deep learning based image classification models, i.e., convolutional neural networks (CNNs). In a similar study [130], ten state-of-the-art DNNs were shown to be fragile to basic geometric transformations, e.g., translation, rotation, and blurring. Liu et al. presented a trojaning attack on neural networks that works by modifying the neurons of the trained model instead of affecting the training process [131]. The authors used the trojan as a backdoor to control the trojaned ML model as desired and tested it on an autonomous vehicle: the car misbehaves when it encounters a specific billboard (the trojan trigger) on the roadside.

Papernot et al. [10] exploited the mapping between the inputs and outputs of DNNs to construct a white-box Jacobian-based saliency map attack (JSMA) to fool DNN classifiers. The same authors also proposed a defense against adversarial perturbations using defensive distillation, a training method in which a model is trained to predict the classification probabilities of another model that was itself trained on the baseline standard to give more importance to accuracy. Papernot et al. [132] also proposed a black-box adversarial ML attack in which they exploited the transferability property of adversarial examples to fool ML/DL classifiers. This black-box attack was based on substitute model training, and it not only fools ML/DL classifiers but also breaks the distillation defense mechanism. Carlini et al. [120] proposed a suite of three adversarial attacks, termed the C&W attacks, on DNNs by exploiting three distinct distance measures: $\ell_0$, $\ell_2$, and $\ell_\infty$. These attacks not only evaded DNN classifiers but also successfully evaded defensive distillation, demonstrating that defensive distillation is not an appropriate method for building robustness. In another paper, Carlini et al. [133] showed that the adversarial attacks proposed in [120] successfully evaded ten well-known defensive schemes against adversarial examples; these attacks are currently regarded as state-of-the-art adversarial ML attacks. Furthermore, Carlini et al. successfully demonstrated an adversarial attack on a speech recognition system by adding small noise to the audio signal, which forces the underlying ML model to generate intended commands/text [134]. In [135], an adversarial patch affixed to an original image forces the deep model to misclassify that image. Such universal targeted patches fool classifiers without requiring knowledge of the other items in the scene, and they can be created offline and then broadly shared. More details on adversarial ML attacks can be found in [15, 16, 3, 136, 137, 4]. A summary of different state-of-the-art adversarial perturbation generation methods is provided in Table IV.

Domain Application Papers
Imaging Digit Recognition [10], [99], [120], …
Object Detection [120], [132], [102], …
Traffic Signs Recognition [132], [12], [138], …
Semantic Segmentation [139], [140], [141], …
Reinforcement Learning [142], [143], [144], …
Generative Modeling [145], [146], [147], …
Text Text Classification [148], [149], [150], …
Sentiment Analysis [151], [150]
Reading Comprehension [152], [153]
Networking Intrusion Detection [154], [155], [156], …
Anomaly Detection [157], [158]
Malware Classification [159], [160], [161], …
Traffic Classification [162], [163]
Audio Speech Recognition [134], [164], [123], …
TABLE VI: Domains affected by adversarial machine learning (ML) and their applications

Iv-D Adversarial ML Attacks on CAVs

ML and DL are core ingredients for performing many key tasks in self-driving vehicles. Beyond powering the decision-making processes embedded within the vehicle's components, they also play an important role in V2I, V2V, and V2X communications. As described in earlier sections, ML/DL schemes are very vulnerable to small, carefully crafted adversarial perturbations, and self-driving vehicles are threatened by this security risk along with other traditional security risks. Adversarial ML has affected many application domains, including imaging, text, networking, and audio, as highlighted in Table VI.

Iv-D1 Autonomous Vehicles Accidents Due to Unintended Adversarial Conditions

The autonomous vehicles developed so far are not robust to unintended adversarial conditions, and there have been a few reported accidents and fatalities caused by the malfunctioning of DNN-based autonomous vehicles, where ordinary operating conditions unintentionally acted as adversarial examples for the DNN operating the vehicle. In 2014, during a Hyundai competition, an autonomous vehicle crashed because of a sensor failure caused by a shift in the angle of the car and the direction of the sun (https://bit.ly/2SWLxUY). Another incident was reported in 2016, where the Tesla autopilot was not able to handle the image contrast, which resulted in the death of the driver (https://cnnmon.ie/2VOB283); it was reported that the autopilot was unable to differentiate between the bright sky and a white truck, which resulted in a fatal accident. A similar accident happened to a Google self-driving car, which was unable to estimate the relative speed of a bus and collided with it (https://bit.ly/1U0O6yx). In 2018, an Uber self-driving car was involved in an accident due to a malfunction in its DNN-based system, which resulted in a pedestrian fatality (https://bit.ly/2SWmb9N). Table V provides a detailed description of accidents caused by malfunctions in different components of self-driving vehicles.

Iv-D2 Physical World Attacks on Autonomous Vehicles

Aung et al. [165] used the FGSM and JSMA schemes to generate adversarial traffic signs that successfully evade DNN-based traffic sign detection, highlighting the problem of adversarial examples in autonomous driving. Sitawarin et al. [138] proposed a real-world adversarial ML attack that alters traffic signs and logos with adversarial perturbations while preserving their visual appearance. In another work, Sitawarin et al. [12] proposed a technique for generating out-of-distribution adversarial examples to perform an evasion attack on the ML-based sign recognition system of autonomous vehicles. They also proposed a lenticular printing attack that exploits the camera height in autonomous vehicles to create an illusion of false traffic signs in the physical environment and fool the sign recognition system.

Object detection is another integral part of the perception module of autonomous vehicles, where state-of-the-art DNN-based schemes such as Mask R-CNN [166] and YOLO [167] are used. Zhang et al. [168] proposed a physical-world camouflage adversarial attack that approximately imitates how a simulator applies camouflage to a vehicle and then minimizes the approximated detection score by using local search to find the optimal camouflage; the proposed attack successfully fooled image-based object detection systems. Another physical-world adversarial example generation scheme for object detection was presented by Song et al. [169], where a perturbed "STOP" sign remained hidden from state-of-the-art object detectors such as Mask R-CNN and YOLO; the adversarial perturbations were produced with the robust physical perturbations (RP2) algorithm [170]. Recently, Zhou et al. [171] proposed DeepBillboard, a systematic way of generating adversarial advertisement billboards that inject a malfunction into the steering angle of an autonomous vehicle; the proposed adversarial billboards misled the average steering angle by 26.44 degrees. Table VII provides a summary of state-of-the-art adversarial attacks on self-driving vehicles. In a recent study [172], imitation learning has been shown to be robust enough for autonomous vehicles to drive in a realistic environment; the authors proposed a model named ChauffeurNet that learns to drive by imitating the best and synthesizing the worst.

Attack Objective | Specific work | Problem formulation | Data | Threat model | Attack results
Perception system failure | DARTS [12]: Traffic sign manipulation | Generating adversarial examples for CNN-based traffic sign detection by performing out-of-distribution attacks along with lenticular printing. | GTSRB, GTSDB | 1) Virtual and physical world attack; 2) White and black-box mode | DARTS successfully fooled the perception system of the self-driving car.
Perception system failure | Rogue Signs [138]: Traffic sign and logo manipulation | End-to-end pipeline for adversarial example generation for CNN-based traffic sign and logo detection. | GTSRB | 1) Virtual and physical world attack; 2) White-box mode | Fooled the perception system of the self-driving car with a success rate of 99.7%.
Object detection failure | ShapeShifter [173]: Adversarial attack on Faster R-CNN | Adversarial attack on the bounding boxes of Faster R-CNN using expectation-over-transformation techniques. | MS-COCO | 1) Virtual and physical world attack; 2) White-box mode | Caused malfunction in the Faster R-CNN of the self-driving car with 93% success.
Motion planning and perception system failure | DeepBillboard [171]: Adversarial attack through drive-by billboards | Adversarial attack on the steering angle of the self-driving car using adversarial perturbations in drive-by billboards. | 1) Udacity self-driving car challenge dataset; 2) Dave testing dataset; 3) Kitti dataset | 1) Virtual and physical world attack; 2) White and black-box mode | Caused a 23-degree malfunction in the steering angle of the self-driving car.
Object detection failure | CAMOU [168]: Adversarial camouflage to fool the object detector | Generating adversarial perturbations to hide the self-driving car from a Mask R-CNN-based object detector. | Unreal engine simulator | 1) Virtual and physical world attack; 2) White and black-box mode | Caused a 32.74% drop in the performance of Mask R-CNN.
Object detection failure | Song et al. [169]: Object disappearance and creation attack | Generating adversarial perturbation-based stickers such that object detection schemes like YOLO and R-CNN used in self-driving cars fail to recognize certain signs and logos; furthermore, the object detectors start detecting things that are not present in the frame. | Video of traffic signs | 1) Virtual and physical world attack; 2) White-box mode | The object detection schemes fail to recognize traffic signs in nearly 86% of the frames of the video.
Perception system failure | Eykholt et al. [170]: Robust physical perturbations against visual classification under different physical environments | Robust physical perturbations are created to fool LISA-CNN and GTSRB-CNN based traffic sign classification schemes. | 1) LISA; 2) GTSRB | 1) Virtual and physical world attack; 2) White and black-box mode | In some cases caused a 100% performance drop in visual classification of traffic signs.
Perception and controller system failure | Tuncali et al. [174]: Simulation-based adversarial test generation for self-driving cars | Testing and verifying the self-driving car's perception and controller system against adversarial examples. | Simulated data from the proposed simulator | 1) Virtual environment; 2) White and black-box mode | The designed system was able to detect critical cases in the autonomous car's perception and control.
Controller system failure | Yaghoubi et al. [175]: Finding gray-box adversarial examples for closed-loop autonomous car control systems | Testing the controller and perception system of self-driving cars against gradient-based gray-box adversarial examples. | Simulated data | 1) Virtual environment; 2) Gray-box mode | Gray-box adversarial examples outperformed simulated annealing optimization in a dummy control system problem.
End-to-end autonomous control failure | Boloor et al. [176]: Physical adversarial examples against E2E driving models | Disrupting steering by using physical perturbations in the environment. | CARLA simulated data | 1) Virtual environment; 2) White-box mode | Physical adversarial perturbations forced the self-driving car to crash.
TABLE VII: Adversarial attacks on self-driving vehicles: summary of the state-of-the-art

V Towards Developing Adversarially Robust ML Solutions

As discussed above, despite the outstanding performance of ML techniques in many settings, including human-level accuracy at recognizing images, these techniques are strikingly vulnerable to carefully crafted adversarial examples. In this section, we present an outline of approaches for developing adversarially robust ML solutions. We define robustness as the ability of an ML model to withstand adversarial examples.

In the literature, defenses against adversarial attacks have been divided into two broad categories: (1) reactive defenses, which detect adversarial inputs after the deep models are trained; and (2) proactive defenses, which make the deep model robust against adversarial examples before an attack occurs.

Alternatively, these techniques can also be broadly divided into three categories: (1) modifying data; (2) adding auxiliary models; and (3) modifying models. The reader is referred to Figure 12 for a visual depiction of a taxonomy of robust ML solutions, in which various techniques that fall into these categories are also listed. These categories are detailed next.

Fig. 12: Taxonomy of robust machine learning (ML) methods categorized into three classes: (1) Modifying Data, (2) Adding Auxiliary Model(s), and (3) Modifying Models.

V-A Modifying Data

The methods falling under this category mainly deal with the modification of either the training data and its features (e.g., adversarial retraining) or the test data (e.g., data pre-processing). Widely used approaches of this kind are described below.

V-A1 Adversarial (Re-)training

Training with adversarial examples was first proposed by Goodfellow et al. [99] and Huang et al. [177] as a defense strategy to make deep neural networks (DNNs) robust against adversarial attacks; they trained the model by augmenting the training set with adversarial examples. Furthermore, Goodfellow et al. showed that adversarial training can also provide additional regularization for DNNs. In [99, 177], the adversarial robustness of ML models was evaluated on the MNIST dataset (10 classes), while in [178] a comprehensive evaluation of adversarial training was performed on a considerably larger dataset, i.e., ImageNet (1,000 classes). The authors used 50% of the dataset for adversarial training, and this strategy increased the robustness of DNNs against single-step adversarial attacks (e.g., FGSM [99]); however, it failed against iterative adversarial example generation methods such as the basic iterative method (BIM) [118].
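
The sketch below illustrates such an adversarial training loop under the assumption of a toy model and random data: each minibatch is augmented on the fly with FGSM examples, and the loss mixes clean and adversarial terms. This is an illustrative approximation, not the exact procedure of the cited works.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.1):
    """Craft FGSM examples for the current model state."""
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):                              # toy training loop on random data
    x = torch.rand(32, 1, 28, 28)
    y = torch.randint(0, 10, (32,))
    x_adv = fgsm(model, x, y)                        # adversarial versions of the batch
    loss = 0.5 * nn.functional.cross_entropy(model(x), y) \
         + 0.5 * nn.functional.cross_entropy(model(x_adv), y)
    opt.zero_grad()                                  # clears grads left over from fgsm()
    loss.backward()
    opt.step()
```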

V-A2 Input Reconstruction

The idea of input reconstruction is to clean adversarial examples and transform them back into legitimate ones; once transformed, they no longer affect the predictions of DNN models. To robustify DNNs, a deep contractive network has been proposed in [179], and the same authors trained a denoising autoencoder for cleaning adversarial perturbations.

V-A3 Feature Squeezing

Xu et al. [180] leveraged the observation that input feature spaces are typically unnecessarily large, giving an adversary vast room to construct adversarial perturbations, and therefore proposed feature squeezing as a defense strategy against adversarial examples. The feature space available to an adversary can be reduced by feature squeezing, which coalesces samples having heterogeneous feature vectors in the original space into a single sample. They perform feature squeezing at two levels: (1) reducing the color bit depth; and (2) spatial-domain smoothing using both local and non-local methods. They evaluated eleven state-of-the-art adversarial perturbation generation methods on three different datasets, i.e., MNIST, CIFAR-10, and ImageNet. However, this defense strategy was found to be less effective in a later study [181].
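
A rough sketch of the feature-squeezing idea using the two squeezers mentioned above (bit-depth reduction and spatial smoothing) and a placeholder model: if the prediction on the squeezed input disagrees strongly with the prediction on the raw input, the input is flagged as likely adversarial. The threshold value is an assumption for illustration.

```python
import torch
import torch.nn as nn

def reduce_bit_depth(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Quantize pixel values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def median_smooth(x: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Simple local median filter (spatial-smoothing squeezer)."""
    pad = k // 2
    patches = nn.functional.unfold(nn.functional.pad(x, (pad,) * 4, mode="reflect"), k)
    med = patches.view(x.size(0), x.size(1), k * k, -1).median(dim=2).values
    return med.view_as(x)

def is_adversarial(model, x, threshold=0.5):
    with torch.no_grad():
        p_raw = torch.softmax(model(x), dim=1)
        p_sq = torch.softmax(model(median_smooth(reduce_bit_depth(x))), dim=1)
    # Large L1 disagreement between raw and squeezed predictions flags the input.
    return (p_raw - p_sq).abs().sum(dim=1) > threshold

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
print(is_adversarial(model, x))
```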

V-A4 Features Masking

In [182], the authors proposed adding a masking layer before the classifier's softmax layer, which is mainly responsible for the classification task. The purpose of the masking layer is to mask the most sensitive input features, i.e., those most prone to adversarial perturbations, by forcing the corresponding weights of this layer to zero.

V-A5 Developing Adversarially Robust Features

This method has recently been proposed as an effective approach to make DNNs resilient against adversarial attacks [183]. The authors leveraged connections between the natural spectral geometric properties of the dataset and the metric of interest to develop adversarially robust features. They empirically demonstrated that this spectral approach can be used to generate adversarially robust features, which can ultimately be used to develop robust models.

V-A6 Manifold Projection

In this method, input examples are projected onto the manifold of data learned by another ML model; generally, the manifold is provided by a generative model. For instance, Song et al. [184] leveraged generative models to clean adversarial perturbations from malicious images before passing the cleaned images to an unmodified ML model. Furthermore, the paper shows that, regardless of the attack type and targeted model, adversarial examples lie in low-probability regions of the training data distribution. In a similar study [185], the authors used generative adversarial networks (GANs) to clean adversarial perturbations. Similarly, Meng et al. proposed a framework named MagNet that includes one or more detectors and a reformer network [186]: the detector network classifies inputs as normal or adversarial by learning the manifold of normal examples, whereas the reformer network moves adversarial examples towards the learned manifold.
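
A sketch of manifold projection in the spirit of Defense-GAN, using an untrained placeholder generator: the (possibly adversarial) input is replaced by the closest point G(z) on the generator's learned manifold, found by gradient descent over the latent code z, and the reconstruction is classified instead of the raw input.

```python
import torch
import torch.nn as nn

# Hypothetical generator mapping a latent code to a flattened 28x28 image.
generator = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid())
classifier = nn.Sequential(nn.Linear(784, 10))

def project_to_manifold(x_flat: torch.Tensor, steps: int = 200, lr: float = 0.05):
    """Find z minimizing ||G(z) - x||^2 and return the reconstruction G(z)."""
    z = torch.zeros(x_flat.size(0), 64, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((generator(z) - x_flat) ** 2).sum()
        loss.backward()
        opt.step()
    return generator(z).detach()

x = torch.rand(4, 784)                        # possibly adversarial inputs
x_clean = project_to_manifold(x)              # projected back onto the learned manifold
preds = classifier(x_clean).argmax(dim=1)     # classify the reconstruction instead of x
```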

V-B Modifying Model

The methods in this category mainly modify the parameters or features learned by the trained model (e.g., defensive distillation); a few prominent methods of this kind are described next.

V-B1 Network Distillation

Papernot et al. [187] adopted network distillation as a procedure to defend against adversarial attacks. The notion of distillation was originally proposed by Hinton et al. [188] as a mechanism for effectively transferring knowledge from a larger network to a smaller one. In the defense developed by Papernot et al., the class-probability vector produced by the first model is used as soft training labels for a second, distilled model, which increases the resilience of the DNN model to very small perturbations. However, Carlini et al. showed that the defensive distillation method does not work against their proposed attacks [133].
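
A minimal sketch of the distillation step, with placeholder teacher and student networks and an assumed temperature: the teacher's temperature-softened class probabilities serve as soft training targets for the distilled model.

```python
import torch
import torch.nn as nn

T = 20.0                                          # distillation temperature (assumed value)
teacher = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(student.parameters(), lr=0.1)

for step in range(100):                           # toy loop on random data
    x = torch.rand(32, 1, 28, 28)
    with torch.no_grad():
        soft_targets = torch.softmax(teacher(x) / T, dim=1)   # softened teacher probabilities
    log_probs = torch.log_softmax(student(x) / T, dim=1)
    # Cross-entropy between the soft targets and the student's predictions at temperature T.
    loss = -(soft_targets * log_probs).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```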

V-B2 Network Verification

Network verification aims to verify properties of a DNN, i.e., whether an input satisfies or violates a certain property; such verification can potentially catch new, unseen adversarial perturbations. For instance, a network verification method for robustifying DNN models with ReLU activations is presented in [178]. To verify the properties of the deep model, the authors used a satisfiability modulo theories (SMT) solver and showed that the network verification problem is NP-complete. The restriction to ReLU activations is relaxed, with certain modifications, in [189].

V-B3 Gradient Regularization

Ross et al. [190] proposed input gradient regularization as a defense strategy against adversarial attacks. In the proposed approach, they used differentiable DNN models and penalized the variation of the output with respect to changes in the input. As a result, adversarial examples with small perturbations are unlikely to change the output of the deep models, but this roughly doubles the training complexity. The notion of penalizing the gradient of the model's loss function with respect to the inputs for robustification had already been investigated in [191].
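
A sketch of input-gradient regularization with a placeholder model and an assumed regularization strength: the training loss adds a penalty on the norm of the gradient of the loss with respect to the input, computed with double backpropagation (which is the source of the roughly twofold increase in training cost mentioned above).

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 0.1                                         # regularization strength (assumed)

for step in range(100):                           # toy loop on random data
    x = torch.rand(32, 1, 28, 28, requires_grad=True)
    y = torch.randint(0, 10, (32,))
    ce = nn.functional.cross_entropy(model(x), y)
    # Gradient of the loss w.r.t. the input, kept in the graph for double backprop.
    input_grad, = torch.autograd.grad(ce, x, create_graph=True)
    penalty = (input_grad ** 2).sum(dim=(1, 2, 3)).mean()
    loss = ce + lam * penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
```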

V-B4 Classifier Robustifying

In this method, classification models that are robust to adversarial attacks are designed from the ground up, rather than detecting or transforming adversarial examples. Bradshaw et al. [192] exploited the uncertainty around adversarial examples and developed a hybrid model using Gaussian processes (GPs) with RBF kernels on top of DNNs, showing that their approach is robust against adversarial attacks; the latent variables in the GPs follow a Gaussian distribution parameterized by a mean and a covariance encoded with RBF kernels. Schott et al. [193] proposed the first adversarially robust classifier for the MNIST dataset, where robustness is achieved by analysis-by-synthesis through learned class-conditional data distributions. This work highlights the lack of research providing guaranteed robustness against adversarial attacks.

V-B5 Explainable and Interpretable ML

In a recent study [194], an adversarial example detection approach is presented for a face recognition task that leverages the interpretability of DNN models. The key idea is the identification of the neurons critical for an individual attribute, performed by establishing a bi-directional correspondence between the neurons of a DNN model and its attributes; the activation values of these neurons are then amplified to augment the reasoning part, while the values of the other neurons are decreased to conceal the uninterpretable part. Recently, Nicholas Carlini showed that this approach does not defend against untargeted adversarial perturbations generated under an $\ell_\infty$-norm bound of 0.01 [195].

V-B6 Masking ML Model

In a recent study [196], the authors formulated adversarial ML as a learning and masking problem and presented a classifier-masking method for secure learning. To mask the deep model, they introduced noise into the DNN's logit output, which defended against low-distortion attacks.

V-C Adding Auxiliary Model(s)

These methods aim to utilize additional ML models to enhance the robustness of the main model (e.g., using generative models for adversarial detection); widely used methods of this kind are described below.

V-C1 Adversarial Detection

In the adversarial detection strategy, a binary classifier (detector), e.g., a DNN, is trained to identify whether an input is legitimate or adversarial [197, 198]. In [199], the authors used a simple DNN-based binary adversarial detector as an auxiliary network to the main model. In a similar study [200], the authors introduced an outlier class while training the DNN model; the model then detects adversarial examples by classifying them as outliers. This defense approach has been used in a number of studies in the literature.

V-C2 Ensembling Defenses

As adversarial examples can be crafted in many different ways, multiple defense methods can be combined (in parallel or sequentially) to defend against them [201]. PixelDefend [184] is a prime example of an ensemble defense, in which an adversarial detector and an "input reconstructor" are integrated to thwart adversarial examples. However, He et al. showed that an ensemble of weak defense strategies does not provide a strong defense against adversarial attacks [181]; further, they demonstrated that adaptive adversarial examples transfer across several defense and detection proposals.

V-C3 Using Generative ML Models

Goodfellow et al. [99] first proposed the idea of using generative training to defend against adversarial attacks; however, in the same study they argued that being generative is not sufficient and presented an alternative hypothesis of ensemble training, which works by ensembling multiple instances of the original DNN model. In [202], an approach named cowboy is presented to detect and defend against adversarial examples: adversarial samples are transformed back onto the data manifold by cleaning them with a GAN trained on the same dataset. Furthermore, the authors empirically showed that adversarial examples lie outside the data manifold learned by the GAN, i.e., the discriminator of the GAN consistently scores adversarial perturbations lower than real samples across multiple attacks and datasets. In another similar study [203], a GAN-based framework named Defense-GAN is trained to model the distribution of legitimate images; at inference time, Defense-GAN finds a similar output without adversarial perturbations, which is then fed to the original classifier. The authors of both studies claim that their methods are independent of the DNN model and attack type and can be used in existing settings. A summary of various state-of-the-art adversarial defense studies is presented in Table VIII.

Fig. 13: The taxonomy of different adversarial defense evaluation methods and recommendations.
Reference | Type | Defense | Adv. Perturbation | Dataset | Threat Model | Original Accuracy | Adversarial Accuracy | Defense Success
Gu et al. [179] | Modifying Data | Cleaning adversarial examples using denoising autoencoders (DAEs). | Local perturbations, e.g., additive Gaussian noise. | MNIST | Not clearly articulated | 99% | 100% | 99.1%
Xu et al. [180] | Modifying Data | Reduced the feature space available to an adversary. | Evaluated different state-of-the-art perturbation generation methods. | MNIST, CIFAR-10, and ImageNet | White box | MNIST (99.43%), CIFAR-10 (94.84%), and ImageNet (68.36%) | Roughly 100% for each model using different attack algorithms. | MNIST (62.7%), CIFAR-10 (77.27%), and ImageNet (68.11%)
Gao et al. [182] | Modifying Data | Proposed DeepCloak, which removes unnecessary features in the model. | Perturbations generated using FGSM. | CIFAR-10 | Not clearly articulated | 93.72% (1% masking) | 39.23% (1% masking) | 10% increase in adversarial settings
Garg et al. [183] | Modifying Data | Constructed adversarially robust features using spectral properties of the dataset. | Perturbations. | MNIST | Not clearly articulated | Not clearly articulated | Not clearly articulated | Provided empirical evidence for the effectiveness of the proposed defense.
Song et al. [184] | Modifying Data | Proposed PixelDefend to clean adversarial examples by moving them back to the manifold of the original training data. | Used five state-of-the-art adversarial attacks. | Fashion MNIST and CIFAR-10 | White box | 90% | Fashion MNIST (63%), CIFAR-10 (32%) for the strongest defense. | Adversarial accuracy increased from 63% to 84% for Fashion MNIST and from 32% to 70% for CIFAR-10.
Prakash et al. [204] | Modifying Data | Used a wavelet-based denoising method to clean natural and adversarial noise. | Generated perturbations using pixel deflection. | ImageNet | White box | 98.9% | Not clearly articulated | 81% accuracy
Xie et al. [205] | Modifying Data | Proposed to use randomization at the inference stage via random resizing and random padding. | — | ImageNet | White box and black box | 99.2% | Not clearly articulated | 86% accuracy
Guo et al. [206] | Modifying Data | Investigated different image transformation methods for defending against adversarial attacks. | — | ImageNet | Gray box and black box | 75% | Not clearly articulated | 70% accuracy
Goodfellow et al. [99] | Modifying Data | Augmented adversarial examples into the training set. | Fast gradient sign method (FGSM). | MNIST | Not clearly articulated | Not clearly articulated | 99.9% | With adversarial training, the error rate fell to 17.9%
Schott et al. [193] | Adding Auxiliary Model | Used generative modelling with a variational autoencoder (VAE). | Applied score-based, decision-based, transfer-based, and gradient-based attacks. | MNIST | Not clearly articulated | 99% | Not clearly articulated | 80%
Wong et al. [103] | Adding Auxiliary Model | Formulated a robust optimization problem using a convex outer approximation for the detection of adversarial examples. | FGSM and gradient-descent-based methods ($\ell_\infty$). | MNIST | Not clearly articulated | 98.2% accuracy | Not clearly articulated | 94.2% accuracy
Liao et al. [207] | Adding Auxiliary Model | Proposed a high-level representation guided denoiser (HGD) for defending against adversarial attacks. | — | ImageNet | White box and black box | 75% | Not clearly articulated | 75% accuracy
Ross et al. [190] | Modifying Model | Trained the model with input gradient regularization for defending against adversarial attacks. | Evaluated three well-known attacks, i.e., FGSM, TGSM, and JSMA. | MNIST, Street-View House Numbers (SVHN), and notMNIST | White box and black box | Not clearly articulated | Not clearly articulated | MNIST (100%), SVHN (90%), and notMNIST (100%), approximately the same for each type of attack.
Madry et al. [208] | Modifying Model | Trained the model with optimized parameters using robust optimization. | — | CIFAR-10 | White box and weaker black box | 87% | Not clearly articulated | 46% accuracy
Buckman et al. [209] | Modifying Model | Proposed to use thermometer encoding for inputs. | — | CIFAR-10 | White box | 90% | Not clearly articulated | 79% accuracy
Dhillon et al. [210] | Modifying Model | Proposed stochastic activation pruning of the trained model for defense. | — | CIFAR-10 | Not clearly articulated | 83% | Not clearly articulated | 51% accuracy
Croce et al. [211] | Modifying Model | Proposed a regularization scheme for ReLU networks. | Norm-bounded perturbation methods. | MNIST, German Traffic Signs (GTS), Fashion MNIST, and CIFAR-10 | Not clearly articulated | 98.81% | Not clearly articulated | 96.4% accuracy (on the first 1000 test points)
TABLE VIII: Summary of state-of-the-art adversarial defense approaches

V-D Adversarial Defense Evaluation: Methods and Recommendations

This section presents different potential methods for performing the evaluation of adversarial defenses along with an outline of common evaluation recommendations, as depicted in Figure 13.

V-D1 Principles for Performing Defense Evaluations

In a recent study [212], Carlini et al. provided recommendations for evaluating adversarial defenses and identified three common reasons for evaluating their performance; these are briefly described below.

Defending Against the Adversary

Defending against adversaries attempting adversarial attacks on the system is crucial, as it is a matter of security. In real-world applications, if ML-based systems are deployed without considering security threats, adversaries willing to harm the system will continue attacking it as long as there are incentives to do so. The nature and severity of attacks vary with adversarial capabilities and knowledge; in this regard, proper and well-thought-out threat modeling (described in detail in an earlier section) is of paramount importance.

Testing Worst-Case Robustness

In real-world settings, testing the worst-case robustness of ML models from the perspective of an adversary is crucial, as real-world systems exhibit randomness that is hard to predict. Compared to random testing, worst-case analysis is a powerful tool for distinguishing a system that fails one time in a billion trials from a system that never fails. For instance, if a powerful adversary attempting to induce intentional misbehavior fails to do so, this provides strong evidence that the system will not misbehave under previously unforeseen randomness.

Measuring Progress of ML Towards Human Level Abilities

To advance ML techniques, it is important to understand why ML algorithms fail in particular settings. In the literature, the performance gap between ML methods and humans is considerably small on many complex tasks, e.g., natural image classification [109], mastering the game of Go using reinforcement learning [213], and human-level accuracy in the medical domain [214, 215]. However, when evaluating adversarial robustness, the performance gap between humans and ML systems is very large; this holds even for cases where ML models exhibit super-human accuracy, since an adversarial attack can completely defeat the prediction performance of the system. This suggests a fundamental difference between the decision-making processes of humans and ML models. Keeping this aspect in mind, adversarial robustness is a measure of ML progress that is orthogonal to performance.

V-D2 Common Evaluation Recommendations

In this section, we provide a brief discussion of common evaluation recommendations and refer interested readers to the recent article by Carlini et al. [212] for a detailed and comprehensive description of evaluation recommendations and pitfalls for adversarial robustness. As the authors have promised to keep that paper up to date, we also refer interested readers to the following URL for an updated version: https://github.com/evaluating-adversarial-robustness/adv-eval-paper. To avoid unintended consequences and pitfalls of evaluation methods, the following evaluation recommendations can be adopted.

Use Both Targeted and Untargeted Attacks

Adversarial robustness should be evaluated with both targeted and untargeted attacks, and it is important to explicitly state which attacks were considered during the evaluation. Theoretically, an untargeted attack is strictly easier than a targeted attack, but in practice performing an untargeted attack can give better results than targeting any particular class. Many untargeted attacks mainly work by minimizing the prediction confidence of the correct label; in contrast, targeted attacks work by maximizing the prediction confidence of some other class.

Perform Ablation

Perform ablation analysis by removing combinations of defense components and verifying that the attack succeeds on a similar but undefended model. This is useful for developing a straightforward understanding of the goals of the evaluation and for assessing the effectiveness of combining multiple defense strategies to robustify the model.

Diverse Test Settings

Perform the evaluation in diverse settings, i.e., test robustness to random noise, validate broader threat models, and carefully evaluate the attack hyperparameters, selecting those that provide the best attack performance. It is also important to verify that the attack converges under the selected hyperparameters and to investigate whether the attack results are sensitive to a specific set of hyperparameters. In addition, experiment with at least one hard-label attack and one gradient-free attack.

Evaluate Defense on Broader Domains

For a defense to be truly effective, consider evaluating the proposed method on domains beyond images; the majority of works on adversarial machine learning investigate only the imaging domain. State explicitly if the defense is only capable of defending against adversarial perturbations in a specific domain (e.g., images).

Ensemble Over Randomness

It is important to create adversarial examples by ensembling over the randomness of defenses that randomize aspects of DNN inference, since the introduced stochasticity makes standard attacks hard to realize. Verify that the attack remains successful when the randomness is fixed to a constant value, and define the threat model and the extent to which knowledge of the randomness is available to the adversary.

Transferability Attack

Select a substitute model similar to the defended model and evaluate the transferability of the attack, because adversarial examples are often transferable across different models, i.e., an adversarial sample constructed for one model often remains adversarial for another model with an identical architecture [216], even when that model is trained on a completely different data distribution.

Upper Bound of Robustness

To provide an upper bound on robustness, apply adaptive attacks, i.e., attacks that are given full access to the defense. Apply the strongest attack for the given threat model and defense being evaluated. Also, verify that adaptive attacks perform better than other attacks and evaluate their performance in multiple settings, e.g., combinations of transfer, random-noise, and black-box attacks. For instance, Ruan et al. evaluated the robustness of DNNs and presented an approximate approach that provides lower and upper bounds on robustness with provable guarantees [217].

V-E Testing of ML Models and Autonomous Vehicles

V-E1 Behavior Testing of Models

In a recent study, Sun et al. proposed four novel testing criteria for verifying structural features of DNNs using MC/DC (Modified Condition/Decision Coverage, a method of measuring the extent to which safety-critical software has been adequately tested) coverage criteria [218]. They validated the proposed methods by generating test cases guided by their coverage criteria using both symbolic and gradient-based approaches and showed that their method was able to capture undesired behaviors of DNNs. Similarly, a set of multi-granularity testing criteria named DeepGauge is presented in [219], which works by rendering a multi-faceted testbed. A security analysis of neural-network-based systems using symbolic intervals is presented in [220]; it combines interval arithmetic and symbolic intervals with other optimization methods to tighten the over-estimated bounds on the outputs. A coverage-guided fuzzing method for testing neural networks against goals such as finding numerical errors, generating disagreements, and uncovering undesirable model behavior is presented in [221]. In [222], the first approach utilizing differential fuzzing testing is presented for exposing incorrect behavior of DL systems.

V-E2 Automated Testing of ML Empowered Autonomous Vehicles: An Overview

To ensure completely secure functionality of autonomous vehicles in real-world environments, the development of automated testing tools is required, since autonomous vehicles leverage different ML techniques for building decision systems at different levels, e.g., perception, decision making, and control. In this section, we provide an overview of various studies on testing autonomous vehicles.

Tian et al. [223] proposed and investigated a tool named DeepTest to test DNN-empowered autonomous vehicles and automatically detect erroneous behaviors that can potentially cause fatal accidents. Their tool automatically generates test cases reflecting changes in real-world road conditions, such as weather and lighting, and then systematically explores different parts of the DNN logic by maximizing the number of activated neurons. Furthermore, they tested three DNNs that won top positions in the Udacity self-driving car challenge and found various erroneous behaviors under different real-world road conditions (e.g., rain, fog, and blurring) that could lead to fatal accidents. In [224], the authors used a GAN-based approach to generate synthetic scenes of different driving conditions for testing autonomous cars. A metamorphic testing approach for evaluating the software of self-driving vehicles is presented in [225].

A generic framework for testing the security and robustness of ML models for computer vision systems under realistic conditions is presented in [130]. The authors evaluated the security of fifteen state-of-the-art computer vision systems in a black-box setting, including Nvidia's Dave self-driving system. Moreover, it has been provably demonstrated that there exists a trade-off between adversarial robustness to perturbations and the standard accuracy of the model in a fairly simple and natural setting [226]. A simulation-based framework for generating adversarial test cases to evaluate the closed-loop properties of ML-enabled autonomous vehicles is presented in [174]. In [227], the authors generated adversarial driving scenes using Bayesian optimization to improve self-driving behavior learned through vision-based imitation learning. An autoencoder-based approach for the automatic identification of unusual events using short dashcam video clips and inertial sensor data is presented in [228], which can potentially be used to develop a robust autonomous driving system. Various factors and challenges impacting the driveability of autonomous vehicles, along with an overview of available datasets for training self-driving systems, are presented in [229], and the challenges in designing such datasets are described in [230]. Furthermore, Dreossi et al. suggested that, while robustifying ML systems, the effect of adversarial ML should be studied by considering the semantics and context of the whole system [231]: in a DL-empowered autonomous vehicle, not every adversarial observation might lead to harmful action(s), and one might be interested only in those adversarial examples that can significantly modify the desired semantics of the whole system.

Vi Open Research Issues

The advancement of ML research and its state-of-the-art performance in various complex domains, in particular the advent of more sophisticated DL methods, may address many of the conventional challenges of vehicular networks. However, ML/DL methods cannot be naively applied to vehicular networks, which possess unique characteristics, and adapting these methods to learn such distinguishing features of vehicular networks is a challenging task [232]. In this section, we highlight a few promising areas of research that require further investigation.

Vi-A Efficient Distributed Data Storage

In the connected vehicular ecosystem, data is generated and stored in a distributed fashion, which raises questions about the applicability of ML/DL models at a global level. As ML models are typically developed under the assumption that data is easily accessible and managed by a central entity, there is a need to utilize distributed learning methods for connected vehicles so that data can be scalably acquired from multiple units in the ecosystem.
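
One candidate direction is federated-style training, sketched below with hypothetical vehicle-local datasets: each vehicle computes a local update on its own data, and only the averaged model parameters are shared, so raw sensor data never leaves the vehicle.

```python
import copy
import torch
import torch.nn as nn

def local_update(global_model, x, y, lr=0.05, epochs=1):
    """Train a copy of the global model on one vehicle's local data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict()

global_model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 4))

for rnd in range(5):                                  # federated rounds
    local_states = []
    for _ in range(3):                                # three hypothetical vehicles
        x = torch.randn(64, 20)                       # locally collected sensor features
        y = torch.randint(0, 4, (64,))
        local_states.append(local_update(global_model, x, y))
    # Federated averaging: element-wise mean of the local parameter tensors.
    avg_state = {k: torch.stack([s[k] for s in local_states]).mean(dim=0)
                 for k in local_states[0]}
    global_model.load_state_dict(avg_state)
```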

Vi-B Interpretable ML

Another major security vulnerability in CAVs is the lack of interpretability of ML schemes. ML techniques in general, and DL techniques in particular, are based on the idea of function approximation, where an empirical function is approximated using DNN architectures. Current ML/DL models lack interpretability, which is a major hurdle in the progress of ML/DL-empowered CAVs. This lack of interpretability can be exploited by adversaries to construct adversarial examples that fool the ML/DL schemes deployed in autonomous vehicles, e.g., the physical attacks on self-driving vehicles discussed above. The development of secure, explainable, and interpretable ML techniques for security-critical applications of CAVs is another open research issue.

Vi-C Defensive and Secure ML

Despite the many defense proposals presented in the literature, developing adversarially robust ML models remains an open research problem. Almost every defense has been shown to be effective only against a specific attack type and fails against stronger or unseen attacks. Moreover, most defenses address adversarial attacks on computer vision tasks, while adversarial ML is being developed for many other vertical application domains. Therefore, the development of efficient and effective novel defense strategies is essential, particularly for safety-critical applications, e.g., communication between connected vehicles.

Vi-D Privacy Preserving ML

Preserving privacy in any user-centric application is a major concern. Privacy means that models should not reveal any additional information about the subjects involved in the collected training data (a.k.a. differential privacy) [233]. As CAVs involve human subjects, ML model training should preserve the privacy of drivers, passengers, and pedestrians, where privacy breaches can result in extremely harmful consequences.
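
A minimal sketch of one common ingredient of differentially private training, in the spirit of DP-SGD and with assumed clipping and noise parameters: per-example gradients are clipped and Gaussian noise is added before the parameter update.

```python
import torch
import torch.nn as nn

model = nn.Linear(20, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
clip_norm, noise_std = 1.0, 0.5                      # assumed privacy parameters

x = torch.randn(16, 20)
y = torch.randint(0, 2, (16,))

# Accumulate clipped per-example gradients, then add Gaussian noise (DP-SGD style).
summed = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(x, y):
    loss = nn.functional.cross_entropy(model(xi.unsqueeze(0)), yi.unsqueeze(0))
    grads = torch.autograd.grad(loss, list(model.parameters()))
    total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = torch.clamp(clip_norm / (total + 1e-12), max=1.0)   # clip gradient norm
    for s, g in zip(summed, grads):
        s.add_(g * scale)

opt.zero_grad()
for p, s in zip(model.parameters(), summed):
    noise = noise_std * clip_norm * torch.randn_like(s)
    p.grad = (s + noise) / x.size(0)                 # noisy averaged gradient
opt.step()
```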

VI-E Security-Centric Proxy Metrics

The development of security-centric proxy metrics for evaluating security threats against ML systems is fundamentally important. Currently, there is no way to formalize different types of perturbation properties, e.g., indistinguishability or content preservation, nor is there a function that can determine whether a specific transformation is content-preserving. Similarly, measuring the perceptual similarity between two images is very complex, and widely used perceptual metrics are shallow functions that fail to account for many subtle aspects of human perception [234].
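To see why simple norms are poor proxies for perceptual properties, consider the sketch below: a low-amplitude perturbation spread over a whole (hypothetical) image and a small high-contrast patch can have nearly the same L2 norm while being perceptually very different, so an Lp budget alone says little about whether a perturbation is indistinguishable or content-preserving. The image and perturbation values are placeholder data.

```python
# Two perturbations of a hypothetical 32x32 grayscale image with nearly the
# same L2 norm but very different perceptual character, showing that an Lp
# budget alone is a weak proxy for perceptual similarity.
import numpy as np

rng = np.random.default_rng(3)
img = rng.uniform(size=(32, 32))              # placeholder image

# (a) low-amplitude noise spread over the whole image
noise = rng.uniform(-0.06, 0.06, size=img.shape)
# (b) a small but high-contrast 4x4 patch (sticker-like artifact)
patch = np.zeros_like(img)
patch[14:18, 14:18] = 0.3

for name, delta in [("global noise", noise), ("local patch", patch)]:
    print(f"{name:12s}  L2 = {np.linalg.norm(delta):.3f}   "
          f"Linf = {np.abs(delta).max():.3f}")
```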

VI-F Fair and Accountable ML

The ML literature reveals that ML-based results and predictions can lack fairness and accountability. The fairness property ensures that an ML model does not discriminate against specific cases, e.g., by favoring cyclists over pedestrians. Such bias in ML predictions is introduced by biased training data and results in social bias and a higher error rate for particular demographic groups. For example, researchers identified a risk of bias in the perception systems of autonomous vehicles when recognizing pedestrians with dark skin [235]. Although this experimental study evaluated models developed by other academic researchers, rather than the object detection models and training data actually used by autonomous vehicle manufacturers, it highlights a major vulnerability of ML models used in autonomous vehicles and raises serious concerns about their applicability in real-world settings, where a self-driving vehicle may encounter people from a variety of demographic backgrounds.
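A first step toward quantifying such bias is to audit a model's error rates separately for each demographic group. The sketch below computes per-group false-negative rates for a hypothetical pedestrian detector; the labels, group attribute, and miss probabilities are synthetic values used purely for illustration.

```python
# Per-group error audit (illustrative): compare false-negative rates of a
# hypothetical pedestrian detector across two demographic groups.
import numpy as np

def false_negative_rate(y_true, y_pred):
    positives = y_true == 1
    return np.mean(y_pred[positives] == 0) if positives.any() else float("nan")

rng = np.random.default_rng(4)
y_true = rng.integers(0, 2, size=1000)           # 1 = pedestrian present
group = rng.integers(0, 2, size=1000)            # 0 / 1: two demographic groups
# Simulate a detector that misses positives in group 1 more often.
miss_prob = np.where(group == 1, 0.25, 0.10)
y_pred = np.where((y_true == 1) & (rng.uniform(size=1000) < miss_prob), 0, y_true)

for g in (0, 1):
    mask = group == g
    fnr = false_negative_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: false-negative rate = {fnr:.3f}")
```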

The accountability of ML models is closely tied to their interpretability, since we are interested in models that can explain their predictions in terms of their internal parameters. The notion of accountability is fundamentally important for understanding why ML models fail on adversarial examples.

VI-G Robustifying ML Models Against Distribution Drifts

To restrict integrity attacks, ML models should be made robust against distribution drifts, i.e., situations where the training and test data distributions differ. This difference between the training and test distributions gives rise to adversarial examples, which can be viewed as worst-case distribution drifts [117]. Since data collection in the vehicular ecosystem is temporal and dynamic in nature, such distribution drifts are highly likely and will affect the robustness of the underlying ML systems. Moreover, such drifts can be exploited by adversaries to create adversarial samples at inference time; for example, the authors of [236] exploited this kind of drift by introducing positively connotated words into spam emails to evade detection. Modification of the training distribution is possible in a similar way, and distribution drift violates the widely held presumption that low learning error can be achieved when large training data is available. Ford et al. [237] have presented empirical and theoretical evidence that adversarial examples are a consequence of test error under noise caused by a distributional shift in the data. For an adversarial defense to be trustworthy, it must therefore also provide protection against data distribution shifts. As the perception system of CAVs is mainly based on data-driven modeling using historical training data, it is highly susceptible to distribution drifts, and robustifying ML models against such drifts is very important. One way to counter this problem is to leverage deep reinforcement learning (RL) algorithms for the perception system of autonomous vehicles, but this is not yet practical, as the state and action spaces in realistic road and vehicular environments are continuous and very complex, and fine control is required for the system to be effective [156]. Nevertheless, work on deep RL-based methods for autonomous vehicles is growing. For instance, Sallab et al. proposed a deep RL-based framework for autonomous vehicles that enables the vehicle to handle partially observable scenarios [238]. They evaluated their system using an open-source 3D car racing simulator (TORCS, http://torcs.sourceforge.net/) and demonstrated that their model was able to learn complex road curvatures and simple inter-vehicle interactions. On the other hand, deep RL-based systems have been shown to be vulnerable to policy induction attacks [239].
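Irrespective of the learning paradigm, a practical first step is to monitor for drift at inference time. As a simple (and admittedly limited) illustration, the sketch below compares the distribution of a single input feature observed at run time against its training-time distribution using a two-sample Kolmogorov-Smirnov test; the feature values and decision threshold are assumptions made for this example.

```python
# Simple distribution-drift check (illustrative): compare a feature's
# training-time distribution with its distribution in the live input stream
# using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # training data
live_feature = rng.normal(loc=0.4, scale=1.2, size=500)     # drifted stream

result = ks_2samp(train_feature, live_feature)
print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.4f}")
if result.pvalue < 0.01:                                     # assumed threshold
    print("Warning: possible distribution drift; model accuracy may degrade.")
```

Such statistical checks can flag benign drift, but they do not by themselves detect carefully crafted adversarial inputs, which are designed to remain close to the training distribution.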

VII Conclusions

The recent discoveries that machine learning (ML) techniques are vulnerable to adversarial perturbations have raised questions about the security of connected and autonomous vehicles (CAVs), which use ML for tasks ranging from environmental perception to object recognition and movement prediction. The safety-critical nature of CAVs clearly demands that the technologies they use be robust to all kinds of potential security threats, be they accidental, intentional, or adversarial. In this work, we presented, for the first time, a comprehensive analysis of the challenges posed by adversarial ML attacks on CAVs, aggregating insights from both the ML and CAV literature. Our major contributions include a broad description of the ML pipeline used in CAVs; a description of the various adversarial attacks that can be launched on the components of this pipeline; a detailed taxonomy of the adversarial ML threat for CAVs; and a comprehensive survey of adversarial ML attacks and defenses proposed in the literature. Finally, we discussed open research challenges and future directions to help readers develop robust and efficient solutions for the application of ML models in CAVs.

References

  • [1] Mohamed Nidhal Mejri, Jalel Ben-Othman, and Mohamed Hamdi. Survey on VANET security challenges and possible cryptographic solutions. Vehicular Communications, 1(2):53–66, 2014.
  • [2] Joseph Gardiner and Shishir Nagaraja. On the security of machine learning in malware c&c detection: A survey. ACM Computing Surveys (CSUR), 49(3):59, 2016.
  • [3] Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069, 2018.
  • [4] Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410–14430, 2018.
  • [5] Steven E Shladover. Connected and automated vehicle systems: Introduction and overview. Journal of Intelligent Transportation Systems, 22(3):190–200, 2018.
  • [6] Rasheed Hussain and Sherali Zeadally. Autonomous cars: Research results, issues and future challenges. IEEE Communications Surveys & Tutorials, 2018.
  • [7] Giuseppe Araniti, Claudia Campolo, Massimo Condoluci, Antonio Iera, and Antonella Molinaro. LTE for vehicular networking: a survey. IEEE communications magazine, 51(5):148–157, 2013.
  • [8] Haixia Peng, Le Liang, Xuemin Shen, and Geoffrey Ye Li. Vehicular communications: A network layer perspective. IEEE Transactions on Vehicular Technology, 2018.
  • [9] Stephen Checkoway, Damon McCoy, Brian Kantor, Danny Anderson, Hovav Shacham, Stefan Savage, Karl Koscher, Alexei Czeskis, Franziska Roesner, Tadayoshi Kohno, et al. Comprehensive experimental analyses of automotive attack surfaces. In USENIX Security Symposium, pages 77–92. San Francisco, 2011.
  • [10] Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pages 372–387. IEEE, 2016.
  • [11] Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. Robust Physical-World Attacks on Deep Learning Visual Classification. In Computer Vision and Pattern Recognition (CVPR), 2018.
  • [12] Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Mung Chiang, and Prateek Mittal. DARTS: Deceiving Autonomous Cars with Toxic Signs. arXiv preprint arXiv:1802.06430, 2018.
  • [13] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • [14] Joshua E Siegel, Dylan C Erb, and Sanjay E Sarma. A survey of the connected vehicle landscape—architectures, enabling technologies, applications, and development areas. IEEE Transactions on Intelligent Transportation Systems, 19(8):2391–2406, 2018.
  • [15] Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. Adversarial examples: Attacks and defenses for deep learning. IEEE transactions on neural networks and learning systems, 2019.
  • [16] Wenqi Wang, Benxiao Tang, Run Wang, Lina Wang, and Aoshuang Ye. A survey on adversarial attacks and defenses in text. arXiv preprint arXiv:1902.07285, 2019.
  • [17] Rodrigo Marçal Gandia, Fabio Antonialli, Bruna Habib Cavazza, Arthur Miranda Neto, Danilo Alves de Lima, Joel Yutaka Sugano, Isabelle Nicolai, and Andre Luiz Zambalde. Autonomous vehicles: scientometric and bibliometric review. Transport reviews, 39(1):9–28, 2019.
  • [18] Society of Automotive Engineers. Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles, 2016.
  • [19] Steven E Shladover. Roadway automation technology–research needs. Transportation Research Record, (1283), 1990.
  • [20] Ernst Dieter Dickmanns. Vision for ground vehicles: History and prospects. International Journal of Vehicle Autonomous Systems, 1(1):1–44, 2002.
  • [21] Hans-Benz Glathe. Prometheus-a cooperative effort of the european automotive manufacturers. Technical report, SAE Technical Paper, 1994.
  • [22] Sadayuki Tsugawa, Takaharu Saito, and Akio Hosaka. Super smart vehicle system: AVCS related systems for the future. In Proceedings of the Intelligent Vehicles '92 Symposium, pages 132–137. IEEE, 1992.
  • [23] James H Rillings. Automated highways: Cars that drive themselves in tight formation might alleviate the congestion now plaguing urban freeways. Scientific American, 277(4), 1997.
  • [24] Roman Staszewski and Hannes Estl. Making cars safer through technology innovation. White Paper by Texas Instruments Incorporated, 2013.
  • [25] Elisabeth Uhlemann. Introducing connected vehicles [connected vehicles]. IEEE Vehicular Technology Magazine, 10(1):23–31, 2015.
  • [26] Ning Lu, Nan Cheng, Ning Zhang, Xuemin Shen, and Jon W Mark. Connected vehicles: Solutions and challenges. IEEE internet of things journal, 1(4):289–299, 2014.
  • [27] Sadayuki Tsugawa, Sabina Jeschke, and Steven E Shladover. A review of truck platooning projects for energy savings. IEEE Transactions on Intelligent Vehicles, 1(1):68–77, 2016.
  • [28] Jennie Lioris, Ramtin Pedarsani, Fatma Yildiz Tascikaraoglu, and Pravin Varaiya. Platoons of connected vehicles can double throughput in urban roads. Transportation Research Part C: Emerging Technologies, 77:292–305, 2017.
  • [29] Karl Koscher, Alexei Czeskis, Franziska Roesner, Shwetak Patel, Tadayoshi Kohno, Stephen Checkoway, Damon McCoy, Brian Kantor, Danny Anderson, Hovav Shacham, et al. Experimental security analysis of a modern automobile. In Security and Privacy (SP), 2010 IEEE Symposium on, pages 447–462. IEEE, 2010.
  • [30] Pierre Kleberger, Tomas Olovsson, and Erland Jonsson. Security aspects of the in-vehicle network in the connected car. In 2011 IEEE Intelligent Vehicles Symposium (IV), pages 528–533. IEEE, 2011.
  • [31] Steven E Shladover, Christopher Nowakowski, Xiao-Yun Lu, and Robert Ferlis. Cooperative adaptive cruise control: Definitions and operating concepts. Transportation Research Record, 2489(1):145–152, 2015.
  • [32] Irshad Ahmed Sumra, Halabi Bin Hasbullah, and Jamalul-lail Bin AbManan. Attacks on security goals (confidentiality, integrity, availability) in vanet: a survey. In Vehicular Ad-Hoc Networks for Smart Cities, pages 51–61. Springer, 2015.
  • [33] Kiho Lim, Kastuv M Tuladhar, and Hyunbum Kim. Detecting location spoofing using adas sensors in vanets. In 2019 16th IEEE Annual Consumer Communications & Networking Conference (CCNC), pages 1–4. IEEE, 2019.
  • [34] Sebastian Bittl, Arturo A Gonzalez, Matthias Myrtus, Hanno Beckmann, Stefan Sailer, and Bernd Eissfeller. Emerging attacks on vanet security based on gps time spoofing. In 2015 IEEE Conference on Communications and Network Security (CNS), pages 344–352. IEEE, 2015.
  • [35] Jonathan Petit and Steven E Shladover. Potential cyberattacks on automated vehicles. IEEE Transactions on Intelligent Transportation Systems, 16(2):546–556, 2015.
  • [36] Bryan Parno and Adrian Perrig. Challenges in securing vehicular networks. In Workshop on Hot Topics in Networks (HotNets-IV), pages 1–6. Maryland, USA, 2005.
  • [37] Bassem Mokhtar and Mohamed Azab. Survey on security issues in vehicular ad hoc networks. Alexandria engineering journal, 54(4):1115–1126, 2015.
  • [38] Sherali Zeadally, Ray Hunt, Yuh-Shyan Chen, Angela Irwin, and Aamir Hassan. Vehicular ad hoc networks (VANETS): status, results, and challenges. Telecommunication Systems, 50(4):217–241, 2012.
  • [39] Jie Li, Huang Lu, and Mohsen Guizani. Acpn: A novel authentication framework with conditional privacy-preservation and non-repudiation for vanets. IEEE Transactions on Parallel and Distributed Systems, 26(4):938–948, 2015.
  • [40] John R Douceur. The sybil attack. In International workshop on peer-to-peer systems, pages 251–260. Springer, 2002.
  • [41] Debiao He, Sherali Zeadally, Baowen Xu, and Xinyi Huang. An efficient identity-based conditional privacy-preserving authentication scheme for vehicular ad hoc networks. IEEE Transactions on Information Forensics and Security, 10(12):2681–2691, 2015.
  • [42] Mani Amoozadeh, Arun Raghuramu, Chen-Nee Chuah, Dipak Ghosal, H Michael Zhang, Jeff Rowe, and Karl Levitt. Security vulnerabilities of connected vehicle streams and their impact on cooperative driving. IEEE Communications Magazine, 53(6):126–132, 2015.
  • [43] Nikita Lyamin, Denis Kleyko, Quentin Delooz, and Alexey Vinel. Ai-based malicious network traffic detection in vanets. IEEE Network, 32(6):15–21, 2018.
  • [44] Seyhan Ucar, Sinem Coleri Ergen, and Oznur Ozkasap. Data-driven abnormal behavior detection for autonomous platoon. In 2017 IEEE Vehicular Networking Conference (VNC), pages 69–72. IEEE, 2017.
  • [45] Gopi Krishnan Rajbahadur, Andrew J Malton, Andrew Walenstein, and Ahmed E Hassan. A survey of anomaly detection for connected vehicle cybersecurity and safety. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 421–426. IEEE, 2018.
  • [46] Mevlut Turker Garip, Mehmet Emre Gursoy, Peter Reiher, and Mario Gerla. Congestion attacks to autonomous cars using vehicular botnets. In NDSS Workshop on Security of Emerging Networking Technologies (SENT), San Diego, CA, 2015.
  • [47] Philip Koopman and Michael Wagner. Autonomous vehicle safety: An interdisciplinary challenge. IEEE Intelligent Transportation Systems Magazine, 9(1):90–96, 2017.
  • [48] Yasser Shoukry, Paul Martin, Paulo Tabuada, and Mani Srivastava. Non-invasive spoofing attacks for anti-lock braking systems. In International Workshop on Cryptographic Hardware and Embedded Systems, pages 55–72. Springer, 2013.
  • [49] Björn Wiedersheim, Zhendong Ma, Frank Kargl, and Panos Papadimitratos. Privacy in inter-vehicular networks: Why simple pseudonym change is not enough. In Wireless On-demand Network Systems and Services (WONS), 2010 Seventh International Conference on, pages 176–183. IEEE, 2010.
  • [50] Suwan Wang and Yuan He. A trust system for detecting selective forwarding attacks in vanets. In International Conference on Big Data Computing and Communications, pages 377–386. Springer, 2016.
  • [51] Khattab Ali Alheeti, Anna Gruebler, and Klaus McDonald-Maier. Intelligent intrusion detection of grey hole and rushing attacks in self-driving vehicular networks. Computers, 5(3):16, 2016.
  • [52] Maxim Raya, Daniel Jungels, Panos Papadimitratos, Imad Aad, and Jean-Pierre Hubaux. Certificate revocation in vehicular networks. Laboratory for computer Communications and Applications (LCA) School of Computer and Communication Sciences, EPFL, Switzerland, 2006.
  • [53] Felipe Cunha, Leandro Villas, Azzedine Boukerche, Guilherme Maia, Aline Viana, Raquel AF Mini, and Antonio AF Loureiro. Data communication in vanets: Protocols, applications and challenges. Ad Hoc Networks, 44:90–103, 2016.
  • [54] Jeremy J Blum, Azim Eskandarian, and Lance J Hoffman. Challenges of intervehicle ad hoc networks. IEEE transactions on intelligent transportation systems, 5(4):347–351, 2004.
  • [55] Ramon Dos Reis Fontes, Claudia Campolo, Christian Esteve Rothenberg, and Antonella Molinaro. From theory to experimental evaluation: Resource management in software-defined vehicular networks. IEEE Access, 5:3069–3076, 2017.
  • [56] Le Liang, Hao Ye, and Geoffrey Ye Li. Toward intelligent vehicular networks: A machine learning framework. IEEE Internet of Things Journal, 6(1):124–135, 2019.
  • [57] Hao Ye and Geoffrey Ye Li. Deep reinforcement learning for resource allocation in v2v communications. In 2018 IEEE International Conference on Communications (ICC), pages 1–6. IEEE, 2018.
  • [58] Xue Yang, Leibo Liu, Nitin H Vaidya, and Feng Zhao. A vehicle-to-vehicle communication protocol for cooperative collision warning. In The First Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services, 2004. MOBIQUITOUS 2004., pages 114–123. IEEE, 2004.
  • [59] Jiafu Wan, Jianqi Liu, Zehui Shao, Athanasios Vasilakos, Muhammad Imran, and Keliang Zhou. Mobile crowd sensing for traffic prediction in internet of vehicles. Sensors, 16(1):88, 2016.
  • [60] Allan M de Souza, Roberto S Yokoyama, Guilherme Maia, Antonio Loureiro, and Leandro Villas. Real-time path planning to prevent traffic jam through an intelligent transportation system. In 2016 IEEE Symposium on Computers and Communication (ISCC), pages 726–731. IEEE, 2016.
  • [61] Miao Wang, Hangguan Shan, Rongxing Lu, Ran Zhang, Xuemin Shen, and Fan Bai. Real-time path planning based on hybrid-vanet-enhanced transportation system. IEEE Transactions on Vehicular Technology, 64(5):1664–1678, 2015.
  • [62] Lin Yao, Jie Wang, Xin Wang, Ailun Chen, and Yuqi Wang. V2X routing in a VANET based on the hidden markov model. IEEE Transactions on Intelligent Transportation Systems, 19(3):889–899, 2018.
  • [63] Guangtao Xue, Yuan Luo, Jiadi Yu, and Minglu Li. A novel vehicular location prediction based on mobility patterns for routing in urban VANET. EURASIP Journal on Wireless Communications and Networking, 2012(1):222, 2012.
  • [64] Fanhui Zeng, Rongqing Zhang, Xiang Cheng, and Liuqing Yang. Channel prediction based scheduling for data dissemination in VANETs. IEEE Communications Letters, 21(6):1409–1412, 2017.
  • [65] Amin Karami. ACCPNDN: Adaptive congestion control protocol in named data networking by learning capacities using optimized time-lagged feedforward neural network. Journal of Network and Computer Applications, 56:1–18, 2015.
  • [66] Nasrin Taherkhani and Samuel Pierre. Centralized and localized data congestion control strategy for vehicular ad hoc networks using a machine learning clustering algorithm. IEEE Transactions on Intelligent Transportation Systems, 17(11):3275–3285, 2016.
  • [67] Zhong Li, Cheng Wang, and Chang-Jun Jiang. User association for load balancing in vehicular networks: An online reinforcement learning approach. IEEE Transactions on Intelligent Transportation Systems, 18(8):2217–2228, 2017.
  • [68] Adrian Taylor, Sylvain Leblanc, and Nathalie Japkowicz. Anomaly detection in automobile control network data with long short-term memory networks. In Data Science and Advanced Analytics (DSAA), 2016 IEEE International Conference on, pages 130–139. IEEE, 2016.
  • [69] Qiang Zheng, Kan Zheng, Haijun Zhang, and Victor CM Leung. Delay-optimal virtualized radio resource scheduling in software-defined vehicular networks via stochastic learning. IEEE Transactions on Vehicular Technology, 65(10):7857–7867, 2016.
  • [70] Ribal F Atallah, Chadi M Assi, and Jia Yuan Yu. A reinforcement learning technique for optimizing downlink scheduling in an energy-limited vehicular network. IEEE Transactions on Vehicular Technology, 66(6):4592–4601, 2017.
  • [71] Ribal Atallah, Chadi Assi, and Maurice Khabbaz. Deep reinforcement learning-based scheduling for roadside communication networks. In Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), 2017 15th International Symposium on, pages 1–8. IEEE, 2017.
  • [72] ByeoungDo Kim, Chang Mook Kang, Jaekyum Kim, Seung Hi Lee, Chung Choo Chung, and Jun Won Choi. Probabilistic vehicle trajectory prediction over occupancy grid map via recurrent neural network. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pages 399–404. IEEE, 2017.
  • [73] Chenggang Yan, Hongtao Xie, Dongbao Yang, Jian Yin, Yongdong Zhang, and Qionghai Dai. Supervised hash coding with deep neural network for environment perception of intelligent vehicles. IEEE transactions on intelligent transportation systems, 19(1):284–295, 2018.
  • [74] Francisca Rosique, Pedro J Navarro, Carlos Fernández, and Antonio Padilla. A systematic review of perception system and simulators for autonomous vehicles research. Sensors, 19(3):648, 2019.
  • [75] Chenyi Chen, Ari Seff, Alain Kornhauser, and Jianxiong Xiao. Deepdriving: Learning affordance for direct perception in autonomous driving. In The IEEE International Conference on Computer Vision (ICCV), December 2015.
  • [76] Sebastian Ramos, Stefan Gehrig, Peter Pinggera, Uwe Franke, and Carsten Rother. Detecting unexpected obstacles for self-driving cars: Fusing deep learning and geometric modeling. In 2017 IEEE Intelligent Vehicles Symposium (IV), pages 1025–1032. IEEE, 2017.
  • [77] Ricardo Omar Chavez-Garcia and Olivier Aycard. Multiple sensor fusion and classification for moving object detection and tracking. IEEE Transactions on Intelligent Transportation Systems, 17(2):525–534, 2016.
  • [78] Heng Wang, Bin Wang, Bingbing Liu, Xiaoli Meng, and Guanghong Yang. Pedestrian recognition and tracking using 3d lidar for autonomous vehicle. Robotics and Autonomous Systems, 88:71–78, 2017.
  • [79] Yujun Zeng, Xin Xu, Dayong Shen, Yuqiang Fang, and Zhipeng Xiao. Traffic sign recognition using kernel extreme learning machines with deep perceptual features. IEEE Transactions on Intelligent Transportation Systems, 18(6):1647–1653, 2017.
  • [80] Vijay John, Keisuke Yoneda, B Qi, Zheng Liu, and Seiichi Mita. Traffic light recognition in varying illumination using deep learning and saliency map. In 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), pages 2286–2291. IEEE, 2014.
  • [81] Lei Lin, Siyuan Gong, Tao Li, and Srinivas Peeta. Deep learning-based human-driven vehicle trajectory prediction and its application for platoon control of connected and autonomous vehicles. In The Autonomous Vehicles Symposium, volume 2018, 2018.
  • [82] Qian Mao, Fei Hu, and Qi Hao. Deep learning for intelligent wireless networks: A comprehensive survey. IEEE Communications Surveys & Tutorials, 20(4):2595–2621, 2018.
  • [83] Lanhang Ye and Toshiyuki Yamamoto. Modeling connected and autonomous vehicles in heterogeneous traffic flow. Physica A: Statistical Mechanics and its Applications, 490:269–277, 2018.
  • [84] Feihu Zhang, Clara Martinez, Clark Daniel, Dongpu Cao, and Alois C Knoll. Neural network based uncertainty prediction for autonomous vehicle application. Frontiers in Neurorobotics, 13:12, 2019.
  • [85] Christos Katrakazas, Mohammed Quddus, Wen-Hua Chen, and Lipika Deka. Real-time motion planning methods for autonomous on-road driving: State-of-the-art and future research directions. Transportation Research Part C: Emerging Technologies, 60:416–442, 2015.
  • [86] Adam Houenou, Philippe Bonnifait, Véronique Cherfaoui, and Wen Yao. Vehicle trajectory prediction based on motion model and maneuver recognition. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 4363–4369. IEEE, 2013.
  • [87] Tao Li. Modeling uncertainty in vehicle trajectory prediction in a mixed connected and autonomous vehicle environment using deep learning and kernel density estimation. In The Fourth Annual Symposium on Transportation Informatics, 2018.
  • [88] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
  • [89] Zhilu Chen and Xinming Huang. End-to-end learning for lane keeping of self-driving cars. In 2017 IEEE Intelligent Vehicles Symposium (IV), pages 1856–1860. IEEE, 2017.
  • [90] Jingchu Liu, Pengfei Hou, Lisen Mu, Yinan Yu, and Chang Huang. Elements of effective deep reinforcement learning towards tactical driving decision making. arXiv preprint arXiv:1802.00332, 2018.
  • [91] Maxime Bouton, Jesper Karlsson, Alireza Nakhaei, Kikuo Fujimura, Mykel J Kochenderfer, and Jana Tumova. Reinforcement learning with probabilistic guarantees for autonomous driving. arXiv preprint arXiv:1904.07189, 2019.
  • [92] Yi Zhang, Ping Sun, Yuhan Yin, Lin Lin, and Xuesong Wang. Human-like autonomous vehicle speed control by deep reinforcement learning with double q-learning. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 1251–1256. IEEE, 2018.
  • [93] Ying He, Nan Zhao, and Hongxi Yin. Integrated networking, caching, and computing for connected vehicles: A deep reinforcement learning approach. IEEE Transactions on Vehicular Technology, 67(1):44–55, 2018.
  • [94] Vicente Milanés, Steven E Shladover, John Spring, Christopher Nowakowski, Hiroshi Kawazoe, and Masahide Nakamura. Cooperative adaptive cruise control in real traffic situations. IEEE Transactions on Intelligent Transportation Systems, 15(1):296–305, 2014.
  • [95] Min-Joo Kang and Je-Won Kang. Intrusion detection system using deep neural network for in-vehicle network security. PloS one, 11(6):e0155781, 2016.
  • [96] DianGe Yang, Kun Jiang, Ding Zhao, ChunLei Yu, Zhong Cao, ShiChao Xie, ZhongYang Xiao, XinYu Jiao, SiJia Wang, and Kai Zhang. Intelligent and connected vehicles: Current status and future perspectives. Science China Technological Sciences, 61(10):1446–1471, 2018.
  • [97] Battista Biggio, B Nelson, and P Laskov. Poisoning attacks against support vector machines. In 29th International Conference on Machine Learning, pages 1807–1814. ArXiv e-prints, 2012.
  • [98] Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pages 387–402. Springer, 2013.
  • [99] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  • [100] Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 427–436, 2015.
  • [101] Elias B Khalil, Amrita Gupta, and Bistra Dilkina. Combinatorial attacks on binarized neural networks. arXiv preprint arXiv:1810.03538, 2018.
  • [102] Emilio Rafael Balda, Arash Behboodi, and Rudolf Mathar. On generation of adversarial examples using convex programming. In 2018 52nd Asilomar Conference on Signals, Systems, and Computers, pages 60–65. IEEE, 2018.
  • [103] Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning (ICML), pages 5283–5292, 2018.
  • [104] Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 2019.
  • [105] Justin Gilmer, Ryan P Adams, Ian Goodfellow, David Andersen, and George E Dahl. Motivating the rules of the game for adversarial example research. arXiv preprint arXiv:1807.06732, 2018.
  • [106] Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • [107] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
  • [108] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009.
  • [109] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • [110] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (ICLR), 2015.
  • [111] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015.
  • [112] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia, pages 675–678. ACM, 2014.
  • [113] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [114] Kevin Riggle. An introduction to approachable threat modeling. Accessed: 23 April 2019, URL: https://increment.com/security/approachable-threat-modeling/.
  • [115] Marco Barreno, Blaine Nelson, Anthony D Joseph, and J Doug Tygar. The security of machine learning. Machine Learning, 81(2):121–148, 2010.
  • [116] Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453, 2017.
  • [117] Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. Towards the science of security and privacy in machine learning. arXiv preprint arXiv:1611.03814, 2016.
  • [118] Alexey Kurakin, Ian J Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In Artificial Intelligence Safety and Security, pages 99–112. Chapman and Hall/CRC, 2018.
  • [119] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2574–2582, 2016.
  • [120] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, 2017.
  • [121] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1765–1773, 2017.
  • [122] Sayantan Sarkar, Ankan Bansal, Upal Mahbub, and Rama Chellappa. Upset and angri: Breaking high performance image classifiers. arXiv preprint arXiv:1707.01159, 2017.
  • [123] Moustapha M Cisse, Yossi Adi, Natalia Neverova, and Joseph Keshet. Houdini: Fooling deep structured visual and speech recognition models with adversarial examples. In Advances in neural information processing systems, pages 6977–6987, 2017.
  • [124] Shumeet Baluja and Ian Fischer. Learning to attack: Adversarial transformation networks. In AAAI, 2018.
  • [125] Nilesh Dalvi, Pedro Domingos, Sumit Sanghai, Deepak Verma, et al. Adversarial classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 99–108. ACM, 2004.
  • [126] Daniel Lowd and Christopher Meek. Adversarial learning. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pages 641–647. ACM, 2005.
  • [127] Marco Barreno, Blaine Nelson, Russell Sears, Anthony D Joseph, and J Doug Tygar. Can machine learning be secure? In Proceedings of the 2006 ACM Symposium on Information, computer and communications security, pages 16–25. ACM, 2006.
  • [128] Ling Huang, Anthony D Joseph, Blaine Nelson, Benjamin IP Rubinstein, and JD Tygar. Adversarial machine learning. In Proceedings of the 4th ACM workshop on Security and artificial intelligence, pages 43–58. ACM, 2011.
  • [129] Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. A rotation and a translation suffice: Fooling cnns with simple transformations. arXiv preprint arXiv:1712.02779, 2017.
  • [130] Kexin Pei, Yinzhi Cao, Junfeng Yang, and Suman Jana. Towards practical verification of machine learning: The case of computer vision systems. arXiv preprint arXiv:1712.01785, 2017.
  • [131] Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and Xiangyu Zhang. Trojaning attack on neural networks. In 25th Annual Network and Distributed System Security Symposium, NDSS 2018, San Diego, California, USA, February 18-21, 2018, 2018.
  • [132] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506–519. ACM, 2017.
  • [133] Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 3–14. ACM, 2017.
  • [134] Nicholas Carlini and David Wagner. Audio adversarial examples: Targeted attacks on speech-to-text. In 2018 IEEE Security and Privacy Workshops (SPW), pages 1–7. IEEE, 2018.
  • [135] Tom B Brown, Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. Adversarial patch. 31st Conference on Neural Information Processing Systems (NIPS 2017), 2017.
  • [136] Yevgeniy Vorobeychik and Murat Kantarcioglu. Adversarial machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(3):1–169, 2018.
  • [137] Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84:317–331, 2018.
  • [138] Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Prateek Mittal, and Mung Chiang. Rogue signs: Deceiving traffic sign recognition with malicious ads and logos. arXiv preprint arXiv:1801.02780, 2018.
  • [139] Anurag Arnab, Ondrej Miksik, and Philip HS Torr. On the robustness of semantic segmentation models to adversarial attacks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 888–897, 2018.
  • [140] Jan Hendrik Metzen, Mummadi Chaithanya Kumar, Thomas Brox, and Volker Fischer. Universal adversarial perturbations against semantic image segmentation. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 2774–2783. IEEE, 2017.
  • [141] Volker Fischer, Mummadi Chaithanya Kumar, Jan Hendrik Metzen, and Thomas Brox. Adversarial examples for semantic image segmentation. arXiv preprint arXiv:1703.01101, 2017.
  • [142] Edgar Tretschk, Seong Joon Oh, and Mario Fritz. Sequential attacks on agents for long-term adversarial goals. arXiv preprint arXiv:1805.12487, 2018.
  • [143] Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, and Girish Chowdhary. Robust deep reinforcement learning with adversarial attacks. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pages 2040–2042. International Foundation for Autonomous Agents and Multiagent Systems, 2018.
  • [144] Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yu Liu, and Min Sun. Tactics of adversarial attack on deep reinforcement learning agents. arXiv preprint arXiv:1703.06748, 2017.
  • [145] Jernej Kos, Ian Fischer, and Dawn Song. Adversarial examples for generative models. In 2018 IEEE Security and Privacy Workshops (SPW), pages 36–42. IEEE, 2018.
  • [146] George Gondim-Ribeiro, Pedro Tabacof, and Eduardo Valle. Adversarial attacks on variational autoencoders. arXiv preprint arXiv:1806.04646, 2018.
  • [147] Dario Pasquini, Marco Mingione, and Massimo Bernaschi. Out-domain examples for generative models. arXiv preprint arXiv:1903.02926, 2019.
  • [148] Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. Interpretable adversarial perturbation in input embedding space for text. International Joint Conference on Artificial Intelligence (IJCAI), 2018.
  • [149] Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Adversarial training methods for semi-supervised text classification. International Conference on Learning Representations (ICLR), 2017.
  • [150] Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. Textbugger: Generating adversarial text against real-world applications. The Network and Distributed System Security Symposium, 2019.
  • [151] Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. Crafting adversarial input sequences for recurrent neural networks. In MILCOM 2016-2016 IEEE Military Communications Conference, pages 49–54. IEEE, 2016.
  • [152] Robin Jia and Percy Liang. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 conference on empirical methods in natural language processing (EMNLP), 2017.
  • [153] Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. Did the model understand the question? The 56th Annual Meeting of the Association for Computational Linguistics, 2018.
  • [154] Igino Corona, Giorgio Giacinto, and Fabio Roli. Adversarial attacks against intrusion detection systems: Taxonomy, solutions and open issues. Information Sciences, 239:201–225, 2013.
  • [155] Zilong Lin, Yong Shi, and Zhi Xue. Idsgan: Generative adversarial networks for attack generation against intrusion detection. arXiv preprint arXiv:1809.02077, 2018.
  • [156] Zheng Wang. Deep learning-based intrusion detection with adversaries. IEEE Access, 6:38367–38384, 2018.
  • [157] Milad Salem, Shayan Taheri, and Jiann Shiun Yuan. Anomaly generation using generative adversarial networks in host based intrusion detection. arXiv preprint arXiv:1812.04697, 2018.
  • [158] Gang Wang, Tianyi Wang, Haitao Zheng, and Ben Y Zhao. Man vs. machine: Practical adversarial detection of malicious crowdsourcing workers. In 23rd USENIX Security Symposium (USENIX Security 14), pages 239–254, 2014.
  • [159] Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick McDaniel. Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435, 2016.
  • [160] Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, and Fabio Roli. Adversarial malware binaries: Evading deep learning for malware detection in executables. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 533–537. IEEE, 2018.
  • [161] Muhammad Usama, Junaid Qadir, and Ala Al-Fuqaha. Adversarial attacks on cognitive self-organizing networks: The challenge and the way forward. In 2018 IEEE 43rd Conference on Local Computer Networks Workshops (LCN Workshops), pages 90–97. IEEE, 2018.
  • [162] Muhammad Ejaz Ahmed and Hyoungshick Kim. Poster: Adversarial examples for classifiers in high-dimensional network data. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 2467–2469. ACM, 2017.
  • [163] Eduardo Viegas, Altair Santin, Vilmar Abreu, and Luiz S Oliveira. Stream learning and anomaly-based intrusion detection in the adversarial settings. In 2017 IEEE Symposium on Computers and Communications (ISCC), pages 773–778. IEEE, 2017.
  • [164] Lea Schönherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz, and Dorothea Kolossa. Adversarial attacks against automatic speech recognition systems via psychoacoustic hiding. arXiv preprint arXiv:1808.05665, 2018.
  • [165] Arkar Min Aung, Yousef Fadila, Radian Gondokaryono, and Luis Gonzalez. Building robust deep neural networks for road sign detection. arXiv preprint arXiv:1712.09327, 2017.
  • [166] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017.
  • [167] Joseph Redmon and Ali Farhadi. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018.
  • [168] Yang Zhang, Hassan Foroosh, Philip David, and Boqing Gong. Camou: Learning physical vehicle camouflages to adversarially attack detectors in the wild. 2018.
  • [169] Dawn Song, Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, and Tadayoshi Kohno. Physical adversarial examples for object detectors. In 12th USENIX Workshop on Offensive Technologies (WOOT 18), 2018.
  • [170] Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. Robust physical-world attacks on deep learning visual classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1625–1634, 2018.
  • [171] Husheng Zhou, Wei Li, Yuankun Zhu, Yuqun Zhang, Bei Yu, Lingming Zhang, and Cong Liu. Deepbillboard: Systematic physical-world testing of autonomous driving systems. arXiv preprint arXiv:1812.10812, 2018.
  • [172] Mayank Bansal, Alex Krizhevsky, and Abhijit Ogale. Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst. arXiv preprint arXiv:1812.03079, 2018.
  • [173] Shang-Tse Chen, Cory Cornelius, Jason Martin, and Duen Horng Polo Chau. Shapeshifter: Robust physical adversarial attack on faster R-CNN object detector. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 52–68. Springer, 2018.
  • [174] Cumhur Erkan Tuncali, Georgios Fainekos, Hisahiro Ito, and James Kapinski. Simulation-based adversarial test generation for autonomous vehicles with machine learning components. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 1555–1562. IEEE, 2018.
  • [175] Shakiba Yaghoubi and Georgios Fainekos. Gray-box adversarial testing for control systems with machine learning components. In Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control, HSCC ’19, pages 179–184, New York, NY, USA, 2019. ACM.
  • [176] Adith Boloor, Xin He, Christopher Gill, Yevgeniy Vorobeychik, and Xuan Zhang. Simple physical adversarial examples against end-to-end autonomous driving models. arXiv preprint arXiv:1903.05157, 2019.
  • [177] Ruitong Huang, Bing Xu, Dale Schuurmans, and Csaba Szepesvári. Learning with a strong adversary. arXiv preprint arXiv:1511.03034, 2015.
  • [178] Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. Reluplex: An efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pages 97–117. Springer, 2017.
  • [179] Shixiang Gu and Luca Rigazio. Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068, 2014.
  • [180] Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. In 25th Annual Network and Distributed System Security Symposium, NDSS 2018, San Diego, California, USA, February 18-21, 2018, 2018.
  • [181] Warren He, James Wei, Xinyun Chen, Nicholas Carlini, and Dawn Song. Adversarial example defense: Ensembles of weak defenses are not strong. In 11th USENIX Workshop on Offensive Technologies (WOOT 17), 2017.
  • [182] Ji Gao, Beilun Wang, Zeming Lin, Weilin Xu, and Yanjun Qi. Deepcloak: Masking deep neural network models for robustness against adversarial samples. arXiv preprint arXiv:1702.06763, 2017.
  • [183] Shivam Garg, Vatsal Sharan, Brian Zhang, and Gregory Valiant. A spectral view of adversarially robust features. In Advances in Neural Information Processing Systems (NeurlIPS), pages 10159–10169, 2018.
  • [184] Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. In International Conference on Learning Representations (ICLR), 2018.
  • [185] Guoqing Jin, Shiwei Shen, Dongming Zhang, Feng Dai, and Yongdong Zhang. APE-GAN: adversarial perturbation elimination with GAN. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3842–3846. IEEE, 2019.
  • [186] Dongyu Meng and Hao Chen. Magnet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 135–147. ACM, 2017.
  • [187] Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pages 582–597. IEEE, 2016.
  • [188] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. Published in Deep Learning Workshop, Advances in Neural Information Processing Systems (NIPS), 2014.
  • [189] Nicholas Carlini, Guy Katz, Clark Barrett, and David L Dill. Ground-truth adversarial examples. arXiv preprint arXiv:1709.10207, 2017.
  • [190] Andrew Slavin Ross and Finale Doshi-Velez. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • [191] Chunchuan Lyu, Kaizhu Huang, and Hai-Ning Liang. A unified gradient regularization family for adversarial examples. In 2015 IEEE International Conference on Data Mining, pages 301–309. IEEE, 2015.
  • [192] John Bradshaw, Alexander G de G Matthews, and Zoubin Ghahramani. Adversarial examples, uncertainty, and transfer testing robustness in Gaussian process hybrid deep networks. arXiv preprint arXiv:1707.02476, 2017.
  • [193] L Schott, J Rauber, M Bethge, and W Brendel. Towards the first adversarially robust neural network model on MNIST. In Seventh International Conference on Learning Representations (ICLR 2019), pages 1–17, 2019.
  • [194] Guanhong Tao, Shiqing Ma, Yingqi Liu, and Xiangyu Zhang. Attacks meet interpretability: Attribute-steered detection of adversarial samples. In Advances in Neural Information Processing Systems, pages 7728–7739, 2018.
  • [195] Nicholas Carlini. Is AmI (attacks meet interpretability) robust to adversarial examples? arXiv preprint arXiv:1902.02322, 2019.
  • [196] Linh Nguyen, Sky Wang, and Arunesh Sinha. A learning and masking approach to secure learning. In International Conference on Decision and Game Theory for Security, pages 453–464. Springer, 2018.
  • [197] Jiajun Lu, Theerasit Issaranon, and David Forsyth. Safetynet: Detecting and rejecting adversarial examples robustly. In Proceedings of the IEEE International Conference on Computer Vision, pages 446–454, 2017.
  • [198] Divya Gopinath, Guy Katz, Corina S Pasareanu, and Clark Barrett. Deepsafe: A data-driven approach for checking adversarial robustness in neural networks. arXiv preprint arXiv:1710.00486, 2017.
  • [199] Jan Hendrik Metzen, Tim Genewein, Volker Fischer, and Bastian Bischoff. On detecting adversarial perturbations. International Conference on Learning Representations (ICLR), 2017.
  • [200] Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. Towards proving the adversarial robustness of deep neural networks. Formal Verification of Autonomous Vehicles (FVAV) Workshop, 2017.
  • [201] Florian Tramer, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations (ICLR), 2018.
  • [202] Gokula Krishnan Santhanam and Paulina Grnarova. Defending against adversarial attacks by leveraging an entire GAN. arXiv preprint arXiv:1805.10652, 2018.
  • [203] Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. In International Conference on Learning Representations (ICLR), 2018.
  • [204] Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo, and James Storer. Deflecting adversarial attacks with pixel deflection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8571–8580, 2018.
  • [205] Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. In International Conference on Learning Representations (ICLR), 2018.
  • [206] Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. Countering adversarial images using input transformations. In International Conference on Learning Representations (ICLR), 2018.
  • [207] Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. Defense against adversarial attacks using high-level representation guided denoiser. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1778–1787, 2018.
  • [208] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018.
  • [209] Jacob Buckman, Aurko Roy, Colin Raffel, and Ian Goodfellow. Thermometer encoding: One hot way to resist adversarial examples. In International Conference on Learning Representations (ICLR), 2018.
  • [210] Guneet S. Dhillon, Kamyar Azizzadenesheli, Jeremy D. Bernstein, Jean Kossaifi, Aran Khanna, Zachary C. Lipton, and Animashree Anandkumar. Stochastic activation pruning for robust adversarial defense. In International Conference on Learning Representations (ICLR), 2018.
  • [211] Francesco Croce, Maksym Andriushchenko, and Matthias Hein. Provable robustness of ReLU networks via maximization of linear regions. In Proceedings of Machine Learning Research, volume 89, pages 2057–2066. PMLR, 16–18 Apr 2019.
  • [212] Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, and Aleksander Madry. On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705, 2019.
  • [213] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484, 2016.
  • [214] Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, et al. Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225, 2017.
  • [215] Monika Grewal, Muktabh Mayank Srivastava, Pulkit Kumar, and Srikrishna Varadarajan. Radnet: Radiologist level accuracy using deep learning for hemorrhage detection in CT scans. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pages 281–284. IEEE, 2018.
  • [216] Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016.
  • [217] Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, and Marta Kwiatkowska. Global robustness evaluation of deep neural networks with provable guarantees for norm. arXiv preprint arXiv:1804.05805, 2018.
  • [218] Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, and Daniel Kroening. Concolic testing for deep neural networks. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, ASE 2018, pages 109–119, New York, NY, USA, 2018. ACM.
  • [219] Lei Ma, Felix Juefei-Xu, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Chunyang Chen, Ting Su, Li Li, Yang Liu, et al. Deepgauge: Multi-granularity testing criteria for deep learning systems. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 120–131. ACM, 2018.
  • [220] Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Formal security analysis of neural networks using symbolic intervals. In 27th USENIX Security Symposium (USENIX Security 18), pages 1599–1614, 2018.
  • [221] Augustus Odena, Catherine Olsson, David Andersen, and Ian Goodfellow. TensorFuzz: Debugging neural networks with coverage-guided fuzzing. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 4901–4911, Long Beach, California, USA, 09–15 Jun 2019. PMLR.
  • [222] Jianmin Guo, Yu Jiang, Yue Zhao, Quan Chen, and Jiaguang Sun. Dlfuzz: differential fuzzing testing of deep learning systems. In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 739–743. ACM, 2018.
  • [223] Yuchi Tian, Kexin Pei, Suman Jana, and Baishakhi Ray. Deeptest: Automated testing of deep-neural-network-driven autonomous cars. In Proceedings of the 40th International Conference on Software Engineering, ICSE ’18, pages 303–314, New York, NY, USA, 2018. ACM.
  • [224] Mengshi Zhang, Yuqun Zhang, Lingming Zhang, Cong Liu, and Sarfraz Khurshid. Deeproad: GAN-based metamorphic testing and input validation framework for autonomous driving systems. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 132–142. ACM, 2018.
  • [225] Zhi Quan Zhou and Liqun Sun. Metamorphic testing of driverless cars. Commun. ACM, 62(3):61–67, February 2019.
  • [226] Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. stat, 1050:11, 2018.
  • [227] Yasasa Abeysirigoonawardena, Florian Shkurti, and Gregory Dudek. Generating adversarial driving scenarios in high-fidelity simulators. International Conference on Robotics and Automation (ICRA), 2019.
  • [228] Hongyu Li, Hairong Wang, Luyang Liu, and Marco Gruteser. Automatic unusual driving event identification for dependable self-driving. In Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems, pages 15–27. ACM, 2018.
  • [229] Junyao Guo, Unmesh Kurup, and Mohak Shah. Is it safe to drive? an overview of factors, challenges, and datasets for driveability assessment in autonomous driving. arXiv preprint arXiv:1811.11277, 2018.
  • [230] Michal Uricar, David Hurych, Pavel Krizek, and Senthil Yogamani. Challenges in designing datasets and validation for autonomous driving. Accepted in 14th International Conference on Computer Vision Theory and Applications (VISAPP), 2019.
  • [231] Tommaso Dreossi, Somesh Jha, and Sanjit A Seshia. Semantic adversarial deep learning. In International Conference on Computer Aided Verification, pages 3–26. Springer, 2018.
  • [232] Hao Ye, Le Liang, Geoffrey Ye Li, JoonBeom Kim, Lu Lu, and May Wu. Machine learning for vehicular networks: Recent advances and application examples. IEEE Vehicular Technology Magazine, 13(2):94–101, 2018.
  • [233] Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407, 2014.
  • [234] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586–595, 2018.
  • [235] Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern. Predictive inequity in object detection. arXiv preprint arXiv:1902.11097, 2019.
  • [236] Daniel Lowd and Christopher Meek. Good word attacks on statistical spam filters. In CEAS, volume 2005, 2005.
  • [237] Nic Ford, Justin Gilmer, Nicolas Carlini, and Dogus Cubuk. Adversarial examples are a natural consequence of test error in noise. arXiv preprint arXiv:1901.10513, 2019.
  • [238] Ahmad EL Sallab, Mohammed Abdou, Etienne Perot, and Senthil Yogamani. Deep reinforcement learning framework for autonomous driving. Electronic Imaging, 2017(19):70–76, 2017.
  • [239] Vahid Behzadan and Arslan Munir. Vulnerability of deep reinforcement learning to policy induction attacks. In International Conference on Machine Learning and Data Mining in Pattern Recognition, pages 262–275. Springer, 2017.