Counterexample-Guided Synthesis of Perception Models and Control

Shromona Ghosh, Hadi Ravanbakhsh, and Sanjit A. Seshia
Department of Electrical Engineering and Computer Science
University of California, Berkeley, USA
Equal contribution. This work is supported in part by NSF grants CPS-1545126 (VeHICaL) and CCF-1837132, by the DARPA Assured Autonomy grant, and by Berkeley Deep Drive.
Abstract

We consider the problem of synthesizing safe and robust controllers for real-world robotic systems, such as autonomous vehicles, that rely on complex perception modules. We propose a counterexample-guided synthesis framework that iteratively learns perception models which enable finding safe control policies. We use counterexamples to extract information relevant for modeling the errors in perception modules. Such models can then be used to synthesize controllers robust to errors in perception. If the resulting policy is not safe, we gather new counterexamples. By repeating the process, we eventually find a controller that can keep the system safe even in the presence of perception failures. Finally, we show that our framework computes robust controllers for autonomous vehicles in two different simulated scenarios: (i) lane keeping, and (ii) automatic braking.

I Introduction

Recent advances in perception algorithms have enabled robotics and control research to focus on the development of complicated systems, such as autonomous vehicles and surgical robots, which critically rely on perception modules. However, for such safety-critical applications, we need to ensure safety of the closed-loop system. While machine learning (ML)-based perception systems have been shown to be effective on average, their use in such systems can be dangerous and cause unexpected results. Hence, it is important to design controllers that are robust to errors produced by perception modules. In this work, we focus on autonomous systems (such as autonomous vehicles, AVs) that rely on ML-based perception modules to sense and understand the environment.

Standard approaches for designing controllers for autonomous vehicles decompose the design problem into (i) designing the perception system, and (ii) controller synthesis. In theory, by imposing assumptions and guarantees on each component, we can design them independently. Hence, perception design is usually studied in isolation, with a focus on improving local robustness [11, 10, 9, 22]. However, the proliferation of literature on adversarial attacks (see, e.g., [7]) and on verification of ML-based cyber-physical systems (e.g., [2, 5]) shows that state-of-the-art ML-based perception systems are still prone to errors. Therefore, the control component of the system must be designed to compensate for perception errors and keep the autonomous system safe. In this work, we tackle this problem using a counterexample-guided inductive synthesis technique which iteratively learns a model of the perception modules and uses that model for control synthesis.

Our framework focuses on synthesizing simple models of perception modules which can be used for control design. In this work, we use simulation environments (such as Webots™, see Fig. 1) to study the autonomous agent in a variety of environments. We use the simulator to generate data from which models of the perception modules are learned. These models are then employed to synthesize controllers which are verified to be robust with respect to errors in perception, using state-of-the-art simulation-based verification techniques.

Fig. 1: Webots ™ for design and analysis of AVs.
Fig. 2: Overview of our framework.

Our framework is shown in Fig. 2. The challenge is to find robust controllers without an explicit white-box analysis of the perception modules, which is expensive at best and, due to the lack of a formal specification for perception, impossible at worst. Our approach overcomes this by using inductive synthesis to infer a perception model in a counterexample-guided loop. We begin with an arbitrary controller. The verifier then uses simulation-based verification to execute the controller in different environments until it finds environments where the behavior is unsafe; these are marked as counterexamples. The counterexamples are sent to the synthesizer, which uses them to generate a perception model and then designs a new candidate controller against that model. This process repeats until a controller is found that is verifiably robust to perception errors, or terminates when no such controller can be found.

Various options are possible for the perception model, including ML models such as neural networks. We take a simpler approach, where a perception model maps each output generated by the actual perception component to a set of possible true values that could correspond to that output. This perception model is inferred from data using unsupervised techniques such as clustering. Our approach stands in contrast with approaches such as counterexample-guided data augmentation [4], which directly seek to improve the accuracy of perception modules. Instead of improving perception modules in isolation, we use counterexamples to directly improve controller robustness. Our experience is that this approach improves overall system safety while using a small amount of data.

We build upon existing work on automatically finding counterexamples through simulation-driven falsification (e.g., [3]). Using counterexamples, the framework identifies a diverse set of behaviors and systematically refines the perception model using those behaviors. Finally, it uses the perception model for control synthesis, yielding a fully automated synthesis procedure.

To summarize, our key contributions are:

  • a novel counterexample-guided method to synthesize controllers robust to perception errors;

  • data-driven inference of simple models of complex perception modules, including ML-based perception; and

  • two case studies from the domain of autonomous driving: (i) lane-keeping with a classical vision-based perception module, and (ii) automatic braking with a neural network-based perception module, demonstrating that our framework is general enough to handle both ML-based and non ML-based perception modules.

II Problem Statement

We are interested in synthesizing closed-loop controllers, robust to perception errors, for autonomous agents in simulation. The autonomous agent (the ego) interacts with the external environment and is driven by the controller. The closed-loop system therefore comprises (1) the simulation environment, which consists of the ego agent (also known as the plant) and the external environment in which it operates, and (2) the controller. The controller does not have direct access to the simulator state; instead, it relies on a perception module to extract the information from which it computes the control input. In our work, the controller is described by a finite set of parameters. While our framework is applicable to non-deterministic simulators, for simplicity we consider deterministic simulators here. A schematic view of the closed-loop system is shown in Fig. 2(a). In the following paragraphs, we formally define each component.

Definition 1 (Simulator)

A simulator is a tuple consisting of a transition function, an output (perception) function, and a set of possible initial states.

Remark 1

The output function has two parts. The first is the renderer, which, given a state of the simulator, produces the associated sensor readings, e.g., an image or a point cloud. These sensor readings are sent to the perception modules, which provide the state estimate required by the controller. Here, we consider the composition of the renderer and the perception modules as the perception function.

Running Example: Consider an Autonomous Vehicle (AV) maintaining its lane in the simulator. The AV's perception unit has (a) a camera mounted at the front of the AV, used to estimate its position with respect to the center of the lane it is following, and (b) a compass along with an imprecise GPS, which estimate the orientation of the AV w.r.t. the road. The estimates are sent to a feedback controller which tries to minimize the distance between the car and the lane center by steering the AV. In this setting, the state of the simulation includes (i) the state of the AV, i.e., its position, orientation, and speed, and (ii) potentially time-varying environment parameters such as the time of day and the AV's target lane on the map. Given a state of the simulator, the simulator renders the image seen by the AV's camera. The perception unit processes this image to estimate the distance to the lane center; similarly, GPS and compass readings are used to measure the AV's relative orientation. This processed information is fed to the feedback controller, which computes the steering command.
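To make the interface concrete, the sketch below shows one plausible way the simulator of Definition 1 could be exposed to the rest of the framework in Python; the class and method names are our own and are not part of Webots or the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Simulator:
    """Black-box simulator (Definition 1): a transition function, an output
    (perception) function, and a sampler over the set of initial states."""
    step: callable         # (state, control) -> next state
    perceive: callable     # state -> perception output seen by the controller
    sample_init: callable  # () -> an initial state from the initial set

    def rollout(self, controller, horizon):
        """Run the closed loop for a finite horizon and return the trace."""
        state = self.sample_init()
        trace = []
        for _ in range(horizon):
            y = self.perceive(state)   # the controller only sees y, not state
            u = controller(y)
            trace.append((state, y, u))
            state = self.step(state, u)
        trace.append((state, self.perceive(state), None))
        return trace
```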

Definition 2 (Controller)

A controller, or control policy, is a deterministic function mapping the perception output (together with the controller parameters) to a control input.

Trajectories, or traces, of the closed-loop system are obtained by starting from an initial state and, at each time step, computing the perception output, applying the controller to obtain the control input, and advancing the state with the transition function.

In this work, we focus on finite-horizon safety properties. Formally, given a time-varying safe set, a safety specification simply requires that the state remain in the safe set at every time step up to a finite horizon. Ultimately, we would like the controller to safely control the ego agent in all environments. The set of initial states can be used to encode all environment assumptions along with the initial state of the plant.
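As a concrete reading of this requirement, the small helper below checks a trace against a time-varying safe set; the predicate `safe_set(t, state)` is an assumption of this sketch, not part of the paper's formalism.

```python
def is_safe(trace, safe_set, horizon=None):
    """Finite-horizon safety: every state along the trace (up to the horizon)
    must lie in the time-varying safe set."""
    steps = trace if horizon is None else trace[:horizon + 1]
    return all(safe_set(t, state) for t, (state, *_rest) in enumerate(steps))
```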

Running Example: In our lane-keeping example, we wish to restrict all scenarios to occur in the morning. Such an environment assumption can be enforced through a suitable choice of the initial set, since the simulator state includes the time of day.

We assume we have access to the internal state of the simulator and that we can initialize the environment in the simulator so as to enforce the environment assumptions. Moreover, we can modify the control policy by changing its control parameters. We refer to a control policy whose parameters are not yet fixed as a parameterized policy.

Definition 3 (Control Synthesis)

Given a simulator, a specification, and a parameterized policy, find parameter values such that all traces of the closed-loop system are safe.

Model: Synthesizing a controller directly for the closed-loop system can be quite challenging, so, to manage the complexity of the simulator, we wish to have a simple model (abstraction) of it. Abstraction of dynamical systems has been studied rigorously for correct-by-construction control design [21]. Abstraction of learning-enabled components such as neural networks has been studied, but not for perception units [16, 1]. In this work, we wish to come up with simple models of perception units to simplify the control design process. We emphasize that we are not interested in general-purpose models, but in models tailored specifically for control synthesis.

The model state includes the relevant information required for synthesizing a controller. We assume the model state can be efficiently computed from a simulator state, i.e., there exists a transfer function that maps every simulator state to its corresponding model state. As for the simulator, we define a transition relation which models the dynamics and an output (perception) relation which captures imperfect perception.

Definition 4 (Model)

A model is a tuple consisting of a transition relation, a perception relation, and an initial set of model states.

(a) simulator in the loop
(b) model in the loop
Fig. 3: Schematic view of closed-loop systems.

Similar to traces of the simulator, a trace of the closed-loop system for a model (Fig. 2(b)) is defined analogously, except that it contains model states instead of simulator states: the perception relation defines the possible values of the output for a given model state, and the transition relation defines the possible values of the next model state. Moreover, for a finite-horizon safety specification over the simulator state, we define a corresponding specification over the model state by projecting the safe set onto the model state space.

Definition 5 (Control Synthesis for Model)

Given a model, a specification, and a parameterized policy, find parameter values such that all traces of the closed-loop system formed by the model and the policy are safe.

Goal of Modeling: The ultimate goal is to design a controller for the model while providing formal guarantees for the simulator. To achieve this, the model must be able to generate all behaviors that could be generated by the simulator [21]. Then, a solution to the control synthesis problem for the model is also a solution to the original problem.

Now, we elaborate on the differences between the simulator state, the perception output, and the model state. The simulator state is the true state of the system, and the perception output represents the measured information used for decision making at run time. While the simulator state and the perception output are given as part of the problem, the model state defines the level of abstraction we wish to use in the design. Success in the control synthesis phase depends on this level of abstraction: the problem becomes infeasible (has no solution) if we use too coarse an abstraction, whereas a very fine abstraction increases complexity and makes the control synthesis problem harder to solve. Intuitively, we should rely on a model that captures only the factors essential for the control design process. Moreover, the model state may contain not only information related to the dynamics, but also information needed to model the perception units (e.g., the weather).

Running Example: Returning to the lane-keeping example, a typical model for the AV captures (a) the relative orientation of the AV w.r.t. the target lane, (b) the deviation from the center of the target lane, and (c) the speed of the AV. Here, instead of the absolute position stored in the simulator state, the model stores only the relative position, and other information in the simulator state, such as the target lane on the map, is ignored. We also note that a more complicated model could include environmental features such as the time of day.

In this work, we assume the domain of the model state, along with the transfer function and the initial set of model states, is given by an expert. To complete the model, we need to define the transition relation and the perception relation. As we focus on modeling perception units in this work, for simplicity we assume that a transition relation that can mimic the simulator dynamics is provided by an expert via a system identification process. This ensures that every transition produced by the simulator can also be produced by the model. Hence, the only missing part of the model is the perception relation.

III Perception Model and Control Synthesis

In order to complete the model, we need to find a perception relation, which is a challenging problem if we want to ensure the model captures all possible behaviors of the simulator. We take a learning-based approach to building models of the simulation environment. While this procedure resembles model identification [12], there are important differences. Data-driven methods typically provide formal guarantees only under assumptions on the data distribution (e.g., Gaussian processes [17] or piecewise affine approximations [19]) or when the dataset becomes large enough (e.g., PAC learning [14]). In contrast, we exploit the fact that we have access to a simulator, i.e., an oracle providing data. This allows us to employ simulation-based verification to search for counterexamples, which can be used to augment the dataset.

It is worth mentioning that the simulator may be significantly complicated or may not be available in closed form. We only need a mathematical representation of the simulator if the verifier used to test the system relies on one; otherwise, we treat it as a black-box function, which allows a larger set of simulators to be used within our framework.

The dataset is simply a set of traces of the simulator. The idea is to learn a model such that, when a trace in the dataset is mapped into a model trace, it can also be generated by the model. Formally, given the dataset, the goal is to learn a perception relation such that

for every state-output pair observed along a trace in the dataset, the perceived output lies in the set of outputs that the perception relation assigns to the corresponding model state. (1)

This constraint allows us to argue that, as the dataset grows and covers more behaviors, the model improves, and under some mild assumptions the learned model is, in the limit, completely capable of mimicking the simulator. However, it is not practical to gather a huge amount of data covering all possible behaviors. We argue that we instead need carefully selected data points, where the selection criterion is determined during the control design process. Inspired by recent work on counterexample-guided data augmentation [4], the key idea is to aggregate data iteratively in a counterexample-guided loop. At each iteration, we determine what type of data is needed to improve the model. This iterative data collection allows us to learn a model which is carefully crafted for control synthesis.

Now, we sketch the overall inductive learning approach. The procedure is iterative, and at each iteration we update the model by gathering more data. We start with an initial model. Then, at each iteration, the following steps are performed:

  1. Can we synthesize a controller using the current model?

    1. If yes, fix the synthesized parameters so that all model traces satisfy the specification.

    2. Otherwise, declare failure.

  2. Do all simulator traces under the synthesized controller satisfy the specification?

    1. If yes, done.

    2. Otherwise, add the counterexamples to the dataset.

  3. Learn a new model from the (augmented) dataset.

Fig. 4: Inductive learning of models for control design.

Starting with an initial model, at each iteration we design a controller using the model. If the model can generate all behaviors of the simulator, then having a safe controller for the model guarantees that all traces of the simulator satisfy safety. Therefore, we check whether the controller generates acceptable behavior for the original system. If not, we conclude that the model needs improvement. In that case, the unsafe behaviors provide new data, and we learn a new model (with the current model as the prior) that can mimic the whole dataset. As the dataset grows in each iteration, the model improves until it is able to capture all relevant behaviors of the simulator. The overall procedure is depicted in Fig. 4.
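The loop of Fig. 4 can be rendered as the following sketch, where `synthesize`, `falsify`, and `learn_model` stand in for the three components instantiated in Section IV; the names and interfaces are ours, not the authors' implementation.

```python
def cegis_perception_loop(simulator, initial_model, synthesize, falsify,
                          learn_model, max_iters=20):
    """Counterexample-guided loop of Fig. 4: synthesize a controller for the
    current model, try to falsify it on the simulator, and refine the model
    with any counterexamples found."""
    model, dataset = initial_model, []
    for _ in range(max_iters):
        controller = synthesize(model)                    # step 1: synthesis
        if controller is None:
            return None, model                            # declare failure
        counterexamples = falsify(simulator, controller)  # step 2: verify
        if not counterexamples:
            return controller, model                      # verifiably robust
        dataset += counterexamples                        # step 3: refine
        model = learn_model(dataset, prior=model)
    return None, model
```

Note that the loop only terminates successfully once the verifier fails to find counterexamples, so the model is refined exactly where it matters for safety.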

We argue that this iterative process helps us find good models specifically designed for the synthesis problem. Compared with a naive approach in which a huge amount of data is generated randomly to learn a relatively accurate model, our solution has several benefits. First, the data is generated adversarially by a verifier, which allows us to gather critical data points that may not occur in a statistically generated dataset. Second, in practice our approach does not generate huge amounts of data, since data is generated iteratively and only when it is needed. Third, instead of directly testing the learned model, we test the overall system. This has two benefits: (i) we can stop the process even if the model is not accurate, as long as the system-level requirement is met; and (ii) we generate only data points at which an inaccurate model could lead to unsafe behaviors, so the model becomes more accurate only in regions where safety is relevant.

IV An Instantiation of the Framework

Our framework has three main components: (a) a control synthesizer, (b) a system verifier, and (c) a model generator. In this section, we develop an instance of our framework and describe how each component is implemented.

We start with an initial model , and in each iteration, we first solve a control synthesis problem using the model.

Control Synthesis

Any control synthesis routine could be integrated into our framework; the complexity and completeness of the synthesis procedure depend on the controller structure and the perception model. Aiming for generality, we use a variant of gradient-based methods with random initialization to search for parameters which yield a policy robust to the uncertainties in the model. This synthesizer follows CEGIS [20]: it iteratively synthesizes a candidate policy and tests it against the model.
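As one illustration of such a synthesizer, the sketch below performs a derivative-free random-restart local search (a simplification of the gradient-based variant used in the paper) for parameters whose worst-case robustness over sampled model behaviors is positive; `model_rollout` and `robustness` are assumed helpers.

```python
import numpy as np

def synthesize_params(model_rollout, robustness, dim, n_restarts=10,
                      n_steps=200, step_size=0.1, n_model_samples=32, seed=0):
    """Search for controller parameters robust to the uncertainty in the
    perception model.  `model_rollout(theta, rng)` simulates the model in
    closed loop with parameters `theta`, sampling from the perception
    relation; `robustness(trace)` is positive iff the trace is safe."""
    rng = np.random.default_rng(seed)

    def worst_case(theta):
        # Estimate worst-case robustness over sampled model behaviors.
        return min(robustness(model_rollout(theta, rng))
                   for _ in range(n_model_samples))

    best_theta, best_val = None, -np.inf
    for _ in range(n_restarts):
        theta = rng.normal(size=dim)
        val = worst_case(theta)
        for _ in range(n_steps):                  # simple local search
            cand = theta + step_size * rng.normal(size=dim)
            cand_val = worst_case(cand)
            if cand_val > val:
                theta, val = cand, cand_val
        if val > best_val:
            best_theta, best_val = theta, val
    return best_theta if best_val > 0 else None   # None: declare failure
```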

System Verifier

Once a policy is obtained, we need to verify that it is also safe for the simulator. For this purpose, we use a falsification procedure to find counterexamples. As active sampling-based methods have been shown to be effective at falsifying black-box closed-loop systems [6], we employ VerifAI, a recently developed toolkit for the analysis of AI-based systems, to implement this sub-procedure [3]. More specifically, we use Bayesian optimization for each safety property to find counterexamples. If the verifier cannot find a counterexample for the synthesized controller within a fixed number of iterations, we declare success; otherwise, we use the counterexamples to refine (improve) our model.
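A minimal stand-in for this verifier is sketched below; it uses plain random sampling over environment parameters, whereas the paper drives the search with Bayesian optimization via VerifAI, and the helper names (`simulate`, `robustness`, `sample_env`) are ours.

```python
import numpy as np

def falsify(simulate, robustness, sample_env, budget=100, seed=0):
    """Minimal stand-in for the system verifier: sample environments, run the
    closed loop in the simulator, and collect traces that violate the safety
    specification (robustness <= 0)."""
    rng = np.random.default_rng(seed)
    counterexamples = []
    for _ in range(budget):
        env = sample_env(rng)          # environment / initial-state parameters
        trace = simulate(env)          # closed-loop rollout in the simulator
        if robustness(trace) <= 0:     # unsafe behavior found
            counterexamples.append(trace)
    return counterexamples
```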

Model Generator

We learn a new model from the dataset of counterexamples such that Eq. (1) is satisfied. In each iteration, we use the newly generated counterexamples to improve the model. In the rest of this section, we describe the model generation procedure we propose.

As mentioned, transition relations are typically derived from the laws of physics together with simple models of uncertainty [12]. We argue that such an approach is not suitable for modeling systems with perception modules. In particular, perception components are hard to decompose into smaller systems and reason over compositionally. Moreover, rather than following the laws of physics, perception modules are generally artifacts of machine learning or of complex vision-based algorithms.

A natural way to define the perception relation is to start from an initial guess and add error terms. We model perception errors with state-dependent error functions: the perceived value of each state component is its true value plus an error that depends on the model state. Intuitively, the error in measuring one component does not directly depend on the other measurements, but only on the state.

Next, we describe an unsupervised learning technique for modeling the perception relation. Our approach is to cluster the datapoints and learn a local model for each cluster. Recall that we wish to use counterexamples to learn the model. These counterexamples may appear ad hoc because (i) errors in learning-enabled components are themselves ad hoc, and (ii) counterexamples reveal different types of errors in the closed-loop system. Clustering helps to better model errors in regions where perception may not be accurate. In fact, clustering methods have been used for the analysis of perception components in [15], but not in the context of control design.

Fig. 5: Modeling perception relation with clustering.

Initially, datapoints are extracted from the counterexample traces and projected into the model state space. Next, we use standard clustering algorithms such as KMeans or Gaussian mixture models to partition the dataset into clusters [18]. For simplicity, we define the domain of each cluster to be the smallest box containing its datapoints, and the cluster's error range captures the perception errors observed within it. We then use a linear model for the lower and upper bounds of the error over each cluster, and solve for the coefficients of these bounds with linear programming so that every datapoint in the cluster satisfies them.

Notice that clustering is performed on the datapoints, but the domain of each cluster is defined over the model state space. As depicted in Fig. 5, a given model state can belong to multiple clusters. To obtain a model, we define the perception relation as follows: if a state does not belong to any cluster, we assume the error is zero; otherwise, we take the union of the error ranges of all clusters whose domain contains the state, which guarantees that Eq. (1) holds for the dataset.
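The following sketch illustrates this model generator with scikit-learn's KMeans; for brevity it records constant per-cluster error ranges rather than the linear, LP-fit bounds described above, and it assumes the perception error is the difference between measured and true values.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_perception_model(states, outputs, n_clusters=4, seed=0):
    """Cluster counterexample datapoints and record, per cluster, a box over
    the model state space and a range of perception errors."""
    errors = outputs - states            # assumed: error = measured - true
    feats = np.hstack([states, errors])  # cluster in the joint (state, error) space
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(feats)

    clusters = []
    for k in range(n_clusters):
        xs, es = states[labels == k], errors[labels == k]
        clusters.append({"box": (xs.min(axis=0), xs.max(axis=0)),     # domain
                         "range": (es.min(axis=0), es.max(axis=0))})  # error range

    def error_range(x):
        """Union of error ranges of all clusters whose box contains x;
        zero error if x lies in no cluster, as in the text."""
        lo, hi, hit = np.zeros_like(x, dtype=float), np.zeros_like(x, dtype=float), False
        for c in clusters:
            box_lo, box_hi = c["box"]
            if np.all(x >= box_lo) and np.all(x <= box_hi):
                lo = np.minimum(lo, c["range"][0]) if hit else c["range"][0]
                hi = np.maximum(hi, c["range"][1]) if hit else c["range"][1]
                hit = True
        return lo, hi

    return error_range
```

The returned `error_range` is then what the control synthesizer samples from when it simulates the model in closed loop.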

V Experiments

For our experiments, we consider two case studies of AVs with faulty perception units. While we use Webots™ [13], our technique can be used with many other simulators.

V-A Case Study I – Lane Keeping

In this case study, we consider our running example: lane keeping on straight roads. The AV can accurately estimate its orientation using a compass, and the orientation of the road using an HD map and an imprecise GPS. However, to estimate its deviation from the lane center, it relies on a camera. The image taken by the camera is processed to detect lane boundaries [8], and a regression model is trained to estimate the deviation from the detected boundaries. This deviation estimate is not always reliable, as it involves image processing and ML.

The steering policy is simply a linear feedback law w.r.t. the relative orientation and the deviation. For the requirements, we assume the initial deviation and relative orientation lie in given ranges, and the initial speed lies in a given range. The goal is to keep the deviation within a specified range and bring it close to zero within a fixed number of seconds. Recall that the model state includes (a) the deviation, (b) the relative orientation, and (c) the speed, and the perception output is the measured state. Since we assume the measurements of speed and orientation are accurate, we only model the error in the measured deviation, defined over the deviation, the relative orientation, and the speed. However, the speed of the AV does not affect the deviation measurement, as we rely on a single image to estimate it; thus, for simplicity, we model the perception relation as a map from the deviation and the relative orientation to a range of possible measured deviations.
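Written out, the steering law is just a proportional feedback on the two perceived quantities; the gain names and sign convention below are our own.

```python
def steering_controller(k_psi, k_d):
    """Linear steering feedback on the perceived relative orientation
    (psi_hat) and the perceived deviation from the lane center (d_hat).
    k_psi and k_d are the parameters tuned by the synthesizer."""
    def controller(psi_hat, d_hat):
        return -(k_psi * psi_hat + k_d * d_hat)  # steering command
    return controller
```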

Starting from a model with perfect perception, the control synthesis procedure yields an initial set of feedback gains, and many uniformly random simulations confirm that the property holds. However, VerifAI finds a set of corner-case counterexamples. These counterexamples are then used to improve the perception model, and the process continues. After a couple of iterations, the procedure terminates successfully. The set of data points extracted from counterexamples is shown in Fig. 6. The figure shows the error in the measured deviation as a function of the deviation and the relative orientation. As depicted, when the deviation and the relative orientation are close to the origin, the error is small, while elsewhere the error becomes unpredictably large.

Fig. 6: Data extracted from counterexamples for lane-keeping. Points with the same color belong to the same cluster.

Using the data extracted from only the counterexample traces, we find new feedback gains that yield a robust system, for which even VerifAI cannot find counterexamples. Intuitively, because the error in measuring the deviation is relatively large compared to the error in measuring the orientation, the feedback law is safe only when the gain on the deviation is relatively smaller than the gain on the orientation.

V-B Case Study II – Automatic Braking System

We consider scenarios in which the AV detects construction cones on the road and brakes. More precisely, in these scenarios two lanes are blocked by a broken-down car, and cones are used to warn drivers. The AV uses a camera and a trained neural network to detect cones. While the simulation environment can vary in many ways, including the color of the broken-down car, its orientation, and the speed of the traffic (all environment vehicles have the same constant speed), the model state only includes the distance to the cones and the speed of the AV.

A straightforward solution for designing a controller is to brake as soon as possible to guarantee safety. While this is the best strategy when the distance to the cones is small, it reduces passenger comfort and increases the chance of an accident when there is a car behind the AV. To avoid such behavior, we consider another car that moves behind the AV at the same speed, has full knowledge about the cones, and brakes in an optimal way. We then require the AV to stop while avoiding a crash with the car behind it.

Fig. 7: Automatic braking scenario.

We wish to design a braking control system that uses the estimated distance to the cones. Given the distance and the speed of the AV, one could use the laws of physics to find the minimum braking force needed. In the perception unit, the neural network not only detects cones but also provides a bounding box around each detected cone. We use the size of these bounding boxes to estimate the distance. However, this measurement is not reliable, especially when the distance is large. To design a safe controller, we consider the following policy: the controller trusts the measured distance only when it is below a threshold, and in that case it applies the optimal braking force computed from the measured distance and speed. When the measured distance is above the threshold, the controller simply reduces the speed to a fixed lower value after detecting cones and ignores the distance estimate in the feedback calculation. Fig. 8 shows a qualitative trace of the system.

Fig. 8: Automatic braking policy.
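For concreteness, the policy described above can be sketched as follows; the parameter names, the constant-deceleration formula, and the mild fixed deceleration are illustrative assumptions, not the values synthesized in the paper.

```python
def braking_policy(d_trust, v_slow, decel_mild=1.0):
    """Braking policy from the text: trust the distance estimate d_hat only
    when it is at most d_trust, and then brake just hard enough to stop
    before the cones (constant-deceleration formula v**2 / (2*d)); otherwise,
    once cones are detected, apply a mild fixed deceleration until the speed
    drops to v_slow."""
    def controller(d_hat, v_hat, cone_detected):
        if not cone_detected:
            return 0.0                                    # no braking
        if d_hat <= d_trust:
            return v_hat ** 2 / (2.0 * max(d_hat, 1e-3))  # stop before cones
        return decel_mild if v_hat > v_slow else 0.0      # ignore d_hat
    return controller
```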

Recall that the model state contains only the distance to the cones and the speed, and the output includes their estimated values. We assume the speed measurement is accurate, and since the distance estimate is independent of the speed, the perception relation only maps the true distance to a range of possible estimated distances. We also set the estimated distance to infinity if no cone is detected. Finding parameters for the policy is not a trivial task, as these parameters heavily depend on the measurement errors and the dynamics of the agents. Initially, assuming perfect perception, there are many solutions, and the synthesizer picks an initial pair of parameter values. However, VerifAI finds a few counterexamples. In these counterexamples, the distance is measured to be much smaller than the actual distance, and the AV reduces its speed quickly, crashing into the car behind it. Using the counterexamples, we update the model. In the next iteration, only a few counterexamples are found, and in all of those cases the color of the broken-down car is close to the color of the cones. This suggests that the perception unit behaves differently in these cases, causing the AV to brake early and collide with the car behind it. Again, by updating the model, the policy synthesizer finds new parameters; in effect, the resulting policy trusts the distance estimate only when it is sufficiently small. This strategy allows the AV to stop safely while measuring the distance using the neural network. The final model generated using counterexamples is shown in Fig. 9. Notice that when the distance is large, the estimate can be infinity (no cone detected).

Fig. 9: Modeling NN-based perception. Inf means the cone could not be detected. Top: clustered data obtained from counterexamples. Bottom: the learned perception relation.

VI Conclusion

In this work, we investigated the problem of control synthesis for closed-loop systems with faulty perception components. Our method iteratively learns models of the perception components and then synthesizes controllers for those models. At each iteration, the method puts the designed controller under test, either proving its safety or finding counterexamples which are used to improve the perception model. We demonstrated the effectiveness of our method by designing safe controllers for autonomous vehicles that use ML-based perception components.

In the future, we wish to work on other instantiations of our framework. In particular, we wish to investigate other methods for modeling perception components and for synthesizing controllers for those models. Moreover, co-design of the control and perception components is another direction worth exploring.

References

  • [1] Sumanth Dathathri, Sicun Gao, and Richard M Murray. Inverse abstraction of neural networks using symbolic interpolation. 2019.
  • [2] Tommaso Dreossi, Alexandre Donzé, and Sanjit A. Seshia. Compositional falsification of cyber-physical systems with machine learning components. In NASA Formal Methods - 9th International Symposium, NFM, 2017.
  • [3] Tommaso Dreossi, Daniel J Fremont, Shromona Ghosh, Edward Kim, Hadi Ravanbakhsh, Marcell Vazquez-Chanlatte, and Sanjit A Seshia. Verifai: A toolkit for the design and analysis of artificial intelligence-based systems. In Proceedings of the 31st Conference on Computer Aided Verification, CAV 2019. Springer, 2019.
  • [4] Tommaso Dreossi, Shromona Ghosh, Xiangyu Yue, Kurt Keutzer, Alberto Sangiovanni-Vincentelli, and Sanjit A Seshia. Counterexample-guided data augmentation. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 2071–2078. AAAI Press, 2018.
  • [5] Tommaso Dreossi, Somesh Jha, and Sanjit A. Seshia. Semantic adversarial deep learning. In 30th International Conference on Computer Aided Verification (CAV), 2018.
  • [6] S. Ghosh, F. Berkenkamp, G. Ranade, S. Qadeer, and A. Kapoor. Verifying controllers against adversarial examples with bayesian optimization. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 7306–7313, May 2018.
  • [7] Ian J. Goodfellow, Patrick D. McDaniel, and Nicolas Papernot. Making machine learning robust against adversarial inputs. 2018.
  • [8] Filippo Grazioli. Advanced lane detection. https://bakeaselfdrivingcar.blogspot.com/2017/11/project-3-advanced-lane-detection.html. Accessed Sep 14, 2019.
  • [9] Jonghoon Jin, Aysegul Dundar, and Eugenio Culurciello. Robust convolutional neural networks under adversarial noise. arXiv preprint arXiv:1511.06306, 2015.
  • [10] Jihun Kim and Minho Lee. Robust lane detection based on convolutional neural network and random sample consensus. In Chu Kiong Loo, Keem Siah Yap, Kok Wai Wong, Andrew Teoh, and Kaizhu Huang, editors, Neural Information Processing, pages 454–461, Cham, 2014. Springer International Publishing.
  • [11] Hanxi Li, Yi Li, and Fatih Porikli. Robust online visual tracking with a single convolutional neural network. In Daniel Cremers, Ian Reid, Hideo Saito, and Ming-Hsuan Yang, editors, Computer Vision – ACCV 2014, pages 194–209, Cham, 2015. Springer International Publishing.
  • [12] Lennart Ljung. System identification. Wiley Encyclopedia of Electrical and Electronics Engineering, pages 1–19, 1999.
  • [13] Olivier Michel. Cyberbotics ltd. webots™: professional mobile robot simulation. International Journal of Advanced Robotic Systems, 1(1):5, 2004.
  • [14] Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of machine learning. MIT press, 2018.
  • [15] Corina S. Păsăreanu, Divya Gopinath, and Huafeng Yu. Compositional Verification for Autonomous Systems with Deep Learning Components, pages 187–197. Springer International Publishing, Cham, 2019.
  • [16] Luca Pulina and Armando Tacchella. An abstraction-refinement approach to verification of artificial neural networks. In International Conference on Computer Aided Verification, pages 243–257. Springer, 2010.
  • [17] Carl Edward Rasmussen. Gaussian Processes in Machine Learning, pages 63–71. Springer Berlin Heidelberg, Berlin, Heidelberg, 2004.
  • [18] Christian Robert. Machine learning, a probabilistic perspective, 2014.
  • [19] Sadra Sadraddini and Calin Belta. Formal guarantees in data-driven model identification and control synthesis. In Proceedings of the 21st International Conference on Hybrid Systems: Computation and Control (Part of CPS Week), HSCC ’18, pages 147–156, 2018.
  • [20] Armando Solar-Lezama, Liviu Tancau, Rastislav Bodik, Sanjit Seshia, and Vijay Saraswat. Combinatorial sketching for finite programs. ACM Sigplan Notices, 41(11):404–415, 2006.
  • [21] Paulo Tabuada. Verification and control of hybrid systems: a symbolic approach. Springer Science & Business Media, 2009.
  • [22] Eric Wong and J Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv preprint arXiv:1711.00851, 2017.