A Data-driven, Falsification-based Model of Human Driver Behavior


Abstract

We propose a novel framework to differentiate between vehicle trajectories originating from human and non-human drivers by constructing a data-driven boundary using parametric signal temporal logic (STL). Such a construction allows us to evaluate trajectories, detect rare events, and reduce the uncertainty of driver behavior when it assumes the form of a disturbance in control synthesis and evaluation problems. We train a classifier that separates admissible (i.e., human) examples, which arise from real-world demonstrations, from inadmissible (i.e., non-human) examples generated by falsifying specifications synthesized from the same real-world driving data. Proceeding in this fashion allows us to find a reasonable boundary of the human behaviors exhibited in real-world driving records. The framework is demonstrated using a case study involving a human-driven vehicle approaching a signalized intersection.

I Introduction

The field of human driver research has received significant attention, in part due to its relevance to connected and automated vehicles (CAVs) and the attendant problems of path-planning and control synthesis. Consequently, there is a significant body of research on modeling human driver behavior that has leveraged different techniques, such as dynamic system modeling [13]; neural networks [12], [15]; stochastic processes [16]; and inverse reinforcement learning [18]. Much of the prior work in the field has focused on predicting likely actions based on inference from driver studies or real-world observations of human drivers. Unfortunately, “interesting” edge cases are rare events and may not be explicitly captured or reproduced in the aforementioned approaches. Therefore, we advocate a mapping as in [5], which leverages real-world driving data to construct a realistic set of trajectories that accommodates the reactive and uncertain nature of human drivers. Such a method can be extended to controller evaluation by sampling rare (and likely dangerous) events.

In contrast to the differential game setting of [5], we instead generate examples of non-human behaviors using falsification. The literature on cyber-physical system verification is substantial, and several mature toolboxes have been developed to address the falsification problem [1], [7]. Furthermore, recent literature has addressed the synthesis of precise specifications by considering template formulae and searching for the range of parameters for which these formulae are falsified [10], or satisfied [11], by a given system. Herein, parameter synthesis is used to precisely describe human driver behavior by studying real-world examples; falsifying these rules then generates possible non-human actions. The observed and generated examples are subsequently used in the construction of a classifier.

Such an approach has consequences for control synthesis and evaluation. Given a state-dependent description of human driver behavior, we can compute special sets of interest, such as initial conditions from which a human disturbance can initiate collisions. Such scenarios are instructive to test the robustness of a path-planner or a controller. There are philosophical similarities between this strategy and the work developed in [6].

Motivated by [15] and [14], we demonstrate the proposed framework on a case study of a human-driven vehicle (HV) approaching a signalized intersection. In this setting, the leading HV plays the role of a disturbance signal to the following controlled vehicle, which desires a safe, fuel-optimal policy. Our objective is to obtain a set-valued, state-dependent mapping that describes human actions using the aforementioned classification approach; such a mapping can be conceived as a driver model, which can be utilized for synthesizing a fuel-optimal safe controller as in [14]. In the absence of such a mapping, we may resort to a worst-case approach based only on the physical limitations of a given situation or one of the aforementioned probabilistic methods.

The remainder of this paper is structured in the following way: Section II gives an overview of the problem under study including a hybrid system formulation, summary of real-world driving data, and a preview of the solution approach. Section III summarizes the key mathematical tools that are leveraged to solve the problem. Section IV consolidates the methods and tools of Section III to detail the solution approach alluded to in Section II. Section V discusses results and practical considerations when using the methods of Section III. Section VI offers concluding remarks and plans for future work.

II Problem Formulation

The objective of this work is to systematically determine a set-valued, state-dependent bound on human driving behavior. Given potential applications of our framework, this approach considers only the longitudinal dynamics of a leading vehicle. Furthermore, we reason that while a driver is unlikely to modify his or her behavior based on the actions of a trailing vehicle, the behavior will be affected by other factors such as the state of a traffic light or the length of the vehicle queue already formed at an intersection [15]. We make the following assumptions on the motion of the HV: (1) The HV passes through the intersection, i.e. no left/right turning actions; (2) The HV does not change lanes; (3) The acceleration of the HV is determined only by a short history of the state of the traffic light and its own kinematics.

Formally, the problem can be stated as the construction of a mapping from a sequence of states to an admissible subset of the acceleration inputs of the HV at the next time instant:

II-A System Model

We represent the model of [15] as a hybrid system, illustrated in Figure 1; its relevant quantities are summarized in Table I. In this study, the signal light cycles through its colors on a fixed schedule reflecting the most frequent values in the signal phasing and timing data of the SPMD database (cf. Section II-B). Among the four continuous states, the estimated length of the queue formed at the intersection will be 0 during green and yellow lights and is estimated by a constant value during a red light.

Qty. Description Type Range
Distance to intersection Continuous state [0,
Velocity Continuous state [0,
Time since last change Continuous state [0,
Traffic queue at intersection Continuous state [0,
State of traffic light Discrete state
Acceleration Input []
TABLE I: Summary of important quantities of
Fig. 1: Hybrid automaton representation of

II-B Overview of Real-World Driving Data

The human driving data were collected from the Safety Pilot Model Deployment (SPMD), a large-scale connected vehicle study conducted in the Ann Arbor, MI area [3]. It contains records on the driving patterns and behaviors of 2,842 equipped vehicles in Michigan. For this project, 556 eastbound trajectories from three weeks in 2014 were extracted from the database and synchronized with V2X communication units installed at the Fuller-Bonisteel intersection (map available at: https://www.google.com/maps/@42.2873631,-83.7196829,19z).

II-C Overview of Proposed Method

The problem of constructing a state-dependent set of HV acceleration inputs is framed as finding a boundary between human and non-human driving behavior, and subsequently translated into a classification problem. SPMD provides examples of human driving traces, i.e. positive examples for the classification; on the other hand, generating negative training examples for the classification problem, i.e. driving traces that are “non-human”, is less straightforward.

The fundamental assumption of our framework is that HVs will satisfy certain specifications representing traffic rules, driving norms, etc. We attempt to capture these specifications using a set of Parametric Signal Temporal Logic (PSTL) formulae. A feasible parameter set for these PSTL formulae is synthesized from analysis of real-world (naturalistic) driving data; the boundary of the parameter set is used to convert PSTL formulae to STL formulae. Then falsifications of the STL formulae represent violations of the traffic rules that humans are assumed to satisfy. Consequently, such violations constitute negative training examples for the classifier. The mapping to human driver actions then corresponds to those actions for which the state-action tuple is classified as “human” behavior.

III Mathematical Preliminaries

Central to this approach is the construction of precise specifications that represent human driver behavior and subsequent classification of state-action tuples as “human” or “non-human”. In the following, we give a brief overview of the mathematical tools employed in this framework. More details can be found in [2] and [9]. The interplay between the tools used to achieve our objective will be described in more detail in Section IV.

III-A Parametric Signal Temporal Logic

PSTL builds upon STL, which considers real-valued predicates over a real-valued time domain [8]. STL specifications can be conceived as constraints on a signal, $s$, as it evolves over time. Such a constraint can be captured by inequalities, called predicates, of the form $\mu_\pi \equiv f(s) > \pi$, where $\pi \in \mathbb{R}$. The syntax for building specifications can be defined inductively as:

$$\varphi ::= \mu_\pi \;\mid\; \lnot \varphi \;\mid\; \varphi_1 \land \varphi_2 \;\mid\; \varphi_1 \, \mathcal{U}_{[\tau_1, \tau_2]} \, \varphi_2,$$

where the subscript of $\mu_\pi$ is used to emphasize the dependence of the predicate on the parameter $\pi$. The main distinction between STL formulae and PSTL formulae is that in the latter, some of the parameters, which include the scale parameter $\pi$ and the time parameters $\tau_1$ and $\tau_2$, are left unspecified. The semantics of (P)STL formulae are given as:

$$\begin{aligned} (s, t) \models \mu_\pi &\iff f(s(t)) > \pi \\ (s, t) \models \lnot \varphi &\iff (s, t) \not\models \varphi \\ (s, t) \models \varphi_1 \land \varphi_2 &\iff (s, t) \models \varphi_1 \text{ and } (s, t) \models \varphi_2 \\ (s, t) \models \varphi_1 \, \mathcal{U}_{[\tau_1, \tau_2]} \, \varphi_2 &\iff \exists\, t' \in [t + \tau_1, t + \tau_2] \text{ s.t. } (s, t') \models \varphi_2 \text{ and } \forall\, t'' \in [t, t'],\ (s, t'') \models \varphi_1 \end{aligned}$$

In the following, we also consider the derived temporal operators always ($\square$) and eventually ($\lozenge$):

$$\lozenge_{[\tau_1, \tau_2]} \varphi \equiv \top \, \mathcal{U}_{[\tau_1, \tau_2]} \, \varphi, \qquad \square_{[\tau_1, \tau_2]} \varphi \equiv \lnot \lozenge_{[\tau_1, \tau_2]} \lnot \varphi.$$

Due to the continuous nature of the (P)STL predicates, Boolean satisfaction can be refined to consider the degree to which a signal satisfies a specification. This is done using a robustness metric $\rho$, which induces the following quantitative semantics:

$$\begin{aligned} \rho(\mu_\pi, s, t) &= f(s(t)) - \pi \\ \rho(\lnot \varphi, s, t) &= -\rho(\varphi, s, t) \\ \rho(\varphi_1 \land \varphi_2, s, t) &= \min\big(\rho(\varphi_1, s, t), \rho(\varphi_2, s, t)\big) \\ \rho(\varphi_1 \, \mathcal{U}_{[\tau_1, \tau_2]} \, \varphi_2, s, t) &= \sup_{t' \in [t + \tau_1, t + \tau_2]} \min\Big(\rho(\varphi_2, s, t'),\, \inf_{t'' \in [t, t']} \rho(\varphi_1, s, t'')\Big) \end{aligned}$$

The positive (or negative) sign of $\rho$ captures Boolean satisfaction (or violation) of the specification, and its absolute value captures the robustness with which the signal satisfies (or violates) the specification.
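As an illustration of the quantitative semantics, the following sketch evaluates the robustness of a simple "always" formula over a discretely sampled speed trace. The function names and the discrete-time treatment are our own simplifications for exposition, not part of any toolbox:

```python
def rho_pred(signal, t, f, pi):
    """Robustness of the predicate f(x) > pi at time t: f(x_t) - pi."""
    return f(signal[t]) - pi

def rho_always(signal, interval, f, pi):
    """Robustness of always_[a,b] (f(x) > pi): the worst case over the interval."""
    a, b = interval
    return min(rho_pred(signal, t, f, pi) for t in range(a, b + 1))

# Example: a speed trace (m/s) checked against "always v <= 25.5",
# rewritten as the predicate 25.5 - v > 0.
v = [12.0, 14.5, 16.0, 15.2]
rho = rho_always(v, (0, 3), lambda x: 25.5 - x, 0.0)
# rho > 0 means the trace satisfies the specification with margin rho.
```

A positive result here corresponds to satisfaction with margin; flipping its sign would indicate a violation of the same magnitude.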

III-B Parameter Synthesis

The parameter synthesis problem is one of finding the set of parameters which results in tight satisfaction of a PSTL specification by a set of signals. In particular, we consider specifications where the robustness $\rho$ monotonically increases or decreases with a specific parameter. For such problems, parameter synthesis can be reduced to a generalized binary search [2], [11]. For this work, we use the methods developed and implemented in the Breach toolbox [7].
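For intuition, the generalized binary search over a single monotonic parameter can be sketched as follows; the helper names and the toy speed-limit template are illustrative assumptions, not Breach's actual interface:

```python
def tightest_parameter(traces, rho, lo, hi, tol=1e-3):
    """Binary search for the smallest parameter pi such that every trace
    satisfies the specification (rho >= 0), assuming rho is monotonically
    increasing in pi. `lo` must violate and `hi` must satisfy."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if all(rho(tr, mid) >= 0 for tr in traces):
            hi = mid   # mid already satisfies: tighten from above
        else:
            lo = mid   # mid is violated by some trace: move up
    return hi

# Toy speed-limit template "always v <= pi", whose robustness is pi - max(v).
traces = [[12.0, 18.3], [20.1, 25.4], [9.7, 14.0]]
rho = lambda tr, pi: pi - max(tr)
pi_star = tightest_parameter(traces, rho, 0.0, 40.0)
# pi_star approximates the largest speed observed in the data (25.4 m/s).
```

The returned value is the boundary of the feasible parameter set up to the tolerance; with several parameters, such one-dimensional searches are alternated as described in Section V-B.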

III-C Falsification

The falsification problem can be thought of as dual to that of parameter synthesis. The objective of this problem is to find an input signal which results in the violation of a given specification. In [11], this is formulated as an optimization problem on $\rho$ over input signals $u$:

$$\min_{u \in \mathcal{U}} \; \rho(\varphi, s_u, 0),$$

where $s_u$ denotes the trace produced by the system under input $u$.

A negative $\rho$ at the solution of this program indicates a specification violation, and the corresponding input-trace pair is referred to as a counterexample. In general, the falsification problem is undecidable, and the Breach toolbox may not find a counterexample even if one exists; this limitation is reflected in the generally non-linear structure of the optimization program.
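A minimal sketch of the falsification loop is given below. It uses naive random search over input signals as a stand-in for the stochastic optimizers in Breach, with a toy accumulator system and an "always x <= 2" specification as assumed examples:

```python
import random

def falsify(simulate, rho, sample_input, iters=2000, seed=0):
    """Random-search falsification (a naive stand-in for Breach's optimizers):
    search for an input signal whose simulated trace violates the
    specification, i.e. achieves rho < 0."""
    rng = random.Random(seed)
    best_u, best_rho = None, float("inf")
    for _ in range(iters):
        u = sample_input(rng)
        r = rho(simulate(u))
        if r < best_rho:
            best_u, best_rho = u, r
        if best_rho < 0:
            break  # counterexample found
    return best_u, best_rho

# Toy system x_{k+1} = x_k + u_k with x_0 = 0, against "always x <= 2".
def simulate(u):
    x, trace = 0.0, []
    for uk in u:
        x += uk
        trace.append(x)
    return trace

spec_rho = lambda trace: 2.0 - max(trace)
u_star, rho_star = falsify(simulate, spec_rho,
                           lambda rng: [rng.uniform(0.0, 1.0) for _ in range(6)])
```

As in the text, a negative final robustness certifies a violation, while failure to find one does not prove satisfaction.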

III-D Classification

The classification problem seeks to identify boundaries between distinct classes given labeled examples of valid class members. In this work, we operate on time series of the state and control input, and check for membership in the aforementioned classes either by treating a time series as a single flattened feature vector or by processing it sequentially.

(a) Feed-forward Neural Network Classifier
(b) Recurrent Neural Network Classifier
Fig. 2: Two classifiers are used to distinguish “human” traces from “non-human” traces

The former can be modeled using a feed-forward neural network (or multi-layer perceptron, MLP), and the latter using a recurrent neural network (RNN) (Figure 2). The MLP is a generic non-linear function approximator and is widely used for regression and classification; however, it is best equipped to handle only instantaneous snapshots of a time series. An RNN, on the other hand, is a class of artificial neural networks in which connections between nodes form a directed graph along a temporal sequence, which allows the network to capture the temporal dynamics of a time sequence. Unlike feed-forward neural networks, RNNs can use their internal hidden units to process sequences of inputs.

In this project, we demonstrate both methods of classification and compare the performance of the two classifiers. The output can be modeled either using a binary variable (where 0 indicates “non-human” and 1 indicates “human”) or using two distinct variables, $p_0$ and $p_1$, indicating the probability of the given trace being non-human and human, respectively. Note that $p_0 + p_1 = 1$. In this work, we will use whichever notion of the output is most convenient in the relevant discussion.

IV Solution Approach

As described in Section II, the goal is to construct a set of driver inputs given a finite history of the states and inputs,

(1)

In our solution, we will consider 3-second intervals consisting of tuples of the state and control input. Suppose the classifier takes the form

and maps arguments to the real numbers. In this setting, a positive value implies that the tuple is an example of human behavior and a negative value implies that the tuple is an example of non-human behavior. Hence, the boundary of human behavior is the set of tuples for which the classifier evaluates to zero:

(Note: this approach can be adapted for classifiers which produce an output in $[0, 1]$, i.e. expressing the probability of an input tuple belonging to either class, by finding the set of tuples for which the output is 0.5.)

Given the classifier, , the set-valued driver behavior mapping, , can be defined as where the lower limit, , is found from (1):

In practice, we seek a compact set to represent the range of inputs as in (1). Moreover, from the perspective of control synthesis treating the driver as a disturbance in our case study, we are interested in the lower limit of the human acceleration. Therefore, one method is to initiate a root-finding routine initialized at the lowest acceleration permissible by the road friction. These details are explored further in Section V.
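Such a root-finding routine can be sketched with a simple bisection on the classifier output; the classifier signature, acceleration limits, and tolerance below are hypothetical placeholders:

```python
def lower_accel_bound(classifier, history, a_min=-6.9, a_max=3.0, tol=0.05):
    """Bisection for the smallest acceleration the classifier labels "human",
    assuming classifier(history, a) > 0 means "human" and the label changes
    sign exactly once between a_min and a_max (limits hypothetical)."""
    if classifier(history, a_min) > 0:
        return a_min          # even the friction-limited brake looks human
    lo, hi = a_min, a_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if classifier(history, mid) > 0:
            hi = mid          # mid is classified human: tighten from above
        else:
            lo = mid          # mid is non-human: raise the lower limit
    return hi

# Toy classifier whose human/non-human boundary sits at -3.0 m/s^2.
boundary_classifier = lambda hist, a: a + 3.0
a_lb = lower_accel_bound(boundary_classifier, history=[])
```

Starting the search at the friction-limited acceleration mirrors the fallback described in the text: if even that extreme is classified as human, the physical limit is returned unchanged.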

The selection of negative training examples for classification requires actions which no human would undertake given the state. In order to generate such examples, we posit that humans generally satisfy a set of specifications; then violations of these specifications are candidates for negative training examples. In this study, we consider linear-time properties representing traffic rules. While not every violation of a traffic rule constitutes non-human behavior, we argue that violation of traffic rules is a necessary condition for non-human behavior. The basis for counterexample generation is then the falsification of specifications that human drivers satisfy. The problem of creating precise specifications for subsequent falsification is posed as one of parameter synthesis, where the template reflects some traffic rule. Note that this is a different flavor from the requirement mining methods of [11]: Jin et al. developed a framework to synthesize the requirements to which legacy controllers were designed, for subsequent analysis (possibly using formal methods). In contrast, in this work, the controller under study is the human driver itself, and the falsifier becomes a proxy for “non-human” behavior.

It is reasonable to ask whether the synthesized set of specifications itself can be used to define the boundary of human driver behavior. We argue that such a method may encounter the following issues: (1) the set of specifications would have to be “complete” in some sense, i.e. it should represent all the rules that human drivers follow; (2) violation of traffic rules is a necessary condition for identifying non-human behavior but not a sufficient one, so there may be examples of violating behavior that is valid human behavior. In brief, the authors are not aware of methods to quantify the quality of a set of specifications, but this may be possible with a classification approach, as described in Section VI.

The overall work-flow is described in the context of Figure 3.

Fig. 3: Description of solution approach. Traces of human driver behavior and PSTL representations of traffic rules feed the parameter synthesis block; its output, together with a dynamical model, feeds counterexample generation using falsification; the classification block then produces the driver behavior model.

Red boxes represent inputs; these are traces of human driver behavior, specification templates representing driver behavior in the form of PSTL formulae, and a dynamical model in Simulink with an interface to Breach. The traces and PSTL formulae are inputs to the parameter synthesis problem, whose output is a set of feasible parameters for a given specification. The feasible parameter set and specifications are considered in the falsification problem, wherein we seek a control signal that violates the specification; consequently, the falsifier is a proxy for a non-human driver. These negative training examples are combined with the positive training examples used for parameter synthesis in the classification block, wherein we seek the aforementioned classifier. Finally, the driver behavior mapping is obtained using the querying process described above.

The first two blue blocks, corresponding to parameter synthesis and falsification, are treated separately from the last blue block, corresponding to classification. Some iteration between the two, i.e. sampling traces belonging to the “human driver” class and including them in the set of positive examples for subsequent classifier construction, may be considered in future work; this is discussed in more detail in Section VI. A counterexample generation strategy corresponding to the first two blue blocks is described in Algorithm 1. The reason for controlling the initial condition for subsequent falsification is to obtain good coverage and diversity in the falsifying traces. Note that we applied slight modifications to this routine; for clarity, these were omitted in the presentation of Algorithm 1, and details are discussed in Section V.

Data: Traces of driver behavior: ; State space discretization: ; PSTL formulae: ; Dynamical model:
Result: Falsifying traces: such that
1 ; ;
2 forall  do
       // Parameter synthesis [2]
3      
4       forall  do
             // Falsification [11]
5             forall  do
6                  
7                  
8             end forall
9            
10       end forall
11      
12 end forall
Algorithm 1 Counterexample Generation
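The structure of Algorithm 1 can be summarized in the following sketch, in which all of the callables (synthesize, sample_frontier, falsify) are placeholders for the Breach routines of Sections III-B and III-C, not real implementations:

```python
def generate_counterexamples(traces, templates, cells, synthesize,
                             sample_frontier, falsify):
    """Sketch of Algorithm 1: for every PSTL template, synthesize the validity
    frontier from the human traces, then attempt falsification from each
    initial-condition cell for each sampled frontier point."""
    counterexamples = []
    for phi in templates:                         # PSTL templates such as (2)-(5)
        frontier = synthesize(phi, traces)        # parameter synthesis [2]
        for params in sample_frontier(frontier):
            for x0 in cells:                      # state-space discretization
                trace = falsify(phi, params, x0)  # falsification [11]
                if trace is not None:             # rho < 0: a "non-human" trace
                    counterexamples.append(trace)
    return counterexamples

# Stub example: falsification "succeeds" only from the first cell.
stub_synthesize = lambda phi, traces: "frontier"
stub_sample = lambda frontier: [{"pi": 1.0}, {"pi": 2.0}]
stub_falsify = lambda phi, params, x0: (phi, params["pi"], x0) if x0 == 0 else None
ces = generate_counterexamples([], ["phi_red"], [0, 1],
                               stub_synthesize, stub_sample, stub_falsify)
```

Iterating over initial-condition cells mirrors the coverage argument in the text: each cell seeds the falsifier from a different region of the state space.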

V Results and Discussion

V-A Driver Behavior as PSTL Formulae

Following the framework of Section IV, we construct a collection of specifications in PSTL to represent traffic rules or common sense driving norms. A simple example of this is a specification on speed limit: “never exceed speed limit”. While this technically represents a traffic rule, a higher priority traffic norm is to travel with the flow of traffic; therefore, human drivers typically exceed the posted speed limit by some margin. Consequently, we synthesize a PSTL specification based on the traffic rule and parameterize the true speed limit to accommodate following traffic norms:

(2)

Aside from rules such as (2), we investigate how driver behavior varies with the state of the traffic light. Essentially, this translates into modeling the driver as a switched system in which we seek to learn the behavior rules for each traffic light state. Herein, we formulate a PSTL formula for each traffic light state based on a basic traffic rule activated by that particular state. In (2) and in subsequent specifications, the time horizon parameter is the length of the human driver trajectory.

At a green light, a vehicle should move fast enough to avoid blocking traffic:

(3)

Intuition: If the traffic light has been green “for some time”, and one is “sufficiently far” from the intersection, then one should “not drive too slowly”. All expressions in quotation marks are represented as parameters in the PSTL formula.

At a yellow light, vehicles may decide to pass or stop:

(4)

Intuition: Based on the vehicle speed and distance to the intersection, if one “recognizes” a yellow light, then one must decide to pass or stop. In reality, the decision is determined by whether the driver perceives the distance to the intersection to be larger than her accepted/anticipated stopping distance at the current speed; if so, the driver will stop.

At a red light, a vehicle should never cross the intersection:

(5)

Intuition: If the traffic light has been red “for some time” and one is “close” to the intersection, then one should “drive slowly”.

V-B Parameter Synthesis Results

The parameter synthesis module of Breach was applied to find the feasibility domain for (2), (3), (4), and (5). In order to exploit Breach’s binary-search solver for monotonic specifications, we implement an alternation scheme for PSTL formulae with multiple parameters; we found the results to be consistent regardless of the order of alternation.

For the speed limit specification (2), parameter synthesis found the feasible parameter set to be all speeds less than 25.5 m/s. Observe that this is about 9.9 m/s over the posted speed limit of 15.6 m/s (35 mph).

For the remaining specifications, we found a multi-dimensional Pareto frontier to represent the boundary of the feasible parameter set as in [11], [2]. These frontiers are illustrated in Figure 4.

(a) Green light specification: As the distance to the intersection increases, i.e. the HV is farther from the intersection, the lower bound on velocity also increases; therefore, if the HV is far away from the intersection, it should drive fast. Furthermore, as the lower bound on the time elapsed since the light turned green increases, the velocity bound also increases; therefore, after the traffic light has been green for some time, through traffic should not move too slowly.
(b) Yellow light specification: If the HV is near the intersection and traveling at a high speed when the traffic light turns yellow, and it has had enough time to register this change, then in the following three seconds the HV will try to pass, i.e. its speed will never drop below a high value. On the other hand, when the vehicle is far away and traveling slowly, it will decelerate in anticipation of the impending red light. Interestingly, we can observe the transition of the velocity bound from a high to a low value as a function of the distance to the intersection and the speed at the time the HV registered the light change. These results are fairly intuitive.
(c) Red light specification: As the distance to the intersection decreases, the upper bound on velocity decreases, meaning vehicles tend to slow down when close to the intersection; this trend is similar across all considered parameter values.
Fig. 4: Validity frontiers for traffic light specifications

V-C Falsification Results

The falsification routine as described in Section III-C requires a parameter set, ; hence we sample parameters from the Pareto frontier in Figure 4 in order to ensure we obtain good coverage and diversity in the falsifying traces.

The falsification problem requires classes of input signals, which are used together with a dynamical model to create falsifying traces. In this work, we consider piecewise constant signals with segments of duration 0.5 seconds; during each constant segment, the input can assume a value within a bounded acceleration set. The simulation horizon is three seconds, and thus the input signal contains six control points.
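The parameterization of such input signals can be sketched as follows, assuming a 10 Hz sample grid; the function name, its arguments, and the example values are illustrative:

```python
def piecewise_constant(values, seg_len=0.5, dt=0.1):
    """Expand per-segment control values into a sampled signal: each value is
    held constant for seg_len seconds on a dt-spaced grid (0.5 s segments and
    a 3 s horizon give the six control points used here)."""
    per_seg = round(seg_len / dt)
    return [v for v in values for _ in range(per_seg)]

# Six control points (the acceleration values are arbitrary, for illustration).
u = piecewise_constant([-1.0, -1.0, 0.0, 0.5, 0.5, 0.0])
# 6 segments * 5 samples each: 30 samples covering 3 s at 10 Hz.
```

The optimizer then searches over the six per-segment values rather than over the full 30-sample signal, which keeps the falsification problem low-dimensional.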

Figure 5(a) illustrates a falsifying trajectory of the red light specification.

Here, the HV simply maintains its speed when approaching the intersection, and thus violates the requirement that the HV lower its speed as it approaches the intersection. Figure 5(b) shows the robust satisfaction, along the trace, of the sub-formula within the “always” operator.

Observe that the violation given in Figure 5 is almost trivial. To avoid generating only such trivially falsifying traces, we use the following strategy: (1) use the CMA-ES solver instead of the Nelder-Mead method; (2) apply a difference-metric criterion to select diverse falsifying traces; (3) accept traces with sub-optimal robustness violation.

Experimental results showed that the CMA-ES solver produced more diverse falsifying input signals than the simplex-based Nelder-Mead method [17]. The difference criterion involved checking that the Euclidean distance between two candidate falsifying inputs was sufficiently large, so as to avoid repetition of the same signals. Finally, accepting sub-optimal robustness violations allowed for generating counterexamples closer to the expected boundary between “human” and “non-human” behavior.
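The difference-metric criterion can be sketched as a simple Euclidean-distance filter over previously accepted falsifying inputs; the threshold value below is a hypothetical placeholder:

```python
import math

def is_diverse(candidate, accepted, min_dist=1.0):
    """Difference-metric check: accept a candidate falsifying input only if
    its Euclidean distance to every previously accepted input exceeds
    min_dist (threshold hypothetical)."""
    dist = lambda u, v: math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return all(dist(candidate, u) > min_dist for u in accepted)

accepted = [[0.0] * 6]
near_duplicate = is_diverse([0.1] * 6, accepted)    # too close: rejected
sufficiently_new = is_diverse([2.0] * 6, accepted)  # far enough: accepted
```

Each newly accepted counterexample is appended to the accepted set, so the filter becomes stricter as the collection grows.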

Using this strategy together with the falsification method described previously, we found 170 falsifying traces for (2); 7,068 falsifying traces for (3); 21,784 falsifying traces for (4); and 2,926 falsifying traces for (5). Note that additional falsifying traces can be generated by increasing the maximum iterations allowed for the solver.

(a) A falsifying trace of the red light specification
(b) Robust satisfaction along the trace
Fig. 5: An example of falsification of the STL formula

V-D Classification Results

In our initial treatment of the classification problem, we construct individual classifiers for each state of the traffic light to address the hybrid nature of this system. Furthermore, we take measures to make the classification task more computationally tractable by sub-sampling the training examples and omitting the queue length. Eliminating the queue length from this initial analysis is justified for the green traffic light state because the queue length is always zero; additionally, our specification for the red traffic light does not incorporate the queue length, and consequently it does not factor into deciding whether a trace falsifies or satisfies the specification. Sub-sampling the traces from 10 Hz to 2 Hz reduces the trace sizes by a factor of five. The number of sub-sampled traces (both positive and negative) is 11,000 for the green light and 6,000 for the red light. An overview of the individual classifiers is given in the following:

  • Green traffic light classification: negative examples are traces found to violate (3). For the MLP classifier, the input is a flattened sequence of the features, and the input to the RNN classifier is the feature sequence itself.

  • Red traffic light classification: negative examples are traces found to violate (5); any state not appearing in the specification is likewise omitted from this analysis, and the remaining states form the feature set. Inputs to the MLP and RNN classifiers take the same form as those of the green-light classifier.

Herein, we have only constructed classifiers for the green and red traffic lights, leaving treatment of the yellow light as future work for the following reasons: the duration of the yellow light is short, and transitions between green/yellow and yellow/red are important and can indeed take place during the considered horizon; however, the negative training examples are currently generated with a constant traffic light state, so the transitions themselves are not well captured. Furthermore, there are very few positive training examples for a yellow traffic light.

The MLP is modeled with a dense layer of 28 hidden units, ReLU activations, and a soft-max function at the end of the network. The RNN is modeled with a recurrent layer containing 36 hidden units, ReLU activations, and a soft-max function at the end of the network. We used categorical cross-entropy as the loss function and the Adam optimizer.
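To make the classifier structure concrete, the following sketch implements the forward pass of such an MLP (one ReLU hidden layer followed by a two-way soft-max) in plain Python. The weights are random placeholders, not trained values; the sketch illustrates the output convention that the two probabilities sum to 1, not the trained network itself:

```python
import math
import random

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass with the classifier shape described above: one dense
    hidden layer with ReLU activations, then a two-way soft-max output."""
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    z = [sum(w * hi for w, hi in zip(row, h)) + b for row, b in zip(W2, b2)]
    m = max(z)                       # shift for numerical stability
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]      # [p_nonhuman, p_human], summing to 1

# Random placeholder weights: 8 input features, 28 hidden units, 2 outputs.
rng = random.Random(0)
n_in, n_hid = 8, 28
W1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [[rng.uniform(-1, 1) for _ in range(n_hid)] for _ in range(2)]
b2 = [0.0, 0.0]
p = mlp_forward([0.1] * n_in, W1, b1, W2, b2)
```

In practice the weights would be fit with categorical cross-entropy and Adam as stated above; the RNN variant differs only in that the hidden state is updated sequentially over the trace.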

Table II summarizes the (converged) accuracy of the two classifiers when tested on a test set.

             Green light      Red light
             MLP    RNN       MLP    RNN
Accuracy (%) 99.4   99.8      99.7   99.9
TABLE II: Comparison of MLP and RNN accuracy (%) for different traffic light states

We speculate that one reason for the extremely high accuracy is that the classification was too easy or trivial for much of the data. This indicates that perhaps the falsified trajectories were too far away from the true boundary between the “human” and “non-human” classes. Possible rectifications to this issue are addressed in Section VI.

(a)
(b)
Fig. 6: Examples of the resulting bound on HV acceleration

We examine the generated bound on HV acceleration for some cases where the classifier was effective at reducing uncertainty. Next, we briefly describe the querying process for computing the set of “human” accelerations given a classifier. Given a vector of states and inputs over the horizon, we sweep the next input across the entire range of acceleration; each candidate acceleration is used to complete a vector, and the completed vectors are passed into the classifier. Finally, “human” inputs are defined to be those for which the classifier output is positive.
In Figures 6(a) and 6(b), we plot two 3-second traces extracted from the naturalistic human driving data. For each trace, the lower bound on the next acceleration input is estimated using the querying routine above. The yellow circle marks the estimated bound based on the physical acceleration limit of the HV, which is constant across cases; the red circle marks the estimated lower bound of acceleration from the classification method, while the green circle shows the actual acceleration undertaken by the HV. The proposed method yields a tighter acceleration bound than the physical limit in both cases while remaining below the actual acceleration, i.e. being conservative. In case (a), the HV is already moving at a low speed, so it is intuitive that it will not suddenly brake fully; in case (b), the HV is accelerating from a low speed, so it is unlikely to suddenly brake at the next instant either. However, we also observed that the bound from the proposed method is still very conservative, as the indicated deceleration can already be perceived as a hard brake; moreover, the method does not always give a tighter bound than the physical limit.
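The querying routine described above can be sketched as a grid sweep over candidate accelerations; the classifier signature, acceleration limits, and step size below are hypothetical placeholders:

```python
def human_input_set(classifier, history, a_min=-6.9, a_max=3.0, step=0.1):
    """Grid version of the querying routine: sweep candidate accelerations,
    complete the history with each candidate, and keep the candidates the
    classifier labels "human" (positive output). Limits are hypothetical."""
    n = int(round((a_max - a_min) / step)) + 1
    grid = [a_min + i * step for i in range(n)]
    return [a for a in grid if classifier(history + [a]) > 0]

# Toy classifier: an acceleration is "human" whenever it exceeds -3.0 m/s^2.
toy_classifier = lambda seq: seq[-1] + 3.0
admissible = human_input_set(toy_classifier, history=[0.0] * 5)
```

The minimum of the returned set then plays the role of the red-circle lower bound in Figure 6, while the grid endpoints correspond to the physical limits.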

VI Conclusion and Future Work

In this work, we proposed a framework to construct a data-driven bound on human driver behavior that allows for verifying whether a given trajectory originates from a human driver. Our results and contributions are summarized below:

  • Generation of data-driven bounds on HV acceleration. From the perspective of control synthesis, the benefit of tighter bounds on human action is less uncertainty about the disturbance.

  • Synthesis of reasonable specifications for HV behavior.

  • Generation of “non-human behavior” as falsifying traces of STL formulae.

  • Construction of classifiers to distinguish between human and non-human driving traces.

This work is a first step in using falsification-based generation of negative training examples. Consequently, many avenues should be explored to improve the performance of the proposed framework. In particular, the classifier gave useful results for some traces, but failed to restrict the bound on human acceleration for many others. This is likely due to the high dimensionality of the problem, since our approach leverages information over a time horizon. Furthermore, only a subset of the 556 trajectories was considered for a given specification; consequently, the training set may be insufficient. Possible future approaches thus include seeking a larger data set or shortening the time horizon to reduce the problem dimensionality.

Additionally, negative training examples were generated by considering piecewise constant input signals, which often featured large jumps between consecutive constant segments. However, HV accelerations do not feature such excursions. Consequently, it is possible that many of the generated negative training examples were very far from the positive training examples in the feature space of the classification problem. Therefore, a well-performing classifier may indeed place the boundary very close to the negative training examples and, as a result, deem many actions that intuitively appear non-human as human. A potential remedy to be considered in future work is to attempt falsification using a class of smooth input signals, again accepting traces with sub-optimal robustness violations. If the resulting negative training examples are closer to the positive training examples in the feature space, then we can expect a well-performing classifier to be more discerning between human and non-human behavior.

To address the risk of potentially over-fitting to the observed data, the iterative method introduced briefly in Section IV may be of value. The core idea is to follow the procedure of Section IV to generate a nominal classifier, and then sample this classifier near its boundary points to augment the set of positive training examples before repeating the procedure. By augmenting the positive training set, we speculate that the parameter synthesis method will find a larger feasible parameter domain and consequently “push out” the classifier towards more negative training examples.

In addition to reducing uncertainty by determining tighter bounds on human action, it is also important to have a notion of uncertainty quantification. A classifier based upon convex programming principles can offer this quality through an upper limit on the probability of a new observation violating the constructed input bound [5], [4]. However, the approach of [5] considered stationary points as opposed to the time series considered here, which adds additional complexity.
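One form of the guarantee in [4] bounds the expected violation probability of a sampled convex program by d/(N+1), where d is the number of decision variables and N the number of i.i.d. samples. Assuming that form of the bound applies, a short sketch inverts it to estimate the sample size needed for a target expected violation level (the dimension and target below are hypothetical):

```python
import math

def samples_needed(d, target):
    """Smallest N with d / (N + 1) <= target, under the expected-violation
    bound E[V] <= d / (N + 1) for sampled convex programs (assumed form)."""
    return math.ceil(d / target) - 1

# e.g. trace count needed for a 10-dimensional classifier at a 1% level
n = samples_needed(d=10, target=0.01)
```

This suggests that driving the expected violation probability down scales linearly with the feature dimension, reinforcing the case for shorter horizons or larger data sets.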

Finally, revisiting the motivation of this work, we believe our framework can be applied to the synthesis of safe and optimal controllers and to the identification of corner cases for controller evaluation. The critical ingredient in achieving this objective will be computing reachable sets using the state-dependent disturbance bounds induced by our approach.

Acknowledgment

The authors would like to thank Prof. Necmiye Ozay and Dr. Alex Donzé for their valuable insights and instructive conversations.

References

  1. Y. S. R. Annapureddy, C. Liu, G. Fainekos and S. Sankaranarayanan (2011) S-TaLiRo: a tool for temporal logic falsification for hybrid systems. Proc. of Tools and Algorithms for the Construction and Analysis of Systems (TACAS).
  2. E. Asarin, A. Donzé, O. Maler and D. Nickovic (2011) Parametric identification of temporal properties. Runtime Verification, Lecture Notes in Computer Science, pp. 147–160.
  3. D. Bezzina and J. R. Sayer (2015) Safety Pilot: Model Deployment Test Conductor Team Report. Technical report, June 2015.
  4. G. C. Calafiore (2009) On the expected probability of constraint violation in sampled convex programs. Journal of Optimization Theory and Applications 143 (2), pp. 405–412.
  5. Y. Chen, N. Sohani and H. Peng (2018) Modelling of uncertain reactive human driving behavior: a classification approach. 57th IEEE Conference on Decision and Control.
  6. G. Chou, Y. E. Sahin, L. Yang, K. J. Rutledge, P. Nilsson and N. Ozay (2018) Using control synthesis to generate corner cases: a case study on autonomous driving. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 37 (11).
  7. A. Donzé (2010) Breach, a toolbox for verification and parameter synthesis of hybrid systems. Proc. of Computer Aided Verification (CAV).
  8. A. Donzé (2014) On signal temporal logic.
  9. F. Fages and A. Rizk (2009) From model-checking to temporal logic constraint solving. Principles and Practice of Constraint Programming - CP 2009, Lecture Notes in Computer Science, pp. 319–334.
  10. B. Hoxha, A. Dokhanchi and G. Fainekos (2017) Mining parametric temporal logic properties in model-based design for cyber-physical systems. International Journal on Software Tools for Technology Transfer 20 (1), pp. 79–93.
  11. X. Jin, A. Donzé, J. V. Deshmukh and S. A. Seshia (2015) Mining requirements from closed-loop control models. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 34 (11), pp. 1704–1717.
  12. A. Khodayari, A. Ghaffari, R. Kazemi and R. Braunstingl (2012) A modified car-following model based on a neural network model of the human driver effects. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 42 (6), pp. 1440–1449.
  13. C. C. Macadam (2003) Understanding and modeling the human driver. Vehicle System Dynamics 40 (1-3), pp. 101–134.
  14. G. Oh and H. Peng (2018) Eco-driving at signalized intersections: what is possible in the real world? The 21st IEEE International Conference on Intelligent Transportation Systems.
  15. G. Oh and H. Peng (2019) Longitudinal trajectory forecasting of human-driven vehicles near traffic lights using vehicle communications. arXiv preprint arXiv:1906.00486.
  16. A. Pentland and A. Liu (1999) Modeling and prediction of human behavior. Neural Computation 11 (1), pp. 229–242.
  17. W. H. Press (2007) Numerical Recipes 3rd Edition: The Art of Scientific Computing. Cambridge University Press.
  18. D. Sadigh, S. Sastry, S. A. Seshia and A. D. Dragan (2016) Planning for autonomous cars that leverage effects on human actions. Robotics: Science and Systems.