Self-Awareness In Intelligent Vehicles: Experience based Abnormality Detection

Abstract

The evolution of Intelligent Transportation Systems in recent times necessitates the development of self-driving agents with a form of self-awareness. This paper introduces a novel method to detect abnormalities based on internal cross-correlation parameters of the vehicle. Before the adoption of Machine Learning, abnormality detection was manually programmed by checking every variable and creating huge nested conditions that are very difficult to track. Nowadays, it is possible to train a Dynamic Bayesian Network (DBN) model to automatically evaluate and detect when the vehicle is potentially misbehaving. In this paper, different scenarios have been set up in order to train and test a switching DBN for a Perimeter Monitoring Task, using semantic segmentation for the DBN model and the Hellinger distance metric for abnormality measurements.

Keywords: Autonomous vehicles, Intelligent Transportation System (ITS), Dynamic Bayesian Network (DBN), Hellinger distance, Abnormality detection.

1 Introduction

The self-awareness field is vast in terms of detecting abnormalities in the field of Intelligent Vehicles [22]. It is possible to classify critical, medium, or minor abnormalities by defining the line between normal and abnormal behaviour with the help of top-level design architectures. The problem for self-awareness systems is to measure every sensor, every piece of acquired data, and the behaviour of the system at every moment, comparing each measurement with its nominal range. Due to the huge amount of data, these tasks are not easy and typically become dead ends in big projects where re-usability is not possible. Furthermore, the system may be unaware of situations where the vehicle is not working within normal ranges for a very short period of time. Self-awareness management can be divided into three main categories: hardware, software, and behaviour. The first category is based on the detection of malfunctions in electronic devices, actuators, sensors, CPUs, communication, etc. The second category focuses on software requirements, where the most important measurements for message delivery are time, load, bottlenecks, delays, and heartbeat, among others. Finally, the last self-awareness field analyzes the behaviour of the vehicle, which is related to the performance of the task assigned at each moment, such as lane keeping, lane change, intersection management, roundabout management, overtaking, and stopping. Accordingly, the management of self-awareness is a cross-layer problem where every manager should be built on top of the other layers to create a coherent self-awareness system [20].

To reduce the processing effort in intelligent self-awareness systems, emerging techniques in Machine Learning allow the creation of models using Dynamic Bayesian Networks (DBNs) to automate this process [14]. The novelty of this paper is the use of DBN models to capture the cross-correlation between pairs of internal features of the vehicle, using the Hellinger distance metric for abnormality detection. Finally, the performance of different DBN models is compared in order to select the best model for abnormality detection.

The remainder of this paper is organized as follows. Section 2 presents a survey of the related work. Section 3 describes the proposed method, defining the principles exploited in the training phase and the steps involved in the test phase for detecting abnormalities. Section 4 summarizes the experimental setup in addition to the description of the research platform used. Section 5 gathers the results (i.e., abnormality measurements) from the pair-based DBNs for each vehicle and compares them, and finally, Section 6 concludes the paper.

2 State of the art

This section describes some of the related work regarding the development of self-awareness in agents. In [4], the authors propose an approach to develop a multilevel self-awareness model learned from the multisensory data of an agent. Such a learned model allows the agent to detect abnormal situations in the surrounding environment. In another work [19], the self-awareness model for autonomous vehicles is learned from data collected during human driving. On the other hand, in [13], the authors propose a new architecture for mobile robots with a model for risk assessment and decision making when detecting difficulties in the autonomous robot design. In [21], the authors propose a model of driving behaviour awareness (DBA) that can infer driving behaviours such as lane changes. All the above works either use data from a single entity or have a limited objective; for example, in [21] the objective was only to detect a lane change to the left or right of the considered vehicles. In this work, we consider data from real vehicles, develop pair-based switching DBN models for each vehicle, and finally compare the performance of the different DBNs learned.

3 Proposed method

This section discusses how to build “intelligence” and “awareness” into vehicles to obtain “self-aware intelligent vehicles.” The first step is to synchronize the acquired multisensory data in time by matching their time stamps. The data sets collected for training and testing are heterogeneous, and two vehicles are involved in the considered scenarios. The observed multisensory data from the vehicles are partitioned into different groups to learn a switching DBN model for each pair-based vehicle feature; the performances are then compared to determine the best pair-based feature for automatic detection of abnormalities. Switching DBNs are probabilistic models that integrate observations received from multiple sensors in order to understand the environment where the vehicles operate and to take appropriate actions in different situations. The proposed method is divided into two phases: offline training and online testing. In the offline training phase, we learn DBNs from the experiences of the vehicles during their normal behaviour. In the online testing phase, we use dynamic data series collected while the vehicles pass through experiences different from those in the training phase. A filter called the Markov Jump Particle Filter (MJPF) is then applied to the learned DBN models to estimate the future states of the vehicles and finally to detect abnormal situations present in the environment.

3.1 Offline training phase

In this phase, we learn switching DBNs from the datasets collected from the experiences of the vehicles during their normal behaviour. The various steps involved in learning a DBN model are described below.

Generalized states

The intelligent vehicles used in this work are equipped with one lidar with 16 layers and a 360-degree field of view (FOV), a stereo camera, and encoder devices to monitor the different tasks being performed. In this work, it is assumed that each vehicle is aware of the other vehicle through its communication scheme and cooperation skills. By considering vehicles endowed with a certain number of sensors that monitor their activity, it is possible to define $Z_t^{(i)}$ as any combination (indexed by $i$) of the available measurements at a time instant $t$. Let $X_t^{(i)}$ be the states related to the measurements $Z_t^{(i)}$, such that $Z_t^{(i)} = X_t^{(i)} + \omega_t$, where $\omega_t$ represents the sensor noise. The generalized states of a sensory data combination $i$ can be defined as:

$$\tilde{X}_t^{(i)} = \big[\, X_t^{(i)} \ \ \dot{X}_t^{(i)} \ \ \cdots \ \ X_t^{(i),(K)} \,\big]^\top \qquad (1)$$

where $X_t^{(i),(k)}$ indicates the $k$-th time derivative of the state.
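As an illustration of equation (1), a generalized state can be approximated from a short window of noisy measurements by finite differences. This is only a sketch: the paper does not specify the derivative-estimation scheme, and `generalized_state` is a hypothetical helper.

```python
import numpy as np

def generalized_state(z_history, dt, order=1):
    """Build a generalized state [x, x_dot, ...] from a window of
    measurements by successive finite differences (assumed scheme)."""
    z = np.asarray(z_history, dtype=float)
    state = [z[-1]]                  # 0th-order term: latest measurement
    for _ in range(order):
        z = np.diff(z, axis=0) / dt  # next time derivative of the window
        state.append(z[-1])
    return np.concatenate([np.atleast_1d(s) for s in state])
```

For positions `[0.0, 0.1, 0.2]` sampled at `dt=0.1`, this yields the state `[0.2, 1.0]` (position and estimated velocity).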

Vocabulary generation and state transition matrix calculation

In order to learn the discrete level of the DBN (i.e., the orange outlined box in Fig. 1), it is required to map the generalized states into a set of nodes. We use a clustering algorithm called Growing Neural Gas (GNG) to group these generalized states and obtain the nodes. In GNG, learning is continuous, and the addition or removal of nodes is automated [8]. These are the main reasons to choose the GNG algorithm over other clustering algorithms such as K-means [9] or the self-organizing map (SOM) [11]. The output of each GNG consists of a set of nodes that encode constant behaviours for a specific sensory data combination and time derivative order. In other words, at each time instant, each GNG takes the data related to a single time derivative and clusters it with the same time derivative data acquired at previous time instants. The nodes associated with each GNG can be seen as a set of letters containing the main behaviours of the generalized states. The collection of nodes encoding the $k$-th order derivatives in an observed data combination $i$ is defined as follows:

$$P^{(i),(k)} = \big\{ P_1^{(i),(k)}, \ldots, P_{N_{i,k}}^{(i),(k)} \big\} \qquad (2)$$

where $N_{i,k}$ is the number of nodes in the GNG associated with the $k$-th order derivative of the data in data combination $i$. Each node in a GNG represents the centroid of its associated cluster of data points. By taking into consideration all the possible combinations of centroids obtained from the GNGs, we can get a set of words that define generalized states in an entirely semantic way. The obtained words form a dictionary, which can be defined as:

$$D^{(i)} = P^{(i),(0)} \times P^{(i),(1)} \times \cdots \times P^{(i),(K)} \qquad (3)$$

where $|D^{(i)}| = \prod_{k=0}^{K} N_{i,k}$. $D^{(i)}$ contains all possible combinations of discrete generalized states and acts as a high-level hierarchy variable that explains the system states from a semantic viewpoint. In this work, we have only considered the states and their first-order derivatives.
Furthermore, we estimate the state transition matrices based on the timely evolution of such letters and words. The state transition matrix contains the transition probabilities between the discrete-level nodes of the switching DBN shown in Fig. 1. When the first data emission occurs, the state transition matrix provides the probability of the next discrete level, i.e., the probability of activation of a node from the GNG associated with the first-order derivatives of the states. The vocabulary (i.e., letters, words, and dictionary) and the transition matrices constitute the discrete level of the DBN model.
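The state transition matrix described above can be estimated from the time series of discrete labels (letters or words) produced by the clustering step. A minimal sketch, assuming the labels are already available as integer indices:

```python
import numpy as np

def transition_matrix(labels, n_states):
    """Estimate the discrete-level transition matrix from a sequence
    of cluster labels, as row-normalized transition counts."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        T[a, b] += 1.0                       # count transition a -> b
    row_sums = T.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0            # leave unseen states as zero rows
    return T / row_sums
```

For the label sequence `[0, 0, 1, 1, 0]` with two states, each row of the estimated matrix is `[0.5, 0.5]`.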

DBN model Learning

All the previous steps constitute the step-by-step learning process of the switching DBNs by each entity taken into consideration. The set of DBNs learned by each entity in the network can be written as:

$$\mathcal{M}^{(v)} = \big\{ \mathrm{DBN}_1^{(v)}, \ldots, \mathrm{DBN}_{n_v}^{(v)} \big\} \qquad (4)$$

where $v$ represents the vehicle in the network and $n_v$ is the total number of DBNs learned by that vehicle. The same DBN architecture is considered for making inferences with different sensory data combinations belonging to different vehicles. The learned DBN can be represented as shown in Fig. 1. The DBN mainly has three levels: measurement, continuous, and discrete.

Figure 1: Proposed DBN

3.2 Online test phase

In this phase, we apply a dynamic switching model called the Markov Jump Particle Filter (MJPF) [3] to make inferences on the learned DBN models. The MJPF is a mixed approach in which each particle carries its own Kalman filter. The MJPF is able to predict and estimate continuous and discrete future states and to detect deviations from the normal model. In the MJPF, we use a Kalman Filter (KF) in the state space and a Particle Filter (PF) in the super-state space of Fig. 1.
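A much-simplified, one-dimensional sketch of the MJPF idea is given below: each particle holds a discrete mode plus a Kalman mean/variance, modes jump according to the learned transition matrix, and each particle's Kalman filter is predicted and updated with the new measurement. This is an illustrative reduction, not the filter of [3]; the `models` mapping (mode to drift and process variance) and the noise values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mjpf_step(particles, T, models, z, r=0.1):
    """One predict/update/resample step of a toy 1-D MJPF.
    particles: list of (mode, mean, var); T: mode transition matrix;
    models: mode -> (drift, process_var); z: scalar measurement;
    r: measurement noise variance (assumed)."""
    new, weights = [], []
    for s, mean, var in particles:
        s2 = int(rng.choice(len(T), p=T[s]))     # sample next discrete mode
        drift, q = models[s2]
        mean_p, var_p = mean + drift, var + q    # KF predict under mode s2
        innov_var = var_p + r
        k = var_p / innov_var                    # Kalman gain
        mean_u = mean_p + k * (z - mean_p)       # KF update with measurement
        var_u = (1 - k) * var_p
        w = np.exp(-0.5 * (z - mean_p) ** 2 / innov_var) / np.sqrt(2 * np.pi * innov_var)
        new.append((s2, mean_u, var_u))
        weights.append(w)
    w = np.asarray(weights)
    w = w / w.sum()
    idx = rng.choice(len(new), size=len(new), p=w)   # resample particles
    return [new[i] for i in idx]
```

After one step with a measurement far from the prior mean, the particle means move toward the measurement, as expected from the Kalman update.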

Abnormality detection and complementarity check

In probability theory, a statistical distance quantifies the distance between two statistical objects, which can be two random variables, two probability distributions, etc. Some important statistical distances between two distributions include the Bhattacharyya distance [6], the Hellinger distance (HD) [17], the Jensen–Shannon divergence [7], and the Kullback–Leibler (KL) divergence [10]. The HD is defined between vectors having only positive or zero elements [1]. The datasets in this work are normalized, so the values vary between zero and one, and there are no negative values. Moreover, the HD is symmetric, unlike the KL divergence. For these reasons, the HD is more appropriate as an abnormality measure than the other distance metrics. Moreover, the works in [15] and [2] used the HD as an abnormality measurement.
The abnormality measurement can be defined as the distance between the predicted state values and the observed evidence. Accordingly, let $p$ be the distribution of the predicted generalized states and $q$ the distribution of the observed evidence. The HD can be written as:

$$d_H(p, q) = \sqrt{1 - BC(p, q)} \qquad (5)$$

where $BC(p, q)$ is the Bhattacharyya coefficient [5], such that:

$$BC(p, q) = \int \sqrt{p(x)\, q(x)}\, dx \qquad (6)$$

For a given experience under evaluation, an abnormality measurement is obtained at each time instant from equation (5). The variable $d_H \in [0, 1]$, where values close to $0$ indicate that measurements match the predictions, whereas values close to $1$ reveal the presence of an abnormality. After calculating the abnormality measures with the HD, it is possible to check the complementarity among the different DBN models learned.
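For discrete distributions the integral in equation (6) reduces to a sum, so equations (5)-(6) can be sketched directly:

```python
import numpy as np

def hellinger_distance(p, q):
    """Hellinger distance between two discrete probability
    distributions, via the Bhattacharyya coefficient (Eqs. 5-6)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient, Eq. (6)
    return np.sqrt(max(0.0, 1.0 - bc))   # Eq. (5); clamp for numerical safety
```

Identical distributions give a distance of 0, while distributions with disjoint support give 1, matching the interpretation above.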

4 Experimental Setup

In order to validate the proposed method, two intelligent research platforms called iCab (Intelligent Campus AutomoBile) [16] (see Fig. 2(a)) with autonomous capabilities have been used. To process data and navigate through the environment, each vehicle counts on two powerful computers, along with a screen for debugging and Human-Machine Interaction (HMI) purposes. The software prototyping tool used is ROS [18].

(a) The autonomous vehicles (iCab)
(b) The environment
Figure 2: The agents and the environment used for the experiments.

The data sets collected with the two iCab vehicles are synchronized in order to observe the vehicles as different entities in a heterogeneous way by matching their time stamps. The intercommunication scheme is proposed in [12], where both vehicles share all their information over the network through a Virtual Private Network (VPN). For this experiment, since the synchronization level reaches nanoseconds, the recorded datasets of both vehicles have been merged and ordered using the timestamp generated by the clock on each vehicle, which was previously configured with a Network Time Protocol (NTP) tool called Chrony. Both vehicles perform a PMT task, which consists of autonomous platooning around a square building (see Fig. 2(b)). The data generated from the lidar odometry, such as the ego-motion of the vehicle, and the different combinations of the control variables, such as the steering angle, rotor velocity, and rotor power, are considered the main metrics to learn and test the models. Moreover, the method aims to detect unseen dynamics of the vehicles.
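The timestamp-based merging described above can be illustrated with a nearest-neighbour matcher over two sorted timestamp streams. This is a simplified stand-in for the actual ROS/Chrony setup; the `tolerance` parameter is an assumption.

```python
import bisect

def nearest_sync(ts_a, ts_b, tolerance):
    """Pair each timestamp in ts_a with the nearest timestamp in ts_b
    (both sorted, in seconds); drop pairs farther apart than tolerance."""
    pairs = []
    for t in ts_a:
        i = bisect.bisect_left(ts_b, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(ts_b)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(ts_b[k] - t))
        if abs(ts_b[j] - t) <= tolerance:
            pairs.append((t, ts_b[j]))
    return pairs
```

With streams `[0.0, 1.0, 2.0]` and `[0.05, 1.2, 5.0]` and a 0.1 s tolerance, only the first pair `(0.0, 0.05)` survives; the others are too far apart and are discarded.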

(a) Perimeter monitoring
(b) Emergency stop
Figure 3: Odometry data for iCab1
(a) Perimeter monitoring
(b) Emergency stop
Figure 4: Odometry data for iCab2
(a) Steering angle(s) w.r.t position
(b) Velocity(v) w.r.t position
Figure 5: Control data for iCab1 for perimeter monitoring task
Figure 6: Rotor power(p) w.r.t position for iCab1 for perimeter monitoring task

4.1 Perimeter Monitoring Task (PMT)

In order to generate the required data for learning and detecting abnormalities, both vehicles perform a rectangular trajectory in platooning mode. The leader simply follows the rectangular path, and the follower receives the path and keeps the desired distance from the leader. This task has been divided into two different scenarios.

  • Scenario I: Both vehicles perform the platooning operation by following a rectangular trajectory in a closed environment, as shown in Fig. 2(b), for four laps in total, recording the ego-motion, stereo camera images, lidar point cloud data, encoders, and self-state. Notice that the GPS has trouble acquiring a good signal because of the urban canyon. Fig. 3(a) and Fig. 4(a) show the odometry data for the perimeter monitoring task for iCab1 and iCab2, respectively. Moreover, Fig. 5 shows the steering angle w.r.t. iCab1’s position (Fig. 5(a)) and the rotor velocity w.r.t. iCab1’s position (Fig. 5(b)). The rotor power plotted w.r.t. iCab1’s position is shown in Fig. 6.

  • Scenario II: Both vehicles perform the same experiment, but now a pedestrian crosses in front of the leader vehicle (i.e., iCab1). When the leader vehicle detects the pedestrian, it automatically executes a stop and waits until the pedestrian fully crosses and moves out of the danger zone. Meanwhile, the follower (i.e., iCab2) detects the stop of the leader and stops at a certain distance. When the leader resumes the PMT, the follower mimics the action of the leader. Fig. 3(b) and Fig. 4(b) show the odometry data for the emergency stop for iCab1 and iCab2, respectively.

5 Results

As explained in the previous section, two different scenarios are taken into consideration with two vehicles. Moreover, the data combinations from the odometry and control of the vehicles have been treated independently, and the abnormality measures are finally compared to understand the correlation between them. We set the abnormality threshold to 0.4, considering the average Hellinger distance value of 0.2 when the vehicles operate in normal conditions. The DBN models are trained on scenario I, based on the PMT where no pedestrians cross in front of the vehicles. As stated at the beginning of this paper, one of the objectives is the automatic extraction of abnormalities by learning from experience. Hence, for the PMT, the DBN models have been trained to extract the HD by pairing two different variables: Steering Angle-Power (SP), Velocity-Power (VP), and Steering Angle-Velocity (SV).
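Given a series of HD values and the 0.4 threshold mentioned above, the abnormal intervals can be extracted with a simple scan. This is a sketch of a post-processing step the paper does not describe explicitly:

```python
def abnormal_intervals(hd_series, threshold=0.4):
    """Return (start, end) index intervals where the Hellinger
    distance stays strictly above the abnormality threshold."""
    intervals, start = [], None
    for i, d in enumerate(hd_series):
        if d > threshold and start is None:
            start = i                          # abnormality begins
        elif d <= threshold and start is not None:
            intervals.append((start, i - 1))   # abnormality ends
            start = None
    if start is not None:                      # series ends while abnormal
        intervals.append((start, len(hd_series) - 1))
    return intervals
```

For the series `[0.1, 0.5, 0.6, 0.2, 0.45]` this returns `[(1, 2), (4, 4)]`.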

Testing phase

The switching DBN model in this work is designed for the control part of the vehicles. However, we have also considered odometry data and tested the performance of the learned DBN. Fig. 7 shows the abnormality measures obtained from odometry data for the leader vehicle (iCab1) and the follower vehicle (iCab2), respectively. During the interval (cyan shaded area) in which the pedestrian passes and the vehicles stop, there isn’t any significant difference in the HD value for iCab1 (leader) or iCab2 (follower). This behaviour is due to the fact that, during that interval, the vehicles always remain inside the normal trajectory range. However, there are specific intervals when a vehicle deviates from the normal trajectory range, and the HD provides a high value of about 0.2 during those intervals. So the learned DBN model was able to detect whether any trajectory deviation occurred.


Figure 7: Abnormality measurements for odometry: (a) iCab1, (b) iCab2
Figure 8: Abnormality measurements for control (SV): (a) iCab1, (b) iCab2
Figure 9: Abnormality measurements for control (SP): (a) iCab1, (b) iCab2
Figure 10: Abnormality measurements for control (VP): (a) iCab1, (b) iCab2
  • Steering Angle-Velocity (SV): It is necessary to check that the HD works in both directions: when the metrics involved reflect abnormal behaviour, such as power and velocity, and when the metric used does not react to abnormal behaviour, such as the steering angle. For this reason, the pair (SV) shown in Fig. 8 does not detect abnormalities when a pedestrian crosses in front of the vehicle (cyan shaded), which is expected.

  • Steering Angle-Power (SP): Fig. 9 shows that the HD is high when a pedestrian is crossing in front of the leader vehicle (iCab1). This high value is considered an abnormality in the behaviour of the leader, and, as expected, the follower (iCab2) also shows abnormal behaviour for the platooning task. However, the HD measures for the follower are not as significant as for the leader, because the follower did not perform an emergency brake, but rather reduced its speed until reaching the minimum distance from the leader.

  • Velocity-Power (VP): The last pair tested is the velocity and the power consumption, which are highly related. Fig. 10 shows the moment when a pedestrian crosses in front of the leader in cyan colour, which matches the highest value of the HD. For the follower vehicle, the abnormality measurement is very significant due to the new behaviour of the vehicle in response to the emergency brake of the leader. The consecutive peaks in the HD are caused by the high acceleration when the leader vehicle starts moving while the current distance between them is still lower than desired. The next peak in the HD is caused by the acceleration of the leader and the emergency stop of the follower due to the pedestrian. To summarize, the switching DBNs learned from the SP and VP data were able to detect the unusual situations present; however, the odometry data and the SV control combination did not prove to be good combinations for detecting abnormal behaviour.

6 Conclusion and future work

It has been shown that using the HD on a DBN model learned from experience is a possible and plausible solution for automatically detecting abnormalities. The main idea of the proposed method has been demonstrated through the training and testing phases, and the results support that the applied methodology is more useful than checking and delimiting each metric of the vehicle depending on the event and defining, for each one, the upper and lower limits beyond which a behaviour is considered abnormal.
The future work on this new approach could be extended by establishing communication between the objects involved in the tasks and developing collective awareness models. Such models can mutually predict the future states of the objects involved in the task and enrich the contextual awareness of the environment where they operate. Another direction could be the development of an optimized model from the different feature combinations that can be used for the future state predictions of the considered entities. Additionally, the classification of the detected abnormalities by considering different test scenarios, and the comparison of abnormality detection performance using different metrics as the abnormality measure, could be considered.

Acknowledgement

Supported by SEGVAUTO 4.0 (P2018/EMT-4362) and CICYT projects (TRA2015-63708-R and TRA2016-78886-C3-1-R).

References

  1. H. Abdi and N. J. Salkind (2007) Encyclopedia of measurement and statistics. Thousand Oaks, CA: Sage. Cited by: §3.2.1.
  2. M. Baydoun, D. Campo, D. Kanapram, L. Marcenaro and C. S. Regazzoni (2019) Prediction of multi-target dynamics using discrete descriptors: an interactive approach. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3342–3346. Cited by: §3.2.1.
  3. M. Baydoun, D. Campo, V. Sanguineti, L. Marcenaro, A. Cavallaro and C. Regazzoni (2018) Learning switching models for abnormality detection for autonomous driving. In 2018 21st International Conference on Information Fusion (FUSION), pp. 2606–2613. Cited by: §3.2.
  4. M. Baydoun, M. Ravanbakhsh, D. Campo, P. Marin, D. Martin, L. Marcenaro, A. Cavallaro and C. S. Regazzoni (2018) A multi-perspective approach to anomaly detection for self-aware embodied agents. arXiv preprint arXiv:1803.06579. Cited by: §2.
  5. A. Bhattacharyya (1943) On a measure of divergence between two statistical populations defined by their probability distributions. Bulletin of the Calcutta Mathematical Society 35, pp. 99–109. Cited by: §3.2.1.
  6. A. Bhattacharyya (1943) On a measure of divergence between two statistical populations defined by their probability distributions. Bull. Calcutta Math. Soc. 35, pp. 99–109. Cited by: §3.2.1.
  7. D. M. Endres and J. E. Schindelin (2003) A new metric for probability distributions. IEEE Transactions on Information theory. Cited by: §3.2.1.
  8. B. Fritzke (1995) A growing neural gas network learns topologies. In Advances in neural information processing systems, pp. 625–632. Cited by: §3.1.2.
  9. J. A. Hartigan and M. A. Wong (1979) Algorithm as 136: a k-means clustering algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics) 28 (1), pp. 100–108. Cited by: §3.1.2.
  10. J. R. Hershey and P. A. Olsen (2007) Approximating the kullback leibler divergence between gaussian mixture models. In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP’07, Vol. 4, pp. IV–317. Cited by: §3.2.1.
  11. T. Kohonen (1990) The self-organizing map. Proceedings of the IEEE 78 (9), pp. 1464–1480. Cited by: §3.1.2.
  12. A. Kokuti, A. Hussein, P. Marín-Plaza, A. de la Escalera and F. García (2017) V2X communications architecture for off-road autonomous vehicles. In Vehicular Electronics and Safety (ICVES), 2017 IEEE International Conference on, pp. 69–74. Cited by: §4.
  13. A. Leite, A. Pinto and A. Matos (2018) A safety monitoring model for a faulty mobile robot. Robotics 7 (3), pp. 32. Cited by: §2.
  14. P. R. Lewis, M. Platzner, B. Rinner, J. Tørresen and X. Yao (2016) Self-aware computing systems. Springer. Cited by: §1.
  15. R. Lourenzutti and R. A. Krohling (2014-07) The hellinger distance in multicriteria decision making: an illustration to the topsis and todim methods. Expert Syst. Appl. 41 (9), pp. 4414–4421. External Links: Document, ISSN 0957-4174, Link Cited by: §3.2.1.
  16. P. Marin-Plaza, A. Hussein, D. Martin and A. d. l. Escalera (2018) Global and local path planning study in a ros-based research platform for autonomous vehicles. Journal of Advanced Transportation 2018. Cited by: §4.
  17. L. Pardo (2005) Statistical inference based on divergence measures. Chapman and Hall/CRC. Cited by: §3.2.1.
  18. M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler and A. Y. Ng (2009) ROS: an open-source robot operating system. In ICRA workshop on open source software, Vol. 3, pp. 5. Cited by: §4.
  19. M. Ravanbakhsh, M. Baydoun, D. Campo, P. Marin, D. Martin, L. Marcenaro and C. S. Regazzoni (2018) Learning multi-modal self-awareness models for autonomous vehicles from human driving. arXiv preprint arXiv:1806.02609. Cited by: §2.
  20. J. Schlatow, M. Möstl, R. Ernst, M. Nolte, I. Jatzkowski, M. Maurer, C. Herber and A. Herkersdorf (2017) Self-awareness in autonomous automotive systems. In Proceedings of the Conference on Design, Automation & Test in Europe, pp. 1050–1055. Cited by: §1.
  21. G. Xie, H. Gao, B. Huang, L. Qian and J. Wang (2018) A driving behavior awareness model based on a dynamic bayesian network and distributed genetic algorithm. International Journal of Computational Intelligence Systems 11 (1), pp. 469–482. Cited by: §2.
  22. G. Xiong, P. Zhou, S. Zhou, X. Zhao, H. Zhang, J. Gong and H. Chen (2010) Autonomous driving of intelligent vehicle bit in 2009 future challenge of china. In Intelligent Vehicles Symposium (IV), 2010, pp. 1049–1053. Cited by: §1.