Image-based OoD-Detector Principles on Graph-based Input Data in Human Action Recognition

Abstract

In a complex world like ours, it is unacceptable for a practical implementation of a machine learning system to assume a closed world. A learning-based system operating in a real-world environment must therefore be aware of its own capabilities and limits and be able to distinguish between confident and unconfident inference results, especially if a sample cannot be explained by the underlying distribution. This knowledge is particularly essential in safety-critical environments and tasks, e.g. self-driving cars or medical applications. Towards this end, we transfer image-based Out-of-Distribution (OoD)-methods to graph-based data and show their applicability in action recognition.

The contribution of this work is (i) the examination of the portability of recent image-based OoD-detectors for graph-based input data, (ii) a Metric Learning-based approach to detect OoD-samples, and (iii) the introduction of a novel semi-synthetic action recognition dataset.

The evaluation shows that image-based OoD-methods can be applied to graph-based data. Additionally, there is a gap between the intraclass and the intradataset performance. Simple methods such as the examined baseline or ODIN provide reasonable results. More sophisticated network architectures – in contrast to their image-based application – were surpassed in the intradataset comparison and even led to lower classification accuracy.

I Introduction

Modern deep convolutional neural networks are able to recognize objects in images, segment areas pixelwise, and even generate realistic looking photos. Despite their superb capabilities in those areas, they are not able to expose their own lack of knowledge. As several studies have found, a network can be as confident on irrelevant or humanly incomprehensible input data as on in-distribution input data [37, 17, 30]. As a result, there are numerous approaches [17, 25, 11, 22, 27] for detecting so-called out-of-distribution (OoD) data.

Instead of proposing another image-based approach, this work investigates the applicability of OoD-detection methods to graph-based input data. To the best of our knowledge, no OoD-detection methods have so far been investigated on graph-based data. Since human skeleton graphs can be easily generated from RGB images [6, 32], depth data [31], and even RF-signals [41], the dynamics of human actions can be captured without the high computational cost of optical flow or problems regarding poor visual conditions. The contribution of this work is: (i) the examination of the portability of ODIN [25] and the confidence learning approach from [11] when using graph-structured input data in an action recognition task; as a baseline, the softmax output comparison proposed in [17] is used. Additionally, (ii) a Metric Learning-based approach for detecting OoD-samples is developed. (iii) To ensure a controlled and repeatable evaluation environment, a novel semi-synthetic action recognition dataset is also introduced.

In the following section, an overview of related work on both graph-based action recognition and OoD-detection is given. The baseline and the examined methods are explained in section III. The semi-synthetic dataset and the quantitative evaluation are presented in section IV.

Fig. 1: A precise definition of OoD-data is mandatory. Values in the outer range are explainable by the given distribution but are significantly less common than values in the central range.

II Related Work

Both in action recognition and in outlier detection there is a large body of related work. We focus on skeleton-based action recognition as well as deep neural network outlier detection approaches. Additional information regarding action recognition can be found in the surveys [39, 33, 24]. A good overview of outlier detection is given by [19, 3, 42].

II-A Skeleton-based Action Recognition

Recognizing actions based on image data is one way to solve action recognition tasks. Another strategy uses skeleton data which can be extracted by a 2D or 3D pose estimator such as Stacked Hourglass Networks [29], PersonLab [32], or OpenPose [6]. The extracted landmarks can be seen as human joints and form the nodes of a skeleton graph (Figure 4). Based upon a time series of this graph input data, there are several ways to recognize an action.

A common approach is the analysis and classification of hand-crafted features using Hidden Markov Models [31], Support Vector Machines [21], or k-Nearest-Neighbor classifiers [10]. Deep learning models [12, 35, 40, 36] are trained in an end-to-end manner and do not rely on handcrafted features.

In [12], the skeleton graph is divided into five parts according to the human physical structure. These five parts are then fed separately into five bidirectional recurrent subnets (BRNN). The outputs of the subnets are successively fused to form the input of higher BRNN layers and finally build the input of the classification layer.

Part-aware LSTM networks are introduced in [35]. A part-aware LSTM splits the entire motion of the human body into multiple part-based LSTM cells. To keep the context of each body part separated from one another, each cell has its individual input, forget, and modulation gates. Only the output gate is shared among all body parts.

An approach using spatial temporal graph convolutional networks (ST-GCN) is given by [40]. The skeleton sequence is interpreted as a graph in such a way that, in each frame, joints which are naturally connected in the human body are connected by an edge. To include the temporal domain, the same joints in consecutive frames share an edge. The resulting graph is then propagated through the proposed graph convolution network, which forms the input of the final classification layer.
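To make this graph construction concrete, the following minimal sketch assembles such a spatio-temporal adjacency matrix. It is only an illustration under stated assumptions: the five-joint skeleton and its edge list are hypothetical and do not reproduce the OpenPose joint layout or the partitioning strategies of ST-GCN.

```python
import numpy as np

def build_st_graph(num_joints, skeleton_edges, num_frames):
    """Adjacency matrix of a spatio-temporal skeleton graph.

    Node (t, j) is flattened to index t * num_joints + j. Spatial edges
    connect naturally linked joints within a frame; temporal edges connect
    the same joint in consecutive frames.
    """
    n = num_frames * num_joints
    adj = np.zeros((n, n), dtype=np.int8)
    for t in range(num_frames):
        base = t * num_joints
        for i, j in skeleton_edges:        # spatial edges within a frame
            adj[base + i, base + j] = adj[base + j, base + i] = 1
        if t + 1 < num_frames:             # temporal edges to the next frame
            for j in range(num_joints):
                adj[base + j, base + num_joints + j] = 1
                adj[base + num_joints + j, base + j] = 1
    return adj

# Hypothetical 5-joint toy skeleton, not the real OpenPose layout.
edges = [(0, 1), (1, 2), (1, 3), (2, 4)]
A = build_st_graph(num_joints=5, skeleton_edges=edges, num_frames=3)
print(A.shape)  # (15, 15)
```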

Spatial reasoning and temporal stack learning networks are used in [36]. While the former models high-level spatial structural information within each frame, the latter is responsible for generating detailed temporal dynamics. A spatial reasoning network encodes the coordinate vector of each body part and feeds them into a residual graph neural network, which models the structural relationship between body parts. Those relationships are then analyzed in the temporal stack learning network, which stacks previous high-level features to generate even more high-level features. Based on the most high-level features, the system classifies an action.

Fig. 4: Available skeleton data in the dataset: 18-node ground truth data (a) and 25-node OpenPose-generated data (b).

II-B OoD-Detection

Since detecting OoD-samples is an established topic, there are numerous detection methods which [19] categorizes into statistical [15, 23, 20, 1, 34, 9, 38], machine learning [14, 13], and neural network [4, 28, 17, 25, 11, 22, 27] based methods.

Assuming normally distributed data, [15] calculates the mean and standard deviation of an attribute among all given data. An outlier is present if the absolute difference between the queried value and the mean, divided by the standard deviation, exceeds a critical value derived from a predefined significance level.

To be able to handle multivariate data, [20] uses the Mahalanobis distance to handle possible inter-attribute dependencies. The outlier detection is then performed by generating a boxplot based upon the calculated distance.

A biologically inspired method to detect novelty is presented by [9] and uses an ensemble of simple detectors. Each detector checks the given data against its own definition of normality. If a detector detects an abnormal state, novelty has been detected.

To detect inliers, [38] uses a Support Vector Machine where the decision boundary is given by the sphere of minimal volume containing all data.

A method based on decision trees is presented by [14], where a decision tree is repeatedly constructed and pruned. After each pruning step, incorrectly classified samples are removed from the training set and marked as outliers.

Some early neural network based methods are given by [4] and [28]. The former takes advantage of the fact that a multilayer perceptron (MLP) works well for interpolating but badly for extrapolating data. More precisely, the MLP models the unconditional probability density of the input data used during training [4]. The latter trains an autoencoder on the training data. If the system is not able to sufficiently reconstruct the input during the test phase, the input is marked as OoD [28].

A more recent neural network method is given by [17] who propose to check the maximum value of the softmax output of a classifying neural network against a predefined threshold. If the maximum is below the threshold, the system marks the input as an outlier. The authors mention that this method can be considered as a baseline, as it is the most naïve way to decide whether an in- or an outlier is present.

The method proposed in [25] can be seen as an extension of the baseline method above. It differs only in the use of the tempered softmax [18] during the test phase. Since the network is trained with the default softmax, the tempered softmax (parameterized with high temperatures) forces the network to be sure of its decisions during the test phase.

In [11], a confidence estimation branch is appended to the network. This branch enables the network to directly estimate a degree of confidence instead of just classifying the input as in- or out-of-distribution. During the training, an additional confidence loss is added to prevent the network from being doubtful. The trained network is then able to provide an additional confidence output for a given input.

Another method modifying the basic network is given by [27]. Instead of a confidence branch, the presented extension maps the basic output onto a manifold and enables the use of the Euclidean distance as a measure of out-of-distribution-ness.

III Out-of-Distribution Detectors

Our proposed method is inspired by the metric learning [27] and confidence learning [11] methods and tries to enable the network to estimate the local density around a sample in the estimated manifold.

We start with a definition of OoD-samples: a naïve definition is that OoD-samples are samples which are not explainable by the underlying learned distribution. As shown in Figure 1, this interpretation is problematic. Even if the values between -47 and 53 are explainable by the given distribution, the likelihood of having a sample in this range is negligibly small. Therefore, this work requires an in-distribution sample to be significantly explainable by the learned distribution.

For OoD-samples, [27] distinguishes between novelty-based and anomaly-based OoD-samples. While the former describes samples sharing some common space with the trained distribution, the latter includes samples that are not related to the trained distribution. Credit card fraud, terrorist activities, and system failures are prominent examples of anomalies of high interest [8]. A third category, ignored by [27], are plain outliers that are neither part of a new class nor part of an anomaly. They simply lie on or beyond the decision borders of their classes. This can be the result of bad training data or insufficient training.

The experimental setup of this work can be seen as a novelty detection problem: a predetermined single class is excluded during the training and only present during the test phase. The predetermined class can be seen as the OoD-class and should be rejected by the system.

III-A Baseline

The approach presented in [17] is used as a baseline. Given a pre-trained classifier which uses a softmax output layer, the proposed OoD-detector simply checks the maximum softmax output against a predefined threshold. If the maximum is greater than the threshold, the system continues its classification task. Otherwise the input is marked as OoD and hence rejected. The threshold value is determined in such a way that an error of 5% is allowed; therefore, the true positive rate is fixed at 95%. Except for the threshold determination, this method is one of the most naïve ways of handling the detection of OoD-samples.
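A minimal sketch of this baseline, assuming the classifier's raw logits are available as a NumPy array; the threshold fitting on held-out in-distribution scores follows the fixed-TPR rule described above.

```python
import numpy as np

def max_softmax_scores(logits):
    """Maximum softmax probability per sample, the baseline OoD score."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def fit_threshold(in_dist_scores, tpr=0.95):
    """Choose the threshold so that `tpr` of in-distribution samples pass."""
    return np.quantile(in_dist_scores, 1.0 - tpr)

# Toy usage: random logits stand in for a real classifier's outputs.
rng = np.random.default_rng(0)
scores_in = max_softmax_scores(rng.normal(3.0, 1.0, size=(100, 31)))
tau = fit_threshold(scores_in)
scores_test = max_softmax_scores(rng.normal(0.0, 1.0, size=(10, 31)))
is_ood = scores_test < tau  # rejected inputs
```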

III-B Out-of-DIstribution detector for Neural networks

ODIN [25] can be seen as an extension of the baseline approach. Inspired by [18], the tempered softmax

$$ S_i(x; T) = \frac{\exp(f_i(x)/T)}{\sum_{j=1}^{N} \exp(f_j(x)/T)} \qquad (1) $$

is used during the test phase, where $f_j(x)$ denotes the logit for class $j$ out of $N$ classes. The higher the temperature parameter $T$, the more equally distributed is its output among all available classes (see Figure 5). As a result, a high temperature parameter during the test phase forces the network to be confident in its classification decision. If it is not, the maximum softmax output is suppressed by the resulting almost equally distributed class probabilities. Both ODIN and the baseline have the advantage that no further changes to the network architecture are required.
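The following sketch shows the effect of Equation (1); the example logits are arbitrary. Note that it covers only the tempered-softmax part of ODIN described here, not any additional input preprocessing.

```python
import numpy as np

def tempered_softmax(logits, T):
    """Softmax with temperature T as in Equation (1); T = 1 is the baseline."""
    z = logits / T
    z -= z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[4.0, 1.0, 0.5]])
for T in (1.0, 10.0, 1000.0):
    print(T, tempered_softmax(logits, T).round(4))
# The output distribution flattens towards uniform as T grows.
```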

Fig. 5: Tempered softmax applied to the same input with different values of $T$. The higher the temperature parameter, the more equally distributed is the output. Note the range of the y-axis.

III-C Learning Confidence Approach

In comparison to the methods mentioned above, the confidence learning approach presented in [11] changes the underlying network architecture by adding an additional confidence branch. The branch enables the network to output a degree of confidence for a given input instead of just declaring an input sample as in- or out-of-distribution. To be able to estimate the confidence inside this branch, the training procedure is changed as follows: the classification output $p$ is interpolated with the one-hot encoded ground truth $y$,

$$ p_i' = c \cdot p_i + (1 - c) \cdot y_i, \qquad (2) $$

where the degree of interpolation $c$ is the confidence of the network for the given input. In order to prevent the network from always stating a low confidence and thereby obtaining a low classification loss, a weighted confidence loss

$$ \mathcal{L}_c = -\log(c) \qquad (3) $$

is added to the classification loss. The weight $\lambda$ of the confidence loss is controlled by a budget parameter $\beta$ and is adjusted whenever the weights are updated: if the confidence loss is greater than $\beta$, $\lambda$ is increased and the system is punished more for low confidences. Otherwise $\lambda$ is decreased and the system is punished less as a result of having a high confidence.
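A minimal sketch of one loss computation for this scheme, following Equations (2) and (3). The multiplicative adjustment factor for $\lambda$ is an assumption, as the text only states that $\lambda$ is increased or decreased relative to the budget $\beta$.

```python
import numpy as np

def confidence_losses(probs, conf, y_onehot, lam):
    """Combined loss for one batch.

    probs: (n, k) softmax outputs, conf: (n, 1) confidence estimates,
    y_onehot: (n, k) ground truth, lam: current confidence-loss weight.
    """
    eps = 1e-12
    # Equation (2): interpolate prediction and ground truth by confidence c.
    p_adj = conf * probs + (1.0 - conf) * y_onehot
    task_loss = -np.mean(np.sum(y_onehot * np.log(p_adj + eps), axis=1))
    # Equation (3): punish low confidence.
    conf_loss = -np.mean(np.log(conf + eps))
    return task_loss + lam * conf_loss, conf_loss

def adjust_lambda(lam, conf_loss, beta=0.3, factor=1.01):
    """Budget rule: raise lambda when the confidence loss exceeds beta.

    The factor 1.01 is an illustrative assumption, not a value from the paper.
    """
    return lam * factor if conf_loss > beta else lam / factor
```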

III-D Metric Learning-based Approach

Like the confidence learning method, the approach based on Metric Learning changes the underlying network. More precisely, a Metric Learning layer (see Figure 6) is inserted between the base network and the classification layer. Additionally, a branch for learning the confidence, by approximating either the density or the entropy of the learned manifold, is appended. The training is divided into two phases: first, the Metric Learning layer is trained with the contrastive loss [16]; afterwards, the classification and confidence branches are trained on the resulting embeddings. The classification branch is trained straightforwardly by propagating the embeddings through a residual layer followed by a softmax activation. In contrast, the confidence approximation is a bit trickier.

The density as well as the entropy approximation both use the local neighborhood

$$ N(x_i) = \{\, x_j \mid \lVert e(x_i) - e(x_j) \rVert_2 < m,\ j \neq i \,\} \qquad (4) $$

of an embedded sample $e(x_i)$ in a batch to calculate the appropriate score. The neighborhood is given by all other samples in the batch whose (Euclidean) distance to the corresponding embedding is lower than a predefined margin $m$.
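A sketch of the neighborhood computation of Equation (4) for a batch of embeddings; the margin value in the usage line is an arbitrary assumption.

```python
import numpy as np

def neighborhoods(embeddings, margin):
    """Boolean neighborhood matrix: nbr[i, j] is True iff j != i and the
    Euclidean distance between embeddings i and j is below the margin."""
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    nbr = dist < margin
    np.fill_diagonal(nbr, False)  # a sample is not its own neighbor
    return nbr

# Usage on a toy batch of 8 samples embedded in 256 dimensions.
emb = np.random.default_rng(1).normal(size=(8, 256))
nbr = neighborhoods(emb, margin=20.0)  # margin chosen arbitrarily
```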

Density Approximation

One of the simplest ways to detect how dense the area around a given sample is, is to calculate the local density

$$ \rho(x_i) = \frac{|N(x_i)|}{n} \qquad (5) $$

of the neighborhood, where the normalization factor is directly given by the batch size $n$. Indeed, using this calculation as the ground truth during the confidence training, the network learns to approximate the density but still lacks information about the pureness of the area. In an unclear decision region, the network should be able to give additional information, especially in terms of decision confidence. The entropy approach tries to fix this issue.
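Given the neighborhood matrix from the previous sketch, the density target of Equation (5) is a one-liner; normalizing by the batch size follows the text above.

```python
def density_targets(nbr):
    """Local density per sample (Equation 5): neighborhood size over batch size."""
    n = nbr.shape[0]
    return nbr.sum(axis=1) / n

targets = density_targets(nbr)  # regression targets for the confidence branch
```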

Entropy Approximation

Instead of simply calculating the density, the entropy (Equation 6) or the Gini impurity (Equation 7) provide information about the purity of the local neighborhood:

$$ H(N(x_i)) = -\sum_{c \in C} p_c \log p_c \qquad (6) $$

$$ G(N(x_i)) = 1 - \sum_{c \in C} p_c^2 \qquad (7) $$

where $C$ is the set of all available classes and $p_c$ is the fraction of neighborhood samples belonging to class $c$. The approach is similar to the density approximation but requires a few tweaks in the ground truth calculation. Since the Gini impurity as well as the entropy reach their minimum if all samples belong to the same class, a weighting needs to take care of empty neighborhoods, i.e. neighborhoods consisting only of the processed sample itself. The weighting term for a sample is therefore given by

(8)

and weights neighborhoods according to their size. After applying the weighting term, the resulting loss for the Gini impurity and entropy approximation is

(9)

which is the mean of the absolute differences between the calculated ground truth values for the batch and the networks confidence output .
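A sketch of the weighted purity targets and the loss of Equation (9). Since the exact form of the weighting term (8) is not reproduced here, the size-proportional weight below is a labeled assumption; it merely ensures that (nearly) empty neighborhoods do not masquerade as pure, confident regions.

```python
import numpy as np

def purity_targets(nbr, labels, num_classes, kind="gini"):
    """Weighted Gini (Eq. 7) or entropy (Eq. 6) ground truth per sample."""
    n = nbr.shape[0]
    targets = np.zeros(n)
    for i in range(n):
        member_labels = labels[nbr[i]]
        if member_labels.size == 0:
            continue  # empty neighborhood: target stays at zero
        p = np.bincount(member_labels, minlength=num_classes) / member_labels.size
        p = p[p > 0]
        score = 1.0 - np.sum(p**2) if kind == "gini" else -np.sum(p * np.log(p))
        weight = member_labels.size / n  # assumed stand-in for Equation (8)
        targets[i] = weight * score
    return targets

def confidence_loss(targets, conf_out):
    """Equation (9): mean absolute difference to the confidence outputs."""
    return np.mean(np.abs(targets - conf_out))
```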

Fig. 6: Metric Learning-based approach to detect in- and out-of-distribution samples. The base network is extended by a Metric Learning layer as well as a confidence layer. The classification layer contains a residual block and is activated with a softmax.

IV Evaluation

The following section first describes the pipeline and introduces the novel semi-synthetic dataset. Afterwards, the evaluation metrics are explained. Finally, the results are presented quantitatively.

IV-A Pipeline

As already mentioned, this work does not focus on the application of OoD-detector methods in the image domain. Instead, the applicability to graph-based data is examined. As an example, the problem of action recognition is analyzed, where the input data is given in the form of a sequence of skeleton graphs. Figure 7 shows the basic pipeline, which is similar to the one presented in [40]. Given a video input, single frames are extracted and analyzed by a 2D pose estimator (e.g. OpenPose [6]). The resulting sequence of skeleton data is then propagated through a graph CNN (e.g. ST-GCN [40]), resulting in a regularized high-level representation of the input data. Based on this extracted high-level representation, the OoD-detectors are examined and the classification is performed.

Fig. 7: Basic pipeline for all experiments (Video Input → 2D Pose Estimation (OpenPose) → Graph CNN (ST-GCN) → OoD-Detector). First, the video input is divided into single frames. Those frames are then analyzed by a 2D pose estimator (e.g. OpenPose). The resulting skeleton sequences are propagated through a graph CNN (e.g. ST-GCN) and finally analyzed by an OoD-detector.

IV-B Semi-synthetic Dataset

To obtain reproducible results, a novel semi-synthetic dataset is used. The dataset provides a controllable environment and is based on skeleton data of the CMU Graphics Lab Motion Capture Database [7]. This skeleton data is used to animate a human 3D model [26]. The resulting sequences are rendered with Blender [5] from 144 different camera settings (Figure 12). This can be seen as data augmentation and enables a scale and viewpoint invariance of the network [2]. For each rendered frame, an RGB image, depth data in the form of a corresponding 16-bit grayscale image (Figure 11), and an 18-node ground truth skeleton (Figure 4) are available. Currently, there are 32 different classes of actions in 109 sequences.
To verify the results and to be able to check on interdataset OoD-samples, the NTU RGB+D [35] dataset is used additionally. It contains 60 action classes, presented as RGB videos recorded from three different viewpoints. Compared to the short basic actions of the novel synthetic dataset, the NTU RGB+D dataset contains more complex actions in which several persons may be involved.

Fig. 11: Example images of a sequence of the semi-synthetic dataset: the same frame as (a) RGB image, (b) depth image, and (c) RGB image with a modified background.
Fig. 12: Camera positions during the rendering. All cameras (blue dots and green triangles) are directed towards the scene center (black cross). The blue cameras have a distance of 5m to the scene center while the green ones have a distance of 10m.

IV-C Metrics

There are four established metrics used for the comparison of the different approaches [17, 25, 11]. The first one is the false positive rate (FPR) when the true positive rate (TPR) is fixed at 95% (FPR 95). The second one is the detection error at the same fixed true positive rate. The area under the receiver-operator characteristic (AUROC) and the area under the precision-recall curve (AUPR) are the last two.

FPR 95

The FPR 95 measures the false positive rate when the true positive rate is fixed at 95%.

Detection Error

The detection error measures the misclassification probability when a fixed true positive rate of 95% is given. More precisely, assuming that positive and negative examples are equally likely, as in [25], the error is given by:

$$ \text{Err} = 0.5 \, (1 - \text{TPR}) + 0.5 \, \text{FPR}. \qquad (10) $$

AUROC

The receiver-operator characteristic compares the true positive rate of a classifier with the corresponding false positive rate. The area under the receiver-operator characteristic is a threshold independent metric, measuring the overall performance of a classifier.

AUPR

Another threshold-independent metric is the area under the precision-recall curve. Unlike the AUROC, the AUPR is more sensitive to imbalanced datasets, which is a desirable feature when examining OoD-detectors. Since the inliers and the outliers can both be treated as positives in the AUPR calculation, an AUPR-IN and an AUPR-OUT score are given respectively.
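A sketch computing all four metrics from in- and out-of-distribution scores, assuming higher scores mean "more in-distribution"; it relies on scikit-learn for AUROC and AUPR.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def ood_metrics(scores_in, scores_out, tpr=0.95):
    """FPR 95, detection error (Eq. 10), AUROC, AUPR-IN, and AUPR-OUT."""
    tau = np.quantile(scores_in, 1.0 - tpr)  # threshold fixing the TPR
    fpr = np.mean(scores_out >= tau)         # FPR at 95% TPR
    err = 0.5 * (1.0 - tpr) + 0.5 * fpr      # detection error, Equation (10)
    y = np.concatenate([np.ones(len(scores_in)), np.zeros(len(scores_out))])
    s = np.concatenate([scores_in, scores_out])
    auroc = roc_auc_score(y, s)
    aupr_in = average_precision_score(y, s)        # inliers as positives
    aupr_out = average_precision_score(1 - y, -s)  # outliers as positives
    return fpr, err, auroc, aupr_in, aupr_out
```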

IV-D Experimental Setup

This work distinguishes between an intraclass and an intradataset OoD-detection. For the intraclass case, only the semi-synthetic dataset is taken into account. For each of the 32 different classes and each detector, a network is trained. In each training, a single class represents the OoD-class and is excluded from the training, whilst the other 31 classes are in-distribution classes and included in the training. Therefore this problem can be labelled as novelty detection. For the intradataset case, the trained networks from the intraclass OoD-detection were investigated on how well they distinguish between the 31 semi-synthetic (inlier) classes and the NTU RGB+D plus the excluded semi-synthetic (outlier) classes.

The data is split according to a stratified cross-validation into a test- and training set at a ratio of 1:4. As data augmentation, the skeleton graphs are modified by the following pipeline: first, a random start and end point of a sequence is defined, guaranteeing a sequence length of 20 frames. Then Gaussian noise is added to the node values. After this, there is a chance that nodes will be set to zero (a kind of dropout) and a chance that a vertical and horizontal mirroring is applied. The noise as well as the application of the dropout and the mirroring are fixed for a whole sequence.
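A sketch of this augmentation pipeline for a single skeleton sequence of shape (frames, joints, dims). The noise scale and the dropout and mirroring probabilities are illustrative assumptions, as the paper does not state the exact values; noise, dropped joints, and mirroring are drawn once per sequence, as described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_sequence(seq, length=20, sigma=0.01, p_drop=0.05, p_mirror=0.5):
    """Augment one skeleton sequence; seq must contain at least `length` frames."""
    # 1) Random temporal crop of fixed length.
    start = rng.integers(0, seq.shape[0] - length + 1)
    out = seq[start:start + length].copy()
    # 2) Gaussian noise, drawn once per joint and applied to all frames.
    out += rng.normal(0.0, sigma, size=out.shape[1:])
    # 3) Node dropout: the same joints are zeroed in every frame.
    drop = rng.random(out.shape[1]) < p_drop
    out[:, drop, :] = 0.0
    # 4) Mirroring, fixed for the whole sequence (assumes centered coordinates).
    if rng.random() < p_mirror:
        out[..., 0] *= -1.0  # horizontal flip
    if rng.random() < p_mirror:
        out[..., 1] *= -1.0  # vertical flip
    return out
```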

The human skeleton graphs are extracted by OpenPose using the default settings. ST-GCN with the spatial configuration partitioning strategy has been chosen for the analysis of the resulting graphs [40]. The ST-GCN networks are all initialized with random values. Adam was used as optimizer with its default parameters except for the initial learning rate.

For the evaluation, the procedure described in [17] has been followed. First, the test set is separated into correctly and incorrectly classified examples. From the two resulting groups, the AUROC and AUPR scores are calculated. Afterwards, the confidence threshold is estimated in such a way that the true positive rate of the correctly classified examples drops to 95%. Based on this threshold, the FPR and the detection error are calculated. Since there are 32 different classes and therefore 32 trained networks for a given method, the results are averaged and the corresponding standard deviation is given.

Baseline and ODIN

The baseline method and ODIN do not require a modification of the existing network, so they can easily be examined without retraining. However, in order to be able to compare these straightforward approaches without interference from the smarter architectures, they are analyzed with the base architecture. The networks are trained over 100 epochs with a batch size of 512. The initial learning rate is set to 0.001 and shrinks every 40 epochs by a factor of 10.

Learning Confidence

Like the previous networks, the learning confidence system is also trained over 100 epochs. The batch size is set to 1024, and the initial learning rate is 0.001 and shrinks every 30 epochs by a factor of 10. The budget parameter is set to 0.3.

Metric Learning

The training of the Metric Learning approach is divided into three parts. First, the Metric Learning layer is trained to get a good representation of the data in the manifold. Based on the embedding, the classifier and the OoD-detector are then trained, while the weights of the Metric Learning layer are held fixed.

The Metric Learning layer is trained over 200 epochs with a batch size of 512. The learning rate also starts at 0.001 and shrinks every 80 epochs by a factor of 10. The layer maps the input onto a 256-dimensional output. The classification layer and the confidence layer are then both separately trained over 50 epochs with an initial learning rate of 0.0001 and a reduction every 20 epochs by a factor of 10.

IV-E Results

In the following, the results are presented in quantitative terms. To give a hint where the relevant reference value resides, each of the following plots contains a dotted red line. The temperature parameter is displayed logarithmically on the x-axis. Table I and Table II provide the results of the intraclass and intradataset evaluation. The ODIN row in both tables contains the values of the ODIN parameterization with the lowest valid FPR score.

Baseline and ODIN

The results for the baseline as well as ODIN are shown in Figure 13. The parameterization with $T = 1$ equals the baseline. In the intraclass OoD-detection, the FPR reaches its minimum within the valid temperature range; for temperature parameter values above 100, the required TPR of 95% cannot be guaranteed, and those values are therefore not taken into account. Compared to the results in [25], the curve has an unusual course for a rising temperature parameter. The curve of the intradataset case, on the other hand, shows a similar course as the results in [25].

Learning Confidence

Figure 14 shows the results of the confidence learning approach. The intradataset ODIN curves vary heavily from the ones depicted in Figure 13. It should also be noted that, for the intradataset case, the method performs significantly worse than ODIN, even if ODIN operates on the modified network architecture. Another remarkable problem is that the average accuracy (not included in the plots) has dropped notably from the base architecture to the learning confidence architecture.

Metric Learning

The density approximating as well as the entropy approximating Metric Learning approaches were investigated and result in similar plots (Figure 15, Figure 16) for the intraclass and intradataset comparison. Therefore, the following analysis applies to both of them.
The ODIN intraclass FPR curve has some similarities with the curve of the base architecture. Nonetheless, the TPR curve drops much more heavily than in the base architecture. In terms of the intraclass task, both Metric Learning approaches perform worse than the learning confidence approach or ODIN. In terms of the intradataset task, they perform slightly better than the learning confidence approach but also have a higher variance. The average accuracies of the density and entropy classifiers are also slightly better than that of the learning confidence approach.

Fig. 13: ODIN results: TPR and FPR for different temperature parameters. At high temperatures, the TPR is no longer fixed at 95%, which leads to a better FPR but also allows more errors.
Fig. 14: Learning confidence results: it is noticeable that the FPR as well as the TPR increase for high temperatures, which is a strange behavior compared to the other plots.
Fig. 15: Metric Learning Results: Density approximation. The intraclass and intradataset FPR values for the density approach differ primarily in the standard deviation.
Fig. 16: Metric Learning results: entropy approximation. As with the density approximation, the FPR values of the entropy approach differ primarily in the standard deviation.
Method FPR AUROC AUPR-IN AUPR-OUT ERR ACC
Baseline
ODIN
Confidence
Density
Gini
TABLE I: Intraclass comparison: training and test on the same dataset. A single class is excluded from the training and serves as the OoD-class in the test phase.
Method FPR AUROC AUPR-IN AUPR-OUT ERR ACC
Baseline
ODIN
Confidence
Density
Gini
TABLE II: Intradataset comparison: training on the semi-synthetic dataset, test on the NTU RGB+D dataset.

V Conclusion

As the evaluation shows, OoD-methods can successfully be applied to graph-based data, but their behavior differs from that on image-based data. The experiments showed that the OoD-detector ODIN outperforms the more sophisticated learning confidence and Metric Learning-based methods. Despite the modification of the network architecture made by these two methods, ODIN is superior even when used with the modified architectures. As shown in Table I and Table II, the modified network architectures even have a negative impact on the classification accuracy (ACC). This is of particular interest as the learning confidence method outperforms ODIN in the original paper in nearly every case. Another interesting observation is that the learning confidence method performs better in the intraclass task than in the intradataset task.

In this paper we have shown, using our novel semi-synthetic dataset, that ODIN is currently the best of the examined OoD-methods on graph-based data.

Our presented Metric Learning-based method embeds the high-level features into a manifold and learns to estimate the density or entropy of the local neighborhood of an embedded sample. Since it is crucial to find a good embedding, embedding methods other than the contrastive loss could be investigated in future work.

Acknowledgment

This work was developed in the Fraunhofer Cluster of Excellence “Cognitive Internet Technologies”.

References

  1. J. Allan, J. Carbonell, G. Doddington, J. Yamron and Y. Yang (2001) Topic detection and tracking pilot study. In Topic Detection and Tracking Workshop Report.
  2. J. Bayer, D. Muench and M. Arens (2020) Viewpoint independency for skeleton based human action recognition. Technical report, Fraunhofer IOSB, Ettlingen.
  3. S. V. Bhosale (2014) Holy grail of outlier detection technique: a macro level take on the state of the art. IJCSIT.
  4. C. M. Bishop (1994) Novelty detection and neural network validation. IEE Proceedings: Vision, Image and Signal Processing 141(4), pp. 217–222.
  5. Blender Online Community (2019) Blender – a 3D modelling and rendering package. Blender Foundation, Amsterdam.
  6. Z. Cao, T. Simon, S. E. Wei and Y. Sheikh (2017) Realtime multi-person 2D pose estimation using part affinity fields. In CVPR, pp. 1302–1310.
  7. Carnegie Mellon Graphics Lab. CMU Graphics Lab Motion Capture Database.
  8. V. Chandola, A. Banerjee and V. Kumar (2009) Anomaly detection. ACM Computing Surveys 41(3), pp. 1–58.
  9. D. Dasgupta and S. Forrest (1999) Novelty detection in time series data using ideas from immunology. In Proc. 8th International Conference on Intelligent Systems, pp. 82–87.
  10. M. Devanne, H. Wannous, S. Berretti, P. Pala, M. Daoudi and A. Del Bimbo (2015) 3-D human action recognition by shape analysis of motion trajectories on Riemannian manifold. IEEE Transactions on Cybernetics 45(7), pp. 1340–1352.
  11. T. DeVries and G. W. Taylor (2018) Learning confidence for out-of-distribution detection in neural networks. arXiv preprint arXiv:1802.04865.
  12. Y. Du, W. Wang and L. Wang (2015) Hierarchical recurrent neural network for skeleton based action recognition. In CVPR.
  13. M. Ester, H. Kriegel, J. Sander and X. Xu (1996) A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, Vol. 96, pp. 226–231.
  14. G. H. John (1995) Robust decision trees: removing outliers from databases. In KDD, pp. 174–179.
  15. F. E. Grubbs (1969) Procedures for detecting outlying observations in samples. Technometrics 11(1), pp. 1–21.
  16. R. Hadsell, S. Chopra and Y. LeCun (2006) Dimensionality reduction by learning an invariant mapping. In CVPR, Vol. 2, pp. 1735–1742.
  17. D. Hendrycks and K. Gimpel (2017) A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR.
  18. G. Hinton, O. Vinyals and J. Dean (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
  19. V. J. Hodge and J. Austin (2004) A survey of outlier detection methodologies. Artificial Intelligence Review 22(2), pp. 85–126.
  20. J. Laurikkala, M. Juhola and E. Kentala (2000) Informal identification of outliers in medical data. In IDAMAP, Vol. 1, pp. 20–24.
  21. T. Kerola, N. Inoue and K. Shinoda (2014) Spectral graph skeletons for 3D action recognition. In ACCV, Springer, pp. 417–432.
  22. M. Kliger and S. Fleishman (2018) Novelty detection with GAN. arXiv preprint arXiv:1802.10560.
  23. E. M. Knorr and R. T. Ng (1998) Algorithms for mining distance-based outliers in large datasets. In VLDB, pp. 392–403.
  24. Y. Kong and Y. Fu (2018) Human action recognition and prediction: a survey. arXiv preprint arXiv:1806.11230.
  25. S. Liang, Y. Li and R. Srikant (2018) Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR.
  26. MakeHuman Community. www.makehumancommunity.org.
  27. M. Masana, I. Ruiz, J. Serrat, J. van de Weijer and A. M. Lopez (2018) Metric learning for novelty and anomaly detection. In BMVC.
  28. N. Japkowicz, C. Myers and M. Gluck (1995) A novelty detection approach to classification. In IJCAI, Montreal, pp. 518–523.
  29. A. Newell, K. Yang and J. Deng (2016) Stacked hourglass networks for human pose estimation. In ECCV, pp. 483–499.
  30. A. Nguyen, J. Yosinski and J. Clune (2015) Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. In CVPR, pp. 427–436.
  31. G. T. Papadopoulos, A. Axenopoulos and P. Daras (2014) Real-time skeleton-tracking-based human action recognition using Kinect data. In MMM, Springer, pp. 473–483.
  32. G. Papandreou, T. Zhu, L. Chen, S. Gidaris, J. Tompson and K. Murphy (2018) PersonLab: person pose estimation and instance segmentation with a bottom-up, part-based, geometric embedding model. In ECCV.
  33. R. Poppe (2010) A survey on vision-based human action recognition. Image and Vision Computing 28(6), pp. 976–990.
  34. A. H. Seheult, P. J. Green, P. J. Rousseeuw and A. M. Leroy (1989) Robust regression and outlier detection. Journal of the Royal Statistical Society, Series A 152(1), p. 133.
  35. A. Shahroudy, J. Liu, T. Ng and G. Wang (2016) NTU RGB+D: a large scale dataset for 3D human activity analysis. In CVPR.
  36. C. Si, Y. Jing, W. Wang, L. Wang and T. Tan (2018) Skeleton-based action recognition with spatial reasoning and temporal stack learning. In ECCV, LNCS Vol. 11205, pp. 106–121.
  37. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow and R. Fergus (2014) Intriguing properties of neural networks. In ICLR.
  38. D. Tax, A. Ypma and R. Duin (1999) Support vector data description applied to machine vibration analysis. In Proc. 5th Annual Conference of the Advanced School for Computing and Imaging, pp. 15–23.
  39. P. Turaga, R. Chellappa, V. S. Subrahmanian and O. Udrea (2008) Machine recognition of human activities: a survey. TCSVT 18(11), pp. 1473–1488.
  40. S. Yan, Y. Xiong and D. Lin (2018) Spatial temporal graph convolutional networks for skeleton-based action recognition. In AAAI, pp. 7444–7452.
  41. M. Zhao, T. Li, M. Abu Alsheikh, Y. Tian, H. Zhao, A. Torralba and D. Katabi (2018) Through-wall human pose estimation using radio signals. In CVPR.
  42. A. Zimek and P. Filzmoser (2018) There and back again: outlier detection between statistical reasoning and data mining algorithms. Wiley-Blackwell.