Abstract


Deep neural networks for video classification, just like image classification networks, may be subjected to adversarial manipulation. The main difference between image classifiers and video classifiers is that the latter usually use temporal information contained within the video, in the form of optical flow or implicitly through differences between adjacent frames. In this work we present a manipulation scheme for fooling video classifiers by introducing a spatially patternless temporal perturbation that is practically unnoticeable to human observers and undetectable by leading image adversarial pattern detection algorithms. After demonstrating the manipulation of action classification on single videos, we generalize the procedure to produce adversarial patterns with temporal invariance that generalize across different classes, for both targeted and untargeted attacks.

Figure 1: Top: Consecutive frames of the original video from the ”Triple jump” category. Middle: Consecutive frames of the misclassified adversarial video. The adversarial perturbation is practically unnoticeable to the human observer. Bottom: The patternless adversarial perturbation of each frame. The perturbation is a constant offset applied to the entire frame. Because the perturbation can be negative, it is displayed over a gray background. The gentle hue changes displayed in each frame are the adversarial pattern.

1 Introduction

In recent years, deep neural networks (DNNs) have shown phenomenal performance in a wide range of tasks, such as image classification Krizhevsky et al. (2012), object detection Ren et al. (2015), and semantic segmentation Shelhamer et al. (2017). Despite their success, DNNs have been found vulnerable to adversarial attacks. Many works Szegedy et al. (2014); Goodfellow et al. (2015b); Papernot et al. (2015) have shown that a slight (sometimes imperceptible) perturbation added to an image can make a given DNN's prediction false. These findings have raised many concerns, particularly for critical systems such as face recognition Sun et al. (2014), surveillance cameras Sultani et al. (2018), autonomous vehicles and medical applications Litjens et al. (2017). In recent years most of the attention was given to the study of adversarial patterns in images, and less to video action recognition. Works on adversarial video attacks were published only in the last two years Wei et al. (2019); Inkawhich et al. (2018); Wei (2019); Jiang et al. (2019), even though DNNs have been applied to video-based tasks for several years, in particular video action recognition Carreira and Zisserman (2017); Wang et al. (2017); Feichtenhofer et al. (2018). In video action recognition networks, temporal information is of the essence in categorizing actions, in addition to per-frame image classification. Some of the proposed attacks emphasized, beyond adversarial categorization, the sparsity of the perturbation. In this paper, we approach adversarial attacks against video action recognition in a white-box setting, with an emphasis on making the perturbation in the spatio-temporal domain imperceptible both to the human observer and to image-based adversarial pattern detectors. We introduce a spatially patternless perturbation by applying a uniform RGB offset to each frame, thus constructing a temporal adversarial pattern.

Unlike previous works, in our case sparsity in the common sense is undesirable: a sparse perturbation is easier to detect, both for human observers, because of its unnatural pattern, and for image-based adversarial perturbation detectors, for the exact same reason. The adversarial perturbation presented in this work does not contain any spatial information within a single frame other than a constant (and usually small) offset, as presented in Figure 1. This type of perturbation occurs in natural videos due to changing lighting (electrical light interlacing or natural), auto-gain corrections, scene changes, etc. Algorithms that are designed to detect image adversarial patterns by finding pattern anomalies or detecting out-of-distribution samples will not be able to detect our adversarial perturbation, since the natural behaviour of videos contains such fluctuations to begin with. Further explanation will be provided later on. In this paper, we aim to attack the video action recognition task Kay et al. (2017). As the threat model we choose the I3D Carreira and Zisserman (2017) model, based on InceptionV1 Szegedy et al. (2015). Specifically, we attack the RGB stream of the model, rather than the easier-to-influence optical-flow stream. The attacked network was trained on the Kinetics-400 Human Action Video Dataset Kay et al. (2017).

In order to make the adversarial perturbation unnoticeable to human observers, we reduce the thickness and temporal roughness of the adversarial perturbation, which will be defined precisely later on. To do so we apply three regularization terms during the optimization process, each corresponding to a different aspect of the perceptibility of the adversarial pattern. In addition, we introduce an improved adversarial-loss function that allows better integration of these regularization terms with the adversarial loss.

We will first introduce targeted and untargeted patternless attacks on a single video and present the trade-off between the different regularization terms. In addition, we will introduce a temporally invariant perturbation, thus removing the dependency on synchronization between the video and the adversarial perturbation. Finally, class generalization and universal perturbations will be presented for several setups.

The main contributions of this work are:

  • A methodology for developing a universal, time-invariant adversarial attack against video recognition networks, undetectable by image-based adversarial pattern detectors.

  • Incorporating regularization terms that affect the visibility of the adversarial pattern to the human observer.

  • Providing a new loss mechanism for adversarial perturbations.

The rest of the paper is organized as follows: In Section 2 we briefly review related work. In Section 3 we present the patternless adversarial attack. Section 4 shows experimental results. Section 5 presents generalizations of the adversarial attack. Finally, we present conclusions and future work in Section 6.

We encourage the readers to view the adversarial videos and additional material on the project page1.

2 Related Work

2.1 Video Action Recognition

With deep convolutional neural networks (CNNs) achieving state-of-the-art performance on image recognition tasks, many works propose to carry this achievement over to video-based computer vision tasks, in particular DNNs for video action recognition. The most straightforward approach for achieving this is to add temporally-recurrent layers such as LSTM Sak et al. (2014) models to a traditional 2D CNN. This way, long-term temporal dependencies can be assigned to spatial features Wang et al. (2018); Simonyan and Zisserman (2014). Another approach, implemented in C3D Ji et al. (2013); Tran et al. (2014); Varol et al. (2016), extends the 2D (image-based) CNN kernels to 3D (video-based) kernels and learns hierarchical spatio-temporal representations directly from the raw videos. Despite the simplicity of this approach, such a network is very difficult to train due to its huge parameter space. To address this, Carreira and Zisserman (2017) propose the Inflated 3D CNN (I3D) with inflated 2D pre-trained filters Russakovsky et al. (2014). In addition to the RGB pipeline, optical flow is also useful for encoding temporal information, and indeed several architectures greatly improved their performance by incorporating an optical-flow stream Carreira and Zisserman (2017). In this paper, we use the I3D video recognition model as the attacked module and focus our attack on the RGB pipeline.

2.2 Adversarial Attack on Video Models

Unlike for image-based classifiers, research on the vulnerability of video-based classifiers to adversarial attacks emerged only in the last few years. Adversarial attacks can be roughly divided into two main categories: white-box and black-box attacks.

White-box adversarial attacks

Wei et al. (2019) were the first to investigate a white-box attack on video action recognition. They proposed a norm-based optimization algorithm to compute sparse adversarial perturbations. Unlike our threat model, they chose networks with a CNN+RNN architecture in order to investigate the propagation properties of perturbations. Li et al. (2019) generated an offline universal perturbation using a GAN-based model, which they applied to unseen inputs of real-time video recognition models. Rey-de-Castro and Rabitz (2018) proposed a nonlinear adversarial perturbation by using another neural network model (besides the attacked model), which was optimized to transform the input into a norm-constrained adversarial pattern. Inkawhich et al. (2018) proposed both white- and black-box untargeted attacks on a two-stream model (optical flow and RGB), based on the original and iterative versions of FGSM Goodfellow et al. (2015a); Kurakin et al. (2017), and used FlowNet2 Ilg et al. (2016) to estimate optical flow in order to provide gradient estimates.

Black-box adversarial attacks

Jiang et al. (2019) were the first to explore a black-box attack on video action recognition networks, proposing a video attack framework (both untargeted and targeted) that uses tentative perturbations transferred from an image model, together with partition-based rectifications found by Natural Evolution Strategies on patches of the tentative perturbations, in order to obtain good adversarial gradient estimates with fewer queries to the targeted model. Wei (2019) proposed a heuristic-based algorithm that measures the importance of each frame in the video for generating the adversarial example. Based on the frames' importance and the salient regions, the proposed algorithm searches for a subset of frames on which an adversarial attack would be more efficient.

3 Patternless Adversarial attack

The spatially patternless adversarial attack consists of a uniform offset added to the entire frame that changes from frame to frame. This novel approach is desirable for several reasons. It contains no spatial pattern within individual frames other than an RGB offset. This type of perturbation can easily be mistaken (if visible to the human eye at all) for changing lighting conditions of the scene or typical sensor behaviour. Because these artifacts are found in many videos, the phenomenon cannot be attributed to out-of-distribution statistics of ”natural” videos, which are at the core of image adversarial pattern detectors. Furthermore, this approach allows a smooth and thin adversarial pattern, which is practically unnoticeable to the human observer in most cases.
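As a concrete sketch (illustrative names and shapes, not the authors' code), the spatially-uniform perturbation can be applied by broadcasting one RGB triplet per frame over the spatial dimensions; with T frames this leaves only 3T free parameters, versus T·H·W·3 for an unconstrained spatial pattern:

```python
import numpy as np

def apply_flicker(video, delta, vmin=0.0, vmax=255.0):
    """Add a spatially-uniform RGB offset to every frame of a video.

    video: float array of shape (T, H, W, 3)
    delta: float array of shape (T, 3), one RGB triplet per frame
    The offset is broadcast over the spatial dimensions, so each frame
    receives a constant color shift, and the result is clipped to the
    valid gray-level range.
    """
    perturbed = video + delta[:, None, None, :]
    return np.clip(perturbed, vmin, vmax)

# Tiny demonstration on a 2-frame, 2x2 video of mid-gray pixels.
video = np.full((2, 2, 2, 3), 100.0)
delta = np.array([[5.0, -3.0, 0.0],      # gentle hue shift on frame 0
                  [254.0, 0.0, 0.0]])    # frame 1 saturates and is clipped
adv = apply_flicker(video, delta)
```

Since the offset is constant across each frame, no per-frame spatial structure is introduced, which is exactly what keeps the perturbation invisible to spatial anomaly detectors.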

3.1 Preliminaries

Video action recognition is a function F that accepts an input X of T consecutive frames with H rows, W columns and 3 color channels, and produces an output p = F(X) which can be treated as a probability distribution over the output domain of K classes. The model implicitly depends on parameters that are fixed during the attack. The classifier assigns the label

    ŷ(X) = argmax_k p_k(X)

to the input X.
We denote the adversarial video by X̂ = X + δ, where δ is the perturbation, and each individual adversarial frame by X̂_t = X_t + δ_t.

Untargeted Attacks

Untargeted attacks cause the target model to classify adversarial examples to any label but the correct class, i.e., ŷ(X̂) ≠ y. These attacks minimize the probability of the original class until a different class becomes the most probable.

Targeted Attacks

Targeted attacks cause the target model to classify adversarial examples to a specific predetermined incorrect class. The objective becomes ŷ(X̂) = y_t, where y_t is the target class. This attack usually lowers the probability of the original class by raising the probability of the adversarial class.

Video Adversarial Attack

Given a video X, we would like to construct another video X̂ that meets the following requirements:

  • X̂ is adversarial, i.e., ŷ(X̂) ≠ ŷ(X) (untargeted attack in this example)

  • X and X̂ look similar; in particular, X_t and X̂_t look similar for every frame t

3.2 Threat Model

Our threat model follows the white-box setting, which assumes complete knowledge of the targeted model, its parameter values and its architecture. In the experiments, the I3D Carreira and Zisserman (2017) video recognition model is used as the target model, focusing on the RGB pipeline. The adversarial attacks described in this work are both targeted and untargeted, and the theory and implementation can easily be adapted accordingly.

3.3 Dataset

We use Kinetics-400 Kay et al. (2017) for our experiments. Kinetics is a standard benchmark for action recognition in videos. It contains about 240K videos of 400 different human action categories (220K in the training split, 20K in the validation split). In Sections 4.3 and 5.2 we use the validation split. In Section 5.1 we train on the training split and evaluate on the validation split. We pre-process the dataset by excluding the videos that the network misclassifies to begin with. Each video contains a 90-frame snippet.

3.4 Methodology

Our attack is designed to be spatially constant over the three color channels of the frame, meaning that the same RGB offset is added to every pixel of a given frame. Thus the perturbation corresponding to the t-th frame of the video can be represented by three scalars, and we denote those triplets δ_t. In total we have 3T parameters to optimize. To generate an untargeted adversarial perturbation, we use the following objective function


    argmin_δ  Σ_m β_m ρ_m(δ) + λ ℓ_adv(F(X + δ), y)        (1)

    subject to  V_min ≤ X + δ ≤ V_max        (2)

The first terms in Equation (1) are the regularization terms, while the last is the adversarial classification loss. F(X + δ) is the classifier output (a probability distribution) corresponding to the adversarial video, and y is the original class. The parameter λ weights the relative importance of being adversarial against the regularization terms. The set of functions ρ_m defines the regularization terms that allow us to achieve better imperceptibility for the human observer, and the parameters β_m weight the relative importance of each regularization term.

As we will present further on, the constraint in Equation (2) makes sure that after applying the adversarial perturbation, the perturbed video is clipped to the valid range [V_min, V_max], representing the minimum and maximum allowed gray-level values. We believe that clipping is the more realistic approach, rather than passing the perturbed input through a squashing activation such as tanh, as is commonly done, for the latter causes a contrast reduction of the original video, which makes it more susceptible than it should be to adversarial attacks.
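A toy numeric illustration of this argument (our own example, not taken from the paper): clipping leaves in-range pixel values untouched, while a tanh-style squashing of the whole signal compresses contrast everywhere:

```python
import numpy as np

# Pixel values spanning the full 8-bit scale.
x = np.array([0.0, 64.0, 128.0, 192.0, 255.0])

# Clipping: only values pushed out of range are affected.
clipped = np.clip(x + 10.0, 0.0, 255.0)

# tanh reparameterization: map to [-1, 1], squash, map back.
w = 2.0 * x / 255.0 - 1.0
squashed = 255.0 * (np.tanh(w) + 1.0) / 2.0

contrast_clip = clipped.max() - clipped.min()   # mild loss only at the ends
contrast_tanh = squashed.max() - squashed.min() # global contrast loss
```

Because tanh(±1) ≈ ±0.76, the squashed signal can never reach the extremes of the gray-level range, so the whole video loses contrast, whereas clipping preserves the spacing of all in-range values.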

3.5 Improved Adversarial loss function

We developed a new loss mechanism after observing that the common loss mechanisms were too crude to achieve smooth convergence. Therefore, quite similarly to the loss term introduced in Carlini and Wagner (2016), our loss term relies on reaching the adversarial goal only to the desired extent, leaving enough freedom for the other regularization terms to affect the solution accordingly. The boundaries between the different regions of the loss term were engineered to prevent overshoot by the gradients and momentum of the optimizer, as will be explained in the next sections.

Untargeted adversarial loss

The general approach in untargeted adversarial attacks is to demand that the probability of the original class be slightly below that of some other class, rather than zero, in order to allow the other regularization terms to be more prominent. We have devised the following loss for untargeted attacks. Let d = max_{k≠y} p_k(X̂) − p_y(X̂) denote the lead of the strongest competing class over the original class y, and let m be the desired margin of the original class probability below the adversarial class probability. Then

    ℓ_adv(X̂) = m/2 − d            if d < 0
    ℓ_adv(X̂) = (d − m)² / (2m)    if 0 ≤ d < m        (3)
    ℓ_adv(X̂) = 0                  if d ≥ m

Once loss values have entered the desired margin, and are therefore adversarial, the quadratic loss term relaxes the relatively steep gradients and momentum of the optimizer, and the difference between the first- and second-ranked class probabilities approaches the desired margin m. When the other regularization terms counter the trend of the adversarial loss minimization, this difference starts rising toward the margin from below. Then the quadratic term kicks in smoothly and maintains the desired difference between these two classes, while dominating the other regularization terms. This ”soft” margin induces a smoothly varying loss, thereby preventing overshoot effects.
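One plausible realization of such a piecewise margin loss, matching the description above (the exact boundary placement and scaling are our assumptions), could look like:

```python
import numpy as np

def untargeted_margin_loss(p, y, m=0.05):
    """Piecewise adversarial loss for an untargeted attack.

    p: probability vector over classes, y: original class index.
    Linear while the original class still wins, quadratic once the
    attack is inside the margin m, zero beyond it -- so the gradient
    fades smoothly instead of switching off abruptly.
    """
    d = np.max(np.delete(p, y)) - p[y]   # lead of best non-original class
    if d < 0.0:
        return m / 2.0 - d               # not yet adversarial: linear push
    if d < m:
        return (d - m) ** 2 / (2.0 * m)  # inside margin: soft quadratic
    return 0.0                           # margin reached: no further push
```

The two branches meet with equal value and slope at d = 0, which is the "soft margin" property the text describes: no gradient discontinuity at the success/failure interface.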

Targeted adversarial loss

In addition, we have performed a targeted adversarial attack by changing the loss term as follows: d is redefined as d = p_{y_t}(X̂) − max_{k≠y_t} p_k(X̂), where y_t is the targeted adversarial class, and the same piecewise loss is applied. This loss term relies on the same reasoning as the untargeted adversarial loss. In some cases it would be beneficial to follow Carlini and Wagner (2016) and use the logits instead of the probabilities for calculating the loss. We suggest adapting this method partially, keeping the desired margin in probability space, normalized at each iteration accordingly, for a margin defined in logit space may be less intuitive as a regularization term.

Figure 2: Top: The adversarial perturbation of the RGB channels (color represents the relevant channel) as a function of the frame number when only thickness minimization is active (roughness unconstrained). Bottom: The adversarial perturbation of the RGB channels as a function of the frame number when only roughness minimization is active (thickness unconstrained). Top and bottom graphs are presented in percent of the full scale of the image. Middle: The gradual change of the adversarial pattern between the two extreme cases, where the row corresponding to pure thickness minimization matches the top graph and the row corresponding to pure roughness minimization matches the bottom graph. Color (stretched for visualization purposes) represents the RGB parameters of the adversarial pattern of each frame.

3.6 Regularization terms

We quantify the distortion introduced by the perturbation in the spatio-temporal domain. This metric will be constrained in order for the perturbation to be imperceptible to the human observer while remaining adversarial. Unlike previously published work on adversarial patches in images, in the video domain ”imperceptible” may refer to thin patterns in gray-level space or to slowly changing patterns in temporal frame space. In contrast to related work Wei et al. (2019); Wei (2019), in our case temporal sparsity is not of the essence but rather unnoticeability to the human observer. In order to achieve the most imperceptible perturbation we introduce three regularization terms, each controlling a different aspect that may influence human perception mechanisms.

Thickness regularization

This loss term forces the adversarial perturbation to be as small as possible in gray-level over the three color channels. It has no temporal constraint and can be related to the ”thickness” of the adversarial pattern.

Roughness regularization

We introduce two temporal loss functions and incorporate them into a single term.

    ρ_2(δ) = (1/T) Σ_t ||δ_{t+1} − δ_t||²        (4)

The first-order temporal difference shown in Equation (4) controls the difference between each two consecutive frame perturbations. By minimizing this term we control the maximal temporal change we allow for the adversarial pattern. This term penalizes temporal changes of the adversarial pattern; within the context of human visual perception, it is perceived as ”flickering”, thus we wish to minimize it.

    ρ_3(δ) = (1/T) Σ_t ||δ_{t+1} − 2δ_t + δ_{t−1}||²        (5)

The second-order temporal difference shown in Equation (5) controls the trend of the adversarial perturbation. Visually, this term inflicts a penalty on fast trend changes, such as spikes, and may be considered a scintillation-reducing term. The weights of the thickness term and the temporal (roughness) term will be denoted by β₁ and β₂, respectively.
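A sketch of these regularizers in code (the exact norms and normalization constants are our assumptions; here a squared Euclidean norm over the RGB triplets, averaged over frames):

```python
import numpy as np

def thickness_reg(delta):
    # amplitude of the per-frame RGB offsets, no temporal constraint
    return np.mean(np.sum(delta ** 2, axis=1))

def first_order_reg(delta):
    # first-order temporal difference: penalizes frame-to-frame
    # change of the offsets, perceived as "flickering"
    return np.mean(np.sum(np.diff(delta, n=1, axis=0) ** 2, axis=1))

def second_order_reg(delta):
    # second-order temporal difference: penalizes fast trend changes
    # such as spikes, a scintillation-reducing term
    return np.mean(np.sum(np.diff(delta, n=2, axis=0) ** 2, axis=1))

# A single spike at frame 1 is cheap in thickness but expensive
# in both temporal terms.
delta = np.array([[0.0, 0.0, 0.0],
                  [3.0, 3.0, 3.0],
                  [0.0, 0.0, 0.0]])
```

Note how the second-order term punishes the spike hardest: the trend reverses within a single frame, which is exactly the fast "scintillation" the text wants to suppress.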

4 Experiments

4.1 Implementation Details

The experiment code is implemented in TensorFlow2 and based on the I3D source code3. The code is executed on a server with four Nvidia Titan-X GPUs, an Intel i7 processor and 128GB of RAM. For optimization we adopt the ADAM Kingma and Ba (2014) optimizer with a learning rate of 1e-3 and a batch size of 8 for the generalization sections and 1 for a single-video attack. Except where explicitly stated, the remaining hyper-parameters are fixed, with separate values for the single-video attack, for the generalization sections, and for the Kinetics dataset.

4.2 Metric

Let us define several metrics in order to quantify the performance of our adversarial attacks.

Fooling ratio: the percentage of adversarial videos that are successfully misclassified.

Thickness, the mean absolute perturbation per pixel:

    thickness(δ) = (1/3T) Σ_t Σ_c |δ_{t,c}|

Roughness, the mean absolute frame-to-frame difference of the perturbation per pixel:

    roughness(δ) = (1/3T) Σ_t Σ_c |δ_{t+1,c} − δ_{t,c}|

The thickness and roughness values in this paper are presented as percent of the full applicable range of the image span (e.g., 255 gray levels for 8-bit video).
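These two metrics are straightforward to compute from the perturbation triplets; a sketch (assuming an 8-bit full scale of 255) could be:

```python
import numpy as np

def thickness_percent(delta, full_scale=255.0):
    # mean absolute perturbation per pixel, as percent of full scale
    return 100.0 * np.mean(np.abs(delta)) / full_scale

def roughness_percent(delta, full_scale=255.0):
    # mean absolute frame-to-frame difference, as percent of full scale
    return 100.0 * np.mean(np.abs(np.diff(delta, axis=0))) / full_scale

# A constant offset is "thick" but perfectly smooth...
constant = np.full((4, 3), 2.55)
# ...while an alternating offset of the same size is also rough.
alternating = np.array([[0.0, 0.0, 0.0], [25.5, 25.5, 25.5]])
```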

4.3 Adversarial Perturbation

In order to perform the patternless adversarial attacks, as demonstrated in Figure 1, we selected a random sub-sample from the validation section of the Kinetics dataset. The attacks described here were performed under several regularization schemes as described earlier. In Figure 2, we see the temporal amplitude of the adversarial perturbation of each frame, for each color channel. The extreme case of minimizing only the thickness (given success of the untargeted adversarial attack) while leaving the roughness unconstrained can be seen in the graph at the top of the figure. As we can see, the signal of the three channels (RGB) fluctuates strongly, with a thickness value of 0.87% and a roughness value of 1.24%. The other extreme case is where the roughness was constrained and the thickness was not. Here the thickness value is 1.66% and the roughness value is 0.6%. The central image within the figure displays all the gradual cases between the two extremes described above: along the y-axis, the weighting shifts gradually from pure thickness minimization (the row corresponding to the upper graph) to pure roughness minimization (the row corresponding to the lower graph). Both regularization terms clearly shape the resulting perturbation, as desired.

Figure 3: Learning process of the improved loss mechanism. Probabilities (green and red lines) corresponds to the left y-scale. Roughness and thickness (blue lines) are in percents from the full gray-level range of the image (right y-scale).

Several trends can be observed in the convergence process of the adversarial perturbation under the improved loss mechanism, visualized in Figure 3. At first, the adversarial perturbation is zero, and its thickness and roughness rise. At a certain iteration the top-probability class switches from the original class to the adversarial class, which until now was not plotted, since this adversarial attack is untargeted. At the same iteration the adversarial loss reaches zero and the regularization becomes prominent in the learning process, causing the thickness and roughness to decay. This change of trend occurs slightly after the adversarial class change, due to the momentum of the Adam optimizer and the remaining intrinsic gradients. Later, the difference between the adversarial class probability and the original class probability reaches the margin defined by our loss term, and the quadratic loss term kicks in smoothly, maintaining the desired difference between these two classes while diminishing the thickness and roughness of the adversarial temporal pattern. At the interface between adversarial success and failure, binary loss changes caused convergence issues; our implementation of the quadratic term, as defined in Equation (3), handles this issue.

Figure 4: Convergence curves in probability-thickness-roughness space of an untargeted adversarial attack with different regularization weights.

In order to visualize the path taken by our loss mechanism under different regularization weights, we plot a 3D representation in probability-thickness-roughness space for different regularizing scenarios. Figure 4 shows the probability of the maximal class in 10 different scenarios, as described in the legend. One can see that at the beginning, the maximal probability (of the original class) drops from the initial probability (upper section of the graph) along the same path for all of the described cases, until the adversarial perturbation takes hold of the top class. From there, the regularization weights take the lead: each case converges along a different path to a different location on the thickness-roughness plane. The user may choose the desired ratios for each specific application.

Table 1 shows the results of a single-video attack. This attack reached a 100% fooling ratio with low roughness and thickness values.

5 Adversarial Attack Generalization

In order to obtain universal and class generalization of the adversarial perturbation across videos, we modified Equation (1) as follows:

    argmin_δ  Σ_m β_m ρ_m(δ) + (λ/N) Σ_{i=1}^{N} ℓ_adv(F(X^(i) + δ), y^(i))        (6)

where N is the total number of training videos, X^(i) + δ is the i-th adversarial video, and y^(i) is the corresponding targeted label in a targeted attack, or the original label in an untargeted attack.
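A minimal sketch of this batched objective, with placeholder stand-ins for the model loss and regularizer (which are not the paper's implementations):

```python
import numpy as np

def universal_objective(delta, videos, labels, adv_loss, reg, lam=1.0):
    """One shared perturbation delta is applied to every training video
    and the per-video adversarial losses are averaged, so the optimized
    delta must generalize across the whole training set."""
    adv = sum(adv_loss(v + delta[:, None, None, :], y)
              for v, y in zip(videos, labels)) / len(videos)
    return reg(delta) + lam * adv

# Toy stand-ins for the real model loss and regularizer.
toy_adv_loss = lambda v, y: float(np.mean(v))   # placeholder adversarial loss
toy_reg = lambda d: float(np.sum(np.abs(d)))    # placeholder regularizer

videos = [np.ones((2, 1, 1, 3)), np.zeros((2, 1, 1, 3))]
labels = [0, 1]
delta = np.zeros((2, 3))
value = universal_objective(delta, videos, labels, toy_adv_loss, toy_reg)
```

The only change relative to the single-video objective is the averaging over the training set; the perturbation itself keeps its 3-scalars-per-frame parameterization.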

Attack       | Targeted | Time invariance | Fooling ratio [%] | Thickness [%] | Roughness [%]
-------------|----------|-----------------|-------------------|---------------|--------------
Single Video |          |                 | 100               | 1.0 ± 0.5     | 0.83 ± 0.4
Single Video |          |                 | 100               | 3.3 ± 2.3     | 3.3 ± 3.0
Single Class |          |                 | 85.1              | 9.6           | 5.5
Universal    |          |                 | 92.1              | 15.5          | 15.7
Universal    |          |                 | 84.1              | 11.6          | 11.6
Universal    |          |                 | 83.0              | 12.9          | 12.0

Table 1: Results over several types of attacks

5.1 Class generalization - Untargeted Attack

An adversarial attack on a single video has limited applicability: an excellent solution is found, but it lacks generalization and relevance to real-world cases. Training on a set of examples makes the adversarial pattern more robust and generalized for practical implementation. We present an untargeted adversarial attack that was trained to prevent recognition of a specific activity, selected to be ”Triple jump”. The training was as described in the previous sections, with the main difference of training on the training split of the triple jump class and validating on the validation split of the same class in the Kinetics dataset Kay et al. (2017). The initially misclassified videos within the dataset were removed in order to start with a zero miss ratio. Table 1 shows the success ratio. As expected, generalizing produces an adversarial pattern with larger thickness and roughness values. When applying this pattern, 85.1% of the triple jump videos are classified to another category. Selecting different weights for the regularization terms can produce a more tailored adversarial pattern.

5.2 Universal Targeted and Untargeted Attacks

In order to classify all videos to a specific class, regardless of the actual activity shown, we used the same methodology to perform a generalized targeted adversarial attack. As can be seen in Table 1, we successfully attacked most of the random-class videos so that the attacked network recognizes them as the adversarial class. Once again, we can optimize with a different set of parameters in order to increase the fooling ratio or reduce the other regularization terms. This attack was performed using a categorical cross-entropy loss, for the margin presented in Equation (3) is less relevant when targeting a batch of different prediction probabilities. Similarly to the targeted attack, an untargeted attack was performed; in this case, the goal is for every video to be misclassified. Before evaluation, every video that was already misclassified was removed from the dataset, providing an initial fooling ratio of zero. The fooling ratio achieved under the given set of parameters is reported in Table 1. The universal targeted attacks were performed on a specific class; other classes may be more robust to adversarial attacks and may require thicker and rougher perturbations.

5.3 Time Invariance

In real-world attacks on streaming video, one cannot synchronize the adversarial pattern with each corresponding frame of the video. Therefore, a practical attack should demand temporal invariance of the adversarial pattern. Similarly to the generalized adversarial attacks described in the previous subsection, a training methodology was applied with a random shift between the adversarial pattern and the video. Table 1 shows the success rate on videos from the test set of various random categories, for single videos and for universal attacks. An example of the pattern over a single video can be seen in Figure 5.
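The random temporal shift used during training can be sketched as a circular roll of the perturbation along the time axis (illustrative code, not the authors' implementation):

```python
import numpy as np

def time_shifted(delta, shift):
    # circularly shift the per-frame perturbation along the time axis
    return np.roll(delta, shift, axis=0)

def train_step_shift(delta, rng):
    # during training, draw a random shift so the optimized pattern
    # stays adversarial no matter where the stream "starts"
    return time_shifted(delta, int(rng.integers(0, delta.shape[0])))

delta = np.arange(12.0).reshape(4, 3)   # 4 frames of RGB triplets
shifted = time_shifted(delta, 1)        # frame t now carries the pattern of t-1
rng = np.random.default_rng(0)
randomized = train_step_shift(delta, rng)
```

Optimizing the same delta under many random shifts is what pushes the solution toward the short repeating sub-pattern visible in Figure 5.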

Figure 5: RGB time-invariant adversarial pattern. The repeating 8-frame sub-pattern emerged from the temporal generalization.

6 Conclusions and future work

The patternless adversarial attack was presented for the first time, over several scenarios summarized in Table 1. This attack has several benefits, such as its relative imperceptibility to the human observer, achieved by small and smooth perturbations. In the single-video attack, the roughness added by the adversarial pattern is on average almost half the roughness of the perturbation itself, since in some cases adding the perturbation reduces the total roughness. Another benefit is undetectability by image-based adversarial pattern detectors, achieved by the patternless perturbation in each frame. In addition, this adversarial pattern is applicable to real-world scenarios, through universal generalization and temporal invariance. Furthermore, this perturbation can be implemented in real-world scenarios because it does not require a complex spatial adversarial pattern to be projected on the scene, but only a simple temporal one. In extreme cases, where regularization causes the pattern to be thick and noticeable to the human observer, such a perturbation can still be relevant for systems without a human in the loop, or for cases where the human observer sees image flickering without realizing that the system is being fooled. In the future, we will expand the current research to include adversarial attacks in a black-box setting.


  1. https://github.com/roipony/Patternless_Adversarial_Video
  2. https://www.tensorflow.org/
  3. https://github.com/deepmind/kinetics-i3d


  1. Towards evaluating the robustness of neural networks. CoRR abs/1608.04644.
  2. Quo vadis, action recognition? A new model and the kinetics dataset. CoRR abs/1705.07750.
  3. SlowFast networks for video recognition. CoRR abs/1812.03982.
  4. Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
  5. Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
  6. FlowNet 2.0: evolution of optical flow estimation with deep networks. CoRR abs/1612.01925.
  7. Adversarial attacks for optical flow-based action recognition classifiers. CoRR abs/1811.11875.
  8. 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 35, pp. 221–231.
  9. Black-box adversarial attacks on video recognition models. ArXiv abs/1911.09449.
  10. The kinetics human action video dataset. CoRR abs/1705.06950.
  11. Adam: a method for stochastic optimization. In International Conference on Learning Representations.
  12. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pp. 1097–1105.
  13. Adversarial machine learning at scale. In 5th International Conference on Learning Representations, ICLR 2017.
  14. Stealthy adversarial perturbations against real-time video classification systems. In 26th Annual Network and Distributed System Security Symposium, NDSS 2019.
  15. A survey on deep learning in medical image analysis. Medical Image Analysis 42, pp. 60–88.
  16. The limitations of deep learning in adversarial settings. CoRR abs/1511.07528.
  17. Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28, pp. 91–99.
  18. Targeted nonlinear adversarial perturbations in images and videos. CoRR abs/1809.00958.
  19. ImageNet large scale visual recognition challenge. CoRR abs/1409.0575.
  20. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH 2014, pp. 338–342.
  21. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39 (4), pp. 640–651.
  22. Two-stream convolutional networks for action recognition in videos. CoRR abs/1406.2199.
  23. Real-world anomaly detection in surveillance videos. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  24. Deep learning face representation by joint identification-verification. In Advances in Neural Information Processing Systems 27, pp. 1988–1996.
  25. Going deeper with convolutions. In Computer Vision and Pattern Recognition (CVPR).
  26. Intriguing properties of neural networks. In International Conference on Learning Representations.
  27. C3D: generic features for video analysis. CoRR abs/1412.0767.
  28. Long-term temporal convolutions for action recognition. CoRR abs/1604.04494.
  29. Human action recognition by learning spatio-temporal features with deep neural networks. IEEE Access 6, pp. 17913–17922.
  30. Non-local neural networks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7794–7803.
  31. Sparse adversarial perturbations for videos. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, pp. 8973–8980.
  32. Heuristic black-box adversarial attacks on video recognition models. ArXiv abs/1911.09449.