Weakly Supervised Action Localization by Sparse Temporal Pooling Network


Phuc Nguyen*
University of California, Irvine
nguyenpx@uci.edu

Ting Liu*
Google, Inc.
liuti@google.com

Gautam Prasad
Google, Inc.
gautamprasad@google.com

Bohyung Han
POSTECH, Korea
bhhan@postech.ac.kr

* Equal contribution.
Abstract

We propose a weakly supervised temporal action localization algorithm on untrimmed videos using convolutional neural networks. Our algorithm predicts temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. This objective is achieved by proposing a novel deep neural network that recognizes actions and identifies a sparse set of key segments associated with the actions through adaptive temporal pooling of video segments. We design the loss function of the network to comprise two terms—one for classification error and the other for sparsity of the selected segments. After recognizing actions with sparse attention weights for key segments, we extract temporal proposals for actions using temporal class activation mappings to estimate time intervals that localize target actions. The proposed algorithm attains state-of-the-art accuracy on the THUMOS14 dataset and outstanding performance on ActivityNet1.3 even with weak supervision.

1 Introduction

Action recognition in videos is a critical problem for high-level video understanding, including tasks such as event detection, video summarization, and visual question answering in videos. Many researchers have investigated the problem extensively over the last decades. The main challenge in action recognition is the lack of appropriate video representation methods. In contrast to the almost immediate success of convolutional neural networks (CNNs) in many image-based visual recognition tasks, applying deep neural networks to video data is not straightforward due to the inherently complex structure of the data, high computational demand, lack of knowledge for modeling temporal information, and so on. Consequently, techniques based on representations from deep learning [15, 25, 31, 36] have not been significantly better than methods relying on hand-crafted visual features [18, 32, 33]. As a result, many existing algorithms attempt to achieve state-of-the-art performance by combining hand-crafted and learned features.

Another issue in this problem is the lack of annotations required for video understanding. Most existing techniques assume trimmed videos for video-level classification or rely on annotated action intervals for temporal localization. Because untrimmed videos typically contain a large number of frames irrelevant to their class labels, both video representation learning and action classification are likely to fail due to the challenge of extracting salient information from raw videos. On the other hand, annotating a large-scale dataset for action detection is prohibitively expensive and time-consuming, making it more practical to develop competitive algorithms that run without such labels.

Figure 1: Overview of the proposed algorithm. Our algorithm takes a two-stream input—RGB and optical flow—for a video, and performs action classification and localization concurrently. For localization, Temporal Class Activation Mappings (T-CAMs) are computed from the two streams and employed to generate one-dimensional temporal action proposals, from which target actions are localized in the temporal domain.

Our goal is to temporally localize actions in untrimmed videos. To this end, we propose a novel deep neural network that has the capability to select a sparse subset of frames useful for action recognition, where the loss function measures the classification error and the sparsity of the frame selection in each video. For localization, Temporal Class Activation Mappings (T-CAMs) are employed to generate one-dimensional temporal action proposals, from which target actions are localized in the temporal domain. Note that we do not exploit any temporal information about the actions in the target dataset during training, and learn models based only on video-level class labels of actions. The overview of our algorithm is illustrated in Figure 1.

The contributions of this paper are summarized below.

  • We introduce a principled deep neural network architecture for weakly supervised action recognition and localization on untrimmed videos, where actions are detected from a sparse subset of frames identified by the network.

  • We present a technique that computes temporal class activation mappings and combines them with the learned attention weights to generate temporal action proposals for localizing target actions.

  • The proposed weakly supervised action localization technique achieves state-of-the-art accuracy on THUMOS14 [14] and outstanding performance in the first public evaluation on ActivityNet1.3 [12].

The rest of this paper is organized as follows. We discuss the related work in Section 2, and describe our action localization algorithm in Section 3. Section 4 presents the details of our experiment and Section 5 concludes this paper.

2 Related Work

We need proper video datasets to learn models for action recognition and detection. There are several existing datasets for action recognition, including UCF101 [30], Sports-1M [15], HMDB51 [17], and AVA [11]. However, they include only trimmed videos, where target actions appear in all frames throughout each video. In contrast, the THUMOS14 [14] and ActivityNet [12] datasets contain background frames, with annotations about which frames are relevant to target actions. Note that each video in THUMOS14 and ActivityNet may have multiple actions, even in the same frame.

Action recognition aims to identify a single action or multiple actions per video and is often formulated as a simple classification problem. In the long history of addressing this problem, the algorithm based on improved dense trajectories [32] presented outstanding performance before deep learning started being used actively. Convolutional neural networks have been very successful in many computer vision problems and have been applied to action recognition tasks. Several algorithms focus on video representation learning and apply the learned representations to action recognition. Two-stream networks [25] and 3D convolutional neural networks (C3D) [31] are popular solutions for video representation, and these techniques, including their variations, are widely used for action recognition. Recently, a combination of the two-stream network and 3D convolution, referred to as I3D [4], was proposed as a generic video representation method. On the other hand, many algorithms develop techniques to learn actions on top of existing representation methods [36, 38, 6, 9, 7, 22].

Action detection and localization is a slightly different problem from action recognition, because it requires the detection of temporal or spatio-temporal volumes containing target actions. Most approaches for this task rely on supervised learning and employ action localization annotations to learn their models. There are various existing methods based on deep learning, including the structured segment network [45], contextual relation learning [29], multi-stage CNNs [24], temporal association of frame-level action detections [10], and techniques using recurrent neural networks [42, 19]. To facilitate action detection and localization, many algorithms employ action proposals [3, 5, 34], which are a straightforward extension of object proposals for object detection in images.

There are only a few approaches based on weakly supervised learning, which rely on video-level labels only to localize actions in the temporal domain. UntrimmedNet [35] first extracts proposals to recognize and detect actions, where softmax functions across class labels and across action proposals are used to obtain action recognition and localization results. However, the use of the softmax function across proposals may not be effective for detecting multiple instances. Hide-and-seek [28] applies the same technique—hiding random regions to force attention learning—to weakly supervised object detection and action localization. This method works well for spatial localization but not in the temporal domain. Both algorithms are motivated by the recent success of weakly supervised object localization in images. In particular, the formulation of UntrimmedNet for action localization relies heavily on the idea proposed in [2].

Figure 2: Neural network architecture for our weakly supervised temporal action localization. We first extract feature representations from a set of uniformly sampled video segments using a pretrained network. The attention module generates attention weights corresponding to the individual features, which are employed to compute a video-level representation by temporal weighted average pooling. The representation is given to the classification module, and an ℓ1 loss is placed on the attention weight vector to enforce the sparsity constraint.

3 Proposed Algorithm

We describe our weakly supervised temporal action localization algorithm, which is based only on video-level action labels. The goal is achieved by designing a deep neural network for video classification based on a sparse subset of segments and by identifying time intervals relevant to target classes.

3.1 Main Idea

We claim that an action can be recognized from a video by identifying a series of key segments presenting important action components. Our algorithm uses a novel deep neural network to predict class labels per video from a subset of segments that are representative of and unique to target actions, selected automatically from the input video. Note that the proposed deep neural network is designed for classification but has the capability to measure the importance of each segment in predicting classification labels. After finding the relevant classes in each video, we estimate the temporal intervals corresponding to the identified actions by computing the temporal attention of individual segments, generating temporal action proposals, and aggregating relevant proposals. Our approach relies only on video-level class labels to perform temporal action localization and presents a principled way to extract key segments and determine appropriate time intervals corresponding to target actions. It is possible to recognize and localize multiple actions in a single video using our framework. The deep neural network architecture for our weakly supervised action recognition component is illustrated in Figure 2. We describe each step of our algorithm below.

3.2 Action Classification

To predict class labels in each video, we first sample a set of video segments from an input video and extract a feature representation from each segment using pretrained convolutional neural networks. Each of these representations is then fed to an attention module that consists of two fully connected (FC) layers with a ReLU layer between them. The output of the second FC layer is given to a sigmoid function, which constrains the generated attention weights to lie between 0 and 1. These attention weights are then used to modulate the temporal average pooling—a weighted sum of the feature vectors—to create a video-level representation. We pass this representation through an FC layer and a sigmoid layer to obtain the class scores.

Formally, let $\mathbf{x}_t \in \mathbb{R}^m$ be the $m$-dimensional feature representation extracted from the video segment centered at time $t$, and $\lambda_t$ be the corresponding attention weight. The video-level representation, denoted by $\bar{\mathbf{x}}$, corresponds to an attention-weighted temporal average pooling, which is given by

$$\bar{\mathbf{x}} = \sum_{t=1}^{T} \lambda_t \mathbf{x}_t, \qquad (1)$$

where $\boldsymbol{\lambda} = (\lambda_1, \dots, \lambda_T)$ is a vector of the scalar outputs of the sigmoid function used to normalize the range of activations, and $T$ is the total number of video segments considered for classification. The attention weight vector $\boldsymbol{\lambda}$ is learned with the sparsity constraint in a class-agnostic way. This is useful for identifying temporal segments relevant to any action of interest and estimating the time intervals for action candidates.
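To make this step concrete, here is a minimal NumPy sketch of the attention module and the attention-weighted pooling of (1). The layer shapes and weight names (w1, b1, w2, b2) are our own illustrative assumptions, not the exact configuration used in the paper.

```python
import numpy as np

def attention_weights(features, w1, b1, w2, b2):
    """Attention module: FC -> ReLU -> FC -> sigmoid, one weight per segment.

    features: (T, m) segment features; w1: (m, h), b1: (h,), w2: (h, 1), b2: (1,)
    are assumed shapes. Returns attention weights of shape (T,), each in (0, 1).
    """
    hidden = np.maximum(features @ w1 + b1, 0.0)   # ReLU between the two FC layers
    logits = hidden @ w2 + b2                      # (T, 1)
    sig = 1.0 / (1.0 + np.exp(-logits))            # sigmoid keeps weights in (0, 1)
    return sig.reshape(-1)

def video_representation(features, lambdas):
    """Attention-weighted temporal average pooling, Eq. (1): x_bar = sum_t lambda_t * x_t."""
    return (lambdas[:, None] * features).sum(axis=0)  # (m,)
```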

The loss function in the proposed network is composed of two terms, the classification loss and the sparsity loss, and is given by

$$\mathcal{L} = \mathcal{L}_{\text{class}} + \beta \cdot \mathcal{L}_{\text{sparsity}}, \qquad (2)$$

where $\mathcal{L}_{\text{class}}$ denotes the classification loss computed on the video level, $\mathcal{L}_{\text{sparsity}}$ is the sparsity loss, and $\beta$ is a constant controlling the trade-off between the two terms. The classification loss is based on the standard multi-label cross-entropy loss between the ground-truth labels and the predictions derived from $\bar{\mathbf{x}}$ (after passing through a few layers as illustrated in Figure 2), while the sparsity loss is given by the $\ell_1$ norm of the attention weights, $\|\boldsymbol{\lambda}\|_1$. Since we apply a sigmoid function to each attention weight $\lambda_t$, all the attention weights are likely to take near-binary values due to the $\ell_1$ loss. Note that integrating the sparsity loss is aligned with our claim that an action can be recognized with a sparse subset of key segments in a video.
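The loss in (2) can be sketched as follows, assuming sigmoid class probabilities and multi-hot video-level labels; the value of beta is illustrative, and the exact classification head follows Figure 2 rather than this simplified function.

```python
import numpy as np

def total_loss(class_probs, labels, lambdas, beta=1e-4, eps=1e-7):
    """Eq. (2): L = L_class + beta * L_sparsity (beta value is illustrative).

    class_probs: (C,) sigmoid outputs of the classification module.
    labels:      (C,) multi-hot ground-truth video-level labels.
    lambdas:     (T,) attention weights from the attention module.
    """
    # Standard multi-label (per-class binary) cross-entropy on the video level.
    p = np.clip(class_probs, eps, 1.0 - eps)
    l_class = -(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p)).mean()
    # Sparsity loss: the l1 norm of the attention weight vector.
    l_sparsity = np.abs(lambdas).sum()
    return l_class + beta * l_sparsity
```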

3.3 Temporal Class Activation Mapping

Figure 3: Illustration of the ground-truth temporal intervals for the ThrowDiscus class, the temporal attention, and the T-CAM for an example video in the THUMOS14 [14] dataset. The horizontal axis in the plots denotes the time index. In this example, the T-CAM values for ThrowDiscus provide accurate action localization information. Note that the temporal attention weights are large at several locations that do not correspond to the ground-truth annotations; this is because the temporal attention weights are trained in a class-agnostic way.

To identify the time intervals corresponding to target actions, we first extract a number of action interval candidates. Based on the idea in [46], we derive a one-dimensional class activation mapping in the temporal domain, referred to as Temporal Class Activation Mapping (T-CAM). Denote by $w_c^k$ the $k$-th element of the classification model parameter $\mathbf{w}_c$ corresponding to class $c$. The input to the final sigmoid layer for class $c$ is

$$s_c = \sum_{k=1}^{m} w_c^k \bar{x}^k = \sum_{k=1}^{m} w_c^k \sum_{t=1}^{T} \lambda_t x_t^k. \qquad (3)$$

The T-CAM, denoted by $\mathbf{a}_t = (a_t^1, a_t^2, \dots, a_t^C)$, indicates the relevance of the representation to individual classes at time step $t$, where each element $a_t^c$ for class $c$ ($c = 1, \dots, C$) is given by

$$a_t^c = \sum_{k=1}^{m} w_c^k x_t^k. \qquad (4)$$
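A possible implementation of (3) and (4) with NumPy is given below; it assumes the classification layer weights are stored as an m×C matrix W, which is an assumption about storage layout rather than a detail from the paper.

```python
import numpy as np

def t_cam(features, W):
    """Eq. (4): a[t, c] = sum_k W[k, c] * x[t, k].

    features: (T, m) segment features; W: (m, C) classification layer weights
    (the storage layout is an assumption). Returns the T-CAM of shape (T, C).
    """
    return features @ W

def class_scores(features, lambdas, W):
    """Eq. (3): s_c is the lambda-weighted temporal sum of the per-segment T-CAM."""
    return (lambdas[:, None] * t_cam(features, W)).sum(axis=0)  # (C,)
```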

Figure 3 illustrates an example of attention weights and T-CAM outputs in a video given by the proposed algorithm. We can observe that the discriminative temporal regions are highlighted by the attention weights and T-CAMs effectively. Note that some temporal intervals with large attention weights do not correspond to large T-CAM values, because such intervals may represent other actions of interest. The attention weights measure the generic actionness of temporal video segments, while T-CAMs present class-specific information.

3.4 Two-stream CNN Models

We employ the recently proposed I3D model [4] to compute the representations of video segments. Using multiple streams of information such as RGB and optical flow has become a standard practice in action recognition and detection [4, 8, 25], as it often provides a significant boost in performance. We also learn two action recognition networks with identical settings as illustrated in Figure 2 for the RGB and the flow stream. Note that we use the I3D network as a feature extraction machine without any fine-tuning on the target datasets. The two separately trained networks are then fused to localize actions in an input video. The procedure is discussed in the following subsection.

3.5 Temporal Action Localization

For an input video, we first identify relevant class labels based on video-level classification scores from the deep neural network described in Section 3.2. For each relevant action, we generate temporal proposals, i.e., one-dimensional time intervals that potentially consist of multiple segments, with their class labels and confidence scores. The proposals correspond to video segments that potentially enclose target actions and are detected using T-CAMs in our algorithm.

To generate temporal proposals, we first compute the T-CAMs from the RGB and the flow streams using (4), denoted by $a_t^{c,\text{RGB}}$ and $a_t^{c,\text{FLOW}}$, and use them to derive the weighted T-CAMs, $\psi_t^{c,\text{RGB}}$ and $\psi_t^{c,\text{FLOW}}$, as

$$\psi_t^{c,\text{RGB}} = \lambda_t^{\text{RGB}} \cdot \sigma\!\left(a_t^{c,\text{RGB}}\right), \qquad (5)$$
$$\psi_t^{c,\text{FLOW}} = \lambda_t^{\text{FLOW}} \cdot \sigma\!\left(a_t^{c,\text{FLOW}}\right), \qquad (6)$$

where $\sigma(\cdot)$ is the sigmoid function. Note that $\lambda_t$ is an element of the sparse vector $\boldsymbol{\lambda}$, and multiplying by $\lambda_t$ can be interpreted as a soft selection of the value from the following sigmoid function. Similar to [46], we apply thresholds to the weighted T-CAMs, $\psi_t^{c,\text{RGB}}$ and $\psi_t^{c,\text{FLOW}}$, to segment these signals. The temporal proposals are then the one-dimensional connected components extracted from each stream independently. It is intuitive to generate action proposals using the weighted T-CAMs, instead of directly from the attention weights, because each proposal should contain a single kind of action. Optionally, we linearly interpolate the weighted T-CAM signals between sampled segments before thresholding to improve the temporal resolution of the proposals.
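The following sketch illustrates how the weighted T-CAMs of (5)-(6) and the proposal extraction could be implemented for a single stream; the threshold and the upsampling factor are illustrative placeholders, since the paper does not fix their values in this section.

```python
import numpy as np

def weighted_tcam(tcam, lambdas):
    """Eqs. (5)-(6): psi[t, c] = lambda_t * sigmoid(a[t, c]) for a single stream."""
    return lambdas[:, None] * (1.0 / (1.0 + np.exp(-tcam)))

def temporal_proposals(psi_c, threshold=0.5, upsample=1):
    """Threshold a per-class weighted T-CAM and return its 1-D connected components.

    psi_c: (T,) weighted T-CAM for one class and one stream. upsample > 1 linearly
    interpolates the signal before thresholding (the optional step in the text).
    The threshold and upsample values are illustrative.
    """
    if upsample > 1:
        grid = np.linspace(0, len(psi_c) - 1, num=(len(psi_c) - 1) * upsample + 1)
        psi_c = np.interp(grid, np.arange(len(psi_c)), psi_c)
    above = psi_c > threshold
    proposals, start = [], None
    for t, flag in enumerate(above):
        if flag and start is None:
            start = t                         # open a new connected component
        elif not flag and start is not None:
            proposals.append((start, t - 1))  # close the component
            start = None
    if start is not None:
        proposals.append((start, len(above) - 1))
    return proposals  # (start, end) indices on the (possibly upsampled) grid
```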

Unlike the original CAM-based bounding box proposals [46], where only the largest bounding box is retained, we keep all the connected components that pass the predefined threshold. Each proposal, defined by its start and end times $[t_{\text{start}}, t_{\text{end}}]$, is assigned a score for class $c$ given by the weighted mean T-CAM of all the frames within the proposal:

$$\frac{1}{t_{\text{end}} - t_{\text{start}} + 1} \sum_{t=t_{\text{start}}}^{t_{\text{end}}} \psi_t^{c,*}, \qquad (7)$$

where $* \in \{\text{RGB}, \text{FLOW}\}$. This value corresponds to the temporal proposal score in each stream for class $c$. Finally, we perform non-maximum suppression among the temporal proposals of each class independently to remove highly overlapping detections.
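A minimal sketch of the proposal scoring in (7) and the per-class temporal non-maximum suppression follows; the IoU threshold used for suppression is an assumption for illustration.

```python
import numpy as np

def proposal_score(psi_c, t_start, t_end):
    """Eq. (7): weighted mean T-CAM over all segments inside [t_start, t_end]."""
    return psi_c[t_start:t_end + 1].mean()

def temporal_nms(proposals, scores, iou_threshold=0.5):
    """Greedy 1-D non-maximum suppression within one class (threshold is illustrative)."""
    order = np.argsort(scores)[::-1]        # highest-scoring proposals first
    keep = []
    for i in order:
        s_i, e_i = proposals[i]
        suppressed = False
        for j in keep:
            s_j, e_j = proposals[j]
            inter = max(0, min(e_i, e_j) - max(s_i, s_j) + 1)
            union = (e_i - s_i + 1) + (e_j - s_j + 1) - inter
            if inter / union > iou_threshold:
                suppressed = True
                break
        if not suppressed:
            keep.append(i)
    return [proposals[i] for i in keep]
```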

3.6 Discussion

Our algorithm attempts to localize actions in untrimmed videos temporally by estimating sparse attention weights and T-CAMs for generic and specific actions, respectively. We believe that the proposed method is principled and novel compared with the existing UntrimmedNet [35] algorithm, because it has a unique deep neural network architecture with classification and sparsity losses, and its action localization procedure is based on a completely different pipeline that leverages class-specific action proposals using T-CAMs. Note that [35] follows a similar framework used in [2], where softmax functions are employed across both action classes and proposals; it has a critical limitation in handling multiple action classes and instances in a single video.

| Supervision | Method | AP@IoU=0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
|---|---|---|---|---|---|---|---|---|---|---|
| Fully supervised | Heilbron et al. [13] | - | - | - | - | 13.5 | - | - | - | - |
| | Richard et al. [21] | 39.7 | 35.7 | 30.0 | 23.2 | 15.2 | - | - | - | - |
| | Shou et al. [24] | 47.7 | 43.5 | 36.3 | 28.7 | 19.0 | 10.3 | 5.3 | - | - |
| | Yeung et al. [42] | 48.9 | 44.0 | 36.0 | 26.4 | 17.1 | - | - | - | - |
| | Yuan et al. [43] | 51.4 | 42.6 | 33.6 | 26.1 | 18.8 | - | - | - | - |
| | Escorcia et al. [5] | - | - | - | - | 13.9 | - | - | - | - |
| | Shou et al. [23] | - | - | 40.1 | 29.4 | 23.3 | 13.1 | 7.9 | - | - |
| | Yuan et al. [44] | 51.0 | 45.2 | 36.5 | 27.8 | 17.8 | - | - | - | - |
| | Xu et al. [41] | 54.5 | 51.5 | 44.8 | 35.6 | 28.9 | - | - | - | - |
| | Zhao et al. [45] | 66.0 | 59.4 | 51.9 | 41.0 | 29.8 | - | - | - | - |
| | Alwassel et al. [1]* | 49.6 | 44.3 | 38.1 | 28.4 | 19.8 | - | - | - | - |
| Weakly supervised | Wang et al. [35] | 44.4 | 37.7 | 28.2 | 21.1 | 13.7 | - | - | - | - |
| | Singh & Lee [28] | 36.4 | 27.8 | 19.5 | 12.7 | 6.8 | - | - | - | - |
| | Ours | 52.0 | 44.7 | 35.5 | 25.8 | 16.9 | 9.9 | 4.3 | 1.2 | 0.1 |
Table 1: Comparison of our algorithm with other recent techniques on THUMOS14. We divide the algorithms into two groups depending on their level of supervision, and each group is sorted chronologically from older to newer. Dashes denote results not reported at that IoU threshold. Our algorithm clearly presents state-of-the-art performance in the weakly supervised setting and is even competitive with many fully supervised approaches. The method marked with an asterisk (*) is available on arXiv only.

4 Experiments

This section first describes the details of our benchmark datasets and the evaluation setup. Then, our algorithm is compared with state-of-the-art techniques based on fully and weakly supervised learning. Finally, we analyze the contribution of individual components in our algorithm.

4.1 Datasets and Evaluation Method

We evaluate the proposed algorithm on two popular action detection benchmark datasets, THUMOS14 [14] and ActivityNet1.3 [12]. Both datasets are untrimmed, meaning that the videos include frames that contain no target action, and we do not exploit the temporal annotations during training. Note that there may exist multiple actions in a single video and even in a single frame in these datasets.

The THUMOS14 dataset has video-level annotations of 101 action classes in its training, validation, and testing sets, and temporal annotations for a subset of videos in the validation and testing sets for 20 classes. We train our model on the 20-class validation subset, which is composed of 200 untrimmed videos, without using the temporal annotations. We evaluate our algorithm on the 212 videos in the 20-class testing subset with temporal annotations. This dataset is challenging, as some videos are relatively long (up to 26 minutes) and contain multiple action instances. The length of an action varies significantly, from less than a second to minutes.

The ActivityNet dataset is a recently introduced benchmark for action recognition and detection in untrimmed videos. We use ActivityNet1.3, which originally consists of 10,024 videos for training, 4,926 for validation, and 5,044 for testing, covering 200 activity classes. (In our experiments, 9,740, 4,791, and 4,911 videos are accessible from YouTube in the training, validation, and testing sets, respectively.) This dataset contains a large number of natural videos that involve various human activities organized under a semantic taxonomy.

We follow the standard evaluation protocol based on mean average precision (mAP) values at several different intersection over union (IoU) thresholds. Evaluation on both datasets is conducted using the benchmarking code for the temporal action localization task provided by ActivityNet (https://github.com/activitynet/ActivityNet/blob/master/Evaluation/). The results on the ActivityNet1.3 testing set are obtained by submitting our predictions to the evaluation server.

4.2 Implementation Details

We employ the two-stream I3D networks [4] trained on the Kinetics dataset [16] to extract features from individual video segments. For the RGB stream, we rescale the smallest dimension of each frame to 256 and take a center crop of size 224×224. For the flow stream, we apply the TV-L1 optical flow algorithm [39] and truncate the flow magnitudes to a fixed range. We add a third channel of all zeros to the optical flow image. The input to I3D is a stack of 16 (RGB or optical flow) frames. To save space and processing time, we subsample the videos at 10 frames per second.

In all experiments, we sample a fixed number of segments at uniform intervals from each video for both training and testing. During training, we perform stratified random perturbation on the sampled segments for data augmentation. The network is trained using the Adam optimizer. We stop training when the localization performance of the model reaches its peak on the training set. At testing time, we first reject classes whose video-level probabilities are below a threshold and then retrieve temporal proposals for the remaining classes. Our algorithm is implemented in TensorFlow.
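As an illustration of the segment sampling described above, the sketch below divides each video into equal strata and jitters the sampled index inside each stratum during training; this is our reading of "stratified random perturbation", and the function name and parameters are hypothetical.

```python
import numpy as np

def sample_segment_indices(num_frames, num_segments, training=True, rng=None):
    """Sample segment indices at uniform intervals over a video (hypothetical helper).

    During training, each index is jittered inside its own stratum; at test time
    the stratum centers are used. The jitter scheme is our interpretation of
    "stratified random perturbation".
    """
    rng = rng or np.random.default_rng()
    stride = num_frames / float(num_segments)
    offsets = (rng.uniform(0.0, 1.0, size=num_segments) if training
               else np.full(num_segments, 0.5))
    indices = ((np.arange(num_segments) + offsets) * stride).astype(int)
    return np.clip(indices, 0, num_frames - 1)
```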

4.3 Results

Table 1 summarizes the test results on the THUMOS14 dataset for all action localization methods published in the past two years. We include both fully and weakly supervised approaches in the table. Our algorithm outperforms the two other existing approaches based on weakly supervised learning [35, 28]. Even with the significant difference in the level of supervision, our algorithm presents performance competitive with several recent fully supervised approaches.

We also present the performance of our algorithm on the validation and testing sets of the ActivityNet1.3 dataset in Tables 2 and 3, respectively. We can see that our algorithm outperforms some fully supervised approaches on both the validation and the testing set. Note that most of the available action localization results on ActivityNet1.3 come from ActivityNet Challenge submissions, and we do not believe they are directly comparable to our algorithm. To our knowledge, this is the first attempt to evaluate weakly supervised action localization performance on this dataset, and we report the results as a baseline for future reference.

Qualitative results on the THUMOS14 dataset are shown in Figure 4. As mentioned in Section 4.1, videos in this dataset are often long and contain many action instances, sometimes from different categories. Figure 4(a) shows an example with many action instances along with our predictions and the corresponding T-CAM signals. Even with video-level labels only, our algorithm effectively pinpoints the temporal boundaries of action instances. In Figure 4(b), the appearances of all frames are similar and there is little motion between frames. Despite this challenge, our model still localizes the target action fairly well. Figure 4(c) illustrates an example of a video containing action instances from two different classes. Visually, the two involved action classes—Shotput and ThrowDiscus—look similar in their appearance (green grass, a person in a blue shirt, a gray platform) and motion patterns (circular throwing). Our algorithm is still able not only to localize the target actions but also to classify the action categories of the proposed windows successfully, although there are several short-term false positives. Figure 4(d) shows an instructional video for JavelinThrow, where our algorithm detects most of the ground-truth action instances but also generates many false positives. There are two causes for the false alarms. First, the ground-truth annotations for JavelinThrow are often missing, causing true detections to be counted as false positives. The second source is related to segments in which the instructors demonstrate javelin throwing but only parts of the action are visible. These segments resemble a real JavelinThrow action in both appearance and motion.

(a) An example of the HammerThrow action.
(b) An example of the VolleyballSpiking action.
(c) An example of the ThrowDiscus (blue) and Shotput (red) actions.
(d) An example of the JavelinThrow action.
Figure 4: Qualitative results on THUMOS14. The horizontal axis in the plots denotes the time index (in seconds). (a) There are many action instances in the input video and our algorithm shows good action localization performance. (b) The appearance of the video remains similar from beginning to end and there is little motion between frames; our model is still able to localize the small time window where the action actually happens. (c) Two different actions appear in a single video and their appearances and motion patterns are similar. Even in this case, the proposed algorithm identifies both actions accurately, although there are some false alarms. (d) Our results contain several false positives, but they often stem from missing ground-truth annotations. Another source of false alarms is the similarity of the observed actions to the target action.
| Supervision | Method | AP@IoU=0.5 | 0.75 | 0.95 |
|---|---|---|---|---|
| Fully supervised | Singh & Cuzzolin [27]* | 34.5 | - | - |
| | Wang & Tao [37]* | 45.1 | 4.1 | 0.0 |
| | Shou et al. [23]* | 45.3 | 26.0 | 0.2 |
| | Xiong et al. [40]* | 39.1 | 23.5 | 5.5 |
| | Montes et al. [20] | 22.5 | - | - |
| | Xu et al. [41] | 26.8 | - | - |
| Weakly supervised | Ours | 29.3 | 16.9 | 2.6 |
Table 2: Results on the ActivityNet1.3 validation set. Dashes denote results not reported at that IoU threshold. The methods marked with an asterisk (*) report ActivityNet Challenge results, may be available on arXiv only, and are not directly comparable to our algorithm. Although [23] shows good accuracy, it is a post-processing result built on [37], making a direct comparison difficult.
| Supervision | Method | mAP |
|---|---|---|
| Fully supervised | Singh & Cuzzolin [27]* | 17.83 |
| | Wang & Tao [37]* | 14.62 |
| | Xiong et al. [40]* | 26.05 |
| | Singh et al. [26] | 17.68 |
| | Zhao et al. [45] | 28.28 |
| Weakly supervised | Ours | 20.07 |
Table 3: Results on the ActivityNet1.3 testing set. The methods marked with an asterisk (*) report ActivityNet Challenge results and are not directly comparable to our algorithm.

4.4 Ablation Study

We investigate the contribution of several components proposed in our weakly supervised architecture and implementation variations. All experiments for the ablation study are performed on the THUMOS14 dataset.

Choice of architectures

Our premise is that an action can be recognized from a sparse subset of segments in a video. When we learn our action classification network, two loss terms—the classification and sparsity losses—are employed. Our baseline is the architecture without the attention module and the sparsity loss, which shares its motivation with the architecture in [46]. We also test another baseline with the attention module but without the sparsity loss. Figure 5 shows comparisons between our baselines and the full model. We observe that both the sparsity loss and the attention-weighted pooling make substantial contributions to the performance improvement.


Figure 5: Performance with respect to architecture choices. The attention module is useful as it allows the model to explicitly focus on the important parts of input videos. Enforcing sparsity in action recognition via the ℓ1 loss gives a significant boost in performance.

Figure 6: Performance with respect to feature choices. Optical flow offers stronger cues than RGB for action localization, and a combination of the two features leads to significant performance improvement.

Choice of features

As mentioned in Section 3.4, the representation of each temporal segment is based on the two-stream I3D network, which employs two sources of information: one from RGB images and the other from optical flow. Figure 6 illustrates the effectiveness of each modality and their combination. When comparing the individual modalities, the flow stream offers stronger performance than the RGB stream. Similar to action recognition, the combination of the two modalities provides a significant performance improvement.

5 Conclusion

We presented a novel weakly supervised temporal action localization technique based on deep neural networks with classification and sparsity losses. Classification is performed by evaluating a video-level representation given by a sparsely weighted mean of segment-level features, where the sparse coefficients are learned with the sparsity loss in our deep neural network. For weakly supervised temporal action localization, one-dimensional action proposals are extracted, from which proposals relevant to target classes are selected to determine the time intervals of actions. The proposed approach achieved state-of-the-art results on the THUMOS14 dataset and, to the best of our knowledge, we are the first to report weakly supervised temporal action localization results on the ActivityNet1.3 dataset.

References

  • [1] H. Alwassel, F. C. Heilbron, and B. Ghanem. Action search: Learning to search for human activities in untrimmed videos. arXiv preprint arXiv:1706.04269, 2017.
  • [2] H. Bilen and A. Vedaldi. Weakly supervised deep detection networks. In CVPR, 2016.
  • [3] S. Buch, V. Escorcia, C. Shen, B. Ghanem, and J. C. Niebles. SST: single-stream temporal action proposals. In CVPR, 2017.
  • [4] J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017.
  • [5] V. Escorcia, F. C. Heilbron, J. C. Niebles, and B. Ghanem. DAPs: deep action proposals for action understanding. In ECCV, 2016.
  • [6] C. Feichtenhofer, A. Pinz, and R. P. Wildes. Spatiotemporal residual networks for video action recognition. In NIPS, 2016.
  • [7] C. Feichtenhofer, A. Pinz, and R. P. Wildes. Spatiotemporal multiplier networks for video action recognition. In CVPR, 2017.
  • [8] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In CVPR, 2016.
  • [9] R. Girdhar, D. Ramanan, A. Gupta, J. Sivic, and B. Russell. Actionvlad: Learning spatio-temporal aggregation for action classification. In CVPR, 2017.
  • [10] G. Gkioxari and J. Malik. Finding action tubes. In CVPR, 2015.
  • [11] C. Gu, C. Sun, S. Vijayanarasimhan, C. Pantofaru, D. A. Ross, G. Toderici, Y. Li, S. Ricco, R. Sukthankar, C. Schmid, and J. Malik. AVA: A video dataset of spatio-temporally localized atomic visual actions. arXiv preprint arXiv:1705.08421, 2017.
  • [12] F. C. Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles. ActivityNet: a large-scale video benchmark for human activity understanding. In CVPR, 2015.
  • [13] F. C. Heilbron, J. C. Niebles, and B. Ghanem. Fast temporal activity proposals for efficient detection of human actions in untrimmed videos. In CVPR, 2016.
  • [14] Y.-G. Jiang, J. Liu, A. R. Zamir, G. Toderici, I. Laptev, M. Shah, and R. Sukthankar. THUMOS challenge: Action recognition with a large number of classes, 2014.
  • [15] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
  • [16] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
  • [17] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: a large video database for human motion recognition. In ICCV, 2011.
  • [18] I. Laptev. On space-time interest points. IJCV, 64(2-3):107–123, 2005.
  • [19] S. Ma, L. Sigal, and S. Sclaroff. Learning activity progression in lstms for activity detection and early detection. In CVPR, 2016.
  • [20] A. Montes, A. Salvador, S. Pascual, and X. Giro-i Nieto. Temporal activity detection in untrimmed videos with recurrent neural networks. In 1st NIPS Workshop on Large Scale Computer Vision Systems (LSCVS), 2016.
  • [21] A. Richard and J. Gall. Temporal action detection using a statistical language model. In CVPR, 2016.
  • [22] Y. Shi, Y. Tian, Y. Wang, W. Zeng, and T. Huang. Learning long-term dependencies for action recognition with a biologically-inspired deep network. In ICCV, 2017.
  • [23] Z. Shou, J. Chan, A. Zareian, K. Miyazawa, and S.-F. Chang. CDC: convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos. CVPR, 2017.
  • [24] Z. Shou, D. Wang, and S.-F. Chang. Temporal action localization in untrimmed videos via multi-stage cnns. In CVPR, 2016.
  • [25] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
  • [26] B. Singh, T. K. Marks, M. Jones, O. Tuzel, and M. Shao. A multi-stream bi-directional recurrent neural network for fine-grained action detection. In CVPR, 2016.
  • [27] G. Singh and F. Cuzzolin. Untrimmed video classification for activity detection: submission to ActivityNet challenge. arXiv preprint arXiv:1607.01979, 2016.
  • [28] K. K. Singh and Y. J. Lee. Hide-and-seek: Forcing a network to be meticulous for weakly-supervised object and action localization. In ICCV, 2017.
  • [29] K. Soomro, H. Idrees, and M. Shah. Action localization in videos through context walk. In ICCV, 2015.
  • [30] K. Soomro, A. R. Zamir, and M. Shah. UCF101: a dataset of 101 human action classes from videos in the wild. Technical Report CRCV-TR-12-01, University of Central Florida, 2012.
  • [31] D. Tran, L. D. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. In ICCV, 2015.
  • [32] H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013.
  • [33] L. Wang, Y. Qiao, and X. Tang. Motionlets: Mid-level 3d parts for human motion recognition. In CVPR, 2013.
  • [34] L. Wang, Y. Qiao, X. Tang, and L. V. Gool. Actionness estimation using hybrid fully convolutional networks. In CVPR, 2016.
  • [35] L. Wang, Y. Xiong, D. Lin, and L. van Gool. Untrimmednets for weakly supervised action recognition and detection. In CVPR, 2017.
  • [36] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In ECCV, 2016.
  • [37] R. Wang and D. Tao. UTS at ActivityNet 2016. ActivityNet Large Scale Activity Recognition Challenge, 2016.
  • [38] Y. Wang, M. Long, J. Wang, and P. S. Yu. Spatiotemporal pyramid network for video action recognition. In CVPR, 2017.
  • [39] A. Wedel, T. Pock, C. Zach, H. Bischof, and D. Cremers. An improved algorithm for TV-L1 optical flow. In Statistical and Geometrical Approaches to Visual Motion Analysis. Springer, 2009.
  • [40] Y. Xiong, Y. Zhao, L. Wang, D. Lin, and X. Tang. A pursuit of temporal accuracy in general activity detection. arXiv preprint arXiv:1703.02716, 2017.
  • [41] H. Xu, A. Das, and K. Saenko. R-C3D: region convolutional 3d network for temporal activity detection. In ICCV, 2017.
  • [42] S. Yeung, O. Russakovsky, G. Mori, and L. Fei-Fei. End-to-end learning of action detection from frame glimpses in videos. In CVPR, 2016.
  • [43] J. Yuan, B. Ni, X. Yang, and A. A. Kassim. Temporal action localization with pyramid of score distribution features. In CVPR, 2016.
  • [44] Z. Yuan, J. C. Stroud, T. Lu, and J. Deng. Temporal action localization by structured maximal sums. In CVPR, 2017.
  • [45] Y. Zhao, Y. Xiong, L. Wang, Z. Wu, X. Tang, and D. Lin. Temporal action detection with structured segment networks. In ICCV, 2017.
  • [46] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In CVPR, 2016.