Temporal Bilinear Networks for Video Action Recognition
Abstract
Temporal modeling in videos is a fundamental yet challenging problem in computer vision. In this paper, we propose a novel Temporal Bilinear (TB) model to capture the temporal pairwise feature interactions between adjacent frames. Compared with existing temporal methods that are limited to linear transformations, our TB model considers explicit quadratic bilinear transformations in the temporal domain for motion evolution and sequential relation modeling. We further leverage the factorized bilinear model, which has linear complexity, and a bottleneck network design to build our TB blocks, which constrains the number of parameters and the computation cost. We consider two schemes for combining TB blocks with the original 2D spatial convolutions, namely wide and deep Temporal Bilinear Networks (TBN). Finally, we perform experiments on several widely adopted datasets including Kinetics, UCF101 and HMDB51. The effectiveness of our TBNs is validated by comprehensive ablation analyses and comparisons with various state-of-the-art methods.
Yanghao Li, Sijie Song, Yuqi Li, Jiaying Liu†
Peking University
lyttonhao@gmail.com, ssj940920@pku.edu.cn, liyuqi.ne@gmail.com, liujiaying@pku.edu.cn
†Corresponding author. This work was supported by National Natural Science Foundation of China under contract No. 61772043 and the Peking University-Tencent Rhino Bird Innovation Fund.
Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Introduction
Deep convolutional neural networks (CNNs) (?; ?) have driven tremendous progress in computer vision over the past few years. CNNs have demonstrated their power in many visual tasks, from image classification (?; ?), object detection (?; ?) and semantic segmentation (?) to video action recognition (?; ?). However, progress in video action recognition has been relatively slow. One of the main challenges in this area is modeling both spatial appearance and temporal motion across different frames. Therefore, researchers in video action recognition have recently devoted most of their efforts to effectively modeling temporal dynamics in deep architectures.
There are three typical schemes of temporal motion modeling for deep learning methods in action recognition. (1) Some methods capture temporal dependencies by utilizing temporal pooling (?) or recurrent layers (?; ?) on top of 2D CNNs. However, the visual features are extracted independently by the CNNs in a frame-wise manner, and the recurrent layers are fed merely one-dimensional high-level semantic features without spatial information. As a result, temporal dynamics, especially subtle motion dynamics, are ignored in the preceding CNNs. (2) Following the two-stream architecture (?; ?), many methods capture appearance and motion information separately with different stream networks, taking RGB and optical flow as input. Despite the good results, this design potentially prevents the model from fully utilizing the appearance and motion information in a single network. Furthermore, the estimation of optical flow is very time- and resource-consuming, making these methods unsuitable for existing large-scale datasets (?). (3) Different from the above methods which use 2D CNNs, methods like C3D (?) adopt 3D convolution operators to learn spatiotemporal structures directly from RGB frames. Recently, 3D CNNs have started to demonstrate their effectiveness on some large-scale action recognition datasets (?), but the temporal modeling in 3D convolution operators is still limited to linear transformations. It is unclear whether 3D convolution operators are effective enough to capture complex temporal relations across frames. In addition, 3D convolution operators also introduce many more parameters.
In this paper, we propose a new Temporal Bilinear (TB) model to enhance the capacity of CNNs to model spatiotemporal dependencies. Specifically, the TB model employs a bilinear transformation to capture pairwise interactions among CNN features of adjacent frames for video action recognition. We believe these explicit temporal quadratic transformations on adjacent frames are more powerful for modeling complex motion relations in the temporal domain. We insert the TB model into the original 2D CNNs to model appearance and motion information simultaneously. At the same time, through multiple TB blocks embedded in different layers of TBNs, multiple levels of temporal dynamics can be captured with different temporal receptive fields.
To avoid the quadratic explosion of parameters in bilinear models, we explore the factorized bilinear model (?) in the temporal domain for video data. To further reduce the complexity of our TB model, we build the temporal bilinear block with a bottleneck structure, which stacks two temporal convolutional layers with one TB layer in between. The TB model can be built from existing common neural network layers, so it can be trained effectively and seamlessly with the whole network.
To summarize, the contributions of this paper are threefold:

We present a novel Temporal Bilinear (TB) model to capture temporal pairwise feature interactions across adjacent frames. By incorporating the TB model into 2D CNNs, the original 2D convolution operators are complemented to capture both appearance context and motion dynamics. We also construct two different TBNs, the Wide and Deep Temporal Bilinear Networks (WTBN and DTBN), to explore effective combinations of 2D convolution layers and the TB model.

The TB model is implemented based on the factorized bilinear model, which has linear complexity. To further reduce the computation cost, we leverage a bottleneck design to build the TB block. Thus, the complexity of our model is much lower than that of 3D convolution operators in terms of both parameters and computation cost.

The effectiveness of our approach is validated on several standard benchmarks, including Kinetics (?), UCF101 (?) and HMDB51 (?). Our proposed method achieves superior or comparable results to state-of-the-art methods.
Related Work
Deep learning for action recognition.
After the breakthrough of deep learning in image recognition (?), many research works began to apply deep neural networks to video action recognition (?; ?; ?; ?). In (?), several temporal fusion strategies were explored when applying 2D CNNs to video data, but the performance was not satisfactory compared to traditional hand-crafted features (?). Later, Long Short-Term Memory (LSTM) networks appended after CNNs were investigated for better sequence modeling in action recognition (?; ?). High-level semantic features from the CNNs are fed into the following recurrent layers. Thus, it is hard for the network to capture subtle motion dynamics across frames, even though the LSTM and CNNs are jointly end-to-end trainable.
In (?), the two-stream architecture was proposed for action recognition, which contains one spatial stream fed with RGB data and one temporal stream taking optical flow as input. The final results of the two-stream networks are obtained by fusing their softmax scores. Based on the two-stream structure, Temporal Segment Network (?) further performed sparse sampling and temporal fusion to capture global structures in videos, and achieved state-of-the-art results on UCF101 (?) and HMDB51 (?). To obtain powerful video-level representations with two-stream networks, ActionVLAD (?) incorporated learnable spatiotemporal aggregation methods into the networks. One dilemma of two-stream networks lies in the inefficient extraction of optical flow, especially for large-scale datasets (?) and practical applications.
Another typical approach for CNN-based action recognition is 3D CNNs, which extend convolution operations into the temporal domain (?). By incorporating 3D convolutional layers and 3D pooling layers, C3D (?) proposed a standard 3D CNN architecture for generic feature extraction. Based on a more powerful CNN architecture, I3D (?) inflated the 2D convolutional filters of the Inception (?) architecture into 3D convolutions and achieved state-of-the-art results on the large-scale Kinetics dataset (?). The most obvious problem for 3D CNNs is that they inevitably bring in many more parameters. Therefore, methods like (?) factorized the 3D convolutional kernel into a 2D spatial kernel and a 1D temporal kernel to reduce parameters.
Instead of using optical flow as input as in two-stream methods, our work directly learns spatiotemporal features from RGB frames, similar to 3D CNNs. Without 3D convolutional filters, our proposed TB blocks directly capture temporal evolution between adjacent frames. Compared with 3D convolution filters that apply a linear transformation across different frames, our TB model enhances the capacity of the network by modeling temporal pairwise interactions. Due to the factorized bilinear scheme and the bottleneck structure design, our TBNs have far fewer parameters than 3D CNNs.
Bilinear models.
Bilinear Pooling (?) first incorporated bilinear models with CNNs for fine-grained image recognition. It computes a global bilinear descriptor by average pooling the outer product of features from the final convolutional layer. Since the dimension of bilinear descriptors can be very large, several methods were proposed to reduce this quadratic dimensionality. Compact Bilinear Pooling (?) presented two approximation methods to obtain compact bilinear representations. Furthermore, the Factorized Bilinear model (?) was proposed as a generalized bilinear model which extends to convolutional layers and has linear complexity. In (?), a Second-Order Response Transform approach was proposed, which appends an element-wise product to a two-branch network module. Different from the above bilinear models for image recognition, we apply bilinear models in the temporal domain for video data, aiming to improve the learning of pairwise motion relations and dependencies between adjacent frames.
Another related work is the Spatiotemporal Pyramid Network (?) for action recognition. This model was based on the two-stream architecture and utilized compact bilinear pooling to fuse high-level spatial and temporal features extracted independently from the CNNs. In our work, TB blocks are introduced to model temporal relations between adjacent frames and can be embedded at different levels of the CNNs, which is more flexible. Meanwhile, our TB model is also combined with the original 2D convolutional layers to jointly capture spatiotemporal structure for video action recognition.
Temporal Bilinear Networks
In this section, we describe our proposed method for video action recognition. First, we discuss existing standard temporal modeling methods. Next, we elaborate on the details of our proposed Temporal Bilinear model. Finally, we introduce our design of the Temporal Bilinear block and explain how we incorporate it into current 2D CNNs.
Temporal Modeling Methods
Suppose we have the input features (usually the filter responses of one layer in the network) of $T$ frames. For simplicity, here we assume the features from each time step are one-dimensional and denote them as $\{x_1, x_2, \ldots, x_T\}$, where $x_t \in \mathbb{R}^{C}$ and $C$ is the feature dimension. We aim to aggregate these features in the temporal domain for temporal modeling. The output signals of the temporal modeling are defined as $\{y_1, y_2, \ldots, y_{T'}\}$, where $y_t \in \mathbb{R}^{C'}$ and $T'$ is the output temporal dimension. In practice, each output feature corresponds to several consecutive input frames. In the following, we consider one output signal $y_t$ and its corresponding input features, which are centered at $x_t$. Then the temporal modeling methods can be formulated as follows:

$y_t = F(x_{t-\lfloor K/2 \rfloor}, \ldots, x_t, \ldots, x_{t+\lfloor K/2 \rfloor}),$   (1)

where $K$ is the number of considered input frames (e.g., the kernel size in the pooling and convolutional layers) and $F$ is the aggregation function. Next we discuss two standard temporal modeling methods for the aggregation function $F$.
Temporal Pooling.
A natural choice of aggregation function is temporal pooling (?; ?), which extends the traditional spatial pooling layers to the temporal domain. The common pooling strategies are max or average pooling:

$y_t^j = \max_{-\lfloor K/2 \rfloor \le k \le \lfloor K/2 \rfloor} x_{t+k}^j \quad \text{or} \quad y_t^j = \frac{1}{K} \sum_{k=-\lfloor K/2 \rfloor}^{\lfloor K/2 \rfloor} x_{t+k}^j,$   (2)

where $x_t^j$ and $y_t^j$ are the $j$-th elements of $x_t$ and $y_t$, respectively. Such pooling operations are easy to implement and fast to compute, but they ignore valuable implicit relations between different frames.
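As a minimal sketch of Eq. (2), temporal max/average pooling over a window of $K$ frames can be written in a few lines of NumPy (the window size $K=3$, border replication padding, and feature sizes below are illustrative choices, not the paper's exact configuration):

```python
import numpy as np

def temporal_pool(x, K=3, mode="avg"):
    """Aggregate features over a temporal window of K frames, per Eq. (2).

    x: array of shape (T, C) -- one C-dim feature per frame.
    Returns y of shape (T, C); borders are padded by edge replication.
    """
    half = K // 2
    xp = np.pad(x, ((half, half), (0, 0)), mode="edge")
    # Stack the K-frame neighborhood of every output step: shape (T, K, C).
    windows = np.stack([xp[t:t + K] for t in range(x.shape[0])])
    return windows.max(axis=1) if mode == "max" else windows.mean(axis=1)

x = np.arange(12, dtype=float).reshape(4, 3)   # T=4 frames, C=3 features
y_avg = temporal_pool(x, K=3, mode="avg")
y_max = temporal_pool(x, K=3, mode="max")
```

Because each output element depends on its temporal neighbors only through a fixed max/mean, no parameters are learned here, which is exactly the limitation discussed above.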
Temporal Convolution.
Similar to temporal pooling, the temporal convolution (?; ?) extends the spatial convolution operator. It performs a learnable transformation on the input frames as follows:

$y_t^j = \sum_{k=-\lfloor K/2 \rfloor}^{\lfloor K/2 \rfloor} (w_k^j)^\top x_{t+k},$   (3)

where $W^j = [w_{-\lfloor K/2 \rfloor}^j, \ldots, w_{\lfloor K/2 \rfloor}^j]$ is the weight matrix for the $j$-th output neuron. Temporal convolution learns the transformation within several adjacent frames. However, the expressiveness of such a linear transformation is limited for modeling complex motion structures.
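A corresponding sketch of the temporal convolution in Eq. (3), with one learnable weight matrix per temporal offset (the window size $K=3$, edge padding, and random weights are illustrative assumptions):

```python
import numpy as np

def temporal_conv(x, W):
    """Temporal convolution per Eq. (3).

    x: (T, C) input features; W: (K, C_out, C), one weight matrix per offset.
    Each output y_t sums linear transforms of the K neighboring frames.
    """
    K, C_out, C = W.shape
    half = K // 2
    xp = np.pad(x, ((half, half), (0, 0)), mode="edge")
    T = x.shape[0]
    y = np.zeros((T, C_out))
    for t in range(T):
        for k in range(K):
            y[t] += W[k] @ xp[t + k]
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 4))        # T=5 frames, C=4 features
W = rng.standard_normal((3, 2, 4))     # K=3, C_out=2
y = temporal_conv(x, W)
```

Note that each output is still linear in every input frame; there is no term in which two frames multiply each other, which is the gap the TB model fills.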
These temporal modeling methods could also be combined with spatial domain operators and extended to 3D pooling or convolutional layers (?; ?).
Temporal Bilinear Model
The existing temporal modeling methods above lack the capacity to explicitly capture interactions between adjacent frames. This motivates us to exploit bilinear transformations through a novel Temporal Bilinear model for video action recognition.
Formulation.
Following the bilinear models in image recognition (?; ?), we define a generic temporal bilinear operation in deep neural networks as:

$y^j = x_t^\top W^j x_{t+1},$   (4)

where $W^j \in \mathbb{R}^{C \times C}$ is the interaction weight matrix between the two adjacent frames. Since each time we only consider one output neuron $y^j$, we omit this subscript for simplicity.
Although the above bilinear model is capable of capturing temporal interactions, it introduces a quadratic number of parameters in the weight matrix $W$. Following the factorization scheme in (?), we adopt a factorized bilinear weight to reduce the computation cost and parameter complexity as follows:

$y = x_t^\top F^\top F x_{t+1},$   (5)

where $F \in \mathbb{R}^{k \times C}$ is the factorized interaction weight between the $t$-th and $(t{+}1)$-th input frames, with $k$ factors. The factor number $k$ constrains the complexity of the TB model. To explain the TB model more clearly, Eq. (5) can be expanded as:
$y = \sum_{i=1}^{C} \sum_{j=1}^{C} \langle f_i, f_j \rangle\, x_t^i\, x_{t+1}^j,$   (6)

where $x_t^i$ and $x_{t+1}^j$ correspond to the $i$-th and $j$-th variables of the input features $x_t$ and $x_{t+1}$, $f_i$ is the $i$-th column of $F$, and $\langle f_i, f_j \rangle$ calculates the inner product of $f_i$ and $f_j$. Therefore, each pair of variables between two adjacent frames has its own explicit interaction weight. Meanwhile, the shared factors also reduce the risk of overfitting in the bilinear model.
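To make the factorization concrete, the following sketch (all dimensions are illustrative) checks numerically that the factorized form of Eq. (5) equals both the full bilinear form of Eq. (4) with $W = F^\top F$ and the expanded pairwise sum of Eq. (6):

```python
import numpy as np

rng = np.random.default_rng(0)
C, k = 6, 3                                  # feature dim and factor number
x_t  = rng.standard_normal(C)                # features of frame t
x_t1 = rng.standard_normal(C)                # features of frame t+1
F = rng.standard_normal((k, C))              # factorized weight, Eq. (5)

# Eq. (5): factorized bilinear form, only O(kC) computation.
y_fact = (F @ x_t) @ (F @ x_t1)

# Eq. (4): full bilinear form with W = F^T F, O(C^2) computation.
y_full = x_t @ (F.T @ F) @ x_t1

# Eq. (6): explicit pairwise interactions <f_i, f_j> x_t^i x_{t+1}^j.
y_sum = sum(F[:, i] @ F[:, j] * x_t[i] * x_t1[j]
            for i in range(C) for j in range(C))
```

The factorized route never materializes the $C \times C$ matrix $W$, which is the source of the linear complexity claimed above.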
Instantiation.
For video action recognition, the input feature of each frame is usually three-dimensional. To implement the above TB model of Eq. (5) efficiently, we propose a general TB module for 3D feature maps using existing common neural network operators. Suppose the input feature map is $X \in \mathbb{R}^{T \times C \times H \times W}$, where $C$ is the number of feature channels, and $H$ and $W$ are the height and width of the feature map. Figure 1 shows an example of the TB module. Here the output temporal dimension is the same as the input, i.e., $T' = T$.

The TB module is based on Eq. (5) and extended to the spatial domain. First, a $1 \times 1$ convolution operator with $k$ filters calculates the transformation $Fx_t$. Then we use a temporal shift operator, which can be implemented with indexing operators, to create the tensor $\{Fx_2, \ldots, Fx_T, Fx_T\}$, whose time index starts from 2 (the last element is padded with the $T$-th frame). Finally, we utilize element-wise multiplication and a sum over the factor axis to obtain the final TB result for each spatial and temporal element. Therefore, the TB model can be easily implemented and incorporated into standard CNN architectures. Note that, different from the factorization implementation in (?), our TB module is built on standard neural network operators available in common deep learning platforms. Thus, it can be fully optimized by standard optimization libraries such as cuDNN (?).
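A minimal NumPy sketch of this module on a $(T, C, H, W)$ feature map; sharing one set of $k$ filters over all spatial positions (a $1 \times 1$ convolution then reduces to a matrix product per location) is an assumption of this sketch, as are the tensor sizes:

```python
import numpy as np

def tb_module(X, F):
    """Sketch of the Temporal Bilinear module.

    X: (T, C, H, W) feature map; F: (k, C) factor weights.
    Returns Y of shape (T, H, W): one TB response per spatio-temporal element.
    """
    # 1x1 convolution with k filters: project channels C -> k factors.
    P = np.einsum("kc,tchw->tkhw", F, X)
    # Temporal shift: time index starts from 2; last element padded with frame T.
    P_next = np.concatenate([P[1:], P[-1:]], axis=0)
    # Element-wise multiplication and sum over the factor axis.
    return (P * P_next).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6, 2, 2))   # T=4, C=6, H=W=2
F = rng.standard_normal((3, 6))         # k=3 factors
Y = tb_module(X, F)
```

At every spatial location this reproduces Eq. (5): the response at time $t$ equals $x_t^\top F^\top F x_{t+1}$ for the features at that location.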
Method  Parameter  Computation  Temporal RFS
2D Conv ($3 \times 3$)  $9C^2$  O($9C^2$)  1
3D Conv ($3 \times 3 \times 3$)  $27C^2$  O($27C^2$)  3
TB Block  $kC^2$  O($kC^2$)  2
Bottleneck TB Block  $\frac{3}{2}C^2 + \frac{k}{16}C^2$  O($\frac{3}{2}C^2 + \frac{k}{16}C^2$)  6
Temporal Bilinear Block.
Although the complexity of the TB model in Eq. (5) is linear in the factor number $k$ and the feature dimension $C$, it is still $k$ times larger than that of convolution operators. Following the bottleneck design of (?), we build a bottleneck Temporal Bilinear block to further reduce computation, as shown in Figure 2. Since our TB block focuses on temporal modeling, we place two temporal convolution operators before and after the TB module. The numbers of output channels of the first temporal convolution and of the TB module are set to $C/4$. This reduces the computation of the TB module to 1/16 of its original cost. Table 1 compares the complexity and temporal Receptive Field Size (RFS) of the proposed TB blocks and 2D/3D convolution operators. As indicated, our bottleneck TB block reduces the parameter and computation complexity to roughly one third, even lower than that of the 2D convolution operator. Further, the bottleneck TB block also achieves a larger temporal RFS due to the stacked combination of temporal convolutions.
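As a back-of-the-envelope check of the bottleneck design, parameter counts can be compared directly. The channel width C, factor number k, and temporal kernel size K below are illustrative assumptions for this sketch, not values stated in the paper:

```python
# Illustrative parameter counts for one block (C = channels, k = factors,
# K = temporal kernel size). The constants here are sketch assumptions.
def tb_params(C, k):
    # One factor matrix F of shape (k, C) per output channel: k*C*C in total.
    return k * C * C

def bottleneck_tb_params(C, k, K=3):
    reduce_conv = K * C * (C // 4)          # temporal conv: C -> C/4
    tb = k * (C // 4) * (C // 4)            # TB module on C/4 channels
    expand_conv = K * (C // 4) * C          # temporal conv: C/4 -> C
    return reduce_conv + tb + expand_conv

C, k = 256, 8
plain = tb_params(C, k)
bottleneck = bottleneck_tb_params(C, k)
```

Under these assumptions the TB module inside the bottleneck shrinks by exactly 16x (it operates on C/4 channels in and out), and the whole bottleneck block is far cheaper than a plain TB block.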
Temporal Bilinear Networks
Our TB blocks can be flexibly incorporated into standard CNNs. In this paper we adopt ResNet (?) owing to its good performance and simplicity. We first introduce the 2D ResNet baselines and then describe our proposed TBNs.
2D CNN Baseline.
layer name  C2D ResNet-18  C3D ResNet-18
conv1  64, stride  64, stride
res1-res4  residual stages  residual stages
  global average pooling, fc  global average pooling, fc
In this paper, we adopt 2D ResNet and C3D ResNet as our baseline CNN structures. Table 2 shows the 2D ResNet-18 and C3D ResNet-18 structures. The input video clip consists of 8 frames. Note that the 2D kernels in ResNet-18 are equivalent to 3D kernels with a temporal extent of 1.
Wide and Deep TBNs.
Our TB blocks can be flexibly inserted into both C2D and C3D ResNet. Since the TB blocks are responsible for temporal modeling, we mainly focus on incorporating them into C2D ResNet and compare the full model with C3D ResNet to validate its effectiveness in the temporal domain. Building on C2D ResNet also reduces the parameter and computation complexity. To combine with the spatial convolution operators in the residual block, we investigate parallel and serial integration schemes.
Figure 3 shows the structures of the proposed Wide and Deep TB blocks in terms of the integration scheme. Note that we also add the identity path alongside the convolution to keep the original appearance stream. In the Wide TB block, the appearance and temporal information is therefore learned in two parallel paths, while in the Deep TB block, temporal modeling is appended after the spatial convolutions. Finally, we replace the original block (Figure 2(a)) in ResNet with our proposed TB blocks to construct Wide or Deep TBNs.
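The two integration schemes can be summarized functionally. In this sketch, `spatial` and `tb` are stand-ins for a 2D-convolution block and a TB block, and the residual identity connection is assumed per the description above:

```python
# Functional sketch of the two integration schemes. `spatial` and `tb`
# stand in for a 2D convolution block and a TB block, respectively.
def wide_tb_block(x, spatial, tb):
    # Wide: parallel appearance (spatial) and temporal (TB) paths + identity.
    return x + spatial(x) + tb(x)

def deep_tb_block(x, spatial, tb):
    # Deep: temporal modeling appended after the spatial convolutions.
    h = spatial(x)
    return x + tb(h)

# Toy stand-ins to illustrate the data flow on scalars.
double = lambda v: 2 * v
negate = lambda v: -v
assert wide_tb_block(1.0, double, negate) == 1.0 + 2.0 - 1.0
assert deep_tb_block(1.0, double, negate) == 1.0 - 2.0
```

The wide scheme lets the two paths specialize independently, while the deep scheme forces the TB block to operate on already spatially-abstracted features.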
Experiments
In this section, we first conduct comprehensive ablation studies on the Mini-Kinetics-200 dataset (?). Then we compare the results with state-of-the-art methods on the Kinetics (?), UCF101 (?) and HMDB51 (?) datasets. The proposed Wide and Deep TBNs are denoted by WTBN and DTBN, respectively.
Implementation Details.
Our models are trained on the training set of the Kinetics dataset from scratch. All the network weights are initialized by the method in (?). Following (?), the networks take short clips as input, with a frame sampling stride of 4. The video frames are scaled and then randomly cropped. We train our models for 150 epochs with an initial learning rate of 0.1, which is decayed by a factor of 10 after 45, 90 and 125 epochs. We use SGD as the optimizer with a weight decay of 0.0005 and a batch size of 384. Standard augmentation methods such as random cropping and random flipping are adopted during training for all methods. For TBNs, we fix the factor number $k$ and also adopt the DropFactor scheme (?) to mitigate overfitting.
For testing, following the common evaluation scheme (?; ?), we uniformly sample 15 clips from input videos and then generate 10 crops for each clip. The final prediction results are obtained by averaging scores of all the clips.
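The clip-level score averaging used at test time (15 clips x 10 crops, per the protocol above) amounts to a simple mean over per-crop class scores. A sketch with placeholder random scores in place of real network outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
num_clips, num_crops, num_classes = 15, 10, 400
# Placeholder per-crop class scores for one video (real scores would come
# from the network's softmax output).
scores = rng.random((num_clips, num_crops, num_classes))
# Final video-level score: average over all clips and crops.
video_score = scores.mean(axis=(0, 1))
prediction = int(video_score.argmax())
```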
Ablation Study
In this section, we investigate the design of TBNs through different ablation experiments. Since the full Kinetics dataset is quite large, we adopt Mini-Kinetics-200 (?) for evaluation to speed up the experiments. It consists of the 200 Kinetics categories with the most training examples, with 80k and 5k videos in the training and validation sets, respectively. For the baselines and TBNs, we use ResNet-18 as the default backbone.
Method  Backbone  Top-1  Top-5
CNN+LSTM (?)  ResNet-50  57.0  79.0
RGB-Stream (?)  ResNet-50  56.0  77.3
C3D (?)  VGG-11  56.1  79.5
I3D-RGB (?)  Inception  68.4  88.0
3D-Res (?)  ResNet-34  58.0  81.3
TSN-RGB (?)  Inception  69.1  88.7
C2D / TBN  ResNet-18  61.1 / 65.0  83.7 / 86.4
C2D / TBN  ResNet-34  65.4 / 69.5  86.4 / 88.9
C2D / TBN  ResNet-50  66.9 / 70.1  87.2 / 89.3
Stage to embed TB blocks.
Table 2(a) compares the results of ResNet-18 with TB blocks embedded in different stages. We replace the two ResNet blocks in one stage with our Wide or Deep TB blocks. We can see that each TBN leads to significant improvements (around 3% to 5%) over the C2D baseline for both WTBN and DTBN, which demonstrates the effectiveness of our TB model compared to the simple temporal pooling in C2D. The improvement of TB blocks in the last stage is relatively smaller, probably because it is more related to high-level semantic features with insufficient temporal motion information.
Number of TB blocks.
We also investigate adding more TB blocks to the TBNs, as shown in Table 2(b). We add 2, 4 and 6 TB blocks, spread over one, two and three stages, to WTBN and DTBN, respectively. Table 2(b) shows that more TB blocks lead to better results in general, which validates the temporal modeling capacity of TBNs. It also demonstrates that multiple TB blocks with larger temporal receptive fields perform better at modeling long-term temporal dependencies.
Bottleneck design.
To validate the effectiveness of our bottleneck structure design in Figure 2, we compare the results of TBNs with and without the bottleneck at the top of Table 2(c). Both TBNs use 4 TB blocks. The results demonstrate that the bottleneck structure not only improves the performance (by 2.2%) but also substantially reduces the number of parameters (by 39%). Note that compared to the C2D baseline, our bottleneck TBN achieves a significant improvement in performance with only 14% additional parameters.
Combined with C3D ResNet.
In the above comparisons, the backbone network of the TBNs is C2D ResNet. We further study the performance of adding TB blocks to C3D ResNet. The bottom of Table 2(c) shows the results of adding 2 TB blocks to C2D and C3D, respectively. From the results, we can see that our WTBN C3D improves on C3D by 1% with an almost equal number of parameters. This demonstrates that our TB blocks capture information complementary to 3D temporal convolutions. WTBN C3D does not achieve higher performance than WTBN C2D, mainly because of the gradual decrease of the temporal dimension in C3D, as shown in Table 2, which weakens the temporal modeling capacity of the TB blocks.
Method  Pre-train  Backbone  UCF101  HMDB51
Two-Stream RGB (?)  ImageNet  VGG-M  73.0  40.5
TDD Spatial (?)  ImageNet  VGG-M  82.8  50.0
Res-RGB (?)  ImageNet  ResNet-50  82.3  43.4
TSN-RGB (?)  ImageNet  Inception  85.1  51.0
I3D-RGB (?)  ImageNet  Inception  84.5  49.8
C2D  ImageNet  ResNet-18  76.9  41.2
TBN  ImageNet  ResNet-18  77.8  42.7
TBN  ImageNet  ResNet-34  81.4  46.4
C3D (?)  Sports-1M  VGG-11  82.3  51.6
TSN-RGB (?)  ImageNet+Kinetics  Inception  91.1  -
I3D-RGB (?)  ImageNet+Kinetics  Inception  95.6  74.8
C2D  Kinetics  ResNet-18  85.0  53.9
TBN  Kinetics  ResNet-18  89.6  62.2
TBN  Kinetics  ResNet-34  93.6  69.4
Evaluation on Multiple Datasets
In this section, we compare our TBNs with other methods on multiple datasets, including Kinetics, UCF101 and HMDB51. Since the performance of WTBN and DTBN is only slightly different, here we adopt WTBN with ResNet-18, ResNet-34 and ResNet-50 as our backbone networks. In WTBN, we add 6 TB blocks (over three stages) for ResNet-18, 5 TB blocks (over two stages) for ResNet-34, and 2 TB blocks (in one stage) for ResNet-50.
Results on Full Kinetics Dataset
Kinetics (?) is a largescale video action recognition dataset, which contains around 240k training videos and 20k validation videos with 400 action classes.
Table 4 shows the results compared to some state-of-the-art methods. For a fair comparison, we consider the methods that only use RGB as input and are trained from scratch. Our proposed TBNs significantly improve on the baseline methods. Meanwhile, our TBN with ResNet-18 achieves results comparable to C2D with a ResNet-34 backbone, which has nearly twice the number of parameters. This demonstrates that the improvement of TBN is not simply due to increased depth and is complementary to using deeper networks. Compared to the recent state-of-the-art I3D (?) and TSN (?), our method achieves the best Top-1 accuracy (70.1%). Note that currently published state-of-the-art methods can achieve higher performance, like (?), by utilizing more modalities, larger spatial resolutions and deeper structures.
Results on UCF101 and HMDB51 Datasets
We transfer the learned TBN models to two widely adopted action recognition datasets: UCF101 (?) and HMDB51 (?). UCF101 contains 13,320 videos with 101 action classes, while HMDB51 has 6,766 videos from 51 action categories. We use the models trained on Kinetics or ImageNet as initialization and report the accuracy averaged over three splits. For fine-tuning, we use the same settings as for Kinetics, but change the learning rate to 0.001 with 100 training epochs in total.
The results are summarized in Table 5. Our TBN consistently outperforms the baseline C2D method, regardless of the dataset employed for pre-training. Better results are obtained with ResNet-34, and the performance is further improved by pre-training on Kinetics (e.g., from 81.4% to 93.6% on UCF101 for TBN with ResNet-34), owing to its large-scale and high-quality video data. Finally, our TBN obtains performance comparable to other state-of-the-art methods like TSN (?) and I3D (?), which adopt deeper network structures pre-trained on ImageNet and Kinetics.
Conclusions
In this paper, we have presented the Temporal Bilinear (TB) model to incorporate temporal pairwise interactions in neural networks. The factorized bilinear model and the bottleneck design bring fewer parameters and lower computational complexity. Besides, our TB block is very compact and flexible to combine with existing 2D or 3D CNNs. The Temporal Bilinear Networks (TBN) achieve consistent improvements over baselines in several video action recognition benchmarks. We believe that TB model can be an essential component for temporal modeling and we will make efforts to apply TBNs to other video domain tasks in the future.
References
Bian, Y.; Gan, C.; Liu, X.; Li, F.; Long, X.; Li, Y.; Qi, H.; Zhou, J.; Wen, S.; and Lin, Y. 2017. Revisiting the effectiveness of off-the-shelf temporal modeling approaches for large-scale video classification. arXiv preprint arXiv:1708.03805.
Carreira, J., and Zisserman, A. 2017. Quo vadis, action recognition? A new model and the Kinetics dataset. In CVPR.
Chetlur, S.; Woolley, C.; Vandermersch, P.; Cohen, J.; Tran, J.; Catanzaro, B.; and Shelhamer, E. 2014. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759.
Donahue, J.; Anne Hendricks, L.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; and Darrell, T. 2015. Long-term recurrent convolutional networks for visual recognition and description. In CVPR.
Feichtenhofer, C.; Pinz, A.; and Wildes, R. 2016. Spatiotemporal residual networks for video action recognition. In NIPS.
Gao, Y.; Beijbom, O.; Zhang, N.; and Darrell, T. 2016. Compact bilinear pooling. In CVPR.
Girdhar, R.; Ramanan, D.; Gupta, A.; Sivic, J.; and Russell, B. 2017. ActionVLAD: Learning spatio-temporal aggregation for action classification. In CVPR.
Girshick, R.; Donahue, J.; Darrell, T.; and Malik, J. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR.
Hara, K.; Kataoka, H.; and Satoh, Y. 2017. Learning spatio-temporal features with 3D residual networks for action recognition. In ICCV Workshop.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR.
Ioffe, S., and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML.
Ji, S.; Xu, W.; Yang, M.; and Yu, K. 2013. 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(1):221–231.
Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; and Fei-Fei, L. 2014. Large-scale video classification with convolutional neural networks. In CVPR.
Kay, W.; Carreira, J.; Simonyan, K.; Zhang, B.; Hillier, C.; Vijayanarasimhan, S.; Viola, F.; Green, T.; Back, T.; Natsev, P.; Suleyman, M.; and Zisserman, A. 2017. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950.
Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. ImageNet classification with deep convolutional neural networks. In NIPS.
Kuehne, H.; Jhuang, H.; Garrote, E.; Poggio, T.; and Serre, T. 2011. HMDB: A large video database for human motion recognition. In ICCV.
Li, Y.; Wang, N.; Liu, J.; and Hou, X. 2017. Factorized bilinear models for image recognition. In ICCV.
Lin, T.-Y.; RoyChowdhury, A.; and Maji, S. 2015. Bilinear CNN models for fine-grained visual recognition. In CVPR.
Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In CVPR.
Ng, J. Y.-H.; Hausknecht, M.; Vijayanarasimhan, S.; Vinyals, O.; Monga, R.; and Toderici, G. 2015. Beyond short snippets: Deep networks for video classification. In CVPR.
Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS.
Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; Berg, A. C.; and Fei-Fei, L. 2015. ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115(3):211–252.
Simonyan, K., and Zisserman, A. 2014. Two-stream convolutional networks for action recognition in videos. In NIPS.
Soomro, K.; Zamir, A. R.; and Shah, M. 2012. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402.
Sun, L.; Jia, K.; Yeung, D.-Y.; and Shi, B. E. 2015. Human action recognition using factorized spatio-temporal convolutional networks. In ICCV.
Tran, D.; Bourdev, L.; Fergus, R.; Torresani, L.; and Paluri, M. 2015. Learning spatiotemporal features with 3D convolutional networks. In ICCV.
Tran, D.; Ray, J.; Shou, Z.; Chang, S.-F.; and Paluri, M. 2017. ConvNet architecture search for spatiotemporal feature learning. arXiv preprint arXiv:1708.05038.
Wang, H., and Schmid, C. 2013. Action recognition with improved trajectories. In ICCV.
Wang, L.; Xiong, Y.; Wang, Z.; Qiao, Y.; Lin, D.; Tang, X.; and Van Gool, L. 2016. Temporal segment networks: Towards good practices for deep action recognition. In ECCV.
Wang, Y.; Xie, L.; Liu, C.; Qiao, S.; Zhang, Y.; Zhang, W.; Tian, Q.; and Yuille, A. L. 2017a. SORT: Second-order response transform for visual recognition. In ICCV, 1368–1377.
Wang, Y.; Long, M.; Wang, J.; and Yu, P. S. 2017b. Spatiotemporal pyramid network for video action recognition. In CVPR.
Wang, L.; Qiao, Y.; and Tang, X. 2015. Action recognition with trajectory-pooled deep-convolutional descriptors. In CVPR.
Xie, S.; Sun, C.; Huang, J.; Tu, Z.; and Murphy, K. 2017. Rethinking spatiotemporal feature learning for video understanding. arXiv preprint arXiv:1712.04851.