Learning Class Regularized Features for Action Recognition

Abstract

Training Deep Convolutional Neural Networks (CNNs) is based on the notion of using multiple kernels and non-linearities in their subsequent activations to extract useful features. The kernels are used as general feature extractors without specific correspondence to the target class. As a result, the extracted features do not correspond to specific classes: subtle differences between similar classes are modeled in the same way as large differences between dissimilar classes. To overcome this class-agnostic use of kernels in CNNs, we introduce a novel method named Class Regularization that performs class-based regularization of layer activations. We demonstrate that this not only improves feature search during training, but also allows an explicit assignment of features per class during each stage of the feature extraction process. We show that using Class Regularization blocks in state-of-the-art CNN architectures for action recognition leads to systematic improvements of 1.8%, 1.2% and 1.4% on the Kinetics, UCF-101 and HMDB-51 datasets, respectively.

Alexandros Stergiou1, Ronald Poppe, Remco C. Veltkamp
Department of Information and Computing Sciences, Utrecht University, Utrecht, Netherlands
{a.g.stergiou, r.c.veltkamp, r.w.poppe}@uu.nl

Keywords: Regularization, explainable convolutions, spatio-temporal activations.

1 Introduction

Figure 1: Class Regularization. Activation maps are vectorized through pooling and multiplied by the convolved class weights (for dimensionality matching). The highest-scoring class is selected and its weights are used to regularize the layer activations.

Video-based action recognition has seen tremendous progress since the introduction of Convolutional Neural Networks (CNNs). The use of multiple 3D convolutional operations in each layer has been shown to effectively capture informative and descriptive spatio-temporal features. A large body of work has focused on finding optimal architectures, depths and feature-extraction methods [10, 19].

In recognition tasks, networks include multiple layers that are stacked together in a single, hierarchical architecture. Features are extracted through successive convolution operations, where each layer employs a set of kernels whose parameters are learned during training. Early layer kernels focus on simple textures and patterns, while deeper layers focus on complex object parts or specific parts of scenes. However, as these features become more dependent on the different weighting of neural connections in previous layers, only a portion of them becomes descriptive for a specific class [1, 8]. Yet, all kernels are learned in a class-agnostic way. This hinders easy interpretation of the part of the network that is informative for a specific class. Moreover, it complicates model transfer to other datasets.

We explicitly focus on this relation between features and classes and propose a method named Class Regularization. We relate class information to the extracted features of different network blocks. This information is added back to the network as a means of amplifying activation values with respect to the predicted classes. Class Regularization has a beneficial effect on the non-linearities of the network by decreasing or increasing the effect of the activations. Based on this, the architecture can effectively distinguish the most class-informative kernels in each part of the network hierarchy for a selected class. This also reduces the dependency on many uncorrelated features during the final class predictions, essentially penalizing overfitting to random sampling noise in the data.

Our contributions are the following:

  • We propose Class Regularization, a regularization method applied in spatio-temporal CNNs without changing the overall network structure.

  • We introduce a weight sharing function for learned weights of previous epochs with Class Regularization.

  • We demonstrate the improvement in model explainability through intermediate class-specific features.

  • We report performance gains for benchmark action recognition datasets Kinetics, UCF-101 and HMDB-51 by including Class Regularization blocks.

The advances made in vision-based action recognition are discussed in Section 2. A detailed overview of the algorithm appears in Section 3. Results and evaluation tests are presented in Section 4. We conclude in Section 5.

2 Related work

Because of the indirect relationship between temporal and spatial information, one of the first attempts at video recognition with neural models was the use of Two-stream networks [15]. These models contain two separate branches for still video frames and optical flow inputs, respectively. Two-stream networks have also been used as a base method for approaches such as Temporal Segment Networks (TSN) [22], which use scattered snippets from the video and later fuse their predictions. This also led to research on the selection of frames [4], while other approaches use residual connections [6] to share spatio-temporal information across multiple layers.

Other approaches consider 3D convolutions, which include the temporal dimension in their operations and have been shown to outperform standard image-based networks in video classification. A fusion of Two-stream networks and 3D convolutions has been explored with the I3D architecture [2], with two spatio-temporal models trained in parallel on frame and optical flow data. Further structures include Residual Networks [9], depth-wise and channel-wise convolutions for spatio-temporal data [3, 20], combinations of spatial-only followed by temporal-only filters [14, 21], and the use of long-sequence and short-sequence kernels [5].

Although these techniques show great promise, intermediate network layers still lack strong spatio-temporal representations, and no standardized way of processing the temporal information exists. Our proposed method, named Class Regularization, can be added to networks at minimal additional computational cost in order to further enforce the relation between features and action classes.

3 Regularization for convolutional blocks

Explicitly adding class information through regularization is challenging given the ambiguity of the model's inner workings. The underlying idea is that in each layer, different combinations of extracted spatio-temporal features lead to patterns that are significant parts of different classes. These patterns are depth dependent, i.e. deeper layers can distinguish class-specific features better given their higher feature complexity. Therefore, class estimates at different parts of the model should be weighted differently. We define an affection rate value that specifies how strongly the intermediate class estimates should influence that layer. Its value is chosen based on the layer depth and the level of uncertainty of the class estimates at that depth. We further use point-wise convolutions for feature dimensionality matching between the predictions and the layer activations.
We now discuss the steps by which layer activations are regularized with the predicted class weights, as shown in Figure 1.

Figure 2: Visualization of feature amplification. As class-specific saliency is re-used by the network, informative spatio-temporal features for specific classes are amplified during an iteration. The effect of this amplification is propagated to deeper layers in the network through the connections of the layers in which Class Regularization is applied.

3.1 Layer fusion with class predictions

Class estimates based on the features of a convolution block are obtained by first creating a vector representation of its activation channels. Given the block's activation map and a spatio-temporal pooling operation (Equation 1), the produced volume can be interpreted as a descriptor of feature intensity values.

$z_c = \frac{1}{T \cdot H \cdot W} \sum_{t=1}^{T} \sum_{h=1}^{H} \sum_{w=1}^{W} a_{c,t,h,w} \qquad (1)$

where $a \in \mathbb{R}^{C \times T \times H \times W}$ denotes the block's activation map with $C$ channels and spatio-temporal size $T \times H \times W$, and $z \in \mathbb{R}^{C}$ is the pooled feature descriptor.
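As an illustration of the pooling step in Equation 1, a minimal PyTorch-style sketch is given below; the tensor layout (N, C, T, H, W) and the function name are assumptions made for this example, not the reference implementation.

```python
import torch

def pool_activations(a: torch.Tensor) -> torch.Tensor:
    """Spatio-temporal global average pooling: (N, C, T, H, W) -> (N, C)."""
    return a.mean(dim=(2, 3, 4))

# Example: a batch of 8 clips with 1024 channels, 4 frames and 7x7 spatial maps.
z = pool_activations(torch.randn(8, 1024, 4, 7, 7))   # z has shape (8, 1024)
```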

Class predictions are obtained from the class weights of the network's classifier, as updated in the preceding iteration. This establishes a relationship between the previous and current iterations, in a recurrent fashion. Since the feature space of the layer's activations differs from that of the prediction weights, a 3D point-wise convolution is applied to the class weights.

Based on the vectorized activations and the convolved class weights, the produced class activation vector contains one score per class. This operation allows an early estimate of the indices of the most relevant features for each class.
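A rough sketch of this dimensionality matching and the intermediate class estimate, assuming PyTorch and hypothetical sizes (400 classes, 2048 classifier features, 1024 layer channels), could look as follows:

```python
import torch
import torch.nn as nn

num_classes, fc_channels, layer_channels = 400, 2048, 1024     # hypothetical sizes
class_weights = torch.randn(num_classes, fc_channels)          # prediction-layer weights
match = nn.Conv3d(fc_channels, layer_channels, kernel_size=1)  # point-wise conv for dimensionality matching

w = class_weights.view(num_classes, fc_channels, 1, 1, 1)      # each class weight vector as a 1x1x1 volume
w_layer = match(w).view(num_classes, layer_channels)           # adapted weights: (num_classes, layer_channels)

pooled = torch.randn(8, layer_channels)                        # pooled activations (see pooling sketch above)
class_scores = pooled @ w_layer.t()                            # class activation vector: (batch, num_classes)
```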

3.2 Class-specific excitation through class estimates

Given the class activation vector, the maximum class probability is obtained through a normalized exponential (softmax) function, as in Equation 2. This converts the weighted-sum logit scores into a probability distribution over all classes. The index of the maximum class probability is then used to select the class weight vector with which the layer activations will be regularized.

$p_{max} = \max_{j \in \{1, \dots, N\}} \; \frac{\exp(g_j)}{\sum_{k=1}^{N} \exp(g_k)} \qquad (2)$

where $g = \widetilde{W} z$ is the class activation vector of Section 3.1, $\widetilde{W}$ denotes the convolved class weights, and $N$ is the number of classes.

To amplify the features of the activations, the selected class weights are normalized within a bounded range of values. This scales down the effect of features that are less informative for the selected class, while informative features are scaled up. The affection rate value determines the bounds to which the weight vector is normalized, as in Equation 3.

$\hat{w}_i = (\beta - \alpha) \cdot \frac{w_i - \min(w)}{\max(w) - \min(w)} + \alpha \qquad (3)$

where $w$ is the selected class weight vector and $[\alpha, \beta]$ is the target range determined by the affection rate.
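As a sketch, a min-max rescaling into a range centered around one is shown below; mapping an affection rate gamma to the bounds [2 - gamma, gamma] is an assumption made for illustration, not necessarily the exact bounds used in the paper.

```python
import torch

def normalize_class_weights(w: torch.Tensor, gamma: float = 1.25) -> torch.Tensor:
    """Min-max rescale a class weight vector into [2 - gamma, gamma].

    The mapping from the affection rate to the bounds is an assumption; the
    intent is that weights below 1 damp a feature and weights above 1 amplify it.
    """
    low, high = 2.0 - gamma, gamma
    w_min, w_max = w.min(), w.max()
    return (w - w_min) / (w_max - w_min + 1e-8) * (high - low) + low

# Example: gamma = 1.25 maps the smallest weight to 0.75 and the largest to 1.25.
scaled = normalize_class_weights(torch.randn(1024), gamma=1.25)
```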

We do not use a standardization method such as batch normalization [11], which guarantees a zero-mean output. This is because we use a multiplication operation to include the class weight information in the activation maps. Zero-mean normalization would therefore remove part of the information, as values below one decrease the feature intensity. It would also hinder performance by contributing to vanishing gradients, with the produced activation map values being reduced at each iteration.

In the final step, we inflate the normalized weight vector to match the dimensions of the spatio-temporal activation maps and multiply the two, creating class-excited activations.
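Putting the steps of Sections 3.1 and 3.2 together, a complete block could be sketched as follows. This is a minimal PyTorch version under the assumptions stated in the comments (names, default values and the affection-rate-to-range mapping are illustrative, not the reference implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassRegularization(nn.Module):
    """Minimal sketch of a Class Regularization block (cf. Figure 1).

    Assumes PyTorch, 5D activations of shape (N, C, T, H, W) and access to the
    prediction-layer weight matrix of shape (num_classes, fc_channels).
    """

    def __init__(self, layer_channels, fc_channels, gamma=1.25):
        super().__init__()
        # Point-wise convolution matching classifier weights to this layer's channels.
        self.match = nn.Conv3d(fc_channels, layer_channels, kernel_size=1)
        self.gamma = gamma  # affection rate (assumed range: [2 - gamma, gamma])

    def forward(self, a, class_weights):
        n, c = a.shape[:2]
        # 1. Vectorize activations by global average pooling over (T, H, W).
        pooled = a.mean(dim=(2, 3, 4))                          # (N, C)
        # 2. Map the classifier weights to this layer's feature space.
        w = class_weights.view(class_weights.size(0), -1, 1, 1, 1)
        w = self.match(w).flatten(1)                            # (num_classes, C)
        # 3. Intermediate class estimate and winning class per clip.
        probs = F.softmax(pooled @ w.t(), dim=1)                # (N, num_classes)
        idx = probs.argmax(dim=1)                               # (N,)
        # 4. Min-max normalize the selected class weights into [2 - gamma, gamma].
        sel = w[idx]                                            # (N, C)
        lo, hi = 2.0 - self.gamma, self.gamma
        sel_min = sel.min(dim=1, keepdim=True).values
        sel_max = sel.max(dim=1, keepdim=True).values
        sel = (sel - sel_min) / (sel_max - sel_min + 1e-8) * (hi - lo) + lo
        # 5. Inflate to (N, C, 1, 1, 1) and excite the activations.
        return a * sel.view(n, c, 1, 1, 1)
```

A block of this kind could be appended after, e.g., a bottleneck or mixed block and called with the shared prediction-layer weights (e.g. `out = creg(out, classifier.weight)`), so no additional class weights need to be learned.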

3.3 Improving visual explainability

Representing the class features in the feature space of intermediate layers further improves the overall explainability of the model. Through feature correlation, the method alleviates the curse-of-dimensionality problem of current visualization methods that rely on back-propagating from the predictions to a particular layer [17]. Since the classes are represented in the same feature space as the activation maps of the block, we can discover regions in space and time that are informative over multiple network layers. To the best of our knowledge, this is the first method to visualize spatio-temporal class-specific features at each layer of the network. This is shown in Figure 2, where the Saliency Tubes [18] method is extended to each block to create visual representations of the features with the highest activations per class.
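A rough sketch of how such per-layer visualizations could be produced, in the spirit of Saliency Tubes [18] rather than the exact pipeline, is to reduce the class-excited activations over channels, upsample them to the clip resolution and normalize; all names and the clip size below are illustrative.

```python
import torch
import torch.nn.functional as F

def class_saliency(excited: torch.Tensor, clip_size=(16, 224, 224)) -> torch.Tensor:
    """Class-excited activations (N, C, T, H, W) -> per-frame saliency (N, T, H, W)."""
    sal = excited.mean(dim=1, keepdim=True)                    # channel-wise mean: (N, 1, T, H, W)
    sal = F.interpolate(sal, size=clip_size, mode='trilinear', align_corners=False)
    sal = sal - sal.amin(dim=(2, 3, 4), keepdim=True)          # shift to zero minimum
    sal = sal / (sal.amax(dim=(2, 3, 4), keepdim=True) + 1e-8) # scale to [0, 1]
    return sal.squeeze(1)

# Example: saliency for a batch of class-excited activation maps.
maps = class_saliency(torch.randn(2, 1024, 4, 7, 7))           # shape (2, 16, 224, 224)
```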

4 Experiments

We demonstrate the merits of Class Regularization on three widely used datasets: Kinetics-400 [12], UCF-101 [16] and HMDB-51 [13]. The models trained on Kinetics are initialized with a standard Kaiming initialization, without inflating weights from pre-trained 2D models. This allows a direct comparison of accuracy rates between architectures with and without Class Regularization blocks. For all experiments we use SGD as our optimizer with momentum 0.9. Class Regularization is added at the end of each bottleneck block in the ResNet architectures and at the end of each mixed block in I3D.
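A small sketch of these training settings is given below, assuming PyTorch; the learning rate and the placeholder model are illustrative, as they are not specified in the text.

```python
import torch
import torch.nn as nn

# Placeholder 3D module; in practice this would be a ResNet-3D / I3D variant whose
# bottleneck or mixed blocks are followed by Class Regularization blocks.
model = nn.Conv3d(3, 64, kernel_size=3, padding=1)

# Optimizer settings as described: SGD with momentum 0.9 (learning rate assumed).
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
```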

4.1 Main results

A comparison between our results on Kinetics-400 and those previously reported in the literature appears in Table 1. Existing approaches typically require a complete change of the overall architecture or of the convolution operations, which is computationally demanding given the large memory (depending on batch size) and computational requirements of spatio-temporal models (as shown by the number of GFLOPs). New models need to be trained for a significant number of iterations to achieve modest improvements: +3.6% from I3D [2] to R(2+1)D [21], while additionally pre-training on even larger datasets [7]. In contrast, the proposed Class Regularization method is used on top of existing architectures and only requires fine-tuning the dimensionality correspondence between the number of features in a specific layer and the features used for class predictions. For a direct comparison, in the re-trained models with batch sizes of 32, we achieve an overall average improvement of +1.29% on 101-layer ResNet, +1.5% on 50-layer Wide ResNet and +1.45% on I3D, as seen in Tables 1 and 2.

Model | Pre-training | Layers | GFLOPs | Top-1
ResNet50-3D [9] | - | 50 | 80.32 | 0.613
ResNet101-3D [9] | - | 101 | 110.98 | 0.652
ResNeXt101-3D [9] | - | 101 | 148.91 | 0.651
Wide ResNet50-3D [9] | - | 50 | 72.32 | 0.639
I3D [2] | ImageNet | 48 | 55.79 | 0.664
MF-Net [3] | ImageNet | 50 | 22.7 | 0.728
R(2+1)D-ResNet50 [21] | Sports1M | 50 | 238.12 | 0.720
ResNet101-3D (w/ ClassReg) | - | 101 + 4 | 126.13 | 0.677
Wide ResNet50-3D (w/ ClassReg) | - | 50 + 4 | 82.67 | 0.653
I3D (w/ ClassReg) | - | 48 + 3 | 62.96 | 0.678
Table 1: Comparison of accuracy rates of different spatio-temporal convolutional architectures on Kinetics-400. Computational overhead is denoted by the number of GFLOPs.

4.2 Direct comparisons with Class Regularization

In Table 2 we compare the Class Regularization method in a per-architecture fashion by keeping a base network and reporting accuracy rates in pairs. For each architecture and dataset, networks with Class Regularization outperform those without. The largest gain was observed for the 101-layer 3D ResNet, with +2.45% on Kinetics, +0.61% on UCF-101 and +0.81% on HMDB-51. On Wide ResNet50 we further obtained improvements of +1.37%, +1.59% and +1.62% on the respective datasets. On I3D, Class Regularization provided an increase of +1.43% on Kinetics, +1.37% on UCF-101 and +1.56% on HMDB-51. These gains correlate with the complexity of the class features in the prediction layer, which increases with architectural depth. Since classes are effectively described through large feature spaces, Class Regularization particularly benefits models with complex and large class weight spaces, making the corresponding set of influential class features better distinguishable at minimal computational cost, as seen in Figure 3.

Figure 3: Class Regularization accuracy/computation trade-off. Clip top-1 accuracy for the Kinetics, UCF-101 and HMDB-51 datasets in comparison to the computational cost (in GFLOPs).
Model | Added latency (ms) | Kinetics | UCF-101 | HMDB-51
ResNet101 | - | 65.29 | 88.23 | 62.47
ResNet101 (w/ ClassReg) | +98.786 | 67.74 | 88.84 | 63.31
Wide ResNet50 | - | 63.96 | 87.52 | 61.62
Wide ResNet50 (w/ ClassReg) | +102.995 | 65.33 | 89.11 | 63.24
I3D | - | 66.42 | 91.80 | 64.27
I3D (w/ ClassReg) | +68.340 | 67.85 | 93.17 | 65.83
Table 2: Direct comparison with and without the Class Regularization block. Models that include Class Regularization are marked with (w/ ClassReg). Reported accuracy rates (top-1 %) are obtained on the Kinetics-400, UCF-101 and HMDB-51 validation sets (split 1 for UCF-101 and HMDB-51), with all networks re-trained with the same settings. All models use inputs of size for Kinetics and for UCF-101 and HMDB-51. Initially, all networks are trained for 170 epochs. During fine-tuning, we trained for 100 epochs.

5 Conclusions

In this paper, we have introduced Class Regularization, a method that focuses on class-specific features. Class Regularization allows the network to strengthen or weaken layer activations based on how informative they are for specific class predictions. The method can be added to any layer or block of convolutions in pre-trained models. It is lightweight, as the class weights from the prediction layer are shared throughout Class Regularization. To avoid the vanishing gradient problem, and the possibility of negatively influencing activations, the weights are normalized within a range determined by the affection rate value.

We evaluated the proposed method on three benchmark datasets: Kinetics, UCF-101 and HMDB-51, and report results for three models: ResNet101, Wide ResNet50 and I3D, with average increases in accuracy of +1.29%, +1.5% and +1.45%, respectively. These improvements come at minimal additional computational cost over the original architectures.

We demonstrated how Class Regularization can be used to improve the explainability of 3D-CNNs through qualitative class-feature visualizations across layers and quantitative improvements in class predictions at different layer depths.

6 Acknowledgments

This publication is supported by the Netherlands Organization for Scientific Research (NWO) with a TOP-C2 grant for “Automatic recognition of bodily interactions” (ARBITER).

Footnotes

  1. Corresponding author.

References

  1. D. Bau, J. Zhu, H. Strobelt, B. Zhou, J. B. Tenenbaum, W. T. Freeman and A. Torralba (2019) Visualizing and understanding generative adversarial networks. arXiv preprint arXiv:1901.09887.
  2. J. Carreira and A. Zisserman (2017) Quo vadis, action recognition? A new model and the Kinetics dataset. In Computer Vision and Pattern Recognition (CVPR), pp. 4724–4733.
  3. Y. Chen, Y. Kalantidis, J. Li, S. Yan and J. Feng (2018) Multi-fiber networks for video recognition. In European Conference on Computer Vision (ECCV).
  4. A. Diba, V. Sharma and L. Van Gool (2017) Deep temporal linear encoding networks. In Computer Vision and Pattern Recognition (CVPR), pp. 2329–2338.
  5. C. Feichtenhofer, H. Fan, J. Malik and K. He (2019) SlowFast networks for video recognition. In International Conference on Computer Vision (ICCV).
  6. C. Feichtenhofer, A. Pinz and R. Wildes (2016) Spatiotemporal residual networks for video action recognition. In Advances in Neural Information Processing Systems (NIPS), pp. 3468–3476.
  7. D. Ghadiyaram, D. Tran and D. Mahajan (2019) Large-scale weakly-supervised pre-training for video action recognition. In Computer Vision and Pattern Recognition (CVPR).
  8. L. H. Gilpin, D. Bau, B. Z. Yuan, A. Bajwa, M. Specter and L. Kagal (2018) Explaining explanations: An overview of interpretability of machine learning. In International Conference on Data Science and Advanced Analytics (DSAA), pp. 80–89.
  9. K. Hara, H. Kataoka and Y. Satoh (2018) Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and ImageNet? In Computer Vision and Pattern Recognition (CVPR), pp. 18–22.
  10. S. Herath, M. Harandi and F. Porikli (2017) Going deeper into action recognition: A survey. Image and Vision Computing 60, pp. 4–21.
  11. S. Ioffe and C. Szegedy (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp. 448–456.
  12. W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back and P. Natsev (2017) The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950.
  13. H. Kuehne, H. Jhuang, E. Garrote, T. Poggio and T. Serre (2011) HMDB: A large video database for human motion recognition. In International Conference on Computer Vision (ICCV), pp. 2556–2563.
  14. Z. Qiu, T. Yao and T. Mei (2017) Learning spatio-temporal representation with pseudo-3D residual networks. In International Conference on Computer Vision (ICCV), pp. 5534–5542.
  15. K. Simonyan and A. Zisserman (2014) Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems (NIPS), pp. 568–576.
  16. K. Soomro, A. R. Zamir and M. Shah (2012) UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402.
  17. A. Stergiou, G. Kapidis, G. Kalliatakis, C. Chrysoulas, R. Poppe and R. Veltkamp (2019) Class feature pyramids for video explanation. In International Conference on Computer Vision Workshops (ICCVW).
  18. A. Stergiou, G. Kapidis, G. Kalliatakis, C. Chrysoulas, R. Veltkamp and R. Poppe (2019) Saliency tubes: Visual explanations for spatio-temporal convolutions. In International Conference on Image Processing (ICIP).
  19. A. Stergiou and R. Poppe (2019) Analyzing human-human interactions: A survey. Computer Vision and Image Understanding 188, pp. 102799.
  20. D. Tran, H. Wang, L. Torresani and M. Feiszli (2019) Video classification with channel-separated convolutional networks. In International Conference on Computer Vision (ICCV).
  21. D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun and M. Paluri (2018) A closer look at spatiotemporal convolutions for action recognition. In Computer Vision and Pattern Recognition (CVPR), pp. 6450–6459.
  22. L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang and L. Van Gool (2016) Temporal segment networks: Towards good practices for deep action recognition. In European Conference on Computer Vision (ECCV), pp. 20–36.