Action Anticipation with RBF Kernelized Feature Mapping RNN 


Yuge Shi The Australian National University, Australia    Basura Fernando The Australian National University, Australia    Richard Hartley The Australian National University, Australia
Abstract

We introduce a novel Recurrent Neural Network-based algorithm for future video feature generation and action anticipation called feature mapping RNN. Our novel RNN architecture builds upon three effective principles of machine learning, namely parameter sharing, Radial Basis Function kernels and adversarial training. Using only some of the earliest frames of a video, the feature mapping RNN is able to generate future features with a fraction of the parameters needed in a traditional RNN. By feeding these future features into a simple multilayer perceptron equipped with an RBF kernel layer, we are able to accurately predict the action in the video.

In our experiments, we obtain an 18% improvement over the prior state-of-the-art on the JHMDB-21 dataset, 6% on UCF101-24 and 13% on UT-Interaction for action anticipation.

Keywords:
Human action prediction, novel Recurrent Neural Network, Radial Basis Function kernel, Adversarial training

1 Introduction

Figure 1: Overview of the proposed feature mapping RNN: given a frame extracted from video data, the algorithm first passes the RGB image through a deep CNN to acquire a high-level feature vector $x_t$ of the image. The vector is then split into smaller segments of equal length. Each scalar element of a segment is fed to a small shared LSTM that produces the prediction of the corresponding feature element of the next frame. After all segments are processed by the LSTM, the predicted segments are concatenated back together to form $\hat{x}_{t+1}$, which contains the high-level features of the future frame.

Action anticipation (sometimes referred to as action prediction) is gaining a lot of attention due to its many real world applications such as human-computer interaction [2, 33, 30], sports analysis [3, 4, 56] and pedestrian movement prediction [9, 22, 18, 5, 46], especially in autonomous driving scenarios.

In contrast to most widely studied human action recognition methods, in action anticipation we aim to recognize human actions as early as possible [39, 23, 28, 42, 49]. This is a challenging task due to the complex nature of video data. Although a video containing a human action consists of a large number of frames, many of them are not representative of the action being performed; large amounts of visual data also tend to contain entangled information about variations in camera position, background, relative movements and occlusions. This results in cluttered temporal information and makes recognition of the human action a lot harder. The issue becomes even more significant for action anticipation methods, as the algorithm has to make a decision using only a fraction of the video at the very start. Therefore, finding a good video representation that extracts temporal information relevant to human action is crucial for the anticipation model.

To overcome some of these issues, we resort to using deep convolutional neural networks (CNNs) and take the deep features from the penultimate layer of the CNN as the video representation. Another motivation for working with deep CNN features stems from the difficulty of generating raw visual appearances of future frames. Therefore, similar to Vondrick et al. [49], we propose a method to generate future features tailored for the action anticipation task: given an observed sequence of deep CNN features, a novel Recurrent Neural Network (RNN) model is used to generate the most plausible future features and thereby predict the action depicted in the video. An overview of this model can be found in Fig. 1.

The objective of our RNN is to map the feature vector at time $t$, denoted by $x_t$, to the future feature vector at time $t+1$, denoted by $x_{t+1}$. Because only a fraction of the frames is observed during inference, the future feature generator should be highly regularized to avoid over-fitting. Furthermore, the feature generator needs to model the complex dynamics of future frame features.

This can be resolved by parameter sharing. Parameter sharing is a strong machine learning concept used by many modern learning methods. Typically, CNNs share parameters in the spatial domain and RNNs in the temporal dimension. In our work, we propose to utilize parameter sharing in an unconventional way for RNN models by extending it to the feature domain. This is based on the intuition that CNN feature activations are correlated with each other.

By utilizing parameter sharing across feature activations, our proposed RNN is able to learn the temporal mapping from $x_t$ to $x_{t+1}$ with significantly fewer parameters. This greatly boosts the computational efficiency of the prediction model and correspondingly shortens the response time. We call our novel RNN architecture the feature mapping RNN.

To model the complex dynamic nature of video data, we make use of a novel mapping layer inside our RNN. In principle, the hidden state of the RNN captures the temporal information of the observed sequence data. In our method, the hidden state of the RNN is processed by a linear combination of Gaussian Radial Basis Function (RBF) kernels to produce the future feature vector. While a linear model defines a simple hyperplane as the mapping function, the kernelized mapping with RBF kernels can model complex surfaces and therefore has the potential to improve prediction accuracy. In our work, we also add RBF kernels to the action classification multilayer perceptron to improve the performance of the classifier.

Ideally, we are interested in learning the probability distribution of the future given the past features. To learn this conditional distribution, inspired by the success of Generative Adversarial Networks [12], an adversarial approach is used to evaluate the cost of the feature mapping RNN. The RNN is trained with an adversarial loss and a reconstructive $L_2$ loss. In this way, the model is optimized not only with the intention of reducing the Euclidean distance between the prediction and the ground truth, but also taking the probability distribution of the feature vector into consideration.

In summary, our contributions are:


  • We propose a novel RNN architecture that shares parameters across the temporal domain as well as the feature space.

  • We propose a novel RBF kernel layer to improve the prediction performance of RNNs.

  • We demonstrate the effectiveness of our method on the action anticipation task, beating the state-of-the-art on standard benchmarks.

2 Related Work

The model proposed in this paper focuses on future video content generation for action prediction and action anticipation [23, 50, 35, 37, 55, 27, 20, 42, 39, 29, 49, 10]. In contrast to the widely studied action recognition problem, the action anticipation literature focuses on developing novel loss functions to reduce the predictive generalization error [39, 29, 16] or on generating future content such as future appearance [10] and future features [49]. The method proposed in this paper also focuses on future content generation and therefore could further benefit from novel loss functions as proposed in [39, 29, 16].

In the early days, Yu et al. [55] made use of spatio-temporal action matching to tackle early action prediction. Their method relies on spatial-temporal implicit shape models. By explicitly considering the full history of observed features, the temporal evolution of human actions is used by Kong et al. [20] to predict the class label as early as possible. Li et al.'s work [27] exploits sequence mining, where a series of actions and object co-occurrences are encoded as symbolic sequences. Soomro et al. [43] propose to use binary SVMs to localize and classify video snippets into sub-action categories, and obtain the final class label in an online manner using dynamic programming. In [50], action prediction is approached using still images with action-scene correlations. Different from the above mentioned methods, our work is focused on action anticipation from videos. We rely on deep CNNs along with an RNN that shares parameters across both feature and time dimensions to generate future features. To model the complex dynamics of video data, we are the first to make use of effective RBF kernel functions inside RNNs for the action anticipation task.

On the other hand, feature generation has been studied with the aim of learning video representations rather than specifically for action anticipation. Inspired by natural language processing techniques [1], the authors of [34] propose to predict the missing frame or extrapolate future frames from an input video sequence. However, they demonstrate this only for unsupervised video feature learning. Other popular models include the unsupervised encoder-decoder scheme introduced by [45] for action classification, the probabilistic distribution generation model of [25], as well as scene prediction learning using object location and attribute information introduced by [8]. Research in recent years on applying Generative Adversarial Networks to video generation has given rise to models such as MoCoGAN [48], TGAN [40] and Walker et al.'s work [53] on video generation using pose as conditional information. These GAN variants are capable of exploiting both the spatial and temporal information in videos, and have therefore shown promising results in video generation.

Moreover, trajectory prediction [21], optical-flow prediction [52], path prediction [51, 54], motion planning [11, 19], sports forecasting [7] and activity forecasting [31] are also related to our work. All these methods generate future aspects of the data. Our novel RNN model, however, focuses on generating future features for action anticipation.

3 Approach

3.1 Overview

Similar to methods adopted by other action anticipation algorithms, our algorithm predicts the action by observing only a fraction of the frames at the beginning of a long video. The overall pipeline of our method is shown in Fig. 1. First, we extract CNN feature vectors from the observed frames and predict future features based on the past ones. Subsequently, a multilayer perceptron (MLP) is used to classify the generated features. We aggregate predictions from observed and generated features to recognize the action as early as possible.

3.2 Motivation

Denote the observed sequence of feature vectors up to time $t$ by $X = \langle x_1, x_2, \dots, x_t \rangle$ and the future feature vector we aim to produce by $x_{t+1}$, where $x_i \in \mathbb{R}^D$. We are interested in modeling the conditional probability distribution $P(x_{t+1} \mid x_1, \dots, x_t ; \Theta)$, where $\Theta$ denotes the parameters of the probabilistic model.

It is natural to use RNNs or RNN variants such as Long Short-Term Memory (LSTM) [14] to model the temporal evolution of the data. However, learning such a mapping could lead to over-fitting, since these methods tend not to utilise the temporal coherence and the evolutionary nature of video data [32].

Furthermore, a naive CNN feature mapping from the past to the future using an LSTM is also prone to over-fitting. An LSTM with a hidden state of dimensionality $H$ that takes feature vectors of dimensionality $D$ as input uses parameters on the order of $4(DH + H^2)$. As an example, if we use the penultimate activations of Inception V3 [47] as feature vectors ($D = 2048$), a typical LSTM would require parameters on the order of millions. We believe that the effectiveness of such models can be largely improved by utilising the correlation of high-level activations of modern CNN architectures [47, 13].
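As a rough, hedged illustration of these parameter counts (the hidden size $H = 512$ below is chosen for illustration only; the value used in the text is not recoverable here), the following PyTorch snippet compares a standard LSTM over full 2048-dimensional features with the small shared LSTM used later by the feature mapping RNN:

```python
import torch.nn as nn

D = 2048       # Inception V3 penultimate feature dimension
H = 512        # hypothetical hidden state size for a "typical" LSTM (illustrative only)

# A standard LSTM has 4 gates, each with input-to-hidden and hidden-to-hidden weights
# plus biases, i.e. roughly 4*(D*H + H*H + 2*H) parameters.
naive_lstm = nn.LSTM(input_size=D, hidden_size=H)
print(sum(p.numel() for p in naive_lstm.parameters()))   # ~5.2 million parameters

# The feature mapping RNN instead feeds scalars into one tiny LSTM shared across all
# feature activations (hidden size 4 in Section 4.2), so its size is independent of D.
shared_lstm = nn.LSTM(input_size=1, hidden_size=4)
print(sum(p.numel() for p in shared_lstm.parameters()))  # on the order of 1e2 parameters
```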

Motivated by these arguments, we propose to train an LSTM model whose parameters are shared not only in the time domain, but also across feature activations. By doing so, we aim to self-regularize the feature generation of the algorithm. We name our novel architecture the feature mapping RNN. Furthermore, to increase the functional capacity of RNNs, we make use of Radial Basis Functions (RBF) to model the temporal dynamics of the conditional probability distribution. These mechanisms are introduced in detail in the following subsection.

3.3 Feature Mapping RNN with RBF Kernel Mapping

A traditional feature generation RNN architecture takes a sequence of vectors $\langle x_1, \dots, x_t \rangle$ up to time $t$ as input and predicts the future feature vector $\hat{x}_{t+1}$. Typically, the following recurrent formula is used to model the prediction:

$h_t = f(h_{t-1}, x_t; \theta_h) \qquad (1)$

where $h_t$ is the hidden state ($h_t \in \mathbb{R}^H$) which captures the temporal information of the sequence and $\theta_h$ are the parameters of the recurrent formula. We then utilize this hidden state to predict the future feature vector using the following formula:

$\hat{x}_{t+1} = \theta_p \, h_t \qquad (2)$

where $\theta_p$ is the parameter matrix that performs the linear mapping to predict the future feature vector.

As introduced previously, in our feature mapping RNN the parameters are shared across several groups of feature activations. This is achieved by segmenting the input feature vector $x_t$ of dimensionality $D$ into equal-size sub-vectors of dimensionality $d$, where $d$ is referred to as the feature step size.

Now let us denote the $i$-th sub-feature vector of size $d$ by $x_t^{i}$, for $i = 1, \dots, \frac{D}{d}$. Intuitively, if we concatenate all such sub-feature vectors in an end-to-end manner, we will be able to reconstruct the original feature vector $x_t$. The time sequence of data for the $i$-th sub-feature vector is now denoted by $\langle x_1^{i}, x_2^{i}, \dots, x_t^{i} \rangle$. If we process each such sequence with the RNN model in Equation 1 and Equation 2, we will be able to predict $\hat{x}_{t+1}^{i}$ and, by concatenating the predictions end-to-end, generate $\hat{x}_{t+1}$. This approach reduces the number of parameters used in the RNN model from the order of $D \times H$ to the order of $d \times H$, which results in a considerable boost in computational efficiency, especially when $d \ll D$. However, the parameter complexity of the model remains polynomial and depends on multiple hyper-parameters.
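A minimal sketch of this segmentation step, using the default sizes reported in Section 4.2 ($D = 2048$, $d = 128$) and dummy tensors:

```python
import torch

D, d = 2048, 128                      # feature dimension D and feature step size d (Section 4.2 defaults)
t = 10                                # number of observed frames (arbitrary for this sketch)
x = torch.randn(t, D)                 # dummy sequence of frame features x_1 ... x_t

# Split every feature vector into D/d sub-vectors of length d; the same small RNN
# is then applied to each of the D/d resulting sub-sequences.
sub_sequences = x.view(t, D // d, d).permute(1, 0, 2)   # shape: (D/d, t, d) = (16, 10, 128)
print(sub_sequences.shape)

# Concatenating the D/d predicted future sub-vectors end-to-end recovers a full D-dimensional feature.
```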

To further improve the efficiency of our model, we adopt an even bolder approach: we propose to convert the sequence of sub-vectors into a sequence of scalars. Let us denote the $j$-th dimension of sub-vector $x_t^{i}$ by $x_t^{i,j}$. Now, instead of processing the sequence of vectors $\langle x_1^{i}, \dots, x_t^{i} \rangle$, we convert it to a new sequence of scalars $\langle x_1^{i,1}, x_1^{i,2}, \dots, x_1^{i,d}, x_2^{i,1}, \dots, x_t^{i,d} \rangle$. The size of this sequence of scalars is $t \times d$, and we generate $\frac{D}{d}$ such sequences from each original sequence of feature vectors.

We then propose to process these sequences of scalars using an RNN (LSTM) model. The computational complexity is now linear, with the number of parameters used in the recurrent model (LSTM) reduced to the order of $H^2$, depending only on the hidden state size.

Again, given the current sequence of vectors $\langle x_1, \dots, x_t \rangle$, we want to generate the future feature vector $\hat{x}_{t+1}$. In our RNN model, this translates to predicting the sequence of scalars $\langle \hat{x}_{t+1}^{i,1}, \dots, \hat{x}_{t+1}^{i,d} \rangle$ from the observed scalar sequence for all sub-feature vectors $i = 1$ to $\frac{D}{d}$. Then we merge all predicted scalars for time $t+1$ to obtain $\hat{x}_{t+1}$.

Therefore, mathematically, our new RNN model that shares parameters over feature activations can be denoted by the following formula:

$h_{l}^{i} = f\big(h_{l-1}^{i},\, x_{t}^{i,l};\, \theta_{fm}\big) \qquad (3)$

where $\theta_{fm}$ is the new parameter set of the RNN (LSTM), and the future $l$-th scalar of the $i$-th sub-feature vector is given by:

$\hat{x}_{t+1}^{i,l} = \theta_{p}\, h_{l}^{i} \qquad (4)$
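The following sketch illustrates one plausible realisation of Equations 3 and 4 (tensor shapes and the linear read-out are assumptions; the authors' exact implementation may differ): each sub-vector sequence is flattened into a sequence of scalars and processed by a single LSTM shared across all sub-vectors.

```python
import torch
import torch.nn as nn

t, D, d, H = 10, 2048, 128, 4          # observed frames, feature dim, step size, hidden size
x = torch.randn(t, D)                  # dummy observed features x_1 ... x_t

# Flatten each sub-vector sequence <x_1^i, ..., x_t^i> into one sequence of t*d scalars,
# giving D/d scalar sequences that are all processed by the same LSTM (Equation 3).
scalars = x.view(t, D // d, d).permute(1, 0, 2).reshape(D // d, t * d, 1)   # (D/d, t*d, 1)

shared_lstm = nn.LSTM(input_size=1, hidden_size=H, batch_first=True)        # parameters shared over sub-vectors
hidden, _ = shared_lstm(scalars)       # hidden states for every scalar step, shape (D/d, t*d, H)

# A read-out (linear here, RBF kernelized in Equation 5) maps the last d hidden states of each
# sub-sequence to the d future scalars of that sub-vector (Equation 4); concatenating the
# D/d predictions reconstructs the future feature vector.
readout = nn.Linear(H, 1)
x_next = readout(hidden[:, -d:, :]).squeeze(-1).reshape(D)                  # shape: (2048,)
print(x_next.shape)
```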

To further improve the functional capacity of our feature mapping RNN , we make use of Radial Basis Functions (RBF). Instead of using a simple linear projection of the hidden state to the future feature vector, we propose to exploit the more capable Radial Basis Functional mapping. We call this novel RNN architecture the RBF kernelized feature mapping RNN , denoted by the following formula:

$\hat{x}_{t+1}^{i,j} = \sum_{k=1}^{n} \alpha_k \exp\!\left(-\frac{\lVert h_{j}^{i} - \mu_k \rVert^{2}}{2\sigma_k^{2}}\right) \qquad (5)$

where $\mu_k$, $\sigma_k$ and $\alpha_k$ are parameters learned during training and $n$ is the number of RBF kernels used. These parameters are shared across all sub-feature vectors. The future feature vector is calculated as a linear combination of the RBF kernel outputs. Since RBF kernels are better at modeling complex surfaces in the feature space, this functional mapping is able to capture more complicated dynamics accurately. Implementing the kernelized RBF mapping on top of our feature mapping RNN enables the model to do so with far fewer parameters than classical RNNs. An illustration of our RBF kernelized MLP is shown in Fig. 2.
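A hedged sketch of the RBF mapping in Equation 5 as a PyTorch module (the log-parameterisation of $\sigma_k$ is our own choice for numerical positivity, not necessarily the authors'):

```python
import torch
import torch.nn as nn

class RBFMapping(nn.Module):
    """Maps a hidden state to an output scalar as a linear combination of Gaussian RBF
    kernels, as in Equation 5 (parameter shapes and the log-sigma trick are assumptions)."""

    def __init__(self, hidden_size: int = 4, n_kernels: int = 6):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_kernels, hidden_size))   # kernel centres mu_k
        self.log_sigma = nn.Parameter(torch.zeros(n_kernels))         # kernel widths sigma_k (kept positive via exp)
        self.alpha = nn.Parameter(torch.randn(n_kernels))             # mixing weights alpha_k

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (..., hidden_size). Squared distance of h to every kernel centre.
        dist_sq = ((h.unsqueeze(-2) - self.mu) ** 2).sum(dim=-1)      # (..., n_kernels)
        phi = torch.exp(-dist_sq / (2 * torch.exp(self.log_sigma) ** 2))
        return (self.alpha * phi).sum(dim=-1)                         # linear combination of kernel outputs

# Map the shared LSTM's hidden states (hidden size 4, six kernels as in Section 4.2) to future scalars.
rbf = RBFMapping(hidden_size=4, n_kernels=6)
h = torch.randn(16, 128, 4)     # (D/d sub-vectors, d scalars, hidden size)
print(rbf(h).shape)             # torch.Size([16, 128])
```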

Note that the method presented so far only uses non-overlapping feature sub-vectors, i.e., no overlap exists between two consecutive sub-vectors. However, overlapping feature sub-vectors can be used to improve the robustness of feature generation. Therefore, instead of using a non-overlapping feature stride of $d$, we use an overlapping stride $\delta < d$. In this case, we take the average over all overlapping parts of consecutive sub-vectors to obtain $\hat{x}_{t+1}$.
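One simple way to realise this overlap-and-average step is sketched below (the window bookkeeping and tensor names are assumptions):

```python
import torch

D, d, delta = 2048, 128, 64        # feature dimension, sub-vector size d, overlapping stride delta

# Suppose the window of the future feature starting at every stride position has been
# predicted separately (dummy tensors stand in for the per-window predictions).
starts = list(range(0, D - d + 1, delta))
window_preds = [torch.randn(d) for _ in starts]

pred = torch.zeros(D)
count = torch.zeros(D)
for s, w in zip(starts, window_preds):
    pred[s:s + d] += w             # accumulate every prediction covering this position
    count[s:s + d] += 1
pred /= count                      # average wherever consecutive windows overlap
print(pred.shape)                  # torch.Size([2048])
```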

3.4 Training of feature mapping RNN

Data generation, especially visual data generation from raw images, has remained a challenging problem for years, mainly due to the absence of a suitable loss function. The most commonly used function for this task is the $L_2$ loss. However, it works under the assumption that the data is drawn from a Gaussian distribution, which makes the loss function ineffective when dealing with data that follows other distributions. As an example, if there exist only two equally probable values $v_1$ and $v_2$ for a pixel, the average value $\bar{v} = (v_1 + v_2)/2$ has minimal probability of being the true value for that pixel; yet $\bar{v}$ is exactly what a neural network trained with the $L_2$ loss will output. This property of the $L_2$ loss causes a "blurry" effect on the generated output. Similar observations can be made for feature vector generation.

Recent developments in Generative Adversarial Networks (GANs) address this issue successfully [12]. A traditional GAN consists of two CNNs, one of them named the generator (denoted $G$) and the other the discriminator (denoted $D$). The GAN effectively learns the probability distribution of the original data, and therefore eliminates the blurriness caused by the $L_2$ loss. Here, we propose to train the feature mapping RNN using a combination of $L_2$ and adversarial losses, which is realized by using the feature mapping RNN as the generator $G$. By doing so, we are able to produce predictions that are both accurate and realistic.

$L_2$ loss: The $L_2$ loss is defined as the mean squared error between the generated feature $G(X)$ and the real feature vector $x_{t+1}$ of the future frame, given as follows:

$\mathcal{L}_{L_2} = \lVert x_{t+1} - G(X) \rVert_{2}^{2} \qquad (6)$

Adversarial loss: We use the generator adversarial loss proposed by [12], where we train $G$ so that $D$ believes $G(X)$ comes from the dataset, at which point $D(G(X)) = 1$. The loss function is defined as:

$\mathcal{L}_{adv} = -\log D\big(G(X)\big) \qquad (7)$

By adding this loss to our objective function, the RNN is encouraged to generate feature predictions with a probability distribution similar to that of the original data. Finally, the loss function of our RNN generator is given by:

$\mathcal{L}_{G} = \lambda_{L_2}\,\mathcal{L}_{L_2} + \lambda_{adv}\,\mathcal{L}_{adv} \qquad (8)$

The discriminator is trained to judge whether its inputs are real or synthetic. The objective is to output $1$ when the input is the real data $x_{t+1}$ and $0$ when the input is the generated data $G(X)$. Therefore, the discriminator loss is defined as:

$\mathcal{L}_{D} = -\log D(x_{t+1}) - \log\big(1 - D(G(X))\big) \qquad (9)$
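A compact sketch of one training iteration under the objectives in Equations 6 to 9, using placeholder generator and discriminator networks and the loss weights reported in Section 4.2 ($\lambda_{adv} = 1$, $\lambda_{L_2} = 10$); the optimiser settings are assumptions:

```python
import torch
import torch.nn as nn

D_feat = 2048
G = nn.Linear(D_feat, D_feat)                                # placeholder for the feature mapping RNN generator
D_net = nn.Sequential(nn.Linear(D_feat, 1), nn.Sigmoid())    # placeholder discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)            # learning rate is an assumption
opt_d = torch.optim.Adam(D_net.parameters(), lr=1e-3)
bce = nn.BCELoss()
lam_adv, lam_l2 = 1.0, 10.0                                  # loss weights from Section 4.2

x_obs = torch.randn(8, D_feat)       # observed features (dummy batch)
x_next = torch.randn(8, D_feat)      # ground-truth future features (dummy batch)

# Discriminator step (Eq. 9): real features -> 1, generated features -> 0.
fake = G(x_obs).detach()
d_loss = bce(D_net(x_next), torch.ones(8, 1)) + bce(D_net(fake), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: L2 reconstruction (Eq. 6) plus adversarial term (Eq. 7), weighted as in Eq. 8.
fake = G(x_obs)
g_loss = lam_l2 * ((fake - x_next) ** 2).mean() + lam_adv * bce(D_net(fake), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```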

3.5 Action classifier and inference

To evaluate the authenticity of the predicted features generated by the feature mapping RNN, we again use the frame features to train a two-layer MLP appended with an RBF kernel layer (Equation 5) to classify videos as early as possible. The classification loss is the cross-entropy loss. The feature mapping RNN and the action classification MLP are trained separately. One might consider training the MLP and the feature mapping RNN jointly; however, in terms of performance, we did not observe much of an advantage.
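A hedged sketch of such a classifier, using the layer sizes from Section 4.2 (256 and 128 hidden units, 256 RBF kernels) and 21 output classes as in JHMDB-21; the final linear read-out over the RBF activations is an assumed detail:

```python
import torch
import torch.nn as nn

# Two hidden layers of 256 and 128 units (Section 4.2) producing a 128-d representation.
mlp = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 128), nn.ReLU())

# RBF kernel layer: 256 learned centres in the 128-d space, followed by a linear
# read-out over the 21 JHMDB-21 classes.
mu = nn.Parameter(torch.randn(256, 128))          # kernel centres
log_sigma = nn.Parameter(torch.zeros(256))        # kernel widths (kept positive via exp)
head = nn.Linear(256, 21)

def classify(x):
    h = mlp(x)                                                    # (batch, 128)
    dist_sq = ((h.unsqueeze(1) - mu) ** 2).sum(dim=-1)            # (batch, 256) distances to centres
    phi = torch.exp(-dist_sq / (2 * torch.exp(log_sigma) ** 2))   # Gaussian RBF activations
    return head(phi)                                              # (batch, 21) class logits

logits = classify(torch.randn(8, 2048))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 21, (8,)))
print(logits.shape, float(loss))
```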

Figure 2: Illustration of the RBF kernelized multilayer perceptron.
Figure 3: Testing procedure of the feature mapping RNN.

During inference, we take advantage of all observed and generated features to increase the robustness of the results. The accuracy is calculated by performing temporal average pooling over all predictions (see Fig. 3).

4 Experiments

4.1 Datasets

Three datasets are used to evaluate the performance of our model, namely UT-Interaction [36], JHMDB-21 [17] and UCF101-24 [44]. We follow the standard protocols for each of the datasets in our experiments. We select these datasets because they are the most related to the action anticipation task and have been used in prior work [39, 37].

UT-Interaction The UT-Interaction dataset (UTI) is a popular human action recognition dataset with complicated dynamics. The dataset consists of 6 types of human interactions executed under different backgrounds, zoom rates and interference. It has a total of 20 video sequences split into 2 sets. Each video is approximately 1 minute long, depicting 8 interactions on average. The available action classes are handshaking, pointing, hugging, pushing, kicking and punching. The performance evaluation methodology requires the recognition accuracy to be measured using 10-fold leave-one-out cross validation per set: the accuracy is evaluated 20 times while changing the test sequence repeatedly, and the final result is obtained by averaging all measurements.

JHMDB-21 JHMDB-21 is another challenging dataset that contains 928 video clips of 21 types of human actions. Quite different from UT-Interaction, where video clips of different actions are scripted and shot in relatively noise-free environments, all videos in JHMDB-21 are collected from either movies or online sources, which makes the dataset a lot more realistic. Each video contains an execution of one action, and the dataset is split into 3 sets for training, validation and testing.

UCF101-24 UCF101-24 is a subset of UCF101. The dataset consists of more than 3000 videos from 24 action classes of UCF101. Since all the videos are collected from YouTube, the diversity of the data in terms of action types, backgrounds, camera motions, lighting conditions, etc. is guaranteed. In addition, each video depicts up to 12 actions of the same category with different temporal and spatial characteristics, which makes it one of the most challenging datasets to date.

4.2 Implementation Details

Feature Mapping RNN  The feature mapping RNN is trained with a batch size of , using a hidden state size ($H$) of 4 in all experiments unless otherwise specified. The default dimensionality of the feature sub-vectors, referred to as the feature step size ($d$), is set to 128. We make use of six RBF kernels ($n = 6$) within the RBF kernelized feature mapping RNN. The feature stride ($\delta$) is set to 64, the weight of the adversarial loss ($\lambda_{adv}$) is set to 1, and the weight of the $L_2$ loss ($\lambda_{L_2}$) is set to 10.

Action classifier MLP  The action classifier is a simple two-layer MLP consisting of two hidden layers with 256 and 128 activations, respectively. We also use an RBF kernel layer along with the MLP, where the number of kernels is set to 256. The MLP is trained with a batch size of .

Training and Testing Procedures  We use the penultimate activations of a pre-trained Inception V3 [47] as the frame feature representation. The dimensionality of each feature vector is 2048 ($D = 2048$). The action classification MLP is trained on the feature vectors from the training split of each dataset. These features are also used to train our feature mapping RNN to generate future features. Both models are trained with learning rate and exponential decay rate .
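A minimal sketch of this feature extraction step with torchvision (assuming a recent torchvision release; the file path and preprocessing choices below are illustrative, not necessarily those used by the authors):

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Load ImageNet-pretrained Inception V3 and replace its classifier with an identity,
# so the forward pass returns the 2048-d penultimate activations.
model = models.inception_v3(weights="IMAGENET1K_V1")
model.fc = nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(342),
    transforms.CenterCrop(299),                 # Inception V3 expects 299x299 inputs
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame_0001.jpg").convert("RGB")   # hypothetical path to an extracted video frame
with torch.no_grad():
    feature = model(preprocess(frame).unsqueeze(0))   # shape: (1, 2048)
print(feature.shape)
```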

Protocols  Following the experimental protocol of [39, 37], we use only the first portion ( for UT-Interaction and for JHMDB-21) of the video frames to predict the action class of each video. To utilise our model, we generate an extra portion (referred to as the prediction percentage) of the video features using our RBF kernelized feature mapping RNN, and therefore make use of a larger fraction of the original video length to make the final prediction. To generate the next future feature at test time, we recursively apply our feature mapping RNN given all previous features (including the generated ones). We then use our action classification MLP to predict the action label, using max pooling or simply averaging the predictions. This procedure is illustrated in Fig. 3.
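The inference loop can be sketched as follows (placeholder generator and classifier callables; the averaging variant of the prediction aggregation is shown):

```python
import torch

def anticipate(observed, generator, classifier, n_future):
    """Recursively generate n_future feature vectors from the observed ones, classify every
    observed and generated feature, and average the class scores over time
    (temporal average pooling). generator/classifier are placeholder callables here."""
    feats = list(observed)                        # observed frame features, each of shape (2048,)
    for _ in range(n_future):
        # Feed all previous features (including already generated ones) to produce the next one.
        feats.append(generator(torch.stack(feats)))
    scores = torch.stack([torch.softmax(classifier(f), dim=-1) for f in feats])
    return int(scores.mean(dim=0).argmax())       # temporally averaged class prediction

# Dummy usage with stand-in models (21 classes as in JHMDB-21):
gen = lambda seq: seq[-1] + 0.01 * torch.randn(2048)   # placeholder future-feature generator
clf = lambda f: torch.randn(21)                        # placeholder action classifier
observed = [torch.randn(2048) for _ in range(8)]
print("predicted class:", anticipate(observed, gen, clf, n_future=4))
```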

4.3 Comparison to State-of-the-Art

We compare our model to the state-of-the-art algorithms for the action anticipation task on the JHMDB-21 dataset. Results are shown in Table 1. Our best algorithm (denoted fm+RBF+GAN+Inception V3 in the table) outperforms the state-of-the-art by 18%, and we can clearly see that adding the kernel SVM and adversarial training further improves the accuracy. In addition, to show how our method outperforms the baseline by such a large margin, we also implemented the feature mapping RNN on top of VGG16, so that the deep CNN pre-processing is consistent with the other methods in Table 1. The fm+VGG16 entry in the table shows an improvement over the baseline ELSTM that is attributable purely to the feature mapping RNN.

Experiments are also carried out on the two other datasets, where our best method outperforms the state-of-the-art by 13% on UT-Interaction and 6% on UCF101-24, as shown in Table 2 and Table 3 respectively.

We believe these significant improvements suggest the effectiveness of our two main principles: parameter sharing and the expressive capacity of RBF functions. To further investigate the impact of each component, we perform a series of experiments in the following sections.

Table 1: Comparison of our model against the state-of-the-art on the JHMDB-21 dataset for action anticipation. We follow the JHMDB-21 protocol for action anticipation, and predictions are made using only the earliest portion of each video sequence.
Method Accuracy
Others ELSTM [39]
Within-class Loss [28]
DP-SVM [42]
S-SVM [42]
Where/What [43]
Context-fusion [16]
Ours fm+VGG16
fm+kSVM+GAN+VGG16
fm+Inception V3
fm+RBF+GAN+Inception V3
Table 2: Comparison of our model against the state-of-the-art on the UT-Interaction dataset for action anticipation. Following the UT-Interaction protocol, predictions are made using only the earliest portion of each video sequence.
Method Accuracy
ELSTM [39]
Within-class Loss [28]
Context-fusion [16]
Cuboid Bayes [37]
I-BoW [37]
D-BoW [37]
Cuboid SVM [38]
BP-SVM [26]
Ours
Method Accuracy
Temporal Fusion [6]
ROAD [41]
ROAD + BroxFlow [41]
Ours
Table 3: Comparison of our model against the state-of-the-art on the UCF101-24 dataset for action anticipation. Again, predictions are made using only the earliest portion of each video sequence.

4.4 Analysis

In this section we compare the influence of different components of our RBF kernelized feature mapping RNN. As shown in Table 4, we compare the following variants of our RNN model:

  (a) Feature Mapping RNN: uses only the $L_2$ loss to train the feature mapping RNN;

  (b) Feature Mapping RNN + RBF: our RNN with the RBF kernelized mapping, still using only the $L_2$ loss;

  (c) Feature Mapping RNN + RBF + GAN: the RBF kernelized feature mapping RNN trained with the additional adversarial loss.

Apart from the feature mapping RNN-based models, we also conduct experiments on the following methods as comparisons to our model:

  (d) Linear: a matrix of size $D \times D$ is used for feature generation ($D$ is the dimension of the input feature);

  (e) Vanilla LSTM: generates future action features with a traditional vanilla LSTM, trained with the $L_2$ loss;

  (f) Vanilla LSTM + RBF: the vanilla LSTM with the RBF kernelized mapping, using only the $L_2$ loss;

  (g) Vanilla LSTM + RBF + GAN: the RBF kernelized vanilla LSTM with the added adversarial loss.

Note that all the results are obtained using features extracted by the Inception V3 network, and the accuracies are obtained using max pooling at a fixed prediction percentage.

Method Accuracy
Linear
Vanilla LSTM
Vanilla LSTM + RBF
Vanilla LSTM + RBF + GAN -
Feature Mapping RNN
Feature Mapping RNN + RBF
Feature Mapping RNN + RBF + GAN
Table 4: Comparison of different approaches on the JHMDB-21 dataset.

The results in Table 4 show that the proposed scheme outperforms the linear model significantly while using far fewer parameters. Most interestingly, the feature mapping RNN outperforms the vanilla LSTM by almost 6%, indicating the impact of parameter sharing in the feature space. We can also conclude from Table 4 that the application of the adversarial loss as well as the RBF kernel layers encourages the model to generate more realistic future features, which is reflected in the improved accuracy of Feature Mapping RNN + RBF and Feature Mapping RNN + RBF + GAN. Table 4 also shows that the vanilla LSTM trained with the RBF kernels yields notably higher accuracy than the plain vanilla LSTM, which further confirms that the baseline can also benefit from the RBF layer. Regrettably, the vanilla LSTM with adversarial training failed to stabilise, due to the large number of parameters needed in the LSTM cells to reconstruct the original feature distribution.

The influence of the RBF kernelized feature mapping RNN is quite distinctive in Fig. 4. If we compare the red curve to the green one, we can see that the discrepancy between them becomes larger as the prediction percentage increases. This indicates that the RBF kernelized feature mapping RNN generates more accurate future features in the long term, and hence is a more robust model than the plain feature mapping RNN. Comparing the red and green curves to the orange and blue ones, we can also conclude that the adversarial loss assists the RNN training in a similar way. Even without the assistance of the GAN loss and the RBF kernels, the feature mapping RNN still performs better than the linear projection baseline.

Figure 4: Prediction accuracy without pooling on the JHMDB-21 dataset at different video prediction percentages. The RBF kernelized feature mapping RNN trained with the adversarial loss achieves the highest stable accuracy.
Figure 5: Prediction accuracy evaluated at different feature step sizes on the JHMDB-21 dataset. The accuracy is obtained by varying the feature step size over a range of values and taking a rolling average over neighbouring measurements. No temporal pooling is used.

4.5 Influence of Hyper-parameters

Feature Step Size  The accuracy of the generated data indicates the existence of strong correlations between the $d$-dimensional segments of the feature vectors. By default, we use a feature step size of 128 ($d = 128$). In order to further explore this property, we experimented with different feature step sizes.

Table 5: Prediction accuracy at different feature stride sizes ($\delta$).
Interval Size Accuracy
Table 6: Prediction accuracy using LSTM cells with different hidden state sizes ($H$).
Hidden State Size Accuracy
Table 7: Prediction accuracy using different numbers of RBF kernels.
No. of Kernels Accuracy

In Fig. 5, we plot the recognition accuracy against the feature step size. We observe that a small feature step size guarantees effective feature generation; specifically, the prediction accuracy remains high as long as the feature step size is small. This phenomenon can be explained by the intuition that when the feature step size is large, the model tries to generalize over a large set of features with mixed information at each time step, which results in degraded performance.

It is also interesting to note that the prediction accuracy oscillates drastically once the feature step size becomes large. This indicates that perhaps the feature vector summarizes information about the original image in fixed-size clusters, and when we attempt to break these clusters by setting a different feature step size, the information within each time step lacks continuity and consistency, which subsequently compromises the prediction performance.

Although a smaller feature step size builds a more robust model, training with a larger feature step size takes only about half the time of training with a smaller one, with no compromise in prediction accuracy. Therefore, it can sometimes be beneficial to choose a larger feature step size to save computation time.

Interval Size  In this section we examine the effect of overlapping sub-feature vectors on our RBF kernelized feature mapping RNN. Recall from Section 3.3 that consecutive sub-feature vectors may overlap. Instead of incrementing by multiples of the feature step size $d$, in an attempt to improve the prediction accuracy, we use a feature stride $\delta$ that is smaller than $d$. The prediction accuracy of the feature mapping RNN for several different feature stride values is shown in Table 5.

LSTM state size  This section investigates the influence of the LSTM cell's hidden state size ($H$) on the model's performance. Since the hidden state stores essential information about the input sequence, it is common to consider it the "memory" of the RNN. It is intuitive to expect an improvement in performance when we increase the size of the hidden state, up to some point.

However, the results in Table 6 show that increasing the LSTM state size does not have much effect on the prediction accuracy, especially once the state size becomes moderately large. This is because in the proposed feature mapping RNN model, each LSTM cell takes only one scalar as input, as opposed to traditional RNN cells that process entire vectors. As the hidden state size is always greater than the input size (which equals 1), it is not surprising that a very large $H$ does not have much influence on the model performance.

Number of RBF Kernels  In this section we study the influence of the number of Gaussian kernels used in the RBF kernelized feature mapping RNN. We calculate the prediction accuracy while increasing the number of Gaussian kernels. Results are shown in Table 7. The results show a general trend of increasing prediction performance as we add more kernels, with the highest accuracy achieved at an intermediate number of kernels; adding even more kernels degrades the result. This phenomenon could be explained by over-fitting, resulting from the RBF kernels' strong capability to model temporal dynamics of data with complex boundaries.

Conclusions for hyper-parameter tuning  The conclusion from these experiments is that, in general, the model is not very sensitive to the variation of these hyper-parameters, which demonstrates its robustness. The results further demonstrate the computational efficiency of our approach: since it is possible to effectively train the model with very few parameters, it can be stored on mobile devices for fast future action anticipation.

5 Conclusions

The proposed RNN, which uses very few parameters, outperforms state-of-the-art algorithms on the action anticipation task. Our extensive experiments indicate the model's ability to produce accurate predictions of future features while observing only a fraction of the features. Furthermore, our RNN model is fast and consumes only a fraction of the memory, which makes it suitable for real-time execution on mobile devices. The proposed feature mapping RNN can be trained to generate future features with or without labels: our feature generator does not use class-level annotations of the video data, so, in principle, we can increase the robustness of the model by utilizing large amounts of available unlabelled data. The fact that the model is able to generate valid results using very few parameters provides strong evidence for the existence of inner-correlation between deep features, a characteristic that can have implications for many related problems such as video tracking, image translation, and metric learning.

In addition, by appending an RBF layer to the RNN, we observe a significant improvement in prediction accuracy. However, we also noted that over-fitting occurs when the model is implemented with too many RBF kernels. To fully explore the functional capacity of RBF functions, in future studies we aim to implement RBF kernels on the fully connected layers of popular deep CNN models such as ResNet [13], AlexNet [24] and DenseNet [15].

In conclusion, the proposed RBF kernelized feature mapping RNN demonstrates the power of parameter sharing and RBF functions in a challenging sequence learning task: video action anticipation.

References

  • [1] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin (2003) A neural probabilistic language model. Journal of machine learning research 3 (Feb), pp. 1137–1155. Cited by: §2.
  • [2] A. Dix (2009) Human-computer interaction. In Encyclopedia of database systems, pp. 1327–1331. Cited by: §1.
  • [3] L. Duan, M. Xu, T. Chua, Q. Tian, and C. Xu (2003) A mid-level representation framework for semantic sports video analysis. In 2003 ACM International Conference on Multimedia, pp. 33–44. Cited by: §1.
  • [4] A. Ekin, A. M. Tekalp, and R. Mehrotra (2003) Automatic soccer video analysis and summarization. IEEE Transactions on Image processing 12 (7), pp. 796–807. Cited by: §1.
  • [5] M. Enzweiler and D. M. Gavrila (2010) Integrated pedestrian classification and orientation estimation. In 2010 IEEE Conference on Computer Vision and Pattern Recognition, pp. 982–989. Cited by: §1.
  • [6] Z. Fan, T. Lin, X. Zhao, W. Jiang, T. Xu, and M. Yang (2017) An online approach for gesture recognition toward real-world applications. In Image and Graphics, Y. Zhao, X. Kong, and D. Taubman (Eds.), Cham, pp. 262–272. External Links: ISBN 978-3-319-71607-7 Cited by: Table 3.
  • [7] P. Felsen, P. Agrawal, and J. Malik (2017) What will happen next? forecasting player moves in sports videos. In 2017 IEEE International Conference on Computer Vision, Cited by: §2.
  • [8] D. F. Fouhey and C. L. Zitnick (2014) Predicting object dynamics in scenes. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2027–2034. External Links: Document, ISSN 1063-6919 Cited by: §2.
  • [9] T. Gandhi and M. M. Trivedi (2008) Image based estimation of pedestrian orientation for improving path prediction. In 2008 IEEE Intelligent Vehicles Symposium, pp. 506–511. Cited by: §1.
  • [10] J. Gao, Z. Yang, and R. Nevatia (2017) RED: reinforced encoder-decoder networks for action anticipation. arXiv preprint arXiv:1707.04818. Cited by: §2.
  • [11] H. Gong, J. Sim, M. Likhachev, and J. Shi (2011-11) Multi-hypothesis motion planning for visual object tracking. In 2011 IEEE International Conference on Computer Vision, pp. 619–626. External Links: Document, ISSN 1550-5499 Cited by: §2.
  • [12] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §1, §3.4, §3.4.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In 2016 IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §3.2, §5.
  • [14] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §3.2.
  • [15] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §5.
  • [16] A. Jain, A. Singh, H. S. Koppula, S. Soh, and A. Saxena (2016) Recurrent neural networks for driver activity anticipation via sensory-fusion architecture. In 2016 IEEE International Conference on Robotics and Automation, pp. 3118–3125. Cited by: §2, Table 2.
  • [17] H. Jhuang, J. Gall, S. Zuffi, C. Schmid, and M. J. Black (2013-12) Towards understanding action recognition. In 2013 IEEE International Conference on Computer Vision, pp. 3192–3199. Cited by: §4.1.
  • [18] C. G. Keller and D. M. Gavrila (2014) Will the pedestrian cross? a study on pedestrian path prediction. 2014 IEEE Transactions on Intelligent Transportation Systems 15 (2), pp. 494–506. Cited by: §1.
  • [19] K. M. Kitani, B. D. Ziebart, J. A. Bagnell, and M. Hebert (2012) Activity forecasting. In 2012 European Conference on Computer Vision, A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato, and C. Schmid (Eds.), pp. 201–214. External Links: ISBN 978-3-642-33765-9 Cited by: §2.
  • [20] Y. Kong, D. Kit, and Y. Fu (2014) A discriminative model with multiple temporal scales for action prediction. In 2014 European Conference on Computer Vision, Springer, pp. 596–611. Cited by: §2, §2.
  • [21] J. F. P. Kooij, N. Schneider, F. Flohr, and D. M. Gavrila (2014) Context-based pedestrian path prediction. In 2014 European Conference of Computer Vision, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars (Eds.), pp. 618–633. External Links: ISBN 978-3-319-10599-4 Cited by: §2.
  • [22] J. F. P. Kooij, N. Schneider, F. Flohr, and D. M. Gavrila (2014) Context-based pedestrian path prediction. In 2014 European Conference on Computer Vision, pp. 618–633. Cited by: §1.
  • [23] H. S. Koppula and A. Saxena (2016-01) Anticipating human activities using object affordances for reactive robotic response. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (1), pp. 14–29. External Links: Document, ISSN 0162-8828 Cited by: §1, §2.
  • [24] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (Eds.), pp. 1097–1105. Cited by: §5.
  • [25] C. H. Lampert (2015) Predicting the future behavior of a time-varying probability distribution. In 2015 IEEE Conference on Computer Vision and Pattern Recognition, pp. 942–950. External Links: Document, ISSN 1063-6919 Cited by: §2.
  • [26] K. Laviers, G. Sukthankar, D. W. Aha, M. Molineaux, C. Darken, et al. (2009) Improving offensive performance through opponent modeling.. In 2009 AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Cited by: Table 2.
  • [27] K. Li and Y. Fu (2014) Prediction of human activity by discovering temporal sequence patterns. 2014 IEEE transactions on pattern analysis and machine intelligence 36 (8), pp. 1644–1657. Cited by: §2, §2.
  • [28] S. Ma, L. Sigal, and S. Sclaroff (2016) Learning activity progression in lstms for activity detection and early detection. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1942–1950. External Links: Document Cited by: §1, Table 2.
  • [29] S. Ma, L. Sigal, and S. Sclaroff (2016) Learning activity progression in lstms for activity detection and early detection. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1942–1950. Cited by: §2.
  • [30] I. S. MacKenzie (1992) Fitts’ law as a research and design tool in human-computer interaction. Human-computer interaction 7 (1), pp. 91–139. Cited by: §1.
  • [31] T. Mahmud, M. Hasan, and A. K. Roy-Chowdhury (2017) Joint prediction of activity labels and starting times in untrimmed videos. In 2017 IEEE International Conference on Computer Vision, pp. 5784–5793. Cited by: §2.
  • [32] H. Mobahi, R. Collobert, and J. Weston (2009) Deep learning from temporal coherence in video. In 2009 International Conference on Machine Learning, pp. 737–744. Cited by: §3.2.
  • [33] A. Newell and S. K. Card (1985) The prospects for psychological science in human-computer interaction. Human-computer interaction 1 (3), pp. 209–242. Cited by: §1.
  • [34] M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra (2014) Video (language) modeling: a baseline for generative models of natural videos. CoRR abs/1412.6604. Cited by: §2.
  • [35] M. S. Ryoo and J. K. Aggarwal (2009) Spatio-temporal relationship match: video structure comparison for recognition of complex human activities. In 2009 IEEE International Conference on Computer Vision, pp. 1593–1600. External Links: Document, ISSN 1550-5499 Cited by: §2.
  • [36] M. S. Ryoo and J. K. Aggarwal (2010) UT-Interaction Dataset, ICPR contest on Semantic Description of Human Activities (SDHA). Note: http://cvrc.ece.utexas.edu/SDHA2010/Human_Interaction.html Cited by: §4.1.
  • [37] M. S. Ryoo (2011-11) Human activity prediction: early recognition of ongoing activities from streaming videos. In 2011 International Conference on Computer Vision, pp. 1036–1043. External Links: Document, ISSN 1550-5499 Cited by: §2, §4.1, §4.2, Table 2.
  • [38] M. Ryoo, C. Chen, J. Aggarwal, and A. Roy-Chowdhury (2010) An overview of contest on semantic description of human activities (sdha) 2010. In Recognizing Patterns in Signals, Speech, Images and Videos, pp. 270–285. Cited by: Table 2.
  • [39] M. Sadegh Aliakbarian, F. Sadat Saleh, M. Salzmann, B. Fernando, L. Petersson, and L. Andersson (2017-10) Encouraging lstms to anticipate actions very early. In 2017 IEEE International Conference on Computer Vision, Cited by: §1, §2, §4.1, §4.2, Table 2.
  • [40] M. Saito, E. Matsumoto, and S. Saito (2017) Temporal generative adversarial nets with singular value clipping. In 2017 IEEE International Conference on Computer Vision, Vol. 2, pp. 5. Cited by: §2.
  • [41] G. Singh, S. Saha, M. Sapienza, P. Torr, and F. Cuzzolin (2017) Online real time multiple spatiotemporal action localisation and prediction. Cited by: Table 3.
  • [42] K. Soomro, H. Idrees, and M. Shah (2016) Online localization and prediction of actions and interactions. CoRR abs/1612.01194. External Links: 1612.01194 Cited by: §1, §2, Table 2.
  • [43] K. Soomro, H. Idrees, and M. Shah (2016) Predicting the where and what of actors and actions through online action localization. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2648–2657. Cited by: §2, Table 2.
  • [44] K. Soomro, A. R. Zamir, and M. Shah (2012) UCF101: A dataset of 101 human actions classes from videos in the wild. CoRR abs/1212.0402. Cited by: §4.1.
  • [45] N. Srivastava, E. Mansimov, and R. Salakhudinov (2015) Unsupervised learning of video representations using lstms. In 2015 International Conference on Machine Learning, pp. 843–852. Cited by: §2.
  • [46] F. Suard, A. Rakotomamonjy, A. Bensrhair, and A. Broggi (2006) Pedestrian detection using infrared images and histograms of oriented gradients. In 2006 IEEE Intelligent Vehicles Symposium, pp. 206–212. Cited by: §1.
  • [47] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826. External Links: Document Cited by: §3.2, §4.2.
  • [48] S. Tulyakov, M. Liu, X. Yang, and J. Kautz (2017) Mocogan: decomposing motion and content for video generation. arXiv preprint arXiv:1707.04993. Cited by: §2.
  • [49] C. Vondrick, H. Pirsiavash, and A. Torralba (2016) Anticipating visual representations from unlabeled video. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 98–106. Cited by: §1, §1, §2.
  • [50] T. Vu, C. Olsson, I. Laptev, A. Oliva, and J. Sivic (2014) Predicting actions from static scenes. In 2014 European Conference on Computer Vision, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars (Eds.), pp. 421–436. External Links: ISBN 978-3-319-10602-1 Cited by: §2, §2.
  • [51] J. Walker, A. Gupta, and M. Hebert (2014) Patch to the future: unsupervised visual prediction. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3302–3309. External Links: Document, ISSN 1063-6919 Cited by: §2.
  • [52] J. Walker, A. Gupta, and M. Hebert (2015-12) Dense optical flow prediction from a static image. In 2015 IEEE International Conference on Computer Vision, pp. 2443–2451. External Links: Document Cited by: §2.
  • [53] J. Walker, K. Marino, A. Gupta, and M. Hebert (2017) The pose knows: video forecasting by generating pose futures. In 2017 IEEE International Conference on Computer Vision, pp. 3352–3361. Cited by: §2.
  • [54] D. Xie, S. Todorovic, and S. C. Zhu (2013-12) Inferring dark matter and dark energy from videos. In 2013 IEEE International Conference on Computer Vision, pp. 2224–2231. External Links: Document, ISSN 1550-5499 Cited by: §2.
  • [55] G. Yu, J. Yuan, and Z. Liu (2012) Predicting human activities using spatio-temporal structure of interest points. In 2012 ACM International Conference on Multimedia, pp. 1049–1052. Cited by: §2, §2.
  • [56] D. Zhong and S. Chang (2001) Structure analysis of sports video using domain models. In 2001 IEEE International Conference on Multimedia & Expo, Cited by: §1.