Self-Attention Network for Skeleton-based Human Action Recognition


Abstract

Skeleton-based action recognition has recently attracted a lot of attention. Researchers are proposing new approaches for extracting spatio-temporal relations and making considerable progress on large-scale skeleton-based datasets. Most of the proposed architectures are based on recurrent neural networks (RNNs), convolutional neural networks (CNNs), and graph-based CNNs. For skeleton-based action recognition, long-term contextual information is central, yet it is not well captured by current architectures. To better represent and capture long-term spatio-temporal relationships, we propose three variants of the Self-Attention Network (SAN), namely SAN-V1, SAN-V2, and SAN-V3. Our SAN variants can extract high-level semantics by capturing long-range correlations. We also integrate the Temporal Segment Network (TSN) with our SAN variants, which improves overall performance. Different configurations of the SAN variants and TSN are explored with extensive experiments. Our chosen configuration outperforms the state-of-the-art Top-1 and Top-5 accuracies by 4.4 and 2.9 points, respectively, on Kinetics, and shows consistently better performance than state-of-the-art methods on NTU RGB+D.


1 Introduction

Figure 1: An example of self-attention response from the last self-attention layer. Eight frames are uniformly sampled from an action of the class ‘put on jacket’ and shown as frames 0 to 7. Frame 0 has the strongest correlation with the last frame, frame 7, at the fourth head, and attends heavily to itself at the second head. Note that with the self-attention network each frame is associated with every other frame, so that both local and global context information can be acquired.

Video-based action recognition has been an active research topic due to its important practical applications in many areas, such as video surveillance, behavior analysis, and video retrieval. Human action recognition is also applicable to human-computer or human-robot interaction to help machines understand human behaviors better [39, 21, 3]. Unlike a single image that contains only spatial information, a video provides additional motion information as an important cue for recognition. Although a video provides more information, extracting that information is non-trivial due to a number of difficulties such as viewpoint changes, camera motions, and scale variations, to name a few. There has been extensive research in RGB video-based action recognition, and one of the mainstream methods is to employ both temporal optical flow and spatial appearance to obtain spatial and temporal information [25]. RGB video datasets typically contain an extensive amount of data to process and hence require large models and resources to train properly. On the other hand, skeleton-based action recognition uses only the key joint locations of human bodies. With the advent of cost-effective depth cameras [42], stereo cameras, and advanced techniques for human pose estimation [2], the cost of obtaining key points has decreased, and skeleton-based human action recognition has garnered increasing attention [1, 7, 40]. Although key joint locations do not include appearance information, humans are able to recognize actions from the motion of a few skeleton joints, according to Johansson [11]. In this paper, we focus on human action recognition based on 3D skeleton sequences.

To extract information from skeleton sequences, many works naturally apply recurrent neural networks (RNNs) to model temporal dynamics [23, 18, 41]. Others utilize CNNs to model spatio-temporal dynamics by treating the 3D skeleton data as 2D pseudo-images with 3 channels [17, 36]. Another approach retrieves structural information of the human body by constructing a graph over the human joints [40], which is also based on CNNs. Despite the significant improvements in performance, a problem remains. Both recurrent and convolutional operations are neighborhood-based local operations [38], either in space or in time; hence local-range information must be repeatedly extracted and propagated to capture long-range dependencies. Many works have designed networks with hierarchical structure [7, 16, 4] to obtain longer-range and deeper semantic information, but the problem persists when there are back-and-forth semantic dependencies.

In this paper, we propose a novel model with a Self-Attention Network (SAN) to overcome the above limitation and retrieve better semantic information (Fig. 1). Fig. 2 shows the overall pipeline of our model. The framework is motivated by the temporal segment network [35], which extracts short-term information from each video sequence. Our model extracts semantic information from each video sequence with the SAN variants. The SAN variants take a sequence of features from encoded signals and compute the response at each position as a weighted sum of the features at all positions. This operation enables the SAN variants to correlate features at a distance, or even in the opposite temporal direction. The predictions from each clip are merged with consensus operations to capture a deeper semantic understanding. Therefore, our model can effectively solve the problem of acquiring long-term semantic information. Experimental results show that the learned SAN variants outperform state-of-the-art methods on challenging large-scale datasets. We also visualize the attention correlations to understand how the network works and to provide some insights. The main contributions of the paper are summarized as follows:

  1. We propose Self-Attention Network (SAN) variants SAN-V1, SAN-V2, and SAN-V3 for effectively capturing deep semantic correlations from human skeleton action sequences.

  2. We integrate the Temporal Segment Network (TSN) with our SAN variants and observe improved performance from this integration.

  3. We visualize self-attention probabilities to show how each frame is correlated with other frames.

  4. Our proposed method achieves state-of-the-art results on two large-scale datasets: NTU RGB+D and Kinetics-skeleton.

Figure 2: The overall pipeline of the proposed model. The network takes temporally segmented clips as inputs and extracts contextual information from each snippet with one of the SAN variants described in Section 4.3. The predictions of each snippet are fused to compute the final prediction.

2 Related Work

Handcrafted features were used to represent skeleton motion information in early works. [10] computes a covariance matrix of joint positions over time. [31] extracts 3D geometric relationships of body parts in a Lie group based on rotations and translations of joints. With further progress in deep learning, researchers started using recurrent neural networks to extract temporal dynamics between joints, as RNNs process data sequentially. [7] proposes a hierarchical RNN that splits the human body into five parts, feeds each part into a different subnetwork, and fuses them hierarchically. [23] splits an LSTM cell into part-based cells so that each cell learns a representation of one body part over time. [43] proposes a spatio-temporal LSTM network that learns co-occurrence features of skeleton joints with a group sparse regularization. [18] introduces a trust gate to reduce the influence of noisy joints and employs a spatio-temporal LSTM network to explore spatial and temporal relationships. [26] introduces an attention mechanism into the LSTM network to focus on more important joints at each time instance. In recent works, CNN-based approaches [13, 6, 19, 37] are adopted to learn skeleton features and achieve significant performance. They convert a skeleton sequence into pseudo-images and apply CNNs to them. [6] maps a skeleton sequence to a tensor of frames, joints, and xyz coordinates, treats it as an image, and trains CNNs on it. [13] proposes a CNN-based method that uses relative positions between the joints and reference joints. [37] maps trajectories of joints to orthogonal planes using 2D projection. CNNs are also employed in our method to obtain more informative features from the raw skeleton joints. However, the aforementioned RNNs and CNNs lack the ability to extract long-range correlations between features; our proposed method fills this gap and obtains high-level semantic information through long-range connections of features.

A self-attention network learns to generate hidden state representations for a sequence of input symbols using a multi-layer architecture [30]. The hidden states of the upper layer are built from the hidden states of the lower layer using a self-attention mechanism, which learns to aggregate information from the lower-layer hidden states according to their similarities to the $i$-th hidden state. The learned representations are highly effective because they capture deep contextualized information of the input sequence. Self-attentive networks with multi-head attention have demonstrated success on a number of tasks, including machine translation [30, 28], language modeling and natural language inference [5], and semantic role labeling [27], often surpassing recurrent neural networks by a substantial margin. In particular, [30] describes the Transformer model, which makes the self-attention mechanism an integral part of the architecture for improved sequence modeling. [5] learns deep contextualized word representations that have led to state-of-the-art performance on question answering and natural language inference without task-specific architecture modifications. Despite this success, self-attentive networks have been less investigated for the task of skeleton-based action recognition. In this paper, we introduce a novel self-attentive architecture to fill this gap.

Temporal information can be extracted from sequential data or a video. Many research endeavors have introduced methods for modeling the temporal structure of actions [20, 34, 8]. [20] proposes to employ latent variables to decompose complex actions in time, and [34] introduces a latent hierarchical model that extends the temporal decomposition of complex actions. [8] utilizes a rank SVM to model the temporal evolution of BoVW representations. [35] introduces a method to model long-range temporal structure by simply splitting a video into snippets and fusing the CNN outputs from each part. We adopt this method since it effectively extracts long-range temporal information and is applicable to any network with end-to-end training.

3 Self-Attention Network

(a) SAN-Block
(b) SAN-V1
(c) SAN-V2
(d) SAN-V3
Figure 3: Different designs of the Self-Attention Network architecture. (a) self-attention network (SAN) block computing pairwise correlated attentions; (b) baseline model with early-fused input features; (c) model that learns the movements of each person in a scene; (d) model that learns different modalities for the available people in a scene.

In this section, we briefly review the self-attention network. The self-attention network [30] is a powerful method for computing correlations between arbitrary positions of an input sequence. An attention function consists of a query, keys, and values, where the query and keys have the same vector dimension $d_k$, and the values and outputs have dimension $d_v$. The output is computed as a weighted sum of the values, where the weight assigned to each value is given by the scaled dot-product of the query with the keys. The query, key, and value vectors are packed into matrices $Q$, $K$, and $V$. The attention function is then defined as

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$   (1)

where $\sqrt{d_k}$ is a scaling factor. The equation computes scaled dot-product attention, and the network computes this attention multiple times in parallel (multi-head) to extract different correlation information. The multi-head attention outputs are concatenated and transformed to the same vector dimension as the input sequence. A residual connection adds the input and output of the multi-head self-attention layer, and layer normalization is applied to the summed output. A fully-connected feed-forward network with a residual connection is then applied to the normalized self-attention output. The entire network is illustrated as a self-attention layer in Fig. 3(a), and multiple layers are stacked to extract better representations.
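For concreteness, the following is a minimal sketch of one such self-attention layer, written in PyTorch. The model dimension, number of heads, feed-forward width, and dropout rate are illustrative assumptions, not the paper's exact settings.

```python
# One self-attention (SAN) layer: multi-head scaled dot-product attention
# (Eq. 1) followed by a residual connection, layer normalization, and a
# position-wise feed-forward sub-layer with its own residual connection.
import torch
import torch.nn as nn


class SelfAttentionLayer(nn.Module):
    def __init__(self, d_model=256, num_heads=8, d_ff=512, dropout=0.2):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads, self.d_k = num_heads, d_model // num_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)   # Q, K, V for all heads at once
        self.out = nn.Linear(d_model, d_model)
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.drop = nn.Dropout(dropout)

    def forward(self, x):                            # x: (batch, frames, d_model)
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split into heads: (batch, heads, frames, d_k).
        q, k, v = (z.view(b, t, self.num_heads, self.d_k).transpose(1, 2)
                   for z in (q, k, v))
        # Scaled dot-product attention probabilities.
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_k ** 0.5, dim=-1)
        ctx = (attn @ v).transpose(1, 2).reshape(b, t, d)
        x = self.norm1(x + self.drop(self.out(ctx)))  # residual + layer norm
        x = self.norm2(x + self.drop(self.ff(x)))     # feed-forward sub-layer
        return x, attn                                # attn: per-head probabilities
```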

4 Approach

In this paper, we propose an effective model for skeleton-based action recognition based on the Self-Attention Network. The overall framework is shown in Fig. 2. The inputs are the positions and motions of the joints; the motion (velocity) can be derived from the raw joint positions. Our SAN variants operate on encoded representations of the position and motion sequences, for which we use a simple non-linear projection (FCNN) and a CNN-based encoder. We first explain the transformation from raw position and motion sequences to encoded features, and then describe three SAN-based architectures that effectively capture contextual information from the encoded features.

4.1 Raw Position and Motion Data

The raw skeleton positions in a video clip are defined by the number of frames $T$, the number of joints per person $N$, and the coordinate dimension of each joint $C$ (3 for 3D skeletons). There may be up to $M$ skeletons in a frame, so the total number of joints per frame is $M \times N$. The position data of person $m$ can be written as $P^m = \{p^m_1, p^m_2, \dots, p^m_T\}$, where $p^m_t \in \mathbb{R}^{N \times C}$.

The motion (velocity) data can be explicitly retrieved by taking the difference of each joint position between consecutive frames:

$v^m_t = p^m_{t+1} - p^m_t, \quad t = 1, \dots, T-1$   (2)

Similarly, the motion data for each person is represented as $V^m = \{v^m_1, v^m_2, \dots, v^m_{T-1}\}$.
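A minimal sketch of Eq. 2 in PyTorch is shown below. The final frame is zero-padded here so that the position and motion tensors keep the same length; this padding convention is an assumption for illustration rather than a detail taken from the paper.

```python
import torch

def joint_motion(positions: torch.Tensor) -> torch.Tensor:
    """Frame-to-frame joint velocities, Eq. 2.

    positions: (frames, joints, coords) -> tensor of the same shape.
    """
    motion = positions[1:] - positions[:-1]                    # v_t = p_{t+1} - p_t
    return torch.cat([motion, torch.zeros_like(positions[:1])], dim=0)

# Example: a 75-frame clip with 25 joints in 3D.
velocities = joint_motion(torch.randn(75, 25, 3))              # shape (75, 25, 3)
```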

4.2 Encoder

Our SAN variant models (Fig. 3) operate upon encoded position and motion features. In this section, we describe two methods to encode the raw position and motion data.

Non-Linear Encoder

A non-linear encoder simply uses a feed-forward neural network (FCNN) with a non-linear activation function to project the input vectors to a higher dimension, as in the sketch below. For example, when encoding for SAN-V1 (Fig. 3(b)), we perform early fusion of the position and motion data by concatenation and then apply the non-linear encoder to the fused tensor. For SAN-V2 (Fig. 3(c)) and SAN-V3 (Fig. 3(d)), on the other hand, individual skeletons are encoded separately: the non-linear encoding extends the per-person joint position and motion tensors to higher-dimensional feature tensors.
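A minimal sketch of such a non-linear encoder is given below; the hidden dimension is an illustrative assumption.

```python
import torch.nn as nn

class NonLinearEncoder(nn.Module):
    """Feed-forward projection with a non-linear activation, applied per frame."""
    def __init__(self, in_dim, d_model=256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, d_model), nn.ReLU())

    def forward(self, x):          # x: (batch, frames, in_dim)
        return self.proj(x)        # (batch, frames, d_model)
```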

CNN Based Encoder

A CNN-based encoder is employed to extract low-level features from the raw joint position and motion data. 2D convolutions can serve the purpose of extracting features from the 3D tensors of raw skeleton data. Our encoder block consists of four convolutional layers, as shown in Fig. 4. We explain the general encoding scheme using the encoder input for the SAN-V1 architecture. As mentioned in the previous subsection, the SAN-V1 encoder input is the early fusion of the position and motion data. The first layer extends the raw coordinates into a higher-dimensional feature tensor. The second layer convolves over a window of neighboring frames, since we are interested in extracting local contextual information over frames. We then transpose the joint and coordinate axes so that the subsequent layers can extract features from the correlations of all joints over local frames. The third layer uses stride 1 and is followed by max pooling. The final convolutional layer also uses stride 1 and, similar to the third layer, is followed by max pooling. The last two CNN layers thus encode correlated local features from all joints of the human body. For SAN-V2 (Fig. 3(c)) and SAN-V3 (Fig. 3(d)), we instead encode the position and motion data of the individual skeletons in the frames. Note that the number of frames $T$ remains the same, so the encoders produce one feature representation per frame. A sketch of such an encoder follows.
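The sketch below illustrates one plausible layout of such a CNN encoder in PyTorch. The kernel sizes and channel widths are assumptions for illustration (the paper's exact values are not reproduced); the pooling is restricted to the feature axis so that the number of frames is preserved, as required above.

```python
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    def __init__(self, coords=3, joints=25, d_model=256):
        super().__init__()
        # Layer 1: point-wise conv that lifts the raw coordinate channels.
        self.conv1 = nn.Sequential(nn.Conv2d(coords, 64, kernel_size=1), nn.ReLU())
        # Layer 2: temporal conv over a window of neighboring frames (per joint).
        self.conv2 = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=(3, 1), padding=(1, 0)), nn.ReLU())
        # Layers 3-4: after transposing, joints sit on the channel axis, so the
        # convolutions mix information from all joints; pooling only along the
        # feature axis keeps the number of frames unchanged.
        self.conv3 = nn.Sequential(
            nn.Conv2d(joints, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)))
        self.conv4 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)))
        self.proj = nn.Linear(128 * 8, d_model)   # 32 features pooled twice by 2 -> 8

    def forward(self, x):                      # x: (batch, coords, frames, joints)
        x = self.conv2(self.conv1(x))          # (batch, 32, frames, joints)
        x = x.permute(0, 3, 2, 1)              # joints become channels
        x = self.conv4(self.conv3(x))          # (batch, 128, frames, 8)
        x = x.permute(0, 2, 1, 3).flatten(2)   # (batch, frames, 128 * 8)
        return self.proj(x)                    # one d_model feature per frame
```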

4.3 SAN Variant Architecture

We investigate three SAN-based network architectures, shown in Fig. 3, for skeleton-based action recognition. These architectures employ the same SAN block shown in Fig. 3(a) but operate upon different combinations of the encoded position and motion features. We first discuss the SAN block used in these networks in detail.

Self-Attention Network

The SAN block operates on encoded representations of the position and motion information. The input to the SAN block is a sequence $X = (x_1, \dots, x_T)$, where $x_t$ is the feature representation of frame $t$. The dimension of $x_t$ depends on the encoder and the model variant. The first layer of the SAN block is a position embedding that generates an embedding sequence $E = (e_1, \dots, e_T)$. The position embedding layer provides a sense of order to the feature vectors; this ordering prior helps the feature vector at each time step capture overall contextual cues from the input sequence. The output of the position embedding layer is the element-wise addition of the input sequence $X$ and the position embedding $E$.

The output of the position embedding layer, $Z^{(0)} = X + E$, is fed to the first self-attention layer, and each subsequent self-attention layer consumes the output of the previous one. Each self-attention layer computes pairwise attention probabilities, and the query, key, and value parameters described in Eq. 1 are learned. The $l$-th self-attention layer outputs $Z^{(l)}$, where $l = 1, \dots, L$ and $L$ is the number of self-attention layers. We concatenate the outputs from all SAN layers in order to gather all of the attention information, as shown below:

$Z^{(l)} = \mathrm{SAN}_l\!\left(Z^{(l-1)}\right), \quad l = 1, \dots, L$   (3)
$Z = \mathrm{Concat}\!\left(Z^{(1)}, Z^{(2)}, \dots, Z^{(L)}\right)$   (4)

where the $\mathrm{Concat}$ layer concatenates along the feature (vector) axis, creating the concatenated sequence $Z$. Then, a global average layer is applied to $Z$ along the frame axis to obtain a video-level feature whose dimension is $L$ times the per-layer feature dimension. Finally, a fully connected layer with a non-linear activation (ReLU) projects this feature vector back to the input dimension. A sketch of the SAN block is given below.
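The following sketch puts the SAN block together, reusing the SelfAttentionLayer sketched in Section 3; the maximum sequence length and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SANBlock(nn.Module):
    def __init__(self, d_model=256, num_layers=4, num_heads=8, max_len=300):
        super().__init__()
        # Learned position embedding added element-wise to the frame features.
        self.pos_emb = nn.Parameter(torch.zeros(max_len, d_model))
        self.layers = nn.ModuleList(
            SelfAttentionLayer(d_model, num_heads) for _ in range(num_layers))
        # Projects the concatenated per-layer features back to d_model (ReLU).
        self.fc = nn.Sequential(nn.Linear(num_layers * d_model, d_model), nn.ReLU())

    def forward(self, x):                      # x: (batch, frames, d_model)
        h = x + self.pos_emb[:x.size(1)]       # Z^(0) = X + E
        outs = []
        for layer in self.layers:              # each layer consumes the previous output
            h, _ = layer(h)
            outs.append(h)
        z = torch.cat(outs, dim=-1)            # Eq. 4: concat along the feature axis
        z = z.mean(dim=1)                      # global average over frames
        return self.fc(z)                      # (batch, d_model) video-level feature
```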

Figure 4: An input sequence of skeleton joints over $T$ frames is fed to the convolutional blocks, producing an output feature tensor with one representation per frame. The colors in the figure denote convolutional layers, ReLU activations, and max-pooling layers.

SAN-V1

SAN-V1 (Fig. 3(b)) is a baseline network for understanding how well the SAN block works for this task. It takes the concatenated position and motion data as input, achieving feature-level early fusion, and generates a single input sequence. This sequence is encoded with either the CNN encoder or the non-linear encoder, and the joint dimension of the encoder input covers the joints of all the people in the frame. The SAN block extracts latent local and global context information from the encoded input sequence. Zero padding is applied when the number of valid people in a frame is less than a pre-defined maximum number of people. The output of the SAN block is fed to a classification layer, which consists of a ReLU activation, a dropout layer, and a linear layer with softmax activation that predicts the probability of each class. The network is trained with the cross-entropy loss. A sketch is given below.
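The sketch below composes SAN-V1 from the encoder and SAN block sketches above; it assumes flattened per-frame inputs as used by the non-linear encoder, and the class count is illustrative.

```python
import torch
import torch.nn as nn

class SANV1(nn.Module):
    """Early fusion: concatenate position and motion, encode, attend, classify."""
    def __init__(self, encoder, san_block, d_model=256, num_classes=60):
        super().__init__()
        self.encoder, self.san = encoder, san_block
        self.classifier = nn.Sequential(
            nn.ReLU(), nn.Dropout(0.5), nn.Linear(d_model, num_classes))

    def forward(self, position, motion):                # (batch, frames, feat) each
        fused = torch.cat([position, motion], dim=-1)   # feature-level early fusion
        feats = self.encoder(fused)                     # (batch, frames, d_model)
        return self.classifier(self.san(feats))         # class logits
```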

SAN-V2

SAN-V2 (Fig. 3(c)) is designed to extract contextual features with a SAN block for each subject (skeleton) in a scene. This network computes action scores for each skeleton and keeps the strongest signal among all the people present in a video. Similar to SAN-V1, the encoded position and motion data of each person are concatenated, and the concatenated input sequences are fed to the corresponding SAN blocks; the input dimension of each SAN block depends on whether the non-linear or the CNN encoder is used. The SAN blocks share weights so that they learn a variety of movements from different people. The SAN outputs can be merged with different operations such as element-wise max, mean, or concatenation. According to our preliminary experiments, element-wise max works best, as it captures the strongest action signal among the people who are present. The final classification layer is identical to the one in SAN-V1. Note that SAN-V2 uses a late-fusion strategy and scales to an arbitrary number of people. A sketch follows.
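A sketch of SAN-V2 is given below: a weight-shared encoder and SAN block process each person, and an element-wise max over people keeps the strongest signal. Shapes and sizes are illustrative.

```python
import torch
import torch.nn as nn

class SANV2(nn.Module):
    def __init__(self, encoder, san_block, d_model=256, num_classes=60):
        super().__init__()
        self.encoder, self.san = encoder, san_block     # weights shared across people
        self.classifier = nn.Sequential(
            nn.ReLU(), nn.Dropout(0.5), nn.Linear(d_model, num_classes))

    def forward(self, position, motion):
        # position, motion: (batch, people, frames, feat); absent people zero-padded.
        per_person = []
        for m in range(position.size(1)):
            fused = torch.cat([position[:, m], motion[:, m]], dim=-1)
            per_person.append(self.san(self.encoder(fused)))
        merged = torch.stack(per_person, dim=1).max(dim=1).values   # late fusion (max)
        return self.classifier(merged)
```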

SAN-V3

Lastly, SAN-V3 (Fig. 3(d)) is designed to deal with the two data modalities, position and velocity (motion), separately. For each modality, the most prominent signal across all people is chosen by an element-wise max operation. The input dimension of each SAN block again depends on whether the non-linear or the CNN encoder is used. The output of each SAN block is fed to a separate classifier, and the concatenated signal from the SAN blocks is consumed by another classifier. This network is also scalable to any number of people in a scene. The training loss of the model is the sum of the cross-entropy losses from all classifiers. A sketch follows.
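The sketch below follows this description: one SAN block per modality, an element-wise max across people for each modality, and three classifiers whose cross-entropy losses are summed. Whether the max is applied before or after the SAN block is an interpretation of the text; here it is applied to the encoded features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SANV3(nn.Module):
    def __init__(self, encoder, san_pos, san_mot, d_model=256, num_classes=60):
        super().__init__()
        self.encoder = encoder
        self.san_pos, self.san_mot = san_pos, san_mot     # one SAN block per modality

        def head(dim):
            return nn.Sequential(nn.ReLU(), nn.Dropout(0.5),
                                 nn.Linear(dim, num_classes))
        self.head_pos, self.head_mot = head(d_model), head(d_model)
        self.head_joint = head(2 * d_model)               # consumes concatenated outputs

    def forward(self, position, motion):
        # position, motion: (batch, people, frames, feat)
        def modality(data, san):
            enc = torch.stack([self.encoder(data[:, m])
                               for m in range(data.size(1))], dim=1)
            return san(enc.max(dim=1).values)             # strongest signal over people
        pos, mot = modality(position, self.san_pos), modality(motion, self.san_mot)
        return (self.head_pos(pos), self.head_mot(mot),
                self.head_joint(torch.cat([pos, mot], dim=-1)))

def sanv3_loss(logits, target):
    # Sum of the cross-entropy losses from all three classifiers.
    return sum(F.cross_entropy(l, target) for l in logits)
```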

4.4 Temporal Segment Self-Attention Network (TS-SAN)

The self-attention network can associate features regardless of their distance, making it possible to capture long-range information. However, because the feature representations of the same action can vary due to many factors (viewpoint changes, different execution speeds across subjects, etc.), the proposed network may not learn well on its own. We therefore leverage the temporal segment network [35] to train the network more effectively. As shown in Fig. 2, a video is divided into clips, and one of the SAN variants in Fig. 3 is employed to learn the temporal dynamics of each clip. Note that all layers share weights across clips. Formally, given $K$ segments of a video, the proposed network models the sequence of clips as follows:

$\mathrm{TS\text{-}SAN}(S_1, S_2, \dots, S_K) = g\big(\mathcal{F}(S_1; W), \mathcal{F}(S_2; W), \dots, \mathcal{F}(S_K; W)\big)$   (5)

where $\mathcal{F}$ denotes one of the SAN-variant models, $W$ is its parameters, $S_k$ is the $k$-th snippet, and $g$ is the aggregation (consensus) function applied to the per-snippet predictions: element-wise max or element-wise average.
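A sketch of the temporal segment wrapper is given below for a SAN variant that returns a single logit tensor (e.g., the SAN-V1/V2 sketches). The uniform split into clips and the consensus choices mirror Eq. 5; the exact sampling scheme is simplified for illustration.

```python
import torch
import torch.nn as nn

class TSSAN(nn.Module):
    def __init__(self, san_variant, num_segments=3, consensus="avg"):
        super().__init__()
        self.model = san_variant          # one SAN variant, weights shared across clips
        self.num_segments = num_segments
        self.consensus = consensus        # "avg" or "max"

    def forward(self, position, motion):
        # Split along the frame axis (second-to-last dimension) into K clips.
        pos_clips = position.chunk(self.num_segments, dim=-2)
        mot_clips = motion.chunk(self.num_segments, dim=-2)
        preds = torch.stack([self.model(p, v)
                             for p, v in zip(pos_clips, mot_clips)])  # (K, batch, classes)
        if self.consensus == "avg":
            return preds.mean(dim=0)      # element-wise average consensus
        return preds.max(dim=0).values    # element-wise max consensus
```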

5 Experiments

Methods CS CV
H-RNN [7] (2015) 59.1 64.0
PA-LSTM [23] (2016) 62.9 70.3
TG ST-LSTM [18] (2016) 69.2 77.7
Two-stream RNN [33] (2017) 71.3 79.5
STA-LSTM [26] (2017) 73.4 81.2
Ensemble TS-LSTM [16] (2017) 74.6 81.3
VA-LSTM [41] (2017) 79.4 87.6
ST-GCN [40] (2018) 81.5 88.3
DPRL [29] (2018) 83.5 89.8
HCN [17] (2018) 86.5 91.9
SR-TSL [24] (2018) 84.8 92.4
TS-SAN (Ours) 87.2 92.7
Table 1: Results of our method in comparison with state-of-the-art methods on NTU RGB+D with Cross-Subject(CS) and Cross-View(CV) benchmarks.

We perform extensive experiments to evaluate the effectiveness of our proposed self-attention frameworks on two large-scale benchmark datasets: the NTU RGB+D dataset [22] and the Kinetics-skeleton dataset [12]. We analyze the performance of our variant models and visualize self-attention probabilities to understand their mechanism.

5.1 Datasets

Ntu Rgb+d

NTU RGB+D is currently the largest action recognition dataset with joint annotations, collected with Microsoft Kinect v2 sensors. It has 56,880 video samples and contains 60 action classes in total. These actions are performed by 40 distinct subjects and recorded simultaneously by three cameras at different horizontal views. The joint annotations consist of the 3D locations of 25 major body joints. [22] defines two standard evaluation protocols for this dataset: Cross-Subject (CS) and Cross-View (CV). For Cross-Subject evaluation, the 40 subjects are split into training and testing groups of 20 subjects each; the numbers of training and testing samples are 40,320 and 16,560, respectively. For Cross-View evaluation, all the samples from cameras 2 and 3 are used for training while the samples from camera 1 are used for testing; the numbers of training and testing samples are 37,920 and 18,960, respectively.

Kinetics

Kinetics [12] contains about 266,000 video clips retrieved from YouTube and covers 400 classes. Since no skeleton annotation is provided, the skeletons are estimated with the OpenPose toolbox [2] from videos resized to 340×256 resolution. The toolbox estimates the 2D coordinates of 18 human joints together with a confidence score for each joint, so each joint is represented by its 2D coordinates and confidence score, and at most 2 people are selected per frame based on the highest average joint confidence score. The total number of frames for every clip is fixed to 300 by repeating the sequence from the start. We employ the released skeleton dataset to train our model and report top-1 and top-5 accuracies as introduced in [40]. The numbers of training and validation samples are around 246,000 and 20,000, respectively.

5.2 Implementation Details

We resize the sequence length to a fixed number of frames, $T$=32/64 (NTU/Kinetics), with bilinear interpolation along the frame dimension. We use $K$=3 temporal segments, and 32 frames are sampled from each clip. The numbers of self-attention layers and multi-heads are 4 and 8 for NTU RGB+D, and 8 and 8 for Kinetics, respectively.
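The following sketch shows one way to resize a variable-length skeleton sequence to a fixed number of frames by interpolating along the frame dimension, as described above; treating the (frames × joints) grid as an image for bilinear interpolation is an implementation assumption.

```python
import torch
import torch.nn.functional as F

def resize_sequence(seq: torch.Tensor, target_len: int = 32) -> torch.Tensor:
    """Resize (frames, joints, coords) to (target_len, joints, coords)."""
    x = seq.permute(2, 0, 1).unsqueeze(0)                 # (1, coords, frames, joints)
    x = F.interpolate(x, size=(target_len, seq.size(1)),
                      mode="bilinear", align_corners=False)
    return x.squeeze(0).permute(1, 2, 0)                  # back to (frames, joints, coords)

# Example: a 75-frame NTU clip with 25 joints in 3D becomes 32 frames.
clip = resize_sequence(torch.randn(75, 25, 3), target_len=32)   # shape (32, 25, 3)
```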

To alleviate overfitting, we apply dropout with a probability of 0.5 before the last prediction layer and after the last convolution layer; within the self-attention network, a dropout ratio of 0.2 is used. For data augmentation during training, we randomly crop sequences with a ratio drawn uniformly from [0.5, 1]; at test time, we center-crop sequences with a ratio of 0.9. The learning rate is reduced by half whenever no improvement in accuracy is observed for 5 epochs. The Adam optimizer [15] is applied with weight decay. The model is trained for 200/100 (NTU/Kinetics) epochs with a batch size of 64.
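As a sketch, the learning-rate schedule described above maps naturally onto PyTorch's ReduceLROnPlateau; the initial learning rate and weight-decay value below are placeholders, since the exact numbers are not reproduced here, and the model is a stand-in.

```python
import torch
import torch.nn as nn

model = nn.Linear(256, 60)        # stand-in for a TS-SAN model
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-3,              # placeholder initial learning rate
                             weight_decay=1e-4)    # placeholder weight decay
# Halve the learning rate when validation accuracy stalls for 5 epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=5)

# Inside the training loop, after computing validation accuracy `val_acc`:
# scheduler.step(val_acc)
```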

5.3 Comparison to State of the Art

Methods Top-1 Top-5
Feature Enc. [9] (2015) 14.9 25.8
Deep LSTM [23] (2016) 16.4 35.3
Temporal Conv [14] (2017) 20.3 40.0
ST-GCN [40] (2018) 30.7 52.8
TS-SAN (Ours) 35.1 55.7
Table 2: Results of our method in comparison with state-of-the-art methods on Kinetics.

We compare the performance of the proposed method with state-of-the-art methods on the NTU RGB+D and Kinetics datasets, as shown in Table 1 and Table 2. The compared methods are based on CNNs, RNNs (or LSTMs), and graph structures, and our method consistently outperforms these state-of-the-art approaches. This demonstrates the effectiveness of our proposed model for the skeleton-based action recognition task.

As shown in Table 1, our proposed model achieves the best performance with 87.2 on CS and 92.7 on CV. Our model and STA-LSTM [26] have in common that an attention mechanism is used; compared with STA-LSTM [26], our model performs better by 13.8 on CS and 11.5 on CV. Our model encodes the raw skeleton data with CNNs similar to HCN [17] but outperforms it by 0.7 on CS and 0.8 on CV. Compared with SR-TSL [24], one of the best-performing methods, the performance gaps are 2.4 on CS and 0.3 on CV.

On the Kinetics dataset, we compare with four methods based on handcrafted features, LSTMs, temporal convolution, and graph-based convolution. As shown in Table 2, our method attains the best performance by a significant margin, outperforming the best prior method by 4.4 on top-1 and 2.9 on top-5 accuracy. We also observe from Table 1 and Table 2 that CNN-based methods [17, 24, 40, 14] are superior to LSTM-based methods [41, 16, 23], and our model outperforms the CNN-based methods.

5.4 Ablation Study

We analyze the proposed network by comparing it with baseline models. We compare SAN variants with hyperparameter options for encoders, self-attention network, and temporal segment network. Each experiment is evaluated on the NTU RGB+D dataset.

Methods CS CV
SAN-V1 + FF 75.4 79.8
SAN-V1 + CNN 80.1 86.2
SAN-V2 + FF 80.3 85.2
SAN-V2 + CNN 85.9 91.7
SAN-V3 + FF 78.6 84.1
SAN-V3 + CNN 85.5 91.4
Table 3: Comparison of the SAN variants shown in Fig. 3 with different encoders on the NTU dataset.
Methods CS CV
SAN-V2 (seq=96) 86.1 92.0
SAN-V3 (seq=96) 85.9 91.7
TS (seg=3) + SAN-V2 (seq=32) 87.2 92.7
TS (seg=3) + SAN-V3 (seq=32) 86.8 92.4
Table 4: Comparison showing the effectiveness of the temporal segment network on the NTU dataset.

Effect of SAN Variants with Different Encoders

Table 3 shows the results of the different SAN variants with different encoders. The SAN-V2 model performs the best and the SAN-V1 model the worst, with only a minimal gap between SAN-V2 and SAN-V3. We observe that the CNN encoder boosts the accuracy by up to 7.3 for SAN-V3, showing that it generates rich feature representations for the SAN models and plays a significant role in the network. From the observation that SAN-V2 slightly outperforms SAN-V3, we draw two conclusions: late fusion performs better than early fusion, and sharing the weights of the SAN blocks results in better-trained models.

Effect of Temporal Segment

The self-attention network is suitable for connecting both short- and long-range features and is capable of capturing higher-level context from all correlations. We compare TS-SAN and the plain SAN variants under the same total sequence length to see how they perform differently. As shown in Table 4, TS-SAN performs better, which supports our design choice of using temporal segments. However, the SAN variants without the temporal segment network have the advantage of fewer parameters at a small cost in performance. Although the TS-SAN models perform better, we observe that the SAN variants still perform well for long input sequences (seq=96).

Methods CS CV
TS(Avg) + SAN-V2 87.2 92.7
TS(Max) + SAN-V2 86.1 91.9
TS(Avg) + SAN-V3 86.8 92.4
TS(Max) + SAN-V3 85.9 91.1
Table 5: Comparison of different aggregation (consensus) functions for the TS network on the NTU dataset.
Methods CS CV
TS + SAN-V2 (L2H2) 86.7 92.1
TS + SAN-V2 (L4H4) 86.9 92.5
TS + SAN-V2 (L4H8) 87.2 92.7
TS + SAN-V2 (L8H8) 87.0 92.4
Table 6: Comparison of different numbers of attention layers and multi-heads on the NTU dataset.

Effect of Consensus Function

We consider element-wise operations for the consensus function that computes the final prediction. Two operations are considered: element-wise average and element-wise maximum. Table 5 shows the performance of TS-SAN-V2 and TS-SAN-V3 with these operations. The element-wise average consensus function outperforms the element-wise max operation for both SAN variants; in fact, the TS-SAN model with the element-wise max operation is outperformed by the SAN model without the temporal segment shown in Table 4. We conjecture that, since the self-attention outputs are themselves based on weighted averaging, it is more natural to use the element-wise average to aggregate the outputs collected from each snippet. In this way, the video-level self-attention can be computed properly, leading to the best performance.

Effect of Number of Layers and Multi-heads in SAN Block

We compare the TS-SAN-V2 model with different numbers of layers and multi-heads. The results are shown in Table 6. Comparing rows 2 and 3, we observe that the number of heads affects the performance only marginally. From rows 3 and 4, we also observe that the network underperforms if it contains too many parameters; conversely, it underperforms when the number of parameters is not sufficient (row 1). According to these results, we argue that the proposed model requires an appropriate number of layers and heads for a given dataset to perform best.

5.5 Visualization of Self-Attention Layer Response

The self-attention network determines how strongly each frame correlates with the other frames. We visualize the self-attention response from the last self-attention layer with a visualization tool [32] to understand how the frames are correlated for a given action video. As shown in Fig. 5, the vertical axis shows the 32 sampled frames. Self-attention responses for the eight multi-heads are displayed, and each column shows the coarse shape of the attention pattern between frames.

The model used for this visualization has four layers and eight heads and takes 32 sampled frames as the input sequence; no temporal segment network is used to train it. The self-attention probabilities are calculated by Eq. 1 in the self-attention layer described in Fig. 3(a). For example, in Fig. 5(a), one of the strongest correlations in the third head is the connection between frame 31 and frame 0 (a line across from bottom left to top right). This example shows that long-range correlation is achieved and that the proposed method captures a variety of correlations over both short and long distances.

We observe that the overall self-attention response patterns of the same action class (‘put on jacket’) resemble each other, as shown in Fig. 5(a) and Fig. 5(b). The responses of head 1 and head 6 from the two videos show especially similar patterns. Although the two videos differ in subject, duration, and view, the self-attention captures a certain latent similarity. Comparing Fig. 5(a) and Fig. 5(b) with Fig. 5(c), there is little similarity in the response patterns because the action classes differ (‘put on jacket’ vs. ‘reading’). We also conclude that the proposed model is robust to subtle changes in motion or action speed across different subjects and even views.

(a) ‘Put on jacket’ action with subject 1
(b) ‘Put on jacket’ action with subject 2
(c) ‘Reading’ action with subject 1
Figure 5: Self-attention probabilities from the last self-attention layer for three test videos on NTU RGB+D are visualized. The brighter color denotes the higher probability or the stronger connection.

6 Conclusion

In this paper, we propose three novel SAN variants in order to extract high-level context from short- and long-range self-attention. Our proposed architectures significantly outperform state-of-the-art methods. The CNN encoder employed in our model is effective at extracting feature representations for the input sequence of the self-attention network, and SAN can capture temporal correlations regardless of distance, making it possible to obtain high-level context information from both short- and long-range self-attention. We also propose an effective integration of SAN and TSN, which results in an observable performance boost. We perform extensive experiments on two large-scale datasets, NTU RGB+D and Kinetics-skeleton, and verify the effectiveness of our proposed models for the skeleton-based action recognition task. In the future, we will apply our model to video-based recognition tasks with key point annotations, such as facial expression recognition, and explore different methods to extract effective feature representations for the input sequence of SAN.

References

  1. J. K. Aggarwal and L. Xia (2014) Human activity recognition from 3d data: A review. Pattern Recognition Letters 48, pp. 70–80. Cited by: §1.
  2. Z. Cao, G. Hidalgo, T. Simon, S. Wei and Y. Sheikh (2018) OpenPose: realtime multi-person 2d pose estimation using part affinity fields. CoRR abs/1812.08008. Cited by: §1, §5.1.2.
  3. S. Cho and H. Foroosh (2018) A temporal sequence learning for action recognition and prediction. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 352–361. Cited by: §1.
  4. S. Cho and H. Foroosh (2018) Spatio-temporal fusion networks for action recognition. In Asian Conference on Computer Vision, pp. 347–364. Cited by: §1.
  5. J. Devlin, M. Chang, K. Lee and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. CoRR abs/1810.04805. Cited by: §2.
  6. Y. Du, Y. Fu and L. Wang (2015) Skeleton based action recognition with convolutional neural network. In ACPR, pp. 579–583. Cited by: §2.
  7. Y. Du, W. Wang and L. Wang (2015-06) Hierarchical recurrent neural network for skeleton based action recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §1, §2, Table 1.
  8. B. Fernando, E. Gavves, J. O. M., A. Ghodrati and T. Tuytelaars (2015) Modeling video evolution for action recognition. pp. 5378–5387. Cited by: §2.
  9. B. Fernando, E. Gavves, J. O. M., A. Ghodrati and T. Tuytelaars (2015) Modeling video evolution for action recognition. In CVPR, pp. 5378–5387. Cited by: Table 2.
  10. M. E. Hussein, M. Torki, M. A. Gowayyed and M. El-Saban (2013) Human action recognition using a temporal hierarchy of covariance descriptors on 3d joint locations. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI ’13, pp. 2466–2472. External Links: ISBN 978-1-57735-633-2, Link Cited by: §2.
  11. G. Johansson (1973-06-01) Visual perception of biological motion and a model for its analysis. Perception & Psychophysics 14 (2), pp. 201–211. External Links: ISSN 1532-5962, Document, Link Cited by: §1.
  12. W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman and A. Zisserman (2017) The kinetics human action video dataset. CoRR abs/1705.06950. Cited by: §5.1.2, §5.
  13. Q. Ke, M. Bennamoun, S. An, F. A. Sohel and F. Boussaïd (2017) A new representation of skeleton sequences for 3d action recognition. In CVPR, pp. 4570–4579. Cited by: §2.
  14. T. S. Kim and A. Reiter (2017) Interpretable 3d human action analysis with temporal convolutional networks. In CVPR Workshops, pp. 1623–1631. Cited by: §5.3, Table 2.
  15. D. P. Kingma and J. Ba (2015) Adam: A method for stochastic optimization. In ICLR, Cited by: §5.2.
  16. I. Lee, D. Kim, S. Kang and S. Lee (2017-10) Ensemble deep learning for skeleton-based action recognition using temporal sliding lstm networks. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §1, §5.3, Table 1.
  17. C. Li, Q. Zhong, D. Xie and S. Pu (2018) Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. In IJCAI, pp. 786–792. Cited by: §1, §5.3, §5.3, Table 1.
  18. J. Liu, A. Shahroudy, D. Xu and G. Wang (2016) Spatio-temporal lstm with trust gates for 3d human action recognition. In ECCV, Cited by: §1, §2, Table 1.
  19. M. Liu, H. Liu and C. Chen (2017) Enhanced skeleton visualization for view invariant human action recognition. Pattern Recognition 68, pp. 346–362. Cited by: §2.
  20. J. C. Niebles, C. Chen and F. Li (2010) Modeling temporal structure of decomposable motion segments for activity classification. In ECCV (2), Lecture Notes in Computer Science, Vol. 6312, pp. 392–405. Cited by: §2.
  21. R. Poppe (2010-06) A survey on vision-based human action recognition. Image Vision Comput. 28 (6), pp. 976–990. External Links: ISSN 0262-8856, Link, Document Cited by: §1.
  22. A. Shahroudy, J. Liu, T. Ng and G. Wang (2016-06) NTU rgb+d: a large scale dataset for 3d human activity analysis. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §5.1.1, §5.
  23. A. Shahroudy, J. Liu, T. Ng and G. Wang (2016-06) NTU rgb+d: a large scale dataset for 3d human activity analysis. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2, §5.3, Table 1, Table 2.
  24. C. Si, Y. Jing, W. Wang, L. Wang and T. Tan (2018) Skeleton-based action recognition with spatial reasoning and temporal stack learning. In ECCV (1), Lecture Notes in Computer Science, Vol. 11205, pp. 106–121. Cited by: §5.3, §5.3, Table 1.
  25. K. Simonyan and A. Zisserman (2014) Two-stream convolutional networks for action recognition in videos. In NIPS, pp. 568–576. Cited by: §1.
  26. S. Song, C. Lan, J. Xing, W. Zeng and J. Liu (2017) An end-to-end spatio-temporal attention model for human action recognition from skeleton data. In AAAI Conference on Artificial Intelligence, pp. 4263–4270. Cited by: §2, §5.3, Table 1.
  27. E. Strubell, P. Verga, D. Andor, D. Weiss and A. McCallum (2018) Linguistically-informed self-attention for semantic role labeling. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Cited by: §2.
  28. G. Tang, M. Muller, A. Rios and R. Sennrich (2018) Why self-attention? A targeted evaluation of neural machine translation architectures. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Cited by: §2.
  29. Y. Tang, Y. Tian, J. Lu, P. Li and J. Zhou (2018-06) Deep progressive reinforcement learning for skeleton-based action recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: Table 1.
  30. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser and I. Polosukhin (2017) Attention is all you need. In NIPS, pp. 6000–6010. Cited by: §2, §3.
  31. R. Vemulapalli, F. Arrate and R. Chellappa (2014) Human action recognition by representing 3d skeletons as points in a lie group. In CVPR, pp. 588–595. Cited by: §2.
  32. J. Vig (2019) A multiscale visualization of attention in the transformer model. arXiv preprint arXiv:1906.05714. External Links: Link Cited by: §5.5.
  33. H. Wang and L. Wang (2017) Modeling temporal dynamics and spatial configurations of actions using two-stream recurrent neural networks. In The Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: Table 1.
  34. L. Wang, Y. Qiao and X. Tang (2014) Latent hierarchical model of temporal structure for complex activity classification. IEEE Trans. Image Processing 23 (2), pp. 810–822. Cited by: §2.
  35. L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang and L. V. Gool (2016) Temporal segment networks: towards good practices for deep action recognition. In ECCV (8), Lecture Notes in Computer Science, Vol. 9912, pp. 20–36. Cited by: §1, §2, §4.4.
  36. P. Wang, Z. Li, Y. Hou and W. Li (2016) Action recognition based on joint trajectory maps using convolutional neural networks. In ACM Multimedia, pp. 102–106. Cited by: §1.
  37. P. Wang, Z. Li, Y. Hou and W. Li (2016) Action recognition based on joint trajectory maps using convolutional neural networks. In ACM Multimedia, pp. 102–106. Cited by: §2.
  38. X. Wang, R. B. Girshick, A. Gupta and K. He (2018) Non-local neural networks. In CVPR, pp. 7794–7803. Cited by: §1.
  39. D. Weinland, R. Ronfard and E. Boyer (2011) A survey of vision-based methods for action representation, segmentation and recognition. Computer Vision and Image Understanding 115 (2), pp. 224–241. Cited by: §1.
  40. S. Yan, Y. Xiong and D. Lin (2018) Spatial temporal graph convolutional networks for skeleton-based action recognition. In AAAI, Cited by: §1, §1, §5.1.2, §5.3, Table 1, Table 2.
  41. P. Zhang, C. Lan, J. Xing, W. Zeng, J. Xue and N. Zheng (2017) View adaptive recurrent neural networks for high performance human action recognition from skeleton data. In The IEEE International Conference on Computer Vision ICCV, pp. 2136–2145. Cited by: §1, §5.3, Table 1.
  42. Z. Zhang (2012) Microsoft kinect sensor and its effect. IEEE MultiMedia 19 (2), pp. 4–10. Cited by: §1.
  43. W. Zhu, C. Lan, J. Xing, W. Zeng, Y. Li, L. Shen and X. Xie (2016) Co-occurrence feature learning for skeleton based action recognition using regularized deep LSTM networks. In AAAI, pp. 3697–3704. Cited by: §2.