Skeleton-Based Action Recognition with Synchronous Local and Non-local Spatio-temporal Learning and Frequency Attention

Abstract

Benefiting from its succinctness and robustness, skeleton-based action recognition has recently attracted much attention. Most existing methods utilize local networks (e.g., recurrent, convolutional, and graph convolutional networks) to extract spatio-temporal dynamics hierarchically. As a consequence, the local and non-local dependencies, which contain more details and semantics respectively, are captured asynchronously at different levels of layers. Moreover, existing methods are limited to the spatio-temporal domain and ignore information in the frequency domain. To better extract synchronous detailed and semantic information from multiple domains, we propose a residual frequency attention (rFA) block to focus on discriminative patterns in the frequency domain, and a synchronous local and non-local (SLnL) block to simultaneously capture the details and semantics in the spatio-temporal domain. Besides, a soft-margin focal loss (SMFL) is proposed to optimize the whole learning process, which automatically conducts data selection and encourages intrinsic margins in classifiers. Our approach significantly outperforms other state-of-the-art methods on several large-scale datasets.

Guyue Hu1, 3, Bo Cui1, 3, Shan Yu1, 2, 3
1Brainnetome Center & National Laboratory of Pattern Recognition,
Institute of Automation, Chinese Academy of Sciences
2CAS Center for Excellence in Brain Science and Intelligence Technology
3University of Chinese Academy of Sciences
{guyue.hu, bo.cui, shan.yu}@nlpr.ia.ac.cn


Index Terms—  Action recognition, frequency attention, synchronous local and non-local learning, soft-margin focal loss

1 Introduction

Skeleton-based human action recognition has recently attracted much attention due to its succinctness of representation and robustness to variations in viewpoints, appearances, and surrounding distractions [1]. Most previous works treat skeletal actions as sequences or pseudo-images, and then apply Recurrent Neural Networks (RNN) [1, 2, 3] and Convolutional Neural Networks (CNN) [4, 5] to model the temporal evolutions and the spatio-temporal dynamics, respectively. Yan et al. [6] also feed skeleton graphs into graph convolutional networks (GCN) to exploit the structure information of the human body. However, all the aforementioned methods apply stacked local networks to hierarchically extract spatio-temporal features, which suffers from two serious problems. 1) The recurrent and convolutional operations are neighborhood-based local operations [7], so local-range detailed information and non-local semantic information are mainly captured asynchronously, in the lower and higher layers respectively, which hinders the fusion of details and semantics in action dynamics. 2) Human actions such as shaking hands, brushing teeth, and clapping have characteristic frequency patterns, but previous works are always limited to the spatio-temporal dynamics and ignore periodic patterns in the frequency domain.

Fig. 1: The overall pipeline of the proposed method. The position and velocity information of human joints is fed into a transform network, a residual attention network, synchronous local and non-local blocks, and local blocks sequentially. Treated as a pseudo multi-task learning problem, the proposed model is optimized according to our soft-margin focal loss.

In this paper, we propose a novel model, SLnL-rFA, to better extract synchronous detailed and semantic information from multiple domains. SLnL-rFA is equipped with synchronous local and non-local (SLnL) blocks for spatio-temporal learning, and a residual frequency attention (rFA) block for mining frequency patterns. To optimize the whole learning process, a novel soft-margin focal loss (SMFL) is also proposed, which adaptively conducts data selection during training and encourages intrinsic margins in classifiers. Fig. 1 shows the pipeline of our method. First, an adaptive transform network augments and transforms the skeletal actions. Second, the residual frequency attention block selects discriminative frequency patterns. Then, synchronous local and non-local (SLnL) blocks followed by local blocks operate in the spatio-temporal domain, where SLnL is designed to simultaneously extract local details and non-local semantics. Finally, three classifiers, with inputs from position, velocity, and concatenated features, are optimized as a pseudo multi-task learning problem according to our soft-margin focal loss.

Our main contributions are summarized as follows: 1) Moving beyond the spatio-temporal domain, we propose a residual frequency attention block to exploit frequency information for skeleton-based action recognition; 2) We propose a synchronous local and non-local block to simultaneously capture details and semantics in the early-stage layers; 3) We propose a soft-margin focal loss, which adaptively conducts data selection during the training process and encourages intrinsic soft-margins in the classifiers; 4) Our approach outperforms the state-of-the-art methods by significant margins on two large-scale datasets for skeleton-based action recognition.

2 Related Works

Frequency domain analysis. Generalized frequency domain analysis encompasses several large classes of methods, such as the discrete Fourier transform (DFT), the short-time Fourier transform (STFT), and the wavelet transform, which are classical tools in signal analysis and image processing. With the rise of deep learning techniques [8, 9], methods based on the spatio-temporal domain dominate the field of computer vision, with only a few works paying attention to the frequency domain. For example, frequency domain analysis of critical point trajectories [10] and frequency divergence images [11] have been applied to RGB-based action recognition. Our work revisits the frequency domain and exploits frequency patterns to improve skeleton-based action recognition.

Non-local operations. Non-local means is a classical filtering algorithm that allows distant pixels to contribute to the target pixel [12]. Block-matching [13] exploits groups of non-locally similar patches and is widely used in computer vision tasks such as super-resolution [14] and image inpainting [15]. The popular self-attention [16] in machine translation can also be viewed as a non-local operation. Recently, different non-local blocks have been inserted into CNNs for video classification [7] and into RNNs for image restoration [17]. However, their local and non-local operations are applied to objects at different levels of layers, whereas our SLnL operates on the same objects simultaneously; thus, only the proposed SLnL can extract local and non-local information synchronously.

Reformed softmax loss. The softmax loss [18], consisting of the last fully connected layer, the softmax function, and the cross-entropy loss, is widely applied in supervised learning due to its simplicity and clear probabilistic interpretation. However, recent works [18, 19] have exposed its limitations on feature discriminability and have stimulated two types of methods for improvement. One type directly refines or combines the cross-entropy loss with other losses such as the contrastive loss and the triplet loss [19, 20]. The other type reformulates the softmax function with a geometrical or algebraic margin [18, 19] to encourage intra-class compactness and inter-class separability of feature learning, which destroys the probabilistic meaning of the original softmax function. Our SMFL not only conducts data selection but also encourages intrinsic soft-margins in classifiers with a clear probabilistic interpretation.

3 Methods

3.1 Preliminary

A skeletal action is represented by the $d$-dimensional locations of $N$ body joints in a $T$-frame video. Following Li et al. [21], we introduce a skeleton transformer to augment the number of joints and rearrange the order of joints. Similarly, a coordinate transformer is applied to transform the original representations in a single rectangular coordinate system into rich representations in several oblique coordinate systems. The whole transform network in Fig. 1 is implemented with two fully connected layers and the corresponding transpose, flatten, and concatenate operations. As a result, a new adaptive expression is formed for each action.
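To make the two transformers concrete, the sketch below implements them as learnable linear layers acting on the coordinate and joint dimensions, respectively. It is a minimal PyTorch sketch under assumed tensor shapes (batch, frames, joints, coordinates) and layer sizes borrowed from the implementation details (25 original joints, 64 new joints, 10 coordinate systems); it is not the authors' code.

```python
import torch
import torch.nn as nn

class TransformNet(nn.Module):
    """Sketch of the transform network: a skeleton transformer that mixes the
    original joints into new virtual joints, plus a coordinate transformer that
    maps the single rectangular system into several learned (oblique) systems."""
    def __init__(self, num_joints=25, new_joints=64, in_dims=3, num_systems=10):
        super().__init__()
        # FC over the joint dimension: N original joints -> N' new joints
        self.joint_fc = nn.Linear(num_joints, new_joints)
        # FC over the coordinate dimension: d dims -> d * num_systems dims
        self.coord_fc = nn.Linear(in_dims, in_dims * num_systems)

    def forward(self, x):
        # x: (batch, frames, joints, dims)
        x = self.coord_fc(x)                   # (B, T, N, d * num_systems)
        x = x.transpose(2, 3)                  # (B, T, d * num_systems, N)
        x = self.joint_fc(x)                   # (B, T, d * num_systems, N')
        return x.transpose(2, 3).contiguous()  # (B, T, N', d * num_systems)
```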

Fig. 2: The residual frequency attention. The spatio-temporal domain and the frequency domain are switched conveniently through the 2D-FFT and 2D-IFFT. The attention over the cosine and sinusoidal components ($X^{c}$, $X^{s}$) is conducted in the frequency domain, and the residual connection is applied in the spatio-temporal domain.

3.2 Residual Frequency Attention

Previous works always concentrate on the spatio-temporal domain, but many actions, such as shaking hands and brushing teeth, contain inherent frequency-sensitive patterns, which motivates us to revisit the frequency domain. The classical operations in the frequency domain, such as high-pass, low-pass, and band-pass filters, have only a few parameters, which is far from sufficient; thus we propose a more general frequency attention block (Fig. 2) equipped with abundant learnable parameters to adaptively select frequency components.

(a) 2D Non-local module
(b) Baseline local block
(c) SLnL block
(d) The affinity field of SLnL
Fig. 3: (a) A 2D example of the non-local module. (b) The structure of the baseline local block. (c) The structure of the proposed synchronous local and non-local (SLnL) block. (d) The affinity field of SLnL. Note that the affinity field is a more general concept than the receptive field of CNNs. Red and blue represent the local and non-local modules respectively in (d).

Given a transformed action $X \in \mathbb{R}^{T \times N \times C}$ after the transform network, the 2D discrete Fourier transform (DFT) transforms the pseudo spatio-temporal image $X(:,:,k)$ in each channel to $\tilde{X}(:,:,k)$ in the frequency domain via

$\tilde{X}(u,v,k) = \sum_{t=0}^{T-1}\sum_{n=0}^{N-1} X(t,n,k)\, e^{-j2\pi(\frac{ut}{T} + \frac{vn}{N})} = X^{c}(u,v,k) - j\, X^{s}(u,v,k),$

where $(u,v)$ and $k$ are the frequencies and the channel of the spatio-temporal image respectively, and $X^{c}$/$X^{s}$ denotes the cosine/sinusoidal component. The frequency spectrum is $|\tilde{X}| = \sqrt{(X^{c})^{2} + (X^{s})^{2}}$ and the phase spectrum is $\phi = \arctan(X^{s}/X^{c})$. In practice, the DFT and its inverse (IDFT) are computed through the fast Fourier transform (FFT) algorithm and its inverse (IFFT).

For each action, the attention weights $W^{c}$ and $W^{s}$ are functions of its cosine and sinusoidal components, i.e.,

$W^{c} = f^{c}(X^{c}, X^{s}), \quad W^{s} = f^{s}(X^{c}, X^{s}),$   (1)

where $W^{c}, W^{s} \in \mathbb{R}^{T \times N}$. Specifically, after a channel averaging operation, each component is fed into two fully connected (FC) layers to learn adaptive weights for each frequency, followed by a sigmoid function. The first FC layer serves as a bottleneck layer [9] for dimensionality reduction with a ratio factor $r$. Then, the learned attention weights are duplicated to every channel to pay attention to the input frequency image via

$\hat{X}^{c}(:,:,k) = W^{c} \odot X^{c}(:,:,k),$   (2)
$\hat{X}^{s}(:,:,k) = W^{s} \odot X^{s}(:,:,k),$   (3)

where $\odot$ denotes element-wise multiplication. Finally, a spatio-temporal residual connection is applied to obtain the output after attention, i.e.,

$Y = X + \mathcal{F}^{-1}_{2D}\big(\hat{X}^{c} - j\,\hat{X}^{s}\big),$   (4)

where $\mathcal{F}^{-1}_{2D}$ denotes the efficient 2-dimensional IFFT.
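As a concrete illustration of Eqs. (1)-(4), the following is a minimal PyTorch sketch of a residual frequency attention block. It assumes an input feature map of shape (batch, channels, T, N), flattens the T x N frequency plane for the two FC layers, and uses an assumed reduction ratio; the exact layer configuration of the authors' rFA is not specified at this level of detail.

```python
import torch
import torch.nn as nn

class ResidualFrequencyAttention(nn.Module):
    """Sketch of the rFA block of Eqs. (1)-(4); sizes are assumptions."""
    def __init__(self, T, N, reduction=4):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Flatten(1),                          # (B, T*N)
                nn.Linear(T * N, T * N // reduction),   # bottleneck FC
                nn.ReLU(inplace=True),
                nn.Linear(T * N // reduction, T * N),
                nn.Sigmoid(),
            )
        self.att_cos = branch()   # weights W^c for the cosine (real) part
        self.att_sin = branch()   # weights W^s for the sinusoidal part

    def forward(self, x):
        # x: (B, C, T, N); 2D FFT over the spatio-temporal plane of each channel
        B, C, T, N = x.shape
        freq = torch.fft.fft2(x, dim=(-2, -1))
        x_cos, x_sin = freq.real, -freq.imag            # DFT = X^c - j X^s

        # channel averaging, then per-frequency attention weights (Eq. 1)
        w_cos = self.att_cos(x_cos.mean(dim=1)).view(B, 1, T, N)
        w_sin = self.att_sin(x_sin.mean(dim=1)).view(B, 1, T, N)

        # duplicate weights over channels and re-weight both components (Eqs. 2-3)
        att = torch.complex(w_cos * x_cos, -(w_sin * x_sin))

        # back to the spatio-temporal domain plus a residual path (Eq. 4)
        return x + torch.fft.ifft2(att, dim=(-2, -1)).real
```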

3.3 Synchronous Local and Non-local Learning in the Spatio-temporal Domain

Non-local Module. A general non-local operation takes a multi-channel signal $X \in \mathbb{R}^{|\Omega| \times C_{1}}$ as its input and generates a multi-channel output $Y \in \mathbb{R}^{|\Omega| \times C_{2}}$. Here $C_{1}$ and $C_{2}$ are the numbers of channels, and $|\Omega|$ is the number of positions, where $\Omega$ is the set that enumerates all positions of the signal (image, video, feature map, etc.). Let $x_{i}$ and $y_{i}$ denote the $i$-th row vectors of $X$ and $Y$; the non-local operation is formulated as follows:

$y_{i} = \frac{1}{\mathcal{C}(x)} \sum_{\forall j \in \Omega} f(x_{i}, x_{j})\, g(x_{j}),$   (5)

where the multi-channel unary transform $g(\cdot)$ computes the embedding of $x_{j}$, the multi-channel binary transform $f(\cdot, \cdot)$ computes the affinity between positions $i$ and $j$, and $\mathcal{C}(x)$ is a normalization factor. With different choices of $f$ and $g$, such as Gaussian, embedded Gaussian, and dot product, various non-local operations can be constructed. For simplicity, we only consider $g$ and $f$ in the forms of linear embedding and embedded Gaussian respectively, and set $\mathcal{C}(x) = \sum_{\forall j \in \Omega} f(x_{i}, x_{j})$, i.e.,

$g(x_{j}) = x_{j} W_{g},$   (6)

where $W_{g} \in \mathbb{R}^{C_{1} \times C_{e}}$ are learnable transform parameters, and

$f(x_{i}, x_{j}) = e^{\theta(x_{i})\, \phi(x_{j})^{\mathsf{T}}},$   (7)
$\theta(x_{i}) = x_{i} W_{\theta}, \quad \phi(x_{j}) = x_{j} W_{\phi},$   (8)
$y_{i} = \sum_{\forall j \in \Omega} \mathrm{softmax}_{j}\big(\theta(x_{i})\, \phi(x_{j})^{\mathsf{T}}\big)\, g(x_{j}),$   (9)

where $W_{\theta}, W_{\phi} \in \mathbb{R}^{C_{1} \times C_{e}}$, and $C_{e}$ denotes the number of embedding channels. To weigh how important the non-local information is compared with the local information, a weighting function $w(\cdot)$ is appended, i.e.,

$z_{i} = w(y_{i}) = y_{i} W_{z},$   (10)

where $W_{z} \in \mathbb{R}^{C_{e} \times C_{2}}$. A non-local module can be completed with some transpose operations, several convolutional layers with kernels of size 1, and a softmax layer; Fig. 3(a) shows a 2D example.
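A 2D embedded-Gaussian non-local module following Eqs. (5)-(10), in the spirit of Wang et al. [7], can be sketched as below. The 1x1 convolutions play the roles of theta, phi, g, and w; the embedding channel size and the input layout (batch, channels, T, N) are assumptions.

```python
import torch
import torch.nn as nn

class NonLocal2D(nn.Module):
    """Sketch of the embedded-Gaussian non-local module of Eqs. (5)-(10)."""
    def __init__(self, channels, embed_channels=None):
        super().__init__()
        embed_channels = embed_channels or channels // 2
        # 1x1 convolutions implement the linear embeddings theta, phi and g
        self.theta = nn.Conv2d(channels, embed_channels, kernel_size=1)
        self.phi = nn.Conv2d(channels, embed_channels, kernel_size=1)
        self.g = nn.Conv2d(channels, embed_channels, kernel_size=1)
        # w(.) maps the aggregated features back to the input channel size (Eq. 10)
        self.w = nn.Conv2d(embed_channels, channels, kernel_size=1)

    def forward(self, x):
        # x: (B, C, T, N)
        B, C, T, N = x.shape
        theta = self.theta(x).flatten(2).transpose(1, 2)   # (B, TN, Ce)
        phi = self.phi(x).flatten(2)                       # (B, Ce, TN)
        g = self.g(x).flatten(2).transpose(1, 2)           # (B, TN, Ce)

        # affinity between every pair of positions, softmax-normalized (Eqs. 7-9)
        affinity = torch.softmax(theta @ phi, dim=-1)      # (B, TN, TN)
        y = affinity @ g                                   # (B, TN, Ce)

        y = y.transpose(1, 2).reshape(B, -1, T, N)
        return self.w(y)                                   # non-local features z
```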

Baseline local block. The local operation is defined as

$y_{i} = \frac{1}{\mathcal{C}(x)} \sum_{\forall j \in \mathcal{N}_{i}} f(x_{i}, x_{j})\, g(x_{j}),$   (11)

where $\mathcal{N}_{i} \subset \Omega$ is the local neighbor set of the target position $i$. The convolution is a typical local operation with identity affinity $f(x_{i}, x_{j}) = 1$, linear transform $g(x_{j}) = x_{j} W_{j}$, identity normalization factor $\mathcal{C}(x) = 1$, and $\mathcal{N}_{i}$ being the neighbors around the target center with the same shape as the kernel. Our baseline local block is constructed from convolution operations. As shown in Fig. 3(b), two convolutional layers with one-dimensional kernels along the temporal and spatial dimensions are applied to learn temporal local (tLocal) features and spatial local (sLocal) features respectively, and a two-dimensional convolutional layer learns spatio-temporal local (stLocal) features. The block also contains a residual path, a rectified linear unit (ReLU), and a batch normalization (BN) layer.
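A minimal sketch of the baseline local block is given below. The sequential wiring of the tLocal, sLocal, and stLocal convolutions and the kernel orientation (time along the first spatial axis) are assumptions; Fig. 3(b) defines the actual structure.

```python
import torch.nn as nn

class LocalBlock(nn.Module):
    """Sketch of the baseline local block of Fig. 3(b): 1D temporal and spatial
    convolutions followed by a 2D spatio-temporal convolution, with BN, ReLU
    and a residual path."""
    def __init__(self, in_channels, out_channels, k=3):
        super().__init__()
        self.t_local = nn.Conv2d(in_channels, out_channels, (k, 1), padding=(k // 2, 0))
        self.s_local = nn.Conv2d(out_channels, out_channels, (1, k), padding=(0, k // 2))
        self.st_local = nn.Conv2d(out_channels, out_channels, (k, k), padding=k // 2)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.residual = (nn.Identity() if in_channels == out_channels
                         else nn.Conv2d(in_channels, out_channels, 1))

    def forward(self, x):                 # x: (B, C, T, N)
        out = self.st_local(self.s_local(self.t_local(x)))
        return self.relu(self.bn(out) + self.residual(x))
```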

Synchronous local and non-local block. In order to synchronously exploit local details and non-local semantics in human actions, three non-local modules are merged in parallel into the above baseline local block. As shown in Fig. 3(c), two 1D non-local modules explore temporal non-local (tNon-Local) and spatial non-local (sNon-Local) information respectively, and a 2D non-local module captures spatio-temporal non-local (stNon-Local) patterns. We define the affinity field as the range of positions that can contribute to the target position in the next layer of the local or non-local modules, which is a more general concept than the receptive field of CNNs. The affinity field in Fig. 3(d) clearly shows that our SLnL can mine local details and non-local semantics synchronously in every layer. Note that our SLnL is significantly different from the methods [7, 17] which only insert a few non-local modules after stacked local networks, so that the local and non-local operations are still conducted separately in layers with different resolutions. In contrast, our SLnL simultaneously captures local and non-local patterns in every layer (Fig. 3(d)).
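Reusing the LocalBlock and NonLocal2D sketches above, an SLnL block can be assembled roughly as follows; the separate 1D temporal/spatial non-local modules of Fig. 3(c) are omitted here, and the fusion by summation is a simplifying assumption.

```python
import torch.nn as nn

class SLnLBlock(nn.Module):
    """Rough sketch of an SLnL block: a local branch (details) and a non-local
    branch (semantics) process the same input in parallel and are fused."""
    def __init__(self, in_channels, out_channels, k=3):
        super().__init__()
        self.local = LocalBlock(in_channels, out_channels, k)   # local details
        self.non_local = NonLocal2D(in_channels)                # non-local semantics
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):                 # x: (B, C, T, N)
        # both branches see the same input, so local and non-local information
        # is captured synchronously within a single layer
        return self.local(x) + self.proj(self.non_local(x) + x)
```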

3.4 Soft-margin focal loss

A common challenge for classification tasks is that the discrimination difficulty differs among samples and classes, but most previous works on skeleton-based action recognition use the softmax loss, which does not take this into consideration. There are two possible measures to alleviate it, i.e., data selection and margin encouraging.

Intuitively, the larger the predicted probability of a sample is, the farther away from the decision boundary it tends to be, and vice versa. Motivated by this intuition, we construct a soft-margin (SM) loss term as follows:

$L_{SM} = \log\frac{1 + m}{p_{t} + m},$   (12)

where $p_{t}$ is the estimated posterior probability of the ground-truth class, and $m$ is a margin parameter. $L_{SM} \geq 0$ because $p_{t} \leq 1$. As Fig. 4 shows, when the posterior probability is small, the sample is more likely to be close to the boundary, so we penalize it with a large margin loss; otherwise, a small margin loss is imposed. To further illustrate the idea, we introduce $L_{SM}$ into the cross-entropy loss, leading to a soft-margin cross-entropy (SMCE) loss,

$L_{SMCE} = -\log p_{t} + \log\frac{1 + m}{p_{t} + m}.$   (13)

Assume that $x$ is the feature vector before the last FC layer; the FC layer transforms it into the scores of $K$ classes by multiplying $W$, where $w_{k}$ is the parameter of the linear classifier corresponding to class $k$, i.e., $s_{k} = w_{k}^{\mathsf{T}} x$. Followed by a softmax layer, $p_{k} = e^{s_{k}} / \sum_{j=1}^{K} e^{s_{j}}$, the SMCE can be rewritten as

$L_{SMCE} = -\log\frac{e^{\, s_{t} + \log\frac{p_{t} + m}{1 + m}}}{\sum_{j=1}^{K} e^{s_{j}}}.$   (14)

Comparing the standard softmax loss with Eq. 14, only the score of the ground-truth class $s_{t}$ is replaced by $s_{t} + \log\frac{p_{t} + m}{1 + m}$. Since this additional term is always non-positive, optimizing the model with SMCE requires the ground-truth score to exceed the other scores by an extra amount before the loss becomes small. As a result, an intrinsic margin between the positive (belonging to a specific class) samples and the negative (not belonging to the specific class) samples of each class will be formed in the classifiers by adding the SM loss term into the loss function.

Fig. 4: Comparisons among the soft-margin focal loss (SMFL), the soft-margin cross-entropy (SMCE) loss, the cross-entropy (CE) loss, the focal loss (FL), and the soft-margin (SM) loss. The focusing parameter $\gamma$ and the margin parameter $m$ of the losses are expressed as $(\gamma, m)$.

In addition, the focal loss [22], defined as

$L_{FL} = -(1 - p_{t})^{\gamma} \log p_{t},$   (15)

where $\gamma$ is a focusing parameter, can encourage adaptive data selection without any damage to the original model structure or training process. As Fig. 4 shows, the relative loss for well-classified easy samples is reduced by FL compared with CE. Although FL pays more attention to hard samples, it has no margin around the decision boundary. Similar to SMCE, we introduce the $L_{SM}$ term into FL to obtain the soft-margin focal loss (SMFL):

$L_{SMFL} = -(1 - p_{t})^{\gamma} \log p_{t} + \log\frac{1 + m}{p_{t} + m}.$   (16)

Finally, our SMFL encourages intrinsic margins in classifiers and maintains FL's advantage of data selection as well.
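A minimal sketch of the SMFL, assuming the SM term of Eq. (12) is simply added to the focal loss of Eq. (15), is given below; the default gamma and margin values follow Table 3.

```python
import torch
import torch.nn.functional as F

def soft_margin_focal_loss(logits, target, gamma=2.0, margin=0.4):
    """Sketch of the SMFL of Eq. (16): focal term plus soft-margin penalty."""
    # logits: (B, K) class scores, target: (B,) integer class labels
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)   # log p_t
    pt = log_pt.exp()

    focal = -((1.0 - pt) ** gamma) * log_pt                    # Eq. (15)
    soft_margin = torch.log((1.0 + margin) / (pt + margin))    # Eq. (12)
    return (focal + soft_margin).mean()
```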

Our two-stream model (Fig. 1) predicts three probability vectors $\hat{p}^{pos}$, $\hat{p}^{vel}$, and $\hat{p}^{con}$ from three modes, i.e., position, velocity, and their concatenation. We optimize it as a pseudo multi-task learning problem with our SMFL, i.e., each classifier produces a loss via

$L^{r} = \sum_{k=1}^{K} y_{k} \Big[ -(1 - \hat{p}^{r}_{k})^{\gamma} \log \hat{p}^{r}_{k} + \log\frac{1 + m}{\hat{p}^{r}_{k} + m} \Big], \quad r \in \{pos, vel, con\},$   (17)

where $r$ is the mode type, and $y$ is the one-hot class label. Thus the final loss is as follows:

$L = L^{pos} + L^{vel} + L^{con}.$   (18)

During inference, only $\hat{p}^{con}$ is used to predict the final class.
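As a usage illustration of Eqs. (17)-(18), the snippet below supervises three hypothetical classifier outputs with the SMFL sketch above and sums their losses; the dummy tensors and the unweighted sum are assumptions.

```python
import torch

# Hypothetical pseudo multi-task loss: the position, velocity and concatenation
# classifiers are supervised with the same label and their SMFL losses are summed.
labels = torch.randint(0, 60, (8,))                 # 60 NTU classes, batch of 8
pos_logits, vel_logits, con_logits = (torch.randn(8, 60, requires_grad=True)
                                      for _ in range(3))
total_loss = sum(soft_margin_focal_loss(lg, labels, gamma=2.0, margin=0.4)
                 for lg in (pos_logits, vel_logits, con_logits))
total_loss.backward()
```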

4 Experiments

4.1 Datasets and Experimental details

NTU RGB+D (NTU) [2] is currently the largest indoor action recognition dataset. It contains 56,000 clips of 60 actions performed by 40 subjects. Each clip consists of 25 joint locations for one or two persons. There are two evaluation protocols for this dataset, i.e., cross-subject (CS) and cross-view (CV). For the cross-subject evaluation, 40,320 samples from 20 subjects are used for training and 16,540 samples from the remaining subjects are used for testing. For the cross-view evaluation, samples are split by camera view, with two views for training and the remaining one for testing.

Kinetics is by far the largest unconstrained action recognition dataset, containing 300,000 video clips of 400 classes retrieved from YouTube [6]. The skeletons are estimated by Yan et al. [6] from the raw RGB videos with the OpenPose toolbox. Each joint consists of 2D coordinates in the pixel coordinate system and a confidence score, and is thus finally represented by a tuple $(x, y, c)$. Each skeleton frame is recorded as an array of 18 such tuples.

Implementation Details: During data preparation, we randomly crop sequences with a ratio uniformly drawn from [0.5, 1] for training, and centrally crop sequences with a fixed ratio of 0.95 for inference. We resize the sequences to 64/128 (NTU/Kinetics) frames with bilinear interpolation. Finally, the obtained data are fed into a batch normalization layer to normalize the scale. During training, we apply the Adam optimizer with a weight decay of 0.0005. The learning rate is initialized to 0.001, followed by an exponential decay with a rate of 0.98/0.95 (NTU/Kinetics) per epoch. A dropout with a ratio of 0.2 is applied to each block to alleviate overfitting. The model is trained for 300/100 epochs with a batch size of 32/128 (NTU/Kinetics).
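The optimization setup described above corresponds roughly to the following PyTorch configuration; the placeholder model and training loop are illustrative only.

```python
import torch

# Sketch of the optimization setup for NTU: Adam, weight decay 5e-4, initial
# learning rate 1e-3 with exponential decay (rate 0.98) per epoch, 300 epochs.
model = torch.nn.Linear(10, 60)   # placeholder module standing in for the network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.98)

for epoch in range(300):
    # ... run one training epoch with a batch size of 32, calling
    #     optimizer.step() after every batch ...
    scheduler.step()              # decay the learning rate once per epoch
```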

Each stream of the model for NTU is composed of 6 blocks (Fig. 3) with local kernels of size 3 and channels of 64, 64, 128, 128, 256, and 256 respectively; max-pooling is applied after every two blocks. For Kinetics, two additional blocks with 512 channels are appended, and the local kernels of the first two blocks are changed to 5. The numbers of new coordinate systems and new joints in the transform network are set to 10 and 64 respectively for both datasets.

Table 1: Comparisons of recognition accuracy (%) on NTU RGB+D.
Methods | CS | CV
PA-LSTM [2] | 62.9 | 70.3
ST-LSTM+TG [3] | 69.2 | 77.7
VA-LSTM [1] | 79.4 | 87.6
ST-GCN [6] | 81.5 | 88.3
TS-CNN [21] | 83.2 | 89.3
HCN [5] | 86.5 | 91.1
SR-TSL [23] | 84.8 | 92.4
SLnL-rFA (ours) | 89.1 | 94.9

Table 2: Comparison with the state-of-the-art approaches in action recognition accuracy (%) on the Kinetics dataset. Both top-1 and top-5 accuracies are reported.
Methods | top-1 | top-5
Feature Enc. [24] | 14.9 | 25.8
Deep LSTM [2] | 16.4 | 35.3
Tem. Conv. [25] | 20.3 | 40.0
ST-GCN [6] | 30.7 | 52.8
SLnL-rFA (ours) | 36.6 | 59.1

4.2 Experimental Results

On NTU RGB+D, we compare with three LSTM-based methods [1, 2, 3], two CNN-based methods [5, 21], one graph convolutional method [6], and one graph and LSTM hybridized method [23]. As the local components of our SLnL are CNN-based while the non-local components learn the affinity between each target position (node) and every position (node) in the feature map (graph), our SLnL-rFA can be viewed as a hybrid of CNN-based and graph-based methods. As shown in Table 1, the CNN-based methods are generally better than the LSTM-based methods, and the graph-based or graph-hybridized methods also perform well. Our method consistently outperforms the state-of-the-art approaches by a large margin under both the cross-subject (CS) and cross-view (CV) evaluations. Specifically, our SLnL-rFA outperforms the best CNN-based method (HCN) by 2.6% (CS) and 3.8% (CV), and also outperforms the recent LSTM and graph hybridized method (SR-TSL) by 4.3% (CS) and 2.5% (CV).

On Kinetics, we compare with four characteristic methods, including hand-crafted features [24], deep LSTM network [2], temporal convolutional network [25], and graph convolutional network [6]. Table 2 shows the deep models outperform the hand-crafted features, and the CNN-based methods work better than the LSTM-based methods. Our method outperforms the state-of-the-art approach (ST-GCN) by large margins of 5.9% (top1) and 6.3% (top5).

4.3 Ablation Study

To analyze the effectiveness of every proposed component, extensive ablation studies are conducted on NTU RGB+D.

Comparisons of loss functions. The baseline model (Baseline) in this section contains only the local blocks of Fig. 3(b) and the transform network. The model is optimized with the cross-entropy loss (CE), the focal loss (FL), the soft-margin cross-entropy loss (SMCE), and the soft-margin focal loss (SMFL), respectively. To save space, at most two best parameter settings for each loss are listed in Table 3. Due to the adaptive data selection, FL performs better than CE. Benefiting from the encouraged margins between the positive and negative samples, SMCE and SMFL perform better than their original versions CE and FL, respectively. Finally, our SMFL achieves the best results owing to its combined advantages of adaptive data selection and intrinsic margin encouraging.

How to select discriminative frequency patterns? For this section, we first upgrade the Baseline into Baseline (No FA) by adding the SMFL. To validate the effectiveness of the proposed rFA, we compare it with several variants. Amplitude frequency attention (aFA) is built on the frequency spectrum instead of the sinusoidal and cosine components. Shared FA (sFA) learns shared parameters for the sinusoidal and cosine components, while dependent FA (dFA) learns two sets of parameters independently. The rFA is formed by applying the residual learning trick to dFA in the spatio-temporal domain (Fig. 2). In Table 4, we observe that aFA is harmful because the phase information is lost when only the frequency spectrum is used. dFA outperforms sFA because it has more parameters to model the frequency patterns. The rFA finally achieves the best results and outperforms the Baseline by a large margin, indicating that frequency information is effective for action recognition.

Table 3: Results of different loss functions in accuracy (%). Each loss is denoted by its ($\gamma$, $m$) parameters.
Loss types | CS | CV
CE (Baseline) | 85.5 | 91.3
FL (2, -) | 85.8 | 91.9
FL (3, -) | 85.6 | 91.8
SMCE (-, 0.4) | 86.4 | 92.0
SMCE (-, 0.6) | 86.2 | 92.3
SMFL (2, 0.4) | 86.9 | 92.5
SMFL (2, 0.6) | 86.5 | 92.6

Table 4: Performance comparisons of different frequency attention methods in human action recognition accuracy (%).
Attention methods | CS | CV
No FA (Baseline) | 86.9 | 92.6
Amplitude FA (aFA) | 84.7 | 89.8
Shared FA (sFA) | 87.3 | 92.9
Dependent FA (dFA) | 87.5 | 93.2
Residual FA (rFA) | 87.7 | 93.6

Comparisons of methods with different affinity fields. For this section, we further upgrade the Baseline by adding an rFA block. Although non-local dependencies can be captured in the higher layers of hierarchical local networks, we argue that synchronously exploring and fusing non-local information in the early stages is preferable. We merge one temporal non-local module (tSLnL), spatial non-local module (sSLnL), or spatio-temporal non-local block (SLnL) into the Baseline to examine their effectiveness. As shown in Table 5, the non-local information from both the temporal and spatial dimensions is helpful in the early stages. In addition, benefiting from the synchronous fusion of local details and non-local semantics, our SLnL boosts the recognition performance by 1.4% (CS) and 1.1% (CV). To further investigate the properties of deeper SLnL, we replace more local blocks in the Baseline with SLnL blocks. Table 5 shows that more SLnL blocks in the lower layers generally lead to better results, but the improvement in the higher layers is relatively small because the affinity field of local operations increases with depth. The results clearly show that synchronously extracting local details and non-local semantics is vital for modeling the spatio-temporal dynamics of human actions.

Affinity Field | CS (%) | CV (%)
Local (Baseline) | 87.7 | 93.6
tSLnL ($N_{s}$ = 1, $N_{l}$ = 5) | 88.1 | 93.9
sSLnL ($N_{s}$ = 1, $N_{l}$ = 5) | 88.0 | 94.1
SLnL ($N_{s}$ = 1, $N_{l}$ = 5) | 88.3 | 94.3
SLnL ($N_{s}$ = 2, $N_{l}$ = 4) | 88.6 | 94.6
SLnL ($N_{s}$ = 3, $N_{l}$ = 3) | 88.8 | 94.9
SLnL ($N_{s}$ = 4, $N_{l}$ = 2) | 88.9 | 94.8
SLnL ($N_{s}$ = 5, $N_{l}$ = 1) | 89.1 | 94.7
SLnL ($N_{s}$ = 6, $N_{l}$ = 0) | 88.8 | 94.7
Table 5: Comparisons of methods with various affinity fields. $N_{s}$ and $N_{l}$ denote the numbers of SLnL and local blocks in Fig. 1, respectively.

5 Conclusion

In this work, we propose a novel model, SLnL-rFA, to extract synchronous detailed and semantic information from multiple domains for skeleton-based action recognition. The SLnL synchronously extracts local details and non-local semantics in the spatio-temporal domain. The rFA adaptively selects discriminative frequency patterns, which sheds new light on exploiting information in the frequency domain for skeleton-based action recognition. In addition, we propose a novel soft-margin focal loss, which encourages intrinsic margins in classifiers and conducts adaptive data selection. Our approach significantly outperforms other state-of-the-art methods on both the largest indoor dataset, NTU RGB+D, and the largest unconstrained dataset, Kinetics, for skeleton-based action recognition.

References

  • [1] Pengfei Zhang, Cuiling Lan, Junliang Xing, Wenjun Zeng, Jianru Xue, and Nanning Zheng, “View adaptive recurrent neural networks for high performance human action recognition from skeleton data,” in ICCV, 2017, pp. 2136–2145.
  • [2] Amir Shahroudy, Jun Liu, Tian-Tsong Ng, and Gang Wang, “NTU RGB+D: A large scale dataset for 3d human activity analysis,” in CVPR, 2016, pp. 1010–1019.
  • [3] Jun Liu, Amir Shahroudy, Dong Xu, and Gang Wang, “Spatio-temporal LSTM with trust gates for 3d human action recognition,” in ECCV, 2016, pp. 816–833.
  • [4] Qiuhong Ke, Mohammed Bennamoun, Senjian An, Ferdous Ahmed Sohel, and Farid Boussaïd, “A new representation of skeleton sequences for 3d action recognition,” in CVPR, 2017, pp. 4570–4579.
  • [5] Chao Li, Qiaoyong Zhong, Di Xie, and Shiliang Pu, “Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation,” in IJCAI, 2018, pp. 786–792.
  • [6] Sijie Yan, Yuanjun Xiong, and Dahua Lin, “Spatial temporal graph convolutional networks for skeleton-based action recognition,” in AAAI, 2018.
  • [7] Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, and Kaiming He, “Non-local neural networks,” in CVPR, 2017.
  • [8] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, “Imagenet classification with deep convolutional neural networks,” in NIPS, 2012, pp. 1106–1114.
  • [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in CVPR, 2016, pp. 770–778.
  • [10] Cyrille Beaudry, Renaud Péteri, and Laurent Mascarilla, “Action recognition in videos using frequency analysis of critical point trajectories,” in ICIP, 2014, pp. 1445–1449.
  • [11] Albert C. Cruz and Brian Street, “Frequency divergence image: A novel method for action recognition,” in 14th IEEE International Symposium on Biomedical Imaging, 2017.
  • [12] Antoni Buades, Bartomeu Coll, and Jean-Michel Morel, “A non-local algorithm for image denoising,” in CVPR, 2005.
  • [13] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen O. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
  • [14] Daniel Glasner, Shai Bagon, and Michal Irani, “Super-resolution from a single image,” in ICCV, 2009, pp. 349–356.
  • [15] Connelly Barnes, Eli Shechtman, Adam Finkelstein, and Dan B. Goldman, “Patchmatch: a randomized correspondence algorithm for structural image editing,” ACM Trans. Graph., vol. 28, no. 3, pp. 24:1–24:11, 2009.
  • [16] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin, “Attention is all you need,” in NIPS, 2017.
  • [17] Ding Liu, Bihan Wen, Yuchen Fan, Chen Change Loy, and Thomas S. Huang, “Non-local recurrent network for image restoration,” arXiv preprint arXiv:1806.02919, 2018.
  • [18] Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang, “Large-margin softmax loss for convolutional neural networks,” in ICML, 2016, pp. 507–516.
  • [19] Xiaobo Wang, Shifeng Zhang, Zhen Lei, Si Liu, Xiaojie Guo, and Stan Z. Li, “Ensemble soft-margin softmax loss for image classification,” in IJCAI, 2018, pp. 992–998.
  • [20] Florian Schroff, Dmitry Kalenichenko, and James Philbin, “Facenet: A unified embedding for face recognition and clustering,” in CVPR, 2015, pp. 815–823.
  • [21] Chao Li, Qiaoyong Zhong, Di Xie, and Shiliang Pu, “Skeleton-based action recognition with convolutional neural networks,” in ICME Workshops, 2017, pp. 597–600.
  • [22] Tsung-Yi Lin, Priya Goyal, Ross B. Girshick, Kaiming He, and Piotr Dollár, “Focal loss for dense object detection,” in ICCV, 2017, pp. 2999–3007.
  • [23] Chenyang Si, Ya Jing, Wei Wang, Liang Wang, and Tieniu Tan, “Skeleton-based action recognition with spatial reasoning and temporal stack learning,” in ECCV, 2018.
  • [24] Basura Fernando, Efstratios Gavves, José Oramas M., Amir Ghodrati, and Tinne Tuytelaars, “Modeling video evolution for action recognition,” in CVPR, 2015, pp. 5378–5387.
  • [25] Tae Soo Kim and Austin Reiter, “Interpretable 3d human action analysis with temporal convolutional networks,” in CVPR Workshops, 2017, pp. 1623–1631.