Learning Conditional Random Fields with Augmented Observations for Partially Observed Action Recognition
This paper aims at recognizing partially observed human actions in videos. Action videos acquired in uncontrolled environments often contain corrupt frames, which make actions partially observed. Furthermore, these frames can last for arbitrary lengths of time and appear irregularly. They are inconsistent with training data, and degrade the performance of pre-trained action recognition systems. We present an approach to address this issue. We divide each training and testing action into segments, and explore the mutual dependency between temporal segments. This property states that the similarity of two actions at one segment often implies their similarity at another. We augment each segment with extra alternatives retrieved from training data. The augmentation algorithm is designed so that a few of the alternatives are good enough to replace the original segment where corrupt frames occur. Our approach is developed upon hidden conditional random fields, and leverages the flexibility of hidden variables for uncertainty handling. It integrates corrupt segment detection and alternative selection into the prediction process, and can recognize partially observed actions more accurately. It is evaluated on both fully observed actions and partially observed ones with either synthetic or real corrupt frames. The experimental results demonstrate its general applicability and superior performance, especially when corrupt frames are present in the action videos.
Video-based human action recognition has been an integral part of many computer vision applications such as surveillance, robotics, human-computer interaction, and intelligent systems. Most research efforts, such as [3, 57, 26, 30, 24, 48, 46, 55, 25, 31], focus on recognizing fully observed human actions in videos. However, the assumption of full observation may not hold in practice due to various issues, including hardware failures (e.g., signal loss or noise), software limitations (e.g., skeleton estimation errors), and cluttered environments (e.g., partial occlusions [1, 51]). We consider frames in which the above-mentioned situations occur to be outliers, which make the actions partially observed. Figure 1 shows a few examples of outlier frames caused by diverse issues. In this work, we present an effective approach to recognizing actions with outlier frames.
Outlier frames are inconsistent with training data, and likely cause severe performance degradation of pre-trained recognition systems. A few studies [4, 38, 41, 49, 52] have attempted to recognize actions with outlier frames. However, they handle outlier frames by using additional domain knowledge and/or assume that outliers are annotated in advance. Thus, their applicability is restricted or extra manual effort is required. Instead, we develop a general approach that infers outlier frames and predicts actions by using the remaining frames, i.e., inliers. Our approach makes no assumption about the causes of outliers, and requires no prior knowledge about the number, locations, or durations of outlier segments. It can work with various features, such as those extracted from skeleton structures, RGB, or depth images by conventional or deep learning models.
The task we address is called partially observed action recognition (POAR), where three major difficulties arise. First, outlier frames need to be identified to exclude their unfavorable effect on recognition. Second, the remaining inliers may carry insufficient information. Third, removing outliers probably makes the action temporally disjointed, so the performance gain from applying temporal regularization is no longer attainable. The approach developed in this work overcomes these difficulties simultaneously. Our idea is to divide each action into temporal segments, and seek a set of good alternatives to each segment no matter whether this segment is corrupt. A segment is considered corrupt if replacing it with one of its alternatives leads to sufficiently higher confidence in prediction. The substituted alternatives provide extra information and make the action temporally connected, and hence facilitate POAR.
Specifically, every action video is temporally divided into a fixed number of equal-length segments. We carry out alternative augmentation by leveraging the property of mutual dependency between segments. This property states that the similarity of two actions at one segment often implies their similarity at another. To augment the $i$-th segment of a given action, we use its $j$-th segment ($j \neq i$) as the query to the training data, seek the training action with the most similar $j$-th segment, and retrieve the $i$-th segment of that training action as an alternative. Suppose the query, segment $j$, is an inlier; the retrieved alternative is probably of high quality no matter whether segment $i$ of the given action is an outlier or not. The procedure is repeated for every segment pair. If an action contains a few inlier segments, each of its segments is then augmented with a couple of high-quality alternatives.
After alternative augmentation, we design an approach for training and predicting actions with the extra alternatives. The approach is developed upon hidden-state conditional random fields (HCRFs). It leverages hidden variables to model the uncertainty of selecting the original or the alternative observations. With the designed potential functions, our approach can infer outlier segments and seek their alternatives jointly, and hence make a more accurate prediction.
In sum, the main contribution of this work lies in the development of a general approach to partially observed action recognition. It requires no prior knowledge about the number, durations, or locations of outlier frames, and can recognize both fully and partially observed actions. Our approach is evaluated on two datasets, where both synthetic and real outlier frames are present. Compared with several state-of-the-art approaches, our approach demonstrates the effectiveness of its outlier-frame handling, and achieves remarkably superior results.
2 Related Work
The literature on action recognition is extensive. Our review focuses on approaches that recognize actions in videos.
Due to recent advances in local descriptors, representing an action in a video as a set of local patches or spatio-temporal cubes, e.g., [21, 28], is widely adopted for its robustness to possible deformations and occlusions. However, temporal and geometric relationships among local features are ignored, which may lead to suboptimal performance. To address this issue, graphical models such as factorial conditional random fields (FCRF) and hidden Markov models (HMM) have become popular for their expressive power in relationship modeling. Unfortunately, most of these methods recognize only fully observed actions. They are sensitive to outlier frames, and suffer from performance drops.
Recent approaches adopting features learned by convolutional neural networks (CNNs) have demonstrated their effectiveness in various computer vision applications such as object recognition [23, 39], human pose estimation [5, 10], tracking, and person re-identification. The success of CNNs also sheds light on video-based vision applications. Recent studies of action recognition, e.g., [47, 27, 13, 15, 16], focus on using deep learning frameworks for generating more discriminative video representations. Gan et al. developed a method that adopts CNNs for high-level video event detection and key-evidence localization. Li et al. presented a deep network for human action recognition with the aid of multi-granularity information extracted from videos. Simonyan and Zisserman delivered a two-stream ConvNet framework that learns a spatial sub-network and a temporal sub-network at the same time, and achieves very promising performance. However, most of these methods concentrate on recognizing fully observed actions. They are typically sensitive to outlier frames and suffer from performance degradation when outlier frames are present.
Some research efforts have been made on action recognition with incomplete observation. Early prediction, e.g., [11, 18, 20, 35, 36, 56], aims to predict an ongoing action by referring to its beginning part. For instance, Ryoo accomplished this task by using both the integral and dynamic bag-of-words. Hoai and De la Torre developed a max-margin early event detector that identifies the temporal location and duration of an action from streaming video. On the other hand, Cao et al. presented gapfilling for handling the unobserved frames occurring in an action. They estimated the action likelihood for each observed segment, and inferred the global posterior of the whole action. However, their approach does not take into account the temporal coherence between the observed segments. Besides, the approach assumes that the periods of unobserved subsequences have been annotated manually or are known in advance. This assumption is less practical in real-world applications.
HCRFs introduce latent variables to model the hidden structures of observations, and have become a powerful model for structured data prediction. Recent studies [43, 44] have shown that action recognition with HCRFs achieves superior performance to that with HMMs and CRFs. However, HCRFs cannot work with incomplete actions. Some studies have attempted to tackle this limitation. Chang et al. presented an incremental inference process for HCRFs, and carried out facial expression recognition with incomplete observations. Banerjee and Nevatia proposed a pose filter based HCRF (PF-HCRF) model, which uses a detection filter for finding key poses and a root filter for modeling the detected key poses. It infers the temporal locations of the key poses even though the video frames are not fully observed. The methods in [2, 8] are able to work with incomplete observations. In this paper, we show a more advanced strategy to deal with incomplete observations: we complete the observations by borrowing additional segments from training data, and further improve the performance.
In this work, a general approach to partially observed action recognition (POAR) is presented. Regular (i.e., fully observed) action recognition can be considered a special case of POAR where no unobserved part exists. POAR becomes early prediction or gapfilling if there is merely one unobserved subsequence present at the end or in the middle of the action, respectively. Our method retrieves alternative segments from training data. It identifies outlier segments, selects their alternatives, and makes the prediction simultaneously. In this manner, our approach bridges the gaps caused by outlier frames, and enriches the information required for making more accurate predictions. Therefore, our approach is general enough to carry out regular action recognition, early prediction, and gapfilling. It is also applicable to cases where training and testing actions contain outlier frames occurring arbitrarily.
3 The Proposed Approach
We introduce our approach in this section. A sketch of using HCRFs for action recognition is given first. Then, the two key components of the proposed approach, alternative augmentation and learning HCRFs with augmented observations, are described.
3.1 Action recognition using HCRFs
A training set of $N$ actions $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^{N}$ is given, where each action instance $x_n$ is uniformly divided into $T$ temporal segments of the same length, i.e., $x_n = \{x_{n,1}, \ldots, x_{n,T}\}$, and $y_n \in \mathcal{Y}$ is its class. $\mathcal{Y}$ is the domain of classes. Conditional random fields (CRFs) model the conditional probabilities of classes given an action instance $x$, i.e., $P(y \mid x; \theta)$, where $\theta$ is the set of model parameters to be learned. The posterior in CRFs is a Gibbs distribution, and is written as

$$P(y \mid x; \theta) = \frac{1}{Z(x; \theta)} \exp\big(\Psi(y, x; \theta)\big), \qquad (1)$$

where $\Psi(y, x; \theta)$ is the potential function, which we describe later. $Z(x; \theta)$ is the partition function, which makes $P(y \mid x; \theta)$ a probability function, namely

$$Z(x; \theta) = \sum_{y' \in \mathcal{Y}} \exp\big(\Psi(y', x; \theta)\big). \qquad (2)$$
Parameter set $\theta$ is derived by maximizing the regularized log-likelihood of the training set $\mathcal{D}$:

$$\theta^{*} = \arg\max_{\theta} \sum_{n=1}^{N} \log P(y_n \mid x_n; \theta) - \lambda \|\theta\|^{2}, \qquad (3)$$

where $\lambda$ is a positive constant. In Eq. (3), the first term is the log-likelihood of the training data, and the second one is used for regularization.
Instead of CRFs, we conduct partially observed action recognition with HCRFs, which employ intermediate hidden variables to model the latent structure of observations. The hidden variables, whose states are considered key poses here, are used to explore the dependencies among action classes, key poses, and observations, as well as to enforce temporal coherence. Specifically, for an action $x$, a set of hidden variables $h = \{h_1, h_2, \ldots, h_T\}$ is created, one variable for each segment. The conditional probability in HCRFs is expressed as

$$P(y \mid x; \theta) = \frac{1}{Z(x; \theta)} \sum_{h} \exp\big(\Psi(y, h, x; \theta)\big), \qquad (4)$$

where the partition function is

$$Z(x; \theta) = \sum_{y' \in \mathcal{Y}} \sum_{h} \exp\big(\Psi(y', h, x; \theta)\big). \qquad (5)$$

We adopt a chain-structured model with the potential function

$$\Psi(y, h, x; \theta) = \sum_{t=1}^{T} \theta_h(h_t)^{\top} \phi(x_t) + \sum_{t=1}^{T} \theta_y(y, h_t) + \sum_{t=1}^{T-1} \theta_e(y, h_t, h_{t+1}), \qquad (6)$$

where $\phi(x_t)$ is the feature vector of the $t$-th segment of action $x$. $\phi(x_t)$ can be any features selected to characterize $x_t$. For instance, we select bag-of-words histograms based on either the cuboid descriptors, 3D skeleton features, or features learned by deep neural networks in the experiments. $\theta_h(h_t)$ is the parameter vector associated with the state of the $t$-th hidden variable. The inner product $\theta_h(h_t)^{\top} \phi(x_t)$ reflects the consensus between observation $x_t$ and hidden state $h_t$. Intuitively, $\theta_h(h_t)$ can be considered the learned key pose that facilitates action classification. The number of states of each hidden variable corresponds to the number of key poses. $\theta_y(y, h_t)$ measures the compatibility between action class $y$ and hidden state $h_t$. $\theta_e(y, h_t, h_{t+1})$ represents the consistency between action class $y$ and two successive hidden states $h_t$ and $h_{t+1}$.
Note that our approach can work with general graph structures and various potential functions. We use the chain structure with the potential function given in Eq. (6), because it suffices to achieve satisfactory results.
With training set $\mathcal{D}$ and the conditional probability in Eq. (4), parameter set $\theta$ can be optimized by solving Eq. (3). Efficient gradient-based solvers, such as L-BFGS, can be applied to the optimization. After optimization, the HCRFs model is obtained. Given a testing action $x$, its label $y^{*}$ is then inferred by using loopy belief propagation to solve

$$y^{*} = \arg\max_{y \in \mathcal{Y}} P(y \mid x; \theta). \qquad (7)$$
Refer to the literature on HCRFs for more details of the training and testing procedures.
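For concreteness, the class-conditional inference on a chain can be sketched with a standard forward (sum-product) pass. The snippet below is a minimal illustration, not the authors' implementation; the potential tables `unary`, `class_bias`, and `trans` stand in for the three terms of Eq. (6):

```python
import numpy as np

def _logsumexp0(m):
    """Numerically stable log-sum-exp over axis 0."""
    mx = m.max(axis=0)
    return mx + np.log(np.exp(m - mx).sum(axis=0))

def hcrf_log_posterior(unary, class_bias, trans):
    """Log P(y | x) for a chain-structured HCRF via a forward pass.

    unary:      (T, K) scores theta_h(h_t)^T phi(x_t)
    class_bias: (C, K) scores theta_y(y, h_t)
    trans:      (C, K, K) scores theta_e(y, h_t, h_{t+1})
    Returns a (C,) array of log-posteriors over classes.
    """
    T, K = unary.shape
    C = class_bias.shape[0]
    log_Z_y = np.empty(C)  # log of sum over all h of exp(Psi(y, h, x))
    for y in range(C):
        alpha = unary[0] + class_bias[y]
        for t in range(1, T):
            alpha = unary[t] + class_bias[y] + _logsumexp0(alpha[:, None] + trans[y])
        log_Z_y[y] = _logsumexp0(alpha)
    # normalizing over classes yields the posterior of Eq. (4)
    return log_Z_y - _logsumexp0(log_Z_y)
```

The per-class forward pass costs O(T K^2), so scoring all classes matches the complexity reported later for belief propagation on the chain.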
3.2 Alternative Augmentation
For a corrupt segment $x_t$, the extracted features $\phi(x_t)$ are inconsistent with the learned HCRFs model through the potential function in Eq. (6). This issue needs to be handled to avoid substantial performance degradation. The proposed alternative augmentation aims to augment each segment $x_t$ of every training and testing action with a set of alternatives, no matter whether $x_t$ is a corrupt outlier or not. The alternatives are borrowed from training data. Our approach can detect outlier segments and choose proper alternatives to them. It is not necessary that all the alternatives to $x_t$ be of high quality; it suffices that one or a few of them are good enough to replace $x_t$ when it is detected as an outlier. In the following, we introduce the proposed alternative augmentation, which is designed based on this requirement.
Alternative augmentation is developed upon the mutual dependency between segments: if two length-normalized actions are similar at their $j$-th segments, they are likely to be similar at their $i$-th segments. Given the training set $\mathcal{D}$, we consider an action $x$ to be augmented. To augment the $i$-th segment $x_i$ of $x$, we treat another segment $x_j$ ($j \neq i$) of $x$ as the query to $\mathcal{D}$, and seek the training action whose $j$-th segment is the most similar to the query. Then, this training action's $i$-th segment is employed as an alternative, denoted by $a_{i,j}$, i.e.,

$$a_{i,j} = x_{n^{*},i}, \quad \text{where } n^{*} = \arg\min_{n} \, d\big(\phi(x_j), \phi(x_{n,j})\big), \qquad (8)$$

and $d(\cdot,\cdot)$ is the adopted feature distance.
For a better understanding, the procedure of mutual recommendation is illustrated in Figure 3. By repeating the procedure for every segment pair of $x$, the augmented action of $x$, denoted by $\bar{x}$, is yielded, where each augmented segment $\bar{x}_i$ is composed of the original segment and the retrieved alternatives, i.e.,

$$\bar{x}_i = \{x_i, a_{i,1}, \ldots, a_{i,i-1}, a_{i,i+1}, \ldots, a_{i,T}\}. \qquad (9)$$
Outlier frames may be present in both training and testing actions. Therefore, alternative augmentation is applied to all training and testing actions. Note that when augmenting a training action, it is temporarily removed from the training set so that all its alternatives come from other training actions. After the procedure, each training or testing action $x$ is transformed into the augmented one $\bar{x}$.
Though the augmentation is done in a temporal-alignment manner, our approach does not rely on the action videos being well aligned frame by frame. This is because, in HCRFs, a hidden node subsumes a temporal window of frame-level features, and is tolerant to temporal inconsistency to an extent. Alternative augmentation can also be extended to be more robust to temporal misalignment between actions via duplicate recommendation: namely, $a_{i,j}$ in Eq. (8) serves as an alternative to not only segment $x_i$ but also its neighboring segments. The main computational cost of augmentation is the nearest neighbor search (NNS). For an action of $T$ segments, $T(T-1)$ alternatives are found by mutual recommendation. However, with a careful implementation, NNS is performed only $T$ times for augmenting an action of $T$ segments. Consider Eq. (8): once the NNS for segment $x_j$ is finished, the alternatives recommended by $x_j$ to the rest of the segments are known. In addition, algorithms for approximate nearest neighbor search, such as k-d trees or locality sensitive hashing, can be applied to further speed up the process.
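Under simplifying assumptions (precomputed segment features, Euclidean distance as the similarity measure, one nearest neighbor per query), the T-query implementation of mutual recommendation can be sketched as below; a k-d tree or LSH index could replace the brute-force search inside the loop:

```python
import numpy as np

def augment_action(query, train_feats):
    """Mutual-recommendation augmentation (a simplified sketch of Eq. (8)).

    query:       (T, d) segment features of the action to augment
    train_feats: (N, T, d) segment features of the N training actions
    Returns alternatives[i]: the T-1 segments recommended for segment i.
    """
    N, T, d = train_feats.shape
    alternatives = [[] for _ in range(T)]
    for j in range(T):
        # one nearest neighbor search per segment index j
        dists = np.linalg.norm(train_feats[:, j, :] - query[j], axis=1)
        n_star = int(np.argmin(dists))
        # the retrieved training action recommends its i-th segment
        # to every other segment i of the query action
        for i in range(T):
            if i != j:
                alternatives[i].append(train_feats[n_star, i, :])
    return alternatives
```

Note that the inner loop only copies already-retrieved segments, so the T nearest neighbor searches dominate the cost, as stated above.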
3.3 Learning HCRFs with Augmented Actions
Our approach is designed to work with the augmented actions for POAR. Hence, it needs to detect outliers and select a plausible alternative to each detected outlier. We develop our approach based on HCRFs, where an augmented action $\bar{x}$ is associated with a set of hidden variables $\bar{h} = \{\bar{h}_1, \ldots, \bar{h}_T\}$, one hidden variable for each augmented segment $\bar{x}_t$.
We leverage the hidden variables in HCRFs to model the uncertainty about both poses and observations. Specifically, the hidden variable here is composite, i.e., $\bar{h}_t = (s_t, z_t) \in \{0, 1, \ldots, A\} \times \{1, \ldots, K\}$, where $A$ and $K$ are the numbers of the alternatives and latent poses, respectively. Element $s_t$ specifies which observation is picked at time stamp $t$. It takes value $0$ if the original segment (observation) $x_t$ is identified as an inlier and picked. When $s_t$ takes value $a > 0$, $x_t$ is detected as an outlier and replaced by its $a$-th alternative. Element $z_t$, like the hidden variables used previously, corresponds to the latent poses.
The chain-structured model of our approach is shown in Figure 4. Compared to that in Figure 2, each hidden variable is composite, and each observation node contains not only the original segment but also the alternatives. To work with the composite variables and augmented observations, the potential function is generalized from Eq. (6), and is defined as

$$\Psi(y, \bar{h}, \bar{x}; \theta) = \sum_{t=1}^{T} \Big( \theta(s_t, z_t)^{\top} \bar{\phi}(\bar{x}_t) + b(s_t) \Big) + \sum_{t=1}^{T} \theta_y(y, \bar{h}_t) + \sum_{t=1}^{T-1} \theta_e(y, \bar{h}_t, \bar{h}_{t+1}), \qquad (10)$$
where $\bar{\phi}(\bar{x}_t)$ is the feature representation of augmented segment $\bar{x}_t$, and is defined as the concatenated column vector of all its elements,

$$\bar{\phi}(\bar{x}_t) = \big[\phi(x_t)^{\top}, \phi(a_{t,1})^{\top}, \ldots, \phi(a_{t,A})^{\top}\big]^{\top}. \qquad (11)$$
The parameter vector $\theta(s_t, z_t)$ corresponding to composite variable $\bar{h}_t = (s_t, z_t)$ takes both the picked observation and pose into account, and is expressed as

$$\theta(s_t, z_t) = \big[\mathbb{1}(s_t = 0)\,\omega_{z_t}^{\top}, \mathbb{1}(s_t = 1)\,\omega_{z_t}^{\top}, \ldots, \mathbb{1}(s_t = A)\,\omega_{z_t}^{\top}\big]^{\top}, \qquad (12)$$

where $\omega_k$ is the parameter set of the $k$-th pose to be learned, and $\mathbb{1}(\cdot)$ is the indicator function, so $\theta(s_t, z_t)$ is a vector whose elements are zero except for the block of the picked observation. Function $b(s_t)$ is used to express our bias towards picking the original segment, and is given by

$$b(s_t) = \begin{cases} \beta, & \text{if } s_t = 0, \\ 0, & \text{otherwise}, \end{cases} \qquad (13)$$

where $\beta$ is a non-negative constant.
The first term corresponding to composite hidden variable $\bar{h}_t = (s_t, z_t)$ in Eq. (10) becomes

$$\theta(s_t, z_t)^{\top} \bar{\phi}(\bar{x}_t) + b(s_t) = \omega_{z_t}^{\top} \phi\big(\bar{x}_t^{(s_t)}\big) + b(s_t),$$

where $\bar{x}_t^{(s_t)}$ denotes the picked observation. It measures the compatibility between the $z_t$-th latent pose and the $s_t$-th alternative (or the original segment if $s_t = 0$). If the original segment is picked, the extra value $\beta$ is added. This ensures that the original segment is replaced only when the substituted alternative is sufficiently better. The other two terms in Eq. (10), $\theta_y(y, \bar{h}_t)$ and $\theta_e(y, \bar{h}_t, \bar{h}_{t+1})$, evaluate the consistency among the adjacent hidden variables and the class label. We simply set $\theta_y(y, \bar{h}_t) = \theta_y(y, z_t)$ and $\theta_e(y, \bar{h}_t, \bar{h}_{t+1}) = \theta_e(y, z_t, z_{t+1})$; namely, they are the same as those in Eq. (6).
Composite hidden variable $\bar{h}_t$ can be converted into a single one with $(A+1) \times K$ states. With the new potential in Eq. (10), the HCRFs model can be learned with augmented actions by optimizing Eq. (3). The learned model then predicts novel augmented actions via Eq. (7).
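The conversion is a simple index flattening. The sketch below, with illustrative pose parameters and a hypothetical bias value, shows the composite-to-single-state mapping and the biased unary term used in Eq. (10):

```python
import numpy as np

def encode_state(s, z, K):
    """Flatten composite state (s, z) into one index in [0, (A+1)*K)."""
    return s * K + z

def decode_state(idx, K):
    """Recover (s, z) from the flattened index."""
    return divmod(idx, K)

def augmented_unary(pose_params, segment_feats, beta):
    """Unary potentials over composite states for one time stamp.

    pose_params:   (K, d) parameter vector of each latent pose
    segment_feats: (A+1, d) row 0 is the original segment, rows 1..A
                   are its alternatives
    beta:          non-negative bias towards keeping the original segment
    Returns an (A+1, K) table; flattening it row-wise matches encode_state.
    """
    table = segment_feats @ pose_params.T  # compatibility of each pick/pose pair
    table[0] += beta                       # b(s_t): reward picking the original
    return table
```

Because the bias only raises row 0, an alternative wins the maximization only when its pose compatibility exceeds that of the original segment by more than beta.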
The conditional probability in HCRFs in Eq. (4) is inferred by summing over all configurations of hidden variables. A configuration of hidden variables specifies whether the original segment or one of its alternatives is picked at each time stamp of an augmented action. In Eq. (4), the conditional probability is computed by summing the exponentials of the potentials of all configurations, so it is dominated by the configuration with the maximal potential value. From the inferred configuration with the maximal potential, it can be seen that our approach recognizes a partially observed action by detecting outliers and picking their alternatives.
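The dominant configuration can be recovered with a max-product (Viterbi) pass over the chain of flattened composite states; decoding each state then reveals, per segment, whether the original observation was kept (s_t = 0) or which alternative replaced it. This is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def best_configuration(unary, trans):
    """Viterbi decoding of the max-potential hidden configuration.

    unary: (T, S) potentials over the S flattened composite states
    trans: (S, S) pairwise potentials between successive hidden variables
    Returns (best total potential, list of T state indices).
    """
    T, S = unary.shape
    delta = unary[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + trans  # scores[p, q]: prev state p -> next q
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + unary[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    path.reverse()
    return float(delta.max()), path
```

Each index in `path` can be passed through a (s, z) decoding such as `divmod(idx, K)` to separate the observation pick from the latent pose.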
As reported in the original paper on HCRFs, the key step of HCRFs, belief propagation, is of complexity $O(C \cdot T \cdot K^2)$, where $C$, $T$, and $K$ are the numbers of classes, segments, and hidden states, respectively. In addition, the nearest neighbor search is performed $T$ times for augmenting an action. On UT-Interaction, the average running time of augmenting an action and predicting its label is a matter of seconds on a modern PC with an Intel processor, using a C++ implementation.
[Figure 5 panel labels: Walk, Sit down, Sit still, Use a TV remote, Stand up, Stand still, Pick up books, Carry books, Put down books, Carry a backpack, Drop a backpack, Make a phone call, Drink water, Wave hand, Clap]
4 Experimental Setup
In this section, we describe the settings of the conducted experiments, including two datasets used for performance evaluation, the adopted feature representations, and the evaluation metrics on each of the two datasets.
4.1 Datasets for Performance Evaluation
Our approach is evaluated on a daily activities dataset we collected, CITI-DailyActivities3D (available at https://sites.google.com/view/citi3ddataset/), and a benchmark dataset, UT-Interaction. The first one contains actions with outlier frames occurring irregularly and naturally. The second one consists of clean actions; thus, synthetic outlier frames are added. The primary goal of evaluation on the first dataset is to measure how our approach performs in realistic cases. The goal on the second one is to analyze how well it performs when different fractions of outlier frames are present. The two datasets contain videos of different modalities, such as RGB videos and 3D skeleton structures, and cover actions ranging from single-person actions to multi-person interactions.
4.1.1 CITI-DailyActivities3D dataset
This work delivers an integrated solution to outlier detection, alternative selection, and action prediction. Existing benchmarks of action recognition, e.g., [44, 32, 53], contain videos where no or few corrupt frames appear. For a more realistic evaluation, we adopt this dataset, in which outlier frames are present.
Ten actors were employed to perform fifteen daily activities in the construction of this dataset. One of the ten actors is left-handed. The fifteen daily activities are walk, sit down, sit still, use a TV remote, stand up, stand still, pick up books, carry books, put down books, carry a backpack, drop a backpack, make a phone call, drink water, wave hand, and clap. Figure 5 displays one example from each of the fifteen categories. A Microsoft Kinect was used in the collection so that the RGB videos and the depth maps are available simultaneously. The skeleton streams are also attainable by applying a skeleton estimation method to the depth maps.
The resultant dataset is challenging. Outlier frames caused by different issues can occur at any temporal position and with arbitrary durations in videos. The dataset is composed of skeleton sequences, some of which are clean, while each of the remaining sequences contains a substantial fraction of outlier frames. Some outlier frames in the skeleton streams are shown in Figure 6, where the parts in yellow represent the extracted skeletons with low confidence. As in many existing benchmarks, difficulties such as large intra-class variations, high inter-class similarity, and different perspective settings are present in this dataset.
4.1.2 UT-Interaction dataset
This database collects high-level human interaction videos of six activity categories, including hand-shaking, hugging, kicking, pointing, punching, and pushing. The dataset has two subsets, UT-Interaction Set 1 and Set 2. Each subset contains videos of the six types of human interactions. Both segmented and unsegmented versions of this dataset are available. Like the approaches used for comparison, we choose the former for evaluation.
We added artificial outlier frames to the videos for evaluation. A wide range of outlier ratios, i.e., the proportion of outlier frames to all frames, is considered. The types of the artificial outlier frames include signal noise and occlusions by various objects. Some examples of these synthetic outliers are shown in Figure 7.
4.2 Feature Representation and Evaluation Metrics
We represent actions in our CITI-DailyActivities3D dataset based on the absolute 3D body joint positions in the skeleton streams. Each action is uniformly sampled to a fixed number of skeletons. To make the representation more robust, we first transform the skeletal data from the world coordinate system to a person-centric coordinate system by setting the hip center at the origin. Then, a skeleton in this dataset is randomly chosen as the reference, and all the other skeletons are normalized so that their body part lengths are the same as those of the reference. Finally, we rotate each skeleton so that the ground-plane projection of the vector from its left hip to its right hip is parallel to a fixed global axis.
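A minimal sketch of the translation and rotation steps is given below (the body-length normalization against the reference skeleton is omitted); the joint indices are dataset-specific assumptions, not the actual joint layout:

```python
import numpy as np

def normalize_skeleton(joints, hip_center=0, left_hip=1, right_hip=2):
    """Person-centric skeleton normalization (translation + rotation sketch).

    joints: (J, 3) world-coordinate joint positions; the index arguments
            are placeholders for the dataset's actual joint layout.
    """
    # 1) place the hip center at the origin
    joints = joints - joints[hip_center]
    # 2) rotate about the vertical (y) axis so that the ground-plane
    #    projection of the left-hip -> right-hip vector points along +x
    v = joints[right_hip] - joints[left_hip]
    angle = np.arctan2(v[2], v[0])            # angle of the (x, z) projection
    c, s = np.cos(angle), np.sin(angle)
    rot_y = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])
    return joints @ rot_y.T
```

The rotation about the vertical axis preserves heights, so only the facing direction of the skeleton is canonicalized.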
For the UT-Interaction dataset, spatio-temporal interest points (STIPs) are first detected; the Harris3D corner detector is used in this work. Then, the cuboid descriptor is applied to each of the detected STIPs, and actions are represented by using the bag-of-words model, where the visual words are generated via the k-means clustering algorithm. Each training and testing action is partitioned into equal-length segments, and a bag-of-words histogram is compiled for each segment. For this dataset, features learned by deep neural networks are also adopted.
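Assuming the cuboid descriptors have already been extracted per frame, the bag-of-words step can be sketched as follows; `kmeans` here is a bare-bones Lloyd's algorithm with a deterministic farthest-first initialization, standing in for whichever clustering implementation was actually used:

```python
import numpy as np

def kmeans(descs, k, iters=10):
    """A few Lloyd iterations to build the visual vocabulary (sketch)."""
    descs = np.asarray(descs, dtype=float)
    # deterministic farthest-first initialization
    centers = [descs[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(descs - c, axis=1) for c in centers], axis=0)
        centers.append(descs[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.linalg.norm(descs[:, None] - centers[None], axis=2).argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = descs[labels == c].mean(axis=0)
    return centers

def segment_histograms(frame_descs, centers, n_segments):
    """One L1-normalized bag-of-words histogram per temporal segment.

    frame_descs: list of (m_t, d) descriptor arrays, one per frame
    """
    k = len(centers)
    bounds = np.linspace(0, len(frame_descs), n_segments + 1).astype(int)
    hists = np.zeros((n_segments, k))
    for seg in range(n_segments):
        for t in range(bounds[seg], bounds[seg + 1]):
            d = frame_descs[t]
            if len(d):
                words = np.linalg.norm(d[:, None] - centers[None], axis=2).argmin(axis=1)
                np.add.at(hists[seg], words, 1.0)  # unbuffered histogram update
        total = hists[seg].sum()
        if total > 0:
            hists[seg] /= total
    return hists
```

Each resulting row plays the role of the per-segment feature vector fed into the observation nodes of the model.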
Unless otherwise specified, our approach and all the competing approaches use the same feature representation on each dataset for fair comparison. On our CITI-DailyActivities3D dataset, we split the ten subjects into two equal-size groups. The actions from one group first serve as the training data, while the rest serve as the testing data. We then switch the two subject groups, and the average performance is reported. For the UT-Interaction dataset, we follow [2, 4, 36], and use leave-one-sequence-out cross validation for evaluating the performance.
Table 1: Recognition rates (%) of the competing approaches on CITI-DailyActivities3D.

|Naïve Bayes classifier (NBC)|64.8|69.2|
|Recurrent neural networks (RNNs)|68.1|71.7|
|Hidden Markov model (HMM)|51.6|64.2|
|Hidden conditional random fields (HCRFs)|60.3|68.8|
|Hierarchical sequence summarization (HSS)|84.0|61.5|66.3|
|Approach by Gowayyed et al.|68.3|74.6|
5 Experimental Results
In this section, our approach is evaluated on the dataset we collected and on the UT-Interaction dataset. We report and discuss the results.
5.1 Results on CITI-DailyActivities3D dataset
Three evaluation tasks are conducted on this dataset. Task 1 aims at evaluating the performance of approaches on fully observed videos; namely, both training and testing actions contain no outlier frames. Tasks 2 and 3 focus on the tolerance of approaches to outliers. In Task 2, approaches are learned with clean training data, but are tested on actions with outliers. In Task 3, both the training and testing sets are mixtures of clean and corrupt actions. In Task 1, we check whether our approach, with its extra components for outlier handling, still performs well on clean actions. More importantly, we are interested in the performance gaps between the first task and the other two tasks, which reveal the robustness of an approach to outliers.
We select six existing approaches for comparison, including the Naïve Bayes classifier (NBC), recurrent neural networks (RNNs), the hidden Markov model (HMM), hidden-state CRFs (HCRFs), the hierarchical sequence summarization (HSS) model, and the approach by Gowayyed et al.
We particularly focus on the comparison between HCRFs and our approach. Both are established on HCRFs and use the same inference algorithms for training and testing. Two main technical components, alternative augmentation and the extended model for working on augmented actions, distinguish our approach from HCRFs in outlier handling.
Except for the approach by Gowayyed et al., all the approaches adopt the 3D skeleton features that we compiled. For graphical model-based classifiers, e.g., HMM, HCRFs, and ours, the feature vector at each segment is the representation of the corresponding observation node. For classifiers working on representations of whole videos, e.g., NBC, we concatenate the feature vectors of all frames. The approach by Gowayyed et al. takes into account features based on body joint trajectories and uses the Fourier temporal pyramid (FTP). The recognition rates of all approaches on the three tasks are reported in Table 1.
Results on Task 1. The recognition rates of the baseline NBC, the graphical model-based approaches (RNN, HMM, HCRFs, and HSS), the method of Gowayyed et al., and our approach are listed in Table 1. Our approach is comparable to most competing approaches. Note that HCRFs and our approach give almost the same recognition rates in this task with the clean data. This means that the additional mechanisms of our approach do not cause a performance drop, even though they are designed to handle outliers.
Results on Task 2. The major difference between Task 1 and Task 2 is that the testing actions in the latter contain outliers. Comparing the performance on the two tasks, all six competing approaches suffer from substantial performance drops, ranging from the smallest in NBC to the largest in HSS. We also observe that the drops are even more dramatic in the graphical model-based approaches, such as HMM and HCRFs, since their complex models are more sensitive to noisy data. The features and the FTP structure used by Gowayyed et al. show their robustness on this task. Our approach is designed to address outliers: it detects outliers and replaces them with plausible alternatives. It turns out that the drop of our approach is only marginal, even though it is established upon graphical models, and the achieved accuracy is more favorable than those of all the other approaches.
Results on Task 3.
The difference between this task and the two previous ones is that the training actions also contain outlier frames. Comparing the accuracies in Task 1 and Task 3, all six competing approaches still suffer from severe performance degradation. Since the outlier frame distributions in the training and testing data are similar, Task 3 may not be more difficult than Task 2, even though both training and testing actions contain outlier frames in Task 3. To sum up, the results indicate that our approach can work not only with corrupt testing data but also with corrupt training data, and outperforms the competing approaches significantly.
In order to evaluate the sensitivity of our method to the number of hidden states in HCRFs, we conduct an experiment to quantify the parameter sensitivity. The performance of our approach with different numbers of hidden states regarding key poses on all three tasks is shown in Figure 9. The results point out that a few hidden states suffice for achieving stable performance in all three tasks. In fact, except for the results in Figure 9, the performances of our approach in all the experiments are reported with a fixed, small number of hidden states regarding key poses.
5.2 Results on UT-Interaction Dataset
Actions in the UT-Interaction dataset contain synthetic outlier frames with various outlier ratios. Two sets of experiments are conducted. In the first one, approaches perform partially observed action recognition (POAR) in the case where the locations of outlier frames are known in advance. In this case, our approach augments only outlier segments with alternatives. We focus on comparing our approach with those working on actions with incomplete observations, and on checking its advantage of borrowing alternatives from training data. In the second set, the locations of outlier frames are unknown. We focus on verifying whether our approach can detect outlier frames, pick proper alternatives to them, and yield remarkable performance gains over the competing methods.
Our approach is compared with the same competing methods adopted in the experiments on the dataset we collected, except the method by Gowayyed et al., which is designed for skeleton features and is not applicable to the UT-Interaction dataset. As mentioned, all the methods evaluated on this dataset adopt the bag-of-words model based on the cuboid descriptor.
5.2.1 POAR with Known Outlier Locations
We choose the settings of gapfilling and early prediction, where the outliers are missing frames with known locations. The former involves recognizing actions in which the outlier (here, missing) frames are located in the middle, so the observed frames form two separate observed segments. The latter involves recognizing actions with missing frames at the end of the sequences.
The task of gapfilling is addressed under the assumption that the gap's location and duration are given. We select state-of-the-art approaches for comparison, including DynamicBoW, a sparse coding based method (SC), and mixture of segments sparse coding (MSSC). Figure 8 reports the performance of gapfilling by all the evaluated approaches on the two UT-Interaction sets. Our approach is consistently superior to all other methods under different outlier ratios. We think the reason is that the compared approaches simply neglect the missing part, while our approach borrows extra alternatives to enrich the information for prediction, and connects the whole action for further temporal regularization.
We choose some of the state-of-the-art methods for comparison in early prediction, including IntegrateBoWs, DynamicBoW, a sparse coding based method (SC), mixture of segments sparse coding (MSSC), pose filter based hidden conditional random fields (PF-HCRFs), and the hierarchical movemes representation (HMR). To evaluate how feature representations learned by CNN-based methods perform in action recognition, we also adopt the deep learning-based features (DF) extracted using the two-stream architecture. The resultant method is denoted by Ours+DF.
Figure 8 shows the performance of the competing approaches and our approach on the two UT-Interaction sets. The recognition rates of each approach with different fractions of the observed segments in videos, i.e., observation ratios, are given. Figure 8 shows that HCRFs-based approaches, such as our approach and PF-HCRFs, perform better than sparse coding based methods, e.g., SC and MSSC, and bag-of-words approaches, e.g., DynamicBoW and IntegrateBoWs. The main reason is that HCRFs employ hidden states in a chain structure in their representation, so the implicit temporal coherence in videos is better modeled in the latent space. Actions in one UT-Interaction set are noisier than those in the other. Figure 8 shows that on the noisier set, the sparse coding based methods, SC and MSSC, are robust to noise and achieve performance comparable to HMR. However, since the likelihood at each action segment is estimated independently, SC and MSSC neglect the temporal coherence among the observed parts. Our approach, based on HCRFs, exploits the temporal coherence of the observed parts and performs favorably against SC and MSSC. Our approach with deep learning-based features (DF) performs slightly better than with the ordinary cuboid descriptor-based BoWs features on both sets. The performance gain of adopting features learned by deep neural networks is not evident in our cases. A likely reason is that the temporal convolutions in the network let outlier frames corrupt the features computed at temporally nearby locations.
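The chain-structured latent representation discussed above can be made concrete with a Viterbi-style dynamic program. The sketch below scores the best hidden-state chain under toy unary and pairwise potentials; this is the standard machinery behind linear-chain (H)CRF inference, simplified to omit the class-conditional parameters and the augmentation machinery of the actual model.

```python
import numpy as np

def chain_score(unary, pairwise):
    """Max over hidden-state chains h_1..h_T of
    sum_t unary[t, h_t] + sum_{t>1} pairwise[h_{t-1}, h_t],
    computed by dynamic programming. The pairwise term is what
    couples temporally adjacent segments, i.e., the temporal
    coherence modeled in the latent space."""
    T, H = unary.shape
    score = unary[0].copy()          # best score ending in each state at t=0
    for t in range(1, T):
        # For each state at time t, take the best predecessor plus transition.
        score = unary[t] + np.max(score[:, None] + pairwise, axis=0)
    return score.max()

# Toy example: two segments, two hidden states.
unary = np.array([[1.0, 0.0], [0.0, 1.0]])
pairwise = np.array([[0.5, 0.0], [0.0, 0.5]])
print(chain_score(unary, pairwise))  # best chain is (h_1=0, h_2=1): 1 + 0 + 1 = 2.0
```

In the real model this score would additionally be conditioned on the candidate class label, and prediction would take the label maximizing it.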
Compared to PF-HCRFs and HMR, our method achieves superior or similar performance (though it is occasionally worse), as shown in Figure 8. We attribute this to the fact that the unobserved part can be replaced by alternatives borrowed from training data in our approach, which carry the time-varying information. Then, by using both the observed part and the borrowed alternatives, temporal regularization becomes attainable and facilitates recognition.
The results in Figure 8 demonstrate that our approach achieves favorable performance in comparison with state-of-the-art approaches in POAR with known outlier locations. More importantly, our approach can carry out POAR even when the locations of outlier frames are unknown, as shown next. This property distinguishes our approach from the approaches compared in both gapfilling and early prediction.
5.2.2 POAR with Unknown Outlier Locations
Two settings are adopted for POAR with unknown outlier locations. The first is still gapfilling, but the locations of outliers are assumed unknown. The second setting involves randomly located outlier frames whose locations in actions are arbitrarily generated. As on the self-collected dataset, our approach is compared with NBC, RNNs, HMM, HCRFs, and HSS. All approaches adopt the same bag-of-words representation based on the cuboid descriptor.
Figure 10 reports the performance of all evaluated approaches in the two settings on the two UT-Interaction sets. Except for NBC, all methods achieve similar performance when no outliers are present. As the outlier ratio increases, our approach becomes significantly better than any competing approach. At large outlier ratios, our approach achieves markedly higher accuracy rates than any competing approach in both gapfilling and the setting with randomly located outliers. The results confirm the effectiveness of our approach in outlier detection and handling. Comparing the results in Figure 8 and Figure 10, it can be observed that the performance gains of our approach are more remarkable when the locations of the outlier frames are unknown than when they are known. This is because our approach integrates outlier detection and alternative selection into prediction. To the best of our knowledge, this property distinguishes our approach from all existing approaches.
5.2.3 Alternative Quality Analysis
To gain insight into why our method works well on POAR, we first analyze the quality of the alternative segments borrowed from training data. We consider an alternative accurate if it and the original segment belong to actions of the same class. For each augmented segment, we compute the average accuracy of its alternatives. We also measure the probability that at least one of its alternatives is accurate. Figure 11 shows the two statistics under different outlier ratios. The results show that our alternative augmentation works well: with high probability, there exists at least one accurate alternative to a segment.
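The two statistics above amount to simple bookkeeping over class labels. The sketch below computes them on toy labels; it illustrates the definitions rather than reproducing the actual evaluation code.

```python
def alternative_stats(alt_labels, true_labels):
    """For each augmented segment, given the class labels of its
    alternatives and the true class of the original segment, compute
    (i) the average accuracy of the alternatives and
    (ii) the probability that at least one alternative is accurate."""
    per_seg_acc = []
    hits = 0
    for alts, y in zip(alt_labels, true_labels):
        correct = [a == y for a in alts]
        per_seg_acc.append(sum(correct) / len(correct))
        hits += any(correct)
    avg_acc = sum(per_seg_acc) / len(per_seg_acc)
    at_least_one = hits / len(true_labels)
    return avg_acc, at_least_one

# Toy example: two augmented segments, each with two alternatives.
avg, p = alternative_stats([["run", "walk"], ["walk", "walk"]], ["run", "jump"])
print(avg, p)  # 0.25 0.5
```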
Moreover, we compute the probability that an outlier segment is replaced by an accurate alternative in our approach. As mentioned previously, the alternative with the maximal value of the potential function is selected to replace the original segment. The selected alternative is considered correct if it comes from a training action of the same category. The probability of correct replacement under various outlier ratios is reported in Figure 11. It can be observed that a large fraction of outliers are correctly replaced when the outlier ratio is not too high. This reveals the main reason why our approach still works well when outliers occur.
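The replacement-and-check step can be sketched as follows. The potential values here are placeholders for the model's learned potential function, and the function names are illustrative, not the paper's.

```python
def replace_and_check(potentials, alt_labels, true_label):
    """Select the alternative with the maximal potential value and
    report whether the replacement is correct, i.e., whether the
    selected alternative comes from a training action of the same
    class as the original segment."""
    best = max(range(len(potentials)), key=potentials.__getitem__)
    return alt_labels[best] == true_label

# Toy example: the second alternative scores highest and has the right class.
print(replace_and_check([0.2, 0.9, 0.5], ["walk", "run", "walk"], "run"))  # True
```

Because selection is driven by the same potential function used for prediction, outlier detection and alternative selection fold naturally into inference.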
We have introduced an approach to recognizing partially observed actions. We leverage the mutual dependency between video segments, and augment each segment of an action with extra alternatives borrowed from training data. When working on the augmented actions, our approach integrates outlier segment detection and alternative selection into the process of action recognition. To the best of our knowledge, such a generalization of action recognition is novel. Our approach is comprehensively evaluated on two datasets. It works with different features, recognizes actions with either synthetic or real outliers, and accomplishes gapfilling as well as full- and partially-observed action recognition. Experimental results demonstrate its effectiveness. In future work, we aim to extend this approach to handle not only segment-level but also region- or trajectory-level outliers for advanced spatiotemporal analysis and further performance gains.
-  A. Ayvaci, M. Raptis, and S. Soatto. Sparse occlusion detection with optical flow. Int. J. Computer Vision, 97(3):322–338, 2012.
-  P. Banerjee and R. Nevatia. Pose filter based hidden-crf models for activity detection. In Proc. Euro. Conf. Computer Vision, pages 711–726, 2014.
-  W. Bian, D. Tao, and Y. Rui. Cross-domain human action recognition. IEEE Trans. Systems, Man, and Cybernetics, Part B, 42(2):298–307, 2012.
-  Y. Cao, D. Barrett, A. Barbu, S. Narayanaswamy, H. Yu, A. Michaux, Y. Lin, S. Dickinson, J. M. Siskind, and S. Wang. Recognize human activities from partially observed videos. In Proc. Conf. Computer Vision and Pattern Recognition, pages 2658–2665, 2013.
-  Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. arXiv preprint arXiv:1611.08050, 2016.
-  G. Carneiro and J. C. Nascimento. Combining multiple dynamic models and deep learning architectures for tracking the left ventricle endocardium in ultrasound data. IEEE Trans. Pattern Analysis and Machine Intelligence, 35(11):2592–2607, 2013.
-  A. A. Chaaraoui, J. R. Padilla-López, and F. Flórez-Revuelta. Fusion of skeletal and silhouette-based features for human action recognition with rgb-d devices. In Proc. Int'l Conf. Computer Vision Workshops, pages 91–97, 2013.
-  K.-Y. Chang, T.-L. Liu, and S.-H. Lai. Learning partially-observed hidden conditional random fields for facial expression recognition. In Proc. Conf. Computer Vision and Pattern Recognition, pages 533–540, 2009.
-  C.-C. Chen and J. Aggarwal. Modeling human activities as speech. In Proc. Conf. Computer Vision and Pattern Recognition, pages 3425–3432, 2011.
-  X. Chu, W. Ouyang, H. Li, and X. Wang. Structured feature learning for pose estimation. In Proc. Conf. Computer Vision and Pattern Recognition, 2016.
-  J. W. Davis and A. Tyagi. Minimal-latency human action recognition using reliable-inference. Image and Vision Computing, 24(5):455–472, 2006.
-  P. Dollár, V. Rabaud, G. Cottrell, and S. Belongie. Behavior recognition via sparse spatio-temporal features. In Proc. Int'l Workshops Visual Surveillance and Performance Evaluation of Tracking and Surveillance, pages 65–72, 2005.
-  J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proc. Conf. Computer Vision and Pattern Recognition, pages 2625–2634, 2015.
-  L. Fei-Fei and P. Perona. A bayesian hierarchical model for learning natural scene categories. In Proc. Conf. Computer Vision and Pattern Recognition, volume 2, pages 524–531, 2005.
-  C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In Proc. Conf. Computer Vision and Pattern Recognition, 2016.
-  C. Gan, N. Wang, Y. Yang, D.-Y. Yeung, and A. G. Hauptmann. Devnet: A deep event network for multimedia event detection and evidence recounting. In Proc. Conf. Computer Vision and Pattern Recognition, pages 2568–2577, 2015.
-  M. A. Gowayyed, M. Torki, M. E. Hussein, and M. El-Saban. Histogram of oriented displacements (hod): Describing trajectories of human joints for action recognition. In Proc. Int'l Joint Conf. Artificial Intelligence, pages 1351–1357, 2013.
-  M. Hoai and F. De la Torre. Max-margin early event detectors. Int. J. Computer Vision, 107(2):191–202, 2014.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Proc. Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
-  T. Lan, T.-C. Chen, and S. Savarese. A hierarchical representation for future action prediction. In Proc. Euro. Conf. Computer Vision, pages 689–704, 2014.
-  I. Laptev. On space-time interest points. Int. J. Computer Vision, 64(2-3):107–123, 2005.
-  Q. Li, Z. Qiu, T. Yao, T. Mei, Y. Rui, and J. Luo. Action recognition by learning deep multi-granular spatio-temporal video representation. In Proc. ACM Conf. Multimedia Retrieval, pages 159–166, 2016.
-  X. Li, M. Fang, J.-J. Zhang, and J. Wu. Learning coupled classifiers with rgb images for rgb-d object recognition. Pattern Recognition, 61:433–446, 2017.
-  C. Liu, X. Wu, and Y. Jia. Transfer latent svm for joint recognition and localization of actions in videos. IEEE Trans. Cybernetics, 46(11):2596–2608, 2016.
-  L. Liu, L. Shao, X. Li, and K. Lu. Learning spatio-temporal representations for action recognition: A genetic programming approach. IEEE Trans. Cybernetics, 46(1):158–170, 2016.
-  L. Liu, L. Shao, X. Zhen, and X. Li. Learning discriminative key poses for action recognition. IEEE Trans. Cybernetics, 43(6):1860–1870, 2013.
-  L. Liu, Y. Zhou, and L. Shao. Dap3d-net: Where, what and how actions occur in videos? arXiv preprint arXiv:1602.03346, 2016.
-  S. Maji, L. Bourdev, and J. Malik. Action recognition from a distributed representation of pose and appearance. In Proc. Conf. Computer Vision and Pattern Recognition, pages 3177–3184, 2011.
-  J. Martens and I. Sutskever. Learning recurrent neural networks with hessian-free optimization. In Proc. Int'l Conf. Machine Learning, pages 1033–1040, 2011.
-  B. Ni, Y. Pei, P. Moulin, and S. Yan. Multilevel depth and image fusion for human activity detection. IEEE Trans. Cybernetics, 43(5):1383–1394, 2013.
-  X. Nie, C. Xiong, and S.-C. Zhu. Joint action recognition and pose estimation from video. In Proc. Conf. Computer Vision and Pattern Recognition, pages 1293–1301, 2015.
-  O. Oreifej and Z. Liu. Hon4d: Histogram of oriented 4d normals for activity recognition from depth sequences. In Proc. Conf. Computer Vision and Pattern Recognition, pages 716–723, 2013.
-  O. Oshin, A. Gilbert, and R. Bowden. Capturing the relative distribution of features for action recognition. In Proc. Conf. Automatic Face and Gesture Recognition, pages 111–116, 2011.
-  A. Quattoni, S. Wang, L.-P. Morency, M. Collins, and T. Darrell. Hidden conditional random fields. IEEE Trans. Pattern Analysis and Machine Intelligence, pages 1848–1852, 2007.
-  M. Raptis and L. Sigal. Poselet key-framing: A model for human activity recognition. In Proc. Conf. Computer Vision and Pattern Recognition, pages 2650–2657, 2013.
-  M. Ryoo. Human activity prediction: Early recognition of ongoing activities from streaming videos. In Proc. Int'l Conf. Computer Vision, pages 1036–1043, 2011.
-  M. S. Ryoo and J. K. Aggarwal. UT-Interaction Dataset, ICPR contest on Semantic Description of Human Activities (SDHA), 2010.
-  W. Shen, K. Deng, X. Bai, T. Leyvand, B. Guo, and Z. Tu. Exemplar-based human action pose correction and tagging. In Proc. Conf. Computer Vision and Pattern Recognition, pages 1784–1791, 2012.
-  Y.-F. Shih, Y.-M. Yeh, Y.-Y. Lin, M.-F. Weng, Y.-C. Lu, and Y.-Y. Chuang. Deep co-occurrence feature learning for visual object recognition. In Proc. Conf. Computer Vision and Pattern Recognition, 2017.
-  J. Shotton, R. Girshick, A. Fitzgibbon, T. Sharp, M. Cook, M. Finocchio, R. Moore, P. Kohli, A. Criminisi, A. Kipman, et al. Efficient human pose estimation from single depth images. IEEE Trans. Pattern Analysis and Machine Intelligence, 35(12):2821–2840, 2013.
-  G. Shu, A. Dehghan, O. Oreifej, E. Hand, and M. Shah. Part-based multiple-person tracking with partial occlusion handling. In Proc. Conf. Computer Vision and Pattern Recognition, pages 1815–1821, 2012.
-  K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Proc. Advances in Neural Information Processing Systems, pages 568–576, 2014.
-  Y. Song, L.-P. Morency, and R. Davis. Multi-view latent variable discriminative models for action recognition. In Proc. Conf. Computer Vision and Pattern Recognition, pages 2120–2127, 2012.
-  Y. Song, L.-P. Morency, and R. W. Davis. Action recognition by hierarchical sequence summarization. In Proc. Conf. Computer Vision and Pattern Recognition, pages 3562–3569, 2013.
-  C. Sutton and A. McCallum. An Introduction to Conditional Random Fields for Relational Learning. MIT Press, 2007.
-  N. C. Tang, Y.-Y. Lin, J.-H. Hua, S.-E. Wei, M.-F. Weng, and H.-Y. M. Liao. Robust action recognition via borrowing information across video modalities. IEEE Trans. Image Processing, 24(2):709–723, 2015.
-  D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In Proc. Int'l Conf. Computer Vision, pages 4489–4497, 2015.
-  R. Vemulapalli, F. Arrate, and R. Chellappa. Human action recognition by representing 3d skeletons as points in a lie group. In Proc. Conf. Computer Vision and Pattern Recognition, pages 588–595, 2014.
-  J. Wang, Z. Liu, Y. Wu, and J. Yuan. Mining actionlet ensemble for action recognition with depth cameras. In Proc. Conf. Computer Vision and Pattern Recognition, pages 1290–1297, 2012.
-  L. Wang and D. Suter. Recognizing human activities from silhouettes: Motion subspace and factorial discriminative graphical model. In Proc. Conf. Computer Vision and Pattern Recognition, pages 1–8, 2007.
-  X. Wang, T. X. Han, and S. Yan. An hog-lbp human detector with partial occlusion handling. In Proc. Int'l Conf. Computer Vision, pages 32–39, 2009.
-  D. Weinland, M. Özuysal, and P. Fua. Making action recognition robust to occlusions and viewpoint changes. In Proc. Euro. Conf. Computer Vision, pages 635–648, 2010.
-  L. Xia and J. Aggarwal. Spatio-temporal depth cuboid similarity feature for activity recognition using depth camera. In Proc. Conf. Computer Vision and Pattern Recognition, pages 2834–2841, 2013.
-  T. Xiao, H. Li, W. Ouyang, and X. Wang. Learning deep feature representations with domain guided dropout for person re-identification. arXiv preprint arXiv:1604.07528, 2016.
-  Y. Yang, C. Deng, D. Tao, S. Zhang, W. Liu, and X. Gao. Latent max-margin multitask learning with skelets for 3-d action recognition. IEEE Trans. Cybernetics, 47(2):439–448, 2017.
-  G. Yu, J. Yuan, and Z. Liu. Predicting human activities using spatio-temporal structure of interest points. In Proc. ACM Conf. Multimedia, pages 1049–1052, 2012.
-  J. Zhang, Y. Han, J. Tang, Q. Hu, and J. Jiang. Semi-supervised image-to-video adaptation for video action recognition. IEEE Trans. Cybernetics, 47(4):960–973, 2017.