Advances in Human Action Recognition: A Survey

Guangchun Cheng, Yiwen Wan, Abdullah N. Saudagar, Kamesh Namuduri, Bill P. Buckles
Dept. of Computer Science and Engineering, University of North Texas, Denton, TX 76203, USA
Abstract

Human action recognition has been an important topic in computer vision due to its many applications, such as video surveillance, human-machine interaction, and video retrieval. One core problem behind these applications is automatically recognizing low-level actions and high-level activities of interest; the former is usually the basis for the latter. This survey gives an overview of the most recent advances in human action recognition during the past several years, following the well-formed taxonomy proposed in the earlier survey by Aggarwal and Ryoo (2011). From this state-of-the-art survey, researchers can view a panorama of progress in the area to guide future research.

keywords:
action recognition, survey, computer vision, video analytics
journal: Pattern Recognition Letters

1 Introduction

Human action recognition is an active topic in the field of computer vision, due partially to the rapidly increasing volume of video data and the large number of potential applications based on automatic video analysis, such as visual surveillance, human-machine interfaces, sports video analysis, and video retrieval. Among the problems underlying these applications, one of the most interesting is recognizing human actions, especially high-level behaviors.

An action is a sequence of human body movements and may involve several body parts concurrently. From the viewpoint of computer vision, recognizing an action means matching an observation (e.g., a video) with previously defined patterns and then assigning it a label, i.e., an action type. Depending on complexity, human activities can be categorized into four levels: gestures, actions, interactions, and group activities Aggarwal and Ryoo (2011), and much research follows a bottom-up construction of human activity recognition. Major components of such systems include feature extraction, action learning and classification, and action recognition and segmentation Poppe (2010). A simple process consists of three steps: detection of humans and/or their body parts, tracking, and recognition using the tracking results. For instance, to recognize a "shaking hands" activity, two persons' arms and hands are first detected and tracked to generate a spatial-temporal description of their movement. This description is compared with existing patterns in the training data to determine the action type.

This paradigm relies heavily on the accuracy of tracking, which is not reliable in cluttered scenes. Many other methodologies have been proposed, and they can be classified according to different criteria, as in existing survey papers. Poppe (2010) discussed human action recognition in terms of image representation and action classification separately. Weinland et al. (2011) surveyed methods for action representation, segmentation, and recognition. Turaga et al. (2008) divided the recognition problem into actions and activities according to complexity, and classified approaches according to their ability to handle varying degrees of complexity. Many other classification criteria exist Aggarwal and Ryoo (2011); Chaudhary et al. (2011); Candamo et al. (2010). Among them, Aggarwal and Ryoo (2011) is one of the latest comprehensive summaries and comparisons of the most significant progress in this area. Based on whether an action is recognized from input images directly, Aggarwal and Ryoo (2011) divide recognition methodologies into two major categories: single-layered approaches and hierarchical approaches. Both are further sub-categorized depending on feature representation and learning methods, as shown in Fig. 1. That survey, however, covers progress only up to roughly three years before the present one.

Figure 1: Hierarchical approach-based taxonomy of human activity recognition methodologies Aggarwal and Ryoo (2011).

In this paper, we focus on state-of-the-art research not discussed in previous surveys. Additionally, to allow comparison with previous methods, we use a taxonomy similar to that of Aggarwal and Ryoo's survey Aggarwal and Ryoo (2011). For each of the categories in Fig. 1, recent developments are presented together with a comparison to previously reported methods.

The remainder of this paper is structured as follows. Publicly available datasets for human action recognition are reviewed in Section 2, followed by two sections that review recognition approaches. In Section 3, single-layered recognition approaches are reviewed with their different representation and integration methods. Section 4 discusses advances in hierarchical methodologies. Section 5 concludes this survey.

2 Datasets

In this section we describe datasets in use since 2009; datasets used before 2009 are covered in more detail in Aggarwal and Ryoo (2011). We focus on newly collected datasets and analyze and compare them across several aspects.

2.1 The KTH Dataset

The KTH database covers six actions – walking, jogging, running, boxing, hand waving, and hand clapping – performed several times by 25 subjects in four different scenarios: outdoors, outdoors with scale variation, outdoors with different clothes, and indoors. It contains a total of 2391 sequences. All sequences were taken with a static camera at a 25 fps frame rate and downsampled to a spatial resolution of 160×120 pixels. In the original paper Schuldt et al. (2004), sequences were divided into a training set (eight persons), a validation set (eight persons), and a test set (nine persons). The dataset does not provide background models or extracted silhouettes.

2.2 The Weizmann Dataset

The Weizmann database covers 10 natural actions – running, walking, skipping, jumping-jack, jumping-forward-on-two-legs, jumping-in-place-on-two-legs, galloping-sideways, waving-two-hands, waving-one-hand, and bending – performed by nine subjects Blank et al. (2005). It contains a total of 93 sequences. All sequences were taken with a static camera at a 25 fps frame rate and downsampled to a spatial resolution of 180×144 pixels. The dataset also has ten additional walking sequences captured from viewpoints varying between 0° and 81° relative to the image plane. The masks extracted by background subtraction and the background sequences are provided.

2.3 The IXMAS Dataset

INRIA Xmas Motion Acquisition Sequences (IXMAS) covers 13 daily-life actions – checking watch, crossing arms, scratching head, sitting down, getting up, turning around, walking, waving, punching, kicking, pointing, picking up, and throwing (overhead or from the bottom) – each performed three times by 11 subjects Weinland et al. (2006). It contains a total of 2145 sequences. All sequences were filmed with five calibrated and synchronized FireWire cameras. The dataset provides the extracted silhouettes and also reconstructed visual hulls.

2.4 CMU MoBo Dataset

The CMU Motion of Body (MoBo) dataset covers four different actions – slow walking, fast walking, inclined walking, and walking with a ball – performed by 25 subjects walking on a treadmill in the CMU 3D room Gross and Shi (2001). More than 8000 images are captured per subject. All sequences were taken using six high-resolution color cameras. The sequences are 11 seconds long at a 30 fps frame rate with a resolution of 640×480 pixels. The extracted silhouettes are provided.

2.5 HOHA-1 (Hollywood Human Actions I) Dataset

The database contains video samples covering eight actions – answering phone, getting out of a car, hand shaking, hugging, kissing, sitting down, sitting up, and standing up – from 32 movies Laptev et al. (2008). The two training sets originate from 12 movies with 219 samples; the test set originates from 20 movies not used in training, with 211 samples whose labels were verified manually.

2.6 HOHA-2 (Hollywood Human Actions II) Dataset

This dataset is an extension of the HOHA dataset. The database contains video samples covering 12 actions – answering phone, getting out of a car, hand shaking, hugging, kissing, sitting down, sitting up, standing up, driving car, eating, fighting, and running – and 10 classes of scenes from 69 movies Marszałek et al. (2009). The scene classes are exterior house and road, and interior bedroom, car, hotel, kitchen, living room, office, restaurant, and shop. It contains a total of 3669 samples. The training set originates from 33 movies with 823 samples. The test set originates from 36 movies not used in training, with 884 samples whose labels were verified manually.

2.7 Human Eva Dataset

The HumanEva-I dataset covers four grayscale and three color video sequences from a calibrated and synchronized motion capture system with 3D body poses. The database contains four subjects performing six actions – walking, jogging, gesturing, catching, boxing, and a combination of walking and jogging Sigal and Black (2006). The sequences have a resolution of 640×480 pixels and were captured at 60 Hz.

The HumanEva-II dataset covers an extended sequence combining the walking and jogging actions, performed by two subjects.

2.8 CMU Mocap Dataset

The CMU Mocap dataset has six categories – human interaction, interaction with environment, locomotion, physical activities & sports, situations & scenarios, and test motions – performed by 144 subjects. These six categories are subdivided into 23 subcategories. The actions were captured by 12 Vicon infrared MX-40 cameras, each recording at 120 Hz with 4-megapixel resolution Carnegie Mellon University (2006).

The above datasets and others – UCF Sports action, UCF YouTube action, and i3DPost Multi-View – are summarized in Table 1.

Dataset | Challenges | Year | Best Accuracy Achieved | Category
KTH | Homogeneous backgrounds with a static camera | 2004 | 97.6% [Ziaeefard et al. '10] | General-purpose action recognition
Weizmann | Partial occlusions, non-rigid deformations, significant changes in scale and viewpoint, high irregularities in action performance, low-quality video | 2005 | 100% [Wang et al. '09; Lin et al. '09; Zeng and Ji '10] | General-purpose action recognition
IXMAS | Multi-view dataset for view-invariant human actions | 2006 | 89.4% [Wu et al. '11] | Motion acquisition
CMU MoBo | Human gait | 2001 | 78.07% [Shi et al. '11] | Motion capture
HOHA | Unconstrained videos | 2008 | 56.8% [Gilbert et al. '11] | Movie
HOHA-2 | Comprehensive benchmark for human action recognition | 2009 | 58.3% [Wang et al. '11] | Movie
HumanEva | Synchronized video and ground-truth 3D motion | 2009 | 84.3% [Yoon et al. '10] | Pose estimation and motion tracking
CMU MoCap | 3D marker positions and skeleton movement | 2006 | 100% [Hu et al. '09] | Motion capture
UCF Sports | Wide range of scenes and viewpoints | 2008 | 93.5% [Jones et al. '11] | Sports action
UCF YouTube | Unconstrained videos | 2008 | 84.2% [Wang et al. '11] | Sports action
i3DPost Multi-View | Synchronized, uncompressed HD 8-view image sequences | 2009 | 80% [Holte et al. '11] | Motion acquisition

Table 1: Human action datasets

3 Single-layered Approaches

This section reviews single-layered approaches, which recognize activities directly from the raw video data rather than from primitive sub-actions or sub-activities. Most single-layered approaches therefore deal with simple videos or datasets, such as KTH, to recognize the actions they contain. The image sequences are regarded as being generated by a specific class of actions, so such approaches reduce to how to represent the videos (i.e., extract features) and match them. Single-layered approaches mainly recognize common actions, and these recognized primitive actions can be employed to detect more complex actions through hierarchical combinations, as shown in Section 4.

As shown in a previous survey Aggarwal and Ryoo (2011), various approaches have been proposed for representation and matching in single-layered systems. They can be broadly categorized into two classes: space-time approaches and sequential approaches. The core difference between them is how the temporal dimension (i.e., the third dimension in 3-D XYT space) is treated. Space-time approaches treat time as a regular dimension alongside the spatial dimensions and extract features from the 3-D volumetric video, while sequential approaches consider a human activity as ordered observations along the timeline. Because they take sequential relationships into consideration, sequential approaches generally achieve better results than their space-time counterparts.

In this section, we review the most recent progress in this branch of action recognition and compare it with previously surveyed methods. Space-time approaches are discussed in Section 3.1, and sequential approaches in Section 3.2.

3.1 Advances in Space-Time Approaches

For most action recognition systems (also the scope of this survey), the input is from videos. All videos discussed here consist of a temporal (T) sequence of 2-D spatial (XY) images, or equivalently a set of pixels in 3-D XYT space. Therefore, a video can be represented as a spatial-temporal volume, and this volume contains necessary information for human beings and machines to recognize the actions and activities in the volume. Based on this assumption, various representation and correspondence matching algorithms have been put forward to compactly characterize the underlying motion patterns.

As shown in Fig. 1, we discuss the progress of space-time approaches using the same representation-based taxonomy. Except for methods using the raw volume as a feature, all three representations use motion-related information to characterize the actions or activities.

3.1.1 Action Recognition with Space-Time Volumes

The most intuitive space-time volume approach uses the entire 3-D volume as a feature or template and matches unknown action videos against existing ones to obtain the classification. However, this method suffers from noise and meaningless background information; therefore, effort has been made to model the foreground movement.

Building on the movement templates of Bobick and Davis (2001), various approaches have extended them for action recognition. Hu et al. (2009) proposed combining the motion history image (MHI) with appearance information to better characterize human actions. Two appearance-based features were proposed: the first is the foreground image, obtained by background subtraction; the second is the histogram of oriented gradients (HOG), which characterizes the directions and magnitudes of edges and corners. SMILE-SVM (simulated annealing multiple-instance learning support vector machine) was proposed for classification; it seeks a global optimum via simulated annealing without relying on model initialization, thereby avoiding local minima.
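The MHI itself follows a simple recursive update: pixels where motion is currently detected are set to a maximum duration value τ, and all other pixels decay linearly. A minimal numpy sketch is given below; the thresholded frame differencing used to obtain the motion mask is an illustrative choice (Hu et al. obtain the foreground by background subtraction).

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau):
    """One MHI update step (after Bobick and Davis): pixels where motion
    is detected are set to tau; all others decay by 1, floored at 0."""
    decayed = np.maximum(mhi - 1, 0)
    return np.where(motion_mask, tau, decayed)

def motion_history_image(frames, threshold, tau):
    """Build an MHI from a sequence of grayscale frames.

    Motion is approximated here by thresholded frame differencing;
    recent motion appears bright, older motion fades linearly."""
    mhi = np.zeros_like(frames[0], dtype=float)
    for prev, curr in zip(frames[:-1], frames[1:]):
        mask = np.abs(curr.astype(float) - prev.astype(float)) > threshold
        mhi = update_mhi(mhi, mask, tau)
    return mhi
```

The resulting single image encodes both where and how recently motion occurred, which is what makes it usable as a compact template for matching.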

Qian et al. (2010) combined global and local features to classify and recognize human activities. The global feature was based on the binary motion energy image (MEI); the contour coding of the MEI was used instead of the MEI itself because it overcomes the MEI's limitation that hollows appear where parts of the human blob are undetected. For local features, the object's bounding box was used. The features were classified using multi-class support vector machines. Roh et al. (2010) also extended the MHI of Bobick and Davis (2001) from 2-D to 3-D space, proposing a volume motion template for view-independent human action recognition using stereo videos.

Similarly, motivated by the gait energy image Han and Bhanu (2006), Kim et al. (2010) proposed the accumulated motion image (AMI) to represent the spatiotemporal features of occurring actions. The AMI is the average of image differences. A rank matrix was obtained by ordinal measurement of the AMI pixels. The distance between the rank matrices of a query video and a candidate video was computed using the L1-norm, and the best match, spatially and temporally, was the candidate with the minimum distance.
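The three stages just described (averaged differences, ordinal ranking, L1 matching) can be sketched as follows; this is a minimal reading of the method, not the authors' implementation. A useful property made explicit by the test below is that ranks are invariant to monotonic intensity scaling of the AMI.

```python
import numpy as np

def accumulated_motion_image(frames):
    """AMI: the average of absolute differences between consecutive frames."""
    diffs = [np.abs(b.astype(float) - a.astype(float))
             for a, b in zip(frames[:-1], frames[1:])]
    return np.mean(diffs, axis=0)

def rank_matrix(ami):
    """Ordinal measure: replace each AMI value by its rank (0 = smallest).
    Ranks are invariant to monotonic intensity changes."""
    flat = ami.ravel()
    ranks = np.empty(flat.size, dtype=int)
    ranks[np.argsort(flat, kind="stable")] = np.arange(flat.size)
    return ranks.reshape(ami.shape)

def rank_distance(r1, r2):
    """L1 distance between rank matrices; the best match minimizes this."""
    return int(np.abs(r1 - r2).sum())
```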

Various researchers have incorporated person models such as silhouettes or skeletons for action recognition. Ikizler and Duygulu (2009) proposed a pose descriptor called the histogram of oriented rectangles (HOR). Each human pose in an action sequence is represented by oriented rectangular patches extracted over the human silhouette, which then form spatial oriented histograms representing the distribution of these patches. The local dynamics were captured by summing the HOR within a sliding window. Four matching methods were evaluated for classification: nearest neighbor, global histogramming, SVM, and dynamic time warping.

Fang et al. (2010) first mapped high-dimensional silhouettes to low-dimensional points using locality preserving projection, yielding a spatial motion description assumed to capture the intrinsic motion structure. Three kinds of temporal information – temporal neighbors, motion differences, and motion trajectories – were then applied to the spatial descriptors to obtain feature vectors, which were fed to a k-nearest-neighbor classifier.

Ziaeefard and Ebrahimnezhad (2010) proposed the cumulative skeletonized image (CSI) across time as a feature, and constructed 2-D angular/distance histograms from it. A hierarchical SVM was used for matching: first, a coarse SVM classification of CSI histograms separated dissimilar actions; then a second SVM was applied to confusable actions using features that are salient among similar actions.

Wang and Mori (2009) proposed semilatent topic models (STM) following the bag-of-words framework, where a "word" corresponds to a frame and a "document" corresponds to a video sequence. After stabilizing the persons in a video sequence, optical flow was computed and half-wave rectified into four channels, followed by filtering, to form the motion descriptor from which the codebook was constructed. Built on latent topic models such as LDA Blei et al. (2003) and CTM Blei and Lafferty (2006), STM does not require choosing the number of latent topics, yet gives better training efficiency and recognition accuracy.
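The quantization step common to all such bag-of-words pipelines can be sketched as below: each local descriptor is assigned to its nearest codeword and the video is summarized by the normalized word-count histogram. The codebook is assumed to have been learned already (e.g., by k-means on training descriptors); this sketch is generic rather than specific to STM.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize each local descriptor to its nearest codeword (Euclidean)
    and return the normalized word-count histogram for the video.

    descriptors: (N, D) array of local features; codebook: (K, D)."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                 # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```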

Guo et al. (2009) viewed an action as a temporal sequence of local shape deformations of centroid-centered object silhouettes. Each action was represented by the empirical covariance matrix of a set of 13-dimensional normalized geometric feature vectors capturing the shape of the silhouette tunnel. The similarity of two actions was measured by a Riemannian metric between their covariance matrices. The silhouette tunnel of a test video was broken into short overlapping segments, and each segment was classified using a dictionary of labeled action covariance matrices and the nearest-neighbor rule.
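A sketch of this idea, assuming the 13-dimensional per-frame features are already extracted: compute the empirical covariance, then compare covariances with a Riemannian metric on symmetric positive-definite matrices. The log-Euclidean distance used here is one common such metric; the original work may use a different one (e.g., the affine-invariant metric).

```python
import numpy as np

def covariance_descriptor(features):
    """Empirical covariance of per-frame feature vectors (rows = frames)."""
    return np.cov(features, rowvar=False)

def spd_logm(C):
    """Matrix logarithm of a symmetric positive-definite matrix,
    via eigendecomposition: logm(C) = V diag(log w) V^T."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(C1, C2):
    """Log-Euclidean Riemannian distance between SPD matrices:
    the Frobenius norm of the difference of their matrix logs."""
    return np.linalg.norm(spd_logm(C1) - spd_logm(C2), ord="fro")
```

Nearest-neighbor classification then reduces to picking the dictionary covariance with the smallest such distance to the test segment's covariance.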

Figure 2: An example of computing the shape-motion descriptor of a gesture frame with a dynamic background, from Lin et al. (2009) (©2009 IEEE). (a) Raw optical flow field; (b) compensated optical flow field; (c) combined, part-based appearance likelihood map; (d) motion descriptor computed from the raw optical flow field; (e) motion descriptor computed from the compensated optical flow field; (f) shape descriptor.

Efforts in other directions have also appeared. Kim and Cipolla (2009) extended canonical correlation analysis (CCA) to measure video-to-video similarity; the method acts on video volumes, avoiding the difficult problem of explicit motion estimation, and provides spatiotemporal matching that is robust to intraclass action variations. Liu and Yuen (2010) applied principal component analysis (PCA) to salient action units (SAUs) (i.e., single cycles of a repetitive action in a video), and an AdaBoost classifier was used to classify the action in a query video. Cao et al. (2009) provided a new way to combine different features using a heterogeneous feature machine (HFM).

3.1.2 Action Recognition with Space-Time Trajectories

Trajectory-based approaches build on the observation that tracking joint positions is sufficient for humans to recognize actions Johansson (1975). Trajectories are usually constructed by tracking joints or other interest points on the human body, and various representations and matching algorithms compare the trajectories for action recognition.

Messing and Kautz (2009) extracted feature trajectories by tracking Harris3D interest points with a KLT tracker Lucas and Kanade (1981); the trajectories were represented as sequences of log-polar quantized velocities. A generative mixture model was used to learn a velocity-history language and classify video sequences: a weighted mixture of bags of augmented trajectory sequences was modeled for each action class. The mixture components can be thought of as velocity-history "words", with each velocity-history feature generated by one mixture component and each activity class having a distribution over the components. They further showed how the velocity-history feature can be extended, both with a more sophisticated latent velocity model and by combining it with other useful information, such as appearance, position, and high-level semantic information.

Wang et al. (2011) proposed describing videos by dense trajectories. They sampled dense points in each frame and tracked them using displacement information from a dense optical flow field. Local descriptors – HOG, HOF, and MBH (motion boundary histogram) – were computed around the interest points. This is shown in Fig. 3.

Figure 3: Illustration of the dense trajectory description from Wang et al. (2011) (©2011 IEEE). Left: feature points are sampled densely at multiple spatial scales. Middle: tracking is performed in the corresponding spatial scale over L frames. Right: trajectory descriptors of HOG, HOF, and MBH.
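The tracking step of the dense-trajectory pipeline is conceptually simple: each sampled point is advanced by the flow vector at its current (rounded) position, and the trajectory's normalized displacement vector serves as a shape descriptor. The sketch below assumes the dense flow fields are already computed (e.g., by Farnebäck's method) and omits the median filtering of the flow that Wang et al. apply.

```python
import numpy as np

def track_point(flows, p0):
    """Follow one sampled point through a sequence of dense flow fields:
    p_{t+1} = p_t + w(p_t), where w is the flow at the rounded position.
    Points are (row, col); flow has shape (H, W, 2) with per-pixel
    (d_row, d_col) displacements."""
    traj = [np.asarray(p0, dtype=float)]
    for flow in flows:
        y, x = np.round(traj[-1]).astype(int)
        traj.append(traj[-1] + flow[y, x])
    return np.stack(traj)

def trajectory_shape(traj):
    """Normalized displacement vectors: the trajectory-shape descriptor."""
    disp = np.diff(traj, axis=0)
    norm = np.abs(disp).sum()
    return disp / norm if norm > 0 else disp
```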

3.1.3 Action Recognition with Space-Time Local Features

The use of local features in action recognition was extended from object recognition in images. Local features describe points and their surroundings in the 3-D volumetric data with unique, discriminative characteristics; such points and their descriptors are the most informative and robust. In terms of the density of extracted feature points, local-feature approaches can be divided into two broad categories: sparse and dense. The Harris3D detector Laptev and Lindeberg (2003) and the Dollar detector Dollár et al. (2005) are representative of the former, and optical flow-based methods of the latter; most algorithms derive from them. Other novel methods for finding interest points have also been applied.

Bregonzio et al. (2009) proposed clouds of space-time interest points to overcome the limitations of the Dollar detector Dollár et al. (2005). Using the interest points detected as in Dollár et al. (2005), holistic features were extracted from clouds of interest points accumulated over multiple temporal scales, followed by automatic feature selection. SVMs and nearest-neighbor classifiers were employed for classification; an example of interest-point clouds is shown in Fig. 4. Jones et al. (2012) also based their work on the Dollar detector Dollár et al. (2005) to detect and describe interest points, which were then clustered using k-means; the innovation is a relevance feedback mechanism using ABRS-SVM (asymmetric bagging and random subspace support vector machine).

In Thi et al. (2010), space-time interest points are detected with the Harris3D detector Laptev and Lindeberg (2003) and assigned labels indicating whether they belong to the action class of interest using a Bayesian classifier. The feature vectors of interest-point descriptors and labels are then provided to a PCA-SVM classifier to recognize the action type. In this work, the action is also localized using CRF weighting results.

While 3D Harris corners Laptev and Lindeberg (2003) are widely used, they suffer from sparsity. Gilbert et al. (2009) instead used dense, simple 2D Harris corners Harris and Stephens (1988) at multiple scales to construct features, with a two-stage hierarchical grouping process to classify the features and actions. Sadek et al. (2011) also used a Harris corner detector in each frame and described the local feature points with temporal self-similarities defined on fuzzy log-polar histograms. Together with global features (i.e., changes of gravity centers), the feature vectors were classified with an SVM.
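For reference, the 2D Harris response underlying these detectors is R = det(M) − k·trace(M)², where M is the windowed second-moment matrix of image gradients. The sketch below uses a simple 3×3 box window for illustration; practical detectors use Gaussian weighting, non-maximum suppression, and a threshold on R.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    second-moment matrix of image gradients, smoothed with a 3x3 box
    window. R > 0 indicates corner-like structure, R < 0 edges."""
    Iy, Ix = np.gradient(img.astype(float))  # gradients along rows, cols

    def box(a):  # 3x3 box filter with edge padding
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```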

Optical flow is also commonly used for feature point detection and description Ikizler-Cinbis and Sclaroff (2010); Holte et al. (2011); Oikonomopoulos et al. (2009). Ikizler-Cinbis and Sclaroff (2010) employed optical flow and foreground flow to extract motion features for persons, objects, and scenes, from which shape features were also extracted. All of these feature channels were input to a multiple-instance learning (MIL) framework to find the location of interest in a given video.

Holte et al. (2011) constructed 3D optical flow from eight weighted 2D flow fields to achieve view-invariant action recognition. 3D motion context (3D-MC) and harmonic motion context (HMC) were used to represent the extracted 3D motion vector fields efficiently and in a view-invariant manner. The resulting 3D-MC and HMC descriptors were classified into a set of human actions using normalized correlation, taking into account the varying performance speeds of different actors.

Another optical flow-based work is the B-spline polynomial descriptor of Oikonomopoulos et al. (2009). Spatiotemporal salient points were detected on the optical flow field estimated for an image sequence, and the descriptor was based on the geometrical properties of three-dimensional piecewise polynomials (B-splines) fitted to the spatiotemporal locations of salient points falling within a given spatiotemporal neighborhood. The descriptor is invariant to translation and scaling in space-time.

Figure 4: Examples of clouds of interest points; the clouds at different temporal scales are highlighted in yellow boxes. From Bregonzio et al. (2009) (©2009 IEEE).

Many efforts have sought interest points based on other principles Rapantzikos et al. (2009); Minhas et al. (2010); Yu et al. (2010); Shao et al. (2012); Lui and Beveridge (2011); Zhu et al. (2009); Le et al. (2011). For example, Rapantzikos et al. (2009) proposed a saliency-based interest point detector incorporating intensity, color, and motion. It uses a multi-scale volumetric representation of the video and spatiotemporal operations at the voxel level, selecting interest points as the extrema of the saliency response. Different recognition algorithms were used, such as bag-of-words with nearest neighbor for the KTH dataset and an SVM kernel for the HOHA dataset.

Minhas et al. (2010) proposed new methods to compute spatiotemporal features using the 3D dual-tree discrete wavelet transform (DT-DWT). The 3D DT-DWT was employed to obtain spatiotemporal information (subband vectors of wavelet coefficients) efficiently, and affine SIFT was used for local static features. Using hybrid spatiotemporal and local static features, an extreme learning machine (ELM) classifier reached high accuracy on public datasets.

Yu et al. (2010) introduced a framework based on semantic texton forests (STFs) for real-time action recognition. The FAST detector Rosten and Drummond (2006) was extended to V-FAST for video interest point detection. STFs classify local space-time volumes around interest points to generate a discriminative codebook. Pyramidal spatiotemporal relationship match (PSRM) was used for local appearance and structural information, building a set of 3D relationship histograms by analyzing every pair of feature points.

Zhu et al. (2009) proposed the TISR (temporally integrated spatial response) descriptor, which captures the characteristics of individual actions by extracting dense spatiotemporal descriptors and representing actions as bag-of-words features. With a visual vocabulary of TISR descriptors, the bag-of-words histogram features tolerate spatial and temporal variations.

Le et al. (2011) presented an extension of the independent subspace analysis (ISA) algorithm to learn invariant spatiotemporal features from unlabeled video data in a hierarchical way. Features were first learned on small input patches flattened into vectors, then convolved with a larger region of the input and fed to the layer above; features from both layers were combined as local features for classification. This two-layer stacked convolutional ISA model overcomes ISA's limitation on large inputs and performed well on challenging datasets.

Approach | Category | KTH | WZMN | Other
Hu'09 | Volume | – | – | CMU: 100%
Ikizler'09 | Volume | 90% | 100% | –
Wang'09 | Volume | 91.2% | 100% | –
Guo'09 | Volume | 95.33% | – | –
Kim'09 | Volume | 95.33% | – | Gesture: 82%
Cao'09 | Volume | – | – | CMU: 88.1%
Liu'10 | Volume | 81.5% | 98.3% | –
Ziaeefard'10 | Volume | 97.6% | – | –
Fang'10 | Volume | 90.21% | – | –
Qian'10 | Volume | 88.69% | – | –
Kim'10 | Volume | 96.4% | – | –
Messing'09 | Trajectory | 89% | – | DailyAction: 67%
Wang'11 | Trajectory | 94.2% | – | HOHA2: 58.3%; UCF: 88.2%
Bregonzio'09 | Local | 93.17% | 96.66% | –
Rapantzikos'09 | Local | 88.3% | – | –
Minhas'10 | Local | 94.83% | 99.44% | –
Thi'10 | Local | 93.83% | 98.2% | HOHA: 26.63%; TRECVid: 23.25%
Ikizler-Cinbis'10 | Local | – | – | YouTube: 72.51%
Yu'10 | Local | 95.67% | – | UT-Interaction: 83.33%; UCF: 86.5%
Le'11 | Local | 93.9% | – | HOHA2: 53.3%; YouTube: 75.8%
Jones'12 | Local | 93.2% | – | UCF: 93.5%; HOHA: 48.4%
Sadek'11 | Local | 93.6% | 97.8% | –
Gilbert'09 | Local | 94.5% | – | HOHA: 31.4%; mKTH: 68.8%
Oikonomopoulos'09 | Local | 81% | 92% | Aerobics: 95%
Lui'11 | Local | 97% | – | UCF: 88%

Table 2: Comparison of space-time approaches

3.2 Sequential Approaches

Single-layered sequential approaches differ from space-time approaches in that they are designed to capture the temporal relationships of observations: human actions are represented as sequences of observations, where an observation is generally associated with local or global features extracted from a frame or a set of frames. As in Aggarwal and Ryoo (2011), exemplar-based recognition and state model-based analysis are the two sub-categories of sequential approaches.

3.2.1 Exemplar-based Approaches

As mentioned earlier, sequential approaches define actions as sequences of observations, and how observations are extracted is not limited. Exemplar-based approaches represent a human action with a template sequence of observations or a set of sample observation sequences; their focus is thus on how a new input video can be compared with these templates or samples. In previous work, dynamic time warping (DTW) was widely adopted for exemplar-based human action recognition Darrell and Pentland (1993); Gavrila and Davis (1995); Veeraraghavan and Roy-Chowdhury. In Yacoob and Black (1998), the similarity between an input and an action template is measured by comparing coefficients of the activity basis after principal component analysis (PCA). Dynamic feature changes have also been used to represent an activity as a linear time-invariant (LTI) system Lublinerman et al. (2006).
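DTW aligns two observation sequences nonlinearly in time, so that the same action performed at different speeds still matches well. A minimal sketch of the classic dynamic-programming recurrence (the quadratic-time version; FastDTW, used by Lin et al. below, approximates it in linear time):

```python
import numpy as np

def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping distance between observation sequences a and b.
    D[i, j] is the minimal cumulative cost of aligning a[:i] with b[:j];
    each cell extends the best of the three predecessor alignments."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1, j],      # insertion
                D[i, j - 1],      # deletion
                D[i - 1, j - 1],  # match
            )
    return D[n, m]
```

For exemplar-based recognition, a query sequence is assigned the label of the template with the smallest DTW distance.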

Recently, Lin et al. (2009) represented actions in videos as sequences of prototypes. Each prototype is based on a novel shape-motion feature, and the prototypes are organized in a hierarchical prototype tree constructed by iteratively applying K-means (K=2) clustering. Given an action video, a prototype sequence is estimated for it, and prototype matching is performed with the FastDTW algorithm to increase computational efficiency.
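The exemplar-matching idea can be sketched with a plain DTW distance and nearest-template classification. This is a minimal illustration over hypothetical per-frame feature sequences, not the prototype-tree or FastDTW implementation of Lin et al.:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two observation
    sequences (each row is a per-frame feature vector)."""
    n, m = len(seq_a), len(seq_b)
    # cost[i, j] = minimal cumulative cost aligning seq_a[:i] with seq_b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(query, templates):
    """Assign the query the label of the nearest template sequence."""
    return min(templates, key=lambda t: dtw_distance(query, templates[t]))
```

Because DTW warps the time axis, a temporally compressed or stretched performance of the same action still matches its template closely, which is exactly why it suits exemplar-based recognition.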

3.2.2 State Model-based Approaches

Instead of representing a human action directly as a sequence of observations, state model-based approaches learn a state model for each action, so that each action is represented by a set of hidden states. The model generates sequences of observations, and every observation sequence is associated with an instance of the corresponding action. Standard hidden Markov models (HMMs) have been widely used in state model-based approaches Yamato et al. (1992); Starner et al. (1998); Bobick and Wilson (1997). HMMs have also been extended to coupled hidden semi-Markov models (CHSMMs) to model the duration of human activities Lv and Nevatia (2007); Natarajan and Nevatia (2007).

HMMs and their extensions continue to be applied in human action recognition. In Yu and Aggarwal (2009), a flexible star skeleton is described for posture representation, with the aim of accurately matching human extremities using contours and histograms from an image frame; an HMM is then used to recognize human actions. In Kellokumpu et al. (2009), novel texture descriptors are proposed to describe motion, and an HMM models the temporal development of texture motion histograms. In Shi et al. (2010), a discriminative semi-Markov model is proposed, and a Viterbi-like dynamic programming algorithm is designed to efficiently solve the inference problem of simultaneously segmenting and recognizing different actions. A comparison of sequential approaches is given in Table 3.
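State model-based recognition typically trains one HMM per action and labels a new observation sequence with the model of highest likelihood. A minimal sketch of this decision rule, using the standard scaled forward algorithm over discrete observation symbols (all parameter values below are hypothetical, not from any cited system):

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for start
    probabilities pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate, then emit
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

def recognize(obs, models):
    """One HMM per action; pick the action whose model best explains obs."""
    return max(models, key=lambda a: log_likelihood(obs, *models[a]))
```

Training the per-action parameters (e.g. with Baum-Welch) is omitted; the sketch only shows how the learned state models are used at recognition time.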

Approach | Category | KTH | WZMN | Other
Shi’11 | State-based | 95% | – | CMU: 78%; WBD: 94%
Yu’09 | State-based | – | – | HumanClimbingFences: 97.9%; BalletMovie: 93.6%
Kellokumpu’09 | State-based | 93.8% | 98% | –
Lin’09 | Exemplar | 95.77% | 100% | –

Table 3: Comparison of sequential approaches

4 Hierarchical Approaches

As described in Aggarwal and Ryoo (2011), hierarchical approaches recognize interesting events (high-level activities) based on simpler, low-level sub-activities. In other words, a high-level activity can be decomposed into a sequence of sub-activities; for example, "hand shaking" may be decomposed into two hands being extended, merging into one object, and being withdrawn. Sub-activities can themselves be treated as high-level activities and decomposed further, until atomic ones are reached.

The advantage of hierarchical approaches is their capability to model the complex structure of human activities and their flexibility in handling individual activities, interactions between humans and/or objects, and group activities. Moreover, hierarchical models provide an intuitive and convenient interface for integrating prior knowledge about the structure of activities. Hierarchical approaches are, to some extent, closely related to single-layered approaches: non-hierarchical single-layered approaches can readily serve for low-level or atomic action recognition such as gesture detection, and some can be extended into hierarchical models, such as multi-layered HMMs.

Using the taxonomy proposed in Aggarwal and Ryoo (2011), hierarchical approaches are categorized into three groups: statistical approaches, syntactic approaches, and description-based approaches.

4.1 Statistical Approaches

HMMs can be considered a simple case of dynamic Bayesian networks (DBNs): an HMM represents the state of the world with a single discrete random variable, whereas a DBN represents it with a set of random variables. Multiple levels of hidden states form a representation of hierarchical human activities. Previous research on statistical approaches mainly involves applications of extended HMMs and DBNs: two-layered hierarchical HMMs Oliver et al. (2002); Zhang et al. (2004); Yu and Aggarwal () and dynamic probabilistic networks (DPNs), also known as dynamic Bayesian networks, Gong and Xiang (2003); Dai et al. (2008). Sub-activities can be either concurrent or sequential; HMM-based approaches in the literature handle sequential sub-activities, so a hierarchical approach using a propagation network (P-net) Shi et al. (2004) has been proposed to handle both. Beyond HMMs and DBNs, a four-layered hierarchical probabilistic latent model is proposed in Yin and Meng (2010): first, spatio-temporal features are detected and clustered with a hierarchical Bayesian model to form atomic actions; then, based on LDA, a hierarchical probabilistic latent model recognizes the action without the need to specify the number of latent states. Local spatio-temporal features are used instead of global features such as human gesture, an attempt to use clustered space-time features as atomic actions in hierarchical descriptions and representations of complex actions.
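The layered idea, local features quantized into atomic-action "words" at the lower layer and the activity inferred from word statistics at the upper layer, can be sketched as follows. The naive-Bayes upper layer here is a deliberately simplified stand-in for the latent-topic model of Yin and Meng (2010), and every name and value is illustrative:

```python
import numpy as np

def quantize(features, codebook):
    """Lower layer: map each local spatio-temporal feature to its
    nearest 'atomic action' codeword index."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

def infer_activity(words, word_dists, priors):
    """Upper layer: naive-Bayes scoring of the atomic-word sequence
    against per-activity word distributions."""
    scores = {a: np.log(priors[a]) + np.log(word_dists[a][words]).sum()
              for a in word_dists}
    return max(scores, key=scores.get)
```

In the actual hierarchical latent models, the hand-specified `word_dists` would be replaced by learned latent distributions, but the two-layer flow (features to atomic words to activity) is the same.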

Another statistical approach Han et al. (2010) decomposes the body into a hierarchical structure. A hierarchical manifold space is learned to describe motion patterns, cascaded conditional random fields (CRFs) are used to predict these patterns, and SVMs classify the final human actions based on them. A hierarchical representation of human action is thus proposed, rather than a simple non-hierarchical bag-of-words representation. In Mauthner et al. (2011), a hierarchical K-means tree is likewise used to represent the feature cues.

The problem of insufficient training data is handled in Zeng and Ji (2010) by integrating domain knowledge: first-order logic based domain knowledge is exploited for learning both the structure and the parameters of a dynamic Bayesian network.

4.2 Syntactic Approaches

Syntactic approaches represent actions as strings of symbols, where a symbol corresponds to an atomic sub-activity as discussed in the previous section. Atomic sub-activities can be recognized using any of the previously described hierarchical or non-hierarchical techniques; however, representing actions as symbol strings limits the recognition of concurrent actions. In previous work, context-free grammars (CFGs) have been studied and applied to human action recognition, and several probabilistic extensions, stochastic context-free grammars (SCFGs), are introduced in Ivanov and Bobick (2000); Moore and Essa (2002); Minnen et al. (2003); Joo and Chellappa (2006). Generally, two-layered frameworks are proposed: the lower layer recognizes atomic or low-level actions, and the higher layer uses parsing techniques for high-level activity recognition. Another limitation is that the user must provide a set of production rules; to overcome this, Kitani et al. (2007) introduced an algorithm to automatically learn rules from observations.
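The higher-layer parsing step can be illustrated with a tiny CYK recognizer that checks a string of atomic symbols against a grammar in Chomsky normal form. The "hand shaking" production rules below are invented for illustration, not taken from any cited system:

```python
def cyk_recognize(symbols, grammar, start):
    """CYK recognition: grammar is a list of (lhs, rhs) rules where
    rhs is either a 1-tuple terminal or a 2-tuple of nonterminals."""
    n = len(symbols)
    # table[i][j] holds nonterminals deriving symbols[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, s in enumerate(symbols):                      # length-1 spans
        for lhs, rhs in grammar:
            if rhs == (s,):
                table[i][0].add(lhs)
    for span in range(2, n + 1):                         # longer spans
        for i in range(n - span + 1):
            for k in range(1, span):                     # split point
                for lhs, rhs in grammar:
                    if (len(rhs) == 2 and rhs[0] in table[i][k - 1]
                            and rhs[1] in table[i + k][span - k - 1]):
                        table[i][span - 1].add(lhs)
    return start in table[0][n - 1]
```

An SCFG extension would attach a probability to each rule and keep the most probable derivation per cell instead of a plain set, which is how the probabilistic parsers cited above tolerate noisy lower-layer detections.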

Recent efforts have also been made toward new hierarchical frameworks. In Wang et al. (2010), a four-level hierarchy is proposed in which actions are represented by a set of grammar rules categorized, based on spatio-temporal relations, into three classes: strong, weak, and stochastic relations.

4.3 Description-based Approaches

Description-based approaches differ from statistical and syntactic approaches in their capability to explicitly express the spatio-temporal structure of human activities. Such methods can therefore recognize both sequential and concurrent actions, instead of being limited to sequential ones. Basically, description-based approaches model a human activity as an occurrence of embedded sub-activities that must satisfy specified temporal, spatial, and logical relationships signatory of the high-level activity. Since the introduction of Allen's temporal interval predicates, they have been adopted for description-based human activity recognition of both sequential and concurrent relationships. Context-free grammars have also been utilized in description-based approaches, where a formal syntax is required for representing human activities, as in Nevatia et al. (2004); Ryoo and Aggarwal (2006). A conversion from Allen's interval algebra constraint network to a PNF-network, which describes identical temporal information in a computationally tractable form, is proposed in Pinhanez and Bobick (1997). Bayesian belief networks and Petri nets are introduced in Intille and Bobick (1999) and Ghanem et al. (2004), respectively. Event logic is used by Siskind (2001) to recognize high-level activities. To compensate for failures of low-level components, a consequence of the deterministic character of description-based approaches, several probabilistic extensions of the recognition frameworks are proposed in Aggarwal (2009); Gupta et al. (2009). Markov logic networks (MLNs), a symbolic artificial intelligence technique, have also been adopted to probabilistically infer interesting activities, as in Tran and Davis (2008).
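Allen-style interval predicates, and their use for checking a high-level description against detected sub-activity intervals, can be sketched as follows. This is a deterministic toy without the probabilistic extensions discussed above, and the "hand shaking" constraints are illustrative:

```python
# A few of Allen's thirteen interval relations; intervals are (start, end).
def before(a, b):   return a[1] < b[0]
def meets(a, b):    return a[1] == b[0]
def overlaps(a, b): return a[0] < b[0] < a[1] < b[1]
def during(a, b):   return b[0] < a[0] and a[1] < b[1]

def matches(detections, constraints):
    """An activity description is a list of (predicate, x, y) constraints
    that must all hold between the detected sub-activity intervals."""
    return all(pred(detections[x], detections[y])
               for pred, x, y in constraints)
```

A deterministic checker like this fails whenever one low-level detection is missing or mistimed, which is precisely the brittleness that the probabilistic extensions (and MLN-based inference) are designed to soften.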

Ijsselmuiden and Stiefelhagen (2010) provide a brief framework for high-level human activity recognition that combines different input sources and is based on temporal logic; no probabilistic computation is employed in this work.

Recently, a framework was proposed in Morariu and Davis (2011) to recognize behavior in one-on-one basketball from trajectories obtained by tracking the ball, hands, and feet. The framework uses video analysis and mixed probabilistic and logical inference to annotate events, and it requires semantic descriptions of what generally happens in various scenarios. First-order logic based on Allen's interval logic encodes the spatio-temporal structure knowledge, and an MLN handles the uncertainty in low-level observations.

Although much effort has been expended, as described above, common standard datasets have not been widely adopted, so comparisons between description-based approaches can currently be expressed only in terms of functionality rather than statistics. A comparison of hierarchical approaches is shown in Table 4.

Approach | Category | KTH | WZMN | Other
Yin’10 | Statistical | 82% | – | –
Zeng’10 | Statistical | 92.1% | 100% | –
Han’10 | Statistical | – | – | CMU: 98.27%
Wang’11 | Syntactic | 92.5% | – | HOHA: 37.6%; UCF: 68.3%
Ijsselmuiden’10 | Description-based | – | – | GroupActivities: 74.4%
Morariu’09 | Description-based | – | – | Basketball: 72%

Table 4: Comparison of hierarchical approaches

5 Conclusion

In this letter we provide a survey of advances in automated human action recognition. A large collection of methods is identified; among them, 50 specific and influential proposals from the last three years are reported. The discussion uses the same taxonomy as a previous survey, based on whether an action is recognized directly from the images or from low-level sub-actions. Our goal was to cover the state-of-the-art developments in each category, together with the datasets used in validation.

The literature reviewed shows that much research has been devoted to recognition of human actions directly from the videos or images in a single-layered manner. This is especially true for the case using space-time volume and local features. It is natural to extend 2D image processing methods, such as interest point detection, to 3D videos to extract feature descriptors. Meanwhile, more and more researchers are beginning to explore methods for high-level activity recognition. In this case, most methods surveyed use a hierarchical approach, based on statistical, syntactic, or description-based methods to explain and infer activities from low-level events. Particularly, it is of interest to combine the formal descriptors and probabilistic reasoning to interpret human actions, such as done in Siskind (2001); Nevatia et al. (2004); Ryoo and Aggarwal (2006).

While some research has focused on complex real-world actions, the most popular test datasets still depict simple, constrained, and structured environments. For example, the observed actions in the KTH and Weizmann datasets are simple, and most algorithms recognize them with high accuracy. More realistic datasets such as Hollywood movies and Youtube videos remain challenging, and the accuracies reported for them in the literature surveyed here are low. Building on the results for low-level actions, we hope more research will be done on high-level action recognition in such datasets and in real-world scenes.

We know, however, that a complete review of all approaches is beyond reach. As a popular research topic, human action and activity recognition has attracted much attention and will remain important. As more application fields are explored, domain-specific techniques will probably emerge on one side; on the other, a cross-domain framework would benefit the entire community.

References

  • Aggarwal and Ryoo (2011) Aggarwal, J., Ryoo, M., 2011. Human activity analysis: A survey. ACM Computing Surveys 43, 1–43.
  • Aggarwal (2009) Ryoo, M.S., Aggarwal, J.K., 2009. Semantic representation and recognition of continued and recursive human activities. International Journal of Computer Vision 82, 1–24.
  • Blank et al. (2005) Blank, M., Gorelick, L., Shechtman, E., Irani, M., Basri, R., 2005. Actions as space-time shapes, in: IEEE International Conference on Computer Vision (ICCV), pp. 1395–1402.
  • Blei and Lafferty (2006) Blei, D., Lafferty, J., 2006. Correlated Topic Models. Advances in neural information processing systems 18, 147.
  • Blei et al. (2003) Blei, D.M., Ng, A.Y., Jordan, M.I., 2003. Latent Dirichlet allocation. Journal of Machine Learning Research 3, 993–1022.
  • Bobick and Davis (2001) Bobick, A.F., Davis, J.W., 2001. The recognition of human movement using temporal templates. IEEE Transactions on Pattern Analysis and Machine Intelligence 23, 257–267.
  • Bobick and Wilson (1997) Bobick, A.F., Wilson, A.D., 1997. A state-based approach to the representation and recognition of gesture. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 19, 1325–1337.
  • Bregonzio et al. (2009) Bregonzio, M., Gong, S., Xiang, T., 2009. Recognising action as clouds of space-time interest points, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1948–1955.
  • Candamo et al. (2010) Candamo, J., Shreve, M., Goldgof, D.B., Sapper, D.B., Kasturi, R., 2010. Understanding transit scenes : a survey on human behavior-recognition algorithms. IEEE Transactions on Intelligent Transportation Systems 11, 206–224.
  • Cao et al. (2009) Cao, L., Luo, J., Liang, F., Huang, T.S., 2009. Heterogeneous feature machines for visual recognition, in: International Conference on Computer Vision (ICCV), pp. 1095–1102.
  • Chaudhary et al. (2011) Chaudhary, A., Raheja, J.L., Das, K., Raheja, S., 2011. A survey on hand gesture recognition in context of soft computing, in: Meghanathan, N., Kaushik, B.K., Nagamalai, D. (Eds.), Advanced Computing. Springer, Berlin, pp. 46–55.
  • Dai et al. (2008) Dai, P., Di, H., Dong, L., Tao, L., Xu, G., 2008. Group interaction analysis in dynamic context. IEEE Transactions on Systems, Man, and Cybernetics. Part B 39, 34–42.
  • Darrell and Pentland (1993) Darrell, T., Pentland, A., 1993. Space-time gestures, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 335–340.
  • Dollár et al. (2005) Dollár, P., Rabaud, V., Cottrell, G., Belongie, S., 2005. Behavior recognition via sparse spatio-temporal features, in: IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance.
  • Fang et al. (2010) Fang, C.H., Chen, J.C., Tseng, C.C., Lien, J.J.J., 2010. Human Action Recognition Using Spatio-temporal Classification , 98–109.
  • Gavrila and Davis (1995) Gavrila, D.M., Davis, L.S., 1995. Towards 3-D model-based tracking and recognition of human movement: a multi-view approach, in: International Workshop on Automatic Face and Gesture Recognition, IEEE Computer Society, pp. 272–277.
  • Ghanem et al. (2004) Ghanem, N., Dementhon, D., Doermann, D., Davis, L., 2004. Representation and recognition of events in surveillance video using petri nets, in: Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
  • Gilbert et al. (2009) Gilbert, A., Illingworth, J., Bowden, R., 2009. Fast realistic multi-action recognition using mined dense spatio-temporal features, in: IEEE International Conference on Computer Vision (ICCV), pp. 925–931.
  • Gong and Xiang (2003) Gong, S., Xiang, T., 2003. Recognition of group activities using dynamic probabilistic networks, in: IEEE International Conference on Computer Vision (ICCV), p. 742.
  • Gross and Shi (2001) Gross, R., Shi, J., 2001. The CMU motion of body (MoBo) database. Technical Report CMU-RI-TR-01-18. Robotics Institute. Pittsburgh, PA.
  • Guo et al. (2009) Guo, K., Ishwar, P., Konrad, J., 2009. Action recognition in video by covariance matching of silhouette tunnels, in: The 2009 XXII Brazilian Symposium on Computer Graphics and Image Processing, pp. 299–306.
  • Gupta et al. (2009) Gupta, A., Srinivasan, P., Shi, J., Davis, L., 2009. Understanding videos, constructing plots learning a visually grounded storyline model from annotated videos, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2012 –2019.
  • Han and Bhanu (2006) Han, J., Bhanu, B., 2006. Individual recognition using gait energy image. IEEE Transaction Pattern Analysis and Machine Intelligence 28.
  • Han et al. (2010) Han, L., Wu, X., Liang, W., Hou, G., Jia, Y., 2010. Discriminative human action recognition in the learned hierarchical manifold space. Image and Vision Computing 28, 836–849.
  • Harris and Stephens (1988) Harris, C., Stephens, M., 1988. A combined corner and edge detector, in: Alvey Vision Conference, pp. 189–192.
  • Holte et al. (2011) Holte, M.B., Moeslund, T.B., Nikolaidis, N., Pitas, I., 2011. 3d human action recognition for multi-view camera systems, in: International Conference on 3D Imaging, Modeling, Processing and Transmission, pp. 342–349.
  • Hu et al. (2009) Hu, Y., Cao, L., Lv, F., Yan, S., Gong, Y., Huang, T., 2009. Action detection in complex scenes with spatial and temporal ambiguities, in: IEEE International Conference on Computer Vision (ICCV), pp. 128–135.
  • Ijsselmuiden and Stiefelhagen (2010) Ijsselmuiden, J., Stiefelhagen, R., 2010. Towards high-level human activity recognition through computer vision and temporal logic, in: The 33rd Annual German Conference on Advances in Artificial Intelligence, pp. 426–435.
  • Ikizler and Duygulu (2009) Ikizler, N., Duygulu, P., 2009. Histogram of oriented rectangles: a new pose descriptor for human action recognition. Image and Vision Computing 27, 1515–1526.
  • Ikizler-Cinbis and Sclaroff (2010) Ikizler-Cinbis, N., Sclaroff, S., 2010. Object, scene and actions: combining multiple features for human action recognition, in: European Conference on Computer vision (ECCV): Part I, pp. 494–507.
  • Intille and Bobick (1999) Intille, S.S., Bobick, A.F., 1999. A framework for recognizing multi-agent action from visual evidence, in: AAAI-99, AAAI Press, pp. 518–525.
  • Ivanov and Bobick (2000) Ivanov, Y., Bobick, A., 2000. Recognition of visual activities and interactions by stochastic parsing. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 852 –872.
  • Johansson (1975) Johansson, G., 1975. Visual motion perception. Scientific American 232, 76–88.
  • Jones et al. (2012) Jones, S., Shao, L., Zhang, J., Liu, Y., 2012. Relevance feedback for real-world human action retrieval. Pattern Recognition Letters 33, 446–452.
  • Joo and Chellappa (2006) Joo, S.W., Chellappa, R., 2006. Attribute grammar-based event recognition and anomaly detection, in: IEEE Conference on Computer Vision and Pattern Recognition Workshop, pp. 107–114.
  • Kellokumpu et al. (2009) Kellokumpu, V., Zhao, G., Pietikäinen, M., 2009. Recognition of human actions using texture descriptors. Machine Vision and Applications, 767–780.
  • Kim and Cipolla (2009) Kim, T.K., Cipolla, R., 2009. Canonical correlation analysis of video volume tensors for action categorization and detection. IEEE Transaction on Pattern Analysis and Machine Intelligence (PAMI) 31, 1415–1428.
  • Kim et al. (2010) Kim, W., Lee, J., Kim, M., Oh, D., Kim, C., 2010. Human Action Recognition Using Ordinal Measure of Accumulated Motion. EURASIP Journal on Advances in Signal Processing 2010, 1–12.
  • Kitani et al. (2007) Kitani, K., Sato, Y., Sugimoto, A., 2007. Recovering the basic structure of human activities from a video-based symbol string, in: IEEE Workshop on Motion and Video Computing, p. 9.
  • Laptev and Lindeberg (2003) Laptev, I., Lindeberg, T., 2003. Space-time interest points, in: IEEE International Conference on Computer Vision (ICCV), pp. 432–439.
  • Laptev et al. (2008) Laptev, I., Marszałek, M., Schmid, C., Rozenfeld, B., 2008. Learning realistic human actions from movies, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • Le et al. (2011) Le, Q.V., Zou, W.Y., Yeung, S.Y., Ng, A.Y., 2011. Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3361–3368.
  • Lin et al. (2009) Lin, Z., Jiang, Z., Davis, L.S., 2009. Recognizing actions by shape-motion prototype trees, in: IEEE International Conference on Computer Vision, pp. 444–451.
  • Liu and Yuen (2010) Liu, C., Yuen, P.C., 2010. Human action recognition using boosted EigenActions. Image and Vision Computing 28, 825–835.
  • Lublinerman et al. (2006) Lublinerman, R., Ozay, N., Zarpalas, D., Camps, O., 2006. Activity recognition from silhouettes using linear systems and model invalidation techniques, in: International Conference on Pattern Recognition (ICPR), pp. 347 –350.
  • Lucas and Kanade (1981) Lucas, B.D., Kanade, T., 1981. An iterative image registration technique with an application to stereo vision, in: The 7th International Joint Conference on Artificial Intelligence - Volume 2, pp. 674–679.
  • Lui and Beveridge (2011) Lui, Y.M., Beveridge, J.R., 2011. Tangent bundle for human action recognition, in: IEEE International Conference on Automatic Face and Gesture Recognition, pp. 97–102.
  • Lv and Nevatia (2007) Lv, F., Nevatia, R., 2007. Single view human action recognition using key pose matching and viterbi path searching, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–8.
  • Marszałek et al. (2009) Marszałek, M., Laptev, I., Schmid, C., 2009. Actions in context, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • Mauthner et al. (2011) Mauthner, T., Roth, P.M., Bischof, H., 2011. Temporal feature weighting for prototype-based action recognition, in: The 10th Asian Conference on Computer Vision (ACCV), pp. 566–579.
  • Messing and Kautz (2009) Messing, R., Kautz, H., 2009. Activity recognition using the velocity histories of tracked keypoints, in: IEEE International Conference on Computer Vision (ICCV), pp. 104–111.
  • Minhas et al. (2010) Minhas, R., Baradarani, A., Seifzadeh, S., Jonathan Wu, Q., 2010. Human action recognition using extreme learning machine based on visual vocabularies. Neurocomputing 73, 1906–1917.
  • Minnen et al. (2003) Minnen, D., Essa, I., Starner, T., 2003. Expectation grammars: leveraging high-level expectations for activity recognition, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 626–632.
  • Moore and Essa (2002) Moore, D., Essa, I., 2002. Recognizing multitasked activities from video using stochastic context-free grammar, in: AAAI National Conference on Artificial Intelligence, pp. 770–776.
  • Morariu and Davis (2011) Morariu, V.I., Davis, L.S., 2011. Multi-agent event recognition in structured scenarios., in: CVPR.
  • Natarajan and Nevatia (2007) Natarajan, P., Nevatia, R., 2007. Coupled hidden semi Markov models for activity recognition, in: IEEE Workshop on Motion and Video Computing, pp. 10–17.
  • Nevatia et al. (2004) Nevatia, R., Hobbs, J., Bolles, B., 2004. An Ontology for Video Event Representation, in: IEEE Conference Computer Vision and Pattern Recognition Workshop, p. 119.
  • Oikonomopoulos et al. (2009) Oikonomopoulos, A., Pantic, M., Patras, I., 2009. Sparse b-spline polynomial descriptors for human activity recognition. Image Vision Computing 27, 1814–1825.
  • Oliver et al. (2002) Oliver, N., Horvitz, E., Garg, A., 2002. Layered representations for human activity recognition, in: IEEE International Conference on Multimodal Interfaces, pp. 3–8.
  • Pinhanez and Bobick (1997) Pinhanez, C., Bobick, A., 1997. Human action detection using pnf propagation of temporal constraints, in: Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pp. 898–904.
  • Poppe (2010) Poppe, R., 2010. A survey on vision-based human action recognition. Image and Vision Computing 28, 976–990.
  • Qian et al. (2010) Qian, H., Mao, Y., Xiang, W., Wang, Z., 2010. Recognition of human activities using SVM multi-class classifier. Pattern Recognition Letters 31, 100–111.
  • Rapantzikos et al. (2009) Rapantzikos, K., Avrithis, Y., Kollias, S., 2009. Dense saliency-based spatiotemporal feature points for action recognition, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • Roh et al. (2010) Roh, M.C., Shin, H.K., Lee, S.W., 2010. View-independent human action recognition with Volume Motion Template on single stereo camera. Pattern Recognition Letters 31, 639–647.
  • Rosten and Drummond (2006) Rosten, E., Drummond, T., 2006. Machine learning for high-speed corner detection, in: European Conference on Computer Vision (ECCV), pp. 430–443.
  • Ryoo and Aggarwal (2006) Ryoo, M.S., Aggarwal, J.K., 2006. Recognition of composite human activities through context-free grammar based representation, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1709–1718.
  • Sadek et al. (2011) Sadek, S., Al-Hamadi, A., Michaelis, B., Sayed, U., 2011. An action recognition scheme using fuzzy log-polar histogram and temporal self-similarity. EURASIP Journal of Advanced Signal Processing 2011.
  • Schüldt et al. (2004) Schüldt, C., Laptev, I., Caputo, B., 2004. Recognizing human actions: a local SVM approach, in: International Conference on Pattern Recognition (ICPR).
  • Shao et al. (2012) Shao, L., Ji, L., Liu, Y., Zhang, J., 2012. Human action segmentation and recognition via motion and shape analysis. Pattern Recognition Letters 33, 438–445.
  • Shi et al. (2010) Shi, Q., Cheng, L., Wang, L., Smola, A., 2010. Human Action Segmentation and Recognition Using Discriminative Semi-Markov Models. International Journal of Computer Vision 93, 22–32.
  • Shi et al. (2004) Shi, Y., Huang, Y., Minnen, D., Bobick, A., Essa, I., 2004. Propagation networks for recognition of partially ordered sequential action, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 862–869.
  • Sigal and Black (2006) Sigal, L., Black, M.J., 2006. HumanEva: synchronized video and motion capture dataset for evaluation of articulated human motion. Technical Report CS-06-08. Brown University.
  • Siskind (2001) Siskind, J.M., 2001. Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic. Journal of Artificial Intelligence Research 15, 31–90.
  • Starner et al. (1998) Starner, T., Weaver, J., Pentland, A., 1998. Real-time American sign language recognition using desk and wearable computer based video. IEEE Transactions on Pattern Analysis and Machine Intelligence 20, 1371–1375.
  • Thi et al. (2010) Thi, T.H., Zhang, J., Cheng, L., Wang, L., Satoh, S., 2010. Human action recognition and localization in video using structured learning of local space-time features, in: IEEE International Conference on Advanced Video and Signal Based Surveillance, pp. 204–211.
  • Tran and Davis (2008) Tran, S.D., Davis, L.S., 2008. Event modeling and recognition using markov logic networks, in: The 10th European Conference on Computer Vision: Part II, pp. 610–623.
  • Turaga et al. (2008) Turaga, P., Chellappa, R., Subrahmanian, V.S., Udrea, O., 2008. Machine recognition of human activities: a survey. IEEE Transactions on Circuits and Systems for Video Technology 18, 1473–1488.
  • University (2006) Carnegie Mellon University, 2006. CMU Graphics Lab Motion Capture Database; website: http://mocap.cs.cmu.edu. Technical Report.
  • (79) Veeraraghavan, A., Roy-chowdhury, A.K., . The function space of an activity, in: Proc. Comput. Vis. Pattern Recognit., pp. 959–968.
  • Wang et al. (2011) Wang, H., Kläser, A., Schmid, C., Cheng-Lin, L., 2011. Action recognition by dense trajectories, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, USA. pp. 3169–3176.
  • Wang et al. (2010) Wang, L., Wang, Y., Gao, W., 2010. Mining layered grammar rules for action recognition. International Journal of Computer Vision 93, 162–182.
  • Wang and Mori (2009) Wang, Y., Mori, G., 2009. Human action recognition by semilatent topic models. IEEE Trans. Pattern Anal. Mach. Intell. 31, 1762–1774.
  • Weinland et al. (2006) Weinland, D., Ronfard, R., Boyer, E., 2006. Free viewpoint action recognition using motion history volumes. Computer Vision and Image Understanding 104, 249–257.
  • Weinland et al. (2011) Weinland, D., Ronfard, R., Boyer, E., 2011. A survey of vision-based methods for action representation, segmentation and recognition. Computer Vision and Image Understanding 115, 224–241.
  • Yacoob and Black (1998) Yacoob, Y., Black, M.J., 1998. Parameterized modeling and recognition of activities, in: IEEE International Conference on Computer Vision (ICCV), pp. 120–127.
  • Yamato et al. (1992) Yamato, J., Ohya, J., Ishii, K., 1992. Recognizing human action in time-sequential images using hidden Markov model, in: Computer Vision and Pattern Recognition, 1992. Proceedings CVPR ’92., 1992 IEEE Computer Society Conference on, pp. 379–385.
  • Yin and Meng (2010) Yin, J., Meng, Y., 2010. Human activity recognition in video using a hierarchical probabilistic latent model, in: 2010 IEEE Conference on Computer Vision and Pattern Recognition - Workshops, pp. 15–20.
  • (88) Yu, E., Aggarwal, J.K., . Detection of fence climbing from monocular video, in: The 18th International Conference on Pattern Recognition (ICPR).
  • Yu and Aggarwal (2009) Yu, E., Aggarwal, J.K., 2009. Human Action Recognition with Extremities as Semantic Posture Representation. Vision Research, 1–8.
  • Yu et al. (2010) Yu, T.H., Kim, T.K., Cipolla, R., 2010. Real-time action recognition by spatiotemporal semantic and structural forest, in: Proceedings of the British Machine Vision Conference (BMVC), pp. 52.1–52.12.
  • Zeng and Ji (2010) Zeng, Z., Ji, Q., 2010. Knowledge based activity recognition with dynamic bayesian network, in: The 11th European conference on Computer vision (ECCV), pp. 532–546.
  • Zhang et al. (2004) Zhang, D., Gatica-perez, D., Bengio, S., Mccowan, I., Lathoud, G., 2004. Modeling individual and group actions in meetings: a two-layer hmm framework, in: IEEE Workshop on Event Mining in Video (CVPR EVENT).
  • Zhu et al. (2009) Zhu, G., Yang, M., Yu, K., Xu, W., Gong, Y., 2009. Detecting video events based on action recognition in complex scenes using spatio-temporal descriptor, in: 17th ACM international Conference on Multimedia, pp. 165–174.
  • Ziaeefard and Ebrahimnezhad (2010) Ziaeefard, M., Ebrahimnezhad, H., 2010. Hierarchical human action recognition by normalized-polar histogram, in: International Conference on Pattern Recognition (ICPR), pp. 3720–3723.