Monocular Total Capture: Posing Face, Body, and Hands in the Wild


Donglai Xiang   Hanbyul Joo   Yaser Sheikh
Carnegie Mellon University

We present the first method to capture the 3D total motion of a target person from a monocular view input. Given an image or a monocular video, our method reconstructs the motion from body, face, and fingers represented by a 3D deformable mesh model. We use an efficient representation called 3D Part Orientation Fields (POFs), to encode the 3D orientations of all body parts in the common 2D image space. POFs are predicted by a Fully Convolutional Network (FCN), along with the joint confidence maps. To train our network, we collect a new 3D human motion dataset capturing diverse total body motion of 40 subjects in a multiview system. We leverage a 3D deformable human model to reconstruct total body pose from the CNN outputs by exploiting the pose and shape prior in the model. We also present a texture-based tracking method to obtain temporally coherent motion capture output. We perform thorough quantitative evaluations including comparison with the existing body-specific and hand-specific methods, and performance analysis on camera viewpoint and human pose changes. Finally, we demonstrate the results of our total body motion capture on various challenging in-the-wild videos. Our code and newly collected human motion dataset will be publicly shared.

1 Introduction

Human motion capture is essential for many applications including visual effects, robotics, sports analytics, medical applications, and human social behavior understanding. However, capturing 3D human motion is often costly, requiring a special motion capture system with multiple cameras. For example, the most widely used system [2] needs multiple calibrated cameras with reflective markers carefully attached to the subjects’ body. The actively-studied markerless approaches are also based on multi-view systems [18, 26, 16, 22, 23] or depth cameras [46, 7]. For this reason, the amount of available 3D motion data is extremely limited. Capturing 3D human motion from single images or videos can provide a huge breakthrough for many applications by increasing the accessibility of 3D human motion data, especially by converting all human-activity videos on the Internet into a large-scale 3D human motion corpus.

Reconstructing 3D human pose or motion from a monocular image or video, however, is extremely challenging due to the fundamental depth ambiguity. Interestingly, humans are able to almost effortlessly reason about the 3D human body motion from a single view, presumably by leveraging strong prior knowledge about feasible 3D human motions. Inspired by this, several learning-based approaches have been proposed over the last few years to predict 3D human body motion (pose) from a monocular video (image) [53, 41, 4, 54, 9, 32, 30, 64, 24, 33] using available 2D and 3D human pose datasets [5, 25, 1, 19, 22]. Recently, similar approaches have been introduced to predict 3D hand poses from a monocular view [65, 34, 11]. However, fundamental difficulty still remains due to the lack of available in-the-wild 3D body or hand datasets that provide paired images and 3D pose data; thus most of the previous methods only demonstrate results in controlled lab environments. Importantly, there exists no method that can reconstruct motion from all body parts including body, hands, and face altogether in a single view, although this is important to fully understand human behavior.

In this paper, we aim to reconstruct the 3D total motions [23] of a human using monocular imagery captured in the wild. This ambitious goal requires solving challenging 3D pose estimation problems for different body parts altogether, which are often considered as separate research domains. Notably, we apply our method to in-the-wild situations (e.g., videos from YouTube), which has rarely been demonstrated in previous work. We use a 3D representation named Part Orientation Fields (POFs) to efficiently encode the 3D orientation of a body part in the 2D space. A POF is defined for each body part that connects adjacent joints in the torso, limbs, and fingers, and represents the relative 3D orientation of the rigid part regardless of the origin of 3D Cartesian coordinates. POFs are efficiently predicted by a Fully Convolutional Network (FCN), along with 2D joint confidence maps [55, 60, 13]. To train our networks, we collect a new 3D human motion dataset containing diverse body, hand, and face motions from 40 subjects. Separate CNNs are adopted for body, hand and face, and their outputs are consolidated together in a unified optimization framework. We leverage a 3D deformable model that is built for total capture [22] in order to exploit the shape and motion prior embedded in the model. In our optimization framework, we fit the model to the CNN measurements at each frame to simultaneously estimate the 3D motion of body, face, fingers, and feet. Our mesh output also enables us to additionally refine our motion capture results for better temporal coherency by optimizing the photometric consistency in the texture space.

This paper presents the first approach to monocular total motion capture in various challenging in-the-wild scenarios (e.g., our teaser figure). We demonstrate that our single framework achieves results comparable to existing state-of-the-art 3D body or hand pose estimation methods on public benchmarks. Notably, our method is applied to various in-the-wild videos, which has rarely been demonstrated in either the 3D body or the hand pose estimation literature. We also conduct thorough experiments on our newly collected dataset to quantitatively evaluate the performance of our method with respect to viewpoint and body pose changes. The major contributions of our paper are summarized as follows:

  • We present the first method to produce 3D total motion capture results from a monocular image or a video in various challenging in-the-wild scenarios.

  • We introduce an optimization framework to fit a deformable human model on 3D POFs and 2D keypoint measurements for total body pose estimation, and show comparable results to the state-of-the-art methods in both 3D body and 3D hand estimation benchmarks.

  • We present a method to enforce photometric consistency across time to reduce motion jitters.

  • We capture a new 3D human motion dataset with 40 subjects to provide training and evaluation data for monocular total motion capture.

2 Related Work

Single Image 2D Human Pose Estimation: Over the last few years, great progress has been made in detecting 2D human body keypoints from a single image [56, 55, 10, 60, 35, 13] by leveraging large-scale manually annotated datasets [25, 5] with deep Convolutional Neural Network (CNN) frameworks. In particular, the major breakthrough has been driven by fully convolutional architectures that produce a confidence score for each joint in a heatmap representation [55, 60, 35, 13], which is known to be more efficient than directly regressing the joint locations with fully connected layers [56]. A recent work [13] similarly learns the connectivity between pairs of adjacent joints, called Part Affinity Fields (PAFs), in the form of 2D heatmaps, to assemble 2D keypoints of different individuals in the multi-person 2D pose estimation problem.

Figure 1: An overview of our method. Our method is composed of CNN part, mesh fitting part, and mesh tracking part.

Single Image 3D Human Pose Estimation: Early work [41, 4] models the 3D human pose space as an over-complete dictionary learned from a 3D human motion database [1]. More recent approaches rely on deep neural networks, which are roughly divided into two directions: two-stage methods and direct estimation. The two-stage methods take 2D keypoint estimation as input and focus on lifting 2D human poses to 3D independently without input image [9, 14, 30, 33, 36, 17]. These methods ignore rich information in images that encodes 3D information, such as shading and appearance, and also suffer from sensitivity to 2D localization error. Direct estimation methods predict 3D human pose directly from images, in the form of direct coordinate regression [42, 51, 52], voxel prediction [39, 29, 58] or depth map prediction [64]. Similar to ours, a recent work uses 3D orientation fields [28] as an intermediate representation for the 3D body pose. However, these models are usually trained on MoCap datasets, with limited ability to generalize to in-the-wild scenarios.

Due to the above limitations, some methods have been proposed to integrate prior knowledge about human pose for better in-the-wild performance. Some work [38, 44, 59] proposes to use ordinal depth as additional supervision for CNN training. Additional loss functions are introduced in [64, 15] to enforce constraints on predicted bone length and joint angles. Some work [24, 61] uses Generative Adversarial Networks (GAN) to exploit human pose prior in data-driven approaches.

Monocular Hand Pose Estimation: Hand keypoint estimation is often considered a research domain independent of body pose estimation. Most previous work uses depth images as input [37, 50, 45, 48, 57, 62], while RGB-based methods have been introduced recently for 2D keypoint estimation [47] and 3D pose estimation [65, 11, 20].

3D Deformable Human Models: 3D deformable models are commonly used for markerless body [6, 27, 40] and face motion capture [8, 12] to restrict the reconstruction output to the parametric shape and motion spaces defined by the models. Although the outputs are limited by the expressive power of the models (e.g., some body models cannot express clothing and some face models cannot express wrinkles), they greatly simplify the 3D motion capture problem. We can fit the models based on available measurements by optimizing cost functions with respect to the model parameters. Recently, a generative 3D model that can express body and hands was introduced by Romero et al. [43]; the Adam model was introduced by Joo et al. [23] to enable total body motion capture (face, body and hands), which we adopt for monocular total capture.

3 Method Overview

Our method takes as input a sequence of images capturing the motion of a single person from a monocular RGB camera, and outputs the 3D total body motion capture (including the motion of the body, face, hands, and feet) of the target person in the form of a deformable 3D human model [27, 23] for each frame. Given a $T$-frame video sequence, our method produces the parameters of the 3D human body model [23] for each frame, including body motion parameters $\theta$, facial expression parameters $\sigma$, and global translation parameters $\mathbf{t}$. The body motion parameters include hand and foot motions, as well as the global rotation of the body. Our method also estimates shape coefficients $\phi$ shared among all frames in the sequence, while $\theta$, $\sigma$, and $\mathbf{t}$ are estimated for each frame respectively. The output parameters are defined by the 3D deformable human model Adam [23]. Note that our method can also be applied to capture only a subset of the total motion (e.g., body motion only with the SMPL model [27], or hand motion only with the separate hand model of Frankenstein in [23]). We denote the set of all parameters by $\Psi = \{\theta, \sigma, \mathbf{t}, \phi\}$, and denote the result for the $t$-th frame by $\Psi^t$.

Our method is divided into 3 stages, as shown in Fig. 1. In the first stage, each image is fed into a Convolutional Neural Network (CNN) to obtain the joint confidence maps and the 3D orientation information of body parts, which we call the 3D Part Orientation Fields (POFs). In the second stage, we perform total body motion capture by fitting a deformable human mesh model [23] to the image measurements produced by the CNNs. We utilize the prior information embedded in the human body model for better robustness against the noise in the CNN outputs. This stage produces the 3D pose for each frame independently, represented by the parameters $\Psi^t_0$ of the deformable model. In the third stage, we additionally enforce temporal consistency across frames to reduce motion jitter. We define a cost function to ensure photometric consistency in the texture domain of the mesh model, based on the initial fitting outputs of the second stage. This stage produces refined model parameters $\Psi^t$. We demonstrate that this temporal refinement is crucial to obtain realistic body motion capture output.

4 Predicting 3D Part Orientation Fields

Figure 2: An illustration of a Part Orientation Field. The orientation of body part $P_{(m,n)}$ is a unit vector $\hat{P}_{(m,n)}$ from $J_m$ to $J_n$. In POFs, all pixels belonging to this part are assigned the value of this vector in the $x$, $y$, and $z$ channels.

The 3D Part Orientation Field (POF) encodes the 3D orientation of a body part of an articulated structure (e.g., limbs, torso, and fingers) in the 2D image space. The same representation is used in a very recent work [28], and we describe the details and notations used in our total motion capture framework. We pre-define a human skeleton hierarchy $\mathcal{P}$ in the form of a set of '(parent, child)' pairs (see the appendix for our body and hand skeleton definition). A rigid body part connecting a 3D parent joint $J_m$ and a child joint $J_n$, with $(m, n) \in \mathcal{P}$, is denoted by $P_{(m,n)}$, where $J_m, J_n \in \mathbb{R}^3$ are defined in the camera coordinate system. Its 3D orientation is represented by a unit vector $\hat{P}_{(m,n)}$ from $J_m$ to $J_n$ in $\mathbb{R}^3$:

$$\hat{P}_{(m,n)} = \frac{J_n - J_m}{\lVert J_n - J_m \rVert}. \quad (1)$$

For a specific body part $P_{(m,n)}$, we define a Part Orientation Field $L_{(m,n)}$ to represent its 3D orientation as a 3-channel heatmap (for the $x$, $y$, and $z$ coordinates respectively) of size $W \times H$ in the image space, where $W$ and $H$ are the width and height of the image. The value of the heatmap at a pixel $\mathbf{x}$ in the POF is defined as

$$L_{(m,n)}(\mathbf{x}) = \begin{cases} \hat{P}_{(m,n)} & \text{if } \mathbf{x} \in P_{(m,n)}, \\ \mathbf{0} & \text{otherwise}. \end{cases} \quad (2)$$

Note that the POF values are defined only for the pixel region belonging to the current target part; we follow [13] in defining the pixels belonging to a part as a rectangular region around the 2D segment between the joints (please refer to [13] for details). An example POF of a body part (right lower arm) is shown in Fig. 2.
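The POF rasterization described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: `make_pof` is a hypothetical helper, and the region test (all pixels within a fixed half-width of the 2D bone segment) is a simplification of the rectangular-region scheme borrowed from [13].

```python
import numpy as np

def make_pof(joint_parent, joint_child, px_parent, px_child,
             height, width, part_halfwidth=4.0):
    """Rasterize a single 3D Part Orientation Field (POF).

    joint_parent / joint_child: 3D joint positions in camera coordinates.
    px_parent / px_child: their 2D projections in pixel coordinates.
    Pixels within `part_halfwidth` of the 2D bone segment receive the
    part's unit 3D orientation vector; all other pixels stay zero.
    """
    orient = np.asarray(joint_child, float) - np.asarray(joint_parent, float)
    orient /= np.linalg.norm(orient)          # unit 3D direction (x, y, z)

    pof = np.zeros((height, width, 3), np.float32)
    a = np.asarray(px_parent, float)
    b = np.asarray(px_child, float)
    ab = b - a
    denom = max(ab @ ab, 1e-8)

    ys, xs = np.mgrid[0:height, 0:width]
    pix = np.stack([xs, ys], axis=-1).astype(float)   # (H, W, 2) as (x, y)
    # Distance from each pixel to the 2D segment a-b.
    t = np.clip(((pix - a) @ ab) / denom, 0.0, 1.0)
    closest = a + t[..., None] * ab
    dist = np.linalg.norm(pix - closest, axis=-1)
    pof[dist <= part_halfwidth] = orient
    return pof
```

In the real pipeline one such 3-channel map is produced per part in the skeleton hierarchy and stacked as the network's regression target.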

Implementation Details: We train a CNN to predict joint confidence maps $\mathbf{S}$ and Part Orientation Fields $\mathbf{L}$. The input image is cropped around the target person, with the bounding box given by OpenPose [13, 47] during testing. We follow [13] for the CNN architecture with minimal change. We use 3 channels to estimate the POF, instead of the 2 channels in [13], for every body part in $\mathcal{P}$. An $L_2$ loss is applied to the network predictions of $\mathbf{S}$ and $\mathbf{L}$. We also train our network on images with only 2D pose annotations (e.g., COCO). In this situation we only supervise the network with the $L_2$ loss on $\mathbf{S}$. Two networks are trained for body and hands separately.

5 Model-Based 3D Pose Estimation

Ideally the joint confidence maps and POFs produced by the CNN provide sufficient information to reconstruct a 3D skeletal structure up to scale [28]. In practice, the $\mathbf{S}$ and $\mathbf{L}$ can be noisy, so we exploit a 3D deformable mesh model to more robustly estimate 3D human pose with the shape and pose priors embedded in the model. In this section, we first describe our mesh fitting process for the body, and then extend it to hand pose and facial expression for total body motion capture. We use superscripts $B$, $LH$, $RH$, $T$, and $F$ to denote functions and parameters for the body, left hand, right hand, toes, and face respectively. We use Adam [23], which encompasses the expressive power for body, hands and facial expression in a single model. Other human models (e.g., SMPL [27]) can also be used if the goal is to reconstruct only part of the total body motion.

5.1 Deformable Mesh Model Fitting with POFs

Given the 2D joint confidence maps $\mathbf{S}$ predicted by our CNN for the body, we obtain 2D keypoint locations $\hat{\mathbf{j}}_m$ by taking the maximum in each channel of $\mathbf{S}$. Given the $\hat{\mathbf{j}}_m$ and the other CNN output, the POFs $\mathbf{L}$, we compute the 3D orientation $\hat{P}_{(m,n)}$ of each bone $(m,n)$ by averaging the values of $L_{(m,n)}$ along the 2D segment from $\hat{\mathbf{j}}_m$ to $\hat{\mathbf{j}}_n$, as in [13]. We obtain a set of mesh parameters $\theta$, $\phi$, and $\mathbf{t}$ that agree with these image measurements by minimizing the following objective function:

$$\mathcal{F}^{B}(\theta, \phi, \mathbf{t}) = \mathcal{F}^{B}_{\text{2D}} + \mathcal{F}^{B}_{\text{POF}} + \mathcal{F}^{B}_{\text{prior}}, \quad (3)$$

where $\mathcal{F}^{B}_{\text{2D}}$, $\mathcal{F}^{B}_{\text{POF}}$, and $\mathcal{F}^{B}_{\text{prior}}$ are different constraints as defined below. The 2D keypoint constraint penalizes the discrepancy between the network-predicted 2D keypoints and the projections of the joints of the human body model:

$$\mathcal{F}^{B}_{\text{2D}} = \sum_m \left\lVert \hat{\mathbf{j}}_m - \Pi\!\left(\tilde{J}_m\right) \right\rVert^2, \quad (4)$$

where $\tilde{J}_m$ is the $m$-th joint of the human model and $\Pi$ is the projection function from 3D space to the image, for which we assume a weak-perspective camera model. The POF constraint penalizes the difference between the POF prediction and the direction of the body part in the mesh model, defined as:

$$\mathcal{F}^{B}_{\text{POF}} = w_{\text{POF}} \sum_{(m,n) \in \mathcal{P}} \left(1 - \hat{P}_{(m,n)} \cdot \tilde{P}_{(m,n)}\right), \quad (5)$$

where $\tilde{P}_{(m,n)}$ is the unit directional vector of the bone $(m,n)$ in the human mesh model, $w_{\text{POF}}$ is a balancing weight for this term, and $\cdot$ is the inner product between 3-vectors. The prior term is needed to restrict our output to a feasible human pose distribution (especially for rotation around bones), defined as:

$$\mathcal{F}^{B}_{\text{prior}} = w_{\text{prior}} \left\lVert A_{\theta}\left(\theta - \mu_{\theta}\right) \right\rVert^2, \quad (6)$$

where $A_{\theta}$ and $\mu_{\theta}$ are a pose prior learned from the CMU Mocap dataset [1], and $w_{\text{prior}}$ is a balancing weight. We use the Levenberg-Marquardt algorithm [3] to optimize Equation 3. The mesh fitting process is illustrated in Fig. 3.

Figure 3: Human model fitting on estimated POFs and joint confidence maps. We extract 2D joint locations $\hat{\mathbf{j}}_m$ from the 2D joint confidence maps (left) and then body part orientations $\hat{P}_{(m,n)}$ from the POFs (right). Then we optimize a cost function (Equation 3) that minimizes the distance between $\hat{\mathbf{j}}_m$ and $\Pi(\tilde{J}_m)$ and the angle between $\hat{P}_{(m,n)}$ and $\tilde{P}_{(m,n)}$.
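The POF readout step, averaging the field values along the 2D segment between the detected parent and child keypoints and renormalizing, can be sketched as follows. Nearest-neighbor sampling and the fixed sample count are simplifying assumptions for illustration.

```python
import numpy as np

def bone_orientation_from_pof(pof, px_parent, px_child, num_samples=20):
    """Estimate a bone's unit 3D orientation from a single POF.

    pof: (H, W, 3) orientation field for this bone.
    px_parent / px_child: detected 2D keypoints (x, y) in pixels.
    Samples the field along the 2D segment between the keypoints,
    averages the sampled 3-vectors, and renormalizes.
    """
    a = np.asarray(px_parent, float)
    b = np.asarray(px_child, float)
    samples = []
    for t in np.linspace(0.0, 1.0, num_samples):
        x, y = np.round(a + t * (b - a)).astype(int)
        samples.append(pof[y, x])         # POF stores (x, y, z) per pixel
    mean = np.mean(samples, axis=0)
    norm = np.linalg.norm(mean)
    return mean / norm if norm > 1e-8 else mean
```

Averaging over the whole segment makes the readout robust to per-pixel noise in the network output, at the cost of assuming the 2D keypoints are roughly correct.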

5.2 Total Body Capture with Hands, Feet and Face

Given the outputs $\mathbf{S}$ and $\mathbf{L}$ of the hand network for both hands, we can additionally fit the Adam model to satisfy the hand pose using similar optimization objectives:

$$\mathcal{F}^{LH}(\theta, \phi, \mathbf{t}) = \mathcal{F}^{LH}_{\text{2D}} + \mathcal{F}^{LH}_{\text{POF}} + \mathcal{F}^{LH}_{\text{prior}}. \quad (7)$$

$\mathcal{F}^{LH}$ is the objective function for the left hand, and each term is defined similarly to Equations 4, 5 and 6. The hand pose priors are learned from the MANO dataset [43]. The objective function $\mathcal{F}^{RH}$ for the right hand is similarly defined.

Once we fit the body and hand parts of the deformable model to the CNN outputs, the projection of the 3D model on the image is already well aligned to the target person. Then we can reconstruct the other body parts by simply adding more 2D joint constraints using additional 2D keypoint measurements. In particular, we include 2D face and foot keypoints from the OpenPose detector. The additional cost function for toes is defined as:

$$\mathcal{F}^{T}_{\text{2D}} = \sum_m \left\lVert \hat{\mathbf{j}}^{T}_m - \Pi\!\left(\tilde{J}^{T}_m\right) \right\rVert^2, \quad (8)$$

where $\hat{\mathbf{j}}^{T}_m$ are the 2D tiptoe keypoints on both feet from OpenPose, and $\tilde{J}^{T}_m$ are the corresponding 3D joint locations of the mesh model in use. Similarly for the face we define:

$$\mathcal{F}^{F}_{\text{2D}} = \sum_m \left\lVert \hat{\mathbf{j}}^{F}_m - \Pi\!\left(\tilde{J}^{F}_m\right) \right\rVert^2. \quad (9)$$

Note that the facial keypoints are determined by all the mesh parameters together. In addition, we also apply regularization to the shape coefficients $\phi$ and facial expression coefficients $\sigma$:

$$\mathcal{F}_{\text{reg}} = w_{\phi} \lVert \phi \rVert^2 + w_{\sigma} \lVert \sigma \rVert^2. \quad (10)$$
Putting everything together, the final optimization objective is

$$\mathcal{F}_{\text{total}} = \mathcal{F}^{B} + \mathcal{F}^{LH} + \mathcal{F}^{RH} + \mathcal{F}^{T}_{\text{2D}} + \mathcal{F}^{F}_{\text{2D}} + \mathcal{F}_{\text{reg}}, \quad (11)$$

where the balancing weights for all the terms are omitted for clarity. We optimize this final objective function in multiple stages to avoid local minima. We first fit the torso, then add the limbs, and finally optimize the full objective function including all constraints. This stage produces 3D total body motion capture results for each frame independently, in the form of Adam model parameters $\Psi^t_0$.
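To make the structure of the fitting concrete, the toy sketch below optimizes a global 3D rotation plus a weak-perspective scale and 2D translation of a fixed template skeleton against 2D keypoints and POF-derived bone directions. The real method instead optimizes the Adam pose, shape, and translation parameters with a Levenberg-Marquardt solver (Ceres), so the template, parameterization, and weights here are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def weak_perspective(X, scale, trans2d):
    """Weak-perspective projection: uniform scale of (x, y) plus a 2D shift."""
    return scale * X[:, :2] + trans2d

def fit_skeleton(joints3d_template, keypoints2d, pofs, bones,
                 w_pof=1.0, w_prior=1e-3):
    """Fit [axis-angle rotation (3), scale (1), translation (2)] so that the
    rotated template matches 2D keypoints and measured bone orientations."""
    def rodrigues(w):
        # Axis-angle to rotation matrix (Rodrigues' formula).
        th = np.linalg.norm(w)
        if th < 1e-8:
            return np.eye(3)
        k = w / th
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

    def residuals(p):
        w, scale, t2 = p[:3], p[3], p[4:6]
        J = joints3d_template @ rodrigues(w).T
        # 2D keypoint term: projection vs. detected keypoints (Eq. 4 analogue).
        r2d = (weak_perspective(J, scale, t2) - keypoints2d).ravel()
        # POF term: 1 - cos(angle) between model bone and measured direction.
        rpof = []
        for (m, n), phat in zip(bones, pofs):
            d = J[n] - J[m]
            d = d / np.linalg.norm(d)
            rpof.append(w_pof * (1.0 - d @ phat))
        # Stand-in pose prior: pull the rotation toward identity.
        rprior = w_prior * w
        return np.concatenate([r2d, np.array(rpof), rprior])

    p0 = np.zeros(6)
    p0[3] = 1.0                       # start from unit scale
    return least_squares(residuals, p0).x
```

As in the paper, the 2D term anchors the projection while the POF term resolves the out-of-plane orientation that 2D keypoints alone cannot.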

6 Enforcing Photo-Consistency in Textures

In the previous stages, we perform per-frame processing, which is vulnerable to motion jitter. We propose to reduce the jitter using pixel-level image cues given the initial model fitting results. The core idea is to enforce photometric consistency in the textures of the model, extracted by projecting the fitted mesh models onto the input images. Ideally, the textures should be consistent across frames, but in practice there exist discrepancies due to motion jitter. To efficiently implement this constraint in our optimization framework, we compute optical flows from the projected texture to the target input image. The destination of each flow indicates the expected location of the vertex projection. To describe our method, we define a function $\Gamma$ which extracts a texture given an image and a mesh structure:

$$\mathcal{T}^t = \Gamma\!\left(\mathbf{I}^t, M(\Psi^t)\right). \quad (12)$$

Given the input image $\mathbf{I}^t$ of the $t$-th frame and the mesh $M(\Psi^t)$ determined by $\Psi^t$, the function $\Gamma$ extracts a texture map $\mathcal{T}^t$ by projecting the mesh structure for the $t$-th frame onto the image for the visible parts. We ideally expect the texture $\mathcal{T}^{t+1}$ for the $(t{+}1)$-th frame to be the same as $\mathcal{T}^t$. Instead of directly using this constraint for optimization, we use optical flow to compute the discrepancy between these textures for easier optimization. More specifically, we pre-compute the optical flow between the raw image $\mathbf{I}^{t+1}$ and the rendering of the mesh model at the $(t{+}1)$-th frame with the $t$-th frame's texture map $\mathcal{T}^t$, which we call the 'synthetic image':

$$f = \Phi\!\left(R\!\left(M(\Psi^{t+1}_0), \mathcal{T}^t\right), \mathbf{I}^{t+1}\right), \quad (13)$$

where $M(\Psi^{t+1}_0)$ is the mesh for the $(t{+}1)$-th frame from the previous stage, and $R$ is a rendering function that renders a mesh with a texture to an image. The function $\Phi$ computes optical flows from the synthetic image to the input image $\mathbf{I}^{t+1}$. The output flow $f$ maps a 2D location to a new location following the optical flow result. Intuitively, the computed flow mapping drives the projections of the 3D mesh vertices toward directions of better photometric consistency in textures across frames. Based on this flow mapping, we define the texture consistency term:

$$\mathcal{F}_{\text{texture}} = \sum_v \left\lVert \Pi\!\left(\tilde{V}_v\right) - f\!\left(\Pi\!\left(\tilde{V}^0_v\right)\right) \right\rVert^2, \quad (14)$$

where $\Pi(\tilde{V}_v)$ is the projection of the $v$-th mesh vertex as a function of the model parameters under optimization. $f(\Pi(\tilde{V}^0_v))$ is the destination of each optical flow, where $\Pi(\tilde{V}^0_v)$ is the projection of the $v$-th vertex of the mesh $M(\Psi^{t+1}_0)$. Note that $f(\Pi(\tilde{V}^0_v))$ is pre-computed and constant during the optimization. This constraint is defined in image space, and thus it mainly reduces the jitter in the $x$ and $y$ directions. Since there is no image clue to reduce the jitter along the $z$ direction, we simply enforce a smoothness constraint on the $z$-components of the 3D joint locations:

$$\mathcal{F}_{z} = \sum_m \left(z_m - z'_m\right)^2, \quad (15)$$

where $z_m$ is the $z$-coordinate of the $m$-th joint of the mesh model as a function of the parameters under optimization, and $z'_m$ is the corresponding value in the previous frame, treated as a fixed constant. Finally, we define a new objective function:

$$\mathcal{F}_{\text{track}} = \mathcal{F}_{\text{texture}} + \mathcal{F}_{z} + \mathcal{F}_{\text{POF}} + \mathcal{F}^{F}_{\text{2D}}, \quad (16)$$

where the balancing weights are omitted. We minimize this function to obtain the parameters of the $(t{+}1)$-th frame $\Psi^{t+1}$, initialized from the output of the last stage $\Psi^{t+1}_0$. Compared to the original full objective in Equation 11, this new objective function is simpler since the optimization starts from a good initialization. Most of the 2D joint constraints are replaced by $\mathcal{F}_{\text{texture}}$, while we found that the POF term and the face keypoint term are still needed to avoid error accumulation. Note that this optimization is performed recursively: we use the updated parameters $\Psi^t$ of the $t$-th frame to extract the texture in Equation 12, and update the model parameters of the $(t{+}1)$-th frame from $\Psi^{t+1}_0$ to $\Psi^{t+1}$ with this optimization. Also note that the shape parameters $\phi$ should be the same across the sequence, so we take $\phi$ from the initialization and keep it fixed during the optimization. We also freeze the facial expression $\sigma$ and do not optimize it in this stage.
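The two data terms specific to this stage can be sketched as residual functions. `project_vertices` and the input arrays are hypothetical placeholders standing in for the mesh projection and the precomputed optical-flow destinations; the actual system evaluates these inside the Levenberg-Marquardt solver.

```python
import numpy as np

def texture_consistency_residuals(project_vertices, params, flow_dest):
    """Texture term residuals: for each vertex, the gap between its current
    projection (a function of the parameters under optimization) and the
    precomputed optical-flow destination, which stays constant.

    project_vertices: callable mapping parameters to an (N, 2) pixel array.
    flow_dest: (N, 2) flow destinations, precomputed before optimization.
    """
    proj = project_vertices(params)           # (N, 2) current projections
    return (proj - flow_dest).ravel()         # drives projections onto flow

def z_smoothness_residuals(joints_z, joints_z_prev, w_z=1.0):
    """Depth smoothness: image-space flow cannot constrain z, so penalize
    deviation of joint depths from their values in the previous frame."""
    return w_z * (np.asarray(joints_z) - np.asarray(joints_z_prev))
```

Stacking these with the POF and face-keypoint residuals gives the tracking objective minimized per frame.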

Figure 4: Illustration of our temporal refinement algorithm. The top row shows meshes projected on input images at previous frame, current target frame, and after refinement. In zoom-in views a particular vertex is shown in blue, which is more consistent after applying our tracking method.

7 Results

We quantitatively evaluate the performance of our method on public benchmarks for 3D body pose estimation and hand pose estimation. We also thoroughly evaluate our method under viewpoint and human pose changes on our newly collected multi-view human pose dataset. For all quantitative experiments, we use the camera intrinsics provided by the datasets. We finally show our total motion capture results on various challenging videos recorded by us or obtained from YouTube. Our qualitative results are best viewed in our supplementary videos.

7.1 Dataset

Body Pose Dataset: Human3.6M [19] is a large-scale indoor marker-based human MoCap dataset, and currently the most commonly used benchmark for 3D body pose estimation. We quantitatively evaluate the body part of our algorithm on it. We follow the standard training-testing protocol as in [39].

Hand Pose Dataset: Stereo Hand Pose Tracking Benchmark (STB) [63] is a 3D hand pose dataset consisting of 30K images for training and 6K images for testing. Dexter+Object (D+O) [49] is a hand pose dataset captured by an RGB-D camera, providing about 3K testing images in 6 sequences. Only the locations of finger tips are annotated.

Newly Captured Total Motion Dataset: We use the Panoptic Studio [21, 22] to capture a new dataset of 3D body and hand poses in a markerless way [23]. We use 31 HD cameras to capture 40 subjects. Each subject performs a wide range of body and hand motions under the guidance of a video for 2.5 minutes. After cleaning out the erroneous frames, we obtain about 834K body images and 111K hand images with corresponding 3D pose data. We split this dataset into training and testing sets such that no subject appears in both. This dataset will be publicly shared.

Method          MPJPE
Pavlakos [39]   71.9
Zhou [64]       64.9
Luo [28]        63.7
Martinez [30]   62.9
Fang [17]       60.4
Yang [61]       58.6
Pavlakos [38]   56.2
Dabral [15]     55.5
Sun [52]        49.6
*Kanazawa [24]  88.0
*Mehta [32]     80.5
*Mehta [31]     69.9
*Ours           58.3
*Ours+          64.5
Table 1: Quantitative comparison with previous work on the Human3.6M dataset. The '*' signs indicate methods that show results on in-the-wild videos. The evaluation metric is Mean Per Joint Position Error (MPJPE) in millimeters. The numbers are taken from the original papers. 'Ours' and 'Ours+' refer to our results without and with the pose prior respectively.

7.2 Quantitative Comparison with Previous Work

7.2.1 3D Body Pose Estimation.

Comparison on Human3.6M. We compare the performance of our single-frame body pose estimation method with previous state-of-the-art methods. Our network is initialized from the 2D body pose estimation network of OpenPose. We train the network using the COCO dataset [25], our new 3D body pose dataset, and Human3.6M for 165k iterations with a batch size of 4. At test time, we fit the Adam model [23] to the network output. Since Human3.6M has a different joint definition from the Adam model, we build a linear regressor to map Adam mesh vertices to the 17 joints of the Human3.6M definition using the training set, as in [24]. For evaluation, we follow [39] and rescale our output to match the size of an average skeleton computed from the training set. The Mean Per Joint Position Error (MPJPE) after aligning the root joint is reported as in [39].
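The evaluation metric can be sketched as follows; root alignment is by translation, and the skeleton-rescaling step mentioned above is omitted for brevity.

```python
import numpy as np

def mpjpe_root_aligned(pred, gt, root_idx=0):
    """Mean Per Joint Position Error after root alignment: translate both
    skeletons so the root joint sits at the origin, then average the
    per-joint Euclidean distances. pred / gt: (J, 3) arrays (e.g. in mm)."""
    pred = np.asarray(pred, float) - np.asarray(pred, float)[root_idx]
    gt = np.asarray(gt, float) - np.asarray(gt, float)[root_idx]
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))
```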

The experimental results are shown in Table 1. Our method achieves competitive performance; in particular, we show the lowest pose estimation error among all methods that demonstrate their results on in-the-wild videos (marked with '*' in the table). We argue that this is important because methods are in general prone to overfitting to this specific dataset. As an example, our result with the pose prior shows increased error compared to our result without the prior, although we find that the pose prior helps to produce good surface structure and joint angles in the wild.

Ablation Studies. We investigate the importance of each dataset through ablation studies on Human3.6M. We compare the reconstruction error when training networks with: (1) Human3.6M; (2) Human3.6M and our captured dataset; and (3) Human3.6M, our captured dataset, and COCO. Note that setting (3) is the one used for the previous comparison. We follow the same evaluation protocol and metric as in Table 1. The results are shown in Table 2. First, it is worth noting that with only Human3.6M as training data, we already achieve the best results among the methods marked with '*' in Table 1. Second, comparing (2) with (1), our new dataset provides an improvement despite the differences in background, human appearance and pose distribution between our dataset and Human3.6M. This verifies the value of our new dataset. Third, we see a drop in error when we add COCO to the training data, which suggests that our framework can take advantage of this dataset, with only 2D human pose annotations, for 3D pose estimation.

Training data MPJPE
(1) Human3.6M 65.6
(2) Human3.6M + Ours 60.9
(3) Human3.6M + Ours + COCO 58.3
Table 2: Ablation studies on Human3.6M. The evaluation metric is Mean Per Joint Position Error in millimeter.
Figure 5: Comparison with previous work on 3D hand pose estimation datasets. We plot the PCK curve and show the AUC in brackets for each method in the legend. Left: results on the STB dataset [63] in the 20 mm-50 mm range; right: results on the Dexter+Object dataset [49] in the 0-100 mm range. Results with depth alignment are marked with '*'; the RGB-D based method is marked with '+'.

7.2.2 3D Hand Pose Estimation.

We evaluate our method on the Stereo Hand Pose Tracking Benchmark (STB) and Dexter+Object (D+O), and compare our result with previous methods. For this experiment we use the separate hand model of Frankenstein in [23].

STB. Since the STB dataset annotates a palm joint rather than the wrist joint used in our method, we convert the palm joint to a wrist joint as in [65] to train our CNN. We also learn a linear regressor using the training set of the STB dataset. During testing, we regress back the palm joint from our model fitting output for comparison. For evaluation, we follow the previous work [65] and compute the error after aligning the position of the root joint and the global scale with the ground truth, and report the Area Under Curve (AUC) of the Percentage of Correct Keypoints (PCK) curve in the 20mm-50mm range. The results are shown in the left of Fig. 5. Our performance is on par with the state-of-the-art methods that are designed particularly for hand pose estimation. We also point out that the performance on this dataset has almost saturated, since the PCK is already high even at the lowest threshold.

Figure 6: Evaluation results in the Panoptic Studio. Top: accuracy vs. viewpoint; bottom: accuracy vs. pose. The metric is MPJPE in cm.
Figure 7: The comparison of joint location across time before and after tracking with ground truth. The horizontal axes show frame numbers (30fps) and the vertical axes show joint locations in camera coordinate. The target joint here is the left shoulder of the subject.

D+O. Following [34] and [20], we report our results using a PCK curve and the corresponding AUC, as shown in the right of Fig. 5. Since previous methods are evaluated by estimating the absolute 3D depth of the hand joints, we follow them by finding an approximate hand scale using a single frame in the dataset, and fix the scale during the evaluation. In this case, our performance is comparable with the previous state-of-the-art [20]. However, since there is a fundamental depth-scale ambiguity in single-view pose estimation, we argue that aligning the root with the ground truth depth is a more reasonable evaluation setting. In this setting, our method outperforms the previous state-of-the-art method [34] under the same protocol, and even achieves better performance than an RGB-D based method [49].
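The PCK/AUC metric used in both hand benchmarks can be sketched as follows; uniform threshold spacing and AUC-as-mean-PCK are simplifying assumptions.

```python
import numpy as np

def pck_auc(pred, gt, thresholds):
    """PCK curve and AUC. pred / gt: (N, J, 3) joint arrays; thresholds: a
    uniformly spaced 1D array of error thresholds (e.g. 20-50 mm). PCK at
    a threshold is the fraction of joints whose Euclidean error is below
    it; AUC is approximated as the mean PCK over the threshold range."""
    err = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float),
                         axis=-1).ravel()
    pck = np.array([(err <= t).mean() for t in thresholds])
    return pck, float(pck.mean())
```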

7.3 Quantitative Study for View and Pose Changes

Our new 3D pose data contain multi-view images with diverse body postures. This allows us to quantitatively study the performance of our method under viewpoint and body pose changes. We compare our single-view 3D body reconstruction results with the ground truth. Due to the scale-depth ambiguity of monocular pose estimation, we align the depth of the root joint to the ground truth by scaling our result along the ray directions from the camera center, and compute the Mean Per Joint Position Error (MPJPE) in centimeters. We compute the average errors for each camera viewpoint, as shown in the top of Fig. 6. Each camera viewpoint is represented by azimuth and elevation with respect to the subjects' initial body location. We reach two interesting findings: first, the performance worsens for camera views with higher elevation due to severe self-occlusion and foreshortening; second, the error is larger in back views than in frontal views, because the limbs are occluded by the torso in many poses. At the bottom of Fig. 6, we show the performance for varying body poses. We run the k-means algorithm on the ground truth data to find body pose clusters (the cluster-center poses are shown in the figure), and compute the error for each cluster. Body poses with more severe self-occlusion or foreshortening tend to have higher errors.
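The depth alignment used here, scaling the prediction along rays from the camera center, can be sketched as below: under a pinhole camera at the origin, scaling every joint by the same factor leaves all image projections unchanged, so only the depth ambiguity is resolved.

```python
import numpy as np

def align_root_depth(joints_cam, gt_root_depth, root_idx=0):
    """Scale a predicted skeleton (camera coordinates, camera at origin)
    along the viewing rays so the root joint's depth matches the ground
    truth. joints_cam: (J, 3) array; returns the rescaled skeleton."""
    joints_cam = np.asarray(joints_cam, float)
    s = gt_root_depth / joints_cam[root_idx, 2]
    return s * joints_cam     # uniform scaling keeps x/z and y/z unchanged
```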

7.4 The Effect of Mesh Tracking

To demonstrate the effect of our temporal refinement method, we compare the result of our method before and after this refinement stage using Panoptic Studio data. We plot the reconstructed left shoulder joint in Fig. 7. We find that the result after tracking (in blue) tends to be more temporally stable than that before tracking (in green), and is often closer to the ground truth (in red).

7.5 Qualitative Evaluation

Qualitative Results on Images: In this section we present qualitative results of our method on individual images in Fig. 8. We show results on images with various backgrounds, human appearances and poses. Our method works well both for indoor Mocap images (the first row in Fig. 8) and for in-the-wild images (the last two rows).

Figure 8: Qualitative results of our method on in-the-wild images. For each example, we show input images and our prediction with zoom-in views as well as side and top views.

Qualitative Results on Video Sequences: We also show results of our method on video sequences. We test on two kinds of videos: first, videos of human motion that we record with a camera ourselves; second, videos downloaded from YouTube. The results are presented in our supplementary video. For videos where only the upper body of the target person is visible, we assume in Equation 5 that the orientation of the torso and legs is vertically downward.

8 Discussion

In this paper, we present a method to simultaneously reconstruct the 3D total motion of a single person from an image or a monocular video. We thoroughly evaluate the robustness of our method on various benchmarks and demonstrate monocular 3D total motion capture results on in-the-wild videos. Our method has some limitations. First, we observe failure cases when a significant part of the target person is invisible (outside the image boundary or occluded by other objects), leading to erroneous network predictions. Second, our hand pose detector fails in cases of insufficient resolution or severe motion blur. Third, our CNN requires bounding boxes for the body and hands as input, and cannot handle multiple bodies or hands simultaneously. Solving these problems points to interesting future directions.


  • [1] CMU motion capture database.
  • [2] Vicon motion systems.
  • [3] S. Agarwal, K. Mierle, and Others. Ceres solver.
  • [4] I. Akhter and M. J. Black. Pose-conditioned joint angle limits for 3d human pose reconstruction. In CVPR, 2015.
  • [5] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2D human pose estimation: New benchmark and state of the art analysis. In CVPR, 2014.
  • [6] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, and J. Davis. Scape: shape completion and animation of people. TOG, 2005.
  • [7] A. Baak, M. Müller, G. Bharaj, H.-P. Seidel, and C. Theobalt. A data-driven approach for real-time full body pose reconstruction from a depth camera. In ICCV, 2011.
  • [8] V. Blanz and T. Vetter. A morphable model for the synthesis of 3d faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 187–194. ACM Press/Addison-Wesley Publishing Co., 1999.
  • [9] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black. Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image. In ECCV, 2016.
  • [10] A. Bulat and G. Tzimiropoulos. Human pose estimation via convolutional part heatmap regression. In ECCV, 2016.
  • [11] Y. Cai, L. Ge, J. Cai, and J. Yuan. Weakly-supervised 3d hand pose estimation from monocular rgb images. In ECCV, 2018.
  • [12] C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou. Facewarehouse: A 3d facial expression database for visual computing. TVCG, 2014.
  • [13] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In CVPR, 2017.
  • [14] C.-H. Chen and D. Ramanan. 3D Human Pose Estimation = 2D Pose Estimation + Matching. In CVPR, 2017.
  • [15] R. Dabral, A. Mundhada, U. Kusupati, S. Afaque, A. Sharma, and A. Jain. Learning 3d human pose from structure and motion. In ECCV, 2018.
  • [16] A. Elhayek, E. Aguiar, A. Jain, J. Tompson, L. Pishchulin, M. Andriluka, C. Bregler, B. Schiele, and C. Theobalt. Efficient convnet-based marker-less motion capture in general scenes with a low number of cameras. In CVPR, 2015.
  • [17] H. Fang, Y. Xu, W. Wang, X. Liu, and S.-C. Zhu. Learning pose grammar to encode human body configuration for 3d pose estimation. In AAAI, 2018.
  • [18] J. Gall, C. Stoll, E. De Aguiar, C. Theobalt, B. Rosenhahn, and H.-P. Seidel. Motion capture using joint skeleton tracking and surface estimation. In CVPR, 2009.
  • [19] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. TPAMI, 2014.
  • [20] U. Iqbal, P. Molchanov, T. Breuel, J. Gall, and J. Kautz. Hand pose estimation via latent 2.5d heatmap regression. In ECCV, 2018.
  • [21] H. Joo, H. Liu, L. Tan, L. Gui, B. Nabbe, I. Matthews, T. Kanade, S. Nobuhara, and Y. Sheikh. Panoptic studio: A massively multiview system for social motion capture. In CVPR, 2015.
  • [22] H. Joo, T. Simon, X. Li, H. Liu, L. Tan, L. Gui, S. Banerjee, T. Godisart, B. Nabbe, I. Matthews, et al. Panoptic studio: A massively multiview system for social interaction capture. TPAMI, 2017.
  • [23] H. Joo, T. Simon, and Y. Sheikh. Total capture: A 3d deformation model for tracking faces, hands, and bodies. In CVPR, 2018.
  • [24] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik. End-to-end recovery of human shape and pose. In CVPR, 2018.
  • [25] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
  • [26] Y. Liu, J. Gall, C. Stoll, Q. Dai, H.-P. Seidel, and C. Theobalt. Markerless motion capture of multiple characters using multiview image segmentation. TPAMI, 2013.
  • [27] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black. Smpl: A skinned multi-person linear model. In TOG, 2015.
  • [28] C. Luo, X. Chu, and A. Yuille. Orinet: A fully convolutional network for 3d human pose estimation. In BMVC, 2018.
  • [29] D. C. Luvizon, D. Picard, and H. Tabia. 2d/3d pose estimation and action recognition using multitask deep learning. In CVPR, 2018.
  • [30] J. Martinez, R. Hossain, J. Romero, and J. J. Little. A simple yet effective baseline for 3d human pose estimation. In ICCV, 2017.
  • [31] D. Mehta, O. Sotnychenko, F. Mueller, W. Xu, S. Sridhar, G. Pons-Moll, and C. Theobalt. Single-shot multi-person 3d pose estimation from monocular rgb. In 3DV, 2018.
  • [32] D. Mehta, S. Sridhar, O. Sotnychenko, H. Rhodin, M. Shafiei, H.-P. Seidel, W. Xu, D. Casas, and C. Theobalt. Vnect: Real-time 3d human pose estimation with a single rgb camera. TOG, 2017.
  • [33] F. Moreno-noguer. 3D Human Pose Estimation from a Single Image via Distance Matrix Regression. In CVPR, 2017.
  • [34] F. Mueller, F. Bernard, O. Sotnychenko, D. Mehta, S. Sridhar, D. Casas, and C. Theobalt. Ganerated hands for real-time 3d hand tracking from monocular rgb. In CVPR, 2018.
  • [35] A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In ECCV, 2016.
  • [36] B. X. Nie, P. Wei, and S.-C. Zhu. Monocular 3d human pose estimation by predicting depth on joints. In ICCV, 2017.
  • [37] I. Oikonomidis, N. Kyriazis, and A. A. Argyros. Tracking the articulated motion of two strongly interacting hands. In CVPR, 2012.
  • [38] G. Pavlakos, X. Zhou, and K. Daniilidis. Ordinal depth supervision for 3D human pose estimation. In CVPR, 2018.
  • [39] G. Pavlakos, X. Zhou, K. G. Derpanis, and K. Daniilidis. Coarse-to-fine volumetric prediction for single-image 3d human pose. In CVPR, 2017.
  • [40] G. Pons-Moll, J. Romero, N. Mahmood, and M. J. Black. Dyna: A model of dynamic human shape in motion. TOG, 2015.
  • [41] V. Ramakrishna, T. Kanade, and Y. Sheikh. Reconstructing 3d human pose from 2d image landmarks. In CVPR, 2012.
  • [42] G. Rogez and C. Schmid. MoCap-guided Data Augmentation for 3D Pose Estimation in the Wild. In NIPS, 2016.
  • [43] J. Romero, D. Tzionas, and M. J. Black. Embodied hands: Modeling and capturing hands and bodies together. TOG, 2017.
  • [44] M. R. Ronchi, O. Mac Aodha, R. Eng, and P. Perona. It’s all relative: Monocular 3d human pose estimation from weakly supervised data. In BMVC, 2018.
  • [45] T. Sharp, C. Keskin, D. Robertson, J. Taylor, J. Shotton, D. Kim, C. Rhemann, I. Leichter, A. Vinnikov, Y. Wei, et al. Accurate, robust, and flexible real-time hand tracking. In CHI, 2015.
  • [46] J. Shotton, A. Fitzgibbon, M. Cook, and T. Sharp. Real-time human pose recognition in parts from single depth images. In CVPR, 2011.
  • [47] T. Simon, H. Joo, I. Matthews, and Y. Sheikh. Hand keypoint detection in single images using multiview bootstrapping. In CVPR, 2017.
  • [48] S. Sridhar, F. Mueller, A. Oulasvirta, and C. Theobalt. Fast and robust hand tracking using detection-guided optimization. In CVPR, 2015.
  • [49] S. Sridhar, F. Mueller, M. Zollhöfer, D. Casas, A. Oulasvirta, and C. Theobalt. Real-time joint tracking of a hand manipulating an object from rgb-d input. In ECCV, 2016.
  • [50] S. Sridhar, A. Oulasvirta, and C. Theobalt. Interactive markerless articulated hand motion tracking using RGB and depth data. In ICCV, 2013.
  • [51] X. Sun, J. Shang, S. Liang, and Y. Wei. Compositional Human Pose Regression. In ICCV, 2017.
  • [52] X. Sun, B. Xiao, F. Wei, S. Liang, and Y. Wei. Integral human pose regression. In ECCV, 2018.
  • [53] C. J. Taylor. Reconstruction of articulated objects from point correspondences in a single uncalibrated image. CVIU, 2000.
  • [54] B. Tekin, A. Rozantsev, V. Lepetit, and P. Fua. Direct Prediction of 3D Body Poses from Motion Compensated Sequences. In CVPR, 2016.
  • [55] J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In NIPS, 2014.
  • [56] A. Toshev and C. Szegedy. Deeppose: Human pose estimation via deep neural networks. In CVPR, 2014.
  • [57] D. Tzionas, L. Ballan, A. Srikantha, P. Aponte, M. Pollefeys, and J. Gall. Capturing hands in action using discriminative salient points and physics simulation. IJCV, 2016.
  • [58] G. Varol, D. Ceylan, B. Russell, J. Yang, E. Yumer, I. Laptev, and C. Schmid. Bodynet: Volumetric inference of 3d human body shapes. In ECCV, 2018.
  • [59] M. Wang, X. Chen, W. Liu, C. Qian, L. Lin, and L. Ma. Drpose3d: Depth ranking in 3d human pose estimation. In IJCAI, 2018.
  • [60] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In CVPR, 2016.
  • [61] W. Yang, W. Ouyang, X. Wang, J. Ren, H. Li, and X. Wang. 3d human pose estimation in the wild by adversarial learning. In CVPR, 2018.
  • [62] Q. Ye, S. Yuan, and T.-K. Kim. Spatial attention deep net with partial pso for hierarchical hybrid hand pose estimation. In ECCV, 2016.
  • [63] J. Zhang, J. Jiao, M. Chen, L. Qu, X. Xu, and Q. Yang. 3d hand pose tracking and estimation using stereo matching. arXiv preprint arXiv:1610.07214, 2016.
  • [64] X. Zhou, Q. Huang, X. Sun, X. Xue, and Y. Wei. Towards 3d human pose estimation in the wild: a weakly-supervised approach. In ICCV, 2017.
  • [65] C. Zimmermann and T. Brox. Learning to estimate 3d hand pose from single rgb images. In ICCV, 2017.


Appendix A New 3D Human Pose Dataset

In this section, we provide more details of the new 3D human pose dataset that we collect.

a.1 Methodology

We build this dataset in 3 steps:

  • We randomly recruit 40 volunteers on campus and capture their motion in a multi-view system [21, 22]. During the capture, all subjects follow the motion in the same pre-recorded video of around 2.5 minutes.

  • We use multi-view 3D reconstruction algorithms [21, 22, 47] to reconstruct 3D body, hand and face keypoints.

  • We run filters on the reconstruction results. We compute the average length of every bone for each subject, and discard a frame if the length of any bone in that frame deviates from the average by more than a certain threshold. We further manually verify the correctness of the hand annotations by projecting the skeletons onto 3 camera views and checking the alignment between the projections and the images.
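The bone-length filter in the last step above can be sketched as follows; the skeleton edges and the relative threshold of 20% are our own illustrative assumptions, not values from the paper:

```python
import numpy as np

# hypothetical skeleton edges as (parent, child) joint-index pairs
BONES = [(0, 1), (1, 2), (2, 3)]

def bone_lengths(joints):
    """joints: (J, 3) array. Returns the length of every bone in BONES."""
    return np.array([np.linalg.norm(joints[c] - joints[p]) for p, c in BONES])

def filter_frames(frames, rel_threshold=0.2):
    """frames: (N, J, 3) reconstructed skeletons for one subject.
    Keep a frame only if every bone length is within rel_threshold of the
    subject's average bone length (threshold value is an assumption)."""
    lengths = np.stack([bone_lengths(f) for f in frames])  # (N, num_bones)
    mean = lengths.mean(axis=0)                            # per-subject average
    keep = np.all(np.abs(lengths - mean) / mean <= rel_threshold, axis=1)
    return frames[keep]
```

Because each bone length is compared against a per-subject average, the filter adapts to subjects of different body sizes.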

a.2 Statistics and Examples

Figure 9: Example images and 3D annotations from our new 3D human pose dataset.

To train our networks, we use our captured 3D body and hand data, including a total of 834K image-annotation pairs for bodies and 111K pairs for hands. Example data are shown in Fig. 9 and in our supplementary video.

Appendix B Network Skeleton Definition

In this section we specify the skeleton hierarchy used for our Part Orientation Fields and joint confidence maps. As shown in Fig. 10, we predict 18 keypoints for the body and POFs for 17 body parts; analogously, we predict 21 joints for each hand and POFs for 20 hand parts. Since each POF stores a 3D orientation, the body confidence maps and POFs have 18 and 3 × 17 = 51 channels respectively, while those of the hand have 21 and 3 × 20 = 60 channels. Note that we train a CNN only for left hands, and horizontally flip images of right hands before feeding them into the network during testing. Some example outputs of our CNN are shown in Figs. 12, 13, 14 and 15.
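The right-hand flipping trick can be sketched as follows for the 2D keypoint output; `left_hand_net` is a stand-in for our hand CNN, and we show only the keypoint un-flipping (the x-components of the predicted 3D orientations would need an analogous sign flip):

```python
import numpy as np

def predict_right_hand(img, left_hand_net):
    """img: (H, W, 3) crop of a right hand. The network is trained on left
    hands only, so mirror the image, run the network, and mirror the
    predicted 2D keypoints back to the original image coordinates."""
    flipped = img[:, ::-1]                 # horizontal flip of the crop
    keypoints = left_hand_net(flipped)     # (num_joints, 2) in (x, y) order
    keypoints = keypoints.copy()
    keypoints[:, 0] = img.shape[1] - 1 - keypoints[:, 0]  # un-flip x
    return keypoints
```

Mirroring makes a right hand look like a left hand to the network, so a single set of weights covers both hands at the cost of one flip per crop.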

Appendix C Deformable Human Model

c.1 Model Parameters

As explained in the main paper, we use the Adam model introduced in [23] for total body motion capture. The model parameters include the shape parameters (coefficients of the shape deformation space), the pose parameters (one rotation per model joint; the model has 22 body joints and 20 joints for each hand), the global translation parameters, and the facial expression parameters (coefficients of the facial expression bases).
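For concreteness, the full parameter set can be written as below; the symbol names and dimension labels are our own notation, not taken verbatim from [23]:

```latex
\underbrace{\boldsymbol{\phi} \in \mathbb{R}^{K_{\phi}}}_{\text{shape coefficients}}, \quad
\underbrace{\boldsymbol{\theta} \in \mathbb{R}^{3J}}_{\text{pose},\; J = 22 + 2 \times 20 = 62}, \quad
\underbrace{\mathbf{t} \in \mathbb{R}^{3}}_{\text{global translation}}, \quad
\underbrace{\boldsymbol{\sigma} \in \mathbb{R}^{K_{\sigma}}}_{\text{facial expression coefficients}}
```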

c.2 3D Keypoints Definition

In this section we specify the correspondences between the keypoints predicted by our networks and Adam keypoints.

Keypoint regressors for the body are directly provided by [23], which define keypoints as linear combinations of mesh vertices. During mesh fitting (Section 5 of the main paper), given the current mesh determined by the mesh parameters, we use these regressors to compute 3D joints from the mesh vertices, and further obtain part orientations by Equation 1 in the main paper. The joints and parts follow the skeleton structure in Fig. 10, and are used in Equations 4 and 5 in the main paper respectively to fit the body pose.

Joo et al. [23] also provide regressors for both hands, so we follow the same setup as for the body to define hand keypoints and hand parts, which are used in Equation 7 in the main paper to fit the hand pose. Note that the wrists appear in both skeletons of Fig. 10, so these keypoints are shared between the body and hand skeletons. We only use the 2D keypoint constraints from the body network in Equation 4, ignoring the 2D keypoint measurements from the hand network in Equation 7, since the output of the body network is usually more stable.

For Equation 8 in the main paper, we use the 2D foot keypoint locations from OpenPose, including the big toes, small toes and heels of both feet. On the Adam side, we directly use mesh vertices as keypoints for the big toes and small toes. We use the middle point between a pair of vertices at the back of each foot as the heel keypoint, as shown in Fig. 11 (left).

To capture facial expression, we also directly fit Adam vertices using the 2D face keypoints predicted by OpenPose (Equation 9 in the main paper). Note that although OpenPose provides 70 face keypoints, we only use the 41 keypoints on the eyes, nose, mouth and eyebrows, ignoring those on the face contour. The Adam vertices used for fitting are illustrated in Fig. 11 (right).

Figure 10: Illustration on the skeleton hierarchy in our POFs and joint confidence maps. The joints are shown in black, and body parts for POFs are shown in gray with indices underlined. On the left we show the skeleton used in our body network; on the right we show the skeleton used in our hand network.

Appendix D Implementation Details

In this section, we provide details about the parameters we use in our implementation.

In Equations 4 and 5 of the main paper, we use

Similarly defined weights for the left and right hands are omitted in Equation 7; for these we use

Weights for Equation 10 (omitted in the main paper) are

In Equation 15, a balancing weight is omitted for which we use

In Equation 16, the objective consists of POF terms for the body, left hand and right hand. We use weights to balance these three terms.

Figure 11: We plot the Adam vertices used as keypoints for mesh fitting as red dots. Left: vertices used to fit both feet (the middle points between the two vertices at the back are the heel keypoints); right: vertices used to fit facial expression.
Figure 12: Joint confidence maps predicted by our CNN for a body image.
Figure 13: Part Orientation Fields predicted by our CNN for a body image. For each body part we visualize the three orientation channels separately.
Figure 14: Joint confidence maps predicted by our CNN for a hand image.
Figure 15: Part Orientation Fields predicted by our CNN for a hand image. For each hand part we visualize the three orientation channels separately.