DRPose3D: Depth Ranking in 3D Human Pose Estimation
Abstract
In this paper, we propose a two-stage depth-ranking-based method (DRPose3D) to tackle the problem of 3D human pose estimation. Instead of accurate 3D positions, depth rankings can be identified intuitively by humans and learned more easily by deep neural networks, since they reduce to classification problems. Moreover, depth rankings contain rich 3D information: they prevent the 2D-to-3D pose regression in two-stage methods from being ill-posed. In our method, we first design a Pairwise Ranking Convolutional Neural Network (PRCNN) to extract depth rankings of human joints from images. Second, a coarse-to-fine 3D Pose Network (DPNet) is proposed to estimate 3D poses from both depth rankings and 2D human joint locations. Additionally, to improve the generality of our model, we introduce a statistical method to augment depth rankings. Our approach outperforms state-of-the-art methods on the Human3.6M benchmark for all three testing protocols, indicating that depth ranking is an essential geometric feature that can be learned to improve 3D pose estimation.
Min Wang, Xipeng Chen, Wentao Liu, Chen Qian, Liang Lin, Lizhuang Ma
Department of Computer Science and Engineering, Shanghai Jiao Tong University; School of Data and Computer Science, Sun Yat-Sen University; Department of Computer Science and Technology, Tsinghua University; SenseTime Group Limited; School of Computer Science and Software Engineering, East China Normal University
yinger650@sjtu.edu.cn, chenxp37@mail2.sysu.edu.cn, liuwtwinter@gmail.com, qianchen@sensetime.com, linliang@ieee.org, malz@cs.sjtu.edu.cn
1 Introduction
3D human pose estimation is an important problem with a variety of applications, such as human-computer interaction, augmented reality, and behavior analysis. Unlike 2D human pose datasets [?], in which the ground truth can be obtained from manual labeling, 3D poses are hard to obtain without sophisticated tracking devices.
In 3D human pose estimation, end-to-end methods [?; ?] map input images to 3D joint positions directly. Their advantage lies in the ability to use the shading, occlusion, and appearance information contained in images. However, such images are captured in laboratory environments, e.g., Human3.6M [?], and data augmentation for images can hardly be performed in 3D space. On the other hand, two-stage methods with a simple model [?] have achieved competitive performance on the Human3.6M dataset. These methods first predict the 2D locations of the human joints, then estimate 3D poses based on the 2D predictions. The first stage can exploit the abundance of 2D human pose datasets; the second stage can use data augmentation to simulate 2D and 3D data in 3D space and thereby fully utilize the Human3.6M dataset. However, the rich 3D information contained in images is not used. Besides, estimating 3D positions from 2D locations in the second stage is an ill-posed problem, since multiple 3D poses can have the same 2D projection.
Depth ranking encodes rich 3D information. As discussed in Section 3, under some weak assumptions, depth ranking combined with the 2D pose uniquely determines the 3D pose. It thus turns the second stage, a 2D-to-3D problem that was ill-posed in previous methods, into a well-defined one. Besides, as a geometric property, depth ranking can also be augmented in 3D space; since the second stage becomes a one-to-one mapping once depth rankings are given, data augmentation can be performed without concern. Moreover, for depth ranking we learn the relationship between each pair of joints, for example, in Figure 1, the wrist lies in front of the head. This makes the depth ranking problem a series of classification problems, which can be effectively solved by deep neural networks.
To this end, we propose a two-stage method called the Depth Ranking 3D Human Pose Estimator (DRPose3D). As shown in Figure 2, our method is divided into two stages: it explicitly learns depth rankings between each pair of human joints from images and then uses them, together with 2D joint locations, to estimate 3D poses. We make three contributions to investigate the utility of depth rankings in 3D human pose estimation.

We design a Pairwise Ranking Convolutional Neural Network (PRCNN) to extract the depth rankings of pairwise human joints from a single RGB image. PRCNN transforms the depth ranking problem into pairwise classification problems by generating a ranking matrix that represents the depth relation between each pair of human joints.

We propose a coarse-to-fine 3D pose estimator named DPNet, composed of a DepthNet and a PoseNet. It regresses the 3D pose from 2D joint locations and the depth ranking matrix. Since the estimated ranking matrix contains noise, directly using it together with 2D joint locations leads to poor performance. DPNet therefore first estimates coarse depth values that are consistent with the majority of entries of the depth ranking matrix, then regresses accurate 3D poses in a coarse-to-fine manner.

Data augmentation in 3D space for the second stage is explored. By synthesizing 3D poses and camera parameters, 2D poses and ranking matrices can be generated in abundance. Unlike previous work, which places synthesized cameras on the same circle as the real ones (a circle that is unknown in real scenarios), we randomly sample camera positions from all possible positions around the subject. To make the augmented data obey the distribution of the training dataset, we use a statistical method to add noise.
The proposed DRPose3D framework achieves state-of-the-art results on three common protocols of the Human3.6M dataset compared with both end-to-end and two-stage methods [?; ?; ?]. The mean per joint position errors (MPJPE) on the three protocols are decreased to 57.8mm, 42.9mm, and 62.8mm respectively, and the MPJPE gap between protocol #3 and protocol #1 is reduced to 5.0mm. This shows that our method is robust to new camera positions and that our data augmentation is effective. The experimental results show that depth ranking is an essential piece of geometric knowledge that can be learned, utilized, and augmented in 3D pose estimation.
2 Related Work
Learning to Rank
Learning to rank is widely used in computer science, especially in information retrieval systems. Many methods have been proposed in the literature, namely pointwise, pairwise, and listwise approaches [?]. Among these, pairwise ranking is the most popular because its data are more efficient to label. [?] proposes a two-stage framework that exploits a preference function to obtain pairwise rankings. RankNet [?] uses gradient descent to learn a binary classifier that indicates pairwise ranks. These methods provide effective ways of learning to rank but have to extract hand-designed features for each item. With the great advances of deep learning, there are increasing applications of ranking, such as age estimation [?] and face beautification [?]. These methods learn rankings with neural networks but focus on image-level attributes. [?] proposes a pairwise ranking method for depth estimation; however, it explicitly learns to rank only one pair of pixels per image. Different from previous methods, PRCNN learns the depth ranking between every pair of human joints implicitly.
3D Pose Estimation
3D pose estimation methods can be divided into two types: end-to-end methods and two-stage methods. End-to-end methods benefit from the completeness of image information but suffer from the difficulty of accurate 3D localization and the limited size of 3D human pose datasets. To better locate 3D joint positions, [?] proposes to regress a bone-based representation instead of joints, and [?] uses a volumetric representation to estimate 3D poses in a coarse-to-fine manner. These methods still need paired 2D images and 3D poses. [?] augments images by assembling multiple image fragments according to 3D poses, but the synthetic images contain many artifacts. [?] transfers a 2D pose detector into a 3D regression network and [?] learns 3D human poses with extra 2D-annotated images. These methods benefit from the diversity of 2D human pose datasets.
Two-stage methods usually predict 2D poses first, then use optimization or machine learning to obtain 3D poses. [?] converts 2D and 3D pose data into Euclidean distance matrices and uses a CNN to regress the 3D pose distance matrix from the 2D one. [?] establishes a simple baseline by regressing the 3D pose directly from 2D joint locations. [?] and [?] encode prior human body shape constraints into 2D-to-3D pose estimation. Other methods search for approximate poses in a designed 3D pose library [?; ?; ?; ?; ?]. These methods focus on inferring plausible 3D poses from human body constraints and ignore other geometric knowledge embedded in image features.
3 Depth Ranking in 3D Pose Estimation
Depth ranking is an important cue for inferring 3D joint positions. Recovering 3D poses purely from 2D joint locations is an ill-posed problem, and the use of depth ranking alleviates this ill-posedness. We represent the 2D skeleton by the joint locations (x_i, y_i), i = 1, ..., N. With the z-axis pointing into the screen, the 3D pose consists of the positions (x_i, y_i, z_i). Under the assumption of orthogonal projection and a fixed length L_{i,j} between two adjacent joints i and j, we have |z_i - z_j| = sqrt(L_{i,j}^2 - (x_i - x_j)^2 - (y_i - y_j)^2). If the depth ranking between adjacent joints is known, the sign of z_i - z_j, and hence the relative 3D position between joint i and joint j, is determined. Thus, with the knowledge of depth rankings between adjacent joints, 2D joint locations, and limb-length priors, the 3D skeleton is almost determined.
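This closed-form relation for adjacent joints can be made concrete; the sketch below assumes orthogonal projection and a known limb length (the function name is ours, not the paper's):

```python
import math

def relative_depth(p_i, p_j, limb_length, i_behind_j):
    """Relative depth z_i - z_j of two adjacent joints under orthogonal
    projection, given their 2D locations and the fixed limb length.
    The depth ranking resolves the remaining sign ambiguity."""
    dx = p_i[0] - p_j[0]
    dy = p_i[1] - p_j[1]
    planar_sq = dx * dx + dy * dy
    # |z_i - z_j| = sqrt(L^2 - ||p_i - p_j||^2); clamp for numerical safety
    dz = math.sqrt(max(limb_length ** 2 - planar_sq, 0.0))
    return dz if i_behind_j else -dz

# A 3-4-5 triangle: 2D distance 4 and limb length 5 imply a depth gap of 3
assert relative_depth((0, 0), (4, 0), 5.0, True) == 3.0
assert relative_depth((0, 0), (4, 0), 5.0, False) == -3.0
```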
In order to learn depth ranking effectively, we introduce a pairwise ranking matrix to represent the depth ranking. By using the pairwise ranking matrix, we transform the depth ranking problem into a set of classification problems. The ground-truth pairwise depth ranking matrix R for a 3D pose is defined as follows:
R_{i,j} = 1 if z_i - z_j > ε; 0 if z_j - z_i > ε; 0.5 if |z_i - z_j| ≤ ε,    (1)
where R_{i,j} indicates the probability that joint i is behind joint j, and ε indicates the tolerable depth difference that avoids ambiguity between two joints with very close depths. Thus predicting the depth ranking of N joints is transformed into N(N - 1)/2 binary classification problems.
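The ground-truth ranking matrix of Eq. (1) can be constructed directly from the joint depths; a minimal NumPy sketch (the tolerance value here is illustrative):

```python
import numpy as np

def gt_ranking_matrix(z, eps=0.1):
    """Ground-truth pairwise ranking matrix from joint depths z (Eq. 1 sketch).
    R[i, j] = 1   if joint i is behind joint j by more than eps,
              0   if joint j is behind joint i by more than eps,
              0.5 if their depths are within eps of each other."""
    z = np.asarray(z, dtype=float)
    diff = z[:, None] - z[None, :]          # diff[i, j] = z_i - z_j
    R = np.full(diff.shape, 0.5)
    R[diff > eps] = 1.0
    R[diff < -eps] = 0.0
    return R

R = gt_ranking_matrix([0.0, 1.0, 1.05], eps=0.1)
assert R[1, 0] == 1.0      # joint 1 is clearly behind joint 0
assert R[0, 1] == 0.0
assert R[1, 2] == 0.5      # depths within the tolerance eps
```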
In the following paragraphs, we introduce (1) how to learn pairwise depth rankings and (2) how to use pairwise depth rankings to predict 3D joint positions. An overview of our framework is illustrated in Figure 2: PRCNN predicts the ranking matrix given the image and the 2D joint heatmaps, and DPNet regresses the 3D pose given the ranking matrix and the 2D joint locations.
3.1 PRCNN: Learning Pairwise Rankings
To estimate the ranking matrix, we propose the Pairwise Ranking Convolutional Neural Network (PRCNN). We concatenate the generated 2D joint heatmaps with the original image as the input of PRCNN. We adopt an 8-stack hourglass network [?] as our 2D pose estimator; it is pretrained on the MPII dataset [?] and fine-tuned on Human3.6M [?].
Inspired by RankNet [?], which ranks items pairwise, PRCNN first extracts a one-dimensional feature f_i for each joint, then computes the difference between the one-dimensional features of a joint pair as the feature of that pair. A residual network [?] is used as the backbone of our feature extractor F:
f_i = F(I, H)_i,  i = 1, ..., N    (2)
o_{i,j} = f_i - f_j    (3)
Given the feature o_{i,j} of a joint pair, we apply the following rank transfer function to obtain the probability P_{i,j} that joint i has a higher Z-value than joint j, i.e., that joint i is behind joint j.
P_{i,j} = e^{o_{i,j}} / (1 + e^{o_{i,j}})    (4)
Note that P_{i,j} lies in the range (0, 1) and equals 0.5 when o_{i,j} = 0, which is consistent with our probabilistic representation of joint-pair rankings. Let the ground-truth ranking matrix R be the desired target values. We adopt the cross-entropy loss, which is frequently used in classification problems. For training, the probabilistic ranking cost function proposed in [?] is defined as:
C_{i,j} = -R_{i,j} log(P_{i,j}) - (1 - R_{i,j}) log(1 - P_{i,j})    (5)
The final cost function is the summation of C_{i,j} over all joint pairs. Thus, this method turns the ranking task into several binary classification problems. During inference, the output P_{i,j} is discretized into three values {0, 0.5, 1}: probabilities below a threshold t map to 0, probabilities above 1 - t map to 1, and the rest map to 0.5.
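The pairwise head of Eqs. (2)-(5) and the inference-time discretization can be sketched in NumPy (a sketch only: the real model computes the per-joint features with a ResNet, and the threshold value here is an assumption):

```python
import numpy as np

def pairwise_rank_probs(f):
    """RankNet-style pairwise ranking from per-joint scalar features f:
    o[i, j] = f_i - f_j and P[i, j] = sigmoid(o[i, j])."""
    f = np.asarray(f, dtype=float)
    o = f[:, None] - f[None, :]
    return 1.0 / (1.0 + np.exp(-o))

def rank_bce_loss(P, R_gt, eps=1e-12):
    """Cross-entropy cost of Eq. (5), summed over all joint pairs."""
    return float(np.sum(-R_gt * np.log(P + eps)
                        - (1.0 - R_gt) * np.log(1.0 - P + eps)))

def discretize(P, t=0.3):
    """Inference-time discretization into {0, 0.5, 1}; the exact
    threshold used in the paper is an assumption here."""
    out = np.full(P.shape, 0.5)
    out[P > 1.0 - t] = 1.0
    out[P < t] = 0.0
    return out

P = pairwise_rank_probs([2.0, 0.0])
assert P[0, 1] > 0.5 and P[1, 0] < 0.5   # joint 0 predicted behind joint 1
assert discretize(P)[0, 1] == 1.0
assert rank_bce_loss(P, discretize(P)) >= 0.0
```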
Different from RankNet, where each feature depends on one item, PRCNN extracts all features from one image and predicts all pairwise rankings together. Hence we use 19-channel tensors (3 for the image and 16 for the heatmaps) as inputs. We adopt ResNet-34 as the backbone of PRCNN and train it from scratch; we find that further increasing the network depth does not improve performance. Data augmentation for PRCNN, such as rotation, scaling, and flipping, is performed as in other 2D pose estimation methods. The network performs well on the Human3.6M dataset and also gives reasonable results on non-lab data such as MPII. Under protocol #3 (Section 4.1), which uses three camera perspectives for training and another camera perspective for testing, the performance does not degrade, showing that PRCNN generalizes well across camera perspectives.
3.2 DPNet: 3D Pose Estimation via Depth Ranking
As discussed above, with the knowledge of pairwise depth rankings, the 2D pose, and human prior knowledge, the 3D human pose is almost determined geometrically. Thus we use only the predicted ranking matrix and the 2D pose as input at this stage.
However, unlike traditional geometric problems with perfect input, although the majority of the pairwise ranking matrix is correct, a portion of its entries provides noisy information. As shown in Figure 6, some elements of the ranking matrix are hard to learn when there is no clear evidence in the image. Directly learning from the ranking matrix and the 2D pose gives less accurate results. A coarse-to-fine network is proposed to resolve the noisy information in the ranking matrix.
The first part of our coarse-to-fine network, called DepthNet, converts the ranking matrix into coarse depth values. Given the input ranking matrix, DepthNet predicts a coarse depth that is consistent with the ranking matrix. The ground truth of the coarse depth is the ranking order of each human joint along the Z-axis: before normalization, it is a permutation determined by the depth values of the 3D pose. By doing so, the network is trained to convert the noisy ranking matrix into coarse depth values, so that noisy ranking pairs can be corrected by the majority of correct ranking pairs. This refinement strategy generates more robust depth values than traditional methods such as topological sorting.
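The coarse-depth target described above (rank order along the Z-axis, normalized as stated in Section 3.2) can be written as a short sketch; `coarse_depth_target` is our name, not the paper's:

```python
import numpy as np

def coarse_depth_target(z):
    """Ground-truth coarse depth for DepthNet: the rank order of each
    joint along the Z-axis, normalized to zero mean and unit standard
    deviation."""
    z = np.asarray(z, dtype=float)
    ranks = np.argsort(np.argsort(z)).astype(float)   # rank of each joint: 0..N-1
    return (ranks - ranks.mean()) / ranks.std()

d = coarse_depth_target([0.3, -1.2, 2.5])
assert abs(d.mean()) < 1e-9 and abs(d.std() - 1.0) < 1e-9
assert d[1] < d[0] < d[2]                             # depth ordering preserved
```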
The second part of our coarse-to-fine network, PoseNet, combines the coarse depth values with the 2D pose and predicts increasingly accurate 3D poses. Inspired by the advances of multi-stage architectures and coarse-to-fine mechanisms [?; ?], we use a cascaded regression network with two stages: the first stage predicts the 3D pose directly and the second stage predicts its residual. The output of each stage is supervised by the 3D pose ground truth. Within each stage, we employ two residual blocks following [?]. The architecture of our coarse-to-fine network is illustrated in Figure 3. We use the mean squared error (MSE) as the loss of each supervised layer and sum the losses:
L = L_d + L_{p1} + L_{p2}    (6)
where L_d is the loss of DepthNet, L_{p1} is the loss of the first stage of PoseNet, and L_{p2} is that of the second stage. In order to remove differences in magnitude and the global shift, both the supervised depth and the coarse depth are normalized to zero mean and unit standard deviation.
3.3 Data Augmentation
As mentioned in [?], data augmentation in 3D space is very effective for the protocol #3 experiment, whose training data come from three camera positions and whose test data come from a different camera position. With depth ranking, data augmentation remains available and becomes even more powerful in the protocol #3 experiment.
Data augmentation is performed by synthesizing inputs from virtual cameras. We synthesize the 3D pose in the virtual camera's coordinate system according to the ground-truth 3D pose, then project it onto the camera's image plane to obtain 2D joint locations and compute the corresponding ranking matrix. So far, the augmented data are generated from ground truth; however, the ranking matrix estimated by PRCNN and the 2D joint locations from the 2D pose estimator can be noisy. Thus we use the Gaussian mixture model (GMM) mentioned in [?] to add noise to the 2D locations. We also propose a statistical method that adjusts each entry of the pairwise ranking matrix based on its accuracy: we calculate the accuracy of each entry from the ranking matrices predicted on the training set, and flip each synthesized entry with probability equal to one minus its accuracy.
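The statistical ranking-matrix augmentation can be sketched as follows, assuming entries are flipped with probability one minus their measured accuracy (this is our reading of the text; function and variable names are illustrative):

```python
import numpy as np

def perturb_ranking_matrix(R, acc, rng=None):
    """Statistical augmentation sketch: flip each entry of a synthesized
    ranking matrix with probability 1 - acc[i, j], where acc holds the
    per-entry accuracy of PRCNN measured on the training set."""
    rng = rng or np.random.default_rng()
    flip = rng.random(R.shape) > acc         # flip where prediction tends to err
    out = R.copy()
    out[flip] = 1.0 - R[flip]                # 1 <-> 0, while 0.5 stays 0.5
    np.fill_diagonal(out, 0.5)               # a joint never ranks against itself
    return out

R = np.array([[0.5, 1.0], [0.0, 0.5]])
acc = np.ones_like(R)                        # perfectly accurate entries: no flips
assert np.allclose(perturb_ranking_matrix(R, acc), R)
```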
Previous work [?] samples virtual cameras on the same circle on which the three cameras from the training data lie, which is a strong assumption and requires prior knowledge about the camera setting. In contrast, our method only assumes that all cameras roughly point towards the performer wearing the motion capture device. We first estimate the rough position of the performer by finding the center position that is closest to the optical axes of all cameras:
c* = argmin_c Σ_i d(c, l_i)^2    (7)
where l_i is the line indicating the optical axis of camera i and d(c, l_i) is the distance from c to that line. The distance r between a camera and the center position is sampled from a normal distribution whose mean and variance are computed from the training data. We then sample camera positions uniformly on the surface of the sphere with center c* and radius r. The optical axis of each virtual camera is the line connecting the sampled camera position and c*. One axis of the camera coordinate system is kept parallel to the ground plane so that the synthesized poses stay upright.
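Equation (7) has a standard least-squares solution; below is a small sketch (not the paper's implementation), where each optical axis is given by a point on the line and a unit direction:

```python
import numpy as np

def closest_point_to_lines(points, dirs):
    """Least-squares solution of Eq. (7): the point minimizing the summed
    squared distance to a set of 3D lines (camera optical axes)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for a, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)       # projector orthogonal to the axis
        A += P
        b += P @ a
    return np.linalg.solve(A, b)

# Two perpendicular axes through the origin intersect at the origin
axes_pts = [np.array([0., 0., 5.]), np.array([5., 0., 0.])]
axes_dir = [np.array([0., 0., 1.]), np.array([1., 0., 0.])]
c = closest_point_to_lines(axes_pts, axes_dir)
assert np.allclose(c, [0., 0., 0.])
```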
4 Experiments
In this section, we first introduce the datasets and protocols, then provide details of how we implement our framework. We evaluate our method on Human3.6M and compare with state-of-the-art methods. To verify the impact of each component of our approach, we also perform ablation studies. Finally, qualitative results visualize the 3D pose estimations on the Human3.6M and MPII datasets.
4.1 Datasets and Protocols
Human3.6M is currently the largest public 3D human pose benchmark. The dataset captures human poses in a laboratory environment with motion capture technology and consists of 3.6 million images depicting daily activities. There are 4 cameras, 11 subjects (actors), and 17 scenarios (actions) in this dataset. We use the mean per joint position error (MPJPE) as the evaluation metric and adopt the three protocols described in previous work [?].

Protocol #1 uses subjects S1, S5, S6, S7, S8 for training and S9, S11 for testing. It is the most widely used evaluation setting for the Human3.6M dataset.

Protocol #2 is based on Protocol #1 but aligns the estimated 3D pose to the ground truth by a rigid transformation (Procrustes analysis); this protocol evaluates the correctness of the 3D pose structure.

Protocol #3 aims to evaluate the generality of methods across camera parameters; it uses 3 camera views for training and the remaining one for testing [?].
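The metrics behind the three protocols can be sketched as follows (a sketch only: whether protocol #2's rigid alignment includes a scale term is an assumption; we use rotation and translation only):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per joint position error: average Euclidean distance between
    predicted and ground-truth 3D joints (protocols #1 and #3)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def procrustes_mpjpe(pred, gt):
    """Protocol #2 sketch: rigidly align pred to gt (rotation plus
    translation) via orthogonal Procrustes before computing MPJPE."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    P, G = pred - mu_p, gt - mu_g
    U, _, Vt = np.linalg.svd(P.T @ G)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return mpjpe(P @ R.T + mu_g, gt)

gt = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
rot = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about Z
pred = gt @ rot.T + np.array([3., 2., 1.])   # rotated and shifted copy of gt
assert mpjpe(pred, gt) > 1.0                 # raw error is large
assert procrustes_mpjpe(pred, gt) < 1e-9     # rigid alignment removes it
```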
MPII is widely used for 2D human pose estimation in the wild. We provide a qualitative evaluation on this dataset.
4.2 Implementation Details
2D pose estimation. We follow the configuration of [?] and use the stacked hourglass network [?] as the 2D pose estimator. The variance of the Gaussian used to render heatmaps is set to 4 in our experiments.
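A heatmap channel can be rendered as a 2D Gaussian centered at the joint; a sketch assuming the stated variance of 4 means sigma^2 = 4, i.e. sigma = 2 (the function name and map size are illustrative):

```python
import numpy as np

def joint_heatmap(center, shape=(64, 64), sigma=2.0):
    """2D Gaussian heatmap for one joint, as used to build the 16 heatmap
    input channels of PRCNN. center is (x, y) in pixel coordinates."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

h = joint_heatmap((10, 20))
assert h[20, 10] == 1.0                      # peak at the joint location
assert h.max() == 1.0
```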
PRCNN. PRCNN is based on the deep residual network, except that the size of the last fully connected layer is set to the number of joints, so that each joint receives a one-dimensional feature. We implement the pairwise layer and rank transfer layer of RankNet [?] to obtain the ranking matrix. We train the PRCNN model with the binary cross-entropy loss using stochastic gradient descent (SGD) for 25 epochs over the whole training set. In all experiments, the models are trained on 8 TITAN Xp GPUs with a batch size of 64 and an initial learning rate of 0.1.
DPNet. We set the root of the 3D pose to (0, 0, 0) following [?]. We train DPNet for 400 epochs using Adam with an initial learning rate of 0.001 and exponential decay. The mini-batch size is set to 64. The dropout probability is set to 0.3 so that more of the information in the rankings is retained. Owing to the low input dimensionality, we use only one TITAN Xp GPU to train this network. In protocol #3, the augmented dataset is three times the size of the original Human3.6M dataset.
4.3 Comparisons with Stateoftheart Methods
We compare the proposed method with state-of-the-art methods on the Human3.6M dataset. Comparisons with both end-to-end and two-stage methods under all protocols are shown in Table 2. We make three observations: (1) Depth ranking is an effective feature, and the proposed DRPose3D outperforms the state-of-the-art methods, including both end-to-end [?; ?; ?; ?] and two-stage methods [?; ?], under all protocols. (2) Augmentation is effective [?; ?]. (3) Depth ranking improves the robustness of DPNet: the proposed method achieves a reconstruction error of 69.0mm without augmentation on protocol #3, down from the 72.8mm of [?]. These results show that depth rankings eliminate ambiguities and thus prevent our network from overfitting to specific camera perspectives.
To verify that depth ranking improves the baseline network, we evaluate all joints by MPJPE. The results in Figure 4 indicate that joints with larger reachable workspaces, such as the wrists, elbows, and feet, provide more robust ranking cues and obtain larger improvements than other joints; for example, the right wrist achieves a notably larger error drop than the thorax.
4.4 Upper Bound of Our Approach
2D Pose  Depth Ranking  MPJPE (mm) 

GT  None  45.5 
GT  Predicted  41.2 
GT  GT  30.2 
To demonstrate our claim that depth ranking improves the network, we use ground-truth 2D poses and pairwise ranking matrices to explore the upper bound of our approach, as shown in Table 1. The results show that: (1) Using the ranking matrix predicted by PRCNN reduces the MPJPE by 4.3mm (from 45.5mm to 41.2mm). (2) Using both ground-truth 2D poses and ground-truth depth rankings, the proposed method achieves 30.2mm on Human3.6M, an improvement of 15.3mm over using ground-truth 2D positions alone. (3) This indicates that more accurate ranking estimation could further improve 3D pose estimation.
4.5 Ablation Study
We study each component of our method on the Human3.6M dataset to verify its effectiveness.
Protocol #1  Direction  Discuss  Eat  Greet  Phone  Photo  Pose  Purchase  Sit  SitDown  Smoke  Wait  WalkDog  Walk  WalkT.  Avg. 

LinKDE (PAMI’16)  132.7  183.6  132.3  164.4  162.1  205.9  150.6  171.3  151.6  243.0  162.1  170.7  177.1  96.6  127.9  162.1 
Zhou et al. (ECCV’16)  91.8  102.4  96.7  98.8  113.4  125.2  90.0  93.8  132.2  159.0  107.0  94.4  126.0  79.0  99.0  107.3 
Pavlakos et al. (CVPR’17)  67.4  71.9  66.7  69.1  72.0  77.0  65.0  68.3  83.7  96.5  71.1  65.8  74.9  59.1  63.2  71.9 
Zhou et al. (ICCV’17)  54.8  60.7  58.2  71.4  62.0  65.5  53.8  55.6  75.2  111.6  64.1  66.0  51.4  63.2  55.3  64.9 
Martinez et al. (ICCV’17)  51.8  56.2  58.1  59.0  69.5  78.4  55.2  58.1  74.0  94.6  62.3  59.1  65.1  49.5  52.4  62.9 
Fang et al. (AAAI’18)  50.1  54.3  57.0  57.1  66.6  73.3  53.4  55.7  72.8  88.6  60.3  57.7  62.7  47.5  50.6  60.4 
Sun et al. (ICCV’17)  52.8  54.8  54.2  54.3  61.8  67.2  53.1  53.6  71.7  86.7  61.5  53.4  61.6  47.1  53.4  59.1 
Ours  49.2  55.5  53.6  53.4  63.8  67.7  50.2  51.9  70.3  81.5  57.7  51.5  58.6  44.6  47.2  57.8 
Protocol #2  Direction  Discuss  Eat  Greet  Phone  Photo  Pose  Purchase  Sit  SitDown  Smoke  Wait  WalkDog  Walk  WalkT.  Avg. 
Bogo et al. (ECCV’16)  62.0  60.2  67.8  76.5  92.1  77.0  73.0  75.3  100.3  137.3  83.4  77.3  86.8  79.7  87.7  82.3 
MorenoNoguer (CVPR’17)  66.1  61.7  84.5  73.7  65.2  67.2  60.9  67.3  103.5  74.6  92.6  69.6  71.5  78.0  73.2  74.0 
Zhou et al. (Arxiv’17)  47.9  48.8  52.7  55.0  56.8  65.5  49.0  45.5  60.8  81.1  53.7  51.6  54.8  50.4  55.9  55.3 
Sun et al. (ICCV’17)  42.1  44.3  45.0  45.4  51.5  53.0  43.2  41.3  59.3  73.3  51.0  44.0  48.0  38.3  44.8  48.3 
Martinez et al. (ICCV’17)  39.5  43.2  46.4  47.0  51.0  56.0  41.4  40.6  56.5  69.4  49.2  45.0  49.5  38.0  43.1  47.7 
Fang et al. (AAAI’18)  38.2  41.7  43.7  44.9  48.5  55.3  40.2  38.2  54.5  64.4  47.2  44.3  47.3  36.7  41.7  45.7 
Ours  36.6  41.0  40.8  41.7  45.9  48.0  37.0  37.1  51.9  60.4  43.9  38.4  42.7  32.9  37.2  42.9 
Protocol #3  Direction  Discuss  Eat  Greet  Phone  Photo  Pose  Purchase  Sit  SitDown  Smoke  Wait  WalkDog  Walk  WalkT.  Avg. 
Pavlakos et al. (CVPR’17)  79.2  85.2  78.3  89.9  86.3  87.9  75.8  81.8  106.4  137.6  86.2  92.3  72.9  82.3  77.5  88.6 
Martinez et al. (ICCV’17)  65.7  68.8  92.6  79.9  84.5  100.4  72.3  88.2  109.5  130.8  76.9  81.4  85.5  69.1  68.2  84.9 
Zhou et al. (ICCV’17)  61.4  70.7  62.2  76.9  71.0  81.2  67.3  71.6  96.7  126.1  68.1  76.7  63.3  72.1  68.9  75.6 
Fang et al. (AAAI’18)  57.5  57.8  81.6  68.8  75.1  85.8  61.6  70.4  95.8  106.9  68.5  70.4  73.89  58.5  59.6  72.8 
Ours w/o augmentation  53.6  56.5  73.2  66.6  72.8  79.6  56.4  71.1  87.4  106.3  65.2  64.3  69.7  58.8  57.5  69.0 
Ours  55.8  56.1  59.0  59.3  66.8  70.9  54.0  55.0  78.8  92.4  58.9  56.2  64.6  56.6  55.5  62.8 
We perform an ablative analysis to understand the impact of the design choices in PRCNN; the results are presented in Figure 5(a). Taking only the original images as input, the model (w/o heatmaps) achieves a lower accuracy; combining the heatmaps of human joints with the original image increases the accuracy, which demonstrates the effectiveness of explicitly incorporating semantic knowledge. We also tried backbones of different depths, i.e., ResNet-18, ResNet-34, and ResNet-50. Since we train PRCNN only on the Human3.6M dataset, whose images have almost identical backgrounds, a very deep network like ResNet-50 may overfit. ResNet-34 achieves the best mean accuracy under protocol #1 and is chosen for all other experiments.
We further illustrate the pairwise ranking accuracies in Figure 6, where brighter blocks in the ranking matrix indicate higher accuracy. We find that pairs of joints within the torso (inside the dashed box), such as Hip.R-Spine, have lower accuracy because their depths are too close to be distinguished. However, relations such as that between the right and left shoulders, with 96.71% accuracy, reliably indicate which direction the subject is facing.
Figure 5(b) shows the component analysis of DPNet. To evaluate the cross-camera-perspective effect, these experiments are conducted under protocol #3. Our proposed method with all components achieves 62.8mm and exceeds the baseline (w/o Rank & Augment) by a large margin. When we remove the depth ranking, the MPJPE increases, showing that depth rankings effectively enhance the regression from 2D to 3D pose. The model without DepthNet, which directly combines 2D joint locations with the noisy ranking matrix, suffers a growth in error, indicating that DepthNet effectively reduces the noise in the pairwise ranking matrix. We then evaluate the effectiveness of data augmentation: augmentation with ground-truth ranking matrices and 2D joint locations generated by virtual cameras performs better than the model without augmentation (69.0mm), and the statistical augmentation of the ranking matrix further decreases the joint error.
4.6 Qualitative Results
Since DPNet is trained on 2D locations and rankings, it can estimate 3D poses for images in the wild. We give qualitative results on both the Human3.6M and MPII datasets in Figure 7. The first row illustrates examples from Human3.6M; the red dotted line is the baseline estimation, while the blue line indicates ours. Depth ranking provides geometric knowledge that eliminates the ambiguities of the limbs and corrects the angle of the torso, as shown in the side views of the top-row samples.
More challenging samples from the MPII dataset are shown in the bottom row. The proposed DRPose3D exhibits good generality and obtains reasonable 3D poses even in challenging cases in which the subject lies down or performs exaggerated actions.
5 Conclusion
In this paper, we propose the two-stage DRPose3D approach to tackle the 3D pose estimation task. The proposed method uses depth ranking to fully utilize both 2D human pose datasets and the 3D information contained in images. To extract depth rankings from a single RGB image, we first design the PRCNN model to generate pairwise depth relations between human joints. A coarse-to-fine 3D pose estimator is then proposed to predict the 3D human pose from both 2D joints and depth rankings. Finally, we explore data augmentation for DRPose3D and show that depth ranking further enlarges the improvement brought by data augmentation. Overall, the proposed method achieves state-of-the-art results on three common protocols of the Human3.6M dataset.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (No. 61472245) and the Science and Technology Commission of Shanghai Municipality Program (No. 16511101300).
References
 [Akhter and Black, 2015] Ijaz Akhter and Michael J Black. Pose-conditioned joint angle limits for 3d human pose reconstruction. In CVPR, pages 1446–1455, 2015.
 [Andriluka et al., 2014] Mykhaylo Andriluka, Leonid Pishchulin, Peter Gehler, and Bernt Schiele. 2d human pose estimation: New benchmark and state of the art analysis. In CVPR, pages 3686–3693, 2014.
 [Bogo et al., 2016] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J Black. Keep it smpl: Automatic estimation of 3d human pose and shape from a single image. In ECCV, pages 561–578. Springer, 2016.
 [Burges et al., 2005] Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In ICML, pages 89–96. ACM, 2005.
 [Cao et al., 2007] Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In ICML, pages 129–136. ACM, 2007.
 [Chen et al., 2016] Weifeng Chen, Zhao Fu, Dawei Yang, and Jia Deng. Single-image depth perception in the wild. In NIPS, pages 730–738, 2016.
 [Chen et al., 2017] Shixing Chen, Caojin Zhang, Ming Dong, Jialiang Le, and Mike Rao. Using Ranking-CNN for age estimation. In CVPR, 2017.
 [Cohen et al., 1998] William W Cohen, Robert E Schapire, and Yoram Singer. Learning to order things. In NIPS, pages 451–457, 1998.
 [Fang et al., 2018] Hao-Shu Fang, Yuanlu Xu, Wenguan Wang, Xiaobai Liu, and Song-Chun Zhu. Learning knowledge-guided pose grammar machine for 3d human pose estimation. AAAI, 2018.
 [He et al., 2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
 [Ionescu et al., 2014] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6M: Large scale datasets and predictive methods for 3d human sensing in natural environments. TPAMI, 36(7):1325–1339, 2014.
 [Jahangiri and Yuille, 2017] Ehsan Jahangiri and Alan L Yuille. Generating multiple hypotheses for human 3d pose consistent with 2d joint detections. arXiv preprint arXiv:1702.02258, 2017.
 [Li et al., 2015] Jianshu Li, Chao Xiong, Luoqi Liu, Xiangbo Shu, and Shuicheng Yan. Deep face beautification. In ACM MM, pages 793–794. ACM, 2015.
 [Lin et al., 2017] Mude Lin, Liang Lin, Xiaodan Liang, Keze Wang, and Hui Chen. Recurrent 3d pose sequence machines. In CVPR, 2017.
 [Martinez et al., 2017] Julieta Martinez, Rayat Hossain, Javier Romero, and James J. Little. A simple yet effective baseline for 3d human pose estimation. In ICCV, Oct 2017.
 [Mehta et al., 2016] Dushyant Mehta, Helge Rhodin, Dan Casas, Oleksandr Sotnychenko, Weipeng Xu, and Christian Theobalt. Monocular 3d human pose estimation using transfer learning and improved cnn supervision. arXiv preprint arXiv:1611.09813, 2016.
 [Moreno-Noguer, 2017] Francesc Moreno-Noguer. 3d human pose estimation from a single image via distance matrix regression. In CVPR, pages 1561–1570. IEEE, 2017.
 [Mori and Malik, 2006] Greg Mori and Jitendra Malik. Recovering 3d human body configurations using shape contexts. TPAMI, 28(7):1052–1062, 2006.
 [Newell et al., 2016] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In ECCV, pages 483–499. Springer, 2016.
 [Pavlakos et al., 2017] Georgios Pavlakos, Xiaowei Zhou, Konstantinos G Derpanis, and Kostas Daniilidis. Coarse-to-fine volumetric prediction for single-image 3d human pose. In CVPR, pages 1263–1272. IEEE, 2017.
 [Ramakrishna et al., 2012] Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Reconstructing 3d human pose from 2d image landmarks. ECCV, pages 573–586, 2012.
 [Rogez and Schmid, 2016] Grégory Rogez and Cordelia Schmid. Mocap-guided data augmentation for 3d pose estimation in the wild. In NIPS, pages 3108–3116, 2016.
 [Sun et al., 2017] Xiao Sun, Jiaxiang Shang, Shuang Liang, and Yichen Wei. Compositional human pose regression. In ICCV, Oct 2017.
 [Zhou et al., 2016] Xingyi Zhou, Xiao Sun, Wei Zhang, Shuang Liang, and Yichen Wei. Deep kinematic pose regression. In ECCV Workshops, pages 186–201. Springer, 2016.
 [Zhou et al., 2017a] Xiaowei Zhou, Menglong Zhu, Georgios Pavlakos, Spyridon Leonardos, Kostantinos G Derpanis, and Kostas Daniilidis. Monocap: Monocular human motion capture using a cnn coupled with a geometric prior. arXiv preprint arXiv:1701.02354, 2017.
 [Zhou et al., 2017b] Xingyi Zhou, Qixing Huang, Xiao Sun, Xiangyang Xue, and Yichen Wei. Towards 3d human pose estimation in the wild: a weakly-supervised approach. In ICCV, 2017.