PGNet: Pose-Guided Point Cloud Generating Networks for 6-DoF Object Pose Estimation


Humans are able to perform fast and accurate object pose estimation even under severe occlusion by exploiting learned object model priors from everyday life. However, most recently proposed pose estimation algorithms neglect to utilize the information of object models, often end up with limited accuracy, and tend to fall short in cluttered scenes. In this paper, we present a novel learning-based model, Pose-Guided Point Cloud Generating Networks for 6D Object Pose Estimation (PGNet), designed to effectively exploit object model priors to facilitate 6D object pose estimation. We achieve this with an end-to-end estimation-by-generation workflow that combines the appearance information from the RGB-D image and the structure knowledge from object point cloud to enable accurate and robust pose estimation. Experiments on two commonly used benchmarks for 6D pose estimation, YCB-Video dataset and LineMOD dataset, demonstrate that PGNet outperforms the state-of-the-art method by a large margin and shows marked robustness towards heavy occlusion, while achieving real-time inference.


1 Introduction

6-DoF object pose estimation lies at the core of a wide range of applications including robotic manipulation [43, 7], augmented reality [21], navigation and autonomous driving [6, 10, 38]. Ideally, a handy 6D object pose estimation system should deal with objects of varying shape and texture, achieving marked robustness towards heavy occlusion, sensor noise, and changing lighting conditions, while meeting the speed requirement of real-time tasks.

Figure 1: An example of our pose-guided point cloud generation scheme. Our model takes an object point cloud in canonical pose and an RGB-D image as input, and generates point clouds in both the canonical pose and the target pose to facilitate pose estimation.

For many years the main focus in the field of 6D object pose estimation was limited to geometric feature/template-based methods. Building on the pioneering work of [12], practical and robust solutions have been designed to handle objects with poor textures [13, 24, 2, 18, 14, 30, 3, 5]. For most of these systems, the key to success is the use of either hand-crafted templates or feature representations extracted from data. These systems typically run a two-stage pipeline: a) putative feature matching, and b) geometric verification of the matched features. Nevertheless, the techniques mentioned above have, in our view, several fundamental shortcomings. Firstly, these feature/template-based techniques share common weaknesses, such as poor robustness to clutter, occlusion, and changing lighting conditions. Secondly, scalability to large numbers of objects (i.e., either object instances or object categories) remains an open challenge for these techniques, due to the growing number of required features/templates. Thirdly, the reliance on hand-crafted features, fixed matching procedures, and particularly on verification procedures has made it difficult for them to satisfy the requirements of accurate pose estimation and fast inference simultaneously.

Recent success in visual recognition has inspired a novel family of data-driven methods that use deep networks to surmount the limitations (especially the scalability issue) raised above [40, 29, 37, 20, 32, 31, 28]. PoseCNN [37] and MCN [20] have achieved decent accuracy on the YCB-Video dataset using single-view and multi-view inputs respectively. However, similar to the prior methods, these methods still require elaborate post-processing refinement steps to fully utilize the depth information, such as the computationally expensive Iterative Closest Point (ICP) [9] procedure in PoseCNN and the multi-view hypothesis verification scheme in MCN, preventing them from satisfying the requirements of real-time inference. Inspired by recent successes in 3D object recognition that combine information from both the RGB camera and the depth sensor, such as Frustum PointNet [25] and PointFusion [38], DenseFusion [33] proposes to better exploit the complementary nature of color and depth information from RGB-D data with an end-to-end deep model, and has achieved promising performance with the capacity for real-time inference. Still, a method using only single-view RGB-D data is inherently incapable of addressing the ambiguity of object appearance [20], which can potentially harm performance.

Unlike automatic systems that are susceptible to environment, sensor noise and object appearance, humans are capable of performing decent and fast object pose estimation, even under adversarial conditions (i.e., heavy occlusion and changing ambient lighting). Our assumption is that humans exploit learned object model priors from everyday life, and therefore can readily and accurately infer the object pose. Hence, we attempt to explore how the prior knowledge of object can be integrated into state-of-the-art single-view 6D object pose estimation systems, so that our method can 1) perform more accurate pose estimation without prohibitively expensive post-hoc steps, 2) achieve marked robustness towards heavy occlusion, and 3) satisfy the speed requirement of real-time tasks.

In this work we propose an end-to-end deep learning approach for estimating 6-DoF poses of known objects from RGB-D inputs and object point clouds. The core of our approach is to sufficiently explore the intrinsic properties of object point clouds by using pose-guided point cloud generation as a proxy task. To be specific, our method takes an object point cloud in canonical pose and an RGB-D image as inputs, generating an object point cloud in the pose corresponding to the RGB-D input to facilitate object pose estimation. The estimation-by-generation scheme enables our method to explicitly encode both the appearance and the 3D geometric information, which is essential to address the ambiguity of object appearance and occlusion in cluttered scenes. As demonstrated by our experiments, by effectively exploiting object model priors, our method combines RGB-D data and the 3D point cloud representation in a more effective manner at the per-point level, as opposed to prior work which only uses 2D or limited 3D information to compute dense features [33, 25]. We evaluate our method on two popular benchmarks for 6D pose estimation, the YCB-Video dataset [37] and the LineMOD dataset [13]. Experiments show that our method alone (i.e., without any post-processing steps) outperforms DenseFusion without iterative refinement [33] significantly by 10.0% in ADD pose accuracy [13], and even outperforms DenseFusion with iterative refinement by 1.9% on the LineMOD dataset. On the YCB-Video dataset, PGNet shows marked improvements of 2.3% and 0.8% in terms of the ADD-S2cm metric compared with the state-of-the-art DenseFusion without and with iterative refinement respectively. Moreover, we demonstrate that PGNet can also be augmented by iterative refinement, achieving 99.9% accuracy in terms of the ADD-S2cm metric and 93.9% in terms of the AUC metric [37] on the YCB-Video dataset.
It is fair to say that our method further improves the state-of-the-art system, and to our knowledge surpasses all the competing network-based methods without laborious tuning, while maintaining the efficiency in inference time.

We summarize our contributions in this work as follows:

  • We propose an effective way to integrate the prior knowledge of object into state-of-the-art single-view 6D object pose estimation systems;

  • We demonstrate that the proposed method significantly outperforms the competing methods, approaching and exceeding SOTA without post-hoc steps on two benchmark datasets respectively;

  • We show that our method is capable of robust pose estimation under heavy occlusion, efficient enough for real-time inference (1.5ms per object instance) and extendable for better performance (ADD: 97.4% on LineMOD and ADD-S2cm: 99.9%, AUC: 93.9% on YCB-Video dataset with iterative refinement).

2 Related Work

3D representations. Most deep neural networks for 3D inputs model the 3D space as pre-partitioned regular voxels, and extend 2D convolution to voxel-based 3D convolution [4, 8, 36]. The main problem of voxel-based methods is that increasing the spatial resolution leads to fast growth of neural-network size and computational cost. Following the voxel-based methods, a family of methods using elaborately designed sampling strategies, such as octree-based [23] and kd-tree-based [19] neural networks, partly overcome this challenge. Recently, neural networks based on purely 3D point representations (i.e., 3D coordinates) [1, 26, 27, 39] have been shown to work quite efficiently, while achieving promising performance on several 3D recognition tasks. Point-based neural networks significantly reduce the overhead of converting point clouds into other data formats (such as trees and voxels), circumventing the potential information loss due to the conversion.

Figure 2: Our model for 6D object pose estimation has three components: (a) a point-wise fusion network, (b) a point cloud generating network, and (c) a pose estimation network. Best viewed in color.

Prior to this work, Frustum PointNets [25] and VoxelNet [42] use a PointNet-like [26] structure and achieve state-of-the-art performance on the KITTI benchmark [10] in tasks related to autonomous driving, demonstrating the effectiveness of this architecture. However, urban driving scenarios differ from those of indoor pose estimation, since adversarial conditions such as heavy occlusion and sensor noise are more common and severe in the latter. In this work, we borrow the idea of directly processing 3D point representations to achieve real-time inference as well as decent accuracy under heavy occlusion.

Pose from RGB/RGB-D images. For many years, approaches for 6D pose estimation focused on constructing appropriate geometric features/templates from the input RGB/RGB-D images and performing correspondence grouping and hypothesis verification [12, 13, 24, 14, 5]. These feature/template-based methods are either hard-coded [12] or require extra tuning when applied to different objects [13, 24, 14]. As alternatives, [2, 3, 30, 35] propose methods optimizing surrogate objectives, and provide practical and robust solutions. In a recent benchmark for this task, BOP [15], where no or few real training data are available but only CAD models for rendering synthetic training data (i.e., learning-based methods have to handle a severe domain gap between training and testing), purely geometric approaches (based on [14]) significantly outperform learning-based approaches. However, as we discuss in the introduction, these methods fail to simultaneously achieve good performance and fast inference speed, while scalability remains a critical issue.

Newer network-based methods such as PoseCNN [37] and MCN [20] directly estimate 6D poses from image data using a CNN-based architecture, but they rely on expensive post-processing steps to make full use of the depth input. Inspired by recent successes in 3D object recognition, DenseFusion [33] proposes to better exploit the complementary nature of color and depth information with an end-to-end network, and has achieved promising performance with the capacity for real-time inference. Nevertheless, in our view, methods using single-view RGB-D data cannot fully capture the 3D object shape, which is essential in 6D pose estimation, since the inputs lack the deterministic information needed to address the ambiguity of object appearance. We show that our method tackles this challenge by effectively learning the 3D object information, thereby outperforming most competing methods while achieving fast inference speed at the same time.

Our method is most related to DenseFusion [33], in which local feature fusion between color and depth information is performed using a heterogeneous architecture. By introducing object point clouds as the object prior and performing a pose-guided point cloud generation task, our novel estimation-by-generation scheme surpasses DenseFusion's estimation-by-fusion method by a large margin. Since our method is a step forward from DenseFusion, we demonstrate that it is also compatible with the iterative refinement method [33], which leads to extra improvements in pose estimation while maintaining fast inference speed.

3 Approach

Accurately estimating the pose of a known object in adversarial conditions (e.g., heavy occlusion, poor lighting, etc.) is only possible when deterministic geometric and appearance information is accessible. Therefore, pose estimation in 3D space using single-view RGB-D images remains a challenge, since the inputs lack the critical 3D shape information needed to address the ambiguity of object appearance. To overcome this challenge, we propose to leverage the intrinsic properties of object models to guide pose estimation. More specifically, we design a novel pose-guided point cloud generating framework, where we fuse the appearance information from the RGB-D image and the structure knowledge from the object point cloud to predict the 6D object pose. The pipeline of our method is presented in Figure 2.

In the following sections, we first introduce our design of the pose-guided point cloud generating networks, which extract the knowledge of object models from the given point cloud and incorporate this knowledge with the RGB-D input to facilitate pose estimation (Sec. 3.1). Then, we present the learning objectives of our method (Sec. 3.2). Finally, we present the training paradigm and other technical details of our method (Sec. 3.3).

3.1 Pose-Guided Point Cloud Generating Networks

Figure 3: Detailed Point-wise Fusion architecture: (a) Point-wise Fusion pipeline, (b) point-fusion block and (c) modified non-local block.

The proposed pose-guided point cloud generating networks (PGNet) are comprised of three basic components: (a) a point-wise fusion network that fuses the information from both RGB-D image and object point cloud in canonical pose to encode the intrinsic object properties (Sec. 3.1.1), (b) a point cloud generating network that utilizes the encoded feature from fusion network and performs pose-guided point cloud generation (Sec. 3.1.2), and (c) a pose estimation network which takes the fused feature from point-wise fusion network to estimate the 3D translation and 3D rotation (Sec. 3.1.3). Details of our architecture are described below.

Point-wise Fusion Network

The first component of our model (Figure 2a) takes both the RGB-D image and the object point cloud as inputs, generating a fused feature that represents the encoded object properties. More specifically, to emphasize the appearance information of an object, we use the pre-trained segmentation architecture proposed by [37] to segment the objects of interest in the RGB-D images, and project the segmented depth pixels into 3D representations using the known camera intrinsic matrix before feeding them into the rest of the PGNet pipeline. At the beginning of the point-wise fusion network, we use a CNN-based embedding network to obtain the deep representation of the object from the RGB image, after which we randomly sample data points from each data branch (i.e., deep representation, re-projected depth values, and object point cloud). The goal of this step is to form a sparse representation of each data source, thereby enabling the use of a PointNet-like architecture (i.e., MLP followed by a pooling layer) [26] to realize fast inference.
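As a concrete illustration of the re-projection step, the sketch below back-projects segmented depth pixels into camera-space 3D points with a standard pinhole model; the function name and example intrinsics are hypothetical, not taken from the paper's code.

```python
import numpy as np

def depth_to_points(depth, mask, K):
    """Back-project masked depth pixels into 3D camera coordinates.

    depth: (H, W) depth map in meters; mask: (H, W) boolean segmentation;
    K: 3x3 camera intrinsic matrix.  Hypothetical helper for illustration.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    vs, us = np.nonzero(mask)            # pixel rows (v) and columns (u)
    z = depth[vs, us]
    x = (us - cx) * z / fx               # pinhole back-projection
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # (N, 3) point cloud
```

The resulting (N, 3) points form the geometric branch that is later fused with the per-pixel RGB embeddings.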

The main idea of this network is to extract intrinsic object properties from different data sources, while discarding properties that are non-essential for the task of 6D pose estimation, such as object texture, lighting conditions and sensor noise. A simple but effective design is to follow DenseFusion [33]: use three branches of MLPs that separately process the deep representation of the RGB image, the re-projected depth values and the sampled object point cloud, and fuse the features by concatenation to generate a dense per-point feature. Since the pose-guided point cloud generation task serves as a strong per-point supervision, it seems natural and justified to do so. However, we have to point out that matching each point in the object point cloud with exactly the corresponding point from the RGB-D image is rather difficult, and the mismatched object point cloud can therefore act as a perturbation degrading model performance, as we demonstrate in the ablative experiments (see Sec. 4.2). To address this problem, we propose the point-fusion block, in which each FC layer is followed by a modified non-local block [34]. We find that using point-fusion blocks significantly improves the fusion quality, compared with a naive solution that uses plain MLPs and randomly chooses the corresponding point for each point from the depth image. The detailed architecture of the point-fusion block and the network are illustrated in Figure 3.

We set the output dimension of the three branches to a common size and concatenate the corresponding features, so that the feature of each point becomes a fixed-length vector representing the appearance (from the RGB image) and geometric (from the depth image and the object point cloud) information of the input at the corresponding location. Then, the concatenated features are processed by a PointNet-like architecture as shown in Figure 2a. Here we obtain a fixed-size global feature vector by performing global average pooling on the processed features as in [33], with which we enrich each point-feature to provide global context. We use a squeeze-and-excitation block [16] at the end of the point-wise fusion network to increase its sensitivity to informative features.
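The fuse-then-pool idea above can be sketched in a few lines of numpy; the branch widths and single-layer "MLPs" are illustrative stand-ins for the real architecture, and the point-fusion and squeeze-and-excitation blocks are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w):
    """Shared per-point MLP: one linear layer + ReLU (stand-in for a real branch)."""
    return np.maximum(x @ w, 0.0)

N, d = 500, 64                          # hypothetical: N sampled points, d dims per branch
rgb_feat  = rng.normal(size=(N, 32))    # per-pixel CNN embeddings
depth_pts = rng.normal(size=(N, 3))     # re-projected depth points
model_pts = rng.normal(size=(N, 3))     # sampled object point cloud (canonical pose)

# Three branches projected to a common dimension, then concatenated per point.
f = np.concatenate([
    mlp(rgb_feat,  rng.normal(size=(32, d)) * 0.1),
    mlp(depth_pts, rng.normal(size=(3, d)) * 0.1),
    mlp(model_pts, rng.normal(size=(3, d)) * 0.1),
], axis=1)                              # (N, 3d) per-point fused feature

# PointNet-style global context: average-pool and append to every point feature.
g = f.mean(axis=0, keepdims=True)       # (1, 3d) global feature
per_point = np.concatenate([f, np.repeat(g, N, axis=0)], axis=1)  # (N, 6d)
```

Each row of `per_point` now carries both local (appearance + geometry) and global context, which is what the downstream generation and pose heads consume.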

Point Cloud Generating Network

We expect that the encoded feature produced by the Point-wise Fusion Network contains the complete 3D information about the object, and thus can address the ambiguity of the appearance caused by single-view inputs and more precisely predict the 6D pose. Here we introduce point cloud generating tasks as the supervision to achieve more effective feature fusion and facilitate the extraction of intrinsic properties of object models. To do so, we use two decoder branches (Figure 2b) to separately generate object point cloud in canonical pose and target pose. We generate the object point cloud in target pose to ensure that the feature generated by point-wise fusion network successfully encodes the intrinsic properties of object model. We generate the object point cloud in canonical pose to stabilize the training process and enforce that the encoded feature contains the complete 3D information of object model.

Intuitively, these branches compel the point-wise fusion network to properly learn the information from point cloud models, and to explicitly infer the pose information embedded in the fused feature. By fully exploiting the information from the point cloud representation, the network achieves extra robustness and superior accuracy even in heavily occluded scenes, as we show in Sec. 4.2 and Sec. 4.3. Compared with previous works that directly use the pose estimation error to supervise models, the proposed estimation-by-generation method provides stronger and more detailed (i.e., per-point) supervision to help the networks better understand the pose as well as the structure knowledge of the input object. In our experiments, we implement a variant of the folding-based decoder [39] that utilizes Gaussian-sampled 3D grids to obtain decent generation results.
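A minimal sketch of a folding-based decoder in the spirit of [39], assuming a Gaussian-sampled 3D grid and two folding steps; all dimensions and the single-hidden-layer folds are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def fold(grid, glob, w1, w2):
    """One folding step: condition each grid point on the global feature,
    then map it to 3D through a small MLP (tanh nonlinearity, illustrative)."""
    h = np.concatenate([grid, np.repeat(glob[None, :], len(grid), axis=0)], axis=1)
    return np.tanh(h @ w1) @ w2          # (M, 3) generated points

M, gdim, fdim, hdim = 1024, 3, 256, 128
grid = rng.normal(size=(M, gdim))        # Gaussian-sampled 3D grid (per the paper)
glob = rng.normal(size=(fdim,))          # encoded global feature from the fusion net

w1 = rng.normal(size=(gdim + fdim, hdim)) * 0.05
w2 = rng.normal(size=(hdim, 3)) * 0.05
pts_1 = fold(grid, glob, w1, w2)         # first fold: grid -> coarse surface
# The second fold re-uses the deformed points as the new "grid".
w3 = rng.normal(size=(3 + fdim, hdim)) * 0.05
w4 = rng.normal(size=(hdim, 3)) * 0.05
pts_2 = fold(pts_1, glob, w3, w4)        # refined point cloud, shape (M, 3)
```

Two such decoders, one per target pose, would be trained with the Chamfer-distance generation loss of Sec. 3.2.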

Pose Estimation Network

We feed the features produced by the point-wise fusion network into a final network (Figure 2c) that predicts the object's 6D pose. Note that we predict the 6D pose from the features instead of the generated point clouds to reduce the computational cost. By doing so, the proposed generation networks are only used during training to provide a supervision signal, which largely removes redundant computation and enables fast inference. Surprisingly, we find in our experiments that directly predicting the 6D pose from the encoded feature achieves even better performance than predicting the pose from the generated point cloud. We assume that the encoded feature produced by our framework offers better global information and sufficient local detail for pose estimation, benefiting from the strong supervision provided by the generation network.

In our experiments, we use a pose estimation network similar to that of DenseFusion [33] to make fair comparisons. The only differences lie in the following three points: a) the input channel number is slightly different, which is inevitable since we introduce the object point cloud as an additional input; b) instead of following the principle of "object per output branch", whereby each object class is associated with an output stream (i.e., parallel FC layers or MLPs) connected to a shared feature basis, we use a single MLP, as the object point cloud already provides sufficient class information and we want to disentangle the network size from the number of objects; c) instead of using a sigmoid activation function to produce the per-point confidence scores, we use a softmax activation function to normalize the confidence scores, which simplifies the learning objective (see Sec 3.2) and lets us discard the confidence regularization term as well as its hyper-parameter proposed by DenseFusion [33].

Figure 4: Qualitative results of our framework: (a) Generating results of object models in target pose on LineMOD dataset, and (b) Pose estimation results on YCB-Video dataset.

3.2 Learning Objective

We can represent a 6D pose by a homogeneous transformation matrix, $p = [R \,|\, t]$. To be specific, a 6D pose is described by a rotation matrix $R \in SO(3)$ and a translation vector $t \in \mathbb{R}^3$. In our experiments, the estimated pose is defined in the camera coordinate system. We use a quaternion to represent the 3D rotation and algebraically convert the quaternion into the rotation matrix to calculate the loss.
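For reference, the algebraic quaternion-to-rotation-matrix conversion mentioned above can be written as:

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix.

    The input is normalized first, so slightly non-unit network outputs
    still yield a valid rotation.
    """
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

For example, the quaternion (cos 45°, 0, 0, sin 45°) yields a 90° rotation about the z-axis.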

Pose Estimation Loss. Inspired by [13], for a single pair of pose representations $(p = [R\,|\,t],\; \hat{p}_i = [\hat{R}_i\,|\,\hat{t}_i])$, the loss for 6D pose estimation can be defined as [37, 33]

$$ L_p^i = \frac{1}{M} \sum_{j=1}^{M} \left\| (R x_j + t) - (\hat{R}_i x_j + \hat{t}_i) \right\|, $$

where $x_j$ denotes the $j$-th point of the $M$ randomly selected 3D points from the object's 3D model, $p$ is the target pose, and $\hat{p}_i$ is the pose estimated from the $i$-th per-point feature. Unfortunately, the above loss function does not handle symmetric objects appropriately, since the canonical orientation of a symmetric object is not well-defined, which leads to multiple correct 3D rotations. Using such a loss function on symmetric objects imposes unnecessary constraints on the networks, i.e., regressing to one of the alternative 3D rotations, thereby giving possibly inconsistent training signals. Therefore, for symmetric objects, we instead minimize the offset between each point on the estimated model and the closest point on the ground-truth object model. The loss function becomes:

$$ L_p^i = \frac{1}{M} \sum_{j=1}^{M} \min_{1 \le k \le M} \left\| (R x_j + t) - (\hat{R}_i x_k + \hat{t}_i) \right\|. $$
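The two losses above can be transcribed directly in numpy (brute-force M × M pairwise distances for the symmetric case; helper names are ours, not the paper's):

```python
import numpy as np

def add_loss(R, t, R_hat, t_hat, model_pts):
    """Non-symmetric loss: mean distance between corresponding transformed points."""
    gt   = model_pts @ R.T + t
    pred = model_pts @ R_hat.T + t_hat
    return np.linalg.norm(gt - pred, axis=1).mean()

def adds_loss(R, t, R_hat, t_hat, model_pts):
    """Symmetric variant: each ground-truth point matches its *closest* predicted point."""
    gt   = model_pts @ R.T + t
    pred = model_pts @ R_hat.T + t_hat
    d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=2)  # (M, M) pairwise
    return d.min(axis=1).mean()
```

Note that `adds_loss` is always less than or equal to `add_loss` for the same pose pair, since the closest-point match can only shrink each distance.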
Since our final goal is to choose the best result among the per-point estimations, we use the point-wise confidence scores to modulate the per-point loss as follows:

$$ L_{pose} = \sum_{i=1}^{N} c_i \, L_p^i, $$

where $c_i$ represents the confidence score corresponding with the $i$-th prediction, normalized by softmax so that $\sum_i c_i = 1$. Intuitively, estimation candidates leading to low pose estimation loss will have higher confidence scores. In the experiments, we take the estimation result that has the highest confidence as the final output.
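The confidence-weighted objective and the final selection rule can be sketched as follows; because the softmax scores sum to one, no separate confidence-regularization term is needed (function names are illustrative):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(x - x.max())
    return e / e.sum()

def confidence_weighted_loss(per_point_losses, logits):
    """Modulate per-point pose losses by softmax-normalized confidences.

    Returns the scalar training loss and the index of the candidate
    that would be selected at inference time (highest confidence).
    """
    c = softmax(logits)
    return float((c * per_point_losses).sum()), int(np.argmax(c))
```

At inference time only the argmax index is used: the prediction of the most confident point is returned as the final pose.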

Generation Loss. Given the set of target points $S$ and the generated point set $\hat{S}$, the generation error can be computed as the (extended) Chamfer distance [39]:

$$ d_{CH}(S, \hat{S}) = \max\left\{ \frac{1}{|S|} \sum_{x \in S} \min_{\hat{x} \in \hat{S}} \|x - \hat{x}\|_2,\;\; \frac{1}{|\hat{S}|} \sum_{\hat{x} \in \hat{S}} \min_{x \in S} \|\hat{x} - x\|_2 \right\}. $$

The first term enforces that any 3D point in the original point cloud is assigned a matching 3D point in the generated point cloud, and the second term enforces the matching vice versa. The max operation enforces that the distance from $S$ to $\hat{S}$ and the distance vice versa are minimized simultaneously.
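The extended Chamfer distance can be transcribed directly (brute-force pairwise distances, adequate for small clouds; the function name is ours):

```python
import numpy as np

def chamfer_distance(S, S_hat):
    """Extended Chamfer distance: the max of the two directed average
    nearest-neighbor distances between point sets S and S_hat."""
    d = np.linalg.norm(S[:, None, :] - S_hat[None, :, :], axis=2)  # (|S|, |S_hat|)
    return max(d.min(axis=1).mean(),   # S -> S_hat direction
               d.min(axis=0).mean())   # S_hat -> S direction
```

The distance is zero exactly when every point in each cloud has a coincident point in the other, which is why it works as a per-point generation supervision.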

The overall training loss will then be

$$ L = L_{pose} + \lambda_1 \, d_{CH}(P_c, \hat{P}_c) + \lambda_2 \, d_{CH}(P_t, \hat{P}_t), $$

where $P_c$, $P_t$, $\hat{P}_c$ and $\hat{P}_t$ denote the real point cloud in canonical pose, the real point cloud in target pose, the generated point cloud in canonical pose, and the generated point cloud in target pose, respectively. $\lambda_1$ and $\lambda_2$ are hyper-parameters adjusted to facilitate the training procedure: both are set to 1 to jointly train the networks.

3.3 Technical Details

Training Strategy. We employ a multi-step training paradigm to train the model. First, we jointly train PGNet to perform pose-guided point cloud generation and pose estimation, which at the same time incorporates the knowledge of object models into the RGB embedding sub-network contained in the point-wise fusion network. Next, following the training paradigm described in DenseFusion [33], we can also integrate the iterative refinement network into our model to further improve its performance.

Implementation Details. We use the same CNN-based RGB embedding network as in DenseFusion [33], which features a PSPNet [41] with ResNet18 [11] as the backbone. Noticing that DenseFusion [33] did not use BatchNorm in its RGB embedding network, we conduct ablative experiments (see Sec. 4.2) to demonstrate the effectiveness of our method. Besides, due to the limitation of our GPU resources, we use only 1 refinement iteration and a smaller number of sampled points for all experiments, as opposed to the 2 iterations and larger sample size used in DenseFusion, which gives the baseline method an extra advantage.

4 Experiments

In this section, we present both qualitative and quantitative results of our framework. On LineMOD [13] dataset, we compare our method with state-of-the-art methods using geometric template/feature-based techniques [2, 3], as well as recently emerged learning-based methods [17, 37, 29, 33]. We also evaluate our method on YCB-Video dataset [37] to show that our method outperforms the SOTA method, and achieves marked robustness under heavy occlusion.

4.1 Evaluation Metrics

We use two metrics to report the performance on the LineMOD dataset and YCB-Video dataset. We use the Average Distance of Model Points (ADD) for non-symmetric objects and ADD-S for the two symmetric objects (eggbox and glue) following prior works [13, 29, 31]. As for YCB-Video dataset, we report the accuracy in terms of both ADD-S2cm and AUC metrics as in prior works [33, 37].
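As an illustration of the ADD-S2cm metric used below, the sketch computes the fraction of test frames whose ADD-S distance falls under the 2 cm threshold; helper names and the toy inputs are hypothetical.

```python
import numpy as np

def adds_distance(gt_pts, pred_pts):
    """ADD-S: average distance from each GT model point to its closest predicted point."""
    d = np.linalg.norm(gt_pts[:, None, :] - pred_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def adds_2cm_accuracy(gt_list, pred_list, thresh=0.02):
    """Fraction of frames whose ADD-S falls below the threshold (2 cm, in meters)."""
    hits = [adds_distance(g, p) < thresh for g, p in zip(gt_list, pred_list)]
    return float(np.mean(hits))
```

The AUC metric [37] instead integrates this accuracy curve over a range of thresholds, rewarding poses that are close even when they miss the 2 cm cutoff.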

4.2 Evaluation on LineMOD dataset

Object | ape | ben. | cam. | can | cat | dri. | duc. | box.* | glu.* | hol. | iro. | lam. | pho. | Avg.
PoseCNN+DeepIM | 77.0 | 97.5 | 93.5 | 96.5 | 82.1 | 95.0 | 77.7 | 97.1 | 99.4 | 52.8 | 98.3 | 97.5 | 87.7 | 88.6
Implicit w/ ICP | 20.6 | 64.3 | 63.2 | 76.1 | 72.0 | 41.6 | 32.4 | 98.6 | 96.4 | 49.9 | 63.1 | 91.7 | 71.0 | 64.7
SSD-6D w/ ICP | 65 | 80 | 78 | 86 | 70 | 73 | 66 | 100 | 100 | 49 | 78 | 73 | 79 | 79
DF w/o IR | 79.5 | 84.2 | 76.5 | 86.6 | 88.8 | 77.7 | 76.3 | 99.9 | 99.4 | 79.0 | 92.1 | 92.3 | 88.0 | 86.2
Ours w/o IR | 92.2 | 97.1 | 92.1 | 96.8 | 98.8 | 95.7 | 89.4 | 100.0 | 99.4 | 96.3 | 98.8 | 99.0 | 96.0 | 96.2
DF w/ IR | 92.3 | 93.2 | 94.4 | 93.1 | 96.5 | 87.0 | 92.3 | 99.8 | 100.0 | 92.1 | 97.0 | 95.3 | 92.8 | 94.3
Ours w/ IR | 92.9 | 98.2 | 97.0 | 97.4 | 98.1 | 97.0 | 95.2 | 100.0 | 100.0 | 97.9 | 98.2 | 97.7 | 96.7 | 97.4
Table 1: Quantitative evaluation of 6D pose estimation (ADD [13]) on the LineMOD dataset. We present four groups of methods: RGB (PoseCNN with DeepIM [40]), RGB-D (Implicit [29] and SSD-6D [17] using ICP), models w/o and w/ iterative refinement (DenseFusion and ours). SOTA methods [2] (98.3%) and [3] (99.0%) are not listed since they didn’t report accuracies on each object. Objects with * are symmetric.

LineMOD Dataset. The LineMOD dataset [13] consists of 13 registered video sequences of 13 texture-poor/texture-less 3D objects. It is widely adopted by both feature/template-based methods [2, 3, 14] and recent learning-based approaches [40, 29, 31]. We use the same training and testing set as prior learning-based works [40, 22, 29] without additional synthetic data.

Object | Ours w/o G, PF, BN | Ours w/o PF, BN | Ours w/o BN | Ours
ape 69.3 81.7 89.7 92.2
bench v. 83.9 92.1 93.5 97.1
camera 68.5 83.7 92.1 92.1
can 80.5 88.3 94.2 96.8
cat 89.2 92.4 92.7 98.8
driller 75.4 85.2 90.7 95.7
duck 57.5 80.3 89.4 89.4
eggbox* 100.0 99.9 99.9 100.0
glue* 99.6 99.9 99.9 99.4
hole p. 63.2 85.9 87.3 96.3
iron 90.3 94.8 95.6 98.8
lamp 91.6 96.4 97.9 99.0
phone 88.3 93.0 95.3 96.0
MEAN 81.3 90.3 93.7 96.2
Table 2: Ablative study on the effect of estimation-by-generation paradigm (G), point-fusion block (PF) and BatchNorm (BN) on the LineMOD dataset using ADD/ADD-S metric. Objects with * are symmetric.

We compare our method with previous RGB methods with depth refinement (ICP) [29, 17] and the RGB-D fusion method [33] on the ADD/ADD-S metric, as presented in Table 1. The results of the color-based state-of-the-art methods [37, 40] are also listed for reference. Without the iterative refinement step, our method outperforms the baseline by 10.0% and the baseline with iterative refinement by 1.9%, proving that the proposed estimation-by-generation scheme helps to achieve accurate pose estimation even without post-processing steps. With iterative refinement, to our knowledge, PGNet achieves state-of-the-art performance (97.4%) among all single-view RGB-D network-based methods, approaching the overall SOTA performance: [2] reports 98.3% and [3] reports 99.0% on this dataset. Strictly speaking, however, these methods are not directly comparable, since they are based on hand-crafted features/templates and take 0.5s to process a single instance, which is approximately 10 to 20 times slower than our pipeline, and even 200x slower if we do not take the pre-segmentation step into consideration. As reported in [2], their method only achieves 96.4% accuracy when the processing time is reduced to 150ms.

Ablative study of the design. Table 2 summarizes the ablation studies on critical components of PGNet. The first column indicates that simply introducing the object point cloud degrades model performance (81.3% vs. 86.2%), as the object point clouds are not exactly aligned with their corresponding points from the RGB-D images. In addition, we observe intensified oscillation of the training loss when conducting these experiments, which confirms our assumption that the unaligned point cloud input can act as a perturbation to the network. Comparing the first and the second columns shows the effectiveness of our estimation-by-generation scheme, as performing the pose-guided point cloud generation task provides the networks with very significant improvements. Employing point-fusion blocks brings an extra 3.4% improvement in performance, which demonstrates that the point-fusion block can partly solve the matching problem between point clouds and pixels. For qualitative analysis, we visualize the generated point clouds in Figure 4a. The performance of PGNet on the point cloud generation task indicates its capability of extracting intrinsic properties of object models.

4.3 Evaluation on YCB-Video Dataset

| Object | PoseCNN+ICP (AUC / <2cm) | DenseFusion (AUC / <2cm) | Ours (AUC / <2cm) | DenseFusion+IR (AUC / <2cm) | Ours+IR (AUC / <2cm) | MV5-MCN (<2cm) |
|---|---|---|---|---|---|---|
| 002_chf_can | 95.8 / 100.0 | 95.2 / 100.0 | 93.7 / 100.0 | 96.4 / 100.0 | 92.3 / 100.0 | 96.2 |
| 003_ckr_box | 92.7 / 91.6 | 92.5 / 99.3 | 92.1 / 99.6 | 95.5 / 99.5 | 92.9 / 100.0 | 90.9 |
| 004_sgr_box | 98.2 / 100.0 | 95.1 / 100.0 | 95.4 / 100.0 | 97.5 / 100.0 | 96.4 / 100.0 | 95.3 |
| 005_sop_can | 94.5 / 96.9 | 93.7 / 96.9 | 95.9 / 100.0 | 94.6 / 96.9 | 95.6 / 100.0 | 97.5 |
| 006_mtd_bottle | 98.6 / 100.0 | 95.9 / 100.0 | 95.7 / 100.0 | 97.2 / 100.0 | 95.7 / 100.0 | 97.0 |
| 007_fish_can | 97.1 / 100.0 | 94.9 / 100.0 | 94.8 / 100.0 | 96.6 / 100.0 | 95.3 / 100.0 | 95.1 |
| 008_pud_box | 97.9 / 100.0 | 94.7 / 100.0 | 92.1 / 99.1 | 96.5 / 100.0 | 93.5 / 100.0 | 94.5 |
| 009_glt_box | 98.8 / 100.0 | 95.8 / 100.0 | 96.1 / 100.0 | 98.1 / 100.0 | 96.8 / 100.0 | 96.0 |
| 010_meat_can | 92.7 / 93.6 | 90.1 / 93.1 | 95.6 / 98.7 | 91.3 / 93.1 | 95.7 / 98.9 | 96.7 |
| 011_banana | 97.1 / 99.7 | 91.5 / 93.9 | 93.7 / 98.7 | 96.6 / 100.0 | 94.2 / 100.0 | 94.4 |
| 019_pit_base | 97.8 / 100.0 | 94.6 / 100.0 | 93.6 / 100.0 | 97.1 / 100.0 | 94.1 / 100.0 | 96.2 |
| 021_cleanser | 96.9 / 99.4 | 94.3 / 99.8 | 94.4 / 99.7 | 95.8 / 100.0 | 94.7 / 100.0 | 95.4 |
| 024_bowl* | 81.0 / 54.9 | 86.6 / 69.5 | 88.0 / 75.6 | 88.2 / 98.8 | 89.6 / 100.0 | 82.0 |
| 025_mug | 95.0 / 99.8 | 95.5 / 100.0 | 95.2 / 100.0 | 97.1 / 100.0 | 95.1 / 100.0 | 96.8 |
| 035_drill | 98.2 / 99.6 | 92.4 / 97.1 | 93.5 / 100.0 | 96.0 / 98.7 | 95.3 / 100.0 | 93.1 |
| 036_block* | 87.6 / 80.2 | 85.5 / 93.4 | 86.0 / 87.7 | 89.7 / 94.6 | 85.6 / 100.0 | 93.6 |
| 037_scissors | 91.7 / 95.6 | 96.4 / 100.0 | 88.0 / 89.0 | 95.2 / 100.0 | 90.3 / 100.0 | 94.2 |
| 040_marker | 97.2 / 99.7 | 94.7 / 99.2 | 94.1 / 99.5 | 97.5 / 100.0 | 95.4 / 100.0 | 95.4 |
| 051_clamp* | 75.2 / 74.9 | 71.6 / 78.5 | 88.7 / 94.3 | 72.9 / 79.2 | 90.8 / 99.2 | 93.3 |
| 052_ex_clamp* | 64.4 / 48.8 | 69.0 / 69.5 | 83.7 / 73.7 | 69.8 / 76.3 | 89.1 / 99.6 | 90.9 |
| 061_brick* | 97.2 / 100.0 | 92.4 / 100.0 | 92.8 / 100.0 | 92.5 / 100.0 | 94.6 / 100.0 | 95.9 |
| MEAN | 93.0 / 93.2 | 91.2 / 95.3 | 93.1 / 97.5 | 93.1 / 96.8 | 93.9 / 99.9 | 94.3 |

Table 3: Quantitative evaluation of 6D pose (ADD-S [13] and AUC [37]) on the YCB-Video dataset; each cell reports AUC / ADD-S<2cm in percent. Objects marked with * are symmetric. IR abbreviates Iterative Refinement.

YCB-Video Dataset. The YCB-Video dataset [37] features 21 YCB objects of varying shape and texture under different occlusion conditions. It contains 92 RGB-D videos, each showing a subset of the 21 objects in different indoor scenes. To ensure a fair comparison, we follow prior work [33] and use the same 80 videos plus the 80,000 synthetic images released by [37] for training, and 2,949 key frames chosen from the remaining 12 videos for testing. At test time, we use the same segmentation masks as PoseCNN.
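For reference, the two reported metrics follow [13, 37]: ADD averages the distance between corresponding model points under the predicted and ground-truth poses, while ADD-S (used for symmetric objects) first matches each transformed point to its closest counterpart. A minimal NumPy sketch (variable names are ours):

```python
import numpy as np

def add(model_pts, R_pred, t_pred, R_gt, t_gt):
    """ADD: mean distance between corresponding model points
    transformed by the predicted and the ground-truth pose."""
    pred = model_pts @ R_pred.T + t_pred
    gt = model_pts @ R_gt.T + t_gt
    return np.linalg.norm(pred - gt, axis=1).mean()

def add_s(model_pts, R_pred, t_pred, R_gt, t_gt):
    """ADD-S: for symmetric objects, match each predicted point
    to its closest ground-truth point before averaging."""
    pred = model_pts @ R_pred.T + t_pred
    gt = model_pts @ R_gt.T + t_gt
    # pairwise distances between predicted and ground-truth points
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

The ADD-S<2cm columns in Table 3 report the fraction of test frames whose ADD-S error falls below 2 cm.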

Table 3 shows that our method without iterative refinement consistently surpasses DenseFusion both with and without its iterative refinement. With iterative refinement, PGNet reaches 99.9% on the ADD-S<2cm metric and is comparable with methods using multi-view information (93.9% vs. 94.3%). We note that all methods listed in Table 3 except MV5-MCN [20] use the same segmentation masks released by PoseCNN, which induces a performance drop due to imperfect segmentation. Using the masks provided in the YCB-Video dataset, our method achieves 97.8% (ADD-S<2cm) and 93.3% (AUC) without iterative refinement, and 94.2% (AUC) with it. Comparing against the state-of-the-art methods on the YCB-Video dataset, we summarize two advantages of the proposed estimation-by-generation scheme: a) instance-aware estimation: one obstacle to better performance on YCB-Video is finding discriminative features beyond object appearance for 051_clamp and 052_ex_clamp, whose appearances are largely identical. Our method offers a solution by learning the objects' 3D structure, resulting in roughly 15% improvement on each of these objects; b) resolving single-view appearance ambiguity: our method uses only a single-view input plus the object point cloud, yet achieves performance comparable to methods using multi-view information. We argue this makes our method more practical, since settings that provide only single-view input are far more common.
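The AUC column follows the varying-threshold accuracy curve of [37]: accuracy is the fraction of frames whose ADD-S error falls below a threshold, the threshold is swept from 0 to 10 cm, and the area under the resulting curve is normalized. A sketch of this computation (the original implementation's discretization may differ):

```python
import numpy as np

def adds_auc(errors, max_thresh=0.1, steps=1000):
    """Normalized area under the accuracy-vs-threshold curve.

    errors: per-frame ADD-S values in meters.
    A perfect estimator (all errors near zero) approaches 1.0;
    errors above max_thresh contribute nothing.
    """
    errors = np.asarray(errors)
    taus = np.linspace(0.0, max_thresh, steps)
    acc = np.array([(errors < t).mean() for t in taus])
    # trapezoidal integral over thresholds, normalized by the range
    return np.trapz(acc, taus) / max_thresh
```

This normalization explains why AUC and ADD-S<2cm can rank methods differently: AUC rewards errors being small everywhere, while the <2cm metric only counts frames passing a fixed cutoff.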

Robustness to heavy occlusion. Figure 4b displays several 6D pose estimation results on the YCB-Video dataset. The predictions remain accurate even when the object center is occluded by another object, indicating the robustness of PGNet to heavy occlusion. We attribute this to the proposed pose-guided point cloud generation task, which forces the network to capture the appearance and geometric information of the object models even under adversarial circumstances.

4.4 Time Efficiency

Given a pre-segmented image, our PGNet model takes 1.4 ms to process a single object instance and 1.1 ms to run iterative refinement on it, using batch parallelization on a desktop computer with an Intel i7 3.7 GHz CPU and a GTX 1080 Ti GPU. Our implementation of SegNet [37] takes 29.8 ms for segmentation, which is the bottleneck of system efficiency. Overall, the inference speed is fast enough for real-time applications (25 FPS with about 5 objects per frame).
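A back-of-the-envelope per-frame budget from the stated costs, assuming (pessimistically) that the per-object times add up serially; in practice batch parallelization amortizes the per-object terms, bringing throughput close to the reported ~25 FPS:

```python
def frame_time_ms(n_objects, seg_ms=29.8, pose_ms=1.4, refine_ms=1.1):
    """Worst-case per-frame latency: one segmentation pass plus a
    pose estimate and one refinement step per object instance."""
    return seg_ms + n_objects * (pose_ms + refine_ms)

t = frame_time_ms(5)   # 42.3 ms with 5 objects per frame
fps = 1000.0 / t       # ~23.6 FPS before accounting for batching
```

Segmentation dominates this budget, which is why SegNet rather than the pose network is the efficiency bottleneck.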

5 Conclusion and Future Work

We present a novel approach to estimating the 6D poses of known objects. Our approach explores how to integrate the intrinsic properties of object models into state-of-the-art single-view systems. With the proposed estimation-by-generation scheme, our method outperforms previous approaches on two widely used datasets, approaching the performance of multi-view methods while achieving real-time inference. We hope our system will inspire future research along this challenging but rewarding direction. Specifically, it would be interesting to explore the limits of the approach with respect to the number of objects. We also suggest developing a voting-based estimation scheme on top of the system to achieve better performance.


  1. The improvement induced by the squeeze-and-excitation block is trivial (less than 0.5%) and is therefore not included.
  2. Note that our model without BatchNorm, point-fusion blocks, and the pose-guided point cloud generation task is equivalent to DenseFusion with an additional input branch.


  1. Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas. Learning representations and generative models for 3d point clouds. In ICLR, 2018.
  2. Eric Brachmann, Alexander Krull, Frank Michel, Stefan Gumhold, Jamie Shotton, and Carsten Rother. Learning 6d object pose estimation using 3d object coordinates. In ECCV, 2014.
  3. Eric Brachmann, Frank Michel, Alexander Krull, Michael Ying Yang, Stefan Gumhold, and Carsten Rother. Uncertainty-driven 6d pose estimation of objects and scenes from a single rgb image. In CVPR, 2016.
  4. Andrew Brock, T Lim, James Ritchie, and Nick Weston. Generative and discriminative voxel modeling with convolutional neural networks. In NeurIPS, 2016.
  5. Anders Glent Buch, Lilita Kiforenko, and Dirk Kraft. Rotational subgroup voting and pose clustering for robust 3d object recognition. In ICCV, 2017.
  6. Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-view 3d object detection network for autonomous driving. In CVPR, 2017.
  7. Alvaro Collet, Manuel Martinez, and Siddhartha Srinivasa. The moped framework: Object recognition and pose estimation for manipulation. I. J. Robotic Res., Sep. 2011.
  8. Angela Dai, Angel Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR, 2017.
  9. Robert B. Fisher. Projective icp and stabilizing architectural augmented reality overlays. In Proc. Int. Symp. on Virtual and Augmented Architecture (VAA), 2001.
  10. A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In CVPR, 2012.
  11. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
  12. Stefan Hinterstoisser, Stefan Holzer, Cedric Cagniart, Slobodan Ilic, Kurt Konolige, Nassir Navab, and Vincent Lepetit. Multimodal templates for real-time detection of texture-less objects in heavily cluttered scenes. In ICCV, 2011.
  13. Stefan Hinterstoisser, Vincent Lepetit, Slobodan Ilic, Stefan Holzer, Gary Bradski, Kurt Konolige, and Nassir Navab. Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes. In ACCV, 2012.
  14. Stefan Hinterstoisser, Vincent Lepetit, Naresh Rajkumar, and Kurt Konolige. Going further with point pair features. In ECCV, 2016.
  15. Tomas Hodan, Frank Michel, Eric Brachmann, Wadim Kehl, Anders GlentBuch, Dirk Kraft, Bertram Drost, Joel Vidal, Stephan Ihrke, Xenophon Zabulis, Caner Sahin, Fabian Manhardt, Federico Tombari, Tae-Kyun Kim, Jiri Matas, and Carsten Rother. Bop: Benchmark for 6d object pose estimation. In ECCV, 2018.
  16. Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. TPAMI, Sep. 2017.
  17. W. Kehl, F. Manhardt, F. Tombari, S. Ilic, and N. Navab. Ssd-6d: Making rgb-based 3d detection and 6d pose estimation great again. In ICCV, 2017.
  18. Wadim Kehl, Fausto Milletari, Federico Tombari, Slobodan Ilic, and Nassir Navab. Deep learning of local rgb-d patches for 3d object detection and 6d pose estimation. In ECCV, 2016.
  19. Roman Klokov and Victor Lempitsky. Escape from cells: Deep kd-networks for the recognition of 3d point cloud models. In ICCV, 2017.
  20. Chi Li, Jin Bai, and Gregory Hager. A unified framework for multi-view multi-class object pose estimation. In ECCV, 2018.
  21. E. Marchand, H. Uchiyama, and F. Spindler. Pose estimation for augmented reality: A hands-on survey. TVCG, Dec 2016.
  22. Mahdi Rad and Vincent Lepetit. Bb8: A scalable, accurate, robust to partial occlusion method for predicting the 3d poses of challenging objects without using depth. In ICCV, 2017.
  23. Gernot Riegler, Ali Ulusoy, and Andreas Geiger. Octnet: Learning deep 3d representations at high resolutions. In CVPR, 2017.
  24. Reyes Rios-Cabrera and Tinne Tuytelaars. Discriminatively trained templates for 3d object detection: A real time scalable approach. In ICCV, 2013.
  25. Charles Ruizhongtai Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas Guibas. Frustum pointnets for 3d object detection from rgb-d data. In CVPR, 2018.
  26. Charles Ruizhongtai Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2016.
  27. Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In NeurIPS, 2017.
  28. Max Schwarz, Hannes Schulz, and Sven Behnke. Rgb-d object recognition and pose estimation based on pre-trained convolutional neural network features. In ICRA, 2015.
  29. Martin Sundermeyer, Zoltan Csaba Marton, Maximilian Durner, Manuel Brucker, and Rudolph Triebel. Implicit 3d orientation learning for 6d object detection from rgb images. In ECCV, 2018.
  30. Alykhan Tejani, Danhang Tang, Rigas Kouskouridas, and Tae Kyun Kim. Latent-class hough forests for 3d object detection and pose estimation. In ECCV, 2014.
  31. Bugra Tekin, Sudipta N. Sinha, and Pascal Fua. Real-time seamless single shot 6d object pose prediction. In CVPR, 2018.
  32. Jonathan Tremblay, Thang To, Balakumar Sundaralingam, Yu Xiang, Dieter Fox, and Stan Birchfield. Deep object pose estimation for semantic robotic grasping of household objects. In CoRL, 2018.
  33. Chen Wang, Danfei Xu, Yuke Zhu, Roberto Martín-Martín, and Silvio Savarese. Densefusion: 6d object pose estimation by iterative dense fusion. In CVPR, 2019.
  34. X. Wang, R. Girshick, A. Gupta, and K. He. Non-local neural networks. In CVPR, 2018.
  35. P. Wohlhart and V. Lepetit. Learning descriptors for object recognition and 3d pose estimation. In CVPR, 2015.
  36. Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T. Freeman, and Joshua B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In NeurIPS, 2016.
  37. Yu Xiang, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox. Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes. In RSS, 2018.
  38. D. Xu, D. Anguelov, and A. Jain. Pointfusion: Deep sensor fusion for 3d bounding box estimation. In CVPR, 2018.
  39. Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian. Foldingnet: Point cloud auto-encoder via deep grid deformation. In CVPR, 2018.
  40. Yi Li, Gu Wang, Xiangyang Ji, Yu Xiang, and Dieter Fox. Deepim: Deep iterative matching for 6d pose estimation. In ECCV, 2018.
  41. H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In CVPR, 2017.
  42. Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. In CVPR, 2018.
  43. M. Zhu, K. G. Derpanis, Y. Yang, S. Brahmbhatt, M. Zhang, C. Phillips, M. Lecce, and K. Daniilidis. Single image 3d object detection and pose estimation for grasping. In ICRA, May 2014.