Vision-based Robotic Grasping From Object Localization, Object Pose Estimation to Grasp Estimation for Parallel Grippers: A Review


Abstract

This paper presents a comprehensive survey on vision-based robotic grasping. We identify three key tasks in vision-based robotic grasping: object localization, object pose estimation and grasp estimation. In detail, the object localization task covers object localization without classification, object detection and object instance segmentation; it provides the regions of the target object in the input data. The object pose estimation task mainly refers to estimating the 6D object pose and includes correspondence-based, template-based and voting-based methods; it enables the generation of grasp poses for known objects. The grasp estimation task includes 2D planar grasp methods and 6DoF grasp methods, where the former is constrained to grasping from a single direction. These three tasks can be combined in different ways to accomplish robotic grasping. Many object pose estimation methods do not require separate object localization and perform localization and pose estimation jointly, and many grasp estimation methods require neither object localization nor object pose estimation and estimate grasps in an end-to-end manner. Both traditional methods and the latest deep learning-based methods operating on RGB-D image inputs are reviewed in detail in this survey. Related datasets and comparisons between state-of-the-art methods are summarized as well. In addition, challenges of vision-based robotic grasping and future directions for addressing them are also pointed out.

1 Introduction

An intelligent robot is expected to perceive the environment and interact with it. Among the essential abilities, grasping is fundamental and significant in that it brings enormous value to society [217]. For example, industrial robots can accomplish pick-and-place tasks that are laborious for human workers, and domestic robots can assist disabled or elderly people in their daily grasping tasks. Endowing robots with the ability to perceive has been a long-standing goal in the computer vision and robotics disciplines.

Given its significance, robotic grasping has long been researched. The robotic grasping system [121] is commonly considered to be composed of the following sub-systems: the grasp detection system, the grasp planning system and the control system. Among them, the grasp detection system is the key entry point, as illustrated in Fig. 1. The grasp planning system and the control system are more relevant to the motion and automation disciplines, and in this survey we concentrate only on the grasp detection system.

Figure 1: The grasp detection system. (Left) The robotic arm, equipped with one RGB-D camera and one parallel gripper, is to grasp the target object placed on a planar work surface. (Right) The grasp detection system involves target object localization, object pose estimation, and grasp estimation.

The robotic arm and the end effector are essential components of the grasp detection system. Various 5-7 DoF robotic arms are produced to ensure sufficient flexibility, and they are mounted on a fixed base or on a humanoid robot. Different kinds of end effectors, such as grippers and suction cups, can achieve the object picking task, as shown in Fig. 2. The majority of methods focus on parallel grippers [155, 302], which constitute a relatively simple setting. With continued research effort, dexterous grippers [140, 66, 1] are being studied to accomplish more complex grasping tasks. In this paper, we only discuss grippers, since suction-based end effectors are relatively simple and limited in grasping complex objects. In addition, we concentrate on methods using parallel grippers, since these are the most widely researched.

Figure 2: Different kinds of end effectors. (Left) Grippers. (Right) Suction-based end effectors. In this paper, we mainly consider parallel grippers.

The essential information required to grasp the target object is the 6D gripper pose in the camera coordinate system, which comprises the 3D gripper position and the 3D gripper orientation for executing the grasp. The estimation of the 6D gripper pose varies with the grasp manner, which can be divided into 2D planar grasp and 6DoF grasp.

2D planar grasp means that the target object lies on a planar workspace and the grasp is constrained to one direction. In this case, the height of the gripper is fixed and the gripper direction is perpendicular to the workspace plane. The essential information is therefore simplified from 6D to 3D: the 2D in-plane position and the 1D rotation angle. In earlier years, when depth information was not easily captured, 2D planar grasp was the main focus of research, and the most common scenario was grasping machine components in factories. Candidate grasp contact points were evaluated for whether they afford force closure [33]. With the development of deep learning, a large number of methods treat oriented rectangles as the grasp configuration, which benefits from mature 2D detection frameworks. Since then, the capabilities of 2D planar grasp have expanded greatly, and the target objects have been extended from known objects to novel objects. Many methods that evaluate oriented rectangles [111, 126, 189, 155, 172, 201, 307, 121, 41, 173, 318] have been proposed. Besides, some deep learning-based methods that evaluate grasp contact points [301, 25, 161] have also been proposed in recent years.
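To make the oriented-rectangle representation concrete, below is a minimal sketch of one common parameterization (center position, in-plane angle, gripper opening width and jaw size). The exact field conventions vary between datasets and papers, so the names here are illustrative only.

```python
import math
from dataclasses import dataclass

@dataclass
class GraspRectangle:
    u: float      # rectangle center, image column (pixels)
    v: float      # rectangle center, image row (pixels)
    theta: float  # in-plane gripper rotation (radians)
    w: float      # gripper opening width (pixels)
    h: float      # size of the parallel jaw plates (pixels)

    def corners(self):
        """Return the four rectangle corners, e.g. for rectangle-overlap evaluation."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        dx, dy = self.w / 2.0, self.h / 2.0
        return [(self.u + c * x - s * y, self.v + s * x + c * y)
                for (x, y) in ((-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy))]
```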

6DoF grasp means that the gripper can grasp the object from various angles in 3D space, so the essential 6D gripper pose cannot be simplified. In early years, analytical methods were utilized to analyze the geometric structure of the 3D data, and points suitable for grasping were found according to force closure. Sahbani et al. [214] presented an overview of 3D object grasping algorithms, most of which deal with complete shapes. With the development of sensor devices such as the Microsoft Kinect and Intel RealSense, researchers can obtain the depth information of target objects easily, and modern grasp systems are equipped with RGB-D sensors, as shown in Fig. 3. A depth image can be easily lifted into a 3D point cloud using the camera intrinsic parameters, and depth image-based 6DoF grasp has become a hot research area. Among 6DoF grasp methods, most aim at known objects for which grasps can be precomputed, so that the problem is transformed into a 6D object pose estimation problem [257, 321, 295, 96]. With the development of deep learning, many methods [240, 133, 163, 197, 309] have demonstrated powerful capabilities in dealing with novel objects.

Figure 3: An RGB-D image. The depth image is transformed into a 3D point cloud.
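As a concrete illustration of the lifting mentioned above, the sketch below back-projects a depth image into an organized point cloud using the pinhole intrinsics (fx, fy, cx, cy); the intrinsic values shown are placeholders and would come from the RGB-D sensor calibration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) array in meters; returns an (H, W, 3) array of XYZ points
    in the camera coordinate system."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack((x, y, depth), axis=-1)

# Example with a synthetic flat depth map and made-up intrinsics.
cloud = depth_to_point_cloud(np.full((480, 640), 0.8),
                             fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```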

Both 2D planar grasp and 6DoF grasp involve common tasks, which are object localization, object pose estimation and grasp estimation.

In order to compute the 6D gripper pose, the first step is to locate the target object. For object localization, there exist three different situations: object localization without classification, object detection and object instance segmentation. Object localization without classification means obtaining the regions of the target object without classifying its category; in many cases the target object can be grasped without knowing its category. Object detection means detecting the regions of the target object and classifying its category, which enables grasping a specific object among multiple candidate objects. Object instance segmentation refers to detecting the pixel-level or point-level instances of objects of a certain class, which provides fine-grained information for pose estimation and grasp estimation. Early methods assume that the object to grasp is placed in a clean environment with a simple background, which simplifies the object localization task but severely limits their applicability in more complex environments. Traditional object detection methods train classifiers with machine learning techniques based on hand-crafted 2D descriptors; however, these classifiers show limited performance due to the limitations of hand-crafted descriptors. With deep learning, 2D detection and 2D instance segmentation capabilities have improved greatly, which enables object detection in more complex environments.

Figure 4: A taxonomy of tasks in vision-based robotic grasp detection system.

Most current robotic grasping methods aim at known objects, for which estimating the object pose is the most accurate and simplest path to a successful grasp. There exist various methods for computing the 6D object pose, varying from 2D inputs to 3D inputs, from traditional methods to deep learning methods, and from textured objects to textureless or occluded objects. In this paper, we categorize these methods into correspondence-based, template-based and voting-based methods, in which feature points, the whole input, or each meta unit (e.g., pixel or point), respectively, are used to compute the 6D object pose. Early methods tackled this problem in the 3D domain by conducting partial registration. With the development of deep learning, methods using only the RGB image can provide relatively accurate 6D object poses, which greatly improves grasping capabilities.

Grasp estimation is conducted once the target object has been localized. For 2D planar grasp, the methods are divided into methods that evaluate grasp contact points and methods that evaluate oriented rectangles. For 6DoF grasp, the methods are categorized into methods based on the partial point cloud and methods based on the complete shape. Methods based on the partial point cloud assume that the identical 3D model of the target object is not available; in this case, two kinds of methods exist, namely methods that estimate the grasp quality of candidate grasps and methods that transfer grasps from existing ones. Methods based on the complete shape conduct grasp estimation on a complete shape: when the target object is known, its 6D pose can be computed, and when the target shape is unknown, it can be reconstructed from single-view point clouds and grasp estimation can be conducted on the reconstructed complete 3D shape. With the joint development of the above aspects, the kinds of objects that can be grasped, the robustness of the grasp and the affordable complexity of the grasp scenario have all improved greatly, which enables many more industrial as well as domestic applications.

For the tasks mentioned above, some works [214, 15, 26] concentrate on one or a few tasks, but a comprehensive introduction covering all of them is still lacking. These tasks are reviewed in detail in this paper, and a taxonomy of them is shown in Fig. 4. To the best of our knowledge, this is the first review that broadly summarizes the progress and promises new directions in vision-based robotic grasping. We believe that this contribution will serve as an insightful reference to the robotics community.

The remainder of the paper is arranged as follows. Section 2 reviews the methods for object localization. Section 3 reviews the methods for 6D object pose estimation. Section 4 reviews the methods for grasp estimation. The related datasets, evaluation metrics and comparisons are also reviewed in each section. Finally, challenges and future directions are summarized in Section 5.

2 Object localization

Most robotic grasping approaches first require the location of the target object in the input data. This involves three different situations: object localization without classification, object detection and object instance segmentation. Object localization without classification only outputs the potential regions of the target objects without knowing their categories. Object detection provides the bounding boxes of the target objects as well as their categories. Object instance segmentation further provides the pixel-level or point-level regions of the target objects along with their categories.

2.1 Object localization without classification

In this situation, the task is to find potential locations of the target object without knowing its category. There exist two cases: if the concrete shape of the target object is known, shape primitives can be fitted to obtain its location; if the shape cannot be assumed, salient object detection (SOD) can be conducted to find the salient regions of the target object. The methods, based on 2D or 3D inputs, are summarized in Table 1.

Methods Fitting shape primitives Salient object detection
2D localization Fitting ellipse [71], Fitting polygons [58] Jiang et al. [110], Zhu et al. [322], Peng et al. [180], Cheng et al. [39], Wei et al. [270], Shi et al. [223], Yang et al. [282], Wang et al. [263], Guo et al. [86], Zhao et al. [311], Zhang et al. [306], DHSNet [142], Hou et al. [106], PICANet [141], Liu et al. [146], Qi et al. [196]
3D localization Rabbani et al. [199], Rusu et al. [211], Goron et al. [81], Jiang et al. [109], Khan et al. [115], Zapata-Impata et al. [298] Peng et al. [181], Ren et al. [205], Qu et al. [198], Han et al. [91], Chen et al. [29, 31], Chen and Li [30], Piao et al. [186], Kim et al. [116], Bhatia et al. [11], Pang et al. [171]
Table 1: Methods of object localization without classification.

2D localization without classification

These methods deal with 2D image inputs, which are usually RGB images. According to whether the object's contour shape is known or not, they can be divided into methods of fitting shape primitives and methods of salient object detection. A typical functional flow-chart of 2D object localization without classification is illustrated in Fig. 5.

Figure 5: Typical functional flow-chart of 2D object localization without classification.

Fitting 2D shape primitives The shape of the target object could be an ellipse, a polygon or a rectangle, and these shapes can be regarded as shape primitives. Through fitting, the target object can be located. The general procedure of this kind of method usually consists of closed contour extraction and primitive fitting. Many algorithms for primitive fitting are integrated in OpenCV [24], such as ellipse fitting [71] and polygon fitting [58]. This kind of method is usually used in 2D planar robotic grasping tasks, where the objects are viewed from a fixed angle and are constrained to known shapes.
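A minimal OpenCV sketch of this contour-extraction-plus-fitting procedure is given below; the image path, threshold and area filter are placeholders, and a real pipeline would add camera-specific preprocessing.

```python
import cv2

img = cv2.imread("workspace.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    if cv2.contourArea(cnt) < 500:                      # skip small blobs
        continue
    # Polygon fitting: approximate the closed contour with few vertices.
    polygon = cv2.approxPolyDP(cnt, 0.01 * cv2.arcLength(cnt, True), True)
    # Ellipse fitting: requires at least 5 contour points.
    if len(cnt) >= 5:
        (cx, cy), (major, minor), angle = cv2.fitEllipse(cnt)
```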

2D salient object detection Compared with shape primitives, salient object regions can take arbitrary shapes. 2D salient object detection (SOD) aims to locate and segment the most visually distinctive object regions in a given image, which is more like a segmentation task without object classification. Non-deep learning SOD methods exploit low-level feature representations [110, 322, 180] or rely on certain heuristics such as color contrast [39] and background priors [270]. Some other methods conduct an over-segmentation process that generates regions [223], super-pixels [282, 263], or object proposals [86] to assist the above methods.

Deep learning-based SOD methods have shown superior performance over traditional solutions since 2015. Generally, they can be divided into three main categories: Multi-Layer Perceptron (MLP)-based methods, Fully Convolutional Network (FCN)-based methods and capsule-based methods. MLP-based methods typically extract deep features for each processing unit of an image to train an MLP classifier for saliency score prediction. Zhao et al. [311] proposed a unified multi-context deep learning framework involving global and local context, which are fed into an MLP for foreground/background classification to model the saliency of objects in images. Zhang et al. [306] proposed a salient object detection system which outputs compact detection windows for unconstrained images, together with a maximum a posteriori (MAP)-based subset optimization formulation for filtering bounding box proposals. MLP-based SOD methods cannot capture critical spatial information well and are time-consuming. Inspired by the Fully Convolutional Network (FCN) [148], many methods directly output whole saliency maps. Liu and Han [142] proposed an end-to-end saliency detection model called DHSNet, which can simultaneously refine the coarse saliency map. Hou et al. [106] introduced short connections to the skip-layer structures, which provide rich multi-scale feature maps at each layer. Liu et al. [141] proposed a pixel-wise contextual attention network called PiCANet, which generates an attention map for each pixel, where each attention weight corresponds to the contextual relevance at a context location. With the rise of the Capsule Network [98, 212, 213], some capsule-based methods have been proposed. Liu et al. [146] incorporated part-object relationships into salient object detection, implemented with the Capsule Network. Qi et al. [196] proposed CapSalNet, which includes a multi-scale capsule attention module and multi-crossed layer connections for salient object detection. Readers can refer to surveys [19, 262] for a comprehensive understanding of 2D salient object detection.

Discussions 2D object localization without classification is widely used in robotic grasping tasks, although usually at a relatively basic level. In industrial scenarios, mechanical components usually have fixed shapes, and many of them can be localized by fitting shape primitives. In other grasping scenarios, background priors or color contrast are exploited to obtain the salient object for grasping. In Dex-Net 2.0 [155], the target objects are laid on a green workspace and are easily segmented using color-based background subtraction.
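Below is a hedged sketch of such color-based background subtraction for a uniformly green workspace; the HSV range is a placeholder that would need tuning for a real setup, and this is not claimed to be the exact pipeline used in Dex-Net 2.0.

```python
import cv2
import numpy as np

rgb = cv2.imread("workspace.png")                        # BGR image of the scene
hsv = cv2.cvtColor(rgb, cv2.COLOR_BGR2HSV)
green = cv2.inRange(hsv, np.array([35, 60, 60]), np.array([85, 255, 255]))
objects_mask = cv2.bitwise_not(green)                    # everything that is not background
objects_mask = cv2.morphologyEx(objects_mask, cv2.MORPH_OPEN,
                                np.ones((5, 5), np.uint8))  # remove speckle noise
```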

3D localization without classification

These methods deal with 3D point cloud inputs, which in robotic grasping tasks are usually partial point clouds reconstructed from single-view depth images. According to whether the object's 3D shape is known or not, the methods can likewise be divided into methods of fitting 3D shape primitives and methods of salient 3D object detection. A typical functional flow-chart of 3D object localization without classification is illustrated in Fig. 6.

Figure 6: Typical functional flow-chart of 3D object localization without classification.

Fitting 3D shape primitives The shape of the target object could be a sphere, a cylinder or a box, and these shapes can be regarded as 3D shape primitives. Many methods exist for fitting 3D shape primitives, such as RANdom SAmple Consensus (RANSAC) [70]-based methods, Hough-like voting methods [199] and other clustering techniques [211, 81]. These methods deal with different kinds of inputs and have been applied in areas such as modeling, rendering and animation. For object localization and robotic grasping tasks, the input data is a partial point cloud, where the object is incomplete, and the goal is to find the points that constitute one of the 3D shape primitives. Some methods [109, 115] detect planes at object boundaries and assemble them: Jiang et al. [109] and Khan et al. [115] explored the 3D structures in indoor scenes and estimated their geometry using cuboids. Rabbani et al. [199] presented an efficient Hough transform for the automatic detection of cylinders in point clouds. Other methods [211, 81] conduct primitive fitting after segmenting the scene. Rusu et al. [211] used a combination of robust shape primitive models and triangular meshes to create a hybrid shape-surface representation optimized for robotic grasping. Goron et al. [81] presented a method to locate the best parameters for cylindrical and box-like objects in a cluttered scene; they increased the robustness of RANSAC fits in clutter by employing a set of inlier filters and Hough voting, providing robust results and models that are relevant for grasp estimation. Readers can refer to the survey [113] for more details.

3D salient object detection Compared with 2D salient object detection, 3D salient object detection consumes various kinds of 3D data, such as depth images and point clouds. Although the above 2D salient object detection methods have achieved superior performance, they still struggle in some complex scenarios, where depth information can provide much assistance. RGB-D saliency detection methods usually extract hand-crafted or deep learning-based features from RGB-D images and fuse them in different ways. Peng et al. [181] proposed a simple fusion strategy which extends RGB-based saliency models by incorporating depth-induced saliency. Ren et al. [205] exploited the normalized depth prior and the global-context surface orientation prior for salient object detection. Qu et al. [198] trained a CNN-based model which fuses different low-level saliency cues into hierarchical features for detecting salient objects in RGB-D images. Chen et al. [29, 31] utilized two-stream CNN-based models with different fusion structures. Chen and Li [30] further proposed a progressively complementarity-aware fusion network for RGB-D salient object detection, which is more effective than early-fusion methods [106] and late-fusion methods [91]. Piao et al. [186] proposed a depth-induced multi-scale recurrent attention network (DMRANet) for saliency detection, which achieves strong performance, especially in complex scenarios. Pang et al. [171] proposed a hierarchical dynamic filtering network (HDFNet) and a hybrid enhanced loss. Li et al. [130] proposed a Cross-Modal Weighting (CMW) strategy to encourage comprehensive interactions between the RGB and depth channels. These methods demonstrate the remarkable performance of RGB-D SOD.

For 3D point cloud inputs, many methods detect saliency maps of a complete object model [314], whereas our aim is to locate the salient object in a 3D scene. Kim et al. [116] described a segmentation method for extracting salient regions in outdoor scenes using both 3D point clouds and RGB images. Bhatia et al. [11] proposed a top-down approach for extracting salient objects/regions in 3D point clouds of indoor scenes, which first segregates significant planar regions and then extracts isolated objects present in the residual point cloud; each object is ranked for saliency based on the curvature complexity of its silhouette.

Discussions 3D object localization is also widely used in robotic grasping tasks, again at a relatively basic level. In Rusu et al. [211] and Goron et al. [81], fitting 3D shape primitives has been successfully applied to robotic grasping tasks. In Zapata-Impata et al. [298], the background is first filtered out using a height constraint, and the table is removed by fitting a plane with RANSAC [70]; the remaining point cloud is then clustered to obtain the objects' point clouds. There also exist other ways to remove background points, such as fitting them against an existing full 3D point cloud of the background. These methods have been successfully applied to robotic grasping tasks.
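The sketch below illustrates this table-removal-and-clustering workflow using Open3D, which is an assumed library choice; the file name and thresholds are placeholders.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.pcd")               # partial view of the scene

# Fit the dominant plane (the table) with RANSAC and drop its inliers.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3, num_iterations=1000)
objects = pcd.select_by_index(inliers, invert=True)

# Cluster the remaining points; each cluster is a candidate object cloud.
labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=50))
clusters = [objects.select_by_index(np.where(labels == i)[0])
            for i in range(labels.max() + 1)]
```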

2.2 Object detection

The task of object detection is to detect instances of objects of a certain class, which can be treated as a localization task plus a classification task. Usually, the shapes of the target objects are unknown, and accurate salient regions can hardly be obtained. Therefore, regular bounding boxes are used for general object localization and classification, and the outputs of object detection are bounding boxes with class labels. Based on whether region proposals are used or not, the methods can be divided into two-stage methods and one-stage methods. These methods are summarized for 2D and 3D inputs in Table 2.

Methods Two-stage methods One-stage methods
2D detection SIFT [150], FAST [208], SURF [6], ORB [209], OverFeat [222], Erhan et al. [64], Szegedy et al. [236], RCNN [76], Fast R-CNN [77], Faster RCNN [206], R-FCN [47], FPN [135] YOLO [202], SSD [144], YOLOv2 [203], RetinaNet [136], YOLOv3 [204], FCOS [243], CornerNet [123], ExtremeNet [317], CenterNet [316, 62], CentripetalNet [57], YOLOv4 [14]
3D detection Spin Images [112], 3D Shape Context [73], FPFH [210], CVFH [2], SHOT [216], Sliding Shapes [230], Frustum PointNets [193], PointFusion [276], FrustumConvNet [268], Deep Sliding Shapes [231], MV3D [37], MMF [134], Part-A [226], PV-RCNN [224], PointRCNN [225], STD [287], VoteNet [192], MLCVNet [275], H3DNet [308], ImVoteNet [191] VoxelNet [319], SECOND [280], PointPillars [122], TANet [147], HVNet [288], 3DSSD [286], Point-GNN [227], DOPS [166], Associate-3Ddet [61]
Table 2: Methods of object detection.

2D object detection

2D object detection means detecting the target objects in 2D images by computing their 2D bounding boxes and categories. The most popular approach is to generate object proposals and then conduct classification, which constitutes the two-stage methods. With the development of deep learning networks, especially the Convolutional Neural Network (CNN), two-stage methods have improved enormously. In addition, a large number of one-stage methods have been proposed which achieve high accuracy at high speed. A typical functional flow-chart of 2D object detection is illustrated in Fig. 7.

Figure 7: Typical functional flow-chart of 2D object detection.

Two-stage methods Two-stage methods can be referred to as region proposal-based methods. Most traditional methods use a sliding window strategy to obtain bounding boxes first, and then use feature descriptions of the bounding boxes for classification. A large number of hand-crafted global and local descriptors have been proposed, such as SIFT [150], FAST [208], SURF [6] and ORB [209]. Based on these descriptors, researchers trained classifiers, such as neural networks, Support Vector Machines (SVM) or AdaBoost, to conduct 2D detection. These traditional detection methods have some disadvantages: for example, the sliding windows must be predefined for specific objects, and the hand-crafted features are not representative enough to build a strong classifier.

With the development of deep learning, region proposals can be computed with a deep neural network. OverFeat [222] trained a fully connected layer to predict the box coordinates for a localization task that assumes a single object. Erhan et al. [64] and Szegedy et al. [236] generated region proposals from a network whose last fully connected layer simultaneously predicts multiple boxes. Moreover, deep neural networks extract more representative features than hand-crafted ones, and training classifiers on CNN [119] features greatly improved performance. R-CNN [76] uses Selective Search (SS) [247] to generate region proposals, uses a CNN to extract features, and trains SVM classifiers. This traditional classifier is replaced by directly regressing the bounding boxes from the Region of Interest (ROI) feature vector in Fast R-CNN [77]. Faster R-CNN [206] further replaces SS with the Region Proposal Network (RPN), a fully convolutional network (FCN) [148] that can be trained end-to-end specifically for generating detection proposals. This design is also adopted in other two-stage methods, such as R-FCN [47] and FPN [135]. Generally, two-stage methods achieve higher accuracy but need more computing resources or computation time.
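As a usage illustration, the sketch below runs a pre-trained two-stage detector (Faster R-CNN with a ResNet-50 FPN backbone) from torchvision; the weights="DEFAULT" argument assumes a recent torchvision release, and the image path and score threshold are placeholders.

```python
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = convert_image_dtype(read_image("scene.jpg"), torch.float)  # CHW in [0, 1]
with torch.no_grad():
    det = model([image])[0]                  # dict with 'boxes', 'labels', 'scores'

keep = det["scores"] > 0.7
boxes, labels = det["boxes"][keep], det["labels"][keep]
```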

One-stage methods One-stage methods can also be referred to as regression-based methods. Compared to two-stage approaches, the single-stage pipeline skips the separate object proposal generation and predicts bounding boxes and class scores in one evaluation. YOLO [202] performs joint grid regression, simultaneously predicting multiple bounding boxes and class probabilities for those boxes. YOLO is not well suited for small objects, since it only regresses two bounding boxes per grid cell. SSD [144] predicts category scores and box offsets for a fixed set of anchor boxes produced by sliding windows; compared with YOLO, SSD is faster and much more accurate. YOLOv2 [203] also adopts sliding-window anchors for classification and spatial location prediction so as to achieve a higher recall than YOLO. RetinaNet [136] proposed the focal loss by reshaping the standard cross-entropy loss so that the detector focuses more on hard, misclassified examples during training; it achieved accuracy comparable to two-stage detectors at high detection speed. Compared with YOLOv2, YOLOv3 [204] and YOLOv4 [14] incorporate a collection of further improvements, showing large performance gains without sacrificing speed and being more robust to small objects. There also exist anchor-free methods that do not use anchor bounding boxes, such as FCOS [243], CornerNet [123], ExtremeNet [317], CenterNet [316, 62] and CentripetalNet [57]. Further reviews of these works can be found in recent surveys [323, 313, 139, 232].

Discussions 2D object detection methods are widely used in 2D planar robotic grasping tasks, as discussed in Section 4.1.2.

3D object detection

3D object detection aims at finding the amodal 3D bounding box of the target object, i.e., the 3D bounding box that the complete target object occupies. 3D object detection has been deeply explored for both outdoor and indoor scenes. For robotic grasping tasks, we can obtain the 2D and 3D information of the scene from RGB-D data, and general 3D object detection methods can be used. Similar to 2D object detection, both two-stage and one-stage methods exist, where two-stage methods are region proposal-based and one-stage methods are regression-based. A typical functional flow-chart of 3D object detection is illustrated in Fig. 8.

Figure 8: Typical functional flow-chart of 3D object detection.

Two-stage methods Traditional 3D detection methods usually aim at objects with known shapes, transforming the 3D object detection problem into a detection and 6D object pose estimation problem. Many hand-crafted 3D shape descriptors, such as Spin Images [112], 3D Shape Context [73], FPFH [210], CVFH [2] and SHOT [216], have been proposed, which can locate object proposals. In addition, the accurate 6D pose of the target object can be obtained through local registration; this part is introduced in Section 3.1.2. However, these methods face difficulties in general 3D object detection tasks, for which 3D region proposals are widely used. Traditional methods train classifiers, such as SVMs, based on 3D shape descriptors. Sliding Shapes [230] slides a 3D detection window in 3D space and extracts features from the 3D point cloud to train an Exemplar-SVM classifier [156]. With the development of deep learning, 3D region proposals can be generated efficiently, and the 3D bounding boxes can be regressed using features from deep neural networks rather than training traditional classifiers. The various methods of generating 3D object proposals can be roughly divided into three kinds: frustum-based methods [193, 276, 268], global regression-based methods [231, 37, 134] and local regression-based methods.

Frustum-based methods generate object proposals using mature 2D object detectors, which is a straightforward approach. Frustum PointNets [193] leverages a 2D CNN object detector to obtain 2D regions, and the lifted frustum-like 3D point clouds become the 3D region proposals; the amodal 3D bounding boxes are regressed from features of the segmented points within the proposals based on PointNet [194]. PointFusion [276] utilizes Faster R-CNN [206] to obtain the image crop first, and deep features from the corresponding image and the raw point cloud are densely fused to regress the 3D bounding boxes. FrustumConvNet [268] also lifts 2D region proposals to 3D and generates a sequence of frustums for each region proposal.

Global regression-based methods generate 3D region proposals from feature representations extracted from single or multiple inputs. Deep Sliding Shapes [231] proposed the first 3D Region Proposal Network (RPN) using 3D convolutional neural networks (ConvNets) and the first joint Object Recognition Network (ORN) to extract geometric features in 3D and color features in 2D to regress 3D bounding boxes. MV3D [37] represents the point cloud using the bird's-eye view and employs 2D convolutions to generate 3D proposals; the region-wise features obtained via ROI pooling of multi-view data are fused to jointly predict the 3D bounding boxes. MMF [134] proposed a multi-task multi-sensor fusion model for 2D and 3D object detection, which generates a small number of high-quality 3D detections using multi-sensor fused features and applies ROI feature fusion to regress more accurate 2D and 3D boxes. Part-A² [226] predicts intra-object part locations and generates 3D proposals by feeding the point cloud to an encoder-decoder network; a RoI-aware point cloud pooling aggregates the part information from each 3D proposal, and a part-aggregation network refines the results. PV-RCNN [224] utilizes a voxel CNN with 3D sparse convolution [83, 82] for feature encoding and proposal generation, and proposes a voxel-to-keypoint scene encoding via voxel set abstraction and a keypoint-to-grid RoI feature abstraction for proposal refinement; it achieves remarkable 3D detection performance on outdoor scene datasets.

Local regression-based methods generate point-wise 3D region proposals. PointRCNN [225] extracts point-wise feature vectors from the input point cloud and generates a 3D proposal from each foreground point obtained through segmentation; point cloud region pooling and canonical 3D bounding box refinement are then conducted. STD [287] designs spherical anchors and a strategy for assigning labels to anchors to generate accurate point-based proposals, and proposes a PointsPool layer to generate dense proposal features for the final box prediction. VoteNet [192] proposed a deep Hough voting strategy to generate 3D vote points from sampled 3D seed points; the vote points are clustered to obtain object proposals, which are further refined. MLCVNet [275] proposed the Multi-Level Context VoteNet, which considers the contextual information between objects. H3DNet [308] predicts a hybrid set of geometric primitives, such as centers, face centers and edge centers of the 3D bounding boxes, and formulates 3D object detection as regressing and aggregating these geometric primitives; a matching and refinement module is then utilized to classify object proposals and fine-tune the results. Compared with VoteNet [192], which uses only point cloud input, ImVoteNet [191] additionally extracts geometric and semantic features from the 2D images and fuses the 2D features into the 3D detection pipeline, achieving remarkable 3D detection performance on indoor scene datasets.

One-stage methods One-stage methods directly predict class probabilities and regress the 3D amodal bounding boxes of the objects using a single-stage network; they need neither region proposal generation nor post-processing. VoxelNet [319] divides a point cloud into equally spaced 3D voxels and transforms the group of points within each voxel into a unified feature representation; the final results are obtained through convolutional middle layers and a region proposal network. Compared with VoxelNet, SECOND [280] applies sparse convolution layers [82] for parsing the compact voxel features. PointPillars [122] converts a point cloud into a sparse pseudo-image, which is processed into a high-level representation by a 2D convolutional backbone; the features from the backbone are used by the detection head to predict 3D bounding boxes. TANet [147] proposed a Triple Attention (TA) module and a Coarse-to-Fine Regression (CFR) module, focusing on the 3D detection of hard objects and robustness to noisy points. HVNet [288] proposed a hybrid voxel network which fuses voxel feature encoders (VFE) of different scales at the point-wise level and projects them into multiple pseudo-image feature maps. The above methods are mainly voxel-based 3D single-stage detectors. Yang et al. [286] proposed a point-based 3D single-stage object detector called 3DSSD, which contains a fusion sampling strategy in the downsampling process, a candidate generation layer, and an anchor-free regression head with a 3D center-ness assignment strategy, achieving a good balance between accuracy and efficiency. Point-GNN [227] designed a graph neural network on the point cloud with an auto-registration mechanism that detects multiple objects in a single shot. DOPS [166] proposed an object detection pipeline which utilizes a 3D sparse U-Net [83] and a graph convolution module, and can jointly predict the 3D shapes of the objects. Associate-3Ddet [61] learns to associate features extracted from the real scene with more discriminative features from class-wise conceptual models. For a comprehensive review of 3D object detection, refer to the survey [88].
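To make the voxel-based representation concrete, below is a minimal numpy sketch of the voxelization step that such detectors perform before per-voxel feature encoding; the grid origin and resolution are placeholders, and real implementations cap the number of points per voxel and run on GPU.

```python
import numpy as np

def voxelize(points, voxel_size=0.05, origin=(-2.0, -2.0, 0.0)):
    """points: (N, 3) array; returns a dict mapping voxel grid index -> (M, 3) points."""
    idx = np.floor((points - np.asarray(origin)) / voxel_size).astype(np.int64)
    voxels = {}
    for key, point in zip(map(tuple, idx), points):
        voxels.setdefault(key, []).append(point)
    return {k: np.stack(v) for k, v in voxels.items()}

# Example on random points inside a 4 m x 4 m x 2 m volume.
voxels = voxelize(np.random.rand(10000, 3) * (4.0, 4.0, 2.0) + (-2.0, -2.0, 0.0))
```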

Discussions 3D object detection only provides the general extent of the target object, which is not sufficient to conduct a robotic grasp, and it is mostly used in autonomous driving. However, the estimated 3D bounding boxes can provide approximate grasp positions and valuable information for collision detection.

2.3 Object instance segmentation

Object instance segmentation refers to detecting the pixel-level or point-level instances of objects of a certain class, which is closely related to object detection and semantic segmentation. Again, two kinds of methods exist: two-stage (region proposal-based) methods and one-stage (regression-based) methods. Representative works for 2D and 3D inputs are shown in Table 3.

Methods Two-stage methods One-stage methods
2D instance segmentation SDS [93], MNC [46], PANet [143], Mask R-CNN [94], MaskLab [35], HTC [34], PointRend [117], FGN [67] DeepMask [188], SharpMask [187], InstanceFCN [45], FCIS [131], TensorMask [38], YOLACT [17], YOLACT++ [18], PolarMask [274], SOLO [264], CenterMask [125], BlendMask [32]
3D instance segmentation GSPN [291], 3D-SIS [105], 3D-MPA [63] SGPN [261], MASC [137], ASIS [265], JSIS3D [184], JSNet [310], 3D-BoNet [281], LiDARSeg [303], OccuSeg [92]
Table 3: Methods of object instance segmentation.

2D object instance segmentation

2D object instance segmentation means detecting the pixel-level instances of objects of a certain class from an input image, usually represented as masks. Two-stage methods follow mature object detection frameworks, while one-stage methods conduct regression from the whole input image directly. A typical functional flow-chart of 2D object instance segmentation is illustrated in Fig. 9.

Figure 9: Typical functional flow-chart of 2D object instance segmentation.

Two-stage methods This kind of method can also be referred to as region proposal-based methods. Mature 2D object detectors are used to generate bounding boxes or region proposals, and object masks are then predicted within the bounding boxes. Many of these methods are based on convolutional neural networks (CNNs). SDS [93] uses a CNN to classify category-independent region proposals. MNC [46] conducts instance segmentation via three networks, respectively differentiating instances, estimating masks, and categorizing objects. The Path Aggregation Network (PANet) [143] boosts the information flow in the proposal-based instance segmentation framework. Mask R-CNN [94] extends Faster R-CNN [206] by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition, achieving promising results. MaskLab [35] also builds on top of Faster R-CNN [206] and additionally produces semantic and instance center direction outputs. Chen et al. [34] proposed a framework called Hybrid Task Cascade (HTC), which performs cascaded refinement on object detection and segmentation jointly and adopts a fully convolutional branch to provide spatial context. PointRend [117] performs point-based segmentation predictions at adaptively selected locations based on an iterative subdivision algorithm; it can be flexibly applied on top of instance segmentation models and yields significantly more detailed results. FGN [67] proposed a Fully Guided Network for few-shot instance segmentation, which introduces different guidance mechanisms into the key components of Mask R-CNN [94].

Single-stage methods This kind of method can also be referred to as regression-based methods, where the segmentation masks are predicted along with the objectness scores. DeepMask [188], SharpMask [187] and InstanceFCN [45] predict segmentation masks for the object located at the center. FCIS [131] is a fully convolutional instance-aware semantic segmentation method in which position-sensitive inside/outside score maps are used to perform object segmentation and detection. TensorMask [38] uses structured 4D tensors to represent masks over a spatial domain and presents a framework to predict dense masks. YOLACT [17] breaks instance segmentation into two parallel subtasks: generating a set of prototype masks and predicting per-instance mask coefficients; it is the first real-time one-stage instance segmentation method and is improved by YOLACT++ [18]. PolarMask [274] formulates instance segmentation as predicting the contour of each instance through instance center classification and dense distance regression in a polar coordinate system. SOLO [264] introduces the notion of instance categories, which assigns categories to each pixel within an instance according to the instance's location and size, converting instance mask segmentation into a classification-solvable problem. CenterMask [125] adds a novel spatial attention-guided mask (SAG-Mask) branch to the anchor-free one-stage object detector FCOS [243], in the same vein as Mask R-CNN [94]. BlendMask [32] also builds upon the FCOS [243] detector, using a blender module to effectively predict dense per-pixel position-sensitive instance features and learn attention maps for each instance. For detailed reviews, refer to the surveys [233, 89].

Discussions 2D object instance segmentation is widely used in robotic grasping tasks. For example, SegICP [271] utilizes RGB-based object segmentation to obtain the points belonging to the target objects. Xie et al. [273] separately leverage RGB and depth for unseen object instance segmentation. Danielczuk et al. [48] segment unknown 3D objects from real depth images using Mask R-CNN [94] trained on synthetic data.
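A minimal inference sketch with a pre-trained Mask R-CNN from torchvision is shown below; the resulting masks can then index an organized point cloud (such as the one produced by the depth-lifting sketch in Section 1) to obtain each object's 3D points. Paths, thresholds and the weights="DEFAULT" argument are assumptions tied to a recent torchvision release.

```python
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = convert_image_dtype(read_image("scene.jpg"), torch.float)
with torch.no_grad():
    out = model([image])[0]                  # 'boxes', 'labels', 'scores', 'masks'

keep = out["scores"] > 0.7
masks = out["masks"][keep, 0] > 0.5          # one boolean H x W mask per kept instance
# With an organized point cloud `cloud` of shape (H, W, 3):
# object_points = cloud[masks[0].numpy()]    # (M, 3) points of the first instance
```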

3D object instance segmentation

3D object instance segmentation means detecting the point-level instances of objects of a certain class from an input 3D point cloud. Similar to 2D object instance segmentation, two-stage methods need region proposals, while one-stage methods are proposal-free. A typical functional flow-chart of 3D object instance segmentation is illustrated in Fig. 10.

Figure 10: Typical functional flow-chart of 3D object instance segmentation.

Two-stage methods This kind of method can also be referred to as proposal-based methods. General methods utilize 2D or 3D detection results and conduct foreground/background segmentation in the corresponding frustums or bounding boxes. GSPN [291] proposed the Generative Shape Proposal Network to generate 3D object proposals and the Region-PointNet framework to conduct 3D object instance segmentation. 3D-SIS [105] leverages joint 2D and 3D end-to-end feature learning on both geometry and RGB input for 3D object bounding box detection and semantic instance segmentation. 3D-MPA [63] predicts dense object centers based on learned semantic features from a sparse volumetric backbone, employs a graph convolutional network to explicitly model higher-order interactions between neighboring proposal features, and utilizes a multi-proposal aggregation strategy rather than NMS to obtain the final results.

Single-stage methods This kind of method can also be referred to as regression-based methods. Many of them learn to group per-point features to segment 3D instances. SGPN [261] proposed the Similarity Group Proposal Network to predict point grouping proposals and a corresponding semantic class for each proposal, from which instance segmentation results can be directly extracted. MASC [137] utilizes submanifold sparse convolutions [83, 82] to predict semantic scores for each point as well as the affinity between neighboring voxels at different scales; the points are then grouped into instances based on the predicted affinity and the mesh topology. ASIS [265] learns semantic-aware point-level instance embeddings, and semantic features of the points belonging to the same instance are fused together to make per-point semantic predictions. JSIS3D [184] proposed a multi-task point-wise network (MT-PNet) that simultaneously predicts the object categories of 3D points and embeds these points into high-dimensional feature vectors that allow clustering the points into object instances. JSNet [310] also proposed a joint instance and semantic segmentation (JISS) module and designed an efficient point cloud feature fusion (PCFF) module to generate more discriminative features. 3D-BoNet [281] directly regresses 3D bounding boxes for all instances in a point cloud while simultaneously predicting a point-level mask for each instance. LiDARSeg [303] proposed a dense feature encoding technique, a solution for single-shot instance prediction and effective strategies for handling severe class imbalance. OccuSeg [92] proposed an occupancy-aware 3D instance segmentation scheme which predicts the number of occupied voxels for each instance; the occupancy signal guides the clustering stage of 3D instance segmentation, and OccuSeg achieves remarkable performance.

Discussions 3D object instance segmentation is quite important for robotic grasping tasks. However, current methods mainly leverage 2D instance segmentation to obtain the 3D point cloud of the target object, exploiting the advantages of RGB-D images. 3D object instance segmentation is still a fast-developing area, and it will be widely used once its performance and speed improve substantially.

3 Object Pose Estimation

In some 2D planar grasps, where the target objects are constrained to the 2D workspace and are not piled up, the 6D object pose can be represented as a 2D position and an in-plane rotation angle. This case is relatively simple and is addressed quite well by matching 2D feature points or 2D contour curves. In other 2D planar grasp and 6DoF grasp scenarios, the full 6D object pose is needed, which makes the robot aware of the 3D position and 3D orientation of the target object. The 6D object pose transforms the object from the object coordinate system into the camera coordinate system. We mainly focus on 6D object pose estimation in this section and divide the methods into three kinds: correspondence-based, template-based and voting-based methods. For each kind, both traditional methods and deep learning-based methods are reviewed.

3.1 Correspondence-based methods

Correspondence-based 6D object pose estimation covers methods that find correspondences between the observed input data and the existing complete 3D object model. When solving this problem based on the 2D RGB image, we need to find correspondences between 2D pixels and 3D points of the existing 3D model; the 6D object pose can then be recovered with Perspective-n-Point (PnP) algorithms [129]. When solving it based on the 3D point cloud lifted from the depth image, we need to find correspondences between 3D points of the observed partial-view point cloud and of the complete 3D model; the 6D object pose can then be recovered through least-squares methods. Correspondence-based methods are summarized in Table 4.

Methods Descriptions Traditional methods Deep learning-based methods
2D image-based methods Find correspondences between 2D pixels and 3D points, and use PnP methods SIFT [150], FAST [208], SURF [6], ORB [209] LCD [185], BB8 [200], Tekin et al. [239], Crivellaro et al. [43], KeyPose [145], Hu et al. [107], HybridPose [229], Hu et al. [108], DPOD [297], Pix2pose [176], EPOS [99]
3D point cloud-based methods Find correspondences between 3D points Spin Images [112], 3D Shape Context [73], FPFH [210], CVFH [2], SHOT [216] 3DMatch [300], 3DFeat-Net [289], Gojcic et al. [78], Yuan et al. [296], StickyPillars [228]
Table 4: Summary of correspondence-based 6D object pose estimation methods.

2D image-based methods

When using the 2D RGB image, correspondence-based methods mainly target objects with rich texture through the matching of 2D feature points, as shown in Fig. 11. Multiple images are first rendered by projecting the existing 3D model from various angles, and each object pixel in the rendered images corresponds to a 3D point. By matching 2D feature points between the observed image and the rendered images [248, 128], 2D-3D correspondences are established. Apart from rendered images, the keyframes of keyframe-based SLAM approaches [164] can also provide 2D-3D correspondences for 2D keypoints. Common 2D descriptors such as SIFT [150], FAST [208], SURF [6] and ORB [209] are usually utilized for the 2D feature matching. Based on the 2D-3D correspondences, the 6D object pose can be calculated with Perspective-n-Point (PnP) algorithms [129]. However, these 2D feature-based methods fail when the objects lack rich texture.

Figure 11: Typical functional flow-chart of 2D correspondence-based 6D object pose estimation methods. Data from the lineMod dataset [97].
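Below is a hedged sketch of the classical pipeline just described: ORB features are matched between the observed image and a rendered template view, each matched template pixel is looked up in a per-pixel 3D coordinate map of the rendered view, and the pose is recovered with PnP inside RANSAC. The file names, the coordinate map and the intrinsic matrix are placeholders.

```python
import cv2
import numpy as np

observed = cv2.imread("observed.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
template_xyz = np.load("template_xyz.npy")       # (H, W, 3) model coordinates per pixel
K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1]])

orb = cv2.ORB_create(2000)
kp_obs, des_obs = orb.detectAndCompute(observed, None)
kp_tpl, des_tpl = orb.detectAndCompute(template, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_obs, des_tpl)

img_pts = np.float32([kp_obs[m.queryIdx].pt for m in matches])
obj_pts = np.float32([template_xyz[int(kp_tpl[m.trainIdx].pt[1]),
                                   int(kp_tpl[m.trainIdx].pt[0])] for m in matches])

ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
R, _ = cv2.Rodrigues(rvec)                       # 6D pose: rotation R, translation tvec
```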

With the development of deep neural networks such as CNNs, representative features can be extracted from the image. A straightforward way is to extract discriminative feature points [290, 246] and match them using the representative CNN features. Yi et al. [290] presented a SIFT-like feature descriptor. Truong et al. [246] presented a method to greedily learn accurate match points. SuperPoint [52] proposed a self-supervised framework for training interest point detectors and descriptors, which shows advantages over several traditional feature detectors and descriptors. LCD [185] learns a local cross-domain descriptor for 2D image and 3D point cloud matching, using a dual auto-encoder neural network that maps 2D and 3D inputs into a shared latent space representation.

There exists another kind of method [200, 239, 43, 108] which uses the representative CNN features to predict the 2D locations of 3D points, as shown in Fig. 11. Since it is difficult to select which 3D points to project, many methods utilize the eight vertices of the object's 3D bounding box. Rad and Lepetit [200] predict the 2D projections of the corners of the 3D bounding box to obtain 2D-3D correspondences. Differently, Tekin et al. [239] proposed a single-shot deep CNN architecture that directly detects the 2D projections of the 3D bounding box vertices without a posteriori refinement. Some other methods utilize feature points of the 3D object. Crivellaro et al. [43] predict the pose of each part of the object in the form of the 2D projections of a few control points with the assistance of a CNN. KeyPose [145] predicts object poses using 3D keypoints from stereo input, and is suitable for transparent objects. Hu et al. [107] further predict the 6D object pose from a group of candidate 2D-3D correspondences using a deep network in a single-stage manner, instead of RANSAC-based Perspective-n-Point (PnP) algorithms. HybridPose [229] predicts a hybrid intermediate representation to express different geometric information in the input image, including keypoints, edge vectors, and symmetry correspondences. Some other methods predict 3D positions for all the pixels of the object. Hu et al. [108] proposed a segmentation-driven 6D pose estimation framework where each visible part of the object contributes a local pose prediction in the form of 2D keypoint locations; the pose candidates are then combined into a robust set of 2D-3D correspondences from which a reliable pose estimate is computed. DPOD [297] estimates dense multi-class 2D-3D correspondence maps between an input image and available 3D models. Pix2Pose [176] regresses pixel-wise 3D coordinates of objects from RGB images using 3D models without textures. EPOS [99] represents objects by surface fragments, which allows handling symmetries, predicts a data-dependent number of precise 3D locations at each pixel, establishing many-to-many 2D-3D correspondences, and utilizes an estimator for recovering the poses of multiple object instances.

3D point cloud-based methods

A typical functional flow-chart of 3D correspondence-based 6D object pose estimation methods is illustrated in Fig. 12. When using the 3D point cloud lifted from the depth image, 3D geometric descriptors can be utilized for matching, which eliminates the influence of texture. The 6D object pose can then be obtained by computing the transformation from the 3D-3D correspondences directly. Widely used 3D local shape descriptors, such as Spin Images [112], 3D Shape Context [73], FPFH [210], CVFH [2] and SHOT [216], can be utilized to find correspondences between the object's partial 3D point cloud and its full point cloud to obtain the 6D object pose; other 3D local descriptors are covered in the survey [87]. However, this kind of method requires the target objects to have rich geometric features.

Figure 12: Typical functional flow-chart of 3D correspondence-based 6D object pose estimation methods.

There also exist deep learning-based 3D descriptors [300, 289] for matching 3D points, which are representative and discriminative. 3DMatch [300] matches 3D feature points using 3D voxel-based deep learning networks. 3DFeat-Net [289] proposed a weakly supervised network that holistically learns a 3D feature detector and descriptor using only GPS/INS-tagged 3D point clouds. Gojcic et al. [78] proposed 3DSmoothNet, which matches 3D point clouds with a siamese deep learning architecture and fully convolutional layers using a voxelized smoothed density value (SDV) representation. Yuan et al. [296] proposed a self-supervised learning method for point cloud descriptors, which requires no manual annotation and achieves competitive performance. StickyPillars [228] proposed an end-to-end trained 3D feature matching approach based on a graph neural network, performing context aggregation with transformer-based multi-head self- and cross-attention.
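Given a set of 3D-3D correspondences from any of the descriptors above, the aligning rigid transform has a closed-form least-squares solution (the Kabsch/Umeyama algorithm); a minimal sketch follows. In practice it is wrapped in RANSAC or a robust estimator to reject wrong matches.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """src, dst: (N, 3) corresponding points; returns R (3x3) and t (3,) such that
    dst[i] ≈ R @ src[i] + t in the least-squares sense."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # fix a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```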

3.2 Template-based methods

Template-based 6D object pose estimation covers methods that find the most similar template among templates labeled with ground-truth 6D object poses. In the 2D case, the templates can be 2D images projected from known 3D models, so that the objects in the templates have corresponding 6D object poses in the camera coordinate system, and the 6D object pose estimation problem is transformed into an image retrieval problem. In the 3D case, the template can be the full point cloud of the target object; we need to find the 6D pose that best aligns the partial point cloud to the template, so 6D object pose estimation becomes a part-to-whole coarse registration problem. Template-based methods are summarized in Table 5.

Methods Descriptions Traditional methods Deep learning-based methods
2D image-based methods Retrieve the template image that is most similar with the observed image LineMod [97], Hodaň et al. [103] AAE [234], PoseCNN [272], SSD6D [114], Deep-6DPose [54], Liu et al. [138], CDPN [132], Tian et al. [242], NOCS [258], LatentFusion [175], Chen et al. [28]
3D point cloud-based methods Find the pose that best aligns the observed partial 3D point cloud with the template full 3D model Super4PCS [157], Go-ICP [284] PCRNet [219], DCP [266], PointNetLK [3], PRNet [267], DeepICP [151], Sarode et al. [218], TEASER [283], DGR [40], G2L-Net [36], Gao et al. [74]
Table 5: Summary of template-based 6D object pose estimation methods.

2D image-based methods

Traditional 2D feature-based methods could be used to find the most similar template image, and 2D correspondence-based methods could be utilized if discriminative feature points exist. Therefore, this kind of method mainly aims at texture-less or non-textured objects that correspondence-based methods can hardly deal with, and the gradient information is usually utilized. Typical functional flow-chart of 2D template-based 6D object pose estimation methods is illustrated in Fig. 13. Multiple images, generated by projecting the existing complete 3D model from various angles, are regarded as the templates. Hinterstoisser et al. [97] proposed a novel image representation by spreading image gradient orientations for template matching and represented a 3D object with a limited set of templates. The accuracy of the estimated pose was improved by taking into account the 3D surface normal orientations computed from the dense point cloud obtained with a dense depth sensor. Hodaň et al. [103] proposed a method for the detection and accurate 3D localization of multiple texture-less and rigid objects depicted in RGB-D images. The candidate object instances are verified by matching feature points in different modalities, and the approximate object pose associated with each detected template is used as the initial value for further optimization. There exist deep learning-based image retrieval methods [80] which could assist the template matching process. However, few of them have been used in template-based pose estimation, perhaps because the number of templates is too small for deep learning methods to learn representative and discriminative features.

Figure 13: Typical functional flow-chart of 2D template-based 6D object pose estimation methods. Data from the lineMod dataset [97].

The above methods find the most similar template explicitly, and there also exist implicit ways. Sundermeyer et al. [234] proposed Augmented Autoencoders (AAE), which learn the 3D orientation implicitly. Thousands of template images are rendered from a full 3D model and encoded into a codebook. The input image is encoded into a new code and matched against the codebook to find the most similar template image, from which the 6D object pose is obtained.
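
The codebook matching step can be illustrated with a few lines of NumPy. In this hypothetical sketch, `encoder` stands in for the trained autoencoder's encoder, and the template rotations are assumed to be stored alongside their codes; retrieval is a simple cosine-similarity nearest-neighbour search.

```python
import numpy as np

def build_codebook(encoder, template_images, template_rotations):
    """Encode every rendered template once offline; rotations: list of (3, 3) matrices."""
    codes = np.stack([encoder(img) for img in template_images])      # (N, D)
    codes /= np.linalg.norm(codes, axis=1, keepdims=True)
    return codes, np.asarray(template_rotations)

def lookup_rotation(encoder, observed_crop, codes, rotations):
    """Match the observed object crop against the codebook by cosine similarity."""
    z = encoder(observed_crop)
    z /= np.linalg.norm(z)
    scores = codes @ z                 # similarity against every template code
    best = int(np.argmax(scores))
    return rotations[best], scores[best]
```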

There also exist methods [272, 54, 138] that directly estimate the 6D pose of the target object from the input image, which can be regarded as implicitly finding the most similar image among the pre-trained and labeled images. Different from correspondence-based methods, this kind of method learns a direct mapping from an input image to a parametric representation of the pose, and the 6D object pose can thus be estimated jointly with object detection [178]. Xiang et al. [272] proposed PoseCNN for direct 6D object pose estimation. The 3D translation of an object is estimated by localizing its center in the image and predicting its distance from the camera, and the 3D rotation is computed by regressing a quaternion representation. Kehl et al. [114] presented a similar method making use of the SSD network. Do et al. [54] proposed an end-to-end deep learning framework named Deep-6DPose, which jointly detects, segments, and recovers the 6D poses of object instances from a single RGB image. They extended the instance segmentation network Mask R-CNN [94] with a pose estimation branch to directly regress 6D object poses without any post-refinement. Liu et al. [138] proposed a two-stage CNN architecture which directly outputs the 6D pose without requiring multiple stages or additional post-processing such as PnP; they transformed the pose estimation problem into a classification and regression task. The Coordinates-based Disentangled Pose Network (CDPN) [132] disentangles the pose to predict rotation and translation separately. Tian et al. [242] proposed a discrete-continuous formulation for rotation regression to resolve the local-optimum problem: they uniformly sample rotation anchors in SO(3) and predict a constrained deviation from each anchor to the target.
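
The core of such direct regression methods is a small head that maps image features to a rotation and a translation. The following PyTorch sketch is a generic illustration of this idea, not the architecture of any specific paper; the backbone features and the handling of symmetric objects are left out, and all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseRegressionHead(nn.Module):
    """Regress a unit quaternion and a 3D translation from pooled object features."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(inplace=True))
        self.quat = nn.Linear(256, 4)    # rotation as a quaternion
        self.trans = nn.Linear(256, 3)   # translation (or image center + depth)

    def forward(self, feat):             # feat: (B, feat_dim)
        h = self.fc(feat)
        q = F.normalize(self.quat(h), dim=-1)   # constrain to a unit quaternion
        t = self.trans(h)
        return q, t

# Usage sketch: feat = backbone(roi_crop); q, t = PoseRegressionHead()(feat)
```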

There also exist methods that build a latent representation for category-level objects. This kind of methods can also be treated as the template-based methods, and the template could be implicitly built from multiple images. NOCS [258], LatentFusion [175] and Chen et al. [28] are the representative methods.

3D point cloud-based methods

Typical functional flow-chart of 3D template-based 6D object pose estimation methods is illustrated in Fig. 14. Traditional partial registration methods aim at finding the 6D transformation that best aligns the partial point cloud to the full point cloud. Various global registration methods [157, 284, 315] exist, which tolerate large variations of the initial pose and are robust to heavy noise. However, this kind of method is time-consuming. Most of these methods utilize local registration methods such as the iterative closest point (ICP) algorithm [10] to refine the results. More details can be found in the review papers [237, 7].
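
Local refinement with ICP is a common final step of this pipeline. The sketch below uses Open3D's point-to-plane ICP (assuming a recent Open3D version) to refine a coarse transformation returned by a global registration method; the distance threshold is an arbitrary placeholder.

```python
import open3d as o3d

def refine_with_icp(partial, model, init_transform, dist=0.01):
    """Refine a coarse alignment of the observed partial cloud onto the full model."""
    # point-to-plane ICP needs normals on the target cloud
    model.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * dist, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        partial, model, dist, init_transform,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation   # refined 4x4 partial-to-model transform
```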

Figure 14: Typical functional flow-chart of 3D template-based 6D object pose estimation methods.

Some deep learning-based methods also exist, which can accomplish the partial registration task in an efficient way. These methods consume a pair of point clouds, extract representative and discriminative features with 3D deep learning networks, and regress the relative 6D transformation between the pair of point clouds. PCRNet [219], DCP [266], PointNetLK [3], PRNet [267], DeepICP [151], Sarode et al. [218], TEASER [283] and DGR [40] are representative methods, and readers could refer to the recent survey [253]. There also exist methods [36, 74] that directly regress the 6D object pose from the partial point cloud. G2L-Net [36] extracts the coarse object point cloud from the RGB-D image by 2D detection, and then conducts translation localization and rotation localization. Gao et al. [74] conduct 6D object pose regression via supervised learning on point clouds.

3.3 Voting-based methods

Voting-based methods mean that each pixel or 3D point contributes to the 6D object pose estimation by providing one or more votes. We roughly divide voting methods into two kinds: indirect voting methods and direct voting methods. Indirect voting methods mean that each pixel or 3D point votes for some feature points, which affords 2D-3D or 3D-3D correspondences. Direct voting methods mean that each pixel or 3D point votes for a certain object coordinate or 6D pose. These methods are summarized in Table 6.

Methods Descriptions 2D image-based methods 3D point cloud-based methods
Indirect voting methods Voting for correspondence-based methods PVNet [182], Yu et al. [295] PVN3D [96], YOLOff [79], 6-PACK [256]
Direct voting methods Voting for template-based methods Brachmann et al. [22], Tejani et al. [238], Crivellaro et al. [43], PPF [59] DenseFusion [257], MoreFusion [255]
Table 6: Summary of voting-based 6D object pose estimation methods.

Indirect voting methods

This kind of method can be regarded as voting for correspondence-based methods. In the 2D case, 2D feature points are voted for and 2D-3D correspondences can be obtained. In the 3D case, 3D feature points are voted for and 3D-3D correspondences between the observed partial point cloud and the canonical full point cloud can be obtained. Most methods of this kind utilize deep learning for its strong feature representation capability in order to predict better votes. Typical functional flow-chart of indirect voting-based 6D object pose estimation methods is illustrated in Fig. 15.

Figure 15: Typical functional flow-chart of indirect voting-based object pose estimation methods.

In the 2D case, PVNet [182] votes for projected 2D feature points and then finds the corresponding 2D-3D correspondences to compute the 6D object pose. Yu et al. [295] proposed a method which votes for the 2D positions of object keypoints from vector fields, and developed a differentiable proxy voting loss (DPVL) which mimics the hypothesis selection in the voting procedure. In the 3D case, PVN3D [96] votes for 3D keypoints and can be regarded as a variation of PVNet [182] in the 3D domain. YOLOff [79] utilizes a classification CNN to estimate the object’s 2D location in the image from local patches, followed by a regression CNN trained to predict the 3D location of a set of keypoints in the camera coordinate system. The 6D object pose is then obtained by minimizing a registration error. 6-PACK [256] predicts a handful of ordered 3D keypoints for an object based on the observation that the inter-frame motion of an object instance can be estimated through keypoint matching. This method achieves category-level 6D object pose tracking on RGB-D data.
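
A simplified version of the 2D vector-field voting used by these methods can be written directly in NumPy. In this hypothetical sketch, each foreground pixel predicts a unit direction towards a keypoint; random pairs of pixels generate keypoint hypotheses by ray intersection, and the hypothesis supported by the most pixels is kept. The thresholds and the hypothesis count are arbitrary choices, not values from any cited paper.

```python
import numpy as np

def intersect(p1, d1, p2, d2):
    # Solve p1 + s*d1 = p2 + t*d2 for the 2D intersection point.
    A = np.stack([d1, -d2], axis=1)
    if abs(np.linalg.det(A)) < 1e-8:
        return None                      # nearly parallel rays
    s, _ = np.linalg.solve(A, p2 - p1)
    return p1 + s * d1

def vote_keypoint(pixels, dirs, n_hyp=128, cos_thresh=0.99, rng=np.random):
    """pixels: (N, 2) foreground pixel coordinates; dirs: (N, 2) predicted unit vectors."""
    best_pt, best_votes = None, -1
    for _ in range(n_hyp):
        i, j = rng.choice(len(pixels), 2, replace=False)
        pt = intersect(pixels[i], dirs[i], pixels[j], dirs[j])
        if pt is None:
            continue
        v = pt - pixels                              # vectors from every pixel to the hypothesis
        v /= np.linalg.norm(v, axis=1, keepdims=True) + 1e-8
        votes = np.sum(np.sum(v * dirs, axis=1) > cos_thresh)   # count consistent voters
        if votes > best_votes:
            best_pt, best_votes = pt, votes
    return best_pt, best_votes
```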

Direct voting methods

This kind of method can be regarded as voting for template-based methods if we treat the voted object pose or object coordinate as the most similar template. Typical functional flow-chart of direct voting-based 6D object pose estimation methods is illustrated in Fig. 16.

Figure 16: Typical functional flow-chart of direct voting-based 6D object pose estimation methods.

In the 2D case, this kind of method is mainly used for computing the poses of occluded objects. For these objects, the local evidence in the image restricts the possible outcome of the desired output, and every image patch is therefore usually used to cast a vote about the 6D object pose. Brachmann et al. [22] proposed a learned, intermediate representation in the form of a dense 3D object coordinate labelling paired with a dense class labelling. Each object coordinate prediction defines a 3D-3D correspondence between the image and the 3D object model, and pose hypotheses are generated and refined to obtain the final hypothesis. Tejani et al. [238] trained a Hough forest for 6D pose estimation from an RGB-D image, where each tree maps an image patch to a leaf which stores a set of 6D pose votes.

In the 3D case, Drost et al. [60] proposed the Point Pair Feature (PPF) to recover the 6D pose of objects from a depth image. A point pair feature encodes the distance and the normals of two arbitrary 3D points. PPF has been one of the most successful 6D pose estimation methods, as an efficient and integrated alternative to the traditional local and global pipelines. Hodaň et al. [101] proposed a benchmark for 6D pose estimation of a rigid object from a single RGB-D input image, and a variation of PPF [252] won the 2018 SIXD challenge.

Deep learning-based methods also assist direct voting methods. DenseFusion [257] utilizes a heterogeneous architecture that processes the RGB and depth data independently and extracts pixel-wise dense feature embeddings. Each feature embedding votes for a 6D object pose and the best prediction is adopted. An iterative pose refinement procedure is further used to refine the predicted 6D object pose. MoreFusion [255] conducts object-level volumetric fusion and performs point-wise volumetric pose prediction that exploits volumetric reconstruction and CNN feature extraction from the image observation. The object poses are then jointly refined based on geometric consistency among objects and impenetrable space.

3.4 Comparisons and discussions

In this section, we mainly review the methods based on the RGB-D image, since 3D point cloud-based 6D object pose estimation can be regarded as a registration or alignment problem for which surveys [237, 7] already exist. The related datasets, evaluation metrics and comparisons are presented.

Datasets and evaluation metrics

There exist various benchmarks [102] for 6D pose estimation, such as LineMod [97], the IC-MI/IC-BIN dataset [238], the T-LESS dataset [100], the RU-APC dataset [207], and YCB-Video [272]. Here we only review the most widely used LineMod [97] and YCB-Video [272] datasets. LineMod [97] provides manual annotations for around 1,000 images for each of the 15 objects in the dataset. Occlusion LineMod [22] contains more examples where the objects are under occlusion. YCB-Video [272] contains a subset of 21 objects and comprises 133,827 images. These datasets are widely used to evaluate various kinds of methods.

The 6D object pose can be represented by a homogeneous transformation matrix $P = [R\,|\,t]$, where $R$ is a $3\times3$ rotation matrix and $t$ is a 3D translation vector. The rotation can also be represented as a quaternion or in angle-axis form. Directly comparing the differences between these parameter values does not provide an intuitive measure of pose quality. The commonly used metrics are the Average Distance of Model Points (ADD) [97] for non-symmetric objects and the average closest point distance (ADD-S) [272] for symmetric objects.

Given a 3D model $\mathcal{M}$ with $m$ points, the ground truth rotation $R$ and translation $t$, and the estimated rotation $\tilde{R}$ and translation $\tilde{t}$, ADD is the average distance between the model points transformed by the ground truth pose and the same points transformed by the estimated pose. The 6D object pose is considered to be correct if this average distance is smaller than a predefined threshold.

\[ \mathrm{ADD} = \frac{1}{m}\sum_{x \in \mathcal{M}} \left\lVert (Rx + t) - (\tilde{R}x + \tilde{t}) \right\rVert \tag{1} \]

ADD-S [272] is an ambiguity-invariant pose error metric which takes both symmetric and non-symmetric objects into an overall evaluation. Given the estimated pose $(\tilde{R}, \tilde{t})$ and the ground truth pose $(R, t)$, ADD-S calculates the mean distance from each 3D model point transformed by the ground truth pose to its closest point on the model transformed by the estimated pose:

\[ \mathrm{ADD\text{-}S} = \frac{1}{m}\sum_{x_1 \in \mathcal{M}} \min_{x_2 \in \mathcal{M}} \left\lVert (Rx_1 + t) - (\tilde{R}x_2 + \tilde{t}) \right\rVert \]

For the LineMOD dataset, ADD is used for asymmetric objects and ADD-S is used for symmetric objects, and the threshold is usually set to 10% of the model diameter. For the YCB-Video dataset, the commonly used evaluation metric is ADD-S. The percentage of predictions with ADD-S smaller than 2cm (<2cm) is often reported, which measures the predictions under the minimum tolerance for robotic manipulation. In addition, the area under the ADD-S curve (AUC) following PoseCNN [272] is also reported, with the maximum threshold of the curve set to 10cm.
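
Both metrics are straightforward to implement once the model points and the two poses are available. The following sketch uses NumPy and SciPy's KD-tree for the closest-point search in ADD-S; the 10%-of-diameter threshold is the standard setting mentioned above.

```python
import numpy as np
from scipy.spatial import cKDTree

def add_metric(model, R, t, R_est, t_est):
    """model: (m, 3) model points; (R, t) ground-truth pose, (R_est, t_est) estimate."""
    gt = model @ R.T + t
    est = model @ R_est.T + t_est
    return np.mean(np.linalg.norm(gt - est, axis=1))

def adds_metric(model, R, t, R_est, t_est):
    gt = model @ R.T + t
    est = model @ R_est.T + t_est
    # for every ground-truth-transformed point, distance to its closest estimated point
    dists, _ = cKDTree(est).query(gt, k=1)
    return np.mean(dists)

def pose_is_correct(model, R, t, R_est, t_est, diameter, symmetric=False):
    err = adds_metric(model, R, t, R_est, t_est) if symmetric \
        else add_metric(model, R, t, R_est, t_est)
    return err < 0.1 * diameter   # the common 10%-of-diameter threshold
```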

Category Method AUC ADD-S (2cm)
Correspondence-based Heatmaps [170] 72.8 53.1
Template-based PoseCNN [272]+ICP 61.0 73.8
PoseCNN [272]+ICP 93.0 93.2
Castro et al. [27] 67.52 47.09
PointFusion [276] 83.9 74.1
MaskedFusion [183] 93.3 97.1
Voting-based DenseFusion [257](per-pixel) 91.2 95.3
DenseFusion [257](iterative) 93.1 96.8
Table 7: Accuracies of AUC and ADD-S metrics on YCB-video dataset.
Category Method LineMOD Occlusion
Correspondence-based methods BB8 [200] 43.6 -
BB8 [200]+Refine 62.7 -
Tekin et al. [239] 55.95 6.42
Heatmaps [170] - 25.8
Heatmaps [170]+Refine - 30.4
Hu et al. [108] - 26.1
Pix2pose [176] 72.4 32.0
DPOD [297] 82.98 32.79
DPOD [297]+Refine 95.15 47.25
HybridPose [229] 94.5 79.2
Template-based methods SSD-6D [114] 2.42 -
SSD-6D [114]+Refine 76.7 27.5
AAE [234] 31.41 -
AAE [234]+Refine 64.7 -
Castro et al. [27] 59.32 -
PoseCNN [272] 62.7 6.42
PoseCNN [272]+Refine 88.6 78.0
CDPN [132] 89.86 -
Tian et al. [242] 92.87 -
MaskedFusion [183] 97.3 -
Voting-based methods Brachmann et al. [23] 32.3 -
Brachmann et al. [23]+Refine 50.2 -
PVNet [182] 86.27 40.8
DenseFusion [257](per-pixel) 86.2 -
DenseFusion [257](iterative) 94.3 -
DPVL [295] 91.5 43.52
YOLOff [79] 94.2 -
YOLOff [79]+Refine 98.1 -
PVN3D [96] 95.1 -
PGNet [294] 96.2 -
PGNet [294]+Refine 97.4 -
PointPoseNet [90] 96.3 52.6
PointPoseNet [90]+Refine - 75.1
Table 8: Accuracies of methods using ADD(-S) metric on LineMOD and Occlusion LineMOD dataset. Refine means methods such as ICP or DeepIM. IR is short for iterative refinement.

Comparisons and discussions

6D object pose estimation plays a pivotal role in robotics and augmented reality. Various methods exist with different inputs, precision, speed, advantages and disadvantages. For robotic grasping tasks, the practical environment, the available input data, the hardware setup, the target objects to be grasped, and the task requirements should be analyzed first to decide which kind of method to use. The three kinds of methods mentioned above deal with different kinds of objects. Generally, when the target object has rich texture or geometric detail, correspondence-based methods are a good choice. When the target object has weak texture or geometric detail, template-based methods are a good choice. When the object is occluded and only a partial surface is visible, or when the addressed objects range from specific instances to category-level objects, voting-based methods are a good choice. Besides, all three kinds of methods can take 2D inputs, 3D inputs or mixed inputs. The results of methods with RGB-D images as inputs are summarized in Table 7 on the YCB-Video dataset, and in Table 8 on the LineMOD and Occlusion LineMOD datasets. All recent methods achieve high accuracy on LineMOD since there is little occlusion. When occlusions exist, correspondence-based and voting-based methods perform better than template-based methods. Template-based methods are more like a direct regression problem and highly depend on the extracted global feature, whereas correspondence-based and voting-based methods utilize local part information and build local feature representations.

There exist some challenges for current 6D object pose estimation methods. The first challenge lies in that current methods show obvious limitations in cluttered scenes in which occlusions usually occur. Although the state-of-the-art methods achieve high accuracies on the Occlusion LineMOD dataset, they still cannot handle severely occluded cases, since such situations may cause ambiguities even for human beings. The second challenge is the lack of sufficient training data, as the sizes of the datasets presented above are relatively small. Current deep learning methods show poor performance on objects that do not exist in the training datasets, and simulated datasets could be one solution. Although some category-level 6D object pose methods [258, 175, 28] have emerged recently, they still cannot handle a large number of categories.

4 Grasp Estimation

Grasp estimation means estimating the 6D gripper pose in the camera coordinate. As mentioned before, the grasp can be categorized into 2D planar grasp and 6DoF grasp. For 2D planar grasp, where the grasp is constrained from one direction, the 6D gripper pose could be simplified into a 3D representation, which includes the 2D in-plane position and 1D rotation angle, since the height and the rotations along other axes are fixed. For 6DoF grasp, the gripper can grasp the object from various angles and the 6D gripper pose is essential to conduct the grasp. In this section, methods of 2D planar grasp and 6DoF grasp are presented in detail.

4.1 2D planar grasp

Methods of 2D planar grasp can be divided into methods of evaluating grasp contact points and methods of evaluating oriented rectangles. In 2D planar grasp, the grasp contact points can uniquely define the gripper’s grasp pose, which is not the case in 6DoF grasp. The 2D oriented rectangle can also uniquely define the gripper’s grasp pose. These methods are summarized in Table 9 and the typical functional flow-chart is illustrated in Fig. 17.

Methods Traditional methods Deep learning-based methods
Methods of evaluating grasp contact points Domae et al. [56] Zeng et al. [301], Mahler et al. [155], Cai et al. [25], GG-CNN [161], MVP [162], Wang et al. [260]
Methods of evaluating oriented rectangles Jiang et al. [111], Vohra et al. [254] Lenz et al. [126], Pinto and Gupta [189], Park and Chun [172], Redmon and Angelova [201], Zhang et al. [307], Kumra and Kanan [121], Kumra et al. [120], Zhang et al. [305], Guo et al. [85], Chu et al. [41], Park et al. [173], Zhou et al. [318], Depierre et al. [51]
Table 9: Summary of 2D planar grasp estimation methods.
Figure 17: Typical functional flow-chart of 2D planar grasp methods. Data from the JACQUARD dataset [50].

Methods of evaluating grasp contact points

This kind of method first samples candidate grasp contact points, and then uses analytical or deep learning-based methods to evaluate the possibility of a successful grasp, which is a classification-based manner. Empirical methods of robotic grasping are performed based on the premise that certain prior knowledge, such as object geometry, physics models, or force analytics, is known. The grasp database usually covers a limited amount of objects, and empirical methods face difficulties in dealing with unknown objects. Domae et al. [56] presented a method that estimates graspability measures on a single depth map for grasping objects randomly placed in a bin. Candidate grasp regions are first extracted, and the graspability is computed by convolving one contact region mask image and one collision region mask image. Deep learning-based methods could assist in evaluating the grasp qualities of candidate grasp contact points. Mahler et al. [155] proposed Dex-Net 2.0, which plans robust grasps with synthetic point clouds and analytic grasping metrics. They first segment the current points of interest from the depth image, and multiple candidate grasps are generated. The grasp qualities are then measured using the Grasp Quality-CNN network, and the one with the highest quality is selected as the final grasp. Their database has more than 50k grasps, and the grasp quality measurement network achieved relatively satisfactory performance.

Deep learning-based methods could also assist in estimating the most probable grasp contact points by estimating pixel-wise grasp affordances. Robotic affordance methods [55, 4, 42] usually aim to predict affordances of object parts for robot manipulation, which is more like a segmentation problem. However, there exist some methods [301, 25] that predict pixel-wise affordances with respect to grasping primitive actions. These methods generate grasp qualities for each pixel, and the pair of points with the highest affordance value is executed. Zeng et al. [301] proposed a method which infers dense pixel-wise probability maps of the affordances for four different grasping primitive actions using fully convolutional networks. Cai et al. [25] presented a pixel-level affordance interpreter network, which learns antipodal grasp patterns based on a fully convolutional residual network similar to Zeng et al. [301]. Neither of these two methods segments the target object; they predict pixel-wise affordance maps over all pixels. This directly estimates grasp qualities without sampling grasp candidates. Morrison et al. [161] proposed the Generative Grasping Convolutional Neural Network (GG-CNN), which predicts the quality and pose of grasps at every pixel. Further, Morrison et al. [162] proposed a Multi-View Picking (MVP) controller, which uses an active perception approach to choose informative viewpoints based on a distribution of grasp pose estimates, and utilized the real-time GG-CNN [161] for visual grasp detection. Wang et al. [260] proposed a fully convolutional neural network which encodes the original input images into features and decodes these features to generate robotic grasp properties for each pixel. Unlike classification-based methods that generate multiple grasp candidates through a neural network, their pixel-wise implementation directly predicts multiple grasp candidates in one forward propagation.
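
The pixel-wise formulation boils down to a fully convolutional network with one output map per grasp property. The PyTorch sketch below is a deliberately small illustration in the spirit of GG-CNN [161], not its exact architecture; it predicts per-pixel quality, angle (via sin/cos of twice the angle, to handle the π-periodicity of grasps) and gripper width from a depth image, with all layer sizes chosen arbitrarily.

```python
import torch
import torch.nn as nn

class PixelwiseGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(inplace=True))
        self.quality = nn.Conv2d(32, 1, 1)
        self.angle = nn.Conv2d(32, 2, 1)    # (sin 2θ, cos 2θ)
        self.width = nn.Conv2d(32, 1, 1)

    def forward(self, depth):               # depth: (B, 1, H, W)
        f = self.encoder(depth)
        q = torch.sigmoid(self.quality(f))  # per-pixel grasp quality in [0, 1]
        sin2t, cos2t = self.angle(f).chunk(2, dim=1)
        theta = 0.5 * torch.atan2(sin2t, cos2t)
        w = self.width(f)
        return q, theta, w

# The executed grasp is taken at the pixel with the highest quality:
# q, theta, w = net(depth); idx = q.flatten(1).argmax(dim=1)
```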

Methods of evaluating oriented rectangles

Jiang et al. [111] first proposed to use an oriented rectangle to represent the gripper configuration and they utilized a two-step procedure, which first prunes the search space using certain features that are fast to compute and then uses advanced features to accurately select a good grasp. Vohra et al. [254] proposed a grasp estimation strategy which estimates the object contour in the point cloud and predicts the grasp pose along with the object skeleton in the image plane. Grasp rectangles at each skeleton point are estimated, and point cloud data corresponding to the grasp rectangle part and the centroid of the object is used to decide the final grasp rectangle. Their method is simple and needs no grasp configuration sampling steps.

Aiming at the oriented rectangle-based grasp configuration, deep learning methods are gradually applied in three different ways: classification-based methods, regression-based methods and detection-based methods. Most of these methods utilize a five-dimensional representation [126] for robotic grasps, which are rectangles with a position (x, y), orientation θ, and size (h, w).

Classification-based methods train classifiers to evaluate candidate grasps, and the one with the highest score is selected. Lenz et al. [126] were the first to apply deep learning methods to robotic grasping. They presented a two-step cascaded system with two deep networks, where the top detection results from the first are re-evaluated by the second. The first network produces a small set of oriented rectangles as candidate grasps, which are then axis-aligned. The second network ranks these candidates using features extracted from the color image, the depth image and surface normals; the top-ranked rectangle is selected and the corresponding grasp is executed. Pinto and Gupta [189] predicted grasping locations by sampling image patches and predicting the grasping angle. They trained a CNN-based classifier to estimate the grasp likelihood for different grasp directions given an input image patch. Park and Chun [172] proposed a classification-based robotic grasp detection method with multiple-stage spatial transformer networks (STN). Their method allows partial observation of intermediate results, such as grasp location and orientation, for a number of grasp configuration candidates. The procedure of classification-based methods is straightforward and the accuracy is relatively high, but these methods tend to be quite slow.

Regression-based methods train a model to yield grasp parameters for location and orientation directly, since a single unified network would perform better than the two-stage cascaded system [126]. Redmon and Angelova [201] proposed a larger neural network which performs single-stage regression to obtain graspable bounding boxes without using standard sliding window or region proposal techniques. Zhang et al. [307] utilized a multi-modal fusion architecture which combines RGB features and depth features to improve the grasp detection accuracy. Kumra and Kanan [121] utilized deep neural networks like ResNet [95] and further increased the performance of grasp detection. Kumra et al. [120] proposed a novel Generative Residual Convolutional Neural Network (GR-ConvNet) model that can generate robust antipodal grasps from an n-channel image input. Rather than regressing the grasp parameters globally, some methods work in an ROI (Region of Interest)-based or pixel-wise way. Zhang et al. [305] utilized ROIs in the input image and regressed the grasp parameters based on ROI features.

Detection-based methods utilize reference anchor boxes, which are used in some deep learning-based object detection algorithms [206, 144, 202], to assist the generation and evaluation of candidate grasps. With prior knowledge on the size of the expected grasps, the regression problem is simplified [51]. Guo et al. [85] presented a hybrid deep architecture combining visual and tactile sensing. They introduced an axis-aligned reference box, and their network produces a quality score and an orientation as a classification over discrete angle values. Chu et al. [41] proposed an architecture that predicts multiple candidate grasps instead of a single outcome and transforms the orientation regression into a classification task. The orientation classification contains the quality score, and their network therefore predicts both grasp regression values and discrete orientation classification scores. Park et al. [173] proposed a rotation ensemble module (REM) for robotic grasp detection using convolutions that rotate network weights. Zhou et al. [318] designed an oriented anchor box mechanism to improve the accuracy of grasp detection and employed an end-to-end fully convolutional neural network. They utilized only one anchor box with multiple orientations, rather than multiple scales or aspect ratios [85, 41], for reference grasps, and predicted five regression values and one grasp quality score for each oriented reference box. Depierre et al. [51] further extend Zhou et al. [318] by adding a direct dependency between the regression prediction and the score evaluation. They proposed a novel DNN architecture with a scorer which evaluates the graspability of a given position and introduced a novel loss function which correlates the regression of grasp parameters with the graspability score.

Some other methods are proposed for cluttered scenes, where a robot needs to know whether one object lies on top of another in a pile of objects for a successful grasp. Guo et al. [84] presented a shared convolutional neural network to conduct object discovery and grasp detection. Zhang et al. [304] proposed a multi-task convolutional robotic grasping network to address the problem of combining grasp detection and object detection with relationship reasoning in piles of objects. Their method consists of several deep neural networks that are responsible for generating local feature maps, grasp detection, object detection and relationship reasoning separately. In comparison, Park et al. [174] proposed a single multi-task deep neural network that yields information on grasp detection, object detection and relationship reasoning among objects with simple post-processing.

Dataset Objects Num Num of RGB-D images Num of grasps
Stanford Grasping [221, 220] 10 13747 13747
Cornell Grasping [111] 240 885 8019
CMU dataset [189] over 150 50567 no
Dex-Net 2.0 [155] over 150 6.7 M(Depth only) 6.7 M
JACQUARD [50] 11619 54485 1.1 M
Table 10: Summaries of publicly available 2D planar grasp datasets.
Method Input Size Accuracy(%) Time
Image Split Object Split
Jiang et al. [111] 227 x 227 60.50 58.30 50sec
Lenz et al. [126] 227 x 227 73.90 75.60 13.5sec
Morrison et al. [161] 300 x 300 78.56 - 7ms
Redmon et al. [201] 224 x 224 88.00 87.1 76ms
Zhang et al. [307] 224 x 224 88.90 88.20 117ms
Kumra et al. [121] 224 x 224 89.21 88.96 103ms
Chun et al. [172] 400 x 400 89.60 - 23ms
Asif et al. [5] 224 x 224 90.60 90.20 24ms
Wang et al. [260] 400 x 400 94.42 91.02 8ms
Chu et al. [41] 227 x 227 96.00 96.10 120ms
Chun et al. [173] 360 x 360 96.60 95.40 20ms
Zhou et al. [318] 320 x 320 97.74 96.61 118ms
Park et al. [174] 360 x 360 98.6 97.2 16ms
Table 11: Accuracies of grasp prediction on the Cornell Grasp dataset.

Comparisons and Discussions

The methods of 2D planar grasp are evaluated in this section, which contain the datasets, evaluation metrics and comparisons of the recent methods.

Datasets and evaluation metrics There exist a few datasets for 2D planar grasp, which are presented in Table 10. Among them, the Cornell Grasping dataset [111] is the most widely used dataset. In addition, the dataset has the image-wise splitting and the object-wise splitting. Image-wise splitting splits images randomly and is used to test how well the method can generalize to new positions for objects it has seen previously. Object-wise splitting puts all images of the same object into the same cross-validation split and is used to test how well the method can generalize to novel objects.

For the point-based grasps and the oriented rectangle-based grasps [111], there exist two metrics for evaluating the performance of grasp detection: the point metric and the rectangle metric. The former evaluates the distance between the predicted grasp center and the ground truth grasp center with respect to a threshold value; it is difficult to determine the distance threshold, and the metric does not consider the grasp angle. The latter considers a grasp to be correct if the grasp angle is within 30° of the ground truth grasp and the Jaccard index of the predicted grasp rectangle and the ground truth rectangle is greater than 0.25.
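
A minimal implementation of the rectangle metric is sketched below, assuming the common 30-degree and 0.25-Jaccard thresholds and using shapely for the polygon intersection; a grasp is given as (x, y, θ, h, w) as in the five-dimensional representation above.

```python
import numpy as np
from shapely.geometry import Polygon

def rect_corners(x, y, theta, h, w):
    """Corners of an oriented grasp rectangle centered at (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    local = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                      [w / 2, h / 2], [-w / 2, h / 2]])
    return local @ R.T + np.array([x, y])

def rectangle_metric(pred, gt, angle_thresh=np.deg2rad(30), iou_thresh=0.25):
    """pred, gt: (x, y, theta, h, w) tuples; returns True if the grasp counts as correct."""
    dtheta = abs(pred[2] - gt[2]) % np.pi
    dtheta = min(dtheta, np.pi - dtheta)        # grasp angles are pi-periodic
    p, g = Polygon(rect_corners(*pred)), Polygon(rect_corners(*gt))
    iou = p.intersection(g).area / p.union(g).area
    return dtheta <= angle_thresh and iou >= iou_thresh
```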

Comparisons The methods of evaluating oriented rectangles are compared in Table 11 on the widely used Cornell Grasping dataset [111]. From the table, we can see that the state-of-the-art methods have achieved very high accuracies on this dataset. Recent works [51] began to conduct experiments on the Jacquard Grasp dataset [50] since it has more images and the grasps are more diverse.

4.2 6DoF Grasp

Methods of 6DoF grasp can be divided into methods based on the partial point cloud and methods based on the complete shape. These methods are summarized in Table 12.

Methods Descriptions Traditional methods Deep learning-based methods
Methods based on the partial point cloud Estimate grasp qualities of candidate grasps Bohg and Kragic [16], Pas et al. [177], Zapata-Impata et al. [298] GPD [240], PointnetGPD [133], 6-DoF GraspNet [163], S4G [197], REGNet [309]
Transfer grasps from existing ones Miller et al. [159], Nikandrova and Kyrki [169], Vahrenkamp et al. [249] Tian et al. [241], Dense Object Nets [72], DGCM-Net [179]
Methods based on the complete shape Estimate the 6D object pose Zeng et al. [302] Billings and Johnson-Roberson [12]
Conduct shape completion Miller et al. [159] Varley et al. [251], Lundell et al. [152], Watkins-Valls et al. [269], Merwe et al. [250], Wang et al. [259], Yan et al. [278], Yan et al. [279], Tosun et al. [244], kPAM-SC [75], ClearGrasp [215]
Table 12: Summary of 6DoF grasp estimation methods.

Methods based on the partial point cloud

This kind of methods can be divided into two kinds. The first kind of methods estimate grasp qualities of candidate grasps and the second kind of methods transfer grasps from existing ones. Typical functional flow-chart of methods based on the partial point cloud is illustrated in Fig. 18.

Figure 18: Typical functional flow-chart of 6DoF grasp methods based on the partial point cloud.

Methods of estimating grasp qualities of candidate grasps This kind of method estimates the 6DoF grasp pose by analyzing the input partial point cloud only. Most of these methods [16, 177, 298, 240, 133] sample a large number of candidate grasps first and then utilize various methods to evaluate their grasp qualities, which is a classification-based manner. Some newer methods [197, 309, 168, 163] estimate grasp qualities implicitly and directly predict the 6DoF grasp pose in a single-shot way, which is a regression-based manner.

Bohg and Kragic [16] applied the concept of shape context [8] to improve the performance of grasping point classification. They used a supervised learning approach and the classifier is trained with labelled synthetic images. Pas et al. [177] first used a geometrically necessary condition to sample a large set of high quality grasp hypotheses, which will be classified using the notion of an antipodal grasp. Zapata-Impata et al. [298] proposed a method to find the best pair of grasping points given a partial single-view point cloud of an unknown object. They defined an improved version of the ranking metric [299] for evaluating a pair of contact points, which is parameterized by the morphology of the robotic hand in use.

3D data has different representations, such as multi-view images, voxel grids or point clouds, and each representation can be processed with corresponding deep neural networks. These different kinds of neural networks have already been applied to robotic grasping tasks. GPD [240] generates candidate grasps on a region of interest (ROI) first. These candidate grasps are then encoded into a stacked multi-channel image, and each candidate is evaluated with a four-layer convolutional neural network to obtain a score. Lou et al. [149] proposed an algorithm that uniformly samples over the entire 3D space to generate candidate grasps, predicts grasp stability using a 3D CNN together with a grasp reachability estimate from the candidate grasp pose, and obtains the final grasp success probability. PointnetGPD [133] randomly samples candidate grasps and evaluates the grasp quality by direct point cloud analysis with the 3D deep neural network PointNet [194]. During the generation of the training dataset, the grasp quality is evaluated by combining the force-closure metric and the Grasp Wrench Space (GWS) analysis [118]. Mousavian et al. [163] proposed an algorithm called 6-DoF GraspNet, which samples grasp proposals using a variational auto-encoder and refines the sampled grasps using a grasp evaluator model. Pointnet++ [195] is used to generate and evaluate grasps. Murali et al. [165] further improved 6-DoF GraspNet by introducing a learned collision checker conditioned on the gripper information and on the raw point cloud of the scene, which affords a higher success rate in cluttered scenes.

Qin et al. [197] presented an algorithm called S4G, which utilizes a single-shot grasp proposal network trained with synthetic data using Pointnet++ [195] and predicts amodal grasp proposals efficiently and effectively. Each grasp proposal is further evaluated with a robustness score. The core novel insight of S4G is that possible grasps are proposed by regression rather than in a sliding window-like style. S4G generates grasp proposals directly, while 6-DoF GraspNet uses an encode-and-decode scheme. Ni et al. [168] proposed Pointnet++Grasping, which is also an end-to-end approach to directly predict the poses, categories and scores of all grasps. Further, Zhao et al. [309] proposed an end-to-end single-shot grasp detection network called REGNet, which takes one single-view point cloud as input for parallel grippers. Their network contains three stages: the Score Network (SN) to select positive points with high grasp confidence, the Grasp Region Network (GRN) to generate a set of grasp proposals on the selected positive points, and the Refine Network (RN) to refine the detected grasps based on local grasp features. REGNet is a state-of-the-art method for grasp detection in 3D space and outperforms several methods including GPD [240], PointnetGPD [133] and S4G [197]. Fang et al. [68] proposed a large-scale grasp pose detection dataset called GraspNet-1Billion, which contains 97,280 RGB-D images with over one billion grasp poses. They also proposed an end-to-end grasp pose prediction network that learns the approaching direction and operation parameters in a decoupled manner.

Methods of transferring grasps from existing ones This kind of method transfers grasps from existing ones, which means finding correspondences from the observed single-view point cloud to an existing complete model when they are known to come from the same category. In most cases, the target object is not exactly the same as the objects in the existing database. If an object comes from a class that is included in the database, it is regarded as a similar object. After the localization of the target object, correspondence-based methods can be utilized to transfer the grasp points from the similar, complete 3D object to the current partial-view object. These methods learn grasps by observing the object without estimating its 6D pose, since the current target object is not exactly the same as the objects in the database.

Different kinds of methods are utilized to find the correspondences based on taxonomy, segmentation, and so on. Miller et al. [159] proposed a taxonomy-based approach, which classifies objects into categories that should be grasped by each canonical grasp. Nikandrova and Kyrki [169] presented a probabilistic approach for task-specific stable grasping of objects with shape variations inside a category. An optimal grasp is found as a grasp that is maximally likely to be task compatible and stable, taking shape uncertainty into account in a probabilistic context. Their method requires partial models of new objects, and few models and example grasps are used during training. Vahrenkamp et al. [249] presented a part-based grasp planning approach to generate grasps that are applicable to multiple familiar objects. The object models are segmented according to their shape and volumetric information, and the object parts are labeled with semantic and grasping information. A grasp transferability measure is proposed to evaluate how successfully planned grasps can be applied to novel object instances of the same category. Tian et al. [241] proposed a method to transfer grasp configurations from prior example objects to novel objects, which assumes that the novel and example objects have the same topology and similar shapes. They perform 3D segmentation on the objects considering geometric and semantic shape characteristics, compute a grasp space for each part of the example object using active learning, and build bijective contact mappings between the model parts and the corresponding grasps for novel objects. Florence et al. [72] proposed Dense Object Nets, which are built on self-supervised dense descriptor learning and take dense descriptors as a representation for robotic manipulation. They can grasp specific points on objects across potentially deformed configurations, grasp objects with instance-specificity in clutter, or transfer specific grasps across objects of the same class. Patten et al. [179] presented DGCM-Net, a dense geometrical correspondence matching network for incremental experience-based robotic grasping. They apply metric learning to encode objects with similar geometry nearby in feature space, and retrieve relevant experience for an unseen object through a nearest neighbour search. DGCM-Net also reconstructs 3D-3D correspondences using the view-dependent normalized object coordinate space to transform grasp configurations from retrieved samples to unseen objects. Their method could be extended to semantic grasping by guiding grasp selection towards the parts of objects that are relevant to the object’s functional use.

Comparisons and discussions Methods of estimating grasp qualities of candidate grasps gain much attention since this is the most direct manner to obtain the 6DoF grasp pose. For 6DoF grasp, the evaluation metrics used for 2D planar grasp are not suitable. The commonly used metric is the Valid Grasp Ratio (VGR) proposed by REGNet [309], which is defined as the ratio of grasps that are both antipodal and collision-free to all predicted grasps. The commonly used grasp dataset for evaluation is the YCB-Video [272] dataset. Comparisons with recent methods are shown in Table 13.
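
Computing VGR itself is straightforward once the antipodal test and the collision checker of a given pipeline are available; both are passed in as placeholder callables in this sketch rather than being any particular paper's implementation.

```python
def valid_grasp_ratio(grasps, scene, is_antipodal, is_collision_free):
    """Fraction of predicted grasps that are both antipodal and collision-free."""
    valid = sum(1 for g in grasps
                if is_antipodal(g, scene) and is_collision_free(g, scene))
    return valid / max(len(grasps), 1)
```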

Method VGR(%) Time(ms)
GPD [240] (3 channels) 79.34 2077.12
GPD [240] (12 channels) 80.22 2702.38
PointNetGPD [133] 81.48 1965.60
S4G [197] 77.63 679.04
REGNet [309] 92.47 686.31
Table 13: VGR and inference time of 6DoF grasp estimation methods.

Methods of transferring grasps from existing ones have potential uses in high-level robotic manipulation tasks. Not only grasps but also manipulation skills could be transferred. Many methods [9, 285] that learn grasps from demonstration utilize this kind of approach.

Methods based on the complete shape

Methods based on the partial point cloud are suitable for unknown objects, since no identical 3D models are available. For known objects, the 6D pose can be estimated and the 6DoF grasp poses computed on the complete 3D shape can be transformed from the object coordinate system to the camera coordinate system. From another perspective, the complete 3D object shape in the camera coordinate system could also be recovered from the observed single-view point cloud, and the 6DoF grasp poses could then be estimated based on the completed 3D object shape. We consider both kinds of methods as complete shape-based methods, since the 6DoF grasp poses are estimated based on complete object shapes. Typical functional flow-chart of 6DoF grasp methods based on the complete shape is illustrated in Fig. 19.

Figure 19: Typical functional flow-chart of 6DoF grasp methods based on the complete shape.

Methods of estimating the 6D object pose The 6D object pose could be accurately estimated from RGB-D data if the target object is known, as mentioned in Section 3, and 6DoF grasp poses can be obtained via offline pre-computation or online generation. This is the most popular approach used in grasping systems. If 6DoF grasp poses exist in the database, the current 6DoF grasp pose could be retrieved from the knowledge base, or obtained by sampling and ranking candidates through comparisons with existing grasps. If 6DoF grasp poses do not exist in the database, analytical methods are utilized to compute them. Analytical methods consider kinematics and dynamics formulations in determining grasps [214]. Force-closure is one of the main conditions for completing grasping tasks, and there exist many force-closure grasp synthesis methods for 3D objects. Among them, polyhedral objects were dealt with first, as they are composed of a finite number of flat faces. The force-closure condition is reduced to testing the angles between the face normals [167], or a linear model is used to derive an analytical formulation for grasp characterization [190]. To handle commonly used objects, which usually have more complicated shapes, methods of examining different contact points are proposed [53]. These methods try to find contact points on the 3D object surface that ensure force-closure and compute the optimal grasp by minimizing an objective energy function according to a predefined grasp quality criterion [160]. However, searching the grasp solution space is a complex and time-consuming problem. Some heuristic techniques were then proposed to reduce the search space by generating a set of grasp candidates according to a predefined procedure [20], or by defining a set of rules to generate the starting positions [159]. A few robotic grasping simulators, such as GraspIt! [158], assist in generating the best gripper pose for a successful grasp. Miller and Allen [158] proposed GraspIt!, a versatile simulator for robotic grasping which supports loading objects and obstacles of arbitrary geometry to populate a complete simulation world, and allows a user to interactively manipulate a robot or an object and create contacts between them. Xue et al. [277] implemented a grasp planning system based on GraspIt! to plan high-quality grasps. León et al. [127] presented OpenGRASP, a toolkit for simulating grasping and dexterous manipulation, which provides a holistic environment that can deal with a variety of factors associated with robotic grasping. These methods produce successful grasps, and detailed reviews can be found in the survey [214].
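
The force-closure condition for a two-finger grasp under a Coulomb friction model reduces to checking that the line connecting the two contacts lies inside both friction cones. The sketch below implements this classical antipodal test; the friction coefficient and the inward-pointing normals are assumptions supplied by the caller, and a full analytical pipeline would add many more checks (reachability, collision, wrench-space quality, and so on).

```python
import numpy as np

def is_antipodal(p1, n1, p2, n2, mu=0.4):
    """p1, p2: 3D contact points; n1, n2: inward-pointing unit surface normals;
    mu: friction coefficient."""
    axis = p2 - p1
    axis = axis / np.linalg.norm(axis)
    half_angle = np.arctan(mu)                 # friction cone half-angle
    # n1 should point roughly along +axis (towards p2), n2 roughly along -axis.
    ok1 = np.arccos(np.clip(np.dot(n1, axis), -1.0, 1.0)) <= half_angle
    ok2 = np.arccos(np.clip(np.dot(n2, -axis), -1.0, 1.0)) <= half_angle
    return ok1 and ok2
```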

Both traditional and deep learning-based 6D object pose estimation algorithms are utilized to assist robotic grasping tasks. Most of the methods [302] presented in the Amazon Picking Challenge utilize 6D poses estimated through partial registration. Zeng et al. [302] proposed an approach which segments and labels multiple views of a scene with a fully convolutional neural network, and then fits pre-scanned 3D object models to the segmentation results to obtain the 6D object poses. Besides, Billings and Johnson-Roberson [12] proposed a method which jointly accomplishes object pose estimation and grasp point selection using a Convolutional Neural Network (CNN) pipeline. Wong et al. [271] proposed a method which integrates RGB-based object segmentation and depth image-based partial registration to obtain the pose of the target object. They presented a novel metric for scoring model registration quality and conducted multi-hypothesis registration, which achieved pose estimation with low position and angle errors. Using this accurate 6D object pose, grasps are conducted with a high success rate. A few deep learning-based 6D object pose estimation approaches, such as DenseFusion [257], also demonstrate high success rates in practical robotic grasping tasks.

Methods of conducting shape completion There also exists one kind of method, which conducts 3D shape completion on the partial point cloud and then estimates grasps. 3D shape completion provides the complete geometry of objects from partial observations, and estimating 6DoF grasp poses on the completed shape is more precise. Most methods of this kind estimate the object geometry from the partial point cloud [251, 152, 250, 269, 244], and some other methods [259, 278, 279, 75, 215] utilize RGB-D images. Many of them [259, 269] also combine tactile information for better prediction.

Varley et al. [251] proposed an architecture to enable robotic grasp planning via shape completion. They utilized a 3D convolutional neural network (CNN) to complete the shape, and created fast meshes for objects not to be grasped and a detailed mesh for the object to be grasped. The grasps are finally estimated on the reconstructed mesh in GraspIt! [158], and the grasp with the highest quality is executed. Lundell et al. [152] proposed a shape completion DNN architecture to capture shape uncertainties, and a probabilistic grasp planning method which utilizes the shape uncertainty to propose robust grasps. Merwe et al. [250] proposed PointSDF to learn a signed distance function implicit surface for a partially viewed object, and proposed a grasp success prediction learning architecture which implicitly learns geometrically aware point cloud encodings. Watkins-Valls et al. [269] also incorporated depth and tactile information to create rich and accurate 3D models useful for robotic manipulation tasks. They used both depth and tactile data directly as input to the model, rather than using the tactile information to refine the results. Tosun et al. [244] utilized a grasp proposal network and a learned 3D shape reconstruction network, where candidate grasps generated from the first network are refined using the 3D reconstruction result of the second network. These methods mainly utilize depth data or point clouds as inputs.

Wang et al. [259] perceived accurate 3D object shape by incorporating visual and tactile observations, as well as prior knowledge of common object shapes learned from large-scale shape repositories. They first applied neural networks with learned shape priors to predict an object’s 3D shape from a single-view color image, and tactile sensing was then used to refine the shape. Yan et al. [278] proposed a deep geometry-aware grasping network (DGGN), which learns 6DoF grasping from RGB-D input and consists of a shape generation network and an outcome prediction network. Yan et al. [279] further presented a self-supervised shape prediction framework that reconstructs full 3D point clouds as a representation for robotic applications. They first used an object detection network to obtain object-centric color, depth and mask images, which are used to generate a 3D point cloud of the detected object; a grasping critic network is then used to predict a grasp. Gao and Tedrake [75] proposed a new hybrid object representation consisting of semantic keypoints and dense geometry (a point cloud or mesh) as the interface between the perception module and the motion planner. Leveraging advances in learning-based keypoint detection and shape completion, both the dense geometry and the keypoints can be perceived from raw sensor input. Sajjan et al. [215] presented ClearGrasp, a deep learning approach for estimating accurate 3D geometry of transparent objects from a single RGB-D image for robotic manipulation. ClearGrasp uses deep convolutional networks to infer surface normals, masks of transparent surfaces, and occlusion boundaries, which are used to refine the initial depth estimates for all transparent surfaces in the scene.

Comparisons and Discussions When accurate 3D models are available, the 6D object pose can be obtained, which affords the generation of grasps for the target object. However, when the existing 3D models differ from the target object, the estimated 6D pose will have a large deviation, which leads to grasp failure. In this case, we can complete the partial-view point cloud and triangulate it to obtain the complete shape, and grasps can then be generated on the reconstructed complete 3D shape. Various grasp simulation toolkits have been developed to facilitate grasp generation.

Aiming at methods of estimating the 6D object pose, there exist some challenges. Firstly, this kind of methods highly rely on the accuracy of object segmentation. However, training a network which supports a wide range of objects is not easy. Meanwhile, these methods require the 3D object to grasp be similar enough to those of the annotated models such that correspondences can be found. It is also challenging to compute grasp points with high qualities for objects in cluttered environments where occlusion usually occurs. Aiming at methods of conducting shape completion, there also exist some challenges. The lack of information, especially the geometry on the opposite direction from the camera, extremely affect the completion accuracy. However, using multi-source data would be a future direction.

5 Challenges and Future Directions

In this survey, we review related works on vision-based robotic grasping from three key aspects: object localization, object pose estimation and grasp estimation. The purpose of this survey is to give readers a comprehensive map of how to detect a successful grasp given the initial raw data. Various subdivided methods are introduced in each section, as well as the related datasets and comparisons. Compared with existing literature, we present an end-to-end review of how to build a vision-based robotic grasp detection system.

Although many intelligent algorithms have been proposed to assist robotic grasping tasks, challenges still exist in practical applications, such as insufficient information in data acquisition, insufficient amounts of training data, limited generalization to novel objects, and difficulties in grasping transparent objects.

The first challenge is the insufficient information in data acquisition. Currently, the most commonly used input to decide a grasp is one RGB-D image from one fixed position, which lacks information about the back side of the scene. It is hard to decide the grasp when the full object geometry is unavailable. Aiming at this challenge, some strategies could be adopted. The first strategy is to utilize multi-view data. Data from wider perspectives is much better, since partial views are not enough to obtain comprehensive knowledge of the target object. Methods based on the poses of the robotic arm [302, 13] or SLAM methods [44] can be adopted to merge the multi-view data. Instead of fusing multi-view data, the best grasping view could also be chosen explicitly [162]. The second strategy is to involve multi-sensor data such as haptic information. Some works [124, 65, 104] already involve tactile data to assist robotic grasping tasks.

The second challenge is the insufficient amount of training data. The amount of training data required to build a sufficiently intelligent grasp detection system is extremely large. The number of open grasp datasets is small, and the involved objects are mostly instance-level, which is far too few compared with the objects in our daily life. Aiming at this challenge, some strategies could be adopted. The first strategy is to utilize simulated environments to generate virtual data [245]. Once the virtual grasp environments are built, large amounts of virtual data can be generated by simulating sensors from various angles. Since gaps exist between simulated data and real data, many domain adaptation methods [21, 69, 312] have been proposed. The second strategy is to utilize semi-supervised learning approaches [154, 292] that learn to grasp by incorporating unlabeled data. The third strategy is to utilize self-supervised learning methods to generate labeled data for 6D object pose estimation [49] or grasp detection [235].

The third challenge is generalization to novel objects. The mentioned grasp estimation methods, except for those based on estimating the 6D object pose, all have a certain ability to generalize to novel objects, but they mostly work well on the training datasets and show reduced performance on novel objects. Other than improving the performance of the mentioned algorithms, some strategies could be adopted. The first strategy is to utilize category-level 6D object pose estimation. Many works [258, 175, 256, 28] have started to deal with 6D object pose estimation for category-level objects, since high performance has already been achieved on instance-level objects. The second strategy is to involve more semantic information in the grasp detection system. With the help of various shape segmentation methods [293, 153], parts of the object instead of the complete shape can be used to narrow the range of candidate grasping points. The surface material and the weight of the object could also be estimated to obtain more precise grasp detection results.

The fourth challenge lies in grasping transparent objects. Transparent objects are prevalent in daily life, but capturing their 3D information is difficult for today's depth sensors. Some pioneering works tackle this problem in different ways. GlassLoc [320] was proposed for grasp pose detection of transparent objects in transparent clutter using plenoptic sensing. KeyPose [145] conducts multi-view 3D labeling and keypoint estimation for transparent objects in order to estimate their 6D poses. ClearGrasp [215] estimates accurate 3D geometry of transparent objects from a single RGB-D image for robotic manipulation. This area requires further research to make grasps more accurate and robust in daily life.
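A common preprocessing idea in this direction, shown in the assumed sketch below, is to invalidate the raw depth readings inside a predicted transparent-object region so that a downstream depth-completion or multi-view module re-estimates the geometry there instead of trusting the sensor. The mask is assumed to come from an RGB segmentation network; this is an illustration of the general idea, not the pipeline of the cited methods.

```python
# Sketch: marking sensor depth inside transparent regions as missing.
import numpy as np

def invalidate_transparent_depth(depth, transparent_mask, invalid_value=0.0):
    """depth: (H, W) float depth map in meters; transparent_mask: (H, W) bool."""
    cleaned = depth.copy()
    cleaned[transparent_mask] = invalid_value  # treated as missing by completion
    return cleaned
```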

References

  1. I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino, M. Plappert, G. Powell and R. Ribas (2019) Solving rubik’s cube with a robot hand. arXiv preprint arXiv:1910.07113. Cited by: §1.
  2. A. Aldoma, M. Vincze, N. Blodow, D. Gossow, S. Gedikli, R. B. Rusu and G. Bradski (2011) CAD-model recognition and 6dof pose estimation using 3d cues. In 2011 IEEE international conference on computer vision workshops (ICCV workshops), pp. 585–592. Cited by: §2.2.2, Table 2, §3.1.2, Table 4.
  3. Y. Aoki, H. Goforth, R. A. Srivatsan and S. Lucey (2019) PointNetLK: robust & efficient point cloud registration using pointnet. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7163–7172. Cited by: §3.2.2, Table 5.
  4. P. Ardón, È. Pairet, R. P. Petrick, S. Ramamoorthy and K. S. Lohan (2019) Learning grasp affordance reasoning through semantic relations. IEEE Robotics and Automation Letters 4 (4), pp. 4571–4578. Cited by: §4.1.1.
  5. U. Asif, J. Tang and S. Harrer (2018) GraspNet: an efficient convolutional neural network for real-time grasp detection for low-powered devices.. In IJCAI, pp. 4875–4882. Cited by: Table 11.
  6. H. Bay, T. Tuytelaars and L. Van Gool (2006) Surf: speeded up robust features. In European conference on computer vision, pp. 404–417. Cited by: §2.2.1, Table 2, §3.1.1, Table 4.
  7. B. Bellekens, V. Spruyt, R. Berkvens and M. Weyn (2014) A survey of rigid 3d pointcloud registration algorithms. In AMBIENT 2014: the Fourth International Conference on Ambient Computing, Applications, Services and Technologies, August 24-28, 2014, Rome, Italy, pp. 8–13. Cited by: §3.2.2, §3.4.
  8. S. Belongie, J. Malik and J. Puzicha (2002) Shape matching and object recognition using shape contexts. IEEE transactions on pattern analysis and machine intelligence 24 (4), pp. 509–522. Cited by: §4.2.1.
  9. L. Berscheid, P. Meißner and T. Kröger (2019) Robot learning of shifting objects for grasping in cluttered environments. arXiv preprint arXiv:1907.11035. Cited by: §4.2.1.
  10. P. J. Besl and N. D. McKay (1992-02) A method for registration of 3-d shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14 (2), pp. 239–256. External Links: ISSN 0162-8828 Cited by: §3.2.2.
  11. S. Bhatia and S. K. Chalup (2013) Segmenting salient objects in 3d point clouds of indoor scenes using geodesic distances. Journal of Signal and Information Processing 4 (03), pp. 102. Cited by: §2.1.2, Table 1.
  12. G. Billings and M. Johnson-Roberson (2018) SilhoNet: an RGB method for 3d object pose estimation and grasp planning. CoRR abs/1809.06893. Cited by: §4.2.2, Table 12.
  13. K. Blomqvist, M. Breyer, A. Cramariuc, J. Förster, M. Grinvald, F. Tschopp, J. J. Chung, L. Ott, J. Nieto and R. Siegwart (2020) Go fetch: mobile manipulation in unstructured environments. arXiv preprint arXiv:2004.00899. Cited by: §5.
  14. A. Bochkovskiy, C. Wang and H. M. Liao (2020) YOLOv4: optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934. Cited by: §2.2.1, Table 2.
  15. J. Bohg, A. Morales, T. Asfour and D. Kragic (2014-04) Data-driven grasp synthesis: a survey. IEEE Transactions on Robotics 30 (2), pp. 289–309. External Links: ISSN 1552-3098 Cited by: §1.
  16. J. Bohg and D. Kragic (2010) Learning grasping points with shape context. Robotics and Autonomous Systems 58 (4), pp. 362–377. Cited by: §4.2.1, §4.2.1, Table 12.
  17. D. Bolya, C. Zhou, F. Xiao and Y. J. Lee (2019) YOLACT: real-time instance segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9157–9166. Cited by: §2.3.1, Table 3.
  18. D. Bolya, C. Zhou, F. Xiao and Y. J. Lee (2019) YOLACT++: better real-time instance segmentation. arXiv preprint arXiv:1912.06218. Cited by: §2.3.1, Table 3.
  19. A. Borji, M. Cheng, Q. Hou, H. Jiang and J. Li (2019) Salient object detection: a survey. Computational Visual Media 5 (2), pp. 117–150. Cited by: §2.1.1.
  20. C. Borst, M. Fischer and G. Hirzinger (2003) Grasping the dice by dicing the grasp. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 4, pp. 3692–3697. Cited by: §4.2.2.
  21. K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz, P. Pastor and K. Konolige (2018) Using simulation and domain adaptation to improve efficiency of deep robotic grasping. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 4243–4250. Cited by: §5.
  22. E. Brachmann, A. Krull, F. Michel, S. Gumhold, J. Shotton and C. Rother (2014) Learning 6d object pose estimation using 3d object coordinates. In European conference on computer vision, pp. 536–551. Cited by: §3.3.2, §3.4.1, Table 6.
  23. E. Brachmann, F. Michel, A. Krull, M. Ying Yang and S. Gumhold (2016) Uncertainty-driven 6d pose estimation of objects and scenes from a single rgb image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3364–3372. Cited by: Table 8.
  24. G. Bradski and A. Kaehler (2008) Learning opencv: computer vision with the opencv library. O'Reilly Media, Inc. Cited by: §2.1.1.
  25. J. Cai, H. Cheng, Z. Zhang and J. Su (2019) MetaGrasp: data efficient grasping by affordance interpreter network. In 2019 International Conference on Robotics and Automation (ICRA), pp. 4960–4966. Cited by: §1, §4.1.1, Table 9.
  26. S. Caldera, A. Rassau and D. Chai (2018) Review of deep learning methods in robotic grasp detection. Multimodal Technologies and Interaction 2 (3), pp. 57. Cited by: §1.
  27. P. Castro, A. Armagan and T. Kim (2020) Accurate 6d object pose estimation by pose conditioned mesh reconstruction. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4147–4151. Cited by: Table 7, Table 8.
  28. D. Chen, J. Li, Z. Wang and K. Xu (2020) Learning canonical shape space for category-level 6d object pose and size estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11973–11982. Cited by: §3.2.1, §3.4.2, Table 5, §5.
  29. H. Chen, Y. Li and D. Su (2019) Multi-modal fusion network with multi-scale multi-path and cross-modal interactions for rgb-d salient object detection. Pattern Recognition 86, pp. 376–385. Cited by: §2.1.2, Table 1.
  30. H. Chen and Y. Li (2018) Progressively complementarity-aware fusion network for rgb-d salient object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3051–3060. Cited by: §2.1.2, Table 1.
  31. H. Chen and Y. Li (2019) CNN-based rgb-d salient object detection: learn, select and fuse. arXiv preprint arXiv:1909.09309. Cited by: §2.1.2, Table 1.
  32. H. Chen, K. Sun, Z. Tian, C. Shen, Y. Huang and Y. Yan (2020) BlendMask: top-down meets bottom-up for instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8573–8581. Cited by: §2.3.1, Table 3.
  33. I. Chen and J. W. Burdick (1993) Finding antipodal point grasps on irregularly shaped objects. IEEE transactions on Robotics and Automation 9 (4), pp. 507–512. Cited by: §1.
  34. K. Chen, J. Pang, J. Wang, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Shi and W. Ouyang (2019) Hybrid task cascade for instance segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4974–4983. Cited by: §2.3.1, Table 3.
  35. L. Chen, A. Hermans, G. Papandreou, F. Schroff, P. Wang and H. Adam (2018) Masklab: instance segmentation by refining object detection with semantic and direction features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4013–4022. Cited by: §2.3.1, Table 3.
  36. W. Chen, X. Jia, H. J. Chang, J. Duan and A. Leonardis (2020) G2L-net: global to local network for real-time 6d pose estimation with embedding vector features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4233–4242. Cited by: §3.2.2, Table 5.
  37. X. Chen, H. Ma, J. Wan, B. Li and T. Xia (2017) Multi-view 3d object detection network for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1907–1915. Cited by: §2.2.2, §2.2.2, Table 2.
  38. X. Chen, R. Girshick, K. He and P. Dollár (2019) Tensormask: a foundation for dense object segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2061–2069. Cited by: §2.3.1, Table 3.
  39. M. Cheng, N. J. Mitra, X. Huang, P. H. Torr and S. Hu (2014) Global contrast based salient region detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 37 (3), pp. 569–582. Cited by: §2.1.1, Table 1.
  40. C. Choy, W. Dong and V. Koltun (2020) Deep global registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2514–2523. Cited by: §3.2.2, Table 5.
  41. F. Chu, R. Xu and P. A. Vela (2018) Real-world multiobject, multigrasp detection. IEEE Robotics and Automation Letters 3 (4), pp. 3355–3362. Cited by: §1, §4.1.2, Table 11, Table 9.
  42. F. Chu, R. Xu and P. A. Vela (2019) Detecting robotic affordances on novel objects with regional attention and attributes. arXiv preprint arXiv:1909.05770. Cited by: §4.1.1.
  43. A. Crivellaro, M. Rad, Y. Verdie, K. M. Yi, P. Fua and V. Lepetit (2017) Robust 3d object tracking from monocular images using stable parts. IEEE transactions on pattern analysis and machine intelligence 40 (6), pp. 1465–1479. Cited by: §3.1.1, Table 4, Table 6.
  44. A. Dai, M. Nießner, M. Zollhöfer, S. Izadi and C. Theobalt (2017) Bundlefusion: real-time globally consistent 3d reconstruction using on-the-fly surface reintegration. ACM Transactions on Graphics (ToG) 36 (4), pp. 1. Cited by: §5.
  45. J. Dai, K. He, Y. Li, S. Ren and J. Sun (2016) Instance-sensitive fully convolutional networks. In European Conference on Computer Vision, pp. 534–549. Cited by: §2.3.1, Table 3.
  46. J. Dai, K. He and J. Sun (2016) Instance-aware semantic segmentation via multi-task network cascades. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3150–3158. Cited by: §2.3.1, Table 3.
  47. J. Dai, Y. Li, K. He and J. Sun (2016) R-fcn: object detection via region-based fully convolutional networks. In Advances in neural information processing systems, pp. 379–387. Cited by: §2.2.1, Table 2.
  48. M. Danielczuk, M. Matl, S. Gupta, A. Li, A. Lee, J. Mahler and K. Goldberg (2019) Segmenting unknown 3d objects from real depth images using mask r-cnn trained on synthetic data. In 2019 International Conference on Robotics and Automation (ICRA), pp. 7283–7290. Cited by: §2.3.1.
  49. X. Deng, Y. Xiang, A. Mousavian, C. Eppner, T. Bretl and D. Fox (2020) Self-supervised 6d object pose estimation for robot manipulation. In International Conference on Robotics and Automation (ICRA), Cited by: §5.
  50. A. Depierre, E. Dellandréa and L. Chen (2018) Jacquard: a large scale dataset for robotic grasp detection. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3511–3516. Cited by: Figure 17, §4.1.3, Table 10.
  51. A. Depierre, E. Dellandréa and L. Chen (2020) Optimizing correlated graspability score and grasp regression for better grasp prediction. arXiv preprint arXiv:2002.00872. Cited by: §4.1.2, §4.1.3, Table 9.
  52. D. DeTone, T. Malisiewicz and A. Rabinovich (2018) Superpoint: self-supervised interest point detection and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 224–236. Cited by: §3.1.1.
  53. D. Ding, Y. Liu and M. Y. Wang (2001) On computing immobilizing grasps of 3-d curved objects. In IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 11–16. Cited by: §4.2.2.
  54. T. Do, M. Cai, T. Pham and I. Reid (2018) Deep-6dpose: recovering 6d object pose from a single rgb image. arXiv preprint arXiv:1802.10367. Cited by: §3.2.1, Table 5.
  55. T. Do, A. Nguyen and I. Reid (2018) Affordancenet: an end-to-end deep learning approach for object affordance detection. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 1–5. Cited by: §4.1.1.
  56. Y. Domae, H. Okuda, Y. Taguchi, K. Sumi and T. Hirai (2014) Fast graspability evaluation on single depth maps for bin picking with general grippers. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 1997–2004. Cited by: §4.1.1, Table 9.
  57. Z. Dong, G. Li, Y. Liao, F. Wang, P. Ren and C. Qian (2020) Centripetalnet: pursuing high-quality keypoint pairs for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10519–10528. Cited by: §2.2.1, Table 2.
  58. D. H. Douglas and T. K. Peucker (1973) Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica: the international journal for geographic information and geovisualization 10 (2), pp. 112–122. Cited by: §2.1.1, Table 1.
  59. B. Drost and S. Ilic (2012-10) 3D object detection and localization using multimodal point pair features. In International Conference on 3D Imaging, Modeling, Processing, Visualization Transmission, pp. 9–16. External Links: ISSN 1550-6185 Cited by: Table 6.
  60. B. Drost, M. Ulrich, N. Navab and S. Ilic (2010-06) Model globally, match locally: efficient and robust 3d object recognition. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 998–1005. Cited by: §3.3.2.
  61. L. Du, X. Ye, X. Tan, J. Feng, Z. Xu, E. Ding and S. Wen (2020) Associate-3ddet: perceptual-to-conceptual association for 3d point cloud object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13329–13338. Cited by: §2.2.2, Table 2.
  62. K. Duan, S. Bai, L. Xie, H. Qi, Q. Huang and Q. Tian (2019) Centernet: keypoint triplets for object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6569–6578. Cited by: §2.2.1, Table 2.
  63. F. Engelmann, M. Bokeloh, A. Fathi, B. Leibe and M. Nießner (2020) 3D-mpa: multi-proposal aggregation for 3d semantic instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9031–9040. Cited by: §2.3.2, Table 3.
  64. D. Erhan, C. Szegedy, A. Toshev and D. Anguelov (2014) Scalable object detection using deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2147–2154. Cited by: §2.2.1, Table 2.
  65. P. Falco, S. Lu, C. Natale, S. Pirozzi and D. Lee (2019) A transfer learning approach to cross-modal object recognition: from visual observation to robotic haptic exploration. IEEE Transactions on Robotics 35 (4), pp. 987–998. Cited by: §5.
  66. Y. Fan and M. Tomizuka (2019) Efficient grasp planning and execution with multifingered hands by surface fitting. IEEE Robotics and Automation Letters 4 (4), pp. 3995–4002. Cited by: §1.
  67. Z. Fan, J. Yu, Z. Liang, J. Ou, C. Gao, G. Xia and Y. Li (2020) FGN: fully guided network for few-shot instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9172–9181. Cited by: §2.3.1, Table 3.
  68. H. Fang, C. Wang, M. Gou and C. Lu (2020) GraspNet-1billion: a large-scale benchmark for general object grasping. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11444–11453. Cited by: §4.2.1.
  69. K. Fang, Y. Bai, S. Hinterstoisser, S. Savarese and M. Kalakrishnan (2018) Multi-task domain adaptation for deep learning of instance grasping from simulation. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 3516–3523. Cited by: §5.
  70. M. A. Fischler and R. C. Bolles (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24 (6), pp. 381–395. Cited by: §2.1.2, §2.1.2.
  71. A. W. Fitzgibbon and R. B. Fisher (1996) A buyer’s guide to conic fitting. University of Edinburgh, Department of Artificial Intelligence. Cited by: §2.1.1, Table 1.
  72. P. R. Florence, L. Manuelli and R. Tedrake (2018) Dense object nets: learning dense visual object descriptors by and for robotic manipulation. arXiv preprint arXiv:1806.08756. Cited by: §4.2.1, Table 12.
  73. A. Frome, D. Huber, R. Kolluri, T. Bülow and J. Malik (2004) Recognizing objects in range data using regional point descriptors. In European conference on computer vision, pp. 224–237. Cited by: §2.2.2, Table 2, §3.1.2, Table 4.
  74. G. Gao, M. Lauri, Y. Wang, X. Hu, J. Zhang and S. Frintrop (2020) 6D object pose regression via supervised learning on point clouds. arXiv preprint arXiv:2001.08942. Cited by: §3.2.2, Table 5.
  75. W. Gao and R. Tedrake (2019) KPAM-sc: generalizable manipulation planning using keypoint affordance and shape completion. arXiv preprint arXiv:1909.06980. Cited by: §4.2.2, §4.2.2, Table 12.
  76. R. Girshick, J. Donahue, T. Darrell and J. Malik (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR ’14, pp. 580–587. External Links: ISBN 978-1-4799-5118-5 Cited by: §2.2.1, Table 2.
  77. R. Girshick (2015) Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448. Cited by: §2.2.1, Table 2.
  78. Z. Gojcic, C. Zhou, J. D. Wegner and A. Wieser (2019) The perfect match: 3d point cloud matching with smoothed densities. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5545–5554. Cited by: §3.1.2, Table 4.
  79. M. Gonzalez, A. Kacete, A. Murienne and E. Marchand (2020) YOLOff: you only learn offsets for robust 6dof object pose estimation. arXiv preprint arXiv:2002.00911. Cited by: §3.3.1, Table 6, Table 8.
  80. A. Gordo, J. Almazán, J. Revaud and D. Larlus (2016) Deep image retrieval: learning global representations for image search. In European conference on computer vision, pp. 241–257. Cited by: §3.2.1.
  81. L. C. Goron, Z. Marton, G. Lazea and M. Beetz (2012) Robustly segmenting cylindrical and box-like objects in cluttered scenes using depth cameras. In ROBOTIK 2012; 7th German Conference on Robotics, pp. 1–6. Cited by: §2.1.2, §2.1.2, Table 1.
  82. B. Graham, M. Engelcke and L. van der Maaten (2018) 3d semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 9224–9232. Cited by: §2.2.2, §2.2.2, §2.3.2.
  83. B. Graham and L. van der Maaten (2017) Submanifold sparse convolutional networks. arXiv preprint arXiv:1706.01307. Cited by: §2.2.2, §2.2.2, §2.3.2.
  84. D. Guo, T. Kong, F. Sun and H. Liu (2016) Object discovery and grasp detection with a shared convolutional neural network. In IEEE International Conference on Robotics and Automation (ICRA), pp. 2038–2043. Cited by: §4.1.2.
  85. D. Guo, F. Sun, H. Liu, T. Kong, B. Fang and N. Xi (2017) A hybrid deep architecture for robotic grasp detection. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 1609–1614. Cited by: §4.1.2, Table 9.
  86. F. Guo, W. Wang, J. Shen, L. Shao, J. Yang, D. Tao and Y. Y. Tang (2017) Video saliency detection using object proposals. IEEE transactions on cybernetics 48 (11), pp. 3159–3170. Cited by: §2.1.1, Table 1.
  87. Y. Guo, M. Bennamoun, F. Sohel, M. Lu, J. Wan and N. M. Kwok (2016) A comprehensive performance evaluation of 3d local feature descriptors. International Journal of Computer Vision 116 (1), pp. 66–89. Cited by: §3.1.2.
  88. Y. Guo, H. Wang, Q. Hu, H. Liu, L. Liu and M. Bennamoun (2020) Deep learning for 3d point clouds: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §2.2.2.
  89. A. M. Hafiz and G. M. Bhat (2020) A survey on instance segmentation: state of the art. International Journal of Multimedia Information Retrieval 9 (3), pp. 171–189. Cited by: §2.3.1.
  90. F. Hagelskjær and A. G. Buch (2019) PointPoseNet: accurate object detection and 6 dof pose estimation in point clouds. arXiv preprint arXiv:1912.09057. Cited by: Table 8.
  91. J. Han, D. Zhang, G. Cheng, N. Liu and D. Xu (2018) Advanced deep-learning techniques for salient and category-specific object detection: a survey. IEEE Signal Processing Magazine 35 (1), pp. 84–100. Cited by: §2.1.2, Table 1.
  92. L. Han, T. Zheng, L. Xu and L. Fang (2020) OccuSeg: occupancy-aware 3d instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2940–2949. Cited by: §2.3.2, Table 3.
  93. B. Hariharan, P. Arbeláez, R. Girshick and J. Malik (2014) Simultaneous detection and segmentation. In European Conference on Computer Vision, pp. 297–312. Cited by: §2.3.1, Table 3.
  94. K. He, G. Gkioxari, P. Dollár and R. B. Girshick (2017) Mask r-cnn. IEEE International Conference on Computer Vision (ICCV), pp. 2980–2988. Cited by: §2.3.1, §2.3.1, §2.3.1, Table 3, §3.2.1.
  95. K. He, X. Zhang, S. Ren and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §4.1.2.
  96. Y. He, W. Sun, H. Huang, J. Liu, H. Fan and J. Sun (2020) PVN3D: a deep point-wise 3d keypoints voting network for 6dof pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11632–11641. Cited by: §1, §3.3.1, Table 6, Table 8.
  97. S. Hinterstoisser, V. Lepetit, S. Ilic, S. Holzer, G. Bradski, K. Konolige and N. Navab (2012) Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes. In Asian conference on computer vision, pp. 548–562. Cited by: Figure 11, Figure 13, §3.2.1, §3.4.1, §3.4.1, Table 5.
  98. G. E. Hinton, A. Krizhevsky and S. D. Wang (2011) Transforming auto-encoders. In International conference on artificial neural networks, pp. 44–51. Cited by: §2.1.1.
  99. T. Hodan, D. Barath and J. Matas (2020) EPOS: estimating 6d pose of objects with symmetries. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11703–11712. Cited by: §3.1.1, Table 4.
  100. T. Hodaň, P. Haluza, Š. Obdržálek, J. Matas, M. Lourakis and X. Zabulis (2017) T-LESS: an RGB-D dataset for 6D pose estimation of texture-less objects. IEEE Winter Conference on Applications of Computer Vision (WACV). Cited by: §3.4.1.
  101. T. Hodan, R. Kouskouridas, T. Kim, F. Tombari, K. E. Bekris, B. Drost, T. Groueix, K. Walas, V. Lepetit, A. Leonardis, C. Steger, F. Michel, C. Sahin, C. Rother and J. Matas (2018) A summary of the 4th international workshop on recovering 6d object pose. CoRR abs/1810.03758. Cited by: §3.3.2.
  102. T. Hodaň, F. Michel, E. Brachmann, W. Kehl, A. GlentBuch, D. Kraft, B. Drost, J. Vidal, S. Ihrke and X. Zabulis (2018) BOP: benchmark for 6d object pose estimation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 19–34. Cited by: §3.4.1.
  103. T. Hodaň, X. Zabulis, M. Lourakis, Š. Obdržálek and J. Matas (2015) Detection and fine 3d pose estimation of texture-less objects in rgb-d images. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4421–4428. Cited by: §3.2.1, Table 5.
  104. F. R. Hogan, J. Ballester, S. Dong and A. Rodriguez (2020) Tactile dexterity: manipulation primitives with tactile feedback. arXiv preprint arXiv:2002.03236. Cited by: §5.
  105. J. Hou, A. Dai and M. Nießner (2019) 3d-sis: 3d semantic instance segmentation of rgb-d scans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4421–4430. Cited by: §2.3.2, Table 3.
  106. Q. Hou, M. Cheng, X. Hu, A. Borji, Z. Tu and P. H. Torr (2017) Deeply supervised salient object detection with short connections. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3203–3212. Cited by: §2.1.1, §2.1.2, Table 1.
  107. Y. Hu, P. Fua, W. Wang and M. Salzmann (2020) Single-stage 6d object pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2930–2939. Cited by: §3.1.1, Table 4.
  108. Y. Hu, J. Hugonot, P. Fua and M. Salzmann (2019) Segmentation-driven 6d object pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3385–3394. Cited by: §3.1.1, Table 4, Table 8.
  109. H. Jiang and J. Xiao (2013) A linear approach to matching cuboids in rgbd images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2171–2178. Cited by: §2.1.2, Table 1.
  110. H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng and S. Li (2013) Salient object detection: a discriminative regional feature integration approach. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2083–2090. Cited by: §2.1.1, Table 1.
  111. Y. Jiang, S. Moseson and A. Saxena (2011) Efficient grasping from rgbd images: learning using a new rectangle representation. In IEEE International Conference on Robotics and Automation, pp. 3304–3311. Cited by: §1, §4.1.2, §4.1.3, §4.1.3, §4.1.3, Table 10, Table 11, Table 9.
  112. A. E. Johnson (1997) Spin-images: a representation for 3-d surface matching. Cited by: §2.2.2, Table 2, §3.1.2, Table 4.
  113. A. Kaiser, J. A. Ybanez Zepeda and T. Boubekeur (2019) A survey of simple geometric primitives detection methods for captured 3d data. In Computer Graphics Forum, Vol. 38, pp. 167–196. Cited by: §2.1.2.
  114. W. Kehl, F. Manhardt, F. Tombari, S. Ilic and N. Navab (2017) SSD-6d: making rgb-based 3d detection and 6d pose estimation great again. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1521–1529. Cited by: §3.2.1, Table 5, Table 8.
  115. S. H. Khan, X. He, M. Bennamoun, F. Sohel and R. Togneri (2015) Separating objects and clutter in indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4603–4611. Cited by: §2.1.2, Table 1.
  116. G. Kim, D. Huber and M. Hebert (2008) Segmentation of salient regions in outdoor scenes using imagery and 3-d data. In 2008 IEEE Workshop on Applications of Computer Vision, pp. 1–8. Cited by: §2.1.2, Table 1.
  117. A. Kirillov, Y. Wu, K. He and R. Girshick (2020) Pointrend: image segmentation as rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9799–9808. Cited by: §2.3.1, Table 3.
  118. D. Kirkpatrick, B. Mishra and C. Yap (1992) Quantitative steinitz’s theorems with applications to multifingered grasping. Discrete & Computational Geometry 7 (3), pp. 295–318. Cited by: §4.2.1.
  119. A. Krizhevsky, I. Sutskever and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS’12, pp. 1097–1105. Cited by: §2.2.1.
  120. S. Kumra, S. Joshi and F. Sahin (2019) Antipodal robotic grasping using generative residual convolutional neural network. arXiv preprint arXiv:1909.04810. Cited by: §4.1.2, Table 9.
  121. S. Kumra and C. Kanan (2017) Robotic grasp detection using deep convolutional neural networks. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 769–776. Cited by: §1, §1, §4.1.2, Table 11, Table 9.
  122. A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang and O. Beijbom (2019) Pointpillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12697–12705. Cited by: §2.2.2, Table 2.
  123. H. Law and J. Deng (2018) Cornernet: detecting objects as paired keypoints. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 734–750. Cited by: §2.2.1, Table 2.
  124. M. A. Lee, Y. Zhu, K. Srinivasan, P. Shah, S. Savarese, L. Fei-Fei, A. Garg and J. Bohg (2019) Making sense of vision and touch: self-supervised learning of multimodal representations for contact-rich tasks. In 2019 International Conference on Robotics and Automation (ICRA), pp. 8943–8950. Cited by: §5.
  125. Y. Lee and J. Park (2020) CenterMask: real-time anchor-free instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13906–13915. Cited by: §2.3.1, Table 3.
  126. I. Lenz, H. Lee and A. Saxena (2015) Deep learning for detecting robotic grasps. The International Journal of Robotics Research 34 (4-5), pp. 705–724. Cited by: §1, §4.1.2, §4.1.2, §4.1.2, Table 11, Table 9.
  127. B. León, S. Ulbrich, R. Diankov, G. Puche, M. Przybylski, A. Morales, T. Asfour, S. Moisio, J. Bohg, J. Kuffner and R. Dillmann (2010) OpenGRASP: a toolkit for robot grasping simulation. In Simulation, Modeling, and Programming for Autonomous Robots, N. Ando, S. Balakirsky, T. Hemker, M. Reggiani and O. von Stryk (Eds.), Berlin, Heidelberg, pp. 109–120. Cited by: §4.2.2.
  128. V. Lepetit and P. Fua (2005) Monocular model-based 3d tracking of rigid objects: a survey. Foundations and Trends® in Computer Graphics and Vision 1 (1), pp. 1–89. Cited by: §3.1.1.
  129. V. Lepetit, F. Moreno-Noguer and P. Fua (2009-02) EPnP: an accurate o(n) solution to the pnp problem. IJCV 81 (2), pp. 155–166. External Links: ISSN 0920-5691 Cited by: §3.1.1, §3.1.
  130. G. Li, Z. Liu, L. Ye, Y. Wang and H. Ling (2020) Cross-modal weighting network for rgb-d salient object detection. Cited by: §2.1.2.
  131. Y. Li, H. Qi, J. Dai, X. Ji and Y. Wei (2017) Fully convolutional instance-aware semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2359–2367. Cited by: §2.3.1, Table 3.
  132. Z. Li, G. Wang and X. Ji (2019) CDPN: coordinates-based disentangled pose network for real-time rgb-based 6-dof object pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 7678–7687. Cited by: §3.2.1, Table 5, Table 8.
  133. H. Liang, X. Ma, S. Li, M. Görner, S. Tang, B. Fang, F. Sun and J. Zhang (2019) Pointnetgpd: detecting grasp configurations from point sets. In 2019 International Conference on Robotics and Automation (ICRA), pp. 3629–3635. Cited by: §1, §4.2.1, §4.2.1, §4.2.1, Table 12, Table 13.
  134. M. Liang, B. Yang, Y. Chen, R. Hu and R. Urtasun (2019) Multi-task multi-sensor fusion for 3d object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7345–7353. Cited by: §2.2.2, §2.2.2, Table 2.
  135. T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan and S. Belongie (2017) Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2117–2125. Cited by: §2.2.1, Table 2.
  136. T. Lin, P. Goyal, R. Girshick, K. He and P. Dollár (2017) Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988. Cited by: §2.2.1, Table 2.
  137. C. Liu and Y. Furukawa (2019) MASC: multi-scale affinity with sparse convolution for 3d instance segmentation. arXiv preprint arXiv:1902.04478. Cited by: §2.3.2, Table 3.
  138. F. Liu, P. Fang, Z. Yao, R. Fan, Z. Pan, W. Sheng and H. Yang (2019) Recovering 6d object pose from rgb indoor image based on two-stage detection network with multi-task loss. Neurocomputing 337, pp. 15–23. External Links: ISSN 0925-2312 Cited by: §3.2.1, Table 5.
  139. L. Liu, W. Ouyang, X. Wang, P. Fieguth, J. Chen, X. Liu and M. Pietikäinen (2020) Deep learning for generic object detection: a survey. International Journal of Computer Vision 128 (2), pp. 261–318. Cited by: §2.2.1.
  140. M. Liu, Z. Pan, K. Xu, K. Ganguly and D. Manocha (2019) Generating grasp poses for a high-dof gripper using neural networks. arXiv preprint arXiv:1903.00425. Cited by: §1.
  141. N. Liu, J. Han and M. Yang (2018) Picanet: learning pixel-wise contextual attention for saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3089–3098. Cited by: §2.1.1, Table 1.
  142. N. Liu and J. Han (2016) Dhsnet: deep hierarchical saliency network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 678–686. Cited by: §2.1.1, Table 1.
  143. S. Liu, L. Qi, H. Qin, J. Shi and J. Jia (2018) Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8759–8768. Cited by: §2.3.1, Table 3.
  144. W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu and A. C. Berg (2016) Ssd: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §2.2.1, Table 2, §4.1.2.
  145. X. Liu, R. Jonschkowski, A. Angelova and K. Konolige (2020) KeyPose: multi-view 3d labeling and keypoint estimation for transparent objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11602–11610. Cited by: §3.1.1, Table 4, §5.
  146. Y. Liu, Q. Zhang, D. Zhang and J. Han (2019) Employing deep part-object relationships for salient object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1232–1241. Cited by: §2.1.1, Table 1.
  147. Z. Liu, X. Zhao, T. Huang, R. Hu, Y. Zhou and X. Bai (2020) TANet: robust 3d object detection from point clouds with triple attention.. In AAAI, pp. 11677–11684. Cited by: §2.2.2, Table 2.
  148. J. Long, E. Shelhamer and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431–3440. Cited by: §2.1.1, §2.2.1.
  149. X. Lou, Y. Yang and C. Choi (2019) Learning to generate 6-dof grasp poses with reachability awareness. arXiv preprint arXiv:1910.06404. Cited by: §4.2.1.
  150. D. G. Lowe (1999) Object recognition from local scale-invariant features. In Proceedings of the International Conference on Computer Vision-Volume 2 - Volume 2, ICCV ’99, pp. 1150–. External Links: ISBN 0-7695-0164-8 Cited by: §2.2.1, Table 2, §3.1.1, Table 4.
  151. W. Lu, G. Wan, Y. Zhou, X. Fu, P. Yuan and S. Song (2019) DeepICP: an end-to-end deep neural network for 3d point cloud registration. arXiv preprint arXiv:1905.04153. Cited by: §3.2.2, Table 5.
  152. J. Lundell, F. Verdoja and V. Kyrki (2019) Robust grasp planning over uncertain shape completions. arXiv preprint arXiv:1903.00645. Cited by: §4.2.2, §4.2.2, Table 12.
  153. T. Luo, K. Mo, Z. Huang, J. Xu, S. Hu, L. Wang and H. Su (2020) Learning to group: a bottom-up framework for 3d part discovery in unseen categories. In International Conference on Learning Representations, Cited by: §5.
  154. M. Mahajan, T. Bhattacharjee, A. Krishnan, P. Shukla and G. Nandi (2020) Semi-supervised grasp detection by representation learning in a vector quantized latent space. arXiv preprint arXiv:2001.08477. Cited by: §5.
  155. J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea and K. Goldberg (2017) Dex-net 2.0: deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. CoRR abs/1703.09312. External Links: Link, 1703.09312 Cited by: §1, §1, §2.1.1, §4.1.1, Table 10, Table 9.
  156. T. Malisiewicz, A. Gupta and A. A. Efros (2011) Ensemble of exemplar-svms for object detection and beyond. In 2011 International conference on computer vision, pp. 89–96. Cited by: §2.2.2.
  157. N. Mellado, D. Aiger and N. J. Mitra (2014) Super 4pcs fast global pointcloud registration via smart indexing. In Computer Graphics Forum, Vol. 33, pp. 205–215. Cited by: §3.2.2, Table 5.
  158. A. T. Miller and P. K. Allen (2004) Graspit! a versatile simulator for robotic grasping. IEEE Robotics Automation Magazine 11 (4), pp. 110–122. Cited by: §4.2.2, §4.2.2.
  159. A. T. Miller, S. Knoop, H. I. Christensen and P. K. Allen (2003-09) Automatic grasp planning using shape primitives. In ICRA, Vol. 2, pp. 1824–1829. Cited by: §4.2.1, §4.2.2, Table 12.
  160. B. Mirtich and J. Canny (1994) Easily computable optimum grasps in 2-d and 3-d. In IEEE International Conference on Robotics and Automation, pp. 739–747. Cited by: §4.2.2.
  161. D. Morrison, P. Corke and J. Leitner (2018) Closing the loop for robotic grasping: a real-time, generative grasp synthesis approach. arXiv preprint arXiv:1804.05172. Cited by: §1, §4.1.1, Table 11, Table 9.
  162. D. Morrison, P. Corke and J. Leitner (2019) Multi-view picking: next-best-view reaching for improved grasping in clutter. In 2019 International Conference on Robotics and Automation (ICRA), pp. 8762–8768. Cited by: §4.1.1, Table 9, §5.
  163. A. Mousavian, C. Eppner and D. Fox (2019) 6-dof graspnet: variational grasp generation for object manipulation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2901–2910. Cited by: §1, §4.2.1, §4.2.1, Table 12.
  164. R. Mur-Artal, J. M. M. Montiel and J. D. Tardos (2015) ORB-slam: a versatile and accurate monocular slam system. IEEE transactions on robotics 31 (5), pp. 1147–1163. Cited by: §3.1.1.
  165. A. Murali, A. Mousavian, C. Eppner, C. Paxton and D. Fox (2019) 6-dof grasping for target-driven object manipulation in clutter. arXiv preprint arXiv:1912.03628. Cited by: §4.2.1.
  166. M. Najibi, G. Lai, A. Kundu, Z. Lu, V. Rathod, T. Funkhouser, C. Pantofaru, D. Ross, L. S. Davis and A. Fathi (2020) DOPS: learning to detect 3d objects and predict their 3d shapes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11913–11922. Cited by: §2.2.2, Table 2.
  167. V. Nguyen (1987) Constructing stable grasps in 3d. In IEEE International Conference on Robotics and Automation, Vol. 4, pp. 234–239. Cited by: §4.2.2.
  168. P. Ni, W. Zhang, X. Zhu and Q. Cao (2020) PointNet++ grasping: learning an end-to-end spatial grasp generation algorithm from sparse point clouds. arXiv preprint arXiv:2003.09644. Cited by: §4.2.1, §4.2.1.
  169. E. Nikandrova and V. Kyrki (2015) Category-based task specific grasping. Robotics and Autonomous Systems 70, pp. 25–35. Cited by: §4.2.1, Table 12.
  170. M. Oberweger, M. Rad and V. Lepetit (2018) Making deep heatmaps robust to partial occlusions for 3d object pose estimation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 119–134. Cited by: Table 7, Table 8.
  171. Y. Pang, L. Zhang, X. Zhao and H. Lu (2020) Hierarchical dynamic filtering network for rgb-d salient object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2.1.2, Table 1.
  172. D. Park and S. Y. Chun (2018) Classification based grasp detection using spatial transformer network. arXiv preprint arXiv:1803.01356. Cited by: §1, §4.1.2, Table 11, Table 9.
  173. D. Park, Y. Seo and S. Y. Chun (2018) Real-time, highly accurate robotic grasp detection using fully convolutional neural network with rotation ensemble module. arXiv preprint arXiv:1812.07762. Cited by: §1, §4.1.2, Table 11, Table 9.
  174. D. Park, Y. Seo, D. Shin, J. Choi and S. Y. Chun (2019) A single multi-task deep neural network with post-processing for object detection with reasoning and robotic grasp detection. arXiv preprint arXiv:1909.07050. Cited by: §4.1.2, Table 11.
  175. K. Park, A. Mousavian, Y. Xiang and D. Fox (2020) LatentFusion: end-to-end differentiable reconstruction and rendering for unseen object pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10710–10719. Cited by: §3.2.1, §3.4.2, Table 5, §5.
  176. K. Park, T. Patten and M. Vincze (2019) Pix2pose: pixel-wise coordinate regression of objects for 6d pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 7668–7677. Cited by: §3.1.1, Table 4, Table 8.
  177. A. t. Pas and R. Platt (2015) Using geometry to detect grasps in 3d point clouds. arXiv preprint arXiv:1501.03100. Cited by: §4.2.1, §4.2.1, Table 12.
  178. A. V. Patil and P. Rabha (2018) A survey on joint object detection and pose estimation using monocular vision. arXiv preprint arXiv:1811.10216. Cited by: §3.2.1.
  179. T. Patten, K. Park and M. Vincze (2020) DGCM-net: dense geometrical correspondence matching network for incremental experience-based robotic grasping. arXiv preprint arXiv:2001.05279. Cited by: §4.2.1, Table 12.
  180. H. Peng, B. Li, H. Ling, W. Hu, W. Xiong and S. J. Maybank (2016) Salient object detection via structured matrix decomposition. IEEE transactions on pattern analysis and machine intelligence 39 (4), pp. 818–832. Cited by: §2.1.1, Table 1.
  181. H. Peng, B. Li, W. Xiong, W. Hu and R. Ji (2014) Rgbd salient object detection: a benchmark and algorithms. In European conference on computer vision, pp. 92–109. Cited by: §2.1.2, Table 1.
  182. S. Peng, Y. Liu, Q. Huang, X. Zhou and H. Bao (2019) PVNet: pixel-wise voting network for 6dof pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4561–4570. Cited by: §3.3.1, Table 6, Table 8.
  183. N. Pereira and L. A. Alexandre (2019) MaskedFusion: mask-based 6d object pose estimation. arXiv preprint arXiv:1911.07771. Cited by: Table 7, Table 8.
  184. Q. Pham, T. Nguyen, B. Hua, G. Roig and S. Yeung (2019) JSIS3D: joint semantic-instance segmentation of 3d point clouds with multi-task pointwise networks and multi-value conditional random fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8827–8836. Cited by: §2.3.2, Table 3.
  185. Q. Pham, M. A. Uy, B. Hua, D. T. Nguyen, G. Roig and S. Yeung (2020) LCD: learned cross-domain descriptors for 2d-3d matching.. In AAAI, pp. 11856–11864. Cited by: §3.1.1, Table 4.
  186. Y. Piao, W. Ji, J. Li, M. Zhang and H. Lu (2019) Depth-induced multi-scale recurrent attention network for saliency detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 7254–7263. Cited by: §2.1.2, Table 1.
  187. P. O. Pinheiro, T. Lin, R. Collobert and P. Dollár (2016) Learning to refine object segments. In European Conference on Computer Vision, pp. 75–91. Cited by: §2.3.1, Table 3.
  188. P. O. Pinheiro, R. Collobert and P. Dollár (2015) Learning to segment object candidates. In Advances in Neural Information Processing Systems, pp. 1990–1998. Cited by: §2.3.1, Table 3.
  189. L. Pinto and A. Gupta (2016) Supersizing self-supervision: learning to grasp from 50k tries and 700 robot hours. In IEEE International Conference on Robotics and Automation (ICRA), pp. 3406–3413. Cited by: §1, §4.1.2, Table 10, Table 9.
  190. J. Ponce, S. Sullivan, J. Boissonnat and J. Merlet (1993) On characterizing and computing three-and four-finger force-closure grasps of polyhedral objects. In IEEE International Conference on Robotics and Automation, pp. 821–827. Cited by: §4.2.2.
  191. C. R. Qi, X. Chen, O. Litany and L. J. Guibas (2020) Imvotenet: boosting 3d object detection in point clouds with image votes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4404–4413. Cited by: §2.2.2, Table 2.
  192. C. R. Qi, O. Litany, K. He and L. J. Guibas (2019) Deep hough voting for 3d object detection in point clouds. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9277–9286. Cited by: §2.2.2, Table 2.
  193. C. R. Qi, W. Liu, C. Wu, H. Su and L. J. Guibas (2018) Frustum pointnets for 3d object detection from rgb-d data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 918–927. Cited by: §2.2.2, §2.2.2, Table 2.
  194. C. R. Qi, H. Su, K. Mo and L. J. Guibas (2017) Pointnet: deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660. Cited by: §2.2.2, §4.2.1.
  195. C. R. Qi, L. Yi, H. Su and L. J. Guibas (2017) Pointnet++: deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems, pp. 5099–5108. Cited by: §4.2.1, §4.2.1.
  196. Q. Qi, S. Zhao, J. Shen and K. Lam (2019) Multi-scale capsule attention-based salient object detection with multi-crossed layer connections. In 2019 IEEE International Conference on Multimedia and Expo (ICME), pp. 1762–1767. Cited by: §2.1.1, Table 1.
  197. Y. Qin, R. Chen, H. Zhu, M. Song, J. Xu and H. Su (2020) S4g: amodal single-view single-shot se (3) grasp detection in cluttered scenes. In Conference on Robot Learning, pp. 53–65. Cited by: §1, §4.2.1, §4.2.1, Table 12, Table 13.
  198. L. Qu, S. He, J. Zhang, J. Tian, Y. Tang and Q. Yang (2017) RGBD salient object detection via deep fusion. IEEE Transactions on Image Processing 26 (5), pp. 2274–2285. Cited by: §2.1.2, Table 1.
  199. T. Rabbani and F. Van Den Heuvel (2005) Efficient hough transform for automatic detection of cylinders in point clouds. Isprs Wg Iii/3, Iii/4 3, pp. 60–65. Cited by: §2.1.2, Table 1.
  200. M. Rad and V. Lepetit (2017) BB8: a scalable, accurate, robust to partial occlusion method for predicting the 3d poses of challenging objects without using depth. In IEEE International Conference on Computer Vision, pp. 3828–3836. Cited by: §3.1.1, Table 4, Table 8.
  201. J. Redmon and A. Angelova (2015) Real-time grasp detection using convolutional neural networks. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 1316–1322. Cited by: §1, §4.1.2, Table 11, Table 9.
  202. J. Redmon, S. Divvala, R. Girshick and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788. Cited by: §2.2.1, Table 2, §4.1.2.
  203. J. Redmon and A. Farhadi (2017) YOLO9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7263–7271. Cited by: §2.2.1, Table 2.
  204. J. Redmon and A. Farhadi (2018) Yolov3: an incremental improvement. arXiv preprint arXiv:1804.02767. Cited by: §2.2.1, Table 2.
  205. J. Ren, X. Gong, L. Yu, W. Zhou and M. Ying Yang (2015) Exploiting global priors for rgb-d saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 25–32. Cited by: §2.1.2, Table 1.
  206. S. Ren, K. He, R. Girshick and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §2.2.1, §2.2.2, §2.3.1, Table 2, §4.1.2.
  207. C. Rennie, R. Shome, K. E. Bekris and A. F. De Souza (2016) A dataset for improved rgbd-based object detection and pose estimation for warehouse pick-and-place. IEEE Robotics and Automation Letters 1 (2), pp. 1179–1185. Cited by: §3.4.1.
  208. E. Rosten and T. Drummond (2005) Fusing points and lines for high performance tracking. In Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1, Vol. 2, pp. 1508–1515. Cited by: §2.2.1, Table 2, §3.1.1, Table 4.
  209. E. Rublee, V. Rabaud, K. Konolige and G. Bradski (2011) ORB: an efficient alternative to sift or surf. In 2011 International conference on computer vision, pp. 2564–2571. Cited by: §2.2.1, Table 2, §3.1.1, Table 4.
  210. R. B. Rusu, N. Blodow and M. Beetz (2009-05) Fast point feature histograms (fpfh) for 3d registration. In IEEE International Conference on Robotics and Automation, pp. 3212–3217. External Links: ISSN 1050-4729 Cited by: §2.2.2, Table 2, §3.1.2, Table 4.
  211. R. B. Rusu, N. Blodow, Z. C. Marton and M. Beetz (2009) Close-range scene segmentation and reconstruction of 3d point cloud maps for mobile manipulation in domestic environments. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1–6. Cited by: §2.1.2, §2.1.2, Table 1.
  212. S. Sabour, N. Frosst and G. E. Hinton (2017) Dynamic routing between capsules. In Advances in neural information processing systems, pp. 3856–3866. Cited by: §2.1.1.
  213. S. Sabour, N. Frosst and G. Hinton (2018) Matrix capsules with em routing. In 6th international conference on learning representations, ICLR, pp. 1–15. Cited by: §2.1.1.
  214. A. Sahbani, S. El-Khoury and P. Bidaud (2012) An overview of 3d object grasp synthesis algorithms. Robotics and Autonomous Systems 60 (3), pp. 326 – 336. Note: Autonomous Grasping External Links: ISSN 0921-8890 Cited by: §1, §1, §4.2.2.
  215. S. S. Sajjan, M. Moore, M. Pan, G. Nagaraja, J. Lee, A. Zeng and S. Song (2019) ClearGrasp: 3d shape estimation of transparent objects for manipulation. arXiv preprint arXiv:1910.02550. Cited by: §4.2.2, §4.2.2, Table 12, §5.
  216. S. Salti, F. Tombari and L. D. Stefano (2014) SHOT: unique signatures of histograms for surface and texture description. Computer Vision and Image Understanding 125, pp. 251 – 264. External Links: ISSN 1077-3142 Cited by: §2.2.2, Table 2, §3.1.2, Table 4.
  217. J. Sanchez, J. Corrales, B. Bouzgarrou and Y. Mezouar (2018) Robotic manipulation and sensing of deformable objects in domestic and industrial applications: a survey. The International Journal of Robotics Research 37 (7), pp. 688–716. Cited by: §1.
  218. V. Sarode, X. Li, H. Goforth, Y. Aoki, A. Dhagat, R. A. Srivatsan, S. Lucey and H. Choset (2019) One framework to register them all: pointnet encoding for point cloud alignment. arXiv preprint arXiv:1912.05766. Cited by: §3.2.2, Table 5.
  219. V. Sarode, X. Li, H. Goforth, Y. Aoki, R. A. Srivatsan, S. Lucey and H. Choset (2019) PCRNet: point cloud registration network using pointnet encoding. arXiv preprint arXiv:1908.07906. Cited by: §3.2.2, Table 5.
  220. A. Saxena, J. Driemeyer, J. Kearns, C. Osondu and A. Y. Ng (2008) Learning to grasp novel objects using vision. In Experimental Robotics, pp. 33–42. Cited by: Table 10.
  221. A. Saxena, J. Driemeyer and A. Y. Ng (2008) Robotic grasping of novel objects using vision. The International Journal of Robotics Research 27 (2), pp. 157–173. Cited by: Table 10.
  222. P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus and Y. LeCun (2013) Overfeat: integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229. Cited by: §2.2.1, Table 2.
  223. J. Shi, Q. Yan, L. Xu and J. Jia (2015) Hierarchical image saliency detection on extended cssd. IEEE transactions on pattern analysis and machine intelligence 38 (4), pp. 717–729. Cited by: §2.1.1, Table 1.
  224. S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang and H. Li (2020) Pv-rcnn: point-voxel feature set abstraction for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10529–10538. Cited by: §2.2.2, Table 2.
  225. S. Shi, X. Wang and H. Li (2019) Pointrcnn: 3d object proposal generation and detection from point cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–779. Cited by: §2.2.2, Table 2.
  226. S. Shi, Z. Wang, J. Shi, X. Wang and H. Li (2020) From points to parts: 3d object detection from point cloud with part-aware and part-aggregation network. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §2.2.2, Table 2.
  227. W. Shi and R. Rajkumar (2020) Point-gnn: graph neural network for 3d object detection in a point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1711–1719. Cited by: §2.2.2, Table 2.
  228. M. Simon, K. Fischer, S. Milz, C. T. Witt and H. Gross (2020) StickyPillars: robust feature matching on point clouds using graph neural networks. arXiv preprint arXiv:2002.03983. Cited by: §3.1.2, Table 4.
  229. C. Song, J. Song and Q. Huang (2020) Hybridpose: 6d object pose estimation under hybrid representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 431–440. Cited by: §3.1.1, Table 4, Table 8.
  230. S. Song and J. Xiao (2014) Sliding shapes for 3d object detection in depth images. In European conference on computer vision, pp. 634–651. Cited by: §2.2.2, Table 2.
  231. S. Song and J. Xiao (2016) Deep sliding shapes for amodal 3d object detection in rgb-d images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 808–816. Cited by: §2.2.2, §2.2.2, Table 2.
  232. F. Sultana, A. Sufian and P. Dutta (2020) A review of object detection models based on convolutional neural network. In Intelligent Computing: Image Processing Based Applications, pp. 1–16. Cited by: §2.2.1.
  233. F. Sultana, A. Sufian and P. Dutta (2020) Evolution of image segmentation using deep convolutional neural network: a survey. arXiv preprint arXiv:2001.04074. Cited by: §2.3.1.
  234. M. Sundermeyer, Z. Marton, M. Durner, M. Brucker and R. Triebel (2018) Implicit 3d orientation learning for 6d object detection from rgb images. In European Conference on Computer Vision., pp. 712–729. Cited by: §3.2.1, Table 5, Table 8.
  235. K. Suzuki, Y. Yokota, Y. Kanazawa and T. Takebayashi (2020) Online self-supervised learning for object picking: detecting optimum grasping position using a metric learning approach. In 2020 IEEE/SICE International Symposium on System Integration (SII), pp. 205–212. Cited by: §5.
  236. C. Szegedy, S. Reed, D. Erhan, D. Anguelov and S. Ioffe (2014) Scalable, high-quality object detection. arXiv preprint arXiv:1412.1441. Cited by: §2.2.1, Table 2.
  237. G. K. Tam, Z. Cheng, Y. Lai, F. C. Langbein, Y. Liu, D. Marshall, R. R. Martin, X. Sun and P. L. Rosin (2013) Registration of 3d point clouds and meshes: a survey from rigid to nonrigid. IEEE Transactions on Visualization and Computer Graphics 19 (7), pp. 1199–1217. Cited by: §3.2.2, §3.4.
  238. A. Tejani, D. Tang, R. Kouskouridas and T. Kim (2014) Latent-class hough forests for 3d object detection and pose estimation. In European Conference on Computer Vision, pp. 462–477. Cited by: §3.3.2, §3.4.1, Table 6.
  239. B. Tekin, S. N. Sinha and P. Fua (2018) Real-time seamless single shot 6d object pose prediction. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 292–301. Cited by: §3.1.1, Table 4, Table 8.
  240. A. ten Pas, M. Gualtieri, K. Saenko and R. Platt (2017-12) Grasp pose detection in point clouds. Int. J. Rob. Res. 36 (13-14), pp. 1455–1473. External Links: ISSN 0278-3649 Cited by: §1, §4.2.1, §4.2.1, §4.2.1, Table 12, Table 13.
  241. H. Tian, C. Wang, D. Manocha and X. Zhang (2019) Transferring grasp configurations using active learning and local replanning. In 2019 International Conference on Robotics and Automation (ICRA), pp. 1622–1628. Cited by: §4.2.1, Table 12.
  242. M. Tian, L. Pan, M. H. Ang Jr and G. H. Lee (2020) Robust 6d object pose estimation by learning rgb-d features. arXiv preprint arXiv:2003.00188. Cited by: §3.2.1, Table 5, Table 8.
  243. Z. Tian, C. Shen, H. Chen and T. He (2019) Fcos: fully convolutional one-stage object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9627–9636. Cited by: §2.2.1, §2.3.1, Table 2.
  244. T. Tosun, D. Yang, B. Eisner, V. Isler and D. Lee (2020) Robotic grasping through combined image-based grasp proposal and 3d reconstruction. arXiv preprint arXiv:2003.01649. Cited by: §4.2.2, §4.2.2, Table 12.
  245. J. Tremblay, T. To, B. Sundaralingam, Y. Xiang, D. Fox and S. Birchfield (2018) Deep object pose estimation for semantic robotic grasping of household objects. arXiv preprint arXiv:1809.10790. Cited by: §5.
  246. P. Truong, S. Apostolopoulos, A. Mosinska, S. Stucky, C. Ciller and S. D. Zanet (2019) GLAMpoints: greedily learned accurate match points. In Proceedings of the IEEE International Conference on Computer Vision, pp. 10732–10741. Cited by: §3.1.1.
  247. J. R. Uijlings, K. E. Van De Sande, T. Gevers and A. W. Smeulders (2013) Selective search for object recognition. International journal of computer vision 104 (2), pp. 154–171. Cited by: §2.2.1.
  248. L. Vacchetti, V. Lepetit and P. Fua (2004) Stable real-time 3d tracking using online and offline information. IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (10), pp. 1385–1391. Cited by: §3.1.1.
  249. N. Vahrenkamp, L. Westkamp, N. Yamanobe, E. E. Aksoy and T. Asfour (2016) Part-based grasp planning for familiar objects. In IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pp. 919–925. Cited by: §4.2.1, Table 12.
  250. M. Van der Merwe, Q. Lu, B. Sundaralingam, M. Matak and T. Hermans (2019) Learning continuous 3d reconstructions for geometrically aware grasping. arXiv preprint arXiv:1910.00983. Cited by: §4.2.2, §4.2.2, Table 12.
  251. J. Varley, C. DeChant, A. Richardson, J. Ruales and P. Allen (2017) Shape completion enabled robotic grasping. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2442–2447. Cited by: §4.2.2, §4.2.2, Table 12.
  252. J. Vidal, C. Lin and R. Martí (2018-04) 6D pose estimation using an improved method based on point pair features. In 4th International Conference on Control, Automation and Robotics (ICCAR), pp. 405–409. Cited by: §3.3.2.
  253. V. Villena-Martinez, S. Oprea, M. Saval-Calvo, J. Azorin-Lopez, A. Fuster-Guillo and R. B. Fisher (2020) When deep learning meets data alignment: a review on deep registration networks (drns). arXiv preprint arXiv:2003.03167. Cited by: §3.2.2.
  254. M. Vohra, R. Prakash and L. Behera (2019) Real-time grasp pose estimation for novel objects in densely cluttered environment. In 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 1–6. Cited by: §4.1.2, Table 9.
  255. K. Wada, E. Sucar, S. James, D. Lenton and A. J. Davison (2020) MoreFusion: multi-object reasoning for 6d pose estimation from volumetric fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14540–14549. Cited by: §3.3.2, Table 6.
  256. C. Wang, R. Martín-Martín, D. Xu, J. Lv, C. Lu, L. Fei-Fei, S. Savarese and Y. Zhu (2019) 6-PACK: category-level 6d pose tracker with anchor-based keypoints. arXiv preprint arXiv:1910.10750. Cited by: §3.3.1, Table 6, §5.
  257. C. Wang, D. Xu, Y. Zhu, R. Martín-Martín, C. Lu, L. Fei-Fei and S. Savarese (2019) DenseFusion: 6d object pose estimation by iterative dense fusion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3343–3352. Cited by: §1, §3.3.2, Table 6, Table 7, Table 8, §4.2.2.
  258. H. Wang, S. Sridhar, J. Huang, J. Valentin, S. Song and L. J. Guibas (2019) Normalized object coordinate space for category-level 6d object pose and size estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2642–2651. Cited by: §3.2.1, §3.4.2, Table 5, §5.
  259. S. Wang, J. Wu, X. Sun, W. Yuan, W. T. Freeman, J. B. Tenenbaum and E. H. Adelson (2018) 3d shape perception from monocular vision, touch, and shape priors. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1606–1613. Cited by: §4.2.2, §4.2.2, Table 12.
  260. S. Wang, X. Jiang, J. Zhao, X. Wang, W. Zhou and Y. Liu (2019) Efficient fully convolution neural network for generating pixel wise robotic grasps with high resolution images. In 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 474–480. Cited by: §4.1.1, Table 11, Table 9.
  261. W. Wang, R. Yu, Q. Huang and U. Neumann (2018) SGPN: similarity group proposal network for 3d point cloud instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2569–2578. Cited by: §2.3.2, Table 3.
  262. W. Wang, Q. Lai, H. Fu, J. Shen and H. Ling (2019) Salient object detection in the deep learning era: an in-depth survey. arXiv preprint arXiv:1904.09146. Cited by: §2.1.1.
  263. W. Wang, J. Shen, L. Shao and F. Porikli (2016) Correspondence driven saliency transfer. IEEE Transactions on Image Processing 25 (11), pp. 5025–5034. Cited by: §2.1.1, Table 1.
  264. X. Wang, T. Kong, C. Shen, Y. Jiang and L. Li (2019) SOLO: segmenting objects by locations. arXiv preprint arXiv:1912.04488. Cited by: §2.3.1, Table 3.
  265. X. Wang, S. Liu, X. Shen, C. Shen and J. Jia (2019) Associatively segmenting instances and semantics in point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4096–4105. Cited by: §2.3.2, Table 3.
  266. Y. Wang and J. M. Solomon (2019) Deep closest point: learning representations for point cloud registration. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3523–3532. Cited by: §3.2.2, Table 5.
  267. Y. Wang and J. M. Solomon (2019) PRNet: self-supervised learning for partial-to-partial registration. In Advances in Neural Information Processing Systems, pp. 8812–8824. Cited by: §3.2.2, Table 5.
  268. Z. Wang and K. Jia (2019) Frustum ConvNet: sliding frustums to aggregate local point-wise features for amodal 3d object detection. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1742–1749. Cited by: §2.2.2, §2.2.2, Table 2.
  269. D. Watkins-Valls, J. Varley and P. Allen (2019) Multi-modal geometric learning for grasping and manipulation. In 2019 International Conference on Robotics and Automation (ICRA), pp. 7339–7345. Cited by: §4.2.2, §4.2.2, Table 12.
  270. Y. Wei, F. Wen, W. Zhu and J. Sun (2012) Geodesic saliency using background priors. In European conference on computer vision, pp. 29–42. Cited by: §2.1.1, Table 1.
  271. J. M. Wong, V. Kee, T. Le, S. Wagner, G. Mariottini, A. Schneider, L. Hamilton, R. Chipalkatty, M. Hebert and D. M. Johnson (2017) SegICP: integrated deep semantic segmentation and pose estimation. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5784–5789. Cited by: §2.3.1, §4.2.2.
  272. Y. Xiang, T. Schmidt, V. Narayanan and D. Fox (2018) PoseCNN: a convolutional neural network for 6d object pose estimation in cluttered scenes. Robotics: Science and Systems (RSS). Cited by: §3.2.1, §3.4.1, §3.4.1, §3.4.1, §3.4.1, Table 5, Table 7, Table 8, §4.2.1.
  273. C. Xie, Y. Xiang, A. Mousavian and D. Fox (2020) The best of both modes: separately leveraging rgb and depth for unseen object instance segmentation. In Conference on Robot Learning, pp. 1369–1378. Cited by: §2.3.1.
  274. E. Xie, P. Sun, X. Song, W. Wang, X. Liu, D. Liang, C. Shen and P. Luo (2020) PolarMask: single shot instance segmentation with polar representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12193–12202. Cited by: §2.3.1, Table 3.
  275. Q. Xie, Y. Lai, J. Wu, Z. Wang, Y. Zhang, K. Xu and J. Wang (2020) MLCVNet: multi-level context votenet for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10447–10456. Cited by: §2.2.2, Table 2.
  276. D. Xu, D. Anguelov and A. Jain (2018-06) PointFusion: deep sensor fusion for 3d bounding box estimation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Cited by: §2.2.2, §2.2.2, Table 2, Table 7.
  277. Z. Xue, A. Kasper, J. M. Zoellner and R. Dillmann (2009) An automatic grasp planning system for service robots. In 2009 International Conference on Advanced Robotics, pp. 1–6. Cited by: §4.2.2.
  278. X. Yan, J. Hsu, M. Khansari, Y. Bai, A. Pathak, A. Gupta, J. Davidson and H. Lee (2018) Learning 6-dof grasping interaction via deep geometry-aware 3d representations. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–9. Cited by: §4.2.2, §4.2.2, Table 12.
  279. X. Yan, M. Khansari, J. Hsu, Y. Gong, Y. Bai, S. Pirk and H. Lee (2019) Data-efficient learning for sim-to-real robotic grasping using deep point cloud prediction networks. arXiv preprint arXiv:1906.08989. Cited by: §4.2.2, §4.2.2, Table 12.
  280. Y. Yan, Y. Mao and B. Li (2018) SECOND: sparsely embedded convolutional detection. Sensors 18 (10), pp. 3337. Cited by: §2.2.2, Table 2.
  281. B. Yang, J. Wang, R. Clark, Q. Hu, S. Wang, A. Markham and N. Trigoni (2019) Learning object bounding boxes for 3d instance segmentation on point clouds. In Advances in Neural Information Processing Systems, pp. 6737–6746. Cited by: §2.3.2, Table 3.
  282. C. Yang, L. Zhang, H. Lu, X. Ruan and M. Yang (2013) Saliency detection via graph-based manifold ranking. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3166–3173. Cited by: §2.1.1, Table 1.
  283. H. Yang, J. Shi and L. Carlone (2020) TEASER: fast and certifiable point cloud registration. arXiv preprint arXiv:2001.07715. Cited by: §3.2.2, Table 5.
  284. J. Yang, H. Li, D. Campbell and Y. Jia (2015) Go-ICP: a globally optimal solution to 3d ICP point-set registration. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (11), pp. 2241–2254. Cited by: §3.2.2, Table 5.
  285. S. Yang, W. Zhang, W. Lu, H. Wang and Y. Li (2019) Learning actions from human demonstration video for robotic manipulation. arXiv preprint arXiv:1909.04312. Cited by: §4.2.1.
  286. Z. Yang, Y. Sun, S. Liu and J. Jia (2020) 3DSSD: point-based 3d single stage object detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11040–11048. Cited by: §2.2.2, Table 2.
  287. Z. Yang, Y. Sun, S. Liu, X. Shen and J. Jia (2019) STD: sparse-to-dense 3d object detector for point cloud. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1951–1960. Cited by: §2.2.2, Table 2.
  288. M. Ye, S. Xu and T. Cao (2020) HVNet: hybrid voxel network for lidar based 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1631–1640. Cited by: §2.2.2, Table 2.
  289. Z. J. Yew and G. H. Lee (2018) 3DFeat-net: weakly supervised local 3d features for point cloud registration. In European Conference on Computer Vision, pp. 630–646. Cited by: §3.1.2, Table 4.
  290. K. M. Yi, E. Trulls, V. Lepetit and P. Fua (2016) LIFT: learned invariant feature transform. In European Conference on Computer Vision, pp. 467–483. Cited by: §3.1.1.
  291. L. Yi, W. Zhao, H. Wang, M. Sung and L. J. Guibas (2019) GSPN: generative shape proposal network for 3d instance segmentation in point cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3947–3956. Cited by: §2.3.2, Table 3.
  292. Y. Yokota, K. Suzuki, Y. Kanazawa and T. Takebayashi (2020) A multi-task learning framework for grasping-position detection and few-shot classification. In 2020 IEEE/SICE International Symposium on System Integration (SII), pp. 1033–1039. Cited by: §5.
  293. F. Yu, K. Liu, Y. Zhang, C. Zhu and K. Xu (2019) PartNet: a recursive part decomposition network for fine-grained and hierarchical shape segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9491–9500. Cited by: §5.
  294. P. Yu, Y. Rao, J. Lu and J. Zhou (2019) Pgnet: pose-guided point cloud generating networks for 6-dof object pose estimation. arXiv preprint arXiv:1912.09316. Cited by: Table 8.
  295. X. Yu, Z. Zhuang, P. Koniusz and H. Li (2020) 6DoF object pose estimation via differentiable proxy voting loss. arXiv preprint arXiv:2002.03923. Cited by: §1, §3.3.1, Table 6, Table 8.
  296. Y. Yuan, J. Hou, A. Nüchter and S. Schwertfeger (2020) Self-supervised point set local descriptors for point cloud registration. arXiv preprint arXiv:2003.05199. Cited by: §3.1.2, Table 4.
  297. S. Zakharov, I. Shugurov and S. Ilic (2019) DPOD: 6d pose object detector and refiner. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1941–1950. Cited by: §3.1.1, Table 4, Table 8.
  298. B. S. Zapata-Impata, P. Gil, J. Pomares and F. Torres (2019) Fast geometry-based computation of grasping points on three-dimensional point clouds. International Journal of Advanced Robotic Systems 16 (1), pp. 1729881419831846. Cited by: §2.1.2, Table 1, §4.2.1, §4.2.1, Table 12.
  299. B. S. Zapata-Impata, C. Mateo Agulló, P. Gil and J. Pomares (2017) Using geometry to detect grasping points on 3d unknown point cloud. Cited by: §4.2.1.
  300. A. Zeng, S. Song, M. Nießner, M. Fisher, J. Xiao and T. Funkhouser (2017) 3DMatch: learning local geometric descriptors from rgb-d reconstructions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1802–1811. Cited by: §3.1.2, Table 4.
  301. A. Zeng, S. Song, K. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu and E. Romo (2018) Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching. In IEEE International Conference on Robotics and Automation (ICRA), pp. 1–8. Cited by: §1, §4.1.1, Table 9.
  302. A. Zeng, K. Yu, S. Song, D. Suo, E. Walker, A. Rodriguez and J. Xiao (2017) Multi-view self-supervised deep learning for 6d pose estimation in the amazon picking challenge. In IEEE International Conference on Robotics and Automation (ICRA), pp. 1386–1383. Cited by: §1, §4.2.2, Table 12, §5.
  303. F. Zhang, C. Guan, J. Fang, S. Bai, R. Yang, P. Torr and V. Prisacariu (2020) Instance segmentation of lidar point clouds. In IEEE International Conference on Robotics and Automation (ICRA). Cited by: §2.3.2, Table 3.
  304. H. Zhang, X. Lan, S. Bai, L. Wan, C. Yang and N. Zheng (2018) A multi-task convolutional neural network for autonomous robotic grasping in object stacking scenes. arXiv preprint arXiv:1809.07081. Cited by: §4.1.2.
  305. H. Zhang, X. Lan, S. Bai, X. Zhou, Z. Tian and N. Zheng (2018) ROI-based robotic grasp detection for object overlapping scenes. arXiv preprint arXiv:1808.10313. Cited by: §4.1.2, Table 9.
  306. J. Zhang, S. Sclaroff, Z. Lin, X. Shen, B. Price and R. Mech (2016) Unconstrained salient object detection via proposal subset optimization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5733–5742. Cited by: §2.1.1, Table 1.
  307. Q. Zhang, D. Qu, F. Xu and F. Zou (2017) Robust robot grasp detection in multimodal fusion. In MATEC Web of Conferences, Vol. 139, pp. 00060. Cited by: §1, §4.1.2, Table 11, Table 9.
  308. Z. Zhang, B. Sun, H. Yang and Q. Huang (2020) H3DNet: 3d object detection using hybrid geometric primitives. In Proceedings of the European Conference on Computer Vision (ECCV). Cited by: §2.2.2, Table 2.
  309. B. Zhao, H. Zhang, X. Lan, H. Wang, Z. Tian and N. Zheng (2020) REGNet: region-based grasp network for single-shot grasp detection in point clouds. arXiv preprint arXiv:2002.12647. Cited by: §1, §4.2.1, §4.2.1, §4.2.1, Table 12, Table 13.
  310. L. Zhao and W. Tao (2020) JSNet: joint instance and semantic segmentation of 3d point clouds. In Thirty-Fourth AAAI Conference on Artificial Intelligence. Cited by: §2.3.2, Table 3.
  311. R. Zhao, W. Ouyang, H. Li and X. Wang (2015) Saliency detection by multi-context deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1265–1274. Cited by: §2.1.1, Table 1.
  312. S. Zhao, B. Li, P. Xu and K. Keutzer (2020) Multi-source domain adaptation in the deep learning era: a systematic survey. arXiv preprint arXiv:2002.12169. Cited by: §5.
  313. Z. Zhao, P. Zheng, S. Xu and X. Wu (2019) Object detection with deep learning: a review. IEEE Transactions on Neural Networks and Learning Systems 30 (11), pp. 3212–3232. Cited by: §2.2.1.
  314. T. Zheng, C. Chen, J. Yuan, B. Li and K. Ren (2019) Pointcloud saliency maps. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1598–1606. Cited by: §2.1.2.
  315. Q. Zhou, J. Park and V. Koltun (2016) Fast global registration. In European Conference on Computer Vision, pp. 766–782. Cited by: §3.2.2.
  316. X. Zhou, D. Wang and P. Krähenbühl (2019) Objects as points. arXiv preprint arXiv:1904.07850. Cited by: §2.2.1, Table 2.
  317. X. Zhou, J. Zhuo and P. Krahenbuhl (2019) Bottom-up object detection by grouping extreme and center points. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 850–859. Cited by: §2.2.1, Table 2.
  318. X. Zhou, X. Lan, H. Zhang, Z. Tian, Y. Zhang and N. Zheng (2018) Fully convolutional grasp detection network with oriented anchor box. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7223–7230. Cited by: §1, §4.1.2, Table 11, Table 9.
  319. Y. Zhou and O. Tuzel (2018) VoxelNet: end-to-end learning for point cloud based 3d object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4490–4499. Cited by: §2.2.2, Table 2.
  320. Z. Zhou, T. Pan, S. Wu, H. Chang and O. C. Jenkins (2019) GlassLoc: plenoptic grasp pose detection in transparent clutter. arXiv preprint arXiv:1909.04269. Cited by: §5.
  321. A. Zhu, J. Yang, C. Zhao, K. Xian, Z. Cao and X. Li (2020) LRF-Net: learning local reference frames for 3d local shape description and matching. arXiv preprint arXiv:2001.07832. Cited by: §1.
  322. W. Zhu, S. Liang, Y. Wei and J. Sun (2014) Saliency optimization from robust background detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2814–2821. Cited by: §2.1.1, Table 1.
  323. Z. Zou, Z. Shi, Y. Guo and J. Ye (2019) Object detection in 20 years: a survey. arXiv preprint arXiv:1905.05055. Cited by: §2.2.1.