Dex-Net 3.0: Computing Robust Vacuum Suction Grasp Targets in Point Clouds using a New Analytic Model and Deep Learning

Jeffrey Mahler, Matthew Matl, Xinyu Liu, Albert Li, David Gealy, Ken Goldberg Dept. of Electrical Engineering and Computer Science; Dept. of Industrial Operations and Engineering Research; AUTOLAB University of California, Berkeley, USA {jmahler, mmatl, xinyuliu, alberthli, dgealy, goldberg}@berkeley.edu
Abstract

Vacuum-based end effectors are widely used in industry and are often preferred over parallel-jaw and multifinger grippers due to their ability to lift objects with a single point of contact. Suction grasp planners often target planar surfaces on point clouds near the estimated centroid of an object. In this paper, we propose a compliant suction contact model that computes the quality of the seal between the suction cup and local target surface and a measure of the ability of the suction grasp to resist an external gravity wrench. To characterize grasps, we estimate robustness to perturbations in end-effector and object pose, material properties, and external wrenches. We analyze grasps across 1,500 3D object models to generate Dex-Net 3.0, a dataset of 2.8 million point clouds, suction grasps, and grasp robustness labels. We use Dex-Net 3.0 to train a Grasp Quality Convolutional Neural Network (GQ-CNN) to classify robust suction targets in point clouds containing a single object. We evaluate the resulting system in 350 physical trials on an ABB YuMi fitted with a pneumatic suction gripper. When evaluated on novel objects that we categorize as Basic (prismatic or cylindrical), Typical (more complex geometry), and Adversarial (with few available suction-grasp points), Dex-Net 3.0 achieves success rates of 98%, 82%, and 58%, respectively, improving to 81% in the latter case when the training set includes only adversarial objects. Code, datasets, and supplemental material can be found at http://berkeleyautomation.github.io/dex-net.


I Introduction

Suction grasping is widely used for pick-and-place tasks in industry and warehouse order fulfillment. As shown in the Amazon Picking Challenge, suction has an advantage over parallel-jaw or multifinger grasping due to its ability to reach into narrow spaces and pick up objects with a single point of contact. However, while a substantial body of research exists on parallel-jaw and multifinger grasp planning [3], comparatively little research has been published on planning suction grasps.

While grasp planning searches for gripper configurations that maximize a quality metric derived from mechanical wrench space analysis [24], human labels [28], or self-supervised labels [20], suction grasps are often planned directly on point clouds using heuristics such as grasping near the object centroid [10] or at the center of planar surfaces [4, 5]. These heuristics work well for prismatic objects such as boxes and cylinders but may fail on objects with non-planar surfaces near the object centroid, which is common for industrial parts and household objects such as staplers or children’s toys. Analytic models of suction cups for grasp planning exist, but they typically assume that a vacuum seal has already been formed and that the state (e.g. shape and pose) of the object is perfectly known [2, 17, 23]. In practice a robot may need to form seals on non-planar surfaces while being robust to external wrenches (e.g. gravity and disturbances), sensor noise, control imprecision, and calibration errors, which are significant factors when planning grasps from point clouds.

Fig. 1: The quasi-static spring model, C, used for determining when seal formation is feasible. The model contains three types of springs – perimeter, flexion, and cone springs. An initial state for C is chosen given a target point p and an approach direction v. Then, a contact state for C is computed so that C's perimeter springs form a complete seal against object mesh M. Seal formation is deemed feasible if the energy required to maintain this contact state is sufficiently low in every spring.

We propose a novel compliant suction contact model for rigid, non-porous objects that consists of two components: (1) a test for whether a seal can be formed between a suction cup and a target object surface and (2) an analysis of the ability of the suction contact to resist external wrenches. We use the model to evaluate grasp robustness by analyzing seal formation and wrench resistance under perturbations in object pose, suction tip pose, material properties, and disturbing wrenches using Monte-Carlo sampling similar to that in the Dexterity Network (Dex-Net) 1.0 [22].

This paper makes four contributions:

  1. A compliant suction contact model that quantifies seal formation using a quasi-static spring system and the ability to resist external wrenches (e.g. gravity) using a contact wrench basis derived from the ring of contact between the cup and object surface.

  2. Robust wrench resistance: a robust version of the above model under random disturbing wrenches and perturbations in object pose, gripper pose, and friction.

  3. Dex-Net 3.0, a dataset of 2.8 million synthetic point clouds annotated with suction grasps and grasp robustness labels generated by analyzing robust wrench resistance for approximately 375k grasps across 1,500 object models.

  4. Physical robot experiments measuring the precision of robust wrench resistance both with and without knowledge of the target object’s shape and pose.

We perform physical experiments using an ABB YuMi robot with a silicone suction cup tip to compare the precision of a GQ-CNN-based grasping policy trained on Dex-Net 3.0 with several heuristics such as targeting planar surfaces near object centroids. We find that the method achieves success rates of 98%, 82%, and 58% on datasets of Basic (prismatic or cylindrical), Typical (more complex geometry), and Adversarial (with few available suction-grasp points) objects, respectively.

II Related Work

End-effectors based on suction are widely used in industrial applications such as warehouse order fulfillment, handling limp materials such as fabric [17], and robotics applications such as the Amazon Picking Challenge [4], underwater manipulation [29], or wall climbing [2]. Our method builds on models of deformable materials, analyses of the wrenches that suction cups can exert, and data-driven grasp planning.

II-A Suction Models

Several models for the deformation of stiff rubber-like materials exist in the literature. Provot et al. [26] modeled sheets of stiff cloth and rubber using a spring-mass system with several types of springs. Hosseini et al. [1] provided a survey of more modern constitutive models of rubber that are often used in Finite Element Analysis (FEA) packages for more realistic physics simulations. In order to rapidly evaluate whether a suction cup can form a seal against an object’s surface, we model the cup as a quasi-static spring system with a topology similar to the one in [26] and estimate the deformation energy required to maintain a seal.

In addition, several models have been developed to check for static equilibrium assuming a seal between the suction cup and the object’s surface. Most models consider the suction cup to be a rigid object and model forces into the object along the surface normal, tangential forces due to surface friction, and pulling forces due to suction [17, 29, 31]. Bahr et al. [2] augmented this model with the ability to resist moments about the center of suction to determine the amount of vacuum pressure necessary to keep a climbing robot attached to a vertical wall. Mantriota [23] modeled torsional friction due to a contact area between the cup and object similar to the soft finger contact model used in grasping [14]. Our model extends these methods by combining models of torsional friction [23] and contact moments [2] in a compliant model of the ring of contact between the cup and object.

II-B Grasp Planning

The goal of grasp planning is to select a configuration for an end-effector that enables a robot to perform a task via contact with an object while resisting external perturbations [3], which can be arbitrary [7] or task-specific [18]. A common approach is to select a configuration that maximizes a quality metric (or reward) based on wrench space analysis [24], robustness to perturbations [32], or a model learned from human labels [15] or self-supervision [25].

Several similar metrics exist for evaluating suction grasps. One common approach is to evaluate whether or not a set of suction cups can lift an object by applying an upwards force [17, 29, 30, 31]. Domae et al. [5] developed a geometric model to evaluate suction success by convolving target locations in images with a desired suction contact template to assess planarity. Heuristics for planning suction grasps from point clouds have also been used extensively in the Amazon Robotics Challenge. In 2015, Team RBO [6] won by pushing objects from the top or side until suction was achieved, and Team MIT [35] came in second place by suctioning on the centroid of objects with flat surfaces. In 2016, Team Delft [10] won the challenge by approaching the estimated object centroid along the inward surface normal. In 2017, Cartman [morrison2017cartman] won the challenge and ranked suction grasps according to heuristics such as maximizing distance to the segmented object boundary and MIT [zeng2017robotic] performed well using a neural network trained to predict grasp affordance maps from human labeled RGB-D point clouds. In this work, we present a novel metric that evaluates whether a single suction cup can resist external wrenches under perturbations in object / gripper poses, friction coefficient, and disturbing wrenches.

This paper also extends empirical, data-driven approaches to grasp planning based on images and point clouds [3]. A popular approach is to use human labels of graspable regions in RGB-D images [19] or point clouds [15] to learn a grasp detector with computer vision techniques. As labeling may be tedious for humans, an alternative is to automatically collect training data from a physical robot [20, 25]. To reduce the time-cost of data collection, recent research has proposed to generate labels in simulation using physical models of contact [12, 15]. Mahler et al. [21] demonstrated that a GQ-CNN trained on Dex-Net 2.0, a dataset of 6.7 million point clouds, grasps, and quality labels computed with robust quasi-static analysis, could be used to successfully plan parallel-jaw grasps across a wide variety of objects with 99% precision. In this paper, we use a similar approach to generate a dataset of point clouds, grasps, and robustness labels for a suction-based end-effector.

III Problem Statement

Given a point cloud from a depth camera, our goal is to find a robust suction grasp (target point and approach direction) for a robot to lift an object in isolation on a planar worksurface and transport it to a receptacle. We compute the suction grasp that maximizes the probability that the robot can hold the object under gravity and perturbations sampled from a distribution over sensor noise, control imprecision, and random disturbing wrenches.

III-A Assumptions

Our stochastic model makes the following assumptions:

  1. Quasi-static physics (e.g. inertial terms are negligible) with Coulomb friction.

  2. Objects are rigid and made of non-porous material.

  3. Each object is singulated on a planar worksurface in a stable resting pose [8].

  4. A single overhead depth sensor with known intrinsics, position, and orientation relative to the robot.

  5. A vacuum-based end-effector with known geometry and a single disc-shaped suction cup made of linear-elastic material.

III-B Definitions

A robot observes a single-view point cloud or depth image, y, containing a singulated object. The goal is to find the most robust suction grasp u that enables the robot to lift an object and transport it to a receptacle, where grasps are parametrized by a target point p and an approach direction v. Success is measured with a binary grasp reward function R, where R = 1 if the grasp successfully transports the object and R = 0 otherwise.

The robot may not be able to predict the success of suction grasps exactly from point clouds for several reasons. First, the success metric depends on a state x describing the object's geometric, inertial, and material properties and the pose of the object relative to the camera, but the robot does not know the true state due to: (a) noise in the depth image and (b) occlusions due to the single viewpoint. Second, the robot may not have perfect knowledge of external wrenches (forces and torques) on the object due to gravity or external disturbances.

This probabilistic relationship is described by an environment consisting of a grasp success distribution modeling p(R | x, u), the ability of a grasp to resist random disturbing wrenches, and an observation model p(y | x). This model induces a probability of success for each grasp conditioned on the robot's observation:

Definition 1

The robustness of a grasp u given a point cloud y is the probability of grasp success under uncertainty in sensing, control, and disturbing wrenches: Q(u, y) = P(R = 1 | u, y).

Our environment model is described in Sec. V and further details are given in the supplemental file.

III-C Objective

Our ultimate goal is to find a grasp that maximizes robustness given a point cloud, u* = argmax_{u ∈ C} Q(u, y), where C specifies constraints on the set of available grasps, such as collisions or kinematic feasibility. We approximate Q by optimizing the weights θ of a deep Grasp Quality Convolutional Neural Network (GQ-CNN) Q_θ on a training dataset D consisting of reward values, point clouds, and suction grasps sampled from our stochastic model of grasp success. Our optimization objective is to find weights θ that minimize the cross-entropy loss L over D:

θ* = argmin_{θ ∈ Θ} Σ_{(R_i, y_i, u_i) ∈ D} L(R_i, Q_θ(y_i, u_i))    (III.1)

IV Compliant Suction Contact Model

To quantify grasp robustness, we develop a quasi-static spring model of the suction cup material and a model of contact wrenches that the suction cup can apply to the object through a ring of contact on the suction cup perimeter. Under our model, the reward R = 1 if:

  1. A seal is formed between the perimeter of the suction cup and the object surface.

  2. Given a seal, the suction cup is able to resist an external wrench on the object due to gravity and disturbances.

IV-A Seal Formation

A suction cup can lift objects due to an air pressure differential induced across the membrane of the cup by a vacuum generator that forces the object into the cup. If a gap exists between the perimeter of the cup and the object, then air flowing into the gap may reduce the differential and cause the grasp to fail. Therefore, a tight seal between the cup and the target object is important for achieving a successful grasp.

Fig. 2: Our compliant suction contact model. (Left) The quasi-static spring model used in seal formation computations. The suction cup is approximated by n components, where r is the radius of the cup and h is the height of the cup; v_1, ..., v_n are the base vertices and a is the apex. (Right) Wrench basis for the suction ring contact model. The contact exerts a constant pulling force on the object of magnitude V and additionally can push or pull the object along the contact z axis with force f_z. The suction cup material exerts a normal force f_N = f_z + V on the object through a linear pressure distribution on the ring. This pressure distribution induces a friction limit surface bounding the set of possible frictional forces (f_x, f_y) in the tangent plane and the torsional moment τ_z, and also induces torques τ_x and τ_y about the contact x and y axes due to elastic restoring forces in the suction cup material.

To determine when seal formation is possible, we model circular suction cups as a conical spring system C parameterized by real numbers (n, r, h), where n is the number of vertices along the contact ring, r is the radius of the cup, and h is the height of the cup. See Fig. 2 for an illustration.

Rather than performing a computationally expensive dynamic simulation with a spring-mass model to determine when seal formation is feasible, we make simplifying assumptions to evaluate seal formation geometrically. Specifically, we compute a configuration of C that achieves a seal by projecting C onto the surface of the target object's triangular mesh M and evaluate the feasibility of that configuration under quasi-static conditions as a proxy for the dynamic feasibility of seal formation.

In our model, C has two types of springs – structural springs that represent the physical structure of the suction cup and flexion springs that do not correspond to physical structures but instead are used to resist bending along the cup's surface. Dynamic spring-mass systems with similar structures have been used in prior work to model stiff sheets of rubber [26]. The undeformed structural springs of C form a right pyramid with height h and with a base that is a regular n-gon with circumradius r. Let V = {v_1, ..., v_n, a} be the set of vertices of the undeformed right pyramid, where each v_i is a base vertex and a is the pyramid's apex. We define the model's set of springs as follows:

  • Perimeter (Structural) Springs – Springs linking vertex v_i to vertex v_{i+1}, with indices taken modulo n.

  • Cone (Structural) Springs – Springs linking vertex v_i to the apex a.

  • Flexion Springs – Springs linking vertex v_i to vertex v_{i+2}, with indices taken modulo n.

In the model, a complete seal is formed between C and M if and only if each of the perimeter springs of C lies entirely on the surface of M. Given a target mesh M with a target grasp u = (p, v) for the gripper, we choose an initial configuration of C such that C is undeformed and the approach line ℓ passes through p and is orthogonal to the base of C. Then, we make the following assumptions to determine a final static contact configuration of C that forms a complete seal against M (see Fig. 1):

  • The perimeter springs of C must not deviate from the original undeformed regular n-gon when projected onto a plane orthogonal to v. This means that their locations can be computed by projecting them along v from their original locations onto the surface of M.

  • The apex, a, of C must lie on the approach line ℓ and, given the locations of C's base vertices, must also lie at a location that keeps the average distance along v between a and the perimeter vertices equal to h.

See the supplemental file for additional details.

Given this configuration, a seal is feasible if:

  • The cone faces of C do not collide with M during approach or in the contact configuration.

  • The surface of M has no holes within the contact ring traced out by C's perimeter springs.

  • The energy required in each spring to maintain the contact configuration of C is below a real-valued threshold modeling the maximum deformation of the suction cup material against the object surface.

We threshold the energy in individual springs rather than the total energy for C because air gaps are usually caused by local geometry.
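The per-spring test can be sketched in a few lines of numpy. The snippet below is a minimal illustration, assuming the deformed vertex positions have already been computed by projecting the cup onto the object mesh (that projection step is omitted here); the function names and the 10% stretch threshold are illustrative assumptions, not the released implementation.

```python
# A minimal numpy sketch of the per-spring feasibility test, assuming the
# deformed vertex positions have already been computed by projecting the cup
# onto the object mesh. The spring topology follows the description above.
import numpy as np

def spring_pairs(n):
    """Vertex index pairs: perimeter (i, i+1), cone (i, apex), flexion (i, i+2)."""
    perimeter = [(i, (i + 1) % n) for i in range(n)]
    cone = [(i, n) for i in range(n)]        # vertex index n is the apex
    flexion = [(i, (i + 2) % n) for i in range(n)]
    return perimeter + cone + flexion

def seal_feasible(rest_verts, deformed_verts, max_stretch=0.1):
    """Seal is feasible if no spring's length changes by more than max_stretch."""
    n = len(rest_verts) - 1                  # number of base vertices
    for i, j in spring_pairs(n):
        rest = np.linalg.norm(rest_verts[i] - rest_verts[j])
        cur = np.linalg.norm(deformed_verts[i] - deformed_verts[j])
        if abs(cur - rest) / rest > max_stretch:
            return False                     # local air gap likely
    return True
```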

IV-B Wrench Space Analysis

To determine the degree to which the suction cup can resist external wrenches such as gravity, we analyze the set of wrenches that the suction cup can apply.

IV-B1 Wrench Resistance

The object wrench set for a grasp u using a contact model with m basis wrenches is Λ(u) = {w ∈ R^6 | w = Gα for some α ∈ F}, where G ∈ R^{6×m} is a set of basis wrenches in the object coordinate frame, and F ⊆ R^m is a set of constraints on contact wrench magnitudes [24].

Definition 2

A grasp u achieves wrench resistance with respect to a wrench w if w ∈ Λ(u) [18, 24].

We encode wrench resistance as a binary variable W such that W = 1 if u resists w and W = 0 otherwise.

IV-B2 Suction Contact Model

Many suction contact models include normal forces, vacuum forces, tangential friction, and torsional friction [2, 17, 23, 29], similar to a point contact with friction or soft finger model [24]. However, under this model, a single suction cup cannot resist torques about axes in the contact tangent plane, implying that any torque about such axes would cause the suction cup to drop an object (see the supplementary material for a detailed proof). This defies our intuition since empirical evidence suggests that a single point of suction can robustly transport objects [6, 10].

We hypothesize that these torques are resisted through an asymmetric pressure distribution on the ring of contact between the suction cup and object, which occurs due to passive elastic restoring forces in the material. Fig. 2 illustrates the suction ring contact model. The grasp map G is defined by the following basis wrenches:

  1. Actuated Normal Force (f_z): The force that the suction cup material applies by pressing into the object along the contact z axis.

  2. Vacuum Force (V): The magnitude of the constant force pulling the object into the suction cup coming from the air pressure differential.

  3. Frictional Force (f_f = (f_x, f_y)): The force in the contact tangent plane due to the normal force between the suction cup and object, f_N = f_z + V.

  4. Torsional Friction (τ_z): The torque resulting from frictional forces in the ring of contact.

  5. Elastic Restoring Torque (τ_e = (τ_x, τ_y)): The torque about axes in the contact tangent plane resulting from elastic restoring forces in the suction cup pushing on the object along the boundary of the contact ring.

The magnitudes of the contact wrenches are constrained due to (a) the friction limit surface [14], (b) limits on the elastic behavior of the suction cup material, and (c) limits on the vacuum force. In the suction ring contact model, the set of admissible wrench magnitudes F is approximated by a set of linear constraints for efficient computation of wrench resistance:

Friction: √3 |f_x| ≤ μ f_N,  √3 |f_y| ≤ μ f_N,  √3 |τ_z| ≤ r μ f_N
Material: √2 |τ_x| ≤ π r κ,  √2 |τ_y| ≤ π r κ
Suction: f_z ≥ −V

Here f_N = f_z + V is the normal force, μ is the friction coefficient, r is the radius of the contact ring, and κ is a material-dependent constant modeling the maximum stress for which the suction cup has linear-elastic behavior. These constraints define a subset of the friction limit ellipsoid and cone of admissible elastic torques under a linear pressure distribution about the ring of the cup. Furthermore, we can compute wrench resistance using quadratic programming due to the linearity of the constraints. See the supplemental file for a detailed derivation and proof.
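As a concrete illustration, wrench resistance under these linear constraints can be posed as a small convex program. The sketch below uses cvxpy; the numeric constants and the example disturbing wrench are arbitrary stand-ins, not the values used in the paper.

```python
# A minimal sketch of the wrench resistance check as a quadratic program.
# alpha = (f_x, f_y, f_z, tau_x, tau_y, tau_z) in the contact frame;
# mu, r, kappa, V, and the wrench w are assumed values for illustration.
import numpy as np
import cvxpy as cp

mu, r, kappa, V = 0.5, 0.0075, 0.1, 2.5       # assumed model constants
w = np.array([0.2, 0.0, 1.5, 0.0, 0.0, 0.0])  # example disturbing wrench

a = cp.Variable(6)
f_N = a[2] + V                                # normal force includes vacuum force
constraints = [
    np.sqrt(3) * cp.abs(a[0]) <= mu * f_N,           # friction limit (x)
    np.sqrt(3) * cp.abs(a[1]) <= mu * f_N,           # friction limit (y)
    np.sqrt(3) * cp.abs(a[5]) <= r * mu * f_N,       # torsional friction limit
    np.sqrt(2) * cp.abs(a[3]) <= np.pi * r * kappa,  # elastic torque limit (x)
    np.sqrt(2) * cp.abs(a[4]) <= np.pi * r * kappa,  # elastic torque limit (y)
    a[2] >= -V,                                      # suction limit
]
prob = cp.Problem(cp.Minimize(cp.sum_squares(a - w)), constraints)
prob.solve()
resists = prob.value < 1e-6  # wrench resistance iff the residual is ~zero
print("resists:", resists)
```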

IV-C Robust Wrench Resistance

We evaluate the robustness of candidate suction grasps by evaluating seal formation and wrench resistance over distributions on object pose, grasp pose, and disturbing wrenches:

Definition 3

The robust wrench resistance metric for a grasp u and state x is ρ(u, x) = P(W = 1 | u, x), the probability of success under perturbations in object pose, gripper pose, friction, and disturbing wrenches.

In practice, we evaluate robust wrench resistance by taking N samples, evaluating binary wrench resistance for each, and computing the sample mean: (1/N) Σ_{i=1}^N W_i.
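The sample-mean estimate reduces to a simple Monte-Carlo loop. The sketch below is a minimal illustration; the perturbation scales and the wrench_resistance callable are hypothetical stand-ins for the seal check and quadratic program above, and object pose and friction perturbations would be sampled analogously.

```python
# A minimal Monte-Carlo sketch of robust wrench resistance: perturb the grasp,
# evaluate binary wrench resistance, and average the outcomes.
import numpy as np

def robust_wrench_resistance(grasp, state, wrench_resistance, n_samples=100,
                             pose_sigma=0.0025, angle_sigma=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    successes = 0
    for _ in range(n_samples):
        # Perturb the grasp target point and approach direction.
        noisy_grasp = (grasp[0] + pose_sigma * rng.standard_normal(3),
                       grasp[1] + angle_sigma * rng.standard_normal(3))
        successes += int(wrench_resistance(noisy_grasp, state))
    return successes / n_samples  # sample mean of binary outcomes
```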

V Dex-Net 3.0 Dataset

Fig. 3: The Dex-Net 3.0 dataset. (Left) The Dex-Net 3.0 object dataset contains approximately 350k unique suction target points across the surfaces of 1,500 3D models from the KIT object database [16] and 3DNet [33]. Each suction grasp is classified as robust (green) or non-robust (red). Robust grasps are often above the object center-of-mass on flat surfaces of the object. (Right) The Dex-Net 3.0 point cloud dataset contains 2.8 million tuples of point clouds and suction grasps with robustness labels, with approximately 11.8% positive examples.

Fig. 4: Pipeline for generating the Dex-Net 3.0 dataset (left to right). We first sample a candidate suction grasp from the object surface and evaluate the ability to form a seal and resist gravity over perturbations in object pose, gripper pose, and friction. The samples are used to estimate the probability of success, or robustness, for candidate grasps on the object surface. We render a point cloud for each object and associate the candidate grasp with a pixel and orientation in the depth image through perspective projection. Training datapoints are centered on the suction target pixel and rotated to align with the approach axis to encode the invariance of the robustness to image locations.

To learn to predict grasp robustness based on noisy point clouds, we generate the Dex-Net 3.0 training dataset of point clouds, grasps, and grasp reward labels by sampling tuples (x, u, R, y) from a joint distribution modeled as the product of distributions on:

  • States p(x): A uniform distribution over a discrete dataset of objects and their stable poses and uniform continuous distributions over the object planar pose and camera poses in a bounded region of the workspace.

  • Grasp Candidates p(u | x): A uniform random distribution over contact points on the object surface.

  • Grasp Rewards p(R | x, u): A stochastic model of wrench resistance for the gravity wrench that is sampled by perturbing the gripper pose according to a Gaussian distribution and evaluating the contact model described in Sec. IV.

  • Observations p(y | x): A depth sensor noise model with multiplicative and Gaussian process pixel noise.

Fig. 3 illustrates a subset of the Dex-Net 3.0 object and grasp dataset. The parameters of the sampling distributions and compliant suction contact model (see Sec. IV) were set by maximizing the average precision of the robustness values using grid search for a set of grasps attempted on an ABB YuMi robot on a set of known 3D printed objects (see Sec. VII-A).

Our pipeline for generating training tuples is illustrated in Fig. 4. We first sample a state by selecting an object at random from a database of 3D CAD models and sampling a friction coefficient, planar object pose, and camera pose relative to the worksurface. We generate a set of grasp candidates for the object by sampling points and normals uniformly at random from the surface of the object mesh. We then set the binary reward label R = 1 if a seal is formed and robust wrench resistance (described in Sec. IV-C) is above a threshold value. Finally, we sample a point cloud of the scene using rendering and a model of image noise [22]. The grasp success labels are associated with pixel locations in images through perspective projection [9]. A graphical model for the sampling process and additional details on the distributions can be found in the supplemental file.
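In outline, the generation loop looks like the following sketch. All helpers on the `sim` object are hypothetical stand-ins for the components described above, not the released Dex-Net API.

```python
# A high-level sketch of the dataset-generation loop. `sim` bundles assumed
# helper functions for the steps described in the text.
def generate_dataset(objects, sim, n_grasps, threshold):
    dataset = []
    for obj in objects:
        state = sim.sample_state(obj)          # stable pose, friction, camera pose
        depth_im = sim.render_depth(state)     # synthetic depth image with noise
        for grasp in sim.sample_grasps(obj, n_grasps):  # surface points + normals
            rho = sim.robustness(grasp, state)     # seal check + robust wrench resistance
            label = int(rho > threshold)           # binary reward label
            u, v, angle = sim.project(grasp, state)  # pixel via perspective projection
            crop = sim.align_crop(depth_im, u, v, angle)
            dataset.append((crop, grasp, label))
    return dataset
```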

VI Learning a Deep Robust Grasping Policy

We use the Dex-Net 3.0 dataset to train a GQ-CNN that takes as input a single-view point cloud of an object resting on the table and a candidate suction grasp defined by a target 3D point and approach direction, and outputs the robustness, or estimated probability of success, for the grasp on the visible object.

Our GQ-CNN architecture is identical to Dex-Net 2.0 [21] except that we modify the pose input stream to include the angle between the approach direction and the table normal. The point cloud stream takes a depth image centered on the target point and rotated to align the middle column of pixels with the approach orientation, similar to a spatial transforming layer [11]. The end-effector depth from the camera and the orientation are input to a fully connected layer in a separate pose stream and concatenated with conv features in a fully connected layer. We train the GQ-CNN using stochastic gradient descent with momentum on an 80-20 image-wise training-to-validation split of the Dex-Net 3.0 dataset. Training took approximately 12 hours on three NVIDIA Titan X GPUs. The learned GQ-CNN achieves 93.5% classification accuracy on the held-out validation set.
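The rotation-alignment preprocessing can be implemented with a pair of affine warps. The sketch below uses OpenCV; the 32×32 crop size is an assumption carried over from prior GQ-CNN work, not a confirmed detail of this paper.

```python
# A minimal sketch of the input alignment: rotate the depth image about the
# target pixel so the approach axis aligns with the middle column, then crop.
import cv2
import numpy as np

def align_crop(depth_im, u, v, angle_deg, size=32):
    h, w = depth_im.shape
    # Translate the target pixel (u, v) to the image center.
    T = np.float32([[1, 0, w / 2 - u], [0, 1, h / 2 - v]])
    shifted = cv2.warpAffine(depth_im, T, (w, h))
    # Rotate about the image center so the approach axis is vertical.
    R = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    rotated = cv2.warpAffine(shifted, R, (w, h))
    cy, cx = h // 2, w // 2
    return rotated[cy - size // 2: cy + size // 2,
                   cx - size // 2: cx + size // 2]
```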

We use the GQ-CNN in a deep robust grasping policy to plan suction target grasps from point clouds on a physical robot. The policy uses the Cross Entropy Method (CEM) [20, 21, 27]. CEM samples a set of initial candidate grasps uniformly at random from the set of surface points and inward-facing normals on a point cloud of the object, then iteratively resamples grasps from a Gaussian Mixture Model fit to the grasps with the highest predicted probability of success. See the supplemental file for example grasps planned by the policy.
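A minimal sketch of the CEM loop is shown below, with `sample_candidates` and `score` as stand-ins for the point-cloud candidate sampler and the GQ-CNN's predicted robustness; in the real policy, grasps resampled from the GMM would also be reprojected onto the observed point cloud.

```python
# A minimal sketch of the Cross Entropy Method grasp policy described above.
import numpy as np
from sklearn.mixture import GaussianMixture

def cem_plan(sample_candidates, score, n_iters=3, n_samples=100,
             n_elite=20, n_components=3):
    grasps = sample_candidates(n_samples)           # uniform over surface points
    for _ in range(n_iters):
        q = score(grasps)                           # predicted robustness per grasp
        elite = grasps[np.argsort(q)[-n_elite:]]    # keep highest-scoring grasps
        gmm = GaussianMixture(n_components=n_components).fit(elite)
        grasps, _ = gmm.sample(n_samples)           # resample from the fitted GMM
    q = score(grasps)
    return grasps[int(np.argmax(q))]                # most robust grasp found
```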

VII Experiments

Fig. 5: (Left) The experimental setup with an ABB YuMi equipped with a suction gripper. (Right) The 55 objects used to evaluate suction grasping performance. The objects are divided into three categories to characterize performance: Basic (e.g. prismatic objects), Typical, and Adversarial.

We ran experiments to characterize the precision of robust wrench resistance when object shape and pose are known and the precision of our deep robust grasping policy for planning grasps from point clouds for three object classes.

VII-A Object Classes

We created a dataset of 55 rigid and non-porous objects including tools, groceries, office supplies, toys, and 3D printed industrial parts. We separated objects into three categories, illustrated in Fig. 5:

  1. Basic: Prismatic solids (e.g. rectangular prisms, cylinders). Includes 25 objects.

  2. Typical: Common objects with varied geometry and many accessible, approximately planar surfaces. Includes 25 objects.

  3. Adversarial: 3D-printed objects with complex geometry (e.g. curved or narrow surfaces) that are difficult to access. Includes 5 objects.

For object details, see http://bit.ly/2xMcx3x.

VII-B Experimental Protocol

We ran experiments with an ABB YuMi with a Primesense Carmine 1.09 and a suction system with a 15 mm diameter silicone single-bellow suction cup and a VM5-NC VacMotion vacuum generator with a payload of approximately 0.9 kg. The experimental workspace is illustrated in the left panel of Fig. 5. In each experiment, the operator iteratively presented a target object to the robot and the robot planned and executed a suction grasp on the object. The operator labeled successes based on whether or not the robot was able to lift and transport the object to the side of the workspace. For each method, we measured:

  1. Average Precision (AP). The area under the precision-recall curve, which measures precision over possible thresholds on the probability of success predicted by the policy. This is useful for industrial applications where a robot may take an alternative action (e.g. asking for help) if the planned grasp is predicted to fail.

  2. Success Rate. The fraction of all grasps that were successful.
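Both metrics can be computed directly from recorded trial outcomes and the policy's predicted probabilities, e.g. with scikit-learn; the numbers below are illustrative only.

```python
# A short sketch of the two performance metrics from a set of grasp trials.
import numpy as np
from sklearn.metrics import average_precision_score

labels = np.array([1, 1, 0, 1, 0, 1])              # 1 = lifted and transported
probs = np.array([0.9, 0.8, 0.7, 0.6, 0.3, 0.2])   # policy's predicted robustness

ap = average_precision_score(labels, probs)  # area under precision-recall curve
success_rate = labels.mean()                 # fraction of successful grasps
print(f"AP: {ap:.2f}, success rate: {success_rate:.2f}")
```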

All experiments ran on a Desktop running Ubuntu 14.04 with a 2.7 GHz Intel Core i5-6400 Quad-Core CPU and an NVIDIA GeForce 980 GPU.

VII-C Performance on Known Objects

To assess the performance of our robustness metric independently of the perception system, we evaluated whether the metric was predictive of suction grasp success when object shape and pose were known, using the 3D printed Adversarial objects (right panel of Fig. 5). The robot was presented one of the five Adversarial objects in a known stable pose, selected from the top three most probable stable poses. We hand-aligned the object to a template image generated by rendering the object in a known pose on the table. Then, we indexed a database of grasps precomputed on 3D models of the objects and executed the grasp with the highest metric value for five trials. In total, there were 75 trials per experiment.

We compared the following metrics:

  1. Planarity-Centroid (PC3D). The inverse distance to the object centroid for sufficiently planar patches on the 3D object surface.

  2. Spring Stretch (SS). The maximum stretch among virtual springs in the suction contact model.

  3. Wrench Resistance (WR). Our model without perturbations.

  4. Robust Wrench Resistance (RWR). Our model.

The RWR metric performed best with 99% AP, compared to 93% AP for WR, 89% AP for SS, and 88% AP for PC3D.

VII-D Performance on Novel Objects

We also evaluated the performance of GQ-CNNs trained on Dex-Net 3.0 for planning suction target points from a single-view point cloud. In each experiment, the robot was presented one object from either the Basic, Typical, or Adversarial classes in a pose randomized by shaking the object in a box and placing it on the table. The object was imaged with a depth sensor and segmented using 3D bounds on the workspace. Then, the grasping policy executed the most robust grasp according to a success metric. In this experiment the human operators were blinded to the method they were evaluating to remove bias in human labels.

We compared policies that optimized the following metrics:

  1. Planarity. The inverse sum of squared errors from an approach plane for points within a disc with radius equal to that of the suction cup.

  2. Centroid. The inverse distance to the object centroid.

  3. Planarity-Centroid (PC). The inverse distance to the centroid for planar patches on the 3D object surface.

  4. GQ-CNN (ADV). Our GQ-CNN trained on synthetic data from the Adversarial objects (to assess the ability of the model to fit complex objects).

  5. GQ-CNN (DN3). Our GQ-CNN trained on synthetic data from 3DNet [33], KIT [16], and the Adversarial objects.

Table I details performance on the Basic, Typical, and Adversarial objects. On the Basic and Typical objects, we see that the Dex-Net 3.0 policy is comparable to PC in terms of success rate and has near-perfect AP, suggesting that failed grasps often have low robustness and can therefore be detected. On the Adversarial objects, GQ-CNN (ADV) significantly outperforms GQ-CNN (DN3) and PC, suggesting that this method can be used to successfully grasp objects with complex surface geometry as long as the training dataset closely matches the objects seen at runtime. The DN3 policy took an average of 3.0 seconds per grasp.

Metric                Basic              Typical            Adversarial
                      AP (%) Success (%) AP (%) Success (%) AP (%) Success (%)
Planarity               81       74        69       67        48       47
Centroid                89       92        80       78        47       38
Planarity-Centroid      98       94        94       86        64       62
GQ-CNN (ADV)            83       77        75       67        86       81
GQ-CNN (DN3)            99       98        97       82        61       58
TABLE I: Performance of point-cloud-based grasping policies for 125 trials each on the Basic and Typical datasets and 100 trials each on the Adversarial dataset. We see that the GQ-CNN trained on Dex-Net 3.0 has the highest Average Precision (AP) (area under the precision-recall curve) on the Basic and Typical objects, suggesting that the robustness score from the GQ-CNN could be used to anticipate grasp failures and select alternative actions (e.g. probing objects) in the context of a larger system. Also, a GQ-CNN trained on the Adversarial dataset outperforms all methods on the Adversarial objects, suggesting that the performance of our model is improved when the true object models are used for training.

VII-E Failure Modes

The most common failure mode was attempting to form a seal on surfaces whose geometry prevents seal formation. This is partially due to the limited resolution of the depth sensor, as our seal formation model is able to detect the inability to form a seal on such surfaces when the geometry is known precisely. In contrast, the planarity-centroid metric performs poorly on objects with non-planar surfaces near the object centroid.

VIII Future Work

In future work, we will study sensitivity to (1) the distribution of 3D object models used in the training dataset, (2) noise and resolution in the depth sensor, and (3) variations in vacuum suction hardware (e.g. cup shape, hardness of cup material). We will also extend this model to learning suction grasping policies for bin-picking with heaps of parts and to composite policies that combine suction grasping with parallel-jaw grasping by a two-armed robot. We are also working with colleagues in the robot grasping community to propose shareable benchmarks and protocols that specify experimental objects and conditions with industry-relevant metrics such as Mean Picks Per Hour (MPPH); see http://goo.gl/6M5rfw.

Appendix A Additional Experiments

To better characterize how our robust wrench resistance metric, compliant suction contact model, and GQ-CNN-based policy for planning suction target grasps from point clouds correlate with physical outcomes on a real robot, we present several additional analyses and experiments.

A-A Performance Metrics

Our primary numeric metrics of performance were:

  1. Average Precision (AP). The area under the precision-recall curve, which measures precision over possible thresholds on the probability of success predicted by the policy. This is useful for industrial applications where a robot may take an alternative action (e.g. probing, asking for help) if the planned grasp is predicted to fail.

  2. Success Rate. The fraction of all grasps that were successful.

We argue that these metrics alone do not give a complete picture of how well a suction grasp policy would work in practice. Average Precision (AP) penalizes a policy for having poor recall (a high rate of false negatives relative to true positives), and success rate penalizes a policy with a high number of failures. However, not all failures should be treated equally: some failures are predicted to occur by the GQ-CNN (low predicted probability of success) while others are the result of an overconfident prediction.

In practice, a suction grasp policy would be part of a larger system (e.g. a state machine) that could decide whether or not to execute a grasp based on the continuous probability of success output by the GQ-CNN. As long as the policy is not overconfident, such a system can detect failures before they occur and take an alternative action such as attempting to turn the object over, asking a human for help, or leaving the object in the bin for error handling. At the same time, if a policy is too conservative and never predicts successes, then the system will be able to handle very few test cases.

We illustrate this tradeoff by plotting the Success-Attempt Rate curve, which plots:

  1. Success Rate. The fraction of grasps that are successful if the system only executes grasps with predicted probability of success greater than a confidence threshold.

  2. Attempt Rate. The fraction of all test cases for which the system attempts a grasp, if the system only attempts grasps with predicted probability of success greater than the confidence threshold.

over all possible values of the confidence threshold. There is typically an inverse relationship between the two metrics: a higher confidence threshold will reduce false positives but will also reduce the frequency of grasp attempts, increasing runtime and decreasing the diversity of cases that the robot is able to successfully handle.
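The curve can be computed by sweeping the threshold over the recorded predictions, as in the sketch below (illustrative, assuming per-trial predicted probabilities and binary outcome labels).

```python
# A minimal sketch of the Success-Attempt Rate curve computation.
import numpy as np

def success_attempt_curve(probs, labels):
    probs, labels = np.asarray(probs), np.asarray(labels)
    curve = []
    for tau in np.unique(probs):
        attempted = probs >= tau                 # grasps the system would execute
        attempt_rate = attempted.mean()          # fraction of test cases attempted
        success_rate = labels[attempted].mean()  # success among attempted grasps
        curve.append((tau, success_rate, attempt_rate))
    return curve
```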

A-B Performance on Known Objects

To assess the performance of our robustness metric independent of the perception system, we evaluated whether the metric was predictive of suction grasp success when object shape and pose were known, using the 3D printed Adversarial objects. The robot was presented one of the five Adversarial objects in a known stable pose, selected from the top three most probable stable poses. We hand-aligned the object to a template image generated by rendering the object in a known pose on the table. Then, we indexed a database of grasps precomputed on 3D models of the objects and executed the grasp with the highest metric value for five trials. In total, there were 75 trials per experiment.

We compared the following metrics:

  1. Planarity-Centroid (PC3D). The inverse distance to the object centroid for sufficiently planar patches on the 3D object surface.

  2. Spring Stretch (SS). The maximum stretch among virtual springs in the suction contact model.

  3. Wrench Resistance (WR).

  4. Robust Wrench Resistance (RWR).

The results are detailed in Table II and the Success vs Attempt Rate curve is plotted in Fig. 6. A policy based on the robust wrench resistance metric achieved nearly 100% average precision and a 92% success rate on this dataset, suggesting that the ranking of grasps by robust wrench resistance is correlated with the ranking by physical successes.

Metric   AP (%)   Success Rate (%)
PC3D       88           80
SS         89           84
WR         93           80
RWR       100           92
TABLE II: Performance of robust grasping policies with known state (3D object shape and pose) across 75 physical trials per policy on the Adversarial object dataset. The policies differ by the metric used to rank grasps, and each metric is computed using the known 3D object geometry. The robust wrench resistance metric, which considers the ability of a suction cup to form a seal and resist gravity under perturbations, has very high precision. In comparison, the Planarity-Centroid heuristic achieves only 88% precision and an 80% success rate.
\tablabel

correlation

Fig. 6: Success rate vs attempt rate for grasp quality metrics on known 3D objects in known poses. The data was collected across 75 trials per policy on the Adversarial object dataset. The robust wrench resistance metric based on our compliant suction contact model had a 100% success rate for a large percentage of possible test cases, whereas a heuristic based on planarity and the distance to the object center of mass had substantially lower success rates, indicating that the real-valued distance to the center of mass is not well correlated with grasp success.

A-C Performance on Novel Objects

We also evaluated the performance of GQ-CNNs trained on Dex-Net 3.0 for planning suction target points from a single-view point cloud. In each experiment, the robot was presented one object from either the Basic, Typical, or Adversarial classes in a pose randomized by shaking the object in a box and placing it on the table. The object was imaged with a depth sensor and segmented using 3D bounds on the workspace. Grasp candidates were then sampled from the depth image and the grasping policy executed the most robust candidate grasp according to a success metric. In this experiment the human operators were blinded to the method they were evaluating to remove bias in human labels.

We compared policies that used the following metrics:

  1. Planarity. The inverse sum of squared errors from an approach plane for points within a disc with radius equal to that of the suction cup.

  2. Centroid. The inverse distance to the object centroid.

  3. Planarity-Centroid. The inverse distance to the centroid for sufficiently planar patches on the 3D object surface.

  4. GQ-CNN (ADV). A GQ-CNN trained on synthetic data from only the Adversarial objects (to assess the ability of the model to fit complex objects).

  5. GQ-CNN (DN3). A GQ-CNN trained on synthetic data from the 3DNet [33], KIT [16], and Adversarial object datasets.

Table III details performance on the Basic, Typical, and Adversarial objects, and Fig. 7 illustrates the Success-Attempt Rate tradeoff. We see that the Dex-Net 3.0 policy has the highest AP across the Basic and Typical classes. Also, the GQ-CNN trained on the Adversarial objects significantly outperforms all methods on the Adversarial dataset, suggesting that our model is able to exploit knowledge of complex 3D geometry to plan robust grasps. Furthermore, the Success-Attempt Rate curve suggests that the continuous probability of success output by the Dex-Net 3.0 policy is highly correlated with the true success label and can be used to detect failures before they occur on the Basic and Typical object classes. The Dex-Net 3.0 policy took an average of approximately 3 seconds to plan each grasp.

Metric                Basic              Typical            Adversarial
                      AP (%) Success (%) AP (%) Success (%) AP (%) Success (%)
Planarity               81       74        69       67        48       47
Centroid                89       92        80       78        47       38
Planarity-Centroid      98       94        94       86        64       62
GQ-CNN (ADV)            83       77        75       67        86       81
GQ-CNN (DN3)            99       98        97       82        61       58
TABLE III: Performance of image-based grasping policies for 125 trials each on the Basic and Typical datasets and 100 trials each on the Adversarial datasets. We see that the GQ-CNN trained on Dex-Net 3.0 has the highest average precision on the Basic and Typical objects but has lower precision on the adversarial objects, which are very different than common objects in the training dataset. A GQ-CNN trained on the Adversarial dataset significantly outperforms all methods on these objects, suggesting that our model is able to capture complex geometries when the training dataset contains a large proportion of such objects.

Fig. 7: Success vs attempt rate for 125 trials on each of the Basic and Typical object datasets and 100 trials on the Adversarial object dataset. The GQ-CNN trained on Dex-Net 3.0 has near-100% precision on the Basic and Typical classes for a significant portion of attempts, suggesting that the GQ-CNN is able to predict when it is likely to fail on novel objects. The GQ-CNN trained on the Adversarial objects has significantly higher precision on the Adversarial class but does not perform as well on the other objects.

A-D Classification Performance on Known Objects

To assess the performance of our robustness metric for classifying grasps as successful or unsuccessful, we evaluated whether the metric was able to classify a set of grasps sampled randomly from the 3D object surface using the known 3D object geometry and pose of the Adversarial objects. First, we sampled a set of grasps uniformly at random from the surface of the 3D object meshes. Then the robot was presented one of the five Adversarial objects in a known stable pose, selected from the top three most probable stable poses. We hand-aligned the object to a template image generated by rendering the object in a known pose on the table. Then, the robot executed a grasp selected uniformly at random from the set of reachable grasps for the given stable pose. In total, there were multiple trials per object.

We compared the predictions made for those grasps by the following metrics:

  1. Planarity-Centroid (PC3D). The inverse distance to the object centroid for sufficiently planar patches on the 3D object surface.

  2. Spring Stretch (SS). The maximum stretch among virtual springs in the suction contact model.

  3. Wrench Resistance (WR).

  4. Robust Wrench Resistance (RWR).

We measured the Average Precision (AP), classification accuracy, and Spearman's rank correlation coefficient (which measures the correlation between the ranking of the metric value and successes on the physical robot). Table IV details the performance of each metric and the precision-recall curve is plotted in Fig. 8. We see that the robust wrench resistance metric with our compliant spring contact model has the highest average precision and correlation with successes on the physical robot.
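All three quantities can be computed from per-grasp metric values and binary outcomes, e.g. with SciPy and scikit-learn; the values and the 0.5 classification threshold below are illustrative assumptions.

```python
# A short sketch of the three evaluation metrics for a grasp quality metric.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import average_precision_score

values = np.array([0.9, 0.1, 0.7, 0.4, 0.8])   # grasp quality metric values
outcomes = np.array([1, 0, 1, 0, 1])           # physical lift-and-transport labels

ap = average_precision_score(outcomes, values)
accuracy = ((values > 0.5).astype(int) == outcomes).mean()
rho, _ = spearmanr(values, outcomes)           # rank correlation with successes
print(f"AP: {ap:.2f}, accuracy: {accuracy:.2f}, rank corr: {rho:.2f}")
```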

Metric   AP (%)   Accuracy (%)   Rank Correlation
PC3D       71          68             0.36
SS         75          74             0.49
WR         78          77             0.52
RWR        80          75             0.62
TABLE IV: Performance of classification and correlation with successful object lifts and transports for various metrics of grasp quality based on 3D object meshes. The metrics SS, WR, and RWR all use our compliant suction contact model, and RWR uses our entire proposed method: checking seal formation, analyzing wrench resistance using the suction ring model, and computing robustness with Monte-Carlo sampling.

Fig. 8: Precision-Recall curve for classifying successful object lifts and transports using various metrics of grasp quality based on 3D object meshes.

A-E Failure Modes

There were many objects that our system was not able to handle due to material properties. We grouped the failure objects into two categories:

  1. Imperceptible Objects: Objects with (a) surface variations less than the spatial resolution of our Primesense Carmine 1.09 depth camera or (b) specularities or transparencies that prevent the depth camera from sensing the object geometry. Thus the point-cloud-based grasping policies were not able to distinguish successes from failures.

  2. Impossible Objects: Objects for which a seal cannot be formed either because objects are (a) porous or (b) lack a surface patch on which the suction cup can achieve a seal due to size or texture.

These objects are illustrated in Fig. 9.

Fig. 9: Two categories of objects that cannot be handled by any of the point-cloud-based suction grasping policies. (Left) Imperceptible objects, which cannot be handled by the system due to small surface variations that cannot be detected by the low-resolution depth sensor but do prevent seal formation. (Right) Impossible objects, which cannot be handled by the system due to porosity or lack of an available surface to form a seal.

Appendix B Details of Quasi-Static Spring Seal Formation Model


In this section, we derive a detailed process for statically determining a final configuration of C that achieves a complete seal against mesh M. We assume that we are given a line of approach ℓ, a target point p on the surface of M, and v, a vector pointing towards p along the line of approach.

First, we choose an initial, undeformed configuration of C. In this undeformed configuration, all of the springs of C are in their resting positions, which means that the structural springs of C form a right pyramid with a regular n-gon as its base. This perfectly constrains the relative positions of the vertices of C, so all that remains is specifying the position and orientation of C relative to the world frame.

We further constrain the position and orientation of C such that ℓ passes through p and is orthogonal to the plane containing the base of C. This leaves only the position of C along ℓ and a rotation about ℓ as degrees of freedom. For our purposes, the position of C along ℓ does not matter so long as C is not in collision with M and the base of C is closer to p than the apex is. In general, we choose an initial position far enough from the mesh to guarantee this, based on the largest extent of the object's vertices. For the rotation about ℓ, we simply select a random initial angle. Theoretically, the rotation could affect the outcome of our metric, but as long as n is chosen to be sufficiently large, the result is not sensitive to the chosen rotation angle.

Next, given the initial configuration of C, we compute the final locations of the perimeter springs on the surface of M under two main constraints:

  • The perimeter springs of C must not deviate from their initial locations when projected back onto the plane containing the base of C's initial right pyramid.

  • The perimeter springs of C must lie flush against the mesh M.

Essentially, this means that the perimeter springs will lie on the intersection of M with a right prism P whose base is the base of the initial configuration's right pyramid and whose height is sufficient such that P passes all the way through M. The base vertices of C will lie at the intersection of M and P's side edges, and the perimeter springs of C will lie along the intersection of M and P's side faces.

Finally, given a complete configuration of the perimeter vertices of C as well as the paths of the perimeter springs along the surface of M, we compute the final location of the cup apex a. We work with three main constraints:

  • a must lie on ℓ.

  • a must not be below the surface of M.

  • a should be chosen such that the average displacement between a and the perimeter vertices along v remains equal to h, the height of the undeformed cup.

Let v̂ be the unit approach direction and let t_i = (v_i − p) · v̂ be the signed offset of perimeter vertex v_i from p along v̂. Then, from the average-distance constraint, the solution distance d from p to the apex along −v̂ is given by d = h − (1/n) Σ_{i=1}^n t_i, so that a = p − d v̂.

When thresholding the energy in each spring, we use a per-spring threshold of a 10% change in length, which was used as the spring stretch limit in [26].
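The computation above can be made concrete with a small numeric sketch, assuming the local surface is a heightfield z = f(x, y) and the approach direction is v = −z; the surface function and constants below are arbitrary stand-ins.

```python
# A minimal numeric sketch of the final-configuration computation: project the
# perimeter vertices along v onto an assumed heightfield, then place the apex.
import numpy as np

n, r, h = 8, 0.0075, 0.005                   # cup discretization, radius, height
p = np.array([0.0, 0.0, 0.0])                # target point on the surface
f = lambda x, y: 0.001 * np.sin(200.0 * x)   # assumed local surface model

# Project the undeformed perimeter vertices along v onto the surface.
theta = 2.0 * np.pi * np.arange(n) / n
base = np.stack([p[0] + r * np.cos(theta),
                 p[1] + r * np.sin(theta)], axis=1)
verts = np.column_stack([base, f(base[:, 0], base[:, 1])])

# Place the apex on the approach line so the mean distance along v between
# the apex and the perimeter vertices equals the undeformed height h.
v_hat = np.array([0.0, 0.0, -1.0])           # approach direction
t = (verts - p) @ v_hat                      # per-vertex offsets along v
d = h - t.mean()                             # solution distance
apex = p - d * v_hat
print("apex:", apex)
```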

Appendix C Suction Contact Model


The basis of contact wrenches for the suction ring model is illustrated in Fig. 10. The contact wrenches are not independent due to the coupling of normal force and friction, and they may be bounded due to material properties. In this section we prove that wrench resistance can be computed with quadratic programming, we derive constraints between the contact wrenches in the suction ring model, and we explain the limits of the soft finger suction contact model for a single suction contact.

C-A Computing Wrench Resistance with Quadratic Programming

The object wrench set for a grasp u using a contact model with m basis wrenches is Λ(u) = {w ∈ R^6 | w = Gα for some α ∈ F}, where G ∈ R^{6×m} is a set of basis wrenches in the object coordinate frame, and F ⊆ R^m is a set of constraints on contact wrench magnitudes [24]. The grasp map can be decomposed as G = A W, where A is the adjoint transformation mapping wrenches from the contact to the object coordinate frame and W is the contact wrench basis, a set of orthonormal basis wrenches in the contact coordinate frame [24].

Definition 4

A grasp u achieves wrench resistance with respect to a wrench w if w ∈ Λ(u).

Proposition 1

Let G be the grasp map for a grasp u. Furthermore, let ε* = min_{α ∈ F} ||Gα − w||₂². Then u can resist w iff ε* = 0.

Proof: (⇒) Assume u can resist w. Then w ∈ Λ(u) and therefore there exists α ∈ F such that Gα = w, so ε* = 0. (⇐) Assume ε* = 0. Then there exists α ∈ F such that Gα = w, so w ∈ Λ(u). When the set of admissible contact wrench magnitudes F is defined by linear equality and inequality constraints, the optimization min_{α ∈ F} ||Gα − w||₂² is a Quadratic Program, which can be solved exactly by modern solvers.

C-B Derivation of Suction Ring Contact Model Constraints

Fig. 10: Wrench basis for the compliant suction ring contact model. The contact exerts a constant pulling force on the object of magnitude V and additionally can push or pull the object along the contact z axis with force f_z. The suction cup material exerts a normal force f_N = f_z + V on the object through a linear pressure distribution on the ring. This pressure distribution induces a friction limit surface bounding the set of possible frictional forces (f_x, f_y) in the tangent plane and the torsional moment τ_z, and also induces torques τ_x and τ_y about the contact x and y axes due to elastic restoring forces in the suction cup material.

Our suction contact model assumes the following:

  1. Quasi-static physics (e.g. inertial terms are negligible).

  2. The suction cup contacts the object along a circle of radius r (a "ring") in the x-y plane of the contact coordinate frame.

  3. The suction cup material behaves as a ring of infinitesimal springs per unit length. Specifically, we assume that the pressure p along the z axis in the contact coordinate frame satisfies p(x, y) = k z(x, y), where z(x, y) is displacement along the z-axis and k is a spring constant (per unit length). The cup does not permit deformations along the x or y axes.

  4. The suction cup material is well approximated by a spring-mass system. Furthermore, points on the contact ring are in static equilibrium with a linear displacement along the z axis from the equilibrium position: z(x, y) = αx + βy. Together with Assumption 3, this implies that:

    p(x, y) = k(αx + βy)

    for real numbers α and β.

  5. The force on the object due to the vacuum is a constant V along the z axis of the contact coordinate frame.

  6. The suction cup material exerts a normal force f_N = f_z + V on the object, where f_z is the force due to actuation. This assumption holds when analyzing the ability to resist disturbing wrenches because the material can apply passive forces but may not hold when considering target wrenches that must be actively actuated.

The magnitudes of the contact wrenches are constrained due to (a) the friction limit surface [14], (b) limits on the elastic behavior of the suction cup material, and (c) limits on the vacuum force.

C-B1 Friction Limit Surface

The values of the tangential and torsional friction are coupled through the planar external wrench and thus are jointly constrained. This constraint is known as the friction limit surface [14]. We can approximate the friction limit surface by computing the maximum friction force and torsional moment under a pure translation and a pure rotation about the contact origin.

The tangential forces have maximum magnitude under a purely translational disturbing wrench, for which every point on the ring slides in the same direction: √(f_x² + f_y²) ≤ μ f_N.

The torsional moment has a maximum magnitude under a purely rotational disturbing wrench about the contact z axis, for which every point on the ring at radius r slides tangentially. Thus the torsional moment is bounded by: |τ_z| ≤ r μ f_N.

We can approximate the friction limit surface by the ellipsoid [13, 14]: (f_x² + f_y²) / (μ f_N)² + τ_z² / (r μ f_N)² ≤ 1.

While this constraint is convex, in practice many solvers for Quadratically Constrained Quadratic Programs (QCQPs) assume nonconvexity. We can turn this into a linear constraint by bounding tangential forces and torsional moments within a rectangular prism inscribed in the ellipsoid: √3 |f_x| ≤ μ f_N, √3 |f_y| ≤ μ f_N, √3 |τ_z| ≤ r μ f_N.

C-B2 Elastic Restoring Torques

The torques about the x and y axes are also bounded. Let κ be the elastic limit or yield strength of the suction cup material, defined as the stress at which the material begins to deform plastically instead of linearly. Bounding the linear pressure distribution by this limit yields:

√2 |τ_x| ≤ π r κ,  √2 |τ_y| ≤ π r κ

C-B3 Vacuum Limits

The ring contact can exert forces on the object along the $z$ axis through motor torques that transmit forces to the object through the ring of the suction cup. Under these assumptions, the normal force exerted on the object by the suction cup material is:

$N = f_z + V.$

Note also that $f_z \geqslant -V$, where $f_z$ is the $z$ component of the force on the object, since the normal force must offset the force due to the vacuum even when no actuation force is being applied to the object.

C-B4 Constraint Set

Taking all constraints into account, we can describe the set of admissible contact wrench magnitudes $\mathcal{F}$ with a set of linear constraints:

Friction: $\sqrt{3}\,|f_x| \leqslant \mu N, \quad \sqrt{3}\,|f_y| \leqslant \mu N, \quad \sqrt{3}\,|\tau_z| \leqslant r \mu N$
Material: $\sqrt{2}\,|\tau_x| \leqslant \pi r \kappa, \quad \sqrt{2}\,|\tau_y| \leqslant \pi r \kappa$
Suction: $f_z \geqslant -V$

Since these constraints are linear, we can solve for wrench resistance in our contact model using Quadratic Programming. In this paper we set $\mu$ and $\kappa$ to fixed constants.
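Putting the pieces together, wrench resistance for a single suction ring contact reduces to a small QP. The sketch below expresses the disturbing wrench in the contact frame (so the grasp map is the identity) and uses illustrative constants; it is a schematic of the computation, not the Dex-Net implementation:

```python
import cvxpy as cp
import numpy as np

mu, r, kappa, V = 0.5, 0.015, 0.005, 25.0        # illustrative constants
w = np.array([0.0, 0.0, -9.81, 0.0, 0.05, 0.0])  # hypothetical gravity wrench in contact frame

a = cp.Variable(6)                               # (fx, fy, fz, tau_x, tau_y, tau_z)
fx, fy, fz, tx, ty, tz = a[0], a[1], a[2], a[3], a[4], a[5]
N = fz + V                                       # normal force includes the vacuum pull
s3, s2 = np.sqrt(3.0), np.sqrt(2.0)
constraints = [
    s3 * cp.abs(fx) <= mu * N,                   # friction
    s3 * cp.abs(fy) <= mu * N,
    s3 * cp.abs(tz) <= r * mu * N,
    s2 * cp.abs(tx) <= np.pi * r * kappa,        # material
    s2 * cp.abs(ty) <= np.pi * r * kappa,
    fz >= -V,                                    # suction
]
eps = cp.Problem(cp.Minimize(cp.sum_squares(a + w)), constraints).solve()
print("resists:", eps < 1e-6)
```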

C-C Limits of the Soft Finger Suction Contact Model

The most common suction contact model in the literature [17, 23, 29, 31, 34] considers normal forces from motor torques, suction forces from the pressure differential between inside the cup and the air outside the object, and both tangential and torsional friction resulting from the contact area between the cup and the object. Let $\hat{x}$, $\hat{y}$, and $\hat{z}$ be unit basis vectors along the $x$, $y$, and $z$ axes of the contact coordinate frame. The contact model is specified by the basis wrenches (the columns of $G$) $\{(\hat{x}, \mathbf{0}), (\hat{y}, \mathbf{0}), (\hat{z}, \mathbf{0}), (\mathbf{0}, \hat{z})\}$ and the constraints:

$\sqrt{f_x^2 + f_y^2} \leqslant \mu |f_z|, \qquad |\tau_z| \leqslant \gamma |f_z|.$

The first constraint enforces Coulomb friction with coefficient $\mu$. The second constraint ensures that the net torsion is bounded by the normal force, since torsion results from the net frictional moment from a contact area. Unlike contact models for rigid multifinger grasping, $f_z$ can be positive or negative due to the pulling force of suction.

Proposition 2

Under the soft suction contact model, a grasp with a single contact point cannot resist torques about axes in the contact tangent plane.

Proof: The wrench $(\mathbf{0}, \hat{x})$ is not in the range of $G$ because it is orthogonal to every basis wrench (column of $G$).

The null space of $G^T$ is spanned by the wrenches $(\mathbf{0}, \hat{x})$ and $(\mathbf{0}, \hat{y})$, suggesting that a single suction contact cannot resist torques in the tangent plane at the contact. This defies our intuition, since empirical evidence suggests that a single point of suction can reliably hold and transport objects to a receptacle in applications such as the Amazon Picking Challenge [6, 10].
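The proposition is also easy to confirm numerically by computing the left null space of the soft-finger wrench basis; a short sketch (the basis layout is ours, following the model above):

```python
import numpy as np
from scipy.linalg import null_space

# Columns: forces along x, y, z and torque about z; rows: (force, torque) wrench coordinates.
G = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Wrenches orthogonal to every column of G: spanned by the tangent-plane torques.
print(null_space(G.T).round(3))
```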

Appendix D GQ-CNN Performance

\seclabel{training}

The GQ-CNN trained on Dex-Net 3.0 had a classification accuracy of 93.5% on a held-out validation set of approximately 552,000 datapoints. \figref{roc-conv} shows the precision-recall curve for the GQ-CNN on the validation set and the 64 optimized Conv1_1 filters, each of which is 7×7. \figref{policy-examples} illustrates the probability of success predicted by the GQ-CNN on candidate grasps from several real point clouds.

Fig. 11: (Left) Precision-recall curve for the GQ-CNN trained on Dex-Net 3.0 on the validation set of 552,000 pairs of grasps and images. (Right) The 64 Conv1_1 filters of the GQ-CNN. Each is 7×7. We see that the network learns circular filters, which may be used to assess the surface curvature about the ring of contact between the suction cup and object.
\figlabel{roc-conv}

Fig. 12: Robust grasps planned with the Dex-Net 3.0 GQ-CNN-based policy on example RGB-D point clouds. (Left) The robot is presented an object in isolation. (Middle) Initial candidate suction target points colored by the predicted probability of success from zero (red) to one (green). Robust grasps tend to concentrate around the object centroid. (Right) The policy optimizes for the grasp with the highest probability of success using the Cross Entropy Method.
\figlabel{policy-examples}

Appendix E Environment Model

Fig. 13: A probabilistic graphical model of the relationship between the ability to resist external wrenches (e.g. due to gravity) and perturbations in object pose, gripper pose, camera pose, and friction.
\figlabel{graphical-model}

To learn to predict grasp robustness based on noisy point clouds, we generate the Dex-Net 3.0 training dataset of point clouds, grasps, and grasp success labels by sampling tuples from a joint distribution that is composed of distributions on:

  • States: $p(\mathbf{x})$: A prior on possible objects, object poses, and camera poses that the robot will encounter.

  • Grasp Candidates: $p(\mathbf{u}|\mathbf{x})$: A prior constraining grasp candidates to target points on the object surface.

  • Grasp Successes: $p(S|\mathbf{u},\mathbf{x})$: A stochastic model of wrench resistance for the gravity wrench.

  • Observations: $p(\mathbf{y}|\mathbf{x})$: A sensor noise model.

Our graphical model is illustrated in \figref{graphical-model}.

Distribution | Description
$p(\gamma)$ | truncated Gaussian distribution over friction coefficients
$p(\mathcal{O})$ | discrete uniform distribution over 3D object models
$p(T_o|\mathcal{O})$ | continuous uniform distribution over the discrete set of object stable poses and planar poses on the table surface
$p(T_c)$ | continuous uniform distribution over spherical coordinates with bounded radius and polar angle
TABLE V: Details of the distributions used in the graphical model (following Dex-Net 2.0 [21]) for generating the Dex-Net 3.0 training dataset.
\tablabel{distributions}

E-A Details of Distributions

We follow the state model of [21], which we repeat here for convenience.

We model the state distribution as $p(\mathbf{x}) = p(\gamma)p(\mathcal{O})p(T_o|\mathcal{O})p(T_c)$. We model $p(\gamma)$ as a Gaussian distribution truncated to $[0, 1]$. We model $p(\mathcal{O})$ as a discrete uniform distribution over 3D objects in a given dataset. We model $p(T_o|\mathcal{O}) = p(T_s|\mathcal{O})p(T_p)$, where $p(T_s|\mathcal{O})$ is a discrete uniform distribution over object stable poses and $p(T_p)$ is a uniform distribution over 2D planar poses on the table surface. We compute stable poses using the quasi-static algorithm given by Goldberg et al. [8]. We model $p(T_c)$ as a uniform distribution on spherical coordinates with bounded radius and polar angle, where the camera optical axis always intersects the center of the table. The parameters of the sampling distributions were set by maximizing average precision of the success predictions using grid search for a set of grasps attempted on an ABB YuMi robot on a set of known 3D printed objects (see \secref{additional-known-objects}).
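A schematic of the resulting state sampler appears below. All numeric ranges are placeholders (the calibrated parameters are not reproduced here), and the helper structures are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_state(meshes, stable_poses):
    """Draw (friction, object, pose, camera) from the factored state distribution."""
    gamma = float(np.clip(rng.normal(0.5, 0.1), 0.0, 1.0))      # truncated Gaussian friction
    i = int(rng.integers(len(meshes)))                          # discrete uniform over objects
    pose = stable_poses[i][rng.integers(len(stable_poses[i]))]  # uniform over stable poses
    planar = (rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1),   # uniform planar offset
              rng.uniform(0.0, 2.0 * np.pi))
    camera = (rng.uniform(0.5, 0.7),                            # radius
              rng.uniform(0.0, np.pi / 4.0),                    # polar angle
              rng.uniform(0.0, 2.0 * np.pi))                    # azimuth; axis aimed at table
    return gamma, i, pose, planar, camera
```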

Our grasp candidate model $p(\mathbf{u}|\mathbf{x})$ is a uniform distribution over points sampled on the object surface, with the approach direction defined by the inward-facing surface normal at each point.

We follow the observation model of [21], which we repeat here for convenience. Our observation model generates images as $\mathbf{y} = \alpha\hat{\mathbf{y}} + \epsilon$, where $\hat{\mathbf{y}}$ is a rendered depth image created using OSMesa offscreen rendering. We model $\alpha$ as a Gamma random variable capturing multiplicative depth noise, with fixed shape and scale. We model $\epsilon$ as zero-mean Gaussian Process noise over pixel coordinates, with fixed measurement noise and kernel bandwidth.
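A minimal sketch of this noise model follows; the Gamma shape, measurement noise, and bandwidth are illustrative, and the Gaussian Process is approximated by smoothing white noise with a small box kernel:

```python
import numpy as np
from scipy.signal import convolve2d

def corrupt_depth(depth, nu=1000.0, sigma=0.005, bw=2, rng=np.random.default_rng(0)):
    """Multiplicative Gamma noise (mean 1) plus spatially correlated additive noise."""
    alpha = rng.gamma(shape=nu, scale=1.0 / nu)        # multiplicative depth scaling
    eps = rng.normal(0.0, sigma, size=depth.shape)     # white measurement noise
    kernel = np.ones((bw, bw)) / bw**2                 # crude stand-in for a GP kernel
    return alpha * depth + convolve2d(eps, kernel, mode="same")
```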

Our grasp success model specifies a distribution over wrench resistance due to perturbations in object pose, gripper pose, friction coefficient, and the disturbing wrench to resist. We model the disturbing wrench as the wrench exerted by gravity on the object center of mass, corrupted by zero-mean Gaussian noise and assuming a mass of 1.0 kg. We model grasp perturbations by corrupting the suction target point with zero-mean Gaussian noise and the approach direction with zero-mean Gaussian noise in the rotational component of Lie algebra coordinates. We model state perturbations by corrupting the object pose with zero-mean Gaussian noise in Lie algebra coordinates, in both the translational and rotational components, and by corrupting the object center of mass with zero-mean Gaussian noise. We model grasp success as a Bernoulli random variable with parameter 1 if the perturbed grasp resists the disturbing wrench given the perturbed state and parameter 0 if not.

E-B Implementation Details

To efficiently implement sampling, we make several optimizations. First, we precompute the set of grasps for every 3D object model in the database and take a fixed number of samples of grasp success for each grasp, using quadratic programming for wrench resistance evaluation. We convert the samples to binary success labels by thresholding the sample mean. We also render a fixed number of depth images for each stable pose independently of grasp success evaluation. Finally, we sample a set of candidate grasps from the object in each depth image and transform the image to generate a suction grasp thumbnail centered on the target point and oriented to align the approach axis with the middle column of pixels for GQ-CNN training.
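The Monte Carlo labeling step can be summarized as follows; perturb_grasp, perturb_state, and resists are hypothetical helpers (the last standing in for the wrench-resistance QP above), and the 0.5 threshold is illustrative:

```python
import numpy as np

def robust_label(grasp, state, perturb_grasp, perturb_state, resists,
                 n_samples=100, threshold=0.5, rng=np.random.default_rng(0)):
    """Binary robustness label: thresholded sample mean of wrench resistance."""
    successes = sum(
        resists(perturb_grasp(grasp, rng), perturb_state(state, rng))
        for _ in range(n_samples)
    )
    return int(successes / n_samples >= threshold)
```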

Acknowledgments

This research was performed at the AUTOLAB at UC Berkeley in affiliation with the Berkeley AI Research (BAIR) Lab, the Real-Time Intelligent Secure Execution (RISE) Lab, and the CITRIS “People and Robots” (CPAR) Initiative. The authors were supported in part by donations from Siemens, Google, Honda, Intel, Comcast, Cisco, Autodesk, Amazon Robotics, Toyota Research Institute, ABB, Samsung, Knapp, and Loccioni, Inc., and by the Scalable Collaborative Human-Robot Learning (SCHooL) Project, NSF National Robotics Initiative Award 1734633. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Sponsors. We thank our colleagues who provided helpful feedback, code, and suggestions, in particular Ruzena Bajcsy, Oliver Brock, Peter Corke, Chris Correa, Ron Fearing, Roy Fox, Bernhard Guetl, Menglong Guo, Michael Laskey, Andrew Lee, Pusong Li, Jacky Liang, Sanjay Krishnan, Fritz Kuttler, Stephen McKinley, Juan Aparicio Ojea, Michael Peinhopf, Peter Puchwein, Alberto Rodriguez, Daniel Seita, Vishal Satish, and Shankar Sastry.

References

  • [1] A. Ali, M. Hosseini, and B. Sahari, “A review of constitutive models for rubber-like materials,” American Journal of Engineering and Applied Sciences, vol. 3, no. 1, pp. 232–239, 2010.
  • [2] B. Bahr, Y. Li, and M. Najafi, “Design and suction cup analysis of a wall climbing robot,” Computers & electrical engineering, vol. 22, no. 3, pp. 193–209, 1996.
  • [3] J. Bohg, A. Morales, T. Asfour, and D. Kragic, “Data-driven grasp synthesis—a survey,” IEEE Trans. Robotics, vol. 30, no. 2, pp. 289–309, 2014.
  • [4] N. Correll, K. E. Bekris, D. Berenson, O. Brock, A. Causo, K. Hauser, K. Okada, A. Rodriguez, J. M. Romano, and P. R. Wurman, “Analysis and observations from the first amazon picking challenge,” IEEE Transactions on Automation Science and Engineering, 2016.
  • [5] Y. Domae, H. Okuda, Y. Taguchi, K. Sumi, and T. Hirai, “Fast graspability evaluation on single depth maps for bin picking with general grippers,” in Robotics and Automation (ICRA), 2014 IEEE International Conference on.   IEEE, 2014, pp. 1997–2004.
  • [6] C. Eppner, S. Höfer, R. Jonschkowski, R. M. Martin, A. Sieverling, V. Wall, and O. Brock, “Lessons from the amazon picking challenge: Four aspects of building robotic systems.” in Robotics: Science and Systems, 2016.
  • [7] C. Ferrari and J. Canny, “Planning optimal grasps,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 1992, pp. 2290–2295.
  • [8] K. Goldberg, B. V. Mirtich, Y. Zhuang, J. Craig, B. R. Carlisle, and J. Canny, “Part pose statistics: Estimators and experiments,” IEEE Trans. Robotics and Automation, vol. 15, no. 5, pp. 849–857, 1999.
  • [9] R. Hartley and A. Zisserman, Multiple view geometry in computer vision.   Cambridge university press, 2003.
  • [10] C. Hernandez, M. Bharatheesha, W. Ko, H. Gaiser, J. Tan, K. van Deurzen, M. de Vries, B. Van Mil, J. van Egmond, R. Burger, et al., “Team delft’s robot winner of the amazon picking challenge 2016,” arXiv preprint arXiv:1610.05514, 2016.
  • [11] M. Jaderberg, K. Simonyan, A. Zisserman, et al., “Spatial transformer networks,” in Advances in Neural Information Processing Systems, 2015, pp. 2017–2025.
  • [12] E. Johns, S. Leutenegger, and A. J. Davison, “Deep learning a grasp function for grasping under gripper pose uncertainty,” in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS).   IEEE, 2016, pp. 4461–4468.
  • [13] I. Kao and M. R. Cutkosky, “Quasistatic manipulation with compliance and sliding,” Int. Journal of Robotics Research (IJRR), vol. 11, no. 1, pp. 20–40, 1992.
  • [14] I. Kao, K. Lynch, and J. W. Burdick, “Contact modeling and manipulation,” in Springer Handbook of Robotics.   Springer, 2008, pp. 647–669.
  • [15] D. Kappler, J. Bohg, and S. Schaal, “Leveraging big data for grasp planning,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2015.
  • [16] A. Kasper, Z. Xue, and R. Dillmann, “The kit object models database: An object model database for object recognition, localization and manipulation in service robotics,” Int. Journal of Robotics Research (IJRR), vol. 31, no. 8, pp. 927–934, 2012.
  • [17] R. Kolluru, K. P. Valavanis, and T. M. Hebert, “Modeling, analysis, and performance evaluation of a robotic gripper system for limp material handling,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 28, no. 3, pp. 480–486, 1998.
  • [18] R. Krug, Y. Bekiroglu, and M. A. Roa, “Grasp quality evaluation done right: How assumed contact force bounds affect wrench-based quality metrics,” in Robotics and Automation (ICRA), 2017 IEEE International Conference on.   IEEE, 2017, pp. 1595–1600.
  • [19] I. Lenz, H. Lee, and A. Saxena, “Deep learning for detecting robotic grasps,” Int. Journal of Robotics Research (IJRR), vol. 34, no. 4-5, pp. 705–724, 2015.
  • [20] S. Levine, P. Pastor, A. Krizhevsky, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” arXiv preprint arXiv:1603.02199, 2016.
  • [21] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg, “Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics,” in Proc. Robotics: Science and Systems (RSS), 2017.
  • [22] J. Mahler, F. T. Pokorny, B. Hou, M. Roderick, M. Laskey, M. Aubry, K. Kohlhoff, T. Kröger, J. Kuffner, and K. Goldberg, “Dex-net 1.0: A cloud-based network of 3d objects for robust grasp planning using a multi-armed bandit model with correlated rewards,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA).   IEEE, 2016.
  • [23] G. Mantriota, “Theoretical model of the grasp with vacuum gripper,” Mechanism and machine theory, vol. 42, no. 1, pp. 2–17, 2007.
  • [24] R. M. Murray, Z. Li, and S. S. Sastry, A mathematical introduction to robotic manipulation.   CRC press, 1994.
  • [25] L. Pinto and A. Gupta, “Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2016.
  • [26] X. Provot et al., “Deformation constraints in a mass-spring model to describe rigid cloth behaviour,” in Graphics interface.   Canadian Information Processing Society, 1995, pp. 147–147.
  • [27] R. Y. Rubinstein, A. Ridder, and R. Vaisman, Fast sequential Monte Carlo methods for counting and optimization.   John Wiley & Sons, 2013.
  • [28] A. Saxena, J. Driemeyer, and A. Y. Ng, “Robotic grasping of novel objects using vision,” The International Journal of Robotics Research, vol. 27, no. 2, pp. 157–173, 2008.
  • [29] H. S. Stuart, M. Bagheri, S. Wang, H. Barnard, A. L. Sheng, M. Jenkins, and M. R. Cutkosky, “Suction helps in a pinch: Improving underwater manipulation with gentle suction flow,” in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on.   IEEE, 2015, pp. 2279–2284.
  • [30] N. C. Tsourveloudis, R. Kolluru, K. P. Valavanis, and D. Gracanin, “Suction control of a robotic gripper: A neuro-fuzzy approach,” Journal of Intelligent & Robotic Systems, vol. 27, no. 3, pp. 215–235, 2000.
  • [31] A. J. Valencia, R. M. Idrovo, A. D. Sappa, D. P. Guingla, and D. Ochoa, “A 3d vision based approach for optimal grasp of vacuum grippers,” in Electronics, Control, Measurement, Signals and their Application to Mechatronics (ECMSM), 2017 IEEE International Workshop of.   IEEE, 2017, pp. 1–6.
  • [32] J. Weisz and P. K. Allen, “Pose error robust grasping from contact wrench space metrics,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA).   IEEE, 2012, pp. 557–562.
  • [33] W. Wohlkinger, A. Aldoma, R. B. Rusu, and M. Vincze, “3dnet: Large-scale object class recognition from cad models,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA).   IEEE, 2012, pp. 5384–5391.
  • [34] Y. Yoshida and S. Ma, “Design of a wall-climbing robot with passive suction cups,” in Robotics and Biomimetics (ROBIO), 2010 IEEE International Conference on.   IEEE, 2010, pp. 1513–1518.
  • [35] K.-T. Yu, N. Fazeli, N. Chavan-Dafle, O. Taylor, E. Donlon, G. D. Lankenau, and A. Rodriguez, “A summary of team mit’s approach to the amazon picking challenge 2015,” arXiv preprint arXiv:1604.03639, 2016.