Dex-Net 3.0: Computing Robust Vacuum Suction Grasp Targets in Point Clouds using a New Analytic Model and Deep Learning
Abstract
Vacuum-based end-effectors are widely used in industry and are often preferred over parallel-jaw and multi-finger grippers due to their ability to lift objects with a single point of contact. Suction grasp planners often target planar surfaces on point clouds near the estimated centroid of an object. In this paper, we propose a compliant suction contact model that computes the quality of the seal between the suction cup and the local target surface and a measure of the ability of the suction grasp to resist an external gravity wrench. To characterize grasps, we estimate robustness to perturbations in end-effector and object pose, material properties, and external wrenches. We analyze grasps across 1,500 3D object models to generate Dex-Net 3.0, a dataset of 2.8 million point clouds, suction grasps, and grasp robustness labels. We use Dex-Net 3.0 to train a Grasp Quality Convolutional Neural Network (GQ-CNN) to classify robust suction targets in point clouds containing a single object. We evaluate the resulting system in 350 physical trials on an ABB YuMi fitted with a pneumatic suction gripper. When evaluated on novel objects that we categorize as Basic (prismatic or cylindrical), Typical (more complex geometry), and Adversarial (with few available suction-grasp points), Dex-Net 3.0 achieves success rates of 98%, 82%, and 58%, respectively, improving to 81% in the latter case when the training set includes only adversarial objects. Code, datasets, and supplemental material can be found at http://berkeleyautomation.github.io/dexnet.
I Introduction
\seclabelintroduction
Suction grasping is widely used for pick-and-place tasks in industry and warehouse order fulfillment. As shown in the Amazon Picking Challenge, suction has an advantage over parallel-jaw or multi-finger grasping due to its ability to reach into narrow spaces and pick up objects with a single point of contact. However, while a substantial body of research exists on parallel-jaw and multi-finger grasp planning [3], comparatively little research has been published on planning suction grasps.
While grasp planning searches for gripper configurations that maximize a quality metric derived from mechanical wrench space analysis [24], human labels [28], or self-supervised labels [20], suction grasps are often planned directly on point clouds using heuristics such as grasping near the object centroid [10] or at the center of planar surfaces [4, 5]. These heuristics work well for prismatic objects such as boxes and cylinders but may fail on objects with non-planar surfaces near the object centroid, which is common for industrial parts and household objects such as staplers or children's toys. Analytic models of suction cups for grasp planning exist, but they typically assume that a vacuum seal has already been formed and that the state (e.g. shape and pose) of the object is perfectly known [2, 17, 23]. In practice, a robot may need to form seals on non-planar surfaces while being robust to external wrenches (e.g. gravity and disturbances), sensor noise, control imprecision, and calibration errors, which are significant factors when planning grasps from point clouds.
We propose a novel compliant suction contact model for rigid, non-porous objects that consists of two components: (1) a test for whether a seal can be formed between a suction cup and a target object surface and (2) an analysis of the ability of the suction contact to resist external wrenches. We use the model to evaluate grasp robustness by analyzing seal formation and wrench resistance under perturbations in object pose, suction tip pose, material properties, and disturbing wrenches using Monte-Carlo sampling similar to that in the Dexterity Network (Dex-Net) 1.0 [22].
This paper makes four contributions:

A compliant suction contact model that quantifies seal formation using a quasi-static spring system and the ability to resist external wrenches (e.g. gravity) using a contact wrench basis derived from the ring of contact between the cup and object surface.

Robust wrench resistance: a robust version of the above model under random disturbing wrenches and perturbations in object pose, gripper pose, and friction.

Dex-Net 3.0, a dataset of 2.8 million synthetic point clouds annotated with suction grasps and grasp robustness labels generated by analyzing robust wrench resistance for approximately 375k grasps across 1,500 object models.

Physical robot experiments measuring the precision of robust wrench resistance both with and without knowledge of the target object’s shape and pose.
We perform physical experiments using an ABB YuMi robot with a silicone suction cup tip to compare the precision of a GQ-CNN-based grasping policy trained on Dex-Net 3.0 with several heuristics such as targeting planar surfaces near object centroids. We find that the method achieves success rates of 98%, 82%, and 58% on datasets of Basic (prismatic or cylindrical), Typical (more complex geometry), and Adversarial (with few available suction-grasp points) objects, respectively.
II Related Work
\seclabelrelatedwork End-effectors based on suction are widely used in industrial applications such as warehouse order fulfillment, handling limp materials such as fabric [17], and robotics applications such as the Amazon Picking Challenge [4], underwater manipulation [29], or wall climbing [2]. Our method builds on models of deformable materials, analyses of the wrenches that suction cups can exert, and data-driven grasp planning.
II-A Suction Models
Several models for the deformation of stiff rubber-like materials exist in the literature. Provot et al. [26] modeled sheets of stiff cloth and rubber using a spring-mass system with several types of springs. Hosseini et al. [1] provided a survey of more modern constitutive models of rubber that are often used in Finite Element Analysis (FEA) packages for more realistic physics simulations. In order to rapidly evaluate whether a suction cup can form a seal against an object's surface, we model the cup as a quasi-static spring system with a topology similar to the one in [26] and estimate the deformation energy required to maintain a seal.
In addition, several models have been developed to check for static equilibrium assuming a seal between the suction cup and the object’s surface. Most models consider the suction cup to be a rigid object and model forces into the object along the surface normal, tangential forces due to surface friction, and pulling forces due to suction [17, 29, 31]. Bahr et al. [2] augmented this model with the ability to resist moments about the center of suction to determine the amount of vacuum pressure necessary to keep a climbing robot attached to a vertical wall. Mantriota [23] modeled torsional friction due to a contact area between the cup and object similar to the soft finger contact model used in grasping [14]. Our model extends these methods by combining models of torsional friction [23] and contact moments [2] in a compliant model of the ring of contact between the cup and object.
II-B Grasp Planning
The goal of grasp planning is to select a configuration for an end-effector that enables a robot to perform a task via contact with an object while resisting external perturbations [3], which can be arbitrary [7] or task-specific [18]. A common approach is to select a configuration that maximizes a quality metric (or reward) based on wrench space analysis [24], robustness to perturbations [32], or a model learned from human labels [15] or self-supervision [25].
Several similar metrics exist for evaluating suction grasps. One common approach is to evaluate whether or not a set of suction cups can lift an object by applying an upwards force [17, 29, 30, 31]. Domae et al. [5] developed a geometric model to evaluate suction success by convolving target locations in images with a desired suction contact template to assess planarity. Heuristics for planning suction grasps from point clouds have also been used extensively in the Amazon Robotics Challenge. In 2015, Team RBO [6] won by pushing objects from the top or side until suction was achieved, and Team MIT [35] came in second place by suctioning on the centroid of objects with flat surfaces. In 2016, Team Delft [10] won the challenge by approaching the estimated object centroid along the inward surface normal. In 2017, Cartman [morrison2017cartman] won the challenge by ranking suction grasps according to heuristics such as maximizing distance to the segmented object boundary, and MIT [zeng2017robotic] performed well using a neural network trained to predict grasp affordance maps from human-labeled RGB-D point clouds. In this work, we present a novel metric that evaluates whether a single suction cup can resist external wrenches under perturbations in object and gripper poses, friction coefficient, and disturbing wrenches.
This paper also extends empirical, data-driven approaches to grasp planning based on images and point clouds [3]. A popular approach is to use human labels of graspable regions in RGB-D images [19] or point clouds [15] to learn a grasp detector with computer vision techniques. As labeling may be tedious for humans, an alternative is to automatically collect training data from a physical robot [20, 25]. To reduce the time cost of data collection, recent research has proposed to generate labels in simulation using physical models of contact [12, 15]. Mahler et al. [21] demonstrated that a GQ-CNN trained on Dex-Net 2.0, a dataset of 6.7 million point clouds, grasps, and quality labels computed with robust quasi-static analysis, could be used to successfully plan parallel-jaw grasps across a wide variety of objects with 99% precision. In this paper, we use a similar approach to generate a dataset of point clouds, grasps, and robustness labels for a suction-based end-effector.
III Problem Statement
\seclabelproblemstatement Given a point cloud from a depth camera, our goal is to find a robust suction grasp (target point and approach direction) for a robot to lift an object in isolation on a planar work surface and transport it to a receptacle. We compute the suction grasp that maximizes the probability that the robot can hold the object under gravity and perturbations sampled from a distribution over sensor noise, control imprecision, and random disturbing wrenches.
III-A Assumptions
\seclabelassumptions Our stochastic model makes the following assumptions:

Quasi-static physics (e.g. inertial terms are negligible) with Coulomb friction.

Objects are rigid and made of non-porous material.

Each object is singulated on a planar work surface in a stable resting pose [8].

A single overhead depth sensor with known intrinsics, position, and orientation relative to the robot.

A vacuum-based end-effector with known geometry and a single disc-shaped suction cup made of linear-elastic material.
III-B Definitions
\seclabeldescription
A robot observes a single-view point cloud or depth image, $\mathbf{y}$, containing a singulated object. The goal is to find the most robust suction grasp $\mathbf{u}$ that enables the robot to lift an object and transport it to a receptacle, where grasps are parametrized by a target point $\mathbf{p} \in \mathbb{R}^3$ and an approach direction $\mathbf{v} \in S^2$. Success is measured with a binary grasp reward function $R$, where $R = 1$ if the grasp successfully transports the object, and $R = 0$ otherwise.
The robot may not be able to predict the success of suction grasps exactly from point clouds for several reasons. First, the success metric depends on a state $\mathbf{x}$ describing the object's geometric, inertial, and material properties and the pose of the object relative to the camera, but the robot does not know the true state due to: (a) noise in the depth image and (b) occlusions due to the single viewpoint. Second, the robot may not have perfect knowledge of external wrenches (forces and torques) on the object due to gravity or external disturbances.
This probabilistic relationship is described by an environment consisting of a grasp success distribution modeling $p(R \mid \mathbf{x}, \mathbf{u})$, the ability of a grasp to resist random disturbing wrenches, and an observation model $p(\mathbf{y} \mid \mathbf{x})$. This model induces a probability of success for each grasp conditioned on the robot's observation:
Definition 1
The robustness of a grasp $\mathbf{u}$ given a point cloud $\mathbf{y}$ is the probability of grasp success under uncertainty in sensing, control, and disturbing wrenches: $Q(\mathbf{u}, \mathbf{y}) = \mathbb{E}\left[R \mid \mathbf{u}, \mathbf{y}\right]$.
Our environment model is described in \secrefdataset and further details are given in the supplemental file.
III-C Objective
\seclabelobjective Our ultimate goal is to find a grasp that maximizes robustness given a point cloud, $\pi^*(\mathbf{y}) = \operatorname{argmax}_{\mathbf{u} \in \mathcal{C}} Q(\mathbf{u}, \mathbf{y})$, where $\mathcal{C}$ specifies constraints on the set of available grasps, such as collisions or kinematic feasibility. We approximate $\pi^*$ by optimizing the weights $\theta$ of a deep Grasp Quality Convolutional Neural Network (GQ-CNN) $Q_\theta(\mathbf{u}, \mathbf{y})$ on a training dataset $\mathcal{D} = \{(R_i, \mathbf{y}_i, \mathbf{u}_i)\}_{i=1}^{N}$ consisting of reward values, point clouds, and suction grasps sampled from our stochastic model of grasp success. Our optimization objective is to find weights $\theta$ that minimize the cross-entropy loss $\mathcal{L}$ over $\mathcal{D}$:
$$\theta^* = \operatorname*{argmin}_{\theta \in \Theta} \sum_{i=1}^{N} \mathcal{L}\left(R_i, Q_\theta(\mathbf{u}_i, \mathbf{y}_i)\right) \qquad \text{(III.1)}$$
IV Compliant Suction Contact Model
\seclabelcontactmodel
To quantify grasp robustness, we develop a quasi-static spring model of the suction cup material and a model of contact wrenches that the suction cup can apply to the object through a ring of contact on the suction cup perimeter. Under our model, the reward $R = 1$ if:

A seal is formed between the perimeter of the suction cup and the object surface.

Given a seal, the suction cup is able to resist an external wrench on the object due to gravity and disturbances.
IV-A Seal Formation
A suction cup can lift objects due to an air pressure differential induced across the membrane of the cup by a vacuum generator that forces the object into the cup. If a gap exists between the perimeter of the cup and the object, then air flowing into the gap may reduce the differential and cause the grasp to fail. Therefore, a tight seal between the cup and the target object is important for achieving a successful grasp.
To determine when seal formation is possible, we model circular suction cups as a conical spring system $\mathcal{C}$ parameterized by real numbers $(n, r, h)$, where $n$ is the number of vertices along the contact ring, $r$ is the radius of the cup, and $h$ is the height of the cup. See \figrefmodel for an illustration.
Rather than performing a computationally expensive dynamic simulation with a spring-mass model to determine when seal formation is feasible, we make simplifying assumptions to evaluate seal formation geometrically. Specifically, we compute a configuration of $\mathcal{C}$ that achieves a seal by projecting $\mathcal{C}$ onto the surface of the target object's triangular mesh $M$ and evaluate the feasibility of that configuration under quasi-static conditions as a proxy for the dynamic feasibility of seal formation.
In our model, $\mathcal{C}$ has two types of springs – structural springs that represent the physical structure of the suction cup and flexion springs that do not correspond to physical structures but instead are used to resist bending along the cup's surface. Dynamic spring-mass systems with similar structures have been used in prior work to model stiff sheets of rubber [26]. The undeformed structural springs of $\mathcal{C}$ form a right pyramid with height $h$ and a base that is a regular $n$-gon with circumradius $r$. Let $\mathcal{V} = \{v_1, \ldots, v_n, a\}$ be the set of vertices of the undeformed right pyramid, where each $v_i$ is a base vertex and $a$ is the pyramid's apex. We define the model's set of springs as follows:

Perimeter (Structural) Springs – springs linking vertex $v_i$ to vertex $v_{i+1}$, $i \in \{1, \ldots, n\}$ (indices modulo $n$).

Cone (Structural) Springs – springs linking vertex $v_i$ to the apex $a$, $i \in \{1, \ldots, n\}$.

Flexion Springs – springs linking vertex $v_i$ to vertex $v_{i+2}$, $i \in \{1, \ldots, n\}$ (indices modulo $n$).
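To make the topology concrete, the three spring types can be enumerated programmatically. The sketch below is our own illustrative code (the function name and return format are assumptions, not the released Dex-Net code); it builds the undeformed vertex set with the apex at index 0 and base vertices 1 through n, matching the indexing in the text:

```python
import numpy as np

def cup_spring_model(n, r, h):
    """Vertices and springs of the undeformed conical suction cup model.

    n: number of base vertices on the contact ring
    r: circumradius of the base n-gon (cup radius)
    h: height of the cup (apex height above the base plane)
    """
    # Base vertices v_1..v_n on a regular n-gon; apex a at height h.
    angles = 2.0 * np.pi * np.arange(n) / n
    base = np.stack([r * np.cos(angles), r * np.sin(angles), np.zeros(n)], axis=1)
    apex = np.array([[0.0, 0.0, h]])
    vertices = np.vstack([apex, base])  # index 0 = apex, 1..n = base

    # Perimeter springs: v_i -- v_{i+1} (indices modulo n).
    perimeter = [(i, i % n + 1) for i in range(1, n + 1)]
    # Cone springs: apex -- v_i.
    cone = [(0, i) for i in range(1, n + 1)]
    # Flexion springs: v_i -- v_{i+2} (indices modulo n).
    flexion = [(i, (i + 1) % n + 1) for i in range(1, n + 1)]
    return vertices, perimeter, cone, flexion
```

Each spring is stored as a pair of vertex indices, so deformation energies can later be computed from distances between the projected vertex positions.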
In the model, a complete seal is formed between $\mathcal{C}$ and $M$ if and only if each of the perimeter springs of $\mathcal{C}$ lies entirely on the surface of $M$. Given a target mesh $M$ and a target grasp $\mathbf{u} = (\mathbf{p}, \mathbf{v})$ for the gripper, we choose an initial configuration of $\mathcal{C}$ such that $\mathcal{C}$ is undeformed and the approach line passes through $\mathbf{p}$ and is orthogonal to the base of $\mathcal{C}$. Then, we make the following assumptions to determine a final static contact configuration of $\mathcal{C}$ that forms a complete seal against $M$ (see \figrefspringapproach):

The perimeter springs of $\mathcal{C}$ must not deviate from the original undeformed regular $n$-gon when projected onto a plane orthogonal to $\mathbf{v}$. This means that their locations can be computed by projecting them along $\mathbf{v}$ from their original locations onto the surface of $M$.

The apex, $a$, of $\mathcal{C}$ must lie on the approach line and, given the locations of $\mathcal{C}$'s base vertices, must also lie at a location that keeps the average distance along $\mathbf{v}$ between $a$ and the perimeter vertices equal to $h$.
See the supplemental file for additional details.
Given this configuration, a seal is feasible if:

The cone faces of $\mathcal{C}$ do not collide with $M$ during approach or in the contact configuration.

The surface of $M$ has no holes within the contact ring traced out by $\mathcal{C}$'s perimeter springs.

The energy required in each spring to maintain the contact configuration of $\mathcal{C}$ is below a real-valued threshold $E$ modeling the maximum deformation of the suction cup material against the object surface.
We threshold the energy in individual springs rather than the total energy for $\mathcal{C}$ because air gaps are usually caused by local geometry.
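The per-spring energy test can be sketched as follows. The stiffness k and per-spring threshold e_max are hypothetical parameters here (the paper calibrates its threshold against physical trials), and the quadratic spring energy is the standard quasi-static form:

```python
import numpy as np

def seal_energy_feasible(vertices, springs, rest_lengths, k, e_max):
    """Check the per-spring energy criterion for a candidate contact
    configuration.

    vertices:     (n+1, 3) array of projected vertex positions
    springs:      list of (i, j) vertex-index pairs
    rest_lengths: undeformed length of each spring
    k, e_max:     assumed stiffness and per-spring energy threshold
    """
    for (i, j), l0 in zip(springs, rest_lengths):
        l = np.linalg.norm(vertices[i] - vertices[j])
        energy = 0.5 * k * (l - l0) ** 2  # quasi-static spring energy
        if energy > e_max:
            return False  # a local air gap is likely: seal infeasible
    return True
```

Thresholding each spring individually (rather than summing) mirrors the rationale in the text: a single badly stretched spring indicates a local gap even if total energy is low.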
IV-B Wrench Space Analysis
To determine the degree to which the suction cup can resist external wrenches such as gravity, we analyze the set of wrenches that the suction cup can apply.
IV-B1 Wrench Resistance
The object wrench set for a grasp using a contact model with $m$ basis wrenches is $\Lambda = \{\mathbf{w} \in \mathbb{R}^6 \mid \mathbf{w} = G\boldsymbol{\alpha} \text{ for some } \boldsymbol{\alpha} \in \mathcal{F}\}$, where $G \in \mathbb{R}^{6 \times m}$ is a set of basis wrenches in the object coordinate frame, and $\mathcal{F} \subseteq \mathbb{R}^m$ is a set of constraints on contact wrench magnitudes [24].
We encode wrench resistance as a binary variable $W$ such that $W = 1$ if the grasp resists the wrench $\mathbf{w}$ and $W = 0$ otherwise.
IV-B2 Suction Contact Model
Many suction contact models consider normal forces, vacuum forces, tangential friction, and torsional friction [2, 17, 23, 29], similar to a point contact with friction or soft finger model [24]. However, under this model, a single suction cup cannot resist torques about axes in the contact tangent plane, implying that any torque about such axes would cause the suction cup to drop an object (see the supplementary material for a detailed proof). This defies our intuition since empirical evidence suggests that a single point of suction can robustly transport objects [6, 10].
We hypothesize that these torques are resisted through an asymmetric pressure distribution on the ring of contact between the suction cup and object, which occurs due to passive elastic restoring forces in the material. \figrefmodel illustrates the suction ring contact model. The grasp map $G$ is defined by the following basis wrenches:

Actuated Normal Force ($f_z$): The force that the suction cup material applies by pressing into the object along the contact axis.

Vacuum Force ($V$): The magnitude of the constant force pulling the object into the suction cup coming from the air pressure differential.

Frictional Force ($f_x, f_y$): The force in the contact tangent plane due to the normal force between the suction cup and object, $f_N = f_z + V$.

Torsional Friction ($\tau_z$): The torque resulting from frictional forces in the ring of contact.

Elastic Restoring Torque ($\tau_x, \tau_y$): The torque about axes in the contact tangent plane resulting from elastic restoring forces in the suction cup pushing on the object along the boundary of the contact ring.
The magnitudes of the contact wrenches are constrained due to (a) the friction limit surface [14], (b) limits on the elastic behavior of the suction cup material, and (c) limits on the vacuum force. In the suction ring contact model, $\mathcal{F}$ is approximated by a set of linear constraints for efficient computation of wrench resistance:
Friction: $\sqrt{3}|f_x| \leq \mu f_N$, $\sqrt{3}|f_y| \leq \mu f_N$, $\sqrt{3}|\tau_z| \leq r \mu f_N$
Material: $\sqrt{2}|\tau_x| \leq \pi r \kappa$, $\sqrt{2}|\tau_y| \leq \pi r \kappa$
Suction: $V \leq V_{\max}$
Here $\mu$ is the friction coefficient, $r$ is the radius of the contact ring, and $\kappa$ is a material-dependent constant modeling the maximum stress for which the suction cup has linear-elastic behavior. These constraints define a subset of the friction limit ellipsoid and cone of admissible elastic torques under a linear pressure distribution about the ring of the cup. Furthermore, we can compute wrench resistance using quadratic programming due to the linearity of the constraints. See the supplemental file for a detailed derivation and proof.
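Because the constraints on the basis wrench magnitudes are linear, checking wrench resistance reduces to a small quadratic program: minimize the residual between an achievable wrench $G\boldsymbol{\alpha}$ and the target wrench $\mathbf{w}$ subject to the linear constraints, and declare resistance if the residual is (near) zero. A minimal sketch using scipy; the function name, the generic constraint format A·alpha ≤ b, and the tolerance are our own assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def resists_wrench(G, w, A, b, tol=1e-3):
    """QP-based wrench resistance check.

    G : (d, m) grasp map of basis wrenches
    w : (d,) target wrench to resist (e.g. gravity)
    A, b : linear constraints A @ alpha <= b on wrench magnitudes
    Solves min ||G @ alpha - w||^2  s.t.  A @ alpha <= b and reports
    resistance when the optimal residual is below tol.
    """
    m = G.shape[1]
    objective = lambda alpha: np.sum((G @ alpha - w) ** 2)
    constraints = [{"type": "ineq", "fun": lambda alpha: b - A @ alpha}]
    res = minimize(objective, np.zeros(m), constraints=constraints,
                   method="SLSQP")
    return bool(res.success) and float(np.sqrt(res.fun)) < tol
```

In 6-D wrench space, G would hold the five basis wrenches above plus the vacuum wrench, and A, b would encode the friction, material, and suction limits.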
IV-C Robust Wrench Resistance
\seclabelrobustwr We evaluate the robustness of candidate suction grasps by analyzing seal formation and wrench resistance over distributions on object pose, grasp pose, and disturbing wrenches:
Definition 3
The robust wrench resistance metric for a grasp $\mathbf{u}$ and state $\mathbf{x}$ is $\mathbb{P}(W = 1 \mid \mathbf{u}, \mathbf{x})$, the probability of success under perturbations in object pose, gripper pose, friction, and disturbing wrenches.
In practice, we evaluate robust wrench resistance by taking $N$ samples, evaluating binary wrench resistance for each, and computing the sample mean: $\frac{1}{N}\sum_{i=1}^{N} W_i$.
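The sample-mean estimator can be sketched directly. Here sample_perturbation and wrench_resistance are hypothetical stand-ins for the perturbation sampler and the contact-model evaluation, not interfaces from the released code:

```python
import numpy as np

def robust_wrench_resistance(grasp, state, sample_perturbation,
                             wrench_resistance, n_samples=100, seed=0):
    """Monte-Carlo estimate of robust wrench resistance.

    sample_perturbation(grasp, state, rng) draws one perturbed
    (grasp, state, wrench) tuple; wrench_resistance(...) returns the
    binary resistance value W for that sample.
    """
    rng = np.random.default_rng(seed)
    successes = sum(
        bool(wrench_resistance(*sample_perturbation(grasp, state, rng)))
        for _ in range(n_samples))
    return successes / n_samples
```

The estimate converges to the true probability at the usual $O(1/\sqrt{N})$ Monte-Carlo rate, so a few hundred samples per grasp typically suffice for labeling.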
V Dex-Net 3.0 Dataset
\seclabeldataset
To learn to predict grasp robustness based on noisy point clouds, we generate the Dex-Net 3.0 training dataset of point clouds, grasps, and grasp reward labels by sampling tuples $(R_i, \mathbf{y}_i, \mathbf{u}_i)$ from a joint distribution modeled as the product of distributions on:

States $p(\mathbf{x})$: A uniform distribution over a discrete dataset of objects and their stable poses and uniform continuous distributions over the object planar pose and camera poses in a bounded region of the workspace.

Grasp Candidates $p(\mathbf{u} \mid \mathbf{x})$: A uniform random distribution over contact points on the object surface.

Grasp Rewards $p(R \mid \mathbf{u}, \mathbf{x})$: A stochastic model of wrench resistance for the gravity wrench that is sampled by perturbing the gripper pose according to a Gaussian distribution and evaluating the contact model described in \secrefcontactmodel.

Observations $p(\mathbf{y} \mid \mathbf{x})$: A depth sensor noise model with multiplicative and Gaussian process pixel noise.
\figrefdataset illustrates a subset of the Dex-Net 3.0 object and grasp dataset. The parameters of the sampling distributions and compliant suction contact model (see \secrefcontactmodel) were set by maximizing the average precision of the reward labels using grid search for a set of grasps attempted on an ABB YuMi robot on a set of known 3D printed objects (see \secrefobjectdatasets).
Our pipeline for generating training tuples is illustrated in \figrefpipeline. We first sample a state $\mathbf{x}$ by selecting an object at random from a database of 3D CAD models and sampling a friction coefficient, planar object pose, and camera pose relative to the work surface. We generate a set of grasp candidates for the object by sampling points and normals uniformly at random from the surface of the object mesh. We then set the binary reward label $R = 1$ if a seal is formed and robust wrench resistance (described in \secrefrobustwr) is above a threshold value $\delta$. Finally, we sample a point cloud $\mathbf{y}$ of the scene using rendering and a model of image noise [22]. The grasp success labels are associated with pixel locations in images through perspective projection [9]. A graphical model for the sampling process and additional details on the distributions can be found in the supplemental file.
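The sampling stages above compose into a simple generation loop. The following sketch uses hypothetical callables for each stage (state sampling, grasp sampling, seal checking, robustness evaluation, rendering); it illustrates the control flow and labeling rule, not the released pipeline:

```python
import numpy as np

def generate_dataset(objects, sample_state, sample_grasps, seal_formed,
                     robust_wrench_resistance, render_depth_image,
                     n_states, n_grasps, delta=0.5, rng=None):
    """Sketch of the training-tuple generation pipeline.

    Every callable argument is an assumed interface standing in for one
    pipeline stage; delta is the robustness threshold for positive labels.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    dataset = []
    for _ in range(n_states):
        # 1. Sample a state: object, friction, planar pose, camera pose.
        obj = objects[rng.integers(len(objects))]
        state = sample_state(obj, rng)
        # 2. Sample grasp candidates uniformly from the object surface.
        for grasp in sample_grasps(state, n_grasps, rng):
            # 3. Binary reward: seal formed AND robustness above delta.
            reward = int(seal_formed(grasp, state) and
                         robust_wrench_resistance(grasp, state) > delta)
            # 4. Render a noisy depth image and store the tuple.
            dataset.append((render_depth_image(state, rng), grasp, reward))
    return dataset
```

Each stored tuple corresponds to one $(\mathbf{y}_i, \mathbf{u}_i, R_i)$ training example after the grasp is projected into image coordinates.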
VI Learning a Deep Robust Grasping Policy
\seclabelgqcnn
We use the Dex-Net 3.0 dataset to train a GQ-CNN that takes as input a single-view point cloud of an object resting on the table and a candidate suction grasp defined by a target 3D point and approach direction, and outputs the robustness, or estimated probability of success, for the grasp on the visible object.
Our GQ-CNN architecture is identical to Dex-Net 2.0 [21] except that we modify the pose input stream to include the angle between the approach direction and the table normal. The point cloud stream takes a depth image centered on the target point and rotated to align the middle column of pixels with the approach orientation, similar to a spatial transforming layer [11]. The end-effector depth from the camera and orientation are input to a fully connected layer in a separate pose stream and concatenated with conv features in a fully connected layer. We train the GQ-CNN using stochastic gradient descent with momentum and an 80-20 training-to-validation image-wise split of the Dex-Net 3.0 dataset. Training took approximately 12 hours on three NVIDIA Titan X GPUs. The learned GQ-CNN achieves 93.5% classification accuracy on the held-out validation set.
We use the GQ-CNN in a deep robust grasping policy to plan suction target grasps from point clouds on a physical robot. The policy uses the Cross-Entropy Method (CEM) [20, 21, 27]. CEM samples a set of initial candidate grasps uniformly at random from the set of surface points and inward-facing normals on a point cloud of the object, then iteratively resamples grasps from a Gaussian Mixture Model fit to the grasps with the highest predicted probability of success. See the supplemental file for example grasps planned by the policy.
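The CEM loop can be sketched as follows. For brevity this refits a single diagonal Gaussian rather than the Gaussian Mixture Model used in the paper, and predict_quality is a stand-in for the GQ-CNN forward pass over grasp parameter vectors:

```python
import numpy as np

def cem_grasp_policy(candidates, predict_quality, n_iters=3,
                     n_samples=50, elite_frac=0.25, seed=0):
    """Cross-entropy method over grasp candidates.

    candidates:      (N, d) array of grasp parameter vectors
    predict_quality: callable mapping (k, d) grasps -> (k,) quality scores
    """
    rng = np.random.default_rng(seed)
    # Seed the search with uniformly drawn candidates from the point cloud.
    samples = candidates[rng.integers(len(candidates), size=n_samples)]
    for _ in range(n_iters):
        quality = predict_quality(samples)
        # Keep the elite fraction with the highest predicted quality.
        n_elite = max(1, int(elite_frac * n_samples))
        elite = samples[np.argsort(quality)[-n_elite:]]
        # Refit a diagonal Gaussian to the elites and resample.
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        samples = rng.normal(mu, sigma, size=(n_samples, candidates.shape[1]))
    quality = predict_quality(samples)
    return samples[np.argmax(quality)]
```

In the actual policy, resampled grasps would also be projected back onto the observed point cloud and filtered for collisions and kinematic feasibility before execution.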
VII Experiments
\seclabelexperiments
We ran experiments to characterize the precision of robust wrench resistance when object shape and pose are known and the precision of our deep robust grasping policy for planning grasps from point clouds for three object classes.
VII-A Object Classes
\seclabelobjectdatasets We created a dataset of 55 rigid and non-porous objects including tools, groceries, office supplies, toys, and 3D printed industrial parts. We separated objects into three categories, illustrated in \figrefobjectdatasets:

Basic: Prismatic solids (e.g. rectangular prisms, cylinders). Includes 25 objects.

Typical: Common objects with varied geometry and many accessible, approximately planar surfaces. Includes 25 objects.

Adversarial: 3D-printed objects with complex geometry (e.g. curved or narrow surfaces) that are difficult to access. Includes 5 objects.
For object details, see http://bit.ly/2xMcx3x.
VII-B Experimental Protocol
We ran experiments with an ABB YuMi with a Primesense Carmine 1.09 and a suction system with a 15 mm diameter silicone single-bellow suction cup and a VM5NC VacMotion vacuum generator with a payload of approximately 0.9 kg. The experimental workspace is illustrated in the left panel of \figrefobjectdatasets. In each experiment, the operator iteratively presented a target object to the robot and the robot planned and executed a suction grasp on the object. The operator labeled successes based on whether or not the robot was able to lift and transport the object to the side of the workspace. For each method, we measured:

Average Precision (AP). The area under the precision-recall curve, which measures precision over possible thresholds on the probability of success predicted by the policy. This is useful for industrial applications where a robot may take an alternative action (e.g. asking for help) if the planned grasp is predicted to fail.

Success Rate. The fraction of all grasps that were successful.
All experiments ran on a desktop running Ubuntu 14.04 with a 2.7 GHz Intel Core i5-6400 Quad-Core CPU and an NVIDIA GeForce 980 GPU.
VII-C Performance on Known Objects
\seclabelknown To assess the performance of our robustness metric independently of the perception system, we evaluated whether or not the metric was predictive of suction grasp success when object shape and pose were known, using the 3D printed Adversarial objects (right panel of \figrefobjectdatasets). The robot was presented one of the five Adversarial objects in a known stable pose, selected from the top three most probable stable poses. We hand-aligned the object to a template image generated by rendering the object in a known pose on the table. Then, we indexed a database of grasps precomputed on 3D models of the objects and executed the grasp with the highest metric value for five trials. In total, there were 75 trials per experiment.
We compared the following metrics:

PlanarityCentroid (PC3D). The inverse distance to the object centroid for sufficiently planar patches on the 3D object surface.

Spring Stretch (SS). The maximum stretch among virtual springs in the suction contact model.

Wrench Resistance (WR). Our model without perturbations.

Robust Wrench Resistance (RWR). Our model.
The RWR metric performed best with 99% AP, compared to 93% AP for WR, 89% AP for SS, and 88% AP for PC3D.
VII-D Performance on Novel Objects
\seclabelnovel We also evaluated the performance of GQ-CNNs trained on Dex-Net 3.0 for planning suction target points from a single-view point cloud. In each experiment, the robot was presented one object from either the Basic, Typical, or Adversarial classes in a pose randomized by shaking the object in a box and placing it on the table. The object was imaged with a depth sensor and segmented using 3D bounds on the workspace. Then, the grasping policy executed the most robust grasp according to a success metric. In this experiment the human operators were blinded from the method they were evaluating to remove bias in human labels.
We compared policies that optimized the following metrics:

Planarity. The inverse sum of squared errors from an approach plane for points within a disc with radius equal to that of the suction cup.

Centroid. The inverse distance to the object centroid.

PlanarityCentroid (PC). The inverse distance to the centroid for planar patches on the 3D object surface.

GQ-CNN (ADV). Our GQ-CNN trained on synthetic data from the Adversarial objects (to assess the ability of the model to fit complex objects).

GQ-CNN (DN3). Our GQ-CNN trained on the full Dex-Net 3.0 dataset.
\tabrefpolicyresults details performance on the Basic, Typical, and Adversarial objects. On the Basic and Typical objects, we see that the Dex-Net 3.0 policy is comparable to PC in terms of success rate and has near-perfect AP, suggesting that failed grasps often have low robustness and can therefore be detected. On the Adversarial objects, GQ-CNN (ADV) significantly outperforms GQ-CNN (DN3) and PC, suggesting that this method can be used to successfully grasp objects with complex surface geometry as long as the training dataset closely matches the objects seen at runtime. The DN3 policy took an average of 3.0 seconds per grasp.

                     |    Basic     |   Typical    | Adversarial
Metric               | AP  | Succ.  | AP  | Succ.  | AP  | Succ.
Planarity            | 81  |  74    | 69  |  67    | 48  |  47
Centroid             | 89  |  92    | 80  |  78    | 47  |  38
Planarity-Centroid   | 98  |  94    | 94  |  86    | 64  |  62
GQ-CNN (ADV)         | 83  |  77    | 75  |  67    | 86  |  81
GQ-CNN (DN3)         | 99  |  98    | 97  |  82    | 61  |  58

(All values in percent.) \tablabelpolicyresults
VII-E Failure Modes
The most common failure mode was attempting to form a seal on surfaces whose geometry prevents seal formation. This is partially due to the limited resolution of the depth sensor, as our seal formation model is able to detect the inability to form a seal on such surfaces when the geometry is known precisely. In contrast, the planarity-centroid metric performs poorly on objects with non-planar surfaces near the object centroid.
VIII Future Work
\seclabeldiscussion In future work we will study sensitivity to (1) the distribution of 3D object models used in the training dataset, (2) noise and resolution in the depth sensor, and (3) variations in vacuum suction hardware (e.g. cup shape, hardness of cup material). We will also extend this model to learning suction grasping policies for bin-picking with heaps of parts and to composite policies that combine suction grasping with parallel-jaw grasping by a two-armed robot. We are also working with colleagues in the robot grasping community to propose shareable benchmarks and protocols that specify experimental objects and conditions with industry-relevant metrics such as Mean Picks Per Hour (MPPH); see http://goo.gl/6M5rfw.
Appendix A Additional Experiments
\seclabeladditionalexperiments To better characterize how well our robust wrench resistance metric, compliant suction contact model, and GQ-CNN-based policy for planning suction target grasps from point clouds correlate with physical outcomes on a real robot, we present several additional analyses and experiments.
A-A Performance Metrics
Our primary numeric metrics of performance were:

Average Precision (AP). The area under the precision-recall curve, which measures precision over possible thresholds on the probability of success predicted by the policy. This is useful for industrial applications where a robot may take an alternative action (e.g. probing, asking for help) if the planned grasp is predicted to fail.

Success Rate. The fraction of all grasps that were successful.
We argue that these metrics alone do not give a complete picture of how well a suction grasp policy would work in practice. Average Precision (AP) penalizes a policy for having poor recall (a high rate of false negatives relative to true positives), and success rate penalizes a policy with a high number of failures. However, not all failures should be treated equally: some failures are predicted by the GQCNN (low predicted probability of success) while others are the result of an overconfident prediction.
In practice, a suction grasp policy would be part of a larger system (e.g. a state machine) that could decide whether or not to execute a grasp based on the continuous probability of success output by the GQCNN. As long as the policy is not overconfident, such a system can detect failures before they occur and take an alternative action such as attempting to turn the object over, asking a human for help, or leaving the object in the bin for error handling. At the same time, if a policy is too conservative and never predicts successes, then the system will be able to handle very few test cases.
We illustrate this tradeoff by plotting the Success-Attempt Rate curve, which plots:

Success Rate. The fraction of grasps that are successful if the system only executes grasps that have predicted probability of success greater than a confidence threshold.

Attempt Rate. The fraction of all test cases for which the system attempts a grasp, if the system only attempts grasps with predicted probability of success greater than the confidence threshold.

over all possible values of the confidence threshold. There is typically an inverse relationship between the two metrics: a higher confidence threshold will reduce false positives but will also reduce the frequency of grasp attempts, increasing runtime and decreasing the diversity of cases that the robot is able to successfully handle.
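For concreteness, these two quantities can be sketched in a few lines of Python. This is a minimal illustration, not the evaluation code used in our experiments, and the toy arrays in the test below are not measured data.

```python
import numpy as np

def average_precision(probs, labels):
    """Area under the precision-recall curve, computed as the step-wise
    sum AP = sum_n (R_n - R_{n-1}) * P_n over descending confidence."""
    order = np.argsort(-np.asarray(probs))
    labels = np.asarray(labels, dtype=float)[order]
    tp = np.cumsum(labels)
    precision = tp / np.arange(1, len(labels) + 1)
    recall = tp / labels.sum()
    # recall only increases at positives; precision there weights each step
    return float(np.sum(np.diff(np.concatenate([[0.0], recall])) * precision))

def success_attempt_curve(probs, labels, thresholds):
    """Success rate vs. attempt rate at each confidence threshold."""
    probs = np.asarray(probs)
    labels = np.asarray(labels, dtype=float)
    curve = []
    for t in thresholds:
        attempted = probs > t
        attempt_rate = float(attempted.mean())
        # success rate over attempted grasps only (1.0 if none attempted)
        success_rate = float(labels[attempted].mean()) if attempted.any() else 1.0
        curve.append((success_rate, attempt_rate))
    return curve
```

A higher threshold trades attempt rate for success rate, tracing out the curve described above.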
A-B Performance on Known Objects
\seclabeladditionalknownobjects To assess the performance of our robustness metric independent of the perception system, we evaluated whether the metric was predictive of suction grasp success when object shape and pose were known, using the 3D-printed Adversarial objects. The robot was presented one of the five Adversarial objects in a known stable pose, selected from the three most probable stable poses. We hand-aligned the object to a template image generated by rendering the object in the known pose on the table. Then we indexed a database of grasps precomputed on 3D models of the objects and executed the grasp with the highest metric value for five trials. In total, there were 75 trials per experiment.
We compared the following metrics:

Planarity-Centroid (PC3D). The inverse distance to the object centroid for sufficiently planar patches on the 3D object surface.

Spring Stretch (SS). The maximum stretch among virtual springs in the suction contact model.

Wrench Resistance (WR).

Robust Wrench Resistance (RWR).
The results are detailed in \tabrefcorrelation and the Success vs Attempt Rate curve is plotted in \figrefhyp1. A policy based on the robust wrench resistance metric achieved nearly 100% average precision and a 92% success rate on this dataset, suggesting that the ranking of grasps by robust wrench resistance is correlated with the ranking by physical successes.





Metric  AP (%)  Success Rate (%)
PC3D  88  80
SS  89  84
WR  93  80
RWR  100  92
correlation
A-C Performance on Novel Objects
We also evaluated the performance of GQCNNs trained on DexNet 3.0 for planning suction target points from a single-view point cloud. In each experiment, the robot was presented one object from the Basic, Typical, or Adversarial classes in a pose randomized by shaking the object in a box and placing it on the table. The object was imaged with a depth sensor and segmented using 3D bounds on the workspace. Grasp candidates were then sampled from the depth image, and the grasping policy executed the most robust candidate grasp according to its success metric. In this experiment the human operators were blinded to the method being evaluated in order to remove bias in the human labels.
We compared policies that used the following metrics:

Planarity. The inverse sum of squared errors from an approach plane for points within a disc with radius equal to that of the suction cup.

Centroid. The inverse distance to the object centroid.

Planarity-Centroid. The inverse distance to the centroid for sufficiently planar patches on the 3D object surface.

GQCNN (ADV). A GQCNN trained on synthetic data from only the Adversarial objects (to assess the ability of the model to fit complex objects).
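As an illustration, the Planarity and Centroid baselines can be sketched as follows. This is a simplified stand-in for our implementation: it assumes the approach direction is aligned with the z axis, so the disc of the cup's radius is taken in the x-y plane, and the epsilon terms guard against division by zero.

```python
import numpy as np

def planarity(points, target, radius):
    """Inverse sum of squared point-to-plane errors for the patch of
    points within a disc of the suction cup's radius around the target
    (disc taken in the x-y plane, assuming approach along z)."""
    mask = np.linalg.norm(points[:, :2] - np.asarray(target)[:2], axis=1) <= radius
    patch = points[mask]
    # least-squares plane z = a*x + b*y + c fit to the patch
    A = np.column_stack([patch[:, 0], patch[:, 1], np.ones(len(patch))])
    coef, *_ = np.linalg.lstsq(A, patch[:, 2], rcond=None)
    sse = float(np.sum((A @ coef - patch[:, 2]) ** 2))
    return 1.0 / (sse + 1e-12)

def centroid_score(target, centroid):
    """Inverse distance from the grasp target to the object centroid."""
    return 1.0 / (np.linalg.norm(np.asarray(target) - np.asarray(centroid)) + 1e-12)
```

A flat patch yields a near-zero plane-fit error and hence a very large planarity score, while curved patches score lower.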
\tabrefpolicyresults details performance on the Basic, Typical, and Adversarial objects, and \figrefhyp2 illustrates the Success-Attempt Rate tradeoff. We see that the DexNet 3.0 policy has the highest AP across the Basic and Typical classes. Also, the GQCNN trained on the Adversarial objects significantly outperforms all other methods on the Adversarial dataset, suggesting that our model is able to exploit knowledge of complex 3D geometry to plan robust grasps. Furthermore, the Success-Attempt Rate curve suggests that the continuous probability of success output by the DexNet 3.0 policy is highly correlated with the true success label and can be used to detect failures before they occur on the Basic and Typical object classes. The DexNet 3.0 policy took an average of approximately 3 seconds to plan each grasp.














Metric  Basic AP (%)  Basic Success (%)  Typical AP (%)  Typical Success (%)  Adversarial AP (%)  Adversarial Success (%)
Planarity  81  74  69  67  48  47
Centroid  89  92  80  78  47  38
Planarity-Centroid  98  94  94  86  64  62
GQCNN (ADV)  83  77  75  67  86  81
GQCNN (DN3)  99  98  97  82  61  58
policyresults
A-D Classification Performance on Known Objects
To assess the performance of our robustness metric on classifying grasps as successful or unsuccessful, we evaluated whether the metric was able to classify a set of grasps sampled randomly from the 3D object surface, using the known 3D geometry and pose of the Adversarial objects. First, we sampled a set of grasps uniformly at random from the surface of the 3D object meshes. Then the robot was presented one of the five Adversarial objects in a known stable pose, selected from the three most probable stable poses. We hand-aligned the object to a template image generated by rendering the object in the known pose on the table. Then the robot executed a grasp chosen uniformly at random from the set of reachable grasps for the given stable pose. In total, there were trials per object.
We compared the predictions made for those grasps by the following metrics:

Planarity-Centroid (PC3D). The inverse distance to the object centroid for sufficiently planar patches on the 3D object surface.

Spring Stretch (SS). The maximum stretch among virtual springs in the suction contact model.

Wrench Resistance (WR).

Robust Wrench Resistance (RWR).
We measured the Average Precision (AP), classification accuracy, and Spearman's rank correlation coefficient (which measures the correlation between the ranking by metric value and successes on the physical robot). \tabrefrandomresults details the performance of each metric, and the Precision-Recall curve is plotted in \figrefhyp1pr. We see that the robust wrench resistance metric with our compliant spring contact model has the highest average precision and correlation with successes on the physical robot.






Metric  AP (%)  Accuracy (%)  Spearman's ρ
PC3D  71  68  0.36
SS  75  74  0.49
WR  78  77  0.52
RWR  80  75  0.62
randomresults
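For reference, Spearman's rank correlation is the Pearson correlation of the rank-transformed values and can be computed directly with SciPy. The arrays below are hypothetical scores and outcomes for illustration, not data from our experiments.

```python
import numpy as np
from scipy.stats import spearmanr

# Spearman's rho measures whether ranking grasps by the metric matches
# the ranking by physical outcome (ties are assigned average ranks).
metric_values = np.array([0.10, 0.40, 0.35, 0.80])  # hypothetical metric scores
successes = np.array([0, 0, 1, 1])                  # hypothetical outcomes
rho, pvalue = spearmanr(metric_values, successes)
```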
A-E Failure Modes
Our system was not able to handle many objects due to their material properties. We grouped the failure objects into two categories:

Imperceptible Objects: Objects with (a) surface variations less than the spatial resolution of our Primesense Carmine 1.09 depth camera or (b) specularities or transparencies that prevent the depth camera from sensing the object geometry. As a result, the point-cloud-based grasping policies were not able to distinguish successes from failures.

Impossible Objects: Objects for which a seal cannot be formed because the objects are (a) porous or (b) lack a surface patch on which the suction cup can achieve a seal due to size or texture.
These objects are illustrated in \figreffailuredatasets.
Appendix B Details of Quasi-Static Spring Seal Formation Model
\seclabelconfigdetails
In this section, we derive a detailed process for statically determining a final configuration of the cup model that achieves a complete seal against the object mesh. We assume that we are given a line of approach, a target point on the surface of the mesh, and a vector pointing toward the target point along the line of approach.
First, we choose an initial, undeformed configuration of the cup. In this undeformed configuration, all of the cup's springs are at their resting lengths, which means that the structural springs form a right pyramid with a regular n-gon as its base. This perfectly constrains the relative positions of the cup's vertices, so all that remains is specifying the position and orientation of the cup relative to the world frame.
We further constrain the position and orientation of the cup such that the line of approach passes through the apex and base center of the pyramid and is orthogonal to the plane containing its base. This leaves only the position of the cup along the line of approach and a rotation about it as degrees of freedom. For our purposes, the position along the line does not matter so long as the cup is not in collision with the mesh and the base of the cup is closer to the target point than the apex is. In general, we choose an initial offset larger than the largest extent of the object's vertices. For the rotation about the line of approach, we simply select a random initial angle. Theoretically, the rotation could affect the outcome of our metric, but as long as the number of perimeter vertices is chosen to be sufficiently large, the result is not sensitive to the chosen rotation angle.
Next, given the initial configuration of the cup, we compute the final locations of the perimeter springs on the surface of the mesh under two main constraints:

The perimeter springs must not deviate from their initial locations when projected back onto the plane containing the base of the cup's initial right pyramid.

The perimeter springs must lie flush against the mesh.
Essentially, this means that the perimeter springs will lie on the intersection of the mesh with a right prism whose base is the base of the initial configuration's right pyramid and whose height is sufficient for the prism to pass all the way through the mesh. The base vertices of the cup will lie at the intersections of the mesh with the prism's side edges, and the perimeter springs will lie along the intersections of the mesh with the prism's side faces.
Finally, given a complete configuration of the perimeter vertices of the cup as well as the paths of the perimeter springs along the surface of the mesh, we compute the final location of the cup apex. We work with three main constraints:

The apex must lie on the line of approach.

The apex must not be below the surface of the mesh.

The apex should be chosen such that the average displacement between the apex and the perimeter vertices along the approach direction remains equal to the resting height of the pyramid.
The solution distance along the line of approach follows from these constraints.
When thresholding the energy in each spring, we use a per-spring threshold on the change in length, which was used as the spring stretch limit in [26].
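A highly simplified sketch of this seal check is given below, restricted to a height-field surface and ignoring collision handling. The function name, the height-field assumption, and the default stretch limit are illustrative choices, not our exact implementation.

```python
import numpy as np

def seal_formed(surface_fn, target, r, h, n=16, stretch_limit=0.1):
    """Conform the n perimeter vertices of the cup to a height-field
    surface z = f(x, y), place the apex so the average displacement along
    the approach (z) direction keeps the resting cone height h, and check
    that no spring's fractional change in length exceeds the limit."""
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    vx = target[0] + r * np.cos(angles)
    vy = target[1] + r * np.sin(angles)
    vz = np.array([surface_fn(x, y) for x, y in zip(vx, vy)])
    verts = np.stack([vx, vy, vz], axis=1)
    apex = np.array([target[0], target[1], vz.mean() + h])
    # resting lengths: perimeter springs form a regular n-gon of radius r,
    # structural springs connect the pyramid apex to the base vertices
    rest_perimeter = 2.0 * r * np.sin(np.pi / n)
    rest_structural = np.hypot(r, h)
    perimeter = np.linalg.norm(verts - np.roll(verts, -1, axis=0), axis=1)
    structural = np.linalg.norm(verts - apex, axis=1)
    stretch = np.concatenate([
        np.abs(perimeter - rest_perimeter) / rest_perimeter,
        np.abs(structural - rest_structural) / rest_structural,
    ])
    return bool(np.all(stretch <= stretch_limit))
```

On a flat surface every spring stays at its resting length, so a seal is formed; a sharp step under the perimeter stretches the crossing springs beyond the limit.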
Appendix C Suction Contact Model
\seclabelcontactderiv
The basis of contact wrenches for the suction ring model is illustrated in \figrefmodel. The contact wrenches are not independent due to the coupling of normal force and friction, and they may be bounded due to material properties. In this section we prove that wrench resistance can be computed with quadratic programming, we derive constraints between the contact wrenches in the suction ring model, and we explain the limits of the soft finger suction contact model for a single suction contact.
C-A Computing Wrench Resistance with Quadratic Programming
The object wrench set for a grasp g using a contact model with m basis wrenches is Λ(g) = {w ∈ R^6 | w = Gα for some α ∈ F}, where G ∈ R^(6×m) is a set of basis wrenches in the object coordinate frame and F ⊆ R^m is a set of constraints on contact wrench magnitudes [24]. The grasp map can be decomposed as G = AW, where A is the adjoint transformation mapping wrenches from the contact to the object coordinate frame and W is the contact wrench basis, a set of orthonormal basis wrenches in the contact coordinate frame [24].
Definition 4
A grasp g achieves wrench resistance with respect to a wrench w if −w ∈ Λ(g).
Proposition 1
Let G be the grasp map for a grasp g. Furthermore, let ε* = min over α ∈ F of ||Gα + w||². Then g can resist w iff ε* = 0.
(⇒) Assume g can resist w. Then −w ∈ Λ(g), and therefore there exists α ∈ F such that Gα = −w, so ε* = 0. (⇐) Assume ε* = 0. Then there exists α ∈ F such that Gα = −w, and therefore −w ∈ Λ(g). When the set of admissible contact wrench magnitudes F is defined by linear equality and inequality constraints, the minimization defining ε* is a Quadratic Program, which can be solved exactly by modern solvers.
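A minimal sketch of this computation is given below, using SciPy's SLSQP solver as a stand-in for a dedicated QP solver. The toy grasp map in the test is illustrative, not a physical contact model.

```python
import numpy as np
from scipy.optimize import minimize

def wrench_resistance(G, w, A, b, tol=1e-6):
    """Check whether the grasp map G can resist wrench w by solving
    eps* = min_{alpha : A @ alpha <= b} ||G @ alpha + w||^2.
    The grasp resists w iff eps* is (numerically) zero."""
    m = G.shape[1]
    objective = lambda a: float(np.sum((G @ a + w) ** 2))
    grad = lambda a: 2.0 * G.T @ (G @ a + w)
    constraints = [{"type": "ineq",
                    "fun": lambda a: b - A @ a,   # feasible iff >= 0
                    "jac": lambda a: -A}]
    res = minimize(objective, np.zeros(m), jac=grad,
                   method="SLSQP", constraints=constraints)
    return res.fun < tol, float(res.fun)
```

Tightening the constraint set raises the residual ε*, flipping the resistance decision.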
C-B Derivation of Suction Ring Contact Model Constraints
Our suction contact model assumes the following:

Quasi-static physics (i.e. inertial terms are negligible).

The suction cup contacts the object along a circle of radius r (or "ring") in the x-y plane of the contact coordinate frame.

The suction cup material behaves as a ring of infinitesimal springs per unit length. Specifically, we assume that the pressure along the z axis of the contact coordinate frame is proportional to the displacement along the z axis, with a spring constant k (per unit length). The cup does not permit deformations along the x or y axes.

The suction cup material is well approximated by a spring-mass system in which points on the contact ring are in static equilibrium, with displacement along the z axis that varies linearly with position on the ring. Together with Assumption 3, this implies that the pressure distribution along the ring is an affine function of position, parameterized by real coefficients.

The force on the object due to the vacuum is a constant V along the z axis of the contact coordinate frame.

The object exerts a normal force on the suction cup material in addition to the force due to actuation. This assumption holds when analyzing the ability to resist disturbing wrenches because the material can apply passive forces, but it may not hold when considering target wrenches that must be actuated.
The magnitudes of the contact wrenches are constrained due to (a) the friction limit surface [14], (b) limits on the elastic behavior of the suction cup material, and (c) limits on the vacuum force.
C-B1 Friction Limit Surface
The values of the tangential and torsional friction are coupled through the planar external wrench and thus are jointly constrained. This constraint is known as the friction limit surface [14]. We can approximate the friction limit surface by computing the maximum friction force and torsional moment under a pure translation and a pure rotation about the contact origin.
The tangential forces have maximum magnitude μN under a purely translational disturbing wrench with unit vector in the direction of the velocity.
The torsional moment has a maximum magnitude rμN under a purely rotational disturbing wrench about the contact z axis.
We can approximate the friction limit surface by the ellipsoid [13, 14]: (f_x² + f_y²)/(μN)² + τ_z²/(rμN)² ≤ 1.
While this constraint is convex, in practice many solvers for Quadratically Constrained Quadratic Programs (QCQPs) assume nonconvexity. We can turn this into a linear constraint by bounding the tangential forces and torsional moments within a rectangular prism inscribed in the ellipsoid: √3|f_x| ≤ μN, √3|f_y| ≤ μN, and √3|τ_z| ≤ rμN.
C-B2 Elastic Restoring Torques
The torques about the x and y axes are also bounded by the elastic behavior of the suction cup material: √2|τ_x| ≤ πrκ and √2|τ_y| ≤ πrκ,
where κ is the elastic limit or yield strength of the suction cup material, defined as the stress at which the material begins to deform plastically instead of linearly.
C-B3 Vacuum Limits
The ring contact can exert forces on the object along the z axis through motor torques that transmit forces to the object through the ring of the suction cup. Under these assumptions, the normal force exerted on the object by the suction cup material is N = f_z + V.
Note also that f_z ≥ −V, where f_z is the z component of force on the object, since the normal force must offset the force due to the vacuum even when no force is being applied to the object.
C-B4 Constraint Set
Taking all constraints into account, we can describe the set of admissible contact wrench magnitudes with a set of linear constraints:
Friction: √3|f_x| ≤ μN, √3|f_y| ≤ μN, √3|τ_z| ≤ rμN
Material: √2|τ_x| ≤ πrκ, √2|τ_y| ≤ πrκ
Suction: f_z ≥ −V
Since these constraints are linear, we can solve for wrench resistance in our contact model using Quadratic Programming. In this paper we use fixed values of the friction coefficient and material parameters.
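Assembling these linear constraints into matrix form (A f ≤ b over the contact wrench f = [f_x, f_y, f_z, τ_x, τ_y, τ_z], with N = f_z + V) might look as follows. This is a sketch, and the parameter values in the test are illustrative assumptions, not our calibrated values.

```python
import numpy as np

def ring_constraints(mu, r, kappa, V):
    """Build linear constraints A @ f <= b on the contact wrench
    f = [fx, fy, fz, tx, ty, tz] for the suction ring model."""
    rows, limits = [], []
    s3, s2 = np.sqrt(3.0), np.sqrt(2.0)
    # friction: sqrt(3)|fx| <= mu*N, sqrt(3)|fy| <= mu*N, sqrt(3)|tz| <= r*mu*N
    for idx, scale in [(0, 1.0), (1, 1.0), (5, r)]:
        for sign in (1.0, -1.0):
            row = np.zeros(6)
            row[idx] = sign * s3
            row[2] = -scale * mu        # move mu*N = mu*(fz + V) to the left
            rows.append(row)
            limits.append(scale * mu * V)
    # material: sqrt(2)|tx| <= pi*r*kappa, sqrt(2)|ty| <= pi*r*kappa
    for idx in (3, 4):
        for sign in (1.0, -1.0):
            row = np.zeros(6)
            row[idx] = sign * s2
            rows.append(row)
            limits.append(np.pi * r * kappa)
    # suction: fz >= -V
    row = np.zeros(6)
    row[2] = -1.0
    rows.append(row)
    limits.append(V)
    return np.array(rows), np.array(limits)
```

The resulting (A, b) pair can be passed directly to any QP-capable solver as the feasible set F.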
C-C Limits of the Soft Finger Suction Contact Model
The most common suction contact model in the literature [17, 23, 29, 31, 34] considers normal forces from motor torques, suction forces from the pressure differential between the inside of the cup and the air outside the object, and both tangential and torsional friction resulting from the contact area between the cup and the object. Let x̂, ŷ, and ẑ be unit basis vectors along the x, y, and z axes of the contact coordinate frame. The contact model is specified by a Coulomb friction constraint on the tangential forces and a bound on the torsional moment proportional to the normal force.
The first constraint enforces Coulomb friction with coefficient μ. The second constraint ensures that the net torsion is bounded by the normal force, since torsion results from the net frictional moment over a contact area. Unlike contact models for rigid multifinger grasping, the normal force can be positive or negative due to the pulling force of suction.
Proposition 2
Under the soft finger suction contact model, a grasp with a single contact point cannot resist torques about axes in the contact tangent plane.
A torque about an axis in the contact tangent plane is not in the range of the grasp map because it is orthogonal to every basis wrench (column of the grasp map).
The orthogonal complement of the range of the grasp map is spanned by the pure torques about the x and y axes at the contact, suggesting that a single suction contact cannot resist torques in the tangent plane at the contact. This defies our intuition, since empirical evidence suggests that a single point of suction can reliably hold and transport objects to a receptacle in applications such as the Amazon Picking Challenge [6, 10].
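This orthogonality argument can be verified numerically; the basis ordering below is an assumed convention for illustration.

```python
import numpy as np

# Columns are the basis wrenches of the soft finger suction contact model
# in the contact frame: tangential friction f_x, f_y, normal/suction
# force f_z, and torsional friction tau_z.
# Wrench component order: [fx, fy, fz, tx, ty, tz].
G = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# A pure torque about the x axis (in the contact tangent plane) is
# orthogonal to every basis wrench, so no combination of contact
# wrenches can resist it -- the claim of Proposition 2.
tau_x = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])
print(G.T @ tau_x)   # -> [0. 0. 0. 0.]
```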
Appendix D GQCNN Performance
\seclabeltraining The GQCNN trained on DexNet 3.0 had an accuracy of 93.5% on a held-out validation set of approximately 552,000 datapoints. \figrefrocconv shows the precision-recall curve for the GQCNN validation set and the 64 optimized Conv1_1 filters, each of which is 7×7. \figrefpolicyexamples illustrates the probability of success predicted by the GQCNN on candidate grasps from several real point clouds.
Appendix E Environment Model
To learn to predict grasp robustness based on noisy point clouds, we generate the DexNet 3.0 training dataset of point clouds, grasps, and grasp success labels by sampling tuples from a joint distribution that is composed of distributions on:

States: A prior on possible objects, object poses, and camera poses that the robot will encounter.

Grasp Candidates: A prior constraining grasp candidates to target points on the object surface.

Grasp Successes: A stochastic model of wrench resistance for the gravity wrench.

Observations: A sensor noise model.
Our graphical model is illustrated in \figrefgraphicalmodel.
Distribution  Description
truncated Gaussian distribution over friction coefficients
discrete uniform distribution over 3D object models
distributions
E-A Details of Distributions
We follow the state model of [21], which we repeat here for convenience. The parameters of the sampling distributions were set by maximizing average precision of the values using grid search for a set of grasps attempted on an ABB YuMi robot on a set of known 3D printed objects (see \secrefadditionalknownobjects).
We model the state distribution as the product of distributions on the friction coefficient, the object, the object pose, and the camera pose. We model the friction coefficient as a truncated Gaussian distribution. We model the object as a discrete uniform distribution over 3D objects in a given dataset. We model the object pose as the product of a discrete uniform distribution over object stable poses and a uniform distribution over 2D poses on the table surface. We compute stable poses using the quasi-static algorithm given by Goldberg et al. [8]. We model the camera pose as a uniform distribution on spherical coordinates, where the camera optical axis always intersects the center of the table.
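A sketch of one draw from such a state distribution is shown below. All numeric ranges are illustrative placeholders, since our actual parameters were fit by grid search.

```python
import numpy as np

def sample_state(rng, mu_mean=0.5, mu_std=0.1):
    """Draw one hypothetical state: a friction coefficient from a Gaussian
    truncated to be nonnegative (via rejection sampling), a uniform planar
    object pose, and a camera pose in spherical coordinates whose optical
    axis intersects the table center. All ranges here are illustrative
    assumptions, not the paper's fitted parameters."""
    friction = -1.0
    while friction < 0.0:                        # rejection-sample truncation
        friction = rng.normal(mu_mean, mu_std)
    x, y = rng.uniform(-0.1, 0.1, size=2)        # 2D pose on the table
    theta = rng.uniform(0.0, 2.0 * np.pi)
    radius = rng.uniform(0.5, 0.7)               # spherical camera coordinates
    elevation = rng.uniform(0.05, 0.5 * np.pi)
    azimuth = rng.uniform(0.0, 2.0 * np.pi)
    return {"friction": friction, "obj_pose": (x, y, theta),
            "camera": (radius, elevation, azimuth)}
```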
Our grasp candidate model is a uniform distribution over points sampled on the object surface, with the approach direction defined by the inward-facing surface normal at each point.
We follow the observation model of [21], which we repeat here for convenience. We model images as a rendered depth image corrupted by multiplicative and additive noise, where the rendered depth image is created using OSMesa offscreen rendering. We model the multiplicative component as a Gamma random variable, and we model the additive component as Gaussian Process noise drawn with a fixed measurement noise and kernel bandwidth.
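This observation model can be sketched as follows. Gaussian-smoothed white noise is used here as a cheap stand-in for Gaussian Process noise, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def corrupt_depth(rendered, rng, gamma_shape=1000.0, noise_sigma=0.005,
                  bandwidth=2.0):
    """Corrupt a rendered depth image with multiplicative Gamma noise
    (mean 1) and spatially correlated additive noise. Parameter values
    are illustrative assumptions, not our fitted values."""
    scale = rng.gamma(gamma_shape, 1.0 / gamma_shape)   # E[scale] = 1
    white = rng.normal(0.0, noise_sigma, size=rendered.shape)
    correlated = gaussian_filter(white, sigma=bandwidth)
    return scale * rendered + correlated
```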
Our grasp success model specifies a distribution over wrench resistance due to perturbations in object pose, gripper pose, friction coefficient, and the disturbing wrench to resist. We model the disturbing wrench as the wrench exerted by gravity on the object center of mass with zero-mean Gaussian noise, assuming a mass of 1.0 kg. We model grasp perturbations by perturbing the suction target point with zero-mean Gaussian noise and the approach direction with zero-mean Gaussian noise in the rotational component of Lie algebra coordinates. We model state perturbations by perturbing the object pose with zero-mean Gaussian noise in Lie algebra coordinates, with separate translational and rotational components, and perturbing the object center of mass with zero-mean Gaussian noise. We model the success label as a Bernoulli with parameter 1 if the grasp resists the perturbed wrench given the state and parameter 0 otherwise.
E-B Implementation Details
To efficiently implement sampling, we make several optimizations. First, we precompute the set of grasps for every 3D object model in the database and take a fixed number of samples of grasp success using quadratic programming for wrench resistance evaluation. We convert the samples to binary success labels by thresholding the sample mean. We also render a fixed number of depth images for each stable pose independently of grasp success evaluation. Finally, we sample a set of candidate grasps from the object in each depth image and transform the image to generate a suction grasp thumbnail centered on the target point and oriented to align the approach axis with the middle column of pixels for GQCNN training.
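The thumbnail alignment step can be sketched with standard image transforms; the thumbnail size and interpolation settings here are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def grasp_thumbnail(depth, grasp_px, grasp_py, approach_angle_deg, size=32):
    """Transform a depth image so the suction target sits at the image
    center and the approach axis aligns with the middle column, then crop
    a thumbnail for the GQCNN."""
    h, w = depth.shape
    # translate the grasp point to the image center
    shifted = ndimage.shift(depth, (h / 2.0 - grasp_py, w / 2.0 - grasp_px),
                            order=1, mode="nearest")
    # rotate about the center so the approach axis is vertical
    rotated = ndimage.rotate(shifted, approach_angle_deg,
                             reshape=False, order=1, mode="nearest")
    cy, cx, half = h // 2, w // 2, size // 2
    return rotated[cy - half:cy + half, cx - half:cx + half]
```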
Acknowledgments
This research was performed at the AUTOLAB at UC Berkeley in affiliation with the Berkeley AI Research (BAIR) Lab, the Real-Time Intelligent Secure Execution (RISE) Lab, and the CITRIS "People and Robots" (CPAR) Initiative. The authors were supported in part by donations from Siemens, Google, Honda, Intel, Comcast, Cisco, Autodesk, Amazon Robotics, Toyota Research Institute, ABB, Samsung, Knapp, and Loccioni, Inc and by the Scalable Collaborative Human-Robot Learning (SCHooL) Project, NSF National Robotics Initiative Award 1734633. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Sponsors. We thank our colleagues who provided helpful feedback, code, and suggestions, in particular Ruzena Bajcsy, Oliver Brock, Peter Corke, Chris Correa, Ron Fearing, Roy Fox, Bernhard Guetl, Menglong Guo, Michael Laskey, Andrew Lee, Pusong Li, Jacky Liang, Sanjay Krishnan, Fritz Kuttler, Stephen McKinley, Juan Aparicio Ojea, Michael Peinhopf, Peter Puchwein, Alberto Rodriguez, Daniel Seita, Vishal Satish, and Shankar Sastry.
References
 [1] A. Ali, M. Hosseini, and B. Sahari, “A review of constitutive models for rubberlike materials,” American Journal of Engineering and Applied Sciences, vol. 3, no. 1, pp. 232–239, 2010.
 [2] B. Bahr, Y. Li, and M. Najafi, “Design and suction cup analysis of a wall climbing robot,” Computers & electrical engineering, vol. 22, no. 3, pp. 193–209, 1996.
 [3] J. Bohg, A. Morales, T. Asfour, and D. Kragic, “Data-driven grasp synthesis: A survey,” IEEE Trans. Robotics, vol. 30, no. 2, pp. 289–309, 2014.
 [4] N. Correll, K. E. Bekris, D. Berenson, O. Brock, A. Causo, K. Hauser, K. Okada, A. Rodriguez, J. M. Romano, and P. R. Wurman, “Analysis and observations from the first amazon picking challenge,” IEEE Transactions on Automation Science and Engineering, 2016.
 [5] Y. Domae, H. Okuda, Y. Taguchi, K. Sumi, and T. Hirai, “Fast graspability evaluation on single depth maps for bin picking with general grippers,” in Robotics and Automation (ICRA), 2014 IEEE International Conference on. IEEE, 2014, pp. 1997–2004.
 [6] C. Eppner, S. Höfer, R. Jonschkowski, R. M. Martin, A. Sieverling, V. Wall, and O. Brock, “Lessons from the amazon picking challenge: Four aspects of building robotic systems.” in Robotics: Science and Systems, 2016.
 [7] C. Ferrari and J. Canny, “Planning optimal grasps,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 1992, pp. 2290–2295.
 [8] K. Goldberg, B. V. Mirtich, Y. Zhuang, J. Craig, B. R. Carlisle, and J. Canny, “Part pose statistics: Estimators and experiments,” IEEE Trans. Robotics and Automation, vol. 15, no. 5, pp. 849–857, 1999.
 [9] R. Hartley and A. Zisserman, Multiple view geometry in computer vision. Cambridge university press, 2003.
 [10] C. Hernandez, M. Bharatheesha, W. Ko, H. Gaiser, J. Tan, K. van Deurzen, M. de Vries, B. Van Mil, J. van Egmond, R. Burger, et al., “Team delft’s robot winner of the amazon picking challenge 2016,” arXiv preprint arXiv:1610.05514, 2016.
 [11] M. Jaderberg, K. Simonyan, A. Zisserman, et al., “Spatial transformer networks,” in Advances in Neural Information Processing Systems, 2015, pp. 2017–2025.
 [12] E. Johns, S. Leutenegger, and A. J. Davison, “Deep learning a grasp function for grasping under gripper pose uncertainty,” in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 4461–4468.
 [13] I. Kao and M. R. Cutkosky, “Quasistatic manipulation with compliance and sliding,” Int. Journal of Robotics Research (IJRR), vol. 11, no. 1, pp. 20–40, 1992.
 [14] I. Kao, K. Lynch, and J. W. Burdick, “Contact modeling and manipulation,” in Springer Handbook of Robotics. Springer, 2008, pp. 647–669.
 [15] D. Kappler, J. Bohg, and S. Schaal, “Leveraging big data for grasp planning,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2015.
 [16] A. Kasper, Z. Xue, and R. Dillmann, “The kit object models database: An object model database for object recognition, localization and manipulation in service robotics,” Int. Journal of Robotics Research (IJRR), vol. 31, no. 8, pp. 927–934, 2012.
 [17] R. Kolluru, K. P. Valavanis, and T. M. Hebert, “Modeling, analysis, and performance evaluation of a robotic gripper system for limp material handling,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 28, no. 3, pp. 480–486, 1998.
 [18] R. Krug, Y. Bekiroglu, and M. A. Roa, “Grasp quality evaluation done right: How assumed contact force bounds affect wrenchbased quality metrics,” in Robotics and Automation (ICRA), 2017 IEEE International Conference on. IEEE, 2017, pp. 1595–1600.
 [19] I. Lenz, H. Lee, and A. Saxena, “Deep learning for detecting robotic grasps,” Int. Journal of Robotics Research (IJRR), vol. 34, no. 45, pp. 705–724, 2015.
 [20] S. Levine, P. Pastor, A. Krizhevsky, and D. Quillen, “Learning handeye coordination for robotic grasping with deep learning and largescale data collection,” arXiv preprint arXiv:1603.02199, 2016.
 [21] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg, “Dexnet 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics,” in Proc. Robotics: Science and Systems (RSS), 2017.
 [22] J. Mahler, F. T. Pokorny, B. Hou, M. Roderick, M. Laskey, M. Aubry, K. Kohlhoff, T. Kröger, J. Kuffner, and K. Goldberg, “Dexnet 1.0: A cloudbased network of 3d objects for robust grasp planning using a multiarmed bandit model with correlated rewards,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA). IEEE, 2016.
 [23] G. Mantriota, “Theoretical model of the grasp with vacuum gripper,” Mechanism and machine theory, vol. 42, no. 1, pp. 2–17, 2007.
 [24] R. M. Murray, Z. Li, and S. S. Sastry, A mathematical introduction to robotic manipulation. CRC press, 1994.
 [25] L. Pinto and A. Gupta, “Supersizing selfsupervision: Learning to grasp from 50k tries and 700 robot hours,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2016.
 [26] X. Provot et al., “Deformation constraints in a massspring model to describe rigid cloth behaviour,” in Graphics interface. Canadian Information Processing Society, 1995, pp. 147–147.
 [27] R. Y. Rubinstein, A. Ridder, and R. Vaisman, Fast sequential Monte Carlo methods for counting and optimization. John Wiley & Sons, 2013.
 [28] A. Saxena, J. Driemeyer, and A. Y. Ng, “Robotic grasping of novel objects using vision,” The International Journal of Robotics Research, vol. 27, no. 2, pp. 157–173, 2008.
 [29] H. S. Stuart, M. Bagheri, S. Wang, H. Barnard, A. L. Sheng, M. Jenkins, and M. R. Cutkosky, “Suction helps in a pinch: Improving underwater manipulation with gentle suction flow,” in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on. IEEE, 2015, pp. 2279–2284.
 [30] N. C. Tsourveloudis, R. Kolluru, K. P. Valavanis, and D. Gracanin, “Suction control of a robotic gripper: A neurofuzzy approach,” Journal of Intelligent & Robotic Systems, vol. 27, no. 3, pp. 215–235, 2000.
 [31] A. J. Valencia, R. M. Idrovo, A. D. Sappa, D. P. Guingla, and D. Ochoa, “A 3d vision based approach for optimal grasp of vacuum grippers,” in Electronics, Control, Measurement, Signals and their Application to Mechatronics (ECMSM), 2017 IEEE International Workshop of. IEEE, 2017, pp. 1–6.
 [32] J. Weisz and P. K. Allen, “Pose error robust grasping from contact wrench space metrics,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA). IEEE, 2012, pp. 557–562.
 [33] W. Wohlkinger, A. Aldoma, R. B. Rusu, and M. Vincze, “3dnet: Largescale object class recognition from cad models,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA). IEEE, 2012, pp. 5384–5391.
 [34] Y. Yoshida and S. Ma, “Design of a wallclimbing robot with passive suction cups,” in Robotics and Biomimetics (ROBIO), 2010 IEEE International Conference on. IEEE, 2010, pp. 1513–1518.
 [35] K.T. Yu, N. Fazeli, N. ChavanDafle, O. Taylor, E. Donlon, G. D. Lankenau, and A. Rodriguez, “A summary of team mit’s approach to the amazon picking challenge 2015,” arXiv preprint arXiv:1604.03639, 2016.