Domain Randomization and Generative Models for Robotic Grasping
Abstract
Deep learning-based robotic grasping has made significant progress over the past several years thanks to algorithmic improvements and increased data availability. However, state-of-the-art models are often trained on as few as hundreds or thousands of unique object instances, and as a result generalization can be a challenge.
In this work, we explore a novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis. We generate millions of unique, unrealistic procedurally generated objects, and train a deep neural network to perform grasp planning on these objects.
Since the distribution of successful grasps for a given object can be highly multimodal, we propose an autoregressive grasp planning model that maps sensor inputs I of a scene to a probability distribution p(g | I), where p(g | I) corresponds to the model’s estimate of the normalized probability of success of the grasp g conditioned on the observations I. Our model allows us to sample grasps efficiently at test time (or avoid sampling entirely).
We evaluate our model architecture and data generation pipeline in simulation and find we can achieve 90% success rate on previously unseen realistic objects at test time despite having only been trained on random objects.
I. Introduction
Robotic grasping remains one of the core unsolved problems in manipulation. The earliest robotic grasping methods used analytical knowledge of a scene to compute an optimal grasp for an object [30, 1, 31, 37, 38, 44]. Assuming a simplified contact model and a heuristic for the likelihood of success of a grasp, analytical methods can provide guarantees about grasp quality relative to the chosen heuristic, but they often fail in the real world due to inconsistencies in the simplified object and contact models, the need for accurate 3D models of the objects in question, and sensor inaccuracies [2].
As a result, significant research attention has been given to data-driven grasp synthesis methods [2, 26, 34, 8, 28, 27, 42]. These algorithms avoid some of the challenges of analytic methods by sampling potential grasps and ranking them according to a learned function that maps sensor inputs to an estimate of a chosen heuristic. Data-driven grasp synthesis algorithms can be characterized by (a) how they sample grasps, (b) which heuristic they use to rank grasps, and (c) how they estimate the value of the chosen heuristic from camera images and other sensor data.
Recently, several works have explored using deep neural networks to approximate the grasp heuristic function [22, 35, 23, 14]. The promise of deep neural networks for learning grasp heuristics is that with diverse training data, deep models can learn features that deal with the edge cases that make real-world grasping challenging.
A core challenge for deep learning grasp quality heuristics is data availability. Due to the difficulty and expense of collecting real-world data and due to the limited availability of high-quality 3D object meshes, current approaches use as few as hundreds or thousands of unique object instances, which may limit generalization. In contrast, ImageNet [17], the standard benchmark for image classification, has about 15 million unique images from 22 thousand categories.
In order to increase the availability of training data in simulation, we explore applying the idea of domain randomization [40, 45] to the creation of 3D object meshes. Domain randomization is a technique for learning models that work in a test domain after only training on low-fidelity simulated data by randomizing all nonessential aspects of the simulator. One of the core hypotheses of this work is that by training on a wide enough variety of unrealistic procedurally generated object meshes, our learned models will generalize to realistic objects.
Previous work in deep learning for grasping has focused on learning a function that estimates the quality of a given grasp given observations of the scene. Choosing grasps on which to perform this estimate has received comparatively little attention in the deep learning and robotics community, where it is typically done using random sampling or by solving a small optimization problem online. The second goal of this paper is to propose a deep learning-based method for choosing grasps to evaluate. Our hypothesis is that a learned model for grasp sampling will be more likely to find high-quality grasps for challenging objects and will do so more efficiently.
We use an autoregressive model architecture [19, 33, 32] that maps sensor inputs to a probability distribution over grasps corresponding to the model’s weighted estimate of the likelihood of success of each grasp. After training, the highest-probability grasp according to the distribution succeeds on 89% of test objects, and the 20 highest-probability grasps contain a successful grasp for 96% of test objects. In order to determine which grasp to execute on the robot, we collect a second observation in the form of an image from the robot’s hand camera and train a second model to choose the most promising grasp among those sampled from the autoregressive model, which results in a success rate of 92%.
The contributions of this paper can be summarized as follows:

We explore the effect of training a model for grasping using unrealistic procedurally generated objects and show that such a model can achieve similar success to one trained on a realistic object distribution. (Another paper [3] developed concurrently to this one explored a similar idea and reached similar conclusions.)

We propose a novel generative model architecture and training methodology for learning the normalized probability of grasp success conditioned on observation(s) of an object and for choosing a high-quality grasp at test time.

We evaluate our object generation, training, and sampling algorithms in simulated scenes and find that we can achieve an 84% success rate on random objects and a 92% success rate on previously unseen real-world objects despite training only on non-realistic randomly generated objects. With only a single sample from the autoregressive model, we can reach 76% for random objects and 89% for realistic objects.
II. Related Work
II-A. Domain Randomization
Domain randomization involves randomizing nonessential aspects of the training distribution in order to better generalize to a difficult-to-model test distribution. This idea has been employed in robotics since at least 1997, when Jakobi proposed the “Radical Envelope of Noise Hypothesis”, the idea that evolved controllers can be made more robust by completely randomizing all aspects of the simulator that do not have a basis in reality and slightly randomizing all aspects of the simulator that do have a basis in reality [13]. Recently, domain randomization has shown promise in transferring deep neural networks for robotics tasks from simulation to the real world by randomizing physics [29] and appearance properties [40, 45, 52].
In another work developed concurrently with this one [3], the authors reach a similar conclusion about the utility of procedurally generated objects for the purpose of robotic grasping. In contrast to this work, theirs focuses on how to combine simulated data with real grasping data to achieve successful transfer to real world grasping, but does not focus on achieving a high overall success rate. Our paper instead focuses on how to achieve the best possible generalization to novel objects, but limits its focus to simulation.
II-B. Autoregressive models
This paper uses an autoregressive architecture to model a distribution over grasps conditioned on observations of an object. Autoregressive models leverage the observation that an n-dimensional probability distribution p(x) can be factored as p(x) = p(x_1) p(x_2 | x_1) ⋯ p(x_n | x_1, …, x_{n−1}) for any choice of ordering of the variables x_1, …, x_n. The task of modeling the distribution then consists of modeling each conditional p(x_i | x_1, …, x_{i−1}) [19]. In contrast to Generative Adversarial Networks [9], another popular form of deep generative model, autoregressive models can directly compute the likelihood of samples, which is advantageous for tasks like grasping in which finding the highest-likelihood samples is important. Autoregressive models have been used for density estimation and generative modeling in image domains [19, 10, 7] and have been shown to perform favorably on challenging image datasets like ImageNet [33, 46]. Autoregressive models have also been successfully applied to other forms of data, including topic modeling [18] and audio generation [32].
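The chain-rule factorization can be made concrete with a short sketch: a chain of discrete conditionals supports both exact likelihood computation and dimension-by-dimension sampling. The function names here are ours, for illustration only:

```python
import numpy as np

def autoregressive_log_prob(x, conditionals):
    """Exact log-likelihood of a discrete sample x under the chain rule.

    conditionals[i](prefix) returns a normalized probability vector over
    the possible values of dimension i given the earlier dimensions.
    """
    logp = 0.0
    for i, cond in enumerate(conditionals):
        probs = cond(x[:i])
        logp += np.log(probs[x[i]])
    return logp

def autoregressive_sample(conditionals, rng):
    """Draw one sample dimension by dimension, feeding back earlier choices."""
    x = []
    for cond in conditionals:
        probs = cond(x)
        x.append(int(rng.choice(len(probs), p=probs)))
    return x
```

Unlike a GAN, the same chain that generates samples can also score them exactly, which is what makes ranking candidate grasps by likelihood straightforward.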
II-C. Robotic grasping
Grasp planning methods fall into one of two categories: analytical methods and empirical methods [41].
Analytical methods use a contact model and knowledge of an object’s 3D shape to find grasps that maximize a chosen metric like the ability of the grasp to resist external wrenches [37] or constrain the object’s motion [38]. Some methods attempt to make these estimates more robust to gripper and object pose uncertainty and sensor error by instead maximizing the expected value of a metric under uncertainty [15, 48].
Most approaches use simplified Coulomb friction and rigid body modeling for computational tractability [30, 37], but some have explored more realistic object and friction models [1, 36, 39]. Typically, grasps for an object are selected based on sensor data by registering images to a known database of 3D models using a traditional computer vision pipeline [6, 12, 49] or a deep neural network [11, 51].
II-D. Deep learning for robotic grasping
Work in deep learning for grasping can be categorized by how training data is collected and how the model transforms noisy observations into grasp candidates.
Some approaches use hand-annotated real-world grasping trials to provide training labels [21]. However, hand-labeling is challenging to scale to large datasets. To alleviate this problem, some work explores automated large-scale data collection [22, 35]. Others have explored replacing real data with synthetic depth data at training time [23, 24, 47, 14], or combining synthetic RGB images with real images [3]. In many cases, simulated data appears to be effective in replacing or supplementing real-world data in robotic grasping. Unlike our approach, previous work using synthetic data uses small datasets of up to a few thousand realistic object meshes.
One commonly used method for sampling grasps is to learn a visuomotor control policy that allows the robot to iteratively refine its grasp target as it takes steps in the environment. Levine and coauthors learn a prediction network that takes an observation I_t and motor command v_t and outputs the predicted probability of a successful grasp if v_t is executed [22]. The cross-entropy method is used to greedily select the v_t that maximizes this predicted probability. Viereck and coauthors instead learn a function d(I_t, a) that maps the current observation I_t and an action a to an estimate of the distance to the nearest successful grasp after performing a [47]. Directions are sampled and a constant step size is taken in the direction with the minimum value of d. In contrast to visuomotor control strategies, planning approaches like ours avoid the potential local optima of greedy execution.
Another strategy for choosing a grasp using deep learning is to sample grasps and score them using a deep neural network of the form Q(I, g), where I denotes the observation(s) of the scene, g is a selected grasp, and Q(I, g) is the score for the selected grasp [23, 24, 14, 50]. These techniques differ in how they efficiently sample grasps to evaluate at test time. Most commonly, these techniques directly optimize Q using the cross-entropy method [23, 24, 50]. In contrast to these approaches, our approach jointly learns a grasp scoring function and a sampling distribution, allowing for efficient sampling and avoiding potential exploitation by the optimization procedure of under- or over-fit regions of the grasp score function.
Other approaches take a multi-step approach, starting with a coarse representation of the possible grasps for an object and then exhaustively searching using a learned heuristic [21] or modeling the score function jointly for all possible coarse grasps [14]. Once a coarse grasp is sampled, it is then fine-tuned using a separate network [21] or interpolation [14]. By using an autoregressive model architecture, we are able to directly learn a high-dimensional (4- or 6-dimensional) multimodal probability distribution.
III. Method
Our goal is to learn a mapping that takes one or more observations of a scene and outputs a grasp to attempt in the scene. The remainder of the section describes the data generation pipeline, model architecture, and training procedure used in our method.
III-A. Data collection
We will first describe the process of generating training objects, and then the process of sampling grasps for those objects.
III-A.1 Object generation
One of our core hypotheses for this project is that training on a diverse array of procedurally generated objects can produce comparable performance to training on realistic object meshes. Our procedurally generated objects were formed as follows:

Sample a random number of primitives to combine

Sample primitive meshes from our object primitive dataset

Randomly scale each primitive so that all dimensions are between 1 and 15 cm

Place the meshes sequentially so that each mesh intersects with at least one of the preceding meshes

Rescale the final object to approximate the size distribution observed in our real object dataset
To build a diverse object primitive dataset, we took the more than 40,000 object meshes found in the ShapeNet object dataset [5] and decomposed them into more than 400,000 convex parts using VHACD (https://github.com/kmammou/vhacd). Each primitive is one convex part.
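The five-step recipe above can be sketched as follows. This is a deliberately simplified stand-in: primitives are axis-aligned boxes rather than convex mesh parts, intersection is approximated by placing each new primitive within the extent of a previous one, and the names and sampling ranges are ours:

```python
import random

def generate_random_object(primitives, rng, n_max=8, scale_range=(0.01, 0.15)):
    """Sketch of procedural object generation: sample primitives, rescale
    each so its largest dimension lies in scale_range (meters), then place
    each one so it overlaps a previously placed primitive.

    Each primitive is a stand-in (w, h, d) box extent; returns a list of
    (center, dims) pairs describing the composed object.
    """
    n = rng.randint(1, n_max)
    placed = []
    for _ in range(n):
        w, h, d = rng.choice(primitives)
        s = rng.uniform(*scale_range) / max(w, h, d)
        dims = (w * s, h * s, d * s)
        if not placed:
            center = (0.0, 0.0, 0.0)
        else:
            # offset from an existing primitive by at most half its extent,
            # so the new box's center lies inside it and the two overlap
            anchor_center, anchor_dims = rng.choice(placed)
            center = tuple(c + rng.uniform(-0.5, 0.5) * e
                           for c, e in zip(anchor_center, anchor_dims))
        placed.append((center, dims))
    return placed
```

A final global rescaling pass (step five of the recipe) would then match the composite object to the real-object size distribution.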
We compared this object generation procedure against a baseline of training using rescaled ShapeNet objects.
III-A.2 Grasp sampling and evaluation
We sample grasps uniformly at random from a discretized 6- or 4-dimensional grasp space (4-dimensional when attention is restricted to upright grasps) corresponding to the (x, y, z) coordinates of the center of the gripper and an orientation of the gripper about that point. We discretize each dimension into a fixed number of buckets. The position buckets are the relative location of the grasp point within the bounding box of the object – e.g., the lowest x bucket corresponds to a grasp at the far left side of the object’s bounding box and the highest x bucket corresponds to a grasp at the far right.
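A minimal sketch of the bucket-to-grasp mapping for an upright (4-dimensional) grasp. The bucket counts here are placeholders, not the values used in our experiments:

```python
import math

def bucket_to_grasp(indices, bbox_min, bbox_max, n_buckets=20,
                    n_angle_buckets=20):
    """Map discrete bucket indices (ix, iy, iz, itheta) to a grasp pose.

    Positional buckets are relative to the object's axis-aligned bounding
    box: index 0 maps to the min face, index n_buckets - 1 to the max face.
    The angle bucket covers a full rotation about the vertical axis.
    """
    ix, iy, iz, itheta = indices
    pos = [lo + (hi - lo) * i / (n_buckets - 1)
           for i, lo, hi in zip((ix, iy, iz), bbox_min, bbox_max)]
    theta = 2.0 * math.pi * itheta / n_angle_buckets
    return pos, theta
```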
Grasps that penetrate the object or for which the gripper would not contact the object when closed can be instantly rejected. The remainder are evaluated in a physics simulator.
For each grasp attempt, we also collect a depth image from the robot’s hand camera during the approach to be used to train the grasp evaluation function.
III-B. Model architecture
The model architecture used for our experiments is outlined in Figure 2. The model consists of two separate neural networks – a grasp planning module and a grasp evaluation model. The grasp planning module is used to sample grasps that are likely to be successful. The grasp evaluation model takes advantage of more detailed data, in the form of a close-up image from a camera on the gripper of the robot, to form a more accurate estimate of the likelihood that each sampled grasp will succeed.
The image representation is formed by passing each image through a separate convolutional neural network. The flattened outputs of these convolutional layers are stacked and passed through several dense layers to produce the observation embedding.
The neural network p models a probability distribution p(g | I) over possible grasps g for the object given observations I, corresponding to the normalized probability of success of each grasp. The model consists of d submodules p_1, …, p_d, where d is the dimensionality of the grasp. For any grasp g, p and its submodules are related by

p(g | I) = p_1(g_1 | I) · p_2(g_2 | g_1, I) ⋯ p_d(g_d | g_1, …, g_{d−1}, I),

where g_1, …, g_d are the dimensions of g.
Each p_i is a small neural network taking the concatenation of an embedding E(I) of the observations and an embedding h(g_1, …, g_{i−1}) of the previous grasp dimensions as input and producing a distribution over possible values for g_i, the next dimension of g. Here h is a (possibly trivial) embedding function. For our experiments, we discretized the possible grasps, so the output of each p_i is a softmax over the possible discrete values of g_i. We found that using a small fully connected neural network for h and not sharing weights between the p_i helps improve performance at convergence.
The grasp evaluation model takes as input a single observation from the hand camera of the robot and outputs a single scalar corresponding to the likelihood of success of that grasp. The model is parameterized by a convolutional neural network with a sigmoid output.
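At test time the two networks compose into a simple sample-then-rescore loop: draw candidate grasps from the planning distribution, collect a hand-camera observation for each, and execute the candidate the evaluator scores highest. A sketch with stand-in callables (the function names are ours):

```python
import numpy as np

def plan_grasp(sample_planner, evaluate, hand_image_for, n_samples=20):
    """Two-stage pipeline: sample candidates from the grasp planner, then
    re-score each with the evaluation model on a close-up observation.

    sample_planner() -> one candidate grasp
    hand_image_for(grasp) -> the hand-camera observation for that grasp
    evaluate(image) -> scalar success probability in [0, 1]
    """
    candidates = [sample_planner() for _ in range(n_samples)]
    scores = [evaluate(hand_image_for(g)) for g in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]
```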
III-C. Training methodology
Since our method involves capturing depth images from the hand of the robot corresponding to samples from the grasp planning module, the entire evaluation procedure is not differentiable and we cannot train the model end-to-end using supervised learning. As a result, our training procedure involves independently training the grasp planning module and the grasp evaluation model.
Given a dataset of objects with observations I^(j) and successful grasps g^(j), the grasp planning model p can be optimized by minimizing, with respect to its parameters θ, the negative log-likelihood of the successful grasps conditioned on the observations, which is

L(θ) = −Σ_j log p_θ(g^(j) | I^(j)).

This can be decomposed as

L(θ) = −Σ_j Σ_{i=1..d} log p_i(g_i^(j) | g_1^(j), …, g_{i−1}^(j), I^(j)).
This function can be optimized using standard backpropagation and stochastic minibatch gradient techniques [19].
In practice, the convolutional embedding of the observations, E(I), is a larger model than the submodules p_i, and there are often tens or hundreds of successful grasp attempts for a single set of images I^(j). It is therefore computationally advantageous to perform the forward pass and gradient calculations for E(I^(j)) once for all grasps on object j. This can be achieved in standard neural network and backpropagation libraries by stacking all grasps for a given object, so that SGD batches consist of pairs (I^(j), G^(j)), where G^(j) is the matrix consisting of all successful grasps for object j. To deal with differing numbers of successful grasps n_j, we choose N = max_j n_j and form an N × d matrix by padding G^(j) with arbitrary values. We can then write the gradient of our objective function as follows:

∇L(θ) = −Σ_j Σ_{k=1..N} 1[k ≤ n_j] ∇ log p_θ(G_k^(j) | I^(j)),

where 1[k ≤ n_j] is an indicator function corresponding to whether the k-th entry in G^(j) was one of the successful grasps.
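The padding-and-masking trick can be sketched as follows, assuming the per-row log-likelihoods log p(G_k | I) for one object's padded grasp matrix have already been computed (the names are ours):

```python
import numpy as np

def masked_batch_nll(log_probs, n_valid):
    """Mean negative log-likelihood over one object's padded grasp matrix.

    log_probs: shape (N,), log-likelihood of each padded row; rows with
    index >= n_valid are padding and must not contribute to the loss.
    """
    mask = np.arange(log_probs.shape[0]) < n_valid  # the indicator 1[k <= n_j]
    return -np.sum(np.where(mask, log_probs, 0.0)) / n_valid
```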
The grasp evaluation function is trained using supervised learning. Inputs are given by the hand camera images collected during the data collection process and labels are given by whether the corresponding grasp was successful.
IV. Experiments
We evaluated our approach by training grasping models on three datasets:

ShapeNet1M, a dataset of 1 million scenes, each containing a single object from the ShapeNet dataset with randomized orientation and object scale.

Random1M, a dataset of 1 million scenes, each containing a single object generated at random using the procedure above.

ShapeNetRandom1M, a dataset with 500,000 scenes from each of the previous datasets.
For each object set, we recorded 2,000 grasp attempts per object. This number of grasps was selected so that more than 95% of objects sampled had at least one successful grasp, to avoid biasing the dataset toward easier objects. (This number is not closer to 100% because a small percentage of the random objects in our training set are ungraspable with the Fetch gripper.) For data generation, we used a disembodied Fetch gripper to improve execution speed.
We trained the models using the Adam optimizer [16] with a learning rate of . We trained each model using three random seeds, and report the average of the three seeds unless otherwise noted.
We evaluated the performance of the model on scenes from each of the following evaluation sets:

ShapeNet1M

Random1M

A held out set of ShapeNet objects

A held out set of randomly generated objects

The YCB objects [4] with meshes that are capable of being grasped by our robot’s gripper. We sample four poses (i.e., object rotations) for each.
All executions were done using a Fetch robot in the MuJoCo physics simulator. When evaluating the model, we sampled likely grasps from the model using a beam search with beam width .
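Beam search over a discrete autoregressive model keeps the highest-probability partial grasps at each dimension and expands them jointly. An illustrative reconstruction (the conditionals below stand in for the trained submodules, and the beam width here is arbitrary):

```python
import numpy as np

def beam_search(conditionals, beam_width=4):
    """Find the highest-probability grasps under an autoregressive model
    by keeping the beam_width best partial grasps at each dimension.

    conditionals[i](prefix) -> probability vector over values of dim i.
    Returns (grasp, log_prob) pairs sorted best-first.
    """
    beam = [((), 0.0)]
    for cond in conditionals:
        expanded = []
        for prefix, logp in beam:
            probs = cond(list(prefix))
            for v, pv in enumerate(probs):
                if pv > 0:
                    expanded.append((prefix + (v,), logp + np.log(pv)))
        expanded.sort(key=lambda t: -t[1])  # keep the most likely prefixes
        beam = expanded[:beam_width]
    return beam
```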
IV-A. Performance of the algorithm
Figure 3 describes the performance of the algorithm on previously seen and unseen data. The full version of our algorithm is able to achieve greater than 90% success rate on previously unseen YCB objects even when training entirely on randomly generated objects. Training on 1M random objects performs comparably to training on 1M instances of realistic objects.
Training set      | ShapeNet Train | ShapeNet Test | Random Train | Random Test | YCB
ShapeNet1M        | 0.91           | 0.91          | 0.72         | 0.71        | 0.93
Random1M          | 0.91           | 0.89          | 0.86         | 0.84        | 0.92
ShapeNetRandom1M  | 0.92           | 0.90          | 0.84         | 0.81        | 0.92
Note that the reported experiments were achieved by limiting the robot to grasps in which the gripper is upright. When the model attempts to predict the full 6-dimensional grasp for the object (i.e., it can change the pitch and yaw of the gripper in addition to the grasp position and the roll of the gripper), the success rate is around 10% lower across experiments. We hypothesize that this is due to the fact that (a) nearly all objects in the training and test sets can be grasped with an upright grasp, and (b) sampling a fixed number of times from a six-dimensional grasp distribution provides a much sparser estimate of the distribution than sampling from a four-dimensional distribution (upright grasps). Further experimentation could look into whether significantly scaling the amount of training data or using a combination of the 4-dimensional and 6-dimensional training data could improve performance.
Figure 4 reports the precision of our algorithm on the same training and test sets. By setting a threshold of 50% on the value produced by the grasp evaluator, we can choose not to attempt a grasp when the model is not confident that a successful one has been found. Here we see a more meaningful advantage for the models trained on realistic objects, due to more conservative estimates by the evaluator.
Training set      | ShapeNet Train | ShapeNet Test | Random Train | Random Test | YCB
ShapeNet1M        | 0.94 (0.98)    | 0.93 (0.98)   | 0.78 (0.83)  | 0.81 (0.85) | 0.96 (0.97)
Random1M          | 0.92 (0.99)    | 0.89 (1.00)   | 0.88 (0.96)  | 0.85 (0.95) | 0.93 (0.99)
ShapeNetRandom1M  | 0.92 (1.00)    | 0.91 (1.00)   | 0.87 (0.97)  | 0.85 (0.94) | 0.94 (0.98)
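The precision metric behind Figure 4 can be computed as in this sketch (an illustrative reconstruction, not the paper's evaluation code):

```python
def precision_at_threshold(scores, successes, threshold=0.5):
    """Among grasps whose evaluator score clears the threshold, the
    fraction that actually succeed; returns None if every grasp is
    declined (the model abstains on the whole set)."""
    attempted = [ok for s, ok in zip(scores, successes) if s >= threshold]
    if not attempted:
        return None
    return sum(attempted) / len(attempted)
```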
Figure 5 compares the performance of our algorithm to several baselines. In particular, our full method performs significantly better than sampling the highest likelihood grasp from the autoregressive model alone.
Method (Random1M)   | ShapeNet Train | ShapeNet Test | Random Train | Random Test | YCB
Full Algorithm      | 0.91           | 0.89          | 0.86         | 0.84        | 0.92
Autoregressive Only | 0.89           | 0.86          | 0.80         | 0.76        | 0.89
Random              | 0.22           | 0.21          | 0.10         | 0.11        | 0.26
Centroid            | 0.30           | 0.25          | 0.10         | 0.12        | 0.54
Figure 6 shows the percentage of objects for which the top k most likely grasps according to the grasp planning model contain at least one successful grasp. The incremental likelihood of sampling a valid grasp saturates well before 20 attempts, motivating our choice of 20 samples at evaluation time. Note that more objects have successful grasps among the 20 sampled than achieve success using our method, suggesting that the performance of the grasp evaluator could be a bottleneck to the overall performance of the algorithm.
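The Figure 6 metric, the fraction of objects whose k most likely samples include at least one successful grasp, can be computed as in this sketch (the names are ours):

```python
def coverage_at_k(ranked_success_flags, k):
    """Fraction of objects whose top-k ranked grasps contain a success.

    ranked_success_flags[j] lists, for object j, whether each grasp
    succeeded, ordered from most to least likely under the planner.
    """
    hits = sum(any(flags[:k]) for flags in ranked_success_flags)
    return hits / len(ranked_success_flags)
```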
IV-B. Failure cases
We observed the following three main failure cases for the learned model:

For objects that are close to the maximum size graspable by the gripper (10cm), the grasp chosen sometimes collides with the object.

In some cases, the model chooses to grasp a curved object at a narrower point to avoid wider points that may cause collision, and as a result the gripper slips off.

The model cannot find reasonable grasp candidates for some highly irregular objects like a chain found in the YCB dataset.
The learned models primarily failed on objects for which all features are close in size to the maximum allowed by the gripper. We hypothesize that the models learned to avoid wide points that would cause collision with the object, but since few objects close to the size of the gripper were present during training, this representation failed for objects with graspable features that are slightly too wide or slightly too narrow. Supplementing the training set with additional objects on the edge of graspability or combining planning with visual servoing could alleviate these edge cases.
The model also failed for a small number of highly irregular objects like a chain present in the YCB dataset. These failure cases may present a larger challenge for the use of random objects in training grasping models, but additional diversity in the random generation pipeline may mitigate the issue.
IV-C. Effect of amount of training data
Figure 8 shows the impact of the number of unique objects in the training set on the performance of our models in validation data held out from the same distribution and outofsample test data from the YCB dataset. Although with enough data the model trained entirely using randomly generated data performs as well as the models trained using realistic data, with smaller training sets the more realistic object distributions perform significantly better on the test set than does the unrealistic random object distribution.
Note that performance does not appear to have saturated yet in these examples. We conjecture that more training data and more training data diversity could help reduce the effects of the first two edge cases above, but may not allow the model to overcome the third edge case.
V Conclusion
We demonstrated that a grasping model trained entirely using non-realistic procedurally generated objects can achieve a high success rate on realistic objects despite no training on a realistic object distribution. Our grasping model architecture allows for efficient sampling of high-likelihood grasps at evaluation time, with a successful grasp being found for 96% of objects in the first 20 samples. By scoring those samples, we can achieve a success rate of 92% on realistic objects.
Future directions that could improve the success rate of the trained models include scaling up to larger training sets, providing the model with feedback from failed grasps to influence further grasp selection, combining our grasp planning module with work on visual servoing for grasping, and incorporating additional sensor modalities like haptic feedback.
Another exciting direction is to explore using domain randomization for generalization in other robotic tasks. If realistic object models are not needed, tasks like pick-and-place, grasping in clutter, and tool use may benefit from the ability to randomly generate hundreds of thousands or millions of 3D scenes.
Acknowledgements
We thank Lukas Biewald and Rocky Duan for helpful discussions, brainstorming, and support in starting this project. The project could not have happened without the help of Peter Welinder, Jonas Schneider, Rachel Fong, and the rest of the engineering team at OpenAI.
References
 [1] Antonio Bicchi and Vijay Kumar. Robotic grasping and contact: A review. In Robotics and Automation, 2000. Proceedings. ICRA’00. IEEE International Conference on, volume 1, pages 348–353. IEEE, 2000.
 [2] Jeannette Bohg, Antonio Morales, Tamim Asfour, and Danica Kragic. Data-driven grasp synthesis: a survey. IEEE Transactions on Robotics, 30(2):289–309, 2014.
 [3] Konstantinos Bousmalis, Alex Irpan, Paul Wohlhart, Yunfei Bai, Matthew Kelcey, Mrinal Kalakrishnan, Laura Downs, Julian Ibarz, Peter Pastor, Kurt Konolige, et al. Using simulation and domain adaptation to improve efficiency of deep robotic grasping. arXiv preprint arXiv:1709.07857, 2017.
 [4] Berk Calli, Arjun Singh, Aaron Walsman, Siddhartha Srinivasa, Pieter Abbeel, and Aaron M Dollar. The YCB object and model set: Towards common benchmarks for manipulation research. In Advanced Robotics (ICAR), 2015 International Conference on, pages 510–517. IEEE, 2015.
 [5] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015.
 [6] Matei Ciocarlie, Kaijen Hsiao, Edward Gil Jones, Sachin Chitta, Radu Bogdan Rusu, and Ioan A Şucan. Towards reliable grasping and manipulation in household environments. In Experimental Robotics, pages 241–252. Springer, 2014.
 [7] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked autoencoder for distribution estimation. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 881–889, 2015.
 [8] Corey Goldfeder, Peter K Allen, Claire Lackner, and Raphael Pelossof. Grasp planning via decomposition trees. In Robotics and Automation, 2007 IEEE International Conference on, pages 4679–4684. IEEE, 2007.
 [9] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014.
 [10] Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. arXiv preprint arXiv:1310.8499, 2013.
 [11] Saurabh Gupta, Pablo Arbeláez, Ross Girshick, and Jitendra Malik. Aligning 3D models to RGB-D images of cluttered scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4731–4740, 2015.
 [12] Stefan Hinterstoisser, Stefan Holzer, Cedric Cagniart, Slobodan Ilic, Kurt Konolige, Nassir Navab, and Vincent Lepetit. Multimodal templates for real-time detection of texture-less objects in heavily cluttered scenes. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 858–865. IEEE, 2011.
 [13] Nick Jakobi. Evolutionary robotics and the radical envelope-of-noise hypothesis. Adaptive Behavior, 6(2):325–368, 1997.
 [14] Edward Johns, Stefan Leutenegger, and Andrew J Davison. Deep learning a grasp function for grasping under gripper pose uncertainty. In Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on, pages 4461–4468. IEEE, 2016.
 [15] Ben Kehoe, Akihiro Matsukawa, Sal Candido, James Kuffner, and Ken Goldberg. Cloud-based robot grasping with the Google object recognition engine. In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pages 4263–4270. IEEE, 2013.
 [16] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 [17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
 [18] Hugo Larochelle and Stanislas Lauly. A neural autoregressive topic model. In Advances in Neural Information Processing Systems, pages 2708–2716, 2012.
 [19] Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 29–37, 2011.
 [20] Quoc V Le, David Kamm, Arda F Kara, and Andrew Y Ng. Learning to grasp objects with multiple contact points. In Robotics and Automation (ICRA), 2010 IEEE International Conference on, pages 5062–5069. IEEE, 2010.
 [21] Ian Lenz, Honglak Lee, and Ashutosh Saxena. Deep learning for detecting robotic grasps. The International Journal of Robotics Research, 34(4–5):705–724, 2015.
 [22] Sergey Levine, Peter Pastor, Alex Krizhevsky, Julian Ibarz, and Deirdre Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, page 0278364917710318, 2016.
 [23] Jeffrey Mahler, Jacky Liang, Sherdil Niyaz, Michael Laskey, Richard Doan, Xinyu Liu, Juan Aparicio Ojea, and Ken Goldberg. Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv preprint arXiv:1703.09312, 2017.
 [24] Jeffrey Mahler, Matthew Matl, Xinyu Liu, Albert Li, David Gealy, and Ken Goldberg. Dex-Net 3.0: Computing robust robot suction grasp targets in point clouds using a new analytic model and deep learning. arXiv preprint arXiv:1709.06670, 2017.
 [25] Andrew T Miller and Peter K Allen. GraspIt! A versatile simulator for robotic grasping. IEEE Robotics & Automation Magazine, 11(4):110–122, 2004.
 [26] Andrew T Miller, Steffen Knoop, Henrik I Christensen, and Peter K Allen. Automatic grasp planning using shape primitives. In Robotics and Automation, 2003. Proceedings. ICRA’03. IEEE International Conference on, volume 2, pages 1824–1829. IEEE, 2003.
 [27] Luis Montesano, Manuel Lopes, Alexandre Bernardino, and José Santos-Victor. Learning object affordances: from sensory–motor coordination to imitation. IEEE Transactions on Robotics, 24(1):15–26, 2008.
 [28] Antonio Morales, Eris Chinellato, Andrew H Fagg, and Angel P Del Pobil. Using experience for assessing grasp reliability. International Journal of Humanoid Robotics, 1(04):671–691, 2004.
 [29] Igor Mordatch, Kendall Lowrey, and Emanuel Todorov. Ensemble-CIO: Full-body dynamic motion planning that transfers to physical humanoids. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pages 5307–5314. IEEE, 2015.
 [30] Richard M Murray, Zexiang Li, and S Shankar Sastry. A mathematical introduction to robotic manipulation. CRC Press, 1994.
 [31] Van-Duc Nguyen. Constructing force-closure grasps. The International Journal of Robotics Research, 7(3):3–16, 1988.
 [32] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
 [33] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
 [34] Raphael Pelossof, Andrew Miller, Peter Allen, and Tony Jebara. An SVM learning approach to robotic grasping. In Robotics and Automation, 2004. Proceedings. ICRA’04. 2004 IEEE International Conference on, volume 4, pages 3512–3518. IEEE, 2004.
 [35] Lerrel Pinto, James Davidson, and Abhinav Gupta. Supervision via competition: Robot adversaries for learning tasks. arXiv preprint arXiv:1610.01685, 2016.
 [36] Domenico Prattichizzo, Monica Malvezzi, Marco Gabiccini, and Antonio Bicchi. On the manipulability ellipsoids of underactuated robotic hands with compliance. Robotics and Autonomous Systems, 60(3):337–346, 2012.
 [37] Domenico Prattichizzo and Jeffrey C Trinkle. Grasping. In Springer handbook of robotics, pages 955–988. Springer, 2016.
 [38] Alberto Rodriguez, Matthew T Mason, and Steve Ferry. From caging to grasping. The International Journal of Robotics Research, 31(7):886–900, 2012.
 [39] Carlos Rosales, Raúl Suárez, Marco Gabiccini, and Antonio Bicchi. On the synthesis of feasible and prehensile robotic grasps. In Robotics and Automation (ICRA), 2012 IEEE International Conference on, pages 550–556. IEEE, 2012.
 [40] Fereshteh Sadeghi and Sergey Levine. CAD2RL: Real single-image flight without a single real image. arXiv preprint arXiv:1611.04201, 2016.
 [41] Anis Sahbani, Sahar El-Khoury, and Philippe Bidaud. An overview of 3D object grasp synthesis algorithms. Robotics and Autonomous Systems, 60(3):326–336, 2012.
 [42] Ashutosh Saxena, Justin Driemeyer, and Andrew Y Ng. Robotic grasping of novel objects using vision. The International Journal of Robotics Research, 27(2):157–173, 2008.
 [43] Ashutosh Saxena, Lawson LS Wong, and Andrew Y Ng. Learning grasp strategies with partial shape information. In AAAI, volume 3, pages 1491–1494, 2008.
 [44] John D Schulman, Ken Goldberg, and Pieter Abbeel. Grasping and fixturing as submodular coverage problems. International Symposium on Robotics Research (ISRR), 2011.
 [45] Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. arXiv preprint arXiv:1703.06907, 2017.
 [46] Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Koray Kavukcuoglu, Oriol Vinyals, and Alex Graves. Conditional image generation with PixelCNN decoders. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4790–4798. Curran Associates, Inc., 2016.
 [47] Ulrich Viereck, Andreas ten Pas, Kate Saenko, and Robert Platt. Learning a visuomotor controller for real world robotic grasping using easily simulated depth images. arXiv preprint arXiv:1706.04652, 2017.
 [48] Jonathan Weisz and Peter K Allen. Pose error robust grasping from contact wrench space metrics. In Robotics and Automation (ICRA), 2012 IEEE International Conference on, pages 557–562. IEEE, 2012.
 [49] Ziang Xie, Arjun Singh, Justin Uang, Karthik S Narayan, and Pieter Abbeel. Multimodal blending for high-accuracy instance recognition. In Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, pages 2214–2221. IEEE, 2013.
 [50] Xinchen Yan, Mohi Khansari, Yunfei Bai, Jasmine Hsu, Arkanath Pathak, Abhinav Gupta, James Davidson, and Honglak Lee. Learning grasping interaction with geometry-aware 3D representations. arXiv preprint arXiv:1708.07303, 2017.
 [51] Andy Zeng, Kuan-Ting Yu, Shuran Song, Daniel Suo, Ed Walker, Alberto Rodriguez, and Jianxiong Xiao. Multi-view self-supervised deep learning for 6D pose estimation in the Amazon Picking Challenge. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 1383–1386. IEEE, 2017.
 [52] Fangyi Zhang, Jürgen Leitner, Michael Milford, and Peter Corke. Sim-to-real transfer of visuomotor policies for reaching in clutter: Domain randomization and adaptation with modular networks. arXiv preprint arXiv:1709.05746, 2017.