Virtual Training for a Real Application:
Accurate Object-Robot Relative Localization without Calibration
Localizing an object accurately with respect to a robot is a key step for autonomous robotic manipulation. In this work, we propose to tackle this task knowing only 3D models of the robot and object in the particular case where the scene is viewed from uncalibrated cameras — a situation which would be typical in an uncontrolled environment, e.g., on a construction site. We demonstrate that this localization can be performed very accurately, with millimetric errors, without using a single real image for training, a strong advantage since acquiring representative training data is a long and expensive process. Our approach relies on a classification Convolutional Neural Network (CNN) trained using hundreds of thousands of synthetically rendered scenes with randomized parameters. To evaluate our approach quantitatively and make it comparable to alternative approaches, we build a new rich dataset of real robot images with accurately localized blocks.
Keywords: relative localization · pose estimation · Convolutional Neural Networks · synthetic data · virtual training · robotics
1 Introduction
Solving robot vision, i.e., allowing a robot to interact with the world using simple 2D images, is one of the initial motivations of computer vision (roberts1963machine). We revisit this problem using the modern tools of deep learning, under assumptions that are relevant for autonomous robots in the construction industry. In factories equipped with robots, with well-identified and repetitive tasks, cameras for robot control can be placed in locations that are both relevant and secure, and calibrated (internally and externally) once and for all.
Now imagine using robots on a work site. As the environment is only very weakly controlled and as the tasks can change frequently, there is no generic camera position that can observe all robot actions well. Even if several cameras are used to provide enough relevant views, some of these cameras would often have to be moved, either manually (e.g., hand-shifting tripods) or using cameras mounted on moving devices such as mobile observation robots or cable drones. In these settings, taking the time to recalibrate cameras with respect to the robot after each camera movement is unrealistic, and inertial measurement units (IMUs) would not be accurate enough to recompute new usable camera poses. On a construction site, cameras could also be hit or moved accidentally, sometimes unknowingly. The setting thus demands both high robustness and high accuracy for object manipulation.
An option is to directly mount cameras on the operating robots, but the practical number of cameras and the range of viewpoints is then severely reduced. Besides, such cameras are more exposed to dirt and accidents as they are close to robot actions. Last, contrary to the relatively clean and controlled context of a factory, a construction site is a hostile environment for robot vision. The objects suffer from substantial changes in appearance, including texture variations due to dirt and bad weather, changes of illumination due to an outdoor setting, and unruly occlusions. There might also be shape differences between the expected 3D models and the real scene. While this does not actually change the tasks, it makes them substantially more difficult.
To address these issues, we consider an alternative setup where the robot control does not require a formal calibration step. We consider the case of several uncalibrated cameras, e.g., attached to sticks that are moved as the construction evolves, which observe a scene with no reliable landmark that would allow absolute camera registration. The challenge is then to recognize the different elements in the scene and position them accurately with respect to each other.
More concretely, we focus here on the problem of finely localizing and orienting a building block with respect to a robotic arm, both seen from external cameras with no extrinsic or intrinsic calibration. As argued above, this task is of major interest, but to the best of our knowledge, there is no generic method to solve it, and no standard dataset to train or test a method. In this paper, we present such a dataset, with thousands of annotated real test images covering several difficulty levels, as well as a strong baseline method learned from synthetic images.
To accomplish this task, one possible approach would first align 3D models of the robot and the building block on the image, then localize them with respect to the camera, and finally position them with respect to one another. However, this would be sensitive to changes in shape and appearance, which are pervasive on a construction site. We instead present a more direct approach that leverages recent advances in the use of CNNs. Such networks have the advantage of being generic: they can easily be trained to be robust to many perturbations (including occlusions), and they can be applied to images extremely efficiently.
Yet, CNNs require a very large quantity of annotated training images. While it is possible to animate a robot and a building block in front of a camera to automatically create such a dataset, it typically takes thousands of hours to collect images covering hundreds of thousands of situations (PintoINCRA2016; LevineISER2017). Given that new training is necessary whenever the robot or the building blocks change, this solution is not practical.
To make our framework generic and easily applicable to new robots and new types of blocks, without requiring the expensive and long construction of real-life datasets for training, we demonstrate a light and flexible method that simply takes as input the untextured 3D model of both a building block (with possible parametric variations) and a robot (with its possible joint positions), without any real image. It is based on virtual training using synthetic images only, and yet it performs well on real images.
We use a three-step approach. First, using a lower-resolution image of the whole scene, we make a coarse estimation of the block pose relative to the robot base. Second, after moving the clamp above this estimated location, we locate the clamp in the image and crop a higher-resolution image around that clamp position. Third, using this high-resolution crop, we refine the estimation of the block pose in the clamp reference frame, which we can translate into the robot base reference frame since the 3D position of the clamp with respect to the base is known.
This approach has two benefits. First, it takes advantage of the full resolution of the input image without increasing the required memory (as a fully convolutional approach operating on the whole high-resolution image would). Second, it performs the final relative localization with the robot clamp close to the block, which is more accurate than localizing with respect to the robot base, potentially far from the block.
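The three-step procedure above can be sketched as follows. This is a minimal illustration only: the network callables (coarse_net, clamp_net, fine_net) and the crop size are hypothetical placeholders, not the actual implementation.

```python
import numpy as np

def crop_around(image, u, v, size=224):
    # Placeholder: extract a size x size patch centered on pixel (u, v).
    half = size // 2
    return image[v - half : v + half, u - half : u + half]

def locate_block(overview, highres, coarse_net, clamp_net, fine_net):
    # Step 1: coarse block pose (x, y, theta) in the robot-base frame,
    # estimated from a low-resolution overview of the whole scene.
    coarse_pose = coarse_net(overview)

    # (The robot clamp is then moved above this estimated location.)

    # Step 2: locate the clamp in the new view and crop a
    # high-resolution patch around it.
    u, v = clamp_net(overview)
    crop = crop_around(highres, u, v)

    # Step 3: fine block pose in the clamp frame; since the clamp pose
    # in the base frame is known from the joint angles, this yields a
    # fine pose in the base frame as well.
    fine_pose = fine_net(crop)
    return coarse_pose, fine_pose
```

Note how the fine estimate depends only on the crop content, not on the accuracy of the coarse stage, which is what prevents error accumulation across the steps.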
To summarize, our main contributions are as follows:
We argue for a new robotic task of relative localization without camera calibration, formulate it in detail, and provide a rich realistic evaluation dataset and procedure (the project page with this UnLoc dataset, for Uncalibrated Relative Localization, is imagine.enpc.fr/~loingvi/unloc). The task involves a three-step procedure where a first coarse position estimation is refined after the robot moves towards the target.
We demonstrate that very accurate (millimetric) relative localization can be reached with learning techniques only despite the lack of calibration information.
We show that training can be performed without a single real image, paving the way for generic virtual training with arbitrary shapes and objects.
2 Related Work
There are two critical challenges in our work: first, extending localization and pose estimation to perform relative pose estimation; second, performing such a task without using real images as training data. We thus review the literature on both challenges.
2.1 Object localization and pose estimation
Localizing an object and estimating its pose are classical computer vision problems. Indeed, the 3D understanding of the world that it provides seems to be a natural first step for image understanding and for any robotic interaction (roberts1963machine).
Model-based alignment methods.
In early computer vision works, it was often assumed that a 3D model of the object of interest was available (roberts1963machine; lowe1987three; huttenlocher1990recognizing; mundy2006object). The most successful approach was the use of local keypoint descriptors such as SIFT (lowe1999object), which led in particular to many 3D pose estimation pipelines for robotics applications, from a single image or from multiple images (collet2010efficient; collet2011moped).
Learning-based approaches tend to focus more on object categories than on specific object instances. They can be roughly classified into two categories.
First, many works used manually-designed features as input to learning algorithms and applied them to detect objects in images (dalal2005histograms; felzenszwalb2010object). Deformable Part Models (felzenszwalb2010object) were particularly successful and were extended to predict 3D object poses (glasner2011aware; fidler20123d; hejrati2012analyzing; pepik2012teaching).
More recently, CNNs (lecun1989backpropagation) were shown to boost performance for 2D object detection (sermanet2013overfeat; girshick2014rich; he2017mask) and object pose estimation (tulsiani2015viewpoints; su2015render; wu2016single; massa2016crafting). Even more related to our goal, such deep learning approaches were used by PintoINCRA2016 to learn where to grasp an object; they have also recently attracted much attention in the robotics community, in particular for reinforcement learning (schulman2015trust; levine2016end).
Marker-based detection methods.
Another approach is to place one or several fiducial markers, such as ArUco (GARRIDOJURADO20142280; GARRIDOJURADO2016481), on the objects of the scene to facilitate their detection and fine localization in images. This method was successfully used in Feng2014TowardsAR. It is however less flexible, as it requires carefully positioning these markers on visible flat surfaces of the objects at well-defined locations. Besides, it also requires intrinsically and extrinsically calibrated cameras to locate the robot in a global reference frame (possibly putting markers on the robot too) and to relate objects in 3D.
In contrast to these works, our focus is not the localization of an object of interest in an image or with respect to the camera, but the relative positioning of one object with respect to another, here the block with respect to the robot. Rather than targeting a full 6DOF pose (hodavn2016evaluation), we restrict ourselves to two position parameters and one angle, since one can reasonably assume that both the robot and the block are standing on a more or less flat surface. Building on the success of CNN-based approaches and following the logic of end-to-end learning, we developed a direct prediction approach that learns to directly predict the relative pose. Both the definition of the problem of relative pose estimation and this direct approach are novelties of our work.
2.2 Learning from synthetic data
The success of deep learning renewed interest in the possibility of learning from synthetic data. Indeed, deep learning methods typically require very large annotated datasets, and such annotations are most often expensive and difficult to obtain, whereas synthetic data provides a large amount of accurate annotations with every image. However, an algorithm trained on synthetic data may not apply well to real data because of their various differences, a problem often referred to as the domain gap between real and synthetic images. We identified two main approaches to address this issue.
The simplest way to avoid the issues related to the differences between real and synthetic images is to reduce the domain gap as much as possible by generating realistic images. Generating completely realistic images requires a high-quality 3D model of the scene as well as a good illumination model, followed by a rendering algorithm. Since high-quality 3D models are costly to produce and high-quality rendering is computationally expensive (typically several hours per image), little work has been done with this quality of data, and most works focus on faking realism. This can be done, for example, by fusing a rendered 3D object model with a texture extracted from a real image and compositing it with the background of a real scene. In particular, peng2015learning and pepik2015holding studied how the realism of such composite images impacts performance for object detection, su2015render applied such a strategy to push the performance of object category pose estimation, and chen2016synthesizing did so for human pose estimation. More recently, several papers have used game engines to generate training data, for example for semantic segmentation (richter2016playing; ros2016synthia; shafaei2016play).
The problem of training a model on synthetic images and applying it to real images can be seen as an instance of the generic domain adaptation problem, for which many approaches have been developed. One approach is to explicitly design a loss or an architecture to perform domain adaptation. Such methods have been used with synthetic data to perform object classification (peng2017synthetic), detection (sun2014virtual; vazquez2014virtual) and, closer to our task, 3D model alignment in 2D images (massa2016deep).
Another approach for learning robust models with synthetic data is to generate training examples with a very high variety to encourage the models to learn invariances. The first application of such an approach with CNNs is probably dosovitskiy2015flownet, which leverages highly-modified and unrealistic composite images between real images and rendered 3D chairs (aubry2014seeing) to learn to predict optical flow between real images. More recently, sadeghi2016cad demonstrated how to use a high variety of non-realistic rendered views of indoor scenes to learn to fly a quadcopter.
Independently of our work, tobin2017domain recently proposed a similar strategy, which they call domain randomization, for localizing a block and grasping it with a robot. There are however several strong differences between their work and ours. In particular, tobin2017domain focus on differentiating between object shapes (triangular, hexagonal, rectangular) and on coarse localization on a table with fixed characteristics, i.e., always seen from a similar viewpoint and with the same orientation; their setting thus essentially requires localizing the table and objects in 2D. On the contrary, we do not impose the presence of a table with fixed characteristics seen from a given angle; we localize the objects directly with respect to a robot at an unknown position. These differences in our data are illustrated in Figure 2. Furthermore, our approach localizes a block with an average accuracy of 2.6 mm and 0.7°, while tobin2017domain report an average accuracy around 15 mm, without orientation estimation.
3 Task Abstraction
The locating-for-grasping problem is a robotic task that is difficult to assess. In this section, we first explain how we express it as a three-step procedure corresponding to three pure computer-vision subtasks, which enables reproducible quantitative evaluation. We then formalize these subtasks.
3.1 Task overview
The task we consider is as follows. The goal is to grasp an unknown cuboid block with a robotic arm using only images from intrinsically and extrinsically uncalibrated cameras. The block lies on a mostly planar ground; its position, orientation and dimensions are unknown. The robot and the block are in the field of view of several cameras, whose actual positions and orientations are also unknown; as a result, the block can be fully or partially occluded by the robot in some camera views.
For grasping to succeed, the position and orientation of the cuboid with respect to the robot must be known accurately. Note, however, that only this relative position and orientation between the robot and the block is required; the actual camera poses are not needed.
The only configuration information available for this task is an untextured 3D model of the robot with its joint axes, and a range of possible sizes for the cuboid. In particular, the appearances of both the robot and the block are totally unknown.
As it remains difficult to accurately locate an object with respect to a robot, we actually define the following subtasks, which correspond to a three-step locating procedure:
Coarse relative localization subtask. We first consider the very general case where the robot and the block are at random positions and orientations, and the robot joints are also randomly set. The cameras, whose poses are also random, only provide overviews of the scene. The subtask here is to (coarsely) estimate the pose of the block with respect to the robot.
Tool localization subtask. After the block position is thus estimated, although possibly with moderate accuracy and confidence, we assume the robot clamp is moved on top of that coarse predicted location. In this setting, the robot and the block remain at random positions and orientations, but the clamp is located at a random position close to the block, oriented towards the ground and ready to grasp. Now the second subtask is to detect the clamp in the picture, allowing camera close-ups to later perform a finer pose estimation.
Fine relative localization subtask. Last, using camera close-ups (actually crops of overviews centered on the zone of interest), the third subtask is to finely estimate the block location and orientation with respect to the clamp, hence with respect to the robot, thus enabling the actual grasp.
Note that in this paper, we focus on position estimation in the horizontal plane on which the robot rests, and assume the block lies flat on this plane with little or no variation in height or tilt. This corresponds to the realistic assumption of working on an approximately planar surface. Adapting the framework to deal with significant block tilts and height variations is future work.
We describe in Section 5.1 a dataset for assessing methods that address these three subtasks. It is made of real images of a robot and a block, together with accurate annotations of the relative position and orientation of the block with respect to the robot. This allows quantitative evaluation of a method's success on the task.
This composite task actually abstracts a more general task of image-based object-robot relative positioning, where robot motions are arbitrary. However, as such visual servoing is a dynamic process that involves real devices, it is practically impossible to reproduce, and hence two given methods cannot be compared. Our three-step locating task for grasping is a discrete formulation of this dynamic process, with a predefined plausible intermediate move. Its interest is that it is static and deterministic; competing methods can thus be assessed and compared quantitatively and meaningfully.
Our experiments below also show that such a three-step procedure makes sense for robot control, as more accurate position and orientation information can indeed be obtained thanks to the refinement subtask. Additionally, we consider two task variants: single-view and multi-view. As the block can sometimes be occluded in a single view, using multiple views provides more robustness, because a block then has little or no chance of being hidden in all views. The multi-view setting also offers greater pose accuracy by exploiting information from each view.
3.2 Robot and reference frames
We need to define a reference frame to formalize the positioning subtasks. As the blocks are symmetric (invariant to a 180° rotation in the horizontal plane), and as the problem is mostly robot-centric, we choose to associate the reference frame to the robot.
For frame axes to be defined meaningfully, the robot has to have asymmetries. Rather than imposing markers on the robot, which would not be robust, especially on a construction site with dirt and possibly bad weather, we choose to rely only on the asymmetries of the robot's intrinsic shape.
Concretely, in the dataset we created for this task (see Sect. 5.1), we used a robot (an IRB120 from ABB) with the following shape features:
The base of the robot is mostly cylindrical. However, it is fixed on the experiment table with nuts and bolts using a mostly-square plate, whose orientation can be used to define axes x and y, up to swapping. More importantly, wires are connected to the robot via a square box attached to the base cylinder, which we use to define the direction of the x axis; the y axis is then defined as orthogonal to it and oriented by convention. The block relative orientation θ is the angle between the largest block dimension (on the ground plane) and the x axis. As our cuboid blocks here are symmetric, this angle is only defined between 0 and 180 degrees. This frame is illustrated in Figure 5(a). It is used for the coarse location subtask.
Just above the clamp, a bolt and a small square cover make the side of the grip support asymmetric, as shown in Figure 3. As above, this allows us to define two axes x and y, and the angle θ of the main block axis with respect to the x axis. This frame is relative to the clamp rather than to the robot base, but it can immediately be related to the robot frame since the robot joints and clamp position are known. Note that this second frame only makes sense when the clamp is oriented vertically towards the ground. It is used for the fine location subtask.
Similar shape asymmetries can easily be added to any existing robot if they do not already exist.
4 Solving the Task with CNNs and Synthetic Images
Recovering the exact position of an object (robot, things to grasp, obstacles, etc.) from a single image given only a 3D model is a difficult problem, especially when the object texture is unknown or unreliable, and when its deformations (robot articulations, variants of object size and shape) make it vary drastically. Yet, given several views, the cameras could be calibrated and 3D reconstruction could be performed, enabling the problem to be solved after 3D model alignment. However, multi-view calibration and reconstruction can be challenging problems, requiring a significant processing time, and aligning 3D models to a point cloud is not trivial either.
Instead, we propose to tackle the problem directly and train a CNN to localize a block robustly with respect to the robot, given a single image. In case images from several cameras are available, location estimates can be aggregated to improve the general accuracy. Moreover, we only use synthetic training images, to avoid the costly collection and annotation of real data.
More precisely, we train three CNNs, one for each subtask. The first network estimates a coarse location of the block relatively to the robot base in a single image. After the robot clamp has been moved above that rough location, the second network locates the clamp in a view of the new scene configuration. Last, using an image crop around the estimated clamp position, the third network estimates a fine block pose with respect to the clamp, hence to the robot base.
While there are several steps in this approach, it differs from a pipeline with camera calibration, 3D reconstruction and model alignment in that the errors do not accumulate. Indeed, assuming the first and second networks are robust enough to yield a close-up picture of the clamp with the block, the final third network can make a fine estimation that does not depend on the accuracy of the previous subtasks, including crop centering.
4.1 Relative pose estimation with a CNN
Since the position as well as the angle are continuous quantities, it would be natural to formulate their estimation as a regression problem. However, such a formulation is not well suited to representing ambiguities. For example, since the arm of the robot is almost symmetric, it would be natural for our system to hesitate between two symmetric positions during the fine estimation step. In a regression setup, there is no natural way to handle such ambiguities. On the contrary, if we predict probabilities for each position of the block, a multi-modal distribution can be predicted in an ambiguous case. We thus discretize the space of possible poses and predict probabilities for each bin, formulating the problem as a classification problem. Such a formulation has been shown to be more effective in similar problems, such as keypoint position estimation and orientation estimation (tulsiani2015viewpoints; su2015render; massa2016crafting). Concretely, given the level of accuracy we target (see Section 5.2), our coarse pose estimation considers square bins of size 5 mm, and our fine pose estimation uses bins of size 2 mm.
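A minimal sketch of this discretization, assuming the bin sizes above (5 mm squares with 36 angle bins for the coarse stage, 2 mm with 90 angle bins for the fine stage) and coordinates already expressed in millimeters; the class-index convention is illustrative, not the paper's exact encoding.

```python
def pose_to_bins(x_mm, y_mm, theta_deg, coarse=True):
    # Continuous (x, y, theta) target mapped to per-axis class indices.
    cell = 5.0 if coarse else 2.0        # square position bins, in mm
    n_angle = 36 if coarse else 90       # bins over theta in [0, 180)
    x_bin = int(x_mm // cell)
    y_bin = int(y_mm // cell)
    # The block is symmetric, so theta is only defined modulo 180 degrees.
    theta_bin = int((theta_deg % 180.0) * n_angle / 180.0)
    return x_bin, y_bin, theta_bin
```

Each of the three indices is then a target for its own softmax classifier, so a multi-modal hesitation simply shows up as probability mass in two distant bins.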
Network architecture and joint prediction.
We solve the classification problem we have just defined by training a CNN. More precisely, we use a ResNet-18 network, trained from scratch and with a standard cross-entropy loss, which has shown good performance while remaining relatively light. We used a batch size of 128, a weight decay of and a momentum of 0.9. We trained our model with an initial learning rate of until convergence (250 iterations for the coarse estimation, 11 iterations for the tool detection, and 186 iterations for the fine estimation), then a learning rate of for 15 iterations (coarse and fine). We did not explore in detail the influence of the size of the network, but one can reasonably expect a small performance gain by optimizing it, i.e., using a deeper or wider network while keeping it small enough to avoid overfitting.
However, defining the form of the network output is not trivial. Simply defining one class per bin would lead to too many classes. For example, there are more than 17,500 attainable 2D position bins for the coarse estimation (3,600 for the fine estimation), each of which could be broken into 36 orientation bins (90 for the fine estimation). This would not only make the network memory usage much larger, but also require much more training data. Indeed, our early experiments (see Section 5.3) showed that, despite the intuitive information sharing that could be used to predict the different classes, 80 images per localization class (for a total of more than 280k training images) are not enough to avoid overfitting.
The simplest alternative would be to train three independent networks to predict x, y and the angle θ. However, we show that it is better to train a multi-task network that predicts probabilities over 1D bins for x, y and θ independently, but computes a single shared representation in all but the last layer. Interestingly, and contrary to the network predicting one probability per 2D position, this network does not overfit dramatically.
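The multi-task output can be pictured with a toy numpy stand-in for the CNN trunk: one shared feature vector feeds three independent linear classification heads, one per quantity, instead of one head over every (x, y, θ) combination. All sizes here are illustrative assumptions, not the paper's exact layer dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_x, n_y, n_theta = 512, 120, 120, 36

# One weight matrix per head; in the real network these would be the
# final fully-connected layers on top of the shared CNN representation.
W_x = rng.standard_normal((n_x, n_feat))
W_y = rng.standard_normal((n_y, n_feat))
W_t = rng.standard_normal((n_theta, n_feat))

def heads(feature):
    # Three per-axis logit vectors computed from one shared representation.
    return W_x @ feature, W_y @ feature, W_t @ feature

f = rng.standard_normal(n_feat)   # stand-in for the CNN feature vector
lx, ly, lt = heads(f)
```

With these sizes the output layer has 120 + 120 + 36 = 276 units instead of the 120 × 120 × 36 = 518,400 classes a joint encoding would need, which is the memory and data-efficiency argument made above.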
Performing classification instead of regression also has the advantage of giving a natural way to merge predictions from several views. Indeed, the output of the network can be interpreted as log probabilities. If one considers the information from the different views to be independent, the position probability given all the images is simply the product of the individual probabilities. One can thus simply take the maximum of the sum of the outputs of the network applied to the different images.
In fact, as estimation from a single view can be ambiguous, for example because the exact size of the block is unknown or because the block is occluded by the robot, merging the predictions from several views is crucial to the success of our approach, as shown in Section 5.5. Note that this approach not only scales well with the number of cameras but also allows a smooth degradation (down to a single view) in case some cameras are moved, turned away from the scene, or become inoperative.
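The fusion rule above can be sketched in a few lines: treating each view's output as log-probabilities over the bins, independence across views means the joint probability is the product of the per-view probabilities, so we sum the log-probabilities and take the argmax.

```python
import numpy as np

def fuse_views(log_probs_per_view):
    # log_probs_per_view: array of shape (n_views, n_bins).
    # Summing log-probabilities = multiplying probabilities.
    total = np.sum(log_probs_per_view, axis=0)
    return int(np.argmax(total))
```

For instance, a view that hesitates between two bins is disambiguated by a second view that favors only one of them, while the rule still works unchanged with a single view.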
More complicated strategies could of course be used, such as training a network to directly take several views as input, or training a recurrent neural network that would "see" the different views one by one.
4.2 Synthetic training dataset
The creation of a large training dataset of annotated real photographs, with variations representative of those encountered in actual test scenarios, is difficult and time consuming, especially considering the variability of uncontrolled environments, e.g., due to dirt or outdoor illumination. Instead, we generated three synthetic sets of rendered images together with ground-truth pose information, to use as training and validation sets for each subtask. Rather than spending a lot of processing time generating photorealistic images or using some form of domain adaptation (see Section 2.2), we chose a strategy of massive image generation (hundreds of thousands of non-photorealistic images) with massive randomization targeted at the specific properties we want to enforce. In this synthetic dataset, as illustrated in Figure 4, the robot is placed on a flat floor and a cuboid block is laid flat nearby, in the following configurations:
robot and block in random poses, for coarse estimation,
robot with clamp in random vertical pose, for 2D tool detection,
close-up on vertical clamp with random block nearby, for fine estimation.
To encourage our model to generalize as much as possible, we introduced the following randomization in the generation of synthetic images:
robot base orientation and position, and joint angles,
(cuboid) block dimensions, orientation and position,
textures for the floor, robot and block,
camera center, target, rotation and focal length.
Details on this synthetic dataset are given in Appendix A.
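The randomization listed above can be sketched as a scene-parameter sampler; every range and field name below is an illustrative assumption, not the paper's actual generation settings (which are given in Appendix A).

```python
import random

def sample_scene(rng=random):
    # Draw one randomized synthetic scene configuration.
    return {
        "robot": {
            "base_xy_mm": (rng.uniform(-50, 50), rng.uniform(-50, 50)),
            "base_yaw_deg": rng.uniform(0.0, 360.0),
            "joints_deg": [rng.uniform(-90.0, 90.0) for _ in range(6)],
        },
        "block": {
            # Cuboid dimensions, sorted so the largest side defines theta.
            "dims_mm": tuple(sorted(rng.uniform(30.0, 120.0)
                                    for _ in range(3))),
            "xy_mm": (rng.uniform(-300, 300), rng.uniform(-300, 300)),
            "yaw_deg": rng.uniform(0.0, 180.0),  # symmetric block
        },
        # Random texture indices for floor, robot and block surfaces.
        "textures": {k: rng.randrange(1000)
                     for k in ("floor", "robot", "block")},
        "camera": {
            "center_m": [rng.uniform(-2.0, 2.0) for _ in range(3)],
            "roll_deg": rng.uniform(-10.0, 10.0),
            "focal_mm": rng.uniform(20.0, 50.0),
        },
    }
```

Each sampled dictionary would then drive one non-photorealistic render, so that no two training images share the same appearance.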
5 Experimental Results and Analysis
In this section, we describe an evaluation dataset and use it to analyze in detail the performance of our direct learning-based approach, including the different design choices.
5.1 Evaluation dataset
As far as we know, there exists no dataset for the task of high-precision object-robot relative localization. We therefore created such a dataset, with real images, for the composite task described in Section 3. Note that it is only a test (evaluation) dataset, not a training dataset.
The dataset is divided in three parts, according to each subtask:
given an image of the robot and a block, find their (coarse) relative pose in the support plane,
given an image of the robot with a vertical clamp pointing downwards, find the clamp location in the image,
given a zoomed image of a vertical clamp pointing downwards and a block, find their (fine) relative pose in the support plane.
The relative block poses are consistent across the three parts of the dataset: for each random block pose, we capture:
a long shot of the scene where the robot joints are set at random angles,
a long shot of the scene where the clamp is moved near the block and set vertical and pointing downwards,
a crop of that same image, more or less centered on the clamp and large enough to show the block as well.
To study the benefits of having multiple views of the scene, each configuration is actually seen and recorded from 3 cameras. Moreover, as the clamp we used only has a small asymmetry on one side (see Figure 3), leading to a possible direction ambiguity when estimating the clamp frame, we recorded for each configuration of types 2-3 an additional position where the clamp is turned vertically by 90° (see Figure 6).
Additionally, we made three variants of each of these (sub) datasets, corresponding to different environment difficulties, as illustrated on Figure 7:
a dataset in laboratory conditions ('lab'), where the robot and the block are on a flat table with no particular distractor or texture,
a dataset in more realistic conditions ('field'), where the table is covered with dirt, sand and gravel, making the flat surface uneven (blocks thus do not lie perfectly flat) and the appearance closer to what could be expected on a real construction site,
a dataset in adverse conditions ('adv'), where the table is covered with pieces of paper that act as distractors, as they can easily be confused with cuboid blocks.
Altogether, the whole dataset covers about 1,300 poses (576 for ’lab’, 639 for ’adv’, and 114 for ’field’), seen from 3 cameras, with 1 clamp position for configuration 1 and 2 clamp positions for configurations 2-3, yielding about 11,700 annotated images. More details on this evaluation dataset are given in Appendix B.
In the following, to study different properties or design choices of our approach, we report results on the ’lab’ dataset only, unless otherwise mentioned.
5.2 Evaluation metrics
To assess pose estimation results x, y, θ, several measures can be considered. The most natural one is accuracy, computed as the average and standard deviation of the estimation errors Δx, Δy, Δθ. While this provides figures that are easy to interpret, it is not a useful measure of the capacity of a method to provide an accurate-enough prediction, as it does not take into account the maximum error below which grasping remains possible. We thus also present success measures in terms of the rate of estimation errors Δx, Δy, Δθ that are below given thresholds, i.e., that are accurate enough for block grasping to be successful.
Concretely, for the fine estimation network, we consider a square range of size 12 cm, as illustrated on Figure 3. Fine estimation thus makes sense only if the prediction error of the coarse estimation is below 6 cm. Besides, for a grasp to be successful given the opening of the clamp we used in the dataset, the required accuracy is 5 mm for x, 5 mm for y and 2° for θ. Therefore, for the coarse estimation, we report the percentage of prediction errors below 6 cm for the block location coordinates x and y; we also consider an error bound of 10° for the block orientation θ. For the fine estimation, the error bounds are 5 mm for the block location and 2° for the block orientation.
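As a concrete illustration, the two kinds of measures can be sketched as follows. This is a hypothetical helper, not the evaluation code used for the paper; the default tolerances follow the grasping requirements above (5 mm in location, 2° in orientation):

```python
import numpy as np

def pose_metrics(err_x, err_y, err_theta, tol_xy_mm=5.0, tol_theta_deg=2.0):
    """err_*: arrays of signed or absolute estimation errors over the test set.

    Returns both accuracy measures (mean/std of location errors) and
    success rates (fraction of errors within the grasping tolerances)."""
    err_x, err_y, err_theta = map(np.abs, (err_x, err_y, err_theta))
    loc = np.concatenate([err_x, err_y])
    return {
        "mean_loc_mm": float(np.mean(loc)),
        "std_loc_mm": float(np.std(loc)),
        "success_x": float(np.mean(err_x <= tol_xy_mm)),
        "success_y": float(np.mean(err_y <= tol_xy_mm)),
        "success_theta": float(np.mean(err_theta <= tol_theta_deg)),
        # A grasp succeeds only if all three errors are within tolerance.
        "success_grasp": float(np.mean((err_x <= tol_xy_mm)
                                       & (err_y <= tol_xy_mm)
                                       & (err_theta <= tol_theta_deg))),
    }
```

Note that the success rate for grasping is not the product of the individual rates, since errors on x, y and θ are generally correlated within a pose.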
Regarding the clamp location estimation in images (second, intermediate subtask), we did not create a ground truth. (We would have had to run a full camera and robot calibration.) Instead, for our experiments, we manually checked all predictions for localizing the center of the clamp, and counted cases where the prediction was outside the bounding box of the clamp in the image.
5.3 Network architecture
As mentioned in Section 4.1, we chose to predict x, y and θ independently using a single network which computes a single shared representation in all but the last layer. To validate our choice, we evaluated the accuracy of the x-coordinate prediction in our coarse estimation setup with three different networks. These networks predict the block pose with three different approaches, either:
along the x axis only, with bins of width 5 mm,
with square (x, y) bins of size 5 × 5 mm,
along both axes x and y separately, with bins of width 5 mm, and for the orientation θ, with bins of size 5°.
|Architecture||1D x bins||2D (x, y) bins||1D x, y and θ bins|
|% (Δx ≤ 60 mm)|
The results are reported in Table 1. They clearly show that the network predicting a location for each 5 × 5 mm square fails, even though it could in theory represent more complex ambiguities in the position estimation. Analyzing the training and validation performance on the synthetic dataset shows that it actually overfits. On the contrary, the network trained for estimating x only performs well, apparently being able to generalize across several positions. The network predicting x, y and the angle θ separately performs even better, showing a clear benefit of the joint training. We use the latter architecture for all the following experiments. More technical details are given in Appendix C.
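To make the retained architecture concrete, here is a minimal sketch (ours, not the authors' code) of how the concatenated outputs of the three classification heads could be decoded into a continuous coarse pose, assuming the bin layout detailed in Appendix C (5 mm bins over a ±505 mm reach, 5° bins over 180°):

```python
import numpy as np

# Coarse setup: 202 bins for x, 202 for y, 36 for theta (see Appendix C).
N_X, N_Y, N_TH = 202, 202, 36

def decode_pose(logits):
    """logits: concatenated outputs of the three classification heads.

    Each head is an independent classifier over 1D bins; the predicted
    bin index is mapped back to the value at the bin center."""
    lx = logits[:N_X]
    ly = logits[N_X:N_X + N_Y]
    lt = logits[N_X + N_Y:]
    # Bin centers: x, y in [-505, 505] mm; theta in [0, 180) degrees.
    x = -505.0 + 5.0 * (np.argmax(lx) + 0.5)
    y = -505.0 + 5.0 * (np.argmax(ly) + 0.5)
    theta = 5.0 * (np.argmax(lt) + 0.5)
    return x, y, theta
```

The fine estimation network would follow the same pattern with 60, 60 and 90 bins over a 12 cm square and a 2 mm / 2° discretization.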
5.4 Learning on synthetic data, testing on real
Our networks are trained on synthetic data only. One of the first questions is how they behave on real images. In this section, we compare the performance of our fine pose estimation network on both synthetic and real validation data. Results for a single camera and one clamp orientation are provided in Table 2.
The difference in success rate between the synthetic and the ’lab’ dataset is around 20% for the location estimation (prediction of x and y). Surprisingly, for the orientation estimation (prediction of θ), the network performs better on the ’lab’ and ’field’ datasets than on the synthetic dataset. We explain it by the variety of textures in the synthetic dataset acting as distractors whereas, in the ’lab’ and ’field’ datasets, the block edges are clearly visible and unambiguous. Also, as all blocks have different sizes in the synthetic dataset, the short edge and the long edge of a block are sometimes of nearly similar length, leading to an orientation ambiguity, whereas the block dimensions in the real dataset are significantly different. As expected, the ’adv’ dataset yields poorer results.
|% (Δx ≤ 5 mm)|
|% (Δy ≤ 5 mm)|
|% (Δθ ≤ 2°)|
5.5 View aggregation
|Setting||Camera 1||Camera 2||Camera 3||All cameras|
As indicated in Section 4.1, the separate estimations from several cameras can be aggregated into a single global estimate. We evaluate in detail the benefit of aggregating views from several cameras, possibly also considering together two different clamp orientations, at 90° from each other, as shown on Figure 6. We report our results for the fine estimation dataset in Table 3. This table supports two key observations.
First, by comparing the first three columns to the last one, one can see that unsurprisingly using several viewpoints improves performance. The boost in performance is actually quite significant.
Second, by comparing the first two lines of each table with the last one, one can notice that aggregating the predictions of two views from the same camera but with orthogonal clamp orientations boosts the results too. This is at first sight surprising as the two images are extremely similar, but it can be explained by the fact that the end part of the robot arm is almost symmetric and that estimating its orientation may be difficult from some orientations, as illustrated on Figure 6.
A little inconsistency may be observed between the estimation of x and that of y, one being not quite as accurate and robust as the other. Although differences between x and y could in general be explained by the asymmetry of the robot base (see Figure 1) and the non-uniform range of positions of the cameras, which all lie in the same half space (see Figure 5), the main reason here is actually that the network returns a totally wrong estimation (97 mm error) for one of the 576 poses, whereas extreme mistakes are otherwise rare and never greater than 15 mm. This single error shifts the average by 0.2 mm and the standard deviation by more than 2 mm. Note that it only affects the accuracy measures, not the success rates, as it represents only 0.17% of the poses.
Although not reported here, similar observations can be made about using different viewpoints for the coarse estimation, as well as when analyzing the percentage of images below a given accuracy instead of the average errors. These positive results validate our simple strategy to aggregate the predictions from each camera and each clamp orientation. In the following, we only report aggregated results, unless otherwise mentioned.
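The aggregation itself is described above only as a simple strategy. One plausible implementation, sketched below under our own assumptions (this is not necessarily the authors' exact scheme), averages the per-bin probabilities predicted from each view before taking the argmax; predictions obtained with the 90°-rotated clamp would first have to be mapped back to a common reference frame:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1D array of logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def aggregate_views(logits_per_view):
    """Average the per-bin probability distributions from several views
    (cameras and/or clamp orientations, expressed in a common frame)
    and return the index of the most likely bin."""
    probs = np.mean([softmax(l) for l in logits_per_view], axis=0)
    return int(np.argmax(probs))
```

A confident view then dominates uncertain ones, which matches the intuition that a camera with a clear line of sight should drive the aggregated estimate.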
5.6 Three-step procedure and final grasping success rate
We presented a three-step approach for accurately estimating the relative pose of a block with respect to the robot. We examine here whether it is viable in terms of success rate for the final grasping and whether all steps are really necessary.
|% (Δx ≤ 60 mm)|
|% (Δy ≤ 60 mm)|
|% (Δθ ≤ 10°)|
|% (Δx, Δy ≤ 60 mm)|
To check whether the approach makes sense, one can simply look at the success rate for the first two steps of the procedure. As can be seen from Table 4, the first step, i.e., coarse pose estimation, has a success rate of 99.8%. It is thus extremely reliable and almost always accurate enough to allow a subsequent fine pose estimation. The second step of the procedure requires locating the clamp in an image, to later crop a region of interest for the fine localization. As the evaluation dataset does not include a ground truth of the exact position of the clamp in the images (see Section 5.2), we manually checked all the predictions of our trained network for localizing the center of the clamp and counted cases where the prediction was outside the bounding box of the clamp in the image. In that respect, our second CNN correctly detects the clamp in 99.1% of the pictures, which is also quite a high success rate. All in all, the success rate of the two steps prior to fine pose estimation is thus 98.9%, which confirms that these preliminary steps do not significantly degrade the final estimation.
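The combined reliability of the two preliminary steps follows directly, assuming their failures are independent:

```python
# Success rates of the two steps preceding fine pose estimation (Section 5.6).
coarse_ok = 0.998   # coarse pose estimation
clamp_ok = 0.991    # clamp detection in the image

# Assuming independent failures, the pipeline reaches fine estimation
# with probability equal to the product of the two rates.
combined = coarse_ok * clamp_ok
assert abs(combined - 0.989) < 1e-3   # about 98.9%, as stated in the text
```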
Now, to check whether the three-step procedure does improve accuracy over a single-step procedure, we can compare the percentage of images with errors below 5 mm and 2° in two setups: one in which only a single broad view is considered, and one in which our three-step procedure performs a virtual close-up. To obtain comparable results, and for this experiment only, both approaches are evaluated on the same input images, with the same resolution (1920 × 1080 center-cropped to a square 1080 × 1080) and with the same bin discretization (2 mm, rather than the 5 mm used otherwise for coarse estimation). Concretely, we test performance on the images where the clamp is positioned coarsely above the block, which is a necessary condition for the fine estimation step, but only a particular case of scene configuration for coarse estimation. The results are reported in Table 5. It can be seen that the three-step procedure dramatically improves performance.
|Setup||1 camera, 1 clamp||1 camera, 1 clamp||3 cameras, 2 clamps|
|% (Δx ≤ 5 mm)||20.5||61.1||90.8|
|% (Δy ≤ 5 mm)||23.1||65.7||87.6|
|% (Δθ ≤ 2°)||37.6||80.9||96.9|
|% (Δx, Δy ≤ 5 mm, Δθ ≤ 2°)||2.8||36.9||79.0|
What makes the difference is that coarse pose estimation has to resize the image down to 256 × 256 (a factor 4.2 on the length) to feed it into the network, while fine pose estimation only resizes a 432 × 432 crop centered on the clamp (a factor 1.7). This resolution ratio of 2.5 between the two procedures naturally translates into a similar ratio for accuracy, and a considerable difference in the success rate for errors below 5 mm.
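The resolution argument reduces to simple arithmetic:

```python
# Down-sampling factors on image side length: both procedures feed a
# 256x256 network input, but from crops of very different sizes.
coarse_factor = 1080 / 256   # ~4.2: full 1080x1080 view resized for coarse estimation
fine_factor = 432 / 256      # ~1.7: 432x432 close-up around the clamp
ratio = coarse_factor / fine_factor  # = 1080/432 = 2.5
```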
5.7 Comparison with other methods
We found it difficult to compare our method to other approaches. We tried various existing methods to detect cuboids, such as NIPS2012_4842, but their performance on images of our evaluation dataset was too poor to be usable. Likewise, usual corner detection was not reliable enough to locate blocks in the pictures. Besides, block detection is only part of the problem: beyond camera intrinsic and extrinsic calibration, the robot also needs to be registered in the camera frame for relative block positioning.
To construct a simple baseline for comparison, we resorted to markers. We built yet another dataset which is identical to our ’lab’ dataset except that a fiducial marker is added on the top of the block, as illustrated in Figure 8. On this dataset, we can detect the block position with the ArUco marker system (GARRIDOJURADO20142280; GARRIDOJURADO2016481) as the 2D detection of the four corners of the marker performs well: the marker is correctly detected in 95.0% of the pictures.
As we use the same block, with a known height, for our entire dataset, and as the block is always laid flat on the table plane, we can use several known positions of the block, accurately placed on the table by the robot, to calibrate each fixed camera independently. More precisely, we use these positions to determine the coefficients of the homography matrix relating the support plane to the image plane. As the robot itself is used to place the block on the table at various locations, we also know the relative position of the robot with respect to the block. After this calibration step, a 2D block detection in an image directly translates into a pose on the table plane, hence into a relative pose with respect to the robot base. Table 6 compares our method to the results of pose estimation with this marker-based approach. (The figures regarding our method differ from Table 2 because they do not only take into account fine pose estimation but the complete three-step procedure, with possible failures at coarse pose estimation and clamp detection.) As expected, with a single view, the marker-based performance is better than our method's. However, our method reaches comparable performance when aggregating the views of 3 cameras and 2 clamp orientations, without all the practical constraints of marker-based approaches.
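For illustration, the plane-to-image homography used by this baseline can be estimated from four or more known block positions with the standard DLT formulation. The sketch below is our own minimal implementation, not the code used for the experiments:

```python
import numpy as np

def fit_homography(plane_pts, image_pts):
    """Estimate the 3x3 homography H mapping table-plane points (X, Y)
    to image points (u, v), given N >= 4 correspondences, via the
    standard direct linear transform (DLT): solve A h = 0 by SVD."""
    rows = []
    for (X, Y), (u, v) in zip(plane_pts, image_pts):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)   # null-space vector, up to scale

def plane_to_image(H, pt):
    """Apply H to a table-plane point in homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

In the baseline, the inverse mapping (via `np.linalg.inv(H)`) turns a 2D block detection back into a pose on the table plane, hence into a pose relative to the robot base.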
|Setup||1 camera, 1 clamp||1 camera, 1 clamp||3 cameras, 2 clamps|
|% (Δx ≤ 5 mm)|
|% (Δy ≤ 5 mm)|
This experiment emphasizes the fact that our approach does not solve a completely new task, or a task that could not be solved with existing tools. As the task is perfectly well defined, the performance of a ’classical’ approach mainly depends on the quality of camera calibration and of 2D/3D alignment algorithms for the robot and the block. Note however that camera calibration is time consuming, and we found it very difficult to find a robust and accurate 2D-3D detection and alignment algorithm. While markers can make 2D-3D alignment easy, they also come with strong practical constraints and could easily be stained, damaged or partially occluded in real-life scenarios, which would make them ineffective.
5.8 Sensitivity to the environment
The results we report in a laboratory environment show the potential of our method. To go beyond these results and evaluate how robust our approach is, we consider more difficult settings, i.e., the more realistic ’field’ conditions and the adverse ’adv’ dataset. Results for the coarse pose estimation are reported in Table 4 and, for fine pose estimation, in Tables 7 and 8. As expected, the results are better in the bare ’lab’ environment. Interestingly, the results in the more realistic ’field’ environment remain satisfying, for both coarse and fine estimation. However, in adverse conditions with pieces of paper that can easily be confused with the block (see Figure 7(c)), the performance drop is more dramatic, especially for coarse estimation where many distractors are visible, and the success rate of fine estimation drops as well.
|% (Δx ≤ 5 mm)|
|% (Δy ≤ 5 mm)|
|% (Δθ ≤ 2°)|
|% (Δx, Δy ≤ 5 mm, Δθ ≤ 2°)|
We have introduced a new task of uncalibrated relative pose estimation of an object with respect to a robot. The task can rely on a single view or on multiple views, possibly with different arm positions, and with a possible intermediate step for a more accurate pose estimation. We have also constructed a rich dataset for the evaluation of this task. Lastly, we have proposed a general method to perform this task, which provides a strong baseline. Indeed, our approach estimates the block pose with respect to the robot with an average location accuracy of 2.6 mm and an average orientation accuracy of 0.7°, and these results in lab conditions degrade gracefully in more realistic or adverse settings. Given the small opening range of our test clamp, which requires a location error less than 5 mm and an orientation error less than 2°, this translates into an overall success rate of 80% for block grasping. While these results are slightly lower than with methods relying on calibrated cameras and robot, they show that relative pose estimation can be practically addressed in unfavorable settings where camera calibration is fragile or cannot be performed given the context.
One important strength of our method is that it can be learned with synthetic images only, without using a single real image for training, thus avoiding the expensive and time-consuming collection of training data.
Natural extensions of our approach include supporting different robots, different grasping tasks, and blocks of different shapes, at different heights on non-horizontal surfaces.
Appendix A Training Dataset Based on Synthetic Images
As introduced in Section 2.2, we generated three synthetic datasets for training a network for each subtask:
robot and block in random pose, for coarse estimation,
robot with clamp in random vertical pose pointing downwards, for 2D tool detection,
close-up of vertical clamp and random block nearby, for fine estimation.
We created a simple room model where a robot is placed on the floor and a cuboid block is laid nearby. The robot we experimented with is an IRB120 from ABB, for which we have a 3D model. We also have a 3D model of the clamp. However, we did not model the cables on the robot base nor on the clamp (compare, e.g., Figure (a) to Figure (c)). We considered configurations that are similar to what can be found in the evaluation dataset, although with slightly greater variations for robustness. The actual randomization for the generation of images is as follows:
The size of the room is 20 m × 20 m so that the walls are visible on some images.
The robot base (20 cm × 30 cm) orientation and position are sampled randomly. (The robot height is around 70 cm and the arm length around 50 cm.)
The orientations (angles) of the robot joints are sampled randomly among all possible values, except for the clamp 2D detection task and the fine estimation task, where the arm extremity is placed vertically on top of the floor.
Each dimension of the cuboid block is sampled randomly between 2.5 and 10 cm.
The block is laid flat on the floor with an orientation and a position sampled randomly in the attainable robot positions for the coarse estimation task or in a 12 cm square below the clamp for the fine estimation task.
All the textures, for the floor, robot and block, are sampled among 69 widely different texture images.
The camera center is sampled randomly at a height between 70 and 130 cm from the floor in a cylindrical sleeve of minimum radius 1 m and maximum radius 2.8 m, centered on the robot, as illustrated in Figure 9.
For the coarse estimation setting (wide views), the camera target is sampled in a cylinder of 30 cm radius and 50 cm height around the robot base.
For the fine estimation setting (close-ups), the target is the center of the clamp, with a small random shift.
The camera is rotated around its main axis (line between camera center and camera target) with an angle sampled randomly between -8 and +8 degrees.
The camera focal length is randomly sampled between 45 mm and 75 mm for an equivalent sensor frame size of 24 mm × 24 mm.
Synthetic images are 256 × 256 pixels.
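The randomization above can be summarized as a sampling loop. The sketch below is purely illustrative: the field names and structure are placeholders of our own, not the actual Unreal Engine 4 generation pipeline:

```python
import random

def sample_scene(task="coarse"):
    """Sample one randomized scene configuration, following the ranges
    listed above (block dimensions, textures, camera placement, optics)."""
    scene = {
        # Each cuboid dimension between 2.5 and 10 cm.
        "block_dims_cm": [random.uniform(2.5, 10.0) for _ in range(3)],
        "block_theta_deg": random.uniform(0.0, 180.0),
        # Textures sampled among 69 widely different images.
        "textures": {part: random.randrange(69)
                     for part in ("floor", "robot", "block")},
        # Camera center: height 70-130 cm, in a cylindrical sleeve of
        # radius 1 to 2.8 m centered on the robot.
        "cam_height_cm": random.uniform(70.0, 130.0),
        "cam_radius_m": random.uniform(1.0, 2.8),
        "cam_roll_deg": random.uniform(-8.0, 8.0),
        "focal_mm": random.uniform(45.0, 75.0),
    }
    if task == "coarse":
        scene["joints"] = "random"           # all joint angles randomized
    else:
        scene["joints"] = "clamp_vertical"   # clamp vertical, pointing down
    return scene
```

Each sampled configuration would then be passed to the renderer to produce one 256 × 256 training image with its ground-truth pose labels.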
The pictures were generated with the Unreal Engine 4 game engine. The dataset we created for coarse estimation consists of approximately 420k images (examples can be seen on Figure (a)), the one for 2D clamp detection consists of approximately 2,800k images (examples on Figure (b)), and the dataset for fine estimation consists of approximately 600k images (examples on Figure (c)). We used 90% of the images for training and the remaining 10% for validation.
Appendix B Evaluation Dataset Based on Real Images
As described in Section 5.1, the evaluation dataset comes in three variants:
in the ’lab’ dataset, the robot and the block are on a table with no particular distractor or texture,
in the ’field’ dataset, the table is covered with dirt, sand and gravel, making the flat surface uneven,
in the ’adv’ dataset, the table is covered with pieces of paper that can be confused with cuboid blocks.
We use the robot itself to accurately move the block to random poses, which provides a reliable measure of its relative position and orientation for each configuration. In practice, the block can slowly drift from its expected position as the robot repeatedly picks it, moves it and puts it down. To ensure there is no drift, the block position is checked and realigned every ten positions. Because of the limited stroke of the clamp, we considered only a single block size: 5 cm × 8 cm × 4 cm. Note however that our method does not exploit this information; we believe a robust method should be able to process a wide range of block shapes. As we want to model situations where the robot can pick a block, we have to restrict the reach of the arm to a range for which the tool can be set vertically above the block, i.e., 0.505 m.
We collected images from 3 cameras at various viewpoints, looking at the scene slightly from above. To make sure the block is visible on most pictures, we actually considered successively different regions of the experiment table for sampling block poses, moving the cameras to ensure a good visibility for each region. The cameras are moved manually without any particular care. The distance between a camera and the block is typically between 1 and 2.5 m. The maximum angle between the left-most and the right-most camera is typically on the order of 120 degrees. This setting is illustrated on Figure 5.
For each block position with respect to the robot base, we consider two main articulations of the robot arm: a random arm configuration for the coarse location subtask, and an articulation where the clamp is vertical, pointing downward and positioned next to the block for the fine location subtask. In the latter case, we positioned the clamp 150 mm above the table surface, at a random horizontal position in a 120 mm square around the block.
We actually recorded two clamp orientations along the vertical axis: a first random orientation, and then a second orientation where the clamp is rotated by 90 degrees (see Figure 6). As the clamp orientation, with respect to which the fine estimation is defined, can be hard to estimate for some configurations, using two orientations allows more accurate predictions.
In total, we considered approximately 1,300 poses (positions and orientations) of the robot and the block together. This led to a total of approximately 12,000 images, of size 1080 × 1080 for wide views and 432 × 432 for close-ups, with corresponding ground-truth relative position and orientation. The cameras used are eLight full HD 1080p webcams from Trust. Camera intrinsics were not available nor estimated. Nevertheless, the focal length was roughly determined to be about 50 mm and, in the synthetic images, the camera focal length was randomly set between 45 mm and 75 mm (see Appendix A).
Datasets and 3D models are available from imagine.enpc.fr/~loingvi/unloc.
Appendix C Network Architecture Details
We define here the bins used by the three different networks addressing the three different subtasks.
The number of bins in the last layer of the coarse estimation network depends on the bin size and on the maximum range of the robotic arm with the tool maintained vertically, i.e., 0.505 m (see Appendix B). In practice, we defined bins of 5 mm for the coarse estimation and 2 mm for the fine estimation, both visualized by the fine red grids in Figure (a) and Figure 3. These pictures also give a sense of how accurate the localization is compared to the sizes of the block and of the robot. For the angular estimation, we used bins of 5 and 2 degrees respectively. For the clamp detection network, we defined bins whose size is 2% of the picture width. Since, as stated above, we predict each dimension separately, this leads to 202 bins for x and y and 36 bins for θ for the coarse localization network, 60 bins for x and y and 90 bins for θ for the fine localization network, and 50 bins for each image coordinate for the clamp localization network.
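These bin counts follow directly from the ranges and bin sizes (assuming a 180° orientation range, consistent with the 180° symmetry of a cuboid):

```python
import math

def n_bins(range_size, bin_size):
    """Number of bins needed to cover a range at a given bin size."""
    return math.ceil(range_size / bin_size)

coarse_xy = n_bins(2 * 505, 5)  # reach of +/-505 mm, 5 mm bins -> 202
coarse_th = n_bins(180, 5)      # 180 degrees, 5 degree bins     -> 36
fine_xy = n_bins(120, 2)        # 12 cm square, 2 mm bins        -> 60
fine_th = n_bins(180, 2)        # 180 degrees, 2 degree bins     -> 90
clamp_uv = n_bins(100, 2)       # bins of 2% of the image width  -> 50
```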