Teaching Robots to Do Object Assembly using Multi-modal 3D Vision

Weiwei Wan, Feng Lu, Zepei Wu, Kensuke Harada
National Institute of Advanced Industrial Science and Technology (AIST), Japan; Beihang University, China
Abstract

The motivation of this paper is to develop a smart system using multi-modal vision for next-generation mechanical assembly. The system includes two phases: in the first phase, a human being teaches the assembly structure to a robot; in the second phase, the robot finds the objects, grasps them, and assembles them using AI planning. The crucial part of the system is the precision of 3D visual detection, and the paper presents multi-modal approaches to meet the requirements: AR markers are used in the teaching phase, since human beings can actively control the process; point cloud matching and geometric constraints are used in the robot execution phase to avoid unexpected noise. Experiments are performed to examine the precision and correctness of the approaches. The study is practical: the developed approaches are integrated with graph model-based motion planning, implemented on an industrial robot, and applicable to real-world scenarios.

keywords:
3D Visual Detection, Robot Manipulation, Motion Planning
journal: Neurocomputing

1 Introduction

The motivation of this paper is to develop a smart system using multi-modal vision for next-generation mechanical assembly: A human worker assembles mechanical parts in front of a vision system; the system detects the position and orientation of the assembled parts and learns how to do the assembly from the human worker's demonstration; an industrial robot then performs assembly tasks following the data learned from the demonstration: it finds the mechanical parts in its workspace, picks them up, and assembles them using motion planning and assembly planning.

The difficulty in developing the smart system is precise visual detection. Two problems exist: the first is in the human teaching phase, namely how to precisely detect the position and orientation of the parts in human hands during manual demonstration; the second is in the robot execution phase, namely how to precisely detect the position and orientation of the parts in the workspace and perform assembly.

Many detection and tracking studies are available in the contemporary literature, but none of them meets the requirements of the two problems. The approaches used include RGB images, markers, point clouds, extrinsic constraints, and multi-modal solutions: the RGB images and markers fall short under occlusion, the point clouds and extrinsic constraints fall short under partial loss and noise, and the multi-modal solutions are not clearly stated and are still being developed.

This paper solves the two problems using multi-modal vision. First, we attach AR markers to the objects for assembly and track them by detecting and transforming the marker positions during human demonstration. We do not need to worry about occlusions, since the teaching phase is manual and is performed by human beings, who are intelligent enough to actively avoid occlusions and show good features to vision systems. The modality employed in this phase is the markers (RGB images) and the geometric relation between the markers and the object models. The tag “AR(RGB)” is used for representation.

Second, we use depth cameras and match the object model to the point cloud obtained from the depth camera to roughly estimate the object pose, and use the geometric constraints from the planar table surface to reinforce the precision. On the one hand, the robot execution phase is automatic and is not as flexible as human teaching, so markerless approaches are used to avoid occlusions. On the other hand, the point cloud and extrinsic constraints are fused to make up for partial loss and noise. The assumption is that when the object is placed on the surface of a table, it stabilizes at a limited number of poses inherent to the geometric constraints. These poses help to freeze some degrees of freedom and improve precision. The modalities employed in this phase are the point cloud data and the geometric constraints from the planar surface. The tag “Depth+Geom” is used for representation.

Moreover, we propose an improved graph model based on our previous work to perform assembly planning and motion planning. Experiments are performed to examine the precision and correctness of our approaches. We quantitatively show the advantages of “AR(RGB)” and “Depth+Geom” in next-generation mechanical assembly and concretely demonstrate the process of searching and planning using the improved graph model. We also integrate the developed approaches with a Kawada Nextage robot and show their applicability in real-world scenarios.

2 Related Work

This paper is highly related to studies in 3D object detection for robotic manipulation and assembly, and the literature review concentrates on the perception aspect. For general studies on robotic grasping, manipulation, and assembly, refer to Handeybook (), Mason01 (), and Dogar15 (). The literature review emphasizes model-based approaches, since the paper is motivated by next-generation mechanical assembly and targets industrial applications where precision is crucial and object models are available. For model-less studies, refer to Goldfeder10 () and Lenz14 (). For appearance-based studies, refer to Murase95 (), Mittrapiyanuruk04 (), and Zickler06 (). Moreover, learning approaches are not reviewed since they are not precise enough. Refer to Stark10 () and Libelt10 () if interested.

We organize and review the related work according to the modalities used for 3D detection, including RGB images, markers, point clouds, haptic sensing, extrinsic constraints, and multi-modal fusion.

2.1 RGB images

RGB images are the most commonly used modality in robotic perception. Using RGB images to solve the model-based 3D position and orientation detection problem is widely known as the “model-to-image registration problem” Wunsch96 () and falls under the framework of POSIT (Pose from Orthography and Scaling with Iterations) David02 (). When looking for objects in images, the POSIT-based approaches match feature descriptors by comparing the most representative features of an image to the features of the object for detection. The features can be values computed at pixel points or histograms computed on a region of pixels. Using more than three matches, the 3D position and orientation of an object can be found by solving polynomial equations Fischler81 (); DeMenthon95 (); Lu00 (). A good material that explains the matching process can be found in Schaffalitzky02 (). That paper studies multi-view image matching; it is not directly related to 3D detection, but it explains well how to match feature descriptors.

Some of the most common choices of features include corner features Harris88 () applied in Chia02 () and David02 (), line features applied in David03 (), Klein03 () and Marchand02 (), cylinder features applied in Marchand02 () and Harada13 (), and SIFT features Lowe04 () applied in Gordon06 () and Collet09 (). In particular, Gordon06 () clearly stated the two stages of model-based detection using RGB images: (1) the modeling stage, where the textured 3D model is recovered from a sequence of images; (2) the detection stage, where features are extracted and matched against those of the 3D models. The modeling stage is based on the algorithms in Schaffalitzky02 (). The detection stage is open to different features, different polynomial solving algorithms, and optimizations like Levenberg-Marquardt Press92 () and Mean-shift Cheng95 (), etc. Ramisa12 () compared the different algorithms in the second stage.

2.2 Markers

In cases where the objects do not have enough features, markers are used to assist image-based detection. Possible marker types include AR markers, colored markers, and shape markers, etc. The well-known AR libraries Fiala05 (); Wagner07 () provide robust detection, and the OptiTrack device provides easy-to-use systems for the different marker types. However, applications with markers require some manual work, and there are limitations on marker sizes, view directions, etc.

Sundareswaran98 () uses circular concentric ring fiducial markers placed at known locations on a computer case to overlay the hidden innards of the case on the camera's video feed. Kato99 () uses AR markers to locate the virtual displays during a virtual conference and explains the underlying computation well. Vacchetti04 () does not directly use markers, but uses offline matched keyframes, which are essentially the same thing, to correct online tracking. Makita12 () uses AR markers to recognize and cage objects. Suligoj13 () uses both balls (circles) and AR markers to estimate and track the position of objects. More recently, Ramirezamaro15 () uses AR markers to track the position of human hands and the operating tools and uses the tracked motion to teach robots. That paper shares our assumption that human beings can actively avoid occlusions. However, it neither needs nor analyzes the precision, since the goal of its tracking is at the task level instead of low-level trajectories.

2.3 Point cloud

A point cloud can be acquired using a structured light camera Freedman12 (), a stereovision camera Choi12 (), a LIDAR device Bauhahn09 (), etc. The recent availability of low-cost depth sensors and the handy Point Cloud Library (PCL) Rusu11 () have widely disseminated 3D sensing technology in research and commercial applications. The basic flow of using point clouds for detection is the same as image-based detection: the first step is to extract features from models and objects; the second step is to match the features and detect the 3D pose.

Shutz97 () is one of the early studies that uses point clouds to detect the pose of an object. It is based on the Iterative Closest Point (ICP) algorithm Besl92 (), which iteratively minimizes the mean square distance of nearest-neighbor points between the model and the point cloud. Subsequent work basically uses the same technique, with improvements in feature extraction and matching algorithms. The features used in point cloud detection are more varied than those in image-based detection, including local descriptors like the Signature of Histograms of Orientations (SHOT) Tombari10 () and the Radius-based Surface Descriptor (RSD) Marton11 (), and global descriptors like the Clustered Viewpoint Feature Histogram (CVFH) Aldoma11 () and the Ensemble of Shape Functions (ESF) Wohlkinger11 (). The matching algorithms have not changed much, sticking to Random Sample Consensus (RANSAC) Fischler81 () and ICP. A complete review and usage of the features and matching algorithms can be found in Aldoma12 ().
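The ICP loop described above can be sketched in a few lines. The following is a minimal, illustrative point-to-point variant in NumPy (brute-force nearest neighbours, Kabsch/SVD update), not the implementation used by any of the cited papers:

```python
# Minimal point-to-point ICP sketch (illustrative, 2D, NumPy only):
# each iteration matches every source point to its nearest destination
# point, then solves for the best rigid transform via Kabsch/SVD.
import numpy as np

def icp(src, dst, iters=30):
    """Align point set `src` to `dst`; returns the transformed copy of src."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Kabsch: best rotation/translation for the current matches
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
    return cur
```

For a noiseless, slightly displaced copy of a point set, the alignment converges to the exact pose within a few iterations; real pipelines add subsampling, outlier rejection, and a convergence test.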

2.4 Extrinsic constraints

Like markers, extrinsic constraints are used to assist point cloud based detection. When the point cloud does not provide enough features, or when it is noisy and occluded, it is helpful to take the extrinsic constraints into account. For example, the detection pipeline in Aldoma12 () uses hypotheses, one example of extrinsic constraints, to verify the result of feature matching. Shiraki14 () analyzes the functions of connected object parts and uses them to refine grasps. That paper is not directly related to detection, but it is an example of improving performance using the extrinsic constraints imposed by adjacent functional units.

Some other work uses geometric constraints to reduce the ambiguity of ICP. Schuster10 () and Somanath09 () segment 3D clutter using geometric constraints. They are not directly related to detection, but they are widely used as the initial steps of many detection algorithms. Savalcalvo15 () clusters the point cloud into polygon patches and uses RANSAC with multiple polygon constraints to improve the precision of detection. Goron12 () uses table constraints for segmentation and uses Hough voting to detect object poses. Cheung15 () uses the sliced 2D contours of 3D stable placements to reduce the noise of estimation. It is similar to our approach, but it is contour-based and suffers from ambiguity.

2.5 Multi-modal fusion

Multi-modal approaches are mostly the combination or repetition of the previous five modalities. For example, some work fuses repeated modalities to improve object detection. Taylor03 () fuses color, edge, and texture cues predicted from a textured CAD model of the tracked object to recover the 3D pose, and is open to additional cues.

Some other work uses visual tracking to correct the noise caused by fast motions and to improve the precision of initial matches. The fused modalities include the RGB image modality and the motion modality, where the latter can be estimated either from image sequences or from third-party sensors like a Global Positioning System (GPS) receiver or gyros. Kempter12 () is one representative work, which fuses model motion (model-based tracking) and model detection in RGB images to refine object poses. Klein03 () fuses gyro data and line features of RGB images to reinforce the pose estimation for head-mounted displays. Reitmayr06 () uses gyro data, point descriptors, and line descriptors together to improve the performance of pose estimation for outdoor applications.

Pangercic11 () uses the point cloud to cluster the scene and find the Regions of Interest (ROIs), and uses the image modality to estimate the object pose at the respective ROIs. The fused modalities are RGB images and point clouds. Hinterstoisser11 () also combines the image and depth modalities. It uses the image gradients found on the contour of images and the surface normals found on the body of the point cloud to estimate object poses.

To the best of our knowledge, the contemporary object detection studies do not meet our precision requirements. The most demanding case they address is robotic grasping and simple manipulation, which is far less strict than regrasp and assembly. We develop our own approaches in this paper by fusing different modalities to deal with the problems in the teaching phase and the robot execution phase respectively. On the one hand, we use AR markers to detect the 3D object positions and orientations during human teaching. On the other hand, we use the point cloud data and the geometric constraints from the planar table surface during robot execution. The details are presented in the following sections.

3 System Overview

In this section we present an overview of the next-generation mechanical assembly system and clarify where the 3D detection fits; the “AR(RGB)” and “Depth+Geom” approaches are described in detail in the sections that follow.

Figure 1: The flow of the next-generation mechanical assembly. The flow is composed of a human teaching phase and a robot execution phase. In the human teaching phase, a human worker demonstrates assembly with marked objects in front of an RGB camera. The computer detects the relationship of the assembly parts. In the robot execution phase, the robot detects the parts in the workspace using the depth camera and geometric constraints, picks them up, and performs assembly.

Fig.1 shows the flow of the next-generation mechanical assembly system. It is composed of a human teaching phase and a robot execution phase. In the human teaching phase, a human worker demonstrates how to assemble mechanical parts in front of a vision system. The system records the relationship of the two mechanical parts and saves it as an intermediate value.

In the robot execution phase, the robot uses another vision system to find the mechanical parts in the workspace, picks them up, and performs assembly. The relative position and orientation of the assembly parts are the intermediate values perceived in the human teaching phase. The grasping configurations of the robotic hand and the motions to move the parts are computed online using motion planning algorithms.

The benefit of this system is that it needs no direct robot programming and is highly flexible. All the human worker needs to do is attach the markers and presave the pose of each marker in the object's local coordinate system, so that the vision system can compute the pose of the object from the detected marker poses.

The 3D detection in the two phases is denoted by the two “object detection” sub-boxes in Fig.1. The one in the human teaching phase uses AR markers, since human beings can intentionally avoid unexpected partial occlusions by human hands or the other parts, as well as ensure high precision. The one in the robot execution phase uses point cloud data to roughly detect the object pose, and uses the geometric constraints from the planar table surface to correct the noise and improve precision. The details are explained in the following sections. Before that, we list the symbols to facilitate readers.

  • The position of object X on a planar surface. We use A and B to denote the two objects and consequently use and to denote their positions.

  • The orientation of object X on a planar surface. Like , X is to be replaced by A or B.

  • The position of object X in the assembled structure.

  • The orientation of object X in the assembled structure.

  • The pre-assembly positions of the two objects. The robot will plan a motion to move the objects from the initial positions to the pre-assembly positions.

  • The force-closure grasps of object X. The letter indicates that the object is free, i.e., not in an assembled structure or lying on something.

  • The force-closure grasps of object X on a planar surface. It is associated with and .

  • The collision-free and IK (Inverse Kinematics) feasible grasps of object X on a planar surface. It is also associated with and .

  • The force-closure grasps of object X in an assembled structure. It is associated with and .

  • The collision-free and IK feasible grasps of object X in the assembled structure. It is associated with and .

  • The force-closure grasps of object X at the pre-assembly positions.

  • The collision-free and IK feasible grasps of object X at the pre-assembly positions.

4 3D Detection during Human Teaching

The object detection during human teaching is done using AR markers and an RGB camera. Fig.2 shows the flow of the detection and the poses of the markers in the object model's local coordinate system. The markers are attached manually by human workers.

Figure 2: Object detection using AR markers. Beforehand, the human worker attaches the markers and presaves the pose of each marker in the object's local coordinate system. During detection, the vision system computes the pose of the object from the detected marker poses (the three subfigures). The output is the (, ) and (, ).

During demonstration, the worker holds the two objects and shows the markers to the camera. We assume the workers have enough intelligence to expose the markers to the vision system without occlusion. The detection process is conventional and can be found in much of the AR literature: given the positions of some features in the markers' local coordinate system, find the transform matrix which converts them into positions on the camera screen. In our implementation, the human teaching part is developed using Unity and the AR recognition is done using the Vuforia SDK for Unity.
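Once the marker pose in the camera frame is detected, recovering the object pose is a single matrix product with the presaved marker-in-object pose. A hedged sketch with 4x4 homogeneous transforms (the names `T_cam_marker` and `T_obj_marker` are ours, not the paper's):

```python
# Recover the object pose in the camera frame from a detected marker pose.
# T_cam_marker: marker pose in the camera frame (output of AR detection)
# T_obj_marker: presaved marker pose in the object's local frame
import numpy as np

def object_pose_from_marker(T_cam_marker, T_obj_marker):
    # T_cam_obj = T_cam_marker * (T_obj_marker)^-1
    return T_cam_marker @ np.linalg.inv(T_obj_marker)
```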

In the example shown in Fig.2, there are two objects where the detected results are represented by (, ) and (, ). During robot execution, the (, ) is set to:

(1)

and and are set to the zero vector and the identity matrix, respectively. Only the relative poses between the assembly parts are used.
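Keeping only the relative pose amounts to re-expressing part B's detected pose in part A's frame. A minimal sketch, assuming both parts were detected as 4x4 homogeneous poses in the camera frame (the names are illustrative):

```python
# Relative pose of part B with respect to part A, as used in Eqn (1).
# T_cam_A, T_cam_B: detected poses of the two parts in the camera frame.
import numpy as np

def relative_pose(T_cam_A, T_cam_B):
    """Pose of part B expressed in part A's local frame."""
    return np.linalg.inv(T_cam_A) @ T_cam_B
```

The result is independent of where the camera was, which is why part A's pose can be reset to the origin during robot execution.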

5 3D Detection during Robotic Execution

The object pose detection during robot execution is done using a Kinect, the Point Cloud Library, and geometric constraints. The detection cannot be done using markers since: (1) what the robot manipulates are thousands of industrial parts, and it is impossible to attach markers to all of them; (2) the initial configuration of the object is unpredictable, and the markers might be occluded from time to time during robotic pick-and-place. Image-based approaches are not applicable either: industrial parts are usually mono-colored and textureless, so image features are not only unhelpful but even harmful.

Using a Kinect is not trivial due to its low resolution and precision. For example, the objects in industrial applications are usually in clutter, and it is difficult to segment one object from another at the Kinect's resolution. Our solution is to divide the difficulty of clutter into two subproblems. First, the robot considers how to pick one object out of the clutter and place it on an open planar surface. Second, the robot estimates the pose of the single object on the open planar surface. The first subproblem does not care what the object is or what its pose is; its only goal is to pick something out. It is referred to as a pick-and-place problem in the contemporary literature and is illustrated in the left part of Fig.3. The first subproblem is well studied, and interested readers may refer to Domae14 () for solutions.

The second subproblem is to detect the pose of a single object on an open planar surface and is shown in the right part of Fig.3. It is much easier, but it still requires much processing to meet the precision requirements of assembly. We concentrate on precision and will discuss using point cloud algorithms and geometric constraints to solve this second problem.

Figure 3: Overcoming the clutter problem by dividing the pose detection into two subproblems. The first is picking out of clutter, where the system does not care what the object is or what its pose is; its only goal is to pick something out. This problem is well studied. The second is to detect the pose of a single object on an open planar surface. It is not trivial since the precision of the Kinect is low.

5.1 Rough detection using point cloud

First, we roughly detect the pose of the object using CVFH and CRH features. In a preprocessing step before starting the detection, we put the camera at 42 positions on a unit sphere, save the view of the camera, and precompute the CVFH and CRH features of each view. This step is performed virtually using PCL and is shown in the dashed blue frame of Fig.4. During the detection, we extract the planar surface from the point cloud, segment the remaining point cloud, and compute the CVFH and CRH features of each segment. Then, we match the precomputed features with the features of each segment and estimate the orientation of the segments. This step is shown in the dashed red frame of Fig.4. The matched segments are further refined using ICP to ensure a good match. The segment that has the highest ICP match score and the fewest outlier points is used as the output. An example is shown in the “Raw result” framebox of Fig.4.

Figure 4: Roughly detecting the pose of an object using model matching and CVFH and CRH features. In a preprocessing step before the detection, we precompute the CVFH and CRH features of 42 different views and save them as templates. The preprocessing step is shown in the dashed blue frame. During the detection, we segment the remaining point cloud, compute the CVFH and CRH features of each segment, and match them to the precomputed views using ICP. The best match is used as the output.
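At its core, the matching step is a nearest-neighbour search over the 42 precomputed view descriptors, followed by ICP refinement. A hedged sketch in which plain vectors stand in for the real CVFH/CRH histograms that PCL computes:

```python
# Nearest-template matching sketch: pick the precomputed view whose
# global descriptor is closest to the segment's descriptor. In the real
# pipeline the descriptors are CVFH/CRH histograms from PCL and the
# winner is refined with ICP; here plain vectors stand in for them.
import numpy as np

def best_view_match(view_descriptors, segment_descriptor):
    """Return the index of the closest template view (Euclidean distance).
    view_descriptors: (n_views, D) array, one row per precomputed view."""
    dists = np.linalg.norm(view_descriptors - segment_descriptor, axis=1)
    return int(dists.argmin())
```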

5.2 Noise correction using geometric constraints

Figure 5: Correcting the raw result using the stable placements on a planar surface (geometric constraints). In a preprocessing step before the correction, we compute the stable placements of the object on a planar surface. An example is shown in the stable placements framebox. During the correction, we compute the distance between the raw result and each of the stable placements, and correct the raw result using the nearest pose.

The result of the rough detection is further refined using the geometric constraints. Since the object is on a planar surface, its stable poses are limited Wan2016ral () and can be used to correct the noise of the rough estimate. The placement planning includes two steps. First, we compute the convex hull of the object mesh and perform surface clustering on the convex hull. Each cluster is a candidate standing surface on which the object may be placed. Second, we check the stability of the object standing on these candidate surfaces. The unstable placements (those where the projection of the center of mass is outside the candidate surface or too near to its boundary) are removed. An example of the stable placements of one object is shown in the stable placements framebox of Fig.5.
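The two-step placement planning can be sketched as follows. This is an illustrative simplification of the procedure described above, with several assumptions: uniform density (so the centre of mass is approximated by the vertex mean), facet clustering by rounded outward normals, and no stability-margin check near the boundary.

```python
# Stable placement planning sketch: (1) cluster convex-hull facets by
# outward normal into candidate standing surfaces, (2) keep the clusters
# whose in-plane projection of the centre of mass falls inside the facet.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def stable_placements(vertices):
    hull = ConvexHull(vertices)
    com = vertices.mean(axis=0)          # uniform-density approximation
    # step 1: cluster hull facets that share a (rounded) outward normal
    clusters = {}
    for simplex, eq in zip(hull.simplices, hull.equations):
        clusters.setdefault(tuple(np.round(eq[:3], 6)), set()).update(simplex)
    # step 2: keep clusters whose plane-projected COM lies inside the facet
    stable = []
    for normal, idx in clusters.items():
        n = np.asarray(normal)
        pts = vertices[list(idx)]
        # build a 2D basis (u, v) spanning the facet plane
        a = np.array([1., 0., 0.]) if abs(n[0]) < 0.9 else np.array([0., 1., 0.])
        u = np.cross(n, a); u /= np.linalg.norm(u)
        v = np.cross(n, u)
        to2d = lambda p: np.array([p @ u, p @ v])
        facet2d = Delaunay(np.array([to2d(p) for p in pts]))
        if facet2d.find_simplex(to2d(com)) >= 0:
            stable.append(n)
    return stable        # one supporting-face normal per stable placement
```

For a uniform-density box, this yields one stable placement per face; the real planner additionally rejects placements where the projection is too close to the facet boundary.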

Given the raw detection result obtained using the CVFH and CRH features, the robot computes its distance to the stable placements and corrects it following Alg.1.

Data: Raw result: , ;
           Stable placements: {}
           Table height:
Result: Corrected result: ,
1 for  to .size() do
2       if  then
3            
4       end if
5      
6 end for
rotFromRpy(0, 0, rpyFromRot())
Algorithm 1 Noise correction

In this pseudo code, indicates an identity matrix. The functions and convert roll, pitch, and yaw angles to a rotation matrix and vice versa. The distance between two rotation matrices is computed in line 4. The corrected result is updated in lines 11 and 12.
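The idea of Alg.1 can be sketched as follows. The names and details are ours, not the paper's: snap the raw rotation to the nearest stable-placement rotation (geodesic distance between rotation matrices) while keeping the raw yaw, and snap the height to the table surface.

```python
# Noise-correction sketch in the spirit of Alg. 1 (illustrative names):
# keep the detected yaw, replace roll/pitch by the nearest stable
# placement, and fix the height to the known table surface.
import numpy as np

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

def rot_distance(R1, R2):
    """Geodesic distance (angle, in radians) between two rotation matrices."""
    return np.arccos(np.clip((np.trace(R1.T @ R2) - 1.) / 2., -1., 1.))

def correct(R_raw, p_raw, stable_rots, table_z):
    yaw = np.arctan2(R_raw[1, 0], R_raw[0, 0])
    # nearest stable placement, compared after re-applying the raw yaw
    best = min(stable_rots, key=lambda Rs: rot_distance(R_raw, rot_z(yaw) @ Rs))
    R_cor = rot_z(yaw) @ best
    p_cor = np.array([p_raw[0], p_raw[1], table_z])   # freeze the height DOF
    return R_cor, p_cor
```

The yaw (rotation about the table normal) is the one degree of freedom the planar constraint cannot fix, so it is preserved from the raw estimate; all other rotational noise is absorbed by the snap.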

6 Grasp and Motion Planning

After finding the poses of the parts on the planar surface, the robot next grasps the parts and assembles them. This includes two steps: a grasp planning step and a motion planning step.

6.1 Grasp planning

In the grasp planning step, we set the object in free space and compute the force-closure and collision-free grasps. Each grasp is represented using = , where and are the contact positions of the finger tips and is the orientation of the palm. The whole set is represented by , which includes many . Namely, = .

Given the pose of a part on the planar surface, say and , the IK-feasible and collision-free grasps that the robot can use to pick up the object are computed following

(2)

where

(3)

transforms the grasps in free space to the pose of the object. denotes the transformed grasp set. finds the IK-feasible grasps from the input set. checks the collision between the two input elements and finds the collision-free grasps. denotes the IK-feasible and collision-free grasps that the robot can use to pick up the object.
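The filtering pipeline of Eqns (2)-(3) can be sketched as follows. The predicates `ik_feasible` and `collision_free` are stubs standing in for the robot's real IK solver and collision checker, and grasps are simplified to 4x4 frames:

```python
# Grasp filtering sketch: move the free-space grasps to the object's
# detected pose, then keep only those that pass the IK and collision
# checks. `ik_feasible` / `collision_free` are placeholder predicates.
import numpy as np

def transform_grasps(free_grasps, T_obj):
    """free_grasps: list of 4x4 grasp frames in the object's local frame."""
    return [T_obj @ g for g in free_grasps]

def usable_grasps(free_grasps, T_obj, ik_feasible, collision_free):
    return [g for g in transform_grasps(free_grasps, T_obj)
            if ik_feasible(g) and collision_free(g)]
```

The same pipeline serves Eqns (4)-(5) by supplying the assembled-structure pose and adding the mate part's mesh to the collision check.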

Likewise, given the pose of object A in the assembled structure, say and , the IK-feasible and collision-free grasps that the robot can use to assemble it are computed following

(4)

where

(5)

transforms the grasps in free space to the pose of the object in the assembled structure. denotes the transformed grasp set. and are the same as in Eqn (2). indicates the mesh model of object B at pose . denotes the IK-feasible and collision-free grasps that the robot can use to assemble the object.

6.2 Motion planning

In the motion planning step, we build a graph using the elements in and , search the graph to find high-level keyframes, and perform Transition-based Rapidly-exploring Random Tree (Transition-RRT) Jaillet08 () motion planning between the keyframes to find assembly motions.

Figure 6: The flow of the motion planning. Given the initial and goal poses of an object (left images in the upper-left and lower-left frameboxes), we search its available initial and goal grasps and use the common grasps and IK to get the initial and goal configurations of the robot arm. Then, we do motion planning repeatedly between the initial and goal configurations to find a solution to the desired task.

Fig.6 shows the flow. The object X in this graph is a wooden block shown in the left image of the upper-left frame box. The image also shows the pose of this object on the planar surface, and . When the object is assembled into the structure, its pose, and , is shown in the left image of the bottom-left frame box. The grasps associated with the poses are shown in the right images of the two frame boxes. They are rendered using the colored hand model: green, blue, and red denote IK-feasible and collision-free, IK-infeasible, and collided grasps respectively. We build a graph to find the common grasps and employ Transition-RRT to find the motion between the initial configuration and the goal configuration.
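To make the sampling loop concrete, here is a minimal plain-RRT sketch in a 2D configuration space. It is not the Transition-RRT variant the paper uses, and `collision_free` is a stub the real planner replaces with the robot's collision checker:

```python
# Minimal goal-biased RRT sketch (2D unit-square configuration space).
import math
import random

def rrt(start, goal, collision_free, step=0.1, iters=2000, goal_tol=0.1):
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        # goal-biased random sample
        q = goal if random.random() < 0.1 else (random.random(), random.random())
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        dx, dy = q[0] - nodes[i][0], q[1] - nodes[i][1]
        d = math.hypot(dx, dy) or 1e-9
        new = (nodes[i][0] + step * dx / d, nodes[i][1] + step * dy / d)
        if not collision_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:   # reached: backtrack the path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

Transition-RRT augments this loop with a cost function and a temperature-controlled acceptance test so the tree can cross cost barriers, which is what makes the narrow-passage goal region reachable.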

In practice, the flow in Fig.6 does not work directly, since the goal configuration is in the assembled structure and lies in the narrow passages or on the boundaries of the configuration space. The motion planning problem is a narrow-passage Wan2008 () or peg-in-hole problem Yun2008 (), which is practically unsolvable for sampling-based planners. We overcome the difficulty by adding a pre-assembly configuration: for the two objects A and B, we retract them from the structure along the approaching direction of the two objects and get the pre-assembly poses , , and , where

(6)
(7)

The grasps associated with the retracted poses are

(8)
(9)

Note that the poses in Eqns (6-9) are in the local coordinate system of object A, where is a zero vector and is an identity matrix. Given the pose of object A in the world coordinate system, and , the grasps in the world coordinate system are computed using

(10)
(11)
(12)
(13)

The motion planning is then to find a motion between one initial configuration in and a goal configuration in , where is either or . There is no motion between and since they are equal to each other. The motion between and is hard-coded along .

Figure 7: The grasp graph built using and . It has three layers: the top layer encodes the grasps associated with the initial configuration, the middle layer encodes the grasps associated with placements on planar surfaces, and the bottom layer encodes the grasps associated with the assembly pose. The left image shows one (the virtual grasp illustrated in cyan). It corresponds to a node in the bottom layer. The subimages in the frame box illustrate the placements (yellow) and their associated grasps (green).

Which initial and goal configurations to use is decided by building and searching a grasp graph, which is built using and and is shown in the frame box of Fig.7. The graph is basically the same as in Wan2016ral (), but it has three layers. The top layer has only one circle and is mapped to the initial configuration. The bottom layer also has only one circle and is mapped to the goal configuration. The middle layer is composed of several circles, where each of them maps to a possible placement on a planar surface. Each node of the circles represents a grasp: the ones in the upper layer are from , and the ones in the bottom layer are from . The ones in the middle layer are the grasps associated with the placements. The orientations of the placements are evenly sampled, and their positions are fixed to the initial position . If the circles share some grasps (grasps with the same , , values in the object's local coordinate system), we connect them at the corresponding nodes. We search the graph to find the initial and goal configurations and a sequence of high-level keyframes, and perform motion planning between the keyframes to accomplish the desired tasks. An exemplary result is shown in the experiment section.
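The graph construction and search described above can be sketched as follows. This is an illustrative simplification: grasp ids stand in for the grasp sets in the text, each layer is flattened to one set of grasps, grasps within one layer are interconnected (grasp switching at a placement), shared grasps connect adjacent layers, and a breadth-first search returns the keyframe sequence.

```python
# Layered grasp-graph sketch: nodes are (layer, grasp_id) pairs.
from collections import deque

def build_graph(layers):
    """layers: list of sets of grasp ids, top (initial) to bottom (goal)."""
    edges = {(li, g): set() for li, grasps in enumerate(layers) for g in grasps}
    for li, grasps in enumerate(layers):
        for g1 in grasps:                      # grasp switching within a layer
            edges[(li, g1)].update((li, g2) for g2 in grasps if g2 != g1)
    for li in range(len(layers) - 1):          # shared grasps connect layers
        for g in layers[li] & layers[li + 1]:
            edges[(li, g)].add((li + 1, g))
            edges[(li + 1, g)].add((li, g))
    return edges

def keyframes(edges, start, goal):
    """Breadth-first search for a sequence of high-level keyframes."""
    prev, queue = {start: None}, deque([start])
    while queue:
        n = queue.popleft()
        if n == goal:
            path = []
            while n is not None:
                path.append(n)
                n = prev[n]
            return path[::-1]
        for m in edges.get(n, ()):
            if m not in prev:
                prev[m] = n
                queue.append(m)
    return None
```

Each node of the returned keyframe sequence is then handed to the Transition-RRT planner, which computes the low-level arm motion between consecutive keyframes.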

7 Experiments and Analysis

We performed experiments to examine the precision of the developed approaches, analyze the process of grasp and motion planning, and demonstrate the applicability of the study using a Kawada Nextage robot. The camera used in the human teaching phase is a Logicool HD Webcam C615. The computer system is a Lenovo Thinkpad E550 laptop (Processor: Intel Core i5-5200 2.20GHz Clock, Memory: 4G 1600MHz DDR3). The depth sensor used in the robot execution phase is a Kinect. The computer system used to compute the grasps and motions is a Dell T7910 workstation (Processor: Intel Xeon E5-2630 v3 with 8CHT, 20MB Cache, and 2.4GHz Clock, Memory: 32G 2133MHz DDR4).

7.1 Precision of the object detection in human teaching

First, we examine the precision of object detection during human teaching. We use two sets of assembly parts and examine the precision of five assembly structures for each set. Moreover, for each assembly structure, we examine the values of at five different orientations to make the results reliable.

The two sets and ten structures (five for each set) are shown in the first row of Fig.8. Each structure is posed at five different orientations to examine the precision. The five data rows under the image row in Fig.8 give the results at these orientations. Each grid of the data rows shows the differences between the measured and actual values: the difference in position, and the differences in the roll, pitch, and yaw angles. The last row of the figure shows the average absolute detection error of each structure. The metrics are millimetres (mm) for distance and degrees (deg) for orientation. On average, the precision in position is better than 1 mm and the precision in orientation is better than 2 deg.
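The averaging in the last row can be sketched as follows (a hedged illustration; the sample values are hypothetical, not data taken from Fig.8):

```python
def mean_abs_error(samples):
    """Average of |measured - actual| over repeated detections.
    Each sample is (measured, actual) for one pose component."""
    return sum(abs(m - a) for m, a in samples) / len(samples)

# hypothetical readings: detected x-position (mm) of one structure
# at the five test orientations, against an actual value of 100.0
x_pairs = [(100.4, 100.0), (99.2, 100.0), (100.9, 100.0),
           (99.5, 100.0), (100.3, 100.0)]
err = mean_abs_error(x_pairs)  # mean absolute error over the five samples
```

The same averaging is applied per component (position and the three angles) for each structure.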

Figure 8: Results of object detection during human teaching. The image row shows the structures to be assembled. Each structure is posed at five different orientations to examine the precision, and the detection errors in distance and orientation are shown in the five data rows below. Each grid of the data rows shows the differences between the measured and actual values: the difference in position, and the differences in the roll, pitch, and yaw angles. The last row is the average detection error. The metrics are millimetres for distance and degrees for orientation.

7.2 Precision of the object detection in robotic execution

Then, we examine the precision of the object detection in the robot execution phase. Three objects with eight placements are used during the process. They are shown in the top row of Fig.9. The planar surface is set in front of the robot and is divided into four quarters. We place each placement into each quarter to get the average values. There are five data rows divided by dashed or solid lines in Fig.9, where the first four show the individual detection precision at each quarter and the last one shows the average detection precision. The detection precision is the difference between the detected value and the ground truth. Since we know the exact model of the object and the height of the table, the ground truth is known beforehand. The average detection precision in the last row is the mean of the absolute differences.

Inside each data grid there are four triples, where the upper two are the roughly detected position and orientation and the lower two are the corrected values. The roughly detected results are marked with red shadows and the corrected results are marked in green. The three values of the position triples are the x, y, z coordinates in millimetres. The three values of the orientation triples are the roll, pitch, yaw angles in degrees.
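The correction by geometric constraints can be sketched as follows (our own minimal illustration, not the paper's code, assuming the object rests flat on a table of known height, so z, roll, and pitch are fully determined by the table constraint and only x, y, and yaw are kept from the rough detection):

```python
def correct_pose(x, y, z, roll, pitch, yaw, table_height, rest_height):
    """Correct a roughly detected pose using geometric constraints:
    an object resting on a flat table has a fixed height and no
    roll/pitch relative to the table plane, so only x, y, yaw are
    kept from the rough (point cloud matching) detection.
    rest_height is the height of the object's origin above the
    table for the detected placement, known from the object model."""
    corrected_z = table_height + rest_height
    return (x, y, corrected_z, 0.0, 0.0, yaw)
```

This is consistent with the data in Fig.9: after correction the z, roll, and pitch errors vanish, while the x, y, and yaw values are carried over from the rough detection.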

Figure 9: Results of object detection during robotic execution. The figure includes three subfigure rows and five data rows. The first subfigure row shows the target object poses; the second and third subfigure rows plot some examples of the roughly detected poses and the corrected poses. The five data rows are divided by dashed or solid lines, where the first four show the individual detection precision at four different positions and the last one shows the average detection precision. Each data grid of the data rows includes four triples, where the upper two (under red shadow) are the roughly detected position and orientation and the lower two (under green shadow) are the corrected values. The three values of the position triples are the x, y, z coordinates in millimetres. The three values of the orientation triples are the roll, pitch, yaw angles in degrees. The maximum values of each data element are marked with colored frame boxes.

The results show that the maximum errors of rough position detection are -1.1, 3.3, and 2.3 mm along the x, y, and z axes. They are marked with red boxes in Fig.9. After correction, the maximum position errors change to -1.1, 3.6, and 0.0 mm respectively. They are marked with green boxes. The maximum errors of rough orientation detection are -26.3, 26.1, and -19.5 deg in the roll, pitch, and yaw angles. They are marked with red boxes. After correction, the roll and pitch errors drop to zero and only a small yaw error remains. The correction using geometric constraints completely removes the errors along the z axis and in the roll and pitch angles. It might slightly increase the errors along x and y and in yaw, but the increases are negligible. The average performance can be found in the data under the double solid line. The performance is good enough for robotic assembly.

In addition, the second and third subfigure rows of Fig.9 plot some examples of the roughly detected poses and the corrected poses. Readers may compare them with the data rows.

7.3 Simulation and real-world results

Fig.10 shows the correspondence between the paths found by the searching algorithms and the configurations of the robot and the object. The structure to be assembled in this task is the first one (upper-left one) shown in Fig.8. We do motion planning along the paths repeatedly to perform the desired tasks.

Figure 10: Snapshots of assembling the structure shown in Fig.8. The process is divided into two steps, with the first step shown in (1)-(4) and the second step shown in (5)-(8). In the first step, the robot picks up object A and transfers it to the goal pose. In the second step, the robot finds object B on the table and assembles it to object A. The subfigures (1')-(8') show the corresponding path edges and nodes on the three-layer graph.

The assembly process is divided into two steps, with each step corresponding to one assembly part. In the first step, the robot finds object A on the table and moves it to a goal pose using the three-layer graph. The subfigures (1)-(4) of Fig.10 show this step. In Fig.10(1), the robot computes the grasps associated with the initial pose and goal pose of object A. The associated grasps are rendered in green, blue, and red colors as in Fig.1. They correspond to the top and bottom layers of the graph shown in Fig.10(1'). In Fig.10(2), the robot chooses one feasible (IK-feasible and collision-free) grasp from the associated grasps and does motion planning to pick up the object. The selected grasp corresponds to one node in the top layer of the graph, which is marked in red in Fig.10(2'). In Fig.10(3), the robot picks up object A and transfers it to the goal pose using a second motion planning. This corresponds to an edge in Fig.10(3') which connects a node in one circle to a node in another. The edge directly connects to the goal in this example, and there are no intermediate placements. After that, the robot moves its arm back in Fig.10(4), which corresponds to a node in the bottom layer of the graph shown in Fig.10(4').
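The feasible-grasp selection in Fig.10(2) can be sketched as a simple filter (a hedged sketch; `ik_solver` and `in_collision` are placeholders for the robot-specific inverse kinematics and collision checkers, not functions from the paper):

```python
def choose_feasible_grasp(candidate_grasps, ik_solver, in_collision):
    """Return the first grasp that is both IK-feasible and
    collision-free, together with its joint solution.
    ik_solver(grasp) returns a joint configuration or None;
    in_collision(joints) checks the arm against the environment."""
    for grasp in candidate_grasps:
        joints = ik_solver(grasp)
        if joints is not None and not in_collision(joints):
            return grasp, joints
    return None, None
```

If no candidate passes both checks, the planner falls back to another node of the graph (a different grasp or an intermediate placement).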

In the second step, the robot finds object B on the table and assembles it to object A. The subfigures (5)-(8) of Fig.10 show this step. In Fig.10(5), the robot computes the grasps associated with the initial pose and goal pose of object B. They are rendered in green, blue, and red colors as in Fig.10(1) and correspond to the top and bottom layers of the graph shown in Fig.10(5'). In Fig.10(6), the robot chooses one feasible grasp and does motion planning to pick up the object. The selected grasp corresponds to the marked node in Fig.10(6'). In Fig.10(7), the robot picks up object B and assembles it to the goal pose using a second motion planning, which corresponds to an edge in Fig.10(7'). Finally, the robot moves its arm back in Fig.10(8) and (8').

The subfigures (1”)-(8”) in the third row show how the robot executes the planned motion. They correspond to (1)-(8) and (1’)-(8’) in the first two rows.

8 Conclusions and Future Work

We presented precise 3D visual detection approaches in this paper to meet the requirements of a smart mechanical assembly system. In the human teaching phase, where human beings control the operation and can actively avoid occlusion, we use AR markers and compute the pose of the object by detecting the markers' poses. In the robot execution phase, where occlusions happen unexpectedly, we use point cloud matching to find a raw pose and use extrinsic geometric constraints to correct the noise. We examined the precision of the approaches in the experiments and demonstrated that it suffices for assembly tasks performed by an industrial robot using a graph model.

Future work will focus on the manipulation and assembly aspects. The current result is position-based assembly; it will be extended to force-based assembly tasks such as inserting and snapping.

References

  • (1) T. Lozano-Perez, J. L. Jones, E. Mazer, P. A. O’Donnell, HANDEY: A Robot Task Planner, The MIT Press, 1992.
  • (2) M. T. Mason, Mechanics of Robotic Manipulation, The MIT Press, 2001.
  • (3) M. Dogar, A. Spielberg, S. Baker, D. Rus, Multi-robot grasp planning for sequential assembly operations, in: Proceedings of International Conference on Robotics and Automation (ICRA), 2015.
  • (4) C. Goldfeder, Data-Driven Grasping, Ph.D. thesis, Columbia University (2002).
  • (5) I. Lenz, H. Lee, A. Saxena, Deep Learning for Detecting Robotic Grasps, International Journal of Robotics Research (IJRR).
  • (6) H. Murase, S. K. Nayar, Visual Learning and Recognition of 3D Objects from Appearance, International Journal of Computer Vision (IJCV).
  • (7) P. Mittrapiyanuruk, G. N. Desouza, A. C. Kak, Calculating the 3D-Pose of Rigid-Objects using Active Appearance Models, in: Proceedings of International Conference on Robotics and Automation (ICRA), 2004.
  • (8) S. Zickler, M. Veloso, Detection and Localization of Multiple Objects, in: Proceedings of International Conference on Humanoid Robots (Humanoids), 2006.
  • (9) M. Stark, M. Goesele, B. Schiele, Back to the Future: Learning Shape Models from 3D CAD Data, in: Proceedings of British Machine Vision Conference, 2011.
  • (10) J. Liebelt, C. Schmid, Multi-view Object Class Detection with a 3D Geometric Model, in: Proceedings of Computer Vision and Pattern Recognition (CVPR), 2010.
  • (11) P. Wunsch, G. Hirzinger, Registration of CAD Models to Images by Iterative Inverse Perspective Matching, in: Proceedings of International Conference on Pattern Recognition, 1996.
  • (12) P. David, D. DeMenthon, R. Duraiswami, H. Samet, SoftPOSIT: Simultaneous Pose and Correspondence Determination, in: Proceedings of European Conference on Computer Vision, 2002.
  • (13) M. A. Fischler, R. C. Bolles, Random Sample Consensus: A Paradigm for Model Fitting and Applications to Image Analysis and Automated Cartography, Association for Computing Machinery.
  • (14) D. DeMenthon, L. S. Davis, Model-Based Object Pose in 25 Lines of Code, International Journal of Computer Vision.
  • (15) C.-P. Lu, G. D. Hager, E. Mjolsness, Fast and Globally Convergent Pose Estimation from Video Images, Transaction on Pattern Analysis and Machine Intelligence (PAMI).
  • (16) F. Schaffalitzky, A. Zisserman, Multi-view Matching for Unordered Image Sets, or “How do I Organize My Holiday Snaps?”, in: Proceedings of European Conference on Computer Vision, 2002.
  • (17) C. J. Harris, A Combined Corner and Edge Detector, in: Proceedings of Alvey Vision Conference, 1988.
  • (18) K. W. Chia, A. D. Cheok, S. J. Prince, Online 6-DOF Augmented Reality Registration from Natural Features, in: Proceedings of European Conference on Computer Vision, 2002.
  • (19) P. David, D. DeMenthon, R. Duraiswami, H. Samet, Simultaneous Pose and Correspondence Determination using Line Features, in: Proceedings of Computer Vision and Pattern Recognition, 2003.
  • (20) G. Klein, T. Drummond, Robust Visual Tracking for Non-instrumented Augmented Reality, in: Proceedings of International Symposium on Mixed and Augmented Reality, 2003.
  • (21) E. Marchand, F. Chaumette, Virtual visual servoing: A framework for real-time augmented reality, in: Proceedings of Eurographic, 2002.
  • (22) K. Harada, K. Nagata, T. Tsuji, N. Yamanobe, A. Nakamura, Y. Kawai, Probabilistic Approach for Object Bin Picking Approximated by Cylinders, in: Proceedings of International Conference on Robotics and Automation (ICRA), 2013.
  • (23) D. G. Lowe, Distinctive Image Features from Scale-invariant Key-points, International Journal of Computer Vision (IJCV).
  • (24) I. Gordon, D. G. Lowe, What and Where: 3D Object Recognition with Accurate Pose, Lecture Notes in Computer Science.
  • (25) A. Collet, D. Berenson, S. S. Srinivasa, D. Ferguson, Object Recognition and Full Pose Registration from a Single Image for Robotic Manipulation, in: Proceedings of International Conference on Robotics and Automation (ICRA), 2009.
  • (26) W. Press, S. Teukolsky, W. Vetterling, Numerical Recipes in C: The Art of Scientific Computing, The Cambridge Press, 1992.
  • (27) Y. Cheng, Mean shift, Mode seeking, and Clustering, Transaction on Pattern Analysis and Machine Intelligence (PAMI).
  • (28) A. Ramisa, D. Aldavert, S. Vasudevan, R. Toledo, R. L. de Mantaras, Evaluation of Three Vision based Object Perception Methods for a Mobile Robot, Journal of Intelligent Robotic Systems (JIRS).
  • (29) M. Fiala, ARTag: A Fiducial Marker System using Digital Techniques, in: Proceedings of Computer Vision and Pattern Recognition (CVPR), 2005.
  • (30) D. Wagner, D. Schmalstieg, ARToolKitPlus for Pose Tracking on Mobile Devices, in: Proceedings of Computer Vision Winter Workshop, 2007.
  • (31) V. Sundareswaran, R. Behringer, Visual Servoing-based Augmented Reality, in: Proceedings of International Workshop on Augmented Reality, 1998.
  • (32) H. Kato, M. Billinghurst, Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System, in: Proceedings of International Workshop on Augmented Reality, 1999.
  • (33) L. Vacchetti, V. Lepetit, P. Fua, Stable Real-time 3D Tracking using Online and Offline Information, Transaction on Pattern Analysis and Machine Intelligence (PAMI).
  • (34) S. Makita, K. Okita, Y. Maeda, Motion Planning for 3D Multifingered Caging with Object Recognition using AR Picture Markers, in: Proceedings of International Conference on Mechatronics and Automation, 2015.
  • (35) F. Suligoj, B. Sekoranja, M. Svaco, B. Jerbic, Object Tracking with a Multiagent Robot System and a Stereo Vision Camera, Procedia Engineering.
  • (36) K. Ramirez-Amaro, M. Beetz, G. Cheng, Transferring Skills to Human Robots by Extracting Semantic Representations from Observations of Human Activities, Artificial Intelligence.
  • (37) B. Freedman, A. Shpunt, M. Machline, Y. Arieli, Depth Mapping using Projected Patterns (2012).
  • (38) S. M. Choi, E. G. Lim, J. I. Cho, D. H. Hwang, Stereo Vision System and Stereo Vision Processing Method (2012).
  • (39) P. E. Bauhahn, B. S. Fritz, B. C. Krafthefer, Systems and Methods for Safe Laser Imaging (2009).
  • (40) R. B. Rusu, S. Cousins, 3D is Here: Point Cloud Library (PCL), in: Proceedings of International Conference on Robotics and Automation (ICRA), 2011.
  • (41) C. Schutz, H. Hugli, Augmented Reality using Range Images, in: SPIE Photonics West, The Engineering Reality of Virtual Reality, 1997.
  • (42) P. J. Besl, N. D. McKay, A Method for Registration of 3-D Shapes, Transactions on Pattern Analysis and Machine Intelligence (PAMI).
  • (43) F. Tombari, S. Salti, L. D. Stefano, Unique Signatures of Histograms for Local Surface Description, in: Proceedings of European Conference on Computer Vision (ECCV), 2010.
  • (44) Z. C. Marton, D. Pangercic, N. Blodow, M. Beetz, Combined 2D-3D Categorization and Classification for Multimodal Perception Systems, International Journal of Robotic Research (IJRR).
  • (45) A. Aldoma, et al., CAD-model recognition and 6DOF pose estimation using 3D cues, in: ICCV Workshops, 2011.
  • (46) W. Wohlkinger, M. Vincze, Ensemble of Shape Functions for 3D Object Classification, in: Proceedings of International Conference on Robotics and Biomimetics (ROBIO), 2011.
  • (47) A. Aldoma, Z. C. Marton, F. Tombari, M. Vincze, Tutorial: Point Cloud Library: Three-Dimensional Object Recognition and 6 DOF Pose Estimation, IEEE Robotics and Automation Magazine.
  • (48) Y. Shiraki, K. Nagata, N. Yamanobe, A. Nakamura, K. Harada, D. Sato, D. N. Nenchev, Modeling of Everyday Objects for Semantic Grasp, in: Proceedings of International Symposium on Robot and Human Interactive Communication, 2014.
  • (49) M. J. Schuster, J. Okerman, H. Nguyen, J. M. Rehg, C. C. Kemp, Perceiving Clutter and Surface for Object Placement in Indoor Environment, in: Proceedings of International Conference on Humanoid Robots (Humanoids), 2010.
  • (50) G. Somanath, M. Rohith, D. Metaxas, C. Kambhamettu, D-Clutter: Building Object Model Library from Unsupervised Segmentation of Cluttered Scenes, in: Proceedings of Computer Vision and Pattern Recognition, 2010.
  • (51) M. Saval-Calvo, J. Azorin-Lopez, A. Fuster-Guillo, J. Garcia-Rodriguez, Three-dimensional Planar Model Estimation using Multi-constraint Knowledge based on K-means and RANSAC, Applied Soft Computing.
  • (52) L. Goron, Z.-C. Marton, G. Lazea, M. Beetz, Robustly Segmenting Cylindrical and Box-like Objects in Cluttered Scenes using Depth Camera, in: Proceedings of German Conference on Robotik, 2012.
  • (53) E. C. Cheung, C. Cao, J. Pan, Multi-contour Initial Pose Estimation for 3D Registration, in: Proceedings of International Conference on Intelligent Robots and Systems (IROS), 2015.
  • (54) G. Taylor, L. Kleeman, Fusion of Multimodal Visual Cues for Model-based Object Tracking, in: Proceedings of Australasian Conference on Robotics and Automation (ACRA), 2003.
  • (55) T. Kempter, A. Wendel, H. Bischof, Online Model-Based Multi-Scale Pose Estimation, in: Proceedings of Computer Vision Winter Workshop, 2012.
  • (56) G. Reitmayr, T. Drummond, Going Out: Robust Model-Based Tracking for Outdoor Augmented Reality, in: Proceedings of International Symposium on Mixed and Augmented Reality, 2006.
  • (57) D. Pangercic, V. Haltakov, M. Beetz, Fast and Robust Object Detection in Household Environments using Vocabulary Trees with Sift Descriptors, in: International Conference on Intelligent Robots and Systems (IROS), Workshop on Active Semantic Perception and Object Search in the Real World, 2011.
  • (58) S. Hinterstoisser, S. Holzer, C. Cagniart, S. Ilic, K. Konolige, N. Navab, V. Lepetit, Multimodal Templates for Real-Time Detection of Texture-less Objects in Heavily Cluttered Scenes, in: Proceedings of International Conference on Computer Vision (ICCV), 2011.
  • (59) Y. Domae, D. Okuda, Y. Taguchi, K. Sumi, T. Hirai, Fast Graspability Evaluation on Single Depth Maps for Bin Picking with General Grippers, in: Proceedings of International Conference on Robotics and Automation (ICRA), 2014.
  • (60) W. Wan, K. Harada, Developing and Comparing Single-arm and Dual-arm Regrasp, IEEE Robotics and Automation Letters (RA-L).
  • (61) L. Jaillet, J. Cortes, T. Simeon, Transition-based RRT for Path Planning in Continuous Cost Spaces, in: Proceedings of International Conference on Intelligent Robots and Systems (IROS), 2008.
  • (62) H. Liu, D. Ding, W. Wan, Predictive Model for Path Planning using K-near Dynamic Bridge Builder and Inner Parzen Window, in: Proceedings of International Conference on Intelligent Robots and Systems (IROS), 2008.
  • (63) S. Yun, Compliant Manipulation for Peg-in-Hole: Is Passive Compliance a Key to Learn Contact Motion?, in: Proceedings of International Conference on Robotics and Automation (ICRA), 2008.