Deep Gated Multi-modal Learning:
In-hand Object Pose Estimation with Tactile and Image

Tomoki Anzai, Kuniyuki Takahashi. The starred authors contributed equally. T. Anzai is associated with The University of Tokyo; this work was carried out during a part-time job at Preferred Networks. anzai@jsk.imi.i.u-tokyo.ac.jp. K. Takahashi is associated with Preferred Networks, Inc. takahashi@preferred.jp
Abstract

In robot manipulation tasks, especially in-hand manipulation, estimation of the position and orientation of an object is an essential skill to manipulate objects freely. However, since in-hand manipulation tends to cause occlusion by the hand itself, image information alone is not sufficient. One approach to this challenge is to combine tactile sensors. The advantage of using multiple sensors (modals) is that the other modals can compensate for occlusion, noise, and sensor malfunctions. Although deciding the reliability of each modal according to the situation is important, manually designing such a model to cope with various situations is difficult. Therefore, in this study, we propose deep gated multi-modal learning, which uses end-to-end deep learning so that the network determines the reliability of each modal by itself. In the experiments, an RGB camera and a GelSight tactile sensor were attached to the gripper of a Sawyer robot, and object poses were estimated during grasping. A total of 15 objects were used in the experiments. In the proposed model, the reliability of each modal was determined according to the noise and failure of that modal, and we confirmed that poses were estimated even for unknown objects. An accompanying video is available at the following link:
https://youtu.be/NhS_ZhgADGQ

I Introduction

Robots are expected to work not only in factories but also in home environments. Personal robots are assumed to grasp objects in various environments. Sometimes obstacles in the environment prevent the robot from directly reaching the target grasping pose. In such a situation, the robot needs to either grasp the object once, place it on a desk, and grasp it again to achieve the target pose, or perform a change-over motion on the grasped object, i.e., in-hand manipulation. However, placing the object takes more time than in-hand manipulation. Furthermore, a table for placing does not always exist in the environment, in which case the former method cannot be used. For robot manipulation tasks, especially in-hand manipulation [yousef2011tactile, nikhil2018inhand], estimating the pose of an object grasped by a robotic hand is essential to manipulate objects freely and accurately.

Although many researchers have developed object pose estimation based on image information only, images alone are not sufficient for in-hand manipulation, where occlusion by the hand itself is likely to occur. One approach to this challenge is to combine multiple sensors (modals). Tactile sensors, which can observe contact point information that is challenging to acquire from images, should be useful. One of the advantages of using multiple modals is that even if some of the modals suffer from occlusion, noise, or malfunction, the other modals can compensate. In such situations, it is necessary to predict the reliability of each modal and to determine which modal to use. However, a manually designed model has difficulty dealing with various situations.

Hence, we propose a method that we call deep gated multi-modal learning (DGML), which uses end-to-end deep learning so that the network can predict and determine the reliability of each modal by itself. By virtue of end-to-end deep learning, this method is capable of generalizing to unknown objects when estimating their in-hand pose.

The rest of this paper is organized as follows. Related works and the contributions of this paper are described in Section II, while Section III explains our proposed method. Section IV outlines our experimental setup and evaluation settings, with results presented in Section V. Finally, future work and conclusions are described in Section VI.

Fig. 1: An illustration of in-hand object pose estimation with image and tactile through our proposed network, deep gated multi-modal learning, which can decide the reliability of each modal.

II Related Works & Contributions

II-A Object Pose Estimation with Depth and Image

Object pose estimation is a well-studied problem in computer vision and is important for robotic tasks. In particular, many researchers have been developing methods using depth data (point clouds) or RGB-D data [choi2012voting, choi20123d, aldoma2012tutorial]. Classical approaches with depth data are mainly based on point cloud matching methods such as iterative closest point (ICP) [ICP]. These methods can achieve high accuracy. However, since they require 3D models of the objects, they cannot be applied to unknown objects.

Recently, deep learning has become an active research area, and the computer vision field in particular has achieved a lot of success. Accordingly, object pose estimation from images through the combination of deep learning with model-based approaches has been studied [krull2015learning, xiang2017posecnn, Wang2019DenseFusion]. Deep-learning-based methods can suppress the convergence error of ICP caused by the initial deviation between the 3D model and the target object. Since these methods also require a 3D model, adapting to unknown objects remains challenging. In the state of the art in pose estimation, methods that do not require 3D models have been studied using deep learning [schwarz2015rgbd, hodan2018bop, hu2019segmentation]. Although this makes it possible to adapt to unknown objects without a 3D model, objects with occlusion remain a challenge.

Note that OpenAI implemented the task of aligning the orientation of a cube using a multi-fingered hand [andrychowicz2018learning]. Although this was realized on the basis of images including occlusion, precise pose estimation remains a challenge, because the required accuracy was only to align the six specified faces of the cube.

II-B Tactile-based Object Pose Estimation

As is common to the pose estimation research described in Section II-A, most studies mainly use image and depth information only. These methods are difficult to apply to in-hand manipulation because of occlusion by the hand in the image or depth data. Since tactile sensors can observe the contact state without being occluded by the object or the hand, they are attracting attention [tomo2016uskin, tomo2018uskin, yuan2017gelsight, dong2017improved]. There is some research in which tactile and image information are combined [gao2016deep, Calandra2017, Bimbo2012], and the effectiveness of fusing tactile and visual sensing has been demonstrated for object grasping. Some research performed object pose estimation with a model-based approach using a 3D model [Bimbo2016]. To the best of our knowledge, there is still little research on object pose estimation using tactile sensors that does not require a 3D model and handles unknown objects.

II-C Multi-modal Learning

A variety of methods have been employed in existing works on multi-modal learning. The majority of these methods fall into one of three categories regarding how attention is paid to each modal.

1) Equal attention to all modals: The information of each modal is input to its own network for feature extraction, and the obtained features are simply combined to estimate information such as grasping points and motions. Examples include the combination of image and tactile described in Section II-B, force and image [lee2019making], language and image [hatori2018interactive], and image and sound [noda2014multimodal].

2) Attention within each modal: Only the important parts of each modal are used. The parts to attend to in each modal are acquired by the network itself. For example, in the case of images, the pixels of interest in the image are used [kim2018robust].

3) Attention to modals: Only the important modals are used while the other modals are ignored. The network decides which modals should be used. For example, important modals are chosen from language and images according to the situation [arevalo2017gated].

In this study, we develop a new approach in which the network determines the reliability of each modal and changes the amount of each modal that is used according to its reliability.

II-D Development of Tactile Sensors

To achieve skillful manipulation by robots, many tactile sensors have been developed, and some of them have been applied to robotic hands [dahiya2013directions]. The majority of these sensors fall into one of the following two categories: 1. multi-touch sensors whose sensing capability is limited to one-axis force information vertical to the sensor surface, resembling the sense of pressure [ohmura2006conformable, iwata2009design, mittendorfer2011humanoid, fishel2012sensing], or 2. three-axis sensors with only a single cell [paulino2017low]. Two of the few exceptions are the uSkin [tomo2018uskin] and the GelSight [johnson2009retrographic, dong2017improved], which can obtain shear force as well as pressure force with multi-touch. The commercialized uSkin sensor [tomo2018uskin] utilizes magnets embedded in silicone rubber to measure the deformation of the silicone during contact by monitoring changes of the magnetic fields. Using this method, it is able to measure both normal and shear forces per sensor unit for 16 contact points in the prototype [tomo2016uskin]. Instead of magnets, the GelSight [yuan2017gelsight, Calandra2017] is an optical tactile sensor, which uses a camera to capture and measure the deformation of its attached elastomer during contact with a surface.

II-E Contributions

The target of our method is to estimate the in-hand object pose. The main contributions of this article are as follows:

  • Combining image and tactile information to cope with occlusion.

  • Use end-to-end learning without assuming a 3D model to estimate the pose of unseen objects.

  • Proposing a new approach in which the network itself decides the reliability and contribution of each modal.

  • Investigating in detail the behavior under noise, malfunction, and occlusion in sensor information, which is unique to the robotics field.

Fig. 2: The proposed network architecture for deep gated multi-modal learning, composed of two CNNs with a gate, an LSTM, and FC layers. The inputs are sequences of images and tactile data, and the output is the time-series object pose. The values of the gate (α and β) represent the reliability of each modal, and the values are acquired by the network itself. After training, if the gate value for a modal is close to 0, the modal has low reliability, whereas if the value is close to 1, the modal has high reliability.

III Deep Gated Multi-modal Learning

We propose an object pose estimation method with image and tactile sensing based on end-to-end deep learning. In order to consider the reliability of each modal, we propose a method using gates. As in long short-term memory (LSTM), the flow of information can be adjusted by opening and closing gates. In this study, the amount of information from each modal is adjusted by changing the gate continuously between 0 and 1, rather than using binary values of 0 and 1. We show the concept of the proposed network model in Fig. 2. DGML is composed of the following four components for in-hand pose estimation:

  • Feature extraction from image and tactile data by convolutional neural networks (CNNs)

  • Reliability estimation of each modal by a gate

  • Reflection of the reliability onto each modal

  • Estimation of the object pose from the time-series input information that reflects the reliability, by an LSTM and fully connected (FC) layers

An image and tactile data are the inputs, and the relative pose of the object is the output. Since our proposed method works without 3D models given in advance, we estimate the relative pose whose basis is the pose at the moment the object is grasped, not the absolute pose. In other words, we estimate how much the object moves within the hand after the robot starts to grasp it.

The feature extraction parts are calculated from the training data at step $t$ as follows:

$f^{img}_t = \mathrm{CNN}^{img}(x^{img}_t), \quad f^{tac}_t = \mathrm{CNN}^{tac}(x^{tac}_t)$   (1)

Then, the reliability value of the image modal, $\alpha_t$, is given by:

$\alpha_t = \sigma\left(W_g\left[\mathrm{FC}^{img}(f^{img}_t),\ \mathrm{FC}^{tac}(f^{tac}_t)\right] + b_g\right)$   (2)

The reliability values $\alpha_t$ and $\beta_t$ of the image and tactile modals satisfy the condition:

$\alpha_t + \beta_t = 1$   (3)

The gate determines the relative reliability of a modal from the information of all modals. Therefore, the modal reliability is calculated by integrating the inputs from each modal. By constraining the sum of the gate values over the modals to 1, relative reliability is expressed. The activation function of the gate is a sigmoid, so $\alpha_t$ and $\beta_t$ take values between 0 and 1. The lower the reliability of a modal, the closer its value is to 0 and the smaller its contribution to the output. Conversely, the higher the reliability of a modal, the closer its value is to 1 and the greater its contribution to the output. In other words, if the value is 0, the modal is completely ignored.

Then, the reliability is reflected onto each modal as:

$\tilde{f}^{img}_t = \alpha_t f^{img}_t, \quad \tilde{f}^{tac}_t = \beta_t f^{tac}_t$   (4)

Finally, the object pose is calculated as the output:

$\hat{y}_t = \mathrm{FC}\left(\mathrm{LSTM}\left(\left[\tilde{f}^{img}_t, \tilde{f}^{tac}_t\right]\right)\right)$   (5)

The whole network is trained to minimize the following cost function:

$L = \sum_{t} \mathcal{H}\left(y_t, \hat{y}_t\right)$   (6)

where $\sigma$ in (2) is the sigmoid activation function, the parameters of the CNNs, gate, LSTM, and FC layers are trained jointly, and $\mathcal{H}$ is the Huber loss function. $y_t$ is the expected output, and $\hat{y}_t$ is the output inferred from the inputs $x^{img}_t$ and $x^{tac}_t$.

There is no teacher signal for the reliability values $\alpha_t$ and $\beta_t$; they are calculated by the network itself so as to minimize the output error in (6). The reliability values are then multiplied with each modal in (4). For this reason, when training networks connected to multiple sensors, modals that do not contribute to the training data are ignored. Therefore, it is not necessary to manually design in advance which sensor to use.
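To make the data flow of (2)-(6) concrete, the following is a minimal sketch in Chainer, the library used for our implementation (Section IV-D). The layer sizes follow Table I, but the class and function names are our own illustration, and the per-modal CNN features of (1) are assumed to be computed upstream; this is a reading of the equations, not released code.

```python
import chainer
import chainer.functions as F
import chainer.links as L


class GatedFusionHead(chainer.Chain):
    """Sketch of eqs. (2)-(5): gate, fusion, LSTM, and output FC.
    The CNN features of eq. (1) are assumed to be computed upstream."""

    def __init__(self, img_dim=1120, tac_dim=2240, hidden=170, out_dim=3):
        super().__init__()
        with self.init_scope():
            self.fc_img = L.Linear(img_dim, 1)   # 1st img (FC) in Table I
            self.fc_tac = L.Linear(tac_dim, 1)   # 1st tac (FC) in Table I
            self.gate = L.Linear(2, 1)           # 2nd gate layer
            self.lstm = L.LSTM(img_dim + tac_dim, hidden)
            self.fc_out = L.Linear(hidden, out_dim)

    def __call__(self, f_img, f_tac):
        g = F.concat((self.fc_img(f_img), self.fc_tac(f_tac)), axis=1)
        alpha = F.sigmoid(self.gate(g))          # (2): image reliability
        beta = 1.0 - alpha                       # (3): alpha + beta = 1
        fused = F.concat(
            (F.broadcast_to(alpha, f_img.shape) * f_img,   # (4)
             F.broadcast_to(beta, f_tac.shape) * f_tac),
            axis=1)
        return self.fc_out(self.lstm(fused)), alpha        # (5)


def sequence_loss(head, feats_img, feats_tac, poses):
    """Eq. (6): Huber loss summed over one motion sequence. There is
    no target for alpha or beta; they are trained only through (6)."""
    head.lstm.reset_state()
    loss = 0
    for f_i, f_t, y in zip(feats_img, feats_tac, poses):
        y_hat, _ = head(f_i, f_t)
        loss += F.sum(F.huber_loss(y_hat, y, delta=1.0))
    return loss
```

Because the gate receives both feature vectors, a noisy or missing modal can lower its own reliability through the learned weights, which is the mechanism exercised in the evaluations of Section V.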

IV Experimental Setup

The purpose of the experiments is to verify DGML in occlusion situations, followed by a detailed investigation of noise and malfunction in the sensor information.

We note that the values of the hyper-parameters provided in this section were tuned by random search.

IV-A Hardware Setup

IV-A1 Tactile sensor

The GelSight tactile sensor we duplicated from [yuan2017gelsight, Calandra2017] is an optical tactile sensor, which captures an image of the contact surface with a camera. The sensor is capable of measuring the applied pressing force and shear force along the x, y, and z axes from the captured image. We use the captured raw image as the tactile input. Applying an excessive amount of force tears the silicone layer off the sensor's container. To prevent this problem, we applied baby powder to the silicone surface to reduce friction between the surface and the grasped object.

Fig. 3: Setup used in our experiments. A custom printed end-effector with both a tactile skin sensor and a web camera. The Sawyer robot moves the gripper in the negative x-axis and y-axis directions and rotates it in yaw.

IV-A2 Gripper

We developed a gripper to grasp objects, shown in Fig. 3. It is a parallel gripper with two fingers driven by a servo motor (Dynamixel XM430-W350-R). The GelSight tactile sensor is attached to one fingertip, and the other fingertip has a sponge. A camera (BUFFALO BSW200MBK) is mounted at the center of the gripper.

IV-A3 Sawyer

To perform our experiments, we use a Sawyer 7-DOF robotic arm with the gripper as its end-effector (see Fig. 3). The Sawyer, GelSight sensor, gripper, and camera are connected to a PC running Ubuntu 16.04 with ROS Kinetic.

IV-B Objects

As target objects, we prepared 15 objects of various sizes and shapes (see Fig. 4). Eleven of these objects are used for training, while the remaining four are used to evaluate our trained network as unknown objects.

Fig. 4: Trained objects (red) and unknown objects (blue)

IV-C Data Collection

Fig. 3 shows one of the initial positions from which the robot starts to manipulate the object for data collection. A table is placed in front of the robot, and the object is fixed on the table with double-sided tape. The robot grasps the object, and the image and tactile data are recorded while the robot slides the object in its hand. We estimate the three-DoF object pose. Given the coordinate system of the hand defined as shown in Fig. 3, the estimated values are $x$, $y$, and yaw. While we estimate the object pose relative to the pose at the moment the robot starts to grasp it, the change of the object pose can be calculated from the change of the hand pose, since the object is fixed to the table. We define the homogeneous transformation matrix $T_t$ as the pose of the gripper at time $t$ described in the base link coordinate system. This can be calculated easily by forward kinematics. The change of the object pose from time $t$ to $t'$ can then be represented as $T_t^{-1} T_{t'}$.
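As a small illustration of this bookkeeping (our own sketch; the gripper poses would come from the robot's forward kinematics, and the function names are hypothetical):

```python
import numpy as np


def relative_pose(T_t, T_tp):
    """Change of the gripper pose from time t to t' as a 4x4
    homogeneous transform: Delta T = T_t^{-1} T_{t'}."""
    return np.linalg.inv(T_t) @ T_tp


def pose_label(delta_T):
    """Reduce Delta T to the 3-DoF training label (x, y, yaw),
    assuming yaw is the rotation about the gripper's z axis."""
    x, y = delta_T[0, 3], delta_T[1, 3]
    yaw = np.arctan2(delta_T[1, 0], delta_T[0, 0])
    return np.array([x, y, yaw], dtype=np.float32)
```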

We prepared movement patterns including both translational and rotational motions, and collected data for each object. In order not to depend on features of the background moving as the robot moves, green cloths cover the background and the desk. The maximum movement was about 30 mm in translation and about 40 degrees in rotation. For small objects such as the tapes, cups, scale, and wrench, only rotational movements were performed. For each object, the numbers of motions for translation, rotation, and the combination of both are 10, 10, and 12, respectively. For the trained objects in Fig. 4, 6 out of the 10 translation datasets, 6 out of the 10 rotation datasets, and 8 of the 12 combined datasets are used for training, and the remaining data are used for evaluation.

Images, tactile data, and object poses were acquired at a fixed rate, and the dataset used for training was resampled to a lower rate. The captured images were converted to grayscale because the object pose is independent of the color of the objects. One motion of the training dataset has a length of about 150 steps, and each step consists of an image frame, a tactile frame, and the 3-DoF object pose.
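A plausible preprocessing sketch (the paper's exact resolutions and rates did not survive extraction, so `size` and `keep_every` below are placeholders):

```python
import numpy as np
import cv2


def preprocess_frame(bgr, size):
    """Grayscale conversion (the pose is independent of object color)
    and resizing to the network input resolution; `size` is (w, h)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, size).astype(np.float32) / 255.0


def resample(frames, keep_every):
    """Downsample a recorded sequence to the training rate by striding."""
    return frames[::keep_every]
```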

Layer            In     Out    Filter size    Activation function
------------------------------------------------------------------
Image
  1st conv.      1      32     (3,3)          ReLU
  Avg. pooling   32     32     (4,4)          -
  2nd conv.      32     32     (3,3)          ReLU
  Avg. pooling   32     32     (2,2)          -
  3rd conv.      32     32     (3,3)          ReLU
  Avg. pooling   32     32     (2,2)          -
Tactile
  1st conv.      1      32     (3,3)          ReLU
  Avg. pooling   32     32     (4,4)          -
  2nd conv.      32     32     (3,3)          ReLU
  Avg. pooling   32     32     (2,2)          -
  3rd conv.      32     32     (3,3)          ReLU
  Avg. pooling   32     32     (2,2)          -
Reliability
  1st img (FC)   1120   1      -              -
  1st tac (FC)   2240   1      -              -
  2nd gate       2      1      -              sigmoid
LSTM
  LSTM           3360   170    -              sigmoid & tanh
FC
  1st            170    3      -              -

  • In and Out are the numbers of channels for the image and tactile branches, and the numbers of neurons for the gate, LSTM, and FC layers. Stride and padding for the n-th convolutions in the image and tactile branches are (1, 1).

TABLE I: Network Design

IV-D Network Design

The architecture of our network model is composed of two CNNs with a gate, followed by an LSTM and FC layers to perform DGML, as shown in Fig. 2 and described in Section III. We use Chainer [chainer_learningsys2015, chainermn_mlsys2017, ChainerCV2017] as the deep learning library for the implementation. More details on the network parameters are given in Table I. All our network experiments were conducted on a machine equipped with 256 GB RAM, an Intel Xeon E5-2667v4 CPU, and eight Tesla P100-PCIe GPUs with 12 GB of memory each, resulting in about 24 to 48 hours of training time.
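For reference, the image branch of Table I could be written in Chainer roughly as follows. The paper's input resolution did not survive extraction, so the 80x112 dummy input below is only a hypothetical size that happens to reproduce the 1120-dimensional flattened feature of Table I (32 channels x 5 x 7 after the 16x spatial reduction):

```python
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L


class ImageBranch(chainer.Chain):
    """Image CNN of Table I: three 3x3 convs (stride 1, pad 1)
    interleaved with average pooling of sizes 4, 2, and 2."""

    def __init__(self):
        super().__init__()
        with self.init_scope():
            self.c1 = L.Convolution2D(1, 32, ksize=3, stride=1, pad=1)
            self.c2 = L.Convolution2D(32, 32, ksize=3, stride=1, pad=1)
            self.c3 = L.Convolution2D(32, 32, ksize=3, stride=1, pad=1)

    def __call__(self, x):
        h = F.average_pooling_2d(F.relu(self.c1(x)), 4)
        h = F.average_pooling_2d(F.relu(self.c2(h)), 2)
        h = F.average_pooling_2d(F.relu(self.c3(h)), 2)
        return F.reshape(h, (h.shape[0], -1))  # flatten per sample


# Hypothetical input size: (batch, channel, height, width)
x = np.zeros((1, 1, 80, 112), dtype=np.float32)
print(ImageBranch()(x).shape)  # -> (1, 1120), matching Table I
```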

V Results

V-A Learning Curve of DGML

For comparison with DGML, we prepared several networks: one that uses only images, one that uses only tactile data, and one that uses both images and tactile data through a simple connection without a gate. Fig. 5 shows their learning curves and the gate values of DGML. Regarding the values of α and β, learning from images progresses at first, but the reliability of tactile information gradually increases. As one of the characteristics of DGML, the number of epochs required for convergence is the smallest among the models, since training progresses from easy-to-train modals.

Fig. 5: Learning curves and gate values of DGML. Note that the values of α and β are averages over the training data in each epoch.
Fig. 6: Histogram of image reliability values

V-B Gate Values of Objects

We visualize the reliability of the image modal by creating a histogram of all the values of α inferred over the untrained motions (see Fig. 6). The average value of α being smaller than 0.5 indicates that the network relies more on the tactile modality. In this experiment, since occlusion by the robot gripper itself and the object itself always occurs, the reliability of the image becomes low.

In the histogram, the image of each object is displayed at the value of α that appeared most often for it (see Fig. 6). Each object image in the histogram is scaled consistently with the actual object size. Since objects such as the tapes, wrench, and cups produce clear tactile changes when they are grasped, the network relies more on tactile information for these objects. In contrast, image information is reliable for objects without surface irregularities, whose features are difficult to observe with the tactile sensor. Even if an object has no features on its surface, the reliability of the image is low when the object is large and causes occlusion in the image. For example, the coffee plastic bottle and the orange triangular object are small, so their movement can be observed from the image. In contrast, large objects such as the mustard and green plastic bottles cause occlusion, which reduces the reliability of the image information, so the network uses more tactile information. Therefore, we can say that the proposed method effectively determines the reliability of each modal according to the object size and shape.
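A sketch of how such a histogram might be produced (the values below are a synthetic stand-in; in practice `alphas` would be the gate outputs collected over all untrained motions):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for the inferred image-reliability values over all
# untrained motions; real values would come from the trained gate.
rng = np.random.default_rng(0)
alphas = np.clip(rng.normal(0.3, 0.1, size=5000), 0.0, 1.0)

plt.hist(alphas, bins=50, range=(0.0, 1.0))
plt.axvline(alphas.mean(), linestyle="--",
            label=f"mean alpha = {alphas.mean():.2f}")
plt.xlabel("image reliability value alpha")
plt.ylabel("count")
plt.legend()
plt.show()
```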

V-C Inference Result of Object Pose

Table II shows the object pose inference errors for in-hand manipulation. The evaluation conditions include the case without noise and the absence of either the image or the tactile input. Under these conditions, we compared four models: a model using only images, a model using only tactile data, a model using image and tactile data without a gate, and the proposed model. From the results without noise, tactile information should be included in the models for accurate in-hand pose estimation; under this condition, there is no difference in performance among the models that include the tactile sensor. The networks predict correctly not only for known objects but also for unknown objects.

From the results with one of the modals absent in Table II, the performance of the proposed DGML is the best. This is because if one of the modals is absent, the gate can decide that the reliability of that modal should be reduced, and the modal is then almost ignored (see Table III). Even though the training dataset does not include data in which one of the modals is absent, the gate of DGML can deal with these situations.

Cond.         Model      Known obj.          Unknown obj.
                         Trans.    Rot.      Trans.    Rot.
w/o noise     Image      2.10      1.45      3.25      5.47
              Tactile    1.43      1.09      2.04
              w/o gate   1.01      1.70
              DGML       1.11      1.63
w/o image     w/o gate   3.00      8.13      3.01      9.00
              DGML       1.04      2.35      1.39      3.33
w/o tactile   w/o gate   5.17      5.02
              DGML       3.49      3.87

TABLE II: Inference error of in-hand object pose estimation
Cond.           Known obj.          Unknown obj.
                α         β         α         β
w/o noise       0.261     0.739     0.272     0.728
w/o image       0.069     0.931     0.072     0.928
w/o tactile     0.994     0.006     0.994     0.006

TABLE III: The average values of α and β

V-D Gate Values under Different Noise

In this section, we discuss the change of reliability when noise is applied to the input of a modal. Fig. 7 shows the reliability values when the network infers from the dataset with noise added to the tactile input. The noise is generated from normal distributions with various variances. From Fig. 7, it can be seen that the reliability of the image increases as the noise on the tactile input increases. The gate correctly recognizes the noise in the tactile input signal. A strength of DGML is that it helps us understand the network through the values of the gate, because the gate represents which modal should be used and to what extent. Furthermore, if a sensor is broken, its reliability value becomes close to 0, so DGML can judge sensor failure.

Fig. 7: Histogram of image reliability values with different noise magnitudes applied to the tactile input
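A sketch of the noise injection used for this kind of evaluation (our own illustration; the paper's exact variances were not recoverable):

```python
import numpy as np


def add_tactile_noise(tactile, sigma, rng=None):
    """Corrupt tactile input frames (assumed normalized to [0, 1])
    with zero-mean Gaussian noise of standard deviation sigma."""
    rng = rng or np.random.default_rng()
    noisy = tactile + rng.normal(0.0, sigma, tactile.shape)
    return np.clip(noisy, 0.0, 1.0).astype(np.float32)


# Sweeping sigma and re-running inference lets one plot how the
# inferred alpha shifts toward the image modality as tactile noise grows.
```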

VI Conclusion

In this paper, we proposed a method called deep gated multi-modal learning (DGML) that estimates the in-hand object pose using tactile and image data together with the modals' reliability. The proposed method can estimate the pose not only of known objects but also of unknown objects without a 3D model. Moreover, the modals' reliability can be changed by the network itself depending on the situation, such as sensor failure or different noise levels. Visualizing the reliability helps us understand the network's behavior, such as which modal is used and to what extent. Furthermore, the proposed method improves the computational efficiency of training.

For future work, we will develop an in-hand object manipulation system that integrates the object pose estimator.

References
