Improved GelSight Tactile Sensor for Measuring Geometry and Slip

Siyuan Dong, Wenzhen Yuan, and Edward H. Adelson

Department of Electrical Engineering and Computer Science, and Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT, Cambridge, MA 02139, USA. sydong@mit.edu
Department of Mechanical Engineering, and CSAIL, MIT, Cambridge, MA 02139, USA. yuan_wz@csail.mit.edu
Department of Brain and Cognitive Sciences, and CSAIL, MIT, Cambridge, MA 02139, USA. adelson@csail.mit.edu
Abstract

A GelSight sensor uses an elastomeric slab covered with a reflective membrane to measure tactile signals. It measures 3D geometry and contact force information with high spatial resolution, and has successfully supported many challenging robot tasks. A previous sensor [1], based on a semi-specular membrane, produces high resolution but limited geometric accuracy. In this paper, we describe a new design of GelSight for robot grippers, using a Lambertian membrane and a new illumination system, which gives greatly improved geometric accuracy while retaining the compact size. We demonstrate its use in measuring surface normals and reconstructing height maps using photometric stereo. We also use it for the task of slip detection, combining information about relative motion on the membrane surface with the shear distortions. Using a robotic arm and a set of 37 everyday objects with varied properties, we find that the sensor can detect translational and rotational slip in general cases, and can be used to improve the stability of the grasp.

I Introduction

Tactile sensing is an important way for robots to sense and interact with the environment. With a tactile sensor in its hand, a robot can know whether it is holding an object and whether the gripping force is appropriate. Examples of tactile sensors designed over the past decades can be found in [2, 3, 4].

Among the many robotic tasks that require the assistance of tactile sensing, the most important is to detect whether the robot has safely grasped an object. Slip, a common grasp failure, occurs when the gripping force is not large enough. The warning signals of a slipping object, such as the stretch of the fingertip skin and the subtle vibration of the sliding object, are easily perceived by humans. For a long time, researchers have been trying to develop tactile sensors capable of detecting slip [5]. Tactile sensors with this capability measure various tactile signals, including contact force, vibration, acceleration, and stretch of the sensor surface. Recently, Su et al. [6] and Ajoudani et al. [7] demonstrated grasp control systems with a slip-detection function enabled by vibration measurement, which allow the robot to adjust the gripping force according to the detected slip condition and consequently execute a more stable grasp. However, engineering a slip-detection device robust to the weight and geometry of the object remains a challenging problem.

Fig. 1: (a) A WSG-50 parallel gripper with the new GelSight sensor gripping a chess bishop. (b) Image captured by the GelSight sensor during the grasp. (c) The 3D geometry reconstructed from (b).

An optical tactile sensor, GelSight [8, 9], was introduced to obtain a high-resolution tactile image of the 3D topography of the contact surface. The Fingertip GelSight, a scaled-down version of GelSight, was developed to be mounted on a robot gripper [1]. It provides high spatial resolution (640×480 pixels, 0.024 mm/pixel) and has successfully assisted many challenging robotic tasks. Despite these successes, however, the Fingertip GelSight sensor faces several challenges, such as the unsatisfactory precision of its surface normal measurement and its fabrication difficulty.

To address these issues, we propose a new design for the Fingertip GelSight sensor in this paper. The new sensor redesigns the illumination system to make the light more uniform and suitable for a Lambertian reflective surface, thereby improving the quality of the reconstructed 3D height map of the contact surface. Moreover, the entire frame of the new sensor can be easily printed by 3D printers, and the illumination system of the new design can be easily assembled. The detailed fabrication process is provided in the sensor design section, and the Solidworks files of the sensor frame are available online. Consequently, the new sensor, with a highly simplified and standardized fabrication process, is much easier to reproduce.

We test the new sensor on the task of measuring slip during grasping, and use the measurement to enhance the grasp. Our methods are based on the fundamental definition of slip – the relative displacement between the finger and the object – and on the general way humans detect slip – by measuring the skin stretch of their soft fingers. Both cues are universal across objects, so the methods can be applied generally, without requiring prior knowledge of the objects' geometries, weights, materials, or surface roughness. Measuring such general slip cases will help robots better monitor the state of a grasp, and increase the success rate of grasping unknown or more complicated objects. We believe the method will make intelligent robots more adaptive to real environments.

This paper is structured as follows: in Section II, we review related work on optical tactile sensors and general methods for detecting slip. In Sections III and IV, we describe the design of the new sensor and quantitatively evaluate its geometry measurement performance. In Sections V and VI, we introduce the slip-detection methods we use and demonstrate the capability of the new sensor to detect slip. Finally, we summarize the contribution of this paper and discuss potential applications.

II Related Work

II-A Optical Tactile Sensors

Optical tactile sensors convert signals of the contact deformation into images, and thus achieve high spatial resolution and high sensitivity to the contact force. They capture the deformation of the contact surface with a camera and infer the shape of the detected object, the shear force, the torque, and other important information. In 1993, Jiar et al. [14] built a compact tactile sensing prototype for robotic manipulation, which was able to reconstruct the 2D shape of the detected object and roughly estimate the gripping force by capturing binary images with a CCD camera. Another finger-shaped tactile sensor using an optical waveguide, designed by Maekawa et al. [15], was used to detect the contact point and the surface normal. Later on, Ohka et al. [16] proposed a tactile sensor that used a camera to record the contact area of a rubber sheet pressed against an array of pyramidal projections; the three-dimensional force as well as the stiffness was reconstructed. In 2000, a compact tactile sensor developed by Ferrier and Brockett [17] successfully reconstructed the coarse 3D shape of the detected object: by placing markers on the membrane and recording the flow field of the marker movement, they sketched the 3D map of the deformation. The idea of adding textures such as arrays of dots or grids on the contact surface of a tactile sensor was also implemented in [18, 19, 20, 21] to encode edge information and to reconstruct surface traction fields and force vector fields. However, this approach cannot reconstruct 3D maps with high spatial resolution, since the texture on the surface is large and sparse; the achievable spatial resolution is limited by the texture, which is much coarser than the resolution of the imaging system.

II-B GelSight Sensor

The GelSight sensor is also an optical tactile sensor, and is mainly designed to measure the geometry of the contact surface with high precision [8, 9]. The GelSight sensor consists of three components: (1) a soft silicone gel that conforms to the shape of the detected object, (2) color LEDs that illuminate the deformed membrane, and (3) a camera that captures images. The LEDs of three colors illuminate the gel from different angles. Since each surface normal corresponds to a unique color, the color image captured by the camera can be directly used to reconstruct the depth map of the contact surface by looking up a calibrated color–surface-normal table.
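As a concrete illustration of this pipeline, the sketch below (our own minimal example, not the authors' released code) maps each pixel's color to a surface gradient through a quantized lookup table – gradients are equivalent to surface normals and more convenient for integration – and then integrates the gradient field into a height map with a Fourier-domain Poisson solver, a standard fast-integration step in photometric stereo. The table layout and the quantization level BINS are assumptions for illustration.

import numpy as np

# Hypothetical table layout: BINS**3 rows, each holding (gx, gy) for one
# quantized (R, G, B) color. BINS = 64 is an illustrative choice.
BINS = 64

def lookup_gradients(rgb, table):
    # Quantize each 8-bit channel and index the color -> gradient table.
    q = (rgb.astype(np.int32) * BINS) // 256
    idx = (q[..., 0] * BINS + q[..., 1]) * BINS + q[..., 2]
    grad = table[idx]                    # shape (H, W, 2)
    return grad[..., 0], grad[..., 1]

def integrate_height(gx, gy):
    # Least-squares integration of the gradient field in the Fourier
    # domain (Poisson solver): solve laplacian(z) = d(gx)/dx + d(gy)/dy.
    h, w = gx.shape
    u = np.fft.fftfreq(w)[None, :]
    v = np.fft.fftfreq(h)[:, None]
    denom = -4.0 * np.pi ** 2 * (u ** 2 + v ** 2)
    denom[0, 0] = 1.0                    # avoid division by zero at DC
    num = 2j * np.pi * (u * np.fft.fft2(gx) + v * np.fft.fft2(gy))
    z = np.real(np.fft.ifft2(num / denom))
    return z - z.min()

In the actual sensor the table is measured rather than assumed, using the calibration procedure described at the end of Section III.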

The GelSight tactile sensor has already been successfully utilized in various tasks. Li et al. [10] used the GelSight sensor to recognize 40 classes of surface textures. Jia et al. [22] demonstrated that the GelSight sensor outperforms humans in detecting lumps in soft media, indicating a possible application of the sensor to breast cancer diagnosis. To apply the GelSight sensor to robots, Li et al. [1] designed a fingertip GelSight tactile sensor that greatly reduced the volume of the GelSight sensor; the sensor was mounted on a Baxter robot hand and completed a USB insertion task. Yuan et al. [13] further improved the sensor by adding markers on the gel surface. By analyzing the marker motion, the GelSight sensor can sense normal, shear, and torsional loads on the contact surface, and even detect incipient slip. The GelSight sensor was also used to estimate the hardness of contact objects based on the analysis of gel deformation and marker displacement [11, 12].

However, there are two main problems with the GelSight fingertip sensor designed by Li et al. [1] (Li's sensor). First, the sensor can only reconstruct a coarse 3D map of the contact surface. It used acrylic optical guides to illuminate the gel surface, resulting in non-uniform illumination on the contact surface, which caused errors in the estimated surface normals. In addition, a semi-specular reflective surface was required in Li's sensor to amplify the reflection, but the photometric stereo algorithm employed to estimate the 3D height map does not work well on semi-specular surfaces. Second, the fabrication process of this sensor is complicated: every component must be adjusted accurately by hand to ensure that the sensor works properly. The arduous fabrication severely restricted its application.

II-C Slip Detection

Slip detection plays a vital role in performing a successful grasp or manipulation. People have long been trying to develop tactile sensors that sense slip and secure the grasp by detecting physical signals related to slip, such as the ratio of shear force to normal force, vibration, and acceleration. A recent review of different slip-detection technologies is given in [5]. Howe and Cutkosky [23] proposed that sensing object acceleration in the hand is the core of detecting slip or incipient slip, and built a tactile sensor prototype with an accelerometer under the soft rubber surface to detect the slip or incipient-slip state of objects. Holweg et al. [24] observed that in the incipient slip stage there are fluctuations of the contact force between the object and the soft sensor surface, and built a rubber tactile sensor that predicts slip by analyzing the frequency spectrum of the normal contact force. Another example is Melchiorri's work [25], which measured the normal and shear components of the contact force using a force sensor, and compared their ratio to the frictional coefficient of the surface to predict slip. Su et al. [6] used a silicone pressure sensor to detect slip by monitoring sudden changes in the tangential force and vibrations in the normal pressure. Ajoudani et al. [7] built a grasp control system with a slip-detection feature that uses the ratio of the shear force to the normal force as an indicator of the likelihood of slip.

Yuan et al. [13] proposed that the GelSight sensor could detect slip by analyzing the sensor's contact condition. Their experiments demonstrated that slip starts from the peripheral area of the contact surface, where the sensor surface has a relatively smaller displacement. The difference in sensor surface displacement was measured by tracking the motion of the black markers on the sensor. However, they did not perform real-time robot grasping experiments, and only tested objects with flat surfaces or little surface texture, which guaranteed that the contact surface was large enough to detect the motion of the markers.

III Design and Fabrication of the New GelSight Sensor

To improve the accuracy of the 3D height map reconstruction and simplify the fabrication process, we propose a new design of the GelSight tactile sensor that uses a Lambertian surface instead of a semi-specular one, optimizes the illumination uniformity over the sensing surface, and adopts a more standardized, 3D-printable frame.

We approach the new design from the following aspects. First, we choose LEDs (Osram Opto Semiconductor standard SMD LEDs) with a small collimating lens in front. The lens has a 30° viewing angle and efficiently collects and collimates the light emitted from the LED. The LEDs are tightly arranged in an array, as shown in Figure 2(c2). We use three LED arrays in three different colors: red, green, and blue.

Second, we design a hexagonal plastic tray, as shown in Figure 2(c1). This semitransparent tray is produced by a 3D printer (Formlabs Form 2) with clear resin. The LED arrays are mounted on every other side of the tray with super glue, as shown in the top view in Figure 2(b). The mounts of the LED arrays, as noted in Figure 2(b), are tilted 71° to the sensor surface so as to illuminate the whole sensing surface. Because of the rotational symmetry, the R, G, and B illuminations near the center of the sensing surface are of equal intensity. The large tilt angle of the illumination also produces a large variance in the reflection of the sensor surface with respect to different surface normals, favoring a more discriminable surface normal measurement. The semitransparent tray also homogenizes the light of the LED arrays while allowing high transmission.

Owing to the bright and uniform illumination achieved in the new design, we can replace the semi-specular surface with a Lambertian one. The marker-labeled silicone gel is coated with a thin membrane of aluminum powder mixed into a silicone base, which ensures Lambertian reflection. Compared to a gel with a semi-specular coating, the silicone gel used here is much easier to make, and the coating material is non-toxic. We design the reflective surface to be dome-shaped, which outperforms a flat reflective surface in terms of reflection uniformity. The dome-shaped gel surface also makes robotic applications easier.

Fig. 2: New design of the GelSight tactile sensor. (a) The new GelSight sensor. (b) Schematics of the design. (c1)-(c5) The components of the new sensor: 3D-printed sensor frame, LED array, Logitech C310 webcam without cover, the assembled prototype, and the 3D-printed sensor cover.

For the gel support, a transparent hexagonal acrylic window is laser-cut and inserted as a supporting plate. We fill the hollow tray with the same silicone material to match the refractive index of the silicone gel used for sensing. This eliminates reflection at the interface with the silicone gel, which would otherwise attenuate light and introduce imaging artifacts.

For the imaging part, a camera (Logitech C310, Figure 2(c3)) is mounted on top of the hexagonal tray. The camera is 35 mm away from the sensing surface, allowing a large field of view. Figure 2(c4) shows the prototype assembled according to the design described above.

To increase the durability of the sensor, we further design a plastic protective cover (Figure 2(c5)) for the prototype. The cover is produced by the same 3D printer with a tough material to ensure its robustness. In addition, the handle of the sensor frame is designed to fit the WSG parallel gripper, which makes the sensor easy to use in many robotic tasks.

The new design, as shown in Figure 2(a), has a compact structure. The frame of the sensor is 3D printable, and the illumination parts can be easily glued to the mounts on the surface of the frame. The procedure is standardized, and the sensor can be manufactured without any special skills. The Solidworks files for the sensor frame and cover can be downloaded from http://people.csail.mit.edu/yuan_wz/GelSightData/GelSight_part_172.zip.

Fig. 3: GelSight images of a ball array (a), a human finger (b), a watch chain (c), and a quarter (d), with the corresponding depth images reconstructed by the new sensor (lower row) and Li's sensor (upper row).

Before using the sensor to reconstruct the 3D height map of the contact surface, we calibrate a lookup table that maps R, G, B values to surface normals. The calibration is performed by pressing a ball with a diameter of 3.96 mm against the surface of the elastomer gel at an arbitrary position. The color changes induced by the deformed gel surface are recorded by the camera, and the surface normal at each pixel in the contact area can be calculated from the known diameter of the ball. A lookup table mapping R, G, B values to surface normals for that specific area is then generated automatically. To eliminate the noise from the spatial variance of the illumination, this process is repeated at different positions on the surface, and the final lookup table is averaged over all tables.
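The geometric part of this calibration can be made concrete with a short sketch. Since the ball radius is known, every pixel inside the detected contact circle has an analytic surface gradient; pairing these gradients with the observed colors fills the lookup table. The sketch below is our own illustration, not the authors' code; the millimeter-per-pixel scale and the contact-circle detection are assumed inputs.

import numpy as np

BALL_RADIUS_MM = 3.96 / 2      # calibration ball radius, from the text
PIXEL_MM = 0.024               # assumed mm-per-pixel scale (Section I)

def ground_truth_gradients(shape, cx, cy, contact_radius_px):
    # Analytic gradients (dz/dx, dz/dy) of a spherical cap centered at
    # pixel (cx, cy); valid only inside the detected contact circle.
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx = (xs - cx) * PIXEL_MM
    dy = (ys - cy) * PIXEL_MM
    r2 = dx ** 2 + dy ** 2
    inside = r2 < (contact_radius_px * PIXEL_MM) ** 2
    z = np.sqrt(np.maximum(BALL_RADIUS_MM ** 2 - r2, 1e-9))
    gx = np.where(inside, -dx / z, 0.0)   # dz/dx of z = sqrt(R^2 - r^2)
    gy = np.where(inside, -dy / z, 0.0)
    return gx, gy, inside

Each press contributes the (color, gradient) pairs inside its contact circle; averaging the tables from presses at many positions suppresses the spatial variance of the illumination, as described above.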

IV Evaluation of Geometry Measurement with GelSight

To evaluate the performance of the new sensor, we compare Li’s sensor and the new sensor from four aspects: gradient versus color change, mapping accuracy of the lookup table, spatial illumination variance, and quality of 3D shape reconstruction.

IV-A Gradient vs. Color Change

Since we use color values to infer surface normals, a preferable design should provide a one-to-one mapping between surface normals and color values. We calibrated Li's sensor and the new sensor and chose a pair of calibration images for quantitative analysis. The image from Li's sensor, shown in Figure 4(a), features a rapid color change from the edge of the ball to the center. We inspect the color change against the surface normal along the radial direction, as denoted by the red arrow in Figure 4(a). The color change as a function of the surface normal, shown in Figure 4(b), is approximately linear when the surface normal pitch angle varies from 5 to 25 degrees. Above 30 degrees, the color change decreases with increasing pitch angle, meaning that a single color value maps to two or more very different surface normals. This ambiguity prohibits an effective inference from color values to surface normals. We perform the same measurement for the new sensor. As shown in Figure 4(c), the color varies smoothly with the surface normal, and the color change is a linear function of the surface normal from 5 to 60 degrees. We did not measure surface normals larger than 60 degrees because the stiffness of the gel prevented perfect contact. This study shows that the new sensor provides an improved linear mapping between surface normals and color change.

Fig. 4: Comparison of the color change over the surface normal pitch angle. We take the example of a calibration sphere pressed on the sensors' surfaces, and compare the change of the GelSight image color over areas of different surface normals, as shown in the red areas in the left figures. The plots show that for Li's sensor the color change is larger over the contact area, but when the surface normal pitch reaches around 30 degrees the color can no longer represent the slope change well; for the new sensor the color change is less pronounced, but the linearity is better and holds over a larger range of slope angles.

IV-B Mapping Accuracy of the Lookup Table

To evaluate the accuracy of the surface normal measurement, we compared the estimated values with the ground truth for a standard ball. Since both the pitch angle and the yaw angle are needed to identify a surface normal, we quantify the two parameters for Li's sensor and the new sensor. Figure 5 shows the measured surface normal pitch angles vs. the ground truth and the measured yaw angles vs. the ground truth for Li's sensor [(a), (b)] and the new sensor [(c), (d)]. The dense blue dots represent the data in the lookup table, while the red line shows the case of 100% accuracy. In Figure 5(a), the blue dots lie on the red line when the pitch angle varies from 5 to 20 degrees, but scatter above and below the red line as the pitch angle grows beyond 20 degrees. The R² is estimated at 0.557, which implies that Li's sensor cannot retrieve accurate surface normal pitch angles in that range. In contrast, as shown in Figure 5(c), all the data are distributed around the red line, and the R² improves to 0.818. The new sensor thus outperforms Li's sensor in retrieving pitch angles. For the surface normal yaw angle, both Li's sensor and the new sensor achieve very high R² values, but the new sensor still scores higher, revealing a stronger capability of reconstructing proper yaw angles.
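For reference, the R² values quoted here and in the next subsection can be computed as the standard coefficient of determination between the measured and ground-truth angles; the snippet below is a hedged sketch of that metric, not the authors' exact evaluation code.

import numpy as np

def r_squared(measured, truth):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    ss_res = np.sum((measured - truth) ** 2)
    ss_tot = np.sum((truth - truth.mean()) ** 2)
    return 1.0 - ss_res / ss_tot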

Fig. 5: Comparison of the measured surface normal angles and the ground truth. The plots are based on the images of Li's sensor and the new sensor shown in Figure 4, when a small sphere is pressed against the sensor surface. We compare the pitch and yaw angles of the pixels within the contact area, and find that the new sensor measures the surface normal more precisely.

IV-C Spatial Difference of the Illumination

Non-uniform illumination results in variance of the surface normal measurements. To quantify the measurement error from this factor, we calculated the R² of the surface normal pitch angle and yaw angle at different positions on the sensor surface. The probability distribution of R² for the pitch angle is plotted in Fig. 6(a). For the new sensor, the peak of the distribution, corresponding to the majority of the R² values, is larger than the best value obtained with Li's sensor; moreover, the lowest R² of the new sensor is almost equal to the highest of Li's sensor. Li's sensor also shows a long tail in the distribution, implying inaccurate surface normal predictions at certain positions. This tail at low R² values is absent for the new sensor. The high average R² of the new sensor enables a precise measurement of surface normals, and the sharp cut-off of its distribution confirms that there is no blind spot on the sensing surface. We can draw a similar conclusion from the R² distribution of the yaw angles in Fig. 6(b). The dedicated design of the illumination in the new sensor thus enables a more precise measurement of surface normals.

Fig. 6: Probability distribution of the R² of the measured surface normals' (a) pitch angle and (b) yaw angle over all the calibration images, on both Li's sensor and the new sensor. There are 119 images for Li's sensor and 94 images for the new sensor, with the calibration sphere pressed at arbitrary locations on the sensors.

IV-D Quality of 3D Shape Reconstruction

After the quantitative analysis above, we directly compare the reconstructed 3D images of miscellaneous objects in Figure 3: a ball array (a), a human finger (b), a watch chain (c), and a quarter coin (d). The depth maps recovered by the new sensor (lower row) are smooth, without abrupt changes of the surface normal, while the surfaces recovered by Li's sensor (upper row) are grainy, with a low signal-to-noise ratio. For example, the depth map of the ball array recovered by the new sensor preserves the spherical shape and smooth surface of each ball, whereas with Li's sensor the balls are deformed into cones with rough surfaces; because of the non-uniform illumination, the depth maps of the four balls on the left and right sides show degraded quality compared with the ones in the center. Objects with fine features reveal the higher resolution of the new sensor. In Figure 3(d), with the new sensor (lower row), the feathers of the eagle on the quarter are clear and separable, while in the image acquired by Li's sensor the feathers are buried in noise. Similar observations can be made for the human finger and the watch chain. The comparison of the 3D maps confirms the improvement of the new sensor from an intuitive perspective.

V Slip Detection with the New GelSight Sensor

We use the new GelSight sensor on a robot gripper for grasping tasks, and try to predict slip or incipient slip during the grasp. The GelSight sensor detects slip from three major cues: the relative displacement between the object and the sensor surface, the shear displacement distribution of the markers on the sensor surface, and the change in the contact area. Both translational and rotational slip are considered.

For objects with obvious textures, GelSight can precisely localize the texture and track the object's motion from the texture motion, while the movement of the sensor surface is indicated by the black markers on the surface. A relative movement between the object texture and the markers, whether translational or rotational, indicates the occurrence of slip. For objects with large curvature and smooth surfaces, like a Coke can, GelSight can detect the object's shape but not its precise movement, because the geometry of the contact surface changes little during slip. For these objects, we use the method described in [13]: slip starts from the peripheral contact area, which makes the sensor surface in the peripheral area have less displacement than the central area; the difference can be inferred by comparing the motions of different markers. In any case, if there is a significant decrease in the contact area, we know a severe slip has occurred. To make the slip detection efficient enough for real-time robot grasping tasks, we simplified the algorithm according to the experimental data.

Fig. 7: Translational and rotational slip detection based on geometry and markers, when the object has an obvious texture. We crop a patch in the GelSight image, compute the translation/rotation of both the color patterns and the markers in the patch, and compare their differences. We consider slip to occur when the differences are large. The plots compare the relative translation/rotation of the color texture and the markers when slip occurs and when it does not.

Slip detection: measuring the relative displacement between object texture and markers. We calculate the translation and rotation of the markers and of the object texture between two adjacent frames, and compare the motion differences between the markers and the texture. If the difference exceeds a threshold, or the accumulated relative translation or rotation is large, we consider that slip is happening.

In practice, we select a window on the frame, centered at the pixel with the largest change in color intensity, which corresponds to the contact area with the largest curvature. This window usually lies in the middle of the contact area if the object has a textured surface; for objects with larger surface curvature, like a pen, it typically lies in the border area. To track the texture motion, we use a gray-scale image of the window based on the color intensity change. Figure 7 shows an example of the relative translation and rotation between the object and the markers.
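A minimal sketch of this cue is given below (our own illustration, under the assumption that both the markers and a set of texture feature points are already tracked between frames; the thresholds are placeholders, not values from the paper). It estimates a rigid 2D motion for each point set with the closed-form Procrustes solution and flags slip when the two motions disagree.

import numpy as np

def rigid_motion_2d(p0, p1):
    # Least-squares 2D rotation angle and translation mapping the point
    # set p0 (N x 2) onto p1 (N x 2): the closed-form Procrustes solution.
    c0, c1 = p0.mean(axis=0), p1.mean(axis=0)
    H = (p0 - c0).T @ (p1 - c1)
    theta = np.arctan2(H[0, 1] - H[1, 0], H[0, 0] + H[1, 1])
    return theta, c1 - c0

def texture_marker_slip(tex0, tex1, mk0, mk1,
                        trans_thresh_px=1.0, rot_thresh_rad=0.02):
    # Slip is flagged when the texture motion and the marker motion
    # between two adjacent frames disagree in translation or rotation.
    th_tex, t_tex = rigid_motion_2d(tex0, tex1)
    th_mk, t_mk = rigid_motion_2d(mk0, mk1)
    rel_trans = np.linalg.norm(t_tex - t_mk)
    rel_rot = abs(th_tex - th_mk)
    return rel_trans > trans_thresh_px or rel_rot > rot_thresh_rad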

Fig. 8: The motion of the GelSight markers when the gripper lifts (a) a plastic cylindrical bottle and (b) a wooden cubic block; slip occurs in (b), where the markers in the contact area show more varied motion. The backgrounds of the pictures are the color intensity change of the GelSight images. To detect slip through marker motion, we compare the moving distance of the 'approximate peripheral-area markers' (green arrows) with the maximum marker motion (red arrows); when their ratio is lower than a threshold, we predict that slip occurs.

Slip detection: tracking the marker motion distribution in the contact area. Over all the motion vectors of the markers, we measure the maximum moving distance d_max, which usually comes from markers in the middle of the contact area. We also measure the moving distance d_peri of the markers in the peripheral area, and calculate the ratio

    r = 1 - d_peri / d_max    (1)

If r is larger than a threshold (we selected 0.8 in the experiments), we consider that slip is occurring or about to occur, and the robot should execute a protection procedure.

To identify markers located in the peripheral area, we compute the change in the average pixel intensity of the area near each marker and choose the ones with the largest intensity changes, which correspond to the largest gradients of the surface geometry (see Section IV-A and Figure 4) and thus match the border of the contact area. We select 8 such markers (possibly with repetition), whose surrounding areas have the largest color change across the different color channels, and pick the motion vector with the largest norm (d_peri) among these 8 markers to ensure a safe selection. Figure 8 shows two examples of how the markers are chosen.
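Under the same assumptions as the previous sketch (tracked marker motion vectors, plus a per-marker intensity-change score used to find the border of the contact area), the ratio test of Eq. (1) can be written as follows; the 0.8 threshold is the value used in our experiments, the rest is illustrative.

import numpy as np

def marker_ratio_slip(motions, intensity_change, threshold=0.8):
    # motions: (N, 2) marker motion vectors between two frames.
    # intensity_change: (N,) color-intensity change near each marker,
    # a proxy for how close the marker is to the contact border.
    norms = np.linalg.norm(motions, axis=1)
    d_max = norms.max()
    peri = np.argsort(intensity_change)[-8:]   # 8 border-most markers
    d_peri = norms[peri].max()                 # safe (largest) choice
    r = 1.0 - d_peri / max(d_max, 1e-9)        # Eq. (1)
    return r > threshold                       # True -> slip predicted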

This method also works, though less reliably, for detecting rotational slip. When there is torque on the contact surface, an appropriate indicator is to compare the rotational angles of all the markers: in the incipient-slip or slip stage, the markers in the peripheral area rotate by smaller angles than the ones in the center. However, computing the center of the rotational flow is time-consuming, so we only use the norms of the motion vectors of the different markers to detect rotational slip.

Contact detection. We intend to keep the pressing force of the robot gripper small and controllable. Additionally, during the lifting process, a decrease in the contact area indicates that a severe slip is occurring. We detect contact with a simplified method that compares the color intensity of the current image with that of an image captured when nothing contacts the sensor surface. For objects with a smooth and flat surface, we also calculate the overall motion of the markers to track changes in the normal force [13].
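The contact check thus reduces to a frame difference against a no-contact reference, roughly as sketched below (the threshold value and units are placeholders, not from the paper).

import numpy as np

CONTACT_THRESH = 8.0   # illustrative mean-intensity-change threshold

def in_contact(frame, reference):
    # Compare the current image against a reference frame captured with
    # nothing touching the sensor surface.
    diff = np.abs(frame.astype(np.float32) - reference.astype(np.float32))
    return diff.mean() > CONTACT_THRESH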

VI Experimental Results on Robotic Slip Detection

VI-A Experimental Setup

We conduct the robot grasping experiments with a system composed of a UR5 6-DOF arm and a WSG 50 parallel gripper. The GelSight sensor is mounted on the gripper as one of its fingers, as shown in Figure 1(a). The reach of the robot arm is 850 mm, and its repeatability is ±0.1 mm. The gripper has a maximum opening width of 110 mm, reduced to 80 mm when the GelSight sensor is mounted. The minimum closing speed of the gripper is 5 mm/s.

Fig. 9: The 37 objects for the grasping experiment. They are common everyday objects with different sizes, shapes, and materials.

We perform the grasping experiments on 37 natural objects commonly seen in everyday life, as shown in Figure 9. The objects differ in size, shape, material, and surface texture. In each experiment, the gripper stops in the proximity of the target object, then slowly approaches and eventually grasps it. The GelSight signal indicates whether the finger has contacted the object and whether the force is large enough. After grasping, the robot slowly lifts the object by 3 cm and then stops in midair. During the lifting, the algorithm detects whether slip is occurring.

We also use the GelSight slip-detection feedback to achieve a safe grasp. In this experiment, if slip is detected during the lift, the robot stops, puts the object down, releases it, and then re-grasps it with a larger threshold for contact detection. The robot repeats this loop until the grasp is considered safe.

VI-B Slip Prediction

In the first experiment, we use the robot to grasp the 37 objects described in Section VI-A and Figure 9 and slowly lift them. Each object is grasped 7 to 10 times, with different contact thresholds for the grasp, which means the gripping forces differ. For most objects, we try to balance the number of cases in which the grasp succeeds and in which slip occurs; for a small portion of the objects, because they are very light or their surfaces are too smooth, only successful grasps or only slips occur.

We record the GelSight video during the grasp-and-lift process, and label "whether slip occurred" by human observation. We divide the tests into three groups according to the results: successful cases, in which there is little relative motion between the object and the gripper and the gripper grasps the object firmly; failure cases, in which significant translational or rotational slip occurred and the grasp failed; and border cases, in which the gripper barely lifted the object but held it so loosely that it would easily drop under a small external disturbance, or slip had occurred before equilibrium. For the border cases, we consider either "slip" or "no slip" an acceptable measurement, but we record the rate at which GelSight measures "no slip". The results for the three groups are shown in Table I.

                   Successful cases   Slip cases   Border cases
Sample Number            147             116            52
Correct Measure          79%             84%            60%
TABLE I: Experimental Result on Slip Prediction

A typical cause of prediction failure occurs when the object has a flat and smooth surface and the gripping force is very small: the object simply slides along the gripper surface. Because the shear force is so small, noise prevents the system from detecting the slip through marker motion analysis, while there is not enough texture to detect the slip from the texture motion. This situation causes 28% of the failures in the slip cases. Another typical case is that the marker measurement method fails when there is significant rotational slip while grasping a flat object. This situation causes another 28% of the failures in the slip cases, and it could be prevented by a more thorough measurement of the markers' rotational movement.

VI-C Grasp Control with Slip Detection

Our second experiment is to grasp objects in a closed loop with feedback from the GelSight slip detection. As in the first experiment, the robot gripper grasps and lifts the objects, but if slip is detected by GelSight, it stops and releases the object, and then re-grasps it at the same position using a higher contact threshold, i.e., a larger gripping force. We test on 33 objects from Figure 9, excluding the 4 that the robot could not lift. Each object is grasped 3 times, giving 99 grasp cases in total. We count the number of cases in which the gripper finally grasps the object stably.

The gripping force is controlled by the GelSight signal, based on the surface geometry change or the marker displacement. Each object requires a different threshold for a stable grasp, but we set the initial threshold to the same value for all objects – a very small one, indicating bare contact with the object. We increase the threshold by a factor of 1.2 on each re-grasp.
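The control loop of this experiment can be summarized by the sketch below. The robot and gripper interfaces (grasp_with_threshold, lift, put_down, and so on) are hypothetical placeholders standing in for the UR5/WSG 50 APIs; the 3 cm lift and the 1.2x threshold growth are the values used in our setup.

def grasp_until_stable(robot, gripper, sensor, initial_threshold):
    # Re-grasp loop: lift, check for slip with the GelSight sensor, and
    # retry with a 1.2x larger contact threshold until the grasp is safe.
    threshold = initial_threshold
    while True:
        gripper.grasp_with_threshold(threshold)   # close until contact exceeds threshold
        robot.lift(0.03)                          # lift the object by 3 cm
        if not sensor.slip_detected():
            return threshold                      # grasp considered safe
        robot.put_down()
        gripper.release()
        threshold *= 1.2                          # larger gripping force next try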

Out of the 99 grasp experiments, the robot successfully grasped the objects 88 times, i.e., a success rate of 89%. In each experiment, the target object was grasped 1 to 6 times, with an average of 2.3 grasps. When GelSight detected slip, a human observer could in most cases clearly see the slip occurring. In the failure cases, the gripping forces were so small that there was not enough information from the contact area for the sensor to measure slip effectively.

VII Conclusions

In this work, we designed a new Fingertip GelSight tactile sensor and demonstrated that a precise 3D shape of the contact surface can be reconstructed with it. Compared to Li's sensor [1], the new sensor features lower reconstruction errors and smaller spatial variance, and its fabrication process is standardized and easy to implement. We performed slip detection with the sensor in grasping experiments: for the 37 objects tested in this work, the new sensor can detect both translational and rotational slip during grasping without any prior knowledge of the objects. The new sensor thus supports safe manipulation, and can find various applications such as safe grasping, object recognition, and hardness estimation.

Acknowledgment

This work is supported by the Toyota Research Institute. We thank Wen Xiong, Dongying Shen, Changchen Chen, and Abhijit Biswas for revising the manuscript, and Shaoxiong Wang for helping set up the robot arm.

References

  • [1] R. Li, R. Platt, W. Yuan, A. ten Pas, N. Roscup, M. A. Srinivasan, and E. Adelson, “Localization and manipulation of small parts using gelsight tactile sensing,” in Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on.   IEEE, 2014, pp. 3988–3993.
  • [2] R. S. Dahiya, G. Metta, M. Valle, and G. Sandini, “Tactile sensing—from humans to humanoids,” IEEE Transactions on Robotics, vol. 26, no. 1, pp. 1–20, 2010.
  • [3] H. Yousef, M. Boukallel, and K. Althoefer, “Tactile sensing for dexterous in-hand manipulation in robotics–a review,” Sensors and Actuators A: physical, vol. 167, no. 2, pp. 171–187, 2011.
  • [4] Z. Kappassov, J.-A. Corrales, and V. Perdereau, “Tactile sensing in dexterous robot hands — review,” Robotics and Autonomous Systems, vol. 74, pp. 195 – 220, 2015. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0921889015001621
  • [5] M. T. Francomano, D. Accoto, and E. Guglielmelli, “Artificial sense of slip–a review,” IEEE Sensors Journal, vol. 13, no. 7, pp. 2489–2498, 2013.
  • [6] Z. Su, K. Hausman, Y. Chebotar, A. Molchanov, G. E. Loeb, G. S. Sukhatme, and S. Schaal, “Force estimation and slip detection/classification for grip control using a biomimetic tactile sensor,” in Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on.   IEEE, 2015, pp. 297–303.
  • [7] A. Ajoudani, E. Hocaoglu, A. Altobelli, M. Rossi, E. Battaglia, N. Tsagarakis, and A. Bicchi, “Reflex control of the pisa/iit softhand during object slippage,” in Robotics and Automation (ICRA), 2016 IEEE International Conference on.   IEEE, 2016, pp. 1972–1979.
  • [8] M. K. Johnson and E. Adelson, “Retrographic sensing for the measurement of surface texture and shape,” in Computer Vision and Pattern Recognition (CVPR), 2009 IEEE Conference on.   IEEE, 2009, pp. 1070–1077.
  • [9] M. K. Johnson, F. Cole, A. Raj, and E. H. Adelson, “Microgeometry capture using an elastomeric sensor,” in ACM Transactions on Graphics (TOG), vol. 30, no. 4.   ACM, 2011, p. 46.
  • [10] R. Li and E. Adelson, “Sensing and recognizing surface textures using a gelsight sensor,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 1241–1247.
  • [11] W. Yuan, M. A. Srinivasan, and E. Adelson, “Estimating object hardness with a gelsight touch sensor,” in Intelligent Robots and Systems (IROS 2016), 2016 IEEE/RSJ International Conference on.   IEEE, 2016.
  • [12] W. Yuan, C. Zhu, A. Owens, M. A. Srinivasan, and E. Adelson, “Shape-independent hardness estimation using deep learning and a gelsight tactile sensor,” in Robotics and Automation (ICRA), 2017 IEEE International Conference on.   IEEE, 2017.
  • [13] W. Yuan, R. Li, M. A. Srinivasan, and E. H. Adelson, “Measurement of shear and slip with a gelsight tactile sensor,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on.   IEEE, 2015, pp. 304–311.
  • [14] Y. Jiar, K. Lee, and G. Shi, “A high resolution and high compliance tactile sensing system for robotic manipulations,” in Intelligent Robots and Systems’ 93, IROS’93. Proceedings of the 1993 IEEE/RSJ International Conference on, vol. 2.   IEEE, 1993, pp. 1005–1009.
  • [15] H. Maekawa, K. Tanie, and K. Komoriya, “A finger-shaped tactile sensor using an optical waveguide,” in Systems, Man and Cybernetics, 1993.’Systems Engineering in the Service of Humans’, Conference Proceedings., International Conference on, vol. 5.   IEEE, 1993, pp. 403–408.
  • [16] M. Ohka, Y. Mitsuya, K. Hattori, and I. Higashioka, “Data conversion capability of optical tactile sensor featuring an array of pyramidal projections,” in Multisensor Fusion and Integration for Intelligent Systems, 1996. IEEE/SICE/RSJ International Conference on.   IEEE, 1996, pp. 573–580.
  • [17] N. J. Ferrier and R. W. Brockett, “Reconstructing the shape of a deformable membrane from image data,” The International Journal of Robotics Research, vol. 19, no. 9, pp. 795–816, 2000.
  • [18] C. Chorley, C. Melhuish, T. Pipe, and J. Rossiter, “Development of a tactile sensor based on biologically inspired edge encoding,” in Advanced Robotics, 2009. ICAR 2009. International Conference on.   IEEE, 2009, pp. 1–6.
  • [19] K. Sato, K. Kamiyama, H. Nii, N. Kawakami, and S. Tachi, “Measurement of force vector field of robotic finger using vision-based haptic sensor,” in Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on.   IEEE, 2008, pp. 488–493.
  • [20] K. Nagata, M. Ooki, and M. Kakikur, “Feature detection with an image based compliant tactile sensor,” in Intelligent Robots and Systems, 1999. IROS’99. Proceedings. 1999 IEEE/RSJ International Conference on, vol. 2.   IEEE, 1999, pp. 838–843.
  • [21] K. Kamiyama, K. Vlack, T. Mizota, H. Kajimoto, K. Kawakami, and S. Tachi, “Vision-based sensor for real-time measuring of surface traction fields,” IEEE Computer Graphics and Applications, vol. 25, no. 1, pp. 68–75, 2005.
  • [22] X. Jia, R. Li, M. A. Srinivasan, and E. H. Adelson, “Lump detection with a gelsight sensor,” in World Haptics Conference (WHC), 2013.   IEEE, 2013, pp. 175–179.
  • [23] R. D. Howe and M. R. Cutkosky, “Sensing skin acceleration for slip and texture perception,” in Robotics and Automation, 1989. Proceedings., 1989 IEEE International Conference on.   IEEE, 1989, pp. 145–150.
  • [24] E. Holweg, H. Hoeve, W. Jongkind, L. Marconi, C. Melchiorri, and C. Bonivento, “Slip detection by tactile sensors: Algorithms and experimental results,” in Robotics and Automation, 1996. Proceedings., 1996 IEEE International Conference on, vol. 4.   IEEE, 1996, pp. 3234–3239.
  • [25] C. Melchiorri, “Slip detection and control using tactile and force sensors,” IEEE/ASME transactions on mechatronics, vol. 5, no. 3, pp. 235–243, 2000.