Robot-Assisted Feeding: Generalizing Skewering Strategies across Food Items on a Realistic Plate

Ryan Feng*, Youngsun Kim*, Gilwoo Lee*, Ethan K. Gordon, Matt Schmittle, Shivaum Kumar, Tapomayukh Bhattacharjee, and Siddhartha S. Srinivasa

*These authors contributed equally to the work.

Paul G. Allen School of Computer Science & Engineering, University of Washington, 185 E Stevens Way NE, Seattle, WA, USA
Email: {rfeng, yskim, gilwoo, ekgordon, schmttle, tapo, siddh}@cs.uw.edu, kumars7@uw.edu
Abstract

A robot-assisted feeding system must successfully acquire many different food items and transfer them to a user. A key challenge is the wide variation in the physical properties of food, demanding diverse acquisition strategies that are also capable of adapting to previously unseen items. Our key insight is that items with similar physical properties will exhibit similar success rates across an action space, allowing us to generalize to previously unseen items. To better understand which acquisition strategies work best for varied food items, we collected a large, rich dataset of 2450 robot bite acquisition trials for 16 food items with varying properties. Analyzing the dataset provided insights into how the food items’ surrounding environment, fork pitch, and fork roll angles affect bite acquisition success. We then developed a bite acquisition framework that takes the image of a full plate as an input, uses RetinaNet to create bounding boxes around food items in the image, and then applies our skewering-position-action network (SPANet) to choose a target food item and a corresponding action so that the bite acquisition success rate is maximized. SPANet also uses the surrounding environment features of food items to predict action success rates. We used this framework to perform multiple experiments on uncluttered and cluttered plates with in-class and out-of-class food items. Results indicate that SPANet can successfully generalize skewering strategies to previously unseen food items.

Food Manipulation, Generalization, Robotic Feeding

1 Introduction

Eating is a vital activity of daily living (ADL), necessary for independent living at home or in a community. Losing the ability to feed oneself can be devastating to one’s sense of self-efficacy and autonomy [1]. Helping the approximately 1.0 million US adults who require assistance to eat independently [2] would improve their self-worth [3, 4]. It would also considerably reduce caregiver hours since feeding is one of a caregiver’s most time-consuming tasks [5, 6].

Based on a taxonomy of manipulation strategies developed for the feeding task [7], feeding requires the acquisition of food items from a plate or bowl and the transfer of these items to a person. This paper focuses on the bite acquisition phase of the feeding task. Bite acquisition requires the perception of specific food items on a cluttered plate and the manipulation of these deformable items for successful acquisition. However, the universe of food items is immense, so we cannot expect to detect every type of food item. Also, bite acquisition involves complex and intricate manipulation strategies for food with a variety of physical characteristics, such as varied sizes, shapes, compliance, textures, etc. Thus, acquiring a bite from a realistically cluttered plate poses a complex, challenging problem for roboticists.

Our key insight is that items with similar physical properties (e.g., shapes and sizes) will exhibit similar success rates across an action space, allowing us to generalize to unseen items based on these physical properties. Therefore, instead of classifying a food item into a specific class or recognizing its identity, we can predict success rates for a set of actions. This lets us bypass the problem of food item classification and directly address which action to perform to best acquire an item.

Our insight led us to a data-driven approach for autonomous bite acquisition that generalizes to unseen food items. To this end, we collected a large dataset of bite acquisition trials for food items with varying physical properties, using actions that we found to be effective in our previous work [8]. In this paper, we focus on both solid food items and leaves found in a salad. We developed a skewering position and action neural network (SPANet), and we used this dataset to predict success rates that generalize across multiple food items. We explicitly provide only the environmental feature using a separate environment classifier, e.g., whether an item is isolated, near a wall (i.e., edge of a plate) or another food item, or stacked on top of another food item. We found these environmental factors to be critical elements when choosing an action (see Figure 1).

Figure 1: Left: A robot acquiring a food item from a plate. Right: A plate cluttered with food items. Food items can be isolated, near a wall (i.e., the edge of a plate) or another food item, or stacked on top of other food items.

Our analysis shows that SPANet learns to predict success rates accurately for trained food classes and generalizes well to unseen food classes. Our final integrated system uses these predicted success rates to acquire various food items from cluttered and non-cluttered plates (see Figure 2).

Our contributions include:

  • A dataset of bite acquisition trials with success rates for food items with varying physical properties and in varied environments

  • An algorithm that can generalize bite acquisition strategies to unseen items

  • A framework for bite acquisition from a realistic plate with varying forms of food items and clutter

2 Related Work

2.1 Food Manipulation for Assistive Feeding

Manipulating food for assistive feeding poses unique challenges compared to manipulating it for other applications [9, 10]. Previous studies on assistive feeding focused on creating a food item taxonomy and exploring the role of the haptic modality [7], or on developing hard-coded strategies for acquiring previously seen items [11, 8]. While hard-coded strategies based on those commonly used by humans [7] achieve a good degree of success [8], a single, expert-driven strategy is given per known food item. Here, our goal is to develop methods for generalizing acquisition strategies to previously unseen food items.

Figure 2: Our framework: Using a full-plate image, RetinaNet outputs bounding boxes around food items. An environment classifier identifies items as being in one of three environments: isolated (ISO), near a wall (i.e., plate edge) or another food item (WALL), or on top of other food items (STACK). SPANet uses the bounding boxes and environment features to output the success probability for each bite acquisition action and the skewering axis for each food item.

Food manipulation for assistive feeding shares similarities with the existing literature on grasping. Determining a good grasping position resembles finding a good skewering location. One approach in the grasping literature uses learning-based methods to identify good grasping locations on 3D models and maps them onto perceived objects [12]. Others use real robot grasping trials [13] or images [14, 15]. However, these approaches generally focus on rigid objects, while our work requires tool-mediated, non-prehensile manipulation of deformable objects for which the haptic modality plays a crucial role. Importantly, the physical properties of a food item may change after a failed skewering attempt, underscoring the importance of an intelligent first action.

2.2 Food Perception

Many strides in food perception have been made, especially with respect to deep supervised learning using computer vision systems, e.g., for food detection [16, 17, 18]. Our previous work [8] uses RetinaNet, an efficient single-stage food detector that is faster than two-stage detectors while being more accurate than other single-shot algorithms. The generated food item bounding boxes are then input to our SPNet [8], which generates position and rotation masks for skewering those food items. This idea was adopted from [19], which uses independent masks from Mask R-CNN with Fully Convolutional Network (FCN) branches.

An important question for the generalization of food items concerns out-of-class classification in perception problems. Since it is not feasible to train on every possible food item, systems like ours must intelligently handle items missing from the training set without being reduced to taking random actions. This work is related to the general task of detecting out-of-class examples. One common baseline approach uses a softmax confidence threshold [20] to detect such examples. Another looks at projecting seen and unseen categories into a joint label space and relating similar classes by generalizing off of zero-shot learning methods [21]. Similarly, we aim to generalize our actions to unseen food items based on ones with similar properties.

Figure 3: Three macro actions: VS: vertical skewer (two on the left), TV: tilted skewer with vertical tines (two in the middle), TA: tilted skewer with angled tines (two on the right). Each macro action has two fork rolls (0° and 90°).

3 Bite Acquisition on a Realistic Plate

Although bite acquisition on a realistic plate should account for user preferences, bite sequencing, etc., in this work we focus on the simplified problem of maximizing the single-item bite acquisition success rate. We call a bite acquisition attempt a success if the robot picks up the target food item in that attempt. A target food item is selected such that the probability of picking it up successfully is maximized. To that end, we design an action selector that, given an RGBD image of the full plate, chooses a target and an action that maximize the success rate. While predicting the most successful action for a target food item, our algorithm also considers its surrounding environment, since the surroundings can affect bite acquisition. An action is one of the six bite acquisition actions shown in Figure 3 (VS(0), VS(90), TV(0), TV(90), TA(0), TA(90)); each defines the angle of approach and the orientation of the fork tines with respect to the food. When the fork tines are at 0 degrees, they align with the skewering axis (which aligns with the major axis of a fitted ellipse). Each action is implemented as a closed-loop controller that takes as input multimodal information (i.e., RGBD images and haptic signals) and outputs low-level control (e.g., fork-tine movement). The haptic signal, provided by a force/torque sensor attached to our fork, is used to detect whether a force threshold is met during skewering.

3.1 Our Action Space

Three macro actions are based on the angle of approach (fork pitch). For each macro action, we have two fork rolls at 0 and 90 degrees, where 0 degrees means the fork is aligned with the major (skewering) axis (see Figure 3), for a total of six actions.

Vertical Skewer (VS): The robot moves the fork over the food so the fork handle is vertical, moves straight down into the food, and moves straight up.

Tilted Vertical Skewer (TV): The robot tilts the fork handle so that the fork tines are vertical, creating a vertical force straight into the food item. The inspiration for this action comes from foods such as grapes, where a straight vertical force could prevent the grapes from rolling away easily.

Tilted Angled Skewer (TA): The robot takes an action with horizontally tilted fork tines. The inspiration for this action comes from foods such as bananas, which are very soft and require more support from the fork to be lifted [7].
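This six-action space is small enough to enumerate directly; a minimal sketch (the `Action` type is ours, following the paper's VS/TV/TA abbreviations):

```python
from dataclasses import dataclass
from itertools import product

# Three macro actions (fork pitch) crossed with two fork rolls.
PITCHES = ["VS", "TV", "TA"]   # vertical, tilted-vertical, tilted-angled
ROLLS = [0, 90]                # degrees relative to the skewering axis

@dataclass(frozen=True)
class Action:
    pitch: str  # approach angle of the fork
    roll: int   # fork-tine rotation about the skewering axis

# The full action space of six actions: VS(0), VS(90), TV(0), ...
ACTION_SPACE = [Action(p, r) for p, r in product(PITCHES, ROLLS)]
```

For symmetric items (e.g., kiwi or banana halves), the two rolls of a macro action are equivalent, which the paper exploits during data collection.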

Figure 4: Three environment features: Isolated (left), Wall (middle), and Stack (right) scenarios. Note that the wall scenario can also be triggered if the desired food item is near another food item.

3.2 Environment Features

Three environment features affect food item acquisition. They showcase the properties of the immediate surrounding environment of a food item (see Figure 4).

Isolated: The surrounding environment of the target food item is empty, and the food item is isolated. This scenario may arise on any realistic plate and becomes more common as the feeding task progresses and the plate empties.

Wall: The target food item is either close to the wall (edge) of a plate or bowl or close to another food item that acts as a support. This scenario may arise in any realistic plate configuration.

Stack: The target food item is stacked on top of another food item. This scenario may arise in any realistic plate configuration (e.g., a salad). To mimic food plate setups similar to a salad plate, we used lettuce as an item on which to stack other food items.

4 Bite Acquisition: Generalizing to Previously Unseen Food Items

We now develop a model for predicting bite acquisition success rates that can potentially generalize to previously unseen food items. In our prior study [8], we built a model that finds the best skewering position and angle from an input RGB image; this model showed high performance on trained food items. However, identity information served a significant role in the detection model, so the model did not generalize to unseen food items even when they could be skewered with the same action. We also used a hand-labeled dataset for where to skewer, which reflected subjective annotator intuitions.

To build a generalizable model that is not biased by human intuition, we propose three solutions. First, we increase the range of food items compared to [8]. Second, we use empirical success rates based on real-world bite acquisition trials performed by a robot instead of manual labels. Third, we directly predict success rates over our action space instead of predicting the object class to select its predefined action. Our previous model [8] output a grid of fork positions and one of 18 discretized fork roll angles, leading to an action space of 5202 discrete actions. In this work, we developed a model that abstracts the position and rotation using a major axis, reducing the action space to just 6 actions, which could improve the model's generalizability. With this approach, regression for the major axis is more general and easier to solve, and the quantity and variety of data required for training can be significantly reduced.
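As a sketch of the major-axis abstraction: the principal direction of an item's pixel mask approximates the major axis of a fitted ellipse (the fitting method below, PCA over mask pixels, is one simple choice and an assumption on our part):

```python
import numpy as np

def major_axis(mask_points):
    """Estimate the major (skewering) axis of a food-item mask.

    mask_points: (N, 2) array of pixel coordinates belonging to the item.
    Returns (center, direction): the centroid and a unit vector along
    the principal direction, which approximates the major axis of a
    fitted ellipse.
    """
    pts = np.asarray(mask_points, dtype=float)
    center = pts.mean(axis=0)
    # Principal component of the centered point cloud = major-axis direction.
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]
    return center, direction
```

A fork roll of 0° then means aligning the tines with `direction`, and 90° means skewering perpendicular to it.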

Figure 5: Our algorithm: The architecture of our success rate regression network, SPANet.

4.1 Skewering-Position-Action Network: SPANet

We designed a network to visually determine both the location to skewer and the acquisition strategy to take. With this in mind, our network receives an RGB image from a wrist-mounted camera as input; it outputs a vector containing the two endpoints of the major axis and six predicted success rates, one per combination of the three macro action types and two fork roll angles.

We experimented with two network architecture designs. The first used a DenseNet [22] base pre-trained on ImageNet [23]. The second used an AlexNet [24] style base with simple convolution layers and batch normalization. In both cases, the base network was followed by three fully convolutional layers and a linear layer that produces the final output. As the performance gap was marginal but the simpler network ran 50% faster than the DenseNet [22] variant, we chose the simple convolution layers. For the loss function, we used the smooth L1-loss between the ground-truth vector and the predicted vector. This choice was intended to force the model to learn the success rate distribution as closely as possible.
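The smooth L1 loss is quadratic near zero and linear for large errors, making it less outlier-sensitive than L2; a numpy sketch (the transition point `beta = 1.0` is the common default, an assumption not stated in the text):

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 loss: quadratic for |error| < beta, linear beyond.

    Smoother than L1 at zero, less sensitive to outliers than L2.
    """
    diff = np.abs(np.asarray(pred, float) - np.asarray(target, float))
    loss = np.where(diff < beta,
                    0.5 * diff ** 2 / beta,   # quadratic regime
                    diff - 0.5 * beta)        # linear regime
    return loss.mean()
```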

Based on our previous human user study, we hypothesized that the position of the food relative to surrounding items would be a key factor in choosing an action. SPANet, therefore, takes as input the surrounding environment type as a one-hot encoding of whether the food item is isolated, near a wall, or stacked. It concatenates this encoding with the image feature vector from the base network. We developed an environment classifier to categorize these environment features.
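The environment conditioning amounts to appending a one-hot vector to the image feature before the later layers; a small sketch (the feature size here is illustrative, not the paper's):

```python
import numpy as np

# The three environment categories produced by the classifier.
ENVIRONMENTS = ["isolated", "wall", "stack"]

def condition_on_environment(image_feature, environment):
    """Append a one-hot environment encoding to an image feature vector."""
    onehot = np.zeros(len(ENVIRONMENTS))
    onehot[ENVIRONMENTS.index(environment)] = 1.0
    return np.concatenate([image_feature, onehot])
```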

4.2 Environment Classifier Pipeline

Classifying the surrounding environment of a food item is not straightforward using SPANet itself because it requires looking at the surroundings, not just the item. Instead of learning the feature, we use a series of classical computer vision methods to categorize the environmental feature (see Fig. 6).

We first use a Hough circle transform to detect the plate and fit a planar table from the depth of the region immediately surrounding it. Subtracting the original depth map yields a height map relative to the table. After color-segmenting each food item from the RGB image, we divide the surrounding region of interest into sub-regions. If a sub-region intersects the plate boundary or if its median height exceeds some threshold, it is considered to be “occupied.” Otherwise, it is “unoccupied.” A food item is classified as “isolated” if a super-majority of the sub-regions are unoccupied, “stacked” if a super-majority of the sub-regions are occupied, and “near-object” or “near-wall” otherwise.
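The final voting rule over sub-regions can be sketched as follows; the two-thirds super-majority threshold is our assumption (the text says only "super-majority"):

```python
def classify_environment(occupied, majority=2/3):
    """Classify a food item's surroundings from sub-region occupancy.

    occupied: list of booleans, one per sub-region around the item;
    True if the sub-region intersects the plate boundary or its median
    height above the fitted table plane exceeds the threshold.
    """
    frac = sum(occupied) / len(occupied)
    if frac <= 1 - majority:
        return "isolated"   # super-majority of sub-regions unoccupied
    if frac >= majority:
        return "stack"      # super-majority of sub-regions occupied
    return "wall"           # near the plate edge or another item
```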

Figure 6: Environment classifier pipeline. A food item is categorized by comparing the depth of its surrounding environment with that of the table surface depth.

4.3 Bite Acquisition Framework

We developed a robotic feeding system that uses our framework to acquire food items from a plate by applying the actions defined in Section 3.1. Note that our definition of an action specifies everything a motion planner needs to know to generate a trajectory, i.e., the approach angle and target pose. Given the success rates of each action on all visible items on the plate, the motion planner chooses the (target item, action) pair with the highest success rate and generates a trajectory. The robot executes it with closed-loop force/torque control until a force threshold is met or the trajectory has ended.
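The planner's selection step reduces to an argmax over SPANet's predicted success-rate matrix; a minimal sketch:

```python
import numpy as np

def select_target_and_action(success_rates):
    """Pick the (item, action) pair with the highest predicted success.

    success_rates: (n_items, n_actions) matrix of SPANet predictions,
    one row per detected food item, one column per action.
    Returns (item_index, action_index).
    """
    rates = np.asarray(success_rates)
    item, action = np.unravel_index(np.argmax(rates), rates.shape)
    return int(item), int(action)
```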

5 Experiments

5.1 Experimental Setup

Our setup consists of a 6-DoF JACO robotic arm [25]. The arm has two fingers that grab an instrumented fork (forque, see Figure 1) using a custom-built, 3D-printed fork holder. The system uses visual and haptic modalities to perform the feeding task. For haptic input, we instrumented the forque with a 6-axis ATI Nano25 force/torque sensor [26]. We use haptic sensing to control the end-effector forces during skewering. For visual input, we mounted a custom-built wireless perception unit on the robot's wrist; the unit includes an Intel RealSense D415 RGBD camera and an Intel Joule 570x for wireless transmission. Food is placed in an assistive bowl mounted on an anti-slip mat commonly found in assisted living facilities.

We experimented with 16 food items: apple, banana, bell pepper, broccoli, cantaloupe, carrot, cauliflower, celery, cherry tomato, grape, honeydew, kale, kiwi, lettuce, spinach and strawberry.

5.2 Data Collection

For each food item, the robotic arm performed each of the six actions (see Fig. 3) under three different positional scenarios (see Fig. 4). For symmetrical items, such as kiwis, bananas, and leaves, the robot performed the trials with one fork roll. For each configuration (action, item, scenario), we collected 10 trials and recorded successes and failures. We defined success as the forque skewering the item and the item staying on the fork for 5 seconds after being lifted. We defined failure as at least 2 of the 4 tines touching the food item but either the fork failing to skewer the item or the item dropping in less than 5 seconds. For each new trial, we changed the food item, since every item has a slightly different size and shape even within the same class. The current dataset (2450 trials at around 2 minutes per trial) took approximately 82 person-hours to collect, not counting robot setup. During each skewering trial, we recorded the RGBD image of the whole plate before and after acquisition, forces and torques during skewering, all joint positions of the arm during the trial, and whether the acquisition attempt was a success or a failure. We annotated the images by drawing a tight bounding box around each food item as well as drawing the major axis of the food. The longer axis along the food item was deemed the major axis. Approach angles and orientations for all actions were defined with respect to the center of the major axis.
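Aggregating these trials into the empirical per-configuration success rates that serve as training targets can be sketched as:

```python
from collections import defaultdict

def success_rates(trials):
    """Aggregate per-configuration success rates from trial records.

    trials: iterable of (item, environment, action, succeeded) tuples,
    one per bite acquisition trial.
    Returns {(item, environment, action): success_fraction}.
    """
    counts = defaultdict(lambda: [0, 0])  # [successes, total] per config
    for item, env, action, ok in trials:
        key = (item, env, action)
        counts[key][0] += int(ok)
        counts[key][1] += 1
    return {k: s / n for k, (s, n) in counts.items()}
```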

6 Dataset Analysis

We validated that the six actions and the three environment scenarios we chose indeed resulted in different success rates. To test statistical significance, we performed a series of Fisher exact tests for homogeneity, as opposed to the more common t-test, u-test, or chi-squared test, since our dataset was sparse. Our p-value threshold was 0.05; with Bonferroni correction, the corrected threshold was 0.0024.
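Fisher's exact test sums hypergeometric probabilities over all 2x2 tables with the observed margins; a pure-Python sketch for a single success/failure contingency table (in practice, `scipy.stats.fisher_exact` does this):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]],
    e.g., successes/failures under two conditions.

    Returns the p-value: the total probability, under fixed margins,
    of all tables at most as likely as the observed one.
    """
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def hyper(x):  # P(table with x in the top-left cell)
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = hyper(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(hyper(x) for x in range(lo, hi + 1)
               if hyper(x) <= p_obs * (1 + 1e-9))
```

For example, 10/10 successes under one condition versus 0/10 under another yields a p-value far below the corrected 0.0024 threshold, while a balanced table yields p = 1.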

(a)
(b)
(c)
Figure 7: (a) Depending on the surrounding environment, different food items have different success rates with different actions. This figure shows results for a banana item with the tilted angled action, lettuce averaged across all actions, and honeydew with the vertical skewer action. (b) Different actions perform the best for different food items. (c) For long and slender items, a fork roll improves success rates.

6.1 Surrounding Environment Affects Bite Acquisition

We tested our three environment categories over all actions and food items. We found that the stacked category played a significant role in bite acquisition compared to the wall or isolated categories. For these latter features, our current experiments did not find a statistically significant difference in success rates. Investigating further, we found that for a subset of food items – viz., kiwi, banana, apple, bell pepper, broccoli, cantaloupe, cauliflower, and strawberry – the wall and stacked environments are significantly better than the isolated environment for the TA strategy, though these p-values exceed our corrected threshold. This group was investigated because we empirically observed that these items needed a TA skewer to prevent their sliding off the fork tines, and that the tilted-angled action worked best near a wall or on top of items. Figure 7(a) shows cases where each of the three environment scenarios results in the best acquisition success rates for different food items.

6.2 Fork Pitch Affects Bite Acquisition

We tested our three macro actions (which differ in their fork pitch angles) over all environment scenarios, food items, and fork roll angles. We found that the tilted-angled (TA) action played a particularly large role in bite acquisition compared to the vertical-skewer (VS) and tilted-vertical (TV) actions. This result was echoed (albeit less significantly, or without surviving correction) for a specific subset of food items in the stacked environment: kale, spinach, strawberry, kiwi, and honeydew; for these items, we found that TV and VS out-performed TA. Figure 7(b) shows that different macro actions perform best for different food items.

6.3 Fork Roll Affects Bite Acquisition

For all food items, macro actions, and environment categories, we tested whether skewering a food item perpendicular to its major axis affects bite acquisition. We did not find a strong significance, but since the test indicated a difference at our original p-value threshold, we hypothesize that more data could reveal a relationship. Investigating further, we found that for large, long items with flat surfaces – e.g., carrot, celery, and apple – skewering at 90° (perpendicular to the major axis) was better than at 0° for all macro actions and environments. These results show that learning the orientation as part of the action can help boost the success rate of acquiring unseen food items with similar characteristics. We compared carrot, apple, and celery and again found a p-value below our original threshold. This difference is echoed in Figure 7(c).

(a)
(b)
Figure 8: Each bar represents SPANet’s success rate on the test set compared to the uniformly random algorithm and the best possible algorithm (optimal) for our action space. (a) In-class results using SPANet show that it correctly predicts action success rates, and the gap between the bar and the optimal is small. (b) Even for unseen items, SPANet generalizes from trained items and performs significantly better than random for all food items except banana and kiwi (which have very different action distributions than the rest of the dataset). SPANet also closely predicts success rates compared to the ground-truth optimal for 7 out of 16 items.

7 Results

7.1 In-Class and Generalization Results

Figure 9: Action distributions of representative food items across different environment categories. Clearly, irrespective of different action distributions for different food items in different environments, SPANet can successfully predict these action distributions when compared to the ground-truth dataset.
(a)
(b)
Figure 10: (a) Given a sample in the test set for a specific excluded food item, these confusion matrices count the number of times the success-rate distribution from our ground-truth dataset is the nearest neighbor to the z-score of the output vector of that sample. Kiwi and banana, which did not generalize well, were excluded. We see some correspondence between food items with similar physical properties. Kale, spinach, cauliflower, broccoli, and lettuce correspond to lettuce; cherry tomato and strawberry correspond to each other; and celery, carrot, bell pepper, and apple correspond to each other. (b) We then clustered food items into four categories based on their properties and constructed this confusion matrix. The results suggest that, given an unseen leafy vegetable, SPANet will associate it with other leafy vegetables it has seen before. The same can be said for long food items and items with non-flat shapes.

Figure 8(a) shows SPANet’s performance for each food item in our training dataset. Given the expected success rates of each action, SPANet recommends actions that perform significantly above random for almost all food items. For the worst-case item, banana, the actions SPANet chose would have an expected success rate 7% lower than repeatedly taking the optimal action. SPANet did not perform significantly above random for either kiwi or lettuce. However, this is because we do not have the statistical power to conclude significance with this dataset: the 95% confidence interval on these two items exceeded the distance between a random and an optimal algorithm. To gain more insight into SPANet’s prediction of action distributions, we chose three representative food items with different physical properties; as Figure 9 shows, SPANet can successfully predict the different action distributions of these food items in different environments.

We also tested SPANet’s ability to generalize to food items that it had not seen before. Results are presented in Figure 8(b). SPANet generalizes quite well to unseen food items except for banana and kiwi, which have very different action distributions (tilted-angled) from the rest of the dataset. To better understand SPANet’s method of generalization, we compared the outputs generated from foods excluded from SPANet’s training set with those from foods within the training set. To do so, for each excluded food item in the test set, we took the expected success rate output vector and found its nearest neighbor (under L1 distance) among the ground-truth vectors of all food items. To mitigate a bias towards food items with success rate vectors near the mean of the whole dataset, we computed the z-score of the output vector before comparison.

Results are shown in Figure 10(a). From this confusion matrix, we can tell that the outputs of unseen food items tend to cluster near one of a few food items: lettuce, strawberry, and celery. Investigating further, we see some correspondence between food items with similar physical properties. For example, kale, spinach, cauliflower, broccoli, and lettuce correspond to lettuce; cherry tomato and strawberry correspond to each other; and celery, carrot, apple, and bell pepper correspond to each other. To better visualize this, we clustered food items into four categories based on their properties. “Leafy” vegetables included lettuce, spinach, kale, cauliflower, and broccoli. “Long” food items included celery, carrots, apples, and bell pepper. “Flat” food items included cantaloupe and honeydew. “Non-flat” food items included items with non-flat surfaces, such as strawberry, cherry tomato, and grape. Figure 10(b) shows the same results as Figure 10(a) clustered into these four classes. The results suggest that, given an unseen leafy vegetable, SPANet will associate it with other leafy vegetables it has seen before. The same can be said for long food items and items with non-flat shapes.
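The nearest-neighbor matching described above can be sketched as follows, assuming each success-rate vector is z-scored before an L1 comparison (our reading of the normalization step):

```python
import numpy as np

def nearest_food_class(output_vec, ground_truth):
    """Match a predicted success-rate vector to its closest known item.

    output_vec: predicted per-action success rates for an unseen item.
    ground_truth: {food_name: success-rate vector} for trained items.
    Each vector is z-scored first so that items whose rates sit near
    the dataset mean do not dominate the matching.
    """
    def z(v):
        v = np.asarray(v, float)
        return (v - v.mean()) / (v.std() + 1e-8)

    q = z(output_vec)
    return min(ground_truth,
               key=lambda name: np.abs(q - z(ground_truth[name])).sum())
```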

7.2 Full System Integration Results

To test the viability of SPANet in our bite acquisition framework, we integrated the algorithm with our robotic system. We performed experiments with a representative set of foods and environments and measured the acquisition success rate. First, we tested carrot under all three environment scenarios to assess SPANet’s ability to pick the correct angle for long and slender items. Second, we tested strawberry in the wall and stacked environment scenarios as a non-flat item whose best action depends on the environment. We collected 10 trials for each. To deal with positional inaccuracy caused by the fork being dislodged or bent during physical interaction with the environment, we used visual template matching to detect the fork tip and adjust its expected pose, reducing image projection error. Anomalies external to the SPANet algorithm (see Section 8), such as planning or wall-detection/RetinaNet anomalies, were not counted towards the 10 trials.

We found that SPANet tended to pick the most successful strategies to try to skewer carrots and strawberries. In each food-scenario pair except for strawberry-wall, SPANet always picked either the best or second best options. As shown in Figure 11, the carrot tests perfectly matched their expected success rate and best action success rate. Strawberry-stacked experienced marginally less success than expected, having just 1 failure out of 10 where the fork went in and out without acquiring the strawberry. Interestingly, for strawberry-wall, the tests matched the best action success rate despite SPANet not picking the best actions in this case. This could perhaps be explained by slight variations in strawberry shapes and positions.

7.3 Bite Acquisition from a Realistic Plate

Figure 11: Full system integration acquisition success rates.

To demonstrate the viability of our bite acquisition system on a realistic cluttered plate, we tested SPANet’s ability to acquire a variety of foods from a cluttered full plate, as shown in Figure 12(a). For these experiments, we trained a version of SPANet with a set of 3 food items (carrot, celery, and bell pepper) held out. We tested two out-of-class plates that contained all three excluded items plus cantaloupe in different configurations, as well as an in-class plate containing honeydew, strawberry, and broccoli. We placed two to three pieces of each item onto a lettuce base, filling the plate so that items were often close to each other and stacked on the lettuce, but not on each other. Food item positions were chosen arbitrarily. We then attempted to acquire all food items on the plate except the lettuce.

As shown in Figure 12(b), both out-of-class plates had high success rates: all 10 items were acquired in only 11 attempts (excluding external anomalies; see Section 8), and each attempt picked one of the best two actions from the action space. These results show our system’s ability to generalize to unseen food items by picking reasonable actions and acquiring items with very few failures. Interestingly, the in-class plate had lower performance, mainly because its food items were more challenging to acquire. The first five items were picked up with only a single failure. The first honeydew attempt picked the best action, and while the action choices for broccoli and strawberry were less consistent, they were acquired with only a single strawberry failure in which the item slid off of the fork. There were five external planning anomalies in which the system could not plan to the commanded 3D food pose, as well as perception anomalies in RetinaNet bounding-box detection: two honeydew pieces and one broccoli piece could not be acquired because they blended into the matching green background of the stacked lettuce.
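The per-plate numbers above follow a simple bookkeeping rule: external anomalies are excluded from the attempt count, and everything else is tallied as a success or failure. A minimal sketch of that tally (the `plate_success` helper and trial tuples are illustrative, not the system's logging format):

```python
def plate_success(trials):
    """Summarize acquisition attempts on one plate.

    trials: list of (item, outcome) pairs, where outcome is 'success',
    'failure', or 'anomaly'. External planning/perception anomalies are
    excluded from the attempt count, as in Section 8."""
    attempts = [(item, res) for item, res in trials if res != "anomaly"]
    successes = sum(1 for _, res in attempts if res == "success")
    return successes, len(attempts)
```

Under this rule, an out-of-class plate with 10 successes, one failure, and any number of excluded anomalies reports 10 acquisitions in 11 attempts.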

Figure 12: (a) Food plates showing the best action and its corresponding success rate. Left: in-class (IC); Middle: out-of-class (OOC1); Right: out-of-class in a different configuration (OOC2). (b) SPANet performed well on all plates. In-class plate performance is slightly lower, probably because its food items are more challenging to acquire.

8 Discussion

Our SPANet algorithm successfully generalized actions to unseen food items whose action distributions resemble those of known items. For soft, slippery items such as banana and kiwi, whose distributions of successful actions (tilted-angled) differed significantly from the rest of the dataset, the algorithm did not generalize well; this is a topic of interest for future work. Going forward, additional actions could also be added to our action space, such as scooping for food items like rice or mashed potato and twirling for noodles.

In addition to collecting more data with a wider variety of food items, our system’s robustness could be improved to handle external planning and perception anomalies. Visually, the system occasionally struggled to differentiate overlapping objects with similar colors, leading to inaccurate bounding-box or axis detection; an inaccurate axis caused SPANet to execute at the wrong fork angle. In terms of planning, some actions were more difficult to plan for, and maneuvering the food to avoid these difficult paths would help. Finally, instability in the fork’s shape and position led to positional variance at the fork tip. While the fork-tip template-matching add-on helped, it would be beneficial to stabilize the fork and prevent it from bending out of shape due to repeated physical interactions with the environment.

Acknowledgment

This work was funded by the National Institute of Health R01 (#R01EB019335), National Science Foundation CPS (#1544797), National Science Foundation NRI (#1637748), the Office of Naval Research, the RCTA, Amazon, and Honda.
