# LESS is More: Rethinking Probabilistic Models of Human Behavior

## Abstract

Robots need models of human behavior for both inferring human goals and preferences, and predicting what people will do. A common model is the Boltzmann noisily-rational decision model, which assumes people approximately optimize a reward function and choose trajectories in proportion to their exponentiated reward. While this model has been successful in a variety of robotics domains, its roots lie in econometrics, and in modeling decisions among different discrete options, each with its own utility or reward. In contrast, human trajectories lie in a continuous space, with continuous-valued features that influence the reward function. We propose that it is time to rethink the Boltzmann model, and design it from the ground up to operate over such trajectory spaces. We introduce a model that explicitly accounts for distances between trajectories, rather than only their rewards. Rather than each trajectory affecting the decision independently, similar trajectories now affect the decision together. We start by showing that our model better explains human behavior in a user study. We then analyze the implications this has for robot inference, first in toy environments where we have ground truth and find more accurate inference, and finally for a 7DOF robot arm learning from user demonstrations.


## 1. Introduction

What we do depends on our intent – our goals and our preferences. When robots collaborate with us, they need to be able to observe our behavior and infer our intent from it, so that they can help us achieve it. They also need to anticipate or predict our future behavior given what they have inferred, so that they can seamlessly coordinate their behavior with ours. Both inference and prediction thus require a model of human behavior conditioned on intent.

A very popular such model is Boltzmann rationality (Baker et al., 2007; Von Neumann and Morgenstern, 1945). It formalizes intent via a reward function, and models the human as selecting trajectories in proportion to their (exponentiated) reward. Boltzmann rationality has seen great successes in a variety of robotic domains, from mobile robots (Kretzschmar et al., 2016; Vasquez et al., 2014; Henry et al., 2010; Ziebart et al., 2009; Pfeiffer et al., 2016) to autonomous cars (Ziebart et al., 2008; Wulfmeier et al., 2015; Kitani et al., 2012) to manipulation (Kalakrishnan et al., 2013; Bobu et al., 2018; Finn et al., 2016; Mainprice and Berenson, 2013; Mainprice et al., 2015), in both inference (Levine and Koltun, 2012; Ziebart et al., 2008; Kalakrishnan et al., 2013; Kretzschmar et al., 2016; Finn et al., 2016; Ramachandran and Amir, 2007; Aghasadeghi and Bretl, 2011; Vasquez et al., 2014; Henry et al., 2010) and prediction (Kitani et al., 2012; Mainprice and Berenson, 2013; Ziebart et al., 2009; Mainprice et al., 2015; Pfeiffer et al., 2016).

Despite its widespread use, Boltzmann predictions are not always the most natural. At the core of the Boltzmann model is the view that behavior is a choice among available alternatives; the probability of any trajectory thus heavily depends on the available alternatives. This has some unforeseen side-effects. One of the simplest examples is at the top of Figure 1. Imagine first that there are two possible trajectories to a goal, left and right, both equally good. Boltzmann would predict a probability of $1/2$ of choosing to go to the left. Next, imagine that we change the set of alternatives: we add two similar trajectories to the right. Just because there are more options to go to the right, Boltzmann now predicts a higher probability that you will decide to do so: for these four equally good trajectories, Boltzmann assigns probability $1/4$ each, and estimates going left with only probability $1/4$ instead of $1/2$ as before. Should this change in alternatives – the addition of similar options to go to the right – really be reducing the prediction that you will go left by that much?
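To make this side-effect concrete, here is a minimal sketch (our own illustration, not code from any study) of the Boltzmann choice probabilities before and after the two similar right-side options are added:

```python
import math

def boltzmann(rewards, beta=1.0):
    """Luce-Shepard / Boltzmann choice probabilities over a discrete option set."""
    weights = [math.exp(beta * r) for r in rewards]
    total = sum(weights)
    return [w / total for w in weights]

# Two equally good trajectories (left, right): each gets probability 1/2.
two_options = boltzmann([1.0, 1.0])

# Add two near-duplicates of the right trajectory: the left option's
# probability drops to 1/4, purely because the set of alternatives changed.
four_options = boltzmann([1.0, 1.0, 1.0, 1.0])
```

The left option's reward never changed; only the roster of competitors did.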

This example seems artificial – when are we going to have a) a group of similar trajectories, and b) an imbalance in the number of similar trajectories for each option, so that Boltzmann shows this side-effect? Unfortunately, it is quite representative of real-world trajectory spaces. Spaces of trajectories are continuous and bounded, so they naturally contain a continuum of alternatives of varying similarity to each other, just like the right-side trajectories in our example. Further, trajectories will have varying amounts of similarity to the rest of the space: just like our left-side trajectory was dissimilar from the other alternatives, in the real world, trajectories closer to joint limits or that squeeze in between two nearby obstacles will be dissimilar from the rest of the trajectory space.

Unfortunately, the Boltzmann model was not designed to handle such spaces. It has its roots in the Luce axiom of choice from econometrics and mathematical psychology (Luce, 1977, 1959), which models decisions among discrete and distinct options. When we move to trajectory spaces, the options are now all connected to some degree:

Our insight is that we need to rethink how to generalize the Luce axiom to trajectory spaces, and account for how similarity in trajectories should influence their probability.

We take a first step towards this goal by introducing an alternative to the Boltzmann model that accounts not just for the reward of each trajectory, but also for the feature-space similarity each trajectory has with all other alternatives. We name our model LESS, as it is Limiting Errors due to Similar Selections. We start by testing that our model does better at predicting human decisions (Section 3), and then move on to analyze its implications for inference. We first conduct experiments in simulation, with ground truth reward functions, to show that we can make more accurate inferences using our model (Section 4). Finally, we test inference on real manipulation tasks with a 7DOF arm, where we learn from user demonstrations (Section 5) – though we no longer have ground truth, we show that we can improve the robustness of the inference if we use LESS.

## 2. Method

Motivated by human prediction and reward inference for robotics, we seek an improved human behavior model, explicitly designed for trajectory spaces rather than abstract discrete decisions. To develop this theory, we first turn to the literature on human decision making.

### 2.1. Background

#### Human Decision Making

One of the preeminent theories of human decision making in mathematical psychology is based on Luce’s axiom of choice (Luce, 1959, 1977). In this formulation, we consider a set of options $\mathcal{C}$, and we seek to quantify the likelihood that a human will select any particular option $x \in \mathcal{C}$. The desirability of each option can be modeled by a function $f$, where $f$ produces higher values for more desirable options. As a consequence of Luce’s choice axiom, the probability of selecting an option $x$ is given by

$$P(x \mid \mathcal{C}) = \frac{f(x)}{\sum_{x' \in \mathcal{C}} f(x')} \tag{1}$$

If we further assume that each option $x$ has some underlying reward $R(x)$, and we allow desirability to be an exponential function of this reward, $f(x) = e^{R(x)}$, then we recover the Luce-Shepard choice rule (Shepard, 1957):

$$P(x \mid \mathcal{C}) = \frac{e^{R(x)}}{\sum_{x' \in \mathcal{C}} e^{R(x')}} \tag{2}$$

When the options being chosen by the human are trajectories $\xi \in \Xi$, i.e. sequences of (potentially continuous-valued) actions, we refer to (2) as the Boltzmann model of noisily-rational behavior (Von Neumann and Morgenstern, 1945; Baker et al., 2007). The reward is typically a function of a feature vector $\Phi(\xi)$, giving the probability density over continuous $\Xi$ as

$$P(\xi) = \frac{e^{\beta R(\Phi(\xi))}}{\int_{\Xi} e^{\beta R(\Phi(\bar{\xi}))}\, d\bar{\xi}} \tag{3}$$

where $\beta \geq 0$ scales how noisily-rational the human appears.

#### Handling duplicates

Since the introduction of the Luce choice axiom, related works (Debreu, 1960; Gul et al., 2014) have pointed out its duplicates problem, where inserting a duplicate of any option into has an undue influence on selection probabilities. To address this drawback, various extensions of the Luce model have been proposed which attempt to group together identical or similar options (Ben-Akiva, 1973; Vovsha, 1997). Further extending these ideas, Gul et al. (2014) recently introduced the attribute rule, which reinterprets options as bundles of attributes but maintains Luce’s idea that choice is governed by desirability values.

Analogous to (Gul et al., 2014), let $\mathcal{A}$ be the set of all attributes, let $A(x) \subseteq \mathcal{A}$ be the set of attributes belonging to option $x$, and let $B(\mathcal{C})$ be the set of attributes which belong to at least one option $x \in \mathcal{C}$. Define an attribute value, $w : \mathcal{A} \to \mathbb{R}_{>0}$, that maps attributes to their desirability, and an attribute intensity, $m : \mathcal{A} \times \mathcal{C} \to \mathbb{N}$, that maps pairs of attributes and options to natural numbers, usually 0 or 1, to indicate the degree to which an attribute is expressed. For instance, an attribute $a$ could be the property “green” and $m(a, x)$ could return 1 if option $x$, say one of a set of cars, is green, and 0 otherwise.

According to the attribute rule, the probability of choosing $x \in \mathcal{C}$ is

$$P(x \mid \mathcal{C}) = \sum_{a \in A(x)} \frac{w(a)}{\sum_{a' \in B(\mathcal{C})} w(a')} \cdot \frac{m(a, x)\, f(x)}{\sum_{x' \in \mathcal{C} :\, a \in A(x')} m(a, x')\, f(x')} \tag{4}$$

which describes a process where the human first chooses an attribute according to a Luce-like rule, then an option with that attribute according to another Luce-like rule. Note that (4) reduces to (1) if no pair of options in $\mathcal{C}$ shares any attributes; for example, if each $x$ has a single unique attribute, the first sum in (4) disappears, and the second fraction evaluates to 1. In this work, we want to take advantage of the attribute rule’s graceful handling of duplicates while extending its functionality to trajectories with continuous-valued features and not only categorical attributes.
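As a sketch of how the attribute rule tames duplicates (our own illustration; the helper `attribute_rule`, the 0/1 intensity, and the uniform values are assumptions made for the example), consider re-adding a duplicate option from the introduction’s scenario:

```python
def attribute_rule(options, attrs, w, f):
    """Attribute-rule choice probabilities in the style of Gul et al., with the
    intensity m(a, x) taken as the 0/1 indicator 'a in attrs[x]'."""
    present = set().union(*(attrs[x] for x in options))  # attributes in B(C)
    w_total = sum(w[a] for a in present)
    probs = {}
    for x in options:
        p = 0.0
        for a in attrs[x]:
            # First pick attribute a (Luce over attribute values), then pick
            # x among the options expressing a (Luce over desirabilities).
            share = sum(f[y] for y in options if a in attrs[y])
            p += (w[a] / w_total) * (f[x] / share)
        probs[x] = p
    return probs

# One left option, one right option, plus a duplicate of the right one.
attrs = {"L": {"left"}, "R1": {"right"}, "R2": {"right"}}
w = {"left": 1.0, "right": 1.0}
f = {"L": 1.0, "R1": 1.0, "R2": 1.0}
probs = attribute_rule(["L", "R1", "R2"], attrs, w, f)
# The duplicate splits the right side's share: L keeps 1/2, R1 = R2 = 1/4.
```

Unlike plain Luce, the duplicate does not erode the left option’s probability.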

### 2.2. The LESS Human Decision Model

In this paper, we take inspiration from the attribute rule to derive a novel model of human decision making in continuous spaces. Key to our approach is introducing a similarity measure on trajectories. This could be directly in the trajectory space, but more generally it is in feature space, where features could, in one extreme, be the trajectory itself. We first instantiate the attribute rule with features as the attributes, and then soften it to account for feature similarity. Indeed, the Boltzmann rationality model given by (3) already assigns selection probabilities based only on trajectory features, so we look to modify the decision space to depend directly on features as well.

#### Accounting for Trajectories with Identical Features.

We derive our model by starting from (4) and defining the set of attributes to be $\mathcal{A} = \Phi(\Xi)$, the set of all possible feature vectors. Accordingly, the set of attributes that belong to $\xi$ is the single element $A(\xi) = \{\Phi(\xi)\}$, and the attributes represented in a set $\mathcal{C} \subseteq \Xi$ are $B(\mathcal{C}) = \{\Phi(\xi') : \xi' \in \mathcal{C}\}$. Combining this convention with the reward model (3), the modified attribute rule for trajectories over a finite subset $\mathcal{C} \subseteq \Xi$ becomes

$$P(\xi \mid \mathcal{C}) = \frac{e^{\beta R(\Phi(\xi))}}{\sum_{\varphi \in B(\mathcal{C})} e^{\beta R(\varphi)}} \cdot \frac{m(\Phi(\xi), \xi)}{\sum_{\xi' \in \mathcal{C}} m(\Phi(\xi), \xi')} \tag{5}$$

In the original attribute rule, the attribute intensity $m$ mapped to the natural numbers. A convenient mapping in this context would be to use $m$ as an indicator function, where $m(\varphi, \xi)$ evaluates to 1 only if $\Phi(\xi) = \varphi$. With this formulation, if all trajectories have a unique feature vector, then the rightmost term of (5) is identically 1 and we recover the Boltzmann model (3), as applied to a finite sample of trajectories $\mathcal{C}$. If, on the other hand, multiple trajectories share the exact same feature vector, then they will effectively be considered as a single option, and the selection probability will be distributed equally among them. This effect is desirable: since the features capture all the relevant inputs to the reward, trajectories with the same features should be considered practically equivalent.

#### Softening to Feature Similarity.

We suggest that such a notion of attribute intensity is too stringent for continuous spaces, and we redefine $m$ to be a soft similarity metric $S(\varphi, \varphi')$, which should be symmetric ($S(\varphi, \varphi') = S(\varphi', \varphi)$) and positive semidefinite ($S(\varphi, \varphi') \geq 0$), with $S(\varphi, \varphi) = 1$ for all $\varphi$.

Using this redefined similarity metric $S$, we extend (5) to be a probability density on the continuous trajectory space $\Xi$, as in (3):

$$P(\xi) = \frac{e^{\beta R(\Phi(\xi))} \big/ \int_{\Xi} S(\Phi(\xi), \Phi(\xi'))\, d\xi'}{\int_{\Xi} \left[ e^{\beta R(\Phi(\bar{\xi}))} \big/ \int_{\Xi} S(\Phi(\bar{\xi}), \Phi(\xi'))\, d\xi' \right] d\bar{\xi}} \tag{6}$$

where the numerator $S(\Phi(\xi), \Phi(\xi)) = 1$ and the normalizer over $B(\Xi)$ from (5) are omitted because they are constant over $\xi$ and cancel out during normalization.

Under this new formulation, the likelihood of selecting a trajectory is inversely proportional to its feature-space similarity with other trajectories. This de-weighting of trajectories that are similar to others is precisely the effect we seek, and we adopt the probability given by (6) as our LESS model of human decision making.

### 2.3. Similarity as Density

The main innovation that differentiates our model from previously proposed rules is the use of a similarity metric that reweights trajectory likelihoods based on the presence of other trajectories that are nearby in feature space. We note that the integral of this similarity over trajectories, the denominator of (6), is akin to a measure of trajectory density in feature space. We estimate similarity as a density by selecting our similarity metric as a kernel function and performing Kernel Density Estimation (KDE). There are many choices of kernel functions, each parametrized by some notion of bandwidth. In our experiments, we used a radial basis function, which peaks when $\Phi(\xi) = \Phi(\xi')$, then exponentially decreases the farther away $\Phi(\xi)$ and $\Phi(\xi')$ are from one another in feature space:

$$S\big(\Phi(\xi), \Phi(\xi')\big) = \exp\left( -\frac{\lVert \Phi(\xi) - \Phi(\xi') \rVert^2}{2h^2} \right) \tag{7}$$

where the bandwidth $h$ is an important parameter that dictates, for a given feature difference between two trajectories, how much that difference affects the ultimate similarity evaluation. A higher $h$ means a higher bandwidth and makes everything look more similar.
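On a finite trajectory sample, (6) with the kernel (7) reduces to a few lines. The following sketch (our own, with illustrative one-dimensional feature vectors) shows the isolated left option recovering a probability near $1/2$ despite a cluster of three nearly identical right options:

```python
import math

def rbf_similarity(phi1, phi2, h=1.0):
    """RBF similarity as in (7): 1 when features match, decaying with distance."""
    d2 = sum((a - b) ** 2 for a, b in zip(phi1, phi2))
    return math.exp(-d2 / (2.0 * h * h))

def less_probs(features, rewards, beta=1.0, h=1.0):
    """LESS over a finite trajectory sample: each Boltzmann weight is divided
    by the trajectory's summed similarity to the sample (a density estimate)."""
    n = len(features)
    density = [sum(rbf_similarity(features[i], features[j], h) for j in range(n))
               for i in range(n)]
    weights = [math.exp(beta * r) / d for r, d in zip(rewards, density)]
    total = sum(weights)
    return [w / total for w in weights]

# One isolated left trajectory, three nearly identical right trajectories,
# all with equal reward. Boltzmann would give the left one only 1/4.
features = [(0.0,), (5.0,), (5.01,), (5.02,)]
probs = less_probs(features, [1.0, 1.0, 1.0, 1.0], h=0.1)
```

The right cluster’s three members each carry a density near 3, so together they count roughly as one option.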

We find an optimal bandwidth automatically by using a finite set of trajectory samples $\mathcal{D}$ and maximizing the sum of the log of their summed similarities, which is equivalent to maximizing their likelihood under a probability density estimate produced by KDE:

$$h^* = \arg\max_h \sum_{\xi \in \mathcal{D}} \log \sum_{\xi' \in \mathcal{D} \setminus \{\xi\}} S_h\big(\Phi(\xi), \Phi(\xi')\big) \tag{8}$$
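A small sketch of this selection follows (our own; note that we add the Gaussian kernel’s log-normalizer term $-d \log h$ per point, which turns the objective into the standard leave-one-out KDE log-likelihood and keeps the maximization from trivially favoring the largest candidate bandwidth):

```python
import math

def loo_objective(features, h):
    """Leave-one-out KDE-style log-likelihood for an RBF kernel of bandwidth h
    (up to additive constants): log of summed similarities minus d*log(h)."""
    d = len(features[0])
    def sim(p, q):
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        return math.exp(-d2 / (2.0 * h * h))
    total = 0.0
    for i, p in enumerate(features):
        s = sum(sim(p, q) for j, q in enumerate(features) if j != i)
        total += math.log(s) - d * math.log(h)
    return total

def select_bandwidth(features, candidates):
    """Pick the candidate bandwidth maximizing the leave-one-out objective."""
    return max(candidates, key=lambda h: loo_objective(features, h))

# Two tight clusters 5 units apart: a bandwidth near the within-cluster
# spacing (0.1) should beat far smaller or larger candidates.
features = [(0.0,), (0.1,), (5.0,), (5.1,)]
best_h = select_bandwidth(features, [0.03, 0.1, 0.3, 1.0])
```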

### 2.4. Inference and Prediction with LESS

Let $\theta$ parametrize the reward function $R_\theta$. To predict what the human will do given a belief $b(\theta)$, we marginalize over $\theta$:

$$P(\xi \mid b) = \int P(\xi \mid \theta)\, b(\theta)\, d\theta \tag{9}$$

with $P(\xi \mid \theta)$ given by (6). To perform inference over $\theta$ given a human trajectory $\xi$, we update our belief using Bayesian inference:

$$b'(\theta) = P(\theta \mid \xi) = \frac{P(\xi \mid \theta)\, b(\theta)}{\int P(\xi \mid \theta')\, b(\theta')\, d\theta'} \tag{10}$$

In practice, calculating the integrals in the denominators of (10) and (6) can be intractable, so we use a discretized set of parameters and finite trajectory sample sets in our experiments. The specific sampling of the trajectory choice space can significantly impact inference, and we explore its implications in Section 5.
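In the discretized setting, (9) and (10) become weighted sums; a minimal sketch (our own, over a hypothetical grid of two reward parameters):

```python
def bayes_update(belief, likelihoods):
    """Discrete version of (10): b'(theta) proportional to P(xi | theta) b(theta)."""
    post = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(post)
    return [p / z for p in post]

def predict(belief, traj_likelihoods):
    """Discrete version of (9): P(xi | b) = sum over theta of P(xi | theta) b(theta).
    traj_likelihoods[k][t] = P(trajectory k | theta t)."""
    return [sum(l * b for l, b in zip(row, belief)) for row in traj_likelihoods]

# Hypothetical two-parameter example: a demonstration twice as likely under
# theta_0 as under theta_1 shifts a uniform belief to [2/3, 1/3].
belief = bayes_update([0.5, 0.5], [0.4, 0.2])
preds = predict(belief, [[0.4, 0.2], [0.6, 0.8]])
```

Either choice model (3) or (6) can supply the likelihood terms.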

## 3. LESS as a human decision model

We start by testing the hypothesis that LESS is a better model for human decision making than the standard Boltzmann model.

### 3.1. Human Decision Model Experiment Design

We design a browser-based user study in which we ask participants to make behavior decisions, and measure which model best characterizes these decisions. We select a simple navigation task as our domain, where different behaviors correspond to different ways of traversing the grid from start to goal, as shown in Figure 2.

#### Main Design Idea

The key difficulty in designing such a study is that both models require access to a ground truth reward function, i.e. user preferences over trajectories. Even though we can provide participants with some criteria – in our case, optimizing for path length while avoiding the obstacle – this does not mean our criteria are the only ones they care about. For instance, people might implicitly prefer trajectories that go closer to or further from the obstacle, or that go around the obstacle to the left or right.

Our design idea is to introduce a control trial in which we gather data about relative preferences between two dissimilar options: left and right. These relative preferences then enable us to make predictions, under each model, about the experimental trial, where we add trajectories similar to the option on the right.

For the control trial, participants saw the grid world shown in Figure 2(a), with one obstacle in the middle and three trajectories travelling between the start and goal. Two of the trajectories traversed an equal number of tiles (optimal) and were symmetric along the diagonal of the grid (left and right), and a third trajectory went through the obstacle and visited more tiles than the others (not optimal). We were only interested in which optimal trajectory people chose (Left versus Right), and we used the third, suboptimal trajectory as an attention test to check whether subjects had paid attention to the instructions. We chose the two optimal trajectories to be symmetric and of the same color to reduce possible confounds, such as bias people might have for extraneous features like number of turns, distance from obstacle, color, etc.

For the experimental trial, shown in Figure 2(b), we had the same setup as in the control, with the addition of two other optimal trajectories on the right. They had the same color, number of turns, and number of tiles traversed as the original right-side trajectory. In this setup, there were two visible clusters of options: one trajectory on the left, and three clustered on the right, which we denote as the Left and Right groups, respectively.

#### Manipulated Variables

We manipulated the model used for decision-making in the experimental trial to be Boltzmann vs. LESS. Having access to the ratio at which participants chose the left trajectory over the right in the control trial pins down the ratio $e^{\beta R(\Phi(\xi_L))} / e^{\beta R(\Phi(\xi_R))}$ regardless of their reward function $R$, according to (3). This enables us to make predictions using both models as a function of this ratio for the experimental trial, despite not knowing $R$ itself. For these computations, we assumed that all trajectories in the Right group had the same reward, that the rewards of trajectories in the Left and Right groups would be equal to those estimated from the control trial, and (for LESS) that the Left trajectory had density one while the Right trajectories had density three.

Under the Boltzmann model, the addition of two trajectories similar to the one on the right decreases the probability that the trajectory on the left gets chosen. This is most obvious when $R(\Phi(\xi_L)) = R(\Phi(\xi_R))$, i.e. if users liked both trajectories equally – then $P(\xi_L)$ would go from $1/2$ all the way down to $1/4$, as there are now four equally good options. On the other hand, LESS accounts for the similarity of the trajectories on the right and keeps $P(\xi_L)$ closer to the control value.
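Both predictions can be computed directly from the control-trial proportion; a sketch of the computation (our own), under the assumptions stated above (equal rewards within the Right group, density 3 for the Right cluster under LESS):

```python
def predicted_left(control_left, n_right=3, model="boltzmann"):
    """Predicted P(Left) in the experimental trial from the control-trial
    proportion. The control proportions stand in for the (unknown)
    exponentiated rewards, since only their ratio matters."""
    w_left, w_right = control_left, 1.0 - control_left
    if model == "boltzmann":
        # The two extra right-side options simply add two more weights.
        return w_left / (w_left + n_right * w_right)
    # LESS: divide each weight by its cluster's density (1 left, n_right right).
    return w_left / (w_left + n_right * (w_right / n_right))

boltz = predicted_left(0.475, model="boltzmann")  # drops well below the control value
less = predicted_left(0.475, model="less")        # exactly the control value, 0.475
```

Note how LESS’ prediction is invariant to the cluster size, because the density term cancels the duplicated weights.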

#### Dependent Measures

Our measure is the selection proportion of each trajectory in the experimental trial, which enables us to compute agreement between each model and the users’ decisions.

#### Subject Allocation

We recruited 80 participants from Amazon Mechanical Turk (AMT) using the psiTurk experimental framework (Gureckis et al., 2016). We excluded 3 participants for failing our attention test. All participants were from the United States and had a minimum approval rating of 95%. The treatment trial was assigned between-subjects: participants saw only one of the two sets of trajectory options.

#### Hypotheses

H1: For the experimental trial, the Boltzmann proportion prediction is significantly different from the observed proportion.

H2: For the experimental trial, the LESS proportion prediction is equivalent to the observed proportions.

### 3.2. Analysis

In the control trial, users chose the Left trajectory 47.5% of the time. Figure 2 plots the observed proportions for the experimental trial, along with each model’s predictions. The experimental trial resulted in an observed probability of .41 for the Left trajectory, whereas Boltzmann predicts .23 and LESS predicts .475. The models both predict a uniform distribution among the Right trajectories.

We performed a chi-square test of goodness of fit to see if the observed distribution of left vs. right from the experimental group differed from the predicted distributions. In line with our hypotheses, we found a significant difference between the observed values and the Boltzmann prediction, and no significant difference between the observations and the LESS prediction.

To test for equivalence, we performed an equivalence test for multinomial distributions as described by Wellek (2010). This test evaluates the null hypothesis that the Euclidean distance between the multinomial distribution and a reference is greater than some $\epsilon$ (where the distance is computed by taking each distribution to be a vector in $\mathbb{R}^K$, where $K$ is the number of trajectories represented by the distribution). We do not have an a priori estimate for which values of $\epsilon$ are practically insignificant in this vector space of probability distributions, so we instead invert the test to find the minimum $\epsilon$ for which the observed distribution matches the predicted distribution at a significance level of $\alpha = .05$. We found that the minimum bound for equivalence at the $\alpha = .05$ level was 0.22 for the LESS prediction and 0.39 for the Boltzmann prediction.

The results across all trajectories are analogous, albeit slightly weaker, because users tended to favor one of the three Right trajectories more than the other two. The chi-square test revealed a significant difference from the Boltzmann prediction, but no significant difference between the observations and the LESS prediction.

The equivalence test found that the observed distribution matches the LESS-based prediction at the $\alpha = .05$ significance level when the bound $\epsilon$ is 0.29, versus 0.36 for Boltzmann. Despite LESS’ tighter $\epsilon$, neither prediction aligns perfectly with the empirical data in Figure 2(d). This discrepancy is likely due to some unmodeled features (e.g. distance from the obstacle), which may influence participants’ preferences. However, while unknown features may affect both Boltzmann’s and LESS’ performance, LESS still corrects Boltzmann’s errors from mishandling similarity. We explore the specific effects of feature misspecification further in Section 4.3.

Overall, although neither model is a perfect predictor of behavior, we find that LESS is a better fit: Boltzmann is significantly different from the observed, and LESS provides a tighter equivalence bound.

## 4. Using LESS for robot inference

In Section 3, we provided evidence supporting that LESS can more accurately capture human decisions. This has direct implications for how robots predict behavior – increasing the model accuracy by definition increases the robot’s prediction accuracy. We now hypothesize that it also has implications for how robots infer human preferences from behavior: namely, that using a higher accuracy model when performing inference leads to more accurate inference.

### 4.1. Boltzmann and LESS inference comparison

We first design an experiment to test whether, if people do act according to the LESS distribution, modeling them as such leads to better inference than modeling them via Boltzmann. To control for potential confounds, we also verify the opposite: if instead people acted according to Boltzmann (which Section 3 does not support), then modeling them as Boltzmann would instead be better for inference.

In this experiment, we created a grid world environment with two objects, where humans have to teach a robot to navigate from a start to a goal and learn preferences for whether to stay close to or far from the objects. We simulated hypothetical human demonstrations by sampling trajectories according to LESS and Boltzmann. To do so, we fixed a particular objective $\theta^*$ and a confidence parameter $\beta$, and randomly chose trajectories according to probabilities given by either (6), for LESS, or (3), for Boltzmann. We then utilized these trajectories as “human” demonstrations and performed inference using either Boltzmann or LESS as the underlying choice model. Our goal was to analyze how each model’s inference quality depends on the sampling model used, across a range of objectives $\theta^*$.
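Concretely, each simulated demonstration set can be drawn with a seeded categorical sampler over the trajectory sample’s model probabilities (a sketch of the procedure, not the study’s actual code):

```python
import random

def sample_demonstrations(trajectories, probs, k, seed=0):
    """Draw k simulated 'human' demonstrations i.i.d. according to the
    probabilities assigned by the chosen model (Boltzmann or LESS)."""
    rng = random.Random(seed)  # seeding makes each demonstration set reproducible
    return rng.choices(trajectories, weights=probs, k=k)

demos = sample_demonstrations(["left", "right_a", "right_b"],
                              [0.5, 0.25, 0.25], k=5, seed=42)
```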

#### Manipulated Variables

We used a 2-by-2 factorial design. We manipulated the sampling model with two levels, Boltzmann and LESS, as well as the inference model, Boltzmann and LESS.

#### Other Variables

We tested inference quality across eight different values of $\theta^*$ for more variation and insight. We also used 150 random seeds for sampling demonstrations. For a given sampling method, the combination of a $\theta^*$ and a seed determines the demonstration set that the inference will use. Therefore, we generated 1200 demonstration sets for each sampling method.

#### Dependent Measures

To analyze each model’s inference quality, we employ two objective metrics:

Accuracy of a-posteriori inference: once we obtain a posterior probability $b(\theta)$ induced by the sampled demonstrations, we verify that the maximum a-posteriori $\hat{\theta} = \arg\max_{\theta} b(\theta)$ matches the original $\theta^*$. Thus, we define a binary variable that takes value 1 if they match and 0 otherwise:

$$\text{TrueMatch} = \mathbb{1}\left[ \arg\max_{\theta} b(\theta) = \theta^* \right]$$

Magnitude of posterior probability: this metric provides a softened, continuous indication of inference performance by capturing the posterior probability mass assigned to the correct $\theta^*$:

$$\text{TruePosterior} = b(\theta^*)$$
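Both metrics are one-liners over a discrete posterior (our own sketch; `theta_star` is assumed to be the index of the ground-truth parameter in the discretized grid):

```python
def true_match(posterior, theta_star):
    """1 if the MAP parameter equals the ground truth, else 0."""
    map_idx = max(range(len(posterior)), key=lambda i: posterior[i])
    return 1 if map_idx == theta_star else 0

def true_posterior(posterior, theta_star):
    """Posterior probability mass on the ground-truth parameter."""
    return posterior[theta_star]

m = true_match([0.1, 0.7, 0.2], theta_star=1)      # MAP matches: 1
p = true_posterior([0.1, 0.7, 0.2], theta_star=1)  # 0.7
```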

#### Hypotheses

H3: When human input is generated using LESS, inference quality is significantly higher with LESS than with Boltzmann.

H4: When human input is generated using Boltzmann, inference quality is significantly higher with Boltzmann than with LESS.

#### Analysis

Figure 3 summarizes the results by showing how TruePosterior varies by inference method for each of our sampling methods. To analyze these results, we ran a factorial repeated measures ANOVA. We found a significant interaction effect between the sampling and inference methods, which can be seen in the change in relative performance of Boltzmann and LESS from Figure 3(a) to Figure 3(b). A factorial logistic regression for the TrueMatch results also revealed a significant interaction between sampling method and inference method. In post-hoc testing, a Tukey HSD test revealed that TruePosterior was significantly higher when the inference method matched the sampling method, and logistic regressions similarly showed that the probability of $\text{TrueMatch} = 1$ is greater when sampling and inference agree.

These results strongly support both H3 and H4, as they reveal that inference performance is superior when the inference method agrees with the sampling method. Given that the experiment in Section 3 suggests that LESS can be a better model of human sampling behavior, these results provide evidence that using LESS-based inference could give better performance when learning from humans.

### 4.2. Qualitative analysis of LESS inference

Based on what we have seen thus far, LESS clearly leads to different robot inferences. In this section we provide some qualitative intuition about what contributes to this difference.

The important change from Boltzmann to LESS is the strength of the inference as a function of the feature density at the demonstrated trajectory. If a demonstrated trajectory lies in a high-density area, i.e. its features are similar to those of many other possible trajectories, Boltzmann inference will under-learn. This is because there are many high-reward alternatives in the normalizer of (3), which lowers the probability of the demonstration. For the analogous reason, if a demonstration lies in a low-density area, Boltzmann inference will over-learn. Because our LESS method weighs each trajectory by the inverse of the density at its location in feature space, the resulting weighted density will be approximately uniform, not allowing the feature density to influence the strength of the inference: the presence of other options with similar features no longer skews the probability as much.

To visualize this, we chose two sets of demonstrations from the previous experiment. One set, $\mathcal{D}_1$, comes from one of the ground truth rewards for which Boltzmann performed better (Figure 3(a)). The other set, $\mathcal{D}_2$, comes from one for which LESS performed better (Figure 3(b)). Figure 4 shows the sampled trajectories in $\mathcal{D}_1$ and $\mathcal{D}_2$, along with the inference for each model. For $\mathcal{D}_2$, LESS confidently identifies the ground truth, whereas Boltzmann’s posterior has higher entropy. Figure 5 shows that $\mathcal{D}_2$ does fall in a high-density region, which indeed leads to Boltzmann under-learning and finding many alternative explanations.

For $\mathcal{D}_1$, on the other hand, something very interesting happens. Looking at where the samples lie (blue dots in Figure 5), two of them are in relatively high-density areas (call them $\xi_{\text{dense}}$), whereas the others are in a very sparse region (call them $\xi_{\text{sparse}}$). The $\xi_{\text{dense}}$ samples are the two with lower values in Figure 5 (right). They correspond, in Figure 4(b), to the two trajectories that go closer to the bottom obstacle. To the LESS inference, which is more agnostic to the feature density, this gives evidence for two hypotheses: $\xi_{\text{dense}}$ support the hypothesis that the robot should stay far from the top obstacle but be ambivalent about the bottom one, whereas the other trajectories, $\xi_{\text{sparse}}$, support that the robot should stay far from both obstacles. This is why we see two hypotheses inferred by LESS in Figure 4(b). The Boltzmann inference, however, learns much more from the trajectories that lie in the low-density area, essentially ignoring $\xi_{\text{dense}}$. This is what leads to the very confident inference of only one of the hypotheses. In this case, this happens to be the correct hypothesis. In general, though, the opposite could have happened – had the two trajectories that go closer to the obstacle been the ones to lie in a sparse area, Boltzmann would have confidently inferred the wrong objective. In summary, Boltzmann, by being sensitive to feature densities, can under- or over-learn.

### 4.3. LESS and feature misspecification

LESS uses information from features to compute similarity, even when those features do not affect the reward. For example, if the reward is solely about efficiency, LESS captures that people treat “right-of-the-obstacle” options as similar. What happens, though, if the robot does not have access to these additional features?

#### Experimental Design

We again generate demonstrations using LESS, but we include two additional features: the average $x$ and average $y$ coordinate of the trajectory. The two new features do not influence the trajectories’ reward values, but they do influence the similarity metric. To induce a misspecification, the robot performing inference is unaware of these new features. For this experiment, we only manipulate the inference model: LESS vs. Boltzmann.

H5: When the robot’s feature space is misspecified, inference quality with LESS is still superior to inference quality with Boltzmann for LESS-sampled demonstrations.

#### Analysis

For TruePosterior, we performed a one-way repeated measures ANOVA and, as hypothesized, the test revealed that LESS inference was still significantly better than Boltzmann, in spite of the feature misspecification. Similarly for TrueMatch, a logistic regression revealed that the odds of $\text{TrueMatch} = 1$ were significantly greater when using LESS, strongly supporting our hypothesis.

We take this result with a grain of salt: in the worst case, if an unspecified feature completely differentiates all options for the human, then even a human sampling according to LESS would exhibit behavior approaching the Boltzmann distribution. Then, based on Section 4.1, Boltzmann inference could yield superior results. However, this experiment suggests that in practical rather than adversarial cases, it is still preferable to use LESS inference on an incomplete set of features. Further, it is always possible for LESS to default to using the trajectory space directly for the similarity metric, rather than relying on features.

## 5. Robust Inference for High-DOF Arms

Section 4 teased that Boltzmann inference performance is highly dependent on the structure of the environment and, more precisely, on the feature space density induced by all possible trajectories. However, we demonstrated this on a toy task with simulated human data and ground truth access. We now put the same hypothesis to the test in a real-world, high-dimensional scenario with a 7DoF robotic manipulator and real human demonstrations, where one cannot have access to the full trajectory space, nor the ground truth reward.

### 5.1. Single demonstration inference

#### Study Goal.

Since for such an environment calculating the denominator in (3) exactly is intractable, practitioners typically sample the space of trajectories, obtaining varying subsets. Given the Boltzmann model’s high dependency on the feature space density, we speculate that different sample sets would result in vastly varying inference results. In this section, we investigate how LESS can mitigate this effect and improve inference robustness. We collect demonstrations from participants for different tasks, and run inference using different sets of trajectories for computing the normalizer.

#### Manipulated Variables

We used a 2-by-5 factorial design. We manipulated the inference model with two levels, Boltzmann and LESS, as well as the size of the sampled trajectory sets used for inference, with five levels: 10, 30, 100, 300, and 1000. We sampled 10 different trajectory sets of each size.

#### Other Variables

We tested our hypothesis across three household manipulation tasks where the robot learned to carry a coffee mug from a start position to a goal according to the person’s preferences. In the first task, which we dub table, the participants were asked to move the robot arm from start to goal while maintaining the end-effector close to the table, to prevent the mug from breaking in case of a slip. In the second task, dubbed laptop, the participants were instructed to avoid spilling the coffee over a laptop by providing a demonstration that keeps the robot’s end-effector away from the electronic device. Lastly, in the third task, dubbed human, we asked the participants to keep the end-effector away from their body, to avoid spilling coffee on their clothes.

In all scenarios, the robot performs inference by reasoning over three features: one feature of interest (distance from the table, distance from the laptop, and distance from the human, respectively), a second feature drawn from that set, and an efficiency feature computed as the sum of squared velocities across the trajectory.
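For illustration, such features might be computed along the following lines. The efficiency feature matches the description above; the distance features are our assumed functional forms, not the paper's exact definitions.

```python
import numpy as np

def efficiency(joint_traj, dt=0.1):
    # Sum of squared joint velocities across the trajectory (n, 7)
    vel = np.diff(joint_traj, axis=0) / dt
    return float((vel ** 2).sum())

def mean_table_clearance(ee_positions, table_z=0.0):
    # Average end-effector height above the table plane (assumed form)
    return float(np.mean(ee_positions[:, 2] - table_z))

def min_laptop_distance(ee_positions, laptop_xyz):
    # Closest approach of the end-effector to the laptop (assumed form)
    return float(np.linalg.norm(ee_positions - laptop_xyz, axis=1).min())

# A straight-line joint-space path between two 7DOF configurations:
traj = np.linspace(np.zeros(7), np.ones(7), 20)
```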

#### Dependent Measures

In total, for each task, sample size, inference method, and user, we obtained 10 posterior distributions, one per sampled trajectory set. Our goal was to test how robust (or consistent) each method’s inference result was across the ten different trajectory sets. We used an aggregate Kullback-Leibler divergence, computed over the pairwise KL divergences among the ten posteriors, as a measure of how much they differ from one another; we refer to this measure as KLAggregate.
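One plausible form of such an aggregate sums the pairwise KL divergences among the posteriors; the exact pairing and normalization used in the paper may differ, so treat this as a sketch of the idea.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # KL divergence between two discrete distributions (eps avoids log 0)
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float((p * np.log(p / q)).sum())

def kl_aggregate(posteriors):
    # Sum of pairwise KL divergences: 0 when all posteriors agree,
    # large when the sampled trajectory sets yield divergent posteriors
    n = len(posteriors)
    return sum(kl(posteriors[i], posteriors[j])
               for i in range(n) for j in range(n) if i != j)

identical = [np.array([0.5, 0.5])] * 10                       # fully robust
spread = [np.array([0.9, 0.1]), np.array([0.1, 0.9])] * 5     # not robust
```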

#### Hypothesis

H6: Performing single inference with LESS across multiple trajectory sets results in higher robustness and, thus, a lower KLAggregate measure than inference with Boltzmann.

#### Subject Allocation

We recruited 12 users (3 female, 9 male, aged 18-30) from the campus community to physically interact with a JACO 7DOF robotic arm and provide demonstrations for three tasks. Figure 7 (left) illustrates the demonstrations collected for the table task. Before giving any demonstrations, each person was allowed a period of training with the robot in gravity compensation mode, in order to get accustomed to interacting with the robot.

#### Analysis

As seen in Figure 7, given two different trajectory sets, inference with each method can have drastically different outcomes. With LESS (top), we see that the resulting posterior distributions are fairly similar, whereas with Boltzmann inference (bottom), they differ in entropy/confidence.

For each task, we performed a factorial repeated-measures ANOVA. The results for the laptop task are summarized in Figure 8(a). As the trend in the figure indicates, we found a significant interaction effect between inference method and sample size (, ). A post-hoc Tukey HSD test revealed that LESS produced significantly lower KLAggregate than Boltzmann for sample sizes 10, 30, and 100 ( for all), but no significant difference for 300 or 1000 ( for both). This trend supports our hypothesis that LESS provides more robust single-demonstration inference, and it shows that the difference in KLAggregate between LESS and Boltzmann disappears as the sample size increases. Results from the table task also support this trend, with a significant main effect of inference method.

While the human task did reveal a significant interaction between inference method and sample size (, ), it stands apart from the other two: a post-hoc Tukey HSD test only found a difference for sample size 1000 (). This pattern indicates that demonstrations from this task may be generally more ambiguous and present a more difficult inference problem than the other two.

### 5.2. All demonstrations inference

We repeated the same experiment, except this time we ran inference by aggregating all users’ demonstrations for a task (batch inference). This would happen in practice if we were interested in teaching the robot what the average user wants, rather than customizing the behavior to each user. Here, we found the opposite result, shown in Figure 8(b): LESS has higher divergence (lower robustness). We attribute this to the phenomenon described in Section 4.2. When we had only one demonstration, Boltzmann was not robust because, depending on the set of samples, the demonstration could fall in a low- or high-density region, leading to different Boltzmann inferences for different sets. Now, with 12 demonstrations at once, the chances of at least one demonstration falling in a low-density area are much higher. As we saw in Section 4.2, when there are multiple demonstrations, Boltzmann inference is dominated by those lying in low-density areas. This leads to a more consistent posterior distribution, so long as the low-density demonstrations suggest the same reward function.
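The mechanism by which all demonstrations contribute to a single posterior can be sketched as a product of per-demonstration Boltzmann likelihoods, computed in log space for stability. The features, hypotheses, and sample-based normalizer here are illustrative assumptions.

```python
import numpy as np

def batch_posterior(demo_feats, sample_feats, thetas, beta=5.0):
    # log P(theta | all demos) = sum_i [beta*theta·f(xi_i)] - N*log Z_hat(theta)
    log_post = np.array([
        sum(beta * f @ th for f in demo_feats)
        - len(demo_feats) * np.log(np.exp(beta * sample_feats @ th).sum())
        for th in thetas
    ])
    post = np.exp(log_post - log_post.max())  # stabilize before normalizing
    return post / post.sum()

rng = np.random.default_rng(1)
thetas = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
# Eleven demos favoring feature 0 plus one outlier favoring feature 1:
demos = [np.array([0.8, 0.2])] * 11 + [np.array([0.2, 0.8])]
p = batch_posterior(demos, rng.random((100, 2)), thetas)
```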

## 6. Discussion

We propose a new probabilistic model of human behavior and present compelling evidence that it better captures human decision making and attenuates the inference errors that arise from similar options, increasing both accuracy and robustness.

One limitation of our method is its reliance on a pre-specified set of robot features for computing similarity, which makes feature misspecification a possible concern. Although our experiments in Section 4.3 reveal that LESS still performs better inference than Boltzmann, it is unclear whether this outcome is due to the effect of hypothesis H3 or whether our method is truly unaffected by misspecification. Further experiments are needed to fully resolve this question.

Our 12-person aggregate inference results in Section 5 show that LESS can lead to less robust inference. We attributed this outcome to the phenomenon in Section 4.2, but it remains unclear whether this leads to less accurate inference, or whether Boltzmann is actually preferable in situations with enough varied demonstrations.

Lastly, the Mechanical Turk study in Section 3, although compelling, relies on simplistic datasets of human choices. Further studies on human behavior in more realistic settings would be useful, but are complicated by the lack of access to the "ground truth" reward.

Despite these limitations, Boltzmann rationality has become so fundamental to how robots perform inference and prediction that designing a counterpart built for continuous robotics domains is sorely needed. We are excited to have taken a step in this direction.

### Footnotes

- conference: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction; March 23–26, 2020; Cambridge, United Kingdom
- booktitle: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20), March 23–26, 2020, Cambridge, United Kingdom
- doi: 10.1145/3319502.3374811
- isbn: 978-1-4503-6746-2/20/03

### References

- Maximum entropy inverse reinforcement learning in continuous state spaces with path integrals. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1561–1566.
- Goal inference as inverse planning.
- Structure of passenger travel demand models. Transportation Research Record 526.
- Learning under misspecified objective spaces. In CoRL.
- The American Economic Review 50 (1), pp. 186–188.
- Guided cost learning: deep inverse optimal control via policy optimization. In Proceedings of the 33rd International Conference on Machine Learning, ICML’16, pp. 49–58.
- Random choice as behavioral optimization.
- PsiTurk: an open-source framework for conducting replicable behavioral experiments online. Behavior Research Methods 48 (3), pp. 829–842.
- Learning to navigate through crowded environments. In 2010 IEEE International Conference on Robotics and Automation, pp. 981–986.
- Learning objective functions for manipulation. In 2013 IEEE International Conference on Robotics and Automation, pp. 1331–1336.
- Activity forecasting. In Computer Vision – ECCV 2012, Berlin, Heidelberg, pp. 201–214.
- Socially compliant mobile robot navigation via inverse reinforcement learning. International Journal of Robotics Research 35 (11), pp. 1289–1307.
- Continuous inverse optimal control with locally optimal examples. In Proceedings of the 29th International Conference on Machine Learning, ICML’12, pp. 475–482.
- Individual choice behavior. John Wiley, Oxford, England.
- The choice axiom after twenty years. Journal of Mathematical Psychology 15 (3), pp. 215–233.
- Human-robot collaborative manipulation planning using early prediction of human motion. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 299–306.
- Predicting human reaching motion in collaborative tasks using inverse optimal control and iterative re-planning. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 885–892.
- Predicting actions to act predictably: cooperative partial motion planning with maximum entropy models. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2096–2101.
- Bayesian inverse reinforcement learning. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI’07, pp. 2586–2591.
- Stimulus and response generalization: a stochastic model relating generalization to distance in psychological space. Psychometrika 22 (4), pp. 325–345.
- Inverse reinforcement learning algorithms and features for robot navigation in crowds: an experimental comparison. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1341–1346.
- Theory of games and economic behavior. Princeton University Press, Princeton, NJ.
- Application of cross-nested logit model to mode choice in Tel Aviv, Israel, metropolitan area. Transportation Research Record 1607 (1), pp. 6–15.
- Testing statistical hypotheses of equivalence and noninferiority. Chapman and Hall/CRC.
- Maximum entropy deep inverse reinforcement learning.
- Maximum entropy inverse reinforcement learning. In Proceedings of the 23rd National Conference on Artificial Intelligence, AAAI’08, pp. 1433–1438.
- Planning-based prediction for pedestrians. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS’09, pp. 3931–3936.