A Mathematical Theory of Human Machine Teaming

Pete Trautman

Figure 1: Illustration: an HMT architecture that is lower bounded for generic performance metrics under a variety of teaming stressors.

We begin with a disquieting paradox: human machine teaming (HMT) often produces results worse than either the human or the machine would produce alone. Critically, this failure is not a result of inferior human modeling or suboptimal autonomy: even with perfect knowledge of human intention and perfect autonomy performance, prevailing teaming architectures still fail under trivial stressors [3]. The failure is instead a deficiency at the decision fusion level. Accordingly, efforts aimed solely at improving human prediction or autonomous performance will not produce acceptable HMTs: we can no longer model humans, machines, and adversaries as distinct entities. We thus argue for a strong but essential condition: an HMT should perform no worse than either member of the team alone, and this performance bound should be independent of environment complexity, human-machine interfacing, accuracy of the human model, and reliability of autonomy or human decision making. In other words, this requirement is fundamental (Figure 1): the fusion of two decision makers should be as good as either in isolation. For instance, if the human model is incorrect, the performance of the team should still be as good as the autonomy in isolation. If the autonomy is unreliable, this should not impair the human. Importantly, most existing HMTs have no robust mechanism to "fuse" human and machine information, foreclosing any opportunity to produce "a team that performs greater than the sum of its parts". In response to these shortcomings, we introduce a theory of interacting random trajectories (IRT) over the humans, machines, and (potentially adversarial) environment [3] that optimally fuses the three sources of information and achieves the following four objectives:

  1. IRT is a unifying formalism: most HMT approaches are approximations to IRT.

  2. IRT quantifies these approximations and precisely predicts when existing architectures will fail.

  3. We can predict, in advance of empirical evaluation, when IRT will succeed and fail.

  4. The first three objectives, when combined with dimensionality reduction techniques, enable a human-collective/multi-agent decision fusion framework with performance bounds.

1. A Unifying Formalism for HMT

Figure 2: State-of-the-art HMT architectures (shared control, task allocation, autopilots, and HCI) are not reliably lower bounded. "Input" ranges from high level to low level.

To show that IRT is a unifying formalism, we must understand standard HMT decision fusion; we thus introduce linear blending:

$$u^S(t) = K^H(t)\,u^H(t) + K^A(t)\,u^A(t) \qquad (0.1)$$

At time $t$, $u^S(t)$ is the team action, $u^H(t)$ is the human operator input (joystick deflections, high level commands, or preset autonomous actions), $u^A(t)$ is the autonomy command, and $K^H(t)$ and $K^A(t)$ are the operator and autonomy weighting factors, respectively, which can be functions of anything, subject to $K^H(t) + K^A(t) = 1$. As shown in [3], linear blending captures a wide variety of teaming approaches in low level shared control; we argue here that linear decision fusion is used much more broadly in HMT. For instance, switching control (either human or machine has full control) is a special case of linear blending where $K^H$ and $K^A$ are either 0 or 1. Consider the following (a minimal code sketch of both fusion rules follows this list):

  • Dynamic task allocation: an algorithm determines when the human or the machine should be in control of the task (switching control). This does not mean that the allocation method is linear, but that the decision fusion is linear. For instance, we might use an elaborate cognitive architecture to determine when the human takes full control. We point out that “shared mental models” are typically implemented in a switching control fashion, supervisory control approaches delegate to either the machines or the humans, and handoff tasks have been almost exclusively restricted to the case of switching control.

  • Commercial autopilots are exclusively switching control.

  • "Playbook" approaches: the human picks a "play", and the machines execute, which is an example of switching control where $K^A = 1$ during execution.

  • Standard HCI: human inputs data, machine processes data/presents alternatives to human, human makes a decision. This is switching control: $K^H = 1$ at the decision point.
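To make the fusion rules concrete, the sketch below implements linear blending (0.1) and its switching special case. The vector action space and the example inputs are illustrative assumptions, not details from [3].

```python
import numpy as np

def linear_blend(u_human, u_auto, k_human):
    """Linear blending (0.1): the team action is a convex combination of the
    human and autonomy inputs. k_human is the operator weight K^H(t); the
    autonomy weight is 1 - k_human, enforcing K^H(t) + K^A(t) = 1."""
    return k_human * u_human + (1.0 - k_human) * u_auto

def switching_control(u_human, u_auto, human_in_control):
    """Switching control: the special case where K^H and K^A are 0 or 1."""
    return linear_blend(u_human, u_auto, 1.0 if human_in_control else 0.0)

# Example: the operator steers left while the autonomy steers right.
u_h = np.array([-1.0, 0.5])  # human joystick deflection
u_a = np.array([+1.0, 0.5])  # autonomy command
print(linear_blend(u_h, u_a, 0.5))        # [0., 0.5]: a command neither issued
print(switching_control(u_h, u_a, True))  # [-1., 0.5]: human in full control
```

Note that the blended action [0, 0.5] was issued by neither member: when the operator goes left to avoid an obstacle and the autonomy goes right, the blend drives straight up the middle, which is exactly the lower bound failure of Figure 3(a).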

Interacting random trajectories [3] is a generalization of linear blending: it is a statistically sound and optimal approach to fusing coevolving human, machine, and environment information. IRT relaxes human input to online data $z^h_{1:t}$ about the random human trajectory $h$ over the action space (we generalize to multiple humans in Section 4); we likewise take measurements $z^{R_i}_{1:t}$ of the $i$'th machine trajectory $f^{R_i}$ and measurements $z^{f_j}_{1:t}$ of the $j$'th environment agent trajectory $f^j$. We collapse the machines and the environment into collective random processes $f^R$ and $f$, and take the following as our decision fusion architecture (Equation (0.2) is updated at every $t$, so it reflects collective evolution):

$$u^S(t+1) = f^{R*}(t+1), \quad (h^*, f^{R*}, f^*) = \operatorname*{argmax}_{h,\,f^R,\,f}\; p\big(h, f^R, f \mid z^h_{1:t}, z^R_{1:t}, z^f_{1:t}\big) \qquad (0.2)$$

This formulation makes precise the assumptions that any linear fusion architecture imposes on the team (e.g., that the human, machine, and environment processes evolve independently), providing a theoretical advantage: we can analyze a broad range of approaches in advance of empirical evaluation.
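The sketch below approximates the fusion rule (0.2) by sampling: candidate human and machine trajectories are scored under a joint density, and the machine component of the best joint hypothesis becomes the team action. The Gaussian priors and hand-tuned coupling terms are illustrative assumptions standing in for the coupled trajectory models of [3].

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_log_density(h, f_R, f_env):
    """Toy stand-in for log p(h, f^R, f | z_{1:t}): independent Gaussian priors
    plus coupling terms that reward human-machine agreement and penalize
    machine-environment proximity. Real IRT uses learned, coevolving models."""
    prior = -0.5 * (np.sum(h**2) + np.sum(f_R**2))
    agreement = -2.0 * np.sum((h - f_R)**2)
    clearance = np.sum(np.log(1e-3 + np.sum((f_R - f_env)**2, axis=-1)))
    return prior + agreement + clearance

def irt_fuse(f_env, n_samples=2000, horizon=10, dim=2):
    """Approximate the argmax of the joint posterior in (0.2) by sampling
    candidate human and machine trajectories and scoring them jointly."""
    best_f_R, best_score = None, -np.inf
    for _ in range(n_samples):
        h = rng.normal(size=(horizon, dim))    # candidate human trajectory
        f_R = rng.normal(size=(horizon, dim))  # candidate machine trajectory
        score = joint_log_density(h, f_R, f_env)
        if score > best_score:
            best_f_R, best_score = f_R, score
    return best_f_R[0]  # u^S(t+1): first step of the best machine trajectory

f_env = rng.normal(size=(10, 2))  # forecast of one environment agent
print(irt_fuse(f_env))
```

Because every hypothesis is scored jointly, a poorly supported human hypothesis loses probability mass instead of being averaged into the fused action; intuitively, this is the mechanism that preserves the lower bound.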

Figure 3: (a) Operator goes left and autonomy goes right: linear architectures fail lower bounding, while IRT preserves it. (b) Coupled robot-crowd models improve safety 3-fold and maintain efficiency near that of a human operator; decoupled models fail the lower bounding property, with purely human crowds outperforming human-robot crowds. (c) [3, 4] suggest that a coupled human-machine-world architecture is required to achieve the lower bounding property.

2. Quantifying the Approximations in Existing HMT Approaches

An important motivation for the lower bounding principle is the commonplace failure of existing HMT architectures: for many applications, trivial stressors cause the team to fall apart. Consider a human and a robot sharing control of a platform in a congested environment (e.g., a shared control wheelchair navigating through a crowd). In [3], we proved that state-of-the-art HMT architectures fail the lower bounding criterion if environmental or operator predictions are multimodal in a Gaussian mixture representation; even under mildly challenging conditions, existing approaches can fuse two safe inputs into an unsafe shared control (Figure 3(a)). Furthermore, in [3], we proved that IRT respects the lower bounding property under a variety of circumstances (explored fully in [4]). Most existing approaches to fully autonomous navigation in human environments also fail the lower bounding property. For instance, as shown in [5, 2], decoupling the components of the robot-crowd team leads to the freezing robot problem: once environment complexity exceeds a certain threshold, planning algorithms that model the human and the robot independently freeze in place. More broadly, as shown in Figure 3(b), state-of-the-art crowd navigation algorithms fail the lower bounding property: purely human crowds are safer and more efficient than human-robot crowds. The results in [3, 4] suggest that a coupled and evolving human-machine-world architecture is required to achieve the lower bounding property (Figure 3(c)). Further, linear fusion prevents the interleaving of human and machine capabilities, while IRT weaves complementary capabilities together (Figure 4).
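The decoupling behind the freezing robot problem can be stated in one line. The notation follows Equation (0.2); the factorization below is the independence approximation analyzed in [5, 2], written out here for emphasis.

```latex
% Decoupled planning predicts the crowd, then plans against that prediction,
% implicitly assuming the environment evolves independently of the robot.
% As crowd density grows, p(f | z_{1:t}) covers the workspace, every robot
% path looks unsafe, and the planner freezes; the coupled joint retains the
% mutual adaptation between robot and crowd.
\[
  \underbrace{p\big(f^R, f \mid z_{1:t}\big)}_{\text{coupled (IRT)}}
  \;\neq\;
  \underbrace{p\big(f^R \mid z_{1:t}\big)\,
              p\big(f \mid z_{1:t}\big)}_{\text{decoupled (freezing robot)}}
\]
```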

Figure 4: Linear and IRT architectures navigating through a crowd with semantic information ("elevator coming!"). (a) High fidelity human intent model (the posterior over $h$ has small variance): IRT is able to leverage the human's contextual information; linear architectures violate the lower bounding property. (b) With large variance, the IRT team performance lower bound is maintained, while the lower bound is violated for the linear approach. (c) How to use data to instantiate Figure 1.

3. Predicting Capabilities of IRT

Figure 5: Using learning theory and [3] to generate Figure 1. The "true" teaming action and the IRT teaming action are separated by at most $\epsilon$ with probability $1 - \delta$. The human, machine, and world ($h$, $f^R$, $f$) are arguments of the teaming actions; we vary them to produce Figure 1.

Although a coupled HMT architecture is necessary to achieve lower bounding, can we prove that joint models preserve the property across a spectrum of team stressors? In Figure 4, we present a thought experiment showing how we can reason towards Figure 1: a shared control robot travels through a crowd waiting for an elevator. Without elevator arrival information, the robot's best choice is to go right. When the elevator bell rings, the robot does not know what it means; a human, however, will recognize that the best path will be around the left side of the crowd and the worst path will be to the right. In Figure 4(a), a high fidelity human model shows how IRT marries human information to robot path planning to exceed the lower bound; the linear architecture discards the human's input and violates the lower bound (Figures 4(a) and 4(c)). In Figure 4(b), IRT exceeds the lower bound and the linear architecture violates it (Figures 4(b) and 4(c)).

Constructing general instantiations of Figure 1 will require at least two advances. First, we must quantify performance error outside of the training set. IRT formulates teaming as a joint human-machine-world model, presenting an opportunity to interpret performance error as generalization error; such an approach allows us to leverage important results from learning theory, and thus accelerate our understanding of HMT performance bounds. Second, to generate Figure 1, team stressors must be an argument of the predicted performance error. Since the human, autonomy, and world are arguments of IRT, these models are also arguments of the performance error (Figure 5).
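To make "performance error as generalization error" concrete, one natural target is a PAC-style statement; the schematic bound below illustrates the intended form (the symbols $\epsilon$, $\delta$, stressor vector $s$, and episode count $N$ are assumptions of this sketch, not results from [3]).

```latex
% Schematic of Figure 5: with confidence 1 - \delta, the IRT teaming action
% stays within \epsilon of the "true" teaming action, where \epsilon grows
% with the team stressors s (human model error, autonomy reliability,
% world complexity) and shrinks with the number of training episodes N.
\[
  \Pr\!\Big[\,\big\| u^{*}(h, f^R, f) - u^{\mathrm{IRT}}(h, f^R, f) \big\|
      \le \epsilon(s, N) \Big] \;\ge\; 1 - \delta
\]
```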

4. Optimal Human Collective Representations

Figure 6: From https://en.wikipedia.org/wiki/Matrix_completion: matrix completion of an $m \times n$ matrix of rank $r$. The columns are "machines" and the rows are "operators".

Many existing internet recommendation technologies (e.g., Netflix) are based on a simple observation: human preference is often low dimensional. Thus, when represented in the optimal basis, we can accurately predict human decision making from only a few example decisions. With matrix completion [1], we can exactly recover a user's preference for any movie from just a few reviews; however, this technology assumes that other humans have already entered reviews, blurring the distinction between individual and collective. This raises a critical question for HMT: can we infer collective human decision making from just a few individual operator samples? We present three examples and then discuss the underlying mathematical challenge.

  1. Supervisory control of platforms: a single operator provides waypoint inputs.

  2. Human control of a robotic prosthesis: the prosthesis has $n$ actuators, but the human can only provide $k < n$ actuator inputs.

  3. Big data analysis (find the bad guy in 1B images): an analyst can provide up to $n$ image "insights" (derived from contextual clues). Under time pressure, the analyst provides only $k \ll n$ insights.

IRT, as described in Equation (0.2), provides a mathematical quantification of this problem:

$$u^S(t+1) = \operatorname*{argmax}_{h,\,f^R,\,f}\; p\big(h, f^R, f \mid z^h_{1:k}, z^R_{1:t}, z^f_{1:t}\big),$$

where we approximate the human trajectory $h$ from only $k$ measurements $z^h_{1:k}$; we interpret the $z^h$ as waypoints, actuator inputs, or analyst insights. However, if the rank of the collective human intent matrix $M$ is $r$ and $m$ is the number of observed entries, [1] tells us that $m \geq C\,n^{1.2}\,r \log n$ randomly sampled inputs suffice to exactly complete an $n \times n$ matrix $M$; in an important sense, we are completing a single operator using the "collective wisdom" of the participants.
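As a concrete illustration, the sketch below completes a synthetic low-rank "collective intent" matrix from a fraction of its entries using iterative soft-thresholded SVD; the sizes, rank, observation rate, and threshold are illustrative assumptions, and [1] solves the same problem exactly via nuclear norm minimization rather than this simple heuristic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic low-rank "collective intent": rows are operators, columns machines.
n, r = 60, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-r ground truth

# Observe only a fraction of entries (a few inputs per operator).
mask = rng.random((n, n)) < 0.3

def soft_impute(M_obs, mask, tau=2.0, iters=300):
    """Iterative soft-thresholded SVD, a simple surrogate for the nuclear
    norm minimization of [1]: observed entries are held fixed, and missing
    entries are repeatedly filled in by a low-rank estimate."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        X = np.where(mask, M_obs, X_low)             # re-impose observations
    return X

M_hat = soft_impute(M, mask)
err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
print(f"relative completion error: {err:.3f}")  # typically small for low-rank M
```

In the HMT reading, completing a row predicts a single operator's full set of inputs (waypoints, actuator commands, insights) from the few they had time to provide, with the remaining structure supplied by the collective.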

Summary of Response: IRT (the lower bounding paradox) and optimal human collective representations demand a radical rethinking of coevolving ecosystems of humans, machines, and adversaries: the distinction between individual and collective has been muddied in an unintuitive, yet mathematically precise, way.

References

  • [1] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 2008.
  • [2] P. Trautman et al. Robot navigation in dense crowds: the case for cooperation. In ICRA, 2013.
  • [3] P. Trautman. Assistive planning in complex, dynamic environments. In IEEE Systems, Man, and Cybernetics (http://arxiv.org/abs/1506.06784), 2015.
  • [4] P. Trautman. Interacting random trajectories: a discussion (http://tinyurl.com/gvc8hzd). Technical report, Galois Inc., 2016.
  • [5] P. Trautman and A. Krause. Unfreezing the robot: Navigation in dense, interacting crowds. In IROS, 2010.