A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

Nikolaos Mavridis
Interactive Robots and Media Lab, NCSR Demokritos
GR-15310, Agia Paraskevi, Athens, Greece


In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.



I Introduction: Historical Overview

While the first modern-day industrial robot, Unimate, was conceived in 1954 by George Devol and began work on the General Motors assembly line in 1961 [1, 2], the concept of a robot has a very long history, starting in mythology and folklore, with the first mechanical predecessors (automata) constructed in ancient times. For example, in Greek mythology, the god Hephaestus is reputed to have made mechanical servants from gold ([3] p.114, and [4] verse 18.419). Furthermore, a rich tradition of designing and building mechanical, pneumatic or hydraulic automata also exists: from the automata of Ancient Egyptian temples, to the mechanical pigeon of the Pythagorean Archytas of Tarentum circa 400 BC [5], to the accounts of earlier automata found in the Lie Zi text in China in 300 BC [6], to the devices of Heron of Alexandria [7] in the 1st century. The Islamic world also plays an important role in the development of automata; Al-Jazari, an Arab inventor, designed and constructed numerous automatic machines, and is even reputed to have devised the first programmable humanoid robot in 1206 AD [8]. The word “robot”, a Slavic word meaning servitude, was first used in this context by the Czech author Karel Capek in 1921 [9].

However, regarding robots with natural-language conversational abilities, it wasn’t until the 1990s that the first pioneering systems started to appear. Despite the long history of mythology and automata, despite the fact that even the mythological handmaidens of Hephaestus were reputed to have been given a voice [3], and despite the fact that the first general-purpose electronic speech synthesizer was developed by Noriko Umeda in Japan in 1968 [10], it wasn’t until the early 1990s that conversational robots such as MAIA [11], RHINO [12], and AESOP [13] appeared. These robots cover a range of intended application domains; for example, MAIA was intended to carry objects and deliver them, while RHINO was a museum guide robot, and AESOP a surgical robot.

In more detail, the early systems include Polly, a robotic guide that could give tours in offices [14, 15]. Polly had very simple interaction capabilities; it could perceive human feet waving a “tour wanted” signal, and it would then just use pre-determined phrases during the tour itself. A slightly more advanced system was TJ [16], which could respond to simple commands such as “go left”, albeit given through a keyboard. RHINO, on the other hand [12], could respond to tour-start commands, but then, again, just offered a pre-programmed tour with fixed, programmer-defined verbal descriptions. Regarding mobile assistant robots with conversational capabilities in the 1990s, a classic system is MAIA [11, 17], which obeyed simple commands and carried objects around, as well as the mobile office assistant described in [18], which could not only deliver parcels but also guide visitors, and the functionally similar Japanese-language robot Jijo-2 [19, 20, 21]. Finally, an important book from the period is [22], which is characteristic of the traditional natural-language-semantics-inspired theoretical approaches to the problem of human-robot communication, and also of the great gap between the theoretical proposals and the actually implemented systems of this early decade.

What is common to all the above early systems is that they share a number of limitations. First, all of them accept only a fixed and small number of simple canned commands, and they respond with a set of canned answers. Second, the only speech acts (in the sense of Searle [23]) that they can handle are requests. Third, the dialogue they support is clearly not flexibly mixed-initiative; in most cases it is just human-initiative. Fourth, they do not really support situated language, i.e. language about their physical situations and the events happening around them, except for a fixed number of canned location names in a few cases. Fifth, they are not able to handle affective speech; i.e. emotion-carrying prosody is neither recognized nor generated. Sixth, their non-verbal communication [24] capabilities are almost non-existent; for example, gestures, gait, facial expressions, and head nods are neither recognized nor produced. Seventh, their dialogue systems are usually effectively stimulus-response or stimulus-state-response systems; i.e. no real speech planning or purposeful dialogue generation takes place, and certainly not in conjunction with the motor planning subsystems of the robot. Last but quite importantly, no real learning, offline or on-the-fly, takes place in these systems; verbal behaviors have to be prescribed.

All of these shortcomings of the early systems of the 1990s have effectively become desiderata for the next two decades of research: the 2000s and 2010s, which we are in at the moment. Thus, in this paper, we will start by motivating the need for interactive robots with natural human-robot communication capabilities, and then we will enumerate a number of desiderata for such systems, which have also effectively become areas of active research in the last decade. Then, we will examine these desiderata one by one, and discuss the research that has taken place towards their fulfillment. Special consideration will be given to the so-called “symbol grounding problem” [25], which is central to most endeavors towards natural language communication with physically embodied agents, such as robots. Finally, after a discussion of the most important open problems for the future, we will provide a concise conclusion.

II Motivation: Interactive Robots with Natural Language capabilities – but why?

There are at least two avenues towards answering this fundamental question, and both will be attempted here. The first avenue will attempt to start from first principles and derive a rationale towards equipping robots with natural language. The second, more traditional and safer avenue, will start from a concrete, yet partially transient, base: application domains, existing or potential. In more detail:

Traditionally, there used to be a clear separation between the design and deployment phases for robots. Application-specific robots (for example, manufacturing robots, such as [26]) were: (a) designed by expert designers, (b) possibly tailor-programmed and occasionally reprogrammed by specialist engineers at their installation site, and (c) interacted with their environment as well as with specialized operators during actual operation. However, this traditional setting, despite its apparent simplicity, is often changing nowadays because of its accompanying inflexibility and cost. For example, one might want to have broader-domain and less application-specific robots, necessitating more generic designs, as well as less effort by the programmer-engineers on site, in order to cover the various contexts of operation. Even better, one might want to rely less on specialized operators, and to have robots interact and collaborate with non-expert humans with little if any prior training. Ideally, even the actual traditional programming and re-programming might also be transferred over to non-expert humans, and instead of programming in a technical language, be replaced by intuitive tuition through demonstration, imitation and explanation [27, 28, 29]. Learning by demonstration and imitation for robots already has quite some active research; but most examples cover only motor aspects of learning, and language and communication are not deeply involved.

And this is exactly where natural language and other forms of fluid and natural human-robot communication enter the picture: Unspecialized non-expert humans are used to (and quite good at) teaching and interacting with other humans through a mixture of natural language as well as nonverbal signs. Thus, it makes sense to capitalize on this existing ability of non-expert humans by building robots that do not require humans to adapt to them in a special way, and which can fluidly collaborate with other humans, interacting with them and being taught by them in a natural manner, almost as if they were other humans themselves.

Thus, based on the above observations, the following is one classic line of motivation towards justifying efforts for equipping robots with natural language capabilities: Why not build robots that can comprehend and generate human-like interactive behaviors, so that they can cooperate with and be taught by non-expert humans, so that they can be applied in a wide range of contexts with ease? And of course, as natural language plays a very important role within these behaviors, why not build robots that can fluidly converse with humans in natural language, also supporting crucial non-verbal communication aspects, in order to maximize communication effectiveness, and enable their quick and effective application?

Thus, having presented the classical line of reasoning arriving at the utility of equipping robots with natural language capabilities, and having discussed a space of possibilities regarding role assignment between human and robot, let us now move to the second, more concrete, albeit less general avenue towards justifying conversational robots: namely, specific applications, existing or potential. Such applications, where natural human-robot interaction capabilities with verbal and non-verbal aspects would be desirable, include: flexible manufacturing robots; lab or household robotic assistants [30, 31, 32, 33]; assistive robotics and companions for special groups of people [34]; persuasive robotics (for example, [35]); robotic receptionists [36]; robotic educational assistants; robotic wheelchairs [37]; companion robots [38]; all the way to more exotic domains, such as robotic theatre actors [39], musicians [40], dancers [41], etc.

In all of the above applications, although there is quite some variation regarding requirements, at least one aspect is shared: the desirability of natural, fluid interaction with humans, supporting natural language and non-verbal communication, possibly augmented with other means. Of course, although this might be desired, it is not always justified as the optimum choice, given the technico-economic constraints of every specific application setting. A thorough analysis of such constraints, together with a set of guidelines for deciding when natural-language interaction is justified, can be found in [42].

Now, having examined, across two avenues, the justifications for natural language and other human-like communication capabilities in robots, let us become more specific: natural language, indeed – but what capabilities do we actually need?

III Desiderata - What might one need from a conversational Robot?

An initial list of desiderata is presented below, which is neither totally exhaustive nor absolutely orthogonal; however, it serves as a good starting point for discussing the state of the art, as well as the potential of each of the items:

D1) Breaking the “simple commands only” barrier

D2) Multiple speech acts

D3) Mixed initiative dialogue

D4) Situated language and the symbol grounding problem

D5) Affective interaction

D6) Motor correlates and Non-Verbal Communication

D7) Purposeful speech and planning

D8) Multi-level learning

D9) Utilization of online resources and services

D10) Miscellaneous abilities

The particular order of the desiderata was chosen for the purpose of illustration, as it partially provides for a building-up of key points, while also allowing for some tangential deviations.

III-A Breaking the “simple commands only” barrier

The traditional conception of conversational robots, as well as most early systems, is based on a clear human-master/robot-servant role assignment, and in most cases restricts the robot’s conversational competencies to simple “motor command requests” only. A classic example can be seen in systems such as [30], where a typical dialogue might be:

H: “Give me the red one”

R: (Picks up the red ball, and gives it to the human)

H: “Give me the green one”

R: “Do you mean this one, or that one?” (robot points to two possible candidate objects)

H: “The one on the left”

R: (Picks up the green ball on the left, and hands it over to the human)

What are the main points worth noticing in this example? Well, first of all, (p1) this is primarily a single-initiative dialogue: the human drives the conversation, with the robot effectively just producing motor and verbal responses to the human verbal stimulus. Second, (p2) apart from some disambiguating questions accompanied by deixis, there is not much that the robot says; the robot primarily responds with motor actions to the human requests, and does not speak. Third, (p3) regarding the human statements, we only have one type of speech act [23]: RequestForMotorAction. Furthermore, (p4) such systems are usually quite inflexible regarding multiple surface realizations of the acceptable commands; i.e. the human is allowed to say “Give me the red one”, but if he instead used the elliptical “the red object, please”, he might have been misinterpreted. And (p5) in most cases, the mapping of words to responses is arbitrarily chosen by the designer; i.e. motor verbs translate to what the designer thinks they should mean for the robot (normative meaning), instead of what an empirical investigation would show regarding what other humans would expect them to mean (empirical meaning).

Historically, advanced theorization for such systems exists as early as [22], and there is still quite a stream of active research which, although based on beautiful and systematic formalizations and eloquent grammars, basically produces systems that would still fall within the points mentioned above. Such an example is [43], in which a mobile robot in a multi-room environment can handle commands such as: “Go to the breakroom and report the location of the blue box”.

Notice that here we are not claiming that the research falling within this strand is unimportant; we are just mentioning that, as we shall see, there are many other aspects of natural language and robots which are left unaccounted for by such systems. Furthermore, it remains to be seen how many of these aspects can later be effectively integrated with systems belonging to this strand of research.

III-B Multiple speech acts

The limitations (p1)-(p5) cited above for the classic “simple commands only” systems provide useful departure points for extensions. Speech act theory was introduced by J. L. Austin [44], and a speech act is usually defined as an utterance that has performative function in language and communication. Thus, we focus on the function and purpose of the utterance, instead of its content and form. Several taxonomies of utterances can be derived according to such a viewpoint: for example, Searle [45] proposed a classification of illocutionary speech acts into assertives, directives, commissives, expressives, and declarations. Computational models of speech acts have been proposed for use in human-computer interaction [46].

In this light of speech acts, let us start by extending upon point (p3) made in the previous section. In the short human-robot dialogue presented there, the human utterances “Give me the red one” and “Give me the green one” could be classified as Request speech acts, and more specifically requests for motor action (one could also have requests for information, such as “What color is the object?”, etc.). But what else might one desire in terms of speech act handling capabilities, apart from RequestForMotorAction (which we shall call SA1, a Directive according to [45])? Some possibilities follow below:

H: “How big is the green one?” (RequestForInformAct, SA2, Directive)

H: “There is a red object at the left” (Inform, SA3, Assertive)

H: “Let us call the small doll Daisy” (Declare, SA4, Declaration)

And many more exist. Systems such as [47] are able to handle SA2- and SA3-type acts apart from SA1-type acts; and one should also notice that there are many classificatory systems for speech acts, across different axes of classification, and with multiple granularities. Also, it is worth starting at this stage to contemplate what it might mean to respond appropriately to different kinds of speech acts. For example, an appropriate response to a RequestForMotorAction (a Directive) is the motor action itself, if unambiguous and feasible; however, an appropriate response to an Assertive or a Declaration consists of a change to some form of “mental model” [48] or “situation model” [49] [47] that the robot might be keeping; i.e. creating an appropriate mental token for an object in the case of “There is a red object at the left”, or changing the name label of a mental object token in the case of “Let us call this small doll Daisy”; i.e. both statements primarily elicit internal (mental) actions, instead of external (motor or verbal) actions.
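The distinction between external (motor or verbal) and internal (mental-model) responses can be illustrated with a minimal dispatch sketch. The act labels follow the SA1-SA4 examples above; the handler logic and the toy situation model are illustrative assumptions, not an implemented system from the literature surveyed:

```python
# Minimal sketch: dispatching speech acts to external (motor/verbal)
# vs. internal (mental-model) actions. Act labels follow SA1-SA4 in
# the text; the handlers and the toy situation model are assumptions.

situation_model = {"objects": []}   # toy "mental model" of the world
motor_log = []                      # records external (motor) actions

def handle(act_type, content):
    if act_type == "RequestForMotorAction":       # SA1, Directive
        motor_log.append("execute: " + content)   # external motor action
        return None
    if act_type == "RequestForInformAct":         # SA2, Directive
        return "answer about: " + content         # external verbal action
    if act_type == "Inform":                      # SA3, Assertive:
        situation_model["objects"].append(        # internal action --
            {"desc": content, "name": None})      # create a mental token
        return None
    if act_type == "Declare":                     # SA4, Declaration:
        desc, name = content                      # internal action --
        for token in situation_model["objects"]:  # relabel a mental token
            if token["desc"] == desc:
                token["name"] = name
        return None
    return None

handle("Inform", "red object at the left")
handle("Declare", ("red object at the left", "Daisy"))
handle("RequestForMotorAction", "give me the red one")
```

Note how the Assertive and the Declaration leave no trace in the motor log: their entire effect is a change to the robot's internal state.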

Another relevant aspect of speech act theory is the handling of indirect speech acts. For example, consider the following utterance:

H: “Ah, it is quite hot in this room” (phenomenally, an Assertive),

which might actually be a polite way of saying:

H: “Open the window” (essentially, a Directive).

This substitution of an Assertive for an implied Directive (to be inferred by the listener) is a classic example of an indirect speech act. Usually, the analysis of such acts is based on the Gricean maxims of conversation [50]; and numerous computational implementations for handling such indirect speech acts have been proposed, such as [51].

Finally, yet another problem related to speech acts is the issue of their classification by the robot, after hearing them. Classic techniques such as those described in [52] rely on linguistic information only; however, paralinguistic information (such as prosodic features) can also prove useful for speech act classification; the interested reader is referred, for example, to [53].

III-C Mixed Initiative Dialogue

Now, starting again from the shortcomings of the traditional “simple commands only” systems, let us extend across another axis, namely (p1): human-initiative dialogue is not the only option; one could also have robot-initiative, or ideally, full mixed-initiative dialogue. Consider FaceBots [54] [55], a conversational robot utilizing Facebook-derived information. A typical dialogue might include turns such as:

R: “Hello! Are you Noura Dhaheri?”

H: “Yes”

R: “Hello Noura! Are you doing well today?”

H: “Pretty well!”

R: “Great! Do you know that our friend Wajahat Kazmi has posted a new photo?”

H: “No!”

R: “Well, I am sending you an inbox message for this, check it out!”

This is a classic example of robot-initiative dialogue, actually with very simple responses expected from the human – mainly Yes or No. On the other hand, consider a dialogue such as the following (from the robot BIRON at Bielefeld, dialogue in [56]), which is an example of a dialogue with limited mixed initiative:

R: “Hello Human!”

H: “Hello! Who are you?”

R: “My name is BIRON, if you show me something I can remember it.”

H: “OK, um, look, this is a mug.”

R: “I beg your pardon?”

H: “This is a mug, umm, here.”

R: “Sorry, can you use some other words?”

H: “This is a mug!”

R: “Sorry, I cannot find it. Are you pointing to it with your finger?”

H: “Yes!”

R: “I know it is sometimes difficult with me, but don’t feel discouraged!”

H: (laugh)

R: “OK, I have found it, it is really nice!”

Here, it is neither the robot nor the human driving the dialogue all of the time; for example, the opening pair R-H is robot-initiative (R: “Hello Human!” H: “Hello!”). However, directly afterwards, the second half of H together with R creates a second pair of exchanges, which is human-initiative (H: “Who are you?”, R: “My name is BIRON…”). And thus the initiative can be reversed at multiple points throughout the dialogue.
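One simple way to make this notion concrete is to attribute each adjacency pair of the dialogue to whoever opened it; a dialogue then counts as mixed-initiative when both parties open pairs. The sketch below is an illustrative simplification (the turn representation and labels are assumptions, not taken from the systems cited):

```python
# Minimal sketch: tagging initiative per adjacency pair. The initiator
# of a pair is whoever produced its opening turn; the pair lists below
# are simplified renderings of the FaceBots and BIRON excerpts.

def initiative_profile(pairs):
    """pairs: list of (opener, exchange) with opener in {'R', 'H'}."""
    openers = {opener for opener, _ in pairs}
    if openers == {"R", "H"}:
        return "mixed-initiative"
    return "robot-initiative" if openers == {"R"} else "human-initiative"

facebots_pairs = [
    ("R", "Are you Noura Dhaheri? / Yes"),
    ("R", "Do you know that our friend has posted a new photo? / No!"),
]
biron_pairs = [
    ("R", "Hello Human! / Hello!"),
    ("H", "Who are you? / My name is BIRON ..."),
    ("H", "Look, this is a mug. / I beg your pardon?"),
]
```

Under this toy criterion the FaceBots excerpt comes out robot-initiative, while the BIRON excerpt comes out mixed-initiative, matching the discussion above.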

For an investigation of the state of the art in mixed initiative, the interested reader is referred to examples such as the Karlsruhe Humanoid [57], the BIRON and BARTHOC systems at Bielefeld [56], and also workshops such as [58].

III-D Situated Language and Symbol Grounding

Yet another shortcoming of the traditional command-only systems that is worth extending from was point (p5) mentioned above: the meanings of the utterances were normatively decided by the designer, and not based on empirical observations. For example, a designer/coder could normatively pre-define the semantics of the color descriptor “red” as belonging to the range between two specific given values. Alternatively, one could empirically derive a model of the applicability of the descriptor “red” based on actual human usage, by observing the human usage of the word in conjunction with the actual apparent color wavelength and the context of the situation. Furthermore, the actual vocabularies (“red”, “pink”, etc.), or the classes of multiple surface realizations (p4) (quasi-synonyms or semantically equivalent parts of utterances, for example: “give me the red object”, “hand me the red ball”), are usually hand-crafted in such systems, and again not based on systematic human observation or experiment.

There are a number of notable exceptions to this rule, and recently there has been a growing tendency to overcome these two limitations. For example, consider [59], in which a wizard-of-oz experiment provided for the collection of vocabulary from users desiring to verbally interact with a robotic arm, and examples such as [37], in which the actual context-dependent action models corresponding to simple verbal commands like “go left” or “go right” (which might imply quite different expected actions, depending on the surrounding environment) were learnt empirically through human experiments.
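The normative-versus-empirical contrast can be made concrete with a toy model: instead of the designer hard-coding a hue range for “red”, a graded applicability model is fit to hue values at which humans actually used the word. The sample data and the Gaussian form below are hypothetical, chosen only to illustrate the idea:

```python
# Minimal sketch: an empirical model of the color word "red", fit to
# (made-up) hue values at which human subjects used the word, instead
# of a normative range hard-coded by the designer.
import math

def fit_gaussian(samples):
    mu = sum(samples) / len(samples)
    var = sum((x - mu) ** 2 for x in samples) / len(samples)
    return mu, var

def applicability(hue, mu, var):
    """Graded applicability of the word to an observed hue, in (0, 1]."""
    return math.exp(-((hue - mu) ** 2) / (2.0 * var))

# Hues (in degrees) labeled "red" by subjects -- hypothetical data.
red_hues = [355, 2, 8, 350, 5, 358]
# Shift the circular hue scale so the red region is contiguous.
linear = [h - 360 if h > 180 else h for h in red_hues]
mu, var = fit_gaussian(linear)

# Hue 0 should come out strongly "red"; hue 60 (yellow) should not.
strongly_red = applicability(0, mu, var)
not_red = applicability(60, mu, var)
```

The point is that the boundary of “red” now follows the observed usage distribution, and can be refit as more human data arrives, rather than being fixed once by the designer.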

Embarking upon this avenue of thought, it slowly becomes apparent that the connection between local environment (and more generally, situational context) and procedural semantics of an utterance is quite crucial. Thus, when dealing with robots and language, it is impossible to isolate the linguistic subsystems from perception and action, and just plug-and-play with a simple speech-in speech-out black box chatterbot of some sort (such as the celebrated ELIZA [60] or even the more recent victors of the Loebner Prize [61]). Simply put, in such systems, there is no connection of what is being heard or said to what the robot senses and what the robot does. This is quite a crucial point; there is a fundamental need for closer integration of language with sensing, action, and purpose in conversational robots [30] [47], as we shall also see in the next sections.

III-D1 Situated Language

Upon discussing the connection of language to the physical context, another important concept becomes relevant: situated language, and especially the language that children primarily use during their early years; i.e. language that is not abstract or about past or imagined events, but rather concrete, and about the physical here-and-now. But what is the relevance of this observation to conversational robots? One possibility is the following: given that there seems to be a progression of increasing complexity in human linguistic development, often in parallel with a progression of cognitive abilities, it seems reasonable to first partially mimic the human developmental pathway, and thus start by building robots that can handle such situated language, before moving on to a wider spectrum of linguistic abilities. This is, for example, the approach taken in [47].

Choosing situated language as a starting point also creates a suitable entry point for discussing language grounding in the next section. Now, another question that naturally follows is: could one postulate a number of levels of extensions from language about the concrete here-and-now to wider domains? This is attempted in [47], and the levels of increasing detachment from the “here-and-now” postulated there are:

First level: limited only to the “here-and-now, existing concrete things”. Words connect to things directly accessible to the senses at the present moment. If there is a chair behind me, although I might have seen it before, I cannot talk about it - “out of sight” means “non-existing” in this case. For example, such a robotic system is [62].

Second level: (“now, existing concrete things”); we can talk about the “now”, but we are not necessarily limited to the “here” - where “here” means currently accessible to the senses. We can talk about things that have come to our senses previously, and that we conjecture still exist through some form of psychological “object permanence” [63] - i.e., we are keeping some primitive “mental map” of the environment. For example, this was the state of the robot Ripley during [64].

Third level: (“past or present, existing concrete things”); we are also dropping the requirement of the “now” - in this case, we also possess some form of episodic memory [65] enabling us to talk about past states. An example robot implementation can be found in [66].

Fourth level: (“imagined or predicted concrete things”); we are dropping the requirement of actual past or present existence, and we can talk about things with the possibility of actual existence - either predicted (connectible to the present) or imagined [47].

Fifth level: (“abstract things”) we are not talking about potentially existing concrete things any more, but about entities that are abstract. But what is the criterion of “concreteness”? A rough possibility is the following: a concrete thing is a first-order entity (one that is directly connected to the senses); an “abstract” thing is built upon first order entities, and does not connect directly to the senses, as it deals with relationships between them. Take, for example, the concept of the “number three”: it can be found in an auditory example (“threeness” in the sound of three consecutive ticks); it can also be found in a visual example (“threeness” in the snapshot of three birds sitting on a wire). Thus, threeness seems to be an abstract thing (not directly connected to the senses).

Currently, there exist robots and methodologies [47] that can create systems handling basic language corresponding to the first four stages of detachment from situatedness; however, the fifth seems to still be out of reach. If what we are aiming towards is a robot with a deeper understanding of the meaning of words referring to abstract concepts, then although related work on computational analogy making (such as [67]) could provide some starting points for extensions towards such domains, this is still beyond the current state of the art.

Nevertheless, two interesting points have arisen in the previous sections: first, that when discussing natural language and robots, there is a need to connect language not only to sensory data, but also to internalized “mental models” of the world – in order, for example, to deal with detachment from the immediate “here-and-now”. And second, that one needs to consider not only the phonological and syntactic levels of language, but also questions of semantics and meaning, and pose the question: “what does it mean for a robot to understand a word that it hears or utters?” And also, more practically: what are viable computational models of the meaning of words, suitable for embodied conversational robots? We will try to tackle these questions in the next subsection.

III-D2 Symbol Grounding

One of the main philosophical problems that arises when trying to create embodied conversational robots is the so-called “symbol grounding problem” [25]. In simple terms, the problem is the following: imagine a robot with an apple in front of it, hearing the word “apple” – a verbal label which is a conventional sign (in semiotic terms [68] [69]), and which is represented by a symbol within the robot’s cognitive system. Now this sign is not irrelevant to the actual physical situation; the human that uttered the word “apple” was using it to refer to the physical apple that is in front of the robot. The problem that arises is the following: how can we connect the symbol standing for “apple” in the robot’s cognitive system with the physical apple that it refers to? Or, in other words, how can we ground out the meaning of the symbol to the world? This is, in simple terms, an example of the symbol grounding problem. Of course, it extends not only to objects signified by nouns, but also to properties, relations, events, etc., and there are many other extensions and variations of it.

So, what are the solutions relevant to the problem? In the case of embodied robots, for the simple case described above, the connection between the internal cognitive system of the robot (where the sign is) and the external world (where the referent is) is mediated through the sensory system. Thus, in order to ground out the meaning, one needs to connect the symbol to the sensory data – say, to vision. That is, at the least, one needs to find a mechanism that achieves the following bidirectional connection: first, when an apple appears in the visual stream, an apple symbol is instantiated in the cognitive system (which can later, for example, trigger the production of the word “apple” by the robot); and second, when an apple symbol is instantiated in the cognitive system (for example, because the robot heard that “there is an apple”), an expectation is created regarding the contents of the sensory stream, given that an apple is reported to be present. This bidirectional connection can be succinctly summarized as:

external referent → sensory stream → internal symbol → produced utterance

external referent ← sensory expectation ← internal symbol ← heard utterance

We will refer to this bidirectional connection as “full grounding”, and to its first unidirectional part as “half grounding”. Some notable papers presenting computational solutions of the symbol grounding problem for the case of robots are: the half-grounding of colors and shapes for the Toco robot [62], and the full-grounding of multiple properties for the Ripley robot [30]. Highly relevant work includes [70], and also Steels [71], [72], [73], as well as [74] from a child lexical perspective.
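A minimal computational rendering of this bidirectional connection might keep a single category model per word and use it in both directions. The feature space, prototype values, and threshold below are hypothetical, intended only to make the two directions of grounding explicit:

```python
# Minimal sketch of "full grounding" for one word: a single category
# model serves both directions -- percept -> symbol/word (the half
# grounding), and heard word -> sensory expectation (completing full
# grounding). Features, prototype, and threshold are assumptions.

APPLE_PROTOTYPE = {"hue": 10.0, "roundness": 0.9}  # hypothetical model

def distance(percept):
    # Crude perceptual distance to the prototype (hue normalized).
    d_hue = abs(percept["hue"] - APPLE_PROTOTYPE["hue"]) / 360.0
    d_round = abs(percept["roundness"] - APPLE_PROTOTYPE["roundness"])
    return d_hue + d_round

def percept_to_word(percept, threshold=0.1):
    """Direction 1: sensory stream -> internal symbol -> utterance."""
    return "apple" if distance(percept) < threshold else None

def word_to_expectation(word):
    """Direction 2: heard utterance -> internal symbol -> expectation
    about the contents of the sensory stream."""
    return APPLE_PROTOTYPE if word == "apple" else None
```

A system with only `percept_to_word` would be half-grounded in the sense above; adding `word_to_expectation`, driven by the same model, closes the loop.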

The grounding of spatial relations (such as “to the left of”, “inside”, etc.) deserves special attention, as it is a significant field in its own right. A classic paper is [75], presenting an empirical study modeling the effect of central and proximal distance on 2D spatial relations; regarding the generation and interpretation of referring expressions on the basis of landmarks for a simple rectangle world, there is [76], while the book [77] extends well into illustrating the inadequacy of geometrical models and the need for functional models when grounding terms such as “inside”, and covers a range of relevant interesting subjects. Furthermore, regarding the grounding of attachment and support relations in videos, there is the classic work of [78]. For an overview of recent spatial semantics research, the interested reader is referred to [79], and a sampler of important current work in robotics includes [80], [81], [82], as well as the most recent work of Tellex on grounding with probabilistic graphical models [83] and on learning word meanings from unaligned parallel data [84].

Finally, an interesting question arises when trying to ground out personal pronouns, such as “me, my, you, your”. Regarding their use as modifiers of spatial terms (“my left”), relevant work on a real robot is [64], and regarding more general models of their meaning, the reader is referred to [85], where a system learns the semantics of the pronouns through examples.

A number of papers have recently appeared claiming to have provided a solution to the “symbol grounding problem”, such as [86]. There is, though, a variety of different opinions regarding what an adequate solution should accomplish. A stream of work around an approach dealing with the evolution of language and semiotics is outlined in [87]. From a more applied and practical point of view, one would like to have grounded ontologies [88] [89], or even robot-usable lexica augmented with computational models providing such grounding: this is the ultimate goal of the EU project POETICON [90] [91] and its follow-up project POETICON II.

Another important aspect regarding grounding is the set of qualitatively different possible target meaning spaces for a concept. For example, [47] proposes three different types of meaning spaces: sensory, sensorimotor, and teleological. A number of other proposals for meaning spaces exist in cognitive science, though not directly related to grounding; for example, the geometrical spaces of Gardenfors [92]. Furthermore, any long-ranging agenda towards extending symbol grounding to an ever-increasing range of concepts needs to address yet another important point: semantic composition. For a very simple example, consider how a robot could combine a model of “red” with a model of “dark” in order to derive a model of “dark red”. Although this is a fundamental issue, as discussed in [47], it has yet to be addressed properly.
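One possible shape such a composition could take (a purely hypothetical sketch in a sensory target space; the prototype values and the choice of operator are illustrative assumptions, not an established solution) is to model “red” as a point in HSV color space and “dark” as an operator on such models:

```python
def red_prototype():
    """Hypothetical sensory model of 'red': an HSV prototype
    (hue in degrees, saturation, value), values illustrative."""
    return (0.0, 0.9, 0.8)


def dark(color_hsv):
    """Hypothetical model of the modifier 'dark': an operator on
    color models that reduces the value (brightness) component."""
    h, s, v = color_hsv
    return (h, s, v * 0.5)


# Semantic composition: applying the 'dark' operator to the
# 'red' model yields a derived model of 'dark red'.
dark_red = dark(red_prototype())
```

The hard open problem, of course, is that real modifiers do not compose this uniformly: “dark” acts differently on “red” than, say, “light” acts on “skinned”, which is why [47] flags composition as unresolved.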

Last but not least, regarding the real-world acquisition of large-scale models of grounding in practice, special data-driven models are required, and the quantities of empirical data required make the collection of such data from non-experts (ideally online) highly desirable. In that direction, there exists the pioneering work of Gorniak [73], where a specially modified computer game allowed the collection of referential and functional models of meaning of the utterances used by the human players. This was followed up by [93] [94] [95], in which specially designed online games allowed the acquisition of scripts for situationally appropriate dialogue production. These experiments can be seen as a special form of crowdsourcing, building upon the ideas started by pioneering systems such as Luis von Ahn’s Peekaboom game [96], but especially targeting the situated dialogic capabilities of embodied agents. Much more remains to be done in this promising direction in the future.

III-D3 Meaning Negotiation

Having introduced the concept of non-logic-like grounded models of meaning, another interesting complication arises. Given that different conversational partners might have different models of meaning, say for the lexical semantics of a color term such as “pink”, how is communication possible? A short, yet minimally informative, answer would be: given enough overlap of the particular models, there should be enough shared meaning for communication. But if one examines a number of typical cases of misalignment across models, one soon reaches the realization that models of meaning, or even second-level models (beliefs about the models that others hold), are very often negotiated and adjusted online, during a conversation. For example:

(Turquoise object on robot table, in front of human and robot)

H: “Give me the blue object!”

R: “No such object exists”

H: “Give me the blue one!”

R: “No such object exists”

But why is this surreal human-robot dialog taking place, and why would it not have taken place for the case of two humans in a similar setting? Let us analyze the situation. The object on the table is turquoise, a color which some people might classify as “blue”, and others as “green”. The robot’s color classifier has learnt to treat turquoise as green; the human classifies the object as “blue”. Thus, we have a categorical misalignment error, as defined in [47]. For the case of two humans interacting instead of a human and a robot, given the non-existence of another unique referent satisfying the “blue object” description, the second human would readily have assumed that the first human is most probably classifying turquoise as “blue”; thus, he would have temporarily adjusted his model of meaning for “blue” to include turquoise, aligning his communication with that of his conversational partner. Ideally, we would therefore like conversational robots that can gracefully recover from such situations, and fluidly negotiate their models of meaning online. Once again, this is an as-yet unexplored, yet crucial and highly promising, avenue for future research.
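The recovery strategy described above can be sketched as follows; the scene, hue values, category boundaries, and relaxation tolerance are all hypothetical illustrations of the idea, not a proposed algorithm:

```python
# One turquoise object on the table (hue value hypothetical).
SCENE = [{"id": 1, "hue": 175}]

# The robot's color model: turquoise (175) falls under "green".
COLOR_RANGES = {"green": (90, 180), "blue": (180, 270)}


def resolve(color, scene, ranges, tolerance=0):
    """Return all scene objects matching the color term,
    optionally with relaxed category boundaries."""
    lo, hi = ranges[color]
    return [o for o in scene if lo - tolerance <= o["hue"] < hi + tolerance]


def resolve_with_negotiation(color, scene, ranges):
    """First try the robot's own model; if no referent is found,
    temporarily relax the boundary, as a human listener would
    when accommodating the speaker's usage of the term."""
    matches = resolve(color, scene, ranges)
    if matches:
        return matches
    return resolve(color, scene, ranges, tolerance=15)
```

With a strict model, “the blue object” resolves to nothing; with negotiation, the unique turquoise object is accepted as the intended referent, avoiding the surreal exchange above.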

III-E Affective Interaction

An important dimension of cognition is the affective/emotional. In the German psychological tradition of the 18th century, the affective was part of the tripartite classification of mental activities into cognition, affection, and conation; and apart from the widespread use of the term, the influence of the tripartite division extended well into the 20th century [97].

The affective dimension is very important in human interaction [98], because it is strongly intertwined with learning [99], persuasion [100], and empathy, among many other functions. Thus, it carries over its high significance for the case of human-robot interaction. For the case of speech, affect is marked both in the semantic/pragmatic content as well as in the prosody of speech: and thus both of these ideally need to be covered for effective human-robot interaction, and also from both the generation as well as recognition perspectives. Furthermore, other affective markers include facial expressions, body posture and gait, as well as markers more directly linked to physiology, such as heart rate, breathing rate, and galvanic skin response.

Pioneering work towards affective human-robot interaction includes [101] where, extending upon analogous research from virtual avatars such as Rea [102], Steve [103], and Greta [104], Cynthia Breazeal presents an interactive emotion and drive system for the Kismet robot [105], which is capable of multiple facial expressions. An interesting cross-linguistic emotional speech corpus arising from children’s interactions with the Sony AIBO robot is presented in [106]. Another example of preliminary work based on a Wizard-of-Oz approach, this time regarding children’s interactions with the ATR Robovie robot in Japan, is presented in [107]; in this paper, automatic recognition of embarrassment or pleasure of the children is demonstrated. Regarding interactive affective storytelling with robots with generation and recognition of facial expressions, [108] presents a promising starting point. Recognition of human facial expressions is accomplished there through SHORE [109], as well as Seeing Machines’ FaceAPI product. Other available facial expression recognition systems include [110], which has also been used as an aid for autistic children, as well as [111], and [112], where the output of the system is at the level of facial action coding (FACS). Regarding generation of facial expressions for robots, some examples of current research include [113], [114], [115]. Apart from static poses, the dynamics of facial expressions are also very important towards conveying believability; for empirical research on dynamics see for example [116]. Still, compared to the wealth of available research on the same subject with virtual avatars, there is still a lag both in empirical evaluations of human-robot affective interaction, as well as in importing existing tools from avatar animation towards their use for robots.

Regarding some basic supporting technologies of affect-enabled text-to-speech and speech recognition, the interested reader can refer to the general reviews by Schroeder [117] on TTS, and by Ververidis and Kotropoulos [118] on recognition. A wealth of other papers on the subject exist, with some notable developments for affective speech-enabled real-world robotic systems including [119] [120]. Furthermore, if one moves beyond prosodic affect to semantic content, the wide literature on sentiment analysis and shallow identification of affect applies directly; for example [121] [122] [123]. Regarding physiological measurements, products such as Affectiva’s Q sensor [124], or techniques for measuring heart rate, breathing rate, galvanic skin response and more, could well become applicable to the human-robot affective interaction domain, of course under the caveats of [125]. Finally, it is worth noting that significant cross-cultural variation exists regarding affect, both at the generation level as well as at the understanding and situational-appropriateness levels [126]. In general, affective human-robot interaction is a growing field with promising results, which is expected to grow even more in the near future.

III-F Motor correlates of speech and non-verbal communication

Verbal communication in humans does not come isolated from non-verbal signs; in order to achieve even the most basic degree of naturalness, any humanoid robot needs, for example, at least some lip-movement-like feature to accompany speech production. Apart from lip-syncing, many other human motor actions are intertwined with speech and natural language: for example, head nods, deictic gestures, gaze movements, etc. Also, note that the term “correlates” is somewhat misleading; for example, the gesture channel can be more accurately described as a complementary channel rather than a channel correlated with or just accompanying speech [127]. Furthermore, we are not interested only in the generation of such actions, but also in their combination, as well as in dialogic/interactional aspects.

Let us start by examining the generation of lip syncing. The first question that arises is: should lip sync actions be generated from phoneme-level information, or is the speech soundtrack adequate? Simpler techniques rely on the speech soundtrack only, the simplest solution being to utilize only the loudness of the soundtrack and map directly from loudness to mouth opening. There are many shortcomings to this approach; for example, a nasal “m” usually has large apparent loudness, although in humans it is produced with a closed mouth. Generally, the resulting lip movements of this method are perceivably unnatural. As an improvement, one can use spectrum matching of the soundtrack to a set of reference sounds, as in [128, 129], or, even better, a linear prediction speech model, as in [130]. Furthermore, apart from the generation of lip movements, their recognition can be quite useful for improving speech recognition performance under low signal-to-noise ratio conditions [131]. There is also ample evidence that humans utilize lip information during recognition; a celebrated example is the McGurk effect [132]. The McGurk effect is an instance of so-called multi-sensory perception phenomena [133], which also include other interesting cases such as the rubber hand illusion [134].
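The simplest loudness-to-mouth-opening method just described can be sketched in a few lines; the frame size and scaling constant are hypothetical, and, as noted above, a loud nasal “m” would wrongly open the mouth under this scheme:

```python
import math


def rms(frame):
    """Short-time loudness of one audio frame, as root mean square
    of its samples (samples assumed normalized to [-1, 1])."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))


def mouth_opening(frame, max_rms=0.5):
    """Map frame loudness directly to a normalized mouth-opening
    command in [0, 1]; max_rms is a hypothetical scaling constant."""
    return min(rms(frame) / max_rms, 1.0)
```

A silent frame yields a closed mouth, and loudness at or above `max_rms` saturates at a fully open mouth; the spectrum-matching and linear-prediction methods of [128, 129, 130] refine exactly this crude mapping.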

Now, let us move on to gestures. The simplest form of gestures which are also directly relevant to natural language are deictic gestures, pointing towards an object and usually accompanied by indexicals such as “this one!”. Such gestures have long been utilized in human-robot interaction, starting from virtual avatar systems such as Kris Thorisson’s Gandalf [135], and continuing all the way to robots such as ACE (Autonomous City Explorer) [136], a robot that was able to navigate through Munich by asking pedestrians for directions. There exists quite a number of other types of gestures, depending on the taxonomy one adopts, such as iconic gestures, symbolic gestures, etc. Furthermore, gestures are highly important towards teaching and learning in humans [137]. Apart from McNeill’s seminal psychological work [127], a definitive reference to gestures, communication, and their relation to language, albeit regarding virtual-avatar Embodied Conversational Agents (ECA), can be found in Justine Cassell’s work, including [138, 139]. Many open questions exist in this area; for example, regarding the synchronization between speech and the different non-verbal cues [140], and regarding socio-pragmatic influences on the non-verbal repertoire.

Another important topic for human-robot interaction is eye gaze coordination and shared attention. Eye gaze cues are important for coordinating collaborative tasks [141, 142], and eye gazes are an important subset of non-verbal communication cues that can increase efficiency and robustness in human-robot teamwork [143]. Furthermore, eye gaze is very important in disambiguating referring expressions without the need for hand deixis [144, 145]. Shared attention mechanisms develop in humans during infancy [146], and Scassellati authored the pioneering work on shared attention in robots in 1996 [147], followed up by [148]. A developmental viewpoint is also taken in [149], as well as in [150]. A well-cited probabilistic model of gaze imitation and shared attention is given in [151]. In virtual avatars, considerable work has also taken place, such as [152, 153].

Eye-gaze observations are also very important towards mind reading and theory of mind [154] for robots, i.e. being able to create models of the mental content and mental functions of other agents’ (human or robot) minds through observation. Children develop a progressively more complicated theory of mind during their childhood [155]. Elemental forms of theory of mind are also very important towards purposeful speech generation; for example, in creating referring expressions, one should ideally take into account the second-order beliefs of one’s conversational partner-listener, i.e. use one’s beliefs regarding what the other person believes, in order to create a referring expression that can be resolved uniquely by the listener. Furthermore, when a robot purposefully issues an inform statement (“there is a tomato behind you”), it should know that the human does not already know that; again, an estimated model of second-order beliefs is required (i.e. what the robot believes the human believes). A pioneering work in theory of mind for robots is Scassellati’s [156, 157]. An early implementation of perspective-shifting synthetic-camera-driven second-order belief estimation for the Ripley robot is given in [47]. Another example of perspective shifting with geometric reasoning for the HRP-2 humanoid is given in [158].
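The use of second-order beliefs for deciding which inform acts are purposeful can be sketched as follows; the belief representation and the scene facts are hypothetical illustrations, far simpler than the synthetic-camera approach of [47]:

```python
# What the robot itself believes about the scene (hypothetical facts).
robot_beliefs = {("tomato", "behind_you"), ("cup", "on_table")}

# Second-order beliefs: what the robot believes the human believes
# (the human can see the cup, but not the tomato behind them).
robot_model_of_human_beliefs = {("cup", "on_table")}


def informative_facts(robot_kb, estimated_human_kb):
    """Facts worth stating: known to the robot, but not (as estimated)
    already known to the human."""
    return robot_kb - estimated_human_kb


def inform_utterances(robot_kb, estimated_human_kb):
    """Generate inform statements only for informative facts."""
    return [f"there is a {obj} {loc}".replace("_", " ")
            for obj, loc in sorted(informative_facts(robot_kb, estimated_human_kb))]
```

Here the robot informs the human only about the tomato, since the cup is already estimated to be part of the human’s beliefs; without the second-order model, both statements would be issued, and the second would be conversationally pointless.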

Finally, a quick note on a related field, which has recently been growing. Children with Autistic Spectrum Disorders (ASD) face special communication challenges. A prominent theory regarding autism hypothesizes theory-of-mind deficiencies in autistic individuals [159, 160]. However, recent research [161, 162, 163, 164] has indicated that specially-designed robots that interact with autistic children could potentially help them improve their communication skills, and potentially transfer these skills over to communicating not only with robots, but also with other humans.

Last but not least, regarding a wider overview of existing work on non-verbal communication between humans, which could readily provide ideas for future human-robot experiments, the interested reader is referred to [24].

III-G Purposeful speech and planning

Traditionally, simple command-only canned-response conversational robots had dialogue systems that could be construed as stimulus-response tables: a set of verbs or command utterances were the stimuli, the responses being motor actions, with a fixed mapping between stimuli and responses. Even much more advanced systems that can support situated language, multiple speech acts, and perspective-shifting theory of mind, such as Ripley [47], can be construed as effectively being (stimulus, state)-to-response maps, where the state of the system includes the contents of the situation model of the robot. What is missing in all of these systems is an explicit modeling of purposeful behavior towards goals.

Since the early days of AI, automated planning algorithms, such as the classic STRIPS [165], and purposeful action selection techniques have been a core research topic. In traditional non-embodied dialogue systems practice, approaches such as Belief-Desire-Intention (BDI) have existed for a while [166], and theoretical models for the purposeful generation of speech acts [167], as well as computational models of speech planning [BookSpeechPlanning], have existed for more than two decades. Also, in robotics, specialized modified planning algorithms have mainly been applied to motor action planning and path planning [165], such as RRT [168] and Fast-Marching Squares [169].

However, the important point to notice here is that, although considerable research exists on motor planning or dialogue planning alone, there are almost no systems or generic frameworks either for effectively combining the two, for mixed speech- and motor-act planning, or, even better, for agent- and object-interaction-directed planners. Notice that motor planning and speech planning cannot be isolated from one another in real-world systems; both types of actions are often interchangeable towards achieving goals, and thus should not be planned by separate subsystems that are independent of one another. For example, if a robot wants to lower its temperature, it could either say “can you kindly open the window?” to a human partner (speech action), or it could move its body, approach the window, and open it (motor action). An exception to this research void of mixed speech-motor planning is [170], where a basic purposeful action selection system for question generation or active sensing act generation is described, implemented on a real conversational robot. However, this is an early and quite task-specific system, and thus much more remains to be done towards real-world, general, mixed speech-act and motor-act action selection and planning for robots.
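The core idea, that speech acts and motor acts should compete within a single action space rather than in separate planners, can be sketched for the window example above; the actions, costs, and goal encoding are all hypothetical illustrations:

```python
# One unified action space containing both speech and motor acts.
ACTIONS = [
    {"name": "say 'can you kindly open the window?'",
     "type": "speech", "achieves": "window_open", "cost": 1.0},
    {"name": "navigate to window and open it",
     "type": "motor", "achieves": "window_open", "cost": 4.0},
]


def select_action(goal, actions, human_present=True):
    """Choose the cheapest action achieving the goal; a speech act
    is only applicable when a cooperative human is present."""
    applicable = [a for a in actions
                  if a["achieves"] == goal
                  and (a["type"] != "speech" or human_present)]
    return min(applicable, key=lambda a: a["cost"])
```

With a human present, asking is cheaper and is selected; with nobody around, the motor act wins. Crucially, this trade-off cannot even be expressed if speech and motion are planned by independent subsystems.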

III-H Multi-level learning

Yet another challenge towards fluid verbal and non-verbal human-robot communication is concerned with learning [171]. But when could learning take place, and what could be and should be learnt? Let us start by examining the when. Data-driven learning can happen at various stages of the lifetime of a system: it could take place a) initially and offline, at design time; or b) during special “learning” sessions, where specific aspects and parameters of the system are updated; or c) during normal operation of the system, in a human-directed manner; or, ideally, d) through robot-initiated active learning during normal operation. Most current systems that exhibit learning actually involve offline learning, i.e. case a) above. No systems in the literature have exhibited non-trivial online, real-world continuous learning of communication abilities.

The second aspect, beyond the “when”, is the “what” of learning. What could ideally be learnt, what could practically be learnt, and what should be learnt, instead of pre-coded, when it comes to human-robot communication? For example, when it comes to natural-language communication, multiple layers exist: the phonological, the morphological, the syntactic, the semantic, the pragmatic, the dialogic. And if one adds the complexity of having to address the symbol grounding problem, a robot needs to have models of grounded meaning, too, in a certain target space, for example a sensorimotor or a teleological one. This was already discussed in the previous sections on “normative vs. empirical meaning” and on “symbol grounding”. Furthermore, such models might need to be adjustable on the fly, as discussed in the section on online negotiation of meaning. Also, many different aspects of non-verbal communication, from facial expressions to gestures to turn-taking, could ideally be learnable in real operation; even more so for the future case of robots needing to adapt to cultural and individual variations in non-verbal communication. Regarding the motor aspects of such non-verbal cues, existing methods in imitation and demonstration learning [28] have been, and could further be, readily adapted; see, for example, the imitation learning of human facial expressions for the Leonardo robot [172].

Finally, another important caveat needs to be spelled out at this point. Real-world learning and real-world data collection towards communicative behavior learning for robots, depending on the data set size required, might require many hours of uninterrupted daily operation by numerous robots: a requirement which is quite unrealistic for today’s systems. Therefore, other avenues need to be sought towards acquiring such data sets, and crowdsourcing through specially designed online games offers a realistic potential solution, as mentioned in the previous paragraph on the real-world acquisition of large-scale models of grounding. And of course, the learning content of such systems can move beyond grounded meaning models, to a wider range of the “what” that could potentially be learnable. A relevant example from a non-embodied setting comes from [173], where a chatterbot acquired interaction capabilities through massive observation of, and interaction with, humans in chat rooms. Of course, there do exist inherent limitations in such online systems, even for the case of robot-tailored online games such as [95]; for example, the non-physicality of the interaction presents specific obstacles and biases. Being able to extend this promising avenue towards wider massive data-driven models, and to demonstrate massive transfer of learning from the online systems to real-world physical robots, is thus an important research avenue for the future.

III-I Utilization of online resources and services

Yet another interesting avenue towards enhanced human-robot communication that has opened up recently is the following: as more and more robots nowadays can be constantly connected to the internet, not all data and programs that the robot uses need to be onboard its hardware. Therefore, a robot could potentially utilize online information as well as online services, in order to enhance its communication abilities. Thus, the intelligence of the robot is partially offloaded to the internet; and potentially, thousands of programs and/or humans could be providing part of its intelligence, even in real-time. For example, going much beyond traditional cloud robotics [174], in the human-robot cloud proposal [175], one could construct on-demand and on-the-fly distributed robots with human and machine sensing, actuation, and processing components.

Beyond these highly promising glimpses of a possible future, there exist a number of implemented systems that utilize information and/or services from the internet. A prime example is Facebots: physical robots that utilize and publish information on Facebook towards enhancing long-term human-robot interaction, described in [54] [55]. Facebots create shared memories and shared friends with both their physical as well as their online interaction partners, and utilize this information towards creating dialogues that enable a longer-lasting relationship between the robot and its human partners, thus counteracting the rapid fading of novelty effects reported in long-term HRI studies [176]. Also, as reported in [177], the multilingual conversational robot Ibn Sina [39] has made use of online Google Translate services, as well as Wikipedia information, for its dialogues. Furthermore, one could readily utilize online high-quality speech recognition and text-to-speech services for human-robot communication, such as [Sonic Cloud online services], in order not to sacrifice onboard computational resources.

Also, quite importantly, there exists the European project Roboearth [178], which is described as “…a World Wide Web for robots: a giant network and database repository where robots can share information and learn from each other about their behavior and their environment. Bringing a new meaning to the phrase “experience is the best teacher”, the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, paving the way for rapid advances in machine cognition and behaviour, and ultimately, for more subtle and sophisticated human-machine interaction”. Rapyuta [179], which is the cloud engine of Roboearth, claims to make immense computational power available to robots connected to it. Of course, beyond what has been utilized so far, there are many other possible sources of information and/or services on the internet to be exploited; and thus much more remains to be done in the near future in this direction.

III-J Miscellaneous abilities

Beyond the nine desiderata examined so far, there exist a number of other abilities that are required towards fluid and general human-robot communication. These have to do with dealing with multiple conversational partners in a discussion, with support for multilingual capabilities, and with generating and recognizing natural language across multiple modalities: for example not only acoustic, but also in written form. In more detail:

III-J1 Multiple conversational partners

Regarding conversational turn-taking, in the words of Sacks [180], “The organization of taking turns to talk is fundamental to conversation, as well as to other speech-exchange systems”; this readily carries over to human-robot conversations, and becomes especially important in the case of dialogues with multiple conversation partners. Recognition of overlapping speech is also quite important towards turn-taking [181]. Regarding turn-taking in robots, a computational strategy for robots participating in group conversation is presented in [182], and the very important role of gaze cues in turn-taking and participant role assignment in human-robot conversations is examined in [183]. In [184], an experimental study using the robot Simon is reported, aiming to show that the implementation of certain turn-taking cues can make interaction with a robot easier and more efficient for humans. Head movements are also very important in turn-taking; their role in keeping engagement in an interaction is explored in [185].

Yet another requirement for fluid multi-partner conversations is sound-source localization and speaker identification. Sound source localization is usually accomplished using microphone arrays, such as in the robotic system in [186]. An approach utilizing scattering theory for sound source localization in robots is described in [187], and approaches using beamforming for multiple moving sources are presented in [188] and [189]. Finally, HARK, an open-source robot audition system supporting three simultaneous speakers, is presented in [190]. Speaker identification is an old problem; classic approaches utilize Gaussian mixture models, such as [191] and [192]. Robotic systems able to identify their speaker’s identity include [193], [52], as well as the well-cited [194]. Also, an important idea towards effective signal separation between multiple speaker sources, in order to aid recognition, is to utilize both visual as well as auditory information towards that goal. Classic examples of such approaches include [195], as well as [196].
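The essence of the classic GMM-based speaker identification approach of [191, 192] can be sketched in a drastically reduced form: a single Gaussian per speaker over one hypothetical acoustic feature (say, mean pitch in Hz), identifying the speaker whose model assigns the test feature the highest likelihood. Full systems use mixtures of many Gaussians over multidimensional cepstral features; all values below are illustrative assumptions.

```python
import math

# One (mean, std) Gaussian per enrolled speaker, over a single
# hypothetical pitch-like feature; real GMM systems use mixtures
# over MFCC vectors instead.
SPEAKER_MODELS = {
    "alice": (220.0, 20.0),  # higher-pitched voice
    "bob": (120.0, 15.0),    # lower-pitched voice
}


def log_likelihood(x, mean, std):
    """Log-density of x under a univariate Gaussian."""
    return (-0.5 * math.log(2 * math.pi * std ** 2)
            - (x - mean) ** 2 / (2 * std ** 2))


def identify_speaker(feature, models=SPEAKER_MODELS):
    """Return the speaker whose model best explains the feature."""
    return max(models, key=lambda s: log_likelihood(feature, *models[s]))
```

Maximum-likelihood selection over per-speaker generative models is exactly the decision rule of the GMM approach; the sketch merely shrinks each model to one Gaussian and one feature.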

III-J2 Multilingual capabilities and multimodal natural language

Yet another desirable ability for human-robot communication is multilinguality. Multilingual robots could not only communicate with a wider range of people, especially in multi-cultural societies and settings such as museums, but could very importantly also act as translators and mediators. Although there has been considerable progress towards non-embodied multilingual dialogue systems [197], and multi-lingual virtual avatars do exist [198] [199], the only implemented real-world multilingual physical android robot so far reported in the literature is [177].

Finally, let us move on to examining multiple modalities for the generation and recognition of natural language. Apart from a wealth of existing research on the automated production and recognition of sign language for the deaf (ASL) [200] [201] [202], systems directly adaptable to robots also exist [203]. One could also investigate the intersection between human writing and robotics. Again, while a wealth of approaches exists for the problem of optical character recognition and handwriting recognition [204] [205], even for languages such as Arabic [206], the only robotic system that has demonstrated limited OCR capabilities is [177]. Last but not least, another modality available for natural language communication for robots is internet chat. The only reported system so far that can perform dialogues both physically as well as through Facebook chat is [54] [55].

As a large part of human knowledge, information, and real-world communication takes place either through writing or through such electronic channels, inevitably more and more systems in the future will have corresponding abilities. Thus, robots will be able to integrate more fluidly within human societies and environments, and ideally will be enabled to utilize the services offered within such networks for humans. Most importantly, robots might also one day become able to help maintain and improve the physical human-robot social networks they reside within, towards the benefit of the common good of all, as is advocated in [207].

IV Discussion

From our detailed examination of the ten desiderata, what follows first is that although we have moved beyond the “canned-commands-only, canned-responses” state of affairs of the nineties, we still seem to be far from our goal of fluid and natural verbal and non-verbal communication between humans and robots. But what is missing?

Many promising future directions were mentioned in the preceding sections. Apart from clearly open avenues for projects in a number of areas – such as composition of grounded semantics, online negotiation of meaning, affective interaction and closed-loop affective dialogue, mixed speech-motor planning, massive acquisition of data-driven models for human-robot communication through crowdsourced online games, and real-time exploitation of online information and services for enhanced human-robot communication – many more open areas exist.

What we speculate might really make a difference, though, is the availability of massive real-world data to drive further data-driven models. And in order to reach that state, a number of robots need to start being deployed, even in a partially autonomous, partially remote-human-operated mode, in real-world interactive application settings with round-the-clock operation: be it shopping mall assistants, receptionists, museum robots, or companions, the application domains that will bring human-robot communication out into the world in more massive proportions remain yet to be discovered. However, given recent developments, that point does not seem to be so far away anymore; and thus, in the coming decades, the day might well come when interactive robots become part of our everyday lives, in seamless, harmonious symbiosis, hopefully helping create a better and more exciting future.

V Conclusions

An overview of research in human-robot interactive communication was presented, covering verbal as well as non-verbal aspects. Following a historical introduction reaching from roots in antiquity well into the nineties, and a motivation towards fluid human-robot communication, ten desiderata were proposed, which provided an organizational axis both for recent and for future research on human-robot communication. Then, the ten desiderata were explained, and relevant research was examined in detail, culminating in a unifying discussion. In conclusion, although almost twenty-five years of research in human-robot interactive communication exist, and significant progress has been achieved on many fronts, many sub-problems towards fluid verbal and non-verbal human-robot communication remain unsolved, and present highly promising and exciting avenues for research in the near future.


  • [1] L. Ballard, “Robotics’ founding father George C. Devol: serial entrepreneur and inventor,” Robot-Congers, no. 31, p. 58, 2011.
  • [2] G. C. Devol, “Encoding apparatus,” U.S. Patent 4,427,970, Jan. 24, 1984.
  • [3] D.-L. Gera, Ancient Greek Ideas on Speech, Language, and Civilization.   Oxford University Press, 2003.
  • [4] R. Lattimore and R. Martin, The Iliad of Homer.   University of Chicago Press, 2011.
  • [5] C. Huffman, Archytas of Tarentum: Pythagorean, Philosopher and Mathematician King.   Cambridge University Press, 2005.
  • [6] J. Needham, Science and Civilisation in China: Volume 2.   Cambridge University Press, 1959.
  • [7] N. Sharkey, “The programmable robot of Ancient Greece,” New Scientist, pp. 32–35, Jul. 2007.
  • [8] M. E. Rosheim, Robot Evolution: The Development of Anthrobotics, 1st ed.   New York, NY, USA: John Wiley & Sons, Inc., 1994.
  • [9] N. Hockstein, C. Gourin, R. Faust, and D. Terris, “A history of robots: from science fiction to surgical robotics,” Journal of Robotic Surgery, vol. 1, no. 2, pp. 113–118, 2007.
  • [10] D. H. Klatt, “Review of text-to-speech conversion for English,” Journal of the Acoustical Society of America, vol. 82, no. 3, pp. 737–793, 1987.
  • [11] G. Antoniol, R. Cattoni, M. Cettolo, and M. Federico, “Robust speech understanding for robot telecontrol,” in Proceedings of the 6th International Conference on Advanced Robotics, 1993, pp. 205–209.
  • [12] W. Burgard, A. B. Cremers, D. Fox, D. Hähnel, G. Lakemeyer, D. Schulz, W. Steiner, and S. Thrun, “The Interactive Museum Tour-Guide Robot,” in Proc. of the Fifteenth National Conference on Artificial Intelligence (AAAI-98), 1998.
  • [13] L. Versweyveld, “Voice-controlled surgical robot ready to assist in minimally invasive heart surgery,” Virtual Medicine World Monthly, March 1998.
  • [14] I. Horswill, “Polly: A vision-based artificial agent,” in Proceedings of the Eleventh National Conference on Artificial Intelligence (AAAI-93).   AAAI Press, 1993, pp. 824–829.
  • [15] ——, “The design of the polly system,” The Institute for the Learning Sciences, Northwestern University, Tech. Rep., September 1996.
  • [16] M. Torrance, “Natural communication with mobile robots,” Master’s thesis, MIT Department of Electrical Engineering and Computer Science, January 1994.
  • [17] G. Antoniol, B. Caprile, A. Cimatti, and R. Fiutem, “Experiencing real-life interactions with the experimental platform of maia,” in Proceedings of the 1st European Workshop on Human Comfort and Security, 1994.
  • [18] I. Androutsopoulos, “A principled framework for constructing natural language interfaces to temporal databases,” Ph.D. dissertation, Department of Artificial Intelligence, University of Edinburgh, 1996.
  • [19] H. Asoh, T. Matsui, J. Fry, F. Asano, and S. Hayamizu, “A spoken dialog system for a mobile office robot,” in Proceedings of the European Conference on Speech Communication and Technology, EUROSPEECH.   ISCA, 1999.
  • [20] J. Fry, H. Asoh, and T. Matsui, “Natural dialogue with the jijo-2 office robot,” in Intelligent Robots and Systems, 1998. Proceedings., 1998 IEEE/RSJ International Conference on, vol. 2, 1998, pp. 1278–1283 vol.2.
  • [21] T. Matsui, H. Asoh, J. Fry, Y. Motomura, F. Asano, T. Kurita, I. Hara, and N. Otsu, “Integrated natural spoken dialogue system of jijo-2 mobile robot for office services,” in Proceedings of the sixteenth national conference on Artificial intelligence and the eleventh Innovative applications of artificial intelligence conference innovative applications of artificial intelligence, ser. AAAI ’99/IAAI ’99.   Menlo Park, CA, USA: American Association for Artificial Intelligence, 1999, pp. 621–627.
  • [22] C. Crangle and P. Suppes, Language and Learning for Robots, ser. CSLI Lecture Notes.   Center for the Study of Language and Information, 1994. [Online]. Available: http://books.google.gr/books?id=MlMQ11Pqz10C
  • [23] J. R. Searle, Speech Acts: An Essay in the Philosophy of Language.   Cambridge: Cambridge University Press, 1969.
  • [24] K. Vogeley and G. Bente, ““artificial humans”: Psychology and neuroscience perspectives on embodiment and nonverbal communication,” Neural Networks, vol. 23, no. 8, pp. 1077–1090, 2010.
  • [25] S. Harnad, “The symbol grounding problem,” Physica D: Nonlinear Phenomena, vol. 42, no. 1, pp. 335–346, 1990.
  • [26] G. Schreiber, A. Stemmer, and R. Bischoff, “The fast research interface for the kuka lightweight robot,” in IEEE Conference on Robotics and Automation (ICRA), 2010.
  • [27] S. Wrede, C. Emmerich, R. Grünberg, A. Nordmann, A. Swadzba, and J. Steil, “A user study on kinesthetic teaching of redundant robots in task and configuration space,” Journal of Human-Robot Interaction, vol. 2, no. 1, pp. 56–81, 2013.
  • [28] B. D. Argall, S. Chernova, M. Veloso, and B. Browning, “A survey of robot learning from demonstration,” Robotics and Autonomous Systems, vol. 57, no. 5, pp. 469–483, 2009.
  • [29] C. L. Nehaniv and K. Dautenhahn, Imitation and social learning in robots, humans and animals: behavioural, social and communicative dimensions.   Cambridge University Press, 2007.
  • [30] N. Mavridis and D. Roy, “Grounded situation models for robots: Where words and percepts meet,” in Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, 2006, pp. 4690–4697.
  • [31] T. van der Zant and T. Wisspeintner, “RoboCup@Home: Creating and benchmarking tomorrow’s service robot applications,” Robotic Soccer, pp. 521–528, 2007.
  • [32] M. E. Foster, T. By, M. Rickert, and A. Knoll, “Human-robot dialogue for joint construction tasks,” in Proceedings of the 8th international conference on Multimodal interfaces, ser. ICMI ’06.   New York, NY, USA: ACM, 2006, pp. 68–71.
  • [33] M. Giuliani and A. Knoll, “Evaluating supportive and instructive robot roles in human-robot interaction,” in Social Robotics.   Springer, 2011, pp. 193–203.
  • [34] K. Wada and T. Shibata, “Living with seal robots—its sociopsychological and physiological influences on the elderly at a care house,” Robotics, IEEE Transactions on, vol. 23, no. 5, pp. 972–980, 2007.
  • [35] K. Kamei, K. Shinozawa, T. Ikeda, A. Utsumi, T. Miyashita, and N. Hagita, “Recommendation from robots in a real-world retail shop,” in International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction.   ACM, 2010, p. 19.
  • [36] M. Makatchev, I. Fanaswala, A. Abdulsalam, B. Browning, W. Ghazzawi, M. Sakr, and R. Simmons, “Dialogue patterns of an arabic robot receptionist,” in Human-Robot Interaction (HRI), 2010 5th ACM/IEEE International Conference on, 2010, pp. 167–168.
  • [37] S. Tellex and D. Roy, “Spatial routines for a simulated speech-controlled vehicle,” in Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction.   ACM, 2006, pp. 156–163.
  • [38] K. Dautenhahn, M. Walters, S. Woods, K. L. Koay, C. L. Nehaniv, A. Sisbot, R. Alami, and T. Siméon, “How may i serve you?: a robot companion approaching a seated person in a helping context,” in Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction.   ACM, 2006, pp. 172–179.
  • [39] N. Mavridis and D. Hanson, “The ibnsina center: An augmented reality theater with intelligent robotic and virtual characters,” in Robot and Human Interactive Communication, 2009. RO-MAN 2009. The 18th IEEE International Symposium on.   IEEE, 2009, pp. 681–686.
  • [40] K. Petersen, J. Solis, and A. Takanishi, “Musical-based interaction system for the waseda flutist robot,” Autonomous Robots, vol. 28, no. 4, pp. 471–488, 2010.
  • [41] K. Kosuge, T. Hayashi, Y. Hirata, and R. Tobiyama, “Dance partner robot MS DanceR,” in Intelligent Robots and Systems, 2003 (IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on, vol. 4.   IEEE, 2003, pp. 3459–3464.
  • [42] V. A. Kulyukin, “On natural language dialogue with assistive robots,” in Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction.   ACM, 2006, pp. 164–171.
  • [43] J. Dzifcak, M. Scheutz, C. Baral, and P. Schermerhorn, “What to do and how to do it: Translating natural language directives into temporal and dynamic logic representation for goal management and action execution,” in Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA ’09), Kobe, Japan, May 2009.
  • [44] J. Austin, How to Do Things with Words.   Oxford University Press, 1962.
  • [45] J. Searle, “A taxonomy of illocutionary acts,” in Language, Mind and Knowledge, K. Gunderson, Ed.   University of Minnesota Press, 1975, pp. 344–369.
  • [46] J. F. Allen, D. K. Byron, M. Dzikovska, G. Ferguson, L. Galescu, and A. Stent, “Towards conversational human-computer interaction,” AI MAGAZINE, vol. 22, pp. 27–37, 2001.
  • [47] N. Mavridis, “Grounded situation models for situated conversational assistants,” Ph.D. dissertation, Massachusetts Institute of Technology, 2007.
  • [48] P. N. Johnson-Laird, Mental models: Towards a cognitive science of language, inference, and consciousness.   Harvard University Press, 1983, vol. 6.
  • [49] R. A. Zwaan and G. A. Radvansky, “Situation models in language comprehension and memory.” Psychological bulletin, vol. 123, no. 2, p. 162, 1998.
  • [50] H. P. Grice, “Logic and conversation,” in Syntax and Semantics, Vol. 3: Speech Acts.   Academic Press, 1975, pp. 41–58.
  • [51] S. Wilske and G.-J. Kruijff, “Service robots dealing with indirect speech acts,” in Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on.   IEEE, 2006, pp. 4698–4703.
  • [52] F. Krsmanovic, C. Spencer, D. Jurafsky, and A. Y. Ng, “Have we met? mdp based speaker id for robot dialogue.” in INTERSPEECH, 2006.
  • [53] C. T. Ishi, H. Ishiguro, and N. Hagita, “Analysis of prosodic and linguistic cues of phrase finals for turn-taking and dialog acts.” in INTERSPEECH, 2006.
  • [54] N. Mavridis, M. Petychakis, A. Tsamakos, P. Toulis, S. Emami, W. Kazmi, C. Datta, C. BenAbdelkader, and A. Tanoto, “Facebots: Steps towards enhanced long-term human-robot interaction by utilizing and publishing online social information,” Paladyn, vol. 1, no. 3, pp. 169–178, 2010.
  • [55] N. Mavridis, C. Datta, S. Emami, A. Tanoto, C. BenAbdelkader, and T. Rabie, “Facebots: robots utilizing and publishing social information in facebook,” in Human-Robot Interaction (HRI), 2009 4th ACM/IEEE International Conference on.   IEEE, 2009, pp. 273–274.
  • [56] B. Wrede, S. Buschkaemper, C. Muhl, and K. J. Rohlfing, “Analyses of feedback in hri,” How People Talk to Computers, Robots, and Other Artificial Communication Partners, p. 38, 2006.
  • [57] R. Stiefelhagen, H. K. Ekenel, C. Fugen, P. Gieselmann, H. Holzapfel, F. Kraft, K. Nickel, M. Voit, and A. Waibel, “Enabling multimodal human–robot interaction for the karlsruhe humanoid robot,” Robotics, IEEE Transactions on, vol. 23, no. 5, pp. 840–851, 2007.
  • [58] D. Ertl, A. Green, H. Hüttenrauch, and F. Lerasle, “Improving human-robot communication with mixed-initiative and context-awareness,” workshop co-located with RO-MAN 2009, 2009.
  • [59] M. Ralph and M. A. Moussa, “Toward a natural language interface for transferring grasping skills to robots,” Robotics, IEEE Transactions on, vol. 24, no. 2, pp. 468–475, 2008.
  • [60] J. Weizenbaum, “Eliza—a computer program for the study of natural language communication between man and machine,” Communications of the ACM, vol. 9, no. 1, pp. 36–45, 1966.
  • [61] M. L. Mauldin, “Chatterbots, tinymuds, and the turing test: Entering the loebner prize competition,” in AAAI, vol. 94, 1994, pp. 16–21.
  • [62] D. Roy, “A computational model of word learning from multimodal sensory input,” in Proceedings of the International Conference of Cognitive Modeling (ICCM2000), Groningen, Netherlands.   Citeseer, 2000.
  • [63] R. Baillargeon, E. S. Spelke, and S. Wasserman, “Object permanence in five-month-old infants,” Cognition, vol. 20, no. 3, pp. 191–208, 1985.
  • [64] D. Roy, K.-Y. Hsiao, and N. Mavridis, “Mental imagery for a conversational robot,” Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 34, no. 3, pp. 1374–1383, 2004.
  • [65] E. Tulving, Elements of episodic memory.   Clarendon Press Oxford, 1983.
  • [66] N. Mavridis and M. Petychakis, “Human-like memory systems for interactive robots: Desiderata and two case studies utilizing grounded situation models and online social networking.”
  • [67] D. Gentner and K. D. Forbus, “Computational models of analogy,” Wiley Interdisciplinary Reviews: Cognitive Science, vol. 2, no. 3, pp. 266–276, 2011.
  • [68] C. S. Peirce, “Logic as semiotic: The theory of signs,” The Philosophical Writings of Peirce, pp. 98–119, 1955.
  • [69] C. S. Peirce, Collected papers of charles sanders peirce.   Harvard University Press, 1974, vol. 3.
  • [70] T. Spexard, S. Li, B. Wrede, J. Fritsch, G. Sagerer, O. Booij, Z. Zivkovic, B. Terwijn, and B. Krose, “Biron, where are you? enabling a robot to learn new places in a real home environment by integrating spoken dialog and visual localization,” in Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on.   IEEE, 2006, pp. 934–940.
  • [71] L. Steels, “Evolving grounded communication for robots,” Trends in cognitive sciences, vol. 7, no. 7, pp. 308–312, 2003.
  • [72] S. D. Larson, Intrinsic representation: Bootstrapping symbols from experience.   Springer, 2004.
  • [73] P. J. Gorniak, “The affordance-based concept,” Ph.D. dissertation, Massachusetts Institute of Technology, 2005.
  • [74] C. Yu, L. B. Smith, and A. F. Pereira, “Grounding word learning in multimodal sensorimotor interaction,” in Proceedings of the 30th annual conference of the cognitive science society, 2008, pp. 1017–1022.
  • [75] T. Regier and L. A. Carlson, “Grounding spatial language in perception: an empirical and computational investigation.” Journal of Experimental Psychology: General, vol. 130, no. 2, p. 273, 2001.
  • [76] D. K. Roy, “Learning visually grounded words and syntax for a scene description task,” Computer Speech & Language, vol. 16, no. 3, pp. 353–385, 2002.
  • [77] K. R. Coventry and S. C. Garrod, Saying, seeing and acting: The psychological semantics of spatial prepositions.   Psychology Press, 2004.
  • [78] J. M. Siskind, “Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic,” arXiv preprint arXiv:1106.0256, 2011.
  • [79] J. Zlatev, “Spatial semantics,” Handbook of Cognitive Linguistics, pp. 318–350, 2007.
  • [80] M. Skubic, D. Perzanowski, S. Blisard, A. Schultz, W. Adams, M. Bugajska, and D. Brock, “Spatial language for human-robot dialogs,” Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, vol. 34, no. 2, pp. 154–167, 2004.
  • [81] H. Zender, O. Martínez Mozos, P. Jensfelt, G.-J. Kruijff, and W. Burgard, “Conceptual spatial representations for indoor mobile robots,” Robotics and Autonomous Systems, vol. 56, no. 6, pp. 493–502, 2008.
  • [82] S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. J. Teller, and N. Roy, “Understanding natural language commands for robotic navigation and mobile manipulation.” in AAAI, 2011.
  • [83] S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. Teller, and N. Roy, “Approaching the symbol grounding problem with probabilistic graphical models,” AI magazine, vol. 32, no. 4, pp. 64–76, 2011.
  • [84] S. Tellex, P. Thaker, J. Joseph, and N. Roy, “Learning perceptually grounded word meanings from unaligned parallel data,” Machine Learning, pp. 1–17, 2013.
  • [85] K. Gold and B. Scassellati, “Grounded pronoun learning and pronoun reversal,” in Proceedings of the 5th International Conference on Development and Learning, 2006.
  • [86] L. Steels, “The symbol grounding problem has been solved. so what’s next,” Symbols and embodiment: Debates on meaning and cognition, pp. 223–244, 2008.
  • [87] ——, “Semiotic dynamics for embodied agents,” Intelligent Systems, IEEE, vol. 21, no. 3, pp. 32–38, 2006.
  • [88] C. Hudelot, N. Maillot, and M. Thonnat, “Symbol grounding for semantic image interpretation: from image data to semantics,” in Computer Vision Workshops, 2005. ICCVW’05. Tenth IEEE International Conference on.   IEEE, 2005, pp. 1875–1875.
  • [89] A. M. Cregan, “Symbol grounding for the semantic web,” in The Semantic Web: Research and Applications.   Springer, 2007, pp. 429–442.
  • [90] C. Wallraven, M. Schultze, B. Mohler, A. Vatakis, and K. Pastra, “The poeticon enacted scenario corpus—a tool for human and computational experiments on action understanding,” in Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on.   IEEE, 2011, pp. 484–491.
  • [91] K. Pastra, C. Wallraven, M. Schultze, A. Vataki, and K. Kaulard, “The poeticon corpus: Capturing language use and sensorimotor experience in everyday interaction.” in LREC.   Citeseer, 2010.
  • [92] P. Gärdenfors, Conceptual Spaces: The Geometry of Thought.   MIT Press, 2004.
  • [93] J. Orkin and D. Roy, “The restaurant game: Learning social behavior and language from thousands of players online,” Journal of Game Development, vol. 3, no. 1, pp. 39–60, 2007.
  • [94] S. Chernova, J. Orkin, and C. Breazeal, “Crowdsourcing hri through online multiplayer games,” in Proc. Dialog with Robots: AAAI fall symposium, 2010.
  • [95] N. DePalma, S. Chernova, and C. Breazeal, “Leveraging online virtual agents to crowdsource human-robot interaction,” in Proceedings of CHI Workshop on Crowdsourcing and Human Computation, 2011.
  • [96] L. Von Ahn, R. Liu, and M. Blum, “Peekaboom: a game for locating objects in images,” in Proceedings of the SIGCHI conference on Human Factors in computing systems.   ACM, 2006, pp. 55–64.
  • [97] E. R. Hilgard, “The trilogy of mind: Cognition, affection, and conation,” Journal of the History of the Behavioral Sciences, vol. 16, no. 2, pp. 107–117, 1980.
  • [98] R. W. Picard, “Affective computing: challenges,” International Journal of Human-Computer Studies, vol. 59, no. 1, pp. 55–64, 2003.
  • [99] R. Picard, S. Papert, W. Bender, B. Blumberg, C. Breazeal, D. Cavallo, T. Machover, M. Resnick, D. Roy, and C. Strohecker, “Affective learning—a manifesto,” BT Technology Journal, vol. 22, no. 4, pp. 253–269, 2004.
  • [100] G. Haddock, G. R. Maio, K. Arnold, and T. Huskinson, “Should persuasion be affective or cognitive? the moderating effects of need for affect and need for cognition,” Personality and Social Psychology Bulletin, vol. 34, no. 6, pp. 769–778, 2008.
  • [101] C. Breazeal, “Emotion and sociable humanoid robots,” International Journal of Human-Computer Studies, vol. 59, no. 1, pp. 119–155, 2003.
  • [102] J. Cassell, “Embodied conversational interface agents,” Communications of the ACM, vol. 43, no. 4, pp. 70–78, 2000.
  • [103] W. L. Johnson, J. W. Rickel, and J. C. Lester, “Animated pedagogical agents: Face-to-face interaction in interactive learning environments,” International Journal of Artificial intelligence in education, vol. 11, no. 1, pp. 47–78, 2000.
  • [104] F. d. Rosis, C. Pelachaud, I. Poggi, V. Carofiglio, and B. D. Carolis, “From greta’s mind to her face: modelling the dynamics of affective states in a conversational embodied agent,” International Journal of Human-Computer Studies, vol. 59, no. 1, pp. 81–118, 2003.
  • [105] C. Breazeal and J. Velásquez, “Toward teaching a robot ‘infant’ using emotive communication acts,” in Proceedings of the 1998 Simulated Adaptive Behavior Workshop on Socially Situated Intelligence, 1998, pp. 25–40.
  • [106] A. Batliner, C. Hacker, S. Steidl, E. Nöth, S. D’Arcy, M. J. Russell, and M. Wong, ““You stupid tin box”: children interacting with the Aibo robot: A cross-linguistic emotional speech corpus,” in LREC, 2004.
  • [107] K. Komatani, R. Ito, T. Kawahara, and H. G. Okuno, “Recognition of emotional states in spoken dialogue with a robot,” in Innovations in Applied Artificial Intelligence.   Springer, 2004, pp. 413–423.
  • [108] B.-C. Bae, A. Brunete, U. Malik, E. Dimara, J. Jermsurawong, and N. Mavridis, “Towards an empathizing and adaptive storyteller system,” in Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, 2012.
  • [109] T. Ruf, A. Ernst, and C. Küblbeck, “Face detection with the sophisticated high-speed object recognition engine (shore),” in Microelectronic Systems.   Springer, 2011, pp. 243–252.
  • [110] R. El Kaliouby and P. Robinson, “Real-time inference of complex mental states from facial expressions and head gestures,” in Real-time vision for human-computer interaction.   Springer, 2005, pp. 181–200.
  • [111] C. Shan, S. Gong, and P. W. McOwan, “Facial expression recognition based on local binary patterns: A comprehensive study,” Image and Vision Computing, vol. 27, no. 6, pp. 803–816, 2009.
  • [112] M. S. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, and J. Movellan, “Fully automatic facial action recognition in spontaneous behavior,” in Automatic Face and Gesture Recognition, 2006. FGR 2006. 7th International Conference on.   IEEE, 2006, pp. 223–230.
  • [113] T. Wu, N. J. Butko, P. Ruvulo, M. S. Bartlett, and J. R. Movellan, “Learning to make facial expressions,” in Development and Learning, 2009. ICDL 2009. IEEE 8th International Conference on.   IEEE, 2009, pp. 1–6.
  • [114] M.-J. Han, C.-H. Lin, and K.-T. Song, “Robotic emotional expression generation based on mood transition and personality model,” Cybernetics, IEEE Transactions on, vol. 43, no. 4, pp. 1290–1303, 2013.
  • [115] T. Baltrušaitis, L. D. Riek, and P. Robinson, “Synthesizing expressions using facial feature point tracking: How emotion is conveyed,” in Proceedings of the 3rd international workshop on Affective interaction in natural environments.   ACM, 2010, pp. 27–32.
  • [116] G. Littlewort, M. S. Bartlett, I. Fasel, J. Susskind, and J. Movellan, “Dynamics of facial expression extracted automatically from video,” Image and Vision Computing, vol. 24, no. 6, pp. 615–625, 2006.
  • [117] M. Schröder, “Expressive speech synthesis: Past, present, and possible futures,” in Affective information processing.   Springer, 2009, pp. 111–126.
  • [118] D. Ververidis and C. Kotropoulos, “Emotional speech recognition: Resources, features, and methods,” Speech communication, vol. 48, no. 9, pp. 1162–1181, 2006.
  • [119] S. Roehling, B. MacDonald, and C. Watson, “Towards expressive speech synthesis in english on a robotic platform,” in Proceedings of the Australasian International Conference on Speech Science and Technology, 2006, pp. 130–135.
  • [120] A. Chella, R. E. Barone, G. Pilato, and R. Sorbello, “An emotional storyteller robot.” in AAAI Spring Symposium: Emotion, Personality, and Social Behavior, 2008, pp. 17–22.
  • [121] B. Pang and L. Lee, “Opinion mining and sentiment analysis,” Foundations and trends in information retrieval, vol. 2, no. 1-2, pp. 1–135, 2008.
  • [122] T. Wilson, J. Wiebe, and P. Hoffmann, “Recognizing contextual polarity: An exploration of features for phrase-level sentiment analysis,” Computational linguistics, vol. 35, no. 3, pp. 399–433, 2009.
  • [123] M. Taboada, J. Brooke, M. Tofiloski, K. Voll, and M. Stede, “Lexicon-based methods for sentiment analysis,” Computational linguistics, vol. 37, no. 2, pp. 267–307, 2011.
  • [124] R. W. Picard, “Measuring affect in the wild,” in Affective Computing and Intelligent Interaction.   Springer, 2011, pp. 3–3.
  • [125] S. H. Fairclough, “Fundamentals of physiological computing,” Interacting with computers, vol. 21, no. 1, pp. 133–145, 2009.
  • [126] H. A. Elfenbein and N. Ambady, “On the universality and cultural specificity of emotion recognition: a meta-analysis.” Psychological bulletin, vol. 128, no. 2, p. 203, 2002.
  • [127] D. McNeill, Hand and mind: What gestures reveal about thought.   University of Chicago Press, 1992.
  • [128] P. Weil, “About face, computergraphic synthesis and manipulation of facial imagery,” Ph.D. dissertation, Massachusetts Institute of Technology, 1982.
  • [129] J. Lewis and P. Purcell, “Soft machine: a personable interface,” in Proc. of Graphics Interface, vol. 84.   Citeseer, 1984, pp. 223–226.
  • [130] J. P. Lewis and F. I. Parke, “Automated lip-synch and speech synthesis for character animation,” in ACM SIGCHI Bulletin, vol. 17, no. SI.   ACM, 1987, pp. 143–147.
  • [131] C. Bregler and Y. Konig, ““eigenlips” for robust speech recognition,” in Acoustics, Speech, and Signal Processing, 1994. ICASSP-94., 1994 IEEE International Conference on, vol. 2.   IEEE, 1994, pp. II–669.
  • [132] H. McGurk and J. MacDonald, “Hearing lips and seeing voices,” Nature, pp. 746–748, 1976.
  • [133] G. A. Calvert, C. Spence, and B. E. Stein, The handbook of multisensory processes.   MIT press, 2004.
  • [134] M. Tsakiris and P. Haggard, “The rubber hand illusion revisited: visuotactile integration and self-attribution.” Journal of Experimental Psychology: Human Perception and Performance, vol. 31, no. 1, p. 80, 2005.
  • [135] K. R. Thorisson, “Communicative humanoids: a computational model of psychosocial dialogue skills,” Ph.D. dissertation, Massachusetts Institute of Technology, 1996.
  • [136] G. Lidoris, F. Rohrmuller, D. Wollherr, and M. Buss, “The autonomous city explorer (ace) project—mobile robot navigation in highly populated urban environments,” in Robotics and Automation, 2009. ICRA’09. IEEE International Conference on.   IEEE, 2009, pp. 1416–1422.
  • [137] W.-M. Roth, “Gestures: Their role in teaching and learning,” Review of Educational Research, vol. 71, no. 3, pp. 365–392, 2001.
  • [138] J. Cassell, T. Bickmore, M. Billinghurst, L. Campbell, K. Chang, H. Vilhjálmsson, and H. Yan, “Embodiment in conversational interfaces: Rea,” in Proceedings of the SIGCHI conference on Human factors in computing systems.   ACM, 1999, pp. 520–527.
  • [139] J. Cassell, H. H. Vilhjálmsson, and T. Bickmore, “Beat: the behavior expression animation toolkit,” in Proceedings of the 28th annual conference on Computer graphics and interactive techniques, ser. SIGGRAPH ’01.   New York, NY, USA: ACM, 2001, pp. 477–486.
  • [140] N. Rossini, “Patterns of synchronization of non-verbal cues and speech in ecas: Towards a more “natural” conversational agent,” in Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues.   Springer, 2011, pp. 96–103.
  • [141] S. R. Fussell, R. E. Kraut, and J. Siegel, “Coordination of communication: Effects of shared visual context on collaborative work,” in Proceedings of the 2000 ACM conference on Computer supported cooperative work.   ACM, 2000, pp. 21–30.
  • [142] S. E. Brennan, X. Chen, C. A. Dickinson, M. B. Neider, and G. J. Zelinsky, “Coordinating cognition: The costs and benefits of shared gaze during collaborative search,” Cognition, vol. 106, no. 3, pp. 1465–1477, 2008.
  • [143] C. Breazeal, C. D. Kidd, A. L. Thomaz, G. Hoffman, and M. Berlin, “Effects of nonverbal communication on efficiency and robustness in human-robot teamwork,” in Intelligent Robots and Systems, 2005.(IROS 2005). 2005 IEEE/RSJ International Conference on.   IEEE, 2005, pp. 708–713.
  • [144] J. E. Hanna and S. E. Brennan, “Speakers’ eye gaze disambiguates referring expressions early during face-to-face conversation,” Journal of Memory and Language, vol. 57, no. 4, pp. 596–615, 2007.
  • [145] J. E. Hanna and M. K. Tanenhaus, “Pragmatic effects on reference resolution in a collaborative task: Evidence from eye movements,” Cognitive Science, vol. 28, no. 1, pp. 105–115, 2004.
  • [146] L. B. Adamson and R. Bakeman, “The development of shared attention during infancy.” Annals of child development, vol. 8, pp. 1–41, 1991.
  • [147] B. Scassellati, “Mechanisms of shared attention for a humanoid robot,” in Embodied Cognition and Action: Papers from the 1996 AAAI Fall Symposium, vol. 4, no. 9, 1996, p. 21.
  • [148] ——, “Imitation and mechanisms of joint attention: A developmental structure for building social skills on a humanoid robot,” in Computation for metaphors, analogy, and agents.   Springer, 1999, pp. 176–195.
  • [149] G. O. Deák, I. Fasel, and J. Movellan, “The emergence of shared attention: Using robots to test developmental theories,” in Proceedings 1st International Workshop on Epigenetic Robotics: Lund University Cognitive Studies, vol. 85, 2001, pp. 95–104.
  • [150] I. Fasel, G. O. Deák, J. Triesch, and J. Movellan, “Combining embodied models and empirical research for understanding the development of shared attention,” in Development and Learning, 2002. Proceedings. The 2nd International Conference on.   IEEE, 2002, pp. 21–27.
  • [151] M. W. Hoffman, D. B. Grimes, A. P. Shon, and R. P. Rao, “A probabilistic model of gaze imitation and shared attention,” Neural Networks, vol. 19, no. 3, pp. 299–310, 2006.
  • [152] C. Peters, S. Asteriadis, K. Karpouzis, and E. de Sevin, “Towards a real-time gaze-based shared attention for a virtual agent,” in International Conference on Multimodal Interfaces, 2008.
  • [153] C. Peters, S. Asteriadis, and K. Karpouzis, “Investigating shared attention with a virtual agent using a gaze-based interface,” Journal on Multimodal User Interfaces, vol. 3, no. 1-2, pp. 119–130, 2010.
  • [154] D. Premack and G. Woodruff, “Does the chimpanzee have a theory of mind?” Behavioral and brain sciences, vol. 1, no. 04, pp. 515–526, 1978.
  • [155] H. M. Wellman, “The child’s theory of mind,” 2011.
  • [156] B. M. Scassellati, “Foundations for a theory of mind for a humanoid robot,” Ph.D. dissertation, Massachusetts Institute of Technology, 2001.
  • [157] B. Scassellati, “Theory of mind for a humanoid robot,” Autonomous Robots, vol. 12, no. 1, pp. 13–24, 2002.
  • [158] L. F. Marin-Urias, E. A. Sisbot, A. K. Pandey, R. Tadakuma, and R. Alami, “Towards shared attention through geometric reasoning for human robot interaction,” in Humanoid Robots, 2009. Humanoids 2009. 9th IEEE-RAS International Conference on.   IEEE, 2009, pp. 331–336.
  • [159] S. Baron-Cohen, Mindblindness: An essay on autism and theory of mind.   MIT press, 1997.
  • [160] S. Baron-Cohen, H. Tager-Flusberg, and D. J. Cohen, Eds., Understanding Other Minds: Perspectives from Developmental Cognitive Neuroscience.   Oxford University Press, 2000.
  • [161] B. Robins, K. Dautenhahn, R. Te Boekhorst, and A. Billard, “Robotic assistants in therapy and education of children with autism: Can a small humanoid robot help encourage social interaction skills?” Universal Access in the Information Society, vol. 4, no. 2, pp. 105–120, 2005.
  • [162] G. Bird, J. Leighton, C. Press, and C. Heyes, “Intact automatic imitation of human and robot actions in autism spectrum disorders,” Proceedings of the Royal Society B: Biological Sciences, vol. 274, no. 1628, pp. 3027–3031, 2007.
  • [163] B. Robins, P. Dickerson, P. Stribling, and K. Dautenhahn, “Robot-mediated joint attention in children with autism: A case study in robot-human interaction,” Interaction studies, vol. 5, no. 2, pp. 161–198, 2004.
  • [164] B. Robins, K. Dautenhahn, and P. Dickerson, “From isolation to communication: a case study evaluation of robot assisted play for children with autism with a minimally expressive humanoid robot,” in Advances in Computer-Human Interactions, 2009. ACHI’09. Second International Conferences on.   IEEE, 2009, pp. 205–211.
  • [165] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed.   Prentice Hall, 2009.
  • [166] D. Jurafsky and J. H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition.   Prentice Hall, 2000.
  • [167] P. R. Cohen and C. R. Perrault, “Elements of a plan-based theory of speech acts,” Cognitive science, vol. 3, no. 3, pp. 177–212, 1979.
  • [168] J. J. Kuffner Jr and S. M. LaValle, “Rrt-connect: An efficient approach to single-query path planning,” in Robotics and Automation, 2000. Proceedings. ICRA’00. IEEE International Conference on, vol. 2.   IEEE, 2000, pp. 995–1001.
  • [169] S. Garrido, L. Moreno, M. Abderrahim, and F. Martin, “Path planning for mobile robot navigation using voronoi diagram and fast marching,” in Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on.   IEEE, 2006, pp. 2376–2381.
  • [170] N. Mavridis and H. Dong, “To ask or to sense? planning to integrate speech and sensorimotor acts,” in Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2012 4th International Congress on.   IEEE, 2012, pp. 227–233.
  • [171] V. Klingspor, J. Demiris, and M. Kaiser, “Human-robot communication and machine learning,” Applied Artificial Intelligence, vol. 11, no. 7, pp. 719–746, 1997.
  • [172] C. Breazeal, “Imitation as social exchange between humans and robots,” in Proceedings of the AISB’99 Symposium on Imitation in Animals and Artifacts, 1999, pp. 96–104.
  • [173] C. L. Isbell Jr, M. Kearns, S. Singh, C. R. Shelton, P. Stone, and D. Kormann, “Cobot in LambdaMOO: An adaptive social statistics agent,” Autonomous Agents and Multi-Agent Systems, vol. 13, no. 3, pp. 327–354, 2006.
  • [174] E. Guizzo, “Robots with their heads in the clouds,” Spectrum, IEEE, vol. 48, no. 3, pp. 16–18, 2011.
  • [175] N. Mavridis, T. Bourlai, and D. Ognibene, “The human-robot cloud: Situated collective intelligence on demand,” in Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2012 IEEE International Conference on.   IEEE, 2012, pp. 360–365.
  • [176] N. Mitsunaga, Z. Miyashita, K. Shinozawa, T. Miyashita, H. Ishiguro, and N. Hagita, “What makes people accept a robot in a social environment-discussion from six-week study in an office,” in Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on.   IEEE, 2008, pp. 3336–3343.
  • [177] N. Mavridis, A. AlDhaheri, L. AlDhaheri, M. Khanii, and N. AlDarmaki, “Transforming IbnSina into an advanced multilingual interactive android robot,” in GCC Conference and Exhibition (GCC), 2011 IEEE.   IEEE, 2011, pp. 120–123.
  • [178] M. Waibel, M. Beetz, J. Civera, R. D’Andrea, J. Elfring, D. Galvez-Lopez, K. Haussermann, R. Janssen, J. Montiel, A. Perzylo, et al., “RoboEarth,” Robotics & Automation Magazine, IEEE, vol. 18, no. 2, pp. 69–82, 2011.
  • [179] D. Hunziker, M. Gajamohan, M. Waibel, and R. D’Andrea, “Rapyuta: The RoboEarth cloud engine,” in Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), Karlsruhe, Germany, 2013.
  • [180] H. Sacks, E. A. Schegloff, and G. Jefferson, “A simplest systematics for the organization of turn-taking for conversation,” Language, pp. 696–735, 1974.
  • [181] E. A. Schegloff, “Overlapping talk and the organization of turn-taking for conversation,” Language in society, vol. 29, no. 1, pp. 1–63, 2000.
  • [182] Y. Matsusaka, S. Fujie, and T. Kobayashi, “Modeling of conversational strategy for the robot participating in the group conversation.” in INTERSPEECH, vol. 1, 2001, pp. 2173–2176.
  • [183] B. Mutlu, T. Shiwa, T. Kanda, H. Ishiguro, and N. Hagita, “Footing in human-robot conversations: how robots might shape participant roles using gaze cues,” in Proceedings of the 4th ACM/IEEE international conference on Human robot interaction.   ACM, 2009, pp. 61–68.
  • [184] C. Chao and A. L. Thomaz, “Turn taking for human-robot interaction,” in AAAI fall symposium on dialog with robots, 2010, pp. 132–134.
  • [185] C. L. Sidner, C. Lee, C. D. Kidd, N. Lesh, and C. Rich, “Explorations in engagement for humans and robots,” Artificial Intelligence, vol. 166, no. 1, pp. 140–164, 2005.
  • [186] J.-M. Valin, F. Michaud, J. Rouat, and D. Létourneau, “Robust sound source localization using a microphone array on a mobile robot,” in Intelligent Robots and Systems, 2003.(IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on, vol. 2.   IEEE, 2003, pp. 1228–1233.
  • [187] K. Nakadai, D. Matsuura, H. G. Okuno, and H. Kitano, “Applying scattering theory to robot audition system: Robust sound source localization and extraction,” in Intelligent Robots and Systems, 2003.(IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on, vol. 2.   IEEE, 2003, pp. 1147–1152.
  • [188] J.-M. Valin, F. Michaud, B. Hadjou, and J. Rouat, “Localization of simultaneous moving sound sources for mobile robot using a frequency-domain steered beamformer approach,” in Robotics and Automation, 2004. Proceedings. ICRA’04. 2004 IEEE International Conference on, vol. 1.   IEEE, 2004, pp. 1033–1038.
  • [189] J.-M. Valin, F. Michaud, and J. Rouat, “Robust localization and tracking of simultaneous moving sound sources using beamforming and particle filtering,” Robotics and Autonomous Systems, vol. 55, no. 3, pp. 216–228, 2007.
  • [190] K. Nakadai, T. Takahashi, H. G. Okuno, H. Nakajima, Y. Hasegawa, and H. Tsujino, “Design and implementation of robot audition system ‘HARK’: open-source software for listening to three simultaneous speakers,” Advanced Robotics, vol. 24, no. 5-6, pp. 739–761, 2010.
  • [191] D. A. Reynolds and R. C. Rose, “Robust text-independent speaker identification using Gaussian mixture speaker models,” Speech and Audio Processing, IEEE Transactions on, vol. 3, no. 1, pp. 72–83, 1995.
  • [192] D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, “Speaker verification using adapted Gaussian mixture models,” Digital Signal Processing, vol. 10, no. 1, pp. 19–41, 2000.
  • [193] M. Ji, S. Kim, H. Kim, and H.-S. Yoon, “Text-independent speaker identification using soft channel selection in home robot environments,” Consumer Electronics, IEEE Transactions on, vol. 54, no. 1, pp. 140–144, 2008.
  • [194] Y. Matsusaka, T. Tojo, S. Kubota, K. Furukawa, D. Tamiya, K. Hayata, Y. Nakano, and T. Kobayashi, “Multi-person conversation via multi-modal interface-a robot who communicate with multi-user-.” in EUROSPEECH, vol. 99, 1999, pp. 1723–1726.
  • [195] K. Nakadai, D. Matsuura, H. G. Okuno, and H. Tsujino, “Improvement of recognition of simultaneous speech signals using AV integration and scattering theory for humanoid robots,” Speech Communication, vol. 44, no. 1, pp. 97–112, 2004.
  • [196] M. Katzenmaier, R. Stiefelhagen, and T. Schultz, “Identifying the addressee in human-human-robot interactions based on head pose and speech,” in Proceedings of the 6th international conference on Multimodal interfaces.   ACM, 2004, pp. 144–151.
  • [197] H. Holzapfel, “Towards development of multilingual spoken dialogue systems,” in Proceedings of the 2nd Language and Technology Conference, 2005.
  • [198] C. Cullen, C. Goodman, P. McGloin, A. Deegan, and E. McCarthy, “Reusable, interactive, multilingual online avatars,” in Visual Media Production, 2009. CVMP’09. Conference for.   IEEE, 2009, pp. 152–158.
  • [199] K. R. Echavarria, M. Genereux, D. B. Arnold, A. M. Day, and J. R. Glauert, “Multilingual virtual city guides,” Proceedings Graphicon, Novosibirsk, Russia, 2005.
  • [200] T. Starner, J. Weaver, and A. Pentland, “Real-time American Sign Language recognition using desk and wearable computer based video,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 20, no. 12, pp. 1371–1375, 1998.
  • [201] C. Vogler and D. Metaxas, “Handshapes and movements: Multiple-channel American Sign Language recognition,” in Gesture-Based Communication in Human-Computer Interaction.   Springer, 2004, pp. 247–258.
  • [202] G. Murthy and R. Jadon, “A review of vision based hand gestures recognition,” International Journal of Information Technology and Knowledge Management, vol. 2, no. 2, pp. 405–410, 2009.
  • [203] H. Brashear, T. Starner, P. Lukowicz, and H. Junker, “Using multiple sensors for mobile sign language recognition,” 2003.
  • [204] R. Plamondon and S. N. Srihari, “Online and off-line handwriting recognition: a comprehensive survey,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 22, no. 1, pp. 63–84, 2000.
  • [205] T. Plötz and G. A. Fink, “Markov models for offline handwriting recognition: a survey,” International Journal on Document Analysis and Recognition (IJDAR), vol. 12, no. 4, pp. 269–298, 2009.
  • [206] L. M. Lorigo and V. Govindaraju, “Offline Arabic handwriting recognition: a survey,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 28, no. 5, pp. 712–724, 2006.
  • [207] N. Mavridis, “Autonomy, isolation, and collective intelligence,” Journal of Artificial General Intelligence, vol. 3, no. 1, pp. 1–9, 2011.