“Conservatives Overfit, Liberals Underfit”:
The Social-Psychological Control of Affect and Uncertainty

Jesse Hoey
Cheriton School of Computer Science
University of Waterloo
Waterloo, Ontario,
Canada, N2L3G1
jhoey@cs.uwaterloo.ca
   Neil MacKinnon
University of Guelph
Guelph, Ontario,
Canada
nmackinn@uoguelph.ca
Abstract

The presence of artificial agents in human social networks is growing. From chatbots to robots, human experience in the developed world is moving towards a socio-technical system in which agents can be technological or biological, with increasingly blurred distinctions between the two. Given that emotion is a key element of human interaction, enabling artificial agents with the ability to reason about affect is a key stepping stone towards a future in which technological agents and humans can work together. This paper presents work on building intelligent computational agents that integrate both emotion and cognition. These agents are grounded in the well-established social-psychological Bayesian Affect Control Theory (BayesAct). The core idea of BayesAct is that humans are motivated in their social interactions by affective alignment: they strive for their social experiences to be coherent at a deep, emotional level with their sense of identity and general world views as constructed through culturally shared symbols. This affective alignment creates cohesive bonds between group members, and is instrumental for collaborations to solidify as relational group commitments. BayesAct agents are motivated in their social interactions by a combination of affective alignment and decision theoretic reasoning, trading the two off as a function of the uncertainty or unpredictability of the situation. This paper provides a high-level view of dual process theories and advances BayesAct as a plausible, computationally tractable model based in social-psychological theory. We introduce a revised BayesAct model that more deeply integrates social-psychological theorising, and we demonstrate that a key component of the model is sufficient to account for cognitive biases about fairness, dissonance and conformity. We show how the model can unify different exploration strategies in reinforcement learning.

1 Introduction

A key element of human experience is emotion, and enabling artificial agents with the ability to reason about emotions is a key stepping stone towards a future in which artificial intelligence (AI) and humans can work together cooperatively in social dilemmas,[1] while respecting ethical, moral and normative orders in society. Our vision is to build intelligent computational agents that parsimoniously integrate both emotion and cognition, that are able to become members of a socio-technical system. We ground our vision in a social-psychological theory of affective alignment and social order called BayesAct (Hoey et al., 2016; Schröder et al., 2016), which is based on the sociological Affect Control Theory (MacKinnon, 1994; Heise, 2007).

[1] A social dilemma is a game with uncompensated interdependencies (externalities) (Kollock, 1998): each person's actions in the game affect other persons without their explicit consent (e.g. without compensating them).

We pursue a pragmatic approach to understanding intelligence and building artificial intelligence (AI). In the same spirit as early AI researchers, we seek to build a mechanical entity that has the same general intelligent capacity as a human, based on theory that is (at least somewhat) grounded in social psychological research into human intelligence and being. However, we do not attempt to build a replica of the human brain; rather, we seek a replica of the human mind, with information channels to the outside world that can be implemented using arbitrary sensors and actuators. Thus, while this theory will require embodiment, it is not bound to be anthropomorphic. Central to our approach is to find a computationally tractable and easily interpretable model of human intelligence, such that it can be built practically and evaluated as a valuable member of society. To satisfy the interpretability requirement, we seek parsimonious models of the link from an agent’s actions to its observations, the parsimony being defined by the minimal complexity required to adequately align behaviours in social situations. This minimal complexity (most parsimonious) model is the one that “forces” social agents to interact constructively to solve social dilemmas and allows agents to evaluate and interact with the future by comparing their state to a model of only the optimal solution to the social dilemma, instead of to a model of all possible states (Ramstead et al., 2019). Thus, the generative model (the world, or simply another agent in a closed dyadic situation) must be inverted to create a recognition model in the agent that mirrors it.

Parsimonious models have previously been explored in the context of perceptual inference (Dayan et al., 1995). In the Ecological Free Energy Principle, this has been generalized to include embodied intelligent agents capable of autonomous action (Bruineberg and Rietveld, 2019). The core principle is that agents frequent certain configurations of the world, and value is placed on those states of the world that they frequent. BayesAct provides a simple, computational mechanism that brings a social element into the world, and into the agent’s world model, and suggests that intelligent agents living in a social setting may use a sharing mechanism based on emotion which allows them to attend to (and thereby frequent) the social order in which they are embedded, their social econiche (Bruineberg and Rietveld, 2019). This sharing mechanism generalizes across actions and world configurations including other social agents, constituting a highly complex environment.

This paper makes two primary contributions. First, the exposition of a revised version of the BayesAct model which more coherently ties it to the underlying sociological theorising (Heise, 2007). We introduce the somatic transform as a mathematical method for linking affective and deliberative reasoning at the meta-cognitive level, and link this transform to other dual process models from cognitive neuroscience, behavioral economics, and multi-agent reinforcement learning. Second, the application of BayesAct to a set of three cognitive bias experiments, showing how this single idea can provide an explanation across different domains. We also connect BayesAct to theorising about the Bayesian brain (Friston, 2010), and active inference (Friston et al., 2012).

We first discuss dual process models in general in Section 2, and BayesAct in particular in Section 3. We show how BayesAct can be used to explain behavioural effects in fairness (Section 4.1), dissonance (Section 4.2) and conformity (Section 4.3). In the discussion, we review two key practical application areas, one in online collaborative networks (e.g. GitHub) (Hoey et al., 2018) and the other in building assistive technologies for persons with dementia (Robillard and Hoey, 2018). We briefly discuss other ethical and philosophical considerations and conclude. Technical details about BayesAct can be found at bayesact.ca.

2 Dual Process Models

Human and artificial intelligent agents are faced with the computational problem posed by the complexity of the social order encoded as sensory inputs. Agents must find a way to map this high dimensional input space into an equally high dimensional action space. Agents can handle the complexity of the input by constructing a representation of it, and then performing calculations over this representation. We will call this the denotative representation; it is an abstraction of the physics of the environment. For example, it is able to represent the positions of pieces on a chess board and make predictions about how a game will turn out given a sequence of moves, or it can represent the bids in a negotiation. The denotative representation is assumed to be symbolic, but can be implemented sub-symbolically.[2] Regardless of representation, denotative calculations rapidly become intractable, and are exacerbated by the inclusion of other intelligent agents (FeldmanHall and Shenhav, 2019).

[2] By subsymbolic, we mean as in a neural network, where the “symbols” are weights on neurons, and therefore somewhat difficult to interpret. However, we do not rule out the possibility that a subsymbolic representation could be used to model a symbolic one. For example, in a deep reinforcement learning problem, the symbols are the actions being predicted (in fact, the values of these actions), while the neural network simply provides the mapping function.

FeldmanHall and Shenhav (FeldmanHall and Shenhav, 2019) describe an abstract Bayesian model of the management of uncertainty. In their model, humans are motivated to reduce uncertainty in their distributions over their own actions, conditioned on their current appraisal of their state. The ability to reduce uncertainty has clear evolutionary advantages, meaning humans have an intrinsic motivation to reduce it. Affect acts as an easily accessible measure of uncertainty over actions for the agent, and provides an error signal that can be used in a feedback loop to reduce uncertainty. As uncertainty grows, e.g. in a social situation, negative affect is created and an agent is motivated to reduce this, restoring positive affect, by taking actions that reduce uncertainty. In social situations, the state of the world includes the states of other agents, such as their traits, goals and emotions. A continuum of strategies, from “automatic” control based on stereotypes and impressions, to “controlled” processes based on perspective-taking and effortful search, is used to reduce the uncertainty. This continuum maps to a Bayesian tradeoff between prior and evidence, and is well described in the BayesAct model by a dual-process system we explain in Section 3.
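The feedback idea can be made concrete with a minimal sketch (ours, not FeldmanHall and Shenhav's implementation): treat the entropy of the agent's distribution over its own actions as the uncertainty that negative affect tracks, and switch from automatic to controlled processing when it crosses a threshold. The distributions and the threshold below are illustrative only.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats; a proxy for the agent's uncertainty."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + 1e-12))

# Hypothetical distributions over the agent's own actions, conditioned
# on its current appraisal of the state (numbers are illustrative only).
policy_confident = np.array([0.90, 0.05, 0.05])   # low uncertainty
policy_ambiguous = np.array([0.40, 0.35, 0.25])   # high uncertainty

for name, policy in [("confident", policy_confident),
                     ("ambiguous", policy_ambiguous)]:
    h = entropy(policy)
    # Negative affect scales with uncertainty; here entropy simply
    # plays the role of that error signal.
    affect = -h
    mode = "automatic (stereotype-driven)" if h < 0.8 else "controlled (effortful)"
    print(f"{name}: entropy={h:.2f}, affect signal={affect:.2f}, mode={mode}")
```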

2.1 Complexity

As the complexity of the environment increases, an agent with fixed computational resources runs into a bound that prevents it from modeling the added complexity. As this bound is reached, the predictions made by the denotative representation have increasing trouble matching the evidence from the world, resulting in more dispersed estimates of denotative state and an impoverished mechanism for predicting the future. Such an agent can handle this inability to predict the future by believing the predictions and ignoring the evidence (underfitting, leading to high bias, low variance behaviour), or by relaxing the predictions and believing the evidence (overfitting, leading to low bias, high variance behaviour). While the first agent will have difficulty adapting to change (but can see it coming), the second will have difficulty predicting change (but can adapt to it). We will denote the first type of agents (those that underfit) as “L” agents, and the second type (those that overfit) as “C” agents.
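A minimal sketch of the two styles, under the assumption that each agent can be caricatured as a fixed-gain filter tracking a changing world; the gains are illustrative. The "L" agent trusts its predictions and smooths over the change (high bias), while the "C" agent trusts the evidence and tracks it immediately, noise and all (high variance).

```python
import numpy as np

rng = np.random.default_rng(0)

# A world signal that changes abruptly halfway through.
signal = np.concatenate([np.zeros(50), np.ones(50)]) + 0.3 * rng.standard_normal(100)

def track(observations, gain):
    """Fixed-gain filter: gain near 0 trusts the model's prediction
    (underfitting, 'L'-style high bias); gain near 1 trusts the
    evidence (overfitting, 'C'-style high variance)."""
    belief, history = 0.0, []
    for obs in observations:
        belief = belief + gain * (obs - belief)
        history.append(belief)
    return np.array(history)

l_agent = track(signal, gain=0.05)  # smooth, but slow to adapt to the change
c_agent = track(signal, gain=0.90)  # jittery, but adapts immediately
print("L agent estimate just after the change:", round(l_agent[60], 2))
print("C agent estimate just after the change:", round(c_agent[60], 2))
```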

An agent with a hierarchical model can do both, however, as it can create and track a lower-dimensional version of the denotative state that allows it to continue making predictions, albeit with reduced precision. In machine learning, the basic idea of approximating a complex function with another, simpler, function is referred to as variational, and if the functions are probabilistic, then variational Bayesian. An intelligent agent that is using a variational optimization technique to obtain policies of behaviour can be seen as an instance of active inference (Friston et al., 2012). Active Inference proposes consistency between an agent’s internal model and the environment in which it is embedded as a fundamental principle underlying biological agents. The complexity of the agent’s environment, defined as the total number of configurations accessible to the agent, is the true free energy. As an agent’s world becomes more complex (e.g. with the addition of other social agents), this true free energy becomes intractable to model within an agent’s resource bound, and so the agent must approximate. An agent’s variational free energy is an internal measure of how well its (approximate) model fits the real world. The idea of minimizing this variational free energy is the same as moving towards a state of true free energy which is minimal and is consistent with the econiche in which it is embedded (Bruineberg and Rietveld, 2019; Friston, 2010). Agents with better matching models avoid surprise and are better survivors. BayesAct proposes the connotative space as performing a (variational) approximation to the denotative space. This approximation is necessary because of the impossibility of finding a good match at the denotative level alone, and more so in social environments, which are inherently harder to predict, and are therefore less valid, or less predictable, or more ambiguous and uncertain (Kahneman and Klein, 2009). In less valid situations, the distribution over denotative states is more dispersed or has higher entropy. People perform more poorly than algorithms in situations of low validity (Kahneman and Klein, 2009) because they resort to cognitive biases.
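As a worked illustration of the quantity in question, the sketch below computes the variational free energy F = E_q[log q(s) − log p(s, o)] for a two-state discrete example of our own devising; F is minimized exactly when the approximate distribution q equals the true posterior, where it equals the surprise −log p(o).

```python
import numpy as np

def variational_free_energy(q, prior, likelihood_o):
    """F = E_q[log q(s) - log p(s, o)] for a discrete latent state s and a
    fixed observation o.  Equals KL(q || p(s|o)) - log p(o), so F is
    minimized when q matches the true posterior."""
    joint = prior * likelihood_o            # p(s, o) for each state s
    return float(np.sum(q * (np.log(q) - np.log(joint))))

prior = np.array([0.5, 0.5])          # two denotative states
likelihood_o = np.array([0.9, 0.2])   # p(o | s) for the observed o

posterior = prior * likelihood_o
posterior /= posterior.sum()

q_bad = np.array([0.5, 0.5])
print("F at q = prior    :", round(variational_free_energy(q_bad, prior, likelihood_o), 3))
print("F at q = posterior:", round(variational_free_energy(posterior, prior, likelihood_o), 3))
print("-log p(o)         :", round(-np.log((prior * likelihood_o).sum()), 3))
```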

Related views of emotion include the identification of negative valence with increased uncertainty (FeldmanHall and Shenhav, 2019) or with change in free energy (Joffily and Coricelli, 2013), or expected free energy (Hesp, 2018). One component of identity (esteem) is used to modulate a reward function in (Moutoussis et al., 2014b) in order to make cooperation the more salient policy in a social dilemma. However, in (Moutoussis et al., 2014a), interpersonal relationships in general are linked to heuristics that facilitate approximate Bayesian inference, allowing agents to circumvent the intractability of optimal Bayesian inference. Importantly, the models of self and other exist specifically to influence agent actions. In a similar vein, (Jaques et al., 2019) consider social influence as an additional factor in an agent’s utility function in a sequential social dilemma, and the relative weights of this factor can be learned by an agent about other agents. Influence is a signal of power, and we can therefore see the same concept being applied in this multi-agent reinforcement learning context. Note that in earlier work (Ray et al., 2008), the additional factor in the utility function is parameterized by envy and guilt, which map to both power and evaluation dimensions.[3] The precision of the estimate of the other’s type is critically important (Moutoussis et al., 2014a), especially as ambiguity grows (FeldmanHall and Shenhav, 2019). In a trust game, cooperative solutions are linked to valence (self-esteem) in (Moutoussis et al., 2014b), much like the tendency towards fair solutions in uncertain situations in (van den Bos, 2001) (see also Sections 4.1 and 5.7).

[3] Envy is a combined emotion of sadness and anger, which therefore are mixed between negativity and dominance (aggression), while guilt is a combination of joy and fear, mixing positivity with submission (Plutchik, 1980).

These approaches show how valence (and arousal in (Hesp, 2018), and power in (Jaques et al., 2019; Ray et al., 2008)) may be related to uncertainty (more precisely, to the precision of policies). Affective responses are proposed as resulting from active inference in the conceptual model of (Smith et al., 2019). BayesAct shows how this relationship can be linked to social psychological theory, providing a bridge to sociological analytics, by associating dimensions of sentiment with the affective response representation: happy vs. sad (Evaluation) and angry vs. afraid (Potency). However, the sentiment dynamics of BayesAct exist at the affective response representation and conscious access levels in the three-factor active inference model of (Smith et al., 2019), and would be involved in conscious and subconscious interpretations in working memory (Smith et al., 2018). In fact, in the precursor model of Lane (Lane, 2000), the level 5 theory is “yet to be formulated” (p. 362), and “would involve social cognition and would focus on how differentiated awareness of self and other influences social behaviour…” (p. 362). More recent work has noted that “the theory of mind abilities associated with the fifth level of emotional awareness” are not yet fully addressed (Smith et al., 2019). We suggest BayesAct exists at this level, which can handle “blends of blends of emotions”: for example, a feeling of happiness and sadness generated by thinking about a lucky friend feeling happy and proud, but a bit worried about his effect on me.

2.2 BayesAct

BayesAct is a computational dual process (hierarchical) model of intelligence, in which one process is continuous in one or more dimensions and is equated with human sentiment, while the other process is discrete or continuous and models human deliberative reasoning and decision making (Hoey et al., 2016; Schröder et al., 2016). The dual process is built to handle uncertainty and surprise, naturally shifting between higher bias (lower variance) models in the sentiment space in more (denotatively) uncertain (invalid/unpredictable) situations, and higher precision (lower bias) models in the deliberative space in more certain (valid/predictable) environments. BayesAct agents will naturally shift in response to the unpredictability of their environment, but may do so starting from different individual “set points”, defined by the parametrization of the BayesAct model. These differences in the management of uncertainty have been noted before in work demonstrating individual differences in a bias-variance tradeoff at the perceptual level (Glaze et al., 2018). In keeping with a hierarchical Bayesian model of the mind, we show here how this tradeoff can exist at the level of conscious cognitive constructs about the self and identity. We return to these individual differences in Section 5.7.

In BayesAct, we avoid the terms “cognitive” and “emotional”, and refer instead to a “denotative” representation and a “connotative” one. The “connotative” is also referred to as “sentiment” or “feeling”, both of which are “affective”, while “emotion” refers specifically to a signaling mechanism described in Section 3. The denotative representation requires deliberative reasoning, in which sequences of futures are examined in memory to allow for selection of appropriate actions in the present. The connotative representation, on the other hand, is the meaning of the world at the level of sentiments or feelings in a relatively low dimensional space, and produces indications of social (in)consistency or (in)coherence. Direct evidence of these meanings is obtained through emotional signals from other agents. Importantly, the consistency encoded in sentiments extends to agent actions and provides a rough (heuristic) guide over policies. The social intelligence provided by this consistency is shared by agents in a community, and motivates them to want to do things according to the same practice (“habitus” (Bourdieu, 1990)) which encodes the “way we do things”. This shared practice is an approximation built to handle and alleviate computational complexity of the social world. In BayesAct, any denotative state can be mapped into the same connotative space, allowing for comparisons between actions and identities, for example. The connotative state is required to guide an agent towards socially acceptable choices of behavior that can ensure more globally optimal solutions to social dilemmas. Therefore, BayesAct fits the definition of an enactive inference engine, in that it explicitly has the engagement of the agent with the world as a generative model (the mapping from actions to observations, which is unknown to, but partially controllable by, the agent) (Ramstead et al., 2019). One can view the generative model in this case as a mirror image in another agent of the dynamics of sentiment. Once these dynamics sync up between agents, they can be used to create the inverse recognition model in which other agent’s actions are more predictable. Note that this is not the same as modifying the utility function directly, but could be viewed as a modification to expected utility, a different form of reward shaping (Mataric, 1994).

Consider a simple, illustrative example in which people and behaviours are characterised as either “good” or “bad”. Cultural consensus is that good people will do good things, and so the denotative state does not have to model the situation in which good people are doing bad things, and thus can be simpler. The connotative state can therefore be linked to the denotative state with some energy functional dependent on the discrepancy between the meanings of things out of context and in context. If “good” people and “good” behaviours (out of context) are expected to be found together (in context), then inconsistency (good people doing bad things) will be surprising, will cause increased dispersion in estimates at the connotative level, and will push reasoning into the denotative level for analysis and re-labeling of actors and behaviours (maybe this is not actually a “good” person, or maybe the behaviour is not a “bad” behaviour?). Critically, it is the sharing of these energy functionals that is required to gain this efficiency. A slightly more realistic example is a left- (or right-) leaning newspaper that only has to write articles for its readership, so that articles can be represented more simply, in their most elemental form, as the emotional “form” of the news article.

The link in BayesAct between denotative and connotative induces a natural (Bayesian) tradeoff due to relative uncertainty. As the environment becomes less valid (so the distribution over denotative states is more dispersed, or has higher entropy), the posterior will be more heavily influenced by the prior in the connotative state. Agents in less valid (less predictable) environments will put more weight on the connotative representation: they will make inferences and choose actions that are more in line with connotative (socio-cultural) expectations. In more valid environments, a lower entropy denotative distribution dominates the posterior. Agents in more valid environments will thus act more in line with denotative states and predictive dynamics, and so will be information seekers and utilizers. In a social dilemma, for example, one would expect the agents in less valid environments to cooperate (act according to social prescriptions), while agents in more valid environments will defect (act decision theoretically rationally). This is in line with experiments showing how humans tend to rely more on fairness in uncertain social situations (van den Bos, 2001), and act more pro-socially (cooperate in a public goods game) in ambiguous situations (ones in which risk is hard to evaluate, see (Vives and FeldmanHall, 2018)). In BayesAct, risk is represented by the transition dynamics parameters in the denotative space. If the distribution over these parameters has lower entropy, then risk is more well defined, and so ambiguity (the uncertainty in risk) is lower.
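The tradeoff is simply precision-weighted Bayesian fusion. A minimal sketch, with made-up numbers: a connotative prior (a cultural prescription) is combined with denotative evidence whose variance grows as the environment becomes less valid, so the same evidence moves the agent much less in an invalid world.

```python
import numpy as np

def fuse(prior_mean, prior_var, obs_mean, obs_var):
    """Precision-weighted Bayesian fusion of two Gaussian estimates."""
    w_prior = (1.0 / prior_var) / (1.0 / prior_var + 1.0 / obs_var)
    return w_prior * prior_mean + (1.0 - w_prior) * obs_mean

# Connotative prior: cultural sentiment, e.g. "cooperating feels right".
prior_mean, prior_var = 1.0, 0.5

# Denotative evidence favouring defection, observed with low noise in a
# valid world and high noise in an invalid one (all values illustrative).
obs_mean = -0.5
print("valid world   :", round(fuse(prior_mean, prior_var, obs_mean, obs_var=0.1), 2))
print("invalid world :", round(fuse(prior_mean, prior_var, obs_mean, obs_var=5.0), 2))
```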

2.3 Active Inference

In BayesAct, we associate the connotative state as a variational approximation to the denotative state, and identify it as a mechanism for encoding efficient policies in the environment that includes a (potential) social group. Agents minimizing their free energy using such a dual-process model will engage in active inference and will learn a model of emotional dynamics that can ease the computational load on the denotative representation. Further, if the variational approximations are linked across agents, then the resulting social group will learn to share the same approximation in order to more efficiently solve social dilemmas. The connotative representation is selected to be consistent with that of other agents precisely because the collaborative solution to the dilemma yields higher payouts for all individuals. A secondary, normally multi-modal, signaling mechanism is used to facilitate this linkage across agents. These signals are termed “emotional” and ensure that the agents’ connotative spaces are directly linked. This “emotion language” (Turner, 2016), communicated in part through facial expressions and paralinguistics, allows agents to communicate what aspects of the denotative state are worth more fully exploring. As a result, the connotative state acts as a “flashlight” that illuminates the same part of the denotative state for all agents who share it. Such “spotlight” metaphors have been deeply explored in the context of psychological (usually visual) attention (Crick, 1984). Once this linking occurs, the connotative state and dynamics encode a social contract with the social group in which the agent finds itself.

2.4 The Relation between Cognition and Affect

As discussed in Section 2.2, we employ the terms denotative and connotative in BayesAct to distinguish the cognitive meaning and representations of objects and events in high dimensional space from their affective meaning and representations in low-dimensional space. Denotative representations are associated with cognitive processing and deliberative reasoning; connotative representations with affective processing and detection of consistency between culturally-based expectations and actual occurrences. Because the distinction between cognition and affect or denotative and connotative meaning is such an essential ingredient of BayesAct, we discuss it in some detail before proceeding.

This distinction has been a contentious and largely unresolved issue for many years in both the neurobiological and social sciences. Historically, it can be traced back to at least the James-Lange theory of emotion (Lange and James, 1922/1967), which proposed that what we feel as emotion is simply our cognitive perception of somatic states (e.g., trembling, crying, and so on) that have already occurred in direct response to external stimuli. After the theory was effectively challenged by Cannon (1929), on the contention that bodily responses are not rapid enough to account for the immediate perception of emotional experience, interest in the James-Lange theory waned until the 1980s when it was reignited by Zajonc (1980, 1984). On the basis of experimental evidence suggesting that subjects could make affective preferences among stimuli presented below the threshold of cognitive awareness, Zajonc concluded that “affect and cognition are separate and partially independent systems” (Zajonc, 1984, p.117) and that affect can occur without prior cognitive processing and can even precede cognition in a behavioral sequence. These conclusions set off a heated exchange with Lazarus in the 1980s that became known as the primacy of cognition versus affect debate (see (MacKinnon, 1994) for an extensive discussion).

The distinction between cognition and affect is even more problematic at the neurobiological level of the brain, where neuroimaging studies have failed to locate the psychological functions of cognition and affect in specific neuroanatomical regions and networks of the brain. Cacioppo et al. (2003) suggest caution in the use of fMRI (functional magnetic resonance imaging) to study localizations of emotion and cognitive functions in the brain. In a similar vein, Davidson (2003) includes in a list of “seven sins” to avoid in the study of emotion the belief that affect and cognition involve independent and separate neural circuitry and the related belief that affect is mostly subcortical and cognition mostly cortical. As pointed out by Barrett and Satpute (2013) in a meta-analysis of neuroimaging studies, the problem with trying to connect psychological functions to specific neuroanatomical structures or networks lies in the fact that large-scale intrinsic networks (e.g., Salience, Default Mode) are domain-general information processing networks that spill across psychological functions. On this basis one should not expect anywhere near a one-to-one correspondence of neuroanatomical structures and networks with even broad psychological categories such as levels of information processing (Ortony et al., 2005) or hierarchical levels of emotional experience (Lane, 2000; Smith et al., 2018). Many other authors have come to similar conclusions, e.g., (Damasio, 1994; LeDoux, 1996; LeDoux and Brown, 2017; Pessoa, 2008, 2018; Duncan and Barrett, 2007; Franks, 2006; Turner, 2009).

Because BayesAct is a model of the human mind, not the human brain, it allows for the possibility of a distinction between cognition and affect. Our view of the relation between cognition and affect at this level consists of a balance of two ideas. On the one hand, we maintain that cognition and affect are not completely independent constituents or processes of the mind because all cognitions evoke affective feelings, if only those mild feelings generated from the perception or recognition of objects. On the other hand, to the extent cognition and affect can be distinguished as partially independent constituents or processes of the mind, we maintain that the distinction is an analytically and empirically valid one. This is most evident at the extremes of “cold” cognitions where the intensity of affective arousal is low and “hot” cognitions where arousal is quite pronounced; or, alternatively, at the extremes of affective experience largely unmediated by cognitive processing and that involving a high level of cognitive appraisal and reflection. And to the extent that cognition and affect are at least partially independent, we maintain that both are required for an adequate understanding of the human mind.

This view of the relation between cognition and affect can be summarized as two principles (MacKinnon, 1994): (1) the principle of inextricability proposes that cognition and affect are not completely independent constituents or processes of the mind, but rather a matter of relative preponderance, a continuum wherein a representation in the mind at any given moment can be predominantly cognitive or predominantly affective or anywhere in between; and (2) the principle of complementarity proposes that, as overlapping constituents or interdependent systems of phenomenological experience, both cognition and affect are necessary to understand the human mind. While the principle of inextricability is an ontological statement about the reality of the human mind as currently understood, the principle of complementarity is an epistemological implication of this ontological view. Although the complementarity principle was proposed by Bohr (1950) in the 1920s to explain the contradictory images evoked by the wave-particle duality in subatomic phenomena, James (1890) had developed it much earlier to reconcile, in his terms, the substantive and transitive parts of thought (see (Stephenson, 1986a, b)), which parallel the distinction between the denotative-cognitive and connotative-affective meanings of concepts employed in BayesAct. Many other authors have pointed to complementarity, including LeDoux, who opines that “emotion and cognition are best thought of as separate but interacting mental functions mediated by separate but interacting brain systems” (LeDoux, 1996, p.69, emphasis added), and Pessoa (2008, 2018), who focuses on brain systems underlying the interaction between emotion and cognitive processing, although he anticipates moving beyond interaction to understanding their integration in the brain. Clore and Ortony (2000) also argue for a mutual relation between the denotative-cognitive and connotative-affective systems of meaning, as do many others e.g. (Storbeck and Clore, 2007; Duncan and Barrett, 2007; Franks, 1989). The principles of inextricability and complementarity are brought together in Franks’ statement that a satisfactory resolution to the conflict between emotion and cognition “will depend on describing how they can be inextricably linked [principle of inextricability] while capable of being in tension” [principle of complementarity] (Franks, 2006, p.55).

Assuming that the distinction between cognition and affect is valid at the psychological level of the mind, their temporal priority and causal primacy becomes a moot point if one supposes a reciprocal relationship between them. Widely suggested in the literature (e.g., (Mook, 1987; Lazarus, 1984; Forgas, 2008; Turner, 2009)), this is a core assumption of affect control theory (ACT), which formalizes the reciprocal relation between cognition and affect by capitalizing on the distinction between the denotative (cognitive) and connotative (affective) meaning of words for objects and events established by Osgood and associates (Osgood et al., 1957; Osgood, 1969; Osgood et al., 1975). Comprising feelings of evaluation, potency, and activity (EPA), the dimensional simplicity of connotative-affective meaning provides a portal into the dimensionally-complex denotative-cognitive representations of the world around us. Affective reactions to external objects and stimuli become “the means by which information about the external world is translated into an internal code or representation that can be used to safely navigate the world” (Duncan and Barrett, 2007, p.1186). BayesAct moves this relation between cognition and affect to a significantly higher level by specifying a formal mathematical model that enables one to move back and forth between the denotative and connotative meanings and representations of entities.

2.5 Other Dual Process Theories

Dual process theories are well studied in social psychology (Chaiken and Trope, 1999), but many different terms are used to refer to the two levels of processing. “Cognitive” processing is often referred to as deliberative, reflective (Ortony et al., 2005), conscious (Smith et al., 2019) or “System 2” (Stanovich and West, 2000), whereas “emotional” processing as automatic, routine (Ortony et al., 2005), or “System 1” (Stanovich and West, 2000). In many dual process theories (e.g. (FeldmanHall and Shenhav, 2019)), both deliberative and automatic systems are modeled denotatively in a constraint satisfaction network. In BayesAct, the connotative level is affective and serves as a low-dimensional approximation to a denotative representation. However, the connotative representation is not at the level of “primary” or reactive emotions (e.g. reflexes), or of core affect (Barrett, 2017), but rather at the level of routine or reflective interpretations of emotions linked to procedural memory (Ortony et al., 2005).

Behavioural economists have also tackled emotional human motivations, usually by proposing that humans make choices based on a modified utility function that includes some reward for fairness (Rabin, 1993) or penalty for inequity (Fehr and Schmidt, 1999) or conformity (Mas and Moretti, 2009). However, heuristic adjustments may not be comprehensive enough to account for human behaviour across all situations, and a morality concept that is not based on outcomes can be used as a more parsimonious account (Capraro and Rand, 2018). The question of how this morality is defined is left open.
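For concreteness, a minimal sketch of one such modified utility, the two-player inequity-aversion model of Fehr and Schmidt (1999); the parameter values below are illustrative, not estimates.

```python
def fehr_schmidt_utility(own, other, alpha=0.8, beta=0.4):
    """Two-player inequity-averse utility (Fehr and Schmidt, 1999):
    alpha penalizes disadvantageous inequity, beta advantageous inequity.
    Parameter values here are illustrative only."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

# An $8/$2 split of $10: the disadvantaged player may prefer (0, 0),
# which is one account of why low offers get rejected in ultimatum games.
print(fehr_schmidt_utility(2, 8))   # 2 - 0.8*6 = -2.8
print(fehr_schmidt_utility(0, 0))   # 0.0
```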

The BayesAct approach is to modify the expected utility by focussing computational attention on those solutions predictable by the connotative dynamics. Note that this is a different concept than Simon’s bounded rationality (SBR) (Simon, 1967). In SBR, the agent first performs an analysis at the symbolic level (denotative), and then “freezes” this analysis into a second denotative space called habits and coping. In ACT and in BayesAct, the agent gathers a fast impression and then makes predictions in an emotional space with a simple predictive function which can rapidly generate somewhat (socially) relevant predictions about future outcomes involving other agents. In BayesAct, we see an emergent bounded rationality defined by uncertainty over outcomes. As the future becomes more uncertain, an emotional system automatically and softly kicks in to take up the slack. The subsequent diminishment of uncertainty is transmitted socially, shared between agents in a group.

The sociology of culture makes a distinction between heuristic cognitive biases (in so-called “toolkit” theories) and deeply ingrained patterns of behaviour (in so-called “practice” theories) (Lizardo and Strand, 2010). Toolkit approaches have found success in explaining social structures (Martin, 2009), and are hypothesised to arise from the scaffolding of the environmental and social structure. The scaffolding is so complex that humans learn heuristics and tricks to get by, but the tricks are “defined” by the scaffolding, since they are created precisely to handle it. Thus, from a distance, the social structures may look complex, but in reality each individual is following a set of simple rules, like Simon’s “ant on a beach” (Martin, 2010). Practice theory has been notoriously more difficult to apply, primarily because of a lack of operationalization (Li et al., 2009). In BayesAct, behaviours explained by toolkit and practice theories are reflections of the same underlying affective dynamics (impression formation and the somatic transform).

2.6 Reinforcement Learning

Connections of emotions with reinforcement learning have been explored by a number of authors (Moerland et al., 2017; Hogewoning et al., 2007; Marinier III and Laird, 2008; El-Nasr et al., 2000; Broekens et al., 2015). In traditional reinforcement learning (RL), an agent is tasked both with learning about its world (primarily the utility of situations), and with acting in the same world. The basic quandary for the agent is whether it should exploit its current knowledge of the world, or explore something new in the hopes of discovering even better situations: a difficult tradeoff indeed between a sure bet and a random chance. Many RL agents tackle this tradeoff using some form of optimism under uncertainty: if something has not been tried (or has not been tried for some time, or insufficiently many times), assume it will lead to high utility outcomes (Brafman and Tennenholtz, 2002; Kocsis and Szepesvári, 2006). This method works in practice because it ensures sufficient exploration, and agents don’t get “stuck”. In traditional RL, exploitation is seen as a cognitive/rational skill requiring (usually intense) computation, since it involves predicting the future based on learned knowledge, and analyzing the costs and benefits of different strategies. Exploration, on the other hand, is seen as something that could be guided by any number of (possibly affective) elements. For example, in (Hogewoning et al., 2007), higher valence (which is equivalent to reward being higher than expected, so things are going well) is used to push an agent to increased exploitation of current knowledge, whereas lower valence/reward (things are going worse than expected) pushes an agent to explore. This view is largely consistent with the affect-as-cognitive-feedback view (Huntsinger et al., 2014), where higher valence facilitates usage of existing mental constructs, whereas negative valence inhibits them (and thus forces an agent to seek new solutions through exploration). In general, expected and immediate emotions are hypothesized to be related to expected utility and to modify action choices accordingly (Loewenstein and Lerner, 2003). In particular, emotions such as fear/hope/joy/distress have been related to, and computed directly from, value functions and rewards. For example, “joy” is defined as the likelihood of a change in expected value, while “hope” is defined as expected value (if positive) (Broekens et al., 2015). Gomez and Insua use a decision theoretic definition of hope, fear, happiness and sadness, in much the same way (Gomez Esteban and Rios Insua, 2017). These emotions are computed based on expected utility, and the elicited emotions’ intensity is partially based on uncertainty. If a threshold in uncertainty is crossed, then a short-circuit “impulsive behaviour” is triggered that overrides the rational one. Although the authors claim this is a “System 1/System 2” dual process approach (Kahneman, 2011), it is much more in line with the Simonesque “interrupts” approach (Simon, 1967). An adaptive system combining fuzzy logic with reinforcement learning is described in (El-Nasr et al., 2000). This model also uses application-dependent appraisal rules based on the OCC model (Ortony et al., 1988) to generate emotional states. However, they also use a set of ad-hoc rules to generate actions. A recent survey of the connections between emotional appraisals and elements in reinforcement learning is in (Moerland et al., 2017).
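A minimal sketch of optimism under uncertainty in its simplest form, optimistic value initialization on a two-armed bandit (a toy construction of ours, not the specific algorithms of the papers cited above): because untried arms retain inflated value estimates, even a purely greedy agent tries every arm before settling.

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = [0.3, 0.7]                  # two-armed bandit, values illustrative
Q = np.full(2, 5.0)                      # optimistic initial value estimates
counts = np.zeros(2)

for t in range(200):
    arm = int(np.argmax(Q))              # pure exploitation of optimistic Q
    reward = rng.normal(true_means[arm], 0.1)
    counts[arm] += 1
    Q[arm] += (reward - Q[arm]) / counts[arm]   # running-average update

# Both arms get sampled before the agent locks onto the better one.
print("pulls per arm:", counts, "final Q:", np.round(Q, 2))
```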

A generalisation of these approaches is intrinsically motivated reinforcement learning (IMRL), which is an evolutionarily plausible mechanism for allowing agents to learn a reward function that will lead them to be maximally fit (Chentanez et al., 2005). The idea is to search the space of reward functions for one that maximizes the agent’s fitness as defined by the extrinsic (usual, designer-provided) reward. A distinction is made with normal “reward shaping” because of resource bounds or agent limitations: the optimized reward function in IMRL will take these bounds into account, finding the best possible reward function given the limitations of the agent. Emotions have been proposed as defining the space of reward functions over which an IMRL method will search (Sequeira et al., 2014). A set of emotional appraisal features is defined, the weights of which are exhaustively examined to determine the optimal setting for a given domain. The appraisal features used in (Sequeira et al., 2014) are novelty (defined in terms of how many times a state-action pair has been tried), goal relevance (defined as a heuristic estimate of the distance to a maximally valued state), controllability (with uncontrollability measured by the Bellman error), and valence (measured by the value function directly). These appraisal features resolve to standard heuristic methods for guiding reinforcement learning, such as optimism under uncertainty (exploration bonus) or Bellman error (exploitation bonus), now referred to as an emotional appraisal of novelty or control, respectively. Even in the multi-agent case (Sequeira et al., 2011), the appraisals are direct encodings of altruistic reward functions, leading to simple modifications to reward functions that cause rational agents to be more cooperative (as in (Nowak, 2006)). The learned reward functions are domain-specific, and fail to generalise. Attempts to find an optimal “universal” reward function result in an agent that optimizes the extrinsic reward, with a small exploration bonus (encoded as a negative reward for controllability) that decreases as the agent learns the environment (Sequeira et al., 2014). In (Marinier III and Laird, 2008), the SOAR cognitive architecture is augmented with a reinforcement learning agent that uses a synthesis of many emotional appraisals (termed a “feeling”) as a reward signal. As in the above approaches, these emotional appraisals are direct encodings of decision theoretic primitives or heuristics (e.g. direction to goal). Although Marinier and Laird (Marinier III and Laird, 2008) associate this approach with IMRL, it is not an instance of it, as the agent does not attempt to match the external reward function but only uses its internal reward signal to learn from.

2.7 Rationality and Superintelligence

Although it is increasingly clear that humans operate with something akin to a dual process model (Zhu and Thagard, 2002), some (e.g. artificial intelligence practitioners) may argue that a connotative representation is unnecessary for general intelligence, and that sufficient resources (relaxing the resource bound) will lead to fully denotative, decision-theoretically rational agents, or “econs”. The higher precision allowed by a denotative representation seems to point the way to a superintelligence (Hofstadter, 1983), and a decision-theoretically rational social system. We propose that artificial intelligence requires a dual process denotative/connotative model to exist as a general-purpose member of a socio-technical system. We present computational, evolutionary and social arguments here.

One argument is that an agent’s ability to model the vast numbers of combinations of other agents becomes challenging unless a lower dimensional manifold is discovered that enables cooperation in groups. Consider a multi-agent system consisting of N agents of T different types who can behave in B different ways. If each agent attempts to model all other agents in its group, including their first-level (direct) interactions with each other, the number of combinations would be factorial in the product NTB. If N ≈ 150 (Dunbar, 1992) and T and B are each 10 or so, then representation is intractable, and even with only a handful of agents, the number of combinations is astronomical.
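Taking the “factorial in the product” bound at face value (the concrete numbers in the original passage did not survive, so those below are ours), the growth is easy to check:

```python
from math import factorial

def joint_configurations(n_agents, n_types, n_behaviours):
    # Upper bound from the text: tracking every joint assignment of
    # agents, types and behaviours is factorial in N*T*B.
    return factorial(n_agents * n_types * n_behaviours)

# Number of digits in the bound, for two tiny groups.
print(len(str(joint_configurations(5, 2, 2))))    # (5*2*2)!  -> 19 digits
print(len(str(joint_configurations(10, 3, 3))))   # (10*3*3)! -> 139 digits
```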

The second argument in support of a connotative state is offered by Turner (Turner, 2016) from an evolutionary perspective, who notes that early apes were forced into the forest canopy by other simians around 25 million years ago, and had to deal with a more complex, three dimensional space, making permanent groups more difficult and leading to a species with no permanent bonds and increased promiscuity. This increased complexity entailed an increase in the true free energy (number of configurations the world can be in) that the apes had to model, and pressured the development of approximations. When the descendants of these apes, the early hominids, were forced into the savannah in an era of climate change around 10 million years ago, the reduced complexity of a two dimensional world, combined with the need for stronger group cohesion (because of predators), pushed these approximations to other uses, fostered the development of early “emotional” languages, and allowed larger structures of humans to form, opening the door for collective activities like solving social dilemmas. From an evolutionary perspective, the perseverance of this emotional language is an indication of its usefulness in the context of human groups, and therefore we expect it to be useful in a group involving artificial agents as well.

Finally, a group of agents who are able to coordinate to solve social dilemmas will be more suited for survival than a group that does not. This coordination departs from the principles of decision-theoretic (individual) rationality, but can be enforced by a connotative representation that inextricably links agents through emotional signaling. This inextricable link provides a mechanism for a group of agents to jointly minimize their free energy in an uncertain world.

3 The socio-psychological Control of Affect

We present here a short introduction to ACT and BayesAct, and refer the reader to longer treatments in (Hoey et al., 2016; Schröder et al., 2016), covering relationships to other theories of emotion (e.g. appraisal).

3.1 Affect Control Theory

Affect Control Theory (ACT) (Heise, 2007; MacKinnon, 1994) proposes a fundamental link between symbolic, denotative, representations of the social environment and the continuous, connotative, representations of the sentiments or feelings associated with those denotative representations. For example, when one perceives a person in a white coat in a hospital, a denotative impression is formed of this person that is represented with a symbol (“doctor”). This symbol has an associated “fundamental” sentiment in a three-dimensional affective “EPA” space of evaluation (E: good/bad), power (P: strong/weak) and activity (A: active/inactive). Doctors, for example, usually evoke feelings of goodness, strength, and modest activity. EPA space has been found through decades of research to be a cross-culturally normative representation of meaning (Osgood, 1969).

The link between denotative and connotative in ACT is empirically determined through population surveys using semantic differential scales. These measurements yield a set of samples from a population distribution in the sentiment space, which can then be parametrically estimated (e.g. as the mean and variance of a normal distribution), or non-parametrically represented (as a set of samples). Lists of such measurements are called “dictionaries” of mappings from labels to sentiment. In ACT, only the mean of this measurement is used to link denotative and connotative. Thus, a doctor is represented connotatively as a single three-dimensional EPA vector.[4] Given a connotative (EPA) vector, a denotative label can be assigned in ACT using a simple nearest neighbour method (e.g. the closest label to a given EPA vector, measured by Euclidean distance over the dictionary, might be politician). ACT proposes that events in the world, interpreted symbolically (denotatively), create re-assessments at the connotative level called transient impressions that are used to motivate agents towards behaviours that reduce the incoherence between in-context impressions and out-of-context sentiments. This motivation to socially conforming actions can be interpreted as an instance of Bourdieu’s “habitus” (Bourdieu, 1990), as explored in more detail in (Ambrasat et al., 2016).

[4] For historical reasons, EPA measurements are scaled to lie between -4.3 and +4.3. All data in this paper is taken from a survey of 1742 people in the USA in 2015, see https://research.franklin.uga.edu/act/.

Emotions in ACT are defined precisely as the vector difference between fundamental (out-of-context) and transient (in-context) sentiments, and are a mechanism to help agents signal (in)coherence to each other (e.g. with facial expressions or paralinguistics). Importantly, these signals are not scalar indications of (in)coherence, but rather vector signals giving recipes for restorative behaviour and emotion regulation (Gross, 1998). For example, if a doctor talks down to another doctor, the object agent is made to feel less powerful than expected, and will display exasperation or indignation. Upon receiving this signal, the acting agent may restore fundamental sentiments by making up with the other.
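A minimal sketch of the dictionary lookup, nearest-neighbour re-identification, and vector emotion signal just described. The EPA numbers are invented for illustration (the survey values are not reproduced here), so only the mechanics, not the ratings, should be trusted:

```python
import numpy as np

# Toy EPA dictionary.  Real entries come from population surveys such as
# the 2015 US dataset cited above; the numbers below are invented for
# illustration only (EPA values lie in [-4.3, +4.3]).
dictionary = {
    "doctor":     np.array([ 2.5,  2.2,  0.5]),
    "politician": np.array([-0.5,  1.5,  1.0]),
    "rapist":     np.array([-3.5,  0.5,  0.8]),
}

def closest_label(epa):
    """Nearest-neighbour assignment of a denotative label in EPA space."""
    return min(dictionary, key=lambda k: float(np.linalg.norm(dictionary[k] - epa)))

# A transient impression of a "doctor" after an abusive act: the doctor
# now feels quite bad, weaker, and slightly more active than expected.
fundamental = dictionary["doctor"]
transient = np.array([-3.0, 0.8, 1.0])

# In ACT, the emotion signal is the vector difference between the
# fundamental and transient sentiments: a recipe for repair, not a scalar.
emotion = transient - fundamental
print("emotion vector:", emotion)
print("re-identification:", closest_label(transient))   # -> rapist
```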

3.2 BayesAct

BayesAct (Hoey et al., 2016; Schröder et al., 2016) generalises ACT by explicitly representing the distribution over sentiment in a two-level partially observable Markov decision process (POMDP). BayesAct models individual differences as variance in sentiments, and modulates the predictions of ACT due to the differences between denotative entities with low and high connotative variances (Freeland and Hoey, 2018). In the original formulation of the BayesAct model, the sentiment was directly observed in an interaction as a three-dimensional, continuous vector that gave a direct measurement of the sentiment of the behaviour being performed. That is, if a doctor was observed injecting someone with medicine, then BayesAct expected a direct observation of the mean EPA rating for that denotative behaviour (inject someone with medicine). BayesAct had a denotative state, but this only represented elements of an interaction outside of the social definition of identities, such as the state of a game being played. For example, this might be the positions of both agents’ pieces on a chessboard, or current bids in a negotiation.

Here, we propose that the BayesAct model includes a denotative representation of identities and behaviours of other agents. In this case, these denotative elements are linked to the connotative state through a potential function, which we call the somatic potential. The somatic potential is defined in terms of an energy function that measures the incoherence (difference) between the current estimate of the denotative state (e.g. “doctor”) and the current estimate of the connotative state (a distribution in the affective EPA space).

For example, if the doctor performs some behaviour uncharacteristic of a doctor (e.g., abusing a patient), this doctor would seem less good (lower E) than the culturally accepted definition of a doctor. The incoherence generated between the out-of-context sentiment about doctors (high E) and the impressions created by the observed behaviour pushes the observing agent to a higher energy state. While behaviours can be selected (as in ACT) to reduce incoherence (and thus energy), the energy function can also be used to probabilistically rank likely identities that could be used for re-identification. Thus, if a doctor is observed harassing a patient, agents would be motivated to act in such a way as to stop the behaviour, or would be forced to re-interpret the doctor as some other identity (the optimal re-interpretation in this case lies closest to the label rapist).

In the following, we present a mathematical definition of the somatic potential and associated energy function. We start with the assumption that an agent must maintain an internal model of the world as a set of states making up a state space S, which is factored into a denotative part, X, that describes the ontological states of entities in the world, and a connotative part, Y, that describes the meanings of entities in the world. In ACT and BayesAct, the connotative space spans the three-dimensional vector space of EPA sentiments for identities (labels assigned to people) and behaviors (labels assigned to people’s actions). For example, a particular agent’s identity may be represented by some variable x in X, such that the word doctor in the English language is represented as a particular value of that variable x. Similarly, y in Y will represent the affective meaning (in EPA space) of those denotative entities. For example, a doctor might seem good and powerful in EPA space.

BayesAct has two sets of observations. One, ω_x, represents signals about the environment giving evidence for the denotative state. The other, ω_y, represents emotional signals from other agents, and gives direct evidence for the connotative state. Information flows into the model from both connotative and denotative sides, and BayesAct computes posterior distributions that best merge the two in a Bayesian sense. Emotion signals are crucial for grounding the connotative state, as otherwise it could be arbitrarily transformed between agents and would be harder to learn.

Finally, BayesAct has two sets of actions representing denotative action, a_x, and connotative meanings of those actions, a_y. Figure 1 shows a Bayesian network representation of BayesAct.

Figure 1: Belief network for BayesAct at a high level of abstraction showing denotative state x and connotative state y, observations ω_x, emotions ω_y, and actions both denotative, a_x, and connotative, a_y. Two somatic transforms link state and action, respectively, but can be considered the same. Primed variables are post-event, and the network is dynamically unrolled through time.

3.3 Somatic Transform

A core element in the BayesAct POMDP is that every denotative element (including agent actions, a_x) has an associated connotative element (a_y for actions). The important part is the connection between denotative and connotative elements, which we will write using a function called the somatic potential: σ(x, y). The somatic potential specifies the shared cultural connotative interpretation of the denotative state x. That is, for some x, σ_x(y) = σ(x, y) is a function over the connotative space, Y, representing the shared sentiments for denotative state x. For example, σ_x(y) could be a normal distribution given by summary statistics (mean, covariance) measured in a population survey. Using a somatic potential, agents can ask questions such as “what do I feel about object x?” (e.g. Q: “how do you feel about doctors?”; A: “I see doctors as quite good, quite powerful, and a bit active, but I’m somewhat uncertain about this”).

The somatic potential can also yield, for some y, a function over the denotative space, X: σ_y(x) = σ(x, y). For some discrete set of denotative entities, {x_1, …, x_N}, this could be a multinomial distribution (a set of numbers p_1, …, p_N such that p_i ≥ 0 and Σ_i p_i = 1) indicating that entity x_i is likely with probability p_i. The agent can therefore also ask questions such as “what sort of x feels like y?” The reader can demonstrate that this is the “fast” or “System 1” thinking at work by playing the following game. First, pick values for E, P and A and then imagine words with those EPA values. Notice how quickly these very different things come to mind (e.g. Q: “what sort of thing is very good, a bit weak, and very active?”; A: “dogs, smiley faces, kids”).

To give the somatic potential a more precise definition, we may model it as a joint probability distribution over $X$ and $Y$, written as a Boltzmann distribution of the energy function defined earlier, which we write as $E(x, y)$, denoting the energy or incoherence between the denotative state $x$ and the connotative state $y$. (The Boltzmann distribution is often used in statistical physics to describe the joint probability of different states, e.g. of a gas or spin system. It is usually represented as $p(s) \propto e^{-E(s)/T}$, where $E(s)$ is the energy of configuration $s$, $T$ is the temperature, and the normalizing factor $Z$ is called the "partition function". Estimating the partition function (and thus knowing the actual probability of an event $s$) is challenging because it is a sum of energy terms across all possible states, many of which may not even be known by the agent. The partition function is related to the free energy $F$ of the system, scaled by the temperature, through $F = -T \log Z$. The free energy is a measure of how complex the environment or system is. We use the Boltzmann distribution here for convenience; it could be replaced with other distributions.) This yields:

$$\sigma(x, y) \;\equiv\; p(x, y) \;=\; \frac{1}{Z}\, e^{-E(x, y)} \;=\; \frac{1}{Z}\, e^{-\frac{(y - M(x))^2}{T}} \qquad (1)$$

where $M(x)$ is a function into $Y$ giving the connotative meaning of that particular $x$, and $1/Z$ is a normalizing constant (the inverse of the partition function). $M(x)$ may be simply the mean of the population survey for the concept $x$, and $T$ a constant. Thus, the more coherent $y$ is with $x$, the smaller the energy $E(x, y)$, and the more likely the state. The parameter $T$ models one aspect of the (emotional) predictability of the environment. As the environment's diversity increases (say with the addition of some heterogeneous other agents), $T$ naturally increases as the ascription of sentiment to denotative elements in the world is less well defined. Such a world becomes less predictable and less valid. Choosing a value for $T$ is a learning choice to be made by an agent. Although we know that $T$ is related to the variance in sentiments in the population, it does not need to be exactly the same; in the Georgia dataset, for example, the variance in power for doctor differs from that for nurse. Moreover, the value of $T$ will also be a function of the agent's social network, as the agent may operate in a clique or cluster of locally more homogeneous sentiments.
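To make Equation 1 concrete, the following is a minimal Python sketch of the somatic potential, assuming a one-dimensional connotative space (the power dimension only); the means for $M(x)$ are hypothetical placeholders, not actual survey values, which would instead come from a dataset such as the Georgia 2015 survey.

```python
# A minimal sketch of the somatic potential (Equation 1), assuming a
# one-dimensional connotative space (Power only). The means below are
# hypothetical placeholders, not actual survey values.
import numpy as np

M = {"nurse": 0.9, "doctor": 1.8}   # hypothetical cultural means M(x)

def somatic_potential(x, y, T=1.0):
    """Unnormalized Boltzmann potential exp(-(y - M(x))^2 / T).

    Low energy (high potential) means the connotative value y is
    coherent with the cultural sentiment for the denotative label x.
    """
    return np.exp(-((y - M[x]) ** 2) / T)

# "How do you feel about doctors?": a function over the connotative space.
ys = np.linspace(-4.3, 4.3, 9)      # EPA scales run roughly -4.3 to +4.3
print([round(somatic_potential("doctor", yv), 3) for yv in ys])
```

Raising $T$ flattens this function, weakening the coupling between label and sentiment.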

For ease of exposition, here we will assume that the connotative state is one dimensional, and we focus exclusively on the somatic transform. The dynamics in the full BayesAct POMDP are more fully explored elsewhere. Here, we suppose that a prior marginal over $X$, $p(x)$, represents the agent's current estimate of the denotative nature of the interaction. (Here we show one possible usage of the model, assuming prior marginal distributions and attempting to find posterior marginals. We assume the joint probability over $X$ and $Y$ may be modeled as a product of the two marginals. Maintaining this independence is important for the stability of the model, allowing for two separate, but related, systems. It also allows for the two systems to operate at different time scales (with the connotative system able to make faster predictions than the denotative system), and is more in line with neuro-biological findings about different brain systems devoted to the two. Note that the time scale difference is actually a difference in noise levels, with the denotative system making worse predictions in less valid environments, so the connotative system "takes over" while it "waits". It is therefore the rate of change of uncertainty that matters, and this is externalized as a "fast" and a "slow" system.) The prior $p(x)$ can be a belief about various properties of objects being manipulated as part of the ongoing interaction, for example. Thus, an agent may observe a female in a white coat in a hospital setting, and infer a distribution over the possible identities of nurse or doctor. In this case, an agent with a gender stereotype may form a denotative impression which puts more mass on nurse than on doctor (research shows patients have fairly strong opinions about what doctors and others should be wearing (Petrilli et al., 2018)). Such denotatively constructed impressions can be made with constraint networks (e.g. (Freeman and Ambady, 2011; Joseph and Morgan, 2019)) or other relational models such as relational databases. In our model, the result of the constraint satisfaction convergence or stabilization is $p(x)$, and would be represented in this simple case with one number giving the probability that this entity (person) is a nurse. (Clearly there may be more than two identities under scrutiny at a time, e.g. head nurse, orderly, medical student, etc. We proceed here with two without loss of generality.)

Suppose further that a prior marginal distribution over $Y$, $p(y)$, represents an agent's current estimate of the affective nature of the interaction. It can be a belief about identity sentiments of participants, recent or forthcoming behavior sentiments, or sentiments about settings or other physical objects. Suppose the agent had observed the same female in the white coat performing a gesture implying she has power, such as ordering someone to do something. In this case, the agent's prior over the sentiment for the white-coated female, $p(y)$, would be shifted towards more powerful values.

Using a joint prior that is factored as $p(x, y) = p(x)p(y)$, we seek a posterior distribution $p'(x, y)$, which combines the priors with the constraint imposed by the somatic potential (Equation 1). Figure 2(a) shows a graphical representation of the somatic potential as a graphical model connecting two variables $X$ and $Y$ with an undirected link which represents the potential $\sigma(x, y)$. Note that we write both distributions and density functions as $p(\cdot)$, as the type of function and operations used is defined by the variable in context ($x$ or $y$).


Figure 2: Somatic potential as a belief network: (a) undirected, (b) equivalent directed graph introducing observed variable $s$.

In fact, we are seeking the posterior given our knowledge of Equation 1. We postulate a variable $s$ representing our knowledge that a somatic potential such as Equation 1 connects the denotative ($X$) and connotative ($Y$) spaces. We therefore know that this variable has the value "true", which we write as $s = \top$ or simply $s$, with the interpretation that $s = \top$ iff the somatic potential exists between $X$ and $Y$. We are then actually considering the joint distribution (after primes are removed) as $p(x, y \mid s)$, shown in Figure 2(b). This distribution can be factored as

$$p(x, y \mid s) \;=\; \frac{p(s \mid x, y)\, p(x)\, p(y)}{p(s)} \;\propto\; \sigma(x, y)\, p(x)\, p(y) \qquad (2)$$

and we see that the joint distribution is in fact a product of the somatic potential and the prior distribution (which may be factored into two prior distributions over $X$ and $Y$). Once $s$ is added, however, the undirected belief network is transformed into a directed network, as shown in Figure 2(b). This also helps practically for integration into the POMDP framework.

We can further integrate denotative and connotative evidence for each of $X$ and $Y$, respectively, with the addition of observables $\Omega_x$ and $\Omega_y$, generated by $x$ and $y$, respectively, as shown in Figure 2(a). Thus, the model is agnostic to the type of sensor information given.

With a somatic potential defined to be the Boltzmann distribution as above, and factored priors $p(x)$ and $p(y)$, an estimate of the marginal posterior over $y$ ($p'(y)$, the feelings evoked after an event) can be computed as:

$$p'(y) \;\propto\; p(y) \sum_{x} e^{-\frac{(y - M(x))^2}{T}}\, p(x) \qquad (3)$$

where the $\propto$ shows the quantities are proportional (equal up to a constant multiplier, labeled $1/Z$ in Equation 1). Thus, the posterior distribution over $Y$ is the expectation of the somatic potential with respect to the prior distribution over $X$, $p(x)$, multiplied by the prior distribution over $Y$.

Similarly, the posterior distribution over $x$, $p'(x)$, is given by

$$p'(x) \;\propto\; p(x) \int_{y} e^{-\frac{(y - M(x))^2}{T}}\, p(y)\, dy \qquad (4)$$

The posterior distribution over $X$ is the expectation of the somatic potential with respect to the prior distribution over $Y$, $p(y)$, multiplied by the prior distribution over $X$. The ability to transform between connotative and denotative states (in either direction) will be referred to as a somatic transform.
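As a concrete illustration, the following is a minimal numerical sketch of Equations 3 and 4 (a grid approximation in Python) for the one-dimensional nurse/doctor example; the means, prior probabilities, and temperature are illustrative assumptions rather than survey values.

```python
# A grid-approximation sketch of the somatic transform (Equations 3 and 4).
# All numbers (means, priors, temperature) are hypothetical illustrations.
import numpy as np

labels = ["nurse", "doctor"]
M = np.array([0.9, 1.8])                 # hypothetical Power means M(x)
T = 1.0
p_x = np.array([0.7, 0.3])               # denotative prior favoring nurse

y = np.linspace(-4.3, 4.3, 500)          # grid over the connotative space
p_y = np.exp(-((y - 2.0) ** 2) / 2.0)    # Gaussian prior: a powerful gesture
p_y /= p_y.sum()

# Potential sigma(x, y) on the grid: one row per denotative label.
sigma = np.exp(-((y[None, :] - M[:, None]) ** 2) / T)

# Equation 3: p'(y) is proportional to p(y) times E_{p(x)}[sigma(x, y)].
post_y = p_y * (p_x @ sigma)
post_y /= post_y.sum()

# Equation 4: p'(x) is proportional to p(x) times E_{p(y)}[sigma(x, y)].
post_x = p_x * (sigma @ p_y)
post_x /= post_x.sum()

print(dict(zip(labels, post_x.round(3))))  # mass shifts towards doctor
```

The posterior over $Y$ computed this way is a reweighted sum of bumps, one per denotative label, anticipating the mixture-of-Gaussians form discussed next.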

Equations 3 and 4 can be analytically computed in the case of Gaussian priors, which we pursue below. In practice, the somatic transform can be problematic because it generates a posterior over $y$ which is no longer a single Gaussian, but a sum of Gaussians. Projecting this very far into the future may lead to an explosion of modes. However, modes can be combined or rejected by action selection as well, meaning that for each sum of Gaussians generated, one can be selected through action. In the doctor example, a sum of two Gaussians results after a single iteration, but the act of deference performed can resolve much of this uncertainty by committing to one hypothesis or the other.

3.4 Somatic Transform Examples

Figure 3 shows an example usage of this transform for a simple case. In it, we consider a simple prior over $Y$, $p(y)$, as a Gaussian distribution with a fixed variance and a mean $\mu_y$ which is varied to see the effect of a changing connotative prior. These distributions are shown as dashed lines in Figure 3 (for a range of means). A prior over $X$, $p(x)$, represents only two identities, nurse and doctor, with more mass on nurse than on doctor. A priori, the agent believes it is more likely this person is a nurse (possibly due to a complex constraint satisfaction network (Freeman and Ambady, 2011)). The somatic transform is implemented using normal distributions as the values of $\sigma(x, \cdot)$, mapping a label in $X$ to a mean and variance in $Y$ given by the Georgia 2015 survey data, in which doctor is rated as more powerful than nurse. We only consider the single power (P) dimension here for ease of exposition, but the results carry over to three or more dimensions. In Figure 3, we use a fixed temperature $T$, but investigate how $T$ affects the results in Figure 4.

Figure 3: Effects of the somatic transform on the marginals over $X$ and $Y$. Gaussian priors over $Y$ are shown as dashed lines for different values of the mean $\mu_y$. The prior over $X$ favors nurse over doctor. The posterior over $Y$ is shown as solid lines, while the posterior over $X$ is shown in the legend, with $H$ denoting the entropy of $p'(x)$ and $p_n$ denoting $p'(x = \text{nurse})$. As the prior shifts to more positive values in $Y$, the posterior in $Y$ shifts to be more in line with the power sentiment about doctor, rather than nurse. Further, the posterior in $X$ also favors doctor (that is, $p_n$ decreases).

Figure 3 shows how the posteriors evolve as $\mu_y$ is changed. The entropy in the posterior distribution over $X$, $H(p'(x))$, is shown in the legend, along with the value of $p'(x = \text{nurse})$. (Entropy is a measure used in statistical physics, and it describes the level of homogeneity of a distribution. In information theoretic terms, entropy measures the amount of information in a system that exists in a set of states $x$ according to the probability distribution $p(x)$. Entropy is typically written as $H(p) = -\sum_x p(x) \log p(x)$. A low entropy system has a distribution over states which is not dispersed evenly across all $x$, and so is easier to predict. Kahneman and Klein (Kahneman and Klein, 2009) refer to this as the "validity" of the environment.) The posteriors over $Y$, $p'(y)$, are shown as solid lines. First, we can see that as the mean of the prior over $Y$ approaches the expected power sentiment under the prior over $X$, the posterior over $Y$ becomes a bimodal distribution with a substantial portion of its mass nearer to the nurse identity. Further, as the priors more closely agree, the entropy of the resulting distribution over $X$ increases, so the information obtained by combining them is smaller. For values of the mean of $p(y)$ far from that expectation, the resulting entropy of $p'(x)$ is small, and more information was gained by the denotative system from the connotative system. The resulting distributions over $X$ are also shown, demonstrating a clear shift from nurse to doctor as the prior information about the sentiment observed shifts towards the positive (the person demonstrates a behaviour with more power).

Figure 4 shows how the same curves evolve according to a changing value of $T$, with a fixed connotative prior $p(y)$. As the world becomes less predictable ($T$ increases), the connotative system increasingly steps in to help out. As Figure 4 shows, this happens naturally in the somatic transform. When $T$ is large, there is not as strong an effect between $X$ and $Y$, and so both follow their prior distributions more closely. For small $T$, the sentiment follows the prior over $X$ much more closely, becoming more centered around the known mean values of power for the identities of nurse and doctor. Agents using smaller values of $T$ are therefore more likely to attribute fixed sentiments to individuals, requiring a lessening of heterogeneity. Such agents believe that other agents should behave in less flexible ways, but trust in a more valid environment to provide them with the ability to predict denotatively. At the other extreme of large $T$, agents believe other agents are more flexible, but more consistent connotatively.
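A self-contained sketch of this temperature effect, under the same hypothetical setup as above, is:

```python
# As T grows, the coupling between X and Y weakens and the posterior
# over X reverts to its prior; for small T the connotative prior pulls
# it towards the closest sentiment. All numbers are hypothetical.
import numpy as np

M = np.array([0.9, 1.8])                   # hypothetical means: nurse, doctor
p_x = np.array([0.7, 0.3])
y = np.linspace(-4.3, 4.3, 500)
p_y = np.exp(-((y - 2.0) ** 2) / 2.0); p_y /= p_y.sum()

for T in (0.1, 1.0, 10.0):
    sigma = np.exp(-((y[None, :] - M[:, None]) ** 2) / T)
    post_x = p_x * (sigma @ p_y); post_x /= post_x.sum()
    print(f"T={T:5.1f}  p'(nurse)={post_x[0]:.3f}")
```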

Figure 4: Posterior over $Y$ with varying $T$. As $T$ decreases, the posterior over $Y$ is more focussed on the sentiments implied by the prior over $X$. $H$ denotes the entropy of $p'(x)$; $p_n$ denotes the posterior probability of nurse, $p'(x = \text{nurse})$.

The somatic transform naturally shows a trade-off between the uncertainty in $X$ and $Y$. Figure 5 is an example of this trade-off showing how the posterior over $X$ and $Y$ changes as a function of the entropy of the denotative prior $p(x)$. In this simulation the connotative prior $p(y)$ and temperature $T$ are held fixed. As the environment becomes less valid (less predictable or more uncertain, so $p(x)$ is more dispersed or has higher entropy), $p'(x)$ and $p'(y)$ will be more heavily influenced by the prior in $Y$. Thus, when $p(x)$ is maximally dispersed, the posteriors essentially follow $p(y)$. Agents in less valid (less predictable) environments will put more weight on the connotative system: they will make inferences and choose actions that are more in line with connotative (socio-cultural) expectations. In more valid environments, $p(x)$ has lower entropy and thus dominates the posterior, leading to a posterior that is more heavily influenced by the denotative state. Agents in more valid environments will thus act more in line with denotative concepts and predictive dynamics, and so will be information seekers and utilizers. In a social dilemma, for example, one would expect the agents in less valid environments to cooperate (act according to social prescriptions), while agents in more valid environments will defect (act decision theoretically rationally). This is in line with experiments showing how humans tend to act more pro-socially (cooperate in a public goods game) in ambiguous situations (ones in which risk is hard to evaluate, see Vives and FeldmanHall, 2018). In this simulation, risk corresponds to $p(x)$: if $p(x)$ has lower entropy, then risk is more well defined, and so ambiguity (the uncertainty in risk) is lower.

Figure 5: The top (blue) and bottom (green) lines in the legend show prior states with a less dispersed $p(x)$, yielding posteriors for both $X$ and $Y$ that are more in line with the original denotative prior $p(x)$. The orange line (the most dispersed $p(x)$) shows how the posterior is biased towards the prior in $Y$ (possibly based on stereotypes). The prior in $Y$ is shown as a black dashed line (the same for all values of $p(x)$). $H$ and $H'$ denote the prior and posterior entropy of $p(x)$, and $p_n$ and $p'_n$ denote the prior and posterior probability of nurse.

3.5 Relationship of BayesAct and ACT

Note that in ACT, the somatic transform has a zero temperature parameter ($T \to 0$). Further, either $p(x)$ is a point estimate ($x = x_0$) and $p(y)$ is a constant (no prior), or $p(y)$ is a point estimate ($y = y_0$) and $p(x)$ is a constant. Writing a point estimate as a delta function, $\delta(x - x_0)$, where $\delta(x - x_0) = 1$ if $x = x_0$ and $0$ otherwise, then in the first case, Equation 3 is:

$$p'(y) \;\propto\; e^{-\frac{(y - M(x_0))^2}{T}} \;\xrightarrow{\;T \to 0\;}\; \delta(y - M(x_0)) \qquad (5)$$

And in the second case, Equation 4 is:

$$p'(x) \;\propto\; e^{-\frac{(y_0 - M(x))^2}{T}} \;\xrightarrow{\;T \to 0\;}\; \delta(y_0 - M(x)) \qquad (6)$$

which simply says that $x$ and $y$ are related through the function $M$ directly (e.g. $M$ is a dictionary linking each $x$ with a $y = M(x)$). Technically, in Equation 6 this assumes a dictionary with an entry for every possible $y$, clearly an impossibility. Finding the nearest neighbor, as described above, is one possible way to circumvent this.
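A sketch of this zero-temperature (ACT) limit, with the nearest-neighbour approximation, might look as follows; the EPA entries are hypothetical placeholders, not survey values.

```python
# The T -> 0 limit: the somatic transform collapses to a dictionary
# lookup y = M(x) (Equation 5), and the reverse direction (Equation 6)
# is approximated by a nearest-neighbour search over the known labels.
import numpy as np

M = {"nurse": np.array([1.9, 0.9, 0.5]),    # hypothetical EPA entries
     "doctor": np.array([1.8, 1.8, 0.6])}

def feel(x):
    """Equation 5 with T -> 0: the sentiment for x is exactly M(x)."""
    return M[x]

def label(y):
    """Equation 6 with T -> 0, approximated by the nearest known entry,
    since no dictionary can contain every possible sentiment y."""
    return min(M, key=lambda x: np.linalg.norm(M[x] - y))

print(feel("doctor"))                       # [1.8 1.8 0.6]
print(label(np.array([1.9, 1.0, 0.4])))     # 'nurse'
```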

As mentioned previously, the somatic transform is the key difference between our presentation of the model here and the presentation of it in the original exposition of the BayesACT model (Hoey et al., 2016; Schröder et al., 2016). In the original presentation, we assumed that behaviors were both perceived and generated in the connotative/affective-space, leaving the translation to/from denotative spaces to some other perceptual or motor system. The somatic transform mathematically defines this translation and integrates it directly and deeply into the model itself. Rather than perceiving a behavior as a vector in a 3D affective space, an agent perceives and cognitively interprets denotative aspects of the situation including behavior, then uses these denotative aspects as evidence in support of its connotative/affective predictions, computing the level of support using a somatic transform. Simultaneously, connotative predictions are mapped to future denotative states and actions, providing heuristic guidance to an agent. This aspect of the original presentation of the BayesACT model is thus revealed as a simplifying assumption that has been replaced with the more general somatic transform.

The somatic transform thus captures the inextricability of $X$ and $Y$ (denotative and connotative). The transform fits into the BayesAct partially observable Markov decision process (POMDP), and therefore becomes an integral part of that model. The primary additional elements in BayesAct are (1) the temporal (dynamic) nature of both $X$ and $Y$; (2) the ability to construct denotative plans of action; and (3) the observation of $X$ in the world through some set of sensors. While (1) and (2) are important and define how the agent will plan and act, they are not the subject of this article. The observation function, however, is a more statically defined element, and is considered to lie at the reactive level. It may contain further sub-programs that take care of very rapid responses (to immediate threats, for example). Thus, the reactive level of Ortony, Norman and Revelle (Ortony et al., 2005) could be captured by a lower level Bayesian model that also includes a policy of action. This would be summed up at the level of BayesAct using some probability distribution over possible observations, $\Omega_x$, given the denotative interpretation, $x$, written $p(\Omega_x \mid x)$. Further, the policies of action are represented in the dynamic process over $X$, with more likely futures being exactly the ones predicted by the computed policies.

4 Exploratory Examples

The simple model above with a varying $T$ provides an explanation for three related behavioral effects noted in the literature. First, Van den Bos (van den Bos, 2001) showed how thoughts of uncertainty about the self can lead to more pro-social behaviour. We show in Section 4.1 how this is a tradeoff from denotative to connotative in BayesAct. Second, in a classic experiment, Festinger gave participants (teenage girls) one of two prizes of equal value to them (audio records of unknown pop stars). The participants subsequently raised their evaluations of the prize they obtained. In general, if a person is given one of two items that she values about equally, she will then value the item she is given more highly in order to reduce the cognitive dissonance created by the fact that she did not get the other item. This is also an example of the "confirmation bias" in behavioral economics, in which people seek explanations that confirm their prior beliefs. In Section 4.2, we show that, according to the somatic transform and BayesAct, such re-interpretation of value is simply the process of attempting to unify connotative representations of the self (e.g. "I am a good person") with denotative representations of uncertain events (e.g., "I think my prize is worse than hers").

The third experiment we consider, in Section 4.3, is the equally well known Asch conformity tests (Asch and Guetzkow, 1951), in which participants are led to denotative choices that are clearly wrong by a bias introduced by other agents making incorrect decisions. In BayesAct this is modeled again as a denotative/connotative tradeoff: as more other agents agree with the wrong choice, the denotative solution breaks down, leading agents to fall back on connotative meanings and conformity (in-groupness).

4.1 Uncertainty and fairness

Van den Bos (van den Bos, 2001) carried out an experiment in which the fairness or unfairness of a situation was evaluated affectively (positive vs. negative) in two conditions: one in which (primed) feelings of uncertainty were made salient by asking about the emotions felt during uncertain episodes, and one in which they were not. Two effects were shown. First, the more (perceived) fair condition (where participants got to voice their opinion about a distribution of payoffs) elicited more positive emotions than the non-fair condition (no chance to voice). Second, the effect was enhanced by the increased uncertain feelings. In BayesAct this can be accounted for by noting that as the emotions associated with uncertainty about the self are evoked, so is the uncertainty in the denotative identity. As uncertainty about denotative identity is increased, the participant (a student) will be more reliant on the connotative system.

Consider a purely denotative solution. In this case, the emotions elicited will not play a role; a student will think that voicing an opinion may change the payoffs in their favour, and so will prefer that option. However, the other option of letting the experimenter decide costs them little, so they would rank the voicing condition as better (as they may assign a small probability to their voicing having an impact), but only marginally so. On the other hand, a purely connotative system will ignore the main identity of student and focus on a transient created by the student identity modified by an emotion of anxiety. Such an identity ("anxious student") will be much more likely to prefer the fair option because it is the one that goes the furthest in restoring connotative meanings to the system.

We show the simplest possible example, using only the ACT equations in order to motivate the problem; we describe how BayesAct would modify things at the end of the section. In (van den Bos, 2001), there are four conditions: half the participants have a "voice" and half do not, and half of them have uncertainty made salient and half do not.

Using ACT only (here we use the Indiana 2005 dataset), the purely connotative solution models the identity of student, who feels (or not) a feeling of anxiety, leading to a modified identity of anxious student. In the no voice condition, the valence of the emotion generated is computed as the "E" value of the "characteristic emotion" of that identity (which represents the answer to the question "how does a student feel?"). This is relatively positive for student (with closest labels of warm and easygoing) and more negative for the identity anxious student (closest to envious). In the voice condition, an action is taken by the participant, so we model the student taking the action compromise with, as this is close to the optimal for a student, and is what one would expect the student to do in the fairness test (divide equally). A student who compromises with another student feels positive emotions, while if an anxious student is the actor, the emotions felt are more positive. Figure 6(a) shows these data in a simple plot, where the "E" axis is reversed because in the experiments the participants were asked how "sad" and "disappointed" they were, thus a negative measure. We therefore plot the scaled version (to the range 1–7) of the distance between the "E" value of the emotion felt and the mean "E" value of the emotions sad and disappointed. Note that these curves correspond in form to those observed in (van den Bos, 2001), which we plot in Figure 6(b).

The purely denotative solution has the participant requesting more of the pie, but this is conditioned on the student's belief that their voice will change the payoffs. As this belief may be small, we expect a small difference between the voicing and no voicing conditions, and so the result is close to the non-salient curve in Figure 6(a) or (b).

Figure 6: (a) ACT simulations of the four conditions, showing the scaled average of the distance of the emotion felt in the condition from the evaluation of the emotions sad and disappointed. Scaling to the range 1–7 is done to match scales with those of (van den Bos, 2001). (b) Results of (van den Bos, 2001) showing the mean ratings of sadness and disappointment for each of the four cases. Results are shown as lines for exposition (the data are 4 points: the line ends).

The degree of uncertainty is what governs a BayesAct agent's tradeoff between these two cases. In the case of no uncertainty, the denotative solution takes over; in the case of large uncertainty, the connotative system takes over. In the first case, the voicing condition is considered a bit better than the non-voicing condition. In the second case, the voicing condition is considered a lot better than the non-voicing condition, because both voicing and non-voicing are more meaningful emotionally when uncertainty is salient. Agents in general will be biased towards more connotative solutions by invoked feelings of anxiety leading to feelings of uncertainty. The correspondence between the connotative predictions and the experimental results of (van den Bos, 2001) shows that people are leaning more heavily on the connotative meanings of the experiment in the uncertainty salient case. As uncertainty in the denotative identity is increased, the posterior over connotative identity becomes more focussed around the connotative meanings (of anxious student). If this were not the case, then the posterior would be more biased towards the denotative reality (of student), and the effect would not be as large. One can also see the same effects here as in (FeldmanHall and Shenhav, 2019), where uncertainty evoked negative affect, and restorative options for that affective state were preferred to restore affective meanings to something closer to their fundamental values.

4.2 Cognitive Dissonance

Consider a simple demonstrative example in which $X$ corresponds to whether an item is desirable ($x = \text{good}$) or not ($x = \text{bad}$). The corresponding $y$ is the "E" rating for the item. As demonstrated by Shank and Lulham (2016), people are consistent in their ratings of the EPA values of commercial products; for example, iPhones were rated considerably more positively than Blackberry phones. The study also found that commercial products change people's identities, and are seen as consistent with some identities and not with others. Considering the $y$ to be the sentiment associated with the participant's identity, we place a prior on $Y$ that is the same as the identity of the participant. The function $M(x)$ corresponds directly to the value of the item in the participant's mind, so the somatic transform then represents the fact that good people will tend to have good things.

We are sweeping much of the mechanics of BayesAct under the rug here in order to focus on the somatic transform exclusively. In BayesAct, one would have a connotative and denotative identity for both the participant and for the act of owning an item. The somatic transform would link the denotative meaning of owning one of the items with the connotative meaning of that ownership (owning good things is good), and this would be combined with the prior over identity using impression formation (where good people owning good things is more likely). However, since the participant identity is the same for all participants, we can simply merge it with the meanings of owning an item into priors over $X$ and $Y$ separately. Therefore, our $p(x)$ is actually the prior belief in the participant owning the item, and $p(y)$ is the prior belief in the identity. Further, an observation $\Omega_x$ would be added denoting the actual item itself, and this would be connected to $X$ through some observation function $p(\Omega_x \mid x)$.

Figure 7: Simulation of cognitive dissonance. The posteriors over $X$ and $Y$ shift towards the prior over $Y$, causing a re-interpretation of a bad item as something good. The prior in $Y$ has a stronger effect if it is less dispersed (smaller standard deviation, dashed lines). $H$ is the entropy of $p'(x)$ and $p_b$ is the posterior probability of the item being bad.

In Figure 7, the horizontal axis corresponds to the evaluative dimension "E", and the prior $p(y)$ is a Gaussian with mean and standard deviation corresponding to the E rating for child in the Georgia 2015 dataset. (This could potentially be some distribution over identities; here we simplify to one for ease of exposition.) We selected child for this demonstrative example because its rating is more positive and less dispersed than that of teenager. Using the same temperature as above, we imagine the same experiment where the participant is given a Blackberry. The denotative prior puts most of its mass on the item being bad, implying the participant believes they have a bad item. After combining with the connotative prior (which is essentially saying that any item obtained by the participant must be good, since they are good and expect to have good things), the resulting posterior probability of the item being bad is significantly reduced, so the item is significantly more likely to be seen as on the good side. That is, a participant who originally thought the prize was not as good has changed her or his mind and now thinks the prize is much better.
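A minimal sketch of this dissonance computation, with hypothetical numbers standing in for the survey values, is:

```python
# Dissonance as a somatic transform (cf. Figure 7): a tight, positive
# connotative prior (the participant's good identity) re-interprets a
# denotatively bad item as good. All numbers are hypothetical.
import numpy as np

M = np.array([-1.5, 1.5])                # hypothetical E values: bad, good
p_x = np.array([0.7, 0.3])               # "I think my prize is the bad one"
T = 1.0

y = np.linspace(-4.3, 4.3, 500)
mu, sd = 1.8, 0.9                        # hypothetical identity sentiment
p_y = np.exp(-((y - mu) ** 2) / (2 * sd ** 2)); p_y /= p_y.sum()

sigma = np.exp(-((y[None, :] - M[:, None]) ** 2) / T)
post_x = p_x * (sigma @ p_y); post_x /= post_x.sum()
print(f"p'(bad) = {post_x[0]:.3f}")      # falls well below the prior of 0.7
```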

Figure 7 also shows the posteriors for smaller and larger values of the dispersion of the connotative prior. With a more dispersed prior, the shift is not as evident, and with a less dispersed prior, the shift is even more evident. Even further, we note that our model predicts that agents will deal with less valid environments by leaning more heavily on their connotative system. Thus, one would expect the resulting shift to be large precisely because the connotative system has "taken over" and it has become more imperative to justify receiving the lesser gift.

Any actual experiment would need to take the different "types" of connotative prior into account by integrating them out as

$$p'(x) \;=\; \sum_{i} p(i)\, p'(x \mid i) \qquad (7)$$

where $i$ indexes the "types" of connotative prior and $p'(x \mid i)$ is the posterior of Equation 4 computed using the prior $p(y \mid i)$.

Using the simple set of three "types" above, we can still compute a heavy bias by assuming a discrete set of types, which gives a final posterior that remains strongly shifted towards the good interpretation. Given that we assume the mean and variance in the Georgia dataset are representative of the same population doing the dissonance experiment (which they are clearly not, but these sentiments do change slowly, see above), we can generate a factor corresponding to the results of the dissonance examples.

4.3 Conformity

Similarly to the dissonance case, conformity experiments (Asch and Guetzkow, 1951) can be explained in the same way, as they show a shift in a person's denotative representation (of the correct answer) towards the representation of the group. In these experiments, a participant is placed in a group of (what they think are) equals, whereas in fact the rest of the group are complicit in the experiment. A question with an obvious correct answer is posed, and the group members all say the correct answer is the wrong one. Participants in these experiments will be more likely to guess the (obviously incorrect) answer. In the same way as the previous experiment, we imagine a denotative $X$ that represents right and wrong (analogous to good and bad in the dissonance example). Then, a prior over $X$ is that the answer chosen (the correct one initially) is right with high probability. Again, these priors are actually computed through a dynamic process, and combine the connotative meanings of the identities of participants with the connotative meanings of doing something correctly or not. Similarly, the connotative priors of enacting a certain identity (e.g. "student") are combined with the connotative priors of that identity doing something good (e.g. student is good ("E" = 1.8) and good people do good things). As in the last section, we combine these prior effects into single variables to focus on the somatic transform.

We then update the model sequentially by multiplying the denotative posterior by the probability of observing another group member selecting the right answer (leaving some probability for the wrong answer, as there may be some other cause for them selecting it), and using this as the prior for a second update. This is the same as adding an observation $\Omega_x$ to the model, as above, which is generatively linked to $X$ through an observation function $p(\Omega_x \mid x)$. Observation of a single other participant selecting an answer multiplies the posterior over $X$ by the observation function, and this becomes the new prior over $X$. This is repeated $N$ times. The posterior over $Y$ is likewise used directly as the new prior over $Y$ at each step.
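The following sketch implements these sequential updates under hypothetical likelihoods and sentiment values:

```python
# Conformity as repeated updates (cf. Figure 8): each observed group
# member multiplies the denotative belief by a (hypothetical) likelihood,
# followed by a somatic transform; posteriors become the next priors.
import numpy as np

M = np.array([1.5, -1.5])                # hypothetical E values: right, wrong
T = 10.0                                 # a large T: a less valid setting
p_x = np.array([0.9, 0.1])               # "the answer I chose is right"
obs = np.array([0.1, 0.9])               # hypothetical chance of a member
                                         # endorsing the other answer, given x
y = np.linspace(-4.3, 4.3, 500)
p_y = np.exp(-((y - 1.5) ** 2) / 2.0); p_y /= p_y.sum()
sigma = np.exp(-((y[None, :] - M[:, None]) ** 2) / T)

for n in range(5):
    p_x = p_x * obs; p_x /= p_x.sum()            # evidence from one member
    p_x = p_x * (sigma @ p_y); p_x /= p_x.sum()  # Equation 4
    p_y = p_y * (p_x @ sigma); p_y /= p_y.sum()  # Equation 3
    print(f"after {n + 1} members: p(right) = {p_x[0]:.3f}")
```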

Figure 8: Simulation of conformity. The posteriors over $X$ and $Y$ shift towards the observed evidence (of the selected answer being incorrect), causing a re-interpretation of a right answer as something wrong. $H$ is the entropy of $p'(x)$ and $p_r$ is the posterior probability of the answer being right.

Figure 8 shows the results of repeating this process five times. Clearly, as more and more group members select the wrong answer, the posterior over $X$ increasingly favors the wrong answer, and it rises further as additional group members select it. The posterior over $Y$ also shifts, with increasing weight put on the wrong mode, indicating that the interpretation has moved from the selected answer being right to the selected answer being wrong (and so the other answer, actually the incorrect one, should be selected). If we return to the full model including identities for the participant and the group, and assume they both take equal "responsibility" for the event, then the model predicts negative feelings for the participant's sense of self in this case, as they cope with the fact that they apparently got the answer wrong.

5 Discussion

5.1 Other applications

Overall, our aim is to design and build a framework based on social-psychological theory that allows agents to be constructed and deployed across diverse application areas. A careful dynamic calibration of uncertainty in connotative and denotative representations can provide the hierarchical structure necessary to handle the complexity of the social world. Such emotionally aware agents can be useful across a wide range of application areas, including mechanism design, behavioral economics, games, and conversational agents. We review two such applications below. In Section 5.2, we consider online collaborative networks as a group setting in which social and emotional factors can play an important role. Ambiguity pervades the online world because identities can be easily concealed, but as we have seen above, this ambiguity can be handled with emotional modeling and sharing mechanisms. In Section 5.3 we study the changing nature of identity in Alzheimer's disease, and build solutions to help first-time care partners better understand and interact with the mental state of their charge. In the following, we give brief overviews of these practical projects, followed by discussions of other ethical and philosophical issues.

5.2 Online Collaboration

Github (github.com) is an online platform that is primarily used for Open Source Software (OSS) development. However, GitHub is rapidly becoming the platform of choice for general-purpose collaborative efforts. GitHub contributors can be seen as forming a large social network that is loosely bound by some developers spanning multiple projects. GitHub hosts 35 million projects and 14 million collaborators, and has seen super-linear growth over the years. At first glance, GitHub appears to be a meritocracy: contributions are made by coders with varying skill levels, and projects are advanced by individual contributions according to their quality and integrity. However, on closer inspection, there are many relational factors at play, and social structures that develop within and across projects have a significant impact on the progression of, and the biases integrated into, the projects (Tsay et al., 2014). A group may include a powerful member who bullies weaker members, leading to exclusions, some of which are based on factors such as race or gender. Social status within a collaborative group can play an important role in determining the direction a project takes, and hence the final software and products being used by the general public. BayesAct can be used to model these interactions between GitHub contributors, and to create artificial group members whose roles are to promote and enhance inclusive collaboration. Contributors are each modeled with a BayesAct-based agent. Comments and interactions are then analyzed for sentiment (affect/emotion) and used to learn the affective meanings and identities for each group member. These learned identities are then used to generate information about group coherence, and to make suggestions for collaborative enhancements such as the admittance of new group members, the promotion of existing group members, or the focus of attention on specific contributions. Artificial agents, also with a BayesAct back-end, can become group members themselves, fulfilling certain roles that fill important gaps in the social order created by the group. Artificial agents with an understanding of the relational forces at play can therefore be important moderators helping to promote inclusive and efficient development (Hoey et al., 2018).

5.3 Alzheimer’s Care

The second application area is in the realm of healthcare and is aimed at collaborative networks for the growing cohort of computer-literate older adults. Within a generation, nearly one million Canadians will suffer from Alzheimer's disease (AD) or a related dementia, and the costs of dementia care will reach $153 billion. Faced with this epidemic and fearing the devastating impact of dementia on quality of life, older adults and their families are increasingly seeking creative solutions for personal and social engagement that recognize and address the specific challenges related to dementia. However, in dementia, cognitive and denotative contextual reasoning suffers, while emotional and social reasoning remains relatively intact (König et al., 2017; Francis et al., 2019). For example, a mother may not recognize her son, but will remember how the interaction should "feel". It is precisely the emotional disturbance caused by the lack of a shared denotative reality that creates difficulties in the interaction. A deeper understanding of, and coping with, this emotional disturbance comes with repeated interactions, but appropriate guidance could be offered to those handling the initial disturbance in the form of automated recalls, hints or tips (e.g. delivered through a smartphone) that remind users of the common patterns of behaviour and the underlying emotional reasons why (Robillard and Hoey, 2018). For example, reminding a caregiver that a new resident with dementia at a long-term care facility used to work as, and identifies strongly with being, a teacher will suggest that interactions such as questions may be more appropriate than directions. In the EMOTEC and VIP projects, we study the basis of this interaction using BayesAct, and propose BayesAct-based agents as virtual assistants that can provide this form of emotional guidance.

5.4 Ethical Considerations

When building intelligent agents, and especially those with socio-emotional capabilities, ethical issues must also be carefully considered. The moral machine experiment (Awad et al., 2018) showed that people have shared behaviours as moral decision makers, with consistency across, and diversity within, a culture. We present a possible model for this in BayesAct, with this consistency arising in a sentiment (connotative) space with a simple prior distribution. This connotative space and associated temporal dynamics has a direct multimodal (emotional) communication channel providing it with information, and is learned through interaction with a social group. With a connotative space and dynamics which are consistent with others’, agents can benefit by having easier focus on some aspects of the denotative world, specifically those that are relevant as solutions to social dilemmas. When they are able to follow these prescriptions for dilemmas, they become “members” of the social group in which they are learning. Thus, any moral decisions made by the agent would be consistent with those made in its social group, and therefore be more acceptable. Inconsistent agent behaviours result in the ostracism of offending agents, communicated with emotional signaling and less cooperative behaviour.

5.5 Inextricability, Complementarity and Consciousness

The translation between connotative and denotative aspects of entities using a somatic transform embodies the principles of inextricability and complementarity relating cognition and affect, or denotative and connotative meaning, discussed in Section 2.4. Denotative and connotative can be mapped one onto the other, and the mapping is culturally shared. Thus, given a connotative state such as a sentiment, the somatic transform immediately calls to mind a host of related denotative entities. Similarly, given a denotative state such as an object, the somatic transform immediately calls to mind an area of the connotative space. This can be very helpful in the case where $X$ is not observed directly, but only inferred from external observations. Should external observations be ambiguous or noisy, the connotative prediction of what the denotative state should be (anything connotatively consistent) can help to disambiguate or clarify. The result may be a percept that is consistent connotatively, but inconsistent denotatively. Similarly, $Y$ is not observed directly (except by introspection) during normal interaction. However, humans have evolved a set of signals that they can pass to one another using a set of modalities that are not directly connected to the denotative aspects of speech, such as face and hand movements. These signals, called emotional displays, are indications about the current connotative state, $y$, and can be very useful to help disambiguate complex social interactions. Conversely, in some cases, the denotative state can make a prediction about the expected connotative state, and this prediction can help to disambiguate or clarify ambiguous or noisy emotional signals.

Thus, connotative and denotative states are inextricable because one can be recovered from the other at any time. In BayesAct, through the somatic transform, any denotative state can be mapped into the same connotative space, allowing for comparisons between actions and identities, for example. However, the two are complementary in that they describe the same common and deeper underlying reality, and both are necessary to fully understand this deeper reality. Clearly, the connotative state will be of limited usefulness on its own; it needs to be translated into something concrete in the world, and in particular into concrete motor movements (behaviors). Perhaps less obviously, the denotative state by itself will also be of limited usefulness due to the computational difficulties it presents as environments grow less valid (FeldmanHall and Shenhav, 2019). The connotative state is required to guide an agent towards socially acceptable choices of behavior that can ensure more globally optimal solutions to social dilemmas.

Further, attempts to generalize from the neuroanatomical to the psychological level ignore the emergence of properties resulting from the profound interconnectivity and organization of neurons in the brain that give rise to the mind. Among the most important of these emergent properties is the reflective and experiential nature of consciousness, the elusive explanation of which has become known among neuroscientists as the "hard problem of consciousness" (Chalmers, 1995). How and why does our subjective or phenomenological experience arise out of the cognitive processing of auditory and visual information? Why and how do we have an inner life in which we can entertain images and thoughts or experience emotions? And why and how do we experience ourselves as the locus of these experiences? These are deep and intriguing subjects of investigation that may, nevertheless, be side-stepped in the pursuit of AI by falling back on a 'thin' notion of consciousness (McGregor, 2017).

5.6 Reinforcement Learning

The link between exploration and exploitation in reinforcement learning may be a common reliance (or dependence) on the denotative uncertainty of the situation. Consider that there are two primary ways to encourage exploration in RL agents. First, in random exploration, an agent is forced to take some random action (possibly under some constraints) every now and again. Second, in value exploration, an agent's utility function is artificially modified to include some aspect that we can associate with emotion as described above. While random exploration bonuses reward (in the active inference sense of "forcing" a particular behaviour) randomness (noise and uncertainty), value exploration rewards common (socially accepted) patterns of behaviour. In situations of greater uncertainty, one expects that more connotative meanings will be at play, and therefore the policies examined will be more diverse, forcing exploration. As the diversity of the social group or ecological niche of an agent grows, the amount of exploration also grows. This predicts that more diverse environments (e.g. artist ghettos) will see more exploratory behaviour. In fact, the traditional random and value based exploration bonuses are one and the same. While random bonuses increase uncertainty and therefore push agents to use more socially normative strategies (which are more diversified, depending on the ecological niche, and therefore more outside of denotatively optimal solutions), value bonuses simply add reward directly to exploratory behaviour. In BayesAct the addition of randomness creates added value (again in the active inference sense of which behaviours are expected and performed) on socially normative solutions automatically, and so links the two inextricably by a social potential force linking agents together.
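The contrast can be made concrete with a toy bandit sketch; both the bandit and the "normative" bonus below are hypothetical illustrations, not part of BayesAct itself.

```python
# Two exploration mechanisms on a toy 3-armed bandit: random (epsilon-
# greedy) exploration vs. a fixed value bonus standing in for a social/
# emotional bias in the utility function. All numbers are hypothetical.
import numpy as np

def run(policy, steps=2000, seed=0):
    """Return visit counts after estimating action values under a policy."""
    rng = np.random.default_rng(seed)
    true_means = np.array([0.2, 0.5, 0.8])     # hypothetical arm payoffs
    Q, counts = np.zeros(3), np.zeros(3)
    for _ in range(steps):
        a = policy(Q, rng)
        r = true_means[a] + rng.normal(0, 0.1) # noisy reward
        counts[a] += 1
        Q[a] += (r - Q[a]) / counts[a]         # incremental mean update
    return counts

def eps_greedy(Q, rng):
    """Random exploration: a uniformly random action 10% of the time."""
    return int(rng.integers(3)) if rng.random() < 0.1 else int(np.argmax(Q))

bonus = np.array([0.0, 0.4, 0.0])              # hypothetical normative bias

def value_biased(Q, rng):
    """Value exploration: a fixed bonus added to the learned values."""
    return int(np.argmax(Q + bonus))

print(run(eps_greedy))     # visits all arms, then settles on the best one
print(run(value_biased))   # behaviour is pulled towards the "normative" arm
```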

In some sense, in BayesAct, the roles of exploration and exploitation are reversed. In an exploitative mode, the agent simply goes with the social group, whereas more deviant agents may explore and find individually more optimal solutions. The question that arises from this reversal is whether, if each member of a society is busy optimising their personal payoffs, even within the social order, then, given enough time or energy (so long as everyone is not too busy), the social order will break down and be lost (as everyone will be disobeying it). Certainly it will be feasible for a purely rational "Simonesque" agent to take advantage of a BayesAct agent, as it can figure out an optimal strategy of (fake) emotional signaling to manipulate others. However, as emotions are very hard to fake, inconsistencies in this agent's performance will be noted across different situations and by multiple individuals. While BayesAct agents may not be able to "put their finger" on exactly what is wrong, they will sense increased deflection (even if slight), and will label the "Simonesque" agent as a deviant. As is well known from the dynamics of social networks, one very effective method for dealing with deviants is ostracism or "link reciprocity" (Rand et al., 2011). Deviants are simply not interacted with, leaving them "out in the cold" and unable to participate in, contribute to, or change the social order. BayesAct makes a similar prediction: interactions that cause deflection will not be engaged in by individuals (see (Hoey and Schröder, 2015; MacKinnon and Heise, 2010; Heise, 2013)).

Nevertheless, there are some such deviants who persist, thereby forcing a reorganization or redefinition of the existing social order. This persistence may occur because the deviant manages to “convince” a sufficient number of others that “his way” is better for everyone. As such, this redefinition may, in fact, lead to superior group performance. If this happens, the deviant behaviour is reified by the reorganization of the group, and becomes part of the connotative bias of the BayesAct model again (Berger and Luckmann, 1966). Members of a group following these new prescriptions start treating them as “normal”, as “exploitation” rather than “exploration”. If their influence spreads, if the new prescriptions propagate and dominate, then deviance becomes normality. Such deviance, in hindsight, is then celebrated. It is labeled creativity. In economic situations, deviant behaviour may lead to novel lucrative solutions, and these may become the norm as the society accepts them.

This section has pointed to uncertainty as a driving factor for agents’ choices of exploration vs. exploitation. Regardless of how these terms are used, we can see that both are reflections of the same underlying principle.

5.7 Organic and Instrumental Beliefs

The tradeoffs between connotative and denotative meanings in reasoning link social psychological theorizing across many authors. The idea traces back at least to Durkheim's mechanical vs. organic solidarity (Durkheim, 2014 (1893)), is reflected in Lawler's instrumental vs. relational commitments (Lawler et al., 2009), and appears in Bales' forward-backward dimension (Bales, 1999). When denotative reasoning takes over, individualistic groups in mechanical solidarity use instrumental commitments (more rational), and will require authority to control them and force them to obey social norms (through e.g. penalties and enforcers). They are thus following "normative" commitments (Lawler et al., 2009), and must be more accepting of authority (Bales' forward dimension (Bales, 1999)). In more diverse groups (such as those created originally in the industrial revolution with the amalgamation of agrarian people in cities supporting factories), social complexity pushes connotative reasoning to take over, and more collectivity develops in which organic solidarity and relational commitments are more salient, and groups are less accepting of authority (more of Bales' backward dimension). Such groups will self-regulate, while allowing diversity in the population.

The difference between the groups lies in how much uncertainty is handled connotatively vs. denotatively, defining a scale between underfit models that rely on connotative representations and result in high bias, low variance predictions and behaviours (we called these "L" agents in Section 2.1), and overfit models that rely on denotative representations and result in low bias, high variance predictions and behaviours ("C" agents). What kind of model is used by a social group depends on the learning environment, which, in a self-referential fashion, depends on the types of agents that populate the group. For example, more diverse populations of "L"s will "force" member agents to use higher bias models, as the uncertainty in the denotative state is much higher. "L" agents using higher bias models will be more tolerant of diversity, allowing for heterogeneity in the group. "C" agents, on the other hand, will require more homogeneous groups, and rely more on deliberative and observational processing. This same tradeoff is pointed to in (Moutoussis et al., 2014a), in which "limited depth of thought" is linked to more prosocial behaviours (in "L" agents), similar to what could be achieved by deeper depth-of-thought with fewer social biases (in "C" agents). Further evidence of this relationship can be found in the studies of Van den Bos (van den Bos, 2001), who showed that induced feelings of uncertainty led to increased positive affect caused by perceived fairness. As diversity leads to uncertainty, the higher bias agents will favor fair (culturally normative) solutions over less equitable ones. Higher variance agents, on the other hand, will be more tolerant of inequality. In Section 4.1 we further provide a model of this effect using BayesAct.

Interestingly, Bales indicates a correlation between forward and more conservative political beliefs, and between backward and more liberal political beliefs (Bales, 1999). Thus, from a computational sociological point of view, we are led to the suggestion that the political spectrum of beliefs is defined by uncertainty and ambiguity management techniques: conservatives overfit, while liberals underfit.

6 Conclusion

In this paper, we proposed BayesAct as a computational dual-process model of human group interactions, and showed how it explicitly represents a tradeoff between the uncertainty in a denotative space (of e.g. symbolic constructs about the physics of the world) and in a connotative space (of e.g. feelings about identities and behaviours). We argued that BayesAct captures some of the key elements of known human dual-process reasoning, and argued that it can be used to build artificial agents that are well aligned members of a socio-technical system. We suggested that the model of social sentiment in BayesAct is a variational approximation to an agent's representation of the world, and that this approximation is built using a social sharing mechanism based on emotion. We discussed how uncertainty plays a critical role in determining the relative contributions of deliberative and affective reasoning, with more uncertainty leading to action choices more in line with connotative meanings, while less uncertainty engenders more deliberative (denotative) policy search. Finally, we discussed the relationship of BayesAct to other dual process theories, to reinforcement learning, and to other social psychological and sociological theorizing, and we discussed two practical application areas in the study of online group processes and the care of persons with cognitive disabilities.

Acknowledgments: THEMIS.COG is funded by the Canadian Natural Sciences and Engineering Research Council and Social Sciences and Humanities Research Council. The EMOTEC project is funded by NSERC, the Canadian Institute for Health Research (CIHR), the Canadian Consortium on Neurodegeneration and Aging (CCNA), and AGEWELL, Inc., a Canadian Network of Centers of Excellence (NCE). The VIP project is funded by the American Alzheimer’s Association.

References

  • Ambrasat et al. (2016) Jens Ambrasat, Christian von Scheve, Gesche Schauenburg, Markus Conrad, and Tobias Schröder. Unpacking the habitus: Meaning making across lifestyles. Sociological Forum, 31(4):994–1017, 2016. doi: 10.1111/socf.12293. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/socf.12293.
  • Asch and Guetzkow (1951) Solomon E Asch and Harold Guetzkow. Effects of group pressure upon the modification and distortion of judgments. Documents of gestalt psychology, pages 222–236, 1951.
  • Awad et al. (2018) Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. The moral machine experiment. Nature, 563:59–64, October 2018.
  • Bales (1999) Robert Freed Bales. Social Interaction Systems: Theory and Measurement. Transaction Publishers, New Brunswick, NJ, 1999.
  • Barrett (2017) Lisa Feldman Barrett. The theory of constructed emotion: An active inference account of interoception and categorization. Social Cognitive and Affective Neuroscience, 12(1):1–23, 2017.
  • Barrett and Satpute (2013) Lisa Feldman Barrett and Ajay Satpute. Large-scale brain networks in affective and social neuroscience: Towards an integrative functional architecture of the brain. Current Opinion in Neurobiology, 23:361–372, 2013.
  • Berger and Luckmann (1966) Peter L. Berger and Thomas Luckmann. The Social Construction of Reality. Doubleday, 1966.
  • Bohr (1950) Niels Bohr. On the notions of causality and complementarity. Science, 111:51–54, 1950.
  • Bourdieu (1990) Pierre Bourdieu. The Logic of Practice. Stanford University Press, 1990.
  • Brafman and Tennenholtz (2002) Ronen I. Brafman and Moshe Tennenholtz. R-max – a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213–231, 2002.
  • Broekens et al. (2015) Joost Broekens, Elmer Jacobs, and Catholijn M. Jonker. A reinforcement learning model of joy, distress, hope and fear. Connection Science, 27:215–233, 2015.
  • Bruineberg and Rietveld (2019) Jelle Bruineberg and Erik Rietveld. What's inside your head once you've figured out what your head's inside of. Ecological Psychology, 31(3):198–217, 2019. doi: 10.1080/10407413.2019.1615204.
  • Cacioppo et al. (2003) John T. Cacioppo, Gary G. Berntson, Tyler S. Lorig, Catherine J. Norris, Edith Rickett, and Howard Nusbaum. Just because you’re imaging the brain doesn’t mean you can stop using your head: A primer and set of first principles. Journal of Personality and Social Psychology, 85:650–661, 2003.
  • Cannon (1929) Walter Cannon. Bodily Changes in Pain, Hunger, Fear, and Rage. Ronald, New York, 2nd edition, 1929.
  • Capraro and Rand (2018) Valerie Capraro and David G. Rand. Do the right thing: Preferences for moral behavior, rather than equity or efficiency per se, drive human prosociality. Judgment and Decision Making, 13(1):99–111, January 2018.
  • Chaiken and Trope (1999) Shelly Chaiken and Yaacov Trope. Dual-Process Theories in Social Psychology. Guildford, New York, 1999.
  • Chalmers (1995) David Chalmers. Facing up to the problem of consciousness. Journal of Consciousness Studies, 2:200–219, 1995.
  • Chentanez et al. (2005) Nuttapong Chentanez, Andrew G. Barto, and Satinder P. Singh. Intrinsically motivated reinforcement learning. In L.K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 1281–1288. MIT Press, 2005.
  • Clore and Ortony (2000) Gerald L. Clore and Andrew Ortony. Cognition in emotion: Always, sometimes, or never? In L. Nadel, R. Kane, and G. L. Ahern, editors, The Cognitive Neuroscience of Emotion, pages 24–61. Oxford University Press, New York, 2000.
  • Crick (1984) Francis Crick. Function of the thalamic reticular complex: the searchlight hypothesis. Proceedings of the National Academy of Sciences, 81:4568–4590, 1984.
  • Damasio (1994) Antonio R. Damasio. Descartes’ error: Emotion, reason, and the human brain. Putnam’s sons, 1994.
  • Davidson (2003) Richard J. Davidson. Seven sins in the study of emotion: Correctives from affective neuroscience. Brain and Cognition, 52:129–132, 2003.
  • Dayan et al. (1995) Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The Helmholtz machine. Neural Computation, 7(5):889–904, 1995.
  • Dunbar (1992) R.I.M. Dunbar. Neocortex size as a constraint on group size in primates. Journal of Human Evolution, 22(6):469 – 493, 1992. ISSN 0047-2484. doi: http://dx.doi.org/10.1016/0047-2484(92)90081-J. URL http://www.sciencedirect.com/science/article/pii/004724849290081J.
  • Duncan and Barrett (2007) Seth Duncan and Lisa Feldman Barrett. Affect is a form of cognition: A neurobiological analysis. Cognition and Emotion, 21:1184–1211, 2007.
  • Durkheim (2014 [1893]) Emile Durkheim. The Division of Labor in Society. Free Press, 2014 [1893].
  • El-Nasr et al. (2000) Magy Seif El-Nasr, John Yen, and Thomas R. Ioerger. FLAME - fuzzy logic adaptive model of emotions. Autonomous Agents and Multiagent Systems, 3:219–257, 2000.
  • Fehr and Schmidt (1999) Ernst Fehr and Klaus M. Schmidt. A theory of fairness, competition, and cooperation. The Quarterly Journal of Economics, 114(3):817–868, 1999. doi: 10.1162/003355399556151.
  • FeldmanHall and Shenhav (2019) Oriel FeldmanHall and Amitai Shenhav. Resolving uncertainty in a social world. Nature Human Behaviour, In Press, 2019.
  • Forgas (2008) Joseph P. Forgas. Affect and cognition. Perspectives on Psychological Science, 3:94–101, 2008.
  • Francis et al. (2019) Linda Francis, Richard E. Adams, Alexandra König, and Jesse Hoey. Identity and the self in elderly adults with Alzheimer’s disease. In Jane E. Stets and Richard T. Serpe, editors, Identities in Everyday Life, chapter 18, pages 381–402. Oxford University Press, 2019.
  • Franks (1989) David D. Franks. Alternatives to Collins’ use of emotion in the theory of ritualistic chains. Symbolic Interaction, 12:97–101, 1989.
  • Franks (2006) David D. Franks. The neuroscience of emotions. In J. E. Stets and J. H. Turner, editors, Handbook of the Sociology of Emotions, pages 39–62. Springer, New York, 2006.
  • Freeland and Hoey (2018) Robert Freeland and Jesse Hoey. The structure of deference: Modeling occupational status using affect control theory. American Sociological Review, 83(2), April 2018.
  • Freeman and Ambady (2011) Jonathan B. Freeman and Nalini Ambady. A dynamic interactive theory of person construal. Psychological Review, 118(2):247–279, 2011. doi: http://dx.doi.org/10.1037/a0022327.
  • Friston (2010) Karl Friston. The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2):127–138, 2010.
  • Friston et al. (2012) Karl Friston, S. Samothrakis, and R. Montague. Active inference and agency: optimal control without cost functions. Biological Cybernetics, 106(8-9):523–541, 2012.
  • Glaze et al. (2018) Christopher M. Glaze, Alexandre L.S. Filipowicz, Joseph W. Kable, Vijay Balasubramanian, and Joshua I. Gold. A bias-variance trade-off governs individual differences in on-line learning in an unpredictable environment. Nature Human Behaviour, 2:213–224, March 2018.
  • Gomez Esteban and Rios Insua (2017) P. Gomez Esteban and D. Rios Insua. An affective model for a non-expensive utility-based decision agent. IEEE Transactions on Affective Computing, pages 1–1, 2017. ISSN 1949-3045. doi: 10.1109/TAFFC.2017.2737979.
  • Gross (1998) James J. Gross. The emerging field of emotion regulation: An integrative review. Review of General Psychology, 2(3):271–299, 1998.
  • Heise (2007) David R. Heise. Expressive Order: Confirming Sentiments in Social Actions. Springer, 2007.
  • Heise (2013) David R. Heise. Modeling interactions in small groups. Social Psychology Quarterly, 76:52–72, 2013.
  • Hesp (2018) Casper Hesp. Hedging your bets: an active inference formulation of valence and arousal. Unpublished research project, 2018.
  • Hoey and Schröder (2015) Jesse Hoey and Tobias Schröder. Bayesian affect control theory of self. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 529–536, 2015.
  • Hoey et al. (2016) Jesse Hoey, Tobias Schröder, and Areej Alhothali. Affect control processes: Intelligent affective interaction using a partially observable Markov decision process. Artificial Intelligence, 230:134–172, January 2016.
  • Hoey et al. (2018) Jesse Hoey, Tobias Schröder, Jonathan Morgan, Kimberly B Rogers, Deepak Rishi, and Meiyappan Nagappan. Artificial intelligence and social simulation: Studying group dynamics on a massive scale. Small Group Research, 49(6):647–683, 2018.
  • Hofstadter (1983) Douglas Hofstadter. Dilemmas for superrational thinkers, leading up to a luring lottery. Scientific American, 248(6), June 1983.
  • Hogewoning et al. (2007) Eric Hogewoning, Joost Broekens, Jeroen Eggermont, and Ernst G.P. Bovenkamp. Strategies for affect-controlled action-selection in Soar-RL. In J. Mira and J.R. Àlvarez, editors, IWINAC, volume 4528 Part II of LNCS, pages 501–510, 2007.
  • Huntsinger et al. (2014) Jeffrey R. Huntsinger, Linda M. Isbell, and Gerald L. Clore. The affective control of thought: Malleable, not fixed. Psychological Review, 121(4):600–618, 2014.
  • James (1890) William James. Principles of Psychology. Holt, New York, 1890.
  • Jaques et al. (2019) Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Caglar Gulcehre, Pedro Ortega, DJ Strouse, Joel Z. Leibo, and Nando De Freitas. Social influence as intrinsic motivation for multi-agent deep reinforcement learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 3040–3049, Long Beach, California, USA, 09–15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/jaques19a.html.
  • Joffily and Coricelli (2013) Mateus Joffily and Giorgio Coricelli. Emotional valence and the free-energy principle. PLoS Computational Biology, 9(6):e1003094, 2013.
  • Joseph and Morgan (2019) Kenneth Joseph and Jonathan H. Morgan. Identity paper. Unpublished manuscript, 2019.
  • Kahneman (2011) Daniel Kahneman. Thinking, Fast and Slow. Doubleday, 2011.
  • Kahneman and Klein (2009) Daniel Kahneman and Gary Klein. Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6):515–526, September 2009.
  • Kocsis and Szepesvári (2006) Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In Proceedings of the European Conference on Machine Learning, 2006.
  • Kollock (1998) Peter Kollock. Social dilemmas: the anatomy of cooperation. Annual Review of Sociology, 24:183–214, 1998.
  • König et al. (2017) Alexandra König, Linda E. Francis, Jyoti Joshi, Julie M. Robillard, and Jesse Hoey. Qualitative study of affective identities in dementia patients for the design of cognitive assistive technologies. Journal of Rehabilitation and Assistive Technologies Engineering, 4, 2017.
  • Lane (2000) Richard D. Lane. Neural correlates of conscious emotional experience. In Richard D. Lane and Lynn Nadel, editors, Cognitive Neuroscience of Emotion, chapter 15, pages 345–370. Oxford University Press, 2000.
  • Lange and James (1922/1967) Carl G. Lange and William James. The Emotions. Hafner, New York, 1922/1967.
  • Lawler et al. (2009) Edward J. Lawler, Shane R. Thye, and Jeongkoo Yoon. Social Commitments in a Depersonalized World. Russell Sage Foundation, 2009.
  • Lazarus (1984) R.S. Lazarus. On the primacy of cognition. American Psychologist, 39:124–129, 1984.
  • LeDoux (1996) Joseph LeDoux. The emotional brain: the mysterious underpinnings of emotional life. Simon and Schuster, New York, 1996.
  • LeDoux and Brown (2017) Joseph E. LeDoux and Richard Brown. A higher-order theory of emotional consciousness. Proceedings of the National Academy of Sciences of the United States of America, 114:E2016–E2025, 2017.
  • Li et al. (2009) Linda C. Li, Jeremy M. Grimshaw, Camilla Nielsen, Maria Judd, Peter C. Coyte, and Ian D. Graham. Evolution of Wenger’s concept of community of practice. Implementation Science, 4(1):11, Mar 2009. ISSN 1748-5908. doi: 10.1186/1748-5908-4-11. URL https://doi.org/10.1186/1748-5908-4-11.
  • Lizardo and Strand (2010) Omar Lizardo and Michael Strand. Skills, toolkits, context and institutions: Clarifying the relationship between different approaches to cognition in cultural sociology. Poetics, 38:204–227, 2010.
  • Loewenstein and Lerner (2003) G. Loewenstein and J.S. Lerner. The role of affect in decision making. In R.J. Davidson, K.R. Sherer, and H.H. Goldsmith, editors, Handbook of Affective Sciences, page 619–642. Oxford Univ. Press, 2003.
  • MacKinnon (1994) N. J. MacKinnon. Symbolic Interactionism as Affect Control. State University of New York Press, Albany, 1994.
  • MacKinnon and Heise (2010) Neil J. MacKinnon and David R. Heise. Self, identity and social institutions. Palgrave and Macmillan, New York, NY, 2010.
  • Marinier III and Laird (2008) Robert P. Marinier III and John E. Laird. Emotion-driven reinforcement learning. In Proc. of 30th Annual Meeting of the Cognitive Science Society, pages 115–120, Washington, D.C., 2008.
  • Martin (2009) John Levi Martin. Social Structures. Princeton University Press, 2009.
  • Martin (2010) John Levi Martin. Life’s a beach, but you’re an ant, and other unwanted news for the sociology of culture. Poetics, 38:228–243, 2010.
  • Mas and Moretti (2009) Alexandre Mas and Enrico Moretti. Peers at work. American Economic Review, 99(1):112–145, 2009.
  • Mataric (1994) Maja J Mataric. Reward functions for accelerated learning. In Machine Learning Proceedings 1994, pages 181–189. Elsevier, 1994.
  • McGregor (2017) Simon McGregor. The Bayesian stance: Equations for ’as-if’ sensorimotor agency. Adaptive Behavior, 25(2):72–82, 2017.
  • Moerland et al. (2017) Thomas M. Moerland, Joost Broekens, and Catholijn M. Jonker. Emotion in reinforcement learning agents and robots: A survey. Machine Learning, 107(2):443–480, 2017.
  • Mook (1987) Douglas G. Mook. Motivation: The Organization of Action. Norton, New York, 1987.
  • Moutoussis et al. (2014a) Michael Moutoussis, Pasco Fearon, Wael El-Deredy, Raymond J. Dolan, and Karl J. Friston. Bayesian inferences about the self (and others): a review. Consciousness and Cognition, 25:67–76, 2014a.
  • Moutoussis et al. (2014b) Michael Moutoussis, Nelson J.Trujillo-Barreto, Wael El-Deredy, Raymond J. Dolan, and Karl J. Friston. A formal model of interpersonal inference. Frontiers in Human Neuroscience, 8(160), 2014b.
  • Nowak (2006) M. A. Nowak. Five rules for the evolution of cooperation. Science, 314:1560–1563, 2006.
  • Ortony et al. (1988) A. Ortony, G.L. Clore, and A. Collins. The Cognitive Structure of Emotions. Cambridge University Press, 1988.
  • Ortony et al. (2005) A. Ortony, D. Norman, and W. Revelle. Affect and proto-affect in effective functioning. In J. Fellous and M. Arbib, editors, Who needs emotions: The brain meets the machine, pages 173–202. Oxford University Press, 2005.
  • Osgood (1969) Charles E. Osgood. On the whys and wherefores of E, P, and A. Journal of Personality and Social Psychology, 12:194–199, 1969.
  • Osgood et al. (1957) Charles E. Osgood, G. J. Suci, and Percy H. Tannenbaum. The Measurement of Meaning. University of Illinois Press, Urbana, 1957.
  • Osgood et al. (1975) Charles E. Osgood, William H. May, and Murray S. Miron. Cross-Cultural Universals of Affective Meaning. University of Illinois Press, 1975.
  • Pessoa (2008) Luiz Pessoa. On the relationship between emotion and cognition. Nature Review Neuroscience, 9:148–158, 2008.
  • Pessoa (2018) Luiz Pessoa. Understanding emotion with brain networks. Current Opinion in Behavioral Sciences, 19:19–25, 2018.
  • Petrilli et al. (2018) Christopher M Petrilli, Sanjay Saint, Joseph J Jennings, Andrew Caruso, Latoya Kuhn, Ashley Snyder, and Vineet Chopra. Understanding patient preference for physician attire: a cross-sectional observational study of 10 academic medical centres in the USA. BMJ Open, 8(5), 2018. ISSN 2044-6055. doi: 10.1136/bmjopen-2017-021239. URL https://bmjopen.bmj.com/content/8/5/e021239.
  • Plutchik (1980) Robert Plutchik. The Emotions. University Press of America, 1980.
  • Rabin (1993) Matthew Rabin. Incorporating fairness into game theory and economics. The American Economic Review, 83(5):1281–1302, 1993.
  • Ramstead et al. (2019) Maxwell JD Ramstead, Michael D Kirchhoff, and Karl J Friston. A tale of two densities: active inference is enactive inference. Adaptive Behavior, 0(0):1–15, 2019. doi: 10.1177/1059712319862774.
  • Rand et al. (2011) DG Rand, S Arbesman, and NA Christakis. Dynamic social networks promote cooperation in experiments with humans. Proc Natl Acad Sci USA, 108(48):19193–19198, 2011.
  • Ray et al. (2008) Debajyoti Ray, Brooks King-Casas, P. Read Montague, and Peter Dayan. Bayesian model of behaviour in economic games. In Proceedings of Neural Information Processing Systems, 2008.
  • Robillard and Hoey (2018) Julie M. Robillard and Jesse Hoey. Emotion and motivation in cognitive assistive technologies for dementia. Computer, 51(3), March 2018.
  • Schröder et al. (2016) Tobias Schröder, Jesse Hoey, and Kimberly B. Rogers. Modeling dynamic identities and uncertainty in social interactions: Bayesian affect control theory. American Sociological Review, 81(4), 2016.
  • Sequeira et al. (2014) Pedro Sequeira, Francisco S. Melo, and Ana Paiva. Learning by appraising: an emotion-based approach to intrinsic reward design. Adaptive Behavior, 22(5):330–349, 2014.
  • Sequeira et al. (2011) Pedro Sequeira, Francisco S. Melo, Rui Prada, and Ana Paiva. Emerging social awareness: Exploring intrinsic motivation in multiagent learning. In Proceedings of the 1st International Joint Conference on Development and Learning in Epigenetic Robotics, pages 1–6, 2011.
  • Shank and Lulham (2016) Daniel B. Shank and Rohan Lulham. Products as affective modifiers of identities. Sociological Perspectives, 2016.
  • Simon (1967) Herbert A. Simon. Motivational and emotional controls of cognition. Psychological Review, 74:29–39, 1967.
  • Smith et al. (2018) Ryan Smith, William D.S. Killgore, and Richard D. Lane. The structure of emotional experience and its relation to trait emotional awareness: A theoretical review. Emotion, 18(5):670–692, 2018.
  • Smith et al. (2019) Ryan Smith, Thomas Parr, and Karl J. Friston. Simulating emotions: An active inference model of emotional state inference and emotion concept learning. bioRxiv, 2019. doi: 10.1101/640813. URL https://www.biorxiv.org/content/early/2019/05/21/640813.
  • Stanovich and West (2000) Keith E. Stanovich and Richard F. West. Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences, 23:645–726, 2000.
  • Stephenson (1986a) W. Stephenson. William James, Niels Bohr, and complementarity: I—Concepts. The Psychological Record, 36:519–527, 1986a.
  • Stephenson (1986b) W. Stephenson. William James, Niels Bohr, and complementarity: II—Pragmatics of a thought. The Psychological Record, 36:529–543, 1986b.
  • Storbeck and Clore (2007) Justin Storbeck and Gerald L. Clore. On the interdependence of cognition and affect. Cognition and Emotion, 21(6):1212–1237, 2007.
  • Tsay et al. (2014) J. Tsay, L. Dabbish, and J. Herbsleb. Influence of social and technical factors for evaluating contribution in github. In Proc. 36th International Conference on Software Engineering, pages 356–366, 2014.
  • Turner (2016) Jonathan H. Turner. The evolutionary biology and sociology of social order. In Edward J. Lawler, Shane R. Thye, and Jeongkoo Yoon, editors, Order on the Edge of Chaos, chapter 2, pages 18–42. Cambridge University Press, 2016.
  • Turner (2009) Jonathan H. Turner. The sociology of emotions: Basic theoretical arguments. Emotion Review, 1:340–354, 2009.
  • van den Bos (2001) Kees van den Bos. Uncertainty management: The influence of uncertainty salience on reactions to perceived procedural fairness. Journal of Personality and Social Psychology, 80(6):931–941, 2001.
  • Vives and FeldmanHall (2018) Marc Lluís Vives and Oriel FeldmanHall. Tolerance to ambiguous uncertainty predicts prosocial behaviour. Nature communications, 9(2156), 2018.
  • Zajonc (1980) R.B. Zajonc. Feeling and thinking: Preferences need no inferences. American Psychologist, 35:151–175, 1980.
  • Zajonc (1984) R.B. Zajonc. On the primacy of affect. American Psychologist, 39:117–123, 1984.
  • Zhu and Thagard (2002) Jing Zhu and Paul Thagard. Emotion and action. Philosophical Psychology, 15(1):19–36, 2002.