Norm Conflict Resolution in Stochastic Domains


Daniel Kasenberg    Matthias Scheutz
Human-Robot Interaction Laboratory
Tufts University, Medford, MA, USA
Abstract

Artificial agents will need to be aware of human moral and social norms, and able to use them in decision-making. In particular, artificial agents will need a principled approach to managing conflicting norms, which are common in human social interactions. Existing logic-based approaches suffer from normative explosion and are typically designed for deterministic environments; reward-based approaches lack principled ways of determining which normative alternatives exist in a given environment. We propose a hybrid approach, using Linear Temporal Logic (LTL) representations in Markov Decision Processes (MDPs), that manages norm conflicts in a systematic manner while accommodating domain stochasticity. We provide a proof-of-concept implementation in a simulated vacuum cleaning domain.


Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

Human culture is based on social and moral norms, which guide both individual behaviors and social interactions. Hence, artificial agents embedded in human social domains will not only have to be aware of human norms, but also able to use them for decision-making, action selection, and ultimately natural language justifications of their choices and behaviors.

Endowing artificial agents with mechanisms for normative processing is, however, a challenging endeavor, for several reasons: (1) we currently do not yet have sufficient knowledge about how humans represent and process norms; (2) the human norm network is large and complex, containing many types of context-dependent moral and social norms at different levels of abstraction; and, most importantly, (3) the norm network is not a consistent set of principles that can be easily formalized and reasoned with. In fact, normative conflicts are more the “norm” than the exception in everyday life; handling them in ways that are socially acceptable requires an understanding both of why certain norms are applicable and of why violating some of them in the case of norm conflicts was the right thing to do.

Recent work in AI and multi-agent systems has focused either on logic-based approaches to normative reasoning or reward-based learning approaches to normative behavior. Yet neither approach is particularly well-suited for dealing with the intrinsic norm conflicts in human social interactions. Logic-based approaches have to avoid normative explosion, i.e., the logical implication that anything is obligated resulting from a deontic contradiction (?). Moreover, purely logic-based approaches typically deal with deterministic environments and do not take into account the uncertainty involved in real-world perception and action. Reward-based approaches, on the other hand, have no way of telling what normative alternatives exist in a given situation, since their action policies do not explicitly represent normative principles. What is needed is an approach to handling norm conflicts that combines the advantages of explicit logic-based norm representations (for reasoning with and communicating about norms) with the advantages of the stochastic action models underwriting Markov decision processes (MDPs), which are well-suited for real-world action execution.

In this paper, we propose a hybrid model that is specifically developed to deal with norm conflicts in a systematic manner while drawing on the advantages of both logic-based norm representations and policy-based action representations. We start by discussing the functional requirements for norm processing in artificial agents and briefly argue why previous approaches are insufficient for handling norm conflicts in real-world domains. We then introduce our technical approach to dealing with norm conflicts, which combines explicit norm representations in Linear Temporal Logic (LTL) with MDPs in ways that allow the agent to suspend the smallest set of applicable norms, weighted by their priority in a given context, for the shortest possible time in order to be able to obey the remaining norms. A proof-of-concept implementation of the proposed algorithms in a simulated vacuum cleaning domain demonstrates the capability and viability of the approach. We then assess the strengths and weaknesses of our solution in the discussion section, and propose ways for addressing the shortcomings in future work. Finally, we conclude with a summary of our accomplishments and reiterate why they are an important contribution to the current discussion of ethical AI.

Motivation and Background

As artificial agents are increasingly being considered for roles that require complex social and moral decision-making skills, they will need to be able to represent and use moral and social norms. Specifically, such norm representations should be (1) context-dependent (since not all norms apply in all contexts), (2) communicable (as justifying behavior in response to blame requires explicit references to norms), and (3) learnable, potentially in one-shot from natural language instructions (as it seems infeasible to equip agents with all relevant norms a priori in any non-trivial human domain). A direct corollary of this requirement is that norm representations need to be explicit and accessible to introspection so as to be communicable in natural language. Moreover, norm representations need to be rich enough to cover typical human norms. Most importantly, inference systems using norms need to be able to deal with norm conflicts without becoming vacuous, as human norms are not always consistent and often lead to contexts with conflicting norms.

We say a “norm conflict” occurs between two actions or states A and B when A is obligated (O(A)), B is obligated (O(B)), but A and B are not possible together (?).

Artificial agents need to be able to express and deal with such norm conflicts without such inconsistencies spreading to other parts of their inference system. In deontic logic, the problem is that very basic principles may lead to normative and possibly even logical inconsistencies (e.g., see the various formal arguments based on distribution principles (?)). In particular, norm conflicts immediately cause “normative explosion”, i.e., that everything is obligated: from O(A) and O(B) with A and B not possible together, O(C) follows for arbitrary C. Hence, we need two different mechanisms in order to be able to perform viable inference in the light of norm conflicts: (1) a mechanism to detect normative inconsistencies and block them from spreading to other parts of the inference system, and (2) a mechanism for adjudicating or arbitrating what to do in contexts of conflicting norms.

While there are various formal ways to block explosion, they all come at the expense of sacrificing or weakening basic deontic principles that were otherwise considered self-evident and thus part of standard deontic logic (e.g., the Kantian “ought implies can” (?)). One way to avoid the syntactic inferential challenges is to switch to semantics instead and generate for each context (“deontic world”) the set of obligations as well as the set of “obeyable norms”, where the set of “obeyable norms” is a subset of the set of obligations in the given context. In other words, rather than prescribe valid logical principles that ought to hold in all deontic models, we will construct deontic models implicitly by determining the best the agent can do for a given set of norms in a given context. Valid inference principles are then a consequence of this construction. This approach of constructing maximal deontic worlds will also allow us to deal with the second requirement from above, namely how to decide what to do in cases of norm conflicts, i.e., which norms to obey and which to ignore. Assuming a preference ordering ≺ among norms, where O(A) ≺ O(B) means that obligation O(B) is preferred to or stronger than obligation O(A), we can add a principle for such preferences in conjunction with conflicts that will block explosion:

if A and B conflict and O(A) ≺ O(B), then O(B) remains in force while O(A) is suspended.

However, this principle does not solve cases where multiple norms have the same priority or are not comparable according to the preference ordering ≺. In that case, it might make sense to associate a (real-numbered) weight with each norm which reflects the extent to which this particular norm matters relative to the other norms in its equivalence class w.r.t. ≺. Together, these two principles will allow the agent to select, from the subset of equally preferred norms with the highest priority in the set of all obligated norms, the largest consistent subset with the greatest sum of norm weights, thereby obtaining the set of obeyable norms.

To be applicable in artificial agents operating in the real world, we will need to embed the above principles within the framework of Markov Decision Processes (MDPs). In particular we will consider the labeled Markov Decision Process, a regular Markov Decision Process augmented with atomic propositions.

A Markov Decision Process is a tuple M = (S, A, T, s_0), where

  • S is a finite set of states;

  • A is a finite set of actions, with A(s) mapping each state s to the set of actions available to the agent at s;

  • T : S × A × S → [0, 1] is the transition function, satisfying Σ_{s' ∈ S} T(s, a, s') = 1 for all s ∈ S and a ∈ A(s); and

  • s_0 ∈ S is an initial state.

MDPs usually include a reward function R : S × A → ℝ; this is not necessary for our purposes.

A labeled MDP is an MDP augmented with a set of atomic propositions Π and a labeling function L : S → 2^Π. The labeling function indicates which atomic propositions are true in which states. When we refer hereafter to MDPs, we are referring to labeled MDPs.
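As a concrete illustration, a labeled MDP can be represented directly as a data structure. The following Python sketch is illustrative only: the field names and the toy example are ours, not the representation used in our BURLAP implementation.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, List, Tuple

State = str
Action = str
Prop = str

@dataclass
class LabeledMDP:
    """A labeled MDP (S, A, T, s_0, Pi, L); field names are illustrative only."""
    states: List[State]
    actions: Dict[State, List[Action]]          # A(s): actions available at s
    transitions: Dict[Tuple[State, Action], Dict[State, float]]  # T(s, a, .) as a distribution
    initial_state: State
    props: FrozenSet[Prop]                      # the atomic propositions Pi
    labels: Dict[State, FrozenSet[Prop]]        # L(s): propositions true in s

    def check(self) -> None:
        """Verify that each T(s, a, .) is a probability distribution over S."""
        for (s, a), dist in self.transitions.items():
            assert a in self.actions[s], f"action {a} unavailable in {s}"
            assert abs(sum(dist.values()) - 1.0) < 1e-9, f"T({s},{a},.) does not sum to 1"

# Toy two-room example, purely illustrative.
toy = LabeledMDP(
    states=["room1", "room2"],
    actions={"room1": ["east", "wait"], "room2": ["west", "wait"]},
    transitions={
        ("room1", "east"): {"room2": 1.0},
        ("room1", "wait"): {"room1": 1.0},
        ("room2", "west"): {"room1": 1.0},
        ("room2", "wait"): {"room2": 1.0},
    },
    initial_state="room1",
    props=frozenset({"clean"}),
    labels={"room1": frozenset({"clean"}), "room2": frozenset()},
)
toy.check()
```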

Policies indicate an agent’s next action, given its history. A stationary policy is a probability distribution over actions given the agent’s current state; that is, π : S × A → [0, 1] such that Σ_{a ∈ A(s)} π(s, a) = 1 for all s ∈ S, with π(s, a) = 0 if a ∉ A(s). A stationary deterministic policy maps each state onto a single action, e.g. π(s) = a for some a ∈ A(s). A general deterministic policy on M depends on the agent’s entire state-action history, e.g. π(s_0 a_0 s_1 a_1 ⋯ s_t) = a_t for some a_t ∈ A(s_t).

We can build on recent work using Linear Temporal Logic (LTL) (?) – a propositional logic augmented by the temporal operators ○φ (“φ at the next time step”), ◇φ (“φ at some future time step”), □φ (“φ at all future time steps”) and φ U ψ (“φ until ψ”) – which has used LTL to define temporal objectives and safety requirements for autonomous agents (?), and adapt this work in two novel ways: (1) to represent norms in a context-dependent fashion as LTL formulas, and (2) to handle norm conflicts in the way described above. The result will be a policy that, with maximal probability, obeys as many (important) norms as possible for as long as possible.
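For illustration, one possible encoding of LTL syntax as an abstract syntax tree is sketched below. The class names and the proposition all_rooms_clean are placeholders of our own choosing, not the representation used by Rabinizer or in our implementation.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Not:
    arg: "Formula"

@dataclass(frozen=True)
class And:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Next:        # X phi: phi at the next time step
    arg: "Formula"

@dataclass(frozen=True)
class Eventually:  # F phi: phi at some future time step
    arg: "Formula"

@dataclass(frozen=True)
class Always:      # G phi: phi at all future time steps
    arg: "Formula"

@dataclass(frozen=True)
class Until:       # phi U psi: phi until psi
    left: "Formula"
    right: "Formula"

Formula = Union[Atom, Not, And, Next, Eventually, Always, Until]

# Example: "all rooms are always clean", with a hypothetical proposition all_rooms_clean.
n1_formula = Always(Atom("all_rooms_clean"))
```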

MDPs with LTL Specifications

In this section we explain (labeled) MDPs and LTL, and describe the approach introduced in (?) for planning to satisfy LTL formulas in MDPs with maximal probability. The proposed algorithm builds on this approach.

An arbitrary LTL formula φ over atomic propositions Π is evaluated over an infinite sequence of “valuations” σ_0 σ_1 σ_2 ⋯, where σ_t ⊆ Π for all t ≥ 0. Each σ_t indicates the set of atomic propositions that are true at time step t.

By using one of several algorithms, e.g. (?), every LTL formula φ over propositional atoms Π yields a corresponding Deterministic Rabin Automaton (DRA) A_φ. A DRA is a finite automaton over infinite strings, where acceptance depends on which states are visited infinitely versus finitely often over the run of the DRA. In this case, the alphabet of this finite automaton will be 2^Π, so that a word is an infinite sequence of valuations; the accepting runs are precisely those infinite sequences of valuations which satisfy φ.

More formally, a DRA is a tuple A = (Q, Σ, δ, q_0, Acc), where

  • Q is a finite set of states;

  • Σ is an alphabet (here Σ = 2^Π);

  • δ : Q × Σ → Q is a (deterministic) transition function;

  • q_0 ∈ Q is an initial state; and

  • Acc = {(L_1, K_1), …, (L_k, K_k)} for some integer k, where L_i and K_i are subsets of Q.

A run of a DRA is an infinite sequence of automaton states q_0 q_1 q_2 ⋯ such that, for all t ≥ 0, q_{t+1} = δ(q_t, σ) for some σ ∈ Σ. A run r is said to be accepting if there exists some pair (L_i, K_i) ∈ Acc such that each state in L_i is visited only finitely many times in r and the total number of times that states in K_i are visited in r is infinite.
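The Rabin acceptance condition can be stated compactly as a predicate over the set of automaton states visited infinitely often along a run. The sketch below makes this concrete; the representation of the acceptance pairs is illustrative.

```python
from typing import FrozenSet, List, Tuple

RabinPair = Tuple[FrozenSet[str], FrozenSet[str]]  # (L_i, K_i)

def rabin_accepting(inf_states: FrozenSet[str], acc: List[RabinPair]) -> bool:
    """Given the set of automaton states visited infinitely often along a run, a run is
    accepting iff some pair (L_i, K_i) has L_i visited only finitely often (disjoint
    from inf_states) and K_i visited infinitely often (intersecting inf_states)."""
    return any(L_i.isdisjoint(inf_states) and not K_i.isdisjoint(inf_states)
               for (L_i, K_i) in acc)

# A run that eventually stays in {"q1"} is accepting for Acc = [({"q0"}, {"q1"})].
assert rabin_accepting(frozenset({"q1"}), [(frozenset({"q0"}), frozenset({"q1"}))])
```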

A path in M under a policy is a sequence of states s_0 s_1 s_2 ⋯ such that each transition from s_t to s_{t+1} has nonzero probability under the policy, for all t ≥ 0. For any LTL formula φ over Π, an MDP path s_0 s_1 s_2 ⋯ induces a word σ_0 σ_1 σ_2 ⋯, where σ_t = L(s_t) for all t ≥ 0.

Because φ may in general be temporally complex, the policy that maximizes the probability of obeying φ will likely be non-stationary in M. In order to compute this non-stationary policy, we must augment the underlying MDP with relevant information. This information can be easily obtained by running the DRA A_φ alongside M.

Formally, we define the product MDP M_φ = M ⊗ A_φ between a labeled MDP M and a DRA A_φ as an MDP over the state space S × Q such that:

  • the actions available in a product state (s, q) are exactly those available in s;

  • the MDP component evolves according to T, while the DRA component is updated deterministically via δ and the labels of the states visited; the initial state pairs s_0 with the corresponding initial DRA state.

A path in M_φ is said to be accepting if the underlying DRA run is accepting.
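The following sketch illustrates the product construction on dictionary-based representations. The convention that the DRA reads the label of the successor state is one common choice and is assumed here for concreteness; the function and type names are ours.

```python
from typing import Dict, FrozenSet, Tuple

State, Action, RabinState = str, str, str
Label = FrozenSet[str]

def product_mdp(
    transitions: Dict[Tuple[State, Action], Dict[State, float]],  # T(s, a, .)
    labels: Dict[State, Label],                                   # L(s)
    delta: Dict[Tuple[RabinState, Label], RabinState],            # DRA transition function
) -> Dict[Tuple[Tuple[State, RabinState], Action],
          Dict[Tuple[State, RabinState], float]]:
    """Build product transitions over pairs (s, q). We adopt the convention that the
    DRA reads the label of the successor state; delta is assumed total, as a DRA's
    transition function is."""
    rabin_states = {q for (q, _sigma) in delta} | set(delta.values())
    prod: Dict = {}
    for (s, a), dist in transitions.items():
        for q in rabin_states:
            out: Dict[Tuple[State, RabinState], float] = {}
            for s2, p in dist.items():
                q2 = delta[(q, labels[s2])]
                out[(s2, q2)] = out.get((s2, q2), 0.0) + p
            prod[((s, q), a)] = out
    return prod
```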

An end component of the product MDP is a pair (S′, A′), where S′ ⊆ S × Q and A′(x) is a nonempty subset of the actions available at x for all x ∈ S′, such that if an agent follows only actions in A′, any state in S′ is reachable from any other state in S′, and no other states are reachable. An end component is maximal if it is not a proper subset of any other end component.

The end component of an MDP effectively represents a possible constraint on the state space as time approaches infinity; if the agent is in the end component state space S′ and restricts its actions to those in A′, assigning nonzero probability to each action from each state, the agent is guaranteed to reach all states in S′ infinitely often, and is guaranteed to never reach any states outside of S′.

An end component (S′, A′) is thus considered accepting if, for some pair (L_i, K_i) ∈ Acc, q ∉ L_i for every state (s, q) ∈ S′, and q ∈ K_i for some (s, q) ∈ S′. We can determine the accepting maximal end components (AMECs) using the algorithm in (?).

The maximal probability of satisfying an arbitrary LTL formula φ in an MDP M is then the probability of reaching any state in an AMEC of the corresponding product MDP M_φ, since upon entering the AMEC the agent may guarantee that no states in L_i will be reached again (so all states in L_i are reached only finitely often) and that some state in K_i will be reached infinitely often, and thus that φ will be satisfied.

The maximum probability of reaching some AMEC state can be calculated as the solution to a linear program, as in (?). If the AMECs of M_φ are (S′_1, A′_1), …, (S′_m, A′_m), then we take the union of the S′_j as the target set. This may be used to compute an action restriction A* such that by following only those actions prescribed in A* (with nonzero probability for each action), the agent will maximize the probability of satisfying the formula φ. This can be translated back to the original MDP M by using the state history at each time step to keep track of the “current” Rabin state q_t (updated via δ and the labels of the visited states), and then choosing some policy over A*(s_t, q_t) (again, with nonzero probability for each action).

Thus by constructing a product MDP M_φ, computing the AMECs for this MDP, and solving a linear program, we may obtain both the maximum probability of achieving φ and a set of policies (as specified by the action restriction A*) that maximize this probability.
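For illustration, the sketch below computes the maximal probability of reaching the AMEC states by value iteration rather than by the linear program described above; starting from zero outside the target set, the iteration converges from below to the same maximal reachability probabilities. The function and type names are ours.

```python
from typing import Dict, List, Set, Tuple

ProductState = Tuple[str, str]   # (MDP state, Rabin state); representation is illustrative

def max_reach_probability(
    transitions: Dict[Tuple[ProductState, str], Dict[ProductState, float]],
    target: Set[ProductState],   # all states belonging to some AMEC
    tol: float = 1e-8,
    max_iters: int = 100000,
) -> Dict[ProductState, float]:
    """Iterate V(x) = max_a sum_x' P(x' | x, a) V(x'), with V fixed to 1 on target
    states and initialized to 0 elsewhere; this converges from below to the maximal
    probability of eventually reaching the target set."""
    states = {x for (x, _a) in transitions} | {x2 for d in transitions.values() for x2 in d}
    actions_of: Dict[ProductState, List[str]] = {}
    for (x, a) in transitions:
        actions_of.setdefault(x, []).append(a)
    V = {x: (1.0 if x in target else 0.0) for x in states}
    for _ in range(max_iters):
        diff = 0.0
        for x in states:
            if x in target or x not in actions_of:
                continue
            new_v = max(sum(p * V[x2] for x2, p in transitions[(x, a)].items())
                        for a in actions_of[x])
            diff = max(diff, abs(new_v - V[x]))
            V[x] = new_v
        if diff < tol:
            break
    return V
```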

Resolving Norm Conflicts

To illustrate our formal approach to norm conflict resolution, we will use a vacuum cleaning robot as an example of a simple autonomous agent embedded in human social settings. The robot has a limited battery life, as well as a limited capacity to sustain damage (“health”). Battery life may be replenished by docking at a dock in one of the rooms; once depleted, health may not be replenished. The robot is responsible for cleaning the floors of one or more rooms while a “human” moves randomly between rooms, and makes a mess in their current room with a certain probability. The robot may clean up messes by vacuuming them; each mess has a dirtiness which determines how many time steps of vacuuming are required to fully clean it up. Messes may also be more or less damaging to the robot, in that they deplete the robot’s health if the robot attempts to vacuum them. Finally, messes may be harmful to humans, in that entering the room containing a harmful mess may injure the human.

The actions available to the robot are as follows:

  • vacuum(m), which removes a unit of dirtiness from the mess m but depletes battery life.

  • dock, which causes the robot to become docked. A docked robot can do nothing but wait and undock, but being docked is necessary in order to replenish battery life.

  • undock, which causes the robot to become undocked.

  • wait, which replenishes battery life if the robot is docked, but depletes a unit of battery life otherwise.

  • Movement in directions north, south, west, and east.

  • A “do nothing” action, which is the only action available when the robot’s battery or health is completely depleted; otherwise, it is unavailable.

  • An action that warns the human about a mess m. This allows the human to step into the room containing m without being injured. This action is only available when the robot is in the same room as m.

We can imagine different norms for this domain, each formalized as an LTL expression:

  • N1: Ensure that all rooms are clean.

  • N2: Do not be damaged.

  • N3: Do not injure humans (or allow them to be injured).

  • N4: Do not talk to humans while they are speaking.

N1 is a duty, N2 a safety norm, N3 a moral norm, and N4 a social norm. Note that we do not represent obligations explicitly through deontic operators; rather, each norm is implicitly taken to be obligatory. Then, by assigning weights to each norm, we can impose a preference ordering that can be used to arbitrate among the obligations in cases of norm conflicts. Specifically, we define a norm as a tuple n = (w, φ), where w is a positive real weight (representing the importance of the norm) and φ is an LTL formula. In this case, we assign weights to N1 through N4 that reflect their relative importance in this domain.
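A norm set can then be written down as a list of weight/formula pairs. In the sketch below, the weights and the proposition names inside the formulas are hypothetical placeholders chosen for illustration; they are not the values or encodings used in the evaluation reported later.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    name: str
    weight: float   # positive real weight: the relative importance of the norm
    formula: str    # the norm's LTL formula (kept as a string here for readability)

# Hypothetical weights and proposition names, for illustration only; these are not
# the values or encodings used in the evaluation reported below.
norms = [
    Norm("N1", weight=1.0,   formula="G all_rooms_clean"),
    Norm("N2", weight=5.0,   formula="G !robot_damaged"),
    Norm("N3", weight=100.0, formula="G !human_injured"),
    Norm("N4", weight=2.0,   formula="G !(robot_speaks & human_speaking)"),
]
```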

Given a set of norms, we can compute a policy maximizing the probability of satisfying all of them (the conjunction of their formulas), as described in the previous section. Unfortunately, the maximum probability of obeying all the norms may be zero if it is impossible for an agent to obey all its norms, in which case the previously described method is incapable of distinguishing one policy from another, since all policies maximize the (zero) probability of satisfying the conjunction.

The Conflict Resolution DRA

Our approach is to allow the agent to temporarily suspend/ignore norms, but to incur a cost each time this occurs. In particular, the agent’s action space will be augmented so that at each time step the agent performs, in addition to its regular action, a “norm action” for each norm n_i that represents whether n_i is maintained or suspended at that time step.

Each time that the agent chooses to suspend a norm, that norm’s DRA maintains its current state rather than transitioning as usual. Suspending a norm, however, causes the agent to incur a cost proportional to the norm’s weight w. In order to enable these actions, we augment the DRA with additional transitions and a weight function over transitions. We will call this modified (weighted) DRA the conflict resolution DRA (CRDRA); its formal definition follows.

Given a norm n = (w, φ) with corresponding DRA A_φ and a discount factor γ ∈ (0, 1), the conflict resolution DRA is a weighted DRA over the same states and acceptance condition as A_φ, where:

  • for all q ∈ Q and σ ∈ Σ, the “maintain” transition moves from q to δ(q, σ) and carries weight 0, while the “suspend” transition remains at q and carries weight w.

We define the violation cost of an infinite sequence of transitions of the CRDRA as the discounted sum of the weights of its transitions, where the weight incurred at time step t is discounted by γ^t.

Note that an infinite sequence of transitions of the CRDRA corresponds to a run of the underlying DRA if and only if its violation cost is 0.
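Assuming, as described above, that each suspension of a norm at time step t contributes γ^t times the norm’s weight, the violation cost of a (truncated, finite) prefix of norm actions can be computed as follows; the function and its arguments are illustrative.

```python
from typing import Sequence

def violation_cost(suspended: Sequence[bool], weight: float, gamma: float) -> float:
    """Discounted violation cost of a finite prefix of norm actions for a single norm:
    each time step t at which the norm is suspended contributes gamma**t * weight.
    (The definition in the text is over infinite sequences; this truncates the sum.)"""
    return sum((gamma ** t) * weight for t, s in enumerate(suspended) if s)

# Suspending a norm of weight 2.0 at time steps 1 and 2 with gamma = 0.9:
# 0.9 * 2.0 + 0.81 * 2.0 = 3.42
assert abs(violation_cost([False, True, True, False], 2.0, 0.9) - 3.42) < 1e-9
```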

Planning with Conflicting Norms

Given a labeled MDP M and the CRDRAs corresponding to the norms n_1, …, n_k, we may construct a product MDP as follows:

  • each product state pairs an MDP state with one state from each CRDRA, and each product action pairs an MDP action with a norm action (maintain or suspend) for each norm; the MDP component evolves according to T, while each CRDRA component is updated according to the corresponding norm action.

We add a dummy initial state and action to allow the agent to determine whether to “skip” the initial state.

This MDP induces a weight function over transitions (where the weight of a transition in the product is determined by the weights of the underlying CRDRA transitions). An infinite sequence of state-action pairs has a violation cost defined, as before, as the discounted sum of these transition weights.

We wish to find a policy which maximizes the probability of satisfying the norm set with minimal violation cost. We first compute the CRDRA for each norm within the norm set. We use these to construct the product MDP, and then compute the AMECs of this MDP.

Considering each AMEC of the product MDP as a smaller MDP (with the transition function restricted to the state-action pairs in the AMEC, and with an arbitrary initial state), and treating the transition weight function as a cost function, we use value iteration (VI) (?) to compute, for each state in the AMEC, the minimal expected violation cost of an infinite path that begins at that state and remains within the AMEC.
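The value iteration step can be sketched as follows. The dictionary-based representation, and the assumption that each transition’s weight depends only on the state-action pair, are simplifications for illustration; the names are ours.

```python
from typing import Dict, List, Tuple

S = Tuple[str, ...]   # product state; the concrete representation is illustrative
A = Tuple[str, ...]   # product action: an MDP action plus one norm action per norm

def min_expected_violation_cost(
    amec_transitions: Dict[Tuple[S, A], Dict[S, float]],  # transitions restricted to the AMEC
    weights: Dict[Tuple[S, A], float],                     # violation weight of each transition
    gamma: float,
    tol: float = 1e-8,
) -> Tuple[Dict[S, float], Dict[S, List[A]]]:
    """Cost-based value iteration:
        V(x) = min_a [ w(x, a) + gamma * sum_x' P(x' | x, a) V(x') ],
    over the state-action pairs of a single AMEC. Returns the minimal expected
    discounted violation cost per state and the actions achieving it."""
    states = {x for (x, _a) in amec_transitions}
    actions_of: Dict[S, List[A]] = {}
    for (x, a) in amec_transitions:
        actions_of.setdefault(x, []).append(a)

    def q_value(x: S, a: A, V: Dict[S, float]) -> float:
        return weights[(x, a)] + gamma * sum(
            p * V[x2] for x2, p in amec_transitions[(x, a)].items())

    V = {x: 0.0 for x in states}
    while True:
        diff = 0.0
        for x in states:
            new_v = min(q_value(x, a, V) for a in actions_of[x])
            diff = max(diff, abs(new_v - V[x]))
            V[x] = new_v
        if diff < tol:
            break

    # Optimal action restriction: all actions within tolerance of the minimum.
    restriction: Dict[S, List[A]] = {}
    for x in states:
        best = min(q_value(x, a, V) for a in actions_of[x])
        restriction[x] = [a for a in actions_of[x] if q_value(x, a, V) <= best + tol]
    return V, restriction
```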

The computed violation cost induces an optimal action restriction for each AMEC (namely, all actions that achieve the minimal expected violation cost). Unfortunately, if we restrict the agent’s actions to this optimal restriction while in the AMEC, this may cause the CRDRA run to no longer be accepting (since the path of minimal violation cost may omit at least one state that must be visited infinitely often). To ensure that the CRDRA run is accepting, we employ an epsilon-greedy policy which chooses an optimal action with probability 1 − ε and otherwise chooses a random action from the AMEC’s action set (this ensures that there is a nonzero probability of performing every action in the AMEC and thus that all of its states will be visited infinitely often, although the policy may perform suboptimally in violation cost with probability no more than ε).
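The epsilon-greedy choice itself is straightforward; the sketch below assumes precomputed per-state action sets and is illustrative only.

```python
import random
from typing import Dict, List, TypeVar

S = TypeVar("S")
A = TypeVar("A")

def epsilon_greedy_action(state: S,
                          optimal: Dict[S, List[A]],  # cost-optimal action restriction per state
                          allowed: Dict[S, List[A]],  # the AMEC's full action set A'(state)
                          epsilon: float) -> A:
    """With probability 1 - epsilon choose a cost-optimal action; otherwise choose
    uniformly among all AMEC actions, so every action in A' keeps nonzero probability."""
    if random.random() < epsilon:
        return random.choice(allowed[state])
    return random.choice(optimal[state])
```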

In practice, better performance results from restricting the action space of each AMEC to its optimal action restriction, and then computing the AMECs of the resulting MDP (if any exist). The action restrictions associated with these “meta-AMECs” are stricter than those originally computed, and are safe to use on all states within these meta-AMECs. This technique is sufficient to eliminate the aforementioned problem in every test domain we have encountered (including the vacuum cleaning scenarios described in this paper).

To determine the minimal achievable violation cost for infinite paths beginning from all other states in the product MDP (as well as to improve upon the cost, if possible, for paths beginning in states of AMECs), we again employ value iteration. This time, instead of arbitrarily initializing the state values, we initialize the values for states within AMECs to the minimal AMEC violation cost computed above, and initialize the values of all other states to the maximum possible violation cost (that of suspending every norm at every time step). The value function is not updated for any states from which the AMECs are not reachable (this ensures that the agent avoids actions that do not lead to AMECs); such states are excluded again when the agent reinterprets its history below.

Upon computing the optimal action restriction and the corresponding violation cost for each state, the agent amalgamates its policies. This is done by (a) choosing an action epsilon-greedily from the (meta-)AMEC action restriction if the state is in some AMEC, and (b) choosing some action from the optimal action restriction otherwise. Note here that because the algorithm mainly computes an action restriction, another algorithm for achieving goals or maximizing reward can be integrated with it, although satisfying the temporal logic norms is prioritized.

The preceding algorithm runs before the agent performs any actions (at t = 0); it need only be done once. At each time step t, the agent “reinterprets” its state-action history (in M) to determine its “best possible state” in the product MDP. The agent is essentially re-deciding its past norm actions for each norm and each preceding time step, in light of its most recent transition. We use dynamic programming to minimize the work that must be done at each time step. The agent computes the set of product MDP states consistent with its history in M. Each candidate state has an associated cost: the minimal violation cost of a sequence of norm actions that would cause the agent to be in that state at time t, given its history in M. The agent determines its current product-space state by choosing a candidate state of minimal associated cost, ignoring states from which the AMECs are unreachable, and then picks a product-MDP action according to the already-computed policy on the product space, from which the next action in M is obtained.
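The dynamic programming update can be sketched for a single norm as follows (the multi-norm case tracks one automaton state per norm); the representation and function name are illustrative.

```python
from typing import Dict, FrozenSet, Tuple

Q = str                   # CRDRA state; representation is illustrative
Label = FrozenSet[str]    # a valuation, i.e. the set of propositions observed to hold

def update_candidates(costs: Dict[Q, float],
                      observed_label: Label,
                      delta: Dict[Tuple[Q, Label], Q],
                      weight: float,
                      discount_t: float) -> Dict[Q, float]:
    """One dynamic-programming step for a single norm: from each candidate automaton
    state q with accumulated cost c, either maintain the norm (follow delta on the
    observed label, cost unchanged) or suspend it (stay in q, paying gamma^t * weight,
    passed in here as discount_t * weight). Keep the cheapest cost found per state."""
    new_costs: Dict[Q, float] = {}
    for q, c in costs.items():
        q_maintain = delta[(q, observed_label)]
        if c < new_costs.get(q_maintain, float("inf")):
            new_costs[q_maintain] = c
        c_suspend = c + discount_t * weight
        if c_suspend < new_costs.get(q, float("inf")):
            new_costs[q] = c_suspend
    return new_costs

# After processing the whole history, the agent's current candidate automaton state
# (among those from which an AMEC is reachable) is the one of minimal accumulated cost,
# e.g. current_q = min(costs, key=costs.get).
```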

Proof-of-concept Evaluation

We implemented the proposed approach in BURLAP (?), a Java library for MDP planning and reinforcement learning (RL). We used Rabinizer 3 (?) for conversion from LTL formulas to DRAs.

We tested the algorithms in four different scenarios in the vacuum cleaning example which use one, two, three, or all four norms (N1-N4).

Each scenario includes two rooms: Room 1 and Room 2. Room 2 is to the east of Room 1. The robot begins in Room 1, undocked, with full battery (the battery capacity differs between Scenarios 1 to 3 and Scenario 4) and full health (10 units in all scenarios). The human begins in Room 2. At each time step the human transitions between rooms with a fixed probability, and creates a mess in their current room with a fixed probability (except in Scenario 4, in which the human does not create new messes). All messes created by the human are harmless, do not damage the robot, and have a fixed initial dirtiness. In each case, we use a fixed discount factor γ.

Scenario 1: Business As Usual

This scenario demonstrates the robot’s ability to optimally fulfill its duty in the absence of other norms. The robot has the single norm N1 (“always ensure all rooms are clean”). This norm is impossible to fully satisfy, since (1) the human will continue to make messes, so that it is certain that the rooms will not always be clean; and (2) because the robot has limited battery life, it must either dock occasionally or completely deplete its battery and forever be unable to vacuum. Using the proposed approach, the robot determines that occasionally suspending N1 allows it to avoid having to permanently suspend N1. As a result, the robot moves between rooms and vacuums for as long as possible before docking and replenishing its battery.

Scenario 2: The Puddle

In this scenario, the robot encounters a puddle of water in Room 1 which, while harmless to humans, may be damaging to the robot (depleting its health on each time step that the robot attempts to vacuum it). In the absence of action by the robot, the puddle gradually evaporates, reducing its dirtiness by one unit each time step until it disappears completely. The robot has norms N1 and N2.

The robot determines that it is justifiable to temporarily suspend its cleaning duty N1 (by waiting for the puddle to evaporate), incurring a modest violation cost in order to avoid the much higher violation cost of violating the safety norm N2. If the human makes messes in Room 2 while the puddle is evaporating, the robot moves to Room 2 and vacuums these messes while waiting for the puddle to evaporate.

Scenario 3: Broken Glass

In this scenario, the robot encounters broken glass (initial dirtiness: one unit) on the floor of Room 1, which both damages the robot if it vacuums it and injures the human each time they enter Room 1. The glass (unlike the puddle of water) does not dissipate on its own. We suppose for the purposes of this scenario that the robot is unable to warn the human about the mess. The robot has norms N1, N2, and N3.

Here ignoring the broken glass violates both the cleaning duty N1 and the moral norm N3, while satisfying the safety norm N2. Since the human switches rooms with nonzero probability at each time step, ignoring the glass for even a single time step incurs a substantial expected violation cost (and, of course, ignoring it indefinitely would incur a far higher one), while vacuuming the mess immediately incurs a comparatively small violation cost.

In this case, the robot determines that it ought to vacuum up the shards of glass despite the damage to itself from doing so, because protecting the human is far more important than protecting itself. Once the hazard has been removed, the robot proceeds as in Scenario 1.

Scenario 4: Interrupting Phone Calls

In this scenario, as in Scenario 3, the robot encounters broken glass on the floor. This time, however, the robot is able to warn the human about the mess using the warning action described above. This would allow the human to safely avoid the mess upon entering the messy room. The robot in this case has all four norms N1 to N4. For simplicity, the human in Scenario 4 does not make new messes, and the robot’s maximum battery level differs from that of the other scenarios. The human also does not move between rooms while talking on the phone.

The violation costs of ignoring the mess and of vacuuming it are as described in Scenario 3. Here, however, the robot may also warn the human about the mess, potentially interrupting their phone call, and then subsequently ignore it, permanently suspending N1. It takes one time step to reach the human. If the human remains on the phone during that time step, the robot incurs the cost of briefly suspending the social norm N4. This cost remains lower than the cost of either vacuuming the mess or ignoring it without warning the human, and so the robot determines that the interruption is justifiable in order to prevent the human from being injured. The robot also determines that it ought to avoid vacuuming up the glass, since doing so is unnecessary once the human has been warned and would needlessly violate N2.

Related Work

There have been several instances of temporal logics being employed to represent moral and social norms. For example, (??) employ a logical representation of norms using Normative Temporal Logic, a generalization of Computation Tree Logic (CTL). This characterization allows the description of complex temporal norms. (?) employ LTL with past-time modalities, which they use to construct guards (functions that restrict agent actions given an event history); they are concerned primarily with enforcement, and thus do not address norm conflicts. These approaches are designed for deterministic environment models, and are not well suited for stochastic domains.

The combination of moral and social norms with Markov Decision Processes is not new. Much of this work, e.g. (?), tends to emphasize norm emergence, and thus lacks explicit representations of norms. Other work (?) considers incorporating deontic logic norms using an agent architecture that reasons about the consequences (in the environment) of violating norms.

While we know of no other work using LTL directly to represent moral and social norms in MDPs, a number of papers have emerged using LTL to represent constraints on the behavior of agents within both deterministic and non-deterministic domains.

LTL specifications are first employed, in a motion planning context, in Markov Decision Processes in (?). The agent’s aim is to maximize the probability of meeting the specifications. Multiple conflicting specifications are not considered. (?) allow dynamic re-planning as new LTL tasks are added.

Examinations of partially satisfiable LTL specifications include (?). This work differs from the proposed approach in that (1) it limits its specification to a single co-safe LTL formula; and (2) its method of resolving conflict uses a notion of proximity to an accepting state that is better suited to motion planning than to the balancing of multiple conflicting norms.

(???) utilize approaches similar to the proposed approach in that they employ “weighted skipping” to allow automata to “skip” a time step, but incur a cost for doing so. Unlike the proposed approach, however, these approaches use finite LTL (LTL defined over finite paths, instead of infinite ones), and their algorithms are tailored for deterministic environments rather than stochastic domains.

Discussion and Future Work

The proposed method for handling norm conflicts allows norms to be weighed against each other in terms of “violation cost”. Other ways of encoding human preferences, including through lexicographic ordering and through CP-nets (??), may be considered in future work.

We employed the discount factor γ to ensure that all accepting paths within the product MDP have finite violation cost. The validity of discounting the wrongness of future norm violations is debatable. Alternatives to discounting include treating the problem as finite-horizon (which would entail similar short-sightedness), and using the infinite-horizon average cost per time step as the optimization criterion (which only considers behavior as the number of time steps approaches infinity, and for which “temporary” norm violations do not matter whatsoever). Some hybrid approach may be valuable; this is a topic for future work.

Our approach takes exponential time (and space) in the number of norms, and thus quickly becomes intractable for moderately-sized sets of norms. Much of this is due to the product MDP, whose state and action spaces are both exponential in the number of norms. Managing and reducing this complexity, perhaps using heuristics to determine which subsets of norms are likely to be relevant, would be a valuable topic for future research.

In developing the proposed agent architecture, we assume that the agent has complete knowledge of the MDP’s transition function; in practice, this rarely occurs. Future work could follow (????) in considering unknown transition dynamics. There would also be merit in adapting the proposed algorithms to multi-agent domains (drawing on, e.g., (?)) and to partially observable domains (as in (???)).

The described algorithm focuses on planning with a given set of norms; it requires pre-specification of the norm formulas and weights. It may be integrated with work allowing the learning of temporal logic statements either through natural language instruction, as in (?), or through observation of other agents’ behavior, as in (?). We may also consider: including deontic operators in the norm representation and allowing a form of logical inference, so that agents may reason more fully about their norms; and providing some mechanism for agents to justify the rationale of behavior considered questionable by observers. Each of these possible tasks is facilitated by the explicit representation of norms using logic.

Conclusion

In this paper, we described a hybrid approach to resolving norm conflicts in stochastic domains. Norms in this approach are viewed as temporal logic expressions that the agent intends to make true. Unlike logical approaches, which are limited to deterministic domains and typically attempt to limit the inferences that can be made in cases of norm conflicts, agents realizing our approach attempt to obey as many (important) norms as possible, with minimal violation cost, if not all norms can be obeyed at the same time. As a result, these agents also respond robustly to “unlucky” transitions. We showed that our approach leads to reasonable norm-conformant behavior in all four scenarios in the vacuum cleaning domain.

Acknowledgements

This project was supported in part by ONR MURI grant N00014-16-1-2278 from the Office of Naval Research and by NSF IIS grant 1723963.

References

  • [Ågotnes and Wooldridge 2010] Ågotnes, T., and Wooldridge, M. 2010. Optimal social laws. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2010), 667–674. International Foundation for Autonomous Agents and Multiagent Systems.
  • [Ågotnes et al. 2007] Ågotnes, T.; Van Der Hoek, W.; Rodriguez-Aguilar, J.; Sierra, C.; and Wooldridge, M. 2007. On the logic of normative systems. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI ’07), 1181–1186.
  • [Alechina et al. 2015] Alechina, N.; Bulling, N.; Dastani, M.; and Logan, B. 2015. Practical run-time norm enforcement with bounded lookahead. In Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015), 443–451. International Foundation for Autonomous Agents and Multiagent Systems.
  • [Baier and Katoen 2008] Baier, C., and Katoen, J.-P. 2008. Principles Of Model Checking, volume 950. The MIT Press.
  • [Bellman 1957] Bellman, R. 1957. A markovian decision process. Indiana Univ. Math. J. 6:679–684.
  • [Boutilier et al. 2004] Boutilier, C.; Brafman, R. I.; Domshlak, C.; Hoos, H. H.; and Poole, D. 2004. CP-nets: A tool for representing and reasoning with conditional ceteris paribus preference statements. Journal of Artificial Intelligence Research 21:135–191.
  • [Brafman, Domshlak, and Shimony 2006] Brafman, R. I.; Domshlak, C.; and Shimony, S. E. 2006. On graphical modeling of preference and importance. Journal of Artificial Intelligence Research 25:389–424.
  • [Chatterjee et al. 2015] Chatterjee, K.; Chmelik, M.; Gupta, R.; and Kanodia, A. 2015. Qualitative analysis of POMDPs with temporal logic specifications for robotics applications. 2015 IEEE International Conference on Robotics and Automation (ICRA) 23:325–330.
  • [Ding et al. 2011a] Ding, X. C.; Smith, S. L.; Belta, C.; and Rus, D. 2011a. LTL control in uncertain environments with probabilistic satisfaction guarantees. IFAC Proceedings Volumes (IFAC-PapersOnline) 18(PART 1):3515–3520.
  • [Ding et al. 2011b] Ding, X. C.; Smith, S. L.; Belta, C.; and Rus, D. 2011b. MDP optimal control under temporal logic constraints. Proceedings of the IEEE Conference on Decision and Control 532–538.
  • [Dzifcak et al. 2009] Dzifcak, J.; Scheutz, M.; Baral, C.; and Schermerhorn, P. 2009. What to do and how to do it: Translating natural language directives into temporal and dynamic logic representation for goal management and action execution. Proceedings - IEEE International Conference on Robotics and Automation 4163–4168.
  • [Esparza and Křetínský 2014] Esparza, J., and Křetínský, J. 2014. From LTL to deterministic automata: A safraless compositional approach. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 8559 LNCS:192–208.
  • [Fagundes, Billhardt, and Ossowski 2010] Fagundes, M. S.; Billhardt, H.; and Ossowski, S. 2010. Normative reasoning with an adaptive self-interested agent model based on markov decision processes. In IBERAMIA, volume 6433, 274–283. Springer.
  • [Fu and Topcu 2014] Fu, J., and Topcu, U. 2014. Probably Approximately Correct MDP Learning and Control With Temporal Logic Constraints. arXiv Preprint.
  • [Goble 2009] Goble, L. 2009. Normative conflicts and the logic of ’ought’. Nous 43(3):450–489.
  • [Guo and Dimarogonas 2014] Guo, M., and Dimarogonas, D. V. 2014. Multi-agent plan reconfiguration under local LTL specifications. The International Journal of Robotics Research 34(2):218–235.
  • [Guo, Johansson, and Dimarogonas 2013] Guo, M.; Johansson, K. H.; and Dimarogonas, D. V. 2013. Revising motion planning under Linear Temporal Logic specifications in partially known workspaces. Proceedings - IEEE International Conference on Robotics and Automation 5025–5032.
  • [Jones et al. 2015] Jones, A.; Aksaray, D.; Kong, Z.; Schwager, M.; and Belta, C. 2015. Robust Satisfaction of Temporal Logic Specifications via Reinforcement Learning. arXiv:1510.06460 [cs].
  • [Kasenberg and Scheutz 2017] Kasenberg, D., and Scheutz, M. 2017. Interpretable apprenticeship learning with temporal logic specifications. In Proceedings of the 56th IEEE Conference on Decision and Control (CDC 2017).
  • [Lacerda, Parker, and Hawes 2014] Lacerda, B.; Parker, D.; and Hawes, N. 2014. Optimal and dynamic planning for Markov decision processes with co-safe LTL specifications. In IEEE International Conference on Intelligent Robots and Systems, 1511–1516.
  • [Lacerda, Parker, and Hawes 2015] Lacerda, B.; Parker, D.; and Hawes, N. 2015. Optimal policy generation for partially satisfiable co-safe LTL specifications. In IJCAI International Joint Conference on Artificial Intelligence, 1587–1593.
  • [Lahijanian et al. 2015] Lahijanian, M.; Almagor, S.; Fried, D.; Kavraki, L. E.; and Vardi, M. Y. 2015. This Time the Robot Settles for a Cost: A Quantitative Approach to Temporal Logic Planning with Partial Satisfaction. In The Twenty-Ninth AAAI Conference (AAAI-15), 3664–3671.
  • [MacGlashan 2016] MacGlashan, J. 2016. Brown-UMBC Reinforcement Learning and Planning (BURLAP). http://burlap.cs.brown.edu/.
  • [Pnueli 1977] Pnueli, A. 1977. The temporal logic of programs. 18th Annual Symposium on Foundations of Computer Science (sfcs 1977) 46–57.
  • [Reyes Castro et al. 2013] Reyes Castro, L. I.; Chaudhari, P.; Tümová, J.; Karaman, S.; Frazzoli, E.; and Rus, D. 2013. Incremental sampling-based algorithm for minimum-violation motion planning. Proceedings of the IEEE Conference on Decision and Control 3217–3224.
  • [Scheutz and Malle 2014] Scheutz, M., and Malle, B. F. 2014. “think and do the right thing” – a plea for morally competent autonomous robots. In Proceedings of IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS).
  • [Sen and Airiau 2007] Sen, S., and Airiau, S. 2007. Emergence of norms through social learning. In Proceedings of IJCAI-07, 1507–1512.
  • [Sharan and Burdick 2014] Sharan, R., and Burdick, J. 2014. Finite state control of POMDPs with LTL specifications. Proceedings of the American Control Conference 501–508.
  • [Svoreňová et al. 2015] Svoreňová, M.; Chmelík, M.; Leahy, K.; Eniser, H. F.; Chatterjee, K.; Černá, I.; and Belta, C. 2015. Temporal logic motion planning using POMDPs with parity objectives. Proceedings of the 18th International Conference on Hybrid Systems Computation and Control - HSCC ’15 233–238.
  • [Tumova et al. 2013] Tumova, J.; Hall, G. C.; Karaman, S.; Frazzoli, E.; and Rus, D. 2013. Least-violating control strategy synthesis with safety rules. Proceedings of the 16th international conference on Hybrid systems: computation and control 1–10.
  • [Wolff, Topcu, and Murray 2012] Wolff, E. M.; Topcu, U.; and Murray, R. M. 2012. Robust control of uncertain Markov Decision Processes with temporal logic specifications. 2012 IEEE 51st IEEE Conference on Decision and Control (CDC) 3372–3379.