Sequence Planner - Automated Planning and Control for ROS2-based Collaborative and Intelligent Automation Systems


Martin Dahl, Endre Erös, Atieh Hanna, Kristofer Bengtsson, Petter Falkman

*This work has been supported by UNIFICATION, Vinnova, Produktion 2030 and UNICORN, Vinnova, Effektiva och uppkopplade transportsystem.

M. Dahl, E. Erös, K. Bengtsson, and P. Falkman are with the Department of Electrical Engineering, Chalmers University of Technology, 412 96 Göteborg, Sweden. (martin.dahl|endree|kristofer.bengtsson|petter.falkman)@chalmers.se

A. Hanna is with Group Trucks Operation, Research & Technology Development (R&TD), Gropegårdsgatan 2, 405 08 Göteborg, Sweden. atieh.hanna@volvo.com
Abstract

Systems based on the Robot Operating System (ROS) are easy to extend with new on-line algorithms and devices. However, there is relatively little support for coordinating a large number of heterogeneous sub-systems. In this paper we propose an architecture to model and control collaborative and intelligent automation systems in a hierarchical fashion.


Keywords

Control Architectures and Programming; Factory Automation; Planning, Scheduling and Coordination

I Introduction

Robotics in production is an increasingly complex field. Off-line and manual programming of specific tasks is today being replaced by online algorithms that dynamically perform tasks based on the state of the environment [1, 2]. The complexity will be pushed even further when collaborative robots [3], together with other intelligent and autonomous machines and human operators, replace more traditional automation solutions. To benefit from these collaborative and intelligent automation systems, the control systems also need to be more intelligent, reacting to and anticipating what the environment and each sub-system will do. Combined with the traditional challenges of automation software development, such as safety, reliability, and efficiency, a completely new type of control system is required.

In order to ease integration and development of different types of online algorithms for sensing, planning, and control of the hardware, various platforms have emerged as middleware solutions; one that stands out is the Robot Operating System (ROS) [4]. ROS has been incredibly successful, with over 16 million downloads in 2018 alone [5]. The next generation, ROS2 [6], is currently under development, with a communication architecture based on the Data Distribution Service (DDS) [7] to enable large scale distributed control architectures. This improvement will pave the way for the use of ROS2-based architectures in real-world industrial automation systems, as will be presented in this paper.

However, enabling integration and communication, while greatly beneficial, is just one part of the challenge. The overall control architecture also needs to plan and coordinate all actions of robots, humans, and other devices, as well as keep track of a large amount of state related to them. This has led to several frameworks for composing and executing robot tasks (or algorithms), for example ROSPlan [8], which uses PDDL-based models for automated task planning and dispatching; SkiROS [9], which simplifies the planning with the use of a skill-based ontology; eTaSL/eTC [10], which defines a constraint-based task specification language for both discrete and continuous control tasks; and CoSTAR [11], which uses Behavior Trees for defining the tasks. However, these frameworks are mainly focused on single robot or single agent applications and lack features to control large scale, real world collaborative and intelligent industrial automation systems. This paper therefore introduces the control architecture Sequence Planner (SP), which can handle these types of systems and utilizes the power of ROS2.

In Section II, a collaborative and intelligent systems use case is introduced, which is used as a running example throughout the paper. Section III describes the discrete modeling formalism and notation used. Section IV gives an overview of the proposed architecture, which is expanded upon in Sections V, VI, and VII. The implementation of the architecture is discussed in Section VIII. Finally, Section IX contains some concluding remarks.

II The collaborative and intelligent automation systems use case

This paper concerns the development of a ROS2 based automation system for an assembly station in a truck engine manufacturing facility. The challenge involves a collaborative robot and a human operator performing assembly operations on a diesel engine in a collaborative or coactive fashion. In order to achieve this, a wide variety of hardware as well as an extensive library of software, including intelligent algorithms, have to be used. The assembly system can be seen in Figure 1.

Fig. 1: Collaborative robot assembly station controlled by a network of ROS2 nodes. A video clip from the use case: https://youtu.be/YLZzBfY7pbA

The physical setup consists of a collaborative robot from Universal Robots, an autonomous mobile platform (a MiR 100), two different specialized end-effectors, a smart nutrunner that can be used by both the robot and the operator, a docking station for the end-effectors, a lifting system for the nutrunner, a camera and RFID reader system, and eight computers dedicated to different tasks. The system communicates over ROS2, with a number of nodes having their own dedicated ROS1 master behind a bridge [12].

The envisioned intelligent and collaborative systems of the future will comprise several robots, machines, smart tools, human-machine interfaces, cameras, safety sensors, etc. From our experience with this use case, distributed large-scale automation systems require a communication architecture that enables reliable messaging, well-defined communication, good monitoring, and robust task planning and discrete control. ROS1 systems are hard to scale because their communication layer was not intended for large scale automation use-cases. However, with ROS2 instead basing its communication on DDS, which has proven real-world usage and performance [13], it seems likely that ROS2 can enable the implementation of large scale industrial automation use-cases.

Systems like this combine the challenges of high level intelligent task and motion planning with the challenges of more traditional automation systems. The automation system needs to keep track of the state of all resources and products, as well as the environment. The control system also needs means of restarting production should something go wrong. High level task plans are not sufficient to deal with these complexities; we argue instead that the problem needs to be tackled at the high (task planning) and low (I/O) level simultaneously. Thus, the frameworks mentioned in Section I are not suitable for our use case of a large scale collaborative and intelligent automation system.

III Preliminaries

In this section, some background on modeling in Sequence Planner (SP) is briefly described. As a modeling tool, SP uses a formal representation of an automation system based on extended finite automata (EFA) [14], a generalization of automata that includes guards and actions associated with the transitions. The guards are predicates over a set of variables, which can be updated by the actions. EFAs allow a reasonably compact representation that is straightforward to translate into problems for different solvers. One example is generating bounded model checking problems, used both for falsification and on-line planning in this paper, but also for applying formal verification [15] and performing cycle time optimization [16].

III-A Extended finite automata

An extended finite automaton is a 6-tuple E = ⟨Q, Σ, G, A, →, q₀⟩. The set Q = L × V is the extended finite set of states, where L is a finite set of locations and V is the finite domain of definition of the variables; Σ (the alphabet) is a nonempty finite set of events; G is a set of guard predicates over the variables; A is a collection of action functions; → ⊆ Q × Σ × G × A × Q is the state transition relation; and q₀ ∈ Q is the initial state. A transition in an EFA is enabled if and only if its corresponding guard formula evaluates to true; when the transition is taken, a set of variables is updated by the action functions. In this work, a transition with event σ, guard formula g, and action function a is denoted σ: g/a, where σ ∈ Σ, g ∈ G, and a ∈ A.
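The guard/action semantics above can be illustrated with a minimal sketch. The names and representation below are illustrative only (SP's actual implementation is not shown here): a guard is a predicate over the variable valuation, and the action updates that valuation when the transition fires.

```python
# Minimal sketch of an EFA transition: an event label bundled with a guard
# predicate and an action function over a dict-based variable valuation.

def make_transition(event, guard, action):
    """Bundle an event label with its guard predicate and action function."""
    return {"event": event, "guard": guard, "action": action}

def fire(state, transition):
    """Take the transition if its guard holds; return the updated valuation.
    If the guard is false, the transition is disabled and state is unchanged."""
    if not transition["guard"](state):
        return state
    new_state = dict(state)
    transition["action"](new_state)
    return new_state

# Example: a transition enabled when the tool is idle, which starts it.
t = make_transition(
    "start_tool",
    guard=lambda s: s["tool_idle"],
    action=lambda s: s.update({"tool_idle": False, "run_forward": True}),
)

s0 = {"tool_idle": True, "run_forward": False}
s1 = fire(s0, t)  # -> {'tool_idle': False, 'run_forward': True}
```

Firing the same transition again from s1 leaves the state unchanged, since the guard no longer holds.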

IV Architectural overview

In this paper we describe an architecture for composing heterogeneous ROS2 nodes into a hierarchical automation system, an overview of which can be seen in Figure 2. The automation system is divided into four layers.

Layer 0, ROS2 nodes and pipelines, concerns the individual device drivers and ROS2 nodes in the system. As SP uses EFAs, which are based on transitions that update variables, as its core modeling formalism, transformation pipelines are defined in layer 0 to map ROS2 messages coming from, and going out to, the nodes in the system to variables within SP. State in SP is divided into measured state, coming from the ROS2 network; estimated state, inferred from previous actions of the control system; and output state, to be sent out on the ROS2 network.

Layer 1, Abilities and specifications, concerns modeling the abilities of the different resources, which are the low-level tasks that the resources can perform. Depending on the system state, abilities can be started, triggering state changes of the output variables, which are eventually transformed by the pipelines in layer 0 into ROS2 messages. Abilities are modeled in two steps: first individually, then the system-specific interactions are modeled as global specifications. Specifications in layer 1 are generally safety oriented, ensuring that nothing “bad” can happen.

Layer 2, Production operations, defines the production operations [17] of the automation system. Production operations are generally defined at a high abstraction level (e.g. “assemble part A and part B”) and are dynamically matched to sequences of suitable abilities during run-time.

Finally, layer 3, High level planning and optimization, represents a high level planning or optimization system that decides in which order to execute the production operations of layer 2. Layer 3 is not further discussed in this work.

Fig. 2: Layers in the proposed control architecture.

V Layer 0 - ROS2 Nodes and pipelines

Layer 0 consists of the already given device drivers, such as motor controllers or individual sensors. However, they can also be of a more high-level nature; consider for example a robot driver with a path planning algorithm. If the drivers are not already ROS2 nodes, they should be wrapped in thin layers for interfacing between the control system and the underlying device drivers. ROS2 nodes are kept as stateless as possible, leaving complicated state machines to the higher levels. Table I shows an overview of the ROS2 nodes in the use case described in Section II.

No. Name ROS v. Computer OS Arch. Network Explanation
1 Tool ECU Bo+Me Rasp. Pi Ubuntu 18 ARM LAN1 Smart tool and lifting system control
2 RSP ECU Bo+Me Rasp. Pi Ubuntu 18 ARM LAN1 Pneumatic conn. control and tool state
3 Dock ECU Bo+Me Rasp. Pi Ubuntu 18 ARM LAN1 State of docked end-effectors
4 MiRCOM Bouncy LP Alpha Ubuntu 18 amd64 LAN2+VPN ROS2 (VPN) to/from REST (LAN2)
5 MiR Kinetic Intel NUC Ubuntu 16 amd64 LAN2 Out-of-the-box MiR100 ROS Suite
6 RFIDCAM Bouncy Desktop Win 10 amd64 LAN1 Published RFID and Camera data
7 UR10 Bo+Kin Desktop Ubuntu 16 amd64 LAN1 UR10 ROS Suite
8 DECS Bouncy Laptop Ubuntu 18 amd64 LAN1+VPN Sequence Planner
TABLE I: Overview of the nodes in the use case.

ROS2 has a much improved transport layer compared to ROS1; however, ROS1 is still far ahead of ROS2 when it comes to the number of packages and active developers. In order to embrace the strengths of both ROS1 and ROS2, i.e. to have both an extensive set of developed robotics software (e.g. MoveIt! [18]) and a robust way to communicate, ROS1 nodes are routinely bridged to the ROS2 network.

The fact that ROS uses typed messages means that the state of our low-level controller can be automatically inferred from the message types used by the involved ROS2 nodes. For simpler devices it is straightforward to simply “wire” a topic into a set of measured state variables in SP. However, this is not always the case. To support a wide variety of existing ROS2 nodes, we introduce the concept of pipelines that transform the messages on the ROS2 network into SP state variables.

Pipelines are typed and can be composed in a graph-like manner, including merging and broadcasting to different endpoints, which allows graphical visualization of the different processing steps. The pipelines themselves are not part of the EFA model underpinning the control, but as they are typed and their processing logic is well isolated, it is straightforward to apply traditional testing methods (e.g. unit testing) to ensure their correctness. This enables us to map complex device state into (possibly reduced) control state, and provides a standardized way of, for example, aggregating, discretizing, and renaming state. In SP, the pipelines are implemented using Akka Streams [19]. A common pattern is to add a “ticking” effect to the end of a pipeline, which automatically generates new ROS2 messages based on the current SP state at a specified interval.
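The composition idea can be sketched in a few lines. SP uses Akka Streams for this; the Python below is only an illustration of the concept, with made-up step names and thresholds: small, pure, individually testable steps chained into one message-to-state transformation.

```python
# Hedged sketch of pipeline composition: each step is a pure function on a
# message dict; compose() chains them left to right into one transformation.

def compose(*steps):
    """Chain transformation steps left to right into a single pipeline."""
    def pipeline(msg):
        for step in steps:
            msg = step(msg)
        return msg
    return pipeline

# Example steps (illustrative): rename a field, then discretize a reading.
def rename(old, new):
    return lambda m: {**{k: v for k, v in m.items() if k != old}, new: m[old]}

def discretize(field, threshold):
    """Reduce a continuous reading to a boolean control-state variable."""
    return lambda m: {**m, field: m[field] > threshold}

to_sp_state = compose(rename("torque_Nm", "torque"), discretize("torque", 12.0))
to_sp_state({"torque_Nm": 14.2})  # -> {'torque': True}
```

Because each step is pure and isolated, each can be unit tested on its own, which is exactly the testability property the text attributes to the pipelines.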

Consider the node controlling the smart nutrunner, node number one in Table I. To control the node, a resource with state variables corresponding to the message structures in the listing below is defined in SP. Messages on the tool’s state topic are mapped into measured state, and the output state of SP is mapped to messages published on the tool’s command topic. In this case, an automatic mapping of the messages to the SP state variables can be generated, which works by generating a pipeline transformation that matches the field names of the message type to SP variables of the correct type. To ease notation in the coming sections, shorter variable names are introduced in the comments of the listing, where measured state is denoted with the suffix “?” and output state with the suffix “!”.

```
# /smart_nutrunner/state
bool tool_is_idle               # => idle?
bool tool_is_running_forward    # => run_fwd?
bool programmed_torque_reached  # => torque?

# /smart_nutrunner/command
bool set_tool_idle              # => idle!
bool run_tool_forward           # => run_fwd!
```

Messages to and from the smart tool.
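The automatic mapping described above, matching message field names to SP variables, can be sketched as follows. This is an illustration, not SP's generator: the message is assumed to arrive as a dict of typed fields, and each field becomes a measured-state variable carrying the "?" suffix used in the listing.

```python
# Sketch of auto-generating a measured-state mapping from a message's fields.

def auto_map(topic_msg, suffix="?"):
    """Map each message field to a measured-state variable of the same name."""
    return {name + suffix: value for name, value in topic_msg.items()}

state_msg = {
    "tool_is_idle": False,
    "tool_is_running_forward": True,
    "programmed_torque_reached": False,
}
auto_map(state_msg)
# -> {'tool_is_idle?': False, 'tool_is_running_forward?': True,
#     'programmed_torque_reached?': False}
```

The output-side mapping works the same way in reverse, assembling a command message from the "!"-suffixed output variables.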

The state relating to the UR10 robot node (node 7 in Table I) is more complex than for the smart nutrunner. By applying the pipelines shown in Fig. 3, the robot’s position in space is discretized into an enumeration of named poses. On the output side, pipelines add information to the messages about whether the robot should plan its path, which planner it should use, which type of move it should perform, etc. Message ticking and rate limiting steps are added as the last transformation steps in the respective pipelines to ensure a uniform update rate. Other properties can be user-configured and are merged later in the pipeline, allowing manual overrides of the messages generated by SP (for instance to lower the robot speed during testing). Note also that the state of the UR10 resource is collected from more than one topic.
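The pose discretization step can be sketched as a nearest-pose lookup. The pose names, reference values, and tolerance below are assumptions for illustration, not taken from the actual UR10 configuration.

```python
# Illustrative sketch of discretizing a measured joint pose into a named pose,
# as the UR10 input pipeline does. Poses and tolerance are made up.
import math

named_poses = {
    "above_bolt": [0.0, -1.57, 1.57, 0.0, 0.0, 0.0],
    "at_dock":    [1.2, -0.8, 0.9, -0.5, 0.3, 0.0],
}

def discretize_pose(joints, tolerance=0.05):
    """Return the matching named pose, or 'unknown' if none is close enough."""
    for name, ref in named_poses.items():
        if math.dist(joints, ref) < tolerance:
            return name
    return "unknown"

discretize_pose([0.0, -1.57, 1.57, 0.0, 0.0, 0.01])  # -> 'above_bolt'
```

Mapping a continuous pose into a small enumeration like this is what lets the EFA layer treat the robot's position as an ordinary discrete state variable.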

Fig. 3: Schematic illustrating the pipelines for the topics to and from the UR10 node.

VI Layer 1 - Abilities and Specifications

Layer 1 forms the glue between the state captured by the transformation pipelines and the tasks the different resources in the system are able to perform. These tasks are defined in terms of the resource’s state and are modeled as abilities. Abilities are modeled per resource, and interactions between them are defined by specifications.

VI-A Abilities

An ability models a single task that a resource can perform. To track the state of an ability during execution, three boolean state variables are defined: isEnabled, isExecuting, and isFinished. A set of transitions defines how an ability updates the system variables; the start and finish transitions are taken when starting and finishing the ability, respectively. Synchronization transitions map the resource state into the state of the ability, which allows the state of the ability to stay synchronized with the measured state. Each synchronization transition has a dual transition, with the negation of the original guard as its guard, and an action that resets instead of sets the state variable.

In order to simulate an ability without its actual device, which is required to perform the low level planning described in Section VII, an ability also needs to define one or more starting and executing effects. The effects are transitions that model how measured state variables behave during the start and execution of an ability, respectively.

Fig. 4: The smart nutrunner fastening the cover plate. Here operated by the UR10.

Consider again the smart nutrunner, which is used to bolt down a cover plate onto the engine, as shown in Figure 4. An ability, runNut, that models the task of tightening a pair of bolts can be defined as follows. The ability reaches its isEnabled state when the tool is idle. When in this state, the start transition updates the output states required to start the tool. Writing to these output state variables will, after passing through the pipeline transformation steps outlined in layer 0, eventually produce a message on the nutrunner command topic on the ROS2 network. A synchronization transition keeps the executing state of the ability in sync with the measured state, setting isExecuting while the tool is running forward and the pre-programmed torque has not yet been reached, and resetting it when this no longer holds.

The desired result of running the ability is to tighten a pair of bolts. As there are no sensors for keeping track of the bolts, an estimated state variable tracking the bolt pair is introduced. When the ability is executing and the programmed torque has been reached, the tool should stop running forward, and the estimated bolt state should be updated to ’tightened’. This is modeled in the finish transition.

The starting effect of the ability indicates that the tool is expected to start running forward after the ability is started; the executing effect indicates that, during execution, the programmed torque is expected to eventually be reached.
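These two effects can be sketched as functions that predict how the measured state evolves, which is what makes the ability simulable for planning without the physical tool. Variable names are illustrative, following the "?"-suffix convention for measured state.

```python
# Sketch of runNut's starting and executing effects (illustrative): each
# effect predicts how measured state variables change, so a planner can
# simulate the ability without the real device.

def starting_effect(state):
    """After starting, the tool is expected to begin running forward."""
    return {**state, "run_forward?": True, "tool_idle?": False}

def executing_effect(state):
    """During execution, the programmed torque is eventually reached."""
    return {**state, "torque_reached?": True}

s = {"tool_idle?": True, "run_forward?": False, "torque_reached?": False}
s = starting_effect(s)
s = executing_effect(s)
s["torque_reached?"]  # -> True
```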

For the UR10, a parameterized ability is defined: moveToPosition, which takes a goal position as input. As the node runs both the robot driver and the MoveIt! planning system, additional abilities are introduced: attachInPlanningScene and detachInPlanningScene, which send the appropriate messages to MoveIt! for setting up the motion planning scene.

Additionally, a restart ability is introduced for the robot, moveToPrevious, which is enabled if the robot is not moving, its current position is unknown, and its previous position is known. Restart abilities are only enabled during restart mode, as described in Section VIII-A.

Finally, the human operator is modeled. The operator’s role is to place bolts arriving on the kitting AGV onto the cover plate. The human operator informs the system that he or she is finished with the task by acknowledging this on a smart watch. An ability for the operator, placeBolt, is defined that updates the estimated bolt state to ’placed’ during its finish transition.

VI-B Specification

The abilities defined so far, combined with the underlying transformation pipelines, can be used to run the system in an open loop fashion. As we have seen, the abilities are modeled on a per-resource basis, which allows individual testing of their behavior. However, the complexities of developing an automation system arise in the interaction of the different resources.

In order to be able to work with individual devices, as well as to isolate the complexities that arise from their different interactions, SP relies heavily on modeling using global specifications. The abilities defined so far, together with a set of global specifications, can be used to formulate a supervisor synthesis problem directly applicable to the EFA model. Using the method described in [20], the solution to this synthesis problem can be obtained as additional guards on the starting transitions of the abilities. Examples of this modeling technique can be found in [21, 22]. By keeping specifications as part of the model, there are fewer points of change, which makes for faster and less error-prone development compared to changing the guard expressions manually.

For this case, a safety specification is added: when the robot is guiding the smart nutrunner to tighten a pair of bolts (see Figure 4), it is important that the runNut ability has started before moving down towards the cover plate, otherwise the tool will collide with the bolt. This can be modeled as a forbidden state specification stating that the robot must not be at the pose at the bolt location while an untightened pair of bolts is in place and the runNut ability has not been started.
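The forbidden-state predicate can be written out directly. The encoding below is an illustration with assumed variable names; in SP the supervisor synthesis of [20] turns such a specification into additional guards, rather than the run-time check sketched here.

```python
# Illustrative encoding of the forbidden-state specification: never be in a
# state where the robot is at the bolt pose, the bolts are untightened, and
# runNut has not started. Variable names are assumptions.

def forbidden(state):
    return (state["robot_pose"] == "at_bolts"
            and state["bolt_state"] == "placed"
            and not state["run_nut_started"])

def safe_to_enter(state):
    """A candidate next state is admissible only if it is not forbidden."""
    return not forbidden(state)

safe_to_enter({"robot_pose": "at_bolts", "bolt_state": "placed",
               "run_nut_started": False})  # -> False
```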

VII Layer 2 - Production operations

While layer 1 defines all possibilities of what the system can safely do, layer 2 concerns making the system do something “good”. For this use case, that is to bolt the cover plate onto the engine. To achieve this, the high level production operation TightenBoltPair is defined in this section.

In SP, production operations are modeled as goal states, defined as predicates over the system state. For the operation TightenBoltPair, the goal state is that the estimated bolt state ends up being ’tightened’. It also makes sense for operations to have a precondition, ensuring that the goal state is only activated when appropriate; for this operation, the precondition is that the estimated bolt state is ’placed’.
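An operation in this style is just a pair of predicates. The sketch below is illustrative (dict-based, with assumed variable names), following the TightenBoltPair example.

```python
# Sketch of an operation as a precondition plus a goal predicate.

tighten_bolt_pair = {
    "pre":  lambda s: s["bolt_state"] == "placed",
    "goal": lambda s: s["bolt_state"] == "tightened",
}

def active_goal(op, state):
    """An operation's goal is only posted when its precondition holds."""
    return op["goal"] if op["pre"](state) else None

active_goal(tighten_bolt_pair, {"bolt_state": "placed"})  # -> goal predicate
active_goal(tighten_bolt_pair, {"bolt_state": "empty"})   # -> None
```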

Modeling the production operations in this way does two things. First, it makes it possible to add and remove resources from the system more easily: as long as the desired goals can be reached, the upper layers of the control system do not need to be changed. Second, it makes it easier to model on a high level, eliminating the need to care about specific sequences of abilities.

Computing a plan for reaching a goal is done by finding a counter example using bounded model checking (BMC) [23] on the EFA model of layer 1 (i.e. production operations can never start each other). Modern SAT-based model checkers are very efficient at finding counter examples, even for systems with hundreds of variables. BMC also has the useful property that counter examples have minimal length, due to how the problem is unfolded into SAT problems iteratively. Additionally, well-known and powerful specification languages like Linear Temporal Logic (LTL) [24] can be used. Being able to specify LTL properties means that operations can also contain local specifications that should be active whenever the operation is executing. For example, reaching a state satisfying the predicate p while avoiding states satisfying the predicate q can be written as ¬q U p, where U is the LTL until operator. In the current implementation of SP, the SAT-based bounded model checking capabilities of the nuXmv symbolic model checker [25] are used, but planning engines based on PDDL [26] could be plugged in as well.
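The minimal-length property of BMC can be mimicked on a tiny explicit model. SP calls nuXmv for this; the breadth-first search below is only a stand-in that gives the same shortest-trace guarantee on an explicit state space, with an `avoid` predicate playing the role of ¬q U p. All names and the toy transitions are assumptions.

```python
# Stand-in for BMC counter-example search: BFS over an explicit EFA state
# space returns a shortest event sequence to the goal, never visiting an
# `avoid` state (the initial state is assumed safe).
from collections import deque

def plan(initial, goal, transitions, avoid=lambda s: False, max_depth=20):
    """Return the shortest event sequence reaching `goal`, or None."""
    frontier = deque([(initial, [])])
    seen = {frozenset(initial.items())}
    while frontier:
        state, trace = frontier.popleft()
        if goal(state):
            return trace
        if len(trace) >= max_depth:
            continue
        for name, guard, action in transitions:
            if not guard(state):
                continue
            nxt = dict(state)
            action(nxt)
            key = frozenset(nxt.items())
            if key not in seen and not avoid(nxt):
                seen.add(key)
                frontier.append((nxt, trace + [name]))
    return None

# Toy model of the bolt task: place the bolts, then run the nutrunner.
ts = [
    ("placeBolt.start", lambda s: s["bolt"] == "empty",
                        lambda s: s.update(bolt="placed")),
    ("runNut.start",    lambda s: s["bolt"] == "placed" and not s["running"],
                        lambda s: s.update(running=True)),
    ("runNut.finish",   lambda s: s["running"],
                        lambda s: s.update(running=False, bolt="tightened")),
]
plan({"bolt": "empty", "running": False},
     lambda s: s["bolt"] == "tightened", ts)
# -> ['placeBolt.start', 'runNut.start', 'runNut.finish']
```

Passing an `avoid` predicate prunes any trace that would enter a forbidden state, analogous to a local LTL specification active during the operation.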

VIII Control implementation

The system is executed synchronously based on the state of all connected resources. Operations execute when their preconditions are satisfied. The goals of the currently active operations are conjoined to form a planning problem of the form F g₁ ∧ F g₂ ∧ … ∧ F gₙ, where g₁, …, gₙ are the goal states of the currently active operations and F is the LTL operator specifying that a predicate eventually becomes true. The result of the planning problem gives a start order for the system’s abilities. Abilities are allowed to execute if their preconditions are satisfied and they are first in the current start order. When abilities are started, they are popped from the start order and the next one may start if its preconditions are fulfilled. This greedy behavior enables multiple abilities to start executing in parallel, in contrast to purely sequential planning frameworks.
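The greedy dispatch rule above can be sketched directly: keep starting abilities from the front of the planned order as long as their preconditions hold. Names and the precondition table are illustrative assumptions.

```python
# Sketch of greedy dispatch from a planned start order: leading abilities
# whose preconditions hold are all started (possibly in parallel).

def dispatch(start_order, preconditions, state):
    """Pop and start every leading ability whose precondition is satisfied."""
    started = []
    while start_order and preconditions[start_order[0]](state):
        started.append(start_order.pop(0))
    return started

# Illustrative preconditions for two abilities of different resources.
pre = {"moveToPosition": lambda s: s["robot_idle"],
       "runNut": lambda s: s["tool_idle"]}

order = ["moveToPosition", "runNut"]
dispatch(order, pre, {"robot_idle": True, "tool_idle": True})
# -> both start, since they belong to different (idle) resources
```

If the first ability's precondition does not hold, nothing later in the order may start, preserving the planned ordering.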

Reactivity is crucial in human/robot collaboration – it should be possible for plans to change on short notice. By modeling reasonably small tasks for the operations in layer 2, planning can be fast enough to be performed continuously. Plans can then be followed in a receding horizon fashion, enabling quick reaction to changes in the environment.

The execution system keeps track of all the transitions in the automation system, defined by the abilities and the specifications in layer 1. For example, when an ability is started, the action of its starting transition is executed, updating the state of the SP variables. This state update triggers the involved pipelines to assemble new ROS2 messages based on the defined transformation steps (for example generated variable mappings) in layer 0. At the end of the pipeline, the new message is sent out on the ROS2 network for the different nodes to process.

VIII-A Restart situations

During execution, the system will invariably reach an error, or restart, situation. Given that it is not feasible to measure all state of the system, it is likely that the estimated state will get out of sync (e.g. a tightened bolt will end up in its initial “empty” state after a control system restart). To resynchronize the control system online, an operator needs support from the system, for example guidance to a precalculated state from where restart is safe [27]. SP employs a variety of ways to get back into a known state. One of the most important, however, is also the simplest one: keeping state machines out of the layer 0 nodes! It is never desirable to be forced to reset individual devices to get back to a known state.

Given this, it is not unlikely that restart errors can be solved by re-planning. For example, if the robot goes offline, it is probable that the planner can find a way for the operator to perform the tasks instead. If this fails, SP can enter a restart mode, where the automatic execution of operations is paused. In this mode, operations can be reset: instead of planning to reach the goal state of the operation, the aim is to reach a state in which its precondition is satisfied. After a successful reset, the operation can be started again. In restart mode, a number of restart abilities can also be activated during planning. A restart ability resets a subsystem back to a known state from which execution can resume (see moveToPrevious in Section VI-A). Lastly, it is up to an operator to bring the estimated state of the system back into sync with reality, either by changing the physical world (e.g. putting a missing part into place), or by changing the system state to reflect reality.
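The reset idea reduces to swapping the planning target. The sketch below is illustrative (dict-based operation, assumed variable names): in restart mode, the planner aims for the operation's precondition instead of its goal.

```python
# Sketch of goal selection during restart: normal mode plans toward the
# operation's goal, restart mode plans back to its precondition.

def planning_goal(op, restart_mode):
    """Pick the predicate the planner should target for this operation."""
    return op["pre"] if restart_mode else op["goal"]

op = {"pre":  lambda s: s["bolt"] == "placed",
      "goal": lambda s: s["bolt"] == "tightened"}

planning_goal(op, restart_mode=True)({"bolt": "placed"})  # -> True
```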

IX Conclusion

This paper introduced Sequence Planner (SP) as an architecture to model and control ROS2 based collaborative and intelligent automation systems. The control architecture has been implemented on an industrial assembly station. Practical experience from developing the control system for the described use case suggests that ROS2 does in fact enable larger scale industrial automation systems to be built on top of it.

The layered architecture allows reasoning about production operations independently of which combination of resources is used to perform them, while at the same time the low level approach to planning and control enables a structured approach to error handling at the level of individual subsystem state. Compared to more high level planning frameworks, this can make recovery after errors possible in more scenarios.

SP is under continuous development [28]. The hope is that it can become a tool used by a wider audience within the ROS community.

References

  • [1] R. Alterovitz, S. Koenig, and M. Likhachev, “Robot planning in the real world: Research challenges and opportunities,” AI Magazine, vol. 37, no. 2, pp. 76–84, Summer 2016.
  • [2] L. Perez, E. Rodriguez, N. Rodriguez, R. Usamentiaga, and D. F. Garcia, “Robot guidance using machine vision techniques in industrial environments: A comparative review,” Sensors, vol. 16, no. 3, 2016. [Online]. Available: http://www.mdpi.com/1424-8220/16/3/335
  • [3] A. Bauer, D. Wollherr, and M. Buss, “Human-robot collaboration: A survey,” International Journal of Humanoid Robotics, vol. 05, no. 01, pp. 47–66, 2008. [Online]. Available: https://doi.org/10.1142/S0219843608001303
  • [4] “ROS,” http://www.ros.org, 2019, [Online; accessed 25-Feb-2019].
  • [5] D. Lu, “The 2018 ROS Metrics Report,” https://discourse.ros.org/t/the-2018-ros-metrics-report/6216/2, 2018, [Online; accessed 25-Feb-2019].
  • [6] “ROS 2,” https://index.ros.org/doc/ros2/, 2019, [Online; accessed 25-Feb-2019].
  • [7] G. Pardo-Castellote, “Omg data-distribution service: architectural overview,” in 23rd International Conference on Distributed Computing Systems Workshops, 2003. Proceedings., May 2003, pp. 200–206.
  • [8] M. Cashmore, M. Fox, D. Long, D. Magazzeni, B. Ridder, A. Carreraa, N. Palomeras, N. Hurtós, and M. Carrerasa, “Rosplan: Planning in the robot operating system,” in Proceedings of the Twenty-Fifth International Conference on International Conference on Automated Planning and Scheduling, ser. ICAPS’15.   AAAI Press, 2015, pp. 333–341.
  • [9] F. Rovida, M. Crosby, D. Holz, A. S. Polydoros, B. Großmann, R. P. A. Petrick, and V. Krüger, SkiROS—A Skill-Based Robot Control Platform on Top of ROS.   Cham: Springer International Publishing, 2017, pp. 121–160.
  • [10] E. Aertbeliën and J. De Schutter, “etasl/etc: A constraint-based task specification language and robot controller using expression graphs,” in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sep. 2014, pp. 1540–1546.
  • [11] C. Paxton, A. Hundt, F. Jonathan, K. Guerin, and G. D. Hager, “Costar: Instructing collaborative robots with behavior trees and vision,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), May 2017, pp. 564–571.
  • [12] E. Erös, M. Dahl, A. Hanna, and K. Bengtsson, “A ros2 based communication architecture for control in collaborative and intelligent automation systems,” in Submitted to the 29th International Conference on Flexible Automation and Intelligent Manufacturing (FAIM2019), June 2019.
  • [13] P. Bellavista, A. Corradi, L. Foschini, and A. Pernafini, “Data distribution service (dds): A performance comparison of opensplice and rti implementations,” in 2013 IEEE Symposium on Computers and Communications (ISCC), July 2013, pp. 000 377–000 383.
  • [14] M. Sköldstam, K. Åkesson, and M. Fabian, “Modeling of discrete event systems using finite automata with variables,” in Decision and Control, 2007 46th IEEE Conference on.   IEEE, 2007, pp. 3387–3392.
  • [15] P. Bergagård and M. Fabian, “Deadlock avoidance for multi-product manufacturing systems modeled as sequences of operations,” in 2012 IEEE International Conference on Automation Science and Engineering: Green Automation Toward a Sustainable Society, CASE 2012, Seoul, 20-24 August 2012, 2012, pp. 515 – 520.
  • [16] N. Sundström, O. Wigström, P. Falkman, and B. Lennartson, “Optimization of operation sequences using constraint programming,” IFAC Proceedings Volumes, vol. 45, no. 6, pp. 1580 – 1585, 2012.
  • [17] K. Bengtsson, B. Lennartson, and C. Yuan, “The origin of operations: Interactions between the product and the manufacturing automation control system,” IFAC Proceedings Volumes, vol. 42, no. 4, pp. 40–45, 2009.
  • [18] I. A. Sucan and S. Chitta, “MoveIt!” http://moveit.ros.org, 2018, [Online; accessed 26-Feb-2019].
  • [19] V. Klang, R. Kuhn, and J. Bonér, “Akka library.” http://akka.io, 2018, [Online; accessed 26-Feb-2019].
  • [20] S. Miremadi, B. Lennartson, and K. Åkesson, “A BDD-based approach for modeling plant and supervisor by extended finite automata,” Control Syst. Technol. IEEE Trans., vol. 20, no. 6, pp. 1421–1435, 2012.
  • [21] P. Bergagård, P. Falkman, and M. Fabian, “Modeling and automatic calculation of restart states for an industrial windscreen mounting station,” IFAC-PapersOnLine, vol. 48, no. 3, pp. 1030–1036, 2015.
  • [22] M. Dahl, K. Bengtsson, M. Fabian, and P. Falkman, “Automatic modeling and simulation of robot program behavior in integrated virtual preparation and commissioning,” Procedia Manufacturing, vol. 11, pp. 284–291, 2017.
  • [23] A. Biere, A. Cimatti, E. Clarke, and Y. Zhu, “Symbolic model checking without bdds,” in International conference on tools and algorithms for the construction and analysis of systems.   Springer, 1999, pp. 193–207.
  • [24] A. Pnueli, “The temporal logic of programs,” in 18th Annual Symposium on Foundations of Computer Science (sfcs 1977).   IEEE, 1977, pp. 46–57.
  • [25] R. Cavada, A. Cimatti, M. Dorigatti, A. Griggio, A. Mariotti, A. Micheli, S. Mover, M. Roveri, and S. Tonetta, “The nuxmv symbolic model checker,” in CAV, 2014, pp. 334–342.
  • [26] M. Fox and D. Long, “Pddl2. 1: An extension to pddl for expressing temporal planning domains,” Journal of artificial intelligence research, vol. 20, pp. 61–124, 2003.
  • [27] P. Bergagård and M. Fabian, “Calculating restart states for systems modeled by operations using supervisory control theory,” Machines, vol. 1, no. 3, pp. 116–141, 2013.
  • [28] “Sequence Planner,” https://github.com/sequenceplanner, 2019, [Online; accessed 1-Mar-2019].