Scenior: An Immersive Visual Scripting system of Gamified Training based on VR Software Design Patterns

Paul Zikas (paul@oramavr.com), ORamaVR; Nick Lydatakis (nick@oramavr.com), ORamaVR; Steve Kateros (steve@oramavr.com), ORamaVR; and George Papagiannakis (george.papagiannakis@oramavr.com), Foundation for Research and Technology - Hellas, University of Crete - Department of Computer Science, ORamaVR
Abstract.

Virtual reality (VR) has re-emerged as a low-cost, highly accessible consumer product, and training on simulators is rapidly becoming standard in many industrial sectors. Combined with the continued advancements in VR technology, interest in platforms that generate immersive experiences has increased. However, the available systems either focus on a gaming context and feature limited capabilities (embedded editors in game engines), or they support only the content creation of virtual environments without any rapid prototyping and modification. In this project we propose Scenior, an innovative coding-free, visual scripting platform to replicate gamified training scenarios through Rapid Prototyping via newly defined VR software design patterns. We implemented and compared three authoring tools for the rapid reconstruction of VR training scenarios: a) prototyped scripting, b) Visual Scripting and c) a VR Editor. Our Visual Scripting module is capable of generating training applications utilizing a node-based scripting system, whereas the VR Editor gives the user/developer the ability to customize and populate new VR training scenarios directly from within the virtual environment. We also introduce Action Prototypes, a new software design pattern suitable for replicating behavioural tasks in VR experiences. In addition, we present the scenegraph architecture as the main model to represent training scenarios on a modular, dynamic and highly adaptive acyclic graph based on a structured educational curriculum.

Virtual Reality, Visual Scripting, VR Editor, Software Design Patterns, Rapid Prototyping, Authoring tool
copyright: none; journalyear: 2019; copyright: rightsretained; conference: VRST '19, November 12-15, 2019, Sydney, Australia; doi: 10.1145/3306307.3328180; isbn: 978-1-4503-6317-4/19/07; booktitle: VRST '19, November 12-15, 2019, Sydney, Australia; ccs: Computing methodologies / Virtual reality; ccs: Software and its engineering / Visual languages; ccs: Software and its engineering / Integrated and visual development environments

1. Introduction

Scenior is a Visual Scripting system capable of generating VR training scenarios following a modular Rapid Prototyping architecture. Our initial goal was to define how to construct complex training pipelines from elementary behaviours derived from prototyped and reusable software building blocks. Inspired by game programming patterns, we implemented new software design patterns for VR experiences, named Actions, to support a variety of commonly used interactions and procedures within training scenarios, offering great flexibility in the development of immersive VR metaphors. We designed Scenior as a collection of authoring tools combining a Visual Scripting system and an embedded VR Editor, forming a bridge from product conceptualization to product realization and development in a reasonably fast manner without the fuss of complex programming and fixtures.

The main goal of this project is to implement and compare three different authoring mechanics, a) prototyped scripting, b) visual scripting and c) a VR Editor, for the rapid reconstruction of VR training scenarios based on design patterns. In more detail, the proposed system provides a VR playground to recreate training scenarios using the developed tools and functionalities. From the developer's perspective, this system forms a Software Development Kit (SDK) to generate VR content that follows a well-structured educational pipeline. After coding the training scenario, users can experience the exported simulation using a VR headset and a pair of controllers.

For rapid adaptation to operational variations, we implemented a schematic representation of VR experiences that replicates training scenarios in a directed acyclic graph. By prototyping commonly used interaction patterns we managed to create a customizable platform able to generate new content with minimal changes, supporting a variety of commonly used interactions and procedures within training scenarios and offering great flexibility in the development of VR metaphors.

Figure 1. The Architectural diagram of Scenior. The platform consists of Scenegraph along with the Action Prototypes. In a higher hierarchy the authoring tools (Visual Scripting and VR Editor) are facilitating tools to generate interactive behaviours in the virtual environment. Finally, the training scenarios are implemented from the auto-generated code.

During the development process, our design decisions focused on the implementation of a system to transform interactions and behaviours from the real world into the virtual one through VR technology. Our set of goals and design decisions was the following:

  • Educational pipeline: We are interested in representing an educational process as an efficient data structure, allowing simple creation, easy maintenance and fast traversal.

  • Modular Architecture: To support a wide variety of interactions and different behaviours within the virtual environment we want our system to integrate a modular architecture of different components linked into a common structure.

  • Code-free SDK: Our intention was to develop a platform where users can create VR training scenarios without advanced programming knowledge. We also want to study techniques for content creation in VR experiences: node-based Visual Scripting and VR Editors are becoming popular tools among game engines.

  • VR Metaphor: We want to study the differences between design principles on different platforms (Desktop, VR, AR etc.). What are the fundamentals of developing VR experiences with respect to the deployed medium?

  • Rapid Prototyping: We are interested in making reusable prototyped modules to implement more complex interactive behaviours derived from simple building blocks. Our goal is to define basic structural elements capable of visualizing simple behaviours on their own, yet able to recreate complex training scenarios when combined.

  • VR Software Design Patterns: What are the benefits of gamified software design patterns in VR applications? We want to support a large number of interactive behaviours in VR applications and to promote new software patterns specially formulated to speed up the development of VR experiences.

Following these design decisions, we implemented a platform encapsulating key modules to construct and maintain interactive VR training scenarios. Thus, we introduce the following components:

  • Scenegraph Architecture: We developed a dynamic, modular tree data structure to represent the training scenario following a well-defined educational curriculum. A scenegraph tree stores data regarding the tasks the trainee is asked to accomplish, dismantling the educational pipeline into simplified elements and focusing on one step at a time. The scenegraph forms the core architectural element of Scenior, orchestrating the educational tasks of each scenario and the interoperability of its different modules.

  • Action Prototypes: We designed basic Prototypes based on VR software design patterns to transfer behaviours from the real to the virtual world. Action Prototypes populate the scenegraph tree with well-defined tasks for the user to accomplish in order to complete a training scenario.

  • Visual Scripting: We integrated a Visual Scripting system as an authoring tool to export training scenarios from a node-based, coding-free user interface.

  • VR Editor: We embedded a run-time VR Editor within the training scenarios to give users the ability to customize and create new scenarios directly from the VR environment.

  • Pilot application: We implemented the above mechanics in a pilot training scenario to generate a VR experience utilizing Scenior. In more detail, we developed a training scenario around the restoration of an antique clock, where the trainee is asked to perform specific tasks to make the clock tick again. We also conducted a qualitative evaluation to assess the feasibility, time consumption and user experience of Scenior.

2. Related Work

Training simulations and educational applications are evolving to support a wide variety of scenarios and occupations. In addition, VR offers a portable, low-cost and immersive solution to cover the need for training and education in various sectors. From pilots to surgeons (Greenleaf, 2016), VR has a massive impact on training due to its embodied cognition and psychomotor capabilities. In this section we present the state of the art in VR training, its impact on education and other authoring Mixed Reality platforms.

2.1. The impact of VR in Training and Education

The engagement of education with novel technological solutions provides new opportunities to increase collaboration and interaction among participants, making the learning process more active, effective, meaningful, and motivating (Dwistratanti Sumadio and Awang Rambli, 2010), (Alsumait and Almusawi, 2013). Collaborative VR applications for learning (Greenwald et al., 2017), studies on the impact of VR in exposure treatment (Bouchard et al., 2016), as well as surveys on human social interaction (Pan and Hamilton, 2018), have shown the potential of VR as a training tool. The learning capabilities of VR show great potential, from surgical simulations (Papagiannakis et al., 2017, 2018) to supplementary material for learning courses (Carrozzino and Bergamasco, 2010). However, the majority of VR simulators primarily provide training, neglecting the educational factor (Gallagher et al., 2005). It is a common misconception to confuse the terms education and training: education refers to the acquisition of knowledge and information whereas training refers to the acquisition of skills (cognitive or psychomotor).

Focusing on the educational factor, the use of VR for knowledge transfer and e-learning keeps extending as R&D grows around entire VR environments in which learning takes place (Monahan et al., 2008). Virtual Reality rapidly increases its potential and influence on e-learning applications (de Faria et al., 2016) and simulations by taking advantage of two basic principles: a) immersion and b) embodiment (Slater, 2017; Shin, 2018). In more detail, immersive environments are capable of presenting a realistic scenario as it is, as it would be in real life.

VR-based learning, serious games and gamification (Ioannides et al., 2017) approaches for interactive learning events extend beyond simple technical and procedural skills. VR environments allow trainees to engage in multidisciplinary groups and focus on individual as well as team-based cognitive skills, including problem solving (Thornhill-Miller and Dupont, 2016), decision-making, and team behavioural skills (Pot-Kolder et al., 2018). These concepts are essential for developing an educational curriculum (Kateros et al., 2015) to enhance knowledge and skill transfer from the virtual to the real world.

2.2. Visual Programming as an authoring tool

Visual programming is gaining more publicity as more platforms and tools emerge and enlarge the community. We can separate them into two categories according to their visual appearance and basic functionalities: a) block-based and b) node-based scripting languages.

Block-based visual languages consist of modular blocks that represent fundamental programming utilities (if/else, while and for loops etc.) or even custom prototypes that describe more complex functionalities. OpenBlocks (Roque, 2008) proposes an extendable framework that enables application developers to build their own graphical block programming systems by specifying a single XML file. Google's online visual scripting platform Blockly (Pasternak et al., 2017) uses interlocking, graphical blocks to represent code concepts like variables, logical expressions, loops, and other basic programming patterns, and exports blocks to many programming languages like JavaScript, Python, PHP and Lua. Another approach from MIT is StarLogo (Klopfer et al., 2005), a client-based modeling and simulation software which facilitates the generation and understanding of simulations of complex systems. StarLogo utilizes 3D graphics, sounds and a block-based interface to serve as a programming tool for educational video games. Finally, another interesting approach is the Scratch (Maloney et al., 2010) visual programming language and environment, which primarily targets ages 8 to 16, offering an authoring tool to support self-directed learning through tinkering and collaboration with peers.

On the other hand, node-based visual languages represent structures and data flow using logical nodes linked with edges reflecting their correlation. The resulting structure looks like a directed graph that provides users with a visual overview of important data and program flow. GRaIL (Ellis and Sibley, 1969) was one of the first systems that featured a visual scripting method for the creation of computer instructions based on cognitive visual patterns. It was used to make sophisticated programs that could be compiled and run at full speed, or stepped through with a debugging interpreter that could run the program at variable speeds. More recently, (Kensek, 2015) published three case studies on visual programming for building information modeling (BIM) utilizing Dynamo, an open source graphical programming framework for design.

Another tool (Serna et al., 2015) proposed a lightweight solution with an intuitive user interface for commissioning IP-enabled wireless sensor networks (WSNs): a visual programming interface with a common framework for discovering smart home services on multi-node WSNs and connecting them to the Internet in a secure, simple and efficient way. The system also has code analysis capabilities and generates Python code to produce programming modules. Following the same pattern, BricklAyeR (Stefanidi et al., 2019) is a collaborative platform designed for users with limited programming skills that allows the creation of Intelligent Environments through a building-block interface. Another interesting project is ARTIST (Ilias Kotis, 2019), a platform which provides methods and tools for real-time interaction between human and non-human characters to generate reusable, low-cost and optimized MR experiences. Its aim is to develop a code-free system for the deployment and implementation of MR content, while using semantic data from heterogeneous resources.

2.3. Editing directly from the VR environment

The development of authoring tools in virtual reality systems led to the integration of sophisticated functionalities from within the virtual environment. One of them is the implementation of VR Editors as an embedded feature for rapid creation of digital worlds directly from the virtual environment.

At SIGGRAPH 2017, Unity Technologies presented EditorVR, an experimental scene editor which encapsulates Unity's features within the virtual environment, giving developers the ability to create a 3D scene while wearing a VR headset. EditorVR supports features for initially laying out or modifying a scene in VR, making adjustments to components using the Inspector workspace and building custom tools. The same year, Unreal Engine announced VR Mode, following similar principles. VR Mode enables designing and building worlds in a virtual reality environment using the full capabilities of the Unreal Editor toolset combined with interaction models designed specifically for VR world building.

Beyond game engines, VR editors have started emerging in other software sectors such as model editing. MARUI is a plugin for Autodesk Maya that lets designers jump right into the virtual scene and perform modelling and animation tasks. MARUI 3 claims that it not only allows designers to work comfortably with unlimited workspace and freedom of posture, but can also reduce production costs by up to 50%. Another notable project is RiftSketch (Elliott et al., 2015), a live coding environment built for VR, which allows the development and design of 3D scenes from within VR. RiftSketch proposes a hybrid XR system utilizing an external RGB camera and a Leap Motion sensor to record live footage of the programmer's hands while writing code and project this image into the virtual environment. Finally, eyecadVR proposes a VR editor for architecture design and scene management, a professional solution for architects to visualize and create their projects while being immersed in the virtual world.

The currently available VR editors feature scene management capabilities with intuitive ways to build a scene directly from the virtual environment. However, no available authoring tool offers a complete system for developing a behavioural VR experience covering both the design and the programming aspect.

2.4. Rapid Prototyping as a software design pattern

Prototyping hides the complexity of making new instances from the mother object. The concept is to copy an existing object rather than creating a new instance from scratch, which may involve costly operations. The existing object acts as a prototype and contains the state of the object. The newly copied object may change some properties only if required. This approach saves costly resources and time, especially when object creation is a heavy process. Mark Giereth and Thomas Ertl (Giereth and Ertl, 2008) described three design patterns for rapid visualization prototyping: a) a mapping of object-oriented models to relational data tables used in many visualization frameworks, b) a script-based approach for the configuration of visualization applications and c) performing online changes on the visual mapping by enhancing fine-grained mapping operators with scripting capabilities.

A virtual prototyping system that integrates VR with Rapid Prototyping to create virtual or digital prototypes that facilitate product development is described in (Choi and Cheung, 2008). Combining VR with Rapid Prototyping can result in a powerful tool for testing and evaluating new products and ideas before they are employed in practical manufacturing, preventing costly mistakes, decreasing time to market and, meanwhile, increasing worker safety. In their book, J. Rix et al. (Rix et al., 1995) discussed the importance of Virtual Prototyping from the application's point of view. Among others, they underlined that developing virtual prototypes and integrating this technology into the product development process promises major advantages for the industrial process, such as the reduction of time, cost savings and an increase in quality. In addition, they state that rapid product development and virtual prototyping are fast becoming commodities of world-class companies as a solution to maximize their effectiveness.

3. The Scenegraph Architecture

To achieve a goal, whether it is the restoration of a statue, the repair of an engine's gearbox or a surgical procedure, you need to follow a list of tasks/steps in a sequential order. We refer to those steps as Actions. For instance, if we want to hang a painting on the wall we have to perform the following Actions: 1) Mark the wall using a pen, 2) Hammer a nail at the marked spot and 3) Hang the painting on the wall.

A simple visualization of the mentioned Actions would be to link them in a single line, one after another. However, in complex training scenarios with dozens of Actions a sequential representation would not be very convenient due to the absence of classification and a hierarchical visual representation. For this reason, we implemented the Scenegraph architecture. Scenegraph is a tree data structure representing the tasks/Actions of a training scenario. The root of the tree holds the structure, at the first depth we initialize the Lesson nodes, then the Stage nodes and finally, at leaf level, the Action nodes.

Figure 2. An example of a scenegraph tree representing the simple scenario of hanging a painting on the wall.

One of the main principles of this project is to modify scenegraph and Action components using three different editors (scripting, visual scripting and the VR editor). To achieve this, the scenegraph data is stored as an xml file for efficient maintenance and editing. However, an xml file is not easy to read, especially when crowded with various information and data fields from complex data structures. This is why the three developed tools offer extended functionalities, editing abilities and easy maintenance of the scenegraph structure even for complex training scenarios.
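
To make the Lesson/Stage/Action hierarchy concrete, the sketch below shows one possible way such a scenegraph could be modelled in C# and serialized to the xml file with .NET's XmlSerializer. The class, element and attribute names are illustrative assumptions only, not the schema actually used by Scenior.

    // Hypothetical scenegraph node model (names are assumptions), serializable to XML.
    using System.Collections.Generic;
    using System.Xml.Serialization;

    [XmlRoot("Scenegraph")]
    public class ScenegraphRoot
    {
        [XmlElement("Lesson")] public List<LessonNode> Lessons = new List<LessonNode>();
    }

    public class LessonNode
    {
        [XmlAttribute("name")] public string Name;
        [XmlElement("Stage")] public List<StageNode> Stages = new List<StageNode>();
    }

    public class StageNode
    {
        [XmlAttribute("name")] public string Name;
        [XmlElement("Action")] public List<ActionNode> Actions = new List<ActionNode>();
    }

    public class ActionNode
    {
        [XmlAttribute("name")]   public string Name;       // e.g. "HammerNail"
        [XmlAttribute("type")]   public string Type;       // Insert, Remove or Use
        [XmlAttribute("script")] public string ScriptPath; // path to the generated Action script
    }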

4. Action Prototypes

Actions represent interactive behaviours/tasks the user needs to complete in the virtual environment to accomplish the training scenario. In this section, we analyze how we implemented new VR software design patterns through the rapid prototyping of Actions.

4.1. The IAction Interface

The Action object reflects a flexible structural module, capable of generating complex behaviours from basic elements. This also reflects the core idea behind scenegraph: provide developers with fundamental elements and tools to implement scenarios from basic principles. Each Action is described by a script containing its behaviour in terms of physical actions in the virtual environment. In technical detail, each Action script implements the IAction interface, which defines the basic rules every Action should follow. This interface ensures that all Actions will have the same methods and structures (a minimal sketch of such an interface is given after the list below).

  • Initialize: (Method) This method is the first one called when starting an Action. It is responsible for instantiating all the necessary 3D objects for the Action to run normally.

  • Perform: (Method) What is the behaviour of the Action when completed? This method cleans up the current Action and ensures that everything not needed is deleted before the next Action starts. It also plays animations and runs custom calls to finalize the Action. Performing an Action means we complete the Action and proceed to the next one. When the Perform method is called, Scenegraph moves to the next Action and Initializes it.

  • Undo: (Method) This method includes all the necessary calls to reset the Action. This includes the deletion of instantiated 3D assets and all the procedures needed to restore the state before the Action. Finally, Scenegraph traverses to the previous Action and Initializes it.

  • Clear: (Method) Clears the scene of objects initialized by the Action and of its references. This method is also used by the Perform and Undo methods to clear the currently active Action.

  • ParallelActionID: (Property) Stores the ID number of the parallel module the Action is registered to. This information is used when two or more Actions are initialized at the same time, offering the user the ability to choose how to proceed in the scenario. This functionality, described as Alternative Paths, presents the dynamic capabilities of scenegraph.
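
The following is a minimal C# sketch of an IAction-style interface as described above; the exact signatures used by Scenior are not given in the text, so the ones shown here are assumptions.

    // Minimal sketch of the IAction contract (signatures assumed, not quoted from Scenior).
    public interface IAction
    {
        int ParallelActionID { get; set; } // ID of the parallel module (Alternative Paths)

        void Initialize(); // instantiate the 3D objects the Action needs
        void Perform();    // finalize the Action and let the scenegraph advance
        void Undo();       // reset the Action and return to the previous one
        void Clear();      // remove instantiated objects and references
    }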

Designing a shared interface among the structural elements of a system is the first step to prototyping commonly used components. This methodology is beneficial for both the user and the developer: a) users are introduced to interactive patterns they are familiar with, avoiding complex behaviours, while b) developers code following the same implementation patterns, leading to easy maintenance of applications.

Figure 3. Example of Insert Action. The hologram indicates the correct position of the object

4.2. From Actions to VR Design Patterns

At this point, we have described the basic interface each Action should implement to be initialized and performed properly. With this interface, a developer can generate Action scripts that behave under a common ruleset, following the scenegraph pipeline. However, this alone would not be very convenient, since Actions in this form have a lot of flexibility in terms of implementation. This is problematic in systems where developers need classified properties that run across different modules. To make our system more efficient we have to limit the capabilities of the Action entity to target simple but commonly used behaviours/tasks in training scenarios. By modelling those behaviours, we generate a pool of generic behavioural patterns as prototyped software patterns suitable for VR applications.

The implementation of Action Prototypes was highly inspired by Game Programming Patterns (Nystrom, 2014), as an alternative paradigm of design patterns specially designed for VR experiences. The immersion of virtual environments forces the implementation of programming patterns to fit a more interactive way of thinking. Classic game design patterns tend to become deprecated, since interactive environments like VR applications focus on the connection of the user with the virtual assets. For this reason, the software patterns developed in this project were designed to match the needs for interactivity, embodied cognition and physicality in VR experiences.

Technically, Action Prototypes are a collection of objects, each referring to a specific interactive task. In this project we developed three Action Prototypes, along with a shared Base Prototype:

  • Insert Action: refers to a specific type of Action where the user has to insert an object at a predefined position in order to complete it. Technically, to implement an Insert Action the developer/user needs to set the initial and the final position of an object; the task for the user is then to take this particular object and place it at the correct position, respecting its orientation.

  • Remove Action: describes a step of the procedure in which the user has to remove an object using their hands. To implement a Remove Action the developer/user needs to define the position where the object will be instantiated; the goal of the user is then to reach it and remove it from this position.

  • Use Action: refers to a step where the user needs to take an object from the virtual scene and interact with it over a predefined area.

  • Base Prototype: The Base Prototype does not represent a behaviour like the previous prototypes; it is the base class from which the other prototypes derive. It contains common methods used across multiple prototypes for better organization and code optimization. Figure 4 illustrates an architectural diagram of Action Prototypes to better visualize their dependencies.

Action Prototypes are based on an extendable software architecture capable of integrating new Prototyped Actions with minimal changes and with no correlation to existing Actions.
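
As a hedged illustration of this extendable architecture, the sketch below shows how a prototyped Insert Action might be declared on top of a shared base class in Unity C#; the class names, fields and default behaviour are assumptions made for illustration, not Scenior's actual implementation.

    // Hypothetical Action Prototype hierarchy (names and defaults are assumptions).
    using UnityEngine;

    public abstract class BaseActionPrototype : MonoBehaviour, IAction
    {
        public int ParallelActionID { get; set; }

        public abstract void Initialize();
        public virtual void Perform() { Clear(); } // default: clean up and let the scenario advance
        public virtual void Undo()    { Clear(); }
        public virtual void Clear()   { /* destroy spawned objects, unregister events */ }
    }

    public class InsertAction : BaseActionPrototype
    {
        public GameObject objectPrefab;    // the object the user must place
        public Transform  initialPosition; // where the object is spawned
        public Transform  targetPosition;  // where (and with which orientation) it belongs

        public override void Initialize()
        {
            // Spawn the interactable object at its starting pose; a hologram at
            // targetPosition would hint at where the user must place it (cf. Figure 3).
            Instantiate(objectPrefab, initialPosition.position, initialPosition.rotation);
        }
    }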

Figure 4. Action Prototypes Architecture diagram.

Action Prototypes form a powerful software pattern for implementing interactive tasks in VR experiences. They extend the currently used game design patterns, proposing a behavioural interactive pattern ideal for training scenarios and educational applications. Utilizing Action Prototypes, developers can replicate custom behaviours with a few lines of code, taking advantage of their abstraction and reusability.

4.3. Alternative Paths

The Action Prototypes mentioned above function as a new design pattern for VR experiences, a modular building block to develop applications in combination with the scenegraph architecture described. However, with the proposed scenegraph architecture alone we can only generate VR experiences following a "static" pipeline of Actions, where the user needs to complete a predefined list of tasks. In order to transform Scenegraph from a static tree into a dynamic graph we introduced Alternative Paths. Since an educational pipeline can lead to multiple paths according to the user's actions and decisions, Scenegraph follows the same adaptive principles.

In addition, certain actions, or even wrong estimations and technical errors, may deviate the training scenario from its normal path. For example, in a paint restoration process, if the technician does not pay attention to the correct consistency of the chemicals used, they may cause damage to the painting. This damage must then be fixed, causing the scenegraph to dynamically add more Actions to repair it. Besides backtracking after wrong estimations and errors, the Alternative Path mechanic is also used in situations where the trainee needs to make a particular decision over a dilemma. In a training scenario this feature primarily stresses the trainee's judgment in situations where deciding among different approaches is required.

We implemented these functionalities to support real-time decision making; as a result, Scenegraph can change its structure (Nodes) as the training scenario continues. Scenegraph currently supports the addition, deletion and alteration of Lesson, Stage and Action Nodes depending on the user's actions and decisions.
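
To illustrate the idea (using the hypothetical node classes sketched in Section 3, not Scenior's actual API), an Alternative Path could be injected at run time by appending corrective Actions to the current Stage when the trainee makes the wrong decision:

    // Hedged sketch only: the node API and the corrective Actions are assumptions.
    public static class AlternativePathExample
    {
        public static void OnWrongChemicalMix(StageNode currentStage)
        {
            // The trainee damaged the painting: grow the scenario dynamically
            // with repair steps instead of failing the whole training session.
            currentStage.Actions.Add(new ActionNode { Name = "CleanDamagedArea", Type = "Use" });
            currentStage.Actions.Add(new ActionNode { Name = "ReapplyVarnish",   Type = "Insert" });
        }
    }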

4.4. The Valley of Interactivity

After experimenting with various design patterns and interaction techniques for VR, an interesting pattern emerged regarding the correlation between user experience and the interactivity of the VR application. Without a doubt, an immersive experience relies significantly on the implemented interactive capabilities that form the general user experience. As a result, to make an application more attractive in terms of UX we need a more advanced interactive system. However, as we implement more complex interaction mechanics, there is a point at which the UX drops dramatically. At this point, the application is too advanced and complex for the user to understand and perform the various tasks with ease. We characterize this effect as heterogeneous behaviourism, meaning that the user's actions do not follow a deterministic pattern (the same actions cause different behaviours), resulting in the inability to complete the implemented Actions due to their incomprehensible complexity.

Figure 5. ”Uncanny valley” of interaction. Correlation between UX and interaction in VR.

In contrast, applications with limited interactivity follow a linear increase in user experience. From applications where users are only observers, like 360° VR videos, to cognitive applications, the interaction curve is linear and the VR experiences are easy to understand and accomplish. To overcome the effect mentioned above, applications need to drastically enhance their interactivity capabilities and offer users a more intuitive VR environment in which to understand how they are supposed to act in the virtual world. Once past the valley of interactivity, applications evolve rapidly towards a psychomotor methodology integrating embodied cognition for the maximum user experience.

Figure 6. A training scenario visualized from the Visual Scripting Editor featuring from right to left: Lessons (red), Stages (green), Actions (blue), Action Scripts (gray) and Prefab nodes (brown).

5. Visual Scripting

The Scenegraph model is capable of generating applications from reusable fundamental elements (Actions), supporting basic insert, remove and use behaviours in VR. However, what is the next step? What can be done to enhance the development process and speed up content creation? The complexity of the scenegraph xml may cause difficulties in visualizing the scenegraph nodes, especially for major training scenarios. Another point is the programming skills required to develop such experiences. Using the proposed architecture could be challenging for inexperienced programmers, resulting in limited user experience and time-consuming debugging sessions.

To eliminate the mentioned difficulties, we introduce Visual Scripting as an additional authoring tool to manage, maintain and develop VR experiences utilizing the scenegraph architectural model and the Action Prototypes. Visual Scripting encapsulates all the functionalities from the base model and at the same time offers high visualization capabilities, which are very effective especially on extended projects.

5.1. The Visual Scripting metaphor

The development of the visual scripting system as an assistive tool aimed to visualize the VR training scenario in a convenient way, if possible fitting everything into one window. The simplicity of this tool was carefully considered when designing its features, since the offered functionalities could be used by non-programmers. A coding-free platform offers a safe environment to work in, reducing programming errors and unforeseen discrepancies between projects. From the beginning of the project, one of the main design principles was to strategically abstract the software building blocks into basic elements. The main idea behind this abstraction was the improvement of the Visual Scripting and VR Editor tools, since fundamental elements construct a better visual representation than complex ones. To render the visual nodes we exploited Unity Node Editor Base (UNEB), an open source framework which provides basic node rendering and management functionalities.

Moving to the Visual Scripting metaphor, the scenegraph data structure forms a tree, visualized as a node-based editor with nodes linked together by edges forming logical segments and reusable parts. We retrieve this information through reflection and run-time compilation, gathering data from the Action scripts. An example of a complete diagram representing a training scenario is illustrated in figure 6. Developers can utilize Visual Scripting to generate training scenarios, as both the scenegraph and the Action Scripts can be generated automatically through UIs.
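
As an illustrative sketch of the reflection step (an assumption about how it could be done, not a quotation of Scenior's code), all concrete Action types could be discovered at run time so the node editor can list them as available node types:

    // Gather every concrete class implementing IAction via standard .NET reflection.
    using System;
    using System.Linq;

    public static class ActionDiscovery
    {
        public static Type[] FindActionTypes()
        {
            return AppDomain.CurrentDomain.GetAssemblies()
                .SelectMany(assembly => assembly.GetTypes())
                .Where(type => typeof(IAction).IsAssignableFrom(type)
                               && !type.IsInterface && !type.IsAbstract)
                .ToArray();
        }
    }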

One of the major goals of this project was to implement a visual scripting system that replaces the coding of simple Action behaviours with intuitive and easy-to-use graphical patterns like nodes, drop-down menus, input fields and other clickable elements. In this way, content creation becomes a coding-free process, encapsulating the system's principles into equivalent visual metaphors and giving users the ability to generate VR training content without a strong software background.

Figure 7. The figure illustrates an Action script module accompanied with the corresponding 3D assets (Prefab Modules). The selected Action represents the insertion of a gear.

5.2. Dynamic Action Script code Generation and Compilation

The proposed scenegraph architecture relies on a pipeline of Action scripts to replicate a training scenario in virtual reality. The simplest Action script contains only the overridden Initialize method, which defines the prefabs to be instantiated when the scenegraph triggers its Initialize method. Visual Scripting can generate simple Action Scripts at run time from the information provided by the visual input (Action and Prefab modules). After completing the visual construction of an Action using the scenegraph editor, the next step is to generate the Action script to save the implemented behaviour in a C# code script.

To write C# code at run time, we utilized CodeDOM, a built-in tool of the .NET Framework that enables runtime code generation and compilation. The abstraction of Action Prototypes offers an elegant implementation to generate each script using a single virtual method. To finalize the Action script, except for the Action Type (Insert, Remove or Use), we also need the implemented behaviour. Action Prototypes retrieve this information directly from the visual scripting editor through the linked nodes relative to the Action module.

After compiling the Action script, we can link it with a reference to the input field of the node in the visual scripting editor. Finally, we have to export the xml file from the scenegraph editor, which saves the node structure and important data for the Action scripts into a single xml file. This step concludes the process of generating an Action script using only the tools provided by the scenegraph editor and, more importantly, without writing a single line of code.
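
The snippet below is a minimal sketch of run-time compilation with CodeDOM, as named above; the generated source string and the referenced assembly are placeholders, while in Scenior the full Action script text is produced by the generator from the visual nodes.

    // Minimal CodeDOM compilation sketch (.NET Framework scripting backend assumed).
    using System.CodeDom.Compiler;
    using Microsoft.CSharp;

    public static class ActionScriptCompiler
    {
        public static CompilerResults Compile(string generatedSource)
        {
            var provider = new CSharpCodeProvider();
            var parameters = new CompilerParameters
            {
                GenerateInMemory = true // keep the compiled Action assembly in memory
            };
            parameters.ReferencedAssemblies.Add("UnityEngine.dll"); // assumed dependency

            return provider.CompileAssemblyFromSource(parameters, generatedSource);
        }
    }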

5.3. Expanding auto generated scripts

Visual Scripting generates a basic Action script containing the Initialize method, the minimum requirement for an Action to run properly. However, there are cases where developers need to implement significantly more complex Action behaviours, enhancing the user experience with additional information and features.

Prototyped Actions were developed using a software architecture capable of providing the basic gameplay facilities, but also of enhancing those Actions according to the developer's preferences. The Perform method can be overridden directly from the Action script to include this additional behaviour. The same principle applies to all the other virtual methods defined in the IAction interface, like Undo, Initialize etc. For additional modifications, the best practice is to edit the exported script directly and override the declared IAction methods. In this way, we keep simple scripts clean, while complex ones can be adjusted upon request to fit the training scenario.
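
A hypothetical example of such a hand-extended script is sketched below, building on the illustrative InsertAction class from Section 4.2: the auto-generated class only fills Initialize, and the developer overrides Perform to add an assumed finishing animation before the scenegraph advances. None of the identifiers are taken from Scenior itself.

    // Hypothetical extension of an auto-generated Action script (names assumed).
    using UnityEngine;

    public class InsertGearAction : InsertAction
    {
        public Animator clockAnimator; // assumed reference assigned in the scene

        public override void Perform()
        {
            if (clockAnimator != null)
                clockAnimator.SetTrigger("GearInserted"); // play a finishing animation

            base.Perform(); // keep the prototype's default clean-up and progression
        }
    }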

6. VR Editor

The Visual Scripting system enhanced the usability and effectiveness of the scenegraph system to generate gamified training scenarios through a coding-free platform. The impact on content creation was very strong due to the additional tools and features introduced in the editor. However, Visual Scripting lacks one specific and rather important part: the ability to design on-the-go behaviours and scenarios directly within the virtual environment. Such a feature improves the design capabilities while offering an intuitive way to modify and update existing applications directly from the virtual environment.

The VR Editor was designed as an authoring tool on top of the scenegraph architecture, utilizing the developed features of the system to extend the visualization and interaction capabilities of Scenior. This interactive tool further reduces the time needed to produce training scenarios thanks to its run-time modification features and the advanced visualization it provides. In the process of developing a training scenario, it is common to modify key 3D assets multiple times to reach a convenient position at which to instantiate them in the virtual environment. In addition, certain programming behaviours are better designed directly in VR instead of the editor, due to the different perspective of the used medium; in this case the VR headset and the controllers.

6.1. Medium-oriented design principles

Designing the VR Editor was a critical task, since it should be well balanced between interaction, usability and simplicity to actually improve the development process instead of making it more complex. Below is a brief explanation of the three basic principles of application design based on the applied medium.

Figure 8. Medium-oriented design principles along with example applications and their correlation.
  • Interaction: Describes how the user can interact with a system. From gesture-based to voice-activated commands, each medium has its own characteristics and interaction principles that should be carefully taken into consideration at the design phase.

  • Intuition: Reflects the impact of interaction. A system is characterized as intuitive if you can get from point A to point B without consulting the documentation. In other words, it describes how the user will achieve their end goal using the provided interaction techniques.

  • Functionality: Describes what can be achieved with the proposed system. Functionality is related to the context and impact. Why should we use this software? What will be the feedback and the results related to our work? Functionality has a serious impact on complexity due to the additional information and data management.

6.2. The VR metaphor

The main concept behind the implementation of our VR Editor revolves around an interactive system of floppy disks and a personal computer. Figure 9 illustrates the design of the VR Editor along with its various components and floppy disks. The Scenegraph nodes are represented by floppy drives on the left side of the screen. Action scripts are initialized as floppy disks, each one holding the script behaviour that defines the prefabs relative to the Action. There are three types of floppy disk, separated by unique coloration: blue disks represent Use Actions, red disks Remove Actions and black disks Insert Actions.

Figure 9. Interacting with the VR editor. User holds a Use Action preparing to generate the Action behaviour.

The right panel contains the properties of the selected drive. On the top side, we see the name of the scenegraph node along with its type (Lesson, Stage or Action). To implement a new script for an Action, the user needs to take a floppy disk and insert it into the corresponding floppy drive. This automatically loads a new script according to the type of floppy disk. In a similar way, ejecting a floppy disk from the drive detaches the script from the Action.

6.3. Generate Actions and parametrization on-the-go

The functionality with the highest impact on the VR Editor is by far the ability to modify and parametrize Actions on the go. This was also the main reason that led us to implement the VR Editor as an additional authoring tool within the virtual environment: to support a coding-free development tool and give users the ability to modify or even generate new gamified behaviours while playing the training scenario.

First of all, users can customize the scenegraph through the VR Editor by adding or deleting Scenegraph nodes to match their needs. This functionality has a serious impact for users who want to parametrize existing VR training scenarios or create their own without having any programming knowledge. We provide this ability via an interactive UI in the VR Editor, with physical buttons and knobs, where users can modify and save the scenegraph.

The next step is script generation from the VR Editor. In this process, we utilize the floppy disks to implement new Action behaviours but also to modify existing ones. To generate a new Action, users need to insert a floppy disk representing the Action script (Insert, Remove or Use). The system registers the insertion of the floppy disk and an empty Action script appears on the properties screen, ready for modification.

If the script is freshly generated it will not have an object attached to the prefab positions, so we have to assign one to complete the Action script. To implement this functionality we introduced a file manager module for the VR Editor. The embedded file manager gives users the ability to traverse the project's saved models and select the one needed for each case. Manipulating the position of instantiated 3D assets is meaningful, as it is common to place an object at a specific location from the 2D computer monitor that, from the VR perspective, does not fit well.

To conclude, the VR Editor proposes a new interactive design not only to manipulate 3D assets and set the virtual scene, but also to generate gamified behaviours without writing a single line of code. This feature has a strong impact on content creation, acting as a script generator and a data visualization at the same time. With this tool, users are no longer just observers; they can modify the training scenarios on the go, implement new ideas and correct wrong Action behaviours without specialized programming knowledge.

7. Results

We organized a user-based qualitative evaluation experiment to examine and quantify the effectiveness and impact of Scenior, but also to gather useful data and comments for future updates. In more detail, the experiment was conducted with 12 participants; half of them were familiar with computer programming and VR, whereas the other half had no programming background or any familiarity with VR. We split our participants in this way to examine the differences between programmers and non-programmers and how this variable would affect their interaction with Scenior. At the beginning of the experiment, participants rated themselves on their familiarity with VR and their programming skills (figure 10).

Figure 10. Data acquired from the self evaluation of participants regarding their experience (range 0-5).

The experiment was separated into four sessions to evaluate the different components and functionalities of Scenior. The first part was to run a VR training scenario generated with Scenior. For this reason, we developed a VR training scenario regarding the restoration of an antique clock, where users were asked to complete interactive tasks to make the clock work again. The next session evaluated the capabilities of Visual Scripting: participants were asked to generate a Use and a Remove Action in the VR scenario. After that, they had to utilize the VR Editor to change the positions of certain interactive objects and generate an Insert Action directly from the virtual environment. Finally, we had an open discussion with the participants to gather valuable feedback on all the implemented tools.

We begin with the evaluation of the generated training scenario. In the following tables, the completion time is measured in mm:ss format, the evaluation of mechanics is on a 0-10 range and the Help metric shows the number of times participants asked for hints because they did not know how to proceed. From table 1 we observe that non-programmers faced more difficulties in completing the scenario, since their average completion time and number of requests for help are higher. In contrast, they gave higher ratings to the quality of the overall experience and the educational value of the application.

                     Programmers   Non-programmers
Completion Time      2:14          3:53
Help                 0.8           1.6
Experience quality   7.0           8.5
Educational value    7.6           8.8

Table 1. Evaluation of training scenario

We continue with the evaluation of the Visual Scripting tool. In this phase, we explained all the possibilities and functionalities of Visual Scripting and how to generate scripts automatically. After this brief explanation, users were asked to implement a new Use and Remove Action utilizing the scenegraph editor. From table 2 we see that both programmers and non-programmers asked for help more often during the generation of Action scripts than during the training scenario. In addition, it is clear that, regarding the implementation of Actions, programmers were more familiar with the tools and managed to complete the tasks five minutes earlier. An interesting observation is that non-programmers rated the overall experience higher than the programmers, although they gave lower scores for the implementation of the Action Scripts.

                     Programmers   Non-programmers
Completion Time      12:45         17:53
Help                 1.3           2.0
Use Action           8.5           6.6
Remove Action        8.8           7.1
Overall Experience   7.2           8.6

Table 2. Evaluation of Visual Scripting

The final evaluation concerned the VR Editor. We introduced the new tool to the participants with a brief introduction and then asked them to complete two tasks. Table 3 shows some interesting results from the evaluation of the VR Editor. First of all, participants asked for help at a higher rate than with the Visual Scripting tool. In addition, non-programmers completed the tasks four minutes earlier using the VR Editor than with the Visual Scripting tool. The overall experience has similar values to the one from the Visual Scripting tool.

                     Programmers   Non-programmers
Completion Time      12:15         13:21
Help                 1.6           3.0
Object Reposition    7.3           7.1
Insert Action        7.1           6.9
Overall Experience   7.4           8.5

Table 3. Evaluation of VR Editor

The next session was an open discussion about Scenior to gather feedback from the participants. Some of them suggested implementing additional features in the Visual Scripting editor, focusing more on the user experience than on functionality. Other participants commented on the difficulty they had understanding the interaction with some modules of the VR Editor, since there was no tutorial while playing the application. In addition, some programmers complained that they could not assimilate the generation of Actions through the VR Editor due to the overload of information in a single window. Positive feedback we received from some non-programmers was the feeling of accomplishment when they managed to generate a simple VR application without knowing how to program. Finally, some programmers mentioned that they enjoyed the process of Action generation in the Visual Scripting editor due to the visualization of the complete scenario in a single window.

8. Conclusions and Future Work

In this work we presented Scenior, a system capable of generating gamified training experiences by exploiting its modular architecture and the authoring tools we developed. We introduced Scenegraph as a dynamic, acyclic data structure to represent any training scenario following an educational curriculum. In addition, we proposed a category of new software design patterns, the Action Prototypes, specially formulated for interactive VR applications. Finally, we developed a Visual Scripting tool along with a VR Editor to enhance visualization and speed up content creation.

Scenior provides a collection of mechanics and tools to enhance the development process. However, there are certain limitations linked to its components and functionalities. First of all, the evaluation process highlighted weaknesses in the interaction with the VR Editor. Although it behaves well in Action customization, the script generation process is still complex due to the amount of information and the number of steps required from the user. In addition, some of its interactive components are not intuitive, resulting in the frustration of users when asked to implement certain behaviours. Regarding the Visual Scripting editor, the real-time compilation process may cause performance issues in advanced training scenarios and delay the initialization of the scenegraph. Finally, both Visual Scripting and the VR Editor were designed to generate simple Action behaviours, so the implementation of complex behaviours would not be feasible due to the challenge of visualizing large amounts of data in a compact form.

In the future, we aim to create Unity packages containing the encrypted xml file along with the project resources and scripts, compressing training scenarios into one single file. This methodology will separate the platform from the content, simplifying the distribution and exploitation of the developed applications.

We also aim to publish a new reflection system which updates only the necessary parts of a script, leaving the rest of the code intact. This would be essential, since most of the time we edit small parts of a script by replacing object paths or adding new customization to the Perform and Update methods. Regarding the Visual Scripting and VR Editor tools, an extension planned for the near future is the implementation of a real-time visualization system, like a workflow, to monitor the data traveling through different components.

Finally, we aim to expand the demo training scenarios with additional VR content in the fields of cultural heritage and electronics to study the impact of scenegraph from a multidisciplinary approach.

References

  • Alsumait and Almusawi (2013) Asmaa Alsumait and Zahraa S. Almusawi. 2013. Creative and innovative e-learning using interactive storytelling. International Journal of Pervasive Computing and Communications 9, 3 (2013), 209–226. https://doi.org/10.1108/IJPCC-07-2013-0016
  • Bouchard et al. (2016) Stéphane Bouchard, Stéphanie Dumoulin, Geneviéve Robillard, Tanya Guitard, Evelyne Klinger, Héléne Forget, Claudie Loranger, and Francois Xavier Roucaut. 2016. Virtual reality compared with in vivo exposure in the treatment of social anxiety disorder: A three-arm randomised controlled trial. The British journal of psychiatry : the journal of mental science 210 (12 2016). https://doi.org/10.1192/bjp.bp.116.184234
  • Carrozzino and Bergamasco (2010) Marcello Carrozzino and Massimo Bergamasco. 2010. Beyond virtual museums: Experiencing immersive virtual reality in real museums. JOURNAL OF CULTURAL HERITAGE 11 - 4 (10 2010), 452–458. https://doi.org/10.1016/j.culher.2010.04.001
  • Choi and Cheung (2008) S.H. Choi and H.H. Cheung. 2008. A versatile virtual prototyping system for rapid product development. Computers in Industry 59, 5 (2008), 477 – 488. https://doi.org/10.1016/j.compind.2007.12.003
  • de Faria et al. (2016) Jose Weber Vieira de Faria, Manoel Jacobsen Teixeira, Leonardo de Moura Sousa Júnior, Jose Pinhata Otoch, and Eberval Gadelha Figueiredo. 2016. Virtual and stereoscopic anatomy: when virtual reality meets medical education. Journal of Neurosurgery JNS 125, 5 (2016).
  • Dwistratanti Sumadio and Awang Rambli (2010) Desi Dwistratanti Sumadio and Dayang Awang Rambli. 2010. Preliminary Evaluation on User Acceptance of the Augmented Reality Use for Education. Computer Engineering and Applications, International Conference on 2 (03 2010), 461–465. https://doi.org/10.1109/ICCEA.2010.239
  • Elliott et al. (2015) Anthony Elliott, Brian Peiris, and Chris Parnin. 2015. Virtual Reality in Software Engineering: Affordances, Applications, and Challenges. In Proceedings of the 37th International Conference on Software Engineering - Volume 2 (ICSE ’15). IEEE Press, Piscataway, NJ, USA, 547–550. http://dl.acm.org/citation.cfm?id=2819009.2819098
  • Ellis and Sibley (1969) T. O. Ellis, J. F. Heafner, and W. L. Sibley. 1969. The Grail Project: An Experiment in Man-Machine Communications. RAND Corporation (06 1969), RM-5999-ARPA.
  • Gallagher et al. (2005) Anthony Gallagher, E Matt Ritter, Howard Champion, Gerald Higgins, Marvin Fried, Gerald Moses, C Smith, and Richard Satava. 2005. Virtual Reality Simulation for the Operating Room: Proficiency-Based Training as a Paradigm Shift in Surgical Skills Training. Annals of Surgery 241 (02 2005), 364–372. https://doi.org/10.1097/01.sla.0000151982.85062.80
  • Giereth and Ertl (2008) Mark Giereth and Thomas Ertl. 2008. Design Patterns for Rapid Visualization Prototyping. Proceedings of the International Conference on Information Visualisation, 569–574. https://doi.org/10.1109/IV.2008.36
  • Greenleaf (2016) Walter Greenleaf. 2016. How VR technology will transform healthcare. In ACM SIGGRAPH 2016 VR Village. 1–2. https://doi.org/10.1145/2929490.2956569
  • Greenwald et al. (2017) Scott Greenwald, Alexander Kulik, André Kunert, Stephan Beck, Bernd Froehlich, Sue Cobb, Sarah Parsons, Nigel Newbutt, Christine Gouveia, Claire Cook, Anne Snyder, Scott Payne, Jennifer Holland, Shawn Buessing, Gabriel Fields, Wiley Corning, Victoria Lee, Lei Xia, and Pattie Maes. 2017. Technology and Applications for Collaborative Learning in Virtual Reality. In CSCL.
  • Ilias Kotis (2019) Konstantinos Ilias Kotis. 2019. ARTIST - a reAl-time low-effoRt mulTi-entity Interaction System for creaTing reusable and optimized MR experiences. Research Ideas and Outcomes 5 (2019), e36464. https://doi.org/10.3897/rio.5.e36464 arXiv:https://doi.org/10.3897/rio.5.e36464
  • Ioannides et al. (2017) Marinos Ioannides, Nadia Thalmann, and George Papagiannakis. 2017. Mixed Reality and Gamification for Cultural Heritage. https://doi.org/10.1007/978-3-319-49607-8
  • Kateros et al. (2015) Stavros Kateros, Stylianos Georgiou, Margarita Papaefthymiou, George Papagiannakis, and Michalis Tsioumas. 2015. A Comparison of Gamified, Immersive VR Curation Methods for Enhanced Presence and Human-Computer Interaction in Digital Humanities. International Journal of Heritage in the Digital Era 4, 2 (2015), 221–233. https://doi.org/10.1260/2047-4970.4.2.221 arXiv:https://doi.org/10.1260/2047-4970.4.2.221
  • Kensek (2015) Karen Kensek. 2015. VISUAL PROGRAMMING FOR BUILDING INFORMATION MODELING: ENERGY AND SHADING ANALYSIS CASE STUDIES. Journal of Green Building 10, 4 (2015), 28–43. https://doi.org/10.3992/jgb.10.4.28 arXiv:https://doi.org/10.3992/jgb.10.4.28
  • Klopfer et al. (2005) Eric Klopfer, Susan Yoon, and Tricia Um. 2005. Teaching Complex Dynamic Systems to Young Students with StarLogo. Journal of Computers in Mathematics and Science Teaching 24, 2 (April 2005), 157–178.
  • Maloney et al. (2010) John Maloney, Mitchel Resnick, Natalie Rusk, Brian Silverman, and Evelyn Eastmond. 2010. The Scratch Programming Language and Environment. Trans. Comput. Educ. 10, 4, Article 16 (Nov. 2010), 15 pages. https://doi.org/10.1145/1868358.1868363
  • Monahan et al. (2008) Teresa Monahan, Gavin McArdle, and Michela Bertolotto. 2008. Virtual reality for collaborative e-learning. Computers & Education 50, 4 (2008), 1339 – 1353. https://doi.org/10.1016/j.compedu.2006.12.008
  • Nystrom (2014) Robert Nystrom. 2014. Game Programming Patterns. Genever Benning.
  • Pan and Hamilton (2018) Xueni Pan and Antonia Hamilton. 2018. Why and how to use virtual reality to study human social interaction: The challenges of exploring a new research landscape. British Journal of Psychology 109 (03 2018). https://doi.org/10.1111/bjop.12290
  • Papagiannakis et al. (2018) George Papagiannakis, Nick Lydatakis, Steve Kateros, Stelios Georgiou, and Paul Zikas. 2018. Transforming Medical Education and Training with VR Using M.A.G.E.S.. In SIGGRAPH Asia 2018 Posters (SA ’18). ACM, New York, NY, USA, Article 83, 2 pages. https://doi.org/10.1145/3283289.3283291
  • Papagiannakis et al. (2017) George Papagiannakis, Panos Trahanias, Eustathios Kenanidis, and Eleftherios Tsiridis. 2017. Psychomotor Surgical Training in Virtual Reality. Master Case Series & Techniques: Adult Hip (07 2017), 827–830. https://doi.org/10.1007/978-3-319-64177-5_41
  • Pasternak et al. (2017) Erik Pasternak, Rachel Fenichel, and Andrew N. Marshall. 2017. Tips for creating a block language with blockly. In 2017 IEEE Blocks and Beyond Workshop (B B). 21–24. https://doi.org/10.1109/BLOCKS.2017.8120404
  • Pot-Kolder et al. (2018) Roos Pot-Kolder, Chris N W Geraets, Wim Veling, Marije van Beilen, Anton B. P. Staring, Harm J. Gijsman, Philippe Delespaul, and Mark van der Gaag. 2018. Virtual-reality-based cognitive behavioural therapy versus waiting list control for paranoid ideation and social avoidance in patients with psychotic disorders: a single-blind randomised controlled trial. The lancet. Psychiatry 5 3 (2018), 217–226.
  • Rix et al. (1995) Joachim Rix, Stefan Haas, and José Teixeira. 1995. Virtual Prototyping: Virtual environments and the product design process. https://doi.org/10.1007/978-0-387-34904-6
  • Roque (2008) Ricarose Roque. 2008. OpenBlocks : an extendable framework for graphical block programming systems. (05 2008).
  • Serna et al. (2015) Maria Angeles Serna, Cormac J. Sreenan, and Szymon Fedor. 2015. A visual programming framework for wireless sensor networks in smart home applications. In 2015 IEEE Tenth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP). 1–6. https://doi.org/10.1109/ISSNIP.2015.7106946
  • Shin (2018) Donghee Shin. 2018. Empathy and embodied experience in virtual environment: To what extent can virtual reality stimulate empathy and embodied experience? Computers in Human Behavior 78 (2018), 64 – 73. https://doi.org/10.1016/j.chb.2017.09.012
  • Slater (2017) Mel Slater. 2017. Implicit Learning Through Embodiment in Immersive Virtual Reality. Springer Singapore, Singapore, 19–33. https://doi.org/10.1007/978-981-10-5490-7_2
  • Stefanidi et al. (2019) Evropi Stefanidi, Dimitrios Arampatzis, Asterios Leonidis, and George Papagiannakis. 2019. BricklAyeR: A Platform for Building Rules for AmI Environments in AR. 417–423. https://doi.org/10.1007/978-3-030-22514-8_39
  • Thornhill-Miller and Dupont (2016) Branden Thornhill-Miller and Jean-Marc Dupont. 2016. Virtual Reality and the Enhancement of Creativity and Innovation: Under Recognized Potential Among Converging Technologies? Journal of Cognitive Education and Psychology 15, 1 (2016), 102–121. https://doi.org/10.1891/1945-8959.15.1.102