Project Beehive: A Hardware/Software Co-Designed Platform for Runtime and Architectural Research


Christos Kotselidis, Andrey Rodchenko, Colin Barrett, Andy Nisbet, John Mawer, Will Toms, James Clarkson, Cosmin Gorgovan, Amanieu d’Antras, Yaman Cakmakci, Thanos Stratikopoulos, Sebastian Werner, Jim Garside, Javier Navaridas, Antoniu Pop, John Goodacre, Mikel Luján
Advanced Processor Technologies Group, The University of Manchester
first.last@manchester.ac.uk
Abstract.

With the extreme scaling of current architectures, from wearables to exascale systems, along with new application domains such as Big Data and human-centred applications, vertical and cross-cutting research is vital. Solutions based solely in hardware or software are no longer sufficient to meet the requirements of today’s ubiquitous computing or maintain the pace of improvements seen during the past few decades.

In hardware, the well-cited end of single-core scaling has resulted in the proliferation of multi-core system architectures, forcing complex parallel programming techniques into the mainstream. To further complicate the exploitation of physical resources, systems are becoming increasingly heterogeneous, with specialized computing elements and accelerators. Programming across such a range of disparate architectures requires a new level of abstraction and adaptation by programming languages and applications.

In software, emerging complex applications from domains such as Big Data and Computer Vision run on multi-layered software stacks targeting hardware with a variety of constraints and resources. The design space of current and future computing is becoming extremely broad, making the optimization task challenging. Multi-objective optimization for power, performance, and resiliency requires experimentation platforms that facilitate quick and easy prototyping of intimately co-designed hardware and software techniques.

In this paper, we present Beehive: a hardware/software co-designed platform enabling simultaneous runtime and architectural research. Beehive utilizes various state-of-the-art software and hardware components along with novel and extensible co-designed tools and techniques. The objective of Beehive is to provide a flexible platform for rapid prototyping and experimentation across the emergent range of applications, programming languages, compilers, runtimes, and low-power heterogeneous many-core architectures in a full-system co-designed manner. We use a complex Computer Vision application as a use case to showcase the versatility and effectiveness of Beehive, accelerating it across numerous and diverse metrics and achieving up to 43x performance improvements.


1. Introduction

Traditionally, software and hardware providers have delivered significant performance improvements on a yearly basis. Unfortunately, this is no longer feasible. Predictions about "dark silicon" (Esmaeilzadeh:2011:DSE:2000064.2000108) and resiliency (Shafique:2014:ECD:2593069.2593229), especially in the forthcoming exascale era (Cappello), suggest that traditional approaches to computing problems are impeded by power constraints and process manufacturing. Furthermore, since single-threaded performance has saturated both at the hardware and the software layers, new ways of pushing the boundaries have emerged. After the introduction of multi- and many-core systems, heterogeneous computing and ad-hoc acceleration via ASICs and FPGAs (stella; zynq) are advancing into mainstream computing.

The extreme scaling of current architectures, from low-power wearables to high-performance computing, along with the diversity of programming languages and software stacks, creates a wide design space to explore in pursuit of optimal energy-efficient results. Co-designing an architectural solution at the system level (in this context, an architectural solution is a co-designed solution spanning from a running application down to the underlying hardware architecture) requires tight integration and collaboration between teams that have typically been working in isolation. The design space to be explored is vast, and a poor, even if well-intentioned, decision can propagate through the entire co-designed stack. Amending the consequences at a later date may prove extremely complex and expensive, if not impossible.

In this paper we present Beehive: a complete full-system hardware/software co-designed platform for rapid prototyping and experimentation (all hardware and software components of Beehive will be publicly available). Beehive enables co-designed optimizations from the application level down to the system and hardware levels, enabling accurate decision making for architectural and runtime optimizations. As a use case, we accelerate and optimize the complex KinectFusion (newcombe;2011;kinectfusion:-r) Computer Vision application in numerous ways through Beehive's highly integrated stack, achieving up to 43x performance improvements.

In detail, Beehive makes the following contributions:

  • Enables co-designed research and development for traditional and emerging applications and workloads: To achieve this, we tightly integrate the software and hardware layers of the stack in a unified manner while expanding Beehive’s reach to complex applications and workloads (Section 2.2). We showcase that capability by implementing a Java-based version of KinectFusion and co-designing it through Beehive’s stack.

  • Enables co-designed compiler and runtime research for multiple dynamic and non-dynamic programming languages in a unified manner: This is achieved by unifying, under the same compilers and runtimes, high-quality production and research Virtual Machines that can transparently execute multiple programming languages (Section 2.3.1).

  • Enables heterogeneous processing on a variety of platforms such as ARM (ARMv7 and Aarch64), and x86: The unified runtime layer has been extended to support multiple ISAs scaling from high-performing x86 to low-power ARM architectures (Section 2.3). We showcase that capability by evaluating standard benchmarks along with the KinectFusion use case.

  • Provides fast prototyping and experimentation on heterogeneous programming on GPGPUs, SIMD units, and FPGAs: The novel Tornado, Indigo, and MAST modules achieve transparent heterogeneous execution on GPGPUs, SIMD units, and FPGAs respectively, without sacrificing productivity (Sections 2.3.3, 2.3.2, 2.5). We showcase that capability by accelerating KinectFusion on GPGPUs, SIMD units, and FPGAs under the same infrastructure.

  • Enables co-designed architectural research on power, performance, and resiliency techniques via high-performing simulators and real hardware: Along with a plethora of real hardware, Beehive integrates a number of high-performing simulators in a unified framework (Section 2.6). We showcase this capability by providing a novel hardware/software co-designed optimization for KinectFusion.

  • Supports dynamic binary optimization techniques via instrumentation and optimization at the system and chip level: Beehive extends its research capabilities to novel micro-architectures by providing dynamic binary instrumentation and optimization techniques for all supported hardware architectures (Section 2.4).

The paper is organized as follows: Section 2 explains the architecture of Beehive along with its individual components. Section 3 presents the Computer Vision application that forms the use case in this paper. Section 4 presents the various co-designed optimizations applied to the selected application along with their corresponding performance evaluations. Finally, Section 5 presents the related work, and Section 6 presents the concluding remarks and the future vision of Beehive.

2. Beehive Architecture

2.1. Overview

Figure 1. Beehive architecture overview.

Beehive, as depicted in Figure 1, follows a multi-layered approach of highly co-designed components spanning from the application down to the hardware level. The design philosophy of Beehive revolves around five pillars:

  1. Rapid prototyping for developing full-stack optimizations efficiently by using high-level programming languages.

  2. Diversity for tackling multiple application domains, programming languages, and runtime systems in a unified manner.

  3. Accuracy of obtained results by integrating and augmenting state-of-the-art industrial-strength components.

  4. Maintainability of the platform keeping it on par with the state-of-the-art in the long term.

  5. Scalability of the platform to complex systems and architectures in a seamless manner.

Beehive targets a variety of workloads ranging from traditional benchmarks to emerging applications from domains such as Computer Vision and Big Data. Furthermore, as explained later in Section 2.2, Beehive allows multiple implementations of complex applications in a variety of programming languages in order to enable comparative research amongst them. Beehive supports both managed and un-managed languages, as explained in Subsection 2.3. Finally, applications can execute either directly on hardware, indirectly on hardware using a dynamic binary optimization layer, or inside Beehive's simulator stack.

The following subsections explain in detail each layer of Beehive along with the supported applications, programming languages, and hardware platforms.

2.2. Applications

Beehive targets a variety of applications in order to enable co-designed optimizations in numerous domains. Whilst compiler and micro-architectural research traditionally uses benchmarks such as SpecCPU (speccpu), SpecJVM (specjvm), DaCapo (DaCapo:paper), and PARSEC (bienia11benchmarking), Beehive also considers complex emerging application areas. The two primary domains targeted by Beehive are Computer Vision applications and algorithms, such as KinectFusion (newcombe;2011;kinectfusion:-r) and other SLAM (Simultaneous Localization and Mapping) algorithms, along with Big Data software stacks such as Spark (apache-spark), Flink (apache-flink), and Hadoop (apache-hadoop). To showcase Beehive, we selected an implementation of KinectFusion as the main vehicle of experimentation.

Recent advances in real-time 3D scene understanding can radically change the way robots interact with and manipulate the world. In recent years, a proliferation of applications and algorithms have targeted real-time 3D scene reconstruction both in desktop and mobile environments (DysonLab; ProjectTango; newcombe;2011;kinectfusion:-r). To assess both the accuracy and performance of the proposed optimizations, we use SLAMBench (2015PAMELASLAMBench), a benchmarking suite that provides a KinectFusion implementation. SLAMBench harnesses the ICL-NUIM dataset (2014Handa) of synthetic RGB-D sequences with trajectory and scene ground truth for reliable accuracy comparison of different implementations and algorithms. SLAMBench currently includes implementations in C++, CUDA, OpenCL, and OpenMP, allowing a broad range of languages, platforms, and techniques to be investigated. In Section 3, SLAMBench is explained and decomposed into its key kernels.

Figure 2. DaCapo-9.12-bach benchmarks (higher is better) normalized to Hotspot-C2-Original.
Figure 3. SpecJVM2008 benchmarks (higher is better) normalized to OpenJDK-Zero-IcedTea6_1.13.11.

2.3. Runtime Layer

Some of the key features of Beehive are found in its runtime layer, which provides capabilities beyond simply running native applications. One of the challenges when designing such tightly co-designed systems is application and programming-language support. Supporting numerous runtimes with various back-ends and compilers, while seamlessly integrating them with the lower layers of the computing stack, is a time-consuming task that impedes the maintainability of the whole platform. These issues in turn manifest as slow adoption of state-of-the-art software and hardware components and applications.

In order to overcome these challenges, we have taken the design decision to build the runtime layer around two components: the Java Virtual Machine (JVM) and native C/C++ applications. While Beehive can execute native C/C++ applications (regardless of the compiler used), it has been designed to target languages that can run, and be optimized, on top of the JVM. The advent of the Graal compiler (duboscq;2013;graal-ir:-an-ex) along with the Truffle AST interpreter (Wurthinger:2013) enables the execution of multiple existing (for example, Ruby, JavaScript, R, and LLVM-based languages are currently supported by Truffle) and novel, dynamic and non-dynamic programming languages and DSLs on top of the JVM. By building the Beehive platform around Truffle, Graal, and the JVM, we achieve high-performing execution of a variety of programming languages in a unified manner. Furthermore, the amount of maintenance required is contained to two compilers and one runtime system. In addition, any changes from the open-sourced Graal and Truffle projects can be down-streamed to Beehive, keeping it synchronized with the latest software components.

Regarding the runtime systems of Graal and Truffle, two design alternatives have been deployed. The first route is the vanilla implementations running on top of OpenJDK. The benefit of this approach is that Beehive can be utilized by industrial-strength, high-performing systems that run on top of OpenJDK. This, however, has a number of drawbacks. Components of the runtime layer such as object layouts, Garbage Collection (GC) algorithms, monitor schemes, etc., are difficult to research due to the lack of modularity in OpenJDK. To that end, we decided to add an additional runtime layer for Graal and Truffle: the Maxine Research Virtual Machine (Wimmer).

MaxineVM, a meta-circular Java-in-Java VM developed by Oracle Labs, has been adopted and augmented for use in Beehive (kotselidis;2017;heterogeneous-m). Since its last release from Oracle, it has been enhanced by the Beehive team in both performance and functionality (Section 2.3.1). The Graal compiler port on top of MaxineVM has been stabilized and its performance has been improved, making MaxineVM the highest-performing research VM (Section 2.3.1). In addition, as depicted in Figure 1, both MaxineVM and OpenJDK use the same optimizing compiler accompanied by the Truffle AST interpreter, enabling Beehive to extend its research capabilities from industrial-strength to high-quality research projects.

The multi-language capabilities of Beehive have been further augmented by novel software components that enable heterogeneous execution of applications on numerous hardware devices: Indigo, Tornado, and MAST (mast; kotselidis;2017;heterogeneous-m). While Indigo enables the exploitation of SIMD units, Tornado targets GPGPUs and FPGAs through OpenCL code emission. Furthermore, MAST provides a clean API to access FPGA modules in a concurrent and thread-safe manner. The following subsections explain MaxineVM, Indigo, and Tornado in detail, while MAST is explained in Section 2.5.

2.3.1. MaxineVM

The latest release of MaxineVM from Oracle had the following three compilers:

  1. T1X: A fast template-based interpreter (stable).

  2. C1X: An optimizing SSA-based JIT compiler (stable).

  3. Graal: An aggressively optimizing SSA-based JIT compiler scheduled to be integrated into OpenJDK with Java 9 (semi-stable).

Furthermore, MaxineVM was tied to the x86_64 architecture. In the context of Beehive, the following enhancements have been made to MaxineVM:

  1. T1X: Added profiling instrumentation enabling more aggressive profile-guided optimizations.

  2. T1X: Compiler ports to ARMv7 and AArch64 enabling experimentation on low-power 32-bit and 64-bit architectures.

  3. C1X: Compiler port to ARMv7 enabling experimentation on low-power ARM 32bit architectures.

  4. Graal: Stability and performance improvements.

  5. Maxine: Complete ARMv7 support and ongoing AArch64 support, along with stability and performance enhancements.

Figures 2 and 3 illustrate the performance of MaxineVM on x86 and ARMv7, on the DaCapo-9.12-bach (DaCapo:paper) and SpecJVM2008 (specjvm) benchmarks respectively.

As illustrated in Figure 2 (Intel(R) Core(TM) i7-4770@3.4GHz, 16GB RAM, Ubuntu 3.13.0-48-generic, 16 iterations, 12GB heap), since Oracle's last release (Maxine-Graal-rev.20290 Original), performance has increased by 64% (Maxine-Graal-rev.20381 Current), while Maxine currently reaches half the performance of the industrial-strength OpenJDK with the C2 and Graal (rev. 21075) compilers. The target is to bring the JIT performance of both VMs on par by enabling more aggressive Graal optimizations in Maxine, such as escape analysis (stadler;2014;partial-escape-) and other compiler intrinsics. Unfortunately, we could not compare against JikesRVM (Alpern:2000:JVM:1011388.1011400) since it cannot run the DaCapo-9.12-bach benchmarks on x86_64.

Regarding ARMv7, as depicted in Figure 3 (Samsung Chromebook, Exynos 5 Dual@1.7GHz, 2GB RAM, Ubuntu 3.8.11, 2GB heap), the performance of MaxineVM-ARMv7 falls between that of OpenJDK-Zero and OpenJDK-1.7.0 (Client, Server). MaxineVM outperforms OpenJDK-Zero by 12x on average across SpecJVM2008 (Serial was excluded from the evaluation), while it runs at around 0.5x and 0.3x the performance of the OpenJDK-1.7.0 client and server compilers, respectively. As on x86, many optimizations, both in the compiler and the code generator, will be implemented and/or enabled in order to match the performance of the industrial-strength OpenJDK.

Regarding the memory manager (GC), various options are being explored, ranging from enhancing MaxineVM's current GC algorithms to porting existing state-of-the-art memory management components. Currently, MaxineVM supports semi-space and generational schemes.

2.3.2. Indigo

Indigo, a novel component of Beehive, is an extension plugin for Graal that provides efficient execution of the short vector types commonly found in Computer Vision applications, along with support for SIMD execution. While Indigo was initially designed to enhance the performance of Computer Vision applications, it can easily be expanded to provide generic vectorization support in Graal, a feature currently missing from the public distribution. Figure 4 outlines how Indigo operates with the Graal compiler.

Figure 4. Indigo’s interaction with the Graal compiler.

As depicted in Figure 4, Indigo uses Graal's invocation plugin mechanism, which enables the custom addition of a node in Graal's Intermediate Representation (IR). This, in turn, can be exploited by Indigo to redirect the compilation route from Graal to Indigo and use its own compilation stack to compile and optimize for SIMD execution. Within Graal, the IR is maintained as a structured graph with nodes representing actions or values, while edges represent their dependencies. The graph is initially generated by parsing the bytecode from a class file.

The objective of vectorization is to reduce the distance between vector operations in the IR, enabling further optimizations through virtualization (i.e., escape analysis and scalar replacement (stadler;2014;partial-escape-)). With virtualization, we can keep temporary vectors entirely in the registers of the target architecture. The addresses of the vectors are used for reading and writing, enabling us to break free from the primitive Java types and, more importantly, from the use of Java arrays. However, since this is not an inherently safe use of the Java semantics, we made the following assumptions:

  • Hardware supports 128-bit vector operations, true for ARM NEON and Intel SSE implementations.

  • The class contains four single-precision floating point numbers suitable for vector operations of SLAM applications.

  • Unused elements of a vector are zero.

  • The elements of a vector are contiguous in memory.

  • Once constructed, a vector is immutable.

The aforementioned assumptions apply to the library provided by Indigo and in turn allow some of the restrictions in Java to be eliminated. This enables the IR to be extended and optimized more aggressively since the semantics are now within the vector abstraction and not within the general purpose language.
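To make the assumptions above concrete, the vector abstraction can be sketched as a small immutable Java class. This is a minimal illustration using our own names (Float4, lane, etc.), not Indigo's actual library; Indigo would lower the lane-wise operations below to single 128-bit SIMD instructions rather than scalar code.

```java
// Sketch of a vector matching Indigo's assumptions (names are ours, not Indigo's):
// four contiguous single-precision lanes, immutable once constructed,
// sized to fit one 128-bit SIMD register (ARM NEON / Intel SSE).
final class Float4 {
    private final float x, y, z, w;   // contiguous lanes; never mutated after construction

    Float4(float x, float y, float z, float w) {
        this.x = x; this.y = y; this.z = z; this.w = w;
    }

    // Lane-wise add: the pattern a vectorizing compiler can map to one SIMD add.
    Float4 add(Float4 o) {
        return new Float4(x + o.x, y + o.y, z + o.z, w + o.w);
    }

    // Dot product, common in SLAM kernels (e.g. normal/ray computations).
    float dot(Float4 o) {
        return x * o.x + y * o.y + z * o.z + w * o.w;
    }

    float lane(int i) {
        switch (i) {
            case 0: return x; case 1: return y;
            case 2: return z; case 3: return w;
            default: throw new IllegalArgumentException("lane " + i);
        }
    }
}
```

Because instances are immutable and their lanes contiguous, escape analysis can replace the object allocation entirely with register values, which is exactly what the virtualization step described above relies on.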

Invocation plugins allow the replacement of a method invocation with a sub-graph created during the graph-building phase in Graal. We used a single node plugin that contains its own domain-specific compiler stack. The major benefit of this approach is runtime independence from Graal: Indigo can be downloaded and used as a standalone library, and if the JVM uses Graal on top of the JVM Compiler Interface (JVMCI) (JEP243), SIMD instructions can be emitted. Indigo's compiler stack contains a basic graph builder, optimizer, register allocator, and code generator, with a scope limited to its target domain: Computer Vision applications.

Indigo nodes are generated either during the graph building phase of the compilation or indirectly during inlining. Once a graph has been constructed, it is transformed during the optimization phases by exploiting canonicalization and simplification to merge nodes. This allows us to maximize the number of operations in the node and eliminate new instance nodes (allocation of new objects) from the graph, leaving the data in registers. A simplification phase traverses the operand edges of the Indigo node to detect other Indigo nodes and merges the internal operation graphs together.

When Indigo nodes are lowered to the low-level IR (LIR) nodes used by Graal, they must claim virtual registers from Graal. At this point we lower the operation to a generic SIMD instruction to be scheduled while profiling the register requirements. In order to maintain the vanilla implementation of Graal, we indirectly use its register allocator to provide general-purpose and vector registers by claiming values to satisfy the requirements of the compiled method. Later, these are converted into physical registers during the back-end phases. The use of profiling enables us to offload the allocation algorithms to Graal, while ensuring that no vector registers are spilled to the stack. This technique prevents the JVM from entering unrecoverable states while being spatially more efficient.

Thanks to the modularity of Graal, and access to the compiler through the JVMCI, it is possible to insert novel nodes into the compiler at runtime. With Indigo we show that it is possible to add a domain specific compilation plugin to augment the Graal compiler. This allows us to bypass all Graal internals and emit machine code exploiting SIMD instructions that are unsupported in the publicly available Graal. While this approach targets idiomatic SIMD for Computer Vision, there is no technical reason why it cannot be extended to insert other domain specific knowledge into Java.

Figure 5. Indigo’s performance against Apache CML on common vector and matrix operations.

Figure 5 shows Indigo's relative performance against the Apache Commons Mathematics Library (CML) (Apache:2016) on 13 vector and matrix operations commonly found in Computer Vision applications. As depicted in Figure 5, Indigo outperforms Apache CML in both vector and matrix operations. As expected, the largest gains are observed in matrix operations, with matrix-vector multiplication exhibiting a 66.75x speedup. The observed performance improvements derive from the use of SIMD execution along with the compiler optimizations provided by Indigo (null-check elimination, scalar replacement, etc.).

2.3.3. Tornado

Tornado, a novel component of Beehive that originated from JACC (clarkson2017boosting), is a framework designed to improve the productivity of developers targeting heterogeneous hardware. By exploiting the available heterogeneous resources, developers have the potential to improve the performance and energy-efficiency of their applications. The key difference between Tornado and existing programming languages and frameworks is its dynamism: developers do not need to make a priori decisions about their hardware targets. The Tornado runtime system achieves transparent computation offloading with support for automatic device management, data movement, and code generation. This is possible by exploiting the design of VM-based languages: Tornado simply augments the underlying VM with support for OpenCL by using the JVMCI (Java Virtual Machine Compiler Interface), similarly to Indigo. The JVMCI allows efficient access to low-level information inside the JVM, such as a method's bytecode and profiling information. Using this information, Tornado is able to JIT-compile Java bytecode to execute on OpenCL-compatible devices.

As depicted in Figure 6, the Tornado API provides developers with a task-based programming model. In Tornado, a task can be thought of as analogous to a single OpenCL kernel execution. This means that a task must encapsulate the code it needs to execute, the data it should operate on, and some meta-data. The meta-data can contain information such as the device the task should execute on or profiling information. The mapping between tasks and devices is done at task-level granularity, meaning each task is capable of being executed on a different piece of hardware. These mappings can be provided either by the developer or by the Tornado runtime; they are dynamic and can change at any time.

Instead of focusing on scheduling individual tasks, Tornado allows developers to combine multiple tasks to form a larger schedulable unit of work (called a task-graph). This approach has a number of benefits: firstly, it provides a clean separation between the code which coordinates task execution and the code which performs the actual computation; and secondly, it allows the Tornado runtime system to exploit a wider range of runtime optimizations. For instance, the task-graph provides the runtime system with enough information to determine the data dependencies between tasks. Using this knowledge, the runtime system is able to exploit any available task parallelism by overlapping task execution with data movement. It also provides the runtime system with the ability to eliminate any unnecessary data transfers that would occur because of read-after-write data dependencies between tasks.

To increase developer productivity, Tornado is designed to make offloading computation as transparent as possible. This is achieved via its runtime system, which is able to automatically schedule data transfers between devices and handle the asynchronous execution of tasks. Moreover, the JIT compiler provides support for user-guided parallelization. The result is that developers are able to rapidly develop portable heterogeneous applications which can exploit any OpenCL-compatible device in the system.
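The task and task-graph abstractions described above can be sketched in plain Java. All names here (Task, TaskGraph, add, execute, pipeline) are illustrative assumptions, not Tornado's actual API; the sequential execute() is a stand-in for the runtime scheduler, which would instead infer dependencies from shared data and overlap independent task execution with data movement.

```java
import java.util.ArrayList;
import java.util.List;

class TaskGraphSketch {
    /** A task couples the code to execute with the data it operates on. */
    interface Task { void run(); }

    /** Several tasks grouped into one schedulable unit of work. */
    static final class TaskGraph {
        private final List<Task> tasks = new ArrayList<>();
        TaskGraph add(Task t) { tasks.add(t); return this; }
        void execute() { tasks.forEach(Task::run); }  // stand-in for the runtime scheduler
    }

    /** Two tasks with a read-after-write dependency on the intermediate array b. */
    static float[] pipeline(float[] a) {
        float[] b = new float[a.length], c = new float[a.length];
        new TaskGraph()
            .add(() -> { for (int i = 0; i < a.length; i++) b[i] = a[i] * 2; })
            .add(() -> { for (int i = 0; i < b.length; i++) c[i] = b[i] + 1; })
            .execute();
        return c;
    }
}
```

The point of the sketch is the separation of concerns: pipeline() only coordinates, while the lambdas carry the computation, so a runtime could remap each task to a different device without touching the kernel code.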

Figure 6. Tornado outline.

2.4. Binary Instrumentation Layer

Beehive integrates a number of binary instrumentation tools to enable research and rapid prototyping of novel micro-architectures and ISA extensions. Along with Intel's well-established Pin tool (PIN), Beehive integrates the newly introduced MAMBO (Gorgovan:2016:MLD:2899032.2896451) and MAMBO-X64 (D'antras:2016:OIB:2899032.2866573) tools for the ARMv7 and AArch64 architectures.

2.4.1. MAMBO

MAMBO is a low-overhead dynamic binary instrumentation and modification tool for the ARM architecture, currently supporting ARMv7 and the AArch32 execution state of ARMv8. In the context of Beehive, the performance of MAMBO has been further improved since its first release. The introduced optimizations include:

  • A novel scheme to enable hardware return address prediction for dynamic binary translation.

  • A novel software indirect branch prediction scheme for polymorphic indirect branches.

  • A number of micro-architectural specific optimizations such as usage of huge pages for internal data.

While the initial version of MAMBO achieves a geometric mean overhead of 28% on a Cortex-A9 (a dual-issue out-of-order superscalar processor with 8 to 11 pipeline stages) and of 34% on a Cortex-A15 (a triple-issue out-of-order superscalar processor with 15 to 24 pipeline stages), the introduced optimizations reduce the overhead on the two systems to 15% and 21% respectively.

2.4.2. MAMBO-X64

The ARM AArch64 architecture is a 64-bit execution mode with a new instruction set that is not binary compatible with the ARMv7 32-bit execution mode. Due to the need to support the large number of existing 32-bit ARM applications, current implementations of AArch64 processors include hardware support for ARMv7. However, this support comes at a cost in hardware complexity, power usage, and verification time.

MAMBO-X64 is a dynamic binary translator which executes 32-bit ARM binaries (both single-threaded and multi-threaded) using the AArch64 instruction set. The integration of MAMBO-X64 into Beehive creates a path for experimentation for future processors to drop hardware support for the legacy 32-bit instruction set while retaining the ability to run ARMv7 applications.

In the context of Beehive, the performance of MAMBO-X64 has been further improved by employing a number of novel optimizations, such as mapping ARMv7 floating-point registers to AArch64 registers dynamically, generating traces that harness hardware return address prediction, and efficiently handling operating system signals. After applying these optimizations, we measured on SPEC CPU2006 (speccpu) a low geometric mean performance overhead of 0.2%, 3.3%, and 8.3% on X-Gene, Cortex-A57, and Cortex-A53 processors, respectively. The performance of MAMBO-X64 also scales to multi-threaded applications, with an overhead on the PARSEC (bienia11benchmarking) multi-threaded benchmark suite of only 2.1% with 1, 2, and 4 threads, and 4.9% with 8 threads.

2.5. Hardware/FPGA Layer

As depicted in Figure 1, Project Beehive targets a variety of hardware platforms and therefore significant effort is being placed in providing the appropriate support for the compilers and runtimes of choice. Besides targeting conventional CPU/GPU systems, it is also possible to target FPGA systems such as the Xilinx Zynq ARM/FPGA System on Chip (SoC).

In order to efficiently program FPGAs from high-level programming languages, we developed MAST: a Modular Acceleration and Simulation Technology. MAST consists of a hardware/software library and tools allowing the rapid development of systems using ARM-based FPGAs. From the hardware perspective, it consists of a standardized interface which allows IP blocks to be identified and locked for use by processes running on the ARM processor. All IP blocks feature an AXI slave port, used for configuration and low-speed communication, and optionally an AXI master port providing high-speed access to the system memory of the ARM processor, typically via the ACP port to provide cache coherency. Currently, hardware design is carried out using Bluespec System Verilog (Arvind:2003:BLH:823453.823860), with interface modules conforming to the standardized hardware interface. The software library, which resides entirely in user space, provides a hardware manager that can be used to discover IP on the programmable logic and allocate it to a specific process thread. The software library also provides a simple interface between the virtual-memory world of the processor and the physical memory required by the hardware IP blocks, where either the library or the host application can perform memory allocation.
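The discover-and-lock pattern of the hardware manager can be sketched as follows. The class and method names (HardwareManagerSketch, discover, acquire) are our own assumptions for illustration, not MAST's actual user-space API; the point is that each IP block, once discovered, is granted exclusively to a single thread until released.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class HardwareManagerSketch {
    /** An IP block on the programmable logic, usable by at most one thread at a time. */
    static final class IpBlock {
        final String id;
        private Thread owner;                 // thread currently holding the block
        IpBlock(String id) { this.id = id; }
        synchronized boolean lock() {
            if (owner != null) return false;  // already claimed by another thread
            owner = Thread.currentThread();
            return true;
        }
        synchronized void release() { owner = null; }
    }

    private final Map<String, IpBlock> blocks = new ConcurrentHashMap<>();

    /** Stand-in for scanning the fabric for blocks exposing the standardized interface. */
    void discover(String... ids) {
        for (String id : ids) blocks.put(id, new IpBlock(id));
    }

    /** Claim a block for the calling thread; null if unknown or already in use. */
    IpBlock acquire(String id) {
        IpBlock b = blocks.get(id);
        return (b != null && b.lock()) ? b : null;
    }
}
```

A caller would discover the available blocks once, then acquire and release them around each accelerated computation, mirroring the thread-safe, concurrent access MAST provides.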

2.6. Simulation Layer

Besides running directly on real hardware, Beehive offers the opportunity to conduct micro-architectural research via its advanced simulation infrastructure. The two simulators of choice, with diverse characteristics, ported to the Beehive platform are Gem5 (gem5, ) and ZSim (ZSim, ). While ZSim offers fast and highly accurate simulation of x86 (∼10 MIPS in our experiments), Gem5 provides a slower yet more detailed full-system simulation framework for numerous architectures.

2.6.1. Gem5

The Gem5 full-system simulator has been adopted and augmented in the following ways:

  • Integration with other architectural simulators: A new interface layer has been developed within the Gem5 full-system simulator (Binkert:2011:GS:2024716.2024718, ) to facilitate easy integration with a range of architectural simulators as depicted in Figure 7.

    Figure 7. Beehive’s Gem5 stack.

    The statistics package has been augmented to allow statistics to be assigned to groups, specified at run-time and manipulated (output and reset) independently, without affecting the total values of the statistics or requiring updates to the code base. This allows new architectural simulators to be invoked from within the Gem5 simulator by using standard C++ template code. Current simulators integrated into the Gem5 framework include:

    1) McPAT (5375438, ) and Hotspot (1650228, ): The power and temperature modelers provided by these tools are combined to provide accurate temperature-based leakage models. Power samples may be triggered from within the Gem5 simulator, at intervals between 10ns and 10us (allowing transient traces to be generated for benchmarks), and from within the simulated OS (allowing accurate power and temperature figures to be used within user-space programs). There is around a 10% simulation-time overhead for temperature and power modelling with 10us samples.

    2) Voltspot (6853199, ): In order to measure voltage noise events caused by power-gating or switching patterns in multicore SoCs over realistic workloads, the Voltspot simulator has been incorporated into the framework. The additional statistics generated allow nanosecond timing of events to be recorded while using samples of coarser granularity.

    3) NVSim (dong2014nvsim, ): The non-volatile memory simulator NVSim has been incorporated into the simulation infrastructure. NVSim can be invoked by McPAT (alongside the conventional SRAM modeling tool Cacti (li2011cacti, )), allowing accurate delay, power, and temperature modeling of non-volatile memory anywhere in the memory hierarchy.

  • Machine Learning and Data Analytics techniques: The interface layer has also been used to incorporate machine-learning and data-analytics techniques within the simulation framework. Machine-learning techniques are used to analyze statistical patterns in the data, aiding the creation of hardware predictors for power management, prefetching, branch prediction, etc. The statistics package allows features to be specified at run-time. A feature is defined as a statistic over a given period (e.g., the branch mispredict rate over 1us, or the L2 cache miss rate over 10ms). Features can be accessed periodically or triggered by events within the simulator, and the statistics package guarantees to return each feature over its specified period (within an error range which is also set at run-time). The FEAST toolkit (brown2012conditional, ) has been incorporated into the framework (Figure 7) to allow for offline feature selection. Packages for online K-nearest-neighbour (KNN) and Support Vector Machine regression have been incorporated to allow for online prediction once the features have been chosen. Interaction between the simulator and the predictors is controlled by the statistics package, again allowing predictions to be triggered within the Gem5 simulator code or from within the simulated OS.

  • Resiliency and Fault-Injection: A critical aspect of any computer system is its dependability evaluation (Li:2008:UPH:1353535.1346315, ; 7314163, ; 1311888, ). The accurate identification of vulnerabilities helps computer architects plan for low-cost, energy-efficient resiliency mechanisms. Conversely, inaccurate dependability assessment often results in over-designed microprocessors, negatively impacting time-to-market and product costs. To aid dependability studies, we developed a fault injection framework that adheres to the following principles: 1) Flexibility: easy to set up, define, and perform fault injection experiments; 2) Reproducibility: enable reproducible experiments; 3) Generality: support a wide set of ISAs in a uniform way, enabling comparative studies; and 4) Scalability: easily deployed to multi-core designs.

    Figure 8. Beehive’s fault injection tool.

    Figure 8 depicts the architecture of the fault injection tool. The framework is built on top of Gem5 and operates as follows: a user-defined test scenario is translated into a set of fault injection arguments using a simulator-specific API. The injection library implements all the necessary simulation calls: (i) fault_model(): sets up a transient, intermittent, or permanent fault model (Biswas:2005:CAV:1080695.1070014, ; 1225959, ; 5432157, ). Transient faults are modeled by flipping the value of a randomly selected bit in a randomly selected time window within the simulation. Intermittent faults are modeled by setting the state of a storage element to one (stuck-at-1) or zero (stuck-at-0), in a randomly selected time window, for a random period. Permanent faults set the state of a storage element persistently to one or to zero. Finally, multi-bit fault injections, combining the aforementioned models, are also supported. (ii) apply(): injects the faults into a user-defined location (e.g., L1 cache, L2 cache, etc.); and (iii) monitor(): logs and clusters the fault injection output. Finally, the injection controller, the kernel of the framework, communicates with the injection library and orchestrates the actual fault injection based on the user-defined arguments.
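At the bit level, the three fault models described above reduce to simple bitwise operations on a storage word. The following sketch is illustrative only; the names are not the injection library's actual API.

```java
// Illustrative bit-level view of the three fault models, applied to a
// 64-bit storage word. Not the actual injection library code.
public class FaultModel {
    // Transient fault: flip the value of one selected bit.
    public static long transientFlip(long word, int bit) {
        return word ^ (1L << bit);
    }

    // Intermittent or permanent stuck-at-1: force the bit to one.
    public static long stuckAt1(long word, int bit) {
        return word | (1L << bit);
    }

    // Intermittent or permanent stuck-at-0: force the bit to zero.
    public static long stuckAt0(long word, int bit) {
        return word & ~(1L << bit);
    }
}
```

For an intermittent fault the stuck-at operation is re-applied on every access within the chosen time window; for a permanent fault it persists for the whole simulation.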
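The online prediction step described earlier for the machine-learning integration can be sketched as a small K-nearest-neighbour regressor over run-time features (e.g., branch mispredict rate, L2 miss rate). This is a minimal illustration under assumed interfaces, not the package integrated into Gem5.

```java
import java.util.Arrays;

// Minimal KNN regressor: predicts a target (e.g. power) by averaging the
// targets of the k training feature vectors nearest to the query.
public class KnnPredictor {
    private final double[][] features; // training feature vectors
    private final double[] targets;    // observed targets per vector
    private final int k;

    public KnnPredictor(double[][] features, double[] targets, int k) {
        this.features = features;
        this.targets = targets;
        this.k = k;
    }

    private static double dist(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            s += d * d;
        }
        return Math.sqrt(s);
    }

    public double predict(double[] query) {
        // Sort training indices by distance to the query, then average
        // the targets of the k nearest neighbours.
        Integer[] idx = new Integer[features.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, (a, b) ->
                Double.compare(dist(features[a], query), dist(features[b], query)));
        double sum = 0.0;
        for (int i = 0; i < k; i++) sum += targets[idx[i]];
        return sum / k;
    }
}
```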

2.6.2. ZSim

The ZSim simulator, a user-level x86-64 simulator with an out-of-order core model of the Westmere (Nehalem) micro-architecture, has been augmented to run managed workloads on MaxineVM, resulting in the MaxSim simulation platform (rodchenko2017maxsim, ). Alternative options, such as the Sniper (Sniper, ) simulator running with JikesRVM (JikesOnSniper, ) or the full-system Gem5 simulator, were considered but abandoned due to a number of limitations: Sniper can only run in 32-bit mode, while Gem5 has a relatively low simulation speed. Finally, in order to perform energy and power estimations, we integrated the McPAT (McPAT, ) tool into ZSim following the methodology proposed by the Sniper simulator (Sniper-McPAT, ). This methodology necessitated the implementation of a number of extra micro-architectural events in ZSim, such as the number of predicted branches and floating-point micro-operations.

3. SLAM Applications

3.1. KinectFusion

To showcase the capabilities of the Beehive platform, we focused on an emerging application domain which is becoming significant both in the desktop and mobile spaces: real-time 3D scene understanding in Computer Vision. In particular, we investigate SLAMBench, a complex Simultaneous Localization and Mapping (SLAM) application which implements the KinectFusion (KFusion) algorithm. SLAM applications are challenging due to the amount of computation needed per frame and the programming complexity of achieving high-performing implementations. SLAMBench allows the reconstruction of a three-dimensional representation from a stream of depth images produced by an RGB-D camera (Figure 9), such as the Microsoft Kinect. Typically, the slower the frames are processed, the harder it is to build an accurate model of the scene.

Figure 9. RGB-D camera combines RGB with Depth information (top left and middle). The tracking (left) results in the 3D reconstruction of the scene (right).

Each of the depth images is used as input to the six-stage processing pipeline shown in Figure 10:

  • Acquisition obtains the next RGB-D frame; either from a camera or from a file.

  • Pre-processing cleans up the incoming data using a bilateral filter and standardizes the units used for measurement.

  • Tracking estimates the new pose of the camera; it builds a point cloud from the current data frame and matches it against a reference point cloud, produced from the raycasting step, using an iterative closest point (ICP) algorithm.

  • Integrate fuses the current frame into the internal model, if a new pose has been estimated.

  • Raycast constructs a new reference point cloud from the internal representation of the scene using raycasting.

  • Rendering uses the same raycasting technique to visualize the 3D scene.

Figure 10. KinectFusion Pipeline.

It should be noted that the pipeline has a feedback loop. Each of the pipeline stages is composed of a number of different kernels. In the original KinectFusion implementation, a kernel represents a separate region of code which is executed on the GPU. In a typical pipeline execution, KinectFusion will execute between 18 and 54 kernels (best- and worst-case scenarios). The variation depends on the performance of the ICP algorithm: if it is able to estimate the new camera pose quickly, fewer kernels are executed. This means that to achieve real-time performance of 30 frames per second, the application needs to sustain the execution of between 540 and 1620 kernels every second.

3.2. Programmability Vs. Performance

SLAMBench offers baseline and high-performing implementations of KinectFusion in C++, OpenMP, CUDA, and OpenCL. In order to achieve the QoS targets of Computer Vision (typically over 30 FPS), KinectFusion has to be heavily parallelized on GPGPUs, and therefore the CUDA and OpenCL implementations are the ones meeting the required targets. Developing in CUDA or OpenCL, however, comes with a number of drawbacks. The first is code complexity and reduced productivity, while the second is portability, since applications have to be recompiled and tuned for each target hardware platform.

To tackle the aforementioned problems and to showcase the capabilities of Beehive, we decided to experiment with Computer Vision applications in Java; a language that, until now, has not been considered for such high-performing and demanding applications. Implementing SLAMBench, and Computer Vision applications in general, in Java provides a trade-off between programmability effort and performance.

While Java enables rapid prototyping, in contrast to writing OpenCL or CUDA, vanilla un-optimized implementations cannot meet the QoS requirements. We use the Java programming language as a challenge in building and optimizing Computer Vision applications that achieve real-time 3D space reconstruction. After developing and validating a serial implementation of SLAMBench, we performed a performance analysis and identified bottlenecks. Then, we utilized Beehive to apply a number of co-designed acceleration and optimization techniques to the various stages of SLAMBench. These techniques span from custom FPGA acceleration of certain kernels to full-application acceleration through co-designed object compaction and GPGPU off-loading.

4. Evaluation

The following subsections describe the acceleration and optimization techniques applied to SLAMBench via the Beehive platform, along with the experimental results. The hardware and software configurations for each optimization are presented in Table 1.

Optimization 1: GPU Acceleration 2: FPGA Acceleration 3: HW/SW Co-Designed Object Compaction
Beehive Module OpenJDK, Graal, Tornado OpenJDK, Maxine, MAST Maxine, ZSim, McPAT
Hardware
CPU Intel Xeon E5-2620 @ 2GHz Xilinx Zynq 706 board, ARMv7 Cortex A9 @ 667MHz Simulated: x86-64 Nehalem @ 2.64GHz
Cores 12 (24 Threads) 2 4
L1 32KB per core, 8-way 32KB per core 32KB, 8-way, LRU, 4 cycles
L2 256KB per core, 8-way 512KB per core 256KB, 8-way, LRU, 6 cycles
L3 15MB, 20-way - 8MB, 16-way, hashed, 30 cycles
RAM 32GB 1GB 3GB, DDR3-1066, 47 cycles
GPU NVIDIA Tesla K20m @ 0.705GHz, OpenCL 1.2 - -
Extensions - MAST FPGA AGU Extensions
Software
JVM OpenJDK, Graal Maxine ARMv7, OpenJDK_1.7.0_40 Maxine x86
OS CentOS 6.8 (Kernel 2.6.32) Linux 3.12.0-xilinx-dirty Ubuntu 14 LTS 3.13.0-85
Table 1. Beehive, Hardware, and Software experimental configurations.

4.1. GPU Acceleration

GPU acceleration has been applied to SLAMBench through Tornado (Section 2.3.3). All KinectFusion kernels but one have been dynamically compiled and offloaded for GPGPU execution through OpenCL code emission (Acquisition cannot be accelerated because its input is obtained serially from a camera or a file). Figures 11 and 12 illustrate the performance and speedup of the accelerated KinectFusion version respectively.

Figure 11. FPS achieved of Tornado versus baseline Java and C++ implementations.
Figure 12. Tornado Speedup versus serial Java and C++ implementations per KFusion stage.

As depicted in Figure 11, the original validated version of KinectFusion cannot meet the QoS target of real-time Computer Vision applications (0.71 FPS on average). Both the serial Java and C++ versions perform under 3 FPS, with the C++ version being 3.3x faster than Java. By accelerating KinectFusion through GPGPU execution, we achieve a constant rate of over 30 FPS (31.07 FPS) across all 802 frames of the ICL-NUIM dataset (2014Handa, ) (Room 2 configuration). To reach 30 FPS, individual kernels have been accelerated by up to 861.26x, with an average of 43.37x across the whole application, as depicted in Figure 12. By utilizing Beehive and its GPU acceleration infrastructure, we manage to accelerate a simple, un-optimized serial Java version of the KinectFusion algorithm to meet its QoS requirements in a manner transparent to the developer.

4.2. FPGA Acceleration

FPGA acceleration has been applied to SLAMBench through the MAST acceleration functionality of Beehive (Section 2.5). For our initial investigation into FPGA acceleration, we selected the pre-processing stage, which contains two computational kernels that: i) scale the depth camera image from millimetres to metres, and ii) apply a bilateral filter to produce a filtered, scaled image. The filter is applied to the scaled image in order to reduce the effects of noise in depth camera measurements, including missing or invalid values due to the characteristics of the 3D space (for example, null or invalid measurements are obtained when surfaces are translucent, and/or when the angle of incidence of the infrared radiation from the depth camera is too acute for it to be reflected back to the camera's sensors).
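The first of the two kernels is straightforward. A minimal Java sketch of the scaling step, assuming a raw reading of zero denotes an invalid depth sample, might look as follows (illustrative, not the SLAMBench source):

```java
// Illustrative sketch of the first pre-processing kernel: scaling raw
// depth readings from millimetres to metres, mapping invalid (zero)
// camera readings to 0.0f.
public class DepthScale {
    public static float[] mmToMetres(short[] raw) {
        float[] out = new float[raw.length];
        for (int i = 0; i < raw.length; i++) {
            // A reading of 0 means the camera produced no valid depth sample.
            out[i] = raw[i] > 0 ? raw[i] / 1000.0f : 0.0f;
        }
        return out;
    }
}
```

The bilateral filter then smooths this scaled image while preserving depth discontinuities; on the FPGA, both steps run as a single merged operation as described below.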

In order to improve the execution time in Java, we merged the two routines into a single routine, reducing the streaming of data to and from the FPGA device. The offloading to the FPGA is accomplished by using the Java Native Interface (JNI) to interface with our MAST module (Section 2.5). The JNI stub extracts C arrays of floating-point values from the Java environment, representing the current input raw depth image from the camera and the current output scaled, filtered image. The JNI stub, in turn, converts the current raw depth image into an array of short integers, which is allocated (through malloc) on the first execution of the stub. The FPGA hardware environment is also initialized during the first execution, after which the hardware performs the merged scaling and filtering operation. Subsequent executions only need to extract the C arrays and, finally, release the output scaled and filtered image array back to the Java environment.

As depicted in Table 2, FPGA acceleration improves performance by 43x and 22x on MaxineVM and OpenJDK respectively. The difference in execution times and speedups between the two VMs stems from the fact that OpenJDK produces better-optimized code than MaxineVM (Section 2.3).

VM No FPGA With FPGA Speedup
Acceleration Acceleration
Maxine VM 2.20 0.05 43x
OpenJDK 0.66 0.03 22x
Table 2. Performance and speedup of KFusion’s pre-processing stage with and without FPGA acceleration (mean execution time, in seconds, over 78 frames).

4.3. HW/SW Co-Designed Object Compaction

This generic optimization applies to all Java objects and concerns the elimination of class information from object headers. This is achieved by utilizing tagged pointers; a feature currently supported by ARM AArch64 (ProgrammersGuideForARMv8, ) and SPARC M7 (M7NextGenerationSPARC, ). Applying this optimization requires changes at both the virtual machine and the hardware layers. In our case, it has been applied to SLAMBench through the Maxine/ZSim stack (rodchenko2017OHE, ) (Section 2.6.2).

Object-oriented programming languages have the fundamental property of associating type information with objects, allowing them to perform tasks such as virtual dispatch, introspection, and reflection. Typically, this is implemented by maintaining an extra pointer per object to its associated type information. To save that extra heap space per object, we utilize tagged pointers to encode class information inside object addresses. By extending ZSim to support tagged pointers in x86 and by extending the Address Generation Unit (AGU) at the micro-architectural level, we managed to expose tagged addresses at the JVM level. Instead of maintaining the extra pointer per object, we exploit the unused bits of tagged pointers to encode that information. The proposed optimization, which is orthogonal to any application running on top of the JVM, has been applied to SLAMBench and results are shown in Figures 13 and 14.
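The encoding can be illustrated with plain bit manipulation. The 16-bit tag width and field layout below are assumptions for illustration, not the exact Maxine/ZSim layout:

```java
// Illustrative encoding of class information in the unused upper bits of
// a 64-bit pointer. Tag width (16 bits above a 48-bit address) is an
// assumed layout for illustration only.
public class TaggedPointer {
    static final int TAG_SHIFT = 48;                      // tag lives above bit 47
    static final long ADDR_MASK = (1L << TAG_SHIFT) - 1;  // 48-bit address field

    // Combine an object address with a class identifier.
    public static long encode(long address, int classId) {
        return (address & ADDR_MASK) | ((long) classId << TAG_SHIFT);
    }

    // Recover the class identifier without a header load.
    public static int classId(long tagged) {
        return (int) (tagged >>> TAG_SHIFT);
    }

    // Recover the plain address; in hardware, the extended AGU masks the
    // tag so loads and stores see the untagged address.
    public static long address(long tagged) {
        return tagged & ADDR_MASK;
    }
}
```

Because the tag travels with every reference, the per-object class pointer word can be dropped, which is the source of the heap and cache savings reported below.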

Figure 13. Performance improvements of class information elimination in SLAMBench.
Figure 14. Energy and Cache Miss improvements of class information elimination in SLAMBench.

As depicted in Figure 13, by employing the co-designed optimization for eliminating class information from object headers, we achieve up to 1.32x speedup, with an average of 1.10x across all stages of SLAMBench. Furthermore, as depicted in Figure 14, the optimization results in reductions of up to 27% in dynamic DRAM energy, 12% in total DRAM energy, and 5% in total dynamic energy. The energy reductions correlate with improvements in cache utilization of 24% and 25% in the L2 and L3 caches respectively. The observed benefits derive from the fact that, by compacting objects by one word, we managed to: 1) improve cache utilization, 2) reduce garbage collection invocations (from 10 to 7) due to heap savings, and 3) improve retrieval time for class information thanks to the minimal hardware extension introduced.

5. Related Work

Although heterogeneity is the dominant design approach, programming heterogeneous systems remains extremely challenging. Delite (6113791, ; Chafi:2011:DAH:1941553.1941561, ) is a compiler and runtime framework for parallel embedded domain-specific languages (Sujeeth:2013:FGH:2517208.2517220, ; Sujeeth11optiml:an, ). Its goal is to facilitate heterogeneous programming in order to efficiently exploit the underlying heterogeneous hardware capabilities. SWAT (Grossman:2016:SPI:2907294.2907307, ) is a software platform that enables native execution of Spark applications on heterogeneous hardware. Furthermore, OpenPiton (Balkind:2016:OOS:2872362.2872414, ) is an open-source many-core research framework covering only the hardware layer, X-Mem (GottschoGSSG16, ) is an open-source software tool that characterizes the memory hierarchy for cloud computing, and Minerva (Minerva, ) is a HW/SW co-designed framework for deep neural networks. In contrast to the aforementioned approaches, the Beehive framework is a hardware/software experimentation platform that enables co-designed optimizations for runtime and architectural research, covering the full application and compute stack. Regarding GPGPU Java acceleration, a number of approaches exist, such as APARAPI (amd;;aparapi, ), Ishizaki et al. (ishizaki;2015;compiling-and-o, ), Rootbeer (pratt-szeliga;2012;rootbeer:-seaml, ), and Habanero-Java (hayashi;2013;accelerating-ha, ). Beehive’s Tornado module differs due to its dynamic nature and its co-operation with other parts of the framework, such as MAST.

6. Conclusions and Future Work

In this paper, we introduced Beehive: a hardware/software co-designed platform for full-system runtime and architectural research. Beehive builds on top of existing state-of-the-art as well as novel components at all layers of the platform. By utilizing Beehive, we managed to accelerate a complex Computer Vision application in three distinct ways: GPGPU acceleration, FPGA acceleration, and hardware/software co-designed object compaction. The experimental results show that we achieved real-time 3D space reconstruction (30 FPS) of the KFusion application, after accelerating it by up to 43x.

Our vision for Beehive is to improve both its integration and its performance throughout all the layers. In the long term, we aim to unify the platform’s components under a semantically aware runtime, increasing developer productivity. Furthermore, we plan to define a hybrid ISA between emulated and hardware capabilities. This ISA will provide a roadmap for moving interactions between abstractions offered in software and in hardware. Finally, we plan to work on new hardware services for scale-out, and on the representation of volatile and non-volatile communication services. This will provide a consistent view of platform capabilities across heterogeneous processors for Big Data and HPC applications.

7. Acknowledgements

The research leading to these results has received funding from UK EPSRC grants DOME EP/J016330/1, AnyScale Apps EP/L000725/1, INPUT EP/K015699/1 and PAMELA EP/K008730/1, the EU FP7 Programme under grant agreement No 318633 (AXLE), and EU H2020 grant agreement No 732366 (ACTiCLOUD). Mikel Luján is funded by a Royal Society University Research Fellowship and Antoniu Pop by a Royal Academy of Engineering Research Fellowship.

References

  • [1] Dyson 360 Eye web site. https://www.dyson360eye.com.
  • [2] Project Tango web site. https://www.google.com/atap/projecttango.
  • [3] B. Alpern, C. R. Attanasio, J. J. Barton, M. G. Burke, P. Cheng, J.-D. Choi, A. Cocchi, S. J. Fink, D. Grove, M. Hind, S. F. Hummel, D. Lieber, V. Litvinov, M. F. Mergen, T. Ngo, J. R. Russell, V. Sarkar, M. J. Serrano, J. C. Shepherd, S. E. Smith, V. C. Sreedhar, H. Srinivasan, and J. Whaley. The jalapeño virtual machine. IBM Systems Journal, 2000.
  • [4] AMD-Aparapi. http://developer.amd.com/tools-and-sdks/heterogeneous-computing/aparapi/. July 5, 2019.
  • [5] Apache Flink. https://flink.apache.org. July 5, 2019.
  • [6] Apache Hadoop. http://hadoop.apache.org/. July 5, 2019.
  • [7] Apache Spark. https://spark.apache.org/. July 5, 2019.
  • [8] Arvind. Bluespec: A language for hardware design, simulation, synthesis and verification invited talk. In Proceedings of the First ACM and IEEE International Conference on Formal Methods and Models for Co-Design, MEMOCODE ’03, pages 249–, Washington, DC, USA, 2003. IEEE Computer Society.
  • [9] Jonathan Balkind, Michael McKeown, Yaosheng Fu, Tri Nguyen, Yanqi Zhou, Alexey Lavrov, Mohammad Shahrad, Adi Fuchs, Samuel Payne, Xiaohua Liang, Matthew Matl, and David Wentzlaff. Openpiton: An open source manycore research framework. In Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS ’16, pages 217–232, New York, NY, USA, 2016. ACM.
  • [10] Christian Bienia. Benchmarking Modern Multiprocessors. PhD thesis, Princeton University, January 2011.
  • [11] Nathan Binkert, Bradford Beckmann, Gabriel Black, Steven K. Reinhardt, Ali Saidi, Arkaprava Basu, Joel Hestness, Derek R. Hower, Tushar Krishna, Somayeh Sardashti, Rathijit Sen, Korey Sewell, Muhammad Shoaib, Nilay Vaish, Mark D. Hill, and David A. Wood. The gem5 simulator. SIGARCH Comput. Archit. News, 39(2):1–7, August 2011.
  • [13] Arijit Biswas, Paul Racunas, Razvan Cheveresan, Joel Emer, Shubhendu S. Mukherjee, and Ram Rangan. Computing architectural vulnerability factors for address-based structures. SIGARCH Comput. Archit. News, 33(2):532–543, May 2005.
  • [14] S. M. Blackburn, R. Garner, C. Hoffman, A. M. Khan, K. S. McKinley, R. Bentzur, A. Diwan, D. Feinberg, D. Frampton, S. Z. Guyer, M. Hirzel, A. Hosking, M. Jump, H. Lee, J. E. B. Moss, A. Phansalkar, D. Stefanović, T. VanDrunen, D. von Dincklage, and B. Wiedermann. The DaCapo benchmarks: Java benchmarking development and analysis. In OOPSLA ’06: Proceedings of the 21st annual ACM SIGPLAN conference on Object-Oriented Programing, Systems, Languages, and Applications. ACM Press, 2006.
  • [15] Gavin Brown, Adam Pocock, Ming-Jie Zhao, and Mikel Luján. Conditional likelihood maximisation: a unifying framework for information theoretic feature selection. Journal of Machine Learning Research, 13(Jan):27–66, 2012.
  • [16] K. J. Brown, A. K. Sujeeth, H. J. Lee, T. Rompf, H. Chafi, M. Odersky, and K. Olukotun. A heterogeneous parallel framework for domain-specific languages. In Parallel Architectures and Compilation Techniques (PACT), 2011 International Conference on, pages 89–100, 2011.
  • [17] Franck Cappello, Al Geist, Bill Gropp, Laxmikant Kale, Bill Kramer, and Marc Snir. Toward exascale resilience. November 2009.
  • [18] Trevor E. Carlson, Wim Heirman, and Lieven Eeckhout. Sniper: Exploring the level of abstraction for scalable and accurate parallel multi-core simulation. In Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, SC ’11, pages 52:1–52:12, New York, NY, USA, 2011. ACM.
  • [19] Hassan Chafi, Arvind K. Sujeeth, Kevin J. Brown, HyoukJoong Lee, Anand R. Atreya, and Kunle Olukotun. A domain-specific approach to heterogeneous parallelism. In Proceedings of the 16th ACM Symposium on Principles and Practice of Parallel Programming, PPoPP ’11, pages 35–46, New York, NY, USA, 2011. ACM.
  • [20] James Clarkson, Christos Kotselidis, Gavin Brown, and Mikel Luján. Boosting java performance using gpgpus. In ARCS 2017: International Conference on Architecture of Computing Systems, volume 10172, pages 59–70. LNCS dx.doi.org/10.1007/978-3-319-54999-6_5, 2017.
  • [21] C. Constantinescu. Trends and challenges in vlsi circuit reliability. IEEE Micro, 23(4):14–19, July 2003.
  • [22] Amanieu d’Antras, Cosmin Gorgovan, Jim Garside, and Mikel Luján. Optimizing indirect branches in dynamic binary translators. ACM Trans. Archit. Code Optim., 13(1):7:1–7:25, April 2016.
  • [23] Xiangyu Dong, Cong Xu, Norm Jouppi, and Yuan Xie. Nvsim: A circuit-level performance, energy, and area model for emerging non-volatile memory. In Emerging Memory Technologies, pages 15–50. Springer, 2014.
  • [24] G. Duboscq, L. Stadler, T. Würthinger, D. Simon, C. Wimmer, and H. Mössenböck. Graal ir: An extensible declarative intermediate representation. In Asia-Pacific Programming Languages and Compilers, 2013.
  • [25] Hadi Esmaeilzadeh, Emily Blem, Renee St. Amant, Karthikeyan Sankaralingam, and Doug Burger. Dark silicon and the end of multicore scaling. In Proceedings of the 38th Annual International Symposium on Computer Architecture, ISCA ’11, pages 365–376, New York, NY, USA, 2011. ACM.
  • [26] Cortex-A Series Programmer’s Guide for ARMv8-A. http://infocenter.arm.com/help/topic/com.arm.doc.den0024a/DEN0024A_v8_architecture_PG.pdf. July 5, 2019.
  • [27] Cosmin Gorgovan, Amanieu d’Antras, and Mikel Luján. Mambo: A low-overhead dynamic binary modification tool for arm. ACM Trans. Archit. Code Optim., 13(1):14:1–14:26, April 2016.
  • [28] Mark Gottscho, Sriram Govindan, Bikash Sharma, Mohammed Shoaib, and Puneet Gupta. X-mem: A cross-platform and extensible memory characterization tool for the cloud. In 2016 IEEE International Symposium on Performance Analysis of Systems and Software, ISPASS 2016, Uppsala, Sweden, April 17-19, 2016, pages 263–273, 2016.
  • [29] Max Grossman and Vivek Sarkar. Swat: A programmable, in-memory, distributed, high-performance computing platform. In Proceedings of the 25th ACM International Symposium on High-Performance Parallel and Distributed Computing, HPDC ’16, pages 81–92, New York, NY, USA, 2016. ACM.
  • [30] A. Handa, T. Whelan, J.B. McDonald, and A.J. Davison. A Benchmark for RGB-D Visual Odometry, 3D Reconstruction and SLAM. In ICRA, 2014.
  • [31] Akihiro Hayashi, Max Grossman, Jisheng Zhao, Jun Shirako, and Vivek Sarkar. Accelerating habanero-java programs with opencl generation. In Proceedings of the 2013 International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools, 2013.
  • [32] Wim Heirman, Souradip Sarkar, Trevor E. Carlson, Ibrahim Hur, and Lieven Eeckhout. Power-aware multi-core simulation for early design stage hardware/software co-optimization. In Proceedings of the 21st International Conference on Parallel Architectures and Compilation Techniques, PACT ’12, pages 3–12, New York, NY, USA, 2012. ACM.
  • [33] Wei Huang, S. Ghosh, S. Velusamy, K. Sankaranarayanan, K. Skadron, and M.R. Stan. HotSpot: A Compact Thermal Modeling Methodology for Early-Stage VLSI Design. Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, 14(5):501–513, 2006.
  • [34] Intel. Intel Atom Processor E6x5C Series-Based Platform for Embedded Computing. https://newsroom.intel.com/wp-content/uploads/sites/11/2016/01/ProductBrief-IntelAtomProcessor_E600C_series.pdf.
    Online; last accessed 23-March-2016.
  • [35] K. Ishizaki, A. Hayashi, G. Koblents, and V. Sarkar. Compiling and Optimizing Java 8 Programs for GPU Execution. In 2015 International Conference on Parallel Architecture and Compilation (PACT), pages 419–431, Oct 2015.
  • [36] Jep 243: Java-level jvm compiler interface. http://openjdk.java.net/jeps/243, 2016. [Online; last accessed 1-Feb-2016].
  • [37] M. Kaliorakis, S. Tselonis, A. Chatzidimitriou, N. Foutris, and D. Gizopoulos. Differential fault injection on microarchitectural simulators. In Workload Characterization (IISWC), 2015 IEEE International Symposium on, pages 172–182, Oct 2015.
  • [38] Christos Kotselidis, James Clarkson, Andrey Rodchenko, Andy Nisbet, John Mawer, and Mikel Luján. Heterogeneous managed runtime systems: A computer vision case study. In Proceedings of the 13th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, VEE ’17, pages 74–82, New York, NY, USA, 2017. ACM.
  • [39] Man-Lap Li, Pradeep Ramachandran, Swarup Kumar Sahoo, Sarita V. Adve, Vikram S. Adve, and Yuanyuan Zhou. Understanding the propagation of hard errors to software and implications for resilient system design. SIGOPS Oper. Syst. Rev., 42(2):265–276, March 2008.
  • [40] Sheng Li, Jung-Ho Ahn, R.D. Strong, J.B. Brockman, D.M. Tullsen, and N.P. Jouppi. McPAT: An Integrated Power, Area, and Timing Modeling Framework for Multicore and Manycore Architectures. In 42nd Annual IEEE/ACM International Symposium on Microarchitecture, pages 469–480, 2009.
  • [41] Sheng Li, Jung Ho Ahn, Richard D. Strong, Jay B. Brockman, Dean M. Tullsen, and Norman P. Jouppi. The McPAT framework for multicore and manycore architectures: Simultaneously modeling power, area, and timing. ACM Trans. Archit. Code Optim., 10(1):5:1–5:29, April 2013.
  • [42] Sheng Li, Ke Chen, Jung Ho Ahn, Jay B. Brockman, and Norman P. Jouppi. CACTI-P: Architecture-level modeling for SRAM-based structures with advanced leakage reduction techniques. In Computer-Aided Design (ICCAD), 2011 IEEE/ACM International Conference on, pages 694–701. IEEE, 2011.
  • [43] Chi-Keung Luk, Robert Cohn, Robert Muth, Harish Patil, Artur Klauser, Geoff Lowney, Steven Wallace, Vijay Janapa Reddi, and Kim Hazelwood. Pin: Building customized program analysis tools with dynamic instrumentation. In Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’05, pages 190–200, New York, NY, USA, 2005. ACM.
  • [44] M. Maniatakos, N. Karimi, C. Tirumurti, A. Jas, and Y. Makris. Instruction-level impact analysis of low-level faults in a modern microprocessor controller. IEEE Transactions on Computers, 60(9):1260–1273, Sept 2011.
  • [45] John Mawer, Oscar Palomar, Cosmin Gorgovan, Will Toms, Andy Nisbet, and Mikel Luján. The potential of dynamic binary modification and CPU/FPGA SoCs for simulation. In 25th IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM), 2017.
  • [46] Luigi Nardi, Bruno Bodin, M. Zeeshan Zia, John Mawer, Andy Nisbet, Paul H. J. Kelly, Andrew J. Davison, Mikel Luján, Michael F. P. O’Boyle, Graham Riley, Nigel Topham, and Steve Furber. Introducing SLAMBench, a performance and accuracy benchmarking methodology for SLAM. In ICRA, 2015.
  • [47] Richard A. Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J. Davison, Pushmeet Kohli, Jamie Shotton, Steve Hodges, and Andrew Fitzgibbon. KinectFusion: Real-time dense surface mapping and tracking. In Proceedings of the 2011 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR ’11, pages 127–136, Washington, DC, USA, 2011. IEEE Computer Society.
  • [48] P. C. Pratt-Szeliga, J. W. Fawcett, and R. D. Welch. Rootbeer: Seamlessly using GPUs from Java. In Proceedings of the 14th International IEEE High Performance Computing and Communication Conference on Embedded Software and Systems, 2012.
  • [49] Brandon Reagen, Paul Whatmough, Robert Adolf, Saketh Rama, Hyunkwang Lee, Sae Kyu Lee, José Miguel Hernández-Lobato, Gu-Yeon Wei, and David Brooks. Minerva: Enabling low-power, highly-accurate deep neural network accelerators. In Proceedings of the 43rd Annual International Symposium on Computer Architecture, ISCA ’16. ACM, 2016.
  • [50] Andrey Rodchenko, Christos Kotselidis, Andy Nisbet, Antoniu Pop, and Mikel Luján. MaxSim: A simulator platform for managed applications. In ISPASS - IEEE International Symposium on Performance Analysis of Systems and Software, 2017.
  • [51] Andrey Rodchenko, Christos Kotselidis, Andy Nisbet, Antoniu Pop, and Mikel Luján. Type information elimination from objects on architectures with tagged pointers support. IEEE Transactions on Computers, 2017.
  • [52] Daniel Sanchez and Christos Kozyrakis. ZSim: Fast and accurate microarchitectural simulation of thousand-core systems. In Proceedings of the 40th Annual International Symposium on Computer Architecture, ISCA ’13, pages 475–486, New York, NY, USA, 2013. ACM.
  • [53] Muhammad Shafique, Siddharth Garg, Jörg Henkel, and Diana Marculescu. The EDA challenges in the dark silicon era: Temperature, reliability, and variability perspectives. In Proceedings of the 51st Annual Design Automation Conference, DAC ’14, pages 185:1–185:6, New York, NY, USA, 2014. ACM.
  • [54] Sniper. Jikes page in the Sniper online documentation. http://snipersim.org/w/Jikes, 2014. [Online; last accessed 1-Feb-2016].
  • [55] M7: Next Generation SPARC. http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/migration/m7-next-gen-sparc-presentation-2326292.html. [Online; last accessed 5-July-2019].
  • [56] SPEC CPU2006. https://www.spec.org/cpu2006/. [Online; last accessed 5-July-2019].
  • [57] SPECjvm2008. https://www.spec.org/jvm2008/. [Online; last accessed 5-July-2019].
  • [58] J. Srinivasan, S. V. Adve, P. Bose, and J. A. Rivers. The impact of technology scaling on lifetime reliability. In Dependable Systems and Networks, 2004 International Conference on, pages 177–186, June 2004.
  • [59] Lukas Stadler, Thomas Würthinger, and Hanspeter Mössenböck. Partial escape analysis and scalar replacement for Java. In Proceedings of the Annual IEEE/ACM International Symposium on Code Generation and Optimization, CGO ’14, pages 165:165–165:174, New York, NY, USA, 2014. ACM.
  • [60] Arvind K. Sujeeth, Austin Gibbons, Kevin J. Brown, HyoukJoong Lee, Tiark Rompf, Martin Odersky, and Kunle Olukotun. Forge: Generating a high performance DSL implementation from a declarative specification. In Proceedings of the 12th International Conference on Generative Programming: Concepts & Experiences, GPCE ’13, pages 145–154, New York, NY, USA, 2013. ACM.
  • [61] Arvind K. Sujeeth, HyoukJoong Lee, Kevin J. Brown, Hassan Chafi, Michael Wu, Anand R. Atreya, Kunle Olukotun, Tiark Rompf, and Martin Odersky. OptiML: An implicitly parallel domain-specific language for machine learning. In Proceedings of the 28th International Conference on Machine Learning, ICML, 2011.
  • [62] The Apache Software Foundation. Commons Math: The Apache Commons Mathematics Library. https://commons.apache.org/proper/commons-math/. [Online; last accessed 08-March-2016].
  • [63] Christian Wimmer, Michael Haupt, Michael L. Van De Vanter, Mick Jordan, Laurent Daynès, and Douglas Simon. Maxine: An approachable virtual machine for, and in, Java. January 2013.
  • [64] Thomas Würthinger, Christian Wimmer, Andreas Wöß, Lukas Stadler, Gilles Duboscq, Christian Humer, Gregor Richards, Doug Simon, and Mario Wolczko. One VM to rule them all. In Proceedings of the 2013 ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming & Software, Onward! ’13, 2013.
  • [65] Xilinx. Zynq-7000 All Programmable SoC overview. http://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-7000-Overview.pdf. [Online; last accessed 23-March-2016].
  • [66] Runjie Zhang, Ke Wang, B. H. Meyer, M. R. Stan, and K. Skadron. Architecture implications of pads as a scarce resource. In ACM/IEEE 41st International Symposium on Computer Architecture, pages 373–384, 2014.