An Intermediate Language and Estimator for Automated Design Space Exploration on FPGAs
We present the TyTra-IR, a new intermediate language intended as a compilation target for high-level language compilers and a front-end for HDL code generators. We develop the requirements of this new language based on the design-space of FPGAs that it should be able to express, and the estimation-space onto which each configuration from the design-space should be mappable in an automated design flow. We use a simple kernel to illustrate multiple configurations using the semantics of TyTra-IR. The key novelty of this work is the cost model for resource-costs and throughput for different configurations of interest for a particular kernel. Through the realistic example of a Successive Over-Relaxation kernel implemented both in TyTra-IR and HDL, we demonstrate both the expressiveness of the IR and the accuracy of our cost model.
Syed Waqar Nabi
School of Computing Science,
University of Glasgow,
Glasgow G12 8QQ, UK.
The context for the work in this paper is the TyTra project [?], which aims to develop a compiler for heterogeneous high-performance computing (HPC) platforms that include many/multi-core CPUs, graphics processors (GPUs), and Field Programmable Gate Arrays (FPGAs). The work we present here relates to raising the programming abstraction for targeting FPGAs, reasoning about their multi-dimensional design space, and estimating parameters of interest for multiple configurations from this design-space via a cost model.
We present a new language, the TyTra Intermediate Representation (TIR) language, which has an abstraction level and syntax intentionally similar to the LLVM Intermediate Language [?]. We can derive resource-utilization and performance estimates from TIR code via a light-weight back-end compiler, TyBEC, which will also generate the HDL code for the FPGA synthesis tools. We briefly discuss the syntax of the IR and its expressiveness through illustrations, and discuss the cost model we have built around this language.
The TyTra project is predicated on the observation that we have entered a period where performance increases can only come from increased numbers of heterogeneous computational cores and their effective exploitation by software. The specific challenge we address in the TyTra project is how to exploit the parallelism of a given computing platform, e.g. a multicore CPU, GPU or FPGA, in the best possible way, without having to change the original program. Our proposed approach is to use an advanced type system called Multi-Party Session Types [?] to describe the communication between the tasks that make up a computation, to transform the program using provably correct type transformations, and to use machine learning and a cost model to select the variant of the program best suited to the heterogeneous platform. Our proof-of-concept compiler, currently under development, targets FPGA devices, because this type of computing platform is the most different from the others and hence the most challenging.
Figure [?] is a concise representation of the compiler’s expected flow. The work we present in this paper — identified by the dotted box — is limited to the specification and abstraction of the IR, its utility in representing various configurations, and the cost model built around it, which we can use to assess the trade-offs associated with these configurations.
In the next section, we present the TyTra platform model for FPGAs. A very important abstraction for this work is our view of the design-space of FPGAs, which we present next, followed by what we call the estimation-space. Both the design-space and estimation-space are our attempts to give structure to reasoning about multiple configurations of a kernel on an FPGA. We then specify the requirements of the new IR language that we are developing. We follow this with a very brief description of the TIR, and develop it by expressing different FPGA configurations. We then look at the scheme for arriving at various estimates, and evaluate it with a simple example based on the successive over-relaxation method. We conclude by briefly discussing some related work and our future lines of investigation.
The TyTra-FPGA platform model is similar to the platform model introduced by OpenCL [?], but is also informed by our prior work on the MORA FPGA programming framework [?], and is more nuanced than OpenCL’s in order to incorporate FPGA-specific architectural features; Altera-OpenCL takes a similar approach [?]. The main departure from the OpenCL model is the Core block and the Compute-Cores. Figure [?] is a block diagram of the model, with a brief description following. We do, however, use the terms global memory, local memory, work-group, and work-item exactly as they are used in the OpenCL framework.
Device: An FPGA device, which would contain one or more compute-units.
Compute-unit: The execution unit for a single kernel. An FPGA allows multiple independent kernels to be executed concurrently, though typically there would be a single kernel. The compute-unit contains local memory (block RAM), some custom logic for controlling iterations of a kernel’s execution and managing block memory transfers, and one or more cores.
Core: This is the custom design unit created for a kernel. For pipelined implementations, a core may be considered equivalent to a pipeline lane. There can be multiple lanes for thread-level parallelism (TLP). The core has control logic for generating data streams from a variety of data sources like local-memory, global-memory, host, or a peer compute-device or compute-unit. These streams are fed to/from the core-compute unit inside it, which consists of processing-elements (PEs). A PE can consist of an arithmetic or logic functional unit and its pipeline register, or it can also be a custom scalar or vector instruction processor with its own private memory.
As FPGAs have fine-grained flexibility, parallelism in the kernel can be exposed through different configurations. It is useful to have some kind of structure for reasoning about these configurations; much more so when the goal is an automated design flow. We have created a design-space abstraction for the key feature differentiating multiple FPGA configurations — the kind and extent of parallelism available in the design. We define an estimation-space for capturing the various estimates for a point in the design-space. By defining a design-space, an estimation-space, and a mapping between them, we have a structured approach for mapping a particular kernel to a suitable configuration on the FPGA.
The design-space is shown in Figure [?]. A C2 configuration, on the axis indicating the degree of pipeline parallelism, is a pipelined implementation of the kernel on the FPGA. The other horizontal axis indicates the degree of parallelism achieved by replicating a kernel’s core. This can be done by simultaneously launching multiple calls to a kernel, which is parallelism at a coarse, thread level. Along the same dimension is a medium-grained parallelism, which involves launching multiple work-items of a kernel’s work-group.
A configuration in the xy-plane, C1, has multiple kernel cores, each of which has pipeline parallelism as well. We expect this to be the preferable configuration for most small to medium sized kernels, where the FPGA has enough resources to allow multiple kernel instantiations to reside simultaneously.
Note that we have not explicitly shown the most fine-grained parallelism, i.e., Instruction-Level Parallelism (ILP). The assumption is that it will be exploited wherever possible in the pipeline.
While our current focus is on kernels where we can fit at least one fully laid out custom pipeline on the available FPGA resources, re-use of logic resources is possible for larger kernels by cycling through some instructions in a scalar (C4) or vector (C5) fashion, or by using the run-time reconfiguration capabilities of FPGA devices to load in and out relevant subsets of the kernel implementation (C6).
Finally, C0 represents the generic configuration for any point on the design space.
The TyTra design flow (Figure [?]) depends on the ability of the compiler to make choices about configurations from the design-space of a particular kernel on an FPGA device. Various parameters will be relevant when making this choice, and the success of the TyTra compiler is predicated on the ability to derive reasonably accurate estimates of these parameters of concern from the IR, without actually having to generate HDL code and synthesize each configuration on the FPGA. The estimation-space shown in Figure [?] is a useful abstraction in this context. The obvious aim is to go as high as possible on the performance axis, while staying within the computation and IO constraint walls.
Having developed the design-space and estimation-space, it follows that the TyTra-IR should intrinsically be capable of working with both these abstractions, as we discuss in the next section.
The TyTra-IR is one of the key technologies in our proposed approach, hence its design needs to meet many requirements:
Should be intrinsically expressive enough to explore the entire design space of an FPGA (Figure [?]), but with a particular focus on custom pipelines (the C1 plane), because our prime target is HPC applications [?].
Should make a convenient target for a front-end compiler that would emit multiple versions of the IR (see Figure [?]).
Should be able to express access operations across the entire communication hierarchy of the target device. (We have omitted the details in this paper, but the TyTra memory-model extends that of LLVM.)
Should allow custom number representations to fully utilize the flexibility of FPGAs. If this flexibility is not capitalized on, it will be difficult to compete with GPUs in HPC for most scientific applications [?].
The language should carry enough architectural detail to allow generation of synthesizable HDL code.
A core requirement is to have a light-weight cost-model for high-level estimates. We should be able to map each configuration of interest in the design space (Figure [?]) to a point in the estimation space (Figure [?]).
The above requirements necessitate the design of a custom intermediate language, as none of the existing HLS ("C-to-gates") tools meets all of them. HLS front-end languages are primarily focused on ease of human use. High-level hardware description languages like OpenCL or MaxJ [?], having coarse-grained high-level datapath and control instructions and syntactic sugar, are inappropriate as compiler targets because their abstraction level is too high. Moreover, even parallelism-friendly high-level languages tend to be constrained to specific types of parallelism, and exploring the entire FPGA design-space would be either impossible or protracted. The requirement of a lightweight cost-model also motivated us to work on a new language.
The TyTra-IR (TIR) is a strongly and statically typed language, and all computations are expressed using Static Single Assignment (SSA). The TIR is largely based on the LLVM-IR, which gives us a suitable point of departure for designing our language: we can re-use the syntax of the LLVM-IR with little or no modification, and it allows us to apply LLVM optimizations to improve the code generation capabilities of our tool, as e.g. the LegUp [?] tool does. We use LLVM metadata syntax and some custom syntax as an abstraction for FPGA-specific architectural features.
The TIR code for a design has two components:
The Manage-IR deals with setting up the streaming data ports for the kernel. It corresponds to the logic in the core outside the core-compute unit (see Figure [?]). All Manage-IR statements are wrapped inside the launch() method.
The Compute-IR describes the datapath logic that maps to the core-compute unit inside the core. It mostly works with very limited data abstractions, namely streaming and scalar ports. All Compute-IR statements are in the scope of the main() function or other functions “called” from it.
By dividing the IR this way, we separate the pure dataflow architecture — working with streaming variables and arithmetic datapath units — from the control and peripheral logic that creates these streams and related memory objects, instantiates required peripherals for the kernel application, and manages the host, peer-device, and peer-unit interfaces. The division between compute-IR and manage-IR directly relates to the division between the core-compute unit and the remaining core logic (wrapper) around it (see Figure [?]). A detailed discussion of the TIR syntax is outside the scope of this paper, but the following illustration of its use in various configurations gives a good picture.
We use a trivial example and build various configurations for it, to demonstrate the architectural expressiveness of the TIR. The following Fortran loop describes the kernel:
do n = 1, ntot
  y(n) = K + ( (a(n)+b(n)) * (c(n)+c(n)) )
end do
The baseline configuration, whose redacted TIR code is shown in Figure [?], is simply a sequential processing of all the operations in the loop. This corresponds to the C4 configuration in Figure [?].
The manage-IR consists of the launch method, which sets up the memory-objects; these are abstractions for any object that can be the source or destination of streaming data. In this case, the memory object (Figure [?], line 3) is a local-memory instance, indicated by the argument to the addrspace qualifier. The stream-objects connect to memory-objects to create streams of data, as shown in lines 4–5. The creation of streams from memory is equivalent to reading from an array in a loop; hence the loop over work-items in Fortran disappears in the TIR. After setting up all stream and memory objects, the main function is called.
The compute-IR sets up the ports (lines 9–11), which are mapped to stream-objects, creating data streams for the compute-IR functions. The SSA datapath instructions in function f1 are configured for sequential execution on the FPGA, indicated by the keyword seq, and f1 is then called by main. Figure [?] shows the block diagram for this configuration.
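The relationship between memory-objects, stream-objects, and the compute function can be sketched in Python. This is an illustrative analogy only — the names (make_stream, core_compute, launch) mirror the TIR roles but are not TIR syntax, and a generator stands in for a hardware stream:

```python
def make_stream(mem_object):
    """A stream-object: yields one element of the memory-object per 'cycle',
    which is equivalent to reading the array inside the original loop."""
    for element in mem_object:
        yield element

def core_compute(a, b, c, k):
    """One firing of the baseline datapath: y = K + ((a+b) * (c+c))."""
    return k + (a + b) * (c + c)

def launch(mem_a, mem_b, mem_c, k):
    """Manage-IR analogue: set up streams from memory, then drive the
    compute function once per work-item (the Fortran loop disappears)."""
    sa, sb, sc = make_stream(mem_a), make_stream(mem_b), make_stream(mem_c)
    return [core_compute(a, b, c, k) for a, b, c in zip(sa, sb, sc)]
```

For instance, launch([1, 2], [3, 4], [5, 6], 10) consumes all three streams in lock-step and produces one output element per input triple.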
This configuration (C2) is a fully pipelined version of the kernel; the TIR code is shown in Figure [?].
Note that the available ILP (the two add operations can be done in parallel) is exploited by explicitly wrapping the two instructions in a par function f1, and then calling it in the pipeline function f2. Our prototype parser can also automatically check for dependencies in a pipe function and schedule instructions using a simple as-soon-as-possible policy. See Figure [?] for the block diagram of this configuration.
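The as-soon-as-possible policy mentioned above can be sketched as follows. The data structures are assumptions for illustration (each SSA instruction is a pair of the value it defines and the values it reads); this is not the TyBEC implementation:

```python
def asap_schedule(instructions):
    """Assign each SSA instruction the earliest pipeline stage at which
    all of its operands are ready. 'instructions' is a list of
    (result_name, [operand_names]) pairs in def-before-use order.
    Returns a dict mapping result_name -> stage number."""
    stage = {}
    for result, operands in instructions:
        # An operand defined by an earlier instruction forces a later stage;
        # external inputs (ports/streams) are ready at stage 0.
        stage[result] = max((stage[op] + 1 for op in operands if op in stage),
                            default=0)
    return stage
```

Applied to the kernel y = K + ((a+b) * (c+c)), the two adds land in stage 0 (the ILP noted above), the multiply in stage 1, and the final add in stage 2.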
For simple kernels where enough space is left after creating one pipeline core, we can instantiate multiple identical pipeline lanes (C1). The code in Figure [?] illustrates how this can be specified in TIR. We do not reproduce segments that have appeared in previous listings. See Figure [?] for the block diagram of this configuration.
Comparing with the previous single-pipeline configuration, note that we have a new par function f3 calling the same pipe function four times, indicating replication. Similarly, there are now four separate ports for each array input, and four separate stream-objects (not shown) for each of these ports, all of which connect to the same memory object, indicating a multi-port memory.
There is one more interesting configuration we can express in TIR by wrapping multiple calls to a seq function in a par function. This would represent a vectorized sequential processor (C5).
The TIR for this configuration is shown in Figure [?], with only the relevant new parts emphasized.
See Figure [?] for the block diagram of this configuration.
Comparing with the single sequential processor configuration, note that we have a new par function f2 that calls the same seq function four times, indicating a replication of the sequential processor.
Following from the requirement to derive cost and performance estimates discussed in §[?], we designed the TIR specifically to allow the generation of accurate estimates. Our prototype TyTra Back-end Compiler (TyBEC) can calculate estimates directly from the TIR without any further synthesis. Many different configurations of the same kernel can thus be compared by a programmer or – eventually – by a front-end compiler.
Two key estimates are calculated by the TyBEC estimator: the resource utilization for a specific Altera FPGA device (ALUTs, REGs, block RAM, DSPs), and the throughput estimate for the kernel under consideration. With reference to Figure [?], this covers two dimensions. Estimation of IO bandwidth requirements is ongoing work; for the purpose of this paper we make the simplifying assumption that all kernels are compute-bound rather than IO-bound.
We have described a performance measure called the EWGT (Effective Work-Group Throughput) for comparing how fast a kernel executes across different design points. It is defined as the number of times an entire work-group (the loop over all the work-items in the index-space) of a kernel is executed every second. Measuring throughput at this granularity, rather than the more conventional bits-per-second, allows us to reason about performance at a level coarse enough to take into account parameters like the dynamic reconfiguration penalty. The following generic expression applies to the entire design space (i.e. the C0 root configuration), and specialized expressions for configurations of interest can be derived from it:

$EWGT = \dfrac{1}{N_{cfg} \cdot \left( T_{cfg} + \dfrac{N_{wi}}{N_{lanes} \cdot N_{vec}} \cdot N_{flop} \cdot CPI \cdot T_{clk} \right)}$

where:
$EWGT$ : Effective Work-Group Throughput
$N_{lanes}$ : number of identical lanes
$N_{vec}$ : degree of vectorization
$N_{cfg}$ : number of FPGA configurations needed to execute the entire kernel
$T_{cfg}$ : time taken to reconfigure the FPGA
$N_{flop}$ : number of equivalent FLOP instructions delegated to the average instruction processor
$CPI$ : ticks taken by one FLOP operation, i.e. cycles per instruction
$T_{clk}$ : FPGA clock period
$N_{wi}$ : number of work-items in the kernel loop
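As a sketch, the parameters defined above compose into throughput as the reciprocal of total execution time, including the reconfiguration overhead. The Python transcription below is our reading of those definitions, for experimenting with the trade-offs, not TyBEC code:

```python
def ewgt(n_wi, t_clk, n_lanes=1, n_vec=1, n_flop=1, cpi=1,
         n_cfg=1, t_cfg=0.0):
    """Effective Work-Group Throughput (work-groups per second).
    n_wi    : work-items in the kernel loop
    t_clk   : FPGA clock period in seconds
    n_lanes : identical pipeline lanes; n_vec: degree of vectorization
    n_flop  : FLOP instructions per instruction processor; cpi: ticks/FLOP
    n_cfg   : FPGA configurations needed; t_cfg: reconfiguration time."""
    # Time for one pass over the work-group, divided across lanes/vectors.
    t_workgroup = (n_wi / (n_lanes * n_vec)) * n_flop * cpi * t_clk
    return 1.0 / (n_cfg * (t_cfg + t_workgroup))
```

For example, a single pipeline (one result per tick) at a 10 ns clock period and 1000 work-items gives an EWGT of 1e5 work-groups/s; replicating the lane four times quadruples it.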
The key novelty is that the TIR, through its constrained syntax at a particular abstraction level, exposes the parameters that make up this expression, and a simple parser can extract them from the TIR code, as we show in §[?]. If we were to use a higher-abstraction HLS language as our internal IR, we would not be able to use the above expression, and some kind of heuristic would have to be involved in making the estimates.
All specialized expressions for the different types of configurations can be obtained from the generic expression. For C1 through C5 a single full configuration suffices, so $N_{cfg} = 1$ and $T_{cfg} = 0$:
For C1, with multiple kernel pipelines and no sequential processing, we set $N_{vec} = 1$ and $N_{flop} \cdot CPI = 1$ (the pipeline retires one result per tick), giving us: $EWGT = N_{lanes} / (N_{wi} \cdot T_{clk})$.
For C2, limited to one pipeline lane, additionally setting $N_{lanes} = 1$ leads to: $EWGT = 1 / (N_{wi} \cdot T_{clk})$.
For C3, with no pipeline parallelism, we set $N_{vec} = 1$ and retain $N_{flop}$ and $CPI$ to give: $EWGT = N_{lanes} / (N_{wi} \cdot N_{flop} \cdot CPI \cdot T_{clk})$.
For C4, where PEs are scalar instruction processors, setting $N_{lanes} = N_{vec} = 1$ leads to: $EWGT = 1 / (N_{wi} \cdot N_{flop} \cdot CPI \cdot T_{clk})$.
For C5, where PEs are vector instruction processors, we set $N_{lanes} = 1$, getting: $EWGT = N_{vec} / (N_{wi} \cdot N_{flop} \cdot CPI \cdot T_{clk})$.
Finally, for C6, with multiple run-time configurations, the expression remains the same as for C0.
As an example, the single-pipelined version in §[?] corresponds to C2, and the multi-pipeline version in §[?] corresponds to C1. We estimated their EWGT based on the relevant expressions above, and then compared it against figures from HDL simulation. See the comparison in the last row of Table [?]. Note that the cycles/kernel estimate (second-last row) is very accurate; the somewhat higher deviation of about 20% in the EWGT estimate is due to the deviation in the estimate of the device frequency.
Each instruction can be assigned a resource cost by one of two methods:
Use a simple analytical expression developed specifically for the device, based on experiments. We have found that the regularity of the FPGA fabric allows some very simple first- or second-order expressions to be built for most instructions from a few experiments. The details are outside the scope of this paper.
Look up, and possibly interpolate, the cost from a cost database for the specific token and data type.
The resource costs are then accumulated based on the structural information available in the TIR. For example, two instructions in a pipe function will incur the additional cost of pipeline registers, and instructions in a seq block will save some resources through re-use of functional units, though there will be an additional cost for storing the instructions and creating the control logic to sequence them on the shared functional units. Both the cost and performance estimates follow trivially once we have the kernel expressed at the TIR abstraction.
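The lookup method and the structural accumulation can be sketched as follows. The per-instruction figures and the cost-table shape below are placeholders for illustration, not measured Altera data, and the register model is deliberately simplistic:

```python
# Hypothetical cost database: (opcode, bit-width) -> (ALUTs, REGs, DSPs).
COST_DB = {
    ("add", 32): (32, 0, 0),
    ("mul", 32): (0, 0, 2),
}

def instruction_cost(opcode, width):
    """Method 2: look up the cost entry for a specific token and data type
    (a real estimator would also interpolate between nearby widths)."""
    return COST_DB[(opcode, width)]

def pipe_cost(instrs, width=32):
    """Accumulate a pipe function: every instruction is laid out in full,
    and each stage adds pipeline registers one data-word wide."""
    aluts = regs = dsps = 0
    for opcode in instrs:
        a, r, d = instruction_cost(opcode, width)
        aluts += a
        regs += r + width  # extra pipeline register per stage
        dsps += d
    return aluts, regs, dsps
```

A seq block would instead cost one shared functional unit per opcode kind, plus instruction storage and sequencing logic, mirroring the trade-off described above.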
We have written a TyTra Back-end Compiler (TyBEC) that generates estimates as described in this section. Figure [?] shows the flow of the TyBEC.
Using the illustration from §[?], we compared the estimates generated by TyBEC with the actual resource consumption figures from synthesis of hand-crafted HDL. We only compare the two more relevant examples for an FPGA, that is, the single-pipeline configuration and the one where the pipeline is replicated four times (C2 and C1). The results of the comparison are in Table [?]. Note that the primary purpose of these estimates is to choose between different configurations of a kernel. The estimates are quite accurate and well within the tolerance set by the requirements.
We discuss a more realistic kernel in this section, to demonstrate the expressiveness of the TIR and the effectiveness of its cost model. The successive over-relaxation method is a way of solving a linear system of equations, and requires taking a weighted average of neighbouring elements over successive iterations. (The TIR has the semantics for standard and custom floating-point representations, but the compiler does not yet support floats.) The listing in Figure [?] is a C-style pseudo-code of the algorithm:
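The relaxation step described above — each interior point replaced by a weighted average of itself and its four neighbours — can be sketched in Python. The weights, the relaxation factor, and the boundary handling here are illustrative choices, not the exact kernel used in our TIR implementation:

```python
def sor_step(p, omega=1.5):
    """One relaxation sweep over the interior of a 2D grid (list of lists).
    omega is the (assumed) over-relaxation factor; omega = 1.0 reduces the
    update to a plain neighbour average."""
    rows, cols = len(p), len(p[0])
    q = [row[:] for row in p]  # copy; boundary rows/columns stay fixed
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            avg = 0.25 * (p[i-1][j] + p[i+1][j] + p[i][j-1] + p[i][j+1])
            q[i][j] = (1 - omega) * p[i][j] + omega * avg
    return q
```

In the pipelined TIR version, the neighbour accesses become stream offsets and the sweep over successive iterations becomes a repeated launch of the kernel.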
Figure [?] shows how this translates to TyTra-IR configured as a single pipeline (C2). Note the use of stream offsets (line 21), the repeated call to the kernel through the repeat keyword (line 4), and the use of a function of type comb (line 12), which translates to a single-cycle combinatorial block. We also use nested counters for indexing the 2D index-space (lines 23–24).
We also implemented this kernel as another configuration with replicated pipelines (C1, similar to the configuration in §[?]).
Results of the Estimator: We ran the TyBEC estimator on the two configurations and compared the results with the resource and throughput figures obtained from hand-crafted HDL. Table [?] shows this comparison.
These comparisons clearly indicate a reasonable accuracy of the estimator. This supports our observation that an IR designed at an appropriate abstraction will yield, in a very straightforward and light-weight manner, estimates of cost and performance that are accurate enough to make design decisions. Hence, our plan is to use this IR to develop a compiler that takes legacy code and automatically compares various possible configurations on the FPGA to arrive at the best solution.
High-Level Synthesis for FPGAs is an established technology in both academia and industry. There are two ways of comparing our work with others. If we look at the entire TyTra flow as shown in Figure [?], then the comparison would be against other C-to-gates tools that can work with legacy code and generate FPGA implementation code from it. For example, LegUp [?] is an up-and-coming tool developed in academia for this purpose. Our own front-end compiler is work in progress and is not the focus of this paper.
A more appropriate comparison for this paper would be to take the TyTra-IR as a custom language that allows one to program FPGAs at a higher abstraction than HDL, and that could be used as an automated or manual route to FPGA programming. Reference [?], for example, discusses the CHiMPS language, which is at a similar abstraction to the TIR and generates HDL descriptions. Our work is relatively less developed than CHiMPS; however, CHiMPS has nothing equivalent to the estimator model that we have in the TyBEC. MaxJ is a Java-based custom language used to program Maxeler DFEs (FPGAs) [?]. It is in some ways similar to the TIR in the way it uses stream abstractions for data and creates pipelines by default. In fact, our IR has been informed by a study of the MaxJ language. The use of streaming and scalar ports, offset streams, nested counters, and the separation of management and computation code in the TIR are all very similar to MaxJ. However, the similarity does not extend much beyond these elements. TIR and MaxJ are at very different abstraction levels, with the latter positioned to provide a programmer-friendly way to program FPGAs. The TIR, on the other hand, is meant to be a target language for a front-end compiler, and is therefore lower-abstraction and fine-grained. This fine-grained nature allows much better observability and controllability of the configuration on the FPGA, which makes it a more suitable language for exploring the entire FPGA design space.
Altera-OCL is an OpenCL-compatible development environment for targeting Altera FPGAs [?]. It offers a familiar development eco-system to programmers already used to programming GPUs and many/multi-cores using OpenCL. A comparison of a high-level language like OpenCL with the TyTra-IR would come to similar conclusions as those reached in relation to MaxJ. In addition, we feel that the intrinsic parallelism model of OpenCL, which is based on multi-threaded work-items, is not suitable for FPGA targets, which offer the best performance via the use of deep, custom pipelines. Altera-OCL is, however, of considerable importance to our work, as we do not plan to develop our own host-API or the board-package for dealing with FPGA peripheral functionality. We will wrap our custom HDL inside an OpenCL device abstraction, and will use OpenCL API calls for launching kernels and for all host-device interactions.
In this paper, we showed that the TIR syntax makes it very easy to construct a variety of configurations on the FPGA. While the current semantics are already reasonably expressive, the TIR will evolve to encompass more complex scientific kernels. We showed that we can derive very accurate estimates of cost and performance from the TIR without any further translation. We indicated that automatic HDL generation is a straightforward process; it is work in progress and our immediate next step.
We plan to improve the accuracy of the estimator with better mathematical models. We will also make our estimator tool more generic, as for this proof-of-concept work the supported set of instructions and data-types is quite limited. The compiler will also be extended to incorporate optimizations; in particular, we aim to incorporate LegUp’s sophisticated LLVM optimizations before emitting HDL code [?].
While we mature the TIR and the back-end compiler, we will be moving higher up the abstraction hierarchy as well. We will investigate the automatic generation of the TIR from a high-level language, with the help of Multi-Party Session Types to ensure the correctness of transformations.
Acknowledgement The authors acknowledge the support of the EPSRC for the TyTra project (EP/L00058X/1).
- 1 “The TyTra project website,” http://tytra.org.uk/, accessed: 2015-04-17.
- 2 C. Lattner and V. Adve, “The LLVM Instruction Set and Compilation Strategy,” CS Dept., Univ. of Illinois at Urbana-Champaign, Tech. Report UIUCDCS-R-2002-2292, Aug 2002.
- 3 K. Honda, N. Yoshida, and M. Carbone, “Multiparty asynchronous session types,” SIGPLAN Not., vol. 43, no. 1, pp. 273–284, Jan. 2008. [Online]. Available: http://doi.acm.org/10.1145/1328897.1328472
- 4 J. Stone, D. Gohara, and G. Shi, “OpenCL: A parallel programming standard for heterogeneous computing systems,” Computing in Science & Engineering, vol. 12, no. 3, pp. 66–73, May 2010.
- 5 S. R. Chalamalasetti, S. Purohit, M. Margala, and W. Vanderbauwhede, “MORA: An architecture and programming model for a resource efficient coarse grained reconfigurable processor,” in Adaptive Hardware and Systems (AHS 2009), NASA/ESA Conference on. IEEE, 2009, pp. 389–396.
- 6 T. Czajkowski, U. Aydonat, D. Denisenko, J. Freeman, M. Kinsner, D. Neto, J. Wong, P. Yiannacouras, and D. Singh, “From OpenCL to high-performance hardware on FPGAs,” in Field Programmable Logic and Applications (FPL), 2012 22nd International Conference on, Aug 2012, pp. 531–534.
- 7 W. Vanderbauwhede, “On the capability and achievable performance of FPGAs for HPC applications,” in Emerging Technologies Conference, Apr 2014.
- 8 M. Vestias and H. Neto, “Trends of CPU, GPU and FPGA for high-performance computing,” in Field Programmable Logic and Applications (FPL), 2014 24th International Conference on, Sept 2014, pp. 1–6.
- 9 A. Putnam, D. Bennett, E. Dellinger, J. Mason, P. Sundararajan, and S. Eggers, “CHiMPS: A C-level compilation flow for hybrid CPU-FPGA architectures,” in Field Programmable Logic and Applications (FPL 2008), International Conference on, Sept 2008, pp. 173–178.
- 10 A. Canis, J. Choi, M. Aldham, V. Zhang, A. Kammoona, J. H. Anderson, S. Brown, and T. Czajkowski, “LegUp: High-level synthesis for FPGA-based processor/accelerator systems,” in Proceedings of the 19th ACM/SIGDA International Symposium on Field Programmable Gate Arrays (FPGA ’11). New York, NY, USA: ACM, 2011, pp. 33–36. [Online]. Available: http://doi.acm.org/10.1145/1950413.1950423
- 11 O. Pell and V. Averbukh, “Maximum performance computing with dataflow engines,” Computing in Science & Engineering, vol. 14, no. 4, pp. 98–103, July 2012.