Mira: A Framework for Static Performance Analysis
The performance model of an application can provide understanding of its runtime behavior on particular hardware. Such information can be analyzed by developers for performance tuning. However, model building and analysis are frequently neglected during software development until performance problems arise, because they require significant expertise and can involve many time-consuming application runs. In this paper, we propose Mira, a fast, accurate, flexible, and user-friendly tool for generating performance models through static program analysis, targeting scientific applications running on supercomputers. We parse both the source code and the binary to estimate performance attributes with better accuracy than considering source or binary code alone. Because our analysis is static, the target program need not be executed on the target architecture, which enables users to perform analysis on available machines instead of conducting costly experiments on scarce production resources. Moreover, statically generated models enable performance prediction on non-existent or unavailable architectures. In addition to this flexibility, because model generation time is significantly reduced compared to dynamic analysis approaches, our method is suitable for rapid application performance analysis and improvement. We present validation results on several small benchmarks and a mini application to demonstrate the current capabilities of our approach.
Understanding application and system performance plays a critical role in high-performance software and architecture design. Developers require thorough insight into the application and system to aid their development and improvement. Performance modeling provides software developers with necessary information about bottlenecks and can guide users in identifying potential optimization opportunities.
As the development of new hardware and architectures progresses, the computing capability of high performance computing (HPC) systems continues to increase dramatically. However, along with this rise in computing capability, many applications still cannot exploit the full available computing potential, which wastes a considerable amount of computing power. The inability to fully utilize available computing resources or specific architectural advantages during application development partially accounts for this waste. Hence, it is important to be able to understand and model program behavior in order to gain more information about its bottlenecks and performance potential. Analyzing the instruction mix of a program at function or loop granularity can provide insight into CPU and memory characteristics, which can be used for further optimization of the program.
In this paper, we introduce a new approach for analyzing and modeling programs using primarily static analysis techniques that combine both source and binary program information. Our tool, Mira, generates parameterized performance models that can be used to estimate instruction mixes at different granularities (from function to statement level) for different inputs and architectural features without requiring execution of the application.
Current program performance analysis tools can be categorized into two types: static and dynamic. Dynamic (runtime) analysis is performed by executing the target program and measuring metrics of interest, e.g., time or hardware performance counters. By contrast, static analysis operates on the source or binary code without actually executing it. PBound is an example of a static analysis tool for automatically modeling program performance based on source code analysis of C applications. Because PBound considers only the source code, it cannot capture compiler optimizations and hence produces less accurate estimates of performance metrics. We discuss other examples of these approaches in more detail in Sections II and V.
While some past research efforts mix static and dynamic analysis to create a performance model, relatively little effort has been devoted to purely static performance analysis and to increasing its accuracy. Our approach starts from object code because the code transformations performed by optimizing compilers have non-negligible effects on analysis accuracy. In addition, object code is language-independent and more directly reflects runtime behavior. Although object code provides instruction-level information, it fails to offer some factors critical for understanding the target program. For instance, it is difficult or impossible to obtain detailed information about high-level code structures (user-defined types, classes, loops) from the object code alone. Therefore, we also analyze the source code to supply complementary high-level information.
By combining source code and object code representations, we obtain a more precise description of the program and its possible behavior when running on a particular architecture, which results in improved modeling accuracy. The output of our tool can be used to rapidly explore program behavior for different inputs without requiring actual application execution. In addition, because the analysis is parameterized with respect to the architecture, Mira provides users valuable insight into how programs may run on particular architectures without requiring access to the actual hardware. Furthermore, the output of Mira can also be used to create performance models for further analysis or optimization, for example Roofline arithmetic intensity estimates.
This paper is organized as follows: Section II briefly describes the ROSE compiler framework, the polyhedral model for loop analysis, and background on performance measurement and analysis tools. In Section III, we discuss the details of our methodology and its implementation. Section IV evaluates the accuracy of the generated models on several benchmark codes. In Section V, we introduce related work on static performance modeling. Section VI concludes with a summary and a discussion of future work.
II-A ROSE Compiler Framework
ROSE is an open-source compiler framework developed at Lawrence Livermore National Laboratory (LLNL). It supports the development of source-to-source program transformation and analysis tools for large-scale Fortran, C, C++, OpenMP and UPC (Unified Parallel C) applications. ROSE uses the EDG (Edison Design Group) parser and OFP (Open Fortran Parser) as front-ends to parse C/C++ and Fortran. The front-end produces a ROSE intermediate representation (IR) that is then converted into an abstract syntax tree (AST). ROSE provides users a number of APIs for program analysis and transformation, such as call graph analysis, control flow analysis, and data flow analysis. The wealth of available analyses makes ROSE an ideal tool both for experienced compiler researchers and for tool developers with minimal compiler background to build custom tools for static analysis, program optimization, and performance analysis.
II-B Polyhedral Model
We rely on the polyhedral model to characterize the iteration spaces of certain types of loops. The polyhedral model is an intuitive algebraic representation that treats each loop iteration as a lattice point inside the polyhedral space defined by the loop bounds and conditions. Nested loops can be translated into a polyhedral representation if and only if they have affine bounds and conditional expressions, and the polyhedral space generated from them is a convex set. Moreover, the polyhedral model can generate a generic representation, parameterized by the loop parameters, that describes the loop iteration domain. In addition to program transformation, the polyhedral model is broadly used for automating optimization and parallelization in compilers (e.g., CLooG) and other tools [6, 7, 8].
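To make the idea concrete (this is an illustration of the concept, not Mira's implementation), the lattice points of a small affine iteration domain can be enumerated directly; each point corresponds to one loop iteration. The domain shown here is our own example.

```python
# Hypothetical illustration: count the lattice points of the convex
# iteration domain {(i, j) : 0 <= i < N, 0 <= j < N, i + j <= N}.
# Each lattice point corresponds to exactly one loop iteration.
def count_lattice_points(N):
    return sum(
        1
        for i in range(N)
        for j in range(N)
        if i + j <= N
    )

print(count_lattice_points(4))  # → 13
```

A polyhedral library would compute the same count symbolically as a function of N instead of enumerating points.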
II-C Performance Measurement and Analysis Tools
Performance tools are capable of gathering performance metrics either dynamically (instrumentation, sampling) or statically. PAPI is used to access hardware performance counters through both high- and low-level interfaces, typically via manual or automated instrumentation of application source code. The high-level interface supports simple measurement and event-related functionality such as start, stop, or read, whereas the low-level interface is designed for more complicated needs. The Tuning and Analysis Utilities (TAU) toolkit is another state-of-the-art performance tool that uses PAPI as the low-level interface to gather hardware counter data. TAU can monitor and collect performance metrics through instrumentation or event-based sampling. In addition, TAU provides a performance database for data storage and an analysis and visualization component, ParaProf. Several similar performance tools, including HPCToolkit, Scalasca, MIAMI, gprof, and Byfl, can also be used to analyze application or system performance through runtime measurements.
Mira is built on top of the ROSE compiler framework, which provides several useful APIs as front-ends for parsing the source file and disassembling the ELF file. Mira is implemented in C++ and can process C/C++ source code as input. Figure 1 illustrates the entire workflow of Mira for performance model generation and analysis, which comprises three major parts:
Input Processor - Input parsing and disassembling
Metric Generator - AST traversal and metric generation
Model Generator - Model generation in Python
III-A Processing Input Files
III-A1 Source code and binary representations
The Input Processor is the front-end of Mira, and its primary goal is to process source code and ELF object file inputs and build the corresponding ASTs (Abstract Syntax Trees). Mira analyzes these ASTs to locate critical structures such as function bodies, loops, and branches. Furthermore, because the source AST preserves high-level source information, such as variable names, types, the order of statements, and the left- and right-hand sides of assignments, Mira incorporates this high-level information into the generated model. For instance, one can query all information about the static control part (SCoP) of a loop, including loop initialization, loop condition, and increment (these are not explicit in the binary code). In addition, because variable names are preserved, identifying loop indexes is much easier and processing of the variables inside the loop is more accurate.
III-A2 Bridge between source and binary
The AST is the output of the front-end part of Mira. After processing the inputs, two ASTs are generated separately from the source and compiled binary codes, representing the structures of the two inputs. Mira is designed to use information retrieved from these trees to improve the accuracy of the generated models. Therefore, it is necessary to build connections between the two ASTs so that for a structure in the source AST, Mira can instantly locate the corresponding nodes in the binary one.
Although both ASTs are representations of the inputs, they have entirely different shapes, node organizations, and node semantics. A partial binary AST (representing a function) is shown in Figure 3. Each node of the binary AST describes a syntax element of the assembly code, such as SgAsmFunction or SgAsmX86Instruction. As shown in Figure 3, a function in the binary AST is composed of multiple instructions, while in the source AST, a function is composed of statements. Hence, one source AST node typically corresponds to several nodes in the binary AST, which complicates building connections between them.
Because the differences between the two AST structures make it difficult to connect source to binary directly, an alternative way is needed to make the connection between ASTs more precise. Inspired by debuggers, line numbers are used in our tool as the bridge to associate source with binary. When we debug a program, the debugger knows exactly the source line and column of the error location. By using the -g option during program compilation, the compiler inserts debug-related information into the object file for future reference. Most compilers and debuggers use DWARF (Debugging With Attributed Record Formats) as the debugging file format to organize the information for source-level debugging. DWARF categorizes data into several sections, such as .debug_info, .debug_frame, etc. The .debug_line section stores the line number information.
The line number debugging information allows us to decode the corresponding DWARF section and map each line number to the associated instruction addresses. Because line number information in the source AST is already preserved in each node, unlike in the binary AST, it can be retrieved directly. After line numbers are obtained from both source and binary, connections are built in each direction between the two ASTs. As mentioned in the previous section, a source AST node normally links to several binary AST nodes because of the different meanings of nodes. Specifically, a statement contains several instructions, but an instruction has only one associated source location. Once the nodes in the binary AST are associated with source locations, further analysis can be performed. For instance, it is possible to narrow the analysis to a small scope and collect data such as the instruction count and type in a particular code fragment, such as a function body, a loop body, or even a single statement.
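The bridging logic itself is simple once the DWARF line table has been decoded (for example with a library such as pyelftools). The sketch below uses our own names and synthetic (address, line) pairs; it is not Mira's code, only an illustration of the one-to-many mapping described above.

```python
from collections import defaultdict

# Given (instruction_address, source_line) pairs such as those decoded
# from the DWARF .debug_line section, build the two-way association:
# one source line maps to many instruction addresses, while each
# instruction address maps back to exactly one source line.
def build_line_map(entries):
    line_to_addrs = defaultdict(list)
    addr_to_line = {}
    for addr, line in entries:
        line_to_addrs[line].append(addr)
        addr_to_line[addr] = line
    return line_to_addrs, addr_to_line

# A statement on source line 12 compiled into three instructions:
entries = [(0x4004F6, 12), (0x4004FA, 12), (0x4004FE, 12), (0x400502, 13)]
line_to_addrs, addr_to_line = build_line_map(entries)
print(len(line_to_addrs[12]))  # → 3
```

With this mapping, all instructions belonging to a statement, loop body, or function can be gathered by looking up its source line range.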
Table I: Loop and statement statistics for the surveyed applications
|Application||Number of loops||Number of statements||Statements in loops||Percentage|
III-B Generating metrics
The Metric Generator is a key part of the framework and has a significant impact on the accuracy of the generated model. It receives the ASTs as inputs from the Input Processor and produces metrics for model generation. An AST traversal is needed to collect and propagate necessary information about specific structures in the program, so that the program representation is organized appropriately to guide model generation precisely. During the AST traversal, additional information is attached to particular tree nodes as a supplement used for analysis and modeling. For example, a long statement may span several lines; in this case, all the line numbers are collected and stored as extra information attached to the statement node.
To best model the program, the metric generator traverses the source AST twice: first bottom-up and then top-down. The upward traversal propagates detailed information about specific structures up to the head node of each sub-tree. For instance, as shown in Figure 2, SgForStatement is the head node of the loop sub-tree; however, this node itself does not store any information about the loop. Instead, loop information such as the loop initialization, loop condition, and step is stored in SgForInitStatement, SgExprStatement, and SgPlusPlusOp, respectively, as child nodes. In this case, the bottom-up traversal recursively collects information from the leaves to the root and organizes it as extra data attached to the loop head node. The attached information serves as context during modeling.
After the bottom-up traversal, a top-down traversal is applied to the AST. Because information about each sub-tree's structure has already been collected and attached, the downward traversal focuses primarily on the head nodes of sub-trees and on nodes of interest, such as loop head nodes, if head nodes, function head nodes, and assignment nodes. Moreover, the top-down traversal must pass necessary information down from parent to child nodes in order to model complicated structures correctly. For example, for nested loops or branches inside loops, the inner structure requires information from the parent node as outer context to model itself; otherwise these complicated structures cannot be handled correctly. In addition, instruction information from the ELF AST is connected and associated with the corresponding structures during the top-down traversal.
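The two-pass scheme can be sketched on a toy tree (our own minimal node class, not ROSE's IR): the bottom-up pass attaches sub-tree information to the head node, and the top-down pass hands outer context, such as an enclosing loop's iteration count, to inner nodes.

```python
# Minimal sketch of the two-pass traversal described above.
class Node:
    def __init__(self, kind, children=(), info=None):
        self.kind, self.children = kind, list(children)
        self.info = info or {}   # intrinsic data (e.g. iteration count)
        self.attached = {}       # filled by the bottom-up pass
        self.context = {}        # filled by the top-down pass

def bottom_up(node):
    # Recursively collect child information onto each head node.
    for child in node.children:
        bottom_up(child)
        node.attached.update(child.attached)
        node.attached.update(child.info)

def top_down(node, context=None):
    # Pass outer context downward; loop head nodes extend the context.
    node.context = dict(context or {})
    if node.kind == "ForStatement":
        ctx = dict(node.context)
        ctx["iterations"] = ctx.get("iterations", 1) * node.attached["iterations"]
    else:
        ctx = node.context
    for child in node.children:
        top_down(child, ctx)

# A loop whose iteration count lives in a child node, as in ROSE's AST:
init = Node("ForInit", info={"iterations": 10})
body = Node("ExprStatement")
loop = Node("ForStatement", children=[init, body])
bottom_up(loop)
top_down(loop)
print(body.context["iterations"])  # → 10
```

In a nested loop, the multiplication in the loop-head branch is what compounds iteration counts from outer to inner scopes.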
III-C Generating Models
The Model Generator is built on the Metric Generator; it consumes the intermediate analysis results of the metric generator and generates an easy-to-use model. To achieve flexibility, the generated model is coded in Python so that the model's results can be fed directly into various scientific libraries for further analysis and visualization. In some cases, the model is ready to execute, and users can run it directly without providing any input. However, when the model contains parametric expressions, users must supply extra input to run it. Parametric expressions exist in the model because our static analysis cannot handle some cases; for example, when user input is expected in the source code or the value of a variable comes from the return value of a function call, such variables are preserved in the model as parameters to be specified by the user before running the model.
III-C1 Loop Modeling
Loops are common in HPC codes and are typically at the heart of the most time-consuming computations. A loop executes a block of code repeatedly until certain conditions are satisfied. Bastoul et al. surveyed multiple high-performance applications and summarized the results in Table I. The first column shows the number of loops contained in each application. The second column lists the total number of statements in the application, and the third column counts the number of statements covered by loop scopes. The ratio of in-loop statements to the total number of statements is given in the last column. In the data shown in the table, the lowest loop coverage is 77% for quake, and the coverage for the remaining applications is above 80%. This survey data indicates that in-loop statements make up a large majority of the statements in the selected high-performance applications.
III-C2 Using the Polyhedral Model
Whether loops can be precisely described and modeled has a direct impact on the accuracy of the generated model, because information about loops is provided as context for further in-loop analysis. The term "loop modeling" refers to analyzing the static control parts (SCoP) of a loop to obtain information about the loop iteration domain, which includes understanding the initialization, termination condition, and step. Unlike dynamic analysis tools, which may collect runtime information during execution, our approach runs statically, so loop modeling relies primarily on SCoP parsing and analysis. Usually, to model a loop, it is necessary to take several factors into consideration, such as depth, data dependencies, and bounds. Listing 1 shows a basic loop structure whose SCoP is complete and simple, without any unknown variables. For this case, it is possible to retrieve the initial value, upper bound, and step from the AST and then calculate the number of iterations. The iteration count is used as context when analyzing the loop body. For example, once the corresponding instructions are obtained from the binary AST for the statements in Listing 1, the counts of these instructions must be multiplied by the iteration count to reflect the actual situation at runtime.
However, loops in real applications are more complicated, which requires our framework to handle as many scenarios as possible. Therefore, the challenge in modeling loops is to create a general method covering the various cases. To address this problem, Mira uses the polyhedral model to model loops accurately. The polyhedral model is capable of handling an N-dimensional nested loop and represents its iteration domain as an N-dimensional polyhedron. In some cases, the index of the inner loop depends on the outer loop index. As shown in Listing 2, the initial value of the inner index j is based on the value of the outer index i. For such a case, it is possible to derive a closed-form formula as a mathematical model of this loop, but doing so would be difficult and time-consuming; most importantly, it is not general, as the derived formula may not fit other scenarios. To use the polyhedral model for this loop, the first step is to represent the loop bounds as affine functions. For a nest of this form with upper bound N, the bounds of the outer and inner loops are 0 <= i < N and i <= j < N, which can be written as two sets of affine inequalities:
In Figure LABEL:sub@loop:a, the two-dimensional polyhedral area representing the loop iteration domain is created from the two sets of linear inequalities. Each dot in the figure represents a pair of loop indices (i, j), which corresponds to one iteration of the loop. Therefore, by counting the lattice points in the polyhedral space, we can characterize the loop iteration domain and obtain the iteration count. For loops with more complicated SCoPs, such as those containing variables instead of concrete numerical values, the polyhedral model is also applicable. When modeling loops with unknown variables, Mira uses the polyhedral model to generate a parametric expression representing the iteration domain, whose value changes as different inputs are specified. Mira maintains the generated parametric expressions and uses them as context in subsequent analysis. In addition, the unknown variables in the loop SCoP are preserved as parameters until the parametric model is generated. With a parametric model, users need not regenerate the model for different parameter values; they simply adjust the model inputs and run the Python code to produce a concrete value.
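For the triangular domain of Listing 2 (the concrete bound N is our assumption; the listing itself is not reproduced here), a polyhedral counter would emit a closed-form parametric expression rather than enumerate iterations. A quick sketch checks the two against each other:

```python
# The inner index j starts at the outer index i, so the iteration
# domain is the triangle {(i, j) : 0 <= i < n, i <= j < n}.
def iterations_enumerated(n):
    return sum(1 for i in range(n) for j in range(i, n))

def iterations_parametric(n):
    # Closed form a polyhedral counter would emit: n*(n+1)/2 points.
    return n * (n + 1) // 2

for n in (1, 5, 100):
    assert iterations_enumerated(n) == iterations_parametric(n)
print(iterations_parametric(100))  # → 5050
```

The parametric form is what allows the generated model to be re-evaluated for any n without regenerating the model.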
There are exceptions that the polyhedral model cannot handle. For the code snippet in Listing 3, the SCoP of the loop forms a non-convex set (Figure LABEL:sub@loop:d), which the polyhedral model cannot represent. Another problem in this code is that the loop initial value and loop bound depend on the return values of function calls. For static analysis to track and obtain such values, more complex interprocedural analysis is required, which is planned as part of our future work.
In addition to loops, branch statements are common structures. In scientific applications, branch statements are frequently used to verify intermediate results during computation. Branch statements can be handled using information retrieved from the AST; however, the analysis becomes more complicated when a branch statement resides in a loop. In Listing 4, an if constraint is introduced into the previous code snippet. The number of times the statement inside the if executes depends on the branch condition. In our analysis, the polyhedral model of a loop is kept and passed down to the inner scope, so the if node has the information of its outer scope. Because the loop conditions combined with the branch condition again form a polyhedral space, shown in Figure LABEL:sub@loop:b, the polyhedral representation can still model this scenario by adding the branch constraint and regenerating a new polyhedral model for the if node. Comparing Figure LABEL:sub@loop:b with Figure LABEL:sub@loop:a, the iteration domain becomes smaller and the number of lattice points decreases after introducing the constraint, which indicates that the execution count of the statements in the branch is limited by the if condition.
However, some branch constraints break the convexity requirement, so the polyhedral model is not applicable. For the code in Listing 5, the if condition excludes several integer points, causing "holes" in the iteration space, as shown in Figure LABEL:sub@loop:c. The excluded points break the integrity of the space so that it no longer satisfies the definition of a convex set; thus the polyhedral model cannot be applied directly in this circumstance. In this case, the true branch of the if statement raises the problem, but the false branch still satisfies the polyhedral model. Thus we can obtain the count of the true branch by subtraction:

count(true branch) = count(loop iterations) - count(false branch)
Because both the iteration count of the enclosing loop and that of the false branch can be expressed by the polyhedral model, using either concrete values or parametric expressions, the count of the true branch is obtained. The generality of the polyhedral model makes it suitable for most common cases in real applications; however, some cases cannot be handled by the polyhedral model or even by static analysis. For such circumstances, we provide users an option to annotate the branches or loops that Mira cannot handle statically.
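The complement trick can be sketched with a stand-in non-convex condition (Listing 5's actual condition is not reproduced here; `i != j`, which punches a diagonal line of holes in the domain, is our substitute):

```python
# Loop domain: the full square {(i, j) : 0 <= i < n, 0 <= j < n}.
def loop_count(n):
    return n * n

# False branch `i == j` is a convex (one-dimensional) set of n points.
def false_branch_count(n):
    return n

# True branch `i != j` is non-convex, so count it by subtraction:
# count(true) = count(loop) - count(false).
def true_branch_count(n):
    return loop_count(n) - false_branch_count(n)

n = 8
brute_force = sum(1 for i in range(n) for j in range(n) if i != j)
assert true_branch_count(n) == brute_force
print(true_branch_count(n))  # → 56
```

Both terms on the right-hand side remain expressible by the polyhedral model, so the subtraction works for parametric counts as well as concrete ones.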
There are loop and branch cases that we cannot process statically, such as conditionals involving variables unrelated to the loop indices, or external function calls used to compute loop initial values or loop/branch conditions. Mira accepts user annotations to address such problems. We designed three types of annotation: an estimated percentage or numerical value representing the proportion of iterations a branch may take inside a loop, or the number of iterations, which simplifies loop/branch modeling; a variable used as an initial value or condition to complete the polyhedral model; or a flag indicating that a structure or scope should be skipped. To annotate the code, users put the information in a "#pragma" directive of the form #pragma @Annotation information. Mira processes the annotations during metric generation.
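Recognizing such directives is a small lexical task. The sketch below shows one plausible way to collect them during a source scan; the payload grammar and the rule that an annotation applies to the next line are our assumptions, not Mira's documented syntax.

```python
import re

# Match directives of the form `#pragma @Annotation <information>`.
ANNOTATION_RE = re.compile(r"#pragma\s+@Annotation\s+(.*)")

def parse_annotations(source_lines):
    annotations = {}
    for lineno, text in enumerate(source_lines, start=1):
        m = ANNOTATION_RE.search(text)
        if m:
            # Assume the annotation applies to the structure that follows.
            annotations[lineno + 1] = m.group(1).strip()
    return annotations

src = [
    "#pragma @Annotation skip",
    "if (check(a, b)) {",
    "#pragma @Annotation lowerbound:x upperbound:y",
    "for (int j = lo[i]; j < hi[i]; j++) {",
]
print(parse_annotations(src))  # → {2: 'skip', 4: 'lowerbound:x upperbound:y'}
```

The metric generator can then consult this map when it reaches the AST node whose source line carries an annotation.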
In the example shown in Listing 6, the if has a function call as its condition, which causes a failure when Mira tries to generate the model fully automatically. To solve this problem, we specify an annotation in a pragma to provide the missing information and enable Mira to generate a complete model. In the given example, the whole branch scope is skipped when generating metrics. In addition, we annotate the initial value and condition of the inner loop with the variables x and y, because as a static tool Mira cannot obtain values from those arrays. Mira uses the two variables to complete the polyhedral model; they are treated as parameters, with sample values expected from the user at model evaluation time.
Mira organizes the generated model in functions, which correspond to functions in the source code. In the generated model, the function header is modified for two reasons: flexibility and usability. Specifically, each user-defined function in the source code is modeled as a corresponding Python function with a different signature, which includes only the arguments used by the model. In addition, the generated model function has a slightly different name in order to avoid potential conflicts due to different calling contexts or function overloading. For instance, the Python function named foo_2 represents the original C++ function foo but with a reduced number of arguments. In the body of the generated Python function, the original C++ statements are replaced with the corresponding instruction counter metrics retrieved from the binary. These data are stored in Python dictionaries and updated in the same order as the statements. Each function, when called, returns the aggregate counts within its scope. The advantage of this design is that it gives users the freedom to isolate and obtain an overview of particular functions with only minor changes to the model.
Correct handling of function calls involves two steps: locating the corresponding function and combining its metrics into those of the caller. To combine the metrics, we designed a Python helper function, handle_function_call, which takes three arguments: the caller metrics, the callee metrics, and the loop iteration count. It enables Mira to model a function call inside a loop, in which each metric of the callee must be multiplied by the loop iteration count. Mira retrieves the name of the callee function from the source AST node, generates a function call statement in Python, and takes the return values that represent the metrics of the callee function. It then calls handle_function_call to combine the metrics of the caller and the callee.
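The paper names the helper handle_function_call; the body below is our inference from the description above (metrics as category-to-count dictionaries, callee counts scaled by the enclosing loop's iteration count), not Mira's actual implementation.

```python
# Merge callee metrics into caller metrics, scaling each callee count
# by the number of times the call site executes.
def handle_function_call(caller_metrics, callee_metrics, loop_iterations=1):
    combined = dict(caller_metrics)
    for category, count in callee_metrics.items():
        combined[category] = combined.get(category, 0) + count * loop_iterations
    return combined

# A callee invoked 100 times inside a loop (illustrative numbers):
caller = {"integer arithmetic": 500, "data transfer": 200}
callee = {"integer arithmetic": 3, "SSE2 packed arithmetic": 2}
print(handle_function_call(caller, callee, loop_iterations=100))
```

With nested calls, the same helper applies at each level of the call chain, so counts propagate up to the outermost caller.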
III-C6 Architecture Description File
To enable evaluation of the generated performance model in the context of specific architectural features, we provide an architecture description file in which users define architecture-related parameters, such as the number of CPU cores, cache line size, and vector length. Moreover, this user-customized file can be extended to include information that does not exist in the source or binary file, enabling Mira to generate more predictions. For instance, we divided the x86 instruction set into 64 categories in the description file, which Mira uses to estimate the number of instructions in each category for each function in the source file. This representation strikes a balance between fine- and coarse-grained approaches, providing category-based cumulative instruction counts at fine code granularity (down to the statement level), which enables developers to obtain a better understanding of local behavior. Based on the metrics Mira generated in Table II, Figure 6 illustrates the distribution of categorized instructions for the function cg_solve from the miniFE application. The separated piece represents the SSE2 vector instructions, which are the source of the floating-point operations in this function.
Table II: Instruction metrics generated for function cg_solve (miniFE)
|Instruction category||Count|
|Integer arithmetic instruction||6.8E8|
|Integer control transfer instruction||2.26E8|
|Integer data transfer instruction||2.42E9|
|SSE2 data movement instruction||3.67E8|
|SSE2 packed arithmetic instruction||1.93E8|
|64-bit mode instruction||2.59E8|
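Mapping disassembled mnemonics into such categories is a table lookup against the description file. The sketch below is illustrative only: the category names and mnemonic lists are our own abbreviations of a description file that, per the text, actually defines 64 x86 categories.

```python
# A tiny stand-in for the user-provided architecture description file.
ARCH_DESCRIPTION = {
    "integer arithmetic": {"add", "sub", "imul", "inc"},
    "integer data transfer": {"mov", "push", "pop"},
    "SSE2 packed arithmetic": {"addpd", "mulpd", "subpd"},
}

# Count disassembled instruction mnemonics per category.
def categorize(mnemonics):
    counts = {category: 0 for category in ARCH_DESCRIPTION}
    for m in mnemonics:
        for category, members in ARCH_DESCRIPTION.items():
            if m in members:
                counts[category] += 1
                break
    return counts

print(categorize(["mov", "addpd", "add", "mov", "mulpd"]))
```

Summing these per-statement counts, weighted by polyhedral iteration counts, yields category totals like those in Table II.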
III-C7 Generated Model
We describe the model generated (output) by Mira with an example. Figure 5 shows the source code (input) and the generated Python model. The source code (Figure LABEL:sub@model:a) includes a class A defining a member function foo with two array variables as parameters. The member function foo is composed of a nested loop in which we annotate the upper bound of the inner loop with the variable y. The main function creates an instance of class A and calls function foo. Figure LABEL:sub@model:b shows part of the generated Python function foo, whose new name is the combination of its class name, the original function name, and the number of arguments in the original function definition. The body of the generated function A_foo_2 consists of Python statements that keep track of performance metrics. As shown in the generated function, Mira uses the annotation variable y to complete the polyhedral model and preserves y as an argument. Similarly, the generated function main is shown in Figure LABEL:sub@model:c. It calls the A_foo_2 function and then updates its metrics by invoking handle_function_call. The parameter y_16 indicates that the function call is associated with line 16 of the source code. At present, the value of y_16 is specified by the user during model evaluation, and different values can be supplied as function parameters in different function call contexts.
Table III: STREAM floating-point instruction counts
|Array size / Tool||TAU||Mira||Error|
Table IV: DGEMM floating-point instruction counts
|Matrix size / Tool||TAU||Mira||Error|
In this section, we evaluate the correctness of the models generated by Mira by comparing them against TAU in instrumentation mode. Two benchmarks are analyzed statically with Mira and executed dynamically on two different machines. While Mira counts all types of instructions, we focus on floating-point instructions (FPI) in this section because they are an important metric for HPC code analysis. The validation is performed by comparing the floating-point instruction counts produced by Mira with empirical instrumentation-based TAU/PAPI measurements.
IV-A Experiment environment
We conducted the validation on two machines whose specifications are as follows.
Arya - Two Intel Xeon E5-2699v3 2.30GHz 18-core Haswell CPUs and 256GB of memory.
Frankenstein - Two Intel Xeon E5620 2.40GHz 4-core Nehalem CPUs and 22GB of memory.
Two benchmarks are chosen for validation: STREAM and DGEMM. STREAM is designed to measure sustainable memory bandwidth and the corresponding computation rate for simple vector kernels. DGEMM is a widely used benchmark for measuring the floating-point rate on a single CPU; it uses double-precision real matrix-matrix multiplication to compute the floating-point rate. For both benchmarks, the non-OpenMP version is selected and executed serially with one thread.
[Table V: miniFE validation — columns: size, Function, TAU, Mira, Error]
IV-C Mini Application
In addition to the STREAM and DGEMM benchmarks, we also use the miniFE mini-application to verify the results of Mira. MiniFE is composed of several finite-element kernels, including computation of element operators, assembly, sparse matrix-vector product, and vector operations. It assembles a sparse linear system and then solves it using a simple unpreconditioned conjugate-gradient algorithm. Unlike STREAM and DGEMM, where the main function contains the majority of the code, miniFE distributes its kernels across several functions connected by function calls, which tests Mira's capability to handle a long chain of function calls.
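A minimal sketch of how such a call chain can be composed from per-function models is shown below; the function names echo miniFE's kernels, but the per-element instruction costs and the composition scheme are illustrative assumptions, not Mira's actual generated code:

```python
# Sketch: metrics propagate up a call chain because each model evaluates
# the models of its callees (costs per element are assumed for illustration).

def waxpby_model(metrics, n):
    # vector update w = alpha*x + beta*y: assume 3 FP instructions per element
    metrics["fp_ins"] = metrics.get("fp_ins", 0) + 3 * n

def matvec_model(metrics, nnz):
    # sparse matrix-vector product: assume a multiply and an add per nonzero
    metrics["fp_ins"] = metrics.get("fp_ins", 0) + 2 * nnz

def cg_solve_model(metrics, n, nnz, iters):
    # the solver loop drives the callee models once per iteration
    for _ in range(iters):
        matvec_model(metrics, nnz)
        waxpby_model(metrics, n)
```

However long the chain grows, each function's counts are folded into the caller's metrics dictionary, so evaluating the top-level model yields totals for the whole call tree.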
In this section, we present empirical validation results and illustrate the tradeoffs between static and dynamic methods for performance analysis and modeling. We also show a use case in which the generated instruction metrics are used to compute an instruction-based arithmetic intensity, a derived metric that can identify loops that are good candidates for different types of optimization (e.g., parallelization or memory-related tuning).
Tables III, IV and V show floating-point instruction counts for the two benchmarks and the mini application, respectively. The metrics are gathered by evaluating the model generated by Mira and comparing against empirical results obtained through instrumentation-based measurement with TAU and PAPI.
In Figure LABEL:sub@val:a, the X axis shows the size of the input array (20 million, 50 million, and 100 million elements, respectively). The logarithmic Y axis shows floating-point instruction counts. Similarly, in Figure LABEL:sub@val:b, the X axis shows input size and the Y axis FPI counts. Figures LABEL:sub@val:c and LABEL:sub@val:d show FPI counts for three functions at different problem sizes. We show details for the cg_solve function, which solves the sparse linear system, because it accounts for the bulk of the floating-point computation in this mini app. The function waxpby and the operator overloading function matvec_std::operator() are in cg_solve's call tree and are invoked in the loop. Our results show that the floating-point instruction counts produced by Mira are close to the TAU measurements (of the PAPI_FP_INS values), with errors of up to 3.08%. The difference between static estimates and measured quantities increases with problem size, which indicates discrepancies within some of the loops. This is not unexpected: static analysis cannot capture dynamic behavior with complete accuracy. The measured values are based on all executed instructions, including those in external library function calls, which at present are not visible to and hence not analyzed by Mira. In such scenarios, Mira can track only the function call statement itself, which contains just a few stack manipulation instructions, while the body of the invoked function is skipped. In the future we plan to provide mechanisms for handling these cases, including limited binary analysis of the corresponding library supplemented by user annotations.
In addition to correctness, we compare the execution time of the static and empirical approaches. In empirical approaches, the experiment has to be repeated for different input values, and in some cases multiple runs per input value are required (e.g., when collecting hardware performance counter data). Instrumentation approaches can focus on specific code regions, but most sampling-based approaches collect information for all instructions, hence they potentially incur runtime and memory cost for collecting data on uninteresting instructions. By contrast, our model only needs to be generated once, and can then be evaluated (at low computational cost) for different user inputs and specific portions of the computation. Most importantly, performance analysis via a parametric model can achieve broad coverage without incurring the cost of many application executions.
Another challenge in dynamic approaches is the differences in hardware performance counters, including lack of availability of some types of measurements. For example, in modern Intel Haswell servers, there is no support for FLOP or FPI performance hardware counters. Hence, static performance analysis may be the only way to produce floating-point-based metrics in such cases.
Next, we demonstrate how one can use the Mira-generated metrics to model the estimated instruction-based floating-point arithmetic intensity of the cg_solve function. The general definition of arithmetic intensity is the ratio of arithmetic operations to memory traffic. With an appropriate setting in the architecture description file, we can enable Mira to generate various metrics. As the data in Table II show, Mira categorizes the instructions in cg_solve into seven categories. Among them, "SSE2 packed arithmetic instruction" represents the packed and scalar double-precision floating-point instructions, and "SSE2 data movement instruction" describes the movement of double-precision floating-point data between XMM registers and memory. Therefore, the instruction-based floating-point arithmetic intensity of function cg_solve can simply be calculated as the ratio of the SSE2 packed arithmetic instruction count to the SSE2 data movement instruction count. This is a simple example demonstrating the use of our model; with a more sophisticated architecture description file, Mira can perform more complex predictions.
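The calculation can be sketched as follows; the category names follow the text, while the counts below are illustrative placeholders rather than values from Table II:

```python
# Instruction-based arithmetic intensity from Mira-style instruction categories.

def arithmetic_intensity(categories):
    # ratio of FP arithmetic instructions to FP data-movement instructions
    arith = categories["SSE2 packed arithmetic instruction"]
    moves = categories["SSE2 data movement instruction"]
    return arith / moves

# Placeholder counts for demonstration (not measured values):
example = {
    "SSE2 packed arithmetic instruction": 4.0e9,
    "SSE2 data movement instruction": 8.0e9,
}
```

With these placeholder counts, the intensity is 4e9 / 8e9 = 0.5 arithmetic instructions per data-movement instruction, which in a Roofline-style analysis would flag the loop as memory-bound rather than compute-bound.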
V Related Work
There are two related tools that we are aware of designed for static performance modeling, PBound and Kerncraft. PBound was designed by one of the authors of this paper (Norris) to estimate "best case" or upper performance bounds of C/C++ applications through static compiler analysis. It collects information and generates parametric expressions for particular operations, including memory accesses and floating-point operations, which are combined with user-provided architectural information to compute machine-specific performance estimates. However, it relies purely on source code analysis and ignores the effects of compiler transformations (e.g., compiler optimization), frequently resulting in bound estimates that are not realistically achievable.
Hammer et al. have created Kerncraft, a static performance modeling tool focused on the memory hierarchy. Kerncraft characterizes loop performance and scaling behavior based on the Roofline or Execution-Cache-Memory (ECM) model. It uses YAML files to describe the low-level architecture and the Intel Architecture Code Analyzer (IACA) to operate on binaries in order to gather loop-relevant information. However, the reliance on IACA limits the applicability of the tool: its binary analysis is restricted to Intel architectures and compilers.
Tools such as PDT provide source-level instrumentation, while MAQAO and Dyninst  use binary instrumentation for dynamic program analysis. Apollo  is a recent API-based dynamic analysis tool that provides a lightweight approach based on machine learning to select the best tuning parameter values while reducing the modeling cost by spreading it over multiple runs instead of constructing the model at runtime.
System simulators can also be used for modeling, for example the Structural Simulation Toolkit (SST). However, as a system simulator, SST has a different focus: it simulates the whole system rather than a single application, and it analyzes the interaction among architecture, programming model, and communication system. Moreover, simulation is computationally expensive, which limits the size and complexity of the applications that can be simulated. Compared with PBound, Kerncraft, and Mira, SST is relatively heavyweight, complex, and hardware-focused, making it more suitable for exploring architectures than for analyzing application performance.
In this paper, we presented Mira, a framework for static performance modeling and analysis. We aim to provide a fast, accurate, and flexible method for performance modeling as a supplement to existing tools, in order to address problems that cannot be solved by current tools. Our method focuses on floating-point operations and achieves good accuracy on benchmarks. These preliminary results suggest that it can be an effective approach for performance analysis.
While at present Mira can successfully analyze realistic application codes in many cases, much work remains to be done. In future work, the first problem we are eager to tackle is appropriately handling more function-calling scenarios, especially calls into system or third-party libraries. We will also consider combining dynamic analysis and introducing more performance metrics into the model to accommodate cases where control flow cannot be characterized accurately through purely static analysis. We also plan to extend Mira to enable characterization of shared-memory parallel programs.
-  S. H. K. Narayanan, B. Norris, and P. D. Hovland, “Generating performance bounds from source code,” in Parallel Processing Workshops (ICPPW), 2010 39th International Conference on. IEEE, 2010, pp. 197–206.
-  S. Williams, A. Waterman, and D. Patterson, “Roofline: an insightful visual performance model for multicore architectures,” Communications of the ACM, vol. 52, no. 4, pp. 65–76, 2009.
-  D. Quinlan, “ROSE homepage,” http://rosecompiler.org.
-  L.-N. Pouchet, U. Bondhugula, C. Bastoul, A. Cohen, J. Ramanujam, P. Sadayappan, and N. Vasilache, “Loop transformations: Convexity, pruning and optimization,” in 38th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages (POPL’11). Austin, TX: ACM Press, Jan. 2011, pp. 549–562.
-  C. Bastoul, “Code generation in the polyhedral model is easier than you think,” in PACT’13 IEEE International Conference on Parallel Architecture and Compilation Techniques, Juan-les-Pins, France, September 2004, pp. 7–16.
-  M. Griebl, “Automatic parallelization of loop programs for distributed memory architectures,” 2004.
-  U. Bondhugula, A. Hartono, J. Ramanujam, and P. Sadayappan, “A practical automatic polyhedral parallelizer and locality optimizer,” in ACM SIGPLAN Notices, vol. 43, no. 6. ACM, 2008, pp. 101–113.
-  T. Grosser, A. Groesslinger, and C. Lengauer, “Polly - performing polyhedral optimizations on a low-level intermediate representation,” Parallel Processing Letters, vol. 22, no. 04, p. 1250010, 2012.
-  P. J. Mucci, S. Browne, C. Deane, and G. Ho, “PAPI: A portable interface to hardware performance counters,” in Proceedings of the Department of Defense HPCMP Users Group Conference, 1999, pp. 7–10.
-  S. S. Shende and A. D. Malony, “The TAU parallel performance system,” International Journal of High Performance Computing Applications, vol. 20, no. 2, pp. 287–311, 2006.
-  L. Adhianto, S. Banerjee, M. Fagan, M. Krentel, G. Marin, J. Mellor-Crummey, and N. R. Tallent, “HPCToolkit: Tools for performance analysis of optimized parallel programs,” Concurrency and Computation: Practice and Experience, vol. 22, no. 6, pp. 685–701, 2010.
-  M. Geimer, F. Wolf, B. J. Wylie, E. Ábrahám, D. Becker, and B. Mohr, “The Scalasca performance toolset architecture,” Concurrency and Computation: Practice and Experience, vol. 22, no. 6, pp. 702–719, 2010.
-  G. Marin, J. Dongarra, and D. Terpstra, “MIAMI: A framework for application performance diagnosis,” in Performance Analysis of Systems and Software (ISPASS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 158–168.
-  S. L. Graham, P. B. Kessler, and M. K. McKusick, “gprof: A call graph execution profiler,” in ACM SIGPLAN Notices, vol. 17, no. 6. ACM, 1982, pp. 120–126.
-  S. Pakin and P. McCormick, “Hardware-independent application characterization,” in Workload Characterization (IISWC), 2013 IEEE International Symposium on. IEEE, 2013, pp. 111–112.
-  C. Bastoul, A. Cohen, S. Girbal, S. Sharma, and O. Temam, “Putting polyhedral loop transformations to work,” in International Workshop on Languages and Compilers for Parallel Computing. Springer, 2003, pp. 209–225.
-  M. A. Heroux, D. W. Doerfler, P. S. Crozier, J. M. Willenbring, H. C. Edwards, A. Williams, M. Rajan, E. R. Keiter, H. K. Thornquist, and R. W. Numrich, “Improving performance via mini-applications,” Sandia National Laboratories, Tech. Rep. SAND2009-5574, Sept. 2009.
-  J. D. McCalpin, “A survey of memory bandwidth and machine balance in current high performance computers,” IEEE TCCA Newsletter, vol. 19, p. 25, 1995.
-  P. R. Luszczek, D. H. Bailey, J. J. Dongarra, J. Kepner, R. F. Lucas, R. Rabenseifner, and D. Takahashi, “The HPC Challenge (HPCC) benchmark suite,” in Proceedings of the 2006 ACM/IEEE Conference on Supercomputing. Citeseer, 2006, p. 213.
-  J. Hammer, G. Hager, J. Eitzinger, and G. Wellein, “Automatic loop kernel analysis and performance modeling with kerncraft,” in Proceedings of the 6th International Workshop on Performance Modeling, Benchmarking, and Simulation of High Performance Computing Systems. ACM, 2015, p. 4.
-  J. Hofmann, J. Eitzinger, and D. Fey, “Execution-cache-memory performance model: Introduction and validation,” arXiv preprint arXiv:1509.03118, 2015.
-  Intel, “Intel architecture code analyzer homepage,” https://software.intel.com/en-us/articles/intel-architecture-code-analyzer.
-  K. A. Lindlan, J. Cuny, A. D. Malony, S. Shende, F. Juelich, R. Rivenburgh, C. Rasmussen, and B. Mohr, “A tool framework for static and dynamic analysis of object-oriented software with templates,” in Proceedings of the 2000 ACM/IEEE Conference on Supercomputing. IEEE Computer Society, 2000. [Online]. Available: http://dl.acm.org/citation.cfm?id=370049.370456
-  L. Djoudi and D. Barthou, “MAQAO: Modular assembler quality analyzer and optimizer for Itanium 2,” Workshop on EPIC Architectures and Compiler Technology, 2005.
-  A. R. Bernat and B. P. Miller, “Anywhere, any-time binary instrumentation,” in Proc. of the 10th ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools, ser. PASTE ’11. ACM, 2011, pp. 9–16. [Online]. Available: http://doi.acm.org/10.1145/2024569.2024572
-  D. Beckingsale, O. Pearce, I. Laguna, and T. Gamblin, “Apollo: Reusable models for fast, dynamic tuning of input-dependent code,” in The 31th IEEE International Parallel and Distributed Processing Symposium, 2017.
-  A. Rodrigues, K. S. Hemmert, B. W. Barrett, C. Kersey, R. Oldfield, M. Weston, R. Riesen, J. Cook, P. Rosenfeld, E. Cooper-Balis, and B. Jacob, “The structural simulation toolkit,” SIGMETRICS Perform. Eval. Rev., vol. 38, no. 4, pp. 37–42, March 2011.