Using Meta-heuristics and Machine Learning for Software Optimization of Parallel Computing Systems: A Systematic Literature Review
While modern parallel computing systems offer high performance, utilizing these powerful computing resources to the fullest extent demands advanced knowledge of various hardware architectures and parallel programming models. Furthermore, optimized software execution on parallel computing systems demands consideration of many parameters at compile-time and run-time. Determining the optimal set of parameters in a given execution context is a complex task; to address this issue, researchers have proposed different approaches that use heuristic search or machine learning. In this paper, we undertake a systematic literature review to aggregate, analyze, and classify the existing software optimization methods for parallel computing systems. We review approaches that use machine learning or meta-heuristics for software optimization at compile-time and run-time. Additionally, we discuss challenges and future research directions. The results of this study may help to better understand the state-of-the-art techniques that use machine learning and meta-heuristics to deal with the complexity of software optimization for parallel computing systems. Furthermore, they may aid in understanding the limitations of existing approaches and in identifying areas for improvement.
Keywords: Parallel computing, machine learning, meta-heuristics, software optimization
Traditionally, parallel computing (Padua, 2011) systems have been used for scientific and technical computing. Scientific and engineering computational problems are usually complex and resource intensive. Solving these problems efficiently requires parallel computing systems that may comprise multiple processing units. The emergence of multi-core and many-core processors in the last decade has led to the pervasiveness of parallel computing systems, from embedded systems and personal computers to data centers and supercomputers. While in the past parallel computing was the focus of only a small group of scientists and engineers at supercomputing centers, nowadays virtually all programmers are exposed to parallel processors that comprise multiple or many cores (Jeffers and Reinders, 2015).
Modern parallel computing systems offer high performance capabilities. In recent years, the computational capabilities of supercomputing centers have been increasing rapidly. For example, the average performance of the top 10 supercomputers was 0.84 PFlop/s in 2010, climbed to 11.16 PFlop/s in 2014, and reached 20.63 PFlop/s in 2016 (TOP500, 2016). With such exciting performance gains, the power consumption of these supercomputing centers becomes a serious issue. For example, according to the TOP500 list (TOP500, 2016), from 2010 to 2016 the average power consumption of the top 10 supercomputers increased from 2.98 MW to 8.88 MW, an increase of about 198%.
Utilizing these resources to attain the highest possible performance while keeping energy consumption low demands significant knowledge of vastly different parallel computing architectures and programming models. Improving the resource utilization of parallel computing systems (including heterogeneous systems that comprise multiple non-identical processing elements) is important, yet difficult to achieve (Jin et al, 2016). For example, for data-intensive applications the limited bandwidth of the PCIe interconnect forces developers to use the resources of the host only, which leads to underutilization of the system. Similarly, in compute-intensive applications, while the accelerating device is utilized, the host CPUs remain idle, which wastes energy and performance. Approaches that intelligently manage the resources of host CPUs and accelerating devices to address such inefficiencies seem promising (Mittal and Vetter, 2015).
To achieve higher performance, scalability, and energy efficiency, engineers often combine CPUs, GPUs, or FPGAs. In such environments, system developers need to consider multiple execution contexts with different programming abstractions and run-time systems. There is a consensus that software development for parallel computing systems, especially heterogeneous systems, is significantly more complex than for traditional sequential systems. In addition to the programmability challenges, performance portability of programs across platforms is essential and challenging for productive software development, due to the architectural differences between multi-core and many-core processors (Benkner et al, 2011).
Software development and optimal execution on parallel computing systems expose programmers and tools to a large number of parameters (Sandrieser et al, 2012) at compile-time and at run-time. Examples of properties for a GPU-accelerated system include: CPU count, GPU count, number of CPU cores, CPU core architecture, CPU core speed, memory hierarchy levels, GPU architecture, GPU device memory, GPU SM count, CPU cache, CPU cache line, memory affinity, run-time system, etc. Finding the optimal set of parameters for a specific context is a non-trivial task, and therefore many methods for software optimization that use meta-heuristics and machine learning have been proposed. A systematic literature review may help to aggregate, analyze, and classify the proposed approaches and derive the major lessons learned.
In this paper, we conduct a systematic literature review of approaches for software optimization of parallel computing systems. We focus on approaches that use machine learning or meta-heuristics and that have been published since the year 2000. We classify the selected papers based on the software life-cycle activities (compile-time or run-time), target computing systems, optimization methods, and period of publication. Furthermore, we discuss existing challenges and future research directions. The aims of this systematic literature review are to:
systematically study the state-of-the-art software optimization methods for parallel computing systems that use machine learning or meta-heuristics;
classify the existing studies based on the software life-cycle activities (compile-time, and run-time), target computing systems, optimization methods, and period of publication;
discuss existing challenges and future research directions.
Figure 1 depicts our solution for browsing the results of the literature review, which we have developed using the SurVis (Beck et al, 2016) literature visualization tool. The browser is available on-line at www.smemeti.com/slr/ and enables filtering of the review results based on the optimization method, software life-cycle activity, parallel computing architecture, keywords, and authors. A time-line visualizes the number of publications per year. Publications that match the filtering criteria are listed on the right-hand side; for each publication, the browser displays the title, authors, abstract, optimization method, life-cycle activity, target system architecture, keywords, and a representative figure. The on-line literature browser is easy to extend with future publications that fit the scope of this review.
The rest of the paper is organized as follows. In Section 2 we describe the research methodology. In Section 3, we give an overview of parallel computing systems, software optimization techniques, and software optimization at different life-cycle activities. For each of the software life-cycle activities, including compile-time activities (Section 4) and run-time activities (Section 5), we discuss the characteristics of the state-of-the-art research as well as its limitations and future research directions. Finally, in Section 6 we conclude our paper.
2 Research methodology
Our review follows the commonly used stages of a systematic literature review: planning, conducting, and reporting. During the planning stage the following activities are performed: (1) identifying the need for a literature review, (2) defining the research questions of the literature review, and (3) developing/evaluating the protocol for performing the literature review. The activities associated with conducting the literature review include: (1) identifying the research, (2) literature selection, and (3) data extraction and synthesis. The reporting stage includes writing the results of the review and formatting the document. In what follows, we describe in more detail the research method and the major activities performed during this study.
2.1 Research questions
We have defined the following research questions:
RQ1: What is the current state of the art on software optimization techniques for parallel computing systems that use meta-heuristics and machine learning?
RQ2: Which optimization goals are achieved using meta-heuristics and machine learning?
RQ3: Which are the common algorithms used to achieve such optimization goals?
RQ4: Which features are considered during the optimization of parallel computing systems?
2.2 Search and Selection of Literature
The literature search and selection process is depicted in Fig. 3. Based on the objectives of the study, we have selected an initial set of keywords used to search for articles, such as: parallel computing, machine learning, meta-heuristics, and software optimization. To improve the results of the search process, we also consider synonyms of the keywords during the search. The search query is executed on digital electronic databases (such as, ACM Digital Library, IEEEXplore, and Google Scholar), conference venues (such as, SC, ISC, ICAC, PPoPP, ICDCS, CGO, ICPP, Euro-Par, and ParCo), and scientific journals (such as, TOCS, JPDC, JOS). The outcome of the search process is a list of potentially relevant scientific publications.
We then manually select publications by reading the title, abstract, and keywords, which results in a filtered list of relevant scientific publications. Furthermore, a recursive procedure of searching for related articles is performed using the corresponding related-articles feature of each digital library (for example, the ACM Digital Library related papers function powered by IBM Watson, or the Related articles function of Google Scholar).
Additionally, the chain sampling technique (also known as snowball sampling) is used to search for related articles. Chain sampling is a recursive technique that considers existing articles, usually found in the references section of the research publication under study (Biernacki and Waldorf, 1981).
2.3 The Focus and Scope of the Literature Review (Selection Process)
The scope of this literature review includes:
publications that investigate the use of machine learning or meta-heuristics for software optimization of parallel computing systems;
publications that contribute to compile-time activities (code optimization and code generation), and run-time activities (scheduling and adaptation) of software life-cycle;
research published since the year 2000.
While other optimization methods (such as, linear programming, dynamic programming, control theory), and other software optimization activities (such as, design-time software optimization) may be of interest, they are left out of scope to keep the systematic review focused.
2.4 Data Extraction
In accordance with the classification strategy (described in Section 3.3) and the defined research questions (described in Section 2.1), for each of the selected primary studies we have collected information that we consider important to be recorded in order to perform the literature review.
Table 1 shows an excerpt of the data items (used for quantitative and qualitative analysis) collected for each of the selected studies. Data items 1-3 are used for the quantitative analysis related to RQ1. Data item 4 is used to answer RQ2. Data collected for item 5 is used to answer RQ3, whereas data collected for item 6 is used to answer RQ4. Data item 7 is used to classify the selected scientific publications based on the software life-cycle activities (see Table 3), whereas data item 8 is used for the classification based on the target architecture (see Fig. 6).
Table 1: Excerpt of the data items collected for each of the selected studies.

| # | Data item | Description |
|---|---|---|
| 1 | Date | Date of the data extraction |
| 2 | Bibliographic reference | Author, Year, Title, Research Center, Venue |
| 3 | Type of article | Journal article, conference paper, workshop paper, book section |
| 4 | Problem, objectives, solution | What is the problem; what are the objectives of the study; how the proposed solution works? |
| 5 | Optimization Technique | Which Machine Learning or Meta-heuristic algorithm is used? |
| 6 | Considered features | The list of considered features used for optimization |
| 7 | Life-cycle Activity | Code Optimization, Code Generation, Scheduling, Adaptation? |
| 8 | Target architecture | Single/Multi-node system, Grid Computing, Cloud Computing |
| 9 | Findings and conclusions | What are the findings and conclusions? |
| 10 | Relevance | Relevance of the study in relation to the topic under consideration |
3 Taxonomy and Terminology
In this section, we provide an overview of parallel computing systems and software optimization approaches with a focus on machine learning and meta-heuristics. Thereafter, we present our approach for classifying the state-of-the-art optimization techniques for parallel computing.
3.1 Parallel Computing Systems
A parallel computing system comprises a set of interconnected processing elements and memory modules. Based on the memory architecture, parallel computers can generally be categorized into shared and distributed memory systems. In shared memory parallel computing systems, processing elements communicate through a global shared memory, whereas in distributed memory systems every processing element has its own local memory and communication is performed through message passing. While shared memory systems have shown limited scalability, distributed memory systems have proven to be highly scalable. Most of the current parallel computing systems use shared memory within a node and distributed memory between nodes (Barney et al, 2010).
According to the TOP500 (TOP500, 2016), in the 1990s the commonly used parallel computing systems were symmetric multiprocessing (SMP) systems and massively parallel processing (MPP) systems. SMPs are shared memory systems where two or more identical processing units share other system resources (main memory, I/O devices) and are controlled by a single operating system. MPPs are distributed memory systems where a large number of processing units (or separate computers) are housed in the same place. The processing units share no system resources; each has its own operating system, and they communicate through a high-speed network. The main computing models within distributed parallel computing systems include cluster (Sterling et al, 1995; Dongarra et al, 2005), grid (Smanchat et al, 2013; Buyya et al, 2009; Foster and Kesselman, 2003; Sadashiv and Kumar, 2011), and cloud computing (Sadashiv and Kumar, 2011; Foster et al, 2008; Malawski et al, 2015).
Nowadays, the mainstream platforms for parallel computing consist, at the node level, of multi-core and many-core processors. Multi-core processors have multiple cores (two, four, eight, twelve, sixteen, ...), and even more cores are expected in the future. Many-core processors consist of a larger number of cores. The individual cores of many-core systems are specialized to efficiently perform operations such as SIMD, SIMT, speculation, and out-of-order execution. These cores are more energy efficient because they usually run at a lower frequency.
Systems that comprise multiple identical cores or processors are known as homogeneous systems, whereas heterogeneous systems comprise non-identical cores or processors. As of November 2017, the TOP500 list (TOP500, 2016) contains several supercomputers that comprise multiple heterogeneous nodes. For example, a node of Tianhe-2 (2nd most powerful supercomputer) comprises Intel Ivy-Bridge multi-core CPUs and Intel Xeon Phi many-core accelerators; Piz Daint (3rd) consists of Intel Xeon E5 multi-core CPUs and NVIDIA Tesla P100 many-core GPUs (Memeti et al, 2016).
Programming parallel computing systems, especially heterogeneous ones, is significantly more complex than programming sequential processors (Pllana et al, 2008). Programmers are exposed to various parallel programming languages (often implemented as extensions of general-purpose programming languages such as C and C++), including OpenMP (OpenMP, 2013), MPI (Gropp et al, 1999), OpenCL (Stone et al, 2010), NVIDIA CUDA (Nvidia, 2015), OpenACC (Wienke et al, 2012), and Intel TBB (Voss and Kim, 2011). Additionally, the programmer is exposed to different architectures with different characteristics (such as # CPU/GPU devices, # cores, core speed, run-time system, memory and memory levels, cache size). Finding the system configuration that results in the highest performance is challenging. In addition to the programmability challenge, heterogeneous parallel computing systems bring the portability challenge, which means that programs developed for one processor architecture (for instance, Intel Xeon Phi) may not function on another processor architecture (such as a GPU). Manual software porting and performance tuning for various architectures may be prohibitive.
Existing approaches, discussed in this study, propose several solutions that use machine learning or meta-heuristics during compile-time and run-time to alleviate the programmability and performance portability challenges of parallel computing systems.
3.2 Software Optimization Approaches
In computer science, selecting the best solution from a set of available alternatives with respect to given criteria is a frequent need. Based on the type of values the model variables can take, optimization problems can be broadly classified into continuous and discrete. Continuous optimization problems are concerned with the case where the model variables can take any value permitted by some given constraints, and they are generally easier to solve. Given a point x, continuous optimization techniques can be used to infer information about the neighboring points of x (Gould, 2006).
In contrast, in discrete optimization (also known as combinatorial optimization) the model variables belong to a discrete set of values (typically a subset of the integers). Discrete optimization deals with problems where we have to choose an optimal solution from a finite number of possibilities. Discrete optimization problems are usually hard to solve, and often only enumeration of all possible solutions is guaranteed to give the correct result. However, enumerating all solutions in a large search space is prohibitively demanding.
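To illustrate the scale of the problem, consider exhaustive search over a hypothetical set of independent binary compiler flags; the numbers below (60 flags, one million evaluations per second) are illustrative assumptions, not measurements from the surveyed studies.

```python
# Illustrative back-of-the-envelope calculation (all numbers are assumptions).
num_flags = 60                    # hypothetical independent on/off optimization flags
configurations = 2 ** num_flags   # every subset of flags is a distinct configuration
evals_per_second = 1_000_000      # assumed evaluation throughput

years = configurations / evals_per_second / (3600 * 24 * 365)
print(f"{configurations:.2e} configurations, about {years:,.0f} years to enumerate")
```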
Heuristic-guided approaches are designed to solve optimization problems more quickly by finding approximate solutions when other methods are too slow or fail to find any exact solution. These approaches select near-optimal solutions within a given time frame (that is, they trade off optimality for speed). While heuristics are designed to solve a particular problem (problem-dependent), meta-heuristics can be applied to a broad range of problems. They can be thought of as higher-level heuristics that are designed to determine a near-optimal solution to an optimization problem with limited computation capacity and knowledge about the problem.
In what follows, we first describe meta-heuristics and list commonly used algorithms, and thereafter we describe machine learning in the context of software optimization.
3.2.1 Meta-heuristics
Meta-heuristics are high-level algorithms that are capable of determining a sufficiently satisfactory (near-optimal) solution to an optimization problem with limited domain knowledge and computation capacity. As meta-heuristics are problem-independent, they can be used for a variety of problems. Meta-heuristic algorithms are often used for the management and efficient use of resources to increase productivity (Press et al, 2007; Wolsey and Nemhauser, 2014). In cases where the search space is large and exhaustive search, iterative methods, or simple heuristics are impractical, meta-heuristics can often find good solutions with less computational effort. Meta-heuristics have been shown to provide efficient solutions to different problems, such as the minimum spanning tree (MST), the traveling salesman problem (TSP), shortest path trees, and matching problems. Selecting the most suitable heuristic for a specific problem is important for reaching a near-optimal solution more quickly. However, this process requires consideration of various factors, such as the domain type, search space, computational time, and solution quality (Memeti et al, 2016; Braun et al, 2001).
In the context of software optimization the commonly used meta-heuristics include Genetic Algorithms, Simulated Annealing, Ant Colony Optimization, Local Search, Tabu Search, Particle Swarm Optimization (see Figure 4).
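As an illustration of how such a meta-heuristic operates, the following is a minimal simulated annealing sketch; the cost model (a synthetic execution-time function of the thread count) and all constants are assumptions for demonstration only, not part of any surveyed approach.

```python
import math
import random

def simulated_annealing(cost, neighbor, initial, t_start=1.0, t_end=1e-3, alpha=0.95, iters=50):
    """Generic simulated annealing loop that minimizes `cost`, starting from `initial`."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t_start
    while t > t_end:
        for _ in range(iters):
            candidate = neighbor(current)
            candidate_cost = cost(candidate)
            delta = candidate_cost - current_cost
            # Always accept improvements; accept worse moves with a temperature-dependent probability.
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, candidate_cost
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= alpha  # cooling schedule
    return best, best_cost

# Toy usage: pick a thread count in 1..64 that minimizes a synthetic execution-time model.
def exec_time(threads):
    return 100 / threads + 0.4 * threads        # assumed cost: diminishing speed-up vs. overhead

def tweak(threads):
    return min(64, max(1, threads + random.choice([-4, -2, -1, 1, 2, 4])))

print(simulated_annealing(exec_time, tweak, initial=1))
```

The same skeleton applies to richer configuration spaces (flag sets, workload fractions, thread affinities) by changing the cost and neighbor functions.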
3.2.2 Machine Learning
Machine Learning is a technique that allows computing systems to learn (that is, improve) from experience (available data). Mitchell (1997) defines Machine Learning as follows: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E".
Machine learning programs operate by building a prediction model from a set of training data, which is later used to make data-driven predictions, rather than by following hard-coded static instructions. Some of the most popular machine learning algorithms (depicted in Fig. 4) include regression, decision trees, support vector machines, Bayesian inference, random forests, and artificial neural networks.
An important process while training a model is the feature selection, because the efficiency of models depends on the selected variables. It is critical to choose features that have significant impact on the prediction model. There are different feature selection techniques that can find features that contain the most useful information to distinguish between classes, for example mutual information score (MIS) (Duda et al, 1973), greedy feature selection (Stephenson and Amarasinghe, 2005), or information gain ratio (Guyon and Elisseeff, 2003).
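The following sketch illustrates scoring features by mutual information, assuming scikit-learn is available; the feature names and the synthetically labeled data are hypothetical.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
feature_names = ["mem_accesses", "arith_ops", "branches", "loop_iterations"]  # hypothetical features
X = rng.integers(1, 1000, size=(200, len(feature_names)))
# Synthetic label: pretend that loops with many iterations benefit from unrolling.
y = (X[:, 3] > 500).astype(int)

scores = mutual_info_classif(X, y, random_state=0)
for name, score in sorted(zip(feature_names, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```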
Depending on the way the prediction model is trained, machine learning may be supervised or unsupervised. In supervised machine learning the prediction model learns from labeled examples, which means that both the input and the output are known in the training data set. Supervised learning uses classification techniques to predict discrete responses (such as determining whether an e-mail is genuine or spam, or whether a tumor is malignant or benign), and regression techniques to predict continuous responses (such as changes in temperature or fluctuations in power demand). The most popular supervised learning algorithms for classification problems include Support Vector Machines, Naive Bayes, Nearest Neighbor, and Discriminant Analysis, whereas for regression problems algorithms such as Linear Regression, Decision Trees, and Neural Networks are used. Selecting the best algorithm depends on the size and type of the input data set, the desired output (insight), and how those insights will be used.
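A minimal supervised-learning sketch is shown below: a decision tree classifier trained on synthetic, labeled examples that map program features to the "best" processing device. The features, labels, and data are assumptions for illustration only, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Hypothetical features per kernel: [data size (MB), compute-to-memory ratio, branch fraction]
X = rng.random((300, 3)) * [1000, 50, 1]
# Synthetic labels: large, compute-heavy, branch-poor kernels are mapped to the GPU (label 1).
y = ((X[:, 0] > 200) & (X[:, 1] > 10) & (X[:, 2] < 0.3)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```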
Unsupervised machine learning models have little or no knowledge of what the results should look like. Labeled training data sets are not used for model training; instead, the model aims to find hidden patterns in the data based on statistical properties (for instance, intra-cluster variance) of the training data sets. Unsupervised learning can be used for solving data clustering problems in various domains, for example, sequence analysis, market research, object recognition, social network analysis, and astronomical data analysis. Some commonly used algorithms for data clustering include K-Means, Hierarchical Clustering, Neural Networks, Hidden Markov Models, and Density-based Clustering.
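A corresponding unsupervised sketch, again assuming scikit-learn: K-Means groups programs by synthetic run-time features without any labels, for example to separate compute-bound from memory-bound workloads. All feature values are fabricated for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical per-program features: [cache miss rate, instructions per cycle]
programs = np.vstack([
    rng.normal([0.02, 2.5], [0.01, 0.2], size=(50, 2)),  # compute-bound-looking programs
    rng.normal([0.20, 0.8], [0.03, 0.1], size=(50, 2)),  # memory-bound-looking programs
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(programs)
print("cluster sizes:", np.bincount(labels))
```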
3.3 Software Optimization at Different Software Life-cycle Activities
Software optimization can happen during different activities of the software life-cycle. We categorize the software optimization activities by the time at which they occur: design and implementation time, compile-time, and run-time (Fig. 5).
During the design and implementation activity, decisions such as selection of the programming language/model and selection of the parallelization strategy are considered.
The compile-time activities include decisions of selecting the optimal compiler optimization flags and source code transformations (such as loop unrolling, loop nest optimization, pipelining, and instruction scheduling) such that the executable program is optimized to achieve certain goals (performance or energy) in a given context.
The run-time activities include decisions of selecting the optimal data and task scheduling on parallel computing systems, as well as taking decisions (such as switching to another algorithm, or changing the clock frequency) that help the system to adapt itself during the program execution and improve the overall performance and energy efficiency.
While software design and implementation activities are performed by the programmer, software activities at compile-time and run-time are completed by tools (such as compilers and run-time systems). Therefore, in this paper we focus on tool-supported software optimization approaches that use approximate techniques (machine learning and meta-heuristics) at compile-time and run-time.
3.4 Classification based on architecture, software optimization approach, and life-cycle activity
In this section we classify the considered scientific publications based on the architecture, software optimization approach, and life cycle activities.
RQ1: What is the current state of the art on software optimization techniques for parallel computing systems that use meta-heuristics and machine learning? To provide an overview of the state-of-the-art scientific publications that use machine learning and meta-heuristics for software optimization of parallel computing systems, we have grouped the research publications into the following time periods: 2000-2005, 2006-2011, and 2012-2017. Further filtering and classification of the considered scientific publications, and visualization of the results in the form of a time-line, can be performed using our on-line interactive tool (see Fig. 1).
Architecture: Figure 6 shows a classification of the reviewed papers based on the target architecture, including multi-node, single-node, grid, and cloud parallel computing systems. The horizontal axis on the top delineates the common types of processors used during the corresponding time period. For instance, from 2000 to 2005 grids and clusters employed single or multiple sequential processors at the node level, whereas during the period from 2006 to 2011 nodes employed multi-core processors. Accelerators combined with multi-core processors can be seen during the 2012-2017 period. We may observe that most of the work is focused on optimization of resource utilization at the node level (single-node). Optimization of the resources of multi-node computing systems (including clusters) is addressed by several research studies continuously throughout the considered time periods. The optimization of grid computing systems using machine learning and meta-heuristic approaches has received less attention, whereas optimization of cloud computing systems has received attention during the period 2012-2017.
Software optimization approach: In Table 2 we classify the selected publications that use intelligent techniques (such as, machine learning and meta-heuristics) for software optimization at compile-time and run-time. We may observe that machine learning is used more often for software optimization during compile-time and run-time compared to meta-heuristics.
Table 2: Classification of the selected publications that use machine learning or meta-heuristics for software optimization, grouped by period of publication.

| Method | 2000-2005 | 2006-2011 | 2012-2017 |
|---|---|---|---|
| Machine Learning | Lee and Schopf (2003); Monsifrot et al (2002); Cavazos and Moss (2004); Zhang et al (2004); Corbalan et al (2005); Stephenson and Amarasinghe (2005); Thomas et al (2005); Zhang et al (2005) | Agakov et al (2006); Cavazos et al (2007); Diamos and Yalamanchili (2008); Fursin et al (2008); Beach and Avis (2009); Chen and Long (2009); Tournavitis et al (2009); Luk et al (2009); Ansel et al (2009); Wang and O'Boyle (2009); Park et al (2010); Hoffmann et al (2010a, b); Eastep et al (2010); Fursin et al (2011); Pekhimenko and Brown (2011); Grewe and O'Boyle (2011); Ravi and Agrawal (2011); Castro et al (2011); Grewe et al (2011); Benkner et al (2011); Danylenko et al (2011); Eastep et al (2011) | Castro et al (2012); Kessler et al (2012); Li et al (2012); Kessler and Löwe (2012); Fonseca and Cabral (2013); Liu et al (2013); Wang and O'boyle (2013); Rossbach et al (2013); Emani et al (2013); Binotto et al (2013); Mantripragada et al (2014); Grzonka et al (2014); Gaussier et al (2015); Ogilvie et al (2015); Mastelic et al (2015); Memeti and Pllana (2016c); Memeti et al (2016); Memeti and Pllana (2016b); Silvano et al (2016) |
| Meta-heuristics | Zomaya and Teh (2001); Ahmad et al (2001); Zomaya et al (2001); Page and Naughton (2005a, b); Stephenson et al (2003); Cooper et al (2005) | Gordon et al (2006); Carretero et al (2007); Sivanandam and Visalakshi (2009); Tiwari et al (2009); Tiwari and Hollingsworth (2011) | Albayrak et al (2013); Li et al (2014); Grzonka et al (2014); Memeti and Pllana (2016a); Memeti et al (2016); Memeti and Pllana (2016b) |
Life-cycle activity: A classification of the reviewed papers based on the software life-cycle activities (including code optimization, code generation, scheduling, and adaptation) is depicted in Table 3. We may observe that the scheduling life-cycle activity has received the most attention, especially during the 2012-2017 period. The use of machine learning and meta-heuristics for code optimization during compile-time has been addressed by many researchers, especially during the period between 2006 and 2011. A similar trend can be observed for research studies that focus on using intelligent approaches to optimize code generation. Optimization of software through adaptation is addressed mainly during the period 2006-2011.
Table 3: Classification of the reviewed papers based on the software life-cycle activities, grouped by period of publication.

| Life-cycle activity | 2000-2005 | 2006-2011 | 2012-2017 |
|---|---|---|---|
| Code Optimization | Monsifrot et al (2002); Stephenson et al (2003); Cavazos and Moss (2004); Stephenson and Amarasinghe (2005); Cooper et al (2005) | Agakov et al (2006); Gordon et al (2006); Cavazos et al (2007); Fursin et al (2008); Tournavitis et al (2009); Tiwari et al (2009); Fursin et al (2011); Tiwari and Hollingsworth (2011) | Liu et al (2013); Wang and O'boyle (2013) |
| Code Generation | / | Beach and Avis (2009); Chen and Long (2009); Ansel et al (2009); Luk et al (2009); Tournavitis et al (2009); Pekhimenko and Brown (2011) | Fonseca and Cabral (2013); Rossbach et al (2013) |
| Scheduling | Zomaya and Teh (2001); Ahmad et al (2001); Zomaya et al (2001); Lee and Schopf (2003); Zhang et al (2004); Corbalan et al (2005); Zhang et al (2005); Page and Naughton (2005a, b) | Diamos and Yalamanchili (2008); Wang and O'Boyle (2009); Sivanandam and Visalakshi (2009); Beach and Avis (2009); Park et al (2010); Grewe and O'Boyle (2011); Castro et al (2011); Grewe et al (2011); Ravi and Agrawal (2011); Benkner et al (2011); Danylenko et al (2011) | Kessler and Löwe (2012); Li et al (2012); Kessler et al (2012); Castro et al (2012); Emani et al (2013); Binotto et al (2013); Albayrak et al (2013); Fonseca and Cabral (2013); Mantripragada et al (2014); Grzonka et al (2014); Li et al (2014); Gaussier et al (2015); Mastelic et al (2015); Ogilvie et al (2015); Memeti et al (2016); Memeti and Pllana (2016c, a, b); Silvano et al (2016) |
| Adaptation | Thomas et al (2005) | Luk et al (2009); Hoffmann et al (2010a, b); Eastep et al (2010, 2011) | / |
4 Compile-Time Activities
Compiling (Aho et al, 2006) is the process of transforming source code from one form into another. Traditionally, compiler engineers exploited the underlying architecture by manually implementing several code transformation techniques. Furthermore, decisions that determine whether to apply a specific optimization or not were hard-coded manually. At each major revision or implementation of a new instruction set architecture, the set of such hard-coded compiler heuristics must be re-engineered, which is a time-consuming process. Nowadays, architectures evolve continuously in pursuit of higher performance and shorter time to market, and developers are reluctant to repeat this re-engineering effort, as it requires significant time investment.
Modern parallel computing architectures are complex due to higher core counts, different multi-threading models, memory hierarchies, computation capabilities, and processor architectures. This architectural diversity increases the number of available compiler optimization flags and makes it difficult for compilers to efficiently utilize the available resources. Tuning these parameters manually is not only infeasible, but also introduces scalability and portability issues. Machine learning and meta-heuristics promise to address compiler problems, such as selecting compiler optimization flags or heuristic-guided compiler optimizations.
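As a sketch of what heuristic-guided flag selection can look like in its simplest form, the following hill-climbing loop flips one gcc flag at a time and keeps the change only if the measured run time improves. It assumes a benchmark source file bench.c and the gcc compiler are available; the flag set and the measurement procedure are simplified illustrations, not a surveyed tool.

```python
import random
import subprocess
import time

FLAGS = ["-funroll-loops", "-ftree-vectorize", "-finline-functions", "-fomit-frame-pointer"]

def measure(enabled):
    """Compile bench.c with the enabled flags (on top of -O2) and time one run of the binary."""
    cmd = ["gcc", "-O2", *[f for f, on in zip(FLAGS, enabled) if on], "bench.c", "-o", "bench"]
    subprocess.run(cmd, check=True)
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True)
    return time.perf_counter() - start

def hill_climb(steps=20):
    current = [random.random() < 0.5 for _ in FLAGS]
    current_time = measure(current)
    for _ in range(steps):
        candidate = current.copy()
        i = random.randrange(len(FLAGS))
        candidate[i] = not candidate[i]          # flip a single flag
        candidate_time = measure(candidate)
        if candidate_time < current_time:        # keep the change only if it improves run time
            current, current_time = candidate, candidate_time
    return current, current_time

if __name__ == "__main__":
    best, best_time = hill_climb()
    print("best flags:", [f for f, on in zip(FLAGS, best) if on], f"({best_time:.3f}s)")
```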
In what follows, we discuss the existing state-of-the-art approaches that use machine learning and meta-heuristics for software optimization for code optimization and code generation. Thereafter, we discuss the limitations and identify possible future research directions.
4.1 Code Optimization
Code optimization does not change the program behavior; it transforms the code to reach optimization goals such as reducing the execution time, energy consumption, or required resources.
Compiler optimization techniques for code optimization include loop unrolling, splitting and collapsing, instruction scheduling, software pipelining, auto-vectorization, hyper-block formation, register allocation, and data pre-fetching (Stephenson et al, 2003). Device-specific code optimization techniques may behave differently on different architectures. Furthermore, choosing more than one optimization technique does not necessarily result in better performance; sometimes a combination of different techniques may have a negative impact on the final output. Hence, manually writing hard-coded heuristics is impractical, and techniques that intelligently select the compiler transformations that yield higher application benefits in a given context are required.
Within the scope of this survey, scientific publications that use machine learning for code optimization at compile-time include Monsifrot et al (2002); Stephenson and Amarasinghe (2005); Cavazos and Moss (2004); Fursin et al (2008, 2011); Liu et al (2013); Wang and O'boyle (2013); Tournavitis et al (2009); Agakov et al (2006), whereas scientific publications that use meta-heuristics for code optimization include Stephenson et al (2003); Cooper et al (2005); Tiwari et al (2009); Tiwari and Hollingsworth (2011). Table 4 lists the characteristics of the selected primary studies that address code optimization at compile-time. Such characteristics include: the algorithm used for optimization, the optimization objectives, the considered features that describe the application being optimized, and the type of optimization (on-line or off-line). We may observe that, apart from the approach proposed by Tiwari and Hollingsworth (2011), all of them focus on off-line optimization and are based on historical data (knowledge) gathered from previous runs.
RQ2: Which optimization goals are achieved using meta-heuristics and machine learning? As mentioned earlier, different optimizations can be performed during compilation. Some researchers focus on using intelligent techniques to identify loops that would potentially execute more efficiently when unrolled (Monsifrot et al, 2002), or on selecting the loop unroll factor that yields the best performance (Stephenson and Amarasinghe, 2005). Instruction scheduling (Cavazos and Moss, 2004), partitioning strategies for irregular (Liu et al, 2013) and streaming (Wang and O'boyle, 2013) applications, and determining the list of compiler optimizations that results in the best performance (Fursin et al, 2011; Cooper et al, 2005; Tiwari and Hollingsworth, 2011) are also addressed by the selected scientific publications. Furthermore, Tournavitis et al (2009) use SVMs to determine whether parallelization of the code would be beneficial, and which scheduling policy to select for the parallelized code.
RQ3: Which are the common algorithms used to achieve such optimization goals? With regard to the machine learning algorithms used for code optimization, the nearest neighbor (NN) classifier, support vector machines (SVM), and decision trees (DT) are the most popular. Other algorithms, such as rule set induction (RSI) and predictive search distribution (PSD), are also used for code optimization during compilation. Approaches based on search algorithms use genetic algorithms (GA), hill climbing (HC), greedy algorithms (GrA), and parallel rank ordering (PRO) for code optimization during compile-time.
RQ4: Which features are considered during the optimization of parallel computing systems? To achieve the aforementioned objectives, a representative set of program features is extracted through static code analysis; these features are considered to be the most informative with regard to the program behavior. The selection of such features is closely related to the optimization goals. For example, to identify loops that benefit from unrolling, Monsifrot et al (2002) use loop characteristics such as the number of memory accesses, arithmetic operations, code statements, control statements, and loop iterations. Such loop characteristics are also used to determine the loop unroll factor (Stephenson and Amarasinghe, 2005). Characteristics related to a specific code block (such as the number of instructions, branches, calls, and stores) are used when deciding whether an application benefits from instruction scheduling (Cavazos and Moss, 2004). Determining the partitioning strategy of irregular applications is based on static program features related to basic blocks, loop characteristics, and data dependencies (Liu et al, 2013). Features such as pipeline depth, load/store operations per instruction, number of computations, and the computation-communication ratio are used when determining the partitioning strategy of streaming applications (Wang and O'boyle, 2013). Tiwari and Hollingsworth (2011) consider architectural specifications such as cache and register capacity, in addition to application-specific parameters, such as the tile size in a matrix multiplication algorithm.
Table 4: Characteristics of the selected studies that use machine learning or meta-heuristics for code optimization at compile-time.

| Reference | Algorithm | Optimization objectives | Considered features | Type |
|---|---|---|---|---|
| Monsifrot et al (2002) | DT | identify loops to unroll | loop characteristics (# memory accesses; # arithmetic operations; # statements; # control statements; # iterations) | off-line (sup.) |
| Stephenson and Amarasinghe (2005) | NN, SVM | select the most beneficial loop unroll factor | loop characteristics (# floating point operations; # operands; # memory operations; critical path length; # iterations) | off-line (sup.) |
| Cavazos and Moss (2004) | RSI | determine whether to apply instruction scheduling | code-block characteristics (# instructions; # branches; # calls; # stores; # returns; int/float/sys_func_unit instructions) | off-line (sup.) |
| Fursin et al (2008, 2011) | PSD | determine the most effective compiler optimizations | static program features (# basic blocks in a method; # normal/critical/abnormal CFG edges; # int/float operations) | off-line (sup.) |
| Liu et al (2013) | kNN | determine the best partitioning strategy of irregular applications | static program features (# basic blocks; # instructions; loop probability; branch probability; data dependency) | off-line (sup.) |
| Wang and O'boyle (2013) | NN | determine the best partitioning strategy of streaming applications | program features (pipeline depth; split-join width; pipeline/split-join work; # computations; # load/store ops) | off-line (sup.) |
| Tournavitis et al (2009) | SVM | determine whether parallelism is beneficial; select the best scheduling policy | static program features (# instructions; # load/store; # branches; # iterations); dynamic program features (# data accesses; # instructions; # branches) | off-line (sup.) |
| Agakov et al (2006) | IIDM; MM; NN | reduce the number of required program evaluations in iterative compiler optimization; analyze program similarities | program features (type of nested loop; loop bound; loop stride; # iterations; nest depth; # array references; # instructions; # load/store/compare/branch/divide/call/generic/array/memory copy/other instructions; int/float variables) | off-line (sup.) |
| Stephenson et al (2003) | GP | tuning compiler heuristics | hyper-block formation features; register allocation features; data pre-fetching features | off-line (unsupervised) |
| Cooper et al (2005) | GrA; GA; HC; RP | tuning the compilation process through adaptive compilation | / | off-line |
| Tiwari et al (2009); Tiwari and Hollingsworth (2011) | PRO | tune generated code; determine the best compilation parameters | architectural parameters (cache capacity; register capacity); application specific parameters | on-line |
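In the spirit of the unroll-factor predictors listed in Table 4, the following sketch trains a nearest-neighbor model that maps static loop features to an unroll factor; the features and the synthetic training data are assumptions, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
# Hypothetical loop features: [# floating-point ops, # memory ops, critical path length, # iterations]
X = rng.integers(1, 100, size=(400, 4))
# Synthetic "best" unroll factor: small loop bodies tolerate larger unroll factors.
y = np.select([X[:, 0] + X[:, 1] < 20, X[:, 0] + X[:, 1] < 60], [8, 4], default=1)

model = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("predicted unroll factor:", model.predict([[10, 5, 12, 80]])[0])
```

In a real setting the features would come from static code analysis and the labels from measured run times of each unrolled variant, gathered off-line.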
4.2 Code Generation
The process of transforming code from one representation into another is called code generation. We use "machine code generation" to refer to the transformation of code from a high-level to a low-level representation (that is ready for execution), whereas "source code generation" in this paper indicates source-to-source code transformation.
In the context of parallel computing, a source-to-source compiler is an automatic parallelization compiler that can automatically annotate sequential code with parallel code annotations (such as OpenMP pragma directives or MPI code statements). Source-to-source compilers may alleviate the portability issue by automatically translating the code into an equivalent representation that is ready to be compiled and executed on the target architecture.
In this section, we focus on source code generation techniques that can:
generate device specific code from other code representations,
generate multiple implementations of the same code, or
automatically generate parallel code from sequential code.
During the process of porting applications, programmers are faced with the following problems: (1) the demand for device-specific knowledge and APIs; (2) the difficulty of predicting whether the application will have performance benefits before it is ported; (3) the existence of a large number of programming languages and models that are device (type and manufacturer) specific.
To address such issues, researchers have proposed different solutions. In Table 5, we list the characteristics of these solutions such as, optimization algorithm, optimization objectives, and considered features during optimization.
RQ2: Which optimization goals are achieved using meta-heuristics and machine learning? The optimization objectives are derived from the aforementioned portability challenges. For example, to alleviate the demand for device specific knowledge, Beach and Avis (2009) aim to identify candidate kernels that would likely benefit from parallelization, generate device specific code from high-level code, and map to the accelerating device that yields the best performance. Similarly, Fonseca and Cabral (2013) propose the automatic generation of OpenCL code from Java code. Ansel et al (2009) propose the PetaBricks framework that enables writing multiple versions of algorithms, which are automatically translated into C++ code. The runtime can switch between the available algorithms during program execution. Luk et al (2009) introduce Qilin that enables source-to-source transformation from C++ to TBB and CUDA. It uses machine learning to find the optimal work distribution between the CPU and GPU on a heterogeneous system.
RQ3: Which are the common algorithms used to achieve such optimization goals? Decision trees (DT), k-nearest neighbor (kNN), cost-sensitive decision tables (DT), linear regression (LR), and logistic regression (LRPR) machine learning algorithms are used during code generation.
RQ4: Which features are considered during the optimization of parallel computing systems? Beach and Avis (2009) considered static loop characteristics to achieve their objectives, whereas Chen and Long (2009) use both static and dynamic program features to generate multi-threaded versions of a selected loop and then select the most suitable loop version at run-time. A combination of static code features (extracted at compile-time) and dynamic features (extracted at run-time) is also used to determine the most suitable processing device for a specific application (Fonseca and Cabral, 2013). To determine the best workload distribution of a parallel application, Luk et al (2009) consider algorithm parameters and hardware configuration parameters. Pekhimenko and Brown (2011) consider general and loop-based features to determine the list of program method transformations during code generation that would reduce the compilation time.
Table 5: Characteristics of the selected studies that use machine learning or meta-heuristics for code generation.

| Reference | Algorithm | Optimization objectives | Considered features | Type |
|---|---|---|---|---|
| Beach and Avis (2009) | DT | generate device specific code from high-level code; map applications to accelerating devices | loop (kernel) characteristics (data precision, amount of computation performed and memory access characteristics) | off-line (sup.) |
| Chen and Long (2009) | kNN | generate multi-threaded loop versions; select the most suitable one at run-time | static code features (loop nest depth, # arrays used); dynamic features (data set size) | off-line (sup.) |
| Fonseca and Cabral (2013) | DT | source-to-source transformation of data-parallel applications; predict the efficiency and select the suitable device | static program features (outer/inner access/write; basic operations; …); dynamic program features (data-to; data-from; …) | off-line (sup.) |
| Ansel et al (2009) | / | enable writing multiple versions of algorithms and algorithmic choices at the language level; auto-tuning of the specified algorithmic choices; switch between the available algorithms during program execution | / | off-line |
| Luk et al (2009) | LR | determine the optimal work distribution between the CPU and GPU | runtime algorithm parameters (input size) and hardware configuration parameters | on-line |
| Rossbach et al (2013) | / | distribute data-parallel portions of a program across heterogeneous computing resources | / | / |
| Pekhimenko and Brown (2011) | LRPR | determine the list of program method transformations that result with lower compilation time | general program features (# instructions; # load/store operations; # float operations); loop-based features (# loops types; # loop statements) | off-line (sup.) |
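The following sketch illustrates the general idea behind learned work distribution (cf. the Qilin-style approach of Luk et al (2009)): fit simple linear models of CPU and GPU execution time as a function of input size from a few profiling runs, then compute the split at which both devices finish at the same time. The timing numbers are synthetic placeholders and the linear cost model is an assumption.

```python
import numpy as np

sizes = np.array([1e6, 2e6, 4e6, 8e6])        # profiled input sizes (elements)
cpu_time = np.array([0.8, 1.6, 3.3, 6.5])     # assumed measured CPU times (s)
gpu_time = np.array([0.5, 0.7, 1.1, 1.9])     # assumed measured GPU times (s)

# Least-squares fit: time ≈ a * size + b for each device.
a_cpu, b_cpu = np.polyfit(sizes, cpu_time, 1)
a_gpu, b_gpu = np.polyfit(sizes, gpu_time, 1)

def best_split(n):
    """Fraction of the workload to place on the GPU so both devices finish together."""
    # Solve a_cpu*(1-f)*n + b_cpu = a_gpu*f*n + b_gpu for f, clamped to [0, 1].
    f = (a_cpu * n + b_cpu - b_gpu) / ((a_cpu + a_gpu) * n)
    return min(1.0, max(0.0, f))

print(f"GPU fraction for 16M elements: {best_split(16e6):.2f}")
```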
4.3 Challenges and Research Directions
While most of the approaches discussed in this systematic review present significant improvement towards having intelligent compilers that require less engineering effort to provide satisfactory code execution performance, indications that there is still room for improvement can be observed in Stephenson et al (2003) and Wang and O’boyle (2013).
Shortcomings of the compiler approaches that use machine learning or meta-heuristics include: (1) limitation to a specific programming language or model (Tournavitis et al, 2009), (2) forcing developers to use extra annotations in their code (Luk et al, 2009) or to use not widely known parallel programming languages (Ansel et al, 2009), (3) focusing on single or simpler aspects of optimization (e.g., loop unrolling, unroll factor, instruction scheduling) (Monsifrot et al, 2002; Stephenson and Amarasinghe, 2005; Cavazos and Moss, 2004), whereas more complex, compute-intensive compiler optimizations are not addressed sufficiently.
Optimizations based on features derived from static code analysis provide a poor global characterization of the dynamic behavior of applications, whereas using dynamic features requires application profiling, which adds execution overhead to the program under study. This additional time can be considered negligible for applications that are executed multiple times after the optimization; however, it represents a real overhead for single-run applications.
Approaches that generate many multi-threaded versions of the code (Chen and Long, 2009) might end up with a dramatic increase in code size, which makes them difficult to apply to embedded parallel computing systems with limited resources. Adaptive compilation techniques (Cooper et al, 2005) add non-negligible compilation overhead.
Future research should address the identified shortcomings in this systematic review by providing intelligent compiler solutions for general-purpose languages (such as, C/C++) and compilers (for instance, GNU Compiler Collection) that are widely used and supported by the community. Many compiler optimization issues are complex and require human resources that are usually not available within a single research group or project.
5 Run-Time Activities
The run-time program life-cycle is the time during which the program is running (that is, being executed); it is also known as execution-time. Software systems that enable running programs to interact with the execution environment are known as run-time systems. The run-time environment contains environment information, such as the available resources, existing workload, and scheduling policy. A running program can access the execution environment information via the run-time system.
In the past, the choice of architecture and algorithms was considered during the design and implementation phases of the software life-cycle. Nowadays, there are various multi- and many-core processing devices with different performance and energy consumption characteristics. Furthermore, there is no single algorithm implementation that can exploit the full processing potential of these diverse processing elements. Often it is not possible to know whether an application performs better on device X or Y before execution. The performance of a program is determined by the properties of the execution context (program input, type of available processing elements, current system utilization, ...) that are known only at run-time. Some programs perform better on device X when the input size is large enough, but worse for smaller input sizes. Hence, decisions on whether a program should run on X or Y, or which algorithm to use, are postponed to run-time.
In this study, we focus on optimization methods used in run-time systems that employ machine learning or meta-heuristics to optimize program execution. Such run-time systems may be responsible for partitioning programs into tasks and scheduling these tasks onto different processing devices, selecting the most suitable device(s) for a specific task, selecting the most suitable algorithm or the size of the input workload, selecting the number of processing elements or the clock frequency, and tuning many other run-time configuration parameters to achieve the specified goals, including performance, energy efficiency, and fault tolerance. Specifically, we focus on two major run-time activities: scheduling and adaptation.
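A minimal sketch of such run-time adaptation is shown below: the runtime periodically probes the available algorithm variants and keeps whichever is currently fastest. The variants and their costs are placeholders; real systems would use lighter-weight monitoring than re-running each variant.

```python
import random
import time

def variant_a(chunk):
    """Hypothetical algorithm variant: cost grows linearly with chunk size."""
    time.sleep(0.0002 * len(chunk))

def variant_b(chunk):
    """Hypothetical alternative with a fixed start-up cost but a lower per-element cost."""
    time.sleep(0.0001 * len(chunk) + 0.005)

VARIANTS = [variant_a, variant_b]

def adaptive_run(chunks, probe_every=10):
    current = 0
    for i, chunk in enumerate(chunks):
        if i % probe_every == 0:
            # Periodically probe every variant on the current chunk and keep the fastest one.
            timings = []
            for variant in VARIANTS:
                start = time.perf_counter()
                variant(chunk)
                timings.append(time.perf_counter() - start)
            current = timings.index(min(timings))
        else:
            VARIANTS[current](chunk)
    return current

chunks = [[0] * random.randint(10, 200) for _ in range(50)]
print("variant selected at the end:", adaptive_run(chunks))
```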
In what follows, we discuss the related state-of-the-art run-time optimization approaches for scheduling and adaptation. Thereafter, we summarize the limitations of the current approaches and discuss possible future research directions.
5.1 Scheduling
According to the Cambridge Dictionary (http://dictionary.cambridge.org/dictionary/english/scheduling), scheduling is "the job or activity of planning the times at which particular tasks will be done or events will happen". In the context of this paper, we use the term scheduling to indicate mapping tasks onto the processing elements and determining the order of task execution to minimize the overall execution time.
Scheduling may strongly influence the performance of parallel computing systems. Improper scheduling can lead to load imbalance and consequently to a sub-optimal performance. Researchers have proposed different approaches that use meta-heuristics or machine learning to find the best scheduling within a reasonable time.
Based on whether the scheduling algorithm can modify the scheduling policy during program execution, scheduling algorithms are generally classified into static and dynamic.
5.1.1 Static Scheduling
Static scheduling techniques retain an unchanged policy until the end of program execution. Static approaches assume that the number of tasks is fixed and known before execution starts, and that accurate information on their running times is known. Static approaches usually use analytical models to estimate the computation and communication cost, and the work distribution is performed based on these estimates. The program execution time is essential for job scheduling. However, accurately predicting or estimating the program execution time is difficult in shared environments where system resources can change dynamically over time. Inaccurate predictions may lead to performance degradation (Chirkin et al, 2017).
Table 6: Characteristics of the selected studies that use machine learning or meta-heuristics for static scheduling.

| Reference | Algorithm | Optimization objectives | Considered features | Type |
|---|---|---|---|---|
| Wang and O'Boyle (2009) | ANN; SVM | mapping computations to multi-core CPUs; determine the optimal thread number | code features (# static instructions; # load/store operations; # branches); data and dynamic features (L1 data cache miss rate; branch miss rate) | off-line (sup.) |
| Grewe and O'Boyle (2011) | SVM | mapping computations to the suitable processing device | static code features (# int/float/math operations; barriers; memory accesses; % local/coalesced memory accesses; compute-memory ratio) | off-line (sup.) |
| Castro et al (2011) | ID3 DT | mapping threads to specific cores; reduce memory latency and contention | program features (transaction time ratio; transaction abort ratio; conflict detection policy; conflict resolution policy; cache misses) | off-line (sup.) |
| Ogilvie et al (2015) | L; MP; IB1; IBk; KStar … | reducing the training data; select the most informative training data; mapping application to processors | / | off-line (sup.) |
| Memeti and Pllana (2016c) | BDTR | determine workload distribution of data-parallel applications on heterogeneous systems | hardware configuration (# threads; # cores; # threads/core; thread affinity); application parameters (input size) | off-line (sup.) |
| Memeti and Pllana (2016a) | SA | determine near-optimal system configuration parameters of heterogeneous systems | system configuration parameters (# threads / thread affinity / workload fraction on host/device) | off-line (sup.) |
| Memeti et al (2016); Memeti and Pllana (2016b) | BDTR; SA | determine near-optimal system configuration on heterogeneous systems | available resources; scheduling policy; workload fraction | off-line (sup.) |
| Zomaya and Teh (2001); Ahmad et al (2001); Zomaya et al (2001); Carretero et al (2007) | GA | task scheduling | / | off-line (sup.) |
Table 6 lists the characteristics (such as optimization algorithm, objective, and features) of scientific publications that use machine learning and/or meta-heuristics for static scheduling.
RQ2: Which optimization goals are achieved using meta-heuristics and machine learning? With regard to static scheduling, recent research that uses machine learning and meta-heuristics focuses on the following optimization objectives: mapping program parallelism to multi-core architectures (Wang and O'Boyle, 2009), mapping applications to the most appropriate processing device (Grewe and O'Boyle, 2011; Ogilvie et al, 2015), mapping threads to specific cores (Castro et al, 2011), and determining the workload distribution on heterogeneous parallel computing systems (Memeti and Pllana, 2016c, a; Memeti et al, 2016; Memeti and Pllana, 2016b).
RQ3: Which are the common algorithms used to achieve such optimization goals? To achieve the aforementioned optimization objectives, machine learning algorithms such as Artificial Neural Networks (ANN), Support Vector Machines (SVM), and (Boosted) Decision Trees (BDTR) are used (Wang and O'Boyle, 2009; Grewe and O'Boyle, 2011; Castro et al, 2011; Memeti and Pllana, 2016c). An approach that combines a number of machine learning algorithms, including Logistic (L), Multilayer Perceptron (MP), IB1, IBk, KStar, Random Forest, Logit Boost, Multi-Class-Classifier, Random Committee, NNge, ADTree, and RandomTree, to create an active-learning query committee with the aim of reducing the required amount of training data is proposed by Ogilvie et al (2015). A combination of Simulated Annealing (SA) and boosted decision tree regression to determine near-optimal system configurations is proposed by Memeti and Pllana (2016b). The use of Genetic Algorithms (GA) for task scheduling has been extensively addressed by several researchers (Zomaya and Teh, 2001; Ahmad et al, 2001; Zomaya et al, 2001; Carretero et al, 2007).
RQ4: Which features are considered during the optimization of parallel computing systems? The list of system features considered for the optimization of parallel computing systems is closely related to the optimization objectives, target applications, and architecture. For example, Castro et al (2011) consider the transaction time and abort ratio and the conflict detection and resolution policy to map threads to specific cores and reduce memory latency and contention in software transactional memory applications running on multi-core architectures. Static code features, such as the number of instructions, memory operations, math operations, and branches, are considered during the mapping of applications to the most suitable processing devices (Wang and O'Boyle, 2009; Grewe and O'Boyle, 2011). While such approaches consider application-specific features, researchers have also demonstrated positive results with approaches that do not require code analysis; instead, they rely on features such as the available system resources and the program input size during the optimization process (that is, determining the workload distribution of data-parallel applications) (Memeti and Pllana, 2016a, b, c).
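The following sketch is loosely inspired by the workload-distribution approaches in Table 6: a boosted regression-tree model (here scikit-learn's GradientBoostingRegressor) predicts execution time from a system configuration, and the configuration with the lowest predicted time is selected. The configuration parameters and the synthetic training data are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
# Hypothetical configuration: [host threads, device threads, GPU workload fraction, input size]
X = rng.random((500, 4)) * [32, 256, 1.0, 1e7]
# Synthetic execution time: rewards more parallelism and a split close to 0.6, plus noise.
y = X[:, 3] / (X[:, 0] + X[:, 1]) * (1 + np.abs(X[:, 2] - 0.6)) + rng.normal(0, 50, 500)

model = GradientBoostingRegressor().fit(X, y)

# Predict over candidate GPU fractions for a fixed machine and input, and keep the best one.
candidates = np.array([[16, 128, f, 5e6] for f in np.linspace(0, 1, 21)])
best = candidates[np.argmin(model.predict(candidates))]
print("predicted best GPU fraction:", round(best[2], 2))
```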
5.1.2 Dynamic Scheduling
Dynamic scheduling algorithms take the current system state into account and modify themselves at run-time to improve the scheduling policy. Dynamic scheduling does not require prior knowledge of all task properties. To overcome the limitations of static scheduling, various dynamic approaches have been proposed, including work stealing, partitioning and assigning tasks on the fly, queuing systems, and task-based approaches (a minimal example of on-the-fly task assignment is sketched below). Dynamic scheduling is usually harder to implement; however, the performance gain may be better than with static scheduling.
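The following toy sketch illustrates on-the-fly task assignment: workers of different (hypothetical) speeds repeatedly pull the next task from a shared queue, so faster processing elements automatically take on more work. It is only an illustration of the concept, not a reconstruction of any reviewed scheduler.

```python
# Minimal sketch of dynamic (on-the-fly) task assignment with a shared task queue.
import queue
import threading
import time

tasks = queue.Queue()
for i in range(20):
    tasks.put(i)

def worker(name, speed):
    """Pull tasks until the queue is empty; 'speed' models a heterogeneous processing element."""
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            return
        time.sleep(0.01 / speed)   # simulate processing; faster workers finish sooner
        print(f"{name} finished task {task}")
        tasks.task_done()

threads = [threading.Thread(target=worker, args=(f"worker-{i}", s))
           for i, s in enumerate([1.0, 2.0, 4.0])]   # three workers with different speeds
for t in threads:
    t.start()
for t in threads:
    t.join()
```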
|Paper||Method||Optimization Objectives||Features||Learning|
|Emani et al (2013)||ANN||determine the best number of threads||static features (# load/store ops; # instructions; # branches); dynamic features (# processors; # workload threads; run queue length; ldavg-1; ldavg-5)||off-line (sup.)|
|Lee and Schopf (2003)||R&F||determine the application execution time in shared environments||program input parameters; # processors; resource status;||off-line (sup.)|
|Park et al (2010)||SVM||mapping tasks to processing devices||# tasks in the queue; the ready times of the machines; computing capabilities of each machine.||off-line (sup.)|
|Mantripragada et al (2014)||GrA||evenly partitioning tasks between high performance clusters and the cloud||estimated execution time determined by monitoring the actual exec. time of data or task chunks||on-line|
|Mastelic et al (2015)||/||predicting resource allocation for business processes in the Cloud||runtime metrics of a process and its behavior||off-line|
|Castro et al (2012)||ID3 DT||predicting a thread mapping strategy for STM applications||Transactional Time/Abort Ratio; Conflict Detection/Resolution Policy; Last-Level Cache Miss||off-line (sup.)|
|Grzonka et al (2014)||ANN||improve the effectiveness of grid scheduler decisions||characteristics of the tasks and machines||off-line|
|Gaussier et al (2015)||LR||improving the scheduling algorithms using machine learning techniques||job arrival time; required resources; # running jobs; occupied resources;||on-line|
|Binotto et al (2013)||/||optimize the task scheduling on heterogeneous platforms||input data; data transfers; task performance; platform features;||off-line & on-line|
|Grewe et al (2011)||ANN||predict the optimal number of threads||program features and workload features||off-line|
|Zhang et al (2004, 2005)||Adaptive LR||determine the number of threads and scheduling policy for each parallel region||inter-thread data locality, instruction mix and load imbalance||/|
|Page and Naughton (2005a, b)||GA||minimize the make-span; dynamic task scheduling in heterogeneous systems||task properties (arrival time; dependency); system properties (network; processors)||on-line|
|Albayrak et al (2013)||Adaptive GrA||mapping of computation kernels on heterogeneous GPU-accelerated systems||profiling information (execution time; data-transfer time); hardware characteristics||off-line|
|Li et al (2014)||HC||selecting optimal per-task system configuration for MapReduce applications||map-reduce parameters (# mappers; # reducers; slow start; io.sort.mb; # virtual cores)||on-line|
|Sivanandam and Visalakshi (2009)||PSO; SA||dynamic scheduling of heterogeneous tasks on heterogeneous processors; load balancing;||task properties (execution time; communication cost; fitness function); hardware properties (# processors)||/|
|Zomaya and Teh (2001)||GA||dynamic load-balancing where optimal task scheduling can evolve at run-time||/||on-line|
|Ravi and Agrawal (2011)||/||mapping tasks to heterogeneous architectures;||architectural trade-offs; computation patterns; application characteristics;||on-line|
|Diamos and Yalamanchili (2008)||PR||dynamic scheduling and performance optimization for heterogeneous systems||kernel execution time; machine parameters; input size; input distribution var.; instrumentation data;||off-line (sup.)|
|Benkner et al (2011); Kessler et al (2012)||LR; QR||prediction of performance aspects (e.g. execution time, power consumption) of implementation variants;||system information (resource availability and requirements; estimated performance of implementation variants; input availability)||off-line (sup.)|
|Li et al (2012)||DT||reducing the amount of training data required to build prediction models||input parameters (e.g. size); system available resources (e.g. # cores; # accelerators);||off-line (sup.)|
|Kessler and Löwe (2012); Danylenko et al (2011)||DT; DD; NB; SVM||use meta-data from performance-aware components to predict the expected execution time; select the best implementation variant and the scheduling policy;||input parameters (e.g. size); system available resources; meta-data||off-line (sup.)|
Table 7 lists the characteristics (such as optimization algorithm, objective, and features) of scientific publications that use machine learning and/or meta-heuristics for dynamic scheduling.
RQ2: Which optimization goals are achieved using meta-heuristics and machine learning? With regard to the optimization objectives, the considered scientific publications aim at: (1) determining the optimal number of threads for a given application Emani et al (2013); Grewe et al (2011); Zhang et al (2004, 2005); (2) determining the application execution time Lee and Schopf (2003); Benkner et al (2011); Kessler et al (2012); Kessler and Löwe (2012); Danylenko et al (2011); (3) mapping tasks to processing devices Park et al (2010); Castro et al (2011); Albayrak et al (2013); Ravi and Agrawal (2011); (4) partitioning tasks between high performance clusters and the cloud Mantripragada et al (2014); (5) predicting resource allocation in the cloud Mastelic et al (2015); (6) improving scheduling algorithms Grzonka et al (2014); Gaussier et al (2015); (7) minimizing the make-span Binotto et al (2013); Page and Naughton (2005a, b); Sivanandam and Visalakshi (2009); Zomaya and Teh (2001); Diamos and Yalamanchili (2008); (8) selecting near-optimal system configurations Li et al (2014); and (9) reducing the number of training examples required to build prediction models Li et al (2012).
RQ3: Which are the common algorithms used to achieve such optimization goals? Artificial neural networks (ANN) Emani et al (2013); Grzonka et al (2014); Grewe et al (2011), regression (LR, QR, PR) Lee and Schopf (2003); Gaussier et al (2015); Diamos and Yalamanchili (2008); Benkner et al (2011); Kessler et al (2012), support vector machines (SVM) Park et al (2010); Kessler and Löwe (2012); Danylenko et al (2011), and decision trees (DT) Castro et al (2012); Li et al (2012); Kessler and Löwe (2012); Danylenko et al (2011) are the most popular machine learning algorithms used for optimization in the scientific publications considered in this study. Genetic algorithms (GA) Page and Naughton (2005a, b); Zomaya and Teh (2001), greedy-based algorithms (GrA) Mantripragada et al (2014); Albayrak et al (2013), hill climbing (HC) Li et al (2014), particle swarm optimization (PSO) Sivanandam and Visalakshi (2009), and simulated annealing (SA) Sivanandam and Visalakshi (2009) are used as heuristic-based optimization approaches for dynamic scheduling.
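To illustrate how a genetic algorithm can be applied to task scheduling, the sketch below evolves a task-to-processor assignment that minimizes the makespan on processors of different speeds. Task costs, processor speeds, and GA parameters are hypothetical and the encoding is deliberately simple; it is not the scheduler of any cited work.

```python
# Hypothetical sketch: a tiny genetic algorithm evolving a task-to-processor assignment
# to minimize the makespan on heterogeneous processors.
import random

TASK_COST = [4, 7, 3, 9, 5, 6, 2, 8]   # work per task (arbitrary units)
PROC_SPEED = [1.0, 2.0, 0.5]           # relative speeds of three processors

def makespan(assignment):
    """Makespan = finish time of the most loaded processor."""
    load = [0.0] * len(PROC_SPEED)
    for task, proc in enumerate(assignment):
        load[proc] += TASK_COST[task] / PROC_SPEED[proc]
    return max(load)

def evolve(pop_size=30, generations=100, mutation_rate=0.1):
    # A chromosome assigns each task to a processor index.
    pop = [[random.randrange(len(PROC_SPEED)) for _ in TASK_COST] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                         # fittest (lowest makespan) first
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TASK_COST))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:        # mutate one gene
                child[random.randrange(len(child))] = random.randrange(len(PROC_SPEED))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print("assignment:", best, "makespan:", round(makespan(best), 2))
```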
RQ4: Which features are considered during the optimization of parallel computing systems? Approaches such as Mantripragada et al (2014); Castro et al (2012); Mastelic et al (2015); Li et al (2014); Zhang et al (2004, 2005) focus on features collected dynamically during program execution, such as the estimated execution time determined through analysis of profiling data, or information related to tasks (arrival time, number of currently running tasks). Other approaches combine static features collected at compile-time with dynamic features collected at run-time Emani et al (2013); Park et al (2010); Grzonka et al (2014); Page and Naughton (2005a, b), or with program input parameters and hardware-related information Binotto et al (2013); Lee and Schopf (2003); Grewe et al (2011); Ravi and Agrawal (2011); Diamos and Yalamanchili (2008). Similarly to the static scheduling techniques, the selection of such features is closely related to the optimization objectives. For example, Zhang et al (2004, 2005) consider the inter-thread data locality when tuning OpenMP applications for hyper-threaded SMPs; Page and Naughton (2005a, b) consider task properties, such as task arrival time and task dependency, when dynamically scheduling tasks in heterogeneous distributed systems.
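The following sketch illustrates, with synthetic data, how static code features and dynamic system features can be combined into a single feature vector to predict a suitable thread count. The feature set, training data, and model choice are invented for demonstration and are only loosely inspired by the approaches cited above.

```python
# Hypothetical sketch: regression over combined static (code) and dynamic (system)
# features to suggest a thread count for a given run-time context.
from sklearn.linear_model import LinearRegression

# Features: [#load/store ops, #branches, input size, #idle cores, run-queue length]
X = [
    [2000,  50, 1_000_000, 16, 1],
    [2000,  50, 1_000_000,  4, 8],
    [ 500, 300,   100_000, 16, 1],
    [ 500, 300,   100_000,  2, 6],
]
y = [16, 4, 8, 2]   # thread counts observed to perform best (synthetic labels)

model = LinearRegression().fit(X, y)

# Query the model for an unseen combination of code and system state.
suggested = max(1, int(round(model.predict([[1500, 100, 500_000, 8, 2]])[0])))
print("suggested #threads:", suggested)
```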
5.2 Adaptation
According to the Cambridge Dictionary (http://dictionary.cambridge.org/dictionary/english/adaptation), adaptation is “the process of changing to suit different conditions.” In this paper, we use the term adaptation to refer to the property of systems that are capable of evaluating and changing their behavior to achieve specified goals with respect to performance, energy efficiency, or fault tolerance. In dynamic environments, modern parallel computing systems may change their behavior by: (1) changing the number of used processing elements to optimize system resource allocation; (2) changing the algorithm or implementation variant that yields better results with respect to the specified goals; (3) reducing the quality (accuracy) of the output to meet the performance goals; or (4) changing the clock frequency to reduce energy consumption.
|Paper||Method||Adaptation Objectives||Monitored Parameters||Tuned Parameters|
|Thomas et al (2005)||Custom adaptation loop; DT||select the most suitable algorithm implementation||architecture and system information (available memory, cache size, # processors); performance characteristics||algorithm implementation|
|Hoffmann et al (2010a, b)||ODA||apply user defined actions to change the program behavior||performance information retrieved using application heartbeats Hoffmann et al (2010a)||user defined actions (such as: adjusting the clock speed)|
|Eastep et al (2010)||Lock Acquisition Scheduling; RL||adapt the lock’s internal implementation||reward signal (heart rate) retrieved using application heartbeats.||change the lock scheduling policy|
|Eastep et al (2011)||RL||determine the ideal data structure knob settings||reward signal (throughput heart rate); support for external perf. monitors||adjusting the scancount value and performance-critical knob of Flat Combining algorithm.|
|Luk et al (2009)||LR||adaptive mapping of computations to PE||execution-time of parts of the program||choosing the mapping scheme (static or adaptive)|
|Silvano et al (2016)||DSL||adapt applications to meet user defined goals||contextual information, requirements, resources availability||user defined actions (altering resource alloc. and task mapping)|
The literature reviewed in this paper provides examples showing that adaptation (also referred to as self-adaptation) is an effective approach for dealing with the complexity, variability, and dynamism of modern parallel computing systems. Table 8 lists the characteristics (such as adaptation method and objectives, and monitored and tuned parameters) of the scientific publications that use adaptation for software optimization of parallel computing systems.
RQ2: Which optimization goals are achieved using meta-heuristics and machine learning? With regard to the adaptation objectives, Thomas et al (2005) use a custom adaptation loop to adaptively select the most suitable algorithm implementation for a given input data set and system configuration. Hoffmann et al (2010a, b) use an observe-decide-act (ODA) feedback loop to adaptively apply user-defined actions that change the program behavior in order to achieve user-defined goals, such as energy efficiency and throughput. Adaptation methods are used in the smart-locks library Eastep et al (2010), which can change its behavior at run-time to achieve certain goals. Similarly, in Eastep et al (2011) adaptation methods are used for optimizing data structure knobs. Adaptive mapping of computations to the processing units is proposed by Luk et al (2009). The Antarex project (Silvano et al, 2016) aims at providing means for application tuning and adaptation for energy-efficient heterogeneous high-performance computing systems by providing a domain-specific language that allows specifying adaptation goals at compile-time.
RQ3: Which are the common algorithms used to achieve such optimization goals? All of the approaches proposed in the considered scientific publications have at least three components of an adaptation loop: monitoring, deciding, and acting. For example, Thomas et al (2005) monitor architecture and environment parameters, then use a decision tree to analyze this information and perform the required changes (in this case, selecting an algorithm implementation). Similarly, Hoffmann et al (2010b) use the so-called observe-decide-act (ODA) feedback loop to monitor performance-related information (retrieved using the application heartbeats Hoffmann et al (2010a)) and use the heart rate to take user-defined actions, such as adjusting the clock speed, allocating cores, or changing the algorithm. Reinforcement learning (RL), an on-line machine learning algorithm, is used to support the adaptation decisions in both smart locks Eastep et al (2010) and smart data structures Eastep et al (2011), whereas linear regression (LR) is used by Luk et al (2009) for choosing the mapping scheme of computations to processing elements.
RQ4: Which features are considered during the optimization of parallel computing systems? In Table 8 we list two types of parameters: the monitored parameters, used to evaluate whether the adaptation goals have been met, and the tuned parameters, which are actions that change the program behavior until the desired goals are achieved. For monitoring, architecture and environment variables (such as available memory, cache size, and number of processors) and performance characteristics are considered by Thomas et al (2005). Performance-related information retrieved from the heartbeats monitor is used as the monitored parameter in Hoffmann et al (2010b); Eastep et al (2010, 2011). Luk et al (2009) rely on the execution time of parts of the program, whereas the Antarex framework uses contextual information, requirements, and resource availability for monitoring the program behavior. As tuning parameters, the following are considered: selecting the algorithm implementation Thomas et al (2005); Hoffmann et al (2010b); adjusting the clock speed and core allocation Hoffmann et al (2010b); changing the lock scheduling policy Eastep et al (2010); adjusting the scancount Eastep et al (2011); changing the mapping scheme Luk et al (2009); and altering resource allocation and task mapping Silvano et al (2016).
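A minimal observe-decide-act loop, in the spirit of the heartbeats-based approaches discussed above, might look like the sketch below: a monitored throughput signal is compared against a user-defined goal and a single knob (the number of threads) is adjusted accordingly. The throughput model, thresholds, and actuation are hypothetical stand-ins for real monitoring and reconfiguration mechanisms.

```python
# Hypothetical sketch of an observe-decide-act (ODA) adaptation loop.
import time

TARGET_HEART_RATE = 100.0          # desired "heartbeats" (e.g., processed items) per second

def observe(threads):
    """Stand-in for a heartbeats-style performance monitor."""
    return 30.0 * threads ** 0.7   # hypothetical throughput model

def decide(heart_rate, threads):
    """Compare the monitored signal against the goal and pick a new knob setting."""
    if heart_rate < 0.9 * TARGET_HEART_RATE:
        return threads + 1             # below goal: add resources
    if heart_rate > 1.1 * TARGET_HEART_RATE:
        return max(1, threads - 1)     # above goal: free resources (e.g., save energy)
    return threads

def act(threads):
    """Stand-in for reconfiguring the running application."""
    print(f"reconfiguring application to use {threads} threads")

threads = 1
for _ in range(10):                    # the adaptation loop would normally run continuously
    hr = observe(threads)
    new_threads = decide(hr, threads)
    if new_threads != threads:
        threads = new_threads
        act(threads)
    time.sleep(0.01)
```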
5.3 Challenges and Research Directions
At run-time, many execution environment parameters influence the performance behavior of the program. Exploring this huge parameter space is time consuming and impractical for programs with long execution times and large demands for system resources.
The different computing capabilities and energy efficiency of the processing elements of heterogeneous parallel computing systems make scheduling a difficult challenge. Some of the existing scheduling strategies assume that the program is executed on a dedicated system and that all system resources are available for use. Another issue is that commonly used scheduling techniques ignore slow processing elements due to their low performance capabilities; however, always mapping computations to the processing units that offer the highest performance is not optimal either, because the slower processing elements then never receive any work. Furthermore, most of the reviewed approaches target only specific code features (for example, loops), or are limited to specific programming models and applications (data-bound or compute-bound). Many static scheduling approaches require retraining of the prediction model for each new architecture, which limits their general use because training requires a significant amount of training data that is not always available. Approaches that reduce the amount of required training data involve implementing multiple machine learning algorithms (for instance, Ogilvie et al, 2015). Approaches that require only a single program execution (Li et al, 2014; Cooper et al, 2005), trying various system configurations while the program runs, are promising; however, the overhead they introduce is not negligible.
Future research should aim at reducing the scheduling and adaptation overhead of dynamic approaches. Run-time optimization techniques for heterogeneous systems should be developed that utilize all available computing resources to achieve the optimization goals. There is a need for robust run-time optimization frameworks that are useful for a large spectrum of programs and system architectures. Furthermore, techniques that reduce the amount of data generated by system monitoring are needed, in particular for extreme-scale systems.
In this article, we have conducted a systematic literature review that describes approaches that use machine learning and meta-heuristics for software optimization of parallel computing systems. We have classified approaches based on the software life-cycle activities at compile-time and run-time, including the code optimization and generation, scheduling, and adaptation. We have discussed the shortcomings of existing approaches and provided recommendations for future research directions.
Our analysis of the reviewed literature suggests that machine learning and meta-heuristic based techniques for software optimization of parallel computing systems can, in specific cases, deliver performance comparable to manual code optimization or task scheduling strategies. However, many existing solutions are limited to a specific programming language and model, type of application, or system architecture. There is a need for software optimization frameworks that are applicable to a large spectrum of programs and system architectures. Future efforts should focus on developing solutions for widely used general-purpose languages (such as C/C++) and for compilers that are used and supported by the community.
- Agakov et al (2006) Agakov F, Bonilla E, Cavazos J, Franke B, Fursin G, O’Boyle MF, Thomson J, Toussaint M, Williams CK (2006) Using machine learning to focus iterative optimization. In: Proceedings of the International Symposium on Code Generation and Optimization, IEEE Computer Society, pp 295–305
- Ahmad et al (2001) Ahmad I, Kwok Y, Ahmad I, Dhodhi M (2001) Scheduling parallel programs using genetic algorithms. Solutions to Parallel and Distributed Computing Problems New York, USA: John Wiley and Sons, Chapt 9:231–254
- Aho et al (2006) Aho AV, Lam MS, Sethi R, Ullman JD (2006) Compilers: Principles, Techniques, and Tools (2Nd Edition). Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA
- Albayrak et al (2013) Albayrak OE, Akturk I, Ozturk O (2013) Improving Application Behavior on Heterogeneous Manycore Systems Through Kernel Mapping. Parallel Comput 39(12):867–878
- Ansel et al (2009) Ansel J, Chan C, Wong YL, Olszewski M, Zhao Q, Edelman A, Amarasinghe S (2009) PetaBricks: a language and compiler for algorithmic choice, vol 44. ACM
- Barney et al (2010) Barney B, et al (2010) Introduction to parallel computing. Lawrence Livermore National Laboratory 6(13):10
- Beach and Avis (2009) Beach TH, Avis NJ (2009) An intelligent semi-automatic application porting system for application accelerators. In: Proceedings of the combined workshops on UnConventional high performance computing workshop plus memory access workshop, ACM, pp 7–10
- Beck et al (2016) Beck F, Koch S, Weiskopf D (2016) Visual analysis and dissemination of scientific literature collections with survis. IEEE Transactions on Visualization and Computer Graphics 22(1):180–189, DOI 10.1109/TVCG.2015.2467757
- Benkner et al (2011) Benkner S, Pllana S, Träff JL, Tsigas P, Richards A, Namyst R, Bachmayer B, Kessler C, Moloney D, Sanders P (2011) The PEPPHER Approach to Programmability and Performance Portability for Heterogeneous many-core Architectures. In: ParCo
- Biernacki and Waldorf (1981) Biernacki P, Waldorf D (1981) Snowball sampling: Problems and techniques of chain referral sampling. Sociological methods & research 10(2):141–163
- Binotto et al (2013) Binotto APD, Wehrmeister MA, Kuijper A, Pereira CE (2013) Sm@rtConfig: A context-aware runtime and tuning system using an aspect-oriented approach for data intensive engineering applications. Control Engineering Practice 21(2):204–217
- Braun et al (2001) Braun TD, Siegel HJ, Beck N, Bölöni LL, Maheswaran M, Reuther AI, Robertson JP, Theys MD, Yao B, Hensgen D, et al (2001) A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems. Journal of Parallel and Distributed computing 61(6):810–837
- Buyya et al (2009) Buyya R, Yeo CS, Venugopal S, Broberg J, Brandic I (2009) Cloud computing and emerging it platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Generation Computer Systems 25(6):599–616, DOI 10.1016/j.future.2008.12.001
- Carretero et al (2007) Carretero J, Xhafa F, Abraham A (2007) Genetic algorithm based schedulers for grid computing systems. International Journal of Innovative Computing, Information and Control 3(6):1–19
- Castro et al (2011) Castro M, Goes LFW, Ribeiro CP, Cole M, Cintra M, Mehaut JF (2011) A machine learning-based approach for thread mapping on transactional memory applications. In: High Performance Computing (HiPC), 2011 18th International Conference on, IEEE, pp 1–10
- Castro et al (2012) Castro M, Góes LFW, Fernandes LG, Méhaut JF (2012) Dynamic thread mapping based on machine learning for transactional memory applications. In: Euro-Par 2012 Parallel Processing, Springer, pp 465–476
- Cavazos and Moss (2004) Cavazos J, Moss JEB (2004) Inducing heuristics to decide whether to schedule. In: Conference on Programming Language Design and Implementation, ACM, New York, NY, USA, PLDI ’04, pp 183–194
- Cavazos et al (2007) Cavazos J, Fursin G, Agakov F, Bonilla E, Boyle MF, Temam O (2007) Rapidly selecting good compiler optimizations using performance counters. In: Code Generation and Optimization, 2007. CGO’07. International Symposium on, IEEE, pp 185–197
- Chen and Long (2009) Chen X, Long S (2009) Adaptive multi-versioning for OpenMP parallelization via machine learning. In: Parallel and Distributed Systems (ICPADS), 2009 15th International Conference on, IEEE, pp 907–912
- Chirkin et al (2017) Chirkin AM, Belloum AS, Kovalchuk SV, Makkes MX, Melnik MA, Visheratin AA, Nasonov DA (2017) Execution time estimation for workflow scheduling. Future Generation Computer Systems DOI 10.1016/j.future.2017.01.011
- Cooper et al (2005) Cooper KD, Grosul A, Harvey TJ, Reeves S, Subramanian D, Torczon L, Waterman T (2005) ACME: adaptive compilation made efficient. In: ACM SIGPLAN Notices, ACM, vol 40, pp 69–77
- Corbalan et al (2005) Corbalan J, Martorell X, Labarta J (2005) Performance-driven processor allocation. Parallel and Distributed Systems, IEEE Transactions on 16(7):599–611
- Danylenko et al (2011) Danylenko A, Kessler C, Löwe W (2011) Comparing machine learning approaches for context-aware composition. In: Software Composition, Springer, pp 18–33
- Diamos and Yalamanchili (2008) Diamos GF, Yalamanchili S (2008) Harmony: An execution model and runtime for heterogeneous many core systems. In: Proceedings of the 17th International Symposium on High Performance Distributed Computing, ACM, New York, NY, USA, HPDC ’08, pp 197–200, DOI 10.1145/1383422.1383447
- Dongarra et al (2005) Dongarra J, Sterling T, Simon H, Strohmaier E (2005) High-performance computing: clusters, constellations, mpps, and future directions. Computing in Science & Engineering 7(2):51–59
- Duda et al (1973) Duda RO, Hart PE, et al (1973) Pattern classification and scene analysis, vol 3. Wiley New York
- Eastep et al (2010) Eastep J, Wingate D, Santambrogio MD, Agarwal A (2010) Smartlocks: lock acquisition scheduling for self-aware synchronization. In: Proceedings of the 7th international conference on Autonomic computing, ACM, pp 215–224
- Eastep et al (2011) Eastep J, Wingate D, Agarwal A (2011) Smart data structures: an online machine learning approach to multicore data structures. In: Proceedings of the 8th international conference on Autonomic computing, ACM, pp 11–20
- Emani et al (2013) Emani MK, Wang Z, O’Boyle MF (2013) Smart, adaptive mapping of parallelism in the presence of external workload. In: International Symposium on Code Generation and Optimization (CGO), IEEE, pp 1–10
- Fonseca and Cabral (2013) Fonseca A, Cabral B (2013) ÆminiumGPU: An Intelligent Framework for GPU Programming. In: Facing the Multicore-Challenge III, Springer, pp 96–107
- Foster and Kesselman (2003) Foster I, Kesselman C (2003) The Grid 2: Blueprint for a new computing infrastructure. Elsevier
- Foster et al (2008) Foster I, Zhao Y, Raicu I, Lu S (2008) Cloud computing and grid computing 360-degree compared. In: 2008 Grid Computing Environments Workshop, pp 1–10, DOI 10.1109/GCE.2008.4738445
- Fursin et al (2008) Fursin G, Miranda C, Temam O, Namolaru M, Yom-Tov E, Zaks A, Mendelson B, Bonilla E, Thomson J, Leather H, et al (2008) MILEPOST GCC: machine learning based research compiler. In: GCC Summit
- Fursin et al (2011) Fursin G, Kashnikov Y, Memon AW, Chamski Z, Temam O, Namolaru M, Yom-Tov E, Mendelson B, Zaks A, Courtois E, et al (2011) Milepost gcc: Machine learning enabled self-tuning compiler. International Journal of Parallel Programming 39(3):296–327
- Gaussier et al (2015) Gaussier E, Glesser D, Reis V, Trystram D (2015) Improving backfilling by using machine learning to predict running times. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, ACM, p 64
- Gordon et al (2006) Gordon MI, Thies W, Amarasinghe S (2006) Exploiting coarse-grained task, data, and pipeline parallelism in stream programs. In: ACM SIGOPS Operating Systems Review, ACM, vol 40, pp 151–162
- Gould (2006) Gould N (2006) An introduction to algorithms for continuous optimization
- Grewe and O’Boyle (2011) Grewe D, O’Boyle MF (2011) A static task partitioning approach for heterogeneous systems using OpenCL. In: Compiler Construction, Springer, pp 286–305
- Grewe et al (2011) Grewe D, Wang Z, O’Boyle MF (2011) A workload-aware mapping approach for data-parallel programs. In: Proceedings of the 6th International Conference on High Performance and Embedded Architectures and Compilers, ACM, pp 117–126
- Gropp et al (1999) Gropp W, Lusk E, Skjellum A (1999) Using MPI: portable parallel programming with the message-passing interface, vol 1. MIT press
- Grzonka et al (2014) Grzonka D, Kolodziej J, Tao J (2014) Using artificial neural network for monitoring and supporting the grid scheduler performance. In: ECMS, pp 515–522
- Guyon and Elisseeff (2003) Guyon I, Elisseeff A (2003) An introduction to variable and feature selection. The Journal of Machine Learning Research 3:1157–1182
- Hoffmann et al (2010a) Hoffmann H, Eastep J, Santambrogio MD, Miller JE, Agarwal A (2010a) Application heartbeats: a generic interface for specifying program performance and goals in autonomous computing environments. In: Parashar M, Figueiredo RJO, Kiciman E (eds) ICAC, ACM, pp 79–88
- Hoffmann et al (2010b) Hoffmann H, Maggio M, Santambrogio MD, Leva A, Agarwal A (2010b) SEEC: A framework for self-aware computing
- Iakymchuk et al (2016) Iakymchuk R, Jordan H, Bo Peng I, Markidis S, Laure E (2016) A particle-in-cell method for automatic load-balancing with the allscale environment. In: The Exascale Applications & Software Conference (EASC2016)
- Jeffers and Reinders (2015) Jeffers J, Reinders J (2015) High Performance Parallelism Pearls Volume Two: Multicore and Many-core Programming Approaches. Morgan Kaufmann
- Jin et al (2016) Jin C, de Supinski BR, Abramson D, Poxon H, DeRose L, Dinh MN, Endrei M, Jessup ER (2016) A survey on software methods to improve the energy efficiency of parallel computing. The International Journal of High Performance Computing Applications p 1094342016665471, DOI 10.1177/1094342016665471
- Kessler and Löwe (2012) Kessler C, Löwe W (2012) Optimized composition of performance-aware parallel components. Concurrency and Computation: Practice and Experience 24(5):481–498
- Kessler et al (2012) Kessler C, Dastgeer U, Thibault S, Namyst R, Richards A, Dolinsky U, Benkner S, Träff JL, Pllana S (2012) Programmability and performance portability aspects of heterogeneous multi-/manycore systems. In: Design, Automation & Test in Europe Conference & Exhibition (DATE), 2012, IEEE, pp 1403–1408
- Kitchenham and Charters (2007) Kitchenham B, Charters S (2007) Guidelines for performing Systematic Literature Reviews in Software Engineering. Tech. Rep. EBSE 2007-001, Keele University and Durham University Joint Report
- Lee and Schopf (2003) Lee BD, Schopf JM (2003) Run-time prediction of parallel applications on shared environments. In: Cluster Computing, 2003. Proceedings. 2003 IEEE International Conference on, IEEE, pp 487–491
- Li et al (2012) Li L, Dastgeer U, Kessler C (2012) Adaptive off-line tuning for optimized composition of components for heterogeneous many-core systems. In: High Performance Computing for Computational Science-VECPAR 2012, Springer, pp 329–345
- Li et al (2014) Li M, Zeng L, Meng S, Tan J, Zhang L, Butt AR, Fuller N (2014) Mronline: Mapreduce online performance tuning. In: Proceedings of the 23rd international symposium on High-performance parallel and distributed computing, ACM, pp 165–176
- Liu et al (2013) Liu B, Zhao Y, Zhong X, Liang Z, Feng B (2013) A Novel Thread Partitioning Approach Based on Machine Learning for Speculative Multithreading. In: High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing (HPCC_EUC), 2013 IEEE 10th International Conference on, IEEE, pp 826–836
- Luk et al (2009) Luk CK, Hong S, Kim H (2009) Qilin: Exploiting parallelism on heterogeneous multiprocessors with adaptive mapping. In: Proceedings of the 42Nd Annual IEEE/ACM International Symposium on Microarchitecture, ACM, New York, NY, USA, MICRO 42, pp 45–55, DOI 10.1145/1669112.1669121
- Malawski et al (2015) Malawski M, Juve G, Deelman E, Nabrzyski J (2015) Algorithms for cost- and deadline-constrained provisioning for scientific workflow ensembles in iaas clouds. Future Generation Computer Systems 48:1 – 18, DOI 10.1016/j.future.2015.01.004, special Section: Business and Industry Specific Cloud
- Mantripragada et al (2014) Mantripragada K, Binotto APD, Tizzei LP (2014) A Self-adaptive Auto-scaling Method for Scientific Applications on HPC Environments and Clouds. CoRR abs/1412.6392
- Mastelic et al (2015) Mastelic T, Fdhila W, Brandic I, Rinderle-Ma S (2015) Predicting resource allocation and costs for business processes in the cloud. In: 2015 IEEE World Congress on Services, pp 47–54, DOI 10.1109/SERVICES.2015.16
- Memeti and Pllana (2016a) Memeti S, Pllana S (2016a) Combinatorial optimization of dna sequence analysis on heterogeneous systems. Concurrency and Computation: Practice and Experience, DOI 10.1002/cpe.4037
- Memeti and Pllana (2016b) Memeti S, Pllana S (2016b) Combinatorial optimization of work distribution on heterogeneous systems. In: 2016 45th International Conference on Parallel Processing Workshops (ICPPW), pp 151–160, DOI 10.1109/ICPPW.2016.35
- Memeti and Pllana (2016c) Memeti S, Pllana S (2016c) A machine learning approach for accelerating dna sequence analysis. The International Journal of High Performance Computing Applications, DOI 10.1177/1094342016654214
- Memeti et al (2016) Memeti S, Pllana S, Kołodziej J (2016) Optimal Worksharing of DNA Sequence Analysis on Accelerated Platforms, Springer International Publishing, Cham, pp 279–309. DOI 10.1007/978-3-319-44881-7_14
- Mitchell (1997) Mitchell TM (1997) Machine Learning, 1st edn. McGraw-Hill, Inc., New York, NY, USA
- Mittal and Vetter (2015) Mittal S, Vetter JS (2015) A survey of cpu-gpu heterogeneous computing techniques. ACM Computing Surveys (CSUR) 47(4):69
- Monsifrot et al (2002) Monsifrot A, Bodin F, Quiniou R (2002) A machine learning approach to automatic production of compiler heuristics. In: Artificial Intelligence: Methodology, Systems, and Applications, Springer, pp 41–50
- Nvidia (2015) Nvidia C (2015) CUDA C Programming Guide. NVIDIA Corporation 120:18
- Ogilvie et al (2015) Ogilvie WF, Petoumenos P, Wang Z, Leather H (2015) CGO: G: Intelligent heuristic construction with active learning
- OpenMP (2013) OpenMP A (2013) OpenMP 4.0 specification, June 2013
- Padua (2011) Padua D (2011) Encyclopedia of Parallel Computing. Springer Publishing Company, Incorporated
- Page and Naughton (2005a) Page AJ, Naughton TJ (2005a) Dynamic task scheduling using genetic algorithms for heterogeneous distributed computing. In: 19th International Parallel and Distributed Processing Symposium, IEEE, pp 189a–189a
- Page and Naughton (2005b) Page AJ, Naughton TJ (2005b) Framework for task scheduling in heterogeneous distributed computing using genetic algorithms. Artificial Intelligence Review 24(3):415–429, DOI 10.1007/s10462-005-9002-x
- Park et al (2010) Park Yw, Baskiyar S, Casey K (2010) A novel adaptive support vector machine based task scheduling. In: Proceedings the 9th International Conference on Parallel and Distributed Computing and Networks, Austria, pp 16–18
- Pekhimenko and Brown (2011) Pekhimenko G, Brown AD (2011) Efficient program compilation through machine learning techniques. In: Software Automatic Tuning, Springer, pp 335–351
- Pllana et al (2008) Pllana S, Benkner S, Mehofer E, Natvig L, Xhafa F (2008) Towards an Intelligent Environment for Programming Multi-core Computing Systems. In: Euro-Par Workshops, Springer, Lecture Notes in Computer Science, vol 5415, pp 141–151
- Press et al (2007) Press WH, Teukolsky SA, Vetterling WT, Flannery BP (2007) Numerical Recipes 3rd Edition: The Art of Scientific Computing, 3rd edn. Cambridge University Press
- Ravi and Agrawal (2011) Ravi VT, Agrawal G (2011) A dynamic scheduling framework for emerging heterogeneous systems. In: High Performance Computing (HiPC), 2011 18th International Conference on, IEEE, pp 1–10
- Rossbach et al (2013) Rossbach CJ, Yu Y, Currey J, Martin JP, Fetterly D (2013) Dandelion: a compiler and runtime for heterogeneous systems. In: Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, ACM, pp 49–68
- Sadashiv and Kumar (2011) Sadashiv N, Kumar SMD (2011) Cluster, grid and cloud computing: A detailed comparison. In: 2011 6th International Conference on Computer Science Education (ICCSE), pp 477–482, DOI 10.1109/ICCSE.2011.6028683
- Sandrieser et al (2012) Sandrieser M, Benkner S, Pllana S (2012) Using Explicit Platform Descriptions to Support Programming of Heterogeneous Many-Core Systems. Parallel Computing 38(1-2):52–56
- Silvano et al (2016) Silvano C, Agosta G, Cherubin S, Gadioli D, Palermo G, Bartolini A, Benini L, Martinovič J, Palkovič M, Slaninová K, Bispo Ja, Cardoso JaMP, Abreu R, Pinto P, Cavazzoni C, Sanna N, Beccari AR, Cmar R, Rohou E (2016) The antarex approach to autotuning and adaptivity for energy efficient hpc systems. In: Proceedings of the International Conference on Computing Frontiers, ACM, New York, NY, USA, CF ’16, pp 288–293, DOI 10.1145/2903150.2903470
- Sivanandam and Visalakshi (2009) Sivanandam SN, Visalakshi P (2009) Dynamic task scheduling with load balancing using parallel orthogonal particle swarm optimisation. Int J Bio-Inspired Comput 1(4):276–286, DOI 10.1504/IJBIC.2009.024726
- Smanchat et al (2013) Smanchat S, Indrawan M, Ling S, Enticott C, Abramson D (2013) Scheduling parameter sweep workflow in the grid based on resource competition. Future Generation Computer Systems 29(5):1164 – 1183, DOI 10.1016/j.future.2013.01.005
- Stephenson and Amarasinghe (2005) Stephenson M, Amarasinghe S (2005) Predicting unroll factors using supervised classification. In: Code Generation and Optimization, 2005. CGO 2005. International Symposium on, IEEE, pp 123–134
- Stephenson et al (2003) Stephenson M, Amarasinghe S, Martin M, O’Reilly UM (2003) Meta optimization: Improving compiler heuristics with machine learning. SIGPLAN Not 38(5):77–90
- Sterling et al (1995) Sterling T, Becker DJ, Savarese D, Dorband JE, Ranawake UA, Packer CV (1995) Beowulf: A parallel workstation for scientific computation. In: In Proceedings of the 24th International Conference on Parallel Processing, pp 11–14
- Stone et al (2010) Stone JE, Gohara D, Shi G (2010) OpenCL: A parallel programming standard for heterogeneous computing systems. Computing in science & engineering 12(1-3):66–73
- Thomas et al (2005) Thomas N, Tanase G, Tkachyshyn O, Perdue J, Amato NM, Rauchwerger L (2005) A framework for adaptive algorithm selection in STAPL. In: Proceedings of the tenth ACM SIGPLAN symposium on Principles and practice of parallel programming, ACM, pp 277–288
- Tiwari and Hollingsworth (2011) Tiwari A, Hollingsworth JK (2011) Online adaptive code generation and tuning. In: 2011 IEEE International Parallel Distributed Processing Symposium, pp 879–892, DOI 10.1109/IPDPS.2011.86
- Tiwari et al (2009) Tiwari A, Chen C, Chame J, Hall M, Hollingsworth JK (2009) A scalable auto-tuning framework for compiler optimization. In: Proceedings of the 2009 IEEE International Symposium on Parallel&Distributed Processing, IEEE Computer Society, Washington, DC, USA, IPDPS ’09, pp 1–12, DOI 10.1109/IPDPS.2009.5161054
- TOP500 (2016) TOP500 (2016) TOP500 Supercomputer Sites. http://www.top500.org/, accessed: Jan. 2016
- Tournavitis et al (2009) Tournavitis G, Wang Z, Franke B, O’Boyle MF (2009) Towards a holistic approach to auto-parallelization: integrating profile-driven parallelism detection and machine-learning based mapping. In: ACM Sigplan Notices, vol 44, pp 177–187
- Voss and Kim (2011) Voss M, Kim W (2011) Multicore desktop programming with intel threading building blocks. IEEE Software 28:23–31
- Wang and O’Boyle (2009) Wang Z, O’Boyle MF (2009) Mapping parallelism to multi-cores: a machine learning based approach. In: ACM Sigplan notices, ACM, vol 44, pp 75–84
- Wang and O’boyle (2013) Wang Z, O’boyle MF (2013) Using machine learning to partition streaming programs. ACM Transactions on Architecture and Code Optimization (TACO) 10(3):20
- Wienke et al (2012) Wienke S, Springer P, Terboven C, an Mey D (2012) Openacc: First experiences with real-world applications. In: Proceedings of the 18th International Conference on Parallel Processing, Springer-Verlag, Berlin, Heidelberg, Euro-Par’12, pp 859–870
- Wolsey and Nemhauser (2014) Wolsey LA, Nemhauser GL (2014) Integer and combinatorial optimization. John Wiley & Sons
- Zhang et al (2004) Zhang Y, Burcea M, Cheng V, Ho R, Voss M (2004) An adaptive openmp loop scheduler for hyperthreaded smps. In: ISCA PDCS, pp 256–263
- Zhang et al (2005) Zhang Y, Voss M, Rogers E (2005) Runtime empirical selection of loop schedulers on hyperthreaded smps. In: Parallel and Distributed Processing Symposium, 2005. Proceedings. 19th IEEE International, IEEE, pp 44b–44b
- Zomaya and Teh (2001) Zomaya AY, Teh YH (2001) Observations on using genetic algorithms for dynamic load-balancing. Parallel and Distributed Systems, IEEE Transactions on 12(9):899–911
- Zomaya et al (2001) Zomaya AY, Lee RC, Olariu S (2001) An introduction to genetic-based scheduling in parallel processor systems. Solutions to Parallel and Distributed Computing Problems pp 111–133