Learning to Optimize Tensor Programs

Abstract

We introduce a learning-based framework to optimize tensor programs for deep learning workloads. Efficient implementations of tensor operators, such as matrix multiplication and high-dimensional convolution, are key enablers of effective deep learning systems. However, existing systems rely on manually optimized libraries, such as cuDNN, in which only a narrow range of server-class GPUs are well supported. The reliance on hardware-specific operator libraries limits the applicability of high-level graph optimizations and incurs significant engineering costs when deploying to new hardware targets. We use learning to remove this engineering burden. We learn domain-specific statistical cost models to guide the search of tensor operator implementations over billions of possible program variants. We further accelerate the search by effective model transfer across workloads. Experimental results show that our framework delivers performance competitive with state-of-the-art hand-tuned libraries for low-power CPU, mobile GPU, and server-class GPU.


1 Introduction

Deep learning has become ubiquitous in our daily lives. Deep learning models can now recognize images [22], understand natural language [37], play games [26], and automate system decisions (e.g., device placement [25] and indexing [20]). Tensor operators, such as matrix multiplication and high-dimensional convolution, are basic building blocks of deep learning models. Scalable learning systems [1, 4, 8, 2] rely on manually optimized high-performance tensor operation libraries, such as cuDNN. These libraries are optimized for a narrow range of hardware. To optimize a tensor operator, programmers need to choose from many implementations that are logically equivalent but differ dramatically in performance due to differences in threading, memory reuse, pipelining, and other hardware factors. Supporting diverse hardware back-ends requires tremendous engineering effort. Even on currently supported hardware, the development of deep learning frameworks and models is fundamentally limited by the set of optimized operators in libraries, preventing optimizations such as operator fusion that can produce unsupported operators.

We ask the following question: can we use learning to alleviate this engineering burden and automatically optimize tensor operator programs for a given hardware platform? This paper provides an affirmative answer to this question. We build statistical cost models that predict the run time of a given low-level program. These cost models guide the exploration of the space of possible programs. Our cost models use transferable representations that can generalize across different workloads to accelerate search. We make the following contributions:

  • We provide a formalization of the problem of learning to optimize tensor programs and summarize its key characteristics.

  • We propose a machine learning-based framework to solve this new problem.

  • We further accelerate the optimization by using transfer learning.

  • We provide detailed empirical analysis of component design choices in this framework.

Experimental results on real-world deep learning workloads show that our framework brings end-to-end performance improvements ranging from 1.2× to 3.8× over existing frameworks.

2 Problem Formalization

Figure 1: Motivating example of the problem. For a given tensor operator specification (e.g., $C_{ij} = \sum_k A_{ki} B_{kj}$), there are multiple possible low-level program implementations, each with different choices of loop order, tiling sizes, and other options. Each choice creates a logically equivalent program with different performance. Our problem is to explore the space of programs and find an optimized one.

We start this section by walking through a motivating example in Figure 1. To enable automatic code generation, we specify tensor operators using index expressions (e.g., $C_{ij} = \sum_k A_{ki} B_{kj}$). Let $\mathcal{E}$ denote the space of index expressions. An index expression $e \in \mathcal{E}$ leaves many low-level implementation details, such as loop order, memory scope, and threading, unspecified. As a result, we can generate multiple variants of low-level code that are logically equivalent to the expression for a given $e$. We use $\mathcal{S}_e$ to denote the space of possible transformations (schedules) from $e$ to low-level code. For an $s \in \mathcal{S}_e$, let $x = g(e, s)$ be the generated low-level code. Here, $g$ represents a compiler framework that generates low-level code from $e$ and $s$. We are interested in minimizing $f(g(e, s))$, where $f$ is the real run time cost on the hardware. Importantly, we do not know an analytical expression for $f$, but we can query it by running experiments on the hardware. For a given tuple $(\mathcal{S}_e, g, f)$, our problem can be formalized as the following objective:

$\min_{s \in \mathcal{S}_e} f(g(e, s))$   (1)
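To make the notation concrete, the following is a minimal, purely illustrative Python sketch of the objective in Equation 1. The helpers `schedule_space`, `generate_code`, and `measure_on_hardware` are hypothetical stand-ins for $\mathcal{S}_e$, $g$, and $f$; they are not part of any real framework API.

```python
def optimize_naively(e, schedule_space, generate_code, measure_on_hardware):
    """Exhaustive version of min_{s in S_e} f(g(e, s)); only feasible for tiny spaces."""
    best_s, best_cost = None, float("inf")
    for s in schedule_space(e):          # S_e: schedules for expression e
        x = generate_code(e, s)          # x = g(e, s): low-level program
        cost = measure_on_hardware(x)    # f(x): measured run time
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s, best_cost
```

In practice the space is far too large to enumerate, which is exactly why the rest of the paper replaces this exhaustive loop with model-guided search.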

This problem formalization is similar to that of traditional hyper-parameter optimization problems [33, 32, 34, 12, 16, 24], but with several key differentiating characteristics:

Relatively Low Experiment Cost. Traditionally, hyper-parameter optimization problems incur a high cost to query $f$: running experiments could take hours or days. In contrast, the cost of compiling and running a tensor program is a few seconds. This property requires the training and inference of the model to be fast (otherwise there would be no benefit over profiling execution on real hardware). It also means we can collect more training data during optimization.

Domain-Specific Problem Structure. Most existing hyper-parameter optimization algorithms treat the problem as a black box. As we are optimizing programs, we can leverage their rich structures to build effective models.

Large Quantity of Similar Operators. An end-to-end deep learning system needs to optimize tensor operator programs for different input sizes, shapes, and data layout configurations. These tasks are similar in nature and can offer opportunities for transfer learning.

We describe two key prerequisites for automatic code generation that is competitive with hand-optimized code. (1) We need to define an exhaustive search space $\mathcal{S}_e$ covering all hardware-aware optimizations in hand-tuned libraries. (2) We need to efficiently find an optimal schedule in $\mathcal{S}_e$.

There are many domain-specific languages (DSLs) for code generation [31, 35, 14, 36, 19, 29], each with a different $\mathcal{E}$, $\mathcal{S}_e$, and $g$. Polyhedral models [5, 41, 40] are a popular choice for $\mathcal{S}_e$; they model the loop domains as integer linear constraints. An alternative approach originating from Halide [31] is to define a schedule space using a set of transformation primitives. Improving $\mathcal{S}_e$ is an important research direction that is beyond the scope of this paper. We pick a rich $\mathcal{S}_e$ and focus on schedule optimization in the rest of the paper.

We use scheduling primitives from an existing code generation framework to define $\mathcal{S}_e$; these include loop tiling, loop ordering, shared memory caching for GPUs, and annotations such as unrolling and vectorization. The search space size can be on the order of billions for a single GPU operator. As we will find in our experiments, our choice of $\mathcal{S}_e$ can already contain programs competitive with hand-optimized libraries.
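As a purely illustrative example, one point $s$ in such a schedule space could be represented as a set of knob settings like the following; the knob names and values are hypothetical and not the framework's actual configuration format.

```python
# Illustrative only: one possible schedule configuration for a tiled GPU kernel.
example_schedule_config = {
    "tile_x": [8, 4, 2],        # multi-level tiling factors for one spatial axis
    "tile_y": [16, 2, 2],
    "tile_k": [32, 4],          # tiling of the reduction axis
    "loop_order": ["yo", "xo", "ko", "ki", "yi", "xi"],
    "cache_shared": True,       # stage inputs through GPU shared memory
    "unroll": 4,                # unroll factor for the innermost loop
    "vectorize": True,          # vectorize the innermost loop
}
```

Each knob takes one of a few discrete values, and the product of all knob choices is what makes the space grow to billions of candidates.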

3 Learning to Optimize Tensor Programs

Figure 2: Overview of Learning to Optimize Tensor Program Framework

We propose a machine learning-based framework to solve this problem. Figure 2 gives an overview of the modules in the framework. We build a statistical cost model $\hat{f}(x)$ to estimate the cost of each low-level program $x$. An exploration module proposes new schedule configurations to run on the hardware. The run time statistics are collected in a database $\mathcal{D}$, which in turn is used to update $\hat{f}$. We will discuss module-specific design choices in the following subsections.

3.1 Statistical Cost Model

The first statistical model we support is based on gradient boosted trees (GBTs) [10]. We extract domain-specific features from a given low-level AST $x$. The features include loop structure information, such as memory access count and data reuse ratio, in addition to generic annotations (e.g., vectorization, unrolling, thread binding). We use XGBoost [7], which has proven to be a strong feature-based model in past problems. Our second model is a TreeGRU [38], which recursively encodes a low-level AST into an embedding vector. We map the embedding vector to a final predicted cost using a linear layer.

GBT and TreeGRU represent two major kinds of machine learning approaches to the problem. Both approaches are valuable, and they exhibit different characteristics. The GBT relies on precise feature extraction and can make fast predictions using CPUs. The deep learning-based approach is extensible and does not require feature engineering, but brings challenges in training and prediction speed. We apply batching to the TreeGRU model and use a GPU to make training and prediction fast enough to be usable in our framework.
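To make the GBT variant concrete, here is a minimal sketch of training an XGBoost cost model on extracted AST features. The file paths, feature layout, and hyper-parameters are placeholders, not the framework's actual configuration; the framework's default training objective is the rank-based loss of Section 3.2, while this sketch uses plain regression for simplicity.

```python
import numpy as np
import xgboost as xgb

# Hypothetical inputs: X holds per-program feature vectors extracted from the loop AST
# (loop lengths, touch counts, reuse ratios, annotation one-hots, ...); y holds the
# measured run time costs stored in the database D.
X = np.load("features.npy")   # shape (n_programs, n_features); placeholder path
y = np.load("costs.npy")      # shape (n_programs,); placeholder path

dtrain = xgb.DMatrix(X, label=y)
params = {
    "max_depth": 8,
    "eta": 0.2,
    "objective": "reg:squarederror",  # regression variant; a rank objective is the paper's default
}
cost_model = xgb.train(params, dtrain, num_boost_round=200)

# Predict relative costs for a batch of candidate programs.
scores = cost_model.predict(xgb.DMatrix(X[:32]))
```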

Input: transformation space $\mathcal{S}_e$
Output: selected schedule configuration $s^*$
$\mathcal{D} \leftarrow \emptyset$
while the trial budget is not exhausted do
        // Pick the next promising batch
        Run parallel simulated annealing over $\mathcal{S}_e$ with $\hat{f}$ as the energy function
        Select a batch $S$ of candidates from the result by maximizing Equation 3
        // Run measurement on hardware environment
        for $s$ in $S$ do
                $c \leftarrow f(g(e, s))$  // profile the generated program on the hardware
                $\mathcal{D} \leftarrow \mathcal{D} \cup \{(g(e, s), c)\}$
        end for
        // Update cost model
        Update $\hat{f}$ using $\mathcal{D}$
end while
$s^* \leftarrow$ the best schedule configuration found so far
Algorithm 1 Learning to Optimize Tensor Programs
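The following compact Python sketch restates Algorithm 1. All helpers (`propose_batch`, `select_diverse`, `measure`, `generate_code`) and the `cost_model.fit` interface are hypothetical stand-ins for the modules described in this section, not the framework's actual API.

```python
def optimize(e, cost_model, propose_batch, select_diverse, measure, generate_code,
             n_trials=800, batch_size=32):
    """Model-guided search loop over the schedule space of expression e (sketch)."""
    database = []                                 # D: (program, measured cost) pairs
    best_s, best_cost = None, float("inf")
    trials = 0
    while trials < n_trials:
        candidates = propose_batch(e, cost_model)             # simulated annealing proposals
        batch = select_diverse(candidates, cost_model, batch_size)  # diversity-aware pick
        for s in batch:                                        # hardware measurements
            x = generate_code(e, s)
            cost = measure(x)
            database.append((x, cost))
            if cost < best_cost:
                best_s, best_cost = s, cost
        cost_model.fit(database)                               # update the cost model
        trials += len(batch)
    return best_s
```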

3.2 Training Objective Function

There are multiple objective functions we could use to train a statistical cost model for a given collection of data $\mathcal{D} = \{(x_i, c_i)\}$. A common choice is a regression loss, such as the squared error $\sum_i (\hat{f}(x_i) - c_i)^2$, which encourages the model to predict cost accurately. On the other hand, in the selection process we only care about the relative order of the run times of programs rather than their absolute values, so we can instead use the following rank loss function [6]:

$\sum_{i,j} \log\left(1 + e^{-\operatorname{sign}(c_i - c_j)\,(\hat{f}(x_i) - \hat{f}(x_j))}\right)$   (2)

The prediction $\hat{f}(x)$ can then be used to select the top-performing implementations.
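The rank loss in Equation 2 can be computed directly from a batch of predictions and measured costs; a small NumPy sketch is shown below (self-pairs contribute only a constant and are left in for brevity).

```python
import numpy as np

def pairwise_rank_loss(pred, cost):
    """Rank loss of Equation 2 for one batch: pred are model scores f_hat(x_i),
    cost are measured run times c_i. Only the relative order matters."""
    pred = np.asarray(pred, dtype=float)
    cost = np.asarray(cost, dtype=float)
    diff_pred = pred[:, None] - pred[None, :]        # f_hat(x_i) - f_hat(x_j)
    sign = np.sign(cost[:, None] - cost[None, :])    # sign(c_i - c_j)
    return np.sum(np.log1p(np.exp(-sign * diff_pred)))
```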

3.3 Exploration Module

The exploration module controls the search loop, which is summarized in Algorithm 1. At each iteration, the exploration module needs to pick a batch of candidate programs based on $\hat{f}$ and query $f$ on real hardware. We cannot simply enumerate the entire space $\mathcal{S}_e$ and pick the top-$b$ candidates due to the size of the search space. Instead, we use simulated annealing [18] with $\hat{f}$ as the energy function. Specifically, we use a batch of parallel Markov chains to improve the prediction throughput of the statistical cost model. We select the top-performing batch of candidates to run on real hardware. The collected performance data is used to update $\hat{f}$. We make the states of the Markov chains persistent across updates of $\hat{f}$.
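A minimal sketch of the batched simulated annealing step is shown below. Here `energy` plays the role of the cost-model prediction $\hat{f}$ evaluated on a whole batch at once, and `mutate` perturbs one knob of a schedule configuration; both are hypothetical stand-ins, as are the temperature schedule and step count.

```python
import numpy as np

def parallel_simulated_annealing(init_states, mutate, energy, n_steps=500, temp=1.0):
    """Run one Markov chain per initial state, guided by the cost model (sketch)."""
    states = list(init_states)
    scores = np.asarray(energy(states), dtype=float)
    for _ in range(n_steps):
        proposals = [mutate(s) for s in states]
        new_scores = np.asarray(energy(proposals), dtype=float)
        # Accept lower-energy proposals, or worse ones with Metropolis probability.
        accept_prob = np.exp(np.minimum(0.0, -(new_scores - scores) / temp))
        accept = (new_scores < scores) | (np.random.rand(len(states)) < accept_prob)
        for i, a in enumerate(accept):
            if a:
                states[i], scores[i] = proposals[i], new_scores[i]
    return states, scores
```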

Diversity-aware Exploration. We consider both quality and diversity when selecting candidates for hardware evaluation. Assume that the schedule configuration $s$ can be decomposed into components $s = [s_1, s_2, \ldots, s_m]$. We maximize the following objective to select a candidate set $S$ of size $b$ from the top candidates:

$L(S) = -\sum_{s \in S} \hat{f}(g(e, s)) + \alpha \sum_{j=1}^{m} \left| \{ s_j : s \in S \} \right|$   (3)

The first term encourages us to pick candidates that have low run time cost. The second term counts the number of different configuration components that are covered by $S$. $L(S)$ is a submodular function, and we can apply the greedy algorithm [28, 21] to get an approximate solution, as sketched below.
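The greedy maximization of Equation 3 can be sketched as follows. The candidate representation (dicts of hashable knob settings) and the `alpha` weight are illustrative assumptions.

```python
def select_diverse(candidates, predicted_cost, b, alpha=1.0):
    """Greedy maximization of the diversity-aware objective in Equation 3 (sketch).
    Each candidate is a dict of knob settings (assumed hashable); predicted_cost[i]
    is the cost-model score of candidates[i]."""
    selected_idx, covered = [], set()
    remaining = set(range(len(candidates)))
    while remaining and len(selected_idx) < b:
        def marginal_gain(i):
            knobs = set(candidates[i].items())
            # low predicted cost + number of newly covered knob settings
            return -predicted_cost[i] + alpha * len(knobs - covered)
        best = max(remaining, key=marginal_gain)
        selected_idx.append(best)
        covered |= set(candidates[best].items())
        remaining.remove(best)
    return [candidates[i] for i in selected_idx]
```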

Uncertainty Estimator. Bayesian optimization methods [33, 32, 34, 16] use acquisition functions other than the mean prediction when an uncertainty estimate of $\hat{f}$ is available. Typical choices include expected improvement (EI) and upper confidence bound (UCB). We can use bootstrapping to get an uncertainty estimate of the model and validate the effectiveness of these methods. As we will see in the experiments, considering uncertainty does not improve the search in our problem. However, the choice of acquisition function remains an interesting direction that is worthy of further exploration.
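For completeness, a sketch of the bootstrap-based uncertainty estimate described above is shown below; the ensemble size and exploration weight are illustrative assumptions. Because we minimize run time, the optimistic acquisition score is a lower confidence bound rather than the usual UCB.

```python
import numpy as np

def bootstrap_acquisition(models, X, kappa=1.0):
    """Uncertainty via bootstrapping (sketch): `models` are cost models trained on
    bootstrap resamples of the database D. Returns the mean prediction and an
    optimistic (lower-confidence-bound) cost estimate for ranking candidates."""
    preds = np.stack([m.predict(X) for m in models])   # shape (n_models, n_programs)
    mean, std = preds.mean(axis=0), preds.std(axis=0)
    lcb = mean - kappa * std                            # optimistic cost estimate
    return mean, lcb
```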

4 Accelerating Optimization via Transfer Learning

Thus far, we have only focused on learning to optimize a single tensor operator workload. In practice, we need to optimize many tensor operators with different input shapes and data types. In a real-world setting, the system collects historical data $\mathcal{D}'$ from previously seen workloads. We can use transfer learning to make effective use of $\mathcal{D}'$ to speed up the optimization.

The key to transfer learning is to create a transferable representation that is invariant across the source and target domains. We can then share the cost model across domains using the common representation. Different choices of representation have different levels of invariance.

A common practice in Bayesian optimization methods is to directly use the configuration as the input to the model. However, the search space specification can change for a different workload, or when the user wants to specify a new search space for the same workload. The configuration representation is not invariant to changes in the search space.

Figure 3: Examples of possible ways to encode the low-level loop AST.

On the other hand, the low-level loop AST $x$ (Figure 3a) is a shared representation of programs that is invariant to the search space. To leverage this invariance, our cost model takes the low-level loop AST as input. We also need to encode $x$ into a vector space to perform prediction. The specific encoding of $x$ can also result in different levels of invariance.

Context Relation Features for GBT. We define context features at each loop level to represent loop characteristics. A simple representation flattens the context features into a vector (e.g., Figure 3b, where each loop has a row of features). Context features are informative but, crucially, cannot generalize across different loop nest patterns; we define context relation features to overcome this issue.

To build context relation features, we instead treat the context vectors as a bag of points and extract features that model relations between feature axes. Formally, let $C$ be the context feature matrix such that $C_{ki}$ corresponds to the $i$-th feature of loop $k$. We define a set of spaced constant thresholds $\beta_1, \ldots, \beta_t$ and build the relation feature between features $i$ and $j$ from how the values in columns $C_{\cdot i}$ and $C_{\cdot j}$ fall relative to these thresholds. This encoding summarizes useful relations such as loop count vs. touched memory size (related to the memory hierarchy of the access), which affect run time cost.
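The exact form of the relation feature is not reproduced here; the following sketch shows one plausible instantiation of a threshold-based relation feature, purely to illustrate the idea of encoding pairwise relations between feature axes independently of the number or nesting pattern of loops.

```python
import numpy as np

def context_relation_features(C, thresholds):
    """One plausible instantiation of relation features (illustrative, not the
    paper's exact formula). C[k, i] is the i-th context feature of loop k; for each
    pair of feature axes (i, j) and each pair of thresholds, record whether some
    loop exceeds both thresholds simultaneously."""
    n_loops, n_feats = C.shape
    t = len(thresholds)
    R = np.zeros((n_feats, n_feats, t, t))
    for i in range(n_feats):
        for j in range(n_feats):
            for s, beta_s in enumerate(thresholds):
                for u, beta_u in enumerate(thresholds):
                    R[i, j, s, u] = np.any((C[:, i] > beta_s) & (C[:, j] > beta_u))
    return R.reshape(-1)   # fixed-size vector regardless of loop nest shape
```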

Context Encoded TreeGRU. An invariant representation also exists for the neural-based model. Figure 3c shows a way to encode the program by learning an embedding vector for each identifier and summarizing the AST using a TreeGRU. This model works well for modeling a single workload. However, the set of loop variables can change across different domains, and we do not have embeddings for the new loop variables. We instead encode each loop variable using the context vector extracted for GBT to summarize the AST (Figure 3d). We scatter each loop level's embedding vector into a set of slots, weighted by a softmax over its context vector. Conceptually, the softmax classifies the loop level into one of the memory hierarchy slots. Then we sum the scattered vectors of all loop levels together to get the final embedding.
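A small sketch of this scatter-and-sum encoding is shown below; the projection matrix `W`, the number of slots, and all shapes are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def context_encoded_embedding(contexts, embeddings, W):
    """Sketch of the encoding in Figure 3d. contexts[k] is the context feature vector
    of loop k, embeddings[k] its embedding from the recursive encoder, and W a learned
    projection. The softmax soft-assigns each loop to one of n_slots memory-hierarchy
    slots; the scattered embeddings are summed into a fixed-size program vector."""
    embeddings = np.asarray(embeddings)
    n_slots, d = W.shape[0], embeddings.shape[1]
    out = np.zeros((n_slots, d))
    for ctx, emb in zip(contexts, embeddings):
        weights = softmax(W @ ctx)          # soft slot assignment, shape (n_slots,)
        out += np.outer(weights, emb)       # scatter the loop embedding into slots
    return out.reshape(-1)                  # concatenate slots into one vector
```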

Once we have a transferable representation, we can use a simple transfer learning method by combining a global model and an in-domain local model as follows:

$\hat{f}(x) = \hat{f}^{(global)}(x) + \hat{f}^{(local)}(x)$   (4)

The global model $\hat{f}^{(global)}$ is trained on $\mathcal{D}'$ using the invariant representation and helps to make effective initial predictions before we have enough data to fit the local model $\hat{f}^{(local)}$.
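Equation 4 amounts to a simple additive combination at prediction time; the sketch below assumes both models expose a hypothetical `predict` method.

```python
def transfer_prediction(x, global_model, local_model):
    """Equation 4 (sketch): combine a global model trained on historical data D' with
    an in-domain local model trained on data collected for the current workload."""
    return global_model.predict(x) + local_model.predict(x)
```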

Workload Name C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12
H, W 224,224 56,56 56,56 56,56 56,56 28,28 28,28 28,28 14,14 14,14 14,14 7,7
IC, OC 3,64 64,64 64,64 64,128 64,128 128,128 128,256 128,256 256,256 256,512 256,512 512,512
K, S 7,2 3,1 1,1 3,2 1,2 3,1 3,2 1,2 3,1 3,2 1,2 3,1
Table 1: Configurations of all conv2d operators in single batch ResNet-18 inference. H,W denotes height and width, IC input channels, OC output channels, K kernel size, and S stride size.

5 Relation to Prior Works

Black box optimization (auto-tuning) is used in high-performance computing libraries such as ATLAS [42] and FFTW [11]. Alternatively, a hardware-dependent cost model can be built to guide the search [27, 5]. Polyhedral methods [5, 41] use integer linear programming to do cost optimization. Tensor Comprehensions [40] combines both approaches, using black-box optimization to choose parameters of thread blocks and polyhedral optimization to generate internal loops. Black box approaches can require many experiment trials to explore a huge $\mathcal{S}_e$. On the other hand, predefined cost models may not be accurate enough to capture the complexity of modern hardware and must be manually redefined for each new hardware target.

Statistical cost models have been previously applied to optimize SAT solvers [16, 17]. We apply this idea to our problem and build a domain-specific cost model that enables effective transfer among workloads. There is a recent trend of using deep neural networks to do program analysis [3, 9]. We hope our new problem setting and experiment environment can serve as a testbed for new research opportunities in related directions.

6 Experiments

Figure 4: Statistical cost model vs. genetic algorithm (GA) and random search (Random) evaluated on NVIDIA TITAN X. Number of trials corresponds to number of evaluations on the real hardware. We also do three hardware evaluations per trial in Random x3 and GA x3. Both the GBT and TreeGRU-based models converge faster and get better results than the black box baselines. The GA can get similar performance in C3 (1x1 convolution), which is relatively easier than C1 and C2.
Figure 5: Rank vs. Regression objective function evaluated on NVIDIA TITAN X. Rank-based objective is either better or gives the same performance as regression-based objective in the presented results.
Figure 6: Impact of diversity-aware selection with different choices of the diversity weight, evaluated on NVIDIA TITAN X. Diversity-aware selection has no positive or negative impact on most of the evaluated workloads, but it gives a speedup on workload C12.
Figure 7: Impact of uncertainty-aware acquisition functions evaluated on NVIDIA TITAN X. Uncertainty-aware acquisition functions do not bring improvements in our evaluations.

6.1 Component Evaluations

We start by evaluating each of our design choices in the framework. The component evaluations are based on convolution workloads in ResNet-18 [13] for ImageNet classification (Table 1). Due to space limitations, we only show component evaluation results on representative workloads. The complete set of results is presented in the supplementary material. All methods compared in this subsection are initialized without any historical data. In Section 6.2, we evaluate the transfer learning setting.

Importance of Statistical Cost Model. Figure 4 compares the performance of the statistical cost models versus black box methods. Both the GBT and TreeGRU models outperform the black box methods and can find operators that are faster than those found with random search. This result is particularly interesting in comparison to prior results in hyper-parameter tuning [24], where model-based approaches are shown to only work as well as random search. Our statistical models benefit from domain-specific modeling and help the framework find better configurations.

Choice of Objective Function. We compare the two objective functions in Figure 5 on both types of models. We find that, in most cases, using a rank-based objective is slightly better than using a regression-based objective. This could be due to the rank-based objective sidestepping the potentially challenging task of modeling absolute cost values. We choose rank as our default objective.

Impact of Diversity-aware Exploration. We evaluate the impact of the diversity-aware exploration objective in Figure 6. We find that for most of the workloads we evaluate, diversity-based selection has no positive or negative impact. However, diversity-aware exploration brings an improvement for C12, which shows some potential usefulness of the approach. We adopt this strategy as it can sometimes be helpful, has no meaningful negative impact, and has negligible impact on the running time.

Impact of Uncertainty Estimator. Finally, we present a negative evaluation result. We evaluate the usefulness of uncertainty-aware acquisition functions in Figure 7. The uncertainty measurement is achieved by training five models using bootstrapping. We use the regression objective in this setting as in most Bayesian optimization methods. The results show that uncertainty estimation is not as important in our problem. This could possibly be due to the fact that our models are trained with more training samples than traditional hyper-parameter optimization problems.

Figure 8: Impact of transfer learning. Transfer-based models can quickly find better solutions.
Figure 9: Comparison of different representations in different transfer domain settings. The configuration-based model can be viewed as a typical Bayesian optimization approach (a batched version of SMAC [16]). We find that models using configuration space features work well within a domain but are less useful across domains. The flattened AST features work well when transferring across convolution workloads but cannot be used across operator types. The context relation representation allows effective transfer across operator types.

6.2 Transfer Learning Evaluations

The evaluations presented so far use no historical data. This subsection evaluates the improvements that can be obtained by transfer learning.

Improvements by Transfer. We first evaluate the general improvement that can be obtained by using transfer learning. We randomly pick samples from the data collected from C1–C6 and use them to form the source domain (30,000 samples in the TITAN X experiment; 20,000 samples in the ARM GPU and ARM A53 experiments). We then compare the performance of transfer-enabled methods against learning from scratch for the target workloads C7, C8, and C9. The results are shown in Figure 8. Overall, transfer learning yields a clear speedup over learning from scratch. Transfer learning is especially important for real deep learning compilation systems, which continuously optimize incoming workloads.

Invariant Representation and Domain Distance. As discussed in Section 4, different representations have different levels of invariance. We use three scenarios to study the relationship between domain distance and the invariance of feature representations: (a) running optimization only on one target domain; (b) C1–C6 → C7: C1–C6 as source domain and C7 as target domain (transfer within the same operator type); (c) C1–C6 → Matmul-1024: transfer across operator types, with C1–C6 as source domain and matrix multiplication as target domain. The results (Figure 9) show that we need more invariance when the domains are further apart. By using our transferable feature representation, our model can generalize across different input shapes and operator types. We also run a preliminary study on transfer from an ARM Mali GPU to an ARM Cortex-A53 (Figure 9d), showing that the proposed representation can enable transfer across devices. Developing an invariant feature representation is an important problem, and we expect more research in this direction.

6.3 End-to-End Evaluation

(a) Optimization curves in wall clock time. (We set cuDNN v7, TensorFlow Lite, and ARM Compute Library v18.03 as the baselines for TITAN X, ARM A53, and ARM Mali-T860, respectively.)
(b) NVIDIA TITAN X Single Op
(c) ARM Cortex-A53 Single Op
Figure 10: Single Operator Performance on TITAN X and ARM CPU. More ARM GPU (Mali) results can be found in the supplementary material. We also include a weight pre-transformed Winograd [23] for 3x3 conv2d (AutoTVM PT). AutoTVM generates programs that are competitive to hardware-specific libraries.
(a) NVIDIA TITAN X End2End
(b) ARM Cortex-A53 End2End
(c) ARM Mali-T860 End2End
Figure 11: End-to-end performance across back-ends. AutoTVM outperforms the baseline methods.

Thus far, our evaluation has focused on the specific design choices in our framework. We now move to the natural follow-up question: can learning to optimize tensor programs improve real-world deep learning systems on diverse hardware targets? We call our framework AutoTVM. We compare our approach with existing deep learning frameworks backed by highly engineered hardware-specific libraries on diverse hardware back-ends: a server class GPU, an embedded CPU, and a mobile GPU. AutoTVM performs optimization and code generation without any external operator library.

We first evaluate single operator optimization against baselines that use hardware-specific libraries. The baselines are: cuDNN v7 for the NVIDIA GPU, TFLite (commit 7558b085) for the Cortex-A53, and ARM Compute Library (v18.03) for the ARM Mali GPU. We also include TensorComprehensions (commit ef644ba) [40] as an additional baseline for the TITAN X. TensorComprehensions was run with its search over multiple random seeds, generations, and candidate populations for each operator, and padding was removed (TC does not yet support padding). The results are shown in Figure 10. AutoTVM generates high-performance tensor programs across different hardware back-ends.

We further embed our framework into an existing deep learning graph compiler stack and perform end-to-end workload evaluation. We evaluate real-world end-to-end deep learning inference workloads, including ResNet [13], MobileNet [15], LSTM Language Model [43], Deep Q Network (DQN) [26], and Deep Convolutional Generative Adversarial Networks (DCGAN) [30]. Our baselines are: MXNet (v1.1) and TensorFlow (v1.7) for the GPU, TFLite (commit 7558b085) for the Cortex-A53, and ARM Compute Library (v18.03) for the ARM Mali GPU. The results are summarized in Figure 11. AutoTVM brings end-to-end performance improvements ranging from 1.2× to 3.8×. These improvements are due to both the tensor program optimization and the operator fusion optimizations that are otherwise impossible if we use operator libraries with a limited set of operators.


7 Discussion and Conclusion

We presented a machine learning-based framework to automatically optimize the implementation of tensor operators in deep learning systems. Our statistical cost model allows effective model sharing between workloads and speeds up the optimization process via model transfer. The positive experimental results of this new approach show promise for deep learning deployment. Beyond our solution framework, the specific characteristics of this new problem make it an ideal test bed for innovations in related areas such as neural program modeling, Bayesian optimization, transfer learning and reinforcement learning. On the systems side, learning to optimize tensor programs can enable more fused operators, data layouts, and data types across diverse hardware back-ends. These improvements are crucial to improving deep learning systems. We will open source our experimental framework to encourage more studies in these directions.

Acknowledgement

Tianqi Chen is supported by the Google PhD Fellowship. This work was partially supported by the NSF under grant #1518703.

Appendix A Supplementary Materials

A.1 Additional Experimental Results

Figure 12: Single Operator Performance on Mali T860MP4
Figure 13: Effectiveness of cost model on all conv2d operators in ResNet-18.
Figure 14: Impact of objective function of cost model on all conv2d operators in ResNet-18.
Figure 15: Impact of diversity aware exploration on all conv2d operators in ResNet-18.
Figure 16: Impact of uncertainty aware acquisition function on all conv2d operators in ResNet-18.

A.2 Summary of Loop Features

Loop Context

We extract loop context for every loop variable. The loop context contains loop attributes and the access patterns for all touched inner buffers.

Feature Name Description
length The length of this loop
annotation One-hot annotation of this loop (can be vectorized, unrolled, parallelized, …)
top-down The product of the lengths of outer loops
bottom-up The product of the lengths of inner loops
access pattern (for every buffer) touch count The number of touched elements
reuse ratio Reuse ratio of this buffer (= bottom-up / touch count)
stride Coefficient of this loop variable in the index expression
Table 2: Listing of loop context feature

Relation Feature

First, we pick the longest chain from the AST. Then we extract loop context features for the loop variables in this chain. We compute two pairs of relations: touch count vs. reuse ratio, and touch count vs. top-down.
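To illustrate the loop context features in Table 2, here is a small sketch computing a subset of them (length, annotation, top-down, bottom-up) for a toy loop nest; the in-memory loop description and all values are made up for illustration, and the per-buffer access-pattern features are omitted.

```python
import numpy as np

loops = [  # outermost to innermost; lengths and annotations are hypothetical
    {"var": "i", "length": 128, "annotation": "parallel"},
    {"var": "j", "length": 64,  "annotation": "none"},
    {"var": "k", "length": 32,  "annotation": "vectorize"},
]

features = []
for idx, loop in enumerate(loops):
    top_down = int(np.prod([l["length"] for l in loops[:idx]]))      # product of outer lengths
    bottom_up = int(np.prod([l["length"] for l in loops[idx + 1:]]))  # product of inner lengths
    features.append({
        "length": loop["length"],
        "annotation": loop["annotation"],   # one-hot encoded in practice
        "top_down": top_down,
        "bottom_up": bottom_up,
    })
```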

A.3 Experiment Configuration

Hyperparameter Value Description
30 batch size of planning in GBT
50 batch size of planning in TreeGRU
128 dimension of loop variable embedding in TreeGRU
128 hidden size of GRU cell in TreeGRU
128 number of Markov chains in parallel simulated annealing
500 maximum steps of one simulated annealing run

Footnotes

  2. According to personal communication [39], TC is not meant to be used for compute-bound problems yet. But it is still a good reference baseline to be included in the comparison.
  3. DCGAN and LSTM are not reported on A53 and Mali because they are not yet supported by the baseline systems.

References

  1. Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016.
  2. Amit Agarwal, Eldar Akchurin, Chris Basoglu, Guoguo Chen, Scott Cyphers, Jasha Droppo, Adam Eversole, Brian Guenter, Mark Hillebrand, Ryan Hoens, Xuedong Huang, Zhiheng Huang, Vladimir Ivanov, Alexey Kamenev, Philipp Kranen, Oleksii Kuchaiev, Wolfgang Manousek, Avner May, Bhaskar Mitra, Olivier Nano, Gaizka Navarro, Alexey Orlov, Marko Padmilac, Hari Parthasarathi, Baolin Peng, Alexey Reznichenko, Frank Seide, Michael L. Seltzer, Malcolm Slaney, Andreas Stolcke, Yongqiang Wang, Huaming Wang, Kaisheng Yao, Dong Yu, Yu Zhang, and Geoffrey Zweig. An introduction to computational networks and the computational network toolkit. Technical Report MSR-TR-2014-112, August 2014.
  3. Miltiadis Allamanis, Marc Brockschmidt, and Mahmoud Khademi. Learning to represent programs with graphs. In International Conference on Learning Representations, 2018.
  4. Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
  5. Uday Bondhugula, Albert Hartono, J. Ramanujam, and P. Sadayappan. A practical automatic polyhedral parallelizer and locality optimizer. In Proceedings of the 29th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’08, pages 101–113. ACM, 2008.
  6. Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In Proceedings of the 22Nd International Conference on Machine Learning, ICML ’05, pages 89–96, New York, NY, USA, 2005. ACM.
  7. Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 785–794, New York, NY, USA, 2016. ACM.
  8. Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems (LearningSys’15), 2015.
  9. Xinyun Chen, Chang Liu, and Dawn Song. Tree-to-tree neural networks for program translation. CoRR, abs/1802.03691, 2018.
  10. J.H. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5):1189–1232, 2001.
  11. M. Frigo and S. G. Johnson. Fftw: an adaptive software architecture for the fft. In Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on, volume 3, pages 1381–1384 vol.3, May 1998.
  12. Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Karro, and D. Sculley. Google vizier: A service for black-box optimization. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, pages 1487–1495. ACM, 2017.
  13. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
  14. Troels Henriksen, Niels G. W. Serup, Martin Elsman, Fritz Henglein, and Cosmin E. Oancea. Futhark: Purely functional gpu-programming with nested parallelism and in-place array updates. In Proceedings of the 38th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2017, pages 556–571, New York, NY, USA, 2017. ACM.
  15. Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017.
  16. Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Proceedings of the 5th International Conference on Learning and Intelligent Optimization, LION’05, pages 507–523, Berlin, Heidelberg, 2011. Springer-Verlag.
  17. Frank Hutter, Lin Xu, Holger Hoos, and Kevin Leyton-Brown. Algorithm runtime prediction: Methods and evaluation (extended abstract). In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 4197–4201, 2015.
  18. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.
  19. Fredrik Kjolstad, Shoaib Kamil, Stephen Chou, David Lugato, and Saman Amarasinghe. The tensor algebra compiler. Proc. ACM Program. Lang., 1(OOPSLA):77:1–77:29, October 2017.
  20. Tim Kraska, Alex Beutel, Ed H. Chi, Jeffrey Dean, and Neoklis Polyzotis. The case for learned index structures. CoRR, abs/1712.01208, 2017.
  21. Andreas Krause and Daniel Golovin. Submodular function maximization. In Tractability: Practical Approaches to Hard Problems. Cambridge University Press, February 2014.
  22. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097–1105. 2012.
  23. Andrew Lavin and Scott Gray. Fast algorithms for convolutional neural networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 4013–4021, 2016.
  24. Lisha Li, Kevin G. Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Efficient hyperparameter optimization and infinitely many armed bandits. CoRR, abs/1603.06560, 2016.
  25. Azalia Mirhoseini, Hieu Pham, Quoc V. Le, Benoit Steiner, Rasmus Larsen, Yuefeng Zhou, Naveen Kumar, Mohammad Norouzi, Samy Bengio, and Jeff Dean. Device placement optimization with reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 2430–2439, 2017.
  26. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
  27. Ravi Teja Mullapudi, Andrew Adams, Dillon Sharlet, Jonathan Ragan-Kelley, and Kayvon Fatahalian. Automatically scheduling halide image processing pipelines. ACM Trans. Graph., 35(4):83:1–83:11, July 2016.
  28. George L Nemhauser, Laurence A Wolsey, and Marshall L Fisher. An analysis of approximations for maximizing submodular set functions—i. Mathematical Programming, 14(1):265–294, 1978.
  29. Shoumik Palkar, James J. Thomas, Deepak Narayanan, Anil Shanbhag, Rahul Palamuttam, Holger Pirk, Malte Schwarzkopf, Saman P. Amarasinghe, Samuel Madden, and Matei Zaharia. Weld: Rethinking the interface between data-intensive applications. CoRR, abs/1709.06416, 2017.
  30. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  31. Jonathan Ragan-Kelley, Connelly Barnes, Andrew Adams, Sylvain Paris, Frédo Durand, and Saman Amarasinghe. Halide: A language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. In Proceedings of the 34th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’13, pages 519–530, New York, NY, USA, 2013. ACM.
  32. B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A review of bayesian optimization. Proceedings of the IEEE, 104(1):148–175, Jan 2016.
  33. Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical bayesian optimization of machine learning algorithms. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 2, NIPS’12, pages 2951–2959, USA, 2012.
  34. Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat Prabhat, and Ryan P. Adams. Scalable bayesian optimization using deep neural networks. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, pages 2171–2180, 2015.
  35. Michel Steuwer, Toomas Remmelg, and Christophe Dubach. Lift: A functional data-parallel ir for high-performance gpu code generation. In Proceedings of the 2017 International Symposium on Code Generation and Optimization, CGO ’17, pages 74–85, Piscataway, NJ, USA, 2017. IEEE Press.
  36. Arvind K. Sujeeth, HyoukJoong Lee, Kevin J. Brown, Hassan Chafi, Michael Wu, Anand R. Atreya, Kunle Olukotun, Tiark Rompf, and Martin Odersky. Optiml: An implicitly parallel domain-specific language for machine learning. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML’11, pages 609–616, USA, 2011.
  37. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, pages 3104–3112, Cambridge, MA, USA, 2014. MIT Press.
  38. Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.
  39. Nicolas Vasilache. personal communication.
  40. Nicolas Vasilache, Oleksandr Zinenko, Theodoros Theodoridis, Priya Goyal, Zachary DeVito, William S. Moses, Sven Verdoolaege, Andrew Adams, and Albert Cohen. Tensor comprehensions: Framework-agnostic high-performance machine learning abstractions. CoRR, abs/1802.04730, 2018.
  41. Sven Verdoolaege, Juan Carlos Juega, Albert Cohen, José Ignacio Gómez, Christian Tenllado, and Francky Catthoor. Polyhedral parallel code generation for cuda. ACM Trans. Archit. Code Optim., 9(4):54:1–54:23, January 2013.
  42. R. Clint Whaley and Jack J. Dongarra. Automatically tuned linear algebra software. In Proceedings of the 1998 ACM/IEEE Conference on Supercomputing, SC ’98, pages 1–27, Washington, DC, USA, 1998. IEEE Computer Society.
  43. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.