Learning to Optimize Tensor Programs
Abstract
We introduce a learning-based framework to optimize tensor programs for deep learning workloads. Efficient implementations of tensor operators, such as matrix multiplication and high-dimensional convolution, are key enablers of effective deep learning systems. However, existing systems rely on manually optimized libraries, such as cuDNN, that support only a narrow range of server-class GPUs. The reliance on hardware-specific operator libraries limits the applicability of high-level graph optimizations and incurs significant engineering costs when deploying to new hardware targets. We use learning to remove this engineering burden. We learn domain-specific statistical cost models to guide the search for tensor operator implementations over billions of possible program variants. We further accelerate the search by effective model transfer across workloads. Experimental results show that our framework delivers performance competitive with state-of-the-art hand-tuned libraries for low-power CPUs, mobile GPUs, and server-class GPUs.
1 Introduction
Deep learning has become ubiquitous in our daily lives. Deep learning models can now recognize images [22], understand natural language [37], play games [26], and automate system decisions (e.g., device placement [25] and indexing [20]). Tensor operators, such as matrix multiplication and high-dimensional convolution, are basic building blocks of deep learning models. Scalable learning systems [1, 4, 8, 2] rely on manually optimized, high-performance tensor operation libraries, such as cuDNN. These libraries are optimized for a narrow range of hardware. To optimize a tensor operator, programmers must choose from many implementations that are logically equivalent but differ dramatically in performance due to differences in threading, memory reuse, pipelining, and other hardware factors. Supporting diverse hardware backends therefore requires tremendous engineering effort. Even on currently supported hardware, the development of deep learning frameworks and models is fundamentally limited by the set of optimized operators in libraries, preventing optimizations such as operator fusion that can produce unsupported operators.
We ask the following question: can we use learning to alleviate this engineering burden and automatically optimize tensor operator programs for a given hardware platform? This paper provides an affirmative answer. We build statistical cost models that predict the run time of a given low-level program. These cost models guide the exploration of the space of possible programs. Our cost models use transferable representations that generalize across different workloads to accelerate search. We make the following contributions:

We provide a formalization of the problem of learning to optimize tensor programs and summarize its key characteristics.

We propose a machine learning-based framework to solve this new problem.

We further accelerate the optimization using transfer learning.

We provide detailed empirical analysis of component design choices in this framework.
Experimental results on real-world deep learning workloads show that our framework brings end-to-end performance improvements ranging from 1.2× to 3.8× over existing frameworks.
2 Problem Formalization
We start this section by walking through a motivating example in Figure 1. To enable automatic code generation, we specify tensor operators using index expressions (e.g., $C_{ij} = \sum_k A_{ik} B_{kj}$ for matrix multiplication). Let $\mathcal{E}$ denote the space of index expressions. An index expression $e \in \mathcal{E}$ leaves many low-level implementation details, such as loop order, memory scope, and threading, unspecified. As a result, we can generate multiple variants of low-level code that are logically equivalent to the expression for a given $e$. We use $\mathcal{S}_e$ to denote the space of possible transformations (schedules) from $e$ to low-level code. For an $s \in \mathcal{S}_e$, let $x = g(e, s)$ be the generated low-level code. Here, $g$ represents a compiler framework that generates low-level code from $e$ and $s$. We are interested in minimizing $f(x)$, the real run time cost on the hardware. Importantly, we do not know an analytical expression for $f(x)$, but we can query it by running experiments on the hardware. For a given tuple $(\mathcal{E}, \mathcal{S}_e, g, f)$, our problem can be formalized as the following objective:
$\min_{s \in \mathcal{S}_e} f(g(e, s))$   (1)
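As a concrete (if naive) baseline, objective (1) can be attacked by sampling schedules at random and keeping the best measured one. The sketch below is illustrative only; the names `optimize`, `schedule_space`, `g`, and `f` are assumptions, not the paper's API:

```python
import random

def optimize(e, schedule_space, g, f, n_trials=1000):
    """Random-search baseline for objective (1): min over s of f(g(e, s)).

    `e` is an index expression, `schedule_space` a list of candidate
    schedules, `g` a code generator, and `f` a hardware cost oracle that
    must be queried by actually running the generated program.
    """
    best_s, best_cost = None, float("inf")
    for s in random.sample(schedule_space, min(n_trials, len(schedule_space))):
        cost = f(g(e, s))  # run the generated low-level program on hardware
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s, best_cost
```

The rest of the paper replaces the random sampling with a model-guided search, since enumerating or sampling a space with billions of variants is wasteful when each query costs seconds of real hardware time.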
This problem formalization is similar to that of traditional hyperparameter optimization problems [33, 32, 34, 12, 16, 24], but with several key differentiating characteristics:
Relatively Low Experiment Cost. Traditionally, hyperparameter optimization problems incur a high cost to query the objective $f$: running experiments can take hours or days. In contrast, compiling and running a tensor program takes only a few seconds. This property requires training and inference of the model to be fast (otherwise there would be no benefit over profiling execution on real hardware). It also means we can collect more training data during optimization.
Domain-Specific Problem Structure. Most existing hyperparameter optimization algorithms treat the problem as a black box. Since we are optimizing programs, we can leverage their rich structure to build effective models.
Large Quantity of Similar Operators. An endtoend deep learning system needs to optimize tensor operator programs for different input sizes, shapes, and data layout configurations. These tasks are similar in nature and can offer opportunities for transfer learning.
We describe two key prerequisites for automatic code generation that is competitive with hand-optimized code. (1) We need to define an exhaustive search space $\mathcal{S}_e$ covering all hardware-aware optimizations found in hand-tuned libraries. (2) We need to search efficiently for an optimal schedule in $\mathcal{S}_e$.
There are many domain-specific languages (DSLs) for code generation [31, 35, 14, 36, 19, 29], each with a different $\mathcal{E}$, $\mathcal{S}_e$, and $g$. Polyhedral models [5, 41, 40] are a popular choice for $\mathcal{S}_e$; they model the loop domains as integer linear constraints. An alternative approach, originating from Halide [31], defines a schedule space using a set of transformation primitives. Improving $\mathcal{S}_e$ is an important research direction that is beyond the scope of this paper. We pick a rich $\mathcal{S}_e$ and focus on schedule optimization in the rest of the paper.
We use scheduling primitives from an existing code generation framework; our $\mathcal{S}_e$ includes loop ordering, shared memory caching for GPUs, and annotations such as unrolling and vectorization. The search space size can be on the order of billions of configurations for a single GPU operator. As our experiments will show, this choice of $\mathcal{S}_e$ already contains programs competitive with hand-optimized libraries.
3 Learning to Optimize Tensor Programs
We propose a machine learning-based framework to solve this problem. Figure 2 gives an overview of the modules in the framework. We build a statistical cost model $\hat{f}(x)$ to estimate the cost of each low-level program $x$. An exploration module proposes new schedule configurations to run on the hardware. The run time statistics are collected in a database $\mathcal{D}$, which in turn is used to update $\hat{f}$. We discuss module-specific design choices in the following subsections.
3.1 Statistical Cost Model
The first statistical model we support is based on gradient boosted trees (GBT) [10]. We extract domain-specific features from a given low-level AST $x$. The features include loop structure information, such as memory access counts and data reuse ratios, in addition to generic annotations (e.g., vectorization, unrolling, thread binding). We use XGBoost [7], which has proven to be a strong feature-based model in past problems. Our second model is a TreeGRU [38], which recursively encodes a low-level AST into an embedding vector. We map the embedding vector to a final predicted cost using a linear layer.
GBT and TreeGRU represent two major classes of machine learning approaches to this problem. Both are valuable, and they exhibit different characteristics. GBT relies on precise feature extraction and makes fast predictions using CPUs. The deep learning-based approach is extensible and requires no feature engineering, but brings challenges in training and prediction speed. We apply batching to the TreeGRU model and use a GPU to make training and prediction fast enough to be usable in our framework.
3.2 Training Objective Function
There are multiple objective functions we could use to train a statistical cost model for a given collection of data $\mathcal{D} = \{(x_i, c_i)\}$, where $c_i = f(x_i)$ is the measured cost. A common choice is the regression loss $\sum_i (\hat{f}(x_i) - c_i)^2$, which encourages the model to predict cost accurately. On the other hand, in the selection process, we only care about the relative order of program run times rather than their absolute values, so we can instead use the following rank loss function [6]:
$\sum_{i,j} \log\left(1 + e^{-\operatorname{sign}(c_i - c_j)\,(\hat{f}(x_i) - \hat{f}(x_j))}\right)$   (2)
The prediction $\hat{f}(x)$ can then be used to select the top-performing implementations.
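To make the rank objective concrete, here is a minimal pure-Python sketch of a pairwise logistic rank loss in the spirit of Equation (2); the function name and the unweighted enumeration over all ordered pairs are assumptions of this sketch:

```python
import math

def rank_loss(costs, preds):
    """Pairwise rank loss: for every ordered pair (i, j) where program i
    measured faster than program j, apply a logistic penalty when the model
    scores disagree with that ordering. `costs` are measured run times,
    `preds` are model scores (lower = predicted faster).
    """
    loss = 0.0
    for i in range(len(costs)):
        for j in range(len(costs)):
            if costs[i] < costs[j]:  # i should be ranked better than j
                # logistic loss on the score margin preds[j] - preds[i]
                loss += math.log(1.0 + math.exp(-(preds[j] - preds[i])))
    return loss
```

Note the loss depends only on score differences, so the model is free to get absolute run times wrong as long as the ordering is right, which is all the selection step needs.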
3.3 Exploration Module
The exploration module controls the search loop, which is summarized in Algorithm 1. At each iteration, the exploration module picks a batch of candidate programs based on $\hat{f}$ and queries $f$ on real hardware. The size of the search space makes it infeasible to simply enumerate the entire space $\mathcal{S}_e$ and pick the top-$b$ candidates. Instead, we use simulated annealing [18] with $\hat{f}$ as the energy function. Specifically, we run a batch of parallel Markov chains to improve the prediction throughput of the statistical cost model. We select the top-performing batch of candidates to run on real hardware. The collected performance data is used to update $\hat{f}$. We make the states of the Markov chains persistent across updates of $\hat{f}$.
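A minimal sketch of one annealing chain, with the learned cost model plugged in as the energy function. The `neighbors` mutation function and the fixed temperature are simplifying assumptions; the real loop runs batched parallel chains and keeps their states across model updates:

```python
import math
import random

def simulated_annealing(init, neighbors, cost_model, steps=500, temp=1.0):
    """One simulated-annealing chain over schedule configurations.

    `neighbors(s)` proposes a random mutation of schedule `s`;
    `cost_model(s)` is the learned energy (predicted cost, lower = better).
    Returns the best state seen and its predicted cost.
    """
    s = init
    energy = cost_model(s)
    best_s, best_e = s, energy
    for _ in range(steps):
        cand = neighbors(s)
        e_cand = cost_model(cand)
        # always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-(e_cand - energy) / temp)
        if e_cand < energy or random.random() < math.exp((energy - e_cand) / temp):
            s, energy = cand, e_cand
        if energy < best_e:
            best_s, best_e = s, energy
    return best_s, best_e
```

Because only the cheap model $\hat{f}$ is evaluated inside the loop, thousands of annealing steps cost far less than a single hardware measurement.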
Diversity-aware Exploration. We consider both quality and diversity when selecting candidates for hardware evaluation. Assume that the schedule configuration can be decomposed into components $s = (s_1, s_2, \ldots, s_m)$. We maximize the following objective to select the candidate set $S$ from the top-performing candidates:
$L(S) = -\sum_{s \in S} \hat{f}(g(e, s)) + \alpha \sum_{j=1}^{m} \left|\{\, s_j \mid s \in S \,\}\right|$, subject to $|S| = b$   (3)
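Because the coverage term in this objective is submodular, it can be maximized greedily with a constant-factor guarantee [28, 21]. The sketch below uses hypothetical names; `alpha` weights diversity against predicted quality, and each candidate is a tuple of schedule components:

```python
def select_diverse(candidates, scores, b, alpha=0.1):
    """Greedily pick b candidates by marginal gain of objective (3).

    `candidates[i]` is a tuple of schedule components, `scores[i]` the
    predicted cost (lower = better). The diversity bonus counts component
    values not yet covered by the selected set.
    """
    selected, covered = [], set()
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < b:
        def gain(i):
            # component values this candidate would newly cover
            new = {(m, v) for m, v in enumerate(candidates[i])} - covered
            return -scores[i] + alpha * len(new)
        best = max(remaining, key=gain)
        selected.append(best)
        covered |= {(m, v) for m, v in enumerate(candidates[best])}
        remaining.remove(best)
    return selected
```

With `alpha=0` this degenerates to picking the b best-scored candidates; a larger `alpha` trades a slightly worse predicted cost for covering more distinct component values.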
Uncertainty Estimator. Bayesian optimization methods [33, 32, 34, 16] use acquisition functions other than the predictive mean when an uncertainty estimate of $\hat{f}$ is available. Typical choices include expected improvement (EI) and the upper confidence bound (UCB). We can use bootstrapping to obtain an uncertainty estimate of the model and validate the effectiveness of these methods. As we will see in the experiments, considering uncertainty does not improve the search in our problem. However, the choice of acquisition function remains an interesting direction worthy of further exploration.
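A sketch of how such a bootstrap-based uncertainty estimate can be formed: fit an ensemble on resamples of the data and use the spread of its predictions in a confidence-bound acquisition (here oriented for a cost to be minimized). The `fit` callable, mapping a dataset to a predictor, is an assumption of this sketch:

```python
import random

def bootstrap_models(train, fit, n_models=5):
    """Fit n_models predictors, each on a same-size resample (with
    replacement) of the training data; their disagreement on a new point
    serves as an uncertainty estimate."""
    models = []
    for _ in range(n_models):
        resample = [random.choice(train) for _ in train]
        models.append(fit(resample))
    return models

def ucb_score(models, x, kappa=1.0):
    """Confidence-bound acquisition for a cost model: ensemble mean minus
    kappa standard deviations (lower score = more promising to try)."""
    preds = [m(x) for m in models]
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean - kappa * var ** 0.5
```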
4 Accelerating Optimization via Transfer Learning
Thus far, we have focused on learning to optimize a single tensor operator workload. In practice, we need to optimize many tensor operators with different input shapes and data types. In a real-world setting, the system collects historical data $\mathcal{D}'$ from previously seen workloads. We can use transfer learning to make effective use of $\mathcal{D}'$ to speed up the optimization.
The key to transfer learning is to create a transferable representation that is invariant between the source and target domains. We can then share the cost model across domains using the common representation. Different choices of representation have different levels of invariance.
A common practice in Bayesian optimization methods is to use the configuration $s$ directly as the model input. However, the search space specification can change for a different workload, or when the user specifies a new search space for the same workload. The configuration representation is therefore not invariant to changes in the search space.
On the other hand, the low-level loop AST $x$ (Figure 3a) is a shared representation of programs that is invariant to the search space. To leverage this invariance, our cost model takes the low-level loop AST $x$ as input. We still need to encode $x$ into a vector space to perform prediction, and the specific encoding of $x$ can also result in different levels of invariance.
Context Relation Features for GBT. We define context features at each loop level to represent loop characteristics. A simple representation of context features is as a vector (e.g., in Figure 3b where each loop has a row of features). Context features are informative but, crucially, cannot generalize across different loop nest patterns; we define context relation features to overcome this issue.
To build context relation features, we instead treat the context vectors as a bag of points and extract features that model relations between feature axes. Formally, let $X$ be the context feature matrix, such that $X_{ik}$ corresponds to the $k$th feature of loop $i$. We define a set of spaced constant thresholds $\beta_1, \ldots, \beta_n$. The relation feature between features $k$ and $l$ is defined as $R^{(k,l)}_{pq} = \mathbb{1}\left(\exists i: X_{ik} > \beta_p \wedge X_{il} > \beta_q\right)$. This encoding summarizes useful relations, such as loop count vs. touched memory size (related to the memory hierarchy of the access), which affect run time cost.
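A pure-Python sketch of one plausible reading of this construction (the exact thresholding scheme may differ from the paper's): for every pair of thresholds, record whether any loop level exceeds both thresholds in the two chosen features:

```python
def relation_features(context, k, l, thresholds):
    """Relation features between feature axes k and l.

    `context` holds one row of context features per loop level. Entry
    rel[p][q] is 1 iff some loop level has feature k above thresholds[p]
    and feature l above thresholds[q], so the result depends only on the
    bag of per-loop points, not on the loop nest pattern.
    """
    n = len(thresholds)
    rel = [[0] * n for _ in range(n)]
    for row in context:  # one row of features per loop level
        for p in range(n):
            for q in range(n):
                if row[k] > thresholds[p] and row[l] > thresholds[q]:
                    rel[p][q] = 1
    return rel
```

Because the encoding discards loop ordering, it yields a fixed-size feature vector regardless of how many loops the nest has, which is exactly what lets the GBT model generalize across loop nest patterns.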
Context Encoded TreeGRU. An invariant representation also exists for the neural model. Figure 3c shows a way to encode the program by learning an embedding vector for each identifier and summarizing the AST with a TreeGRU. This model works well for a single workload. However, the set of loop variables changes across domains, and we have no embeddings for new loop variables. We instead encode each loop variable using the context vector extracted for the GBT model and use it to summarize the AST (Figure 3d). We scatter each loop-level embedding $h_i$ into $n$ vectors via $\operatorname{softmax}(W z_i) \otimes h_i$, where $z_i$ is the loop's context vector. Conceptually, the softmax classifies the loop level into one of $n$ memory hierarchy slots. We then sum the scattered vectors of all loop levels together to obtain the final embedding.
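A minimal sketch of the scatter-and-sum summarization (the linear slot-scoring function `score_fn` and the slot count are assumptions of this sketch): each loop level's embedding is spread over the slots according to a softmax over its context vector, and the slots are summed over all levels, so the result is independent of the identity of the loop variables:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def scatter_sum(contexts, embeddings, score_fn, n_slots):
    """Summarize per-loop embeddings into n_slots slot vectors.

    `score_fn(ctx)` returns n_slots scores for a loop's context vector;
    the softmax of those scores soft-assigns the loop's embedding to the
    slots, and slots are accumulated across all loop levels.
    """
    dim = len(embeddings[0])
    out = [[0.0] * dim for _ in range(n_slots)]
    for ctx, emb in zip(contexts, embeddings):
        probs = softmax(score_fn(ctx))  # soft slot assignment for this loop
        for k in range(n_slots):
            for d in range(dim):
                out[k][d] += probs[k] * emb[d]
    return out
```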
Once we have a transferable representation, we can use a simple transfer learning method by combining a global model and an indomain local model as follows:
$\hat{f}(x) = \hat{f}^{(global)}(x) + \hat{f}^{(local)}(x)$   (4)
The global model $\hat{f}^{(global)}$ is trained on $\mathcal{D}'$ using the invariant representation and helps make effective initial predictions before we have enough data to fit the local model $\hat{f}^{(local)}$.
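A toy sketch of the combination in (4): the global model is frozen, and a local model absorbs the in-domain residual. The constant-bias local model below is an illustrative stand-in for the in-domain GBT/TreeGRU, not the paper's implementation:

```python
class TransferModel:
    """Prediction = frozen global model + local residual model, as in (4)."""

    def __init__(self, global_model):
        self.global_model = global_model  # trained on historical data D'
        self.local_bias = 0.0             # trivial local model: mean residual

    def fit_local(self, xs, ys):
        """Fit the local model on the residuals of the global prediction
        over in-domain measurements (xs, ys)."""
        residuals = [y - self.global_model(x) for x, y in zip(xs, ys)]
        if residuals:
            self.local_bias = sum(residuals) / len(residuals)

    def predict(self, x):
        return self.global_model(x) + self.local_bias
```

Before any in-domain data arrives, `predict` falls back to the global model alone, which is what gives transfer its head start over learning from scratch.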
Table 1: Configurations of the convolution workloads in ResNet-18 (H, W: height and width; IC, OC: input and output channels; K, S: kernel size and stride).

Workload Name  C1  C2  C3  C4  C5  C6  C7  C8  C9  C10  C11  C12 

H, W  224,224  56,56  56,56  56,56  56,56  28,28  28,28  28,28  14,14  14,14  14,14  7,7 
IC, OC  3,64  64,64  64,64  64,128  64,128  128,128  128,256  128,256  256,256  256,512  256,512  512,512 
K, S  7,2  3,1  1,1  3,2  1,2  3,1  3,2  1,2  3,1  3,2  1,2  3,1 
5 Relation to Prior Work
Black-box optimization (autotuning) is used in high-performance computing libraries such as ATLAS [42] and FFTW [11]. Alternatively, a hardware-dependent cost model can be built to guide the search [27, 5]. Polyhedral methods [5, 41] use integer linear programming for cost optimization. Tensor Comprehensions [40] combines both approaches, using black-box optimization to choose thread block parameters and polyhedral optimization to generate internal loops. Black-box approaches can require many experiment trials to explore a huge $\mathcal{S}_e$. On the other hand, predefined cost models may not be accurate enough to capture the complexity of modern hardware and must be manually redefined for each new hardware target.
Statistical cost models have been previously applied to optimize SAT solvers [16, 17]. We apply this idea to our problem and build a domainspecific cost model that enables effective transfer among workloads. There is a recent trend of using deep neural networks to do program analysis [3, 9]. We hope our new problem setting and experiment environment can serve as a testbed for new research opportunities in related directions.
6 Experiments
6.1 Component Evaluations
We start by evaluating each of our design choices in the framework. The component evaluations are based on convolution workloads in ResNet-18 [13] for ImageNet classification (Table 1). Due to space limitations, we show component evaluation results only on representative workloads; the complete set of results is presented in the supplementary material. All methods compared in this subsection are initialized without any historical data. In Section 6.2, we evaluate the transfer learning setting.
Importance of Statistical Cost Model. Figure 4 compares the performance of the statistical cost models against black-box methods. Both the GBT and TreeGRU models outperform the black-box methods and find operators that are faster than those found by random search. This result is particularly interesting in comparison to prior results in hyperparameter tuning [24], where model-based approaches were shown to work only about as well as random search. Our statistical models benefit from domain-specific modeling and help the framework find better configurations.
Choice of Objective Function. We compare the two objective functions in Figure 5 on both types of models. We find that, in most cases, using a rank-based objective is slightly better than using a regression-based objective. This may be because the rank-based objective sidesteps the potentially difficult task of modeling absolute cost values. We choose rank as our default objective.
Impact of Diversity-aware Exploration. We evaluate the impact of the diversity-aware exploration objective in Figure 6. For most of the workloads we evaluate, diversity-based selection has neither a positive nor a negative impact. However, diversity-aware exploration brings an improvement for C12, which suggests some potential usefulness of the approach. We adopt this strategy because it can sometimes help, has no meaningful negative impact, and has negligible impact on running time.
Impact of Uncertainty Estimator. Finally, we present a negative result. We evaluate the usefulness of uncertainty-aware acquisition functions in Figure 7. The uncertainty measurement is obtained by training five models via bootstrapping. We use the regression objective in this setting, as in most Bayesian optimization methods. The results show that uncertainty estimation is not as important in our problem. This may be because our models are trained with many more samples than in traditional hyperparameter optimization problems.
6.2 Transfer Learning Evaluations
The evaluations presented so far use no historical data. This subsection evaluates the improvements that can be obtained by transfer learning.
Improvements by Transfer. We first evaluate the general improvement that can be obtained through transfer. We randomly pick samples collected from C1–C6 to form the source domain $\mathcal{D}'$ (30,000 samples in the TITAN X experiment; 20,000 samples in the ARM GPU and ARM A53 experiments). We then compare transfer-enabled methods against learning from scratch on the target workloads C7, C8, and C9. The results are shown in Figure 8. Overall, transfer learning yields a clear speedup over learning from scratch. Transfer learning is especially important for real deep learning compilation systems, which continuously optimize incoming workloads.
Invariant Representation and Domain Distance. As discussed in Section 4, different representations have different levels of invariance. We use three scenarios to study the relationship between domain distance and the invariance of feature representations: (a) running optimization only on one target domain; (b) C1–C6 → C7: C1–C6 as the source domain and C7 as the target (transfer within the same operator type); (c) C1–C6 → Matmul-1024: transfer across operator types, with C1–C6 as the source domain and matrix multiplication as the target. The results (Figure 9) show that more invariance is needed as the domains grow further apart. By using our transferable feature representation, the model can generalize across different input shapes and operator types. We also ran a preliminary study of transfer from an ARM Mali GPU to an ARM Cortex-A53 (Figure 9d), showing that the proposed representation can enable transfer across devices. Developing invariant feature representations is an important problem, and we expect more research in this direction.
6.3 EndtoEnd Evaluation
Thus far, our evaluation has focused on the specific design choices in our framework. We now move to the natural follow-up question: can learning to optimize tensor programs improve real-world deep learning systems on diverse hardware targets? We call our framework AutoTVM. We compare our approach with existing deep learning frameworks backed by highly engineered, hardware-specific libraries on diverse hardware backends: a server-class GPU, an embedded CPU, and a mobile GPU. AutoTVM performs optimization and code generation without any external operator library.
We first evaluate single-operator optimization against baselines that use hardware-specific libraries.
The baselines are cuDNN v7 for the NVIDIA GPU, TFLite (commit 7558b085) for the Cortex-A53, and the ARM Compute Library (v18.03) for the ARM Mali GPU. We also include Tensor Comprehensions (commit ef644ba) [40] as an additional baseline on the TITAN X.
We further embed our framework into an existing deep learning graph compiler stack and perform end-to-end workload evaluation. We evaluate real-world end-to-end deep learning inference workloads, including ResNet [13], MobileNet [15], an LSTM language model [43], Deep Q Network (DQN) [26], and Deep Convolutional Generative Adversarial Networks (DCGAN) [30]. Our baselines are MXNet (v1.1) and TensorFlow (v1.7) for the GPU, TFLite (commit 7558b085) for the Cortex-A53, and the ARM Compute Library (v18.03) for the ARM Mali GPU. The results are summarized in Figure 11. AutoTVM brings end-to-end performance improvements ranging from 1.2× to 3.8×. These improvements are due both to the tensor program optimization and to operator fusion optimizations that are otherwise impossible with operator libraries that provide only a limited set of operators.
7 Discussion and Conclusion
We presented a machine learning-based framework to automatically optimize the implementation of tensor operators in deep learning systems. Our statistical cost model allows effective model sharing between workloads and speeds up the optimization process via model transfer. The positive experimental results of this new approach show promise for deep learning deployment. Beyond our solution framework, the specific characteristics of this new problem make it an ideal testbed for innovations in related areas, such as neural program modeling, Bayesian optimization, transfer learning, and reinforcement learning. On the systems side, learning to optimize tensor programs enables more fused operators, data layouts, and data types across diverse hardware backends. These improvements are crucial to improving deep learning systems. We will open source our experimental framework to encourage more studies in these directions.
Acknowledgement
Tianqi Chen is supported by the Google PhD Fellowship. This work was partially supported by the NSF under grant #1518703.
Appendix A Supplementary Materials
A.1 Additional Experimental Results
A.2 Summary of Loop Features
Loop Context
We extract loop context for every loop variable. The loop context contains loop attributes and the access patterns for all touched inner buffers.
Feature Name  Description  

length  The length of this loop  
annotation  Onehot annotation of this loop (can be vectorize, unrolled, paralleled, …)  
top-down  The product of the lengths of outer loops  
bottom-up  The product of the lengths of inner loops  
access pattern (for every buffer)  touch count  The number of touched elements 
reuse ratio  Reuse ratio of this buffer (= bottom-up / touch count)  
stride  Coefficient of this loop variable in the index expression 
Relation Feature
First, we pick the longest chain from the AST. Then we extract loop context features for the loop variables in this chain. We compute two pairs of relations: touch count vs. reuse ratio, and touch count vs. top-down.
A.3 Experiment Configuration
Hyperparameter  Value  Description 

30  batch size of planning in GBT  
50  batch size of planning in TreeGRU  
128  dimension of loop variable embedding in TreeGRU  
128  hidden size of GRU cell in TreeGRU  
128  number of Markov chains in parallel simulated annealing  
500  maximum steps of one simulated annealing run 
Footnotes
 According to personal communication [39], TC is not yet meant to be used for compute-bound problems, but it remains a good reference baseline to include in the comparison.
 DCGAN and LSTM are not reported on the A53 and Mali because they are not yet supported by the baseline systems.
References
 Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: A system for largescale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016.
 Amit Agarwal, Eldar Akchurin, Chris Basoglu, Guoguo Chen, Scott Cyphers, Jasha Droppo, Adam Eversole, Brian Guenter, Mark Hillebrand, Ryan Hoens, Xuedong Huang, Zhiheng Huang, Vladimir Ivanov, Alexey Kamenev, Philipp Kranen, Oleksii Kuchaiev, Wolfgang Manousek, Avner May, Bhaskar Mitra, Olivier Nano, Gaizka Navarro, Alexey Orlov, Marko Padmilac, Hari Parthasarathi, Baolin Peng, Alexey Reznichenko, Frank Seide, Michael L. Seltzer, Malcolm Slaney, Andreas Stolcke, Yongqiang Wang, Huaming Wang, Kaisheng Yao, Dong Yu, Yu Zhang, and Geoffrey Zweig. An introduction to computational networks and the computational network toolkit. Technical Report MSRTR2014112, August 2014.
 Miltiadis Allamanis, Marc Brockschmidt, and Mahmoud Khademi. Learning to represent programs with graphs. In International Conference on Learning Representations, 2018.
 Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
 Uday Bondhugula, Albert Hartono, J. Ramanujam, and P. Sadayappan. A practical automatic polyhedral parallelizer and locality optimizer. In Proceedings of the 29th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’08, pages 101–113. ACM, 2008.
 Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In Proceedings of the 22Nd International Conference on Machine Learning, ICML ’05, pages 89–96, New York, NY, USA, 2005. ACM.
 Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 785–794, New York, NY, USA, 2016. ACM.
 Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems (LearningSys’15), 2015.
 Xinyun Chen, Chang Liu, and Dawn Song. Tree-to-tree neural networks for program translation. CoRR, abs/1802.03691, 2018.
 J.H. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5):1189–1232, 2001.
 M. Frigo and S. G. Johnson. FFTW: an adaptive software architecture for the FFT. In Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on, volume 3, pages 1381–1384 vol.3, May 1998.
 Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Karro, and D. Sculley. Google vizier: A service for blackbox optimization. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, pages 1487–1495. ACM, 2017.
 Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
 Troels Henriksen, Niels G. W. Serup, Martin Elsman, Fritz Henglein, and Cosmin E. Oancea. Futhark: Purely functional GPU programming with nested parallelism and in-place array updates. In Proceedings of the 38th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2017, pages 556–571, New York, NY, USA, 2017. ACM.
 Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017.
 Frank Hutter, Holger H. Hoos, and Kevin LeytonBrown. Sequential modelbased optimization for general algorithm configuration. In Proceedings of the 5th International Conference on Learning and Intelligent Optimization, LION’05, pages 507–523, Berlin, Heidelberg, 2011. SpringerVerlag.
 Frank Hutter, Lin Xu, Holger Hoos, and Kevin LeytonBrown. Algorithm runtime prediction: Methods and evaluation (extended abstract). In Proceedings of the TwentyFourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 2531, 2015, pages 4197–4201, 2015.
 S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.
 Fredrik Kjolstad, Shoaib Kamil, Stephen Chou, David Lugato, and Saman Amarasinghe. The tensor algebra compiler. Proc. ACM Program. Lang., 1(OOPSLA):77:1–77:29, October 2017.
 Tim Kraska, Alex Beutel, Ed H. Chi, Jeffrey Dean, and Neoklis Polyzotis. The case for learned index structures. CoRR, abs/1712.01208, 2017.
 Andreas Krause and Daniel Golovin. Submodular function maximization. In Tractability: Practical Approaches to Hard Problems. Cambridge University Press, February 2014.
 Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097–1105. 2012.
 Andrew Lavin and Scott Gray. Fast algorithms for convolutional neural networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 2730, 2016, pages 4013–4021, 2016.
 Lisha Li, Kevin G. Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Efficient hyperparameter optimization and infinitely many armed bandits. CoRR, abs/1603.06560, 2016.
 Azalia Mirhoseini, Hieu Pham, Quoc V. Le, Benoit Steiner, Rasmus Larsen, Yuefeng Zhou, Naveen Kumar, Mohammad Norouzi, Samy Bengio, and Jeff Dean. Device placement optimization with reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 611 August 2017, pages 2430–2439, 2017.
 Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
 Ravi Teja Mullapudi, Andrew Adams, Dillon Sharlet, Jonathan RaganKelley, and Kayvon Fatahalian. Automatically scheduling halide image processing pipelines. ACM Trans. Graph., 35(4):83:1–83:11, July 2016.
 George L Nemhauser, Laurence A Wolsey, and Marshall L Fisher. An analysis of approximations for maximizing submodular set functions—i. Mathematical Programming, 14(1):265–294, 1978.
 Shoumik Palkar, James J. Thomas, Deepak Narayanan, Anil Shanbhag, Rahul Palamuttam, Holger Pirk, Malte Schwarzkopf, Saman P. Amarasinghe, Samuel Madden, and Matei Zaharia. Weld: Rethinking the interface between dataintensive applications. CoRR, abs/1709.06416, 2017.
 Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
 Jonathan RaganKelley, Connelly Barnes, Andrew Adams, Sylvain Paris, Frédo Durand, and Saman Amarasinghe. Halide: A language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. In Proceedings of the 34th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’13, pages 519–530, New York, NY, USA, 2013. ACM.
 B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A review of bayesian optimization. Proceedings of the IEEE, 104(1):148–175, Jan 2016.
 Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical bayesian optimization of machine learning algorithms. In Proceedings of the 25th International Conference on Neural Information Processing Systems  Volume 2, NIPS’12, pages 2951–2959, USA, 2012.
 Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat Prabhat, and Ryan P. Adams. Scalable bayesian optimization using deep neural networks. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning  Volume 37, ICML’15, pages 2171–2180, 2015.
 Michel Steuwer, Toomas Remmelg, and Christophe Dubach. Lift: A functional data-parallel IR for high-performance GPU code generation. In Proceedings of the 2017 International Symposium on Code Generation and Optimization, CGO ’17, pages 74–85, Piscataway, NJ, USA, 2017. IEEE Press.
 Arvind K. Sujeeth, HyoukJoong Lee, Kevin J. Brown, Hassan Chafi, Michael Wu, Anand R. Atreya, Kunle Olukotun, Tiark Rompf, and Martin Odersky. Optiml: An implicitly parallel domainspecific language for machine learning. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML’11, pages 609–616, USA, 2011.
 Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems  Volume 2, NIPS’14, pages 3104–3112, Cambridge, MA, USA, 2014. MIT Press.
 Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.
 Nicolas Vasilache. Personal communication.
 Nicolas Vasilache, Oleksandr Zinenko, Theodoros Theodoridis, Priya Goyal, Zachary DeVito, William S. Moses, Sven Verdoolaege, Andrew Adams, and Albert Cohen. Tensor comprehensions: Frameworkagnostic highperformance machine learning abstractions. CoRR, abs/1802.04730, 2018.
 Sven Verdoolaege, Juan Carlos Juega, Albert Cohen, José Ignacio Gómez, Christian Tenllado, and Francky Catthoor. Polyhedral parallel code generation for cuda. ACM Trans. Archit. Code Optim., 9(4):54:1–54:23, January 2013.
 R. Clint Whaley and Jack J. Dongarra. Automatically tuned linear algebra software. In Proceedings of the 1998 ACM/IEEE Conference on Supercomputing, SC ’98, pages 1–27, Washington, DC, USA, 1998. IEEE Computer Society.
 Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.