# Cappuccino: Efficient Inference Software Synthesis for Mobile System-on-Chips

###### Abstract

Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped Internet of Things (IoT) devices permeate every aspect of modern life, the ability to execute CNN inference, a computationally intensive application, on resource-constrained devices has become increasingly important. In this context, we present Cappuccino, a framework for synthesis of efficient inference software targeting mobile System-on-Chips (SoCs). We propose techniques for efficient parallelization of CNN inference targeting mobile SoCs, and explore the underlying tradeoffs. Experiments with different CNNs on three mobile devices demonstrate the effectiveness of our approach.

## I Introduction

Convolutional Neural Networks (CNNs) have proven to be one of the most effective approaches to feature extraction [1, 2]. While frameworks such as Caffe [3] or Torch [4] are commonly used for training CNN models, inference using trained CNNs on resource-constrained platforms remains a challenge. We hypothesize that platforms based on mobile system-on-chips (SoCs) will be major players in the emerging IoT landscape due to their rich feature sets and market forces, and thus, we contend that efficient CNN inference on such platforms is increasingly essential.

Forward evaluation of a trained CNN, also known as inference, is computationally intensive. The research community has put forth a number of solutions for accelerating CNN inference on different platforms, including the design of customized ASIC chips ([5, 6]), FPGA-based accelerator design ([7, 8, 9]), and parallelization on server-grade graphics processing units (GPUs). Latifi Oskouei et al. offered a library for parallel execution of CNNs on mobile devices [10].

We present Cappuccino, a tool for automatic synthesis of efficient CNN inference software targeting mobile SoCs. In addition to the software synthesis capability, Cappuccino features a novel approach to zero-overhead utilization of vector instructions. Furthermore, it considers the effect of inexact computing on classification accuracy, and leverages imprecise arithmetic to further optimize the computation.

## II Convolutional Neural Networks

Modern Convolutional Neural Networks (CNNs) have millions of parameters, whose values are obtained during training. Each CNN has multiple convolutional layers, which use 3D filter banks for feature extraction. The convolution result of kernels of a filter bank with Input Feature Maps (IFMs) is accumulated to create Output Feature Maps (OFMs). The number of IFMs, the number of OFMs, and the output size are $N$, $M$, and $R \times C$, respectively. The convolution operation is visualized in Figure 1. Thus, a CNN layer has $M \times N$ kernels ($M$ filter banks with $N$ kernels each). Kernels have dimensions of $K \times K$. Each pixel in an OFM is the sum of $N$ convolutions between kernels and the corresponding pixels in IFMs. To generate adjacent pixels in an OFM, the kernel bank is slid across IFMs by a stride of $S$. A simplified pseudo-code for a convolution operation is shown in Figure 2. The vast majority of CNN inference execution time is spent in convolutional layers [11], and thus, we restrict our discussion to them.
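The convolution loop nest described above can be sketched as follows. This is a minimal Python sketch in the spirit of the pseudo-code of Figure 2; the function and variable names (`conv_layer`, `ifms`, `filters`) are ours, not the paper's:

```python
def conv_layer(ifms, filters, stride):
    """Direct convolution of N IFMs with M filter banks of K x K kernels.

    ifms[n][h][w]          -- input feature maps
    filters[m][n][kh][kw]  -- filter bank m, kernel n
    """
    M, N = len(filters), len(ifms)
    K = len(filters[0][0])
    H, W = len(ifms[0]), len(ifms[0][0])
    R = (H - K) // stride + 1   # output rows
    C = (W - K) // stride + 1   # output columns
    ofms = [[[0.0] * C for _ in range(R)] for _ in range(M)]
    for m in range(M):              # one filter bank per OFM
        for r in range(R):
            for c in range(C):      # one output pixel at a time
                acc = 0.0
                for n in range(N):  # accumulate over all IFMs
                    for kh in range(K):
                        for kw in range(K):
                            acc += (filters[m][n][kh][kw] *
                                    ifms[n][r * stride + kh][c * stride + kw])
                ofms[m][r][c] = acc
    return ofms
```

The six nested loops expose the parallelism dimensions discussed in Section IV: the `m` loop over filter banks, the `r`/`c` loops over output pixels, and the innermost loops over a kernel.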

## III Cappuccino

In order to use a CNN for inference on mobile devices, one has to evaluate its forward path with known parameter values. Cappuccino serves this very purpose: given a CNN description, it synthesizes optimized inference software for the target SoC. Our current embodiment of Cappuccino synthesizes the CNN in the form of an optimized RenderScript program, which exploits the available processing resources on a mobile SoC to execute the computation. Depending on the target SoC, the generated program will typically be launched on multiple CPU cores, the mobile GPU, and the mobile DSP.

As Figure 3 illustrates, Cappuccino requires three inputs. The first is a network description file that contains the CNN architectural information, such as the number, size, and type of its layers. The second input is a model file, which contains the weight and bias parameter values. Cappuccino reorders CNN parameters to improve the performance of vectorized operations; parameter reordering does not change the model size, and occurs at compile-time. The third input is the validation dataset that was originally used during training of the CNN. Using this dataset, Cappuccino analyzes the impact of optimizations such as inexact computing on the given CNN, to determine the suitability of utilizing imprecise arithmetic. The result of this analysis guides the corresponding decision during software generation.

## IV Inference Optimization Strategies

### IV-A Thread Workload Allocation

Convolutional layers contain three main sources of parallelism: Kernel-Level Parallelism (KLP), Filter bank-Level Parallelism (FLP), and Output-Level Parallelism (OLP). In accelerating a CNN, one or more of these types of parallelism should be used for workload allocation to threads.

#### IV-A1 Kernel-Level Parallelism (KLP)

In KLP, parallelization is obtained by executing the computations for convolving a kernel with corresponding IFM pixels in parallel. Hence, each thread computes one multiplication, and the final result is generated via accumulation by an eventual reduction operation.

#### IV-A2 Filter bank-Level Parallelism (FLP)

In FLP, parallelism is exploited within a filter bank by allocating the computation of each kernel to a separate thread. In this case, each thread computes the convolution of an entire kernel. Subsequently, a reduction addition yields the final result.

#### IV-A3 Output-Level Parallelism (OLP)

In OLP, computations of different output pixels are carried out in parallel. That is, each software thread computes the 3D convolution of an entire filter bank of $N$ kernels with the corresponding pixels in IFMs.

An advantage of OLP is that a kernel loaded by one thread can be reused by other threads that are responsible for generating other pixels in the same OFM. In SoCs with efficient cache systems, it is possible to load each kernel once and use it $R \times C$ times. In contrast, data loaded in KLP and FLP cannot be reused as efficiently by the same or other threads. Moreover, in KLP and FLP the required reduction incurs additional overhead for thread synchronization and inter-thread data transfer. As such, Cappuccino uses OLP as its primary workload allocation policy at the thread level. Furthermore, it utilizes vector processing to exploit KLP and FLP within each thread.
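Under OLP, the work unit handed to each thread is one output pixel, i.e., one full 3D convolution. The allocation policy can be sketched as follows; this is our illustrative Python sketch (names `conv_olp` and `output_pixel` are ours), using a thread pool where the synthesized RenderScript program would instead launch one kernel invocation per output element:

```python
from concurrent.futures import ThreadPoolExecutor

def output_pixel(ifms, bank, stride, r, c):
    """OLP work unit: one thread computes one output pixel, i.e. the full
    3D convolution of a filter bank with the matching window of the IFMs."""
    K = len(bank[0])
    return sum(bank[n][kh][kw] * ifms[n][r * stride + kh][c * stride + kw]
               for n in range(len(bank))
               for kh in range(K)
               for kw in range(K))

def conv_olp(ifms, filters, stride, R, C):
    """Launch one task per output pixel (m, r, c); no reduction across
    threads is needed, unlike KLP and FLP."""
    with ThreadPoolExecutor() as pool:
        futs = {(m, r, c): pool.submit(output_pixel, ifms, filters[m],
                                       stride, r, c)
                for m in range(len(filters))
                for r in range(R)
                for c in range(C)}
    return {k: f.result() for k, f in futs.items()}
```

Note that each task produces its result independently, which is why OLP avoids the inter-thread synchronization overhead described above.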

The output of each convolutional layer is a 3D data structure which includes $M \times R \times C$ elements (also referred to as pixels). These elements can be uniquely identified using three variables: the OFM ($z$), the column ($y$), and the row ($x$) number. Each element is the result of a convolution between the corresponding window of the IFMs and a specific filter bank (Figure 4). Each thread is identified using a unique index $t$, where $0 \le t < M \times R \times C$. The identifier index $t$ is used to compute the values of $x$, $y$, and $z$.

### IV-B Data Reordering for Vector Processing

Cappuccino uses vector processing to further optimize intra-thread workload execution. Before executing a vector instruction, it is necessary to load all of the operands. In most SoCs with vector processing support, the memory bus is wide enough to load multiple words of a contiguous block in one memory access. To utilize this feature, which relies on memory access locality, model parameter values have to be shuffled. Conventionally, IFMs and kernel parameters are stored in either row- or column-major order. Therefore, data elements stored in adjacent memory addresses are either the next element from the same row/column or the first element of the next row/column. If we represent an element’s address using the (Layer, Row, Column) format, data stored in row-major order reads:

$$ (1,1,1),\;(1,1,2),\;\dots,\;(1,1,C),\;(1,2,1),\;(1,2,2),\;\dots,\;(1,R,C),\;(2,1,1),\;\dots \tag{1} $$

In an $F$-way vector processor, one wants to load at least $F$ operands with a single memory access. Cappuccino reorders the model data to achieve this goal. In particular, we propose to store the model data in a map-major order, as opposed to row- or column-major, so that a thread can apply vector instructions to corresponding elements of different maps. Absent this optimization, vector processing would incur significant overhead at the boundaries of a kernel. For example, assuming $F = 4$, we reorder the model data in the following order (2):

$$ (1,1,1),\;(2,1,1),\;(3,1,1),\;(4,1,1),\;(1,1,2),\;(2,1,2),\;(3,1,2),\;(4,1,2),\;\dots \tag{2} $$

A 3D representation of this transform is shown in Figure 5. When model data is reordered, Cappuccino reads IFMs as super-words (vectors), performs vectorized convolution, and accumulates the result. The optimized computation is shown in the algorithm of Figure 6.
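The map-major shuffle can be sketched as follows. `to_map_major` is our illustrative name; the function interleaves corresponding pixels of every group of `F` consecutive maps, so that one `F`-wide vector load fetches one operand from each map:

```python
def to_map_major(maps, F):
    """Reorder a list of 2D maps (each row-major) into map-major order:
    corresponding pixels of F consecutive maps become contiguous in memory,
    matching the interleaved layout of (2)."""
    out = []
    for base in range(0, len(maps), F):       # one stack of F maps at a time
        stack = maps[base:base + F]
        rows, cols = len(stack[0]), len(stack[0][0])
        for r in range(rows):
            for c in range(cols):
                for m in stack:               # interleave across the stack
                    out.append(m[r][c])
    return out
```

Because the reordering is applied to the model file at compile-time, it adds no runtime cost for the kernel parameters.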

#### IV-B1 Zero-Overhead Dynamic Reordering of OFMs

Note that model data can be reordered and written to a new model file without any overhead, as this happens statically at compile-time. However, reordering the input to an intermediate CNN layer is not as straightforward. In CNNs, the output of a layer becomes the input to the next layer. It follows that the output of a layer has to be reordered to allow the use of vectorized operations in computing the next CNN layer. This process has to happen dynamically, and thus, is expected to incur time and energy overhead.

Cappuccino avoids the dynamic data reordering overhead by directly storing elements of the OFMs in map-major order as they are computed. Parameters $x$, $y$, and $z$ are used to determine the location of the output element that thread $t$ generates. To store OFMs in map-major format, one has to swap the priorities associated with these parameters. For example, the result of the computation by the second thread ($t = 1$) is by default stored in the second location of the output memory. After reordering, however, the second element of the output memory must contain $(2,1,1)$, i.e., the first pixel of the second OFM. Such an output can be directly used as the input to the next layer without any overhead. Figure 7 illustrates the idea.

To create the output in the reordered map-major format, we generate indexes for stacks of $F$ layers, instead of a single layer (Figure 7). That is, we start indexing the second row only after the first rows of all $F$ layers in a stack are indexed. Equations (3) and (4) map a thread id $t$ to the row $x$ and the column $y$, respectively. For computing the value of $z$ (the map index), it is required to determine which stack, and which layer within that stack, a particular output belongs to (Figures 5 and 7). Equation (5) computes the value of $z$.

$$ x = \left\lfloor \frac{t \bmod (R \cdot C \cdot F)}{C \cdot F} \right\rfloor \tag{3} $$

$$ y = \left\lfloor \frac{t \bmod (C \cdot F)}{F} \right\rfloor \tag{4} $$

$$ z = \left\lfloor \frac{t}{R \cdot C \cdot F} \right\rfloor \cdot F + (t \bmod F) \tag{5} $$
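The index computations of Equations (3)–(5) can be expressed directly in code. This is our reconstruction, assuming an output of $R \times C$ pixels per map, vector width $F$, and maps grouped into stacks of $F$:

```python
def thread_to_output_index(t, R, C, F):
    """Map thread id t to (row x, column y, map z) for an output stored
    map-major in stacks of F maps of size R x C."""
    x = (t % (R * C * F)) // (C * F)      # Eq. (3): row within the stack
    y = (t % (C * F)) // F                # Eq. (4): column within the row
    z = (t // (R * C * F)) * F + (t % F)  # Eq. (5): stack * F + in-stack map
    return x, y, z
```

Consecutive thread ids then walk across the $F$ maps of a stack first, so consecutive output writes land in adjacent memory locations of the map-major layout.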

### IV-C Inexact Computing

Modern mobile SoCs tend to support a number of predefined imprecise computing modes that are likely to result in faster or more energy-efficient execution [12]. On such platforms, the chosen processing mode has to strike a balance between implementation metrics, e.g., runtime or energy dissipation, and inference classification accuracy.

For example, RenderScript offers two imprecise computing modes for applications that do not need a strict implementation of the IEEE 754 standard, called the relaxed and imprecise computing modes. In both modes, the implementation of floating-point arithmetic is not fully compliant with the IEEE 754 standard in its handling of denormalized numbers. The imprecise computing mode is more efficient, but has a lower arithmetic accuracy: operations resulting in -0.0 can return +0.0, and operations on INF and NAN are unsupported. Perhaps more importantly, in the current version of RenderScript, vector processing is only available under the imprecise computing modes; vector processing under the RenderScript precise computing mode would result in sequential processing of vector elements.

Cappuccino analyzes the given CNN layer by layer to determine the best-matching computing mode for every layer. For every layer, it utilizes the validation dataset to measure the classification accuracy under the different processing modes. Subsequently, Cappuccino determines which layers of a CNN can be processed using inexact arithmetic and which ones demand a precise implementation. The goal is to execute as many CNN layers as possible in inexact modes, under user-specified constraints on the acceptable degradation in classification accuracy.
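One way to realize such a layer-wise analysis is a greedy search; the sketch below is our illustration of the idea, not Cappuccino's exact algorithm. `evaluate` is a hypothetical callback that runs the validation set under a given per-layer mode assignment and returns the classification accuracy:

```python
def select_modes(layers, evaluate, max_drop):
    """Greedy per-layer mode selection under an accuracy constraint.

    layers:   the CNN layers (only their count matters here)
    evaluate: hypothetical callback, mode list -> classification accuracy
    max_drop: user-specified acceptable accuracy degradation
    """
    modes = ["imprecise"] * len(layers)            # start fully inexact
    baseline = evaluate(["precise"] * len(layers))  # exact-arithmetic accuracy
    for i in range(len(layers)):
        if baseline - evaluate(modes) <= max_drop:
            break                                   # constraint satisfied
        modes[i] = "precise"                        # demote one more layer
    return modes
```

The search keeps as many layers as possible in the imprecise mode, demoting layers to precise arithmetic only while the measured accuracy drop exceeds the user's budget.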

In our discussions, the term accuracy arises in reference to either arithmetic accuracy or classification accuracy. The former measures the numerical difference between values computed in exact vs. inexact arithmetic. The latter indicates the inference classification performance of CNNs, for example by measuring the percentage of true positive predictions.

## V Experimental Results

### V-A Setup

TABLE I: Execution time of three CNNs on three smartphones (baseline: single-threaded Java).

| CNN Name | Device | Baseline (ms) | Parallel (ms) | Imprecise (ms) | Speedup |
| --- | --- | --- | --- | --- | --- |
| AlexNet | Nexus 5 | 33848.40 | 947.15 | 836.32 | 40.47X |
| AlexNet | Nexus 6P | 8626.00 | 512.72 | 61.80 | 139.58X |
| AlexNet | Galaxy S7 | 8698.43 | 442.97 | 127.78 | 68.07X |
| SqueezeNet | Nexus 5 | 43932.73 | 1302.10 | 161.50 | 272.03X |
| SqueezeNet | Nexus 6P | 17299.55 | 671.46 | 141.30 | 122.43X |
| SqueezeNet | Galaxy S7 | 12331.82 | 888.91 | 150.24 | 82.08X |
| GoogLeNet | Nexus 5 | 84404.40 | 2651.12 | 2478.09 | 34.06X |
| GoogLeNet | Nexus 6P | 25570.48 | 1575.45 | 602.28 | 42.46X |
| GoogLeNet | Galaxy S7 | 21917.67 | 1699.42 | 686.08 | 31.95X |

We used Cappuccino to implement three modern CNNs: AlexNet [1], GoogLeNet [2], and SqueezeNet [13]. Subsequently, the parallelized implementations were evaluated on three different smartphones with different generations of Qualcomm Snapdragon SoCs. In order to increase the precision of measurements, all experiments were repeated 100 times; the minimum and maximum observations are omitted, and the average of the remaining 98 observations is reported. In all of the experiments, the smartphones were put in airplane mode, their screen brightness was fully dimmed, and their background processes were stopped to the extent possible.
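The reported statistic is a trimmed mean; the helper below makes the protocol precise (the name `trimmed_mean` is ours):

```python
def trimmed_mean(samples):
    """Average after discarding the single minimum and maximum observation,
    as in the measurement protocol above (100 runs -> mean of 98)."""
    if len(samples) < 3:
        raise ValueError("need at least 3 samples")
    s = sorted(samples)
    return sum(s[1:-1]) / (len(s) - 2)
```

Dropping the extremes guards the average against one-off outliers such as a background process waking up mid-run.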

### V-B Runtime and Energy Efficiency

TABLE II: Energy consumption of SqueezeNet inference on Nexus 5.

| CNN Name | Baseline: First 1000 (J) | Baseline: Second 1000 (J) | Baseline: Average (J) | Proposed: First 1000 (J) | Proposed: Second 1000 (J) | Proposed: Average (J) | Ratio |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SqueezeNet | 26.37 | 26.40 | 26.39 | 3.39 | 3.36 | 3.38 | 7.81X |

#### V-B1 Speedup

We executed the synthesized programs on the platforms and measured the execution time. Table I summarizes the results. Programs synthesized by Cappuccino offer a speedup of at least 31.95X (GoogLeNet on Galaxy S7) and at most 272.03X (SqueezeNet on Nexus 5) compared to the baseline single-threaded Java implementation. Moreover, the execution time in all but one case is below a second.

#### V-B2 Effect of Inexact Computing

To determine the best inexact computing mode, we use Cappuccino to measure the classification accuracy of the aforementioned CNNs in the computing modes supported by the target platforms. This analysis is performed on 5000 random images from the ILSVRC 2012 validation dataset [14]. The classification accuracy in the imprecise mode turns out to be identical to that of the exact mode. Hence, Cappuccino recommends utilization of imprecise computing in all layers.

Table I demonstrates the effect of imprecise computing on execution time. In our experiments, use of the imprecise computing mode offers up to 8X speedup compared to the same implementation under exact arithmetic. Note that the RenderScript incarnation of the imprecise computing mode enables vector processing in addition to other optimizations, such as rapid exception handling for denormalized numbers.

#### V-B3 Comparison with Related Work

Table III compares the performance of software synthesized by Cappuccino with the state-of-the-art work [10]. The proposed solution under exact arithmetic improves the execution time by 1.38X. In addition, when the synthesized software is both parallel and imprecise, it shows up to 11.47X speedup compared to CNNDroid [10].

#### V-B4 Energy Consumption

Cappuccino invokes many threads, which increases the instantaneous power consumption compared to a sequential program. However, software synthesized by Cappuccino runs drastically faster than a sequential equivalent, which results in a net reduction of energy consumption. Table II compares the energy consumption for running SqueezeNet on Nexus 5. Reported numbers are computed by running each program 1000 times and calculating the average. Measurements are performed twice to showcase repeatability (2000 runs in total).

## VI Conclusion

TABLE III: Comparison with CNNDroid [10].

| | CNNDroid [10] | Cappuccino: Parallel | Speedup | Cappuccino: Imprecise | Speedup |
| --- | --- | --- | --- | --- | --- |
| Execution Time (ms) | 709 | 512.72 | 1.38X | 61.80 | 11.47X |

In this paper we presented Cappuccino, a framework for efficient synthesis of CNN inference software targeting mobile SoCs. Cappuccino leverages RenderScript, via which it utilizes the CPU cores, the GPU, and the DSP that commonly exist on a mobile SoC to execute a CNN efficiently. Cappuccino assesses the impact of inexact computing on execution time and classification accuracy, and subsequently selects the inexact computing mode that best fits each layer of a CNN. Compared to sequential implementations, programs synthesized by Cappuccino achieve a speedup of at least 31.95X and at most 272.03X, and improve energy consumption by 7.81X.

## References

- [1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
- [2] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.
- [3] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” in Proceedings of the 22nd ACM international conference on Multimedia. ACM, 2014, pp. 675–678.
- [4] R. Collobert, K. Kavukcuoglu, and C. Farabet, “Torch7: A matlab-like environment for machine learning,” in BigLearn, NIPS Workshop, no. EPFL-CONF-192376, 2011.
- [5] Y. Chen, T. Luo, S. Liu, S. Zhang, L. He, J. Wang, L. Li, T. Chen, Z. Xu, N. Sun et al., “Dadiannao: A machine-learning supercomputer,” in Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture. IEEE Computer Society, 2014, pp. 609–622.
- [6] N. Jouppi, “Google supercharges machine learning tasks with tpu custom chip,” Google Blog, May, vol. 18, 2016.
- [7] C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing fpga-based accelerator design for deep convolutional neural networks,” in Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. ACM, 2015, pp. 161–170.
- [8] M. Motamedi, P. Gysel, V. Akella, and S. Ghiasi, “Design space exploration of fpga-based deep convolutional neural networks,” in 2016 21st Asia and South Pacific Design Automation Conference (ASP-DAC). IEEE, 2016, pp. 575–580.
- [9] J. Qiu, J. Wang, S. Yao, K. Guo, B. Li, E. Zhou, J. Yu, T. Tang, N. Xu, S. Song et al., “Going deeper with embedded fpga platform for convolutional neural network,” in Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. ACM, 2016, pp. 26–35.
- [10] S. S. Latifi Oskouei, H. Golestani, M. Hashemi, and S. Ghiasi, “Cnndroid: Gpu-accelerated execution of trained deep convolutional neural networks on android,” in Proceedings of the 2016 ACM on Multimedia Conference. ACM, 2016, pp. 1201–1205.
- [11] J. Cong and B. Xiao, “Minimizing computation in convolutional neural networks,” in International Conference on Artificial Neural Networks. Springer, 2014, pp. 281–290.
- [12] G. Mitra, B. Johnston, A. P. Rendell, E. McCreath, and J. Zhou, “Use of simd vector operations to accelerate application code performance on low-powered arm and intel platforms,” in Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW), 2013 IEEE 27th International. IEEE, 2013, pp. 1107–1116.
- [13] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, “Squeezenet: Alexnet-level accuracy with 50x fewer parameters and 0.5 mb model size,” arXiv preprint arXiv:1602.07360, 2016.
- [14] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015.