Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks

Abstract

Neural Architecture Search (NAS) has demonstrated its power on various AI accelerating platforms such as Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs). However, it remains an open problem how to integrate NAS with Application-Specific Integrated Circuits (ASICs), despite ASICs being the most powerful AI accelerating platforms. The major bottleneck comes from the large design freedom associated with ASIC designs. Moreover, with the consideration that multiple DNNs will run in parallel for different workloads with diverse layer operations and sizes, integrating heterogeneous ASIC sub-accelerators for distinct DNNs in one design can significantly boost performance, and at the same time further complicate the design space. To address these challenges, in this paper we build an ASIC template set based on existing successful designs, described by their unique dataflows, so that the design space is significantly reduced. Based on the templates, we further propose a framework, namely NASAIC, which can simultaneously identify multiple DNN architectures and the associated heterogeneous ASIC accelerator design, such that the design specifications (specs) can be satisfied, while the accuracy can be maximized. Experimental results show that, compared with successive NAS and ASIC design optimizations which lead to design spec violations, NASAIC can guarantee that the results meet the design specs, with 17.77%, 2.49×, and 2.32× reductions on latency, energy, and area, respectively, and with only 0.76% accuracy loss. To the best of the authors' knowledge, this is the first work on neural architecture and ASIC accelerator design co-exploration.

I Introduction

Recently, Neural Architecture Search (NAS) [31, 16, 21] successfully opens up the design freedom to automatically identify the neural architectures with the maximum accuracy; in addition, hardware-aware NAS [25, 4, 9, 7, 12, 11, 10, 27, 17, 30, 3] further enables the hardware design space to jointly identify the best architecture and hardware designs in maximizing network accuracy and hardware efficiency. Most of the existing hardware-aware NAS approaches focus on GPUs or Field Programmable Gate Arrays (FPGAs).

On the other hand, among all AI accelerating platforms, application-specific integrated circuits (ASICs), composed of processing elements (PEs) connected in different topologies, can provide incomparable energy efficiency, latency, and form factor [5, 19, 26]. Most existing ASIC accelerators, however, target common neural architectures [6, 18, 5] and do not reap the power of NAS. Though seemingly straightforward, integrating NAS with ASIC designs is not a simple matter, as can be seen from the image classification example in Fig. 1. The neural architecture search space is formed by ResNet9 [15] with adjustable hyperparameters. The hardware design space is formed by ASICs with adjustable numbers of PEs and their connections. The results are depicted in a three-dimensional space, where the three axes represent different hardware metrics and each point represents a solution of a paired neural architecture and ASIC design. From the figure we can see that when NAS and ASIC design are performed successively, all the solutions (denoted by circles) violate the user-defined hardware design specifications (design specs, denoted by a diamond). When NAS is performed with awareness of a particular ASIC design, the resulting solution (denoted by a triangle) has lower accuracy than the optimal one (denoted by a star) from 10,000 Monte Carlo runs, which uses a different ASIC design. A simple heuristic of picking the solution whose latency, energy, and area are closest to the design specs (denoted by a square) would also be sub-optimal. It is therefore imperative to jointly explore the neural architecture search space and hardware design space to identify the optimal solution.

Figure 1: Neural architecture search space and hardware design space exploration: solutions from successive NAS and ASIC design; the solution from NAS with awareness of an ASIC design; the closest-to-spec solution; and the optimal solution from 10,000 Monte Carlo (MC) runs. (Best viewed in color)

However, such a task is quite challenging, primarily due to the large design space of ASICs, where the same set of PEs can constitute numerous topologies (and thus dataflows). Enumeration is simply out of the question. In addition, when ASIC accelerators are deployed on the edge, they usually need to handle multiple tasks involving multiple DNNs. For instance, tasks like object detection, image segmentation, and classification can be triggered simultaneously on augmented reality (AR) glasses [1], each of which relies on one kind of DNN. Since the DNNs for different tasks can have distinct architectures, one dataflow cannot fit all of them; meanwhile, multiple tasks need to be executed concurrently, which requires task-level parallelism. As such, it is best to integrate multiple heterogeneous sub-accelerators (corresponding to different dataflows) into one accelerator to improve performance and energy efficiency, as has been verified in [13]. Yet this further complicates the design space.

To address these challenges, in this paper, we establish a link between NAS and ASIC accelerator design. Instead of a full-blown exploration of the design space, we observe that there already exist a few great ASIC accelerator designs such as Shidiannao [6], NVDLA [18], and Eyeriss [5]. Each of these designs has its unique dataflow, and the accelerator is determined once the hardware resource associated with the dataflow is given. As such, we can create a set of ASIC templates, where each template corresponds to one specific dataflow, so that the design space can be significantly narrowed down to the selection of templates to form a heterogeneous accelerator, and the allocation of hardware resources (e.g., the number of PEs and NoC bandwidth) to the selected templates.

Based on the template concept, we then propose a neural architecture and ASIC design co-exploration framework, namely NASAIC, for multiple tasks targeting edge devices. The objective of NASAIC is to identify the best neural architecture for each task together with the ASIC design, such that all design specs can be met while the accuracy of the neural architectures is maximized. Specifically, we devise a novel controller that can simultaneously predict the hyperparameters of multiple DNNs together with the hardware resource allocation parameters for different template selections. Based on the state-of-the-art cost model [14], we separately explore the mapping and scheduling of neural architectures onto ASIC templates. Finally, a reward is generated to update the controller. To accelerate the search process, we apply an early pruning technique to remove, without training, neural architectures that cannot satisfy the design specs. Experimental results on the workload with mixed classification and segmentation tasks show that, compared with solutions generated by successive NAS and ASIC design optimization, which cannot satisfy the design specs, those from NASAIC are guaranteed to meet the design specs, with 17.77%, 2.49×, and 2.32× reductions in latency, energy, and area, and with only 0.76% average accuracy loss on these two tasks. Furthermore, compared with hardware-aware NAS for a fixed ASIC design, NASAIC achieves 3.65% higher accuracy. To the best of the authors' knowledge, this is the first work on neural architecture and ASIC design co-exploration.

II Background and Challenges

We are now witnessing the rapid growth of NAS. Since the very first work on NAS with reinforcement learning [31], there has been tremendous work studying efficient neural architecture search [16, 21]. Integrating hardware awareness into the search loop opens a new research direction, which has attracted research efforts on hardware-aware NAS [25, 4]. Taking one step further, co-exploration of neural architecture and hardware design has most recently been proposed [9, 7]. Unlike the original NAS with the mono-objective of maximizing accuracy, these hardware-aware NAS frameworks take inference latency into consideration and push forward the deployment of DNNs on edge devices. NAS has been applied to GPUs and FPGAs but not yet to ASICs, even though ASICs are the most efficient among all AI accelerating platforms [28, 29].

Two so-far-unseen but urgent-to-solve challenges exist.

Challenge 1: How to enable the co-exploration of neural architectures and ASIC accelerator designs?

The large design space of ASIC accelerators hinders the application of NAS to them. Unlike GPUs with fixed hardware or FPGAs with well-structured hardware, ASIC designs grant designers the maximum flexibility to determine the hardware organization. This makes it possible to pursue the maximum efficiency; however, it also significantly enlarges the design space. Fortunately, there exists extensive research on designing ASIC AI accelerators [6, 18, 5], making it possible to shrink the design space on top of existing designs.

Among all ASIC accelerator designs, one key observation is that each design has a specific dataflow, such as the Shidiannao [6], NVDLA [18], and Eyeriss [5] styles. For instance, NVDLA [18] involves an adder tree to calculate the partial sums of output feature maps. Inspired by this, we propose to build a set of accelerator templates, each of which has one dataflow style and hence a fixed hardware structure. On top of this, we only need to allocate resources to templates, without changing hardware structures. Consequently, the design space can be significantly shrunk, which in turn enables the co-exploration of neural architectures and ASIC designs by incorporating hardware allocation parameters.

Challenge 2: Multiple neural architectures need to be identified under the unified design specs.

Another challenge is that the realistic applications on edge devices require the collaboration of multiple tasks, which involves multiple DNNs. In addition, all these DNNs will be executed on the accelerator with unified design specs, including latency, energy, and area. In consequence, sequentially optimizing each DNN using hardware-aware NAS will not work; instead, the multiple neural architectures need to be simultaneously optimized under the unified design specs.

Integrating multiple DNNs into one accelerator brings a further challenge. DNNs for different tasks have distinct architectures, yet one dataflow is not suitable for all architectures. For instance, the NVDLA style [18] (illustrated in Fig. 2) loads one pixel from each activation channel for one computation. In order to fully use the computation resources, it favors convolution layers with many activation channels but low activation resolution, while the Shidiannao style [6] (also illustrated in Fig. 2) favors the opposite. As a result, the NVDLA style works better for ResNets, while the Shidiannao style works better for U-Nets. As demonstrated in [13], we can integrate multiple heterogeneous sub-accelerators into one ASIC accelerator using a network-on-chip topology through Network Interface Controllers (NICs), which further complicates the design space.

In this work, we will address the above challenges.

III Problem Definition

In this section we will first define multi-task workloads and heterogeneous accelerators, and then formulate the problem of neural architecture and ASIC design co-exploration.

Figure 2: Overview: co-exploration with three layers of optimizations.

Fig. 2 gives an overview of the co-exploration, which involves three exploration layers: "Application", "Accelerator", and "Synthesis". The application layer determines the neural architectures to be applied, while the accelerator layer creates an ASIC template set based on the dataflow styles of existing accelerator designs. Acting as the bridge, the synthesis layer allocates a template together with resources to each sub-accelerator, and then maps and schedules the network layers onto the sub-accelerators. In the following, we define each exploration layer in detail.

Application. The application workload considered in this work has multiple AI tasks, each of which involves one DNN model. A workload with m tasks is defined as W = {t1, t2, ..., tm}. Fig. 2 shows an example with two tasks (i.e., t1 for classification and t2 for segmentation). Each task ti corresponds to a DNN architecture Ai, and these architectures form a set of m DNNs. We define a DNN architecture as Ai = ⟨Bi, Li, Hi, acci⟩, composed of a backbone architecture Bi, a set of layers Li, a set of hyperparameters Hi, and an accuracy acci. For example, in Fig. 2, the backbone architecture for the classification task t1 is ResNet9 [15], and its hyperparameters include the number of filters (FN) and the number of skip layers (SK) for each residual block, as shown in Fig. 3 (left); for the segmentation task t2, the backbone architecture is U-Net [22], whose hyperparameters include the height (H) and filter numbers (FN) for each layer.

Based on the above definitions, we define the neural architecture search function nas(Ai), which determines the hyperparameters Hi of DNN Ai to identify one concrete neural architecture. Note that NAS [31] determines Hi with the mono-objective of maximizing accuracy acci. As shown in Fig. 2, each set of hyperparameters corresponds to one neural architecture, and we determine Hi to identify a specific neural architecture for each task (the colored ones in the figure).
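To make the above notation concrete, the following sketch (illustrative only; the class and field names, and the candidate hyperparameter values, are our own rather than taken from the paper) represents a two-task workload and a random instantiation of the nas function that fixes the hyperparameters of each backbone.

```python
import random
from dataclasses import dataclass, field

@dataclass
class DnnArch:
    backbone: str        # e.g., "ResNet9" or "U-Net"
    search_space: dict   # hyperparameter name -> list of candidate values
    hyperparams: dict = field(default_factory=dict)  # fixed by nas()
    accuracy: float = 0.0                            # filled in after training

def nas(arch: DnnArch, rng: random.Random) -> DnnArch:
    """Instantiate one concrete architecture by fixing every hyperparameter."""
    arch.hyperparams = {name: rng.choice(options)
                        for name, options in arch.search_space.items()}
    return arch

# Two-task workload: classification (ResNet9 backbone) and segmentation (U-Net backbone).
resnet9_space = {}
for b in range(3):                                  # three residual blocks
    resnet9_space[f"FN_block{b}"] = [64, 128, 256]  # filter numbers (placeholder options)
    resnet9_space[f"SK_block{b}"] = [1, 2]          # skip layers (placeholder options)
unet_space = {"height": [3, 4, 5], "FN_layer0": [16, 32, 64]}  # placeholder options

workload = [DnnArch("ResNet9", resnet9_space), DnnArch("U-Net", unet_space)]
rng = random.Random(0)
for arch in workload:
    print(nas(arch, rng).backbone, arch.hyperparams)
```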

Figure 3: Left: search spaces for both NAS and ASIC accelerator designs. Right: the resultant heterogeneous ASIC accelerator.

ASIC Accelerator. A heterogeneous ASIC accelerator formed by multiple sub-accelerators connected in an NoC topology through NICs is shown in Fig. 3 (right). Define ACC = {a1, a2, ..., ak} to be the set of sub-accelerators. A sub-accelerator aj has three properties: its dataflow style dfj, its number of PEs pej, and its NoC bandwidth bwj. With a set of predefined dataflow templates D to choose from, as shown in Fig. 2, the ASIC design space is significantly narrowed down from choosing specific unrolling, mapping, and data-reuse patterns to allocating resources (one template with associated PEs and bandwidth) to each sub-accelerator. Kindly note that, according to the template and the mapped network layers, the memory size can be determined to support the full use of the hardware, as in [14]. Therefore, memory size is not explored in the search space.

Synthesis. Based on the definition of applications and accelerators, next, we present the synthesis optimization.

Resource allocation. On the hardware side, we design each sub-accelerator in set ACC, given the set of dataflow templates D, the maximum total number of PEs (e.g., 4096), and the maximum total bandwidth (e.g., 64 GB/s). Note that since D contains different dataflows, the resultant accelerator will be heterogeneous if more than one type of dataflow is mapped to ACC. By reducing the size of D to one, the proposed techniques can also be used for homogeneous designs.

We define an allocation function alloc(aj) = ⟨dfj, pej, bwj⟩ that determines the dataflow template from D, and the PEs and bandwidth used for each sub-accelerator aj, such that the total allocated PEs do not exceed the maximum number of PEs and the total allocated bandwidth does not exceed the maximum bandwidth. As an example, Fig. 2 illustrates two kinds of dataflow templates: Shidiannao [6] and NVDLA [18]. The resultant accelerator in Fig. 2 is composed of two heterogeneous sub-accelerators with different dataflow templates, PE numbers, and bandwidths.
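As a minimal illustration of the allocation constraint (the type and helper names below are ours; the 4096-PE and 64 GB/s budgets are the values used later in Section V), the following sketch checks whether a candidate allocation of templates, PEs, and bandwidth to sub-accelerators stays within the global resource budget.

```python
from typing import List, NamedTuple

class SubAcc(NamedTuple):
    dataflow: str   # one of the dataflow templates, e.g., "shi", "dla", "row-stationary"
    num_pes: int    # PEs allocated to this sub-accelerator
    bw_gbps: int    # NoC bandwidth allocated to this sub-accelerator (GB/s)

TEMPLATES = {"shi", "dla", "row-stationary"}
MAX_PES = 4096      # total PE budget
MAX_BW_GBPS = 64    # total NoC bandwidth budget

def allocation_is_valid(design: List[SubAcc]) -> bool:
    """A design is valid if every sub-accelerator uses a known template and the
    summed PE and bandwidth allocations stay within the global budgets."""
    return (all(a.dataflow in TEMPLATES for a in design)
            and sum(a.num_pes for a in design) <= MAX_PES
            and sum(a.bw_gbps for a in design) <= MAX_BW_GBPS)

# A heterogeneous two-sub-accelerator design similar to Fig. 2 (numbers illustrative).
design = [SubAcc("dla", 3072, 48), SubAcc("shi", 1024, 16)]
print(allocation_is_valid(design))  # True: 4096 PEs and 64 GB/s in total
```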

Mapper and scheduler. On the software side, we map network layers to sub-accelerators and determine their execution orders on each sub-accelerator. A map function map(l) = aj is defined, which indicates that network layer l of a DNN is mapped to sub-accelerator aj. Based on the mapping, we determine the execution order of the network layers on each sub-accelerator following a schedule function sched.

The synthesis results can be evaluated via four metrics: accuracy, latency, energy, and area. In this work, we aim to maximize the accuracy of the DNNs under the given design specs on latency (LS), energy (ES), and area (AS).

Problem Definition. Based on all the above definitions, we formally define the optimization problem as follows: given a multi-task workload W, the backbone neural architecture for each DNN, a set of sub-accelerators ACC, a set of dataflow templates D, the maximum number of PEs and maximum bandwidth, and design specs (LS, ES, AS), we determine:

  • nas: the architecture hyperparameters of each DNN;

  • alloc: the dataflow template and resource allocation for each sub-accelerator;

  • map and sched: the mapping of network layers to sub-accelerators and their schedule orders;

such that the maximum accuracy of the DNNs is achieved while all design specs and resource constraints are met; i.e., lat ≤ LS, eng ≤ ES, area ≤ AS, the total allocated PEs do not exceed the maximum number of PEs, and the total allocated bandwidth does not exceed the maximum bandwidth, where lat, eng, and area represent the latency, energy, and area of the resultant accelerator. The objective is given by a function, defined in the next section, that aggregates the accuracy of all networks; it can be, for example, the average (maximize the average accuracy) or the minimum (maximize the minimum accuracy).
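The objective and constraints above can be read as the following feasibility-and-objective check (a sketch with illustrative numbers; the helper names are ours, and the accuracy aggregation shows the two example choices mentioned above).

```python
from statistics import mean

def meets_specs(lat, eng, area, spec_lat, spec_eng, spec_area):
    """All three hardware metrics must stay within the user-given design specs."""
    return lat <= spec_lat and eng <= spec_eng and area <= spec_area

def objective(accuracies, mode="avg"):
    """Aggregate per-task accuracies, e.g., average or minimum."""
    return mean(accuracies) if mode == "avg" else min(accuracies)

# Two candidate (architectures, accelerator) solutions; all numbers illustrative.
candidates = [
    {"acc": [0.93, 0.84], "lat": 7.8e5, "eng": 1.4e9, "area": 2.0e9},
    {"acc": [0.95, 0.86], "lat": 9.5e5, "eng": 3.6e9, "area": 4.7e9},  # more accurate, but...
]
specs = {"spec_lat": 8.0e5, "spec_eng": 2.0e9, "spec_area": 3.0e9}

feasible = [c for c in candidates if meets_specs(c["lat"], c["eng"], c["area"], **specs)]
best = max(feasible, key=lambda c: objective(c["acc"]))
print(best["acc"])  # only the first candidate is feasible, so it wins despite lower accuracy
```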

IV Proposed Co-Exploration Framework: NASAIC

Figure 4: NASAIC: parameters for the neural architectures and the accelerator are first determined by the controller; then the identified neural architectures and accelerator are evaluated; finally, a reward is generated from the evaluation results to update the controller.

This section presents the details of NASAIC, which addresses the problem formulated in Section III. Fig. 4 gives an overview of NASAIC. It contains three components: the controller, the optimizer selector, and the evaluator. In general, the controller samples neural architectures and a hardware resource allocation in each episode (a.k.a. iteration). The predicted sample then goes through the optimizer selector and the evaluator to generate the accuracy and hardware cost. Finally, a reward is generated to update the controller. All the components work together to generate solutions with high weighted accuracy that meet all design specs. To illustrate the NASAIC framework, we apply a reinforcement learning approach in this paper; based on the formulated reward function, other optimization approaches, such as evolutionary algorithms, can also be applied. Note that since the hardware constraints are non-differentiable, differentiable neural architecture search (DARTS) cannot be applied. In the following, we introduce each component in detail.

Multi-Task Co-Exploration Controller. The controller is the key component in NASAIC. Driven by the requirement of multiple tasks in one application workload, we propose a novel reinforcement-learning-based Recurrent Neural Network (RNN) controller to simultaneously predict multiple neural architectures. In addition, we integrate the accelerator design parameters into the controller to realize a genuine co-exploration of neural architectures and hardware designs.

Fig. 5 shows the proposed controller. It is composed of multiple segments, whose number equals the sum of the number of tasks in workload W and the number of sub-accelerators in set ACC. The first m segments correspond to the DNNs, while the remaining segments correspond to the sub-accelerators. For the segment associated with a DNN, its outputs determine that DNN's hyperparameters, i.e., the nas function. For instance, in Fig. 5, the first segment predicts the filter numbers (FN) and skip layers (SK). Similarly, the segment for a sub-accelerator determines its hardware design parameters, i.e., the alloc function, as shown in the right part of Fig. 5.

We employ a reinforcement learning method to update the controller and predict new samples. Specifically, in each episode, the controller first predicts a sample and obtains its reward based on the evaluation results from the optimizer selector and evaluator components. Then, we employ the Monte Carlo policy gradient algorithm [24] to update the controller:

∇_θ J(θ) = (1/K) Σ_{k=1}^{K} Σ_{t=1}^{T} γ^{T-t} ∇_θ log π_θ(a_t | a_{1:(t-1)}) (R_k - b)    (1)

where K is the batch size and T is the number of steps in each episode. Rewards are discounted at every step by an exponential factor γ, and the baseline b is the exponential moving average of rewards.
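A minimal sketch of this update is given below; it replaces the RNN controller with independent categorical distributions (one per controller output) and a stand-in reward, so it only illustrates the Monte Carlo policy gradient step itself, not the full NASAIC controller.

```python
import numpy as np

rng = np.random.default_rng(0)

# One categorical distribution per controller decision (a simplification of Fig. 5,
# where an RNN predicts the decisions sequentially).
num_choices = [3, 2, 3, 2, 3, 4]           # e.g., FN/SK options and hardware parameters
logits = [np.zeros(n) for n in num_choices]
lr, gamma, ema = 0.05, 0.99, 0.9           # learning rate, discount, baseline smoothing
baseline = 0.0

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def stand_in_reward(actions):
    """Placeholder for 'train the DNNs and query the cost model' (Eq. 4)."""
    return sum(actions) / sum(n - 1 for n in num_choices)

for episode in range(200):
    actions = [rng.choice(len(l), p=softmax(l)) for l in logits]   # sample one episode
    reward = stand_in_reward(actions)
    baseline = ema * baseline + (1 - ema) * reward                 # moving-average baseline
    T = len(actions)
    for t, (l, a) in enumerate(zip(logits, actions)):
        # REINFORCE: grad of log pi(a) w.r.t. the logits is one_hot(a) - softmax(logits);
        # the step is discounted by gamma^(T - t) and scaled by (reward - baseline).
        grad_logp = -softmax(l)
        grad_logp[a] += 1.0
        l += lr * (gamma ** (T - t)) * (reward - baseline) * grad_logp

print([int(np.argmax(l)) for l in logits])  # drifts toward the highest-reward choices
```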

Figure 5: Co-exploration controller for multiple tasks: determining neural architecture hyperparameters and hardware design parameters.

Optimizer Selector. We integrate an optimizer selector in NASAIC to accelerate the search process. This is based on the observation that hardware evaluation is much faster than the training process. Specifically, as shown in Fig. 4, we add two switches: one for neural architecture exploration and one for hardware design exploration. Depending on the status of these switches, the framework performs one of the following functions:

  • With only the neural architecture switch closed, it performs conventional NAS, as in [31].

  • With only the hardware design switch closed, it reuses the previously predicted neural architecture and explores hardware designs only. In this case, we aim to obtain a valid accelerator design for that neural architecture, and therefore we do not consider accuracy in the reward.

  • With both switches closed, it predicts new neural architectures and hardware designs.

NASAIC repeatedly conducts the following two steps: (1) both switches are closed for one step, aiming to obtain a new neural architecture and hardware design; (2) the neural architecture switch is opened for a number of steps, in order to explore the best hardware for the previously identified neural architecture. Kindly note that the first step is carried out in a non-blocking scheme, such that one training run and multiple hardware explorations can be conducted in parallel. If all hardware explorations complete and no feasible hardware design is found, the training process is terminated early to accelerate the search.
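Read as pseudocode, the interplay between the two switches and the early pruning can be sketched as the sequential loop below (the real framework runs the training step in a non-blocking fashion; the predictor, feasibility check, and reward here are placeholders).

```python
import random

rng = random.Random(0)
NUM_ROUNDS = 5        # outer repetitions (the paper runs 500 episodes)
HW_ONLY_STEPS = 10    # hardware-only explorations per round (10 per episode in Section V)

def predict(sample_arch, sample_hw, prev):
    """Controller prediction; when a switch is open, reuse the previous value."""
    return {"arch": rng.random() if sample_arch else prev["arch"],
            "hw": rng.random() if sample_hw else prev["hw"]}

def hw_is_feasible(sample):   # placeholder for the cost model + mapping/scheduling check
    return sample["hw"] < 0.8

def train_and_reward(sample):  # placeholder for DNN training + reward computation
    return sample["arch"]

prev = {"arch": rng.random(), "hw": rng.random()}
for _ in range(NUM_ROUNDS):
    # Step 1: both switches closed -> a new architecture and a new hardware design.
    prev = predict(True, True, prev)
    feasible = hw_is_feasible(prev)
    # Step 2: architecture switch open -> explore hardware only, for the same architecture.
    for _ in range(HW_ONLY_STEPS):
        feasible |= hw_is_feasible(predict(False, True, prev))
    # Early pruning: skip the expensive training when no hardware design fits the specs.
    reward = train_and_reward(prev) if feasible else 0.0
    print(f"feasible hardware found: {feasible}, reward: {reward:.3f}")
```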

Evaluator. The evaluator contains two paths: (1) training and validation, to obtain the networks' accuracy; and (2) cost modeling, mapping, and scheduling, to generate a penalty with respect to the design specs.

Training and validating. On this path, the hyperparameters for the DNN architectures are obtained from the controller. We train each DNN from scratch and obtain its accuracy acci on a held-out validation dataset. Based on these accuracies, we compute the weighted accuracy used in the reward as follows:

Ā = Σ_{i=1}^{m} w_i · acc_i    (2)

where m is the total number of tasks in the given workload and w_i is a weight ranging from 0 to 1, such that Σ_{i=1}^{m} w_i = 1.

Mapping and scheduling. On this path, the set of identified DNN architectures and the set of determined sub-accelerators are given by the controller. We need to obtain the hardware metrics, including latency, energy, and area. NASAIC incorporates the state-of-the-art cost model MAESTRO [14] and a mapping and scheduling algorithm to obtain these metrics. The area can be obtained directly from MAESTRO for the given sub-accelerators, while the latency and energy are determined by the mapping and scheduling. To develop an algorithm for mapping and scheduling, we need the latency and energy of each layer on the different sub-accelerators. Let L be the set of all layers of the identified DNNs. For a pair of network layer l ∈ L and sub-accelerator aj ∈ ACC, we can input them to MAESTRO to obtain the layer's latency and energy on that sub-accelerator.

The problem can be proved to be equivalent to the traditional heterogeneous assignment problem [8, 23]: given the latency and energy cost of each layer on each sub-accelerator, the dependencies among layers, and a timing constraint T, determine the mapping and scheduling of each layer onto one sub-accelerator, such that the total energy cost is minimized while the latency does not exceed T. We denote by HA(L, ACC, T) an optimal solver for this problem, which returns the minimum energy achievable under timing constraint T. Then, we have the following theorem.

Theorem

Given a layer set L, a sub-accelerator set ACC, and design specs on latency (LS) and energy (ES), the design specs can be met if and only if HA(L, ACC, LS) ≤ ES.
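To illustrate how the theorem can be used, the sketch below implements a toy stand-in for the optimal solver on a four-layer, two-sub-accelerator instance: it exhaustively enumerates mappings, assumes the sub-accelerators work in parallel with no inter-layer dependencies, and uses made-up per-layer costs in place of MAESTRO outputs. The real framework instead relies on the heuristic from [23].

```python
from itertools import product

# Per-layer (latency, energy) on each of two sub-accelerators, indexed as
# cost[layer][sub_accelerator]; the values stand in for MAESTRO outputs.
cost = [
    [(120, 5.0), (200, 3.0)],
    [(300, 9.0), (250, 7.5)],
    [( 80, 2.0), ( 60, 2.5)],
    [(150, 4.0), (180, 3.5)],
]
NUM_ACCS = 2

def evaluate(assignment):
    """Latency = slowest sub-accelerator (they run in parallel); energy = total."""
    lat = [0.0] * NUM_ACCS
    eng = 0.0
    for layer, acc in enumerate(assignment):
        l, e = cost[layer][acc]
        lat[acc] += l
        eng += e
    return max(lat), eng

def min_energy_under_latency(latency_spec):
    """Exhaustive stand-in for the optimal solver HA(L, ACC, T) on this tiny instance."""
    best = None
    for assignment in product(range(NUM_ACCS), repeat=len(cost)):
        lat, eng = evaluate(assignment)
        if lat <= latency_spec and (best is None or eng < best):
            best = eng
    return best   # None means no mapping satisfies the latency spec

LS, ES = 400, 18.0   # latency and energy specs for this toy example
min_eng = min_energy_under_latency(LS)
# Per the theorem, the specs are satisfiable iff the minimum energy under the
# latency spec exists and does not exceed the energy spec.
print(min_eng is not None and min_eng <= ES, min_eng)   # True 17.5
```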

Figure 6: Exploration results obtained by NASAIC for three different workloads under design specs: (left) with CIFAR-10 and STL-10 datasets; (middle) with CIFAR-10 and Nuclei; (right) with CIFAR-10 dataset. (Best viewed in color)

The above theorem can be proved by contradiction; due to space limitations, the detailed proof is omitted. Based on this theorem, the latency and energy are obtained by the solver HA, which can be instantiated by Integer Linear Programming (ILP) for the optimal solution; however, since ILP is time-consuming, this paper applies the heuristic approach in [23] to accelerate the search process. On top of the obtained hardware metrics and the given design specs, we formulate a penalty function. The penalty is determined by the degree to which the solution exceeds the design specs, and there is no penalty if all design specs are met; it is formulated as follows:

P = max(0, (lat - LS)/(ub_L - LS)) + max(0, (eng - ES)/(ub_E - ES)) + max(0, (area - AS)/(ub_A - AS))    (3)

where ub_L, ub_E, and ub_A are the upper bounds for the respective metrics, which can be obtained by exploring the hardware design space using the neural architectures identified by NAS, as the circles in Fig. 1.

Finally, based on all the above evaluation results, we calculate the reward with a scaling variable β as follows:

reward = Ā - β · P    (4)
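Putting Eqs. (2)-(4) together, the reward computation can be sketched as below. The normalization of each violation by the gap between its spec and its upper bound follows our reading of the penalty description; all numbers are illustrative.

```python
def weighted_accuracy(accs, weights):
    """Eq. (2): task weights lie in [0, 1] and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * a for w, a in zip(weights, accs))

def penalty(metrics, specs, upper_bounds):
    """Eq. (3): zero when every spec is met; otherwise grows with the normalized
    amount by which each metric (latency, energy, area) exceeds its spec."""
    return sum(max(0.0, (m - s) / (ub - s))
               for m, s, ub in zip(metrics, specs, upper_bounds))

def reward(accs, weights, metrics, specs, upper_bounds, beta=1.0):
    """Eq. (4): weighted accuracy minus the scaled penalty."""
    return weighted_accuracy(accs, weights) - beta * penalty(metrics, specs, upper_bounds)

# Two tasks with equal weights; the latency slightly violates its spec.
print(reward(accs=[0.92, 0.84], weights=[0.5, 0.5],
             metrics=[8.5e5, 1.4e9, 2.0e9],     # latency, energy, area of the design
             specs=[8.0e5, 2.0e9, 3.0e9],
             upper_bounds=[1.0e6, 4.0e9, 5.0e9]))  # ≈ 0.63 (0.88 accuracy - 0.25 penalty)
```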

V Experimental Evaluation

We evaluate the efficacy of the proposed framework, NASAIC, using different application workloads and hardware configurations. The results reported in this section demonstrate that NASAIC can efficiently identify accurate neural architectures together with ASIC accelerator designs that are guaranteed to meet the given design specs, while achieving high accuracy for multiple AI tasks.

A. Evaluation Environment

Application workloads: We use typical workloads on AR glasses in applications such as driver assistance or augmented medicine to demonstrate the efficacy of NASAIC. In these workloads, the core tasks involve classification and segmentation, where representative datasets such as CIFAR-10, STL-10, and Nuclei are commonly employed, along with light-weight neural architectures. We synthesize the following three workloads.

  • W1: Tasks on one classification dataset (CIFAR-10) and one segmentation dataset (Nuclei).

  • W2: Tasks on two classification datasets (CIFAR-10, STL-10).

  • W3: Tasks on the same classification dataset (CIFAR-10).

The backbone architectures and their search spaces for the above tasks are defined as follows. For the classification tasks, we select ResNet9 [15], which contains multiple residual blocks, as the architecture backbone. During NAS, the number of convolution layers and the number of filter channels in each residual block are searched and determined. For CIFAR-10, we employ 3 residual blocks, and the parameter options for each block are depicted in Fig. 3 (left); for STL-10, considering that its input images have higher resolution (i.e., 96×96 pixels), we deepen the network to 5 residual blocks, and increase the maximum number of convolution layers in each residual block to 3 and the maximum number of filter channels to 512 for each block. For the segmentation tasks, we use U-Net [22] as the architecture backbone. The search space for this backbone includes the height and the number of filter channels in each layer, also shown in Fig. 3. Note that we follow the standard NAS approach [31] and hold out a part of the training images as the validation set, and the training parameters (e.g., batch size and learning rate) follow ResNet9 [15] and U-Net [22].
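For concreteness, the backbone search spaces described above can be written down as a configuration like the one below. The per-option value lists for CIFAR-10 and U-Net are placeholders, since the exact options are only given in the figure; the STL-10 limits of 5 blocks, 3 convolution layers per block, and 512 filters come from the text.

```python
# Search-space configuration for the three backbones; lists marked "placeholder"
# are illustrative, not the exact option sets used in the paper.
search_spaces = {
    "cifar10_resnet9": {
        "num_residual_blocks": 3,
        "conv_layers_per_block": [1, 2],                # placeholder
        "filter_channels_per_block": [64, 128, 256],    # placeholder
    },
    "stl10_resnet": {
        "num_residual_blocks": 5,
        "conv_layers_per_block": [1, 2, 3],             # at most 3 (from the text)
        "filter_channels_per_block": [128, 256, 512],   # at most 512 (from the text)
    },
    "nuclei_unet": {
        "height": [3, 4, 5],                            # placeholder
        "filter_channels_per_layer": [16, 32, 64],      # placeholder
    },
}

def space_size(space):
    """Rough count of candidate architectures, treating every choice as independent."""
    blocks = space.get("num_residual_blocks", 1)
    size = 1
    for key, options in space.items():
        if isinstance(options, list):
            size *= len(options) ** (blocks if key.endswith("per_block") else 1)
    return size

for name, space in search_spaces.items():
    print(name, space_size(space))
```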

Hardware configuration: The accelerator design includes the allocation of hardware resources to sub-accelerators and the selection of a dataflow for each sub-accelerator. For resource allocation, we set the maximum number of PEs to 4096 and the maximum NoC bandwidth to 64 GB/s, in accordance with [13]. Note that our proposed NASAIC can support an arbitrary number of sub-accelerators; for simple demonstration, we make a case study integrating two sub-accelerators. Specifically, each sub-accelerator uses one of the following dataflow styles: Shidiannao (abbr. shi) [6], NVDLA (abbr. dla) [18], and row-stationary [5]. In the case where one sub-accelerator receives no resources, the design degenerates to a single large accelerator; in the case where the sub-accelerators receive exactly the same allocation, the design degenerates to a homogeneous accelerator.

Hardware constraints on latency, energy, and area are set by designers (users) according to their own use cases. To evaluate the effectiveness of NASAIC, we set distinct and strict design specs on latency, energy, and area for each of the three application workloads.

NASAIC settings: For the exploration parameters, we explore the search space for 500 episodes, with 10 accelerator design explorations in each episode. For the reward calculation, the task weights for the weighted accuracy and the scaling variable β are fixed before the search. The controller RNN is trained with RMSProp optimization, with an initial learning rate of 0.99 and an exponential decay of 0.5 every 50 steps. All experiments are conducted on a server with a 48-thread Intel Xeon CPU and one NVIDIA Tesla P100 GPU. NASAIC takes only around 3.5 GPU hours to complete the exploration for each workload, which mainly benefits from the early pruning performed by the optimizer selector component in NASAIC (see Section IV).

B. Design Space Exploration

Fig. 6 demonstrates the exploration results of NASAIC on three application workloads. In this figure, the x-axis, y-axis, and z-axis represent latency, energy and area, respectively. The black diamond indicates the design specs (upper bound); each green diamond is a solution (neural architecture-ASIC design pair) explored by NASAIC; each blue cross is a solution based on the smallest neural network in the search space combined with different ASIC designs (lower bound); and the red star refers to the best solution in terms of the average accuracy explored by NASAIC. The numbers in the rectangles with blue, green, and red colors represent the accuracy of the smallest network, the inferior solutions, and our best solutions, respectively.

We make several observations from Fig. 6. First, NASAIC guarantees that all the explored solutions meet the design specs. Second, the identified solutions have high accuracy. The accuracies on CIFAR-10 of the four solutions are 92.85%, 92.62%, 93.23%, and 91.11%, while the accuracy lower bound from the smallest network is 78.93%. Similarly, for STL-10, the accuracy is 75.72% compared with the lower bound of 71.57%. For Nuclei, the IOU (Intersection Over Union) is 0.8374 compared with the lower bound of 0.6462. Third, we observe that the best solutions identified by NASAIC for two of the workloads are quite close to the boundary defined by one of the three design specs, which indicates that in these cases the accuracy is bounded by resources: for one of them, the energy of the identified solution is 97.12% of the spec, while for the other, the latency of the identified solution is 93.4% of the spec. This gives designers insight into whether and where a hardware bottleneck prevents the accelerator from reaching higher accuracy, so that they can loosen such a constraint to increase accuracy if necessary. On the other hand, for the workload shown in the middle of Fig. 6, our best solution is farther away from the specs than the solution pointed out by the arrow (one of the other solutions explored by NASAIC). However, the accuracy of that closer-to-spec solution on CIFAR-10 and STL-10 is 2.86% and 2.91% lower, respectively, than that of the best solution. This reflects that the best solution may not always be the one closest to the specs, and therefore heuristics that simply select the solution closest to the specs cannot work.

C. Results on Multiple Tasks for Multiple Datasets

Work. Approach Hardware Dataset Accuracy L / E / A /
W1 NASASIC CIFAR-10 94.17% 9.45e5 3.56e9 4.71e9
Nuclei 83.94%
ASIC CIFAR-10 91.98% 5.8e5 1.94e9 3.82e9
HW-NAS Nuclei 83.72%
NASAIC CIFAR-10 92.85% 7.77e5 1.43e9 2.03e9
Nuclei 83.74%
W2 NASASIC CIFAR-10 94.17% 9.31e5 3.55e9 4.83e9
STL-10 76.50%
ASIC CIFAR-10 92.53% 9.69e5 2.90e9 3.86e9
HW-NAS STL-10 72.07.%
NASAIC CIFAR-10 92.62% 6.48e5 2.50e9 3.34e9
STL-10 75.72%
: violate design specs; : meet design specs.
Table I: Comparison between successive NAS and ASIC design (NASASIC), ASIC design followed by hardware-aware NAS (ASICHW-NAS), and NASAIC.

Table I reports the comparison results on the multi-dataset workloads. We implement two additional approaches. First, "NAS→ASIC" indicates successive NAS [31] followed by brute-force hardware exploration. Second, in "ASIC→HW-NAS", a Monte Carlo search with 10,000 runs is first conducted to obtain the ASIC design closest to the design specs; then, for that specific ASIC design, we extend hardware-aware NAS [2] to identify the best neural architecture under the design specs.

The results in Table I demonstrate that, for the neural architectures identified by NAS, none of the accelerator designs explored by the brute-force approach provides a legal solution that satisfies all design specs. On the contrary, for both workloads, NASAIC guarantees that the solutions meet all specs, with average accuracy losses of only 0.76% and 1.17%, respectively. For workload W1, NASAIC achieves 17.77%, 2.49×, and 2.32× reductions on latency, energy, and area, respectively, against NAS→ASIC. For workload W2, the reductions are 30.39%, 29.58%, and 30.85%. When comparing NASAIC with ASIC→HW-NAS, even though the solution of the latter is closer to the design specs, for W1, NASAIC achieves 0.87% higher accuracy on CIFAR-10 and similar accuracy on Nuclei; for W2, it achieves 3.65% higher accuracy on STL-10 and similar accuracy on CIFAR-10.

All the above results have revealed the necessity and underscored the importance of co-exploring neural architectures and ASIC designs.

D. From Single and Homogeneous to Heterogeneous ASIC Accelerator

The benefits of heterogeneous accelerators under heterogeneous workloads are evident. Table II reports the comparison results of different accelerator configurations under the homogeneous workload W3 (CIFAR-10 only). Among these approaches, "NAS" explores neural architectures without hardware awareness and the corresponding ASIC uses the maximum hardware resources; "Single Acc.", "Homo. Acc.", and "Hetero. Acc." are NASAIC with a single accelerator, two homogeneous sub-accelerators, and two heterogeneous sub-accelerators, respectively. Kindly note that, as discussed in Section V-A, NASAIC can support the exploration of a single accelerator. We set the hardware configurations as follows to guarantee that the single and homogeneous solutions can meet the design specs. For Single Acc., the network is executed sequentially twice, which means the constraints on latency and energy are halved. For Homo. Acc., two homogeneous sub-accelerators run the same network simultaneously, which means the energy and area constraints for each sub-accelerator are halved.

From the results in Table II, we observe that although NAS can successfully identify the neural architecture with the highest accuracy (94.17%), it cannot satisfy the specs even when all hardware resources are used. In comparison, Single Acc. identifies a relatively smaller neural architecture with less hardware resource, but it meets the specs with an accuracy of 91.45%. Without exploiting parallelism, Single Acc. cannot further improve accuracy since it is bounded by latency. By boosting performance, Homo. Acc. identifies a neural architecture with 92.00% accuracy. By exploring heterogeneous accelerators, NASAIC generates two distinct networks: one with an accuracy of 93.23%, close to the best result identified by NAS, and the other with a slightly lower accuracy of 91.11%, comparable with that of Single Acc. Such a solution can be useful in ensemble learning [20] and provides more choices for designers.

Approach | Accuracy | Meets specs
NAS | 94.17% | No
Single Acc. | 91.45% | Yes
Homo. Acc. | 92.00% | Yes
Hetero. Acc. (NASAIC) | 93.23% / 91.11% | Yes
Note: in the identified architectures, FN denotes the number of filters and SK the number of skip layers of each residual block; block 0 is a standard convolution layer instead of a residual block.
Table II: On CIFAR-10 (W3), comparison of architectures and accelerator designs obtained with different accelerator configurations.

VI Conclusion

In this work, we have proposed a framework, namely NASAIC, to co-explore neural architectures and ASIC accelerator designs targeting multiple AI tasks on edge devices. NASAIC fills the missing link between NAS and ASIC design by creating an accelerator template set organized by dataflow style. In addition, a novel multi-task-oriented RNN controller has been developed to simultaneously determine multiple neural architectures under unified design specs. The efficacy of NASAIC is verified through a set of comprehensive experiments.

References

  1. M. Abrash (2019) https://www.oculus.com/blog/inventing-the-future/. Accessed: 2019-11-26.
  2. M. Tan (2019) MnasNet: platform-aware neural architecture search for mobile. In Proc. of CVPR, pp. 2820–2828.
  3. S. Bian (2020) NASS: optimizing secure inference via neural architecture search. arXiv preprint arXiv:2001.11854.
  4. H. Cai (2019) ProxylessNAS: direct neural architecture search on target task and hardware. In Proc. of ICLR.
  5. Y. Chen (2016) Eyeriss: a spatial architecture for energy-efficient dataflow for convolutional neural networks. In Proc. of ISCA, pp. 367–379.
  6. Z. Du (2015) ShiDianNao: shifting vision processing closer to the sensor. In Proc. of ISCA, pp. 92–104.
  7. C. Hao (2019) FPGA/DNN co-design: an efficient design methodology for IoT intelligence on the edge. In Proc. of DAC.
  8. K. Ito (1998) ILP-based cost-optimal DSP synthesis with module selection and data format conversion. IEEE Trans. TVLSI 6 (4), pp. 582–594.
  9. W. Jiang (2019) Accuracy vs. efficiency: achieving both through FPGA-implementation aware neural architecture search. In Proc. of DAC.
  10. W. Jiang (2019) Achieving super-linear speedup across multi-FPGA for real-time DNN inference. ACM Transactions on Embedded Computing Systems (TECS) 18 (5s), pp. 1–23.
  11. W. Jiang (2019) Device-circuit-architecture co-exploration for computing-in-memory neural accelerators. arXiv preprint arXiv:1911.00139.
  12. W. Jiang (2019) Hardware/software co-exploration of neural architectures. arXiv preprint arXiv:1907.04650.
  13. H. Kwon (2019) HERALD: optimizing heterogeneous DNN accelerators for edge devices. arXiv preprint arXiv:1909.07437.
  14. H. Kwon (2019) Understanding reuse, performance, and hardware cost of DNN dataflow: a data-centric approach. In Proc. of MICRO, pp. 754–768.
  15. C. Li (2019) https://lambdalabs.com/blog/resnet9-train-to-94-cifar10-accuracy-in-100-seconds. Accessed: 2019-11-24.
  16. H. Liu (2019) DARTS: differentiable architecture search. In Proc. of ICLR.
  17. Q. Lu (2019) On neural architecture search for resource-constrained hardware platforms. arXiv preprint arXiv:1911.00105.
  18. NVIDIA (2017) NVDLA deep learning accelerator. http://nvdla.org.
  19. A. Parashar (2017) SCNN: an accelerator for compressed-sparse convolutional neural networks. In Proc. of ISCA, pp. 27–40.
  20. M. P. Perrone and L. N. Cooper (1992) When networks disagree: ensemble methods for hybrid neural networks. Technical report, Brown University, Institute for Brain and Neural Systems.
  21. H. Pham (2018) Efficient neural architecture search via parameter sharing. In Proc. of ICML, pp. 4092–4101.
  22. O. Ronneberger (2015) U-Net: convolutional networks for biomedical image segmentation. In Proc. of MICCAI, pp. 234–241.
  23. Z. Shao (2005) Efficient assignment and scheduling for heterogeneous DSP systems. IEEE Trans. TPDS 16 (6), pp. 516–525.
  24. R. J. Williams (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8 (3-4), pp. 229–256.
  25. B. Wu (2019) FBNet: hardware-aware efficient ConvNet design via differentiable neural architecture search. In Proc. of CVPR, pp. 10734–10742.
  26. X. Xu (2018) Scaling for edge inference of deep neural networks. Nature Electronics 1 (4), pp. 216–222.
  27. L. Yang (2020) Co-exploring neural architecture and network-on-chip design for real-time artificial intelligence. In Proc. of ASP-DAC.
  28. J. Zhang (2018) ThunderVolt: enabling aggressive voltage underscaling and timing error resilience for energy efficient deep learning accelerators. In Proc. of DAC, pp. 1–6.
  29. J. Zhang (2019) CompAct: on-chip compression of activations for low power systolic array based CNN acceleration. ACM Transactions on Embedded Computing Systems (TECS) 18 (5s), pp. 1–24.
  30. X. Zhang (2019) When neural architecture search meets hardware implementation: from hardware awareness to co-design. In Proc. of ISVLSI, pp. 25–30.
  31. B. Zoph and Q. V. Le (2017) Neural architecture search with reinforcement learning. In Proc. of ICLR.