# AxTrain: Hardware-Oriented Neural Network Training for Approximate Inference

Xin He, Liu Ke, Wenyan Lu, Guihai Yan, Xuan Zhang
Washington University in St. Louis and SKLCA, Institute of Computing Technology, Chinese Academy of Sciences
###### Abstract.

The intrinsic error tolerance of neural networks (NNs) makes approximate computing a promising technique for improving the energy efficiency of NN inference. Conventional approximate computing focuses on balancing the efficiency-accuracy trade-off for existing pre-trained networks, which can lead to suboptimal solutions. In this paper, we propose AxTrain, a hardware-oriented training framework that facilitates approximate computing for NN inference. Specifically, AxTrain leverages the synergy between two orthogonal methods: one actively searches for a network parameter distribution with high error tolerance, while the other passively learns resilient weights by numerically incorporating the noise distributions of the approximate hardware into the forward pass during the training phase. Experimental results on various datasets with near-threshold computing and approximate multiplication strategies demonstrate AxTrain's ability to obtain resilient neural network parameters and to improve system energy efficiency.

ISLPED '18: International Symposium on Low Power Electronics and Design, July 23–25, 2018, Seattle, WA, USA. Copyright 2018 ACM. ISBN 978-1-4503-5704-3/18/07. DOI: 10.1145/3218603.3218643.

## 1. Introduction

An Artificial Neural Network (ANN) is a biologically inspired machine learning model that has been practically demonstrated to deliver superior performance in many recognition, mining, and synthesis (RMS) applications (RMS, ). The success of ANN can be attributed to innovations across the computing system stack: To achieve higher accuracy, deeper and more complex networks are created along with more advanced training algorithms. To speed up NN training and deployment, powerful parallel computing engines (e.g., GPUs) are designed to accelerate computationally intensive mathematical operations. Despite the improved performance, energy efficiency still remains a limiting factor when deploying advanced ANNs into edge devices with stringent power budgets.

A growing body of research has tackled energy efficiency from diverse perspectives. Algorithmically, the focus is to simplify neural networks (NNs) by either using more concise network models (e.g., ResNet (RESNET, ) and binary neural networks (BNN, )) or pruning and compressing existing models (DEEPCOMPRESS, ). From the hardware perspective, efficiency-driven optimizations have been conducted at the architecture, circuit, and device levels: customized NN accelerators aim at higher energy efficiency, approximate circuits trade accuracy for energy efficiency (TMSCS, ; AEYE, ), and emerging technologies (e.g., RRAM crossbars) perform low-power NN computation in memory (RRAMTRAIN, ). In this paper, we investigate an auxiliary approach with a focus on network training that can be generally applied to diverse approximate computing techniques, and that is orthogonal and complementary to efficiency techniques from these other domains.

Existing approximate computing techniques are confined to exploiting pre-trained NNs, which can result in suboptimal solutions. Without knowledge of the underlying hardware, NN algorithms optimize only for accuracy under the assumption of an ideal hardware implementation and do not consider hardware-specific error tolerance. Therefore, even small noise from approximate hardware may lead to severe network accuracy degradation. Compromises often have to be made to maintain the accuracy target, leading to conservative approximation and failure to exploit all the opportunities for efficiency improvement.

The key question is how to train a robust neural network that not only achieves high accuracy under ideal hardware assumptions but is also resilient to noise and errors, so that more aggressive approximation can be applied without severely compromising accuracy. As Fig.1 illustrates, a conventional training algorithm is dedicated to searching for a "global" minimum with the smallest loss across the weight space, ignoring the higher loss in the vicinity of that minimum. Thus perturbations from approximate computing easily result in significant loss, as indicated by "Local minimum 1". Instead of minimizing loss at a single minimum point, our proposed hardware-oriented training seeks a "near optimal" minimum where a "flat" and "good enough" loss surface is preferred and the globally smallest error is not mandatory, as "Local minimum 2" depicts. Thanks to the flat error surface, the NN exhibits a higher degree of tolerance for the noise induced by approximate computing.

In this paper, we propose AxTrain, a hardware-oriented NN training framework for approximate computing. AxTrain explores two different paths towards high resilience: an active method (AxTrain-act) that explicitly biases the training process to a noise insensitive minimum; and a passive method (AxTrain-pas) that exposes the model of low-level hardware imperfection to the high-level training algorithm for noise tolerance. AxTrain then leverages the synergy between active and passive methods to facilitate approximate computing.

In the AxTrain-act method, the innovation is to guide the training algorithm to improve both network loss and noise resilience directly. During training, noise sensitivity is also back propagated along with network loss to the network parameters, and those parameters get updated in order to minimize loss and noise sensitivity. This solution can be seen as an artificial regularization term to bias the training algorithm towards a high resilience (flat) and accurate (near optimal) minimum, similar to the L2 norm regularization for the over-fitting problem.

For the AxTrain-pas method, the error tolerance property of the NN is leveraged to reduce side effect from approximate computing. Rather than training with ideal hardware models, numerical functional models of the approximate hardware are incorporated along the forward pass in the training step, so that the training algorithm can learn the noise distribution of the approximate hardware on its own and descend to a minimum which is robust to approximate computing. Thanks to the knowledge of approximate hardware, the training process experiences different train sets with slightly modified statistical distributions in each epoch, and arrives at a robust model that yields high accuracy with approximate computing.

Finally, to evaluate the effectiveness of the proposed AxTrain framework, we study two popular approximate computing techniques: approximate multiplier and near threshold voltage (NTV) based memory storage (NTC, ), because multiplication and parameter storage dominate power consumption in NN accelerators.

## 2. Related work and Background

### 2.1. Related Work

Approximate computing is a promising technique for efficiency optimization (JMAO, ; FUZZY, ). Diverse techniques have been explored in prior work that apply approximate computing to improve NN energy efficiency. Minerva (MINERVA, ) is an example that uses circuit-level techniques to handle memory errors in NN accelerators: it employs Razor sampling circuits for fault detection and equips the weight-fetch stage with bit masking and word masking to mitigate bit-flip errors caused by NTV-based weight storage. Several other prior works on NN accelerators demonstrate the benefit of approximation at the architecture level: Temam shows that NN accelerators can tolerate transistor-level faults (OLIVIER, ); Du et al. exploit NN's tolerance for arbitrary approximate multiplier configurations through exhaustive design space exploration (ZIDONG, ). Recent research proposes more explicit techniques that exploit NN's intrinsic error tolerance and flexibility during training to improve efficiency. For example, both AxNN and ApproxAnn take neuron criticality into consideration and perform periodic retraining for self-healing (AXNN, ; APPROXANN, ). AxNN first characterizes neuron criticality, then replaces non-critical neurons with their approximate versions; to ensure the targeted accuracy, iterative retraining is used for error recovery. Inspired by AxNN, ApproxAnn proposes a more reliable way to quantify neuron criticality and adopts iterative heuristics to gain maximum efficiency.

Although AxNN and ApproxAnn both strive to take advantage of NN's pliable training process to improve energy efficiency, certain limitations in their techniques persist: 1) They require highly configurable hardware in which the modes of multipliers can be individually adjusted; 2) Due to area and power constraints in large-scale networks with time-multiplexed multipliers, periodic runtime multiplier reconfiguration inevitably degrades accelerator performance; 3) Approximation is performed on a pre-trained network with hardware-agnostic training, which does not optimize for error tolerance, so the target accuracies in their designs are met with relatively conservative approximations. All these limitations motivate our AxTrain framework.

### 2.2. Neural Network Preliminary

At the architecture level, an ANN can be seen as a parallel computing engine consisting of a large number of basic hardware elements, such as multipliers, accumulators, and nonlinear transformation units. A typical neural network, as shown in Fig.2, consists of an input layer, multiple hidden layers, and an output layer. During the forward pass, the input layer retrieves the inputs $a^0_i$ of a task sample and directly passes them to the next layer. To generate the activations $a^l_j$ for each neuron in a hidden layer, the hidden layer first performs multiplication and accumulation, $x^l_j=\sum_i w^l_{ji}a^{l-1}_i$, using activations from the previous layer and network parameters (including weights and biases), then feeds the intermediate result to a nonlinear transformation $h$, such as Sigmoid ($1/(1+e^{-x})$) for the output layer or ReLU ($\max(0,x)$) for hidden layers. This process is repeated layer by layer until the output layer is reached, where the final activations (outputs) are generated for regression or classification.

NN training aims to find network parameters that minimize the error between network outputs and targets. To reduce the error, backpropagation (BP) is used to propagate the output error from the output layer to the previous layers consecutively and to quantify the error contributions of the network parameters by taking derivatives of the output error with respect to these parameters. The parameters are then updated in a backward pass by applying stochastic gradient descent to these derivatives. The mathematical equations for training can be summarized as:

The derivative of the output error with respect to the $i$th neuron in layer $l$ is

$$\frac{\partial E}{\partial x_i^l}=\left(\sum_{j=1}^{N_{l+1}}\frac{\partial E}{\partial x_j^{l+1}}\cdot w_{ji}^{l+1}\right)\cdot h'(x_i^l) \qquad (1)$$

The weights’ gradient and updating method are derived as

$$\frac{\partial E}{\partial w_{ji}^{l}}=\frac{\partial E}{\partial x_j^{l}}\cdot a_i^{l-1} \qquad (2)$$

$$w_{ji}^{l}=w_{ji}^{l}-\eta\,\Delta w_{ji}^{l} \qquad (3)$$

where $\eta$ is the learning rate.
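As a concrete illustration, Eqs. (1)–(3) can be sketched for a tiny two-layer network in NumPy; the layer sizes, learning rate, and data below are illustrative choices, not the paper's experimental settings.

```python
import numpy as np

# Minimal sketch of Eqs. (1)-(3): one hidden layer with ReLU, sigmoid
# output, squared error; all sizes and values are illustrative.
rng = np.random.default_rng(0)
n_in, n_hid, n_out, eta = 4, 8, 3, 0.1

W1 = rng.standard_normal((n_hid, n_in)) * 0.1
W2 = rng.standard_normal((n_out, n_hid)) * 0.1

def forward(x):
    x1 = W1 @ x                     # pre-activations, hidden layer
    a1 = np.maximum(x1, 0.0)        # ReLU
    x2 = W2 @ a1                    # pre-activations, output layer
    a2 = 1.0 / (1.0 + np.exp(-x2))  # sigmoid
    return x1, a1, x2, a2

x = rng.standard_normal(n_in)
t = np.array([1.0, 0.0, 0.0])       # target

x1, a1, x2, a2 = forward(x)
E = 0.5 * np.sum((a2 - t) ** 2)

# Eq. (1): propagate dE/dx backwards through the layers.
dE_dx2 = (a2 - t) * a2 * (1.0 - a2)   # sigmoid derivative at output
dE_dx1 = (W2.T @ dE_dx2) * (x1 > 0)   # h'(x) = 1 for x > 0 (ReLU)

# Eq. (2): weight gradients; Eq. (3): SGD update.
W2 -= eta * np.outer(dE_dx2, a1)
W1 -= eta * np.outer(dE_dx1, x)

E_new = 0.5 * np.sum((forward(x)[3] - t) ** 2)
```

One SGD step along the negative gradient reduces the output error on this sample.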

## 3. AxTrain Framework

In this section, we present the hardware-oriented AxTrain framework that searches for a “near optimal” and resilient minimum to facilitate approximate computing and achieve a better tradeoff between inference accuracy and energy efficiency. Specifically, AxTrain exploits two different methods: AxTrain-act explicitly regularizes the NN to descend to parameter distributions that are insensitive to noise; and AxTrain-pas intentionally models approximate computing-induced noise in the forward-pass of the training and internalizes the noise distribution in its learned weights. AxTrain leverages the synergy between the active and passive methods by first training with AxTrain-act to reduce overall sensitivity and then with AxTrain-pas to learn hardware-specific noise.

### 3.1. AxTrain-active Method

#### 3.1.1. Define NN sensitivity-oriented regularization

AxTrain-act introduces robustness as an additional regularization term to an NN’s cost function to drive NN training. In machine learning, regularization is a process that can introduce prior knowledge to the training process to express preference in the solution. For example, an L2 regularization term reduces the magnitudes of NN weights and limits NN capacity to prevent over-fitting. Similarly, AxTrain-act defines robustness and incorporates it into the cost function for training, as illustrated below.

$$E_{tot}=E+\gamma\cdot S(w) \qquad (4)$$

where $E$ is the original NN output error and $S(w)$ represents the network sensitivity; a lower sensitivity suggests higher resilience and more robustness to noise. We use $\gamma$ as a preference factor for sensitivity. Based on Eq.4, AxTrain-act minimizes not only the network error but also the noise sensitivity. To reduce the output error $E$, the training algorithm employs backpropagation to evaluate the gradients and update the network weights as described in Section 2.

Since the magnitude of $S(w)$ should reflect how output deviations are affected by noisy weights, we define an NN's sensitivity as

$$S(w)=\sum_k\left(\sum_{\forall l,ij}\left|w_{ij}^{l}\right|\left|\frac{\partial O_k}{\partial w_{ij}^{l}}\right|\right) \qquad (5)$$

This definition satisfies four important aspects: 1) We employ absolute values to guarantee that the training process works on worst-case sensitivity reduction, so noises from sensitive weights cannot cancel each other out to arrive at a smaller $S(w)$. 2) $\partial O_k/\partial w_{ij}^{l}$ is the derivative of output $O_k$ with respect to a weight in layer $l$, which measures the outputs' response to weight perturbation. 3) $|w_{ij}^{l}|$ is also incorporated, since the noise induced by the approximate hardware is usually proportional to the magnitude of the weight. 4) To minimize heuristic intervention in the optimization process, we capture the total sensitivity by summing across all weights, instead of ranking or partitioning individual weights (AXNN, ). Based on this definition, we can infer that a network with a small $S(w)$ would behave similarly with and without noise, and hence exhibit better resilience against approximation. The challenge now is how to reduce network sensitivity in training.
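For a single linear layer $O=Wa$, the derivative $\partial O_k/\partial w_{ij}$ has the closed form $a_j$ (for $i=k$, and zero otherwise), so Eq. (5) can be checked numerically; the weights and activations below are arbitrary example values.

```python
import numpy as np

# Numerical check of Eq. (5) for a single linear layer O = W a.
W = np.array([[0.5, -0.2],
              [0.1,  0.3]])
a = np.array([1.0, -2.0])

# Closed form: S(w) = sum_k sum_j |w_kj| * |a_j|
S_closed = np.sum(np.abs(W) * np.abs(a))

# Finite-difference version of sum_k sum_ij |w_ij| * |dO_k/dw_ij|
eps = 1e-6
S_fd = 0.0
for k in range(2):
    for i in range(2):
        for j in range(2):
            Wp = W.copy()
            Wp[i, j] += eps
            d = ((Wp @ a)[k] - (W @ a)[k]) / eps  # dO_k/dw_ij
            S_fd += abs(W[i, j]) * abs(d)

assert abs(S_closed - S_fd) < 1e-3
```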

#### 3.1.2. Derive gradients

Inspired by BP and SGD (stochastic gradient descent), we propose to calculate the gradients that measure how the sensitivity changes with respect to the weights and then update the weights accordingly to reduce sensitivity, similar to the conventional BP weight updates for minimizing loss.

Taking a specific weight $w_{ij}$ as an example, to minimize the sensitivity we should update it along its negative gradient $-\partial S(w)/\partial w_{ij}$, which can be derived as

$$\frac{\partial S(w)}{\partial w_{ij}}=\frac{\partial\sum_k\left(\sum_{ab}|w_{ab}|\left|\frac{\partial O_k}{\partial w_{ab}}\right|\right)}{\partial w_{ij}}=\sum_k\left(\mathrm{sign}(w_{ij})\left|\frac{\partial O_k}{\partial w_{ij}}\right|+\sum_{ab}|w_{ab}|\,\mathrm{sign}\!\left(\frac{\partial O_k}{\partial w_{ab}}\right)\cdot\frac{\partial^2 O_k}{\partial w_{ab}\,\partial w_{ij}}\right) \qquad (6)$$

The first term, $\mathrm{sign}(w_{ij})\left|\partial O_k/\partial w_{ij}\right|$, is evaluated using BP.

Evaluating the second term for all weights is complicated because of the second-order derivative (Hessian matrix). Directly calculating the Hessian is time-consuming, hence we adopt Pearlmutter's algorithm (HESSIAN, ) to speed up the computation, since it can compute an NN's "Hessian (H) vector (V) product" in $O(n)$ time simply by another round of forward-backward propagation. In our case, $|w_{ab}|\,\mathrm{sign}(\partial O_k/\partial w_{ab})$ can be denoted as the vector $V$, while $\partial^2 O_k/\partial w_{ab}\partial w_{ij}$ forms the Hessian matrix for all parameters. Pearlmutter's algorithm proposes the R operator, which facilitates the calculation as

$$R_V\{f(w)\}=\left.\frac{\partial f(w+rV)}{\partial r}\right|_{r=0} \qquad (7)$$

Hence the second term of Eq.6 is transformed into $\sum_k R_V\left\{\frac{\partial O_k}{\partial w_{ij}}\right\}$. After applying the R operator to Eq.2, we have:

$$R_V\left\{\frac{\partial O_k}{\partial w_{ij}^{l+1}}\right\}=R_V\left\{\frac{\partial O_k}{\partial x_i^{l+1}}\cdot a_j^{l}\right\}=R_V\left\{\frac{\partial O_k}{\partial x_i^{l+1}}\right\}a_j^{l}+R_V\{a_j^{l}\}\frac{\partial O_k}{\partial x_i^{l+1}} \qquad (8)$$

To compute this equation, we can obtain $R_V\{a_j^{l}\}$ and $R_V\{\partial O_k/\partial x_i^{l+1}\}$ with a second round of propagation as follows:

1) For the forward pass, the R operator is applied to get $R_V\{a_j^{l}\}$:

$$R_V\{x_j^{l+1}\}=R_V\left\{\sum_{i=0}^{n}a_i^{l}\cdot w_{ji}^{l+1}\right\}=\sum_i V_{ji}^{l+1}\cdot a_i^{l}+\sum_i w_{ji}^{l+1}\cdot R_V\{a_i^{l}\} \qquad (9)$$

$$R_V\{a_j^{l+1}\}=R_V\{h^{l+1}(x_j^{l+1})\}=h^{(l+1)\prime}(x_j^{l+1})\cdot R_V\{x_j^{l+1}\} \qquad (10)$$

For the input layer, $R_V\{a_i^{0}\}=0$. After forward propagation, we can get $R_V\{a_j^{l}\}$;

2) For the backward pass in the hidden layers, to get $R_V\{\partial O/\partial x_i^{l}\}$:

$$\begin{aligned}R_V\left\{\frac{\partial O}{\partial x_i^{l}}\right\}&=R_V\left\{\left(\sum_{j=1}^{N_{l+1}}\frac{\partial O}{\partial x_j^{l+1}}\cdot w_{ji}^{l+1}\right)\cdot h'(x_i^{l})\right\}\\&=h''(x_i^{l})\,R_V\{x_i^{l}\}\left(\sum_{j=1}^{N_{l+1}}\frac{\partial O}{\partial x_j^{l+1}}\cdot w_{ji}^{l+1}\right)+h'(x_i^{l})\left(\sum_{j=1}^{N_{l+1}}\frac{\partial O}{\partial x_j^{l+1}}\cdot V_{ji}^{l+1}\right)+h'(x_i^{l})\left(\sum_{j=1}^{N_{l+1}}R_V\left\{\frac{\partial O}{\partial x_j^{l+1}}\right\}\cdot w_{ji}^{l+1}\right)\end{aligned} \qquad (11)$$

Here we omit the similar derivation for the output layer. Once we have $R_V\{a_j^{l}\}$ and $R_V\{\partial O/\partial x_i^{l}\}$, they can be substituted into Eq.8, and then into Eq.6, so that the influence of network weights on sensitivity, $\partial S(w)/\partial w_{ij}$, can be computed. Note that for AxTrain-act, the training overhead is the time consumed by another round of forward-backward propagation per batch to derive $\partial S(w)/\partial w_{ij}$, which does not burden the inference system in an off-line training scenario.
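The R operator of Eq. (7) can be sanity-checked numerically on a small quadratic function whose Hessian is known in closed form; the matrix and vectors below are arbitrary illustrative values.

```python
import numpy as np

# Check of the R operator (Eq. 7) on f(w) = 0.5 * w^T A w:
# grad f(w) = A w and the Hessian is A (A symmetric), so
# R_V{grad f} must equal the Hessian-vector product A @ v.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
w = np.array([0.5, -1.0])
v = np.array([1.0, 2.0])

grad = lambda x: A @ x

r = 1e-6
hvp_R = (grad(w + r * v) - grad(w)) / r   # d/dr grad f(w + r v) near r = 0
hvp_exact = A @ v

assert np.allclose(hvp_R, hvp_exact, atol=1e-4)
```

In an actual network, the same quantity is obtained not by finite differences but by the second forward-backward pass of Eqs. (9)–(11).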

#### 3.1.3. Update the preference factor adaptively

As defined in Eq.4, AxTrain-act optimizes both the network error and the sensitivity, and uses a preference factor $\gamma$ to control the relative magnitude of the sensitivity-related update rate. A large $\gamma$ may reduce final network accuracy, while a small one could prevent full reduction of sensitivity. To ensure NN accuracy and convergence, we leverage an adaptive update method for $\gamma$ based on (LAMB, ). Instead of using a fixed value, $\gamma$ is updated on a per-epoch basis. A $\Delta\gamma$ is added to $\gamma$ for lower sensitivity if the error in the current epoch is smaller than the weighted sum (with weights 0.5, 0.25, 0.125, …) of the training errors in previous epochs, or if the current error is smaller than a pre-defined accuracy bound. Otherwise, a $\Delta\gamma$ is subtracted to preserve training accuracy.
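A minimal sketch of this adaptive update, assuming an exponentially weighted error history; the $\Delta\gamma$ and accuracy-bound values here are hypothetical, not the paper's settings.

```python
# Sketch of the per-epoch preference-factor update: gamma grows while
# accuracy is healthy and shrinks when training error regresses.
def update_gamma(gamma, err, err_history, bound=0.05, delta=0.01):
    # Weighted sum of previous epoch errors: 0.5, 0.25, 0.125, ...
    # with the most recent epoch weighted highest.
    weights = [0.5 ** (i + 1) for i in range(len(err_history))]
    ref = sum(w * e for w, e in zip(weights, reversed(err_history)))
    if err < ref or err < bound:
        gamma += delta   # accuracy is fine: push sensitivity down harder
    else:
        gamma -= delta   # accuracy slipping: back off
    return gamma
```

For example, with history `[0.1, 0.1]` an epoch error of 0.02 raises gamma from 0.1 to 0.11, while an error of 0.2 lowers it to 0.09.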

### 3.2. AxTrain-passive Method

Different from AxTrain-act, which explicitly optimizes for robustness, AxTrain-pas exposes the nonideality of approximate hardware to the training algorithm by numerically mimicking its inexact operations in the forward propagation. Because of the incorporated hardware knowledge, AxTrain-pas can learn the noise distribution from approximate hardware and implicitly exploit the noise insensitive minimum. AxTrain-pas is a hardware-oriented approach which can be generally applied to most approximate techniques in NN accelerators: approximate arithmetic operations (ZIDONG, ) for neuron calculation and fuzzy memorization (FUZZY, ) for parameter storage.

In neuron calculation, the most computationally intensive operations are the weight-activation multiplications and the subsequent additions. Multipliers usually consume more power and contribute more delay to the critical path than the adders used for accumulation, while the precision of multiplications is relatively less critical than that of additions for NN output accuracy. All these considerations make multipliers better candidates for approximation, as shown in Fig.3, where an approximate multiplier is used in a neuron processing element. Every time a neuron forward calculation (the multiply-accumulate $\sum_i w_{ji}a_i$) is performed, the original accurate multiplications are replaced by their approximate counterparts.

Power consumption for parameter storage also plays a significant role in NN accelerators, since NNs often consist of thousands of weight parameters. Fuzzy storage is thus leveraged to trade decreased weight precision for power reduction. Fig.3 also shows fuzzy storage for local and global weights. To model the effect of approximate computing in the training algorithm, AxTrain-pas models the noise induced to network weights by fuzzy memorization whenever the weights are retrieved in the forward propagation. Taking NTV-based fuzzy storage (detailed later) as an example, NTV causes random bit flips, since low supply voltage renders SRAM cells less reliable. During training, AxTrain-pas models NTV induced flips as stochastic noise (NTC, ) and injects the noise by randomly flipping the bits in network weights at a certain probability (based on voltage level and technology). Note that AxTrain-pas applies approximation statically throughout the network. This policy reduces hardware complexity, such as the support for runtime multiplier reconfiguration and memory mode switching.
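The bit-flip injection described above can be sketched as follows, assuming an 8-bit signed fixed-point weight format with 7 fractional bits; the format and the independent per-bit flip model are illustrative, not the exact accelerator configuration.

```python
import random

# Sketch of AxTrain-pas noise injection for NTV storage: each stored
# bit of a fixed-point weight flips independently with probability p
# (p depends on the supply voltage and technology).
FRAC_BITS = 7  # assumed Q1.7 signed format

def to_fixed(w):
    # 8-bit two's complement encoding
    return int(round(w * (1 << FRAC_BITS))) & 0xFF

def from_fixed(b):
    if b & 0x80:          # sign bit set: negative value
        b -= 0x100
    return b / (1 << FRAC_BITS)

def inject_flips(w, p, rng):
    b = to_fixed(w)
    for bit in range(8):
        if rng.random() < p:
            b ^= 1 << bit  # flip this stored bit
    return from_fixed(b)

rng = random.Random(0)
w = 0.40625                               # exactly representable: 52/128
assert inject_flips(w, 0.0, rng) == w     # p = 0: storage is exact
```

A sign-bit flip dominates the error magnitude, which motivates the format discussion that follows.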

When applying approximate computing to an NN, we should first minimize the noise from the approximate hardware itself. Taking NTV-based storage as an example, the upper bound of the noise in a weight is determined by the binary format used to represent network weights; e.g., the noise magnitude for a sign-bit flip in a fixed-point number corresponds to the maximum value that the fixed-point format can represent. Hence unnecessary high-order bits that do not affect accuracy should be eliminated to confine the effect of the noise. Fortunately, most network weights can easily be regularized to concentrate within the range $(-1,1)$, so integer bits may not be necessary to represent the weights. The network's activations, however, are typically almost two orders of magnitude larger than the weights, which suggests that activations and weights should be represented in different fixed-point formats. Hence, dynamic fixed-point representation is used in NN accelerators to maintain network functionality and confine noise (PRIME, ).
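A minimal sketch of dynamic fixed point, where the number of fractional bits is chosen per value group so that small weights and much larger activations both use the full word width; the bit allocations and values below are illustrative.

```python
# Sketch of dynamic fixed-point quantization: an 8-bit word interpreted
# with a per-group fractional-bit count (the "dynamic" exponent).
def quantize(x, frac_bits, word=8):
    lo, hi = -(1 << (word - 1)), (1 << (word - 1)) - 1
    q = max(lo, min(hi, round(x * 2 ** frac_bits)))  # scale, round, saturate
    return q / 2 ** frac_bits                        # back to real value

# Weights (|w| < 1) get all fractional bits; activations get mostly
# integer bits. Same 8-bit word, two different formats.
w_q = quantize(0.013, frac_bits=7)   # weight format: Q1.7
a_q = quantize(37.3, frac_bits=1)    # activation format: Q7.1
```

Values outside the representable range saturate, e.g. `quantize(300.0, 1)` clips to 63.5.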

Calculate gradients by straight-through estimator. After augmenting the forward propagation pass with numerical models of approximate computing, a natural question arises: how are the gradients backpropagated through approximate hardware? Given the nonlinear or stochastic nature of approximate hardware, it is hard to analytically compute the precise derivatives across the entire input range for approximate operations. Inspired by Hinton's lecture (12b) (HINTONLECTURE, ) and Bengio's work (STOGRA, ), we adopt the "straight-through estimator" technique in AxTrain-pas.

This BP method directly passes gradients from the outputs of an approximate operator to its inputs, while preventing noise-induced large gradients from disturbing the training algorithm's convergence. Based on our experimental evaluation, this BP method is effective for AxTrain-pas training.
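A minimal sketch of the straight-through estimator, using coarse quantization as a stand-in for the approximate operator (the operator and values are illustrative):

```python
import numpy as np

# Straight-through estimator sketch: the forward pass applies a
# nondifferentiable approximate op; the backward pass treats the op
# as the identity and passes the gradient straight through.
def approx_forward(x):
    return np.round(x * 4) / 4   # coarse 2-fractional-bit quantization

def ste_backward(grad_out):
    return grad_out              # dE/dx := dE/dy, unchanged

x = np.array([0.30, -0.55])
y = approx_forward(x)            # quantized activations
grad_y = np.array([1.0, -2.0])   # dE/dy arriving from downstream layers
grad_x = ste_backward(grad_y)    # gradient delivered to the inputs
```

The true derivative of the quantizer is zero almost everywhere, so without the straight-through pass no learning signal would reach the weights.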

We examine the efficacy of AxTrain-act and AxTrain-pas by comparing their weight sensitivities (flatness) with conventional BP. Fig.4 shows the relative sensitivities of the weights in the last (most critical) layer of a multilayer perceptron (MLP) model for the MNIST digit recognition dataset, where deeper blue indicates less sensitive weights. AxTrain-pas is trained with approximate multipliers in the most aggressive mode. Fig.4 demonstrates that AxTrain-act significantly reduces the sensitivities across all the network weights, while AxTrain-pas implicitly learns the noise distribution and selectively reduces the sensitivity of those weights that suffer larger noise from the approximate multiplier.

## 4. Experimental Methodology

NN accelerator architecture. To evaluate the energy efficiency improvement from approximate computing, we implement a flexible data-driven NN accelerator named "FlexFlow" (FLEXFLOW, ) tailored for ANNs, as shown in Fig.5. FlexFlow employs a weight buffer and a neuron buffer for storage, a group of processing engines (PEs) for computation, and an instruction decoder for control. To perform neuron calculations, each PE consists of a multiplier, an adder, a neuron local memory, a weight local memory, and a controller.

Case studies on two approximate hardware. 1) Approximate multiplier. Without loss of generality, to assess the implications of approximate multiplication in an NN accelerator, we adopt an existing approximate multiplier for weight-activation multiplication (DRUM, ). This design explores the tradeoff between precision and computing efficiency by changing the effective width $k$ used for computation. Generally, in each operand of the multiplier, from the MSB to the LSB only the first nonzero bit and its $k-1$ consecutive lower bits are retrieved (with the last retrieved bit set) for computation. In this way, with a smaller $k$, the approximate multiplier gains higher energy efficiency at the cost of increased noise. In the experiment, we adopt four configurations (K1, K2, K3, K4).
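Such a dynamic-range truncation can be sketched as follows; this is an illustrative reimplementation of the idea behind (DRUM, ), not the exact published design.

```python
# Sketch of a DRUM-style approximate multiplier: for each operand,
# keep k bits starting at the leading one, force the lowest kept bit
# to 1 (an unbiasing trick), zero the discarded bits, then multiply.
def drum_trunc(x, k):
    if x < (1 << k):                 # operand already fits in k bits
        return x
    shift = x.bit_length() - k       # position of the lowest kept bit
    return ((x >> shift) | 1) << shift

def approx_mul(a, b, k):
    return drum_trunc(a, k) * drum_trunc(b, k)

# With k = 4, even wide operands stay within a modest relative error:
exact = 1000 * 750
approx = approx_mul(1000, 750, 4)    # 960 * 704 = 675840
```

Shrinking `k` narrows the partial-product array, which is where the energy saving comes from.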

2) Near-threshold voltage storage. For fuzzy storage, we leverage an NTV supply voltage for SRAM weight storage. Conventionally, SRAM works as reliable storage at nominal voltage levels (e.g., 1.1V). To improve energy efficiency, the SRAM supply voltage can be reduced to the NTV regime at the risk of bit flipping (NTC, ). In this case, the supply voltage can be treated as a knob to tune the degree of approximation, which determines the flip probability. For each application, we select two representative knobs from three voltage levels (flip rate@voltage: 10%@400mV, 1%@660mV, 0.1%@850mV (NTC, )), as Table 1 depicts.

Power evaluation flow. To evaluate the power improvement from the approximate multiplier, we first implement the approximate hardware in Verilog and then synthesize the design using Synopsys Design Compiler with the TSMC 65nm library. The power results are gathered using Synopsys PrimeTime. We evaluate the NTV-based storage with CACTI-P (CACTI, ).

Training tool and Dataset. To evaluate the accuracy of NN, we implement the training algorithm and inference simulator using the PyTorch deep learning framework. The datasets we used are detailed in Table 1. Breast cancer, Image segmentation, Ionosphere, and Satimage are obtained from the UCI Machine Learning Repository (UCI, ), and MNIST is a well known dataset for digit classification (MNIST, ). We evaluate both the MLP and CNN models for MNIST. For each dataset, 80% of samples are used for training, while the remaining 20% are used for testing. In the off-line training, the networks are first trained with AxTrain-act until both the network error and sensitivity cost converge, then tuned with AxTrain-pas for a few more epochs (e.g., 10 epochs for MNIST) without hurting the accuracy.

## 5. Experimental results

We conduct experiments on six representative applications with four approximate multiplier configurations (K1, K2, K3, K4) and two NTV levels, which include an aggressive (Agg) lower voltage and a conservative (Con) higher voltage, as Table 1 illustrates.

First, we compare the output error of NNs under different approximate multiplier configurations for networks trained by conventional BP, AxTrain-act, and AxTrain, as shown in Fig.6. The errors under different approximate configurations are normalized to the original network results with accurate multipliers for each application. Fig.6 shows that the network outputs suffer larger error with more aggressive approximation configurations. Compared with the conventional BP scheme, AxTrain-act exhibits higher noise tolerance by reducing error by 40.77%, 34.56%, and 25.15% on average for K1, K2, and K3, respectively, while AxTrain further reduces error by 75.61%, 58.45%, and 37.66%. We notice that in a few rare cases (like K4 in MNIST), when using multipliers with conservative approximation, AxTrain-act performs slightly better than AxTrain. Due to the intrinsic error tolerance of the NN, the accuracy degradation caused by conservative approximate multipliers is quite small, so the improvement headroom is limited.

For NTV-based SRAM weight storage, we show the results from fifty runs, since NTV-induced bit flipping is a probabilistic event. Fig.8 shows the average accuracies and their deviations. The output accuracies for the aggressive and conservative NTV modes increase by 32.10% and 8.163% on average, respectively, compared with conventional BP. The figure also indicates that AxTrain reduces the side effects of bit flips in the aggressive NTV mode and restores the output quality to a level equal to what conventional BP attains in the conservative NTV mode. Note that accuracy is used as the comparison metric instead of network error, because the error magnitude for conventional BP in the MNIST-Agg case is too large to be properly shown.

For a thorough evaluation of AxTrain, we also implement a recent approach, ApproxAnn, that can be used compatibly with AxTrain-act, thanks to the orthogonality of our training-based approach in supplementing efficient techniques from other domains. Unlike AxTrain, which uses one approximate configuration throughout the inference, ApproxAnn sets a target accuracy requirement (2% maximum allowed degradation in this case). It then retrains a pre-trained BP network to employ as many approximate multipliers as possible to replace accurate multipliers without exceeding the requirement. Hence we compare the number of approximate multipliers ApproxAnn could use with pre-trained networks using conventional BP and AxTrain-act, and we show the results for aggressive approximation (K1, K2) in Fig.8. As expected, NN under less aggressive K2 could always employ a larger number of multipliers than under more aggressive K1. ApproxAnn with an AxTrain-trained network could use 23.33% more approximate multipliers on average in the K2 mode and 25.51% more in the K1 mode than ApproxAnn-BP, which means AxTrain helps ApproxAnn to better exploit the power saving opportunity.

Finally, to demonstrate the benefit of AxTrain at the system level, we compare the lowest power consumption that approximate computing can attain on the FlexFlow accelerator using AxTrain versus conventional BP, while keeping a target accuracy (maximum 2% degradation relative to the accurate implementation), as depicted in Fig.9. On the x-axis of this figure, we also show the approximation modes that AxTrain and BP apply. The figure shows that AxTrain's higher noise resilience allows more aggressive approximation, which leads to lower power consumption than conventional BP. Specifically, computational power and storage power are reduced by 41.57% and 33.14% on average, respectively. We also compare the computational power consumption under the approximate multiplier between AxTrain and ApproxANN, and AxTrain requires 25.73% less power than ApproxANN on average. Notably, for the Satimage dataset in the NTV storage case, a conservative voltage of 0.85V is enforced to maintain the tight accuracy constraint. By relaxing the allowed degradation to 6%, a 27.01% power reduction is achieved at 0.66V.

## 6. Conclusion

Approximate computing leverages the intrinsic error tolerance of a neural network for improved energy efficiency. The main objective is maintaining good-enough accuracy under aggressive approximation. In this paper, we propose the AxTrain framework to optimize NNs for both accuracy and robustness. Using both explicit training and implicit learning, AxTrain reduces an NN's sensitivity and improves its resilience against approximation. Experimental results under NTV- and approximate-multiplier-based approximate computing techniques reveal that AxTrain leads to more robust networks than conventional hardware-agnostic training frameworks.

###### Acknowledgements.
This work was supported in part by Natural Science Foundation Award #1657562 and National Natural Science Foundation of China under Grant No. 61572470.

## References

• (1) P. Dubey, “Recognition, mining and synthesis moves computers to the era of tera,” Technology@ Intel Magazine, vol. 9, no. 2, pp. 1–10, 2005.
• (2) K. He, et al, “Deep residual learning for image recognition,” in CVPR, pp. 770–778, 2016.
• (3) I Hubara and M Courbariaux and D Soudry and R El-Yaniv and Y Bengio, “Binarized neural networks,” in NIPS, 2016.
• (4) S Han, et al, “Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding,” in ICLR, 2016.
• (5) X. He, G. Yan, Y. Han, and X. Li, “Exploiting the potential of computation reuse through approximate computing,” TMSCS, vol. 3, no. 3, pp. 152–165, 2017.
• (6) X. He, G. Yan, F. Sun, Y. Han, and X. Li, “ApproxEye: Enabling approximate computation reuse for microrobotic computer vision,” in ASP-DAC, pp. 402–407, 2017.
• (7) L Chen, et al, “Accelerator-friendly neural-network training: Learning variations and defects in rram crossbar,” in DATE, pp. 19–24, 2017.
• (8) RG Dreslinski, et al, “Near-threshold computing: Reclaiming moore’s law through energy efficient integrated circuits,” Proceedings of the IEEE, vol. 98, no. 2, pp. 253–266, 2010.
• (9) J. Miao, et al, “Modeling and synthesis of quality-energy optimal approximate adders,” in ICCAD, pp. 728–735, 2012.
• (10) Y Han, et al, “Enabling near-threshold voltage (ntv) operation in multi-vdd cache for power reduction,” in ISCAS, pp. 337–340, 2013.
• (11) B. Reagen, et al, “Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators,” in ISCA, pp. 267–278, 2016.
• (12) O. Temam, “A defect-tolerant accelerator for emerging high-performance applications,” in ISCA, 2012.
• (13) Z Du, et al, “Leveraging the error resilience of machine-learning applications for designing highly energy efficient accelerators,” in ASP-DAC, pp. 201–206, 2014.
• (14) S. Venkataramani, A. Ranjan, K. Roy, and A. Raghunathan, “AxNN: Energy-Efficient Neuromorphic Systems using Approximate Computing,” in ISLPED, pp. 27–32, 2014.
• (15) Q Zhang, et al, “ApproxANN: an approximate computing framework for artificial neural network,” in DATE, pp. 701–706, 2015.
• (16) B. Pearlmutter, “Fast exact multiplication by the hessian,” Neural Computation, vol. 6, no. 1, pp. 147–160, 1994.
• (17) AS Weigend, et al, “Generalization by Weight-Elimination with Application to Forecasting,” in NIPS, pp. 875–882, 1991.
• (18) P. Chi, et al, “Prime: A novel processing-in-memory architecture for neural network computation in reram-based main memory,” in ISCA, pp. 27–39, 2016.
• (19) G. Hinton, “Neural networks for machine learning,” Coursera, 2012.
• (20) Y Bengio, et al, “Estimating or propagating gradients through stochastic neurons for conditional computation,” in arXiv preprint, 2013.
• (21) W. Lu, et al, “FlexFlow: A Flexible Dataflow Accelerator Architecture for Convolutional Neural Networks,” in HPCA, pp. 553–564, 2017.
• (22) S Hashemi, et al, “DRUM: A dynamic range unbiased multiplier for approximate applications,” in ICCAD, pp. 418–425, 2015.
• (23) S. Li, et al, “CACTI-P: Architecture-level modeling for SRAM-based structures with advanced leakage reduction techniques,” in ICCAD, pp. 694–701, 2011.
• (24) M. Lichman, “UCI machine learning repository,” 2013.
• (25) Y. Lecun and C. Cortes, “The mnist database of handwritten digits,” 1998.