AdaptivFloat: A Floating-Point Based Data Type for Resilient Deep Learning Inference
1 Introduction
Deep learning approaches have transformed representation learning in a multitude of tasks. Recurrent Neural Networks (RNNs) are now the standard solution for speech recognition, exhibiting remarkably low word error rates Chiu et al. (2017), while neural machine translation has narrowed the performance gap versus human translators Wu et al. (2016). Convolutional Neural Networks (CNNs) are now the dominant engine behind image processing and have been pushing the frontiers in many computer vision applications Krizhevsky et al. (2012); He et al. (2016). Today, deep neural networks (DNNs) are deployed at all computing scales, from resource-constrained IoT edge devices to massive data center farms. In order to exact higher compute density and energy efficiency on these compute platforms, a plethora of reduced-precision quantization techniques have been proposed.
In this line of research, a large body of work has focused on fixed-point encodings Choi et al. (2019); Gupta et al. (2015); Lin et al. (2015); Hwang and Sung (2014); Courbariaux et al. (2015) or uniform integer quantization Migacz (2017); Jacob et al. (2017). These fixed-point techniques are frequently evaluated on shallow models or on CNNs exhibiting relatively narrow weight distributions. However, as can be seen from Figure 1, sequence transduction models with layer normalization, such as the Transformer Vaswani et al. (2017), can contain weights more than an order of magnitude larger than those from popular CNN models with batch normalization such as ResNet50, Inception-v3, or DenseNet201. The reason for this phenomenon is that batch normalization effectively produces a weight normalization side effect Salimans and Kingma (2016), whereas layer normalization adopts invariance properties that do not reparameterize the network Ba et al. (2016).
In the pursuit of wider dynamic range and improved numerical accuracy, there has been surging interest in floating-point based Drumond et al. (2018); Köster et al. (2017), logarithmic Johnson (2018); Miyashita et al. (2016), and posit representations Gustafson and Yonemoto (2017), which also form the inspiration of this work.
AdaptivFloat improves on the aforementioned techniques by dynamically maximizing its available dynamic range at a neural network layer granularity. Unlike block floating-point (BFP) approaches with shared exponents, which may lead to degraded rendering of smaller-magnitude weights, AdaptivFloat achieves higher inference accuracy by remaining committed to the standard floating-point delineation of independent exponent and mantissa bits for each tensor element. However, we break from IEEE 754 standard compliance with a unique clamping strategy for denormal numbers and a customized proposition for zero assignment, which enables us to engineer leaner hardware.
Rather than proposing binary or ternary quantization techniques evaluated on a small number of carefully selected models, we aim, through AdaptivFloat, to inform a generalized floating-point based mathematical blueprint for adaptive and resilient DNN quantization that can be easily applied to neural models of various categories (CNN, RNN, or MLP), layer depths, and parameter statistics.
By virtue of an algorithm-hardware co-design, we also propose a processing element implementation that exploits the AdaptivFloat arithmetic in its computational datapath in order to yield energy efficiencies that surpass those of integer-based variants. Furthermore, owing to the superior performance of AdaptivFloat at very low word sizes, as will be shown, higher compute density can be acquired at a lower penalty for computational accuracy compared to block floating-point, integer, or non-adaptive IEEE-like float or posit encodings. Altogether, the AdaptivFloat algorithm-hardware co-design framework offers a compelling alternative to integer or fixed-point solutions.
Finally, we note that the AdaptivFloat encoding scheme is self-supervised, as it relies only on unlabeled data distributions in the network.
This paper makes the following contributions:

We propose and describe AdaptivFloat: a floating-point based data encoding algorithm for deep learning, which maximizes its dynamic range at a neural network layer granularity by dynamically shifting its exponent range and by optimally clipping its representable datapoints.

We evaluate AdaptivFloat across a diverse set of DNN models and tasks and show that it achieves higher classification and prediction accuracies compared to equivalent-bit-width uniform, block floating-point, and non-adaptive posit and float quantization techniques.

We propose a hybrid Float-Integer (HFINT) PE implementation that exploits the AdaptivFloat mechanism and provides a cost-effective compromise between the high accuracy of floating-point computations and the greater hardware density of fixed-point post-processing. We show that the HFINT PE produces higher energy efficiencies compared to conventional monolithic integer-based PEs.

We design and characterize an accelerator system targeted for sequence-to-sequence neural networks and show that, when integrated with HFINT PEs, it obtains lower overall power consumption compared to an integer-based adaptation.
The rest of the paper is structured as follows. Section 2 summarizes prominent number and quantization schemes used in deep learning. We present the intuition and a detailed description of the AdaptivFloat algorithm in Section 3. The efficacy and resiliency of AdaptivFloat are demonstrated in Section 4 across DNN models of varying parameter distributions. Section 5 describes the hardware modeling, with energy, area, and performance efficiency results reported in Section 6. Section 7 concludes the paper.
2 Related Work
Quantization Techniques. Low-precision DNN training and inference have been researched heavily in recent years with the aim of saving energy and memory costs. A significant fraction of prior work in this domain Wu et al. (2015); Mishra et al. (2017); Park et al. (2017); Zhou et al. (2016); Cai et al. (2017); Zhang et al. (2018); Han et al. (2015) has focused on, or evaluated its low-precision strategies strictly on, CNNs or models with narrow parameter distributions. Notably, inference performance with modest accuracy degradation has been demonstrated with binary Courbariaux and Bengio (2016), ternary Zhu et al. (2016), and quaternary Choi et al. (2019) weight precision. Often, tricks such as skipping quantization on the sensitive first and last layers are employed in order to avoid steeper end-to-end accuracy loss.
Extending these aggressive quantization techniques to RNNs has been reported Alom et al. (2018), although still with recurrent models exhibiting the same narrow distributions seen in many CNNs. Park et al. (2018) noticed that large-magnitude weights bear a higher impact on model performance and proposed outlier-aware quantization, which requires separate low and high bit-width precisions for small and outlier weight values, respectively. However, this technique complicates the hardware implementation by requiring two separate PE datapaths for the small and the outlier weights.
Hardware-Friendly Encodings. Linear fixed-point or uniform integer quantization is commonly used for deep learning hardware acceleration Jouppi et al. (2017); Jacob et al. (2017); Reagen et al. (2016) as it presents an area- and energy-cost-effective solution compared to floating-point based processors. Moreover, low-precision integer inference has already made its way into commercial systems. NVIDIA's TensorRT Migacz (2017) is a commercial library that determines 8-bit integer quantization parameters offline for GPU inference. Google's TPU, based on INT8 computations, has been deployed in datacenters to perform accelerated inference of DNN applications. While integer quantization has mostly been applied to CNNs, Bhandare et al. (2019) demonstrated robust INT8 quantization on the Transformer network. In this paper, a broader quantization study on the same network is given, with AdaptivFloat showing minor degradation when the weight size is as low as 5 bits.
Aware of the dynamic range limitation of fixed-point encoding, block floating-point data types have emerged, such as Flexpoint Köster et al. (2017) and inspired variants employed in the Brainwave NPU Fowers et al. (2018). Block floating-point's appeal stems from its potential to achieve floating-point-like dynamic range with hardware cost and implementation comparable to fixed-point. However, by collapsing the exponent value of each tensor element to that of the element with the highest magnitude, elements with smaller magnitudes become more prone to data loss. Logarithmic approaches Vogel et al. (2018); Lee et al. (2017), which replace fixed-point multipliers with shifters, have demonstrated smaller power consumption and higher throughput compared to linear-based architectures.
Number Formats with Higher Dynamic Range. For DNN computation workloads with very large operands, 16-bit number formats such as INT16 and FP16 have been adopted. Several commercial hardware accelerators, such as the 2nd-generation TPU and Intel FPGAs, have opted to use the bfloat16 data type, as it preserves the dynamic range of 32-bit float by retaining its eight exponent bits, at the cost of reduced precision with 7 fractional bits.
Additionally, there has been increasing interest in using the posit data type in deep learning due to its ability to exact higher accuracy and larger dynamic range compared to floats Gustafson and Yonemoto (2017). In particular, posit's tapered precision can represent the small values where the majority of DNN parameters tend to fall more accurately than floating-point numbers can. However, although posit offers a wider reach compared to floats, its accuracy on larger numbers, for a given bit width, may be lower than that of floats. Furthermore, due to the dynamic nature of its regime bits, the hardware implementation of a posit-based datapath exhibits a worse energy-delay product compared to fixed-point and floating-point implementations Carmichael et al. (2018). We include the posit numerical format in our experimental DNN performance evaluation and comparison with the AdaptivFloat data type.
3 Methodology
In this section, we provide greater detail on AdaptivFloat, the floating-point based number encoding format for adaptive and resilient DNN quantization. We describe the inner workings behind the adjustment of the available dynamic range of representable values in order to best fit the weights of the neural network layers.
3.1 The AdaptivFloat Format
The AdaptivFloat number representation scheme generally follows the IEEE 754 standard floating-point format, which includes sign, exponent, and mantissa bit fields. In order to efficiently encode all the representable datapoints in hardware, we avoid the computation of denormal values in the AdaptivFloat number system.
The main problem arising from not using denormals in floating point is the lack of a representation for the "zero" point, which is essential to neural network computations. We address this constraint by sacrificing the positive and negative minimum values to allocate a slot for "zero", as shown in Figure 2.
Moreover, at very low bit compression, customization based on the value range of a neural network layer can greatly reduce the quantization error with little overhead from the shared extra parameters. Thus, similar to integer quantization, which uses a quantization scale (or step), we introduce an exponent bias value, exp_bias, to dynamically shift the range of exponent values at a layer granularity. The calculation of exp_bias is described in Section 3.2. A benefit of using exp_bias is the simplicity of the hardware logic required to perform the adaptive operation, compared to the multiplying quantization scale used in integer quantization. This contrast is discussed in detail in Section 5.
Algorithm 1 shows how a bit vector, with an exp_bias generated per layer, is converted to its decimal representation. If both the exponent bits and the mantissa bits are zeros, the bit vector is interpreted as zero. Otherwise, the bit vector is converted by the following equation:

value = (−1)^sign × 2^(exponent + exp_bias) × mantissa

where the exponent value is the sum of the exponent bits and exp_bias, and the mantissa value is calculated by appending an implied "one" as the MSB, following the same format as standard floating point.
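The decode step of Algorithm 1 can be sketched in a few lines of Python. This is an illustrative reconstruction from the description above, not the authors' implementation; the function and field names are hypothetical, and the layout (sign, then exponent, then mantissa, MSB to LSB) is assumed:

```python
def adaptivfloat_decode(bits: int, n: int, e: int, exp_bias: int) -> float:
    """Decode an n-bit AdaptivFloat bit vector with e exponent bits.

    Layout (MSB to LSB): 1 sign bit | e exponent bits | n-1-e mantissa bits.
    The all-zeros exponent/mantissa pattern is the reserved zero slot.
    """
    m = n - 1 - e                       # number of mantissa bits
    sign = (bits >> (n - 1)) & 0x1
    exp = (bits >> m) & ((1 << e) - 1)
    mant = bits & ((1 << m) - 1)
    if exp == 0 and mant == 0:          # reserved slot for zero (no denormals)
        return 0.0
    # implied leading one on the mantissa; exponent shifted by the per-layer bias
    value = (1 + mant / (1 << m)) * 2.0 ** (exp + exp_bias)
    return -value if sign else value
```

For example, with an 8-bit format, 3 exponent bits, and exp_bias = −4, the all-zeros bit vector decodes to 0.0 rather than the smallest denormal magnitude.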
3.2 Quantization
Having defined the AdaptivFloat data format, the quantization problem is simply mapping a full-precision value to the nearest representable datapoint. For this purpose, we need to determine the optimal AdaptivFloat exp_bias, which maximizes the available dynamic range of the encoding format in order to provide the most accurate rendering of a particular matrix (or NN layer). This is analogous to determining the quantization scale for integer quantization, but exp_bias is a small, typically negative, integer value rather than the high-precision floating-point scaling factor needed in integer quantization Migacz (2017).
Algorithm 2 describes how to find the most suitable exp_bias in order to encode, as faithfully as possible, the AdaptivFloat-quantized weight matrix. We first compute the sign matrix and the matrix of absolute values from the full-precision weight matrix. Then, the algorithm finds the maximum absolute value, which determines the exp_bias corresponding to a suitable range of representable datapoints for the weight matrix to quantize. Before quantizing, we round values smaller than the minimum representable AdaptivFloat value either to zero or to that minimum, using a halfway threshold, and clamp values larger than the maximum representable value to that maximum. The quantization itself rewrites the matrix of absolute values into normalized exponent and mantissa form, and the mantissa matrix is quantized with a quantization scale determined by the number of mantissa bits. Finally, the AdaptivFloat-quantized matrix is reconstructed by multiplying the sign matrix, the quantized mantissa matrix, and the power-of-two exponents.
We use a shorthand notation to indicate an n-bit AdaptivFloat number with e exponent bits. Figure 3 illustrates how the exp_bias is chosen to best fit the range of values in a weight matrix, along with the resulting quantized datapoints adhering to the format. The bit vectors containing the AdaptivFloat-quantized values can be packed and stored in hardware memory resources.
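As a concrete illustration of Algorithm 2, the following Python sketch quantizes a flat list of FP32 weights to an n-bit AdaptivFloat format with e exponent bits. It is a hedged reconstruction from the description above: the bias selection and rounding details are assumptions rather than the authors' exact implementation, and a nonzero weight tensor is assumed:

```python
import math

def adaptivfloat_quantize(w, n=8, e=3):
    """Quantize a flat list of FP32 values to n-bit AdaptivFloat with e exponent
    bits (sketch of Algorithm 2). Returns (quantized values, exp_bias)."""
    m = n - 1 - e                                   # mantissa bits
    # pick exp_bias so the largest representable exponent covers max(|w|)
    w_max = max(abs(x) for x in w)
    exp_bias = math.floor(math.log2(w_max)) - ((1 << e) - 1)
    val_min = 2.0 ** exp_bias                       # smallest normal magnitude
    val_max = (2 - 2.0 ** -m) * 2.0 ** (exp_bias + (1 << e) - 1)
    out = []
    for x in w:
        s, a = (-1.0 if x < 0 else 1.0), abs(x)
        if a < 0.5 * val_min:                       # below halfway: zero slot
            out.append(0.0)
            continue
        a = min(max(a, val_min), val_max)           # clamp into range
        exp = math.floor(math.log2(a))              # normalized exponent
        mant = a / 2.0 ** exp                       # mantissa in [1, 2)
        scale = 2.0 ** -m                           # mantissa quantization step
        mant_q = min(round(mant / scale) * scale, 2 - scale)
        out.append(s * mant_q * 2.0 ** exp)
    return out, exp_bias
```

For instance, quantizing [0.3, −1.5, 0.001] with n = 8 and e = 3 yields exp_bias = −7; the tiny 0.001 falls below half the minimum representable magnitude and is rounded to the zero slot.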
3.3 Shifting the AdaptivFloat Range
The main characteristic of AdaptivFloat quantization is using the exp_bias to shift the range of quantized datapoints. Figure 4 depicts the quantization strategy applied to different ResNet50 and Transformer layers, which have varying weight distribution ranges. For ResNet50, we show the convolution layers with 1x1 and 3x3 weight kernels as examples, and for the Transformer, we show two encoder layers and one decoder layer. The plots illustrate how the exp_bias can vary from one layer to another in order to best fit each layer's natural weight distribution. For example, the exp_bias of ResNet50 can be seen adjusting from −8 to −10, while it transitions from −5 to −7 in the Transformer network. The narrower the weight distribution becomes, which indicates a smaller maximum value in the weight tensor, the more negative exp_bias gets.
4 Experimental Results
For bit compression evaluation, we select three popular DNN models of distinct neural types and applications, exhibiting relatively narrow to wide spreads in their weight distributions. The models considered, as shown in Table 1, are: (1) the Transformer Vaswani et al. (2017), which made a very consequential impact in the field of machine translation and question answering; (2) a 4-layer LSTM encoder, 1-layer LSTM decoder, attention-based sequence-to-sequence (seq2seq) network Chorowski et al. (2015) commonly used in speech recognition; and (3) ResNet50, a well-known image classification CNN He et al. (2016). The Transformer and the LSTM-based seq2seq networks are trained on the OpenNMT platform Klein et al. (2017) using the WMT'17 English-to-German and LibriSpeech datasets, respectively. The PyTorch toolkit Pytorch (Technical report) is used to train ResNet50 on the ImageNet dataset.
We compare the efficacy of AdaptivFloat against numerical data types frequently employed for deep learning acceleration, namely block floating-point (BFP), IEEE-like float, posit, and uniform representations. We created templates of these data types in Python to be run within the PyTorch framework. The AdaptivFloat, uniform, and block floating-point quantization schemes are self-adaptive in the sense that their dynamic range auto-adjusts based on the distribution of the data. The number of exponent bits in the AdaptivFloat, IEEE-like float, and posit formats is set uniformly for all the layers in the network to the value yielding the highest inference accuracy after a search on the exponent width. Generally, the best inference performance was obtained with the exponent space set to 3 bits for AdaptivFloat, 4 bits for float (3 bits when the word size becomes 4 bits), and 1 bit for posit (0 bits when the word size becomes 4 bits).
Finally, we note that the following results are generated by quantizing all of the layers in the DNN models in order to capture the full end-to-end performance of these five numerical data types, unlike several works Choi et al. (2019); Zhou et al. (2016); Mishra et al. (2017) that intentionally skip quantization for the first and last layers.
4.1 Root Mean Squared Error
We begin by quantifying the quantization error of the number formats with respect to baseline FP32 precision. The boxplots in Figure 5 show the distribution of the root mean squared (RMS) quantization error produced by each data type, computed across all layers of the three models under evaluation. AdaptivFloat consistently produces a lower average quantization error compared to the uniform, BFP, posit, or IEEE-like float encodings. Furthermore, among the self-adaptive data types, AdaptivFloat exhibits the tightest error spread for all bit widths of the Transformer and seq2seq networks, while BFP's error spread is narrowest for the 6-bit and 8-bit versions of the ResNet50 model, although with a higher mean compared to AdaptivFloat. This suggests that BFP fares best in networks with a slimmer weight distribution, such as ResNet50. Among the non-adaptive data types, we see that posit generally yields both a lower average RMS quantization error and a narrower interquartile error range compared to float. These results provide important insights into quantized DNN performance as we dive into the inference accuracy results in the next subsection.
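For reference, the per-layer RMS quantization error used in this comparison can be computed as follows. This is a minimal sketch that assumes each layer's weight tensor has been flattened to a list; the function name is illustrative:

```python
def rms_quant_error(w_fp32, w_quant):
    """Root-mean-squared error between a layer's FP32 weights and their
    quantized rendering."""
    assert len(w_fp32) == len(w_quant)
    sq = sum((a - b) ** 2 for a, b in zip(w_fp32, w_quant))
    return (sq / len(w_fp32)) ** 0.5
```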
4.2 Inference Performance Analysis
Tables 2, 3, and 4 show the resiliency of the data types under study as they are put to the test under varying weight bit compression on the Transformer, sequence-to-sequence, and ResNet50 models, respectively. The inference results are tabulated after post-training quantization (PTQ) and after quantization-aware retraining (QAR) from the plateaued FP32 baseline. The training setup and the hyperparameter recipe are kept the same for all five data types under evaluation in order to impose a fair comparison.
The key observation we can distill is that AdaptivFloat demonstrates much greater resiliency at very low precision (≤ 6-bit) compared to the other four data formats. For instance, at 4-bit encoding, AdaptivFloat can still yield, after retraining, a decent BLEU score of 25.5 on the Transformer model, while the impact on the other four number formats is catastrophic due to insufficient dynamic range or decimal accuracy.
We can make similar observations on the seq2seq and ResNet50 models, as AdaptivFloat shows modest retrained performance degradation at 4-bit and 5-bit weight precision. Notably, only a 1.2 Top-1 accuracy drop is seen with a weight width of 4 bits. When the weights of the seq2seq model are quantized to 4 bits, the non-adaptive data types (float and posit) are essentially unable to provide expressible transcription. This suggests that, for resilient performance at very low word sizes, it is critical to have a quantization scheme that can adjust its available dynamic range to represent the network's weights as faithfully as possible. AdaptivFloat's observed robustness at very low precision enables higher compute density in reconfigurable architectures at a low penalty for computational accuracy.
Adding noise to weights when computing the parameter gradients has been shown to exact a regularization effect that can improve generalization performance Noh et al. (2017). This effect can be seen in all data types but is particularly pronounced in AdaptivFloat, with performance exceeding FP32 by up to +0.3 in BLEU score, a 0.75 reduction in word error rate, and +0.1 in Top-1 accuracy.
4.3 Effect of both Weight and Activation Quantization
Tables 5, 6, and 7 report the inference performance from reducing the word size of both weights and activations on the Transformer, seq2seq, and ResNet50 models, respectively. The notation W/A denotes the weight bit width and the activation bit width.
We observe that AdaptivFloat's 8-bit performance is as good as, if not better than, the baseline FP32 result on all three DNN models, while the degradation at 6-bit is still modest. Interestingly, in the case of the seq2seq model, the 6-bit AdaptivFloat weight and activation quantization generates a regularization effect strong enough to exceed the FP32 baseline. At 4-bit weight and activation precision, the performance degradation of AdaptivFloat is steeper on the sequence models than on ResNet50, as many of the activations from the attention mechanisms fall outside the available dynamic range of the number format.
5 PE Architecture
AdaptivFloat's superior bit compression ability paves the way to efficient bit packing in resource-constrained accelerators. In this section, we describe the design of a hybrid Float-Integer (HFINT) PE that exploits the AdaptivFloat logic in its computational datapath and provides an efficient compromise between the high accuracy of floating-point computations and the greater hardware density of fixed-point post-processing. We contrast the proposed PE architecture against that of a conventional integer (INT) PE.
5.1 Conventional Integer PE
The microarchitecture of an n-bit integer-based PE is shown in Figure 6. It contains fixed-point vector MAC units receiving n-bit integer weight and activation vectors. The MAC partial sums are stored in wider registers in order to accumulate values without overflow. A high-precision scaling factor is typically used to dequantize the computation with high accuracy Migacz (2017). Multiplying by the scaling factor requires the scaled results to be stored in registers widened by the scaling factor's bit width, which are later bit-shifted right by the fractional value of the scaling. Then, the data is clipped and truncated back to n bits before being modulated by the neural network activation function. An 8-bit integer-based PE architecture will be referred to later in the document as INT8/24/40 to designate a datapath with 8-bit MAC operands, accumulated into 24-bit registers (to add up to 256 values without overflow) and then scaled to 40-bit using a 16-bit scaling factor.
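The post-accumulation requantization path of the INT PE described above can be modeled in a few lines. This is an illustrative sketch of the INT8/24/40 datapath; the function and argument names are hypothetical, with the bit widths taken from the description in the text:

```python
def int_pe_requantize(acc, scale_q, shift, out_bits=8):
    """Model the INT PE post-accumulation path: scale, shift, clip, truncate.

    acc:     24-bit signed accumulator value (sum of up to 256 MAC products)
    scale_q: 16-bit integer dequantization scaling factor
    shift:   fractional bit position of the scaling factor
    """
    scaled = acc * scale_q             # widens to 40 bits (24 + 16)
    shifted = scaled >> shift          # drop the fractional part of the scale
    lo, hi = -(1 << (out_bits - 1)), (1 << (out_bits - 1)) - 1
    return max(lo, min(hi, shifted))   # clip back to out_bits
```

The multiply by `scale_q` is the post-accumulation multiplier that Section 5.2 notes is absent from the HFINT datapath.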
5.2 Hybrid FloatInteger PE
Figure 7 illustrates the microarchitecture of an n-bit Hybrid Float-Integer (HFINT) PE. The vector MAC units perform floating-point multiplications between an n-bit float weight vector and an n-bit float activation vector, and accumulate the result as an integer. The weights and activations stored on-chip are quantized according to the AdaptivFloat algorithm described in Algorithm 2 in Section 3. The extracted AdaptivFloat exp_bias values for the weight and activation tensors are saved in allocated 4-bit registers and are used to shift the exponent range of the accumulated partial sums. We note that while the exp_bias for the static weights is extracted post-training, the exp_bias for the dynamic activations is informed from statistics gathered during offline batch inference on the test dataset. The accumulation precision needs to be wide enough to sum the required number of values without overflow. The accumulated partial sums are then clipped and truncated back to an n-bit integer before being processed by the activation function. At the end of the PE datapath, the integer activations are converted back to the AdaptivFloat format. The 8-bit HFINT PE architecture will be referred to as HFINT8/30 to indicate an 8-bit MAC datapath with 30-bit accumulation.
A key contrast to note between the INT PE and the HFINT PE, apart from the differing data types employed in the vector MAC units, is that the INT PE requires a post-accumulation multiplier in order to perform the adaptive operation of the quantization. This, in turn, increases the required post-accumulation precision by the bit width of the scaling factor before truncation. In the next section, we provide energy, performance, and area comparisons between the two PE topologies.
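To make the contrast concrete, the HFINT adaptive operation reduces to an exponent shift by the stored exp_bias rather than a full multiply. The following sketch is an assumption-laden model of that path (the function name, shift convention, and bit widths are illustrative, not taken from the actual RTL):

```python
def hfint_requantize(acc, bias_shift, out_bits=8):
    """Model the HFINT post-accumulation path: the per-layer exp_bias turns
    the adaptive rescaling into a plain bit shift instead of a multiply,
    so no widening multiplier is needed before clipping."""
    shifted = acc >> bias_shift if bias_shift >= 0 else acc << -bias_shift
    lo, hi = -(1 << (out_bits - 1)), (1 << (out_bits - 1)) - 1
    return max(lo, min(hi, shifted))   # clip back to out_bits
```

Compared with the INT PE model, no intermediate register widened by a 16-bit scaling factor is required, which is the source of the leaner post-accumulation logic noted above.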
6 Hardware Evaluation
6.1 AlgorithmHardware Codesign Methodology
We developed a design and verification flow, as illustrated in Figure 8, which closes the loop between software modeling and back-end hardware implementation. The AdaptivFloat and integer quantizations are performed in the PyTorch deep learning framework during and after training. The extracted AdaptivFloat exp_bias values for weights and activations, as well as the scaling factors from the integer quantization, are sent to the C++ simulator along with the quantized weights.
In order to evaluate the hardware on a realistic DNN workload, we designed an accelerator system, depicted in Figure 9, targeted for RNN and FC sequence-to-sequence networks, where we have seen wider parameter distributions compared to convolutional networks. The accelerator is evaluated with four PEs that are integrated as either INT or HFINT. Each PE contains an input/bias buffer with sizes ranging from 1KB to 4KB and a weight buffer whose size ranges from 256KB to 1MB, depending on the vector size and operand bit width. A global buffer (GB) unit with 1MB of storage collects the computed activations from the individual PEs via the arbitrated crossbar channel and then broadcasts them back to the four PEs in order to process the next time step or the next layer.
In this experimental setup, we note that the HFINT PE uses MAC operands with 3 exponent bits, which was found to yield the best inference accuracy across the ResNet50, Seq2Seq, and Transformer networks.
The INT and HFINT accelerators were both designed in SystemC with synthesizable and bit-accurate components from the MatchLib Khailany et al. (2018) and HLSLibs HLSLibs (Technical report) libraries. Verilog RTL was auto-generated by the Catapult high-level synthesis (HLS) tool, with HLS constraints uniformly set with the goal of achieving maximum throughput on the pipelined designs.
For fair power, performance, and area (PPA) comparisons, the two designs employ the same evaluation methodology. Energy and performance results are reported on the post-HLS Verilog netlists by the Catapult tool at a 1 GHz clock frequency using a commercial 16nm FinFET standard cell library. The simulated workload consists of 100 LSTM time steps with 256 hidden units operating in a weight-stationary dataflow. The same process node is also used by Synopsys Design Compiler to extract the area estimates of the accelerators after placement- and timing-aware logic synthesis.
6.2 Energy, Performance and Area Analyses
We first look at PPA efficiencies in the PE, which is the computational workhorse of the accelerator. Moreover, we evaluate the effect of increasing throughput via the inner MAC vector size, which is also equal to the number of parallel lanes (i.e., vector MAC units).
Figure 10 shows that the HFINT PEs achieve smaller per-operation energy than the INT PEs at either 4-bit or 8-bit MAC operands and across vector sizes. Larger vector sizes and operand bit widths benefit the HFINT PE more than the INT PE in terms of energy efficiency. Precisely, from 4-bit operands and a vector size of 4 to 8-bit operands and a vector size of 16, the per-operation energy of the HFINT PE is 0.97× to 0.90× that of the INT PE. The smaller per-operation energy of the HFINT PE stems from the fact that its vector MACs contain smaller mantissa multipliers and exponent adders that consume less overall power than the full bit-width multipliers used in the vector MACs of the INT PEs.
Increasing the vector size is found to improve overall energy efficiency due to higher spatial reuse of the accumulated partial sums. On the other hand, the INT PEs exhibit 1.04× to 1.21× higher performance per unit area compared to the HFINT PEs due to the more compact and homogeneous logic in their vector MACs.
Table 8 reports the power, area, and compute time of an 8-bit INT and an 8-bit HFINT accelerator system with 4 PEs and a global buffer. The PEs in both systems have a MAC vector size of 16. The HFINT accelerator reports 0.92× the power and 1.14× the area of the integer-based adaptation, confirming the efficiency trends reported in Figure 10. Note that both accelerators have the same compute time because the HLS tool generated the same aggregate pipelining result for both designs.
7 Conclusion
Fixed-point quantization schemes can be inadequate for networks possessing the relatively wide parameter distributions commonly seen in deep sequence transduction models such as the Transformer. This paper presents AdaptivFloat, a resilient floating-point based encoding solution that dynamically maximizes and optimally clips its available dynamic range, at a layer granularity, in order to create accurate encodings of neural network parameters from narrow to wide distribution spread. AdaptivFloat demonstrates marked robustness at very low precision (≤ 6-bit) on the Transformer, LSTM-based seq2seq, and ResNet50 networks. This paves the way to higher compute density in reconfigurable architectures, at a much lower penalty for computational accuracy compared to leading encoding types such as block floating-point, uniform, or non-adaptive float or posit number formats. We also illustrate the algorithm-hardware co-design of AdaptivFloat, which allows the extracted exp_bias values of weights and activations to be stored in allocated registers on-chip in order to perform the adaptive operation of the quantization. The proposed processing elements and accelerators that leverage this mechanism demonstrate per-operation energy that is 0.90× to 0.97× that of integer-based adaptations at varying vector sizes and MAC operand bit widths. Altogether, the AdaptivFloat algorithm-hardware co-design framework offers a compelling alternative to integer or fixed-point solutions.
8 Acknowledgments
This work was supported by the Application Driving Architectures (ADA) Research Center, a JUMP Center co-sponsored by SRC and DARPA.
References
Alom et al. (2018). Effective quantization approaches for recurrent neural networks. CoRR abs/1802.02615.
Ba et al. (2016). Layer normalization. CoRR abs/1607.06450.
Bhandare et al. (2019). Efficient 8-bit quantization of transformer neural machine language translation model. CoRR abs/1906.00532.
Cai et al. (2017). Deep learning with low precision by half-wave Gaussian quantization. CoRR abs/1702.00953.
Carmichael et al. (2018). Deep Positron: a deep neural network using the posit number system. CoRR abs/1812.01762.
Chiu et al. (2017). State-of-the-art speech recognition with sequence-to-sequence models. CoRR abs/1712.01769.
Choi et al. (2019). Accurate and efficient 2-bit quantized neural networks.
Chorowski et al. (2015). Attention-based models for speech recognition. In Advances in Neural Information Processing Systems 28 (NIPS 2015), Montreal, Quebec, Canada, pp. 577–585.
Courbariaux et al. (2015). BinaryConnect: training deep neural networks with binary weights during propagations. CoRR abs/1511.00363.
Courbariaux and Bengio (2016). BinaryNet: training deep neural networks with weights and activations constrained to +1 or −1. CoRR abs/1602.02830.
Drumond et al. (2018). End-to-end DNN training with block floating point arithmetic. CoRR abs/1804.01526.
Fowers et al. (2018). A configurable cloud-scale DNN processor for real-time AI. In Proceedings of the 45th Annual International Symposium on Computer Architecture (ISCA).
Gupta et al. (2015). Deep learning with limited numerical precision. CoRR abs/1502.02551.
Gustafson and Yonemoto (2017). Beating floating point at its own game: posit arithmetic. Supercomputing Frontiers and Innovations 4(2), pp. 71–86.
Han et al. (2015). Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. CoRR abs/1510.00149.
He et al. (2016). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778.
HLSLibs. Open-source high-level synthesis IP libraries. Technical report.
Hwang and Sung (2014). Fixed-point feedforward deep neural network design using weights +1, 0, and −1. In 2014 IEEE Workshop on Signal Processing Systems (SiPS), pp. 1–6.
Jacob et al. (2017). Quantization and training of neural networks for efficient integer-arithmetic-only inference. CoRR abs/1712.05877.
Johnson (2018). Rethinking floating point for deep learning. CoRR abs/1811.01721.
Jouppi et al. (2017). In-datacenter performance analysis of a tensor processing unit. In 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), pp. 1–12.
Khailany et al. (2018). A modular digital VLSI flow for high-productivity SoC design. In Proceedings of the 55th Annual Design Automation Conference (DAC '18), pp. 72:1–72:6.
Klein et al. (2017). OpenNMT: open-source toolkit for neural machine translation. CoRR abs/1701.02810.
 Flexpoint: an adaptive numerical format for efficient training of deep neural networks. CoRR abs/1711.02213. External Links: Link, 1711.02213 Cited by: §1, §2.
 ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems  Volume 1, NIPS’12, USA, pp. 1097–1105. External Links: Link Cited by: §1.
 LogNet: energyefficient neural networks using logarithmic computation. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. , pp. 5900–5904. External Links: Document, ISSN 2379190X Cited by: §2.
 Fixed point quantization of deep convolutional networks. CoRR abs/1511.06393. External Links: Link, 1511.06393 Cited by: §1.
 8bit inference with tensorrt. In NVIDIA GPU Technology Conference, External Links: Link Cited by: §1, §2, §3.2, §5.1.
 WRPN: wide reducedprecision networks. CoRR abs/1709.01134. External Links: Link, 1709.01134 Cited by: §2, §4.
 Convolutional neural networks using logarithmic data representation. CoRR abs/1603.01025. External Links: Link, 1603.01025 Cited by: §1.
 Regularizing deep neural networks by noise: its interpretation and optimization. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 49 December 2017, Long Beach, CA, USA, pp. 5109–5118. Cited by: §4.2.
 Weightedentropybased quantization for deep neural networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. , pp. 7197–7205. External Links: Document, ISSN 10636919 Cited by: §2.
 Energyefficient neural network accelerator based on outlieraware lowprecision computation. In 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA), Vol. , pp. 688–698. External Links: Document, ISSN 2575713X Cited by: §2.
 ImageNet training in pytorch. Technical report External Links: Link Cited by: §4.
 Minerva: enabling lowpower, highlyaccurate deep neural network accelerators. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), Vol. , pp. 267–278. External Links: Document, ISSN 10636897 Cited by: §2.
 Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 510, 2016, Barcelona, Spain, pp. 901. External Links: Link Cited by: §1.
 Attention is all you need. CoRR abs/1706.03762. External Links: Link, 1706.03762 Cited by: §1, Table 2, §4.
 Efficient hardware acceleration of cnns using logarithmic data representation with arbitrary logbase. In Proceedings of the International Conference on ComputerAided Design, ICCAD ’18, New York, NY, USA, pp. 9:1–9:8. External Links: ISBN 9781450359504, Link, Document Cited by: §2.
 Quantized convolutional neural networks for mobile devices. CoRR abs/1512.06473. External Links: Link, 1512.06473 Cited by: §2.
 Google’s neural machine translation system: bridging the gap between human and machine translation. CoRR abs/1609.08144. External Links: Link, 1609.08144 Cited by: §1.
 LQnets: learned quantization for highly accurate and compact deep neural networks. CoRR abs/1807.10029. External Links: Link, 1807.10029 Cited by: §2.
 DoReFanet: training low bitwidth convolutional neural networks with low bitwidth gradients. CoRR abs/1606.06160. External Links: Link, 1606.06160 Cited by: §2, §4.
 Trained ternary quantization. CoRR abs/1612.01064. External Links: Link, 1612.01064 Cited by: §2.