# The Two-Pass Softmax Algorithm

## Abstract

The softmax (also called softargmax) function is widely used in machine learning models to normalize real-valued scores into a probability distribution. To avoid floating-point overflow, the softmax function is conventionally implemented in three passes: the first pass to find the maximum input value, the second pass to compute the normalization constant, and the third pass to compute outputs from normalized inputs. We analyze two variants of the Three-Pass algorithm and demonstrate that in a well-optimized implementation on HPC-class processors the performance of all three passes is limited by memory bandwidth.

We then present a novel algorithm for softmax computation in just two passes. The proposed Two-Pass algorithm avoids both numerical overflow and the extra normalization pass by employing an exotic representation for intermediate values, where each value is represented as a pair of floating-point numbers: one representing the “mantissa” and another representing the “exponent”.

Performance evaluation demonstrates that on out-of-cache inputs on an Intel Skylake-X processor the new Two-Pass algorithm outperforms the traditional Three-Pass algorithm by up to 28% in AVX512 implementation, and by up to 18% in AVX2 implementation. The proposed Two-Pass algorithm also outperforms the traditional Three-Pass algorithm on Intel Broadwell and AMD Zen 2 processors.

To foster reproducibility, we released an open-source implementation of the new Two-Pass Softmax algorithm and other experiments in this paper as a part of XNNPACK library at GitHub.com/google/XNNPACK.

## 1 Introduction

The softmax (also called softargmax) function is a smooth approximation to the argmax function. The softmax function operates on a vector of real-valued scores and normalizes them into a probability distribution

$$p_i = \sigma(x)_i = \frac{e^{x_i}}{\sum_k e^{x_k}}$$

where $x = (x_1, \ldots, x_N)$ is the input vector, each $p_i \in (0, 1)$, and $\sum_i p_i = 1$.

The softmax function is commonly used in machine learning to give a probabilistic interpretation to the outputs of classification models [2]. Such machine learning models output a probability for every possible object class, and the number of classes in modern datasets can reach millions. For example, natural language processing models may predict a probability distribution over every possible word in a vocabulary, and recommendation systems may model the probability distribution over users, products, web pages, or their interactions. Table 1 summarizes the number of classes in several public classification datasets.

Hierarchical Softmax [13] (HSM) and its modifications [15] are common techniques for scaling classification models to a large number of classes. HSM methods jointly consider the softmax function and the matrix-matrix multiplication that produced its input, and replace them with a low-rank approximation. Thus, HSM methods improve performance by reducing the matrix-matrix multiplication cost, and do so by approximating the original machine learning model.

Unlike previous research, we focus on the softmax function in the context of inference using a pre-trained model. This situation commonly arises in machine learning frameworks, such as TensorFlow [1] or PyTorch [31], when the training dataset or metaparameters needed to devise an accurate approximation to the model are not available. In this case, the model must be computed exactly according to the specification and unsafe approximations are not possible.

Our contributions in this paper are as follows:

• We demonstrate that a well-optimized implementation of the softmax function can be memory-bound even in single-threaded execution. This result emphasizes the importance of eliminating memory operations for further improvements in the performance of the softmax function.

• We introduce a novel algorithm for computing the softmax function. The new algorithm employs an exotic representation for intermediate values, where each value is represented as a pair of floating-point numbers: one representing the “mantissa” and another representing the “exponent”. Thanks to the special representation of intermediate results, the new algorithm needs only two passes over an input vector versus three passes for traditional algorithms.

• We present and evaluate high-performance implementations of the new Two-Pass softmax algorithm for x86-64 processors with the AVX2 and AVX512F SIMD extensions. The experimental study confirms speedups of up to 28% on an Intel Skylake-X system.

The proposed improvements to the softmax implementation are orthogonal to matrix-matrix multiplication optimizations, and can be combined with sparsification [24, 29], low-rank decomposition [9], low-precision arithmetic [19, 30, 35], or hardware acceleration [16, 20] for the matrix-matrix multiplication that produces the softmax input.

## 2 Related work

The softmax function is widely used in modern machine learning models and algorithms. In particular, it gained wide popularity in Natural Language Processing as a building block for language models, which predict a word or n-gram out of a large vocabulary [21]. Because the softmax function plays such an important role in machine learning models, several approximations have been proposed [23, 36, 34, 32, 22, 37, 15] to make its calculation efficient and fast. We refer the reader to [15] for a good overview of recent approaches to speeding up softmax calculations.

Although a lot of work has been done to address the computational complexity of the softmax function on large inputs, the proposed solutions either 1) require special hardware (e.g., FPGAs) or 2) require using an approximation to the softmax function during model training. In the latter case the softmax approximation cannot be directly applied to a pre-trained machine learning model, which dramatically limits its use in existing machine learning frameworks, e.g., TensorFlow [1] and PyTorch [31]. Therefore, we developed a novel algorithm that improves the performance of the original softmax function on widely available hardware, can be used with pre-trained machine learning models, and can be implemented in any machine learning framework.

## 3 The Three-Pass Algorithm

Direct calculation of the softmax function according to the formula is fraught with numerical issues. The single-precision $e^x$ function overflows1 for $x$ greater than approximately $88.72$ and underflows2 for $x$ less than approximately $-103.97$; both conditions, in turn, cause NaN3 outputs in naïve implementations of the softmax function. In practice, the parts of machine learning models that produce the input to the softmax function are rarely bounded, and thus the implementation can't assume that the input falls into such a narrow range.

In order to overcome these numerical instability issues, machine learning frameworks adopt a workaround utilizing the equivalence [12]:

$$\sigma(x)_i = \frac{e^{x_i}}{\sum_k e^{x_k}} = \frac{e^{x_i - c}}{\sum_k e^{x_k - c}}$$

which holds for any value of $c$. In particular, if $c = \max_k x_k$, then:

• No inputs to the $e^x$ function exceed zero, and thus no overflow is possible

• There is at least one zero input to the $e^x$ function, and thus the denominator of the fraction is non-zero.

Together, these properties result in good numerical behavior of the computation and lead to Algorithm 1.

Both Pass 2 and Pass 3 in Algorithm 1 compute $e^{x_i - c}$ with the same $x_i$ and $c$ values. This observation hints at a potential optimization: if computing the exponential function is expensive, we could save the computed values to avoid recomputing them in Pass 3. Such a modification is presented in Algorithm 2.

Algorithm 2 computes the exponentials only once, but this reduction in the number of computations comes at a cost: the second pass of Algorithm 2 does both a read and a write for each element, unlike Algorithm 1 where the second pass does only reads.

## 4 The Two-Pass Algorithm

The Three-Pass Algorithms 1 and 2 avoid numerical issues by normalizing inputs relative to their maximum value, but then require an additional memory pass to find the maximum value. In this section we show that it is possible to achieve numerical stability without the extra memory pass, and present a practical Two-Pass algorithm for softmax computation.

The immediate reason for the numerical instability of a naïve softmax implementation is the saturation of the $e^x$ function for inputs outside of the narrow range of approximately $[-103.97, 88.72]$. Therefore, we have to look inside the $e^x$ function for a solution.

The $e^x$ function can be implemented in an infinite number of ways, but practical implementations [11, 18, 25, 10] in IEEE floating-point arithmetic follow the traditional structure of elementary function implementation [27], and include three steps:

1. Range reduction, where the problem of approximating $e^x$ on the infinite input domain is reduced to a problem of approximating it on a small finite range. For $e^x$, a natural range reduction derives from the equivalence

$$e^x = e^t \cdot 2^n, \quad t = x - n \cdot \log 2 \in \left[-\tfrac{\log 2}{2}, \tfrac{\log 2}{2}\right], \quad n = \lfloor x \cdot \log_2 e \rceil \in \mathbb{Z}$$

which decomposes approximating $e^x$ on $(-\infty, +\infty)$ into approximating $e^t$ on $\left[-\tfrac{\log 2}{2}, \tfrac{\log 2}{2}\right]$ and multiplying the result by $2^n$, where $n$ is an integer.

2. Approximation of the function on the reduced range, i.e. approximating $e^t$ on $t \in \left[-\tfrac{\log 2}{2}, \tfrac{\log 2}{2}\right]$. This step is achieved through a polynomial or rational approximation, often in combination with table look-ups.

3. Reconstruction of the final value from the approximation on the reduced range. For $e^x$, the reconstruction step consists of multiplication of $e^t$ by $2^n$, and can be achieved at low cost by manipulating the exponent field of a binary floating-point number. It is this step where the underflow and overflow situations arise: $e^t \in [1/\sqrt{2}, \sqrt{2}]$ and thus always fits into a single-precision floating-point number, but $2^n$ can exceed the range of the exponent field, causing underflow or overflow.

The key idea that enables the Two-Pass Softmax algorithm is to remove the reconstruction step, and instead keep the result of $e^x$ as a pair of floating-point values $(m, n)$, where $m = e^t$. Mathematically, $e^x = m \cdot 2^n$, but in general this expression can not be evaluated in floating-point arithmetic without overflowing or underflowing the result. The representation of $n$ as a floating-point number is important: although $n$ is by design always an integer, it can have a very large magnitude, and fall outside of the range of standard integer formats. Therefore, the result must be represented as two, rather than one, floating-point numbers. Using multiple floating-point numbers to represent a real-valued result has a long history in double-double, triple-double, and quad-double [17] representations and Error-Free Transformations [14]. However, these representations use multiple floating-point numbers to improve the precision of floating-point arithmetic, whereas we suggest using two floating-point numbers to extend its dynamic range.

Algorithm 3 presents the softmax computation in just two passes by implementing addition for the $(m, n)$ representation. The reduction pass keeps track of the running maximum value among all $n$ values, and accumulates the scaled $m$ values into the running sum. It avoids floating-point overflow by scaling the $m$ values by two raised to the difference between the corresponding $n$ values and the maximum of the $n$ values. As this difference is never positive, $m$ values are never scaled up, which ensures the absence of floating-point overflow.

## 5 Theoretical Analysis

While the number of memory passes in the presented softmax algorithms is evident from the names, the number of actual memory operations is more nuanced. Every pass of the Three-Pass algorithm with Recomputing reads the input array, while the last pass also writes the output array. The Three-Pass algorithm with Reloading just reads the input array in the first pass, reads the input array and writes the output array in the second pass, and reads-modifies-writes the output array in the last pass. The Two-Pass algorithm reads the input array in both passes, and also writes the output array in the second pass. Thus, the memory bandwidth requirements of the Two-Pass algorithm are similar to just the last two passes of the Three-Pass algorithm with Recomputing.

Table 2 summarizes the number of memory reads, memory writes, and the memory bandwidth cost for the three algorithms on arrays of $N$ elements. Per Table 2, the Two-Pass algorithm transfers $3N$ elements ($2N$ reads and $N$ writes), versus $4N$ for the Three-Pass algorithm with Recomputing and $5N$ for the Three-Pass algorithm with Reloading: a memory bandwidth advantage of $1.33\times$ and $1.67\times$, respectively. In practice, we should treat these numbers as upper bounds, because the higher computational complexity of the Two-Pass algorithm cuts into the gains from bandwidth reduction.

## 6 Experimental Evaluation

### 6.1 Platform

We evaluate the performance of the three softmax algorithms on the Intel Xeon W-2135 processor based on Skylake-X microarchitecture and with the characteristics listed in Table 3. To improve performance stability we disabled dynamic frequency scaling in the processor for the duration of our experiments.

Additionally, in Sec. 6.8 we replicate a subset of experiments on a Broadwell-based Intel Xeon E5-2696 v4 processor and on an AMD Ryzen 9 3900X processor with Zen 2 microarchitecture.

### 6.2 Protocol

We use Google Benchmark framework to estimate sustained performance of the softmax implementations. We set the minimum run-time for each measurement to 5 seconds, repeat each measurement 25 times and record the median of the 25 runs. In each benchmark run we simulate the cache state during neural network inference: output vector is evicted from the cache before each iteration, but input tensor stays in cache as long as it fits.

### 6.3 Implementation

We developed highly optimized implementations of the Three-Pass Algorithms 1 and 2, and the Two-Pass Algorithm 3, in C. For each of the three algorithms, we produced two templated implementations, targeting the AVX2 and AVX512 instruction sets. We expressed high-level optimization parameters, such as the unroll factor for the loops and the number of accumulator variables in reduction functions, as meta-parameters of the templated implementations, and employed auto-tuning to discover their optimal values.

An efficient implementation of the vectorized $e^x$ function is a key component of all softmax variants. For our implementation, we adapted the throughput-optimized methods of Dukhan and Vuduc [10] to single-precision floating-point evaluation. Algorithm 4 presents the resulting table-free, branch-free, and division-free algorithm.

Algorithm 4 follows the traditional structure of elementary function implementation [27], described in Sec. 4. It starts with a range reduction to reduce approximation on the infinite input domain to approximation on a small and finite range. The calculation of the reduced argument $t$ in Algorithm 4 uses Cody-Waite range reduction [6]: $\log 2$ is represented as a sum of two single-precision constants to improve the accuracy of this step. Range reduction results in a reduced argument $t$ in the range $\left[-\tfrac{\log 2}{2}, \tfrac{\log 2}{2}\right]$ and a reduced integer argument $n$. Next, $e^t$ is approximated on $\left[-\tfrac{\log 2}{2}, \tfrac{\log 2}{2}\right]$ with a degree-5 polynomial. The polynomial coefficients are produced by the algorithm of Brisebarre and Chevillard [3] as implemented in the Sollya software [5]. Following [10], we evaluate the approximation polynomial with the Horner scheme using Fused Multiply-Add instructions to minimize the number of floating-point instructions and maximize throughput. In the last stage, Algorithm 4 reconstructs the final output value of the $e^x$ function by multiplying the polynomial approximation result by $2^n$. In the AVX2 implementation, we do this multiplication by directly manipulating the floating-point exponent to construct a scale number $s$:

$$s := \begin{cases} 2^n, & n \ge -126 \\ 0, & n < -126 \end{cases}$$

and compute the final result as the product of $s$ and the polynomial approximation. This reconstruction trick has two built-in assumptions: the argument to the $e^x$ function is negative4, and subnormal floating-point numbers can be flushed to zero without significant accuracy impact. The reconstruction step in the AVX512 implementation leverages the new VSCALEFPS instruction [7], which performs the multiplication by $2^n$ as a single hardware operation.

The resulting implementation has a maximum error under 2 ULP, validated through exhaustive evaluation on all valid inputs. This accuracy is comparable to other vectorized elementary function implementations; e.g., the SIMD functions in the GNU libm library guarantee a maximum error under 4 ULP.

The implementation of ExtExp in the Two-Pass softmax algorithm is similar to Algorithm 4 with the reconstruction step removed. Thus, implementations of both the Three-Pass and the Two-Pass algorithms use exactly the same range reduction and approximating polynomials to compute the exponential function.

### 6.4 The Three-Pass Algorithms and Bandwidth Saturation

Fig. 1 presents the performance of the Three-Pass softmax Algorithm 1 with recomputing of exponentials and the Three-Pass softmax Algorithm 2 with reloading of computed exponentials in the AVX512 implementations. Reloading of exponential computations delivers a speedup when the data is small enough to fit into the private L1 and L2 caches, but turns into a slowdown when operating out of the L3 cache, and the gap eventually levels off when the working set exceeds the last-level cache.

The AVX2 implementations of the same Three-Pass softmax Algorithms 1 and 2 are illustrated in Fig. 2 and demonstrate similar trends. As the working set increases, the speedup from reloading of exponential computations goes down, and eventually levels off for large arrays.

The small difference between recomputing and reloading of exponential computations in Fig. 1 and Fig. 2 suggests that, despite the expensive exponential function, softmax computation might be memory-bound for large arrays. To directly test this hypothesis, we decompose Algorithms 1 and 2 into individual memory passes, and compare the measured bandwidth to the STREAM benchmarks [26].

Fig. 3 and 4 illustrate the memory bandwidth in each pass of the Three-Pass softmax Algorithms 1 and 2, as well as the Copy and Scale STREAM benchmarks [26]. As recommended in the STREAM documentation, we set the array size to four times the size of the last-level cache (8650752 single-precision elements for Softmax, and 4325376 double-precision elements for STREAM). The first softmax pass (max-reduction) is the same in both versions of the Three-Pass algorithm, and thus is presented only once. This pass reads one input array, and doesn't have a direct equivalent in STREAM, but achieves bandwidth similar to the STREAM Copy and Scale benchmarks, which both read one array and write one array. The second pass in Algorithm 1 reads one array, computes exponentials on the inputs, and accumulates them. It achieves a large fraction of STREAM Copy bandwidth in the AVX512 version, and a somewhat smaller fraction in the AVX2 version. The second pass of Algorithm 2 is similar, but additionally stores the computed exponentials into the output array. Although it achieves higher bandwidth than the second pass of Algorithm 1, it takes substantially longer to complete; the higher bandwidth is due to doubling the number of transferred bytes with a less-than-proportional increase in run time. The third pass of Algorithm 1 reads one array, computes exponentials on the inputs, scales them, and writes the results to another array. This pass does the same number of memory operations as the STREAM Scale benchmark, but substantially more computational operations. Yet, our auto-tuned implementations exceed the performance of the STREAM Scale benchmark in both the AVX512 and the AVX2 versions. The third pass of Algorithm 2 is an in-place variant of the STREAM Scale benchmark. The processor clearly favors the in-place operation: it is faster than STREAM Scale in both the AVX512 and the AVX2 versions.

To summarize, passes 1 and 3 of Algorithm 1 with Recomputing demonstrate similar memory performance to the STREAM benchmarks, and pass 2 in the AVX512 implementation is not far behind. Passes 1 and 2 of Algorithm 2 with Reloading similarly perform close to STREAM bandwidth, and pass 3 is significantly faster than the STREAM Scale benchmark. These results confirm that the performance of the Three-Pass softmax algorithms is limited by achievable memory bandwidth, and suggest that softmax computation can be further accelerated only through reducing the number of memory operations.

### 6.5 The Two-Pass Algorithm

Fig. 5 presents the performance of the Two-Pass softmax algorithm in comparison with the two versions of the Three-Pass algorithm in the AVX512 implementations. On out-of-cache working sets the proposed Two-Pass softmax algorithm outperforms the Three-Pass algorithms by up to 28%.

Fig. 6 similarly compares the performance of the Two-Pass and the Three-Pass algorithms in the AVX2 implementations. Here, the Two-Pass algorithm outperforms the Three-Pass algorithm with Reloading of exponential computations by up to 18% on out-of-cache workloads. The smaller speedups, compared to the AVX512 implementation, are explained by the relatively higher cost of recomputing exponentials in AVX2. If we compare to the Three-Pass Algorithm 1, which similarly recomputes exponentials, the Two-Pass algorithm wins by a wider margin.

Fig. 7 decomposes the absolute run-time of the three algorithms and two SIMD instruction sets into individual memory passes and offers insight into the origin of the performance improvements of the Two-Pass algorithm. The two passes of the Two-Pass softmax algorithm have similar, but slightly higher, absolute run-times than the last two passes of the Three-Pass softmax algorithm with recomputation of the exponential function, which share the same memory access pattern. The slightly higher run-time in the passes of the Two-Pass algorithm can be explained by the larger number of operations needed for accumulation in the $(m, n)$ representation compared to just accumulating scalar floating-point values.

### 6.6 Multi-Core Scaling
The benchmarks in Sec. 6.4 and Sec. 6.5 presented the performance of single-threaded softmax computation, and demonstrated that on HPC-class systems softmax saturates memory bandwidth even when running on a single core. Utilizing multiple cores increases available computational resources faster than achievable memory bandwidth, and therefore increases the advantage of the bandwidth-saving Two-Pass softmax algorithm. To quantify this advantage, we fix the size of the array at 4 times the last-level cache size [26], and vary the number of threads from 1 up to 6 (the number of cores in the system) and 12 (the number of logical processors, including hyperthreads, in the system).

Fig. 8 illustrates weak multi-core scaling of the AVX512 implementations. As the number of threads grows, the advantage of the Two-Pass over the Three-Pass algorithms remains essentially unchanged. Interestingly, the reloading variant of the Three-Pass algorithm scales worse than the recomputing variant, and the recomputing Algorithm 1 outperforms the reloading Algorithm 2 when at least 3 cores are being utilized.

Fig. 9 similarly illustrates weak multi-core scaling of the AVX2 implementations. The advantage of the Two-Pass over the Three-Pass algorithms grows with the number of threads: it is smallest on a single core, larger when utilizing all 6 cores, and largest when also using hyperthreads.

### 6.7 Comparison with Intel DNNL

The results in Sec. 6.5-6.6 demonstrate that on out-of-cache inputs the Two-Pass softmax algorithm outperforms the Three-Pass softmax algorithms in our implementation, but leave open the question of whether our implementations are competitive with the state of the art. To demonstrate the absolute effectiveness of the Two-Pass algorithm, we compared our implementations of the three softmax algorithms to the softmax primitive in the Intel Deep Neural Network Library (DNNL), version 1.1.1.

Intel DNNL implements the Three-Pass softmax Algorithm 2 with reloading of computed exponentials. It includes implementations for the SSE4.1, AVX, and AVX512 instruction sets, and automatically dispatches to the AVX512 implementation on the Skylake-X processor. Unlike our implementations, Intel DNNL generates its implementation at runtime using Just-in-Time (JIT) compilation. JIT code generation potentially allows adapting the implementation to the parameters of a particular softmax operator (e.g., the number of channels).

Fig. 10 presents the comparison between the two implementations (ours and DNNL) of the Three-Pass algorithm with reloading of exponentials, and the Two-Pass softmax algorithm in our implementation. For the Three-Pass algorithm with reloading, our implementation outperforms Intel DNNL for the majority of problem sizes. Its advantage is particularly high – over 2× – when data fits into the L1 cache, diminishes within L2, and levels off for out-of-cache problem sizes. As the implementation in Intel DNNL is less efficient than ours, our Two-Pass softmax algorithm outperforms the DNNL softmax primitive on all problem sizes: it is faster on out-of-cache problem sizes, and fastest in relative terms when the input fits into the L1 cache.

### 6.8 Validation on Alternative Processors

The results in Sec. 6.4-6.7 were all collected on the Xeon W-2135 processor with the Intel Skylake-X microarchitecture, which prompts a question as to whether the advantage of the Two-Pass softmax algorithm is restricted to a specific type of processor. To demonstrate that the Two-Pass algorithm generalizes to other types of processors, we replicated the results of Sec. 6.5 on a Xeon E5-2696 v4 processor with the Intel Broadwell microarchitecture and a Ryzen 9 3900X processor with the AMD Zen 2 microarchitecture. Both of these processors support the AVX2, but not the AVX512, instruction set, and have different cache-hierarchy parameters than the Intel Skylake-X system.

Fig. 11 presents the performance of the three softmax algorithms on the Intel Broadwell system. Although the Two-Pass softmax algorithm demonstrates inferior performance on problems which fit into the L2 cache, it gets competitive with the Three-Pass softmax algorithms on L3-sized problems, and outperforms them on out-of-cache problems.

Fig. 12 shows a similar picture on the AMD Zen 2 microarchitecture. Here, the Three-Pass algorithms deliver superior performance as long as the data fits into the L3 cache, but lose to the Two-Pass algorithm when the data exceeds the cache size.

## 7 Conclusion

We presented a novel Two-Pass algorithm for softmax computation and demonstrated that the new Two-Pass algorithm is up to 28% faster than the traditional Three-Pass algorithm on large input vectors. The algorithm, however, offers no advantage over the reloading variant of the Three-Pass algorithm when the data fits into the processor's cache.

This study focused on performance on a CPU, but the algorithm has great potential for GPU and hardware AI accelerators. These platforms further shift the balance between compute and memory performance towards expensive memory and cheap floating-point operations, and would favor reduced memory intensity of the presented Two-Pass softmax algorithm.

To foster reproducibility, we released an open-source implementation of the new Two-Pass Softmax algorithm and other experiments in this paper as a part of XNNPACK library at GitHub.com/google/XNNPACK.

## Acknowledgements

We thank Matthias Grundmann, Sergey Ioffe, Juhyun Lee, Karthik Raveendran, and Richard Vuduc for helpful feedback and discussion, and Vijay Thakkar for sharing access to the Ryzen 9 3900X system.

### Footnotes

1. Produces floating-point infinity because the result is too large to be represented as a finite single-precision floating-point number
2. Produces 0 because the result is too small to be distinguishable from zero in the single-precision floating-point format
3. Not a Number: a special floating-point value defined by the IEEE 754 standard to indicate an invalid result
4. Always the case for the evaluation in the Three-Pass softmax algorithms.

### References

1. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving and M. Isard (2016) Tensorflow: a system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283. Cited by: §1, §2.
2. J. S. Bridle (1990) Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neurocomputing, pp. 227–236. Cited by: §1.
3. N. Brisebarre and S. Chevillard (2007) Efficient polynomial $L^\infty$-approximations. In 18th IEEE Symposium on Computer Arithmetic (ARITH’07), pp. 169–176. Cited by: §6.3.
4. C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn and T. Robinson (2013) One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005. Cited by: Table 1.
5. S. Chevillard, M. Joldeş and C. Lauter (2010) Sollya: an environment for the development of numerical codes. In International Congress on Mathematical Software, pp. 28–31. Cited by: §6.3.
6. W. J. Cody (1980) Software manual for the elementary functions (prentice-hall series in computational mathematics). Prentice-Hall, Inc.. Cited by: §6.3.
7. M. Cornea (2015) Intel AVX-512 instructions and their use in the implementation of math functions. Intel Corporation. Cited by: §6.3.
8. J. Deng, W. Dong, R. Socher, L. Li, K. Li and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: Table 1.
9. E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun and R. Fergus (2014) Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in neural information processing systems, pp. 1269–1277. Cited by: §1.
10. M. Dukhan and R. Vuduc (2013) Methods for high-throughput computation of elementary functions. In International Conference on Parallel Processing and Applied Mathematics, pp. 86–95. Cited by: §4, §6.3, §6.3.
11. S. Gal (1991) An accurate elementary mathematical library for the ieee floating point standard. ACM Transactions on Mathematical Software (TOMS) 17 (1), pp. 26–45. Cited by: §4.
12. I. Goodfellow, Y. Bengio and A. Courville (2016) Deep learning. MIT press. Cited by: §3.
13. J. Goodman (2001) Classes for fast maximum entropy training. arXiv preprint cs/0108006. Cited by: §1.
14. S. Graillat and V. Ménissier-Morain (2007) Error-free transformations in real and complex floating point arithmetic. In Proceedings of the International Symposium on Nonlinear Theory and its Applications, pp. 341–344. Cited by: §4.
15. E. Grave, A. Joulin, M. Cissé and H. Jégou (2017) Efficient softmax approximation for GPUs. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1302–1310. Cited by: §1, §2.
16. S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz and W. J. Dally (2016) EIE: efficient inference engine on compressed deep neural network. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), pp. 243–254. Cited by: §1.
17. Y. Hida, X. S. Li and D. H. Bailey (2001) Algorithms for quad-double precision floating point arithmetic. In Proceedings 15th IEEE Symposium on Computer Arithmetic. ARITH-15 2001, pp. 155–162. Cited by: §4.
18. C. Iordache and P. T. P. Tang (2003) An overview of floating-point support and math library on the intel xscale trade/architecture. In Proceedings 2003 16th IEEE Symposium on Computer Arithmetic, pp. 122–128. Cited by: §4.
19. B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam and D. Kalenichenko (2018) Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2704–2713. Cited by: §1.
20. N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden and A. Borchers (2017) In-datacenter performance analysis of a tensor processing unit. In 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), pp. 1–12. Cited by: §1.
21. R. Jozefowicz, O. Vinyals, M. Schuster, N. Shazeer and Y. Wu (2016) Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410. Cited by: §2.
22. S. Kanai, Y. Fujiwara, Y. Yamanaka and S. Adachi (2018) Sigsoftmax: reanalysis of the softmax bottleneck. In Advances in Neural Information Processing Systems, pp. 286–296. Cited by: §2.
23. I. Kouretas and V. Paliouras (2019) Simplified hardware implementation of the softmax activation function. In 2019 8th International Conference on Modern Circuits and Systems Technologies (MOCAST), pp. 1–4. Cited by: §2.
24. B. Liu, M. Wang, H. Foroosh, M. Tappen and M. Pensky (2015) Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 806–814. Cited by: §1.
25. P. Markstein (2000) IA-64 and elementary functions: speed and precision. Vol. 3, Prentice Hall. Cited by: §4.
26. J. D. McCalpin (1995-12) Memory bandwidth and machine balance in current high performance computers. IEEE Computer Society Technical Committee on Computer Architecture (TCCA) Newsletter, pp. 19–25. Cited by: §6.4, §6.4, §6.6.
27. J. Muller, N. Brisebarre, F. De Dinechin, C. Jeannerod, V. Lefevre, G. Melquiond, N. Revol, D. Stehlé and S. Torres (2010) Handbook of floating-point arithmetic. Cited by: §4, §6.3.
28. A. Panchenko, E. Ruppert, S. Faralli, S. P. Ponzetto and C. Biemann (2017) Building a web-scale dependency-parsed corpus from commoncrawl. arXiv preprint arXiv:1710.01779. Cited by: Table 1.
29. J. Park, S. R. Li, W. Wen, H. Li, Y. Chen and P. Dubey (2016) Holistic SparseCNN: forging the trident of accuracy, speed, and size. arXiv preprint arXiv:1608.01409 1 (2). Cited by: §1.
30. J. Park, M. Naumov, P. Basu, S. Deng, A. Kalaiah, D. Khudia, J. Law, P. Malani, A. Malevich and S. Nadathur (2018) Deep learning inference in facebook data centers: characterization, performance optimizations and hardware implications. arXiv preprint arXiv:1811.09886. Cited by: §1.
31. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox and R. Garnett (Eds.), pp. 8024–8035. Cited by: §1, §2.
32. K. Shim, M. Lee, I. Choi, Y. Boo and W. Sung (2017) SVD-softmax: fast softmax approximation on large vocabulary neural networks. In Advances in Neural Information Processing Systems, pp. 5463–5473. Cited by: §2.
33. S. Singh, A. Subramanya, F. Pereira and A. McCallum (2012) Wikilinks: a large-scale cross-document coreference corpus labeled via links to wikipedia. University of Massachusetts, Amherst, Tech. Rep. UM-CS-2012 15. Cited by: Table 1.
34. Q. Sun, Z. Di, Z. Lv, F. Song, Q. Xiang, Q. Feng, Y. Fan, X. Yu and W. Wang (2018) A high speed softmax vlsi architecture based on basic-split. In 2018 14th IEEE International Conference on Solid-State and Integrated Circuit Technology (ICSICT), pp. 1–3. Cited by: §2.
35. A. Tulloch and Y. Jia (2017) High performance ultra-low-precision convolutions on mobile devices. arXiv preprint arXiv:1712.02427. Cited by: §1.
36. M. Wang, S. Lu, D. Zhu, J. Lin and Z. Wang (2018) A high-speed and low-complexity architecture for softmax function in deep learning. In 2018 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), pp. 223–226. Cited by: §2.
37. B. Yuan (2016) Efficient hardware architecture of softmax layer in deep neural network. In 2016 29th IEEE International System-on-Chip Conference (SOCC), pp. 323–326. Cited by: §2.