An Overflow-Free Fixed-Point Eigenvalue Decomposition Algorithm: A Case Study of Dimensionality Reduction in Hyperspectral Images
Abstract
We consider the problem of enabling robust range estimation of the eigenvalue decomposition (EVD) algorithm for a reliable fixed-point design. The simplicity of fixed-point circuitry has always made it tempting to implement EVD algorithms in fixed-point arithmetic. Working towards an effective fixed-point design, integer bitwidth allocation is a significant step that has a crucial impact on accuracy and hardware efficiency. This paper investigates the shortcomings of the existing range estimation methods while deriving bounds for the variables of the EVD algorithm. In light of these shortcomings, we introduce a range estimation approach based on vector and matrix norm properties, together with a scaling procedure, that retains all the assets of an analytical method. The method derives robust and tight bounds for the variables of the EVD algorithm. The bounds derived using the proposed approach remain the same for any input matrix and are also independent of the number of iterations or the size of the problem. Several benchmark hyperspectral data sets have been used to evaluate the efficiency of the proposed technique. We found that, with the proposed range estimation approach, all the variables generated during the computation of the Jacobi EVD algorithm are bounded within .
Affine arithmetic, eigenvalue decomposition, fixed-point arithmetic, formal methods, integer bitwidth allocation, interval arithmetic, overflow, range analysis, satisfiability modulo theory.
I Introduction
Eigenvalue decomposition (EVD) is a key building block in signal processing and control applications. The fixed-point development of the EVD algorithm has been extensively studied in the past few years [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] because fixed-point circuitry is significantly simpler and faster. Owing to its simplicity, fixed-point arithmetic is ubiquitous in low-cost embedded platforms. Fixed-point arithmetic has played an important role in supporting field-programmable gate array (FPGA) parallelism by keeping the hardware resources as low as possible. The most crucial step involved in the float-to-fixed conversion process is deciding the integer wordlengths (IWLs) in order to avoid overflow. This step has a significant impact on accuracy and hardware resources.
IWLs can be determined either by simulation [1], [11], [12] or by analytical (formal) methods [13], [14], [15], [16]. Existing works on fixed-point EVD have mainly used the simulation-based approach for finding the IWLs [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 17] because it can be applied to any kind of system. In simulation-based methods, variable bounds are estimated using the extreme values obtained from the simulation of the floating-point model. This method needs a large number of input matrices to obtain a reliable estimation of the ranges, and is therefore quite slow. Moreover, it cannot guarantee the absence of overflow for non-simulated matrices, primarily because of the diverse range of input data. A stochastic range estimation method is discussed in [18], which computes the ranges by propagating statistical distributions through operations. It requires a large number of simulations to estimate system parameters and an appropriate input data set to estimate quality parameters [19], [14]; therefore, it does not produce absolute bounds [20].
Several limitations are associated with analytical (formal) methods. An analytical method based on norms and transfer functions is described in [21]. This method produces theoretical bounds that guarantee no overflow will occur, but the approach is limited to linear time-invariant systems [19]. Interval arithmetic (IA) ignores correlation among signals, resulting in an overestimation of ranges [13]. Affine arithmetic (AA) is a preferable approach that takes into account the interdependency among the signals [22], but ranges determined through AA explode during division if the range of the divisor includes zero [15], [16]; IA suffers from the same problem. Both IA and AA are pessimistic approaches leading to higher implementation cost [23]. Satisfiability modulo theory (SMT) produces tight ranges compared to IA and AA [15], [24]. However, it is computationally expensive and much slower than IA and AA [16], [25]. IA and AA compute the ranges of the intermediate variables by propagating the bounds of the input data through the arithmetic operations, and SMT refines the range results provided by IA and AA. There are also issues common to IA, AA and SMT. Given a particular range of the input matrix, these analytical methods compute certain ranges of the intermediate variables based on the arithmetic operations; however, if the range of the input matrix changes, the bounds for the variables no longer remain the same. Another issue is that the bounds obtained using these methods are not independent of the number of iterations or the size of the problem. We exemplify these common issues in the next section.
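The two IA weaknesses just described, loss of correlation and range explosion during division, can be seen in a minimal sketch. The `Interval` class below is our own illustration of plain interval propagation, not any of the cited tools:

```python
# Minimal interval-arithmetic sketch (illustration only) showing two weaknesses
# of IA: loss of correlation between signals, and range explosion when the
# divisor interval contains zero.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, o):
        if o.lo <= 0.0 <= o.hi:          # divisor interval straddles zero
            return Interval(float("-inf"), float("inf"))
        return self * Interval(1.0 / o.hi, 1.0 / o.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(-1.0, 1.0)
print(x - x)   # IA forgets x is correlated with itself: [-2.0, 2.0], not [0, 0]
print(x / x)   # divisor contains zero: range explodes to [-inf, inf]
```

AA would fix the first problem (x − x yields exactly zero because the noise symbols cancel) but not the second, which is why both methods can report unbounded ranges for the EVD variables.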
II Motivation
In this section, we illustrate the issues associated with the existing range estimation methods through a motivational example: dimensionality reduction of hyperspectral images using fixed-point EVD. Along with the covariance matrices of hyperspectral images, we have also used some random symmetric positive semidefinite matrices generated in MATLAB. We have chosen this instance because several works in the literature on dimensionality reduction of hyperspectral images highlight overflow issues when using the fixed-point EVD algorithm. A sincere effort has been made to address them.
The diverse range of the elements of the input data matrices for different hyperspectral images (HSIs) limits the use of fixed-point EVD for dimensionality reduction [26], [27]. If the range of the input data is diverse, selecting a particular IWL may not avoid overflow for all input cases. Egho et al. [28] stated that fixed-point implementation of the EVD algorithm leads to inaccurate computation of eigenvalues and eigenvectors due to overflow; therefore, the authors implemented the Jacobi algorithm on an FPGA using floating-point arithmetic. Lopez et al. [26] reported overflow issues while computing EVD in fixed-point arithmetic. Burger et al. [29] mentioned that while processing millions of HSIs, numerical instabilities such as overflow should be avoided. Hence, determining proper IWLs for the variables of the fixed-point EVD algorithm so as to free it from overflow for all ranges of input data remains a major research issue.
The most widely used algorithm for dimensionality reduction is principal component analysis (PCA). PCA requires the computation of the eigenvalues (Λ) and eigenvectors (V) given by

ΣV = VΛ,   (1)

where Σ is the covariance matrix, Λ is a diagonal matrix containing the eigenvalues and the columns of V contain the eigenvectors. V is a new coordinate basis for the image. Several algorithms have been developed in the literature for the EVD of symmetric matrices [30], [31], [32]. Among them, the two-sided Jacobi algorithm is the most accurate and numerically stable [30], [33].
Most of the work on dimensionality reduction via EVD uses the two-sided Jacobi algorithm [28], [34], [35]. The same algorithm is used in this paper for computing the EVD of the covariance matrix of the hyperspectral data. Apart from its accuracy and stability, the Jacobi algorithm also has a high degree of potential for parallelism, and hence can be implemented on FPGAs [5], [6]. In [3, 5, 4, 6, 7, 8, 9, 10], this algorithm is implemented on FPGAs with fixed-point arithmetic to reduce power consumption and silicon area. However, in all these works, the fixed-point implementation of the Jacobi algorithm uses the simulation-based approach for estimating the ranges of the variables, which does not produce reliable bounds (as discussed earlier in Section I).
The Jacobi method computes the EVD of a symmetric matrix by producing a sequence of orthogonally similar matrices, which eventually converges to a diagonal matrix [30]:

A_{k+1} = J_k^T A_k J_k, with A_k → Λ as k → ∞,   (2)

where J_k is the Jacobi rotation and Λ is a diagonal matrix containing the eigenvalues (λ). In each step, we compute a Jacobi rotation J_k and update A_k to A_{k+1}, where J_k is chosen in such a way that two off-diagonal entries of a 2×2 submatrix of A_k are set to zero. This is called the two-sided or classical Jacobi method. Algorithm 1 lists the steps of the Jacobi method. In order to investigate the challenges with the fixed-point EVD algorithm,
we have used four different types of HSI collected by spaceborne (Hyperion), airborne (ROSIS and AVIRIS) and handheld (Landscape) sensors, along with a synthetic one (simulated EnMap). The selected Hyperion image subset contains Chilika Lake (latitude: 19.63 N to 19.68 N, longitude: 85.13 E to 85.18 E) and its catchment areas [36], [37]. The ROSIS data was acquired during a flight campaign at Pavia University, northern Italy [38]. The AVIRIS data was gathered by the AVIRIS sensor over the Indian Pines test site in Northwestern Indiana. The Landscape data is obtained from the database available from Stanford University [39]. The simulated EnMap image subset contains Maktesh Ramon, Israel (30.57 N, 34.83 E) [40]. The sizes of the covariance matrices for the images are 120×120 for Hyperion, 103×103 for ROSIS, 200×200 for AVIRIS, 148×148 for Landscape and 244×244 for simulated EnMap. Out of the 120, 103, 200, 148 and 244 bands, only a certain number of bands are sufficient for obtaining suitable information owing to the large correlation between adjacent bands. Hence, the dimension of the image should be reduced to decrease the redundancy in the data. The principal components (PCs) are decided from the magnitudes of the eigenvalues. The numbers of PCs that explain 99.0% of the variance are retained for reconstruction purposes. The following paragraphs describe the shortcomings of the existing range estimation methods while computing bounds for the EVD algorithm.
Tables I and II show the ranges obtained for Hyperion and ROSIS using simulation, IA, AA and the proposed method. The simulation-based range analysis is performed by feeding the floating-point algorithm with each input matrix separately and observing the data range. Notice that the ranges, or the required IWLs (Table III), estimated using the simulation-based approach for Hyperion cannot avoid overflow in the case of ROSIS. In other words, based on the ranges obtained using the simulation of the Hyperion data one would allocate 24 bits to the integer part, but this number of bits cannot avoid overflow in the case of ROSIS. The simulation-based method can only produce exact bounds for the simulated cases. Thus, it is characterized by a need for stimuli, due to which it cannot be relied upon in practical implementations. In contrast, static (analytical or formal) methods like IA and AA, which depend on the arithmetic operations, always provide worst-case bounds so that no overflow occurs. However, the bounds are highly overestimated compared to the actual bounds produced by the simulation-based method, as shown in Tables I and II. This increases the hardware resources unnecessarily.
In order to examine the range explosion problem of IA and AA, we computed the range of  using IA and AA for random symmetric positive semidefinite matrices of different sizes generated in MATLAB. Table IV shows how the range of  explodes when computed through IA and AA. All the range estimation using IA and AA has been carried out in double-precision floating-point format. According to the IEEE 754 double-precision floating-point format, the maximum number that can be represented is of the order of 10^308. Notice in Table IV that whenever a range exceeds the maximum representable number, it is reported as infinity. It is apparent from the algorithm that this variable is, in one way or another, involved in the computation of all the other variables; so, with its range becoming infinity, the ranges of the other variables also become infinity, as shown in Tables I and II. The range of the variable grows unbounded because of the pessimistic nature of the bounds produced by IA and AA. All the issues with the existing range estimation methods are handled meticulously by the proposed method, which produces unvarying (robust) bounds while at the same time tightening the ranges. This is quite apparent from Tables I and II.
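For concreteness, the IWL implied by an estimated range follows the usual two's-complement rule: a magnitude m needs roughly ⌈log2(m + 1)⌉ magnitude bits plus one sign bit. The function below is our own (slightly conservative) sketch of that rule, not the paper's exact allocation procedure, and the 8388607 magnitude is an illustrative value chosen to reproduce a 24-bit IWL like the Hyperion case:

```python
import math

def iwl_for_range(lo, hi):
    """Integer bits (sign included) needed so that values in [lo, hi]
    cannot overflow in two's-complement fixed point. Conservative rule:
    ceil(log2(m + 1)) magnitude bits plus one sign bit."""
    m = max(abs(lo), abs(hi))
    if m == 0:
        return 1                            # sign bit only
    return math.ceil(math.log2(m + 1)) + 1

print(iwl_for_range(-1.0, 1.0))             # 2: covers [-2, 1)
print(iwl_for_range(0.0, 8388607.0))        # 24 integer bits, illustrative
```

Because the rule is applied to the *estimated* bound, an overestimated range (as produced by IA/AA) directly inflates the IWL and hence the hardware cost, while an underestimated range (as can happen with simulation) causes overflow.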
Var  Simulation  IA  AA  Proposed 
Var  Simulation  IA  AA  Proposed 
Var  Hyperion  ROSIS 

24  25  
22  22  
24  25  
24  25 
Size  Start  =1, =1, =2  =3, =3, =4  =4, =4, =6  =6, =6, =8  End 
=2  
=4  
=6  
=8  
=10  
=12  
To combat this, SMT has arisen, which produces tight bounds compared to IA and AA. However, SMT is computationally costly; its runtime grows abruptly with application complexity. Hence, applying SMT to large matrices would be too complex. Beyond the individual issues of the analytical methods, there are also some common issues. Provided with a particular range of the input matrix, the analytical methods (IA, AA and SMT) compute certain ranges of the intermediate variables based on the arithmetic operations. However, the ranges no longer remain the same if the range of the input matrix changes. In order to investigate this issue, we consider two symmetric input matrices given by
(3) 
and
(4) 
The ranges obtained using IA and SMT in the case of matrix  cannot guarantee the absence of overflow in the case of , as shown in Tables V and VI. The same holds for the ranges derived using AA. This scenario is handled correctly by the proposed method, which produces robust and tight bounds in both cases. The range estimation using SMT was carried out using the freely available HySAT implementation [41].
There is one more issue common to these analytical methods. We know that, provided with a fixed range of the input stimuli, these analytical (formal) methods successfully produce robust bounds [24]. However, even if the range of the input matrix is fixed, the bounds produced by these methods are robust only for a particular size of the problem or number of iterations. In other words, the bounds obtained will not be independent of the number of iterations.
Var  Simulation  IA  SMT  Proposed 
0  0  0  
Var  Simulation  IA  SMT  Proposed 
0  0  0  
In order to illustrate this, let us consider two random symmetric positive definite matrices of sizes 3×3 and 5×5 given by
(5) 
and
(6) 
The bounds obtained using IA for the input matrices  and  are shown in Table VII. The bounds are unnecessarily large compared to the actual bounds produced by the simulation-based approach, shown in Table VIII. Now, the input matrices are scaled through the upper bound of their spectral norm to limit their range to within −1 and 1. The new matrices  and , whose elements range between −1 and 1, are given by
(7) 
and
(8) 
The ranges obtained for the scaled input matrices are shown in Table IX. Even after scaling, the ranges of the variables obtained using IA are large and unbounded compared to the original bounds obtained using the simulation-based method (Table X). The difference in unboundedness between the ranges shown in Tables VII and IX is not substantial. This illustrates that the ranges obtained using IA are not independent of the number of iterations; the same is true for AA and SMT. We can observe this phenomenon in Table XI for one of the test hyperspectral data sets (simulated EnMAP). In spite of the range of the covariance matrix being , the bounds estimated using IA and AA exploded compared to the actual bounds obtained using the simulation-based approach. These examples demonstrate that the bounds derived using the existing analytical methods are not independent of the number of iterations. Given the issues of the existing range estimation methods, our proposed method provides robust and tight bounds for the variables, as shown in Tables I, II, V, VI and XI. Moreover, the bounds produced by the proposed method are independent of the size of the problem. The key to all these advantages is the use of a scaling method and of vector and matrix norm properties to derive the ranges.
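The iteration dependence has a simple arithmetic cause: each rotation update has the form a′ = c·a − s·b, and with c, s ∈ [−1, 1] interval propagation can only conclude |a′| ≤ |a| + |b|, so the propagated bound can double per update even though an exact orthogonal rotation preserves norms. A toy sketch of this growth (our own illustration, not the paper's experiment):

```python
# Toy illustration of why IA bounds depend on the iteration count: propagating
# the worst-case bound of a rotated entry a' = c*a - s*b through k Jacobi-style
# updates. With c, s in [-1, 1] and |a|, |b| <= r, IA concludes |a'| <= 2r,
# so the bound doubles every update, although the true values stay bounded.
def ia_bound_after(k, r0=1.0):
    r = r0
    for _ in range(k):
        r = 2.0 * r    # |c*a - s*b| <= |a| + |b| <= 2r under plain IA
    return r

for k in (1, 5, 10, 20):
    print(k, ia_bound_after(k))   # 2.0, 32.0, 1024.0, 1048576.0
```

This exponential growth is exactly the pattern visible in Tables VII and IX: more iterations (a larger matrix needs more sweeps) means a larger propagated bound, independently of how the input was scaled.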
Variables  IA ()  IA () 

or  
Variables  Simulation ()  Simulation () 

or  
Variables  IA ()  IA () 

or  
Variables  Simulation ()  Simulation () 

or  
Var  Simulation  IA  AA  Proposed 
III Proposed Solution
To summarize, there are mainly three issues associated with the existing range estimation methods:

1) the inability of the simulation-based approach to produce unvarying (robust) bounds,

2) bounds produced by the existing analytical (formal) methods are not independent of the number of iterations or the size of the problem, and

3) overestimated bounds produced by IA and AA.
Taking into account issues 1 and 2, we propose in this study an analytical method based on vector and matrix norm properties to derive unvarying (robust) bounds for the variables of the EVD algorithm. The proof for deriving the bounds makes use of the fact that all the eigenvalues of a symmetric positive semidefinite matrix are bounded by the upper bound on the spectral norm of the matrix. Further, taking into consideration issue 3, we demonstrate that if the spectral norm of the matrix is kept within unity, tight ranges for the variables of the EVD algorithm can be derived. It is well-known that the spectral norm of any matrix A is bounded by [31], [42]
‖A‖₂ ≤ √(‖A‖₁ ‖A‖∞).   (9)

For symmetric matrices, the spectral norm in (9) can be replaced with the spectral radius ρ(A).
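The bound in (9) is cheap to evaluate and easy to check numerically. A small verification sketch (our own, using an arbitrary random symmetric positive semidefinite matrix as a stand-in for a covariance matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T                          # random symmetric PSD matrix

norm1   = np.linalg.norm(A, 1)       # maximum absolute column sum
norminf = np.linalg.norm(A, np.inf)  # maximum absolute row sum
bound   = np.sqrt(norm1 * norminf)   # upper bound (9) on the spectral norm
spec    = np.linalg.norm(A, 2)       # spectral norm (largest singular value)

assert spec <= bound
# for symmetric PSD matrices the spectral norm equals the spectral radius,
# i.e. the largest eigenvalue
assert np.isclose(spec, np.max(np.linalg.eigvalsh(A)))
print("spectral norm:", spec, "<= bound:", bound)
```

Note that for a symmetric matrix ‖A‖₁ = ‖A‖∞, so the bound reduces to the maximum absolute row sum, which requires only additions and comparisons, a convenient property for a hardware front end.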
Theorem 1
Given the bounds for spectral norm as , the Jacobi EVD algorithm applied to has the following bounds for the variables for all , , and :
where the subscripts denote the iteration number, the component of a vector and the entry of a matrix, respectively.
Using vector and matrix norm properties the ranges of the variables can be derived. We start by bounding the elements of the input symmetric matrix as
(10) 
where (10) follows from [31]. Hence, the elements of  are in the range . Line 30 in Algorithm 1 shows the computation of the eigenvalues. We know that , so the upper bound for the eigenvalues is equal to . In this work, the fixed-point Jacobi EVD algorithm is applied to covariance matrices. Due to the positive semidefiniteness of covariance matrices, the lower bound for the eigenvalues is equal to zero. Thus, the range of  is . The eigenvalues in Line 30 can also be calculated as
(11) 
According to the vector norm property, we can say that
(12) 
where  is the maximum of the absolute values of the elements of . From the upper bound of , (11) and (12), we can say that each element of  lies in the range . Thus all the elements of  lie in the range  for all the iterations. Since we have considered symmetric positive semidefinite matrices (unlike the off-diagonal entries, the diagonal elements are always non-negative), the diagonal elements of  are in the range . The rest of the elements lie in the range . Lines 5, 6 and 7 in Algorithm 1 compute ,  and  respectively. Since  and  are diagonal elements of , their range is .  is an off-diagonal entry of , therefore its range is . Line 8 in Algorithm 1 computes . Let  =  such that
(13) 
According to (13), the numerator () of  lies in the range .  and  are always non-negative. The summation  is greater than or equal to , because if  then  is equal to , and if  then  is greater than , since  is greater than or equal to zero and  is greater than . From the ranges of ,  and the denominator of , we can say that  will always be less than or equal to . Thus, we can conclude that  lies in the range  and the arc tangent of  is limited to the range . The Jacobi EVD method makes the off-diagonal entries of a 2×2 submatrix of  zero by overwriting  with . According to the 2×2 symmetric Schur decomposition discussed in [31],  and  are cosine and sine trigonometric functions. Thus, the bounds of  and  are . Line 9 in Algorithm 1 computes , which involves a square-root operation, and therefore the range of  can be modified to . As the ranges of  and  are  and  respectively, using the multiplication rule of interval arithmetic [43] the range of  (Line 10) can be derived as . Next we bound the elements of .  is the eigenvector matrix, each column of which has unit norm (the eigenvectors of symmetric matrices are orthogonal). Hence all the elements of  are in the range , following (14).
(14) 
Since the range of  is , according to Line 15 of Algorithm 1, the range of  can be fixed as . The bounds obtained according to Theorem 1 remain unchanged over all iterations of the algorithm. The bounds are independent of the number of iterations and of the size of the input matrix. Thus, issue 2 has been handled accurately. Now considering issue 1: the bounds according to Theorem 1 keep the same pattern (as shown in Theorem 1) for any input matrix, but depend on the factor . For different input matrices, the magnitude of  will change and this, in turn, will alter the bounds. Issue 1 has therefore not yet been handled prudently. Hence, we propose that if the input matrix is scaled through , then we achieve a twofold advantage: unvarying and tight bounds (the solution for issue 3). This resolves all the issues. If the input matrix is scaled as , the EVD of the scaled matrix is given as
(15) 
where  and .  is the eigenvector and  is the eigenvalue. After scaling by a scalar value, the original eigenvectors do not change; the original eigenvalues change by a factor . We need not recover the original eigenvalues because, in PCA, the eigenvalues are only used to determine the required number of PCs. Since all the eigenvalues are scaled by the same factor, the number of PCs does not change whether it is fixed using the original eigenvalues or the scaled ones. In applications where the original eigenvalues are required, the number of IWLs needed depends on the magnitude of the scaling factor. Only the binary point of the eigenvalues needs to be adjusted online, while for the other variables it is fixed irrespective of the properties of the input matrix.
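The invariances used in this argument can be checked numerically. The sketch below (our own, with a random PSD matrix standing in for a covariance matrix, and the scaling factor s computed as √(‖A‖₁‖A‖∞) following (9)) verifies that scaling leaves the eigenvectors and the explained-variance ratios unchanged while dividing all eigenvalues by s:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B @ B.T                                    # symmetric PSD test matrix

s = np.sqrt(np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
As = A / s                                     # scaled input, spectral norm <= 1

w,  V  = np.linalg.eigh(A)                     # original eigenpairs
ws, _  = np.linalg.eigh(As)                    # eigenvalues of the scaled matrix

assert np.allclose(ws, w / s)                  # eigenvalues scale by 1/s
assert np.allclose(As @ V, V * (w / s))        # original eigenvectors still diagonalize As
# explained-variance ratios, hence the number of retained PCs, are unchanged
assert np.allclose(ws / ws.sum(), w / w.sum())
```

Because the ratios ws/Σws and w/Σw coincide, the 99.0% variance criterion selects the same number of PCs before and after scaling, which is what allows the scaling to be applied without any recovery step in the PCA pipeline.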
Theorem 2
Given the scaling factor as , the Jacobi EVD algorithm (Algorithm 1) applied to has the following bounds for the variables for all , , and :
where the subscripts denote the iteration number, the component of a vector and the entry of a matrix, respectively.
Using vector and matrix norm properties the ranges of the variables can be derived. We start by bounding the elements of the input symmetric matrix as
(16) 
where (16) follows from [31]. Hence, the elements of  are in the range . The remaining bounds are derived in a similar fashion as described in the proof of Theorem 1. The bounds on the variables of the EVD algorithm obtained after scaling remain constant over all iterations and also do not vary with the input matrix. Besides, the bounds are tight.
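A floating-point sketch of the two-sided Jacobi method makes the claim of Theorem 2 concrete. The code below is our own reconstruction following the 2×2 symmetric Schur step of [31], not the paper's Algorithm 1 verbatim; it records the largest magnitude reached by any entry of the matrix iterates or of the eigenvector matrix, and after the proposed scaling that peak stays within 1:

```python
import numpy as np

def jacobi_evd(A, sweeps=10, tol=1e-12):
    """Cyclic two-sided Jacobi EVD sketch for a symmetric matrix A.
    Also tracks the largest magnitude reached by any intermediate entry."""
    A = A.copy()
    n = A.shape[0]
    V = np.eye(n)
    peak = np.abs(A).max()
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                # 2x2 symmetric Schur step: pick c, s so that (J^T A J)[p,q] = 0
                tau = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
                if tau != 0.0:
                    t = np.sign(tau) / (abs(tau) + np.sqrt(1.0 + tau * tau))
                else:
                    t = 1.0
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = t * c
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
                V = V @ J
                peak = max(peak, np.abs(A).max(), np.abs(V).max())
    return np.diag(A).copy(), V, peak

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
M = B @ B.T                                   # symmetric PSD test matrix
s = np.sqrt(np.linalg.norm(M, 1) * np.linalg.norm(M, np.inf))
As = M / s                                    # proposed scaling
lam, V, peak = jacobi_evd(As)
print("peak intermediate magnitude:", peak)   # stays within 1 after scaling
```

Every iterate J^T A J has the same spectral norm as A (orthogonal similarity), each entry of a symmetric matrix is bounded by its spectral norm, and V remains orthogonal, so with ‖As‖₂ ≤ 1 no intermediate entry can exceed 1, which is the property a fixed-point design with a fixed binary point relies on.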
IV More Data Sets with Results
In this section, we present a few more hyperspectral data sets and compare the bounds on the variables of the Jacobi EVD algorithm produced by the existing range estimation methods and the proposed approach. Tables XII and XIII show the comparison between the bounds obtained by the existing range estimation methods and the proposed approach for the AVIRIS and Landscape data sets. We can observe that the ranges estimated using simulation for the AVIRIS data cannot avoid overflow in the case of Landscape. This is quite apparent from the number of integer bits required, shown in Table XIV. As before, the bounds produced by IA and AA exploded. However, the proposed method produces robust and tight bounds. The bounds obtained are independent of the range of the input matrix and also of the number of iterations.
Var  Simulation  IA  AA  Proposed 
Var  Simulation  IA  AA  Proposed 
Var  AVIRIS  Landscape 

25  116  
24  116  
25  116  
25  116 
WLs  50 bits  40 bits  32 bits 

Hyperion  176.76  106.44  78.03 
ROSIS  180.13  134.79  74.96 
Landscape  180.65  122.36  77.18 
AVIRIS  178.54  110.76  76.43 
EnMap  180.67  130.24  78.36 
WLs  Hyperion  ROSIS  Landscape  

PC1  PC2  PC3  PC4  PC1  PC2  PC3  PC4  PC5  PC1  PC2  
32  8.1e7  4.9e7  3.7e6  4.3e6  0  1.8e7  4.5e6  6.5e6  2.4e6  1.2e9  1.1e8 
40  0  0  8.5e7  1.9e7  0  0  0  0  0  1.8e10  7.4e11 
50  0  0  0  0  0  0  0  0  0  0  0 
The signal-to-quantization-noise ratio (SQNR) is chosen as an error measure to evaluate the accuracy of the proposed method [25], [44]. It is given by

SQNR (dB) = 10 log₁₀ ( ‖λ_fl‖² / ‖λ_fl − λ_fx‖² ),   (17)

where λ_fl and λ_fx are the eigenvalues obtained from the double-precision floating-point and fixed-point implementations, respectively. The SQNR of the eigenvalues obtained through the proposed fixed-point design is shown in Table XV. In Table XV, we observe high magnitudes of SQNR, which show that the set of ranges obtained according to Theorem 2 is sufficient for avoiding overflow for any input matrix. For data sets like Landscape, where the range is exorbitant, resulting in large IWLs (Table XIV), wordlengths of 50, 40 or 32 bits would never fit. In such cases, with the proposed approach it was possible to fit all the variables within a 32-bit wordlength and obtain a high value of SQNR. A common measure to compare two images is the mean-square error (MSE) [45]. The MSE between PCA images of fixed-point implementations with various WLs, after deriving the ranges through the proposed approach, is shown in Table XVI. The required numbers of PCs for Hyperion, ROSIS and Landscape are 4, 5 and 2 respectively. The numbers of PCs explaining 99.0% of the variance in the case of AVIRIS and EnMap are relatively higher. Therefore, Table XVI only exhibits the results for Hyperion, ROSIS and Landscape; however, similar results were obtained for AVIRIS and EnMap.
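A computation of this error measure can be sketched as follows. This is our own illustration: the fixed-point result is emulated by rounding the eigenvalues to a 16-fractional-bit grid, a crude stand-in for the paper's actual SystemC fixed-point design:

```python
import numpy as np

def quantize(x, frac_bits):
    """Round values onto a fixed-point grid with the given fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

def sqnr_db(ref, approx):
    """SQNR in dB: signal power of ref over power of the error ref - approx."""
    noise = ref - approx
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(2)
B = rng.standard_normal((8, 8))
A = B @ B.T
lam = np.linalg.eigvalsh(A)            # "floating-point" eigenvalues
lam_fx = quantize(lam, frac_bits=16)   # emulated fixed-point eigenvalues
print(sqnr_db(lam, lam_fx), "dB")
```

A high SQNR indicates that the quantization error is small relative to the eigenvalue magnitudes; overflow, by contrast, would produce gross errors and a sharply negative SQNR, which is why the high values in Table XV support the sufficiency of the derived ranges.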
We observe that the MSE values are negligibly small, which signifies that the ranges obtained through the proposed approach are absolutely robust. Thus, the error metrics (SQNR and MSE) imply that the number of integer bits derived using the proposed approach is sufficient for avoiding overflow. After deriving the proper ranges through the proposed approach, the fixed-point design was synthesized on a Xilinx Virtex 7 XC7VX485 FPGA for different WLs using the Vivado high-level synthesis (HLS) design tool [46]. We used SystemC (which mimics the hardware description languages VHDL and Verilog) to develop the fixed-point code [47]. Using the HLS tool, the SystemC fixed-point code is transformed into a hardware IP (intellectual property) described in Verilog. We compare the resource utilization of the simulation approach with that of the proposed approach (for the same level of accuracy) on the test hyperspectral data sets. The comparative study is presented in Table XVII. There is a noteworthy difference in the hardware resources: the resources in the case of the simulation approach are considerably larger than those used in the case of the proposed approach. To maintain the same level of accuracy (SQNR, MSE) as the proposed method, the simulation approach uses a 50-bit wordlength.
The proposed method also produces robust and tight analytical bounds for the variables of the singular value decomposition algorithm [48].
Proposed  

WL  Hyperion  ROSIS  AVIRIS  
FF  LUTs  Power  FF  LUTs  Power  FF  LUTs  Power  
32  1.62  6.29  0.413  1.62  6.32  0.42  1.63  6.51  0.45 
Simulation  
WL  Hyperion  ROSIS  AVIRIS  
FF  LUTs  Power  FF  LUTs  Power  FF  LUTs  Power  
50  8  23  2.59  8  23  2.64  8  23  2.64 
V Conclusion
In this paper, we bring out the problem of integer bitwidth allocation for the variables of the eigenvalue decomposition algorithm and highlight the issues of the existing range estimation methods in the context of EVD. Integer bitwidth allocation is an essential step in fixed-point hardware design. In light of the significance of this step, this paper introduces an analytical method based on vector and matrix norm properties, together with a scaling procedure, to produce robust and tight bounds. Through several hyperspectral data sets, we demonstrate the efficacy of the proposed method in dealing with the issues associated with the existing methods. The SQNR and MSE values show that the ranges derived using the proposed approach are sufficient for avoiding overflow for any input matrix. Many other numerical linear algebra algorithms can benefit from the proposed method, such as QR factorization, the power method for finding the largest eigenvalue, the bisection method for finding the eigenvalues of a symmetric tridiagonal matrix, QR iteration, the Arnoldi method for transforming a nonsymmetric matrix into an upper Hessenberg matrix, LU factorization and Cholesky factorization. Dealing with the precision problem will be a subject of future work.
References
 [1] Z. Nikolić, H. T. Nguyen, and G. Frantz, “Design and implementation of numerical linear algebra algorithms on fixed point dsps,” EURASIP Journal on Advances in Signal Processing, vol. 2007, 2007.
 [2] Y.L. Chen, C.Z. Zhan, T.J. Jheng, and A.Y. A. Wu, “Reconfigurable adaptive singular value decomposition engine design for highthroughput mimoofdm systems,” Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, vol. 21, no. 4, pp. 747–760, 2013.
 [3] D. Milford and M. Sandell, “Singular value decomposition using an array of cordic processors,” Signal Processing, vol. 102, pp. 163–170, 2014.
 [4] T. Pradhan, B. Kabi, A. Routray, and G. Anirudh, “Fixedpoint hestenes svd algorithm for computing eigen faces.”
 [5] R. Mohanty, G. Anirudh, T. Pradhan, B. Kabi, and A. Routray, “Design and performance analysis of fixedpoint jacobi svd algorithm on reconfigurable system,” IERI Procedia, vol. 7, pp. 21–27, 2014.
 [6] P. M. Szecówka and P. Malinowski, “Cordic and svd implementation in digital hardware,” in Mixed Design of Integrated Circuits and Systems (MIXDES), 2010 Proceedings of the 17th International Conference. IEEE, 2010, pp. 237–242.
 [7] R. C. Grammenos, S. Isam, and I. Darwazeh, “Fpga design of a truncated svd based receiver for the detection of sefdm signals,” in Personal Indoor and Mobile Radio Communications (PIMRC), 2011 IEEE 22nd International Symposium on. IEEE, 2011, pp. 2085–2090.
 [8] Y. Wang, K. Cunningham, P. Nagvajara, and J. Johnson, “Singular value decomposition hardware for mimo: State of the art and custom design,” in Reconfigurable Computing and FPGAs (ReConFig), 2010 International Conference on. IEEE, 2010, pp. 400–405.
 [9] Z. Liu, K. Dickson, and J. V. McCanny, “Applicationspecific instruction set processor for soc implementation of modern signal processing algorithms,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 52, no. 4, pp. 755–765, 2005.
 [10] C. Shi and R. W. Brodersen, “Automated fixedpoint datatype optimization tool for signal processing and communication systems,” in Design Automation Conference, 2004. Proceedings. 41st. IEEE, 2004, pp. 478–483.
 [11] S. Kim, K.I. Kum, and W. Sung, “Fixedpoint optimization utility for c and c++ based digital signal processing programs,” Circuits and Systems II: Analog and Digital Signal Processing, IEEE Transactions on, vol. 45, no. 11, pp. 1455–1464, 1998.
 [12] K.I. Kum, J. Kang, and W. Sung, “Autoscaler for c: An optimizing floatingpoint to integer c program converter for fixedpoint digital signal processors,” Circuits and Systems II: Analog and Digital Signal Processing, IEEE Transactions on, vol. 47, no. 9, pp. 840–848, 2000.
 [13] D.-U. Lee, A. A. Gaffar, R. C. Cheung, O. Mencer, W. Luk, G. Constantinides et al., “Accuracy-guaranteed bit-width optimization,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 25, no. 10, pp. 1990–2000, 2006.
 [14] J. López, C. Carreras, O. Nieto-Taladriz et al., “Improved interval-based characterization of fixed-point LTI systems with feedback loops,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 26, no. 11, pp. 1923–1933, 2007.
 [15] A. B. Kinsman and N. Nicolici, “Bit-width allocation for hardware accelerators for scientific computing using SAT-modulo theory,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 29, no. 3, pp. 405–413, 2010.
 [16] S. Vakili, J. P. Langlois, and G. Bois, “Enhanced precision analysis for accuracy-aware bit-width optimization using affine arithmetic,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 32, no. 12, pp. 1853–1865, 2013.
 [17] T. Pradhan, B. Kabi, R. Mohanty, and A. Routray, “Development of numerical linear algebra algorithms in dynamic fixed-point format: A case study of Lanczos tridiagonalization,” International Journal of Circuit Theory and Applications, vol. 44, no. 6, pp. 1222–1262, 2016.
 [18] B. Wu, J. Zhu, and F. N. Najm, “Dynamic-range estimation,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 25, no. 9, pp. 1618–1636, 2006.
 [19] G. Constantinides, A. B. Kinsman, and N. Nicolici, “Numerical data representations for FPGA-based scientific computing,” IEEE Design & Test of Computers, no. 4, pp. 8–17, 2011.
 [20] A. Banciu, “A stochastic approach for the range evaluation,” Ph.D. dissertation, Université Rennes 1, 2012.
 [21] J. Carletta, R. Veillette, F. Krach, and Z. Fang, “Determining appropriate precisions for signals in fixed-point IIR filters,” in Proceedings of the 40th Annual Design Automation Conference. ACM, 2003, pp. 656–661.
 [22] E. Goubault and S. Putot, “A zonotopic framework for functional abstractions,” Formal Methods in System Design, vol. 47, no. 3, pp. 302–360, 2015.
 [23] R. Nehmeh, D. Menard, A. Banciu, T. Michel, and R. Rocher, “Integer word-length optimization for fixed-point systems,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014, pp. 8321–8325.
 [24] A. B. Kinsman and N. Nicolici, “Automated range and precision bit-width allocation for iterative computations,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 30, no. 9, pp. 1265–1278, 2011.
 [25] O. Sarbishei and K. Radecka, “On the fixed-point accuracy analysis and optimization of polynomial specifications,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 32, no. 6, pp. 831–844, 2013.
 [26] S. Lopez, T. Vladimirova, C. Gonzalez, J. Resano, D. Mozos, and A. Plaza, “The promise of reconfigurable computing for hyperspectral imaging onboard systems: A review and trends,” Proceedings of the IEEE, vol. 101, no. 3, pp. 698–722, 2013.
 [27] C. Egho, T. Vladimirova, and M. N. Sweeting, “Acceleration of Karhunen-Loève transform for system-on-chip platforms,” in 2012 NASA/ESA Conference on Adaptive Hardware and Systems (AHS). IEEE, 2012, pp. 272–279.
 [28] C. Egho and T. Vladimirova, “Adaptive hyperspectral image compression using the KLT and integer KLT algorithms,” in 2014 NASA/ESA Conference on Adaptive Hardware and Systems (AHS). IEEE, 2014, pp. 112–119.
 [29] J. Burger and A. Gowen, “Data handling in hyperspectral image analysis,” Chemometrics and Intelligent Laboratory Systems, vol. 108, no. 1, pp. 13–22, 2011.
 [30] B. N. Datta, Numerical Linear Algebra and Applications. SIAM, 2010.
 [31] G. H. Golub and C. F. Van Loan, Matrix computations. JHU Press, 2012, vol. 3.
 [32] T. Pradhan, A. Routray, and B. Kabi, “Comparative evaluation of symmetric SVD algorithms for real-time face and eye tracking,” in Matrix Information Geometry. Springer, 2013, pp. 323–340.
 [33] J. Demmel and K. Veselic, “Jacobi’s method is more accurate than QR,” SIAM Journal on Matrix Analysis and Applications, vol. 13, no. 4, pp. 1204–1245, 1992.
 [34] C. Egho and T. Vladimirova, “Hardware acceleration of the integer Karhunen-Loève transform algorithm for satellite image compression,” in 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). IEEE, 2012, pp. 4062–4065.
 [35] C. Gonzalez, S. Lopez, D. Mozos, and R. Sarmiento, “A novel FPGA-based architecture for the estimation of the virtual dimensionality in remotely sensed hyperspectral images,” Journal of Real-Time Image Processing, pp. 1–12.
 [36] https://hyspiri.jpl.nasa.gov/downloads/2015_Symposium/day2/8_DecompositionAlgorithmDimensionalityReductionHyperspec_Mohanty.pdf.
 [37] B. Kabi, A. S. Sahadevan, R. Mohanty, A. Routray, B. S. Das, and A. Mohanty, “An overflow-free fixed-point singular value decomposition algorithm for dimensionality reduction of hyperspectral images.”
 [38] http://www.ehu.eus/ccwintco/index.php?title=HyperspectralRemoteSensingScenes.
 [39] T. Skauli and J. Farrell, “A collection of hyperspectral images for imaging systems research,” in IS&T/SPIE Electronic Imaging. International Society for Optics and Photonics, 2013, p. 86600C.
 [40] K. Segl, L. Guanter, C. Rogass, T. Kuester, S. Roessner, H. Kaufmann, B. Sang, V. Mogulsky, and S. Hofer, “EeteS: The EnMAP end-to-end simulation tool,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 522–530, 2012.
 [41] HySAT Download, University of Oldenburg, Oldenburg, Germany. [Online]. Available: http://hysat.informatik.uni-oldenburg.de/26273.html
 [42] N. J. Higham, Accuracy and Stability of Numerical Algorithms. SIAM, 2002.
 [43] R. E. Moore, Interval Analysis. Prentice-Hall, Englewood Cliffs, NJ, 1966, vol. 4.
 [44] R. Rocher, D. Menard, P. Scalart, and O. Sentieys, “Analytical approach for numerical accuracy estimation of fixed-point systems based on smooth operations,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 59, no. 10, pp. 2326–2339, 2012.
 [45] Z. Wang and A. C. Bovik, “Mean squared error: Love it or leave it? A new look at signal fidelity measures,” IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98–117, 2009.
 [46] “C-based design: High-level synthesis with Vivado HLS,” 2013.
 [47] IEEE Standards Association, “IEEE Std. 1666-2005: Open SystemC Language Reference Manual,” 2005.
 [48] https://www.researchgate.net/publication/313036762_FixedPoint_Singular_Value_Decomposition_Algorithms_with_Tight_Analytical_Bounds.