An Overflow-Free Fixed-Point Eigenvalue Decomposition Algorithm: A Case Study of Dimensionality Reduction in Hyperspectral Images
We consider the problem of enabling robust range estimation of the eigenvalue decomposition (EVD) algorithm for a reliable fixed-point design. The simplicity of fixed-point circuitry has always made it tempting to implement EVD algorithms in fixed-point arithmetic. Working towards an effective fixed-point design, integer bit-width allocation is a significant step that has a crucial impact on accuracy and hardware efficiency. This paper investigates the shortcomings of the existing range estimation methods while deriving bounds for the variables of the EVD algorithm. In light of these shortcomings, we introduce a range estimation approach based on vector and matrix norm properties, together with a scaling procedure, that retains all the assets of an analytical method. The proposed approach derives robust and tight bounds for the variables of the EVD algorithm. The bounds remain the same for any input matrix and are independent of the number of iterations and the size of the problem. Benchmark hyperspectral data sets have been used to evaluate the efficiency of the proposed technique. It was found that, with the proposed range estimation approach, all the variables generated during the computation of the Jacobi EVD are bounded within [-1, 1].
Affine arithmetic, eigenvalue decomposition, fixed-point arithmetic, formal methods, integer bit-width allocation, interval arithmetic, overflow, range analysis, satisfiability-modulo-theory.
Eigenvalue decomposition (EVD) is a key building block in signal processing and control applications. The fixed-point development of EVD algorithms has been extensively studied in the past few years [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] because fixed-point circuitry is significantly simpler and faster. Owing to its simplicity, fixed-point arithmetic is ubiquitous in low-cost embedded platforms. Fixed-point arithmetic has also played an important role in supporting field-programmable gate array (FPGA) parallelism by keeping the hardware resources as low as possible. The most crucial step in the float-to-fixed conversion process is deciding the integer word-lengths (IWLs) in order to avoid overflow. This step has a significant impact on accuracy and hardware resources.
IWLs can be determined either by simulation or by analytical (formal) methods. Existing works on fixed-point EVD have mainly used the simulation-based approach for finding the IWLs [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 17] because it can be applied to any kind of system. In simulation-based methods, variable bounds are estimated from the extreme values observed during simulation of the floating-point model. This method needs a large number of input matrices to obtain a reliable estimate of the ranges, and is therefore quite slow. Moreover, it cannot guarantee the absence of overflow for non-simulated matrices, primarily because of the diverse range of input data. A stochastic range estimation method computes the ranges by propagating statistical distributions through operations. It requires a large number of simulations to estimate system parameters and an appropriate input data set to estimate quality parameters, and therefore it does not produce absolute bounds.
There are several limitations associated with analytical (formal) methods. An analytical method based on norms and transfer functions produces theoretical bounds that guarantee no overflow will occur, but the approach is limited to linear time-invariant systems. Interval arithmetic (IA) ignores correlation among signals, resulting in an overestimation of ranges. Affine arithmetic (AA) is a preferable approach that takes into account the interdependency among signals, but ranges determined through AA explode during division if the range of the divisor includes zero; IA suffers from the same problem. Both IA and AA are pessimistic approaches leading to higher implementation cost. Satisfiability modulo theory (SMT) produces tight ranges compared to IA and AA; however, it is computationally expensive and much slower. IA and AA compute the ranges of the intermediate variables by propagating the bounds of the input data through the arithmetic operations, while SMT refines the range results provided by IA and AA. There are also issues common to IA, AA and SMT. Given a particular range of the input matrix, these analytical methods compute certain ranges of the intermediate variables based on the arithmetic operations; if the range of the input matrix changes, the bounds for the variables no longer remain the same. Another issue is that the bounds obtained using these methods are not independent of the number of iterations or the size of the problem. We exemplify these common issues in the next section.
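The overestimation behaviour of IA mentioned above can be illustrated in a few lines of code. The following is a minimal interval-arithmetic sketch (the `Interval` class is our own illustration, not a library API): because IA forgets that both operands are the *same* signal, x - x evaluates to [-2, 2] instead of the exact [0, 0], and x * x to [-1, 1] instead of [0, 1].

```python
# Minimal interval arithmetic: IA ignores correlation between operands,
# so self-cancelling expressions are grossly overestimated.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # IA subtracts the whole range of the subtrahend: correlation is lost
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(-1.0, 1.0)
print(x - x)   # [-2.0, 2.0], although x - x is exactly 0
print(x * x)   # [-1.0, 1.0], although x * x lies in [0, 1]
```

This loss of correlation, compounded over the many dependent operations of an iterative algorithm, is what drives the range blow-up discussed below.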
In this section, we illustrate the issues associated with the existing range estimation methods through a motivational example: dimensionality reduction of hyperspectral images using fixed-point EVD. Along with the covariance matrices of hyperspectral images, we have also used some random symmetric positive semi-definite matrices generated in MATLAB. We have chosen this example because several works in the literature on dimensionality reduction of hyperspectral images highlight overflow issues when using a fixed-point EVD algorithm, and we make a sincere effort to address them.
The diverse range of the elements of the input data matrices for different hyperspectral images (HSIs) limits the use of fixed-point EVD for dimensionality reduction. If the range of the input data is diverse, a particular choice of IWL may not avoid overflow for all inputs. Egho et al. stated that fixed-point implementation of the EVD algorithm leads to inaccurate computation of eigenvalues and eigenvectors due to overflow; therefore, the authors implemented the Jacobi algorithm on an FPGA using floating-point arithmetic. Lopez et al. reported overflow issues while computing the EVD in fixed-point arithmetic. Burger et al. mentioned that while processing millions of HSIs, numerical instabilities such as overflow should be avoided. Hence, determining proper IWLs for the variables of the fixed-point EVD algorithm, so that it is free from overflow for any input data range, remains a major research issue.
The most widely used algorithm for dimensionality reduction is principal component analysis (PCA). PCA requires computation of the eigenvalues (Lambda) and eigenvectors (V) of the covariance matrix C, given by C V = V Lambda, where C is the covariance matrix, Lambda is a diagonal matrix containing the eigenvalues, and the columns of V contain the eigenvectors. V is a new coordinate basis for the image. Several algorithms have been developed in the literature for the EVD of symmetric matrices. Among them, the two-sided Jacobi algorithm is the most accurate and numerically stable, and most of the work on dimensionality reduction via EVD uses it.
The same algorithm is used in this paper for computing the EVD of the covariance matrix of the hyperspectral data. Apart from its accuracy and stability, the Jacobi algorithm also has a high degree of potential for parallelism, and hence can be implemented on an FPGA. In [3, 5, 4, 6, 7, 8, 9, 10] this algorithm is implemented on an FPGA with fixed-point arithmetic to reduce power consumption and silicon area. However, in all these works, the fixed-point implementation of the Jacobi algorithm uses the simulation-based approach for estimating the ranges of the variables, which does not produce reliable bounds (as discussed earlier in Section I).
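As a concrete reference point for the discussion that follows, here is a minimal plain-Python sketch of the classical two-sided Jacobi method. The function name `jacobi_evd` and the largest-off-diagonal pivot strategy are our own illustrative choices, not the paper's Algorithm 1.

```python
import math

def jacobi_evd(A, max_rotations=100, tol=1e-12):
    """Classical two-sided Jacobi EVD of a symmetric matrix (list of lists)."""
    n = len(A)
    A = [row[:] for row in A]                         # work on a copy
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(max_rotations):
        # pivot: largest off-diagonal entry |a_pq|
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) < tol:
            break                                     # off-diagonal part is negligible
        # rotation angle that zeroes a_pq (2x2 symmetric Schur decomposition)
        theta = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):                            # rows:    A <- J^T A
            apk, aqk = A[p][k], A[q][k]
            A[p][k], A[q][k] = c * apk - s * aqk, s * apk + c * aqk
        for k in range(n):                            # columns: A <- A J, V <- V J
            akp, akq = A[k][p], A[k][q]
            A[k][p], A[k][q] = c * akp - s * akq, s * akp + c * akq
            vkp, vkq = V[k][p], V[k][q]
            V[k][p], V[k][q] = c * vkp - s * vkq, s * vkp + c * vkq
    return [A[i][i] for i in range(n)], V             # eigenvalues, eigenvectors

vals, vecs = jacobi_evd([[2.0, 1.0], [1.0, 2.0]])
print(sorted(vals))   # the eigenvalues of [[2,1],[1,2]] are 1 and 3
```

Every intermediate quantity here (the matrix entries, theta, c, s) is a variable whose range must be bounded before a fixed-point format can be assigned to it.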
The Jacobi method computes the EVD of a symmetric matrix A by producing a sequence of orthogonally similar matrices that eventually converges to a diagonal matrix Lambda containing the eigenvalues. In each step, we compute a Jacobi rotation J and update A to J^T A J, where J is chosen in such a way that two off-diagonal entries of a 2 x 2 submatrix of A are set to zero. This is called the two-sided or classical Jacobi method. Algorithm 1 lists the steps of the Jacobi method. In order to investigate the challenges with the fixed-point EVD algorithm, we have used four different types of HSI collected by
space-borne (Hyperion), air-borne (ROSIS and AVIRIS), handheld (Landscape) and synthetic (simulated EnMAP) sensors. The selected Hyperion image subset contains Chilika Lake (latitude: 19.63 N - 19.68 N, longitude: 85.13 E - 85.18 E) and its catchment areas. The ROSIS data was acquired during a flight campaign at Pavia University, northern Italy. The AVIRIS data was gathered by the AVIRIS sensor over the Indian Pines test site in north-western Indiana. The Landscape data is obtained from the database available from Stanford University. The simulated EnMAP image subset contains Maktesh Ramon, Israel (30.57 N, 34.83 E). The sizes of the covariance matrices for the images are 120 x 120 for Hyperion, 103 x 103 for ROSIS, 200 x 200 for AVIRIS, 148 x 148 for Landscape and 244 x 244 for simulated EnMAP. Out of 120, 103, 200, 148 and 244 bands, only a certain number of bands are sufficient for obtaining suitable information due to the large correlation between adjacent bands. Hence, the dimension of the image should be reduced to decrease the redundancy in the data. The principal components (PCs) are decided from the magnitudes of the eigenvalues. The number of PCs that explains 99.0% of the variance is retained for reconstruction. The following paragraph describes the shortcomings of the existing range estimation methods while computing bounds for the EVD algorithm.
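The retention rule just described can be sketched in a few lines; the helper `num_pcs` is a hypothetical illustration of selecting, from the eigenvalue magnitudes, the number of PCs that explain 99.0% of the total variance.

```python
def num_pcs(eigvals, threshold=0.99):
    """Number of leading PCs whose cumulative variance reaches the threshold."""
    vals = sorted(eigvals, reverse=True)   # largest eigenvalues first
    total = sum(vals)
    cum = 0.0
    for k, v in enumerate(vals, start=1):
        cum += v
        if cum / total >= threshold:
            return k
    return len(vals)

# Toy eigenvalue spectrum: the first three components carry 99.5% of the variance.
print(num_pcs([50.0, 30.0, 19.5, 0.4, 0.1]))   # 3
```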
Tables I and II show the ranges obtained for Hyperion and ROSIS using simulation, IA, AA and the proposed method. The simulation-based range analysis is performed by feeding the floating-point algorithm with each input matrix separately and observing the data range. Notice that the ranges, or the required IWLs (Table III), estimated using the simulation-based approach for Hyperion cannot avoid overflow in the case of ROSIS. In other words, based on the ranges obtained from the simulation of the Hyperion data one would allocate 24 bits to the integer part, but this number of bits cannot avoid overflow in the case of ROSIS. The simulation-based method can only produce exact bounds for the simulated cases. Thus, it is characterized by a need for stimuli, and cannot be relied upon in practical implementations. In contrast, static (analytical or formal) methods like IA and AA, which depend on the arithmetic operations, always provide worst-case bounds so that no overflow occurs. However, these bounds are highly overestimated compared to the actual bounds produced by the simulation-based method, as shown in Tables I and II. This increases the hardware resources unnecessarily.
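The step from an estimated bound to an integer bit count can be made explicit. The following sketch (our own illustration) computes the smallest signed IWL whose representable range strictly covers a given bound, assuming a two's-complement format with one sign bit.

```python
import math

def iwl_for_bound(bound):
    """Smallest signed IWL such that 2^(IWL-1) strictly exceeds the bound."""
    return max(1, math.floor(math.log2(bound)) + 2)

print(iwl_for_bound(1.0))     # 2: variables bounded by 1 need 2 integer bits
print(iwl_for_bound(8.4e6))   # 25: a bound just above 2^23 needs 25 integer bits
```

A bound estimated from one data set (such as Hyperion) fixes the IWL; if another data set (such as ROSIS) exceeds that bound, overflow follows directly from this formula.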
In order to examine the range explosion problem of IA and AA, we computed the ranges using IA and AA for random symmetric positive semi-definite matrices of different sizes generated in MATLAB. Table IV shows how the range of one of the variables explodes when computed through IA and AA. All range estimations using IA and AA were carried out in double-precision floating-point format. According to the IEEE 754 double-precision floating-point format, the maximum representable number is on the order of 10^308. In Table IV, whenever a range exceeds the maximum representable number, it is reported as infinity. It is apparent from the algorithm that this variable is, in one way or another, involved in the computation of all other variables; so, once its range becomes infinite, the ranges of the other variables also become infinite, as shown in Tables I and II. The range of the variable grows unbounded because of the pessimistic nature of the bounds produced by IA and AA. All these issues are handled meticulously by the proposed method, which produces unvarying, robust bounds while at the same time tightening the ranges, as is apparent from Tables I and II.
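The mechanism behind the explosion is interval division. The sketch below (our own illustration of the standard IA division rule) shows that the quotient bound scales with the reciprocal of the smallest divisor magnitude, and becomes unbounded as soon as the divisor interval contains zero, as happens for the rotation-parameter computation in Jacobi EVD.

```python
def ia_div(num, den):
    """Interval division (lo, hi) / (lo, hi) under standard IA rules."""
    lo_n, hi_n = num
    lo_d, hi_d = den
    if lo_d <= 0.0 <= hi_d:
        # divisor interval contains zero: the quotient is unbounded
        return (float("-inf"), float("inf"))
    recips = [1.0 / lo_d, 1.0 / hi_d]
    p = [lo_n * r for r in recips] + [hi_n * r for r in recips]
    return (min(p), max(p))

print(ia_div((-1.0, 1.0), (0.5, 2.0)))    # (-2.0, 2.0): still bounded
print(ia_div((-1.0, 1.0), (-1.0, 1.0)))   # (-inf, inf): divisor interval contains zero
```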
To combat this, SMT-based methods have arisen, which produce tight bounds compared to IA and AA. However, SMT is computationally costly; its runtime grows abruptly with application complexity. Hence, applying SMT to large matrices would be too complex. Besides the individual issues of the analytical methods, there are also some common issues. Provided with a particular range of the input matrix, the analytical methods (IA, AA and SMT) compute certain ranges of the intermediate variables based on the arithmetic operations. However, the ranges no longer remain the same if the range of the input matrix changes. In order to investigate this issue, we consider two symmetric input matrices given by
The ranges obtained using IA and SMT for the first matrix cannot guarantee freedom from overflow for the second, as shown in Tables V and VI. The same holds for the ranges derived using AA. This scenario is handled correctly by the proposed method, which produces robust and tight bounds in both cases. The range estimation using SMT was carried out with the freely available HySAT implementation.
There is one more issue common to these analytical methods. Provided with a fixed range of the input stimuli, these analytical (formal) methods successfully produce robust bounds. However, even when the range of the input matrix is fixed, the bounds produced by these methods are robust only for a particular size of the problem or number of iterations. In other words, the bounds obtained are not independent of the number of iterations.
To illustrate this, let us consider two random symmetric positive definite matrices of sizes 3 x 3 and 5 x 5 given by
The bounds obtained using IA for the two input matrices are shown in Table VII. The bounds are unnecessarily large compared to the actual bounds produced by the simulation-based approach shown in Table VIII. Now, the input matrices are scaled through the upper bound of their spectral norm to limit their range within -1 and 1. The new matrices, whose elements range between -1 and 1, are given by
The ranges obtained for the scaled input matrices are shown in Table IX. Even after scaling, the ranges of the variables obtained using IA are large and unbounded compared to the original bounds obtained using the simulation-based method (Table X). The difference in unboundedness between the ranges shown in Tables VII and IX is not substantial. This illustrates that the ranges obtained using IA are not independent of the number of iterations; the same holds for AA and SMT. We can observe the phenomenon in Table XI for one of the test hyperspectral data sets (simulated EnMAP). In spite of the limited range of the covariance matrix, the bounds estimated using IA and AA exploded compared to the actual bounds obtained using the simulation-based approach. These examples demonstrate that the bounds derived using the existing analytical methods are not independent of the number of iterations. Given the issues of the existing range estimation methods, our proposed method provides robust and tight bounds for the variables, as shown in Tables I, II, V, VI and XI. Moreover, the bounds produced by the proposed method are independent of the size of the problem. The key to all these advantages is the use of a scaling procedure together with vector and matrix norm properties to derive the ranges.
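The scaling step referred to above can be sketched as follows (a minimal illustration, with our own helper name `scale_by_norm_bound`): the matrix is divided by sqrt(||A||_1 * ||A||_inf), which is an upper bound on the spectral norm, so that every entry of the scaled matrix lies within [-1, 1].

```python
def scale_by_norm_bound(A):
    """Divide A by sqrt(||A||_1 * ||A||_inf), an upper bound on ||A||_2."""
    n = len(A)
    norm_1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))    # max column sum
    norm_inf = max(sum(abs(A[i][j]) for j in range(n)) for i in range(n))  # max row sum
    k = (norm_1 * norm_inf) ** 0.5
    return [[A[i][j] / k for j in range(n)] for i in range(n)], k

A = [[9.0, 2.0], [2.0, 5.0]]
S, k = scale_by_norm_bound(A)
print(k)                                             # 11.0 for this matrix
print(max(abs(x) for row in S for x in row) <= 1.0)  # True: entries now within [-1, 1]
```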
III. Proposed Solution
To summarize, there are three main issues associated with the existing range estimation methods:
1) the inability of the simulation-based approach to produce unvarying or robust bounds,
2) bounds produced by the existing analytical (formal) methods that are not independent of the number of iterations or the size of the problem, and
3) overestimated bounds produced by IA and AA.
Taking into account issues 1 and 2, we propose in this study an analytical method based on vector and matrix norm properties to derive unvarying, robust bounds for the variables of the EVD algorithm. The proof for deriving the bounds makes use of the fact that all the eigenvalues of a symmetric positive semi-definite matrix are bounded by an upper bound on the spectral norm of the matrix. Further, taking issue 3 into consideration, we demonstrate that if the spectral norm of the matrix is kept within unity, tight ranges for the variables of the EVD algorithm can be derived. It is well known that the spectral norm of any matrix A is bounded by
||A||_2 <= sqrt(||A||_1 * ||A||_inf).   (9)
For symmetric matrices, the spectral norm in (9) can be replaced with the spectral radius.
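The inequality (9) can be checked numerically. In the sketch below (our own illustration), the true spectral norm is approximated by power iteration on A^T A and compared with the sqrt(||A||_1 * ||A||_inf) bound.

```python
def norm1(A):
    """Maximum absolute column sum."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def norminf(A):
    """Maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in A)

def spectral_norm(A, iters=200):
    """Approximate ||A||_2 by power iteration on A^T A."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]   # A v
        u = [sum(A[i][j] * w[i] for i in range(m)) for j in range(n)]   # A^T (A v)
        mag = max(abs(x) for x in u) or 1.0
        v = [x / mag for x in u]                                        # normalize
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    num = sum(x * x for x in Av) ** 0.5
    den = sum(x * x for x in v) ** 0.5
    return num / den

A = [[1.0, 2.0], [3.0, 4.0]]
bound = (norm1(A) * norminf(A)) ** 0.5
print(spectral_norm(A) <= bound)   # True: ||A||_2 ~ 5.465 <= sqrt(6 * 7) ~ 6.481
```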
Given the upper bound K = sqrt(||A||_1 * ||A||_inf) on the spectral norm, the Jacobi EVD algorithm applied to A has the following bounds for its variables for every iteration and every index, where the superscript denotes the iteration number and the subscripts denote the component of a vector and the entry of a matrix, respectively.
Using vector and matrix norm properties, the ranges of the variables can be derived. We start by bounding the elements of the input symmetric matrix: since the magnitude of every entry of a symmetric matrix is at most its spectral norm, (10) follows from (9). Hence, the elements of A are in the range [-K, K], where K denotes the spectral-norm bound. Line 30 in Algorithm 1 shows the computation of the eigenvalues. Since the spectral radius is bounded by K, the upper bound for the eigenvalues is equal to K. In this work, the fixed-point Jacobi EVD algorithm is applied to covariance matrices. Due to the positive semi-definiteness of covariance matrices, the lower bound for the eigenvalues is equal to zero. Thus, the eigenvalues lie in the range [0, K]. The eigenvalues in Line 30 can also be calculated as
According to the vector norm property (12), the maximum absolute component of a vector is bounded by its Euclidean norm. From the upper bound of the spectral norm together with (11) and (12), each element of the iterated matrix lies in the range [-K, K], where K is the spectral-norm bound, and this holds for all iterations. Since we have considered symmetric positive semi-definite matrices (unlike the off-diagonal entries, the diagonal elements are always non-negative), the diagonal elements of the iterated matrix are in the range [0, K], while the remaining elements lie in [-K, K]. Lines 5, 6 and 7 in Algorithm 1 compute the pivot entries a_pp, a_qq and a_pq respectively. Since a_pp and a_qq are diagonal elements, their range is [0, K]; a_pq is an off-diagonal entry, so its range is [-K, K]. Line 8 in Algorithm 1 computes the tangent t of the rotation angle. Let tau = (a_qq - a_pp) / (2 a_pq), such that
t = sign(tau) / (|tau| + sqrt(1 + tau^2)).   (13)
According to (13), the numerator of t lies in the range [-1, 1]. Both terms of the denominator, |tau| and sqrt(1 + tau^2), are always positive, and their sum is greater than or equal to 1: if tau = 0 then sqrt(1 + tau^2) is equal to 1, and if tau is nonzero then the sum is greater than 1, since |tau| is greater than zero and sqrt(1 + tau^2) is greater than 1. From the ranges of the numerator and the denominator, |t| is always less than or equal to 1. Thus, t lies in the range [-1, 1] and the arc tangent of t is limited to the range [-pi/4, pi/4]. The Jacobi EVD method makes the off-diagonal entries of a 2 x 2 submatrix of the iterated matrix zero by overwriting it with the rotated submatrix. According to the 2 x 2 symmetric Schur decomposition, c and s are cosine and sine values; thus, the bounds of c and s are [-1, 1]. Line 9 in Algorithm 1 computes c, which involves a square root operation, and therefore its range can be tightened to [0, 1]. As the ranges of t and c are [-1, 1] and [0, 1] respectively, using the multiplication rule of interval arithmetic, the range of s = t c (Line 10) can be derived as [-1, 1]. Next we bound the elements of V. V is the eigenvector matrix, each column of which has unit norm (the eigenvectors of symmetric matrices are orthonormal). Hence, following (14), all elements of V are in the range [-1, 1].
Since the range of t is [-1, 1], the range of the corresponding variable in Line 15 of Algorithm 1 can be fixed accordingly. The bounds obtained according to Theorem 1 remain unchanged for all iterations of the algorithm and are independent of the number of iterations and the size of the input matrix. Thus, issue 2 is handled accurately. Now considering issue 1, the bounds according to Theorem 1 retain the same pattern for any input matrix, but depend on the factor K (the spectral-norm bound). For different input matrices, the magnitude of K changes and, in turn, so do the bounds; issue 1 has therefore not yet been fully resolved. Hence, we propose to scale the input matrix by K, which yields a two-fold advantage: unvarying bounds and tight bounds (the solution to issue 3), resolving all the issues. If the input matrix is scaled as B = A / K, the EVD of the matrix B is given as
where B = A / K and the eigenvalues of B equal those of A divided by K. After scaling by a scalar, the original eigenvectors do not change, while the original eigenvalues change by a factor of 1/K. We need not recover the original eigenvalues because, in PCA, the eigenvalues are only used to decide the required number of PCs. Since all the eigenvalues are scaled by the same factor, the number of PCs does not change whether it is fixed using the original eigenvalues or the scaled ones. In applications where the original eigenvalues are required, the number of integer bits needed depends on the magnitude of the scaling factor. Only the binary point of the eigenvalues needs to be adjusted online, while for the other variables it is fixed irrespective of the properties of the input matrix.
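The invariance of the eigenvectors and the uniform scaling of the eigenvalues can be verified directly on a 2 x 2 symmetric matrix with a closed-form EVD (the helper `evd2` is our own illustration):

```python
import math

def evd2(a, b, d):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]]."""
    t = 0.5 * (a + d)                     # mean of the diagonal
    r = math.hypot(0.5 * (a - d), b)      # radius of the eigenvalue pair
    return t + r, t - r

k = 11.0
l1, l2 = evd2(9.0, 2.0, 5.0)              # eigenvalues of the original matrix
m1, m2 = evd2(9.0 / k, 2.0 / k, 5.0 / k)  # eigenvalues of the scaled matrix

# Eigenvalues scale by exactly 1/k, so PC selection by relative magnitude is unchanged.
print(abs(m1 - l1 / k) < 1e-12 and abs(m2 - l2 / k) < 1e-12)   # True
```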
Given the scaling factor K (the upper bound on the spectral norm of A), the Jacobi EVD algorithm (Algorithm 1) applied to the scaled matrix B = A / K has the following bounds for its variables for every iteration and every index, where the superscript denotes the iteration number and the subscripts denote the component of a vector and the entry of a matrix, respectively.
Using vector and matrix norm properties, the ranges of the variables can be derived. We start by bounding the elements of the scaled input matrix B = A / K, where (16) follows from the fact that the spectral norm of B is at most 1. Hence, the elements of B are in the range [-1, 1]. The remaining bounds are derived in a similar fashion as described in the proof of Theorem 1. The bounds on the variables of the EVD algorithm obtained after scaling remain constant for all iterations and do not vary for any input matrix. Besides, the bounds are also tight.
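The bounds on the rotation parameters used in both proofs can be spot-checked numerically. The sketch below follows the standard 2 x 2 symmetric Schur formulas (as in Golub and Van Loan); for random pivot entries in [-1, 1], the tangent t never leaves [-1, 1] and c, s stay within their cosine/sine bounds.

```python
import math, random

def schur2(app, apq, aqq):
    """Rotation parameters (c, s, t) zeroing the off-diagonal entry apq."""
    if apq == 0.0:
        return 1.0, 0.0, 0.0              # already diagonal: identity rotation
    tau = (aqq - app) / (2.0 * apq)
    if tau >= 0.0:
        t = 1.0 / (tau + math.sqrt(1.0 + tau * tau))
    else:
        t = -1.0 / (-tau + math.sqrt(1.0 + tau * tau))
    c = 1.0 / math.sqrt(1.0 + t * t)
    return c, t * c, t                    # cosine, sine, tangent

random.seed(0)
for _ in range(10000):
    c, s, t = schur2(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
    assert abs(t) <= 1.0 and 0.0 < c <= 1.0 and abs(s) <= 1.0
print("all rotation parameters within bounds")
```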
IV. More Data Sets and Results
In this section, we present a few more hyperspectral data sets and compare the bounds on the variables of the Jacobi EVD algorithm produced by the existing range estimation methods and by the proposed approach. Tables XII and XIII compare the bounds obtained by the existing range estimation methods with those of the proposed approach for the AVIRIS and Landscape data sets. We can observe that the ranges estimated using simulation for the AVIRIS data cannot avoid overflow in the case of Landscape. This is quite apparent from the number of integer bits required, shown in Table XIV. As before, the bounds produced by IA and AA exploded. The proposed method, however, produces robust and tight bounds that are independent of the range of the input matrix and of the number of iterations.
The accuracy of the fixed-point design is evaluated using the signal-to-quantization-noise ratio (SQNR) between the eigenvalues obtained from the double-precision floating-point and fixed-point implementations. The SQNR of the eigenvalues obtained through the proposed fixed-point design is shown in Table XV. In Table XV, we observe high SQNR values, which show that the set of ranges obtained according to Theorem 2 is sufficient for avoiding overflow for any input matrix. For data sets like Landscape, where the range is exorbitant, resulting in large IWLs (Table XIV), wordlengths of 50, 40 or 32 bits would never fit. In such cases, with the proposed approach it was possible to fit all the variables within a 32-bit wordlength and obtain a high SQNR. A common measure for comparing two images is the mean-square error (MSE). The MSE between PCA images of fixed-point implementations with various WLs, after deriving the ranges through the proposed approach, is shown in Table XVI. The required numbers of PCs for Hyperion, ROSIS and Landscape are 4, 5 and 2 respectively. The numbers of PCs explaining 99.0% of the variance in the case of AVIRIS and EnMAP are relatively higher. Therefore, Table XVI only exhibits the results for Hyperion, ROSIS and Landscape; similar results were obtained for AVIRIS and EnMAP.
We observe that the MSE values are negligibly small, which signifies that the ranges obtained through the proposed approach are absolutely robust. Thus, the error metrics (SQNR and MSE) imply that the number of integer bits derived using the proposed approach is sufficient for avoiding overflow. After deriving the proper ranges through the proposed approach, the fixed-point design is synthesized on a Xilinx Virtex 7 XC7VX485 FPGA for different WLs through the Vivado high-level synthesis (HLS) design tool. We have used SystemC (which mimics hardware description languages such as VHDL and Verilog) to develop the fixed-point code. Using the HLS tool, the SystemC fixed-point code is transformed into a hardware IP (intellectual property) described in Verilog. We compare the resource utilization of the simulation approach with that of the proposed approach (for the same level of accuracy) on the test hyperspectral data sets. The comparative study is presented in Table XVII. There is a noteworthy difference in the hardware resources: those used by the simulation approach are considerably larger than those used by the proposed approach, since, to maintain the same level of accuracy (SQNR, MSE) as the proposed method, the simulation approach uses a 50-bit wordlength.
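The two error metrics can be sketched as follows. This is a hedged illustration (function names `sqnr_db`, `mse` and `quantize` are ours, not the paper's code), in which the fixed-point eigenvalues are modelled by simple rounding quantization.

```python
import math

def sqnr_db(ref, approx):
    """SQNR in dB between reference and quantized value lists."""
    sig = sum(r * r for r in ref)
    noise = sum((r - a) ** 2 for r, a in zip(ref, approx))
    return 10.0 * math.log10(sig / noise) if noise else float("inf")

def mse(x, y):
    """Mean-square error between two equal-length sequences."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def quantize(v, frac_bits):
    """Round v to a fixed-point grid with the given fractional bits."""
    step = 2.0 ** -frac_bits
    return round(v / step) * step

ref = [0.8943, 0.3794, 0.0211, 0.0052]        # hypothetical scaled eigenvalues
fx = [quantize(v, 16) for v in ref]           # e.g. 16 fractional bits
print(sqnr_db(ref, fx) > 80.0)                # True: fine quantization, high SQNR
print(mse(ref, fx) < 1e-8)                    # True: negligible MSE
```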
The proposed method also produces robust and tight analytical bounds for the variables of the singular value decomposition algorithm.
In this paper, we bring out the problem of integer bit-width allocation for the variables of the eigenvalue decomposition algorithm and highlight the issues of the existing range estimation methods in the context of EVD. Integer bit-width allocation is an essential step in fixed-point hardware design. In light of the significance of this step, this paper introduces an analytical method based on vector and matrix norm properties, together with a scaling procedure, to produce robust and tight bounds. Through several hyperspectral data sets, we demonstrate the efficacy of the proposed method in dealing with the issues associated with the existing methods. The SQNR and MSE values show that the ranges derived using the proposed approach are sufficient for avoiding overflow for any input matrix. Many other numerical linear algebra algorithms can benefit from the proposed method, such as QR factorization, the power method for finding the largest eigenvalue, the bisection method for finding eigenvalues of a symmetric tridiagonal matrix, QR iteration, the Arnoldi method for transforming a non-symmetric matrix into an upper Hessenberg matrix, and LU and Cholesky factorizations.
Dealing with the precision problem remains a direction for future work.
-  Z. Nikolić, H. T. Nguyen, and G. Frantz, “Design and implementation of numerical linear algebra algorithms on fixed point dsps,” EURASIP Journal on Advances in Signal Processing, vol. 2007, 2007.
-  Y.-L. Chen, C.-Z. Zhan, T.-J. Jheng, and A.-Y. A. Wu, “Reconfigurable adaptive singular value decomposition engine design for high-throughput mimo-ofdm systems,” Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, vol. 21, no. 4, pp. 747–760, 2013.
-  D. Milford and M. Sandell, “Singular value decomposition using an array of cordic processors,” Signal Processing, vol. 102, pp. 163–170, 2014.
-  T. Pradhan, B. Kabi, A. Routray, and G. Anirudh, “Fixed-point hestenes svd algorithm for computing eigen faces.”
-  R. Mohanty, G. Anirudh, T. Pradhan, B. Kabi, and A. Routray, “Design and performance analysis of fixed-point jacobi svd algorithm on reconfigurable system,” IERI Procedia, vol. 7, pp. 21–27, 2014.
-  P. M. Szecówka and P. Malinowski, “Cordic and svd implementation in digital hardware,” in Mixed Design of Integrated Circuits and Systems (MIXDES), 2010 Proceedings of the 17th International Conference. IEEE, 2010, pp. 237–242.
-  R. C. Grammenos, S. Isam, and I. Darwazeh, “Fpga design of a truncated svd based receiver for the detection of sefdm signals,” in Personal Indoor and Mobile Radio Communications (PIMRC), 2011 IEEE 22nd International Symposium on. IEEE, 2011, pp. 2085–2090.
-  Y. Wang, K. Cunningham, P. Nagvajara, and J. Johnson, “Singular value decomposition hardware for mimo: State of the art and custom design,” in Reconfigurable Computing and FPGAs (ReConFig), 2010 International Conference on. IEEE, 2010, pp. 400–405.
-  Z. Liu, K. Dickson, and J. V. McCanny, “Application-specific instruction set processor for soc implementation of modern signal processing algorithms,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 52, no. 4, pp. 755–765, 2005.
-  C. Shi and R. W. Brodersen, “Automated fixed-point data-type optimization tool for signal processing and communication systems,” in Design Automation Conference, 2004. Proceedings. 41st. IEEE, 2004, pp. 478–483.
-  S. Kim, K.-I. Kum, and W. Sung, “Fixed-point optimization utility for c and c++ based digital signal processing programs,” Circuits and Systems II: Analog and Digital Signal Processing, IEEE Transactions on, vol. 45, no. 11, pp. 1455–1464, 1998.
-  K.-I. Kum, J. Kang, and W. Sung, “Autoscaler for c: An optimizing floating-point to integer c program converter for fixed-point digital signal processors,” Circuits and Systems II: Analog and Digital Signal Processing, IEEE Transactions on, vol. 47, no. 9, pp. 840–848, 2000.
-  D.-U. Lee, A. A. Gaffar, R. C. Cheung, O. Mencer, W. Luk, G. Constantinides et al., “Accuracy-guaranteed bit-width optimization,” Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 25, no. 10, pp. 1990–2000, 2006.
-  J. López, C. Carreras, O. Nieto-Taladriz et al., “Improved interval-based characterization of fixed-point lti systems with feedback loops,” Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 26, no. 11, pp. 1923–1933, 2007.
-  A. B. Kinsman and N. Nicolici, “Bit-width allocation for hardware accelerators for scientific computing using sat-modulo theory,” Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 29, no. 3, pp. 405–413, 2010.
-  S. Vakili, J. P. Langlois, and G. Bois, “Enhanced precision analysis for accuracy-aware bit-width optimization using affine arithmetic,” Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 32, no. 12, pp. 1853–1865, 2013.
-  T. Pradhan, B. Kabi, R. Mohanty, and A. Routray, “Development of numerical linear algebra algorithms in dynamic fixed-point format: a case study of lanczos tridiagonalization,” International Journal of Circuit Theory and Applications, vol. 44, no. 6, pp. 1222–1262, 2016.
-  B. Wu, J. Zhu, and F. N. Najm, “Dynamic-range estimation,” Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 25, no. 9, pp. 1618–1636, 2006.
-  G. Constantinides, A. B. Kinsman, and N. Nicolici, “Numerical data representations for fpga-based scientific computing,” IEEE Design & Test of Computers, no. 4, pp. 8–17, 2011.
-  A. Banciu, “A stochastic approach for the range evaluation,” Ph.D. dissertation, Université Rennes 1, 2012.
-  J. Carletta, R. Veillette, F. Krach, and Z. Fang, “Determining appropriate precisions for signals in fixed-point iir filters,” in Proceedings of the 40th annual Design Automation Conference. ACM, 2003, pp. 656–661.
-  E. Goubault and S. Putot, “A zonotopic framework for functional abstractions,” Formal Methods in System Design, vol. 47, no. 3, pp. 302–360, 2015.
-  R. Nehmeh, D. Menard, A. Banciu, T. Michel, and R. Rocher, “Integer word-length optimization for fixed-point systems,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 8321–8325.
-  A. B. Kinsman and N. Nicolici, “Automated range and precision bit-width allocation for iterative computations,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 30, no. 9, pp. 1265–1278, 2011.
-  O. Sarbishei and K. Radecka, “On the fixed-point accuracy analysis and optimization of polynomial specifications,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 32, no. 6, pp. 831–844, 2013.
-  S. Lopez, T. Vladimirova, C. Gonzalez, J. Resano, D. Mozos, and A. Plaza, “The promise of reconfigurable computing for hyperspectral imaging onboard systems: A review and trends,” Proceedings of the IEEE, vol. 101, no. 3, pp. 698–722, 2013.
-  C. Egho, T. Vladimirova, and M. N. Sweeting, “Acceleration of Karhunen–Loève transform for system-on-chip platforms,” in Adaptive Hardware and Systems (AHS), 2012 NASA/ESA Conference on. IEEE, 2012, pp. 272–279.
-  C. Egho and T. Vladimirova, “Adaptive hyperspectral image compression using the KLT and integer KLT algorithms,” in Adaptive Hardware and Systems (AHS), 2014 NASA/ESA Conference on. IEEE, 2014, pp. 112–119.
-  J. Burger and A. Gowen, “Data handling in hyperspectral image analysis,” Chemometrics and Intelligent Laboratory Systems, vol. 108, no. 1, pp. 13–22, 2011.
-  B. N. Datta, Numerical Linear Algebra and Applications. SIAM, 2010.
-  G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed. JHU Press, 2012.
-  T. Pradhan, A. Routray, and B. Kabi, “Comparative evaluation of symmetric SVD algorithms for real-time face and eye tracking,” in Matrix Information Geometry. Springer, 2013, pp. 323–340.
-  J. Demmel and K. Veselić, “Jacobi’s method is more accurate than QR,” SIAM Journal on Matrix Analysis and Applications, vol. 13, no. 4, pp. 1204–1245, 1992.
-  C. Egho and T. Vladimirova, “Hardware acceleration of the integer Karhunen–Loève transform algorithm for satellite image compression,” in Geoscience and Remote Sensing Symposium (IGARSS), 2012 IEEE International. IEEE, 2012, pp. 4062–4065.
-  C. Gonzalez, S. Lopez, D. Mozos, and R. Sarmiento, “A novel FPGA-based architecture for the estimation of the virtual dimensionality in remotely sensed hyperspectral images,” Journal of Real-Time Image Processing, pp. 1–12.
-  https://hyspiri.jpl.nasa.gov/downloads/2015_Symposium/day2/8_DecompositionAlgorithmDimensionalityReductionHyperspec_Mohanty.pdf.
-  B. Kabi, A. S. Sahadevan, R. Mohanty, A. Routray, B. S. Das, and A. Mohanty, “An overflow-free fixed-point singular value decomposition algorithm for dimensionality reduction of hyperspectral images.”
-  http://www.ehu.eus/ccwintco/index.php?title=HyperspectralRemoteSensingScenes.
-  T. Skauli and J. Farrell, “A collection of hyperspectral images for imaging systems research,” in IS&T/SPIE Electronic Imaging. International Society for Optics and Photonics, 2013, p. 86600C.
-  K. Segl, L. Guanter, C. Rogass, T. Kuester, S. Roessner, H. Kaufmann, B. Sang, V. Mogulsky, and S. Hofer, “EeteS – the EnMAP end-to-end simulation tool,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 522–530, 2012.
-  HySAT Download, University of Oldenburg, Oldenburg, Germany. [Online]. Available: http://hysat.informatik.uni-oldenburg.de/26273.html
-  N. J. Higham, Accuracy and Stability of Numerical Algorithms. SIAM, 2002.
-  R. E. Moore, Interval Analysis. Prentice-Hall, Englewood Cliffs, 1966, vol. 4.
-  R. Rocher, D. Menard, P. Scalart, and O. Sentieys, “Analytical approach for numerical accuracy estimation of fixed-point systems based on smooth operations,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 59, no. 10, pp. 2326–2339, 2012.
-  Z. Wang and A. C. Bovik, “Mean squared error: love it or leave it? A new look at signal fidelity measures,” IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98–117, 2009.
-  Xilinx Inc., “C-based design: high-level synthesis with Vivado HLS,” 2013.
-  IEEE Standards Association, “IEEE Std. 1666–2005: Open SystemC language reference manual,” 2005.
-  https://www.researchgate.net/publication/313036762_Fixed-Point_Singular_Value_Decomposition_Algorithms_with_Tight_Analytical_Bounds.