Determinants of interval matrices
Abstract
In this paper we shed more light on determinants of interval matrices. Computing the exact bounds on the determinant of an interval matrix is an NP-hard problem, therefore attention is first paid to approximations: NP-hardness of both relative and absolute approximation is proved. Next, methods computing verified enclosures of interval determinants and their possible combination with preconditioning are discussed. A new method based on Cramer's rule was designed; it returns results similar to the state-of-the-art method, but requires considerably less computational time. As a by-product, the Gerschgorin circles were generalized for interval matrices. New results are proved about classes of interval matrices for which tasks related to the determinant are computable in polynomial time (symmetric positive definite matrices, a class of matrices with identity midpoint matrix, tridiagonal H-matrices). The mentioned methods were exhaustively compared for random general and symmetric matrices.
Interval matrices, Interval determinant, Enclosures of a determinant, Computational complexity. AMS 15A15, 68Q17, 65G40.
1 Introduction
Interval determinants can be found in various applications. They were used, e.g., in [24] for testing regularity of the inverse Jacobian matrix, in [28] for workspace analysis of a planar flexure-jointed mechanism, in [30] for computer graphics applications, and in [40] as a testing tool for Chebyshev systems.
In this work we first address computational properties of determinants of general interval matrices. We prove two new results regarding absolute and relative approximation of interval determinants. Next, we briefly review known tools that can be used for computing interval determinants – interval Gaussian elimination, the Hadamard inequality and Gerschgorin circles. We introduce our new method based on Cramer's rule and on solving interval linear systems. Regarding symmetric matrices, there are many results about enclosing their eigenvalues, and these can also be used for computing interval determinants. All the methods work much better when combined with some kind of preconditioning; we briefly address that topic. We also prove that for some classes of interval matrices certain tasks related to the interval determinant are computable in polynomial time (symmetric positive definite matrices, some matrices with identity midpoint matrix, tridiagonal H-matrices). At the end we provide thorough numerical testing of the mentioned methods on random general and symmetric interval matrices.
2 Basic notation and definitions
In our work it will be sufficient to deal only with square interval matrices. An interval matrix is defined by $\mathbf{A} = [\underline{A}, \overline{A}] = \{A \in \mathbb{R}^{n \times n} : \underline{A} \le A \le \overline{A}\}$ for $\underline{A}, \overline{A} \in \mathbb{R}^{n \times n}$ such that $\underline{A} \le \overline{A}$ (understood componentwise). To compute with intervals we use the standard interval arithmetic; for more details on interval arithmetic see, for example, [25] or [27].
We denote intervals and interval structures in boldface ($\mathbf{a}, \mathbf{A}$). Real point matrices and vectors are denoted in normal case ($A, b$). The interval coefficient of $\mathbf{A}$ lying at position $(i, j)$ is denoted by $\mathbf{a}_{ij}$.
An interval $\mathbf{a}$ can also be defined by its midpoint $a_c$ and radius $a_\Delta$ as $\mathbf{a} = [a_c - a_\Delta, a_c + a_\Delta]$. Interval vectors and matrices are defined similarly. The notation $\mathrm{mid}(\mathbf{a}), \mathrm{rad}(\mathbf{a})$ is sometimes used instead of $a_c, a_\Delta$, respectively. The set of all real closed intervals is denoted by $\mathbb{IR}$ and the set of all square interval matrices of order $n$ is denoted by $\mathbb{IR}^{n \times n}$. When we need (in a proof) open intervals, we write them with round brackets, i.e., $(a, b)$.
The magnitude of $\mathbf{a}$ is defined by $\mathrm{mag}(\mathbf{a}) = \max\{|\underline{a}|, |\overline{a}|\}$, which should not be confused with the absolute value $|\mathbf{a}| = \{|a| : a \in \mathbf{a}\}$. The width of an interval is defined by $w(\mathbf{a}) = \overline{a} - \underline{a}$. All these notions extend intuitively to vectors and matrices; we just apply them componentwise. We will also use the interval vector Euclidean norm $\lVert \mathbf{v} \rVert = \sqrt{\sum_i \mathrm{mag}(\mathbf{v}_i)^2}$. The relation $\mathbf{a} \le \mathbf{b}$ holds when $\overline{a} \le \underline{b}$ (similarly for $\ge$); when we compare two interval structures, the relation is applied componentwise. In the following text, $E$ will denote a matrix consisting of ones of a corresponding size. The identity matrix of a corresponding size will be denoted by $I$, with $e_i$ denoting its $i$th column. By $A^\dagger$ we denote the Moore-Penrose pseudoinverse of $A$ and by $A^{-1}$ the inverse of $A$. The spectral radius of $A$ is denoted $\rho(A)$. Now, we define the main notion of this work.
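To make the notation concrete, the basic interval operations can be sketched in a few lines of Python (the paper's experiments use Octave; the function names here are our own illustrative choices, and outward rounding, required for truly verified results, is deliberately omitted):

```python
# intervals represented as (lo, hi) tuples

def mid(a):            # midpoint a_c
    return (a[0] + a[1]) / 2

def rad(a):            # radius a_Delta
    return (a[1] - a[0]) / 2

def mag(a):            # magnitude: max{|lower bound|, |upper bound|}
    return max(abs(a[0]), abs(a[1]))

def width(a):          # width: upper bound minus lower bound
    return a[1] - a[0]

def add(a, b):         # interval addition
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    # interval product: min/max over the four endpoint products
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))
```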
Definition 1 (Interval determinant)
Let $\mathbf{A}$ be a square interval matrix; then its interval determinant is defined by $\det(\mathbf{A}) = \{\det(A) : A \in \mathbf{A}\}$.
Computing the exact bounds, i.e., the hull of $\det(\mathbf{A})$, is a hard problem. That is why we are usually satisfied with an enclosure of the interval determinant – of course, the tighter the better.
Definition 2 (Enclosure of interval determinant)
Let $\mathbf{A}$ be a square interval matrix; then an interval enclosure of its determinant is defined as any $\mathbf{d} \in \mathbb{IR}$ such that $\det(\mathbf{A}) \subseteq \mathbf{d}$.
3 What was known before
As was said in the introduction, to the best of our knowledge there are only a few theoretical results regarding interval determinants; some of them can be found, e.g., in [20, 31, 36]. From the linearity of the determinant with respect to the matrix coefficients we immediately get that the exact bounds on an interval determinant can be computed as the minimum and maximum determinant over all possible "edge" matrices of $\mathbf{A}$.
Proposition 1
For a given square interval matrix $\mathbf{A}$ the interval determinant can be obtained as $\det(\mathbf{A}) = [\min S, \max S]$, where $S = \{\det(A) : a_{ij} \in \{\underline{a}_{ij}, \overline{a}_{ij}\} \text{ for all } i, j\}$.
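Proposition 1 suggests a brute-force algorithm: evaluate the determinant at every vertex ("edge") matrix. A small Python sketch follows (the helper names are our own; the enumeration is exponential in $n^2$, so it is usable only for tiny matrices, in line with the NP-hardness results below):

```python
from itertools import product

def det(M):
    # cofactor expansion along the first row (fine for tiny matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(n))

def determinant_hull(lower, upper):
    # enumerate all 2^(n*n) vertex matrices and take min/max determinant
    n = len(lower)
    dets = []
    for choice in product((0, 1), repeat=n * n):
        M = [[lower[i][j] if choice[i * n + j] == 0 else upper[i][j]
              for j in range(n)] for i in range(n)]
        dets.append(det(M))
    return min(dets), max(dets)
```

For the 2x2 interval matrix with entries [0,1], [1,1], [1,1], [2,3] the hull is [-1, 2], attained at vertex matrices.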
Theorem
Computing either of the exact bounds $\underline{\det}(\mathbf{A})$ and $\overline{\det}(\mathbf{A})$ of the matrix $\mathbf{A} = [A_c - E, A_c + E]$,
where $A_c$ is rational nonnegative, is NP-hard.
4 Approximations
At the end of the previous section we saw that computing the exact bounds of an interval determinant is in general an NP-hard problem. One could at least hope for approximation algorithms. Unfortunately, we prove that this is not the case, neither for relative nor for absolute approximation.
Theorem (Relative approximation)
Let $\mathbf{A} = [A_c - E, A_c + E]$ be an interval matrix with $A_c$ rational nonnegative positive definite, and let $\varepsilon$ be arbitrary with $0 \le \varepsilon < 1$. If there exists a polynomial-time algorithm returning a relative $\varepsilon$-approximation $\mathbf{d}$ of $\det(\mathbf{A})$, then P = NP.
Proof
From [36] we use the fact that for a rational nonnegative symmetric positive definite matrix $A_c$, checking whether the interval matrix $[A_c - E, A_c + E]$ is regular (i.e., every $A \in \mathbf{A}$ is regular) is a coNP-complete problem.
We show that if such an algorithm existed, it would decide whether a given interval matrix is regular. For a regular interval matrix we must have $0 \notin \det(\mathbf{A})$. If $0 \in \det(\mathbf{A})$ then, from the second inclusion defining the approximation, $0 \in \mathbf{d}$. On the other hand, if $0 \in \mathbf{d}$ then from the first inclusion $0 \in \det(\mathbf{A})$. Therefore, we have $0 \in \mathbf{d}$ if and only if $0 \in \det(\mathbf{A})$, and the algorithm decides regularity. The corresponding equivalence for the other bound can be derived in a similar way.
Theorem (Absolute approximation)
Let $A_c$ be a rational positive definite matrix and $\mathbf{A} = [A_c - E, A_c + E]$. Let $\varepsilon > 0$ be arbitrary. If there exists a polynomial-time algorithm returning an absolute $\varepsilon$-approximation $\mathbf{d}$ of $\det(\mathbf{A})$, then P = NP.
Proof
Let the matrix $\mathbf{A}$ consist of rational numbers with numerators and denominators representable with $p$ bits (we can take $p$ as the maximum number of bits needed for any numerator or denominator). Then the numerators and denominators of the coefficients in $\underline{A}$ and $\overline{A}$ are also representable using $p$ bits. We can multiply each row of these matrices by the product of all denominators from both matrices in the corresponding row. Now each denominator still uses $p$ bits and each numerator uses $O(np)$ bits. We have obtained a new interval matrix $\mathbf{A}'$; the whole matrix now uses $O(n^3 p)$ bits, which is polynomial in the input size.
We only multiplied rows by nonzero constants, therefore the following property holds:
After cancellation the new matrix has integer bounds, and hence its determinant also has integer bounds. Therefore, deciding whether the original matrix is regular means deciding whether this integer determinant interval contains 0. We can multiply one arbitrary row by a suitable constant and obtain a new matrix, to which we apply the approximation algorithm and compute an absolute approximation of its determinant. If the exact determinant interval contains 0, the returned approximation must contain a neighborhood of 0. On the other hand, if it does not contain 0, then, since the exact bounds are integers, they are at least 1 in absolute value, and the returned approximation must stay bounded away from 0. The case of the other bound is handled similarly. Therefore, the approximation decides regularity.
5 Enclosures of determinants – general case
5.1 Gaussian elimination
To compute the determinant of an interval matrix, we can use the well-known Gaussian elimination – after transforming the matrix to row echelon form, an enclosure of the determinant is computed as the product of the intervals on the main diagonal. For a more detailed description of interval Gaussian elimination see, for example, [1, 16, 27]. Gaussian elimination is suitable to be combined with some form of preconditioning (more details will be explained in Section 5.6); we would recommend the midpoint-inverse version, as was done in [40].
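A minimal sketch of interval Gaussian elimination in Python (intervals as (lo, hi) tuples; no pivoting, no preconditioning and no outward rounding, so this is illustrative rather than verified; all names are our own):

```python
def imul(a, b):
    # interval product via the four endpoint products
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def idiv(a, b):
    # interval division; the divisor must not contain 0
    assert b[0] > 0 or b[1] < 0, "pivot interval must not contain 0"
    return imul(a, (1 / b[1], 1 / b[0]))

def interval_ge_det(A):
    # eliminate below the diagonal, then multiply the interval pivots;
    # the result encloses the interval determinant
    A = [row[:] for row in A]
    n = len(A)
    for k in range(n):
        for i in range(k + 1, n):
            m = idiv(A[i][k], A[k][k])
            for j in range(k, n):
                A[i][j] = isub(A[i][j], imul(m, A[k][j]))
    det = (1.0, 1.0)
    for k in range(n):
        det = imul(det, A[k][k])
    return det
```

On the point matrix [[2,1],[1,2]] this returns the exact determinant 3; with interval entries the result is a (generally overestimating) enclosure.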
5.2 Gerschgorin discs
It is a well-known result that the determinant of a real matrix is the product of its eigenvalues. To obtain an enclosure of an interval determinant, any method returning enclosures of eigenvalues of a general interval matrix can be used, e.g., [10, 14, 19, 23]. Here we employ simple but useful bounds based on the well-known Gerschgorin circle theorem. This classical result claims that for a square real matrix $A$ its eigenvalues lie inside the union of the circles in the complex plane with centers $a_{ii}$ and radii $\sum_{j \ne i} |a_{ij}|$. When $\mathbf{A}$ is an interval matrix, to each real matrix $A \in \mathbf{A}$ there corresponds a set of Gerschgorin discs. Shifting the coefficients of $A$ within their intervals shifts or scales the discs. All discs in all situations are contained inside the discs with centers $\mathrm{mid}(\mathbf{a}_{ii})$ and radii $\mathrm{rad}(\mathbf{a}_{ii}) + \sum_{j \ne i} \mathrm{mag}(\mathbf{a}_{ij})$, as depicted in Figure 1.
As in the case of real Gerschgorin discs, it is also well known that a connected union of $k$ intersecting discs contains exactly $k$ eigenvalues. By intersecting discs we mean that their projection onto the horizontal axis is a continuous line. That might complicate the situation a bit: in such a union of discs there lie eigenvalues whose product contributes to the total determinant. That is why we deal with each bunch of intersecting discs separately. We compute a verified interval enclosure of the product of the eigenvalues regardless of their position inside the bunch. The computation of the verified enclosure depends on the number of discs in the bunch (odd/even) and on whether the bunch contains the point 0. In Figure 2 all the possible cases and the resulting verified enclosures are depicted. The resulting determinant is then the product of the intervals corresponding to all bunches of intersecting discs.
The formulas for the enclosures are based on the following simple fact. An eigenvalue lying inside a bunch of circles can be real or complex. In the latter case its complex conjugate is also an eigenvalue, and the product of the two is $|\lambda|^2$, which can be enclosed from above as depicted in Figure 3. The whole reasoning is based on the Pythagorean theorem and geometric properties of the hypotenuse.
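The centers and radii of the interval Gerschgorin discs described above can be computed directly; a small Python sketch with our own function name:

```python
def gerschgorin_discs(A):
    # A: interval matrix with (lo, hi) tuple entries; returns one disc
    # per row: center = midpoint of the diagonal interval, radius =
    # radius of the diagonal interval plus the sum of magnitudes of the
    # off-diagonal intervals in that row. Every eigenvalue of every
    # real matrix contained in A lies in the union of these discs.
    n = len(A)
    discs = []
    for i in range(n):
        lo, hi = A[i][i]
        center = (lo + hi) / 2
        radius = (hi - lo) / 2
        radius += sum(max(abs(A[i][j][0]), abs(A[i][j][1]))
                      for j in range(n) if j != i)
        discs.append((center, radius))
    return discs
```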
5.3 Hadamard inequality
A simple but rather crude enclosure of the interval determinant can be obtained by the well-known Hadamard inequality. For an $n \times n$ real matrix $A$ we have $|\det(A)| \le \prod_{j=1}^{n} \lVert A_{*j} \rVert$, where $\lVert A_{*j} \rVert$ is the Euclidean norm of the $j$th column of $A$. This inequality transfers simply to the interval case: since it holds for every $A \in \mathbf{A}$, we have $\det(\mathbf{A}) \subseteq [-h, h]$ for $h = \prod_{j=1}^{n} \lVert \mathrm{mag}(\mathbf{A}_{*j}) \rVert$.
It is a fast and simple method. A drawback is that the obtained enclosure is often quite wide. A second problem is that it cannot detect the sign of the determinant, which might sometimes be useful.
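A sketch of the interval Hadamard bound, using the magnitudes of the interval entries (our own function name; no outward rounding):

```python
def hadamard_bound(A):
    # interval Hadamard inequality: |det A| is at most the product of
    # the Euclidean norms of the columns, with each entry replaced by
    # the magnitude of its interval; the enclosure is symmetric [-h, h]
    n = len(A)
    h = 1.0
    for j in range(n):
        col = sum(max(abs(A[i][j][0]), abs(A[i][j][1])) ** 2
                  for i in range(n))
        h *= col ** 0.5
    return (-h, h)
```

Note the symmetry of the result around 0, which is exactly why this method cannot determine the sign of the determinant.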
5.4 Cramer’s rule
Our next method is based on Cramer's rule. It exploits methods for computing an enclosure of the solution set of a square interval linear system. There are plenty of such algorithms, e.g., [11, 25, 27, 39]. Here we use the method "\" built into the Octave interval package. When solving a real system $Ax = e_1$ using Cramer's rule we obtain
$\det(A) = \det(A^{(1,1)}) / x_1$, where $A^{(1,1)}$ emerges by omitting the first row and column from $A$ and $x_1$ is the first coefficient of the solution of $Ax = e_1$. We have reduced the problem of determinant computation to a problem of lower dimension, and we can repeat the same procedure iteratively until the determinant in the numerator is easily computable. For an interval matrix we actually get
(5.1) 
where $\mathbf{x}_1$ is an interval enclosure of the first coefficient of the solution set of $\mathbf{A}x = e_1$, computed by some of the cited methods. Notice that we can use an arbitrary index instead of 1. The method works when all the enclosures $\mathbf{x}_1$ in the recursive calls do not contain 0.
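The real-matrix identity behind (5.1) can be checked in exact rational arithmetic; the helpers below are our own illustrative code, not the Octave implementation used in the paper:

```python
from fractions import Fraction

def det(M):
    # cofactor expansion along the first row (exact for small matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(n))

def solve(A, b):
    # Gauss-Jordan elimination in exact rational arithmetic
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(y)]
         for row, y in zip(A, b)]
    for k in range(n):
        p = next(i for i in range(k, n) if M[i][k] != 0)
        M[k], M[p] = M[p], M[k]
        for i in range(n):
            if i != k:
                f = M[i][k] / M[k][k]
                M[i] = [u - f * v for u, v in zip(M[i], M[k])]
    return [M[i][n] / M[i][i] for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
x = solve(A, [1, 0, 0])               # solution of A x = e_1
minor = [row[1:] for row in A[1:]]    # A with first row and column deleted
recovered = Fraction(det(minor)) / x[0]   # equals det(A) by Cramer's rule
```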
5.5 Monotonicity checking
The derivative of the determinant of a real nonsingular matrix $A$ is $\partial \det(A) / \partial a_{ij} = \det(A) (A^{-1})_{ji}$. Provided the interval matrix $\mathbf{A}$ is regular and $\mathbf{B}$ is an interval enclosure of the set $\{A^{-1} : A \in \mathbf{A}\}$, the signs of $\det(\mathbf{A}) \cdot \mathbf{b}_{ji}$ give information about the monotonicity of the determinant. As long as 0 is not in the interior of this interval, we can do the following reasoning: if it is a nonnegative interval, then $\det(A)$ is nondecreasing in $a_{ij}$, and hence its minimal value is attained at $\underline{a}_{ij}$; similarly for nonpositive.
In this way, we split the problem of computing into two subproblems of computing the lower and upper bounds separately. For each subproblem, we can fix those interval entries of at the corresponding lower or upper bounds depending on the signs of . This makes the set smaller in general. We can repeat this process or call another method for the reduced interval matrix.
Notice that there are classes of interval matrices whose determinant is automatically monotone. They are called inverse stable [33]; formally, $\mathbf{A}$ is inverse stable if $|A^{-1}| > 0$ componentwise for each $A \in \mathbf{A}$. This class includes interval M-matrices [3], inverse nonnegative [21] and totally positive matrices [6] as particular subclasses that are efficiently recognizable; cf. [13].
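Jacobi's formula used above, $\partial \det(A)/\partial a_{ij} = \det(A)(A^{-1})_{ji}$, can be verified numerically on a small example (illustrative Python; the names are our own):

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[4.0, 1.0], [2.0, 3.0]]
d = det2(A)                               # det(A) = 10
inv = [[ A[1][1] / d, -A[0][1] / d],      # closed-form 2x2 inverse
       [-A[1][0] / d,  A[0][0] / d]]

# Jacobi's formula for the (0, 1) entry: d det / d a_01 = det(A) * inv[1][0]
analytic = d * inv[1][0]

# finite-difference check of the same derivative
h = 1e-6
Ah = [row[:] for row in A]
Ah[0][1] += h
finite_diff = (det2(Ah) - det2(A)) / h
```

Here the derivative is negative, so the determinant is decreasing in $a_{01}$ and its minimum over an interval entry would be attained at the upper bound.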
5.6 Preconditioning
In the interval case, by preconditioning we mean transforming an interval matrix into a form better suited as input for further processing. It is generally done by multiplying the interval matrix $\mathbf{A}$ by a real matrix $B$ from the left and by a real matrix $C$ from the right, obtaining a new interval matrix $\mathbf{B} \supseteq \{BAC : A \in \mathbf{A}\}$. Regarding determinants, from the properties of interval arithmetic we easily obtain, and will further use, the fact that $\det(\mathbf{A}) \subseteq \det(B\mathbf{A}C) \cdot \det(B)^{-1} \det(C)^{-1}$.
There are many possibilities how to choose these matrices for a square interval matrix $\mathbf{A}$. As in [7], we can take the midpoint matrix $A_c$ and compute its LU decomposition $A_c = LU$. When setting $B = L^{-1}$ and $C = U^{-1}$, we get an interval matrix whose midpoint is the identity.
Another option is using an LDL decomposition. A symmetric positive definite matrix $A_c$ can be decomposed as $A_c = U^T D U$, where $U$ is upper triangular with ones on the main diagonal and $D$ is a diagonal matrix. By setting $B = U^{-T}$ and $C = U^{-1}$ we obtain an interval matrix with midpoint $D$.
In interval linear system solving, various preconditioners are utilized depending on the criteria used [12, 17]. The most common choice is taking the midpoint inverse $A_c^{-1}$ when $A_c$ is regular. Then $\det(\mathbf{A}) \subseteq \det(A_c^{-1}\mathbf{A}) \cdot \det(A_c)$.
Unlike the previous real matrices, the matrix $A_c^{-1}$ does not have to have determinant equal to 1, so we need to compute a verified determinant of a real matrix. In [29] there are many variants of algorithms for computing verified determinants of real matrices; we use the one by Rump [38].
6 Enclosures of determinants – special cases
Even though we are not going to compare all of the mentioned methods in this section, for the sake of completeness we mention some classes of matrices that enable the use of other tools. For special classes of interval matrices we prove new results stating that it is possible to compute the exact bounds of their determinants in polynomial time.
6.1 Symmetric matrices
Many problems in practical applications are described by symmetric matrices. We specify what we mean by a symmetric interval matrix in the following definition.
Definition 3 (Symmetric interval matrix)
For a square interval matrix $\mathbf{A}$ we define its symmetric counterpart $\mathbf{A}^S = \{A \in \mathbf{A} : A = A^T\}$.
Next we define its eigenvalues.
Definition 4
For a real symmetric matrix $A$ let $\lambda_1(A) \ge \lambda_2(A) \ge \dots \ge \lambda_n(A)$ be its eigenvalues. For $\mathbf{A}^S$ we define its $i$th set of eigenvalues as $\lambda_i(\mathbf{A}^S) = \{\lambda_i(A) : A \in \mathbf{A}^S\}$.
For symmetric interval matrices there exist various methods to enclose each $i$th set of eigenvalues. A simple enclosure is given by the following theorem from [14, 37].
Theorem 6.1
For every $i = 1, \dots, n$ it holds that $\lambda_i(\mathbf{A}^S) \subseteq [\lambda_i(A_c) - \rho(A_\Delta),\ \lambda_i(A_c) + \rho(A_\Delta)]$.
6.2 Symmetric positive definite matrices
Let $\mathbf{A}^S$ be a symmetric positive definite interval matrix, that is, every symmetric $A \in \mathbf{A}$ is positive definite. Checking positive definiteness of a given symmetric interval matrix is NP-hard [20, 34], but various sufficient conditions are known [35].
The matrix with maximum determinant can be found by solving the optimization problem
maximize $\log \det(A)$ subject to $\underline{A} \le A \le \overline{A}$, $A = A^T$,
since $\log$ is an increasing function and $\det$ is positive on $\mathbf{A}^S$. This is a convex optimization problem that is solvable in polynomial time using interior point methods; see Boyd and Vandenberghe [5]. Therefore, we have:
Proposition 2
The maximum determinant of a symmetric positive definite interval matrix is computable in polynomial time.
6.3 Matrices with $A_c = I$
Preconditioning by the midpoint inverse results in an interval matrix whose midpoint is the identity matrix $I$. This motivates us to study such matrices in more detail. Suppose that $\mathbf{A}$ is such that $A_c = I$ and $\rho(A_\Delta) < 1$. Such matrices have very useful properties: for example, solving interval linear systems with them is a polynomial problem [32]. Also, checking regularity of $\mathbf{A}$ can be performed effectively just by verifying $\rho(A_\Delta) < 1$; see [27].
Proposition 3
Suppose that $A_c = I$ and $\rho(A_\Delta) < 1$. Then the minimum determinant of $\mathbf{A}$ is attained for $\underline{A} = I - A_\Delta$.
Proof
We proceed by mathematical induction. The case $n = 1$ is trivial. For the general case, we express the determinant of $\mathbf{A}$ as in (5.1):
By induction, the smallest value of $\det(\mathbf{A}^{(1,1)})$ is attained for the lower bound matrix of $\mathbf{A}^{(1,1)}$. Since $A_c = I$ and $\rho(A_\Delta) < 1$, the matrix $\mathbf{A}$ is regular, therefore $0 \notin \mathbf{x}_1$; as $x_1$ is the first coefficient of the solution of $Ax = e_1$, its largest value is attained for $\underline{A}$; see [32]. Therefore $\underline{A}$ simultaneously minimizes the numerator and maximizes the denominator.
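Proposition 3 can be checked by brute force on a small example: with $A_c = I$ and every radius equal to 0.1 (so $\rho(A_\Delta) = 0.3 < 1$ for $n = 3$), the minimum vertex determinant is indeed attained at the lower bound matrix (illustrative Python; names are our own):

```python
from itertools import product

def det(M):
    # cofactor expansion along the first row
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(n))

n = 3
delta = 0.1                       # every entry has radius 0.1, A_c = I
lower = [[(1.0 if i == j else 0.0) - delta for j in range(n)]
         for i in range(n)]
upper = [[(1.0 if i == j else 0.0) + delta for j in range(n)]
         for i in range(n)]

# evaluate the determinant at all 2^(n*n) vertex matrices
dets = []
for choice in product((0, 1), repeat=n * n):
    M = [[lower[i][j] if choice[i * n + j] == 0 else upper[i][j]
          for j in range(n)] for i in range(n)]
    dets.append(det(M))

min_det = min(dets)               # attained at the lower bound matrix
```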
Example
If the condition does not hold, then the claim is not true in general. Consider the matrix where
We have and , however, . The minimum bound is attained e.g., for the matrix
Computing the maximum determinant of $\mathbf{A}$ is a more challenging problem, and it is an open question whether it can be done in polynomial time. Obviously, the maximum determinant of $\mathbf{A}$ is attained for a matrix $A$ such that $a_{ii} = \overline{a}_{ii}$ for each $i$. Specifying the off-diagonal entries is, however, not so easy.
6.4 Tridiagonal H-matrices
Consider an interval tridiagonal matrix with diagonal entries $\mathbf{a}_1, \dots, \mathbf{a}_n$, subdiagonal entries $\mathbf{b}_1, \dots, \mathbf{b}_{n-1}$ and superdiagonal entries $\mathbf{c}_1, \dots, \mathbf{c}_{n-1}$.
Suppose that it is an interval H-matrix, which means that each matrix $A \in \mathbf{A}$ is an H-matrix. Interval H-matrices are easily recognizable; see, e.g., Neumaier [26, 27].
Without loss of generality let us assume that the diagonal is positive, that is, $\underline{a}_i > 0$ for all $i$; otherwise we could multiply the corresponding rows by $-1$. Recall that the determinant of a real tridiagonal matrix can be computed by the recursive formula $D_0 = 1$, $D_1 = a_1$, $D_k = a_k D_{k-1} - b_{k-1} c_{k-1} D_{k-2}$ for $k = 2, \dots, n$, with $\det(A) = D_n$.
Since $\mathbf{A}$ is an H-matrix with positive diagonal, the values $D_k$ are positive for each $A \in \mathbf{A}$. Hence the largest value of $D_n$ is attained at $a_i = \overline{a}_i$ and at $b_i, c_i$ such that the product $b_i c_i$ is minimal; analogously for the minimal value. Therefore, we have:
Proposition 4
Determinants of interval tridiagonal H-matrices are computable in polynomial time.
The complexity of determinant computation for general tridiagonal interval matrices remains an open problem, as does solving an interval system with a tridiagonal matrix [20]. Nevertheless, not all problems regarding tridiagonal matrices are open or hard; e.g., checking whether a tridiagonal interval matrix is regular can be done in polynomial time [2].
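The three-term recursion above can also be evaluated directly in interval arithmetic, which always yields an enclosure of the determinant; by the argument above, for H-matrices with positive diagonal the resulting bounds are in fact attained. A Python sketch (our own names; no outward rounding):

```python
def imul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def tridiag_det_enclosure(a, b, c):
    # a: diagonal intervals, b/c: sub-/superdiagonal intervals, all as
    # (lo, hi) tuples; evaluates the recursion
    #   D_k = a_k * D_{k-1} - b_{k-1} * c_{k-1} * D_{k-2}
    # in interval arithmetic; the last interval encloses the determinant
    prev2, prev = (1.0, 1.0), a[0]
    for k in range(1, len(a)):
        cur = isub(imul(a[k], prev),
                   imul(imul(b[k - 1], c[k - 1]), prev2))
        prev2, prev = prev, cur
    return prev
```

The cost is linear in $n$, in contrast to the exponential vertex enumeration of Proposition 1.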
7 Comparison of methods
In this section the described methods are compared. All of them were implemented in Octave with its interval package by Oliver Heimlich [8]. This package also contains the function det, which computes an enclosure of the determinant of an interval matrix by LU decomposition; this is basically the same as the already described Gaussian elimination method, which is why we do not explicitly compare against this function. All tests were run on an 8-CPU machine Intel(R) Core(TM) i7-4790K, 4.00GHz. Let us start with general matrices first.
7.1 General case
For general matrices the following methods are compared:

GE  interval Gaussian elimination

HAD  interval Hadamard inequality

GERSCH  interval Gerschgorin circles

CRAM  our method based on Cramer’s rule
The suffix "inv" is added when preconditioning with the midpoint inverse was applied, and "lu" when the preconditioning based on LU decomposition was used. We use the string HULL to denote the exact interval determinant.
Example
To obtain a general idea of how the methods work, we can use the following example. Let us take the midpoint matrix and inflate it into an interval matrix using two fixed radii of intervals, respectively.
The resulting enclosures of the interval determinant by all methods are shown in Table 1.
method  

HULL  [4.060, 14.880]  [8.465, 9.545] 
GE  [3.000, 21.857]  [8.275, 9.789] 
GEinv  [3.600, 18.000]  [8.460, 9.560] 
GElu  [1.440, 22.482]  [8.244, 9.791] 
CRAM  [-∞, ∞]  [8.326, 9.765] 
CRAMinv  [3.594, 78.230]  [8.460, 9.588] 
CRAMlu  [-∞, ∞]  [8.244, 9.863] 
HAD  [-526.712, 526.712]  [-493.855, 493.855] 
HADinv  [-16.801, 16.801]  [-9.563, 9.563] 
HADlu  [-35.052, 35.052]  [-27.019, 27.019] 
GERSCH  [3132.927, 11089.567]  [2926.485, 10691.619] 
GERSCHinv  [0.000, 72.000]  [6.561, 11.979] 
GERSCHlu  [-11089.567, 6116.667]  [-10691.619, 5838.410] 
Based on this example it is not worth testing all the methods, because some of them do not work well in comparison to the others or do not work well without preconditioning. That is why we later test only GEinv, CRAMinv, HADinv and GERSCHinv.
We can perceive the method GEinv used in [40] as the "state-of-the-art" method; therefore, every other method will be compared to it. Primarily, for a given matrix and a method, we compute the ratio of the widths of the interval enclosures of the determinant computed by both methods as
We test all methods for various sizes of random interval square matrices with given fixed radii of intervals, using 100 matrices for each size. For each size and method, the average ratio of the computed enclosures, the average computation time and its variance are computed. It can happen that an enclosure returned by a method is infinite; such cases are omitted from the computation of the average and variance.
The remaining part to describe is the generation of random matrices. First, a random midpoint matrix with coefficients chosen uniformly within given bounds is generated. Then it is inflated into an interval matrix with intervals having the prescribed radius.
Let us begin with the average ratios of widths, presented in Table 2. When a ratio is less than 1000, it is displayed rounded to 2 digits; when it is greater, only an approximation is displayed.
size  GERSCHinv  HADinv  CRAMinv  GERSCHinv  HADinv  CRAMinv 
5  8.01  1.00  8.88  41.91  1.03  
10  19.90  1.00  144.46  16.65  1.03  
15  34.96  1.00  9.04  1.04  
20  48.18  1.00  5.97  1.04  
25  1.00  4.35  1.05  
30  203.06  251.69  1.00  3.71  1.07  
35  188.74  1.00  3.09  1.06  
40  171.65  1.00  2.74  1.05  
45  128.90  1.00  2.28  1.06  
50  129.55  1.00  2.20  1.07  
Computation times are displayed in Table 3. For each size of matrix the average computation time is displayed; the numbers in brackets are standard deviations. To see more clearly the difference in computation time between the two most efficient methods, GEinv and CRAMinv, see Figure 4.
size  GEinv  GERSCHinv  HADinv  CRAMinv  GEinv  GERSCHinv  HADinv  CRAMinv 

5  0.13  0.06  0.04  0.12  0.13  0.06  0.04  0.13 
(0.00)  (0.00)  (0.00)  (0.00)  (0.00)  (0.00)  (0.00)  (0.02)  
10  0.41  0.07  0.06  0.24  0.40  0.07  0.06  0.25 
(0.00)  (0.00)  (0.00)  (0.00)  (0.06)  (0.00)  (0.00)  (0.01)  
15  0.90  0.09  0.08  0.36  0.91  0.09  0.08  0.39 
(0.04)  (0.00)  (0.00)  (0.00)  (0.01)  (0.00)  (0.00)  (0.03)  
20  1.59  0.11  0.12  0.48  1.51  0.11  0.12  0.54 
(0.01)  (0.00)  (0.00)  (0.01)  (0.26)  (0.00)  (0.00)  (0.08)  
25  2.48  0.13  0.16  0.62  2.41  0.13  0.16  0.73 
(0.07)  (0.00)  (0.00)  (0.03)  (0.29)  (0.00)  (0.00)  (0.12)  
30  3.58  0.15  0.21  0.76  3.47  0.15  0.21  0.92 
(0.02)  (0.00)  (0.00)  (0.01)  (0.39)  (0.00)  (0.00)  (0.14)  
35  4.88  0.17  0.27  0.93  4.59  0.17  0.27  1.09 
(0.03)  (0.00)  (0.00)  (0.02)  (0.80)  (0.00)  (0.00)  (0.23)  
40  6.39  0.19  0.34  1.10  5.77  0.19  0.34  1.25 
(0.03)  (0.00)  (0.00)  (0.04)  (1.31)  (0.00)  (0.00)  (0.33)  
45  8.05  0.22  0.42  1.29  7.34  0.22  0.42  1.48 
(0.59)  (0.00)  (0.00)  (0.09)  (1.54)  (0.00)  (0.00)  (0.40)  
50  10.03  0.25  0.50  1.54  8.77  0.25  0.50  1.68 
(0.04)  (0.00)  (0.00)  (0.06)  (2.41)  (0.00)  (0.00)  (0.55) 
7.2 Symmetric matrices
We repeat the same test procedure with the best methods for interval symmetric matrices. Since these matrices have real eigenvalues, we can add methods using bounds on real eigenvalues. Symmetric matrices are generated in a similar way as before, only they are shaped to be symmetric. We compare the preconditioned methods GEinv, GERSCHinv, HADinv and CRAMinv, and add one new method, EIG, based on the computation of enclosures of eigenvalues using Theorem 6.1. The method GEinv stays the reference method, i.e., we compare all methods with respect to it.
The enclosure widths for symmetric matrices are displayed in Table 4. We can see that, as in the general case, CRAMinv does slightly worse than GEinv. Another thing we can see is that EIG is worse than both CRAMinv and GEinv.
size  GERSCHinv  HADinv  CRAMinv  EIG  GERSCHinv  HADinv  CRAMinv  EIG 

5  7.68  1.00  2.08  7.77  50.29  1.01  2.02  
10  18.38  1.00  2.56  61.98  19.22  1.01  2.47  
15  28.38  1.00  2.99  11.43  1.04  2.73  
20  44.43  1.00  3.10  7.67  1.03  2.90  
25  1.00  3.18  5.70  1.03  3.02  
30  80.43  1.00  3.33  4.53  1.05  3.10  
35  301.69  1.00  3.52  3.96  1.04  3.46  
40  219.13  1.00  3.38  3.41  1.04  3.70  
45  183.44  1.00  3.48  2.73  1.05  3.65  
50  162.34  1.00  3.62  2.70  1.04  4.32 
The computation times are displayed in Table 5. We can see that EIG has low computational demands compared to the other methods. One can argue that filtering methods could be used to get even tighter enclosures of eigenvalues; however, they work well only in specific cases [15] and the filtering is much more time consuming.
size  GEinv  GERSCHinv  HADinv  CRAMinv  EIG 

5  0.13  0.06  0.04  0.12  0.01 
(0.00)  (0.00)  (0.00)  (0.00)  (0.00)  
10  0.41  0.07  0.06  0.24  0.02 
(0.00)  (0.00)  (0.00)  (0.00)  (0.00)  
15  0.90  0.09  0.08  0.36  0.02 
(0.00)  (0.00)  (0.00)  (0.00)  (0.00)  
20  1.59  0.11  0.12  0.48  0.03 
(0.01)  (0.00)  (0.00)  (0.01)  (0.00)  
25  2.47  0.13  0.16  0.63  0.03 
(0.01)  (0.00)  (0.00)  (0.04)  (0.00)  
30  3.56  0.15  0.21  0.76  0.04 
(0.02)  (0.00)  (0.00)  (0.01)  (0.00)  
35  4.88  0.17  0.27  0.93  0.05 
(0.02)  (0.00)  (0.00)  (0.02)  (0.00)  
40  6.36  0.19  0.34  1.10  0.07 
(0.04)  (0.00)  (0.00)  (0.02)  (0.00)  
45  8.09  0.22  0.42  1.30  0.08 
(0.04)  (0.00)  (0.00)  (0.02)  (0.00)  
50  9.96  0.25  0.50  1.53  0.10 
(0.06)  (0.00)  (0.00)  (0.03)  (0.00) 
8 Conclusion
In this paper we showed that, unfortunately, even approximation of the exact bounds of an interval determinant is an NP-hard problem (for both relative and absolute approximation). On the other hand, we proved that there are some special classes of matrices for which the interval determinant can be computed in polynomial time – symmetric positive definite matrices, certain matrices with identity midpoint, and tridiagonal H-matrices. We discussed four methods: GE – the "state-of-the-art" Gaussian elimination, GERSCH – our generalized Gerschgorin circles for interval matrices, HAD – our generalized Hadamard inequality for interval matrices, and CRAM – our new method based on Cramer's rule. We also introduced a method that can possibly improve an enclosure based on monotonicity checking. All methods combined with preconditioning were tested on random matrices of various sizes. For interval matrices with small radii the methods GEinv and CRAMinv return similar results; the larger the intervals, the slightly worse CRAMinv becomes. However, its computation time is much more convenient (it is possible to compute the determinant of a considerably larger interval matrix by CRAMinv at the same cost as a smaller one by GEinv). Larger matrices need some form of preconditioning, otherwise GE and CRAM return infinite intervals. In our test cases the lu preconditioning did not prove to be suitable. The methods HAD and GERSCH always return finite intervals, but these intervals can be huge; both methods work better with the inv preconditioning. HADinv returns much tighter intervals than GERSCHinv; however, it cannot distinguish the sign of the determinant since its enclosure is symmetric around 0.
The analysed properties of the methods do not change dramatically when dealing with symmetric matrices. The newly added method EIG showed constant and not so large overestimation and much smaller computation times. A possible improvement of the EIG enclosures for symmetric matrices (by applying suitable forms of filtering and eigenvalue enclosures) might be a matter of further research. There are many more options for future research – studying various matrix decompositions and preconditioners, or studying other special classes of matrices.
References
 [1] Götz Alefeld and Jürgen Herzberger. Introduction to Interval Computations. Computer Science and Applied Mathematics. Academic Press, New York, 1983.
 [2] Ilan Bar-On, Bruno Codenotti, and Mauro Leoncini. Checking robust nonsingularity of tridiagonal matrices in linear time. BIT Numerical Mathematics, 36(2):206–220, 1996.
 [3] W. Barth and E. Nuding. Optimale Lösung von Intervallgleichungssystemen. Computing, 12:117–125, 1974.
 [4] Olivier Beaumont. An algorithm for symmetric interval eigenvalue problem. Technical Report IRISAPI001314, Institut de recherche en informatique et systèmes aléatoires, Rennes, France, 2000.
 [5] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
 [6] Jürgen Garloff. Criteria for sign regularity of sets of matrices. Linear Algebra Appl., 44:153–160, 1982.
 [7] Eldon Hansen and Roberta Smith. Interval arithmetic in matrix computations, Part II. SIAM Journal on Numerical Analysis, 4(1):1–9, 1967.
 [8] Oliver Heimlich. GNU Octave interval package, version 1.4.1, 2016.
 [9] David Hertz. The extreme eigenvalues and stability of real symmetric interval matrices. IEEE Trans. Autom. Control, 37(4):532–535, 1992.
 [10] Milan Hladík. Bounds on eigenvalues of real and complex interval matrices. Appl. Math. Comput., 219(10):5584–5591, 2013.
 [11] Milan Hladík. New operator and method for solving real preconditioned interval linear equations. SIAM J. Numer. Anal., 52(1):194–206, 2014.
 [12] Milan Hladík. Optimal preconditioning for the interval parametric Gauss–Seidel method. In M. Nehmeier et al., editor, Scientific Computing, Computer Arithmetic, and Validated Numerics: 16th International Symposium, SCAN 2014, volume 9553 of LNCS, pages 116–125. Springer, 2016.
 [13] Milan Hladík. An overview of polynomially computable characteristics of special interval matrices. preprint arXiv: 1711.08732, http://arxiv.org/abs/1711.08732, 2017.
 [14] Milan Hladík, David Daney, and Elias Tsigaridas. Bounds on real eigenvalues and singular values of interval matrices. SIAM J. Matrix Anal. Appl., 31(4):2116–2129, 2010.
 [15] Milan Hladík, David Daney, and Elias Tsigaridas. A filtering method for the interval eigenvalue problem. Applied Mathematics and Computation, 217(12):5236–5242, 2011.
 [16] Jaroslav Horáček and Milan Hladík. Computing enclosures of overdetermined interval linear systems. Reliable Computing, 19(2):142–155, 2013.
 [17] R. Baker Kearfott. Preconditioners for the interval Gauss–Seidel method. SIAM J. Numer. Anal., 27(3):804–822, 1990.
 [18] Lubomir V. Kolev. Outer interval solution of the eigenvalue problem under general form parametric dependencies. Reliab. Comput., 12(2):121–140, 2006.
 [19] Lubomir V. Kolev. Eigenvalue range determination for interval and parametric matrices. Int. J. Circuit Theory Appl., 38(10):1027–1061, 2010.
 [20] Vladik Kreinovich, Anatoly Lakeyev, Jiří Rohn, and Patrick Kahl. Computational Complexity and Feasibility of Data Processing and Interval Computations. Kluwer, Dordrecht, 1998.
 [21] J.R. Kuttler. A fourthorder finitedifference approximation for the fixed membrane eigenproblem. Math. Comput., 25(114):237–256, 1971.
 [22] Huinan Leng and Zhiqing He. Computing eigenvalue bounds of structures with uncertainbutnonrandom parameters by a method based on perturbation theory. Commun. Numer. Methods Eng., 23(11):973–982, 2007.
 [23] Günter Mayer. A unified approach to enclosure methods for eigenpairs. ZAMM, Z. Angew. Math. Mech., 74(2):115–128, 1994.
 [24] J.-P. Merlet and P. Donelan. On the regularity of the inverse Jacobian of parallel robots. In Jadran Lenarčič and B. Roth, editors, Advances in Robot Kinematics, pages 41–48. 2006.
 [25] Ramon E. Moore, R Baker Kearfott, and Michael J. Cloud. Introduction to Interval Analysis. SIAM, 2009.
 [26] Arnold Neumaier. New techniques for the analysis of linear interval equations. Linear Algebra Appl., 58:273–325, 1984.
 [27] Arnold Neumaier. Interval Methods for Systems of Equations. Cambridge University Press, Cambridge, 1990.
 [28] D. Oetomo, D. Daney, B. Shirinzadeh, and J.P. Merlet. An intervalbased method for workspace analysis of planar flexurejointed mechanism. J Mech. Des., 131(1):0110141–01101411, 2009.
 [29] Takeshi Ogita. Accurate and verified numerical computation of the matrix determinant. International Journal of Reliability and Safety, 6(13):242–254, 2011.
 [30] Helmut Ratschek and Jon Rokne. Geometric Computations with Interval and New Robust Methods. Applications in Computer Graphics, GIS and Computational Geometry. Horwood Publishing, Chichester, 2003.
 [31] Jiří Rohn. Miscellaneous results on linear interval systems. Freiburger IntervallBerichte 85/9, AlbertLudwigsUniversität, Freiburg, 1985.
 [32] Jiří Rohn. Cheap and tight bounds: The recent result by E. Hansen can be made more efficient. Interval Comput., 1993(4):13–21, 1993.
 [33] Jiří Rohn. Inverse interval matrix. SIAM J. Numer. Anal., 30(3):864–870, 1993.
 [34] Jiří Rohn. Checking positive definiteness or stability of symmetric interval matrices is NPhard. Commentat. Math. Univ. Carol., 35(4):795–797, 1994.
 [35] Jiří Rohn. Positive definiteness and stability of interval matrices. SIAM J. Matrix Anal. Appl., 15(1):175–184, 1994.
 [36] Jiří Rohn. Checking properties of interval matrices. Technical Report 686, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, 1996. http://hdl.handle.net/11104/0123221.
 [37] Jiří Rohn. A handbook of results on interval linear problems. Technical Report 1163, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, 2012.
 [38] Siegfried M. Rump. Computer-assisted proofs and self-validating methods. In Accuracy and Reliability in Scientific Computing, pages 195–240. SIAM, 2005.
 [39] Siegfried M. Rump. Verification methods: Rigorous results using floatingpoint arithmetic. Acta Numer., 19:287–449, 2010.
 [40] Lyle B. Smith. Interval arithmetic determinant evaluation and its use in testing for a Chebyshev system. Communications of the ACM, 12(2):89–93, 1969.