Subsquares Approach – Simple Scheme for Solving Overdetermined Interval Linear Systems

Jaroslav Horáček and Milan Hladík

Charles University, Faculty of Mathematics and Physics, Department of Applied Mathematics, Malostranské nám. 25, 118 00, Prague, Czech Republic horacek@kam.mff.cuni.cz, hladik@kam.mff.cuni.cz

Abstract. In this work we present a new simple but efficient scheme – the Subsquares approach – for developing algorithms that enclose the solution set of overdetermined interval linear systems. We show two algorithms based on this scheme: a simple algorithm as a motivation, followed by a sequential algorithm. Both algorithms can be easily parallelized. Their features are discussed and numerically tested.

Keywords: interval linear systems, interval enclosure, overdetermined systems, parallel computing

1 Introduction

In this paper we address the problem of solving overdetermined interval linear systems (OILS). They occur in many situations, e.g. when computing eigenvectors of interval matrices [2] or when solving various continuous CSP problems. There exist many efficient methods for solving square interval linear systems. Solving overdetermined systems is a little trickier, because we cannot exploit favourable properties of matrices such as diagonal dominance, positive definiteness, etc. Nevertheless, several methods exist – Rohn's method [8], the least squares approach [3], linear programming [3], Gaussian elimination [1] and the method designed by Popova [7].

In our text [3] we showed that the least squares approach is one of the best methods. It returns a very narrow enclosure in a short time. The problem is that least squares always returns a solution, even if the system has none, which is sometimes not desirable. Other methods often rely on some kind of preconditioning, which also leads to overestimation, and for some systems (e.g. with really wide intervals) cannot be applied. It is very difficult to develop one method suitable for every type of system. We present a scheme – the Subsquares approach – which enables us to develop methods for solving overdetermined interval linear systems. We then move to a simple method and a sequential method, both suitable for parallel computing. Before introducing the scheme and the derived methods, we start with some basic interval notation and definitions.

2 Basics of interval arithmetic

We will work here with real closed intervals $\mathbf{x} = [\underline{x}, \overline{x}] = \{ x \in \mathbb{R} : \underline{x} \le x \le \overline{x} \}$, where $\underline{x} \le \overline{x}$. The numbers $\underline{x}$ and $\overline{x}$ are called the lower bound and the upper bound respectively.

We will use intervals as coefficients of matrices and vectors during our computations. The interval representation may be useful in many ways – it may represent uncertainty (e.g. lack of knowledge, damaged data), verification (e.g. errors in measurement), computational errors (e.g. rounding errors in floating point arithmetic), etc. Intervals and interval vectors will be denoted in boldface, i.e. $\mathbf{x}$. Interval matrices will be denoted by bold capitals, i.e. $\mathbf{A}$.

Another notion we will use is the midpoint of an interval $\mathbf{x}$, defined as $x_c = (\underline{x} + \overline{x})/2$. By $A_c$ we will denote the midpoint matrix of $\mathbf{A}$. When comparing two intervals we need the notion of the width of an interval, defined as $w(\mathbf{x}) = \overline{x} - \underline{x}$. If $\mathbf{x}$ is an $n$-dimensional interval vector, we define the "width" and "volume" of $\mathbf{x}$ as

$$ \mathrm{width}(\mathbf{x}) = \sum_{i=1}^{n} w(\mathbf{x}_i), \qquad \mathrm{vol}(\mathbf{x}) = \prod_{i=1}^{n} w(\mathbf{x}_i). $$
The vector and matrix operations are defined using the standard interval arithmetic; for the definition of the interval operations and further properties of interval arithmetic see e.g. [5].
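The basic interval operations used throughout the paper can be sketched in a few lines of Python. This is a minimal illustration only: intervals are plain `(lower, upper)` tuples, the helper names are our own, and outward rounding is ignored, so the sketch is not a rigorously verified implementation.

```python
# Minimal sketch of interval arithmetic with (lower, upper) tuples.
# Helper names are our own; outward rounding is ignored.

def iadd(x, y):
    # [x] + [y]
    return (x[0] + y[0], x[1] + y[1])

def isub(x, y):
    # [x] - [y]
    return (x[0] - y[1], x[1] - y[0])

def imul(x, y):
    # [x] * [y]: the extremes occur among the four endpoint products
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

def midpoint(x):
    return (x[0] + x[1]) / 2

def w(x):
    # width of a single interval
    return x[1] - x[0]

def width(xs):
    # "width" of an interval vector: sum of component widths
    return sum(w(x) for x in xs)

def volume(xs):
    # "volume" of an interval vector: product of component widths
    v = 1.0
    for x in xs:
        v *= w(x)
    return v
```

For example, `imul((1, 2), (-1, 3))` yields `(-2, 6)`: the product of any $a \in [1,2]$ and $b \in [-1,3]$ lies in $[-2, 6]$.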

We continue with definitions connected with interval linear systems. Let us have an interval linear system $\mathbf{A}x = \mathbf{b}$, where $\mathbf{A}$ is an $m \times n$ matrix. When $m = n$ we call it a square system; when $m > n$ we call it an overdetermined system. In the further text, when talking about an overdetermined system, we will always use the notation $\mathbf{A}x = \mathbf{b}$, where $\mathbf{A}$ is an $m \times n$ matrix.

It is necessary to state what we mean by the solution set of an interval linear system. It is the set

$$ \Sigma = \{\, x \in \mathbb{R}^n : Ax = b \ \text{for some}\ A \in \mathbf{A},\ b \in \mathbf{b} \,\}. $$

We shall point out that this approach is different from the least squares method.

If a system has no solution, we call it unsolvable. The interval hull is the tightest $n$-dimensional box (aligned with the axes) enclosing the solution set. When we start using intervals in our computations (we have mentioned their advantages already), many problems become NP-hard [4]; so is the problem of finding the hull of the solution set [10]. It can be computed, quite painfully, e.g. using linear programming [3]. That is why we look instead for a slightly wider $n$-dimensional box containing the hull – the tighter the better. We call it an interval enclosure.
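The definition of the solution set can be made concrete by sampling. The sketch below draws point systems from a small 2×2 interval system (example data of our own) and solves each by Cramer's rule; the bounding box of the sampled solutions is an inner approximation of the hull, and any valid enclosure must contain every sample. The containment bounds quoted in the comments were obtained by evaluating Cramer's rule in interval arithmetic by hand for these data.

```python
# Sampling the solution set of a 2x2 interval system A x = b
# (example data of our own; every interval determinant here is >= 14 > 0).
import random

random.seed(1)

A = [[(4, 6), (1, 2)],
     [(0, 1), (4, 6)]]
b = [(5.5, 7.5), (4.5, 6.5)]

def sample_solution():
    # draw one point system from the intervals and solve it by Cramer's rule
    a11, a12 = [random.uniform(*iv) for iv in A[0]]
    a21, a22 = [random.uniform(*iv) for iv in A[1]]
    b1 = random.uniform(*b[0])
    b2 = random.uniform(*b[1])
    det = a11 * a22 - a12 * a21        # at least 4*4 - 2*1 = 14 for these data
    return ((b1 * a22 - a12 * b2) / det,
            (a11 * b2 - b1 * a21) / det)

samples = [sample_solution() for _ in range(2000)]

# bounding box of the samples: an inner approximation of the interval hull;
# interval Cramer gives x1 in [0.25, 2.893] and x2 in [0.292, 2.786], so
# every sample must fall inside that box
box = [(min(s[i] for s in samples), max(s[i] for s in samples))
       for i in range(2)]
```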

In this work we provide numerical testing at various places in the text; therefore we mention its parameters here. The testing was done on a CPU Intel T2400 Core Duo, 1.83 GHz, with 2.50 GB memory. We used Matlab R2009a, the toolbox for interval computation INTLAB v6 [11], and Versoft v10 [9] for verified interval linear programming.

All examples are tested on random overdetermined systems. A random system is generated in the following way. First we generate a random solvable point overdetermined system, with coefficients taken uniformly from a given interval. Then we inflate the coefficients of this system to intervals of certain width. The original point system is not necessarily the midpoint system of the new interval system: each of its coefficients is randomly shifted towards one of the bounds of the interval in which it lies.
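The generation procedure can be sketched as follows. The sampling ranges, parameter names, and seed are our own illustrative choices; the construction guarantees solvability because the point system, which solves exactly at `x_star`, lies inside the inflated interval system.

```python
# Sketch of the test-system generator: build a solvable point system,
# then inflate each coefficient to an interval of a given width, placing
# the point value at a random position between the bounds.
import random

random.seed(7)

def random_oils(m, n, width):
    # a known point solution and a random m x n point system through it
    x_star = [random.uniform(-10, 10) for _ in range(n)]
    A_pt = [[random.uniform(-10, 10) for _ in range(n)] for _ in range(m)]
    b_pt = [sum(A_pt[i][j] * x_star[j] for j in range(n)) for i in range(m)]

    def inflate(c):
        # c ends up randomly shifted towards one of the interval's bounds
        lo = c - random.uniform(0, width)
        return (lo, lo + width)

    A = [[inflate(c) for c in row] for row in A_pt]
    b = [inflate(c) for c in b_pt]
    return A, b, x_star

A, b, x_star = random_oils(m=5, n=3, width=0.2)
```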

3 General Subsquares method scheme

By a square subsystem (we will also call it a subsquare) of an overdetermined system we mean a system composed of some equations taken (without repetition) from the original overdetermined system such that together they form a square system. Some of the possibilities are shown in Figure 1. For the sake of simplicity we denote the square subsystem of $\mathbf{A}x = \mathbf{b}$ created by equations $i_1, \dots, i_n$ as $\mathbf{A}_{\{i_1,\dots,i_n\}} x = \mathbf{b}_{\{i_1,\dots,i_n\}}$. When we use some order of the systems (e.g. dictionary order – here it does not matter which one), the $j$-th system will be denoted $\mathbf{A}^s_j x = \mathbf{b}^s_j$.

Let us suppose we can solve a square interval system efficiently and quickly, for example by one of the following methods – the Jacobi method [5], the Gauss–Seidel method [5, 6], the Krawczyk method [5, 6], etc. These methods usually cannot be applied to overdetermined systems. Nevertheless, we can combine the fact that we can solve square systems efficiently with the fact that the solution set of an overdetermined system must lie inside the solution set of any of its subsquares. This follows from the observation that adding new equations to a square system can only keep the solution set the same or make it smaller.
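Enumerating the subsquares of an m × n system is a plain combinatorial choice of n equation indices out of m, which can be written in two lines (the shape 5 × 3 below is just an example):

```python
# Enumerating all square subsystems (subsquares) of an m x n overdetermined
# system: each choice of n equation indices out of m gives one subsquare.
from itertools import combinations
from math import comb

m, n = 5, 3  # a small example shape
subsquares = list(combinations(range(m), n))

# the number of subsquares is the binomial coefficient C(m, n)
assert len(subsquares) == comb(m, n) == 10
```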

When we choose some subsquares of an overdetermined system, we can simply intersect their solution enclosures, or do some further work. We get a simple scheme for solving overdetermined interval linear systems – the Subsquares approach – consisting of two steps:

  1. Select a certain amount of square subsystems of $\mathbf{A}x = \mathbf{b}$.

  2. Solve these subsystems and combine the resulting enclosures.

If a method for solving OILS uses this kind of approach, we call it a Subsquares method. As a motivation for this approach, let us take a randomly generated interval system $\mathbf{A}x = \mathbf{b}$ (with rounded bounds).

In Figure 2 we can see the solution set and the hull of one square subsystem (red color) and the same for another (blue color). It can be seen that if we intersect the two hulls (or enclosures), the resulting enclosure may get remarkably tighter.

Figure 1: Various square subsystems a), b), c)
Figure 2: Enclosures and hulls of subsquares

3.1 The simple algorithm

If we solve the square subsystems separately and then intersect the resulting enclosures, we get the raw, simple Algorithm 1 for solving OILS.

Algorithm 1 Subsquares method – simple algorithm
Ensure: enclosure x of the solution set of Ax = b
  while not (terminal condition) do
     choose randomly a subsquare of Ax = b
     compute its enclosure and intersect it with x
  end while
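A minimal runnable sketch of Algorithm 1 follows, on a 3 × 2 example system of our own built around the point solution x = (1, 1). As the square-system solver we use Cramer's rule evaluated in interval arithmetic – a valid though crude enclosure for 2 × 2 subsquares whose interval determinant excludes zero – rather than the methods cited in the text, and since a 3 × 2 system has only three subsquares we simply take all of them instead of choosing randomly. Outward rounding is ignored.

```python
# Sketch of Algorithm 1 on a 3x2 interval system (example data of our own).
from itertools import combinations

def imul(x, y):
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def idiv(x, y):
    # assumes 0 is not contained in y
    p = (x[0] / y[0], x[0] / y[1], x[1] / y[0], x[1] / y[1])
    return (min(p), max(p))

def isect(x, y):
    # an empty result (lo > hi) would prove unsolvability
    return (max(x[0], y[0]), min(x[1], y[1]))

# rows of the overdetermined system A x = b
A = [[(4, 6), (1, 2)],
     [(0, 1), (4, 6)],
     [(0.5, 1.5), (-4.5, -3.5)]]
b = [(5.5, 7.5), (4.5, 6.5), (-3.5, -2.5)]

def cramer_2x2(rows):
    # interval Cramer's rule; the determinant excludes zero for these data
    (a11, a12), (a21, a22) = (A[i] for i in rows)
    b1, b2 = (b[i] for i in rows)
    det = isub(imul(a11, a22), imul(a12, a21))
    return [idiv(isub(imul(b1, a22), imul(a12, b2)), det),
            idiv(isub(imul(a11, b2), imul(b1, a21)), det)]

# solve every subsquare and intersect the enclosures
enc = [(-float('inf'), float('inf'))] * 2
for rows in combinations(range(3), 2):
    enc = [isect(e, s) for e, s in zip(enc, cramer_2x2(rows))]

print(enc)  # a box containing the whole solution set (rounding ignored)
```

For these data the intersection is strictly tighter than any single subsquare's enclosure, illustrating the point of Figure 2.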

This approach is a little naive, but it has its advantages. First, if we compute the enclosures of all possible square subsystems, we get really close to the interval hull. Table 1 shows the average ratios of widths and volumes of the enclosures returned by the simple subsquares method and by verifylss, compared to the interval hull computed by linear programming. If we have an $m \times n$ system, the number of all square subsystems is $\binom{m}{n}$. However, for small $n$ or for $n$ close to $m$ this number might not be so large. That is why solving all subsquares pays off when systems are noodle-like or nearly-squared.

system   av. width ratio   av. vol ratio   av. width ratio   av. vol ratio
         (subsquares)      (subsquares)    (verifylss)       (verifylss)
         1.0014            1.0043          1.1759            1.6502
         1.0028            1.0140          1.1906            2.3831
         1.0044            1.0316          1.2034            3.6733
         1.0061            1.0565          1.1720            4.2902
         1.0227            1.6060          1.0833            5.4266
         1.0524            5.8330          1.0987            51.0466
Table 1: Simple subsq. method solving all subsquares – enclosures comparison

The second advantage is that Algorithm 1 can, in contrast to other methods, easily decide that a system is unsolvable – if, in some iteration, the resulting enclosure is empty after the intersection, then the overdetermined system is unsolvable. Table 2 shows the average number of random subsquares solved until unsolvability is discovered (an empty intersection occurs). Each column represents systems of different coefficient radii. We can see that for systems with relatively small intervals, unsolvability is revealed almost immediately.

system   av. number of subsquares solved (one column per coefficient radius)
         2.1     2.0     2.0
         2.2     2.0     2.0
         2.2     2.0     2.0
         2.4     2.0     2.0
         2.9     2.1     2.0
         7.1     2.1     2.0
Table 2: Simple subsq. method – unsolvability detection

For most rectangular systems, however, it is not convenient to compute the solutions of all or even many subsystems. The choice of square subsystems and the solving algorithm can be modified to be more economical and efficient.

3.2 The sequential algorithm

When selecting subsquares randomly, they usually overlap. We can think of the overlaps as "meeting points" of square subsystems: they share some data (equations) there. It would therefore be advantageous to use this overlap to propagate a partial result of the computation over one square subsystem into the computations over other subsystems. Talking about the immediate propagation of partially computed solutions, one naturally arrives at the Gauss–Seidel iterative method (GS).

This method starts with an initial enclosure $\mathbf{x}^{(0)}$. In the $k$-th iteration, each entry of the current solution enclosure vector might be narrowed using the formula

$$ \mathbf{x}^{(k)}_i = \frac{1}{\mathbf{a}_{ii}} \Big( \mathbf{b}_i - \sum_{j<i} \mathbf{a}_{ij}\,\mathbf{x}^{(k)}_j - \sum_{j>i} \mathbf{a}_{ij}\,\mathbf{x}^{(k-1)}_j \Big) \cap \mathbf{x}^{(k-1)}_i. $$

Simply said, in each iteration this algorithm expresses $x_i$ from the $i$-th equation, using the newly computed values immediately.

In our algorithm we will use the GS iteration in a similar way for several square subsystems simultaneously. Again, we start with some initial enclosure $\mathbf{x}^{(0)}$. In the $k$-th iteration we perform the $k$-th GS iteration step for all systems. The advantage of this technique is that we narrow each variable according to formulas originating from several systems; we expect the narrowing rate to be much better this way. Similarly to plain GS, if at some point of the computation an empty intersection occurs, the whole overdetermined system has no solution.

Iterative methods usually require preconditioning; here each square subsystem is preconditioned with the inverse of its midpoint matrix. Two questions remain unanswered yet – the initial enclosure and the terminal condition. To find $\mathbf{x}^{(0)}$, we can take the resulting enclosure of some other method, or we can solve one square subsystem. The algorithm terminates after the $k$-th iteration if e.g.

$$ w(\mathbf{x}^{(k-1)}_i) - w(\mathbf{x}^{(k)}_i) < \varepsilon \quad \text{for every } i = 1, \dots, n, $$

for some small positive $\varepsilon$.

Now we have to choose the subsystems for the sequential method. Before going on, we shall consider the following desirable properties of the new algorithm, inspired by four unfavourable features of the simple algorithm:

  1. We do not want too many square subsystems.

  2. We want the subsystems to cover the whole overdetermined system.

  3. The overlap of subsquares should be neither too low nor too high.

  4. We take subsquares that narrow the resulting enclosure as much as possible.

The first and second properties can be achieved by covering the system step by step using some overlap parameter. Concerning the third property, a moderate overlap proved itself to be a reasonable choice. Property four is difficult to ensure: we think that deciding which systems to choose (in a favourable time) is still an area to be explored. Yet randomness will serve us well.

Among the many possibilities we tested, the following choice of subsystems worked well. During the algorithm we divide the equations of the overdetermined system into two sets – covered, which contains the equations already contained in some subsystem, and waiting, which contains the equations that are not covered yet. We also use a parameter overlap.

The first subsystem is chosen randomly; every other subsystem is composed of overlap rows from covered equations and (n − overlap) rows from waiting ones. The last system is composed of all remaining uncovered rows, with some already covered rows added to form a square system. This is described as Algorithm 2. The algorithm is not necessarily optimal; it should serve as an illustration. The procedure randsel(n, list) selects n random non-repeating numbers from list. The total number of subsquares chosen by this algorithm is

$$ 1 + \left\lceil \frac{m - n}{n - \mathrm{overlap}} \right\rceil. $$

The whole sequential algorithm is summarized as Algorithm 3. The function GS-iteration($\mathbf{A}^s_j x = \mathbf{b}^s_j$, $\mathbf{x}$) applies one iteration of the Gauss–Seidel method to the given subsquare, using $\mathbf{x}$ as the initial enclosure. The method has-converged($\mathbf{x}$) returns true if the terminal condition (e.g. the one mentioned earlier) is satisfied.

Algorithm 2 Choosing square subsystems
Require: Ax = b, overlap
Ensure: set of square subsquares of Ax = b
  systems ← ∅ {set of square subsystems}
  covered ← ∅ {numbers of equations covered by some subsystem}
  waiting ← {1, …, m} {numbers of equations to be covered}
  while waiting ≠ ∅ do
     if covered = ∅ then
        indices ← randsel(n, waiting)
     else if |waiting| ≤ n − overlap then
        indices ← waiting ∪ randsel(n − |waiting|, covered)
     else
        indices ← randsel(overlap, covered) ∪ randsel(n − overlap, waiting)
     end if
     systems ← systems ∪ {A_indices x = b_indices}
     covered ← covered ∪ indices
     waiting ← waiting ∖ indices
  end while
  return systems
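The covering strategy of Algorithm 2 can be transcribed directly into Python; `randsel` is realized with `random.sample`, and the shape 15 × 5 with overlap 2 is only an example of ours. The final assertion checks the subsquare count formula stated above.

```python
# A direct transcription of Algorithm 2: cover the m equations by square
# subsystems of size n, consecutive subsystems sharing `overlap` rows.
import math
import random

def randsel(k, pool):
    # select k random non-repeating elements of pool
    return random.sample(sorted(pool), k)

def choose_subsquares(m, n, overlap, seed=0):
    random.seed(seed)
    systems = []                    # index sets of the chosen subsquares
    covered = set()                 # equations already in some subsquare
    waiting = set(range(m))         # equations not covered yet
    while waiting:
        if not covered:
            indices = set(randsel(n, waiting))
        elif len(waiting) <= n - overlap:
            # last subsquare: all remaining rows, padded with covered ones
            indices = waiting | set(randsel(n - len(waiting), covered))
        else:
            indices = (set(randsel(overlap, covered))
                       | set(randsel(n - overlap, waiting)))
        systems.append(sorted(indices))
        covered |= indices
        waiting -= indices
    return systems

systems = choose_subsquares(m=15, n=5, overlap=2)

# every subsquare is square, the union covers all equations, and the count
# matches 1 + ceil((m - n) / (n - overlap))
assert all(len(s) == 5 for s in systems)
assert set().union(*systems) == set(range(15))
assert len(systems) == 1 + math.ceil((15 - 5) / (5 - 2))
```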
Algorithm 3 Subsquares method – sequential version
Ensure: enclosure x of the solution set
  select square subsquares (Algorithm 2)
  converged ← false
  while not converged do
     for j ← 1 to (number of subsquares) do
        x ← GS-iteration(A^s_j x = b^s_j, x)
     end for
     converged ← has-converged(x)
  end while
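The sequential method can be sketched on the same kind of 3 × 2 example system (data of our own, point solution x = (1, 1)). For brevity this sketch runs plain interval Gauss–Seidel without the midpoint preconditioning, simply ordering the rows of each subsquare so that the diagonal intervals exclude zero, and it uses a fixed number of sweeps instead of the has-converged test. Outward rounding is ignored, so it is illustrative rather than verified.

```python
# Sketch of the sequential subsquares method (Algorithm 3): one Gauss-Seidel
# sweep per subsquare, interleaved over a single shared enclosure vector.

def imul(x, y):
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def idiv(x, y):
    # assumes 0 not contained in y
    p = (x[0] / y[0], x[0] / y[1], x[1] / y[0], x[1] / y[1])
    return (min(p), max(p))

def isect(x, y):
    z = (max(x[0], y[0]), min(x[1], y[1]))
    if z[0] > z[1]:
        raise ValueError("empty intersection: the system is unsolvable")
    return z

rows = [((4, 6), (1, 2), (5.5, 7.5)),              # row 0: a1, a2, b
        ((0, 1), (4, 6), (4.5, 6.5)),              # row 1
        ((0.5, 1.5), (-4.5, -3.5), (-3.5, -2.5))]  # row 2

# square subsystems (row indices), ordered so diagonals exclude zero
subsquares = [(0, 1), (0, 2), (2, 1)]

def gs_sweep(rows_idx, x):
    # one Gauss-Seidel iteration of a 2x2 subsquare, narrowing x in place
    for i, r in enumerate(rows_idx):
        a, bb = (rows[r][0], rows[r][1]), rows[r][2]
        j = 1 - i
        num = isub(bb, imul(a[j], x[j]))
        x[i] = isect(idiv(num, a[i]), x[i])

x = [(-100.0, 100.0), (-100.0, 100.0)]  # initial enclosure
for _ in range(50):                     # fixed sweeps instead of has-converged
    for sq in subsquares:
        gs_sweep(sq, x)

print(x)
```

Each sweep propagates the narrowing achieved on one subsquare immediately into the next, which is exactly the "meeting point" effect described above.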

As we showed in [3], verifylss (using the least squares) from INTLAB is one of the best and most general methods for overdetermined systems that are solvable. That is why we compare our method with it. During our testing we realized that verifylss works fine with small intervals, but it is not very efficient when the intervals become relatively wide. We used the enclosures returned by verifylss as inputs for the subsquares sequential method and tested whether our algorithm was able to narrow them. Table 3 shows the results. The radius column shows the radii of the intervals of the testing systems; we chose the same radius for all coefficients of a system. We tested on 100 random systems. For each system we chose 100 random subsquares sets and applied the sequential algorithm to them. The fourth column shows the average ratios of the enclosure widths of the sequential method and verifylss. Each random selection of a subsquares set usually produces a different ratio of enclosure widths; for each system we took the one of the 100 subsquares sets producing the best ratio. The fifth column shows the average value of these best ratios over the 100 random systems. The last two columns show the computation times (in seconds) of verifylss and the sequential method respectively.

system   overlap   radius   av. ratio   best av. ratio   t-verifylss [s]   t-seq [s]
           3        0.1      0.99        0.94             0.006             0.06
           3        0.25     0.97        0.86             0.007             0.07
           3        0.35     0.93        0.79             0.008             0.09
           3        0.5      0.87        0.66             0.01              0.12
           5        0.1      0.99        0.98             0.006             0.09
           5        0.25     0.99        0.94             0.007             0.12
           5        0.35     0.98        0.92             0.008             0.14
           5        0.5      0.94        0.79             0.012             0.20
           7        0.1      0.99        0.98             0.008             0.11
           7        0.25     0.99        0.95             0.011             0.19
           7        0.35     0.97        0.90             0.015             0.29
           7        0.5      0.87        0.38             0.016             0.72
          11        0.1      0.99        0.98             0.014             0.16
          11        0.25     0.97        0.84             0.023             0.51
Table 3: Subsquare method shaving the verifylss enclosure

The larger the interval radii are, the more intensively the sequential method sharpens the verifylss enclosures. When an interval system has a solvable midpoint system, or a system relatively close to it solvable, we are able to compute a much tighter solution enclosure. This happens because of the preconditioning with the midpoint inverse matrix. Selected results can be seen in Table 4.

system   overlap   radius   av. ratio   best av. ratio
           3        0.1      0.98        0.89
           3        0.25     0.85        0.59
           3        0.35     0.76        0.40
           5        0.1      0.99        0.96
           5        0.25     0.93        0.74
           5        0.35     0.76        0.33
           7        0.1      0.99        0.97
           7        0.25     0.89        0.34
          11        0.1      0.98        0.82
Table 4: Subsquares method shaving the verifylss enclosure (solvable midpoint system)

3.3 Parallel algorithm

The naive algorithm can be easily parallelized: all the square subsystems can be solved in parallel and the resulting enclosures are then intersected. There is only one barrier, at the end of the computation.
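This "solve in parallel, intersect at the barrier" structure can be sketched with a thread pool. The example reuses the 3 × 2 system of our own (point solution x = (1, 1)) and interval Cramer's rule as the per-subsquare solver, which is our illustrative stand-in for the square solvers named earlier; outward rounding is ignored.

```python
# Sketch of the parallel naive algorithm: each square subsystem is solved
# in its own thread; the single barrier is the final intersection.
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def imul(x, y):
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def idiv(x, y):
    # assumes 0 not contained in y
    p = (x[0] / y[0], x[0] / y[1], x[1] / y[0], x[1] / y[1])
    return (min(p), max(p))

A = [[(4, 6), (1, 2)],
     [(0, 1), (4, 6)],
     [(0.5, 1.5), (-4.5, -3.5)]]
b = [(5.5, 7.5), (4.5, 6.5), (-3.5, -2.5)]

def solve_subsquare(rows):
    # interval Cramer's rule; the determinant excludes zero for these data
    (a11, a12), (a21, a22) = (A[i] for i in rows)
    b1, b2 = (b[i] for i in rows)
    det = isub(imul(a11, a22), imul(a12, a21))
    return [idiv(isub(imul(b1, a22), imul(a12, b2)), det),
            idiv(isub(imul(a11, b2), imul(b1, a21)), det)]

with ThreadPoolExecutor() as pool:
    boxes = list(pool.map(solve_subsquare, combinations(range(3), 2)))

# the barrier: intersect all enclosures at the end
enc = [(max(bx[i][0] for bx in boxes), min(bx[i][1] for bx in boxes))
       for i in range(2)]
print(enc)
```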

If we take a look at the computation times in Table 3, we realize verifylss is much faster. However, the price paid for gaining a much more precise enclosure using the sequential method is not too high. Moreover, even the sequential algorithm can be parallelized. Propagation of a newly computed enclosure can be guaranteed by sharing the enclosure vector among processors as a global variable. If we use the Jacobi formula instead of the Gauss–Seidel formula for one iteration, the computation becomes of SIMD kind – single instruction, multiple data. Therefore it could be used even on GPUs – one pipeline narrows one variable of the interval enclosure vector, and a bunch of pipelines computes over one subsquare. Nevertheless, the shared vector might create a bottleneck. We believe this could be prevented by the following behaviour of each pipeline: when reading, a pipeline does not lock the corresponding shared variable; after each iteration, it overwrites a shared variable only if it has improved the value currently stored there. This is inspired by the observation that, in one iteration, not many computations over different subsquares improve the same variable. However, we assume there exist much more efficient ways to make parallel subsquares methods faster and to avoid memory collisions.

4 Conclusion

In this paper we introduced a simple but efficient scheme – the subsquares approach – for enclosing the solution set of overdetermined interval linear systems. The first method derived from this scheme was a little naive, but for noodle-like or nearly-square systems it was able to find almost the interval hull. The second method was a little more sophisticated, but still quite simple. It worked well on interval systems whose coefficients are wide intervals, and it was able to significantly sharpen the enclosures produced by verifylss. Both methods were able to detect unsolvability of OILS. Moreover, both can be easily parallelized. In the second method we chose the square subsystems randomly, which is why the sharpening it produced had variable results. It is an open question whether for each OILS there exists a deterministically chosen set of subsquares giving the best possible enclosure, so that the randomization can be avoided.

Our research was supported by the grant GAČR P402/13/10660S. Jaroslav Horáček was partially supported by the Grant Agency of the Charles University (GAUK) grant no. 712912 and by GAUK no. SVV-2013-267313.


References

  • [1] E. Hansen and G. Walster. Solving overdetermined systems of interval linear equations. Reliable Computing, 12(3):239–243, 2006.
  • [2] M. Hladík, D. Daney, and E. P. Tsigaridas. An algorithm for addressing the real interval eigenvalue problem. J. Comput. Appl. Math., 235(8):2715–2730, 2011.
  • [3] J. Horáček and M. Hladík. Computing enclosures of overdetermined interval linear systems. Submitted to Reliable Computing, text available at http://arxiv.org/abs/1304.4738, 2013.
  • [4] V. Kreinovich, A. Lakeyev, J. Rohn, and P. Kahl. Computational complexity and feasibility of data processing and interval computations. Kluwer, 1998.
  • [5] R. Moore, R. Kearfott, and M. Cloud. Introduction to interval analysis. Society for Industrial and Applied Mathematics, 2009.
  • [6] A. Neumaier. Interval methods for systems of equations, volume 37. Cambridge University Press, 1990.
  • [7] E. D. Popova. Improved solution enclosures for over-and underdetermined interval linear systems. In Large-Scale Scientific Computing, pages 305–312. Springer, 2006.
  • [8] J. Rohn. Enclosing solutions of overdetermined systems of linear interval equations. Reliable Computing, 2(2):167–171, 1996.
  • [9] J. Rohn. VERSOFT: Verification software in MATLAB / INTLAB, version 10, 2009. http://uivtx.cs.cas.cz/~rohn/matlab/.
  • [10] J. Rohn and V. Kreinovich. Computing exact componentwise bounds on solutions of linear systems with interval data is NP-hard. SIAM Journal on Matrix Analysis and Applications, 16(2):415–420, 1995.
  • [11] S. Rump. INTLAB - INTerval LABoratory. In T. Csendes, editor, Developments in Reliable Computing, pages 77–104. Kluwer Academic Publishers, Dordrecht, 1999. http://www.ti3.tu-harburg.de/rump/.