Redundancies in Linear Systems with two Variables per Inequality


Komei Fukuda, Department of Mathematics and Department of Computer Science, Institute of Theoretical Computer Science, ETH Zürich, CH-8092 Zürich, Switzerland. komei.fukuda@math.ethz.ch
May Szedlák, Department of Computer Science, Institute of Theoretical Computer Science, ETH Zürich, CH-8092 Zürich, Switzerland. may.szedlak@inf.ethz.ch
Research supported by the Swiss National Science Foundation (SNF Project 200021_150055 / 1)
October 10, 2016
Abstract

The problem of detecting and removing redundant constraints is fundamental in optimization. We focus on the case of linear programs (LPs), given by $n$ variables with $m$ inequality constraints. A constraint is called redundant if, after its removal, the LP still has the same feasible region. The currently fastest method to detect all redundancies is due to Clarkson: it solves $m$ linear programs, but each of them has at most $s$ constraints, where $s$ is the number of nonredundant constraints.

In this paper, we study the special case where every constraint has at most two variables with nonzero coefficients. This family, denoted by $LI(2)$, has some nice properties. Namely, as shown by Aspvall and Shiloach, given a variable $x_i$ and a value $\lambda$, we can test in time $O(mn)$ whether there is a feasible solution with $x_i = \lambda$. Hochbaum and Naor present an $O(mn^2 \log m)$ algorithm for solving the feasibility problem in $LI(2)$. Their technique makes use of the Fourier-Motzkin elimination method and the earlier mentioned result by Aspvall and Shiloach.

We present a strongly polynomial algorithm that solves redundancy detection in time $O(msn^2 \log s)$. It uses a modification of Clarkson's algorithm, together with a revised version of Hochbaum and Naor's technique. Finally, we show that dimensionality testing can be done with the same running time as solving feasibility.

1 Introduction

The problem of detecting and removing redundant constraints is fundamental in optimization. Being able to understand redundancies in a model is an important step towards improvements of the model and faster solutions.

Throughout we consider linear systems of inequalities of the form $Ax \leq b$, for $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $x \in \mathbb{R}^n$. The $i$-th constraint, denoted $A_i x \leq b_i$, is called redundant if its removal does not change the set of feasible solutions. By removing $A_i x \leq b_i$ from the system we get a new system denoted $A_{I \setminus i}\, x \leq b_{I \setminus i}$. Assume that $Ax \leq b$ is feasible; then by solving the following linear program (LP) we can decide redundancy of $A_i x \leq b_i$.

$$\text{maximize } A_i x \quad \text{subject to } A_{I \setminus i}\, x \leq b_{I \setminus i},\ A_i x \leq b_i + 1. \qquad (1)$$

Namely, constraint $i$ is redundant if and only if the optimal value of (1) is at most $b_i$.
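As an illustration, this test can be sketched with any LP solver. The following minimal Python sketch uses scipy.optimize.linprog; the helper name and tolerance are our own, and we assume the input system is feasible:

```python
import numpy as np
from scipy.optimize import linprog

def is_redundant(A, b, i, tol=1e-9):
    """Decide redundancy of constraint i in Ax <= b via the LP (1):
    maximize A_i x subject to the remaining constraints plus the
    relaxed copy A_i x <= b_i + 1 (which keeps the LP bounded)."""
    A_rest = np.delete(A, i, axis=0)
    b_rest = np.delete(b, i)
    A_ub = np.vstack([A_rest, A[i]])
    b_ub = np.append(b_rest, b[i] + 1.0)
    # linprog minimizes, so negate the objective to maximize A_i x
    res = linprog(-A[i], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * A.shape[1])
    # feasible and bounded by construction (assuming Ax <= b is feasible)
    return -res.fun <= b[i] + tol
```

For instance, in the system $x_1 \leq 1$, $x_1 \leq 2$, the second constraint is detected as redundant, since the LP over the remaining constraints attains optimal value $1 \leq 2$.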

Let $LP(m,n)$ denote the time needed to solve an LP with $m$ inequalities and $n$ variables. Solving $m$ linear programs of form (1), with running time $O(m \cdot LP(m,n))$, is enough for detecting all redundancies. The currently fastest method is due to Clarkson with running time $O(m \cdot LP(s,n) + msn)$ [2], where we initially assume an interior point is given. This method also solves $m$ linear programs, but each of them has at most $s$ constraints, where $s$ is the number of nonredundant constraints. Hence, if $s \ll m$, this output-sensitive algorithm is a major improvement.

In general no strongly polynomial time algorithm (polynomial in $m$ and $n$) to solve an LP is known. Although the simplex algorithm runs fast in practice, in general it can have exponential running time [3, 8]. On the other hand, the ellipsoid method runs in time polynomial in the encoding size of the input, but is not practical [7]. A first practical polynomial time algorithm, the interior-point method, was introduced in [6], and has been modified in many ways since.

In this paper we focus on the special case where every constraint has at most two variables with nonzero coefficients; we denote this family by $LI(2)$. Our main result is that for a full-dimensional system in $LI(2)$ we can detect all redundancies in time $O(msn^2 \log s)$ (see Theorem 2), where we assume that an interior point solution is given. To our knowledge, this is the first strongly polynomial time algorithm for redundancy detection in $LI(2)$.

To obtain this running time we use a modified version of Clarkson's algorithm, which solves feasibility problems instead of optimization problems. Moreover, our algorithm makes use of a modified version of Hochbaum and Naor's algorithm, which for a system in $LI(2)$ finds a feasible point or a certificate of infeasibility in time $O(mn^2 \log m)$ [5]. This result is an improvement of Megiddo's algorithm with running time $O(mn^3 \log m)$ [9]. Although the two techniques are similar and both rely heavily on [1], the improved version is much simpler.

We give a summary of the Hochbaum-Naor algorithm in Section 4. In Section 5 we give a stronger version of this algorithm, which decides full-dimensionality and in the full-dimensional case outputs an interior point. Using this variant of the algorithm together with our modification of Clarkson's algorithm, we get an output-sensitive, strongly polynomial time redundancy detection algorithm. In Section 6 we show how the results extend to non-full-dimensional systems (see Theorem 8).

In all cases the preprocessing can also be done in strongly polynomial time. Moreover, we show that dimensionality testing of a polytope can be done in the same running time as the feasibility testing method of Hochbaum and Naor (see Corollary 11). Note that for general LPs one needs to solve up to $m$ optimization problems.

Although in $LI(2)$ one can find a feasible solution fast, it is not known how to find an optimal solution in strongly polynomial time. For general LPs a standard technique for converting an optimization problem into a feasibility problem is to use the dual linear program. However, the dual of a system in $LI(2)$ is generally not in $LI(2)$. If the objective function is in $LI(2)$, one can apply binary search on the value of the optimal solution; this gives an algorithm whose running time depends on the encoding size of the input.

Note that Clarkson's algorithm relies on finding an optimal solution of a linear program. Since for $LI(2)$ we do not have a fast way to optimize, we modify the algorithm so that it only solves feasibility problems.

2 Definitions and Preliminaries

As already mentioned in the introduction, throughout we consider linear systems of the form

$$Ax \leq b,$$

where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$.

The set of inequalities of $Ax \leq b$ is denoted by $I = \{1, 2, \dots, m\}$. A point $x^* \in \mathbb{R}^n$ is a feasible solution or feasible point of $Ax \leq b$ (or $I$) if $Ax^* \leq b$. It is called an interior point solution of $Ax \leq b$ (or $I$) if all inequalities are satisfied with strict inequality, i.e., $Ax^* < b$ (where "$<$" denotes the componentwise strict inequality). The system $Ax \leq b$ (or $I$) is called feasible if a feasible solution exists, otherwise it is called infeasible. If an interior point solution exists, the system is called full-dimensional. The system is called $k$-dimensional if the solution set is $k$-dimensional.

For a subset $R \subseteq I$ we denote by $A_R x \leq b_R$ the subsystem of $Ax \leq b$ containing only the inequalities indexed by $R$. In particular the $i$-th constraint is denoted by $A_i x \leq b_i$. This constraint is called redundant if $A_{I \setminus i}\, x \leq b_{I \setminus i}$ implies $A_i x \leq b_i$, or equivalently if there is no solution to the system

$$A_{I \setminus i}\, x \leq b_{I \setminus i}, \quad A_i x > b_i.$$

As mentioned in the introduction, we can test redundancy of a constraint by solving an LP of form (1).

For a feasible system $I$ and a variable $x_i$, let $[x_i^{\min}, x_i^{\max}]$ be the projection of the solution space of $I$ to the $x_i$-axis; we call this the range of $x_i$. This interval is exactly the set of values $\lambda$ for which a solution of the entire system with $x_i = \lambda$ can be constructed. It is possible that $x_i^{\min} = -\infty$ or $x_i^{\max} = \infty$.

In this paper we are interested in sparse linear systems, in particular the family $LI(2)$. A linear system is in $LI(2)$ if every constraint has at most two variables with nonzero coefficients. That means all inequalities are of the form $a x_i + b x_j \leq c$ for some $i, j \in \{1, \dots, n\}$ and $a, b, c \in \mathbb{R}$.

We define the neighbors of $x_i$ in $I$, denoted $N(x_i)$, as the set of variables $x_j$, $j \neq i$, for which there exists an inequality in $I$ containing $x_i$ and $x_j$ with nonzero coefficients.

The system $I_{x_i = \lambda}$ (or $(Ax \leq b)_{x_i = \lambda}$) is obtained from $I$ (or $Ax \leq b$) by substituting the variable $x_i$ by the constant $\lambda$; it hence has one variable less than the original system.

3 A Strongly Polynomial Time Redundancy Detection Algorithm for Linear Programs with two Variables per Inequality

In this section we prove our main result: a strongly polynomial algorithm that detects all redundancies in $LI(2)$ in time $O(msn^2 \log s)$ (see Theorem 2).

Figure 1: Redundancy Certificates

We make use of the following modified version of Hochbaum and Naor’s result (Theorem 1). We will discuss their original result in Section 4 and the validity of the modification in Section 5. In Section 6 we will discuss how the results extend to non-full-dimensional systems.

Theorem 1.

For a system $Ax \leq b$ in $LI(2)$ one can decide in time $O(mn^2 \log m)$ whether the system is full-dimensional, and in the full-dimensional case output an interior point solution.

Theorem 2.

Let $Ax \leq b$ be a full-dimensional system in $LI(2)$. Let $x^*$ be an interior point solution of $Ax \leq b$ and let $x^*_\varepsilon$ be a generic perturbation of $x^*$ (e.g., $x^* + (\varepsilon, \varepsilon^2, \dots, \varepsilon^n)$ for $\varepsilon > 0$ small enough), i.e., $x^*_\varepsilon$ is a generic interior point. Then the following algorithm detects all redundancies in time $O(msn^2 \log s)$.

Algorithm Modified Clarkson ($A$, $b$, $x^*_\varepsilon$);
begin
$R := \emptyset$; $G := I$;
while $G \neq \emptyset$ do
pick any $i \in G$ and use Theorem 1 on $A_R x \leq b_R$, $A_i x \geq b_i$;
if $A_R x \leq b_R$, $A_i x \geq b_i$ not full-dimensional then
$G := G \setminus \{i\}$;
else let $z$ be an interior point solution of $A_R x \leq b_R$, $A_i x \geq b_i$, then
$R := R \cup \{j\}$, $G := G \setminus \{j\}$, where $j = \mathrm{RayShoot}(x^*_\varepsilon, z)$;
endif;
endwhile;
output $R$;
end.

The function $\mathrm{RayShoot}(x, z)$ returns the index of a facet-inducing hyperplane $\{y \in \mathbb{R}^n \mid A_j y = b_j\}$ which is hit first by the ray starting at $x$ along the direction of $z - x$.
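Since every constraint has at most two nonzero coefficients, one ray shot costs $O(m)$ time. A minimal Python sketch, assuming $x$ is interior ($Ax < b$) and $z$ violates at least one constraint (the function name is ours):

```python
import numpy as np

def ray_shoot(A, b, x, z):
    """Index of the hyperplane {y : A_j y = b_j} hit first by the ray
    from x in direction z - x; the first hit is unique if x is generic."""
    d = z - x
    best_t, best_j = np.inf, None
    for j in range(A.shape[0]):
        slope = A[j] @ d
        if slope > 0:                      # ray approaches this hyperplane
            t = (b[j] - A[j] @ x) / slope  # t > 0 since A_j x < b_j
            if t < best_t:
                best_t, best_j = t, j
    return best_j
```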

Note that Theorem 1 immediately implies that the interior point $x^*$ of Theorem 2 can be found in strongly polynomial time $O(mn^2 \log m)$. It follows that finding all redundancies, including the preprocessing, can be achieved in strongly polynomial time.

Using Theorem 1 and the following observation (see also Figure 1) we can prove Theorem 2.

Observation 3.

Let $Ax \leq b$ be a full-dimensional system in $LI(2)$. Then for $i \in I$ the following statements are equivalent.

  1. $A_i x \leq b_i$ is nonredundant in $I$.

  2. $A_i x \leq b_i$ is facet-inducing for the polyhedron $\{x \in \mathbb{R}^n \mid Ax \leq b\}$.

  3. The system $A_{I \setminus i}\, x \leq b_{I \setminus i}$, $A_i x \geq b_i$ is full-dimensional.

Proof of Theorem 2.

We have to show that the modified Clarkson algorithm returns $R$, the index set of the nonredundant constraints. We first discuss correctness of the algorithm by induction. We claim that in every step, all constraints indexed by $R$ are nonredundant and all constraints that left $G$ without entering $R$ are redundant. This is trivially true in the beginning. Assume that in some step of the algorithm the claim holds and we pick $i \in G$. If $A_R x \leq b_R$, $A_i x \geq b_i$ is not full-dimensional, then $A_{I \setminus i}\, x \leq b_{I \setminus i}$, $A_i x \geq b_i$ is not full-dimensional either (its solution set is contained in the one of the former system), and hence $A_i x \leq b_i$ is redundant by Observation 3.

If $A_R x \leq b_R$, $A_i x \geq b_i$ is full-dimensional and $z$ is an interior point solution of it, then we do ray shooting from $x^*_\varepsilon$ towards $z$. Note that $A_i z > b_i$, hence $z$ is not in the feasible region of $Ax \leq b$. Since $x^*_\varepsilon$ is feasible and $z$ is not, the ray crosses the boundary, so the first constraint hit (with index $j$) exists. This constraint is unique, since $x^*_\varepsilon$ is generic and $z$ is not a feasible solution. Denote the intersection point of the hyperplane given by $A_j y = b_j$ with the ray by $y^*$. It follows that $A_j y^* = b_j$ and $A_k y^* < b_k$ for all $k \neq j$; hence $A_j x \leq b_j$ is facet-inducing and therefore nonredundant by Observation 3. This proves correctness of the algorithm.

It remains to discuss the running time. Since in every round the size of $G$ decreases by one, the outer loop is executed $m$ times. In every round we run the algorithm of Theorem 1 on at most $s + 1$ inequalities. This takes time $O(sn^2 \log s)$ by Theorem 1. Moreover, there are at most $s$ stages of ray shooting, which take time $O(sm)$ in total. The running time follows. ∎

4 Revision of the Hochbaum-Naor Method

Since we modify Hochbaum and Naor’s Method in the next section, for completeness we review the basic components and the key ideas of the algorithm.

Theorem 4.

[5] For a system $Ax \leq b$ in $LI(2)$ one can decide whether the problem is feasible in time $O(mn^2 \log m)$, and in the feasible case output a solution.

In Section 4.1 we give all relevant tools to prove the theorem. We then discuss the feasible case in Sections 4.2 and 4.3, and finally the infeasible case in Section 4.4.

4.1 The Ingredients

The Hochbaum-Naor Theorem is a mix of an efficient implementation of the Fourier-Motzkin method and the result by Aspvall and Shiloach described below [1]. In general the Fourier-Motzkin method may generate an exponential number of inequalities; in the $LI(2)$ case, however, one can implement it efficiently.

The Fourier-Motzkin Method (for general LPs)

(For more details please refer to [10, pp. 155-156].) Let $S_1$ be a set of inequalities on the variables $x_1, \dots, x_n$. The Fourier-Motzkin method eliminates the variables one by one to obtain a feasible solution or a certificate of infeasibility. At step $k$, the LP only contains the variables $x_k, \dots, x_n$. Let us denote this system by $S_k$. To eliminate the variable $x_k$, all inequalities that contain $x_k$ are written as $x_k \leq f(x_{k+1}, \dots, x_n)$ or $x_k \geq g(x_{k+1}, \dots, x_n)$, where $f$ and $g$ are some linear functions in $x_{k+1}, \dots, x_n$. Let us denote the two families of inequalities obtained by $F$ and $G$, respectively. For each $f \in F$ and each $g \in G$ we add a new inequality $g(x_{k+1}, \dots, x_n) \leq f(x_{k+1}, \dots, x_n)$. This yields a set of inequalities $S_{k+1}$ on the variables $x_{k+1}, \dots, x_n$. The method is feasibility preserving, and given a solution to $S_{k+1}$, one can construct a solution of $S_k$ in time $O(|S_k|)$.
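As a sketch, one elimination step can be written as follows (the representation of inequalities as (coefficient list, right-hand side) pairs is our own choice):

```python
from itertools import product

def fm_eliminate(ineqs, k):
    """One Fourier-Motzkin step: eliminate x_k from inequalities given
    as (a, c) pairs meaning a . x <= c."""
    keep, lower, upper = [], [], []
    for a, c in ineqs:
        if a[k] > 0:
            upper.append((a, c))          # yields an upper bound on x_k
        elif a[k] < 0:
            lower.append((a, c))          # yields a lower bound on x_k
        else:
            keep.append((a, c))
    # one new inequality for every (lower, upper) pair
    for (al, cl), (au, cu) in product(lower, upper):
        s_l, s_u = au[k], -al[k]          # both positive multipliers
        a_new = [s_l * p + s_u * q for p, q in zip(al, au)]
        keep.append((a_new, s_l * cl + s_u * cu))  # x_k cancels in a_new
    return keep
```

For example, applied with $k = 0$ to $x_0 + x_1 \leq 1$ and $-x_0 \leq 0$ (lists [1, 1] and [-1, 0]), it returns the single inequality $x_1 \leq 1$.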

The Aspvall-Shiloach Method [1]

Hochbaum and Naor's algorithm relies heavily on the following result by Aspvall and Shiloach [1].

For $k \in \{1, 2\}$ let $t_k$ be of the form $t_k : a_k x_k + b_k x_{k+1} \leq c_k$, $a_k, b_k \neq 0$, for variables $x_1, x_2, x_3$, i.e., $t_1$ and $t_2$ share the variable $x_2$. If $b_1 > 0$ and $a_2 < 0$, one can update $t_2$ with $t_1$ to get a bound on $x_3$ in terms of $x_1$ as follows:

Assume that $b_2 > 0$. It follows that $x_2 \leq (c_1 - a_1 x_1)/b_1$ and $-a_2 x_2 \leq -\frac{a_2}{b_1}(c_1 - a_1 x_1)$, and hence

$$x_3 \leq \frac{1}{b_2}\left(c_2 - \frac{a_2}{b_1}(c_1 - a_1 x_1)\right).$$

In the case that $b_2 < 0$ we get a lower bound on $x_3$ in terms of $x_1$. If $b_1 < 0$ and $a_2 > 0$ one can similarly update $t_2$ with $t_1$. If the signs of $b_1$ and $a_2$ are the same, then no update on $x_3$ is possible.

For a family of inequalities $t_1, \dots, t_k$ ($t_j : a_j x_j + b_j x_{j+1} \leq c_j$) we can do a sequence of updates to the bound on $x_{k+1}$ iff for all $j$, the signs of $b_j$ and $a_{j+1}$ differ. If $x_1 \neq x_{k+1}$, this is called a chain of length $k$. If $x_1 = x_{k+1}$, this is called a cycle of length $k$. A chain or a cycle is called simple if every inequality appears at most once. (This concept was first introduced in [11].)

For example, $t_1 : x_1 - x_2 \leq 0$, $t_2 : x_2 - x_3 \leq 0$, $t_3 : x_3 - x_4 \leq 0$ defines a chain of length $3$ as follows:

$$x_1 \leq x_2 \leq x_3 \leq x_4,$$

a lower bound on $x_4$ in terms of $x_1$.

Similarly, $t_1 : x_1 - x_2 \leq 0$, $t_2 : x_2 - x_3 \leq 0$, $t_3 : x_3 - x_1 \leq -1$ defines a cycle of length $3$,

$$x_1 \leq x_2 \leq x_3 \leq x_1 - 1,$$

which implies $x_1 \leq x_1 - 1$. A cycle results in an inequality of the form $x_1 \leq \alpha x_1 + \beta$ (or $x_1 \geq \alpha x_1 + \beta$). If $\alpha = 1$ and $\beta < 0$ (or $\alpha = 1$ and $\beta > 0$, respectively), then this is an infeasibility certificate for $Ax \leq b$.
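The single update step above can be sketched as one function; given an inequality $a x + b y \leq c$ ($b \neq 0$) and current bounds on $x$, it returns the bound implied on $y$ (the representation with $\pm\infty$ endpoints is our own):

```python
import math

def propagate(a, b, c, x_lo, x_hi):
    """Bound on y implied by a*x + b*y <= c (b != 0), given x_lo <= x <= x_hi.

    b*y <= c - a*x, and the tightest valid bound on -a*x uses x_lo when
    a > 0 and x_hi when a < 0 (the sign condition behind chain updates).
    Returns (y_lo, y_hi); one side is always infinite.
    """
    if a > 0:
        best = a * x_lo          # -a*x <= -a*x_lo
    elif a < 0:
        best = a * x_hi          # -a*x <= -a*x_hi
    else:
        best = 0.0
    rhs = c - best               # upper bound on b*y (may be infinite)
    if math.isinf(rhs):
        return -math.inf, math.inf
    if b > 0:
        return -math.inf, rhs / b
    return rhs / b, math.inf     # b < 0 flips the inequality
```

Iterating `propagate` along $t_1, t_2, t_3$ of the chain example, starting from a bound on $x_1$, reproduces the derived bound on $x_4$.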

For a variable $x_i$ we denote by $\ell_i^*$ ($u_i^*$) the best lower (upper) bound on $x_i$ that can be obtained by considering all chains and cycles of length at most $2n$. If the system is feasible, one can show that $[\ell_i^*, u_i^*] = [x_i^{\min}, x_i^{\max}]$. Recall that the interval $[x_i^{\min}, x_i^{\max}]$ denotes the range of $x_i$; $x_i^{\min}$ ($x_i^{\max}$) is the smallest (largest) value that $x_i$ can take such that the system still has a feasible solution. The direction $[x_i^{\min}, x_i^{\max}] \subseteq [\ell_i^*, u_i^*]$ follows immediately from the definitions. For the other direction, let $x'$ be a point of $Ax \leq b$ such that $x'_i = x_i^{\min}$. Looking at the simple chains and simple cycles of the halfspaces that contain $x'$ on their boundary gives us the lower bound $\ell_i^* \geq x_i^{\min}$. The case for $u_i^*$ and $x_i^{\max}$ is analogous.

On the other hand, if the system is infeasible then two things can happen. If $\ell_i^* > u_i^*$, then the range of $x_i$ is empty and this is a certificate for infeasibility. Such a certificate may not exist in general. This is the case for example if the linear system consists of two independent subsystems, one feasible and one infeasible. Also for the small (infeasible) cycle example from above, $x_1 - x_2 \leq 0$, $x_2 - x_3 \leq 0$, $x_3 - x_1 \leq -1$, we have that $\ell_i^* = -\infty$ and $u_i^* = \infty$ for all $i$.

From now on we will only discuss the case where $Ax \leq b$ is feasible. In Section 4.4 we will discuss how to get an infeasibility certificate using the same algorithm.

Theorem 5.

[1] Given a feasible system $Ax \leq b$ in $LI(2)$, a variable $x_i$ and a value $\lambda$, one can decide in time $O(mn)$ whether $\lambda < x_i^{\min}$, $\lambda = x_i^{\min}$, $x_i^{\min} < \lambda < x_i^{\max}$, $\lambda = x_i^{\max}$, or $\lambda > x_i^{\max}$.

Therefore, in the feasible case this algorithm decides whether $\lambda$ lies in the open range $(x_i^{\min}, x_i^{\max})$, on the boundary of the range, or outside of the range (and on which side).

The proof of Theorem 5 requires many technical details. In the following we will summarize the method and provide an intuitive idea. For detailed proofs refer to [1, 5].

Let $x_i$ be a fixed variable and $\lambda \in \mathbb{R}$. For all $j \in \{1, \dots, n\}$ let us denote by $lo_j$ ($up_j$) the trivial lower (upper) bound on $x_j$ given by the inequalities containing only $x_j$, which may also be infinite.

The following algorithm fixes $x_i = \lambda$ and returns upper and lower bounds on $x_i$ accordingly. In rounds, it updates the lower and upper bounds, denoted by $\ell_j$ and $u_j$ respectively, on all variables, where initially we are given $\ell_j = lo_j$ and $u_j = up_j$.

Algorithm Aspvall-Shiloach ($Ax \leq b$, $x_i$, $\lambda$);
begin
for $j = 1, \dots, n$ do
$\ell_j := lo_j$; $u_j := up_j$;
endfor;
$\ell_i := \lambda$, $u_i := \lambda$;
for $r = 1, \dots, 2n$ do
for every inequality $a x_j + b x_k \leq c$ (in both orientations, $k \neq i$), with $a, b \neq 0$, do
if $a > 0$ and $b > 0$ then $u_k := \min\{u_k, (c - a\ell_j)/b\}$;
elseif $a > 0$ and $b < 0$ then $\ell_k := \max\{\ell_k, (c - a\ell_j)/b\}$;
elseif $a < 0$ and $b > 0$ then $u_k := \min\{u_k, (c - a u_j)/b\}$;
else /* $a < 0$, $b < 0$ */ $\ell_k := \max\{\ell_k, (c - a u_j)/b\}$;
endfor;
endfor;
output $\ell_i(\lambda)$ and $u_i(\lambda)$, the best lower and upper bounds on $x_i$ derived from the final $\ell_j$, $u_j$ and the inequalities containing $x_i$;
end.

The algorithm runs in $2n$ rounds of $O(m)$ steps each, which results in the running time of $O(mn)$.
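A simplified Python sketch of these bound-pushing rounds, reusing `propagate` from above (the data format is our own; this version omits the final gathering step and the slope bookkeeping that the full algorithm needs for Theorem 5, but crossing bounds already certify that no feasible solution with $x_i = \lambda$ exists):

```python
def push_bounds(ineqs, lo, up, i, lam):
    """Run 2n rounds of bound propagation with x_i fixed to lam.

    ineqs: list of (j, k, a, b, c) meaning a*x_j + b*x_k <= c, each
    inequality listed in both orientations, a and b nonzero.
    Returns bound lists (L, U); L[j] > U[j] for some j certifies that
    the system has no feasible solution with x_i = lam.
    """
    n = len(lo)
    L, U = list(lo), list(up)
    L[i] = U[i] = lam                  # fix x_i = lambda
    for _ in range(2 * n):             # 2n rounds of updates [1]
        for (j, k, a, b, c) in ineqs:
            y_lo, y_hi = propagate(a, b, c, L[j], U[j])
            L[k], U[k] = max(L[k], y_lo), min(U[k], y_hi)
    return L, U
```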

Figure 2: Output Aspvall-Shiloach

Since the output of the algorithm depends on $\lambda$, we denote the outputs by $\ell_i(\lambda)$ and $u_i(\lambda)$. By the properties of the algorithm it follows that the function $\ell_i(\lambda)$ is convex, whereas $u_i(\lambda)$ is concave (see Figure 2).

One can show that $x_i^{\min} \leq \lambda \leq x_i^{\max}$ if and only if $\ell_i(\lambda) \leq \lambda \leq u_i(\lambda)$. It follows that $\ell_i(x_i^{\min}) = x_i^{\min}$ and $u_i(x_i^{\max}) = x_i^{\max}$. To distinguish between all cases of Theorem 5, we additionally need the (left and right) slopes of $\ell_i$ and $u_i$. It is not hard to modify the Aspvall-Shiloach algorithm in such a way that it keeps track of the slopes as well. For instance, if $\ell_i(\lambda) = \lambda$ and the slope of $\ell_i$ is smaller than one at $\lambda$, then $\lambda = x_i^{\min}$. In the case where the slope is greater than one, $\lambda = x_i^{\max}$. Using a careful case distinction one can show that for a given $\lambda$, the values of $\ell_i(\lambda)$ and $u_i(\lambda)$ and their (left and right) slopes at $\lambda$ are enough to decide all cases of Theorem 5.

4.2 The Algorithm for the Feasible Case

The rough idea of the algorithm is the following. At step $k$ we want to efficiently find a value $\lambda$ in the current range of $x_k$ and set $x_k = \lambda$ to obtain a system with one variable less. Whenever this is not possible, we eliminate $x_k$ efficiently in a Fourier-Motzkin step. After this first part, we set all variables that were eliminated to values in their current range in the normal Fourier-Motzkin backtracking step.

First Part

The first part of the algorithm runs in $n$ steps. In step $k$ we update two linear systems $I_k$ and $J_k$ from $I_{k-1}$ and $J_{k-1}$ respectively, where initially $I_0 = J_0 = I$. The systems $I_k$ (on the variables not yet fixed or eliminated) and $J_k$ (on the variables not yet fixed) basically encode the same solution set; we will see later why a distinction is necessary. During the execution of the algorithm, $I_k$ is used to do the Fourier-Motzkin elimination method, and $J_k$ is used to run the algorithm of Theorem 5. We denote by $FM(I, x_k)$ the set of inequalities obtained by eliminating $x_k$ from $I$ by using one step of the Fourier-Motzkin elimination method.

For any two variables $x_k$ and $x_j$ in $I_{k-1}$, with $x_j \in N(x_k)$, we represent the set of inequalities containing $x_k$ and $x_j$ (with nonzero coefficients) in the $(x_k, x_j)$-plane as two envelopes, the upper envelope and the lower envelope. The feasible region of $x_k$ and $x_j$ is contained between the envelopes (the pink region of Figure 3) and each envelope can be represented as a piecewise linear function of $x_k$ with breakpoints. The projection of the breakpoints onto the $x_k$-axis is denoted by $B_{k,j}$. If an envelope is unbounded in the $x_k$-direction we add points at infinity to $B_{k,j}$. The range of $x_k$ is hence contained in the interval given by the leftmost and the rightmost point of $B_{k,j}$.

Below we give the pseudocode of the algorithm, followed by its explanation.

Algorithm Hochbaum-Naor ($Ax \leq b$);
begin
$I_0 := J_0 := I$;
for $k = 1, \dots, n$ do
Generate $B_k$, the sorted sequence of the points in
$\bigcup_{x_j \in N(x_k)} B_{k,j}$;
Use Theorem 5 on $J_{k-1}$ to do binary search on $B_k$;
if $\exists\, \lambda \in B_k$ such that $x_k^{\min} \leq \lambda \leq x_k^{\max}$ then
$I_k := (I_{k-1})_{x_k = \lambda}$;
$J_k := (J_{k-1})_{x_k = \lambda}$;
else if $\exists$ consecutive $\alpha, \beta \in B_k$ s.t. $\alpha < x_k^{\min}$ and $x_k^{\max} < \beta$ then
compute $\mathrm{rel}(I_{k-1})$;
$I_k := FM(\mathrm{rel}(I_{k-1}), x_k)$;
$J_k := J_{k-1}$;
else /* system infeasible */
output system infeasible;
endif;
endfor;
end.

Here $\mathrm{rel}(I_{k-1})$ denotes the so-called relevant inequalities in $I_{k-1}$, which we obtain by removing some redundant inequalities. The exact definition follows in the description of the algorithm below.

In step $k$ the algorithm has computed $I_{k-1}$ and $J_{k-1}$, where originally $I_0 = J_0 = I$. For every pair $(x_k, x_j)$ such that $x_j$ is a neighbor of $x_k$ in $I_{k-1}$, i.e., $x_j \in N(x_k)$, it computes the projections of the breakpoints $B_{k,j}$. The union of those points is sorted and denoted by $B_k$. The idea is now to run a binary search on $B_k$ using Theorem 5 (applied to $J_{k-1}$), in the hope of finding a point in the range of $x_k$.

If the algorithm finds a breakpoint $\lambda \in B_k$ such that $x_k^{\min} \leq \lambda \leq x_k^{\max}$, then we set $x_k = \lambda$ (see Figure 3). We set $I_k = (I_{k-1})_{x_k = \lambda}$ and $J_k = (J_{k-1})_{x_k = \lambda}$.

Figure 3: First case Hochbaum-Naor, example with 3 variables

If there is no such $\lambda$, the algorithm finds two consecutive breakpoints $\alpha, \beta \in B_k$ such that $\alpha < x_k^{\min}$ and $x_k^{\max} < \beta$. In that case, for any neighbor $x_j$ of $x_k$ the number of inequalities containing both is reduced to at most two (bold lines of Figure 4), the ones that define the upper and lower envelope respectively on the interval $[\alpha, \beta]$ (blue part of Figure 4). This can be done since $[x_k^{\min}, x_k^{\max}] \subseteq [\alpha, \beta]$ and therefore all other inequalities are redundant there and can be removed. We denote the set of inequalities obtained after the removal of the redundant ones by $\mathrm{rel}(I_{k-1})$. The normal Fourier-Motzkin elimination is applied on $\mathrm{rel}(I_{k-1})$ to obtain $I_k$. By the above discussion $\mathrm{rel}(I_{k-1})$ has the same solution space as $I_{k-1}$. As the number of inequalities between $x_k$ and any neighbor is reduced to at most two, the algorithm adds at most four inequalities between any pair of neighbors of $x_k$. This prevents the usual quadratic blowup of the Fourier-Motzkin method. The system $J_{k-1}$ does not need to be updated, i.e., $J_k = J_{k-1}$.

Figure 4: Second case Hochbaum-Naor, example with 3 variables

We observe that the size of $J_k$ never exceeds $m$ (substituting a value for $x_k$ does not increase the number of inequalities), which guarantees the running time of the binary search to be $O(mn \log m)$ in each step. The size of $I_k$, however, can grow by up to $O(n^2)$ inequalities in a single step (at most four for every pair of neighbors of $x_k$); hence running Theorem 5 on $I_k$ would not guarantee this running time in the case where $m$ is small compared to the size of $I_k$.
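The binary search itself only needs the oracle of Theorem 5 as a black box. A Python sketch, collapsing the five cases of Theorem 5 into three (names are ours; we assume the range of $x_k$ lies within the span of the breakpoints, which holds after the points at infinity are added):

```python
def search_breakpoints(bps, classify):
    """Binary search over the sorted breakpoint projections bps.

    classify(lam) abstracts the oracle of Theorem 5: it returns -1 if
    lam lies below the (closed) range of x_k, 0 if inside, +1 if above.
    Returns either a breakpoint inside the range, or the two consecutive
    breakpoints strictly enclosing it.
    """
    lo, hi = 0, len(bps) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        side = classify(bps[mid])
        if side == 0:
            return ("inside", bps[mid])
        if side < 0:
            lo = mid + 1          # range lies to the right
        else:
            hi = mid - 1          # range lies to the left
    return ("between", bps[hi], bps[lo])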

Second Part

The second part of the algorithm is now the normal backtracking of the Fourier-Motzkin method. Assume that the variables that were eliminated (in the elseif-step) are $x_{i_1}, \dots, x_{i_t}$, where $i_1 < \dots < i_t$, and that all other variables were fixed in the if-step. At the end of part one we are left with a feasible system on the single variable $x_{i_t}$: all fixed variables have been substituted by their values, and $x_{i_1}, \dots, x_{i_{t-1}}$ have been eliminated. By the properties of the Fourier-Motzkin elimination, this system is feasible and the range of $x_{i_t}$ is the same as its range in $I_{i_t - 1}$. Now choose a feasible value for $x_{i_t}$ and continue inductively by backtracking through $x_{i_{t-1}}, \dots, x_{i_1}$. The geometric interpretation is similar to the first part: for each variable we pick a value in its current range.

4.3 Discussion of the Algorithm

We briefly discuss the main points of the proof of Theorem 4 (for more detail see [5]).

Proof sketch of Theorem 4 for the feasible case.

Building and updating all envelopes takes time $O(m \log m)$ per step, hence $O(mn \log m)$ in total. Since for all $k$ the sizes of $J_k$ and $B_k$ are $O(m)$, in each step the binary search runs Theorem 5 $O(\log m)$ times, where each evaluation takes time $O(mn)$. It follows that the first part of the algorithm takes time $O(mn^2 \log m)$. For the second part consider the step where we find a solution for a variable $x_k$ in the backtracking step in $\mathrm{rel}(I_{k-1})$. Then $x_k$ shares at most two inequalities with each of its neighbors, therefore the whole second part only takes time $O(n^2)$.

During the whole algorithm, a variable $x_k$ is set to some value $\lambda$ if and only if $\lambda$ is in the current bounds $[x_k^{\min}, x_k^{\max}]$. Therefore, in the feasible case, it correctly outputs a feasible point of $Ax \leq b$. ∎

4.4 Discussion of the Algorithm in the Infeasible Case

In the previous section we showed that if $Ax \leq b$ is feasible, then the Hochbaum-Naor method always correctly outputs a feasible point. We now show that in the infeasible case, infeasibility is always detected, which completes the proof of Theorem 4.

Proof sketch of Theorem 4 for the infeasible case.

Assume that $Ax \leq b$ is infeasible. We run the first part of the algorithm as in the feasible case. If at some point during the binary search we detect a contradiction in the form of crossing bounds ($\ell_j > u_j$), this is a certificate for infeasibility and we are done. It is however possible that in every call of the algorithm of Theorem 5 we get some (wrong) output $\lambda < x_k^{\min}$, $x_k^{\min} \leq \lambda \leq x_k^{\max}$, or $\lambda > x_k^{\max}$. In that case the system we are left with at the end of the first part is an infeasible system on a single variable. This follows since the Fourier-Motzkin elimination is feasibility preserving, and setting some variables to fixed values in an infeasible system keeps the system infeasible. Detecting infeasibility in a system with one variable can trivially be done in time linear in the number of constraints. It follows that infeasibility is always detected, which concludes the proof of Theorem 4. ∎
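For completeness, this one-variable check can be sketched as follows (the representation is our own):

```python
import math

def one_variable_infeasible(ineqs):
    """Check infeasibility of a system of inequalities a*x <= c in a
    single variable, in time linear in the number of constraints."""
    lo, hi = -math.inf, math.inf
    for a, c in ineqs:
        if a > 0:
            hi = min(hi, c / a)
        elif a < 0:
            lo = max(lo, c / a)
        elif c < 0:              # 0 <= c fails
            return True
    return lo > hi
```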

5 Modification of the Hochbaum-Naor Method

We now show how to modify the Hochbaum-Naor method from Section 4 such that it decides full-dimensionality of the problem and in the full-dimensional case outputs an interior point (see Theorem 1). For this we need some preparatory lemmas.

Lemma 6.

Let $Ax \leq b$ be feasible and let $x_i^{\min} < \lambda < x_i^{\max}$. Then $(x_1^*, \dots, x_{i-1}^*, \lambda, x_{i+1}^*, \dots, x_n^*)$ is an interior point solution of $I$ if and only if $(x_1^*, \dots, x_{i-1}^*, x_{i+1}^*, \dots, x_n^*)$ is an interior point solution of $I_{x_i = \lambda}$, the system obtained by substituting $x_i = \lambda$.

Proof.

Let $x^* = (x_1^*, \dots, \lambda, \dots, x_n^*)$ be an interior point solution of $I$. Then by definition $Ax^* < b$, and obviously all inequalities of $I_{x_i = \lambda}$ are satisfied strictly by $(x_1^*, \dots, x_{i-1}^*, x_{i+1}^*, \dots, x_n^*)$. On the other hand, let $(x_1^*, \dots, x_{i-1}^*, x_{i+1}^*, \dots, x_n^*)$ be an interior point solution of $I_{x_i = \lambda}$. Then $(x_1^*, \dots, \lambda, \dots, x_n^*)$ satisfies any inequality containing some variable other than $x_i$ strictly. The only inequalities that might be satisfied with equality are the ones containing only $x_i$, but this is a contradiction to $x_i^{\min} < \lambda < x_i^{\max}$. ∎

Lemma 7.

In the Fourier-Motzkin method, the system $S_{k+1}$ has an interior point solution if $S_k$ has one. Moreover, if an interior point solution of the original system exists, it can be obtained in the running time of the Fourier-Motzkin algorithm.

Proof.

The first part follows by Lemma 6. For the second part it is enough to consider a slight variant of the Fourier-Motzkin elimination. Instead of running the algorithm on a system $Ax \leq b$, we run it on $Ax < b$. In each step the inequalities obtained are of the form $g < f$ instead of $g \leq f$. By induction, using Lemma 6, one can see that finding a solution using this variant is equivalent to finding an interior point of $Ax \leq b$. ∎

Proof of Theorem 1.

Assume that $Ax \leq b$ is feasible. We run the Hochbaum-Naor algorithm almost in the same way as described in Section 4. The only difference is in the if-statement of the algorithm. The original algorithm distinguishes between the cases $\lambda < x_k^{\min}$, $x_k^{\min} \leq \lambda \leq x_k^{\max}$, and $\lambda > x_k^{\max}$ of Theorem 5. Our algorithm however distinguishes between $\lambda \leq x_k^{\min}$, $x_k^{\min} < \lambda < x_k^{\max}$, and $\lambda \geq x_k^{\max}$.

In the first case we only set $x_k = \lambda$ if there exists a breakpoint $\lambda \in B_k$ such that $x_k^{\min} < \lambda < x_k^{\max}$. That is, we only fix $x_k$ to some value $\lambda$ if $\lambda$ is in the open range $(x_k^{\min}, x_k^{\max})$ (in the original theorem we considered the closed range). The second case accordingly changes to finding an interval $[\alpha, \beta]$ (with $\alpha, \beta \in B_k$ consecutive) such that $\alpha \leq x_k^{\min}$ and $x_k^{\max} \leq \beta$ (originally $\alpha < x_k^{\min}$ and $x_k^{\max} < \beta$). We see that in this case, the number of inequalities on each edge adjacent to $x_k$ is still reduced to at most two.

As Theorem 5 distinguishes the cases $\lambda < x_k^{\min}$, $\lambda = x_k^{\min}$, $x_k^{\min} < \lambda < x_k^{\max}$, $\lambda = x_k^{\max}$, $\lambda > x_k^{\max}$, and a certificate for infeasibility in time $O(mn)$, the running time remains the same.

It remains to show that the modified algorithm detects full-dimensionality and in the full-dimensional case outputs an interior point.

The discussion of the case where $Ax \leq b$ is infeasible is the same as in the proof of Theorem 4. Hence assume that the system is feasible.

Let $Ax \leq b$ be full-dimensional. By Lemma 6, after the first part of the algorithm, $I_n$ (and $J_n$) are associated with a system of inequalities whose interior point solutions can be extended to an interior point solution of $Ax \leq b$. The interior point solution of $Ax \leq b$ can now be found in the backtracking step using Lemma 7. Assume such a point cannot be found. Then by Lemma 7 there is no interior point of $Ax \leq b$, which is a contradiction to full-dimensionality.

Let $Ax \leq b$ be feasible but non-full-dimensional. Then at some point of the backtracking the algorithm finds an empty open interval for the current bounds of some variable. Suppose this does not happen; then by Lemma 6 the algorithm finds an interior point, which is a contradiction to non-full-dimensionality. ∎

6 The Non-Full-Dimensional Case

In the non-full-dimensional case redundancies are dependent on each other, meaning that a redundant constraint can become nonredundant after the removal of another redundant constraint. The problem is therefore to find a minimal subsystem of the constraints that defines the same feasible region.

Clarkson's algorithm can be extended for redundancy removal in the non-full-dimensional case as follows. In a preprocessing step one can find the dimension $k$ of the system $Ax \leq b$ by solving at most $m$ linear programs [4]. Of all the inequalities that are forced to equality, we can find a set of $n - k$ equalities that defines the $k$-dimensional affine space where the feasible region lies in. Let us denote the remaining system of inequalities (the ones not forced to equality) by $A' x \leq b'$. One can now rotate the system such that this affine space lies in $\mathbb{R}^k$. Clarkson's algorithm can now be applied in $\mathbb{R}^k$, where the constraints are the intersections of the rotated system $A' x \leq b'$ with $\mathbb{R}^k$. After the preprocessing the running time is hence $O(m \cdot LP(s,k) + msk)$.

In the case of $LI(2)$ we observe that such a rotation may destroy the structure of two variables per inequality. It turns out that we are still able to match Clarkson's running time, using substitution of variables.

Theorem 8.

Let $Ax \leq b$ be a $k$-dimensional system in $LI(2)$, for $k < n$. Then, given a relative interior point solution of $Ax \leq b$, all redundancies can be detected in time $O(msk^2 \log s + mn^2)$.

The term $O(mn^2)$ comes from the Gaussian elimination, which is dominated by the preprocessing time needed to find the relative interior point (see Proposition 10). Note that the typically larger term $O(msk^2 \log s)$ depends on the dimension $k$ of the polytope and not on $n$.

We need the following observation for the proof of Theorem 8.

Observation 9.

Let $Ax \leq b$ be in $LI(2)$ and let $a x_p + b x_q \leq c$, $b \neq 0$, be an inequality of the system that is forced to equality, i.e., $a x_p + b x_q = c$ for all solutions $x$. Let $A' x' \leq b'$ be the system obtained by substituting $x_q = (c - a x_p)/b$. Then the following holds.

  • $A' x' \leq b'$ is still in $LI(2)$.

  • A constraint is redundant in $Ax \leq b$ if and only if it is redundant in the system $A' x' \leq b'$.

Proof of Theorem 8.

Given a relative interior point $x^*$, one can find $E$, the subsystem of $Ax \leq b$ that is forced to equality, in time $O(m)$. The remaining system is denoted by $S$. Finding a minimal subsystem of $E$ with $n - k$ linearly independent equalities that defines the $k$-dimensional affine space containing the feasible region takes time $O(mn^2)$ using the Gaussian elimination. Using these equalities we can substitute $n - k$ variables of $S$ in the same fashion as explained in Observation 9. Hence we get a $k$-dimensional representation of $S$ which is in $LI(2)$.

We can now run the algorithm given by Theorem 2 on the system obtained from $S$ after these substitutions. The detected nonredundant constraints together with $E$ give us a minimal set of nonredundant inequalities. ∎
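The substitution of Observation 9 can be sketched as follows (the dictionary representation of constraints is our own). Note that a constraint containing $x_q$ has at most one other variable, so after replacing $x_q$ by $(c - a x_p)/b$ it still has at most two variables:

```python
def substitute_equality(ineqs, p, q, a, b, c):
    """Substitute x_q = (c - a*x_p)/b (from the forced equality
    a*x_p + b*x_q = c with b != 0) into LI(2) constraints.

    Each constraint is ({var: coeff}, rhs); the result is again in LI(2).
    """
    out = []
    for coeffs, rhs in ineqs:
        coeffs = dict(coeffs)
        if q in coeffs:
            cq = coeffs.pop(q)
            # cq*x_q = (cq*c)/b - (cq*a/b)*x_p
            new_p = coeffs.get(p, 0.0) - cq * a / b
            rhs = rhs - cq * c / b
            if new_p != 0.0:
                coeffs[p] = new_p
            else:
                coeffs.pop(p, None)   # coefficient of x_p cancelled
        out.append((coeffs, rhs))
    return out
```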

The following proposition shows that finding a relative interior point can also be done in strongly polynomial time.

Proposition 10.

Given a system $Ax \leq b$ in $LI(2)$, one can find a relative interior point of $Ax \leq b$ or a certificate for infeasibility in time $O(mn^2 \log m)$.

Proof.

Using a similar argument as in Lemma 6, one can show the following.

  • If $x_i^{\min} < x_i^{\max}$ and $x_i^{\min} < \lambda < x_i^{\max}$, then $(x^*, \lambda)$ is a relative interior point of $I$ if and only if $x^*$ is a relative interior point of $I_{x_i = \lambda}$.

  • If $x_i^{\min} = x_i^{\max} = \lambda$, then $(x^*, \lambda)$ is a relative interior point of $I$ if and only if $x^*$ is a relative interior point of $I_{x_i = \lambda}$.

The algorithm for finding a relative interior point is very similar to the modified Hochbaum-Naor method. The first part runs equivalently. In the second part, the backtracking, if at some point $x_k^{\min} = x_k^{\max}$, we set $x_k = x_k^{\min}$. Infeasibility is detected by the same argument as in Theorem 4. Correctness follows from the above argument; the running time is the same as in Theorem 1. ∎

Corollary 11.

The dimension of $Ax \leq b$ or a certificate for infeasibility can be found in time $O(mn^2 \log m)$, i.e., in the same running time as finding a feasible point solution or a certificate for infeasibility.

Proof.

Consider the algorithm of the proof of Proposition 10. Since we know that infeasibility will be detected, assume that $Ax \leq b$ is feasible. Let us denote the current polytope by $P$, where initially $P$ is the polytope defined by $Ax \leq b$. Every time some $x_k$ is set to a value in the open range $(x_k^{\min}, x_k^{\max})$, the dimension of the current polytope decreases by $1$, as we intersect it with a hyperplane not containing $P$. If $x_k^{\min} = x_k^{\max}$ and we set $x_k = x_k^{\min}$, then the dimension of the current polytope stays the same, as we intersect it with a hyperplane containing $P$. Since after setting all variables to some value we end up with a point (a polytope of dimension $0$), the dimension of $Ax \leq b$ is exactly the number of times we set some $x_k$ to a value in the open range $(x_k^{\min}, x_k^{\max})$. ∎

7 Acknowledgments

The authors would like to thank Seffi Naor for insights into the problem. Moreover we would like to thank Jerri Nummenpalo and Luis Barba for interesting discussions. We are especially grateful to Jerri Nummenpalo for making us aware of some important literature and helping us during the writeup of this paper.

References

  • [1] B. Aspvall and Y. Shiloach. A polynomial time algorithm for solving systems of linear inequalities with two variables per inequality. SIAM Journal on Computing, 9:827–845, 1980.
  • [2] K. L. Clarkson. More output-sensitive geometric algorithms. In Proc. 35th Annu. IEEE Sympos. Found. Comput. Sci., pages 695–702, 1994.
  • [3] G. B. Dantzig. Linear Programming and Extensions. Princeton University Press, Princeton, NJ, 1963.
  • [4] K. Fukuda. Lecture notes: Polyhedral computation. ETH Zürich, Switzerland, 2016. https://www.inf.ethz.ch/personal/fukudak/lect/pclect/notes2016/PolyComp2016.pdf.
  • [5] D. S. Hochbaum and J. Naor. Simple and fast algorithms for linear and integer programs with two variables per inequality. SIAM Journal on Computing, 23:1179–1192, 1994.
  • [6] N. Karmarkar. A new polynomial time algorithm for linear programming. Combinatorica, 4(4):373–395, 1984.
  • [7] L. Khachiyan. A polynomial algorithm in linear programming. Doklady Akademiia Nauk SSSR, 244:1093–1096, 1979. (Translated in Soviet Mathematics Doklady, 20:191–194, 1979.)
  • [8] V. Klee and G. J. Minty. How good is the simplex algorithm? In O. Shisha, editor, Inequalities III, pages 159–175. Academic Press, 1972.
  • [9] N. Megiddo. Towards a genuinely polynomial algorithm for linear programming. SIAM Journal on Computing, 12:347–353, 1983.
  • [10] A. Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons, Chichester, 1986.
  • [11] R. Shostak. Deciding linear inequalities by computing loop residues. Journal of the Association for Computing Machinery, 28:769–779, 1981.