Factored solution of nonlinear equation systems


Abstract

This article generalizes a recently introduced procedure to solve nonlinear systems of equations, radically departing from the conventional Newton-Raphson scheme. The original nonlinear system is first unfolded into three simpler components: 1) an underdetermined linear system; 2) a one-to-one nonlinear mapping with explicit inverse; and 3) an overdetermined linear system. Then, instead of solving such an augmented system at once, a two-step procedure is proposed in which two equation systems are solved at each iteration, one of them involving the same symmetric matrix throughout the solution process. The resulting factored algorithm converges faster than Newton’s method and, if carefully implemented, can be computationally competitive for large-scale systems. It can also be advantageous for dealing with infeasible cases.

Keywords: Nonlinear equations, Newton-Raphson method, Factored solution

I Introduction

Efficiently and reliably solving nonlinear equations is of paramount importance in physics, engineering, operational research and many other disciplines in which the need arises to build detailed mathematical models of real-world systems, all of them nonlinear in nature to a certain extent [1, 2, 3, 4]. Moreover, large-scale systems are always sparse, which means that the total number of additive terms in a coupled set of n nonlinear functions in n variables is roughly O(n).

According to John Rice, who coined the term mathematical software in 1969, “solving systems of nonlinear equations is perhaps the most difficult problem in all of numerical computations” [5]. This surely explains why so many people have devoted so much effort to this problem for so long.

Since the 17th century, the reference method for the solution of nonlinear systems has been the Newton-Raphson (NR) iterative procedure, which locally approximates the nonlinear functions by their first-order Taylor-series expansions. Its tremendous success stems from its proven quadratic convergence (when a sufficiently good initial guess is available), its moderate computational expense, provided the Jacobian sparsity is fully exploited when dealing with large systems, and its broad applicability to most cases in which the nonlinear functions can be analytically expressed [6].

For large-scale systems, by far the most time-consuming step lies in the computation of the Jacobian and its factors [7]. This has motivated the development of quasi-Newton methods, which make use of approximate Jacobians, usually at the expense of more iterations [8]. Well-known examples in this category are the chord method, where the Jacobian is computed only once, and the secant method [9], which approximates the Jacobian through finite differences (no explicit derivatives are required). Broyden's method is a generalization of the secant method which carries out rank-one updates to the initial Jacobian [10].

Another category of related methods, denoted as inexact Newton methods, performs approximate computations of Newton steps, frequently in combination with pre-conditioners [11].

More sophisticated higher-order iterative methods also exist, achieving super-quadratic convergence near the solution at the expense of more computational effort. These include Halley’s cubic method.

When the initial guess is not close enough to the solution, or the functions to be solved exhibit acute non-convexities in the region of interest, the NR method converges slowly or may quickly diverge. This has motivated the development of so-called globally convergent algorithms which, for any initial guess, either converge to a root or fail to do so in a small number of well-defined ways [6, 11]. Examples of globally convergent algorithms are line-search methods; continuation/homotopy methods [12, 13], such as Levenberg-Marquardt methods, which circumvent Newton's method failure caused by a singular or near-singular Jacobian matrix; and trust-region methods.

Other miscellaneous methods, more or less related to Newton’s method, are based on polynomial approximations (e.g. Chebyshev’s method), solution of ordinary differential equations (e.g. Davidenko’s method, a differential form of Newton’s method) or decomposition schemes (e.g. Adomian’s method [14]).

While each individual in such a plethora of algorithms may be most appropriate for a particular niche application, the standard NR method still remains the best general-purpose candidate, trading off simplicity and reliability, particularly when reasonable initial guesses can be made [15, 16].

A feature shared by most existing methods for solving nonlinear equations is that the structure of the original system is kept intact throughout the solution process. In other words, the algorithms are applied without any kind of preliminary transformation, even though it has long been known that by previously rearranging nonlinear systems better convergence can be achieved. According to [3] (pp. 174-176), “the general idea is that a global nonlinear transformation may create an algebraically equivalent system on which Newton’s method does better because the new system is more linear. Unfortunately, there is no general way to apply this idea; its application will be problem-specific”.

This article explores such an idea through a new promising perspective, based on unfolding the nonlinear system to be solved by identifying distinct nonlinear terms, each deemed a new variable. This leads to an augmented system composed of two sets of linear equations, which are coupled through a one-to-one nonlinear mapping with diagonal Jacobian. The resulting procedure involves two steps at each iteration: 1) solution of a linear system with symmetric coefficient matrix; 2) computation of a Newton-like step.

The basic idea of factoring the solution of complex nonlinear equations into two simpler problems, linear or nonlinear, was originally introduced in the context of hierarchical state estimation [18], where the main goal was to geographically decompose large-scale least-squares problems. Later on, it has been successfully applied to nonlinear equations with a very particular network structure, such as those arising in power systems analysis and operation [19, 20].

In this work, the factored solution method, initially derived for overdetermined equation systems, is conceptually re-elaborated from scratch and generalized, so that it can be applied to efficiently solving broader classes of nonlinear systems of equations.

The paper is structured as follows: in the next section the proposed two-stage solution approach is presented. Then, sections III and IV introduce canonical and extended forms of nonlinear functions to which the proposed method can be applied. Next, section V discusses how, with minor modifications, the proposed method can reach different solution points from the same initial guess. Section VI is devoted to the analysis of cases which are infeasible in the real domain, where convergence to complex values takes place. Section VII briefly considers the possibility of safely reaching singular points while section VIII closes the paper by showing the behaviour of the proposed method when applied to large-scale test cases.

II Factored solution of nonlinear systems

Consider a general nonlinear system, written in compact form as follows:

  f(x) = p                                        (1)

where p is a specified n-vector and x is the unknown n-vector.

By applying the NR iterative scheme, a solution can be obtained from an initial guess, x_0, by successively solving

  F_k Δx_k = Δp_k ,   x_{k+1} = x_k + Δx_k        (2)

where subindex k denotes the iteration counter, F_k = ∂f/∂x, computed at the current point x_k, is the Jacobian of f(·), and Δp_k = p − f(x_k) is the mismatch or residual vector.

In essence, the new method assumes that suitable auxiliary vectors, u and y, both of size m ≥ n, can be introduced so that the original system (1) can be unfolded into the following simpler problems:

  E y = p                                         (3)
  y = h(u)                                        (4)
  u = C x                                         (5)

where E and C are rectangular matrices of sizes n×m and m×n, respectively, and vector h(·) comprises a set of m one-to-one nonlinear functions, also known as a diagonal nonlinear mapping [17],

  y_i = h_i(u_i) ,   i = 1, …, m                  (6)

each with closed-form inverse,

  u_i = h_i^{-1}(y_i)                             (7)

By eliminating vector u the above augmented system can also be written in more compact form:

  E y = p                                         (8)
  y = h(Cx)                                       (9)

Notice that (8) is a linear underdetermined system whereas (9) is an overdetermined one. Among the infinitely many solutions to (8), only those exactly satisfying (9) constitute solutions to the original nonlinear system (1). As explained in section III, many systems can be found in practice where such a factorization is possible, the aim being to reduce m as much as possible. In the limit, if a vector y of size m = n can be found, then solutions to the original nonlinear system will be obtained directly (i.e., without iterating) by sequentially solving the unfolded system of equations (3)-(5). But this case will arise only when the set of equations comprises just n distinct nonlinear terms. Therefore, the need to iterate in the factored method stems from an excess of nonlinear terms (m > n) rather than from the nonlinear nature of the original problem.
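To make the unfolding concrete, here is a minimal single-equation illustration of ours (not one of the paper's own examples): the scalar equation x² + e^x = p unfolds with m = 2 auxiliary variables y_1 = x², y_2 = e^x and u_1 = u_2 = x, so that

  E = [1 1] ,   h(u) = (u_1², e^{u_2}) ,   C = [1 1]^T

and E h(Cx) = p reproduces the original equation, with explicit inverse h^{-1}(y) = (√y_1, ln y_2). Since n = 1 < m = 2 here, the factored method must iterate.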

Obviously, when the remaining auxiliary vector y is eliminated the original 'folded' system is obtained in factored form:

  E h(Cx) = p                                     (10)

This leads to an equivalent expression for the Newton step (2),

  (E H_k C) Δx_k = Δp_k                           (11)

where H_k = ∂h/∂u, computed at u_k = C x_k, is the trivial (diagonal) Jacobian of h(·), so that F_k = E H_k C by the chain rule. Whether (11) offers any computational advantage over (2) will mainly depend on the complexity involved in the computation of f(·) and its Jacobian F_k, compared to their much simpler counterparts h(·) and H_k (plus the required matrix products), but the convergence pattern will be exactly the same in both cases.

Yet the augmented system (8)-(9), arising when the factored form is considered, opens the door to alternative solution schemes which may outperform the ‘vanilla’ NR approach. The scheme developed in the sequel begins by considering the solution of the factored model as a linearly-constrained Least Squares (LS) problem. Then, an auxiliary least-distance problem is formulated aimed at providing improved linearization points at each iteration.

II-A Solution of the factored model as an optimization problem

Finding a solution to the unfolded problem (8)-(9) can be shown to be equivalent to solving the following equality-constrained LS problem,

  minimize    (1/2) [y − h(Cx)]^T [y − h(Cx)]
  subject to  E y = p                             (12)

which reduces to minimizing the associated Lagrangian function,

  L(x, y, λ) = (1/2) [y − h(Cx)]^T [y − h(Cx)] − λ^T (E y − p)      (13)

The first-order optimality conditions (FOOC) give rise to the following system:

  ∂L/∂y = y − h(Cx) − E^T λ = 0
  ∂L/∂x = −C^T H^T [y − h(Cx)] = 0                (14)
  ∂L/∂λ = −(E y − p) = 0

Given an estimate of the solution point, x_k, we can choose y_k in such a way that (9) is satisfied, i.e.,

  y_k = h(C x_k) = h(u_k)                         (15)

Then, linearizing h(·) around u_k,

  h(u) ≈ y_k + H_k (u − u_k)

allows (14) to be written in incremental form,

  [ I           −H_k C            −E^T ] [ Δy ]   [  0   ]
  [ −C^T H_k     C^T H_k H_k^T C    0  ] [ Δx ] = [  0   ]      (16)
  [ E             0                 0  ] [ λ  ]   [ Δp_k ]

where H_k H_k^T is a diagonal matrix with positive elements.

From the above symmetric system, Δy can be easily eliminated, yielding:

  [ E E^T          E H_k C ] [ λ  ]   [ Δp_k ]
  [ C^T H_k E^T       0    ] [ Δx ] = [  0   ]                 (17)

or, in more compact form,

  [ E E^T    F_k ] [ λ  ]   [ Δp_k ]
  [ F_k^T     0  ] [ Δx ] = [  0   ]                           (18)

If the Jacobian F_k = E H_k C remains nonsingular at the solution point, then λ = 0 and the above system reduces to (11), as expected. Therefore, the Lagrangian-based augmented formulation has the same convergence pattern and converges to the same point as the conventional NR approach.

II-B Auxiliary least-distance problem

The driving idea behind the proposed procedure is to linearize (14) around a point which, being closer to the solution than y_k, can be obtained in a straightforward manner. The resulting scheme will be advantageous over the NR approach if the extra cost of computing the improved linearization point is offset by the convergence enhancement obtained, if any.

For this purpose, given the latest solution estimate, x_k, we consider a simpler auxiliary optimization problem, as follows:

  minimize    (1/2) (ỹ − y_k)^T (ỹ − y_k)
  subject to  E ỹ = p                             (19)

with y_k = h(C x_k). The associated Lagrangian function is,

  L_a(ỹ, μ) = (1/2) (ỹ − y_k)^T (ỹ − y_k) − μ^T (E ỹ − p)      (20)

which leads to the following linear FOOCs:

  ỹ − y_k − E^T μ = 0
  E ỹ − p = 0                                     (21)

where μ is the corresponding vector of Lagrange multipliers. Besides being linear, the unknown x is missing in this simpler problem, which can be solved in two phases. First, μ is computed by solving:

  (E E^T) μ = p − E y_k = Δp_k                    (22)

Then, ỹ_k is simply obtained from

  ỹ_k = y_k + E^T μ                               (23)

By definition, ỹ_k is as close as possible to y_k while satisfying (8).
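In matrix terms, this step admits a very compact implementation. The following sketch of ours (SciPy-based, dense matrices, real arithmetic; not code from the paper) computes ỹ_k while reusing the Cholesky factor of E E^T across iterations:

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def least_distance_step(E, y_k, p, chol=None):
    # Project y_k onto the affine subspace {y : E y = p}:
    #   (E E^T) mu = p - E y_k ,   y_tilde = y_k + E^T mu
    if chol is None:
        chol = cho_factor(E @ E.T)   # symmetric matrix, factorized only once
    mu = cho_solve(chol, p - E @ y_k)
    return y_k + E.T @ mu, chol

(For the infeasible cases discussed in section VI, a complex-capable solve must be used instead, since y_k may leave the real axis.)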

Next, the FOOCs of the original problem (14) are linearized around ỹ_k and the associated ũ_k = h^{-1}(ỹ_k), hopefully closer to the solution than y_k and u_k,

  h(u) ≈ ỹ_k + H̃_k (u − ũ_k)

leading to,

  [ I           −H̃_k C            −E^T ] [ Δy ]   [ −H̃_k ũ_k          ]
  [ −C^T H̃_k    C^T H̃_k H̃_k^T C    0  ] [ x  ] = [ C^T H̃_k H̃_k^T ũ_k ]      (24)
  [ E             0                 0  ] [ λ  ]   [  0                 ]

Once again, eliminating Δy yields,

  [ E E^T          E H̃_k C ] [ λ ]   [ E H̃_k ũ_k ]
  [ C^T H̃_k E^T       0    ] [ x ] = [     0      ]            (25)

or, in more compact form,

  [ E E^T    F̃_k ] [ λ ]   [ E H̃_k ũ_k ]
  [ F̃_k^T     0  ] [ x ] = [     0      ]                      (26)

where F̃_k = E H̃_k C is the updated Jacobian. The above linear system, to be compared with (18), provides the next value x_{k+1}, which replaces x_k in the next iteration. Moreover, as happened with the solution of (18), so long as the Jacobian F̃_k remains nonsingular, λ = 0 and x_{k+1} can be obtained with less computational effort from:

  (E H̃_k C) x_{k+1} = E H̃_k ũ_k                  (27)

II-C Two-step solution procedure

The two-step algorithm, arising from the proposed factored representation of nonlinear systems, can then be formally stated as follows:

Step 0: Initialization. Initialize the iteration counter (k = 0). Let x_0 be an initial guess and y_0 = h(C x_0).

Step 1: Auxiliary least-distance problem. First, obtain vector μ_k by solving the system,

  (E E^T) μ_k = p − E y_k                         (28)

and then compute ỹ_k and ũ_k from,

  ỹ_k = y_k + E^T μ_k ,   ũ_k = h^{-1}(ỹ_k)       (29)

Step 2: Non-incremental computation of x_{k+1}. Solve for x_{k+1} the system,

  (E H̃_k C) x_{k+1} = E H̃_k ũ_k                  (30)

where H̃_k is the factored Jacobian computed at ũ_k. Then update y_{k+1} = h(C x_{k+1}). If ||Δx_k|| (or, alternatively, ||Δp_k||) is small enough, then stop. Otherwise set k = k + 1 and go back to Step 1.

As explained above, step 1 constitutes a linearly-constrained least-distance problem, yielding a vector ỹ_k which, being as close as possible to y_k, satisfies (8). As a feasible solution is approached, ỹ_k → y_k and ũ_k → u_k. Notice that the sparse symmetric matrix E E^T needs to be factorized only once, by means of the efficient Cholesky algorithm (in fact, only its upper triangle is needed). Therefore, the computational effort involved in step 1 is very low if the Cholesky triangular factors are saved during the first iteration. Moreover, the vector Δp_k is available from the previous iteration if it is computed to check for convergence.

It is worth noting that step 2 is expressed in non-incremental form. It directly provides improved values for x with less computational effort than that of a conventional Newton step (11), which may well offset the moderate extra burden of step 1.
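Putting the two steps together, the following dense-matrix sketch of ours implements the whole iteration (sparsity, factor reuse and safeguards omitted for clarity; the usage example assumes Example 2's function is f(x) = sin x + cos x, a reconstruction made in this edit):

import numpy as np

def factored_solve(E, C, h, h_inv, H_diag, p, x0, tol=1e-8, max_iter=100):
    """Two-step factored iteration: step 1 projects y_k onto {y : E y = p},
    step 2 solves the non-incremental system (E H~ C) x = E H~ u~ of eq. (30)."""
    x = np.array(x0, dtype=complex)  # complex dtype lets iterates leave the real axis
    G = E @ E.T                      # constant symmetric matrix (factorize once in practice)
    for k in range(1, max_iter + 1):
        y = h(C @ x)                                  # y_k = h(u_k), with u_k = C x_k
        mu = np.linalg.solve(G, p - E @ y)            # step 1a: multipliers
        u_t = h_inv(y + E.T @ mu)                     # step 1b: y~ -> u~ = h^{-1}(y~)
        Ht = np.diag(H_diag(u_t))                     # diagonal factored Jacobian at u~
        x_new = np.linalg.solve(E @ Ht @ C, E @ Ht @ u_t)   # step 2
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("factored iteration did not converge")

# Example 2 as reconstructed in section III: sin x + cos x = 1.4
E = np.array([[1.0, 1.0]])
C = np.array([[1.0], [1.0]])
h      = lambda u: np.array([np.sin(u[0]), np.cos(u[1])])
h_inv  = lambda y: np.array([np.arcsin(y[0]), np.arccos(y[1])])  # principal branches
H_diag = lambda u: np.array([np.cos(u[0]), -np.sin(u[1])])
x_sol, iters = factored_solve(E, C, h, h_inv, H_diag, np.array([1.4]), [10.0])
print(x_sol.real, iters)   # converges to one of the roots 0.6435 / 0.9273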

In [19, 20] the two-step procedure based on the factored model was originally developed from a different perspective, related to the solution of overdetermined systems in state estimation (i.e., filtering out noise from a redundant set of measurements).

II-D Factored versus conventional Newton's method

The factored scheme can be more easily compared with the NR method if step 2 is written in incremental form. For this purpose, the term E H̃_k C x_k is subtracted from both sides of (30). Considering that u_k = C x_k, this leads to,

  (E H̃_k C) Δx_k = E H̃_k (ũ_k − u_k)             (31)

Notice that the above expression, being equivalent to (30), is less convenient from the computational point of view owing to the extra operations involved.

Next, the Taylor expansion around ũ_k of the set of nonlinear functions h(·), evaluated at u_k, is rearranged as follows:

  H̃_k (ũ_k − u_k) = ỹ_k − y_k − R_k               (32)

where the remainder R_k comprises the second- and higher-order terms of the polynomial expansion,

  R_k = Σ_{j≥2} (1/j!) H̃_k^{(j)} D_k^j · 1        (33)

H̃_k^{(j)} is the diagonal matrix containing the j-th derivatives of h(·), computed at ũ_k, D_k = diag(u_k − ũ_k), and 1 is a vector of 1's.

Replacing (32) into (31), and taking into account that E ỹ_k = p, yields,

  (E H̃_k C) Δx_k = Δp_k − E R_k                   (34)

Comparing the resulting incremental model (34) with that of the conventional Newton’s method (11), the following remarks can be made:

  1. If the current point, x_k, is so close to the solution that ũ_k ≈ u_k, then E R_k will be negligible compared to Δp_k. In these cases, the only difference of the factored method with respect to NR lies in the use of the updated Jacobian, H̃_k, rather than H_k, and the local convergence rate of the factored algorithm will be of the same order as that of Newton's method.

  2. If the current point, x_k, is still so far away from the solution that ũ_k differs markedly from u_k, then E R_k will dominate the right-hand side in (34). In these cases, the behaviour of the factored scheme can be totally different from that of Newton's method, which fully ignores R_k when computing Δx_k.

  3. As expected, if step 1 is omitted (i.e., ũ_k = u_k and ỹ_k = y_k), the factored scheme reduces to the standard Newton's method.

Let x* denote the solution point and u* = C x*. Further insight into Remark 1 above, regarding the local convergence rate, can be gained by subtracting E H̃_k C x* from both sides of (30) and rearranging, which leads for the factored method to:

  (E H̃_k C)(x_{k+1} − x*) = E R(u*, ũ_k)          (35)

where R(u*, ũ_k) denotes the remainder of the Taylor expansion of h(u*) around ũ_k. Manipulating (11) in a similar fashion yields, for the NR method:

  (E H_k C)(x_{k+1} − x*) = E R(u*, u_k)           (36)

The last two expressions confirm that both the NR and the factored schemes converge quadratically near the solution (third- and higher-order terms being negligible in R). Moreover, if ũ_k is closer than u_k to the solution u*, then the factored method converges faster than Newton's.

In summary, from the above discussion it can be concluded that the local behaviour of the proposed factored algorithm is comparable to Newton's method (quadratic convergence rate), while the global behaviour (shape of the basins of attraction) will in general be quite different. The fact that step 1 tries to minimize the distance between successive intermediate points surely explains the much wider basins of attraction found in practice for the factored method, but there is no way to predict beforehand the global behaviour of any iterative algorithm in the general case (the reader is referred to Section 3, "Aspects of Convergence Analysis", in [22], for a succinct but clarifying discussion of this issue).

III Application to canonical forms

The factored model and, consequently, the two-stage procedure presented above can be applied in a straightforward manner to a wide range of nonlinear equation systems, which are, or can be directly derived from, the following canonical forms. Small tutorial examples will be used to illustrate the ideas. None of those examples is intended to assess the potential computational benefits of the factored solution process, but rather to show its improved convergence pattern over the NR method (the reader is referred to Section VIII, where very large equation systems arising in the steady-state solution of electrical power systems are solved by both methods and solution times are compared).

III-A Sums of single-variable nonlinear elementary functions

The simplest canonical form of nonlinear systems arises when they are composed of sums of nonlinear terms, each being an elementary invertible function of just a single variable. Mathematically, each component of f(x) = p should have the form,

  f_i(x) = Σ_j c_{ij} h_j(x_{s(j)})

where c_{ij} is any real number, x_{s(j)} is the single variable involved in the j-th nonlinear term, and h_j^{-1} can be analytically obtained. In this case, for each different nonlinear term, h_j(x_{s(j)}), a new variable,

  y_j = h_j(x_{s(j)})

is added to the auxiliary vector y, keeping in mind that repeated definitions should be avoided. This leads to the desired underdetermined linear system to be solved at the first step:

  E y = p ,   E = {c_{ij}}

while matrix C of the overdetermined linear system, u = C x, is trivially obtained from the definition of the auxiliary variables (row j of C simply picks the variable x_{s(j)}, so that u_j = x_{s(j)}).

Example 1:

Consider the simple nonlinear function,

represented in Fig. 1. Introducing two auxiliary variables,

leads to,

Fig. 1: Nonlinear function used in Example 1. It crosses at points A and B ( and , respectively).

Therefore, with the above notation, the involved matrices are:

For the specified value of p there are two solutions (points A and B in figure 1). Table I shows, for different starting points, the number of iterations required to converge (convergence tolerance in all examples: ) by both the NR method and the proposed factored procedure. Notice that the factored procedure converges much faster than the NR method, and that the farther the starting point is from the solution, the better its behaviour compared to Newton's method.

x_0   NR   Factored
30 16 (B) 6 (B)
10 12 (B) 6 (B)
5 9 (B) 5 (B)
1 7 (B) 4 (B)
0.9 9 (B) 5 (B)
0.8 13 (B) 5 (B)
0.5 10 (A) 6 (B)
0 Fails 6 (B)
6 (A) 7 (B)
TABLE I: Number of iterations required by both NR and factored procedures to converge from arbitrary starting points for the nonlinear function of Example 1, with . In parenthesis, A or B indicates which point is reached in each case.

For starting values below the minimum of the function (where the slope changes its sign), the NR method converges to point A. On the other hand, the factored scheme always converges to point B no matter which initial guess is chosen. This issue is discussed in more detail in section V, where it is also explained how to reach other solution points. As expected, the NR method fails for x_0 = 0 (null initial slope), whereas the factored solution does not suffer from this limitation (the updated Jacobian H̃_0 corresponding to the first value ũ_0 is not singular).

Finally, the imaginary components of the computed solution are always null at the solution point, indicating that a feasible solution has been found in all cases (see section VI).

Example 2:

In this example, the following periodic function will be considered,

  f(x) = sin x + cos x = p

for which two auxiliary variables are needed,

  y_1 = sin x ,   y_2 = cos x

with u_1 = u_2 = x. The relevant matrices arising in this case are:

  E = [1 1] ,   C = [1 1]^T ,   H = diag(cos u_1, −sin u_2)

For p = 1.4, the two solutions closest to the origin are 0.6435 and 0.9273 (points A and B in Fig. 2).

Fig. 2: Nonlinear function for Example 2

Table II shows, for different starting points, the number of iterations required to converge by both methods and the final solution reached. In this particular example (p = 1.4), the NR method sometimes outperforms the factored scheme in terms of number of iterations. However, note that, for starting points far away from the origin, the NR method may converge to 'remote' periodic solutions (shown in boldface), whereas the factored scheme always converges to points A or B (this issue is further discussed in section V). Other advantages of the factored method will become apparent for infeasible values of p (see the same example, with an infeasible p, in section VI).

            NR              Factored
x_0    Iter    x       Iter    x
10 5 0.6435 4 0.9273
5 6 6.9267 8 0.6435
1 4 0.9273 4 0.9273
0 6 0.6435 7 0.6435
7 0.6435 8 0.6435
6 55.6214 7 0.9273
7 11.6391 8 0.9273
TABLE II: Example 2: No. of iterations and solution points for p = 1.4

In all cases, the final solution provided by the factored scheme is real (or, more precisely, its imaginary component is well below the convergence tolerance). However, complex intermediate solutions are obtained in some cases, owing to the fact that arcsin and arccos return complex values for arguments outside [−1, 1] (this is discussed below in section VI).

III-B Sums of products of single-variable power functions

Another canonical form to which the factored method can be easily applied arises when the nonlinear equations to be solved are the sum of products of nonlinear terms, each being an arbitrary power of a single variable. Mathematically, each component of f(x) = p has the form,

  f_i(x) = Σ_j c_{ij} Π_l x_l^{e_{jl}}

where c_{ij} and e_{jl} are arbitrary real numbers.

This case can be trivially handled if the original set of variables, x, is replaced by its log counterpart,

  z_l = ln x_l

Then the auxiliary vector y is defined as in the previous case, avoiding duplicated entries:

  y_j = Π_l x_l^{e_{jl}}

which leads to the desired underdetermined linear system:

  E y = p ,   E = {c_{ij}}

The second key point is to embed the log transformation in the nonlinear relationship y = h(u), as follows:

  y_j = e^{u_j} ,   u_j = Σ_l e_{jl} z_l

which leads to the overdetermined linear system to be solved at the second step:

  u = C z ,   C = {e_{jl}}

Once vector z is obtained by the factored procedure, the original unknown vector can be recovered from:

  x_l = e^{z_l}
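As a minimal illustration of this recipe (ours, not one of the paper's examples): a term 2 x_1^{0.5} x_2^{−1} appearing in some f_i contributes the entry c_{ij} = 2 to E and an auxiliary variable y_j, with

  y_j = e^{u_j} ,   u_j = 0.5 z_1 − z_2

so the corresponding row of C is [0.5  −1]; after convergence in the z variables, the original unknowns are recovered as x_1 = e^{z_1} and x_2 = e^{z_2}.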

Example 3:

Consider the following nonlinear system in two unknowns:

which, upon introduction of four variables,

is converted into an underdetermined linear system:

The nonlinear relationships are,

which lead to the following Jacobian:

and the final overdetermined system in the log variables,

Once the problem is solved in the log variables, the original variables are recovered from,

For the specified values of p_1 and p_2 there is a known solution point. Table III provides the number of iterations for different starting points. Apart from taking a larger number of iterations, the NR method sometimes converges to an alternative point (marked with an 'A' in the table), unlike the factored scheme. For the farthest starting point tested (last row), the NR method does not converge in 50 iterations. The factored scheme, on the other hand, is much less sensitive to the initial guess, since it converges systematically to the same point.

NR Factored
7 6
14 6
(A) 29 6
8 7
(A) 10 8
(A) 22 7
Fails 7
TABLE III: Number of iterations required by the NR and the factored procedures to converge from arbitrary starting points for the nonlinear system of Example 3, with and .

IV Transformation to canonical forms

There are many systems of equations which can be easily transformed into one of the canonical forms discussed above, by strategically introducing extra variables aimed at simplifying the complicating factors. Then, the nonlinear expressions defining the artificial variables are added to the original equation system, leading to an augmented one.
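As a generic illustration of ours: the scalar equation sin(x²) + x = p is not in canonical form, because the sine acts on a non-elementary argument. Defining the extra variable v = x² and appending its defining equation yields the augmented system

  sin v + x = p
  x² − v = 0

whose nonlinear terms (sin v and x²) are now elementary single-variable functions, i.e., canonical form III-A.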

Example 4:

Consider the following nonlinear system:

By defining a new variable,

it can be rewritten in augmented canonical form, as follows:

Note that in this case only the original unknowns need to be initialized, since the extra variable can be set according to its definition.

V Extending the range of reachable solutions

When there are multiple solutions, the NR method converges to a point determined by the basin of attraction in which the initial guess is located. Some heuristics have been developed enabling the NR method to reach alternative solutions in a more or less systematic manner, such as the differential equation approach proposed in [23], which modifies the equation structure according to the sign of the Jacobian determinant.

In this regard, the behaviour of the proposed factored algorithm is mainly affected by the computable range of the u components. When this range does not span the whole real axis, the two-stage procedure may not be able to reach some solution points, irrespective of the starting point chosen. For instance, if y_i = u_i^q, with q even, then during the solution process the expression

  u_i = y_i^{1/q}

will always return a positive u_i, provided y_i is positive (or a complex u_i with positive real component if y_i is negative). This will be determinant of the x values computed. In those cases, an easy way of allowing the factored procedure to reach alternative solution points is by considering negative ranges for u_i, i.e.,

  u_i = −y_i^{1/q}

Example 5:

In Example 1 it was noted that the factored procedure always converges to point B (the positive root in Fig. 1). However, if the original square-root inverse expression,

  u_i = √y_i

is replaced by

  u_i = −√y_i

then the factored procedure always converges to point A (negative root), in fewer iterations than in Example 1.

      

A similar situation arises when the nonlinear system to be solved contains periodic functions, since their inverses will have a limited range related to the period of the original function. For instance, if y = sin u, then u = arcsin y will naturally lie in the range [−π/2, π/2]. If we want to obtain values in the extended range,

  [−π/2 + 2kπ, π/2 + 2kπ]

for any integer k, then we should replace arcsin y by

  u = arcsin y + 2kπ

In a similar fashion, u = arccos y, which naturally lies in the range [0, π], should be replaced by,

  u = ±arccos y + 2kπ

to obtain values in the same extended range.

Example 6:

If, in Example 2, the inverse expressions are shifted one full period, e.g.,

  u_1 = arcsin y_1 + 2π

(and similarly for u_2), then the factored method converges, in a similar number of iterations, to the points 6.9267 and 7.2105, at a distance 2π from points A and B in figure 2 (in this simple example, involving a purely periodic function, this is a totally expected result, but the reader is referred to the not so trivial Example 7).

      

The fact that the range of u adopted during the iterative process determines the solution point finally reached, irrespective of the starting point, is a nice feature worth investigating in the future, since by fully exploring all alternative u ranges one might be able to systematically reach solution points in given regions of interest, without having to test a huge number of starting points. The sketch below and the following examples illustrate this idea.
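The following sketch of ours (reusing the factored_solve routine of section II-C on a hypothetical equation, x² + x = 2, with roots 1 and −2) shows how flipping the branch of the inverse square root steers the factored iteration towards one root or the other from the same starting point:

# x^2 + x = 2: terms y1 = x^2 (nonlinear) and y2 = x (identity)
E = np.array([[1.0, 1.0]])
C = np.array([[1.0], [1.0]])
h      = lambda u: np.array([u[0]**2, u[1]])
H_diag = lambda u: np.array([2.0 * u[0], 1.0])
pos_branch = lambda y: np.array([ np.sqrt(y[0]), y[1]])   # u1 = +sqrt(y1) -> root x = 1
neg_branch = lambda y: np.array([-np.sqrt(y[0]), y[1]])   # u1 = -sqrt(y1) -> root x = -2
for branch in (pos_branch, neg_branch):
    x_sol, iters = factored_solve(E, C, h, branch, H_diag, np.array([2.0]), [3.0])
    print(x_sol.real, iters)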

Example 7:

Consider the extremely nonlinear non-periodic function,

which is graphically represented in figure 3.

Fig. 3: Nonlinear function for Example 7

As explained in the previous examples, the following augmented system is to be factored,

The relevant components and relationships of the factored solution approach are,

where variables have been introduced,

The inverse of the Jacobian is in this case:

For the specified p, no real solution exists in the natural range of the inverse function. In order to obtain real solutions (points A, B, C and D in figure 3), as explained before, we have to replace the natural inverse expression by its more general counterpart, shifted by k periods, for increasing integer values of k.

Table IV shows, for increasing values of k, the points to which the factored solution converges, as well as the number of iterations (starting from the same initial guess in all cases). The complex solutions provided for k = 0 and k = 1 (whose real component is very close to the local maximum) indicate that no real solution exists in those ranges.

k    It.    x
0 10
1 8
2 5 (A)
3 5 (B)
4 5 (C)
5 4 (D)
TABLE IV: Example 7: No. of iterations and solution points for increasing values of k

      

One more example will be worked out to illustrate the capability of the factored procedure to reach different solutions, just by extending the computed range of certain components.

Example 8:

Consider the 2×2 system proposed by P. Boggs [21],

which is known to have only three solutions: , and .

In this case, the following relationships are involved in the factored method:

When the natural (principal) ranges of the inverse functions are adopted, as defined above, the factored method always converges to the first of those solutions, irrespective of the starting point x_0. Just as importantly, the number of iterations is only slightly affected by x_0, no matter how arbitrary it is.

If we extend the computed range of one of the inverse functions to negative values, by replacing its original definition with the negative branch, then the factored method always converges to the second solution, irrespective of the starting point. Finally, when the ranges of both inverse functions are extended in this way, the third solution point is reached for arbitrary values of x_0.

This kind of controlled convergence cannot be easily implemented in the NR method, characterized by rather complex attraction basins in this particular example. In fact, Newton's method may behave irregularly even when started near the solutions [22].

Interestingly enough, if the fourth combination of ranges is selected, then the factored procedure converges to a complex point, showing that no real solution exists in that region. This issue is discussed in the next section.

      

VI Infeasibility and complex solutions

A nonlinear system with real coefficients will have in general an undetermined number of real and complex solutions. It can be easily shown that complex solutions constitute conjugate pairs if and only if the elementary functions h_i return conjugate values for conjugate arguments, which happens for many functions of practical interest. When no real solutions exist we say that vector p is infeasible.

The basin of attraction associated to each solution (real or complex) is characteristic of the iterative method adopted. As discussed in section II-D and illustrated by the previous examples, the basins of attraction associated to the factored method are quite different from those of Newton's algorithm. Depending on which basin of attraction the initial guess lies in, the factored method will converge to a real or complex solution.

A summary of the possible outcomes of the factored algorithm follows:

  • Convergence to a real solution: for feasible cases, starting from a real x_0 close enough to a solution, the factored scheme eventually converges to the nearby real solution. This is favoured by the introduction of the first step, aimed at minimizing the distance between successive intermediate points. Note however that, even if both x_0 and the solution are real, depending on the domain of existence of the nonlinear elementary functions h_i^{-1}, the two-stage procedure may give rise to complex intermediate values during the iterative process. A real solution (or, more precisely, a solution with negligible imaginary component) can also be reached starting from a complex x_0.

  • Convergence to a complex solution: for infeasible cases the factored method can only converge to a complex solution, provided the right x_0 is chosen. Notice that complex solutions cannot be reached if x_0 is real and the domains of h_i and h_i^{-1} span the entire real axis. In those cases, selecting a complex initial guess is usually helpful to allow the algorithm to reach a complex solution (see the sketch after this list). For feasible cases, if x_0 is real but far from a real solution, the factored iterative procedure might converge to a complex point.

  • Inability to converge: like any iterative algorithm, the factored scheme may numerically break down owing to unbounded values returned by the elemental functions h_i or their inverses h_i^{-1}, which is the case for instance when an asymptote is approached. It can also remain oscillating, either periodically or chaotically, which typically happens when x_0 is real, no real solution exists and complex intermediate values cannot pop up.
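As a small computational illustration of the second outcome (ours, reusing the factored_solve sketch of section II-C with its complex-capable arcsin/arccos and the reconstructed Example 2 function):

# Infeasible variant of Example 2: sin x + cos x = 1.5 > sqrt(2)
E = np.array([[1.0, 1.0]])
C = np.array([[1.0], [1.0]])
h      = lambda u: np.array([np.sin(u[0]), np.cos(u[1])])
h_inv  = lambda y: np.array([np.arcsin(y[0]), np.arccos(y[1])])
H_diag = lambda u: np.array([np.cos(u[0]), -np.sin(u[1])])
x_c, it = factored_solve(E, C, h, h_inv, H_diag, np.array([1.5]), [1.0])
# x_c is complex; its real part is expected to lie near pi/4,
# the maximum of sin x + cos x (cf. Example 10 below)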

The following examples illustrate the behaviour of the factored scheme as well as the nature of the complex solutions obtained when solving infeasible cases.

Example 10:

Let us reconsider Example 2 with an infeasible value of p (p > √2), for which the NR method fails to converge to a meaningful point. The two-stage procedure always converges in a reduced number of iterations to the same complex value (or its conjugate), as shown in table V for the same real starting points as in Example 2 (the same happens with other real starting points).

x_0    Iter
10 8
5 5
1 8
0 5
5
6
5
TABLE V: Example 10: No. of iterations and solution points for the infeasible value of p considered

However, if we prevent u from taking complex values, by restricting arcsin and arccos to their real domains, then the two-stage procedure remains oscillating around real values, much like the NR method.

We can use this example to see what happens if p is further increased. Table VI presents the number of iterations and the solution points for increasing values of p (starting from the same initial guess). The maximum value for which there is a feasible (real) solution is p = √2 (critical point). For larger values the factored approach converges to complex values with the same real component and increasing absolute values. Eventually, for p = 4.204 the two-step procedure breaks down, which means that the chosen starting point is not in the basin of attraction for this infeasible value of p.

p        Iter
1.4 7
1.4142 12
1.4143 10
1.5 5
2.5 5
3 5
4.203 10
4.204 Fails
TABLE VI: Example 10: No. of iterations and solution points for feasible and infeasible values of p, starting from the same initial guess

Experimental results show that the complex values to which the factored method converges in infeasible cases are not arbitrary. In this example, evaluating the nonlinear function at the real component of the solution point (0.7854) yields √2 (point C in figure 2). This lies almost exactly on the maximum of the function (√2, reached at x = π/4).

In Example 1, if we set p to a value which is infeasible in the real domain, the factored method converges from nearly arbitrary real starting points to a certain complex value. Evaluating the function at the real part of that solution yields a point (point C in figure 1) which is indeed very close to the minimum of the function.

      

Example 11:

Let us consider the periodic function,

  f(x) = tan x + cot x = p

which is infeasible for |p| < 2 and has an infinite number of solutions otherwise (see figure 4).

Fig. 4: Periodic function for Example 11

This example is very similar to Example 2, except for the way the auxiliary variables are defined:

  y_1 = tan x ,   y_2 = cot x

In this case, both the elementary inverse functions,

  u_1 = arctan y_1 ,   u_2 = arccot y_2

and the Jacobian functions,

  h_1'(u_1) = 1 + tan² u_1 ,   h_2'(u_2) = −(1 + cot² u_2)

are well defined all over the real axis (except for the asymptotes of the tan and cot functions).

Therefore, starting from a real value x_0, the factored procedure (and also the NR scheme) will always remain in the real domain, unlike in Example 2, which involves the functions arcsin and arccos.

With p = 3, the two-stage algorithm converges in 5 iterations to 1.2059 or, depending on the starting point, to the companion solution 0.3649, also in 5 iterations.

For an infeasible value of p, the algorithm remains oscillating for any real starting point. However, with a complex starting value x_0, it converges to the complex solutions shown in table VII for different infeasible values of p.

p      Iter.
6
4
4
TABLE VII: Example 11: No. of iterations and complex solution points starting from a complex initial guess

Note that the real component of the solution is always the same (0.7854). This value is exactly π/4, a local minimum of the nonlinear function.

      

The above examples suggest that, when the two-stage method fails to find a real solution, it tries to reach a complex point whose real component lies in the neighbourhood of an extreme point of the function, which is indeed a nice feature. This is surely a consequence of the strategy adopted in step 1, which attempts to minimize the distance from new solution points to the previous ones, but this conjecture deserves a closer examination which is out of the scope of this preliminary work.

In any case, it is advisable to adopt complex starting points, particularly when infeasibility is suspected, in order to facilitate smooth convergence to complex solutions.

VII Critical points

Feasible (real) and infeasible (complex) solution points can be obtained as long as the Jacobian matrix (E H̃_k C) remains nonsingular throughout the iterative process. This applies to both the NR and the factored methods. Even though critical points are theoretically associated with singular Jacobians, in practice they can also be safely approached, so long as the condition number of the Jacobian is acceptable for the available computing precision and the convergence threshold adopted. However, the convergence rate tends to slow down significantly near critical points.

It is worth noting that, in the surroundings of a critical point, it may be preferable to deal with the augmented equation (26) rather than solving the more compact one (27), which assumes that λ = 0.

Example 12:

Let us consider again the periodic nonlinear function of Example 11:

  f(x) = tan x + cot x = p

which has an infinite number of critical points for p = ±2. Table VIII provides the number of iterations and the solution points reached by both the NR and factored methods for different starting points (like in previous examples, the convergence threshold is ). As can be seen, the factored method is much more robust against the choice of starting point, and always converges to the same critical point (0.7854) when p = 2.

              p = 2                       p = 2.1
x_0    Factored       NR           Factored       NR
       Iter    x      Iter    x    Iter    x      Iter    x
5 16 0.7854 23 5 0.6305 8
3 15 0.7854 25 101.3164 6 0.6305 13 4.0819
1.5 16 0.7854 19 0.7854 6 0.9403 9 0.9403
16 0.7854 20 6 0.6305 12
15 0.7854 18 6 0.9403 8
16 0.7854 17 5 0.9403 6
TABLE VIII: Example 12: No. of iterations and solution points for p = 2 and p = 2.1

In the neighbourhood of the critical point the number of iterations increases significantly, but the factored procedure still performs much better than the NR method. For the sake of comparison, Table VIII also shows the values corresponding to p = 2.1.

      

In addition, numerical problems may arise during the factored solution procedure if, by a numerical coincidence, any element of the diagonal matrix H̃_k becomes null or undefined. This risk can be easily circumvented by preventing the diagonal elements of H̃_k from being smaller than a certain threshold or abnormally large.
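A possible guard of ours (with hypothetical thresholds) clamps the magnitude of the diagonal entries before forming the step-2 matrix:

import numpy as np

def safe_diag(d, eps=1e-10, big=1e10):
    """Keep the diagonal factored Jacobian away from zero and infinity (sketch)."""
    d = np.asarray(d, dtype=complex)
    mag = np.abs(d)
    d = np.where(mag < eps, eps, d)             # lift (near-)null derivatives
    mag = np.maximum(mag, eps)
    d = np.where(mag > big, big * d / mag, d)   # cap abnormally large entries
    return d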

VIII Large-scale test cases

This section provides test results corresponding to the nonlinear systems arising in the steady-state analysis of electrical power systems.