# Factored solution of nonlinear equation systems

## Abstract

This article generalizes a recently introduced procedure to solve nonlinear systems of equations, radically departing from the conventional Newton-Raphson scheme. The original nonlinear system is first unfolded into three simpler components: 1) an underdetermined linear system; 2) a one-to-one nonlinear mapping with explicit inverse; and 3) an overdetermined linear system. Then, instead of solving such an augmented system at once, a two-step procedure is proposed in which two equation systems are solved at each iteration, one of them involving the same symmetric matrix throughout the solution process. The resulting factored algorithm converges faster than Newton’s method and, if carefully implemented, can be computationally competitive for large-scale systems. It can also be advantageous for dealing with infeasible cases.

**Keywords:** Nonlinear equations, Newton-Raphson method, factored solution

## I Introduction

Efficiently and reliably solving nonlinear equations is of paramount importance in physics, engineering, operational research and many other disciplines in which the need arises to build detailed mathematical models of real-world systems, all of them nonlinear in nature to a certain extent [1, 2, 3, 4]. Moreover, large-scale systems are always sparse, which means that the total number of additive terms in a coupled set of $n$ nonlinear functions in $n$ variables is roughly $O(n)$.

According to John Rice, who coined the term mathematical software in 1969, “solving systems of nonlinear equations is perhaps the most difficult problem in all of numerical computations” [5]. This surely explains why so many people have devoted so much effort to this problem for so long.

Since the 17th century, the reference method for the solution of nonlinear systems has been the Newton-Raphson (NR) iterative procedure, which locally approximates the nonlinear functions by their first-order Taylor-series expansions. Its terrific success stems from its proven quadratic convergence (when a sufficiently good initial guess is available), moderate computational expense, provided the Jacobian sparsity is fully exploited when dealing with large systems, and broad applicability to most cases in which the nonlinear functions can be analytically expressed [6].

For large-scale systems, by far the most time-consuming step lies in the computation of the Jacobian and its factors [7]. This has motivated the development of quasi-Newton methods, which make use of approximate Jacobians, usually at the expense of more iterations [8]. Well-known examples in this category are the chord method, where the Jacobian is computed only once, and the secant method [9], which approximates the Jacobian through finite differences (no explicit derivatives are required). Broyden's method is a generalization of the secant method which carries out rank-one updates to the initial Jacobian [10].
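As a concrete illustration of the trade-off, the following minimal sketch (on a made-up scalar equation, not taken from the paper) compares Newton's method with the chord variant, in which the derivative is frozen at the starting point:

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Plain Newton iteration: the derivative is re-evaluated at every step."""
    x = x0
    for k in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, k
    return x, max_iter

def chord(f, df, x0, tol=1e-10, max_iter=100):
    """Chord (quasi-Newton) iteration: the derivative is computed only
    once, at x0, and reused, trading cheaper steps for more of them."""
    x, slope = x0, df(x0)
    for k in range(1, max_iter + 1):
        step = f(x) / slope
        x -= step
        if abs(step) < tol:
            return x, k
    return x, max_iter

# Toy equation x**4 - x**3 = 5 (made up for illustration).
f = lambda x: x**4 - x**3 - 5
df = lambda x: 4 * x**3 - 3 * x**2

x_n, it_n = newton(f, df, 2.0)
x_c, it_c = chord(f, df, 2.0)
print(it_n, it_c)  # the chord variant needs more (but cheaper) iterations
```

Both iterations reach the same root; the chord method simply takes more steps, each of which avoids a derivative evaluation and, in the multivariate case, a refactorization.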

Another category of related methods, denoted as inexact Newton methods, performs approximate computations of Newton steps, frequently in combination with pre-conditioners [11].

More sophisticated higher-order iterative methods also exist, achieving super-quadratic convergence near the solution at the expense of more computational effort. These include Halley’s cubic method.

When the initial guess is not close enough to the solution or the functions to be solved exhibit acute non-convexities in the region of interest, the NR method converges slowly or may diverge quickly. This has motivated the development of so-called globally convergent algorithms which, for any initial guess, either converge to a root or fail to do so in a small number of ways [6, 11]. Examples of globally convergent algorithms are line search methods, continuation/homotopy methods [12, 13], Levenberg-Marquardt methods, which circumvent Newton's method failure caused by a singular or near-singular Jacobian matrix, and trust region methods.

Other miscellaneous methods, more or less related to Newton’s method, are based on polynomial approximations (e.g. Chebyshev’s method), solution of ordinary differential equations (e.g. Davidenko’s method, a differential form of Newton’s method) or decomposition schemes (e.g. Adomian’s method [14]).

While each individual method in such a plethora of algorithms may be most appropriate for a particular niche application, the standard NR method still remains the best general-purpose candidate trading off simplicity and reliability, particularly when reasonable initial guesses can be made [15, 16].

A feature shared by most existing methods for solving nonlinear equations is that the structure of the original system is kept intact throughout the solution process. In other words, the algorithms are applied without any kind of preliminary transformation, even though it has long been known that by previously rearranging nonlinear systems better convergence can be achieved. According to [3] (pp. 174-176), “the general idea is that a global nonlinear transformation may create an algebraically equivalent system on which Newton’s method does better because the new system is more linear. Unfortunately, there is no general way to apply this idea; its application will be problem-specific”.

This article explores such an idea through a new promising perspective, based on unfolding the nonlinear system to be solved by identifying distinct nonlinear terms, each deemed a new variable. This leads to an augmented system composed of two sets of linear equations, which are coupled through a one-to-one nonlinear mapping with diagonal Jacobian. The resulting procedure involves two steps at each iteration: 1) solution of a linear system with symmetric coefficient matrix; 2) computation of a Newton-like step.

The basic idea of factoring the solution of complex nonlinear equations into two simpler problems, linear or nonlinear, was originally introduced in the context of hierarchical state estimation [18], where the main goal was to geographically decompose large-scale least-squares problems. Later on, it has been successfully applied to nonlinear equations with a very particular network structure, such as those arising in power systems analysis and operation [19, 20].

In this work, the factored solution method, initially derived for overdetermined equation systems, is conceptually re-elaborated from scratch, and generalized, so that it can be applied to efficiently solving broader classes of nonlinear systems of equations.

The paper is structured as follows: in the next section the proposed two-stage solution approach is presented. Then, sections III and IV introduce canonical and extended forms of nonlinear functions to which the proposed method can be applied. Next, section V discusses how, with minor modifications, the proposed method can reach different solution points from the same initial guess. Section VI is devoted to the analysis of cases which are infeasible in the real domain, where convergence to complex values takes place. Section VII briefly considers the possibility of safely reaching singular points while section VIII closes the paper by showing the behaviour of the proposed method when applied to large-scale test cases.

## II Factored solution of nonlinear systems

Consider a general nonlinear system, written in compact form as follows:

$$h(x) = p \tag{1}$$

where $p$ is a specified vector and $x$ is the unknown vector.

By applying the NR iterative scheme, a solution can be obtained from an initial guess, $x_0$, by successively solving

$$H_k \, \Delta x_k = \Delta p_k \tag{2}$$

where subindex $k$ denotes the iteration counter, $\Delta x_k = x_{k+1} - x_k$, $H_k$ is the Jacobian of $h(x)$, computed at the current point $x_k$, and $\Delta p_k = p - h(x_k)$ is the mismatch or residual vector.

In essence, the new method assumes that suitable auxiliary vectors, $y$ and $u$, of size $m \geq n$, can be introduced so that the original system (1) can be unfolded into the following simpler problems:

$$Ey = p \tag{3}$$

$$u = f(y) \tag{4}$$

$$Cx = u \tag{5}$$

where $E$ and $C$ are rectangular matrices of sizes $n \times m$ and $m \times n$, respectively, and vector $f$ comprises a set of one-to-one nonlinear functions, also known as a diagonal nonlinear mapping [17],

$$u_i = f_i(y_i); \qquad i = 1, \ldots, m \tag{6}$$

each with closed-form inverse,

$$y_i = f_i^{-1}(u_i) \;\Rightarrow\; y = f^{-1}(u) \tag{7}$$

By eliminating vector $u$ the above augmented system can also be written in more compact form:

$$Ey = p \tag{8}$$

$$Cx = f(y) \tag{9}$$

Notice that (8) is a linear underdetermined system whereas (9) is an overdetermined one. Among the infinite solutions to (8) only those exactly satisfying (9) constitute solutions to the original nonlinear system (1). As explained in section III, many systems can be found in practice where such a factorization is possible, the aim being to reduce $m$ as much as possible. In the limit, if a vector $y$ of size $m = n$ can be found, then solutions to the original nonlinear system will be obtained directly (i.e., without iterating) by sequentially solving the unfolded system of equations (3)-(5). But this case will arise only when the set of $n$ equations comprises just $n$ distinct nonlinear terms. Therefore, the need to iterate in the factored method stems from an excess of nonlinear terms ($m > n$) rather than from the nonlinear nature of the original problem.

Obviously, when the remaining auxiliary vector $y$ is eliminated the original ‘folded’ system is obtained in factored form:

$$h(x) = E f^{-1}(Cx) = p \tag{10}$$

This leads to an equivalent expression for the Newton step (2),

 [EF−1kCHk]Δxk=p−Ef−1(Cxk)=Δpk (11)

where $F_k^{-1}$ is the trivial (diagonal) Jacobian of $f^{-1}$, computed at the current point. Whether (11) offers any computational advantage over (2) will mainly depend on the complexity involved in the computation of $h$ and its Jacobian $H_k$, compared to their much simpler counterparts $f^{-1}$ and $F_k^{-1}$ (plus the required matrix products), but the convergence pattern will be exactly the same in both cases.
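The identity $H_k = E F_k^{-1} C$ can be checked numerically. A minimal sketch, assuming the toy unfolding $h(x) = x^4 - x^3$ with $y = (x^4, x^3)$ (an illustrative choice, not prescribed by the text):

```python
import numpy as np

# Toy unfolding (an assumption for illustration): h(x) = x**4 - x**3,
# with y = (x**4, x**3), u = f(y) = (y1**(1/4), y2**(1/3)), so that
# both auxiliary variables map back to the single unknown x.
E = np.array([[1.0, -1.0]])        # h = y1 - y2
C = np.array([[1.0], [1.0]])       # u1 = x, u2 = x

x = 1.7
u = np.array([x, x])
# F^{-1} is the diagonal Jacobian of f^{-1}(u) = (u1**4, u2**3)
F_inv = np.diag([4 * u[0]**3, 3 * u[1]**2])

H_factored = E @ F_inv @ C           # E F^{-1} C
H_analytic = 4 * x**3 - 3 * x**2     # dh/dx, computed directly
print(H_factored[0, 0], H_analytic)  # both ≈ 10.98
```

The factored product reproduces the analytic Jacobian exactly, as (11) requires.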

Yet the augmented system (8)-(9), arising when the factored form is considered, opens the door to alternative solution schemes which may outperform the ‘vanilla’ NR approach. The scheme developed in the sequel begins by considering the solution of the factored model as a linearly-constrained Least Squares (LS) problem. Then, an auxiliary least-distance problem is formulated aimed at providing improved linearization points at each iteration.

### II-A Solution of the factored model as an optimization problem

Finding a solution to the unfolded problem (8)-(9) can be shown to be equivalent to solving the following equality-constrained LS problem,

$$\begin{aligned} \text{Minimize} \quad & [y - f^{-1}(Cx)]^T [y - f^{-1}(Cx)] \\ \text{s.t.} \quad & p - Ey = 0 \end{aligned} \tag{12}$$

which reduces to minimizing the associated Lagrangian function,

$$\mathcal{L} = \tfrac{1}{2} [y - f^{-1}(Cx)]^T [y - f^{-1}(Cx)] - \mu^T (p - Ey) \tag{13}$$

The first-order optimality conditions (FOOC) give rise to the following system:

$$\begin{aligned} y - f^{-1}(Cx) + E^T \mu &= 0 \\ -C^T F^{-T} [y - f^{-1}(Cx)] &= 0 \\ p - Ey &= 0 \end{aligned} \tag{14}$$

Given an estimate of the solution point, $x_k$, we can choose $y_k$ in such a way that (9) is satisfied, i.e.,

$$y_k = f^{-1}(C x_k) \tag{15}$$

Then, linearizing around $(y_k, x_k)$,

$$y - f^{-1}(Cx) \cong \Delta y_k - F_k^{-1} C \, \Delta x_k$$

allows (14) to be written in incremental form,

$$\begin{bmatrix} I & -F_k^{-1} C & E^T \\ -C^T F_k^{-T} & C^T D_k C & 0 \\ E & 0 & 0 \end{bmatrix} \begin{bmatrix} \Delta y_k \\ \Delta x_k \\ \mu \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \Delta p_k \end{bmatrix} \tag{16}$$

where $D_k = F_k^{-T} F_k^{-1}$ is a diagonal matrix with positive elements.

From the above symmetric system, $\Delta y_k$ can be easily eliminated, yielding:

$$\begin{bmatrix} 0 & C^T F_k^{-T} E^T \\ E F_k^{-1} C & -E E^T \end{bmatrix} \begin{bmatrix} \Delta x_k \\ \mu \end{bmatrix} = \begin{bmatrix} 0 \\ \Delta p_k \end{bmatrix} \tag{17}$$

or, in more compact form,

$$\begin{bmatrix} 0 & H_k^T \\ H_k & -E E^T \end{bmatrix} \begin{bmatrix} \Delta x_k \\ \mu \end{bmatrix} = \begin{bmatrix} 0 \\ \Delta p_k \end{bmatrix} \tag{18}$$

If the Jacobian $H_k$ remains nonsingular at the solution point, then $\mu = 0$ and the above system reduces to (11), as expected. Therefore, the Lagrangian-based augmented formulation has the same convergence pattern and converges to the same point as the conventional NR approach.

### II-B Auxiliary least-distance problem

The driving idea behind the proposed procedure is to linearize (14) around a point which, being closer to the solution than $y_k$, can be obtained in a straightforward manner. The resulting scheme will be advantageous over the NR approach if the extra cost of computing the improved linearization point is offset by the convergence enhancement obtained, if any.

For this purpose, given the latest solution estimate, $x_k$, we consider a simpler auxiliary optimization problem, as follows:

$$\begin{aligned} \text{Minimize} \quad & (y - y_k)^T (y - y_k) \\ \text{s.t.} \quad & p - Ey = 0 \end{aligned} \tag{19}$$

with $y_k = f^{-1}(C x_k)$. The associated Lagrangian function is,

$$\mathcal{L}_a = \tfrac{1}{2} (y - y_k)^T (y - y_k) - \lambda^T (p - Ey) \tag{20}$$

which leads to the following linear FOOCs:

$$\begin{bmatrix} I & E^T \\ E & 0 \end{bmatrix} \begin{bmatrix} \Delta \tilde y_k \\ \lambda \end{bmatrix} = \begin{bmatrix} 0 \\ \Delta p_k \end{bmatrix} \tag{21}$$

where $\Delta \tilde y_k = \tilde y_k - y_k$. Besides being linear, the unknown $x$ is missing in this simpler problem, which can be solved in two phases. First, $\lambda$ is computed by solving:

$$E E^T \lambda = -\Delta p_k \tag{22}$$

Then, $\tilde y_k$ is simply obtained from

$$\tilde y_k = y_k - E^T \lambda \tag{23}$$

By definition, $\tilde y_k$ is as close as possible to $y_k$ while satisfying (8).
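Steps (22)-(23) amount to an orthogonal projection of $y_k$ onto the affine subspace $Ey = p$. A minimal numpy sketch, with made-up matrices chosen purely for illustration:

```python
import numpy as np

def least_distance_step(E, p, y_k):
    """The y closest to y_k that satisfies E y = p, obtained
    through the normal-equation steps (22)-(23)."""
    dp = p - E @ y_k                      # mismatch Δp_k
    lam = np.linalg.solve(E @ E.T, -dp)   # E E^T λ = -Δp_k
    return y_k - E.T @ lam                # ỹ_k = y_k - E^T λ

# Small made-up instance (numbers are arbitrary).
E = np.array([[1.0, -1.0, 2.0],
              [0.0,  1.0, 1.0]])
p = np.array([3.0, 1.0])
y_k = np.array([1.0, 2.0, 0.5])

y_t = least_distance_step(E, p, y_k)
print(E @ y_t)   # ≈ p: the projected point satisfies (8) exactly
```

Since $E E^T$ does not change between iterations, a practical implementation would factorize it once and reuse the factors, as the text notes below.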

Next, the FOOCs (14) of the original problem are linearized around $\tilde y_k$, hopefully closer to the solution than $y_k$,

$$y - f^{-1}(Cx) \cong \Delta \tilde y_k - \tilde F^{-1} [Cx - f(\tilde y_k)]$$

$$\begin{bmatrix} I & -\tilde F^{-1} C & E^T \\ -C^T \tilde F^{-T} & C^T \tilde D C & 0 \\ E & 0 & 0 \end{bmatrix} \begin{bmatrix} \Delta \tilde y_k \\ x_{k+1} \\ \mu \end{bmatrix} = \begin{bmatrix} -\tilde F^{-1} f(\tilde y_k) \\ C^T \tilde D f(\tilde y_k) \\ 0 \end{bmatrix} \tag{24}$$

Once again, eliminating $\Delta \tilde y_k$ yields,

$$\begin{bmatrix} 0 & C^T \tilde F^{-T} E^T \\ E \tilde F^{-1} C & -E E^T \end{bmatrix} \begin{bmatrix} x_{k+1} \\ \mu \end{bmatrix} = \begin{bmatrix} 0 \\ E \tilde F^{-1} f(\tilde y_k) \end{bmatrix} \tag{25}$$

or, in more compact form,

$$\begin{bmatrix} 0 & \tilde H^T \\ \tilde H & -E E^T \end{bmatrix} \begin{bmatrix} x_{k+1} \\ \mu \end{bmatrix} = \begin{bmatrix} 0 \\ E \tilde F^{-1} f(\tilde y_k) \end{bmatrix} \tag{26}$$

The above linear system, to be compared with (18), provides the next value $x_{k+1}$, which replaces $x_k$ in the next iteration. Moreover, as happened with the solution of (18), so long as the Jacobian $\tilde H = E \tilde F^{-1} C$ remains nonsingular, $\mu = 0$ and $x_{k+1}$ can be obtained with less computational effort from:

$$\tilde H x_{k+1} = E \tilde F^{-1} f(\tilde y_k) \tag{27}$$

### II-C Two-step solution procedure

The two-step algorithm, arising from the proposed factored representation of nonlinear systems, can then be formally stated as follows:

Step 0: Initialization. Initialize the iteration counter ($k = 0$). Let $x_0$ be an initial guess and $y_0 = f^{-1}(C x_0)$.

Step 1: Auxiliary least-distance problem. First, obtain vector $\lambda$ by solving the system
$$E E^T \lambda = -\Delta p_k \tag{28}$$
and then compute $\tilde y_k$ from
$$\tilde y_k = y_k - E^T \lambda \tag{29}$$

Step 2: Non-incremental computation of $x_{k+1}$. Solve for $x_{k+1}$ the system
$$\tilde H x_{k+1} = E \tilde F^{-1} f(\tilde y_k) \tag{30}$$
where $\tilde H = E \tilde F^{-1} C$ is the factored Jacobian computed at $\tilde y_k$. Then update $y_{k+1} = f^{-1}(C x_{k+1})$. If $\|\Delta x_k\|$ (or, alternatively, $\|\Delta p_k\|$) is small enough, then stop. Otherwise set $k = k + 1$ and go back to Step 1.

As explained above, step 1 constitutes a linearly-constrained least-distance problem, yielding a vector $\tilde y_k$ which, being as close as possible to $y_k$, satisfies (8). As a feasible solution is approached, $\lambda \to 0$ and $\tilde y_k \to y_k$. Notice that the sparse symmetric matrix $E E^T$ needs to be factorized only once, by means of the efficient Cholesky algorithm (in fact, only its upper triangle is needed). Therefore, the computational effort involved in step 1 is very low if the Cholesky triangular factors are saved during the first iteration. Moreover, the vector $\Delta p_k$ is available from the previous iteration if it is computed to check for convergence.

It is worth noting that step 2 is expressed in non-incremental form. It directly provides improved values for $x$ with less computational effort than that of a conventional Newton step (11), which may well offset the moderate extra burden of step 1.
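The two-step procedure can be sketched as follows. This is a Python toy implementation with dense algebra, ignoring the sparsity and Cholesky-factor reuse mentioned above; the test system $x^4 - x^3 = 5$ and its unfolding are illustrative assumptions, not taken from the text:

```python
import numpy as np

def factored_solve(E, C, f, f_inv, df_inv, p, x0, tol=1e-10, max_iter=50):
    """Sketch of the two-step factored iteration (Steps 0-2).
    f maps y -> u, f_inv maps u -> y, and df_inv(u) returns the
    diagonal of F^{-1}, i.e. the derivatives of f_inv at u."""
    x = np.asarray(x0, dtype=float)
    EEt = E @ E.T            # would be Cholesky-factorized once in real code
    for _ in range(max_iter):
        y = f_inv(C @ x)     # y_k = f^{-1}(C x_k)
        dp = p - E @ y       # mismatch Δp_k
        if np.linalg.norm(dp) < tol:
            break
        # Step 1: least-distance point satisfying E y = p
        lam = np.linalg.solve(EEt, -dp)
        y_t = y - E.T @ lam
        # Step 2: non-incremental Newton-like step (30)
        u_t = f(y_t)
        d = df_inv(u_t)                # diagonal of the updated F^{-1}
        H = E @ (d[:, None] * C)       # factored Jacobian E F^{-1} C
        x = np.linalg.solve(H, E @ (d * u_t))
    return x

# Toy unfolding (an assumption for illustration): x**4 - x**3 = 5,
# with y = (x**4, x**3) and u = (y1**(1/4), y2**(1/3)).
E = np.array([[1.0, -1.0]])
C = np.array([[1.0], [1.0]])
f = lambda y: np.array([y[0]**0.25, y[1]**(1/3)])
f_inv = lambda u: np.array([u[0]**4, u[1]**3])
df_inv = lambda u: np.array([4 * u[0]**3, 3 * u[1]**2])

x = factored_solve(E, C, f, f_inv, df_inv, np.array([5.0]), [2.0])
print(x[0]**4 - x[0]**3)   # ≈ 5
```

The sketch keeps all quantities dense and real; a production code would exploit the sparsity of $E$ and $C$ and switch to complex arithmetic when the elementary inverses demand it.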

In [19, 20] the two-step procedure based on the factored model was originally developed from a different perspective, related with the solution of overdetermined systems in state estimation (i.e. filtering out noise from a redundant set of measurements).

### II-D Factored versus conventional Newton’s method

The factored scheme can be more easily compared with the NR method if step 2 is written in incremental form. For this purpose, the term $\tilde H x_k$ is subtracted from both sides of (30). Considering that $C x_k = f(y_k)$, this leads to,

$$\tilde H (x_{k+1} - x_k) = E \tilde F^{-1} [f(\tilde y_k) - f(y_k)] \tag{31}$$

Notice that the above expression, being equivalent to (30), is less convenient from the computational point of view owing to the extra operations involved.

Next, the Taylor expansion around $\tilde y_k$ of the set of nonlinear functions $f$ is rearranged as follows, for $y = y_k$:

$$f(\tilde y_k) - f(y_k) = \tilde F (\tilde y_k - y_k) - R(y_k, \tilde y_k) \tag{32}$$

where the remainder comprises the second- and higher-order terms of the polynomial expansion,

$$R(y_k, \tilde y_k) = \sum_{j=2}^{\infty} \frac{\tilde F_j}{j!} \left[\mathrm{diag}(y_k - \tilde y_k)\right]^j e \tag{33}$$

where $\tilde F_j$ is the diagonal matrix containing the $j$-th derivatives of $f$, computed at $\tilde y_k$, and $e$ is a vector of 1’s.

Replacing (32) into (31), and taking into account that $E(\tilde y_k - y_k) = \Delta p_k$, yields,

$$\tilde H \Delta x_k = \Delta p_k - E \tilde F^{-1} R(y_k, \tilde y_k) \tag{34}$$

Comparing the resulting incremental model (34) with that of the conventional Newton’s method (11), the following remarks can be made:

1. If the current point, $x_k$, is so close to the solution that $\tilde y_k \cong y_k$, then $R(y_k, \tilde y_k)$ will be negligible compared to $\Delta p_k$. In these cases, the only difference of the factored method with respect to NR lies in the use of the updated Jacobian, $\tilde H$, rather than $H_k$, and the local convergence rate of the factored algorithm will be of the same order as that of Newton’s method.

2. If the current point, $x_k$, is still so far away from the solution that $\tilde y_k$ differs significantly from $y_k$, then $R(y_k, \tilde y_k)$ will dominate the right-hand side in (34). In these cases, the behaviour of the factored scheme can be totally different from that of Newton’s method, which fully ignores $R$ when computing $\Delta x_k$.

3. As expected, if step 1 is omitted (i.e., $\tilde y_k = y_k$), the factored scheme reduces to the standard Newton’s method.

Let $x_\infty$ and $y_\infty = f^{-1}(C x_\infty)$ denote the solution point. Further insight into Remark 1 above, regarding the local convergence rate, can be gained by subtracting $\tilde H x_\infty$ from both sides of (30) and rearranging, which leads for the factored method to:

$$\tilde H (x_\infty - x_{k+1}) = E \tilde F^{-1} R(y_\infty, \tilde y_k) \tag{35}$$

Manipulating (11) in a similar fashion yields, for the NR method:

$$H_k (x_\infty - x_{k+1}) = E F_k^{-1} R(y_\infty, y_k) \tag{36}$$

The last two expressions confirm that both the NR and the factored schemes converge quadratically near the solution (third- and higher-order terms being negligible in $R$). Moreover, if $\tilde y_k$ is closer than $y_k$ to the solution $y_\infty$, then the factored method converges faster than Newton’s.

In summary, from the above discussion it can be concluded that the local behaviour of the proposed factored algorithm is comparable to Newton’s method (quadratic convergence rate), while the global behaviour (shape of the basins of attraction) will be in general quite different. The fact that step 1 tries to minimize the distance between $\tilde y_k$ and $y_k$ surely explains the much wider basins of attraction found in practice for the factored method, but there is no way to predict beforehand the global behaviour of any iterative algorithm in the general case (the reader is referred to Section 3, "Aspects of Convergence Analysis", in [22], for a succinct but clarifying discussion of this issue).

## III Application to canonical forms

The factored model and, consequently, the two-stage procedure presented above can be applied in a straightforward manner to a wide range of nonlinear equation systems, namely those which adopt, or can be directly transformed into, the following canonical forms. Small tutorial examples will be used to illustrate the ideas. None of those examples are intended to assess the potential computational benefits of the factored solution process, but rather to show the improved convergence pattern over the NR method (the reader is referred to Section VIII, where very large equation systems arising in the steady-state solution of electrical power systems are solved by both methods and solution times are compared).

### III-A Sums of single-variable nonlinear elementary functions

The simplest canonical form of nonlinear systems arises when they are composed of sums of nonlinear terms, each being an elementary invertible function of just a single variable. Mathematically, each component of $h(x)$ should have the form,

$$h_i(x) = \sum_{j=1}^{m_i} c_{ij} \, h_{ij}(x_k)$$

where $c_{ij}$ is any real number and $h_{ij}^{-1}$ can be analytically obtained. In this case, for each distinct nonlinear term, $h_{ij}(x_k)$, a new variable

$$y_{ij} = h_{ij}(x_k)$$

is added to the auxiliary vector $y$, keeping in mind that repeated definitions should be avoided. This leads to the desired underdetermined linear system to be solved at the first step:

$$h_i(x) = \sum_{j=1}^{m_i} c_{ij} \, y_{ij}$$

while matrix $C$ of the overdetermined linear system is trivially obtained from the definition of $u$:

$$u_{ij} = h_{ij}^{-1}(y_{ij}) = x_k$$

Example 1:

Consider the simple nonlinear function,

$$p = x^4 - x^3$$

represented in Fig. 1. Introducing two auxiliary variables,

$$y_1 = x^4; \qquad y_2 = x^3$$

$$p = y_1 - y_2$$

Therefore, with the above notation, the involved matrices are:

$$u = f(y) = \begin{pmatrix} \sqrt[4]{y_1} \\ \sqrt[3]{y_2} \end{pmatrix}; \qquad F^{-1} = \begin{pmatrix} 4 u_1^3 & 0 \\ 0 & 3 u_2^2 \end{pmatrix}$$

For the value of $p$ considered there are two solutions (points A and B in figure 1). Table I shows, for different starting points, the number of iterations required to converge (the same convergence tolerance being used in all examples) by both the NR and the proposed factored procedure. Notice that the factored procedure converges much faster than the NR method. The farther from the solution point, the better the behaviour of the proposed procedure compared to Newton’s method.

For values of $x_0$ below 0.75 (the minimum of the function, where the slope changes its sign), the NR method converges to point A. On the other hand, the factored scheme always converges to point B no matter which initial guess is chosen. This issue is discussed in more detail in section V, where it is also explained how to reach other solution points. As expected, the NR method fails when started at a point of null initial slope, whereas the factored solution does not suffer from this limitation (the updated Jacobian corresponding to the first value $\tilde y_0$ is not singular).

Finally, the components of vector $\lambda$ (and obviously $E^T \lambda$) are always null at the solution point, indicating that a feasible solution has been found in all cases (see section VI).

Example 2:

In this example, the following periodic function will be considered,

$$p = \sin x + \cos x$$

for which two auxiliary variables are needed,

$$y_1 = \sin u_1; \qquad y_2 = \cos u_2$$

with $u_1 = u_2 = x$. The relevant matrices arising in this case are:

$$E = \begin{pmatrix} 1 & 1 \end{pmatrix}; \qquad C = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$

$$u = f(y) = \begin{pmatrix} \arcsin y_1 \\ \arccos y_2 \end{pmatrix}; \qquad F^{-1} = \begin{pmatrix} \cos u_1 & 0 \\ 0 & -\sin u_2 \end{pmatrix}$$

For $p = 1.4$, the two solutions closest to the origin are $0.6435$ and $0.9273$ (points A and B in Fig. 2).

Table II shows, for different starting points, the number of iterations required to converge by both methods and the final solution reached. In this particular example, the NR method sometimes outperforms the factored scheme in terms of number of iterations. However, note that, for starting points far away from the origin, the NR method may converge to ‘remote’ periodic solutions (shown in boldface), whereas the factored scheme always converges to points A or B (this issue is further discussed in section V). Other advantages of the factored method will become apparent for infeasible values of $p$ (see the same example, with an infeasible $p$, in section VI).

In all cases, the final solution provided by the factored scheme is real (or, more precisely, its imaginary component is well below the convergence tolerance). However, complex intermediate solutions are obtained in some cases, owing to the fact that $\arcsin y$ and $\arccos y$ return complex values for $|y| > 1$ (this is discussed below in section VI).

### III-B Sums of products of single-variable power functions

Another canonical form to which the factored method can be easily applied arises when the nonlinear equations to be solved are sums of products of nonlinear terms, each being an arbitrary power of a single variable. Mathematically, each component of $h(x)$ has the form,

$$h_i(x) = \sum_{j=1}^{m_i} c_{ij} \prod_{k=1}^{n_j} x_k^{q_k}$$

where $c_{ij}$ and $q_k$ are arbitrary real numbers.

This case can be trivially handled if the original set of variables, $x$, is replaced by its log counterpart,

$$\alpha_k = \ln x_k \qquad k = 1, \ldots, n$$

Then the auxiliary vector $y$ is defined as in the previous case, avoiding duplicated entries:

$$y_{ij} = \prod_{k=1}^{n_j} x_k^{q_k} = \prod_{k=1}^{n_j} \exp(q_k \alpha_k)$$

which leads to the desired underdetermined linear system:

$$h_i(x) = \sum_{j=1}^{m_i} c_{ij} \, y_{ij}$$

The second key point is to embed the log function in the nonlinear relationship $u = f(y)$, as follows:

$$u_{ij} = \ln y_{ij} \;\Rightarrow\; y_{ij} = \exp u_{ij}$$

which leads to the overdetermined linear system to be solved at the second step:

$$u_{ij} = \sum_{k=1}^{n_j} q_k \alpha_k$$

Once vector $\alpha$ is obtained by the factored procedure, the original unknown vector $x$ can be recovered from:

$$x_k = \exp \alpha_k \qquad k = 1, \ldots, n$$
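The log-variable construction can be sanity-checked numerically: for any positive $x$, the vector of monomial terms satisfies $\ln y = C\alpha$ exactly. A small sketch, with made-up exponents chosen for illustration:

```python
import numpy as np

# Monomial terms y_j = x1**q_j1 * x2**q_j2 (exponent matrix made up).
# In log variables alpha_k = ln x_k, each u_j = ln y_j is the linear
# combination u_j = q_j1 * alpha_1 + q_j2 * alpha_2, i.e. u = Q @ alpha.
Q = np.array([[1.0, 1.0],     # x1 * x2
              [1.0, 2.0],     # x1 * x2**2
              [2.0, 1.0],     # x1**2 * x2
              [2.0, 0.0]])    # x1**2

x = np.array([2.0, 3.0])      # arbitrary positive test point
alpha = np.log(x)

y = np.array([x[0]*x[1], x[0]*x[1]**2, x[0]**2*x[1], x[0]**2])
print(np.allclose(Q @ alpha, np.log(y)))   # True: u = C α holds exactly
```

In other words, the nonlinearity is entirely absorbed by the elementwise exponential, leaving two purely linear systems on either side.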

Example 3:

Consider the following nonlinear system in two unknowns:

$$\begin{aligned} p_1 &= x_1 x_2 + x_1 x_2^2 \\ p_2 &= 2 x_1^2 x_2 - x_1^2 \end{aligned}$$

which, upon introduction of four variables,

$$y_1 = x_1 x_2; \quad y_2 = x_1 x_2^2; \quad y_3 = x_1^2 x_2; \quad y_4 = x_1^2$$

is converted into an underdetermined linear system:

$$\begin{pmatrix} p_1 \\ p_2 \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 2 & -1 \end{pmatrix}}_{E} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{pmatrix}$$

The nonlinear relationships are,

$$u_i = \ln y_i \qquad i = 1, 2, 3, 4$$

which lead to the following Jacobian:

$$F^{-1} = \begin{pmatrix} \exp u_1 & 0 & 0 & 0 \\ 0 & \exp u_2 & 0 & 0 \\ 0 & 0 & \exp u_3 & 0 \\ 0 & 0 & 0 & \exp u_4 \end{pmatrix}$$

and the final overdetermined system in the log variables,

$$\underbrace{\begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 2 & 1 \\ 2 & 0 \end{pmatrix}}_{C} \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{pmatrix}$$

Once the problem is solved in the log variables, the original variables are recovered from,

$$x_i = \exp \alpha_i \qquad i = 1, 2$$

For the specified values of $p_1$ and $p_2$ there is a known solution. Table III provides the number of iterations for different starting points. Apart from taking a larger number of iterations, the NR method sometimes converges to an alternative solution point (marked with an ‘A’ in the table), unlike the factored scheme, which seems to converge invariably to the same point. For the farthest starting point tested (last row), the NR method does not converge in 50 iterations. On the other hand, the factored scheme is much less sensitive to the initial guess, since it converges systematically to the same point.

## IV Transformation to canonical forms

There are many systems of equations which can be easily transformed into one of the canonical forms discussed above, by strategically introducing extra variables aimed at simplifying the complicating factors. Then, the nonlinear expressions defining the artificial variables are added to the original equation system, leading to an augmented one.

Example 4:

Consider the following nonlinear system:

$$\begin{aligned} p_1 &= x_1 \sin(x_1^2 + x_2) - x_1^2 \\ p_2 &= x_1^2 x_2 - \sqrt{x_2} \end{aligned}$$

By defining a new variable,

$$x_3 = \sin(x_1^2 + x_2)$$

it can be rewritten in augmented canonical form, as follows:

$$\begin{aligned} p_1 &= x_1 x_3 - x_1^2 \\ p_2 &= x_1^2 x_2 - \sqrt{x_2} \\ 0 &= x_1^2 + x_2 - \arcsin x_3 \end{aligned}$$

Note that in this case only the original unknowns, $x_1$ and $x_2$, need to be initialized, since $x_3$ can be set according to its definition.
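A quick numerical check (at an arbitrary, made-up test point) confirms that the augmented canonical form is equivalent to the original system wherever $x_1^2 + x_2$ stays within the principal branch of $\arcsin$:

```python
import math

# Arbitrary test point (values made up); x3 follows from its definition.
x1, x2 = 1.0, 0.5
x3 = math.sin(x1**2 + x2)

# Original left-hand sides
p1_orig = x1 * math.sin(x1**2 + x2) - x1**2
p2_orig = x1**2 * x2 - math.sqrt(x2)

# Augmented canonical form
p1_aug = x1 * x3 - x1**2
p2_aug = x1**2 * x2 - math.sqrt(x2)
third = x1**2 + x2 - math.asin(x3)   # must vanish (x1**2 + x2 < pi/2 here)

print(p1_orig - p1_aug, third)       # both ≈ 0
```

Outside the principal branch the third equation would require one of the extended-range inverses discussed in the next section.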

## V Extending the range of reachable solutions

When there are multiple solutions, the NR method converges to a point determined by the basin of attraction in which the initial guess is located. Some heuristics have been developed enabling the NR method to reach alternative solutions in a more or less systematic manner, such as the differential equation approach proposed in [23], which modifies the equation structure according to the sign of the Jacobian determinant.

In this regard, the behaviour of the proposed factored algorithm is mainly affected by the computable range of the $u$ components. When this range does not span the whole real axis, the two-stage procedure may not be able to reach some solution points, irrespective of the starting point chosen. For instance, if $y = u^q$, with $q$ even, then during the solution process, the expression

$$u = f(y) = \sqrt[q]{y}$$

will always return a positive $u$, provided $y$ is positive (or a complex $u$ with positive real component if $y$ is negative). This will be determinant of the computed $x$ values. In those cases, an easy way of allowing the factored procedure to reach alternative solution points is by considering negative ranges for $u$, i.e.,

$$u = -\sqrt[q]{y}$$

Example 5:

In Example 1 it was noted that the factored procedure always converges to point B (the positive root in Fig. 1). However, if the original expression for $u_1$,

$$u_1 = \sqrt[4]{y_1}$$

is replaced by

$$u_1 = -\sqrt[4]{y_1}$$

then the factored procedure always converges to point A (negative root) in fewer iterations than in Example 1.

A similar situation arises when the nonlinear system to be solved contains periodic functions, since their inverses will have a limited range related to the period of the original function. For instance, if $y = \sin u$, then $u = \arcsin y$ will naturally lie in the range $[-\pi/2, \pi/2]$. If we want to obtain $u$ values in the extended range,

$$(q - 1/2)\pi < u < (q + 1/2)\pi$$

for any integer $q$, then we should replace $\arcsin y$ by

$$u = q\pi + (-1)^q \arcsin y$$

In a similar fashion, $u = \arccos y$, which naturally lies in the range $[0, \pi]$, should be replaced by,

$$u = (q + 1/2)\pi + (-1)^q (\arccos y - \pi/2)$$

to obtain values in the same extended range.
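The two extended-range inverses can be verified numerically: every branch $q$ still inverts the original periodic function. A short sketch:

```python
import math

def asin_ext(y, q):
    """Extended-range inverse sine: value lies in ((q-1/2)pi, (q+1/2)pi)."""
    return q * math.pi + (-1)**q * math.asin(y)

def acos_ext(y, q):
    """Extended-range inverse cosine, same branch convention."""
    return (q + 0.5) * math.pi + (-1)**q * (math.acos(y) - math.pi / 2)

# Every branch still inverts the original function:
y = 0.3
for q in range(-3, 4):
    print(q, math.sin(asin_ext(y, q)), math.cos(acos_ext(y, q)))  # both ≈ 0.3
```

Selecting $q$ therefore selects which period of the original function the iterate is allowed to live in, which is exactly the mechanism exploited in Examples 6 to 8.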

Example 6:

If, in Example 2, the expression,

$$u = f(y) = \begin{pmatrix} \arcsin y_1 \\ \arccos y_2 \end{pmatrix}$$

is replaced by,

$$u = \begin{pmatrix} 2\pi \\ 2\pi \end{pmatrix} + \begin{pmatrix} \arcsin y_1 \\ \arccos y_2 \end{pmatrix}$$

then the factored method converges, in a similar number of iterations, to the points 6.9267 and 7.2105, at a distance $2\pi$ from points A and B in figure 2 (in this simple example, involving a purely periodic function, this is a totally expected result, but the reader is referred to the not so trivial Example 7).

The fact that the range of $u$ adopted during the iterative process determines the solution point which is finally reached, irrespective of the starting point, is a nice feature worth investigating in the future, since by fully exploring all alternative $u$ ranges one might be able to systematically reach solution points in given regions of interest, without having to test a huge number of starting points. The following examples illustrate this idea.

Example 7:

Consider the extremely nonlinear non-periodic function,

$$p = x \sin x + \sqrt{x}$$

which is graphically represented in figure 3.

As explained in the previous examples, the following augmented system is to be factored,

$$\begin{aligned} p &= x_1 x_2 + \sqrt{x_1} \\ 0 &= x_2 - \sin x_1 \end{aligned}$$

The relevant components and relationships of the factored solution approach are,

$$y_1 = x_1 x_2; \quad y_2 = \sqrt{x_1}; \quad y_3 = x_2; \quad y_4 = \sin x_1$$

$$u_1 = \ln y_1; \quad u_2 = \ln y_2; \quad u_3 = \ln y_3; \quad u_4 = \ln(\arcsin y_4)$$

$$\begin{pmatrix} p \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{pmatrix}$$

$$\begin{pmatrix} 1 & 1 \\ 1/2 & 0 \\ 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{pmatrix}$$

where log variables $\alpha_i$ have been introduced,

$$\alpha_i = \ln x_i; \qquad i = 1, 2$$

The inverse of the Jacobian is in this case:

$$F^{-1} = \begin{pmatrix} \exp u_1 & 0 & 0 & 0 \\ 0 & \exp u_2 & 0 & 0 \\ 0 & 0 & \exp u_3 & 0 \\ 0 & 0 & 0 & \exp u_4 \cos(\exp u_4) \end{pmatrix}$$

For the chosen value of $p$, real solutions exist only in certain regions of figure 3. In order to reach all of them (points A, B, C and D in figure 3), as explained before, we have to replace,

$$u_4 = \ln(\arcsin y_4)$$

by the more general expression,

$$u_4 = \ln\left[q\pi + (-1)^q \arcsin y_4\right]$$

for successive integer values of $q$.

Table IV shows, for increasing values of $q$, the points to which the factored solution converges, as well as the number of iterations (starting from the same initial guess in all cases). The complex solutions provided for certain values of $q$ (whose real component is very close to the local maximum), indicate that no real solution exists in that region.

One more example will be worked out to illustrate the capability of the factored procedure to reach different solutions, just by extending the computed range of certain components.

Example 8:

Consider the 2×2 system proposed by P. Boggs [21],

$$\begin{aligned} -1 &= x_1^2 - x_2 \\ 0 &= x_1 - \cos(\pi x_2 / 2) \end{aligned}$$

which is known to have only three solutions.

In this case, the following relationships are involved in the factored method:

$$y_1 = x_1^2; \quad y_2 = x_2; \quad y_3 = x_1; \quad y_4 = \cos(\pi x_2 / 2)$$

$$u_1 = \sqrt{y_1}; \quad u_2 = y_2; \quad u_3 = y_3; \quad u_4 = 2(\arccos y_4)/\pi$$

$$\begin{pmatrix} -1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{pmatrix}$$

$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{pmatrix}$$

$$F^{-1} = \begin{pmatrix} 2 u_1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -\pi \sin(\pi u_4 / 2)/2 \end{pmatrix}$$

When $u_1$ and $u_4$ are defined as above, the factored method always converges to the same solution point, irrespective of the starting point $x_0$. What is as important, the number of iterations is only slightly affected by $x_0$, no matter how arbitrary it is.

If we extend the computed range of $u_1$ to negative values, by replacing the original definition with,

$$u_1 = -\sqrt{y_1}$$

then the factored method always converges to a second solution point, irrespective of the starting point. Finally, when the ranges of both $u_1$ and $u_4$ are extended, as follows,

$$u_1 = -\sqrt{y_1}; \qquad u_4 = 2(2\pi - \arccos y_4)/\pi$$

the third solution point is reached for arbitrary values of $x_0$.

This kind of controlled convergence cannot be easily implemented in the NR method, characterized by rather complex attraction basins in this particular example. In fact, Newton’s method may behave irregularly even when started near the solutions [22].

Interestingly enough, if the fourth combination of ranges is selected,

$$u_1 = \sqrt{y_1}; \qquad u_4 = 2(2\pi - \arccos y_4)/\pi$$

then the factored procedure converges to a complex point, showing that no real solution exists in that region. This issue is discussed in the next section.

## VI Infeasibility and complex solutions

A nonlinear system with real coefficients will have in general an undetermined number of real and complex solutions. It can be easily shown that complex solutions constitute conjugate pairs if and only if a certain symmetry condition is met by the elementary functions involved, which happens for many functions of practical interest. When no real solutions exist we say that vector $p$ is infeasible.

The basin of attraction associated to each solution (real or complex) is characteristic of the iterative method adopted. As discussed in section II-D and illustrated by the previous examples, the basins of attraction associated to the factored method are quite different from those of Newton’s algorithm. Depending on which basin of attraction the initial guess lies in, the factored method will converge to a real or a complex solution.

A summary of the possible outcomes of the factored algorithm follows:

• Convergence to a real solution: for feasible cases, starting from a real $x^0$ close enough to a solution, the factored scheme eventually converges to the nearby real solution. This is favoured by the introduction of the first step, aimed at minimizing the distance between successive intermediate points. Note, however, that even if both $x^0$ and the solution are real, the two-stage procedure may give rise to complex intermediate values during the iterative process, depending on the domains of existence of the elementary nonlinear functions. A real solution (or, more precisely, a solution with a negligible imaginary component) can also be reached starting from a complex $x^0$.

• Convergence to a complex solution: for infeasible cases the factored method can only converge to a complex solution, provided a suitable $x^0$ is chosen. Notice that complex solutions cannot be reached if $x^0$ is real and the domains of the elementary functions and their inverses span the entire real axis. In those cases, selecting a complex initial guess is usually helpful to allow the algorithm to reach a complex solution. For feasible cases, if $x^0$ is real but far from a real solution, the factored iterative procedure might converge to a complex point.

• Inability to converge: like any iterative algorithm, the factored scheme may numerically break down owing to unbounded values returned by the elementary functions or their inverses, as happens, for instance, when an asymptote is approached. It can also remain oscillating, either periodically or chaotically, which typically happens when $x^0$ is real, no real solution exists and complex intermediate values cannot arise.

The following examples illustrate the behaviour of the factored scheme as well as the nature of the complex solutions obtained when solving infeasible cases.

Example 10:

Let us reconsider Example 2 with an infeasible right-hand-side value, for which the NR method fails to converge to a meaningful point. The two-stage procedure always converges, in a small number of iterations, to the same complex value (or its conjugate), as shown in Table V for the same real starting points as in Example 2 (the same happens with other real starting points).

However, if we prevent the iterates from taking complex values, by restricting the elementary functions to their real domains, the two-stage procedure keeps oscillating around real values, much like the NR method.

We can use this example to see what happens if the right-hand-side value is increased further. Table VI presents the number of iterations and the solution points for increasing values of the right-hand side (starting from the same $x^0$). The maximum value for which a feasible (real) solution exists corresponds to the critical point. For larger values the factored approach converges to complex values with the same real component and increasing absolute values. Eventually, the two-step procedure breaks down, which means that $x^0$ no longer lies in the basin of attraction for such infeasible values.

Experimental results show that the complex values to which the factored method converges in infeasible cases are not arbitrary. In this example, evaluating the nonlinear function at the real component of the solution point (0.7854) yields a value (point C in figure 2) lying almost exactly at the maximum of the function.

In Example 1, with a right-hand-side value that is infeasible in the real domain, the factored method converges from nearly arbitrary real starting points to a complex value. Evaluating the nonlinear function at the real part of that solution (point C in figure 1) yields a value very close to the minimum of the function.

Example 11:

Let us consider the periodic function,

$$p = \tan x - \tan(x - \pi/2)$$

which is infeasible for $|p| < 2$ and has an infinite number of solutions otherwise (see figure 4).

This example is very similar to Example 2, except for the way the auxiliary variables are defined:

$$y_1 = \tan x, \qquad y_2 = \tan(x - \pi/2)$$

In this case, both the elementary nonlinear functions,

$$u_1 = \arctan y_1, \qquad u_2 = \pi/2 + \arctan y_2$$

and the Jacobian functions,

$$F^{-1} = \begin{pmatrix} 1 + \tan^2 u_1 & 0\\ 0 & 1 + \tan^2(u_2 - \pi/2) \end{pmatrix}$$

are well defined over the entire real axis (except at the asymptotes of the tangent functions).

Therefore, starting from a real value $x^0$, the factored procedure (and also the NR scheme) will always remain in the real domain, unlike in Example 2, which involves the square-root and arccos functions.

With $p = 3$, the two-stage algorithm converges in 5 iterations to the solution point 1.2059 from one real starting point, whereas the other solution point, 0.3649, is reached, also in 5 iterations, from a different one.

For an infeasible value of $p$, the algorithm keeps oscillating for any real starting point. However, with a complex starting value, it converges to the complex solutions shown in Table VII for different infeasible values of $p$.

Note that the real component of the solution is always the same (0.7854). This value is exactly $\pi/4$, where the nonlinear function attains a local minimum.

The above examples suggest that, when the two-stage method fails to find a real solution, it tries to reach a complex point whose real component lies in the neighborhood of an extreme point of the function, which is indeed a nice feature. This is surely a consequence of the strategy adopted in step 1, which attempts to minimize the distance from new solution points to the previous ones, but this conjecture deserves a closer examination, which is beyond the scope of this preliminary work.

In any case, it is advisable to adopt complex starting points, particularly when infeasibility is suspected, so that complex solutions can be smoothly reached.

## VII Critical points

Feasible (real) and infeasible (complex) solution points can be obtained as long as the Jacobian matrix remains nonsingular throughout the iterative process. This applies to both the NR and the factored methods. Even though critical points are theoretically associated with singular Jacobians, in practice they can also be safely approached, so long as the condition number of the Jacobian remains acceptable for the available computing precision and the convergence threshold adopted. However, the convergence rate tends to slow down significantly near critical points.

It is worth noting that, in the surroundings of a critical point, it may be preferable to deal with the augmented equation (26) rather than solving the more compact one (27), whose underlying assumption may no longer hold there.

Example 12:

Let us consider again the periodic nonlinear function of Example 11:

$$p = \tan x - \tan(x - \pi/2)$$

which has an infinite number of critical points for $p = \pm 2$. Table VIII provides the number of iterations and the solution points reached by both the NR and factored methods for different starting points (as in previous examples, the same convergence threshold is used). As can be seen, the factored method is much more robust against the choice of starting point, and always converges to the same critical point.

In the neighbourhood of the critical point the number of iterations increases significantly, but the factored procedure still performs much better than the NR method. For the sake of comparison, Table VIII also shows the values corresponding to a nearby feasible value of $p$.

In addition, numerical problems may arise during the factored solution procedure if, by a numerical coincidence, any element of the diagonal matrix $F$ becomes null or undefined. This risk can easily be circumvented by preventing diagonal elements of $F$ from becoming smaller than a certain threshold or abnormally large.
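Such a safeguard can be sketched as follows; the thresholds, the clamping policy and the function name are illustrative choices, not values taken from the article:

```python
import numpy as np

def clamp_diagonal(F, eps=1e-8, big=1e8):
    """Safeguard for a diagonal Jacobian factor: replace entries that are
    (numerically) null, huge, or undefined. Thresholds are illustrative."""
    d = np.array(np.diag(F), dtype=float)
    d[~np.isfinite(d)] = big                         # undefined -> large finite
    small = np.abs(d) < eps
    d[small] = np.where(d[small] < 0, -eps, eps)     # keep sign, avoid zero
    d = np.clip(d, -big, big)                        # cap abnormally large values
    out = F.copy()
    np.fill_diagonal(out, d)
    return out

F = np.diag([2.0, 0.0, np.inf, -1e-12])
print(np.diag(clamp_diagonal(F)))   # entries clamped away from 0 and infinity
```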

## VIII Large-scale test cases

This section provides test results corresponding to the nonlinear systems arising in the steady-state analysis of electrical pow