Solution of the network localization problem with noisy distances and its convergence
The network localization problem with convex and non-convex distance constraints can be modeled as a nonlinear optimization problem. Existing localization techniques are mainly based on convex optimization; in them, the non-convex distance constraints are either ignored or relaxed into convex constraints so that convex optimization methods such as SDP or least-squares approximation can be applied. We propose a method for solving the nonlinear non-convex network localization problem with noisy distance measurements without modifying any constraint of the general model. We use the nonlinear Lagrangian technique for non-convex optimization to convert the problem into a root finding problem for a single-variable continuous function, which is then solved using an iterative method. In each step of the iteration, however, the computation of the functional value involves a finite mini-max problem (FMX). We use a smoothing gradient method to solve the FMX problem. We also prove that the solution obtained from the proposed iterative method converges to the actual solution of the general localization problem. The proposed method obtains solutions with a desired level of accuracy in real time.
Network localization technique, Localization with non-convex distance constraints, Localization with noisy distances, Applications of Lagrange optimization in localization, Mini-max optimization problem, Non-convex optimization.
With recent technological advances, sensor networks are being adopted for collecting data from, and monitoring, different hostile environments (Figure 1). A network may consist of sensor nodes, RFID readers, members of a rescue team in a disaster management system, etc.
Air pollution monitoring, forest fire detection, landslide detection, water quality detection, and natural disaster prevention are some familiar fields of application in which sensor networks are useful. When a network is deployed in some region, the sensor nodes identify the events within their sensing ranges and transmit the collected information to the nodes within their communication ranges (a node within the communication range of another node is called a neighbor). The location of an event can naively be estimated from the positions of the nodes identifying the event. Thus knowing the locations of the nodes is essential for properly monitoring the events. The objective of network localization is to determine the node locations of a network using the available distance information.
Installing GPS (Global Positioning System) on each node of a network to find its location is costly. Therefore, localization techniques that do not use GPS deserve research focus. Many researchers have proposed novel algorithms for finding node positions using the information available from neighboring nodes. Distance measurements among neighboring nodes are popularly used for computing the node positions. These distances are measured by instruments embedded inside the nodes. It is practically difficult to measure the distances among nodes exactly, even with existing sophisticated hardware. Thus network localization with noisy distance measurements demands rigorous research. In the literature, there are several algorithms based on exact [29, 7] as well as noisy distances [12, 8]. Graph rigidity theory and optimization theory are popularly used by researchers for developing localization algorithms [12, 8].
Any network may be represented by a distance graph (a graph with edge weights equal to the distances between the end points of each edge). The network localization problem with the graph model of the network is equivalent to the graph realization problem. Saxe proved that the problem of embedding graphs in one-dimensional space is NP-complete and that in higher-dimensional spaces it is NP-hard. Later, Aspnes, Goldenberg and Yang proved that finding a realization of a graph is NP-hard even if it is known that the graph has a unique realization.
During the last few decades, some variants of the general network localization problem have been solved. In [27, 26, 28] the localization problem with exact node distances has been discussed for wireless sensor networks; these works used an ordering of the nodes of the underlying network and graph rigidity properties for localization. In distance-based network localization, the solution of the localization problem may be unique, or there may be finitely or infinitely many solutions (up to congruence). Testing the unique localizability of networks with exact distances among nodes has been discussed in [18, 6]. If a network is not uniquely localizable then it must have some nodes which may either be freely rotated with respect to some other nodes or reflected with respect to some edges. If some nodes of the network may be rotated, then the number of solutions of the associated localization problem is infinite. If, in the underlying graph of a network, a vertex (or a few vertices) may be reflected with respect to a set of neighbors that are almost collinear, then it is called a flip vertex of the network and this phenomenon in network localization is called flip ambiguity. Analysis of flip ambiguity in network localization has been discussed in the literature. To find a unique localization of a network, removing the flip ambiguity of its nodes is essential.
In real fields of application, collecting the exact distances among adjacent pairs of nodes is almost impossible. Doherty et al. formulated the localization problem with noisy distances as a non-convex optimization problem. They excluded the non-convex constraints from the general problem to obtain a convex version of it and solved that version using semi-definite programming (SDP). Biswas et al. converted the same non-convex network localization problem to a convex localization problem using a relaxation technique and solved it by SDP. To the best of our knowledge, solving the general problem is still a challenge. In this work we address that challenge by solving the general nonlinear non-convex network localization problem using Lagrangian optimization. As far as we know, this is the first approach that solves the network localization problem without any modification of the constraints.
In our previous works [24, 25] (published in the proceedings of international conferences), we converted the nonlinear non-convex network localization problem to a root finding problem of a single-variable continuous function. We chose the standard bisection method for solving this root finding problem since its iterations are guaranteed to converge to a root. The root finding problem inherently includes the finite mini-max problem, which is NP-hard. We used the sequential quadratic programming method [16, 31, 22] to solve the finite mini-max problem, which yields only an approximate finite mini-max value of the function. Therefore, in the iterative method, the sign of the function could be determined incorrectly due to the approximation: at an iterative step the computed approximate value of the function may be positive while its actual value is negative, in which case the bisection method considers the sign of the function positive though it is actually negative. Thus the correct interval containing the root was sometimes not determined in the bisection method due to the approximation. This drawback of using the bisection method is rectified in this paper.
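The sign-flip drawback described above can be seen in a small sketch. The following Python fragment is illustrative only (the actual function in this paper is the Lagrangian-based function constructed later, and the error here is a hypothetical constant offset): a textbook bisection run on the exact function finds the root, while a small systematic approximation error reports the wrong sign of the function near that root.

```python
def bisection(f, a, b, tol=1e-9, max_iter=100):
    """Textbook bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(fm) < tol or (b - a) < tol:
            return m
        if fa * fm < 0:
            b = m            # root lies in [a, m]
        else:
            a, fa = m, fm    # root lies in [m, b]
    return 0.5 * (a + b)

f_true = lambda c: 1.0 - c              # exact function, root at c = 1
f_approx = lambda c: f_true(c) + 0.01   # evaluation with a small systematic error

root = bisection(f_true, 0.0, 2.0)      # exact evaluations locate the root

# Near the root the approximate evaluation reports the wrong sign,
# so a bisection driven by f_approx would keep the wrong half-interval:
c = 1.005
wrong_sign = f_true(c) < 0 < f_approx(c)
```

Driving the bisection with `f_approx` instead of `f_true` is exactly the failure mode that motivates the bound-based interval test developed later in this paper.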
A network localization problem may have different congruent solutions in the Euclidean space even if the distance information is collected from some practical field of application of a WSN. The Euclidean space is unbounded, so the congruent solutions of the localization problem may be anywhere in it. In our work, we constructed the root finding problem [24, 25] such that the estimated node positions of the network lie close to the origin of the Euclidean space with respect to some rectangular axes. We established that we may always identify a compact region (i.e., a closed and bounded region) in the two-dimensional Euclidean space containing the origin within which the localization problem must have a solution [24, 25].
In this paper, the construction of the root finding problem from the network localization problem is revised thoroughly. We develop an iterative method in light of the bisection method for finding a root of the function. In each step of the iterative method it is required to determine whether the function takes values with opposite signs at the end points of a sub-interval of the interval identified in the previous iteration. We compute a tight bound for the function which depends on its approximately computed value in the iteration. Using these bounds and the monotonic non-increasing property of the function, we determine the required sub-interval in our method. In this way, without computing the exact value of the function, we proceed to find a solution. We establish that the method converges to a solution of the network localization problem and that the solutions of the root finding problem may be achieved up to a desired level of accuracy within an acceptable number of iterations.
Organization of the paper: In Section 2, we present the general network localization problem with the convex and non-convex distance constraints. The construction of Lagrangian form of the network localization problem is given in Section 3. In Section 4, we discuss the technique for solving the Lagrangian network localization problem. Convergence of the solution technique is analyzed in Section 5 along with some instances of networks for which we implement the root finding method for finding a localization. We sketch an error analysis of the proposed method in Section 6 and conclude in Section 7.
2 Network localization problem
Let be an ad-hoc network; is the set of nodes () and is the set of communication links. The underlying graph is the grounded graph of . A realization of in -space is a - mapping from the vertex set to . Two different realizations and of are equivalent if for each edge in , where, is the standard Euclidean norm in . and are congruent if the equality holds for each pair of vertices in . A realization of in -space is unique up to congruence if every realization equivalent to is congruent to . If and are congruent to each other then can be obtained from by a suitable transformation of the coordinate system in -space and recomputing according to the new coordinate system. To fix a coordinate system in a -space, independent points are required with known positions. Therefore a uniquely realizable framework can be uniquely located in a -dimensional space if we can fix points in the space. On the other hand, if has two or more equivalent realizations which are non-congruent in a -space then is called ambiguously -realizable.
A distance-based localization algorithm determines the locations of nodes in a network by using the known positions of anchors, if any, and a given set of inter-node distance measurements. Let be the set of anchor nodes with known positions , , , and be the nodes with unknown positions . In this work, we find the positions of the nodes in assuming , i.e., the network has no anchor node. The technique is equally applicable to networks with anchor vertices. Let N be the set of all edges joining ’s. The upper and lower bounds on the exact length of an edge in N joining and are denoted by and . Let and be the matrices with -th entries and respectively. If two nodes and are not adjacent in the grounded graph then both matrices have -th entry zero. Under the anchor-free setting, the network localization problem may also be formulated as a nonlinear problem. The problem may formally be described as follows.
Given the edge set and the matrices , , of the grounded graph of a network with a set of nodes with unknown positions,
If is an estimation for the unknown positions of nodes in obtained by solving Problem 1, every realization congruent to obtained by translating the coordinate system is also a solution to Problem 1. The re-computation of such solutions can be avoided by including a function in Problem 1 as the objective function. A solution to this optimization problem will minimize the sum of squared distances of the unknown nodes from the origin.
Let each then ,,, is a point in . Suppose . For each edge , let be the function
Given the matrices , of the network with a set of nodes with unknown positions. Find solutions of the nonlinear optimization problem
In Problem 2, each constraint can be broken into two parts, namely, and . For each , if satisfy then
Therefore, each is a convex constraint. It can be shown that the constraints are not convex. Thus the constraints in Problem 2 can be classified into two types based on convexity,
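The non-convexity of the lower-bound distance constraints can be checked numerically. The sketch below is illustrative, with a hypothetical lower bound l = 1 on one edge and one of its endpoints fixed at the origin: two placements of the free endpoint satisfy the constraint, but their midpoint (a convex combination) violates it, so the constraint set is not convex.

```python
import numpy as np

l = 1.0                      # hypothetical lower bound on the edge length
origin = np.zeros(2)         # one endpoint of the edge, fixed at the origin

p1 = np.array([l, 0.0])      # placement satisfying ||p1 - origin|| >= l
p2 = np.array([-l, 0.0])     # placement satisfying ||p2 - origin|| >= l
mid = 0.5 * (p1 + p2)        # convex combination of p1 and p2

sat1 = np.linalg.norm(p1 - origin) >= l      # True
sat2 = np.linalg.norm(p2 - origin) >= l      # True
sat_mid = np.linalg.norm(mid - origin) >= l  # False: set is not convex
```

The upper-bound constraints, by contrast, describe Euclidean balls in the difference variable and are convex, which is why only the lower-bound constraints are relaxed or dropped in the SDP-based approaches discussed next.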
This work is focused on solving the network localization problem keeping the non-convex distance constraints unaltered. Though Doherty et al. formulated the localization problem as a non-convex optimization problem, they excluded the non-convex distance constraints to solve it using semi-definite programming (SDP). Biswas et al. converted the non-convex network localization problem into a convex optimization problem by relaxing the non-convex inequality constraints and solved the relaxed problem [13, 14] using SDP. A reason for using SDP is that it is approximately solvable in polynomial time. Yet none of these approaches solves the general network localization problem.
In this paper, using Lagrangian theory, the anchor-free network localization problem with noisy distance measurements is converted into a root finding problem without any modification of the nonlinear non-convex distance constraints. We solve the root finding problem using an iterative method and prove the convergence of the method to a solution of the localization problem. The method gives an estimation of the node positions up to a desired level of accuracy in real time.
3 Root finding problem construction using Lagrangian function
The network localization problem is inherently a non-convex optimization problem. In this section, we describe the Lagrangian function with the help of which we transform the general localization problem into a root finding problem. In Problem 2, each non-convex constraint can be written as where, . These modifications convert Problem 2 into a non-convex optimization problem as described in Problem 3.
Given the matrices , of the network consisting of nodes with unknown positions , solve the nonlinear optimization problem:
where, and .
3.1 Lagrangian function
Let , where is a positive real number independent of and are defined in Problem 3. Note that, , because the distance information for each pair of nodes is collected from a network where no two sensors are in the same position.
The Lagrangian function for Problem 3 is defined as,
A Lagrangian function may be defined in many ways for an optimization problem; among these we consider the above form in this paper. Below we prove that the Lagrangian function always attains its infimum within the field of interest. With its help, the problem defined in Problem 4 is later proved to be equivalent to Problem 3 under certain restrictions which are acceptable in any real situation.
Let the function attain its infimum at some point over its domain of definition, i.e.,
We have to find this infimum.
Below we describe a result from the literature which says that is an optimal solution of Problem 3 if and only if it is a solution of Problem 4. Thus if we can find a solution of Problem 4 then it may easily be mapped to an optimal solution of Problem 3 using this result. It may also be noted that this technique does not need convexity of the constraint functions, i.e., we do not ignore the non-convex constraints of the general problem.
Result 1 ().
3.2 Lagrange’s optimization problem
The network models under consideration are picked up from networks already embedded in the field of interest. For such an already embedded network, Problem 3 satisfies the following conditions:
The problem always has at least one feasible solution, since the graph underlying the network is constructed from a network already embedded in the field of interest.
Since in the feasible region and Problem 3 has feasible solution,
Problem 3 always possesses an optimal solution, say , in . At , . It may be noted that only when all the points are at origin.
Since is polynomial, it is uniformly continuous on the feasible region .
For , there always exists some such that since is continuous.
Under the above assumptions, we develop the following result which is used for developing the proposed localization problem.
In Problem 3 the feasible region
Without loss of generality we restrict in . Otherwise the origin may be shifted so that the feasible region is included in .
A set is compact in if and only if it is both bounded and closed. We give an explicit proof of the compactness by showing that the above defined set is both bounded and closed in . In view of condition , is bounded.
Closedness of : Let, for an arbitrarily chosen , and [where ], be a Cauchy sequence in with limit . Let the -th edge of the grounded graph join the nodes of the network. For , and if , . Let us first consider the case .
Since therefore for given any there exists some where for all
This gives for all ,
Thus for ,
Since the above relation holds for arbitrarily chosen , we get , i.e., . Thus if then is a compact set. For , the proof of closedness is similar to the above. Hence for each , is closed. is the intersection of a finite number of closed sets and hence is closed. Therefore is compact. ∎
The optimal solution of varies with the value of (for , each is given in the problem as a constant). We define a scalar function of the parameter as follows,
To construct the Lagrangian optimization problem we here present some results from  involving function .
Result 2 ().
Let be a finite set of continuously differentiable functions defined on an unbounded set , . Consider the optimization problem
under the following assumptions:
The feasible region is compact.
If is an optimal solution and for any arbitrary constant
Then there exists some such that . Also the following conditions hold.
In addition to the above conditions, if is uniformly continuous then .
is a monotone non-increasing continuous function.
if and only if
Let be an optimal solution of Problem 3. Then has the following properties:
If then .
If then .
is a non-increasing continuous function of .
if and only if
Under the network model, we have seen that the s in Problem 3 are continuously differentiable and . The above mentioned condition of the underlying network model shows that the feasible region of Problem 3 is compact. Condition shows that Problem 3 has an optimal solution, say . By condition , there always exists some such that when . The proof of this theorem then follows from Result 2. ∎
Given the matrices , of the network with a set of nodes with unknown positions. Let
Find such that .
To get a good estimation of the node positions in the network, we need to search for some positive real number for which there exists some such that the value of the function is equal or very close to . In the rest of this paper, we concentrate on finding or estimating the roots of .
4 Solving the root finding problem
In the previous section, we have seen that solving the network localization problem is equivalent to solving Problem 5. Here we prove that if we can find a root of the equation , the node positions of the network will be obtained from the corresponding at which exactly equals .
Let be a root of the equation . Then there exists an optimizing such that,
Since the feasible region of the network localization problem is compact, the optimal solution of the problem lies within a compact set. Therefore, instead of searching for the minimizing of the function over all of , we may restrict our search to a compact subset, say , containing the feasible region of the network localization problem. Such a compact set for Problem 5 may be constructed as follows:
Consider the field of interest, , for localizing the network in . If is a feasible solution of the general network localization problem somewhere in , then by using translation and rotation operations we may get a congruent realization of the network in . Since is bounded, we get an upper bound as well as a lower bound for each coordinate of any point lying in the region. Thus any localization of the network obtained by solving Problem 5 will lie within the field of interest.
Let and be the maximum and minimum for all of the coordinates in the field of interest. Consider a -dimensional box
in . is always compact. Let be a realization of the network within the field of interest. For each , if then . Therefore corresponding to each solution of the network localization problem there is a point in the -dimensional box.
The function is a continuous function of the variable on this compact set . The proof of the theorem will follow if we can show that the continuous function defined on attains its minimum at some point in .
Since is a compact set and the function is continuous, is a compact set (i.e., closed and bounded). Also, the infimum of any set is either a limit point or an element of the set. In both cases, the infimum of lies inside since is closed. Therefore, we get some such that , i.e., ∎
We develop an iterative method in light of the bisection method for finding a root of , . The method is guaranteed to converge to a root of the continuous function on an interval if and have opposite signs. At the initial stage of the iterative method we search for an interval containing within which must have a root. The searching process may proceed as follows:
Consider a real number . Computing requires solving a finite mini-max problem, which is NP-hard. We use the smoothing gradient technique (Section 4.1) to compute an approximate value of . In this smoothing technique, may be approximated in such a way that, depending on the approximated value of the function, we obtain an interval within which the actual functional value lies. Using these bounds and the monotonic non-increasing property of the function, we determine the sign of (Theorem 3). If is not a root of then one of the following cases may occur.
Case 1 ( is positive): Choose a point , where is an arbitrary positive number. Since is a non-increasing continuous function of (Theorem 1), for a sufficiently large constant either or . If then is a root of the equation and we are done. Otherwise the required interval is .
Case 2 ( is negative at ): Choose a point . By a similar argument as in Case 1, either or . If then we are done; otherwise the required interval is .
In this way, without computing the exact value of , we obtain an interval at the end points of which takes values with opposite signs, and we proceed to find a solution of . In the following section we describe the smoothing gradient technique which we implement for approximately computing .
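The interval search above can be sketched in code. This is an illustrative Python fragment, not the paper's exact procedure: it assumes each approximate evaluation has a known error bound err, declares a sign for the true function value only when the error band excludes zero, and walks along the axis (exploiting the non-increasing property of the function) until a certified sign change brackets a root. The toy function, step size, and error constants are hypothetical.

```python
def certified_sign(f_approx, c, err):
    """Return +1 or -1 when the sign of the true f(c) is certain, given
    |f(c) - f_approx(c)| <= err; return 0 when the band straddles zero."""
    v = f_approx(c)
    if v - err > 0:
        return 1
    if v + err < 0:
        return -1
    return 0

def bracket_root(f_approx, err, c0=1.0, step=1.0, max_tries=60):
    """Find [a, b] on which the true f has certified opposite signs,
    assuming f is monotone non-increasing in c."""
    s0 = certified_sign(f_approx, c0, err)
    if s0 == 0:
        return (c0, c0)               # f(c0) is already within err of zero
    prev = c0
    for _ in range(max_tries):
        c = prev + step if s0 > 0 else max(prev - step, 0.0)
        s = certified_sign(f_approx, c, err)
        if s == 0:
            return (c, c)             # close enough to a root
        if s != s0:
            return (min(prev, c), max(prev, c))  # certified sign change
        prev = c
    raise RuntimeError("no bracketing interval found")

# Toy non-increasing function with root at c = 2.5, evaluated with error 0.01:
interval = bracket_root(lambda c: 2.5 - c + 0.01, err=0.02)
```

Because a sign is declared only when the whole error band lies on one side of zero, the bracketing decision is never misled by the approximation, which is precisely the rectification of the bisection drawback discussed in the introduction.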
4.1 Smoothing Gradient Technique for solving finite mini-max
In the literature there are several smoothing techniques which may be used for solving a finite mini-max problem. Among them we choose one for solving our finite mini-max optimization problem in which the function remains bounded for each . The technique uses a smoothing function (given in (4)) to approximate the underlying non-smooth objective function . A smoothing function for a given non-smooth continuous function may be defined as follows.
Let be a continuous non-smooth function. We call a smoothing function of if is continuously differentiable in for every and
for any .
To solve the root finding problem we require an estimation for , where is a non-smooth continuous function. We consider the smoothing function for as follows:
The function provides a good estimation for since the following inequality holds.
, for .
Since , at any point ,
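Equation (4) is not reproduced in this extract; as an illustration we use the standard log-sum-exp (exponential penalty) smoothing of a finite maximum, f_mu(x) = mu * log(sum_i exp(f_i(x) / mu)), which satisfies an inequality of the stated type: max_i f_i(x) <= f_mu(x) <= max_i f_i(x) + mu * log(m) for m component functions, so the smoothed value converges to the true maximum as mu -> 0. A numerically stable sketch:

```python
import numpy as np

def smooth_max(vals, mu):
    """Log-sum-exp smoothing of max(vals): mu * log(sum(exp(v / mu))).
    Shifting by the maximum keeps the exponentials from overflowing."""
    v = np.asarray(vals, dtype=float)
    vmax = v.max()
    return vmax + mu * np.log(np.exp((v - vmax) / mu).sum())

vals = [0.3, 1.0, -2.0]                  # hypothetical component values f_i(x)
gap = {mu: smooth_max(vals, mu) - max(vals) for mu in (1.0, 0.1, 0.01)}
# For every mu: 0 <= gap[mu] <= mu * log(len(vals)), so smooth_max -> max as mu -> 0
```

The smoothed function is continuously differentiable for every positive mu, which is what makes a gradient method applicable to the otherwise non-smooth mini-max objective.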
We describe the smoothing gradient algorithm below, which produces a Clarke stationary point (Appendix A) for
In each step of the Smoothing Gradient Algorithm we use the WolfeLineSearchAlgorithm for finding for the next iteration. The algorithm searches for the maximum value of the constant satisfying the following two conditions.
where are from the smoothing gradient algorithm. The first condition ensures that at the -th step of the iteration the functional value is smaller than (since is negative). Here we show that an satisfying this condition always exists. From Taylor's theorem for multivariate functions applied to , we get
if, i.e. if,
Using Taylor's theorem, it may be concluded that such an always exists. Condition of the WolfeLineSearchAlgorithm has been inserted to keep sufficiently large, so that the slope of remains at least times the slope of .
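The two conditions above can be sketched generically in code. This is a standard bracketing Wolfe line search, not the paper's exact WolfeLineSearchAlgorithm; the constants c1 and c2 and the halving/doubling scheme are illustrative. The first test is the sufficient-decrease (Armijo) condition, whose violation means the step is too long; the second is the curvature condition, whose violation means the step is too short.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, alpha=1.0, max_iter=50):
    """Search for a step alpha satisfying both Wolfe conditions:
       (i)  f(x + a*d) <= f(x) + c1 * a * grad(x)^T d   (sufficient decrease)
       (ii) grad(x + a*d)^T d >= c2 * grad(x)^T d       (curvature)."""
    fx, slope = f(x), grad(x) @ d        # slope < 0 for a descent direction
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + c1 * alpha * slope:
            hi = alpha                   # (i) fails: step too long, shrink
        elif grad(x + alpha * d) @ d < c2 * slope:
            lo = alpha                   # (ii) fails: step too short, grow
        else:
            return alpha
        alpha = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * lo
    return alpha

# On the quadratic f(x) = 0.5 ||x||^2 with the steepest-descent direction,
# the unit step already satisfies both conditions:
f = lambda x: 0.5 * float(x @ x)
grad = lambda x: x
step = wolfe_line_search(f, grad, np.array([2.0, 0.0]), np.array([-2.0, 0.0]))
```

Keeping the step bounded away from zero via the curvature condition is what allows the convergence argument of the next section to go through for the smoothed objective.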
5 Convergence analysis of the root finding method
The convergence of the root finding method inherently depends on the convergence of the SmoothingGradientAlgorithm.
5.1 Convergence of SmoothingGradientAlgorithm
Let and be the sequences generated by the smoothing gradient algorithm. Towards proving the convergence of the method, let us first consider the set
in the smoothing gradient algorithm.
The set cannot be finite.
If is a finite set, then from the smoothing gradient algorithm we get that there exists an integer such that for all ,