Robust Target Localization Based on Squared Range Iterative Reweighted Least Squares

Alireza Zaeemzadeh, Mohsen Joneidi, Behzad Shahrasbi, and Nazanin Rahnavard
School of Electrical Engineering and Computer Science
University of Central Florida, Orlando, FL 32816, USA
Emails: {zaeemzadeh, joneidi, behzad, nazanin}@eecs.ucf.edu
This material is based upon work supported by the National Science Foundation under Grant No. ECCS-1418710 and Grant No. CCF-1718195.
Abstract

In this paper, the problem of target localization in the presence of outlying sensors is tackled. This problem is important in practice because in many real-world applications the sensors might report irrelevant data unintentionally or maliciously. The problem is formulated by applying robust statistics techniques on squared range measurements and two different approaches to solve the problem are proposed. The first approach is computationally efficient; however, only the objective convergence is guaranteed theoretically. On the other hand, the whole-sequence convergence of the second approach is established. To enjoy the benefit of both approaches, they are integrated to develop a hybrid algorithm that offers computational efficiency and theoretical guarantees.

The algorithms are evaluated for different simulated and real-world scenarios. The numerical results show that the proposed methods meet the Cramér-Rao lower bound (CRLB) for a sufficiently large number of measurements. When the number of measurements is small, the proposed position estimator does not achieve the CRLB, though it still outperforms several existing localization methods.

Index Terms—Target localization, robust localization, robust statistics, iterative reweighted least squares, generalized trust region subproblems

1 Introduction

The problem of localization arises in different fields of study such as wireless networks, navigation, surveillance, and acoustics [1, 2, 3]. There are many different approaches to localization based on various types of measurements such as range and squared-range (SR), time-of-arrival (ToA), time-difference-of-arrival (TDoA), two-way time-of-flight (TW-ToF), direction-of-arrival (DoA), and received-signal-strength (RSS) [2, 4, 5, 6, 7].

In [2], localization from range and range-difference measurements is considered and least-squares (LS) estimators are exploited. Authors in [1, 2, 3, 7] have established methods to find the exact or approximate solution in the maximum likelihood (ML) framework. Usually, finding the solution for ML estimators is a difficult task or computationally burdensome [3, 7].

In this paper, the problem of robust target localization is considered. In sensor networks, some nodes may report faulty data to the processing node unintentionally or maliciously. This may occur because of network failures, low battery, physical obstruction of the scene, and attackers. Thus, the processing node should not simply aggregate measurements from all sensors. It is more efficient to disregard the outlier measurements and localize the target based on reliable measurements.

There are different approaches toward robust localization. The method in [4] is obtained by modeling the ToA estimation error as Cauchy-Lorentz distribution. In [8], robust statistics, and specifically Huber norm, is exploited to localize sensors in a network in a distributed manner using the location of a subset of nodes. Authors in [6] try to minimize the worst-case likelihood function and employ semidefinite relaxation to attain the estimate using TW-ToF measurements. The authors in [9] have developed a robust geolocation method by estimating the probability density function (PDF) of the measurement error as a summation of Gaussian kernels. This method works best when the measurement error is drawn from a Gaussian mixture PDF.

In this paper, the goal is to localize a single target in the presence of outlier range measurements in a centralized manner. We aim to achieve outlier distributional robustness, which means the estimator performs well for different outlier probability distributions. A least squares methodology is applied to the squared range measurements. Although this formulation is not optimal in the ML sense [3], it provides us with the opportunity to find the estimate efficiently.

The contributions of this work can be summarized as follows. First, a robust optimization problem is formulated, which disregards unreliable measurements, using the squared-range formulation. Next, two different algorithms are proposed to find the solution of the optimization problem. The first algorithm, which is based on iteratively reweighted least squares (IRLS), transforms the proposed optimization problem into a special class of optimization problems, namely Generalized Trust Region Subproblems (GTRS) [10]. Numerical simulations show that this algorithm has fast objective convergence. However, the whole-sequence convergence is not established theoretically.

The second algorithm is based on gradient descent. This algorithm is globally convergent, but needs more iterations to converge. By combining these two algorithms, we propose a hybrid method, which has desirable theoretical and practical features, such as fast whole-sequence convergence.

The rest of this paper is organized in the following order. In Section 2, the system model is introduced. Section 3 describes the robust localization problem and two methods to tackle the problem are presented. Section 4 presents the simulation results and finally Section 5 draws conclusions.

2 System Model

Since the problem of source localization arises in different fields such as wireless networks, surveillance, navigation, and acoustics, a general system model is exploited. In the generalized model, the system is comprised of sensors, with known locations, and the location of the target is estimated using the range measurements reported by these sensors. A central processing node collects the measurements and computes the location of the target.

Each sensor reports a range estimate, denoted by $r_i$, given by

$r_i = \|x - a_i\| + \varepsilon_i, \quad i = 1, \ldots, m,$ (1)

where $\|\cdot\|$ denotes the Euclidean distance, $x \in \mathbb{R}^n$ is the coordinates of the target, $a_i \in \mathbb{R}^n$ is the location of the $i$-th sensor, and $\varepsilon_i$ models the measurement error. It is clear that for the aforementioned applications $n = 2$ or $n = 3$.

The measurement errors are assumed to be independent and identically distributed random variables. To model the outlier measurements, a two-mode mixture PDF is assigned to the measurement errors, which can be written as:

$p(\varepsilon_i) = (1 - \gamma)\, p_N(\varepsilon_i) + \gamma\, p_O(\varepsilon_i).$ (2)

In other words, measurement errors are drawn from the distribution $p_N$ with probability $1 - \gamma$ or from the distribution $p_O$ with probability $\gamma$. $p_N$ models the measurement noise for the outlier-free measurements, which is assumed to be a zero-mean Gaussian distribution with variance $\sigma^2$, and $p_O$ models the outlier errors. Thus, the probability $\gamma$ denotes the ratio of outlier measurements to all the measurements, also known as the contamination ratio. The outlier error distribution, $p_O$, is commonly modeled with a uniform distribution [11, 12], a shifted Gaussian distribution [13, 14, 9], a Rayleigh distribution [14], or an exponential distribution [15]. However, it is worthwhile to mention that our proposed method does not rely on the distribution of $p_O$.
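As a concrete illustration, errors from such a two-mode contamination mixture can be sampled as follows. This is a minimal sketch; the function and parameter names (`sample_errors`, `outlier_sampler`, `gamma`) are our own, not the paper's:

```python
import numpy as np

def sample_errors(m, gamma, sigma, outlier_sampler, seed=None):
    """Draw m measurement errors from a two-mode mixture: with
    probability (1 - gamma) from the nominal zero-mean Gaussian
    N(0, sigma^2), and with probability gamma from the outlier law."""
    rng = np.random.default_rng(seed)
    is_outlier = rng.random(m) < gamma           # contamination indicators
    nominal = rng.normal(0.0, sigma, size=m)     # outlier-free noise
    outliers = outlier_sampler(rng, m)           # e.g. uniform or shifted Gaussian
    return np.where(is_outlier, outliers, nominal), is_outlier
```

Any outlier law (uniform, shifted Gaussian, Rayleigh, exponential) can be plugged in through `outlier_sampler`, mirroring the fact that the proposed method does not rely on a specific choice.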

Here, the goal is to estimate $x$ using the measurements $r_i$, while disregarding the measurements from outlier sensors. The processing node has no information about the number of outlier sensors or the distribution of the outlier measurements. Moreover, it is assumed that all the reported measurements, including the noisy and irrelevant ones, are positive. To this end, we exploit robust statistics and propose methods to obtain the solution.

3 Robust Localization From Squared Range Measurements

In this section, a localization method is developed by applying robust statistics to the squared range measurements. Although this formulation is not optimal in the ML sense, unlike the methods based on range measurements, the solution can be attained easily.

The conventional squared-range-based least squares (SR-LS) formulation is as follows [2]:

$\min_{x} \sum_{i=1}^{m} \left( \|x - a_i\|^2 - r_i^2 \right)^2.$ (3)

It is clear that the problem stated in (3) is not convex. However, we can transform (3) into a special class of optimization problems by reformulating it as a constrained minimization problem given by [2, 10]

$\min_{x, \alpha} \sum_{i=1}^{m} \left( \alpha - 2 a_i^T x + \|a_i\|^2 - r_i^2 \right)^2$ (4)
subject to $\alpha = \|x\|^2.$

It is worthwhile to mention that $\alpha$ is also an outcome of the optimization procedure, not a parameter to be set. In this formulation, the unreliable measurements from outlier sensors affect the accuracy of localization significantly. We plan to use robust statistics to decrease the sensitivity of the estimator to the common assumptions. Here, robustness signifies insensitivity to small deviations from the common assumption, which is the Gaussian distribution for the noise. In (2), the parameter $\gamma$ represents the deviation from this assumption. Our goal is to deal with the unknown distribution $p_O$ and to achieve distributional robustness.

As described in [16], a proposed statistical procedure should have the following features. It must be efficient, in the sense that it must have an optimal or near optimal performance at the assumed model, i.e., the Gaussian distribution for noise. It must be stable, i.e., robust to small deviations from the assumed model. Also, in the case of breakdown, or large deviation from the model, a catastrophe should not occur. In the numerical experiments, we will look for these features in the proposed methods.

The general recipe to robustify any statistical procedure is to decompose the observations into fitted values and residuals [16]. In our proposed methods, we will find the residuals and re-fit iteratively until convergence is obtained. Each term of the summation in (4) corresponds to the residual from a single sensor. These residuals can be exploited to re-fit the observations iteratively.

Specifically, we use the residuals to assign weights to each observation. If an observation is fitted to the model, it should have a larger weight in the procedure of decision making. Inspired by [17], we define the objective function as:

$J(x, \alpha, w) = \sum_{i=1}^{m} w_i\, g_i(x, \alpha)^2 + \beta^2 \sum_{i=1}^{m} \left( \sqrt{w_i} - 1 \right)^2,$ (5)

where $g_i(x, \alpha)$ is the residual of the $i$-th sensor, given in (7), and $w = [w_1, \ldots, w_m]^T$ is the weight vector with $w_i \geq 0$. The value of the parameter $\beta$ is a function of the standard deviation of the noise; we set it based on the discussion presented in Appendix A.

The first summation of the objective function (5) is the weighted version of the objective in (4). The other terms are added in such a way that they result in the commonly used class of M-estimators known as the Geman-McClure (GM) function [18, 19]. The aim of the GM function is to reduce the effect of large errors by interpolating between $\ell_2$ and $\ell_0$ norm minimizations [18]. There are other M-estimators with behavior similar to Geman-McClure, such as the Tukey, Welsch, and Cauchy estimators. These types of M-estimators are known to be more robust to large errors than the Huber M-estimator [18]. The desirable feature of the Huber function, unlike all the other mentioned estimators, is its convexity. However, our numerical results show that the proposed algorithms perform well for different scenarios and different values of the contamination ratio.

Our goal is to minimize $J(x, \alpha, w)$ over $(x, \alpha)$ and $w$. Specifically, we are solving the following optimization problem:

$\min_{x, \alpha, w} \; J(x, \alpha, w)$ (6)
subject to $\alpha = \|x\|^2, \quad w_i \geq 0, \; i = 1, \ldots, m,$

where

$g_i(x, \alpha) = \alpha - 2 a_i^T x + \|a_i\|^2 - r_i^2, \quad i = 1, \ldots, m.$ (7)

Our algorithms will exploit an alternating approach to update the weights $w$ and the pair $(x, \alpha)$. We initialize by taking $w_i^{(0)} = 1$ for all $i$. Then, at the $k$-th iteration, the following optimization problem is solved to update the value of $(x, \alpha)$:

$\left(x^{(k+1)}, \alpha^{(k+1)}\right) = \arg\min_{x, \alpha} \sum_{i=1}^{m} w_i^{(k)}\, g_i(x, \alpha)^2 \quad \text{subject to} \quad \alpha = \|x\|^2.$ (8)

Likewise, the weights are updated as follows:

$w^{(k+1)} = \arg\min_{w_i \geq 0} \; J\left(x^{(k+1)}, \alpha^{(k+1)}, w\right).$ (9)

This problem is convex and the global minimizer can be obtained easily. As a result, the weights are given by:

$w_i^{(k+1)} = \left( \frac{\beta^2}{\left(g_i^{(k+1)}\right)^2 + \beta^2} \right)^2,$ (10)
where $g_i^{(k+1)} = g_i\left(x^{(k+1)}, \alpha^{(k+1)}\right)$.

Choosing such weights is common in iteratively reweighted least squares (IRLS) methods [20, 16, 18, 21, 22].

In robust statistics terms, the measurements are decomposed into fitted values and residuals at each iteration $k$. Then, the residuals are exploited to tune the weights of the observations. For large residuals, i.e., $|g_i| \gg \beta$, the weight $w_i$ in (10) tends to zero, so such observations are effectively discarded and their total contribution to (5) saturates at $\beta^2$. Similarly, for small residuals, the weight tends to one and the corresponding contribution tends to zero. In other words, we are approximately minimizing the number of observations with large residuals.
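The closed-form weight update can be sketched as follows, a minimal illustration assuming the Geman-McClure weight form $w_i = \left(\beta^2/(g_i^2+\beta^2)\right)^2$ used above (the function name `gm_weights` is ours):

```python
import numpy as np

def gm_weights(residuals, beta):
    """Closed-form minimizer of the weight subproblem:
    w_i = (beta^2 / (g_i^2 + beta^2))^2.
    Inlier residuals (|g| << beta) get weight near 1; outliers near 0."""
    g2 = np.square(np.asarray(residuals, dtype=float))
    return (beta**2 / (g2 + beta**2)) ** 2
```

The weights decrease monotonically with the residual magnitude, which is exactly the down-weighting behavior described above.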

Now, two different approaches to find the solution of (8) are introduced. In the first approach, we show that (8) can be mapped into a special class of optimization problems known as Generalized Trust Region Subproblems (GTRS) [10]. Then at each iteration, the exact solution is derived by employing the GTRS formulations. In the second approach, a method based on gradient descent is introduced to solve the problem. This method is not as computationally efficient as the first approach, but offers an array of desirable theoretical features.

3.1 The Squared Range Iterative Reweighted Least Squares (SR-IRLS) Approach

The optimization problem in (8) can be formulated in matrix form as:

$\min_{\theta} \left\| \left(W^{(k)}\right)^{1/2} (A\theta - b) \right\|^2$ (11)
subject to $\theta^T D \theta + 2 f^T \theta = 0,$

with

$A = \begin{bmatrix} -2a_1^T & 1 \\ \vdots & \vdots \\ -2a_m^T & 1 \end{bmatrix}, \quad b = \begin{bmatrix} r_1^2 - \|a_1\|^2 \\ \vdots \\ r_m^2 - \|a_m\|^2 \end{bmatrix}, \quad \theta = \begin{bmatrix} x \\ \alpha \end{bmatrix}, \quad D = \begin{bmatrix} I_n & 0 \\ 0 & 0 \end{bmatrix}, \quad f = \begin{bmatrix} \mathbf{0} \\ -\tfrac{1}{2} \end{bmatrix},$ (12)

and $W^{(k)}$ is a diagonal weighting matrix at the $k$-th iteration, whose $i$-th diagonal entry is $w_i^{(k)}$.

Note that in (11), a quadratic objective function is being minimized subject to a quadratic equality constraint. This special class of optimization problems is called Generalized Trust Region Subproblem (GTRS) [10]. The equality constraint makes this optimization problem non-convex. However, it is shown that the global solution of GTRS problems can be obtained efficiently [10, 2].

Theorem 1

Let $q_1(\theta) = \theta^T B \theta + 2 g^T \theta$ and $q_2(\theta) = \theta^T D \theta + 2 f^T \theta$ be quadratics and assume $\{\theta : q_2(\theta) = 0\}$ is not empty. If

$B + \lambda D \succ 0 \quad \text{for some } \lambda \in \mathbb{R},$ (13)

then the optimization problem $\min\{q_1(\theta) : q_2(\theta) = 0\}$ has a global minimizer.

Theorem 2

Let $q_1(\theta) = \theta^T B \theta + 2 g^T \theta$ and $q_2(\theta) = \theta^T D \theta + 2 f^T \theta$ be quadratics and assume that $D \neq 0$ and $\{\theta : q_2(\theta) = 0\} \neq \emptyset$. A vector $\theta^*$ is a global minimizer of the problem $\min\{q_1(\theta) : q_2(\theta) = 0\}$ if and only if $q_2(\theta^*) = 0$ and there is a multiplier $\lambda \in \mathbb{R}$ such that the Kuhn-Tucker condition

$(B + \lambda D)\,\theta^* = -(g + \lambda f)$

is satisfied with

$B + \lambda D$

positive semidefinite.

Specifically, using Theorem 1 and the definitions of $A$, $D$, and $f$, we can easily verify that (13) holds for the proposed optimization problem in (11). Thus, the optimization problem (11) has a global minimizer at every iteration. Also, by using Theorem 2, $\theta^{(k+1)}$ is an optimal solution of (11) if and only if there exists $\lambda^{(k)} \in \mathbb{R}$ such that:

$\left( A^T W^{(k)} A + \lambda^{(k)} D \right) \theta^{(k+1)} = A^T W^{(k)} b - \lambda^{(k)} f, \qquad \left( \theta^{(k+1)} \right)^T D\, \theta^{(k+1)} + 2 f^T \theta^{(k+1)} = 0, \qquad A^T W^{(k)} A + \lambda^{(k)} D \succeq 0.$ (14)

The last expression means that $A^T W^{(k)} A + \lambda^{(k)} D$ is positive semidefinite. The first two equalities in (14) can be exploited to obtain a solution for $\theta^{(k+1)}$ as a function of the multiplier, i.e., $\theta(\lambda)$. To ensure that $A^T W^{(k)} A + \lambda D$ is positive semidefinite, it is easy to show that we need to seek $\lambda$ in the interval

$\lambda \in \left( -\frac{1}{\lambda_1}, \; \infty \right),$ (15)

where $\lambda_1$ is the largest generalized eigenvalue of the matrix pair $\left(D, A^T W^{(k)} A\right)$. It is shown that if (13) holds, then the optimal multiplier lies in this interval [10, Theorem 2.2]. Moreover, the resulting characteristic function $\phi(\lambda) = \theta(\lambda)^T D\, \theta(\lambda) + 2 f^T \theta(\lambda)$, whose root gives $\lambda^{(k)}$, is strictly decreasing over this interval [10, Theorem 5.2]. Thus, at each iteration, $\lambda^{(k)}$ can be obtained using a bisection algorithm. An initial bracketing interval for the bisection algorithm can be computed as described in [10].
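The bisection-based GTRS step can be sketched as follows. This is a simplified illustration under our reconstructed notation ($A$, $b$, $W$, $D$, $f$ as in the matrix formulation, function name `solve_gtrs` ours); the bracketing of the bisection interval is handled numerically rather than with the refined bounds of [10]:

```python
import numpy as np

def solve_gtrs(A, b, W, D, f, tol=1e-9):
    """Globally solve  min ||W^{1/2}(A theta - b)||^2
    s.t. theta^T D theta + 2 f^T theta = 0
    by bisection on the multiplier, using that the characteristic
    function phi is strictly decreasing on (-1/lambda_1, inf), with
    lambda_1 the largest generalized eigenvalue of (D, A^T W A)."""
    B = A.T @ W @ A
    g = A.T @ W @ b
    lam1 = max(np.linalg.eigvals(np.linalg.solve(B, D)).real)
    theta = lambda lam: np.linalg.solve(B + lam * D, g - lam * f)
    phi = lambda lam: theta(lam) @ D @ theta(lam) + 2 * f @ theta(lam)
    lo = -1.0 / lam1 + 1e-6      # just inside the admissible interval
    hi = 1.0
    while phi(hi) > 0:           # expand until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:         # phi is strictly decreasing here
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    return theta(0.5 * (lo + hi))
```

With noiseless ranges and unit weights, the recovered $\theta = [x^T, \alpha]^T$ satisfies $\alpha = \|x\|^2$ and matches the true target position.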

Then, $\theta^{(k+1)}$ is updated using the estimated multiplier $\lambda^{(k)}$. Algorithm 1 illustrates the procedure to calculate the estimate of (11) using the equations in (14). The convergence of the algorithm is analyzed in Theorem 3.

0:   Input: $a_i$, $r_i$ for $i = 1, \ldots, m$, $\beta$, maximum number of iterations $K$, and the convergence tolerance $\eta$.
1:   Form $A$, $b$, $D$, and $f$ using (12) and the residuals $g_i$ using (7).
2:   $w^{(0)} \leftarrow \mathbf{1}$ and $k \leftarrow 0$.
3:   Repeat
4:         $k \leftarrow k + 1$.
5:         $\lambda^{(k)} \leftarrow$ solve $\phi(\lambda) = 0$ using a bisection algorithm in the interval $(-1/\lambda_1, \infty)$, where $\lambda_1$ is the largest generalized eigenvalue of $\left(D, A^T W^{(k-1)} A\right)$.
6:         $\theta^{(k)} \leftarrow \left(A^T W^{(k-1)} A + \lambda^{(k)} D\right)^{-1}\left(A^T W^{(k-1)} b - \lambda^{(k)} f\right)$.
7:         $w^{(k)} \leftarrow$ update using (10).
8:   Until convergence, i.e., $\|\theta^{(k)} - \theta^{(k-1)}\| < \eta$ or the maximum number of iterations is reached.
Algorithm 1 Calculating the SR-IRLS estimate
Theorem 3

The sequence of objective values generated by Algorithm 1 converges to a constant value, and every limit point of the iterates is a stationary point of (6).

Proof:

See Appendix B. \qed

Inspection of the algorithm reveals that matrix inversions are only needed for $(n+1) \times (n+1)$ matrices, where $n$ is the space dimension and is equal to $2$ or $3$. Thus, the main computational burden of the algorithm stems from the matrix multiplications, such as forming $A^T W^{(k)} A$. The per-iteration complexity of the algorithm is therefore linear in the number of sensors $m$. Similarly, the growth rate for the legacy least squares problem is also linear in $m$. Thus, the main computational burden of the SR-IRLS algorithm arises from the number of iterations.

Our numerical experiments show that the SR-IRLS method needs only a few iterations to solve the problem. The convergence of the objective is also proven in Appendix B. However, due to the lack of convexity, the standard convergence analysis tools cannot be used to show the convergence of the whole sequence of iterates. The problem becomes more difficult because the objective function is not a linear or quadratic function of the previous iterates. Thus, in Appendix B, the convergence of a subsequence of the iterates to a critical point is proved, although whole-sequence convergence is almost always observed in practice.

This motivates us to propose a globally convergent algorithm. In Section 3.2, an algorithm, referred to as SR-GD, is introduced to find the solution of (11) based on gradient descent. Then, we will integrate SR-IRLS and SR-GD to derive a computationally efficient and globally convergent algorithm.

3.2 The Squared Range Gradient Descent (SR-GD) Approach

In this section, a new algorithm for solving the optimization problem in (8) is proposed based on gradient descent (SR-GD), for which the convergence of the whole sequence of iterates has been proven theoretically [23]. For that, the Lipschitz continuity of the gradient of the objective function, as well as the special forms of the objective and the constraint, are employed. The numerical experiments show that this algorithm needs more iterations to converge than SR-IRLS. Our goal will be to employ SR-GD and SR-IRLS together to propose a hybrid fast-converging algorithm.

Inspired by [23], at each iteration, the value of $\theta$ is updated as follows:

$\theta^{(k+1)} = \arg\min_{\theta} \; \left\langle \nabla f_{w^{(k)}}\!\left(\hat{\theta}^{(k)}\right), \theta - \hat{\theta}^{(k)} \right\rangle + \frac{L_k}{2} \left\| \theta - \hat{\theta}^{(k)} \right\|^2$ (16)
subject to $\theta^T D \theta + 2 f^T \theta = 0,$

where

$f_{w^{(k)}}(\theta) = \left\| \left(W^{(k)}\right)^{1/2} (A\theta - b) \right\|^2, \qquad \hat{\theta}^{(k)} = \theta^{(k)} + \omega_k \left( \theta^{(k)} - \theta^{(k-1)} \right),$

and $L_k$ is the Lipschitz constant of $\nabla f_{w^{(k)}}$ at the $k$-th iteration. By the definition of Lipschitz continuity, we have

$\left\| \nabla f_{w^{(k)}}(\theta_1) - \nabla f_{w^{(k)}}(\theta_2) \right\| \leq L_k \left\| \theta_1 - \theta_2 \right\| \quad \text{for all } \theta_1, \theta_2.$

Intuitively, the first term of the objective finds the steepest descent direction, while the second term penalizes large steps away from the prediction. The Lipschitz constant of the gradient limits the step size of the algorithm, and the new estimate is enforced to stay around the prediction $\hat{\theta}^{(k)}$. The prediction is constructed using the previous iterates and an extrapolation factor $\omega_k$ [23]. The update rule for the weights is the same as (10).

This problem is not convex either, but the authors in [23] have proven the convergence of the whole sequence of iterates by exploiting the properties of the objective.
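One such prox-linear update can be sketched as follows, a simplified illustration under our reconstructed notation: extrapolate, take a gradient step of the weighted squared-range cost, then project back onto the constraint set (itself a small GTRS solved by bisection). The fixed extrapolation factor `omega=0.5` and the function name `sr_gd_step` are our assumptions; [23] prescribes its own schedule for the extrapolation:

```python
import numpy as np

def sr_gd_step(theta_prev, theta_prev2, A, b, W, D, f, omega=0.5, tol=1e-9):
    """One extrapolated gradient step of f(theta) = ||W^{1/2}(A theta - b)||^2,
    followed by projection onto {theta : theta^T D theta + 2 f^T theta = 0}.
    Assumes D = blkdiag(I_n, 0), so I + mu*D is positive definite for mu > -1."""
    B = A.T @ W @ A
    hat = theta_prev + omega * (theta_prev - theta_prev2)   # prediction
    grad = 2 * (B @ hat - A.T @ W @ b)                      # gradient at prediction
    L = 2 * max(np.linalg.eigvalsh(B))                      # Lipschitz constant
    z = hat - grad / L                                      # unconstrained step
    theta = lambda mu: np.linalg.solve(np.eye(len(z)) + mu * D, z - mu * f)
    phi = lambda mu: theta(mu) @ D @ theta(mu) + 2 * f @ theta(mu)
    lo, hi = -0.5, 1.0
    while phi(hi) > 0:                  # bracket the root from the right
        hi *= 2.0
    while phi(lo) < 0:                  # keep I + mu*D positive definite
        lo = -1.0 + 0.5 * (lo + 1.0)
    while hi - lo > tol:                # phi is strictly decreasing here
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    return theta(0.5 * (lo + hi))
```

Starting the step at a feasible stationary point leaves the iterate unchanged, which is the fixed-point behavior one expects from such an update.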

It is easy to notice that the minimization problem stated in (16) is a GTRS problem, since a quadratic objective is minimized subject to a quadratic equality constraint. By exploiting the definitions of $D$ and $f$, we can show that (13) holds. Thus, the optimization problem in (16) has a global minimizer at every iteration. Also, Theorem 2 states that $\theta^{(k+1)}$ is an optimal solution of (16) if and only if there exists $\mu^{(k)} \in \mathbb{R}$ such that:

$\left( L_k I + \mu^{(k)} D \right) \theta^{(k+1)} = L_k \hat{\theta}^{(k)} - \nabla f_{w^{(k)}}\!\left(\hat{\theta}^{(k)}\right) - \mu^{(k)} f, \qquad \left( \theta^{(k+1)} \right)^T D\, \theta^{(k+1)} + 2 f^T \theta^{(k+1)} = 0, \qquad L_k I + \mu^{(k)} D \succeq 0.$ (17)

At each iteration, after finding the predicted value $\hat{\theta}^{(k)}$ for the iterate, the equality expressions in (17) are used to find $\mu^{(k)}$ and to update the values of $\theta^{(k+1)}$ and $w^{(k+1)}$. We should look for the solution $\mu^{(k)}$ in an interval that satisfies the positive semidefiniteness constraint. Since (13) holds, this interval exists and the characteristic function is strictly decreasing over it [10, Theorem 2.2, Theorem 5.2]. Algorithm 2 shows the steps to find the solution of the localization problem using the SR-GD method.

0:   Input: $a_i$, $r_i$ for $i = 1, \ldots, m$, $\beta$, maximum number of iterations $K$, and the convergence tolerance $\eta$.
1:   Form $A$, $b$, $D$, and $f$ using (12) and the residuals $g_i$ using (7).
2:   $W^{(0)} \leftarrow I$ (the identity matrix), $\theta^{(0)} \leftarrow \theta^{(-1)} \leftarrow \mathbf{0}$, and $k \leftarrow 0$.
3:   Repeat
4:         $k \leftarrow k + 1$.
5:         $\hat{\theta}^{(k)} \leftarrow \theta^{(k-1)} + \omega_k \left(\theta^{(k-1)} - \theta^{(k-2)}\right)$.
6:         Compute $\nabla f_{w^{(k-1)}}\!\left(\hat{\theta}^{(k)}\right)$ and the Lipschitz constant $L_k$.
7:         $\mu^{(k)} \leftarrow$ solve the characteristic function using a bisection algorithm in the interval where $L_k I + \mu D \succeq 0$.
8:         $\theta^{(k)} \leftarrow \left(L_k I + \mu^{(k)} D\right)^{-1}\left(L_k \hat{\theta}^{(k)} - \nabla f_{w^{(k-1)}}\!\left(\hat{\theta}^{(k)}\right) - \mu^{(k)} f\right)$.
9:         $w^{(k)} \leftarrow$ update using (10).
10:   Until convergence, i.e., $\|\theta^{(k)} - \theta^{(k-1)}\| < \eta$ or the maximum number of iterations is reached.
Algorithm 2 Calculating the SR-GD estimate

The numerical experiments show that the SR-GD method needs more time to find the solution than SR-IRLS. This is due to the fact that in SR-GD, unlike in SR-IRLS, the new iterate is constrained to stay close to the previous iterate.

To take advantage of the fast convergence of SR-IRLS and the whole-sequence convergence of SR-GD, we propose a hybrid method. Specifically, we start with the SR-IRLS method and update the iterates via the steps stated in Algorithm 1. After convergence of the objective function, which is proven in Appendix B, the update rules in Algorithm 2 are employed to find the final solution. The performance, convergence rate, and computational cost of this hybrid method are evaluated and compared with other methods in Section 4.
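The two-phase control flow just described can be sketched as follows; this is a schematic with caller-supplied step functions, and all names (`sr_hybrid`, `irls_step`, `gd_step`) are ours:

```python
def sr_hybrid(irls_step, gd_step, state, objective, eta=1e-6, max_iter=200):
    """Phase 1: iterate the fast (SR-IRLS-style) update until the objective
    value converges.  Phase 2: hand the iterate to the globally convergent
    (SR-GD-style) update to find the final solution."""
    prev = float("inf")
    for _ in range(max_iter):                  # phase 1: fast updates
        state = irls_step(state)
        cur = objective(state)
        if abs(prev - cur) < eta:              # objective has stalled
            break
        prev = cur
    for _ in range(max_iter):                  # phase 2: convergent updates
        new = gd_step(state)
        if abs(objective(new) - objective(state)) < eta:
            return new
        state = new
    return state
```

On a toy one-dimensional problem, the driver first contracts quickly toward the minimizer and then refines with the slower, convergent step.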

4 Numerical Results

In this section, we present simulation results to evaluate the performance of our proposed methods. We will look for the main features of a robust estimator, which are discussed in Section 3. We will examine the performance of the algorithms at the assumed model ($\gamma = 0$), at small deviations from the model (small $\gamma$), and at large deviations from the model (large $\gamma$). Moreover, we check the distributional robustness of the proposed algorithms, meaning that the performance of the methods will be evaluated for different outlier noise distributions $p_O$.

Two different simulation scenarios will be investigated. In Scenario I, a general system model is considered and the outlier measurements obey a uniform distribution, which models a harsh environment. In Scenario II, localization of a target in a cellular radio network is investigated. The geometry of sensors is taken from an operating network and the measurement errors are drawn from a Gaussian mixture distribution to model the non-line-of-sight (NLOS) measurements.

The performances of the proposed methods are compared with existing least-square-based [2] and robust [9] methods.

4.1 Scenario I

The simulation parameters are as follows, unless stated otherwise. A set of sensors, distributed uniformly at random over a square area together with the target, try to localize the target. The range measurements are corrupted by additive white Gaussian noise with standard deviation $\sigma$. Moreover, among the sensors there exist outlier sensors, whose measurement noise is uniformly distributed over a wide range. Mathematically speaking, the distribution of the measurement error is as follows:

$p(\varepsilon_i) = (1 - \gamma)\, p_N(\varepsilon_i) + \gamma\, p_U(\varepsilon_i),$ (18)

where $p_U$ is a uniform distribution modeling the outlier measurements, and $p_N$ is a zero-mean Gaussian distribution with variance $\sigma^2$.

To ensure that all the range measurements are positive, we set the non-positive values equal to a small positive value $\delta$, i.e., $r_i \leftarrow \max(r_i, \delta)$. Localization is performed in a two-dimensional space, i.e., $n = 2$.
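Putting the pieces together, the Scenario I data generation can be sketched as follows; the parameter names (`u_max`, `delta`, `scenario_one_ranges`) are ours, standing in for the concrete values used in the simulations:

```python
import numpy as np

def scenario_one_ranges(x, anchors, n_outliers, sigma, u_max, delta=1e-3, seed=None):
    """Gaussian noise on every range, uniform outlier noise on a random
    subset of sensors, and non-positive ranges clipped to a small delta."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(anchors - x, axis=1)            # true target-sensor distances
    r = d + rng.normal(0.0, sigma, size=len(anchors))  # nominal noisy ranges
    idx = rng.choice(len(anchors), size=n_outliers, replace=False)
    r[idx] = d[idx] + rng.uniform(0.0, u_max, size=n_outliers)  # outlier noise
    return np.maximum(r, delta)                        # enforce positivity
```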

The performance of the proposed methods SR-IRLS, SR-GD, and the hybrid version is compared with that of SR-LS [2], a least-squares-based method, as well as a robust method, namely Robust Iterative Nonparametric (RIN) [9]. The performance is measured by the root mean square error (RMSE),

$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{j=1}^{N} \left\| \hat{x}_j - x \right\|^2},$ (19)

averaged over a sufficiently large number $N$ of random simulations, where $\hat{x}_j$ is the estimated value of the target location $x$ in the $j$-th trial.
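The RMSE metric of (19) is straightforward to compute; a minimal sketch (function name ours):

```python
import numpy as np

def rmse(estimates, x_true):
    """Root of the mean squared localization error over the Monte Carlo
    trials; `estimates` holds one position estimate per row."""
    err = np.asarray(estimates, dtype=float) - np.asarray(x_true, dtype=float)
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))
```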

In our first numerical experiment, the convergence of SR-IRLS and SR-GD is compared. Figure 1 depicts the objective value at different iterations. Moreover, the labels show the elapsed time at some of the iterations. Although the convergence of the SR-GD method is theoretically provable, Figure 1 shows that it needs more iterations and more time to converge. The hybrid version of the algorithm (SR-Hybrid) uses the update rules of SR-IRLS until the objective function converges, and then employs the update rules of SR-GD. As a result, it needs fewer iterations than SR-GD, while its convergence is still theoretically provable.

Fig. 1: Convergence of SR-IRLS, SR-GD, and the hybrid method. Labels show the execution time of different algorithms at different iterations.

To study the influence of the number of outlier sensors, Figure 2 exhibits the RMSE of the estimate for different numbers of outlier sensors, or equivalently different values of $\gamma$. In this study, the results are averaged over Monte Carlo (MC) trials. It is clear that as the number of outliers increases, the performance of the SR-LS method deteriorates significantly. SR-IRLS and SR-GD perform closely for small values of $\gamma$, but the difference becomes more noticeable as $\gamma$ increases. This was expected, since SR-GD, because of the smooth convergence of its iterates, is more likely to settle in locally optimal solutions caused by the outliers. However, the hybrid version, which only uses small step sizes once it is sufficiently close to the limit point, performs best across the values of the contamination ratio. This figure shows that the proposed methods are efficient at the assumed model ($\gamma = 0$) and stable for small deviations. Also, for large deviations, a catastrophe does not occur.

Fig. 2: Robustness against outliers: RMSE versus the number of outlier sensors, over the Monte Carlo trials.

To estimate the target location, the RIN method [9] approximates the PDF of the measurement error with a summation of Gaussian kernels. For that, it needs a considerable number of measurements. Hence, unlike our proposed methods, it cannot produce proper results with few measurements. Further, since the RIN method employs Gaussian kernels, it works most accurately when the measurement errors are drawn from a Gaussian mixture distribution (see Section 4.2). Using Gaussian kernels decreases the distributional robustness of the RIN method significantly.

To elaborate this point, Figure 3 illustrates the impact of the number of sensors on the performance of the different methods. In this experiment, a fixed fraction of the sensors report unreliable data to the processing node. The figure exhibits that the accuracy of the localization methods improves as the number of sensors increases. As expected, the RMSE of the estimates produced by the RIN method decreases significantly as the number of sensors increases.

Moreover, it is clear that the proposed methods meet the Cramér-Rao lower bound (CRLB) for a large number of measurements. From Figure 3(a) and Figure 3(b), we can infer that the proposed methods are efficient for these simulation parameters, because they meet the CRLB and they are unbiased. The CRLB is approximated by using the Monte Carlo integration techniques explained in [9].

(a) Bias
(b) RMSE
(c) Running time
Fig. 3: Performance of the localization methods versus the number of sensors, over the Monte Carlo trials.

Figure 3(c) shows the running times for different numbers of sensors. (Running time reflects the time required to execute all the steps of the algorithms, including initialization, preprocessing, convergence, and post-processing. All simulations were performed in the MATLAB 2014a environment on a PC equipped with an Intel Xeon E5-1650 processor (3.20 GHz) and 8 GB of RAM.) Clearly, the iterative methods require more computation time than the least squares method. Also, as expected and as can be noticed in Figure 1, the running time of the hybrid method is less than that of SR-GD, but more than that of SR-IRLS.

It is also worthwhile to compare the performance of the localization methods for the case when no sensor reports unreliable measurements and the range measurements are corrupted only by additive Gaussian noise, i.e., $\gamma = 0$. As can be seen in Figure 4, the LS method outperforms the robust methods. This was expected, since the LS methods are particularly tailored to Gaussian noise, while the robust methods are customized to handle unreliable measurements. We sacrifice some efficiency at $\gamma = 0$ to achieve stability under deviations from the model. However, it is easy to notice that the RMSE of the proposed robust methods is close to the RMSE of the SR-LS method, which implies near-optimal performance for Gaussian noise.

Fig. 4: Comparison of the RMSEs in an environment with no outlier sensors ($\gamma = 0$).

4.2 Scenario II

In this section, the problem of localizing a target in a cellular radio network is considered. The network consists of base stations (BSs), which try to estimate the location of a target in a city center area. The configuration of the BSs and the city center, as depicted in Figure 5, is taken from a realistic network [9].

Fig. 5: Geometry of the sensors, marked as triangles, and the city center area, marked as a gray square, in a real-world operating cellular radio network.

The outlier-free measurements are the result of line-of-sight (LOS) sensing. On the other hand, NLOS sensing produces unreliable measurements. Field trials have indicated that the measurement errors in harsh LOS/NLOS environments can be modeled by a Gaussian mixture distribution [9],

$p(\varepsilon_i) = (1 - \gamma)\, \mathcal{N}(0, \sigma^2) + \gamma\, \mathcal{N}(\mu_O, \sigma_O^2),$ (20)

where $\mathcal{N}(\mu_O, \sigma_O^2)$ is a Gaussian distribution with mean $\mu_O$ and variance $\sigma_O^2$, modeling the NLOS measurements.

For each BS, we obtain several measurements and stack them in the measurement vector as follows:

$r_i = \left[ r_{i,1}, \ldots, r_{i,N_s} \right]^T.$ (21)

In the simulations, it is assumed that each BS reports $N_s$ samples. The measurement errors are drawn from the distribution in (20). The position of the target is uniformly generated in the city center area.

Figure 6 illustrates the performance of the different localization methods versus the contamination ratio $\gamma$. This figure shows that SR-GD outperforms its competitors. Moreover, the hybrid version and SR-IRLS perform the same in this configuration; they are able to handle NLOS measurements and meet the CRLB up to a certain $\gamma$. For large values of $\gamma$, the SR-IRLS method breaks down, but it still works better than the least squares method. In this scenario, however, SR-GD is able to localize the target even for large contamination ratios.

Fig. 6: Mean RMSE of different localization methods versus the contamination ratio, over the MC trials.

Moreover, the RIN method performs more accurately in this scenario than in the previous one, since here it can estimate the PDF of the error more precisely. This was expected because, firstly, multiple measurements are collected per BS and, secondly, the measurement error has a Gaussian mixture distribution. As a result, the RIN method can produce a better estimate of the target location. With enough measurements, RIN is able to approximate the measurement error distribution; thus, its RMSE does not change considerably for different values of $\gamma$. This fact is vividly clear in the extreme case $\gamma = 1$, where the RIN method is able to approximate the outlier distribution as the PDF of the measurement error. As a result, this method outperforms the competitors for the special case of $\gamma = 1$.

5 Conclusions

In this paper, we have considered the problem of localizing a single target in the presence of unreliable measurements with unknown probability distribution. For that, the squared-range formulation is exploited. To disregard the outlier measurements and find the estimate using the reliable measurements, we have used robust statistics. Then the problem is converted into a known class of optimization problems, namely GTRS, using the concepts in robust statistics. Two algorithms and a hybrid method are proposed to solve the problem. Convergence of the algorithms is analyzed theoretically.

The simulation results suggest that the proposed methods outperform the existing methods, while providing near-optimal performance for Gaussian noise.

Appendix A Choosing the Parameter $\beta$

Here, we establish a connection between the objective function introduced in (5) and the Huber norm, and use results from robust statistics to tune $\beta$. The objective in (5), after minimizing over the weights, can be seen as an iterative approximation of $\sum_{i=1}^{m} \rho(g_i)$, where

$\rho(g) = \frac{\beta^2 g^2}{g^2 + \beta^2}$ (22)

indicates a measurement as an outlier if the residual is greater than a threshold, and this threshold is a function of $\beta$. Robustness to noise is improved by increasing the value of $\beta$, at the expense of losing robustness to the outlier measurements. Hence, as the variance of the noise increases, we should assign a larger value to $\beta$. To set the value of $\beta$, a link between the proposed problem and the Huber norm is established.

In robust statistics [16], the Huber norm, $\rho_H$, is utilized to suppress the outlier measurements. $\rho_H$ is defined as

$\rho_H(g) = \begin{cases} g^2, & |g| \leq k, \\ 2k|g| - k^2, & |g| > k. \end{cases}$ (23)

Assuming that the additive measurement noise is Gaussian, the estimator is asymptotically efficient, i.e., approaches the Cramér-Rao bound, by setting the parameter $k$ proportional to the noise standard deviation; a standard choice is $k = 1.345\sigma$, where $\sigma^2$ is the variance of the noise [16].

The Huber norm is a convex function. To apply results from robust statistics to the proposed problem, a convex version of the cost function in (22) should be employed. The function can be replaced by its closest convex approximation,

(24)

with . Figure 7 illustrates the similarity between the Huber norm and the convex approximation. Both cost functions resemble a least-squares estimator for errors smaller than a cut-off parameter, which is the optimal cost function for Gaussian noise. On the other hand, for large errors the cost functions resemble sparsity-promoting norms.

Fig. 7: Comparison of the IRLS weight function, its convex approximation, and the Huber norm.

By extending the results of robust statistics to the proposed problem, we utilize the same cut-off parameter as the Huber norm. That is, for Gaussian noise we set the parameter accordingly, assuming that the nominal noise variance is available. If the variance is unknown, an estimate of it can be used [24, Sec. 4.4]. The numerical experiments in Section 4 show that, with this setting, the estimator meets the Cramér-Rao lower bound for a sufficiently large number of sensors.
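When the noise standard deviation is unknown, one standard robust scale estimate in the spirit of [24] is the normalized median absolute deviation (MAD). A minimal sketch follows; the consistency constant 1.4826, which makes the MAD unbiased for Gaussian data, is the textbook value and is assumed here.

```python
import numpy as np

def mad_sigma(r):
    """Robust estimate of the noise standard deviation from residuals r.

    The MAD is insensitive to a small fraction of outliers, unlike
    the sample standard deviation.
    """
    r = np.asarray(r, dtype=float)
    return 1.4826 * np.median(np.abs(r - np.median(r)))

rng = np.random.default_rng(1)
r = np.concatenate([rng.normal(0.0, 2.0, 200), [100.0, -80.0, 120.0]])
print(mad_sigma(r))   # close to the true sigma = 2.0
print(np.std(r))      # the sample std is badly inflated by the outliers
```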

Appendix B Proof of Theorem 3

Algorithm 1 alternates between the two subproblems introduced in (9) and (11). As discussed in Section 3.1, the optimization problem in (11) is a GTRS and has a global minimizer at every iteration. Moreover, the global minimizer of (11) is obtained by exploiting the conditions in (14).

Also, the optimization problem in (9) is strictly convex, and its global minimizer can be calculated using the update rule in (10) at each iteration.

Lemma 1

The objective function is non-increasing under the update rules in Algorithm 1, i.e.,

Proof:

Using the update rules in Algorithm 1, we have

Each inequality follows from the fact that the corresponding update is the global minimizer of its subproblem with the other variable held fixed. ∎
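The monotonicity argument of Lemma 1 can be checked numerically on a toy two-block problem (an illustrative stand-in, not the paper's subproblems (9) and (11)): alternating exact minimization over two blocks never increases the objective, because each block update globally minimizes the objective with the other block held fixed.

```python
import numpy as np

def f(x, y):
    # toy smooth, jointly convex objective (Hessian [[2,1],[1,2]] > 0)
    return (x - 1.0) ** 2 + (y + 2.0) ** 2 + x * y

x, y = 5.0, 5.0
vals = [f(x, y)]
for _ in range(20):
    x = (2.0 - y) / 2.0   # argmin_x f(x, y): solve df/dx = 2(x-1) + y = 0
    y = (-4.0 - x) / 2.0  # argmin_y f(x, y): solve df/dy = 2(y+2) + x = 0
    vals.append(f(x, y))

# the objective value sequence is non-increasing, as in Lemma 1
assert all(vals[i + 1] <= vals[i] + 1e-12 for i in range(len(vals) - 1))
print(vals[0], "->", vals[-1])
```

For this convex toy problem the iterates also converge to the unique minimizer (x, y) = (8/3, −10/3); in the paper's nonconvex setting, Lemma 1 guarantees only the monotone decrease of the objective.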

Since the objective is non-increasing, it either diverges to −∞ or converges to some limit, in which case the difference between consecutive objective values vanishes as the iterations proceed. By a suitable choice of the constant in the objective, we can ensure that the objective is bounded below; hence the sequence of objective values converges to a constant. To study the convergence of the iterates, the definition of a limit point is presented [25].

Definition 1

A point is a limit point of a sequence if there exists a subsequence that converges to it. Note that every bounded sequence in Euclidean space has a limit point (or convergent subsequence).

Now, there exists a subsequence that converges to a limit point. Plugging the limit point into the update rules, we obtain

which are the derivatives of the Lagrange function of (6) with respect to the primal and dual variables. Thus, the limit point is a stationary point of (6).

References

  • [1] P. Oguz-Ekim, J. P. Gomes, J. Xavier, and P. Oliveira, “Robust Localization of Nodes and Time-Recursive Tracking in Sensor Networks Using Noisy Range Measurements,” Signal Processing, IEEE Transactions on, vol. 59, pp. 3930–3942, 8 2011.
  • [2] A. Beck, P. Stoica, and J. Li, “Exact and Approximate Solutions of Source Localization Problems,” Signal Processing, IEEE Transactions on, vol. 56, pp. 1770–1778, 5 2008.
  • [3] G. Destino and G. Abreu, “On the Maximum Likelihood Approach for Source and Network Localization,” Signal Processing, IEEE Transactions on, vol. 59, pp. 4954–4970, 10 2011.
  • [4] Y. Jiang and M. R. Azimi-Sadjadi, “A Robust Source Localization Algorithm Applied to Acoustic Sensor Network,” in Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on, vol. 3, pp. III–1233–III–1236, 4 2007.
  • [5] H. Jamali-Rad and G. Leus, “Sparsity-aware TDOA localization of multiple sources,” in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 4021–4025, 5 2013.
  • [6] Y. Zhang, K. Yang, and M. G. Amin, “Robust target localization in moving radar platform through semidefinite relaxation,” in Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on, pp. 2209–2212, 4 2009.
  • [7] G. Wang and K. Yang, “A New Approach to Sensor Node Localization Using RSS Measurements in Wireless Sensor Networks,” Wireless Communications, IEEE Transactions on, vol. 10, pp. 1389–1395, 5 2011.
  • [8] S. Yousefi, X. W. Chang, and B. Champagne, “Distributed cooperative localization in wireless sensor networks without NLOS identification,” in Positioning, Navigation and Communication (WPNC), 2014 11th Workshop on, pp. 1–6, 3 2014.
  • [9] F. Yin, C. Fritsche, F. Gustafsson, and A. M. Zoubir, “TOA-Based Robust Wireless Geolocation and Cramer-Rao Lower Bound Analysis in Harsh LOS/NLOS Environments,” Signal Processing, IEEE Transactions on, vol. 61, pp. 2243–2255, 5 2013.
  • [10] J. J. Moré, “Generalizations of the trust region problem,” Optimization Methods and Software, vol. 2, pp. 189–209, 1993.
  • [11] M. Hussain, Y. Aytar, N. Trigoni, and A. Markham, “Characterization of non-line-of-sight (NLOS) bias via analysis of clutter topology,” in Position Location and Navigation Symposium (PLANS), 2012 IEEE/ION, pp. 1247–1256, 4 2012.
  • [12] S. Nawaz and N. Trigoni, “Convex programming based robust localization in NLOS prone cluttered environments,” in Information Processing in Sensor Networks (IPSN), 2011 10th International Conference on, pp. 318–329, 4 2011.
  • [13] F. Gustafsson and F. Gunnarsson, “Mobile positioning using wireless networks: possibilities and fundamental limitations based on available wireless network measurements,” Signal Processing Magazine, IEEE, vol. 22, pp. 41–53, 7 2005.
  • [14] F. Yin and A. M. Zoubir, “Robust positioning in NLOS environments using nonparametric adaptive kernel density estimation,” in Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pp. 3517–3520, 3 2012.
  • [15] P.-C. Chen, “A non-line-of-sight error mitigation algorithm in location estimation,” in Wireless Communications and Networking Conference, 1999. WCNC. 1999 IEEE, pp. 316–320, 1999.
  • [16] P. J. Huber, Robust statistics. Springer, 2011.
  • [17] I. Daubechies, R. DeVore, M. Fornasier, and C. S. Güntürk, “Iteratively reweighted least squares minimization for sparse recovery,” Communications on Pure and Applied Mathematics, vol. 63, no. 1, pp. 1–38, 2010.
  • [18] P. Pennacchi, “Robust estimate of excitations in mechanical systems using M-estimators – Theoretical background and numerical applications,” Journal of Sound and Vibration, vol. 310, no. 4–5, pp. 923–946, 2008.
  • [19] S. Geman and D. E. McClure, “Statistical methods for tomographic image reconstruction,” 1987.
  • [20] R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on, pp. 3869–3872, 3 2008.
  • [21] M. Boloursaz Mashhadi, N. Salarieh, E. Shahrabi Farahani, and F. Marvasti, “Level crossing speech sampling and its sparsity promoting reconstruction using an iterative method with adaptive thresholding,” IET Signal Processing, vol. 11, pp. 721–726, 8 2017.
  • [22] A. Zaeemzadeh, M. Joneidi, B. Shahrasbi, and N. Rahnavard, “Missing spectrum-data recovery in cognitive radio networks using piecewise constant Nonnegative Matrix Factorization,” pp. 238–243, IEEE, 10 2015.
  • [23] Y. Xu and W. Yin, “A globally convergent algorithm for nonconvex optimization based on block coordinate update,” arXiv preprint arXiv:1410.1386, 2014.
  • [24] R. Maronna, D. Martin, and V. Yohai, Robust Statistics: Theory and Methods. John Wiley & Sons, 2006.
  • [25] M. Razaviyayn, M. Hong, and Z.-Q. Luo, “A Unified Convergence Analysis of Block Successive Minimization Methods for Nonsmooth Optimization,” SIAM Journal on Optimization, vol. 23, no. 2, pp. 1126–1153, 2013.