Optimal Information-Theoretic Wireless Location Verification


Abstract

We develop a new Location Verification System (LVS) focussed on network-based Intelligent Transport Systems and vehicular ad hoc networks. The algorithm we develop is based on an information-theoretic framework which uses the received signal strength (RSS) from a network of base-stations and the claimed position. Based on this information we derive the optimal decision regarding the verification of the user’s location. Our algorithm is optimal in the sense of maximizing the mutual information between its input and output data. Our approach is based on the practical scenario in which a non-colluding malicious user some distance from a highway optimally boosts his transmit power in an attempt to fool the LVS that he is on the highway. We develop a practical threat model for this attack scenario, and investigate in detail the performance of the LVS in terms of its input/output mutual information. We show how our LVS decision rule can be implemented straightforwardly with a performance that delivers near-optimality under realistic threat conditions, with information-theoretic optimality approached as the malicious user moves further from the highway. The practical advantages our new information-theoretic scheme delivers relative to more traditional Bayesian verification frameworks are discussed.

Index Terms: Location Verification, Wireless Networks, Mutual Information, Likelihood Ratio Test, Decision Rule, Threat Model.

1 Introduction

The almost ubiquitous use of position information in emerging wireless networks has made wireless location determination and location-based services a very active research topic in recent years, e.g. see [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. This in turn has made the supplementary issue of location verification in wireless networks an area of increasing importance. This is a consequence not only of the growing number of mobile services that utilize location information, but also of the mission-critical impact that the supplied location information has on the performance, security and safety of some services. The importance of location verification is perhaps best illustrated in emerging Intelligent Transport Systems (ITS) and vehicular ad hoc networks (VANETs), where the verification of the location information supplied by vehicles is vital to the safety issues ITS (and VANETs) hope to address [11]. Indeed, recently there has been much effort in analyzing how a Location Verification System (LVS) in the context of ITS may operate [12, 13, 14, 15, 16, 17, 18, 19]. Such ITS-based LVS work is also complemented by other recent research efforts on location verification in more generic wireless network settings (e.g. [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]).

In an LVS one aims at verifying a user’s claimed position based on some input measurements, so as to make a binary decision on whether the user is legitimate (claims his true position) or malicious (spoofs his claimed position). In general, an LVS aims at obtaining a low false positive rate for legitimate users and a high detection rate for malicious users, leading to a tradeoff perhaps best illustrated by a receiver operating characteristic (ROC) curve. However, it is established that ROCs are not always ideal for comparing the performance of two separate systems (e.g. [32, 33]). It is also the case that the use of a ROC does not in any formal sense indicate what the optimal operating point of an LVS is. A possible direction to follow in attempting an optimization of an LVS is to utilize a Bayesian hypothesis test, which with uninformative priors takes the form of a Likelihood Ratio Test (LRT) - which minimizes the input/output classification error in the scenario where the costs of all types of misclassification are equal [27]. Additionally, if the costs of misclassification are not equal, then a variation of the LRT decision rule can be formed, namely the Bayes criterion [34]. The Maximum A Posteriori criterion and the Maximum Likelihood criterion are special cases of the Bayes criterion. However, it is well known that these Bayes-decision criteria possess a weakness - they are subjective. This subjectivity arises through the necessity to pre-assign costs to the different types of misclassification. It has been discussed before how such subjectivity in Bayes criteria can give rise to confusion when comparisons of detector performances are made [32, 33]. As such, although many of the previous works on LVSs have their own specific verification performance goals in mind, and their own pros and cons, none of these works identify an optimal LVS in any non-subjective sense.

To make progress, what is actually required is an objective measure of detector performance, namely a single unified metric that takes into account all key aspects of intrusion detection in an objective fashion. As argued in [33], this metric should be the information-theoretic mutual information, and it is this approach we develop here in the context of location verification. More specifically, we develop an information-theoretic framework for an LVS in which the mutual information between the input and output LVS data is used as the objective optimization criterion. Some preliminary work along these lines has been attempted, but only for sub-optimal decision rules [35]. In this work we pursue an information-theoretic framework in which the decision rule is an optimal one.

In general an LVS can be characterized as follows. The input data (users to be verified) are represented by a binary random variable $X$, whose realizations $x \in \{0,1\}$ indicate legitimate ($x=0$) or malicious ($x=1$). Likewise, the output data can be represented by a binary random variable $Y$, whose realizations $y \in \{0,1\}$ indicate the binary decision made by the LVS, namely verified ($y=0$) or not verified ($y=1$). In the LVS, a decision rule is formed which indicates whether a user is malicious or not. This decision rule ultimately forms a test on whether some statistic (derived from network measurements and some prior information) is less than or equal to some threshold. With these definitions in place, the contributions of this paper can be specifically summarized thus.

  1. We develop for the first time an information-theoretic framework for an LVS, which allows us to utilize the mutual information between $X$ and $Y$ as a unique criterion to evaluate and optimize the performance of an LVS.

  2. Under the assumption of known likelihood functions for the measurements, we prove that the likelihood ratio is the test statistic that produces the maximum mutual information between $X$ and $Y$.

  3. Identifying the threshold value that maximizes the mutual information between $X$ and $Y$, we then show how the Likelihood Ratio Test (LRT) is the decision rule which maximizes the mutual information between $X$ and $Y$, and leads to the information-theoretic optimal LVS. We take the further step of determining the likelihood functions under a series of threat models. This leads to a working LVS that will be an optimal information-theoretic approach under the given threat models.

  4. We show from our analysis how an effectively optimal LVS, which is simple to deploy in practice, can be developed. We show that our LVS leads to an optimal solution for most realistic attack scenarios in which a malicious user who is outside a network region is attempting to spoof that he is within the network region. We further show how optimality is approached as the malicious user moves further from the network region.

The remainder of the paper is structured as follows: Section 2 presents both the general network system model and our information-theoretic LVS framework. The decision rule that optimizes mutual information is constructed in Section 3. In Section 4, analysis and simulations of our LVS are presented for a realistic threat model. Section 5 concludes the paper.

2 System Model and LVS Framework

In this section, we first present the general location verification system model and the related assumptions. Then, we develop an information-theoretic framework for an LVS, which allows us to utilize the mutual information between $X$ and $Y$ as a unique criterion to evaluate and optimize an LVS.

2.1 General System Model

The values of the input data $X$ can be represented as two hypotheses. The first of these is the null hypothesis, $H_0$, which assumes the user to be verified is legitimate ($x=0$). The second is the alternative hypothesis, $H_1$, which assumes the user to be verified is malicious ($x=1$). Likewise, the possible values of the output data $Y$ can be represented as two decisions, where $\mathcal{D}_0$ denotes verified ($y=0$), and $\mathcal{D}_1$ denotes not verified ($y=1$). We now outline the general LVS model, and detail the assumptions we use.

  1. A single user (legitimate or malicious) reports his claimed location, $\theta_c$, to a network with $N$ Base Stations (BSs) in the communication range of the user (the BSs are not in a line), where $\theta_i$ is the location of the $i$th BS, $i = 1, 2, \ldots, N$. One of the BSs is the Process Center (PC), and all other BSs will transmit the measurements collected from the user to the PC. The PC is to make decisions based on the user’s claimed location and the measurements collected by all the BSs. We assume all BSs are perfectly synchronized.

  2. We assume a user (legitimate or malicious) knows the locations of the BSs, and that $\theta_c$ is supplied by the user to the PC.

  3. For the legitimate user, we assume the true location, $\theta_t$, is given by $\theta_t = \theta_c$ (here we will ignore the small location determination error, the GPS error1). We assume the malicious user’s true location $\theta_t$ is known exactly to him (i.e. again we ignore any small localization error), but is unknown to the network.

  4. We assume the malicious user’s $\theta_t$ is a bivariate random variable following some distribution. The prior distribution, the probability density function (pdf), for $\theta_t$ under $H_1$ is denoted as $p(\theta_t \mid H_1)$.

  5. In general, the measurement $Z_i$ ($i = 1, \ldots, N$) collected by the $i$th BS from a legitimate user is dependent on $\theta_i$ and the legitimate user’s $\theta_c$. In practice, a malicious user can impact the measurements collected by all BSs in order to avoid detection. Thus, the measurement $Z_i$ collected by the $i$th BS from a malicious user is some function of $\theta_i$, the malicious user’s $\theta_t$, and his spoofed $\theta_c$. Therefore, the measurement $Z_i$ collected by the $i$th BS can be given as a composite model as follows:

    \[ Z_i = \begin{cases} g_0(\theta_i, \theta_c) + W_i, & \text{under } H_0, \\ g_1(\theta_i, \theta_t, \theta_c) + W_i, & \text{under } H_1, \end{cases} \qquad (1) \]

    where $g_0$ and $g_1$ are some functions yet to be specified (they can involve additional parameters), and $W_i$ is a random variable representing the communication channel noise. Given the statistical nature of $W_i$, the composite system model in (1) can produce the likelihood functions under $H_0$ and $H_1$, which are denoted as $p(\mathbf{z} \mid H_0)$ and $p(\mathbf{z} \mid H_1)$, respectively, where $\mathbf{z}$ is a realization of the measurement vector, $\mathbf{Z} = (Z_1, Z_2, \ldots, Z_N)$.

  6. We also assume a user is legitimate with a known prior probability, $p_0 = P(X=0)$. The probability of a user being malicious is denoted as $p_1 = P(X=1)$, and $p_1 = 1 - p_0$.

Figure 1: A Location Verification System (LVS) model.

2.2 Information-Theoretic Framework for an LVS

In general, the purpose of an LVS is to map the input data $X$ to the output data $Y$, and this mapping can be represented as shown in Fig. 1. In this figure, the false positive rate, $\alpha$, and the detection rate, $\beta$, are given as follows

\[ \alpha = P(Y=1 \mid H_0), \qquad \beta = P(Y=1 \mid H_1), \]

where $P(\cdot \mid \cdot)$ is the probability of an outcome conditional on a hypothesis. The mutual information between $X$ and $Y$ can be expressed as $I(X;Y) = H(X) - H(X \mid Y)$, where $H(X)$ is the entropy of $X$, and $H(X \mid Y)$ is the conditional entropy of $X$ given $Y$. Given $p_0$, the entropy of the discrete binary random variable $X$ can be written as $H(X) = -p_0 \log p_0 - p_1 \log p_1$. With these definitions, the conditional entropy $H(X \mid Y)$ can be expressed as in [32]

\[ \begin{aligned} H(X \mid Y) = & -p_0(1-\alpha)\log\frac{p_0(1-\alpha)}{p_0(1-\alpha)+p_1(1-\beta)} - p_1(1-\beta)\log\frac{p_1(1-\beta)}{p_0(1-\alpha)+p_1(1-\beta)} \\ & - p_0\alpha\log\frac{p_0\alpha}{p_0\alpha+p_1\beta} - p_1\beta\log\frac{p_1\beta}{p_0\alpha+p_1\beta}. \end{aligned} \qquad (2) \]

The mutual information measures the reduction in uncertainty of the input $X$ given the output $Y$. For example, if we make verification decisions without any observations (e.g. of received signal strengths), $X$ and $Y$ will be independent of each other, and $I(X;Y)$ will be minimized (zero). However, based on some observations our LVS attempts to map $X$ into $Y$ so as to minimize the uncertainty of $X$ given $Y$. An extreme example of this is when $X$ and $Y$ are identical, and therefore $I(X;Y)$ is maximized (of course this would require infinite noisy observations or finite noiseless observations). More generally, given some finite noisy observations, maximizing the mutual information leads to decisions which maximize the dependence of $X$ and $Y$. As such, the mutual information is the natural optimization metric for an LVS from an information-theoretic viewpoint. The information-theoretic optimal location verification algorithm can be defined as the one which maximizes $I(X;Y)$ as defined above.
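For concreteness, the mapping from an operating point $(\alpha, \beta)$ and the prior $p_0$ to the mutual information follows directly from (2). The short Python sketch below (function and variable names are our own, not from the paper) evaluates $I(X;Y)$ and the normalized form used later in Section 4; it is a minimal illustration, not part of the LVS algorithm itself.

```python
import numpy as np

def mutual_information(p0, alpha, beta):
    """I(X;Y) in bits for a binary-input / binary-output LVS.

    p0    : prior probability that the user is legitimate (X = 0)
    alpha : false positive rate  P(Y = 1 | X = 0)
    beta  : detection rate       P(Y = 1 | X = 1)
    """
    p1 = 1.0 - p0
    # Joint probabilities P(X = x, Y = y)
    joint = np.array([[p0 * (1 - alpha), p0 * alpha],
                      [p1 * (1 - beta),  p1 * beta]])
    px = joint.sum(axis=1, keepdims=True)   # marginal of X
    py = joint.sum(axis=0, keepdims=True)   # marginal of Y
    mask = joint > 0                        # skip zero-probability cells
    return float((joint[mask] * np.log2(joint[mask] / (px @ py)[mask])).sum())

def normalized_mi(p0, alpha, beta):
    """NMI = I(X;Y) / H(X), cf. (20)."""
    hx = -(p0 * np.log2(p0) + (1 - p0) * np.log2(1 - p0))
    return mutual_information(p0, alpha, beta) / hx

# Example: a detector with alpha = 0.04, beta = 0.9 at p0 = 0.9
print(normalized_mi(0.9, 0.04, 0.9))
```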

3 The Optimal Location Verification Algorithm

In this section, based on the assumption of known likelihood functions under both $H_0$ and $H_1$, we take the additional step of identifying the information-theoretic optimal location verification algorithm, i.e. the one which produces the maximum $I(X;Y)$ relative to any other location verification algorithm.

3.1 The Decision Rule for Maximizing $I(X;Y)$

In the context of an LVS, a location verification algorithm must formulate a decision rule to infer whether the user is consistent with $H_0$ or $H_1$. The algorithm ultimately forms a comparison of some test statistic, $T(\mathbf{z})$, and a corresponding threshold, $\lambda$, in the form of

\[ T(\mathbf{z}) \underset{\mathcal{D}_0}{\overset{\mathcal{D}_1}{\gtrless}} \lambda. \]

For a given $T(\mathbf{z})$, we will be interested in the value of $\lambda$ which maximizes $I(X;Y)$, i.e., $\lambda^*$. Furthermore, we will be interested in determining the functional form of $T(\mathbf{z})$ that maximizes $I(X;Y)$. This leads to our main result, which is stated in Theorem 1.

Theorem 1

Given the decision rule

\[ T(\mathbf{z}) \underset{\mathcal{D}_0}{\overset{\mathcal{D}_1}{\gtrless}} \lambda^*, \qquad (3) \]

where $\lambda^*$ is the value of $\lambda$ that maximizes $I(X;Y)$, the functional form of $T(\mathbf{z})$ that maximizes the mutual information $I(X;Y)$ is $T(\mathbf{z}) = \Lambda(\mathbf{z})$, where

\[ \Lambda(\mathbf{z}) \triangleq \frac{p(\mathbf{z} \mid H_1)}{p(\mathbf{z} \mid H_0)}. \qquad (4) \]

To prove Theorem 1, we first introduce two lemmas, of which the first is the Neyman-Pearson Lemma [36].

Lemma 1

Consider two hypotheses $H_0$ and $H_1$; the decision rule that maximizes the detection rate ($\beta$) for a given false positive rate ($\alpha$) is

\[ \Lambda(\mathbf{z}) = \frac{p(\mathbf{z} \mid H_1)}{p(\mathbf{z} \mid H_0)} \underset{\mathcal{D}_0}{\overset{\mathcal{D}_1}{\gtrless}} \lambda, \qquad (5) \]

where $\lambda$ is determined by the specified value of $\alpha$. For proof, see [36]. Before proceeding, we note $\beta > \alpha$ will be a basic requirement for any useful LVS.

Lemma 2

Given the assumption $\beta > \alpha$, the mutual information $I(X;Y)$ is a monotonically increasing function of the detection rate $\beta$.

Proof of Lemma 2: Since $H(X)$ is not dependent on $\beta$, the first derivative of $I(X;Y)$ with respect to $\beta$ can be expressed as

\[ \frac{\partial I(X;Y)}{\partial \beta} = -\frac{\partial H(X \mid Y)}{\partial \beta} = p_1 \log \frac{\beta\left[p_0(1-\alpha) + p_1(1-\beta)\right]}{(1-\beta)\left[p_0\alpha + p_1\beta\right]}. \]

Note, since $p_1 > 0$, and the logarithm is a monotonically increasing function of its argument, $\partial I(X;Y)/\partial \beta$ has the same sign as $g(\beta)$, where

\[ g(\beta) = \beta\left[p_0(1-\alpha) + p_1(1-\beta)\right] - (1-\beta)\left[p_0\alpha + p_1\beta\right] = p_0(\beta - \alpha). \]

Thus, given the assumption $\beta > \alpha$, then $\partial I(X;Y)/\partial \beta > 0$, and Lemma 2 is proved.

Given Lemma 1 and Lemma 2, we now prove Theorem 1.
Proof of Theorem 1: If the specified value of $\alpha$ in Lemma 1 is that which results from the threshold $\lambda^*$ in (3), then by Lemma 2 the result follows.

3.2 The Optimal Location Verification Algorithm

Based on the above discussion, the optimal information-theoretic location verification algorithm is presented in Algorithm 1.

Require: the prior probability $p_0$, and the likelihood functions $p(\mathbf{z} \mid H_0)$ and $p(\mathbf{z} \mid H_1)$.
Ensure: the binary decisions $\mathcal{D}_0$ (verified) and $\mathcal{D}_1$ (not verified).
1:  Determine the functional forms of $g_0$ and $g_1$ in (1).
2:  Specify the prior distributions for the unknown parameters (e.g. the malicious user’s true location $\theta_t$), and determine the likelihood functions $p(\mathbf{z} \mid H_0)$ and $p(\mathbf{z} \mid H_1)$.
3:  With (5) as the general decision rule, derive the functional forms of $\alpha$ and $\beta$. Note, $\alpha$ and $\beta$ will be functions of $\lambda$.
4:  Using $I(X;Y)$ as the objective function, search for $\lambda^*$, which is the value of $\lambda$ that maximizes $I(X;Y)$.
5:  Collect the measurements $\mathbf{z}$ and calculate the likelihood ratio $\Lambda(\mathbf{z})$ according to the likelihood functions determined in step 2.
6:  Form the optimal decision rule,
\[ \Lambda(\mathbf{z}) \underset{\mathcal{D}_0}{\overset{\mathcal{D}_1}{\gtrless}} \lambda^*. \qquad (6) \]
Algorithm 1 Optimal Location Verification Algorithm
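A minimal code-level sketch of how Algorithm 1 can be organised is given below; it reuses the mutual_information() helper from the sketch in Section 2.2, passes the log-likelihood ratio in as a callable, and replaces whatever closed-form rates are available (see Section 4) with a simple Monte Carlo estimate of $\alpha(\lambda)$ and $\beta(\lambda)$ and a grid search for $\lambda^*$. All names are illustrative assumptions, not part of the paper.

```python
import numpy as np

def optimal_threshold(loglik_ratio, sample_h0, sample_h1, p0,
                      lambdas, n_mc=10_000, rng=None):
    """Steps 3-4 of Algorithm 1: estimate alpha/beta for each candidate
    threshold and return the one maximizing I(X;Y).

    loglik_ratio : callable z -> ln Lambda(z), vectorized over rows of z
    sample_h0/h1 : callables (n, rng) -> n measurement vectors under H0/H1
    lambdas      : candidate thresholds (on ln Lambda)
    """
    rng = rng or np.random.default_rng(0)
    t0 = loglik_ratio(sample_h0(n_mc, rng))   # statistic under H0
    t1 = loglik_ratio(sample_h1(n_mc, rng))   # statistic under H1
    best, best_mi = None, -np.inf
    for lam in lambdas:
        alpha = float(np.mean(t0 >= lam))     # false positive rate
        beta = float(np.mean(t1 >= lam))      # detection rate
        mi = mutual_information(p0, alpha, beta)  # sketch from Section 2.2
        if mi > best_mi:
            best, best_mi = lam, mi
    return best, best_mi

def verify(z, loglik_ratio, lam_star):
    """Steps 5-6: accept the claim (D0) iff ln Lambda(z) < lambda*."""
    return "not verified" if loglik_ratio(z) >= lam_star else "verified"
```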

4 Specific Optimal Location Verification Algorithm with RSS as Measurements

In order to implement the optimal location verification algorithm, in this section we take the further step of determining the likelihood functions under $H_0$ and $H_1$ with Received Signal Strength (RSS) as the system measurements, and we consider the algorithm under a series of threat models.

Although the framework we develop can be built on any measurement (location information metric), such as RSS, TOA (time of arrival) and TDOA (time difference of arrival), for purposes of illustration we focus here only on an RSS implementation. In this case, the measurement $Z_i$ is the RSS (in dB) collected by the $i$th BS. We will also assume that the legitimate user and all BSs are equipped with only a single omni-directional antenna.

Let us define the set of BSs that are within range of a legitimate user positioned at $\theta_c$ as the in-range BSs. This set of BSs forms an effective perimeter for the network used in the location verification. We will assume a single malicious user has the technology (e.g. directional beam-forming) which allows him to ensure (if required) that from some position outside the perimeter only the in-range BSs receive a non-zero RSS. The malicious user can set the power of the main directional beam. We do not allow an adversary to set multiple beams to different BSs via colluding malicious users (see later discussion).

Based on the log-normal propagation model [37], $g_0$ in (1) can be specified so that under $H_0$

\[ Z_i = P_0 - 10\gamma \log_{10}\!\left(\frac{d_i^c}{d_0}\right) + W_i, \qquad (7) \]

where $P_0$ is a reference received power, $d_0$ is the reference distance, $\gamma$ is the path loss exponent, $W_i$ (in dB) is a zero-mean normal random variable with variance $\sigma_{dB}^2$, and the Euclidean distance of the $i$th BS to the user’s claimed location is $d_i^c = \|\theta_i - \theta_c\|$.

A malicious user can adjust his transmit power to impact the measurements collected by the BSs; thus $g_1$ in (1) can be expressed so that under $H_1$

\[ Z_i = P_x - 10\gamma \log_{10}\!\left(\frac{d_i^t}{d_0}\right) + W_i, \qquad (8) \]

where $P_x$ is a (reference received) power level set by the malicious user, and $d_i^t = \|\theta_i - \theta_t\|$ is the Euclidean distance of the $i$th BS to the user’s true location $\theta_t$.

Assuming all $W_i$’s are independent of each other, the likelihood function $p(\mathbf{z} \mid H_0)$ can be expressed as

\[ p(\mathbf{z} \mid H_0) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma_{dB}^2}} \exp\!\left(-\frac{(z_i - u_i)^2}{2\sigma_{dB}^2}\right), \qquad (9) \]

where

\[ u_i = P_0 - 10\gamma \log_{10}\!\left(\frac{d_i^c}{d_0}\right). \]

Also, the pdf of $\mathbf{z}$ conditional on $\theta_t$ under $H_1$, $p(\mathbf{z} \mid \theta_t, H_1)$, can be written as

\[ p(\mathbf{z} \mid \theta_t, H_1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma_{dB}^2}} \exp\!\left(-\frac{(z_i - v_i)^2}{2\sigma_{dB}^2}\right), \quad \text{where} \quad v_i = P_x - 10\gamma \log_{10}\!\left(\frac{d_i^t}{d_0}\right). \qquad (10) \]
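Under the log-normal model (7)-(8), the likelihoods (9) and (10) are products of independent Gaussians in the dB domain and are simple to evaluate. A small sketch under those assumptions (our own notation: bs_xy for BS coordinates, theta for the claimed or true location):

```python
import numpy as np

def mean_rss(bs_xy, theta, P0, gamma, d0=1.0):
    """Mean received power (dB) at each BS for a transmitter at `theta`,
    i.e. the u_i or v_i of (9)-(10)."""
    d = np.linalg.norm(bs_xy - theta, axis=1)
    return P0 - 10.0 * gamma * np.log10(d / d0)

def log_lik(z, bs_xy, theta, P0, gamma, sigma_db, d0=1.0):
    """ln p(z | theta) for the independent log-normal shadowing model."""
    u = mean_rss(bs_xy, theta, P0, gamma, d0)
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma_db**2)
                        - (z - u) ** 2 / (2 * sigma_db**2)))
```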

In general, a malicious user will utilize $P_x$ in an attempt to impact the measurements collected by the BSs in order to avoid detection. We now discuss how to determine the ‘optimal’ value of $P_x$ from a malicious user’s point of view. An LVS can be spoofed optimally if the measurements collected from a malicious user follow exactly $p(\mathbf{z} \mid H_0)$, which is given by (9). Therefore, in order to avoid detection a malicious user attempts to minimize the difference between $p(\mathbf{z} \mid \theta_t, H_1)$ and $p(\mathbf{z} \mid H_0)$. This difference can be quantified through the KL-divergence between the two likelihood functions, which is defined as follows [38]

\[ D\big(p(\mathbf{z} \mid H_0)\,\|\,p(\mathbf{z} \mid \theta_t, H_1)\big) = \int p(\mathbf{z} \mid H_0)\,\log\frac{p(\mathbf{z} \mid H_0)}{p(\mathbf{z} \mid \theta_t, H_1)}\, d\mathbf{z}, \]

where, for the Gaussian likelihoods in (9) and (10),

\[ D\big(p(\mathbf{z} \mid H_0)\,\|\,p(\mathbf{z} \mid \theta_t, H_1)\big) = \frac{1}{2\sigma_{dB}^2}\sum_{i=1}^{N}\left(u_i - v_i\right)^2. \]

This KL-divergence is the information lost when $p(\mathbf{z} \mid \theta_t, H_1)$ is used to approximate $p(\mathbf{z} \mid H_0)$, and it becomes zero if and only if the two distributions are identical. From an information-theoretic point of view the optimal value of $P_x$ can be expressed as

\[ P_x^{\mathrm{opt}} = \arg\min_{P_x} D\big(p(\mathbf{z} \mid H_0)\,\|\,p(\mathbf{z} \mid \theta_t, H_1)\big) = \frac{1}{N}\sum_{i=1}^{N}\left[u_i + 10\gamma \log_{10}\!\left(\frac{d_i^t}{d_0}\right)\right]. \qquad (11) \]

Setting $P_x = P_x^{\mathrm{opt}}$ in (8), $p(\mathbf{z} \mid \theta_t, H_1)$ can be rewritten as

\[ p(\mathbf{z} \mid \theta_t, H_1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma_{dB}^2}} \exp\!\left(-\frac{(z_i - \bar{v}_i)^2}{2\sigma_{dB}^2}\right), \qquad (12) \]

where

\[ \bar{v}_i = P_x^{\mathrm{opt}} - 10\gamma \log_{10}\!\left(\frac{d_i^t}{d_0}\right). \]

Although $\theta_t$ is a known deterministic parameter for a malicious user, it is unknown to the network. This means $P_x^{\mathrm{opt}}$ is still unknown, and therefore the likelihood function $p(\mathbf{z} \mid H_1)$ is unknown to the LVS. To make progress, we will assume some realistic threat models within which $p(\mathbf{z} \mid H_1)$ becomes known.
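For the Gaussian likelihoods above, the KL-divergence in (11) is a simple quadratic in $P_x$, so the attacker's optimal power setting has the closed form quoted in (11). A sketch under those assumptions (names are ours, and the d0 = 1 reference distance is an illustrative default):

```python
import numpy as np

def optimal_attack_power(bs_xy, theta_c, theta_t, P0, gamma, d0=1.0):
    """P_x minimizing the KL divergence between the induced and legitimate
    RSS distributions, cf. (11): sum((u_i - v_i)^2) is quadratic in P_x and
    is minimized by matching the average mean RSS."""
    d_c = np.linalg.norm(bs_xy - theta_c, axis=1)   # distances to claimed loc.
    d_t = np.linalg.norm(bs_xy - theta_t, axis=1)   # distances to true loc.
    u = P0 - 10 * gamma * np.log10(d_c / d0)        # legitimate means u_i
    # v_i = P_x - 10*gamma*log10(d_t/d0); set d(sum (u_i - v_i)^2)/dP_x = 0
    return float(np.mean(u + 10 * gamma * np.log10(d_t / d0)))
```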

4.1 Threat Model

Figure 2: Illustration of the Minimum Distance (MD) threat model.

The threat model we adopt can in principle accommodate any true/spoofed location pair. However, as the spoofed location approaches the true location, our detection rate will approach zero (as expected for any verification system). As such, in the following we will make the assumption that the true position of an attacker is some minimum distance from the spoofed position. To quantify this we will assume that this distance is always greater than the mean separation distance between BSs. This is a reasonable assumption since it is unlikely an attacker will try to spoof a location too close to his actual locale, for fear of apprehension. Pragmatically, this assumption also means that in effect the attacker will be placed at some minimum distance off the highway, since we find that if he is on the highway, under our minimum distance assumption, he is trivially detectable.2

We henceforth refer to our generic threat model as the Minimum Distance (MD) threat model, a schematic of which is given in Fig. 2. In this scenario it is assumed the BSs that form the infrastructure part of the VANET are placed alongside the highway (or on overhanging structures along the highway). This represents a realistic expectation for the physical deployment architecture for VANETs that is emerging from the ITS community.

However, before presenting the details of the MD threat model, we will first consider some simplifying approximations, which although not adopted in the MD threat model, do allow for additional insight and analytical clarity. We will also show how the optimal threshold derived for the MD threat model is effectively the same as the optimal threshold derived under some of the simplifying approximations.

Far Field Approximation (FFA)

In this subsection we propose the deployment of our LVS within a threat model where the far-field approximation (FFA) is made, meaning that the malicious user’s distance from the highway is far enough that we can assume all (mean) RSS values received by all BSs are equal. Although never achieved in practice, this simplification will allow us some initial insight into the performance of the LVS. Under the FFA we can take the distance of a malicious user’s true location to every BS to be approximated as a constant, $d_F$. Therefore, we will assume

\[ d_i^t \approx d_F, \qquad i = 1, 2, \ldots, N. \qquad (13) \]

Substituting (13) into (12), the measurement model for the malicious user under the FFA can be expressed as

\[ Z_i = \bar{u} + W_i, \qquad \text{where} \quad \bar{u} = \frac{1}{N}\sum_{i=1}^{N} u_i, \qquad (14) \]

since $P_x^{\mathrm{opt}} = \bar{u} + 10\gamma\log_{10}(d_F/d_0)$ in this case. Then, $p(\mathbf{z} \mid H_1)$ (which now does not depend on $\theta_t$) can be written as

\[ p(\mathbf{z} \mid H_1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma_{dB}^2}} \exp\!\left(-\frac{(z_i - \bar{u})^2}{2\sigma_{dB}^2}\right). \qquad (15) \]

Based on (4), (9) and (15), we construct the decision rule

\[ \Lambda(\mathbf{z}) = \frac{p(\mathbf{z} \mid H_1)}{p(\mathbf{z} \mid H_0)} \underset{\mathcal{D}_0}{\overset{\mathcal{D}_1}{\gtrless}} \lambda. \qquad (16) \]

In order to help determine $\alpha$ and $\beta$ analytically, this decision rule can be rewritten as

\[ S(\mathbf{z}) \underset{\mathcal{D}_0}{\overset{\mathcal{D}_1}{\gtrless}} \tau, \qquad (17a) \]
where
\[ S(\mathbf{z}) = \sum_{i=1}^{N} z_i\,(\bar{u} - u_i), \qquad (17b) \]
and
\[ \tau = \sigma_{dB}^2 \ln\lambda - \frac{D}{2}, \qquad \text{with} \quad D \triangleq \sum_{i=1}^{N}(u_i - \bar{u})^2. \qquad (17c) \]

Given (9) and (17), we have

\[ S(\mathbf{z}) \mid H_0 \sim \mathcal{N}\!\left(-D,\; \sigma_{dB}^2 D\right), \]

where $\mathcal{N}(\mu, \sigma^2)$ represents a normal distribution with $\mu$ and $\sigma^2$ as the mean and variance, respectively. Likewise, given (15) and (17), we have

\[ S(\mathbf{z}) \mid H_1 \sim \mathcal{N}\!\left(0,\; \sigma_{dB}^2 D\right). \]

The false positive and detection rates under the FFA can now be expressed analytically as

\[ \alpha = Q\!\left(\frac{\tau + D}{\sigma_{dB}\sqrt{D}}\right), \qquad (18) \]
\[ \beta = Q\!\left(\frac{\tau}{\sigma_{dB}\sqrt{D}}\right), \qquad (19) \]

where $Q(x) = \frac{1}{\sqrt{2\pi}}\int_{x}^{\infty} e^{-t^2/2}\,dt$.

Having determined $\alpha$ and $\beta$ under the FFA, we can use these in (2) for the conditional entropy $H(X \mid Y)$. The value of $\tau$ which maximizes $I(X;Y)$, denoted as $\tau^*$, can be determined numerically. Using (17c), the corresponding $\lambda^*$ can be determined from $\tau^*$. Then, the decision rule in (6), which leads to the optimal verification algorithm under the FFA, can be formed, where $\Lambda(\mathbf{z})$ is specified in (16).
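Putting (17)-(19) together, the FFA operating point and the information-theoretic optimal threshold can be obtained with a few lines of numerical code. The sketch below is our own illustration under the stated FFA assumptions; it uses scipy's survival function as the Q-function and reuses mutual_information() from the Section 2.2 sketch.

```python
import numpy as np
from scipy.stats import norm

def ffa_rates(tau, u, sigma_db):
    """alpha and beta from (18)-(19) for a threshold tau on S(z)."""
    D = np.sum((u - u.mean()) ** 2)
    alpha = norm.sf((tau + D) / (sigma_db * np.sqrt(D)))   # Q-function
    beta = norm.sf(tau / (sigma_db * np.sqrt(D)))
    return alpha, beta

def ffa_optimal_threshold(u, sigma_db, p0, taus):
    """Grid search for the tau maximizing I(X;Y); lambda* then follows
    from (17c)."""
    best_tau, best_mi = None, -np.inf
    for tau in taus:
        a, b = ffa_rates(tau, u, sigma_db)
        mi = mutual_information(p0, a, b)      # sketch from Section 2.2
        if mi > best_mi:
            best_tau, best_mi = tau, mi
    D = np.sum((u - u.mean()) ** 2)
    lam_star = np.exp((best_tau + D / 2) / sigma_db**2)    # invert (17c)
    return best_tau, lam_star, best_mi
```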

Figure 3: Analytical and simulated $\alpha$, $\beta$ and Normalized Mutual Information (NMI) as functions of the threshold.

We verify the false positive and detection rates, given by (18) and (19), respectively, via detailed Monte Carlo simulations. The simulation settings are chosen so as to mimic a location verification test over an area spanning the intersection of several major freeways:

  • The BSs are randomly distributed in a square area.

  • The claimed locations of the legitimate and malicious users are identical.

  • The legitimate users are at the claimed location. The malicious users are infinitely far away from the claimed location, which in practice means the measurements collected are generated according to (14).

  • Each BS collects measurements from each user.

Figure 4: Maximum Normalized Mutual Information (NMI) for different values of the system parameters.

In the following, the simulation results are obtained through 10,000 Monte Carlo realizations of the measurement vector $\mathbf{Z}$, and in all the specific results shown we have adopted fixed values of the prior probability $p_0$ and the path loss exponent $\gamma$. Also note, we denote the Normalized Mutual Information (NMI) as

\[ \mathrm{NMI} \triangleq \frac{I(X;Y)}{H(X)}. \qquad (20) \]

In Fig. 3, the analytical $\alpha$ and $\beta$ are directly derived from (18) and (19), respectively, and the analytical NMI is calculated using (2), (18), (19) and (20). In order to obtain the simulated $\alpha$, we randomly generate $\mathbf{z}$ according to (7), from which we get a specific realization of $\Lambda(\mathbf{z})$, and for each value of the threshold we decide whether the user is legitimate or malicious by (16). To obtain the simulated $\beta$, we randomly generate $\mathbf{z}$ according to (14), and follow the same procedure as above. The simulated NMI is calculated using (2) and (20) with the simulated $\alpha$ and $\beta$ as the input. In Fig. 3 we have set fixed values for the remaining system parameters. From Fig. 3, we can see that the comparison between simulation and analysis shows excellent agreement, which verifies the analysis we have provided under the FFA. As we can also see from Fig. 3, relatively high false positive rates are found at low thresholds. This is a consequence of the LVS operating at a point far from the optimal threshold. However, we see that at the information-theoretic optimal threshold, which maximizes the NMI, the false positive rate is approximately 4%. This strong dependence on the threshold (also seen in all our other results) re-emphasizes the critical importance of always operating the LVS at the optimal threshold. We have investigated a range of other values of the system parameters. Some of these results are shown in Fig. 4, where the maximum NMI is shown for different parameter settings. From Fig. 4, we see again that the simulations agree with the analytical results.

Uniformly Distributed Approximation (UDA)

In this subsection we propose the Uniformly Distributed Approximation (UDA), where the malicious users are assumed to be uniformly distributed on a circle. Again, although never achievable in practice, this simplification will allow us additional insights. More specifically, the malicious user’s true location $\theta_t$ is uniformly distributed on a circle whose radius and center are $r$ and $\theta_c$, respectively.

The main purpose of this model is to commence our probe of how reliable the use of the FFA will be when its assumptions are violated. To this end, we note that if the maximum difference between the mean measurements collected (from a malicious user) at any two BSs, $\bar{v}_i$ and $\bar{v}_j$, is no larger than some small tolerance $\epsilon$, the scale at which this occurs provides a natural distance at which we could anticipate the FFA and the UDA to be approximately equivalent. To quantify this let us introduce $R$, the maximum distance from the claimed location $\theta_c$ to any BS. Under the UDA, the difference between $\bar{v}_i$ and $\bar{v}_j$ can be written as

\[ \bar{v}_i - \bar{v}_j = 10\gamma \log_{10}\!\left(\frac{d_j^t}{d_i^t}\right). \qquad (21) \]

Given that for a malicious user we have $d_i^t \geq r - R$ and $d_j^t \leq r + R$, we can bound (21) as

\[ \bar{v}_i - \bar{v}_j \leq 10\gamma \log_{10}\!\left(\frac{r + R}{r - R}\right), \]

where without loss of generality we have assumed $d_j^t \geq d_i^t$. In order to guarantee the required constraint $\bar{v}_i - \bar{v}_j \leq \epsilon$, we should have

\[ 10\gamma \log_{10}\!\left(\frac{r + R}{r - R}\right) \leq \epsilon, \]

which results in

\[ r \geq r_{\mathrm{ref}} \triangleq R\,\frac{10^{\epsilon/(10\gamma)} + 1}{10^{\epsilon/(10\gamma)} - 1}, \qquad (22) \]

where $r_{\mathrm{ref}}$ is a reference value that will be utilized when comparison under the FFA is made. Such a comparison is achieved by using the FFA decision rule in (16) but under the UDA. In such a set up we would anticipate that the optimal thresholds under the FFA and UDA would be very similar at $r \approx r_{\mathrm{ref}}$.

To proceed with a comparison under the FFA and UDA we conduct Monte Carlo simulations. In these simulations, note that although $p(\mathbf{z} \mid H_0)$ as given by (9) is used, the likelihood given by

\[ p(\mathbf{z} \mid H_1) = \int p(\mathbf{z} \mid \theta_t, H_1)\, p(\theta_t \mid H_1)\, d\theta_t \qquad (23) \]

must be determined numerically. The other simulation settings are the same as under the FFA, except the malicious users are uniformly distributed on a circle whose radius and center are $r$ and $\theta_c$, respectively. The measurements collected from the legitimate and malicious users are generated according to (7) and (12), respectively. To obtain the true numerical NMI under the UDA, we use (9) and (23) to calculate $p(\mathbf{z} \mid H_0)$ and $p(\mathbf{z} \mid H_1)$, respectively, and utilize (5) as the decision rule. To simulate the NMI obtained from the use of the FFA decision rule (but under the UDA), we use (9) and (15) in order to implement the decision rule in (16).
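Since (23) has no closed form under the UDA, one way to evaluate $p(\mathbf{z} \mid H_1)$ numerically is Monte Carlo integration over the prior on the attacker's position, averaging the conditional likelihood over points drawn uniformly on the circle. A sketch under the stated assumptions, reusing optimal_attack_power() from the earlier sketch (all names are ours):

```python
import numpy as np

def lik_h1_uda(z, bs_xy, theta_c, r, P0, gamma, sigma_db,
               n_samples=2000, rng=None):
    """Monte Carlo estimate of p(z | H1) in (23) for a malicious user
    uniformly distributed on a circle of radius r centred on theta_c,
    assuming the attacker uses the optimal power of (11)."""
    rng = rng or np.random.default_rng(0)
    phis = rng.uniform(0.0, 2 * np.pi, n_samples)
    total = 0.0
    for phi in phis:
        theta_t = theta_c + r * np.array([np.cos(phi), np.sin(phi)])
        px = optimal_attack_power(bs_xy, theta_c, theta_t, P0, gamma)
        # conditional likelihood p(z | theta_t, H1) with means v_i, cf. (12)
        d_t = np.linalg.norm(bs_xy - theta_t, axis=1)
        v = px - 10 * gamma * np.log10(d_t)
        total += np.exp(np.sum(-0.5 * np.log(2 * np.pi * sigma_db**2)
                               - (z - v) ** 2 / (2 * sigma_db**2)))
    return total / n_samples
```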

From our results, shown in Fig. 5, we can see that at values of $r$ well below $r_{\mathrm{ref}}$ the optimal thresholds (the values of the threshold which maximize the NMI) for the two cases are very different. However, as $r$ increases, the optimal threshold obtained under blindly adopting the FFA decision rule (even though the malicious user is not at infinity) becomes effectively the optimal value. Note also, the maximum values of the two NMIs and the corresponding optimal thresholds are coincident when $r = r_{\mathrm{ref}}$, which verifies that the reference value $r_{\mathrm{ref}}$ in (22) is reasonable.

Figure 5: Normalized Mutual Information (NMI) as a function of the threshold for different values of the malicious user’s distance $r$. The solid curves represent the NMI achieved under the correct decision rule (5). The dashed curves represent the NMI achieved under the FFA decision rule (16).

The MD Threat Model

In this subsection we implement the optimal location verification algorithm under our adopted threat model - the MD threat model. In this model $p(\theta_t \mid H_1)$ is assumed to be uniform over the annulus formed by two concentric circles, whose finite radii are $r_a$ and $r_b$ ($r_a < r_b$), respectively, and whose mutual center is $\theta_c$. The use of an annulus setting allows us to cover more general settings (beyond just single highways/freeways), such as freeway intersection regions where the freeways can have multiple directions. In any scenario (single or intersecting roads) it will be assumed that the malicious user will not enter into any region (we assume the malicious user knows the locations of all BSs) where he is less than some minimum distance from any of the VANET’s infrastructure BSs (see footnote 2).

This implies that the inner radius $r_a$ is no smaller than this minimum distance. Under this model, $p(\mathbf{z} \mid H_0)$ is the same as in (9), and $p(\mathbf{z} \mid H_1)$ is as given in (23) but with the modified prior distribution $p(\theta_t \mid H_1)$. Again, no closed-form solution is available for (23).

We present new Monte Carlo simulations where the settings are again the same as under the FFA, except that now the malicious users are uniformly distributed in the annulus. Again, the measurements collected from the legitimate and malicious users are generated according to (7) and (12), respectively. To obtain the true numerical NMI under the MD threat model, we use (9) and (23) to calculate $p(\mathbf{z} \mid H_0)$ and $p(\mathbf{z} \mid H_1)$, respectively, and utilize (5) as the decision rule. To simulate the NMI obtained from the use of the FFA decision rule (but under the MD threat model), we use (9) and (15) in order to implement the decision rule in (16). The results of our simulations are shown in Fig. 6, where $r$ is redefined as the inner radius $r_a$ of the annulus.

Figure 6: Numerical and FFA-approximated Normalized Mutual Information (NMI) as functions of the threshold for different values of $r$. The solid curves represent the NMI achieved under the correct decision rule (5). The dashed curves represent the NMI achieved under the FFA decision rule (16).

In the top left plot of Fig. 6, we have set the inner and outer radii equal (the annulus collapses to a circle), so in this specific plot the MD threat model is equivalent to that under the UDA (the result is the same as that shown in the top right plot of Fig. 5). However, again we see that as $r$ increases, the optimal threshold obtained under blindly adopting the FFA decision rule (even though the malicious users are constrained within an annulus) is effectively the optimal value. Note also, that in the MD threat model this result holds even for cases in which $r$ is less than $r_{\mathrm{ref}}$ (which was not the case under the UDA).

As a final point, we note that instead of numerically solving (23), it may be useful to find an approximate closed-form solution to (23) (e.g. this would allow for approximate closed forms for the false positive and detection rates under this threat model). We can approximate $p(\mathbf{z} \mid H_1)$ via an application of the Laplace approximation, which can approximate integrals through a series expansion by using local information about the integrand around its maximum [39, 40]. The details are as follows. First, let us define a quantity $h(\theta_t)$ as:

\[ h(\theta_t) \triangleq \ln\!\big[\,p(\mathbf{z} \mid \theta_t, H_1)\, p(\theta_t \mid H_1)\,\big]. \]

$h(\theta_t)$ can be expanded using a Taylor series around its maximum a-posteriori (MAP) estimate, denoted by $\hat{\theta}_t$. This is the point where the posterior density is maximized, i.e., the mode of the posterior distribution. Hence, we obtain to second order,

\[ h(\theta_t) \approx h(\hat{\theta}_t) + (\theta_t - \hat{\theta}_t)^{T}\nabla h(\hat{\theta}_t) + \frac{1}{2}(\theta_t - \hat{\theta}_t)^{T}\mathbf{H}(\hat{\theta}_t)(\theta_t - \hat{\theta}_t). \qquad (24) \]

The second term in (24) is zero, because the first derivative is zero at the maximum of $h(\theta_t)$. Replacing $h(\theta_t)$ by the truncated second-order Taylor series yields:

\[ p(\mathbf{z} \mid \theta_t, H_1)\, p(\theta_t \mid H_1) \approx p(\mathbf{z} \mid \hat{\theta}_t, H_1)\, p(\hat{\theta}_t \mid H_1)\exp\!\left[\frac{1}{2}(\theta_t - \hat{\theta}_t)^{T}\mathbf{H}(\hat{\theta}_t)(\theta_t - \hat{\theta}_t)\right], \]

where $\mathbf{H}(\hat{\theta}_t)$ is the Hessian of the posterior, evaluated at $\hat{\theta}_t$:

\[ \mathbf{H}(\hat{\theta}_t) = \nabla\nabla^{T} h(\theta_t)\,\big|_{\theta_t = \hat{\theta}_t}. \]

Using the above approximation, we have the following

\[ \int \exp\!\left[\frac{1}{2}(\theta_t - \hat{\theta}_t)^{T}\mathbf{H}(\hat{\theta}_t)(\theta_t - \hat{\theta}_t)\right] d\theta_t = 2\pi\,\big|-\mathbf{H}(\hat{\theta}_t)\big|^{-1/2}, \]

since $\theta_t$ is bivariate. Finally, the marginal likelihood estimate can be written as

\[ p(\mathbf{z} \mid H_1) \approx \hat{p}(\mathbf{z} \mid H_1) \triangleq 2\pi\,\big|-\mathbf{H}(\hat{\theta}_t)\big|^{-1/2}\, p(\mathbf{z} \mid \hat{\theta}_t, H_1)\, p(\hat{\theta}_t \mid H_1). \qquad (25) \]

In (25), $p(\hat{\theta}_t \mid H_1)$ and $\big|-\mathbf{H}(\hat{\theta}_t)\big|^{-1/2}$ are both constant for a specific $\mathbf{z}$; thus the Laplace-approximated likelihood function, $\hat{p}(\mathbf{z} \mid H_1)$, is an $N$-dimensional normal distribution with the same variance as $p(\mathbf{z} \mid H_0)$ (because the variances of $p(\mathbf{z} \mid \theta_t, H_1)$ and $p(\mathbf{z} \mid H_0)$ are the same). Under the Laplace approximation the decision rule in (5) is approximated by

\[ \frac{\hat{p}(\mathbf{z} \mid H_1)}{p(\mathbf{z} \mid H_0)} \underset{\mathcal{D}_0}{\overset{\mathcal{D}_1}{\gtrless}} \lambda. \qquad (26) \]
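The Laplace approximation (25) only requires the MAP location and the 2x2 Hessian of the log-posterior, both of which can be obtained numerically. A minimal sketch (our own names; scipy is used for the optimization and central finite differences for the Hessian, which stands in for whatever analytic derivatives are available):

```python
import numpy as np
from scipy.optimize import minimize

def laplace_lik_h1(z, log_joint, theta_init):
    """Laplace estimate of p(z | H1) in (25), where
    log_joint(theta) = ln[p(z | theta, H1) p(theta | H1)] = h(theta)."""
    # Step 1: MAP estimate (mode of the posterior over the 2-D location)
    theta_map = minimize(lambda th: -log_joint(th), theta_init).x
    # Step 2: Hessian of h at the MAP point via central finite differences
    eps, H = 1e-3, np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            e_i, e_j = np.eye(2)[i] * eps, np.eye(2)[j] * eps
            H[i, j] = (log_joint(theta_map + e_i + e_j)
                       - log_joint(theta_map + e_i - e_j)
                       - log_joint(theta_map - e_i + e_j)
                       + log_joint(theta_map - e_i - e_j)) / (4 * eps**2)
    # Step 3: Gaussian integral, 2*pi / sqrt(det(-H)) for a bivariate theta
    return float(np.exp(log_joint(theta_map)) * 2 * np.pi
                 / np.sqrt(np.linalg.det(-H)))
```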
Figure 7: Numerical and Laplace-approximated ROC curves for different values of the system parameters.

To study the performance of our Laplace approximation we calculate ROC curves for both the numerical Monte Carlo calculation of $p(\mathbf{z} \mid H_1)$ and the Laplace approximation of $p(\mathbf{z} \mid H_1)$. These different forms are then used in the same decision rule (5) in order to form the ROC curves. The results of these simulations are shown in Fig. 7, and as we can see the approximation is a good one for the parameters used. We have further investigated the accuracy of the approximation over a range of other parameters, finding similar results to those shown in Fig. 7.

4.2 Discussion

In the preceding sections we have looked at a general attack scenario under specific threat models. The attack scenario we have focussed on is that of a non-colluding adversary who is attempting to spoof that he is within the perimeter of some wireless network, when in reality he is some distance beyond the network boundary. A non-colluding adversary who is within the network region will in general be easily identified, due to his inability to set different received signal strengths at different BSs. An attack from outside the network region is perhaps the most realistic and likely scenario one can imagine for the emerging ITS scenario. For example, a single adversary who is some distance from the highway (so as not to be easily identified or caught) is attempting to disrupt proper functioning of the ITS on the highway.

What we have shown through our investigations of specific threat models under our general attack scenario is that an optimal LVS can be developed for each threat model, but in a non-straightforward manner in most cases - i.e. no closed-form solutions for the detection and false positive rates are available. Without such closed-form solutions for these rates, one must resort to complex and time-consuming Monte Carlo simulations in order to determine the optimal threshold. However, from considerations of the FFA, and how other threat models can be approximated under the FFA, we have shown how a straightforward LVS algorithm can be deployed which is effectively optimal for most circumstances. More specifically, using analytical solutions for the detection and false positive rates in the FFA setting, which are then used in easily determining the optimal threshold value, a straightforward LVS is developed whose performance is near-optimal when the adversary is close to the network, and optimal as the adversary moves to a large distance from the network.

However, of course more sophisticated attacks than those highlighted above are possible. The most obvious of these is that of colluding adversaries who can communicate and cooperate with each other so as to form collective attacks on the LVS. An example of such an attack would be colluding adversaries who set different received signal strengths at different BSs. On the defensive side, the network could also deploy beam-forming techniques to help the LVS thwart these types of attacks. The LVS could also deploy tracking algorithms and physical layer security techniques to assist in its defense. These more sophisticated forms of attack and their corresponding defensive strategies are out of the scope of the current work, but do form part of our ongoing research efforts in this area. However, we should be clear that any defensive strategy for an LVS is ultimately doomed if the colluding adversary is afforded unlimited resources and the communications network is purely classical in nature.3

In this work, we assume error-free location estimation for the legitimate users. In fact, if the localization error is small relative to the scale of the network boundary, the effects of this error can be ignored. To verify this, we have carried out additional Monte Carlo simulations identical to those producing Fig. 5, except that the localization error for legitimate users is assumed to follow a bivariate normal distribution. We find the results are negligibly different from those shown in Fig. 5 even when the variance of the localization error in each coordinate is large (of order 100 in the distance units adopted). It is perhaps worth noting that the prior distributions of the localization error and a malicious user’s location are different, and we could not distinguish between them if there is an overlap between the two distributions. Inaccurate knowledge of the system and channel parameters will reduce the LVS performance. We have quantified this for our specific LVS by carrying out additional simulations in which knowledge of the input LVS parameters is perturbed from the true underlying parameters. We find that such errors induce an error of approximately 40% in the value of the optimal threshold. However, we do point out that this is a worst-case scenario, as we have assumed in our simulations that the attacker retains perfect knowledge of the parameters (which in reality will be untrue) so as to perform the optimal attack (optimal power boost). We should also note that any spatial correlation of shadowing beyond that accounted for by the shadowing standard deviation has not been included in our simulations. Any such correlation would add additional uncertainty into any LVS unless it had been pre-measured and included as part of the channel model.

4.3 Results in Relation to Other Works

Location verification has been an active research area, and many verification algorithms have been proposed for VANETs e.g. [12, 13, 14, 15, 16, 17, 18, 19], wireless sensor networks e.g. [21, 31], and generic wireless networks e.g. [22, 23, 24, 25, 26, 27, 28, 29, 30].

Perhaps the most closely related works to ours are those which propose optimizing the system’s threshold by minimizing the probability of misclassification (e.g. [27]), which is defined as $P_e = p_0\alpha + p_1(1-\beta)$. Of course, a direct comparison between such systems and ours is not entirely meaningful, due to the different optimization metrics being used. Further to this, it is important to note the complex interplay between the entropy of a random variable, $H(X)$, and the probability of misclassification. Although it may seem at first counter-intuitive, the fact is that there is not a one-to-one relationship between $H(X)$ and $P_e$. That is, two random variables with the same entropy can have different $P_e$ [46]. This same issue extends to the NMI and $P_e$, and in the context of our LVS it is important to recognize this fact. As such, if optimization of $P_e$ is the system objective, then use of a Bayesian hypothesis test, where the costs of all types of misclassification are equal, will suffice.4 But again we must stress that in the context of real-world LVS deployments this represents a strong subjective decision on the cost of misclassifications. Given the complexity, and the many different roles of location information within the ITS scenario (crash avoidance, vehicle-congestion avoidance, vehicle-to-vehicle communication protocols etc.), proper determination of misclassification costs will be, at best, extremely complex in nature. It is for this reason we have approached optimization of our LVS from an objective information-theoretic viewpoint. Our guiding light has been the well-known Infomax principle [47], which states that an optimal system must transfer as much information as possible from its input to its output - i.e. it maximizes the mutual information between its inputs and outputs.
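The comparison described here is straightforward to reproduce numerically: for a common family of operating points $(\alpha(\tau), \beta(\tau))$ one maximizes the NMI and minimizes $P_e = p_0\alpha + p_1(1-\beta)$ over the same threshold grid and compares the optimizers. A brief sketch reusing the helpers from the earlier sketches (names are ours):

```python
import numpy as np

def compare_thresholds(u, sigma_db, p0, taus):
    """Return the tau maximizing NMI and the tau minimizing the
    misclassification probability P_e = p0*alpha + (1 - p0)*(1 - beta)."""
    nmis, pes = [], []
    for tau in taus:
        a, b = ffa_rates(tau, u, sigma_db)        # from the FFA sketch
        nmis.append(normalized_mi(p0, a, b))      # from the Section 2.2 sketch
        pes.append(p0 * a + (1 - p0) * (1 - b))
    taus = np.asarray(taus)
    return taus[int(np.argmax(nmis))], taus[int(np.argmin(pes))]
```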

Figure 8: Normalized Mutual Information (NMI) and probability of misclassification ($P_e$) as functions of the threshold. (Note here the system and network parameters are the same as those utilized in Fig. 3.)

Notwithstanding the above discussion, we compare the optimal thresholds for the NMI and $P_e$, as shown in Fig. 8. From this figure, we can see that for equal priors the optimal threshold is the same for both algorithms. However, the optimal thresholds for the two algorithms are different when the priors are unequal. Further, we see that in the unequal-prior case, the change in $P_e$, if the optimal NMI threshold is used instead of the optimal $P_e$ threshold, is significantly less than the change in NMI if the optimal $P_e$ threshold is used instead of the optimal NMI threshold.

We expand upon this last point in Fig. 9, where the optimal thresholds for each system are plotted as functions of $p_1$. This figure outlines another pragmatic advantage of the NMI approach. In reality, the base rate of intrusions ($p_1$) is an unknown parameter for all LVS systems. As such, we see from Fig. 9 that the use of the NMI results in a more robust system. As the true value of $p_1$ approaches small values (in any real situation it will be small), the NMI threshold is insensitive to the assumed $p_1$. This means that when using the NMI as the optimization metric, any mismatch between the true and assumed $p_1$ has little impact on system performance. In the Bayesian framework, however, the optimal threshold for minimizing $P_e$ remains very sensitive (linear) to the assumed value of $p_1$. In this latter case, a mismatch between the true and assumed $p_1$ results in very poor system performance.

Figure 9: Optimal NMI threshold and optimal $P_e$ threshold as functions of $p_1$. (Note here the system and network parameters are the same as those utilized in Fig. 3.)

5 Conclusions

In this paper, we developed an information-theoretic framework for an LVS, utilizing as the objective optimization criterion the mutual information between input and output data of the LVS. We investigated our new optimal LVS under a realistic threat model, showing how in a straightforward implementation of an LVS, information-theoretic optimality is approached as the non-colluding adversary moves further from the network region it is claiming to be within. This straightforward implementation makes our new algorithm an ideal candidate for the LVS that will be needed in emerging network-based and safety-enhanced transportation systems, such as ITS.

6 Acknowledgment

This work has been supported by the University of New South Wales, and the Australian Research Council, grant DP120102607. Gareth W. Peters is supported in this research by a Royal Society International Exchanges Scheme IE121426.

Footnotes

  1. When this error is much smaller than the average distance between BSs the effect on the results is negligible.
  2. A malicious user (vehicle) on the highway claiming to be at a position which is at least the mean separation distance between BSs away, must boost its transmissions significantly. However, since the user is on the actual highway, the nearest BS to its true location will easily detect an attack. This is confirmed by our analysis, and as such we will henceforth consider the attacker to be sophisticated enough to realize that to have any reasonable chance of remaining undetected he must launch his attacks from a position at some minimum distance from the closest VANET BS (i.e. off the highway).
  3. Note that location verification in the context of quantum communications systems has previously been considered e.g. [41], [42], [43], and it has been argued that such systems are able to securely verify a location under all known threat models [44] - although see [45] who argue otherwise. It is undisputed that classical communications alone cannot achieve secure location verification under all known threat models.
  4. In a more general Bayesian framework, the average Bayesian cost is defined as $\bar{C} = C_{\mathrm{FP}}\, p_0\, \alpha + C_{\mathrm{FN}}\, p_1 (1-\beta)$, where $C_{\mathrm{FP}}$ is the pre-assigned cost of rejecting a legitimate user, and $C_{\mathrm{FN}}$ is the pre-assigned cost of accepting a malicious user [34]. If $C_{\mathrm{FP}}$ and $C_{\mathrm{FN}}$ are known or can be set, the Bayesian framework is optimal for an LVS in the sense that it minimizes the average Bayesian cost ($P_e$ is the special case of $\bar{C}$ with $C_{\mathrm{FP}} = C_{\mathrm{FN}} = 1$).

References

  1. R. A. Malaney, “Nuisance parameters and location accuracy in log-normal fading models,” IEEE Trans. Wireless Commun., vol. 6, no. 3, pp. 937–947, Mar. 2007.
  2. B. Liu and K. Lin, “Wireless location uses geometrical transformation method with single propagation delay: model and detection performance,” IEEE Trans. Veh. Technol., vol. 57, no. 5, pp. 2920–2932, Sep. 2008.
  3. E. Kuiper and S. Nadjm-Tehrani, “Geographical routing with location service in intermittently connected MANETs,” IEEE Trans. Veh. Technol., vol. 60, no. 2, pp. 592–604, Feb. 2011.
  4. M. Anisetti, C. A. Ardagna, V. Bellandi, E. Damiani, and S. Reale, “Map-based location and tracking in multipath outdoor mobile networks,” IEEE Trans. Wireless Commun., vol. 10, no. 3, pp. 814–824, Mar. 2011.
  5. E. Tsalolikhin, I. Bilik, and N. Blaunstein, “A single-base-station localization approach using a statistical model of the NLOS propagation conditions in urban terrain,” IEEE Trans. Veh. Technol., vol. 60, no. 3, pp. 1124–1137, Mar. 2011.
  6. F. Mourad, H. Snoussi, F. Abdallah, and C. Richard, “A robust localization algorithm for mobile sensors using belief functions,” IEEE Trans. Veh. Technol., vol. 60, no. 4, pp. 1799–1811, May. 2011.
  7. S. Chang, Y. Qi, H. Zhu, J. Zhao, and X. Shen, “Footprint: detecting sybil attacks in urban vehicular networks,” IEEE Trans. Parallel Distrib. Syst., vol. 23, no. 6, pp. 1103–1114, Jun. 2012.
  8. M. Al-Rabayah and R. Malaney, “A New Scalable Hybrid Routing Protocol for VANETs”, IEEE Trans. Veh. Technol., vol. 61, no. 6, pp. 2625–2635, Jul. 2012.
  9. Z. Yang, C. Wu, T. Chen, Y. Zhao, W. Gong, and Y. Liu, “Detecting outlier measurements based on graph rigidity for wireless sensor network localization,” IEEE Trans. Veh. Technol., vol. 62, no. 1, pp. 374–383, Jan. 2013.
  10. O. Bialer, D. Raphaeli, and A. J. Weiss, “Maximum-likelihood direct position estimation in dense multipath,” IEEE Trans. Veh. Technol., vol. 62, no. 5, pp. 2069–2079, Jun. 2013.
  11. IEEE Std. 1609.2-2006, “IEEE trial-use standard for wireless access in vehicular environments- security services for applications and management messages,” Jul. 2006.
  12. N. Sastry, U. Shankar, and D. Wagner, “Secure verification of location claims,” in Proc. ACM Workshop Wireless Security (WiSe ’03), Sep. 2003, pp. 1–10.
  13. T. Leinmüller, E. Schoch, and F. Kargl, “Position verification approaches for vehicular ad hoc networks,” IEEE Wireless Commun., vol. 13, no. 5, pp. 16–21, Oct. 2006.
  14. B. Xiao, B. Yu, and C. Gao, “Detection and localization of Sybil nodes in VANETs,” in Proc. Workshop DIWANS, Sep. 2006, pp. 1–8.
  15. J.-H. Song, V. W. S. Wong, and V. C. M. Leung, “Secure location verification for vehicular ad-hoc networks,” in Proc. IEEE GLOBECOM, Dec. 2008, pp. 1–5.
  16. G. Yan, S. Olariu, and M. Weigle, “Providing location security in vehicular ad hoc networks,” IEEE Wireless Commun., vol. 16, no. 6, pp. 48–55, Dec. 2009.
  17. Z. Ren, W. Li, and Q. Yang, “Location verification for VANETs routing,” in Pro. IEEE WIMOB, Oct. 2009, pp. 141–146.
  18. G. Yan, S. Olariu, and M. Weigle, “Cross-layer location verification enhancement in vehicular networks,” in Proc. IEEE Intelligent Vehicles Symposium (IV), Jun. 2010, pp. 95–100.
  19. O. Abumansoor and A. Boukerche, “A secure cooperative approach for nonline-of-sight location verification in VANET,” IEEE Trans. Veh. Technol., vol. 61, pp. 275–285, Jan. 2012.
  20. R. A. Malaney, “A location enabled wireless security system,” in Proc. IEEE GLOBECOM, Nov. 2004, pp. 2196–2200.
  21. A. Vora and M. Nesterenko, “Secure location verification using radio broadcast,” IEEE Trans. on Dependable and Secure Computing, vol. 3, no. 4, pp. 377–385, Oct. 2006.
  22. R. A. Malaney, “A secure and energy efficient scheme for wireless VoIP emergency service,” in Proc. IEEE GLOBECOM, Nov. 2006, pp. 1–6.
  23. R. A. Malaney, “Securing Wi-Fi networks with position verification: extended version,” International J. Security Netw., vol. 2, no. 1, pp. 27–36, Mar. 2007.
  24. R. A. Malaney, “Wireless intrusion detection using tracking verification,” in Proc. IEEE ICC, Jun. 2007, pp. 1558–1563.
  25. S. Čapkun, K. B. Rasmussen, M. Čagalj, and M. Srivastava, “Secure location verification with hidden and mobile base stations,” IEEE Trans. Mobile Comput., vol. 7, no. 4, pp. 470–483, Apr. 2008.
  26. Z. Yu, L. Zang, and W. Trappe, “Evaluation of localization attacks on power-modulated challenge-response systems,” IEEE Trans. Inf. Forensics Security, vol. 3, no. 2, pp. 259–272, Jun. 2008.
  27. Y. Chen, J. Yang, W. Trappe, and R. P. Martin, “Detecting and localizing identity-based attacks in wireless and sensor networks,” IEEE Trans. Veh. Technol., vol. 59, no. 5, pp. 2418–2434, Jun. 2010.
  28. L. Dawei, L. Moon-Chuen, and W. Dan, “A node-to-node location verification method,” IEEE Trans. Ind. Electron, vol. 57, pp. 1526–1537, May, 2010.
  29. J. Chiang, J. Haas, J. Choi, and Y. Hu, “Secure location verification using simultaneous multilateration,” IEEE Trans. Wireless Commun., vol. 11, pp. 584–591, Feb. 2012.
  30. J. Yang, Y. Chen, S. Macwan, C. Serban, S. Chen, and W. Trappe, “Securing mobile location-based services through position verification leveraging key distribution,” in Pro. IEEE WCNC, Apr. 2012, pp. 2694–2699.
  31. Y. Wei and Y. Guan, “Lightweight location verification algorithms for wireless sensor networks,” IEEE Trans. Parallel Distrib. Syst., vol. 24, no. 5, pp. 938–950, May. 2013.
  32. G. Gu, P. Fogla, D. Dagon, W. Lee, and B. Skoric, “An information-theoretic measure of intrusion detection capability,” College of Computing, Georgia Tech, Tech. Rep. GIT-CC-05-10, 2005.
  33. G. Gu, P. Fogla, D. Dagon, W. Lee, and B. Skoric, “Measuring intrusion detection capability: An information-theoretic approach,” in Proc. ASIACCS’ 06, Mar. 2006, pp. 90–101.
  34. M. Barkat, Signal Detection and Estimation. Boston, MA: Artech House, 2005.
  35. S. Yan, R. Malaney, I. Nevat, and G. Peters, “An information theoretic location verification system for wireless networks,” in Proc. IEEE GLOBECOM, Dec. 2012, pp. 5415–5420.
  36. J. Neyman and E. Pearson, “On the problem of the most efficient tests of statistical hypotheses,” Phil. Trans. R. Soc. A, vol. 231, pp. 289–337, Jan. 1933.
  37. A. Goldsmith, Wireless communications. Cambridge University press, 2005.
  38. T. Cover and J. Thomas, Elements of information theory. Wiley interscience, 2006.
  39. R. Kass and A. Raftery, “Bayes factors,” Journal of the American statistical association, vol. 90, pp. 773–795, Jun. 1995.
  40. I. Nevat, G. Peters, J. Yuan, and I. Collings, “Quick Cooperative Spectrum Sensing for Amplify-and-Forward Cognitive Networks,” submitted to IEEE Trans. Wireless Commun., arXiv.org 1104.2355.
  41. A. Kent, W. Munro, T. Spiller and R. Beausoleil, “Tagging Systems,” US Patent, Pub. No