Location Trace Privacy Under Conditional Priors

Abstract

Providing meaningful privacy to users of location-based services is particularly challenging when multiple locations are revealed in a short period of time. This is primarily due to the tremendous degree of dependence that can be anticipated between points. We propose a Rényi differentially private framework for bounding expected privacy loss for conditionally dependent data. Additionally, we demonstrate an algorithm for achieving this privacy under Gaussian process conditional priors. This framework both exemplifies why conditionally dependent data is so challenging to protect and offers a strategy for preserving privacy to within a fixed radius of every user location in a trace.

1 Introduction

Location data is acutely sensitive information, detailing where we live, work, eat, shop, and worship, and often when, too. Yet increasingly, location data is uploaded for smartphone services such as ride hailing and weather forecasting, and then brokered to advertisers and even investors in a thriving user-location aftermarket [nyt]. As such, there is a keen need for privacy-preserving methods with meaningful guarantees. However, for virtually any application involving dependent locations, like location traces, there are few results in the privacy literature.

The widely accepted standard of Geo-Indistinguishability (GI) [GI] effectively applies differential privacy to location points and makes use of the Laplace mechanism for sanitizing them. The idea is that a given point in space enjoys a radius of privacy after noise is added, where the radius and degree of privacy are tunable parameters. However, like most of the privacy literature, GI is prior agnostic, meaning it ignores any prior dependence between points in a user's location trace. Even under weak conditional dependence, its privacy guarantees quickly dissolve.

A number of existing works have considered the problem setting of aggregate location data and queries. Here, individual data is protected within a dataset of location trajectories aggregated over a large number of people, against an adversary making aggregate queries on the dataset [traffic_monitoring; aggregated_encryption]. These, however, provide little utility when applied to a single or a few location traces. Another, more general line of work, such as the median and matrix mechanisms, aims to provide privacy when a series of correlated and aggregate queries are made on the same database, yielding prior-agnostic solutions which offer both privacy and utility [roth; matrix_mechanism]. These, too, focus on aggregate data and consider protecting against correlated queries, as opposed to correlated data points.

Other work perhaps closer to our setting is that of inferential privacy, such as Pufferfish privacy [pufferfish]. More general works focused on inferential privacy, such as [markov; universal], fail to address the spatiotemporal aspects of the problem. Inferential privacy methods aimed more specifically at location trace privacy, such as [temporal; predictive], are either too limiting in their prior distribution, or only consider online location data release as opposed to full trace release.

In this work, we revisit the problem of preserving privacy for a single user's location trace under the assumption of a flexible conditional prior. We first lay out a Pufferfish-style framework that limits the class of prior distributions and the discriminative pairs to be protected. We then specify a Rényi differential privacy measurement [renyi] of privacy loss, and show how to bound this loss under Gaussian assumptions, thus preserving privacy to within a specified radius of each point in the trace. Finally, we provide empirical examples of how, even under weak correlative priors, more noise is needed than would be prescribed by prior-agnostic approaches such as GI.

2 Preliminaries and Problem Formulation

Without loss of generality, we consider a location trace of real-valued points. We refer to an $n$-point location trace either as a vector $X \in \mathbb{R}^n$ or as a set of points $\{X_1, \dots, X_n\}$. We will use these two definitions interchangeably depending on the context. Similarly, we refer to the $n$-point privatized version of $X$ as $Z$, existing in the same instance space. A mechanism $\mathcal{M}$ privatizes each $X_i$ to its corresponding $Z_i$ as is depicted in the graphical model of Figure 1(a). For a more realistic 2- or 3-dimensional application, either the dimensions may be deemed independent and privatized separately, or be linked through techniques like co-kriging.

Figure 1: (a) An example graphical model of a four-point trace $X$. (b) The more general grouped version of the model in (a), where the secret set $X_A$, the remaining set $X_B$, and the obfuscated $Z$, are shown.

In the same fashion as GI, we aim to provide privacy to within a radius $r$ of each real-valued location point $X_i$. However, existing location privacy notions like GI consider a mechanism sufficient if each $Z_i$ does not relate where its corresponding $X_i$ is within some radius $r$. In this work we motivate the need for a stricter notion of location privacy that requires each $Z_i$ to protect not only its corresponding $X_i$, but also what might be implied about the remaining points. Additionally, $\mathcal{M}$ must privatize subsequences of points $X_A$ by ensuring the released subsequence $Z_A$ does not relate where $X_A$ or the remaining (complement) subsequence points $X_B$ are within radius $r$.

To formalize what we mean by 'implying' something about the remaining points, we mean that the release significantly affects the conditional probability of those points. We ask that any privacy-preserving mechanism be private with respect to some class of 'conditional priors'.

Definition 2.1.

A class of conditional priors $\Theta$ specifies a set of conditional prior distributions, $\theta \in \Theta$, that define the probability of any subsequence of location points $X_A$ conditioned on the remaining points $X_B$: $\theta(x_A \mid x_B)$.

It is important to note that specifying the class of conditional priors $\Theta$ still leaves a large class of marginal priors $p(X)$. For example, a Gaussian conditional prior does not require a Gaussian marginal prior.

To see the importance of subsequence privacy, consider a 3-point trace, $X = (X_1, X_2, X_3)$, and its privatized trace $Z = (Z_1, Z_2, Z_3)$. We reason about the odds of the points being in the locations $a$ and $b$, which are within $r$ of each other. Frameworks like GI assume independence of each $X_i$, and then bound the odds that any given $X_i$ is at $a$ versus $b$. However, if the $X_i$'s are interdependent (and they likely are) this may not be the case. Reasoning about $X_1$ and $X_2$, a mechanism satisfying GI does not bound the odds of $(X_1 = a, X_2 = b)$ given $Z$ versus $(X_1 = b, X_2 = a)$. Depending on the conditional prior $\theta$, these two orders of events may have very different implications about the location of $X_3$, and thus $Z_3$.

There are two primary distinguishing features of our location privacy framework. First, instead of individually bounding the odds of single points for pairs of hypotheses within $r$, we bound the odds of subsequences $X_A$ for pairs of hypotheses $(x_A, x'_A)$ such that each hypothesis pair of points is within $r$. Second, instead of bounding max divergence as in GI, we opt to bound the $\lambda$-th order Rényi divergence for simplicity of analysis with a wide range of conditional prior classes $\Theta$. Recall that, for distributions $P, Q$ on the same measurable space, this is defined as $D_\lambda(P \,\|\, Q) = \frac{1}{\lambda - 1}\log \mathbb{E}_{z \sim Q}\big[(P(z)/Q(z))^{\lambda}\big]$.
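As a concrete reference point, the following minimal sketch numerically evaluates this definition for discrete distributions, along with the standard closed form for two Gaussians sharing a covariance matrix (used repeatedly later). The function names are ours, not the paper's.

```python
import numpy as np

def renyi_divergence_discrete(p, q, lam):
    """D_lam(P || Q) = log( sum_x p(x)^lam * q(x)^(1 - lam) ) / (lam - 1)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.log(np.sum(p**lam * q**(1.0 - lam))) / (lam - 1.0)

def renyi_divergence_gaussian(mu1, mu2, Sigma, lam):
    """For N(mu1, Sigma) vs N(mu2, Sigma): (lam/2) (mu1-mu2)^T Sigma^{-1} (mu1-mu2)."""
    d = np.asarray(mu1, dtype=float) - np.asarray(mu2, dtype=float)
    return 0.5 * lam * d @ np.linalg.solve(Sigma, d)

print(renyi_divergence_discrete([0.5, 0.5], [0.6, 0.4], lam=2.0))   # ~0.041
print(renyi_divergence_gaussian([0.0], [1.0], np.eye(1), lam=2.0))  # 1.0
```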

Borrowing from 'Pufferfish Privacy' defined in [pufferfish], we guarantee privacy for a specific set $S_A$ of secret pairs of subsequence value assignments $(x_A, x'_A)$, and for a specific (conditional) prior class $\Theta$:

Definition 2.2.

$(\varepsilon, \lambda)$-Conditional Inferential Privacy (CIP). A randomized mechanism $\mathcal{M}$ applied to location trace $X$ offers subsequence $(\varepsilon, \lambda)$-CIP for subsequence $A$ provided that

$D_\lambda\big( P(Z \mid x_A, \theta) \,\big\|\, P(Z \mid x'_A, \theta) \big) \;\le\; \varepsilon$

  • for all possible outputs $z$ of $\mathcal{M}$

  • for all discriminative pairs $(x_A, x'_A) \in S_A$ for subsequence $A$

  • for all conditional priors $\theta \in \Theta$ such that $\theta(x_A), \theta(x'_A) > 0$, for all discriminative pairs

where for simplicity we will refer to the events $X_A = x_A$ and $X_A = x'_A$ as $x_A$ and $x'_A$, respectively.

An interpretation of $(\varepsilon, \lambda)$-CIP compliance for subsequence $A$ is that the release $Z$ does not sharpen any marginal prior (that has a conditional prior in $\Theta$) by more than $\varepsilon$ in expectation.

Corollary 1.

For two distributions $P = P(Z \mid x_A)$ and $Q = P(Z \mid x'_A)$ with Rényi divergence bounded by $\varepsilon$, $D_\lambda(P \,\|\, Q) \le \varepsilon$ with $\lambda > 1$, we also know that the expected gap between the prior and posterior log-odds ratios is bounded by $\varepsilon$.

$\mathbb{E}_{z \sim P(Z \mid x_A)}\left[ \log\frac{\Pr(x_A \mid z)}{\Pr(x'_A \mid z)} - \log\frac{\Pr(x_A)}{\Pr(x'_A)} \right] \;\le\; \varepsilon \qquad (1)$
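A quick Monte Carlo sanity check of this bound, under illustrative assumptions of our own choosing: take $P$ and $Q$ to be unit-variance Gaussians standing in for $P(Z \mid x_A)$ and $P(Z \mid x'_A)$. By Bayes rule the expected log-odds gap equals $\mathbb{E}_{z \sim P}[\log P(z)/Q(z)]$, and $D_\lambda$ has the equal-covariance closed form used above.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
mu_p, mu_q = 0.0, 1.0            # P = N(0, 1), Q = N(1, 1)

# Expected gap between posterior and prior log-odds, estimated by sampling z ~ P.
z = rng.normal(mu_p, 1.0, size=200_000)
log_odds_gap = np.mean(-0.5 * (z - mu_p)**2 + 0.5 * (z - mu_q)**2)

# Closed-form Renyi divergence for equal-variance Gaussians: (lam/2)(mu_p - mu_q)^2.
d_lam = 0.5 * lam * (mu_p - mu_q)**2

print(f"expected log-odds gap ~ {log_odds_gap:.3f} <= D_lambda = {d_lam:.3f}")
# gap ~ 0.5 (the KL divergence), bounded by D_2 = 1.0, consistent with Corollary 1
```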

Returning to the original intention of providing privacy within radius $r$ of each $X_i$, we point out that a CIP-compliant mechanism $\mathcal{M}$ for subsequence $A$ does not necessarily guarantee this radius to each $X_i \in X_A$. This is because, without any constraint on $\Theta$, there may exist another subsequence $A'$ including $X_i$ for which $\mathcal{M}$ is not CIP compliant. As such, we assert that achieving true location privacy for point $X_i$ in trace $X$ requires being CIP compliant for every subsequence containing $X_i$, which is exponential in the number of points $n$. This exemplifies why privacy for highly dependent data is challenging: the number of ways an adversary can infer the true value of $X_i$ is exponential in the number of points.
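To make the combinatorial claim concrete, fix a point $X_i$: a subsequence $A \subseteq \{1, \dots, n\}$ containing $i$ is determined by an arbitrary choice over the remaining $n - 1$ indices, so

$\#\{A \subseteq \{1, \dots, n\} \;:\; i \in A\} \;=\; 2^{\,n-1},$

which grows exponentially in the trace length $n$.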

In the next section, we demonstrate a CIP compliant mechanism for a Gaussian process conditional prior and additive Gaussian mechanism.

3 CIP Compliance Under Gaussian Assumptions

3.1 The Privacy Bound for Additive Mechanisms

It is instructive to analyze the privacy loss in the case of an additive noise mechanism, $\mathcal{M}(X) = X + V$, where the noise $V$ is drawn independently of $X$. Using the model depicted in Figure 1, we know that all of the $X$ values are fully connected, and each of the $Z_i$ values is conditionally independent of the rest of the trace given its associated $X_i$ value. Using this conditional independence relation, reconsider the Rényi divergence bound defined in Definition 2.2. Here, the notation $V_A, V_B$ corresponds to the noise vector indexed at the $A$ and $B$ points, respectively.

$D_\lambda\big(P(Z \mid x_A)\,\big\|\,P(Z \mid x'_A)\big) \;=\; D_\lambda\big(P(Z_B \mid x_A)\,\big\|\,P(Z_B \mid x'_A)\big) \;+\; \sum_{i \in A} D_\lambda\big(f_{V_i}(z_i - x_i)\,\big\|\,f_{V_i}(z_i - x'_i)\big) \qquad (2)$

where $P(Z_B \mid x_A) = \int \theta(x_B \mid x_A)\, f_{V_B}(z_B - x_B)\, dx_B$, the convolution of the conditional prior with the noise density. (See Appendix for proof.)

Here, we see that the divergence separates into two terms: the first for the $B$ points and the second for the $A$ points. The second is prior agnostic, behaving much more like differential privacy. The first term depends on the conditional distribution, $\theta(X_B \mid x_A)$, of the $B$ points on the $A$ points and on the mechanism noise $V_B$.

3.2 CIP for Gaussian Process Data

We now demonstrate a CIP mechanism for the case of a Gaussian process conditional prior class, $\Theta_{GP}$, where a timestamp $t_i$ is released for each $Z_i$. The conditional distribution for any partition of $X$, $X_B \mid X_A = x_A$, is the conditional distribution of a mean-zero multivariate normal distribution with some covariance matrix $\Sigma$ dictated by some kernel $k(t_i, t_j)$:

$X_B \mid X_A = x_A \;\sim\; \mathcal{N}\big(\mu_{B \mid A},\, \Sigma_{B \mid A}\big)$

where $\mu_{B \mid A} = \Sigma_{BA}\Sigma_{AA}^{-1} x_A$ and $\Sigma_{B \mid A} = \Sigma_{BB} - \Sigma_{BA}\Sigma_{AA}^{-1}\Sigma_{AB}$. The class of covariance matrices, $S$, includes all covariance matrices of an RBF kernel with equal variance $\sigma^2$ and with length scales less than some maximum length scale $\ell_{\max}$. For location points with time values $t_i$ and $t_j$, $\Sigma_{ij} = \sigma^2 \exp\big(-(t_i - t_j)^2 / 2\ell^2\big)$. However, this analysis applies to any kernel. As a first pass, we consider the mechanism of adding mean-zero, equal-variance Gaussian noise to each location point: $Z_i = X_i + V_i$ with $V_i \sim \mathcal{N}(0, \sigma_n^2)$.
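The sketch below builds this conditional from an RBF covariance matrix. The helper names (rbf_cov, gaussian_conditional) are ours, but the formulas are the standard Gaussian conditioning identities stated above.

```python
import numpy as np

def rbf_cov(t, sigma2=1.0, ell=2.0):
    # Sigma_ij = sigma2 * exp(-(t_i - t_j)^2 / (2 * ell^2))
    d = t[:, None] - t[None, :]
    return sigma2 * np.exp(-d**2 / (2.0 * ell**2))

def gaussian_conditional(Sigma, A, B):
    """For zero-mean X ~ N(0, Sigma): X_B | X_A = x_A is Gaussian with
    mean K @ x_A, where K = Sigma_BA Sigma_AA^{-1}, and covariance
    Sigma_BB - Sigma_BA Sigma_AA^{-1} Sigma_AB."""
    K = Sigma[np.ix_(B, A)] @ np.linalg.inv(Sigma[np.ix_(A, A)])
    S_cond = Sigma[np.ix_(B, B)] - K @ Sigma[np.ix_(A, B)]
    return K, S_cond

# Example: a 10-point trace at unit-spaced timestamps; every other point secret.
t = np.arange(10.0)
Sigma = rbf_cov(t, sigma2=1.0, ell=2.0)
A, B = list(range(0, 10, 2)), list(range(1, 10, 2))
K, S_cond = gaussian_conditional(Sigma, A, B)
```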

While the marginal $P(Z_B)$ is not necessarily Gaussian distributed, $P(Z_B \mid x_A)$ is Gaussian distributed, making the Rényi divergence in Definition 2.2 easy to analyze. Substituting this fact into Equation 2, we solve for our privacy loss,

$D_\lambda\big(P(Z \mid x_A)\,\big\|\,P(Z \mid x'_A)\big) \;=\; \frac{\lambda}{2}\,\Delta\mu^{T}\big(\Sigma_{B \mid A} + \sigma_n^2 I\big)^{-1}\Delta\mu \;+\; \frac{\lambda}{2\sigma_n^2}\,\lVert x_A - x'_A \rVert_2^2 \qquad (3)$

where $\Delta\mu = \Sigma_{BA}\Sigma_{AA}^{-1}(x_A - x'_A)$ and $\Sigma_{B \mid A} = \Sigma_{BB} - \Sigma_{BA}\Sigma_{AA}^{-1}\Sigma_{AB}$. (See Appendix for proof.)
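Reusing gaussian_conditional from the sketch above, Equation 3 can be evaluated directly for any discriminative pair. This is our own illustrative translation of the formula, not code from the paper.

```python
import numpy as np

def cip_loss(Sigma, A, B, x_A, x_A_prime, sigma_n2, lam):
    """Evaluate the right-hand side of Equation 3 for one discriminative pair."""
    x_A, x_A_prime = np.asarray(x_A, float), np.asarray(x_A_prime, float)
    K, S_cond = gaussian_conditional(Sigma, A, B)   # defined in the earlier sketch
    d_mu = K @ (x_A - x_A_prime)                    # gap between conditional means of X_B
    S_Z = S_cond + sigma_n2 * np.eye(len(B))        # covariance of Z_B given x_A
    term_B = 0.5 * lam * d_mu @ np.linalg.solve(S_Z, d_mu)
    term_A = 0.5 * lam * np.sum((x_A - x_A_prime)**2) / sigma_n2
    return term_B + term_A
```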

By Definition 2.2, providing radius $r$ requires that $\lVert x_A - x'_A \rVert_\infty \le r$. For simplicity, we show maximum privacy loss for a wider set of discriminative pairs: $\{(x_A, x'_A) : \lVert x_A - x'_A \rVert_2 \le r\sqrt{|A|}\}$, effectively solving for the worst-case privacy loss in the ball circumscribing the ball suggested in Definition 2.2.
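The containment claim here is a one-line norm comparison:

$\lVert x_A - x'_A \rVert_\infty \le r \;\Longrightarrow\; \lVert x_A - x'_A \rVert_2 = \Big(\textstyle\sum_{i \in A}(x_i - x'_i)^2\Big)^{1/2} \le \sqrt{|A|\, r^2} = r\sqrt{|A|},$

so the $\ell_\infty$ ball of radius $r$ sits inside the $\ell_2$ ball of radius $r\sqrt{|A|}$.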

This allows us to easily maximize the first term in Equation 3: a Mahalanobis distance maximized by the eigenvector of $M = \Sigma_{AA}^{-1}\Sigma_{AB}\big(\Sigma_{B \mid A} + \sigma_n^2 I\big)^{-1}\Sigma_{BA}\Sigma_{AA}^{-1}$ with maximal eigenvalue. The second term, being prior agnostic and only affected by the magnitude of $x_A - x'_A$, will automatically be maximized if the first term is maximized. We denote this worst-case eigenvector discriminative pair $(x_A^*, x_A'^*)$, and the associated worst-case loss as $\varepsilon^*$, which may be simplified as

$\varepsilon^* \;=\; \frac{\lambda\, r^2 |A|}{2}\left(\alpha_{\max} + \frac{1}{\sigma_n^2}\right) \qquad (4)$

where $\alpha_{\max}$ is the largest eigenvalue of $M$. If we choose a $\sigma_n^2$ large enough that $\varepsilon^* \le \varepsilon$, then the mechanism is $(\varepsilon, \lambda)$-CIP compliant for subsequence $A$. Furthermore, if the mechanism is CIP compliant for the covariance matrix with highest correlation induced by $\ell_{\max}$, then it is CIP compliant for all $\Sigma \in S$, and thus all $\theta \in \Theta_{GP}$.
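Putting the pieces together, a minimal sketch of the resulting procedure: compute $\varepsilon^*$ from the top eigenvalue of $M$ and, since $\varepsilon^*$ is monotone decreasing in $\sigma_n^2$ (both $\alpha_{\max}$ and $1/\sigma_n^2$ shrink as noise grows), calibrate the noise by bisection. Function names and search bounds are our own assumptions.

```python
import numpy as np

def worst_case_loss(Sigma, A, B, r, sigma_n2, lam):
    """epsilon* of Equation 4 for subsequence A under covariance Sigma."""
    K, S_cond = gaussian_conditional(Sigma, A, B)    # from the earlier sketch
    S_Z = S_cond + sigma_n2 * np.eye(len(B))
    M = K.T @ np.linalg.solve(S_Z, K)                # M = K^T (S_cond + s2 I)^{-1} K
    alpha_max = np.linalg.eigvalsh(M).max()
    return 0.5 * lam * r**2 * len(A) * (alpha_max + 1.0 / sigma_n2)

def calibrate_noise(Sigma, A, B, r, eps, lam, lo=1e-8, hi=1e8, iters=200):
    """Smallest noise variance (up to tolerance) with worst_case_loss <= eps."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if worst_case_loss(Sigma, A, B, r, mid, lam) > eps:
            lo = mid             # too much privacy loss: need more noise
        else:
            hi = mid
    return hi
```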

Prior-agnostic methods like GI effectively assume all points are independent, and thus a diagonal $\Sigma$. To demonstrate, we compute the maximum privacy loss on a ten-point trace for a diagonal 'GI' covariance matrix along with a series of other RBF matrices with different length scales. We observe $\varepsilon^*$ for a subsequence of every other point, and see a 50% increase in privacy loss even for a length scale that assigns a correlation of 0.6 to neighboring points. See Figure 2 for plots of this privacy loss.
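Under assumed settings of our own (unit-spaced timestamps, $\lambda = 2$, $r = 1$, unit noise variance; the exact values behind the paper's figure are not recoverable here), the comparison can be reproduced with the helpers above:

```python
import numpy as np

t = np.arange(10.0)                                   # ten unit-spaced timestamps
A, B = list(range(0, 10, 2)), list(range(1, 10, 2))   # every other point is secret
lam, r, sigma_n2 = 2.0, 1.0, 1.0

# Diagonal covariance ('GI', independent points): alpha_max = 0 in Equation 4.
print("independent:", 0.5 * lam * r**2 * len(A) / sigma_n2)
for ell in (0.5, 1.0, 2.0):
    Sigma = rbf_cov(t, sigma2=1.0, ell=ell)
    print(f"ell = {ell}:", worst_case_loss(Sigma, A, B, r, sigma_n2, lam))
```

The independent baseline drops the eigenvalue term entirely, which is exactly why it understates the loss against any correlated prior.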

Figure 2: (a) Worst-case privacy loss vs. noise variance $\sigma_n^2$ added to each $X_i$, for a subsequence of every other point in $X$. We show the privacy loss of several RBF kernel covariance matrices with different length scales $\ell$. The blue dotted line shows the privacy loss for points that are assumed independent (as in GI), which drastically under-represents privacy loss against an RBF conditional prior. (b) An example location trace $x$, with the worst-case hypothesis $x'_A$ superimposed on $x_A$ for an RBF conditional prior. If an adversary compares $x_A$ with $x'_A$, they will witness a large deviation in the distribution of the remaining points $X_B$.

References

4 Appendix

4.1 Demonstration of Results

Corollary 1. For two distributions $P = P(Z \mid x_A)$ and $Q = P(Z \mid x'_A)$ with Rényi divergence bounded by $\varepsilon$, $D_\lambda(P \,\|\, Q) \le \varepsilon$ with $\lambda > 1$, we also know that the expected gap between the prior and posterior log-odds ratios is bounded by $\varepsilon$.

Proof.

$\varepsilon \;\ge\; D_\lambda\big(P(Z \mid x_A)\,\big\|\,P(Z \mid x'_A)\big)$

(a) $\;=\; \frac{1}{\lambda-1}\log\, \mathbb{E}_{z \sim P(Z \mid x'_A)}\left[\left(\frac{P(z \mid x_A)}{P(z \mid x'_A)}\right)^{\lambda}\right]$

(b) $\;=\; \frac{1}{\lambda-1}\log\, \mathbb{E}_{z \sim P(Z \mid x_A)}\left[\left(\frac{P(z \mid x_A)}{P(z \mid x'_A)}\right)^{\lambda-1}\right]$

(c) $\;\ge\; \mathbb{E}_{z \sim P(Z \mid x_A)}\left[\log \frac{P(z \mid x_A)}{P(z \mid x'_A)}\right]$

(d) $\;=\; \mathbb{E}_{z \sim P(Z \mid x_A)}\left[\log\frac{\Pr(x_A \mid z)}{\Pr(x'_A \mid z)} - \log\frac{\Pr(x_A)}{\Pr(x'_A)}\right]$

Step (a) simply uses the definition of Rényi divergence, which for two distributions $P, Q$ defined on the same measurable space, and for order $\lambda > 1$, is given by

$D_\lambda(P \,\|\, Q) = \frac{1}{\lambda-1}\log\, \mathbb{E}_{z \sim Q}\left[\left(\frac{P(z)}{Q(z)}\right)^{\lambda}\right]$

Step (b) uses the fact that $\mathbb{E}_{z \sim Q}\big[(P(z)/Q(z))^{\lambda}\big] = \mathbb{E}_{z \sim P}\big[(P(z)/Q(z))^{\lambda-1}\big]$. Step (c) moves the log inside the expectation by Jensen's inequality. Step (d) simply uses Bayes rule. ∎

Equation 2.

$D_\lambda\big(P(Z \mid x_A)\,\big\|\,P(Z \mid x'_A)\big) \;=\; D_\lambda\big(P(Z_B \mid x_A)\,\big\|\,P(Z_B \mid x'_A)\big) + \sum_{i \in A} D_\lambda\big(f_{V_i}(z_i - x_i)\,\big\|\,f_{V_i}(z_i - x'_i)\big) \qquad (2)$

Proof.

(a) $D_\lambda\big(P(Z \mid x_A)\,\big\|\,P(Z \mid x'_A)\big) \;=\; D_\lambda\big(P(Z_B \mid x_A)\,P(Z_A \mid x_A)\,\big\|\,P(Z_B \mid x'_A)\,P(Z_A \mid x'_A)\big)$

Simplify according to the model defined in Figure 1:

$P(Z_B \mid x_A) = \int \theta(x_B \mid x_A)\, f_{V_B}(z_B - x_B)\, dx_B \quad \text{and} \quad P(Z_A \mid x_A) = \prod_{i \in A} f_{V_i}(z_i - x_i)$

where $f_{V_B}$ and $f_{V_i}$ are the densities of the noise vector indexed at the $B$ points and at the $i$-th point, respectively. Returning to the divergence calculation at step (a),

(b) $\;=\; D_\lambda\Big(P(Z_B \mid x_A)\,\textstyle\prod_{i \in A} f_{V_i}(z_i - x_i)\,\Big\|\,P(Z_B \mid x'_A)\,\prod_{i \in A} f_{V_i}(z_i - x'_i)\Big)$

(c) $\;=\; D_\lambda\big(P(Z_B \mid x_A)\,\big\|\,P(Z_B \mid x'_A)\big) + D_\lambda\big(P(Z_A \mid x_A)\,\big\|\,P(Z_A \mid x'_A)\big)$

(d) $\;=\; D_\lambda\big(P(Z_B \mid x_A)\,\big\|\,P(Z_B \mid x'_A)\big) + \sum_{i \in A} D_\lambda\big(f_{V_i}(z_i - x_i)\,\big\|\,f_{V_i}(z_i - x'_i)\big)$

Step (a) uses the conditional independence properties specified in Figure 1. Step (b) substitutes the redefined distributions of $Z_A$ and $Z_B$ into the divergence. Step (c) is possible because $Z_A \perp Z_B$ given $X_A = x_A$, as specified in Figure 1; this is a property of Rényi divergence for product distributions. Step (d) separates the $Z_A$ term into separate terms since each of the $Z_i$'s is independent conditioned on $x_A$, thus separating their Rényi divergences into the sum of $|A|$ terms. ∎

Equation 3.

$D_\lambda\big(P(Z \mid x_A)\,\big\|\,P(Z \mid x'_A)\big) \;=\; \frac{\lambda}{2}\,\Delta\mu^{T}\big(\Sigma_{B \mid A} + \sigma_n^2 I\big)^{-1}\Delta\mu + \frac{\lambda}{2\sigma_n^2}\,\lVert x_A - x'_A\rVert_2^2 \qquad (3)$

Proof.

(a) $D_\lambda\big(P(Z \mid x_A)\,\big\|\,P(Z \mid x'_A)\big) \;=\; D_\lambda\big(P(Z_B \mid x_A)\,\big\|\,P(Z_B \mid x'_A)\big) + \sum_{i \in A} D_\lambda\big(f_{V_i}(z_i - x_i)\,\big\|\,f_{V_i}(z_i - x'_i)\big)$

(b) $\;=\; D_\lambda\big(\mathcal{N}(\mu_{B \mid x_A},\, \Sigma_{B \mid A} + \sigma_n^2 I)\,\big\|\,\mathcal{N}(\mu_{B \mid x'_A},\, \Sigma_{B \mid A} + \sigma_n^2 I)\big) + \sum_{i \in A} D_\lambda\big(\mathcal{N}(x_i, \sigma_n^2)\,\big\|\,\mathcal{N}(x'_i, \sigma_n^2)\big)$

(c) $\;=\; \frac{\lambda}{2}\big(\mu_{B \mid x_A} - \mu_{B \mid x'_A}\big)^{T}\big(\Sigma_{B \mid A} + \sigma_n^2 I\big)^{-1}\big(\mu_{B \mid x_A} - \mu_{B \mid x'_A}\big) + \frac{\lambda}{2\sigma_n^2}\sum_{i \in A}(x_i - x'_i)^2$

Step (a) is simply a restatement of Equation 2. Step (b) substitutes in the distributions for a Gaussian conditional prior and Gaussian additive noise mechanism. Step (c) is due to the Rényi divergence of two normal distributions with identical covariance, which is $\lambda/2$ times the squared Mahalanobis distance between the conditional means:

$D_\lambda\big(\mathcal{N}(\mu_1, \Sigma)\,\big\|\,\mathcal{N}(\mu_2, \Sigma)\big) = \frac{\lambda}{2}(\mu_1 - \mu_2)^{T}\Sigma^{-1}(\mu_1 - \mu_2)$

In step (c), $\Sigma_{B \mid x_A} = \Sigma_{B \mid x'_A} = \Sigma_{B \mid A}$, since Gaussian conditional covariance matrices only depend on which points are being conditioned on, not their values. Substituting in the values of $\mu_{B \mid x_A} = \Sigma_{BA}\Sigma_{AA}^{-1}x_A$ and $\mu_{B \mid x'_A} = \Sigma_{BA}\Sigma_{AA}^{-1}x'_A$, which are defined by the Gaussian conditional mean, this expression simplifies nicely. Let $\Delta x = x_A - x'_A$; then the above expression becomes

(d) $\;=\; \frac{\lambda}{2}\,\Delta x^{T}\,\Sigma_{AA}^{-1}\Sigma_{AB}\big(\Sigma_{B \mid A} + \sigma_n^2 I\big)^{-1}\Sigma_{BA}\Sigma_{AA}^{-1}\,\Delta x + \frac{\lambda}{2\sigma_n^2}\,\lVert \Delta x\rVert_2^2$

(e) $\;=\; \frac{\lambda}{2}\,\Delta x^{T} M\,\Delta x + \frac{\lambda}{2\sigma_n^2}\,\lVert \Delta x\rVert_2^2$

where $M = \Sigma_{AA}^{-1}\Sigma_{AB}\big(\Sigma_{B \mid A} + \sigma_n^2 I\big)^{-1}\Sigma_{BA}\Sigma_{AA}^{-1}$. One can see that increasing $\sigma_n^2$ reduces the eccentricity of $M$, and thus reduces the top eigenvalue and the worst-case privacy loss. Meanwhile, having a longer length scale, $\ell$, in the kernel increases covariance, and increases the eccentricity of $M$. ∎

Equation 4.

$\varepsilon^* \;=\; \frac{\lambda\, r^2 |A|}{2}\left(\alpha_{\max} + \frac{1}{\sigma_n^2}\right) \qquad (4)$

Proof.

(a) $\varepsilon^* \;=\; \max_{(x_A, x'_A)}\; \frac{\lambda}{2}\,\Delta x^{T} M\,\Delta x + \frac{\lambda}{2\sigma_n^2}\,\lVert \Delta x\rVert_2^2$

(b) $\;=\; \max_{\lVert \Delta x\rVert_2 \le r\sqrt{|A|}}\; \frac{\lambda}{2}\,\Delta x^{T} M\,\Delta x + \frac{\lambda}{2\sigma_n^2}\,\lVert \Delta x\rVert_2^2$

(c) $\;=\; \frac{\lambda}{2}\,(\Delta x^*)^{T} M\,\Delta x^* + \frac{\lambda}{2\sigma_n^2}\,\lVert \Delta x^*\rVert_2^2$

(d) $\;=\; \frac{\lambda}{2}\, r^2 |A|\, \alpha_{\max} + \frac{\lambda}{2\sigma_n^2}\,\lVert \Delta x^*\rVert_2^2$

(e) $\;=\; \frac{\lambda}{2}\, r^2 |A|\, \alpha_{\max} + \frac{\lambda}{2\sigma_n^2}\, r^2 |A|$

(f) $\;=\; \frac{\lambda\, r^2 |A|}{2}\left(\alpha_{\max} + \frac{1}{\sigma_n^2}\right)$

Step (a) is a restatement of Equation 3. Step (b) is a maximization over all discriminative pairs described in Definition 2.2; here, the set of all pairs is given by $\{(x_A, x'_A) : \lVert x_A - x'_A\rVert_2 \le r\sqrt{|A|}\}$. The choice of $r\sqrt{|A|}$ as the upper bound on magnitude is because this includes a radius of $r$ around each point. Step (c) substitutes in the optimal solution $\Delta x^* = r\sqrt{|A|}\, u_{\max}$, where $u_{\max}$ is the top eigenvector of $M$. Step (d) identifies the optimal solution as the maximizer of the Mahalanobis distance of the first term; here, $\alpha_{\max}$ is the top eigenvalue of $M$. The maximizer also has maximum magnitude, $r\sqrt{|A|}$, in the direction of the top eigenvector, which is incorporated into step (e), and step (f) rearranges. ∎
