Differentially Private Grids for Geospatial Data
Abstract
In this paper, we tackle the problem of constructing a differentially private synopsis for two-dimensional datasets such as geospatial datasets. The current state-of-the-art methods work by performing recursive binary partitioning of the data domains, and constructing a hierarchy of partitions. We show that the key challenge in partition-based synopsis methods lies in choosing the right partition granularity to balance the noise error and the non-uniformity error. We study the uniform-grid approach, which applies an equi-width grid of a certain size over the data domain and then issues independent count queries on the grid cells. This method has received no attention in the literature, probably because no good method for choosing a grid size was known. Based on an analysis of the two kinds of errors, we propose a method for choosing the grid size. Experimental results validate our method, and show that this approach performs as well as, and often better than, the state-of-the-art methods.
We further introduce a novel adaptive-grid method. The adaptive-grid method lays a coarse-grained grid over the dataset, and then further partitions each cell according to its noisy count. Both levels of partitions are then used in answering queries over the dataset. This method addresses the need to have finer-granularity partitioning over dense regions and, at the same time, coarse partitioning over sparse regions. Through extensive experiments on real-world datasets, we show that this approach consistently and significantly outperforms the uniform-grid method and other state-of-the-art methods.
I. Introduction
We interact with location-aware devices on a daily basis. Such devices range from GPS-enabled cellphones and tablets to navigation systems. Each device can report a multitude of location data to centralized servers. Such location information, commonly referred to as geospatial data, can have tremendous benefits if properly processed and analyzed. For many businesses, a location-based view of information can enhance business intelligence and enable smarter decision making. For many researchers, geospatial data can add an interesting dimension. Location information from cellphones, for instance, can help social research into how populations settle and congregate. Furthermore, location data from in-car navigation systems can help identify areas of common traffic congestion.
If shared, such geospatial data can have significant impact for research and other uses. Sharing such information, however, can have significant privacy implications. In this paper, we study the problem of releasing static geospatial data in a private manner. In particular, we introduce methods of releasing a synopsis of two-dimensional datasets while satisfying differential privacy.
Differential privacy [1] has recently become the de facto standard for privacy-preserving data release, as it is capable of providing strong worst-case privacy guarantees. We consider two-dimensional, differentially private, synopsis methods in the following framework. Given a dataset and the two-dimensional domain that tuples in the dataset are in, we view each tuple as a point in two-dimensional space. One partitions the domain into cells, and then obtains noisy counts for each cell in a way that satisfies differential privacy. The differentially private synopsis consists of the boundaries of these cells and their noisy counts. This synopsis can then be used either for generating a synthetic dataset, or for answering queries directly.
In general, when answering queries, there are two sources of error in such differentially private synopsis methods. The first source is the noise added to satisfy differential privacy. This noise has a predefined variance and is independent of the dataset, but depends on how many cells are used to answer a query. The second source is the nature of the dataset itself. When we issue a query which only partially intersects with some cell, then we would have to estimate how many data points are in the intersected cells, assuming that the data points are distributed uniformly. The magnitude of this error depends both on the distribution of points in the dataset and on the partitioning. Our approach stems from careful examination of how these two sources of error depend on the grid size.
Several recent papers have attempted to develop such differentially private synopsis methods for two-dimensional datasets [2, 3]. These papers adapt spatial indexing methods such as quadtrees and kd-trees to provide a private description of the data distribution. These approaches can all be viewed as adapting the binary hierarchical method, which works well for 1-dimensional datasets, to the case of 2 dimensions. The emphasis is on how to perform the partitioning, and the result is a deep tree.
Somewhat surprisingly, none of the existing papers on summarizing multi-dimensional datasets compare with the simple uniform-grid (UG) method, which applies an equi-width m × m grid over the data domain and then issues independent count queries on the grid cells. We believe one reason is that the accuracy of UG is highly dependent on the grid size m, and how to choose the best grid size was not known. We propose choosing m to be √(Nε/c), where N is the number of data points, ε is the total privacy budget, and c is some small constant depending on the dataset. Extensive experimental results, using real-world datasets of different sizes and features, validate our method of choosing m. Experimental results also suggest that setting c = 10 works well for datasets of different sizes and different choices of ε, and show that UG performs as well as, and often better than, the state-of-the-art hierarchical methods in [2, 3].
This result is somewhat surprising, as hierarchical methods have been shown to greatly outperform the equivalent of the uniform-grid method in the 1-dimensional case [4, 5]. We thus analyze the effect of dimensionality on the effectiveness of using hierarchies.
We further introduce a novel adaptive-grid (AG) method. This method is motivated by the need to have finer-granularity partitioning over dense regions and, at the same time, coarse partitioning over sparse regions. The adaptive-grid method lays a coarse-grained grid over the dataset, and then further partitions each cell according to its noisy count. Both levels of partitions are then used in answering queries over the dataset. We propose methods to choose the parameters for the partitioning by careful analysis of the aforementioned sources of error. Extensive experiments validate our methods for choosing the parameters, and show that the adaptive-grid method consistently and significantly outperforms the uniform-grid method and other state-of-the-art methods.
The contributions of this paper are as follows:

We identify that the key challenge in differentially private synopsis of geospatial datasets is how to choose the partition granularity to balance errors due to two sources, and propose a method for choosing grid size for the uniform grid method, based on an analysis of how the errors depend on the grid size.

We propose a novel, simple, and effective adaptive grid method, together with methods for choosing the key parameters.

We conducted extensive evaluations using 4 datasets of different sizes, including geospatial datasets that have not been used in differentially private data publishing literature before. Experimental results validate our methods and show that they outperform existing approaches.

We analyze why hierarchical methods do not perform well in the 2-dimensional case, and predict that they would perform even worse in higher dimensions.
The rest of this paper is organized as follows. In Section II, we set the scope of the paper by formally defining the problem of publishing two dimensional datasets using differential privacy. In Section III, we discuss previous approaches and related work. We present our approach in Section IV, and present the experimental results supporting our claims in Section V. Finally, we conclude in Section VI.
II. Problem Definition
II-A. Differential Privacy
Informally, differential privacy requires that the output of a data analysis mechanism be approximately the same, even if any single tuple in the input database is arbitrarily added or removed.
Definition 1 (Differential Privacy [1, 6])
A randomized mechanism A gives ε-differential privacy if for any pair of neighboring datasets D and D′, and any S ∈ Range(A),

Pr[A(D) = S] ≤ e^ε · Pr[A(D′) = S].
In this paper we consider two datasets D and D′ to be neighbors if and only if either D = D′ + t or D′ = D + t, where D + t denotes the dataset resulting from adding the tuple t to the dataset D. We use D ≃ D′ to denote this. This protects the privacy of any single tuple, because adding or removing any single tuple results in multiplicatively bounded changes in the probability distribution of the output. If any adversary can make a certain inference about a tuple based on the output, then the same inference is also likely to occur even if the tuple does not appear in the dataset.
Differential privacy is composable in the sense that combining k mechanisms that satisfy differential privacy for ε1, ε2, …, εk results in a mechanism that satisfies ε-differential privacy for ε = Σi εi. Because of this, we refer to ε as the privacy budget of a privacy-preserving data analysis task. When a task involves multiple steps, each step uses a portion of ε so that the sum of these portions is no more than ε.
To compute a function g on the dataset D in a differentially private way, one can add to g(D) a random noise drawn from the Laplace distribution. The magnitude of the noise depends on GS_g, the global sensitivity (or simply the sensitivity) of g. Such a mechanism A_g is given below:

A_g(D) = g(D) + Lap(GS_g / ε), where GS_g = max over D ≃ D′ of |g(D) − g(D′)|.

In the above, Lap(β) denotes a random variable sampled from the Laplace distribution with scale parameter β, whose density is Pr[Lap(β) = x] = (1/2β)·e^(−|x|/β). The variance of Lap(β) is 2β²; hence the standard deviation of Lap(GS_g/ε) is √2·GS_g/ε.
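As an illustration, the Laplace mechanism for a numeric query can be sketched as follows. This is a generic sketch, not code from the paper, and the function name is our own:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Add Lap(sensitivity / epsilon) noise to true_value.

    For a count query the sensitivity is 1, since adding or removing
    a single tuple changes the count by at most 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon  # the Laplace scale parameter beta
    return true_value + rng.laplace(loc=0.0, scale=scale)
```

The returned noise has standard deviation √2·(sensitivity/ε); for example, with sensitivity 1 and ε = 0.5, the standard deviation is about 2.83.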
II-B. Problem Definition
We consider the following problem. Given a 2-dimensional geospatial dataset D, our aim is to publish a synopsis of the dataset to accurately answer count queries over the dataset. We consider synopsis methods in the following framework. Given a dataset and the two-dimensional domain that tuples in the dataset are in, we view each tuple as a point in two-dimensional space. One partitions the domain into cells, and then obtains noisy counts for each cell in a way that satisfies differential privacy. The differentially private synopsis consists of the boundaries of these cells and their noisy counts. This synopsis can then be used either for generating a synthetic dataset, or for answering queries directly.
We assume that each query specifies a rectangle in the domain, and asks for the number of data points that fall in the rectangle. Such a count query can be answered using the noisy counts for cells in the following fashion. If a cell is completely included in the query rectangle, then its noisy count is included in the total. If a cell is partially included, then one estimates the point count in the intersection between the cell and the query rectangle, assuming that the points within the cell are distributed uniformly. For instance, if only half of the area of the cell is included in the query, then one assumes that half of the points in the cell are covered by the query.
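The query-answering procedure just described can be sketched as follows, assuming an m × m grid of noisy counts over the unit square [0,1]²; the function and its argument layout are our own illustration:

```python
import numpy as np

def answer_query(noisy_counts, qx0, qy0, qx1, qy1):
    """Answer a rectangle count query over the unit square [0,1]^2
    from an m x m grid of noisy cell counts, using the uniformity
    assumption for partially covered cells."""
    m = noisy_counts.shape[0]
    total = 0.0
    for i in range(m):
        for j in range(m):
            # cell (i, j) covers [i/m, (i+1)/m) x [j/m, (j+1)/m)
            cx0, cx1 = i / m, (i + 1) / m
            cy0, cy1 = j / m, (j + 1) / m
            ox = max(0.0, min(cx1, qx1) - max(cx0, qx0))
            oy = max(0.0, min(cy1, qy1) - max(cy0, qy0))
            frac = (ox * oy) * (m * m)  # fraction of the cell covered
            total += frac * noisy_counts[i, j]
    return total
```

A cell fully inside the query contributes its whole noisy count (frac = 1); a cell half covered contributes half of it.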
Two Sources of Error. Under this method, there are two sources of error when answering a query. The noise error is due to the fact that the counts are noisy. To satisfy differential privacy, one adds, to each cell, an independently generated noise, and these noises have the same standard deviation, which we use σ to denote. When summing up the noisy counts of k cells to answer a query, the resulting noise error is the sum of the corresponding noises. As these noises are independently generated zero-mean random variables, they cancel each other out to a certain degree. In fact, because these noises are independently generated, the variance of their sum equals the sum of their variances. Therefore, the sum has variance kσ², corresponding to a standard deviation of √k·σ. That is, the noise error of a query grows linearly in √k. Therefore, the finer granularity one partitions the domain into, the more cells are included in a query, and the larger the noise error is.
The second source of error is caused by cells that intersect with the query rectangle but are not contained in it. For these cells, we need to estimate how many data points are in the intersection, assuming that the data points are distributed uniformly. This estimation has errors when the data points are not distributed uniformly. We call this the non-uniformity error. The magnitude of the non-uniformity error in any intersected cell, in general, depends on the number of data points in that cell, and is bounded by it. Therefore, the finer the partition granularity, the lower the non-uniformity error.
As argued above, reducing the noise error and the non-uniformity error imposes conflicting demands on the partition granularity. The main challenge of partition-based differentially private synopsis methods lies in reconciling these conflicting needs.
III. Previous Approaches and Related Work
Differential privacy was developed in a series of papers [7, 8, 9, 6, 1], and methods of satisfying it when evaluating a function over the dataset are presented in [1, 10, 11].
Recursive Partitioning. Most approaches that directly address two-dimensional and spatial datasets use recursive partitioning [12, 3, 2]. These approaches perform a recursive binary partitioning of the data domain.
Xiao et al. [2] proposed adapting a standard spatial indexing method, the kd-tree, to provide differential privacy. Nodes in a kd-tree are recursively split along some dimension. In order to minimize the non-uniformity error, Xiao et al. use a heuristic that chooses the split point such that the two subregions are as close to uniform as possible.
Cormode et al. [3] proposed a similar approach. Instead of using a uniformity heuristic, they split the nodes along the median of the partition dimension. The height of the tree is predetermined and the privacy budget is divided among the levels. Part of the privacy budget is used to choose the median, and part is used to obtain the noisy count. [3] also proposed combining quadtrees with noisy median-based partitioning. In the quadtree, nodes are recursively divided into four equal regions via horizontal and vertical lines through the midpoint of each range. Thus no privacy budget is needed to choose the partition point. The method that gives the best performance in [3] is a hybrid approach, which they call "KD-hybrid". This method uses a quadtree for the first few levels of partitions, and then uses the kd-tree approach for the other levels. A number of other optimizations were also applied in KD-hybrid, including the constrained inference presented in [4], and optimized allocation of the privacy budget. Their experiments indicate that KD-hybrid outperforms the kd-tree-based approach and the approach in [2].
Qardaji and Li [12] proposed a general recursive partitioning framework for multi-dimensional datasets. At each level of recursion, partitioning is performed along the dimension which results in the most balanced partitioning of the data points. The balanced partitioning employed by this method has the effect of producing regions of similar size. When applied to two-dimensional datasets, this approach is very similar to building a kd-tree based on noisy medians.
We experimentally compare with the state-of-the-art KD-hybrid method. In Section IV-C, we analyze the effect of dimensionality and show that hierarchical methods provide limited benefit in the 2-dimensional case.
Hierarchical Transformations. The recursive partitioning methods above essentially build a hierarchy over a representation of the data points. Several approaches have been presented in the literature to improve count queries over such hierarchies.
In [4], Hay et al. proposed the notion of constrained inference for hierarchical methods to improve accuracy for range queries. This work has been mostly developed in the context of one-dimensional datasets. Using this approach, one arranges all queried intervals into a binary tree, where the unit-length intervals are the leaves. Count queries are then issued at all the nodes in the tree. Constrained inference exploits the consistency requirement that a parent's count should equal the sum of its children's counts to improve accuracy.
In [5], Xiao et al. propose the Privlet method for answering histogram queries, which uses wavelet transforms. Their approach applies a Haar wavelet transform to the frequency matrix of the dataset. A Haar wavelet essentially builds a binary tree over the dataset, where each node (or "coefficient") represents the difference between the average value of the nodes in its right subtree and the average value of the nodes in its left subtree. The privacy budget is divided among the different levels, and the method then adds noise to each transformation coefficient proportional to its sensitivity. These coefficients are then used to regenerate an anonymized version of the dataset by applying the reverse wavelet transformation. The benefit of using wavelet transforms is that they introduce a desirable noise-canceling effect when answering range queries.
For two-dimensional datasets, this method uses standard decomposition when applying the wavelet transform. Viewing the dataset as a frequency matrix, the method first applies the Haar wavelet transform on each row. The result is a vector of detail coefficients for each row. Then, using the matrix of detail coefficients as input, the method applies the transformation on the columns. Noise is then added to each cell, proportional to the sensitivity of the coefficient in that cell. To reconstruct the noisy frequency matrix, the method applies the reverse transformation on each column and then each row.
Both constrained inference and wavelet methods have been shown to be very effective at improving query accuracy in the 1-dimensional case. Our experiments show that applying them to a uniform grid provides only small improvements for 2-dimensional datasets. We note that these methods can only be applied after one has decided what the leaf cells are. When combined with the uniform-grid method, they require a method to choose the right grid size, as the performance will be poor when a wrong grid size is used. In Section V, we experimentally compare with the wavelet method.
Other related work. Blum et al. [13] proposed an approach that employs non-recursive partitioning, but their results are mostly theoretical and lack general practical applicability to the domain we are considering.
[14, 15, 16] provide methods of differentially private release which assume that the queries are known before publication. The most recent of these works, by Li and Miklau [16], proposes the matrix mechanism. Given a workload of count queries, the mechanism automatically selects a different set of strategy queries to answer privately. It then uses those answers to derive answers to the original workload. Other techniques for analyzing general query workloads under differential privacy have been discussed in [17, 18]. These approaches also require the base cells to be fixed. Furthermore, they require the existence of a known set of queries, which are represented as a matrix, and then compute how to combine base cells to answer the original queries. It is unclear how to use these methods when one aims at answering arbitrary range queries.
A number of approaches exist for differentially private interactive data analysis, e.g., [19], and methods of improving the accuracy of such release [20] have been suggested. In such works, however, one interacts with a privacy-aware database interface rather than getting access to a synopsis of the dataset. Our approach deals with the latter.
IV. The Adaptive Partitioning Approach
In this section, we present our proposed methods.
IV-A. The Uniform Grid Method (UG)
Perhaps the simplest method one can think of is the Uniform Grid (UG) method. This approach partitions the data domain into m × m grid cells of equal size, and then obtains a noisy count for each cell. Somewhat surprisingly, none of the existing papers on summarizing multi-dimensional datasets compare with UG. We believe one reason is that the accuracy of UG is highly dependent on the grid size m, and how to choose the best grid size was not known.
We propose the following guideline for choosing m in order to minimize the sum of the two kinds of errors presented in Section II.
Guideline 1
In order to minimize the errors due to UG, the grid size m should be about

m = √(Nε/c),

where N is the number of data points, ε is the total privacy budget, and c is some small constant depending on the dataset. Our experimental results suggest that setting c = 10 works well for the datasets we have experimented with.
Below we present our analysis supporting this guideline. As the sensitivity of the count query is 1, the noise added to each cell follows the distribution Lap(1/ε) and has a standard deviation of √2/ε. Given an m × m grid, and a query that selects an r portion of the domain (where r is the ratio of the area of the query rectangle to the area of the whole domain), about r·m² cells are included in the query, and the total noise error thus has a standard deviation of √(r·m²)·(√2/ε) = √(2r)·m/ε.
The non-uniformity error is proportional to the number of data points in the cells that fall on the border of the query rectangle. A query that selects an r portion of the domain has four edges, whose lengths are proportional to √r of the domain length; thus the query's border contains on the order of √r·m cells, which on average include on the order of √r·m·(N/m²) = √r·N/m data points. Assuming that the non-uniformity error on average is some portion c0 of the total density of the cells on the query border, the non-uniformity error is c0·√r·N/m for some constant c0.
To minimize the two errors' sum, √(2r)·m/ε + c0·√r·N/m, we should set m to √(Nε/c), where c = √2/c0.
Using Guideline 1 requires knowing N, the number of data points. Obtaining a noisy estimate of N using a very small portion of the total privacy budget suffices.
The parameter c depends on the uniformity of the dataset. In the extreme case where the dataset is completely uniform, the optimal grid size is 1 × 1. That is, the best method is to obtain as accurate a total count as possible, and then any query can be fairly accurately answered by computing what fraction of the region is covered by the query. This corresponds to a large c. When a dataset is highly non-uniform, a smaller c value is desirable. In our experiments, we observe that setting c = 10 gives good results across datasets of different kinds.
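Guideline 1 amounts to a one-line computation; a minimal sketch (the rounding and the lower bound of 1 are our own choices):

```python
import math

def ug_grid_size(n_points, epsilon, c=10.0):
    """Guideline 1: grid size m ~ sqrt(N * epsilon / c), at least 1."""
    return max(1, round(math.sqrt(n_points * epsilon / c)))
```

For example, with N = 1,000,000 points and ε = 0.1, this gives m = √10,000 = 100, i.e., a 100 × 100 grid.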
IV-B. The Adaptive Grids Approach (AG)
The main disadvantage of UG is that it treats all regions of the dataset equally. That is, both dense and sparse regions are partitioned in exactly the same way. This is not optimal. If a region has very few points, this method might over-partition the region, creating a set of cells with close to zero data points. This has the effect of increasing the noise error with little reduction in the non-uniformity error. On the other hand, if a region is very dense, this method might under-partition the region. As a result, the non-uniformity error would be quite large.
Ideally, when a region is dense, we want to use finer-granularity partitioning, because the non-uniformity error in this region greatly outweighs the noise error. Similarly, when a region is sparse (having few data points), we want to use a coarser grid there. Based on this observation, we propose an Adaptive Grids (AG) approach.
The AG approach works as follows. We first lay a coarse m1 × m1 grid over the data domain, creating first-level cells, and then we issue a count query for each cell using a privacy budget αε, where 0 < α < 1. For each cell, let N′ be the noisy count of the cell; AG then partitions the cell using a grid size m2 that is adaptively chosen based on N′, creating m2 × m2 leaf cells. The parameter α determines how to split the privacy budget between the two levels.
Applying Constrained Inference. As discussed in Section III, constrained inference was developed in the context of one-dimensional histograms to improve hierarchical methods [4]. The AG method produces a 2-level hierarchy. For each first-level cell, if it is further partitioned into an m2 × m2 grid, we can perform constrained inference.
Let C be the noisy count of a first-level cell, and let c1, …, c_{m2²} be the noisy counts of the cells that it is further partitioned into in the second level. One can then apply constrained inference as follows. First, one obtains a more accurate count C′ by taking a weighted average of C and the sum of the ci's, such that the standard deviation of the noise error at C′ is minimized. As C is obtained with budget αε and each ci with budget (1−α)ε, weighting each term inversely to its variance gives

C′ = ( m2²·α²·C + (1−α)²·Σi ci ) / ( m2²·α² + (1−α)² ).

This value is then propagated to the leaf nodes by distributing the difference among all leaf nodes equally:

ci′ = ci + (1/m2²)·( C′ − Σj cj ).

When m2 = 1, the constrained inference step amounts to issuing another query with budget (1−α)ε and then computing a weighted average of the two noisy counts.
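The two-level constrained inference step can be sketched as follows. The inverse-variance weighting below is a sketch of the idea, with variable names of our own choosing:

```python
def constrained_inference(C, leaf_counts, alpha):
    """Two-level constrained inference sketch.

    C: noisy first-level count (obtained with budget alpha * eps);
    leaf_counts: the m2*m2 noisy leaf counts (budget (1-alpha) * eps).
    Returns adjusted leaf counts whose sum equals the
    variance-minimizing weighted average of C and sum(leaf_counts).
    """
    k = len(leaf_counts)  # k = m2 * m2
    s = sum(leaf_counts)
    # Var(C) is proportional to 1/alpha^2 and Var(s) to k/(1-alpha)^2;
    # inverse-variance weights minimize the variance of the average.
    w = (k * alpha ** 2) / (k * alpha ** 2 + (1 - alpha) ** 2)
    C_prime = w * C + (1 - w) * s
    # distribute the difference equally among the leaves
    adj = (C_prime - s) / k
    return [c + adj for c in leaf_counts]
```

For instance, with α = 0.5 and four leaves, the first-level count gets weight 4·α²/(4·α² + (1−α)²) = 0.8, and the remaining 0.2 goes to the leaf sum.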
Choosing Parameters for AG. For the AG method, we need a formula to adaptively determine the grid size m2 for each first-level cell. We propose the following guideline.
Guideline 2
Given a cell with a noisy count of N′, to minimize the errors, this cell should be partitioned into m2 × m2 cells, where m2 is computed as follows:

m2 = ⌈ √( N′·ε2 / (c/2) ) ⌉,

where ε2 = (1−α)ε is the remaining privacy budget for obtaining noisy counts for leaf cells, and c is the same constant as in Guideline 1.
The analysis supporting this guideline is as follows. When the first-level cell is further partitioned into m2 × m2 leaf cells, only queries whose borders go through this first-level cell are affected. These queries may include 1, 2, 3, …, up to m2 rows (or columns) of leaf cells, and thus up to m2² leaf cells. When a query includes more than half of these leaf cells, constrained inference has the effect that the query is answered using the count obtained in the first-level cell minus those leaf cells that are not included in the query. Therefore, on average a query is answered using on the order of m2²/4 leaf cells, and the average noise error is on the order of (m2/2)·(√2/ε2) = m2/(√2·ε2). The average non-uniformity error is about c0·N′/m2; thereby, to minimize their sum, we should choose m2 to be about √(N′·ε2/(c/2)), where c = √2/c0 as before.
The choice of m1, the grid size for the first level, is less critical than the choice of m2. When m1 is larger, the average density of each cell is smaller, and the further partitioning step will partition each cell into fewer cells. When m1 is smaller, the further partitioning step will partition each cell into more cells. In general, m1 should be less than the grid size for UG computed according to Guideline 1, since AG further partitions each cell. At the same time, m1 should not be too small either. We set

m1 = max( 10, ⌈ (1/4)·√(Nε/c) ⌉ ).
The choice of α also appears to be less critical. Our experiments suggest that setting α in the range [0.2, 0.6] results in similar accuracy. We set α = 0.5.
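Putting the parameter choices together, a sketch of the AG parameter computation (the function layout and the handling of non-positive noisy counts are our own assumptions):

```python
import math

def ag_parameters(n_points, epsilon, c=10.0, alpha=0.5):
    """Sketch of the AG parameter choices analyzed above.

    Returns the first-level grid size m1 and a function mapping a
    first-level cell's noisy count N' to its second-level grid size
    m2 ~ sqrt(N' * eps2 / (c/2)), with eps2 = (1 - alpha) * epsilon.
    """
    m1 = max(10, int(math.ceil(math.sqrt(n_points * epsilon / c) / 4)))
    eps2 = (1 - alpha) * epsilon

    def second_level_size(noisy_count):
        if noisy_count <= 0:
            return 1  # do not partition near-empty cells further
        return int(math.ceil(math.sqrt(noisy_count * eps2 / (c / 2))))

    return m1, second_level_size
```

For example, with N = 1,000,000 and ε = 1.0, this gives m1 = 80, and a first-level cell with noisy count 2000 is partitioned into a 15 × 15 grid.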
IV-C. Comparing with Existing Approaches
We now compare our proposed UG and AG methods with existing hierarchical methods, in terms of runtime efficiency, simplicity, and extensibility to higher-dimensional datasets.
Efficiency. The UG and AG methods are conceptually simple and easy to implement. They also work well with very large datasets that cannot fit into memory. UG can be performed in a single scan of the data points. For each data point, UG just needs to increase by 1 the counter of the cell that the data point is in. AG requires two passes over the dataset. The first pass is similar to that of UG. In the second pass, it first computes which first-level cell the data point is in, and then which leaf cell it is in. It then increases the corresponding counter.
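The single-pass UG computation can be sketched as below, assuming the domain is an axis-aligned rectangle given as ((x0, x1), (y0, y1)); the function is our own illustration:

```python
import numpy as np

def ug_histogram(points, m, epsilon, domain=((0.0, 1.0), (0.0, 1.0)), rng=None):
    """One pass over the data: count points per cell of an m x m grid,
    then add Lap(1/epsilon) noise to every cell count."""
    rng = np.random.default_rng() if rng is None else rng
    (x0, x1), (y0, y1) = domain
    counts = np.zeros((m, m))
    for x, y in points:
        # clamp so points on the upper boundary fall in the last cell
        i = min(m - 1, int((x - x0) / (x1 - x0) * m))
        j = min(m - 1, int((y - y0) / (y1 - y0) * m))
        counts[i, j] += 1
    return counts + rng.laplace(scale=1.0 / epsilon, size=(m, m))
```

Since each point touches exactly one counter, the pass is linear in the number of points and needs only O(m²) memory, regardless of dataset size.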
We point out that another major benefit of UG and AG over recursive partition-based methods is their higher efficiency. For all these methods, the running time is linear in the depth of the tree, as each level of the tree requires one pass over the dataset. Existing recursive partitioning methods have much deeper trees (e.g., reaching 16 levels is common for 1 million data points). Furthermore, these methods require expensive computation to choose the partition points.
Effect of Dimensionality. Existing recursive partitioning approaches can be viewed as adapting the binary hierarchical method, which works well for 1-dimensional datasets, to the case of 2 dimensions. Some of these methods adapt quadtree methods, which can be viewed as extending 1-dimensional binary trees to 2 dimensions. The emphasis is on how to perform the binary partition, e.g., using the noisy mean, the exponential mechanism for finding the median, the exponential mechanism with a non-uniformity measure, etc. The result is a deep tree.
We observe, however, that while a binary hierarchical tree works well in the 1-dimensional case, its benefit in the 2-dimensional case is quite limited, and the benefit can only decrease with higher dimensionality. When building a hierarchy, the interior of a query can be answered by higher-level nodes, but the borders of the query have to be answered using leaf nodes. The higher the dimensionality, the larger the portion of the query that falls in the border region.
For example, for a 1-dimensional dataset with its domain divided into M cells, when one groups each b adjacent cells into one larger cell, each larger cell is a b/M portion of the whole domain. Each query has 2 border regions which need to be answered by leaf cells; each region is of size on the order of that of one larger cell, i.e., b/M of the whole domain. In the 2-dimensional case, with an m × m grid and a total of M = m² cells, if one groups b × b adjacent cells together, then a query's border, which needs to be answered by leaf nodes, has 4 sides, and each side is of size on the order of b/m = b/√M of the whole domain. Note that b/√M is much larger than b/M, since M is always much larger than √M. For example, when M = 2^20 and b = 2, b/M = 2^-19, while b/√M = 2^-9.
Therefore, in the 2-dimensional case, one benefits much less from a hierarchy, which provides less accurate counts for the leaf cells. This effect keeps growing with dimensionality. For d dimensions, the border of a query has 2d hyperplanes, each of size on the order of b/M^(1/d). In our experiments, we have observed some small benefits from using hierarchies, which we conjecture will disappear in 3- or higher-dimensional cases.
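A back-of-envelope computation illustrates this effect. With M leaf cells in total and cells grouped in blocks of b per axis, the query border consists of 2d slabs, each roughly a b/M^(1/d) fraction of the domain thick. The function below is our own illustration of this rough estimate, not a formula from the text:

```python
def border_fraction(total_cells, group, dim):
    """Approximate fraction of the domain in a query's border region:
    2*dim slabs, each of thickness group / total_cells**(1/dim)."""
    side = total_cells ** (1.0 / dim)  # cells per axis
    return 2 * dim * group / side
```

For M = 2^20 and b = 2, this gives about 4·2^-20 (roughly 0.0004%) in 1 dimension, but 8·2^-10 (roughly 0.8%) in 2 dimensions, and the fraction keeps growing with the dimension d.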
This analysis suggests that our approach of starting from the uniform-grid method and trying to improve upon it is more promising than trying to improve a hierarchical-tree-based method. When focusing on the uniform grid, the emphasis in designing the algorithm shifts from choosing the axis for partitioning to choosing the partition granularity. When one partitions a cell into two subcells, the question of how to perform the partitioning depending on the data in the cell seems important and may affect the performance; thus one may want to use part of the privacy budget to figure out the best partitioning point. On the other hand, when one needs to partition a cell into an m × m grid of subcells in a differentially private way, the only feasible solution appears to be equi-width partitioning. Hence the only parameter of interest is the grid size.
V. Experimental Results
V-A. Methodology
We have conducted extensive experiments using four real datasets, to compare the accuracy of different methods and to validate our analysis of the choice of parameters.
Datasets. We illustrate these datasets by plotting the data points directly in Figure 1. We also present the parameters for these datasets in Table II.
The first dataset (which we call the "road" dataset) includes the GPS coordinates of road intersections in the states of Washington and New Mexico, obtained from the 2006 TIGER/Line files of the US Census. This is the dataset that was used in [3] for experimental evaluations. There are about 1.6M data points. As illustrated in Figure 1(a), the distribution of the data points is quite unusual. There are large blank areas with two dense regions (corresponding to the two states).
The second dataset (the "checkin" dataset) is derived from user checkins to a location-based social networking service. We obtained both the third dataset (the "landmark" dataset) and the fourth dataset (the "storage" dataset) from infochimps.
Absolute and Relative Error. Following [3], we primarily consider the relative error, defined as follows. For a query q, we use A(q) to denote the correct answer to q. For a method M and a query q, we use M(q) to denote the answer to q when using the histogram constructed by method M. The relative error is then defined as RE(q) = |M(q) − A(q)| / max{A(q), ρ},
where we set ρ to be a small fraction of |D|, the total number of data points in the dataset D. This avoids dividing by 0 when A(q) = 0.
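A minimal sketch of this measure, assuming the standard max-with-sanity-bound form; the default fraction below is an illustrative choice, not necessarily the paper's exact value:

```python
# Sketch of the relative-error measure described above. The sanity
# bound rho is a small fraction of the dataset size; the fraction 0.001
# is an illustrative default, not necessarily the paper's value.

def relative_error(estimate: float, truth: float, num_points: int,
                   fraction: float = 0.001) -> float:
    rho = fraction * num_points          # sanity bound, avoids /0
    return abs(estimate - truth) / max(truth, rho)

# A query whose true answer is 0 still yields a finite relative error:
err_zero = relative_error(estimate=50.0, truth=0.0, num_points=1_000_000)
```

The bound ρ only kicks in for queries with very small true answers; for queries with large true counts the measure reduces to the ordinary relative error.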
The relative error is likely to be largest when the query is mid-sized. When the range of a query q is large, the relative error is likely to be small, since A(q) is likely to be large. On the other hand, when the range of q is small, the absolute error is likely to be small.
While we primarily use relative error, we also use absolute error in the final comparison.
Understanding the Figures. We use two ε values, 0.1 and 1. For each algorithm, we use a sequence of query sizes, with q1 being the smallest; each subsequent size doubles both the x-range and the y-range of the query, thereby quadrupling the query area, with the largest query size covering a substantial fraction of the whole space. The query sizes we have used are given in Table II.
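As a quick check of the doubling scheme (with hypothetical base ranges), the query area quadruples at every step:

```python
# Each successive query size doubles both the x-range and the y-range,
# so the query area quadruples at every step. The base ranges below are
# hypothetical, just to illustrate the progression.

def query_areas(base_x: float, base_y: float, num_sizes: int):
    return [(base_x * 2 ** i) * (base_y * 2 ** i) for i in range(num_sizes)]

areas = query_areas(base_x=1.0, base_y=1.0, num_sizes=4)  # 1, 4, 16, 64
```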
For each query size, we randomly generate a set of queries and compute the errors in answering them. We use two kinds of graphs. To illustrate the results across different query sizes, we use line graphs to plot the arithmetic mean of the relative error for each query size. To provide a clearer comparison among different algorithms, we use candlesticks to plot the profile of relative errors over all query sizes. Each candlestick provides 5 pieces of information: the 25th percentile (the bottom of the candlestick), the median (the bottom of the box), the 75th percentile (the top of the box), the 95th percentile (the top of the candlestick), and the arithmetic mean (the black bar). We pay the most attention to the arithmetic mean.
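The five candlestick statistics can be computed as below; the nearest-rank percentile convention is an illustrative choice among several standard ones:

```python
# Computes the five statistics shown in each candlestick: the 25th,
# 50th (median), 75th, and 95th percentiles, plus the arithmetic mean.
# The nearest-rank percentile convention here is an illustrative choice.

def candlestick_profile(errors):
    xs = sorted(errors)

    def pct(p):
        k = round(p / 100 * (len(xs) - 1))   # nearest-rank index
        return xs[k]

    return {
        "p25": pct(25), "median": pct(50), "p75": pct(75),
        "p95": pct(95), "mean": sum(xs) / len(xs),
    }

# Profile of a toy error distribution 0.1, 0.2, ..., 10.0:
profile = candlestick_profile([0.1 * i for i in range(1, 101)])
```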
Ks: KD-standard
Kh: KD-hybrid
U: UG with an m × m grid
W: Privlet with an m × m grid
H: hierarchy with the given number of levels and branching factor
A: AG with an m1 × m1 first-level grid and the given α value
Algorithm Notation. The notation for the algorithms used in our experiments is given in Table I. The AG method is denoted by A; it first lays a first-level grid, then uses a portion αε of the privacy budget to issue a count query for each cell. In addition, it partitions each cell according to its noisy count into a finer second-level grid, using the remaining budget. Unless explicitly noted, α is set to 0.5.
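The two-level idea can be sketched as follows; the Laplace sampler, the specific second-level sizing formula, and the constants alpha and c2 are illustrative assumptions, not the paper's exact algorithm:

```python
import math
import random

# Illustrative sketch of the adaptive-grid idea: spend alpha * eps on
# first-level cell counts, then size each cell's second-level grid by
# its noisy count, so dense cells get finer partitions. The sizing
# formula and the constant c2 are assumptions for illustration.

def laplace(scale: float) -> float:
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def noisy_count(true_count: int, eps: float, alpha: float = 0.5) -> float:
    # First level: each cell's count query uses budget alpha * eps.
    return true_count + laplace(1.0 / (alpha * eps))

def second_level_size(cell_noisy_count: float, eps: float,
                      alpha: float = 0.5, c2: float = 5.0) -> int:
    # Denser cells get finer second-level grids; the remaining budget
    # (1 - alpha) * eps pays for the second-level counts.
    remaining = (1 - alpha) * eps
    return max(1, round(math.sqrt(max(cell_noisy_count, 0.0) * remaining / c2)))

dense = second_level_size(10000, eps=1.0)   # a dense cell: fine grid
sparse = second_level_size(10, eps=1.0)     # a sparse cell: no split
```

Clamping the noisy count at zero ensures that cells whose noisy counts come out negative are simply left unpartitioned.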
dataset | # of points | domain size | largest query size | smallest query size | sugg. (ε=1) | best UG (ε=1) | best AG (ε=1) | sugg. (ε=0.1) | best UG (ε=0.1) | best AG (ε=0.1)
road | 1.6M | | | | 400 | 96–192 | 32–48 | 126 | 48–128 | 10–32
checkin | 1M | | | | 316 | 192–384 | 48–96 | 100 | 64–128 | 16–48
landmark | 0.9M | | | | 300 | 256–512 | 64–128 | 95 | 64–128 | 32–64
storage | 9K | | | | 30 | 32–64 | 12–32 | 10 | 10–32 | 10–16
Columns are the dataset name, the number of data points, the domain size, the largest and smallest query sizes used in the experiments, and three grid sizes each for ε = 1 and ε = 0.1: the grid size suggested by Guideline 1, the range of grid sizes that perform best in the experiments with UG, and the range of best-performing first-level grid sizes for AG.
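The "sugg." columns of Table II are consistent with a grid size of the form sqrt(N·ε/c) with c = 10; the reconstruction below is an inference from the table, not a quotation of Guideline 1 itself:

```python
import math

# Reconstructed form of Guideline 1, inferred from the "sugg." columns
# of Table II: m = sqrt(N * eps / c) with c = 10 reproduces them. This
# is an assumption based on the table, not a quote of the guideline.

def suggested_grid_size(num_points: int, eps: float, c: float = 10.0) -> int:
    return round(math.sqrt(num_points * eps / c))

# Reproducing several "sugg." entries of Table II:
road_eps1 = suggested_grid_size(1_600_000, 1.0)    # matches 400
road_eps01 = suggested_grid_size(1_600_000, 0.1)   # matches 126
storage_eps1 = suggested_grid_size(9_000, 1.0)     # matches 30
```

Note how the suggested size shrinks by a factor of sqrt(10) ≈ 3.16 when ε drops from 1 to 0.1, which matches every row of the table.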
V-B Comparing KD-Tree Methods with UG
In the first set of experiments, we compare KD-standard and KD-hybrid with UG at different grid sizes, and we identify the best-performing grid size for UG. The results are presented in Figure 2.
Analysis of Results. We observe that the relative errors are generally maximized at queries of middle sizes. The relative error peaks at larger query sizes for the road and checkin datasets than for the landmark and storage datasets. We believe this is due to the existence of large blank areas in the road and checkin datasets: the blank areas cause large queries to have low true counts, which causes large relative errors because of the large noise error for large queries.
We can see that when varying the grid size for the UG method, there exists a range of sizes where the method performs best; larger or smaller sizes tend to perform worse, and the error steadily increases as one leaves the optimal range. This suggests that choosing a good grid size is important.
The ranges of the experimentally observed optimal grid sizes are given in Table II. We can see that Guideline 1 works remarkably well. The predicted best sizes generally lie within the range of sizes that experimentally perform best, and often fall in the middle of the range. In two cases, the predicted size lies outside the observed optimal range. For the storage dataset with ε = 1, the predicted size is 30, which is quite close to 32–64, the range of sizes observed to have the lowest error. Only on the road dataset (which has unusually high non-uniformity) at ε = 1 does our prediction (400) lie outside the observed optimal range (96–192). However, we observe that even though the high non-uniformity calls for a smaller optimal grid size, the performance at grid sizes 384 and 512 is quite reasonable; indeed, the average relative error in both cases is still lower than that of KD-hybrid. Jumping ahead, in Figure 6(b) we will see that, in terms of absolute error, UG with the suggested size significantly outperforms UG with the size optimized for relative error, further validating Guideline 1.
We can also see that the KD-hybrid method performs worse than the best UG method on the road and storage datasets, and is very close to the best UG method on the other two datasets.
Effect of Adding Hierarchies. In Figure 3, we evaluate the effect of adding hierarchies on top of UG to improve its accuracy. Our goal is to understand whether adding hierarchies with different branching factors to UG would result in better accuracy. Here we present results only for the checkin and landmark datasets, because the road dataset is unusual and the storage dataset is too small to benefit from a hierarchy.
We include results for the UG method with the lowest observed relative error, and for the UG method at a grid size that is close to the size suggested by Guideline 1 and is a multiple of many small integers, facilitating experiments with different branching factors. We also include results for W, which applies the Privlet [5] method, described in Section III, to the leaf cells of the same grid. We consider hierarchical methods with a range of branching factors, and also vary the depth of the tree for a fixed branching factor.
From the results we observe that while adding hierarchies can somewhat improve the accuracy, the benefit is quite small; in Section IV-C, we analyzed the reason for this. Applying Privlet, however, results in clear, if modest, accuracy improvements. This can be attributed to the noise-reduction effect that the Privlet method has over general hierarchical methods. Jumping slightly ahead to Figure 5, we observe, however, that applying Privlet at smaller grid sizes tends to perform worse than the corresponding UG method.
V-C Evaluating Adaptive Grids
Figure 4 presents experimental results on the effect of choosing different parameters for the AG method, using the checkin and landmark datasets. The first column compares the AG method with its best-performing grid sizes against the best-performing UG method and the Privlet method at the same grid size, across different query sizes. We see that the AG method outperforms UG and Privlet across all query sizes.
The second column shows the effect of varying m1, the first-level grid size of the AG method. We see that while the AG method, like the UG method, is affected by the grid size, it is less sensitive to it and provides good performance over a wider range of m1, and that the m1 suggested by Guideline 2 is either at or close to the optimal size.
The third and the fourth columns explore the effects of varying α, the fraction of the privacy budget used for the first-level grid, and the constant used in computing the second-level grid size. In each figure, the candlesticks are divided into three groups, each group using a different value of the constant; within each group, we vary the value of α. As can be seen, setting α as suggested significantly outperforms larger values of α. We have also conducted experiments with a wider range of α values, and the results (not included in the paper for space limitations) show that values of α in a range around the suggested one result in almost the same accuracy. The effect of varying the constant can be seen by comparing the left group with the middle and right groups. We observe that one of the three settings performs worse, while the other two give very similar results. We have also experimented with finer-grained values of the constant; the results suggest that a fairly wide range of values gives very similar accuracy, and we use the suggested value as the default.
V-D Final Comparison
In Figure 5 we perform a comprehensive comparison of the methods: KD-hybrid, UG with the grid size giving the lowest observed relative error, Privlet at that grid size, AG with the first-level grid size giving the lowest observed relative error, UG with the suggested size, and AG with the suggested sizes. We use all four datasets and two ε values (0.1 and 1). From these results, we observe that AG consistently and significantly outperforms the other methods. We also observe that UG with the suggested grid size provides about the same accuracy as KD-hybrid, and AG with the suggested sizes clearly outperforms all non-adaptive methods. When compared with AG at the experimentally observed best grid size, AG with the suggested sizes is slightly worse but in general quite close.
In Figure 6 we plot the same comparisons using absolute error instead of relative error. Here we use a log scale for the candlesticks because the ranges of the absolute errors are quite large. Again we observe that the AG methods consistently and significantly outperform the other methods. It is interesting to note that for the road dataset, AG with the suggested sizes outperforms AG with the sizes optimized for relative error. Recall that road is the only dataset with a large difference between the suggested size and the observed optimal size, because the dataset is highly non-uniform. When one considers absolute error, our suggested sizes seem to work very well. This suggests the robustness of our error analysis and the guidelines that follow from it; recall that the analysis did not depend on the use of relative error or absolute error.
VI Conclusion
In this paper, we tackle the problem of releasing a differentially private synopsis for two-dimensional datasets. We have identified choosing the partition granularity to balance the noise error and the non-uniformity error as the key challenge in differentially private synopsis methods, and propose a methodology for choosing the grid size for the uniform-grid method, based on an analysis of how the errors depend on the grid size. We have also proposed a novel, simple, and effective adaptive-grid method, together with methods for choosing its key parameters. We have conducted extensive evaluations using four real datasets, including large geospatial datasets that had not been used in the differentially private data publishing literature before. Experimental results validate our methodology and show that our methods outperform existing approaches. We have analyzed the effect of dimensionality on hierarchical methods, illustrating why hierarchical methods do not provide significant benefit in the 2-dimensional case, and predicting that they would perform even worse in higher dimensions.
Footnotes
 http://snap.stanford.edu/data/loc-gowalla.html
 http://www.infochimps.com/datasets/storagefacilitiesbylandmarks
 http://www.infochimps.com/datasets/storagefacilitiesbyneighborhood–2
References
 C. Dwork, “Differential privacy,” in ICALP, 2006, pp. 1–12.
 Y. Xiao, L. Xiong, and C. Yuan, “Differentially private data release through multidimensional partitioning,” in VLDB SDM Workshop, 2010.
 G. Cormode, M. Procopiuc, E. Shen, D. Srivastava, and T. Yu, “Differentially private spatial decompositions,” in ICDE, 2012.
 M. Hay, V. Rastogi, G. Miklau, and D. Suciu, “Boosting the accuracy of differentially private histograms through consistency,” Proc. VLDB Endow., vol. 3, pp. 1021–1032, September 2010.
 X. Xiao, G. Wang, and J. Gehrke, “Differential privacy via wavelet transforms,” IEEE Transactions on Knowledge and Data Engineering, vol. 23, pp. 1200–1214, 2011.
 C. Dwork, F. McSherry, K. Nissim, and A. Smith, “Calibrating noise to sensitivity in private data analysis,” in TCC, 2006, pp. 265–284.
 I. Dinur and K. Nissim, “Revealing information while preserving privacy,” in PODS ’03: Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems. New York, NY, USA: ACM, 2003, pp. 202–210.
 C. Dwork and K. Nissim, “Privacypreserving datamining on vertically partitioned databases,” in CRYPTO, 2004, pp. 528–544.
 A. Blum, C. Dwork, F. McSherry, and K. Nissim, “Practical privacy: the SuLQ framework,” in Proceedings of the twenty-fourth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, ser. PODS ’05. New York, NY, USA: ACM, 2005, pp. 128–138.
 K. Nissim, S. Raskhodnikova, and A. Smith, “Smooth sensitivity and sampling in private data analysis,” in STOC, 2007, pp. 75–84.
 F. McSherry and K. Talwar, “Mechanism design via differential privacy,” in FOCS, 2007, pp. 94–103.
 W. Qardaji and N. Li, “Recursive partitioning and summarization: A general framework for privacy preserving data publishing,” in Proceedings of the 7th ACM Symposium on Information, Computer, and Communications Security. ACM, 2012.
 A. Blum, K. Ligett, and A. Roth, “A learning theory approach to noninteractive database privacy,” in STOC, 2008, pp. 609–618.
 C. Dwork, M. Naor, O. Reingold, G. Rothblum, and S. Vadhan, “On the complexity of differentially private data release: efficient algorithms and hardness results,” Proceedings of the 41st annual ACM symposium on Theory of computing, pp. 381–390, 2009.
 C. Dwork, G. Rothblum, and S. Vadhan, “Boosting and differential privacy,” Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pp. 51 – 60, 2010.
 C. Li and G. Miklau, “An adaptive mechanism for accurate query answering under differential privacy,” Proc. VLDB Endow., vol. 5, no. 6, pp. 514–525, Feb. 2012. [Online]. Available: http://dl.acm.org/citation.cfm?id=2168651.2168653
 M. Hardt and K. Talwar, “On the geometry of differential privacy,” in Proceedings of the 42nd ACM symposium on Theory of computing, ser. STOC ’10. New York, NY, USA: ACM, 2010, pp. 705–714.
 C. Li, M. Hay, V. Rastogi, G. Miklau, and A. McGregor, “Optimizing linear counting queries under differential privacy,” in Proceedings of the twenty-ninth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, ser. PODS ’10. New York, NY, USA: ACM, 2010, pp. 123–134.
 F. McSherry, “Privacy integrated queries: an extensible platform for privacypreserving data analysis,” in SIGMOD, 2009, pp. 19–30.
 A. Roth and T. Roughgarden, “Interactive privacy via the median mechanism,” in Proceedings of the 42nd ACM symposium on Theory of computing, ser. STOC ’10. New York, NY, USA: ACM, 2010, pp. 765–774.