# Random Differential Privacy

## Abstract

We propose a relaxed privacy definition called random differential privacy (RDP). Differential privacy requires that adding any new observation to a database will have small effect on the output of the data-release procedure. Random differential privacy requires that adding a randomly drawn new observation to a database will have small effect on the output. We show an analog of the composition property of differentially private procedures which applies to our new definition. We show how to release an RDP histogram and we show that RDP histograms are much more accurate than histograms obtained using ordinary differential privacy. We finally show an analog of the global sensitivity framework for the release of functions under our privacy definition.

## 1 Introduction

Differential privacy (DP) ([8]) is a type of privacy guarantee that has become quite popular in the computer science literature. The advantage of differential privacy is that it gives a strong and mathematically rigorous guarantee. The disadvantage is that the strong privacy guarantee often comes at the expense of the statistical utility of the released information. We propose a weaker notion of privacy, called “random differential privacy” (RDP), under which it is possible to achieve better accuracy.

The privacy guarantee provided by RDP represents a radical weakening of ordinary differential privacy. This could be a cause for concern for those who want very strong privacy guarantees. Indeed, we are not suggesting that RDP should replace ordinary differential privacy. However, as we shall show in this paper (and as has been observed many times in the past), differential privacy can lead to large information losses in some cases (see e.g., [9]). Thus, we feel there is great value in exploring weakened versions of differential privacy. In other words, we are proposing a new privacy definition as a way of exploring the privacy/accuracy tradeoff.

We begin by introducing ordinary differential privacy and setting up some notation. We then explore the lower limits on the accuracy of differentially private techniques in the context of histograms. We introduce a concept which parallels minimaxity in statistics, and identify the minimax risk for a differentially private histogram. We describe an important subset of these minimax differentially private histograms whose risk we show to be uniformly lower bounded at a rate linear in the dimension of the histogram. We then introduce our proposed relaxation of differential privacy, under which our technique enjoys the same minimax risk, but with a lower bound which depends only on the size of the support of the histogram (namely, the number of nonzero cells). Thus we show that in the context of sparse histograms, the relaxation allows for a strictly better data release. We also demonstrate some important properties of our relaxation, such as an analog of the composition lemma.

## 2 Differential Privacy (DP)

### 2.1 Definition

Let $X = (X_1, \dots, X_n)$ be an input database with $n$ observations, where $X_i \in \mathcal{X}$. The goal is to produce some output $Z$. For example, the inputs may consist of database rows in which each column is a measurement of an individual, and the output is the number of individuals having some property. Let $Q(\cdot \mid X)$ be a conditional distribution for $Z$ given $X$. Write $X \sim X'$ if $X, X' \in \mathcal{X}^n$ differ in one coordinate. We say that $X$ and $X'$ are neighboring databases.^{1}

We say $Q$ satisfies $\epsilon$-differential privacy if, for all measurable $A$ and all neighboring $X \sim X'$,

$$Q(A \mid X) \le e^{\epsilon}\, Q(A \mid X'). \quad (1)$$

The intuition is that, for small $\epsilon$, the value of one individual’s data has small effect on the output. We consider any DP algorithm to be a family of distributions over the output space, indexed by $n$ to show the size of the dataset.

It has been shown by researchers in privacy that differential privacy provides a very strong guarantee. Essentially it means that whether or not one particular individual is entered in the database has negligible effect on the output. The research in differential privacy is vast; a few key references are [8], [7], [2], [5], [3] and references therein.

### 2.2 Noninteractive Privacy and Histograms

Much research on differential privacy focuses on the case where $Z$ is a response to some query such as “what is the mean of the data?” A simple way to achieve differential privacy in that case is to add noise drawn from a Laplace distribution to the mean of the data. The user may send a sequence of such queries; this is called interactive privacy. We instead focus on noninteractive privacy, where the goal is to output a whole database (or a “synthetic dataset”) $Z = (Z_1, \dots, Z_m)$. Then the user is not restricted to a small number of queries.

One way to release a private database is to first release a privatized histogram $\hat f$. We can then draw an arbitrarily large sample $Z = (Z_1, \dots, Z_m)$ from $\hat f$. It is easy to show that if the histogram $\hat f$ satisfies DP then $Z$ also satisfies DP. Hence, in the rest of the paper, we focus on constructing a private histogram.
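This sampling step is plain post-processing of the released histogram. The following is a minimal sketch of it, assuming one-dimensional data, equally structured bin edges, and our own helper name `synthetic_sample`:

```python
import numpy as np

def synthetic_sample(hist, bin_edges, m, rng=None):
    """Draw m synthetic points from a released histogram: pick a bin
    according to its released probability, then a uniform point within
    that bin. Sampling is post-processing, so it preserves whatever
    privacy guarantee the histogram itself satisfies."""
    rng = np.random.default_rng() if rng is None else rng
    hist = np.asarray(hist, dtype=float)
    bin_edges = np.asarray(bin_edges, dtype=float)
    # Choose a bin index for each synthetic point.
    bins = rng.choice(len(hist), size=m, p=hist / hist.sum())
    # Place each point uniformly within its chosen bin.
    return rng.uniform(bin_edges[bins], bin_edges[bins + 1])
```

Because the synthetic sample depends on the data only through the released histogram, it may be made as large as desired without further privacy cost.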

We consider privatization mechanisms which are permutation invariant with respect to their inputs (i.e., those distributions which treat the input values as a set rather than a vector). In the context of histograms this appears to be a very mild restriction.

We partition the sample space into $k$ cells (or bins) $B_1, \dots, B_k$.^{2}

Now we give a concrete example of a $Q$ which achieves differential privacy. Let $h_j$ be the proportion of the data falling in bin $B_j$, and define $\hat h_j = h_j + \frac{2}{n\epsilon} L_j$, where the $L_j$ are independent draws from a Laplace distribution with mean zero and rate one. Then the $\hat h_j$ satisfy $\epsilon$-DP (see e.g., [8]). However, the $\hat h_j$ themselves do not represent a histogram, because they can be negative and they do not necessarily sum to one. Hence we may take, for example:

$$\hat f = \operatorname*{arg\,min}_{g \in \mathcal{S}} \|g - \hat h\|_1, \quad (2)$$

where $\mathcal{S}$ is the probability simplex and we use the $\ell_1$ norm $\|x\|_1 = \sum_j |x_j|$. This procedure hence results in a valid histogram. Note that $\hat f$ satisfies differential privacy, since each subset of values it may take corresponds to a measurable subset of the values of $\hat h$: since differential privacy held for the real vector $\hat h$, it also holds for the projection (see e.g., [16]). We will refer to this as the histogram perturbation method (see e.g., [16]). There are other methods for generating differentially private histograms, and our results below hold over a large subset of all the possible techniques available (to be made precise after Proposition 3.2). Hence our results apply to more than the above concrete scheme.
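The histogram perturbation method can be sketched in a few lines. This is a sketch rather than the paper's exact procedure: the function name is our own, and for simplicity it uses clip-and-renormalize as a surrogate for the exact $\ell_1$ projection onto the simplex:

```python
import numpy as np

def perturbed_histogram(counts, epsilon, rng=None):
    """Histogram perturbation: add Laplace noise to every bin's
    proportion, then map the noisy vector back to a valid histogram.

    Clip-and-renormalize is used here as a simple surrogate for the
    exact L1 projection onto the probability simplex in (2)."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    h = counts / n                       # true bin proportions
    # The L1 sensitivity of the proportion vector is 2/n, so Laplace
    # noise with scale 2/(n * epsilon) gives epsilon-DP.
    noisy = h + rng.laplace(scale=2.0 / (n * epsilon), size=h.shape)
    noisy = np.clip(noisy, 0.0, None)    # remove negative "probabilities"
    if noisy.sum() == 0:                 # degenerate case: fall back to uniform
        return np.full_like(h, 1.0 / len(h))
    return noisy / noisy.sum()           # renormalize to sum to one
```

Note that every bin receives noise, including empty ones; this is exactly the cost that the sparse release of Section 5 avoids.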

## 3 Lower Bounds for Accuracy with Differential Privacy

To motivate the need for relaxed versions of differential privacy, we consider here the accuracy of differentially private histograms. We evaluate a differentially private procedure in terms of its “risk,” which is a natural measure of accuracy taken from statistics. We consider the $\ell_1$ loss function and the associated risk:

$$R(Q, h) = \mathbb{E}\,\|\hat f - h\|_1, \quad (3)$$

where $\hat f$ is the output of the differentially private algorithm, $h$ is the input histogram, and the expectation is taken under the distribution induced by the randomized algorithm. Typically this risk will be a non-constant function of the input $h$ and of the distribution $Q$. Therefore we consider the “minimax risk,” which is the smallest achievable worst-case risk, and gives a measure of the hardness of the problem which does not depend on a particular choice of procedure:

$$R_n = \inf_{Q} \sup_{h} \mathbb{E}\,\|\hat f - h\|_1, \quad (4)$$

where the infimum is over all $\epsilon$-differentially private $Q$.

We next describe the minimax risk of the best fully differentially private mechanism.

###### Proposition 3.1.

###### Proof.

The proof uses a standard method for deriving minimax lower bounds in statistical estimation. Consider a $k$-dimensional hypercube of histograms with side length $\delta$, and take $h, h'$ to be neighboring corners of this hypercube (namely two elements which differ in exactly one coordinate). By considering a sequence of points corresponding to neighboring inputs, we find the ratio of densities to have the upper bound $e^{n\delta\epsilon}$, since $n\delta$ elements of the input have to change to move from $h$ to $h'$, and the ratio at each step is bounded by $e^{\epsilon}$. Therefore the KL divergence between the conditional distributions at these corners is bounded accordingly, as is the “affinity” between the two distributions, and the lower bound follows from standard hypothesis-testing arguments (see e.g., [17]).

###### Remark 1.

The previous result demonstrates that the minimax risk of the differentially private histogram is of the order $k/(n\epsilon)$.

###### Remark 2.

Hardt and Talwar [10] have a similar result, although their setting is somewhat different. In particular, they do not restrict to the space of histograms based on $n$ observations.

The above result demonstrates that for every differentially private scheme, there is at least one input for which the risk grows at the order shown (in fact, at least one point in every hypercube of side length $\delta$). However, the prospect exists that at many other inputs the risk is much lower. We now demonstrate that this is not the case, by presenting a uniform lower bound for the risk among all minimax schemes. In the case $k = 2$ the output may be regarded as a single number $\hat f_1 \in [0,1]$, which gives the proportion of the data points in the first bin. Our result will show that the minimax differential privacy schemes are similar to “equalizer rules,” in the sense that the risk is of the same order for every input.

###### Proposition 3.2.

For any $Q$ which achieves the minimax rate, the risk is uniformly lower bounded at that same rate: there is a constant $c > 0$ such that, for every input $h$, $R(Q, h) \ge c\, k/(n\epsilon)$.

###### Proof.

Note that for any input $h$ and any $t > 0$, due to the uniform upper bound on the risk, Markov’s inequality bounds the probability that the output falls far from $h$. Therefore, due to the constraint of differential privacy, for any other input $h'$ the corresponding probability is within a factor $e^{n\delta\epsilon}$, since $n\delta$ elements of the input change to move from $h$ to $h'$. Taking $\delta$ to balance these two bounds gives the stated lower bound. As $h$ is arbitrary, this gives a uniform lower bound under the conditions above. ∎

For the relaxation of differential privacy given in definition 2.2 of [10], the above result remains intact for large enough $n$. The relaxation requires

$$Q(A \mid X) \le e^{\epsilon}\, Q(A \mid X') + \delta,$$

where $\delta$ is negligible (i.e., tending to zero faster than any inverse polynomial in $n$). Via the same technique as above, the probability bound acquires only an additive $\delta$ term, and for large enough $n$ the resulting quantity is bounded from below at the same order. This indicates that the above relaxation of differential privacy will not be useful in achieving higher accuracy.

For general $k$, we may decompose the risk coordinate-wise, where the subscript $j$ denotes the $j$-th coordinate. Thus, whenever the coordinate-wise risk upper bound holds uniformly over the inputs, a lower bound of order $k/(n\epsilon)$ follows. Therefore the only opportunity to improve upon the rate of $k/(n\epsilon)$ is when some inputs have some coordinate at which the risk upper bound does not apply.

We conclude by remarking that we have demonstrated that, for a certain class of differentially private algorithms which achieve the “minimax rate,” the risk is uniformly lower bounded at the same rate. The rate in question is linear in $k$, which is problematic when $k$ is large relative to $n$. It remains an open question whether there are different techniques which achieve the minimax rate yet do not have this property; such a technique would have to lose the uniform upper bound on the coordinate-wise risk. Below, we present a weakening of differential privacy which admits release mechanisms that both keep the uniform upper bound on the coordinate-wise risk and have a minimax risk which grows only in the support of the histogram (namely, the number of cells which contain observations).

## 4 Random Differential Privacy

In random differential privacy (RDP) we view the data as random draws from an unknown distribution $P$. This is certainly the case in statistical sampling, and of course it is the usual assumption in most learning theory. Let us denote the observed values of the random variables by $X = (X_1, \dots, X_n)$. Recall that under DP, $Q$ is not strongly affected if we replace some value $X_i$ with another value $x'$. We continue to restrict to the case in which $Q$ is invariant to permutations of its inputs. Thus we may restate DP by saying that $Q$ is not strongly affected if we replace $X_n$ by some other arbitrary value $x'$. In RDP, we require instead that the distribution is not strongly affected if we replace $X_n$ by some new $X_n'$ which is also randomly drawn from $P$.

###### Definition 1 ($(\epsilon,\gamma)$-Random Differential Privacy).

We say that a randomized algorithm $Q$ is $(\epsilon,\gamma)$-randomly differentially private when

$$P\Big( Q(A \mid X) \le e^{\epsilon}\, Q(A \mid X') \text{ for all measurable } A \Big) \ge 1 - \gamma,$$

where $X = (X_1, \dots, X_n)$ and $X' = (X_1, \dots, X_{n-1}, X_n')$ (i.e., $X'$ replaces the last observation with a fresh draw), and the probability is with respect to the $(n+1)$-fold product measure on the space of samples, that is, $X_1, \dots, X_n, X_n' \stackrel{\text{iid}}{\sim} P$.

We also give the “random” analog of $(\epsilon,\delta)$-differential privacy:

###### Definition 2 ($(\epsilon,\delta,\gamma)$-Random Differential Privacy).

We say that a randomized algorithm $Q$ is $(\epsilon,\delta,\gamma)$-randomly differentially private when

$$P\Big( Q(A \mid X) \le e^{\epsilon}\, Q(A \mid X') + \delta \text{ for all measurable } A \Big) \ge 1 - \gamma,$$

where $\delta$ is negligible (i.e., decreasing faster than any inverse polynomial in $n$).

We note that [12] also consider a probabilistic relaxation of DP. However, their relaxation is quite different from the one considered here. Namely, their relaxation bounds the probability that the differential privacy criterion is not met, but where the probability is taken with respect to the randomized algorithm itself. Our relaxation takes the probability with respect to the generation of the data itself. The following result is clear from the definition of random differential privacy.

###### Proposition 4.1.

$(\epsilon,\gamma)$-RDP is a strict relaxation of $\epsilon$-DP. That is, if $Q$ is $\epsilon$-DP then it is also $(\epsilon,\gamma)$-RDP for any $\gamma \ge 0$. However, there are RDP procedures that are not DP.

###### Remark 3.

Although an $\epsilon$-DP procedure fulfils the requirement of $(\epsilon,\gamma)$-RDP, the converse is not true. The reason is that the latter requires only that the condition (that the ratio of densities be bounded) holds except on a set of small probability under the unknown measure, whereas DP requires that this condition holds uniformly everywhere in the space.

We next show an important property of the definition, namely, that RDP algorithms may be composed to give other RDP algorithms with different constants. The analogous composition property for DP was considered to be important because it allowed rapid development of techniques which release multiple statistics, as well as techniques which allow interactive access to the data.

###### Proposition 4.2 (Composition).

Suppose $Q_1, Q_2$ are conditional distributions which are $(\epsilon_1,\gamma_1)$-RDP and $(\epsilon_2,\gamma_2)$-RDP respectively. Then the distribution over pairs of outputs given by

$$Q(A_1 \times A_2 \mid X) = Q_1(A_1 \mid X)\, Q_2(A_2 \mid X)$$

is $(\epsilon_1 + \epsilon_2,\ \gamma_1 + \gamma_2)$-RDP.

This result is simply an application of the union bound combined with the standard composition property of differential privacy. As an example, suppose it is required to release $m$ different statistics of some data sample. If each one is released via an $(\epsilon/m, \gamma/m)$-RDP procedure, then the overall release of all $m$ statistics together achieves $(\epsilon,\gamma)$-RDP. A similar result holds for the composition of $(\epsilon,\delta,\gamma)$-RDP releases.
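The bookkeeping implied by this composition rule is simple enough to state in code; this is a minimal sketch, with the function name `compose_rdp` and the pair representation being our own conventions:

```python
def compose_rdp(params):
    """Union-bound composition for RDP releases: a sequence of
    (epsilon_i, gamma_i)-RDP releases is jointly
    (sum of epsilons, sum of gammas)-RDP."""
    eps = sum(e for e, _ in params)
    gamma = sum(g for _, g in params)
    return eps, gamma
```

For example, releasing ten statistics, each via a $(0.1, 0.001)$-RDP procedure, yields a combined $(1.0, 0.01)$-RDP guarantee for the joint release.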

## 5 RDP Sparse Histograms

We first give a technique for the release of a histogram which works well in the case of a sparse histogram, and which satisfies $(\epsilon,\gamma)$-random differential privacy. We then compare the accuracy of this method to a lower bound on the accuracy of an $\epsilon$-differentially private approach.

The basic idea is to not add any noise to cells with low counts. This amounts to partitioning the cells into two blocks, releasing a noise-free histogram on one block, and using a differentially private histogram on the other. The partition depends on the data itself: for a sample $X$, denote by $\hat A$ the block of cells with low counts. Then we consider the release mechanism:

$$\hat f_j = \begin{cases} h_j, & B_j \in \hat A, \\ h_j + \frac{2}{n\epsilon} L_j, & \text{otherwise}, \end{cases} \quad (5)$$

###### Proposition 5.1.

The random vector $\hat f$ as defined in (5) satisfies $(\epsilon,\gamma)$-RDP.

In demonstrating RDP, we take the sample $X = (X_1, \dots, X_n)$ and consider the neighboring sample $X'$ in which $X_n$ is replaced by a fresh draw $X_n'$. We consider the output distribution of our method when applied to each of the neighboring samples. The event that the ratio of densities fails to meet the requisite bound is a subset of the event that $X_n$ or $X_n'$ falls in or near the low-count block, so that the partitions induced by the two samples differ. On the complement of this event the partitions are the same, and the differing samples both fall within the block which receives the Laplace noise, so the DP condition is achieved. In demonstrating the RDP, we simply bound the probability of the aforementioned event, conditional on the order statistics.

###### Proof of proposition 5.1.

In the interest of space let the vector of order statistics be denoted . Let . We have that . We thus have

The latter probability is just the fraction of ways in which the order statistics may be rearranged so that the relevant points fall within the boundary region. Due to the stated condition, the number of rearrangements having at least one of the two differing points in this region is bounded above by:

Therefore

Finally:

∎
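The mechanism (5) can be sketched as follows. This is an illustrative sketch, not the paper's exact rule: the count threshold defining the low-count block is a placeholder choice, whereas the text derives the data-dependent partition and the failure probability $\gamma$:

```python
import numpy as np

def sparse_rdp_histogram(counts, epsilon, threshold=2, rng=None):
    """Sketch of the sparse release (5): bins whose counts are at or
    below a small threshold are released noise-free, while the remaining
    bins receive Laplace noise as in the perturbation method (2).

    `threshold` is an illustrative stand-in for the data-dependent
    partition derived in the text."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    h = counts / n
    out = h.copy()                        # low-count bins pass through noise-free
    dense = counts > threshold
    out[dense] = h[dense] + rng.laplace(scale=2.0 / (n * epsilon),
                                        size=int(dense.sum()))
    out = np.clip(out, 0.0, None)         # clean up, as for (2)
    total = out.sum()
    return out / total if total > 0 else out
```

The payoff is visible in the sparse regime: empty bins are released as exact zeros, so the noise cost scales with the number of occupied cells rather than with $k$.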

### 5.1 Accuracy

Here we show that the released histogram $\hat f$ is close to the true histogram $h$ even when the histogram is sparse.

###### Theorem 5.2.

Suppose that . Let for some . Then .

###### Proof.

Let . Let be the event that for all . Then holds, except on a set of exponentially small probability. Suppose holds. Let . For , For , . Hence . Furthermore Hence via the triangle inequality we have, . ∎

We thus have a technique for which the risk is uniformly bounded above at the minimax rate, as with the DP technique, and which also enjoys the coordinate-wise upper bound on the risk. However, the risk is no longer uniformly lower bounded at a rate linear in $k$, since in the case of sparse vectors the upper bound is linear only in the size of the support.

## 6 RDP via Sensitivity Analysis

We next demonstrate that RDP admits schemes for the release of other kinds of statistics (besides histograms). A common technique used to establish a differentially private method is to add Laplace noise with scale proportional to the “global sensitivity” of the function [6]. We show that there is an analog of this technique for RDP, giving a method for the RDP release of an arbitrary function $g$ of the data.

We consider the algorithm which, given the data $X$, samples $Z$ from the distribution with density

$$q(z \mid X) \propto \exp\!\left( -\frac{\epsilon\, |z - g(X)|}{s(X)} \right), \quad (6)$$

i.e., $g(X)$ plus Laplace noise with scale $s(X)/\epsilon$.

It is well known that when $s$ is the constant function equal to an upper bound on the global sensitivity [6] of $g$, this method enjoys $\epsilon$-DP. As we allow $s$ to depend on the data, we may make use of the local sensitivity framework of [14]. There it is demonstrated that whenever

$$s(X) \ge \max_{X' \sim X} |g(X) - g(X')| \quad (7)$$

and

$$\big| \ln s(X) - \ln s(X') \big| \le \lambda \quad \text{for all } X' \sim X, \quad (8)$$

then (6) gives $(\epsilon,\delta)$-DP with

$$\delta = \delta(\epsilon, \lambda) \quad (9)$$

as specified in [14] (see definition 2.1, lemma 2.5 and example 3 there). In moving from DP to RDP we may now require that conditions (7) and (8) hold only with the requisite probability $1 - \gamma$. Then (6) will achieve $(\epsilon,\delta,\gamma)$-RDP.
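The sampling step in (6) itself is just the Laplace mechanism with a data-dependent scale. A minimal sketch, assuming a caller-supplied sensitivity bound playing the role of $s(X)$ (function name our own):

```python
import numpy as np

def laplace_release(g_value, sens_bound, epsilon, rng=None):
    """Sample Z from (6): g(X) plus Laplace noise with scale
    sens_bound / epsilon. With a constant global-sensitivity bound this
    is the standard epsilon-DP Laplace mechanism; with a data-dependent
    bound satisfying the probabilistic conditions (10)-(11) it yields
    the RDP guarantee discussed in the text."""
    rng = np.random.default_rng() if rng is None else rng
    return g_value + (sens_bound / epsilon) * rng.laplace()
```

The remainder of this section concerns how to construct a valid data-dependent `sens_bound` from quantiles of the sensitivity distribution.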

We consider a special subset of functions $g$ whose range is bounded. Examples of functions satisfying this property are, e.g., statistical point estimators [15] and regularized logistic regression estimates [4]. In particular, in these cases it is assumed that the range of $g$ is some compact subset of $\mathbb{R}^d$, and then, e.g., the diameter of this set gives an upper bound on the global sensitivity.

$$P\Big( s(X) \ge |g(X) - g(X')| \Big) \ge 1 - \gamma \quad (10)$$

and

$$P\Big( \big| \ln s(X) - \ln s(X') \big| \le \lambda \Big) \ge 1 - \gamma. \quad (11)$$

Note that $X_n'$ (and hence $X'$) involves random draws from $P$ which are independent of the original sample. The first condition simply requires (7) to hold except on a set of measure $\gamma$. The second condition implies that $s(X)$ and $s(X')$ both give upper bounds to the local sensitivity, except on a set of measure $\gamma$. Putting these together along with the above considerations yields an $(\epsilon,\delta,\gamma)$-RDP method. We note that we are essentially asking that $s(X)$ and $s(X')$ both give valid quantiles for the random variable $|g(X) - g(X')|$, and that they give similar values with high probability.

We consider the empirical process based on $g$ and the data sample: an empirical CDF, which we denote $\hat F_n$, for the distribution of the local sensitivity statistic, based on independent samples. We may anticipate that sample quantiles of this empirical CDF will be close to the quantiles of the true CDF, which we denote by $F$. This is made precise by the DKW inequality (see e.g., [13]), which in this case yields:

$$P\Big( \sup_t \big| \hat F_n(t) - F(t) \big| > \varepsilon \Big) \le 2\, e^{-2 n \varepsilon^2}. \quad (12)$$
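The DKW inequality (12) translates directly into a conservative sample quantile. This is a sketch with our own function names; the quantile level is inflated by the DKW band half-width so that the empirical quantile upper-bounds the true one with probability at least $1 - \delta$:

```python
import math

def dkw_band(n, delta):
    """Half-width of the DKW confidence band: with probability at least
    1 - delta, sup_t |F_hat(t) - F(t)| <= sqrt(ln(2/delta) / (2n))."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def conservative_quantile(samples, q, delta):
    """Empirical quantile at a level inflated by the DKW band, so that
    it upper-bounds the true q-quantile with probability >= 1 - delta."""
    xs = sorted(samples)
    n = len(xs)
    q_adj = min(q + dkw_band(n, delta), 1.0)
    idx = min(int(math.ceil(q_adj * n)) - 1, n - 1)
    return xs[max(idx, 0)]
```

Applied to samples of the local sensitivity statistic, `conservative_quantile` plays the role of $s(X)$ in conditions (10) and (11).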

Thus, taking $s(X)$ to be an empirical quantile of $\hat F_n$ at a level inflated by the DKW band width, we obtain a quantity which upper bounds the corresponding true quantile except with small probability. The argument applies the monotone function $F$ to both sides of the inequality inside the probability and rearranges, yielding a deviation of $\hat F_n$ from $F$ which is bounded due to the DKW inequality (12). Thus for some appropriate choice of the quantile level we achieve (11).

Now to achieve (10) we turn to the Bahadur–Kiefer representation of sample quantiles (see [11]), which expresses a sample quantile as the corresponding true quantile plus an empirical-process term of smaller order, scaled by the reciprocal of $f$, the derivative of $F$ (namely the density). Hence we concentrate on the case when the local sensitivity statistic is a continuous random variable. Using DKW to bound the empirical-process terms, along with the triangle inequality, we find the ratio of the two sample quantiles to be bounded in probability. This means that for large enough $n$, and with the stated probability, the ratio $s(X)/s(X')$ is bounded by a quantity approaching one at a polynomial rate in $n$. Examining the resulting $\delta$ in (9), we find it to be negligible for such a choice of $s$. Therefore this choice of $s$ achieves the RDP as required.

We note that in principle this same approach would work were we to replace the empirical CDF with the corresponding U-statistic process over pairs of sample points. Though this is essentially another empirical CDF, it is based on non-independent samples, since each $X_i$ participates in $n - 1$ of the evaluations of $g$. Nevertheless an analog of the DKW inequality still applies to this process, and we still have the same behavior of the quantiles (see e.g., [1]).

## 7 Privacy Concerns

As stated above, we mainly use random differential privacy as a vehicle for a theoretical exploration of the boundaries of differential privacy. Although it is a conceptually reasonable weakening of differential privacy, whether it is appropriate to use in practice requires more attention. For example, if the hypothesized adversary (of e.g., [16] theorem 2.4) really had access to all but one element of the data, and the one remaining element was the only inhabitant of its histogram cell, then this would be immediately revealed to the adversary. Whether this is a critical problem depends on the application.

## 8 Example

We present two examples in which the RDP and DP techniques are compared on synthetic histogram data. In the first example the histogram has many bins, all but two of which are empty, and the data points fall into the other two. Figure 1(a) shows the original data as well as the sanitized data due to differential privacy and RDP. Figure 1(b) shows the distribution of $\ell_1$ loss from 100 simulations of both approaches. We see that the risk of the RDP histogram is typically much lower than that of the DP histogram, which occasionally has risk in excess of 0.5 (recall that the maximum possible loss is 2, in the case that the original and sanitized histograms have completely disjoint support).

We present an analogous two-dimensional example in figure 2. Here the histogram has many bins, all but 16 of which are empty. In this example we see that the RDP technique has uniformly better loss than the DP technique.
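A comparison in the spirit of these examples can be reproduced with a small simulation. This is a sketch under illustrative assumptions: the bin count, sample size, threshold, and $\epsilon$ are arbitrary choices of ours, and clip-and-renormalize stands in for the exact projection in both mechanisms:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_hist(counts, eps):
    """Perturbed histogram (2): Laplace noise on every bin's proportion."""
    n = counts.sum()
    h = counts / n + rng.laplace(scale=2.0 / (n * eps), size=counts.size)
    h = np.clip(h, 0.0, None)
    return h / h.sum() if h.sum() > 0 else h

def rdp_hist(counts, eps, threshold=2):
    """Sparse variant in the spirit of (5): low-count bins pass
    through noise-free, so empty bins are released as exact zeros."""
    n = counts.sum()
    h = counts / n
    out = h.copy()
    dense = counts > threshold
    out[dense] = h[dense] + rng.laplace(scale=2.0 / (n * eps),
                                        size=int(dense.sum()))
    out = np.clip(out, 0.0, None)
    return out / out.sum() if out.sum() > 0 else out

# Sparse truth: k bins, only two occupied, n points total.
k, n, eps = 200, 500, 1.0
counts = np.zeros(k)
counts[[3, 7]] = [300.0, 200.0]
true = counts / n

dp_loss = np.mean([np.abs(dp_hist(counts, eps) - true).sum()
                   for _ in range(100)])
rdp_loss = np.mean([np.abs(rdp_hist(counts, eps) - true).sum()
                    for _ in range(100)])
print(f"mean L1 loss  DP: {dp_loss:.3f}   RDP: {rdp_loss:.3f}")
```

On sparse inputs like this one, the DP mechanism pays a noise cost in every one of the $k$ bins, while the sparse release pays only in the occupied ones, so its mean loss comes out far smaller.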

## 9 Conclusion

We have introduced a relaxed version of differential privacy, called random differential privacy, shown how to apply it to histograms, and examined the accuracy of the resulting method. We also demonstrated some properties of our definition, and explained a basic construction for the release of arbitrary functions of the data. As we mentioned in the introduction, we are not suggesting that differential privacy should be abandoned and replaced by random differential privacy. However, we do think it is fruitful to consider various relaxations of differential privacy to gain a deeper understanding of the tradeoffs between the strength of the privacy guarantee and the accuracy of the data release mechanism.

In ongoing work we are extending this work to allow for data dependent choices of the number of bins and to allow for other density estimators besides histograms. We are also considering other relaxations of differential privacy. We will report on these results in future work.


### Footnotes

- In some papers, the definition is changed so that one sample is a strict subset of the other, having exactly one less element. Although this definition is perhaps slightly stronger, we do not use it, and remark that the approaches we present below may all be fit into this framework if so desired.
- In this paper, the number of cells $k$ is taken as a given integer. The problem of choosing an optimal $k$ in a private manner is the subject of future work.

### References

- Miguel A. Arcones. The Bahadur–Kiefer representation for U-quantiles. The Annals of Statistics, 24(3):1400–1422, 1996.
- B. Barak, K. Chaudhuri, C. Dwork, S. Kale, F. McSherry, and K. Talwar. Privacy, accuracy, and consistency too: a holistic solution to contingency table release. Proceedings of the twenty-sixth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, pages 273–282, 2007.
- A. Blum, C. Dwork, F. McSherry, and K. Nissim. Practical privacy: the SuLQ framework. Proceedings of the twenty-fourth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, pages 128–138, 2005.
- Kamalika Chaudhuri and Claire Monteleoni. Privacy preserving logistic regression. NIPS 2008, 2008.
- C. Dwork and J. Lei. Differential privacy and robust statistics. Proceedings of the 41st ACM Symposium on Theory of Computing, pages 371–380, May–June 2009.
- C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. Proceedings of the 3rd Theory of Cryptography Conference, pages 265–284, 2006.
- C. Dwork, F. McSherry, and K. Talwar. The price of privacy and the limits of LP decoding. In Proceedings of Symposium on the Theory of Computing, 2007.
- Cynthia Dwork. Differential privacy. 33rd International Colloquium on Automata, Languages and Programming, pages 1–12, 2006.
- Stephen E. Fienberg, Alessandro Rinaldo, and Xiaolin Yang. Differential privacy and the risk-utility tradeoff for multi-dimensional contingency tables. Privacy in Statistical Databases, pages 197 – 199, 2010.
- Moritz Hardt and Kunal Talwar. On the geometry of differential privacy. STOC ’10 Proceedings of the 42nd ACM symposium on Theory of computing, pages 705–714, 2010.
- J. Kiefer. On Bahadur’s representation of sample quantiles. The Annals of Mathematical Statistics, 38(5):1323–1342, 1967.
- A. Machanavajjhala, D. Kifer, J. Abowd, J. Gehrke, and L. Vilhuber. Privacy: Theory meets Practice on the Map. Proceedings of the 24th International Conference on Data Engineering, pages 277–286, 2008.
- P. Massart. The Tight Constant in the Dvoretzky-Kiefer-Wolfowitz Inequality. The Annals of Probability, 18(3), 1990.
- K. Nissim, S. Raskhodnikova, and A. Smith. Smooth sensitivity and sampling in private data analysis. Proceedings of the 39th annual ACM annual ACM symposium on Theory of computing, pages 75–84, 2007.
- Adam Smith. Efficient, differentially private point estimators. arXiv:0809.4794, 2008.
- Larry Wasserman and Shuheng Zhou. A statistical framework for differential privacy. The Journal of the American Statistical Association, 105:375–389, 2010.
- Bin Yu. Assouad, Fano, and Le Cam. In D. Pollard, E. Torgersen, and G. Yang, editors, Festschrift for Lucien Le Cam, pages 423–435. Springer, 1997.