Concentrated Differential Privacy:
Simplifications, Extensions, and Lower Bounds
“Concentrated differential privacy” was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Rényi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of “approximate concentrated differential privacy.”
- 1.1 Our Reformulation: Zero-Concentrated Differential Privacy
- 1.2 Results
- 1.3 Related Work
- 1.4 Further Work
- 2 Rényi Divergence
- 3 Relation to Differential Privacy
- 4 Zero- versus Mean-Concentrated Differential Privacy
- 5 Group Privacy
- 6 Lower Bounds
- 7 Obtaining Pure DP Mechanisms from zCDP
- 8 Approximate zCDP
- A Postprocessing and mCDP
- B Miscellaneous Proofs and Lemmata
- C Privacy versus Sampling
Differential privacy [DMNS06] is a formal mathematical standard for protecting individual-level privacy in statistical data analysis. In its simplest form, (pure) differential privacy is parameterized by a real number $\varepsilon > 0$, which controls how much "privacy loss" an individual can suffer when a computation (i.e., a statistical data analysis task) is performed involving his or her data. (The privacy loss is a random variable quantifying how much information is revealed about an individual by a computation involving their data; it depends on the outcome of the computation, the way the computation was performed, and the information that the individual wants to hide. We discuss it informally in this introduction and define it precisely in Definition 1.2.)
One particular hallmark of differential privacy is that it degrades smoothly and predictably under the composition of multiple computations. In particular, if one performs $k$ computational tasks that are each $\varepsilon$-differentially private and combines the results of those tasks, then the computation as a whole is $k\varepsilon$-differentially private. This property makes differential privacy amenable to the type of modular reasoning used in the design and analysis of algorithms: when a sophisticated algorithm is composed of a sequence of differentially private steps, one can establish that the algorithm as a whole remains differentially private.
A widely-used relaxation of pure differential privacy is approximate or $(\varepsilon,\delta)$-differential privacy [DKM06], which essentially guarantees that the probability that any individual suffers privacy loss exceeding $\varepsilon$ is bounded by $\delta$. For sufficiently small $\delta$, approximate $(\varepsilon,\delta)$-differential privacy provides a comparable standard of privacy protection as pure $\varepsilon$-differential privacy, while often permitting substantially more useful analyses to be performed.
Unfortunately, there are situations where, unlike pure differential privacy, approximate differential privacy is not a very elegant abstraction for mathematical analysis, particularly the analysis of composition. The "advanced composition theorem" of Dwork, Rothblum, and Vadhan [DRV10] (subsequently improved by [KOV15, MV16]) shows that the composition of $k$ tasks which are each $(\varepsilon,\delta)$-differentially private is $\big({\approx}\sqrt{k}\varepsilon,\ {\approx}k\delta\big)$-differentially private. However, these bounds can be unwieldy; computing the tightest possible privacy guarantee for the composition of arbitrary mechanisms with differing $(\varepsilon_i,\delta_i)$-differential privacy guarantees is $\#\mathsf{P}$-hard [MV16]! Furthermore, these bounds are not tight even for simple and natural privacy-preserving computations. For instance, consider the mechanism which approximately answers $k$ statistical queries on a given database by adding independent Gaussian noise to each answer. Even for this basic computation, the advanced composition theorem does not yield a tight analysis. (In particular, consider answering $k$ statistical queries by adding noise drawn from $\mathcal{N}(0,\sigma^2)$ independently to each answer. Each individual answer satisfies $(\varepsilon,\delta)$-differential privacy for any $\delta > 0$ with $\varepsilon$ proportional to $\sqrt{\log(1/\delta)}/\sigma$. Applying the advanced composition theorem shows that the composition of all $k$ answers satisfies roughly $\big(\sqrt{k\log(1/\delta')}\,\varepsilon,\ k\delta+\delta'\big)$-differential privacy for any $\delta' > 0$. However, a direct analysis of the composed Gaussian noise improves this bound by a $\sqrt{\log(1/\delta)}$ factor.)
Dwork and Rothblum [DR16] recently put forth a different relaxation of differential privacy called concentrated differential privacy. Roughly, a randomized mechanism satisfies concentrated differential privacy if the privacy loss has small mean and is subgaussian. Concentrated differential privacy behaves in a qualitatively similar way as approximate $(\varepsilon,\delta)$-differential privacy under composition. However, it permits sharper analyses of basic computational tasks, including a tight analysis of the aforementioned Gaussian mechanism.
Using the work of Dwork and Rothblum [DR16] as a starting point, we introduce an alternative formulation of the concept of concentrated differential privacy that we call “zero-concentrated differential privacy” (zCDP for short). To distinguish our definition from that of Dwork and Rothblum, we refer to their definition as “mean-concentrated differential privacy” (mCDP for short). Our definition uses the Rényi divergence between probability distributions as a different method of capturing the requirement that the privacy loss random variable is subgaussian.
1.1 Our Reformulation: Zero-Concentrated Differential Privacy
As is typical in the literature, we model a dataset as a multiset or tuple of $n$ elements (or "rows") in $\mathcal{X}^n$, for some "data universe" $\mathcal{X}$, where each element represents one individual's information. A (privacy-preserving) computation is a randomized algorithm $M : \mathcal{X}^n \to \mathcal{Y}$, where $\mathcal{Y}$ represents the space of all possible outcomes of the computation.
Definition 1.1 (Zero-Concentrated Differential Privacy (zCDP)).
A randomized mechanism $M : \mathcal{X}^n \to \mathcal{Y}$ is $(\xi,\rho)$-zero-concentrated differentially private (henceforth $(\xi,\rho)$-zCDP) if, for all $x, x' \in \mathcal{X}^n$ differing on a single entry and all $\alpha \in (1,\infty)$,

$$D_\alpha\big(M(x)\,\big\|\,M(x')\big) \le \xi + \rho\alpha, \tag{1}$$

where $D_\alpha\big(M(x)\,\big\|\,M(x')\big)$ is the $\alpha$-Rényi divergence between the distribution of $M(x)$ and the distribution of $M(x')$. (Rényi divergence has a parameter $\alpha$ which allows it to interpolate between KL-divergence ($\alpha = 1$) and max-divergence ($\alpha = \infty$). It should be thought of as a measure of dissimilarity between distributions; we define it formally in Section 2. Throughout, we assume that all logarithms are natural unless specified otherwise, that is, base $e \approx 2.718$. This includes logarithms in information-theoretic quantities like entropy, divergence, and mutual information, whence these quantities are measured in nats rather than in bits.)
We define $\rho$-zCDP to be $(0,\rho)$-zCDP. (For clarity of exposition, we consider only $\rho$-zCDP in the introduction and give more general statements for $(\xi,\rho)$-zCDP later. We also believe that having a one-parameter definition is desirable.)
Equivalently, we can replace (1) with the requirement that, for all $\alpha \in (1,\infty)$,

$$\mathbb{E}\left[e^{(\alpha-1)Z}\right] \le e^{(\alpha-1)(\xi + \rho\alpha)}, \tag{2}$$

where $Z = \mathrm{PrivLoss}\big(M(x)\,\big\|\,M(x')\big)$ is the privacy loss random variable:
Definition 1.2 (Privacy Loss Random Variable).
Let $Y$ and $Y'$ be random variables on $\Omega$. We define the privacy loss random variable between $Y$ and $Y'$ — denoted $Z = \mathrm{PrivLoss}(Y \| Y')$ — as follows. Define a function $f : \Omega \to \mathbb{R}$ by $f(y) = \log\big(\mathbb{P}[Y = y] / \mathbb{P}[Y' = y]\big)$. (Throughout we abuse notation by letting $\mathbb{P}[Y = y]$ represent either the probability mass function or the probability density function of $Y$ evaluated at $y$. Formally, the ratio denotes the Radon-Nikodym derivative of the distribution of $Y$ with respect to the distribution of $Y'$ evaluated at $y$, where we require the former to be absolutely continuous with respect to the latter.) Then $Z$ is distributed according to $f(Y)$.
Intuitively, the value of the privacy loss represents how well we can distinguish $x$ from $x'$ given only the output $M(x)$ or $M(x')$. If $Z > 0$, then the observed output of $M$ is more likely to have occurred if the input was $x$ than if $x'$ was the input. Moreover, the larger $Z$ is, the bigger this likelihood ratio is. Likewise, $Z < 0$ indicates that the output is more likely if $x'$ is the input. If $Z = 0$, both $x$ and $x'$ "explain" the output of $M$ equally well.
A mechanism $M$ is $\varepsilon$-differentially private if and only if $\mathbb{P}[Z > \varepsilon] = 0$, where $Z = \mathrm{PrivLoss}\big(M(x)\,\big\|\,M(x')\big)$ is the privacy loss of $M$ on arbitrary inputs $x, x'$ differing in one entry. On the other hand, $M$ being $(\varepsilon,\delta)$-differentially private is equivalent, up to a small loss in parameters, to the requirement that $\mathbb{P}[Z > \varepsilon] \le \delta$.
In contrast, zCDP entails a bound on the moment generating function of the privacy loss $Z$ — that is, on $\mathbb{E}\big[e^{(\alpha-1)Z}\big]$ as a function of $\alpha$. The bound (2) implies that $Z$ is a subgaussian random variable with small mean. (A random variable $X$ being subgaussian is characterised by several equivalent conditions [Riv12]; for instance, its tails satisfy $\mathbb{P}[|X| > t] \le 2e^{-ct^2}$ for some $c > 0$ and all $t > 0$, or its centered moment generating function satisfies $\mathbb{E}\big[e^{\lambda(X-\mathbb{E}[X])}\big] \le e^{s\lambda^2}$ for some $s > 0$ and all $\lambda \in \mathbb{R}$.) Intuitively, this means that $Z$ resembles a Gaussian distribution with mean $\xi + \rho$ and variance $2\rho$. In particular, we obtain strong tail bounds on $Z$: namely, (2) implies that

$$\mathbb{P}[Z > \lambda + \xi + \rho] \le e^{-\lambda^2/(4\rho)}$$

for all $\lambda > 0$. (We only discuss bounds on the upper tail of $Z$; similar bounds on the lower tail follow by considering $\mathrm{PrivLoss}\big(M(x')\,\big\|\,M(x)\big)$.)
Thus zCDP requires that the privacy loss random variable is concentrated around zero (hence the name). That is, $Z$ is "small" with high probability, with larger deviations from zero becoming increasingly unlikely. Hence we are unlikely to be able to distinguish $x$ from $x'$ given the output of $M(x)$ or $M(x')$. Note that the randomness of the privacy loss random variable is taken only over the randomness of the mechanism $M$.
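As a sanity check on this tail bound (a sketch, not part of the original analysis): for the Gaussian mechanism the privacy loss is exactly normally distributed with mean $\rho$ and variance $2\rho$, so its tail can be computed in closed form via the complementary error function and compared against the subgaussian bound.

```python
import math

def zcdp_tail_bound(rho: float, lam: float) -> float:
    """Tail bound implied by rho-zCDP: P[Z > rho + lam] <= exp(-lam^2 / (4 rho))."""
    return math.exp(-lam ** 2 / (4.0 * rho))

def gaussian_mech_tail(rho: float, lam: float) -> float:
    """Exact tail P[Z > rho + lam] for the Gaussian mechanism, whose privacy
    loss is distributed as N(rho, 2*rho).  P[N(0,1) > z] = erfc(z/sqrt(2))/2,
    here with z = lam / sqrt(2*rho)."""
    return 0.5 * math.erfc(lam / (2.0 * math.sqrt(rho)))

# The exact Gaussian tail always sits below the subgaussian bound.
for rho in (0.1, 0.5, 1.0):
    for lam in (0.5, 1.0, 2.0):
        assert gaussian_mech_tail(rho, lam) <= zcdp_tail_bound(rho, lam)
```

The inequality used here is the standard estimate $\mathrm{erfc}(x) \le e^{-x^2}$ for $x \ge 0$.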
1.1.1 Comparison to the Definition of Dwork and Rothblum
For comparison, Dwork and Rothblum [DR16] define $(\mu,\tau)$-concentrated differential privacy for a randomized mechanism $M$ as the requirement that, if $Z$ is the privacy loss for $x, x' \in \mathcal{X}^n$ differing on one entry, then $\mathbb{E}[Z] \le \mu$ and

$$\mathbb{E}\left[e^{\lambda(Z - \mathbb{E}[Z])}\right] \le e^{\lambda^2\tau^2/2}$$

for all $\lambda \in \mathbb{R}$. That is, they require both a bound on the mean of the privacy loss and that the privacy loss is tightly concentrated around its mean. To distinguish our definitions, we refer to their definition as mean-concentrated differential privacy (or mCDP).
Our definition, zCDP, is a relaxation of mCDP. In particular, a $(\mu,\tau)$-mCDP mechanism is also $\left(\mu - \frac{\tau^2}{2},\ \frac{\tau^2}{2}\right)$-zCDP (which is tight for the Gaussian mechanism example), whereas the converse is not true. (However, a partial converse holds; see Lemma 4.3.)
1.2.1 Relationship between zCDP and Differential Privacy
Like Dwork and Rothblum's formulation of concentrated differential privacy, zCDP can be thought of as providing guarantees of $(\varepsilon,\delta)$-differential privacy for all values of $\delta > 0$:
Proposition 1.3. If $M$ provides $\rho$-zCDP, then $M$ is $\big(\rho + 2\sqrt{\rho\log(1/\delta)},\ \delta\big)$-differentially private for any $\delta > 0$.
We also prove a slight strengthening of this result (Lemma 3.6). Moreover, there is a partial converse, which shows that, up to a loss in parameters, zCDP is equivalent to differential privacy with this quantification (see Lemma 3.7).
There is also a direct link from pure differential privacy to zCDP:
Proposition 1.4. If $M$ satisfies $\varepsilon$-differential privacy, then $M$ satisfies $\left(\frac{1}{2}\varepsilon^2\right)$-zCDP.
Dwork and Rothblum [DR16, Theorem 3.5] give a slightly weaker version of Proposition 1.4, which implies that $\varepsilon$-differential privacy yields $\left(\frac{1}{2}\varepsilon(e^\varepsilon - 1)\right)$-zCDP; this improves on an earlier bound of [DRV10] by a constant factor.
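These two conversions are simple enough to state in code. The following sketch (hypothetical helper names, not from the paper) implements Proposition 1.4 and the zCDP-to-$(\varepsilon,\delta)$-DP conversion of Proposition 1.3:

```python
import math

def pure_dp_to_zcdp(eps: float) -> float:
    """Proposition 1.4: eps-DP implies (eps^2 / 2)-zCDP."""
    return 0.5 * eps ** 2

def zcdp_to_approx_dp(rho: float, delta: float) -> float:
    """Proposition 1.3: rho-zCDP implies (eps, delta)-DP with
    eps = rho + 2*sqrt(rho * log(1/delta)), for any delta > 0."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

# A 1.0-DP mechanism satisfies 0.5-zCDP ...
rho = pure_dp_to_zcdp(1.0)
assert rho == 0.5
# ... which in turn implies a family of (eps, delta)-DP guarantees.
eps = zcdp_to_approx_dp(rho, delta=1e-6)
assert eps > rho
```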
1.2.2 Gaussian Mechanism
Just as with mCDP, the prototypical example of a mechanism satisfying zCDP is the Gaussian mechanism, which answers a real-valued query on a database by perturbing the true answer with Gaussian noise.
Definition 1.5 (Sensitivity).
A function $q : \mathcal{X}^n \to \mathbb{R}$ has sensitivity $\Delta$ if for all $x, x' \in \mathcal{X}^n$ differing in a single entry, we have $|q(x) - q(x')| \le \Delta$.
Proposition 1.6 (Gaussian Mechanism).
Let $q : \mathcal{X}^n \to \mathbb{R}$ be a sensitivity-$\Delta$ query. Consider the mechanism $M$ that on input $x$, releases a sample from $\mathcal{N}(q(x), \sigma^2)$. Then $M$ satisfies $\left(\Delta^2/2\sigma^2\right)$-zCDP.
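In code, Proposition 1.6 amounts to calibrating the noise scale as $\sigma = \Delta/\sqrt{2\rho}$ for a target $\rho$. A minimal sketch (hypothetical function names, not from the paper):

```python
import math
import random

def gaussian_mechanism(true_answer: float, sensitivity: float, rho: float,
                       rng: random.Random) -> float:
    """Release a real-valued query answer under rho-zCDP by adding
    N(0, sigma^2) noise with sigma = sensitivity / sqrt(2 * rho);
    by Proposition 1.6 this gives (sensitivity^2 / (2 sigma^2))-zCDP = rho-zCDP."""
    sigma = sensitivity / math.sqrt(2.0 * rho)
    return true_answer + rng.gauss(0.0, sigma)

# Example: a counting query (sensitivity 1) released under 0.125-zCDP
# uses sigma = 1 / sqrt(2 * 0.125) = 2.
noisy = gaussian_mechanism(42.0, sensitivity=1.0, rho=0.125, rng=random.Random(0))
```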
1.2.3 Basic Properties of zCDP
Our definition of zCDP satisfies the key basic properties of differential privacy. Foremost, these properties include smooth degradation under composition, and invariance under postprocessing:
Lemma 1.7 (Composition).
Let $M : \mathcal{X}^n \to \mathcal{Y}$ and $M' : \mathcal{X}^n \to \mathcal{Z}$ be randomized algorithms. Suppose $M$ satisfies $\rho$-zCDP and $M'$ satisfies $\rho'$-zCDP. Define $M'' : \mathcal{X}^n \to \mathcal{Y} \times \mathcal{Z}$ by $M''(x) = (M(x), M'(x))$. Then $M''$ satisfies $(\rho + \rho')$-zCDP.
Lemma 1.8 (Postprocessing).
Let $M : \mathcal{X}^n \to \mathcal{Y}$ be a randomized algorithm and $f : \mathcal{Y} \to \mathcal{Z}$ a (possibly randomized) map. Suppose $M$ satisfies $\rho$-zCDP. Define $M' : \mathcal{X}^n \to \mathcal{Z}$ by $M'(x) = f(M(x))$. Then $M'$ satisfies $\rho$-zCDP.
These properties follow immediately from corresponding properties of the Rényi divergence outlined in Lemma 2.2.
We remark that Dwork and Rothblum’s definition of mCDP is not closed under postprocessing; we provide a counterexample in Appendix A. (However, an arbitrary amount of postprocessing can worsen the guarantees of mCDP by at most constant factors.)
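Lemmas 1.7 and 1.8 make $\rho$ behave like a privacy budget: composition adds the $\rho$ parameters, and postprocessing is free. A minimal accounting helper (a hypothetical sketch; the paper defines no such object):

```python
from dataclasses import dataclass

@dataclass
class ZCDPAccountant:
    """Tracks the total zCDP parameter rho across a sequence of releases."""
    rho: float = 0.0

    def compose(self, rho_step: float) -> None:
        self.rho += rho_step  # Lemma 1.7: rho parameters add under composition

    def postprocess(self) -> None:
        pass  # Lemma 1.8: postprocessing does not change rho

acct = ZCDPAccountant()
for _ in range(8):          # eight Gaussian-mechanism releases, each 0.05-zCDP
    acct.compose(0.05)
acct.postprocess()          # e.g. rounding the released answers costs nothing
assert abs(acct.rho - 0.4) < 1e-12
```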
1.2.4 Group Privacy
A mechanism guarantees group privacy if no small group of individuals has a significant effect on the outcome of a computation (whereas the definition of zCDP only refers to individuals, which are groups of size $1$). That is, group privacy for groups of size $k$ guarantees that, if $x$ and $x'$ are inputs differing on $k$ entries (rather than a single entry), then the outputs $M(x)$ and $M(x')$ are close.
Dwork and Rothblum [DR16, Theorem 4.1] gave nearly tight bounds on the group privacy guarantees of concentrated differential privacy, showing that a $(\mu,\tau)$-concentrated differentially private mechanism affords concentrated differential privacy for groups of size $k$, with parameters degrading by factors polynomial in $k$. We are able to show a group privacy guarantee for zCDP that is exactly tight and works for a wider range of parameters:
Proposition 1.9. Let $M$ satisfy $\rho$-zCDP. Then $M$ guarantees $(k^2\rho)$-zCDP for groups of size $k$ — i.e. for every $x, x' \in \mathcal{X}^n$ differing in up to $k$ entries and every $\alpha \in (1,\infty)$, we have

$$D_\alpha\big(M(x)\,\big\|\,M(x')\big) \le k^2\rho \cdot \alpha.$$
In particular, this bound is achieved (simultaneously for all values $\alpha \in (1,\infty)$) by the Gaussian mechanism. Our proof is also simpler than that of Dwork and Rothblum; see Section 5.
1.2.5 Lower Bounds
The strong group privacy guarantees of zCDP yield, as an unfortunate consequence, strong lower bounds as well. We show that, as with pure differential privacy, zCDP is susceptible to information-based lower bounds, as well as to so-called packing arguments [HT10, MMP10, De12]:
Theorem 1.10. Let $M : \mathcal{X}^n \to \mathcal{Y}$ satisfy $\rho$-zCDP. Let $X$ be a random variable on $\mathcal{X}^n$. Then

$$I\big(X; M(X)\big) \le \rho n^2,$$

where $I\big(X; M(X)\big)$ denotes the mutual information between the random variables (in nats, rather than bits). Furthermore, if the entries of $X$ are independent, then $I\big(X; M(X)\big) \le \rho n$.
Theorem 1.10 yields strong lower bounds for zCDP mechanisms, as we can construct distributions on the input $X$ such that, for any accurate mechanism $M$, the output $M(X)$ reveals a lot of information about $X$ (i.e. $I\big(X; M(X)\big)$ is large for any accurate $M$).
In particular, we obtain a strong separation between approximate differential privacy and zCDP. For example, we can show that releasing an accurate approximate histogram (or, equivalently, accurately answering all point queries) on a data domain of size $T$ requires an input with at least $n = \Omega(\sqrt{\log T})$ entries to satisfy zCDP. In contrast, under approximate differential privacy, $n$ can be independent of the domain size $T$ [BNS13]! In particular, our lower bounds show that "stability-based" techniques (such as those in the propose-test-release framework [DL09]) are not compatible with zCDP.
Our lower bound exploits the strong group privacy guarantee afforded by zCDP. Group privacy has been used to prove tight lower bounds for pure differential privacy [HT10, De12] and approximate differential privacy [SU15a]. These results highlight the fact that group privacy is often the limiting factor for private data analysis. For $(\varepsilon,\delta)$-differential privacy, group privacy becomes vacuous for groups of size $k = \Theta(\log(1/\delta)/\varepsilon)$. Indeed, stability-based techniques exploit precisely this breakdown in group privacy.
As a result of this strong lower bound, we show that any mechanism for answering statistical queries that satisfies zCDP can be converted into a mechanism satisfying pure differential privacy with only a quadratic blowup in its sample complexity. More precisely, the following theorem illustrates a more general result we prove in Section 7.
Theorem 1.11 (informal). Let $Q$ be an arbitrary family of statistical queries. Suppose $M$ satisfies $\rho$-zCDP and answers every query in $Q$ to within a given accuracy on datasets of $n$ individuals. Then there exists a mechanism $M'$ satisfying pure $\varepsilon$-differential privacy that answers every query in $Q$ to within comparable accuracy on datasets of $n'$ individuals, where $n'$ exceeds $n$ by at most roughly a quadratic factor.
For some classes of queries, this reduction is essentially tight. For example, for the class of one-way marginals over $\mathcal{X} = \{0,1\}^d$, the Gaussian mechanism achieves sample complexity $n = \Theta(\sqrt{d})$ subject to zCDP, whereas the Laplace mechanism achieves sample complexity $n = \Theta(d)$ subject to pure differential privacy, which is known to be optimal (suppressing the dependence on the accuracy and privacy parameters).
1.2.6 Approximate zCDP
To circumvent these strong lower bounds for zCDP, we consider a relaxation of zCDP in the spirit of approximate differential privacy that permits a small probability of (catastrophic) failure:
Definition 1.12 (Approximate Zero-Concentrated Differential Privacy (Approximate zCDP)).
A randomized mechanism $M : \mathcal{X}^n \to \mathcal{Y}$ is $\delta$-approximately $(\xi,\rho)$-zCDP if, for all $x, x' \in \mathcal{X}^n$ differing on a single entry, there exist events $E$ (depending on $M(x)$) and $E'$ (depending on $M(x')$) such that $\mathbb{P}[E] \ge 1 - \delta$, $\mathbb{P}[E'] \ge 1 - \delta$, and

$$\forall \alpha \in (1,\infty) \qquad D_\alpha\Big(M(x)\big|_E \,\Big\|\, M(x')\big|_{E'}\Big) \le \xi + \rho\alpha \quad \text{and} \quad D_\alpha\Big(M(x')\big|_{E'} \,\Big\|\, M(x)\big|_E\Big) \le \xi + \rho\alpha,$$

where $M(x)\big|_E$ denotes the distribution of $M(x)$ conditioned on the event $E$. We further define $\delta$-approximate $\rho$-zCDP to be $\delta$-approximate $(0,\rho)$-zCDP.
In particular, setting $\delta = 0$ gives the original definition of zCDP. However, this definition unifies zCDP with approximate differential privacy:
If $M$ satisfies $(\varepsilon,\delta)$-differential privacy, then $M$ satisfies $\delta$-approximate $\left(\frac{1}{2}\varepsilon^2\right)$-zCDP.
Approximate zCDP retains most of the desirable properties of zCDP, but allows us to incorporate stability-based techniques and bypass the above lower bounds. This also presents a unified tool to analyse a composition of zCDP with approximate differential privacy; see Section 8.
1.3 Related Work
Our work builds on the aforementioned prior work of Dwork and Rothblum [DR16]. (Although their work only appeared publicly in March 2016, they shared a preliminary draft of their paper with us before we commenced this work; as such, our ideas are heavily inspired by theirs.) We view our definition of concentrated differential privacy as being "morally equivalent" to their definition, in the sense that both definitions formalize the same concept. (We refer to our definition as "zero-concentrated differential privacy" (zCDP) and their definition as "mean-concentrated differential privacy" (mCDP); we use "concentrated differential privacy" (CDP) to refer to the underlying concept formalized by both definitions.) The formal relationship between the two definitions is discussed in Section 4. However, the definition of zCDP generally seems to be easier to work with than that of mCDP. In particular, our formulation in terms of Rényi divergence simplifies many analyses.
Dwork and Rothblum prove several results about concentrated differential privacy that are similar to ours. Namely, they prove analogous properties of mCDP as we prove for zCDP (cf. Sections 1.2.1, 1.2.2, 1.2.3, and 1.2.4). However, as noted, some of their bounds are weaker than ours; also, they do not explore lower bounds.
Several of the ideas underlying concentrated differential privacy are implicit in earlier works. In particular, the proof of the advanced composition theorem of Dwork, Rothblum, and Vadhan [DRV10] essentially uses the ideas of concentrated differential privacy. Their proof contains analogs of Propositions 1.7, 1.3, and 1.4.
We also remark that Tardos [Tar08] used Rényi divergence to prove lower bounds for cryptographic objects called fingerprinting codes. Fingerprinting codes turn out to be closely related to differential privacy [Ull13, BUV14, SU15b], and Tardos’ lower bound can be (loosely) viewed as a kind of privacy-preserving algorithm.
1.4 Further Work
We believe that concentrated differential privacy is a useful tool for analysing private computations, as it provides both simpler and tighter bounds. We hope that CDP will prove useful in both the theory and practice of differential privacy.
Furthermore, our lower bounds show that CDP can really be a much more stringent condition than approximate differential privacy. Thus CDP defines a "subclass" of all $(\varepsilon,\delta)$-differentially private algorithms. This subclass includes most differentially private algorithms in the literature, but not all — the most notable exceptions being algorithms that use the propose-test-release approach [DL09] to exploit low local sensitivity.
This “CDP subclass” warrants further exploration. In particular, is there a “complete” mechanism for this class of algorithms, in the same sense that the exponential mechanism [MT07, BLR13] is complete for pure differential privacy? Can we obtain a simple characterization of the sample complexity needed to satisfy CDP? The ability to prove stronger and simpler lower bounds for CDP than for approximate DP may be useful for showing the limitations of certain algorithmic paradigms. For example, any differentially private algorithm that only uses the Laplace mechanism, the exponential mechanism, the Gaussian mechanism, and the “sparse vector” technique, along with composition and postprocessing will be subject to the lower bounds for CDP.
There is also room to examine how to interpret the zCDP privacy guarantee. In particular, we leave it as an open question to understand the extent to which $\rho$-zCDP provides a stronger privacy guarantee than the implied family of $(\varepsilon,\delta)$-differential privacy guarantees (cf. Proposition 1.3).
In general, much of the literature on differential privacy can be re-examined through the lens of CDP, which may yield new insights and results.
2 Rényi Divergence
Recall the definition of Rényi divergence:
Definition 2.1 (Rényi Divergence [Rén61, Equation (3.3)]).
Let $P$ and $Q$ be probability distributions on $\Omega$. For $\alpha \in (1,\infty)$, we define the Rényi divergence of order $\alpha$ between $P$ and $Q$ as

$$D_\alpha(P\|Q) = \frac{1}{\alpha - 1}\log\left(\int_\Omega P(x)^\alpha\, Q(x)^{1-\alpha}\,\mathrm{d}x\right) = \frac{1}{\alpha - 1}\log\left(\mathbb{E}_{x \sim Q}\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right]\right),$$

where $P(\cdot)$ and $Q(\cdot)$ are the probability mass/density functions of $P$ and $Q$ respectively or, more generally, $P(\cdot)/Q(\cdot)$ is the Radon-Nikodym derivative of $P$ with respect to $Q$. (If $P$ is not absolutely continuous with respect to $Q$, we define $D_\alpha(P\|Q) = \infty$ for all $\alpha \in (1,\infty]$.) We also define the KL-divergence

$$D_1(P\|Q) = \lim_{\alpha \to 1} D_\alpha(P\|Q) = \int_\Omega P(x)\log\left(\frac{P(x)}{Q(x)}\right)\mathrm{d}x$$

and the max-divergence

$$D_\infty(P\|Q) = \lim_{\alpha \to \infty} D_\alpha(P\|Q) = \sup_{x \in \Omega}\,\log\left(\frac{P(x)}{Q(x)}\right).$$
Alternatively, Rényi divergence can be defined in terms of the privacy loss (Definition 1.2) between $P$ and $Q$:

$$D_\alpha(P\|Q) = \frac{1}{\alpha - 1}\log\left(\mathbb{E}\left[e^{(\alpha-1)Z}\right]\right), \qquad Z = \mathrm{PrivLoss}(P\|Q),$$

for all $\alpha \in (1,\infty)$. Moreover, $D_1(P\|Q) = \mathbb{E}[Z]$.
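For discrete distributions both expressions can be evaluated directly. The following sketch (hypothetical function names, not from the paper) computes $D_\alpha$ from the definition and via the moment generating function of the privacy loss, and checks that the two agree:

```python
import math

def renyi_divergence(p, q, alpha):
    """D_alpha(P||Q) = 1/(alpha-1) * log( sum_x p(x)^alpha * q(x)^(1-alpha) )
    for discrete distributions given as lists of probabilities."""
    s = sum(pi ** alpha * qi ** (1.0 - alpha) for pi, qi in zip(p, q))
    return math.log(s) / (alpha - 1.0)

def renyi_via_privacy_loss(p, q, alpha):
    """The same quantity via the privacy loss: D_alpha = 1/(alpha-1) *
    log E[exp((alpha-1) * Z)], with Z = log(p(y)/q(y)) for y ~ P."""
    mgf = sum(pi * math.exp((alpha - 1.0) * math.log(pi / qi))
              for pi, qi in zip(p, q))
    return math.log(mgf) / (alpha - 1.0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
for alpha in (1.5, 2.0, 5.0):
    assert math.isclose(renyi_divergence(p, q, alpha),
                        renyi_via_privacy_loss(p, q, alpha))
```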
We record several useful and well-known properties of Rényi divergence. We refer the reader to [vEH14] for proofs and discussion of these (and many other) properties. Self-contained proofs are given in Appendix B.1.
Lemma 2.2. Let $P$ and $Q$ be probability distributions on $\Omega$, and let $\alpha \in [1,\infty]$.
Non-negativity: $D_\alpha(P\|Q) \ge 0$, with equality if and only if $P = Q$.
Composition: Suppose $P$ and $Q$ are distributions on $\Omega_1 \times \Omega_2$. Let $P_1$ and $Q_1$ denote the marginal distributions on $\Omega_1$ induced by $P$ and $Q$ respectively. For $x \in \Omega_1$, let $P_2^x$ and $Q_2^x$ denote the conditional distributions on $\Omega_2$ induced by $P$ and $Q$ respectively, where $x$ specifies the first coordinate. Then

$$D_\alpha(P\|Q) \le D_\alpha(P_1\|Q_1) + \sup_{x \in \Omega_1} D_\alpha\big(P_2^x\,\big\|\,Q_2^x\big).$$

In particular, if $P$ and $Q$ are product distributions, then the Rényi divergence between $P$ and $Q$ is just the sum of the Rényi divergences of the marginals.
Quasi-Convexity: Let $P'$ and $Q'$ be distributions on $\Omega$, and let $P'' = \lambda P + (1-\lambda)P'$ and $Q'' = \lambda Q + (1-\lambda)Q'$ for $\lambda \in [0,1]$. Then $D_\alpha(P''\|Q'') \le \max\big\{D_\alpha(P\|Q),\ D_\alpha(P'\|Q')\big\}$. Moreover, KL divergence is convex:

$$D_1(P''\|Q'') \le \lambda D_1(P\|Q) + (1-\lambda) D_1(P'\|Q').$$
Postprocessing: Let $P$ and $Q$ be distributions on $\Omega$ and let $f : \Omega \to \Omega'$ be a function. Let $f(P)$ and $f(Q)$ denote the distributions on $\Omega'$ induced by applying $f$ to $P$ or $Q$ respectively. Then $D_\alpha\big(f(P)\,\big\|\,f(Q)\big) \le D_\alpha(P\|Q)$.
Note that quasi-convexity allows us to extend this guarantee to the case where $f$ is a randomized mapping.
Monotonicity: For $1 \le \alpha \le \alpha' \le \infty$, $D_\alpha(P\|Q) \le D_{\alpha'}(P\|Q)$.
2.1 Composition and Postprocessing
Lemma 2.3 (Composition & Postprocessing).
Let $M : \mathcal{X}^n \to \mathcal{Y}$ and $M' : \mathcal{X}^n \times \mathcal{Y} \to \mathcal{Z}$. Suppose $M$ satisfies $(\xi,\rho)$-zCDP and $M'$ satisfies $(\xi',\rho')$-zCDP (as a function of its first argument). Define $M'' : \mathcal{X}^n \to \mathcal{Z}$ by $M''(x) = M'(x, M(x))$. Then $M''$ satisfies $(\xi + \xi',\ \rho + \rho')$-zCDP.
2.2 Gaussian Mechanism
The following lemma gives the Rényi divergence between two Gaussian distributions with the same variance.
Lemma 2.4. Let $\mu, \nu, \sigma \in \mathbb{R}$ and $\alpha \in [1,\infty)$. Then

$$D_\alpha\big(\mathcal{N}(\mu,\sigma^2)\,\big\|\,\mathcal{N}(\nu,\sigma^2)\big) = \frac{\alpha(\mu-\nu)^2}{2\sigma^2}.$$

Consequently, the Gaussian mechanism, which answers a sensitivity-$\Delta$ query by adding noise drawn from $\mathcal{N}(0,\sigma^2)$, satisfies $\left(\Delta^2/2\sigma^2\right)$-zCDP (Proposition 1.6).
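Lemma 2.4 can be verified numerically by integrating $P(x)^\alpha Q(x)^{1-\alpha}$ directly. A sketch (not from the paper; crude midpoint quadrature):

```python
import math

def gaussian_renyi_numeric(mu, nu, sigma, alpha, lo=-30.0, hi=30.0, n=120000):
    """Numerically integrate p(x)^alpha * q(x)^(1-alpha) for p = N(mu, sigma^2)
    and q = N(nu, sigma^2), then return log(integral) / (alpha - 1)."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        # p^alpha * q^(1-alpha) has a single Gaussian normalizer and the
        # combined exponent below.
        expo = -(alpha * (x - mu) ** 2 + (1.0 - alpha) * (x - nu) ** 2) / (2.0 * sigma ** 2)
        total += math.exp(expo) * dx
    total /= sigma * math.sqrt(2.0 * math.pi)
    return math.log(total) / (alpha - 1.0)

# Closed form from Lemma 2.4: alpha * (mu - nu)^2 / (2 sigma^2)
mu, nu, sigma, alpha = 0.0, 1.0, 1.0, 2.0
closed = alpha * (mu - nu) ** 2 / (2.0 * sigma ** 2)
assert abs(gaussian_renyi_numeric(mu, nu, sigma, alpha) - closed) < 1e-4
```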
For the multivariate Gaussian mechanism, Lemma 2.4 generalises to the following.
Let $\mu, \nu \in \mathbb{R}^d$, $\sigma \in \mathbb{R}$, and $\alpha \in [1,\infty)$. Then

$$D_\alpha\big(\mathcal{N}(\mu,\sigma^2 I_d)\,\big\|\,\mathcal{N}(\nu,\sigma^2 I_d)\big) = \frac{\alpha\,\|\mu - \nu\|_2^2}{2\sigma^2}.$$

Thus, if $M : \mathcal{X}^n \to \mathbb{R}^d$ is the mechanism that, on input $x$, releases a sample from $\mathcal{N}(q(x), \sigma^2 I_d)$ for some function $q : \mathcal{X}^n \to \mathbb{R}^d$, then $M$ satisfies $\rho$-zCDP for

$$\rho = \frac{1}{2\sigma^2}\,\sup_{x,x'}\,\|q(x) - q(x')\|_2^2,$$

where the supremum is over all $x, x' \in \mathcal{X}^n$ differing in a single entry.
3 Relation to Differential Privacy
We now discuss the relationship between zCDP and the traditional definitions of pure and approximate differential privacy. There is a close relationship between the notions, but not an exact characterization.
For completeness, we state the definition of differential privacy:
A randomized mechanism $M : \mathcal{X}^n \to \mathcal{Y}$ satisfies $(\varepsilon,\delta)$-differential privacy if, for all $x, x' \in \mathcal{X}^n$ differing in a single entry, we have

$$\mathbb{P}[M(x) \in S] \le e^{\varepsilon}\,\mathbb{P}[M(x') \in S] + \delta$$

for all (measurable) $S \subseteq \mathcal{Y}$. Further define $\varepsilon$-differential privacy to be $(\varepsilon,0)$-differential privacy.
3.1 Pure DP versus zCDP
Pure differential privacy is exactly characterized by zCDP with $\rho = 0$:
A mechanism $M$ satisfies $\varepsilon$-DP if and only if it satisfies $(\varepsilon, 0)$-zCDP.
Proof. Let $x, x' \in \mathcal{X}^n$ be neighbouring. Suppose $M$ satisfies $\varepsilon$-DP. Then $D_\infty\big(M(x)\,\big\|\,M(x')\big) \le \varepsilon$. By monotonicity,

$$D_\alpha\big(M(x)\,\big\|\,M(x')\big) \le D_\infty\big(M(x)\,\big\|\,M(x')\big) \le \varepsilon$$

for all $\alpha$. So $M$ satisfies $(\varepsilon,0)$-zCDP. Conversely, suppose $M$ satisfies $(\varepsilon,0)$-zCDP. Then

$$D_\infty\big(M(x)\,\big\|\,M(x')\big) = \lim_{\alpha \to \infty} D_\alpha\big(M(x)\,\big\|\,M(x')\big) \le \varepsilon.$$

Thus $M$ satisfies $\varepsilon$-DP. ∎
We now show that $\varepsilon$-differential privacy implies $\left(\frac{1}{2}\varepsilon^2\right)$-zCDP (Proposition 1.4).

Proposition 3.3. Let $P$ and $Q$ be probability distributions on $\Omega$ satisfying $D_\infty(P\|Q) \le \varepsilon$ and $D_\infty(Q\|P) \le \varepsilon$. Then $D_\alpha(P\|Q) \le \frac{1}{2}\varepsilon^2\alpha$ for all $\alpha > 1$.
In particular, Proposition 3.3 shows that the KL-divergence satisfies $D_1(P\|Q) \le \frac{1}{2}\varepsilon^2$. A bound on the KL-divergence between random variables in terms of their max-divergence is an important ingredient in the analysis of the advanced composition theorem [DRV10]. Our bound sharpens (up to lower order terms) and, in our opinion, simplifies the previous bound of $\frac{1}{2}\varepsilon(e^\varepsilon - 1)$ proved by Dwork and Rothblum [DR16].
Proof of Proposition 3.3. We may assume $\alpha \le 2/\varepsilon$, as otherwise $\frac{1}{2}\varepsilon^2\alpha \ge \varepsilon \ge D_\infty(P\|Q) \ge D_\alpha(P\|Q)$, whence the result follows from monotonicity. We must show that

$$\mathbb{E}_{x \sim Q}\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right] \le e^{\frac{1}{2}\varepsilon^2\alpha(\alpha-1)}.$$

We know that $e^{-\varepsilon} \le P(x)/Q(x) \le e^{\varepsilon}$ for all $x$. Define a random function $F : \Omega \to \{e^{-\varepsilon}, e^{\varepsilon}\}$ by $\mathbb{E}[F(x)] = P(x)/Q(x)$ for all $x$. By Jensen's inequality,

$$\mathbb{E}_{x \sim Q}\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right] = \mathbb{E}_{x \sim Q}\left[\mathbb{E}[F(x)]^{\alpha}\right] \le \mathbb{E}_{x \sim Q}\left[\mathbb{E}\left[F(x)^{\alpha}\right]\right] = \mathbb{E}\left[F(x)^{\alpha}\right],$$

where the final expectation is over both $x \sim Q$ and the randomness of $F$. We also have $\mathbb{E}[F(x)] = \mathbb{E}_{x \sim Q}[P(x)/Q(x)] = 1$. From this equation, we can conclude that

$$\mathbb{P}\big[F(x) = e^{\varepsilon}\big] = \frac{1 - e^{-\varepsilon}}{e^{\varepsilon} - e^{-\varepsilon}} \qquad \text{and} \qquad \mathbb{P}\big[F(x) = e^{-\varepsilon}\big] = \frac{e^{\varepsilon} - 1}{e^{\varepsilon} - e^{-\varepsilon}}.$$

The result now follows from the following inequality, which is proved in Lemma B.1:

$$\frac{1 - e^{-\varepsilon}}{e^{\varepsilon} - e^{-\varepsilon}}\, e^{\varepsilon\alpha} + \frac{e^{\varepsilon} - 1}{e^{\varepsilon} - e^{-\varepsilon}}\, e^{-\varepsilon\alpha} \le e^{\frac{1}{2}\varepsilon^2\alpha(\alpha-1)}.$$
3.2 Approximate DP versus zCDP
The statements in this section show that, up to some loss in parameters, zCDP is equivalent to a family of $(\varepsilon,\delta)$-DP guarantees, quantified over all $\delta > 0$.
Let $M$ satisfy $\rho$-zCDP. Then $M$ satisfies $(\varepsilon,\delta)$-DP for all $\delta > 0$ and

$$\varepsilon = \rho + 2\sqrt{\rho\log(1/\delta)}.$$

Thus to achieve a given $(\varepsilon,\delta)$-DP guarantee it suffices to satisfy $\rho$-zCDP with

$$\rho = \left(\sqrt{\varepsilon + \log(1/\delta)} - \sqrt{\log(1/\delta)}\right)^2 \approx \frac{\varepsilon^2}{4\log(1/\delta)}.$$
Proof. Let $x, x' \in \mathcal{X}^n$ be neighbouring. Define $f(y) = \log\big(\mathbb{P}[M(x) = y]/\mathbb{P}[M(x') = y]\big)$. Let $Y \sim M(x)$ and $Z = f(Y)$. That is, $Z = \mathrm{PrivLoss}\big(M(x)\,\big\|\,M(x')\big)$ is the privacy loss random variable. Fix $\alpha \in (1,\infty)$ to be chosen later. Then

$$\mathbb{E}\left[e^{(\alpha-1)Z}\right] = e^{(\alpha-1) D_\alpha\left(M(x)\,\middle\|\,M(x')\right)} \le e^{(\alpha-1)\rho\alpha}.$$

By Markov's inequality,

$$\mathbb{P}[Z > \varepsilon] = \mathbb{P}\left[e^{(\alpha-1)Z} > e^{(\alpha-1)\varepsilon}\right] \le \frac{\mathbb{E}\left[e^{(\alpha-1)Z}\right]}{e^{(\alpha-1)\varepsilon}} \le e^{(\alpha-1)(\rho\alpha - \varepsilon)}.$$

Choosing $\alpha = (\varepsilon + \rho)/(2\rho) > 1$ gives $\mathbb{P}[Z > \varepsilon] \le e^{-(\varepsilon - \rho)^2/(4\rho)} \le \delta$. Now, for any measurable $S \subseteq \mathcal{Y}$,

$$\mathbb{P}[M(x) \in S] \le \mathbb{P}\big[M(x) \in S \wedge Z \le \varepsilon\big] + \mathbb{P}[Z > \varepsilon] \le e^{\varepsilon}\,\mathbb{P}[M(x') \in S] + \delta. \qquad \blacksquare$$
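To achieve a target $(\varepsilon,\delta)$-DP guarantee via zCDP, one can solve $\varepsilon = \rho + 2\sqrt{\rho\log(1/\delta)}$ for $\rho$ in closed form. The following sketch (hypothetical helper names, not from the paper) round-trips the two formulas:

```python
import math

def rho_for_dp(eps: float, delta: float) -> float:
    """Smallest rho such that rho + 2*sqrt(rho*log(1/delta)) = eps, i.e.
    rho = (sqrt(log(1/delta) + eps) - sqrt(log(1/delta)))^2."""
    L = math.log(1.0 / delta)
    return (math.sqrt(L + eps) - math.sqrt(L)) ** 2

def dp_eps(rho: float, delta: float) -> float:
    """The eps implied by rho-zCDP at a given delta."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

# Round-tripping recovers eps exactly (up to floating point):
for eps in (0.1, 1.0, 3.0):
    for delta in (1e-3, 1e-9):
        assert math.isclose(dp_eps(rho_for_dp(eps, delta), delta), eps)
```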
Let satisfy -zCDP. Then satisfies -DP for all and
Alternatively satisfies -DP for all and
Note that the last of the three options in the minimum dominates the first two options. We have included the first two options as they are simpler.
Let satisfy -DP for all and
for some constants . Then is -zCDP.
Thus zCDP and DP are equivalent up to a (potentially substantial) loss in parameters and the quantification over all .
4 Zero- versus Mean-Concentrated Differential Privacy
We begin by stating the definition of mean-concentrated differential privacy:
Definition 4.1 (Mean-Concentrated Differential Privacy (mCDP) [DR16]).
A randomized mechanism $M : \mathcal{X}^n \to \mathcal{Y}$ satisfies $(\mu,\tau)$-mean-concentrated differential privacy if, for all $x, x' \in \mathcal{X}^n$ differing in one entry, and letting $Z = \mathrm{PrivLoss}\big(M(x)\,\big\|\,M(x')\big)$, we have $\mathbb{E}[Z] \le \mu$ and

$$\mathbb{E}\left[e^{\lambda(Z - \mathbb{E}[Z])}\right] \le e^{\lambda^2\tau^2/2}$$

for all $\lambda \in \mathbb{R}$.
In contrast, $(\xi,\rho)$-zCDP requires that, for all $\alpha \in (1,\infty)$, $\mathbb{E}\big[e^{(\alpha-1)Z}\big] \le e^{(\alpha-1)(\xi+\rho\alpha)}$, where $Z$ is the privacy loss random variable. We now show that these definitions are equivalent up to a (potentially significant) loss in parameters.
If $M$ satisfies $(\mu,\tau)$-mCDP, then $M$ satisfies $\left(\mu - \frac{\tau^2}{2},\ \frac{\tau^2}{2}\right)$-zCDP.

Proof. For all $\alpha \in (1,\infty)$,

$$\mathbb{E}\left[e^{(\alpha-1)Z}\right] = e^{(\alpha-1)\mathbb{E}[Z]}\cdot\mathbb{E}\left[e^{(\alpha-1)(Z - \mathbb{E}[Z])}\right] \le e^{(\alpha-1)\mu}\cdot e^{(\alpha-1)^2\tau^2/2} = e^{(\alpha-1)\left(\left(\mu - \frac{\tau^2}{2}\right) + \frac{\tau^2}{2}\alpha\right)}. \qquad \blacksquare$$
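For a Gaussian privacy loss $Z \sim \mathcal{N}(\mu,\tau^2)$, which satisfies the $(\mu,\tau)$-mCDP conditions with equality, the moment-generating-function bound in this conversion is an identity. A numeric check (a sketch, not from the paper):

```python
import math

def normal_mgf(mean: float, tau: float, t: float) -> float:
    """E[exp(t*Z)] for Z ~ N(mean, tau^2), via the normal MGF formula."""
    return math.exp(t * mean + 0.5 * t ** 2 * tau ** 2)

mu, tau = 0.5, 1.0
xi, rho = mu - tau ** 2 / 2.0, tau ** 2 / 2.0   # the mCDP-to-zCDP parameters
for alpha in (1.5, 2.0, 4.0):
    t = alpha - 1.0
    lhs = normal_mgf(mu, tau, t)                 # E[e^{(alpha-1) Z}]
    rhs = math.exp(t * (xi + rho * alpha))       # e^{(alpha-1)(xi + rho*alpha)}
    assert math.isclose(lhs, rhs)                # equality for a Gaussian loss
```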
If $M$ satisfies $\rho$-zCDP, then $M$ satisfies $(\mu,\tau)$-mCDP with $\mu = O(\rho)$ and $\tau = O(\sqrt{\rho})$.
The proof of Lemma 4.3 is deferred to the appendix.
Thus we can convert $(\mu,\tau)$-mCDP into $\left(\mu - \frac{\tau^2}{2},\ \frac{\tau^2}{2}\right)$-zCDP and then back to mCDP. This may result in a large loss in parameters, which is why, for example, pure DP can be characterised in terms of zCDP, but not in terms of mCDP.
We view zCDP as a relaxation of mCDP; mCDP requires the privacy loss to be "tightly" concentrated about its mean and that the mean is close to the origin. The triangle inequality then implies that the privacy loss is "weakly" concentrated about the origin. (The difference between "tightly" and "weakly" accounts for the use of the triangle inequality.) On the other hand, zCDP directly requires that the privacy loss is weakly concentrated about the origin. That is to say, zCDP gives a subgaussian bound on the privacy loss that is centered at zero, whereas mCDP gives a subgaussian bound that is centered at the mean and separately bounds the mean.
There may be some advantage to the stronger requirement of mCDP, either in terms of what kind of privacy guarantee it affords, or how it can be used as an analytic tool. However, it seems that for most applications, we only need what zCDP provides.
5 Group Privacy
In this section we show that zCDP provides privacy protections to small groups of individuals.
Definition 5.1 (zCDP for Groups).
We say that a mechanism $M : \mathcal{X}^n \to \mathcal{Y}$ provides $(\xi,\rho)$-zCDP for groups of size $k$ if, for every $x, x' \in \mathcal{X}^n$ differing in at most $k$ entries, we have

$$\forall \alpha \in (1,\infty) \qquad D_\alpha\big(M(x)\,\big\|\,M(x')\big) \le \xi + \rho\alpha.$$
The usual definition of zCDP only applies to groups of size $1$. Here we show that it implies bounds for all group sizes. We begin with a technical lemma.
Lemma 5.2 (Triangle-like Inequality for Rényi Divergence).
Let $P$, $Q$, and $R$ be probability distributions. Then

$$D_\alpha(P\|R) \le \frac{p\alpha - 1}{p(\alpha - 1)}\, D_{p\alpha}(P\|Q) + D_{q(\alpha-1)+1}(Q\|R)$$

for all $\alpha \in (1,\infty)$ and all $p, q \in (1,\infty)$ with $\frac{1}{p} + \frac{1}{q} = 1$.
Proof. Let $p, q \in (1,\infty)$ satisfy $\frac{1}{p} + \frac{1}{q} = 1$. Then

$$\mathbb{E}_{x \sim R}\left[\left(\frac{P(x)}{R(x)}\right)^{\alpha}\right] = \mathbb{E}_{x \sim Q}\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\left(\frac{Q(x)}{R(x)}\right)^{\alpha-1}\right].$$

By Hölder's inequality,