Randomized Smoothing of All Shapes and Sizes
Abstract
Randomized smoothing is a recently proposed defense against adversarial attacks that has achieved state-of-the-art provable robustness against ℓ2 perturbations. Soon after, a number of works devised new randomized smoothing schemes for other metrics, such as ℓ1 or ℓ∞; however, for each geometry, substantial effort was needed to derive new robustness guarantees. This begs the question: can we find a general theory for randomized smoothing?
In this work we propose a novel framework for devising and analyzing randomized smoothing schemes, and validate its effectiveness in practice. Our theoretical contributions are as follows: (1) We show that for an appropriate notion of “optimal”, the optimal smoothing distributions for any “nice” norm have level sets given by the Wulff Crystal of that norm. (2) We propose two novel and complementary methods for deriving provably robust radii for any smoothing distribution. Finally, (3) we show fundamental limits to current randomized smoothing techniques via the theory of Banach space cotypes. By combining (1) and (2), we significantly improve the state-of-the-art certified ℓ1 accuracy on standard datasets. On the other hand, using (3), we show that, without more information than label statistics under random input perturbations, randomized smoothing cannot achieve nontrivial certified accuracy against perturbations of ℓp-norm Ω(min(1, d^{1/p − 1/2})), when the input dimension d is large. We provide code at github.com/tonyduan/rs4a.
1 Introduction
Deep learning models are vulnerable to adversarial examples: small imperceptible perturbations of their inputs that lead to misclassification (Goodfellow et al., 2015; Szegedy et al., 2014). To solve this problem, recent works proposed heuristic defenses that are robust to specific classes of perturbations, but many would later be broken by stronger attacking algorithms (Carlini & Wagner, 2017; Athalye et al., 2018; Uesato et al., 2018). This led the community both to strengthen empirical defenses (Kurakin et al., 2016; Madry et al., 2017) and to build certified defenses that provide robustness guarantees, i.e., models whose predictions are constant within a neighborhood of their inputs (Wong & Kolter, 2018; Raghunathan et al., 2018a). In particular, randomized smoothing is a recent method that has achieved state-of-the-art provable robustness (Lecuyer et al., 2018; Li et al., 2018; Cohen et al., 2019). In short, given an input, it outputs the class most likely to be returned by a base classifier, typically a neural network, under random noise perturbation of the input. This mechanism confers stability of the output against perturbations, even if the base classifier itself is highly non-Lipschitz. Canonically, this noise has been Gaussian, and the adversarial perturbation it protects against has been ℓ2 (Cohen et al., 2019; Salman et al., 2019a), but some have explored other kinds of noises and adversaries as well (Lecuyer et al., 2018; Li et al., 2019; Dvijotham et al., 2019). In this paper, we seek to comprehensively understand the interaction between the choice of smoothing distribution and the perturbation norm.

We propose two new methods to compute robust certificates for additive randomized smoothing against different norms.

We show that, for ℓp adversaries, the optimal smoothing distributions have level sets that are their respective Wulff Crystals — a kind of equilibrated crystal structure studied in physics since 1901 (Wulff, 1901).

Using the above advances, we obtain state-of-the-art ℓ1 certified accuracies on CIFAR-10 and ImageNet (Table 1).
Finally, we leverage the classical theory of Banach space cotypes (Wojtaszczyk, 1996) to show that current techniques for randomized smoothing cannot certify nontrivial accuracy at ℓ∞ radius more than Θ(1/√d), if all one uses are the probabilities of labels when classifying randomly perturbed inputs.
Table 1: Certified accuracies (%) at various ℓ1 radii.

ImageNet                           Radius   0.5   1.0   1.5   2.0   2.5   3.0   3.5   4.0
Uniform, Ours (%)                           55    49    46    42    37    33    28    25
Laplace, Teng et al. (2019) (%)             48    40    31    26    22    19    17    14

CIFAR-10                           Radius   0.5   1.0   1.5   2.0   2.5   3.0   3.5   4.0
Uniform, Ours (%)                           70    59    51    43    33    27    22    18
Laplace, Teng et al. (2019) (%)             61    39    24    16    11     7     4     3
2 Related Works
Defenses against adversarial examples are mainly divided into empirical defenses and certified defenses.
Empirical defenses are heuristics designed to make learned models empirically robust. Examples are adversarial-training-based defenses (Kurakin et al., 2016; Madry et al., 2017), which optimize the parameters of a model by minimizing the worst-case loss over a neighborhood around each input. Such defenses may seem powerful, but there is no guarantee that they are not “breakable”. In fact, the majority of the empirical defenses proposed in the literature were later “broken” by stronger attacks (Carlini & Wagner, 2017; Athalye et al., 2018; Uesato et al., 2018; Athalye & Carlini, 2018). To mitigate this deficiency, recent works explored certified defenses with formal robustness guarantees.
Certified defenses guarantee that for any input x, the classifier’s output is constant within a small neighborhood of x. Such defenses are typically based on certification methods that are either exact or conservative. The exact methods include those based on Satisfiability Modulo Theories solvers (Katz et al., 2017; Ehlers, 2017) or mixed integer linear programming (Tjeng et al., 2019; Lomuscio & Maganti, 2017; Fischetti & Jo, 2017), which, although guaranteed to find adversarial examples if they exist, are unfortunately computationally inefficient. On the other hand, conservative methods are more computationally efficient, but might mistakenly flag a “safe” data point as vulnerable to adversarial examples (Wong & Kolter, 2018; Wang et al., 2018a, b; Raghunathan et al., 2018a, b; Wong et al., 2018; Dvijotham et al., 2018b, a; Croce et al., 2018; Salman et al., 2019b; Gehr et al., 2018; Mirman et al., 2018; Singh et al., 2018; Gowal et al., 2018; Weng et al., 2018; Zhang et al., 2018). However, none of these defenses scale to practical networks. Recently, a new method called randomized smoothing has been proposed as a probabilistically certified defense, whose architecture-independence makes it scalable.
Randomized smoothing
Randomized smoothing was first proposed as a heuristic defense without any guarantees (Liu et al., 2018; Cao & Gong, 2017). Later on, Lecuyer et al. (2018) proved robustness guarantees for randomized smoothing classifiers from a differential privacy perspective. Subsequently, Li et al. (2018) gave a stronger robustness guarantee utilizing tools from information theory. Recently, Cohen et al. (2019) provided a tight ℓ2 robustness guarantee for randomized smoothing. Furthermore, a series of recent papers developed robustness guarantees using randomized smoothing against other adversaries, such as ℓ1-bounded (Teng et al., 2019), ℓ∞-bounded (Zhang* et al., 2020), ℓ0-bounded (Levine & Feizi, 2019a; Lee et al., 2019), and Wasserstein attacks (Levine & Feizi, 2019b). In Section 4.3, we give a more in-depth comparison of how our techniques compare to their results.
Wulff Crystal
We are the first to relate to adversarial robustness the theory of Wulff Crystals, which has an interesting history. Just as the round soap bubble minimizes surface tension for a given volume, the Wulff Crystal minimizes a certain surface energy that arises when the crystal interfaces with another material, akin to surface tension. The Russian physicist George Wulff first proposed this shape via physical arguments in 1901 (Wulff, 1901), but its energy-minimization property was not proven in full generality until relatively recently, building on a century’s worth of work (Gibbs, 1875; Wulff, 1901; Hilton, 1903; Liebmann, 1914; von Laue; Dinghas, 1944; Burton et al., 1951; Herring; Constable, 1968; Taylor, 1975, 1978; Fonseca & Müller, 1991; Brothers & Morgan, 1994; Cerf, 2006).
No-go theorems for randomized smoothing
Prior to the initial submission of this manuscript, the only other no-go theorem for randomized smoothing in the context of adversarial robustness was that of Zheng et al. (2020). However, they are only concerned with a nonstandard notion of certified robustness that does not imply anything for the original problem. Moreover, they show that, under this different notion of robustness, certifying ℓ∞ robustness requires the norm of the noise to be large on average. While this provides some indirect evidence for the hardness of certifying ℓ∞, it does not actually address the question. Our result, on the other hand, directly rules out a large suite of current techniques for deriving robust certificates for all ℓp norms with p > 2, for the standard notion of certified robustness.
After the initial submission of this manuscript, we became aware of two concurrent works, Kumar et al. (2020) and Blum et al. (2020), that claim impossibility results for randomized smoothing. Blum et al. (2020) demonstrate that, under some mild conditions, any smoothing distribution for ℓ∞ must have large componentwise magnitude. While this work, like Zheng et al. (2020), gives indirect evidence for the hardness of the problem, it does not directly show a limit on the utility of randomized smoothing. In contrast, our no-go result directly shows impossibility of the underlying robust classification problem. Kumar et al. (2020) demonstrate that certain classes of smoothing distributions cannot certify ℓ∞ radii without losing dimension-dependent factors. Our result is more general, as it rules out any class of smoothing distributions.
3 Randomized Smoothing
Consider a base classifier f from ℝ^d to classes [K] = {1, …, K}, and a distribution q on ℝ^d. Randomized smoothing with q is a method that constructs a new, smoothed classifier g from the base classifier f. The smoothed classifier g assigns to a query point x the class which is most likely to be returned by the base classifier when x is perturbed by a random noise δ sampled from q, i.e.,

g(x) := argmax_c q(U_c − x),   (1)

where U_c := {z ∈ ℝ^d : f(z) = c} is the decision region of class c, U − x denotes the translation of U by −x, and q(U) is the measure of U under q, i.e. q(U) = P_{δ∼q}(δ ∈ U).
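Concretely, the smoothed prediction can be estimated by Monte Carlo sampling. The sketch below is our illustration of the definition above; the toy classifier and noise scale are hypothetical:

```python
import numpy as np

def smooth_predict(base_classifier, x, sample_noise, n_samples=1000, seed=0):
    """Monte Carlo estimate of g(x) = argmax_c P(f(x + delta) = c)."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n_samples):
        c = base_classifier(x + sample_noise(rng))
        counts[c] = counts.get(c, 0) + 1
    return max(counts, key=counts.get)  # majority vote over noisy copies

# Toy base classifier on R^2: class 1 iff the first coordinate is positive.
f = lambda z: int(z[0] > 0)
noise = lambda rng: rng.normal(scale=0.5, size=2)  # Gaussian smoothing, sigma = 0.5
print(smooth_predict(f, np.array([1.0, 0.0]), noise))  # class 1 with overwhelming probability
```

In practice the base classifier is a neural network and the counts feed into a statistical test, but the voting mechanism is exactly this.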
Robustness guarantee for smoothed classifiers
For p ∈ [0, 1] and v ∈ ℝ^d, define the growth function
G_q(p, v) := sup { q(U − v) : U ⊆ ℝ^d measurable, q(U) = p }.
One should think of U as the decision region of some base classifier. Thus G_q(p, v) gives the maximal growth in measure of a set (i.e. decision region) when it is shifted by v, if we only know the initial measure p of the set.
Consider an adversary that can perturb an input additively by any vector inside an allowed set B. In the case when B is the ℓ2 ball and q is the Gaussian measure, Cohen et al. (2019) gave a simple expression for the growth function involving the Gaussian CDF, derived via the Neyman-Pearson lemma, which was later rederived by Salman et al. (2019a) as a nonlinear Lipschitz property. Likewise, the expression for Laplace distributions was derived by Teng et al. (2019). (See Appendix H and Appendix I for their expressions.)
Suppose that when the base classifier classifies x + δ, δ ∼ q, the class c is returned with probability p.
Then the smoothed classifier will not change its prediction under the adversary’s perturbations if

G_q(1 − p, v) < 1/2 for all v ∈ B.   (2)
4 Methods for Deriving Robust Radii
Let q be a distribution with a density function, and we shall write q(z) for the value of the density at z ∈ ℝ^d. Then, given a shift vector v and a ratio t ≥ 0, define the Neyman-Pearson set

S_{v,t} := {z ∈ ℝ^d : q(z − v) ≥ t · q(z)}.   (3)

Then the Neyman-Pearson lemma (Neyman & Pearson, 1933; Cohen et al., 2019) tells us that

G_q(p, v) = q(S_{v,t} − v),  for t chosen such that q(S_{v,t}) = p.   (NP)
While this gives way to a simple expression for the growth function when q is Gaussian (Cohen et al., 2019), it is difficult for more general distributions, as the geometry of the Neyman-Pearson set becomes hard to grasp. To overcome this difficulty, we propose the level set method, which decomposes this geometry so as to compute the growth function exactly, and the differential method, which, loosely speaking, upper bounds the derivative of the growth function.
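In the Gaussian case, the Neyman-Pearson set is a half-space orthogonal to the shift, which is what makes the growth function tractable: only the component along the shift direction matters. A quick numeric sanity check (our illustration, with arbitrary parameters):

```python
import math, random

def Phi(x):  # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

sigma, v_norm, t = 1.0, 0.8, -0.5
p = Phi(-t / sigma)                        # initial measure of the half-space {z : z_1 >= t}
growth = Phi(-t / sigma + v_norm / sigma)  # closed-form measure after shifting by ||v||_2 = 0.8

# Monte Carlo check: only the 1D component along the shift direction matters.
rng = random.Random(0)
hits = sum(rng.gauss(0, sigma) + v_norm >= t for _ in range(200_000))
print(round(growth, 3), round(hits / 200_000, 3))  # the two should agree
```

This is exactly the Gaussian CDF expression of Cohen et al. (2019), written as G_q(p, v) = Φ(Φ⁻¹(p) + ‖v‖₂/σ).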
4.1 The Level Set Method
For each t ≥ 0, let L_t be the superlevel set L_t := {z ∈ ℝ^d : q(z) > t}.
Then its boundary ∂L_t is the level set {z : q(z) = t} under regularity assumptions. The integral of q’s density is of course 1, but this integral can also be expressed as an integral of the volumes of the superlevel sets (the layer-cake formula):

1 = ∫_0^∞ vol(L_t) dt.

If q has a differentiable density, then we may rewrite this as an integral over level sets (Appendix F):

1 = ∫_0^∞ t · (−d vol(L_t)/dt) dt.
The graphics above illustrate the two integral expressions (best viewed on screen). In this level set perspective, the Neyman-Pearson set (Eq. 3) can be written as a union over the level sets of q, and likewise from the perspective of the shifted density. Its measures under q and under the translate of q are then calculated by integrating over the corresponding level-set decompositions (see Appendix F for the exact expressions). In general, the geometry of the Neyman-Pearson set is still difficult to handle, but in highly symmetric cases, when the level sets are concentric balls or cubes, these integrals can be calculated efficiently.
Computing Robust Radius
The level-set integrals above allow us to compute the growth function G_q via Eq. NP. In general, this yields an upper bound on the robust radius: the smoothed classifier cannot be certified against any particular perturbation v with G_q(1 − p, v) ≥ 1/2. With sufficient symmetry, e.g. an ℓ2 adversary and distributions with spherical level sets, this upper bound becomes tight for well-chosen v, and we can build a lookup table of certified radii. See Algorithms 1 and 2.
4.2 The Differential Method
To derive certification (robust radius lower bounds) for more general distributions, we propose a differential method, which can be thought of as a vast generalization of the proof in Salman et al. (2019a) of the Gaussian robust radius. The idea is to compute the largest possible infinitesimal increase in measure due to an infinitesimal adversarial perturbation. More precisely, given a norm ‖·‖ and a smoothing measure q, we define

Ξ_q(p) := sup { (d/dt) q(U − tv)|_{t=0} : q(U) = p, ‖v‖ ≤ 1 }.   (4)

Intuitively, one can then think of 1/Ξ_q(p) as the smallest possible perturbation in ‖·‖ needed to effect a unit of infinitesimal increase in the measure p. Therefore, {thm}[Appendix G] The robust radius in ‖·‖ is at least

∫_{1/2}^{p} ds / Ξ_q(s),

where p is the probability that the base classifier predicts the right label under random perturbation by q. By exchanging differentiation and integration and applying a similar greedy reasoning as in the Neyman-Pearson lemma, Ξ_q can be derived for many distributions and integrated symbolically to obtain expressions for the robust radius. We demonstrate the technique with a simple example below, but much of it can be automated; see Appendix G.
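As a sanity check on this integral recipe (our sketch, not the paper's code): for N(0, σ²I) against an ℓ2 adversary, the worst-case infinitesimal growth rate at confidence s is φ(Φ⁻¹(s))/σ, attained by a half-space (Salman et al., 2019a). Numerically integrating its reciprocal from 1/2 to p should recover the Gaussian radius σΦ⁻¹(p) of Cohen et al. (2019):

```python
import math

Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))         # Gaussian CDF
pdf = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # Gaussian density

def Phi_inv(p):
    lo, hi = -10.0, 10.0
    for _ in range(60):  # bisection is plenty accurate for an illustration
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
    return (lo + hi) / 2

sigma, p, n = 0.5, 0.9, 2000
# Certified radius = integral over s in (1/2, p) of sigma / pdf(Phi_inv(s)),
# evaluated here with the midpoint rule.
radius = sum((p - 0.5) / n * sigma / pdf(Phi_inv(0.5 + (p - 0.5) * (i + 0.5) / n))
             for i in range(n))
print(round(radius, 4), round(sigma * Phi_inv(p), 4))  # the two should agree
```

The agreement reflects that for the Gaussian the differential method is exactly tight, not just tight for infinitesimal perturbations.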
[see Section N.1] If the smoothing distribution is , then the robust radius against an adversary is at least
when p is the probability of the correct class, as in Section 4.2.
Proof Sketch.
By linearity in , we WLOG assume . By Section 4.2 and the monotonicity of , it suffices to show that for For any fixed with ,
Note , where , is the th unit vector, and . Additionally, the above integral is linear in , so the supremum over is achieved on one of the vertices of the ball. So we may WLOG consider only ; furthermore, due to symmetry of , we can just assume :
where ranges over all . Note if , and otherwise. Thus, to maximize subject to the constraint that , we should put as much mass on those with large . For , we thus should occupy the entire region , which has mass , and then assign the rest of the mass (amounting to ) to the region , which has mass . This shows that
as desired. ∎
4.3 Comparison of the Two Methods and Prior Works
We summarize the distributions our methods cover in Fig. 1 and the bounds we derive in Table 2. We highlight a few broadly applicable robustness guarantees: {exmp}[Section M.1] Let φ be convex and even, and let Λ⁻¹ be the inverse CDF of the 1D random variable with density ∝ e^{−φ}. If q(z) ∝ ∏_i e^{−φ(z_i)}, and p is the probability of the correct class, then the robust radius in ℓ1 is
Λ⁻¹(p),
and this radius is tight. This in particular recovers the Gaussian bound of Cohen et al. (2019), the Laplace bound of Teng et al. (2019), and the Uniform bound of Lee et al. (2019) in the setting of an ℓ1 adversary.
[Sections O.1 and N.1] Facing an ℓ1 adversary, cubical distributions, like that in Section 4.2, typically enjoy, via the differential method, robust radii of the form given in Table 2, scaling linearly in the noise scale λ, with a constant depending on the distribution.
In general, the level set method always gives certificates as tight as Neyman-Pearson, while the differential method is tight only for infinitesimal perturbations, though it can be shown to be tight for certain families, like in Section 4.3 above. On the other hand, the latter often yields efficiently evaluable symbolic expressions and applies to more general distributions, while the former in general only yields a table of robust radii, and only for distributions whose level sets are sufficiently symmetric (such as spheres or cubes).
For distributions that are covered by both methods, we compare the bounds obtained and note that the differential and level set methods yield almost identical robustness certificates in high dimensions (e.g. the number of pixels in CIFAR-10 or ImageNet images). See Appendix B.
Many earlier works used differential privacy or divergence methods to compute robust radii of smoothed models (Lecuyer et al., 2018; Li et al., 2019; Dvijotham et al., 2019). In particular, Dvijotham et al. (2019) proposed a general divergence framework that subsumed all such works. Our robust radii are computed only from the probability p of the top class; Dvijotham et al. called this the “information-limited” setting, and we compare with their robustness guarantees of this type. While their algorithm in a certain limit becomes as good as Neyman-Pearson, in practice, outside the Gaussian distribution, their robust radii are too loose. This is evident from comparing our baseline Laplace results in Table 1 with theirs, which are trained the same way. Additionally, our differential method often yields symbolic expressions for robust radii, making the certification algorithm easy to implement, verify, and run. Moreover, we derive robustness guarantees for many more (distribution, adversary) pairs (Tables 1 and 2). See Appendix E for a more detailed comparison.
5 Wulff Crystals
A priori, it is a daunting task to understand the relationship between the adversary’s allowed set B and the smoothing distribution q. In this section, we begin our investigation by looking at uniform distributions, and then end with an optimality theorem for all “reasonable” distributions.
Let q be the uniform distribution supported on a measurable set U. WLOG, assume U has (Lebesgue) volume 1. Then for any p ∈ [0, 1] and any v,
G_q(p, v) = min(1, p + vol(U \ (U + v))).
This can be seen easily by taking the maximizing set in the growth function to be a volume-p subset of U ∩ (U + v) (or any volume-p set containing U ∩ (U + v) if p exceeds its volume), unioned with the complement of U. For example, in the figure here, the maximizing set would be the gray region, if the gray region inside U has volume p.
If U is convex, and we take the perturbation to be an infinitesimal translation εv, then the RHS above is infinitesimally larger than p:
G_q(p, εv) = p + ε · vol_{d−1}(Π_v U) + O(ε²),
where Π_v U is the projection of U along the direction v, and vol_{d−1} is its (d − 1)-dimensional Lebesgue measure. A similar formula holds when U is not convex as well (Eq. 6). In the context of randomized smoothing, this means that the classifier smoothed by q is robust at x under a perturbation εv when ε is small and p̄ < 1/2 is the probability that the base classifier misclassifies x + δ, δ ∼ q. Thus, for ε small, we have
Ξ_q ≈ sup_{v ∈ B} vol_{d−1}(Π_v U),
with Ξ_q as in Eq. 4. The smaller this quantity is, the more robust the smoothed classifier is, for a fixed p̄. A natural question, then, is: among convex sets U of volume 1,
which set minimizes it?
If B is the ℓ1 ball, the reader might guess that U should be either the ℓ1 ball or the ℓ2 ball.
It turns out the correct answer, at least in the case when B is a highly symmetric polytope (e.g. an ℓp ball), is a kind of energy-minimizing crystal studied in physics since 1901 (Wulff, 1901).
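As a concrete instance of the uniform-smoothing growth formula, consider cubical noise U = [-λ, λ]^d against an ℓ1 adversary. The translation overlap factors across coordinates, and by log-concavity the overlap is minimized by spending the whole ℓ1 budget on one coordinate. A small sketch, with hypothetical parameters:

```python
import math

def cube_overlap(v, lam):
    """Fraction of the cube [-lam, lam]^d remaining after translating it by v."""
    return math.prod(max(0.0, 1 - abs(vi) / (2 * lam)) for vi in v)

lam, d, p = 1.0, 4, 0.9
# Growth of the error mass under a shift v is (1 - p) + (1 - overlap). For an
# l1 budget eps, the worst shift is axis-aligned, so robustness requires
# eps / (2 * lam) < p - 1/2, i.e. a certified l1 radius of lam * (2p - 1).
eps = lam * (2 * p - 1)
worst = (1 - p) + (1 - cube_overlap([eps, 0, 0, 0], lam))  # axis-aligned shift
spread = (1 - p) + (1 - cube_overlap([eps / d] * d, lam))  # spread-out shift
print(round(worst, 4), round(spread, 4))  # worst hits exactly 0.5; spread stays below
```

So at the certified radius the axis-aligned shift exactly exhausts the 1/2 threshold of Eq. 2, while any shift spreading the budget across coordinates is strictly less harmful.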
{defn}
The Wulff Crystal (w.r.t. B) is defined as the unit ball of the norm dual to v ↦ E_w |⟨v, w⟩|, where w is sampled uniformly from the vertices of B.
The Wulff Crystal minimizes Ξ_q (Eq. 4) among all measurable (not necessarily convex) sets of the same volume, when B is sufficiently symmetric (e.g. an ℓp ball). When the vertex set of B is finite, the Wulff Crystal has an elegant description as the zonotope of B, i.e. the Minkowski sum of the vertices of B as vectors (Section J.1), from which we can derive the following examples. {exmp} The Wulff Crystal w.r.t. the ℓ2 ball is the ℓ2 ball itself. The Wulff Crystal w.r.t. the ℓ1 ball is a cube (ℓ∞ ball). The Wulff Crystal w.r.t. the ℓ∞ ball in 2 dimensions is a rhombus; in 3 dimensions, it is a rhombic dodecahedron; in higher dimension d, there is no simpler description of it other than the zonotope of the vectors {±1}^d.
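The zonotope description is easy to compute in low dimensions. The helper below (our illustration) enumerates the sign combinations of the generators; for the 2D ℓ∞ ball, whose vertex directions reduce to the two generators (1, 1) and (1, −1), it returns the corners of the promised rhombus:

```python
from itertools import product

def zonotope_vertices(generators):
    """Sums sum_i s_i * v_i over all sign patterns s in {-1, +1}^n; the zonotope
    (Minkowski sum of the segments [-v_i, v_i]) is the convex hull of these points."""
    pts = set()
    for signs in product([-1, 1], repeat=len(generators)):
        pts.add(tuple(sum(s * vi for s, vi in zip(signs, coords))
                      for coords in zip(*generators)))
    return sorted(pts)

print(zonotope_vertices([(1, 1), (1, -1)]))  # -> [(-2, 0), (0, -2), (0, 2), (2, 0)], a rhombus
print(zonotope_vertices([(1, 0), (0, 1)]))   # -> corners of a cube (Wulff Crystal of the l1 ball)
```

The second call uses the ℓ1-ball vertex directions e_1, e_2 and returns the four corners of a square, matching the example above.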
In fact, distributions with Wulff Crystal level sets more generally maximize the robust radii for “hard” inputs.
{thm}[Appendix L, informal]
Let B be sufficiently symmetric, and let q range over all distributions with a “reasonable” density and prescribed superlevel-set volumes. Then Ξ_q (Eq. 4) is minimized by the unique distribution whose superlevel sets are proportional to the Wulff Crystal w.r.t. B.
This theorem implies that distributions with Wulff Crystal level sets give the best robust radii for those hard inputs that a smoothed classifier classifies correctly but only barely, in that the probability of the correct class is 1/2 + ε for some small ε. The constraint on the volumes of the superlevel sets indirectly controls the variance of the distribution. While this theorem says nothing about the robust radii for class probabilities away from 1/2, we find the Wulff Crystal distributions empirically to be highly effective, as we describe next in Section 6.
6 Experiments
We empirically study the performance of different smoothing distributions on standard image classification datasets, using the bounds derived via the level set or the differential method, and verify the predictions made by the Wulff Crystal theory. We follow the experimental procedure described by Cohen et al. (2019) and further works on randomized smoothing (Salman et al., 2019a; Li et al., 2019), using the ImageNet (Deng et al., 2009) and CIFAR-10 (Krizhevsky, 2009) datasets.
The certified accuracy at radius r is defined as the fraction of the test set which the smoothed classifier both correctly classifies and certifies robust at radius r. All results were certified with N = 100,000 samples and failure probability α = 0.001.
For each distribution q, we train models across a range of scale parameters λ (see Table 2), corresponding to the same range of noise variances across the different distributions. Then we calculate for each model the certified accuracies across the range of radii considered. Finally, in our plots, we present, for each distribution, the upper envelope of certified accuracies attained over the range of λ considered. Further details of the experimental procedure are described in Appendix D.
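For the Gaussian case, the certification step reduces to a binomial confidence interval plus a closed-form radius. A minimal sketch (our illustration, not the paper's exact code; it assumes Clopper-Pearson intervals via scipy and Cohen et al.'s σΦ⁻¹(p) radius):

```python
from scipy.stats import beta, norm

def certify_gaussian(n_correct, n_total, sigma, alpha=0.001):
    """Certified l2 radius sigma * Phi^{-1}(p_lower), or None to abstain."""
    # One-sided Clopper-Pearson lower confidence bound on the top-class probability.
    p_lower = beta.ppf(alpha, n_correct, n_total - n_correct + 1)
    if p_lower <= 0.5:
        return None  # cannot certify: the lower bound does not clear a majority
    return sigma * norm.ppf(p_lower)

print(certify_gaussian(99_000, 100_000, sigma=0.5))  # roughly 1.15
print(certify_gaussian(50_000, 100_000, sigma=0.5))  # None: no certifiable majority
```

The failure probability α is exactly the chance that the reported radius is invalid for a given input, which is why the confidence bound is one-sided and conservative.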
6.1 ℓ1 Adversary
As previously mentioned, the Wulff Crystal for the ℓ1 ball is a cube. With this motivation, we explore certified accuracies attained by distributions with cubical level sets.

Uniform,

Exponential,

Power law,
We compare to previous state-of-the-art approaches using the Gaussian and Laplace distributions, as well as new non-cubical distributions.

Exponential (non-cubical),

Pareto i.i.d. (non-cubical),
The relevant certified bounds are given in Table 2.
We obtain state-of-the-art ℓ1 robust certificates for ImageNet and CIFAR-10, finding that the Uniform distribution performs best, significantly better than the Gaussian and Laplace distributions (Table 1, Fig. 2). The other distributions with cubical level sets match but do not exceed the performance of the Uniform distribution, after sweeping hyperparameters. This verifies that distributions with cubical level sets are significantly better for ℓ1 certified accuracy than those with spherical or cross-polytope level sets. Full results for the distributions not shown are available in Appendix C.
6.2 ℓ2 Adversary
6.3 ℓ∞ Adversary
The Wulff Crystal for the ℓ∞ ball is the zonotope of the vectors {±1}^d, which is a highly complex polytope, hard to sample from and related to many open problems in polytope theory (Ziegler, 1995). However, we note that it is approximated by a sphere up to a constant ratio (Section K.2), and in high dimension d, the sphere gets closer and closer to minimizing Ξ_q (Footnote 4), while the cube and the cross-polytope do not (Section K.2). Accordingly, we find that distributions with spherical level sets outperform those with cubical or cross-polytope level sets in certifying ℓ∞ robustness (Fig. 3, right). In fact, in the next section we show that, up to a dimension-independent factor, the Gaussian distribution is optimal for defending against an ℓ∞ adversary if we do not use a more powerful technique than Neyman-Pearson.
7 No-Go Results for Randomized Smoothing
In this section, we demonstrate a theoretical limit to using Neyman-Pearson style bounds to derive meaningful randomized smoothing schemes for many norms, including ℓ∞. In essence, we formalize an inherent tension between (1) needing noise variance large enough to be robust and (2) needing noise variance small enough to avoid trivializing the smoothed classifier. This tension becomes problematic against ℓp adversaries in high dimensions when p is large. This motivates the following definition: {defn} Let ‖·‖ be a norm over ℝ^d, and let q be a smoothing distribution. We say that q satisfies useful smoothing with respect to ‖·‖ if:

(Robustness) For all with , if is any set satisfying , then .

(Accuracy) For all with , there exists so that .
Finally, we define the optimal tradeoff by minimizing over all q such that q is useful. We pause to interpret this definition. Recall that given a smoothing distribution q, a point x, and a binary base classifier (identified with its decision region U), the smoothed classifier outputs the majority class, where p = q(U − x) is the “confidence” of this prediction (Eq. 1). Randomized smoothing (via Neyman-Pearson) tells us that, if p is large enough, then, no matter what U is, a small perturbation of x cannot decrease p enough to change the prediction (Eq. 2). In particular, for this guarantee to be valid, Condition (1), with some setting of parameters, has to hold. Intuitively, if the required noise scale is small, then Neyman-Pearson can certify a large radius when p is moderately large; but if q is not useful at any small scale, then Neyman-Pearson cannot yield a large radius unless p is very close to 1.
Condition (2) says that the resulting smoothing should not “collapse” points: in particular, if are far in norm, then there should be some smoothed classifier ( in Section 7) that distinguishes them. If all we care about is robustness, then the optimal strategy would set to be an arbitrarily wide distribution (say, e.g. a wide Gaussian), and the resulting smoothed classifier is roughly constant. Of course, such a smoothed classifier can never achieve good clean accuracy, so it is not useful. Condition (2) formalizes this notion. Note that it is a relatively weak assumption, for two reasons. First, for any pair violating Condition (2), by linearity is a pair violating this condition, for all . Hence, any single violation of Condition (2) implies that there is an entire direction – namely, along the direction — which the randomized smoothing mechanism collapses. Second, for Condition (2) to be satisfied, the which distinguishes these two points can be completely arbitrary. Thus, if it is violated when , the two distributions are indistinguishable by any statistical test in high dimension, implying the impossibility of classification after smoothing.
Finally, the ratio above measures the natural tradeoff: if it is large, then the scheme is either not robust, or cannot attain good clean accuracy.
Randomized Smoothing as Metric Embedding A randomized smoothing scheme can be thought of as a mapping from a normed space (ℝ^d, ‖·‖) to the space of distributions: each point x is mapped to the distribution of x + δ, δ ∼ q.
In this perspective, the definition above is roughly equivalent to a bi-Lipschitz condition on this mapping, where the target distributions are equipped with the total variation distance.
Then the existence of a useful smoothing scheme is equivalent to whether (ℝ^d, ‖·‖) can be embedded with low distortion into the space of distributions under total variation.
Classical mathematics has a definitive answer to this question in the form of a geometric invariant, called the cotype.
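To see the "collapse" side of the tradeoff concretely in the Gaussian case (our illustration; the closed form TV(N(x, σ²I), N(y, σ²I)) = 2Φ(‖x − y‖₂/(2σ)) − 1 is standard): as σ grows with ‖x − y‖ fixed, the smoothed images of x and y converge in total variation, so no smoothed classifier can distinguish them.

```python
import math

Phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))  # standard normal CDF

def tv_gaussians(dist, sigma):
    """Total variation distance between N(x, sigma^2 I) and N(y, sigma^2 I)
    with ||x - y||_2 = dist; only the 1D component along x - y matters."""
    return 2 * Phi(dist / (2 * sigma)) - 1

# Fixed ||x - y|| = 1; wider noise collapses the two smoothed images,
# capping the accuracy of ANY test distinguishing them at (1 + TV) / 2.
for sigma in [0.5, 2.0, 8.0, 32.0]:
    print(sigma, round(tv_gaussians(1.0, sigma), 4))
```

This is exactly the failure mode Condition (2) rules out: robustness bought purely with a wider distribution trades away all distinguishing power.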
{defn}[see e.g. Wojtaszczyk (1996)]
A normed space X is said to have cotype q, for q ≥ 2, if there exists a constant C < ∞ such that for all finite sequences x_1, …, x_n ∈ X, we have
(Σ_{i=1}^n ‖x_i‖^q)^{1/q} ≤ C · E ‖Σ_{i=1}^n ε_i x_i‖,
where the ε_i are independent Rademacher random variables.
The smallest C which satisfies this constraint is denoted C_q(X).
When the underlying space of the normed space is ℝ^d, John’s theorem (John, 1948) implies that any norm has cotype 2 with C_2 ≤ √d.
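To make the definition concrete, the standard-basis witness (our illustration) shows the cotype-2 constant of (ℝ^d, ‖·‖_∞) must be at least √d: every Rademacher sum of basis vectors is a ±1 vector of ℓ∞ norm exactly 1, while the left-hand side of the cotype inequality is √d.

```python
import math, random

def linf_rademacher_avg(d, trials=2000, seed=0):
    """Monte Carlo estimate of E || sum_i eps_i e_i ||_inf over random signs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        signs = [rng.choice([-1, 1]) for _ in range(d)]
        total += max(abs(s) for s in signs)  # l_inf norm of the sign vector
    return total / trials

d = 100
lhs = math.sqrt(d)               # (sum_i ||e_i||_inf^2)^{1/2}
rhs = linf_rademacher_avg(d)     # equals 1 for every sign pattern
print(lhs / rhs)  # -> 10.0: C_2 of l_inf^100 is at least sqrt(100)
```

Combined with John's theorem upper bound of √d, this pins the cotype-2 constant of ℓ∞^d at order √d, which is the source of the 1/√d radius barrier below.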
Because C_q(X) lower bounds the distortion of a metric embedding of X, by the aforementioned connection with randomized smoothing, the cotype constant also limits the usefulness of any smoothing scheme on X:
{thm}
Let X = (ℝ^d, ‖·‖) for any norm ‖·‖ over ℝ^d. Then no distribution q is a useful smoothing distribution for X at radius more than O(1/C_2(X)).
In particular, it is well known that C_2((ℝ^d, ‖·‖_∞)) = Θ(√d). Then, setting ‖·‖ = ‖·‖_∞, we get that no smoothing distribution can be useful against ℓ∞ perturbations at radius ω(1/√d).
Moreover, this is matched, up to constants, for all ℓp with p ≥ 2, by taking q to be a Gaussian. Thus, up to constants, Gaussian randomized smoothing is optimal, in the sense of Section 7 (but distinguish this from the optimality notion in Section 5), for all ℓp for p ≥ 2.
Discussion
The definition and theorem above present a strong barrier to extending randomized smoothing to norms such as ℓ∞. In words:
Without using more than the probability of correctly classifying an input under random noise, no smoothing technique can achieve nontrivial certified accuracy at ℓ∞ radius ω(1/√d).
However, we point out two ways to bypass this barrier.
For one, more information about the base classifier can be collected to produce better robustness certificates.
In fact, Dvijotham et al. (2019) proposed a “full-information” algorithm that computes many moments of the base classifier in a convex optimization procedure to improve the certified radius, but it is 100 times slower than the “information-limited” algorithms we discuss here, which use only the top-class probability p.
It would be interesting to see whether this technique can be scaled up, and whether other methods can leverage more information.
Another way to bypass our barrier is to go outside translational smoothing schemes, where any input x is made noisy by adding some δ ∼ q, for a fixed distribution q. This is the form covered by our no-go result. Instead of associating the distribution of x + δ to every input x, one could consider more general distributions that are not just translations of some fixed q. For example, Levine & Feizi (2019a) feed a randomly sampled subset of pixels of the image into the base classifier to obtain a certified defense against ℓ0 perturbations. In other words, this mechanism associates to each point x the uniform distribution over random subsets of x’s pixels, and our no-go result does not apply to this scheme. However, to the best of our knowledge, no scheme for ℓ∞ certification has been devised that is not translational.
Finally, we formulated our no-go result in the setting of binary classification, and it is not clear whether a similarly strong barrier applies to multiclass classification. However, current techniques for certification only look at the two most likely classes, and separately reason about how much each one can change by perturbing the input. Our no-go result then straightforwardly applies to this case as well.
8 Conclusion
In this work, we have shown how far we can push randomized smoothing with different smoothing distributions against different adversaries, by presenting two new techniques for deriving robustness guarantees, by elucidating the geometry connecting the noise and the norm, and by empirically achieving the state of the art in provable ℓ1 defense. At the same time, we have shown the limit current techniques face against ℓp adversaries when p > 2, especially ℓ∞. Our results point out ways to bypass this barrier, by either leveraging more information about the base classifier or by designing non-translational smoothing schemes. We wish to investigate both directions in the future.
More broadly, randomized smoothing is a method for inducing stability in a mechanism while maintaining utility — precisely the bread and butter of differential privacy. We suspect our methods for deriving robustness guarantees and for optimizing the noise distribution can be useful in that setting as well, where Laplace and Gaussian noise dominate the discussion. Whereas previous work (Lecuyer et al., 2018) has applied differential privacy tools to randomized smoothing, we hope to go the other way around in the future.
Acknowledgement
We thank Huan Zhang for brainstorming of ideas and performing a few experiments that unfortunately did not work out. We also thank Aleksandar Nikolov, Sebastien Bubeck, Aleksander Madry, Zico Kolter, Nicholas Carlini, Judy Shen, Pengchuan Zhang, and Maksim Andriushchenko for discussions and feedback.
Appendix A Table of Certified Radii
Distribution  Density  Adv.  Certified radius  Reference 

iid Log Concave  Section M.1  
iid Log Convex*  for , see  Section M.1  
Exp.  Section M.1  
Exp.  for , see  Section M.1  
Gaussian  Appendix H  
Symmetry  
Symmetry  
Laplace  Appendix I  
see  Section P.2  
Exp.  Section N.1  
Section N.2  
Exp.  Section R.1  
Symmetry  
Symmetry  
Uniform  Section N.1  
Section N.2  
Uniform  Appendix S  
Symmetry  
Symmetry  
General Exp.  Section N.1  
for , see  Section N.2  
General Exp.  level set method  Section T.1  
General Exp.  for , see  Section P.1  
for , see  Section P.2  
Power Law  Section O.1  
Section O.2  
Power Law  level set method  Section T.1  
Pareto (i.i.d.)  Section Q.1 
Distribution  Density  

Exp.  
Gaussian  
Laplace  
Exp.  
Exp.  
Uniform  
Uniform  
General Exp.  
General Exp.  
General Exp.  
Power Law  
Power Law  
Pareto (i.i.d.) 
Appendix B Level Set Method vs Differential Method
Here we concretely compare the robust radii obtained from the level set method with those obtained from the differential method, for various input dimensions d (we scale the distributions so that each coordinate has constant size). For convenience, the robust radius from the differential method is given in Section R.1.
The robust radii from the level set method are computed as in Section T.1, and they are tight. As we see in Fig. 4, the differential method is very slightly loose in low dimensions, but in high dimensions, such as those of CIFAR-10 or ImageNet images, the robust radii obtained from both methods are indistinguishable.