Optimal bounds for monotonicity and Lipschitz testing over hypercubes and hypergrids
Abstract
The problem of monotonicity testing over the hypergrid and its special case, the hypercube, is a classic, well-studied, yet unsolved question in property testing. We are given query access to $f:[n]^d \mapsto R$ (for some ordered range $R$). The hypergrid/cube has a natural partial order given by coordinate-wise ordering, denoted by $\prec$. A function is monotone if for all pairs $x \prec y$, $f(x) \leq f(y)$. The distance to monotonicity, $\varepsilon_f$, is the minimum fraction of values of $f$ that need to be changed to make $f$ monotone. For $n=2$ (the boolean hypercube), the usual tester is the edge tester, which checks monotonicity on adjacent pairs of domain points. It is known that the edge tester using $O(\varepsilon^{-1}d\log|R|)$ samples can distinguish a monotone function from one where $\varepsilon_f > \varepsilon$. On the other hand, the best lower bound for monotonicity testing over general $R$ is $\Omega(d)$. We resolve this long-standing open problem and prove that $O(d/\varepsilon)$ samples suffice for the edge tester. For hypergrids, existing testers require $O(\varepsilon^{-1}d\log n\log|R|)$ samples. We give a (nonadaptive) monotonicity tester for hypergrids running in $O(\varepsilon^{-1}d\log n)$ time, recently shown to be optimal. Our techniques lead to optimal property testers (with the same running time) for the natural Lipschitz property on hypercubes and hypergrids. (A $c$-Lipschitz function is one where $|f(x)-f(y)| \leq c\|x-y\|_1$.) In fact, we give a general unified proof of $O(\varepsilon^{-1}d\log n)$-query testers for a class of “bounded-derivative” properties that contains both monotonicity and Lipschitz.
A preliminary version of this result appeared as [Chakrabarty and Seshadhri (2013a)].
1 Introduction
Monotonicity testing over hypergrids [Goldreich et al. (2000)] is a classic problem in
property testing. We focus on functions $f:[n]^d \mapsto R$, where the domain, $[n]^d$, is the hypergrid and the range, $R$, is a total order.
The hypergrid/hypercube defines the natural coordinate-wise partial order: $x \prec y$ iff $x_i \leq y_i$ for all $i \in [d]$ and $x \neq y$. A function $f$ is monotone if $f(x) \leq f(y)$ whenever $x \prec y$.
The distance to monotonicity, denoted by $\varepsilon_f$,
is the minimum fraction of places at which $f$ must be changed to make it monotone. Formally,
if $\mathcal{M}$ is the set of all monotone functions, $\varepsilon_f = \min_{g \in \mathcal{M}} \Pr_x[f(x) \neq g(x)]$, where $x$ is drawn uniformly from the domain.
Given a parameter $\varepsilon \in (0,1)$, the aim is to
design a randomized algorithm for the following problem. If $\varepsilon_f = 0$ (meaning $f$ is monotone),
the algorithm must accept with probability at least $2/3$, and if $\varepsilon_f > \varepsilon$, it must reject with
probability at least $2/3$. If $0 < \varepsilon_f \leq \varepsilon$, then any answer
is allowed.
Such an algorithm is called a monotonicity tester.
The quality of a tester is determined by the number of queries to .
A one-sided tester accepts with probability $1$ if the function is monotone.
A nonadaptive tester decides all of its queries in advance, so
the queries are independent of the answers it receives.
Monotonicity testing has been studied extensively in the past decade [Ergun et al. (2000), Goldreich et al. (2000), Dodis et al. (1999), Lehman and Ron (2001), Fischer et al. (2002), Ailon and
Chazelle (2006), Fischer (2004), Halevy and
Kushilevitz (2008), Parnas
et al. (2006), Ailon
et al. (2006), Batu
et al. (2005), Bhattacharyya et al. (2009), Briët et al. (2012), Blais
et al. (2012)].
Of special interest is the hypercube domain, $\{0,1\}^d$.
[Goldreich et al. (2000)] introduced the edge tester. Let $E$ be the set of pairs
that differ in precisely one coordinate (the edges of the hypercube).
The edge tester picks a pair in $E$ uniformly at random and checks if monotonicity is satisfied by this pair. For the boolean range, [Goldreich et al. (2000)] prove that $O(d/\varepsilon)$ samples suffice
to give a bona fide monotonicity tester. [Dodis et al. (1999)] subsequently showed that $O(\varepsilon^{-1}d\log|R|)$ samples suffice for a general range $R$. In the worst case, $\log|R| = \Omega(d)$, and so this gives an $O(d^2/\varepsilon)$-query tester. The best known general lower bound is $\Omega(d)$
[Blais
et al. (2012)].
It has been an outstanding open problem in property testing (see Question 5 in the
Open Problems list from the Bertinoro Workshop []) to give
an optimal bound for monotonicity testing over the hypercube.
We resolve this by showing that the edge tester is indeed
optimal (when ).
{theorem} The edge tester is an $O(d/\varepsilon)$-query nonadaptive, one-sided monotonicity tester for functions $f:\{0,1\}^d \mapsto R$.
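In pseudocode, the edge tester is simply repeated sampling of random hypercube edges. The sketch below is ours, not the paper's; the function name and the constant in the trial count are illustrative.

```python
import random

def edge_tester(f, d, eps, trials=None):
    """One-sided edge tester for monotonicity on the hypercube {0,1}^d.

    f maps d-bit tuples to a totally ordered range. A monotone function is
    always accepted; a function that is eps-far is rejected with constant
    probability for a suitable constant in `trials`.
    """
    if trials is None:
        trials = max(1, int(10 * d / eps))  # O(d / eps) samples
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(d)]
        i = random.randrange(d)            # a uniformly random edge:
        lo, hi = list(x), list(x)          # endpoints differ only in coord i
        lo[i], hi[i] = 0, 1
        if f(tuple(lo)) > f(tuple(hi)):    # monotonicity violated on this edge
            return False                   # reject
    return True                            # accept (one-sided)
```

Note that the tester is nonadaptive (the queried edges do not depend on the answers) and one-sided (it rejects only upon finding an explicit violated edge).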
For general hypergrids $[n]^d$, [Dodis et al. (1999)] give an $O(\varepsilon^{-1}d\log n\log|R|)$-query monotonicity tester.
Since $|R|$ can be as large as $n^d$, this gives an $O(\varepsilon^{-1}d^2\log^2 n)$-query tester.
In this paper, we give an $O(\varepsilon^{-1}d\log n)$-query monotonicity tester on hypergrids that generalizes the edge tester.
This tester is also a uniform pair tester, in the sense that it defines a set $\mathcal{P}$ of pairs, picks a pair uniformly at random from it, and checks for monotonicity on this pair. The pairs in $\mathcal{P}$ also differ in exactly one coordinate, as in the edge tester.
{theorem}
There exists a nonadaptive, one-sided $O(\varepsilon^{-1}d\log n)$-query monotonicity tester for functions $f:[n]^d \mapsto R$.
{remark}
Subsequent to the conference version of this work, the authors proved
an $\Omega(\varepsilon^{-1}d\log n)$-query lower bound for monotonicity testing on the hypergrid
for any (adaptive, two-sided error) tester [Chakrabarty and
Seshadhri (2013b)]. Thus, both the above theorems
are optimal.
A property that has been studied recently is that of a function being Lipschitz:
a function $f$ is called Lipschitz if $|f(x)-f(y)| \leq \|x-y\|_1$ for all $x, y$.
The Lipschitz testing question was introduced by [Jha and
Raskhodnikova (2011)], who show that for the range $\delta\mathbb{Z}$,
$O(\varepsilon^{-1}d^2)$ queries suffice for Lipschitz testing.
For general hypergrids, [Awasthi et al. (2012)] recently gave a tester
for the same range.
[Blais
et al. (2014)] prove a lower bound of $\Omega(d\log n)$ queries for
nonadaptive Lipschitz testers (for a sufficiently large range).
We give a tester for the Lipschitz property that improves all known results and matches
existing lower bounds. Observe that the following holds for arbitrary ranges.
{theorem}
There exists a nonadaptive, one-sided $O(\varepsilon^{-1}d\log n)$-query Lipschitz tester for functions $f:[n]^d \mapsto \mathbb{R}$.
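For intuition, a Lipschitz analogue of the edge tester checks $|f(x)-f(y)| \leq 1$ on random hypercube edges. The sketch below is our illustration for the hypercube case only; the actual hypergrid tester uses the richer pair set of Section 6.

```python
import random

def lipschitz_edge_tester(f, d, eps, trials=None):
    """One-sided tester sketch for the Lipschitz property on {0,1}^d.

    An edge (x, y) of the hypercube has ||x - y||_1 = 1, so the Lipschitz
    constraint on that edge reads |f(x) - f(y)| <= 1.
    """
    if trials is None:
        trials = max(1, int(10 * d / eps))  # illustrative trial count
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(d)]
        i = random.randrange(d)
        y = list(x)
        y[i] = 1 - y[i]                     # flip one coordinate
        if abs(f(tuple(x)) - f(tuple(y))) > 1:  # Lipschitz violated on edge
            return False
    return True
```

As with the edge tester, acceptance of a Lipschitz function is certain; rejection happens only on an explicit violated pair.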
Our techniques apply to a class of properties that contains monotonicity and Lipschitz. We call it the bounded-derivative property, or more technically, the $(\alpha,\beta)$-Lipschitz property.
Given parameters $\alpha, \beta$ with $\alpha \leq \beta$, we say that a function
has the $(\alpha,\beta)$-Lipschitz property if for any $x$, and $y$ obtained by increasing exactly
one coordinate of $x$ by exactly $1$, we have $\alpha \leq f(y) - f(x) \leq \beta$. Note that when
$(\alpha,\beta) = (0,\infty)$ we recover monotonicity, and when $(\alpha,\beta) = (-1,1)$ we recover the Lipschitz property.
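As a concrete illustration, the unified pair check is a single predicate; the instantiations $(0,\infty)$ for monotonicity and $(-1,1)$ for Lipschitz below are our reading of the definition, and the function names are ours.

```python
import math

def violates_bounded_derivative(fx, fy, alpha, beta):
    """True iff a pair (x, y), with y = x + e_i, violates
    alpha <= f(y) - f(x) <= beta."""
    return not (alpha <= fy - fx <= beta)

# Monotonicity is the instance (alpha, beta) = (0, +inf):
def is_mono_violation(fx, fy):
    return violates_bounded_derivative(fx, fy, 0, math.inf)

# The Lipschitz property is the instance (alpha, beta) = (-1, 1):
def is_lip_violation(fx, fy):
    return violates_bounded_derivative(fx, fy, -1, 1)
```

A single analysis of the predicate `violates_bounded_derivative` thus covers both properties at once, which is the point of the unified proof.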
1.1 Previous work
We discuss some other previous work on monotonicity testers for hypergrids. For the total order (the case $d=1$), which has been called the monotonicity testing problem on the line, [Ergun et al. (2000)] give an $O(\varepsilon^{-1}\log n)$-query tester, and this is optimal [Ergun et al. (2000), Fischer (2004)]. Results for general posets were first obtained by [Fischer et al. (2002)]. The elegant concept of TC spanners introduced by [Bhattacharyya et al. (2009)] gives a general class of monotonicity testers for various posets. It is known that such constructions give testers with polynomial dependence on $\log n$ for the hypergrid [Bhattacharyya et al. (2012)]. For constant $d$, [Halevy and Kushilevitz (2008), Ailon and Chazelle (2006)] give testers with query complexity logarithmic in $n$ (although the dependency on $d$ is exponential). From the lower bound side, [Fischer et al. (2002)] first prove a (nonadaptive, one-sided) lower bound for hypercubes. [Briët et al. (2012)] give a lower bound for nonadaptive, one-sided testers, and a breakthrough result of [Blais et al. (2012)] proves a general $\Omega(d)$ lower bound.

Testing the Lipschitz property is a natural question that arises in many applications. For instance, given a computer program, one may wish to test the robustness of the program’s output to perturbations of its input. This has been studied before, for instance in [Chaudhuri et al. (2011)]; however, that solution inspects the code to decide whether the program is Lipschitz. The property testing setting is a black-box approach to the problem. [Jha and Raskhodnikova (2011)] also provide an application to differential privacy: a class of mechanisms known as Laplace mechanisms, proposed by [Dwork et al. (2006)], achieves privacy in the process of outputting a function by adding noise proportional to the Lipschitz constant of the function. [Jha and Raskhodnikova (2011)] give numerous results on Lipschitz testing over hypergrids.
They give an $O(\varepsilon^{-1}\log n)$-query tester for the line, a general lower bound for the Lipschitz testing question on the hypercube, and a nonadaptive, one-sided lower bound on the line.
2 The Proof Roadmap
The challenge of property testing is to relate the tester behavior to the distance of the function to the property. Consider monotonicity over the hypercube. To argue about the edge tester, we want to show that a large distance to monotonicity implies many violated edges. Most current analyses of the edge tester go via what we could call the contrapositive route. If there are few violated edges in $f$, then they show the distance to monotonicity is small. This is done by modifying $f$ to make it monotone, and bounding the number of changes as a function of the number of violated edges. There is an inherently “constructive” viewpoint to this: it specifies a method to convert nonmonotone functions to monotone ones. Implementing this becomes difficult when the range is large, and existing bounds degrade with $|R|$. For the Lipschitz property, this route becomes incredibly complex.

A nonconstructive approach may give more power, but how does one get a handle on the distance? The violation graph provides a method. The violation graph has the domain as its vertex set and an edge between any pair of comparable domain points $x \prec y$ if $f(x) > f(y)$. The following theorem can be found as Corollary 2 in [Fischer et al. (2002)]. {theorem}[[Fischer et al. (2002)]] The size of the minimum vertex cover of the violation graph is exactly $\varepsilon_f$ times the size of the domain. As a corollary, the size of any maximal matching in the violation graph is at least $\varepsilon_f/2$ times the size of the domain.

Can a large matching in the violation graph imply that there are many violated edges? [Lehman and Ron (2001)] give an approach by reducing the monotonicity testing problem on the hypercube to routing problems. For any $m$ source-sink pairs on the directed hypercube, suppose at least $\lambda m$ edges need to be deleted in order to pairwise separate them. Then $O(d/(\lambda\varepsilon))$ queries suffice for the edge tester. Therefore, if $\lambda$ is at least a constant, one gets a linear-query monotonicity tester on the cube. Lehman and Ron [Lehman and Ron (2001)] explicitly ask for bounds on $\lambda$. [Briët et al. (2012)] show that $\lambda$ could be as small as $O(1/\sqrt{d})$, thereby putting a bottleneck on the above approach. In the reduction above, the function values are altogether ignored. More precisely, once one moves to the combinatorial routing question on source-sink pairs, the fact that they are related by actual function values is lost. Our analysis crucially uses the values of the function to argue about the structure of the maximal matching in the violation graph.
2.1 It’s all about matchings
The key insight is to move to a weighted violation graph. The weight of a violation depends on the property at hand; for now it suffices to know that for monotonicity, the weight of a violation $(x,y)$ with $x \prec y$ is $f(x) - f(y)$. This can be thought of as a measure of the magnitude of the violation. (Violation weights were also used for Lipschitz testers [Jha and Raskhodnikova (2011)].) We now look at a maximum weight matching $M$ in the violation graph. Naturally, this is maximal as well, so $|M|$ is at least $\varepsilon_f/2$ times the size of the domain. All our algorithms pick a pair uniformly at random from a predefined set $\mathcal{P}$ of pairs, and check the property on that pair. For the hypercube domain, $\mathcal{P}$ is the set of all edges of the hypercube. Our analysis is based on the construction of a one-to-one mapping from pairs in $M$ to violating pairs in $\mathcal{P}$. This mapping implies that the number of violated pairs in $\mathcal{P}$ is at least $|M|$, and thus the uniform pair tester succeeds with probability at least $|M|/|\mathcal{P}|$, implying $O(|\mathcal{P}|/|M|)$ queries suffice to test monotonicity. For the hypercube, $|\mathcal{P}| = d2^{d-1}$ and $|M| \geq \varepsilon_f 2^{d-1}$, giving the final bound of $O(d/\varepsilon)$. To obtain this mapping, we first decompose $M$ into sets $M_1, \ldots, M_d$ such that each pair in $M$ is in at least one $M_i$. Furthermore, we partition $\mathcal{P}$ into perfect matchings $H_1, \ldots, H_d$. In the hypercube case, $M_i$ is the collection of pairs in $M$ whose $i$th coordinates differ, and $H_i$ is the collection of hypercube edges differing only in the $i$th coordinate; for the hypergrid case, the partitions are more involved. We map each pair in $M_i$ to a unique violating pair in $H_i$. For simplicity, let us ignore subscripts and call the matchings $M$ and $H$. We will assume in this discussion that all function values are distinct. Consider the alternating paths and cycles generated by the symmetric difference of $M$ and $H$. Take a point $x$ involved in a pair of $M$, and note that it can only be present as the endpoint of an alternating path. Our main technical lemma shows that each such path contains a violated pair.
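To make the matching quantities concrete, here is a brute-force sketch (our code, not the paper's algorithm) that builds a greedy maximal matching of the violation graph of a function on a small hypercube. The paper works with a maximum weight matching instead; the greedy version below only illustrates the lower bound via maximal matchings.

```python
from itertools import product

def violation_matching(f, d):
    """Greedy maximal matching in the violation graph of f on {0,1}^d.

    f is a dict from d-bit tuples to values. Points x, y form a violation
    edge if x precedes y coordinate-wise but f(x) > f(y). By the vertex
    cover theorem quoted above, any maximal matching has at least half the
    size of a minimum vertex cover, hence at least (eps_f * 2^d) / 2 pairs.
    """
    pts = list(product((0, 1), repeat=d))
    used, matching = set(), []
    for x in pts:
        if x in used:
            continue
        for y in pts:
            if y in used or y == x:
                continue
            # x strictly below y coordinate-wise, with f out of order:
            if all(a <= b for a, b in zip(x, y)) and f[x] > f[y]:
                matching.append((x, y))
                used.update((x, y))
                break
    return matching
```

For example, on $\{0,1\}^2$ a function that is too large at the origin admits a matching of size one, certifying that at least one value must change.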
2.2 Getting the violating pairs
Consider , the pairs of which differ on the th coordinate, and is the
set of edges in the dimension cut along
this coordinate. Let , and say giving us . (We denote the th coordinate
of by .) Recall that the weight of this violation is .
It is convenient to think of as follows. We begin from and take the incident
edge to reach (note that ). Then we take the pair
containing to get . But what if no such pair existed? This can be possible in two ways: either was unmatched or is matched.
If is unmatched, then delete from and add to obtain a new matching. If was not a violation, and therefore
2.3 Attacking the generalized Lipschitz property
One of the challenges in dealing with the Lipschitz property is the lack of direction. The Lipschitz property, defined as $|f(x)-f(y)| \leq \|x-y\|_1$, is an undirected property, as opposed to monotonicity. In monotonicity, a point $x$ only “interacts” with the subcube above and below $x$, while in Lipschitz, constraints are defined between all pairs of points. Previous results for Lipschitz testing require very technical and clever machinery to deal with this issue, since arguments analogous to monotonicity do not work. The alternating paths argument given above for monotonicity also exploits this directionality, as can be seen by the heavy use of inequalities in the informal calculations. Observe that in the monotonicity example for hypergrids in Fig. 1, the fact that (as opposed to ) required the potential (and a whole new proof).

A subtle point is that while the Lipschitz property is undirected, violations to Lipschitz are “directed”. If $(x,y)$ is a violation, then either $f(y) - f(x) > \|x-y\|_1$ or $f(x) - f(y) > \|x-y\|_1$, but never both. This can be interpreted as a direction for violations. In the alternating paths for monotonicity (especially for the hypercube), the partial order relations between successive terms follow a fixed pattern. This is crucial for performing the matching rewiring. As might be guessed, the weight of a violation becomes $|f(x)-f(y)| - \|x-y\|_1$. For the generalized Lipschitz problem, this is defined in terms of a pseudo-distance over the domain. We look at the maximum weight matching as before (and use the same potential function). The notion of “direction” takes the place of the partial order relation in monotonicity. The main technical arguments show that these directions follow a fixed pattern in the corresponding alternating paths. Once we have this pattern, we can perform the matching rewiring argument for the generalized Lipschitz problem.
3 The Alternating Paths Framework
The framework of this section is applicable to all $(\alpha,\beta)$-Lipschitz properties over hypergrids. We begin with two objects: the matching of violating pairs, and a matching of . The pairs in will be aligned along a fixed dimension (denote it by ) with the same distance, called the distance. That is, each pair in will differ only in one coordinate and the difference will be the same for all pairs. We now give some definitions.

: Each pair has a “lower” end and an “upper” end depending on the value of the coordinate at which they differ. We use (resp. ) to denote the set of lower (resp. upper) endpoints. Note that .

straight pairs, : All pairs with both ends in or both in .

cross pairs, : All pairs such that , , and the distance divides .

skew pairs, .

: A set of lower endpoints in .
Consider the domain . We set to be (say) the first dimension cut. is the set of pairs in where . All other pairs () are in since and . There are no skew pairs. The set will be chosen differently for the applications. We require the following technical definition of adequate matchings. This arises because we will use matchings that are not necessarily perfect. A perfect matching is always adequate. {definition} A matching is adequate if for every violation , both and participate in the matching . We will henceforth assume that is adequate. The symmetric difference of and is a collection of alternating paths and cycles. Because is adequate and , any point in is the endpoint of some alternating path (denoted by ). Throughout the paper, denotes an even index, denotes an odd index, and is an arbitrary index.

The first term is .

For even , .

For odd : if is matched, . Otherwise, terminate.
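The walk above can be sketched as code. The conventions here (dict-based matchings storing both directions, even steps along one matching, odd steps along the other, termination at an unmatched point) are our rendering of the definition, not the paper's notation.

```python
def alternating_path(start, M, H):
    """Alternating path in the symmetric difference of matchings M and H.

    M and H are dicts mapping each matched point to its partner (stored in
    both directions). Starting from `start`, even steps follow an H-pair
    and odd steps follow an M-pair; the walk terminates when the matching
    required next leaves the current point unmatched.
    """
    path, cur, step = [start], start, 0
    matchings = (H, M)                 # even steps use H, odd steps use M
    while cur in matchings[step % 2]:
        nxt = matchings[step % 2][cur]
        if nxt in path:                # closed a cycle; stop defensively
            break
        path.append(nxt)
        cur = nxt
        step += 1
    return path
```

For instance, with $H$-pairs $(1,2), (3,4)$ and the single $M$-pair $(2,3)$, the path starting at $1$ visits $1, 2, 3, 4$ and stops, since $4$ is unmatched in $M$.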
We start with a simple property of these alternating paths. {proposition} For , . For nonnegative , . {proof} If is even, then . Therefore, either and or vice versa. If is odd, is a straight pair. So and lie in the same sets. Starting with , a trivial induction completes the proof. The following is a direct corollary of Prop. 3. {corollary} If , . If , . We will prove that every contains a violated pair. Henceforth, our focus is entirely on some fixed sequence .
3.1 The sets and
Our proofs are based on matching rearrangements, and this motivates the definitions in this subsection. For convenience, we denote by . We also set . Consider the sequence , for even . We define
This is simply the set of pairs in up to . We now define . Think of this as follows. We first pair up . Then, we go in order of to pair up the rest. We pick the first unmatched and pair it to the first term of opposite parity. We follow this till is paired. These sets are illustrated in Fig. 2.
involves , while involves .
4 The Structure of for Monotonicity
We now focus on monotonicity, and show that is highly structured. (The proof for general Lipschitz will also follow the same setup, but requires more definitions.) The weight of a pair is defined to be if , and is otherwise. We will assume that all function values are distinct; this is without loss of generality, as we prove formally in Claim 8. Thus violating pairs have positive weight. We choose a maximum weight matching of pairs. Note that every pair in is a violating pair. We remind the reader that for even , and for odd , .
4.1 Preliminary observations
{proposition}For all (or ), iff . Consider pair such that . Then and . {proof} For any point in , is obtained by adding the distance to a specific coordinate. This proves the first part. The distance divides (where is aligned in dimension ) and is a cross pair. Hence is at least the distance. Note that is obtained by simply adding this distance to the coordinate of , so . {proposition} All pairs in and are comparable. Furthermore, and for all even , iff . {proof} All pairs in are in , and hence comparable. Consider pair . Since and is a cross pair, by Prop. 4.1, . Consider pair , where is even. (Refer to Fig. 2.) The pair is in . Hence, the points are comparable and both lie in or . By Prop. 4.1, they inherit their comparability from . For some even , suppose is not a violation. Corollary 3 implies
($*$) 
We will also state an ordering condition on the sequence.
($**$) 
Remember these conditions and Corollary 3 together as follows. If , is on the smaller side, otherwise it is on the larger side. In other words, if , is smaller than its “neighbors” in . For , it is bigger. For condition (($*$) ‣ 4.1), if , .
4.2 The structure lemmas
We will prove a series of lemmas that prove structural properties of that are intimately connected to conditions (($*$) ‣ 4.1) and (($**$) ‣ 4.1). These proofs are where much of the insight lies. {lemma} Consider some even index such that exists. Suppose conditions (($*$) ‣ 4.1) and (($**$) ‣ 4.1) held for all even indices . Then, is matched. {proof} The proof is by contradiction, so assume that does not exist. Assume . (The proof for the case is similar and omitted.) Consider sets and . Note that are all distinct. By Prop. 3.1, is a valid matching. We will argue that , a contradiction. By condition (($**$) ‣ 4.1),
(1)  
By the second part of Prop. 4.1 (for even , iff ) and condition (($**$) ‣ 4.1), we know the comparisons for all pairs in .
(2)  
Note that the coefficients of common terms in and are identical. The only terms not involved (by Prop. 3.1) are in and in . The weight of the new matching is precisely . By (($*$) ‣ 4.1) for , this is strictly greater than , contradicting the maximality of . So, under the condition of Lemma 4.2, is matched. We can also specify the comparison relation of , (as condition (($**$) ‣ 4.1)) using an almost identical argument. Abusing notation, we will denote as . (This is no abuse if is a straight pair.) {lemma} Consider some even index such that exists. Suppose conditions (($*$) ‣ 4.1) and (($**$) ‣ 4.1) held for all even indices . Then, condition (($**$) ‣ 4.1) holds for . Before we prove this lemma, we need the following distinctness claim.
Claim \thetheorem
Consider some odd such that and exist. Suppose condition (($*$) ‣ 4.1) and (($**$) ‣ 4.1) held for all even . Then the sequence are distinct.
(If , this is obviously true. The challenge is when terminates at .) The sequence from to is an alternating path, so all terms are distinct. If , then the claim holds. Suppose . Note that , since . Since , by Prop. 3, . Condition (($**$) ‣ 4.1) holds for , so and by Corollary 3, . Note that and is a cross pair. By Prop. 4.1, and thus . We replace pairs with , and argue that the weight has increased. We have . By condition (($*$) ‣ 4.1) on , , contradicting the maximality of . {proof} (of Lemma 4.2) By Lemma 4.2, exists. Assume (the other case is analogous and omitted). The proof is again by contradiction, so we assume condition (($**$) ‣ 4.1) does not hold for . This means . Consider sets and . By Claim 4.2, are distinct. So is a valid matching and we argue that . By condition (($**$) ‣ 4.1) for even and the assumption .
Observe how the last term in the summation differs from the trend. All comparisons in are determined by Prop. 3.1, just as we argued in the proof of Lemma 4.2. The expression for is basically given in (2). It remains to deal with . By condition (($**$) ‣ 4.1) for , . Thus, by Prop. 3.1, . Combining with the assumption of , we deduce .
The coefficients are identical, except that and do not appear in . We get . By (($*$) ‣ 4.1) for , we contradict the maximality of . A direct combination of the above statements yields the main structure lemma. {lemma} Suppose contains no violated pair. Let the last term be ( is odd). For every even , condition (($**$) ‣ 4.1) holds, and belongs to a pair in . {proof} We prove the first statement by contradiction. Consider the smallest even where condition (($**$) ‣ 4.1) does not hold. Note that for , the condition does hold, so . We can apply Lemma 4.2 for , since all even indices at most satisfy (($*$) ‣ 4.1) and (($**$) ‣ 4.1). But condition (($**$) ‣ 4.1) holds for , completing the proof. Now apply Lemma 4.2 and Lemma 4.2 for . Conditions (($*$) ‣ 4.1) and (($**$) ‣ 4.1) hold for all relevant even indices. Hence, must be matched and condition (($**$) ‣ 4.1) holds for . Since terminates at , cannot be matched. Suppose was matched. Let . By Prop. 3, , so , violating condition (($**$) ‣ 4.1). A similar argument holds when . Hence, must be matched.
5 Monotonicity on Boolean Hypercube
We prove Theorem 1. The matching is also a maximal family of disjoint violating pairs, and therefore, . We denote the set of all edges of the hypercube as . We partition into where is the collection of hypercube edges which differ in the th coordinate. Each is a perfect matching and is adequate. Note that is the set of pairs which do not differ in the th coordinate. The distance is trivially , so is the set of pairs that differ in the th coordinate. Importantly, . {lemma} For all , the number of violating edges is at least . {proof} Feed in and to the alternating path machinery. Set to be the set of all lower endpoints of , so . Since , by Lemma 4.2, all sequences must contain a violated edge. The total number of violated edges is at least . The above lemma proves Theorem 1. Observe that every pair in belongs to some set . The edge tester only requires queries, since the success probability of a single test is at least
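A sketch of the final arithmetic, under the counts used in the roadmap of Section 2.1 (our reconstruction: $|E| = d\,2^{d-1}$ hypercube edges and at least $\varepsilon_f 2^{d-1}$ violated edges in total):

```latex
\Pr[\text{a single test rejects}]
  \;\ge\; \frac{\#\{\text{violated edges}\}}{|E|}
  \;\ge\; \frac{\varepsilon_f \, 2^{d-1}}{d \, 2^{d-1}}
  \;=\; \frac{\varepsilon_f}{d},
```

so repeating the test $O(d/\varepsilon)$ times rejects any $\varepsilon$-far function with constant probability.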
6 Setting up for Hypergrids
We setup the framework for hypergrid domains. The arguments here are property independent.
Consider domain and set .
We define to be pairs that differ in exactly one coordinate, and furthermore, the difference is a power of .
The tester chooses a pair in uniformly at random, and checks the property on this pair.
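The tester just described can be sketched as follows. This is our illustration: it samples a point, a coordinate, and a power-of-2 gap by rejection, which is a simplification rather than the exact uniform distribution over the pair set defined above.

```python
import random

def hypergrid_pair_tester(f, n, d, eps, trials=None):
    """Uniform-pair tester sketch for monotonicity on the hypergrid [n]^d.

    Each test draws a pair (x, y) differing in exactly one coordinate by a
    power of 2 and checks monotonicity on it. The trial count and sampling
    scheme are illustrative, not the paper's exact distribution.
    """
    assert n >= 2
    if trials is None:
        trials = max(1, int(10 * d * n.bit_length() / eps))  # ~ O(d log n / eps)
    for _ in range(trials):
        while True:  # rejection-sample a pair that stays inside the grid
            x = [random.randint(1, n) for _ in range(d)]
            i = random.randrange(d)
            gap = 2 ** random.randrange((n - 1).bit_length())
            if x[i] + gap <= n:
                break
        y = list(x)
        y[i] = x[i] + gap                  # y dominates x in coordinate i
        if f(tuple(x)) > f(tuple(y)):      # monotonicity violated on the pair
            return False
    return True
```

When $n = 2$ the only gap is $1$ and this degenerates to the edge tester, matching the claim that the hypergrid tester generalizes it.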
We partition into sets , , .
consists of pairs which differ only in the th coordinate, and furthermore .
Unfortunately, is not a matching, since each point can participate in potentially two
pairs in .
To remedy this, we further partition into and .
For any pair , exactly one among