# Optimal bounds for monotonicity and Lipschitz testing over hypercubes and hypergrids

## Abstract

The problem of monotonicity testing over the hypergrid and its special case, the hypercube, is a classic, well-studied, yet unsolved question in property testing. We are given query access to $f:[k]^n \to R$ (for some ordered range $R$). The hypergrid/cube has a natural partial order given by coordinate-wise ordering, denoted by $\prec$. A function is monotone if for all pairs $x \prec y$, $f(x) \leq f(y)$. The distance to monotonicity, $\varepsilon_f$, is the minimum fraction of values of $f$ that need to be changed to make $f$ monotone. For $k=2$ (the boolean hypercube), the usual tester is the edge tester, which checks monotonicity on adjacent pairs of domain points. It is known that the edge tester using $O(\varepsilon^{-1} n \log |R|)$ samples can distinguish a monotone function from one where $\varepsilon_f \geq \varepsilon$. On the other hand, the best lower bound for monotonicity testing over general $R$ is $\Omega(\min(|R|^2, n))$. We resolve this long-standing open problem and prove that $O(n/\varepsilon)$ samples suffice for the edge tester. For hypergrids, existing testers require $O(\varepsilon^{-1} n \log k \log |R|)$ samples. We give a (non-adaptive) monotonicity tester for hypergrids running in $O(\varepsilon^{-1} n \log k)$ time, recently shown to be optimal. Our techniques lead to optimal property testers (with the same running time) for the natural Lipschitz property on hypercubes and hypergrids. (A $c$-Lipschitz function is one where $|f(x) - f(y)| \leq c\|x-y\|_1$.) In fact, we give a general unified proof of $O(\varepsilon^{-1} n \log k)$-query testers for a class of “bounded-derivative” properties that contains both monotonicity and Lipschitz.

**Keywords:** Property Testing, Monotonicity, Lipschitz functions

**Categories:** F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems (Computations on discrete structures); G.2.1 [Discrete Mathematics]: Combinatorics (Combinatorial algorithms)

A preliminary version of this result appeared as [Chakrabarty and Seshadhri (2013a)].

## 1 Introduction

Monotonicity testing over hypergrids [Goldreich et al. (2000)] is a classic problem in property testing. We focus on functions $f: D \to R$, where the domain $D = [k]^n$ is the hypergrid and the range $R$ is a total order. The hypergrid/hypercube defines the natural coordinate-wise partial order: $x \prec y$ iff $x_i \leq y_i$ for all $i$. A function $f$ is monotone if $f(x) \leq f(y)$ whenever $x \prec y$. The distance to monotonicity, denoted by $\varepsilon_f$, is the minimum fraction of places at which $f$ must be changed to have the property. Formally, if $\mathcal{M}$ is the set of all monotone functions, $\varepsilon_f = \min_{g \in \mathcal{M}} \Pr_x[f(x) \neq g(x)]$. Given a parameter $\varepsilon > 0$, the aim is to design a randomized algorithm for the following problem. If $\varepsilon_f = 0$ (meaning $f$ is monotone), the algorithm must accept with probability $> 2/3$, and if $\varepsilon_f > \varepsilon$, it must reject with probability $> 2/3$. If $0 < \varepsilon_f \leq \varepsilon$, then any answer is allowed. Such an algorithm is called a monotonicity tester. The quality of a tester is determined by the number of queries to $f$. A one-sided tester accepts with probability $1$ if the function is monotone. A non-adaptive tester decides all of its queries in advance, so the queries are independent of the answers it receives. Monotonicity testing has been studied extensively in the past decade [Ergun et al. (2000), Goldreich et al. (2000), Dodis et al. (1999), Lehman and Ron (2001), Fischer et al. (2002), Ailon and Chazelle (2006), Fischer (2004), Halevy and Kushilevitz (2008), Parnas et al. (2006), Ailon et al. (2006), Batu et al. (2005), Bhattacharyya et al. (2009), Briët et al. (2012), Blais et al. (2012)]. Of special interest is the hypercube domain, $\{0,1\}^n$. [Goldreich et al. (2000)] introduced the edge tester. Let $E$ be the set of pairs that differ in precisely one coordinate (the edges of the hypercube). The edge tester picks a pair in $E$ uniformly at random and checks if monotonicity is satisfied by this pair. For boolean range, [Goldreich et al. (2000)] prove that $O(n/\varepsilon)$ samples suffice to give a bona fide monotonicity tester. [Dodis et al. (1999)] subsequently showed that $O(\varepsilon^{-1} n \log |R|)$ samples suffice for a general range $R$. In the worst case, $\log |R| = \Theta(n)$, and so this gives an $O(n^2/\varepsilon)$-query tester.
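As a concrete illustration (not part of the original analysis), the edge tester admits a few-line simulation; this is a minimal sketch assuming query access to $f$ is a Python callable on $\{0,1\}^n$ tuples, with interface names of our choosing:

```python
import random

def edge_tester(f, n, num_queries):
    """One-sided edge tester on {0,1}^n: repeatedly sample a uniform edge
    (a pair differing in exactly one coordinate) and reject if
    monotonicity fails on it."""
    for _ in range(num_queries):
        x = [random.randint(0, 1) for _ in range(n)]
        i = random.randrange(n)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1              # the two endpoints of the edge in direction i
        if f(tuple(lo)) > f(tuple(hi)):  # a violated edge: reject
            return False
    return True                          # a monotone f is always accepted

```

A monotone function such as `lambda x: sum(x)` is accepted with probability $1$; for a function in which every edge is violated (e.g. `lambda x: -sum(x)`), even a single query rejects.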
The best known general lower bound is $\Omega(\min(|R|^2, n))$ [Blais et al. (2012)]. It has been an outstanding open problem in property testing (see Question 5 in the Open Problems list from the Bertinoro Workshop) to give an optimal bound for monotonicity testing over the hypercube. We resolve this by showing that the edge tester is indeed optimal (when $|R| = \Omega(\sqrt{n})$).

**Theorem 1.** The edge tester is an $O(n/\varepsilon)$-query non-adaptive, one-sided monotonicity tester for functions $f: \{0,1\}^n \to R$.

For general hypergrids, [Dodis et al. (1999)] give an $O(\varepsilon^{-1} n \log k \log |R|)$-query monotonicity tester. Since $\log |R|$ can be as large as $n \log k$, this gives an $O(\varepsilon^{-1} n^2 \log^2 k)$-query tester. In this paper, we give an $O(\varepsilon^{-1} n \log k)$-query monotonicity tester on hypergrids that generalizes the edge tester. This tester is also a uniform pair tester, in the sense that it defines a set of pairs $H$, picks a pair uniformly at random from it, and checks for monotonicity among this pair. The pairs in $H$ also differ in exactly one coordinate, as in the edge tester.

**Theorem 2.** There exists a non-adaptive, one-sided $O(\varepsilon^{-1} n \log k)$-query monotonicity tester for functions $f: [k]^n \to R$.

**Remark.** Subsequent to the conference version of this work, the authors proved an $\Omega(n \log k)$-query lower bound for monotonicity testing on the hypergrid for any (adaptive, two-sided error) tester [Chakrabarty and Seshadhri (2013b)]. Thus, both the above theorems are optimal.

A property that has been studied recently is that of a function being Lipschitz: a function is called $c$-Lipschitz if $|f(x) - f(y)| \leq c\|x - y\|_1$ for all $x, y$. The Lipschitz testing question was introduced by [Jha and Raskhodnikova (2011)], who show that for the range $\delta\mathbb{Z}$, $O(\varepsilon^{-1} n^2)$ queries suffice for Lipschitz testing on the hypercube. For general hypergrids, [Awasthi et al. (2012)] recently give an $O(\varepsilon^{-1} n^2 \log k)$-query tester for the same range. [Blais et al. (2014)] prove a lower bound of $\Omega(n \log k)$ queries for non-adaptive Lipschitz testers (for sufficiently large $k$). We give a tester for the Lipschitz property that improves all known results and matches existing lower bounds. Observe that the following holds for arbitrary ranges.
**Theorem 3.** There exists a non-adaptive, one-sided $O(\varepsilon^{-1} n \log k)$-query $c$-Lipschitz tester for functions $f: [k]^n \to \mathbb{R}$.

Our techniques apply to a class of properties that contains monotonicity and Lipschitz. We call it the bounded derivative property, or more technically, the $(\alpha,\beta)$-Lipschitz property. Given parameters $\alpha, \beta$ with $\alpha < \beta$, we say that a function has the $(\alpha,\beta)$-Lipschitz property if for any $x$, and $y$ obtained by increasing exactly one coordinate of $x$ by exactly $1$, we have $\alpha \leq f(y) - f(x) \leq \beta$. Note that when $(\alpha,\beta) = (0,\infty)$, we get monotonicity. When $(\alpha,\beta) = (-c,c)$, we get $c$-Lipschitz.

**Theorem 4.** There exists a non-adaptive, one-sided $O(\varepsilon^{-1} n \log k)$-query $(\alpha,\beta)$-Lipschitz tester for functions $f: [k]^n \to \mathbb{R}$, for any $\alpha < \beta$. There is no dependence of the running time on $\alpha$ and $\beta$.

Although Theorem 4 implies all the other theorems stated above, we prove Theorems 1 and 2 before giving the whole proof of Theorem 4. The final proof is a little heavy on notation, and the proof of the monotonicity theorems illustrates the new techniques.
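For concreteness, the bounded-derivative condition can be verified exhaustively on small hypergrids. The following brute-force helper is our own sketch (with the convention $[k] = \{0,\dots,k-1\}$), not the tester itself:

```python
import itertools

def bounded_derivative_violation(f, n, k, alpha, beta):
    """Search [k]^n for x and y = x + e_i with f(y) - f(x) outside
    [alpha, beta]; return such a pair, or None if the property holds.
    Monotonicity is (alpha, beta) = (0, inf); c-Lipschitz is (-c, c)."""
    for x in itertools.product(range(k), repeat=n):
        for i in range(n):
            if x[i] + 1 < k:
                y = x[:i] + (x[i] + 1,) + x[i + 1:]
                if not (alpha <= f(y) - f(x) <= beta):
                    return (x, y)
    return None
```

For example, `f = sum` satisfies both monotonicity and $1$-Lipschitz, while `f = 2 * sum` has axis-increments of $2$ and violates $(-1,1)$-Lipschitz.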

### 1.1 Previous work

We discuss some other previous work on monotonicity testers for hypergrids. For the total order (the case $n = 1$), which has been called the monotonicity testing problem on the line, [Ergun et al. (2000)] give an $O(\varepsilon^{-1} \log k)$-query tester, and this is optimal [Ergun et al. (2000), Fischer (2004)]. Results for general posets were first obtained by [Fischer et al. (2002)]. The elegant concept of $2$-TC spanners introduced by [Bhattacharyya et al. (2009)] gives a general class of monotonicity testers for various posets. It is known that such constructions give testers with polynomial dependence on $n$ for the hypergrid [Bhattacharyya et al. (2012)]. For constant $n$, [Halevy and Kushilevitz (2008), Ailon and Chazelle (2006)] give $O(\varepsilon^{-1} \log k)$-query testers (although the dependency on $n$ is exponential). On the lower bound side, [Fischer et al. (2002)] first prove an $\Omega(\sqrt{n})$ (non-adaptive, one-sided) lower bound for hypercubes. [Briët et al. (2012)] give an $\Omega(n/\varepsilon)$ lower bound for non-adaptive, one-sided testers, and a breakthrough result of [Blais et al. (2012)] proves a general $\Omega(\min(|R|^2, n))$ lower bound. Testing the Lipschitz property is a natural question that arises in many applications. For instance, given a computer program, one may wish to test the robustness of the program’s output to perturbations of the input. This has been studied before, for instance in [Chaudhuri et al. (2011)]; however, the solution provided there looks into the code to decide whether the program is Lipschitz. The property testing setting is a black-box approach to the problem. [Jha and Raskhodnikova (2011)] also provide an application to differential privacy: a class of mechanisms known as Laplace mechanisms, proposed by [Dwork et al. (2006)], achieves privacy when outputting a function by adding noise proportional to the Lipschitz constant of the function. [Jha and Raskhodnikova (2011)] gave numerous results on Lipschitz testing over hypergrids.
They give an $O(\varepsilon^{-1}\log k)$-query tester for the line, a general $\Omega(n)$-query lower bound for the Lipschitz testing question on the hypercube, and a non-adaptive, one-sided $\Omega(\log k)$-query lower bound on the line.

The challenge of property testing is to relate the tester behavior to the distance of the function to the property. Consider monotonicity over the hypercube. To argue about the edge tester, we want to show that a large distance to monotonicity implies many violated edges. Most current analyses of the edge tester go via what we could call the contrapositive route. If there are few violated edges in $f$, then they show the distance to monotonicity is small. This is done by modifying $f$ to make it monotone, and bounding the number of changes as a function of the number of violated edges. There is an inherently “constructive” viewpoint to this: it specifies a method to convert non-monotone functions to monotone ones. Implementing this becomes difficult when the range is large, and existing bounds degrade with $|R|$. For the Lipschitz property, this route becomes incredibly complex. A non-constructive approach may give more power, but how does one get a handle on the distance? The violation graph provides a method. The violation graph has the domain $D$ as the vertex set and an edge between any pair of comparable domain points $x \prec y$ if $f(x) > f(y)$. The following theorem can be found as Corollary 2 in [Fischer et al. (2002)].

**Theorem** [Fischer et al. (2002)]. The size of the minimum vertex cover of the violation graph is exactly $\varepsilon_f |D|$.

As a corollary, the size of any maximal matching in the violation graph is at least $\varepsilon_f |D|/2$. Can a large matching in the violation graph imply there are many violated edges? [Lehman and Ron (2001)] give an approach by reducing the monotonicity testing problem on the hypercube to routing problems. For any set of $m$ source-sink pairs on the directed hypercube, suppose at least $\sigma m$ edges need to be deleted in order to pairwise separate them. Then $O(n/(\sigma\varepsilon))$ queries suffice for the edge tester. Therefore, if $\sigma$ is at least a constant, one gets a linear query monotonicity tester on the cube. Lehman and Ron [Lehman and Ron (2001)] explicitly ask for bounds on $\sigma$. [Briët et al. (2012)] show that $\sigma$ could be as small as $O(1/\sqrt{n})$, thereby putting a bottleneck on the above approach. In the reduction above, the function values are altogether ignored. More precisely, once one moves to the combinatorial routing question on source-sink pairs, the fact that they are related by actual function values is lost. Our analysis crucially uses the values of the function to argue about the structure of the maximal matching in the violation graph.
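The vertex-cover/matching connection is easy to experiment with on tiny domains. The following brute-force sketch (our own code, feasible only for very small $n$) builds the violation graph of a hypercube function and compares a maximal matching against the minimum vertex cover:

```python
import itertools

def violation_graph(f, n):
    """Edges (x, y) over {0,1}^n with x strictly below y coordinate-wise
    and f(x) > f(y)."""
    pts = list(itertools.product((0, 1), repeat=n))
    below = lambda x, y: x != y and all(a <= b for a, b in zip(x, y))
    return [(x, y) for x in pts for y in pts if below(x, y) and f(x) > f(y)]

def greedy_maximal_matching(edges):
    """Any maximal matching has at least half the minimum vertex cover size."""
    used, matching = set(), []
    for x, y in edges:
        if x not in used and y not in used:
            matching.append((x, y))
            used.update((x, y))
    return matching

def min_vertex_cover_size(edges):
    """Exhaustive minimum vertex cover (tiny instances only)."""
    verts = sorted({v for e in edges for v in e})
    for r in range(len(verts) + 1):
        for cover in itertools.combinations(verts, r):
            s = set(cover)
            if all(x in s or y in s for x, y in edges):
                return r
    return 0
```

For the fully decreasing function on $\{0,1\}^2$, the minimum vertex cover has size $2$, in line with the theorem above, and the greedy matching has at least half that size.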

### 2.1 It’s all about matchings

The key insight is to move to a weighted violation graph. The weight of a violation depends on the property at hand; for now it suffices to know that for monotonicity, the weight of a violation $(x,y)$ is $|f(x) - f(y)|$. This can be thought of as a measure of the magnitude of the violation. (Violation weights were also used for Lipschitz testers [Jha and Raskhodnikova (2011)].) We now look at a maximum weight matching $M$ in the violation graph. Naturally, this is maximal as well, so $|M| \geq \varepsilon_f |D|/2$. All our algorithms pick a pair uniformly at random from a predefined set of pairs $H$, and check the property on that pair. For the hypercube domain, $H$ is the set of all edges of the hypercube. Our analysis is based on the construction of a one-to-one mapping from pairs in $M$ to violating pairs in $H$. This mapping implies the number of violated pairs in $H$ is at least $|M| \geq \varepsilon_f |D|/2$, and thus the uniform pair tester succeeds with probability at least $\varepsilon_f |D|/(2|H|)$, implying $O(|H|/(\varepsilon |D|))$ queries suffice to test monotonicity. For the hypercube, $|H| = n2^{n-1}$ and $|D| = 2^n$, giving the final bound of $O(n/\varepsilon)$. To obtain this mapping, we first decompose $M$ into sets $M_1, \ldots, M_n$ such that each pair in $M$ is in at least one $M_i$. Furthermore, we partition $H$ into perfect matchings $H_1, \ldots, H_n$. In the hypercube case, $M_i$ is the collection of pairs in $M$ whose $i$th coordinates differ, and $H_i$ is the collection of hypercube edges differing only in the $i$th coordinate; for the hypergrid case, the partitions are more involved. We map each pair in $M_i$ to a unique violating pair in $H_i$. For simplicity, let us ignore subscripts and call the matchings $M$ and $H$. Consider the alternating paths and cycles generated by the symmetric difference of $M$ and $H$. Take a point $x$ involved in a pair of $M$, and note that it can only be present as the endpoint of an alternating path, denoted by $S_x$. Our main technical lemma shows that each such $S_x$ contains a violated $H$-pair.

### 2.2 Getting the violating $H$-pairs

Consider $M_i$, the pairs of $M$ which differ on the $i$th coordinate, and let $H_i$ be the set of edges in the dimension cut along this coordinate. Let $(x,y) \in M_i$, and say $x \prec y$, giving us $f(x) > f(y)$. (We denote the $i$th coordinate of $x$ by $x_i$.) Recall that the weight of this violation is $f(x) - f(y)$. It is convenient to think of $S_x$ as follows. We begin from $x$ and take the incident $H_i$-edge to reach $x'$ (note that $x'_i = y_i$; if $x' = y$, then $(x,y)$ is itself an $H_i$-edge and we are done). Then we take the $M_i$-pair containing $x'$ to continue the sequence. But what if no such pair existed? This can be possible in two ways: either $x'$ was $M$-unmatched, or its $M$-pair does not cross the dimension cut. If $x'$ is $M$-unmatched, then delete $(x,y)$ from $M$ and add $(x',y)$ to obtain a new matching. If $(x,x')$ was not a violation, then $f(x') > f(x)$, and therefore $w(x',y) = f(x') - f(y) > f(x) - f(y) = w(x,y)$. Thus the new matching has strictly larger weight, contradicting the choice of $M$. If $x'$ was $M$-matched, then let $(x',y')$ be its pair. First observe that the four points $x, y, x', y'$ are distinct, and so we could replace the pairs $(x,y)$ and $(x',y')$ in $M$ with a rewired set of pairs covering the same points. Again, if $(x,x')$ is not a violation, then the rewired matching is strictly heavier, contradicting the maximality of $M$. Therefore, we can take an $M$-pair to reach the next point. With care, this argument can be carried over till we find a violation, and a detailed description of this is given in §5. Let us demonstrate a little further (refer to the left of Fig. 1). Start with $(x,y) \in M_i$, $x \prec y$, and $f(x) > f(y)$. Following the sequence $S_x = (s_0 = x, s_1, s_2, \ldots)$, the first term is projected “up” across the dimension cut $H_i$. The next term is obtained by following the $M$-pair incident to $s_1$ to get $s_2$. Now we claim that $s_2$ lies above $s_1$, for otherwise one can remove $(x,y)$ and $(s_1,s_2)$ and suitably rewire to increase the matching weight. (We just made the argument earlier; the interested reader may wish to verify.) In the next step, $s_2$ is projected “down” along $H_i$ to get $s_3$. By the nature of the dimension cut, $(s_2)_i = 1$ and $(s_3)_i = 0$. So, if $s_3$ is unmatched and $(s_2,s_3)$ is not a violation, we can again rearrange the matching to improve the weight. We alternately go “up” and “down” in traversing $S_x$, because of which we can modify the pairs in $M$ and get other matchings in the violation graph. The maximality of $M$ imposes additional structure, which leads to violating edges in $H_i$.
In general, the spirit of all our arguments is as follows. Take an endpoint $x$ and start walking along the sequence given by the alternating paths generated by $M$ and $H$. Naturally, this sequence must terminate somewhere. If we never encounter a violating pair of $H$ during the entire sequence, then we can rewire the matching and increase the weight. Contradiction! Observe the crucial nature of alternating up and down movements along $S_x$. This happens because the $i$th coordinate of the points in $S_x$ switches between the two values $0$ and $1$. Such reasoning does not hold water in the hypergrid domain. The structure of $H$ needs to be more complex, and is not as simple as a partition of the edges of the hypergrid. Consider the extreme case of the line $[k]$. Let $2^j$ be a power of $2$ less than $k$. We break $[k]$ into contiguous pieces of length $2^j$. We can now match the first piece to the second, the third to the fourth, etc. In other words, the pairs look like $(1, 2^j+1), (2, 2^j+2), \ldots, (2^j, 2^{j+1})$, then $(2^{j+1}+1, 3\cdot 2^j+1)$, etc. We can construct such matchings for all powers of $2$ less than $k$, and these will be our $H$'s. Those familiar with existing proofs for monotonicity on the line will not be surprised by this set of matchings. All methods need to cover all “scales” from $1$ to $k$ (achieved by making them all powers of $2$ up to $k$). It can also be easily generalized to $[k]^n$. What about the choice of $M$? Simply choosing $M$ to be a maximum weight matching and setting up the sequences does not seem to work. It suffices to look at the line and the matching $H$ at scale $2^j$, so the pairs are of the form $(a, a+2^j)$. A good candidate for the corresponding $M$ is the set of pairs in the violation graph that connect lower endpoints of $H$ to higher endpoints of $H$. Let us now follow $S_x$ as before. Refer to the right part of Fig. 1. Take $(x,y) \in M$ and let $s_0 = x$. We get $s_1$ by following the $H$-edge at $s_0$, so $s_1 = s_0 + 2^j$. We follow the $M$-pair incident to $s_1$ (suppose it exists) to get $s_2$. It could be that $s_2$ lies below $s_1$. It is here that we see a change from the hypercube. We could get such a configuration because there is no guarantee that $s_2$ is at the higher end of an $H$-pair. This could not happen in the hypercube.
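The contiguous-block matchings on the line can be sketched as follows (a helper of our own, using 0-based indices for concreteness; each scale $2^j$ pairs a point $a$ with $a + 2^j$ exactly when $a$'s block index is even):

```python
def line_matchings(k):
    """For each power of 2 below k, break {0,...,k-1} into contiguous blocks
    of that length and match each even-indexed block to the next block."""
    matchings = {}
    step = 1
    while step < k:
        matchings[step] = [(a, a + step)
                           for a in range(k - step) if (a // step) % 2 == 0]
        step *= 2
    return matchings
```

For $k = 8$ this yields the matchings at scales $1$, $2$, and $4$, and no point is reused within a single scale.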
We could have a situation where $s_2$ is unmatched, we have not encountered a violation in $H$, and yet we cannot rearrange $M$ to increase the weight. For a concrete example, consider the points as given in Fig. 1, with the function values shown there. Some thought leads to the conclusion that the relevant coordinate difference must be less than $2^j$ for any such rearrangement argument to work. The road out of this impasse is suggested by two observations. First, the difference in coordinates between the problematic endpoints must be odd (as a multiple of the scale). Next, we could rearrange $M$ and match the points differently. The weight may not increase, but this matching might be more amenable to the alternating path approach. We could start from a maximum weight matching that also maximizes the number of pairs where coordinate differences are even. Indeed, the insight for hypergrids is the definition of a potential $\Phi$ for $M$. The potential $\Phi(M)$ is obtained by summing, for every pair of $M$ and every coordinate, the largest power of $2$ dividing the coordinate difference. We can show that a maximum weight matching that also maximizes $\Phi$ does not end up in the bad situation above. With some additional arguments, we can generalize the hypercube proof. We describe this in §7.
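The potential $\Phi$ is simple to compute; here is a short sketch of our own (with the convention, ours, that coordinates with zero difference contribute nothing):

```python
def largest_pow2_divisor(d):
    """Largest power of 2 dividing d > 0 (isolate the lowest set bit)."""
    return d & (-d)

def potential(matching):
    """Phi(M): sum over matched pairs and over coordinates of the largest
    power of 2 dividing the coordinate difference."""
    return sum(largest_pow2_divisor(abs(xi - yi))
               for x, y in matching
               for xi, yi in zip(x, y) if xi != yi)
```

Ties among maximum-weight matchings are then broken in favor of larger $\Phi$.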

### 2.3 Attacking the generalized Lipschitz property

One of the challenges in dealing with the Lipschitz property is the lack of direction. The Lipschitz property, defined by $|f(x) - f(y)| \leq \|x - y\|_1$, is an undirected property, as opposed to monotonicity. In monotonicity, a point $x$ only “interacts” with the subcube above and the subcube below $x$, while in Lipschitz, constraints are defined between all pairs of points. Previous results for Lipschitz testing require very technical and clever machinery to deal with this issue, since arguments analogous to monotonicity do not work. The alternating paths argument given above for monotonicity also exploits this directionality, as can be seen by the heavy use of inequalities in the informal calculations. Observe that in the monotonicity example for hypergrids in Fig. 1, the fact that the order relation between consecutive terms could flip required the potential $\Phi$ (and a whole new proof). A subtle point is that while the property of Lipschitz is undirected, violations to Lipschitz are “directed”. If $|f(x) - f(y)| > \|x-y\|_1$, then either $f(x) - f(y) > \|x-y\|_1$ or $f(y) - f(x) > \|x-y\|_1$, but never both. This can be interpreted as a direction for violations. In the alternating paths for monotonicity (especially for the hypercube), the partial order relations between successive terms follow a fixed pattern. This is crucial for performing the matching rewiring. As might be guessed, the weight of a violation $(x,y)$ becomes $|f(x) - f(y)| - \|x-y\|_1$; for the generalized Lipschitz problem, it is defined in terms of a pseudo-distance over the domain. We look at the maximum weight matching as before (and use the same potential function $\Phi$). The notion of “direction” takes the place of the partial order relation in monotonicity. The main technical arguments show that these directions follow a fixed pattern in the corresponding alternating paths. Once we have this pattern, we can perform the matching rewiring argument for the generalized Lipschitz problem.

## 3 The Alternating Paths Framework

The framework of this section is applicable to all $(\alpha,\beta)$-Lipschitz properties over hypergrids. We begin with two objects: $M$, the matching of violating pairs, and $H$, a matching of domain points. The pairs in $H$ will be aligned along a fixed dimension (denote it by $j$) with the same difference, called the $H$-distance. That is, each pair in $H$ will differ only in coordinate $j$, and the difference will be the same for all pairs. We now give some definitions.

• $\mathcal{L}, \mathcal{U}$: Each pair of $H$ has a “lower” end and an “upper” end depending on the value of the coordinate at which they differ. We use $\mathcal{L}$ (resp. $\mathcal{U}$) to denote the set of lower (resp. upper) endpoints. Note that $\mathcal{L} \cap \mathcal{U} = \emptyset$.

• $H$-straight pairs: All pairs of $M$ with both ends in $\mathcal{L}$ or both in $\mathcal{U}$.

• $H$-cross pairs: All pairs $(x,y) \in M$ such that $x \in \mathcal{L}$, $y \in \mathcal{U}$, and the $H$-distance divides $y_j - x_j$.

• $H$-skew pairs: All remaining pairs of $M$.

• $X$: A set of lower endpoints in $\mathcal{L}$.
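Relative to a fixed $H$, the classification of an $M$-pair can be sketched as follows (parameter and label names are our own):

```python
def classify_pair(x, y, lower, upper, j, h_dist):
    """Classify an M-pair (x, y) against a matching H aligned in coordinate j
    with common difference h_dist; lower/upper are the sets of lower/upper
    H-endpoints."""
    if (x in lower and y in lower) or (x in upper and y in upper):
        return 'straight'
    if ((x in lower and y in upper) or (x in upper and y in lower)) \
            and abs(y[j] - x[j]) % h_dist == 0:
        return 'cross'
    return 'skew'
```

On the hypercube with $H$ a dimension cut (so the $H$-distance is $1$), the divisibility condition is vacuous, and no pair is skew.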

Consider the domain $\{0,1\}^n$. We set $H$ to be (say) the first dimension cut, whose $H$-distance is $1$. The cross pairs are the pairs in $M$ whose endpoints differ in the first coordinate. All other pairs (whose endpoints agree in the first coordinate) are straight, since both ends lie in $\mathcal{L}$ or both in $\mathcal{U}$. There are no $H$-skew pairs. The set $X$ will be chosen differently for the applications. We require the following technical definition of adequate matchings. This arises because we will use matchings that are not necessarily perfect. A perfect matching is always adequate.

**Definition.** A matching $H$ is adequate if for every violation $(x,y) \in M$, both $x$ and $y$ participate in the matching $H$.

We will henceforth assume that $H$ is adequate. The symmetric difference of $M$ and $H$ is a collection of alternating paths and cycles. Because $H$ is adequate, any point $x \in X$ is the endpoint of some alternating path (denoted by $S_x$). The sequence $S_x = (s_0, s_1, s_2, \ldots)$ is constructed by the rules below. Throughout the paper, $e$ denotes an even index, $o$ denotes an odd index, and $i$ denotes an arbitrary index.

1. The first term is $s_0 = x$.

2. For even $i$, $s_{i+1}$ is the $H$-partner of $s_i$.

3. For odd $i$: if $s_i$ is matched in a straight pair of $M$, then $s_{i+1}$ is its $M$-partner. Otherwise, terminate.
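The construction of $S_x$ can be sketched as a walk alternating between the two matchings. The interface below (partner functions returning `None` for unmatched points) is our own assumption; adequacy of $H$ is what guarantees that the $H$-step never fails:

```python
def alternating_sequence(x, m_partner, h_partner):
    """Build S_x = s_0, s_1, ...: H-steps at even indices, M-steps at odd
    indices; stop when the M-step is unavailable."""
    seq = [x]                    # s_0 = x
    i = 0                        # index of the last appended term
    while True:
        if i % 2 == 0:           # even index: follow the H-partner
            nxt = h_partner(seq[-1])
        else:                    # odd index: follow the M-partner, if any
            nxt = m_partner(seq[-1])
        if nxt is None:          # for adequate H, only the M-step can fail
            return seq
        seq.append(nxt)
        i += 1
```

On a toy instance with one $M$-pair and two $H$-pairs, the walk visits four points and then terminates.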

We start with a simple property of these alternating paths.

**Proposition.** For even $i$, exactly one of $s_i, s_{i+1}$ is in $\mathcal{L}$. For non-negative odd $i$, $s_i$ and $s_{i+1}$ lie in the same set.

**Proof.** If $i$ is even, then $(s_i, s_{i+1}) \in H$. Therefore, either $s_i \in \mathcal{L}$ and $s_{i+1} \in \mathcal{U}$, or vice versa. If $i$ is odd, $(s_i, s_{i+1})$ is a straight pair. So $s_i$ and $s_{i+1}$ lie in the same set. Starting with $s_0 = x \in \mathcal{L}$, a trivial induction completes the proof.

The following is a direct corollary of Prop. 3.

**Corollary.** If $i \equiv 0 \pmod 4$, $s_i \in \mathcal{L}$. If $i \equiv 2 \pmod 4$, $s_i \in \mathcal{U}$.

We will prove that every $S_x$ contains a violated $H$-pair. Henceforth, our focus is entirely on some fixed sequence $S_x$.

### 3.1 The sets $E^-(i)$ and $E^+(i)$

Our proofs are based on matching rearrangements, and this motivates the definitions in this subsection. For convenience, we denote the $M$-partner of $s_0 = x$ by $s_{-1}$. Consider the sequence $s_{-1}, s_0, \ldots, s_i$, for even $i$. We define

$$E^-(i) = \{(s_{-1},s_0),(s_1,s_2),(s_3,s_4),\ldots,(s_{i-1},s_i)\} = \{(s_j,s_{j+1}) : j \text{ odd},\ -1 \le j \le i-1\}$$

This is simply the set of $M$-pairs in $S_x$ up to $s_i$. We now define $E^+(i)$. Think of this as follows. We first pair up $(s_{-1}, s_1)$. Then, we go in order of the sequence to pair up the rest. We pick the first unmatched term and pair it to the first subsequent term of opposite parity. We follow this till $s_{i+1}$ is paired. These sets are illustrated in Fig. 2.

$$E^+(i) = \{(s_{-1},s_1),(s_0,s_3),(s_2,s_5),\ldots,(s_{i-4},s_{i-1}),(s_{i-2},s_{i+1})\} = \{(s_{-1},s_1)\} \cup \{(s_{i'},s_{i'+3}) : i' \text{ even},\ 0 \le i' \le i-2\}$$
**Proposition.** $E^-(i)$ involves the points $\{s_{-1},\ldots,s_i\}$, while $E^+(i)$ involves $\{s_{-1},\ldots,s_{i-1}\} \cup \{s_{i+1}\}$.

## 4 The Structure of $S_x$ for Monotonicity

We now focus on monotonicity, and show that $S_x$ is highly structured. (The proof for general Lipschitz will also follow the same setup, but requires more definitions.) The weight of a pair $(u,v)$ with $u \prec v$ is defined to be $f(u) - f(v)$ if $f(u) > f(v)$, and is $0$ otherwise. We will assume that all function values are distinct. This is without loss of generality, although we prove it formally later in Claim 8. Thus violating pairs have positive weight. We choose a maximum weight matching $M$ in the violation graph. Note that every pair in $M$ is a violating pair. We remind the reader that for even $i$, $(s_i, s_{i+1}) \in H$, and for odd $i$, $(s_i, s_{i+1}) \in M$.

### 4.1 Preliminary observations

**Proposition.** For $u, v$ both in $\mathcal{L}$ (or both in $\mathcal{U}$), $u \prec v$ iff their $H$-partners are similarly ordered. Consider a cross pair $(x,y)$ such that $x \prec y$. Then the $H$-partner of $x$ precedes $y$, and $x$ precedes the $H$-partner of $y$.

**Proof.** For any point in $\mathcal{L}$, its $H$-partner is obtained by adding the $H$-distance to a specific coordinate. This proves the first part. Since $(x,y)$ is a cross pair, the $H$-distance divides $y_j - x_j$ (where $H$ is aligned in dimension $j$), and $x$ and $y$ differ in coordinate $j$. Hence $y_j - x_j$ is at least the $H$-distance. Note that the $H$-partner of $x$ is obtained by simply adding this distance to the $j$th coordinate of $x$, so it precedes $y$.

**Proposition.** All pairs in $E^-(i)$ and $E^+(i)$ are comparable. Furthermore, $s_1 \preceq s_{-1}$, and for all even $i'$, $s_{i'} \prec s_{i'+3}$ iff $s_{i'+1} \prec s_{i'+2}$.

**Proof.** All pairs in $E^-(i)$ are in $M$, and hence comparable. Consider the pair $(s_{-1}, s_1) \in E^+(i)$. Since $s_0 \prec s_{-1}$ and $(s_{-1}, s_0)$ is a cross pair, by Prop. 4.1, $s_1 \preceq s_{-1}$. Consider a pair $(s_{i'}, s_{i'+3})$, where $i'$ is even. (Refer to Fig. 2.) The pair $(s_{i'+1}, s_{i'+2})$ is in $M$. Hence, those points are comparable and both lie in $\mathcal{L}$ or both in $\mathcal{U}$. By Prop. 4.1, $s_{i'}$ and $s_{i'+3}$ inherit their comparability from $(s_{i'+1}, s_{i'+2})$.

For some even $i$, suppose $(s_i, s_{i+1})$ is not a violation. Corollary 3 implies

$$\text{If } i \equiv 0 \pmod 4,\ f(s_{i+1}) - f(s_i) > 0. \qquad \text{If } i \equiv 2 \pmod 4,\ f(s_i) - f(s_{i+1}) > 0. \qquad (*)$$

We will also state an ordering condition on the sequence.

$$\text{If } i \equiv 0 \pmod 4,\ s_i \prec s_{i-1}. \qquad \text{If } i \equiv 2 \pmod 4,\ s_i \succ s_{i-1}. \qquad (**)$$

Remember these conditions and Corollary 3 together as follows. If $i \equiv 0 \pmod 4$, $s_i$ is on the smaller side; otherwise it is on the larger side. In other words, if $i \equiv 0 \pmod 4$, $s_i$ is smaller than its “neighbors” in $S_x$. For $i \equiv 2 \pmod 4$, it is bigger. For condition ($*$), if $i \equiv 0 \pmod 4$, $f(s_i) < f(s_{i+1})$.

### 4.2 The structure lemmas

We will prove a series of lemmas establishing structural properties of $S_x$ that are intimately connected to conditions ($*$) and ($**$). These proofs are where much of the insight lies.

**Lemma.** Consider some even index $i$ such that $s_{i+1}$ exists. Suppose conditions ($*$) and ($**$) held for all even indices $\leq i$. Then $s_{i+1}$ is $M$-matched.

**Proof.** The proof is by contradiction, so assume that $s_{i+1}$ is $M$-unmatched. Assume $i \equiv 0 \pmod 4$. (The proof for the case $i \equiv 2 \pmod 4$ is similar and omitted.) Consider the sets $E^-(i)$ and $E^+(i)$. Note that $s_{-1}, s_0, \ldots, s_{i+1}$ are all distinct. By Prop. 3.1, $M' = (M \setminus E^-(i)) \cup E^+(i)$ is a valid matching. We will argue that $w(M') > w(M)$, a contradiction. By condition ($**$),

$$w(E^-(i)) = [f(s_0)-f(s_{-1})]+[f(s_1)-f(s_2)]+[f(s_4)-f(s_3)]+\cdots+[f(s_{i-3})-f(s_{i-2})]+[f(s_i)-f(s_{i-1})] \qquad (1)$$

By the second part of Prop. 4.1 (for even $i'$, $s_{i'} \prec s_{i'+3}$ iff $s_{i'+1} \prec s_{i'+2}$) and condition ($**$), we know the comparisons for all pairs in $E^+(i)$.

$$w(E^+(i)) = [f(s_1)-f(s_{-1})]+[f(s_0)-f(s_3)]+[f(s_5)-f(s_2)]+\cdots+[f(s_{i-4})-f(s_{i-1})]+[f(s_{i+1})-f(s_{i-2})] \qquad (2)$$

Note that the coefficients of common terms in (1) and (2) are identical. The only terms not involved in both (by Prop. 3.1) are $f(s_i)$ in (1) and $f(s_{i+1})$ in (2). The weight of the new matching is precisely $w(M) - w(E^-(i)) + w(E^+(i)) = w(M) + f(s_{i+1}) - f(s_i)$. By ($*$) for $i$, this is strictly greater than $w(M)$, contradicting the maximality of $M$.

So, under the conditions of Lemma 4.2, $s_{i+1}$ is $M$-matched. We can also specify the comparison relation of $s_{i+1}$ and $s_{i+2}$ (as condition ($**$)) using an almost identical argument. Abusing notation, we will denote the $M$-partner of $s_{i+1}$ as $s_{i+2}$. (This is no abuse if $(s_{i+1}, s_{i+2})$ is a straight pair.)

**Lemma.** Consider some even index $i$ such that $s_{i+2}$ exists. Suppose conditions ($*$) and ($**$) held for all even indices $\leq i$. Then condition ($**$) holds for $i+2$.

Before we prove this lemma, we need the following distinctness claim.

**Claim.** Consider some odd $t$ such that $s_t$ and $s_{t+1}$ exist. Suppose conditions ($*$) and ($**$) held for all even indices $< t$. Then the terms of the sequence $s_{-1}, s_0, \ldots, s_{t+1}$ are distinct.

**Proof.** (If $(s_t, s_{t+1})$ is a straight pair, this is obviously true. The challenge is when $S_x$ terminates at $s_t$.) The sequence from $s_{-1}$ to $s_t$ is an alternating path, so all its terms are distinct. If $s_{t+1}$ differs from all of them, then the claim holds. Suppose instead that $s_{t+1}$ coincides with an earlier term of the sequence. Condition ($**$) holds for the even indices up to $t$, so by Prop. 3 and Corollary 3 we know in which of $\mathcal{L}, \mathcal{U}$ each term lies and how consecutive terms compare. Note that $(s_{-1}, s_0)$ is a cross pair, so by Prop. 4.1 these order relations transfer to $H$-partners. This pins down the position of the repeated point, and we can replace the pairs of $E^-(t)$ by a rewired set of pairs in the style of $E^+(t)$, and argue that the weight has increased. By condition ($*$), this contradicts the maximality of $M$.

**Proof.** (of Lemma 4.2) By Lemma 4.2, $s_{i+2}$ exists. Assume $i \equiv 0 \pmod 4$ (the other case is analogous and omitted). The proof is again by contradiction, so we assume condition ($**$) does not hold for $i+2$. This means $s_{i+2} \prec s_{i+1}$. Consider the sets $E^-(i+2)$ and $E^+(i+2)$. By Claim 4.2, $s_{-1}, s_0, \ldots, s_{i+2}$ are distinct. So $M' = (M \setminus E^-(i+2)) \cup E^+(i+2)$ is a valid matching, and we argue that $w(M') > w(M)$. By condition ($**$) for even indices $\leq i$ and the assumption $s_{i+2} \prec s_{i+1}$,

$$w(E^-(i+2)) = [f(s_0)-f(s_{-1})]+[f(s_1)-f(s_2)]+[f(s_4)-f(s_3)]+\cdots+[f(s_{i-3})-f(s_{i-2})]+[f(s_i)-f(s_{i-1})]+[f(s_{i+2})-f(s_{i+1})]$$

Observe how the last term in the summation differs from the trend. All comparisons in $E^+(i+2)$ are determined by Prop. 3.1, just as we argued in the proof of Lemma 4.2. The expression for $w(E^+(i+2))$ is basically given in (2). It remains to deal with the last pair $(s_{i-2}, s_{i+2})$. By condition ($**$) for $i$, $s_i \prec s_{i-1}$. Thus, by Prop. 4.1, $s_{i+1} \prec s_{i-2}$. Combining with the assumption $s_{i+2} \prec s_{i+1}$, we deduce $s_{i+2} \prec s_{i-2}$.

$$w(E^+(i+2)) = [f(s_1)-f(s_{-1})]+[f(s_0)-f(s_3)]+[f(s_5)-f(s_2)]+\cdots+[f(s_{i-3})-f(s_{i-6})]+[f(s_{i-4})-f(s_{i-1})]+[f(s_{i+2})-f(s_{i-2})]$$

The coefficients are identical, except that $f(s_i)$ and $f(s_{i+1})$ do not appear in $w(E^+(i+2))$. We get $w(M') = w(M) + f(s_{i+1}) - f(s_i)$. By ($*$) for $i$, we contradict the maximality of $M$. A direct combination of the above statements yields the main structure lemma.

**Lemma.** Suppose $S_x$ contains no violated $H$-pair. Let the last term be $s_t$ ($t$ is odd). For every even $i$, condition ($**$) holds, and $s_t$ belongs to a skew pair of $M$.

**Proof.** We prove the first statement by contradiction. Consider the smallest even $i$ for which condition ($**$) does not hold. Since $S_x$ contains no violated $H$-pair, condition ($*$) holds for all even indices. Note that for $i = 0$ the condition does hold, so $i \geq 2$. We can apply Lemma 4.2 for $i-2$, since all even indices at most $i-2$ satisfy ($*$) and ($**$). But then condition ($**$) holds for $i$, a contradiction. This proves the first statement. Now apply the two lemmas above at the last index. Conditions ($*$) and ($**$) hold for all relevant even indices. Hence, $s_t$ must be $M$-matched, and condition ($**$) holds for its $M$-partner $s_{t+1}$. Since $S_x$ terminates at $s_t$, the pair $(s_t, s_{t+1})$ cannot be straight. Suppose it were a cross pair. By Prop. 3 and Corollary 3, the resulting position of $s_{t+1}$ among $\mathcal{L}, \mathcal{U}$ would force an order relation violating condition ($**$). A similar argument holds in the other mod-$4$ case. Hence, $(s_t, s_{t+1})$ must be a skew pair.

## 5 Monotonicity on Boolean Hypercube

We prove Theorem 1. Since $M$ is also a maximal family of disjoint violating pairs, $|M| \geq \varepsilon_f 2^n/2$. We denote the set of all edges of the hypercube as $H$. We partition $H$ into $H_1, \ldots, H_n$, where $H_i$ is the collection of hypercube edges which differ in the $i$th coordinate. Each $H_i$ is a perfect matching and is adequate. Fix some $H_i$. Note that the straight pairs are the $M$-pairs which do not differ in the $i$th coordinate. The $H_i$-distance is trivially $1$, so the cross pairs $M_c$ are exactly the $M$-pairs that differ in the $i$th coordinate. Importantly, there are no skew pairs.

**Lemma.** For all $i$, the number of violating $H_i$-edges is at least $|M_c|$.

**Proof.** Feed $M$ and $H_i$ to the alternating path machinery. Set $X$ to be the set of all lower endpoints of $M_c$, so $|X| = |M_c|$. Since there are no skew pairs, by Lemma 4.2, all sequences $S_x$ must contain a violated $H_i$-edge. The total number of violated $H_i$-edges is at least $|X|$.

The above lemma proves Theorem 1. Observe that every pair in $M$ is a cross pair with respect to some $H_i$. The edge tester only requires $O(n/\varepsilon)$ queries, since the success probability of a single test is at least

$$\frac{1}{|H|}\sum_{r=1}^{n} \#\{\text{violated } H_r\text{-edges}\} \;\ge\; \frac{|M|}{n2^{n-1}} \;\ge\; \frac{\varepsilon}{2n}.$$

## 6 Setting up for Hypergrids

We set up the framework for hypergrid domains. The arguments here are property independent. Consider the domain $D = [k]^n$. We define $H$ to be the set of pairs that differ in exactly one coordinate, and furthermore, the difference is a power of $2$. The tester chooses a pair in $H$ uniformly at random, and checks the property on this pair. We partition $H$ into sets $H_{i,j}$, for $1 \leq i \leq n$ and $0 \leq j < \log_2 k$. $H_{i,j}$ consists of pairs which differ only in the $i$th coordinate, and furthermore $|x_i - y_i| = 2^j$. Unfortunately, $H_{i,j}$ is not a matching, since each point can participate in potentially two pairs in $H_{i,j}$. To remedy this, we further partition $H_{i,j}$ into $H^{even}_{i,j}$ and $H^{odd}_{i,j}$. For any pair $(x,y) \in H_{i,j}$ with $y_i > x_i$, exactly one among $\lfloor x_i/2^j \rfloor$ and $\lfloor y_i/2^j \rfloor$ is even and one is odd. We put $(x,y)$ in $H^{even}_{i,j}$ if $\lfloor y_i/2^j \rfloor$ is even, and in the set $H^{odd}_{i,j}$ if it is odd. For example, $H_{1,0}$ has all pairs that only differ by $1$ in the first coordinate. We partition these pairs depending on whether the higher endpoint has even or odd first coordinate. Note that each $H^{even}_{i,j}$ and $H^{odd}_{i,j}$ is a matching. We have
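The partition of the hypergrid pair set can be sketched and sanity-checked as follows (enumeration code of our own, with 0-based coordinates):

```python
import itertools

def hypergrid_pairs(n, k):
    """Group the pairs of [k]^n differing by a power of 2 in one coordinate i,
    keyed by (i, j, parity), where the difference is 2^j and parity is that
    of floor(upper_i / 2^j). Each group is a matching."""
    groups = {}
    for x in itertools.product(range(k), repeat=n):
        for i in range(n):
            j, step = 0, 1
            while x[i] + step < k:
                y = x[:i] + (x[i] + step,) + x[i + 1:]
                parity = 'even' if (y[i] // step) % 2 == 0 else 'odd'
                groups.setdefault((i, j, parity), []).append((x, y))
                j, step = j + 1, 2 * step
    return groups
```

On the line $[4]$, for instance, the group at scale $1$ with odd upper endpoints is $\{(0,1), (2,3)\}$, and no point repeats within any group.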