Algorithmic construction of the subdifferential from directional derivatives

Work of the first author was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), Discovery Grant #2015-05311. Work of the second author was supported by NSERC, Discovery Grant #355571-2013.
Abstract
The subdifferential of a function generalizes the concept of the gradient to nonsmooth functions. It is frequently used in variational analysis, particularly in the context of nonsmooth optimization. The present work proposes algorithms to reconstruct a polyhedral subdifferential of a function from the computation of finitely many directional derivatives. We provide upper bounds on the required number of directional derivatives when the space is $\mathbb{R}^1$ and $\mathbb{R}^2$, as well as in $\mathbb{R}^n$ where the subdifferential is known to possess at most three vertices.
Key words. subdifferential, directional derivative, polyhedral construction, Geometric Probing.
AMS subject classifications. Primary 52B12, 65K15; Secondary 49M37, 90C30, 90C56.
1 Introduction
The subdifferential of a nonsmooth function represents the set of generalized gradients for the function (a formal definition appears in Section 2). It can be used to detect descent directions [17, Thm 8.30], check first order optimality conditions [17, Thm 10.1], and create cutting planes for convex functions [17, Prop 8.12]. It shows up in numerous algorithms for nonsmooth optimization: steepest descent [13, Sec XII.3], projective subgradient methods [13, Sec XII.4], bundle methods [13, Sec XIV.3], etc. Its calculus properties have been well researched [17, Chpt 10], and many favourable rules have been developed. Overall, it is reasonable to state that the subdifferential is one of the most fundamental objects in variational analysis.
Given its role in nonsmooth optimization, it is no surprise that some researchers have turned their attention to the question of how to approximate a subdifferential using ‘simpler information’. Besides the mathematical appeal of such a question, this research has strong links to the fields of derivative-free optimization and Geometric Probing.
Derivative-free optimization focuses on the development of algorithms to minimize a function using only function values. Thus, in this case, ‘simpler information’ takes the form of function values. The ability to use function evaluations to approximate generalized gradient information is at the heart of the convergence analyses of many derivative-free optimization methods [1, 2, 9, 10]. Some research has explicitly proposed methods to approximate subdifferentials for the purposes of derivative-free optimization [3, 4, 15, 12, 11]. Many of these researchers focus on approximating the subdifferential by using a collection of approximate directional derivatives [3, 4, 15]. Recall that the directional derivative provides the slope of a function in a given direction in the classical limiting sense of single-variable calculus – a formal definition appears in Section 2. Directional derivatives are intimately linked to subdifferential maps (see Section 2) and are appealing in that they can be approximated by simple finite-difference formulae. This makes them an obvious tool for approximating subdifferentials. The present paper studies how many directional derivatives are needed to reconstruct the subdifferential.
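As a small illustration of the last point, a directional derivative of a finite max function can be estimated by a forward-difference quotient; the function, point, and step size below are illustrative choices, not taken from the works cited above.

```python
# Approximate the directional derivative f'(x; d) of f(x) = max(x1, x2)
# with the forward-difference quotient (f(x + t*d) - f(x)) / t.

def f(x):
    return max(x[0], x[1])

def directional_derivative(f, x, d, t=1e-8):
    fx = f(x)
    xt = [xi + t * di for xi, di in zip(x, d)]
    return (f(xt) - fx) / t

x = [0.0, 0.0]  # a point where f is not differentiable
print(directional_derivative(f, x, [1.0, 0.0]))    # slope along e^1: 1.0
print(directional_derivative(f, x, [-1.0, -1.0]))  # slope along -(e^1 + e^2): -1.0
```

For this piecewise-linear example the quotient is exact for any step size; in general, the step size controls the approximation error.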
Geometric Probing considers problems of determining the geometric structure of a set by using a probe [20, 18]. If the set is a subdifferential and the probe is a directional derivative, then the Geometric Probing problem is to reconstruct the subdifferential using the ‘simpler information’ of directional derivatives (details appear in Subsection 2.1). Geometric Probing first arose in the area of robotics, where tactile sensing is used to determine the shape of an object [6]. The problem has since been well studied in $\mathbb{R}^2$ [6, 16] and partly studied in $\mathbb{R}^3$ [14]. As Geometric Probing principally arises in robotics and computer vision, it is not surprising that literature outside of $\mathbb{R}^2$ and $\mathbb{R}^3$ appears absent.
In this paper we examine the links between directional derivatives and the ability to use them to reconstruct a subdifferential. We focus on the easiest case, where the subdifferential is a polytope and the directional derivatives are exact. Let $m$ denote the number of vertices of the subdifferential, and let $\bar m \ge m$ be a given upper bound on $m$. (The value $\bar m = \infty$ is accepted to represent the situation when no upper bound is known.) We show that,

- in $\mathbb{R}^1$, the subdifferential can be reconstructed using a single directional derivative evaluation if $\bar m = 1$, and 2 evaluations otherwise (Subsection 3.1);
- in $\mathbb{R}^2$, if $\bar m = m$, then the subdifferential can be reconstructed using at most $3m$ directional derivative evaluations (Subsection 3.2);
- in $\mathbb{R}^2$, if $\bar m > m$ (including $\bar m = \infty$), then the subdifferential can be reconstructed using at most $3m + 1$ directional derivative evaluations (Subsection 3.2);
- in $\mathbb{R}^n$, if $\bar m = 2$, then the subdifferential can be reconstructed using at most $3n - 1$ directional derivative evaluations (Subsection 4.2); and
- in $\mathbb{R}^n$, if the subdifferential has at most $\bar m = 3$ vertices, then it can be reconstructed using a number of directional derivative evaluations that grows linearly with $n$ (Subsection 4.3).
These results can be loosely viewed as providing a lower bound on the number of approximate directional derivative evaluations that would be required to create a good approximation of the subdifferential in derivative-free optimization. The results also advance research in Geometric Probing, which historically has only considered polytopes in $\mathbb{R}^2$ and $\mathbb{R}^3$.
Before proving these results, Section 2 presents the necessary background for this work. It also includes a problem abstraction which links the research to Geometric Probing. Subsections 3.1 and 3.2 present the results in $\mathbb{R}^1$ and in $\mathbb{R}^2$, and Section 4 focuses on $\mathbb{R}^n$. The paper concludes with some thoughts on the challenge of reconstructing polyhedral subdifferentials when directional derivatives are only available via finite difference approximations, and some other possible directions for future research.
2 Definitions and problem abstraction
Given a nonsmooth function $f : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ and a point $\bar x$ where $f(\bar x)$ is finite, we define the regular subdifferential of $f$ at $\bar x$, denoted $\hat\partial f(\bar x)$, as the set
$\hat\partial f(\bar x) = \left\{ v \in \mathbb{R}^n : \liminf_{x \to \bar x,\, x \ne \bar x} \frac{f(x) - f(\bar x) - \langle v, x - \bar x \rangle}{\|x - \bar x\|} \ge 0 \right\}.$
If $f$ is convex, then this is equivalent to the classical subdifferential of convex analysis [17, Prop 8.12], i.e.,
$\hat\partial f(\bar x) = \{ v \in \mathbb{R}^n : f(x) \ge f(\bar x) + \langle v, x - \bar x \rangle \text{ for all } x \in \mathbb{R}^n \}.$
In this paper, we consider the situation where the subdifferential is a polyhedral set. This arises, for example, when $f$ is a finite max function. In particular, if
(1)  $f(x) = \max\{ f_i(x) : i = 1, 2, \ldots, N \},$
where each $f_i$ is continuously differentiable, then
(2)  $\hat\partial f(\bar x) = \operatorname{conv}\{ \nabla f_i(\bar x) : i \in A(\bar x) \},$
where $A(\bar x) = \{ i : f_i(\bar x) = f(\bar x) \}$ is the set of active indices at $\bar x$ [17, Ex 8.31]. It follows that, in this case, the subdifferential is a (nonempty) polytope with at most $N$ vertices.
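To make formula (2) concrete, the following sketch lists the active gradients of a small finite max function built from affine pieces; the particular pieces and the evaluation point are illustrative assumptions.

```python
# For f(x) = max_i f_i(x) with smooth pieces, the subdifferential at xbar is
# conv{ grad f_i(xbar) : i active at xbar }.  Here the pieces are affine,
# f_i(x) = <a_i, x> + b_i, so each gradient is the constant vector a_i.

pieces = [((1.0, 0.0), 0.0),     # f_1(x) = x1
          ((0.0, 1.0), 0.0),     # f_2(x) = x2
          ((-1.0, -1.0), -1.0)]  # f_3(x) = -x1 - x2 - 1

def f_val(x):
    return max(a[0] * x[0] + a[1] * x[1] + b for a, b in pieces)

def active_gradients(x, tol=1e-12):
    # gradients of the pieces attaining the max at x
    fx = f_val(x)
    return [a for a, b in pieces
            if abs(a[0] * x[0] + a[1] * x[1] + b - fx) <= tol]

xbar = (0.0, 0.0)
# f_1 and f_2 are active at xbar (both equal 0); f_3 gives -1 and is inactive,
# so the subdifferential is the segment conv{(1,0), (0,1)}.
print(active_gradients(xbar))
```

The subdifferential itself is the convex hull of the returned list, a polytope with at most $N$ vertices as stated above.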
Related to the subdifferential is the directional derivative. Formally, the directional derivative of a continuous function $f$ at $\bar x$ in the direction $d$ is
$f'(\bar x; d) = \lim_{t \searrow 0} \frac{f(\bar x + t d) - f(\bar x)}{t}.$
Given a possibly nonsmooth function $f$ and a point $\bar x$ where $f(\bar x)$ is finite, $f'(\bar x; d)$ is defined via
$f'(\bar x; d) = \liminf_{t \searrow 0,\, d' \to d} \frac{f(\bar x + t d') - f(\bar x)}{t}.$
This directional derivative is also known as the Hadamard lower derivative [7], or the semiderivative [17]. Directional derivatives are linked to the subdifferential through the following classical formula
(3)  $f'(\bar x; d) = \max\{ \langle v, d \rangle : v \in \hat\partial f(\bar x) \} \quad \text{for all } d \in \mathbb{R}^n.$
Thus, given all possible directional derivatives, one can recreate the subdifferential as the infinite intersection of halfspaces,
$\hat\partial f(\bar x) = \bigcap_{\|d\| = 1} \{ v : \langle v, d \rangle \le f'(\bar x; d) \}.$
Using this to develop a numerical approach to construct the subdifferential is, in general, impractical. However, if the subdifferential is a polytope, then it may be possible to reconstruct the exact subdifferential using a finite number of directional derivative evaluations.
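The halfspace description above can be mimicked numerically: each probed direction contributes one halfspace containing the subdifferential. In the sketch below, an illustrative triangle stands in for the subdifferential, and its support function plays the role of the directional derivative oracle (as formula (3) permits).

```python
import math

# Support-function oracle of a polytope P given by its vertices:
# O(d) = max{ <v, d> : v a vertex of P }.  By (3), this is exactly what a
# directional derivative evaluation returns when P is the subdifferential.
verts = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]  # illustrative triangle

def oracle(d):
    return max(v[0] * d[0] + v[1] * d[1] for v in verts)

# Each probe gives one halfspace {v : <v, d> <= oracle(d)} containing P;
# intersecting finitely many of them yields an outer approximation of P.
directions = [(math.cos(2 * math.pi * j / 8), math.sin(2 * math.pi * j / 8))
              for j in range(8)]
halfspaces = [(d, oracle(d)) for d in directions]

# sanity check: every vertex of P satisfies every generated constraint
for v in verts:
    assert all(v[0] * d[0] + v[1] * d[1] <= t + 1e-12 for d, t in halfspaces)
print("all", len(halfspaces), "halfspaces contain P")
```

Sampling directions densely only tightens the outer approximation; the algorithms of this paper instead choose finitely many directions adaptively so that the exact polytope is recovered.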
In general we will consider two basic cases:

Case I: $\hat\partial f(\bar x)$ is a polytope, and an upper bound $\bar m < \infty$ on its number of vertices is known;

Case II: $\hat\partial f(\bar x)$ is a polytope, but no upper bound on the number of vertices is available ($\bar m = \infty$).
Case I corresponds to the situation where $f$ is a finite max function and the number $N$ of subfunctions used in constructing $f$ is known. Case II corresponds to the case where $f$ is a finite max function, but no information about the function is available. We shall see that the algorithms for both cases are the same, but Case I provides the potential for early termination.
2.1 Links to Geometric Probing
Under our assumption that $\hat\partial f(\bar x)$ is a nonempty polytope, and in light of Equation (3), we reformulate the problem in the following abstract manner:
Working in $\mathbb{R}^n$, the goal is to find all vertices of a nonempty compact polytope $P$ using an oracle $\mathcal{O}(\cdot)$ that returns the value $\mathcal{O}(d) = \max\{ \langle x, d \rangle : x \in P \}$.
In general, polytopes will be denoted by a capital letter variable ($P$, $O$, ...) and their corresponding sets of vertices will be denoted using a subscript $V$ (e.g., $P_V$). In $\mathbb{R}^2$, we shall denote the edges of polytope $P$ by using a subscript $E$: $P_E$. Linking to constructing subdifferentials is achieved by setting $P = \hat\partial f(\bar x)$ and $\mathcal{O}(d) = f'(\bar x; d)$. The assumptions for our two cases can be written as

Case I: $P$ is a nonempty polytope with $m$ vertices, and $\bar m \in [m, \infty)$ is given,

Case II: $P$ is a nonempty polytope with $m$ vertices, and $\bar m = \infty$ is given.
The interest in this reformulation is that it almost exactly corresponds to what is known as the Hyperplane Probing problem in the field of Geometric Probing. In Hyperplane Probing the goal is to determine the shape of a polyhedral set in $\mathbb{R}^2$ given a hyperplane probe which measures the first time and place that a hyperplane moving parallel to itself intersects the polyhedral set [8, 20]. In Hyperplane Probing, it is generally assumed that the origin is in the interior of the set, although this is only for convenience and has no effect on algorithm design [8].
Hyperplane Probing dates back to at least 1986 [8], where it was shown to be the dual problem to Finger Probing (in which the probe measures the point where a ray exits the polyhedral set). Using this knowledge, it was proven that to fully determine a polyhedral set with $m$ vertices, $3m - 1$ probes are necessary, and $3m$ probes are sufficient [8]. Some variants of Hyperplane Probing exist. In 1986, Bernstein considered the case when the polyhedral set is one of a finite list of potential sets [5]. In this case the number of probes can be reduced to $2m + k$, where $m$ is the number of vertices of the set and $k$ depends on the size of the potential list of polyhedral sets. In another variant, a double hyperplane probe is considered, which provides both the first and last place that a hyperplane moving parallel to itself intersects the polyhedral set [19]. This extra information allows the resulting algorithm to terminate after fewer probes.
Our problem differs from Hyperplane Probing in two small ways. First, instead of providing the first time and place that a hyperplane moving parallel to itself intersects the polyhedral set, our assumptions provide an oracle that yields the first time but does not give the place. Interestingly, this reduction of information has very little impact on the algorithm or convergence. Indeed, we find it is sufficient to use $3m + 1$ oracle calls (as opposed to $3m$ for Hyperplane Probing). In our case, the extra call is required to confirm that the final suspected vertex is indeed a vertex. Second, we consider the space of polytopes in $\mathbb{R}^n$, instead of polyhedral sets in $\mathbb{R}^2$. While some recent research has examined Hyperplane Probing in $\mathbb{R}^3$ [14], to our knowledge no research has explored the most general case of $\mathbb{R}^n$. It is worth noting that the original work of Dobkin, Edelsbrunner, and Yap [8] defined Hyperplane Probing as a problem in $\mathbb{R}^n$, but only studied the problem in $\mathbb{R}^2$.
2.2 Notation
Using the oracle notation of Geometric Probing, we introduce the following notation. The vector $e^i$ denotes the unit vector in the direction of the $i^{\text{th}}$ coordinate. For a vector $d \in \mathbb{R}^n$ and a value $t \in \mathbb{R}$, we define the generated constraint halfspace by
$H^-(d, t) = \{ x \in \mathbb{R}^n : \langle d, x \rangle \le t \}$
and the generated hyperplane by
$H(d, t) = \{ x \in \mathbb{R}^n : \langle d, x \rangle = t \}.$
Finally, given a set of vectors $D = \{ d^1, d^2, \ldots, d^\ell \}$ and corresponding values $T = \{ t_1, t_2, \ldots, t_\ell \}$, we define the generated constraint set by
$C(D, T) = \bigcap_{i=1}^{\ell} H^-(d^i, t_i).$
3 One- and two-dimensional spaces
3.1 One-dimensional space
When working in $\mathbb{R}^1$ the problem is trivially solved. Indeed, in $\mathbb{R}^1$, the number of vertices of $P$ must be either 1 or 2, i.e., the polytope will either be a single point, or a closed interval.
If $\bar m$ is known to be equal to 1, then a single evaluation suffices: $P_V = \{\mathcal{O}(1)\}$. If $\bar m$ is unknown or is known to be greater than 1, then exactly two evaluations suffice. Specifically, evaluate $\mathcal{O}(1)$ and $-\mathcal{O}(-1)$; if both are equal then $m = 1$ and $P_V = \{\mathcal{O}(1)\}$, otherwise $m = 2$ and $P_V = \{-\mathcal{O}(-1), \mathcal{O}(1)\}$.
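The one-dimensional procedure just described can be written directly; the oracle below is built from an illustrative vertex list.

```python
# Reconstruct a polytope P = [a, b] in R^1 from its support oracle
# O(d) = max{ v*d : v in P }.  Two calls suffice: O(1) = b and -O(-1) = a.

def make_oracle(vertices):
    return lambda d: max(v * d for v in vertices)

def reconstruct_1d(oracle, mbar=None):
    if mbar == 1:                  # singleton: one call suffices
        return [oracle(1.0)]
    hi = oracle(1.0)               # right endpoint
    lo = -oracle(-1.0)             # left endpoint
    return [hi] if lo == hi else [lo, hi]

print(reconstruct_1d(make_oracle([3.0]), mbar=1))  # [3.0]
print(reconstruct_1d(make_oracle([-1.0, 2.0])))    # [-1.0, 2.0]
```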
3.2 Two-dimensional space
In $\mathbb{R}^2$, the problem becomes more difficult, as $m$ or its upper bound $\bar m$ can take on any positive integer value. Our proposed algorithm (Algorithm 3.1 below) continually refines two polyhedral approximations of $P$. The set $O$ is a polyhedral outer approximation of $P$, and the set $S$ is a polyhedral inner approximation of $P$. The outer approximation set is initialized with a triangle, and the inner approximation is initialized as the empty set. The algorithm proceeds by carefully truncating vertices of $O$ until a vertex of $O$ is proven to be a true vertex of $P$. As vertices of $O$ are found to be true vertices of $P$, they are added to $S$. The algorithm terminates when $S = O_V$ or when the cardinality of $S$ equals $\bar m$.
In the algorithm below, recall we denote the edge set of $O$ by $O_E$ and the vertex set of $O$ by $O_V$. Similarly, $P_E$ is the edge set of $P$ and $P_V$ is the vertex set of $P$.
Algorithm 3.1
Finding $P_V$ in $\mathbb{R}^2$,
given an oracle $\mathcal{O}(\cdot)$, and
given an upper bound on the number of vertices: $\bar m \ge m$, $\bar m \in \{1, 2, \ldots\} \cup \{\infty\}$.

Initialize:
Define the initial outer approximation polytope $O = C(D, T)$ with $D = \{e^1, e^2, -(e^1 + e^2)\}$ and $T = \{\mathcal{O}(d) : d \in D\}$. If $O$ is a singleton, then set $S = O$ and terminate. Otherwise, determine the 3 vertices of $O$ and enumerate them clockwise $v^1, v^2, v^3$. Create the (empty) initial inner approximation $S = \emptyset$, and initialize counter $i = 1$ and set $k = 3$.

While $S \ne O_V$ and $|S| < \bar m$:
Set $a = v^i$, $b = v^{i+1}$, and $c = v^{i+2}$, working modularly if $i + 2 > k$. Choose $d$ such that $\langle d, a \rangle = \langle d, c \rangle < \langle d, b \rangle$. Compute $t = \mathcal{O}(d)$.
If $t = \langle d, b \rangle$, then $b \in P_V$: update $S = S \cup \{b\}$, $T$, and $i = i + 1$.
If $t = \langle d, a \rangle$, then $a, c \in P_V$: update $S = S \cup \{a, c\}$, $O = O \cap H^-(d, t)$, re-enumerate starting at $a$ working clockwise, $k = k - 1$.
Otherwise (if $\langle d, a \rangle < t < \langle d, b \rangle$), then two new potential vertices are discovered: compute the two distinct intersection points of $H(d, t)$ with the edges of $O$, update $O = O \cap H^-(d, t)$, re-enumerate starting at $a$ working clockwise, $k = k + 1$.

Return $S$.
Before examining the algorithm’s convergence properties, we provide an illustrative example.
Example 3.2
Consider the polyhedral set $P$ in Figure 1, a triangle in $\mathbb{R}^2$.
Using the notation of Subsection 2.2, $O$ is initialized as $C(D, T)$ with $D = \{e^1, e^2, -(e^1 + e^2)\}$ and $T = \{\mathcal{O}(d) : d \in D\}$. This creates a triangle containing $P$. If the triangle were degenerate, i.e., $O$ was a singleton, then the problem would be solved: $P_V = O$. In this example, $O$ is not degenerate, hence has exactly 3 extreme points, which we label $v^1, v^2, v^3$, see Figure 2 (left). Notice that $O$ would still be a triangle if the set $P$ were a line segment rather than a triangle.
Setting $i = 1$, the algorithm creates a hyperplane parallel to the line segment adjoining $v^1$ and $v^3$ of Figure 2: $d$ is set so that $\langle d, v^1 \rangle = \langle d, v^3 \rangle < \langle d, v^2 \rangle$. The fourth oracle call is used to check for new potential vertices. In this case $\langle d, v^1 \rangle < t < \langle d, v^2 \rangle$, so two new potential vertices are discovered. As a result, the index $i$ is unchanged and only $O$ is updated. The result appears in Figure 2 (right) in which the indices of the vertices are relabelled.
The reordered vertices are now used to create a new hyperplane parallel to the line segment adjoining $v^1$ and $v^3$ (note $v^3$ has been relabelled from the left and right parts of Figure 2, so this line is distinct from the line in step 1). In this case the result is $t = \langle d, v^2 \rangle$, where $v^2$ is the middle vertex. As a result, the index is incremented to $i = 2$, and $v^2$ is the first vertex to be discovered. It is placed into $S$. The result appears in Figure 3 (left).
In step 3, the new hyperplane is parallel to the segment joining $v^2$ and $v^4$ of Figure 3 (left). Like step 1, the result is two new potential vertices, and $i$ remains unchanged, see Figure 3 (right).
Step 4, Figure 4 (left), discovers vertex $v^3$ with a hyperplane tangent to one of the sides of the triangle. The index is incremented, and $v^3$ is added to $S$.
Step 5, Figure 4 (right), creates two new potential vertices with a hyperplane parallel to the line segment joining the $a$ and $c$ from step 4. This is an example of using modularity.
In step 6, Figure 5 (left), we have $t = \langle d, a \rangle$, so two vertices are discovered simultaneously. (In this case, vertex $a$ was actually already known, but this is not necessarily always the case.) This results in the vertices being relabelled and $c$ being added to $S$.
If $\bar m = 3$, then the algorithm can stop at this point, as $S$ contains three known vertices. Conversely, if $\bar m = \infty$, then the algorithm requires one more step to truncate the last potential vertex from the potential vertex list, see Figure 5 (right).
In summary, the algorithm uses either a total of 9 oracle calls if $\bar m = 3$, or 10 oracle calls if $\bar m = \infty$. In both situations it returns $S = P_V$.
Example 3.2 demonstrates the ideas behind the algorithm, and shows that it is possible to require $3m + 1$ oracle calls. The next theorem proves the algorithm converges to the correct vertex set. It also proves that, if $\bar m = m$, then at most $3m$ oracle calls will be required, and if $\bar m > m$ (including $\bar m = \infty$), then at most $3m + 1$ oracle calls will be required.
Theorem 3.3 (Convergence of Algorithm 3.1)
Let $P$ be a polytope in $\mathbb{R}^2$ with $m$ vertices contained in set $P_V$. Then Algorithm 3.1 terminates with $S = P_V$, using at most $3m$ oracle calls if $\bar m = m$, and at most $3m + 1$ oracle calls if $\bar m > m$ (including $\bar m = \infty$).
Proof: We shall use the notation of Subsection 2.2. First note, if $O$ is a singleton, then the initialization step will result in $S = O = P_V$ and the algorithm terminates after 3 oracle calls.
If $O$ is not a singleton, then each oracle call of the algorithm introduces a new tangent plane to $P$. Specifically,
$H(d, t)$, with $t = \mathcal{O}(d)$,
is a tangent plane to $P$. As $P$ is polyhedral, we must have $H(d, t) \cap P_V \ne \emptyset$. Let $v \in H(d, t) \cap P_V$. The vertex $v$ lies in one of three sets: the interior, the edges, or the vertices of $O$.
If $v$ lies in the interior of $O$, then the previously undiscovered vertex $v$ of $P$ has been added to $O_E$, since the truncation leaves $v$ on the newly created edge of $O$. As $P$ has $m$ vertices, this can happen at most $m$ times.
If $v$ lies on an edge of $O$, then it will be shifted from $O_E$ into the potential vertex set $O_V$. Again, as $P$ has $m$ vertices, this can happen at most $m$ times.
Finally, if $v \in O_V$, then we are in the case of $t = \langle d, b \rangle$ or $t = \langle d, a \rangle$, so the potential vertex has been confirmed as a true vertex of $P$ and placed in $S$. This can happen at most $m$ times.
Thus, after at most $3m$ oracle calls, $S$ will contain all vertices of $P$. If $\bar m = m$, then the algorithm will terminate at this point.
If $\bar m > m$, then after at most $3m$ oracle calls, $S = P_V$, but $O_V$ may still contain a potential vertex $v \notin P_V$. One final oracle call will remove $v$ from $O_V$, making $O_V = S$, and the algorithm will terminate.
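The trichotomy used in the proof can be illustrated in code. The sketch below implements a single probe for a hidden polygon: given three consecutive vertices $a$, $b$, $c$ of an outer approximation, it probes the direction normal to the chord joining $a$ and $c$ and reports which case occurred. The polygon and the outer-approximation vertices are illustrative choices, and the step is a reconstruction of the idea rather than the paper's exact pseudocode.

```python
# One probe of the 2-D reconstruction: try to cut off the middle vertex b of
# the outer approximation, probing the direction d normal to the chord [a, c].

def oracle(verts, d):
    return max(v[0] * d[0] + v[1] * d[1] for v in verts)

def probe_step(P_verts, a, b, c, tol=1e-9):
    # d is normal to c - a, oriented so that <d, b> > <d, a> = <d, c>
    d = (-(c[1] - a[1]), c[0] - a[0])
    if d[0] * (b[0] - a[0]) + d[1] * (b[1] - a[1]) < 0:
        d = (-d[0], -d[1])
    t = oracle(P_verts, d)

    def val(v):
        return v[0] * d[0] + v[1] * d[1]

    if abs(t - val(b)) <= tol:
        return "b is a vertex of P"
    if abs(t - val(a)) <= tol:
        return "a and c are vertices of P; b is cut off"
    return "two new potential vertices"

# hidden polytope and outer-approximation corners consistent with its bounds
P = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
print(probe_step(P, (4.0, -4.0), (4.0, 3.0), (-3.0, 3.0)))  # cuts corner (4, 3)
print(probe_step(P, (4.0, -4.0), (4.0, 0.0), (1.0, 3.0)))   # confirms (4, 0)
```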
In some situations it is possible to terminate the algorithm early.
Lemma 3.4 (Improved stopping when $\bar m = m$)
Let $P$ be a polytope with $m$ vertices contained in set $P_V$. Suppose $\bar m = m$. Suppose the algorithm has run to the point where $m - 1$ vertices are identified. If $O$ contains two edges that are not adjacent to any of the known vertices, then the intersection of those two edges must be the final vertex. As such, the algorithm can be terminated.
Lemma 3.4 is particularly useful when $\bar m = 2$.
Corollary 3.5 (Special case of $\bar m = 2$)
Let $P$ be a polytope with $m$ vertices contained in set $P_V$. If $\bar m = 2$, then the algorithm can be terminated after just 5 oracle calls.
Proof: Following the logic in the proof of Theorem 3.3, at most 2 oracle calls can move a vertex from the interior of $O$ to the edge set of $O$, and at most 2 oracle calls can move a vertex from the edge set of $O$ to the vertex set of $O$. Therefore, after 5 oracle calls, at least one vertex has been identified. If two vertices are identified, then we are done. Otherwise, we must be in the situation shown in Figure 6.
In particular, we must have one vertex of $P$ that has 3 lines through it, one of which is redundant in defining $O$. The other 2 lines that make up $O$ must not intersect this vertex, and cannot create a vertex of $O$ with 3 lines through it. The only possible way to do this is a quadrilateral. Lemma 3.4 now applies, so the final vertex can be identified without any additional oracle calls.
It is worth noting that, unless the special termination trick in Lemma 3.4 applies, the bounds provided in Theorem 3.3 are tight, as was demonstrated in Example 3.2.
It is clear that there is nothing particularly special about the directions $e^1$, $e^2$, and $-(e^1 + e^2)$ used in the initialization of Algorithm 3.1. If these directions are replaced by any three directions that positively span $\mathbb{R}^2$, then the algorithm behaviour is essentially unchanged. More interestingly, the initialization set can be replaced by any set $C(D, T)$ where $D$ positively spans $\mathbb{R}^2$, with the only negative impact being the potential to waste oracle calls during the initialization phase. The following lemma analyses this situation and will be referred to later in the paper.
Lemma 3.6
Suppose the initialization set in Algorithm 3.1 is replaced by $C(D, T)$, where $D = \{d^1, d^2, \ldots, d^\ell\}$ positively spans $\mathbb{R}^2$ in the sense that every $x \in \mathbb{R}^2$ can be written as $x = \sum_{i=1}^{\ell} \lambda_i d^i$ with $\lambda_i \ge 0$ for all $i$.
If $\bar m = m$, then Algorithm 3.1 terminates after at most $3m + (\ell - 3)$ oracle calls.
If $\bar m > m$, then Algorithm 3.1 terminates after at most $3m + 1 + (\ell - 3)$ oracle calls.
In either case, when the algorithm terminates we have $S = P_V$.
Proof: Since $D$ positively spans $\mathbb{R}^2$, the initialization directions allow the algorithm to create a compact initialization polytope $C(D, T)$ that contains $P$. If a vertex of $C(D, T)$ is defined by 3 or more oracle calls, then any oracle call past the first 2 is potentially wasted. However, all other oracle calls follow the same rules as those in the proof of Theorem 3.3. The bounds follow from the fact that a maximum of $\ell - 3$ oracle calls will be wasted.
We conclude this section with a remark that will be important in analyzing Algorithm 4.3.
Remark 3.7
If the initialization set $C(D, T)$ is used and every vertex of the resulting polytope is defined by exactly 2 of the oracle calls, then no oracle calls will be wasted in creating the initialization set. Thus, in this case, the exact bounds of Theorem 3.3 will still hold. (If the resulting $C(D, T)$ is a singleton, then the initialization will use $\ell$ oracle calls instead of 3, but the algorithm will still terminate immediately after the initialization step.)
4 $n$-dimensional space with at most three vertices
This section is devoted to $\mathbb{R}^n$ in which the polytope has at most 3 vertices. We consider 3 subcases.
4.1 $n$-dimensional subcase with $\bar m = 1$
The simplest case in $\mathbb{R}^n$ occurs when the upper bound on the number of vertices of $P$ is $\bar m = 1$. This trivial case is solved by $n$ calls to the oracle $\mathcal{O}$: For $i = 1, 2, \ldots, n$, evaluate $\mathcal{O}(e^i)$ and return
$P_V = \{ (\mathcal{O}(e^1), \mathcal{O}(e^2), \ldots, \mathcal{O}(e^n)) \}.$
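In code, this case is a direct coordinate read-off; the oracle below is the support function of an illustrative singleton.

```python
# If P = {v} is a singleton in R^n, then O(e^i) = v_i, so n oracle calls
# recover the point exactly.

def make_oracle(vertices):
    return lambda d: max(sum(vi * di for vi, di in zip(v, d)) for v in vertices)

def reconstruct_singleton(oracle, n):
    e = lambda i: [1.0 if j == i else 0.0 for j in range(n)]
    return [oracle(e(i)) for i in range(n)]

oracle = make_oracle([(1.5, -2.0, 0.5)])  # illustrative hidden point
print(reconstruct_singleton(oracle, 3))   # [1.5, -2.0, 0.5]
```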
4.2 $n$-dimensional subcase with $\bar m = 2$
The next simplest case is when the upper bound is $\bar m = 2$. We present an alternate algorithm for this case, which uses a similar approach in that it constructs an outer approximation of $P$. However, in this case a hyperrectangle is used in the initialization phase to bound the vertices.
Algorithm 4.1
Finding $P_V$ in $\mathbb{R}^n$,
given an oracle $\mathcal{O}(\cdot)$, and
given the upper bound on the number of vertices is $\bar m = 2$.

Initialize:
Define the initial outer approximation hyperrectangle $O = C(D, T)$ with $D = \{\pm e^1, \pm e^2, \ldots, \pm e^n\}$ and $T = \{\mathcal{O}(d) : d \in D\}$. If $O$ is a singleton, then return $S = O$ and terminate. Otherwise, set $u_i = \mathcal{O}(e^i)$ and $l_i = -\mathcal{O}(-e^i)$ for all $i$. By relabelling indices if necessary, assume $l_1 < u_1$. Initialize points $x = (u_1, u_2, \ldots, u_n)$ and $y = (l_1, l_2, \ldots, l_n)$.

For $i$ from 2 to $n$:
If $l_i < u_i$, then choose $d$ with $d_1 = (u_i - l_i)/(u_1 - l_1)$, $d_i = 1$, and $d_j = 0$ for all indices $j \notin \{1, i\}$; if $\mathcal{O}(d) \ne \langle d, x \rangle$, then reset $x_i = l_i$ and $y_i = u_i$.

Return $S = \{x, y\}$.
Theorem 4.2 (Convergence of Algorithm 4.1)
Let $P$ be a polytope with $m$ vertices. Let the known upper bound for $m$ be $\bar m = 2$. Then Algorithm 4.1 terminates with $S = P_V$, using at most $3n - 1$ oracle calls.
Proof: The initialization phase of the algorithm calls the oracle exactly $2n$ times, providing bounds $l_i \le x_i \le u_i$ for each index $i$. It is obvious that $O$ is a singleton if and only if $l_i = u_i$ for all $i$, in which case $P_V = \{l\}$. Otherwise the algorithm proceeds into the iterative loop, which identifies the components of the two vertices of $P$.
For each $i$, the bound $u_i$ is initially assigned to $x_i$, and $l_i$ is initially assigned to $y_i$. If both lower and upper bounds are equal, $l_i = u_i$, then both $x_i$ and $y_i$ remain at this value. When they differ, an additional call to the oracle will indicate to which of $x$ or $y$ the bounds should be associated. The algorithm constructs the vector $d$ in such a way that the vertex of $P$ whose first component is $u_1$ maximizes $\langle d, \cdot \rangle$ over $P$, which implies that $\mathcal{O}(d) = \langle d, x \rangle$ exactly when the upper bound $u_i$ belongs to that vertex. Therefore, if $\mathcal{O}(d) = \langle d, x \rangle$, then the lower bound was correctly attributed to $y_i$ and the upper bound was correctly attributed to $x_i$. Otherwise, they need to be swapped: $x_i = l_i$ and $y_i = u_i$.
The total number of calls to the oracle is $2n$ for the initialization step plus a maximum of $n - 1$ from the loop of $i = 2$ to $n$. Thus, an overall maximum of $3n - 1$ oracle calls is required.
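The pairing test at the heart of Algorithm 4.1 can be sketched as follows; the coupling direction with $d_1 = (u_i - l_i)/(u_1 - l_1)$ is one concrete choice that makes the test in the proof work, offered as an illustrative reconstruction rather than the paper's exact formula.

```python
# Reconstruct the two endpoints of a segment P = conv{x, y} in R^n.
# 2n calls bound each coordinate; one extra call per ambiguous coordinate
# decides whether its upper bound pairs with x_1 = u_1 or with y_1 = l_1.

def make_oracle(vertices):
    return lambda d: max(sum(vi * di for vi, di in zip(v, d)) for v in vertices)

def reconstruct_segment(oracle, n):
    e = lambda i, s: [s if j == i else 0.0 for j in range(n)]
    u = [oracle(e(i, 1.0)) for i in range(n)]    # upper bounds
    l = [-oracle(e(i, -1.0)) for i in range(n)]  # lower bounds
    if l == u:
        return [l]                               # singleton
    k = next(i for i in range(n) if l[i] < u[i])
    x, y = list(u), list(l)                      # guess: all uppers pair with u_k
    for i in range(n):
        if i == k or l[i] == u[i]:
            continue
        d = [0.0] * n
        d[k] = (u[i] - l[i]) / (u[k] - l[k])     # coupling weight
        d[i] = 1.0
        if oracle(d) != sum(di * xi for di, xi in zip(d, x)):
            x[i], y[i] = l[i], u[i]              # guess was wrong: swap
    return [x, y]

oracle = make_oracle([(0.0, 5.0, 1.0), (2.0, 3.0, 1.0)])  # hidden segment
print(reconstruct_segment(oracle, 3))
```

For this hidden segment the third coordinate is unambiguous ($l_3 = u_3$), so only one extra oracle call is needed beyond the $2n$ initialization calls.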
4.3 $n$-dimensional subcase with $\bar m = 3$
Reconstructing the subdifferential gets more complicated as the number of vertices increases. In this last subcase, we propose a method for the $n$-dimensional case where the number of vertices is bounded by $\bar m = 3$. The proposed strategy exploits the following fact. Let $\operatorname{proj}_L(P)$ be the projection on a linear subspace $L$ of the polytope $P$. Any vertex of $\operatorname{proj}_L(P)$ is the projection of a vertex of $P$ [21]. If the number of vertices is small ($m \le 3$), then the combinatorics involved in deducing the vertices of $P$ from those of $\operatorname{proj}_L(P)$ is manageable.
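The projection fact can be observed directly through the oracle: probing $P$ only in directions whose last component is zero is the same as probing the projection of $P$ onto the remaining coordinates. The polytope below is an illustrative choice.

```python
# Probing P in directions of the form (d', 0) is equivalent to probing
# proj(P), the projection of P onto the first n-1 coordinates:
#   max{ <(d', 0), v> : v in P } = max{ <d', w> : w in proj(P) }.

def make_oracle(vertices):
    return lambda d: max(sum(vi * di for vi, di in zip(v, d)) for v in vertices)

P_verts = [(0.0, 0.0, 0.0), (2.0, 0.0, 5.0), (0.0, 1.0, -1.0)]
proj_verts = [v[:2] for v in P_verts]  # projections of the vertices

oracle_P = make_oracle(P_verts)
oracle_proj = make_oracle(proj_verts)

for dp in [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0), (3.0, -2.0)]:
    assert oracle_P(dp + (0.0,)) == oracle_proj(dp)
print("restricted probes of P match probes of proj(P)")
```

This is what lets the algorithm below build the vertex set dimension by dimension: the calls made while solving the projected problem are genuine calls to the original oracle.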
We propose the following algorithm that proceeds by successively finding the vertices of the projections of $P$ on $\mathbb{R}^k$, for $k = 2, 3, \ldots, n$. The number of vertices of the projection in $\mathbb{R}^k$ is denoted $m_k$.
Algorithm 4.3
Finding $P_V$ in $\mathbb{R}^n$,
given an oracle $\mathcal{O}(\cdot)$, and
given the upper bound on the number of vertices is $\bar m = 3$.

Initialize:
Apply Algorithm 3.1 to obtain the $m_2$ vertices of the projection of $P$ in $\mathbb{R}^2$: $v^{j,2}$ for $j = 1, 2, \ldots, m_2$.

For $k$ from 3 to $n$:
Given $v^{j,k-1}$ for $j = 1, 2, \ldots, m_{k-1}$. Call the oracle twice and set $u_k = \mathcal{O}(e^k)$ and $l_k = -\mathcal{O}(-e^k)$. If $l_k = u_k$, then set $m_k = m_{k-1}$ and reset $v^{j,k} = (v^{j,k-1}, l_k)$ for each $j$. Otherwise $l_k < u_k$ and apply the appropriate case:
Case I: $m_{k-1} = 1$. Set $m_k = 2$, and reset $v^{1,k} = (v^{1,k-1}, l_k)$ and $v^{2,k} = (v^{1,k-1}, u_k)$.
Case II: $m_{k-1} = 2$. The vertices of the projection in $\mathbb{R}^k$ lie in the two-dimensional plane containing the vertical lines through $v^{1,k-1}$ and $v^{2,k-1}$. Use a change of variables to reduce this to a problem in $\mathbb{R}^2$ and apply Algorithm 3.1 using the current initialization state, as allowed by Lemma 3.6.
Case III: $m_{k-1} = 3$. Set $m_k = 3$. For $j = 1, 2, 3$:
Choose $d$ so that the maximum of $\langle d, x \rangle$ over $P$ is attained on the vertical edge of the prism above $v^{j,k-1}$. Call the oracle and reset the $k^{th}$ component of $v^{j,k}$ accordingly.

Return $\{ v^{j,n} : j = 1, 2, \ldots, m_n \}$.
Theorem 4.4 (Convergence of Algorithm 4.3)
Let $P$ be a polytope in $\mathbb{R}^n$ with $m \le 3$ vertices contained in set $P_V$. Then Algorithm 4.3 terminates with the correct vertex set $S = P_V$, using a number of oracle calls that is at most linear in the dimension $n$.
Proof: The proof is done by induction on $n$, the dimension of the space. The base case, $n = 2$, is covered by Theorem 3.3.
Now, suppose that the result is true in dimension $k - 1$ for some $k \in \{3, 4, \ldots, n\}$.
Thus, after the oracle calls made up to stage $k - 1$, the algorithm has correctly identified the vertices of $P$ projected onto $\mathbb{R}^{k-1}$, viewed as a subspace of $\mathbb{R}^n$.
As Algorithm 4.3 proceeds into the iterative loop, it uses 2 oracle calls to compute the bounds $l_k$ and $u_k$ on the $k^{th}$ variable. If both values are equal, then this implies that all vertices belong to the subspace where $x_k = l_k$, and no more calls to the oracle are required at this stage. If $l_k < u_k$, then the algorithm proceeds to Case I, II, or III.
In Case I, the 2 oracle calls used to compute $l_k$ and $u_k$ sufficed to find the vertices of the projection on $\mathbb{R}^k$. No other calls are required because all vertices of the projection belong to the one-dimensional subspace in which the first $k - 1$ components coincide with $v^{1,k-1}$ and the $k^{th}$ components lie in the interval $[l_k, u_k]$.
In Case II, $m_{k-1} = 2$, and the vertices of the projection in $\mathbb{R}^k$ must lie in the two-dimensional plane containing the vertical lines through $v^{1,k-1}$ and $v^{2,k-1}$. Remark 3.7 ensures that no oracle calls are wasted in the initialization of the two-dimensional subproblem, so the bounds of Theorem 3.3 apply to it; in particular, only a bounded number of additional oracle calls, independent of $n$, is required, whether $m_k = 2$ or $m_k = 3$.
In Case III, $m_{k-1} = 3$ and $l_k < u_k$. Figure 7 illustrates Case III. The three vertices must lie on the vertical edges of a triangular prism. One vertex must be on a topmost vertex of this prism, and another vertex must be on a bottommost vertex of the prism. The final vertex can be located anywhere on the third vertical edge of the prism. The algorithm uses 3 oracle calls to resolve the situation. Figure 7 depicts one step of the inner loop within Case III. The direction $d$ is selected so that $\langle d, \cdot \rangle$ takes a common value at the three vertices represented by squares, and a smaller value at the vertex represented by a circle. The hyperplane is represented by the shaded region. Calling the oracle reveals the position of the vertex on the edge joining the circle and the square above it. The final result is a total of 3 additional oracle calls for this case.
Figure 8 summarizes the total number of calls to the oracle needed to construct the vertices of the projection in $\mathbb{R}^k$ from those of $\mathbb{R}^{k-1}$. The left part of the figure lists the number of calls required to identify the vertices of the projection of $P$ onto $\mathbb{R}^{k-1}$. Immediately after each of the three possible values of $m_{k-1}$, the symbol “+2” indicates that two oracle calls are made to compute $l_k$ and $u_k$. The rest of the figure indicates the remaining number of oracle calls.
For example, if $m_{k-1} = 2$ then $m_k$ is necessarily equal to 2 or 3, and more calls are necessary (by Case II); if $m_{k-1} = 3$ then more calls are necessary (by Case III). The maximal value among these possibilities constitutes an upper bound on the number of oracle calls for the stage.
5 Discussion
We have studied the question of how to reconstruct a polyhedral subdifferential using directional derivatives. By reformulating the question as the reconstruction of an arbitrary polytope $P$ using an oracle that returns $\mathcal{O}(d) = \max\{ \langle x, d \rangle : x \in P \}$, we observed that the question is closely linked to Geometric Probing.
We have developed a number of algorithms that provide methods to reconstruct a polyhedral subdifferential using directional derivatives in various situations. However, many situations remain as open questions. Table 1 summarizes the results in this paper.
Space | Nb vertices and bound | Nb calls | Source
$\mathbb{R}^1$ | $m = 1$, $\bar m = 1$ | 1 | Subsection 3.1
$\mathbb{R}^1$ | $m \le 2$, $\bar m > 1$ | 2 | Subsection 3.1
$\mathbb{R}^2$ | $m = 1$, $\bar m = 1$ | 2 | Subsection 4.1
$\mathbb{R}^2$ | $m = \bar m < \infty$ | $3m$ | Algorithm 3.1 with Theorem 3.3
$\mathbb{R}^2$ | $m = 2$, $\bar m = 2$ | 5 | Algorithm 3.1 with Corollary 3.5
$\mathbb{R}^2$ | $m < \bar m < \infty$ | $3m + 1$ | Algorithm 3.1 with Theorem 3.3
$\mathbb{R}^2$ | $\bar m = \infty$ | $3m + 1$ | Algorithm 3.1 with Theorem 3.3
$\mathbb{R}^n$ | $m = 1$, $\bar m = 1$ | $n$ | Subsection 4.1
$\mathbb{R}^n$ | $m = 1$, $\bar m = 2$ | $2n$ | Algorithm 4.1 with Theorem 4.2
$\mathbb{R}^n$ | $m = 2$, $\bar m = 2$ | $3n - 1$ | Algorithm 4.1 with Theorem 4.2
$\mathbb{R}^n$ | $m = 1$, $\bar m = 3$ | linear in $n$ | Algorithm 4.3 with Theorem 4.4
$\mathbb{R}^n$ | $m = 2$, $\bar m = 3$ | linear in $n$ | Algorithm 4.3 with Theorem 4.4
$\mathbb{R}^n$ | $m = 3$, $\bar m = 3$ | linear in $n$ | Algorithm 4.3 with Theorem 4.4
From Table 1 we see that the problem can be considered solved in $\mathbb{R}^1$ and $\mathbb{R}^2$. However, in $\mathbb{R}^n$ with an arbitrary bound $\bar m$ on the number of vertices the problem is still open.
This research is inspired in part by recent techniques that create approximate subdifferentials by using a collection of approximate directional derivatives [3, 4, 15]. As such, a natural research direction in this field is to examine how to adapt these algorithms to inexact oracles, that is, an oracle that returns $\mathcal{O}(d) + \varepsilon_d$, where $\varepsilon_d$ is an unknown error term bounded by $|\varepsilon_d| \le \bar\varepsilon$. Some of the algorithms in this paper trivially adapt to this setting. Specifically, the algorithm of Subsection 3.1 (for $\mathbb{R}^1$) and the algorithm of Subsection 4.1 (for $\bar m = 1$) also work for inexact oracles and the error bounds are trivial to calculate. However, the more interesting algorithms (Algorithms 3.1, 4.1 and 4.3) are not so trivial to adapt.
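For the one-dimensional case the adaptation is simple enough to sketch: with oracle errors bounded by $\bar\varepsilon$, the recovered endpoints are within $\bar\varepsilon$ of the true ones. The noise model below is an illustrative assumption.

```python
# 1-D reconstruction from an inexact oracle O(d) = max{v*d : v in P} + err(d),
# with |err(d)| <= eps_bar.  The recovered endpoints are off by at most eps_bar.

eps_bar = 0.25

def inexact_oracle(d):
    true_val = max(v * d for v in (-1.0, 2.0))  # hidden interval P = [-1, 2]
    err = eps_bar if d > 0 else -eps_bar        # any error with |err| <= eps_bar
    return true_val + err

hi = inexact_oracle(1.0)    # estimate of the right endpoint
lo = -inexact_oracle(-1.0)  # estimate of the left endpoint
assert abs(hi - 2.0) <= eps_bar and abs(lo - (-1.0)) <= eps_bar
print([lo, hi])  # endpoints recovered to within eps_bar
```

For the planar and higher-dimensional algorithms no such direct bound is available, because the case tests compare oracle values for exact equality.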
We conclude this paper with Figure 9, which demonstrates a potentially problematic outcome of Algorithm 3.1 if run using an inexact oracle as if it were exact. The continuous lines represent the hyperplanes generated by an exact oracle, and the dotted ones are generated by an inexact one. In this example, the fifth oracle call is incompatible with Algorithm 3.1, as no new vertices are discovered.
Acknowledgements
The authors are grateful to the two anonymous referees for their time reviewing and constructive comments pertaining to this paper.
References
 [1] C. Audet and J.E. Dennis, Jr. Analysis of generalized pattern searches. SIAM Journal on Optimization, 13(3):889–903, 2003.
 [2] C. Audet and J.E. Dennis, Jr. Mesh adaptive direct search algorithms for constrained optimization. SIAM Journal on Optimization, 17(1):188–217, 2006.
 [3] A.M. Bagirov. Continuous subdifferential approximations and their applications. Journal of Mathematical Sciences (New York), 115(5):2567–2609, 2003. Optimization and related topics, 2.
 [4] A.M. Bagirov, B. Karasözen, and M. Sezer. Discrete gradient method: derivative-free method for nonsmooth optimization. Journal of Optimization Theory and Applications, 137(2):317–334, 2008.
 [5] H.J. Bernstein. Determining the shape of a convex n-sided polygon by using 2n + k tactile probes. Information Processing Letters, 22(5):255–260, 1986.
 [6] R. Cole and C.K. Yap. Shape from probing. Journal of Algorithms, 8(1):19–38, 1987.
 [7] V.F. Demyanov and A.M. Rubinov. Constructive nonsmooth analysis. Approximation and Optimization, vol. 7. Verlag Peter Lang, Frankfurt, 1995.
 [8] D. Dobkin, H. Edelsbrunner, and C.K. Yap. Probing convex polytopes. In Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing, STOC ’86, pages 424–432, New York, NY, USA, 1986. ACM.
 [9] D.E. Finkel and C.T. Kelley. Convergence analysis of the DIRECT algorithm. Technical Report CRSC-TR04-28, Center for Research in Scientific Computation, 2004.
 [10] D.E. Finkel and C.T. Kelley. Convergence analysis of sampling methods for perturbed Lipschitz functions. Pacific Journal of Optimization, 5(2):339–350, 2009.
 [11] W. Hare and J. Nutini. A derivative-free approximate gradient sampling algorithm for finite minimax problems. Computational Optimization and Applications, 56(1):1–38, 2013.
 [12] W. Hare and M. Macklem. Derivative-free optimization methods for finite minimax problems. Optimization Methods and Software, iFirst:1–13, 2011.
 [13] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms. II, volume 306 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1993. Advanced theory and bundle methods.
 [14] A. Imiya and K. Sato. Shape from silhouettes in discrete space. In E.V. Brimkov and P.R. Barneva, editors, Digital Geometry Algorithms: Theoretical Foundations and Applications to Computational Imaging, pages 323–358. Springer Netherlands, Dordrecht, 2012.
 [15] K.C. Kiwiel. A nonderivative version of the gradient sampling algorithm for nonsmooth nonconvex optimization. SIAM Journal on Optimization, 20(4):1983–1994, 2010.
 [16] M. Lindenbaum and A. Bruckstein. Parallel strategies for geometric probing. Journal of Algorithms, 13(2):320–349, 1992.
 [17] R.T. Rockafellar and R.J.B. Wets. Variational Analysis, volume 317 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1998.
 [18] K. Romanik. Geometric probing and testing: a survey. Technical Report 95-42, DIMACS, 1995.
 [19] S.-Y.R. Li. Reconstruction of polygons from projections. Information Processing Letters, 28(5):235–240, 1988.
 [20] S.S. Skiena. Problems in geometric probing. Algorithmica, 4(1):599–605, 1989.
 [21] H.R. Tiwary. Complexity of some polyhedral enumeration problems. PhD thesis, Universität des Saarlandes, Germany, 2008.