
# Applications of Variational Analysis to a Generalized Heron Problem


BORIS S. MORDUKHOVICH1, NGUYEN MAU NAM2 and JUAN SALINAS JR.3


Abstract. This paper is a continuation of our ongoing efforts to solve a number of geometric problems and their extensions by using advanced tools of variational analysis and generalized differentiation. Here we propose and study, from both qualitative and numerical viewpoints, the following optimal location problem as well as its further extensions: on a given nonempty subset of a Banach space, find a point such that the sum of the distances from it to given nonempty subsets of this space is minimal. This is a generalized version of the classical Heron problem: on a given straight line, find a point such that the sum of the distances from it to two given points is minimal. We show that the advanced variational techniques allow us to completely solve optimal location problems of this type in some important settings.
Key words. Heron problem and its extensions, variational analysis and optimization, generalized differentiation, minimal time function, convex and nonconvex sets.
AMS subject classifications. 49J52, 49J53, 90C31.

## 1 Introduction and Problem Formulation

In this paper we propose and largely investigate various extensions of the Heron problem, which seem to be mathematically interesting and important for applications. In particular, one extension of this type is to replace the two given points in the classical Heron problem by finitely many nonempty closed subsets of a Banach space and to replace the straight line therein by another nonempty closed subset of this space. The reader is referred to our paper [14] for partial results concerning a convex version of this problem in Euclidean spaces.

Recall that the classical Heron problem was posed by Heron of Alexandria (c. 10–75 AD) in his Catoptrica as follows: find a point on a straight line in the plane such that the sum of the distances from it to two given points is minimal; see [4, 6] for more discussions. We formulate the distance function version of the generalized Heron problem as follows:

$$\text{minimize } D(x):=\sum_{i=1}^{n}d(x;\Omega_i) \ \text{ subject to }\ x\in\Omega, \tag{1.1}$$

where $\Omega$ and $\Omega_i$, $i=1,\ldots,n$, are given nonempty closed subsets of a Banach space $X$ endowed with the norm $\|\cdot\|$, and where

$$d(x;Q):=\inf\big\{\|x-y\|\;\big|\;y\in Q\big\} \tag{1.2}$$

is the usual distance from the point $x$ to a set $Q$. Observe that in this new formulation the generalized Heron problem (1.1) is an extension of the generalized Fermat-Torricelli problem proposed and studied in [13]. The difference is that the latter problem is unconstrained, i.e., $\Omega=X$ in (1.1), while the presence of the geometric constraint $x\in\Omega$ in the generalized Heron version (1.1) makes it mathematically more complicated and more realistic for applications. Among the most natural areas of applications we mention constrained problems arising in location science, optimal networks, wireless communications, etc. We refer the reader to the corresponding discussions and results in [13] and the bibliographies therein concerning unconstrained Fermat-Torricelli-Steiner-Weber versions. Needless to say, the presence of geometric (generally nonconvex) constraints in (1.1) essentially changes these versions while referring us back to the original Heron geometric problem.
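For orientation, the classical case of (1.1) can be solved numerically by a one-dimensional search along the constraint line. The following sketch is added here for illustration only; the two data points $(0,2)$ and $(4,1)$ and the $x$-axis constraint are our own assumptions, chosen so that the classical reflection argument gives the exact answer $\bar x=(8/3,0)$ with minimal sum $5$.

```python
import math

def d(p, q):
    """Euclidean distance between two points of the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def heron_on_line(p1, p2, lo=-100.0, hi=100.0, tol=1e-9):
    """Minimize d(x;{p1}) + d(x;{p2}) over the x-axis by ternary search;
    the objective is convex in the line parameter, so the search converges."""
    f = lambda t: d((t, 0.0), p1) + d((t, 0.0), p2)
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    t = (lo + hi) / 2
    return (t, 0.0), f(t)

# Reflecting (4, 1) across the x-axis and joining it to (0, 2) predicts
# the optimal point (8/3, 0) with minimal total distance 5.
point, value = heron_on_line((0.0, 2.0), (4.0, 1.0))
```

The reflection argument serves as an independent check on the numerical search.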

In fact, we are able to investigate a more general version of problem (1.1), where the distance function (1.2) is replaced by the so-called minimal time function

$$T^F_Q(x):=\inf\big\{t\ge 0\;\big|\;Q\cap(x+tF)\ne\emptyset\big\} \tag{1.3}$$

with the constant dynamics $F$ and the target set $Q$ in a Banach space $X$; see [12] and the references therein for more discussions and results on this class of functions important for various aspects of optimization theory and its numerous applications.
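When the dynamics $F$ is the ball of radius $r$ centered at the origin, the minimal time function (1.3) reduces to a rescaled distance, $T^F_Q(x)=d(x;Q)/r$. The following minimal sketch, under this ball assumption and with a finite planar target set (our own illustrative data, not from the paper), makes the reduction concrete:

```python
import math

def minimal_time_ball(x, targets, r):
    """Minimal time T_F^Q(x) for constant dynamics F = ball of radius r.
    The Minkowski gauge of F is rho_F(u) = ||u|| / r, hence
    T_F^Q(x) = inf_{w in Q} rho_F(w - x) = d(x; Q) / r."""
    dist = min(math.hypot(w[0] - x[0], w[1] - x[1]) for w in targets)
    return dist / r

# With r = 1 the minimal time function coincides with the distance (1.2);
# a faster dynamics (larger r) reaches the target proportionally sooner.
t1 = minimal_time_ball((0.0, 0.0), [(3.0, 4.0)], 1.0)
t2 = minimal_time_ball((0.0, 0.0), [(3.0, 4.0)], 2.0)
```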

The main problem under consideration in this paper, called below the generalized Heron problem, is formulated as follows:

$$\text{minimize } T(x):=\sum_{i=1}^{n}T^F_{\Omega_i}(x) \ \text{ subject to }\ x\in\Omega, \tag{1.4}$$

where $F$ is a closed, bounded, and convex set containing the origin as an interior point, and where $\Omega$ and $\Omega_i$ for $i=1,\ldots,n$ are nonempty closed subsets of a Banach space $X$; these are the standing assumptions of the paper.

When $F$ is the closed unit ball $I\!B$ in (1.4), this problem reduces to the one in (1.1). Note that involving the minimal time function (1.3) in (1.4) instead of the distance function in (1.1) allows us to cover some important location models that cannot be encompassed by formalism (1.1); cf. [15] for the case of convex unconstrained problems of type (1.4) and [13] for the generalized Fermat-Torricelli problem corresponding to (1.4) with $\Omega=X$.

A characteristic feature of the generalized Heron problem (1.4) and its distance function specification (1.1) is that they are intrinsically nonsmooth, since the functions (1.2) and (1.3) are nondifferentiable. These problems are generally nonconvex, while the convexity of both cost functions in (1.1) and (1.4) follows from the convexity of the sets $\Omega_i$. This makes it natural to apply advanced methods and tools of variational analysis and generalized differentiation to study these problems. To proceed in this direction, we largely employ the recent results from [12] on generalized differentiation of the minimal time function (1.3) in convex and nonconvex settings as well as comprehensive rules of generalized differential calculus. As can be seen from the solutions below, the constraint nature of the Heron problem and its extensions leads to new structural phenomena in comparison with the corresponding Fermat-Torricelli counterparts. Note that a number of the results obtained in this paper are new even for the unconstrained setting of the generalized Fermat-Torricelli problem.

The rest of the paper is organized as follows. In Section 2, we present some basic constructions and properties from variational analysis that are widely used in the sequel. Section 3 concerns deriving necessary optimality conditions for solutions to the generalized Heron problem in the case of arbitrary closed sets $\Omega$ and $\Omega_i$, $i=1,\ldots,n$, in (1.4) and its specification (1.1). The results obtained are expressed in terms of the limiting normal cone to closed sets in the sense of Mordukhovich [9]. We pay special attention to the Hilbert space setting, which allows us to establish necessary (in some cases necessary and sufficient) optimality conditions in the most efficient forms. Some examples are given to illustrate applications of general results in particular situations. In Section 4 we develop a numerical algorithm to solve some versions of the generalized Heron problem in finite dimensions, while the concluding Section 5 is devoted to the implementation of this algorithm and its specifications in various settings of their own interest.

Our notation is basically standard in the area of variational analysis and generalized differentiation; see [9, 16]. Particular notions are recalled in the places where they appear.

## 2 Tools of Generalized Differentiation

This section contains basic constructions and results of the generalized differentiation theory in variational analysis employed in what follows. The reader can find all the proofs, discussions, and additional material in the books [2, 9, 10, 16, 17] and the references therein.

Given an extended-real-valued function $\varphi\colon X\to\overline{\mathbb{R}}:=(-\infty,\infty]$ with $\bar x$ from the domain $\mathrm{dom}\,\varphi:=\{x\in X\mid\varphi(x)<\infty\}$ and given $\varepsilon\ge 0$, define first the $\varepsilon$-subdifferential of $\varphi$ at $\bar x$ by

$$\hat\partial_\varepsilon\varphi(\bar x):=\bigg\{x^*\in X^*\;\bigg|\;\liminf_{x\to\bar x}\frac{\varphi(x)-\varphi(\bar x)-\langle x^*,x-\bar x\rangle}{\|x-\bar x\|}\ge-\varepsilon\bigg\}. \tag{2.1}$$

For $\varepsilon=0$ the set $\hat\partial\varphi(\bar x):=\hat\partial_0\varphi(\bar x)$ is known as the Fréchet/regular subdifferential of $\varphi$ at $\bar x$. It follows from definition (2.1) that regular subgradients are described as follows: $x^*\in\hat\partial_\varepsilon\varphi(\bar x)$ if and only if for any $\eta>0$ there is $\gamma>0$ such that

$$\langle x^*,x-\bar x\rangle\le\varphi(x)-\varphi(\bar x)+(\varepsilon+\eta)\|x-\bar x\| \quad\text{whenever } x\in\bar x+\gamma I\!B$$

with $I\!B$ standing for the closed unit ball of the space in question. When $\varphi$ is Fréchet differentiable at $\bar x$, its regular subdifferential reduces to the classical gradient: $\hat\partial\varphi(\bar x)=\{\nabla\varphi(\bar x)\}$. Despite the simple definition (2.1), closely related to the classical derivative, the regular subdifferential and its $\varepsilon$-enlargements in general turn out to be inappropriate for applications to the generalized Heron problem under consideration due to the serious lack of calculus rules.

To get a better construction, we need to employ a certain robust limiting procedure, which lies at the heart of variational analysis. Recall that, given a set-valued mapping $G\colon X\rightrightarrows X^*$ between a Banach space $X$ and its topological dual $X^*$, the sequential Painlevé-Kuratowski outer limit of $G(x)$ as $x\to\bar x$ is defined by

$$\mathop{\rm Lim\,sup}_{x\to\bar x}G(x):=\Big\{x^*\in X^*\;\Big|\;\exists\ \text{sequences } x_k\to\bar x,\ x^*_k\xrightarrow{w^*}x^*\ \text{as } k\to\infty\ \text{such that } x^*_k\in G(x_k)\ \text{for all } k\in\mathbb{N}:=\{1,2,\ldots\}\Big\}, \tag{2.2}$$

where $w^*$ signifies the weak* topology of $X^*$. Applying the limiting operation (2.2) to the set-valued mapping $\hat\partial_\varepsilon\varphi(\cdot)$ in (2.1) and using the notation $x\xrightarrow{\varphi}\bar x$ standing for $x\to\bar x$ with $\varphi(x)\to\varphi(\bar x)$ give us the subgradient set

$$\partial\varphi(\bar x):=\mathop{\rm Lim\,sup}_{\substack{x\xrightarrow{\varphi}\bar x\\ \varepsilon\downarrow 0}}\hat\partial_\varepsilon\varphi(x) \tag{2.3}$$

known as the Mordukhovich/limiting subdifferential of $\varphi$ at $\bar x$. We can equivalently put $\varepsilon=0$ in (2.3) if $\varphi$ is lower semicontinuous around $\bar x$ and if $X$ is Asplund, i.e., each of its separable subspaces has a separable dual; the latter is automatic, e.g., when $X$ is reflexive. Recall that $\varphi$ is subdifferentially regular at $\bar x$ if $\partial\varphi(\bar x)=\hat\partial\varphi(\bar x)$.

Note that every convex function $\varphi$ is subdifferentially regular at any point $\bar x\in\mathrm{dom}\,\varphi$ with the classical subdifferential representation

$$\partial\varphi(\bar x)=\big\{x^*\in X^*\;\big|\;\langle x^*,x-\bar x\rangle\le\varphi(x)-\varphi(\bar x)\ \text{for all } x\in X\big\}. \tag{2.4}$$

However, the latter property often fails in nonconvex settings, where $\hat\partial\varphi(\bar x)$ may be empty (as for $\varphi(x)=-\|x\|$ at $\bar x=0$) with a poor calculus, while the limiting subdifferential (2.3) enjoys a full calculus (at least in Asplund spaces) due to variational/extremal principles of variational analysis. The following calculus results are the most useful in this paper.
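To make the contrast concrete, consider the standard one-dimensional illustration (added here; it is not taken from this paper):

```latex
% For \varphi(x) := -|x| on \mathbb{R}, definition (2.1) gives an empty
% regular subdifferential at the origin, while the limiting construction
% (2.3) picks up both one-sided slopes:
\hat\partial\varphi(0)=\emptyset, \qquad \partial\varphi(0)=\{-1,\,1\}.
% For the convex function \psi(x) := |x| both constructions agree by (2.4):
\hat\partial\psi(0)=\partial\psi(0)=[-1,1].
```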

###### Theorem 2.1

(subdifferential sum rules). Let $\varphi_i\colon X\to\overline{\mathbb{R}}$, $i=1,\ldots,n$, be lower semicontinuous functions on a Banach space $X$. Suppose that all but one of them are locally Lipschitzian around $\bar x$. Then:

(i) We have the inclusion

$$\partial\Big(\sum_{i=1}^{n}\varphi_i\Big)(\bar x)\subset\sum_{i=1}^{n}\partial\varphi_i(\bar x) \tag{2.5}$$

provided that $X$ is Asplund. Furthermore, inclusion (2.5) becomes an equality if all the functions $\varphi_i$ are subdifferentially regular at $\bar x$.

(ii) When all the functions $\varphi_i$ are convex, the equality

$$\partial\Big(\sum_{i=1}^{n}\varphi_i\Big)(\bar x)=\sum_{i=1}^{n}\partial\varphi_i(\bar x) \tag{2.6}$$

holds with no Asplund space requirement.

Note that assertion (ii) of Theorem 2.1, which is the classical Moreau-Rockafellar theorem, is a consequence of assertion (i) in the case of Asplund spaces; see [9, Theorem 3.36].

Finally in this section, recall that the corresponding normal cones to a set $\Omega\subset X$ at $\bar x\in\Omega$ can be defined via the subdifferentials (2.1) and (2.3) of the indicator function $\delta(\cdot;\Omega)$ by

$$\hat N(\bar x;\Omega):=\hat\partial\delta(\bar x;\Omega) \quad\text{and}\quad N(\bar x;\Omega):=\partial\delta(\bar x;\Omega), \tag{2.7}$$

where $\delta(x;\Omega):=0$ if $x\in\Omega$ and $\delta(x;\Omega):=\infty$ otherwise.

## 3 Optimality Conditions for the Generalized Heron Problem

The main results of this section give necessary optimality conditions for the generalized Heron problem under consideration, which turn out to be necessary and sufficient for optimality in the case of convex data. To begin with, we would like to make sure that problem (1.4) admits an optimal solution under natural assumptions.

###### Proposition 3.1

(existence of optimal solutions to the generalized Heron problem). The generalized Heron problem (1.4) admits an optimal solution in each of the following three cases:

(i) $X$ is a Banach space, and the constraint set $\Omega$ is compact.

(ii) $X$ is finite-dimensional, and one of the sets $\Omega$ and $\Omega_i$ as $i=1,\ldots,n$ is bounded.

(iii) $X$ is reflexive, the sets $\Omega$ and $\Omega_i$ as $i=1,\ldots,n$ are convex, and one of them is bounded.

Proof. It follows from [11, Proposition 2.2] that the minimal time function (1.3) and hence the cost function $T$ in (1.4) are Lipschitz continuous. Thus the conclusion in case (i) follows from the classical Weierstrass theorem.

Consider the infimum value

$$\gamma:=\inf_{x\in\Omega}T(x)<\infty$$

in problem (1.4) and take a minimizing sequence $\{x_k\}\subset\Omega$ with $T(x_k)\to\gamma$ as $k\to\infty$. Now assume that $X$ is finite-dimensional and that, say, the set $\Omega_1$ is bounded. When $k$ is sufficiently large, one has

$$T^F_{\Omega_1}(x_k)\le T(x_k)<\gamma+1.$$

Thus there exist $t_k\in[0,\gamma+1]$, $f_k\in F$, and $w_k\in\Omega_1$ such that

$$x_k+t_kf_k=w_k.$$

Since both $F$ and $\Omega_1$ are bounded, $\{x_k\}$ is a bounded sequence, and hence it has a subsequence that converges to some $\bar x\in\Omega$. Then $\bar x$ is an optimal solution of the problem in case (ii). The proof in case (iii) is similar to that given in [14, Proposition 4.1].

To proceed with deriving optimality conditions for the generalized Heron problem (1.4) and its specification (1.1), we need more notation. Define the support level set

$$C^*:=\big\{x^*\in X^*\;\big|\;\sigma_F(-x^*)\le 1\big\}$$

via the support function $\sigma_F$ of the constant dynamics $F$:

$$\sigma_F(x^*):=\sup_{x\in F}\langle x^*,x\rangle,\quad x^*\in X^*.$$

The generalized projection to the target set $Q$ via the minimal time function (1.3) is the set-valued mapping $\Pi^F_Q\colon X\rightrightarrows X$ defined by

$$\Pi^F_Q(x):=Q\cap\big(x+T^F_Q(x)F\big),\quad x\in X. \tag{3.1}$$

Considering further the Minkowski gauge

$$\rho_F(x):=\inf\big\{t\ge 0\;\big|\;x\in tF\big\},\quad x\in X, \tag{3.2}$$

and involving the limiting normal cone from (2.7), we define the sets

$$A_i(x):=\begin{cases}\displaystyle\bigcup_{\omega\in\Pi^F_{\Omega_i}(x)}\big[-\partial\rho_F(\omega-x)\cap N(\omega;\Omega_i)\big] & \text{for } x\notin\Omega_i,\ \Pi^F_{\Omega_i}(x)\ne\emptyset,\\[2ex] N(x;\Omega_i)\cap C^* & \text{for } x\in\Omega_i\end{cases}\qquad\text{as } i=1,\ldots,n. \tag{3.3}$$

We say that the minimal time function $T^F_{\Omega_i}$ is well posed at $\bar x$ if for every sequence $\{x_k\}$ converging to $\bar x$ there is a sequence $\{\omega_k\}$ such that $\omega_k\in\Pi^F_{\Omega_i}(x_k)$ for all $k$ and $\{\omega_k\}$ contains a convergent subsequence. The reader is referred to [12, Proposition 6.2] for a number of verifiable conditions ensuring such well-posedness of the minimal time function.

Our first theorem establishes necessary as well as necessary and sufficient conditions for optimality in (1.4) via the sets $A_i$ from (3.3) in general infinite-dimensional settings.

###### Theorem 3.2

(necessary and sufficient optimality conditions for the generalized Heron problem in Banach and Asplund spaces). Given $\bar x\in\Omega$, suppose in the setting of (1.4) that the minimal time function $T^F_{\Omega_i}$ is well posed at $\bar x$ for each $i\in\{1,\ldots,n\}$ such that $\bar x\notin\Omega_i$. The following assertions hold:

(i) Let $\bar x$ be a local optimal solution to (1.4), and let $X$ be Asplund. Then we have

$$0\in\sum_{i=1}^{n}A_i(\bar x)+N(\bar x;\Omega), \tag{3.4}$$

where the sets $A_i(\bar x)$ are defined in (3.3).

(ii) Let $X$ be a general Banach space, and let all the sets $\Omega$ and $\Omega_i$ as $i=1,\ldots,n$ be convex. Given $\bar x\in\Omega$, assume that $\Pi^F_{\Omega_i}(\bar x)\ne\emptyset$ for each $i$ with $\bar x\notin\Omega_i$, select any $\bar\omega\in\Pi^F_{\Omega_i}(\bar x)$, and construct $A_i(\bar x)$ by

$$A_i(\bar x):=N(\bar\omega;\Omega_i)\cap\big[-\partial\rho_F(\bar\omega-\bar x)\big] \quad\text{for } \bar x\notin\Omega_i \tag{3.5}$$

and by the second formula in (3.3) otherwise. Then $\bar x$ is an optimal solution to (1.4) if and only if inclusion (3.4) is satisfied.

Proof. Observe first that problem (1.4) can be equivalently written in the form

$$\text{minimize } T(x)+\delta(x;\Omega). \tag{3.6}$$

It easily follows from definitions (2.1) and (2.3) of regular and limiting subgradients and their description (2.4) for convex functions that the generalized Fermat rule

$$0\in\hat\partial f(\bar x)\subset\partial f(\bar x) \tag{3.7}$$

is a necessary condition for a local minimizer $\bar x$ of any function $f$, being also sufficient for this if $f$ is convex. To justify now assertion (i), we apply (3.7) via the limiting subdifferential to the cost function in (3.6) and then use the subdifferential sum rule for limiting subgradients from Theorem 2.1(i) in Asplund spaces by taking into account that the functions $T^F_{\Omega_i}$ are Lipschitz continuous. It follows in this way that

$$0\in\partial\big(T+\delta(\cdot;\Omega)\big)(\bar x)\subset\partial T(\bar x)+N(\bar x;\Omega)\subset\sum_{i=1}^{n}\partial T^F_{\Omega_i}(\bar x)+N(\bar x;\Omega). \tag{3.8}$$

Employing further the subdifferential formulas for the minimal time function from [13, Theorem 3.1 and Theorem 3.2] gives us

$$\partial T^F_{\Omega_i}(\bar x)\subset A_i(\bar x),\quad i=1,\ldots,n. \tag{3.9}$$

Substituting the latter into (3.8) justifies inclusion (3.4) in assertion (i) of the theorem.

To justify assertion (ii), we apply Theorem 2.1(ii) for convex functions on Banach spaces and conclude in this way that both inclusions “$\subset$” in (3.8) hold as equalities and provide necessary and sufficient conditions for optimality of $\bar x$ in (1.4). Employing finally [12, Theorems 7.1 and 7.3] gives us the equalities in (3.9), where the sets $A_i(\bar x)$ are calculated by (3.5) when $\bar x\notin\Omega_i$. This completes the proof of the theorem.

It is not hard to check under our standing assumptions that the projection nonemptiness requirement $\Pi^F_{\Omega_i}(\bar x)\ne\emptyset$ in Theorem 3.2(ii) is automatically satisfied when the space $X$ is reflexive.

The next theorem allows us to significantly simplify the calculation of the sets $A_i(\bar x)$ in Theorem 3.2 in the case of Hilbert spaces and thus to ease the implementation of the optimality conditions obtained therein. Besides this, it leads us to an improvement of the optimality conditions under some additional assumptions. Namely, we can replace the limiting normal cone in (3.4) by the smaller regular one for an arbitrary closed constraint set $\Omega$. Define the index sets

$$I(x):=\big\{i\in\{1,\ldots,n\}\;\big|\;x\in\Omega_i\big\} \quad\text{and}\quad J(x):=\big\{i\in\{1,\ldots,n\}\;\big|\;x\notin\Omega_i\big\},\quad x\in X. \tag{3.10}$$

We obviously have $I(x)\cup J(x)=\{1,\ldots,n\}$ and $I(x)\cap J(x)=\emptyset$ for all $x\in X$.

###### Theorem 3.3

(improved optimality conditions in Hilbert spaces). Consider version (1.1) of the generalized Heron problem with a Hilbert space $X$ in the assumptions of Theorem 3.2. The following assertions hold:

(i) Let $\bar x$ be a local optimal solution to (1.1), and let $\Pi(\bar x;\Omega_i)\ne\emptyset$ whenever $i\in J(\bar x)$. Then for any $a_i(\bar x)\in A_i(\bar x)$ as $i\in J(\bar x)$ we have

$$-\sum_{i\in J(\bar x)}a_i(\bar x)\in\sum_{i\in I(\bar x)}A_i(\bar x)+N(\bar x;\Omega), \tag{3.11}$$

where each set $A_i(\bar x)$ is computed by

$$A_i(\bar x)=\begin{cases}\dfrac{\bar x-\Pi(\bar x;\Omega_i)}{d(\bar x;\Omega_i)} & \text{for } \bar x\notin\Omega_i,\\[2ex] N(\bar x;\Omega_i)\cap I\!B & \text{for } \bar x\in\Omega_i\end{cases} \tag{3.12}$$

whenever $i=1,\ldots,n$. If in addition $I(\bar x)=\emptyset$, then

$$-\sum_{i=1}^{n}a_i(\bar x)\in\hat N(\bar x;\Omega). \tag{3.13}$$

(ii) If all the sets $\Omega$ and $\Omega_i$ as $i=1,\ldots,n$ are convex, then each set $A_i(\bar x)$ as $i\in J(\bar x)$ in (3.12) is a singleton and condition (3.11) is necessary and sufficient for the global optimality of $\bar x$ in problem (1.1).

Proof. To justify assertion (i), pick $\bar\omega_i\in\Pi(\bar x;\Omega_i)$ for all $i\in J(\bar x)$ such that $a_i(\bar x)=(\bar x-\bar\omega_i)/d(\bar x;\Omega_i)$ and get the relationships

$$\sum_{i\in J(\bar x)}\|\bar x-\bar\omega_i\|+\sum_{i\in I(\bar x)}d(\bar x;\Omega_i)=\sum_{i=1}^{n}d(\bar x;\Omega_i)\le\sum_{i=1}^{n}d(x;\Omega_i)\le\sum_{i\in J(\bar x)}\|x-\bar\omega_i\|+\sum_{i\in I(\bar x)}d(x;\Omega_i)$$

for all $x\in\Omega$ around $\bar x$. This shows that $\bar x$ is a local optimal solution to the problem

$$\text{minimize } p(x):=\sum_{i\in J(\bar x)}\|x-\bar\omega_i\|+\sum_{i\in I(\bar x)}d(x;\Omega_i) \ \text{ subject to }\ x\in\Omega. \tag{3.14}$$

Since the norm function on a Hilbert space is Fréchet differentiable at any nonzero point, we conclude that each function $p_i(x):=\|x-\bar\omega_i\|$ as $i\in J(\bar x)$ is Fréchet differentiable at $\bar x$ with

$$\nabla p_i(\bar x)=\frac{\bar x-\bar\omega_i}{\|\bar x-\bar\omega_i\|}=\frac{\bar x-\bar\omega_i}{d(\bar x;\Omega_i)}=a_i(\bar x).$$

Applying to (3.14) the first inclusion in the generalized Fermat rule (3.7) and then using the subdifferential sum rules from [9, Proposition 1.107(i)] for regular subgradients and from Theorem 2.1(i) for limiting ones, we get

$$\begin{aligned}0\in\hat\partial\big[p+\delta(\cdot;\Omega)\big](\bar x)&=\sum_{i\in J(\bar x)}\nabla p_i(\bar x)+\hat\partial\Big[\sum_{i\in I(\bar x)}d(\cdot;\Omega_i)+\delta(\cdot;\Omega)\Big](\bar x)\\&\subset\sum_{i\in J(\bar x)}a_i(\bar x)+\partial\Big[\sum_{i\in I(\bar x)}d(\cdot;\Omega_i)+\delta(\cdot;\Omega)\Big](\bar x)\\&\subset\sum_{i\in J(\bar x)}a_i(\bar x)+\sum_{i\in I(\bar x)}\partial d(\bar x;\Omega_i)+N(\bar x;\Omega)\\&\subset\sum_{i\in J(\bar x)}a_i(\bar x)+\sum_{i\in I(\bar x)}\big[N(\bar x;\Omega_i)\cap I\!B\big]+N(\bar x;\Omega)\\&=\sum_{i\in J(\bar x)}a_i(\bar x)+\sum_{i\in I(\bar x)}A_i(\bar x)+N(\bar x;\Omega),\end{aligned}$$

where the last three relationships hold since $\partial d(\bar x;\Omega_i)\subset N(\bar x;\Omega_i)\cap I\!B$ for each $i\in I(\bar x)$. This justifies inclusion (3.11). In the case of $I(\bar x)=\emptyset$, we arrive at inclusion (3.13) by the first row of the above relationships and the normal cone definition (2.7).

Assertion (ii) is justified similarly to the proof of Theorem 3.2(ii) by using the results of assertion (i) and the well-known fact that the projection operator for a closed and convex set in a Hilbert space is single-valued.

Observe that in Theorem 3.3, in contrast to Theorem 3.2, we do not impose the well-posedness requirement. In fact, under the assumptions of Theorem 3.3(ii) it holds automatically; see [9, Corollary 1.106]. Note also that in finite-dimensional spaces we always have the Fréchet differentiability of the distance function at out-of-set points with unique projections (see, e.g., [16, Exercise 8.53]), and so we can deal in the proof of Theorem 3.3(i) directly with the cost function in the generalized Heron problem (1.1), without considering the auxiliary problem (3.14). However, in Hilbert spaces this approach requires additional assumptions on the projection continuity; see [5, Corollary 3.5]. In finite dimensions the projection continuity and Fréchet differentiability of the distance function actually follow from the projection uniqueness, while this is not the case in Hilbert spaces, as shown in [5, Example 5.2]. Observe to this end that neither uniqueness nor continuity of projections is required in Theorem 3.3.
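For convex targets with unique projections, the first branch of (3.12) produces a single unit vector that can be computed directly from the projection. The following sketch is an illustration added here (the closed-ball target and its data are our own assumptions, not an example from the paper):

```python
import math

def a_i(xbar, center, radius):
    """The singleton A_i(xbar) from (3.12) when Omega_i is the closed ball
    B(center, radius) in the plane and xbar lies outside it. The unique
    projection is center + radius * (xbar - center)/||xbar - center||, and
    a_i = (xbar - projection) / d(xbar; Omega_i) is a unit vector."""
    gap = (xbar[0] - center[0], xbar[1] - center[1])
    n = math.hypot(*gap)            # distance from xbar to the center
    proj = (center[0] + radius * gap[0] / n,
            center[1] + radius * gap[1] / n)
    dist = n - radius               # d(xbar; Omega_i) for an outside point
    return ((xbar[0] - proj[0]) / dist, (xbar[1] - proj[1]) / dist)

# For the unit ball centered at the origin and xbar = (4, 0), the
# projection is (1, 0), the distance is 3, and a_i = (1, 0).
v = a_i((4.0, 0.0), (0.0, 0.0), 1.0)
```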

On the other hand, the next result shows that for the unconstrained version of (1.1), i.e., for the generalized Fermat-Torricelli problem [13] with disjoint sets $\Omega_i$, the projection nonemptiness at a local optimal solution automatically implies the projection uniqueness in arbitrary Hilbert spaces.

###### Proposition 3.4

(projection uniqueness at optimal solutions). Let $\bar x$ be a local optimal solution to problem (1.4) in a Hilbert space $X$ with $F=I\!B$ and $\Omega=X$. Assume that $\bar x\notin\Omega_i$ as $i=1,\ldots,n$. Then the fulfillment of the condition $\Pi(\bar x;\Omega_i)\ne\emptyset$ for all $i=1,\ldots,n$ implies that the projection set $\Pi(\bar x;\Omega_i)$ is a singleton whenever $i=1,\ldots,n$.

Proof. Since $I(\bar x)=\emptyset$ for the first index set in (3.10), it follows from the proof of Theorem 3.3(i) with $\Omega=X$, and hence $N(\bar x;\Omega)=\{0\}$, that for every choice of projections $\omega_i\in\Pi(\bar x;\Omega_i)$ as $i=1,\ldots,n$ we have the equality

$$0=\sum_{i=1}^{n}\frac{\bar x-\omega_i}{d(\bar x;\Omega_i)}. \tag{3.15}$$

Picking any $i\in\{1,\ldots,n\}$, say $i=1$, let us check that the set $\Pi(\bar x;\Omega_1)$ is a singleton. Indeed, take two projections $\bar\omega_{1,1},\bar\omega_{1,2}\in\Pi(\bar x;\Omega_1)$ and fix arbitrary projections $\bar\omega_i\in\Pi(\bar x;\Omega_i)$ for $i=2,\ldots,n$. Then from (3.15) we get the relationships

$$0=\frac{\bar x-\bar\omega_{1,1}}{d(\bar x;\Omega_1)}+\sum_{i=2}^{n}\frac{\bar x-\bar\omega_i}{d(\bar x;\Omega_i)}=\frac{\bar x-\bar\omega_{1,2}}{d(\bar x;\Omega_1)}+\sum_{i=2}^{n}\frac{\bar x-\bar\omega_i}{d(\bar x;\Omega_i)},$$

which imply that $\bar\omega_{1,1}=\bar\omega_{1,2}$ and thus complete the proof of the proposition.

Observe that if $\bar x$ belongs to one of the sets $\Omega_i$ as $i=1,\ldots,n$, the conclusion of Proposition 3.4 does not generally hold even in finite dimensions, as demonstrated by the following example.

###### Example 3.5

(nonuniqueness of projections at solution points). Let $n=2$ in the setting of Proposition 3.4, let $\Omega_1$ be the unit circle of $\mathbb{R}^2$, and let $\Omega_2=\{(0,0)\}$. Then $\bar x=(0,0)$ is a solution of the Fermat-Torricelli problem generated by $\Omega_1$ and $\Omega_2$, but the projection $\Pi(\bar x;\Omega_1)$ is the whole unit circle. It is also clear that any point inside of the unit circle other than the center is also a solution to this problem, and $\Pi(\bar x;\Omega_i)$ is a singleton for both $i=1,2$, which is consistent with the result of Proposition 3.4.
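The reason every point of the closed unit disk is a solution here is that the cost is constant on the disk: $d(x;\Omega_1)+d(x;\Omega_2)=(1-\|x\|)+\|x\|=1$. A quick numerical confirmation (added for illustration; the random sampling is our own device):

```python
import math, random

def D(x):
    """Sum of distances from x to the unit circle and to the origin in R^2:
    d(x; circle) = |1 - ||x|||  and  d(x; {0}) = ||x||."""
    r = math.hypot(*x)
    return abs(1.0 - r) + r

random.seed(0)
samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100)]
inside = [p for p in samples if math.hypot(*p) <= 1.0]
# On the closed unit disk the sum is identically 1 (every point is optimal);
# outside the disk it is strictly larger, e.g. D((2, 0)) = 1 + 2 = 3.
```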

The observation made in Proposition 3.4 allows us to improve the optimality conditions obtained in [13, Corollary 4.1] for the generalized Fermat-Torricelli problem.

###### Corollary 3.6

(improved optimality conditions for the generalized Fermat-Torricelli problem with three nonconvex sets in Hilbert spaces). Let $n=3$ in the framework of Theorem 3.3, where $\Omega_1$, $\Omega_2$, and $\Omega_3$ are pairwise disjoint subsets of $X$ and $\Omega=X$. The following alternative holds for a local optimal solution $\bar x$ with the sets $A_i(\bar x)$ defined by (3.12):

(i) The point $\bar x$ belongs to one of the sets, say $\bar x\in\Omega_1$. Then for any $a_i\in A_i(\bar x)$ as $i=2,3$ we have the relationships

$$\langle a_2,a_3\rangle\le-1/2 \quad\text{and}\quad -a_2-a_3\in\hat N(\bar x;\Omega_1).$$

(ii) The point $\bar x$ does not belong to any of the three sets $\Omega_1$, $\Omega_2$, and $\Omega_3$. Then for all $a_i\in A_i(\bar x)$ and $a_j\in A_j(\bar x)$ we have

$$\langle a_i,a_j\rangle=-1/2 \quad\text{for } i\ne j \ \text{as } i,j\in\{1,2,3\}.$$

Conversely, suppose that the sets $\Omega_i$, $i=1,2,3$, are convex and that $\bar x$ satisfies either (i) or (ii). Then it is a global optimal solution to the problem under consideration.

Proof. In case (i), for any $a_i\in A_i(\bar x)$ as $i=2,3$ take $\bar\omega_i\in\Pi(\bar x;\Omega_i)$ such that

$$a_i=\frac{\bar x-\bar\omega_i}{d(\bar x;\Omega_i)},\quad i=2,3.$$

Since $\bar x\in\Omega_1$, we have the relationships

$$\|\bar x-\bar\omega_2\|+\|\bar x-\bar\omega_3\|=\sum_{i=1}^{3}d(\bar x;\Omega_i)\le\sum_{i=1}^{3}d(x;\Omega_i)\le d(x;\Omega_1)+\|x-\bar\omega_2\|+\|x-\bar\omega_3\|$$

whenever $x$ is near $\bar x$. Thus $\bar x$ is a local optimal solution to the problem

$$\text{minimize } q(x):=d(x;\Omega_1)+\|x-\bar\omega_2\|+\|x-\bar\omega_3\|. \tag{3.16}$$

Employing the generalized Fermat rule in (3.16) and then the aforementioned sum rule for regular subgradients, by using the well-known formula for the regular subdifferential of the distance function (see, e.g., [9, Corollary 1.96]), gives us

$$0\in\hat\partial q(\bar x)=\hat\partial d(\bar x;\Omega_1)+a_2+a_3=\hat N(\bar x;\Omega_1)\cap I\!B+a_2+a_3.$$

The latter implies therefore that

$$-a_2-a_3\in\hat N(\bar x;\Omega_1) \quad\text{with}\quad \|a_2+a_3\|\le 1.$$

The rest of the proof follows the lines of that in [13, Corollary 4.1]. Assertion (ii) and the converse statement are derived similarly from Proposition 3.4 and the proof of [13, Corollary 4.1] by the same procedure, which thus allows us to fully justify the corollary.

From now on in this section we concentrate on the distance function version (1.1) of the generalized Heron problem, paying particular attention to deriving efficient forms of optimality conditions for (1.1) under additional structural assumptions on the constraint set $\Omega$. To this end we impose the nonintersection condition

$$\Omega\cap\Omega_i=\emptyset \quad\text{for all } i=1,\ldots,n \tag{3.17}$$

on the sets $\Omega$ and $\Omega_i$ in (1.1), which is specific for the (constrained) generalized Heron problem. In this case we obviously have $I(\bar x)=\emptyset$ for the first index set in (3.10) whenever $\bar x\in\Omega$, and so the sets $A_i(\bar x)$ are calculated by

$$A_i(\bar x)=\frac{\bar x-\Pi(\bar x;\Omega_i)}{d(\bar x;\Omega_i)},\quad i=1,\ldots,n, \tag{3.18}$$

in the Hilbert space setting under consideration.

To proceed, for any nonzero vectors $u,v\in X$ define the quantity

$$\cos(u,v):=\frac{\langle u,v\rangle}{\|u\|\cdot\|v\|}$$

and, given a linear subspace $L$ of $X$, recall that

$$L^{\perp}:=\big\{x^*\in X\;\big|\;\langle x^*,v\rangle=0\ \text{for all } v\in L\big\}.$$

We say that $\Omega$ has a tangent space $L(\bar x)$ at $\bar x\in\Omega$ if $\hat N(\bar x;\Omega)=L(\bar x)^{\perp}$ for some linear subspace $L(\bar x)$ of $X$. Note that for any affine subspace $\Omega$ parallel to a linear subspace $L$ the tangent space to $\Omega$ at every $\bar x\in\Omega$ is $L(\bar x)=L$.

Next we derive verifiable necessary and sufficient conditions for optimal solutions to (1.1) in Hilbert spaces provided that the constraint set $\Omega$ admits a tangent space at the reference point.

###### Proposition 3.7

(optimality conditions for the case of constraint sets with tangent spaces). Consider the generalized Heron problem (1.1) under condition (3.17) in Hilbert spaces. The following assertions hold:

(i) Let $\bar x$ be a local optimal solution to (1.1), let $A_i(\bar x)$ be computed in (3.18), where $\Pi(\bar x;\Omega_i)\ne\emptyset$ for $i=1,\ldots,n$, and let $\Omega$ admit a tangent space $L(\bar x)$ at $\bar x$. Then for any $a_i(\bar x)\in A_i(\bar x)$, one has

$$\sum_{i=1}^{n}\cos\big(a_i(\bar x),v\big)=0 \quad\text{for every } v\in L(\bar x)\setminus\{0\}. \tag{3.19}$$

(ii) Let all the sets $\Omega_i$, $i=1,\ldots,n$, be convex. Then the sets $A_i(\bar x)$ in (3.18) are singletons, and condition (3.19) with the tangent space $L(\bar x)$ for $\Omega$ is necessary and sufficient for the global optimality of $\bar x$ in (1.1).

Proof. To justify (i), observe by the assumptions made and the definition of the tangent space to $\Omega$ at $\bar x$ that

$$\hat N(\bar x;\Omega)=L(\bar x)^{\perp}=\big\{v\in X\;\big|\;\langle v,x\rangle=0\ \text{for all } x\in L(\bar x)\big\}.$$

By Theorem 3.3, for any $a_i(\bar x)\in A_i(\bar x)$ one has

$$0\in\sum_{i=1}^{n}a_i(\bar x)+L(\bar x)^{\perp},$$

which implies in turn that

$$\Big\langle\sum_{i=1}^{n}a_i(\bar x),v\Big\rangle=0 \quad\text{for all } v\in L(\bar x).$$

Since $\bar x\notin\Omega_i$ by (3.17), we have due to (3.18) that $\|a_i(\bar x)\|=1$ for $i=1,\ldots,n$, and hence

$$\sum_{i=1}^{n}\frac{\langle a_i(\bar x),v\rangle}{\|a_i(\bar x)\|\cdot\|v\|}=0 \quad\text{whenever } v\in L(\bar x)\setminus\{0\}.$$

Thus we arrive at the necessary optimality condition (3.19).

To justify (ii), observe that the necessity of (3.19) follows directly from assertion (i), since the sets $A_i(\bar x)$ are singletons for $i=1,\ldots,n$ in this case. The sufficiency follows from Theorem 3.3(ii) by taking into account the special structure of the normal cone $\hat N(\bar x;\Omega)=L(\bar x)^{\perp}$. This completes the proof of the proposition.

We have the following specification of optimality conditions in Proposition 3.7 when the tangent space therein is finitely generated.

###### Corollary 3.8

(optimality conditions for the case of finitely generated tangent spaces). Let $L(\bar x)=\mathrm{span}\{v_1,\ldots,v_s\}$ with $v_j\ne 0$ as $j=1,\ldots,s$ in the setting of Proposition 3.7. Then condition (3.19) in all of its conclusions is equivalent to

$$\sum_{i=1}^{n}\cos(a_i,v_j)=0 \quad\text{for all } j=1,\ldots,s. \tag{3.20}$$

Proof. We obviously have that (3.19)$\Rightarrow$(3.20). To justify the converse implication, set $a:=\sum_{i=1}^{n}a_i$ and observe by $\|a_i\|=1$ as $i=1,\ldots,n$ and $v_j\ne 0$ as $j=1,\ldots,s$ that (3.20) yields $\langle a,v_j\rangle=0$ for all $j=1,\ldots,s$. Picking further an arbitrary vector $v\in L(\bar x)\setminus\{0\}$, we arrive at the representation

$$v=\sum_{j=1}^{s}\lambda_jv_j$$

with some $\lambda_j\in\mathbb{R}$, $j=1,\ldots,s$. It gives by linearity that $\langle a,v\rangle=0$, which yields (3.19) and completes the proof of the corollary.
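Condition (3.20) can be checked numerically in the setting of the classical Heron problem, where the constraint is the $x$-axis with tangent space spanned by $v_1=(1,0)$. The data below are our own illustrative assumptions: target points $(0,2)$ and $(4,1)$, whose optimal point $\bar x=(8/3,0)$ comes from the classical reflection argument. The equal-angle property of the reflection solution is exactly the vanishing of the cosine sum.

```python
import math

def cosine(u, v):
    """cos(u, v) = <u, v> / (||u|| ||v||); scale-invariant, so the
    vectors xbar - omega_i need not be normalized beforehand."""
    dot = u[0] * v[0] + u[1] * v[1]
    return dot / (math.hypot(*u) * math.hypot(*v))

# Heron data: Omega = x-axis (tangent direction v1 = (1, 0)),
# Omega_1 = {(0, 2)}, Omega_2 = {(4, 1)}, optimal xbar = (8/3, 0).
xbar = (8 / 3, 0.0)
omegas = [(0.0, 2.0), (4.0, 1.0)]
a = [(xbar[0] - w[0], xbar[1] - w[1]) for w in omegas]
total = sum(cosine(ai, (1.0, 0.0)) for ai in a)
# The two cosines are +0.8 and -0.8, so the sum in (3.20) vanishes.
```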

The next result concerns the generalized Heron problem for two nonconvex sets in Hilbert spaces with a one-dimensional structure of the regular normal cone to the constraint.

###### Proposition 3.9

(necessary conditions for the generalized Heron problem with two nonconvex sets in Hilbert spaces). Consider problem (1.1) for two sets $\Omega_1,\Omega_2$ in Hilbert spaces under the nonintersection condition (3.17). Let $\bar x$ be a local optimal solution to (1.1) such that $\hat N(\bar x;\Omega)=\{\lambda v\mid\lambda\in\mathbb{R}\}$ with some $v\ne 0$ and that $\Pi(\bar x;\Omega_i)\ne\emptyset$ for $i=1,2$. Then for any $a_i(\bar x)\in A_i(\bar x)$ as $i=1,2$ we have the conditions:

$$\text{either } a_1(\bar x)+a_2(\bar x)=0 \quad\text{or}\quad \cos\big(a_1(\bar x),v\big)=\cos\big(a_2(\bar x),v\big). \tag{3.21}$$

Proof. It follows from Theorem 3.3(i) in this setting that

$$-a_1(\bar x)-a_2(\bar x)\in\hat N(\bar x;\Omega) \quad\text{for any } a_i(\bar x)\in A_i(\bar x),\ i=1,2. \tag{3.22}$$

Denoting for simplicity $a_i:=a_i(\bar x)$ as $i=1,2$ and taking into account the assumed structure of the regular normal cone to $\Omega$, we get that (3.22) is equivalent to the following:

$$\text{either } a_1+a_2=0 \quad\text{or}\quad a_1+a_2=\lambda v \ \text{ with some } \lambda\ne 0.$$

Let us show that the latter condition implies that $\cos(a_1,v)=\cos(a_2,v)$. Indeed, in this case we have $\|a_1\|=\|a_2\|=1$, which gives by the Euclidean norm on $X$ that

$$\lambda^2\|v\|^2=\|a_1+a_2\|^2=\|a_1\|^2+\|a_2\|^2+2\langle a_1,a_2\rangle=2+2\langle a_1,a_2\rangle.$$

This implies in turn the relationships

$$\begin{aligned}\langle a_1,\lambda v\rangle&=\langle\lambda v-a_2,\lambda v\rangle=\lambda^2\|v\|^2-\lambda\langle a_2,v\rangle\\&=2+2\langle a_1,a_2\rangle-\lambda\langle a_2,v\rangle=2\langle a_2,a_2\rangle+2\langle a_1,a_2\rangle-\lambda\langle a_2,v\rangle\\&=2\langle a_2+a_1,a_2\rangle-\lambda\langle a_2,v\rangle=2\langle\lambda v,a_2\rangle-\lambda\langle a_2,v\rangle=\langle a_2,\lambda v\rangle,\end{aligned}$$

which yields that $\langle a_1,v\rangle=\langle a_2,v\rangle$ since $\lambda\ne 0$. By taking into account that $\|a_1\|=\|a_2\|=1$ and $v\ne 0$, we conclude that $\cos(a_1,v)=\cos(a_2,v)$ and thus complete the proof.

Observe that sufficient optimality conditions in the form of Proposition 3.9 do not hold even in convex settings. The next result provides slightly modified conditions, which are sufficient for optimality in the case of the convex generalized Heron problem on the plane.

###### Proposition 3.10

(characterizing optimal solutions for the generalized Heron problem with two convex sets). Let the sets $\Omega_1$ and $\Omega_2$ be convex in the setting of Proposition 3.9, and let $a_i=a_i(\bar x)$ as $i=1,2$. Then the modification