Solving Coupled Composite Monotone Inclusions by Successive Fejér Approximations of Their Kuhn–Tucker Set^{†}^{†}thanks: Received by the editors XXX XX, 2013; accepted for publication XXX XX, 2014; published electronically DATE.
Abstract
We propose a new class of primal–dual Fejér monotone algorithms for solving systems of composite monotone inclusions. Our construction is inspired by a framework used by Eckstein and Svaiter for the basic problem of finding a zero of the sum of two monotone operators. At each iteration, points in the graphs of the monotone operators present in the model are used to construct a halfspace containing the Kuhn–Tucker set associated with the system. The primal–dual update is then obtained via a relaxed projection of the current iterate onto this halfspace. An important feature that distinguishes the resulting splitting algorithms from existing ones is that they require neither prior knowledge of bounds on the norms of the linear operators involved nor the inversion of linear operators.
Duality, Fejér monotonicity, monotone inclusion, monotone operator, primal–dual algorithm, splitting algorithm
Primary 47H05; Secondary 65K05, 90C25, 94A08
1 Introduction
The first monotone operator splitting methods arose in the late 1970s and were motivated by applications in mechanics and partial differential equations [32, 35, 39]. In recent years, the field of monotone operator splitting algorithms has benefited from a new impetus, fueled by emerging application areas such as signal and image processing, statistics, optimal transport, machine learning, and domain decomposition methods [3, 5, 24, 36, 41, 43, 46]. Three main algorithms dominate the field explicitly or implicitly: the forward–backward method [38], the Douglas–Rachford method [37], and the forward–backward–forward method [47]. These methods were originally designed to solve inclusions of the type $0\in Ax+Bx$, where $A$ and $B$ are maximally monotone operators acting on a Hilbert space (via product space reformulations, they can also be extended to problems involving sums of more than two operators [9, 45]). Until recently, a significant challenge in the field was to design splitting techniques for inclusions involving linearly composed operators, say
(1) find $x\in\mathcal{H}$ such that $0\in Ax+L^*\big(B(Lx)\big)$,
where $A\colon\mathcal{H}\to 2^{\mathcal{H}}$ and $B\colon\mathcal{G}\to 2^{\mathcal{G}}$ are maximally monotone operators acting on Hilbert spaces $\mathcal{H}$ and $\mathcal{G}$, respectively, and $L\colon\mathcal{H}\to\mathcal{G}$ is a bounded linear operator. In the case when $A$ and $B$ are subdifferentials, say $A=\partial f$ and $B=\partial g$, where $f$ and $g$ are lower semicontinuous convex functions satisfying a suitable constraint qualification, (1) corresponds to the minimization problem
(2) $\underset{x\in\mathcal{H}}{\text{minimize}}\;\; f(x)+g(Lx).$
The Fenchel–Rockafellar dual of this problem is
(3) $\underset{v\in\mathcal{G}}{\text{minimize}}\;\; f^*(-L^*v)+g^*(v),$
and the associated Kuhn–Tucker set is
(4) $\big\{(x,v)\in\mathcal{H}\oplus\mathcal{G} : -L^*v\in\partial f(x)\;\text{and}\;Lx\in\partial g^*(v)\big\}.$
The importance of this set is discussed extensively in [44], notably in connection with the fact that Kuhn–Tucker points provide solutions to (2) and (3). To the best of our knowledge, the first splitting method for composite problems of the form (1) is that proposed in [16], which was developed around the following formulation.
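The fact that a Kuhn–Tucker point yields both a primal and a dual solution can be checked numerically on a toy one-dimensional instance of (2)–(4). All choices below ($f(x)=x^2/2$, $g(y)=(y-1)^2/2$, $L=2$) are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

# One-dimensional illustration (all choices hypothetical): f(x) = x^2/2,
# g(y) = (y - 1)^2/2, L = 2, so that f*(s) = s^2/2 and g*(v) = v^2/2 + v.
L = 2.0
f_grad = lambda x: x                 # ∂f(x) = {x}
g_grad = lambda y: y - 1.0           # ∂g(y) = {y - 1}
primal = lambda x: 0.5 * x**2 + 0.5 * (L * x - 1.0)**2   # objective of (2)
dual = lambda v: 0.5 * (-L * v)**2 + 0.5 * v**2 + v      # objective of (3)

# Solve the primal first-order condition x + L(Lx - 1) = 0 and read off the
# dual variable from the Kuhn-Tucker conditions (4).
x_star = L / (1.0 + L**2)
v_star = g_grad(L * x_star)

# (4): -L v* lies in ∂f(x*) and L x* lies in ∂g*(v*) = (∂g)^{-1}(v*).
assert abs(-L * v_star - f_grad(x_star)) < 1e-12
assert abs(g_grad(L * x_star) - v_star) < 1e-12

# The Kuhn-Tucker pair solves both (2) and (3) (checked by a grid search).
grid = np.linspace(-2.0, 2.0, 100001)
assert abs(grid[np.argmin(primal(grid))] - x_star) < 1e-3
assert abs(grid[np.argmin(dual(grid))] - v_star) < 1e-3
```

The two assertions in the middle are exactly the two inclusions defining (4) for this smooth instance; the grid search confirms that the same pair minimizes the primal and dual objectives.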
Problem 1
Let $\mathcal{H}$ and $\mathcal{G}$ be real Hilbert spaces, and set $\boldsymbol{\mathcal{K}}=\mathcal{H}\oplus\mathcal{G}$. Let $A\colon\mathcal{H}\to 2^{\mathcal{H}}$ and $B\colon\mathcal{G}\to 2^{\mathcal{G}}$ be maximally monotone operators, and let $L\colon\mathcal{H}\to\mathcal{G}$ be a bounded linear operator. Consider the inclusion problem
(5) find $x\in\mathcal{H}$ such that $0\in Ax+L^*\big(B(Lx)\big)$,
the dual problem
(6) find $v\in\mathcal{G}$ such that $0\in -L\big(A^{-1}(-L^*v)\big)+B^{-1}v$,
and the associated Kuhn–Tucker set
(7) $\boldsymbol{Z}=\big\{(x,v)\in\boldsymbol{\mathcal{K}} : -L^*v\in Ax\;\text{and}\;Lx\in B^{-1}v\big\}.$
The problem is to find a point in $\boldsymbol{Z}$. The sets of solutions to (5) and (6) are denoted by $\mathscr{P}$ and $\mathscr{D}$, respectively.
The Kuhn–Tucker set (7) is a natural extension of (4) to general monotone operators. In [16], a point in $\boldsymbol{Z}$ was obtained by applying the forward–backward–forward method to a suitably decomposed inclusion in $\boldsymbol{\mathcal{K}}$ (the use of Douglas–Rachford splitting was also discussed there). Subsequently, the idea of using traditional splitting techniques to find Kuhn–Tucker points was further exploited in a variety of settings, e.g., [1, 12, 14, 23, 25, 26, 48]. Despite their broad range of applicability, existing splitting methods suffer from two shortcomings that preclude their use in certain settings. Thus, a shortcoming of splitting methods based on the forward–backward–forward [16, 25] or the forward–backward algorithms [2, 26, 48] is that they require knowledge of $\|L\|$; this is also true for the Douglas–Rachford-based method of [14]. On the other hand, a shortcoming of splitting methods based on the Douglas–Rachford [16, Remark 2.9] or Spingarn [1] algorithms is that they require the inversion of linear operators, as does [12, Algorithm 3]. In some applications, however, $\|L\|$ cannot be evaluated reliably and the inversion of linear operators is not numerically feasible. As will be seen in Section 4, this issue becomes particularly acute when dealing with systems of coupled monotone inclusions, which constitute the main motivation for our investigation.
Our objective is to devise a new class of algorithms for solving Problem 1 that alleviate the above-mentioned shortcomings of existing methods. Our approach is inspired by an original splitting framework proposed in [28] for solving the basic inclusion (see also [29] for the extension to the sum of several operators)
(8) find $x\in\mathcal{H}$ such that $0\in Ax+Bx$.
The main idea of [28] is to use points in the graphs of $A$ and $B$ to construct a sequence of Fejér approximations to the so-called extended solution set
(9) $\boldsymbol{S}=\big\{(x,v)\in\mathcal{H}\oplus\mathcal{H} : v\in Bx\;\text{and}\;-v\in Ax\big\},$
and to iterate by projection onto these successive approximations. This extended solution set is actually nothing but the specialization of the Kuhn–Tucker set (7) to the case when $\mathcal{G}=\mathcal{H}$ and $L=\mathrm{Id}$. This construction led to novel splitting methods for solving (8) that do not seem to derive from the traditional methods mentioned above. In the present paper, we extend it significantly beyond (8) in order to design new primal–dual splitting algorithms for Problem 1.
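A minimal numerical sketch in the spirit of [28] may help fix ideas. The operators below are illustrative assumptions: $A=\partial(\lambda\|\cdot\|_1)$ and $B\colon x\mapsto x-c$ (the gradient of $\tfrac12\|x-c\|^2$), so the unique zero of $A+B$ is the soft-thresholding of $c$. Graph points are generated by resolvent steps (with arbitrary step sizes $\gamma=\mu=1$), the resulting halfspace contains the extended solution set (9), and the iterate is projected onto it.

```python
import numpy as np

def soft(w, t):
    # resolvent of t*∂||.||_1 (soft-thresholding)
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

# Illustrative instance of (8): A = ∂(lam*||.||_1), B: x -> x - c,
# whose unique zero is soft(c, lam).
rng = np.random.default_rng(0)
c = rng.standard_normal(5)
lam = 0.3
x_star = soft(c, lam)

x, v = np.zeros(5), np.zeros(5)
gamma = mu = 1.0
for _ in range(5000):
    # graph points: (a, a_s) in gra A and (b, b_s) in gra B
    a = soft(x - gamma * v, gamma * lam)
    a_s = (x - a) / gamma - v
    wb = x + mu * v
    b = (wb + mu * c) / (1.0 + mu)
    b_s = (wb - b) / mu
    # halfspace {(y,w) : <y, a_s+b_s> + <w, b-a> <= <a,a_s> + <b,b_s>}
    # containing the extended solution set (9)
    t_s, t = a_s + b_s, b - a
    gap = x @ t_s + v @ t - a @ a_s - b @ b_s
    denom = t_s @ t_s + t @ t
    if denom > 0:
        x = x - (gap / denom) * t_s   # project (x, v) onto the halfspace
        v = v - (gap / denom) * t

assert np.allclose(x, x_star, atol=1e-5)
assert np.allclose(v, x_star - c, atol=1e-5)
```

The quantity `gap` equals $\|x-a\|^2/\gamma+\|x-b\|^2/\mu$, so it is nonnegative and vanishes only at a solution; the update is precisely the projection of the current primal–dual pair onto the halfspace.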
The paper is organized as follows. Preliminary results are established in Section 2 and algorithms for solving Problem 1 are developed in Section 3. These results are then used in Section 4 to solve systems of composite monotone inclusions in duality.
Notation. The scalar product of a Hilbert space is denoted by $\langle\cdot\mid\cdot\rangle$ and the associated norm by $\|\cdot\|$. The symbols $\rightharpoonup$ and $\to$ denote, respectively, weak and strong convergence, and $\mathrm{Id}$ denotes the identity operator. Let $\mathcal{H}$ and $\mathcal{G}$ be real Hilbert spaces, let $2^{\mathcal{H}}$ be the power set of $\mathcal{H}$, and let $A\colon\mathcal{H}\to 2^{\mathcal{H}}$. We denote by $\operatorname{ran}A$ the range of $A$, by $\operatorname{gra}A$ the graph of $A$, and by $A^{-1}$ the inverse of $A$, which is defined through its graph $\operatorname{gra}A^{-1}=\{(u,x) : u\in Ax\}$. The resolvent of $A$ is $J_A=(\mathrm{Id}+A)^{-1}$. We say that $A$ is monotone if
(10) $(\forall (x,u)\in\operatorname{gra}A)(\forall (y,v)\in\operatorname{gra}A)\quad\langle x-y\mid u-v\rangle\geq 0,$
and maximally monotone if there does not exist a monotone operator $B\colon\mathcal{H}\to 2^{\mathcal{H}}$ such that $\operatorname{gra}A\subsetneq\operatorname{gra}B$. In this case, $J_A$ is firmly nonexpansive and defined everywhere on $\mathcal{H}$. The Hilbert direct sum of $\mathcal{H}$ and $\mathcal{G}$ is denoted by $\mathcal{H}\oplus\mathcal{G}$. The projection operator onto a nonempty closed convex subset $C$ of $\mathcal{H}$ is denoted by $P_C$. The necessary background on convex analysis and monotone operators will be found in [9].
2 Preliminary results
We first investigate some basic properties of Problem 1, starting with the fact that Kuhn–Tucker points automatically provide primal and dual solutions.
A fundamental concept in algorithmic nonlinear analysis is that of Fejér monotonicity: a sequence $(x_n)_{n\in\mathbb{N}}$ in a Hilbert space $\mathcal{H}$ is said to be Fejér monotone with respect to a set $C\subset\mathcal{H}$ if
(11) $(\forall z\in C)(\forall n\in\mathbb{N})\quad \|x_{n+1}-z\|\leq\|x_n-z\|.$
Alternatively (see [8, Section 2]), $(x_n)_{n\in\mathbb{N}}$ is Fejér monotone with respect to $C$ if, for every $n\in\mathbb{N}$, $x_{n+1}$ is a relaxed projection of $x_n$ onto a closed affine halfspace $H_n$ containing $C$, i.e.,
(12) $x_{n+1}=x_n+\lambda_n\big(P_{H_n}x_n-x_n\big),\quad\text{where}\quad \lambda_n\in\left]0,2\right[\quad\text{and}\quad C\subset H_n.$
The halfspaces $(H_n)_{n\in\mathbb{N}}$ in (12) are called Fejér approximations to $C$. The Fejér monotonicity property (11) makes it possible to greatly simplify the analysis of the asymptotic behavior of a broad class of algorithms; see [7, 9, 21, 22, 30, 31] for background, examples, and historical notes.
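Fejér monotonicity via relaxed halfspace projections can be observed directly on a small assumed example: take $C=\{z\in\mathbb{R}^2 : z_1\leq 0,\ z_2\leq 0\}$, the intersection of two halfspaces, and alternate relaxed projections onto those halfspaces; the distance to any point of $C$ never increases, as in (11). The relaxation 1.5 and the starting point are arbitrary choices.

```python
import numpy as np

# Fejér approximations in action: C = {z : z[0] <= 0, z[1] <= 0} is the
# intersection of two halfspaces, each of which contains C, and every
# iteration is a relaxed projection (12) onto one of them.
def proj_halfspace(p, u, eta):
    # projection onto the halfspace {z : <z | u> <= eta}
    gap = p @ u - eta
    return p - (gap / (u @ u)) * u if gap > 0 else p

halfspaces = [(np.array([1.0, 0.0]), 0.0), (np.array([0.0, 1.0]), 0.0)]
z = np.array([-1.0, -1.0])                 # a point of C, to monitor (11)
x = np.array([3.0, 4.0])
dists = [np.linalg.norm(x - z)]
for n in range(20):
    u, eta = halfspaces[n % 2]
    x = x + 1.5 * (proj_halfspace(x, u, eta) - x)   # relaxed projection
    dists.append(np.linalg.norm(x - z))

# (11): the distance to a point of C never increases, and the limit lies in C.
assert all(d2 <= d1 + 1e-12 for d1, d2 in zip(dists, dists[1:]))
assert np.all(x <= 1e-9)
```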
In the following proposition, we consider the problem of constructing a Fejér approximation to the Kuhn–Tucker set (7).
In the setting of Problem 1, for every $(a,a^*)\in\operatorname{gra}A$ and $(b,b^*)\in\operatorname{gra}B$, set
(13) $H_{(a,a^*,b,b^*)}=\big\{(x,v)\in\boldsymbol{\mathcal{K}} : \langle x\mid a^*+L^*b^*\rangle+\langle v\mid b-La\rangle\leq\langle a\mid a^*\rangle+\langle b\mid b^*\rangle\big\}.$
Then the following hold:

Let and . Then .

Let and . Then .

.

Let , , and . Set , , and ; if , set . Then
(14)
(i): Suppose that . Then and . Hence, (7) implies that . In addition,
(15) 
and therefore . Conversely, and .
(ii): Suppose that . Then and, by monotonicity of ,
(16) 
Likewise, since , we have
(17) 
Using (16) and (17), we obtain
(18) 
Thus, .
(iii): By (ii), . Conversely, fix and , and let . Then and therefore
(19) 
Now set . Then, since is an arbitrary point in and since [9, Propositions 20.22 and 20.23] imply that is maximally monotone, we derive from (19) that , i.e., that .
(iv): Let . As seen in (i), if , then and . Hence and . Otherwise, it follows from [9, Example 28.16] that
(20) 
In view of (13), the proof is complete.
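The halfspace mechanism of this proposition can be checked numerically: summing the monotonicity inequality of $A$ at $(x,-L^*v)$ and $(a,a^*)$ with that of $B$ at $(Lx,v)$ and $(b,b^*)$ yields $\langle x\mid a^*+L^*b^*\rangle+\langle v\mid b-La\rangle\leq\langle a\mid a^*\rangle+\langle b\mid b^*\rangle$ for every Kuhn–Tucker point $(x,v)$. The affine strongly monotone operators below are illustrative stand-ins, not operators from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 3
M = rng.standard_normal((n, n)); Amat = M @ M.T + np.eye(n)
N = rng.standard_normal((m, m)); Bmat = N @ N.T + np.eye(m)
L = rng.standard_normal((m, n))
r = rng.standard_normal(n)
Aop = lambda u: Amat @ u - r          # maximally monotone (affine, PSD part)
Bop = lambda w: Bmat @ w              # maximally monotone (linear, PSD)

# Kuhn-Tucker point of this instance of (5): -L^T v = A x and v = B(L x).
x = np.linalg.solve(Amat + L.T @ Bmat @ L, r)
v = Bop(L @ x)
assert np.allclose(-L.T @ v, Aop(x))

# Every choice of graph points defines a halfspace containing (x, v).
for _ in range(100):
    a = rng.standard_normal(n); a_s = Aop(a)      # (a, a*) in gra A
    b = rng.standard_normal(m); b_s = Bop(b)      # (b, b*) in gra B
    lhs = x @ (a_s + L.T @ b_s) + v @ (b - L @ a)
    rhs = a @ a_s + b @ b_s
    assert lhs <= rhs + 1e-9
```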
Remark
Our analysis will require the following asymptotic principle, which is of interest in its own right.
In the setting of Problem 1, let be a sequence in , let be a sequence in , and let . Suppose that , , , and . Then and . Proof. Define
(21) 
Then
(22) 
Now set
(23) 
We deduce from (7) that, for every ,
(24)  
On the other hand, [1, Lemma 3.1] asserts that
(25) 
Now set
(26) 
Since and , we derive from (25) that and . Altogether, since and are weakly continuous, the assumptions yield
(27) 
However, (27) and [9, Proposition 25.3] imply that
(28) 
In view of (24), the proof is complete.
3 Finding Kuhn–Tucker points by Fejér approximations
In view of Proposition 2(i), Problem 1 reduces to finding a point in a nonempty closed convex subset of a Hilbert space. This can be achieved via the following generic Fejér-monotone algorithm.
[22] Let be a real Hilbert space, let be a nonempty closed convex subset of , and let . Iterate
(29) 
Then the following hold:

is Fejér monotone with respect to : .

.

Suppose that, for every and every strictly increasing sequence in , . Then converges weakly to a point in .
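A minimal numerical instance of this generic scheme, under assumed choices: take $C$ to be the closed unit ball of $\mathbb{R}^3$ and, at each step, project onto the supporting halfspace of $C$ at $P_C x_n$ (for $x_n\notin C$, the projection of $x_n$ onto that halfspace coincides with $P_C x_n$), with a varying relaxation schedule. The conclusions (i) and (ii) above are then visible in the iterates.

```python
import numpy as np

# C = closed unit ball of R^3; P_C is its projection operator, and the
# relaxed step below is the projection onto the supporting halfspace at P_C x.
P_C = lambda p: p / max(1.0, np.linalg.norm(p))

x = np.array([4.0, -3.0, 2.0])
z = np.array([0.5, 0.0, 0.0])              # an arbitrary point of C
dists, steps = [np.linalg.norm(x - z)], []
for n in range(100):
    lam = 0.5 + (n % 10) / 10.0            # relaxations in [0.5, 1.4]
    x_next = x + lam * (P_C(x) - x)        # relaxed projection step
    steps.append(np.linalg.norm(x_next - x))
    x = x_next
    dists.append(np.linalg.norm(x - z))

# (i) Fejér monotonicity with respect to C, and (ii) vanishing steps.
assert all(d2 <= d1 + 1e-12 for d1, d2 in zip(dists, dists[1:]))
assert steps[-1] < 1e-12 and np.linalg.norm(x) <= 1.0 + 1e-12
```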
We now derive from the above convergence principle a conceptual primal–dual splitting framework.
Consider the setting of Problem 1. Suppose that , let , let , and iterate
(30) 
Then either (30) terminates at a solution in a finite number of iterations or it generates infinite sequences and such that the following hold:

is Fejér monotone with respect to .

.

Suppose that for every , every , and every strictly increasing sequence in ,
(31) Then converges weakly to a point , converges weakly to a point , and .
We first observe that, by Proposition 2, is nonempty, closed, and convex. Two alternatives are possible. First, suppose that, for some , . Then Proposition 2(i) asserts that the algorithm terminates at . Now suppose that . For every , set
(32) 
and define
(33) 
Then we derive from (30) and Proposition 2(ii) that . On the other hand, Proposition 2(iv) implies that
(34) 
Thus, the conclusions follow from Proposition 2(i) and Proposition 3.
At the $n$th iteration of algorithm (30), one picks the quadruple in . In the following corollary, this quadruple is taken in a more restricted set adapted to the current primal–dual iterate , which leads to more explicit convergence conditions.
Consider the setting of Problem 1. Suppose that , let , let , let , and let . For every , set
(35) 
Iterate
(36) 
Then either (36) terminates at a solution in a finite number of iterations or it generates infinite sequences and such that the following hold:

and .

and .

Suppose that
(37) Then converges weakly to a point , converges weakly to a point , and .
This corollary is an application of Proposition 3. To see this, let . First, to show that the algorithm is well defined, we must prove that . Since , it follows from Proposition 2(ii) that . Now let , and set and . Then (7) yields and . Moreover,
(38) 
Hence and (36) is well defined. Next, to show that (36) is a special case of (30) it is enough to consider the case when . Note that (36) yields
(39) 
In turn, if we define as in (30), we obtain
(40) 
Hence (36) is a special case of (30). Moreover, it follows from (40) and Proposition 3(ii) that
(41) 
which establishes (i). On the other hand, (ii) results from (36) and (41) since
(42) 
Finally, to prove (iii), it remains to check (31). Take , , and a strictly increasing sequence in such that and . Then it follows from (37) and (i) that
(43) 
and from (36) that and . We therefore appeal to Proposition 2 to conclude that .
Remark
Corollary 3 is conceptual in that it does not specify a rule for selecting the quadruple in at iteration . We now provide an example of a concrete selection rule.
Consider the setting of Problem 1. Suppose that , let
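As a preview of such a resolvent-based selection rule, the following sketch instantiates the projective scheme for (5) under assumed choices: $A=\partial(\lambda\|\cdot\|_1)$ and $B\colon y\mapsto y-c$, with graph points generated by resolvent steps (a natural but hypothetical rule; the step sizes $\gamma,\mu$ are arbitrary, and no knowledge of $\|L\|$ is used, in line with the stated aims of the method).

```python
import numpy as np

def soft(w, t):
    # resolvent of t*∂||.||_1 (soft-thresholding)
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

rng = np.random.default_rng(4)
m, d = 5, 4
Lmat = rng.standard_normal((m, d))
c = rng.standard_normal(m)
lam_reg = 0.2
gamma = mu = 1.0

x, v = np.zeros(d), np.zeros(m)
for _ in range(20000):
    # (a, a_s) in gra A and (b, b_s) in gra B via resolvent steps
    a = soft(x - gamma * (Lmat.T @ v), gamma * lam_reg)
    a_s = (x - a) / gamma - Lmat.T @ v
    wb = Lmat @ x + mu * v
    b = (wb + mu * c) / (1.0 + mu)
    b_s = (wb - b) / mu
    # projection onto the halfspace built from these graph points
    t_s = a_s + Lmat.T @ b_s
    t = b - Lmat @ a
    gap = x @ t_s + v @ t - a @ a_s - b @ b_s   # = ||x-a||^2/g + ||Lx-b||^2/mu
    denom = t_s @ t_s + t @ t
    if denom > 0:
        x = x - (gap / denom) * t_s
        v = v - (gap / denom) * t

# Approximate Kuhn-Tucker conditions: -L*v in Ax and v in B(Lx).
assert np.linalg.norm(x - a) < 1e-4
assert np.linalg.norm(Lmat @ x - b) < 1e-4
assert np.linalg.norm(v - (Lmat @ x - c)) < 1e-4
```

At a Kuhn–Tucker point the quantities `t_s` and `t` vanish; here `gap` plays the role of a computable residual, and the update never divides by $\|L\|$ or inverts a linear operator.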