The Proximal Alternating Minimization Algorithm for two-block separable convex optimization problems with linear constraints

Abstract. The Alternating Minimization Algorithm (AMA) has been proposed by Tseng to solve convex programming problems with two-block separable linear constraints and objectives, whereby (at least) one of the components of the latter is assumed to be strongly convex. The fact that one of the subproblems to be solved within the iteration process of AMA does not usually correspond to the calculation of a proximal operator through a closed formula affects the implementability of the algorithm. In this paper we allow in each block of the objective a further smooth convex function and propose a proximal version of AMA, called Proximal AMA, which is achieved by equipping the algorithm with proximal terms induced by variable metrics. For suitable choices of the latter, the solving of the two subproblems in the iterative scheme can be reduced to the computation of proximal operators. We investigate the convergence of the proposed algorithm in a real Hilbert space setting and illustrate its numerical performance on two applications in image processing and machine learning.

Key Words. Proximal AMA, Lagrangian, saddle points, subdifferential, convex optimization, Fenchel duality

AMS subject classification. 47H05, 65K05, 90C25

1 Introduction and preliminaries

The Alternating Minimization Algorithm (AMA) has been proposed by Tseng (see [16]) in order to solve optimization problems of the form

(1) $\min_{(x,z) \in \mathcal{H} \times \mathcal{G}} \{ f(x) + g(z) \} \quad \text{subject to} \quad Ax + Bz = b,$

where $\mathcal{H}$, $\mathcal{G}$ and $\mathcal{K}$ are real Hilbert spaces, $f : \mathcal{H} \to \overline{\mathbb{R}} := \mathbb{R} \cup \{\pm\infty\}$ is a proper, $\gamma$-strongly convex with $\gamma > 0$ (this means that $f - \frac{\gamma}{2}\|\cdot\|^2$ is convex) and lower semicontinuous function, $g : \mathcal{G} \to \overline{\mathbb{R}}$ is a proper, convex and lower semicontinuous function, $A : \mathcal{H} \to \mathcal{K}$ and $B : \mathcal{G} \to \mathcal{K}$ are linear continuous operators, and $b \in \mathcal{K}$.

For $c > 0$ we consider the augmented Lagrangian $\mathcal{L}_c$ associated with problem (1),
$$\mathcal{L}_c(x, z, p) = f(x) + g(z) + \langle p, b - Ax - Bz \rangle + \frac{c}{2}\|Ax + Bz - b\|^2.$$

The Lagrangian associated with problem (1) is
$$\mathcal{L}(x, z, p) = f(x) + g(z) + \langle p, b - Ax - Bz \rangle.$$

The Alternating Minimization Algorithm reads:

Algorithm 1.

(AMA) Choose $p^0 \in \mathcal{K}$ and a sequence of stepsizes $(c_k)_{k \geq 0} \subseteq (0, +\infty)$. For all $k \geq 0$ set:

(2) $x^{k+1} = \operatorname*{argmin}_{x \in \mathcal{H}} \{ f(x) - \langle p^k, Ax \rangle \}$
(3) $z^{k+1} \in \operatorname*{argmin}_{z \in \mathcal{G}} \{ g(z) - \langle p^k, Bz \rangle + \frac{c_k}{2}\|A x^{k+1} + B z - b\|^2 \}$
(4) $p^{k+1} = p^k + c_k (b - A x^{k+1} - B z^{k+1})$
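
For illustration, the iteration (2)-(4) can be sketched in Python as follows (a minimal sketch for the finite-dimensional case with NumPy-style arrays; the constant stepsize c and the routine names grad_f_conj and solve_z_subproblem are assumptions made only for this example):

    def ama(grad_f_conj, solve_z_subproblem, A, B, b, c, p0, iters=500):
        # Sketch of the AMA iteration (2)-(4) with a constant stepsize c.
        # grad_f_conj(w) returns argmin_x { f(x) - <w, x> } = grad f*(w),
        # single-valued because f is strongly convex; solve_z_subproblem is a
        # user-supplied routine for step (3), which in general has no closed
        # form (see the discussion after Theorem 2).
        p = p0.copy()
        for _ in range(iters):
            x = grad_f_conj(A.T @ p)            # step (2)
            z = solve_z_subproblem(x, p, c)     # step (3)
            p = p + c * (b - A @ x - B @ z)     # step (4)
        return x, z, p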

The main convergence properties of this numerical algorithm are summarized in the theorem below (see [16]).

Theorem 2.

Let and be such that . Assume that the sequence of stepsizes satisfies

where . Let be the sequence generated by Algorithm 1. Then there exist and an optimal Lagrange multiplier associated with the constraint such that

If the function has bounded level sets, then is bounded and any of its cluster points provides with an optimal solution of (1).

The strong convexity of $f$ allows one to reduce the minimization problem in (2) to the calculation of the proximal operator of a proper, convex and lower semicontinuous function. Due to the presence of the linear operator $B$, this is in general not the case for the minimization problem in (3). This fact makes the AMA method less tractable from the point of view of its implementation. With the exception of some very particular cases, one has to use a subroutine in order to compute $z^{k+1}$, a fact which can have a negative influence on the convergence behaviour of the algorithm. One possibility to avoid this, without losing the convergence properties of AMA, is to replace (3) by a proximal step of $g$. The papers [3] and [9] provide convincing evidence for the versatility and efficiency of proximal point algorithms for solving nonsmooth convex optimization problems.
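
To make the first claim explicit, one can argue as follows (a short verification; the auxiliary function $f_0 := f - \frac{\gamma}{2}\|\cdot\|^2$, which is convex by the strong convexity of $f$, is introduced here only for illustration): completing the square in (2) gives
$$x^{k+1} = \operatorname*{argmin}_{x \in \mathcal{H}} \Big\{ f_0(x) + \frac{\gamma}{2}\Big\| x - \frac{1}{\gamma} A^* p^k \Big\|^2 \Big\} = \operatorname{prox}_{\frac{1}{\gamma} f_0}\Big( \frac{1}{\gamma} A^* p^k \Big),$$
whereas in (3) the term $\frac{c_k}{2}\|A x^{k+1} + B z - b\|^2$ couples the variable $z$ through the operator $B$, so that in general no analogous closed formula is available.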

In this paper we address, in a real Hilbert space setting, a problem of type (1) which is obtained by adding in each block of the objective a further smooth convex function. To solve this problem we propose a so-called Proximal Alternating Minimization Algorithm (Proximal AMA), which is obtained by introducing in each of the minimization problems (2) and (3) additional proximal terms defined by means of positively semidefinite operators. The two smooth convex functions in the objective are evaluated via gradient steps. We will show that, for appropriate choices of these operators, the minimization problem in (3) reduces to the performing of a proximal step. We carry out the convergence analysis of the proposed method and show that the generated sequence converges weakly to a saddle point of the Lagrangian associated with the optimization problem under investigation. The numerical performance of Proximal AMA, in particular in comparison with AMA, is illustrated on two applications in image processing and machine learning.

A similarity of AMA to the classical ADMM algorithm, introduced by Gabay and Mercier in [12], is evident. In [10, 15] (see also [1, 5]) proximal versions of the ADMM algorithm have been proposed and investigated from the point of view of their convergence properties. Parts of the convergence analysis for Proximal AMA are carried out in a similar spirit to the convergence proofs in these papers.

In the remainder of this section, we discuss some notations, definitions and basic properties we will use in this paper (see [2]). Let $\mathcal{H}$ and $\mathcal{G}$ be real Hilbert spaces with corresponding inner products $\langle \cdot, \cdot \rangle$ and associated norms $\|\cdot\| = \sqrt{\langle \cdot, \cdot \rangle}$. In both spaces we denote by $\rightharpoonup$ the weak convergence and by $\rightarrow$ the strong convergence.

We say that a function $f : \mathcal{H} \to \overline{\mathbb{R}} := \mathbb{R} \cup \{\pm\infty\}$ is proper, if $\operatorname{dom} f := \{x \in \mathcal{H} : f(x) < +\infty\} \neq \emptyset$ and $f(x) > -\infty$ for all $x \in \mathcal{H}$.

Let $f : \mathcal{H} \to \overline{\mathbb{R}}$ be a proper, convex and lower semicontinuous function. The (Fenchel) conjugate function $f^* : \mathcal{H} \to \overline{\mathbb{R}}$ of $f$ is defined as
$$f^*(p) = \sup_{x \in \mathcal{H}} \{\langle p, x \rangle - f(x)\},$$
and is a proper, convex and lower semicontinuous function. It also holds $f^{**} = f$, where $f^{**}$ is the conjugate function of $f^*$. The (convex) subdifferential of $f$ at $x \in \mathcal{H}$ is defined as $\partial f(x) = \{p \in \mathcal{H} : f(y) \geq f(x) + \langle p, y - x \rangle \ \forall y \in \mathcal{H}\}$, if $f(x) \in \mathbb{R}$, and as $\partial f(x) := \emptyset$, otherwise.

The infimal convolution of two proper functions $f, g : \mathcal{H} \to \overline{\mathbb{R}}$ is the function $f \Box g : \mathcal{H} \to \overline{\mathbb{R}}$, defined by $(f \Box g)(x) = \inf_{y \in \mathcal{H}} \{f(y) + g(x - y)\}$.

The proximal point operator of parameter $\gamma$ of $f$ at $x \in \mathcal{H}$, where $\gamma > 0$, is defined as
$$\operatorname{prox}_{\gamma f} : \mathcal{H} \to \mathcal{H}, \quad \operatorname{prox}_{\gamma f}(x) = \operatorname*{argmin}_{y \in \mathcal{H}} \Big\{ \gamma f(y) + \frac{1}{2}\|y - x\|^2 \Big\}.$$

According to Moreau’s decomposition formula we have
$$\operatorname{prox}_{\gamma f}(x) + \gamma \operatorname{prox}_{\frac{1}{\gamma} f^*}\big(\tfrac{1}{\gamma} x\big) = x \quad \forall x \in \mathcal{H}.$$
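
For illustration, the proximal operator and Moreau’s decomposition formula can be checked numerically as in the following minimal Python sketch (the choice $f = \|\cdot\|_1$, whose conjugate is the indicator function of the unit $\ell_\infty$-ball, is an assumption made only for this example):

    import numpy as np

    gamma = 0.7
    x = np.array([1.5, -0.2, 0.05, -3.0])

    # prox of gamma*||.||_1 is componentwise soft thresholding
    prox_f = np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

    # the conjugate of ||.||_1 is the indicator of [-1,1]^n, whose prox
    # (for any parameter) is the projection onto [-1,1]^n
    prox_conj = np.clip(x / gamma, -1.0, 1.0)

    # Moreau: x = prox_{gamma f}(x) + gamma * prox_{f*/gamma}(x/gamma)
    print(np.allclose(x, prox_f + gamma * prox_conj))  # True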

Let $C \subseteq \mathcal{H}$ be a convex and closed set. The strong quasi-relative interior of $C$ is
$$\operatorname{sqri} C := \Big\{ x \in C : \bigcup_{\lambda > 0} \lambda (C - x) \ \text{is a closed linear subspace of} \ \mathcal{H} \Big\}.$$

We always have $\operatorname{int} C \subseteq \operatorname{sqri} C$ and, if $\mathcal{H}$ is finite dimensional, then $\operatorname{sqri} C = \operatorname{ri} C$, where $\operatorname{ri} C$ denotes the relative interior of $C$ and represents the interior of $C$ relative to its affine hull.

We set $\mathcal{S}_+(\mathcal{H}) := \{ M : \mathcal{H} \to \mathcal{H} : M \ \text{is linear, continuous, self-adjoint and positively semidefinite} \}$.

For $M \in \mathcal{S}_+(\mathcal{H})$ we define the seminorm $\|\cdot\|_M : \mathcal{H} \to [0, +\infty)$, $\|x\|_M := \sqrt{\langle Mx, x \rangle}$. We consider the Loewner partial ordering on $\mathcal{S}_+(\mathcal{H})$, defined for $M_1, M_2 \in \mathcal{S}_+(\mathcal{H})$ by
$$M_1 \succcurlyeq M_2 \ :\Longleftrightarrow\ \|x\|_{M_1}^2 \geq \|x\|_{M_2}^2 \quad \forall x \in \mathcal{H}.$$

Furthermore, we define for $\alpha > 0$
$$\mathcal{P}_\alpha(\mathcal{H}) := \{ M \in \mathcal{S}_+(\mathcal{H}) : M \succcurlyeq \alpha \operatorname{Id} \},$$

where $\operatorname{Id} : \mathcal{H} \to \mathcal{H}$, $\operatorname{Id}(x) = x$ for all $x \in \mathcal{H}$, denotes the identity operator on $\mathcal{H}$.

Let $A : \mathcal{H} \to \mathcal{G}$ be a linear continuous operator. The operator $A^* : \mathcal{G} \to \mathcal{H}$, fulfilling $\langle A^* y, x \rangle = \langle y, Ax \rangle$ for all $x \in \mathcal{H}$ and $y \in \mathcal{G}$, denotes the adjoint operator of $A$, while $\|A\| := \sup\{ \|Ax\| : \|x\| \leq 1 \}$ denotes the norm of $A$.

2 The Proximal Alternating Minimization Algorithm

The two-block separable optimization problem we are going to investigate has the following formulation.

Problem 3.

Let $\mathcal{H}$, $\mathcal{G}$ and $\mathcal{K}$ be real Hilbert spaces, $f : \mathcal{H} \to \overline{\mathbb{R}}$ a proper, $\gamma$-strongly convex (with $\gamma > 0$) and lower semicontinuous function, $h_1 : \mathcal{H} \to \mathbb{R}$ a convex and Fréchet differentiable function with $L_1$-Lipschitz continuous gradient, $L_1 \geq 0$, $g : \mathcal{G} \to \overline{\mathbb{R}}$ a proper, convex and lower semicontinuous function, $h_2 : \mathcal{G} \to \mathbb{R}$ a convex and Fréchet differentiable function with $L_2$-Lipschitz continuous gradient, $L_2 \geq 0$, $A : \mathcal{H} \to \mathcal{K}$ and $B : \mathcal{G} \to \mathcal{K}$ linear continuous operators such that $A \neq 0$, and $b \in \mathcal{K}$. Consider the following optimization problem with two-block separable objective function and linear constraints

(5) $\min_{(x,z) \in \mathcal{H} \times \mathcal{G}} \{ f(x) + h_1(x) + g(z) + h_2(z) \} \quad \text{subject to} \quad Ax + Bz = b.$

Notice that we allow the Lipschitz constant $L_1$ of the gradient of the function $h_1$ to be zero. In this case $h_1$ is an affine function. The same applies for the function $h_2$.
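
As an illustration of this problem class (an example chosen here only for concreteness, not taken from the paper): the regularized problem $\min_{x} \frac{1}{2}\|x - d\|^2 + \lambda \|Dx\|_1$, with a nonzero matrix $D$ and $\lambda > 0$, can be written in the form (5) by introducing the additional variable $z := Dx$ and setting $f = \frac{1}{2}\|\cdot - d\|^2$ (which is $1$-strongly convex), $h_1 = h_2 = 0$, $g = \lambda\|\cdot\|_1$, $A = D$, $B = -\operatorname{Id}$ and $b = 0$.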

The Lagrangian associated with the optimization problem (5) is
$$\mathcal{L}(x, z, p) = f(x) + h_1(x) + g(z) + h_2(z) + \langle p, b - Ax - Bz \rangle.$$

We say that $(x^*, z^*, p^*) \in \mathcal{H} \times \mathcal{G} \times \mathcal{K}$ is a saddle point of the Lagrangian $\mathcal{L}$, if
$$\mathcal{L}(x^*, z^*, p) \leq \mathcal{L}(x^*, z^*, p^*) \leq \mathcal{L}(x, z, p^*)$$

holds for all $(x, z, p) \in \mathcal{H} \times \mathcal{G} \times \mathcal{K}$.

One can show that $(x^*, z^*, p^*)$ is a saddle point of the Lagrangian $\mathcal{L}$ if and only if $(x^*, z^*)$ is an optimal solution of (5), $p^*$ is an optimal solution of its Fenchel dual problem

(6) $\sup_{p \in \mathcal{K}} \left\{ -(f + h_1)^*(A^* p) - (g + h_2)^*(B^* p) + \langle p, b \rangle \right\},$

and the optimal objective values of (5) and (6) coincide. The existence of saddle points for $\mathcal{L}$ is guaranteed when (5) has an optimal solution and, for instance, the Attouch-Brézis-type condition

(7) $b \in \operatorname{sqri}\big( A(\operatorname{dom} f) + B(\operatorname{dom} g) \big)$

holds (see [4, Theorem 3.4]). In the finite-dimensional setting, this asks for the existence of $x \in \operatorname{ri}(\operatorname{dom} f)$ and $z \in \operatorname{ri}(\operatorname{dom} g)$ satisfying $Ax + Bz = b$ and coincides with the assumption used by Tseng in [16].

The system of optimality conditions for the primal-dual pair of optimization problems (5)-(6) reads:

(8) $A^* p^* \in \partial f(x^*) + \nabla h_1(x^*), \qquad B^* p^* \in \partial g(z^*) + \nabla h_2(z^*), \qquad A x^* + B z^* = b.$

This means that if (5) has an optimal solution $(x^*, z^*)$ and a qualification condition, like for instance (7), is fulfilled, then there exists an optimal solution $p^*$ of (6) such that (8) holds; consequently, $(x^*, z^*, p^*)$ is a saddle point of the Lagrangian $\mathcal{L}$. Conversely, if $(x^*, z^*, p^*)$ is a saddle point of the Lagrangian $\mathcal{L}$, thus satisfies the relations in (8), then $(x^*, z^*)$ is an optimal solution of (5) and $p^*$ is an optimal solution of (6).

Remark 4.

If $(x^*_1, z^*_1, p^*_1)$ and $(x^*_2, z^*_2, p^*_2)$ are two saddle points of the Lagrangian $\mathcal{L}$, then $x^*_1 = x^*_2$. This follows easily by using the strong monotonicity of $\partial f$, the monotonicity of $\partial g$, $\nabla h_1$ and $\nabla h_2$, and the relations in (8).
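
One possible way to make this argument explicit (a sketch based only on the relations in (8)): if $(x^*_1, z^*_1, p^*_1)$ and $(x^*_2, z^*_2, p^*_2)$ satisfy (8), then
$$\gamma \|x^*_1 - x^*_2\|^2 \leq \langle A^*(p^*_1 - p^*_2), x^*_1 - x^*_2 \rangle = -\langle p^*_1 - p^*_2, B(z^*_1 - z^*_2) \rangle \leq 0,$$
where the first inequality follows from the inclusions $A^* p^*_i - \nabla h_1(x^*_i) \in \partial f(x^*_i)$, the $\gamma$-strong monotonicity of $\partial f$ and the monotonicity of $\nabla h_1$, the equality follows from $A x^*_i + B z^*_i = b$, $i = 1, 2$, and the last inequality follows from the inclusions $B^* p^*_i - \nabla h_2(z^*_i) \in \partial g(z^*_i)$ together with the monotonicity of $\partial g$ and $\nabla h_2$. Hence $x^*_1 = x^*_2$.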

In the following we formulate the Proximal Alternating Minimization Algorithm to solve (5). To this end, we modify Tseng's AMA by evaluating in each of the two subproblems the functions $h_1$ and $h_2$ via gradient steps, respectively, and by introducing proximal terms defined through two sequences of positively semidefinite operators $(M_1^k)_{k \geq 0}$ and $(M_2^k)_{k \geq 0}$.

Algorithm 5.

(Proximal AMA) Let $(M_1^k)_{k \geq 0} \subseteq \mathcal{S}_+(\mathcal{H})$ and $(M_2^k)_{k \geq 0} \subseteq \mathcal{S}_+(\mathcal{G})$. Choose $(x^0, z^0, p^0) \in \mathcal{H} \times \mathcal{G} \times \mathcal{K}$ and a sequence of stepsizes $(c_k)_{k \geq 0} \subseteq (0, +\infty)$. For all $k \geq 0$ set:

(9) $x^{k+1} = \operatorname*{argmin}_{x \in \mathcal{H}} \left\{ f(x) + \langle x, \nabla h_1(x^k) - A^* p^k \rangle + \frac{1}{2}\|x - x^k\|^2_{M_1^k} \right\}$
(10) $z^{k+1} \in \operatorname*{argmin}_{z \in \mathcal{G}} \left\{ g(z) + \langle z, \nabla h_2(z^k) - B^* p^k \rangle + \frac{c_k}{2}\|A x^{k+1} + B z - b\|^2 + \frac{1}{2}\|z - z^k\|^2_{M_2^k} \right\}$
(11) $p^{k+1} = p^k + c_k (b - A x^{k+1} - B z^{k+1})$
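
When the operators $M_2^k$ are chosen as in Remark 7 below, the whole iteration can be implemented with one gradient/proximal evaluation per block. The following minimal Python sketch illustrates this (the constant stepsize c, the choices $M_1^k = 0$ and $M_2^k = \frac{1}{\sigma}\operatorname{Id} - c B^* B$, and all routine names are assumptions made only for this example):

    def proximal_ama(grad_f_conj, grad_h1, prox_g, grad_h2, A, B, b,
                     c, sigma, x0, z0, p0, iters=500):
        # Sketch of a Proximal-AMA-type iteration with M1 = 0 and
        # M2 = (1/sigma)*Id - c*B^T B, which requires sigma*c*||B||^2 <= 1
        # so that M2 is positively semidefinite (cf. Remark 7 below).
        # grad_f_conj(w) returns argmin_x { f(x) - <w, x> } = grad f*(w).
        x, z, p = x0.copy(), z0.copy(), p0.copy()
        for _ in range(iters):
            # step (9) with M1 = 0
            x = grad_f_conj(A.T @ p - grad_h1(x))
            # step (10): with the above M2 it collapses to one proximal step of g
            v = z - sigma * (grad_h2(z) - B.T @ p + c * B.T @ (A @ x + B @ z - b))
            z = prox_g(v, sigma)
            # step (11): dual update
            p = p + c * (b - A @ x - B @ z)
        return x, z, p

For instance, for $f = \frac{\gamma}{2}\|\cdot - r\|^2$ one may take grad_f_conj(w) = r + w/gamma, and for $g = \lambda\|\cdot\|_1$ the proximal step prox_g is componentwise soft thresholding.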
Remark 6.

The sequence $(z^{k+1})_{k \geq 0}$ is uniquely determined if there exists $\alpha > 0$ such that $c_k B^* B + M_2^k \in \mathcal{P}_\alpha(\mathcal{G})$ for all $k \geq 0$. This actually ensures that the objective function in the subproblem (10) is strongly convex.

Remark 7.

Let $k \geq 0$ be fixed and $M_2^k := \frac{1}{\sigma_k}\operatorname{Id} - c_k B^* B$, where $\sigma_k > 0$ and $\sigma_k c_k \|B\|^2 \leq 1$. Then $M_2^k$ is positively semidefinite and the update of $z^{k+1}$ in the Proximal AMA method becomes a proximal step. Indeed, (10) holds if and only if
$$0 \in \partial g(z^{k+1}) + \nabla h_2(z^k) - B^* p^k + c_k B^* (A x^{k+1} + B z^{k+1} - b) + M_2^k (z^{k+1} - z^k),$$

or, equivalently,
$$0 \in \partial g(z^{k+1}) + \nabla h_2(z^k) - B^* p^k + c_k B^* (A x^{k+1} + B z^k - b) + \frac{1}{\sigma_k}(z^{k+1} - z^k).$$

But this is nothing else than
$$z^{k+1} = \operatorname{prox}_{\sigma_k g}\Big( z^k - \sigma_k \big( \nabla h_2(z^k) - B^* p^k + c_k B^* (A x^{k+1} + B z^k - b) \big) \Big).$$

The convergence of the Proximal AMA method is addressed in the next theorem.

Theorem 8.

In the setting of Problem 3 let the set of the saddle points of the Lagrangian be nonempty. Assume that for all and that is a monotonically decreasing sequence satisfying

(12)

where . If one of the following assumptions:

  1. there exists such that for all ;

  2. there exists such that ;

holds true, then the sequence $(x^k, z^k, p^k)_{k \geq 0}$ generated by Algorithm 5 converges weakly to a saddle point of the Lagrangian $\mathcal{L}$.

Proof.

Let $(x^*, z^*, p^*)$ be a fixed saddle point of the Lagrangian $\mathcal{L}$. This means that it fulfils the system of optimality conditions

(13) $A^* p^* - \nabla h_1(x^*) \in \partial f(x^*)$
(14) $B^* p^* - \nabla h_2(z^*) \in \partial g(z^*)$
(15) $A x^* + B z^* = b$

We start by proving that

and that the sequences and are bounded.

Assume first that $L_1 > 0$ and $L_2 > 0$. Let $k \geq 0$ be fixed. Writing the optimality conditions for the subproblems (9) and (10) we obtain

(16) $A^* p^k - \nabla h_1(x^k) - M_1^k (x^{k+1} - x^k) \in \partial f(x^{k+1})$

and

(17) $B^* p^k - \nabla h_2(z^k) + c_k B^* (b - A x^{k+1} - B z^{k+1}) - M_2^k (z^{k+1} - z^k) \in \partial g(z^{k+1}),$

respectively. Combining (13), (14), (16) and (17) with the strong monotonicity of $\partial f$ and the monotonicity of $\partial g$, we obtain

and

which after summation lead to

(18)

According to the Baillon-Haddad Theorem (see [2, Corollary 18.16]) the gradients of $h_1$ and $h_2$ are $\frac{1}{L_1}$- and $\frac{1}{L_2}$-cocoercive, respectively, thus
$$\langle \nabla h_1(x) - \nabla h_1(y), x - y \rangle \geq \frac{1}{L_1}\|\nabla h_1(x) - \nabla h_1(y)\|^2 \quad \forall x, y \in \mathcal{H}$$
and
$$\langle \nabla h_2(z) - \nabla h_2(w), z - w \rangle \geq \frac{1}{L_2}\|\nabla h_2(z) - \nabla h_2(w)\|^2 \quad \forall z, w \in \mathcal{G}.$$

On the other hand, by taking into account (11) and (15), it holds:

By employing the last three relations in (18), we obtain

which, after expressing the inner products by means of norms, becomes

Using again (11), the inequality and the following expressions

and

we obtain

Finally, by using the monotonicity of and of , we obtain

(19)

where

If $L_1 = 0$ (and, consequently, $\nabla h_1$ is constant) and $L_2 = 0$, then, by using the same arguments, we obtain again (19), but with