A primal-dual dynamical approach to structured convex minimization problems
Abstract. In this paper we propose a primal-dual dynamical approach to the minimization of a structured convex function consisting of a smooth term, a nonsmooth term, and the composition of another nonsmooth term with a linear continuous operator. In this scope we introduce a dynamical system for which we prove that its trajectories asymptotically converge to a saddle point of the Lagrangian of the underlying convex minimization problem as time tends to infinity. In addition, we provide rates for both the violation of the feasibility condition by the ergodic trajectories and the convergence of the objective function along these ergodic trajectories to its minimal value. Explicit time discretization of the dynamical system results in a numerical algorithm which is a combination of the linearized proximal method of multipliers and the proximal ADMM algorithm.
Keywords. structured convex minimization, dynamical system, proximal ADMM algorithm, primal-dual algorithm
AMS Subject Classification. 37N40, 49N15, 90C25, 90C46
1 Introduction and preliminaries
For two real Hilbert spaces, we consider the convex minimization problem
(1) 
where and are proper, convex and lower semicontinuous functions, is a convex and Fréchet differentiable function with Lipschitz continuous gradient , i.e. for every , and is a continuous linear operator.
Problem (1) can be rewritten as
(2) 
Obviously, is an optimal solution of (1) if and only if is an optimal solution of (2) and .
Based on this reformulation of problem (1) we define its Lagrangian
An element is said to be a saddle point of the Lagrangian , if
It is known that is a saddle point of if and only if is an optimal solution of (1), , and is an optimal solution of the Fenchel dual to problem (1), which reads
(3) 
In this situation the optimal objective values of (1) and (3) coincide.
In the formulation of (3),
and
denote the conjugate functions of and , respectively, and denotes the adjoint operator of . The infimal convolution of the functions and is defined by
It is also known that is a saddle point of the Lagrangian if and only if it is a solution of the following system of primal-dual optimality conditions
We recall that the convex subdifferential of the function at is defined by , for , and by , otherwise.
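To make the subdifferential definition concrete, the following minimal Python sketch (our illustration, not part of the paper; the function names and the test grid are ours) checks the subgradient inequality numerically for the absolute value function, whose subdifferential at the origin is the interval [-1, 1].

```python
# Illustrative sketch: u is a subgradient of f at x if
# f(y) >= f(x) + u*(y - x) for all y.

def is_subgradient(f, x, u, test_points):
    """Numerically check the subgradient inequality on a grid of points."""
    return all(f(y) >= f(x) + u * (y - x) - 1e-12 for y in test_points)

grid = [k / 10.0 for k in range(-50, 51)]

# Every u in [-1, 1] is a subgradient of |.| at 0 ...
print(all(is_subgradient(abs, 0.0, u, grid) for u in (-1.0, -0.5, 0.0, 0.5, 1.0)))  # True
# ... while a slope outside [-1, 1] is not.
print(is_subgradient(abs, 0.0, 1.5, grid))  # False
```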
A saddle point of the Lagrangian exists whenever the primal problem (1) has an optimal solution and the so-called Attouch–Brézis regularity condition
holds. Here,
denotes the strong quasi-relative interior of a set . We refer the reader to [9, 11, 28] for more insights into the world of regularity conditions and convex duality theory.
Let denote the family of continuous linear operators which are self-adjoint and positive semidefinite. For we introduce the following seminorm on :
This introduces on the following partial ordering: for
For fixed, let be
where denotes the identity operator on .
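The induced seminorm can be illustrated in finite dimensions, where a positive semidefinite self-adjoint operator is simply a symmetric PSD matrix; the sketch below (our illustration, with a matrix of our choosing) also shows why it is only a seminorm: a singular PSD matrix annihilates some nonzero vectors.

```python
# Illustrative sketch: the seminorm ||x||_M = sqrt(<Mx, x>) induced by a
# positive semidefinite self-adjoint operator M, here a 2x2 matrix.

def seminorm(M, x):
    """Compute sqrt(<Mx, x>) for a symmetric PSD matrix M."""
    Mx = [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]
    return sum(Mx[i] * x[i] for i in range(len(x))) ** 0.5

M = [[2.0, 0.0], [0.0, 0.0]]  # PSD but singular: a genuine seminorm, not a norm

print(seminorm(M, [1.0, 0.0]))  # sqrt(2)
print(seminorm(M, [0.0, 5.0]))  # 0.0 -- a nonzero vector with zero seminorm
```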
The subject of our investigations in this paper will be the following dynamical system, for which we will show that it asymptotically approaches the set of solutions of the primal-dual pair of optimization problems (1)–(3)
(4) 
where , , and and .
One of the motivations for the study of this dynamical system comes from the fact that, as we will see in Remark 1, it provides through explicit time discretization a numerical algorithm which is a combination of the linearized proximal method of multipliers and the proximal ADMM algorithm.
In the next section we will show the existence and uniqueness of strong global solutions for the dynamical system (4) in the framework of the Cauchy–Lipschitz Theorem. In Section 3 we will prove some technical results, which will play an important role in the asymptotic analysis. In Section 4 we will investigate the asymptotic behaviour of the trajectories as time tends to infinity. By carrying out a Lyapunov analysis and by relying on the continuous variant of the Opial Lemma, we are able to prove that the trajectories generated by (4) asymptotically converge to a saddle point of the Lagrangian . Furthermore, we provide convergence rates of for the violation of the feasibility condition by ergodic trajectories and for the convergence of the objective function along these ergodic trajectories to its minimal value.
The approach of optimization problems by dynamical systems has a long tradition. Crandall and Pazy considered in [20] dynamical systems governed by subdifferential operators (and, more generally, by maximally monotone operators) in Hilbert spaces, addressed questions like the existence and uniqueness of solution trajectories, and related the latter to the theory of semigroups of nonlinear contractions. Brézis [14] studied the asymptotic behaviour of the trajectories of dynamical systems governed by convex subdifferentials, and Bruck carried out in [15] a similar analysis for maximally monotone operators. Dynamical systems defined via resolvent/proximal evaluations of the governing operators have enjoyed much attention in recent years, as explicit time discretization turns them into relaxed versions of standard numerical algorithms, with high flexibility and good numerical performance. Abbas and Attouch introduced in [1] a forward-backward dynamical system, by extending to more general optimization problems an approach proposed by Antipin in [5] and Bolte in [10] for a gradient-projection dynamical system associated to the minimization of a smooth convex function over a convex closed set. Implicit dynamical systems were also considered in [13] in the context of monotone inclusion problems. A dynamical system of forward-backward-forward type was considered in [7], while a dynamical system of Douglas–Rachford type was recently introduced in [21].
It is important to notice that the approaches mentioned above were introduced in connection with the study of “simple” monotone inclusion and convex minimization problems. They rely on straightforward splitting strategies and cannot be efficiently used when addressing structured minimization problems like (1), which need to be approached from both a primal and a dual perspective and thus require tools and techniques from convex duality theory. The dynamical approach we introduce and investigate in this paper is, to our knowledge, the first meant to address structured convex minimization problems in the spirit of the full splitting paradigm.
Remark 1.
The explicit discretization of (5) with respect to the time variable with constant step size yields the iterative scheme
By convex subdifferential calculus, one can easily see that this can be for every equivalently written as
and, further, as
Similarly, (6) leads for every to
which is nothing else than
Here, and are two operator sequences in and , respectively.
Thus the dynamical system (4) leads through explicit time discretization to a numerical algorithm, which, for a starting point , generates a sequence for every as follows
(7) 
The algorithm (7) is a combination of the linearized proximal method of multipliers and the proximal ADMM algorithm.
Indeed, in the case when , (7) becomes the proximal ADMM algorithm with variable metrics from [8] (see also [12]). If, in addition, and the operator sequences and are constant, then (7) becomes the proximal ADMM algorithm investigated in [25, Section 3.2] (see also [23]). It is known that the proximal ADMM algorithm can be seen as a generalization of the full splitting primal-dual algorithms of Chambolle–Pock (see [16]) and Condat–Vũ (see [19, 27]).
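The flavour of such an ADMM scheme can be conveyed on a toy problem. The sketch below is our own illustration, not algorithm (7) itself (it omits the variable metrics and the linear operator): plain ADMM applied to a one-norm regularized least-squares problem whose minimizer is known in closed form via soft-thresholding.

```python
# Illustrative ADMM sketch for min_x 0.5*||x - b||^2 + lam*||x||_1,
# split as x = z (all names and parameter values are our own choices).

def soft(v, t):
    """Componentwise soft-thresholding: prox of t*||.||_1."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def admm(b, lam, rho=1.0, iters=200):
    n = len(b)
    x, z, u = [0.0] * n, [0.0] * n, [0.0] * n
    for _ in range(iters):
        # x-update: minimize 0.5*||x-b||^2 + (rho/2)*||x - z + u||^2 (closed form)
        x = [(b[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: prox of (lam/rho)*||.||_1
        z = soft([x[i] + u[i] for i in range(n)], lam / rho)
        # scaled dual update
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return x

b, lam = [2.0, -0.3, 0.5], 0.5
x = admm(b, lam)
target = soft(b, lam)  # closed-form minimizer: [1.5, 0.0, 0.0]
print(max(abs(x[i] - target[i]) for i in range(3)) < 1e-6)  # True
```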
In the following remark we provide a particular choice for the linear maps and , which transforms (4) into a dynamical system of primal-dual type formulated in the spirit of the full splitting paradigm.
Remark 2.
For every , define
where is such that .
Let be fixed. In this particular setting, (5) is equivalent to
and further to
In other words,
where
denotes the proximal point operator of a proper, convex and lower semicontinuous function .
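For a concrete instance of the proximal point operator, the sketch below (our illustration, not taken from the paper) uses the well-known closed form of the prox of a scaled absolute value, soft-thresholding, and verifies it against a brute-force minimization of the prox objective.

```python
# Illustrative sketch: prox of phi(u) = lam*|u| is
# prox(x) = sign(x) * max(|x| - lam, 0)   ("soft-thresholding").

def prox_abs(x, lam):
    return (1.0 if x >= 0 else -1.0) * max(abs(x) - lam, 0.0)

def prox_brute(x, lam, lo=-5.0, hi=5.0, steps=100001):
    """Minimize u -> 0.5*(u - x)^2 + lam*|u| on a fine grid."""
    h = (hi - lo) / (steps - 1)
    grid = [lo + k * h for k in range(steps)]
    return min(grid, key=lambda u: 0.5 * (u - x) ** 2 + lam * abs(u))

for x in (-2.0, -0.3, 0.0, 0.7, 3.0):
    assert abs(prox_abs(x, 1.0) - prox_brute(x, 1.0)) < 1e-3
print("soft-thresholding matches brute force")
```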
On the other hand, relation (6) is equivalent to
hence,
This is further equivalent to
and further to
In other words,
Consequently, in this particular setting, the dynamical system (4) can be equivalently written as
(8) 
Let us also mention that when and the dynamical system (8) reads
(9) 
The explicit time discretization of (9) leads to a numerical algorithm, which, for a starting point , generates the sequence for every as follows
(10) 
By substituting in the first equation of (10) the term by , which is allowed according to the last equation, one can easily see that (10) is equivalent to the following numerical algorithm, which, for a starting point , generates the sequence for every as follows
(11) 
For for every , (11) is nothing else than the primal-dual algorithm proposed by Chambolle and Pock in [16].
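As a hedged illustration of the Chambolle–Pock iteration referenced here (not algorithm (11) itself: we take the linear operator to be the identity, and the step sizes and test problem are our own choices), the sketch below runs the primal-dual iteration on a one-norm regularized least-squares problem with a closed-form minimizer.

```python
# Illustrative Chambolle-Pock sketch for min_x G(x) + F(Kx) with
# G = lam*||.||_1, F = 0.5*||. - b||^2 and K = identity, so the
# minimizer is soft-thresholding of b.

def soft(v, t):
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def chambolle_pock(b, lam, tau=0.5, sigma=0.5, iters=2000):
    n = len(b)
    x, xbar, y = [0.0] * n, [0.0] * n, [0.0] * n
    for _ in range(iters):
        # dual step: y = prox_{sigma F*}(y + sigma*K xbar); for this F,
        # prox_{sigma F*}(v) = (v - sigma*b)/(1 + sigma)
        y = [(y[i] + sigma * xbar[i] - sigma * b[i]) / (1.0 + sigma) for i in range(n)]
        # primal step: x = prox_{tau G}(x - tau*K^T y)
        x_new = soft([x[i] - tau * y[i] for i in range(n)], tau * lam)
        # extrapolation with theta = 1
        xbar = [2.0 * x_new[i] - x[i] for i in range(n)]
        x = x_new
    return x

b, lam = [2.0, -0.3, 0.5], 0.5
x = chambolle_pock(b, lam)
print(max(abs(x[i] - t) for i, t in enumerate(soft(b, lam))) < 1e-3)  # True
```

The step sizes satisfy the usual requirement tau*sigma*||K||^2 < 1, which here reduces to tau*sigma < 1.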
Example 1.
In this example we will illustrate via some numerical experiments the way in which the parameters and may influence the asymptotic convergence of the primal and dual trajectories. In this scope, we considered the following primal optimization problem
(12) 
which is in fact problem (1) written in the following particular setting: , , , , , for every , and , . One can easily see that is the unique optimal solution of (12) and that
(13) 
is the Fenchel dual problem of (12). This means that every feasible element of (13) is a dual optimal solution.
We considered the dynamical system (8) attached to the primal-dual pair (12)–(13) with starting points , and in the case when for every is a constant function. In order to solve the resulting dynamical system we used the Matlab function ode15s and, to this end, we reformulated it as
where
and
is defined as
Notice that
where denotes the projection operator on a convex and closed set .
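Recall that the projection onto a closed convex set is the proximal operator of its indicator function; for a box it is a componentwise clip, as the following small sketch (our illustration) shows.

```python
# Illustrative sketch: projection onto the box C = [lo, hi]^n is a
# componentwise clip of the input vector.

def project_box(x, lo, hi):
    return [min(max(xi, lo), hi) for xi in x]

print(project_box([2.0, -3.0, 0.25], -1.0, 1.0))  # [1.0, -1.0, 0.25]
```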
As we will see later in Theorem 12, the asymptotic convergence of the trajectories as time tends to infinity can be proved when . Since , we considered for three different choices, namely , and . The primal and the dual trajectories generated by the dynamical system for each of these three choices are represented in Figures 1, 2 and 3, respectively. The first row of each figure shows the primal trajectories for and , while the second row shows the dual trajectories for the same choices of the parameter .
One can see that the parameter plays a regularizing role in the dynamical system. Namely, in all three figures, thus somehow independently of the choice of the parameters and , the primal trajectories, which approach the unique primal optimal solution, behave more stably when gets closer to . For the dual trajectories we can observe the reverse phenomenon. Namely, in all three figures, thus also independently of the choice of the other parameters, the dual trajectories, which approach a dual optimal solution, are more stable when gets closer to .
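The numerical experiment above can be mimicked in miniature without an ODE suite. The sketch below is our own simplified illustration, not the paper's system (8): an explicit Euler discretization of the one-dimensional proximal-gradient flow x'(t) = prox_{s g}(x(t) - s∇f(x(t))) - x(t) for f(x) = 0.5(x - b)^2 and g = lam*|.| (all parameter values ours); its trajectory approaches the minimizer of f + g, given by soft-thresholding of b.

```python
# Illustrative Euler discretization of a proximal-gradient flow
# (a simplified stand-in for solving such systems numerically).

def prox_abs(x, t):
    return (1.0 if x >= 0 else -1.0) * max(abs(x) - t, 0.0)

def euler_flow(b, lam, s=0.5, h=0.1, steps=2000):
    x = 0.0  # starting point of the trajectory
    for _ in range(steps):
        # x_{k+1} = x_k + h * ( prox_{s*g}(x_k - s*(x_k - b)) - x_k )
        x = x + h * (prox_abs(x - s * (x - b), s * lam) - x)
    return x

b, lam = 2.0, 0.5
# The minimizer of 0.5*(x - b)^2 + lam*|x| is soft(b, lam) = 1.5.
print(abs(euler_flow(b, lam) - 1.5) < 1e-6)  # True
```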
Notations.
The following two functions will play an important role in the forthcoming analysis
and
With these two notations, the dynamical system (4) can be rewritten as
(14) 
Let be fixed. The function is proper, convex and lower semicontinuous, hence is proper, strongly convex and lower semicontinuous for every . This allows us to use the equality sign in the second relation of (14). On the other hand, a sufficient condition which guarantees that the function , which is proper and lower semicontinuous, is strongly convex is that there exists such that . This actually ensures that is proper, strongly convex and lower semicontinuous for every .
This means that if the assumption
holds, then we can also use the equality sign in the first relation of (14). It is easy to see that, if holds, then is strongly monotone for every . In other words, for every , all and all we have
Notice that, since and for every , is fulfilled if
(15) 
or if
(16) 
Notice also that, if is a finite dimensional Hilbert space, then (16), which is independent of , is nothing else than saying that is positive definite or, equivalently, that is injective.
Let be the unit sphere of . Assumption is fulfilled if and only if for every . In this case we can take for every .
2 Existence and uniqueness of the trajectories
In this section we will investigate the existence and uniqueness of the trajectories generated by (4). We start by recalling the definition of a locally absolutely continuous map.
Definition 1.
A function is said to be locally absolutely continuous, if it is absolutely continuous on every interval ; that is, for every there exists an integrable function such that
Remark 3.
(a) Every absolutely continuous function is differentiable almost everywhere, its derivative coincides with its distributional derivative almost everywhere and one can recover the function from its derivative by the above integration formula.
(b) Let be and an absolutely continuous function. This is equivalent to (see [6, 2]): for every there exists such that for any finite family of intervals the following property holds:
From this characterization it is easy to see that, if is Lipschitz continuous with , then the function is absolutely continuous, too. This means that is differentiable almost everywhere and holds almost everywhere.
The following definition specifies which type of solutions we consider in the analysis of the dynamical system (4).
Definition 2.
Let , , , and and . We say that the function is a strong global solution of (4) if the following properties are satisfied:

the functions are locally absolutely continuous;

for almost every

The following results will be useful in the proof of the existence and uniqueness theorem.
Lemma 1.
Assume that holds. Then, for every fixed , the operator
is Lipschitz continuous.
Proof.
Let be fixed and . By subdifferential calculus we obtain that
and
Using that, due to , is strongly monotone, we get