## Abstract

A small improvement in the structure of a material can save the manufacturer a great deal of money. Free material design can be formulated as an optimization problem. However, due to its large scale, second-order methods cannot solve the free material design problem at reasonable sizes. We formulate the free material optimization (FMO) problem in a saddle-point form in which the inverse of the stiffness matrix A(E) in the constraint is eliminated. The order N of A(E) is generally large. This is the first formulation of FMO without the inverse of the stiffness matrix. We apply the primal-dual subgradient method [17] to solve the restricted saddle-point formulation. This is the first gradient-type method for FMO. Each iteration of our algorithm takes an order of magnitude fewer floating-point operations and less auxiliary vector storage than formulations involving the inverse of A(E). To solve the problem, we develop a closed-form solution to a semidefinite least-squares problem and an efficient parameter update scheme for the gradient method, both included in the appendix. We also approximate a solution to the bounded Lagrangian dual problem. The problem is decomposed into m small problems, each having a single unknown matrix of order k (3 or 6), and these can be solved in parallel. The iteration bound of our algorithm is optimal for general subgradient schemes. Finally, we present promising numerical results.

Keywords: fast gradient method, Nesterov’s primal-dual subgradient method, free material optimization, large-scale problems, first-order method, saddle-point, Lagrangian, complexity, duality, constrained least squares.

AMS 90C90, 90C06, 90C25, 90C30, 90C47, 9008

## 1 Introduction

The approach of Free Material Optimization (FMO) optimizes the structure of a material while the distribution of the material and the material itself can be freely varied. FMO has been used to improve the overall material arrangement in airframe design (www.plato-n.org). The fundamentals of FMO were introduced in [3, 19], and the model was further developed in [2, 24], among others. In the model, the elastic body of the material under consideration is represented as a bounded domain with a Lipschitzian boundary in a two- or three-dimensional Euclidean space, depending on the design requirements. For computational purposes, the domain is discretized into m finite elements, so that all points in the same element are considered to have the same properties.

Let u(x) denote the displacement vector of the body at point x under load. Denote the (small-)strain tensor as:

  e_{ij}(u(x)) \stackrel{\mathrm{def}}{=} \frac{1}{2}\left(\frac{\partial u(x)_i}{\partial x_j}+\frac{\partial u(x)_j}{\partial x_i}\right).

Let σ(x) (with components σ_{ij}(x)) denote the stress tensor. The system is assumed to follow Hooke’s law—the stress is a linear function of the strain:

  \sigma_{ij}(x)=E_{ijkl}(x)\,e_{kl}(u(x))\quad\text{(in tensor notation)},

where E(x) is the (plane-stress) elasticity tensor of order 4, which maps the strain to the stress tensor. In matrix form, E(x) measures the degree of deformation of a material under external loads and is a symmetric positive semidefinite matrix of order 3 for the 2-dimensional and of order 6 for the 3-dimensional material design problem. The diagonal elements of E(x) measure the stiffness of the material at x in the coordinate directions. Hence the trace of E(x) is used to measure the cost (resource used) of a material in the model.

Denote by I_k the identity matrix of order k, and by S_+^{mk} the direct product of m cones of symmetric positive semidefinite k×k matrices:

  S_+^{mk}=\underbrace{S_+^k\times\cdots\times S_+^k}_{m\ \text{times}}.

For a symmetric matrix U, let λ_min(U) denote its smallest eigenvalue.

Let E_i denote the elasticity tensor of order k for the i-th element. The E_i’s are considered to be constant on each element but can be different for different elements, and they are the design variables of the FMO model:

  E=(E_1,\ldots,E_m),\quad E_i\succeq 0,\quad i=1,\ldots,m.

The design problem is to find a structure that has low ‘cost’ (the tensors E_i having small trace) and is stable under given multiple independent loads (forces). There are several formulations of the FMO problem, depending on the design needs. This paper focuses on the minimum-cost FMO problem, which is to design a material structure that can withstand a whole given set of loads in the worst-case scenario while the trace of E is minimal. Below we describe the model based on [11].

The “cost”—stiffness of the material—is measured by the trace of E_i. For each i, tr E_i is bounded below to avoid singularity in the FMO model. The constraints for the point-wise stiffness upper and lower bounds are:

  \operatorname{tr}E_i\le\rho_u^{(i)},\qquad \operatorname{tr}E_i\ge\rho_l^{(i)}.

From the engineering literature, the dynamic stiffness of a structure can be improved by raising its fundamental eigenfrequency. Thus we impose a lower bound r on the eigenvalues:

 λmin(E)≥r.

Let n be the number of nodes (vertices of the elements). Let n_g denote the number of Gauss integration points in each element. In every element, the displacement vector u(x) is approximated by a continuous function which is linear in every coordinate:

  u(x)=\sum_{i=1}^{n}u_i\,\vartheta_i(x),

where u_i is the value of u at the i-th node, and ϑ_i is the basis function associated with the i-th node. At each integration point, matrices of partial derivatives of the basis functions are defined.

For i = 1,…,m and k = 1,…,n_g, let B_{i,k} be the block matrix whose blocks are these derivative matrices evaluated at the k-th integration point of the i-th element and zero otherwise. The full dimension of B_{i,k} is 3 × N for the 2-dimensional case and 6 × N for the 3-dimensional case.

Let A(E) denote the stiffness matrix relating the forces to the displacements, and let A_i(E) denote the element stiffness matrices:

  A(E)\stackrel{\mathrm{def}}{=}\sum_{i=1}^{m}A_i(E),\qquad A_i(E)=\sum_{k=1}^{n_g}B_{i,k}^{\top}E_i B_{i,k}.
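As a sanity check of this assembly, a dense numpy sketch (toy shapes only — m = 2 elements, k = 3, N = 4, one Gauss point per element; in practice the B_{i,k} are sparse and never formed densely):

```python
import numpy as np

def assemble_stiffness(E_blocks, B_blocks):
    """Assemble A(E) = sum_i sum_k B_{i,k}^T E_i B_{i,k} (dense toy version)."""
    N = B_blocks[0][0].shape[1]
    A = np.zeros((N, N))
    for Ei, Bi in zip(E_blocks, B_blocks):
        for Bik in Bi:                 # loop over Gauss points of element i
            A += Bik.T @ Ei @ Bik      # each term is PSD when E_i is PSD
    return A

rng = np.random.default_rng(0)
E = [np.eye(3), 2.0 * np.eye(3)]                       # E_i PSD, k = 3
B = [[rng.standard_normal((3, 4))] for _ in range(2)]  # one B_{i,k} per element
A = assemble_stiffness(E, B)
```

Since each E_i is positive semidefinite, the assembled A(E) is symmetric positive semidefinite, matching its role as a stiffness matrix.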

Since the material obeys Hooke’s law, the forces (loads), denoted f_j, are linearly related to the displacement vectors u_j:

  f_j=A(E)u_j,\qquad j=1,\ldots,L. (1)

The system is in equilibrium for a given E if outer and inner forces balance each other. The equilibrium is measured by the compliances of the system: the smaller the compliance, the more rigid the structure with respect to the loads. The compliance can be represented as:

  f_j^{\top}u_j.

In the minimum-cost FMO model, an upper bound γ is imposed on the compliances. Further, in view of equation (1), we have

  \langle A(E)^{-1}f_j,f_j\rangle\le\gamma,\qquad j=1,\ldots,L.

In summary, with given loads f_1,…,f_L, imposed upper and lower bounds ρ_u^{(i)} and ρ_l^{(i)}, i = 1,…,m, eigenvalue lower bound r, and compliance upper bound γ, the minimum-cost multiple-load material design problem is the following:

  \begin{array}{rl}
  \min\limits_{E\in S_+^{mk}} & \sum_{i=1}^{m}\langle I_k,E_i\rangle\\[2pt]
  \mathrm{s.t.} & \rho_l^{(i)}\le\langle I_k,E_i\rangle\le\rho_u^{(i)},\quad i=1,\ldots,m,\\[2pt]
  & \langle A(E)^{-1}f_j,f_j\rangle\le\gamma,\quad j=1,\ldots,L,\\[2pt]
  & \lambda_{\min}(E)\ge r.
  \end{array} (2)

Some optimization approaches have been applied to FMO; for instance, Zowe et al. [24] formulate the multiple-load FMO as a convex program. They propose penalty/barrier multiplier methods and interior-point methods for the problem. Ben-Tal et al. [2] consider the bounded-trace minimum-compliance multiple-load FMO problem. They formulate the problem as a semidefinite program and solve it by an interior-point method. Stingl et al. [21] solve the minimum-compliance multiple-load FMO problem by a sequential semidefinite programming algorithm. Weldeyesus and Stolpe [22] propose a primal-dual interior-point method for several equivalent FMO formulations. Stingl et al. [20] study the minimum-compliance single-load FMO problem with a vibration constraint and propose an approach based on nonlinear semidefinite low-rank approximation of the semidefinite dual. Haslinger et al. [8] extend the original problem statement by a class of generic constraints. Czarnecki and Lewiński [5] deal with minimization of the weighted sum of compliances related to non-simultaneously applied load cases. All of these are second-order methods. To our knowledge, no first-order methods have been applied to FMO.

Second-order methods exploit the information of Hessians in addition to gradients and function values. Thus, compared with first-order methods, second-order methods generally converge faster and are more accurate; on the other hand, first-order methods do not require forming, storing, or inverting Hessians and thus can be applied to large-scale problems. For certain structured problems with bounded simple feasible sets, Nesterov [13] showed that the complexity of fast gradient methods is an order of magnitude lower than the theoretical lower complexity bound of gradient-type methods for the black-box oracle model. Since that work, many papers on fast gradient-type methods have appeared, such as [12, 17, 16, 18, 14, 6, 4, 15, 23].

However, not every real-world problem is suitable for second-order methods or fast gradient-type methods; for instance, the structure of a problem may be too complex for the interior-point method or the smoothing technique to apply. The minimum-weight FMO model (2) is such a case. In the model, although the matrices B_{i,k} are sparse, A(E) generally is not. The number m of elements is at least in the thousands, and N is smaller than it only by a constant factor. To roughly measure the amount of work per iteration, we use flops, i.e., floating-point operations, including arithmetic operations, comparisons, and exchanges. Storing A(E) or its Cholesky factor takes space quadratic in N, and evaluating ⟨A(E)^{-1}f_j, f_j⟩ via factorization takes a number of flops cubic in N. Hence it is difficult to manage model (2) at reasonable sizes by second-order methods, since second-order methods work with the Hessian of the problem, whose size is at least the square of the number of variables—and the variables of model (2) are m matrices of order k. In addition, the constraints of model (2) are not simple, which prevents us from applying the usual gradient-projection type methods, because it is not easy to project onto the feasible set.

In this paper, we reformulate model (2) as a saddle-point problem and apply the primal-dual subgradient method [17] to it. The advantage of our formulation is that neither the inverse nor the Cholesky factorization of A(E) needs to be calculated, which reduces the computational cost of each iteration by an order of magnitude.

The traditional subgradient method for minimizing a nonsmooth convex function F over the Euclidean space employs a pre-chosen sequence of step sizes {λ_k} satisfying the divergent-series rule:

  \lambda_k>0,\quad \lambda_k\to 0,\quad \sum_{k=0}^{\infty}\lambda_k=\infty.

The iterates are generated as follows:

  g_k\in\partial F(x_k),\qquad x_{k+1}=x_k-\lambda_k g_k,\quad k\ge 0.
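A minimal illustration of this scheme, with entirely hypothetical choices: minimizing F(x) = |x| with the divergent-series steps λ_k = 1/(k+1).

```python
import numpy as np

# Classical subgradient scheme for the nonsmooth convex F(x) = |x|.
x = 5.0
for k in range(200):
    g = np.sign(x)            # a valid element of the subdifferential of |.| at x
    lam = 1.0 / (k + 1)       # divergent series: lam_k -> 0, sum lam_k = infinity
    x = x - lam * g
```

The iterate oscillates around the minimizer 0 with amplitude shrinking like the current step size, which is exactly why vanishing steps are needed for convergence.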

In the traditional subgradient method, new subgradients enter the model with decreasing weights, which contradicts the general principle of iterative schemes—new information is more important than old information. But the vanishing of the steps is necessary for the convergence of the iterates {x_k}.

The primal-dual subgradient method [17] associates the primal minimization sequence with a master process in the dual space; it does not have the drawback of diminishing step sizes in the dual space, and the method is proven to be optimal for saddle-point problems, nonsmooth convex minimization, minimax problems, variational inequalities, and stochastic optimization. Let Q be a closed convex set in a finite-dimensional real vector space equipped with a norm ‖·‖, with the dual space carrying the corresponding dual norm. Let d(x) be a prox-function of Q with convexity parameter σ > 0: for all x, y ∈ Q and α ∈ [0, 1],

  d(\alpha x+(1-\alpha)y)\le\alpha d(x)+(1-\alpha)d(y)-\tfrac{\sigma}{2}\alpha(1-\alpha)\|x-y\|^2.

Let g be a function mapping Q to the dual space. For instance, for a convex minimization problem, g(x) can be a subgradient of the objective function at x. The generic scheme of dual averaging (DA-scheme) [17] works as below:

Initialization:

Set s_0 = 0. Choose x_0 ∈ Q, β_0 > 0.

Iteration

(t ≥ 0):

1. Compute g_t = g(x_t).

2. Choose α_t > 0. Set s_{t+1} = s_t + α_t g_t.

3. Choose β_{t+1} ≥ β_t. Set x_{t+1} = argmin_{x∈Q} {⟨s_{t+1}, x⟩ + β_{t+1} d(x)}.

Let

  \hat\beta_0=\hat\beta_1,\qquad \hat\beta_{i+1}=\hat\beta_i+\frac{1}{\hat\beta_i},\quad i\ge 1.

The scheme has two main variants: simple averages, where α_t ≡ 1 and β_t = c·β̂_t with a constant c > 0, and weighted averages, where α_t = 1/‖g_t‖_* and β_t = c·β̂_t with a constant c > 0.
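A toy instance of simple dual averages, with hypothetical choices: minimize |x| over [-1, 1] with prox-function d(x) = x²/2 (so σ = 1), for which the Step-3 minimizer is a clipped scaling of −s.

```python
import numpy as np

def beta_hat(t):
    """hat-beta recursion: hat_beta_1 = 1, hat_beta_{i+1} = hat_beta_i + 1/hat_beta_i."""
    b = 1.0
    for _ in range(t - 1):
        b += 1.0 / b
    return b

x, s = 0.9, 0.0
for t in range(50):
    g = np.sign(x)                 # subgradient of |x| at x
    s += 1.0 * g                   # alpha_t = 1 (simple averages)
    beta = 1.0 * beta_hat(t + 1)   # beta_t = c * hat_beta_t with c = 1
    # x_{t+1} = argmin_{|x| <= 1} { s*x + beta*x^2/2 } = clip(-s/beta, -1, 1)
    x = float(np.clip(-s / beta, -1.0, 1.0))
```

Unlike the classical scheme, the accumulated dual vector s enters with non-decreasing weight, and the iterate settles at the minimizer 0.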

There are some other gradient methods for saddle-point problems. In [4], Chambolle and Pock study a first-order primal-dual algorithm for a class of saddle-point problems over two finite-dimensional real vector spaces:

  \min_{x}\max_{y}\ \langle Kx,y\rangle+G(x)-T^*(y),

where K is a linear operator and G and T^* are proper convex, lower-semicontinuous functions. That algorithm, as well as the classical Arrow–Hurwicz method [1] and its variants for saddle-point problems, is not applicable to our FMO formulation, because in our formulation the coupling between the two spaces is nonlinear. Nemirovski’s prox-method [12] reduces the problem of approximating a saddle-point of a function to that of solving the associated variational inequality by a prox-method. The approach is not applicable to our FMO formulation either, because the structure of our formulation is not simple enough and its objective function does not have the required smoothness.

In our approach, the inverse of A(E) in model (2) does not need to be calculated, which decreases the computational cost per iteration by an order of magnitude. Solutions of the primal and dual subproblems at each iteration can be written in closed form, and the auxiliary storage space is linear in the problem dimension. Furthermore, the primal subproblem decouples into m small problems that can be solved in parallel, each involving a single unknown matrix of order k and solvable in a number of flops depending only on k. Thus, it is possible to work on large-scale problems, in contrast to second-order methods, which deal with the Hessian over all the matrix variables plus additional constraints on the matrices. To establish the efficiency of the algorithm, we give iteration complexity bounds for both the simple and the weighted dual averaging schemes. The complexity bounds are optimal for general subgradient methods. Numerical experiments are described at the end of the paper.

The remainder of the paper is organized as follows. In Section 2, we give our saddle-point form of the problem. In Section 3, we show that a solution to our bounded Lagrangian form either solves the original problem or gives an approximate solution of the original problem. In Section 4, we present our algorithm. In Section 5, we give closed-form solutions to the subproblems. In Section 6, we derive complexity bounds of our algorithm. In Section 7, we describe and analyze a penalized Lagrangian approach. In Section 8, we present some computational examples of our algorithm. In the Appendices, we give a closed-form solution of a related matrix projection problem and an update scheme for the parameters of the algorithm.

## 2 Saddle-Point Form

We first rewrite problem (2) in a saddle-point form. Denote

  Q_k^{(i)}\stackrel{\mathrm{def}}{=}\{U\in S^k:\ \lambda_{\min}(U)\ge r,\ \rho_l^{(i)}\le\langle I_k,U\rangle\le\rho_u^{(i)}\}. (3)

The second group of constraints in (2) can be represented in max form:

  \gamma\ge\langle A(E)^{-1}f_j,f_j\rangle=\max_{y_j\in\mathbb{R}^N}\{2\langle f_j,y_j\rangle-\langle A(E)y_j,y_j\rangle\}.
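For completeness, the max form can be verified by first-order optimality, assuming A(E) is positive definite:

```latex
\nabla_{y_j}\bigl[\,2\langle f_j,y_j\rangle-\langle A(E)y_j,y_j\rangle\,\bigr]
   = 2f_j-2A(E)y_j = 0
   \;\Longrightarrow\; y_j^{*}=A(E)^{-1}f_j .
```

Substituting y_j^* back gives 2⟨f_j, A(E)^{-1}f_j⟩ − ⟨A(E)^{-1}f_j, f_j⟩ = ⟨A(E)^{-1}f_j, f_j⟩, i.e., the compliance.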

Assume that problem (2) satisfies some constraint qualification, such as the Slater condition—there exists a feasible E such that ⟨A(E)^{-1}f_j, f_j⟩ < γ for j = 1,…,L. Then Lagrangian multipliers exist, and we can solve the Lagrangian of problem (2) instead. Thus, problem (2) can be written as follows:

  \begin{aligned}
  &\min_{\substack{E_i\in Q_k^{(i)}\\ i=1,\ldots,m}}\ \max_{\substack{y_j\in\mathbb{R}^N,\ \lambda_j\ge 0\\ j=1,\ldots,L}}\ \Bigl\{\sum_{i=1}^m\langle I_k,E_i\rangle+\sum_{j=1}^L\lambda_j\bigl[2\langle f_j,y_j\rangle-\langle A(E)y_j,y_j\rangle-\gamma\bigr]\Bigr\}\\
  &\ \overset{\lambda_j y_j\to x_j}{=}\ \min_{\substack{E_i\in Q_k^{(i)}\\ i=1,\ldots,m}}\Bigl\{\sum_{i=1}^m\langle I_k,E_i\rangle+\max\Bigl(0,\ \max_{\substack{x_j\in\mathbb{R}^N,\ \lambda_j>0\\ j=1,\ldots,L}}\sum_{j=1}^L\bigl[2\langle f_j,x_j\rangle-\tfrac{1}{\lambda_j}\langle A(E)x_j,x_j\rangle-\gamma\lambda_j\bigr]\Bigr)\Bigr\}\\
  &\ =\ \min_{\substack{E_i\in Q_k^{(i)}\\ i=1,\ldots,m}}\ \max_{\substack{x_j\in\mathbb{R}^N\\ j=1,\ldots,L}}\Bigl\{\sum_{i=1}^m\langle I_k,E_i\rangle+\sum_{j=1}^L 2\bigl[\langle f_j,x_j\rangle-\gamma^{1/2}\langle A(E)x_j,x_j\rangle^{1/2}\bigr]\Bigr\}.
  \end{aligned}

The dimension of the matrix A(E) is large; the first transformation eliminates the need to calculate its inverse, but it results in an objective function that is not concave in (y, λ). The second transformation makes the function concave in (x, λ). In the last step, the variable λ is eliminated to simplify the formulation.

Denote

  F(E,x)\stackrel{\mathrm{def}}{=}\sum_{i=1}^m\langle I_k,E_i\rangle+\sum_{j=1}^L 2\bigl[\langle f_j,x_j\rangle-\gamma^{1/2}\langle A(E)x_j,x_j\rangle^{1/2}\bigr].

Thus, to solve problem (2), we only need to solve

  \min_{E_i\in Q_k^{(i)},\ i=1,\ldots,m}\ \max_{x_j\in\mathbb{R}^N,\ j=1,\ldots,L}\ F(E,x). (4)

Note that F is convex in E and concave in x.

## 3 Bounded Lagrangian

We apply the primal-dual subgradient method [17] to the saddle-point formulation (4). The convergence of the algorithm requires that the iterates be uniformly bounded [17]. We therefore impose a bound η > 0 on x:

  \min_{E_i\in Q_k^{(i)},\ i=1,\ldots,m}\ \max_{\|x_j\|\le\eta,\ j=1,\ldots,L}\ F(E,x). (5)

Next we show that the primal solution of the saddle-point problem (5) is either a solution to the original problem (2) or an approximate solution in the sense that its constraint violation is bounded in terms of η and its objective value is no larger than the optimal value of (2).

Let (E^*, x^*) be a solution to the saddle-point problem (4). If η is chosen large enough that ‖x_j^*‖ ≤ η for j = 1,…,L, then (E^*, x^*) is also a solution to the bounded saddle-point form (5).

For any E, denote the index set of its violated compliance constraints as

  W_E\stackrel{\mathrm{def}}{=}\left\{1\le j\le L:\ \langle A(E)^{-1}f_j,f_j\rangle>\gamma\right\}.

Denote ρ_l = min_{1≤i≤m} ρ_l^{(i)}. We have the following results regarding our material design problem.

###### Lemma 1

Let (Ẽ, x̃) be a solution of (5). Let F^* be the optimal value of (2).

1. If ‖x̃_j‖ < η for j = 1,…,L, then Ẽ is a solution to (2).

2. Otherwise, (Ẽ, x̃) has the following properties:

1. F(Ẽ, x̃) ≤ F^*.

2. ∑_{j∈W_Ẽ} (⟨f_j, A(Ẽ)^{-1}f_j⟩^{1/2} − γ^{1/2}) ≤ (F^* − mρ_l) / (2η (r λ_min(BB^⊤))^{1/2}).

Proof: Item 1 is obvious as the constraints are non-binding.

Next, we prove item 2.

Because

  \max_{\|x_j\|\le\eta,\ j=1,\ldots,L}F(E,x)\le\max_{x_j\in\mathbb{R}^N,\ j=1,\ldots,L}F(E,x),

we have item (a). For any fixed E, the point

  x_j=\begin{cases}\dfrac{\eta}{\|A(E)^{-1}f_j\|}A(E)^{-1}f_j, & j\in W_E\\ 0, & j\notin W_E\end{cases}

is feasible to

  \max_{\|x_j\|\le\eta,\ j=1,\ldots,L}F(E,x)

with objective value

  F_x(E)\stackrel{\mathrm{def}}{=}\langle I,E\rangle+2\sum_{j\in W_E}\bigl(\langle f_j,A(E)^{-1}f_j\rangle^{1/2}-\gamma^{1/2}\bigr)\frac{\langle f_j,A(E)^{-1}f_j\rangle^{1/2}}{\|A(E)^{-1}f_j\|}\,\eta. (6)

Since

  \langle f_j,A(E)^{-1}f_j\rangle=\langle A(E)^{-1}f_j,\ A(E)(A(E)^{-1}f_j)\rangle,

we also have

  \frac{\langle f_j,A(E)^{-1}f_j\rangle^{1/2}}{\|A(E)^{-1}f_j\|}\ge\lambda_{\min}(A(E))^{1/2}\ge\bigl(r\,\lambda_{\min}(BB^{\top})\bigr)^{1/2},\qquad\text{and}\qquad\langle I,E\rangle\ge m\rho_l.

Therefore,

  F(\tilde E,\tilde x)\ge m\rho_l+2\bigl(r\,\lambda_{\min}(BB^{\top})\bigr)^{1/2}\,\eta\sum_{j\in W_{\tilde E}}\bigl(\langle f_j,A(\tilde E)^{-1}f_j\rangle^{1/2}-\gamma^{1/2}\bigr).

By item (a) of the lemma, we have

 F(~E,~x)≤F∗.

Item (b) then follows.

Note that as η → ∞, the set of saddle-points of (5) approaches that of the original problem.

## 4 The Algorithm

In this part, we describe how to apply the primal-dual subgradient method [17] to the saddle-point reformulation of model (2). We have developed a parameter update scheme for the algorithm, which is included in the Appendix.

For a matrix U, let the vector λ(U) denote the eigenvalues of U; let λ_min(U) be the smallest eigenvalue of U. The subgradients of F at (E, x) are: for i = 1,…,m, j = 1,…,L,

  g_{E_i}(E,x)=I_k-\sqrt{\gamma}\sum_{j\in R}\langle A(E)x_j,x_j\rangle^{-1/2}\Bigl(\sum_{l=1}^{n_g}B_{i,l}x_jx_j^{\top}B_{i,l}^{\top}\Bigr),\qquad\text{where } R\stackrel{\mathrm{def}}{=}\{1\le l\le L:\ \langle A(E)x_l,x_l\rangle>0\};
  g_{x_j}(E,x)=\begin{cases}2f_j-2\sqrt{\gamma}\,\langle A(E)x_j,x_j\rangle^{-1/2}A(E)x_j, & \langle A(E)x_j,x_j\rangle>0\\ \{2f_j-2\sqrt{\gamma}\,A(E)y:\ \langle A(E)y,y\rangle=1\}, & \langle A(E)x_j,x_j\rangle=0.\end{cases}

For the primal space, we choose the standard Frobenius norm:

  \|E\|_F^2=\sum_{i=1}^m\|E_i\|_F^2=\operatorname{tr}(E^2),\qquad d_E(E)=\frac12\sum_{i=1}^m\|E_i-rI_k\|_F^2.

For the dual space, we choose the standard Euclidean norm:

  \|x\|_2^2=\sum_{j=1}^L\|x_j\|_2^2=x^{\top}x,\qquad d_x(x)=\frac12\sum_{j=1}^L\|x_j\|_2^2.

Their dual norms are denoted ‖·‖_{F,*} and ‖·‖_{2,*}, respectively.

The set Q_k^{(i)} for E_i is defined in (3); the set for each x_j is

  Q_x=\{x_j\in\mathbb{R}^N:\ \|x_j\|\le\eta\}.

Note that F is nonsmooth. The primal-dual subgradient method [17] for the saddle-point problem (5) works as follows.

Initialization: Set s_0^E = 0, s_0^x = 0.
Choose τ ∈ (0, 1), E^{(0)} = argmin_{E_i∈Q_k^{(i)}} d_E(E), x^{(0)} = argmin_{x_j∈Q_x} d_x(x).
Iteration (t ≥ 0):
1. Compute g^{(t)}_{E_i}, g^{(t)}_{x_j}, for i = 1,…,m, j = 1,…,L.
2. Choose α_t > 0, set
  s^{E_i}_{t+1}=s^{E_i}_t+\alpha_t g^{(t)}_{E_i}\ (i=1,\ldots,m),\qquad s^{x_j}_{t+1}=s^{x_j}_t-\alpha_t g^{(t)}_{x_j}\ (j=1,\ldots,L).
3. Choose β_{t+1}, set
  E^{(t+1)}=\arg\min_{E_i\in Q_k^{(i)}}\{\langle s^{E}_{t+1},E\rangle+\beta_{t+1}\tau\,d_E(E)\},
  x^{(t+1)}=\arg\min_{x_j\in Q_x}\{\langle s^{x}_{t+1},x\rangle+\beta_{t+1}(1-\tau)\,d_x(x)\}.
Output: the weighted averages of the iterates, (Ê_t, x̂_t) = ∑_{p=0}^{t} α_p (E^{(p)}, x^{(p)}) / ∑_{p=0}^{t} α_p.

Details of a parameter update scheme for β_t are given in the Appendix.

We take

  \hat\beta_0=\hat\beta_1=1,\quad \hat\beta_{t+1}=\hat\beta_t+\frac{1}{\hat\beta_t},\ t\ge 1;\qquad \beta_t=\sigma\hat\beta_t,\ t\ge 0.

Based on different choices of α_t, there are two variants of the algorithm:

1. Method of Simple Dual Averages

We let

 αt=1,t=0,….
2. Method of Weighted Dual Averages

We let

  \alpha_t=1\Big/\Bigl(\frac{\|g^{(t)}_E\|_{F,*}^2}{\tau}+\frac{\|g^{(t)}_x\|_{2,*}^2}{1-\tau}\Bigr)^{1/2},\quad t=0,1,\ldots.

## 5 Solution to the Subproblem

In this part, we give closed-form solutions to the subproblems at each iteration of our algorithm.

Solution for x^{(t+1)}. The closed-form solution for x^{(t+1)} in Step 3 of the algorithm is derived as below.

By the Cauchy–Schwarz–Bunyakovsky inequality, for each j:

  \langle s^{x_j}_{t+1},x_j\rangle+\beta_{t+1}(1-\tau)d_{x_j}(x_j)\ge-\|s^{x_j}_{t+1}\|_{2,*}\|x_j\|_2+\frac{\beta_{t+1}(1-\tau)}{2}\|x_j\|_2^2=\frac{\beta_{t+1}(1-\tau)}{2}\Bigl(\|x_j\|_2-\frac{\|s^{x_j}_{t+1}\|_{2,*}}{\beta_{t+1}(1-\tau)}\Bigr)^{2}-\frac{\|s^{x_j}_{t+1}\|_{2,*}^2}{2\beta_{t+1}(1-\tau)},

with equality iff x_j = −c s^{x_j}_{t+1} for some c ≥ 0. Therefore,

  x^{(t+1)}_j=-\min\Bigl(\frac{\eta}{\|s^{x_j}_{t+1}\|_{2,*}},\ \frac{1}{\beta_{t+1}(1-\tau)}\Bigr)s^{x_j}_{t+1}. (7)
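The update (7) is just the unconstrained minimizer −s/(β_{t+1}(1−τ)) clipped to the ball of radius η; a small numpy sketch with hypothetical parameter values:

```python
import numpy as np

def dual_step(s, beta, tau, eta):
    """x_j^{(t+1)} = -min(eta/||s||, 1/(beta*(1-tau))) * s, as in (7)."""
    ns = np.linalg.norm(s)
    if ns == 0.0:
        return np.zeros_like(s)
    return -min(eta / ns, 1.0 / (beta * (1.0 - tau))) * s

s = np.array([3.0, 4.0])                       # ||s|| = 5
x = dual_step(s, beta=1.0, tau=0.5, eta=1.0)   # the ball radius eta binds here
```

Here η/‖s‖ = 0.2 is smaller than 1/(β(1−τ)) = 2, so x = −s/5 lies on the boundary ‖x‖ = η.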

Solution for E^{(t+1)}. For a set P, let |P| denote the cardinality of P, i.e., the number of elements in P. In Step 3 of the algorithm, computing E_i^{(t+1)} can be seen as the projection

  \min_{V\in Q_k^{(i)}}\Bigl\|V+\frac{1}{\beta_{t+1}\tau}s^{E_i}_{t+1}-rI\Bigr\|_F^2.

By Theorem 4 in the Appendix on matrix projection, we can represent E_i^{(t+1)} as follows.

For each i, let U diag(λ) U^⊤ be the eigenvalue decomposition of s^{E_i}_{t+1}, and let λ_1,…,λ_k be its eigenvalues. Define the sets

  M_0\stackrel{\mathrm{def}}{=}\{1\le l\le k:\ \lambda_l\ge 0\},\qquad \overline{M}_0\stackrel{\mathrm{def}}{=}\{1,\ldots,k\}\setminus M_0.

Then

  E^{(t+1)}_i=U\operatorname{diag}(\omega)U^{\top}, (8)

where ω is determined according to the following three cases.

1. ρ_l^{(i)} ≤ kr − (1/(β_{t+1}τ)) ∑_{l∈\overline{M}_0} λ_l ≤ ρ_u^{(i)}.

Let

  \omega_l=\begin{cases}r, & l\in M_0\\ r-\dfrac{\lambda_l}{\beta_{t+1}\tau}, & l\notin M_0.\end{cases}
2. kr − (1/(β_{t+1}τ)) ∑_{l∈\overline{M}_0} λ_l > ρ_u^{(i)}.

Then there is a partition \overline{M}_0 = P ∪ \overline{P}:

  P=\Bigl\{l\in\overline{M}_0:\ \lambda_l<\frac{\beta_{t+1}\tau(\rho_u^{(i)}-kr)+\sum_{q\in P}\lambda_q}{|P|}\Bigr\},\qquad \overline{P}=\Bigl\{l\in\overline{M}_0:\ \lambda_l\ge\frac{\beta_{t+1}\tau(\rho_u^{(i)}-kr)+\sum_{q\in P}\lambda_q}{|P|}\Bigr\}.

Let

  \omega_l=\begin{cases}r, & l\in\overline{P}\cup M_0\\ r-\dfrac{\lambda_l}{\beta_{t+1}\tau}+\dfrac{\beta_{t+1}\tau(\rho_u^{(i)}-kr)+\sum_{q\in P}\lambda_q}{\beta_{t+1}\tau|P|}, & l\in P.\end{cases}
3. kr − (1/(β_{t+1}τ)) ∑_{l∈\overline{M}_0} λ_l < ρ_l^{(i)}.

Then there is a partition M_0 = P_m ∪ \overline{P}_m:

  P_m=\Bigl\{l\in M_0:\ \frac{\rho_l^{(i)}+\frac{1}{\beta_{t+1}\tau}\sum_{j\in P_m\cup\overline{M}_0}\lambda_j-kr}{|P_m|+|\overline{M}_0|}>\frac{\lambda_l}{\beta_{t+1}\tau}\Bigr\}.

Let

  \omega_l=\begin{cases}-\dfrac{\lambda_l}{\beta_{t+1}\tau}+\dfrac{\beta_{t+1}\tau(\rho_l^{(i)}-|\overline{P}_m|r)+\sum_{q\in\overline{M}_0\cup P_m}\lambda_q}{\beta_{t+1}\tau(|\overline{M}_0|+|P_m|)}, & l\in\overline{M}_0\cup P_m\\ r, & l\in\overline{P}_m.\end{cases}

The eigenvalues in case 2 can be obtained by the following algorithm:

##### Algorithm projSyml
Step 1

(Initialization) Let λ_{σ(1)} ≤ λ_{σ(2)} ≤ ⋯ be the negative eigenvalues of s^{E_i}_{t+1}, sorted in increasing order.
Let

  P=\{\sigma(1)\},\quad T=\beta_{t+1}\tau(\rho_u^{(i)}-kr)+\lambda_{\sigma(1)},\quad q=1.
Step 2

While q < |\overline{M}_0| and λ_{σ(q+1)} < T/q, do

  P\cup\{\sigma(q+1)\}\to P,\quad T+\lambda_{\sigma(q+1)}\to T,\quad q+1\to q.
Step 3

Let

  \omega_l=\begin{cases}r, & l\in\overline{P}\cup M_0\\ r-\dfrac{\lambda_l}{\beta_{t+1}\tau}+\dfrac{T}{\beta_{t+1}\tau q}, & l\in P.\end{cases}

Similarly, the eigenvalues in case 3 can be obtained by the following algorithm:

##### Algorithm projSymg
Step 1

(Initialization) Let λ_{σ(1)} ≤ λ_{σ(2)} ≤ ⋯ be the positive eigenvalues of s^{E_i}_{t+1}, sorted in increasing order.

• If \overline{M}_0 ∪ {l : λ_l = 0} = ∅, let

  U=\{\sigma(1)\},\quad T=\beta_{t+1}\tau(\rho_l^{(i)}-kr)+\lambda_{\sigma(1)},\quad q=1.
• If \overline{M}_0 ∪ {l : λ_l = 0} ≠ ∅, let

  U=\overline{M}_0\cup\{l:\lambda_l=0\},\quad T=\beta_{t+1}\tau(\rho_l^{(i)}-kr)+\sum_{j\in\overline{M}_0}\lambda_j,\quad q=|U|.
Step 2

While q < k and λ_{σ(q+1)} < T/q, do

  U\cup\{\sigma(q+1)\}\to U,\quad T+\lambda_{\sigma(q+1)}\to T,\quad q+1\to q.
Step 3

Let

  \omega_l=\begin{cases}r, & l\in M_0\setminus U\\ r-\dfrac{\lambda_l}{\beta_{t+1}\tau}+\dfrac{T}{\beta_{t+1}\tau q}, & l\in\overline{M}_0\cup U.\end{cases}
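As a numerical cross-check of the two partition algorithms above, the common eigenvalue subproblem — project a vector λ onto {ω : ω_l ≥ r, ρ_l^{(i)} ≤ ∑_l ω_l ≤ ρ_u^{(i)}} in the Euclidean norm — can also be solved by bisection on the trace multiplier. The sketch below is an alternative to the paper’s closed-form method, not a restatement of it, and all names are ours:

```python
import numpy as np

def project_eigs(lam, r, rho_l, rho_u, iters=80):
    """Project lam onto {w : w_i >= r, rho_l <= sum(w) <= rho_u}.

    KKT conditions give w(mu) = max(r, lam - mu), with mu chosen so that
    sum(w(mu)) lands in [rho_l, rho_u]; sum(w(mu)) is nonincreasing in mu,
    so bisection applies.  Assumes k*r <= rho_l, so the set is non-empty.
    """
    lam = np.asarray(lam, float)

    def total(mu):
        return np.maximum(r, lam - mu).sum()

    if rho_l <= total(0.0) <= rho_u:       # trace bounds inactive
        return np.maximum(r, lam)
    target = rho_u if total(0.0) > rho_u else rho_l
    lo = -np.abs(lam).max() - abs(target) - 1.0
    hi = np.abs(lam).max() + abs(target) + 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if total(mid) > target:            # need a larger shift mu
            lo = mid
        else:
            hi = mid
    return np.maximum(r, lam - 0.5 * (lo + hi))

w = project_eigs([5.0, 3.0, -1.0], r=0.1, rho_l=0.5, rho_u=4.0)
```

For the toy data the unshifted trace 8.1 exceeds ρ_u = 4, so the upper trace bound is active and the projection shifts the two large eigenvalues down while clipping the negative one at r.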

## 6 Complexity of the Algorithm

To understand the complexity of the algorithm for model (2), in this part we study the duality gap and the computational cost of each iteration. By [17], it takes O(1/ε²) iterations to solve a general convex-concave saddle-point problem to absolute accuracy ε, which matches the exact lower complexity bound for this class of schemes. To give insight into how the data of the FMO model affect convergence time, we give upper bounds on the duality gap of the iterates generated by our algorithm in terms of the number of iterations and the input data in § 6.1. In § 6.2, we derive the computational cost per iteration. From the duality gap and the computational cost per iteration given in this section, one can estimate from given data how much computational effort is needed at most to approximate a solution of a problem instance of model (2) by the method proposed in this paper.

### 6.1 Iteration Bounds

By [7, Chapter 6, Proposition 2.1], for a function F(x, y) defined on X × Y, assume

• the sets X and Y are convex, closed, non-empty, and bounded;

• for any fixed x, F(x, ·) is concave and upper semicontinuous;

• for any fixed y, F(·, y) is convex and lower semicontinuous;

then the function F has at least one saddle-point.

Since the feasible sets of (5) are bounded and F is continuous and finite, by the above result we conclude that (5) has a saddle-point and a finite saddle-value. An upper bound on the duality gap is given in [17, Theorem 6]. We next express the duality gap in terms of the input data.

Define

  \|(g_E,g_x)\|_*\stackrel{\mathrm{def}}{=}\Bigl[\frac{1}{\tau}\|g_E\|_{F,*}^2+\frac{1}{1-\tau}\|g_x\|_{2,*}^2\Bigr]^{1/2}.