
A Subgradient Method for Free Material Design

Abstract

A small improvement in the structure of a material can save manufacturers a great deal of money. Free material design can be formulated as an optimization problem. However, due to its large scale, second-order methods cannot solve free material design problems of reasonable size. We formulate the free material optimization (FMO) problem in a saddle-point form in which the inverse of the stiffness matrix, which is generally large, is eliminated from the constraints. This is the first formulation of FMO that avoids this inverse. We apply the primal-dual subgradient method [17] to solve the restricted saddle-point formulation. This is the first gradient-type method for FMO. Each iteration of our algorithm takes an order of magnitude fewer floating-point operations and an order of magnitude less auxiliary vector storage than formulations involving the inverse of the stiffness matrix. To solve the problem, we develop a closed-form solution to a semidefinite least-squares problem and an efficient parameter update scheme for the gradient method, both of which are included in the appendix. We also show how to approximate a solution to the bounded Lagrangian dual problem. The problem decomposes into small problems, each having a single small (3 × 3 or 6 × 6) matrix unknown, and these can be solved in parallel. The iteration bound of our algorithm is optimal for general subgradient schemes. Finally, we present promising numerical results.

Keywords: fast gradient method, Nesterov’s primal-dual subgradient method, free material optimization, large-scale problems, first-order method, saddle-point, Lagrangian, complexity, duality, constrained least squares.

AMS subject classifications: 90C90, 90C06, 90C25, 90C30, 90C47, 90-08

1 Introduction

Free Material Optimization (FMO) optimizes the structure of a material while the distribution of the material and the material itself can be freely varied. FMO has been used to improve the overall material arrangement in airframe design (www.plato-n.org). The fundamentals of FMO were introduced in [3, 19], and the model was further developed in [2, 24], among others. In the model, the elastic body of the material under consideration is represented as a bounded domain with a Lipschitzian boundary in a two- or three-dimensional Euclidean space, depending on the design requirement. For computational purposes, the domain is discretized into finite elements so that all points in the same element are considered to have the same properties.

Let denote the displacement vector of the body at point under load. Denote the (small-)strain tensor as:

Let () denote the stress tensor. The system is assumed to follow Hooke’s law—the stress is a linear function of the strain:

where is the (plane-stress) elasticity tensor, which maps the strain to the stress tensor. The matrix measures the degree of deformation of a material under external loads and is a symmetric positive semidefinite matrix of order 3 for the two-dimensional and of order 6 for the three-dimensional material design problem. The diagonal elements measure the stiffness of the material at in the coordinate directions. Hence the trace of is used in the model to measure the cost (resources used) of a material.
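To make these objects concrete, the following Python fragment illustrates Hooke’s law and the trace cost at one point of a two-dimensional body; the isotropic plane-stress elasticity matrix and all variable names are illustrative assumptions of this sketch, not the paper’s notation.

```python
import numpy as np

# Illustrative isotropic plane-stress elasticity matrix (order 3),
# symmetric positive semidefinite; 'young' and 'poisson' are assumed
# sample material constants, not data from the paper.
young, poisson = 1.0, 0.3
c = young / (1.0 - poisson**2)
E = c * np.array([[1.0,     poisson, 0.0],
                  [poisson, 1.0,     0.0],
                  [0.0,     0.0,     (1.0 - poisson) / 2.0]])

strain = np.array([0.01, -0.003, 0.002])  # strain tensor in vector form
stress = E @ strain                       # Hooke's law: stress linear in strain
cost = np.trace(E)                        # trace of E: the material "cost"
```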

Denote by the identity matrix of order and the direct product of cones of symmetric positive semidefinite -matrices:

For a symmetric matrix , let denote .

Let denote the elasticity tensor of order for the th element . These tensors are considered constant on each element but may differ between elements; they are the design variables of the FMO model:

The design problem is to find a structure that has low ‘cost’ (elasticity tensors with small trace) and is stable under a given set of multiple independent loads (forces). There are several formulations of the FMO problem, depending on the design needs. This paper focuses on the minimum-cost FMO problem: design a material structure, of minimal total trace, that can withstand a whole given set of loads in the worst-case scenario. Below we describe the model, based on [11].

The “cost”—stiffness of the material—is measured by the trace of : . For each , is lower bounded to avoid singularity in the FMO model. The constraints for the point-wise stiffness upper and lower bounds are:

According to the engineering literature, the dynamic stiffness of a structure can be improved by raising its fundamental eigenfrequency. Thus we impose a lower bound on the eigenvalues:

Let be the number of nodes (vertices of the elements). Let denote the number of Gauss integration points in each element. In every element, the displacement vector is approximated as a continuous function which is linear in every coordinate:

where is the value of at the th node, and is the basis function associated with the th node. For , define matrices

For , let be the block matrix whose th block is evaluated at the th integration point and zero otherwise. The full dimension of is for the two-dimensional case and for the three-dimensional case.

Let denote the stiffness matrix relating the forces to the displacements, and let denote the element stiffness matrices:

Since the material obeys Hooke’s law, forces (loads) on each element, denoted as , are linearly related to the displacement vector:

(1)

The system is in equilibrium for if the outer and inner forces balance each other. The equilibrium is measured by the compliances of the system: the smaller the compliance, the more rigid the structure with respect to the loads. The compliance can be represented as:

In the minimum-cost FMO model, an upper bound is imposed on the compliances. Further in view of equation (1), we have

In summary, with given loads , imposed upper and lower bounds and , , and compliance upper bound , the minimum-cost multiple-load material design problem is the following:

(2)
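To illustrate how these pieces fit together computationally, the following sketch assembles a global stiffness matrix and evaluates a compliance, assuming the standard FMO structure $A(E) = \sum_i B_i^{\top} E_i B_i$ from the literature; the names `stiffness`, `compliance`, `Bs`, `Es`, and `f` are assumptions of this sketch rather than the paper’s elided notation.

```python
import numpy as np

def stiffness(Bs, Es):
    """Global stiffness matrix A(E) = sum_i B_i^T E_i B_i from the
    element geometry matrices B_i and the design matrices E_i."""
    n = Bs[0].shape[1]
    A = np.zeros((n, n))
    for B, E in zip(Bs, Es):
        A += B.T @ E @ B
    return A

def compliance(Bs, Es, f):
    """Compliance f^T u, where u solves the equilibrium equation
    A(E) u = f (equation (1)); assumes A(E) is nonsingular."""
    u = np.linalg.solve(stiffness(Bs, Es), f)
    return f @ u
```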

Several optimization approaches have been applied to FMO. For instance, Zowe et al. [24] formulate the multiple-load FMO as a convex program; they propose penalty/barrier multiplier methods and interior-point methods for the problem. Ben-Tal et al. [2] consider the bounded-trace minimum-compliance multiple-load FMO problem; they formulate it as a semidefinite program and solve it by an interior-point method. Stingl et al. [21] solve the minimum-compliance multiple-load FMO problem by a sequential semidefinite programming algorithm. Weldeyesus and Stolpe [22] propose a primal-dual interior-point method for several equivalent FMO formulations. Stingl et al. [20] study the minimum-compliance single-load FMO problem with a vibration constraint and propose an approach based on nonlinear semidefinite low-rank approximation of the semidefinite dual. Haslinger et al. [8] extend the original problem statement by a class of generic constraints. Czarnecki and Lewiński [5] minimize a weighted sum of compliances related to non-simultaneously applied load cases. All of these are second-order methods. To our knowledge, no first-order method has been applied to FMO.

Second-order methods exploit the information in Hessians in addition to gradients and function values. Thus, compared with first-order methods, second-order methods generally converge faster and are more accurate; on the other hand, first-order methods do not require forming, storing, or inverting the Hessian and thus can be applied to large-scale problems. For certain structured problems with bounded simple feasible sets, Nesterov [13] showed that the complexity of fast gradient methods is an order of magnitude lower than the theoretical lower complexity bound for gradient-type methods in the black-box oracle model. Following that work, many papers on fast gradient-type methods have appeared, such as [12, 17, 16, 18, 14, 6, 4, 15, 23].

However, not every real-world problem is suitable for second-order methods or fast gradient-type methods; for instance, the structure of a problem may be too complex for the interior-point method or the smoothing technique to be applied. The minimum-weight FMO model (2) is such a case. In the model, although the matrices are sparse, is generally not. The number is at least in the thousands, and is smaller than only by a constant factor. To roughly measure the amount of work per iteration, we count flops, i.e., floating-point operations such as arithmetic operations, comparisons, and exchanges. It takes a vector of length to store the matrix or its Cholesky factor in memory, and about flops to evaluate . Hence it is difficult to handle model (2) at reasonable size by second-order methods, since second-order methods work on the Hessian of the problem, whose size is at least the square of the number of variables; and the variables of model (2) are matrices. In addition, the constraints of model (2) are not simple, which prevents us from applying the usual gradient-projection-type methods, because it is not easy to project onto the feasible set.

In this paper, we reformulate model (2) as a saddle-point problem and apply the primal-dual subgradient method [17] to it. The advantage of our formulation is that neither the inverse nor the Cholesky factorization of the stiffness matrix needs to be calculated, which reduces the computational cost of each iteration accordingly.

The traditional subgradient method for minimizing a nonsmooth convex function over a Euclidean space employs a pre-chosen sequence of step sizes $\{\lambda_k\}$ satisfying the divergent-series rule:

$$\lambda_k > 0, \qquad \lambda_k \to 0, \qquad \sum_{k=0}^{\infty} \lambda_k = \infty.$$

The iterates are generated as follows:

$$x_{k+1} = x_k - \lambda_k g(x_k), \qquad g(x_k) \in \partial f(x_k).$$

In the traditional subgradient method, new subgradients enter the model with decreasing weights, which contradicts the general principle of iterative schemes—new information is more important than old information. But the vanishing of the step sizes is necessary for the convergence of the iterates.
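A minimal Python sketch of this classical scheme, assuming user-supplied oracles `f` (objective), `subgrad` (a subgradient), and `project` (Euclidean projection, the identity map for unconstrained problems); the step rule $\lambda_k = c/\sqrt{k+1}$ is one standard choice obeying the divergent-series rule.

```python
import numpy as np

def subgradient_method(x0, f, subgrad, project, c=1.0, iters=1000):
    """Projected subgradient method with divergent-series steps."""
    x = project(x0)
    x_best, f_best = x, f(x)
    for k in range(iters):
        lam = c / np.sqrt(k + 1)        # steps vanish but their sum diverges
        x = project(x - lam * subgrad(x))
        if f(x) < f_best:               # f(x_k) need not decrease monotonically
            x_best, f_best = x, f(x)
    return x_best
```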

The primal-dual subgradient method [17] associates the primal minimization sequence with a master process in the dual space; it does not have the drawback of diminishing step sizes in the dual space, and it is proven to be optimal for saddle-point problems, nonsmooth convex minimization, minimax problems, variational inequalities, and stochastic optimization. Let be a finite-dimensional real vector space equipped with a norm , and let be its dual. Let be a closed convex set, and let be a prox-function of with convexity parameter :

Let be a function mapping to . For instance, for a convex minimization problem, the function can be a subgradient of the objective function. The generic scheme of dual averaging (DA-scheme) [17] works as follows:

Initialization:

Set $s_0 = 0$. Choose $\beta_0 > 0$.

Iteration ($k \ge 0$):

  1. Compute $g_k = g(x_k)$.

  2. Choose $\lambda_k > 0$. Set $s_{k+1} = s_k + \lambda_k g_k$.

  3. Choose $\beta_{k+1} \ge \beta_k$. Set $x_{k+1} = \pi_{\beta_{k+1}}(-s_{k+1})$, where $\pi_{\beta}(s) = \arg\max_{x \in Q} \{\langle s, x \rangle - \beta d(x)\}$.


Let $\hat\beta_0 = \hat\beta_1 = 1$ and $\hat\beta_{k+1} = \hat\beta_k + 1/\hat\beta_k$, $k \ge 1$.

The scheme has two main variants: simple averages, where $\lambda_k \equiv 1$ and $\beta_k = \gamma \hat\beta_k$ with a constant $\gamma > 0$, and weighted averages, where $\lambda_k = 1/\|g(x_k)\|_*$ and $\beta_k = \gamma \hat\beta_k$ with a constant $\gamma > 0$.
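A minimal Python sketch of the simple-averages variant with the Euclidean prox-function $d(x) = \tfrac{1}{2}\|x - x_0\|^2$, for which the prox step reduces to a projection; the oracles `subgrad` and `project` and the simplified scaling $\beta_k = \gamma\sqrt{k+1}$ (of the same order as $\gamma\hat\beta_k$) are assumptions of this sketch.

```python
import numpy as np

def dual_averaging(x0, subgrad, project, gamma=1.0, iters=1000):
    """Dual averaging with simple averages and Euclidean prox."""
    x = project(x0)
    s = np.zeros_like(x)              # accumulated subgradients (dual space)
    x_avg = np.zeros_like(x)
    for k in range(iters):
        s += subgrad(x)               # new information enters with weight 1
        beta = gamma * np.sqrt(k + 1)
        x = project(x0 - s / beta)    # argmin_x <s, x> + beta * d(x) over Q
        x_avg += x
    return x_avg / iters              # averaged primal sequence is the output
```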

There are other gradient methods for saddle-point problems. In [4], Chambolle and Pock study a first-order primal-dual algorithm for a class of saddle-point problems in two finite-dimensional real vector spaces $X$ and $Y$:

$$\min_{x \in X} \max_{y \in Y} \; \langle K x, y \rangle + G(x) - F^{*}(y),$$

where $K$ is a linear operator, and $G$ and $F^{*}$ are proper convex, lower-semicontinuous functions. That algorithm, as well as the classical Arrow–Hurwicz method [1] and its variants for saddle-point problems, is not applicable to our FMO formulation, because in our formulation the function coupling the two spaces is nonlinear. Nemirovski’s prox-method [12] reduces the problem of approximating a saddle-point of a function to that of solving the associated variational inequality by a prox-method. This approach is not applicable to our FMO formulation either, because the structure of our FMO formulation is not simple enough and its objective function is not in $C^{1,1}$.

In our approach, the inverse of the stiffness matrix in model (2) does not need to be calculated, which decreases the computational cost per iteration by an order of magnitude. Solutions of the primal and dual subproblems at each iteration can be written in closed form, each iteration is cheap, and the auxiliary storage space is modest. Furthermore, the primal subproblem decouples into small problems that can be solved in parallel, each in a small number of flops. Thus it is possible to work on large-scale problems, in contrast with second-order methods, which deal with the Hessian over all the matrix variables plus additional constraints on the matrices. To establish the efficiency of the algorithm, we give iteration complexity bounds for both the simple and the weighted dual averaging schemes. The complexity bounds are optimal for general subgradient methods. Numerical experiments are described at the end of the paper.

The remainder of the paper is organized as follows. In Section 2, we give our saddle-point form of the problem. In Section 3, we show that a solution to our bounded Lagrangian form either solves the original problem or gives an approximate solution of the original problem. In Section 4, we present our algorithm. In Section 5, we give closed-form solutions to the subproblems. In Section 6, we derive complexity bounds of our algorithm. In Section 7, we describe and analyze a penalized Lagrangian approach. In Section 8, we present some computational examples. In the Appendices, we give a closed-form solution of a related matrix projection problem and an update scheme for the parameters of the algorithm.

2 Saddle-Point Formulation

We first rewrite problem (2) in a saddle-point form. Denote

(3)

The second group of constraints in (2) can be represented in max form:

Assume that problem (2) satisfies some constraint qualification, such as the Slater condition—there exists a strictly feasible point. Then a Lagrange multiplier exists, and we can solve the Lagrangian form of problem (2) instead. Thus, problem (2) can be written as follows:

The dimension of the stiffness matrix is large; the first transformation eliminates the need to calculate its inverse, but it results in an objective function that is nonconcave in the maximization variables. The second transformation makes the function concave in them. In the last step, one variable is eliminated to simplify the formulation.
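The mechanism of such an inverse-eliminating transformation can be illustrated by a standard identity, stated here in assumed generic notation ($A$ for a positive definite stiffness matrix, $f$ for a load, $u$ for a displacement): the compliance can be written as a maximum over displacements, so the inverse disappears from the formulation.

```latex
% For symmetric A > 0 the maximizer below is u = A^{-1} f, hence
% f^T A^{-1} f <= gamma  iff  2 f^T u - u^T A u <= gamma for all u.
\[
  f^{\top} A^{-1} f \;=\; \max_{u} \bigl( 2 f^{\top} u - u^{\top} A u \bigr),
  \qquad A \succ 0 .
\]
```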

Denote

Thus, to solve problem (2), we only need to solve

(4)

Note that is convex in and concave in .

3 Bounded Lagrangian

We apply the primal-dual subgradient method [17] to the saddle-point formulation (4). The convergence of the algorithm requires that the iterates be uniformly bounded [17]. We therefore impose an additional bound:

(5)

Next we show that the primal solution of the saddle-point problem (5) is either a solution to the original problem (2) or an approximate solution in the sense that its constraint violation is bounded and its objective value is smaller than the optimal value of (2).

Let be a solution to the saddle-point problem (4); then for any , is also a solution. We can choose small enough; for instance, let , so that is a solution to the bounded saddle-point form (5).

For any , denote the index set of its violated constraints as

Denote . We have the following results regarding our material design problem.

Lemma 1

Let be a solution of (5). Let be the optimal value of (2).

  1. If for , then is a solution to (2).

  2. Otherwise, has the following properties:

    1. .

    2. .

Proof: Item 1 is obvious as the constraints are non-binding.

Next, we prove item 2.

Because

we have item (a). For any fixed , the point

is feasible to

with objective value

(6)

Since

we also have

Therefore,

By item (a) of the lemma, we have

Item (b) then follows.  

Note that as , the set of saddle-points of (5) approaches that of the original problem.

4 The Algorithm

In this part, we describe how to apply the primal-dual subgradient method [17] to the saddle-point reformulation of model (2). We have developed a parameter update scheme for the algorithm, which is included in the Appendix.

For a matrix , let denote the vector of eigenvalues of , and let be its smallest eigenvalue. The gradient (subgradients) of at are: for , ,
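As an illustration of the eigenvalue computations such subgradient oracles involve, the following sketch uses a standard fact: for the concave function $X \mapsto \lambda_{\min}(X)$ on symmetric matrices, $v v^{\top}$ is a supergradient at $X$ whenever $v$ is a unit eigenvector for the smallest eigenvalue. The function name is an assumption of this sketch.

```python
import numpy as np

def min_eig_supergradient(X):
    """Return lambda_min(X) and a supergradient v v^T of the concave
    function X -> lambda_min(X) at a symmetric matrix X."""
    vals, vecs = np.linalg.eigh(X)   # eigenvalues in ascending order
    v = vecs[:, 0]                   # unit eigenvector for lambda_min
    return vals[0], np.outer(v, v)
```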

For the primal space, we choose the standard Frobenius norm:

For the dual space, we choose the standard Euclidean norm:

Their dual norms are denoted as: , .

The set for is defined in (3), and the set for is

Note that is nonsmooth. The primal-dual subgradient method [17] for saddle-point problems (5) works as follows.

Initialization: Set , .
  Choose , .
Iteration
1. Compute , , for .
2. Choose , set
3. Choose , set
Output: .
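A Python skeleton of the loop above, assuming the closed-form subproblem solutions of Section 5 are packaged as oracles; every name here (`grad_E`, `grad_w`, `prox_E`, `prox_w`, and the flat array layout of the primal variable) is an assumption of this sketch rather than the paper’s notation.

```python
import numpy as np

def primal_dual_subgradient(E0, w0, grad_E, grad_w, prox_E, prox_w,
                            gamma=1.0, iters=1000):
    """Dual averaging on primal and dual variables simultaneously."""
    E, w = E0, w0
    sE, sw = np.zeros_like(E0), np.zeros_like(w0)
    E_sum, w_sum = np.zeros_like(E0), np.zeros_like(w0)
    for k in range(iters):
        sE += grad_E(E, w)                # subgradient in the primal variable
        sw -= grad_w(E, w)                # minus sign: the dual variable ascends
        beta = gamma * np.sqrt(k + 1)
        E, w = prox_E(sE, beta), prox_w(sw, beta)  # closed-form prox steps
        E_sum, w_sum = E_sum + E, w_sum + w
    return E_sum / iters, w_sum / iters   # averaged primal-dual output
```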

Details of a parameter update scheme for are given in the Appendix.

We take

Based on different choices of , there are two variants of the algorithm:

  1. Method of Simple Dual Averages

    We let $\lambda_k = 1$ and $\beta_k = \gamma \hat\beta_k$ with a constant $\gamma > 0$.

  2. Method of Weighted Dual Averages

    We let $\lambda_k = 1/\|g(x_k)\|_*$ and $\beta_k = \gamma \hat\beta_k$ with a constant $\gamma > 0$.

5 Solution to the Subproblem

In this part, we give closed-form solutions to the subproblems at each iteration of our algorithm.

Solution of . The closed-form solution for in Step 3 of the algorithm is derived as follows.

By the Cauchy–Schwarz–Bunyakovsky inequality, for :

with equality iff for some . Therefore,

(7)
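In assumed generic notation (a vector $g$ and radius $r$), the mechanism behind this closed form is the maximization of a linear form over a Euclidean ball:

```latex
% Cauchy-Schwarz gives <g, w> <= ||g|| ||w||, with equality iff w is a
% nonnegative multiple of g; hence, for g != 0,
\[
  \max_{\|w\|_2 \le r} \langle g, w \rangle \;=\; r\,\|g\|_2 ,
  \qquad \text{attained at } w = r\, g / \|g\|_2 .
\]
```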

Solution of . For a set , let denote the cardinality of ; i.e., the number of elements in . In Step 3 of the algorithm, can be seen as the projection

By Theorem 4 in the Appendix (Matrix Projection), we can represent as follows.

For each , let be the eigenvalue decomposition of , and be its eigenvalues. Define the sets

Then

(8)

where is determined according to the following three cases.

  1. .

    Let

  2. .

    Then there is a partition :

    Let

  3. .

    Then there is a partition :

    Let

The eigenvalues in case 2 can be obtained by the following algorithm:

Algorithm projSyml
Step 1

(Initialization) Let be the negative eigenvalues of .
Let

Step 2

While , do

Step 3

Let

Similarly, the eigenvalues in case 3 can be obtained by the following algorithm:

Algorithm projSymg
Step 1

(Initialization) Let be the positive eigenvalues of .

  • If , let

  • If , let

Step 2

While , do

Step 3

Let
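The exact feasible set and the partitions are given in the appendix; as an illustration of the three-case, eigenvalue-shift structure that projSyml and projSymg compute, here is a sketch assuming projection of a symmetric matrix $Y$ onto a set of the form $\{X \succeq 0 : \rho_- \le \operatorname{tr} X \le \rho_+\}$, with the shift found by bisection instead of the exact partition scheme.

```python
import numpy as np

def project_trace_box(Y, rho_lo, rho_up):
    """Project symmetric Y onto {X psd : rho_lo <= tr X <= rho_up}.
    KKT conditions give eigenvalues max(lam_i -/+ theta, 0); the shift
    theta is located here by bisection on the (monotone) trace."""
    lam, Q = np.linalg.eigh(Y)
    down = lambda t: np.maximum(lam - t, 0.0)   # shift eigenvalues down
    up = lambda t: np.maximum(lam + t, 0.0)     # shift eigenvalues up
    if down(0.0).sum() > rho_up:                # trace too large: shift down
        lo, hi = 0.0, lam.max()
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if down(mid).sum() > rho_up else (lo, mid)
        x = down(hi)
    elif up(0.0).sum() < rho_lo:                # trace too small: shift up
        lo, hi = 0.0, rho_lo - lam.min()
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if up(mid).sum() < rho_lo else (lo, mid)
        x = up(hi)
    else:                                       # plain psd projection suffices
        x = down(0.0)
    return (Q * x) @ Q.T
```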

6 Complexity of the Algorithm

To understand the complexity of the algorithm for model (2), in this part we study the duality gap and the computational cost of each iteration. By [17], it takes iterations to solve a general convex-concave saddle-point problem to absolute accuracy , which is the exact lower complexity bound for this class of algorithmic schemes. To give insight into how the data of the FMO model affect the convergence time, in § 6.1 we give upper bounds on the duality gap of the iterates generated by our algorithm in terms of the number of iterations and the input data. In § 6.2, we derive the computational cost per iteration. From the duality gap and the computational cost per iteration given in this section, one can estimate from the given data how much computational effort is needed at most to approximate a solution of an instance of model (2) by the method proposed in this paper.

6.1 Iteration Bounds

By [7, Chapter 6, Proposition 2.1], for a function , assume

  • the sets and are convex, closed, non-empty, and bounded;

  • for any fixed , is concave and upper semicontinuous;

  • for any fixed , is convex and lower semicontinuous;

then the function has at least one saddle-point.

Since and are bounded, and is continuous and finite, we conclude by the above result that has a saddle-point and a finite saddle-value. An upper bound on the duality gap is given in [17, Theorem 6]. We next express the duality gap in terms of the input data.

Define