A decentralized approach to multi-agent MILPs: finite-time feasibility and performance guarantees


Abstract

We address the optimal design of a large-scale multi-agent system where each agent has discrete and/or continuous decision variables that need to be set so as to optimize the sum of linear local cost functions, in the presence of linear local and global constraints. The problem reduces to a Mixed Integer Linear Program (MILP) that is here addressed according to a decentralized iterative scheme based on dual decomposition, where each agent determines its decision vector by solving a smaller MILP involving its local cost function and constraint given some dual variable, whereas a central unit enforces the global coupling constraint by updating the dual variable based on the tentative primal solutions of all agents. An appropriate tightening of the coupling constraint through the iterations makes it possible to obtain a solution that is feasible for the original MILP. The proposed approach is inspired by a recent method for the approximate solution of MILPs via dual decomposition and constraint tightening, and has the advantage of guaranteeing feasibility in finite time and providing better performance guarantees. The two approaches are compared on a numerical example on the optimal charging of plug-in electric vehicles.

1 Introduction

In this paper we are concerned with the optimal design of a large-scale system composed of multiple agents, each one characterized by its set of design parameters that should be chosen so as to solve a constrained optimization problem where the agents’ decisions are coupled by some global constraint. More specifically, the goal is to minimize the sum of local linear cost functions, subject to local polyhedral constraints and a global linear constraint. A key feature of our framework is that design parameters can have both continuous and discrete components.

Let denote the number of agents. Then, the optimal design problem takes the form of the following Mixed Integer Linear Program (MILP):

where, for all , is the decision vector of agent , its local cost, and its local constraint set defined by a matrix and a vector of appropriate dimensions, being the number of continuous decision variables and the number of discrete ones, with . The coupling constraint is defined by matrices , , and a -dimensional vector .
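Written with placeholder symbols ($x_i$ for the decision vector of agent $i$, $c_i$ for its cost vector, $X_i$ for its local constraint set, and $A_i$, $b \in \mathbb{R}^p$ for the data of the $p$ coupling constraints), the structure of the problem can be sketched as

$$\min_{x_1,\dots,x_N} \; \sum_{i=1}^{N} c_i^\top x_i \quad \text{subject to} \quad \sum_{i=1}^{N} A_i x_i \le b, \qquad x_i \in X_i \subseteq \mathbb{R}^{n_{c,i}} \times \mathbb{Z}^{n_{d,i}}, \;\; i=1,\dots,N.$$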

Despite the advances in numerical methods for integer optimization, when the number of agents is large, the presence of discrete decision variables makes the optimization problem hard to solve and calls for a decomposition into smaller-scale MILPs, as suggested in [17].

A common practice to handle problems of the form of Equation 1 consists in first dualizing the coupling constraint by introducing a vector of Lagrange multipliers and solving the dual program

to obtain , and then constructing a primal solution by solving MILPs given by:

where the search within the closed constraint polyhedral set can be confined to its set of vertices since the cost function is linear.
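In the same placeholder notation, and denoting by $\lambda \ge 0$ the vector of Lagrange multipliers, these two steps can be sketched as

$$\max_{\lambda \ge 0} \; \sum_{i=1}^{N} \min_{x_i \in X_i} \left( c_i^\top x_i + \lambda^\top A_i x_i \right) - \lambda^\top b, \qquad x_i(\lambda) \in \arg\min_{x_i \in X_i} \, (c_i + A_i^\top \lambda)^\top x_i, \;\; i=1,\dots,N.$$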

Unfortunately, while this procedure guarantees that the local constraints are satisfied, since for all , it does not guarantee the satisfaction of the coupling constraint.

A way to enforce the satisfaction of the coupling constraint is to solve Equation 2 via a subgradient method and then use a recovery procedure for the primal variables, [14]. Although this method is very useful in applications, since it allows for a distributed implementation (see, e.g., [9]), it provides a feasible solution only when there are no discrete decision variables. As a matter of fact, if we let denote the convex hull of all points inside , then the primal solution recovered using [14] is the optimal solution of the following Linear Program (LP):

The dual of the convexified problem Equation 4 coincides with that of Equation 1 and is given by Equation 2 (see [10] for a proof). Clearly, does not necessarily imply that . Therefore, the solution recovered using [14] satisfies the coupling constraint but not necessarily the local constraints.
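In the same placeholder notation, and with $\mathrm{conv}(\cdot)$ denoting the convex hull, the LP in Equation 4 can be sketched as

$$\min_{x_1,\dots,x_N} \; \sum_{i=1}^{N} c_i^\top x_i \quad \text{subject to} \quad \sum_{i=1}^{N} A_i x_i \le b, \qquad x_i \in \mathrm{conv}(X_i), \;\; i=1,\dots,N.$$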

For these reasons, recovery procedures for MILPs are usually composed of two steps: a tentative solution, possibly infeasible for the coupling constraint or for the local ones, is first obtained through duality, and then a heuristic is applied to recover feasibility starting from this tentative solution, see, e.g., [4].

Problems of the form of Equation 1 arise in different contexts, like power plant generation scheduling [18], where the agents are the generation units with their on/off state modeled through binary variables and the joint constraint consists of energy balance equations, or building energy management [11], where the cost function is related to power consumption and the constraints account for capacity, comfort, and actuation limits of each building. Other problems that fit the structure of Equation 1 are supply chain management [8], portfolio optimization for small investors [2], and plug-in electric vehicle charging [17]. In all these cases it is of major interest to guarantee that the derived (primal) solution is implementable in practice, which means that it must be feasible for Equation 1.

Interestingly, a large class of dynamical systems involving both continuous and logic components can be modeled as Mixed Logical Dynamical (MLD) systems [3], which are described by linear equations and inequalities involving both discrete and continuous input and state variables. Model predictive control problems for MLD systems involving the optimization of a linear finite-horizon cost function then also fit the MILP description in Equation 1.

Background

Problems of the form of Equation 1 have been investigated in [1], where the authors studied the behavior of the duality gap (i.e., the difference between the optimal values of Equation 1 and Equation 2), showing that it decreases relative to the optimal value of Equation 1 as the number of agents grows. The same behavior has been observed in [4]. In the recent paper [17], the authors explored the connection between the solution to the linear program Equation 4 and the one recovered via from the solution to the dual program Equation 2. They proposed a method to recover a primal solution which is feasible for Equation 1 by using the dual optimal solution of a modified primal problem, obtained by tightening the coupling constraint by an appropriate amount.

Let with and consider the following pair of primal-dual problems:

and
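In the placeholder notation of Section 1, and with $\rho \ge 0$ denoting the tightening vector, this primal-dual pair can be sketched as

$$\min_{x_i \in \mathrm{conv}(X_i)} \; \sum_{i=1}^{N} c_i^\top x_i \;\; \text{s.t.} \;\; \sum_{i=1}^{N} A_i x_i \le b - \rho \qquad \text{and} \qquad \max_{\lambda \ge 0} \; \sum_{i=1}^{N} \min_{x_i \in \mathrm{conv}(X_i)} \left( c_i^\top x_i + \lambda^\top A_i x_i \right) - \lambda^\top (b - \rho).$$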

Equation 5 constitutes a tightened version of Equation 4, whereas Equation 6 is the corresponding dual. For all , let be defined as follows:

where denotes the -th row of and the -th entry of .
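As an illustration of the kind of quantity involved (this is only a sketch of its structure, not necessarily the exact expression used in [17]), a worst-case, component-wise tightening computed over the whole local sets takes a form like

$$[\rho]_j = p \, \max_{i=1,\dots,N} \left( \max_{x_i \in X_i} a_{i,j}^\top x_i - \min_{x_i \in X_i} a_{i,j}^\top x_i \right), \qquad j = 1,\dots,p,$$

where $a_{i,j}^\top$ denotes the $j$-th row of $A_i$ and $p$ is the number of coupling constraints.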

From [17], we have the following result:

Let us define

Consider the following assumption:

Then, the sub-optimality level of the approximate solution to Equation 1 can be quantified as follows:

Note that both Proposition ? on feasibility and Proposition ? on optimality require knowledge of the dual solution . This may pose some issues if cannot be computed centrally, which is the case, e.g., when the agents are not willing to share with a central entity their private information encoded in their local cost and constraint set. In those cases, the value of can only be attained asymptotically using a decentralized/distributed scheme to solve Equation 6 with .

Contribution of this paper

In this paper we propose a decentralized iterative procedure which provides in a finite number of iterations a solution that is feasible for the optimal design problem Equation 1, thus overcoming the issues regarding the finite-time computability of a decentralized solution in [17]. Furthermore, the performance guarantees quantifying the sub-optimality level of our solution with respect to the optimal one of Equation 1 are less conservative than those derived in [17].

As in the inspiring work [17], we still exploit some tightening of the coupling constraint to enforce feasibility. However, the amount of tightening is decided through the iterations, based on the explored candidate solutions , , and not using the overly conservative worst-case tightening of [17], where, for all , the and of are computed letting vary over the whole set . The amount of tightening plays a crucial role in the applicability of Proposition ?. In fact, too large a value of may prevent Equation 5 from being feasible when is set equal to , thus violating Assumption ?. A less conservative way to select an appropriate amount of tightening can extend the applicability of the approach to a larger class of problems. By a similar reasoning, we are able to improve the bound on the performance degradation of our solution with respect to the optimal one of Equation 1 by taking a less conservative value for the quantity in that is used in the performance bound .

Notably, the proposed decentralized scheme allows the agents to preserve the privacy of their local information, since they do not have to send to the central unit either their cost coefficients or their local constraints.

2 Proposed approach

We next introduce Algorithm ? for the decentralized computation, in a finite number of iterations, of an approximate solution to Equation 1 that is feasible and improves over the solution in [17] both in terms of the amount of tightening and of the performance guarantees.

Algorithm ? is a variant of the dual subgradient algorithm. Like the standard dual subgradient method, it includes two main steps: step ?, in which a subgradient of the dual objective function is computed by fixing the dual variables and minimizing the Lagrangian with respect to the primal variables, and step ?, which involves a dual update with step size and a projection onto the non-negative orthant (in Algorithm ? denotes the projection operator onto the -dimensional non-negative orthant ). The operators and appearing in steps ?, ?, and ? of Algorithm ? with arguments in are meant to be applied component-wise. The sequence is chosen so as to satisfy and , as required by the standard dual subgradient method to achieve asymptotic convergence. Furthermore, in order to guarantee that the solution to step ? in Algorithm ? is well-defined, we impose the following assumption on Equation 1:

If in step ? is a set of cardinality larger than 1, then a deterministic tie-break rule is applied to choose a value for .

Algorithm ? is conceived to be implemented in a decentralized scheme where, at each iteration , every agent updates its local tentative solution and communicates it to a central unit that is in charge of updating the dual variable. The tentative value of the dual variable is then broadcast to all the agents. Note that the agents do not need to communicate to the central unit their private information regarding their local constraint set and cost, but only their tentative solution .

The tentative primal solutions , , computed at step ? are used in Algorithm ? by the central unit to determine the amount of tightening entering step ?. The value of is progressively refined through iterations based only on those values of , , that are actually considered as candidate primal solutions, and not based on the whole sets , . This reduces conservativeness in the amount of tightening and also in the performance bound of the feasible, yet suboptimal, primal solution.

Algorithm ? terminates when a given stopping criterion is met at the level of the central unit, e.g., when for a given number of subsequent iterations satisfies the coupling constraint. As shown in the numerical study in Section 4, variants of Algorithm ? can be conceived to get an improved solution in the same number of iterations as Algorithm ?. The agents should however share with the central entity additional information on their local cost, thus partly compromising privacy preservation.
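To fix ideas, the following Python sketch outlines one possible implementation of the scheme just described. It is not the exact Algorithm ?: the local sets are represented by explicit finite lists of candidate points so that the local step reduces to an enumeration, and the tightening rule is an assumed, Vujanic-style rule restricted to the candidates explored so far; all function and variable names are illustrative.

import numpy as np

def local_argmin(c_i, A_i, lam, candidates_i):
    # Agent i's step: minimize (c_i + A_i^T lam)^T x_i over a finite list of
    # candidate points standing in for X_i (an assumption made here to keep the
    # sketch self-contained; in practice this is a small local MILP).
    scores = [(c_i + A_i.T @ lam) @ v for v in candidates_i]
    return candidates_i[int(np.argmin(scores))]

def decentralized_milp(c, A, b, candidates, alpha, num_iter):
    # Sketch of a dual-subgradient scheme with adaptive constraint tightening.
    # c[i], A[i]: local cost vector and coupling matrix of agent i;
    # b: right-hand side of the coupling constraint sum_i A[i] x[i] <= b;
    # candidates[i]: finite set of points standing in for X_i (assumption);
    # alpha(k): step-size sequence; num_iter: number of iterations.
    # The tightening rule below (number of coupling constraints times the
    # largest explored max-min spread of A_i x_i) is an assumed rule in the
    # spirit of [17], restricted to visited candidates.
    N, p = len(c), b.shape[0]
    lam = np.zeros(p)                                  # dual variable, initialized at zero
    rho = np.zeros(p)                                  # tightening, initialized at zero
    s_max = [np.full(p, -np.inf) for _ in range(N)]    # running max of A_i x_i^k
    s_min = [np.full(p, np.inf) for _ in range(N)]     # running min of A_i x_i^k
    x = [None] * N
    for k in range(num_iter):
        for i in range(N):                             # each agent solves its local problem
            x[i] = local_argmin(c[i], A[i], lam, candidates[i])
            s_max[i] = np.maximum(s_max[i], A[i] @ x[i])
            s_min[i] = np.minimum(s_min[i], A[i] @ x[i])
        # central unit refines the tightening from the explored candidates only
        rho = p * np.max(np.array([s_max[i] - s_min[i] for i in range(N)]), axis=0)
        # projected subgradient step on the dual of the tightened problem
        g = sum(A[i] @ x[i] for i in range(N)) - (b - rho)
        lam = np.maximum(0.0, lam + alpha(k) * g)
    return x, lam, rho

For instance, alpha = lambda k: 1.0 / (k + 1) satisfies the step-size conditions recalled above.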

As for the initialization of Algorithm ?, is set equal to so that at iteration each agent computes its locally optimal solution
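In the placeholder notation of Section 1, the initialization thus amounts to

$$\lambda^0 = 0, \qquad x_i^0 \in \arg\min_{x_i \in X_i} c_i^\top x_i, \quad i = 1,\dots,N.$$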

Since , if the local solutions , , satisfy the coupling constraint (and hence are optimal for the original problem Equation 1), then Algorithm ? will terminate, since will remain 0 and the agents will stick to their locally optimal solutions.

Before stating the feasibility and performance guarantees of the solution computed by Algorithm ?, we need to introduce some further quantities and assumptions.

Let us define for any

where , , are the tentative primal solutions computed at step ?.

Due to Assumption ?, for any , is a bounded polyhedron. If it is also non-empty, then is a non-empty finite set (see Corollaries 2.1 and 2.2 together with Theorem 2.3 in [6]). As a consequence, the sequence takes values in a finite set. Since this is a monotonically non-decreasing sequence, it converges in finite time to some value .
The same reasoning can be applied to show that the sequence , iteratively computed in Algorithm ? (see step ?), and given by

for , converges in finite time to some , since it takes values in a finite set and is (component-wise) monotonically non-decreasing. Note that the limiting values and of and satisfy and , where and are defined in and .

Similarly to [17], define and as the primal-dual pair of optimization problems that are given by setting equal to in Equation 5 and Equation 6.

Note that, since , if Assumption ? is satisfied, then Assumption ? is automatically satisfied.

We are now in a position to state the two main results of the paper.

By a direct comparison of and we can see that the bound in is no worse than due to the fact that and .

3 Proof of the main results

3.1 Preliminary results

Exploiting Lemma ?, we next show that each sequence , , converges in finite time to some set.

3.2 Proof of Theorems and


Theorem 2.5 of [17] establishes a relation between the solution of and the one recovered in from the optimal solution of the dual optimization problem . Specifically, it states that there exists a set of indices of cardinality at least , such that for all , where is the subvector of corresponding to the -th agent. Therefore, following the proof of Theorem 3.1 in [17], we have that

where , and constitutes an upper bound for given that is feasible for .

According to [14], the component of the (unique, under Assumption ?) solution to is the limit point of the sequence , defined as
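In sketch form, this is the usual weighted running average of the tentative primal iterates (written with placeholder symbols),

$$\hat{x}_i^k = \frac{\sum_{t=1}^{k} \alpha^t x_i^t}{\sum_{t=1}^{k} \alpha^t},$$

where $\alpha^t$ is the step size used at iteration $t$.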

By linearity, for all , we have that

where the first inequality is due to the fact that all are positive and the second equality follows from step ? of Algorithm ?. In the final inequality, is lower bounded by , which denotes the limiting value of the non-increasing finite-valued sequence . Note that all inequalities are to be intended component-wise. By taking the limit as , we also have that

By Proposition ?, there exists a finite iteration index such that satisfies . Since holds for any choice of which minimizes over , if , then we can choose . Therefore, for all , becomes

where the second inequality is obtained by taking the maximum up to , the first equality is due to step ? of Algorithm ?, the third inequality is due to the fact that is the limiting value of the non-decreasing finite-valued sequence together with , and the last equality comes from the definition of where .

From we have that, for any , the iterates , , generated by Algorithm ? provide a feasible solution for Equation 1, thus concluding the proof.


Denote by , , and the optimal costs of Equation 1, , and Equation 4, respectively. From Assumption ? it follows that , , and are finite.

Consider the quantity .
As in the proof of Theorem 3.3 in [17], we add and subtract and to obtain

We shall next derive a bound for each term in .

Bound on :

Similarly to the proof of Theorem ? on feasibility, due to Theorem 2.5 in [17], we have that there exists a set of cardinality at least such that , for all . Therefore,

where .

According to [14], the components of the (unique, under Assumption ?) solution to are the limit points of the sequence , defined as

By linearity, for all , we have that

where the first inequality is due to the fact that all are positive and the last one derives from the fact that is a non-increasing sequence taking values in a finite set, and hence is lower bounded by its limiting value . Therefore, by taking the limit as , we also have that

Since holds for any choice of which minimizes over , by Proposition ? it follows that, for , and, as a result

where the second inequality is obtained by taking the maximum up to iteration and the third inequality is due to .

Now if we recall the definition of in and its finite-time convergence to , jointly with the fact that is the limiting value of , we finally get that there exists , such that for

thus leading to

Bound on :


Problem Equation 4 can be considered as a perturbed version of , since the coupling constraint of is given by

and that of Equation 4 can be obtained by adding to its right-hand side. From perturbation theory (see [7]), it then follows that the optimal cost is related to by:

From Assumption ?, by applying [12] we have that for all

where the second inequality is obtained by setting , the third inequality comes from the fact that and that , and the third equality is due to . Using in , we have

where the second inequality is due to Hölder's inequality.
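In placeholder notation, and for a suitable pair of dual norms, the resulting chain of bounds is of the form (a sketch based on the standard sensitivity argument in [7])

$$\tilde{J}_{\mathrm{LP}} - J_{\mathrm{LP}} \;\le\; \tilde{\lambda}^{\star\top} \rho \;\le\; \|\tilde{\lambda}^{\star}\| \, \|\rho\|_{*},$$

where $\tilde{J}_{\mathrm{LP}}$ and $\tilde{\lambda}^{\star}$ denote the optimal cost and an optimal dual solution of the tightened LP, and $J_{\mathrm{LP}}$ the optimal cost of Equation 4.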

Bound on :


Since Equation 4 is a relaxation of Equation 1, it holds that .

The proof is concluded by considering and inserting the bounds obtained for the three terms.

4 Application to optimal PEV charging

In this section we show the efficacy of the proposed approach in comparison with the one described in [17] on the Plug-in Electric Vehicle (PEV) charging problem described therein. This problem consists in finding an optimal overnight charging schedule for a fleet of vehicles, which has to satisfy both local requirements and limitations (e.g., maximum charging power and desired final state of charge of each vehicle) and some network-wide constraints (i.e., the maximum power that the network can deliver at each time slot). We consider both versions of the PEV charging problem, namely the “charge only” setup, in which all vehicles can only draw energy from the network, and the “vehicle to grid” setup, where the vehicles are also allowed to inject energy into the network.
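As a schematic illustration of how such a problem fits the structure of Equation 1 (the symbols and constraints below are purely illustrative and not those of [17]), each vehicle $i$ may be thought of as choosing an energy profile $e_{i,t}$ over time slots $t=1,\dots,T$:

$$\min \; \sum_{i=1}^{N}\sum_{t=1}^{T} c_t \, e_{i,t} \quad \text{s.t.} \quad \sum_{t=1}^{T} e_{i,t} \ge E_i, \quad 0 \le e_{i,t} \le \bar{e}_i \, u_{i,t}, \quad u_{i,t} \in \{0,1\}, \quad \sum_{i=1}^{N} e_{i,t} \le P_t \;\; \forall t,$$

where the last (coupling) constraint caps the total power drawn from the network in each slot; in the “vehicle to grid” setup the lower bound on $e_{i,t}$ would be replaced by a negative one.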

The improvement of our approach with respect to that in [17] is measured in terms of the following two relative indices: the reduction in the level of conservativeness

and the improvement in performance achieved by the primal solution

where and . A positive value of these indices indicates, respectively, that our approach is less conservative and that it yields a better performing primal solution.
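For illustration only (with placeholder symbols, and not necessarily the exact normalization used in the experiments), such relative indices can be thought of as

$$\Delta\rho_{\%} = \frac{\|\rho^{[17]}\| - \|\rho\|}{\|\rho^{[17]}\|}\cdot 100, \qquad \Delta J_{\%} = \frac{J^{[17]} - J}{|J^{[17]}|}\cdot 100,$$

where quantities with superscript $[17]$ refer to the approach therein and the others to the proposed one.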

For a thorough comparison we determined the two indices while varying: i) the number of vehicles in the network, ii) the realizations of the random parameters entering the system description (cost of the electrical energy and local constraints), and iii) the right hand side of the joint constraints. All parameters and their probability distributions were taken from [17].

In Table 1 we report the conservativeness reduction and the cost improvement for the “vehicle to grid” setup. As can be seen from the table, the level of conservativeness is reduced by , while the improvement in performance (witnessed by positive values of ) drops as the number of agents grows. This is due to the fact that the relative gap between and tends to zero as , thus reducing the margin for performance improvement.

Table 1: Reduction in the level of conservativeness () and improvement in performance () achieved by the primal solution obtained with the proposed method, compared with the one proposed in [17].

We do not report the results for the “charge only” setup since the two methods lead to the same level of conservativeness and performance of the primal solution.

Figure 1: Histogram of the performance improvement ($\Delta J_\%$) achieved by the primal solution obtained by the proposed method with respect to the one proposed in [17], over 1000 runs.

We also tested the proposed approach against changes in the random parameters defining the problem. We fixed and performed tests running Algorithm ? and the approach in [17] with different realizations of all parameters, extracted independently. Figure 1 plots a histogram of the values obtained for in the tests. Note that the cost improvement ranges from to and, in accordance with the theory, is always non-negative. The reduction in the level of conservativeness is also in this case , suggesting that the proposed iterative scheme exploits some structure of the PEV charging problem that the approach in [17] overlooks. Also in this case, in the “charge only” setup the two methods lead to the same level of conservativeness and performance.

Finally, we compared the two approaches in the “vehicle to grid” setup against changes in the joint constraints. If the number of electric vehicles is and we decrease the maximum power that the network can deliver by , then the that results from applying the approach in [17] makes Equation 5 with infeasible, thus violating Assumption ?. With our approach, instead, Equation 5 with remains feasible, where is the limiting value of in Algorithm ?.

4.1 Performance-oriented variant of Algorithm

While Algorithm ? is able to find a feasible solution to Equation 1, it does not directly account for the performance of that solution, whereas the user is typically concerned with both feasibility and performance, with higher priority given to feasibility. This calls for a modification of Algorithm ? that also takes the achieved performance into account.

Theorem ? guarantees that there exists an iteration index after which the iterates stay feasible for Equation 1 for all . Now, suppose that the agents, together with their tentative solutions, also transmit the corresponding costs to the central unit; then the central unit can compute the cost of the overall tentative solution at each iteration. When a feasible solution is found, its cost can be compared with that of a previously stored solution, and the central unit can decide whether to keep the new tentative solution or discard it. In this way, the best feasible solution is tracked across iterations.

The modified procedure is summarized in Algorithm ?. Note that, compared to Algorithm ?, each agent is required to transmit also the cost of its tentative solution.
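A minimal Python sketch of the central-unit bookkeeping described above, assuming each agent reports the scalar cost of its tentative solution (names and signatures are illustrative):

import numpy as np

def update_incumbent(x, reported_costs, A, b, incumbent):
    # Central-unit bookkeeping for the performance-oriented variant (a sketch):
    # keep the current tentative solution only if it satisfies the coupling
    # constraint and its total cost, as reported by the agents, improves on the
    # best feasible solution stored so far. incumbent is (cost, solution) or None.
    coupling = sum(A[i] @ x[i] for i in range(len(x)))
    if np.all(coupling <= b):
        total_cost = float(sum(reported_costs))
        if incumbent is None or total_cost < incumbent[0]:
            return (total_cost, [xi.copy() for xi in x])
    return incumbent

The incumbent returned after the last iteration is then the best feasible solution encountered along the run.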

To show the benefits of Algorithm ? in terms of performance, we ran tests with vehicles in the “charge only” setup, where we are also able to compute the optimal solution of Equation 1, and compared the performance of Algorithms ? and ? in terms of relative distance from the optimal cost of Equation 1.

Figure 2 shows the distribution of obtained with Algorithm ? (blue) and with Algorithm ? (orange) over the runs. As can be seen from the picture, most runs of Algorithm ? result in a performance very close to the optimal one, while the runs of Algorithm ? exhibit worse performance.

Figure 2: Histogram of the relative distance from the optimal value of Equation 1 achieved by the primal solutions obtained by Algorithm ? (blue) and Algorithm ? (orange), over 1000 runs.

5 Concluding remarks

We proposed a new method for computing a feasible solution to a large-scale mixed integer linear program via a decentralized iterative scheme that decomposes the program into smaller ones and has the additional beneficial side effect of preserving the privacy of local information when the problem originates from a multi-agent system.

This work improves over existing state-of-the-art results in that feasibility is achieved in a finite number of iterations and the decentralized solution is accompanied by a less conservative performance certificate. The application to a plug-in electric vehicles optimal charging problem verifies the improvement gained in terms of performance.

Future research directions include the development of a distributed algorithm, which does not require any central authority but only communication between neighboring agents, and which allows for time-varying communication among agents.

Moreover, we aim at exploiting the analysis of [16] to generalize our results to problems with nonconvex objective functions.

References

  1. J.-P. Aubin and I. Ekeland. Estimates of the duality gap in nonconvex optimization. Mathematics of Operations Research, 1(3):225–245, 1976.
  2. P. Baumann and N. Trautmann. Portfolio-optimization models for small investors. Mathematical Methods of Operations Research, 77(3):345–356, 2013.
  3. A. Bemporad and M. Morari. Control of systems integrating logic, dynamics, and constraints. Automatica, 35(3):407–427, 1999.
  4. D. Bertsekas, G. Lauer, N. Sandell, and T. Posbergh. Optimal short-term scheduling of large-scale power systems. IEEE Transactions on Automatic Control, 28(1):1–11, 1983.
  5. D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1999.
  6. D. Bertsimas and J. N. Tsitsiklis. Introduction to Linear Optimization, volume 6. Athena Scientific, Belmont, MA, 1997.
  7. S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
  8. M. Dawande, S. Gavirneni, and S. Tayur. Effective heuristics for multiproduct partial shipment models. Operations Research, 54(2):337–352, 2006.
  9. A. Falsone, K. Margellos, S. Garatti, and M. Prandini. Dual decomposition for multi-agent distributed optimization with coupling constraints. Automatica, 2017.
  10. A. M. Geoffrion. Lagrangean relaxation for integer programming. Mathematical Programming Study 2, pages 82–114, 1974.
  11. D. Ioli, A. Falsone, and M. Prandini. An iterative scheme to hierarchically structured optimal energy management of a microgrid. In 54th Conference on Decision and Control (CDC 2015), pages 5227–5232, 2015.
  12. A. Nedić and A. Ozdaglar. Approximate primal solutions and rate analysis for dual subgradient methods. SIAM Journal on Optimization, 19(4):1757–1780, 2009.
  13. N. J. Redondo and A. Conejo. Short-term hydro-thermal coordination by Lagrangian relaxation: solution of the dual problem. IEEE Transactions on Power Systems, 14(1):89–95, 1999.
  14. N. Shor. Minimization Methods for Non-Differentiable Functions. Springer, 1985.
  15. A. Simonetto and H. Jamali-Rad. Primal recovery from consensus-based dual decomposition for distributed convex optimization. Journal of Optimization Theory and Applications, 168(1):172–197, 2016.
  16. M. Udell and S. Boyd. Bounding duality gap for separable problems with linear constraints. Computational Optimization and Applications, 64(2):355–378, 2016.
  17. R. Vujanic, P. M. Esfahani, P. J. Goulart, S. Mariéthoz, and M. Morari. A decomposition method for large scale MILPs, with performance guarantees and a power system application. Automatica, 67:144–156, 2016.
  18. H. Y. Yamin. Review on methods of generation scheduling in electric power systems. Electric Power Systems Research, 69(2):227–248, 2004.