Online Mixed Packing and Covering

This work was supported in part by NSF grants CCF-0728869 and CCF-1016778.

Umang Bhaskar    Lisa Fleischer
Dartmouth College, 6211 Sudikoff Lab, Hanover NH 03755. {umang, lkf}@cs.dartmouth.edu
July 18, 2019
Abstract

In many problems, the inputs to the problem arrive over time. As each input is received, it must be dealt with irrevocably. Such problems are online problems. An increasingly common method of solving online problems is to solve the corresponding linear program, obtained either directly for the problem or by relaxing the integrality constraints. If required, the fractional solution obtained is then rounded online to obtain an integral solution.

We give algorithms for solving linear programs with mixed packing and covering constraints online. We first consider mixed packing and covering linear programs, where packing constraints are given offline and covering constraints are received online. The objective is to minimize the maximum multiplicative factor by which any packing constraint is violated, while satisfying the covering constraints. For general mixed packing and covering linear programs, no prior sublinear competitive algorithms are known. We give the first such — a polylogarithmic-competitive algorithm for solving mixed packing and covering linear programs online. We also show a nearly tight lower bound.

Our techniques for the upper bound use an exponential penalty function in conjunction with multiplicative updates. While exponential penalty functions are used previously to solve linear programs offline approximately, offline algorithms know the constraints beforehand and can optimize greedily. In contrast, when constraints arrive online, updates need to be more complex.

We apply our techniques to solve two online fixed-charge problems with congestion. These problems are motivated by applications in machine scheduling and facility location. The linear program for these problems is more complicated than mixed packing and covering, and presents unique challenges. We show that our techniques combined with a randomized rounding procedure can be used to obtain polylogarithmic-competitive integral solutions. These problems generalize online set-cover, for which there is a polylogarithmic lower bound. Hence, our results are close to tight.

1 Introduction

In this paper, we give the first online algorithm for general mixed packing and covering linear programs (LPs) with a sublinear competitive ratio. The problem we study is as follows.

Online Mixed Packing and Covering (OMPC). Given: a set of packing constraints , and a set of covering constraints , with positive coefficients and variables , such that the packing constraints are known in advance, and the covering constraints arrive one at a time. Goal: After the arrival of each covering constraint, increase so that the new covering constraint is satisfied and the amount by which we must multiply to make the packing constraints feasible is as small as possible.

Mixed packing and covering problems model a wide range of problems in combinatorial optimization and operations research. These problems include facility location, machine scheduling, and circuit routing. In these problems, requests for resources such as bandwidth or processing time are received over time, or online, whereas the set of resources is known offline. As each request arrives, we must allocate resources to satisfy the request. These allocations are often impossible or costly to revoke. The resources correspond to packing constraints in our setting and are known offline. Requests correspond to covering constraints, and arrive online. The performance of an online algorithm is measured by the competitive ratio, defined as the worst case ratio of the value of the solution obtained by the online algorithm to the value obtained by the optimal offline algorithm which has as its input the entire sequence of requests. The worst case ratio is over all possible sequences of inputs.

Many techniques to solve integer problems online first obtain a fractional solution, and then round this to an integer solution  [1, 2, 5, 6]. The first step involves solving a linear program relaxation of the original problem online. In fact, this can be a bottleneck step in obtaining a good competitive ratio. Thus, our algorithm for online mixed packing and covering can provide an important first step in obtaining good online solutions to several combinatorial problems. We demonstrate the power of our ideas by extending them to give the first online algorithms with sublinear competitive ratios for a number of fixed-charge problems with capacity constraints. For these problems, we first solve the linear program relaxation online, and then use known randomized rounding techniques to obtain an integer solution online.

Applications. We use our techniques to study two problems with fixed-charge and congestion costs. Both fixed-charge problems and congestion problems are widely studied offline and online; we discuss specific applications and references below. In general, fixed charges are used to model one-time costs such as resource purchases or installation costs, while congestion captures the load on any resource. In machine scheduling, for example, the makespan can be modelled as the maximum congestion by setting each resource to be a machine and setting unit capacity for each machine.
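As a small illustration of the modelling claim above, maximum congestion can be computed as follows (a toy sketch; the data and names are illustrative, not from the paper):

```python
def max_congestion(loads, capacities):
    """Maximum over resources of (total assigned load) / capacity.

    With each machine treated as a resource of unit capacity, the
    maximum congestion equals the makespan of the schedule."""
    return max(load / cap for load, cap in zip(loads, capacities))

# Three machines with unit capacity; loads are total processing times.
makespan = max_congestion([4.0, 7.0, 5.0], [1.0, 1.0, 1.0])  # 7.0
```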

Application 1: Unrelated Machine Scheduling with Start-up Costs (UMSC). Given offline: a set of machines with start-up cost for machine . Jobs arrive online, and job requires time to be processed on machine . Goal: when a job arrives, determine whether to “open” new machines by paying their start-up cost, and then assign the job to one of the open machines, so that the sum of the makespan of the schedule — the maximum over machines of the total processing time of the jobs assigned to it — and the sum of start-up costs is minimized.

The problem of scheduling jobs to minimize the makespan and the fixed charges is studied both offline [17, 26] and online [14, 25, 24]. The problem is motivated by reducing energy consumption in large data centers, such as those used by Google and Amazon [8, 26]. The energy consumption of a large data center is estimated to exceed that of thousands of homes, and the energy costs of these centers are in the tens of millions of dollars [32]; hence algorithms that focus on reducing energy consumption are of practical importance. The inclusion of a fixed charge models the cost of starting up a machine. Thus machines do not need to stay on, and can be started when the load increases. Bicriteria results for the offline problem are given in [26] and [17] using different techniques. We show strong lower bounds for bicriteria results in the online setting, and therefore focus on algorithms for the sum objective. For the online problem with identical machines, [14, 25] give constant-competitive algorithms for the sum objective. These results are extended in [24] to the case where machines have speed either 1 or , with more general costs for the machines.

Application 2: Capacity Constrained Facility Location (CCFL). Given offline: a set of facilities with fixed-charge and capacity for each facility in . Clients arrive online, and each client has an assignment cost and a demand on being assigned to facility . Goal: when client arrives, determine whether to open new facilities by paying their fixed charge, and then assign the client to an open facility, so that the sum of the maximum congestion of any facility, total assignment costs, and the total fixed charges paid for opened facilities is minimized. The congestion of a facility is the ratio of the sum of the loads of clients assigned to the facility to the capacity of the facility.

For online facility location without capacity constraints, a -competitive ratio is possible when the assignment costs form a metric [18], and a -competitive ratio is possible when assignment costs are non-metric [1], with clients and facilities. Capacitated facility location is a natural extension to the problem. In the offline setting, constant-factor approximation algorithms are known for both facility location with soft capacities — when multiple facilities can be opened at a location — and hard capacities — when either a single facility or no facility is opened at each location [30, 35]. Our problem is a variant of non-metric soft-capacitated facility location where instead of minimizing the cost of installing multiple facilities at a location, we minimize the load on the single facility at each location, in addition to fixed-charge and assignment costs.

Our Results. We give polylogarithmic competitive ratios for the problems discussed. Our results are the first sublinear guarantees for these problems.

  • For OMPC:

    • A deterministic -competitive algorithm, where is the number of packing constraints, is the maximum number of variables in any constraint, is the ratio of the maximum to the minimum non-zero packing coefficient and is the ratio of the maximum to the minimum non-zero covering coefficient (Section 2). If all coefficients are either 0 or 1, this gives a -competitive algorithm.

    • A lower bound of for any deterministic algorithm for OMPC. Our algorithm for OMPC is thus nearly tight (Section 2.3).

  • For CCFL and UMSC:

    • A randomized -competitive algorithm for CCFL, where and are the number of facilities and clients respectively, and is the ratio of the maximum to the minimum total cost of assigning a single client (Section 3). We obtain the same competitive ratio for UMSC, where and are the number of machines and jobs respectively, and is the ratio of the maximum to the minimum total cost of assigning a single job.

    • A lower bound for bicriteria results for CCFL: even if the maximum congestion is given offline, no deterministic online algorithm can obtain a fractional solution with maximum congestion and fixed-charge within a polylogarithmic factor of the optimal (Section 3.5). This lower bound also holds for UMSC, where is the makespan.

Since each of our applications includes fixed charges as part of the objective, they generalize online set cover. In UMSC, for example, set cover is obtained by setting the processing times to be either zero or infinity. Since the makespan in any bounded solution to the problem is now zero, this reduces the problem to covering jobs with machines to minimize the sum of machine start-up costs. Online set cover has a lower bound of on the competitive ratio assuming BPP ≠ NP [3]. Thus, our results for UMSC and CCFL are tight modulo a logarithmic factor.
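The reduction from online set cover sketched above can be written out concretely. The encoding below is a hypothetical sketch of the construction described in the text; the function and variable names are illustrative:

```python
import math

def set_cover_as_umsc(universe, sets, costs):
    """Encode a set-cover instance as a UMSC instance, as described in
    the text: machine i has start-up cost costs[i]; job e has processing
    time 0 on machine i if element e lies in sets[i], and infinity
    otherwise.  Every finite-cost schedule then has makespan 0, so
    minimizing makespan plus start-up costs is exactly minimizing the
    total start-up cost of the opened (chosen) sets."""
    proc_time = {(i, e): (0.0 if e in s else math.inf)
                 for i, s in enumerate(sets)
                 for e in universe}
    startup = dict(enumerate(costs))
    return proc_time, startup

proc, startup = set_cover_as_umsc({1, 2, 3}, [{1, 2}, {2, 3}, {3}],
                                  [5.0, 4.0, 1.0])
# Element 1 is covered only by the first set, so at finite cost
# job 1 can only run on machine 0.
```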

Our Techniques. Our techniques for online mixed packing and covering are based on a novel extension of multiplicative weight updates. We replace the packing constraints in our problem with an exponential penalty function that gives an upper bound on the violation of any constraint. When a covering constraint arrives, the increment to any variable is inversely proportional to the rate of change of this penalty function with respect to the variable. We use a primal-dual analysis to show that this technique, combined with a doubling approach used in previous online algorithms, gives the required competitive ratio. While exponential potential functions are widely used for offline algorithms and machine learning, e.g. [4], our work is the first to use an exponential potential function to drive multiplicative updates that yield provably good competitive ratios for online algorithms.

Our work is closely related to work on solving pure packing and pure covering linear programs online, and to Lagrangean-relaxation techniques for solving linear programs approximately offline. Multiplicative weight updates are used in [10] to obtain -competitive fractional solutions for covering linear programs when the constraints arrive online. In [10], the cost is a simple linear function of the variables. The update to each variable is inversely proportional to the sensitivity of the cost function relative to the variable, given by the variable’s coefficient in the cost function. In our problem, however, the cost is the maximum violation of any packing constraint. The cost function is thus nonlinear, and since its sensitivity relative to a variable changes, it is not apparent how to extend the techniques from [10]. We use an exponential potential function to obtain a differentiable approximation to this nonlinear cost. For each variable, our updates depend on the sensitivity of this potential function relative to the variable. In addition to the primal-dual techniques in [10], a key step in our analysis is to obtain bounds on the rate of change of this potential function.

A large body of work uses Lagrangean-relaxation techniques to obtain approximation algorithms for solving LPs offline, e.g., [31, 34]. In these papers, the constraints in the LP are replaced by an exponential penalty function. In each update, the update vector for the variables minimizes the change in the penalty function. In this sense, the updates in these offline algorithms are greedy. Since the constraints are available offline, this gives -approximate solutions. In our case, since covering constraints arrive online, greedy algorithms perform very poorly, and we must use different techniques. We use an exponential penalty function similar to offline algorithms. However, our updates are very different. Instead of the greedy strategy used in [31, 34], we hedge our bets and increment all variables that appear in the covering constraint. The increment to each variable is inversely proportional to its contribution to the penalty function.

For fixed-charge problems with capacity constraints, we solve the corresponding linear programs online and, for our applications, round the fractional solutions to obtain integral solutions online. The linear programs for these problems are significantly more complicated than mixed packing and covering. We combine our techniques for mixed packing and covering with a more complex doubling approach to obtain fractional solutions, and adapt randomized rounding procedures used previously offline for machine scheduling [26] and online for set cover [11] to obtain integral solutions.
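One standard threshold-based online rounding scheme of the kind adapted here (in the style of the online set-cover rounding of [11]) can be sketched as follows; `make_rounder` and the fixed number of threshold copies are illustrative assumptions, not the paper's exact procedure:

```python
import random

def make_rounder(num_items, copies):
    """Threshold rounding for an online fractional solution: for each
    item, draw `copies` independent uniform thresholds up front; the
    item is 'opened' once its (non-decreasing) fractional value reaches
    any of its thresholds.  An item with fractional value y is then
    opened with probability 1 - (1 - y)**copies, roughly min(1, copies*y),
    and the decision is irrevocable since y only increases online."""
    thresholds = [[random.random() for _ in range(copies)]
                  for _ in range(num_items)]

    def opened(i, y_i):
        # Called whenever item i's fractional value rises to y_i.
        return any(y_i >= t for t in thresholds[i])

    return opened
```

In an analysis one would take `copies` to be logarithmic in the number of requests, so that every satisfied fractional constraint is also satisfied integrally with high probability.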

Other Related Work. Multiplicative updates are used in a wide variety of contexts. They are used in both offline approximation algorithms for packing and covering problems [7, 16, 19, 20, 21, 22, 27, 28, 29, 31, 33, 34], and online approximations for pure packing or pure covering problems based on linear programs, such as set cover [12], caching [6], paging [5], and ad allocations [9]. Both offline and online, these algorithms are analyzed using a primal-dual framework. Multiplicative updates were used earlier [1, 2] to implicitly solve a linear program online for various network optimization problems; the fractional solution obtained was rounded online to obtain an integral solution. Multiplicative weight updates also have a long history in learning theory; these results are surveyed in [4].

Our work studies the worst-case behaviour of our algorithms assuming adversarial inputs. A large body of work studies algorithms for online problems when the inputs are received as the result of a stochastic process. Two common models studied in the literature are (1) when the inputs are picked from a distribution (either known or unknown), and (2) when an adversary picks the inputs, but the inputs are presented to the algorithm in random order. The adwords and display ads problems can be modeled as packing linear programs with variables arriving online. A number of papers give algorithms for these problems assuming stochastic inputs; some of these results are presented in [13, 15].

2 Online Mixed Packing and Covering

In this section, we consider mixed packing and covering linear programs. A mixed packing and covering linear program has two types of constraints: covering constraints of the form , and packing constraints of the form . We normalize the constraints so that the right side of each constraint is 1. Our objective is to obtain a solution that minimizes the maximum amount by which any packing constraint is violated. Thus, our problem is to obtain a solution to the following linear program:

\[
\min\; \lambda \quad \text{s.t.} \quad Cx \ge \mathbf{1}, \qquad Px \le \lambda \mathbf{1}, \qquad x \ge \mathbf{0} \tag{1}
\]

The packing constraints are given to us initially, and the covering constraints are revealed one at a time. Our online algorithm assigns fractional values to the variables. As covering constraints arrive, the variable values can be increased, but cannot be decreased.

For a vector \(x\), we use both \(x_j\) and \(x(j)\) to denote its \(j\)th component. We use \([n]\) to denote the set \(\{1, 2, \ldots, n\}\). The vector of all ones and the vector of all zeros are denoted by \(\mathbf{1}\) and \(\mathbf{0}\), respectively.

The number of variables, number of packing constraints, and number of covering constraints in the linear program are denoted by , , and respectively. We use to denote the maximum number of variables in any constraint. We define and similarly . The value of is used only in the analysis of the algorithm; we do not need to know its value during execution. Define , i.e., is the maximum coefficient in the first covering constraint to arrive. Define as the maximum number of variables in any packing constraint, and the first covering constraint. Define , and . Here, is the base of the natural logarithm.

We use OPT to denote the optimal value of (1) given all the packing and covering constraints; hence OPT is the value returned by the optimal offline algorithm.

In order to analyze our algorithm, we consider the dual of (1) as well:

\[
\max\; \mathbf{1}^{T} y \quad \text{s.t.} \quad C^{T} y \le P^{T} z, \qquad \mathbf{1}^{T} z \le 1, \qquad y, z \ge \mathbf{0} \tag{2}
\]

2.1 An Algorithm for Mixed Packing and Covering Online

We now give an algorithm for solving OMPC and show that it is -competitive. We assume in the following discussion that we are given a scaling parameter , which is used to scale the matrix of packing coefficients . In Theorem 11, we show that if then our algorithm yields the stated competitive ratio. Without this estimate , we can use a “doubling procedure” commonly used in online algorithms, which increases the competitive ratio obtained by a factor of 4 (Section 2.2).

Given a vector , let . For a given scaling parameter , let , and . Let be an estimate of , and note that . For each variable , define

(3)

Our algorithm is given as Algorithm 1. Upon receiving the first constraint, we initialize for all . We also initialize a counter variable .

When a covering constraint arrives it gets assigned a new dual variable , and the variables are incremented as described. The dual variables are used only in the analysis.

For covering constraint , define

(4)

so that for all , . In line 8, each variable gets increased by at most a factor of , and at least one variable gets incremented by a factor of .

1:  When first constraint arrives, initialize for all , and .
2:  Upon arrival of th covering constraint:
3:  while  do
4:     ,
5:     ,             /* defined in (3) */
6:                                /* defined in (4) */
7:     for  do
8:        
9:                             /* for analysis */
10:     if then return FAIL
Algorithm 1 MPC-Approx: Upon arrival of th covering constraint:

A single iteration of the while loop is a phase, indexed by , and the first phase is phase 0. The value of the variables before they are incremented in phase is . denotes the values after initialization. For covering constraint , is the indices of the phases executed from its arrival until , and .
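The update pattern of MPC-Approx can be sketched as follows. Since the exact forms of (3) and (4) and the constants are input-dependent, the potential, initialization, and step size below are simplified assumptions; only the overall structure — phases of multiplicative increments inversely proportional to the sensitivity of an exponential penalty — follows the text:

```python
import math

def mpc_approx(P, covering_constraints, a=1.0, beta=0.5, x0=1e-3):
    """Hedged sketch of MPC-Approx.  P is the (already scaled) packing
    matrix, given as a list of rows; covering constraints arrive online
    as coefficient vectors c, each requiring sum_j c[j]*x[j] >= 1.
    Variables only increase, as required of an online solution."""
    n = len(P[0])
    x = [x0] * n  # the paper's initial value depends on the instance

    def penalty_derivative(j, xx):
        # f_j: sensitivity of the exponential penalty
        # Phi(x) = sum_i exp(a * (Px)_i) with respect to x_j.
        return sum(a * row[j] * math.exp(a * sum(row[k] * xx[k] for k in range(n)))
                   for row in P)

    for c in covering_constraints:
        while sum(c[j] * x[j] for j in range(n)) < 1:  # one phase per pass
            f = [penalty_derivative(j, x) for j in range(n)]
            # Normalizer gamma: chosen so that gamma * c[j] / f[j] <= 1 for
            # every j in the constraint, with equality for at least one j.
            # Hence each variable grows by at most a factor (1 + beta) per
            # phase, and at least one grows by exactly (1 + beta).
            gamma = min(f[j] / c[j] for j in range(n) if c[j] > 0)
            for j in range(n):
                if c[j] > 0:
                    # Increment inversely proportional to the penalty's
                    # sensitivity with respect to x_j.
                    x[j] *= 1 + beta * gamma * c[j] / f[j]
    return x
```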

We first show upper bounds on values attained by the variables, and on the running time.

Lemma 1.

During the execution of the algorithm, for any , .

Proof.

For any , if , then will not be incremented further in any phase since any covering constraint with must already be satisfied. Thus, since the value of any variable increases by at most a factor of in a phase, . ∎

Lemma 2.

MPC-Approx executes phases, and each phase takes time .

Proof.

In each phase, the value of at least one variable gets incremented by a factor of . Each variable has an initial value of . Let be the number of phases in which gets increased by a factor of . Then . Since by Lemma 1, , . Observing that for all , and , it follows that each variable can be increased by in at most phases. Since in each phase at least one variable increases by a factor of , the number of phases is at most . Since for , . Thus the number of phases is at most . In each phase, can be computed in time; then and each can be computed in time . Thus each phase takes time . ∎

Our proof of the competitive ratio follows from a primal-dual analysis. We show in Corollary 6 that plus the value of the dual objective maintained by the algorithm is an upper bound on the primal objective maintained by the algorithm. Lemmas 7 and 8 show how the dual variables maintained by the algorithm can be scaled down to obtain feasible dual values. We show in Theorem 11 that together these prove the bound on the competitive ratio.

We first show that the initialization of the variables ensures that does not exceed .

Lemma 3.

For the variables as initialized, , and hence .

Proof.

Let be the values for the variables in an optimal solution. After the first covering constraint is received, . Since the first covering constraint has at most variables, there exists variable , and hence

Using ,

(5)

Our algorithm initializes , and hence

(6)

where the first inequality is because any packing constraint has at most variables. Thus, , proving the lemma. ∎

Corollary 4.

If , then at the beginning of any phase .

Proof.

Since , by (6), . Thus the claim holds for the first phase. For any phase , the algorithm would have failed at the end of phase if . Since the algorithm did not fail in phase , in any phase , . ∎

Lemma 5.

If , the increase in the dual objective is an upper bound on the increase in in every phase.

Proof.

Let and denote the values of before and after the variables are incremented in phase , respectively. We will show that , which is the increase in in phase .

Let and be the values of before and after being incremented in phase . For each , let for . Note that and . Define . With some abuse of notation, any function of , say , can be viewed as a function of , with . Thus, the functions and can be written as functions of : , and

(7)

We use these alternate expressions in the remainder of the proof. By the chain rule,

and hence,

(8)

In a phase, each variable is incremented by at most a factor of . Therefore . Then since , by Corollary 4, in any phase . Thus for by Lemma 45 (in Appendix). Hence

Since in phase each variable gets multiplied by ,

where the last inequality follows since, on entering the for loop, . Since is the increase in the dual objective, this proves the lemma. ∎

Corollary 6.

If , then .

Proof.

By Lemma 5, the increase in is an upper bound on the increase in , thus . By Lemma 3, , and hence . ∎

We now show that the dual variables do not violate the dual constraints by much. We choose the dual variable corresponding to each packing constraint as

(9)
Lemma 7.

For as defined in (9), .

Proof.

For each packing constraint , let . Thus attains its value in phase . We index the packing constraints so that . Then for any with so that , we have since the variables are increasing. Thus,

(10)

Substituting (10) into (9) yields

Then by Lemma 44 in the appendix, with ,

(11)

where the last inequality follows since and by definition of . ∎

The next lemma tells us how much we must scale the dual solution obtained by the algorithm to obtain a dual feasible solution.

Lemma 8.

For any , .

Proof.

Consider a phase executed upon arrival of a covering constraint . In this phase, gets incremented by . This increment occurs in every phase in . Hence

(12)

By Lemma 1, . Further, since the initial value of is and is multiplied by in every phase, for all ,

where the last inequality is since and for , . Multiplying on both sides by , taking the natural log, and reversing the inequality,

and multiplying both sides by ,

(13)

Thus from (12) and (13), . We will now show that , completing the proof. This follows since

We now use the previous lemmas to prove the bound on the competitive ratio of our algorithm.

Lemma 9.

If , then MPC-Approx does not fail.

Proof.

Let and (, ) be the values for the primal and dual variables when at line 10 in the algorithm. may be infeasible for the primal since the current covering constraint may not yet be satisfied; however, (, ) are feasible for the dual. Let and (, ) be the optimal solution. Then

(14)

where the last equality follows from LP strong duality. For convenience of notation, let . Since is non-decreasing, . Then by Lemmas 7 and 8, and are feasible values for the dual variables. Thus the optimal dual value is at least as large as . From (5), . Hence if , the condition for Corollary 6 is satisfied. From (14) and Corollary 6,

or, rearranging terms,

Substituting the value of , and since ,

(15)

Using the bound on from the statement of the lemma, and since ,

and simplifying yields . Hence, if , the algorithm does not fail. ∎

Lemma 10.

If and MPC-Approx does not fail, it returns a -competitive solution.

Proof.

Since , from (15),

Using the upper bound on , and since by Corollary 4,

This proves the lemma. ∎

Since , Lemmas 9 and 10 imply

Theorem 11.

If , then MPC-Approx does not fail and returns a -competitive solution.

2.2 Proceeding Without an Estimate on OPT.

We now discuss how to proceed without an estimate on OPT. We use a doubling procedure commonly used in online algorithms. We initially set and use this value to scale the packing constraints. We run Algorithm MPC-Approx with the scaled values. If the algorithm fails, we double , scale the packing constraints by the new value of and run the algorithm again. We repeat this each time the algorithm fails.

Each execution of Algorithm MPC-Approx is a trial. Each trial has distinct primal and dual variables , and , that are initialized at the start of the trial and increase as the trial proceeds. At the start of the trial, each is initialized to . If a trial fails, we double the value of and proceed with the next trial with new primal and dual variables. Thus in every trial, .

Our final value for is the sum of the values obtained in each trial. Thus, our variables are non-decreasing. Let be the value of used in trial , and be the value of the primal when trial ends. is the last trial, i.e., the algorithm does not fail in trial . Since obtained by the algorithm is the sum of in each trial , the value of the primal objective obtained by the algorithm is at most . Then

Theorem 12.

The value of the primal objective obtained is .

We first show a bound on in any trial.

Lemma 13.

In any trial, .

Proof.

Initially, by (5). Hence the lemma is true for the first trial. Since is doubled after each failed trial, by Lemma 9 some trial with will not fail. Hence, for every trial, . ∎

Proof of Theorem 12. Define . By Corollary 4, at the start of any phase. Within a phase, each variable gets multiplied by at most a factor of . Hence when trial fails, , or . Since the value of doubles after each trial,

(16)

Thus, from (16) and Lemma 13, , proving the theorem. ∎
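The doubling procedure of this section can be sketched as follows; `run_trial` is a hypothetical stand-in for one execution of MPC-Approx at a given scale, and the success/failure interface is assumed:

```python
def solve_with_doubling(run_trial, s0=1.0, max_trials=60):
    """Sketch of the doubling procedure: run the scaled algorithm with
    estimate s; on failure, double s and restart with fresh variables.
    run_trial(s) is assumed to return (succeeded, x_trial).  The final
    solution is the coordinate-wise sum over all trials, which keeps
    the online variables non-decreasing across trials."""
    s = s0
    total = None
    for _ in range(max_trials):
        ok, x_trial = run_trial(s)
        if total is None:
            total = list(x_trial)
        else:
            total = [a + b for a, b in zip(total, x_trial)]
        if ok:
            return s, total
        s *= 2  # failed trial: double the estimate and retry
    raise RuntimeError("estimate never reached a successful trial")
```

Because the per-trial objectives form a geometric series, the summed solution loses only a constant factor over the last (successful) trial, matching the factor-4 overhead stated above.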

2.3 A Lower Bound for Mixed Packing and Covering Online

We give a lower bound on the competitive ratio of any deterministic algorithm for online mixed packing and covering. Given upper bounds and on the number of packing constraints and on the number of variables in any (packing or covering) constraint respectively, we give an example to show the following lower bound.

Theorem 14.

Any deterministic algorithm for OMPC is -competitive.

Our algorithm for OMPC in Section 2.1 is thus nearly tight. For parameters and , we give an example which has packing constraints, at most variables in each covering constraint, and at most variables in each packing constraint. For this example, we show that and any deterministic algorithm gets value . The theorem follows.

We assume that both and are powers of 2 without loss of generality, otherwise we redefine to be the highest power of 2 that is at most the given value of , and redefine similarly. Our example has variables. We partition the variables into pairwise disjoint sets, with each set consisting of variables, and use to refer to the th set. We refer to these sets as blocks. For any set of variables , we use to refer to the sum of the values assigned by the algorithm to the variables in , and use to refer to the expression .

We first show how given two blocks and of size , we can construct covering constraints so that for one of , while the constraints can be satisfied by setting for a single variable , . The covering constraints are given by Algorithm 2. refers to the th harmonic number.

1:  ,
2:  for  do
3:     Offer the covering constraint
4:     Let , be the variables assigned maximum value in and respectively
5:     ,
6:  Offer the covering constraint
Algorithm 2 Given blocks and :
Lemma 15.

Either or