
Online Constrained Optimization over Time Varying Renewal Systems: An Empirical Method

Xiaohan Wei

Department of Electrical Engineering, University of Southern California

xiaohanw@usc.edu

Michael J. Neely

Department of Electrical Engineering, University of Southern California

mjneely@usc.edu

This paper considers constrained optimization over a renewal system. A controller observes a random event at the beginning of each renewal frame and then chooses an action that affects the duration of the frame, the amount of resources used, and a penalty metric. The goal is to make frame-wise decisions so as to minimize the time average penalty subject to time average resource constraints. This problem has applications to task processing and communication in data networks, as well as to certain classes of Markov decision problems. We formulate the problem as a dynamic fractional program and propose an online algorithm which adopts an empirical accumulation as a feedback parameter. Prior work considers a ratio method that needs statistical knowledge of the random events. A key feature of the proposed algorithm is that it does not require knowledge of the statistics of the random events. We prove the algorithm satisfies the desired constraints and achieves near optimality with probability 1.

Key words: renewal system, Markov decision processes, stochastic optimization, online optimization

MSC2000 subject classification: 90C15, 90C30, 90C40, 93E35

History:

Consider a system that operates over a continuous timeline. The timeline is divided into back-to-back periods called renewal frames and the start of each frame is called a renewal (see Fig. 1). The system state is refreshed at each renewal. At the start of each renewal frame $n$ the controller observes a random event $\omega[n]$ and then takes an action $\alpha[n]$ from an action set $\mathcal{A}$. The pair $(\omega[n], \alpha[n])$ affects: (i) the duration of that renewal frame; (ii) a vector of resource expenditures for that frame; (iii) a penalty incurred on that frame. The goal is to choose actions over time to minimize the time average penalty subject to time average constraints on the resources.

• Task processing: Consider a device that processes computational tasks back-to-back. Each renewal period corresponds to the time required to complete a single task. The random event $\omega[n]$ observed for task $n$ corresponds to a vector of task parameters, including the type, size, and resource requirements for that particular task. The action set $\mathcal{A}$ consists of different processing mode options, and the specific action $\alpha[n]$ determines the processing time, energy expenditure, and task quality. In this case, task quality can be defined as a negative penalty, and the goal is to maximize the time average quality subject to power constraints and task completion rate constraints.

• Markov decision problems: Consider a discrete time Markov decision problem over an infinite horizon and with constraints on the average cost per slot (see  and  for expositions of Markov decision theory). Assume there is a special state that is recurrent under any sequence of actions (similar assumptions are used in ). Renewals are defined as revisitation times to that state. A random event $\omega[n]$ is observed upon each revisitation and affects the Markov properties for all timeslots of that frame. This can be viewed as a two-timescale Markov decision problem. In this case, the control action $\alpha[n]$ chosen on frame $n$ is in fact a policy for making decisions over the slots until the next renewal. The algorithm of the current paper does not require knowledge of the statistics of $\omega[n]$, but does require knowledge of the conditional Markov transition probabilities (given $\omega[n]$) over each frame.

In the special case when the set of all possible random events is finite, the action set is finite, and the probabilities of are known, the problem can be solved (offline) by finding a solution to a linear fractional program (see also  for basic renewal-reward theory). Methods for solving linear fractional programs are in . The current paper seeks an online method that does not require statistical knowledge and that can handle possibly infinite random event sets and action sets.

Prior work in  considers the same renewal optimization problem as the current paper and solves it via a drift-plus-penalty ratio method. This method requires an action to be chosen that minimizes a ratio of expectations. However, this choice requires knowledge of the statistics of . Methods for approximate minimization of the ratio are considered in . A heuristic algorithm is also proposed in  that is easier to implement because it does not require knowledge of statistics. That algorithm is partially analyzed: It is shown that if the process converges, then it converges to a near-optimal point. However, whether or not it converges is unknown.

The current paper develops a new algorithm that is easy to implement, requires neither statistics of nor explicit estimation of them, and is fully analyzed with convergence properties that provably hold with probability 1. In particular, the feasibility of the algorithm is justified through a stability analysis of virtual queues and the near optimality comes from a novel construction of exponential supermartingales. Simulation experiments on a time varying constrained MDP conform with theoretical results.

The renewal system problem considered in this paper is a generalization of stochastic optimization over fixed time slots. Problems are categorized based on whether or not the random event is observed before the decision is made. Cases where the random event is observed are useful in network optimization problems, where max-weight, Lyapunov optimization, fluid model methods, and dual subgradient methods are often used.

Cases where the random events are not observed fall under online convex optimization. Various algorithms have been developed for unconstrained learning, including (but not limited to) the weighted majority algorithm, multiplicative weighting, following the perturbed leader, and online gradient descent. The resource constrained learning problem is studied in  and . Moreover, online learning with an underlying MDP structure has also been treated using modified multiplicative weighting and an improved following the perturbed leader.

Consider a system where the timeline is divided into back-to-back time periods called frames. At the beginning of frame $n$ ($n \in \{0,1,2,\dots\}$), a controller observes the realization of a random variable $\omega[n]$, which is an i.i.d. copy of a random variable $\omega$ taking values in a compact set $\Omega$ with a distribution function unknown to the controller. Then, after observing the random event, the controller chooses an action vector $\alpha[n] \in \mathcal{A}$. The tuple $(\omega[n], \alpha[n])$ then induces the following random variables:

• The penalty received during frame $n$: $y[n]$.

• The length of frame $n$: $T[n]$.

• A vector of resource consumptions during frame $n$: $z[n] = (z_1[n], z_2[n], \dots, z_L[n])$.

We assume that, given $\omega[n] = \omega$ and $\alpha[n] = \alpha$ at frame $n$, $(y[n], T[n], z[n])$ is a random vector independent of the outcomes of previous frames, with known expectations. We then denote these conditional expectations as

$$\hat y(\omega,\alpha) = E[y[n] \mid \omega,\alpha], \quad \hat T(\omega,\alpha) = E[T[n] \mid \omega,\alpha], \quad \hat z(\omega,\alpha) = E[z[n] \mid \omega,\alpha],$$

which are all deterministic functions of $\omega$ and $\alpha$. This notation is useful when we want to highlight the action we choose. The analysis assumes a single action $\alpha[n]$ in response to the observed $\omega[n]$ at each frame. Nevertheless, an ergodic MDP can fit into this model by defining the action as a selection of a policy to implement over that frame, so that the corresponding $\hat y$, $\hat T$ and $\hat z$ are expectations over the frame under the chosen policy.
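For concreteness, the frame-by-frame interaction can be sketched in code. Everything below is a hypothetical toy instance (the uniform distribution, the scalar action, and the single resource are our illustrative choices, not from the paper):

```python
import random

# Toy renewal system: on each frame the controller observes omega,
# picks an action alpha, and the pair (omega, alpha) induces the
# frame outcome (penalty y, frame length T, resource vector z).
# All distributions here are hypothetical placeholders.

def observe_omega():
    """i.i.d. random event observed at the start of a frame."""
    return random.uniform(0.0, 1.0)

def run_frame(omega, alpha):
    """Return (penalty y[n], frame length T[n], resources z[n]) for (omega, alpha)."""
    T = 1.0 + omega * alpha          # frame length depends on (omega, alpha)
    y = alpha ** 2                   # penalty incurred on this frame
    z = [omega * (1.0 - alpha)]      # L = 1 resource consumption
    return y, T, z
```

A real instance would replace these closed forms with the actual random frame dynamics; only the interface (observe, act, receive the triple) matters for the algorithm.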

Let

$$\overline y[N] = \frac{1}{N}\sum_{n=0}^{N-1} y[n], \quad \overline T[N] = \frac{1}{N}\sum_{n=0}^{N-1} T[n], \quad \overline z_l[N] = \frac{1}{N}\sum_{n=0}^{N-1} z_l[n], \quad l \in \{1,2,\dots,L\}.$$

The goal is to minimize the time average penalty subject to constraints on resource consumptions. Specifically, we aim to solve the following fractional programming problem:

$$\begin{aligned} \min \quad & \limsup_{N\to\infty}\ \frac{\overline y[N]}{\overline T[N]} && (1)\\ \text{s.t.} \quad & \limsup_{N\to\infty}\ \frac{\overline z_l[N]}{\overline T[N]} \le c_l, \quad \forall l \in \{1,2,\dots,L\}, && (2)\\ & \alpha[n] \in \mathcal{A}, \quad \forall n \in \{0,1,2,\dots\}, && (3) \end{aligned}$$

where $c_l$, $l \in \{1,2,\dots,L\}$, are nonnegative constants, and both the minimum and the constraints are taken in an almost sure sense. Finally, we use $\theta^*$ to denote the minimum that can be achieved by solving the above optimization problem. For notational simplicity, let

$$K[n] = \sqrt{\sum_{l=1}^{L}\left(z_l[n] - c_l T[n]\right)^2}. \qquad (4)$$

Our main result requires the following assumptions; their importance will become clear as we proceed. We begin with a boundedness assumption:

###### Assumption 1 (Exponential type)

Given $\omega[n] = \omega$ and $\alpha[n] = \alpha$ for a fixed pair $(\omega,\alpha)$, it holds that $T[n] \ge 1$ with probability 1, and $y[n]$, $K[n]$ and $T[n]$ are of exponential type, i.e. there exists a constant $\eta > 0$ s.t.

$$E\left[\exp(\eta y[n]) \mid \omega,\alpha\right] \le B, \quad E\left[\exp(\eta K[n]) \mid \omega,\alpha\right] \le B, \quad E\left[\exp(\eta T[n]) \mid \omega,\alpha\right] \le B,$$

where $B$ is a positive constant.

The following proposition is a simple consequence of the above assumption:

###### Proposition 1

Suppose Assumption 1 holds. Let $X[n]$ be any of the three random variables $y[n]$, $T[n]$ and $K[n]$ for a fixed $n$. Then, given any $\omega \in \Omega$ and $\alpha \in \mathcal{A}$,

$$E\left[X[n] \mid \omega,\alpha\right] \le B/\eta, \qquad E\left[X[n]^2 \mid \omega,\alpha\right] \le 2B/\eta^2.$$

The proof follows easily by expanding $E[\exp(\eta X[n]) \mid \omega,\alpha]$ in a Taylor series and bounding $E[X[n] \mid \omega,\alpha]$ and $E[X[n]^2 \mid \omega,\alpha]$ using the first and the second order terms, respectively.

###### Assumption 2

There exists a positive constant $\theta_{\max}$ large enough so that the optimal objective of (1)-(3), denoted as $\theta^*$, falls into $[0, \theta_{\max}]$ with probability 1.

###### Remark 1

If $\theta^* < 0$, then we can find a constant $y_0$ large enough so that $\theta^* + y_0 \ge 0$. Then, defining a new penalty $\tilde y[n] \triangleq y[n] + y_0 T[n]$, it is easy to see that minimizing the new time average penalty ratio is equivalent to minimizing the original one, and the optimal objective of the new problem is $\theta^* + y_0$, which is nonnegative.
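To see the equivalence concretely (writing $y_0$ for the shifting constant and $\tilde y[n] = y[n] + y_0 T[n]$ for the shifted penalty, our notation for the construction sketched in the remark), the ratio simply shifts by $y_0$:

```latex
\frac{\sum_{n=0}^{N-1} \tilde y[n]}{\sum_{n=0}^{N-1} T[n]}
= \frac{\sum_{n=0}^{N-1} \bigl(y[n] + y_0 T[n]\bigr)}{\sum_{n=0}^{N-1} T[n]}
= \frac{\sum_{n=0}^{N-1} y[n]}{\sum_{n=0}^{N-1} T[n]} + y_0,
```

so the two problems have the same minimizers and their optimal objectives differ by exactly $y_0$.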

###### Assumption 3

Let $(\hat y(\omega,\alpha), \hat T(\omega,\alpha), \hat z(\omega,\alpha))$ be the performance vector under a given $(\omega,\alpha)$ pair. Then, for any fixed $\omega \in \Omega$, the set of achievable performance vectors over all $\alpha \in \mathcal{A}$ is compact.

In order to state the next assumption, we need the notion of randomized stationary policy. We start with the definition:

###### Definition 1 (Randomized stationary policy)

A randomized stationary policy is an algorithm in which, at the beginning of each frame $n$, after observing the realization $\omega[n]$, the controller chooses an action $\alpha^*[n]$ with a conditional probability that depends only on $\omega[n]$.

###### Assumption 4 (Bounded achievable region)

Let

$$(\overline y,\ \overline T,\ \overline z) \triangleq E\left[\left(\hat y(\omega,\alpha^*),\ \hat T(\omega,\alpha^*),\ \hat z(\omega,\alpha^*)\right)\right]$$

be the one-shot average of a randomized stationary policy. Then, the set of all achievable one-shot averages $(\overline y, \overline T, \overline z)$ is bounded.

###### Assumption 5 (ξ-slackness)

There exists a randomized stationary policy $\{\alpha^{(\xi)}[n]\}_{n=0}^{\infty}$ such that the following holds:

$$\frac{E\left[\hat z_l(\omega[n],\alpha^{(\xi)}[n])\right]}{E\left[\hat T(\omega[n],\alpha^{(\xi)}[n])\right]} = c_l - \xi, \quad \forall l \in \{1,2,\dots,L\},$$

where $\xi > 0$ is a constant.

###### Remark 2 (Measurability issue)

We implicitly assume that the policies for choosing $\alpha[n]$ in reaction to $\omega[n]$ result in a measurable $\alpha[n]$, so that $y[n]$, $T[n]$, $z[n]$ are valid random variables and the expectations in Assumptions 4 and 5 are well defined. This assumption is mild. For example, when the sets $\Omega$ and $\mathcal{A}$ are finite, it holds for any randomized stationary policy. More generally, if $\Omega$ and $\mathcal{A}$ are measurable subsets of separable metric spaces, it holds whenever the conditional probability in Definition 1 is “regular” (see  for an exposition of regular conditional probabilities and related topics) and $\hat y(\omega,\alpha)$, $\hat T(\omega,\alpha)$, $\hat z(\omega,\alpha)$ are continuous functions on $\Omega \times \mathcal{A}$.

We define a vector of virtual queues $Q[n] = (Q_1[n], \dots, Q_L[n])$ which are 0 at $n = 0$ and updated as follows:

$$Q_l[n+1] = \max\left\{Q_l[n] + z_l[n] - c_l T[n],\ 0\right\}. \qquad (5)$$

The intuition behind this virtual queue idea is that if the algorithm can stabilize $Q_l[n]$, then the “arrival rate” is below the “service rate” and the corresponding constraint is satisfied. The proposed algorithm then proceeds as in Algorithm 1 via two fixed parameters $V > 0$ and $\theta_{\max} > 0$, and an additional process $\theta[n]$ that is initialized to $\theta[0] = 0$. For any real number $x$, the notation $[x]_0^{\theta_{\max}}$ stands for the truncation of $x$ to the interval $[0, \theta_{\max}]$:

$$[x]_0^{\theta_{\max}} = \begin{cases} \theta_{\max}, & \text{if } x \in (\theta_{\max}, +\infty);\\ x, & \text{if } x \in [0, \theta_{\max}];\\ 0, & \text{if } x \in (-\infty, 0). \end{cases}$$
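In code, the truncation $[x]_0^{\theta_{\max}}$ and the queue update (5) are simply a clip and a per-coordinate max; a minimal sketch (the function names are ours):

```python
def trim(x, theta_max):
    """The truncation [x]_0^{theta_max}: clip x into [0, theta_max]."""
    return max(0.0, min(x, theta_max))

def update_queues(Q, z, T, c):
    """Virtual queue update (5): Q_l[n+1] = max(Q_l[n] + z_l[n] - c_l*T[n], 0)."""
    return [max(q + zl - cl * T, 0.0) for q, zl, cl in zip(Q, z, c)]
```

Note that `update_queues` treats the frame length `T` as a scalar shared by all $L$ queues, matching (5).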

Note that we can rewrite (6) as the following deterministic form:

$$V\left(\hat y(\omega[n],\alpha[n]) - \theta[n]\hat T(\omega[n],\alpha[n])\right) + \sum_{l=1}^{L} Q_l[n]\left(\hat z_l(\omega[n],\alpha[n]) - c_l \hat T(\omega[n],\alpha[n])\right).$$

Thus, Algorithm 1 proceeds by observing $\omega[n]$ on each frame $n$ and then choosing $\alpha[n] \in \mathcal{A}$ to minimize the above deterministic function. We can now see that the algorithm uses only knowledge of the current realization $\omega[n]$, not the statistics of $\omega$. Also, the compactness assumption (Assumption 3) guarantees that the minimum of (6) is always achievable. For the rest of the paper, we introduce several abbreviations:

$$\begin{aligned} D[n] &\triangleq \sum_{i=0}^{n}\left(y[i]-\theta[i]T[i]+\frac{1}{V}\sum_{l=1}^{L}Q_l[i](z_l[i]-c_lT[i])\right) && \text{(empirical accumulation)},\\ \hat\theta[n+1] &\triangleq \frac{1}{(n+1)^{\delta}}\sum_{i=0}^{n}\left(y[i]-\theta[i]T[i]+\frac{1}{V}\sum_{l=1}^{L}Q_l[i](z_l[i]-c_lT[i])\right) && \text{(original pseudo average)},\\ \theta[n+1] &\triangleq \left[\frac{1}{(n+1)^{\delta}}\sum_{i=0}^{n}\left(y[i]-\theta[i]T[i]+\frac{1}{V}\sum_{l=1}^{L}Q_l[i](z_l[i]-c_lT[i])\right)\right]_0^{\theta_{\max}} && \text{(trimmed pseudo average)}. \end{aligned}$$

The intuitive reason why we need the trimmed pseudo average is to ensure that the empirical accumulation does not blow up, which plays an important role in the analysis.
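One frame of the algorithm can then be sketched as follows, assuming a finite stand-in for $\mathcal{A}$ and callables `y_hat`, `T_hat`, `z_hat` for the known conditional expectations; for simplicity the realized outcomes are taken equal to their expectations, which a real simulation would replace with random draws:

```python
def frame_step(n, omega, Q, theta, history_sum, V, delta, theta_max, c,
               actions, y_hat, T_hat, z_hat):
    """One frame: minimize the deterministic expression (6), then update
    the virtual queues and the trimmed pseudo average."""
    # Cost (6): V*(y_hat - theta*T_hat) + sum_l Q_l*(z_hat_l - c_l*T_hat).
    def cost(a):
        return (V * (y_hat(omega, a) - theta * T_hat(omega, a))
                + sum(q * (z_hat(omega, a)[l] - c[l] * T_hat(omega, a))
                      for l, q in enumerate(Q)))
    alpha = min(actions, key=cost)

    # Realized frame outcomes (here: stand-ins equal to the expectations).
    y, T, z = y_hat(omega, alpha), T_hat(omega, alpha), z_hat(omega, alpha)

    # Accumulate the summand, form the trimmed pseudo average, update queues.
    history_sum += y - theta * T + sum(
        q * (z[l] - c[l] * T) for l, q in enumerate(Q)) / V
    theta_next = max(0.0, min(history_sum / (n + 1) ** delta, theta_max))
    Q_next = [max(q + z[l] - c[l] * T, 0.0) for l, q in enumerate(Q)]
    return alpha, Q_next, theta_next, history_sum
```

The caller keeps `history_sum` (the running sum of summands) across frames and feeds each frame's outputs back in as the next frame's state.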

In this section, we prove that the proposed algorithm gives a sequence of actions which satisfies all desired constraints with probability 1. Specifically, we show that all virtual queues are stable with probability 1, for which we leverage an important lemma from  to obtain an exponential bound on the norm of $Q[n]$.

The start of our proof uses the drift-plus-penalty methodology; see  for a general introduction to this topic. We define the squared 2-norm of the virtual queue vector $Q[n]$ as:

$$\|Q[n]\|^2 = \sum_{l=1}^{L} Q_l[n]^2.$$

Define the Lyapunov drift as

$$\Delta(Q[n]) = \frac{1}{2}\left(\|Q[n+1]\|^2 - \|Q[n]\|^2\right).$$

Next, define the penalty function at frame $n$ as $V(y[n] - \theta[n]T[n])$, where $V > 0$ is a fixed trade-off parameter. Then, the drift-plus-penalty methodology suggests that we can stabilize the virtual queues by choosing an action $\alpha[n] \in \mathcal{A}$ to greedily minimize the following drift-plus-penalty expression, given the observed $Q[n]$, $\theta[n]$ and $\omega[n]$:

$$E\left[V(y[n]-\theta[n]T[n]) + \Delta(Q[n]) \,\middle|\, Q[n], \theta[n], \omega[n]\right].$$

The penalty term uses the $\theta[n]$ variable, which depends on events from all previous frames. This penalty does not fit the rubric of  and convergence of the algorithm does not follow from prior work. A significant thrust of the current paper is the convergence analysis via the trimmed pseudo averages defined in the previous subsection.

In order to obtain an upper bound on $\Delta(Q[n])$, we square both sides of (5) and use the fact that $\max\{x,0\}^2 \le x^2$, giving

$$Q_l[n+1]^2 \le Q_l[n]^2 + (z_l[n]-c_lT[n])^2 + 2Q_l[n](z_l[n]-c_lT[n]). \qquad (7)$$

Then we have

$$\begin{aligned} &E\left[V(y[n]-\theta[n]T[n])+\Delta(Q[n]) \,\middle|\, Q[n],\theta[n],\omega[n]\right]\\ &\quad\le E\left[V(y[n]-\theta[n]T[n])+\sum_{l=1}^{L}Q_l[n](z_l[n]-c_lT[n]) \,\middle|\, Q[n],\theta[n],\omega[n]\right] + \frac{1}{2}\sum_{l=1}^{L}E\left[(z_l[n]-c_lT[n])^2\right]\\ &\quad\le E\left[V(y[n]-\theta[n]T[n])+\sum_{l=1}^{L}Q_l[n](z_l[n]-c_lT[n]) \,\middle|\, Q[n],\theta[n],\omega[n]\right] + \frac{B}{\eta^2}, \qquad (8) \end{aligned}$$

where the last inequality follows from Proposition 1. Thus, as we have already seen in Algorithm 1, the proposed algorithm observes the vector $Q[n]$, the random event $\omega[n]$ and the trimmed pseudo average $\theta[n]$ at frame $n$, and minimizes the right hand side of (8).

In this section, we show how the bound (8) leads to the feasibility of the proposed algorithm. Define $\mathcal{H}_n$ as the system history up until frame $n$. Formally, $\{\mathcal{H}_n\}_{n=0}^{\infty}$ is a filtration where each $\mathcal{H}_n$ is the $\sigma$-algebra generated by all the random variables before frame $n$. Notice that since $Q[n]$ and $\theta[n]$ depend only on the events before frame $n$, $\mathcal{H}_n$ contains both $Q[n]$ and $\theta[n]$. The following important lemma gives a stability criterion for any real random process with a certain negative drift property:

###### Lemma 1 (Theorem 2.3 of )

Let $\{R[n]\}_{n=0}^{\infty}$ be a real random process satisfying the following two conditions for a fixed $r > 0$:

1. For any $n$, $E\left[e^{r(R[n+1]-R[n])} \mid \mathcal{H}_n\right] \le \Gamma$, for some $\Gamma > 0$.

2. Given $R[n] \ge \sigma$, $E\left[e^{r(R[n+1]-R[n])} \mid \mathcal{H}_n\right] \le \rho$, with some $\rho < 1$.

Suppose further that $R[0]$ is given and finite. Then, for every $n \in \{0,1,2,\dots\}$, the following bound holds:

$$E\left[e^{rR[n]}\right] \le \rho^n e^{rR[0]} + \frac{1-\rho^n}{1-\rho}\,\Gamma e^{r\sigma}.$$

Thus, in order to show the stability of the virtual queue process, it is enough to verify the above two conditions with $R[n] = \|Q[n]\|$. The following lemma shows that $\|Q[n]\|$ satisfies these two conditions:

###### Lemma 2 (Drift condition)

Let $R[n] = \|Q[n]\|$. Then, $R[n]$ satisfies the two conditions in Lemma 1 with the following constants:

$$\Gamma = B, \quad r = \min\left\{\eta,\ \frac{\xi\eta^2}{4B}\right\}, \quad \sigma = C_0 V, \quad \rho = 1 - \frac{r\xi}{2} + \frac{2B}{\eta^2}r^2 < 1,$$

where $C_0$ is a constant that does not depend on $V$.

The central idea of the proof is to plug the $\xi$-slackness policy specified in Assumption 5 into the right hand side of (8). A similar idea was presented in Lemma 6 of  under the assumption that the increments of the virtual queue process are bounded. Here, we generalize the idea to the case where the increments of the virtual queues contain exponential type random variables, namely $z_l[n]$ and $T[n]$. Note that the boundedness of $\theta[n]$ is crucial for the argument to hold, which justifies the truncation of the pseudo average in the algorithm. Lemma 2 is proved in the Appendix.

Combining the above two lemmas, we immediately have the following corollary:

###### Corollary 1 (Exponential decay)

The following bound holds for any $n \in \{0,1,2,\dots\}$ under the proposed algorithm:

$$E\left[e^{r\|Q[n]\|}\right] \le D, \qquad (9)$$

where

$$D = 1 + \frac{B}{1-\rho}\,e^{rC_0V},$$

and $r$, $\rho$, $C_0$ are as defined in Lemma 2.

With Corollary 1 in hand, we can prove the following theorem:

###### Theorem 1 (Feasibility)

All constraints in (1)-(3) are satisfied under the proposed algorithm with probability 1.

Proof.

By the queue updating rule (5), for any $l \in \{1,2,\dots,L\}$ and any frame $n$, one has

$$Q_l[n+1] \ge Q_l[n] + z_l[n] - c_l T[n].$$

Fix $N$ as a positive integer. Then, summing over $n \in \{0,1,\dots,N-1\}$,

$$Q_l[N] \ge Q_l[0] + \sum_{n=0}^{N-1}\left(z_l[n] - c_l T[n]\right).$$

Since $Q_l[0] = 0$ and $T[n] \ge 1$ for all $n$,

$$\frac{\sum_{n=0}^{N-1} z_l[n]}{\sum_{n=0}^{N-1} T[n]} - c_l \le \frac{Q_l[N]}{\sum_{n=0}^{N-1} T[n]} \le \frac{Q_l[N]}{N}. \qquad (10)$$

Define the event

$$A_N^{(\varepsilon)} = \left\{Q_l[N] > \varepsilon N\right\}.$$

By the Markov inequality and Corollary 1, for any $\varepsilon > 0$ we have

$$\Pr\left(Q_l[N] > \varepsilon N\right) \le \Pr\left(r\|Q[N]\| > r\varepsilon N\right) = \Pr\left(e^{r\|Q[N]\|} > e^{r\varepsilon N}\right) \le \frac{E\left[e^{r\|Q[N]\|}\right]}{e^{r\varepsilon N}} \le De^{-r\varepsilon N},$$

where $D$ is defined in Corollary 1. Thus, we have

$$\sum_{N=0}^{\infty}\Pr\left(Q_l[N] > \varepsilon N\right) \le D\sum_{N=0}^{\infty}e^{-r\varepsilon N} < +\infty.$$

Thus, by the Borel-Cantelli lemma (),

$$\Pr\left(A_N^{(\varepsilon)} \text{ occurs infinitely often}\right) = 0.$$

Since $\varepsilon > 0$ is arbitrary, letting $\varepsilon \to 0$ gives

$$\Pr\left(\lim_{N\to\infty}\frac{Q_l[N]}{N} = 0\right) = 1.$$

Finally, taking the $\limsup_{N\to\infty}$ on both sides of (10) and substituting in the above equation gives the claim. $\blacksquare$
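As a quick numeric sanity check of the summability step above (the values for $D$, $r$, $\varepsilon$ below are illustrative, not constants from the paper), the bounding series is geometric with sum $D/(1-e^{-r\varepsilon})$:

```python
import math

def tail_bound_sum(D, r, eps, n_terms=10000):
    """Partial sum of D * exp(-r*eps*N) over N = 0, 1, ..., which
    upper-bounds sum_N Pr(Q_l[N] > eps*N) in the feasibility proof."""
    return sum(D * math.exp(-r * eps * N) for N in range(n_terms))

def tail_bound_closed(D, r, eps):
    """Closed form of the full geometric series D / (1 - exp(-r*eps))."""
    return D / (1.0 - math.exp(-r * eps))
```

For any $r\varepsilon > 0$ the partial sums converge to the finite closed form, which is what the Borel-Cantelli argument requires.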

In this section, we show that the proposed algorithm achieves a time average penalty within $O(1/V)$ of the optimal objective $\theta^*$. Since the algorithm meets all the constraints, it follows that

$$\limsup_{n\to\infty}\frac{\sum_{i=0}^{n-1}y[i]}{\sum_{i=0}^{n-1}T[i]} \ge \theta^*, \quad \text{w.p.1.}$$

Thus, it is enough to prove the following theorem:

###### Theorem 2 (Near optimality)

The objective value produced by the proposed algorithm is near optimal, in the sense that

$$\limsup_{n\to\infty}\frac{\sum_{i=0}^{n-1}y[i]}{\sum_{i=0}^{n-1}T[i]} \le \theta^* + \frac{B}{\eta^2 V}, \quad \text{w.p.1,}$$

i.e. the algorithm achieves $O(1/V)$ near optimality.

The proof of this theorem is relatively involved, thus, we would like to sketch the roadmap of our proof before jumping into the details.

The key point of the proof is to bound the pseudo average $\hat\theta[n]$ asymptotically from above by $\theta^* + \frac{B}{\eta^2 V}$, which is achieved in Theorem 3 below. We then prove Theorem 3 through the following three-step construction:

• Introduce a term-wise truncated version of $\hat\theta[n]$, denoted $\tilde\theta[n]$, which has the same limit as $\hat\theta[n]$ (shown in Lemma 5), so that it is enough to bound $\tilde\theta[n]$ asymptotically.

• Construct a process as in (14) and show that is a necessary condition for (shown in Lemma 7).

• Show that the corresponding event occurs only finitely often (with probability 1) by an exponential supermartingale construction related to the process in (14) (shown in Lemma 8 and Lemma 9).

We start with a preparation lemma illustrating that the original pseudo average $\hat\theta[n]$ behaves almost the same as the trimmed pseudo average $\theta[n]$. Recall that $\theta[n]$ is defined as:

$$\theta[n] = \left[\hat\theta[n]\right]_0^{\theta_{\max}}.$$
###### Lemma 3 (Equivalence relation)

For any ,

1. if and only if .

2. if and only if .

3. if and only if .

4. if and only if .

This lemma is intuitive and the proof is given in the Appendix. We will see that $\theta[n]$ is sometimes easier to work with than $\hat\theta[n]$, and we will prove results on $\theta[n]$ which extend naturally to $\hat\theta[n]$ by Lemma 3.

The following lemma states that the optimality of (1)-(3) is achievable within the closure of the set of all one-shot averages specified in Assumption 4:

###### Lemma 4 (Stationary optimality)

Let $\theta^*$ be the optimal objective of (1)-(3). Then, there exists a tuple $(y^*, T^*, z^*)$ in the closure of the set of one-shot averages specified in Assumption 4, such that the following hold:

$$y^*/T^* = \theta^*, \qquad (11)$$
$$z_l^*/T^* \le c_l, \quad \forall l \in \{1,2,\dots,L\}, \qquad (12)$$

i.e. the optimality is achievable within the closure of the set of one-shot averages.

The proof of this lemma is similar to the proof of Theorem 4.5 as well as Lemma 7.1 of . We omit the details for brevity.

We start the truncation by picking an $\varepsilon_0 > 0$ small enough so that $\theta^* + \varepsilon_0 \le \theta_{\max}$. We aim to bound $\limsup_{n\to\infty}\hat\theta[n]$; by Lemma 3, it is enough to bound $\limsup_{n\to\infty}\theta[n]$. The following lemma tells us it is enough to prove the bound on a further term-wise truncated version of $\hat\theta[n]$.

###### Lemma 5 (Truncation lemma)

Consider the following alternative pseudo average $\tilde\theta[n]$ obtained by truncating each summand:

$$\tilde\theta[n+1] = \frac{1}{(n+1)^{\delta}}\sum_{i=0}^{n}\left[\left(y[i]-\theta[i]T[i]+\frac{1}{V}\sum_{l=1}^{L}Q_l[i](z_l[i]-c_lT[i])\right)\wedge\left(\left(\frac{2}{\eta}+\frac{4\sqrt{L}}{\eta r V}\right)\log^2(i+1)\right)\right],$$

where $a \wedge b \triangleq \min\{a, b\}$, $\eta$ is defined in Assumption 1 and $r$ is defined in Lemma 2. Then, we have

$$\limsup_{n\to\infty}\hat\theta[n] = \limsup_{n\to\infty}\tilde\theta[n].$$
Proof.

Consider any frame $i$ such that there is a discrepancy between the summands of $\hat\theta[n]$ and $\tilde\theta[n]$, i.e.

$$y[i]-\theta[i]T[i]+\frac{1}{V}\sum_{l=1}^{L}Q_l[i](z_l[i]-c_lT[i]) > \left(\frac{2}{\eta}+\frac{4\sqrt{L}}{\eta r V}\right)\log^2(i+1). \qquad (13)$$

By the Cauchy-Schwarz inequality, this implies

 y[i]−θ[i]T[i]+1V ⎷L∑l=1Ql[i]2 ⎷L∑l=1(zl[i]−clT[i])2>(2η+4√LηrV)log2(i+1).

Thus, at least one of the following three events happened:

1. $A_i \triangleq \left\{y[i]-\theta[i]T[i] > \frac{2}{\eta}\log^2(i+1)\right\}$.

2. $B_i \triangleq \left\{\|Q[i]\| > \frac{2\sqrt{L}}{r}\log(i+1)\right\}$.

3. $E_i \triangleq \left\{K[i] > \frac{2}{\eta}\log(i+1)\right\}$.

where $K[i]$ is defined in (4). Indeed, the occurrence of at least one of the three events is necessary for (13) to happen. We then argue that these three events jointly occur only finitely many times. Thus, as $n \to \infty$, the discrepancies are negligible.

Assume the event $A_i$ occurs. Then, since $\theta[i]T[i] \ge 0$, it follows that $y[i] > \frac{2}{\eta}\log^2(i+1)$. Then, we have

$$\Pr(A_i) \le \Pr\left(y[i] > \tfrac{2}{\eta}\log^2(i+1)\right) = \Pr\left(e^{\eta y[i]} > e^{2\log^2(i+1)}\right) \le \frac{E\left[e^{\eta y[i]}\right]}{(i+1)^{2\log(i+1)}} \le \frac{B}{(i+1)^{2\log(i+1)}},$$

where the second to last inequality follows from Markov inequality and the last inequality follows from Assumption 1.

Assume the event $B_i$ occurs. Then, since $L \ge 1$, we have

$$\|Q[i]\| = \sqrt{\sum_{l=1}^{L}Q_l[i]^2} > \frac{2\sqrt{L}}{r}\log(i+1) \ge \frac{2}{r}\log(i+1).$$

Thus,

$$\Pr(B_i) \le \Pr\left(\|Q[i]\| > \tfrac{2}{r}\log(i+1)\right) = \Pr\left(e^{r\|Q[i]\|} > e^{2\log(i+1)}\right) \le \frac{E\left[e^{r\|Q[i]\|}\right]}{(i+1)^2} \le \frac{D}{(i+1)^2},$$

where the second to last inequality follows from Markov inequality and the last inequality follows from Corollary 1.

Assume the event $E_i$ occurs. Again, by Assumption 1 and the Markov inequality,

$$\Pr(E_i) = \Pr\left(K[i] > \tfrac{2}{\eta}\log(i+1)\right) = \Pr\left(e^{\eta K[i]} > e^{2\log(i+1)}\right) \le \frac{E\left[e^{\eta K[i]}\right]}{(i+1)^2} \le \frac{B}{(i+1)^2},$$

where the last inequality follows from Assumption 1 again. Now, by a union bound,

$$\Pr(A_i \cup B_i \cup E_i) \le \Pr(A_i) + \Pr(B_i) + \Pr(E_i) \le \frac{B}{(i+1)^{2\log(i+1)}} + \frac{B+D}{(i+1)^2},$$

and thus,

$$\sum_{i=0}^{\infty}\Pr(A_i \cup B_i \cup E_i) \le \sum_{i=0}^{\infty}\left(\frac{B}{(i+1)^{2\log(i+1)}} + \frac{B+D}{(i+1)^2}\right) < \infty.$$

By the Borel-Cantelli lemma, the event $A_i \cup B_i \cup E_i$ occurs only finitely many times with probability 1, and our proof is finished. $\blacksquare$

Lemma 5 is crucial for the rest of the proof. Specifically, it creates an alternative sequence $\tilde\theta[n]$ which has the following two properties:

1. We know exactly what the upper bound of each of the summands is, whereas in $\hat\theta[n]$ there is no deterministic bound for the summands due to $Q_l[i]$ and the other exponential type random variables.

2. For any $n$, we have $\tilde\theta[n] \le \hat\theta[n]$. Thus, if $\tilde\theta[n] > c$ for some constant $c$ and some $n$, then $\hat\theta[n] > c$ as well.

The following preliminary lemma demonstrates a negative drift property for each of the summands in $\tilde\theta[n]$.

###### Lemma 6 (Key feature inequality)

For any $i$, if $\theta[i] \ge \theta^* + \varepsilon_0$, then we have

$$E\left[\left(y[i]-\theta[i]T[i]+\frac{1}{V}\sum_{l=1}^{L}Q_l[i](z_l[i]-c_lT[i])\right)\wedge\left(\left(\frac{2}{\eta}+\frac{4\sqrt{L}}{\eta r V}\right)\log^2(i+1)\right)\,\middle|\,\mathcal{H}_i\right] \le -\varepsilon_0/V.$$
Proof.

Since the proposed algorithm minimizes (6) over all possible decisions in $\mathcal{A}$, it must achieve a value less than or equal to that of any randomized stationary algorithm $\alpha^*[i]$. This in turn implies

$$\begin{aligned} &E\left[y[i]-\theta[i]T[i]+\frac{1}{V}\sum_{l=1}^{L}Q_l[i](z_l[i]-c_lT[i])\,\middle|\,\mathcal{H}_i,\omega[i]\right]\\ &\quad\le E\left[\hat y(\omega[i],\alpha^*[i])-\theta[i]\hat T(\omega[i],\alpha^*[i])+\frac{1}{V}\sum_{l=1}^{L}Q_l[i]\left(\hat z_l(\omega[i],\alpha^*[i])-c_l\hat T(\omega[i],\alpha^*[i])\right)\,\middle|\,\mathcal{H}_i,\omega[i]\right]. \end{aligned}$$

Taking expectations on both sides with respect to $\omega[i]$ and using the fact that randomized stationary algorithms are i.i.d. over frames and independent of $\mathcal{H}_i$, we have

$$E\left[y[i]-\theta[i]T[i]+\frac{1}{V}\sum_{l=1}^{L}Q_l[i](z_l[i]-c_lT[i])\,\middle|\,\mathcal{H}_i\right] \le \overline y - \theta[i]\overline T + \frac{1}{V}\sum_{l=1}^{L}Q_l[i]\left(\overline z_l - c_l\overline T\right),$$

for any one-shot average $(\overline y, \overline T, \overline z)$ achievable by a randomized stationary policy. Since the tuple $(y^*, T^*, z^*)$ specified in Lemma 4 is in the closure of the set of such averages, we can replace $(\overline y, \overline T, \overline z)$ by $(y^*, T^*, z^*)$ and the inequality still holds. This gives

 E[(y[i]−θ[i]T[i]+1VL∑l=1Ql[i](zl[i]−clT[i]))∣∣ ∣∣Hi,ω[