# Exact Sampling of Stationary and Time-Reversed Queues

###### Abstract

We provide the first algorithm that, under minimal assumptions, allows one to simulate the stationary waiting-time sequence of a single-server queue backwards in time, jointly with the input processes of the queue (inter-arrival and service times). The single-server queue is useful in applications of DCFTP (Dominated Coupling From The Past), a well-known protocol for sampling without bias from steady-state distributions. Our algorithm terminates in finite time assuming only finite means of the inter-arrival and service times. In order to simulate the stationary single-server queue until the first idle period with finite expected termination time, we additionally require finite variances. This requirement is also necessary for such an idle period (which is a natural coalescence time in DCFTP applications) to have finite mean. Thus, in this sense, our algorithm is applicable under minimal assumptions.

## 1 Introduction

It is a pleasure to contribute to this special issue in honor of Professor Don Iglehart, whose scientific contributions have had an enormous impact in the applied probability and stochastic simulation communities. Professor Iglehart's research contributions span areas such as steady-state simulation and queueing analysis. We are glad, in this paper, to contribute to both of these areas from the standpoint of exact (also known as perfect) simulation theory, which aims at sampling without any bias from the steady-state distribution of stochastic systems.

The theory of exact simulation has attracted substantial attention, particularly since the groundbreaking paper [PW96]. In that paper, the authors introduced the most popular sampling protocol for exact simulation to date; namely, Coupling From The Past (CFTP). CFTP is a simulation technique which yields samples from the steady-state distribution of a Markov chain under certain compactness assumptions. The paper [Ken98] describes a useful variation of CFTP, called Dominated CFTP (DCFTP). Like CFTP, DCFTP aims to sample from the steady-state distribution of a Markov chain, but it can also be applied to cases in which the state space is unbounded.

The idea of the DCFTP method is to simulate a dominating stationary process backwards in time until the detection of a so-called coalescence time, at which the target and dominating processes coincide. The sample path of the target process can then be reconstructed forward in time from coalescence up to time zero. The state of the target process at time zero is then a sample from the associated stationary distribution.
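To make the protocol concrete, here is a minimal, self-contained toy illustration of the DCFTP steps; it is not the algorithm of this paper. The target is a finite-buffer modification of a simple queueing chain, dominated by the corresponding infinite-buffer chain. The stationary backward path of the dominating process is obtained here through reversibility of a birth-death chain; exactly this step is unavailable in the general setting treated in this paper, and providing it in general is the subject of our work. All parameters below are illustrative.

```python
import random

def dcftp_finite_buffer(p, B, rng):
    """Exact draw from the stationary finite-buffer chain
    W' = min(B, max(W + X, 0)), where X = +1 w.p. p and X = -1 w.p. 1 - p
    (p < 1/2), using the infinite-buffer chain as dominating process."""
    rho = p / (1 - p)
    # Stationary dominating state at time 0: P(D >= k) = rho**k.
    d = 0
    while rng.random() < rho:
        d += 1
    # Simulate the dominating chain backwards in time until its first idle
    # period.  This birth-death chain is reversible, so the time-reversed
    # stationary path obeys the same transition law as the forward chain;
    # this shortcut is unavailable in the general setting of the paper.
    path = [d]                      # path[i] = dominating state at time -i
    while path[-1] != 0:
        k = path[-1]
        path.append(k + 1 if rng.random() < p else k - 1)
    T = len(path) - 1               # the dominating queue is idle at time -T
    # Recover the driving increments from the dominating path and rebuild
    # the target forward from W_{-T} = 0 (coalescence: W <= D and D_{-T} = 0).
    w = 0
    for i in range(T - 1, -1, -1):  # forward in time over -T+1, ..., 0
        x = 1 if path[i] == path[i + 1] + 1 else -1
        w = min(B, max(w + x, 0))
    return w

rng = random.Random(2024)
p, B = 0.3, 3
samples = [dcftp_finite_buffer(p, B, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
# The stationary law of the finite-buffer chain is truncated geometric with
# ratio rho = 3/7; its mean for B = 3 is 354/580, roughly 0.61.
assert abs(mean - 354 / 580) < 0.05
```

When the dominating queue is idle, every dominated trajectory is also at zero; this is why the first idle period serves as a natural coalescence time.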

Our contribution in this paper is to provide, under nearly minimal assumptions (finite-mean service and inter-arrival times), an exact simulation algorithm for the stationary workload of a single-server queue backwards in time. This is a fundamental queueing system which can be used in many applications as a natural dominating process when applying DCFTP. Usually, additional assumptions, beyond the ones we consider here, have been imposed to enable the simulation of the stationary single-server queue backwards in time. For example, in [Si11a, Si11b] the author takes advantage of a single-server queue with Poisson arrivals for exact simulation of a multi-server system; see also the recent work [CK14], which dramatically improves the running time of [Si11b], but also requires the assumption of Poisson arrivals. In the paper [BS11], under the existence of a finite moment generating function for the service times, the single-server queue, simulated backwards in time, is used to sample from a general class of perpetuities. The paper [BD12], which builds upon the ideas in [BS11], also uses the single-server queue backwards in time to sample the state descriptor of the infinite-server queue in stationarity; in turn, the infinite-server queue is used to simulate loss networks in stationarity. Another example in which the single-server queue arises as a natural dominating process occurs in the setting of so-called multidimensional stochastic fluid networks; see [BC11]. Our contribution here extends the applicability of these instances, in which the single-server queue has been used as a dominating process under stronger assumptions than the ones we impose here. The extensions are direct in most cases; the multi-server queue with general renewal arrivals requires an additional coupling idea and is reported in [BDP15].

The first idle period (backwards in time starting from stationarity) is a natural coalescence time when applying DCFTP. Therefore, we are especially interested in an algorithm with finite expected termination time for the simulation of this first idle period. Moreover, it is well known that finite-variance service times are necessary for the first idle period (starting from stationarity) to have finite mean (this follows from Wald's identity, [Durrett] p. 178, and from Theorem 2.1 in [As03], p. 270). While our algorithm terminates with probability one assuming only finite means of the service times and inter-arrival times, when we assume finite variances we obtain an algorithm with finite expected running time (see Theorem LABEL:Thm_MAIN in Section LABEL:sec:Implementation-of-the_algorithm).

Let us now provide the mathematical description of the problem we want to solve. Consider a random walk with increments $X_n = S_n - S_{n-1}$ for $n \geq 1$, and $S_0 = 0$. We assume that $(X_n : n \geq 1)$ is a sequence of independent and identically distributed (IID) random variables with

(1.1)
$$E[X_1] = -\mu < 0.$$

As we indicated earlier, of special interest is the case $X_n = V_n - A_n$ for some IID service times $(V_n : n \geq 1)$ and inter-arrival times $(A_n : n \geq 1)$. Now, for $n \geq 0$ we define the negative-drift random walk and its associated running (forward) maximum by

(1.2)
$$S_n = \sum_{k=1}^{n} X_k, \qquad M_n = \max\{S_k - S_n : k \geq n\},$$

respectively. Note that the maximum is taken over an infinite time-horizon, so the process $(M_n : n \geq 0)$ is not adapted to the random walk $(S_n : n \geq 0)$. Our aim in this paper is to design an algorithm that samples jointly from the sequence $(M_0, X_1, M_1, \ldots, X_n, M_n)$ for any finite $n$ (potentially a stopping time adapted to the sampled sequence). Of particular interest is the first idle time, $T = \inf\{n \geq 0 : M_n = 0\}$, which can often be used as a coalescence time.
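Since the maximum defining $M_n$ looks into the infinite future of the walk, truncating the horizon at any finite $N$ biases the sample from below; this is precisely the bias that exact sampling removes. A quick illustration of the truncation bias (the increment distribution and all parameters are ours, purely for illustration):

```python
import random

def truncated_max(incs):
    # max_{0 <= k <= N} S_k along one path of partial sums (S_0 = 0)
    m = s = 0.0
    for x in incs:
        s += x
        m = max(m, s)
    return m

rng = random.Random(7)
# X = V - A with Exp(1) "service" and Exp(0.8) "inter-arrival" times,
# so E[X] = 1 - 1.25 < 0 (negative drift); parameters are illustrative
def increment():
    return rng.expovariate(1.0) - rng.expovariate(0.8)

n_paths, horizons = 200, [10, 100, 1000]
paths = [[increment() for _ in range(max(horizons))] for _ in range(n_paths)]
ests = [sum(truncated_max(p[:N]) for p in paths) / n_paths for N in horizons]
# Pathwise, the truncated maximum is nondecreasing in the horizon N, so any
# finite-horizon estimate of E[M_0] is biased low.
assert ests[0] <= ests[1] <= ests[2]
```

Because the truncated maximum is pathwise nondecreasing in $N$, the bias never vanishes at any fixed horizon; an exact method cannot simply simulate further into the future.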

Note that if we define $W_n = M_n$ for $n \geq 0$, then we can easily verify the so-called Lindley recursion (see [As03], p. 92), namely

(1.3)
$$W_n = \max(W_{n+1} + X_{n+1}, 0),$$

and therefore $(W_n : n \geq 0)$ corresponds to a single-server queue waiting-time sequence backwards in time; the sequence is clearly stationary since the $M_n$'s are all equal in distribution. Simulating $(M_0, X_1, M_1, \ldots, X_n, M_n)$ jointly allows us to couple the single-server queue backwards in time with the driving sequence (i.e. the $X_n$'s). Such a coupling is required in applications of the DCFTP method.
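The recursion can be checked numerically on a truncated horizon: with $M_n^N = \max\{S_k - S_n : n \leq k \leq N\}$ and $M_N^N = 0$, the identity $M_n^N = \max(M_{n+1}^N + X_{n+1}, 0)$ holds pathwise. A small sketch (Gaussian increments chosen only for illustration):

```python
import random

rng = random.Random(42)
N = 500
X = [rng.gauss(-0.5, 1.0) for _ in range(N)]  # X[k] plays the role of X_{k+1}

S = [0.0]                                      # partial sums S_0, ..., S_N
for x in X:
    S.append(S[-1] + x)

# truncated forward maxima: M[n] = max_{n <= k <= N} (S_k - S_n)
M = [max(S[k] - S[n] for k in range(n, N + 1)) for n in range(N + 1)]

# Lindley's recursion runs backwards in time: M_n = max(M_{n+1} + X_{n+1}, 0)
for n in range(N):
    assert abs(M[n] - max(M[n + 1] + X[n], 0.0)) < 1e-9
```

The backward direction of the recursion is what makes the sequence a time-reversed queue: given the future value $M_{n+1}$ and the increment $X_{n+1}$, the present value $M_n$ is determined.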

The algorithm that we propose here extends previous work in [EG00], which shows how to simulate $M_0$ assuming the existence of the so-called Cramér root (i.e. $\theta > 0$ such that $E[\exp(\theta X_1)] = 1$). The paper [BS11] explains how to simulate the sequence $(M_n : n \geq 0)$ assuming a finite moment generating function in a neighborhood of the origin. Multidimensional extensions, also under the assumption of a finite moment generating function around the origin, are discussed in [BC11].

Our strategy for simulating the sequence $(M_n : n \geq 0)$ relies on certain "upward events" and "downward events" that occur at random times. These "milestone events" will be discussed in Section 2, where we also present a high-level description of our proposed algorithm; this description is elaborated in subsequent sections. Section LABEL:Sec_Sampling_M0 explains how to simulate $M_0$ under a moment assumption on $X_1$ that is slightly stronger than (1.1). In Section LABEL:sec:Implementation-of-the_algorithm we build on our construction for the sampling of $M_0$ to simulate the sequence $(M_0, X_1, M_1, \ldots, X_n, M_n)$. Section LABEL:Sec_Add_Cons will explain how to extend our algorithm to weaker moment assumptions and also discuss additional considerations involved in evaluating certain normalizing constants. Finally, in Section LABEL:Sec:_Numerical we will present a numerical example that tests the empirical performance of our proposed algorithm.

## 2 Construction of $(M_n : n \geq 0)$ via "milestone events"

We will describe the construction of a pair of sequences of stopping times (with respect to the filtration generated by the $X_n$'s), denoted by $(D_j : j \geq 1)$ and $(U_j : j \geq 1)$, which track certain downward and upward milestones in the evolution of the random walk. We follow steps similar to those described in [BS11]. These "milestone events" will be used in the design of our proposed algorithm. The elements of the two stopping-time sequences interlace with each other (when finite), and their precise description follows next.

[Figure LABEL:fig:Construction_of_stopping_times: construction of the stopping times via milestone events.]

We start by fixing any $m > 0$, $a > 0$. Eventually we will choose $m$ as small as possible subject to certain constraints described in Section LABEL:Sec_Sampling_M0, and then we can choose $a$ as small as possible to satisfy

(2.1)
$$0 < a \leq m.$$

Typically, $a = m$ is feasible. This constraint on $a$ will be used in the proof of Proposition 1 and also in the implementation of Step 2 in Procedure LABEL:Algo_Hueristics_for_S_n_M_n below.

Now set $D_0 = 0$. We observe the evolution of the process $(S_n : n \geq 0)$ and detect the time (the first downward milestone)

$$D_1 = \inf\{n \geq 0 : S_n \leq -m\}.$$

Once $D_1$ is detected we check whether or not $(S_n : n > D_1)$ ever goes above the height $S_{D_1} + a$ (the first upward milestone); namely, we define

$$U_1 = \inf\{n > D_1 : S_n > S_{D_1} + a\}.$$

For now let us assume that we can check whether $U_1 < \infty$ or $U_1 = \infty$ (how exactly to do so will be explained in Section LABEL:Sec_Sampling_M0). To continue simulating the rest of the path, namely $(S_n : n > D_1)$, we potentially need to keep track of the conditional upper bound implied by the event $\{U_1 = \infty\}$. To this end, we introduce the conditional upper bound variable $L$ (initially $L = \infty$). If at time $D_1$ we detect that $U_1 = \infty$, then we set $L = S_{D_1} + a$ and continue sampling the path of the random walk conditional on never crossing the upper bound $L$, that is, conditional on $S_n \leq L$ for all $n > D_1$. Otherwise, if $U_1 < \infty$, we simulate the path conditional on $U_1 < \infty$ until we detect the time $U_1$. We continue on, sequentially checking whether a downward or an upward milestone is crossed, as follows: for $j \geq 1$, define

(2.2)
$$U_j = \inf\{n > D_j : S_n > S_{D_j} + a\}, \qquad D_{j+1} = \inf\{n \geq U_j : S_n \leq S_{D_j} - m\},$$

with the convention that if $U_j = \infty$, then $D_{j+1} = \inf\{n \geq D_j : S_n \leq S_{D_j} - m\}$. Therefore, we have that $U_j < D_{j+1}$ if and only if $U_j < \infty$.

Let us define

(2.3)
$$J = \inf\{j \geq 1 : U_j = \infty\}.$$

So, for example, if $U_1 = \infty$ (that is, $J = 1$) we have that $S_n \leq S_{D_1} + a$ for all $n \geq D_1$, and the drifted random walk will never reach level $0$ again. This allows us to evaluate $M_0$ by computing

(2.4)
$$M_0 = \max\{S_k : 0 \leq k \leq D_1\}.$$

Similarly, the event $U_j = \infty$, for some $j \geq 1$, implies that the level $S_{D_j} + a$ is never crossed by $(S_n : n \geq D_j)$, and we let $L = S_{D_j} + a$. The value of $L$ keeps updating as the random walk evolves, at those times $D_j$ where $U_j = \infty$.

The advantage of considering these stopping times is the following: once we have observed that $U_j = \infty$ for some $j \geq 1$, the values of $M_n$ for $n \leq D_{j-1}$ are known without the need of further simulation. A detailed example is illustrated in Figure LABEL:fig:Construction_of_stopping_times.
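The bookkeeping described above can be illustrated on a simulated path. In the sketch below, the infinite-horizon check of whether an upward milestone is ever crossed is replaced by a long but finite lookahead, so this is only an illustration of the bookkeeping (the exact infinite-horizon test is the subject of Section LABEL:Sec_Sampling_M0); the values of $m$, $a$ and the increment distribution are ours, purely for illustration.

```python
import random

rng = random.Random(3)
N = 50_000                   # long horizon, a stand-in for infinite lookahead
m, a = 1.0, 0.5              # milestone sizes, with a < m

S = [0.0]
for _ in range(N):
    S.append(S[-1] + rng.gauss(-0.25, 1.0))   # negative-drift random walk

# Sequentially detect downward milestones D_j and, at each one, test whether
# the walk ever exceeds the upward level S_{D_j} + a (here: within the
# remaining simulated horizon).
level, n = 0.0, 0
while True:
    while S[n] > level - m:          # walk down to the next downward milestone
        n += 1
    level = S[n]                     # level = S_{D_j}
    if max(S[n:]) <= level + a:      # upward attempt fails: "U_j = infinity"
        D = n
        break
    # upward milestone was crossed (U_j < infinity): move there and continue
    n = next(k for k in range(n, N + 1) if S[k] > level + a)

# Once an upward attempt fails, M_0 is determined by the path up to D_j:
M0 = max(S[: D + 1])
assert M0 == max(S)                  # the global maximum occurs before D_j
```

Once the failed upward attempt is observed, all path values after $D_j$ are trapped below $S_{D_j} + a < 0$, so no further simulation can change the recorded maximum.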

Before we summarize the properties of the stopping times $D_j$ and $U_j$, it will be useful to introduce the following notation. For any $n \geq 0$ and $x > 0$ let

(2.5)
$$\tau_n(x) = \inf\{k > n : S_k - S_n > x\},$$

so that, for instance, $U_j = \tau_{D_j}(a)$ for $j \geq 1$.

###### Proposition 1.

Assume that (1.1) holds and fix $m, a > 0$. Then, for every $j \geq 1$,

(2.6)
$$P(D_j < \infty) = 1,$$

and, moreover,

(2.7)
$$P(U_j = \infty \text{ for infinitely many } j \geq 1) = 1.$$

###### Proof.

The statement in (2.6) follows easily from the Law of Large Numbers, since $S_n / n \to -\mu < 0$ almost surely and therefore $S_n \to -\infty$. Now we will verify (2.7). Recall that $U_j$ was defined by $U_j = \inf\{n > D_j : S_n - S_{D_j} > a\}$. Therefore, since $E[X_1] < 0$, for all $x > 0$ we have (see [As03] p. 224),

$$P(M_0 > x) < 1.$$

Our next goal is to show that for every $j \geq 1$ we can find $\delta > 0$ such that

$$P(U_j = \infty \mid \mathcal{F}_{D_j}) \geq \delta.$$

Suppose first that $U_i < \infty$ for each $i < j$. Then, by the strong Markov property we have that

$$P(U_j = \infty \mid \mathcal{F}_{D_j}) = P(M_0 \leq a) > 0.$$

Now suppose that $U_i = \infty$ for some $i < j$ and let $i^* = \max\{i < j : U_i = \infty\}$. Define $\ell = L - S_{D_j} = S_{D_{i^*}} + a - S_{D_j} \geq a$. Note that

(2.8)
$$P(U_j = \infty \mid \mathcal{F}_{D_j}) = P(M_0 \leq a \mid M_0 \leq \ell).$$

Keep in mind that the right-hand side of (2.8) regards $\ell$ as a deterministic constant, and note that

(2.9)
$$P(M_0 \leq a \mid M_0 \leq \ell) = \frac{P(M_0 \leq a)}{P(M_0 \leq \ell)} \geq P(M_0 \leq a) > 0.$$

Hence, we conclude that

$$P(U_j = \infty \mid \mathcal{F}_{D_j}) \geq P(M_0 \leq a) =: \delta > 0.$$

It then follows by the (conditional) Borel-Cantelli lemma that $U_j = \infty$ for infinitely many $j$ with probability one. ∎

In the setting of Proposition 1, for each $n \geq 0$ we can define $j_1(n) = \inf\{j \geq 1 : D_j \geq n \text{ and } U_j = \infty\}$ and $j_2(n) = \inf\{j > j_1(n) : U_j = \infty\}$, both finite random variables such that

(2.10)
$$M_n = \max\{S_k - S_n : n \leq k \leq D_{j_2(n)}\}.$$

In words, $D_{j_2(n)}$ is the time, not earlier than $n$, at which we detect a second unsuccessful attempt at building an upward patch directly. The fact that the relation in (2.10) holds follows easily by construction of the stopping times in (2.2). Note that it is important, however, to define $j_2(n)$ so that $j_1(n)$ is computed first. That way we can make sure that the maximum of the sequence $(S_k - S_n : k \geq n)$ is achieved between $n$ and $D_{j_2(n)}$ (see Figure LABEL:fig:Construction_of_stopping_times).

Proposition 1 ensures that it suffices to sequentially simulate the $D_j$'s and $U_j$'s jointly with the underlying random walk in order to sample from the sequence $(M_n : n \geq 0)$. This observation gives rise to our suggested scheme. The procedure sequentially constructs the random walk on the intervals $[D_j, D_{j+1})$ for $j \geq 0$. Here is the high-level procedure to construct these quantities: