We study several stochastic combinatorial problems, including the expected utility maximization problem, the stochastic knapsack problem and the stochastic bin packing problem. A common technical challenge in these problems is to optimize some function (other than the expectation) of the sum of a set of random variables. The difficulty is mainly due to the fact that the probability distribution of the sum is the convolution of a set of distributions, which is not an easy objective function to work with. To tackle this difficulty, we introduce the Poisson approximation technique. The technique is based on the Poisson approximation theorem discovered by Le Cam, which enables us to approximate the distribution of the sum of a set of random variables using a compound Poisson distribution. Using the technique, we can reduce a variety of stochastic problems to the corresponding deterministic multi-objective problems, which can either be solved by standard dynamic programming or have known solutions in the literature. For the problems mentioned above, we obtain the following results:

We first study the expected utility maximization problem introduced recently in [Li and Deshpande, FOCS11]. For monotone and Lipschitz utility functions, we obtain an additive PTAS if there is a multidimensional PTAS for the multi-objective version of the problem, strictly generalizing the previous result. The result implies the first additive PTAS for maximizing the threshold probability for the stochastic versions of global min-cut, matroid base and matroid intersection.

For the stochastic bin packing problem (introduced in [Kleinberg, Rabani and Tardos, STOC97]), we show there is a polynomial time algorithm which uses at most the optimal number of bins, if we relax the bin size and the overflow probability by ε, for any constant ε > 0. Based on this result, we obtain a 3-approximation if only the bin size can be relaxed by ε, improving the previously known factor for any constant overflow probability.

For the stochastic knapsack problem, we show a (1 − ε)-approximation using ε extra capacity for any ε > 0, even when the size and reward of each item may be correlated and cancelations of items are allowed. This generalizes the previous work [Bhalgat, Goel and Khanna, SODA11] for the case without correlation and cancelation. Our algorithm is also simpler. We also present a factor (2 + ε) approximation algorithm for stochastic knapsack with cancelations, for any constant ε > 0, improving the currently known approximation factor of 8 [Gupta, Krishnaswamy, Molinaro and Ravi, FOCS11].

We also study an interesting variant of the stochastic knapsack problem, where the size and the profit of each item are revealed before the decision is made. The problem falls into the framework of Bayesian online selection problems, which have received a lot of attention recently. We obtain in polynomial time a (1 − ε)-approximate policy using ε extra capacity for any constant ε > 0.
Lastly, we remark that the Poisson approximation technique is quite easy to apply and may find other applications in stochastic combinatorial optimization.
1 Introduction
We study several stochastic combinatorial optimization problems, including the threshold probability maximization problem [52, 50, 45], the expected utility maximization problem [45], the stochastic knapsack problem [24, 13, 35], the stochastic bin packing problem [41, 31] and some of their variants. All of these problems are known to be #P-hard and we are interested in obtaining approximation algorithms with provable performance guarantees. We observe a common technical challenge in solving these problems, that is, roughly speaking, given a set of random variables with possibly different probability distributions, to find a subset of random variables such that a certain functional (other than the expectation, for which linearity of expectation lets us circumvent the difficulty of convolution) of their sum is optimized. The difficulty is mainly due to the fact that the probability distribution of the sum is the convolution of the distributions of the individual random variables. To address this issue, a number of techniques have been proposed (briefly reviewed in the related work section). In this paper, we introduce a new technique, called the Poisson approximation technique, which can be used to approximate the probability distribution of a sum of several random variables. The technique is very easy to use and yields better or more general results than the previous techniques for the variety of stochastic combinatorial optimization problems mentioned above. In the rest of the section, we formally introduce these problems and state our results.
Terminology: We first set up some notation and review some standard terminology. Following the literature, the exact version of a problem A (denoted Exact-A) asks for a feasible solution of A with weight exactly equal to a given number W. An algorithm runs in pseudopolynomial time for Exact-A if its running time is bounded by a polynomial of the input size and W.
A polynomial time approximation scheme (PTAS) is an algorithm which takes an instance of a maximization problem and a parameter ε > 0 and produces a solution whose cost is at least (1 − ε)OPT, and the running time, for any fixed ε, is polynomial in the size of the input. If ε appears as an additive factor in the above definition, namely the cost of the solution is at least OPT − ε, we say the algorithm is an additive PTAS. We say a PTAS is a fully polynomial time approximation scheme (FPTAS) if the running time is polynomial in the size of the input and 1/ε.
In a multidimensional minimization problem, each element e is associated with a weight vector w(e) of constant dimension. We are also given a budget vector B of the same dimension. The goal is to find a feasible solution S such that w(S) = Σ_{e∈S} w(e) ≤ B coordinate-wise. We use Multi-A to denote the problem if the corresponding single dimensional optimization problem is A. A multidimensional PTAS for Multi-A is an algorithm which either returns a feasible solution S such that w(S) ≤ (1 + ε)B, or asserts that there is no feasible solution S with w(S) ≤ B.
1.1 Expected Utility Maximization
We first consider the fixed set model of a class of stochastic optimization problems introduced in [45]. We are given a ground set of elements (or items) U = {e_1, …, e_n}. Each feasible solution to the problem is a subset of the elements satisfying some property. In the deterministic version of the problem, we want to find a feasible solution with the minimum total weight. Many combinatorial problems such as shortest path, minimum spanning tree, and minimum weight matching belong to this class. In the stochastic version, each element e is associated with a random weight w_e. We assume all w_e's are discrete nonnegative random variables and are independent of each other. We are also given a utility function μ to capture different risk-averse or risk-prone behaviors that are commonly observed in decision-making under uncertainty. Our goal is to find a feasible set S such that the expected utility E[μ(w(S))] is maximized, where w(S) = Σ_{e∈S} w_e. We refer to this problem as the expected utility maximization (EUM) problem. An important special case is to find a feasible set S such that the threshold probability Pr[w(S) ≤ 1] is maximized (the threshold is normalized to 1 by scaling), which we call the threshold probability maximization (TPM) problem. Note that if μ(x) = 1 for x ≤ 1 and μ(x) = 0 for x > 1, we have that E[μ(w(S))] = Pr[w(S) ≤ 1]. In fact, this special case has been studied extensively in the literature for various combinatorial problems, including stochastic versions of shortest path [52], minimum spanning tree [38, 30], knapsack [31] as well as some other problems [1, 50]. We use A to denote the deterministic version of the optimization problem under consideration, and use EUM-A and TPM-A accordingly to denote the expected utility maximization problem and the threshold probability maximization problem for A, respectively.
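To see the convolution difficulty concretely, the sketch below computes E[μ(w(S))] exactly for a tiny item set by convolving the individual weight distributions one by one. The two distributions and the utility μ (a toy monotone nonincreasing, 1-Lipschitz function with μ(x) = 0 for x ≥ 1) are hypothetical illustration data; the support of the exact convolution can grow exponentially in |S|, which is precisely what the techniques in this paper avoid.

```python
def mu(x):
    """Toy monotone nonincreasing, 1-Lipschitz utility with mu(x) = 0 for x >= 1."""
    return max(0.0, 1.0 - x)

def convolve(d1, d2):
    """Exact distribution (dict value -> prob) of the sum of two independent
    discrete random variables."""
    out = {}
    for v1, p1 in d1.items():
        for v2, p2 in d2.items():
            out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
    return out

def expected_utility(dists):
    """E[mu(w(S))]: convolve all weight distributions, then average mu."""
    total = {0.0: 1.0}
    for d in dists:
        total = convolve(total, d)
    return sum(p * mu(v) for v, p in total.items())

# Hypothetical two-item solution S with discrete weight distributions.
dists = [{0.0: 0.5, 0.4: 0.5}, {0.2: 0.7, 0.8: 0.3}]
```

Here `expected_utility(dists)` evaluates (up to floating point) to 0.45: the joint outcomes 0.2, 0.6, 0.8, 1.2 have utilities 0.8, 0.4, 0.2, 0 with probabilities 0.35, 0.35, 0.15, 0.15.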
Our Results: Following the previous work [45], we assume μ(x) tends to 0 as x grows (if the weight of our solution is too large, it is almost useless). We also assume μ is Lipschitz, i.e., |μ(x) − μ(y)| ≤ L·|x − y| for any x, y, where L is a positive constant. Our first result is an alternative proof for the main result in [45].
Theorem 1.1.
Assume there is a pseudopolynomial time algorithm for Exact-A. For any ε > 0, there is a polynomial-time approximation algorithm for EUM-A that finds a feasible solution S such that E[μ(w(S))] ≥ OPT − ε.
For many combinatorial problems, including shortest path, spanning tree, matching and knapsack, a pseudopolynomial algorithm for the exact version is known. Therefore, Theorem 1.1 immediately implies an additive PTAS for the EUM version of each of these problems. An important corollary of the above theorem is a relaxed additive PTAS for TPM-A: for any ε > 0, we can find in polynomial time a feasible solution S such that Pr[w(S) ≤ 1 + ε] ≥ max_T Pr[w(T) ≤ 1] − ε, where the maximum is over all feasible solutions T,
provided that there is a pseudopolynomial time algorithm for Exact-A. In fact, the corollary follows easily by considering the monotone utility function that is 1 on [0, 1], 0 on [1 + ε, ∞), and linearly interpolated in between, which is (1/ε)-Lipschitz. We refer the interested reader to [45] for more implications of Theorem 1.1. However, this is not the end of the story. Our second major result considers EUM with monotone nonincreasing utility functions, a natural class of utility functions (we denote this problem EUM-Mono). We can get the following strictly more general result for EUM-Mono.
Theorem 1.2.
Assume there is a multidimensional PTAS for Multi-A. For any ε > 0, there is a polynomial-time approximation algorithm for EUM-Mono that finds a feasible solution S such that E[μ(w(S))] ≥ OPT − ε.
It is worth mentioning that the condition of Theorem 1.2 is strictly more general than the condition of Theorem 1.1. It is known that if there is a pseudopolynomial time algorithm for Exact-A, then there is a multidimensional PTAS for Multi-A, by the technique of Papadimitriou and Yannakakis [53]. However, the converse is not true. Consider the minimum cut (MC) problem. A pseudopolynomial time algorithm for Exact-MC would imply a polynomial time algorithm for the NP-hard MAX-CUT problem, while a multidimensional PTAS for Multi-MC is known [3]. Therefore, Theorem 1.2 implies the first relaxed additive PTAS for TPM-MC. Other problems that can justify the superiority of Theorem 1.2 include the matroid base (MB) problem and the matroid intersection (MX) problem. Obtaining pseudopolynomial time exact algorithms for Exact-MB and Exact-MX is still open [14] (pseudopolynomial time algorithms are known only for some special cases, such as spanning trees [9] and matroids with parity conditions [14]), while multidimensional PTASes for Multi-MB and Multi-MX are known [54, 19].
We would like to remark that obtaining an additive PTAS for EUM for nonmonotone utility functions under the same condition as Theorem 1.2 is impossible. Consider again the minimum cut problem, and suppose the weights are deterministic and the maximum cut of the given instance has weight W. Take a nonmonotone Lipschitz utility function that is 0 except on weights very close to W. Then the optimal utility is positive, but obtaining a utility value better than 0 is equivalent to finding a cut of weight nearly W, which is impossible given the inapproximability result for MAX-CUT [40].
Our techniques: Our algorithm consists of two major steps, discretization and enumeration. Our discretization is similar to, yet much simpler than, the one developed by Bhalgat et al. [13]. In their work, they developed a technique which can discretize all (size) probability distributions into equivalence classes. This is a difficult task and their technique applies several clever tricks and is quite involved. However, we only need to discretize the distributions so that the size of the support of each distribution is a constant, which is sufficient for the enumeration step. In the enumeration step, we distinguish the items with large expected weights (heavy items) from those with small expected weights (light items). We argue that there are very few heavy items in the optimal solution, so we can afford to enumerate all possibilities. To deal with light items, we invoke Le Cam's Poisson approximation theorem, which (roughly) states that the distribution of the total size of a set of light items can be approximated by a compound Poisson distribution, which can be specified by the sum of the (discretized) distribution vectors of the light items (called the signature of the set). Therefore, instead of enumerating combinations of light items, we only need to enumerate all possible signatures and check whether there is a set of light items with the sum of their distribution vectors approximately equal to (or at most) the signature. To solve the latter task, we need the pseudopolynomial time algorithm for Exact-A (or the multidimensional PTAS for Multi-A).
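In its simplest (Bernoulli, non-compound) form, Le Cam's theorem bounds the total variation distance between a sum of independent Bernoulli(p_i) variables and Pois(Σ p_i) by Σ p_i². The sketch below checks this bound numerically on hypothetical probabilities, computing the exact distribution of the sum by dynamic programming; the compound version used in this paper replaces the unit jumps by i.i.d. jumps over the discretized sizes.

```python
import math

def bernoulli_sum_pmf(ps):
    """Exact pmf (list indexed by k) of the sum of independent Bernoulli(p_i)."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1.0 - p)   # this trial contributes 0
            new[k + 1] += q * p       # this trial contributes 1
        pmf = new
    return pmf

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def tv_to_poisson(ps):
    """Total variation distance between the Bernoulli sum and Pois(sum(ps))."""
    lam = sum(ps)
    pmf = bernoulli_sum_pmf(ps)
    n = len(ps)
    diff = sum(abs(pmf[k] - poisson_pmf(lam, k)) for k in range(n + 1))
    tail = 1.0 - sum(poisson_pmf(lam, k) for k in range(n + 1))
    return 0.5 * (diff + tail)

ps = [0.05] * 20  # twenty hypothetical "light" items
```

For these parameters the computed distance stays below Le Cam's bound Σ p_i² = 0.05, illustrating why sums of many low-probability (light) contributions are well captured by a Poisson-type law.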
1.2 Stochastic Bin Packing
In the stochastic bin packing (SBP) problem, we are given a set of items and an overflow probability γ ∈ (0, 1). The size of each item is an independent random variable following a known discrete distribution. The distributions for different items may be different. Each bin has a capacity of 1. The goal is to pack all the items using as few bins as possible such that the overflow probability of each bin is at most γ. The problem was first studied by Kleinberg, Rabani and Tardos [41]. For Bernoulli distributed items only, they obtained a constant-factor approximation when either the bin size or the overflow probability is relaxed by a constant factor, as well as an approximation whose factor depends on γ without relaxing the bin size and the overflow probability. Goel and Indyk [31] obtained a PTAS for both Poisson and exponential distributions and a QPTAS (i.e., quasipolynomial time) for Bernoulli distributions.
Our Results: Our main result for SBP is the following theorem.
Theorem 1.3.
For any fixed constant ε > 0, there is a polynomial time algorithm for SBP that uses at most the optimal number of bins, when the bin size is relaxed to 1 + ε and the overflow probability is relaxed to γ + ε.
To the best of our knowledge, our result is the first result for SBP for arbitrary discrete distributions. Based on this result, we can get the following result when the overflow probability is not relaxed. For Bernoulli distributions, this improves the approximation factor in [41] for any constant γ.
Theorem 1.4.
For any constant ε > 0, we can find in polynomial time a packing that uses at most 3 · OPT bins of capacity 1 + ε such that the overflow probability of each bin is at most γ.
Our technique: Our algorithm for SBP is similar to that for EUM. We distinguish the heavy items and the light items and use the Poisson approximation technique to deal with the light items. One key difference from EUM is that we have a linear number of bins, each of which may hold a constant number of heavy items. Therefore, we cannot simply enumerate all configurations of the heavy items since there are exponentially many of them. To reduce the number of configurations to a polynomial, we classify the heavy items into a constant number of types (again, by discretization). For a fixed configuration of the heavy items, using the Poisson approximation, we reduce SBP to a multidimensional version of the multiprocessor scheduling problem, called the vector scheduling problem, for which a PTAS is known [17].
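For a single bin, SBP feasibility is again a statement about a convolution. The sketch below (with hypothetical size distributions) computes the exact overflow probability of one bin and shows that relaxing the capacity to 1 + ε can only shrink it:

```python
def size_pmf(items):
    """Exact distribution (dict value -> prob) of the total size of the items
    packed into one bin, by convolving their size distributions."""
    pmf = {0.0: 1.0}
    for dist in items:
        new = {}
        for v, p in pmf.items():
            for s, q in dist.items():
                new[v + s] = new.get(v + s, 0.0) + p * q
        pmf = new
    return pmf

def overflow_prob(items, capacity):
    """Probability that the bin's total size exceeds its capacity."""
    return sum(p for v, p in size_pmf(items).items() if v > capacity)

# A hypothetical bin holding two items with discrete size distributions.
bin_items = [{0.3: 0.5, 0.6: 0.5}, {0.2: 0.9, 0.7: 0.1}]
```

Here the overflow probability at capacity 1 is 0.05 (only the realization 0.6 + 0.7 overflows); a packing is feasible for overflow parameter γ iff this value is at most γ in every bin.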
1.3 Stochastic Knapsack
The deterministic knapsack problem is a classical and fundamental problem in combinatorial optimization. In this problem, we are given as input a set of items, each associated with a size and a profit, and our objective is to find a maximum profit subset of items whose total size is at most the capacity of the knapsack. In many applications, the size and/or the profit of an item may not be fixed values and only their probability distributions are known to us in advance. The actual size and profit of an item are revealed to us as soon as it is inserted into the knapsack. For example, suppose we want to schedule a subset of jobs on a single machine by a fixed deadline; the precise processing time and profit of a job are revealed only when it is completed. In the following, we use the terms items and jobs interchangeably. If the insertion of an item causes the knapsack to overflow, we terminate and do not gain the profit of that item. The problem is broadly referred to as the stochastic knapsack (SK) problem [24, 13, 35]. Unlike the deterministic knapsack problem, for which a solution is a subset of items, a solution to SK is specified by an adaptive policy, which determines which item to insert next based on the remaining capacity and the set of available items. In contrast, a nonadaptive policy specifies a fixed permutation of items.
A significant generalization of the problem, introduced in [35], considers the scenario where the profit of a job can be correlated with its size and the policy can cancel a job during its execution. No profit is gathered from a canceled job. This generalization is referred to as Stochastic Knapsack with Correlated Rewards and Cancelations (SKCC). Stochastic knapsack and several of its variants have been studied extensively by the operations research community (see e.g., [25, 26, 5, 4]). In recent years, the problem has also attracted a lot of attention from the theoretical computer science community, where researchers study the problems from the perspective of approximation algorithms [24, 13, 35].
Our Results: For SK, Bhalgat, Goel and Khanna [13] obtained a (1 − ε)-approximation using ε extra capacity. We obtain an alternative proof of this result using the Poisson approximation technique, with a running time that improves upon that in [13]. Our algorithm is also considerably simpler.
Theorem 1.5.
For any ε > 0, there is a polynomial time algorithm that finds a (1 − ε)-approximate adaptive policy for SK when the capacity is relaxed to 1 + ε.
Our next main result is a generalization of Theorem 1.5 to SKCC, where the size and profit of an item may be correlated, and cancelation of an item in the middle of its execution is allowed. The current best known result for SKCC is a factor 8 approximation algorithm by Gupta, Krishnaswamy, Molinaro and Ravi [35], based on a new time-indexed LP relaxation. We remark that it is not clear how to extend the enumeration technique developed in [13] to handle cancelations (see a detailed discussion in Section 5).
Theorem 1.6.
For any ε > 0, there is a polynomial time algorithm that finds a (1 − ε)-approximate adaptive policy for SKCC when the capacity is relaxed to 1 + ε.
We use SKCan to denote the stochastic knapsack problem where cancelations are allowed (the size and profit of an item are not correlated). Based on Theorem 1.6 and the algorithm in [12], we obtain a generalization of the result in [12] as follows.
Theorem 1.7.
For any constant ε > 0, there is a polynomial time algorithm that finds a (2 + ε)-approximate adaptive policy for SKCan.
Bayesian Online Selection: The technique developed for SKCC can be used to obtain the following result for an interesting variant of SK, where the size and the profit of an item are revealed before the decision whether to select the item is made. We call this problem the Bayesian online selection problem subject to a knapsack constraint (BOSPKC). The problem falls into the framework of Bayesian online selection problems (BOSP) formulated in [42]. BOSP problems subject to various constraints have attracted a lot of attention due to their applications to mechanism design [36, 16, 2, 42]. BOSPKC also has a close relation with the knapsack secretary problem [6] (see Section 6 for a discussion).
Theorem 1.8.
For any ε > 0, there is a polynomial time algorithm that finds a (1 − ε)-approximate adaptive policy for BOSPKC when the capacity is relaxed to 1 + ε.
As a byproduct of our discretization procedure, we also give a linear time FPTAS for the stochastic knapsack problem where each item has an unlimited number of copies (denoted SKU), if we relax the knapsack capacity by ε. The problem has been studied extensively under different names and optimal adaptive policies are known for several special distributions [25, 26, 5, 4]. However, no algorithmic result for general (discrete and continuous) distributions was known before. The details can be found in Appendix D.
Our techniques: SKCC and BOSPKC are more technically interesting since their solutions are adaptive policies, which do not necessarily have polynomial size representations. So it is not even clear at first sight where to use the Poisson approximation technique. As before, we first discretize the distributions. In the second step, we attempt to enumerate all possible block-adaptive policies, a notion introduced in [13]. In a block-adaptive policy, instead of inserting the items one by one, we insert the items block by block. In terms of the decision tree of a policy, each node in the tree corresponds to the insertion of a block of items. A remarkable discovery in [13] is that, for SK, there exists a block-adaptive policy that approximates the optimal policy and has only a constant number of blocks in the decision tree (the constant depends on ε). However, their proof does not easily generalize to SKCC. We extend their result to SKCC with an essentially different proof, which might be of independent interest. Fixing the topology of the decision tree of the block-adaptive policy, we can enumerate the signatures of all blocks in polynomial time, and check for each signature whether there exists a block-adaptive policy with that signature using dynamic programming. Again, in the analysis, we use the Poisson approximation theorem to argue that two block-adaptive policies with the same tree topology and signatures behave similarly.
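For intuition about what an adaptive policy is, the sketch below computes the exact optimal adaptive value of a toy SK instance (hypothetical integer sizes and deterministic rewards, no correlation or cancelation) by recursing over the remaining capacity and the set of available items. This exact recursion has exponential state space in general, which is exactly why compact block-adaptive approximations are needed.

```python
from functools import lru_cache

# Hypothetical instance: integer size distributions and deterministic rewards.
SIZES = ({1: 0.5, 3: 0.5}, {2: 1.0})
REWARDS = (1.0, 0.8)

@lru_cache(maxsize=None)
def best(cap, avail):
    """Expected reward of the optimal adaptive policy with remaining integer
    capacity `cap` and frozenset `avail` of items not yet inserted."""
    value = 0.0  # the policy may always stop
    for i in avail:
        v = 0.0
        for s, p in SIZES[i].items():
            if s <= cap:  # item fits: collect its reward, continue adaptively
                v += p * (REWARDS[i] + best(cap - s, avail - {i}))
            # on overflow the process stops and the item's reward is forfeited
        value = max(value, v)
    return value
```

With capacity 3, inserting item 0 first and adapting to its realized size is optimal: best(3, {0, 1}) = 0.5·(1 + 0.8) + 0.5·1 = 1.4, whereas inserting item 1 first gives only 1.3.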
1.4 Other Related Work
Recently, stochastic combinatorial optimization problems have drawn much attention from the theoretical computer science community. In particular, two-stage stochastic optimization models for many classical combinatorial problems have been studied extensively. We refer the interested reader to [57] for a comprehensive survey.
There is a large body of literature on EUM and TPM, especially for specific combinatorial problems and/or special utility functions. Loui [47] showed that the EUM version of the shortest path problem reduces to the ordinary shortest path (and sometimes longest path) problem if the utility function is linear or exponential. For the same problem, Nikolova, Brand and Karger [51] identified more specific utility and distribution combinations that can be solved optimally in polynomial time. Nikolova, Kelner, Brand and Mitzenmacher [52] studied the TPM version of shortest path when the distributions of the edge lengths are normal, Poisson or exponential. Nikolova [50] extended this result to an FPTAS for any problem for normal distributions, if the deterministic version of the problem has a polynomial time algorithm. Many heuristics for the stochastic shortest path problems have been proposed to deal with more general utility functions (see e.g., [48, 49, 11]). However, either their running times are exponential in worst cases or there is no provable performance guarantee for the produced solution. The TPM version of the minimum spanning tree problem has been studied in [38, 30], where polynomial time algorithms have been developed for Gaussian distributed edges.
The bin packing problem is a classical NP-hard problem. It is well known that it is hard to approximate within a factor of 3/2 − ε, via a reduction from the subset sum problem. Alternatively, the problem admits an asymptotic PTAS, i.e., it is possible to find in polynomial time a packing using at most (1 + ε)OPT + 1 bins for any ε > 0 [29]. The stochastic model where all items follow the same size distribution has been studied extensively in the literature (see, e.g., [20, 55]). However, these works require that the actual item sizes are revealed before the items are put in the bins, and their focus is to design simple rules that achieve nearly optimal packings.
Kleinberg, Rabani and Tardos [41] first considered the fixed set version of the stochastic knapsack problem with Bernoulli-type distributions. Their goal is to find a set of items with maximum total profit subject to the constraint that the overflow probability is at most a given parameter γ. They provided a polynomial-time approximation algorithm for this setting. For exponentially distributed items, Goel and Indyk [31] presented a bicriterion PTAS. Chekuri and Khanna [18] pointed out that a PTAS for Bernoulli distributed items can be obtained using their techniques for the multiple knapsack problem. For Gaussian distributions, Goyal and Ravi [32] obtained a PTAS.
The adaptive stochastic knapsack problem and several of its variants have been shown to be PSPACE-hard [24], which implies that it is impossible to construct in polynomial time an optimal adaptive policy, which may be exponentially large and arbitrarily complicated. Dean, Goemans, and Vondrák [24] first studied SK from the perspective of approximation algorithms and gave a constant-factor approximation algorithm. In fact, their algorithm produces a nonadaptive policy (a permutation of items), which implies that the adaptivity gap of the problem, the maximum ratio between the expected values achieved by the best adaptive and nonadaptive policies, is a constant. Using the technique developed for the (1 − ε)-approximation using ε extra capacity, Bhalgat, Goel and Khanna [13] also gave an improved approximation without extra capacity. Stochastic multidimensional knapsack (also called stochastic packing) has also been studied [23, 13, 8]. The stochastic knapsack problem can be formulated as an exponential-size Markov decision process (MDP). Recently, there is a growing literature in theoretical computer science on approximating the optimal policies for exponential-size MDPs (see e.g., [24, 34, 35, 42]).
BOSP problems are often associated with the name prophet inequalities, since the solutions of the online algorithms are often compared with "the prophet's solutions" (i.e., the offline optimum). Prophet inequalities were proposed in the seminal work of Krengel and Sucheston [43] and have been studied extensively since then. The secretary problem is also a classical online selection problem, introduced by Dynkin [28]. Recently, both problems have enjoyed a revival due to their connections to mechanism design, and many generalizations have been studied extensively [7, 6, 37, 15, 39, 36, 16, 2, 42]. We note that performance in all the work mentioned above is measured by comparing the solutions of the online policies with the offline optimum. Complementarily, our work compares our policies with the optimal online policies.
Finally, we would like to point out that Daskalakis and Papadimitriou [21] recently used Poisson approximation in approximating mixed Nash equilibria in anonymous games. However, the problem and the technique developed there are very different from this paper.
Prior Techniques: As mentioned in the introduction, it is a common challenge to deal with the convolution of a set of random variables (directly or indirectly). To address this issue, a number of techniques have been developed in the literature. Most of them only work for special distributions [47, 51, 52, 31, 33, 50], such as Gaussian, exponential, Poisson and so on. There are much fewer techniques that work for general distributions. Among those, the effective bandwidth technique [41] and the linear programming technique [24, 8, 35] have proven to be quite powerful for many problems, but the approximation factors obtained are constants at best (no exception is known so far). In order to obtain (multiplicative/additive) PTASes, two techniques were developed very recently: one is the discretization technique [13] for the stochastic knapsack problem and the other is the Fourier decomposition technique [45] for the utility maximization problem. However, both of them have certain limitations. The discretization technique [13] typically reduces a stochastic optimization problem to a complicated enumeration problem (in some sense, a constant-dimensional optimization problem, since the distributions are discretized into a constant number of equivalence classes). If the structure of the problem is different from or has more constraints than the knapsack problem, the enumeration problem can become overly complicated or even intractable (for example, the SKCC problem or the TPM version of the shortest path problem). In the Fourier decomposition technique [45], due to the presence of complex numbers, we lose a certain monotonicity property in the reduction from the stochastic optimization problem to a deterministic optimization problem; thus it is impossible to obtain something like Theorem 1.2 using that technique.
2 Expected Utility Maximization
We prove Theorem 1.1 and Theorem 1.2 in this section. For each item e, we use π_e to denote the probability distribution of the weight w_e of e. We use F to denote the set of feasible solutions. For example, in the minimum spanning tree problem, U is the set of edges and F is the set of all spanning trees. Since we are satisfied with an additive approximation, we can assume w.l.o.g. that the utility function satisfies μ(x) = 0 for x ≥ C for some constant C (e.g., we can choose C to be a constant such that μ(x) ≤ ε for x ≥ C). The support of each w_e is assumed to be a subset of [0, C]. By scaling, we can assume C = 1; thus μ(x) = 0 for x ≥ 1 and w_e ∈ [0, 1] for all e. We also assume μ is L-Lipschitz where L is a constant that does not depend on n. It is straightforward to extend our analysis to the case where L depends on n. We first consider the general EUM problem and then focus on the EUM-Mono problem where the utility function is monotone nonincreasing.
We start by bounding the total expected weight of a solution S whose utility is not negligible. This directly translates to an upper bound on the number of items with large expected weight in S, which we handle separately. The proof is fairly standard and can be found in the appendix.
Lemma 2.1.
Suppose each item e in a set S has a nonnegative random weight w_e taking values from [0, 1]. Then, for any 0 < ε < 1, if Pr[w(S) ≤ 1] ≥ ε, then E[w(S)] = O(log(1/ε)).
Let S* denote the optimal feasible set and OPT the optimal value. If OPT ≤ ε, then any feasible solution achieves the desired approximation guarantee, since its utility is at least 0 ≥ OPT − ε. Hence, we focus on the other case, where OPT > ε. We call an item heavy if its expected weight is at least a certain threshold (a suitable polynomial in ε); otherwise we call it light. By Lemma 2.1, we can see that the number of heavy items in S* is bounded by a constant depending only on ε.
Enumerating Heavy Elements: We enumerate all possible sets of heavy items of size at most the constant bound from Lemma 2.1. There are at most polynomially many such possibilities. Suppose we successfully guess the set of heavy items in S*. In the following parts, we mainly consider the question of how, given a set of heavy items, to choose a set of light items such that their union is a feasible solution whose expected utility is close to optimal.
Dealing with Light Elements: Unlike heavy items, there may be many light items in S*, which makes enumeration computationally prohibitive. Our algorithm consists of the following steps. First, we discretize the weight distributions of all items. After the discretization, there are only a constant number of discretized weight values in [0, 1]. The discretized distribution can thus be thought of as a vector with a constant number of dimensions. Then, we argue that for a set S of light items (with certain conditions), the distribution of the sum of their discretized weights behaves similarly to a single item whose weight follows a compound Poisson distribution. The compound Poisson distribution is completely determined by a constant dimensional vector (which we call the signature of S), which is the sum of the distribution vectors of the items in S. The argument is carried out by using the Poisson approximation theorem developed by Le Cam [44]. Then, our task amounts to enumerating all possible signatures, and checking whether there is a set of light items that has the given signature and forms a feasible set together with the guessed heavy items. Since the number of possible signatures is polynomial, our algorithm runs in polynomial time. Now, we present the details of our discretization method.
2.1 Discretization
In this section, we discuss how to discretize the weight distributions of the items, using a parameter ε. W.l.o.g., we assume the range of w_e is [0, 1] for all e. Our discretization is similar in many parts to the one in [13]; however, ours is much simpler.
For an item e, we say w_e realizes to a "large" size if the realized value is at least a threshold δ (a suitable polynomial in ε). Otherwise, we say w_e realizes to a "small" size. The discretization consists of two steps. We discretize the small size region in step 1 and the large size region in step 2. We use w'_e to denote the size after discretization and π'_e its distribution.
Step 1. Small size region In the small size region, follows a Bernoulli distribution, taking only values and . The probability values and are set such that
More formally, suppose w.l.o.g. that there is a value such that . We create a mapping between and as follows:
In the appendix, we discuss the case where such a value does not exist.
Step 2. Large size region. If $X_i$ realizes to a large size, we simply discretize it by rounding the realized size down to the nearest multiple of the grid step.
The above two discretization steps are used throughout this paper. We denote the set of possible discretized sizes by $\mathcal{S}$. Note that sizes realized with probability $0$ are also included in $\mathcal{S}$. It is straightforward to see that $|\mathcal{S}|$ is a constant depending only on $\epsilon$. This finishes the description of the discretization.
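To make the two steps concrete, here is a minimal sketch in Python. The threshold DELTA, the grid $1/\mathrm{INV}$, and the expectation-preserving Bernoulli rounding are illustrative assumptions, not the paper's exact parameters (which are fixed polynomials in the error parameter $\epsilon$).

```python
import math
import random

# A minimal sketch of the two discretization steps, with assumed
# parameters: DELTA is the small/large boundary and 1/INV the grid step.
DELTA = 0.01     # assumed boundary between "small" and "large" sizes
INV = 100        # assumed inverse grid step for large sizes

def discretize(x: float) -> float:
    """Map one realized size x in [0, 1] to a discretized value."""
    if x >= DELTA:
        # Step 2: round a large size down to a multiple of 1/INV.
        return math.floor(x * INV) / INV
    # Step 1: replace a small size by a Bernoulli value in {0, DELTA}
    # whose mean equals x, preserving the small-region expectation.
    return DELTA if random.random() < x / DELTA else 0.0
```

Averaged over many realizations, the small-size branch neither inflates nor deflates the expected total size, which is what the Bernoulli rounding is for.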
The following lemma states that for a set of items, the behavior of the sum of their discretized distributions is very close to that of their original distributions.
Lemma 2.2.
Let $S$ be a set of items, and write $X(S) = \sum_{b_i \in S} X_i$ and $\tilde{X}(S) = \sum_{b_i \in S} \tilde{X}_i$. For any $t$, the distribution functions of $X(S)$ and $\tilde{X}(S)$ differ by at most $O(\epsilon)$, up to an $O(\epsilon)$ shift of the threshold $t$.
Lemma 2.3.
For any set $S$ of items, $\bigl|\,\mathbb{E}[\mu(X(S))] - \mathbb{E}[\mu(\tilde{X}(S))]\,\bigr| \leq O(\epsilon)$, where $\mu$ is the (Lipschitz) utility function.
Proof.
For a set $S$, we use $F$ and $\tilde{F}$ to denote the CDFs of $X(S)$ and $\tilde{X}(S)$, respectively. We first observe that
The second equation follows from applying integration by parts, and the last holds because $\mu$ is Lipschitz. From Lemma 2.2, we can see that
In fact, the above can be seen as follows:
The proof for the other direction is similar and we omit it here. ∎
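For intuition, the integration-by-parts step in the proof above can be sketched as follows (a sketch under the simplifying assumption that $\mu$ is differentiable with $|\mu'| \le 1$; the general Lipschitz case follows by approximation):

```latex
\mathbb{E}[\mu(X(S))] - \mathbb{E}[\mu(\tilde{X}(S))]
  = \int \mu(t)\,\mathrm{d}\bigl(F(t) - \tilde{F}(t)\bigr)
  = \int \bigl(\tilde{F}(t) - F(t)\bigr)\,\mu'(t)\,\mathrm{d}t
  \le \int \bigl|\tilde{F}(t) - F(t)\bigr|\,\mathrm{d}t ,
```

where the boundary terms of the integration by parts vanish because both CDFs agree at the endpoints of the support.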
2.2 Poisson Approximation
For an item $b$, we define its signature $\mathsf{Sg}(b)$ to be the vector whose coordinate for each nonzero discretized size $s$ is the (suitably rounded) probability $\Pr[\tilde{X}_b = s]$. For a set $S$ of items, its signature is defined to be the sum of the signatures of all items in $S$, i.e., $\mathsf{Sg}(S) = \sum_{b \in S} \mathsf{Sg}(b)$.
We use $\mathsf{Sg}_k(S)$ to denote the $k$th coordinate of $\mathsf{Sg}(S)$. By Lemma 2.1, each coordinate of an item's signature is bounded, and hence each coordinate of $\mathsf{Sg}(S)$ takes only polynomially many values. Therefore, the number of possible signatures is polynomial in $n$.
For an item $b$, we let $\bar{X}_b$ be the random variable that takes each nonzero discretized size with probability equal to the corresponding coordinate of $\mathsf{Sg}(b)$, and takes value $0$ with the rest of the probability mass. Similarly, we use $\bar{X}(S)$ to denote $\sum_{b \in S} \bar{X}_b$ for a set $S$ of items.
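As a concrete illustration, the signature computation can be sketched as follows; the list SIZES of nonzero discretized sizes and the rounding unit GRAIN are placeholder values, not the paper's parameters.

```python
from fractions import Fraction

SIZES = [Fraction(1, 10), Fraction(2, 10), Fraction(3, 10)]  # assumed sizes
GRAIN = Fraction(1, 100)                                     # assumed unit

def signature(dist):
    """dist maps each nonzero discretized size to its probability.
    Round each probability down to a multiple of GRAIN."""
    return tuple((dist.get(s, Fraction(0)) // GRAIN) * GRAIN for s in SIZES)

def set_signature(dists):
    """Signature of a set: coordinatewise sum of item signatures."""
    return tuple(sum(col) for col in zip(*(signature(d) for d in dists)))
```

Since every coordinate is a multiple of GRAIN and bounded, only polynomially many signature vectors can arise, which is what makes the later enumeration feasible.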
The following lemma shows that it is sufficient to enumerate all possible signatures for the set of light items.
Lemma 2.4.
Let $S_1, S_2$ be two sets of light items such that $\mathsf{Sg}(S_1) = \mathsf{Sg}(S_2)$. Then, the total variation distance between $\tilde{X}(S_1)$ and $\tilde{X}(S_2)$ satisfies $d_{\mathrm{TV}}\bigl(\tilde{X}(S_1), \tilde{X}(S_2)\bigr) \le O(\epsilon)$.
The following Poisson approximation theorem by Le Cam [44], rephrased in our language, is essential for proving Lemma 2.4. Suppose we are given an $m$-dimensional vector $\bar{\lambda} = (\lambda_1, \dots, \lambda_m)$ with nonnegative coordinates. Let $\lambda = \sum_{k=1}^{m} \lambda_k$. We say a random variable $Y$ follows the compound Poisson distribution corresponding to $\bar{\lambda}$ if it is distributed as $Y = \sum_{i=1}^{N} Y_i$, where $N$ follows a Poisson distribution with expected value $\lambda$ (denoted $N \sim \mathrm{Pois}(\lambda)$) and the $Y_i$s are i.i.d. random variables with $\Pr[Y_i = s_k] = \lambda_k / \lambda$ for $k = 1, \dots, m$.
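The definition translates directly into a sampler. The following is an illustrative sketch (using Knuth's multiplicative method for the Poisson draw, adequate for small $\lambda$), not part of the algorithm itself.

```python
import math
import random

def sample_cpd(lam_vec, sizes, rng=random):
    """One draw from the compound Poisson distribution for lam_vec:
    N ~ Pois(lam) with lam = sum(lam_vec), then add N i.i.d. summands,
    where sizes[k] is chosen with probability lam_vec[k] / lam."""
    lam = sum(lam_vec)
    # N ~ Pois(lam) via Knuth's multiplication method.
    n, p, limit = 0, rng.random(), math.exp(-lam)
    while p > limit:
        n, p = n + 1, p * rng.random()
    return sum(rng.choices(sizes, weights=lam_vec, k=n))
```

The mean of such a draw is $\sum_k \lambda_k s_k$, i.e., exactly the inner product of the signature with the size vector.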
Lemma 2.5.
[44] Let $X_1, \dots, X_n$ be independent random variables taking integer values in $\{0, 1, \dots, m\}$, and let $X = \sum_{i=1}^{n} X_i$. Let $p_i = \Pr[X_i \neq 0]$ and $\lambda_k = \sum_{i=1}^{n} \Pr[X_i = k]$ for $k = 1, \dots, m$. Let $Y$ follow the compound Poisson distribution corresponding to the vector $(\lambda_1, \dots, \lambda_m)$. Then, the total variation distance between $X$ and $Y$ can be bounded as follows: $d_{\mathrm{TV}}(X, Y) \le \sum_{i=1}^{n} p_i^2$.
Proof of Lemma 2.4: By the definition of the signature, the distributions of $\tilde{X}_b$ and $\bar{X}_b$ are within small total variation distance for any item $b$. Since $S_1$ and $S_2$ each contain at most $n$ items, by the standard coupling argument, we have that
If we apply Lemma 2.5 to both $\bar{X}(S_1)$ and $\bar{X}(S_2)$, we can see that they both correspond to the same compound Poisson distribution, say $Y$, since their signatures are the same. Moreover, since the total variation distance is a metric, we have that
The last equality holds since, for any light item $b$,
and
∎
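To see the bound of Lemma 2.5 in action, the following self-contained check compares, for a small assumed instance, the exact distribution of the sum against its compound Poisson approximation; the item values and probabilities here are arbitrary illustrative choices.

```python
import math

# 30 independent variables: X_i equals `value` with probability p, else 0.
items = [(1, 0.02)] * 15 + [(2, 0.03)] * 15

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# Exact pmf of X = sum_i X_i, by repeated convolution.
exact = [1.0]
for value, p in items:
    exact = convolve(exact, [1.0 - p] + [0.0] * (value - 1) + [p])

# Compound Poisson pmf: Poisson(lam)-mixture of n-fold convolutions of
# the severity distribution (mass lam_k / lam on value k).
lam_k = {1: 15 * 0.02, 2: 15 * 0.03}
lam = sum(lam_k.values())
severity = [0.0, lam_k[1] / lam, lam_k[2] / lam]
cpd = [0.0] * len(exact)
power = [1.0]                        # severity^{*0}
for n in range(len(exact)):          # Poisson tail beyond is negligible
    weight = math.exp(-lam) * lam ** n / math.factorial(n)
    for k, q in enumerate(power[: len(cpd)]):
        cpd[k] += weight * q
    power = convolve(power, severity)

tv = 0.5 * sum(abs(a - b) for a, b in zip(exact, cpd))
le_cam_bound = sum(p * p for _, p in items)   # sum_i Pr[X_i != 0]^2
```

On this instance the computed distance comes out well below the bound, as the lemma predicts.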
2.3 Approximation Algorithm for EUM
Now, everything is in place to present our approximation algorithm and the analysis.
In step (a), we can use the pseudopolynomial time algorithm for the exact version of the problem to find a set with signature exactly equal to the guessed one. Since a signature is a vector with a constant number of coordinates and the value of each coordinate is bounded by $O(n)$, it can be encoded by an integer of polynomially bounded magnitude. Thus the pseudopolynomial time algorithm actually runs in polynomial time. Since there are at most polynomially many heavy item sets and polynomially many signatures to enumerate, the algorithm runs in polynomial time overall. Finally, we present the analysis of the performance guarantee of the algorithm.
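A sketch of what step (a) looks like when the exact algorithm is a subset-sum style dynamic program over signatures; the integer encoding of signatures and the capacity vector `cap` are illustrative assumptions.

```python
def achievable_signatures(item_sigs, cap):
    """item_sigs: one small integer vector per light item (its signature,
    with probabilities expressed in units of the rounding grain).
    cap: coordinatewise upper bound on signatures (O(n) per coordinate).
    Returns every signature attained by some subset of the items."""
    dim = len(cap)
    reach = {(0,) * dim}                      # signature of the empty set
    for sig in item_sigs:
        extra = set()
        for r in reach:
            cand = tuple(a + b for a, b in zip(r, sig))
            if all(c <= m for c, m in zip(cand, cap)):
                extra.add(cand)
        reach |= extra
    return reach
```

Tracking, alongside each reachable signature, one subset that attains it recovers the actual item set; the table has polynomially many entries since each coordinate is bounded by `cap`.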
Proof of Theorem 1.1: Assume the optimal feasible set is $S^* = H^* \cup L^*$, where items in $H^*$ are heavy and items in $L^*$ are light. Assume our algorithm has guessed $H^*$ and $\mathsf{Sg}(L^*)$ correctly. Since there is a pseudopolynomial algorithm, we can find a set $L$ of light items such that $\mathsf{Sg}(L) = \mathsf{Sg}(L^*)$. By Lemma 2.4, the total variation distance between $\tilde{X}(L)$ and $\tilde{X}(L^*)$ is at most $O(\epsilon)$. Therefore, we can get that
Moreover, we have that
where $f$ denotes the PDF of the corresponding discretized sum. It is time to derive our final result:
2.4 Approximation Algorithm for EUMMono
We prove Theorem 1.2 in this subsection. Recall that EUMMono is a special case of EUM in which the utility function is monotone nonincreasing. The algorithm is the same as that for EUM, except that we adopt the new step (a), as follows.
Lemma 2.6.
We are given two vectors $\bar{\lambda} \le \bar{\lambda}'$ (coordinatewise). $Y$ and $Y'$ are random variables following the CPDs corresponding to $\bar{\lambda}$ and $\bar{\lambda}'$, respectively. Then, $Y'$ stochastically dominates $Y$.
Proof.
We are not aware of an existing proof of this intuitive fact, so we present one here for completeness. The lemma can be proved directly from the definition of a CPD, but that proof is tedious. Instead, we use Lemma 2.5 to give an easy proof, as follows. Consider the sum of a large number $n$ of nonnegative random variables, each having a very small expectation, whose signature is $\bar{\lambda}'$. As $n$ goes to infinity and each expectation goes to $0$, the distribution of the sum approaches that of $Y'$, since their total variation distance approaches $0$ by Lemma 2.5. We can select a subset of the variables whose signature approaches $\bar{\lambda}$. The sum over this subset, which approaches $Y$ in the limit, is clearly stochastically dominated by the total sum. ∎
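The lemma can also be seen through the superposition property of compound Poisson distributions: a CPD for $\bar{\lambda}'$ is distributed as the independent sum of a CPD for $\bar{\lambda}$ and a CPD for $\bar{\lambda}' - \bar{\lambda}$, and the second summand is nonnegative. The following sketch exhibits this coupling on toy vectors (our illustration, not the paper's proof):

```python
import math
import random

def sample_cpd(lam_vec, sizes, rng):
    """One draw from the CPD for lam_vec over the given sizes."""
    lam = sum(lam_vec)
    n, p, limit = 0, rng.random(), math.exp(-lam)
    while p > limit:                      # Knuth's Poisson sampler
        n, p = n + 1, p * rng.random()
    return sum(rng.choices(sizes, weights=lam_vec, k=n))

rng = random.Random(0)
sizes = [1, 2]
lam1, lam2 = [0.3, 0.1], [0.5, 0.4]       # lam1 <= lam2 coordinatewise
diff = [b - a for a, b in zip(lam1, lam2)]
# Couple Y2 = Y1 + (independent CPD for the difference): Y2 >= Y1 always.
pairs = []
for _ in range(1000):
    y1 = sample_cpd(lam1, sizes, rng)
    pairs.append((y1, y1 + sample_cpd(diff, sizes, rng)))
```

Since the added term is a nonnegative random variable, every coupled pair satisfies $Y_2 \ge Y_1$, which is exactly a dominance coupling.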
Lemma 2.7.
Let $S_1, S_2$ be two sets of light items with $\mathsf{Sg}(S_1) \le \mathsf{Sg}(S_2)$ (coordinatewise). Then we have that, for any $t$, $\Pr[\tilde{X}(S_1) \ge t] \le \Pr[\tilde{X}(S_2) \ge t] + O(\epsilon)$.
Proof.
Let $Y_1$ and $Y_2$ be the compound Poisson distributions (CPDs) corresponding to $\mathsf{Sg}(S_1)$ and $\mathsf{Sg}(S_2)$, respectively. Denote $\bar{\kappa} = \mathsf{Sg}(S_2) - \mathsf{Sg}(S_1)$ and $\kappa = \sum_k \kappa_k$. Let $Y_\kappa$ be the CPD defined as $Y_\kappa = \sum_{i=1}^{N} Z_i$, where $N \sim \mathrm{Pois}(\kappa)$ and the $Z_i$s are i.i.d. random variables with $\Pr[Z_i = s_k] = \kappa_k / \kappa$ for each $k$. By Lemma 2.5 and the standard coupling argument, the total variation distance between $\tilde{X}(S_2)$ and $Y_2$ can be bounded by $O(\epsilon)$.
Since $\mathsf{Sg}(S_1) \le \mathsf{Sg}(S_2)$, $Y_1$ (the CPD corresponding to $\mathsf{Sg}(S_1)$) is stochastically dominated by $Y_2$ (the CPD corresponding to $\mathsf{Sg}(S_2)$) by Lemma 2.6. Therefore,
We also have the analogous total variation bound for $S_1$. Thus
This completes the proof of the lemma. ∎
Proof of Theorem 1.2: The proof is similar to the proof of Theorem 1.1. Assume the optimal feasible set is $S^* = H^* \cup L^*$, where items in $H^*$ are heavy and items in $L^*$ are light. Assume our algorithm has guessed correctly. Since there is a multidimensional PTAS, we can find a set $L$ of light items whose signature is coordinatewise at most that of $L^*$. By Lemma 2.7, we know that, for any $t$, $\Pr[\tilde{X}(L) \ge t] \le \Pr[\tilde{X}(L^*) \ge t] + O(\epsilon)$. Now, we can bound the expected utility loss for the discretized distributions:
Finally, we can show the performance guarantee of our algorithm:
The last inequality follows from Lemma 2.3. ∎
3 Stochastic Bin Packing
Recall that in the stochastic bin packing (SBP) problem, we are given a set $B$ of items and an overflow probability $\gamma$. The size of each item $b_i$ is an independent random variable $X_i$. The goal is to pack all the items of $B$ into bins of capacity $1$, using as few bins as possible, such that the overflow probability of each bin is at most $\gamma$. The main goal of this section is to prove Theorem 1.3.
W.l.o.g., we can assume that $\gamma \le 1 - \epsilon$, where $\epsilon$ is the error parameter. Otherwise, the relaxed overflow probability $\gamma + \epsilon$ is at least $1$, and we can pack all items in a single bin. Let the number of bins used in the optimal solution be $m^*$. In our algorithms, we relax the bin size to $1 + O(\epsilon)$, which is less than $2$. W.l.o.g., we assume the support of each $X_i$ is $[0,1]$. From now on, assume that our algorithm has guessed $m^*$ correctly. We use $\mathcal{B}_1, \dots, \mathcal{B}_{m^*}$ to denote the bins.
3.1 Discretization
We first discretize the size distributions of all items in $B$, using parameter $\epsilon$, as described in Section 2.1. Denote the discretized size of $X_i$ by $\tilde{X}_i$.
We call item $b_i$ a heavy item if its size distribution satisfies the heaviness condition; otherwise, $b_i$ is light. We need to further discretize the size distributions of the heavy items: we round down the probability of $\tilde{X}_i$ taking each nonzero value to a multiple of a fixed rounding grain. Denote the resulting random size by $\hat{X}_i$. We use $\mathcal{H}$ to denote the set of all discretized distributions for heavy items. We can see that $|\mathcal{H}|$ is a constant. Denote the distributions in $\mathcal{H}$ by $D_1, D_2, \dots$ (in an arbitrary order).
For a set $S$ of items, we use $S_H$ to denote the set of heavy items in $S$, and $S_L$ to denote the set of light items in $S$. We define the arrangement for the heavy items in $S$ to be the $|\mathcal{H}|$-dimensional vector:
where the $k$th coordinate is the number of heavy items in $S$ following the discretized size distribution $D_k$. Suppose we pack all items of $S$ into one bin. By Lemma 2.1 and the capacity constraint, the number of heavy items that fit in a bin is bounded by a constant. Therefore, the number of possible arrangements for a bin is bounded by a constant.
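The counting can be checked directly; here $D$ (the number of distinct heavy distributions) and $C$ (the maximum number of heavy items per bin) are small assumed constants, not the paper's values.

```python
import math
from itertools import product

D, C = 3, 4   # assumed constants: |H| and max heavy items per bin

# An arrangement is a D-vector of nonnegative counts with total at most C.
arrangements = [v for v in product(range(C + 1), repeat=D) if sum(v) <= C]
# By stars and bars there are exactly C(C + D, D) such vectors -- a
# constant once D and C are constants.
```

For $D = 3$ and $C = 4$ this gives $\binom{7}{3} = 35$ arrangements per bin.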
Let the signature of a light item be defined as in Section 2.2, with respect to the new parameters (note that the definition is slightly different from the previous one). The signature of a set of light items is defined to be the sum of its items' signatures. If $S$ consists of both heavy and light items, we use $\mathsf{Sg}(S)$ as shorthand for $\mathsf{Sg}(S_L)$. Moreover, for a set $S$, we define the rounded signature to be