Sharing within limits: Partial resource pooling in loss systems
† A preliminary version of this work appeared in the proceedings of COMSNETS 2016 [1].
Abstract
Fragmentation of expensive resources, e.g., spectrum for wireless services, between providers can introduce inefficiencies in resource utilization and worsen overall system performance. In such cases, resource pooling between independent service providers can be used to improve performance. However, for providers to agree to pool their resources, the arrangement has to be mutually beneficial. The traditional notion of resource pooling, which implies complete sharing, need not have this property. For example, under full pooling, one of the providers may be worse off and hence have no incentive to participate. In this paper, we propose partial resource sharing models as a generalization of full pooling, which can be configured to be beneficial to all participants.
We formally define and analyze two partial sharing models between two service providers, each of which is an Erlang loss system with the blocking probability as the performance measure. We show that there always exist partial sharing configurations that are beneficial to both providers, irrespective of the load and the number of circuits of each of the providers. A key result is that the Pareto frontier has at least one of the providers sharing all its resources with the other. Furthermore, full pooling may not lie inside this Pareto set. The choice of the sharing configurations within the Pareto set is formalized based on bargaining theory. Finally, large system approximations of the blocking probabilities in the quality-and-efficiency-driven regime are presented.
1 Introduction
High availability is an important requirement of many services like wireless communications, cloud computing, hospitals, and firefighting services. The resources required to provide these services are expensive — think spectrum and base stations for wireless communication, servers and associated infrastructure for cloud computing, medical equipment and doctors for hospitals, fire trucks and trained personnel for firefighting services. Service denial, i.e., the inability of the resources to satisfactorily meet a fraction of the demand, is an important performance measure for these services. When the demand is stochastic, the amount of resources required to provide a prescribed grade of service may be such that the utilization is low, especially in smaller systems. This means that small providers require more resources for a given service level, which in turn can make these services expensive for small providers. However, large systems experience statistical multiplexing gains and hence achieve economies of scale. Thus resource sharing, or resource pooling, can be useful when there are several independent entities providing similar services using similar resources.
Typically, resource pooling is assumed to involve combining the resources of all the participating providers and treating the combined system as one unit. In this paper we propose partial resource pooling as a generalization of such full pooling models. Specifically, we consider two loss systems modeled as M/M/N/N queues that operate independently in that they manage their own calls, but cooperate by pooling their servers partially as follows. When an overflow call arrives at one of the systems (i.e., all of that provider's own servers are busy), the other provider may loan one of its free servers, in which case the call is admitted. The server is loaned for the duration of the call. The overflow call is lost if the other provider chooses not to loan the server. The partial sharing model determines when such an overflow call is admitted. At one extreme is the no-pooling case, where all overflow calls are lost; at the other extreme is the full-pooling case, where all overflow calls are admitted whenever there is a free server.
As mentioned above, several resource pooling models are available in the literature, with the key feature being independent service systems, managed by independent decision makers, cooperating fully, acting as a single entity, and sharing the costs and/or benefits suitably. In other words, it is an all-or-nothing game, with the parties either pooling their resources completely or staying out of the coalition and operating on their own. These models typically use cooperative or coalitional game theoretic ideas to determine the answers to the following questions. (1) Which entities will form a cooperating unit? (2) How are the revenues and costs shared?
In [2, 3] independent wireless network operators share base station infrastructure and spectrum to efficiently serve their customers. Stable cost sharing arrangements between the network operators are explored in this setting. Note that the sharing model here involves complete pooling of the spectrum and the base stations, as opposed to the opportunistic sharing of resources with secondary users in cognitive radio systems (e.g., [4, 5]). In the system studied in [6], the cooperating entities choose the quantity of resources to provide a specified service grade and stable cost sharing arrangements are determined.
Server pooling has also been studied in the context of reengineering of manufacturing lines by modeling them as Jackson networks. Here several nodes (service stations) are combined into one service station that is capable of providing the services of all the components, e.g., [7] and references therein.
More abstract forms of resource pooling have also been considered in the queueing literature. In [8], cooperating single server queues are combined into one single server queue whose service rate is upper bounded by the sum of the capacities. The actual service rate is determined by a cost structure and the service grade. In [9], cooperation among queues to optimally invest in a common service capacity, or choose the optimal demand to serve as a common entity, is analyzed.
To motivate our break from the preceding literature, consider the following example of two M/M/N/N loss systems, e.g., cellular service providers with a fixed number of channels. Provider , with channels and a load of 88 Erlangs, has a blocking probability of and Provider , with channels and Erlangs load, has a blocking probability of . If the two providers are combined into one, the joint system would have a combined load of Erlangs served by channels with blocking probability of . Clearly, cooperation is beneficial to one provider but unacceptable to the other. And if blocking probability were the only performance measure, it is a case of “and never the twain shall meet.” The partial pooling mechanisms that we develop in this paper allow both operators to improve their performance.
The rest of the paper is organized as follows. In the next section, we introduce the system model and describe two partial sharing models: the bounded overflow sharing model and the probabilistic sharing model. The blocking probabilities under these models and their monotonicity properties are also derived. In Section 3, we characterize the Pareto frontier of the sharing configurations. The key result is that the Pareto frontier is nonempty and lies at the boundary of the space of all possible sharing configurations—one of the providers has to always yield its free servers to overflow calls of the other. In Section 4, we characterize the economics of partial sharing by treating the sharing that emerges as the solution of Nash bargaining, Kalai-Smorodinsky bargaining, egalitarian sharing (both parties experience the same benefit), and utilitarian sharing (maximize the system benefit). The utility sets over which these bargaining solutions are computed for our model do not satisfy the usual properties of convexity or comprehensiveness, making it less straightforward to guarantee the uniqueness of the bargaining solution. Nevertheless, using the monotonicity properties of the blocking probabilities shown in Section 2, we are able to show uniqueness of the Kalai-Smorodinsky and egalitarian solutions. Via numerical experiments, we demonstrate the contrasts between the different bargaining solutions, and also the potential benefits of partial resource pooling for both providers. In Section 5, we address the computational complexity of the blocking probabilities for large loss systems [10]. We consider large system limits under the well-known quality-and-efficiency-driven (QED) regime. Our large system analysis provides computationally light, yet accurate, approximations of the blocking probabilities for realistic system settings.
Finally, we conclude with a discussion on alternate sharing models, connections to more familiar models from the circuit multiplexing literature, alternate applications, and future work in Section 6.
2 Model and Preliminaries
In this section, we describe our system model, propose our mechanisms for partial resource pooling, and state some preliminary results.
We begin by describing the baseline model with no resource pooling. We consider two service providers. Each provider is modeled as an M/M/N/N queue, i.e., an Erlang-B loss system. Specifically, each provider has a fixed number of servers/circuits. Calls arrive for service at each provider according to a Poisson process. When a call arrives, it begins service at a free server if one is available. If all servers are busy, then the call is blocked. The holding times (a.k.a. service times) of calls at a provider are i.i.d. We assume the mean holding time is finite; the offered load seen by a provider is then the product of its arrival rate and its mean holding time. With no resource pooling between the providers, it is well known that the steady state call blocking probability is given by the Erlang-B formula:
It is also well known that the steady state call blocking probability is insensitive to the distribution of the call holding times, i.e., it depends only on the average call holding time. Moreover, the blocking probability depends on the incoming workload only through the offered load.
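As a purely illustrative sanity check, the Erlang-B formula can be evaluated with the standard stable recursion. The instance below uses hypothetical capacities and loads in the spirit of the motivating example above (the paper's actual figures are not reproduced here), and exhibits the same tension: full pooling lowers the overall blocking, yet leaves the lightly loaded provider worse off.

```python
from math import isclose

def erlang_b(n_servers: int, load: float) -> float:
    """Erlang-B blocking probability via the stable recursion:
    B(0, a) = 1,  B(n, a) = a*B(n-1, a) / (n + a*B(n-1, a))."""
    b = 1.0
    for n in range(1, n_servers + 1):
        b = load * b / (n + load * b)
    return b

# Hypothetical instance: a large, lightly loaded provider and a small,
# overloaded one.
N1, a1 = 100, 80.0   # provider 1: 100 circuits, 80 Erlangs
N2, a2 = 10, 12.0    # provider 2: 10 circuits, 12 Erlangs

b1 = erlang_b(N1, a1)                 # small: large system, 80% utilization
b2 = erlang_b(N2, a2)                 # large: offered load exceeds capacity
b_pool = erlang_b(N1 + N2, a1 + a2)   # full pooling of the combined system

# Full pooling lowers the overall (load-weighted) blocking probability ...
overall_no_pool = (a1 * b1 + a2 * b2) / (a1 + a2)
assert b_pool < overall_no_pool
# ... but provider 1 is strictly worse off, so it has no incentive to pool.
assert b1 < b_pool < b2
```

The recursion avoids the overflow-prone factorials in the closed-form Erlang-B expression, which matters for the large systems considered in Section 5.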
Next, we describe the proposed partial resource pooling models.
2.1 Probabilistic sharing model
The probabilistic sharing model is parameterized by a pair of sharing probabilities. Informally, under this model, each provider accepts an overflow call from the other provider with its configured probability. (When referring to the provider labeled i, we use the complementary label to refer to the other provider.)
Formally, the probabilistic sharing model is defined as follows. Let the system state track the number of active calls of each provider. When a call of a provider arrives,

If and the call is admitted

If and the call is admitted with probability

If the call is blocked
The vector defines the (partial) sharing configuration. Note that it captures the extent to which each provider pools its resources with the other. In particular, the all-zeros configuration corresponds to no pooling, and the all-ones configuration corresponds to complete pooling. Moreover, note that the probabilistic sharing model does not keep track of whether an ongoing call of a provider is occupying one of its own servers or one of the other provider's. This simplification, which makes the model analytically tractable, is identical to the maximum packing or call repacking model of [11, 12] and has been used extensively in the literature. One interpretation of this assumption is that once a server becomes free, if any ongoing calls of its owner are being served on borrowed servers, one of those is instantaneously shifted to the free server.
Next, we characterize the steady state blocking probabilities under this partial sharing model. To do so, we define the following subsets of the state space.
For
Here refers to the set of feasible states, corresponds to the feasible states when all the servers are busy, and are the states in which calls of are accepted with probability
Lemma 1.
Under the probabilistic sharing model, the steady state blocking probability for Provider is given by
where
A key takeaway from Lemma 1 is that under the probabilistic sharing model, the steady state blocking probabilities remain insensitive to the distributions of the call holding times. Moreover, each provider’s blocking probability depends on the incoming workload only through the vector of offered loads. Finally, note that
Proof.
Assuming that the call holding times are exponentially distributed, the state of the system evolves as a continuous time Markov chain (CTMC) over the state space defined above. It is easy to check that this CTMC is time-reversible and that its invariant distribution has a product form:
The steady state blocking probability is then obtained by invoking the PASTA property.
The insensitivity of the blocking probabilities to the call holding time distributions is a direct consequence of the reversibility of the above CTMC [13]. ∎
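A minimal numerical sketch of Lemma 1: assuming exponential holding times and repacking, one natural realization of the product-form invariant distribution has each provider's overflow arrivals geometrically thinned by the other's sharing probability, and the blocking probabilities follow from PASTA by direct enumeration of the finite state space. The implementation and instance sizes below are illustrative; the sketch recovers the Erlang-B formula at the two extreme configurations.

```python
from math import factorial, isclose

def erlang_b(n: int, a: float) -> float:
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def prob_sharing_blocking(N1, N2, a1, a2, p1, p2):
    """Blocking probabilities (B1, B2) under the probabilistic sharing
    model with repacking.  State (n1, n2) counts active calls of each
    provider; p_i is the probability that provider i lends a free
    server to an overflow call of the other provider."""
    N = N1 + N2
    weight = {}
    for n1 in range(N + 1):
        for n2 in range(N + 1 - n1):
            w = (a1 ** n1 / factorial(n1)) * (a2 ** n2 / factorial(n2))
            w *= p2 ** max(n1 - N1, 0)   # provider 1's overflow thinned by p2
            w *= p1 ** max(n2 - N2, 0)   # provider 2's overflow thinned by p1
            weight[n1, n2] = w
    Z = sum(weight.values())
    B1 = B2 = 0.0
    for (n1, n2), w in weight.items():
        pi = w / Z
        if n1 + n2 == N:          # no free server anywhere: both blocked
            B1 += pi
            B2 += pi
        else:
            if n1 >= N1:          # overflow arrival of 1, refused w.p. 1 - p2
                B1 += pi * (1 - p2)
            if n2 >= N2:          # overflow arrival of 2, refused w.p. 1 - p1
                B2 += pi * (1 - p1)
    return B1, B2

# Sanity checks against the two extremes (hypothetical instance):
N1, N2, a1, a2 = 10, 5, 8.0, 4.0
b1, b2 = prob_sharing_blocking(N1, N2, a1, a2, 0.0, 0.0)   # no pooling
assert isclose(b1, erlang_b(N1, a1)) and isclose(b2, erlang_b(N2, a2))
b1, b2 = prob_sharing_blocking(N1, N2, a1, a2, 1.0, 1.0)   # full pooling
assert isclose(b1, erlang_b(N1 + N2, a1 + a2))
assert isclose(b2, erlang_b(N1 + N2, a1 + a2))
```

At full pooling both providers see the Erlang-B blocking of the combined system, since the weights then sum along diagonals to the combined-load Poisson terms.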
2.2 Bounded overflow pooling model
The bounded overflow (BO) model is parameterized by a pair of sharing limits. Informally, under the BO model, each provider accepts up to a limited number of overflow calls from the other provider; the limit is thus indicative of the extent to which a provider shares its resources. We use randomization to let the limits take real values: specifically, a provider always admits overflow calls up to the integer part of its limit, and admits one further overflow call with a probability equal to the fractional part of the limit.
Formally, the BO model is defined as follows. Recall that the state tracks the number of active calls of each provider. When a call of a provider arrives,

If and the call is admitted

If and the call is admitted with probability

Else, the call is blocked
We refer to the tuple of limits as the (partial) sharing configuration. Under the BO model, a provider can have at most its own server count plus the other provider’s sharing limit in concurrent calls. Note that zero limits correspond to no resource pooling, and maximal limits correspond to full pooling between the providers. Finally, we note that the BO model also assumes call repacking [11, 12].
Next, we characterize the blocking probability of each provider under the BO model. To express the blocking probabilities, we define the following subsets of the state space.
For
Here refers to the set of feasible states, corresponds to the feasible states when all the servers are busy, is the set of feasible states for which arriving calls of are blocked due to the constraint on the number of overflow calls, and are the states for which calls of are accepted with probability
The following lemma characterizes the blocking probabilities of both providers under the BO partial sharing model.
Lemma 2.
Under the bounded overflow sharing model, the steady state blocking probability for provider is given by
where
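Lemma 2 can likewise be checked numerically for integer sharing limits (the randomized fractional limits are omitted in this sketch). Assuming repacking, the stationary weights are the product-form weights truncated to the feasible region, and a call is blocked when the system is full or the relevant overflow cap is reached. Instance parameters below are hypothetical.

```python
from math import factorial, isclose

def erlang_b(n: int, a: float) -> float:
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def bo_blocking(N1, N2, a1, a2, k1, k2):
    """Blocking probabilities (B1, B2) under the bounded-overflow model
    with integer sharing limits: provider i lends at most k_i of its
    servers, so n1 <= N1 + k2, n2 <= N2 + k1, and n1 + n2 <= N1 + N2."""
    N = N1 + N2
    weight = {}
    for n1 in range(min(N1 + k2, N) + 1):
        for n2 in range(min(N2 + k1, N - n1) + 1):
            weight[n1, n2] = (a1 ** n1 / factorial(n1)) * (a2 ** n2 / factorial(n2))
    Z = sum(weight.values())
    B1 = B2 = 0.0
    for (n1, n2), w in weight.items():
        pi = w / Z
        if n1 + n2 == N or n1 == N1 + k2:   # full, or 1's overflow cap reached
            B1 += pi
        if n1 + n2 == N or n2 == N2 + k1:   # full, or 2's overflow cap reached
            B2 += pi
    return B1, B2

# Sanity checks against the two extremes (hypothetical instance):
N1, N2, a1, a2 = 10, 5, 8.0, 4.0
b1, b2 = bo_blocking(N1, N2, a1, a2, 0, 0)       # no pooling
assert isclose(b1, erlang_b(N1, a1)) and isclose(b2, erlang_b(N2, a2))
b1, b2 = bo_blocking(N1, N2, a1, a2, N1, N2)     # full pooling
assert isclose(b1, erlang_b(N1 + N2, a1 + a2))
assert isclose(b2, erlang_b(N1 + N2, a1 + a2))
```

Unlike the probabilistic model, the admission decisions here are deterministic (for integer limits), so the stationary distribution is simply the truncation of the independent product form to the constrained region.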
2.3 Monotonicity properties of the blocking probabilities
We conclude this section by collecting some monotonicity properties of the blocking probabilities under the above partial sharing models. These properties play a key role in our analysis of the game theoretic aspects of partial sharing in Sections 3 and 4.
When stating results that apply to both sharing models, we refer to the steady state blocking probability of Provider as with the understanding that this represents

under the probabilistic sharing model,

under the bounded overflow sharing model (i.e., ).
Note that the overall steady state blocking probability of the system is given by
Our monotonicity results are summarized in the following theorem.
Theorem 1.
Under the probabilistic as well as the bounded overflow partial sharing models, the steady state blocking probabilities satisfy the following properties, for

is a strictly increasing function of

is a strictly decreasing function of

If then is a strictly decreasing function of
Theorem 1 highlights the impact of an increase in a provider’s sharing parameter on the blocking probabilities of both providers, as well as on the overall blocking probability. In particular, an increase in one provider’s sharing parameter (i.e., in the extent to which it pools its servers with the other) decreases the fraction of blocked calls at the other provider at the expense of increasing the fraction of blocked calls at its own. Note that Statements 1 and 2 imply that the no-pooling configuration is the unique Nash equilibrium between the providers, assuming that the utility of each provider is a strictly decreasing function of its blocking probability. This means that a non-cooperative interaction sans signalling would not yield a mutually beneficial partial sharing configuration between the providers. In contrast, we show in Section 4 that a bargaining-based interaction would indeed result in mutually beneficial partial sharing configurations.
Finally, Statement 3 of Theorem 1 highlights that so long as the mean call holding times are matched across both providers, an increase in either provider’s sharing parameter results in an overall reduction in the call drop probability of the system. This is because increased sharing provides additional opportunities for calls to get admitted when there are free circuits. In particular, Statement 3 implies that complete pooling minimizes the overall blocking probability of the system (when the mean holding times are matched).
Note that even though the statement of Theorem 1 applies compactly to both sharing models, a separate proof is required for each model. We provide the proof of Theorem 1 for the bounded overflow sharing model in Appendix A, and for the probabilistic sharing model in Appendix D. It is important to point out that while the statement of Theorem 1 seems intuitive, the proof is fairly nontrivial. In particular, our proof of Statement 3 for the bounded overflow model involves a subtle sample path argument (see Appendix A).
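The monotonicity properties of Theorem 1 are easy to observe numerically. The sketch below sweeps provider 2's (integer) sharing limit in the bounded overflow model on a hypothetical symmetric instance and checks Statements 1–3; the load-weighted overall blocking implicitly assumes equal mean holding times, as Statement 3 requires.

```python
from math import factorial

def bo_blocking(N1, N2, a1, a2, k1, k2):
    """Blocking (B1, B2) under the bounded-overflow model with integer
    limits: provider i lends at most k_i servers to the other."""
    N = N1 + N2
    weight = {}
    for n1 in range(min(N1 + k2, N) + 1):
        for n2 in range(min(N2 + k1, N - n1) + 1):
            weight[n1, n2] = (a1 ** n1 / factorial(n1)) * (a2 ** n2 / factorial(n2))
    Z = sum(weight.values())
    B1 = B2 = 0.0
    for (n1, n2), w in weight.items():
        pi = w / Z
        if n1 + n2 == N or n1 == N1 + k2:
            B1 += pi
        if n1 + n2 == N or n2 == N2 + k1:
            B2 += pi
    return B1, B2

# Symmetric hypothetical instance; provider 1's limit k1 is held fixed
# while provider 2's limit k2 (its willingness to lend) increases.
N1 = N2 = 6
a1 = a2 = 5.0
k1 = 3
prev = None
for k2 in range(N2 + 1):
    b1, b2 = bo_blocking(N1, N2, a1, a2, k1, k2)
    # Load-weighted overall blocking; valid when mean holding times match.
    overall = (a1 * b1 + a2 * b2) / (a1 + a2)
    if prev is not None:
        assert b2 > prev[1]        # Statement 1: B2 strictly increases in k2
        assert b1 < prev[0]        # Statement 2: B1 strictly decreases in k2
        assert overall < prev[2]   # Statement 3: overall blocking decreases
    prev = (b1, b2, overall)
```

The sweep makes the trade-off concrete: each extra server that provider 2 is willing to lend helps provider 1 and the system as a whole, while hurting provider 2 itself.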
3 Efficient Partial Sharing Configurations
We have seen that complete resource pooling between providers is not necessarily stable, in the sense that it is not guaranteed to be beneficial to both providers. Having defined mechanisms for partial resource sharing in Section 2, the natural questions that arise are:

Do there exist stable partial sharing configurations?

If so, can one characterize the Pareto frontier of the space of partial sharing configurations?
The goal of this section is to address the above questions.
First, we prove that under both the sharing mechanisms defined in Section 2, there exist stable partial sharing configurations, i.e., partial sharing configurations that result in a strictly lower blocking probability for each provider compared to the case of no pooling. Next, we focus on characterizing the set of Pareto-efficient partial sharing configurations. Intuitively, this is the set of ‘efficient’ sharing configurations, over which it is not possible to lower the blocking probability of one provider without increasing that of the other. Our main result is that any Pareto-efficient sharing configuration has at least one provider pooling all of its servers (a provider pooling all its servers always yields a free server to an overflow call from the other provider). Intuitively, efficient partial sharing configurations involve the more congested provider pooling all of its servers, enabling both providers to benefit from the resulting statistical economies of scale. Finally, we provide an exact characterization of the set of Pareto-efficient sharing configurations (a.k.a. the Pareto frontier) under the probabilistic and bounded overflow partial sharing models.
We begin by defining ‘stable’ partial sharing configurations.
Definition 1.
A sharing configuration is QoS-stable if each provider’s blocking probability under it is strictly lower than under no pooling.
The following lemma guarantees the existence of QoS-stable sharing configurations.
Lemma 3.
Under the probabilistic as well as the bounded overflow partial sharing models, the set of QoS-stable partial sharing configurations is nonempty.
Lemma 3 essentially validates our partial sharing mechanisms. Specifically, it asserts that even when the providers are highly asymmetric with respect to capacity and/or offered load, and even when complete resource pooling is not beneficial to one of the providers, there exists a partial sharing configuration that is beneficial to both providers. We omit the proof of Lemma 3 since it is a direct consequence of Lemma 4 below.
Now that we are certain that mutually beneficial partial sharing configurations exist, we turn to the characterization of the set of efficient configurations. We begin by defining Paretoefficient sharing configurations.
Definition 2.
A sharing configuration is Pareto-efficient if

is QoS-stable,

there does not exist a sharing configuration such that for all and for some
Condition (2) above is the standard definition of Pareto-efficiency—a configuration is Pareto-efficient if it is not possible to enhance the utility of one party (the utility of a provider being a strictly decreasing function of its blocking probability) without diminishing the utility of the other. Since our interest is in capturing the set of configurations that the providers could potentially agree upon, it is also natural to impose the requirement that each provider stands to benefit from the partial sharing agreement; this is captured by Condition (1) in the definition.
Our main result is that at any Paretoefficient sharing configuration, at least one provider pools all of its servers.
Theorem 2.
Under the probabilistic as well as the bounded overflow partial sharing models, the set of Pareto-efficient sharing configurations is nonempty. Moreover, any Pareto-efficient sharing configuration has at least one provider pooling all of its servers.
Intuitively, if the providers are symmetric, full pooling is Pareto-efficient, thanks to the statistical economies of scale in the pooled system. Theorem 2 highlights that under general (possibly asymmetric) settings, where full pooling may not be QoS-stable, efficient configurations still involve at least one provider pooling all its servers. Indeed, statistical economies of scale lie at the heart of this result as well, as highlighted by Lemma 4 below, which forms the basis of the proof of Theorem 2.
Lemma 4.
Under the probabilistic as well as the bounded overflow partial sharing models, for any there exists such that
Lemma 4 implies that at any sharing configuration, it is possible to strictly improve the blocking probabilities of both providers by increasing both components of the configuration in a suitable direction. (It is not hard to see that the blocking probabilities under the probabilistic sharing model, characterized in Lemma 1, are continuously differentiable. For the bounded overflow model, the blocking probabilities characterized in Lemma 2 are continuous, and differentiable except where a sharing limit is an integer, in which case the partial left and right derivatives exist. Thus, for the bounded overflow model, the gradients in the statement of Lemma 4 are understood to be composed of the right derivatives at integer sharing limits.)
Proof of Theorem 2.
We provide a unified proof of Theorem 2 for both partial sharing models. Invoking Lemma 4 at the no-pooling configuration, we conclude that the set of QoS-stable configurations is nonempty. Next, define
Consider the following optimization:
Since this is the maximization of a continuous function over a compact domain, a maximizer exists. It is easy to see that any maximizer is Pareto-efficient, implying that the set of Pareto-efficient configurations is nonempty. Finally, Lemma 4 implies that no Pareto-efficient configuration lies in the interior of the space of sharing configurations, implying that any Pareto-efficient configuration lies on its boundary. This completes the proof. ∎
It now remains to prove Lemma 4.
Proof of Lemma 4.
We provide a unified proof of Lemma 4 for both partial sharing models. is equivalent to
Similarly, is equivalent to
We therefore have to prove that which is equivalent to
(1) 
Since the blocking probabilities depend on the arrival rates and mean holding times only through the offered loads, we consider two fictitious providers with a common mean holding time and the same offered loads as the original providers. For these fictitious providers, we invoke Theorem 1 to deduce that the overall blocking probability is a strictly decreasing function of each sharing parameter. This means
(2)  
(3) 
Noting that the terms on both sides of (2) and (3) are positive, we can multiply the two inequalities to obtain (1).
It is important to note that even though Statement 3 of Theorem 1 assumes matched mean holding times, the present proof does not. ∎
While Theorem 2 states that the (nonempty) set of Pareto-efficient configurations lies on the boundary of the space of partial sharing configurations, it does not provide a precise characterization of this set. Interestingly, such a precise characterization is possible; this is the goal of the following lemma.
Lemma 5.
Under the probabilistic as well as the bounded overflow partial sharing models, the set of Pareto-efficient sharing configurations is characterized as follows.

If then there exist uniquely defined constants and such that for
In this case,

If then there exist uniquely defined constants and satisfying such that
In this case,

If then there exist uniquely defined constants and satisfying such that
In this case,
Figure 1 provides a pictorial representation of the set of Pareto-efficient partial sharing configurations under the three cases considered in Lemma 5. Note that Case 1 corresponds to settings where full pooling is beneficial to both providers. Cases 2 and 3 cover the more asymmetric settings, where exactly one provider (the more congested one) stands to benefit from full pooling. Lemma 5 states that in such cases, the more congested provider pools all of its servers under any Pareto-efficient sharing configuration. Intuitively, this is because of the asymmetry in the value of the servers pooled by each provider: servers pooled by the more congested provider add less value, since those servers are available for overflow calls of the less congested provider less often. As a result, mutually beneficial sharing configurations have the more congested provider pool more servers than the less congested provider.
4 Economics of Partial Sharing
The set of Pareto-efficient configurations characterized in Section 3 contains all sharing configurations that are minimal for the partial order induced by the usual componentwise ordering of the vectors of blocking probabilities. In other words, for every QoS-stable configuration outside of this set, there exists a configuration within it that improves the blocking probability of at least one provider without worsening that of the other. Unfortunately, the configurations within the Pareto set are not comparable under this componentwise relation: of any two configurations inside this set, the one that is better for one provider is worse for the other. Thus, rational providers who want to minimize their blocking probabilities will agree that it is beneficial for both of them to choose a configuration inside the Pareto set rather than one outside of it, but will disagree on the choice of the configuration within the Pareto set.
It is then natural to ask: which configuration within the Pareto set should the two providers choose? Of course, in addition to the choices within the Pareto set, they could also choose not to share. This question, in a more general setting, has been investigated within the framework of bargaining theory. In a typical two-player bargaining problem, two players have to agree upon one option amongst several. If both agree upon an option, then each player gets the utility corresponding to that option. On the other hand, if they fail to arrive at a consensus, then they get the utility corresponding to a disagreement point. In our setting, the two players are the two providers, who have to choose between the various configurations. They could, of course, choose not to share, in which case the blocking probability of each will be that of the system with no pooling, i.e., the disagreement point is just the no-pooling configuration.
Our aim in this section is to present some of the most common solution concepts from bargaining theory and apply them to the partial resource sharing problem under consideration. We also present results of numerical experiments for different realistic network settings, highlighting the potential benefits of partial resource pooling in practice. Note that the discussion in this section applies to both the partial pooling models defined in Section 2.
4.1 Bargaining solutions
The usual way to compute a solution of a bargaining problem is to first fix a set of axioms that a solution must satisfy. Axioms that appear often (though not necessarily together) are Pareto optimality (PO), Symmetry (SYM), Scale Invariance (SI), Independence of Irrelevant Alternatives (IIA) and Monotonicity (MON).
In addition to the axioms, some solution concepts rely on the convexity of the space of feasible utility pairs in order to guarantee uniqueness. In the present setting, the utility of a provider is a strictly decreasing function of its blocking probability. Due to space constraints, we restrict our attention to the linear case, i.e., the utility of a provider is taken to be the negative of its blocking probability. Numerical experiments show that this utility space is not convex. The usual method to overcome this drawback is to convexify the utility space by considering its convex hull. For our problem, this could lead to a solution of the form (as an example): one configuration with some probability and another configuration with the complementary probability. While on an abstract level a solution in such an extended space is acceptable, in practice its implementation may not be straightforward. Should the probability be interpreted as the fraction of time during which a configuration is implemented? If so, at what timescale should the changes in configuration occur?
Another method of getting around non-convexity is to modify the set of axioms and show that some variation of the solution concepts for the convex case satisfies them (see [14] and references therein). These, however, require other assumptions on the utility set, such as comprehensiveness (for any vector in the utility set, all vectors that are weakly dominated by it and that weakly dominate the disagreement point are also in the utility set), which is again difficult to verify in our setting.
We now apply four bargaining solutions from the literature to our partial pooling model. These are the Nash, Kalai-Smorodinsky, egalitarian, and utilitarian bargaining solutions. The main result in this section shows the uniqueness of the Kalai-Smorodinsky and the egalitarian solutions without calling upon the standard arguments of convexity or comprehensiveness. The proof is based upon the monotonicity properties highlighted in Section 2.
For the bargaining solutions in this section, we assume that the utility of each provider is the negative of its blocking probability. In some situations, it may be more meaningful to take the negative logarithm of the blocking probability as the utility of a provider. We give the logarithmic variants of the Nash, Kalai-Smorodinsky, and egalitarian solutions in Appendix E.
Nash bargaining solution
The first concept we present was proposed by Nash in the seminal paper [15].
Definition 3.
A partial sharing configuration is a Nash bargaining solution (NBS) if it satisfies the following condition:
Here, the positive part of each utility gain is taken, and relative means after subtracting the utilities at the disagreement point. At the NBS, the players maximize the product of the individual utilities relative to the disagreement point. Clearly, any maximizer lies in the Pareto set. However, the drawback of the NBS for our problem is that the utility space is not convex (as observed in numerical experiments), which implies that the NBS may not be unique.
KalaiSmorodinsky bargaining solution
One of the criticisms of the NBS is the axiom of IIA, which may not hold in practice. In [16], Kalai and Smorodinsky replaced IIA with MON and obtained the following solution concept.
Definition 4.
A partial sharing configuration is a Kalai-Smorodinsky bargaining solution (KSBS) if it lies in the Pareto set and satisfies
At a KSBS, the ratio of the providers’ relative utilities is equal to the ratio of their maximal relative utilities. For our problem, the following result guarantees uniqueness of the solution, which could make it more attractive than the NBS.
Theorem 3.
For the bounded overflow sharing model, the KSBS is unique.
Proof of Theorem 3.
Define the following functions.
From Statements 1 and 2 of Theorem 1, we get
i.e., each provider gets the maximum benefit when it pools none of its servers and the other provider pools all of its servers.
It is easy to see that these functions are continuous. Now consider the three cases of the Pareto frontier from Lemma 5.
Case 1:
Sweeping the (topologically one-dimensional) Pareto frontier clockwise from one of its endpoints to the other, it is easy to see that the ratio of the providers' relative utilities is strictly decreasing and continuous along the sweep. There is thus a unique point on the Pareto frontier that satisfies the KSBS condition.
Case 2:
As before, sweeping the Pareto frontier clockwise from one endpoint to the other, the ratio of the providers' relative utilities is strictly decreasing and continuous along the sweep. There is thus a unique point on the Pareto frontier that satisfies the KSBS condition.
Case 3:
The argument here is analogous to that for the above cases. ∎
Egalitarian solution
The next solution concept we present was also proposed by Kalai [17]. It satisfies PO, SYM, IIA, and MON, but violates SI. It captures the sharing configuration in which the gains relative to the disagreement solution are the same for both providers.
Definition 5.
A partial sharing configuration is an egalitarian solution (ES) if it is Pareto-efficient and satisfies
Under an ES, the providers see the same absolute improvement in their blocking probabilities relative to the no-sharing option. The following result shows that the ES is unique. Its proof follows along similar lines to that of Theorem 3.
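In the same assumed notation as before, the ES equalizes absolute (rather than normalized) gains:

```latex
B_1^0 - B_1(s) \;=\; B_2^0 - B_2(s),
\qquad s \text{ Pareto-efficient}.
```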
Lemma 6.
For the bounded overflow sharing model, the ES is unique.
Proof of Lemma 6.
The argument in the proof of Theorem 3 applies as-is here, except that the ratio of maximal relative utilities is replaced by 1. ∎
An interesting property of the ES is that if the standalone blocking probabilities of the two providers are identical, then the ES corresponds to complete pooling.
Lemma 7.
If the standalone blocking probabilities of the two providers are equal, then the ES lies at the complete pooling configuration.
Proof of Lemma 7.
We invoke the following well-known property of the Erlang-B formula: pooling two Erlang loss systems with equal blocking probabilities yields a blocking probability no larger than the common standalone value (a statistical multiplexing gain). If the standalone blocking probabilities of the two providers are equal, it then follows that complete pooling leaves neither provider worse off, implying that the set of Pareto-efficient configurations includes the complete pooling configuration (see Lemma 5). Further, at complete pooling, both providers see the same blocking probability, so the ES condition is clearly satisfied there. However, from the monotonicity properties of the blocking probabilities, complete pooling is the only point in the Pareto set that satisfies this property. ∎
Utilitarian solution
The last solution concept is the utilitarian bargaining solution (see, e.g., [18]). It minimizes the blocking probability of the customers as a whole, without distinguishing them according to the provider to which they subscribe; it thus captures the greatest good for the system. Its advantage is that it is a concept that is easy for customers to identify with. On the other hand, the axioms of SI and MON are violated. Nonetheless, the violation of SI does not seem problematic when the utilities are blocking probabilities: indeed, there is a unique natural scale on which the blocking probability satisfies the axioms that define a probability measure.
Definition 6.
A partial sharing configuration is a utilitarian bargaining solution (US) if it satisfies
Here, the minimization is over the closure of the set of partial sharing configurations. We relax the minimization to be over this closure, rather than over the open set, because in some cases the solution turns out to lie on the boundary. Assuming that the average call holding time is identical for both providers, the utilitarian solution is unique and can be characterized precisely.
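In assumed notation, with $\lambda_i$ the offered load of provider $i$, the utilitarian objective is the blocking probability experienced by the aggregate arrival stream:

```latex
s^{\mathrm{US}} \;\in\; \arg\min_{s}\;
\frac{\lambda_1 B_1(s) + \lambda_2 B_2(s)}{\lambda_1 + \lambda_2},
```

the minimization being over the closure of the set of sharing configurations.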
Lemma 8.
If the average call holding times of the two providers are identical, then under the bounded overflow model the US is characterized as follows (we use the notation from Lemma 5).
- If then the US is
- If then the US is
- If then the US is
We omit the proof of Lemma 8, since it is a direct consequence of Statement 3 of Theorem 1. Another quick observation is that when the standalone blocking probabilities are matched, the utilitarian solution, like the egalitarian solution, corresponds to full pooling.
Corollary 1.
If the standalone blocking probabilities of the two providers are equal, then the US lies at the complete pooling configuration.
Proof of Corollary 1. This follows from Lemma 8, along the same lines as the proof of Lemma 7. ∎
While the utilitarian solution is the most efficient, in that it minimizes the overall blocking probability, it may not be fair. Indeed, under Cases 2 and 3 of Lemma 8 above, one of the providers (the less congested provider) sees no reduction in its blocking probability relative to the disagreement point.
4.2 Numerical examples
In this section, we present numerical results illustrating the various bargaining solutions under realistic system settings. The goal of this section is twofold: to demonstrate the benefits of partial resource pooling to the two providers, and to illustrate the differences between the bargaining solutions. Due to space constraints, we are only able to consider two network settings. We also restrict our attention in this section to the bounded overflow sharing model, and we represent each bargaining solution by the number of servers pooled by each provider, along with the resulting blocking probabilities.
Table 1: Bargaining solutions for two providers with equal numbers of servers but different standalone blocking probabilities (6% and 1%).
Bargaining solution | Servers pooled (congested provider) | Servers pooled (other provider) | Blocking prob. (congested) | Blocking prob. (other)
US | 100 | 13.1 | 1.73% | 1%
KSBS | 100 | 6 | 3.39% | 0.63%
NBS | 100 | 5.5 | 3.6% | 0.6%
ES | 100 | 1.35 | 5.36% | 0.36%
First, we consider a scenario where the two providers have the same number of servers but differ in their standalone blocking probabilities: 6% for one provider and 1% for the other, so the former is clearly the more congested provider. The different bargaining solutions for this scenario are summarized in Table 1. As expected, the more congested provider pools all its servers under all bargaining solutions. Moreover, the 'efficient' utilitarian solution is the most beneficial for the congested provider, while providing no benefit to the other. At the other extreme, the ES gives the congested provider the smallest benefit, since it enforces the same absolute reduction in blocking probability for both providers, even though the scope for reduction is much smaller for the less congested one. The KSBS and the NBS result in intermediate contributions and substantial benefits for both providers; indeed, these configurations yield roughly a 40% reduction in the blocking probability of each provider.
Table 2: Bargaining solutions for providers of different sizes with matched standalone blocking probabilities.
Bargaining solution | Servers pooled (larger provider) | Servers pooled (smaller provider) | Blocking prob. (larger) | Blocking prob. (smaller)
US | 200 | 50 | 3.33% | 3.33%
ES | 200 | 50 | 3.33% | 3.33%
NBS | 200 | 9.5 | 3.36% | 3.19%
KSBS | 200 | 8 | 3.56% | 2.99%
Next, we consider a scenario where the two providers differ in size but are matched with respect to standalone blocking probability; the larger provider has 200 servers and the smaller has 50. The results are summarized in Table 2. As expected, the US as well as the ES correspond to complete pooling (see Lemma 7 and Corollary 1); this results in both providers seeing a blocking probability of 3.33%. On the other hand, under the NBS as well as the KSBS, the smaller provider pools fewer servers. As a result, the smaller provider achieves an even lower blocking probability under the KSBS/NBS, at the expense of a higher blocking probability for the larger provider (compared to full pooling under the US/ES). As before, it is important to note that partial resource pooling offers the possibility of a substantially lower blocking probability for both providers.
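The role of the multiplexing gain in the matched-blocking scenario can be illustrated numerically. The sketch below is not a reproduction of Table 2 (the offered loads there are not restated here); it assumes a standalone blocking probability of 5% for both providers, tunes each provider's load to that value, and then computes the blocking probability under complete pooling:

```python
def erlang_b(c, a):
    """Erlang-B blocking probability for c circuits and offered load a,
    computed with the standard numerically stable recursion."""
    b = 1.0
    for n in range(1, c + 1):
        b = a * b / (n + a * b)
    return b

def load_for_blocking(c, target, iters=200):
    """Bisect for the offered load whose Erlang-B blocking on c circuits
    equals `target` (blocking is increasing in the offered load)."""
    lo, hi = 0.0, 10.0 * c
    for _ in range(iters):
        mid = (lo + hi) / 2
        if erlang_b(c, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical matched scenario in the spirit of Table 2: 200 and 50
# circuits, both tuned to a 5% standalone blocking (assumed value).
b0 = 0.05
a1 = load_for_blocking(200, b0)
a2 = load_for_blocking(50, b0)
pooled = erlang_b(250, a1 + a2)  # complete pooling of circuits and loads
print(pooled)  # strictly below b0: the statistical multiplexing gain
```

The same functions can be used to explore partial sharing configurations by varying how many circuits each provider exposes to the other's overflow.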
5 Large System Limits: Square-root scaling
The computational complexity of computing the exact steady-state blocking probabilities increases as the number of circuits becomes large [10]. As a result, approximations can be helpful, both for their tractability and for the insight they provide into the complex dependence of the blocking probabilities on the system parameters. The goal of this section is to obtain large system approximations for the blocking probabilities under the bounded overflow partial pooling model. (A parallel development for the probabilistic sharing model is possible, which we omit due to space constraints.)
Large system approximations have always been an integral part of the literature on queueing theory. Depending on the system parameters, these limits can take different forms, such as mean-field [19], Quality-and-Efficiency-Driven [20], or Nondegenerate Slowdown [21] limits.
5.1 QED scaling regime
For our resource sharing model with blocking, the most relevant limit is the quality-and-efficiency-driven (QED) regime (a.k.a. the "square-root staffing" or Halfin-Whitt regime). While it is now commonly known under these names, it had already been investigated by Erlang himself (see the paper "On the rational determination of the number of circuits" in [22]) as well as by Jagerman [23]. The traditional QED regime applies to a system with a single provider, and is defined as follows. Let C be the number of circuits with the provider and A be the offered load. We say that C, A → ∞ in the QED regime if (C − A)/√A → β for some β ∈ ℝ.
Lemma 9 ([23]).
Let C = A + β√A + o(√A) for some β ∈ ℝ, where C is the number of circuits and A the offered load. Then,
√A · B(C, A) → φ(β)/Φ(β) as A → ∞,
where B(C, A) denotes the Erlang-B blocking probability.
Here, φ and Φ denote, respectively, the probability density function and the cumulative distribution function of the standard Gaussian distribution. Note that under the QED regime, the margin between the number of servers and the offered load is of the order of the square root of the number of servers. In many settings, the QED regime is known to strike the right balance between quality (i.e., QoS) and efficiency (i.e., server provisioning costs); see, for example, [20, 24]. For the M/M/N/N loss system, Lemma 9 states that the steady state blocking probability decays as 1/√N as N → ∞.
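Lemma 9 lends itself to a quick numerical check. The sketch below assumes the standard form of the result, √A · B(C, A) ≈ φ(β)/Φ(β) when C = A + β√A, and compares the exact Erlang-B blocking probability against the resulting approximation:

```python
from math import exp, sqrt, pi, erf

def erlang_b(c, a):
    """Exact Erlang-B blocking probability via the standard recursion."""
    b = 1.0
    for n in range(1, c + 1):
        b = a * b / (n + a * b)
    return b

def qed_blocking(c, a):
    """QED (square-root staffing) approximation phi(beta)/(sqrt(a)*Phi(beta)),
    with beta = (c - a)/sqrt(a), as in Lemma 9."""
    beta = (c - a) / sqrt(a)
    phi = exp(-beta * beta / 2) / sqrt(2 * pi)   # standard Gaussian pdf
    Phi = 0.5 * (1 + erf(beta / sqrt(2)))        # standard Gaussian cdf
    return phi / (sqrt(a) * Phi)

# Example: offered load 400 with a square-root safety margin beta = 1,
# i.e. 420 circuits; the two values are close for systems this large.
exact = erlang_b(420, 400.0)
approx = qed_blocking(420, 400.0)
print(exact, approx)
```

As the offered load grows with the margin kept on the square-root scale, the relative error of the approximation shrinks, consistent with the limit in Lemma 9.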
We define the QED scaling regime for our model with two providers as follows. For fixed c_i > 0 and β_i ∈ ℝ, i = 1, 2, let
C_i^(n) = c_i n,  i = 1, 2,  (4)
λ_i^(n) = C_i^(n) − β_i √(C_i^(n)),  i = 1, 2.  (5)
Here, n is the scaling parameter, common to both providers. (4) states that the number of servers of each provider grows proportionally with the scaling parameter. (5) states that the offered load of each provider scales as per the QED (square-root staffing) rule.
Before deriving the blocking probabilities for the different partial sharing configurations, we first look at two special cases for which these probabilities can be derived directly from Lemma 9. With no resource pooling, the two providers are decoupled, and for large n, the steady state blocking probability of each provider follows directly from Lemma 9.
The second special case is that of full resource pooling. Here, the system acts as a single provider whose number of circuits and offered load are the sums of the corresponding quantities of the two providers. A simple calculation shows that the pooled system also satisfies the square-root scaling setup, so the steady state blocking probability of both providers again follows from Lemma 9.
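The "simple calculation" can be spelled out. Assuming the scaling described around (4)-(5) takes the form C_i^(n) = c_i n and λ_i^(n) = C_i^(n) − β_i √(C_i^(n)) (our notation, for illustration), the pooled system has

```latex
C^{(n)} = (c_1 + c_2)\,n, \qquad
\lambda^{(n)} = C^{(n)} - \hat\beta \sqrt{C^{(n)}},
\qquad\text{where}\quad
\hat\beta = \frac{\beta_1 \sqrt{c_1} + \beta_2 \sqrt{c_2}}{\sqrt{c_1 + c_2}},
```

so the aggregate again satisfies the square-root staffing rule, with parameter β̂, and Lemma 9 applies directly.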
Now, we present the square-root scaling setup for partial sharing configurations. For