Slotted Aloha for Networked Base Stations
Abstract
We study multiple base station, multi-access systems in which the user-base station adjacency is induced by geographical proximity. At each slot, each user transmits (is active) with a certain probability $p$, independently of other users, and is heard by all base stations within distance $r$. Both the users and the base stations are placed uniformly at random over the (unit) area. We first consider non-cooperative decoding, where base stations work in isolation, but a user is decoded as soon as one of its nearby base stations reads a clean signal from it. We find the decoding probability and quantify the gains introduced by multiple base stations. Specifically, the peak throughput increases linearly with the number of base stations $m$ and is roughly $c\,m$ times larger than the throughput of a single base station that uses standard slotted Aloha, for a positive constant $c$. Next, we propose cooperative decoding, where mutually close base stations inform each other whenever they decode a user inside their coverage overlap. At each base station, the messages received from the nearby stations help resolve collisions by the interference cancellation mechanism. Building from our exact formulas for the non-cooperative case, we provide a heuristic formula for the cooperative decoding probability that reflects well the actual performance. Finally, we demonstrate by simulation significant gains of cooperation with respect to non-cooperative decoding.
1 Introduction
Slotted Aloha [1] and framed slotted Aloha [2] are well-known schemes for uncoordinated multiple access that date back to the 70s. With these schemes, the time is divided into slots, and, at each slot, users contend to transmit their packets to the base station. With slotted Aloha, each user transmits at each slot with a certain probability; with framed slotted Aloha, slots are grouped into frames, and each user randomly selects a slot at each frame to transmit.
In the past decade, there has been much progress in the development of slotted Aloha type protocols, e.g., [3, 4, 5], with dramatic throughput improvements. Reference [3] introduces a framed protocol that allows for multiple user transmissions, as considered in [6] in the past, but with a novel successive interference cancellation mechanism [3]. In [3], users transmit (are active) at multiple slots of each frame, and send, along with their packet replicas, pointers to the corresponding activation slots. When a slot with a single active user occurs, this allows the base station not only to collect this user, but also to subtract its contribution from every other slot in which the user was active. This likely resolves collisions in some of the past slots and thereby allows for collecting additional users. More recently, [4] demonstrates that successive interference cancellation is analogous to the belief propagation erasure-decoding of codes on graphs. Exploiting this analogy, [4] introduces a variable number of users' transmission attempts and optimizes their distribution to maximize the throughput. Building on an analogy with rateless codes, reference [5] introduces the frameless Aloha protocol that further enhances the throughput. With the protocol therein, the frame size is not fixed a priori; rather, the protocol adds new slots until a desired fraction of decoded users is achieved.
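The successive interference cancellation mechanism can be illustrated with a short simulation. The sketch below is our illustration, not the protocol of [3] in full detail: each user places a fixed number of replicas in uniformly random slots of a frame, singleton slots are decoded, and the decoded user's replicas are cancelled from the other slots, possibly creating new singletons. All parameter values (`num_users`, `num_slots`, `replicas`) are arbitrary.

```python
import random

def framed_aloha_ic(num_users=100, num_slots=150, replicas=2, seed=0):
    """Framed slotted Aloha with successive interference cancellation:
    singleton slots are decoded, and each decoded user's replicas are
    removed from the other slots they occupy (peeling)."""
    rng = random.Random(seed)
    slots = [set() for _ in range(num_slots)]   # slot -> active users
    user_slots = {}
    for u in range(num_users):
        chosen = rng.sample(range(num_slots), replicas)
        user_slots[u] = chosen
        for s in chosen:
            slots[s].add(u)
    decoded = set()
    progress = True
    while progress:
        progress = False
        for s in range(num_slots):
            if len(slots[s]) == 1:              # clean (singleton) slot
                u = next(iter(slots[s]))
                decoded.add(u)
                for t in user_slots[u]:         # cancel all replicas of u
                    slots[t].discard(u)
                progress = True
    return len(decoded)

print(framed_aloha_ic())
```

Each cancellation may turn a collided slot into a singleton, which is exactly how collisions "in past slots" get resolved.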
All the above references exploit temporal diversity. Reference [7] considers multiple receiver multi-access systems with spatial diversity which arises from independent fading of different user-receiver links. It analyzes the capture performance of the system under Rayleigh fading and shadowing. A recent reference [8] also considers a multiple receiver case with spatial diversity. Under independent on-off fading, it quantifies analytically the gains in the throughput introduced by multiple receivers (over the single receiver case), as well as the impact of the fading probability on these gains.
In this paper, we also study spatial diversity effects with multiple receivers, but under a very different model than the ones in [7, 8]. A total of $m$ base stations (receivers) are deployed over a (unit) geographic area, and they jointly serve $n$ users (transmitters). Both the users and base stations are placed uniformly at random over the area. At a fixed time slot, each user transmits (is active) with probability $p$, independently from other users. Each base station can hear all active users that are within distance $r$ from it, where $r$ is small compared to the diameter of the area. The base station thus receives a superposition of the signals of the active users in its $r$-neighborhood. (The signals of the users outside the $r$-neighborhood do not contribute to the received signal.)
We first consider the slotted Aloha protocol where each base station performs decoding in isolation (without cooperating with other stations). It decodes a user whenever there is a single active user in its $r$-neighborhood. We find the probability that an arbitrary fixed user is decoded, both in the finite regime and asymptotically, when $n, m \to \infty$ and $r \to 0$ ($p$ fixed). Further, we quantify the gains of diversity introduced with multiple base stations. In particular, the peak throughput (expected number of decoded users per slot) is increased roughly $c\,m$ times with respect to the single-base station slotted Aloha, where $c$ is a positive constant (see Section 4 for details). In other words, the throughput scales linearly with $m$; Section 5 gives concrete examples.
Next, we propose a cooperative, iterative decoding where the base stations that are geographically close communicate during decoding iterations. Specifically, we assume that, if a base station decodes a user, it knows at which other base stations this user is also heard, and it informs them of this user's ID and its information packet. The contacted base stations can then subtract this user's interference contribution from their received signals, which possibly reveals additional clean packet readings. We show by simulation that cooperation introduces significant gains in the system performance: the peak throughput increases substantially with cooperation (see Section 5). Also, the maximal load for which the decoding probability stays above a prescribed value is about three times larger under cooperation than without cooperation, for a wide range of $r$.
Structurally, this decoding algorithm is analogous to the interference cancellation decoding in, e.g., [4, 3], and it can be represented via message passing on a bipartite graph as in [4]. Active users here correspond to users in [4], base stations correspond to different slots (check nodes) in [4], and the links are the physical links between active users and base stations. However, the structure of the graph here is induced by geometry and is very different from the random graph in [4] (see Section 3 for details). Evaluating the decoding probability here is very challenging, and standard tools like and-or-tree analysis [9] do not directly apply. We make a first step towards this goal by giving a heuristic formula that reflects well the actual performance. We derive the heuristic building from our results for the non-cooperative decoding.
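For contrast, the random-graph analysis that does not directly transfer here can be sketched in a few lines. Under an idealized model in the spirit of [4], with degree-2 users (an assumption we make purely for illustration; [4] optimizes the degree distribution), the and-or-tree recursion for the probability $x$ that an edge remains unresolved is $x \leftarrow 1 - e^{-2Gx}$, which exhibits a load threshold: below it, $x$ iterates to zero (all users decoded); above it, a nonzero fixed point appears.

```python
import math

def density_evolution(G, iters=500):
    """And-or-tree recursion for peeling on a random bipartite graph
    with degree-2 users and Poisson slot degrees (load G): x is the
    probability that an edge from a user is still unresolved. A slot
    resolves an edge iff all its other edges are resolved, which under
    Poisson degrees happens with probability exp(-2*G*x)."""
    x = 1.0
    for _ in range(iters):
        x = 1.0 - math.exp(-2.0 * G * x)
    return x

print(round(density_evolution(0.4), 4))  # below the threshold G = 1/2
print(round(density_evolution(0.8), 4))  # above the threshold G = 1/2
```

On the geometric graph considered here, neighboring stations see overlapping user sets, so the independence assumption behind this recursion fails; this is the gap the heuristic of Section 4.2 addresses.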
In this paper, cooperation among base stations is confined within a single time slot and is independent across slots. In other words, this paper exploits spatial diversity. In our ongoing work, we exploit the potential of both spatial and temporal diversity by allowing base stations to cooperate both across space (as considered here) and across slots. The motivation for this comes from single base station systems, where successive interference cancellation across slots yields dramatic throughput improvements.
Finally, we believe that our studies have a potential to find applications in massive uncoordinated multiple access in various networks, such as cellular, satellite, and vehicular networks, including recently popular machine-to-machine (M2M) services over these networks.
Paper organization. The next paragraph introduces notation. Section 2 details the system model that we assume, and Section 3 presents our decoding algorithms. Section 4 presents our results on the performance of the two decoding algorithms. Section 5 gives numerical studies and interpretations. Finally, we conclude in Section 6.
Notation. We denote by: $B(x, \rho)$ the Euclidean ball in the two-dimensional space centered at $x$ with radius $\rho$; $S(x, a)$ the square centered at $x$, with the side length equal to $a$; $\mathbb{1}\{\mathcal{E}\}$ the indicator of event $\mathcal{E}$; and $\mathbb{P}(\cdot)$ and $\mathbb{E}[\cdot]$ the probability and expectation operators, respectively.
2 System model
We consider a multi-access system with $n$ users and $m$ base stations. We denote by $u_i$, $i = 1, \dots, n$, a user, and by $b_j$, $j = 1, \dots, m$, a base station. Users and base stations are distributed over a geographical area, and each user $u_i$ can be heard by all base stations within distance $r$ from $u_i$. (See Figure 1, top left, for an illustration.) The time is divided into slots. As the number of users may be larger than the number of base stations (as is common in practical scenarios), to avoid excessive collisions, different users' transmissions are distributed across time slots, i.e., only a subset of users transmits at a certain slot. In this paper, we assume that decoding is completely decoupled (independent) across slots. Henceforth, it suffices to consider the system at a single, fixed slot. To keep the exposition general, we assume that each user transmits its message at a fixed slot with probability $p$, independently from other users, and that all transmissions are slot-synchronized. This model subsumes, e.g., the following system. There are $\tau$ available slots in each frame. Users' and base stations' placements are fixed during the frame. Each user transmits once per frame, with equal probability across the slots. In our model, this corresponds to setting $p = 1/\tau$. We say that a user is active at a certain slot if it transmits at this slot.
We let $G := np/m$, and we call $G$ the normalized load. The quantity $G$ equals the expected number of active users at a fixed slot per base station. The message of user $u_i$ contains the information packet and a header with the user's ID. If $u_i$ is within distance $r$ from $b_j$, we say that $u_i$ and $b_j$ are adjacent. Each base station therefore hears a superposition (in general, a collided message) of all its active adjacent users. We explain the decoding mechanism in Section 3.
We now detail the placement model at a fixed slot. Both users and base stations at a fixed slot are placed in the unit square $A$, centered at the origin. User $u_i$ is situated at a location $U_i$, where $U_i$ is selected from $A$ uniformly at random, independently from other users' locations. Further, base station $b_j$ is positioned at a location $B_j$, where $B_j$ is selected from $A$ uniformly at random, independently from other stations' locations. We assume that the placements of users and base stations are also mutually independent.
For the purpose of analysis, we distinguish two types of placements. We define the inner square $A_r := S(0, 1-2r)$, and say that a user is nominally placed if its position is in $A_r$, and similarly for a base station. If, on the other hand, a user or a base station lies in the strip of width $r$ along the boundary of $A$, i.e., in $A \setminus A_r$, we call this a boundary placement. Since placements are uniform over $A$, the probability of a nominal placement is $(1-2r)^2$, and the probability of a boundary placement is $1-(1-2r)^2$. We see that, as the radius $r$ decreases, the probability of a nominal placement goes to one, and hence we can neglect in the analysis all the effects caused by boundary placements.
Degree distributions. For future reference, we introduce the users' and base stations' degree distributions under nominal placements. Fix an arbitrary user $u_i$ and an arbitrary point $x \in A_r$, and let $\Omega_k := \mathbb{P}(u_i \text{ has exactly } k \text{ adjacent base stations} \,|\, U_i = x)$, i.e., $\Omega_k$ is the conditional probability that $u_i$ has exactly $k$ adjacent base stations, given that it is nominally placed. It is easy to show that the degrees follow a binomial distribution, i.e., $\Omega_k = \binom{m}{k} (r^2\pi)^k (1 - r^2\pi)^{m-k}$, $k = 0, 1, \dots, m$. Similarly, let $\Psi_k := \mathbb{P}(D_j = k \,|\, B_j = x)$, where $D_j$ denotes the number of active users $u_{i'}$, $i' \neq i$, adjacent to $b_j$. (We exclude an arbitrary fixed user $u_i$, as needed for subsequent analysis.) We have $\Psi_k = \binom{n-1}{k} (p\, r^2\pi)^k (1 - p\, r^2\pi)^{n-1-k}$, $k = 0, 1, \dots, n-1$. We will also be interested in the asymptotic regime, when $n \to \infty$, $m \to \infty$, and $r \to 0$ ($p$ is fixed), such that $m\, r^2\pi \to \beta$ and $n\, p\, r^2\pi \to \alpha$, where $\alpha, \beta > 0$ are constants. In this setting, the users' and base stations' degree distributions converge to Poisson distributions with parameters $\beta$ and $\alpha$, respectively, i.e., for all $k = 0, 1, 2, \dots$:

$\Omega_k \to e^{-\beta}\, \dfrac{\beta^k}{k!}, \qquad \Psi_k \to e^{-\alpha}\, \dfrac{\alpha^k}{k!}. \qquad (1)$

Hence, in the asymptotic regime, $\beta$ is the average number of base stations adjacent to a fixed user $u_i$, and $\alpha$ is the average number of active users adjacent to a fixed base station $b_j$. It is easy to see that $\alpha$ and $\beta$ are related as $\alpha = G\,\beta$.
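The binomial-to-Poisson convergence of the degree distributions can be checked numerically. The following sketch is our illustration (the values $m = 50$, $r = 0.05$ are arbitrary): it estimates the degree distribution of a user placed at the center of the unit square and compares it with the Poisson probabilities with mean $m\, r^2 \pi$.

```python
import math, random

def empirical_user_degrees(m=50, r=0.05, trials=20000, seed=1):
    """Empirical distribution of a nominally placed user's degree:
    the number of the m base stations (uniform on the unit square)
    that fall within distance r of a user at the center."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        deg = 0
        for _ in range(m):
            x, y = rng.random(), rng.random()
            if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= r * r:
                deg += 1
        counts[deg] = counts.get(deg, 0) + 1
    return {k: v / trials for k, v in counts.items()}

mu = 50 * math.pi * 0.05 ** 2          # mean degree m * pi * r^2
poisson = {k: math.exp(-mu) * mu ** k / math.factorial(k) for k in range(6)}
emp = empirical_user_degrees()
for k in range(4):
    print(k, round(emp.get(k, 0.0), 3), round(poisson[k], 3))
```

For these parameters the binomial and Poisson probabilities already agree to within Monte Carlo noise, which is why the asymptotic formulas in (1) are accurate at moderate system sizes.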
Coverage. Consider $p_{\mathrm{cov}} := \mathbb{P}(u_i \text{ covered})$, where the event means that $u_i$ is heard ("covered") by at least one base station. We refer to the latter quantity as the expected coverage. For a nominally placed user, we have $p_{\mathrm{cov}} = 1 - (1 - r^2\pi)^m$, and, in the asymptotic regime, $p_{\mathrm{cov}} \to 1 - e^{-\beta}$. An active user can be collected only if it is covered, no matter what decoding is used. Therefore, for a high decoding probability, we cannot have $r$ (or $\beta$) too small. Henceforth, we assume that $\beta$ is large enough that coverage is ensured; e.g., for $\beta = 5$, $1 - e^{-\beta} \approx 0.993$.
3 Decoding algorithms
Subsection 3.1 details the non-cooperative decoding, and Subsection 3.2 details the cooperative decoding algorithm.
3.1 Non-cooperative decoding
We now explain the non-cooperative decoding algorithm, where each base station works in isolation. At each base station, decoding is the simple slotted Aloha decoding. Suppose that station $b_j$ received signal $y_j$. We assume that $b_j$ can determine whether $y_j$ corresponds to a "clean" message. In other words, if, at a fixed slot, there is a single active user in $B(B_j, r)$, then $b_j$ collects this user (it reads its packet and obtains its ID). We say that a user is collected at a fixed time slot if it is collected by at least one base station at this slot. For example, for the network in Figure 1, top left, we can see that only a subset of the active users is collected.
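One slot of this decoding is straightforward to simulate. The sketch below is our illustration (all parameter values are arbitrary): a station collects a user exactly when that user is the only active one within distance $r$ of the station, and a user counts as collected if at least one station collects it.

```python
import random

def noncooperative_decode(n=200, m=30, r=0.1, p=0.1, seed=2):
    """One slot of non-cooperative decoding: each base station collects
    a user iff that user is the only active user within distance r of
    the station; a user is collected if some station collects it."""
    rng = random.Random(seed)
    users = [(rng.random(), rng.random()) for _ in range(n)]
    stations = [(rng.random(), rng.random()) for _ in range(m)]
    active = [u for u in range(n) if rng.random() < p]
    collected = set()
    for (sx, sy) in stations:
        heard = [u for u in active
                 if (users[u][0] - sx) ** 2 + (users[u][1] - sy) ** 2 <= r * r]
        if len(heard) == 1:           # clean signal: exactly one active user
            collected.add(heard[0])
    return len(collected), len(active)

num_collected, num_active = noncooperative_decode()
print(num_collected, num_active)
```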
3.2 Cooperative decoding
We now present the cooperative decoding algorithm, where neighboring base stations collaborate to collect users.
We assume that each base station $b_j$ is aware of which users (either active or inactive) it covers, i.e., it knows the IDs
of all its adjacent users (e.g., through some sort of association procedure). Further, for each of its adjacent users $u_i$, $b_j$ knows the list of the base stations $b_l$, $l \neq j$, to which $u_i$ is also adjacent.
We say that two base stations are neighbors if
they share at least one user. The decoding is iterative and involves communication between
neighboring base stations. Each base station $b_j$ maintains over iterations $t = 1, 2, \dots$ a signal $y_j^{(t)}$.
Initially, at $t = 1$, $y_j^{(1)}$ is the received signal from its active adjacent users (either a clean message from a single active user, a collided message, or an empty message if none of the users in $B(B_j, r)$ is active).
Station $b_j$ at a certain iteration $t$ may receive a message from a neighboring base station $b_l$.
This happens if $b_l$ decodes a user at iteration $t$, which we call $u_\star$, and if $u_\star$ is adjacent to both $b_j$ and $b_l$. The message contains the packet of user $u_\star$ and its ID.
Upon reception of the message, station $b_j$ subtracts the interference contribution of user $u_\star$, which we symbolically write as $y_j^{(t+1)} = y_j^{(t)} - y_\star$. Station $b_j$ can recognize whether the updated signal corresponds to a clean packet, and, if so, it reads the packet and determines to which user it belongs. Each iteration at station $b_j$ consists of the following steps.
Step 1: Check signal: $b_j$ checks whether its current signal corresponds to a "clean" packet. If this is true, it performs the collect and transmit step; otherwise, it performs the receive and update step.
Step 2: Collect and transmit: $b_j$ collects a user and reads its ID. It transmits the message to all $b_l$'s, $l \neq j$, that are adjacent to the collected user. After these transmissions, $b_j$ leaves the algorithm.
Step 3: Receive and update: $b_j$ scans over all the messages that it received at iteration $t$, identifies the subset of all distinct messages, and subtracts each of them from its current signal.
Graph representation of decoding. We now introduce a graphical, message-passing representation of decoding. It involves the evolution of a bipartite graph $G_t$ over iterations $t = 1, 2, \dots$ Graph $G_t$ has two types of nodes: base stations and active users. Both the node sets and the edge set change (reduce) over iterations $t$. The evolution is initialized by $G_1 = G_{\mathrm{init}}$, where $G_{\mathrm{init}}$ is defined as follows: it has the node set that consists of all base stations and all active users. Its set of links contains all pairs $(u_i, b_j)$ such that $u_i$ and $b_j$ are within distance $r$ from each other (and $u_i$ is active). We now describe one iteration $t$.
Graph decoding iteration. All $b_j$'s in $G_t$ check in parallel whether their degree in $G_t$ equals one. Let $\mathcal{D}_t$ be the set of degree-one base stations in $G_t$. If $\mathcal{D}_t$ is empty, the algorithm terminates. Otherwise, for each $b_j \in \mathcal{D}_t$, let $u_{i(j)}$ be the single user adjacent to $b_j$. Remove from $G_t$ all the $b_j$'s and $u_{i(j)}$'s, $b_j \in \mathcal{D}_t$, and all the links incident to these nodes. Set $G_{t+1}$ to the resulting graph.
It is easy to see that the above algorithm terminates after at most $m$ iterations. Namely, at each iteration $t$, either
at least one base station node is removed, or the algorithm terminates at $t$. Therefore, at most $m$ iterations can be performed.
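The graph iteration above amounts to a standard peeling procedure, which the following sketch implements on an adjacency structure (our illustration; `adj` lists, per base station, the set of active adjacent users):

```python
def cooperative_decode(adj):
    """Peeling decoder on the station-user bipartite graph.
    adj: list over base stations of sets of active adjacent users.
    A degree-one station collects its user; that user is then removed
    from every station that hears it (interference cancellation), and
    the process repeats until no degree-one station remains."""
    adj = [set(s) for s in adj]       # work on a copy
    collected = set()
    while True:
        singles = [s for s in adj if len(s) == 1]
        if not singles:
            return collected
        for s in singles:
            if len(s) == 1:           # may have emptied meanwhile
                u = next(iter(s))
                collected.add(u)
                for t in adj:
                    t.discard(u)

print(cooperative_decode([{0, 1}, {1}, {0, 2}]))  # → {0, 1, 2}
```

On this toy instance, non-cooperative decoding collects only user 1 (the single degree-one station), while peeling cancels user 1 from the first station, then collects user 0, and finally user 2.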
For the network in Figure 1, top left, we show the decoding iterations in Figure 1, top right ($t = 1$), bottom left ($t = 2$), and bottom right ($t = 3$). We can see that cooperative decoding collects more of the active users than the non-cooperative decoding does.
4 Performance analysis
In this section, we study the performance of both the non-cooperative and the cooperative decoding schemes. Specifically, our goal is to determine the expected fraction of decoded users per time slot, $\frac{1}{n}\, \mathbb{E}[\text{number of collected users}]$. Exploiting the symmetry across users, we have that the above quantity equals $\mathbb{P}(u_i \text{ coll.})$. Hence, our task reduces to finding the probability that an arbitrary fixed user $u_i$ is collected. The following simple relation will be useful throughout: $\mathbb{P}(u_i \text{ coll.}) = p\, \mathbb{P}(u_i \text{ coll.} \,|\, u_i \text{ act.})$, which is easily obtained after conditioning on the event that $u_i$ is active and using that $p$ is the probability of $u_i$ being active. (Here, abbreviation "$u_i$ coll." stands for $u_i$ is collected, and "$u_i$ act." stands for $u_i$ is active.) We also consider the normalized, per-station throughput $T$, the expected total number of collected users per slot, per station: $T = \frac{n}{m}\, \mathbb{P}(u_i \text{ coll.}) = G\, \mathbb{P}(u_i \text{ coll.} \,|\, u_i \text{ act.})$. Next, denote $P_d := \mathbb{P}(u_i \text{ coll.} \,|\, u_i \text{ act.})$, and, for fixed $m$ and $r$, we will be interested in the following quantity:

$G^\star(\delta) := \sup\{ G > 0 : P_d \geq 1 - \delta \}, \qquad (2)$

where $\delta \in (0,1)$ is a small number. In words, $G^\star(\delta)$ is the largest normalized load for which the decoding probability is above the prescribed value $1 - \delta$. Recall $p_{\mathrm{cov}}$ and that $P_d \leq p_{\mathrm{cov}}$. It is clear that, when $1 - \delta > p_{\mathrm{cov}}$, due to the relation $P_d \leq p_{\mathrm{cov}}$, $P_d$ cannot be greater than or equal to $1 - \delta$ for any $G$, i.e., no matter how small $G$ is. Thus, whenever $1 - \delta > p_{\mathrm{cov}}$, by convention we say $G^\star(\delta) = 0$.
Remark 1
We explain the motivation behind the quantity $G^\star(\delta)$. Suppose there are $\tau$ available slots, where each user is active in exactly one among the $\tau$ slots. The system has the following requirement on the "quality of service": each user must be collected with probability above $1 - \delta$. This translates into the requirement $P_d \geq 1 - \delta$. For a fixed $\tau$, we ask what is the maximal number of users $n$ that can be served with the guaranteed quality of service. That is, we look for the largest $n$ such that $P_d \geq 1 - \delta$. As $m$ and $r$ are fixed and $p = 1/\tau$, this is equivalent to finding (2). We will later be interested in optimizing (maximizing) $G^\star(\delta)$ over $r$.
4.1 Non-cooperative decoding
We now characterize $P_d$ for the non-cooperative decoding. As we will see, the sought probability depends on the distributions of the areas covered by randomly generated balls. Specifically, consider the ball $B(x, r)$. Fix some $k \geq 1$, and generate randomly $k$ points $y_1, \dots, y_k$, where the $y_s$'s, $s = 1, \dots, k$, are drawn mutually independently from the uniform distribution on $B(x, r)$, and let $A_k$ be the random variable that equals the area of $\cup_{s=1}^{k} B(y_s, r)$ divided (normalized) by $r^2\pi$. Further, denote by $\nu_k$ the probability distribution induced by $A_k$. Clearly, $A_k$ (and hence $\nu_k$) does not depend on $x$ and $r$, due to the normalization. Hence, we can set $x = 0$, $r = 1$. Also, it is easy to see that $A_1 = 1$ with probability one, i.e., $\nu_1$ is the Dirac distribution at $1$. Also, for any $k$, $1 \leq A_k \leq 4$ with probability one, i.e., $\nu_k$ is supported on $[1, 4]$. This is because all the $y_s$'s belong to $B(x, r)$, and thus $\cup_{s} B(y_s, r)$ is always a subset of $B(x, 2r)$. The distributions $\nu_k$, $k \geq 2$, are difficult to compute. However, they can be partially characterized by estimating the first $J$ moments $\mathbb{E}[A_k^j]$, $j = 1, \dots, J$. This can be done, e.g., through Monte Carlo simulations. We emphasize that the moments $\mathbb{E}[A_k^j]$, $k = 1, \dots, K$, $j = 1, \dots, J$, need to be tabulated only once (just like, e.g., the tail distribution of the standard Gaussian). That is, once we have these moments available, they apply for any set of parameters $n, m, r, p$.
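The moments can be tabulated with a nested Monte Carlo, which we sketch below (our illustration; the sample sizes are arbitrary and smaller than a production tabulation would use). By the scale invariance noted above, we set $x = 0$ and $r = 1$, draw the $k$ centers uniformly in the unit disc, and estimate the area of the union of unit discs by dart-throwing inside the enclosing disc $B(0, 2)$.

```python
import math, random

def union_area_moment(k, j=1, num_configs=200, num_darts=2000, seed=3):
    """Monte Carlo estimate of E[A_k^j]: draw k centers uniformly in
    the unit disc, take the union of unit discs around them, measure
    its area by dart-throwing inside the enclosing disc of radius 2,
    and normalize by pi, the area of a single disc (so A_1 = 1)."""
    rng = random.Random(seed)

    def uniform_in_disc(radius):
        while True:                      # rejection sampling in a disc
            x = rng.uniform(-radius, radius)
            y = rng.uniform(-radius, radius)
            if x * x + y * y <= radius * radius:
                return x, y

    total = 0.0
    for _ in range(num_configs):
        centers = [uniform_in_disc(1.0) for _ in range(k)]
        hits = 0
        for _ in range(num_darts):
            px, py = uniform_in_disc(2.0)     # union lies inside B(0, 2)
            if any((px - cx) ** 2 + (py - cy) ** 2 <= 1.0
                   for cx, cy in centers):
                hits += 1
        area = (hits / num_darts) * (4.0 * math.pi)   # area of B(0, 2)
        total += (area / math.pi) ** j                # normalized moment
    return total / num_configs

print(round(union_area_moment(1), 3))   # should be close to 1.0
```

Since the estimates apply for any $(n, m, r, p)$, the two sample-size parameters only need to be chosen once, large enough for the tabulated accuracy one wants.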
We now state our result on $P_d$ and its asymptotic counterpart. We distinguish two cases: 1) the non-asymptotic regime of finite $n$ and $m$, which corresponds to the (binomial) degree distributions $\Omega_k$, $\Psi_k$; and 2) the asymptotic regime, which corresponds to the (Poisson) degree distributions in (1).
Theorem 1
Consider the non-cooperative decoding algorithm. Then, for a nominally placed user $u_i$, i.e., for every $x \in A_r$, we have $\mathbb{P}(u_i \text{ coll.} \,|\, u_i \text{ act.}, U_i = x) = P_d$, where $P_d$ equals:

$P_d = \sum_{k=1}^{m} \Omega_k \sum_{s=1}^{k} (-1)^{s+1} \binom{k}{s}\, q_s, \qquad (3)$

and $q_s := \mathbb{E}\big[ (1 - p\, r^2\pi\, A_s)^{n-1} \big]$. Further, let $p$ be fixed, $n, m \to \infty$, $r \to 0$, with $m\, r^2\pi \to \beta$ and $n\, p\, r^2\pi \to \alpha$, and recall the Poisson limits in (1). Then, $P_d \to P_d^\infty$, where

$P_d^\infty = \sum_{k=1}^{\infty} \frac{e^{-\beta} \beta^k}{k!} \sum_{s=1}^{k} (-1)^{s+1} \binom{k}{s}\, \mathbb{E}\big[ e^{-\alpha A_s} \big].$
We briefly comment on the structure of the results. It can be seen that $P_d^\infty \leq 1 - e^{-\beta}$, i.e., the decoding probability is at most the (asymptotic) coverage probability. This is because the $k = 0$ term contributes zero, the inner sum is a probability and hence at most one, and $\Omega_0 \to e^{-\beta}$ in (1), and so $P_d^\infty \leq 1 - e^{-\beta}$. Similarly, it can be shown that $P_d \leq 1 - (1 - r^2\pi)^m = p_{\mathrm{cov}}$.
Sketch of the proof. The detailed technical proof of Theorem 1 is omitted due to lack of space and will be provided in a companion journal paper. We briefly sketch the proof of (3), highlighting the main steps and omitting certain arguments. The keys are to use the inclusion-exclusion principle, conditioning on a user's degree, and considering the size of the areas covered by the user's neighboring base stations. We consider a user $u_i$ at a fixed nominal placement $U_i = x$, $x \in A_r$. Consider $\mathbb{P}(u_i \text{ coll.} \,|\, u_i \text{ act.}, U_i = x)$. We first use the total probability law with respect to the degree $k$ of $u_i$:

$\mathbb{P}(u_i \text{ coll.} \,|\, u_i \text{ act.}, U_i = x) = \sum_{k=1}^{m} \Omega_k\, c_k, \qquad (4)$

where $c_k := \mathbb{P}(u_i \text{ coll.} \,|\, u_i \text{ act.}, U_i = x, u_i \text{ has } k \text{ adjacent base stations})$. Due to the base stations' symmetry, without loss of generality we can assume that $b_1, \dots, b_k$ are the neighbors. In other words, $c_k$ actually equals the probability that $u_i$ is collected, given that $u_i$ is active, $u_i$ is nominally placed at $x$, and $b_1, \dots, b_k$ are its neighbors. Now, $u_i$ is collected if and only if at least one of the $b_j$'s, $j = 1, \dots, k$, decodes it, and $b_j$ collects $u_i$ if and only if $u_i$ is the only active user in $B(B_j, r)$. (We then say that $B(B_j, r)$ is empty.) Summarizing, $c_k$ is the conditional probability of the union $\cup_{j=1}^{k} \{B(B_j, r) \text{ empty}\}$, given that $u_i$'s neighbors are $b_1, \dots, b_k$, $U_i = x$, and $u_i$ is active. Now, applying the inclusion-exclusion formula:

$c_k = \sum_{s=1}^{k} (-1)^{s+1} \binom{k}{s}\, q_s, \qquad (5)$

where $q_s$ is the conditional probability of $\cap_{j=1}^{s} \{B(B_j, r) \text{ empty}\}$, conditioned on $b_1, \dots, b_k$ being the neighbors, $U_i = x$, and $u_i$ active. Intuitively, $q_s$ depends on the area of $\cup_{j=1}^{s} B(B_j, r)$, because we look at the event that no active users other than $u_i$ lie in this union. It can be shown (proof omitted) that $q_s$ equals $\mathbb{E}[(1 - p\, r^2\pi\, A_s)^{n-1}]$ as in Theorem 1 and is independent of $k$. Substituting $q_s$ in (5), plugging the resulting equation in (4), and noting that the probability in (4) is the same for all nominal placements $x$, we obtain (3).
Numerical calculation. Theorem 1 expresses $P_d$ in a form that is difficult to compute directly. Assuming that the first $J$ moments $\mathbb{E}[A_s^j]$, $j = 1, \dots, J$, $s = 1, \dots, K$, are available (e.g., obtained through Monte Carlo simulations), we can compute $P_d^\infty$ with a high accuracy and a small computational cost, through the formula:

$P_d^\infty \approx \sum_{k=1}^{K} \frac{e^{-\beta} \beta^k}{k!} \sum_{s=1}^{k} (-1)^{s+1} \binom{k}{s} \sum_{j=0}^{J} \frac{(-\alpha)^j}{j!}\, \mathbb{E}[A_s^j]. \qquad (6)$

Here, we approximated $\mathbb{E}[e^{-\alpha A_s}]$ by letting $e^{-\alpha A_s} \approx \sum_{j=0}^{J} \frac{(-\alpha A_s)^j}{j!}$ and truncating the infinite sum over $k$ at $K$. Formula (6) gives high accuracies for $K$ sufficiently large relative to $\beta$. Given the quantity $J$, the approximation is accurate for $\alpha$ sufficiently smaller than $J$. (In other words, for a larger $\alpha$, a larger $J$ is needed.)
A simple lower bound on $P_d^\infty$. We derive a lower bound on $P_d^\infty$, which is loose but very useful in providing insights into the system performance. We exploit this bound in Section 5.
Lemma 2
Consider the non-cooperative decoding algorithm in the asymptotic setting as in the second part of Theorem 1. Then: $P_d^\infty \geq (1 - e^{-\beta})\, e^{-4\alpha}$.
Proof.
Consider $u_i$ at a nominal placement $U_i = x$ and suppose that $u_i$ is active. If there exists a base station in $B(x, r)$ and there are no active users in $B(x, 2r)$ (other than $u_i$), then $u_i$ is collected. Let $q$ denote the probability of the former event; clearly, $q = 1 - (1 - r^2\pi)^m$. By the independence of the users' and base stations' placements, we have that $P_d \geq q\, (1 - 4\, p\, r^2\pi)^{n-1}$, which in the asymptotic regime goes to $(1 - e^{-\beta})\, e^{-4\alpha}$. Passing to the limit (where the boundary effects vanish), the result follows. ∎
4.2 A heuristic for cooperative decoding
We now derive a heuristic formula for $P_d$ with cooperative decoding. The heuristic relies on our arguments for the non-cooperative case. It takes into account only the first two iterations of cooperative decoding. (See Remark 2.) Consider an arbitrary fixed user $u_i$ at a nominal placement. Let $P_1$ be the probability that $u_i$ has been collected after the first iteration $t = 1$, given that it is active. It is easy to see that this is precisely the corresponding decoding probability for the non-cooperative case. We thereby approximate it with $P_d^\infty$. Now, let $P_2$ be the probability that an arbitrary fixed user $u_i$ has been collected after iterations $t = 1, 2$, given it is active. We take a conservative approach by approximating the decoding probability after the complete decoding algorithm with $P_2$. We now evaluate $P_2$. Fix user $u_i$. We neglect the boundary effects and consider $u_i$ at the nominal placement $U_i = x$, $x \in A_r$. Using the total probability law with respect to $u_i$'s degree: $P_2 = \sum_{k=1}^{\infty} \Omega_k\, \tilde{c}_k$, where $\tilde{c}_k$ is the conditional probability that $u_i$ is collected after $t = 2$, given that $u_i$ is active and has $k$ adjacent base stations. Fix $k$, and, without loss of generality, fix the neighborhood of $u_i$ to $b_1, \dots, b_k$. For each $b_j$, $j = 1, \dots, k$, let $E_j$ be the event that all active users $u_{i'}$, $i' \neq i$, adjacent to $b_j$ have been decoded after iteration $t = 1$. (In the graph representation of decoding, this corresponds to $b_j$ being connected only to $u_i$ after iteration $t = 1$.) We say in the latter case that $b_j$ is known. Note that $u_i$ has been collected after $t = 2$ if and only if there exists at least one $b_j$, $j = 1, \dots, k$, such that $b_j$ is known after $t = 1$. (We write this shortly as "$b_j$ known".) If the graph were random as in, e.g., [4], the events $E_1, \dots, E_k$ would be mutually independent, and, moreover, they would be independent of $k$. Then, we would have the following formula: $\tilde{c}_k = 1 - (1 - \mathbb{P}(E_1))^k$. However, this is not the case here, and we need to proceed in a different way. In particular, we account for the correlation of the events through the inclusion-exclusion formula on the event $\cup_{j=1}^{k} E_j$: $\tilde{c}_k = \sum_{s=1}^{k} (-1)^{s+1} \binom{k}{s}\, \tilde{q}_s$. Here, $\tilde{q}_s$ is the conditional probability of $E_1 \cap \dots \cap E_s$, given that $b_1, \dots, b_k$ are the neighbors of $u_i$, $U_i = x$, and $u_i$ is active. It remains to find $\tilde{q}_s$. Clearly, this quantity depends on the area of $\cup_{j=1}^{s} B(B_j, r)$, but it also depends on $k$.

We now approximate $\tilde{q}_s$ by accounting for the former dependence and by neglecting the latter dependence. Specifically, for each $s = 1, \dots, k$, we let

$\tilde{q}_s \approx \mathbb{E}\big[ e^{-\alpha (1 - P_1) A_s} \big] \approx \sum_{j=0}^{J} \frac{(-\alpha (1 - P_1))^j}{j!}\, \mathbb{E}[A_s^j]. \qquad (7)$
Remark 2
The motivation for (7) is the following. Suppose that, after $t = 1$, the unknown (active but undecoded) users inside $B(y, r)$, for a fixed point $y$, followed a Poisson distribution with mean $\alpha (1 - P_1)$. (On average, there are $n\, p\, r^2\pi \approx \alpha$ active users over an area of size $r^2\pi$, the fraction of which that are unknown is $1 - P_1$.) Further, suppose that their distribution does not depend on $k$, the number of base stations in $B(U_i, r)$. (This is not the case in general, as more base stations in $B(U_i, r)$ tend to reduce the number of unknown users.) Then, the probability that there are no unknown users in $B(y, r)$ would be $e^{-\alpha (1 - P_1)}$. Recall $A_s$ from Section 4.1. Then, given that the area of $\cup_{j=1}^{s} B(B_j, r)$ is $A_s\, r^2\pi$, the probability that there are no unknown users in this union would be $e^{-\alpha (1 - P_1) A_s}$. Averaging with respect to $A_s$, we finally obtain the left approximation in (7). (The right one is as in (6).) The dependence of $\tilde{q}_s$ on $k$ is more pronounced when the considered iteration $t$ is larger (in (7), it is $t = 2$), and (7) becomes less accurate. Thus, we stop at $t = 2$ and let $P_d \approx P_2$. Our future work will address the dependence of $\tilde{q}_s$ on $k$.
Now, proceeding analogously to the non-cooperative case, and taking the asymptotic setting, we obtain:

$P_2 = \sum_{k=1}^{\infty} \frac{e^{-\beta} \beta^k}{k!} \sum_{s=1}^{k} (-1)^{s+1} \binom{k}{s}\, \mathbb{E}\big[ e^{-\alpha (1 - P_1) A_s} \big]. \qquad (8)$

Formula (8) can be seen as a counterpart of the corresponding update formula from the density evolution analysis on random graphs. It remains to express $\mathbb{P}(E_1)$, the probability that a fixed neighboring base station is known after $t = 1$, in terms of $P_1$. We omit details due to lack of space, but the derivation of the approximate formula is completely dual (analogous) to (8). The fixed user is replaced with a fixed station, the total probability law is taken with respect to $\Psi_k$ instead of $\Omega_k$, and the event $\{u_i \text{ coll.}\}$ is replaced with $E_j$. The resulting formula is:

$\mathbb{P}(E_1) \approx \sum_{k=0}^{\infty} \frac{e^{-\alpha} \alpha^k}{k!}\, P_1^k = e^{-\alpha (1 - P_1)}, \qquad \text{with } P_1 \approx P_d^\infty. \qquad (9)$
5 Numerical studies and interpretations
In this section, we carry out numerical studies and provide interpretations of our results.
Simulation setup. We explain the simulation setup that we use throughout the section. We fix the number of base stations $m$ and the users' activation probability $p$. We simulate the decoding probability $P_d$ and the normalized throughput $T$ for different values of $G$ by varying the number of users $n$; we vary $n$ such that $G$ sweeps the interval of interest. We evaluate these quantities through Monte Carlo simulations. For each value of $n$, we generate instances of the network, i.e., we place $n$ users and $m$ base stations uniformly over the unit square, and, for each instance, we run the non-cooperative and the cooperative decoding. (With the cooperative decoding, we simulate its graph representation.) For each fixed $n$, we perform multiple simulation runs (different random networks and the decoding algorithms over them). For each $n$ (each $G$), we estimate $P_d$ as the total number of collected users divided by the total number of active users across the runs. We estimate the normalized throughput as the total number of collected users divided by the number of runs times $m$. We obtain the moments $\mathbb{E}[A_s^j]$, $s = 1, \dots, K$, $j = 1, \dots, J$, through Monte Carlo simulations: for each $s$, we repeat many random, uniform placements of $s$ points in $B(0, 1)$, and, for each placement, we estimate the area of the resulting union of balls through Monte Carlo sampling. With the non-cooperative decoding, we evaluate $P_d$ via (6), and $T = G\, P_d$. With the cooperative decoding, we evaluate $P_d$ via (8)-(9), and $T = G\, P_d$.
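A compact version of this setup (our sketch; the parameter values are arbitrary and smaller than a full experiment would use) runs both decoders on the same random instances and reports the average numbers of collected users:

```python
import random

def compare_decoders(n=500, m=30, r=0.1, p=0.06, trials=50, seed=4):
    """Monte Carlo comparison on shared random geometric instances:
    build the station-user adjacency per slot, run plain singleton
    decoding and peeling (cooperative) decoding, and return the
    average number of collected users for each."""
    rng = random.Random(seed)
    tot_nc = tot_co = 0
    for _ in range(trials):
        users = [(rng.random(), rng.random()) for _ in range(n)]
        stations = [(rng.random(), rng.random()) for _ in range(m)]
        active = [u for u in range(n) if rng.random() < p]
        adj = []
        for sx, sy in stations:
            adj.append({u for u in active
                        if (users[u][0] - sx) ** 2
                        + (users[u][1] - sy) ** 2 <= r * r})
        # non-cooperative: singleton stations only
        nc = {next(iter(s)) for s in adj if len(s) == 1}
        # cooperative: peel degree-one stations iteratively
        work = [set(s) for s in adj]
        co = set()
        changed = True
        while changed:
            changed = False
            for s in work:
                if len(s) == 1:
                    u = next(iter(s))
                    co.add(u)
                    for t in work:
                        t.discard(u)
                    changed = True
        tot_nc += len(nc)
        tot_co += len(co)
    return tot_nc / trials, tot_co / trials

nc_avg, co_avg = compare_decoders()
print(nc_avg, co_avg)
```

Since the first peeling round coincides with non-cooperative decoding, the cooperative count can never be smaller on the same instance.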
Throughput. In the first experiment, we simulate $T$ versus $G$ for two values of $r$; both values ensure high coverage. Figure 3 plots $T$ for the non-cooperative and the cooperative decoding. We depict both the theoretical values and the values obtained through simulations. We first assess our theoretical findings. We can see that, in the non-cooperative case, our formula (6) accurately matches the simulation. For the larger value of $r$, there is a slight mismatch, which could be eliminated by taking a larger $J$. For the cooperative case, our heuristic formulas (8)-(9) follow well the trend of the curves, and we see that the heuristic is more accurate for the smaller value of $r$. Next, we compare the two decoding algorithms. From Figure 3, we can see that cooperation produces significant gains with respect to the non-cooperative decoding: for both values of $r$, the peak throughput under cooperation is markedly larger than without cooperation.
Comparisons with a single base station. To quantify the gains of diversity induced by multiple base stations, we also compare the two decoding methods with the standard slotted Aloha and a single base station. Suppose that we have one base station at the center of the region, and that the station covers the full region. Clearly, for such a system and a large $n$, the number of active users per slot is approximately Poisson with mean $G_1 = n\, p$, and the throughput is $G_1 e^{-G_1}$. The peak throughput is $e^{-1} \approx 0.368$, attained at $G_1 = 1$. To compare the single base station system with the above two decoding algorithms, we evaluate for each of the two the un-normalized peak throughput; cooperative decoding achieves a higher value than non-cooperative decoding, and both exceed the single-station peak. More generally, consider the system in the asymptotic setting, with a fixed $\beta$ (i.e., with the coverage $1 - e^{-\beta}$). From Lemma 2, with the non-cooperative decoding (and hence with the cooperative as well), the per-station throughput satisfies $T = G\, P_d \geq G\, (1 - e^{-\beta})\, e^{-4 G \beta}$, where we used that $\alpha = G\, \beta$. For a fixed $\beta$, the peak of this lower bound over $G$ is attained at $G = 1/(4\beta)$ and equals $(1 - e^{-\beta})/(4 e \beta)$. Hence, comparing with the single base station system, the $m$-base station system gives a total (un-normalized) throughput of at least $c\, m$, with $c = (1 - e^{-\beta})/(4 e \beta)$. In other words, the total throughput grows linearly with the number of base stations $m$.
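The single-station baseline quoted here is the classical slotted Aloha curve: throughput $G e^{-G}$, with peak $1/e \approx 0.368$ at $G = 1$, which a few lines verify numerically:

```python
import math

def single_station_throughput(G):
    """Slotted Aloha throughput with one base station covering everyone:
    with load G (expected active users per slot), the probability of a
    clean slot is G * exp(-G) in the Poisson limit."""
    return G * math.exp(-G)

# Scan loads on a fine grid to locate the peak numerically.
best_G = max((g / 1000 for g in range(1, 3001)),
             key=single_station_throughput)
print(round(best_G, 3), round(single_station_throughput(best_G), 4))
```

Any multi-station scheme must therefore be compared against the un-normalized baseline of $1/e$ collected users per slot.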
Quantity $G^\star(\delta)$ and the optimal radius $r$. We now give insights into how the system performance depends on $r$ and $\delta$ for the two decoding algorithms, and we demonstrate large gains of cooperation. Figure 2 depicts the simulated $G^\star(\delta)$ versus $r$. First, we can see that, for each considered $\delta$, cooperation offers an almost three times better (larger) $G^\star(\delta)$ in a wide range of $r$. Second, we can see that there is an optimal radius $r$. Consider, e.g., the non-cooperative case. On one hand, a too small $r$ does not allow for sufficient coverage, and hence it yields poor performance. On the other hand, a too large $r$ eliminates the benefits of diversity. To see this, just consider the case where $r$ is so large that each base station covers all users. In this case, all base stations have the same observations, and we effectively have the single base station system.
6 Conclusion
We studied the effects of spatial diversity and cooperation in slotted Aloha protocols with multiple base stations. Users and base stations are deployed uniformly at random over a unit area. At a fixed slot, each user transmits its packet (is active) with probability $p$ and is heard by all base stations placed within distance $r$ from it. We first considered the non-cooperative decoding, where a user is collected if it is the single active user at one of the base stations that hear it. We found the decoding probability and quantified the gains with respect to the standard single base station slotted Aloha. We showed that the peak throughput with $m$ base stations is roughly $c\, m$ times larger than when a single base station is available, for a positive constant $c$. Next, we proposed a cooperative decoding, where nearby base stations help each other resolve users' collisions through the interference cancellation mechanism. We demonstrated by simulation significant gains of cooperation with respect to the non-cooperative decoding: the peak throughput increases substantially, and the maximal load $G^\star(\delta)$ for which the decoding probability stays above $1 - \delta$ increases about three times, for a wide range of $r$. Finally, we gave a heuristic formula for the decoding probability under cooperation. The formula accounts for the problem geometry and reflects well the actual performance.
Footnotes
- Our decoding puts an additional physical requirement on the receivers. To illustrate this, let the users' messages $x_i$, $i = 1, \dots, n$, be real, positive numbers, and let the signal received by $b_j$ be $y_j = \sum_{i \in \mathcal{A}_j} h_{ij}\, x_i$. Here, $h_{ij} > 0$ is the (positive) channel gain, and $\mathcal{A}_j$ is the set of active users covered by $b_j$. Let, at the first decoding iteration $t = 1$, station $b_l$ read a clean signal $h_{kl}\, x_k$, where $u_k$ is adjacent to both $b_l$ and $b_j$. Then, $b_l$ transmits to $b_j$ the message $x_k$. Upon reading the real number $x_k$, $b_j$ performs the (real-number) subtraction $y_j - h_{kj}\, x_k$. Note that $b_l$ and $b_j$ need to perform re-scaling by their respective channel gains. Hence, each $b_j$ needs the channel gains to all its adjacent users $u_i$. Now, consider [4], where each slot (check node) lies at the same physical location of the single base station. A check node receives $\sum_i h_i\, x_i$, where $h_i$ is the channel gain from user $u_i$ to the base station, and the sum is over the active users at that slot. When a check node has a clean signal $h_k\, x_k$, with $u_k$ adjacent to both that check node and another check node (a different slot), it can just "send" $h_k\, x_k$ to the other check node, which subtracts it directly. Hence, no re-scaling by the channel gains $h_i$ is needed.
- Among the received messages, there may be repetitions, i.e., two or more of the received messages may be equal.
- The graph decoding algorithm here is very similar to the interference cancellation decoding in, e.g., [4], and the iterative (graph-peeling) decoding of LDPC codes over the erasure channel, e.g., [10]. The analogy with [4] is that base stations here correspond to different slots (check nodes) in [4], and active users here correspond to users in [4]. However, the graph structure here is very different from the one in [4]. First, for two users $u_i$ and $u_{i'}$ with $U_i$ close to $U_{i'}$, there is a high overlap between $B(U_i, r)$ and $B(U_{i'}, r)$, and hence the sets of their adjacent base stations (the check nodes to which $u_i$ and $u_{i'}$ connect) have a high overlap. This is in contrast with the random graph model, where the neighborhood sets of different users are independent. Second, in [4], with high probability, the sizes of cycles grow with the number of nodes, while here small cycles occur with a non-vanishing probability.
- More generally, for an arbitrary set of system parameters $(n, m, r, p)$, the formulas in (3) and (6) are computable with arbitrarily high accuracy, provided that $J$ is large enough and the moments $\mathbb{E}[A_s^j]$, $s = 1, \dots, K$, $j = 1, \dots, J$, are available. The accuracy can be controlled through the increase of $J$ and $K$. For example, consider $\mathbb{E}[(1 - p\, r^2\pi\, A_s)^{n-1}]$. The integrand is a polynomial of order $n - 1$ in $A_s$ and can be written as $\sum_{j=0}^{n-1} c_j\, A_s^j$, where the $c_j$ are the coefficients. Therefore, $\mathbb{E}[(1 - p\, r^2\pi\, A_s)^{n-1}] = \sum_{j=0}^{n-1} c_j\, \mathbb{E}[A_s^j]$.