Wireless Capacity and Admission Control in Cognitive Radio
We give algorithms with constant-factor performance guarantees for several capacity and throughput problems in the SINR model. The algorithms are all based on a novel LP formulation for capacity problems. First, we give a new constant-factor approximation algorithm for selecting the maximum subset of links that can be scheduled simultaneously, under any non-decreasing and sublinear power assignment. For the case of uniform power, we extend this to the case of variable QoS requirements and link-dependent noise terms. Second, we approximate a problem related to cognitive radio: find a maximum set of links that can be simultaneously scheduled without affecting a given set of previously assigned links. Finally, we obtain constant-factor approximation of weighted capacity under linear power assignment.
How much communication can be active simultaneously in a given wireless network? This is a topic of major research effort. We address this question in a more generalized setting than previously considered, and give efficient algorithms that achieve good performance guarantees based on a novel mathematical programming formulation.
In the capacity problem in wireless networks, we are given a set of communication links in a metric space, each consisting of a sender-receiver pair, and we seek to find the largest subset of links that can transmit simultaneously within the model of interference. We adopt the SINR model of interference where transmission over a link succeeds if the received signal at the receiver is sufficiently large, compared to ambient noise and interference from other transmissions. This model has emerged as a superior model for wireless interference patterns, as it is both analytically manageable, and reasonably realistic, especially in comparison to graph based models [11, 26, 29]. We assume that the powers have been pre-assigned to the links, based only on the length of the links. Having such simple assignments can be of great benefit in a distributed context.
The basic capacity problem has been addressed in numerous recent works. Constant-factor approximation algorithms have been given for uniform power  and more generally for any non-decreasing sub-linear power assignment (see Sect. 2 for definitions) , and for arbitrary power . These results assume a uniformity of the links, both in their signal characteristics as well as their value. They also assume that no other wireless activity is affecting these transmissions. We aim to handle more general scenarios, allowing for heterogeneity in link characteristics and environment. In particular, we address three extensions:
(QoS) Each link has its own signal requirements and its own ambient noise term.
(Weights) Each link has an associated weight, and the objective is to maximize the total weight of the satisfied links.
(Admission control) Certain communication is already taking place, which cannot be interfered with (possibly for regulatory reasons).
We discuss each of these extensions further.
The SINR achieved at a particular link determines the data-rate achieved at this link, or the quality of service (QoS). Different links may have different minimum acceptable QoS requirements, for example if one link is used for video transmission and another for data transmission. In addition, the noise level at receivers may not be the same across the network. This practically motivated version of the capacity problem has not been handled in much of the previous research [8, 15, 16]. We tackle this problem, both as an interesting problem in its own right, and as a stepping stone for the following problem.
Cognitive radio and admission control
Given are two sets of links and . The goal is to find such that can transmit simultaneously and the size of is maximized. We refer to this as the admission control problem.
This problem naturally arises in at least two application areas. The first is the so-called “cognitive radio”, which has been the object of intense study of late (see [2, 33, 24] and their many references). This area has gained great salience due to recent regulatory changes in wireless bandwidth management. Though the exact technological scenario for cognitive radios is still being figured out, the essential point is as follows: a wireless channel is allocated to a “primary user”; however, one would like to accommodate more users in the channel, as long as the primary user remains feasible. This clearly is an instance of the above mentioned problem, with being typically small (in fact, perhaps, just ).
However, there is a more “classical” source of the same problem, referred to as admission control or access control [7, 36], sometimes referred to as “active link protection” . The capacity problem, in its basic form, captures a scenario where each slot is independent of previous slots. In practice, however, links can require sustained communication (and different links for different periods of time). Thus, in certain applications, a more realistic model is to maximize capacity under the constraint that older links that are still communicating not be disturbed. This again is exactly the problem defined above (but perhaps with a typically larger ). Though heuristic approaches to this problem abound, we are unaware of rigorous algorithmic results in the SINR model.
In this problem each link is associated with a non-negative weight and the goal is to find a feasible set so as to maximize .
This weighted capacity problem is a natural extension of the capacity problem, and a case can be made for theoretical investigation for this reason alone. As it happens, though, the problem is further motivated by questions about stability in queuing theory. In this setting, packets arrive at network nodes according to some stochastic process, and the problem is to characterize the set of arrival rates under which the network can be stabilized, i.e., the network queues remain bounded. In the case of wireless networking stability, the seminal work of Tassiulas and Ephremides  established the existence of a policy that stabilizes the system under all arrival rates for which stability is potentially possible. This policy can be seen to be equivalent to solving the maximum weighted capacity problem in the SINR model.
It is easy to verify whether a given set of links is feasible. In fact, an appropriate power assignment that makes it feasible can be found efficiently. Namely, Eqn. 1 can be cast as a linear program with ’s as variables, which can thus be solved optimally. Indeed, there is a large body of work where one starts with a feasible set and then tries to optimize over some other criteria, say to minimize the power consumed .
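The power-finding step can be sketched with the classic Foschini–Miljanic style fixed-point iteration, a stand-in for solving the linear program directly; it converges to the minimal feasible power vector whenever one exists. All parameter names, defaults, and the Euclidean-plane setting below are illustrative assumptions, not taken from the paper.

```python
import math

def min_feasible_powers(links, noise=1.0, beta=1.0, alpha=2.5, iters=200):
    """Fixed-point iteration toward the minimal feasible power vector.

    links: list of ((sx, sy), (rx, ry)) sender/receiver pairs in the
    Euclidean plane.  Parameter names and defaults are illustrative.
    """
    def gain(p, q):
        # Path-loss gain between two points.
        return 1.0 / math.hypot(p[0] - q[0], p[1] - q[1]) ** alpha

    powers = [1.0] * len(links)
    for _ in range(iters):
        # Each link sets its power to exactly meet its SINR target,
        # given the current powers of the others (Jacobi update).
        powers = [
            beta * (noise + sum(powers[w] * gain(links[w][0], rv)
                                for w in range(len(links)) if w != v))
            / gain(sv, rv)
            for v, (sv, rv) in enumerate(links)
        ]
    return powers
```

On a feasible instance the returned powers satisfy every SINR constraint with (near-)equality, which is the minimality property mentioned above.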
Naturally, one doesn’t expect this approach to work directly for the capacity problem introduced before, which is “combinatorial” and in fact happens to be NP-hard . What is perhaps more surprising is that the capacity problem does not appear to easily admit a linear programming relaxation either, even for simple cases. Most algorithms developed for the capacity problem have thus been very simple greedy algorithms [8, 16, 21], with some exceptions [17, 4].
In this work, starting from a simple observation, we develop an integer program that approximates the capacity problem for a large class of oblivious power assignments. We then show how to round the corresponding linear programming relaxation to get a constant factor approximation. Thus we recover the main result of  but via linear programming as opposed to a greedy algorithm. We also show that the LP formulation can be easily modified to tackle a class of important problems where greedy algorithms do not appear to work very well, including the problems discussed above.
2. Preliminaries and results
The capacity problem in the SINR model is defined as follows. We are given a set of links, each consisting of a sender and receiver pair , which are points in a metric space with a distance metric . The asymmetric distance from link to link is the distance from ’s sender to ’s receiver, denoted . Each link has been assigned transmission power . A link succeeds if
where is the ambient noise, is the required SINR level, is the path loss constant, and is the set of concurrent transmissions. A set is feasible if the above constraint holds for all . Thus the capacity problem is equivalent to finding the feasible subset of maximum size.
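The feasibility test itself is straightforward to state in code. The sketch below checks the SINR condition at every receiver; the Euclidean plane, the default thresholds, and all names are illustrative assumptions rather than choices made in the paper.

```python
import math

def sinr_feasible(links, powers, noise=1.0, beta=1.0, alpha=2.5):
    """Return True iff every link meets its SINR threshold.

    links: list of ((sx, sy), (rx, ry)) sender/receiver pairs in the
    Euclidean plane; powers: pre-assigned transmission powers.
    """
    def gain(p, q):
        return 1.0 / math.hypot(p[0] - q[0], p[1] - q[1]) ** alpha

    for v, (sv, rv) in enumerate(links):
        signal = powers[v] * gain(sv, rv)
        interference = sum(powers[w] * gain(links[w][0], rv)
                           for w in range(len(links)) if w != v)
        if signal < beta * (noise + interference):
            return False
    return True
```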
Let denote the length of link . Let denote the ratio between the maximum and minimum length of a link. A power assignment is non-decreasing if whenever and sub-linear if whenever . We will restrict our attention to this class, or to particular assignments belonging to it. Note that this class essentially contains all “natural” length-based assignments, and specifically all well-studied length-based power assignments. These include the uniform power assignment, where all links use the same power; the linear power assignment, where (which is thought to be energy efficient); and the mean power assignment, where , which is known to be essentially the “best” length-based assignment as far as capacity is concerned [13, 16].
Affectance. We will use the notion of affectance, introduced in [8, 15] and refined in  to the thresholded form used here, which has a number of technical advantages. The affectance on link from another link , with a given power assignment , is the interference of on relative to the power received, or
where is a constant depending only on the parameters of the link .
We will drop when clear from context. Let . For a set of links and a link , let and . For sets and , . Using such notation, Eqn. 1 can be rewritten as follows, which we will adopt:
In the variable QoS version of the capacity problem, and are no longer constants, but can be different for different links. Note that the definition of affectance stays the same apart from a changed definition of where and are respectively the signal requirement and noise level for .
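Concretely, the affectance computation can be sketched as follows. The constant used here, c_v = beta / (1 − beta·N / S_v), is one common form of the link-dependent constant from the literature; the exact constant in the paper, and all names and defaults, should be treated as assumptions.

```python
import math

def affectance(w, v, links, powers, noise=1.0, beta=1.0, alpha=2.5):
    """Thresholded affectance of link w on link v: the interference of
    w on v relative to the power received, scaled by a constant c_v
    depending only on link v, and capped at 1."""
    if w == v:
        return 0.0
    (sw, _), (sv, rv) = links[w], links[v]
    length_v = math.hypot(sv[0] - rv[0], sv[1] - rv[1])
    d_wv = math.hypot(sw[0] - rv[0], sw[1] - rv[1])
    signal = powers[v] / length_v ** alpha       # received power on v
    interference = powers[w] / d_wv ** alpha     # interference from w
    c_v = beta / (1.0 - beta * noise / signal)   # folds in ambient noise
    return min(1.0, c_v * interference / signal)
```

With this normalization, a set is feasible exactly when the total affectance on each of its links is at most 1 (as in the rewritten form of Eqn. 1).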
For all problems that we consider, we use the same notation for the optimal solution, referring to the problem being discussed at that point.
We prove the following results.
For length monotone, sub-linear power assignments, there is a constant-approximation algorithm for the wireless capacity problem. For uniform power, there is a constant-approximation algorithm for the QoS generalization.
The first part (not involving QoS) is the same as the main result proven in , but via a linear programming relaxation.
For the admission control problem with uniform power,
There is a -approximation algorithm.
If the optimum solution (for some constant ), there is a constant-approximation algorithm.
Specifically, for the “cognitive radio” case of the problem where (or small, at any rate), we get a constant-factor approximation for uniform power. There is no straightforward greedy algorithm to tackle this problem. We believe that a greedy algorithm for the variable QoS problem is possible, but even if this is true, the resultant version for admission control would have an approximation factor worse than our results by a factor. Additionally, we see no way of utilizing the condition in a greedy algorithm.
For linear power, there is a constant-approximation algorithm for the weighted capacity problem.
For this problem, greedy algorithms combined with some basic observations can yield a -approximation algorithm (we describe this algorithm in detail in Section 6 when we experimentally compare it with our LP based algorithm).
We remark that our results hold in arbitrary metric spaces, are independent of the path loss constant , and faithfully treat the ambient noise term.
The first work to study capacity of randomly deployed networks was the work of Gupta and Kumar . Rigorous worst-case algorithmic analysis started with the work of Moscibroda and Wattenhofer , who studied the scheduling complexity of arbitrary sets of wireless links. Early work on approximation algorithms produced approximation factors that grew with structural properties of the network [30, 28, 3].
The first constant-factor approximation algorithm was obtained for the capacity problem with uniform power in  (see also ). Fanghänel, Kesselheim and Vöcking  gave an algorithm that uses at most slots for the scheduling problem with linear power assignment , a result that holds in general distance metrics.
Recently, Kesselheim obtained a constant-approximation algorithm for the capacity problem with power control for doubling metrics and for general metrics . In another work , constant factor approximation was achieved for all non-decreasing, sub-linear power assignments. The greedy algorithms of [8, 15, 16] can be modified to handle the problems we address here, and these algorithms essentially constitute the previous best results on these problems.
As far as we can ascertain, the algorithmic situation for admission control and weighted capacity is somewhat similar to the situation the “basic” capacity problem was in before the array of results mentioned above. Thus, we have a large body of work and results in different settings motivating the study of these questions, but no worst-case algorithmic results.
The works on the emergent field of cognitive radio are too numerous to adequately cover. We refer the reader to  for a thorough discussion. For results on capacity of networks in a cognitive radio context see [19, 32] etc. (“capacity” not necessarily meaning the exact same thing we do). For the stability problem in a queuing theory setting that gives rise to the weighted capacity problem there are many works in graph based models [31, 20, 25] as well as recent ones on the SINR model . The weighted capacity problem was recently studied in , where the authors propose a version of the greedy-based approximation.
In terms of using an LP approach in the SINR setting, there is recent work of Hoefer et al. , who use related insights in their formulation. In the context of throughput maximization,  employed a linear programming solution. Being based on unit disc graphs, that approach does not lead to the performance bounds we seek here.
3. The basic capacity problem
Let us first consider how one would attempt to write an integer program (and a subsequent linear programming relaxation) for the capacity problem. If the variable denotes that link was selected in the solution, we see that for selected links, the condition would have to hold. This is quite nice and linear, except for the fact that we would have to somehow indicate that this condition need only hold for , and that no condition need hold for links in . There appears to be no way to do this in a linear program.
For clarity, we will first present our linear program (and the whole algorithm) below, and then, in proving its correctness, we will describe how our algorithm evades the problem elucidated above.
Our algorithm has three main steps:
Linear Program The first step of our algorithm is to solve the following linear program, with variables , one corresponding to every link . Let be a large enough constant.
maximize ∑_v x_v
subject to
∑_w x_w · a_w(v) ≤ c    for every link v    (3)
∑_w x_w · a_v(w) ≤ c    for every link v    (4)
0 ≤ x_v ≤ 1    for every link v    (5)
Rounding We then “round” the fractional solution to this linear program in two steps.
Let be the value of the (fractional) solution to .
First, we select a set , defined by binary variables , which are generated independently at random such that (and thus ).
Next, we choose a subset of named defined by , where is a binary random variable corresponding to this second round of selection. The variable is defined as follows: iff and the following two conditions hold:
Final Selection Finally, a feasible set is extracted from using a simple signal-strengthening technique which we will detail later.
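The two rounding stages above can be sketched as follows, assuming a fractional solution is already in hand. The scaling constant and the 1/2 affectance thresholds are illustrative stand-ins for the constants fixed in the analysis, and the final signal-strengthening step is omitted.

```python
import random

def round_lp_solution(x, aff, c=4.0, seed=0):
    """Two-stage randomized rounding of a fractional capacity solution.

    x: fractional LP values per link; aff[w][v]: affectance of w on v.
    """
    rng = random.Random(seed)
    n = len(x)
    # Stage 1: pick each link independently with probability x_v / c.
    first = [v for v in range(n) if rng.random() < x[v] / c]
    # Stage 2: keep a link only if the affectance both onto it and from
    # it, with respect to the sampled set, is small.
    kept = [v for v in first
            if sum(aff[w][v] for w in first if w != v) <= 0.5
            and sum(aff[v][w] for w in first if w != v) <= 0.5]
    return kept
```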
We need the following definitions.
A link set is -feasible (resp., -anti-feasible), if for all (resp. if for all ). A link set is -bi-feasible if it is both -feasible and -anti-feasible.
We will simply write “feasible”, “anti-feasible” and “bi-feasible” when .
Our first step is to show that the solution to the linear program is an approximation to the capacity problem, or more formally:
Let be the value of the optimal solution of . Then, .
To prove this, it suffices to construct a solution (for all ) to the linear program such that , one that satisfies all of its constraints.
Since is feasible, there is a 2-bi-feasible subset such that (See  for a simple proof of this fact).
Now construct the solution by setting if and otherwise. Thus . Lemma 5 then follows if we can show that Conditions 3 and 4 hold for this solution, making it a valid solution to . These follow directly from two lemmas noted below (Lemmas 6 and 7), by setting to be larger than the implicit constants in those two lemmas. ∎
The following Lemma was proven in . For completeness, we give a proof in the appendix that holds for arbitrary ambient noise.
Assume is -feasible using a non-decreasing, sub-linear power assignment. Let be any link such that for all . Then .
The next Lemma, something of a dual of the previous one, was proven recently in :
Assume is -anti-feasible using a non-decreasing, sub-linear power assignment. Let be any link such that for all . Then .
Remarks: Lemmas 6 and 7 hold the crucial insight that allow us to circumvent the problem mentioned at the beginning of this section. Note how these lemmas bound the affectance to and from a link without the condition that be a part of the feasible (or anti-feasible) set . This allows us to evade the issue of having to express conditions that only apply for links in the solution set. Instead we can write constraints (Equations 3 and 4) which apply to all links.
The next step is to analyze the Rounding phase. In particular, we claim that
Recall that . Then by linearity of expectation,
where we use .
Let denote the indicator random variable of the event that both Cond. 6 and 7 are fulfilled for link . Then . The point to note here is that the events and are independent since the random variable is not involved in the former (because ).
We will prove below that (Lemma 9).
Thus, . Now continuing with Eqn. 8, .
As promised, we lower bound :
Finally, we need to show that we can extract a large feasible subset from in the Final Selection phase. The following signal-strengthening lemma from  will be frequently useful.
[Thm. 1 of , slightly restated] If is an -feasible set, then can be partitioned into -feasible sets, for any .
There is an efficient algorithm to find a feasible set such that .
The algorithms in the following two sections will follow the same three-part design of LP, Rounding and Final Selection. Due to space constraints, we will mostly focus on the changes in the LP formulation and, when appropriate, the changes in the Rounding phase, without proving everything from scratch.
4. Cognitive radio/Admission control
Variable noise and signal requirements (QoS)
Recall that in this variation of the problem each link has a separate QoS and noise level and definition of affectance changes accordingly. If a link set is such that for all for some unspecified constant , we call the power assignment nearly uniform. The following holds.
Assume is anti-feasible and is some link. Assume that all links use a nearly uniform power assignment. Then .
The proof is a standard modification of the same result for uniform power with constant and (see, for example, Lemma 11 of ). Our proof of Lemma 6 provided in the appendix gives a general idea of this type of proof, and we mention after that proof the main changes needed to achieve Lemma 12.
The following modified LP can be used for uniform power capacity in this setting:
The additional steps after solving the LP, and the analysis, follow the same lines as Thm. 1 (we omit the details due to space constraints).
Now we can focus on the admission control problem for which we will use some ideas from the variable QoS case.
We will prove the following more general result first.
Assume links in use a nearly uniform power assignment. Assume that links in use some arbitrary power assignment. Then we can approximate the admission control problem up to a factor of .
Recall that the goal is to find of maximum size such that is feasible. Thus in choosing , we have to be careful about the affectance of on and vice-versa. Our approach is to handle the affectance from as noise. In this regime, the new “noise” present at each link is the original noise , plus the interference received from all links in . Specifically, for , we define a variable noise level . Define to be the affectance taking this variable noise into account.
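The "interference as noise" idea is simple to compute: the effective noise at a candidate link is the ambient noise plus the total interference received from the already-admitted links. A minimal sketch (all names, defaults, and the Euclidean setting are assumptions):

```python
import math

def effective_noise(link, prior_links, prior_powers, noise=1.0, alpha=2.5):
    """Effective noise at a candidate link's receiver: ambient noise
    plus total interference from the already-admitted links."""
    _, rv = link
    extra = sum(p / math.hypot(s[0] - rv[0], s[1] - rv[1]) ** alpha
                for (s, _), p in zip(prior_links, prior_powers))
    return noise + extra
```

Feeding this per-link noise into the affectance definition yields the modified affectance used in the LP below.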
Now consider the following LP relaxation:
We show that the solution of LP2 is close to .
Let be the value of the solution to . Then .
The next step is to round the fractional solution achieved from solving the LP. As before, we first set with independent probability . Let us define the event as the condition holding. Let us define the event , for each link , as the condition holding.
We derive another round of selections by setting iff and both and occur. Thus,
Now . As we have seen before, is independent of , thus (via Cond. 11 and Markov’s inequality).
On the other hand, is not independent of . However, occurring given is the same as being true. But , by the definition of affectance. Thus . Thus finally, . Therefore, .
After the last round of selection, we thus get a set such that , in expectation.
Using averaging arguments and signal strengthening as before, we can extract which is feasible. To complete the solution, we need to extract a subset of such that
From the condition , it is not hard to see that the set of selected links from can be partitioned into sets such that Eqn. 12 holds. This gives us the sought-after -approximation. ∎
Thm. 13 implies part a) of Thm. 2 directly. We note that this implies a -approximation algorithm that holds under any other non-decreasing sublinear power assignment, by partitioning the link set into at most sets of nearly-uniform power.
We prove the last part of Thm. 2 below.
Let . If , for a large enough constant , then there is a constant-factor approximation algorithm for the admission control problem for uniform power.
First we show that if is large, we can assume that the affectances from to are small.
Assume , for a large enough constant . Define and . Then .
To see this, note that , since must be feasible in the presence of . Now, defining , . Thus , or , and finally if is large enough. ∎
We can also claim a strengthening property.
Assume is a set such that for all , , and for all . Then there is a subset with , such that for all and such a subset can be found in polynomial time with high probability.
Simply select each link in with probability . Let the set of selected links be . Then for all . Consider a fixed . Now , where is the i.i.d. random variable indicating selection into . We can use Hoeffding’s inequality to get a large-deviation bound.
[Hoeffding, ] Let the independent random variables be bounded, i.e., , and let . Then,
Set to be . We can verify that given our assumptions, setting and suffices. Setting ,
This implies, by the union bound, that with probability at least , for all simultaneously. We now have a proof of not only the existential statement but also the algorithmic one, since we can repeat the random experiment multiple times to get the high-probability result. ∎
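The repeat-until-success step can be sketched directly: subsample with probability 1/2, check the affectance condition, and retry on failure. The bound parameter and the retry cap are illustrative; the analysis fixes the actual constants.

```python
import random

def halve_with_small_affectance(n, aff, bound, tries=100, seed=0):
    """Repeat the p = 1/2 subsampling experiment until every link sees
    total affectance at most `bound` from the sampled set; return the
    successful sample, or None after `tries` attempts."""
    rng = random.Random(seed)
    for _ in range(tries):
        sample = [v for v in range(n) if rng.random() < 0.5]
        if all(sum(aff[w][v] for w in sample if w != v) <= bound
               for v in range(n)):
            return sample
    return None
```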
Note that the above holds equally for any affectance function (specifically, the case ).
Now we can describe the linear programming relaxation. First note that by virtue of Lemma 16 it suffices to assume the input instance is and the optimum is (links not in can be thrown out by simple pre-processing). Let us reuse notation and to refer to this new instance after pre-processing.
Consider the following linear program ()
We claim that this is a relaxation up to constant factors.
Let be the optimal value of . Then .
We can round this solution in the same way as before, with a two-stage selection process. The proof varies only in that we need to claim that after the second selection (characterized by Bernoulli variable ), with high probability, simultaneously for all . This follows from an argument similar to Lemma 17, using the fact that affectances are bounded by and using Hoeffding’s inequality.
5. Weighted capacity
Recall that the result for weighted capacity applies only to linear power. For linear power the following stronger version of Lemma 6 holds.
Assume is feasible using linear power and is any link (also using linear power). Then, .
The proof is nearly identical to that of Lemma 6, as elaborated in the appendix.
We can now write the following LP relaxation for the weighted capacity problem.
Proof of Thm. 3 (sketch). The proof is rather like that of Thm. 1. Once again, we select each link into a set with probability (characterized by Bernoulli variable for each link ) and then do a further selection by setting where iff and . As in the proof of Thm. 1, one can show that and thus the expected weighted output is , which is within a constant factor of the optimum. Finally, the set can be partitioned into a constant number of feasible subsets using signal strengthening (Lemma 10), completing the proof. ∎
For other power assignments, such as uniform power, we seem to be within striking distance of a -approximation. This is unfortunately not the case: we can only claim a poly-logarithmic approximation, worse than the greedy case. However, as we show in Section 6, in practice the LP approach might be applicable to these other power assignments as well.
6. Experimental results

In this section, we present results from simulation experiments. We focus on the weighted capacity problem in our experiments. It is difficult to conduct a comparative experiment for the admission control problem, there being no obvious previous algorithm to compare with.
In contrast, the weighted capacity problem admits straightforward modifications of the greedy algorithm, and thus provides a better comparative benchmark for our algorithm. Two natural greedy algorithms can be proposed:
Using weight classes: Let (by scaling). Now we can assume that all . This is because links with smaller weights can be discarded without losing more than a factor of in the approximation quality. Now divide the links into weight classes, the weight class is defined by for to . Now if we consider links belonging to a single , the weights do not matter (up to a factor of 2). We simply run the greedy algorithm of  for each , and output the solution for the best weight class. This gives a straightforward approximation factor.
Using length classes: Let by scaling and let . Divide the links into length classes for to . Within we can run the greedy algorithm on the links in any order, since the lengths are essentially the same; thus we go through the links in descending order of weight, achieving a constant-factor approximation on . We choose the solution for the best , thus getting a approximation.
Thus, taking the better of the two greedy algorithms, we achieve a approximation. In what follows, we shall refer to this combined algorithm as the “greedy algorithm”.
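The weight-class variant can be sketched as follows. The greedy subroutine here is a simplified stand-in for the cited greedy capacity algorithm (it keeps a link when mutual affectance with kept links is small); the threshold and the number of classes are illustrative parameters.

```python
def greedy_capacity(order, aff, threshold=0.5):
    """Scan links in `order`; keep a link if its mutual affectance with
    every link already kept stays below `threshold`."""
    kept = []
    for v in order:
        if all(aff[v][w] + aff[w][v] <= threshold for w in kept):
            kept.append(v)
    return kept

def weighted_greedy(weights, aff, classes=10):
    """Weight-class greedy: bucket links into powers-of-two weight
    classes, run the unweighted greedy inside each class, and return
    the heaviest resulting set."""
    wmax = max(weights)
    buckets = [[] for _ in range(classes)]
    for v, w in enumerate(weights):
        k = 0
        while w < wmax / 2 ** (k + 1) and k < classes - 1:
            k += 1
        buckets[k].append(v)
    best = []
    for b in buckets:
        cand = greedy_capacity(b, aff)
        if sum(weights[v] for v in cand) > sum(weights[v] for v in best):
            best = cand
    return best
```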
We randomly generated the instances. Some important parameters of the experiments are as follows:
: The maximum length of a link (the implicit minimum being )
: A number indicating that the sender of a link is chosen from a square
: number of links
We also use , , and . The instances were generated as follows. For each link, the sender was chosen randomly from a square. The length of the link was chosen randomly from . The receiver was then placed at this distance from the sender, in a random direction. The weight was chosen independently from . We consider other weight distributions later, and refer to this initial choice as the ordinary distribution.
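The generation procedure above can be sketched as follows; the square side, maximum length, and weight range are illustrative placeholders for the parameter values used in the experiments.

```python
import math
import random

def generate_instance(n, side=100.0, max_len=10.0, seed=0):
    """Generate a random instance: senders uniform in a side x side
    square, link lengths uniform up to max_len, receivers placed at a
    random direction, weights uniform (the 'ordinary' distribution)."""
    rng = random.Random(seed)
    links, weights = [], []
    for _ in range(n):
        sx, sy = rng.uniform(0, side), rng.uniform(0, side)
        length = rng.uniform(0, max_len)
        theta = rng.uniform(0, 2 * math.pi)
        links.append(((sx, sy),
                      (sx + length * math.cos(theta),
                       sy + length * math.sin(theta))))
        weights.append(rng.uniform(0, 1))
    return links, weights
```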
One crucial aspect of both greedy algorithms, as well as the LP algorithm, is the constants used. For the LP algorithm, this is the constant in Eqn. 15. The greedy algorithm of  also depends on a constant. Though theoretical bounds for these constants are available, it has been observed before that these theoretical bounds do not perform best in practice . We run all algorithms with different values of the constant in question, sweeping over reasonable values in small increments, and choose the best solution for each algorithm separately. We ran our experiments in MATLAB, and used the convex optimization package CVX  to solve the LP.
The overall message from the experiments is that using the linear programming formulation gives a substantial improvement in the solution quality in many cases. On the other hand, the greedy algorithm is also not without merit, and can outperform the LP in certain other situations. It appears that the smaller the maximum feasible set is, the better greedy does, while as the solution size/quality improves, LP outperforms greedy. This is not surprising. When the set is really dense and the link lengths are large, the quality of the solution is bad and the cost incurred by greedy due to length-class or weight-class partitions is minimal.
In Fig. 1, we see the results for linear power with links. On the Y-axis is plotted , where and are, respectively, the quality of the solution found by the linear programming algorithm and the greedy algorithm. As alluded to before, the greedy algorithm does better when and the density are both large (these are the points for which the Y-axis value is less than ), with the trend reversing when these change. Running the same experiment for an increased number of links confirms these trends (Fig. 2).
We experimented with different distributions on the weights, to see if changes here change the solution trend significantly. We tried the following weight distributions.
Reversed: Set the weight of link to be where is chosen according to the ordinary distribution.
Length determined: Set weight of the link to be equal to its length.
Weight class: Choose a parameter randomly from and set weight to .
The overall trend is similar. For reversed and length determined distributions, LP did extremely well, whereas for the case of weight class distribution, greedy did much better, with LP only barely outperforming it in a few cases. This further points to the benefit of combining these algorithms in practice. The results for the reversed case are shown in Fig. 3.
Next we experimented with uniform power. As we discussed in Section 5, for uniform power we can only claim a poly-logarithmic approximation factor. However, the bounds are only that bad on rather pathological instances, and one needs to do some work to come up with them. Thus in practice, it is reasonable to expect that an LP approach will not be without benefit. This is indeed borne out by our experiments, as seen in Fig. 4.
Research partially funded by grant 90032021 and grant-of-excellence 120032011 from the Icelandic Research Fund. Authors thank Neal Young for helpful discussions.
-  E. Ásgeirsson and P. Mitra. On a game theoretic approach to capacity maximization in wireless networks. In INFOCOM, 2011.
-  Paramvir Bahl, Ranveer Chandra, Thomas Moscibroda, Rohan Murty, and Matt Welsh. White space networking with Wi-Fi like connectivity. In Proceedings of the ACM SIGCOMM 2009 conference on Data communication, SIGCOMM ’09, pages 27–38, New York, NY, USA, 2009. ACM.
-  D. Chafekar, V.S. Kumar, M. Marathe, S. Parthasarathy, and A. Srinivasan. Cross-layer Latency Minimization for Wireless Networks using SINR Constraints. In Mobihoc, 2007.
-  D. Chafekar, V.S.A. Kumar, M.V. Marathe, S. Parthasarathy, and A. Srinivasan. Approximation Algorithms for Computing Capacity of Wireless Networks with SINR Constraints. In Infocom, 2008.
-  Mung Chiang, Prashanth Hande, Tian Lan, and Chee Wei Tan. Power control in wireless cellular networks. Foundations and Trends in Networking, 2(4):381–533, 2008.
-  Alexander Fanghänel, Thomas Kesselheim, and Berthold Vöcking. Improved algorithms for latency minimization in wireless networks. In ICALP, pages 447–458, July 2009.
-  A. Goldsmith and S. B. Wicker. Design challenges for energy-constrained ad hoc wireless networks. IEEE Wireless Communications Magazine, 9(4):8–27, 2002.
-  O. Goussevskaia, M. Halldórsson, R. Wattenhofer, and E. Welzl. Capacity of Arbitrary Wireless Networks. In INFOCOM, pages 1872–1880, April 2009.
-  Olga Goussevskaia and Roger Wattenhofer. Complexity of scheduling with analog network coding. In FOWANC, May 2008.
-  M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, April 2011.
-  Jimmi Grönkvist and Anders Hansson. Comparison between graph-based and interference-based STDMA scheduling. In Mobihoc, pages 255–258, 2001.
-  P. Gupta and P. R. Kumar. The Capacity of Wireless Networks. IEEE Trans. Information Theory, 46(2):388–404, 2000.
-  M. Halldórsson. Wireless scheduling with power control. http://arxiv.org/abs/1010.3427, September 2010. Earlier version appears in ESA ’09.
-  M. Halldórsson and P. Mitra. Nearly Optimal Bounds for Distributed Wireless Scheduling in the SINR Model. In ICALP, 2011.
-  M. Halldórsson and R. Wattenhofer. Wireless Communication is in APX. In ICALP, pages 525–536, July 2009.
-  Magnús M. Halldórsson and Pradipta Mitra. Wireless Capacity with Oblivious Power in General Metrics. In SODA, 2011.
-  Martin Hoefer, Thomas Kesselheim, and Berthold Vöcking. Approximation algorithms for secondary spectrum auctions. In Proc. 23rd Symp. Parallelism in Algorithms and Architectures (SPAA 2011), pages 177–186, 2011.
-  W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
-  S.A. Jafar and S. Srinivasa. Capacity limits of cognitive radio with distributed and dynamic spectral activity. Journal on Selected Areas in Communications, 25(3), 2007.
-  Changhee Joo, Xiaojun Lin, and N.B. Shroff. Understanding the Capacity Region of the Greedy Maximal Scheduling Algorithm in Multi-Hop Wireless Networks. In INFOCOM, 2008.
-  T. Kesselheim. A Constant-Factor Approximation for Wireless Capacity Maximization with Power Control in the SINR Model. In SODA, 2011.
-  T. Kesselheim and B. Vöcking. Distributed contention resolution in wireless networks. In DISC, pages 163–178, August 2010.
-  Long B. Le, Eytan Modiano, Changhee Joo, and Ness B. Shroff. Longest-queue-first scheduling under SINR interference model. In MOBIHOC, 2010.
-  Marco Levorato, Urbashi Mitra, and Michele Zorzi. Cognitive interference management in retransmission-based wireless networks. In Proceedings of the 47th annual Allerton conference on Communication, control, and computing, Allerton’09, pages 94–101, Piscataway, NJ, USA, 2009. IEEE Press.
-  Bo Li, Cem Boyaci, and Ye Xia. A refined performance characterization of longest-queue-first policy in wireless networks. In MobiHoc, pages 65–74, 2009.
-  Ritesh Maheshwari, Shweta Jain, and Samir R. Das. A measurement study of interference modeling and scheduling in low-power wireless networks. In SenSys, pages 141–154, 2008.
-  T. Moscibroda and R. Wattenhofer. The Complexity of Connectivity in Wireless Networks. In INFOCOM, 2006.
-  Thomas Moscibroda, Yvonne Anne Oswald, and Roger Wattenhofer. How optimal are wireless scheduling protocols? In INFOCOM, pages 1433–1441, 2007.
-  Thomas Moscibroda, Roger Wattenhofer, and Yves Weber. Protocol Design Beyond Graph-Based Models. In Hotnets, November 2006.
-  Thomas Moscibroda, Roger Wattenhofer, and Aaron Zollinger. Topology Control meets SINR: The Scheduling Complexity of Arbitrary Topologies. In MOBIHOC, pages 310–321, 2006.
-  Gaurav Sharma, Ravi Mazumdar, and Ness B. Shroff. Delay and capacity trade-offs in mobile ad hoc networks: A global perspective. In INFOCOM. IEEE, 2006.
-  Y. Shi, C. Jiang, Y. Thomas Hou, and S. Kompella. On capacity scaling law of cognitive radio ad hoc networks. In Proc. IEEE International Conference on Computer Communication Networks (ICCCN), 2011.
-  H. Su and X. Zhang. Cross-layer based opportunistic MAC protocols for QoS provisioning over cognitive radio wireless networks. IEEE Journal on Selected Areas in Communications, 26(1):118–129, 2008.
-  L. Tassiulas and A. Ephremides. Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. IEEE Trans. Automat. Contr., 37(12):1936–1948, 1992.
-  P.J. Wan, O. Frieder, X. Jia, F. Yao, X. Xu, and S. Tang. Wireless link scheduling under physical interference model. In INFOCOM, 2011.
-  C. C. Wu and D. P. Bertsekas. Admission control for wireless networks. IEEE Trans. on Vehicular Technology, 50:504–514, 2001. http://web.mit.edu/dimitrib/www/Adcontrol.pdf.
Lemma 6: If is -feasible using a non-decreasing sublinear power assignment and is a link such that for all , then .
Assume that is a -feasible set. By the signal strengthening (Lemma 10), this affects only the constant factor.
Consider the link such that is minimum. Also consider the link with minimum. Let . We claim that for all links in , ,
To prove this, assume, for contradiction, that . Then, , by definition of . Now, again by the definition of , and . Thus and similarly . On the other hand . Now, , contradicting the following:
Lemma 21 ().
Let be links in a -feasible set. Then, .
Consider now any link in , . By the triangle inequality and Eqn. 16, . Now . Since , it holds that and by sub-linearity it holds that . Thus,
where the final equality follows from the feasibility of . Finally, summing over all links in
since by assumption. ∎