Stability and Distributed Power Control in MANETs with Outages and Retransmissions
Abstract
In the current work the effects of hop-by-hop packet loss and retransmissions via ARQ protocols are investigated within a Mobile Ad hoc NETwork (MANET). Errors occur due to outages, and a success probability function is associated with each link, which can be controlled by power and rate allocation. We first derive the expression for the network’s capacity region, where the success function plays a critical role. Properties of the latter, as well as of the related maximum goodput function, are presented and proved. A Network Utility Maximization (NUM) problem with stability constraints is further formulated, which decomposes into (a) the input rate control problem and (b) the scheduling problem. Under certain assumptions problem (b) is relaxed to a weighted sum maximization problem with a number of summands equal to the number of nodes. This further allows the formulation of a noncooperative game where each node decides independently over its transmitting power through a chosen link. Use of supermodular game theory suggests a price-based algorithm that converges to a power allocation satisfying the necessary optimality conditions of (b). Implementation issues are considered so that minimum information exchange between interfering nodes is required. Simulations illustrate that the suggested algorithm yields near-optimal results.
Automatic Repeat reQuest Protocols, Network Stability, Network Utility Maximization, Distributed Power Control, Supermodular Games
I Introduction
The current work considers a Mobile Ad hoc NETwork (MANET) where data flows entering from a set of source nodes must be routed to their destinations. In such networks a major concern is the maximum set of incoming rates that can be supported, since interference is the bottleneck. If a utility function is associated with each incoming flow, a very interesting problem is to maximize the sum of all utilities under the constraint that the queues of all nodes remain stable. Such problems have been addressed in [1], [2], [3], [4], [5], and algorithms that optimally adapt the incoming rates and the transmission powers of each node have been suggested.
In the current work we are interested in taking these models a step further and investigating how the stability regions and the optimal policies for congestion control, routing and power allocation vary when the queues of each node use ARQ protocols to repeat transmissions of packets that are erroneous due to outages. In the current literature, investigations addressing the network utility maximization (NUM) problem with erroneous transmissions through the links consider mainly fixed routing. In [6] the model does not consider queueing aspects, and a NUM problem with rate-outage constraints per link is approximately solved. In [7] the effect of end-to-end error probability is included in the utility of each source. The same problem with average power and reliability requirements is posed and algorithmically solved in [8]. Furthermore, in [9] a single-hop ad hoc network with outages is considered, where a solution for joint admission control, rate and power allocation is derived based on a stochastic game formulation. Other contributions that investigate the effect of retransmissions in MANETs incorporating Random Access MAC protocols include [10], [11].
Motivated by a comment in [12], where it is stated that "in practical communication systems, the link capacity should be defined appropriately, taking packet loss and retransmissions into account, hence the flow conservation law holds for goodputs instead of rates", and after a presentation of the model under study in Section II, we derive in Section III the goodput capacity region. The success probability for the transmission over a wireless link depends on the entire power allocation and the scheduled transmission rate. We constrain our investigations to functions with the specific properties presented in Section IV, where it is shown that these also hold for the Rayleigh fading expression of [13]. The NUM problem naturally decomposes in Section V into the input rate control and the scheduling problem.
At this point a major challenge is to achieve a decentralized solution of the second problem. This is always possible, of course, for the case of parallel channels (see also [12] and [14]). Algorithmic suggestions can be found in [2] for zero/full power allocations and in [15] by solving a maximum weighted matching problem over a conflict graph. In our work fully distributed implementation is achieved by approaching the second problem with the arsenal of supermodular game theory in Section VI (an idea appearing in [16] and [17]), resulting in the suggestion of a price-based algorithm in VI.D and VI.E that achieves almost optimal solutions with minimum information exchange between the nodes. Simulation results are finally presented in Section VII.
II Model under Study
We consider a wireless network consisting of nodes , while is the set of all possible links. The time is divided into slots of equal duration (normalized to ) and is the time index. Data flows enter the network at source nodes and are removed at destination nodes . The set of data flows (commodity flows) injected into the network at a source node with a predefined destination is denoted by . The routes of the data flows through the network are not fixed. Furthermore, each link is characterized by an origin node (begin) and an end node (end). At each node , a total of buffers, one for each commodity flow, is reserved (see also Fig. 1).
In the general case investigated, each node chooses at slot a power as well as a rate to transmit data through link , as long as . The total transmission rate of packets through link at some time slot is the sum of the transmission rates of the individual commodities sharing the link meaning . Scheduled packets of variable length for each commodity are combined into a common packet of length and sent through the link. The resulting long packet may be received at node with errors due to fading and interference. The probability of successful transmission is then a real valued function of the entire power allocation at slot , and the scheduled rate . The set of all possible scheduling rates is denoted by . The nodes have power restrictions, e.g. and is the convex compact set of all feasible power allocations.
Examples of such success probability functions for flat block-fading channels can be found using the outage probability definition [18]. Given a threshold value $\theta_l$ of link $l$ (we often simply write $\theta$),

(1)   $P_s^l(\mathbf{p}, c_l) = \Pr\left\{ \frac{g_{ll} h_{ll} p_l}{\sigma^2 + \sum_{k \neq l} g_{lk} h_{lk} p_k} \geq \theta_l \right\}$

where $\theta_l$ stands for $2^{c_l}-1$, $c_l$ is the scheduled transmission rate through the link, $g_{lk}$ is the slow varying path gain and $h_{lk}$ is the associated flat fading component of the channel. For the case of Rayleigh/Rayleigh fading (meaning Rayleigh fast fading for both the desired and interference signals), a closed form expression of (1) can be found in [13] and [19]:

(2)   $P_s^l(\mathbf{p}, c_l) = \exp\left(-\frac{\theta_l \sigma^2}{g_{ll} p_l}\right) \prod_{k \neq l} \frac{1}{1 + \theta_l \frac{g_{lk} p_k}{g_{ll} p_l}}$
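As an illustration, the Rayleigh/Rayleigh closed form can be evaluated numerically. The sketch below assumes the expression of [13] with threshold $\theta = 2^{c}-1$; the function name and the gain-matrix layout (`gains[l][k]` as the path gain from transmitter `k` to receiver `l`) are hypothetical.

```python
import math

def success_prob(powers, l, gains, rate, sigma2):
    """Success probability of link l under Rayleigh/Rayleigh fading,
    assuming the closed form P_s = exp(-theta*sigma2/(g_ll*p_l))
    * prod_{k != l} 1/(1 + theta*g_lk*p_k/(g_ll*p_l)),
    with theta = 2**rate - 1 (an illustrative instantiation)."""
    theta = 2.0 ** rate - 1.0
    own = gains[l][l] * powers[l]          # received desired-signal strength
    ps = math.exp(-theta * sigma2 / own)   # noise-only term
    for k, pk in enumerate(powers):        # one factor per interferer
        if k != l:
            ps /= 1.0 + theta * gains[l][k] * pk / own
    return ps
```

With a single link and no interferers the expression reduces to the pure noise term, while any interfering transmission strictly decreases the success probability, in line with properties P.1 and P.2 below.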
Observe that the success functions used imply that only the channel fast fading statistics are known and the nodes have no other instantaneous channel state information (CSI) over the fading gains, except, possibly, the slow varying path gains. The actual amount of data transmitted through link $l$ equals $c_l X_l$, where $X_l$ is a binary random variable which equals $1$ for success (with probability $P_s^l$) and $0$ for failure (with probability $1-P_s^l$). The expected transmission rate through link $l$ is then

(3)   $\mu_l = \mathbb{E}\left[c_l X_l\right] = c_l P_s^l(\mathbf{p}, c_l)$

and is called the goodput of link $l$ [20], [21]. Furthermore, in the analysis that follows we often encounter a quantity called maximum goodput, defined as (see [20] and [22] for parallel Rayleigh fading channels)

(4)   $\hat{\mu}_l(\mathbf{p}) = \max_{c_l \in \mathcal{C}}\, c_l P_s^l(\mathbf{p}, c_l)$
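The maximum goodput in (4) is a one-dimensional maximization over the scheduled rate. A minimal sketch, assuming a single aggregate Rayleigh interference term and a finite set of candidate rates (both illustrative assumptions):

```python
import math

def max_goodput(p_own, interference, g_own, sigma2, rates):
    """Maximum goodput (4): max over scheduled rates c of c * P_s(c),
    using a single-link Rayleigh success probability with one aggregate
    interference term (an illustrative simplification)."""
    def ps(c):
        theta = 2.0 ** c - 1.0
        snr = g_own * p_own
        return math.exp(-theta * sigma2 / snr) / (1.0 + theta * interference / snr)
    # Return (best goodput, maximizing rate); ties broken by the larger rate.
    return max((c * ps(c), c) for c in rates)
```

Higher rates raise the threshold $\theta$ and hence the outage probability, so the product $c\,P_s$ is maximized at an interior rate; increasing the own power raises the maximum goodput (property P'.1 below).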
In case a packet of length is received at node with errors, we assume that this can always be detected during decoding. When reception is correct an ACK is fed back, otherwise a NAK signal is transmitted to via a reliable zero-delay wireless feedback link. In the latter case the packets of all transmitted commodities are not removed from the buffer but wait for a future retransmission (Stop-and-Wait ARQ) under some new scheduling decision. The queue evolution for each node $n$ and commodity flow $d$ at slot $t$ is given by

(5)   $q_n^{(d)}(t+1) = \left[q_n^{(d)}(t) - \sum_{l \in \mathcal{O}(n)} c_l^{(d)}(t) X_l(t)\right]^{+} + \sum_{l \in \mathcal{I}(n)} c_l^{(d)}(t) X_l(t) + a_n^{(d)}(t)$

The success probability of the transmission through link $l$ is equal for all commodities, since it depends on the sum rate $c_l$. In expression (5), $\sum_{l \in \mathcal{O}(n)} c_l^{(d)}(t) X_l(t)$ is the actual outgoing data ("actual" meaning "error free") from node $n$, $\sum_{l \in \mathcal{I}(n)} c_l^{(d)}(t) X_l(t)$ is the actual incoming data from links ending at $n$, and $a_n^{(d)}(t)$ is the amount of commodity-$d$ bits arriving exogenously to the network at node $n$ during slot $t$.
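The queue dynamics with Stop-and-Wait ARQ can be sketched as follows: only links whose ARQ feedback reports success remove or deliver data, while packets on failed links stay in the buffer. The dictionary-based data layout and names are hypothetical.

```python
def queue_update(q, out_rates, in_rates, success, exo):
    """One-slot evolution of a single (node, commodity) queue.
    out_rates / in_rates: dicts mapping link id -> scheduled rate;
    success: dict mapping link id -> binary ARQ feedback X_l (1 = ACK);
    exo: exogenous arrivals during the slot."""
    served = sum(c * success[l] for l, c in out_rates.items())
    received = sum(c * success[l] for l, c in in_rates.items())
    # Failed outgoing transmissions remove nothing (retransmission later);
    # failed incoming transmissions deliver nothing.
    return max(q - served, 0.0) + received + exo
```

For example, if the outgoing link succeeds but the incoming one fails, only the served data leaves the queue and the incoming packet must be retransmitted in a later slot.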
We associate each incoming flow to the network at node with destination , , with a utility function . The utility function takes as argument the average incoming data rate and is nondecreasing, strictly concave and continuously differentiable over the range (elastic traffic, [23]). The utilities describe the satisfaction received by transmitting data from node to .
The aim here is to find an incoming rate vector to maximize the sum of the utilities subject to the constraint that the system remains stable and furthermore explicitly provide the stabilizing scheduling policy . denotes the capacity region of the system, the largest set of for which the system remains stable. Formally we write
(6)   $\max_{\mathbf{r} \geq 0} \sum_{n,d} U_n^{(d)}\left(r_n^{(d)}\right) \quad \text{subject to} \quad \mathbf{r} \in \Lambda$
III Network Capacity Region and Variations with Dropping Packet Decisions
The problem posed so far is similar to the models investigated in [2], [3], [4] and [15]. Due to the occurrence of errors and the use of retransmissions, the capacity region of the model under investigation is reduced and has a different expression compared to the works mentioned.
Theorem 1
The capacity region of the wireless network under study is the set of all nonnegative vectors such that there exist multicommodity goodput flow variables , satisfying

and if

:

, , where and
(7) 
In the above is the size vector of goodput flow variables for all commodities through the network. An optimal policy achieving stability for all vectors within is a variation of the well-known backpressure policy [1], [2], where goodputs replace the rate vectors. This is named here the goodput backpressure policy. We further denote by the goodput region of the network, which equals the convex hull () of given in (7). Comparing this region to the ones appearing in [2] and [14], the rate-power mapping is replaced here by the maximum goodput-power mapping .
Let us now assume that the nodes can decide, in addition to the transmission power and rate over the link , whether the possibly erroneous packet at time slot should be dropped or should be held in the node’s queues and wait to be retransmitted at the next time slot . We use the binary decision variable taking values for dropping decision and for a decision to continue. The single queue evolution will be the same as in (5) where (and similarly ) should be replaced by the expression which equals when and when .
If the decisions on dropping are randomized, with a fixed probability of dropping per link equal to (and hence ), the network capacity region , , will be the same as in Theorem 1 with a modification on the region . In this case we have that
(8) 
. Choice of the vector results in the region of Theorem 1 where no dropping takes place, while for , dropping always takes place after an erroneous transmission and this provides the maximum network capacity region with equal to
(9) 
where is the maximum allowable transmission rate per link. We can then obtain different regions between these two extremes by varying the dropping probabilities per link. To understand why this is important, suppose that a network user transmitting a data flow with source node has a higher data rate than that offered by the actual error-free network capacity region . We may then vary the vector so that the network fits the requirements of the user. Of course the average rate of correctly transmitted packets through the network will not change. What will happen is that, instead of removing part of the user’s packets upon entering the network (admission control), the network will offer per link at least one chance for every packet to be correctly transmitted, hence it will be able to provide unreliable service to the entire required high data rate, with index of reliability .
IV Properties of the Success Function and the Maximum Goodput Function
The success probability function for transmission over link considered in this work has the following properties (the game-theoretic notation is often used, where denotes the entire power vector excluding the -th element).

P.1 is strictly increasing in and the of the function is concave in

P.2 is strictly decreasing and convex in

P.3 is strictly decreasing in

P.4 The of the function has increasing differences for the pair of variables meaning that
(10) where and .

P.5 The of the function has increasing differences for each pair of variables . The differences are constant for all pairs , where and .
The last property actually implies, using [25, Corollary 2.6.1], that the function is supermodular. By property P.4, a positive change in the transmission power has a greater impact on the increase of the (logarithm of the) success probability the higher the transmission rate is. If we e.g. transmit with 16-QAM modulation, an increase of power by will increase much more than in the case of transmission with BPSK.
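The increasing-differences property P.4 can be checked numerically for the Rayleigh/Rayleigh expression: the gain in log-success from a fixed power increase grows with the scheduled rate. A small sketch with illustrative parameters (one interferer, $\theta = 2^{c}-1$, so $c=1,2$ give $\theta=1,3$):

```python
import math

def log_ps(p, theta, interf=0.25, sigma2=0.1, g=1.0):
    """log of the Rayleigh/Rayleigh success probability for one link with
    a single interference term (illustrative parameter values)."""
    return -theta * sigma2 / (g * p) - math.log(1.0 + theta * interf / (g * p))

# Increasing differences (P.4): the benefit of raising the power from
# 1.0 to 2.0 is larger at the higher rate (larger theta).
hi_gain = log_ps(2.0, 3.0) - log_ps(1.0, 3.0)   # c = 2  ->  theta = 3
lo_gain = log_ps(2.0, 1.0) - log_ps(1.0, 1.0)   # c = 1  ->  theta = 1
```

Here `hi_gain` exceeds `lo_gain`, mirroring the 16-QAM versus BPSK comparison in the text.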
Theorem 2
The success probability function for the Rayleigh/Rayleigh fading case, given in (2), satisfies properties P.1-P.5.
Proof.
For the proof, the expressions (11)-(15) of first and second order partial derivatives are required. Specifically, from (11) and (12) the function is increasing in and decreasing in (strictly if , and similarly for ). From (14) the logarithm of the function is concave in . The convexity in P.2 comes directly from the partial derivative of (12) over , which is easily shown to be positive. P.3 is shown in (13), whereas P.4 comes directly by differentiating (13) w.r.t. . Finally, P.5 is a direct consequence of the fact that, in (15), and (see [25, p. 42]). ∎
(11)  
(12) 
(13)  
(14)  
(15) 
Using the above properties we can derive important properties for the maximum goodput function in (4), which as seen in (7) plays a critical role in the definition of the system capacity region.
Theorem 3
If the success probability function satisfies P.1-P.5, then the maximum goodput function in (4) has the following properties (where )

P’.1 is strictly increasing in

P’.2 is strictly decreasing and convex in

P’.3 is nondecreasing in

P’.4 is nonincreasing in
Proof.
Proofs of P'.1-P'.4 are found in Appendix A. ∎
The above properties are illustrated in Fig. 2 and Fig. 3 using a success probability function with the expression in (2) for the 2-user Rayleigh/Rayleigh fading case. These will not be directly used in what follows but are rather useful for the characterization of the stability region and the optimal scheduling policies of such systems. Examples of the goodput region are shown in Fig. 4 and Fig. 5 for two simple network topologies: two transmitting nodes with one receiving node, and one transmitting node with two receiving nodes.
Remark 1
In economic terms, we can interpret the success probability function as the demand of product in a market of firms. In this framework is the product’s price and is the firm’s revenue. Then the firm’s (revenue) = (price) × (demand). The demand is by P.3 a decreasing function of the price, is increasing by P.1 in and decreasing by P.2 in . Then can be interpreted as a variable measuring the product’s quality (or perhaps the money spent by the firm on advertisement) and as the quality (or money for advertisement) of the products of competitors . Then the maximum goodput is the maximum revenue that a firm can obtain by choosing an optimal price , given a vector . By properties P'.1 and P'.2 the maximum revenue is increasing in and decreasing in , whereas by P'.3 and P'.4 the optimal price is also increasing in and decreasing in . Notice that if in P.4 log-supermodularity were replaced by log-submodularity, the optimal price would be a decreasing function of .
V NUM Problem and Dual Decomposition
and is given in (7). The constraint set is convex and compact (see [26, Appendix 4.C]), the objective function is concave and Slater’s condition can be shown to hold, hence strong duality also holds and known distributed algorithms, like the one following, can solve the Lagrange dual problem which involves the vector of dual variables . The Lagrangian associated with the primal NUM problem is denoted by while the dual function yields, due to the linearity of the differential operator (see [23], [4], [15])
and , for the flow with source node and destination . We can interpret as the implicit cost per pair . Thus, the NUM problem is decomposed into:
(a) The input rate control problem
(16)   $r_n^{(d)*} = \arg\max_{r \geq 0}\left[U_n^{(d)}(r) - q_n^{(d)}\, r\right] = \left(U_n^{(d)\prime}\right)^{-1}\!\left(q_n^{(d)}\right)$
solved for each commodity flow at the incoming nodes independently. Observe that by assumption is continuous and monotone decreasing in (thus a bijection), so the inverse of the function exists. Since is strictly concave, the solution is unique for each .
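For instance, with the logarithmic utility $U(r) = \log r$ the marginal utility is $1/r$, so the maximizer of $U(r) - qr$ is simply $r^{*} = 1/q$. A minimal sketch of this local computation (the inverse marginal utility is passed in as a function; names are illustrative):

```python
def input_rate(q, u_prime_inv):
    """Input rate control: since U is strictly concave, the maximizer of
    U(r) - q*r is r* = (U')^{-1}(q). u_prime_inv is the inverse of the
    marginal utility, supplied by the source."""
    return u_prime_inv(q)

# Log-utility example: U(r) = log r  =>  U'(r) = 1/r  =>  (U')^{-1}(q) = 1/q.
rate = input_rate(2.0, lambda q: 1.0 / q)
```

Each source solves this independently, using only its own implicit cost $q$, which is what makes part (a) of the decomposition fully local.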
(b) The scheduling problem
(17)   $\max_{\mathbf{p} \in \Pi,\ \mathbf{c} \in \mathcal{C}}\ \sum_{l} c_l P_s^l(\mathbf{p}, c_l)\, w_l, \qquad w_l = \max_{d}\left[q_{b(l)}^{(d)} - q_{e(l)}^{(d)}\right]^{+}$
Through each link the commodity is scheduled to be routed with goodput rate and . This is the well-known backpressure policy [1]. The solution of (17) further provides the optimal multicommodity goodput flow variables . The optimal solution described is very similar to the DRPC policy in [2].
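The per-link commodity choice of the backpressure rule can be sketched as follows, assuming queue backlogs stored per destination at the origin and end nodes of the link (a hypothetical data layout):

```python
def backpressure_choice(q_origin, q_end):
    """For one link, pick the commodity with the largest differential
    backlog (the goodput backpressure rule). Queues are dicts keyed by
    commodity/destination; the link weight is the positive part of the
    maximal backlog difference."""
    best_d = max(q_origin, key=lambda d: q_origin[d] - q_end.get(d, 0.0))
    weight = max(q_origin[best_d] - q_end.get(best_d, 0.0), 0.0)
    return best_d, weight
```

The returned weight is the multiplier of the link goodput in (17); a zero weight means the link should stay idle for every commodity.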
If we can solve (17) in a distributed manner, then algorithms can be provided that also solve the dual problem distributively and converge to the optimal average incoming rate vector and average price vector . The dual problem can be solved by the subgradient method. The prices for each node-destination pair are stepwise adjusted by
(18)   $q_n^{(d)}(t+1) = \left[q_n^{(d)}(t) + \alpha\left(r_n^{(d)}(t) + \sum_{l \in \mathcal{I}(n)} \mu_l^{(d)}(t) - \sum_{l \in \mathcal{O}(n)} \mu_l^{(d)}(t)\right)\right]^{+}$
In the above, is a positive scalar step size, denotes the projection onto the set , and for each the values and are calculated by solving problems (16) and (17) respectively, using prices . As noted in the aforementioned works, in practice a constant step size is used for implementation purposes, although the convergence of the algorithm is guaranteed for . For constant step sizes, statistical convergence to is guaranteed, as shown in [15, Def. 1, Th. 2].
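A toy single-queue illustration of the subgradient price update: the price moves by the constraint violation (input rate minus goodput service) and is projected onto the nonnegative orthant. The log-utility source and the fixed service rate are illustrative assumptions; the fixed point balances input rate and service.

```python
def dual_subgradient(q0=2.0, mu=1.0, alpha=0.1, iters=2000):
    """Constant-step-size subgradient iteration on the dual variable q
    for one queue: q <- [q + alpha*(r(q) - mu)]^+, where r(q) = 1/q is
    the log-utility rate response and mu the (fixed) goodput service."""
    q = q0
    for _ in range(iters):
        r = 1.0 / q                       # input rate control response
        q = max(q + alpha * (r - mu), 0.0)  # projected price update
    return q
```

Starting from q0 = 2.0 the price decreases monotonically toward the balancing value q* = 1, at which the source rate r(q*) equals the service rate mu, illustrating the statistical convergence discussed above.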
VI The Scheduling Problem
VI.A Relaxation
As was mentioned previously, it is very important that the problem in (17) is solved in a distributed manner. To this aim the theory of supermodular games can be used. We make the following three assumptions.

Assumption 1: Each origin node chooses a single end node to transmit to.

Assumption 2: Each node can transmit and receive at the same time

Assumption 3: Fixed scheduled rates per link are considered.
Specifically, the last assumption constrains the generality of the initial model but is necessary for the approach that follows. Variable scheduled rates would involve a joint maximization over power allocations and rates. This would complicate the analysis, but it is a rather important topic for future research. The maximization in (17) can be written as
(19)  
where (a) comes from the fact that the objective function is linear and the supporting hyperplanes to the sets and are the same, while (b) follows from Assumption 1. The latter simplifies the problem to a weighted sum maximization problem with a number of summands equal to the number of nodes and allows the formulation of a noncooperative game in the following subsections, where each node decides independently over its transmitting power through a single chosen link. The capacity region of the system in Theorem 1 is of course reduced. An important question is how each node chooses the optimal single link to transmit over.
The links with node as origin form the connectivity set of node . The optimal link is obviously the one which provides the maximum weighted goodput to the above summation, for a given power allocation .
Departing briefly from the main line of the analysis, we end this subsection with a heuristic suggestion for an almost optimal choice of a single receiver node, using low information exchange between the network nodes. Applying Markov’s inequality in (1),
Suppose that the end node of each link measures the received level of interference, the latter denoted by . This is no longer a random variable with unknown realization but rather a known deterministic quantity. The right-hand side then reduces to
(20) 
This is an upper bound on the actual error probability. The process of the suboptimal choice is then as follows. Each destination node of links belonging to the connectivity set of node informs the origin node over , and afterwards chooses to transmit over the link with the maximum ratio (20), since , and are known to .
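The heuristic receiver choice can be sketched as below. The exact ratio being maximized is an assumption for illustration: own path gain times power over threshold times the measured noise-plus-interference level, i.e. the reciprocal of a Markov-type outage bound. All names are hypothetical.

```python
def pick_link(candidates):
    """Heuristic single-receiver choice: each candidate end node reports
    its measured noise-plus-interference level; the origin node transmits
    over the link whose outage upper bound is smallest, i.e. the one
    maximizing g*p/(theta*(noise + interference)) (assumed ratio form).
    candidates: dict mapping link id -> (g, p, theta, noise_plus_interf)."""
    def ratio(link):
        g, p, theta, npi = candidates[link]
        return g * p / (theta * npi)
    return max(candidates, key=ratio)
```

Only one scalar per candidate link needs to be fed back, which is the "low information exchange" advantage of the heuristic.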
An alternative way to choose a single receiver node could be to assign to each element of the connectivity set a probability, with sum equal to one per transmitting node; the choice will then be a random process.
VI.B Optimality Conditions
Using the Karush-Kuhn-Tucker (KKT) optimality conditions and observing that the active inequality constraint gradients are linearly independent [27, pp. 315-317], all feasible vectors are regular and we have the following necessary conditions for to be a local maximizer of the objective function in (19).
(21) 
for each and the complementary slackness conditions satisfy
(22) 
is the Lagrangian of the problem in (19). The conditions are only necessary and not sufficient. We remark here that if the objective function were concave, the duality gap would be zero and any local maximizer would also be global for the problem at hand. In this case the conditions would also be sufficient. Unfortunately, this generally does not hold for the specific objective function.
Divide (21) and (22) by (which is definitely positive if we choose ) and then, approaching the problem similarly to [16], set
VI.C A Supermodular Game
If we view as the price charged by user to user for affecting its goodput by creating interference, we can approach the solution to the optimal power allocation problem in a distributed fashion with the use of game theory. We denote the noncooperative game by the triple where are the players, is the set of feasible joint strategies and is the payoff function for user .
We distinguish between two types of players. First, the power players who belong to the set , each one of which represents a node and the set of feasible joint strategies is identical to the set of feasible power allocations. Their payoff function equals
(28) 
We often set to emphasize the dependence of on the sum instead of the individual prices. The best response correspondence for player is the set
(29) 
where .
Second, the price players who belong to the set with cardinality . The feasible set of strategies for player is
(31) 
and is the joint feasible set.
A Nash equilibrium (NE) for the game is defined as the set of power vectors and price vectors with the property for every and every
(32) 
Hence belongs to the best response correspondence of player , , given the equilibrium prices, whereas belongs to the best response correspondence of player given the equilibrium powers.
The existence and uniqueness of the NE when the prices do not take part as players in the game has been proven in [28, Th. III.1] under mild assumptions on the problem parameters, usually satisfied in practice. In our case, however, with the price players included, the uniqueness of a Nash equilibrium is not guaranteed. We can nevertheless make use of the theory of supermodular games, exploiting the structure of the payoff function in (28), to find algorithms that converge to one of the Nash equilibria. We first give the definition of a supermodular game from Topkis [25].
Definition 1
A noncooperative game with players , each having strategy belonging to the feasible set of strategies , is supermodular if the set of feasible joint strategies is a sublattice of and for each the payoff function is supermodular in player ’s strategy and has increasing differences for all pairs , , .
Theorem 4
The noncooperative game with power players and price players is a supermodular game [25, p.178]. Furthermore, the set of equilibrium points is a nonempty complete lattice and a greatest and least equilibrium point exist.
Proof.
See Appendix B. ∎
Having proved that the problem at hand has the desired properties so that supermodular game theory can be applied, we show in the following that the Nash equilibria of the game are exactly the power allocations that satisfy the KKT necessary optimality conditions of the original weighted sum maximization problem.
Theorem 5

The Nash equilibria of the game coincide with the power allocations satisfying the KKT necessary optimality conditions (21)-(22) of the problem in (19).
Proof.
See Appendix C. ∎
The above theorem is rather important because it shows that the formulated game leads to one of the solutions of the scheduling problem. If the objective function in (19) is concave, then the NE is also unique and the game converges to the unique global maximizer. The suboptimality of the proposed scheme in the current work thus lies solely in the fact that the KKT conditions are only necessary but not sufficient. If we can define the region of for which the objective function is concave and restrict the feasible power allocations to it, the suggested distributed solution is the optimal one. This can be a topic for future investigations.
VI.D The Scheduling Algorithm
In the current paragraph we provide an algorithm which updates, for each player, the power allocation and the price . Starting from any initial point within the joint feasible region, the algorithm will eventually converge to a NE bounded componentwise by the greatest and least NE. It is related to the Round-Robin optimization for supermodular games [25, Ch. 4.3.1], versions of which are suggested in [16] and [17].
The algorithm has two phases per iteration and is given in Table I. During the power update phase, the best response of each user is calculated by maximizing (28), given fixed prices and the opponents’ decisions .
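The power update phase can be sketched as a grid search best response. The payoff form below (weighted goodput minus interference charges) is an assumed instance of (28), and the grid resolution, gain matrix and all names are illustrative.

```python
import math

P_MAX = 2.0
GRID = [i * P_MAX / 50 for i in range(51)]  # discretized power strategies

def goodput(powers, i, gains, theta, sigma2):
    """Rayleigh/Rayleigh success probability of player i's link, times a
    unit scheduled rate (Assumption 3: fixed rates)."""
    own = gains[i][i] * powers[i]
    if own == 0.0:
        return 0.0
    ps = math.exp(-theta * sigma2 / own)
    for k, pk in enumerate(powers):
        if k != i:
            ps /= 1.0 + theta * gains[i][k] * pk / own
    return ps

def best_response(powers, i, prices, w, gains, theta=1.0, sigma2=0.1):
    """Power update phase for player i: maximize (an assumed payoff of
    the form) backpressure weight * goodput minus the interference
    charges levied by the other players' prices."""
    def payoff(p):
        trial = powers[:i] + [p] + powers[i + 1:]
        charge = p * sum(prices[j] * gains[j][i]
                         for j in range(len(powers)) if j != i)
        return w[i] * goodput(trial, i, gains, theta, sigma2) - charge
    return max(GRID, key=payoff)
```

In a Round-Robin sweep each player applies `best_response` in turn with the current prices held fixed, before the price update phase of the next iteration.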
During the price update phase each user calculates