
Distributed Power Control Algorithms in Wireless Networking

Project Report

EE 359 Wireless Communications Winter 2009

Jeffrey Mounzer Stanford University jmounzer@stanford.edu

Advisor: Professor Nick Bambos

Table of Contents

Abstract ............................................................... 3
1  Introduction ........................................................ 4
2  Literature Survey ................................................... 5
   2.1  Wireless Network Model and the Foschini-Miljanic DPC Algorithm . 5
   2.2  The Backlog-Driven Distributed Power Control Approach ........... 7
        2.2.1  Mobile power management for wireless communication networks ... 7
        2.2.2  Power-induced time division on asynchronous channels ...... 8
        2.2.3  Power-Controlled Multiple Access Schemes for Next-Generation Wireless Packet Networks ... 8
        2.2.4  Distributed Backlog-Driven Power Control in Wireless Networking ... 12
3  Simulation of Distributed Power Control Algorithms .................. 13
   3.1  Simulation Design .............................................. 13
   3.2  Results ........................................................ 14
4  Future Work ......................................................... 17
   4.1  Simulation Extensions .......................................... 17
   4.2  Proving the Structural Properties of Backlog-Driven DPC ........ 17
   4.3  Design of Stronger DPC Algorithms .............................. 18
5  Conclusion .......................................................... 18
6  References .......................................................... 19

Abstract
In wireless networks, transmitter power control can play an important role in improving a number of key performance parameters, including energy usage, network capacity, and network reliability. As wireless ad hoc networks become more prevalent, it is critical that power control algorithms operate in a distributed fashion, since centralized network controllers are rarely available for such systems. This project explores the evolution of distributed power control (DPC) algorithms over the last 15 years through a literature review and simulations, and it examines future research directions in this field. In particular, we consider the emergence of backlog-aware algorithms for DPC, which exploit the tradeoff between power and delay to induce cooperation between links in the network. The simulations that were conducted provide a comprehensive platform for the evaluation of such algorithms, which was largely lacking in the literature.

Several interesting aspects of backlog-aware algorithms are highlighted by our simulation results. The most powerful result is a validation of the intuition that backlog-aware algorithms, by relying on teamwork between links, can provide significant performance gains compared to competitive algorithms such as the seminal Foschini-Miljanic approach. These gains are particularly large when the network is under duress: either when the load is too great for the links to clear their queues, or when the cross-link interference becomes large. In addition to presenting and discussing the simulation results, this work also describes the next steps in this line of research, including extending the currently used simulations, overcoming difficulties in the comparison of different algorithms, mathematically proving the observed structural properties of backlog-aware DPC techniques, and designing better algorithms for future systems.

1 Introduction
Transmitter power control can significantly improve a number of critical performance parameters for wireless communication networks; it can serve to minimize energy usage, improve network capacity, and mitigate the effects of cross-link interference [2, 4-5]. For networks with centralized controllers (e.g., a cellular network with a base station), the power control problem is relatively simple. However, in ad hoc networks which lack a central regulator, power control proves to be much more difficult and interesting [5]. In general, it is necessary for each transmitter in an ad hoc network to regulate its own power autonomously (this is called distributed power control, or DPC), since centralized coordination between transmitters is very difficult and susceptible to such problems as having a single point of failure. As wireless ad hoc networks become more prevalent (for example, in wireless LAN technologies [5]), it is important to develop high-performance DPC algorithms to meet the challenges presented by future wireless systems. This report explores the evolution of DPC algorithms over the last 15 years through a literature review and simulations, and it examines future research directions in this field. The starting point for much of the research done in DPC is an algorithm proposed by Foschini and Miljanic [1], which describes a constant signal-to-interference ratio (SIR) approach. At its core, this approach is competitive: each link attempts to continuously maintain its target SIR by overcoming the interference presented by all the other links. The later works considered in this report explore a fundamentally different approach in order to improve upon the Foschini-Miljanic algorithm [2-5].
The unifying theme of these later efforts is the power versus delay tradeoff [4], in which performance parameters such as throughput and power consumption are improved by allowing the SIRs of the various links to fluctuate and letting transmitters delay sending information when the interference is high. The algorithms that are based on this approach allow the links to cooperate instead of compete. In particular, as is shown through our simulations, networks designed using these algorithms exhibit a "soft TDMA" effect, in which the links coordinate themselves to take turns transmitting at high power, even though their power control algorithms are completely distributed. While the papers considered in our survey devote much of their effort to the design of the various DPC algorithms, the literature is much more sparse with regard to evaluating their performance. This provided the motivation for our development of a comprehensive simulation platform with which we can closely examine various performance parameters of DPC algorithms and compare them with each other. This platform, built in MATLAB, allows a large range of network parameters to be modified, including the number of transmitters and receivers and their spatial distributions, packet arrival rates, and channel models. For this project, three algorithms are simulated, taken from the paper by Bambos and Kandukuri entitled "Power-Controlled Multiple Access Schemes for Next-Generation Wireless Packet Networks" [4]. One of these algorithms is a slight modification of the basic Foschini-Miljanic algorithm, which is essentially competitive in nature. The other two algorithms, based on backlog-driven DPC, are more cooperative. Our simulations provide a much more rigorous validation of some of Bambos and

Kandukuri's results than is presented in their paper. Furthermore, our simulations illuminate some important attributes of backlog-driven DPC that are not directly addressed in their results. Our key conclusion drawn from the simulations is that backlog-driven DPC algorithms perform significantly better than the Foschini-Miljanic approach when the network is under duress: either when the load becomes very large or when the cross-link interference is high. These characteristics can be extremely useful for real ad hoc networks because they can prevent network failures in periods of high congestion. The simulation platform and associated results provide a stepping stone to future research in this area. Using the insight gained from our simulations, we can determine which structural properties of backlog-driven DPC are the most prominent and desirable, and we can move forward with both proving these properties mathematically and with using them to design the next generation of DPC algorithms. As we show, there is significant research yet to be done in this field. The remainder of this report is organized as follows. In Section 2, the evolution of DPC algorithms for wireless networks is examined through a literature review, which includes the seminal work of Foschini and Miljanic [1] and a subsequent series of papers that employ backlog-aware DPC approaches. Section 3 describes the simulation that was undertaken as part of this project, including an explanation of the simulation design and a presentation and discussion of the results. Next, we look at future directions for this line of research in Section 4, and Section 5 provides some concluding remarks.

2 Literature Survey
In the literature survey portion of this report, we mainly consider five papers on DPC. Throughout the survey, our focus will be on the power control algorithms and associated results presented in these papers, although there are certainly many other interesting aspects that could be discussed. We begin with a description, based on the concise formulation in [6], of the simple network model that is used throughout papers in this research area, and then proceed to the seminal work of Foschini and Miljanic entitled "A Simple Distributed Autonomous Power Control Algorithm and its Convergence" [1] in Section 2.1. After reviewing this paper, which introduces what we shall refer to as the constant SIR algorithm, we will proceed in Section 2.2 to briefly discuss two papers by Rulnick and Bambos [2,3] that show the beginning of the movement toward backlog-aware DPC. Finally, we will take a look (also in Section 2.2) at works by Bambos and Kandukuri [4] and by Dua and Bambos [5], both of which display the trend toward more sophisticated approaches to backlog-driven DPC through dynamic programming techniques.

2.1 Wireless Network Model and the Foschini-Miljanic DPC Algorithm


In order to establish a standard upon which we can compare various DPC algorithms, we begin this section with a description of a simple wireless network model. This model, or a slight variant, is used in all of the papers that we consider in this report, and it provides the framework for the simulations that

are conducted. Although it is presented in several places (see, for example, [7], Chapter 14.4), we will largely follow the exposition provided in [6]. Once this framework is established, we will describe the operation of the basic Foschini-Miljanic algorithm, which serves as a baseline for all of the other papers we will look at. Consider a channel in which there are N links, and let Gij be the power gain from the transmitter of link j to the receiver of link i. Although the wireless channel typically exhibits path loss, shadowing, and multipath fading as components of this gain [7], all of the papers that we consider in this report take the gain to be deterministic (e.g., a function of path loss only). A commonly used measure of link quality of service is the SIR, which we define as

    R_i = G_ii P_i / ( Σ_{j≠i} G_ij P_j + η_i )                          (1)

where P_i is the transmit power of link i and η_i > 0 is the thermal noise power at the receiver of link i [6]. Let's also assume that each link has a minimum SIR threshold that must be met in order to meet its quality of service requirements; call the threshold for link i γ_i. We will see that in the Foschini-Miljanic algorithm, the objective is to simultaneously satisfy all of these thresholds for all links in the network, whereas for the backlog-aware techniques, this requirement is relaxed to achieve other performance benefits. Setting up our basic equation in matrix form, the Foschini-Miljanic objective is to achieve (I - F)P ≥ u, with P > 0 component-wise, where P = (P_1, P_2, ..., P_N)^T is the vector of transmitter powers,

    u = ( γ_1 η_1 / G_11, γ_2 η_2 / G_22, ..., γ_N η_N / G_NN )^T        (2)

is a vector of normalized noise and interference powers, and

    F_ij = ( γ_i G_ij / G_ii ) · 1{i≠j}    for i, j ∈ {1, 2, ..., N},    (3)

where 1{i≠j} is the indicator function which equals 1 if i≠j and 0 if i=j [6]. The matrix F is nonnegative element-wise and irreducible, so by the Perron-Frobenius theorem, the maximum modulus eigenvalue of F, ρ_F, is real, positive, and simple, and its corresponding eigenvector is component-wise positive. Furthermore, the existence of a vector P > 0 satisfying (I - F)P ≥ u is equivalent to ρ_F < 1 and also to the fact that (I - F)^{-1} exists and is component-wise positive [6]. If any of these conditions hold, then the power vector

    P* = (I - F)^{-1} u                                                  (4)

is the Pareto optimal solution which satisfies all the minimum SIR thresholds simultaneously [6]. By Pareto optimal, we mean that it is the solution which minimizes the transmit power of each user: clearly, a desirable situation.
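The feasibility condition and the Pareto-optimal power vector of Equation (4) are easy to compute numerically. The following short sketch is our own illustration (the gains, SIR targets, and noise powers are arbitrary choices, not values from any of the surveyed papers):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                   # number of links
gamma = np.full(N, 1.5)                 # minimum SIR thresholds
eta = np.full(N, 0.1)                   # receiver thermal noise powers

# Illustrative gain matrix: unit direct gains, weak random cross gains
G = 0.01 * rng.random((N, N))
np.fill_diagonal(G, 1.0)

# F_ij = gamma_i * G_ij / G_ii for i != j, zero on the diagonal (Equation 3)
F = gamma[:, None] * G / np.diag(G)[:, None]
np.fill_diagonal(F, 0.0)

u = gamma * eta / np.diag(G)            # normalized noise vector (Equation 2)

rho_F = np.max(np.abs(np.linalg.eigvals(F)))   # Perron-Frobenius eigenvalue
if rho_F < 1:
    # Pareto-optimal powers (Equation 4); component-wise positive by theory
    P_star = np.linalg.solve(np.eye(N) - F, u)
    print(rho_F, P_star)
else:
    print("infeasible: no power vector meets all SIR targets")
```

Solving (I - F)P = u makes every link's SIR exactly equal to its threshold, which is why P* is the minimal-power feasible point.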

The landmark result obtained by Foschini and Miljanic can be summarized as follows: as long as there exists a vector P* under the conditions described above (equivalently, as long as ρ_F < 1), then a simple iterative DPC algorithm for the links in the network will always converge exponentially fast to P* [1]. In their own words, Foschini and Miljanic describe the algorithm as: "Each user proceeds to iteratively reset its power level to what it needs to be to have acceptable performance as if the other users were not going to change their power level. Yet the other users are following the same algorithm and, therefore, are changing their power levels." [1] Rather remarkably, as Foschini and Miljanic go on to show, this iterative approach results in exponentially fast convergence to P* anytime such a vector exists for the overall system. In equation form, for each link i, the algorithm can be written as

    P_i(k+1) = ( γ_i / R_i(k) ) · P_i(k)                                 (5)

where we use the discrete-time index k, R_i(k) is the SIR of link i at step k, and we recall that γ_i is the minimum SIR threshold for link i. It is important to observe that each link makes its power decision for the next step autonomously: the next power level chosen is simply a function of the link's individual SIR target, its current power level, and its own observed SIR. If there does not exist a vector P* which solves the equation (I - F)P ≥ u according to the requirements above, then this algorithm will result in all of the transmitter powers diverging to infinity [1]. This simple algorithm, which we shall refer to for convenience as the constant SIR algorithm (since its goal is to maintain a constant link SIR), provides an often-used baseline for the evaluation of other DPC algorithms. For our purposes in the simulations below, we use a slightly modified form of this algorithm (described in Section 2.2.4) to compare it with backlog-driven DPC algorithms.
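The iteration of Equation (5) is straightforward to simulate. A minimal sketch follows, with illustrative gains and targets of our own choosing (not values from [1]):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
gamma = np.full(N, 1.5)              # per-link SIR targets
eta = np.full(N, 0.1)                # receiver noise powers
G = 0.01 * rng.random((N, N))        # weak cross gains (illustrative)
np.fill_diagonal(G, 1.0)             # unit direct gains

P = np.ones(N)                       # arbitrary positive starting powers
for k in range(50):
    interference = G @ P - np.diag(G) * P + eta   # sum_{j != i} G_ij P_j + eta_i
    R = np.diag(G) * P / interference             # current SIRs (Equation 1)
    P = (gamma / R) * P                           # Equation (5): scale toward target

print(P)   # converged to the Pareto-optimal power vector
```

Because each update equals P(k+1) = F·P(k) + u, the iteration contracts whenever ρ_F < 1, which is what makes the fully autonomous updates converge.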

2.2 The Backlog-Driven Distributed Power Control Approach


Now, we examine the progression from the constant SIR algorithm to more recent research in backlog-driven DPC by reviewing four papers ranging in publication date from 1997 to 2007. The first two of these papers [2,3] clearly show the beginnings of the movement toward backlog-aware DPC, while the latter two [4,5] represent more sophisticated attempts to design backlog-driven algorithms.

2.2.1 Mobile power management for wireless communication networks


In [2], Rulnick and Bambos address the problem of conserving energy in wireless communication networks. The stated objective of this work is to guarantee an average transmission rate (used here as the primary quality of service metric) while keeping power consumption as low as possible. They set up

the problem as an optimization problem, seeking a rule which minimizes energy consumption while maintaining a fixed transmission rate. The problem formulation assumes a single link in stationary, unresponsive interference. This assumption plays a significant role in determining their solution, but it is clearly undesirable for wireless ad hoc networks, in which it is highly likely that the interference will be responsive to the actions of each link (for example, this is the case in the Foschini-Miljanic formulation). While the authors recognize this, they proceed to use the solution that they obtain for this special case to suggest a distributed power control algorithm called DPMA (dynamic power management algorithm), and they test the results of this algorithm in a responsive interference environment. The key aspect of DPMA is that the links behave aggressively (i.e., transmit at higher power) during low interference but back off during high interference. Although DPMA is not backlog-aware, we mention it because this type of behavior is a general characteristic of the backlog-driven DPC algorithms that we will consider, and it shows a marked difference in approach from the constant SIR algorithm we discussed above. Perhaps most significantly, this paper relaxes the requirement for a constant minimum SIR and allows the SIRs to fluctuate; this flexibility is a core element of the later backlog-driven DPC algorithms. It should be noted that there are hard-delay-constrained applications, such as voice, for which it is important to maintain a constant minimum SIR, and accordingly approaches which allow the SIR to fluctuate are perhaps less useful in these cases. However, for many modern and emerging wireless networks, data traffic is extremely important, and data-driven applications are not as delay-constrained as voice; therefore, the backlog-driven algorithms we examine have a high level of applicability for real systems.

2.2.2 Power-induced time division on asynchronous channels


Two years after [2] was published, Rulnick and Bambos authored a paper entitled "Power-induced time division on asynchronous channels" [3], which builds on the work in [2] and explicitly introduces the idea of backlog-sensitive power management. In this work, the authors explore how distributed power control methods can induce TDMA-like behavior in a wireless network, a feature that is prominently displayed in the algorithms we consider later in this report. This paper captures the essence of the backlog-driven DPC problem, which can be summed up as follows: is it possible, through a fully distributed algorithm, to induce the links in a wireless network to cooperate with each other instead of competing against each other, so as to achieve better overall network performance? The authors proceed to design a backlog-sensitive DPC algorithm, which they call modified DPMA, and although the algorithm does not perform particularly well, they show that it has promise as a method to induce TDMA-like effects in a wireless network [3].

2.2.3 Power-Controlled Multiple Access Schemes for Next-Generation Wireless Packet Networks

The next paper we consider is [4], which provides the basis for the simulations conducted in this project. The DPC algorithms presented in this work are the first that we have looked at that can truly be described as backlog-driven. The authors use a dynamic programming approach to look at the "power vs. delay dilemma" [4], which they describe in the following manner. In a wireless packet communication network, when a transmitter observes high interference in the channel, it realizes that it will require high power in order to overcome the interference and successfully transmit a packet to its receiver. Accordingly, it might consider backing off and waiting to transmit until the interference is lower, while buffering its incoming traffic, in essence trading increased delay for reduced power. However, when it has backed off, its buffer begins to fill up, putting pressure on the transmitter to be aggressive in order to rapidly reduce its backlog, trading increased power for reduced delay [4]. The authors' method for designing backlog-driven DPC algorithms begins by using dynamic programming to find an algorithm which minimizes the average overall incurred cost when a single communication link operates in a channel with extraneously driven random interference (the same basic idea used in [2], as we described above). The overall cost which they attempt to minimize is composed of a power cost and a backlog cost, which are increasing functions of the link power and backlog, respectively. They proceed to solve for the optimal power which minimizes this overall cost for different functional forms of the probability of a successful transmission within a time slot. Then, they leverage the observed structural properties of these solutions to design backlog-driven DPC algorithms. Their key observations are that under low interference, it is effective to transmit and thereby reduce the backlog/delay cost by incurring a moderate power cost.
However, as the interference goes up, the probability of a successful transmission decreases, and it becomes preferable to incur some delay cost to avoid an excessive power cost. Interestingly, they observe that this tradeoff process is ubiquitous across different functional forms of the probability of a successful transmission, s(p,i), which is only constrained to be increasing in p (power) and decreasing in i (interference). Furthermore, as the backlog increases, the optimal behavior across different functional forms of s(p,i) is for the link to transmit more aggressively (higher power) [4]. Although these observations are similar to the ones made in [2], they come from a fundamentally different mathematical approach (dynamic programming) to setting up and solving the problem. The authors use the observations they derive from the process of obtaining a solution to the extraneous interference case to propose two different classes of backlog-driven DPC algorithms, which we describe here. Consider a responsive interference environment with multiple links, where, for example, if a link increases its transmit power, the interference observed by all the other links increases. Suppose also that we operate in slotted time, indexed by k, and that each link can observe only its own packet queue (current backlog), its last transmitted power level, and the collective interference level that the entire network induced on it in the last time slot. A packet arrives into each link's transmitter queue in each time slot with probability λ (for the simulations below, we assume λ is drawn from a uniform [0,1] distribution), and packet arrivals are statistically independent of each other. In the kth time slot and for a particular link, let bk be the transmitter queue backlog, ik be the interference observed at the link

receiver, and pk be the link transmit power. Finally, let G be the power gain from the link's transmitter to its receiver [4]. Then the family of algorithms entitled PCMA-1 has each link in the network update its power autonomously according to the following scheme:

    p_{k+1} = δ · ( γ i_k / G )    if γ i_k / G < X(b_k)
    p_{k+1} = 0                    otherwise                             (6)

where X(b) is an increasing function of the transmitter queue backlog, δ ≥ 1, and γ > 0. In words, each link transmits at δ times the power required to reach the SIR level γ, but backs off entirely whenever that required power exceeds the backlog-dependent threshold X(b). By modifying X(b), δ, and γ, an entire family of algorithms can be generated [4]. The second family of algorithms presented in this work, entitled PCMA-2, is characterized by the equation:

    p_{k+1} = β · ( X(b_k) - log(i_k / G) )    if log(i_k / G) < X(b_k)
    p_{k+1} = 0                                otherwise                 (7)

where β > 0. This family of algorithms is parameterized by X(b) and β. A simulation of PCMA-2 for two interfering links with X(b) = b+4 and β = 1 is shown in Figure 1 below. It is clear that the two links are taking turns using the channel, alternating between periods of high and low transmit power. This can be thought of as a "soft TDMA" type of behavior, where the links independently configure themselves to behave like a TDMA channel.

Figure 1: Example Power Evolution of PCMA-2 with Two Links Showing Soft-TDMA Effect
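A miniature version of this kind of experiment is easy to set up. The two-link sketch below is our own construction, not code from [4]: it uses a PCMA-1-style threshold rule, and the gains, noise level, SIR target, arrival rate, and packet-success model are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 5000
G, g, eta = 1.0, 0.7, 0.1         # direct gain, cross gain, noise (assumed)
gamma, delta, lam = 2.0, 1.0, 0.4  # SIR target, aggressiveness, arrival prob.
X = lambda b: b + 4                # backlog threshold function, as in [4]

p = np.zeros(2)                    # current transmit powers
b = np.zeros(2, dtype=int)         # current queue backlogs
hist = np.zeros((T, 2))
for k in range(T):
    i = g * p[::-1] + eta                     # interference each link saw last slot
    q = gamma * i / G                         # power required to hit the SIR target
    p = np.where(q < X(b), delta * q, 0.0)    # transmit, or back off if too costly
    R = np.where(p > 0, G * p / i, 0.0)       # resulting SIRs
    served = (b > 0) & (rng.random(2) < 1 - np.exp(-R))   # assumed success model
    b = b - served + (rng.random(2) < lam)    # departures, then Bernoulli arrivals
    hist[k] = p
```

Plotting hist shows qualitatively similar turn-taking to Figure 1: whenever both links transmit, their required powers escalate, and the link whose backlog threshold is lower backs off first, so the links share the channel without any central coordination.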

The authors then proceed to evaluate the performance of these two algorithms in comparison with a modified version of the constant SIR algorithm, which is, for each link i:

    p_{k+1} = ( γ_i / R_k ) p_k    if b_k > 0
    p_{k+1} = 0                    if b_k = 0                            (8)

where we recall from above that γ_i is the minimum threshold SIR for link i and R_k is the SIR of link i at time k. We note that this algorithm does in fact depend on the backlog, but only in a trivial sense (see footnote 1). While the authors put much of their effort into setting up the problem and designing PCMA-1 and PCMA-2, the simulation that they conduct to obtain their performance evaluation is rather simplistic and leaves some room for doubt about the benefits of PCMA-1 and PCMA-2 in comparison to the constant SIR algorithm; it is this fact that was the major motivating factor for the simulations conducted in this project. They use a 4 x 4 square lattice of 16 square cells with the edges wrapped around to create a torus (to avoid boundary effects), and each square has a single link located at its center. A packet arrives at the transmitter queue of each link with probability λ, and a packet is successfully transmitted for link n in each time slot with probability

    s(p_n^k, i_n^k) = 1 - e^{-R_n^k}                                     (9)

where again R_n^k is the SIR of link n at time k. For this simulation, they use δ = β = γ = 1. Note that for the constant SIR algorithm (when it converges), the success probability of each link will be constant and a function of the minimum threshold SIR. The constant SIR algorithm target for all links is set to 1.5. The authors use a simple path loss model for the gains Gij, where

    G_ij = 1 / r_ij^4                                                    (10)

and r_ij is the distance from the transmitter of link j to the receiver of link i (note that for their simulation, the transmitter and receiver of link i are assumed to be in the same location, and the gain Gii is taken to be 1) [4]. Finally, the function X(b) in PCMA-1 and PCMA-2 is taken to equal b+4. The results of this simulation are conveyed through a plot which shows average backlog vs. average load and is reproduced as Figure 2. The curve that turns vertical furthest to the left is for the constant SIR algorithm, while the other two curves are for PCMA-1 and PCMA-2. Although this graph is used to claim a 20 percent improvement in throughput from using PCMA-1 and PCMA-2 (since they can support an average load of .5 before blowing up, as opposed to the average load of .4 that can be supported by the constant SIR algorithm), these results actually do not conclusively show the benefits of their proposed backlog-driven algorithms, for several reasons.

Figure 2: Taken from [4]

Footnote 1: Note that this modification is quite practical in the case of data-driven networks, since it is unnecessary to waste transmit power if there is no data to be sent.

One of the key issues with their claim is that the target SIR for the constant SIR algorithm appears to be strictly a function of the chosen link spatial distribution (see footnote 2), and their simulation assumes a direct relationship between the probability of a successful transmission and the link SIR. Therefore, changing the SIR target to a larger value will directly shift the constant-SIR curve to the right, potentially erasing any of their performance gains. Secondly, the transmitter and receiver spatial distribution is highly unrealistic, which throws into doubt the broader applicability of the observed performance gains. Furthermore, the relative power consumption of the algorithms is completely neglected, and this parameter is obviously one of importance for the evaluation of backlog-driven DPC. Therefore, it was clear after an examination of their results that there remained significant work to be done to obtain meaningful comparisons of these algorithms, which led to the simulations created for this project.

2.2.4 Distributed Backlog-Driven Power Control in Wireless Networking


The last paper we will briefly consider is [5], published in 2007 by Dua and Bambos. We would mainly like to highlight the approach taken in this paper, which is very interesting and perhaps indicative of the direction that future work in this field will take. Once again, dynamic programming is the main mathematical tool used in their design of a backlog-driven DPC algorithm, but this time, the authors begin by considering a situation in which there are two interfering links and solving for the optimal power control solution, a marked difference in approach from the previous works, which solved for optimal solutions in the extraneous interference case. Although the solution obtained is only for two links, because of the complexity involved in using a higher number, this is still a step in the right direction as far as obtaining a truly optimal backlog-driven DPC algorithm. Another intriguing aspect of this approach is the attempt to design an "Oracle", what the authors conceive of as the optimal solution which would be obtained if there were a centralized controller, and the use of this solution as a benchmark with which to compare the performance of their DPC algorithm, which they call BDD (backlog-driven distributed power control). They show that the Oracle naturally induces a load balancing effect, in which a transmitter with a large backlog is assigned a transmit power to reduce its backlog rapidly, and one with low backlog is assigned a lower transmit power [5]. The third feature of this paper that we would like to call attention to is that their method of deriving BDD incorporates a quasi-game-theoretic approach, because the transmitter behavior of link 1 is determined for each of three possible strategies chosen by link 2, roughly categorized as aggressive, backoff, and static. There is much potential in this line of research, and it will certainly be interesting to see how it will evolve in the coming years.

Footnote 2: It appears that the authors set the constant SIR target to be near the maximum for which the constant SIR algorithm will converge, given their chosen spatial distribution of links. However, this still means that this target SIR, and therefore the average load that can be supported by their simulation of the constant SIR algorithm, is a function of this particular spatial distribution.


3 Simulation of Distributed Power Control Algorithms


In this section, we discuss the simulation that was conducted for this project. The motivation for this work has already been alluded to above: we are seeking a more comprehensive, accurate, and useful evaluation of the properties of backlog-driven DPC algorithms. In particular, we are looking to improve upon the simulation done in [4]. With this in mind, a large-scale, robust, and easily configurable simulation was designed, and the results provide some interesting insights into the advantages of PCMA-1 and PCMA-2 [4] over the constant SIR algorithm.

3.1 Simulation Design


We begin by explaining the structure of the simulation that was performed. One of the major design considerations was that we wanted to be able to simulate a very large, randomly dispersed set of links (i.e., transmitter/receiver pairs), whose number can be chosen by the user. Transmitters are distributed uniformly over a two-dimensional playing field, and the receiver associated with each transmitter is placed at (r, θ) with respect to its transmitter, where r is the distance from the transmitter, chosen randomly from a Gaussian distribution, and θ is the angle, chosen from a uniform distribution on [0, 2π). We should note that in order to avoid boundary effects in our performance calculations, we actually place many more links in the playing field than the number of links we intend to use, and then choose the user-defined number of links for performance calculations from the middle of the playing field3. A sample of this spatial distribution is included in Figure 3. Other important parameters that can be set by the user include the average arrival rate of new packets to be transmitted (arrival probabilities are drawn from a Bernoulli distribution, as in [4]), the various configurable parameters in the PCMA-1 and PCMA-2 schemes such as X(b), and the target SIR of the constant SIR algorithm. A version of the simulation code is available at www.stanford.edu/~jmounzer/359.

Figure 3: Transmitters and Receivers in the Two-Dimensional Playing Field. Transmitters are represented by blue circles, and receivers are represented as orange squares.

3 This is in contrast to [4], in which boundary effects are avoided by turning the playing field into a conceptual torus. We felt that this method is unnecessarily unrealistic, and it introduces numerous problems, such as potentially double-counting the interference from a neighboring link.
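To make the placement procedure concrete, here is a minimal Python sketch of the link-placement step described above. The function names, field size, and distance parameters are illustrative choices for this sketch, not values taken from the actual simulation code.

```python
import numpy as np

def place_links(n_total, field_size, mean_dist, std_dist, rng=None):
    """Place n_total transmitter/receiver pairs on a square playing field.

    Transmitters are uniform over the field; each receiver sits at (r, theta)
    relative to its transmitter, with r Gaussian and theta uniform on [0, 2*pi).
    """
    rng = np.random.default_rng(rng)
    tx = rng.uniform(0.0, field_size, size=(n_total, 2))
    r = np.abs(rng.normal(mean_dist, std_dist, size=n_total))  # fold negative draws
    theta = rng.uniform(0.0, 2 * np.pi, size=n_total)
    rx = tx + np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
    return tx, rx

def central_links(tx, field_size, n_keep):
    """Pick the n_keep links closest to the center of the field, so that
    performance statistics avoid boundary effects."""
    center = np.array([field_size / 2, field_size / 2])
    dist = np.linalg.norm(tx - center, axis=1)
    return np.argsort(dist)[:n_keep]

# Oversized population (200 links), 75 central links kept for statistics.
tx, rx = place_links(n_total=200, field_size=100.0, mean_dist=2.0, std_dist=0.5, rng=0)
idx = central_links(tx, field_size=100.0, n_keep=75)
```

Keeping only the central 75 of 200 links mirrors the boundary-effect strategy described above, without resorting to the torus construction of [4].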

3.2 Results
A number of simulations were run to test PCMA-1 (Equation 6), PCMA-2 (Equation 7), and the modified constant SIR algorithm (Equation 8) in order to show a variety of different relationships, as described below. Key simulation settings are listed in Table 1. Wherever possible, they were chosen for consistency with the simulation in [4].

Simulation Parameter                                      Parameter Value or Function
Length of simulation                                      25,000 time steps, unless otherwise noted
Probability of successful transmission                    (Equation 10)
Gains Gij                                                 (Equation 9)
Interference on link i
Thermal noise                                             .1
X(b)                                                      b + 4
Number of links considered in performance calculations    75
Total number of links in playing field                    200
, ,                                                       1

Table 1: Simulation settings
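As a rough illustration of how the per-time-step dynamics behind Table 1 fit together, the following Python sketch simulates a single link's backlog under Bernoulli packet arrivals, as in [4]. The success-probability function is passed in as a placeholder, since its actual form (Equation 10) depends on the achieved SIR; the fixed value .7 used in the example matches the constant SIR runs discussed below, but everything else here is a hypothetical simplification.

```python
import numpy as np

def simulate_backlog(T, load, success_prob, rng=None):
    """Single-link backlog dynamics: at each time step a new packet arrives
    with probability `load` (Bernoulli arrivals), and if the queue is
    nonempty a transmission succeeds with probability success_prob(backlog).
    """
    rng = np.random.default_rng(rng)
    b = 0
    history = []
    for _ in range(T):
        b += rng.random() < load                  # Bernoulli arrival
        if b > 0 and rng.random() < success_prob(b):
            b -= 1                                # successful transmission
        history.append(b)
    return history

# Example run: fixed success probability of .7, load .5 -> stable queue.
h = simulate_backlog(T=25_000, load=0.5, success_prob=lambda b: 0.7, rng=1)
```

With load below the success probability the backlog stays small; pushing the load above it makes the backlog grow without bound, which is exactly the behavior examined in the plots that follow.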

Based on these simulations, we can make several observations regarding the relative performance of PCMA-1 and PCMA-2. Perhaps the most dramatic and notable performance improvements of PCMA-1 and PCMA-2 over the constant SIR algorithm are observed when the network is under duress. This is very significant, since real networks are likely to have traffic fluctuations that result in temporary or long-term congestion. For example, one can easily envision a network experiencing a period of extremely heavy traffic, during which the average load nears unity. In this type of situation, the Foschini-Miljanic algorithm has disastrous results: either the transmitter powers all explode if the target SIR is adapted in an attempt to keep up with the load (these simulations showed that this occurs within tens of time steps), or the backlog rapidly increases if the target SIR is not adapted, at a far faster rate than the backlog of PCMA-1 or PCMA-2. In contrast, PCMA-1 and PCMA-2 both exhibit much more robust performance in such stressful situations. By cooperating with each other, the links are able to keep their powers from exploding while managing their backlogs more effectively. These effects are clearly shown in the set of figures below.

In Figures 4 and 5, we show the plots analogous to those in [4], where we set the constant SIR algorithm to achieve a steady-state success probability of .7. As expected, we see in Figure 4 that the average backlog under the constant SIR algorithm is driven to zero when the load is less than .7, while the average backlog grows rapidly when the load is greater than .7 (and goes to infinity as the simulation length goes to infinity). However, the PCMA-1 and 2 curves both perform better than the constant SIR algorithm in terms of average backlog as the load becomes very large. We also observe that while the PCMA-2 curve performs rather poorly in terms of backlog compared to the constant SIR algorithm for average loads between .3 and .8, it uses less power over the same range. Looking at the average power for PCMA-1, we see that for average loads less than .7 it remains very close to the average power of the constant SIR algorithm4, but when the load becomes large, the power consumption goes up. However, the PCMA-1 algorithm is trading increased power for reduced backlog in this situation; over the same region, the constant SIR algorithm has allowed the backlog to explode.
Figure 4: Average Backlog vs. Average Load, with constant SIR algorithm target SIR of 2.33 (curves: PCMA-1, PCMA-2, Constant SIR).

Figure 5: Average Power vs. Average Load, with constant SIR algorithm target SIR of 2.33 (curves: PCMA-1, PCMA-2, Constant SIR).

The next pair of plots, in Figures 6 and 7, provides a slight but illuminating twist on the graph in [4]. This time, we allow the constant SIR algorithm to adapt its SIR target to keep up with the average load, i.e., we set the target SIR so that a success probability higher than the average load is maintained, if possible. Here, we clearly see that there is a hard limit on the feasibility of the constant SIR algorithm. For some average load (in this case, approximately .7), the target SIR becomes unattainable for the links in the network, and as Foschini and Miljanic showed, this results in the link powers rapidly exploding to infinity. This will be the case as the average load increases regardless of the network spatial distribution: at some point, the network under the constant SIR algorithm will not be able to support the requested target SIR. We see that PCMA-1 and 2 vastly outperform the constant SIR algorithm when the load becomes high in this situation. The backlog and power for the constant SIR algorithm immediately explode once the target cannot be reached, whereas PCMA-1 and 2 exhibit a smooth, gradual decay in performance.
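The feasibility limit described here is the classical result of Foschini and Miljanic [1]: the constant-SIR iteration converges if and only if the target SIR is supportable, which reduces to a spectral-radius condition on the normalized gain matrix. The following sketch, using an illustrative two-link gain matrix rather than the report's simulation setup, shows both regimes:

```python
import numpy as np

def fm_iterate(G, gamma_target, noise, steps):
    """Foschini-Miljanic update: each link scales its power by
    (target SIR / achieved SIR). Converges to the minimal feasible power
    vector when the target is feasible, and diverges otherwise."""
    p = np.ones(G.shape[0])
    for _ in range(steps):
        interference = G @ p - np.diag(G) * p + noise  # cross-link power + noise
        gamma = np.diag(G) * p / interference          # achieved SIR per link
        p = (gamma_target / gamma) * p
    return p

def feasible(G, gamma_target):
    """The target is supportable iff the spectral radius of gamma_target * F
    is below 1, where F_ij = G_ij / G_ii off the diagonal and F_ii = 0."""
    F = G / np.diag(G)[:, None]
    np.fill_diagonal(F, 0.0)
    return np.max(np.abs(np.linalg.eigvals(gamma_target * F))) < 1.0

G = np.array([[1.0, 0.1],
              [0.1, 1.0]])  # illustrative two-link gain matrix
p_ok = fm_iterate(G, gamma_target=1.0, noise=0.01, steps=200)   # feasible target
p_bad = fm_iterate(G, gamma_target=20.0, noise=0.01, steps=50)  # infeasible target
```

In the feasible case the powers settle near the minimal power vector, while in the infeasible case they blow up within tens of iterations, consistent with the behavior of the constant SIR algorithm described above.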
4 The linear growth in average power for the constant SIR algorithm in Figure 5 for average loads less than .7 is simply a result of the modification made to the Foschini-Miljanic algorithm for these simulations, namely that no power is spent when there are no packets waiting to be transmitted. After the average load rises above the success probability of .7, the constant SIR algorithm yields a constant average power, as we expect, since the links never clear their queues.


Figure 6: Average Power vs. Average Load with adaptive constant SIR target (curves: PCMA-1, PCMA-2, Constant SIR).

Figure 7: Average Backlog vs. Average Load with varying constant SIR target for each average load (curves: PCMA-1, PCMA-2, Constant SIR).

The last plot we consider is Figure 8, which clearly shows how PCMA-1 and 2 can outperform the constant SIR algorithm when the spacing between links decreases (and consequently, when the interference between links goes up). We see that at a certain link density, the constant SIR algorithm can no longer converge, and all the transmit powers explode to infinity. However, the PCMA-1 and 2 algorithms are able to prevent their powers from exploding at much higher densities through their use of cooperation5. This robustness can be very valuable in cases of high spatial congestion.

Figure 8: Average Transmitter Power after 12,000 time steps vs. Transmitter Spatial Density, for an arrival rate of .7 (curves: PCMA-1, PCMA-2, Constant SIR; density axis logarithmic from 10^-3 to 10^-1).

Finally, we observe that the simulations conducted represent a preliminary attempt at validating and extending the results in [4]. It proves to be surprisingly difficult to compare the algorithms along any particular dimension, such as average power consumption, because of their different structures. For example, a cursory glance at Figure 5 shows that the functional forms of average power with respect to average load are all quite different, which makes their comparison highly nontrivial. Future efforts in this line of research will include finding methods to make these types of comparisons, and we hope that the process of doing so will lead to new insights regarding the algorithms' behaviors.

5 The fluctuations in the PCMA-1 and 2 curves in Figure 8 are due to the fact that the simulation was reset for each density and the links were randomly redistributed. Future work will include taking averages over many simulations so that the curves are smoother, but the current plot is enough to illustrate our main point.


4 Future Work
There is much work yet to be done in this line of research, and we roughly group these future directions into three categories: extending the simulations described above to learn more about the performance of backlog-driven DPC; mathematically proving the structural properties of backlog-driven DPC that we have observed (such as finding the mathematical reasons for the robust behavior when the network is under duress); and using the intuition we have gained to design improved DPC algorithms in the future.

4.1 Simulation Extensions


There are a number of ways that the simulations used in this project can be extended to provide a more complete characterization of the behavior of backlog-driven DPC. For example, it would be very interesting to employ more realistic transmitter and receiver spatial distributions and configurations to better simulate a real wireless environment, as well as to vary the packet arrival process. Furthermore, employing a stochastic wireless channel model (perhaps including shadowing and multipath fading) could lead to new insights.

Another important area for research would be to make the network itself more dynamic by letting links enter and leave and by allowing transmitters and receivers to move around. This would enable us to more accurately characterize properties such as the convergence of the algorithms and the maximum allowable steady-state number of nodes under different algorithms.

Additionally, the PCMA-1 and 2 algorithms are parameterized by a number of different values, which can be changed to induce dramatically different link behavior. Little has been done to explore the optimal values and functional forms for these parameters to achieve a desired performance gain, and further simulations could deepen our knowledge on this front.

Another important direction for extending the simulations is to find better ways to compare the algorithms with each other. As described in Section 3.2, the different natures of the algorithms make it highly nontrivial to compare them along any particular dimension (such as power consumption or average backlog). There remains a significant amount of work to be done in this regard.
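As one possible form of the stochastic channel extension mentioned above, a gain matrix combining distance-based path loss with log-normal shadowing could be sketched as follows. The path-loss exponent and shadowing spread here are hypothetical placeholders, not parameters used anywhere in this project.

```python
import numpy as np

def stochastic_gains(tx, rx, path_loss_exp=4.0, shadow_std_db=8.0, rng=None):
    """Gain matrix G where G[i, j] is the gain from transmitter j to the
    receiver of link i: distance-based path loss times i.i.d. log-normal
    shadowing. Exponent and dB spread are illustrative placeholders."""
    rng = np.random.default_rng(rng)
    # d[i, j]: distance from transmitter j to the receiver of link i
    d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=2)
    shadow_db = rng.normal(0.0, shadow_std_db, size=d.shape)
    # Convert shadowing from dB; clamp tiny distances to avoid division blowup.
    return 10 ** (shadow_db / 10) / np.maximum(d, 1e-3) ** path_loss_exp

# Two well-separated links, each with a nearby receiver.
tx = np.array([[0.0, 0.0], [10.0, 0.0]])
rx = np.array([[1.0, 0.0], [11.0, 0.0]])
G = stochastic_gains(tx, rx, rng=0)
```

Plugging a gain matrix of this kind into the existing simulation loop, and redrawing the shadowing (or adding a fading term) over time, would be one way to study the algorithms under channel randomness.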

4.2 Proving the Structural Properties of Backlog-Driven DPC


The structural properties of backlog-driven DPC are not yet very well understood, and we hope that by leveraging the insights gained through extensive simulations, we can begin to characterize these properties mathematically. For example, it would be interesting to prove that the mathematical properties of PCMA-1 and 2 naturally result in their gradual performance decay under large average loads, as we observed in our simulations.


4.3 Design of Stronger DPC Algorithms


The ultimate goal of this entire line of research is to develop optimal DPC algorithms suitable for a wide range of wireless network environments. As we have seen in our literature review, this is a difficult problem to solve, and the current state of the art involves developing well-informed heuristics based on analyzing simplified models of the wireless environment, such as the case of unresponsive, extraneous interference. However, we have also seen that the techniques employed have become more and more sophisticated over time, and with a better understanding of the properties of backlog-driven DPC obtained through simulation and mathematical analysis, we hope to push the state of the art in this field further.

5 Conclusion
In this report, we have examined the evolution of backlog-driven DPC algorithms over the past 15 years, beginning with the benchmark, backlog-agnostic Foschini-Miljanic approach and ending with recent work on the dynamic-programming-based backlog-driven approach. Through our analysis and simulations, we have shown that there are significant performance gains for wireless networks in which the links can be induced to cooperate with each other through backlog-driven DPC. Although the full extent of these performance gains is not trivial to characterize because of the different natures of the DPC algorithms, we have clearly highlighted the fact that when the network is under duress, backlog-driven DPC can provide enormous gains over the standard constant-SIR benchmark in terms of power consumption, backlog, and overall reliability. There remains much work to be done to find appropriate ways to compare DPC algorithms, prove the structural properties of backlog-driven DPC, and create new DPC algorithms that build on our previous insights to achieve even larger performance improvements.


6 References
[1] G.J. Foschini and Z. Miljanic, "A Simple Distributed Autonomous Power Control Algorithm and Its Convergence," IEEE Trans. Vehicular Technology, vol. 42, no. 4, pp. 641-646, 1993.

[2] J. Rulnick and N. Bambos, "Mobile power management for wireless communication networks," Proc. IEEE INFOCOM, vol. 3, pp. 3-14, 1997.

[3] J. Rulnick and N. Bambos, "Power-induced time division on asynchronous channels," Wireless Networks, vol. 5, pp. 71-80, 1999.

[4] N. Bambos and S. Kandukuri, "Power-Controlled Multiple Access Schemes for Next-Generation Wireless Packet Networks," IEEE Wireless Communications, pp. 58-64, June 2002.

[5] A. Dua and N. Bambos, "Distributed Backlog-Driven Power Control in Wireless Networking," Proc. IEEE Workshop on Local & Metropolitan Area Networks, pp. 13-18, June 2007.

[6] N. Bambos, "Toward Power-Sensitive Network Architectures in Wireless Communications: Concepts, Issues, and Design Aspects," IEEE Personal Communications, pp. 50-59, June 1998.

[7] A. Goldsmith, Wireless Communications, Cambridge University Press, New York, 2005.

