

Scalability, Density, and Decision Making in Cognitive Wireless Networks, by Preston Marshall. Book DOI: http://dx.doi.org/10.1017/CBO9781139058599. Online ISBN: 9781139058599. Hardback ISBN: 9781107015494.

Chapter 4 - Some fundamental challenges in cognitive radio and wireless network systems, pp. 74-92. Chapter DOI: http://dx.doi.org/10.1017/CBO9781139058599.005. Cambridge University Press.

Some fundamental challenges in cognitive radio and wireless network systems


4.1 Introduction to wireless system challenges
In this chapter, we will explore some of the fundamental challenges that are common to most, if not all, wireless networking systems and architectures. The intent is to approach the problem in a general framework that can derive meaningful insights into the broad categories of wireless architectures, as well as specific issues associated with specific architectures and designs. Although many of the current wireless architectures are highly specialized and homogeneous, it will be shown that the necessity for increased capability and cost-effective performance, within increasing spectrum constraints, is driving architectures to become more expansive and heterogeneous in their structure. These structures introduce opportunities for optimization across a range of heterogeneous techniques, technologies, and architectures, as well as a requirement for unique optimization methods within each of the homogeneous architectures. This chapter will also introduce some of the fundamental metrics that will be the basis for subsequent analysis and for the development of decision criteria.

4.2 Evolution of wireless and mobile architectures
Although cellular and mobile communications are a special case of a wide number of wireless architectures, their impact on popular usage, and society in general, is profound. It is important that their specific trends, and design considerations, be reflected in even the most general treatment of wireless networking. Not only is spectrum a highly constrained resource, but also energy consumption, real estate for towers, visual obstruction, and other aspects of the wireless ecosystem are significant considerations in the evolution of wireless technology. Interestingly, the potential evolution of mobile architectures that would be responsive to growth in usage would also provide lower energy consumption, reduce the spectrum needed on a unit bandwidth basis, and avoid the increasing density of cellular infrastructure. The solutions that are evolved to address information density through more localized access to mobile points of presence have numerous other advantages as well. Higher information density is achieved with lower levels of energy consumption, elimination of base transceiver station (BTS) towers, or reductions in tower height, and a much more distributed infrastructure. This transition will depend on a number of technology

Downloaded from Cambridge Books Online by IP 210.212.129.125 on Wed May 01 10:32:40 WEST 2013. http://dx.doi.org/10.1017/CBO9781139058599.005 Cambridge Books Online Cambridge University Press, 2013


developments that can be foreseen in the reasonably near future. These technologies include cognitive radio and networking, interference tolerance, and advanced wireless system architectures.

Mobile access has become one of the most pervasive new technologies to emerge in the late twentieth century. It is worth considering how the current mobile/cellular architecture evolved. Early cellular systems were focused solely on voice communications. The interface between them and the plain old telephone system (POTS) was a unique set of protocols and standards that were in use only within telephone systems. Therefore, the entire support structure of the early cellular systems was built around the internal methods of the telephone systems. Early cellular systems were sparsely deployed, and ensuring coverage was a major concern, so cellular BTSs and their associated towers were tall, of high power, and generally highly apparent and unattractive. The growth of cellular has generally followed this model, with infilling between BTSs and towers to provide more bandwidth. Additionally, due to cost, mobile devices were generally used only when a fixed-infrastructure service was not available.1

This approach was adequate while voice cellular usage grew through additional users. Even ten times more users meant a requirement for only ten times the bandwidth, which was accommodated through additional sites, new frequencies, and advanced technologies that managed the spectrum more effectively. Doubling of spectral effectiveness, doubling of spectrum, and more sites accommodated voice growth. The problem this cellular architecture faces is that continued growth is not linear with the increase in user population and voice usage. Instead, it is driven by networking bandwidth, such as social networks, web services, location-based services, video conferencing, and the transition of traditional broadcast media (such as television (TV)) to cellular delivery.
The industry has been satisfying the demand, generally through extension of the existing architectures, but it is clear that significant changes in architecture will be needed to address the exponential growth that Internet services will necessitate. Since current equipment is highly efficient in terms of the bits/Hz achieved for the power delivered to the receiver, it is unlikely that significant increases in capability will be achieved through additional efficiencies in the communications waveform. In the United States, the Obama Administration's Federal Communications Commission (FCC) is seeking to provide cellular and other broadband services with an additional 500 MHz of spectrum [1], but even this spectrum is not a significant increase compared with the required growth in capacity. Cooper [2] makes an effective case that most of the growth in radio-frequency (RF) bandwidth services has been achieved not through spectrum efficiency, but through spectrum reuse. Spectrum reuse allows the same spectrum to be reused many times over. If the spacing between systems (or base stations in the cellular architecture) using the same spectrum can be halved, then potentially four times as much bandwidth can be provided to devices within the same area from this reduction in footprint.
1. Whereas mobile services are now used in residences or workplaces to avoid the inconvenience of switching between the fixed and mobile service, and to provide a seamless communications experience.
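The quadratic payoff from spectrum reuse described above can be sketched numerically. This is our illustration of the geometric argument, not code from the text; the function name and units are assumptions.

```python
def reuse_capacity_gain(old_spacing_km: float, new_spacing_km: float) -> float:
    """Relative capacity gain in a fixed service area from denser reuse.

    The number of cells that fit into a fixed area grows as the inverse
    square of the inter-site spacing, so the gain is
    (old_spacing / new_spacing) ** 2.
    """
    return (old_spacing_km / new_spacing_km) ** 2


# Halving the spacing between sites that reuse the same spectrum
# quadruples the bandwidth deliverable within the same area:
print(reuse_capacity_gain(2.0, 1.0))  # 4.0
```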


This chapter provides a general discussion of the organization of wireless systems, and then develops the challenges in the cognitive management and decision making of these systems. The challenges of cognitive radio and network management are somewhat sensitive to the method of organization of the links and network topology, and therefore this chapter introduces basic principles of logical operation and organization. These organizational principles will be the basis for developing discrete performance analysis and decision-making requirements later in the book.

More localized base-station access is a natural evolution of other forces. Considering the characteristics of early cellular architectures in light of current trends indicates a number of reasons why new architectures may evolve.

Range and usage density. Early cellular voice and data services were expensive, and were typically used only when the alternative infrastructure was not available. As the cost has dropped, these mobile services have become a primary mode. Therefore it is possible to meet much of the mobile bandwidth demand with equipment located in close proximity to the user community.

Location. Large base stations had to be located far from the typical user, forcing high-power, high-antenna-height deployments. However, once it is possible to co-locate cellular infrastructure within the proximity of the bulk of the user population, these undesirable traits can be mostly eliminated, and the devices can have similar footprints to those of existing communications equipment, such as cable modems and wireless fidelity (Wi-Fi) hubs.

Backhaul. First- and second-generation cellular systems were based on POTS telephone switching, signaling, and interoperability standards. Therefore, these base stations required access to the unique communications infrastructure generally associated with telephone company (TELCO) systems.
With the advent of fourth-generation cellular systems (Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), and Worldwide interoperability for microwave access (WiMax)), the transition to Internet-based backhaul and other supporting infrastructure will be accomplished, and therefore cellular infrastructure support can be provided through any suitable Internet access method. Minimally invasive deployments, such as picocell and femtocell deployments, also place wireless infrastructure in close proximity to the existing wired infrastructure.

Digital processing cost. Cellular signaling and control protocols are complex, and in early generations of base-station equipment the digital elements were significant cost drivers, being of comparable cost to analog elements, such as antennas, amplifiers, and high-performance receivers. The cost of these elements has been rapidly reduced to the point where the digital control can be provided by inexpensive chips (as in femtocells). There is thus a financial incentive to build extremely low-cost cellular access by forgoing the necessity for base stations with higher transmit power and performance.

Usage patterns. The use of cellular telephones as a primary means of residential and workplace communications has been a consequence of the major


reductions in cost. Home cordless phones were a transition point, providing mobility within a small roaming region. The trend is clearly for users to abandon the use of both wired and cordless modalities, in favor of consolidation/unification to integrated wireless services. The same is possible for broadband access, depending on cost. Most residences and workplaces use unlicensed Wi-Fi to provide mobility to a wired Internet service, but this could also be a transitional modality, just as cordless phones were transitional from fixed wired landlines to wireless access. This transition appears to be dependent on the cost of wireless broadband, and would occur if it became sufficiently competitive in comparison with wired access.

4.3 Classes of wireless architectures

4.3.1 Wireless link management and multiplexing
A basic requirement of any wireless link is to separate the signals being sent from multiple nodes, to and from the same node, as required. Several principles have emerged, and have been applied in various formulations. In typical (single receiver) peer-to-peer (P2P) or ad-hoc configurations, all members of the network must transmit on the same frequency, so some method of contention control is required in order to prevent these transmissions from jamming each other at the intended receiver. The following multiple-access schemes are commonly utilized.

Time. In time-division multiple access (TDMA), separation of signals is provided by ensuring that at most one user is active on the channel at any one time. This can be implemented through a predetermined (typically slotted) fixed-time-slot mechanism, or through a contention-based protocol. Slots are a fixed upper bound on capacity, and managing assignments to these slots in conditions of uneven demand (Internet access, voice-call initiation, . . .) is complex. Carrier-sense multiple-access (CSMA) nodes sense channel occupancy and transmit only when the channel is clear. Because such mechanisms cannot ensure perfect collision avoidance, some mechanism to resolve collisions and/or request the use of the channel must be provided.2

Frequency. Signals are assigned different frequencies, which, if properly separated in frequency and space, create essentially orthogonal channels for communications, with each channel completely independent of any other channel. Such an architecture requires that the devices be partitioned, so communication is possible only along some pairings of network nodes (as will be discussed a few paragraphs further on). Unless the duty cycle of the signaling is high compared with the channel's capacity, this solution can be wasteful of spectrum resources.
2. Such as protocols that provide a request-to-send (RTS) or clear-to-send functionality.
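The contrast between fixed slots and contention access can be made concrete with a small simulation. This sketch is ours (the chapter gives no algorithm): each node independently attempts transmission in a slot, and a slot is useful only when exactly one node transmits.

```python
import random

# In TDMA, exactly one node owns each slot, so every slot carries traffic
# but capacity is capped by the slot count. In contention access, a slot
# succeeds only when exactly one node chooses to transmit; simultaneous
# attempts collide at the intended receiver.

def contention_throughput(n_nodes: int, p_tx: float,
                          n_slots: int = 100_000, seed: int = 1) -> float:
    """Fraction of slots carrying exactly one (successful) transmission."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        transmitters = sum(rng.random() < p_tx for _ in range(n_nodes))
        if transmitters == 1:
            successes += 1
    return successes / n_slots

# With 10 nodes each transmitting in a slot with probability 1/10, the
# success rate sits near n * p * (1 - p)**(n - 1), roughly 0.39 here;
# the remaining slots are idle or lost to collisions:
print(contention_throughput(10, 0.1))
```

This is why contention schemes add collision-resolution or RTS-style reservation mechanisms: raw contention wastes a large fraction of slots.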


Code. Code separation spreads the signals by multiplying each symbol of the information stream by a spreading code stream. The information is recovered by the reverse process. If the spreading codes are orthogonal (have no cross-product), the signals can be separated independently, although the technique is not effective when multiple signals, with large differences in signal amplitude, must be received, since the degree of coding gain is not sufficient to reduce the ratio of the interfering signal to the intended signal. Additionally, the front-end may be overloaded, since it must provide analog amplification of the entire energy of the near signal, without the benefit of any coding gain or processing reduction. Another consideration is that, if the receiver must service a number of nodes at different ranges, the difference in arrival time can reduce the degree of orthogonality between codes. This mode is used in spread-spectrum systems, such as code-division multiple access (CDMA), but requires complex power management in order to avoid this near/far effect.
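A minimal numerical sketch of code separation (our construction, using length-4 Walsh codes rather than any particular standard's codes) shows how orthogonality lets a correlating receiver pull one user's symbol out of the summed channel:

```python
# Length-4 Walsh codes: the rows are mutually orthogonal, so the dot
# product of any two distinct rows is zero.
WALSH = [
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
]

def spread_and_sum(symbols: list) -> list:
    """Superimpose each user's spread BPSK symbol on the shared channel."""
    chips = [0, 0, 0, 0]
    for sym, code in zip(symbols, WALSH):
        for i, c in enumerate(code):
            chips[i] += sym * c
    return chips

def despread(chips: list, user: int) -> float:
    """Correlate against one user's code; orthogonality removes the rest."""
    code = WALSH[user]
    return sum(ch * c for ch, c in zip(chips, code)) / len(code)

channel = spread_and_sum([+1, -1, -1])   # three users sharing the channel
print([despread(channel, u) for u in range(3)])  # [1.0, -1.0, -1.0]
```

Note that this ideal separation assumes chip-aligned, equal-amplitude signals; the near/far and timing effects described above are exactly what breaks these assumptions in practice.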

4.3.2 Methods of duplexing
Hybrid combinations of the channel-management arrangements are used to separate transmission and reception events. Alongside the requirement to separate transmissions from multiple nodes, individual nodes require mechanisms to isolate their own transmissions from their own, or their neighbors', reception events. The discussion in Chapter 2 pointed out the overload effects that would occur when a receiver was listening to a distant node and a node in close proximity simultaneously began transmitting. Some services, such as voice telephony, require duplex operation, which provides simultaneous (apparent) transmission and reception for an individual device, and among a number of users of the same channel. Cellular architectures have evolved largely along the first of these two lines (which requires paired up- and down-link spectrum), but the second has attracted attention where paired spectrum is not available, as is being considered for Chinese deployments.

Frequency-division duplex. Frequency-division duplex (FDD) separates signals in frequency. This is less general than the time-division duplex (TDD) methods, insofar as it requires predetermined (and typically static) bandwidth allocation and topology. Handsets transmit on one frequency (the up-link), and the BTS transmits to all handsets on a single frequency. This has the advantage that no handset has to be able to receive transmissions from other handsets, which might overload its receiver. It has the disadvantage that the allocation to the up- and down-links is fixed, and therefore not responsive to the actual ratio of traffic to and from the BTS. This framework is used in most of the implementations of cellular systems (the exceptions being WiMax and the proposed LTE TDD mode, which is for use with unpaired spectrum). For example, the Global System for Mobile Communication (GSM) uses FDD to separate up- and down-link traffic, and fixed TDMA slots to


manage transmissions to and from handsets. CDMA also uses FDD to isolate up- and down-links, but then uses spreading-code separation to isolate handsets' transmissions. This requires complex power management to ensure that transmissions received at handsets are within the processing range of the spreading code, and to avoid near/far range issues.

Time-division duplex. TDD separates signals to and from a single node by separating them in time. This is appropriate when the same signaling channel is used for both sides of the conversation. The control over this channel can be through a handshaking process, or through fixed time allocations. Common examples are Wi-Fi, WiMax, and aspects of the GSM cellular air interface.

As mentioned, cellular practice has been primarily FDD. This mode has two serious issues for future deployments.

1. Spectrum that is suitable for paired use may become less available as it becomes necessary to share spectrum or use blocks that are not optimal in spectral extent or location.

2. The paired-spectrum approach requires a fixed relationship of up-link and down-link spectrum. This was appropriate during the voice-service focus of wireless, but is less suitable as the traffic mix shifts to Internet access. Web browsing and video are highly asymmetric, requiring much more down-link than up-link; Voice-over-Internet protocol (VOIP) is symmetric; and uploading of media may be asymmetric, requiring extensive up-link bandwidth. Any fixed allocation between up- and down-link is likely to be wasteful of spectrum resources.

4.3.3 Wireless network topology
There is no generally accepted taxonomy for the organization of wireless network systems. Nevertheless, there are several models of wireless system organization that recur in a large number of systems, and can form classes from which most specific architectures can be derived.3 These include the following.

Point-to-point (1 to 1). These architectures provide a direct path between the initiating (sending) node and the receiving node(s). Examples include analog frequency-modulation (FM) walkie-talkies and radio networks. These systems are characterized by one-to-one (1 to 1) communications topologies.4 The essential characteristic is that communication is direct between the nodes, and symmetry is maintained in these architectures, since they have no specific partitioning between up- and down-links, or any other distinction between the physical or logical organizations of their links.

Hub and spoke (n to 1). Hub and spoke is a common architecture in which a collection of nodes communicates solely with a common node, which provides
3. This discussion ignores one-directional modes, such as broadcast radio and television.
4. A special case might be a radio network, in which multiple radios transmit to each other.


Figure 4.1 Network architectures.

routing among the nodes, or to other external networks. Examples include Wi-Fi hubs, cellular towers, and land mobile radio (LMR) trunking systems.

Peer-to-peer (n to n). P2P networks are often referred to as a mesh (when in fixed locations) or as a mobile ad-hoc network (MANET) (when they are mobile or have intermittent connectivity). In the P2P architecture, traffic can be relayed through any appropriate node, and typically is delivered only after it has been transferred over several of the network links.

Simplistic depictions of each of these architectures are shown in Figure 4.1. These network organizations are loosely composable. For example, a cellular system with backhaul can be considered to be a set of hub-and-spoke networks (the cellular towers) connected by a point-to-point network (the wireless backhaul), or, if interconnected, a P2P network.

4.3.4 Composite network organization
Individual systems provide a composite of these approaches. Table 4.1 illustrates some common network services and how each aspect of their organization is operated. A general network construct is the wireless metropolitan area network (MAN), which typically links multiple wireless local area networks (WLANs).

4.4 Recent trends in wireless architectures
In fact, the critical consideration we must make is where we draw the line around the system we are analyzing. Figure 4.2 shows how some system boundaries might have been considered in early cellular architectures. The following discussion is intended to illustrate just how difficult this partitioning is when we introduce just the concept of femtocells into existing architectures.


Table 4.1 Design choices in some common services

Service          | Link sharing | Transmission duplexing | Network topology
-----------------|--------------|------------------------|-----------------
Walkie talkie    | Time         | N/A                    | 1 to 1
Wi-Fi            | Time         | N/A                    | n to 1
LMR trunking     | Time         | FDD                    | n to 1
Cellular (CDMA)  | Code         | FDD                    | n to 1
Cellular (GSM)   | Time         | FDD                    | n to 1
Wi-Fi mesh       | Time         | TDD                    | n to n

Figure 4.2 Stovepiped communications architectures.

The convergence of wireless technologies has been accelerating over the last decade. Figure 4.2 illustrates wireless access architectures of the 1990s. In these legacy architectures, the wireless and Internet routing paths are standalone, and isolated, until deep into the Internet and TELCO infrastructure. It is very reasonable to treat these systems as independent, since their interaction is in the core of the communications infrastructure, where we can reasonably assume that the law of large numbers isolates these systems.


Figure 4.3 Emerging communications architectures.

In contrast, the architectures that are emerging are highly coupled, as shown in Figure 4.3. Both VOIP voice and data use the Internet as the core distribution mechanism. Whereas the only interaction between the cellular and premises Internet was in the network core in the POTS TELCO stovepipe architecture, in the emerging architecture the premises Internet supports mobile devices in parallel with the cellular services. This architecture becomes even more integrated with the introduction of femtocells, which make the existence and use of this access path invisible to network users. With femtocells, the wireless service is supported both through its own access to the Internet and through that provided by the distributed infrastructure supporting the femtocell devices. The limiting case of this architecture would be to have the cellular base stations essentially eliminated,5 and operate the cellular service primarily through the femtocells.

These anecdotal examples show the necessity to approach the emerging complex, dense, and highly adaptable wireless systems with much more generality than that provided by the currently rigid and fixed hub-and-spoke architecture through dedicated, and separate, infrastructure. Instead, the transition to Internet access as the primary service
5. Complete elimination is not practical for coverage reasons, but they can be transitioned to become the exception, not the primary path for the bulk of wireless communications.


of wireless systems argues for the adoption of much more hybrid architectures that adapt to local access patterns, opportunistic content, and Internet access, and flexibly evolving architectures that are highly situationally adaptive.

4.5 Convergence of structured and unstructured networking
The architectures depicted in Figures 4.1 through 4.3 show a converging trend away from the highly structured initial systems, such as GSM and the Advanced Mobile Phone System (AMPS), that were completely pre-planned, and had only single modalities for each type of interaction. Each successive generation provided more alternatives in the choice of delivery path, management of interference, spectrum choices, and power management. These decisions were increasingly devolved to local awareness and decision making within the nodes. With the advent of the architecture shown in Figure 4.3, the population of remote-access points (femtocells) will greatly outnumber the centrally controlled base stations.6 As the number of access points (base stations and femtocells) increases, the control over operating characteristics will have to continue to be devolved to the local units, due to the literally millions of possible interactions.

It is thus reasonable to imagine that the concepts of self-organizing and ad-hoc networks that have been under study by academic researchers can find an application in the commercial practice dominated by less adaptive cellular technology. The incorporation of these technologies may be a fundamental enabler of future devolved and decentralized wireless architectures. An understanding of the general concepts of self-organizing networks will likely be essential to the development of future commercial architectures. Although many of these concepts might appear too general and unstructured for application to current wireless practice, they are likely to become increasingly relevant as the challenges of density, spectrum reuse, routing, and other decisions must be devolved to distributed devices.

4.6 Constraining aspects of nonlinear and discrete effects
Most of the spectrum-management and -access activity carries an implicit assumption that partitioned spectrum availability is both a necessary and (implicitly) a sufficient condition for devices to access the spectrum. Therefore, the assumption is that deconflicting channel usage is the fundamental task for any spectrum-management regime. Presumably, if the spectrum is appropriately partitioned, and no frequency is in use by more than one user at a time, no spectrum-management conflicts would exist. Unfortunately, the inherent (and inevitable) imperfections of receiver circuits create interactions even among devices whose frequencies are deconflicted. Section 2.5 introduced the constraints imposed by the realities of realistic circuits. In later chapters, we will extend the discussion of spectrum-occupancy constraints to
6. In 2010, it was reported [3] that the population of femtocells had already exceeded that of base stations in their first several years of availability.


also include the effects of adjacent-channel operation on the receiver. This issue is not a subtle engineering one, but has significant impact on the operation of devices. It is commonly referred to as desensitization, receiver overload, and co-site interference. In one case, its solution required a major realignment of spectrum, funding of new public-safety technology, and billions of dollars [4]; another may have the effect of completely negating a proposed wireless deployment [5-7], in the case of a potential service by LightSquared.

The constraining impact of these effects has had a significant effect on the evolution of wireless architectures. The pre-4G cellular architectures (as well as most of the 4G deployments) separate signals in frequency (FDD), with sufficiently isolated up- and down-link frequencies. This architecture avoids the possibility that a mobile device will have to receive signals in the presence of closely spaced mobile transmitters. In contrast, spectrum coexistence of TDD devices has been a challenging problem, even when the units are on separate frequencies. A unit transmitting on any frequency (within the operating band) can overload all of the channels on all of the radios that are in its proximity.

In developing a model for these effects in later chapters, it is important to recognize that nonlinear effects do not operate on a continuum of impact. Whereas we can create a model of the effect of power on range or throughput, these nonlinear effects (by definition) have essentially no impact below a certain level, and a very significant effect above a certain threshold. The theoretical basis for this is the inherent entropy of any object, space, or other element within the environment of the communications link. Noise is generated through a number of underlying mechanisms, including shot noise, Johnson noise, and 1/f noise [8], all forming a lowest possible noise level in a receiver.
As a minimum, a noise voltage of √(4·kB·T·R·Δf)⁷ (where R is the resistance, T is the temperature in kelvins, kB is Boltzmann's constant, and Δf is the bandwidth) is the lowest noise level that can be achieved. Reductions in nonlinear effects below this threshold have little benefit to the wireless system, whereas above this point they typically have effects that are on the order of the power level over the threshold. For example, in the next chapter we will see that third-order intermodulation causes an increase in noise by a factor of 1,000 for an increase in input energy by a factor of 10. In the next chapter, these general relationships will be quantified in terms of the absolute amount of power generated by the intermodulation process.
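The noise floor and the threshold behavior of third-order products can be made concrete with a short calculation. The 52 Ω resistance and 290 K temperature follow the chapter's footnote; the 1 MHz bandwidth is our example choice.

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def thermal_noise_voltage(resistance_ohm: float, temp_k: float,
                          bandwidth_hz: float) -> float:
    """RMS Johnson noise voltage: sqrt(4 * kB * T * R * delta_f)."""
    return math.sqrt(4 * K_B * temp_k * resistance_ohm * bandwidth_hz)

def im3_growth(input_power_ratio: float) -> float:
    """Third-order intermodulation products grow as the cube of input power."""
    return input_power_ratio ** 3

# 52-ohm radiation resistance at 290 K over a 1 MHz channel:
v_noise = thermal_noise_voltage(52.0, 290.0, 1e6)
print(f"{v_noise * 1e6:.2f} microvolts RMS")  # about 0.91 uV

# A 10x increase in input energy raises third-order products 1000x:
print(im3_growth(10.0))  # 1000.0
```

The cubic growth is what produces the threshold behavior described above: interactions are negligible until the input approaches the nonlinear region, then dominate quickly.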

4.7 Key objectives and metrics for introducing cognitive processes to wireless systems
When communications engineering and operations were dominated by individual links, the evaluation and optimization of these links was relatively scalar; one had to maximize the link performance, or minimize the resources required to achieve a given link performance. As wireless has moved from wireless access to wireless networking, the
7. Typical wireless devices have an antenna radiation resistance on the order of 52 Ω, and a temperature of 290 K in terrestrial applications.


evaluation process is much less scalar, and the objectives of optimization are often competing. We will examine four metrics of the aggregate effect of network decision making. We will see that these metrics are not independent or orthogonal, but approaching them with this assumption is convenient in the early analysis steps, before they become entangled.

Capability (C_net). We will consider capability to be measured from the point of demand to the source of information for the mean request. This metric is therefore not just a metric of the packet network, but also includes the consequences of content location.

Reliability (P(C)). The network reliability is considered in the context of the other three metrics. It is the probability of achieving a given capacity in a network of given total membership (scalability) and mean density. We express this as a function of the capacity.

Scalability (n). The total number of nodes forming a single address space, and for which all other nodes must be sufficiently aware of their location, reachability, or address to communicate.

Density (n_d). The physical density of network nodes. This is measured in nodes per unit area, or n_d = n/a, where a is the area over which the n nodes are deployed.

This book is not intended to be a design text, so our interest is not generally to determine specific values for each of the variables, but to understand the effect of network decisions on each of the variables. Therefore, the focus will typically be on the derivatives of each variable. That will lead us to understand the sensitivity of the network's performance to each possible operational parameter and condition, and thus each possible decision that can be made about its operation.
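The four metrics above can be gathered into a simple container for later sensitivity analysis. The field names and example values below are our illustration, not the book's notation.

```python
from dataclasses import dataclass

@dataclass
class NetworkMetrics:
    capability_bps: float  # C_net: deliverable capacity for the mean request
    reliability: float     # P(C): probability of achieving that capacity
    scalability: int       # n: nodes sharing one address space
    area_km2: float        # a: the deployment area

    @property
    def density(self) -> float:
        """n_d = n / a, in nodes per unit area."""
        return self.scalability / self.area_km2

# Hypothetical example: 500 nodes over 25 square kilometers.
m = NetworkMetrics(capability_bps=2e6, reliability=0.95,
                   scalability=500, area_km2=25.0)
print(m.density)  # 20.0 nodes per km^2
```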

4.8 Metrics for wireless network effectiveness

4.8.1 Spectrum usage effectiveness metric
There is no shortage of metrics for reflecting different aspects of wireless network effectiveness. Some of these assess physical-layer efficiency, in bits/Hz, others assess the range achieved, and still others assess capacity, as examples. Unfortunately, none of these reflects the fundamental tradeoffs that must be made in order to operate wireless systems in conditions of high density, spectrum scarcity, and inadequate capacity. Measures of spectrum efficiency fail to reflect the complex trade space of wireless design, and very different mission needs. Additionally, since there is no standard for 100% effective use of spectrum, or of most aspects of wireless capacity, the concept of efficiency cannot be applied, since a ratio of actual to perfect is not a meaningful comparison. Instead, we will develop a concept of effectiveness, which is an unbounded metric. For example, it will be shown later that increasing bits/Hz is desirable for a single user in the spectrum, but is highly detrimental when users must be densely packed into spectrum, with adjacent interference regions. Similarly, capacity is readily increased

with additional spectrum or power, but this mechanism implies that the network is not in any sense more effective or efficient in its use of these resources. In this work, we will consider spectrum not from the perspective of what is used, but in terms of the degree to which its use precludes other uses of the spectrum. It therefore reflects the opportunity cost of the spectrum. It makes no difference if the user of the spectrum considers his range to be distance r, when the spectral power density precludes meaningful operation out to a range many times that. Also, we wish to see a metric that rewards systems that recognize and leverage the actual range of their users from the closest network node. Shouting at a great volume to a user that is ten times closer than the limit of the communications is wasteful of spectrum resources, and precludes other uses of the spectrum. From the network-layer perspective, we consider a link from user B to k other users, denoted a1, a2, ..., ak. Transmissions from and to each user ak utilize spectrum S(k), occur at a range of R(k), and preclude other uses of the spectrum out to a range of I(k). The amount of time taken for a block of data is T(k), and provides a set of data of D(k) bits. The benefit from this metric system is the distance over which the communication is provided (R) and the volume of data (D). The cost of the delivery operation is the time (T) over which the spectrum is utilized, the area over which other use of the spectrum is precluded (I²), and, of course, the amount of spectrum (S) that is not available to other users. It is important to recognize that this metric applies to systems of transmitters and receivers. A poor receiver that has significant response, or susceptibility, to adjacent-channel signals effectively increases S (lowering the effectiveness metric), because the receiver requires some form of protection through reduced usage of these guard bands.
Spectrum is consumed by receivers, as well as by transmitters:

E_{\mathrm{Spectrum}} = \sum_{n=1}^{k} \frac{R(n)\, D(n)}{I^2(n)\, T(n)\, S(n)} \qquad (4.1)

where ESpectrum is the spectrum effectiveness in terms of data delivered across a range, over the spectrum, area, and time whose usage is precluded; R(n) is user n's actual communication range; D(n) is the quantity of data delivered for user n; I(n) is user n's interference range, out to which other uses of the spectrum are precluded; T(n) is the time taken to communicate user n's data; and S(n) is the actual spectrum precluded to other users by user n's activity. The units of this measure are bits/(m Hz s), which is intuitively pleasing, but reflective of a much more complex set of relationships than the modulation order of the signal. For static systems,8 such as some cellular down-links, we can simplify the equation to the steady-state value by replacing the time and data-delivered term with a general expression for the peak capacity, C0;
8 Meaning systems with fixed coverage, bandwidth, modulation characteristics, etc.

considering the denied area to be constant at I0²; and considering that users are distributed across a range of 0 to R0, with a mean of kR0.

The effectiveness measure of Equation (4.1) becomes

E_{\mathrm{Spectrum}} = \frac{k R_0 C_0}{I_0^2 S_0} \qquad (4.2)

where k is the range ratio, namely the mean ratio of actual to maximum communications range; R0 is the maximum possible communications range; C0 is the mean transmission capacity; I0 is the worst-case interference range, out to which other uses of the spectrum are precluded; and S0 is the mean amount of spectrum denied to other users. We can examine how this metric behaves with the introduction of various system architecture features.

Interference tolerance. Interference tolerance is strongly rewarded. Any measure that reduces the interference range has a square relationship to the metric. If a given approach reduces the interference range by approximately 29% (a factor of 1/√2), the value of the metric is doubled. Similarly, if a given system design can tolerate interference and thereby operate closer to another system by the same 29%, the metric is doubled.

Matching range and usage. Communications is most effective if the range of the system is matched to the range of the users. If many down-links are in range of a handset, then it is likely that the range being provided exceeds that which is needed, and the mean range will be far less than a k of 0.5. The metric is optimized when the communication is tailored to the actual user range dynamically.

Power management with dynamic usage. Power control is certainly useful, and would be reflected in the dynamic values of I(n), but only if there are provisions to coordinate the opportunities with other systems in the spectrum, or the interference caused by high-power operation can be tolerated by other systems, and thus does not cause the value of I0 to be established at the maximal, or worst-case, range.

Higher frequencies. Propagation loss typically is increased for communications links at higher frequencies. However, the increase in loss is generally even higher for the path to possible victims of interference.
Therefore, use of higher frequencies often is highly beneficial in terms of the possible density, even if it increases the power requirement for the transmissions. Dense-spectrum operation benefits from the availability of a wide range of frequencies, in order to maximize the ratio R0/I0. This consideration will be quantified in later chapters.

Receiver performance. Poor receiver performance reflects on this metric by forcing more distance between nodes (thus increasing I), and by increasing the amount of spectrum that is required for a service using that receiver


(increasing S). For example, the Global Positioning System (GPS) adjacent-channel interference issue with the proposed LightSquared service [5, 6] reflects that the true value of S for GPS is much higher than the GPS waveform bandwidth, because the design of the receivers significantly precludes meaningful usage of additional spectrum, beyond that of the emitted signal. The spectrum needed to provide guard bands to protect the receivers effectively adds to the S value.
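The square-law reward for interference tolerance is easy to verify numerically. The sketch below implements the steady-state metric of Equation (4.2); the link parameters are hypothetical, chosen only to exhibit the scaling behavior:

```python
import math

def e_spectrum(k: float, r0: float, c0: float, i0: float, s0: float) -> float:
    """Steady-state spectrum effectiveness of Equation (4.2):
    E = k * R0 * C0 / (I0^2 * S0), in bits/(m Hz s)."""
    return (k * r0 * c0) / (i0 ** 2 * s0)

# Hypothetical link: range ratio 0.5, 100 m maximum range,
# 10 Mbit/s capacity, 300 m interference range, 10 MHz denied.
base = e_spectrum(0.5, 100.0, 10e6, 300.0, 10e6)

# Because I0 enters squared, shrinking the interference range by a
# factor of 1/sqrt(2) (about a 29% reduction) doubles the metric.
improved = e_spectrum(0.5, 100.0, 10e6, 300.0 / math.sqrt(2), 10e6)
print(round(improved / base, 3))  # → 2.0
```

The same calculation run against any other baseline gives the same factor of two, since only the ratio of interference ranges matters.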

4.8.2 Architecture effectiveness metric

A similar measure can be developed to consider the density effects of different system architectures. In this case, we ignore the range term, since we would be considering comparisons of alternative methods to accomplish identical communications tasks. This metric reflects the spectrum cost of information delivery. If the serviced set of nodes is equivalent in two architectures, the effectiveness of the architecture (EArchitecture) reduces Equation (4.2) further to

E_{\mathrm{Architecture}} = \frac{C_0}{I_0^2 S_0} \qquad (4.3)

where EArchitecture is the architecture effectiveness in terms of data capacity, over the spectrum and area whose usage is precluded. This metric has units of bits/(m² Hz s). For example, this measure is appropriate for comparing the effectiveness of a cellular-tower-provided service and that of a Wi-Fi-provided one (of equivalent character) to the same set of candidate nodes. This measure may explain why Wi-Fi is so effective that this small slice of spectrum was reported to offload over 40% of one carrier's smartphone traffic [9], despite its limited coverage or availability. Given the fundamental limits of communication theory, the only viable method to achieve density of information is to assure locality of communications.
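To make the comparison concrete, Equation (4.3) can be evaluated for a short-range and a wide-area architecture serving the same nodes. The parameters below are hypothetical (similar in spirit to the Wi-Fi-like and 3G-like values used in this chapter's problems), not measured system data:

```python
def e_architecture(c0: float, i0: float, s0: float) -> float:
    """Architecture effectiveness of Equation (4.3): E = C0 / (I0^2 * S0)."""
    return c0 / (i0 ** 2 * s0)

# Short-range, Wi-Fi-like: 54 Mbit/s, 100 m interference range, 20 MHz.
wifi_like = e_architecture(c0=54e6, i0=100.0, s0=20e6)

# Wide-area, 3G-like: 3 Mbit/s, 3,500 m interference range, 1.2 MHz.
cellular_like = e_architecture(c0=3e6, i0=3500.0, s0=1.2e6)

# Locality dominates: the short-range architecture is over three
# orders of magnitude more effective per unit of denied spectrum-area.
print(round(wifi_like / cellular_like))  # → 1323
```

The squared interference range swamps the capacity and bandwidth differences, which is the locality argument of the text in numerical form.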

4.8.3 Information-theoretic effectiveness metric

The previous subsection opened with the qualification that the ESpectrum and EArchitecture metrics were metrics applicable to the network level. These metrics consider all bits of equal value. An extension of these metrics extends this effectiveness measure to the application layer by considering the effective information-theoretic rate of a system. Recall that Zipf's law [10, 11], Equation (2.19), describes a wide range of probabilities for content access in networks, even including discounting for spatial and temporal correlation:

f(k; s, N) = \frac{1/k^s}{\sum_{n=1}^{N} 1/n^s} \qquad (4.4)


where f is the frequency of occurrence, k is the rank in order of occurrence, s is the characteristic curve of the Zipf's-law distribution, and N is the size of the total population of candidate items. The entropy of a series of requests for n items of content is not proportional to log N. When normalized for the bit size of the content, most of the content that is passed through the network has a much lower entropy, due to the highly skewed distribution. The information effectiveness metrics are similar to those in Equations (4.2) and (4.3), but consider the entropy of the information, rather than the bit value. The entropy provided in message set N of n random messages is derivable from Equation (2.15) as

H_2(N) = -\frac{1}{n}\log_2\frac{1}{n} - \left(1 - \frac{1}{n}\right)\log_2\left(1 - \frac{1}{n}\right) \qquad (4.5)

This is the maximum entropy possible from the N set. Any bias in the distribution of N will reduce this entropy. We therefore weight the occurrence probability with the entropy of each occurrence to determine the effective entropy of the communications traffic that was delivered:

E_{\mathrm{SysInfo}} = \frac{1}{H_2(N)} \sum_{k=1}^{N} \frac{C_0\, f(k; s, N)\, H_2(f\{k; s, N\})}{I_0^2 S_0} \qquad (4.6)

E_{\mathrm{Info}} = \frac{1}{H_2(N)} \sum_{k=1}^{N} \frac{k R_0 C_0\, f(k; s, N)\, H_2(f\{k; s, N\})}{I_0^2 S_0} \qquad (4.7)

The introduction of entropy more appropriately reflects the actual capacity of these networks. It rewards the integration of content management and persistence within (rather than external to) wireless networks. It is not very relevant to networks that function solely as wireless access points (Wi-Fi or cellular, as examples), but it is highly relevant to multi-hop networks that route content internally, as we will explore.
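A rough numerical feel for the entropy discount can be obtained by building a Zipf request distribution per Equation (4.4) and comparing its entropy against the uniform-request maximum. This is a simplified sketch: it uses the Shannon entropy of the whole request distribution rather than the per-item binary-entropy weighting of Equations (4.6) and (4.7), and all names and parameters are illustrative:

```python
import math

def zipf_pmf(s: float, n_items: int) -> list[float]:
    """Zipf frequencies of Equation (4.4): f(k; s, N) proportional to 1/k^s."""
    weights = [1.0 / k ** s for k in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def effective_entropy_fraction(s: float, n_items: int) -> float:
    """Entropy of the Zipf request stream, as a fraction of the
    log2(N) maximum achieved by uniform, unbiased requests."""
    pmf = zipf_pmf(s, n_items)
    entropy = -sum(p * math.log2(p) for p in pmf)
    return entropy / math.log2(n_items)

# Uniform requests (s = 0) carry full entropy; Zipf-skewed requests
# carry less, which is what the entropy weighting rewards exploiting.
print(round(effective_entropy_fraction(0.0, 1000), 3))  # → 1.0
print(round(effective_entropy_fraction(1.0, 1000), 3))  # skewed: noticeably below 1
```

The skewed case is where caching and in-network content persistence recover effectiveness, since the frequently repeated requests carry little new information.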

4.9 Summary

Wireless systems face a wide range of challenges that are unique to each environment and mission. Optimization of scalar aspects (range, link capacity, bits/Hz) fails to fully reflect the complex trade space and these unique aspects. As we move to dense networks, any metric of performance must reflect not only the performance of the system itself, but also its effect on other systems that are present in the environment. Individual wireless systems will face different challenges at different times, in different locations, or for different user behaviors. Rather than attempting to achieve an overall optimal operation, the important challenge is to adapt around whatever constraint (energy, spectrum, ...) is limiting operation, targeting the aspect of performance that is deficient. Optimal is not a global objective, but one that is specific to the constraints and objectives of individual systems, and the conditions under which they operate.


4.10 Further reading

There are many books that focus on wireless systems from a perspective that is one level above the link layer. Clearly, advances in wireless networking will occur primarily in the operation of these architectures, rather than at the link-layer level. In particular, LTE and LTE-A are emerging as the dominant technology in the decade of the 2010s. Rappaport provides a good introduction [12] to many of the important concepts that shape wireless architectures. There are some good recent books on LTE and modern cellular. For example, Ghosh et al. [13] provide a detailed discussion of previous cellular architectures and how they motivated the transition to the emerging fourth-generation standards, as well as detailed coverage of LTE. Cooper provides an interesting short position paper [2] demonstrating the effect of evolving communications architectures, and how growth in bandwidth has been enabled primarily by spectrum reuse rather than link technology. A discussion of the policy implications of the convergence of the wired and wireless infrastructure has been published by the Aspen Institute [14].

Problems
4.1 Research at least two wireless network standards (examples could include the Association of Public-Safety Communications Officials (APCO) Project 25 and Terrestrial Trunked Radio (TETRA)), and describe how they manage channel access in the context of the techniques in this chapter, or hybrids of them.

4.2 This chapter differentiates among the metrics of capacity, reliability, density, and scaling. Using at least four different network types (WLANs, MAN, Internet service provider (ISP), cellular, etc.), characterize the considerations (limitations, user needs) of each in terms of these metrics, and specify how they should drive the design objectives of each.

4.3 Section 4.7 differentiates among the metrics of capacity, reliability, density, and scaling. Using at least four different network types (WLANs, MAN, ISP, cellular, etc.), characterize the considerations (limitations, user needs) of each in terms of these metrics, and specify how they should drive the design objectives of each.

4.4 Using the results from Problem 4.2, describe how these networks appear to address the challenges of each of the four key objectives of Section 4.7.

4.5 Section 4.8 developed two metrics (ESpectrum and EArchitecture) to describe the operation of wireless systems, and their use of spectrum. Apply both of these metrics to the two communications architectures shown below:


                     Wi-Fi-like    3G-like
Maximum range        70 m          1.5 km
Typical range        50 m          500 m
Interference range   100 m         3,500 m
Rate                 54 Mbps       3.0 Mbps
Spectrum             20 MHz        1.2 MHz

Compare the results and discuss the reasons for their similarities and differences. Note that these values are not intended to be actual design values of the systems represented.
4.6 Discuss the two metrics (ESpectrum and EArchitecture) from Section 4.8 in terms of what design and architecture considerations would maximize each of them. If you had to improve each by a factor of two, what changes would you recommend in order to achieve this effect in both metrics?

4.7 Consider the effect of waveform modulation order on the two metrics (ESpectrum and EArchitecture) from Section 4.8. What is the effect of increasing modulation order, assuming that the additional energy is proportional to that shown in Shannon's equation (2.18), and that the energy propagates at the rate of r²? Plot this curve for a range of modulation increases from one (baseline) to four times that in Problem 4.5.

4.8 Discuss how the objectives of Section 4.7 are reflected in the metrics described in Section 4.8.

4.9 Consider a P2P network that has 20 devices with ranges uniformly distributed between 10 and 500 m. Each node transmits at a power level of 1 W at 900 MHz. Using a propagation model in which the energy level at each receiver falls off as the third power of range (1/r³), what is the dynamic range of input energy at each of the receivers? Assuming that a typical receiver has a linearity range of 35 dB (not considering any AGC action), do any of the devices fail due to lack of dynamic range?

4.10 Consider the same layout of devices as in Problem 4.9, but this time applying an FDD mechanism in a hub-and-spoke network, as in cellular systems. In this case, the BTS tower is located 30 m above the field of radios (you can model this by adding 20 m to the distance) and, because it is an infrastructure component, it has a linearity range of 48 dB. Does the BTS fail due to lack of dynamic range from the handheld devices?

4.11 This problem considers the likely noise impact of the dynamic range issues addressed in Problems 4.9 and 4.10. The dynamic range limit of the problem was defined to be the point at which the noise generated by intermodulation was equal to the noise floor of the receiver. What is the level of intermodulation noise that is generated by each of these signals?

References
1 Federal Communications Commission, Connecting America: The National Broadband Plan, 2010.


2 M. Cooper, "The myth of spectrum scarcity: Why shuffling existing spectrum among users will not solve America's wireless broadband challenge," A Martin Cooper Position Paper, March 2010.
3 Informa Telecoms & Media, "The shape of mobile networks starts to change as femtocells outnumber macrocells in US," London, Oct. 2010.
4 L. Luna, "NEXTEL interference debate rages on," Mobile Radio Technology, Aug. 1, 2003.
5 United States Federal Communications Commission, "Comment deadlines established regarding the LightSquared Technical Working Group report," Technical Report DA 11-1133, June 2011.
6 National Space-Based Positioning, Navigation, and Timing Systems Engineering Forum (NPEF), Assessment of LightSquared Terrestrial Broadband System Effects on GPS Receivers and GPS-dependent Applications, June 2011.
7 S. Oh, "Exclusion principles and receiver boundaries on spectrum resources," in 39th Telecommunications Policy Research Conference, Sept. 23-25, 2011.
8 N. Gershenfeld, The Physics of Information Technology. Cambridge: Cambridge University Press, 2000.
9 AT&T Inc., "AT&T Wi-Fi network usage soars to more than 53 million connections in the first quarter," Press Release, April 22, 2010.
10 G. K. Zipf, The Psychobiology of Language. Boston, MA: Houghton-Mifflin, 1935.
11 G. K. Zipf, Human Behavior and the Principle of Least Effort. Cambridge, MA: Addison-Wesley, 1949.
12 T. S. Rappaport, Wireless Communications: Principles and Practice, 2nd edn. Upper Saddle River, NJ: Prentice Hall, 2002.
13 A. Ghosh, J. Zhang, J. G. Andrews, and R. Muhamed, Fundamentals of LTE. Upper Saddle River, NJ: Prentice Hall, 2011.
14 M. MacCarthy, Rethinking Spectrum Policy: A Fiber Intensive Wireless Architecture. Washington, D.C.: The Aspen Institute Communications and Society Program, 2010.
