
RESOURCE MANAGEMENT FOR SOFTWARE-DEFINED RADIO CLOUDS

Software-defined radio (SDR) clouds combine SDR concepts with cloud computing technology for designing and managing future base stations. They provide a scalable solution for the evolution of wireless communications. The authors focus on the resource management implications and propose a hierarchical approach for dynamically managing the real-time computing constraints of wireless communications systems that run on the SDR cloud.

Ismael Gomez Miguelez
Vuk Marojevic
Antoni Gelonch Bosch
Universitat Politècnica de Catalunya

Wireless communications are virtually omnipresent today. Radio transmitters and receivers encode and decode signals such that huge data amounts reach their destinations in real time and at the desired quality. Wireless communications links are therefore established between mobile terminals and base stations of cellular communications systems.

Maximizing the capacity of wireless communications systems under the given technological and physical constraints led to the concept of spectral efficiency. A higher spectral efficiency is usually achieved through advanced signal-processing techniques, including adaptive modulation, orthogonal frequency division multiplexing (OFDM), channel estimation and equalization, iterative decoding, and sophisticated channel access.

A higher use of the radio communications spectrum generates more interference. Distributed transmission and reception techniques are therefore employed. Multiple-input, multiple-output (MIMO) antenna systems increase radio transmission performance by exploiting spatial diversity, where fading affects each of the multiple communication links differently [1]. Distributed antenna systems (DASs), moreover, employ an optical fiber or coaxial cable for connecting the antenna system to the remote processing unit. A cooperative DAS based on a radio-over-fiber technique was introduced in China's Beyond 3G Future Project [2] and tested in field experiments. In those experiments, multiple antennas were distributed over the cell and connected to a data center performing all signal-processing tasks. Researchers have recently considered similar approaches [3].

Today's base stations feature a set of heterogeneous processing devices, including application-specific integrated circuits (ASICs), general-purpose processors (GPPs), digital signal processors (DSPs), and field-programmable gate arrays (FPGAs). Each device executes the set of processing tasks that were specified at design time. The digital signal processing chains can be implemented in software (a software-defined radio [SDR] application) running on general-purpose hardware (an SDR platform) [4]. Ongoing advances in radio engineering and digital signal processing necessitate employing processor arrays and automatic resource-allocation tools that provide computing resources on demand. Considering a data center as the computing core of a base station is thus a natural evolution of wireless communications.


Figure 1. The software-defined radio (SDR) cloud. Antennas connect to the data center via communication links. (CH-1 RF represents the analog signal processing part associated with radio frequency channel 1.)
We introduce the SDR cloud, a data center that processes the signals transmitted and received by a set of distributed antennas. Without loss of generality, we consider GPPs the processing devices of the data center because of their mature development tools and management utilities. The SDR cloud employs cloud computing technology, which enables computing-resource sharing. We focus on real-time computing-resource management for dynamic SDR application deployment. Related work addresses infrastructure and networking issues: You and Gao [2] deal with increasing the spectral efficiency, whereas Lin et al. [3] analyze different implementation options and hardware choices.

The SDR cloud

Base stations are the core elements of today's wireless communications systems. Traditionally, they're dimensioned offline for a predetermined number of users and radio resources. During long periods of low radio traffic [5], however, computing resources remain idle and can hardly be used for other purposes. This article therefore analyzes another approach.


A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned as a function of the service-level agreements between service providers and consumers [6]. Clouds provide services to clients without reference to the infrastructure that hosts the services. As cloud clients, radio operators would be able to offer wireless communications services on demand anywhere in the world without local installation. Radio operators can therefore share radio-related infrastructure (such as antenna sites) and focus on developing wireless communications standards and services. An infrastructure operator providing computing resources to radio operators (infrastructure as a service [IaaS]) facilitates a more efficient and scalable computing-resource deployment and management by centralizing storage, memory, processing, and bandwidth resources; the power supply, cooling, security, maintenance, and software licensing costs scale correspondingly.

We envisage an infrastructure based on distributed antennas with a limited amount of local processing resources. These antennas connect to a data center via low-latency and high-speed communication links (see Figure 1).


The SDR cloud naturally enables coherently combining the digital signal processing of several user signals detected by different antennas, benefitting from space-, time-, and frequency-diversity techniques. These techniques allow lower transmission powers and higher robustness against interference, leading to higher spectral efficiency.
A single SDR cloud can provide wireless communications services to a metropolitan area with a high user density. Measurements have shown that the average user establishes seven or eight communication sessions per day, each lasting 90 seconds in the mean [7]. We might expect some 20,000 concurrent wireless communications sessions at peak, assuming that only 1 percent of 2 million inhabitants receive wireless communications services at the same time. If each session requires 10 giga-operations per second (GOPS), which is a reasonable figure for 3G services [8], the system would need to provide 200,000 GOPS for the digital signal processing.

In large geographical areas associated with SDR clouds, there will be zones of higher and lower user densities. A common data center provides an opportunity for dynamic processing-load balancing between the services offered in different radio sectors or cells. (Today's base stations typically provide wireless communications coverage to bounded areas called cells, which are split into several sectors.) We envisage that radio resources will remain the bottleneck of future wireless communications systems. Thus, additional infrastructure facilities (smart antennas) and digital signal processing algorithms will continue to play an essential role in increasing system capacity. An infrastructure operator with a wide business scope could, on the other hand, enable other uses of the computing resources during the long daily periods of low wireless traffic.
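As a rough cross-check of these figures, the following C snippet reproduces the back-of-the-envelope calculation; the 1 percent concurrency and the 10-GOPS-per-session value are taken from the text.

```c
#include <stdio.h>

/* Back-of-the-envelope capacity estimate for a metropolitan SDR cloud. */
int main(void) {
    const double inhabitants      = 2e6;   /* metropolitan population        */
    const double concurrency      = 0.01;  /* share of users active at peak  */
    const double gops_per_session = 10.0;  /* ~3G digital signal processing  */

    double sessions   = inhabitants * concurrency;      /* 20,000            */
    double total_gops = sessions * gops_per_session;    /* 200,000 GOPS      */

    printf("peak sessions: %.0f, required capacity: %.0f GOPS\n",
           sessions, total_gops);
    return 0;
}
```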
Any wireless communications service has a more-or-less stringent throughput and latency requirement. Throughput and end-to-end latency are radio link parameters that characterize the quality of service (QoS). They specify requirements for the wireless communications infrastructure and the SDR cloud computing resources.
Analog-to-digital converter (ADC) and digital-to-analog converter (DAC) technology determines the necessary optical fiber transmission capacity. Data converters evolve slowly. Commercially available converters provide up to one giga-sample per second, with a resolution of 14 bits. Data at a rate of 14 gigabits per second (Gbps), at most, will then need to be transferred from a single converter to the data center and vice versa. Optical fiber transmission capacities are way beyond this figure.

The transmission delay of optical fiber links is a function of distance; optical signals travel at the speed of light. Because optical fiber switches introduce delays in the order of 100 ns, the transmission delay of an optical fiber path 20 kilometers long would be approximately 0.1 ms. This value is low in comparison with typical end-to-end latency requirements of tens of milliseconds (50 ms for voice, the service with the highest latency sensitivity).
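A minimal sketch of these two dimensioning figures follows; the fiber refractive index and the number of switches on the path are assumptions added for the calculation, not values given in the text.

```c
#include <stdio.h>

/* Fronthaul sketch: per-converter bit rate and one-way fiber delay. */
int main(void) {
    const double sample_rate     = 1e9;    /* 1 giga-sample per second       */
    const double bits_per_sample = 14.0;
    double rate_gbps = sample_rate * bits_per_sample / 1e9;     /* 14 Gbps   */

    const double distance_m   = 20e3;      /* 20 km antenna-to-data-center   */
    const double c            = 3e8;       /* speed of light in vacuum, m/s  */
    const double n            = 1.5;       /* assumed fiber refractive index */
    const int    switches     = 2;         /* assumed switches on the path   */
    const double switch_delay = 100e-9;    /* ~100 ns each                   */

    double delay_ms = (distance_m * n / c + switches * switch_delay) * 1e3;

    printf("per-converter rate: %.1f Gbps, one-way delay: ~%.2f ms\n",
           rate_gbps, delay_ms);
    return 0;
}
```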

Resource management requirements and approach

Today's wireless subscribers access communications services anywhere, anytime, and under different circumstances. An SDR application's real-time processing and data-flow demands are a function of the communications service, QoS, and channel conditions. The wireless communications channel highly degrades transmitted signals. Yet, weak or corrupted signals can still be recovered, and the QoS target (bit error rate, video or image resolution, and so forth) can still be achieved through sophisticated signal processing.

Digital signal processing algorithms are continuously evolving. Many processing operations are applied in the same or similar form in various radio standards. A modular design thus facilitates reusing software modules for assembling different radio transceivers. An SDR system is therefore a dynamic system, where radio and computing resources are continuously reassigned. We can, therefore, define the main resource management requirements for SDR clouds as

1. Respond, that is, allocate the necessary amount of computing resources, in real time,


2. Provide real-time control over the processing throughput and latency,
3. Manage different QoS targets,
4. Identify and track the system resource states and handle multiple computing constraints,
5. Provide computing resources as a single abstract resource, shared between different users, and
6. Adapt the resource management strategy to internal (SDR cloud) and external (environmental) influences or conditions.
Managing the SDR cloud computing infrastructure within wireless communications services' tight timing constraints is a complex task. We therefore suggest dividing the data center into clusters of a few processors each. A high-level resource manager assigns users to clusters or cluster groups as a function of the radio and computing conditions, and defines the resource-management strategy. Distributed low-level resource managers then allocate and deallocate cluster computing resources in real time. That is, whenever a user wishes to initiate or terminate a wireless communications session, the corresponding low-level resource manager allocates or deallocates computing resources. User sessions are initiated independently. Computing resources should therefore be assigned on a first-come, first-served basis.
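A minimal C sketch of this two-level split might look as follows; the type and function names (cluster_t, session_t, assign_cluster, map_session) are illustrative and not part of ALOE.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    int    id;
    double free_mopts;   /* remaining processing capacity of the cluster */
} cluster_t;

typedef struct {
    int    user_id;
    double mopts_demand; /* processing demand of the requested session   */
} session_t;

/* High-level manager: picks a cluster within the group that serves the
 * user's cell (here simply the least-loaded one). */
static cluster_t *assign_cluster(cluster_t *group, size_t n) {
    cluster_t *best = &group[0];
    for (size_t i = 1; i < n; i++)
        if (group[i].free_mopts > best->free_mopts)
            best = &group[i];
    return best;
}

/* Low-level manager: first-come, first-served allocation in real time. */
static bool map_session(cluster_t *c, const session_t *s) {
    if (c->free_mopts < s->mopts_demand)
        return false;                  /* reject: constraints cannot be met */
    c->free_mopts -= s->mopts_demand;  /* allocate on session initiation    */
    return true;
}
```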

ALOE enabling technologies

The Abstraction Layer and Operating Environment (http://flexnets.upc.edu/trac/wiki/ALOE) is a lightweight, platform-independent execution environment for SDRs. ALOE is implemented in plain C. Its modular design consists of three layers: the hardware API, software daemons, and the software API. The hardware API abstracts hardware devices (such as timers, data converters, and memory) and operating system services (such as communication interfaces and process management). Each processor runs a subset of software daemons for real-time control, synchronization, interprocessor data routing, application management, platform management, and so forth. The software API provides data-interface management functions, configuration utilities, and access to global variables. The hardware abstraction is thus performed at the hardware API level, whereas the software API provides communication-service abstraction.
ALOE coordinates radio transmission and reception with distributed digital signal processing. It therefore divides the execution time into discrete computing time slots and executes the SDR application in a pipelined fashion. That is, ALOE invokes each application process once per time slot, and each process handles some part of the continuous data flow (see Figure 2) [9]. Merging pipelining stages or applying sophisticated scheduling techniques reduces the resulting pipelining latency (three time slots in Figure 2).
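A simplified view of this time-slotted execution could be sketched as below; wait_for_slot_boundary and process_fn are hypothetical names, and the per-slot data handling is omitted.

```c
#include <stddef.h>

#define N_PROCESSES 4

typedef void (*process_fn)(void);          /* one SDR module, e.g., O1..O4   */

extern void wait_for_slot_boundary(void);  /* blocks until the next time slot */

/* Each process runs once per slot and consumes the packet produced by its
 * predecessor in the previous slot, so a packet needs as many slots as there
 * are pipelining stages to traverse the processing chain. */
void aloe_pipeline(process_fn proc[N_PROCESSES]) {
    for (;;) {
        wait_for_slot_boundary();
        for (size_t i = 0; i < N_PROCESSES; i++)
            proc[i]();
    }
}
```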

Middleware

The middleware's role is to provide communication services and execution-control mechanisms for a coordinated, distributed execution of applications. Misaligned time slots can lead to lost data packets. Therefore, the distributed execution of SDR applications requires time slot synchronization and a single virtual time. Synchronization is also required between the data center and the ADCs and DACs; although both ends of the optical fiber link may have buffers for absorbing momentary throughput variations, processor time slots must be synchronized with the ADCs and DACs. This is the SDR cloud's most difficult synchronization situation because of the relatively long distance between the converters and the data center.

Misalignments of a small fraction of a time slot's duration, which is typically several hundreds of microseconds, are tolerable here because of the pipelined data processing. The SDR cloud operator can use the data communication interfaces (optical fiber lines) for synchronization, employing a simple message-exchange mechanism between a pair of ALOE-enabled processors [10]. (A small microcontroller can manage the synchronization and buffering issues at the ADC and DAC.) Since we assume a symmetric round-trip time for estimating message-propagation delays, the maximum synchronization error will be half the round-trip time; the actual synchronization error is much lower than that for nearly symmetric message propagation paths.


Figure 2. Time slots and pipelining. The figure shows how three data packets are processed by four processing modules (O1 to O4) in a distributed and pipelined fashion. Processor 1 (P1) executes processes O1 and O2, whereas P2 runs O3 and O4.

The full-duplex fiber's communication latencies help achieve misalignments of less than 10 microseconds (μs).
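The following self-contained sketch illustrates the offset estimation behind such a message exchange; the transport and the numbers are simulated assumptions, with the true skew included only to check the estimate.

```c
#include <stdio.h>

static long long t_local = 0;               /* slave clock, microseconds      */
static long long t_master_skew = 730;       /* unknown offset to be estimated */
static const long long one_way_delay = 100; /* ~20 km of fiber, in us         */

/* Simulated request/reply: advances the local clock by the propagation delay
 * in each direction and returns the master's timestamp at reply time. */
static long long request_master_time(void) {
    t_local += one_way_delay;               /* request travels to the master  */
    long long t_master = t_local + t_master_skew;
    t_local += one_way_delay;               /* reply travels back             */
    return t_master;
}

int main(void) {
    long long t_send   = t_local;
    long long t_master = request_master_time();
    long long t_recv   = t_local;
    long long rtt      = t_recv - t_send;

    /* Assume the reply needed half the round trip; worst-case error: rtt/2. */
    long long offset = t_master + rtt / 2 - t_recv;
    printf("estimated offset: %lld us (true %lld, rtt %lld us)\n",
           offset, t_master_skew, rtt);
    return 0;
}
```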
We can measure the ALOE overhead as a fraction of the time slot. The set of ALOE software daemons has to run on each processor. In the case of a multicore processor, these daemons run on one core only, reducing the overhead. The processor overhead relative to the time slot duration is then a function of the execution time of the software daemons (t_daemons) and the software API functions (t_swapi, which includes the context switching time), the number of application processes (N_processes) and processing cores (N_cores), and the time slot duration (t_slot):

Overhead (%) = 100 × (t_daemons + N_processes × t_swapi) / (N_cores × t_slot)

On a 2-GHz CPU, we measured 130 μs for executing all software daemons and 5 μs for the software API functions per process. The relative ALOE overhead for a 1-ms time slot and 20 application processes running on a quad core is then 5.75 percent.
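The following snippet evaluates this expression with the measured figures and reproduces the 5.75 percent result.

```c
#include <stdio.h>

/* Relative ALOE overhead:
 * overhead = (t_daemons + N_processes * t_swapi) / (N_cores * t_slot). */
int main(void) {
    const double t_daemons   = 130e-6;  /* daemons per slot, seconds          */
    const double t_swapi     = 5e-6;    /* software API + context switch      */
    const int    n_processes = 20;
    const int    n_cores     = 4;
    const double t_slot      = 1e-3;    /* 1-ms time slot                     */

    double overhead = (t_daemons + n_processes * t_swapi) / (n_cores * t_slot);
    printf("ALOE overhead: %.2f %%\n", 100.0 * overhead);   /* 5.75 % */
    return 0;
}
```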

Resource management

Using the available computing resources, the resource manager must satisfy real-time computing-resource requirements. General-purpose computing clouds or grids often apply best-effort or other types of resource-management policies [11] without the strict timing constraints of wireless communications. The ALOE resource manager accounts for the SDR-specific computing constraints. It models SDR platforms as interconnected processing cores, and SDR applications as directed acyclic graphs of continuous data flow demands.


Million operations per second (MOPS) is employed for measuring the processing resources and requirements of SDR platforms and applications. Megabits per second (Mbps) correspondingly characterizes the interprocessor bandwidth capacities and the data flow demands. The pipelined execution finally leads to the two metrics million operations per time slot (MOPTS) and megabits per time slot (MBPTS). These metrics are the basis of a set of mathematical models characterizing SDR platforms and applications [8].
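As a small illustration of these per-time-slot metrics: a demand given in MOPS or Mbps translates to MOPTS or MBPTS by scaling with the time slot duration. The figures below are arbitrary examples.

```c
#include <stdio.h>

int main(void) {
    const double t_slot_s    = 1e-3;      /* 1-ms time slot                  */
    const double demand_mops = 10000.0;   /* a 10-GOPS session, in MOPS      */
    const double flow_mbps   = 120.0;     /* example inter-module data flow  */

    printf("%.1f MOPTS, %.2f MBPTS per time slot\n",
           demand_mops * t_slot_s, flow_mbps * t_slot_s);
    return 0;
}
```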
The ALOE resource manager consists of a resource-allocation or mapping algorithm and a customizable cost function [9]. The tw-mapping is a dynamic programming algorithm. It analyzes all possible mapping options between w processes and all available processing cores under the given resource constraints. Cost metrics are therefore computed and mapping paths marked in a trellis of N_PC × M nodes, where N_PC is the number of processing cores and M the number of application modules. The algorithm finds the final mapping solution by unrolling the decisions backward, following the selected path to its origin.

Because of the algorithm's cost-function independence, the ALOE resource manager can consider different computing resource types and constraints. A multiobjective cost function can be used for considering multiple constraints and optimizing the use of several resources. The proposed cost function distributes the processing load while minimizing the interprocessor data flows. The two superposed cost terms represent the relation between the processing and data-flow requirements and the remaining processing and bandwidth resources.
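The sketch below is a deliberately simplified, greedy reduction of this idea, not the published tw-mapping itself: it assigns each module, in data-flow order, to the feasible core that minimizes a two-term cost over the remaining MOPTS and MBPTS, omitting the trellis bookkeeping and backward unrolling. Names and structures are assumptions introduced for clarity.

```c
#include <float.h>
#include <stdbool.h>

#define N_CORES     8
#define MAX_MODULES 32

typedef struct {
    double free_mopts[N_CORES];            /* remaining processing per core    */
    double free_mbpts[N_CORES][N_CORES];   /* remaining bandwidth per link     */
} platform_t;

typedef struct {
    int    n;                              /* number of modules in the chain   */
    double mopts[MAX_MODULES];             /* processing demand per module     */
    double mbpts[MAX_MODULES];             /* flow from module i to module i+1 */
} application_t;

bool t1_map(platform_t *p, const application_t *a, int mapping[]) {
    int prev = -1;
    for (int m = 0; m < a->n; m++) {
        double flow      = (m > 0) ? a->mbpts[m - 1] : 0.0;
        double best_cost = DBL_MAX;
        int    best      = -1;

        for (int c = 0; c < N_CORES; c++) {
            if (p->free_mopts[c] < a->mopts[m])
                continue;                             /* processing infeasible */
            bool remote = (prev >= 0 && prev != c);
            if (remote && p->free_mbpts[prev][c] < flow)
                continue;                             /* bandwidth infeasible  */

            double cost = a->mopts[m] / p->free_mopts[c];
            if (remote && flow > 0.0)
                cost += flow / p->free_mbpts[prev][c];
            if (cost < best_cost) { best_cost = cost; best = c; }
        }
        if (best < 0)
            return false;          /* no feasible core: reject the new session */

        p->free_mopts[best] -= a->mopts[m];
        if (prev >= 0 && prev != best)
            p->free_mbpts[prev][best] -= flow;
        mapping[m] = best;
        prev = best;
    }
    return true;
}
```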
The SDR application-to-platform mapping is part of establishing a new wireless communications session. The complexity of processing a new user grows with the number of processing cores, proportional to N_PC^(w+1) in the case of the tw-mapping. The t1-mapping execution time is a few milliseconds for eight processing cores, in the order of 100 ms for 32 cores, and a few seconds for 100 cores. A mapping delay of a few seconds per user might be unacceptable, especially if users arrive at a higher rate. However, a few hundreds of milliseconds for mapping an SDR application seems reasonable.

The coherent modeling of SDR applications (computing requirements) and platforms (computing resources) facilitates managing wireless communications services' throughput and latency requirements. The ALOE resource manager, in particular, provides control over the processing throughput and latency. Because time is implicitly modeled, a mapping that allocates the required amount of computing resources satisfies the SDR application's throughput demand.
The tolerable processing latency (L_max) is a function of the end-to-end latency of the given radio link. The actual processing latency corresponds to the number of pipelining stages (s) times the time slot duration. We therefore specify the time slot duration as

t_slot ≤ L_max / s

Previous research suggests limiting the processing latency to 10 ms for 3G data services [9]; 10 ms is reasonable, even for voice. Here, those 10 ms must absorb the transmission delay over the optical fiber link between the data converters and the data center. For an optical fiber transmission delay of 0.1 ms, L_max = 9.9 ms remains the tolerable processing latency and determines the time slot duration.
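As a quick numeric check, assuming the three pipelining stages of Figure 2:

```c
#include <stdio.h>

/* t_slot <= L_max / s with the figures from the text; s = 3 is an assumption
 * matching Figure 2. */
int main(void) {
    const double l_max_ms = 9.9;   /* 10-ms budget minus 0.1-ms fiber delay */
    const int    stages   = 3;     /* pipelining stages s                   */
    printf("t_slot <= %.1f ms\n", l_max_ms / stages);   /* 3.3 ms */
    return 0;
}
```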

Case studies

We simulate an SDR cloud scenario and three simple resource management strategies for analyzing the performance of the hierarchical resource management in two use cases.

Scenario and strategies

Three radio operators wish to provide 3G services in an urban area of 4 km × 4 km. Their market shares are 50, 25, and 25 percent. A distributed set of antennas provides full service coverage. Each of the 400 antennas covers a radio cell of 200 m × 200 m and operates at N radio channels. A channel pertains to a single operator. The distribution of channels to operators may follow the market shares. A wireless subscriber or user is associated with a single operator. We don't consider intercell interference or frequency reuse here, and we assume that the available computing resources limit system capacity.


At each antenna site, the N channels are filtered and down-converted to a suitable intermediate frequency before being sampled by N ADCs (see Figure 1). Here, an ADC generates 16-bit samples at a rate of 65 MHz. The data throughput between a single ADC/DAC pair and the data center is thus 2 × 1,040 Mbps. An optical fiber connection with appropriate multiplexing facilities and a capacity of N × 2,080 Mbps would suffice for carrying N bidirectional data flows between an antenna and the data center.
Digital down-conversion and matched filtering are the receiver's first digital signal processing tasks. Four SDR receiver models consider different data throughputs: 64, 128, 384, and 1,024 kbps of raw data rates. (The corresponding transmitters can be independently managed and aren't considered here.) These bit rates can offer various services and QoS levels. The user service distribution is 40 percent voice (64 kbps) and 60 percent data (128 to 1,024 kbps). We obtained the four SDR application models according to the digital signal processing algorithms' complexities and data-flow demands at different processing stages [8]. The SDR cloud readily supports more sophisticated QoS management approaches (for example, accepting QoS upgrades or downgrades as a function of the radio or computing conditions).
The data center is organized in clusters of eight equal and fully interconnected processors (two tightly coupled quad cores, for instance). We assume that all N channels per antenna reach a group of three clusters; that is, three computing clusters are assigned to each radio cell. The C clusters of a group are interconnected via switches with 40-Gbps ports; the group size C depends on the resource-management strategy, described below. The data center contains 1,200 clusters, or 9,600 processors, with a total processing power of 115,200 GOPS (12 GOPS per processor).
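The aggregate capacity follows directly from these figures:

```c
#include <stdio.h>

/* Aggregate capacity of the simulated data center (figures from the text). */
int main(void) {
    const int clusters = 1200, procs_per_cluster = 8, gops_per_proc = 12;
    int procs = clusters * procs_per_cluster;                /* 9,600        */
    printf("%d processors, %d GOPS in total\n", procs, procs * gops_per_proc);
    return 0;
}
```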
The resource-management strategy should account for sporadic events or traffic peaks and adapt to changing environmental and data center conditions. We compare three simple strategies here:

- S1: For each cell, one cluster is dedicated to one operator. Each SDR application is thus processed by a single cluster (C = 1), assigned as a function of the user location (cell) and operator association.
- S2: Groups of C = 3 clusters are formed, combining the computing resources of 24 processors. These resources are shared among all users in a cell, irrespective of the operator association.
- S3: Groups of C = 4 clusters are formed, combining the computing resources of 32 processors. These resources are shared among the users of a single operator located in four neighboring cells.
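The three groupings can be summarized in a small configuration structure; this encoding is an illustration introduced for clarity, not part of the simulator.

```c
typedef struct {
    int clusters_per_group;  /* C: clusters pooled by one low-level manager  */
    int cells_per_group;     /* radio cells whose users share the group      */
    int operators_sharing;   /* 1 = dedicated to one operator, 3 = all share */
} strategy_t;

static const strategy_t S1 = { 1, 1, 1 };  /* one cluster per cell and operator */
static const strategy_t S2 = { 3, 1, 3 };  /* three clusters shared per cell    */
static const strategy_t S3 = { 4, 4, 1 };  /* four clusters, four cells, one op */
```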
Any low-level resource manager needs to
ensure that the first process of the SDR application executes on the processor receiving
the data from the ADC while managing
the limited processing resources and interprocessor bandwidths. We therefore employ
the t1-mapping and the two-term cost function presented in the previous section and
analyze the behavior of the three resource
management strategies in two use cases.

Use case 1: Time-varying service demands

Wireless subscribers use their mobile devices sporadically and for different purposes, but they do so more intensively at certain hours. The system load (number of active users) here increases from very low to very high to, finally, unstable, where the number of service demands exceeds system capacity. Special days like New Year's Eve are an example.

The simulation time span for this use case encompasses 25,000 events. At each event, a user initiates or terminates a wireless communications session following an exponential probabilistic arrival and departure process with equal probability of the cell association. There are some 20,500 session requests in total, with different service and QoS demands, and approximately 4,500 service terminations. Up to 12,500 (S1) or 13,000 (S2 and S3) user sessions are served simultaneously. The peak processing-resource utilization is 87.6 percent.
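A sketch of how such an event sequence could be generated follows; the rates and the fixed seed are assumptions, and only the structure of the process (exponential inter-event times, uniformly chosen cells) follows the text.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N_CELLS 400

static double exp_sample(double rate) {            /* inverse-transform draw */
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return -log(u) / rate;
}

int main(void) {
    const double arrival_rate   = 1.0;    /* assumed initiations per time unit  */
    const double departure_rate = 0.25;   /* assumed terminations per time unit */
    double now = 0.0;
    srand(1);

    for (int e = 0; e < 10; e++) {        /* print the first few of many events */
        double t_arr = exp_sample(arrival_rate);
        double t_dep = exp_sample(departure_rate);
        int arrival  = t_arr < t_dep;
        now += arrival ? t_arr : t_dep;
        printf("t=%8.2f cell=%3d %s\n", now, rand() % N_CELLS,
               arrival ? "session start" : "session end");
    }
    return 0;
}
```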
Figure 3. Active users, accepted active users, and active processors over simulation time. The number of active users (running sessions plus new session requests) increases from low to medium to very high. The resource management strategies try to absorb the variations in the number of active users.

Figure 3 shows the number of active users, accepted active users, and active processors over time. Processors are turned on and off as a function of processing demand and mapping decisions. Many processors are active at low or medium system loads because of the cost function's load-balancing feature. S1 uses fewer processors than S2 or S3 between simulation times 1,000 and 10,000. These differences disappear at higher system loads, when S1 and S3 reject more users than S2. A user is accepted or served if and only if all real-time computing constraints are met.

S1 dedicates computing resources to each operator within a cell. Because of the heterogeneous market shares, some clusters become more loaded than others. Operator 1's computing resources will saturate first, and new users will be rejected despite available computing resources in other parts of the data center. The S2 strategy enables sharing computing resources among the three operators within a radio cell. Because of the 50, 25, and 25 percent market shares, approximately 50 percent of the served users are associated with Operator 1 and 25 percent each with Operators 2 and 3. S2 thus adapts better to the heterogeneous market shares and serves more session requests at medium and high system loads. S3 joins four clusters of four cells assigned to a single operator. It performs only slightly better than S1 because of the homogeneous user distribution over the entire service area.

Note the balanced number of users entering and leaving between simulation times 5,000 and 6,000. Because of the high number of processors already in use while the data center runs at a low processing load (executing some 3,500 SDR applications), the termination of a user session in one cell and the acceptance of a new session in another statistically leads to fewer processors being employed. Resource managers track these variations and are aware of the computing resource states at any time.

Use case 2: Location-varying service demands

The second use case reflects an agglomeration of active users that move from one geographical area (downtown or a football stadium, for instance) to another (a residential area). The user locations follow a 2D Gaussian distribution in an area of 800 m × 800 m (16 radio cells), with the highest density in the two central cells. These users head north at regular time steps. Wireless traffic is low in the remaining service area.
The analysis is confined to 4 × 4 cells and the corresponding computing clusters (384 processors) of the data center. Figures 4a through 4c show the processor occupation over time. These figures indicate how groups of processors become active and increase and (later) decrease their activity corresponding to changing service demands.

Figure 4a indicates six highly occupied clusters (6 × 8 processors), which are associated with two cells and three operators. The cluster activity changes as users move across cell boundaries, approximately every 500 simulation time units. For S2, two groups of three clusters each (2 × 24 processors) present the highest occupation (Figure 4b). These two groups correspond to the two heavily loaded cells, whose computing resources are shared among three operators. Whereas S1 and S2 have the same number of highly involved processors, S2 accepts more users than S1 (Figure 4d) because of its resource-sharing capability.


Figure 4. Processor occupation over simulation time for Strategy 1 (a), Strategy 2 (b), and Strategy 3 (c). Accepted active users and processing resource occupation over time for S1, S2, and S3 (d). The figures indicate the differences between the three strategies in terms of processor occupation and number of accepted users for use case 2.

Figure 4c shows three groups of highest processor occupation. Each group is associated with one operator and 2 × 4 radio cells and consists of 2 × 4 clusters, that is, 64 processors. S3, which joins the computing resources of four clusters assigned to four neighboring cells, benefits from a low number of active users surrounding a busy cell; it can therefore accept more users, making better use of available computing resources (Figure 4d).

Wireless communications systems have evolved to become sophisticated and commonplace. With SDR on the horizon, computing issues will play a major role in wireless communications. Ongoing development toward high-speed wireless and wired communications networks will eventually enable real-time digital signal processing on demand. Combining SDR and cloud computing concepts facilitates increasing radio transmission capacity by centralizing the digital signal processing. Moreover, SDR clouds provide a scalable solution for the evolution of wireless communications. They open new opportunities for third parties or micro-operators, offering value-added and dynamically definable services. Cloud computing concepts, such as software as a service (SaaS) and platform as a service (PaaS), may eventually be adopted.


References
1. A.J. Paulraj et al., "An Overview of MIMO Communications - A Key to Gigabit Wireless," Proc. IEEE, vol. 92, no. 2, 2004, pp. 198-218.
2. X.H. You and X.Q. Gao, "Development of Beyond 3G Techniques and Experiment System: An Introduction to the FuTURE Project," ICT Shaping the World: A Scientific View, Wiley-Blackwell, Nov. 2008.
3. Y. Lin et al., "Wireless Network Cloud: Architecture and System Requirements," IBM J. Research and Development, vol. 54, no. 1, 2010, pp. 1-12.
4. E. Buracchini, "The Software Radio Concept," IEEE Comm., vol. 38, no. 9, 2000, pp. 138-143.
5. I.F. Akyildiz et al., "NeXt Generation/Dynamic Spectrum Access/Cognitive Radio Wireless Networks: A Survey," Computer Networks: Int'l J. Computer and Telecommunications Networking, vol. 50, no. 13, 2006, pp. 2127-2159.
6. R. Buyya et al., "Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility," Future Generation Computer Systems, Elsevier Science Publishers, vol. 25, no. 6, 2009, pp. 599-616.
7. J. Guo, F. Liu, and Z. Zhu, "Estimate the Call Duration Distribution Parameters in GSM System Based on K-L Divergence Method," Proc. IEEE Int'l Conf. Wireless Communications, Networking and Mobile Computing (WiCom 07), IEEE CS Press, 2007, pp. 2988-2991.
8. V. Marojevic, "Computing Resource Management in Software-Defined and Cognitive Radios," doctoral dissertation, Dept. of Signal Theory and Communications, Universitat Politècnica de Catalunya, 2009.
9. V. Marojevic, X. Reves, and A. Gelonch, "A Computing Resource Management Framework for Software-Defined Radios," IEEE Trans. Computers, IEEE CS Press, vol. 57, no. 10, 2008, pp. 1399-1412.
10. X. Reves et al., "Software Radios: Unifying the Reconfiguration Process over Heterogeneous Platforms," Eurasip J. Applied Signal Processing, vol. 2005, no. 16, 2005, pp. 2626-2640.
11. N.D. Doulamis et al., "Fair Scheduling Algorithms in Grids," IEEE Trans. Parallel and Distributed Systems, IEEE CS Press, vol. 18, no. 11, 2007, pp. 1630-1648.

Ismael Gomez Miguelez is a PhD candidate at the Department of Signal Theory and Communications at the Universitat Politècnica de Catalunya (UPC). His research interests include radio and computing resource management, with special emphasis on energy efficiency and software-defined radio. Gomez has an MS in electrical engineering from the UPC.

Vuk Marojevic is an assistant professor at the Department of Signal Theory and Communications at the Universitat Politècnica de Catalunya (UPC). His research efforts are on software-defined and cognitive radio architectures and algorithms for efficient and flexible computing resource management. Marojevic has a PhD in electrical engineering from the UPC.

Antoni Gelonch Bosch is an associate professor at the Department of Signal Theory and Communications at the Universitat Politècnica de Catalunya (UPC). His research focuses on implementation methodologies supporting cognitive resource management strategies and algorithms for flexible wireless communications systems and networks. Gelonch has a PhD in electrical engineering from the UPC.

Direct questions and comments about this article to Vuk Marojevic, Dept. of Signal Theory and Communications, Universitat Politècnica de Catalunya (UPC), C/. Jordi Girona 1-3, 08034 Barcelona, Spain; marojevic@tsc.upc.edu.