
DATA CENTERS

This article was published in ASHRAE Journal, February 2013. Copyright 2013 ASHRAE. Posted at www.ashrae.org. This article may not be copied and/or distributed
electronically or in paper form without permission of ASHRAE. For more information about ASHRAE Journal, visit www.ashrae.org.

Internal IT Load Profile Variability

Internet Traffic Fluctuations

By Donald L. Beaty, P.E., Fellow ASHRAE

There is a significant opportunity for engineers to deliver great value and great return on investment by right sizing for the IT load in data centers, even though it is a challenging task. In the sidebar, modified excerpts from recent Journal columns provide context for this article.
What is interesting about the PUE metric is that the data center industry was typically dominated by a focus on uptime/risk and capital cost. The combination of The Green Grid's PUE metric and ASHRAE's Thermal Guidelines for Data Processing Environments exposed the industry to the opportunity to save energy, and showed that operating conditions could be relaxed to facilitate energy savings.

Thermal Guidelines also produced a thermal report template for manufacturers to uniformly report actual heat release rather than nameplate values. Taken together, these items have driven improved energy efficiency and improved capital efficiency in data center design and construction.

Table 1 shows an example of the thermal report. Figure 1 shows an example of a real nameplate versus the thermal report. This is a very powerful demonstration of the variation of nameplate versus actual heat release. The PUE is the total facility power divided by the total IT power. The difference between PUE0 and PUE3 is demand versus consumption. Within the context of this article and load profile, PUE3 is the right metric.
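As a hedged illustration of the demand-versus-consumption distinction above (the function names and sample figures are my own, not from the article or The Green Grid), the two PUE categories can be sketched as:

```python
def pue0(peak_facility_kw: float, peak_it_kw: float) -> float:
    """PUE Category 0: a demand-based snapshot -- total facility power
    divided by IT power, both measured at peak IT equipment use."""
    return peak_facility_kw / peak_it_kw


def pue3(facility_kwh_12mo: float, it_kwh_12mo: float) -> float:
    """PUE Category 3: consumption-based -- cumulative energy over a
    12-month period, measured at the IT equipment connection point."""
    return facility_kwh_12mo / it_kwh_12mo


# Illustrative numbers only: a site can look fine at one peak instant
# (PUE0) yet tell a different story over a full year (PUE3) as the IT
# load and the cooling load fluctuate.
snapshot = pue0(1500.0, 1000.0)            # demand ratio at the peak instant
annual = pue3(12_000_000.0, 7_500_000.0)   # energy ratio over 12 months
```

Because a fluctuating load profile spends most of its hours away from the peak, the consumption-based PUE3 is the figure that tracks what the utility bill actually sees.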

Impact of the Internet


The amount of work being performed by IT equipment is related to the level of activity associated with whatever software applications are installed on that equipment. The level of activity is often linked to Internet or communication-based activity. Directly and indirectly, our lives are more and more impacted by the Internet.

The physical manifestation of the Internet is a server-based one, with most of those servers residing in data centers. The servers are the hardware, or furniture, fixtures, and equipment (FF&E), and the occupants are the software applications.

The loose definition of hardware is that it is physical and often not very flexible, in complete contrast to software, which is virtual, digital, and often both mobile and flexible.

Fluctuations in Internet traffic can sometimes be a good measure of fluctuations in the work performed by the IT equipment and, hence, fluctuations in the load.

Although it may not be surprising to hear that Internet traffic for a given industry has major seasonal surges, what may be more surprising is the drop in activity that occurs during specific times of day or week.

Some common seasonal surges include online shopping during the holidays (especially over Thanksgiving weekend, when some online retailers reported as much as 300% to 700% of their normal traffic for the rest of the year). Similarly, other seasonal surges include tax-related sites during tax season and election-related sites during an election.

Some common drops in activity occur when a major event diverts attention away from the Internet, such as big sporting events like the Super Bowl, the World Cup final and the Olympics (particularly the opening ceremony).

A daily fluctuation in Internet activity commonly occurs with the stock market: some data centers support high-frequency trading, others support only domestic exchanges, while others support multiple exchanges. Depending on the type of trading supported, the data

Configuration | Model Description                        | Typical Heat Release, Watts at 120 V | Nominal Airflow, cfm (m³/h) | Maximum Airflow at 35°C, cfm (m³/h)
Minimum       | 1-way 1.5 GHz Processor, 16 GB Memory    | 420                                  | 26 (44)                     | 40 (68)
Full          | 2-way 1.65 GHz Processor, Maximum Memory | 600                                  | 30 (51)                     | 45 (76)
Typical       | 1-way 1.65 GHz Processor, 16 GB Memory   | 450                                  | 26 (44)                     | 40 (68)

Note: Most new server fans are variable speed.

Table 1: Thermal report example.
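One practical use of thermal report data is a quick sanity check of the air temperature rise across a server. The sketch below uses the common sea-level sensible-heat approximation Q(Btu/h) ≈ 1.08 × cfm × ΔT(°F); the function name and the choice of the table's "Minimum" row are my own illustration, not part of the thermal report template:

```python
def exhaust_temp_rise_f(heat_release_w: float, airflow_cfm: float) -> float:
    """Approximate air temperature rise across a server (in °F), using
    the sea-level sensible-heat relation Q[Btu/h] = 1.08 * cfm * dT[°F]."""
    btu_per_hr = heat_release_w * 3.412   # 1 W = 3.412 Btu/h
    return btu_per_hr / (1.08 * airflow_cfm)


# Table 1 "Minimum" row: 420 W of heat release at 26 cfm nominal airflow
dt_nominal = exhaust_temp_rise_f(420, 26)   # roughly a 51°F (28°C) rise
# At the 40 cfm maximum-airflow figure the rise drops to roughly 33°F,
# which is why variable-speed server fans matter for the load profile.
dt_max = exhaust_temp_rise_f(420, 40)
```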



Data Center Definitions

What Makes Data Centers Different? (October 2012)
Data Center Occupancy: The primary occupants in a data center are not people and, therefore, the pattern and variation in occupancy is significantly different than in most commercial buildings; the variation in occupancy is actually significantly greater than in a commercial building.
Data Center Change Frequency: The frequency of change (plug loads, etc.) for a data center is often very rapid compared to most commercial buildings. The magnitude of change for a data center further compounds this difference.
Data Center Power Density: Typical data centers are often between 50 to 200 W/ft² (538 to 2,153 W/m²) of plug load, although some data centers are more than 1,000 W/ft² (10,764 W/m²). Even modern high-tech offices with all of their electronics are only around 10 to 15 W/ft² (108 to 161 W/m²).

center may only see high Internet traffic for eight hours of any given day.

Weather-related events such as natural disasters seem to have more of a roller-coaster type of impact on Internet traffic. Typically, surges occur before and after the event as people seek information, but activity drops during the event, presumably because of interruptions to service caused by significant surges in demand.

In summary, each data center has its own unique traffic profile, and each of those profiles can be subject to fluctuations of disruptive proportions. These fluctuations often are accounted for in the installed capacity of the server systems, specific to each site, and are something that is constantly managed.

Imminent Arrival of the Internet of Things


The speed at which hardware requirements and/or use (or both) can change represents a speed-of-change profile that is not normal in the building industry. Making this an even bigger challenge is that although the change can be incremental, it can also double or more in its thermal and power load demand. These are unusual extremes compared to the rest of the building industry, requiring a somewhat unique and more detailed approach to load profiling and forecasting. This challenge more than likely will only get worse.

The Internet of Servers is transitioning into the Internet of Things. In the context of this article, the Internet of Things can be thought of as virtual/digital representations of uniquely identifiable physical objects interconnected via the Internet, enabling activities such as sharing information and interoperability.

Smart grids, smart cities and smart buildings are all tied into the concept of the Internet of Things, which creates some interesting potential for overall global load profile improvement. However, it makes the load profile for data centers even more unpredictable. Essentially, it means many new applications will dilute the accuracy of historical load profile data.

Data Centers: Cooling as a Service (November 2012)


Cloud Computing: IT operations departments are migrating towards considering computing as a service (i.e., cloud computing). Cloud computing provides a more fluid IT environment that allows customers to modulate their IT usage in a manner more resembling a utility service. Cloud computing can be thought of as on-demand computing, which makes the load difficult to project. Depending on the cloud provider, there may be no constraint on the demand limit.

Data Center Energy Metric PUE (January 2013)


PUE Category 0 (PUE0): A power demand-based calculation measured during peak IT equipment use over a 12-month period (a simple snapshot).
PUE Category 3 (PUE3): An energy consumption-based calculation using a cumulative measurement over a 12-month period at the point of connection of the IT equipment.

Hardware Virtualization
The Internet of Things and its limitless potential load impact notwithstanding, Internet traffic fluctuations, and idle server power still being as much as 50% of maximum server power, have meant that it was common to see peak demands in a data center as a small percentage of the connected load, such as 25% (similar to other areas of the building industry). In other words, to make the Internet work for the fluctuations and customer demands, it has been built with large margins so that it works under all operating conditions. However, this has proved an inefficient use of energy, as well as capital.

As a result, there has been an overall trend towards virtualization of servers and networks, making them far more flexible for fluctuating demands, but also reducing the stranded capacity associated with servers whose applications are seldom used or run at low demand.

Virtualization started on mainframe computers back in the 1960s. Essentially, virtualization at the server level means a change from physical servers to virtual servers. From the perspective of the software applications, virtualization means

Figure 1: Comparison of thermal report and nameplate. Nameplate: 920 W (1 kVA with PF = 0.92); ASHRAE thermal report: 420 to 600 W.
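To make the figure's point concrete: if a designer sizes power and cooling to the 920 W nameplate while the server actually releases 420 to 600 W, the infrastructure is oversized by roughly 1.5 to 2.2 times. A minimal sketch of that arithmetic (the variable names are my own):

```python
nameplate_w = 920              # from the nameplate: 1 kVA at PF = 0.92
thermal_report_w = (420, 600)  # ASHRAE thermal report range (min, full config)

# Oversizing factor if the design point is the nameplate value rather
# than the reported actual heat release
oversize_best = nameplate_w / thermal_report_w[1]   # ~1.53 even at full load
oversize_worst = nameplate_w / thermal_report_w[0]  # ~2.19 at minimum config
```

Multiplied across thousands of servers, that gap is the capital and energy inefficiency the thermal report template was created to expose.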


ASHRAE Journal

73

that a single physical server can become many virtual servers and/or many physical servers can become a single virtual server. The purpose in doing so is to recapture stranded capacity and put it to better use. By virtualizing a server, many applications can now be used on that single server, at once or on demand.

Organizations that have moved towards hardware virtualization have reported a reduction in the quantity of servers (as much as a 10:1 reduction for some). In some cases, this has driven major
consolidation programs, resulting in large companies with, let's say, 20 data centers being able to consolidate to fewer than 10 data centers. This strategy results in a win-win situation, with both capital and energy efficiency realized.

Virtualization may be one measure that normalizes the load profile. However, the main point here is not to suggest any absolutes, other than the short life of technology and the continued expansion of the digital world, meaning a continued increase in the amount of data stored and in the uses for software.
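The arithmetic behind consolidation figures like the 10:1 reduction can be sketched as follows; the utilization numbers and the 60% target are illustrative assumptions of mine, not figures from the article:

```python
import math


def consolidated_hosts(physical_servers: int, avg_utilization: float,
                       target_utilization: float = 0.6) -> int:
    """Estimate the virtualized host count by conserving total work:
    the aggregate load of the physical fleet is repacked onto hosts
    each run at target_utilization."""
    total_work = physical_servers * avg_utilization
    return math.ceil(total_work / target_utilization)


# 100 lightly loaded servers averaging 6% utilization, repacked onto
# hosts run at a 60% target, yields 10 hosts -- the oft-cited 10:1 ratio.
hosts = consolidated_hosts(100, 0.06)
```

The same repacking is what reduces stranded capacity: the recaptured headroom is the difference between the old fleet's connected load and the new, smaller fleet's.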



Big Data
Another data center industry topic to consider is big data. Searching online for "big data" will return a large number of web pages describing the challenges of effectively handling the large volumes of data that something like the Internet of Things can yield. Handling this data includes its capture, storage, search, sharing and analysis.

This is a really hot topic that has recently caused disproportionately large growth in the data storage portion of the data center industry. It is another indication of the high variability and unpredictability of the data center load. Storage servers, unlike compute servers, tend to have a flatter load profile, and in some data centers they work to counteract the fluctuations caused by virtualized servers.

Conclusion
It is an incredibly difficult task to try to predict the load profile of a data center, which makes it incredibly difficult to right-size the cooling system to achieve optimum energy efficiency. The right solution for one data center will not work well for another, since each has a unique business model and traffic demands.
It is important to understand all of the variables that influence the load, both now and in the future, and to ensure designs are adaptive, flexible and receptive to potentially frequent and prolific change. By gaining these insights, steps can be taken to right-size the infrastructure so that superior energy efficiency can be more readily obtained.
Donald L. Beaty, P.E., is president of
DLB Associates Consulting Engineers, in
Eatontown, N.J. He is publications chair of
ASHRAE TC 9.9.

