1. INTRODUCTION
2. OVERVIEW AND ROLE OF DATA CENTER
3. DATA CENTER STANDARD TIA-942
4. DATA CENTER HISTORY FROM 60S TO THE CLOUD
5. GREEN DATA CENTER
DCDA stands for Data Center Design and Assessment. DCDA Consulting Services concentrates on designing Data Centers, either from scratch or guided by thorough assessments of your existing Data Centers.
DCDA Consulting is proud to be one of the very few local companies that specialize in Data Center Design and Assessment.
EXPERT: Trained Employees
PROCESSES: Standardization, Simplicity, Documentation
TECHNOLOGY: Data Processing, Communication, Data Storage
Reliability Basis: ESSENTIAL NETWORK PHYSICAL INFRASTRUCTURE (Power Supply, Rack, Cooling, Maintenance & Management)
Services that meet TIA-942 standards and best practices
Extended Services
To provide you with comprehensive assistance, we follow through on our design and assist you with the implementation of your Data Center project.
Training on Data Center Planning and Design
Customized training on Data Center planning and design.
Scope of Services
1. Floor Plan (Layout) of a Data Center (Architecture & Interior)
3. Electrical Design & Power Management
4. Cooling
5. Physical Security System (Access Control System and CCTV System)

The final deliverable takes the form of a Tender Document that covers client requirements, design and material specifications, shop drawings, and a complete BoQ for items 1 to 7.
Project references (columns: Client / Dimension / Tier)
Clients: ICON+ PLN; Sigma; Bentoel Malang; HM Sampoerna; Hortus Securitas; Lintas Arta; JTL; BII Data Center; Depdiknas Puspitek; Medco; HP, Teradata, Bandung; Bukopin DC Renovation; ICON+ PLN; Barclay Bank; Bakri Bumi Resources; ICON+ PLN; PT. Dizamatra; Bank Indonesia; PT. POS Indonesia (Persero); Bakri Bumi Resources; Niko Resources; PLN Pusat; Kementerian Keuangan; Garuda Indonesia
Dimensions: 800 m²; 800 m²; 100 m²; 100 m²; 30 m²; 100 m²; 500 m²; 120 m²; 135 m²; 100 m²; 200 m²; 800 m²; 150 m² & 100 m²; 80 m²; 800 m²; 500 m²; 1451 m²; 300 m² & 200 m²; 80 m²; 15 m²; 576 m²; 1000 m²; 1070 m²
Tiers: Tier 3 to 4; Tier 3; Tier 2; Tier 2; Tier 2; Tier 2; Tier 1; Tier 2; Tier 2; Tier 2; Tier 2; Tier 3 to 4; Tier 2; Tier 2; Tier 3 to 4; Tier 2; Tier 3; Tier 2; Tier 2; Tier 1; Tier 2; Tier 2; Tier 3
Our Approach
Data Center Design & Assessment references:
I. TIA-942 Telecommunications Infrastructure Standard for Data Centers
II. Best Practices and Green DC
III. Real hands-on experience from 20+ Data Center projects
IV. Keeping up with the latest developments of DC
2. OVERVIEW AND ROLE OF DATA CENTER
Data Centers are specialized buildings or rooms hosting ICT equipment, such as computing and networking systems, that keep a company's critical applications and websites running smoothly and reliably (24/7, 365 days a year, and ongoing).
[Photos: the Secure-24 and Paris Telecity data center facilities]
In most cities, our lives rely on the functioning and availability of one or more data centers. That is not an overstatement: activities in nearly every segment of human life, such as energy, lighting, telecommunications, the internet, transport, urban traffic, banking, security systems, public health, and entertainment, are controlled by data centers.
Most mobile-phone facilities, features, and applications use data center services (from SMS and chat to social networking, etc.).
[Chart: Facebook registered users per year, through 2009]
With 900 million registered users, Facebook would be the 3rd largest country in the world.
Today, the number of text messages sent and received every day exceeds the total population of the planet.
Every one of the 30+ billion Google searches performed each month runs on servers and their supporting infrastructure, and the billions of text messages sent every day are likewise carried by servers and their supporting infrastructure.
The Data Center has evolved from the domain of the IT department into a key topic in the boardroom.
3. DATA CENTER STANDARD TIA-942
Called or written as: Computer Room, Data Center, Data Centre, Server Room, Network Room, Network Closet, Telecommunication Room.
TIA-942 covers four areas: tiered reliability; site space and layout; cabling management & infrastructure; and environmental considerations.

Tiered reliability
Tier 1: a single path for power and cooling distribution, with no redundant components; 99.671% availability (max cumulative annual downtime is 28.8 hrs).
Tier 2: a single path for power and cooling distribution, with redundant components; 99.749% availability (max cumulative annual downtime is 22.0 hrs).
Tier 3: multiple power and cooling distribution paths, but only one path active, with redundant components, concurrently maintainable; 99.982% availability (max cumulative annual downtime is 1.6 hrs, i.e. < 1 day).
Tier 4: multiple active power and cooling distribution paths, redundant components, and fault tolerance; 99.995% availability (max cumulative annual downtime is 0.4 hrs).
Source: Uptime Institute
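As a quick check, the maximum annual downtime figures follow directly from the availability percentages. A minimal sketch (Python, assuming a 365-day year):

```python
# Convert an availability percentage into the implied maximum
# cumulative annual downtime.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def max_annual_downtime_hours(availability_pct: float) -> float:
    """Hours per year a site may be down at the given availability."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for tier, availability in [("Tier 1", 99.671), ("Tier 2", 99.749),
                           ("Tier 3", 99.982), ("Tier 4", 99.995)]:
    hours = max_annual_downtime_hours(availability)
    print(f"{tier}: {availability}% -> {hours:.1f} hrs/yr")
# Tier 1: 28.8, Tier 2: 22.0, Tier 3: 1.6, Tier 4: 0.4 hrs/yr
```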
Tier comparison (TIER II / TIER III / TIER IV):
Building type: Tenant / Stand-alone / Stand-alone
Staffing: 1 shift only / 1 + shifts / "24 by forever"
Distribution paths: 1 / 1 active, 1 passive / 2 active
Redundant components: N+1 / N+1 or N+2 / 2 (N+1) or S+S
Support space to raised-floor ratio: 30% / 80-90% / 100%
Initial watts per sq ft: 40-50 / 40-60 / 50-80
Ultimate watts per sq ft: 40-50 / 100-150 / 150+
Raised floor height: 18" (45 cm) / 30"-36" (75-90 cm) / 30"-36" (75-90 cm)
Floor loading per TIA-942, lbf/sq ft (kg/m²): 175 (=857) / 250 (=1225) / 250 (=1225)
Floor loading per Uptime Institute, lbf/sq ft (kg/m²): 100 (=488) / 150 (=732) / 150+ (>=732)
Utility voltage: 208, 480 / 12-15 kV / 12-15 kV
Months to implement: 3 to 6 / 15 to 20 / 15 to 20
Year first deployed: 1970 / 1985 / 1995
Construction cost per sq ft: $600 / $900 / $1,100+
Annual IT downtime: 22.0 hrs / 1.6 hrs / 0.4 hrs
Site availability: 99.749% / 99.982% / 99.995%
[Table excerpt: architectural requirements by tier (Tier-II / Tier-III / Tier-IV), including Ceiling Height and an Operations Center (yes for Tier-III and Tier-IV)]
Structural
Floor loading capacity (superimposed live load):
Tier I: 7.2 kPa (150 lbf/sq ft) = 734 kg/m²
Tier II: 8.4 kPa (175 lbf/sq ft) = 857 kg/m²
Tier III: 12 kPa (250 lbf/sq ft) = 1225 kg/m²
Tier IV: 12 kPa (250 lbf/sq ft) = 1225 kg/m²
Floor hanging capacity for ancillary loads suspended from below:
Tier I: 1.2 kPa (25 lbf/sq ft) = 122 kg/m²
Tier II: 1.2 kPa (25 lbf/sq ft) = 122 kg/m²
Tier III: 2.4 kPa (50 lbf/sq ft) = 245 kg/m²
Tier IV: 2.4 kPa (50 lbf/sq ft) = 245 kg/m²
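The kg/m² figures follow from the kPa ratings. A minimal conversion sketch (Python, assuming standard gravity; the table's 1225 kg/m² reflects slightly coarser rounding than 9.80665 m/s² gives):

```python
# Sanity-check the structural table: convert a pressure rating in kPa
# to the equivalent distributed mass load in kg/m^2.
G = 9.80665  # standard gravity, m/s^2

def kpa_to_kg_per_m2(kpa: float) -> float:
    """1 kPa = 1000 N/m^2; dividing by g yields kg/m^2."""
    return kpa * 1000.0 / G

for tier, kpa in [("Tier I", 7.2), ("Tier II", 8.4),
                  ("Tier III", 12.0), ("Tier IV", 12.0)]:
    print(f"{tier}: {kpa} kPa -> {kpa_to_kg_per_m2(kpa):.0f} kg/m^2")
# Tier I: 734, Tier II: 857, Tier III/IV: 1224 (~1225) kg/m^2
```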
Site space and layout
The space surrounding the DC must also be considered for future growth and planned for easy annexation.
The standard defines the following spaces and cabling elements:
Backbone cabling: provides connections between the ER, MDA, and HDAs.
Horizontal cabling: provides connections between the HDA, ZDA, and EDA.
ER (entrance room): the location for access provider (ISP/Co-Lo) equipment and demarcation points.
MDA (main distribution area): a centrally located area that houses the main cross-connect as well as core routers and switches for the LAN and SAN infrastructures.
Supporting spaces: offices, operations center, and support rooms, served over horizontal cabling from a telecommunication room (office & operations center LAN switches).
[Diagram: backbone and horizontal cabling within the computer room]
Most of our designs use (1) overhead pathways for telecom cabling and (2) power cabling under the raised floor (RF).
Cabling management & infrastructure
[Diagram: servers in the EDA cabled to switches in the HDA or MDA]
This architecture is prevalent in LANs and works well in smaller Data Centers; for larger, Tier III and IV Data Centers, the design limits growth.
Cabling Infrastructure
3. Cross-Connection
Permanent cabling to the EDA or SAN storage (backbone and horizontal cabling), with patch-cord cross-connections at the core switches (easier MACs: moves, adds, and changes).
Core Switch
A core switch is located in the core of the network and serves to interconnect edge switches. The core layer routes traffic from the outside world to the distribution layer and vice versa. Data in the form of ATM, SONET and/or DS1/DS3 will be converted into Ethernet in order to enter the Data Center network. Data will be converted from Ethernet to the carrier protocol before leaving the data center.
Distribution Switch
Distribution switches are placed between the core and edge devices. Adding a third layer of switching adds flexibility to the solution. Firewalling, load balancing, content switching, and subnet monitoring take place here, aggregating the VLANs below. Multimode optical fiber is typically the media running from the distribution layer to the core and edge devices.
Not every data center will have all three layers of switching. In smaller Data Centers the core and distribution layer are likely to be one and the same.
This maps to a top-of-the-rack interconnect: the EDAs serve the electronics in each cabinet, the ZDAs serve the EDAs, and the ZDAs home-run back to the MDA.
A good data center layout adapts flexibly to new needs and enables a high degree of documentation and manageability at all times. Customers can choose from a variety of cabling structures, described below:
- end of row (or dual end of row)
- middle of row
- top of rack
- two-row switching
End of Row (in one rack or in two racks): from each (dual) end-of-row rack, the LAN & SAN switches cable back to the MDA core switches.
Middle of Row: from each server rack's LAN switch back to the MDA core switches.
Top of Rack: each EDA rack has its own SAN, LAN & KVM switch (e.g., a Nexus 5000 LAN switch), with LAN & SAN uplinks back to the MDA.
Two Row (traditional, early TIA-942 implementation): no active network devices in the EDA, only cable management; cabling runs back to the access & core switches in the MDA.
Sample
Overhead cable pathways
Benefits:
- Alleviates congestion beneath the access floor
- Creates segregated pathways
- Minimizes obstruction to cold air
Concerns:
- Requires adequate space above racks
- Needs infrastructure provisions to support the pathway
- Cabling may be exposed
Under-floor cable pathways
Benefits:
- Pedestals create infrastructure pathways
- Good utilization of real estate
- Cabling is hidden
Concerns:
- Could restrict cold airflow
- Creating segregated pathways is harder
- Accessibility to cables
Proper cable installation & labeling in the pathways, work areas, equipment areas, and wiring centers is a must.
Environmental considerations
Cooling: the standard recommends the use of adequate cooling equipment as well as a raised-floor system for more flexible cooling. Additionally, the standard states that cabinets and racks should be arranged in an alternating pattern to create hot and cold aisles.
Development of IT Equipment
Today: 1-RU servers, 42 servers per rack; LAN: 84 connections per rack; SAN: 84 per cabinet.
Blade servers in 42U racks: blade-server clusters, 84 servers per rack; LAN: 168 connections per rack; SAN: 168 per cabinet.
4. DATA CENTER HISTORY FROM 60S TO THE CLOUD
Up until the early 1960s, computers were primarily used by government agencies. They were large mainframes stored in rooms much like what we call a data center today. In the early 1960s, many computers cost about $5 million each, and time on one of these computers could be rented for $17,000 per month. By the mid-1960s, computer use had developed commercially and was shared by multiple parties. American Airlines and IBM teamed up to develop a reservation program called the Sabre system. It was installed on two IBM 7090 computers, located in a specially designed computer center in Briarcliff Manor, New York. The system processed 84,000 telephone calls per day.
Punch cards
Early computers often used punch cards for input of both programs and data. Punch cards remained in common use until the mid-1970s. Notably, the use of punch cards predates computers: they were used as early as 1725 in the textile industry (for controlling mechanized textile looms).
Above left: Punch card reader. Above right: Punch card writer.
Source: Pingdom
Magnetic drum memory
Invented all the way back in 1932 (in Austria), it was widely used in the 1950s and 60s as the main working memory of computers. In the mid-1950s, magnetic drum memory had a capacity of around 10 kB.
Above left: The magnetic drum memory of the UNIVAC computer. Above right: A 16-inch-long drum from the IBM 650 computer. It had 40 tracks, 10 kB of storage space, and spun at 12,500 revolutions per minute.
The hard disk drive
The first hard disk drive was the IBM Model 350 Disk File, which came with the IBM 305 RAMAC computer in 1956. It had 50 24-inch discs with a total storage capacity of 5 million characters (just under 5 MB).
The first hard drive to have more than 1 GB in capacity was the IBM 3380 in 1980 (it could store 2.52 GB). It was the size of a refrigerator, weighed 550 pounds (250 kg), and the price when it was introduced ranged from $81,000 to $142,400.
Above left: A 250 MB hard disk drive from 1979. Above right: The IBM 3380 from 1980, the first gigabyte-capacity hard disk drive.
The floppy disk
The diskette, or floppy disk (named so because it was flexible), was invented by IBM and in common use from the mid-1970s to the late 1990s. The first floppy disks were 8 inches; later came 5.25-inch and 3.5-inch formats. The first floppy disk, introduced in 1971, had a capacity of 79.7 kB and was read-only. A read-write version came a year later.
Above left: An 8-inch floppy and floppy drive next to a regular 3.5-inch floppy disk. Above right: The convenience of easily removable storage media.
Data Center 2000s: Energy Efficiency Concerns and the Green Data Center
2000s: As of 2007, the average data center consumes as much energy as 25,000 homes. There are 5.75 million new servers deployed every year. The number of government data centers has gone from 432 in 1999 to 1,100+ today. Data centers account for 1.5% of US energy consumption, and demand is growing 10% per year. Facebook launched the Open Compute Project, publishing the specifications of its Prineville, Oregon data center, which uses 38% less energy to do the same work as Facebook's other facilities while costing 24% less. As online data grows exponentially, there is an opportunity (and a need) to run more efficient data centers.
Cloud computing is a general term for anything that involves delivering hosted services over the Internet. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The name cloud computing was inspired by the cloud symbol that's often used to represent the Internet in flowcharts and diagrams.
Cloud Provider
5. GREEN DATA CENTER
Background
Data center spaces can consume 100 to 200 times as much electricity as standard office spaces, so they are prime targets for energy-efficient design measures that can save money and reduce electricity use. However, the critical nature of data center loads elevates many design criteria (chiefly reliability and high power-density capacity) far above energy efficiency.
Green Design
Green Procurement (& clean)
Green Operation (& sustainability)
Green Disposal
PUE
Detailed calculation:
Total IT Load: 94 kW
Cooling Infrastructure: 80 kW
Power System Load: 24 kW
Lighting Load: 2 kW
Total Facility Load: 200 kW
PUE: 2.13
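PUE is total facility power divided by total IT power. A minimal sketch (Python) reproducing the calculation above:

```python
# PUE (Power Usage Effectiveness) = Total Facility Load / Total IT Load.
it_load_kw = 94.0
cooling_kw = 80.0
power_system_kw = 24.0  # power system losses (e.g., UPS and distribution)
lighting_kw = 2.0

total_facility_kw = it_load_kw + cooling_kw + power_system_kw + lighting_kw
pue = total_facility_kw / it_load_kw

print(f"Total facility load: {total_facility_kw:.0f} kW")  # 200 kW
print(f"PUE: {pue:.2f}")                                   # 2.13
```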
Key elements in a green data center design, selection, and procurement:
1. Information Technology (IT) systems and their environmental conditions; data center air management; cooling and electrical systems; on-site generation; and heat recovery.
2. Metrics and benchmarking values to evaluate the energy efficiency of data center systems.
IT system energy efficiency and environmental conditions are presented first because measures taken in these areas have a cascading effect of secondary energy savings for the mechanical and electrical systems.