
Storage Networking 101

SAN Solutions

David J. Mossinghoff

Director, Storage Solutions


Forsythe Solutions Group
June, 2006
Copyright Storage World Conference 2006. All rights Reserved.

Overview
The focus of this session is architecting networked storage for the enterprise; it includes tips on implementing a tiered networked storage infrastructure for both local and remote access.

In this session you will learn:


- The current state of networked storage protocols, how they relate to disk technologies, and where these storage networking (SNW) technologies will merge and diverge
- How best to use file- or block-based storage with Fibre Channel and IP protocols in your storage network
- The areas of storage intelligence in networked storage today, and what the future might hold
- Factors to consider when building a total cost of ownership (TCO) comparison between the different technologies and protocols

Agenda
- Information Management: the Big Picture
- Today's Storage Challenges
- Introduction to Storage Networking Architectures
- The 6 Levels of Storage Networking
- Virtualization, with a focus on SAN
- Q&A

The Big Picture

The main focus today is on storage networking technology.


Today's Storage Challenges


- Selecting the right mix of storage technology
- Streamlining the process of managing resources
- Improving availability of data, with adequate protection/security
- Providing for your company's current and future storage needs
- Delivering cost-effective solutions that meet the business needs

Value of Storage Networking

- Key enabler for server/storage consolidation, tiering, and virtualization
- More effective and efficient data availability
- High operational availability
- Enhanced backup & recovery
- Enhanced disaster recovery & restart
- Improved scalability, provisioning, and utilization of storage capacity
- Improved application performance
- Improved data/file sharing
- Enabler for significant TCO savings for data & storage management

Proof points
Top reasons for deploying a SAN*
- Improved back-up & recovery: 46%
- Server and storage consolidation: 40%
- On-going demands for additional capacity: 37%
- Improved application performance: 31%
- Improved disaster recovery: 27%
- New project or application deployment: 23%

* Source: IDC IT Management Survey - 2005



Networked Storage Technologies


SAN (storage area network)
- Type of transport: Fibre Channel (FCP, FICON); IP (iSCSI, FCIP, iFCP)
- Type of data: Block
- Key requirement: High operational availability, deterministic performance
- Typical applications: OLTP, data warehousing, ERP

NAS (network-attached storage)
- Type of transport: IP (front end); optional FC (back end)
- Type of data: File
- Key requirement: Multi-protocol file sharing
- Typical applications: Software and product development; file-server consolidation

CAS (content-addressed storage)
- Type of transport: IP
- Type of data: Object (fixed content)
- Key requirement: Long-term retention, integrity assurance
- Typical applications: Content management, compliant storage and retrieval


Storage & SNW Technology Alternatives


DAS (direct-attached storage)
- Application server owns the file system and attaches to JBOD or basic disk via SCSI, FC, or ATA
- High cost of ownership (utilization, management)
- Inflexible; "sneaker-net" management

NAS/CAS
- Application servers access the device over TCP/IP through an Ethernet switch; the file system and RAID reside on the NAS/CAS device
- Transmission optimized for file- or object-oriented transactions
- I/O traffic travels over Ethernet; NAS may also use a gateway into an FC SAN

SAN
- Application servers keep their file systems and reach RAID storage through an FCP switch/director or an iSCSI/IP Ethernet switch
- Transmission optimized for block I/O data movement
- Separates LAN and SAN traffic
- FC SAN is mature; iSCSI is emerging and viable

SAN Characteristics
Service level enablers
- Operational availability
- Reliability and serviceability
- Performance (response time and throughput)
- Scalability (with performance)
- Provisioning ability for new ports/connections


SAN Characteristics
Related key characteristics
- Viability of manufacturer / market share
- Quality of partnership with your company
- Quality of service and support
- Certified support of required servers/OS levels
- Efficient and effective SAN manageability
- Total cost sensitivity (Price = Cost)



Key Priorities: An example


[Chart: each storage-connectivity option (FC core<>edge with edge switches, FC director or collapsed core, FC switch, SAN extension, NAS/iSCSI IP SAN, DAS) is mapped to the storage tiers it typically supports (Tier 1 through Tier 3/4) and given a relative priority rating (Very High, High, Medium, Low, Lowest) for each service level: availability, reliability/serviceability, performance, scalability, provisioning, and price sensitivity. The ratings are similar for the other "key characteristics".]




Economics - Storage Connectivity


Connectivity type | Server image size | Path mgmt software (not included) | Cost of connection
FC Director / Collapsed Core | Very Large | Probable | $12,400
SAN Ext. (Bridge/Router) | N/A | N/A | $8,000
FC Core<>Edge | Large | Probable | $4,700
FC Switch | Large, Med | Potential | $3,700
FC Switch | Med | N/A | $1,850
IP Switch / FC Bridge | Med, Small | Potential | $1,600
IP Switch | Med, Small | Potential | $1,130
IP Switch / FC Bridge | Small | N/A | $765
IP Switch | Small | N/A | $565

Includes director, switch, HBA, cabling costs over 3 yrs.
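The per-connection figures above lend themselves to a quick relative comparison. Below is a minimal Python sketch (illustrative only, not from the original material) that simply transcribes the table and ranks the options against the least expensive alternative.

```python
# Illustrative sketch: 3-year cost per connection from the table above,
# keyed by (connectivity type, typical server image size).
costs = {
    ("FC Director / Collapsed Core", "Very Large"): 12_400,
    ("SAN Ext. (Bridge/Router)",     "N/A"):         8_000,
    ("FC Core<>Edge",                "Large"):        4_700,
    ("FC Switch",                    "Large, Med"):   3_700,
    ("FC Switch",                    "Med"):          1_850,
    ("IP Switch / FC Bridge",        "Med, Small"):   1_600,
    ("IP Switch",                    "Med, Small"):   1_130,
    ("IP Switch / FC Bridge",        "Small"):          765,
    ("IP Switch",                    "Small"):          565,
}

cheapest = min(costs.values())
for (conn, size), dollars in sorted(costs.items(), key=lambda kv: -kv[1]):
    # Show each option's cost and how many times the cheapest option it represents.
    print(f"{conn:30s} {size:11s} ${dollars:>6,}  ({dollars / cheapest:.1f}x cheapest)")
```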



6 Levels of Storage Networking


Levels of Data Availability:
1. Direct Attached Storage (DAS)


Problem: Stranded Storage


[Diagram: servers with direct-attached storage, connected only by a Gigabit/100 Mb Ethernet/IP LAN]

- Poor use of disk capacity
- Inadequate data protection may lead to artificial server growth
- Minimal-to-no disk storage management
- Difficult to share data between applications
- Major inhibitor: the cost of FC SAN connectivity may be higher than the cost of the server!

6 Levels of Storage Networking


Levels of Data Availability:
1. Direct Attached Storage (DAS)
2. FC SAN, Switch Based (Local)


FC SAN-Switch Design
[Diagram: servers with single or dual FC HBAs connect via node (N) ports to fabric (F) ports on 1, 2, or 4 Gb FC switches; storage attaches to the switches through its own N-ports. Fibre Channel carries storage traffic; Gigabit/100 Mb Ethernet/IP carries LAN traffic.]


FC SAN with FC Switches


- Simple design
- Low cost relative to other FC SAN alternatives
- Scales well from a few to ~100 usable ports
- Simple to manage
- Universally supported / certified
- Multiple manufacturer / HBA provider options
- Larger environments may end up with multiple SAN islands

FC SAN-Switch Design-Mesh
Scales up to ~100 usable ports

[Diagram: servers with single or dual FC HBAs connect to a mesh of 1, 2, or 4 Gb FC switches; expansion (E) ports link the switches and create inter-switch links (ISLs), and traffic between switches crosses one or more hops. Fibre Channel carries storage traffic; Gigabit/100 Mb Ethernet/IP carries LAN traffic.]
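Because every ISL consumes an E-port on both switches it connects, usable ports fall off quickly as a mesh grows. The sketch below is illustrative only (the switch and ISL counts are assumed, not taken from the slides) and just shows the arithmetic for a small full-mesh fabric.

```python
def mesh_usable_ports(switches: int, ports_per_switch: int, isls_per_pair: int) -> int:
    """Usable (non-ISL) ports in a full-mesh fabric.

    Every pair of switches is linked by `isls_per_pair` ISLs, and each ISL
    consumes one E-port on each end.
    """
    pairs = switches * (switches - 1) // 2
    isl_ports = 2 * pairs * isls_per_pair
    return switches * ports_per_switch - isl_ports

# Hypothetical example: four 32-port switches.
print(mesh_usable_ports(4, 32, 1))   # 128 total ports - 12 E-ports = 116 usable
print(mesh_usable_ports(4, 32, 2))   # add a second ISL per pair: 104 usable
```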


6 Levels of Storage Networking


Levels of Data Availability:
1. Direct Attached Storage (DAS)
2. FC SAN, Switch Based (Local)
3. IP Storage Networks (NAS, iSCSI) (Local or Distance)

iSCSI Market
[Chart: IDC forecast of the worldwide iSCSI market growing steadily from 2005 through 2008. Source: IDC, 2006]



IP Storage: Why the Buzz?


- Data growth, sharing, and server proliferation can be expensive and cumbersome to manage
- Scalability, data availability, and sharing can be major problems in a DAS environment
- Over-provisioning and data-protection complexity make DAS increasingly expensive
- Some applications require a block-based (DAS or SAN) solution
- Other applications may benefit from a file-based (NAS) solution
- Cost, complexity, and lack of expertise can prohibit a traditional Fibre Channel SAN implementation
- IP storage networking (iSCSI & NAS) can address these challenges

Advantages of IP Networking
Common and well-proven technology
- Low acquisition costs
- Standards-based solutions
- Commodity economics
- Ethernet in every corporation

Low management costs
- Familiar network technology and management tools
- Proven, reliable, interoperable transport infrastructure

Local-area and wide-area network connectivity
- WAN enables remote data replication and disaster recovery

Long-term viability
- Large R&D investment profile and strong roadmap
- 10 Gb Ethernet is emerging and is significant for IP storage

Why iSCSI is Important


- NAS has proven the viability of IP storage networking
- iSCSI software initiators included with major operating systems ease deployment of IP SANs
- Networking capabilities can simplify IP SAN management
- Lower-cost infrastructure broadens the reach of IP SAN solutions
- Leveraging IP networking investments and knowledge lowers total cost of ownership
iSCSI is a viable IP-SAN solution today!
(for the right applications)

iSCSI Building Blocks


iSCSI is SCSI-3 command frames encapsulated in IP packets (typically over GbE)
- IETF standard, documented in RFC 3720

Host / initiator options:
- iSCSI software initiator (standard NIC)
- TCP off-load engine (TOE)
- iSCSI host bus adapter (HBA); some support remote boot

Disk array / target options:
- Handled by an iSCSI-compatible storage array (iSCSI software target driver, standard NIC connectivity)
- iSCSI-to-FC-SAN bridges are available from multiple manufacturers
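Since iSCSI rides on ordinary TCP/IP, ordinary network tools apply. As a minimal sketch (illustrative, not from the original material; the portal address is a documentation placeholder, not a real array), the following checks whether an iSCSI target portal is reachable on the standard iSCSI TCP port, 3260, before an initiator attempts discovery or login.

```python
import socket

ISCSI_PORT = 3260  # well-known TCP port for iSCSI target portals

def portal_reachable(host: str, port: int = ISCSI_PORT, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to the target portal succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 192.0.2.10 is a TEST-NET documentation address; substitute your array's portal IP.
    print(portal_reachable("192.0.2.10"))
```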


Solution: iSCSI Integration
[Diagram: servers connect via iSCSI over the Gigabit/100 Mb Ethernet/IP network, either directly to an iSCSI disk array (iSCSI tape is also possible) or through an iSCSI-to-FC-SAN bridge into the existing dual-fabric (F1/F2) FC SAN, where storage is consolidated.]

- Connects servers via iSCSI to an existing Fibre Channel SAN
- Low cost per server connection
- Leverages the existing IP network and skills
- Improved usage and flexibility of storage assets for applications
- Improved ability for centralized data protection

Proof Points - Performance


Enterprise Strategy Group Validation study (4/04)
http://www.netapp.com/tech_library/ftp/analyst/ar1023.pdf


Proof Points - Performance


Enterprise Strategy Group Validation study (4/04)
http://www.netapp.com/tech_library/ftp/analyst/ar1023.pdf

iSCSI performance is close to FC


Workload: 4,000 SQL users + 4,000 Notes users + 8 SAS queries

[Chart: throughput in transactions per minute (TPM) for DAS, iSCSI, and FC under this mixed workload]

Improvement vs. DAS:
- Throughput: iSCSI 33%, FC 44%
- Response time: iSCSI 63%, FC 88%


6 Levels of Storage Networking


Levels of Data Availability:
1. Direct Attached Storage (DAS)
2. FC SAN, Switch Based (Local)
3. IP Storage Networks (NAS, iSCSI) (Local or Distance)
4. FC SAN, Director Based, Dual Fabric (Local)


FC Director Design Options

Required Connectivity to each Director and Fabric

[Diagram: dual fabrics F1 and F2 built from directors (64-, 140-, and 256-port examples) linked by ISLs/hops, with tape and/or virtual-tape subsystems attached; 2 and/or 4 Gb FC links with trunking.]


6 Levels of Storage Networking


Levels of Data Availability:
1. Direct Attached Storage (DAS)
2. FC SAN, Switch Based (Local)
3. IP Storage Networks (NAS, iSCSI) (Local or Distance)
4. FC SAN, Director Based, Dual Fabric (Local)
5. FC SAN, Core<>Edge or Collapsed Core, Multi-Fabric (Local)


Core<>Edge Design Options


Required Connectivity to each Director and Fabric

[Diagram: 1 Gb and 2/4 Gb edge switches connect through ISLs/hops to core directors (140- and 256-port examples) in fabrics F1 and F2; tape and/or virtual-tape subsystems attach to the core; 1 Gb and 2/4 Gb FC links with trunking.]


Large SAN Connectivity Concern


- Inter-switch links (ISLs) are used to link switches and/or directors to build larger SANs
- Multiple ISLs are typically required for performance
- Usable server/storage ports go down, so the effective price per port goes up
- ISL traffic assignment is static, so individual links may be under-utilized

[Diagram: servers feeding four ISLs with uneven static utilization of roughly 10%, 30%, 60%, and 90%.]

Solution: Trunking
(with Automatic Load Balancing)
- Fabric data traffic is distributed more evenly among the ISLs
- All ISLs share bandwidth
- Overall bandwidth is improved
- Network design and administration are simplified

[Diagram: with trunking, the servers' traffic crosses a trunk of four ISLs, each running at roughly 50% usage.]


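A toy model (illustrative only, not from the presentation; the traffic values are invented) of why trunking evens things out: with static routing, each server's traffic is pinned to one ISL at fabric login, so a few busy servers can saturate one link while others idle; with a trunk group, traffic from all servers is spread across every member link.

```python
import random

random.seed(1)
ISLS = 4
# Hypothetical per-server traffic (Gb/s); a few busy servers dominate.
server_load = [random.choice([0.1, 0.2, 0.5, 2.0]) for _ in range(16)]

# Static routing: each server's traffic is pinned to a single ISL.
static = [0.0] * ISLS
for i, load in enumerate(server_load):
    static[i % ISLS] += load

# Trunking: traffic from all servers is spread evenly across the trunk group.
trunked = [sum(server_load) / ISLS] * ISLS

print("static  per-ISL load (Gb/s):", [round(x, 1) for x in static])
print("trunked per-ISL load (Gb/s):", [round(x, 1) for x in trunked])
```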

What is a Fan-in-Ratio?
When using a core<>edge design, it is important to consider the fan-in ratio: a measure of the maximum incoming bandwidth relative to the available ISL bandwidth into the core.

Example:
- Servers have 1 Gb FC HBAs coming into 32-port FC switches (2 Gb capable)
- The switches are connected via ISLs to core directors (also 2 Gb capable; assume ISL trunking is enabled)
- If (3) 2 Gb ISLs per switch are used, the fan-in ratio is:
  32 ports - 3 ISL ports = 29 ports x 1 Gb = 29 Gb of offered load
  (3) ISLs x 2 Gb = 6 Gb of ISL bandwidth
  29 Gb / 6 Gb ≈ 5:1 fan-in ratio (a good rule of thumb)
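For convenience, here is the same arithmetic as a small helper (an illustrative sketch, not part of the original material); it reproduces the 32-port, three-ISL example above.

```python
def fan_in_ratio(edge_ports: int, isl_count: int, host_gb: float, isl_gb: float) -> float:
    """Worst-case offered host bandwidth divided by ISL bandwidth into the core."""
    host_ports = edge_ports - isl_count        # ports left for server HBAs
    offered_gb = host_ports * host_gb          # 29 ports x 1 Gb = 29 Gb in the example
    uplink_gb = isl_count * isl_gb             # 3 ISLs x 2 Gb = 6 Gb
    return offered_gb / uplink_gb

print(f"{fan_in_ratio(32, 3, 1.0, 2.0):.1f}:1")   # ~4.8:1, i.e. roughly 5:1
```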


Director / Switch Scalability


[Matrix: recommended platform by host count and priority (lowest acquisition cost, highest availability, least complexity). Up to 64 hosts: switches satisfy all three. At 64 to 256 hosts and again at 256 to more than 1,000 hosts, switches remain the lowest-acquisition-cost option, while directors (or switch-plus-director designs) increasingly win on availability and complexity.]


Advantages of Core:Edge
- Scalability: up to (16) switch domains can be attached to each director fabric
  - Director E-ports are auto-sensing
- Scalability example of usable (non-ISL) ports (a rough sketch of the arithmetic follows below):
  - (2) 64-port directors + edge switches = 650 usable ports
  - (2) 140-port directors + edge switches = 1,100 usable ports
  - (2) 256-port directors + edge switches = 1,326 usable ports
- The effective cost per port is reduced vs. an all-director solution
  - Due to the lower cost per port of FC switches
  - Scalability within a fabric is increased more economically
- If designed properly:
  - No single points of failure in the SAN (for dual-path servers)
  - Performance scales with port count
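The usable-port figures above come from specific vendor configurations; the edge-switch and ISL counts behind them are not stated on the slide. As a rough sketch of the arithmetic only (all counts below are hypothetical), each ISL consumes one port on the edge switch and one on a director:

```python
def usable_ports(directors: int, director_ports: int,
                 edge_switches: int, switch_ports: int,
                 isls_per_switch: int) -> int:
    """Usable (non-ISL) ports in a core<>edge fabric pair."""
    total = directors * director_ports + edge_switches * switch_ports
    isl_ports = 2 * edge_switches * isls_per_switch   # one end on the switch, one on a director
    return total - isl_ports

# Hypothetical configuration: (2) 140-port directors, (16) 32-port edge
# switches, (3) ISLs per switch -- not the configuration behind the slide's numbers.
print(usable_ports(2, 140, 16, 32, 3))   # 792 total ports - 96 ISL ports = 696 usable
```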

6 Levels of Storage Networking


Levels of Data Availability:
1. Direct Attached Storage (DAS)
2. FC SAN, Switch Based (Local)
3. IP Storage Networks (NAS, iSCSI) (Local or Distance)
4. FC SAN, Director Based, Dual Fabric (Local)
5. FC SAN, Core<>Edge or Collapsed Core, Multi-Fabric (Local)
6. SAN over Distance (Replication, Remote Tape, Extended SAN): Gigaman, OCx, DWDM; iFCP, FCIP bridging/routing


Extending the SAN over Distance


[Diagram: a Fibre Channel director/switch at each site connects to an FCP gateway; the gateways terminate E_Ports locally and carry traffic across the MAN/WAN (IP routing in the middle). The gateway-to-gateway protocol is either FCP, FCIP, or iFCP.]

- Supports direct Fibre Channel connection to storage
- Provides either server<>storage and/or array<>array connectivity

FCP

[Diagram: the existing SAN (fabrics F1/F2) connects through an FCP/DWDM gateway, across a metro optical network (MON, dark fibre), to a matching FCP/DWDM gateway and SAN (F1/F2) at the remote/replication site; the link is a Fibre Channel DWDM link.]

- High metro optical network (DWDM) bandwidth
- SAN extension through ISLs creates one large set of fabrics
- Faults propagate across the entire fabric
- Service disruptions from fabric changes impact all fabrics
- Custom network configurations are supported with SONET or ATM

FCIP
[Diagram: the existing SAN (F1/F2) connects through an FCIP gateway, across an IP network, to a matching FCIP gateway and SAN (F1/F2) at the remote/replication site; Fibre Channel at each end, Gigabit Ethernet/IP in between.]

- SAN extension through ISLs creates one large set of fabrics
- Faults propagate across the entire fabric
- Service disruptions from fabric changes impact all fabrics
- Custom network configurations are supported with SONET or ATM
- Optional data compression and fast-write features can result in higher throughput and lower network costs (see the sketch below)
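Why do compression and fast write matter over distance? A single TCP stream's throughput is roughly bounded by window size divided by round-trip time, so even a fat WAN pipe can be under-used; compression shrinks the payload, and fast write reduces the number of WAN round trips per SCSI write. A back-of-the-envelope sketch (illustrative; the 64 KB window and 20 ms RTT are assumed values):

```python
def tcp_stream_mbps(window_kb: float, rtt_ms: float) -> float:
    """Approximate single-stream throughput limited by TCP window / round-trip time."""
    return window_kb * 8 / rtt_ms   # KB * 8 = kilobits; kilobits per ms = Mb/s

# A 64 KB window over a 20 ms round trip caps one stream far below Gigabit Ethernet.
print(f"{tcp_stream_mbps(64, 20):.0f} Mb/s")   # ~26 Mb/s
```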

iFCP Solution

[Diagram: the existing SAN (F1/F2) connects through an iFCP gateway, across an IP network, to a second iFCP gateway and a separate SAN (F3/F4) at the replication site; Fibre Channel at each end, Gigabit Ethernet/IP in between.]

- The iFCP protocol provides fabric isolation between sites
- Prevents fault propagation; zone definitions remain isolated per site
- Optional data compression and fast-write features can result in higher throughput and lower network costs

SAN Cabling Considerations


(Speed and Distance Matter)

Operating distance by fiber filament core and port speed:
- 50 micron (multi-mode): 1 Gb/s: 500 m; 2 Gb/s: 300 m; 4 Gb/s: 150 m
- 62.5 micron (multi-mode): 1 Gb/s: 300 m; 2 Gb/s: 150 m; 4 Gb/s: 70 m
- 9 micron (single-mode): 1 Gb/s: 10 km; 2 Gb/s: 10 km; 4 Gb/s: 2 km
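The same distances expressed as a small lookup (an illustrative sketch transcribing the table above), handy for sanity-checking whether a planned cable run is supported at a given speed:

```python
# Operating distance in metres, keyed by (fiber core, port speed), from the table above.
MAX_DISTANCE_M = {
    ("50um multi-mode",   "1Gb"): 500,    ("50um multi-mode",   "2Gb"): 300,    ("50um multi-mode",   "4Gb"): 150,
    ("62.5um multi-mode", "1Gb"): 300,    ("62.5um multi-mode", "2Gb"): 150,    ("62.5um multi-mode", "4Gb"): 70,
    ("9um single-mode",   "1Gb"): 10_000, ("9um single-mode",   "2Gb"): 10_000, ("9um single-mode",   "4Gb"): 2_000,
}

def run_supported(fiber: str, speed: str, run_m: float) -> bool:
    """True if the cable run length is within the rated distance for that fiber/speed."""
    return run_m <= MAX_DISTANCE_M[(fiber, speed)]

print(run_supported("50um multi-mode", "4Gb", 120))    # True  (150 m limit)
print(run_supported("62.5um multi-mode", "4Gb", 120))  # False (70 m limit)
```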


6 Levels of Storage Networking


Levels of Data Availability:
1. Direct Attached Storage (DAS)
2. FC SAN, Switch Based (Local)
3. IP Storage Networks (NAS, iSCSI) (Local or Distance)
4. FC SAN, Director Based, Dual Fabric (Local)
5. FC SAN, Core<>Edge or Collapsed Core, Multi-Fabric (Local)
6. SAN over Distance (Replication, Remote Tape, Extended SAN): Gigaman, OCx, DWDM; FCP, iFCP, FCIP bridging/routing
...with Virtualization layered on top


Storage and SAN Virtualization


Environment profile where this applies
- Multiple, heterogeneous storage arrays
- Multiple SAN islands
- Volatile environment, rapid growth rates
- Multi-tenancy (and potentially formal chargeback) is important
- Need for improved QoS for storage resources
- Total cost of ownership sensitivity

SAN is required for heterogeneous storage virtualization


...and the storage-virtualization intelligence may reside in the SAN

The SAN itself may be virtualized


VSANs (logical), SAN partitions (physical)

Virtualization: What Problems Are We Solving?


Enables lower TCO and higher effectiveness and efficiency of the enterprise storage resources
- Standardize and simplify the storage operating environments
- Common local replication
- Common remote replication
- Simplifies and speeds provisioning
- May provide concurrent data movement in the storage hierarchy (key for implementing ILM and tiered storage)
- "Single pane of glass" storage management
- SAN and storage multi-tenancy
- Improved QoS and chargeback

Advanced STORAGE Virtualization


Where can Storage Virtualization occur?
- Host server
- Server appliance
- SAN appliance/blade
- Storage array controller

The focus in this presentation is SAN-based virtualization


SAN Based Storage Virtualization


All virtualized I/O goes through the virtualization engine before going to disk

[Diagram: servers require connectivity to each director and fabric; FC directors (linked by ISLs, with 2 and/or 4 Gb links and trunking) connect to the SAN virtualization engine (blade or appliance), which sits in the data path in front of the storage; a virtualization control workstation holds the metadata and attaches over Ethernet.]
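Conceptually, the virtualization engine's metadata is a map from virtual volumes to extents on the physical arrays, and every virtualized I/O is translated through it on the way to disk. A toy illustration (not from the presentation; the volume, array, and LUN names are made up):

```python
# Each virtual volume is an ordered list of extents: (array, lun, start_lba, length).
VOLUME_MAP = {
    "vvol_finance_01": [("array_A", 12, 0, 1_000_000),
                        ("array_B",  3, 0, 1_000_000)],
}

def resolve(vvol: str, lba: int):
    """Translate a virtual LBA into (array, lun, physical LBA)."""
    offset = 0
    for array, lun, start, length in VOLUME_MAP[vvol]:
        if lba < offset + length:
            return array, lun, start + (lba - offset)
        offset += length
    raise ValueError("LBA beyond the end of the virtual volume")

print(resolve("vvol_finance_01", 1_500_000))   # ('array_B', 3, 500000)
```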



SAN Virtualization
Dynamic Partitioning:
- Partitioning enables a "virtual director"
- Each director supports four partitions (V-directors)
- Partitions own a subset of the ports on the system
- Partitions are managed independently and remain isolated from other partitions
- Common SAN management console for all partitions
- Example: separate partitions for a financial application, ERP, and web services

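A toy model of dynamic partitioning (illustrative only; the partition names echo the example applications above, and the port ranges are invented): each partition owns a disjoint subset of the director's ports and is managed as its own virtual director.

```python
# Hypothetical port assignment for one physical director.
PARTITIONS = {
    "financial_app": range(0, 64),
    "erp":           range(64, 128),
    "web_services":  range(128, 192),
}

def owning_partition(port: int) -> str:
    for name, ports in PARTITIONS.items():
        if port in ports:
            return name
    raise ValueError(f"port {port} is not assigned to any partition")

# Isolation check: no port may belong to more than one partition.
all_ports = [p for ports in PARTITIONS.values() for p in ports]
assert len(all_ports) == len(set(all_ports)), "partitions overlap"

print(owning_partition(70))   # 'erp'
```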

Virtual SANs (VSANs)


Similar concept to partitioning
- Defined logically (less physical segregation)
- Up to (4) VSANs per director are logically defined

Inter-VSAN Routing
- Allows sharing of centralized storage services, such as tape libraries and disks, across VSANs
- Distributed, scalable, and highly resilient architecture
- Transparent to third-party switches

Quality-of-Service (QoS) Advanced Traffic Management


Example: Prioritizing latency-sensitive OLTP transactions over throughput-intensive data-warehousing or B/U traffic
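A minimal sketch of the QoS idea (illustrative, not a vendor implementation): frames carry a traffic class, and the scheduler drains higher-priority classes first, so OLTP traffic queued behind a backup burst still goes out ahead of it.

```python
import heapq

# Lower number = higher priority.
PRIORITY = {"oltp": 0, "data_warehouse": 1, "backup": 2}

queue = []
arrivals = [("backup", "B1"), ("oltp", "O1"), ("backup", "B2"), ("oltp", "O2")]
for seq, (traffic_class, frame) in enumerate(arrivals):
    # seq keeps ordering stable within the same priority class.
    heapq.heappush(queue, (PRIORITY[traffic_class], seq, frame))

while queue:
    _, _, frame = heapq.heappop(queue)
    print(frame, end=" ")   # O1 O2 B1 B2
print()
```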

Conclusion:
The focus of this session was architecting networked storage for the enterprise, and it included tips on implementing a tiered networked storage infrastructure for both local and remote access.

In this session, (hopefully) you learned:


- The current state of networked storage protocols, how they relate to disk technologies, and where these storage networking (SNW) technologies will merge and diverge
- How best to use file- or block-based storage with Fibre Channel and IP protocols in your storage network
- The areas of storage intelligence in networked storage today, and what the future might hold
- Factors to consider when building a total cost of ownership (TCO) comparison between the different technologies and protocols

Questions?

David J. Mossinghoff
Forsythe Solutions Group (913) 323-6857

dmossinghoff@forsythe.com

