
NETAPP UNIVERSITY

NetApp Accredited Storage Architect


Professional Workshop
Student Guide
Course ID: SALES-ILT-SE-ASAP-REV07
Catalog Number: SALES-ILT-SE-ASAP-REV07-SG
Content Version: 1.0

NetApp University - Do Not Distribute

ATTENTION
The information contained in this course is intended only for training. This course contains information and activities that,
while beneficial for the purposes of training in a closed, non-production environment, can result in downtime or other
severe consequences in a production environment. This course material is not a technical reference and should not,
under any circumstances, be used in production environments. To obtain reference materials, refer to the NetApp product
documentation that is located at http://now.netapp.com/.

COPYRIGHT
2012 NetApp, Inc. All rights reserved. Printed in the U.S.A. Specifications subject to change without notice.
No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or
mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written
permission of NetApp, Inc.

U.S. GOVERNMENT RIGHTS


Commercial Computer Software. Government users are subject to the NetApp, Inc. standard license agreement and
applicable provisions of the FAR and its supplements.

TRADEMARK INFORMATION
NetApp, the NetApp logo, Go further, faster, AdminNODE, Akorri, ApplianceWatch, ASUP, AutoSupport, BalancePoint,
BalancePoint Predictor, Bycast, Campaign Express, ChronoSpan, ComplianceClock, ControlNODE, Cryptainer, Data
ONTAP, DataFabric, DataFort, Decru, Decru DataFort, DenseStak, Engenio, E-Stack, FAServer, FastStak, FilerView,
FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy, GatewayNODE, gFiler, Imagine Virtually
Anything, Infinivol, Lifetime Key Management, LockVault, Manage ONTAP, MetroCluster, MultiStore, NearStore, NetApp
Select, NetCache, NOW (NetApp on the Web), OnCommand, ONTAPI, PerformanceStak, RAID-DP,
SANscreen, SANshare, SANtricity, SecureAdmin, SecureShare, Securitis, Service Builder, Simplicity, Simulate ONTAP,
SnapCopy, SnapDirector, SnapDrive, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore,
Snapshot, SnapValidator, SnapVault, StorageGRID, StorageNODE, StoreVault, SyncMirror, Tech OnTap, VelocityStak,
vFiler, VFM, Virtual File Manager, WAFL, and XBB are trademarks or registered trademarks of NetApp, Inc. in the United
States and/or other countries.
All other brands or products are either trademarks or registered trademarks of their respective holders and should be
treated as such.

NetApp Accredited Storage Architect Professional Workshop: Welcome

2012 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.

NetApp University - Do Not Distribute

TABLE OF CONTENTS
WELCOME ........................................................................................................................................................ 1
MODULE 1: NETAPP OVERVIEW ................................................................................................................ 1-1
MODULE 2: CORE SOFTWARE TECHNOLOGY ........................................................................................ 2-1
MODULE 3: CORE HARDWARE TECHNOLOGY ....................................................................................... 3-1
MODULE 4: STORAGE-EFFICIENCY STRATEGY ..................................................................................... 4-1
MODULE 5: ENTERPRISE DATA STORAGE .............................................................................................. 5-1
MODULE 6: BUSINESS APPLICATIONS .................................................................................................... 6-1
MODULE 7: DATA PROTECTION ................................................................................................................ 7-1
MODULE 8: DISASTER RECOVERY ........................................................................................................... 8-1


NetApp Accredited
Storage Architect
Professional Workshop
Course ID:
SALES-ILT-SE-ASAP-REV07

NETAPP ACCREDITED STORAGE ARCHITECT PROFESSIONAL WORKSHOP


Classroom Logistics
Schedule
Start time
Stop time
Break times

Safety
Alarm signal
Evacuation procedure
Electrical safety
guidelines

Facilities
Food and drinks
Restrooms
Phones
NetApp Confidential

CLASSROOM LOGISTICS


Course Overview
In this course, you will learn about the following:
The NetApp Unified Architecture Model
Overview
Core Software & Hardware Technology
Storage Efficiency Strategy
Enterprise Data Storage
Business Applications
Data Protection
Disaster Recovery

COURSE OVERVIEW


Course Objectives
By the end of this course, you should be able to:
Describe the key features and benefits of the
NetApp Unified Architecture model
Demonstrate key hands-on capabilities of our
technology
Present and explain Enterprise Data Storage
and Business Application solutions utilizing
NetApp technology
Describe and present key solutions in Data
Protection and Disaster Recovery

COURSE OBJECTIVES


Course Agenda: Day 1


Morning
Module 1: Welcome & NetApp Overview
Logistics, Agenda, and NetApp review

Module 2: Core Software Technology


Data ONTAP 8.1 Cluster and 7-Mode Options

Afternoon
Module 3: Core Hardware Technology
Current hardware technology


COURSE AGENDA: DAY 1


Course Agenda: Day 2


Morning
Module 3 Continued: Core Hardware
Technology
Current hardware technology

Module 4: Storage Efficiency


What is Storage Efficiency?

Afternoon
Module 5: Enterprise Data Storage
Windows and Unix file serving
Storage Area Networking

COURSE AGENDA: DAY 2


Course Agenda: Day 3


Morning
Module 6: Business Applications
SnapManager, Messaging and Collaboration, &
Virtualization

Module 7: Data Protection


OnCommand
SnapVault

Afternoon
Module 8: Disaster Recovery
Disaster Recovery Architecture
SnapMirror & MetroCluster

COURSE AGENDA: DAY 3


NetApp University Information Sources


NetApp Support
http://now.netapp.com

NetApp University
http://www.netapp.com/us/services/university/

NetApp University Support


http://netappusupport.custhelp.com

NETAPP UNIVERSITY INFORMATION SOURCES


Module 1
NetApp Overview


MODULE 1: NETAPP OVERVIEW


Module Overview
This module focuses on NetApp core technology:
Unified Storage
Data ONTAP Operating System
WAFL (Write Anywhere File Layout) file system
Snapshot technology
RAID technology
NVRAM (Nonvolatile RAM)
Aggregates and Volumes
Quotas and Qtrees


MODULE OVERVIEW


Module Objectives
After this module, you should be able to identify
the following NetApp core technologies:
The WAFL (Write Anywhere File Layout) file
system
Snapshot technology
RAID 4 and RAID-DP technology
Nonvolatile RAM (NVRAM) operations
Aggregates and volumes
Quotas and Qtrees


MODULE OBJECTIVES


Unified Storage at a Glance


[Slide figure: virtual machines (VM1-VM4) and other applications and servers connect across the enterprise network, using FC, iSCSI, NFS, CIFS, and FCoE, to a data abstraction layer over a logical pool of storage. The pool spans NetApp FAS systems and NetApp V-Series systems (fronting third-party storage arrays), all running the Data ONTAP architecture on the storage controller, and supports remote sites, disaster recovery, and disk-to-disk backup in a multitier, multi-use configuration.]

UNIFIED STORAGE AT A GLANCE


No other vendor provides this kind of capability. Now customers can provide a common pool of storage
across virtual and physical servers regardless of protocol. Customers can support multiple tiers from the same
pool. Customers can unify entire storage infrastructures, including mixed-vendor storage arrays.
You may think that customers must sacrifice performance with this approach, but NetApp systems stand up to
demanding performance requirements.
NetApp systems are multitier and multi-use.
Customers can unify mixed-vendor storage arrays.


The Data ONTAP Operating System


Application-Centric Storage
Manage data from applications:
- Application administrator self-management within an established storage policy
- Application synchronization
Use a single storage-virtualization engine:
- Management of storage pools instead of hardware

Data ONTAP Architecture
- The heart of virtualized data management
- Application-centric data management

A Multiprotocol, Unified Platform
Simplify the elements to be managed:
- Choices for capacity, performance, and cost (FAS2000, FAS3000, FAS6000, and V-Series systems)
- Support for SAN and network-attached storage (NAS) protocols
- Architecture for availability and simplicity
- V-Series systems front third-party arrays (HP, EMC, HDS)

THE DATA ONTAP OPERATING SYSTEM


Application-Centric Storage
NetApp storage solutions are based on Data ONTAP architecture, a highly optimized, scalable, and flexible
OS that:


Starts with a storage-virtualization engine that provides an end-to-end solution in a single integrated
platform. The capabilities that are built into Data ONTAP software specifically address the challenges
that are shown on the previous slide.
Provides the ability to scale infrastructure (small, medium, or large) over time and across heterogeneous
physical components
Allows management of storage from an application point of view, which results in the ability to delegate
and automate tasks


Data ONTAP Layers


Services Layer: multi-tenancy, storage efficiency, storage acceleration, data protection, and storage quality of service

Protocol Layer: the NFS, CIFS, FC, and iSCSI protocols over FC and Ethernet

Data Layout Layer: the WAFL file system (thin provisioning) over the storage pool

A Transformational Platform

DATA ONTAP LAYERS


The virtualized pool of storage is fronted by a three-layer approach:


Layer 1 (Data Layout): The Write Anywhere File Layout (WAFL) file system provides for the highest
write and read efficiency, which results in the lowest latency. Because it is a write-anywhere environment,
data writes do not exert any nonessential spinning of drives, thereby increasing drive longevity.
Layer 2 (Protocol): By allowing SAN and network-attached storage (NAS) over multiple protocols, the
Data ONTAP platform affords the highest level of flexibility and usability. This unifying construct
enables a truly simple-to-manage environment for all workloads.
Layer 3 (Services): Because data resides in the storage pool, which enables the highest level of efficiency
across the dataset and other functionality that enables the application layer to achieve or exceed
objectives, Data ONTAP software is the transformational platform in the market. Because of virtual
constructs at all layers, Data ONTAP software provides the most flexible, scalable, and efficient platform
that enables customers to address the changing needs of today and to address tomorrow's challenges.


Core Technology
Off-Box Administration Tools
- Off-box storage management

Data ONTAP 8.1 Cluster-Mode and Data ONTAP 8.1 7-Mode for FAS systems and for V-Series systems

WAFL Core Technology
- Snapshot technology
- RAID 4 or RAID-DP technology
- NVRAM operations
- Aggregates and volumes

On-Box, Value-Added Software
- Protocol support: FC and Ethernet

CORE TECHNOLOGY
This discussion starts with the core technologies that are listed in the middle of the slide. The Snapshot and
FlexVol technologies have their own sections, because they are so important for you to understand and be
able to explain to potential customers. These core technologies are what NetApp does, what NetApp is about,
and why NetApp technologies can work the way that they do:

WAFL core technology


Snapshot technology
RAID 4 or RAID-DP technology
NVRAM operations
Aggregates and volumes

You will certainly talk with most customers about RAID technology and how NetApp RAID protection
works, but getting down into the WAFL file system is usually not necessary. However, you must understand
the WAFL file system whether you talk to customers about it or not. The system is integral to how NetApp
storage products work.


Lesson 1
The WAFL File System


LESSON 1: THE WAFL FILE SYSTEM


The WAFL File System

- The WAFL file system is highly data-aware and enables the storage system to determine the most efficient data placement on disk.
- Data is intelligently written in batches to available free space in the aggregate without changing existing blocks.
- The aggregate can reclaim free blocks from one flexible volume (FlexVol volume) for allocation to another.
- Data objects can be accessed through the NFS, CIFS, FC, FCoE, or iSCSI protocol.

[Slide figure: in the Data ONTAP architecture, the protocol layer (the NFS, CIFS, and FC/FCoE/iSCSI protocols with their file, directory, and LUN read-and-write semantics) sits above the WAFL file system (FlexVol volumes within an aggregate), which in turn sits on NetApp RAID technology.]

THE WAFL FILE SYSTEM


At the core of the Data ONTAP operating system is the WAFL file system, the NetApp proprietary software
that manages the placement and protection of storage data. Integrated with the WAFL system is NetApp
RAID technology, which includes single and double-parity disk protection. NetApp RAID technology is
proprietary and fully integrated with the data-management and data-placement layers, which allows efficient
data placement and high-performance data paths.
Closely integrated with NetApp RAID technology is the aggregate, which forms a storage pool by
concatenating RAID groups. The aggregate controls data-placement and space-management activities.
The FlexVol volume is logically assigned to an aggregate but is not statically mapped to it. This dynamic
mapping relationship between the aggregate layer and the FlexVol layer is integral to the innovative storage
features of the Data ONTAP architecture.
The WAFL file system includes the necessary file and directory mechanisms to support file-based storage and
the read and write mechanisms to support block storage or LUNs.
Note that the protocol-access layer is above the data-placement layer of the WAFL file system. This allows
all of the data to be managed effectively on disk, regardless of how the data is accessed by the host. This level
of storage virtualization offers significant advantages over other architectures that have tight association
between the network protocol and data.


Core Technology Disk Structure


[Slide figure: the legacy (static) architecture stacks MetaLUNs over LUNs over RAID groups over hard disks, with a fixed mapping between the logical layer and the physical layer; the NetApp (dynamic) architecture stacks LUNs and files over FlexVol volumes over an aggregate, with RAID groups and hard disks in the physical layer beneath.]

- Legacy, or write-in-place, storage architectures rely on static virtualization, for which data volumes are pre-allocated or statically mapped.
- NetApp architecture leverages a dynamic virtualization engine: data volumes are dynamically mapped to physical space.

CORE TECHNOLOGY DISK STRUCTURE


Data ONTAP Components: The WAFL File System Versus Traditional File Systems

File data location
  WAFL file system: anywhere on disk
  Traditional file systems: fixed location (LBA)

Metadata location
  WAFL file system: anywhere on disk (except the root inode)
  Traditional file systems: dedicated regions

Updates to existing data and metadata?
  WAFL file system: put in unallocated blocks (originals intact)
  Traditional file systems: overwrite existing data

Snapshot copies and versions?
  WAFL file system: by design
  Traditional file systems: requires extra copy on write

File-system consistency?
  WAFL file system: guaranteed by design
  Traditional file systems: requires careful ordering of all writes

Crash recovery?
  WAFL file system: reboot, ready to go
  Traditional file systems: slow, complicated fsck

Interaction with RAID technology
  WAFL file system: can write full stripes, utilizing bandwidth
  Traditional file systems: must seek for all updates

DATA ONTAP COMPONENTS: THE WAFL FILE SYSTEM VERSUS TRADITIONAL FILE SYSTEMS
Write Anywhere File Layout (WAFL) is the NetApp file system. It is the file-system layer of the Data
ONTAP operating system, but what does the name WAFL mean? Sometimes potential customers are
confused about the meaning. Sometimes this confusion is planted by NetApp competitors, an insidious sales
technique known as FUD (fear, uncertainty, and doubt). Sometimes competitors suggest that the WAFL
system does not protect data that is stored on disk because the WAFL system stores the data on disk "just
anywhere." However, that is not what WAFL means. In fact, it is just the opposite. The important point is
that, unlike the majority of file systems that require metadata to be recorded to a particular physical location
on the disk, the WAFL file system can write metadata anywhere on the disk.
From a performance point of view, the WAFL system attempts to avoid the disk head having to write data in
one location and then having to move to a special portion of the disk to update the inodes (the metadata),
then move back to write more data, then move again to update inodes, and so on across the physical disk
medium. Head seeks happen quickly, but on server-class systems, you have thousands of disk accesses
happening per second. This adds up quickly and greatly impacts the performance of the system, particularly
on write operations. The WAFL system does not have that handicap and writes the metadata in line with the
rest of the data. "Write anywhere" refers to the file system's ability to write any class of data at any location
on the disk; in other words, it can choose where to put the data.
The basic goal of the WAFL system is to write to the "first best available" location. "First" refers to the
closest available block. "Best" refers to the same address block on all disks, that is, a complete stripe. The
first best available is always a complete stripe across an entire RAID group that utilizes the least amount of
head movement to access. That is arguably the most important criterion for choosing where the WAFL
system locates data on a disk. That is what the term "write anywhere" refers to: the location of the metadata.
The Data ONTAP operating system controls where everything goes on the disks, so it can decide on the
optimal location for data and metadata. This fact has significant ramifications for the way that the Data
ONTAP operating system does everything, particularly in the operation of RAID and Snapshot technology.
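The "first best available" write allocation described above can be illustrated with a small Python sketch. Everything here, including the WriteAnywhereAllocator class, its sizes, and the placement-tuple format, is invented for illustration only; the real WAFL allocator is far more sophisticated:

```python
# Toy model of "write anywhere" full-stripe allocation.
# All names and sizes are invented for this sketch; the real WAFL
# allocator is far more sophisticated.

class WriteAnywhereAllocator:
    def __init__(self, data_disks, stripes):
        self.data_disks = data_disks                      # data drives in the RAID group
        self.used = [[False] * data_disks for _ in range(stripes)]

    def first_free_stripe(self):
        # "First best available": the nearest stripe that is entirely free,
        # so data plus parity can be written in one pass, with no
        # read-modify-write of existing parity.
        for n, stripe in enumerate(self.used):
            if not any(stripe):
                return n
        raise RuntimeError("aggregate full")

    def write_batch(self, dirty_blocks):
        # Batched writes land as complete stripes in free space;
        # existing blocks are never modified.
        placements = []
        while dirty_blocks:
            n = self.first_free_stripe()
            row, dirty_blocks = (dirty_blocks[:self.data_disks],
                                 dirty_blocks[self.data_disks:])
            for disk, block in enumerate(row):
                self.used[n][disk] = True
                placements.append((block, n, disk))       # new on-disk location
        return placements

alloc = WriteAnywhereAllocator(data_disks=4, stripes=8)
placements = alloc.write_batch(["A", "B", "C", "D", "E"])
print(placements)  # blocks A-D fill stripe 0; E starts stripe 1
```

Because each batch lands on an entirely free stripe, parity for that stripe can be computed from the new data alone, which is the full-stripe-write advantage the table above contrasts with the seek-per-update behavior of traditional file systems.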

NetApp Data Layout


Feature: WAFL architecture.
Benefit: New data is intelligently written to available free space.

Feature: The WAFL file system leverages a pointer file-system architecture.
Benefit: This facilitates dynamic storage virtualization, thin provisioning, and more.

Feature: An aggregate is statically mapped to RAID groups or physical blocks.
Benefit: Aggregates provide an intelligent storage pool to manage block mapping.

Feature: FlexVol volumes are not statically assigned physical blocks at the time of creation.
Benefit: A logical volume can be nearly any size without full up-front investment in physical capacity.

Feature: Data is logged into nonvolatile memory and then written to disk en masse.
Benefit: Full stripe writes are used to optimize the write pattern across the aggregate and improve performance.

NETAPP DATA LAYOUT


The unique features of the WAFL file system offer many benefits.
The write anywhere function of the WAFL system intelligently writes new data to available free space on
disk without having to move or modify the original data. Additionally, WAFL does not require manual
tweaking or tuning to optimize data-placement behavior.
The WAFL system leverages a modern pointer architecture for data placement. Instead of statically mapping
logical blocks to physical blocks at the time that a LUN is created, the WAFL system dynamically maps
logical blocks to physical blocks when data is written to disk. The ability to provision a LUN or FlexVol
volume independently of the available disk capacity is referred to as thin provisioning. It allows IT
administrators to purchase disk capacity as needed, rather than requiring a full up-front investment.
Another feature of the WAFL system relates to the use of aggregates. Aggregates form a storage pool from
RAID groups and are responsible for the assignment of logical data blocks to physical blocks on disk.
Aggregates can be dynamically expanded by adding more RAID groups. And because the logical blocks in a
NetApp LUN do not occupy a predefined space on disk, expanding an aggregate doesn't require data
movement to restripe LUNs across the added disks. An aggregate is also aware of the data that it stores on
disk. When data is deleted from a storage volume, such as a LUN, the aggregate knows that the free data
blocks can be reclaimed and assigned to another volume or LUN as needed. The added value of the aggregate
offers improved storage efficiency over legacy technologies.
NetApp FlexVol technology offers many advantages. The FlexVol volume is a dynamic storage object and is
not statically assigned physical blocks at the time of creation. As a result, the FlexVol volume can be any size,
even larger than the aggregate. And the volume can be dynamically resized, larger or smaller, without data
loss. These advantages are also available to LUNs.
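The dynamic logical-to-physical mapping and block reclamation described above can be sketched as follows. This is a simplification for illustration; the Aggregate and ThinVolume names are invented for this example and are not Data ONTAP interfaces:

```python
# Toy sketch of thin provisioning: logical volumes draw physical blocks
# from a shared pool only when data is written, and return them on delete.
# Class and method names are invented for illustration.

class Aggregate:
    def __init__(self, physical_blocks):
        self.free = set(range(physical_blocks))   # shared pool of physical blocks

class ThinVolume:
    def __init__(self, aggregate, logical_size):
        self.aggr = aggregate
        self.logical_size = logical_size          # may exceed physical capacity
        self.map = {}                             # logical block -> physical block

    def write(self, lblock):
        if lblock not in self.map:                # allocate on first write only
            self.map[lblock] = self.aggr.free.pop()

    def delete(self, lblock):
        # The aggregate reclaims the freed block for any volume's future use.
        self.aggr.free.add(self.map.pop(lblock))

aggr = Aggregate(physical_blocks=100)
vol_a = ThinVolume(aggr, logical_size=500)        # provisioned beyond capacity
vol_b = ThinVolume(aggr, logical_size=500)
for b in range(10):
    vol_a.write(b)
vol_a.delete(3)                                   # block returns to the pool
vol_b.write(0)                                    # another volume can reuse it
print(len(aggr.free))  # 90: only written blocks consume physical space
```

The design point this illustrates is that logical size is a promise, not a reservation: two volumes provisioned at 500 blocks each coexist in a 100-block pool until actual writes consume space.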


Finally, NetApp uses intelligent caching and write patterns to improve write performance. Data from the host
is logged into nonvolatile memory and then written to disk en masse. Writes to disk are optimized across all
drives in the aggregate and contribute to improved data access.
The advantages of the WAFL system are demonstrated with three NetApp technologies: RAID-DP, Snapshot,
and FlexClone.


Lesson 2
NetApp Snapshot Technology


LESSON 2: NETAPP SNAPSHOT TECHNOLOGY


NetApp Greatest Hits


What are the most important features to the customer?
NetApp Feature (percentage of customers rating it most important):

RAID-DP: 95%
FlexClone: 95%
Snapshot technology: 94%
Single OS: 89%
SnapRestore: 89%
WAFL integration: 86%
Multi-protocol: 86%
Data ONTAP simplicity: 85%
WAFL file system: 85%
FlexVol virtualization: 85%
iSCSI leadership: 85%
SnapLock: 83%
SnapMirror: 81%
SnapVault: 76%
FlexVol performance: 73%
SnapManager: 69%
Data ONTAP benefits: 68%
FlexVol provisioning: 68%
V-Series: 64%
FlexVol priorities: 61%
SnapDrive software: 60%
LockVault: 60%
NAS leadership: 59%
Forced disk consistency: 40%


NETAPP GREATEST HITS


NetApp conducted a survey of its internal system engineers, asking, "What are the most important technical
features to the customers that you work with?"
The items highlighted are the ones that depend on Snapshot technology. Snapshot technology itself came in
second, right after the tie for first place between RAID-DP and FlexClone software. SnapRestore,
SnapLock, SnapMirror, SnapVault, and SnapManager were all considered important technical features
by at least 69 percent of the NetApp customers with whom those SEs worked.
Snapshot copies are very important to all NetApp features. Snapshot copies are generally thought of in the
marketplace as a way to get back to a previous version of the data. That use of Snapshot copies is fairly
obvious. NetApp technology also leverages Snapshot technology for replication and compliance.


NetApp Snapshot Technology (1 of 3)


[Slide figure: blocks A, B, and C in a LUN or file point to blocks A, B, and C on the disk; Snapshot copy 1 points to the same disk blocks.]

Create Snapshot copy 1:
- No data movement
- Copy pointers only

NETAPP SNAPSHOT TECHNOLOGY (1 OF 3)


This presentation was originally given by Dave Hitz, one of the NetApp founders and Executive Vice
President of NetApp, Inc. He did this presentation for the 2005 Fall Classic. It is a good description of
Snapshot technology and of how our competitors' snapshot technologies work.
What Is a NetApp Snapshot Copy?
A Snapshot copy is a locally retained, point-in-time image of data. NetApp Snapshot technology is a feature
of the WAFL storage-virtualization technology that is a part of the Data ONTAP microkernel that ships with
every NetApp storage system. A NetApp Snapshot is a frozen, read-only view of a WAFL volume that
provides easy access to old versions of files, directory hierarchies, and LUNs.
The high performance of the NetApp Snapshot technology makes it highly scalable. A NetApp Snapshot copy
takes only a few seconds to create, typically less than one second, regardless of the size of the volume or the
level of activity on the NetApp storage system. After a Snapshot copy has been created, changes to data
objects are reflected in updates to the current version of the objects, as if Snapshot copies did not exist.
Meanwhile, the Snapshot version of the data remains completely stable. A NetApp Snapshot copy incurs no
performance overhead; users can comfortably store up to 255 Snapshot copies per WAFL volume, all of
which are accessible as read-only and online versions of the data.
How does NetApp Snapshot technology work?
The Data ONTAP architecture starts in the same way as other random-access media, such as USB thumb
drives and floppy disks, with pointers to physical locations. When Data ONTAP software creates a Snapshot
copy, it preserves the inode map as it is at that point in time and then continues to make changes to the inode
map on the active file system. Data ONTAP software keeps the old version of the inode map. No data
movement occurs at the time that the Snapshot copy is created.


NetApp Snapshot Technology (2 of 3)

[Slide figure: block B in the LUN or file now points to new disk block B1; Snapshot copy 1 still points to the original blocks A, B, and C, while Snapshot copy 2 points to A, B1, and C.]

Create Snapshot copy 1.
Continue writing data.
Create Snapshot copy 2:
- No data movement
- Copy pointers only

NETAPP SNAPSHOT TECHNOLOGY (2 OF 3)


When Data ONTAP software writes changes to disk, the changed version of block B gets written to a new
location, B1 in this example. That enables the file system to avoid all of the parity update changes that would
be required if the new data were written to the original location. If Data ONTAP software updated the same
block, it would have to perform multiple parity reads to be able to update both parity drives. The WAFL file
system writes the changed block to a new location, again writing complete stripes and not moving or
changing the original data blocks.
When the file system creates the next Snapshot copy, the new Snapshot copy points only to the unchanged
blocks A and C and to block B1, the new location for the changed contents of block B. That is all. Data
ONTAP software does not move any data; it keeps building on the original active file system. It is extremely
simple and efficient, and because it is so simple, it is good for disk utilization. The only extra blocks that are
used when changes are made are those that are needed for the new or updated blocks.
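The pointer mechanics described on these slides can be sketched in a few lines of Python. This is a simplification for illustration; the SnapVolume class and its methods are invented here and are not Data ONTAP interfaces:

```python
# Toy sketch of pointer-based Snapshot copies: a snapshot preserves the
# current block map; overwrites go to new blocks, so originals stay intact.
# Names are invented for illustration.

class SnapVolume:
    def __init__(self):
        self.disk = {}            # physical location -> data
        self.active = {}          # active file system: name -> location
        self.snapshots = []
        self.next_loc = 0

    def write(self, name, data):
        self.disk[self.next_loc] = data      # never overwrite in place
        self.active[name] = self.next_loc    # repoint the active map only
        self.next_loc += 1

    def snapshot(self):
        self.snapshots.append(dict(self.active))  # copy pointers, not data

vol = SnapVolume()
for name, data in [("A", "a0"), ("B", "b0"), ("C", "c0")]:
    vol.write(name, data)
vol.snapshot()                    # Snapshot copy 1 points at A, B, C
vol.write("B", "b1")              # changed B lands in a new block (B1)
vol.snapshot()                    # Snapshot copy 2 shares A and C with copy 1

snap1, snap2 = vol.snapshots
print(vol.disk[snap1["B"]], vol.disk[snap2["B"]])   # b0 b1
print(snap1["A"] == snap2["A"])                     # True: unchanged block shared
```

Creating a snapshot copies only the pointer map, so the cost is independent of volume size, and the only extra blocks consumed over time are those holding new or updated data.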


NetApp Snapshot Technology (3 of 3)

[Slide diagram: changed blocks B1 and C2 occupy new disk locations; Snapshot copies 1, 2, and 3 each point to the blocks that were current when they were created.]
Create Snapshot copy 1.
Continue writing data.
Create Snapshot copy 2.
Continue writing data.
Create Snapshot copy 3.
Simplicity of model: best disk utilization; fastest performance.

NETAPP SNAPSHOT TECHNOLOGY (3 OF 3)


Snapshot copies have excellent performance characteristics. No extra I/O operations are required.
Functionally, the system can realistically provide an unlimited number of Snapshot copies. The hard limit is
255 Snapshot copies per volume online, and in most production environments, that is more than are used.
Two dozen active Snapshot copies at a time are the most that you find in production environments, even
though many more can be utilized if needed.
Secondary archival environments certainly use many more Snapshot copies. Now, by using FlexClone
technology, you can literally take an unlimited number of Snapshot copies of a volume. A user can take up to
254 Snapshot copies and then, on the last Snapshot copy, create a FlexClone volume clone. Then the user can
take another 254 and clone it again, take another 254, and so on. So today, we have unlimited Snapshot
copies.


Data ONTAP Snapshot Performance

[Slide chart: I/O throughput measured before, during, and after a Snapshot copy is created.]
Snapshot copies: a point-in-time copy is created in a few seconds.
TPC-C is published with five active Snapshot copies.
No performance penalty occurs.

DATA ONTAP SNAPSHOT PERFORMANCE


This slide depicts the Data ONTAP Snapshot performance, looking at I/O measured before, during, and after
a Snapshot copy is created while the system is under a 50/50 4K read/write OLTP workload. You can see in
this chart, when the Snapshot copy is created, a minor change in performance is experienced but as soon as
the Snapshot is created, the performance resumes to the systems previous high I/O levels.


Lesson 3
A Competitor's Snapshot

LESSON 3: A COMPETITOR'S SNAPSHOT


A Competitor's Snapshot (1 of 2)

[Slide diagram: blocks A, B, and C in a LUN or file and on disk; a copy-out region is allocated, and snapshot copy 1 points to the original blocks.]
Create snapshot 1: create copy-out region 1.
Create pointers to old blocks and copy out.

A COMPETITOR'S SNAPSHOT (1 OF 2)
How do NetApp competitors do snapshots?
They start with exactly the same picture. All storage works this way. This is the point where NetApp
separates from the competition. Our competition, in preparation for a snapshot of the file system, creates a
copy-out region. In some competitors' products, this copy-out region must be allocated when the system is
initially configured; others create it just prior to the snapshot from free disk space. If a snapshot is taken at
this point, the process is similar to the process that Data ONTAP software uses: The file system simply keeps
track of the inode map. This is true only for the first snapshot. After that, the rest is drastically different from
the NetApp method.


A Competitor's Snapshot (2 of 2)

[Slide diagram: block B changes to B1; the old block is read and written to copy-out region 1, the snapshot pointer is updated to reference the copy-out region, and the block on disk is updated in place.]
Create snapshot 1.
Continue writing data:
Block changes.
Read old block; write to copy-out region.
Update snap pointer to copy-out region.
Update block on disk.
One write requires: one read (old data), one write (old data), one write (new data).

A COMPETITOR'S SNAPSHOT (2 OF 2)
When data is changed, the snapshot procedure begins to differ from how Data ONTAP software does
snapshots. When data changes in a storage system from any of the competitors to NetApp, the file system:
- First reads the original data block
- Then writes its contents to the copy-out region
- Updates pointers
- Updates the contents of the block on disk at the block's original location
So, the new data is written to the original location. In addition, after the file system updates the original
location, it must update the parity bits on any existing RAID drives.
To accomplish each update, file systems from the competitors to NetApp must do a:
- Read of the old data
- Write of the old data to its new location
- Write of the new data to the old location
This is a total of one read and two writes to service one update request: three times the system overhead.
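The three-I/O cost of a first write can be sketched by counting operations in a toy copy-out model (a simulation; all names are invented for illustration):

```python
# Toy copy-out snapshot: the first overwrite of a snapshotted block costs
# one read (old data) + one write (old data to the copy-out region) + one
# write (new data in place). Later overwrites of the same block cost one write.
blocks = {"A": "A", "B": "B", "C": "C"}
copy_out = {}                    # copy-out region for snapshot 1
snapshotted = set(blocks)        # blocks captured by snapshot 1
io = {"reads": 0, "writes": 0}

def update(name, new_data):
    if name in snapshotted and name not in copy_out:
        io["reads"] += 1                 # read the old block
        copy_out[name] = blocks[name]    # write old data to copy-out
        io["writes"] += 1
    blocks[name] = new_data              # write new data in place
    io["writes"] += 1

update("B", "B1")                # first write: 1 read + 2 writes
print(io)                        # -> {'reads': 1, 'writes': 2}
update("B", "B2")                # later write to same block: 1 write
print(io)                        # -> {'reads': 1, 'writes': 3}
```

Subsequent overwrites of an already-copied block skip the copy-out, which is why performance on such systems slowly recovers after each snapshot.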


Snapshot Comparison

[Slide diagram: side-by-side comparison of used disk space after two snapshots. NetApp keeps new blocks B1 and C2 alongside the originals; others also hold extra copies of B and C in copy-out regions.]
The NetApp approach:
Minimum overhead, which guarantees disk-space efficiency.
No data movement, which guarantees disk performance and enables more Snapshot copies.
Space on disk is better. Performance is better. Number of Snapshot copies is better.

SNAPSHOT COMPARISON
Because the copy-out activity occurs on the first write to each block, competitors' performance slowly ramps back up on these systems. If
the file system keeps updating block C, it does not have to do any extra work. Because it has stored the old
version, it can now write over the original location without the need to first copy the data to a copy-out area. It
is the first write on any block that is included in a snapshot that requires the extra overhead.
What typically happens on competitors' systems is a cyclical change in performance. For example,
performance is at an expected level, then a snapshot is created, performance drops, and then performance
slowly comes back to an acceptable level. When another snapshot is created, performance drops again and
slowly returns, then drops again. So, although many NetApp competitors say that they can create thousands of
snapshots, best practices generally show that administrators should limit the number of snapshots of a given
set of data to anywhere from four to eight (it varies with each vendor) because of potential performance
impact and the difficulty of managing these copy-out areas.
When the snapshot feature on competitors' systems is used regularly, the systems begin to accumulate multiple
stored copies of data. The more snapshots that are created, the more likely the systems are to have multiple
copies of data. Administrators of these systems have questions to consider, such as:

How big should this copy-out region be made? (The answer depends on the delta rate.)
What is the delta rate?

If the administrator does not make the copy-out region large enough, the snapshot capability breaks.
The file system cannot keep the version of the old data and loses that snapshot. Of course, if the copy-out area
is too big, it is wasted space. Determining what size these copy-out areas should be is an art and must be fine-tuned over time.
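Sizing the copy-out region is essentially the delta rate multiplied by the snapshot retention period. A back-of-the-envelope helper, with all numbers and the headroom factor being hypothetical:

```python
# Rough copy-out sizing: worst case, every changed block during the
# snapshot's lifetime must be preserved once in the copy-out region.
def copy_out_size_gb(delta_rate_gb_per_hour, retention_hours, headroom=1.2):
    """Estimated copy-out region size in GB, with a safety margin.

    If the region is undersized, the snapshot breaks; if it is oversized,
    the extra space is wasted - hence the tuning problem described above.
    """
    return delta_rate_gb_per_hour * retention_hours * headroom

# Example: 5 GB/hour of change, snapshot kept for 24 hours, 20% headroom.
print(copy_out_size_gb(5, 24))   # -> 144.0
```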


Lesson 4
Snapshot Restore


LESSON 4: SNAPSHOT RESTORE


Using Snapshot Copies to Restore Data

[Slide diagram: blocks in the LUN or file and on disk, with Snapshot copies 1, 2, and 3; block C2 in the active file system is bad.]
Block C2 is bad.

USING SNAPSHOT COPIES TO RESTORE DATA


Assume that after a NetApp Snapshot copy is created, the storage system develops a logically bad block for
some reason. If the block is physically bad, RAID takes care of it, and it never comes into the Snapshot
picture. So, somehow, a bad block exists (C2, in this example) that was accidentally deleted.


Using Snapshot Copies to Restore Data

[Slide diagram: the .snapshot directory exposes the blocks that each Snapshot copy preserves.]
Block C2 is bad.
Let users self-restore from the .snapshot directory in NAS environments (.snapshot in NFS, previous versions in Windows).

USING SNAPSHOT COPIES TO RESTORE DATA


Data ONTAP software lets users self-restore from the .snapshot directory in NAS environments.
For example, if a user's home directory (drive H, for example) is hosted on a NetApp storage system, the
user can see all available Snapshot copies by displaying the .snapshot directory on drive H in an NFS
environment or the ~snapshot directory in a CIFS environment. The daily Snapshot copies occur at midnight
every night. The hourly backups occur on a schedule that is determined by the administrator. On the back end,
the system only stores changed blocks. Anything the user has not touched for a while is not duplicated for
each Snapshot copy. Every Snapshot copy uses the same unchanged blocks.
If something happens to one of the user's files (perhaps it was deleted or written over by accident), the user
can drag the data out of the .snapshot directory and restore it back to the user's home directory. When a user
does that, the user is copying data from a Snapshot copy and creating new blocks in the active file system.
NOTE: The system administrator can turn this feature off.
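A drag-and-drop restore is an ordinary file copy out of the snapshot namespace. The sketch below simulates the .snapshot layout inside a temporary directory; the snapshot name hourly.0 and the file name are invented for illustration.

```python
import shutil
import tempfile
from pathlib import Path

# Simulate a NAS home directory whose .snapshot directory exposes a
# preserved hourly Snapshot copy of the user's files.
home = Path(tempfile.mkdtemp())
snapdir = home / ".snapshot" / "hourly.0"
snapdir.mkdir(parents=True)
(snapdir / "report.doc").write_text("the preserved version")

# The user accidentally deleted the active copy; self-restore is a
# plain copy, which creates new blocks in the active file system.
shutil.copy2(snapdir / "report.doc", home / "report.doc")
print((home / "report.doc").read_text())   # -> the preserved version
```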


Using Snapshot Copies to Restore Data

[Slide diagram: SnapRestore technology promotes the pointers in a good Snapshot copy back into the active file system.]
Block C2 is bad.
Let users self-restore from the .snapshot directory in NAS environments.
Restore from the Snapshot copy with SnapRestore technology.
A single-file SnapRestore instance allows restoration of a single file from a Snapshot copy.

USING SNAPSHOT COPIES TO RESTORE DATA


The process that is described on the previous slide is fine for everyday home directories with files such as
Word documents, PowerPoint presentations, and so on. Of course, if you want to restore a database that is 50
GB, that is probably not what you have in mind with Snapshot copies. So, the other way to restore data from a
Snapshot copy uses the SnapRestore feature. SnapRestore technology does not copy files; it simply moves
pointers from the files that are found in the good Snapshot copy to the active file system. The pointers that are
stored in the Snapshot copy are promoted to become active file system pointers.
The system tracks the links to blocks on the WAFL system, and when no more links to a block exist, the
block is available for overwrite and considered free space.
Because SnapRestore technology is an all-pointer operation, it is quick. No data update occurs, nothing is
moved, and the file system potentially frees blocks that were only used in the later version of the file system.
SnapRestore operations generally happen in less than a second. They are not literally instantaneous but
practically instantaneous.
Imagine what restoring looks like on a competitor's system. The competitor's file system moves the blocks
somewhere else, so to return to a previous version, all of the blocks must be copied back to where they were
before. Some systems have ways to make that look live. For example, as a read request comes to a
particular block, the file system may serve this block while it moves data in the background. One way or
another, the competitor must get all of those blocks back to their previous locations.
When restoring from a Snapshot copy, the SnapRestore command moves pointers from the good
Snapshot copy to the active file system. A single-file SnapRestore operation may require a few seconds or a few
minutes to restore.
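SnapRestore's pointer promotion can be sketched with a toy block map (a simulation; the structures are invented for illustration): the restore is a pointer swap, and blocks that only the later file-system version used become free space.

```python
# Toy model: SnapRestore promotes the snapshot's pointer map to become
# the active file system; no blocks are read, copied, or moved.
disk = {"loc0": "A", "loc1": "B", "loc2": "C", "loc3": "B1", "loc4": "C2"}
snap1 = {"A": "loc0", "B": "loc1", "C": "loc2"}   # good Snapshot copy
active = {"A": "loc0", "B": "loc3", "C": "loc4"}  # C2 (loc4) is bad

active = dict(snap1)     # SnapRestore: an all-pointer operation

# Blocks referenced by no map (loc3, loc4) become free space.
referenced = set(active.values()) | set(snap1.values())
free = set(disk) - referenced
print(sorted(free))      # -> ['loc3', 'loc4']
print(disk[active["C"]]) # -> C (the good version is back)
```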


NetApp Snapshot Summary:
Efficiency and Performance

[Slide diagram: the active file and a Snapshot copy share disk blocks A, B, and C; after an update, only the new block C1 is written, and the Snapshot copy keeps pointing to C.]
Save only changed blocks; copy on first write.
No performance penalty occurs.
Core functionality is built into the Data ONTAP operating system.

NETAPP SNAPSHOT SUMMARY: EFFICIENCY AND PERFORMANCE


As you learned in the prerequisites, NetApp Snapshot copies create online, read-only copies of the entire file
system.
Snapshot copies require only a few seconds to create (usually less than one second) regardless of the size of
the volume or the level of activity on the NetApp storage system.
After a Snapshot copy is created, changes to data objects are reflected in updates to the current version of the
objects, as if Snapshot copies did not exist. Meanwhile, the Snapshot copy version of the data remains
completely stable.
A NetApp Snapshot copy incurs no performance overhead.
You can keep up to 255 Snapshot copies per volume.


Exercise 1
Module 1: Whiteboard Exercise:
Demonstrating Snapshot Technology

Time Estimate: 20 Minutes


EXERCISE 1
Please refer to your exercise guide.


Whiteboard Exercise:
Snapshot Demonstration
This exercise is a script of how to demonstrate
Snapshot functionality on the whiteboard.
Take 10 minutes to study the method of
presentation.
Volunteers will come to the whiteboard and
deliver the Snapshot presentation to the class.
Be ready to:
Walk through how NetApp Snapshot technology
works and explain what happens on changes of
data
Explain how competitors do snapshots and what
happens to the snapshots on changes

WHITEBOARD EXERCISE: SNAPSHOT DEMONSTRATION


Lesson 5
NVRAM Operation


LESSON 5: NVRAM OPERATION


NVRAM Operation (1 of 4)

[Slide diagram: a client connected over the network to a storage system and its disks.]

NVRAM OPERATION (1 OF 4)
Next in this module, using a basic system setup, you step through the process that the WAFL file
system uses to integrate NVRAM into reads and writes.


NVRAM Operation (2 of 4)

[Slide diagram: an operation travels from the client's main memory through the NICs (network interface cards) into the storage system's main memory and battery-backed NVRAM; an ack is returned to the client.]
The operation is placed in the controller's main memory, where further processing will occur.
The operation is also logged into battery-backed RAM and is now safe from controller failure.
The client is free to forget about it; it's done.
The path is purely electronic, memory-to-memory.

NVRAM OPERATION (2 OF 4)
Take a close look at NVRAM and WAFL file-system integration. The controller contains a special chunk of
RAM called NVRAM. In this case, NV means nonvolatile. It is nonvolatile because it has a battery. So,
if something happens, such as a disaster striking the system, the data that is stored in NVRAM is not lost.
After data gets to a NetApp storage system, it is treated in exactly the same way whether it came through a
SAN or NAS connection. As I/O requests come into the system, they first go to RAM. The RAM on a
NetApp system is used as in any other system; it is where Data ONTAP does active processing. As the write
requests come in, the OS also logs them into NVRAM.
When the WAFL file system receives a write from the host, it logs the write in NVRAM and immediately
sends an ACK (acknowledgment) back to the host. At that point, from the host's perspective, the data is
written to storage. But, in fact, the data may be temporarily held in NVRAM. The goal of the WAFL file
system is to write data in full stripes across the storage media. To do this, it holds write requests in NVRAM
while it chooses the best location for the data, does RAID calculations, does parity calculations, and gathers
enough data to write a full stripe across the entire RAID group.
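The log-then-acknowledge path can be sketched as follows (an illustrative simulation, not Data ONTAP code; all names are invented):

```python
# Toy write path: a write is buffered in main memory, journaled to
# battery-backed NVRAM, and acknowledged immediately. The disk write
# happens later, at the next consistency point (CP).
nvram_log = []
main_memory = {}
disk = {}

def client_write(block, data):
    main_memory[block] = data          # active processing in RAM
    nvram_log.append((block, data))    # journal: safe from power loss
    return "ack"                       # host may consider it written

def consistency_point():
    # Commit everything journaled so far as one batch of full stripes,
    # then clear the journal.
    disk.update(main_memory)
    nvram_log.clear()

print(client_write("B", "B1"))   # -> ack (before any disk I/O)
print(len(nvram_log))            # -> 1
consistency_point()
print(disk["B"], len(nvram_log)) # -> B1 0
```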


NVRAM Operation (3 of 4)

[Slide diagram: operations accumulate in the storage system's main memory and NVRAM until they are flushed to disk.]
Activities that involve the operation consume main memory.
Up to 10 seconds may elapse between CPs, during which many other operations arrive (not shown).
The organized data from the operations is written to disk in a process that is called consistency-point (CP) processing.
NVRAM is zeroed.

NVRAM OPERATION (3 OF 4)
The WAFL file system never holds data for longer than 10 seconds before it establishes a consistency point
(CP). CP operations are atomic; in other words, they must be committed fully, or the journaled operations are
replayed from NVRAM and recommitted. This is why they are called CPs.
At least every 10 seconds, the WAFL system takes the content of NVRAM and commits it to disk. When a
write request is committed to a block on disk, the WAFL system clears it from the journal. On a system that is
lightly loaded, an administrator can see the 10-second CPs happen: Every 10 seconds, the lights cascade
across the system. Most systems run with a heavier load than that, and the CPs happen every second, every
two seconds, or every four seconds, depending on the system load.
A question that frequently arises is: Is NVRAM a performance bottleneck? No, it is not. The response time
of RAM and NVRAM is measured in microseconds. Disk response times are always in milliseconds, and it
takes a few milliseconds for a disk to respond to an I/O. Because disks are radically slower than any other
component on the system, such as the CPU or RAM, disks are always the performance bottleneck of any
storage system. When a system is committing back-to-back CPs, that's because the disks are taking writes as
fast as they can. That is a platform limit for that system. If that platform limit is reached, the option is to
spread the traffic across more heads or upgrade the head to a system with greater capacity. That is a disk
limitation; the disks are emptying NVRAM as quickly as possible. NVRAM could function faster if the disks
could keep up.
NVRAM is logically divided into two halves so that as one half is emptied the incoming requests fill the other
half. They go back and forth on that system. When the WAFL system fills one half of NVRAM, the WAFL
system forces a CP to happen, and it writes the contents of that half of NVRAM to the storage media. A fully
loaded system does back-to-back CPs, so it fills and refills both halves of the NVRAM.
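The two halves can be sketched as a double-buffered journal (a simulation with an invented, tiny half capacity):

```python
# Toy double-buffered NVRAM: incoming writes fill one half while the
# other half is being committed to disk at a consistency point (CP).
halves = [[], []]
filling = 0                  # index of the half currently accepting writes
HALF_CAPACITY = 4            # invented for the example
cp_count = 0

def log_write(op):
    global filling, cp_count
    halves[filling].append(op)
    if len(halves[filling]) == HALF_CAPACITY:
        filling = 1 - filling        # swap: new writes go to the other half
        # ... the full half is committed to disk here ...
        halves[1 - filling].clear()  # emptied once its CP completes
        cp_count += 1

for i in range(10):                  # ten writes force two CPs
    log_write(f"op{i}")

print(cp_count)                      # -> 2
print(len(halves[filling]))          # -> 2 (ops still waiting for a CP)
```

A fully loaded system would swap halves continuously: back-to-back CPs.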


NVRAM Operation (4 of 4)

[Slide diagram: the same flow as the previous slide; operations are journaled in NVRAM, written to disk at each consistency point (CP), and NVRAM is zeroed.]

NVRAM OPERATION (4 OF 4)
One advantage that NetApp products gain from the use of NVRAM is the flexibility to use RAID more
efficiently. RAID 4 is the NetApp base RAID type that has been used since the founding of the company.
Because of the performance issues that result from its implementation, NetApp competitors do not use RAID
4. The competitors may be capable of handling it, but in most cases they don't use it. Why?
For NetApp competitors, the parity drive is what is wrong with RAID 4. RAID 4 uses a single drive to write
parity. When you have a single drive that is dedicated to parity, NetApp competitors write down each request
as it comes in and write requests to disparate locations. All of those updates happen randomly on a data disk,
which means that the updates also require a parity change. This creates a parity drive that is exponentially
busier than each data drive. The parity drive gets hot (figuratively) and slows the entire system. The parity
drive is a bottleneck.
So why does NetApp use RAID 4? NetApp can use RAID 4 because the WAFL system controls where to put
the data on disk. It does the parity calculations in memory rather than having to read in extra data and parity
bits. The WAFL system can lay out complete stripes on disk and writes to the parity drive no more and no
less often than all the other drives in the array.
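The hot-parity-drive argument can be sketched by counting writes per drive in a toy four-disk RAID 4 group (all numbers are invented for illustration):

```python
# Count physical writes per drive for 12 block updates on a 4-data-disk
# RAID 4 group. Random in-place updates touch the parity drive once per
# update; full-stripe writes touch it once per stripe.
DATA_DISKS = 4

def random_updates(n):
    writes = {f"d{i}": 0 for i in range(DATA_DISKS)}
    writes["parity"] = 0
    for i in range(n):
        writes[f"d{i % DATA_DISKS}"] += 1   # update one data block
        writes["parity"] += 1               # ...and its parity block
    return writes

def full_stripe_writes(n):
    writes = {f"d{i}": 0 for i in range(DATA_DISKS)}
    writes["parity"] = 0
    for _ in range(n // DATA_DISKS):        # gather a full stripe in RAM
        for i in range(DATA_DISKS):
            writes[f"d{i}"] += 1
        writes["parity"] += 1               # parity computed in memory
    return writes

print(random_updates(12)["parity"])      # -> 12 (parity drive runs hot)
print(full_stripe_writes(12)["parity"])  # -> 3  (same traffic as each data drive)
```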


NVRAM and High Availability Configurations (1 of 2)

[Slide diagram: Controller A and Controller B, each with NVRAM and attached clients and hosts, joined by a controller interconnect that carries the heartbeat and NVRAM mirroring.]

NVRAM AND HIGH AVAILABILITY CONFIGURATIONS (1 OF 2)


Here you can see a diagram of a basic cluster configuration. There are two controllers, with the NVRAM
being mirrored on each system. The two colors shown on each controller point out that fact: there is an
orange mirror on both sides and a blue mirror on both sides. The primary connection to one shelf is
the secondary connection to the other controller's shelf.
The cluster connection between them is InfiniBand on most of the platforms, with some exceptions such as
the FAS2000s. 10-Gb InfiniBand transports the heartbeat signal as well as NVRAM mirroring between the
systems. Both systems are dealing with their own traffic, data reads and writes.
Hardware color and the color of the wire indicate disk ownership, or which controller controls which disk.
(Technical detail: The cluster interconnect for the FAS270C is by way of dedicated GbE, internal to the FAS270C,
inaccessible to all other "nodes" that might tend to rob performance from it or serve as a possible source of data
corruption.)


NVRAM and Dual Controller Configurations (2 of 2)

[Slide diagram: the same two-controller configuration; both controllers actively serve their own clients and hosts while mirroring NVRAM over the controller interconnect.]

NVRAM AND DUAL CONTROLLER CONFIGURATIONS (2 OF 2)


Both controllers can actively accept data, and when one fails, all of the traffic moves to the surviving
controller. Most systems utilize software disk ownership, so an administrator can assign individual disks to
one controller or another. This provides much more flexibility, but the important thing to remember is that
you must assign a disk or it cannot be used for any purpose.
If you forget to assign a disk, it cannot be used by either controller. Even if a controller is in degraded mode
and needs a spare to start a rebuild, it does not take an unassigned disk. By using either software ownership or
hardware ownership, when a failover occurs, all of the ownership moves to the surviving controller and that
controller takes all of the traffic.
Some customers choose to configure their controllers as though they are active-passive. They do not put any
traffic on the second controller, so that when a failover occurs, it has exactly the same performance profile as
when it runs on the other system.
Some customers choose to load them only to about 40%, so, when one fails over, the other is at about 80%
utilized, but it still performs well. Other customers choose to load them normally and accept that during a
failover, the customers have decreased performance. It depends on the goals of the clients and what they host
on their systems. Any of these scenarios is a potential design option.
The total functioning NVRAM with a cluster is the same as the total functioning NVRAM of a single system.
As mentioned earlier, NVRAM is never a bottleneck. NVRAM is lightning fast, and it is usually disk
access that slows the system. As long as no primary traffic goes across the interconnect, a failover does not
create performance issues.


Lesson 6
RAID-DP Technology


LESSON 6: RAID-DP TECHNOLOGY


NetApp RAID-DP Technology:
The New Standard for Data Reliability

Solution: Greater Availability with RAID-DP
- Same protection as RAID 1 (mirroring)
- Same cost, performance, and ease of use
Business Implications
- 71% more usable capacity than competitive offerings
- Drive failures won't impact data availability
Technical Benefits
- More secure than RAID 5
- More reliable than mirroring for double-disk failure
- 14% parity overhead versus 50% overhead with mirroring (SATA)

NETAPP RAID-DP TECHNOLOGY: THE NEW STANDARD FOR DATA RELIABILITY


RAID-DP technology is the new standard and benchmark for data reliability. With the introduction of higher
drive capacities comes the increased probability of downtime for a much larger set of data, and customers
face the need for better and more cost-effective data protection.
RAID-DP technology addresses these needs better than any other RAID method because of the way the data
is striped across the drives.


Data ONTAP Components:
Data Layout with RAID 4

[Slide diagram: a write chain filling a RAID stripe across the data drives and the parity drive.]
Uses a Tetris-like write.
Tries to fill stripes.
Recalculates parity.

DATA ONTAP COMPONENTS: DATA LAYOUT WITH RAID 4


Inside NetApp, the WAFL file system and NVRAM process is described as cheating at Tetris. The object
of Tetris is to create full lines of blocks so that they get cleared out. That is what the WAFL system does. It is
cheating, because it involves caching blocks, looking at them, and laying them out before having to write
them to disks.
The WAFL system can cheat when laying out blocks because of the journaling that occurs in NVRAM and
the RAM buffer. It writes complete stripes across the array so that the traffic on the parity drive is the same as
the traffic on the data drives. The disks all get the same number of writes across the entire RAID group. This
is why NetApp can use RAID 4 and not have the performance problem of the parity drive getting hot and
overloaded in either RAID 4 or RAID-DP technology.
RAID 4 has always been available in Data ONTAP software. One of the advantages of RAID 4 is that it
allows the administrator to add data drives to RAID groups. Adding a data disk that contains all zero bits has
no impact on the parity disk. With RAID 4, this allows the addition of data drives to RAID groups without
having to touch any of the data or recalculate any parity.
At least four disks should be added at a time to a system that has implemented RAID 4 protection. With
aggregates, that is usually not an issue. Most NetApp customers usually add an entire RAID group at a time to
an aggregate to increase capacity.
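The reason a zeroed disk can join a RAID 4 group without recalculating parity is that XOR parity is unchanged by zeros; a minimal sketch:

```python
# RAID 4 parity is the XOR of the data blocks in a stripe. XOR with a
# zeroed block is an identity, so adding an all-zero data disk leaves
# the existing parity valid with no recalculation.
stripe = [0b1011, 0b0110, 0b1100]            # three data disks
parity = 0
for block in stripe:
    parity ^= block

new_stripe = stripe + [0b0000]               # add a zeroed data disk
new_parity = 0
for block in new_stripe:
    new_parity ^= block

print(parity == new_parity)                  # -> True
```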
The next item is that most companies with enterprise environments want to be able to survive two data disk
failures. So, how do you do that? That is where RAID-DP technology comes into the picture.


Data ONTAP Components: RAID 4 Parity

[Slide diagram: a RAID 4 stripe of data blocks and their parity block; RAID 4 protects against any single-disk failure.]

DATA ONTAP COMPONENTS: RAID 4 PARITY


The DP in the RAID-DP name stands for double parity or dual parity. The OS materials refer to RAID-DP technology as dual parity, because RAID-DP technology has two parity disks. Engineering and technical
documents may call it diagonal parity, because that more literally describes how it works. Instead of
calculating the parity bit across horizontal stripes on the disks, RAID-DP technology calculates the parity
diagonally down blocks as depicted in this slide. The result is that RAID-DP technology can survive the
failure of two data disks simultaneously and maintain live read-write access to that data while the system
reconstructs the contents of the failed disks.
Recently the Storage Networking Industry Association (SNIA) updated its definition of RAID 6, so NetApp can
now call RAID-DP technology an implementation of RAID 6. SEs who are standards-oriented call the
implementation RAID 6; SEs who are NetApp innovation-oriented call it RAID-DP technology.
HP and several other storage vendors can implement RAID 6. How many customer implementations use
RAID 6 from vendors other than NetApp? The answer is few. The reason is performance impact. If 100% is
normal performance on a storage system from a NetApp competitor, when a client turns on RAID 6
protection, the performance drops to about 60% or, in many cases, 40% of normal performance. You can
imagine why: File systems that are constrained to doing updates to certain physical locations on a disk
generate much additional read traffic and disk head movement when RAID 6 is turned on.
The WAFL file system also must do extra work in RAID-DP technology, calculating parity reads horizontally
across all of the data drives to produce the normal parity updates and performing parity reads diagonally
across all of the data drives. It seems at first glance that RAID-DP technology creates a massive cascade of
I/O activity to update both kinds of parity with each write to disk, but because the majority of these
calculations are done in RAM, with NetApp, the I/O impact is kept to a minimum.
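The row-plus-diagonal idea can be sketched in a few lines of Python. This is a toy illustration only, assuming simple XOR parity and a wraparound diagonal assignment; the actual RAID-DP algorithm uses a specific diagonal construction inside Data ONTAP software, not the scheme shown here.

```python
# Toy illustration of dual parity: a horizontal (row) parity block per stripe
# plus a diagonal parity block per diagonal. This is NOT the actual RAID-DP
# algorithm; it only shows how two independent parity sets are derived from
# the same data blocks.

def fold_xor(values):
    """XOR a sequence of block values together."""
    parity = 0
    for v in values:
        parity ^= v
    return parity

def row_parity(stripes):
    """One horizontal parity block per stripe (what RAID 4 already computes)."""
    return [fold_xor(row) for row in stripes]

def diagonal_parity(stripes):
    """Assign block (r, c) to diagonal (r + c) % n_rows and XOR each diagonal."""
    n_rows = len(stripes)
    diagonals = [0] * n_rows
    for r, row in enumerate(stripes):
        for c, value in enumerate(row):
            diagonals[(r + c) % n_rows] ^= value
    return diagonals

# Four stripes across four data disks; the integers stand in for disk blocks.
stripes = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]
print(row_parity(stripes))
print(diagonal_parity(stripes))
```

Because every data block participates in one row parity and one diagonal parity, two lost disks can be recovered by alternating between the two parity sets, which is the essence of the RAID-DP reconstruction.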


The WAFL file system not only tries to write complete stripes of data across disks, it always tries to write 16
complete stripes at a time. When the WAFL system writes 16 stripes simultaneously, it can do both the
normal horizontal parity calculations and the diagonal parity calculations in RAM before committing any data
to disk. The WAFL system wraps the diagonal calculations around this set of stripes and has all of the data
laid out in memory with the parity and the diagonal parity calculated before putting the data on disk. This
approach means that no extra read traffic or head movement slows storage I/O. The result is excellent
performance even with RAID-DP technology enabled.
In terms of throughput and latency, the performance is the same for RAID-DP technology as it is for RAID 4.
RAID-DP technology does introduce a 1% to 2% increase in storage controller CPU usage, because extra
calculations are done in RAM before laying the data down on the disk, so performance impact is minimal.
The bottom line is that no performance reason exists for NetApp customers not to run RAID-DP technology.
The next point to clarify is how much storage overhead this creates, because the system dedicates another disk
to parity. In other words, will a RAID-DP system require use of more disks than a comparable RAID 4
system does? RAID-DP technology requires the same ratio of parity disks as RAID 4 does. RAID 4 uses one
parity disk for every seven data disks; RAID-DP technology uses two parity disks for every 14 data disks.
The net result is exactly the same ratio of parity disks to data disks.
However, the resulting protection that RAID-DP technology provides is much greater, because RAID-DP
technology can survive two simultaneous disk failures.
Another important question that is commonly asked is: If performance from other RAID 6 implementations
is so bad, how do NetApp competitors get multidisk failure protection? The answer is that the majority of the
time, the competitors' implementations create full mirrors. That means that eight disks are protected by eight
disks, which effectively cuts usable disk space by 50%. This is a great selling point for NetApp; our usable
space in a double-disk protection scenario is far greater than that of our competition. When discussing this
issue with customers, be sure to focus on the double-disk failure protection.
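The overhead arithmetic above can be checked quickly. A minimal sketch, using the illustrative 7+1, 14+2, and 8+8 configurations from the text:

```python
# Parity overhead for the configurations discussed above: RAID 4 at 7+1,
# RAID-DP at 14+2, and a competitor's full mirror at 8+8.

def overhead(parity_disks, data_disks):
    """Fraction of raw capacity consumed by redundancy."""
    return parity_disks / (parity_disks + data_disks)

raid4_overhead = overhead(1, 7)     # one parity disk per seven data disks
raid_dp_overhead = overhead(2, 14)  # two parity disks per fourteen data disks
mirror_overhead = overhead(8, 8)    # eight disks mirrored by eight disks

print(f"RAID 4:  {raid4_overhead:.1%}")
print(f"RAID-DP: {raid_dp_overhead:.1%}")
print(f"Mirror:  {mirror_overhead:.1%}")
```

RAID 4 and RAID-DP both land at 12.5 percent overhead, while mirroring consumes half of the raw capacity.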


Data ONTAP Components: RAID-DP Parity

[Slide diagram: a RAID group with data disks (D), a row parity disk, and a diagonal parity disk (DP).]

RAID-DP technology protects against any double-disk failure.

RAID-DP technology is dual, diagonal parity data protection.
NetApp RAID-DP technology is an implementation of the industry standard RAID 6 as defined by the Storage Networking Industry Association (SNIA).
NOTE: The SNIA definition was recently updated to include NetApp RAID-DP technology: http://www.snia.org/education/dictionary/r/.

DATA ONTAP COMPONENTS: RAID-DP PARITY


Cost-Effective Data Reliability

The Problem:
- Double-disk failure is a mathematical certainty.
- RAID 5 (single-parity disk) has insufficient protection.
- RAID 10 (mirrored copy) doubles the cost.

The NetApp RAID-DP Solution:
- Protects against double-disk failure
- Provides high performance and fast rebuild
- Provides better protection than RAID 10 does and at a lower cost, without impacting performance

              RAID 5   RAID 6   RAID 10   RAID-DP
Cost          Low      Low      High      Low
Performance   Low      Low      High      High
Resiliency    Low      High     Med       High

COST-EFFECTIVE DATA RELIABILITY


Solidify RAID-DP technology as the foundation of data protection.
This table compares RAID-DP technology with RAID 5 and RAID 10.
While RAID 5 used to be considered adequate, it protects only against single-disk failures. With the
sheer number of drives in use today, combined with the similar life spans of drives that are manufactured
together, it is now a mathematical certainty that data centers must be prepared for double-disk failure
scenarios. This requirement rules out RAID 5.
Many competitors respond with RAID 10, which survives some double-drive failures (unless both failures are
on the same side of the mirror) and performs much better than RAID 5 does. But these improvements come at
a high price, because RAID 10 doubles the raw capacity that is needed and thus doubles the price.
NetApp offers RAID-DP technology and recommends it as a best practice. RAID-DP technology protects
against double-disk failure and combines the high performance of RAID 10 with the low price of RAID 5.
No trade-off is required with RAID-DP technology.
Typical competitors are labeled on the bottom of the page for comparison purposes.


Outstanding Customer Experience: NetApp RAID-DP Technology

Industry Statistics: Drive Replacements and Media Errors Increase with Drive Capacities

[Slide chart, comparing ATA and FC drives:]
- Typical disk drive replacement rate (per year): up to 5% (ATA 5%, FC 3%)
- Disk drive spec media or bit error likelihood (full-capacity transfer, 300-GB FC and 320-GB SATA): up to 2.6% (ATA 2.6%, FC 1.7%)
- Media or bit error likelihood with single parity (during reconstruction of an 8-drive RAID 4 or 5 set): up to 17.9% (ATA 17.9%, FC 0.2%)
- Media or bit error with second failure likelihood with double parity (during reconstruction of a 16-drive RAID-DP set): 0.0000000001%, less than 1 in a billion, when protected with RAID-DP technology

Source: NetApp, Seagate, and Hitachi

NetApp Confidential

OUTSTANDING CUSTOMER EXPERIENCE: NETAPP RAID-DP TECHNOLOGY


NetApp RAID-DP technology offers the highest level of protection with the best performance that is available
to protect against data loss due to a double-disk failure that results from media failure within the same RAID
group.
Now consider a storage array. Disks are grouped in RAID sets. RAID helps to build resiliency against
individual disk-drive failures. Upon a drive failure, the RAID set can reconstruct the lost drive by using
mathematical redundancy that is built into RAID. The reconstruction requires that all of the bits in the RAID
disks be read. Data loss occurs when you encounter a bit error during reconstruction read operations.
You now have the three ingredients for a perfect storm under single parity RAID:

Increased (up to two times) drive failures = more reconstructions with ATA drives.
Lower bit error resiliency on ATA drives = increased likelihood of bit errors.
Larger ATA disks = larger number of bits in a RAID group = increased likelihood of bit errors.

NetApp effectively eliminates this risk with RAID-DP technology. Others can, too, with RAID 6. The
difference is that the NetApp solution has minimal performance impact and is extremely simple to deploy.
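The perfect storm can be made concrete with a back-of-the-envelope probability. This sketch assumes a given unrecoverable bit error rate and illustrative drive sizes; real drive specifications vary, so the numbers show only the trend that larger groups of larger drives make an error during a single-parity rebuild increasingly likely.

```python
# Back-of-the-envelope odds of hitting at least one unrecoverable bit error
# while reading every surviving disk during a single-parity reconstruction.
# The bit error rate and drive sizes below are illustrative assumptions, not
# vendor specifications.

def p_rebuild_error(bit_error_rate, surviving_drives, drive_bytes):
    bits_read = surviving_drives * drive_bytes * 8
    return 1 - (1 - bit_error_rate) ** bits_read

BER = 1e-14  # assumed unrecoverable bit error rate (errors per bit read)

# Rebuilding an 8-drive RAID 4 or 5 set means reading the 7 surviving drives.
p_small = p_rebuild_error(BER, 7, 320 * 10**9)   # 320-GB drives
p_large = p_rebuild_error(BER, 7, 1 * 10**12)    # 1-TB drives

print(f"320-GB drives: {p_small:.0%} chance of an error during rebuild")
print(f"1-TB drives:   {p_large:.0%} chance of an error during rebuild")
```

Under these assumptions the risk grows from roughly one in six to nearly one in two as drive capacity triples, which is why single parity stops being acceptable for large SATA RAID groups.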


Solution Portfolio for Disk-Failure Protection

RAID-DP technology protects against double-disk failures within a RAID group.
RAID-DP technology with SyncMirror software (RAID-DP technology and RAID 1) protects against:
- Any five concurrent disk failures
- Storage subsystem failure and almost all higher-order failures
- Any four concurrent disk failures and at least one failed sector read
- At least two failures in half the mirror with the rest in the other half

RAID-DP technology provides cost-effective, increased data protection.

[Slide diagram, in order of increasing cost of protection and classes of failure scenarios covered:]
- Checksums
- Single parity: one disk
- RAID-DP technology: any two disks
- Single-parity RAID and SyncMirror software: any three disks
- RAID-DP technology and SyncMirror software: any five disks

SOLUTION PORTFOLIO FOR DISK-FAILURE PROTECTION


RAID-DP technology and SyncMirror software protect against data loss from:
- Any five concurrent disk failures
- Any four concurrent disk failures and at least one media error
- Loop failures


Lesson 7
Storage Layout: Aggregates


LESSON 7: STORAGE LAYOUT-AGGREGATES


Data ONTAP Storage Terminology: Aggregate

[Slide diagram: an aggregate composed of RAID Group 0, RAID Group 1, and RAID Group 2.]

An aggregate is a collection of physical disk space that is used as a container to support one or more flexible volumes. Aggregates are the physical layer.

DATA ONTAP STORAGE TERMINOLOGY: AGGREGATE


What is an aggregate? An aggregate is a collection of disks. It can be multiple RAID groups or one RAID
group. It is a collection of physical disk space that is used as a container to support one or more volumes.
An aggregate is the physical layer. When you create an aggregate, you do not yet have any usable space;
nothing exists for a host to connect to. Volumes, either traditional or flexible, must be created within the
aggregate.
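In Data ONTAP 7-Mode, for example, the typical sequence is to create the aggregate first and then carve flexible volumes out of it. A sketch of the commands (the names, RAID group size, disk count, and volume size here are illustrative):

```
fas1> aggr create aggr1 -t raid_dp -r 16 32
fas1> vol create vol1 aggr1 500g
```

Only after the vol create step does a host have something to mount or map; the aggregate by itself is just RAID-protected raw space.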


Basic Aggregate Attributes

- The aggregate default RAID type is RAID-DP technology. The RAID group size is definable for one or more RAID groups.
- Aggregates support SyncMirror software.
- Aggregate Snapshot copy support (enabled by default) targets all flexible volumes that are contained within the aggregate.

BASIC AGGREGATE ATTRIBUTES


In the current version of Data ONTAP software, aggregates default to RAID-DP technology. They can be
changed to RAID 4 as an option, but in most cases no reason to do so exists. The majority of customers, both
primary and secondary, and online and near-line storage use RAID-DP technology. The RAID group size is
definable, but the default is the most efficient.
Aggregate Snapshot copies are required only in aggregates that use SyncMirror software, including all
MetroCluster configurations.
Other customer-relevant uses are:

A feed into WAFL_check -prev_CP; this effectively restores the aggregate to that Snapshot copy (see
below) and then runs against it
The possibility of mirroring the entire aggregate

NOTE: This restores every FlexVol volume in the aggregate to the state that it was in when the aggregate
Snapshot copy was created. It is unlikely that this is what you want.
Users can use SyncMirror software to mirror aggregates if needed. Aggregate Snapshot copies are enabled by
default. A key point to consider when rolling back an aggregate Snapshot copy is that everything that is
contained in that aggregate is reverted to that point in time. The revert affects all of the FlexVol volumes
simultaneously.


Lesson 8
Storage Layout: Volumes


LESSON 8: STORAGE LAYOUT-VOLUMES


Data ONTAP Storage Terminology: Flexible Volume

[Slide diagram: an aggregate of RAID Group 0, RAID Group 1, and RAID Group 2, with FlexVol1 and FlexVol2 each striped across all three RAID groups.]

A flexible volume is a collection of disk space that is allocated as a subset of the available space within an aggregate. Flexible volumes are:
- Loosely tied to their aggregates
- The logical layer

DATA ONTAP STORAGE TERMINOLOGY: FLEXIBLE VOLUME


A flexible volume is a collection of disk space that is allocated from the available space within an aggregate.
FlexVol volumes are loosely tied to their aggregates and will be even more loosely tied in the future with the
implementation of Data ONTAP functionality.
Note that, as the picture shows, both FlexVol volumes are striped across all of the disks of the aggregate. That
is always true of a FlexVol volume, no matter what the size. A FlexVol volume can be as small as 20 MB or
as large as the entire aggregate, up to 16 terabytes with 32-bit aggregates in the current version of Data
ONTAP software.


Default Snapshot Copy Reserve

Snapshot Copy Reserve:
- Not usable for normal operations
- Used to protect Snapshot copies
- Online backup space
- Space amount is adjustable

[Slide diagram, contrasting two aggregate-space layouts:]
- Aggregate Snapshot reserve of 5% with a volume Snapshot copy reserve of 20% (active file system 80%, snap reserve 20%)
- Aggregate Snapshot reserve of 0% with a volume Snapshot copy reserve of 5% (active file system 95%, snap reserve 5%)

DEFAULT SNAPSHOT COPY RESERVE
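The reserve percentages translate directly into usable space. A small sketch (the 1,000-GB volume size is an arbitrary example):

```python
# How a volume Snapshot copy reserve divides space, using the percentages
# from the slide above. The 1,000-GB volume size is an arbitrary example.

def split_space(volume_gb, snap_reserve_pct):
    """Return (active file system GB, Snapshot reserve GB)."""
    reserve = volume_gb * snap_reserve_pct / 100
    return volume_gb - reserve, reserve

print(split_space(1000, 20))  # 20% reserve: 800 GB active, 200 GB reserve
print(split_space(1000, 5))   # 5% reserve: 950 GB active, 50 GB reserve
```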

Lesson 9
Storage Layout: Qtrees


LESSON 9: STORAGE LAYOUT-QTREES


Qtrees

A qtree (quota tree) is a special directory that:
- Can be created only in the root of a volume
- Looks just like a directory to the client
- Can limit disk space and files by applying quotas to the qtree
- Can have security style and oplock settings independent of its volume and other qtrees
- Can be used for backup and recovery in 7-Mode (qtree SnapMirror and SnapVault)
- Can be used to separate LUNs within a volume

QTREES
Qtrees are similar to flexible volumes but have the following unique characteristics:
- Allow you to set security styles
- Allow you to set oplocks for CIFS clients
- Allow you to set up and apply quotas
- Are used as a backup unit for SnapMirror and SnapVault


Quotas

Quotas are specified to:
- Limit the amount of disk space that can be used
- Track disk space usage
- Warn of excessive usage

Quota targets:
- Users
- Groups
- Qtrees

QUOTAS
Quotas are important tools for managing the use of disk space on your storage system. A quota is a limit that
is set to control or monitor the number of files or the amount of disk space that an individual or group can
consume. Quotas allow you to manage and track the use of disk space by clients on your system.
A quota is used to:
- Limit the amount of disk space or the number of files that can be used
- Track the amount of disk space or number of files used, without imposing a limit
- Warn users when their disk space or file usage is high
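In Data ONTAP 7-Mode, quotas are defined in the /etc/quotas file on the root volume. A hedged sketch of the format (the paths, user names, and limits below are illustrative, not recommendations):

```
#Quota Target      Type              Disk   Files
*                  user@/vol/home    100M   -
jsmith             user@/vol/home    500M   -
/vol/vol1/qtree1   tree              10G    50K
```

The first line sets a default per-user limit on /vol/home, the explicit jsmith entry overrides that default, and the tree entry caps a qtree at 10 GB and 50,000 files. Run quota on (or quota resize) for the affected volume after editing the file.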


Exercise 2
Module 1: Sign-in and Build the
Base Lab Configuration

Time Estimate: 10 Minutes


EXERCISE 2
Please refer to your exercise guide.


Module Summary

Now that you have completed this module, you should be able to:
- Describe the WAFL file system
- Demonstrate a Snapshot copy
- Explain RAID 4 and RAID-DP technology
- Explain how NVRAM operations work
- Define and describe aggregates and volumes

MODULE SUMMARY


Module 2
Core Software Technology


MODULE 2: CORE SOFTWARE TECHNOLOGY


Module Overview
This module focuses on NetApp core software
technology:
Data ONTAP 8.x 7-Mode and Cluster-Mode:
32-bit and 64-bit aggregates
On-box features of Data ONTAP software
Protocol support
Off-box features of Data ONTAP software
OnCommand management software
The FlexShare quality of service tool


MODULE OVERVIEW


Module Objectives

After this module, you should be able to:
- Identify NetApp core software:
  - On-box features of Data ONTAP software
  - Off-box features of Data ONTAP software
- Describe the on-box and off-box capabilities of NetApp software

MODULE OBJECTIVES


Lesson 1
Core Software Technology


LESSON 1: CORE SOFTWARE TECHNOLOGY


Core Software Technology

- Off-Box Storage Management
- Off-Box Administration Tools
- Data ONTAP 8.1 Cluster-Mode
- Data ONTAP 8.1 7-Mode for FAS Systems and for V-Series Systems
- WAFL (Write Anywhere File Layout) Core Technology:
  - Snapshot Technology
  - RAID 4 or RAID-DP Technology
  - Nonvolatile RAM (NVRAM) Operations
  - Aggregates and Volumes
  - On-Box, Value-Added Software
- Protocol Support: FC and Ethernet

CORE SOFTWARE TECHNOLOGY


This module is a quick, high-level review of NetApp core software technology. You have taken the Web-based courses that were listed as prerequisites for this class. One of those modules provided an overview of
NetApp software technology, so this module is a review of that information and an introduction to other
products.
This module emphasizes that many important core features of NetApp software are inside the Data ONTAP
operating system. These features do not require a separate download, a separate install, a reboot, a separate
blade, or a gateway. These features are inside the OS, ready to be used. Many capabilities require an
additional license for customers to enable and use them, but the features are all contained within the OS.
Other pieces exist outside of Data ONTAP software. Some pieces reside on SAN hosts to help to manage
those hosts and to bring management simplicity to application and host administrators. These features free
these administrators from relying on server administrators and storage administrators to accomplish basic
storage tasks.
Administration tools are available for administrating large environments. For example, Yahoo!, the largest
NetApp customer, has roughly 1,200 systems online simultaneously. How do you manage 1,200 systems?
That is an important, challenging question. Even a small shop may have five systems, and if the shop has only
one IT person, it is a daunting task to manage all five systems. In response to those needs, NetApp has
administration tools that are discussed in later modules of this course.
The topics in the center of this slide are core technologies that form the foundation of all NetApp products.
This module briefly reviews these technologies.


Lesson 2
The Data ONTAP Operating System


LESSON 2: THE DATA ONTAP OPERATING SYSTEM


This module starts with Data ONTAP software, listed at the top of the previous slide, which is the NetApp
OS. The primary function of the Data ONTAP operating system is to flow data between client computers and
the disks or tape that are used for storage and archiving.


What Is the Data ONTAP 8.1 Operating System?

- The production-ready, enterprise-class version of Data ONTAP 8.1 software
- A system with one codebase and two separately orderable product variations:
  - Cluster-Mode: the next version of the Data ONTAP GX operating system
  - 7-Mode: the next version of the Data ONTAP 7G operating system after Data ONTAP 7.3.x
- The first step on the path to the complete scale-out capabilities of the Data ONTAP 8 operating system

WHAT IS THE DATA ONTAP 8.1 OPERATING SYSTEM?


The Data ONTAP 8.1 operating system is the production-ready, enterprise-class version of Data ONTAP 8.1
software.
This OS is one single codebase with two separately orderable product variations:

Cluster-Mode: the next version of Data ONTAP GX software


7-Mode: the next version of Data ONTAP 7G software after Data ONTAP 7.3.x

Data ONTAP 8.1 software is the first step on the path to the complete scale-out capabilities of
Data ONTAP 8 software.


A Tale of Two Products

[Slide diagram: the Data ONTAP and Data ONTAP 7G line and the SpinFS and Data ONTAP GX line converge into Data ONTAP 8.0, which is available as 7-Mode and Cluster-Mode.]

A TALE OF TWO PRODUCTS


In 1992, NetApp introduced the Data ONTAP operating system and ushered in the network-attached storage
(NAS) industry. Since then, NetApp has added features and solutions to its product portfolio to meet the
needs of its customers. In 2004, NetApp acquired Spinnaker Networks to fold its scalable, clustered file-system technology into Data ONTAP software. That plan came to fruition in 2006, when NetApp released
Data ONTAP GX software, the first clustered NetApp product. NetApp also continued to enhance and sell
Data ONTAP 7G software.
Having two products provided a way to meet the needs of the NetApp customers who were happy with the
classic Data ONTAP software while allowing customers with certain application requirements to use Data
ONTAP GX software to achieve even higher levels of performance (and with the flexibility and transparency
that is afforded by its scale-out architecture).
Although the goal was always to merge the two products into one, the migration path for Data ONTAP 7G
customers to get to clustered storage eventually required a big leap. Enter Data ONTAP 8.0 software. The
goal for Data ONTAP 8.0 software was to create one code line that allows Data ONTAP 7G customers to
operate a Data ONTAP 8.0 7-Mode system in the manner to which they're accustomed while also providing a
first step in the eventual move to a clustered environment. Data ONTAP 8.0 Cluster-Mode allows Data
ONTAP GX customers to upgrade and continue to operate their clusters as they're accustomed.


The Data ONTAP 7G Operating System

[Slide diagram of the 7G stack: the NFS, CIFS, FC, and iSCSI protocols over the WAFL Virtualization Layer and the RAID and Storage Interface, on top of 7G volumes.]

THE DATA ONTAP 7G OPERATING SYSTEM


The Data ONTAP 8.1 7-Mode Operating System

- Is compatible with the Data ONTAP 7G operating system for volume-access paths and protocol stack
- Supports the Data ONTAP 7G software suite
- Supports NFS, CIFS, iSCSI, FC, and FCoE

[Slide diagram: FreeBSD hosting the D-Blade, in which the NFS, CIFS, FC, and iSCSI protocols sit over the WAFL Virtualization Layer and the RAID and Storage Interface (the 7-Mode stack), on top of 7-Mode volumes.]

THE DATA ONTAP 8.1 7-MODE OPERATING SYSTEM


FreeBSD is an advanced OS for x86-compatible (including Pentium and Athlon), x86-64-compatible
(including Opteron, Athlon 64, and EM64T), ARM, IA-64, PowerPC, PC-98, and UltraSPARC architectures.
It is derived from BSD, the version of UNIX that was developed at the University of California, Berkeley.
The D-Blade is the data blade, which is a software component.


The Data ONTAP 8.1 Cluster-Mode Operating System

- Similar access paths and protocol stack to the Data ONTAP GX operating system
- 8.1 supports NAS protocols: CIFS, NFS, pNFS
- 8.1 supports SAN protocols: iSCSI, FC, FCoE

[Slide diagram: FreeBSD hosting the N-Blade (CIFS and NFS file semantics) and the SCSI-Blade (iSCSI and FC LUN semantics), connected over the cluster interconnect to the D-Blade, which contains the WAFL Virtualization Layer and the RAID and storage interface, on top of Cluster-Mode volumes.]

THE DATA ONTAP 8.1 CLUSTER-MODE OPERATING SYSTEM


The D-Blade is the data blade, which is a software component.
The N-Blade is the network blade, which is a software component.
The SCSI-Blade is the SAN blade, which is a software component.


There's a Huge Shift in the Market

Dynamics of today's data center:
- Explosive data growth
- Aging dedicated architectures
- Shared resources: apps, servers, network, and storage
- Changing needs

CIOs are being forced to re-evaluate what enterprise storage means for the enterprise infrastructure.

THERE'S A HUGE SHIFT IN THE MARKET


The following trends are witnessed in the market today:

A huge data explosion creates the need for scalability, capacity elasticity, and simple data management.
Aging infrastructures create the need for business continuity, the need to protect against data loss, and the
need for data retention and archiving.
Silos of data create the need for unified storage.
Changing business needs create the need for dynamic, customizable storage.

Perhaps the biggest challenge that IT decision makers face is getting a platform that can store and access all
the current and future information, adjust to changing business needs (with integrated data protection), and do
this without any disruption to clients. That means a highly scalable, shared enterprise infrastructure for the
future.


Cluster-Mode Terminology

Virtual Server (Vserver):
- Similar to a MultiStore vfiler in Data ONTAP 7G
- Creates a namespace within a cluster

Logical Interface (LIF):
- A logical path between a physical port and a Vserver

Interface Group (ifgrp):
- A Virtual Interface (VIF) in Data ONTAP 7G
- Creates a logical trunking of physical ports

CLUSTER-MODE TERMINOLOGY


Scalability in Three Dimensions

- Performance scaling
- Capacity scaling
- Operational scaling

SCALABILITY IN THREE DIMENSIONS


Scalability is one of the key foundations for the future of Data ONTAP software. From the user side, it
provides a single virtualized pool of all storage. From a system point of view, it provides performance and
capacity scalability by adding controllers (performance) and storage (capacity). Storage is accessed through
an abstraction, and the cluster delivers the right storage while keeping the complexity behind the scenes.
1. Scaling for performance is a given. It starts at the bottom with the appropriately designed block store
(WAFL) and then moves up to supporting the latest, fastest media types (flash, flash as cache, SAS, and so
on) and dealing with multiple faster cores. We have a fully integrated technology agenda to drive more
performance, which is all the more important with consolidation.
2. The sheer amount of storage we have to deal with. Consolidated data centers have more terabytes, and it is
no longer enough to just have large systems. So we need a single logical pool that can be provisioned
across lots of arrays. This is the basis of our next-generation Data ONTAP 8 software.
3. How do we make sure that the storage can operationally scale? Storage administrators can no longer spend
time on manual activities (provisioning, data protection, tuning, and so on). This is all about efficiencies
and the ability to scale systems nondisruptively.
In the early days, the only way to upgrade was to scale up: get the bigger system, the better controller. With
Ethernet networks and the emergence of the Internet, many environments scale out with more systems.
But with a flexible platform built with these new workloads in mind, you can now scale for capacity, allowing
applications to get the performance and quality of storage necessary to run.


Data ONTAP 8.1 Cluster-Mode


Unified Architecture at Scale
Protocols: FC, FCoE, iSCSI, CIFS, NFS, and pNFS
Scalability: performance scaling, capacity scaling, and operational scaling
Storage Efficiency: deduplication, compression, thin provisioning, and cloning
Cost Versus Performance: Flash Cache, solid-state disk (SSD), and SAS and SATA
Management and Ecosystem Integration: unified management, secure multi-tenancy, and onboard antivirus
Integrated Data Protection: Snapshot technology, load-sharing mirrors, and asynchronous SnapMirror
Nondisruptive Operations (NDOs)
Common Software, Common Systems, Common Management


DATA ONTAP 8.1 CLUSTER-MODE


UNIFIED ARCHITECTURE AT SCALE

Cluster-Mode is focused on scalability for growth in three dimensions:
Performance (meet the continual need to go faster)
Capacity (to store ever more data)
Operational scalability (so that you can do more with less)
With Data ONTAP 8.1 software, NetApp dramatically enhances capability in all these areas to make them ready for enterprise workloads.
Some enhancements make the system closer to 7-Mode; others go beyond 7-Mode.
This version will be especially attractive to new customer segments.
The included features are:

Unified architecture at scale (supported NAS and SAN protocols)


Integrated storage efficiencies
Flexible performance options
Unified management and ecosystem integration
Integrated data protection
Nondisruptive operations (how customers transcend to an always-on infrastructure with nondisruptive
operations throughout their systems life spans)


Data ONTAP 8 Cluster-Mode


Overview

NetApp Storage
A single system image for
up to 24 nodes (4 for SAN)
Support for FAS and V-Series systems
Scaling to 51 PB capacity
Scaling to multiple GBps throughput
On-demand resource balancing

Third-Party Storage
with V-Series Systems
Integrated data protection and storage
efficiency
Multiprotocol access
Common software and management
Always-on infrastructure
A fully integrated NetApp solution


DATA ONTAP 8 CLUSTER-MODE


OVERVIEW

A single system can range in size up to 24 nodes for NAS; for SAN, clusters can be 2 or 4 nodes.


Data ONTAP Unified Storage


Data storage for SAN hosts and NAS clients

High availability:
Hardware and software resilience
Online software updates

Core storage capabilities:
RAID-DP technology
Thin provisioning
Storage efficiency: deduplication, compression, and cloning
Integrated data protection: Snapshot copies and replication

DATA ONTAP UNIFIED STORAGE


Customers can serve out data for all protocols by using HA pairs and back-end storage.
Customers can internally manage:

Front-end client network and protocols


Back-end storage, which incorporates the benefits of:


The WAFL file system


RAID-DP technology
Thin provisioning and Snapshot copies
The new 64-bit aggregates
Storage efficiencies (deduplication and compression)


Data ONTAP 8.1 Cluster-Mode


Virtualization of storage and data access from underlying controller and storage hardware

Flexible data management and presentation
Transparent migration of data and network resources
Interconnect enablement of cluster-wide shared resources

Virtualized Storage and Network

DATA ONTAP 8.1 CLUSTER-MODE


Data ONTAP Cluster-Mode splits the standard functions by virtualizing the storage and client-access
protocols with front-end client-access protocols and back-end storage components. The back end is still the
same capable WAFL file system with RAID-DP technology, Snapshot copies, and thin provisioning. The
back end connects all the controllers in the cluster with a high-speed, reliable interconnect. All nodes can thus
share data, communicate, and synchronize together.


Data ONTAP 8.1 Cluster-Mode


Virtual Server
A virtual server (Vserver) provides a logical, flexible, secure resource pool for a NAS namespace and LUNs.

All data access is through a Vserver, which supports one or more protocols.
A Vserver includes FlexVol volumes, LUNs, and logical network interfaces (LIFs).
A minimum of one Vserver is required; hundreds can be supported.

Integrated Shared Architecture



DATA ONTAP 8.1 CLUSTER-MODE


VIRTUAL SERVER

Virtual server (Vserver) architecture provides the NetApp core value propositions for Data ONTAP 8.1 Cluster-Mode: single-system management; a single mountpoint and namespace for NAS; a scalable container for LUNs; and transparent data mobility, the ability to move volumes seamlessly around all the aggregates on all the controllers, which provides nondisruptive, nonstop operations.
The last major architectural component is the Vserver. It allows the cluster to serve data and acts as a container for the logical client network interfaces, volumes, and LUNs. All client data is accessed through a Vserver, so a minimum of one Vserver is required.
A Cluster-Mode system can support hundreds of Vservers.
A Vserver can support any NAS or SAN protocols.
It forms a namespace and LUN container for clients and hosts to access.
For SAN, the Vserver is a scalable container for volumes that hold LUNs; hosts access those LUNs by using multipath I/O (MPIO) and Asymmetric Logical Unit Access (ALUA) through all nodes in the cluster.
For NAS, a namespace consists of FlexVol volumes junctioned together at subdirectories below the root. It forms a hierarchy that is presented to the clients as a single CIFS share or NFS export.
A Vserver can contain from one volume to hundreds of LUNs and volumes, and NAS and SAN data can reside within the same Vserver, which provides a unified architecture at scale.
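As a rough sketch of how a junctioned namespace resolves client paths (the junction table and helper function here are hypothetical, not Data ONTAP code), each FlexVol volume is mounted at a junction path below the Vserver root, and a client-visible path resolves to the volume with the longest matching junction prefix:

```python
# Illustrative sketch (hypothetical names, not Data ONTAP internals):
# junctions stitch FlexVol volumes into one namespace. A client path
# resolves to (volume, path-within-volume) by longest-prefix match.

junctions = {                 # junction path -> volume name
    "/": "vs1_root",
    "/eng": "vol_eng",
    "/eng/builds": "vol_builds",
}

def resolve(path):
    """Return (volume, relative path) for a client-visible path."""
    best = max((j for j in junctions
                if path == j or path.startswith(j.rstrip("/") + "/")),
               key=len)
    rel = path[len(best):].lstrip("/")
    return junctions[best], "/" + rel

print(resolve("/eng/builds/nightly.iso"))  # ('vol_builds', '/nightly.iso')
print(resolve("/eng/specs.txt"))           # ('vol_eng', '/specs.txt')
print(resolve("/readme"))                  # ('vs1_root', '/readme')
```

Because clients see only the junction paths, a volume can be relocated to another aggregate without its position in this tree changing.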


Data ONTAP 8.1 Cluster-Mode


Cluster Expansion
Cluster-Mode can nondisruptively grow and redistribute resources.

The Vserver adjusts as the cluster is seamlessly expanded.
You can mix and match controllers.
You can mix and match drive types: SATA, SAS, FC, and SSD.
Third-party arrays can work with V-Series systems.
Cluster-Mode can host thousands of volumes.
Cluster-Mode includes a PB-sized namespace.

Transparent Operational Flexibility



DATA ONTAP 8.1 CLUSTER-MODE


CLUSTER EXPANSION

Data ONTAP 8.1 Cluster-Mode started with a two-node cluster, and NetApp expects this configuration to be popular.
The inherent architecture enables scaling of the cluster for capacity and performance optimization.
You can add two nodes for a four-way cluster.
Virtualization means that expansion occurs without interruption to clients.
Network and storage resources can be redistributed nondisruptively across the physical controllers.
You can grow a cluster to 24 controllers and NAS workloads that serve thousands of volumes to create a PB-sized namespace. (Four nodes for SAN.)
You can build up the cluster in HA pairs.
Support is available for all the currently shipping platforms.
The most recently available controllers have essentially the same support matrix for both controllers and disk shelves as in 7-Mode.
You can mix and match controllers within the cluster (as long as the members of each HA pair are the same controller type).
You can mix and match disk types to match the business need: SAS, FC, SATA, and SSD.
Include V-Series storage systems in the cluster to front supported third-party arrays with all the Data ONTAP Cluster-Mode benefits.


Data ONTAP 8.1 Cluster-Mode


Multi-Tenancy
Vservers enable multiple storage domains that share a common resource pool.
Vservers maintain logical separation: They define domains for volumes, LIFs, and access protocols.

Vservers provide secure, delegated administration.
Hundreds of Vservers can be supported.

Workload Isolation

DATA ONTAP 8.1 CLUSTER-MODE


MULTI-TENANCY

So far, this module has shown only one Vserver.


Vservers are the basis for multi-tenancy operations, too.
This graphic shows a second Vserver, VS2, and associated volumes, LUNs, and logical interfaces. Note that
one node hosts volumes from both Vservers, VS1 and VS2. This is fine and expected, but the logical
separation is maintained.
The new Vserver presents another namespace and set of LUNs, that is, another potential CIFS share or NFS
mount for the same or different clients and with secure administration and delegated administration.
The same or different clients and hosts can mount it by using a logical interface from the new Vserver. Again,
each Vserver exists with volumes and LUNs on one, some, or all of the aggregates and nodes.
You can define hundreds of Vservers in a single cluster. Vservers can use any combination of NAS and SAN
protocols, which provides a true unified architecture at scale.


Data ONTAP 8.1 Cluster-Mode


Summary
Cluster-Mode:
Is designed for continuous data access
Provides unified architecture at scale
Provides dynamic, transparent, and on-demand reconfiguration
Provides single-system management
Is a fully integrated solution from NetApp

Storage Infrastructure for Scalable Shared Enterprise

DATA ONTAP 8.1 CLUSTER-MODE


SUMMARY


Is designed for continuous data access


Provides unified architecture at scale
Provides dynamic, transparent, and on-demand reconfiguration
Provides single-system management
Is a fully integrated solution from NetApp


Core Software Technology


Off-Box Storage Management

Off-Box Administration Tools

Data ONTAP 8.1 Cluster Mode


Data ONTAP 8.1 7-Mode for FAS Systems
and for V-Series Systems

WAFL Core Technology


Snapshot Technology
RAID 4 or RAID-DP Technology
NVRAM Operations
Aggregates and Volumes

On-Box, Value-Added Software

Protocol Support

FC and Ethernet


CORE SOFTWARE TECHNOLOGY


Lesson 3
Nondisruptive Operations


LESSON 3: NONDISRUPTIVE OPERATIONS


Next you'll review nondisruptive operations (NDOs).


Data ONTAP 8.1 Cluster-Mode


Nondisruptive Operations

Volume Movement
LIF Migration and Load Balancing
High Availability: Storage Failover (SFO) and LIF Failover
Nondisruptive Upgrades (NDUs)

DATA ONTAP 8.1 CLUSTER-MODE


NONDISRUPTIVE OPERATIONS

NDOs are among the key benefits of Data ONTAP Cluster-Mode.


On-demand flexibility allows NetApp customers to seamlessly add capacity, rebalance resources, and rapidly
grow the system.
Operational efficiency provides virtualized tiered services that allow NetApp customers to match business
priorities.
Always-on provides serviceability and the ability to refresh technology without disruption to business
systems.
NDOs include several components that, when used in conjunction with each other, can provide an always-on, nondisruptive infrastructure.
The following sections of this module provide details about each of these components and discuss some of the common use cases and operations that each can provide:

Volume movement
LIF migration and load balancing
High availability with storage failover and LIF failover
Nondisruptive upgrades


Volume Movement


VOLUME MOVEMENT


Cluster-Mode Transparent Volume Movement
NFS, CIFS, iSCSI, FC, and FCoE

Continuous data access by clients and hosts
Uninterrupted Access

Nondisruptively move volumes between any aggregates anywhere in the cluster
Uses Snapshot technology to copy data to a new aggregate in the background
Storage space savings, mirror relationships, and Snapshot copies are unchanged

CLUSTER-MODE TRANSPARENT VOLUME MOVEMENT


Consider how volume movement works. Physically, a volume is moved by a single administrator command (CLI or System Manager 2.0) from one aggregate to another. The data is copied to the new volume by a series of Snapshot-copy transfers, each transfer copying a diminishing delta from the previous Snapshot copy.
Only during the final copy is the volume locked for I/O while the last changed blocks are copied and the file handles are updated to point to the new volume. This step should easily complete within the default NFS timeout (600 seconds) and almost always within the CIFS timeout period of 45 seconds. In some especially active environments, enough data changes that the final copy takes longer than the timeout period; options are available for managing those rare occasions. Also, by using MPIO and ALUA, SAN paths are automatically updated to the optimized path after the volume moves to its new location. With this capability and the NAS namespace, the client's view of the namespace is unchanged after a volume moves, and SAN hosts continuously have access to the data.
Note that you can also move LIFs to different ports. LIFs move automatically in the case of a node failure or, optionally, to dynamically rebalance client connections. An administrator can also manually move LIFs to different controller nodes as part of planned maintenance events. This is required to clear a controller completely of volumes before taking it down for maintenance or replacing it entirely. This is covered in more detail later in the presentation.
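The diminishing-delta transfer scheme described above can be pictured with a toy simulation (the change rate and cutover threshold are invented parameters, not actual Data ONTAP values): each background pass copies everything that changed during the previous pass, so the deltas shrink until the remainder is small enough to copy under a brief I/O lock.

```python
# Simplified simulation (not NetApp code) of the transfer scheme described
# above: the volume keeps taking writes while successive Snapshot-based
# copies ship a shrinking delta; only the final, small delta is copied
# with the volume briefly locked for I/O.

def move_volume(initial_dirty, change_rate=0.3, cutover_threshold=5):
    """Return the per-pass delta sizes; the last pass is the locked cutover.

    change_rate models how much new data clients write while one pass
    runs, as a fraction of the data that pass transferred.
    """
    deltas = []
    dirty = initial_dirty
    # Background passes: copy everything dirty; meanwhile clients dirty more.
    while dirty > cutover_threshold:
        deltas.append(dirty)
        dirty = int(dirty * change_rate)   # diminishing delta
    deltas.append(dirty)                   # final pass under a brief I/O lock
    return deltas

passes = move_volume(initial_dirty=1000)
print(passes)   # [1000, 300, 90, 27, 8, 2] -- each delta shrinks
```

The "rare occasions" mentioned above correspond to a change rate high enough that the delta never converges below the threshold within the protocol timeout.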


Cluster-Mode: On-Demand Flexibility

The Challenges
Disk full errors
Overprovisioning in anticipation of future capacity needs
Managing access to new storage

The Benefits
Nondisruptive volume movement is transparent to clients and hosts.
Namespace and LUN mapping is unchanged.
The storage infrastructure is shared.

The Results
Seamlessly added capacity
Rebalanced resources
Rapidly deployed new system

CLUSTER-MODE: ON-DEMAND FLEXIBILITY


Cluster-Mode: Operational Efficiency

The Challenges
Changing workload demands
Critical projects that need appropriate resources

The Benefits
Nondisruptive volume movement is transparent to clients and hosts.
You can mix controllers and disk types in the same cluster.
On-demand mobility is available for critical projects.
You can adapt resources to meet business demand.

The Results
Virtualized tiered services
Integrated unified system
Matched business priorities

CLUSTER-MODE: OPERATIONAL EFFICIENCY


Cluster-Mode: Operational Lifecycle

The Challenges
Upgrading an entire storage system
Maintaining 24 x 7 operation during the move

The Execution
Identify affected volumes and LUNs.
Nondisruptively move volumes.
Perform technology refresh.
Power up node and rejoin cluster.
Move volumes back to new node.
Repeat.

The Results
Zero downtime
Zero processing interruptions
Zero client changes

Always-On Infrastructure

CLUSTER-MODE: OPERATIONAL LIFECYCLE


LIF Migration and Load


Balancing


LIF MIGRATION AND LOAD BALANCING


LIF Migration
LIFs are moved to other physical ports within the cluster.

Nondisruptive to hosts and clients
Load balancing of NAS client access
Continued data access by clients: NFS and SMB 2
Redistribution of client access during maintenance operations

LIF MIGRATION


Dynamic IP Load Balancing


DYNAMIC IP LOAD BALANCING


Load-Balancing Client Network Access

LIFs are not permanently tied to a network port.
Two load-balancing options exist:
Assign new clients by using Domain Name System (DNS) lookup to the least-loaded LIF.
Rebalance LIFs across nodes manually as load changes.

LOAD-BALANCING CLIENT NETWORK ACCESS


A name server is built into the cluster. It is used with the customer's site-wide name server by configuring the site-wide name server to forward requests to the Data ONTAP 8.1 cluster.
The cluster then identifies a lightly-loaded LIF and returns an IP address for the client to use.
The system administrator can attach specific weights to specific LIFs, that is, the administrator can still
configure round robin.
If the client I/O becomes unbalanced, the load can be periodically redistributed across the cluster. This is called autorebalance, which:

Is for NFS only; CIFS traffic disqualifies a LIF from movement
Respects network failover group rules
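A minimal sketch of the least-loaded-LIF answer described above (the LIF table, weighting rule, and helper function are hypothetical; the real on-box name server uses its own metrics):

```python
# Illustrative sketch (hypothetical, not the actual on-box DNS code): the
# cluster's built-in name server answers a lookup with the address of the
# least-loaded LIF, optionally biased by administrator-assigned weights.

lifs = [
    # (name, address, current load, admin weight; higher weight = preferred)
    ("lif1", "192.0.2.11", 0.80, 1.0),
    ("lif2", "192.0.2.12", 0.35, 1.0),
    ("lif3", "192.0.2.13", 0.40, 2.0),
]

def answer_lookup(lifs):
    """Pick the LIF with the lowest weight-adjusted load."""
    name, addr, _, _ = min(lifs, key=lambda l: l[2] / l[3])
    return name, addr

print(answer_lookup(lifs))   # lif3 wins: 0.40/2.0 beats 0.35/1.0
```

This is how an administrator-assigned weight can steer new clients toward a LIF even when its raw load is not the lowest.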


High Availability


HIGH AVAILABILITY


35

Storage Failover (SFO)

Active-Active HA Pair


STORAGE FAILOVER (SFO)


Two nodes in the same cluster are connected as an SFO pair.
Pairs are called active-active configurations.
Each node of the pair is a fully functioning node in the cluster, hence the active-active term.
Clusters can be heterogeneous in terms of hardware and Data ONTAP versions, but an SFO pair must be the
same controller model.


High Availability in Cluster-Mode


A cluster is composed of high-availability (HA) pairs to
provide resiliency:
Each HA pair consists of two of the same controller model. A
cluster can be built with HA pairs, where each pair has
different controller models from other HA pairs.
HA pairs are the basis for NDU.

If a controller fails:
The storage control fails over to the HA pair partner (SFO).
SAN data traffic moves to LIFs that are configured on the partner's ports.
NAS data LIFs fail over to ports on other nodes in the cluster that are within the same LIF failover group.
Data-protection data transfers move to intercluster LIFs that are configured on the partner's ports.

HIGH AVAILABILITY IN CLUSTER-MODE


For more details on high availability, refer to TR-3450, HA Pair Controller Configuration Overview and Best
Practices.


Nondisruptive Upgrade


NONDISRUPTIVE UPGRADE


Data ONTAP 8.1 Cluster-Mode


Nondisruptive Rolling Upgrade (1 of 2)

Rolling upgrade is the process of upgrading Data ONTAP software on up to one-third of the nodes in a cluster concurrently (by following the NDU procedure).

DATA ONTAP 8.1 CLUSTER-MODE


NONDISRUPTIVE ROLLING UPGRADE (1 OF 2)

The NDU procedure is still HA pair by HA pair. For more details, refer to TR-3450, HA Pair Controller
Configuration Overview and Best Practices.
NOTE: This slide requires manual clicking to advance the animation.
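One way to picture the rolling-upgrade constraint above (at most one-third of the nodes concurrently, proceeding HA pair by HA pair) is a simple batching sketch; the function and node names are invented for illustration, not the actual NDU tooling:

```python
# Sketch (assumed scheduling, not the documented NDU procedure): a rolling
# upgrade proceeds HA pair by HA pair, and at most one-third of the
# cluster's nodes are upgraded concurrently.

def upgrade_batches(ha_pairs, total_nodes):
    """Group HA pairs into batches of <= total_nodes // 3 nodes each."""
    max_nodes = max(total_nodes // 3, 2)   # always room for one 2-node pair
    batches, current, count = [], [], 0
    for pair in ha_pairs:
        if count + 2 > max_nodes:          # next pair would exceed the cap
            batches.append(current)
            current, count = [], 0
        current.append(pair)
        count += 2
    if current:
        batches.append(current)
    return batches

# A 12-node cluster (6 HA pairs): up to 4 nodes (2 pairs) per batch.
pairs = [("n1", "n2"), ("n3", "n4"), ("n5", "n6"),
         ("n7", "n8"), ("n9", "n10"), ("n11", "n12")]
batches = upgrade_batches(pairs, total_nodes=12)
print(batches)   # three batches of two HA pairs each
```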


Data ONTAP 8.1 Cluster-Mode


Nondisruptive Rolling Upgrade (2 of 2)

Cluster-wide new features are not available until all nodes are upgraded, such as new network protocols (NFSv4.1 and SMB 2.1) and new cluster ZAPIs.
Enhancements and features that are not cluster-wide are enabled when the HA pair upgrade is complete, such as new commands, new media types, and new aggregate types (64-bit aggregates).
A mixed-version cluster exists when two versions of Data ONTAP software are running on nodes within one cluster.
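The version-gating rule above can be sketched as two checks (hypothetical helper functions and node names, not actual Data ONTAP logic): cluster-wide features wait for every node, while HA-pair-scoped features unlock as soon as that pair is upgraded.

```python
# Sketch (hypothetical checks, not Data ONTAP internals) of the gating
# described above: cluster-wide features wait for every node, while
# HA-pair-scoped features unlock as soon as that pair is upgraded.

node_versions = {"n1": "8.1", "n2": "8.1", "n3": "8.0", "n4": "8.0"}
ha_pairs = [("n1", "n2"), ("n3", "n4")]

def cluster_feature_available(versions, required="8.1"):
    # e.g. NFSv4.1, SMB 2.1, new cluster ZAPIs
    return all(v == required for v in versions.values())

def pair_feature_available(pair, versions, required="8.1"):
    # e.g. new commands, new media types, 64-bit aggregates
    return all(versions[node] == required for node in pair)

print(cluster_feature_available(node_versions))             # False: mixed cluster
print(pair_feature_available(("n1", "n2"), node_versions))  # True: pair upgraded
```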

DATA ONTAP 8.1 CLUSTER-MODE


NONDISRUPTIVE ROLLING UPGRADE (2 OF 2)


Lesson 4
Software Modes: 7-Mode vs.
Cluster Mode


LESSON 4: SOFTWARE MODES: 7-MODE VS. CLUSTER-MODE


Cluster-Mode: Software Structure 2.0

FAS22xx, FAS/V-Series 3210/3240/3270, and 6210/6240/6280
Included software delivering unmatched value:

Base: Includes one protocol of choice and the base cluster key. (All protocols are included at $0 on 22xx; FCP is unavailable on 22xx.)
Additional protocols: Includes iSCSI, FCP (not available for 2240), CIFS, and NFS
SnapRestore (automated system recovery): Includes SnapRestore
SnapMirror (enhanced disaster recovery and replication): Includes SnapMirror
FlexClone (automated virtual cloning): Includes FlexClone
SnapManager Suite (automated application integration): Includes SnapManager for Exchange, SQL Server, SharePoint, Oracle, SAP, Virtual Infrastructure (feature currently unavailable for use), Hyper-V, and SnapDrive for Windows and UNIX

NOTE: Snapshot, thin provisioning, RAID-DP, deduplication, cluster failover, and FlexCache are included and preinstalled with Data ONTAP 8.1.

CLUSTER-MODE: SOFTWARE STRUCTURE 2.0

FAS22XX, FAS/V-SERIES 3210/3240/3270, AND 6210/6240/6280


7-Mode: Software Structure 2.0

FAS2240, FAS/V-Series 3210/3240/3270, and 6210/6240/6280
Included software delivering unmatched value:

Data ONTAP Essentials: Includes one protocol of choice, HTTP, deduplication, NearStore, DSM/MPIO, SyncMirror, MultiStore, FlexCache, MetroCluster, and high availability
Additional protocols: Includes iSCSI, FCP, CIFS, and NFS
SnapRestore (automated system recovery): Includes SnapRestore
SnapMirror (enhanced disaster recovery and replication): Includes SnapMirror
FlexClone (automated virtual cloning): Includes FlexClone
Insight Balance (performance and capacity management): Includes Insight Balance
SnapVault (simplified disk-to-disk backup): Includes SnapVault Primary and SnapVault Secondary
SnapManager Suite (automated application integration): Includes SnapManager for Exchange, SQL Server, SharePoint, Oracle, SAP, Virtual Infrastructure, Hyper-V, and SnapDrive for Windows and UNIX
Complete Bundle (all software for all-inclusive convenience): Includes all protocols, Single Mailbox Recovery, SnapLock, SnapRestore, SnapMirror, FlexClone, SnapVault, and SnapManager Suite


7-MODE: SOFTWARE STRUCTURE 2.0

FAS2240, FAS/V-SERIES 3210/3240/3270, AND 6210/6240/6280

Data ONTAP Essentials has a couple of exceptions: the FAS2240 includes all protocols, and in the FAS2240 software structure these features are not part of the Data ONTAP Essentials package but are included as part of the base Data ONTAP software.


Comparison of Data ONTAP 8.1 7-Mode and Cluster-Mode

Data ONTAP 8.0 7-Mode | Data ONTAP 8.0 Cluster-Mode
Single-system namespace | Global namespace
32-bit and 64-bit aggregates | 64-bit aggregates (8.0.1 and greater)
SnapMirror Sync and SnapMirror Async | SnapMirror Async only
Data protection (DP) SnapMirror | Data protection (DP) and load-sharing (LS) SnapMirror
Controller failover (CFO) | Storage failover (SFO)
Deduplication | Deduplication (8.1 and greater)
NAS and SAN | NAS and SAN (8.1 and greater)
DataMotion for Volumes (8.0.1 and greater) | Volume move
MultiStore software | Virtual servers

COMPARISON OF DATA ONTAP 8.1 7-MODE AND CLUSTER-MODE


Although the Data ONTAP 8.0 operating system is a single code line, its two modes of operation have almost as many differences as Data ONTAP 7G software has from Data ONTAP GX software. Except for the most obvious difference of high availability, each mode has some features that are slightly different from the other's, and each mode has some things that the other mode does not.
For example, 7-Mode has both synchronous and asynchronous SnapMirror functionality, while Cluster-Mode has only asynchronous SnapMirror functionality. Likewise, Cluster-Mode has data-protection and load-sharing mirrors, while 7-Mode has only data-protection mirrors. Data ONTAP 7-Mode supports the new 64-bit aggregate, while Cluster-Mode did not until the release of the Data ONTAP 8.0.1 operating system.
Another big difference is that 7-Mode supports the SAN protocols of FC and iSCSI, while Cluster-Mode supports only the NAS protocols. One of the key features of Cluster-Mode is the ability to move flexible volumes within the namespace transparently to clients. With the release of Data ONTAP 8.0.1 software, 7-Mode supports DataMotion for Volumes in SAN environments.
Although differences exist at this time, eventually Data ONTAP 8.0 software will become a one-mode product with all the necessary features of the two current modes.


Cluster-Mode Concepts
Clustered (distributed) NAS
Clustered (scalable) SAN
The ability to manage resources from any
node in the cluster (cluster-wide UI)
Global namespaces
Hierarchical volume relationships (junctions)
Replicated database (RDB) semantics
Volume movement


CLUSTER-MODE CONCEPTS
High availability carries with it the idea of many nodes that work together but that are seen externally as one
system.
The global namespaces (one for each cluster Vserver) are the external, client-facing representation of this
distributed storage. Junctions are the glue that holds the global namespaces together. Junctions are analogous
to symbolic links. They connect volumes to create the global namespace of a cluster Vserver.
For the nodes to work as one, constant intracluster communication must occur over a dedicated cluster
network. That cluster network must be reliable.
Flexible volumes can be moved among aggregates and nodes. The movement does not cause the volume's path in the global namespace to change, nor is the process of moving a volume seen by the client. No NFS mountpoints or CIFS shares need to change, and the volume is available for reading and writing during the process. This is explained in more detail later in this course.
Data LIFs are not permanently tied to particular network ports and nodes. As such, they can be migrated
(manually or automatically) away from problematic hardware or hardware that is heavily taxed.


Why Larger Aggregates Are Needed


To enable larger volume sizes: Some applications require large volumes, for example, applications that are related to genomic research, seismic interpretation, satellite imagery, and PACS.
To reduce system-management overhead: Fewer drives per aggregate means many aggregates on large systems. Managing more aggregates adds low-value-added tasks to a storage administrator's workload.

WHY LARGER AGGREGATES ARE NEEDED


For applications that need volumes that are larger than 16 TB, you must have an underlying aggregate that is
larger than 16 TB, too.
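The 16-TB ceiling on 32-bit aggregates follows directly from block addressing: WAFL tracks 4-KB blocks, and 32-bit block numbers can address at most 2^32 of them. A quick Python sanity check (illustration only, not Data ONTAP code):

```python
# Why 32-bit aggregates top out at 16 TB: WAFL addresses 4-KB blocks,
# and 32-bit block numbers can reference at most 2^32 blocks.
WAFL_BLOCK = 4 * 1024          # bytes per WAFL block
MAX_BLOCKS_32BIT = 2 ** 32     # addressable blocks with 32-bit pointers

max_bytes = WAFL_BLOCK * MAX_BLOCKS_32BIT
print(max_bytes // 2 ** 40, "TB")   # -> 16 TB
```

With 64-bit block numbers, the addressable space is large enough that the practical limits become per-platform qualification limits rather than an addressing limit.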


The Data ONTAP 8.1 Operating System


64-Bit Aggregates (1 of 2)
The current maximum size for 32-bit aggregates is 16 TB:
A limited number of spindles when using larger drives
More total aggregates required

Solution: 64-bit aggregates, up to 100 TB in the Data ONTAP 8.0 operating system

16 x 2-TB drives = 32 TB
32 x 2-TB drives = 64 TB
48 x 2-TB drives = 96 TB

THE DATA ONTAP 8.1 OPERATING SYSTEM


64-BIT AGGREGATES (1 OF 2)


The Data ONTAP 8.1 Operating System


64-Bit Aggregates (2 of 2)
Reasons:
To improve storage efficiency and performance with high-capacity SATA drives (1 TB and larger)
To simplify storage management by using fewer aggregates and volumes

Maximum size: 50 to 162 TB for aggregates and FlexVol volumes:
Size based on system model, recovery times, and WAFL consistency capabilities
Architectural maximum size: approximately 1,000 PB
No change to maximum Snapshot copy number, maximum file size, or maximum LUN size

Features of Data ONTAP 8.1 software:
No required license
32-bit default type for new aggregates
Existing 32-bit aggregates and volumes that cannot grow greater than 16 TB
No in-place upgrade of existing aggregates
NOTE: An upgrade will be available in a future version of Data ONTAP 8 software.

THE DATA ONTAP 8.1 OPERATING SYSTEM


64-BIT AGGREGATES (2 OF 2)

Better performance with large-capacity drives:
More data drives per aggregate can boost application performance.
Better throughput is expected when you use large SATA drives.
Greater storage efficiency with large-capacity drives: a 14 + 2 RAID-DP group with 1-TB and larger drives is supported.
Larger volumes:
The flexible volume size limit is the same as the aggregate size limit.
FlexVol volumes that have the space guarantee set to Volume can be up to 90% of the maximum aggregate size.
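The capacity arithmetic in the note can be sketched in Python. The helper names are hypothetical, but the numbers follow the note: RAID-DP reserves two parity disks per group, and a volume-guaranteed FlexVol is capped at 90% of the maximum aggregate size:

```python
def raid_dp_data_capacity(data_disks, disk_tb):
    # RAID-DP reserves two disks per RAID group for parity; only the
    # data disks contribute usable capacity.
    return data_disks * disk_tb

def max_guaranteed_flexvol_tb(max_aggregate_tb):
    # A FlexVol with the space guarantee set to Volume can be up to
    # 90% of the maximum aggregate size.
    return 0.9 * max_aggregate_tb

print(raid_dp_data_capacity(14, 1.0))   # a 14 + 2 group of 1-TB drives -> 14.0
print(max_guaranteed_flexvol_tb(100))   # 100-TB aggregate limit -> 90.0
```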


The Data ONTAP 8.X Operating System


Creating and Displaying 64-Bit Aggregates


THE DATA ONTAP 8.X OPERATING SYSTEM


CREATING AND DISPLAYING 64-BIT AGGREGATES


What's New in Data ONTAP 8.1 Cluster-Mode


64-Bit Aggregates
All new aggregates default to the 64-bit format, including the root volume aggregate.
Larger aggregate sizes are supported.
In-place expansion of existing 32-bit aggregates to the 64-bit format
Asynchronous replication between volumes that reside on different aggregate types

[Diagram: replication pairs with a 32-bit source and a 64-bit destination, and with a 64-bit source and a 32-bit destination]


WHAT'S NEW IN DATA ONTAP 8.1 CLUSTER-MODE


64-BIT AGGREGATES

Some of the things that remain the same as with Data ONTAP 7.3.1 are the:
Maximum file and LUN size
Maximum number of FlexVol volumes, files, LUNs, qtrees, and Snapshot copies


In-Place Aggregate Expansion


Overview
Availability:
No required license
Support on all platforms

Expansion process:
The process is triggered automatically when an aggregate
grows beyond 16 TB.
The process expands the aggregate and all the volumes
within the aggregate.
The expansion is in-place and nondisruptive and does not
require a data copy.

Performance impact:
Minimal impact on system throughput during conversion
No interruption to storage services during the expansion
process

IN-PLACE AGGREGATE EXPANSION


OVERVIEW


Maximum 64-Bit Aggregate and Volume Sizes


The Data ONTAP 8.1 Operating System
64-bit aggregate and volume capacity limits vary by controller model.

FAS/V Model              Max Aggregate [1]   Max Volume [1]
6280, 6240, 6080, 6070   162 TB              100 TB
6210                     162 TB              70 TB
6040, 6030, 3270, 3170   105 TB              70 TB
3240, 3160               90 TB               50 TB
3210, 3140               75 TB               50 TB
3070, 3040               50 TB               50 TB
2040                     50 TB               30 TB
2240                     60 TB               53.7 TB

[1] The maximum aggregate and volume sizes with 32-bit addressing are both 16 TB.
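The per-model limits lend themselves to a simple lookup. The helper below is hypothetical (not a NetApp tool); the values are transcribed from the table above, in TB:

```python
# (max 64-bit aggregate TB, max 64-bit volume TB), transcribed from the table
LIMITS_64BIT = {
    "6280": (162, 100), "6240": (162, 100), "6080": (162, 100), "6070": (162, 100),
    "6210": (162, 70),
    "6040": (105, 70), "6030": (105, 70), "3270": (105, 70), "3170": (105, 70),
    "3240": (90, 50), "3160": (90, 50),
    "3210": (75, 50), "3140": (75, 50),
    "3070": (50, 50), "3040": (50, 50),
    "2040": (50, 30),
    "2240": (60, 53.7),
}

def max_sizes(model, bits=64):
    if bits == 32:
        return (16, 16)   # 32-bit addressing caps both at 16 TB
    return LIMITS_64BIT[model]

print(max_sizes("3240"))   # -> (90, 50)
```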


MAXIMUM 64-BIT AGGREGATE AND VOLUME SIZES


THE DATA ONTAP 8.1 OPERATING SYSTEM


Data Migration Between 32-Bit and 64-Bit


Aggregates (1 of 2)
Data ONTAP 8.1 7-Mode does not support conversion of a 32-bit aggregate to a 64-bit aggregate.
If a 32-bit aggregate or volume must expand past the 16-TB limit, data must be migrated to new volumes that are provisioned in a 64-bit aggregate.
Qtree SnapMirror relationships and the NDMPcopy command migrate data that is present only in the active file system. FlexVol volume Snapshot copies are not migrated.
To migrate data with all FlexVol volume-level Snapshot copies preserved, contact NetApp Professional Services. NetApp Professional Services has a service offering that can be used to migrate data and FlexVol volume Snapshot copies from a 32-bit aggregate to a 64-bit aggregate.

DATA MIGRATION BETWEEN 32-BIT AND 64-BIT AGGREGATES (1 OF 2)


Professional Services
NetApp Professional Services has a service offering that can be used to migrate data and FlexVol volume
Snapshot copies from a 32-bit aggregate to a 64-bit aggregate. The offering must be purchased, and it
provides customers with Snapshot copy preservation.


Data Migration Between 32-Bit and 64-Bit


Aggregates (2 of 2)
The following Data ONTAP 8.1 7-Mode tools can be used to migrate data:
Qtree SnapMirror relationships:
Migrate data from a volume or qtree to a qtree on the destination.
If qtree-to-qtree replication is performed, one qtree SnapMirror relationship per qtree is required.
The NDMPcopy command:
In Data ONTAP 8.0 7-Mode, migrates data that is located in volumes, qtrees, and directories
Can also migrate individual files

DATA MIGRATION BETWEEN 32-BIT AND 64-BIT AGGREGATES (2 OF 2)


NDMP is an open protocol that is used to control data backup and recovery communications between primary
and secondary storage in a heterogeneous network environment.
NDMP specifies a common architecture for the backup of network file servers and enables the creation of a
common agent that a centralized program can use to back up data on file servers that run on different
platforms. By separating the data path from the control path, NDMP minimizes demands on network
resources and enables localized backups and disaster recovery. With NDMP, heterogeneous network file
servers can communicate directly to a network-attached tape device for backup or recovery operations.
Without NDMP, administrators must remotely mount the NAS volumes on the server and back up or restore
the files to directly attached tape backup and tape library devices.
NDMP addresses a problem that is caused by the nature of NAS devices. These devices are not connected to networks through a central server, so they must have their own OSs. Because NAS devices are dedicated file servers, they aren't intended to host applications such as backup software agents and clients. Consequently, administrators must mount every NAS volume by using either NFS or CIFS from a network server that does host a backup software agent. This cumbersome method causes an increase in network traffic and a resulting degradation of performance. NDMP uses a common data format that is written to and read from the drivers for the devices.
NDMP was originally developed by NetApp, but the list of data backup software and hardware vendors that support the protocol has grown significantly. Currently, SNIA oversees the development of the protocol.
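The separation of control and data paths that NDMP specifies can be sketched as follows. This is a toy illustration of the idea (class names are hypothetical, not the real protocol): the DMA only issues commands, and the bulk data streams directly from the filer to the tape device:

```python
class Tape:
    def __init__(self):
        self.blocks = []
    def write(self, block):
        self.blocks.append(block)

class Filer:
    """A NAS device hosting an NDMP agent instead of a full backup client."""
    def __init__(self, blocks):
        self.blocks = blocks
    def start_backup(self, tape):
        # Data path: the filer streams blocks directly to the tape device,
        # without routing them through the backup server.
        for block in self.blocks:
            tape.write(block)

def dma_backup(filer, tape):
    # Control path: the data management application (DMA) only sends
    # commands; the bulk data never crosses the management network.
    filer.start_backup(tape)

tape = Tape()
dma_backup(Filer(["b0", "b1", "b2"]), tape)
print(len(tape.blocks))   # -> 3
```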


Image Backup (smtape)


Provides the capability to back up all Snapshot copies or named Snapshot copies
Supports tape seeding
Supports SnapMirror-to-tape backup images in the two versions immediately earlier than Data ONTAP 8.1 software
Provides backup system throughputs of over 500 GB per hour
Supports flexible and traditional volumes
Supports deduplication volumes, maintaining deduplication on the tape and on the restored volume
Supports 64-bit aggregates
Supports remote three-way backup through a data management application (DMA)
Supports variable tape record sizes from 4k to 256k in 4k increments, with a default of 240k
Requires no license

IMAGE BACKUP (SMTAPE)


DMA means data management application, which is also called backup application. DMA controls the
NDMP session, for example, with Veritas NetBackup and Legato NetWorker.
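The tape-record-size rule from the slide (4k to 256k in 4k increments, default 240k) is easy to capture as a small check. The helper name is hypothetical:

```python
DEFAULT_RECORD_KB = 240

def valid_smtape_record_kb(kb):
    # smtape accepts variable tape record sizes from 4k to 256k,
    # in 4k increments.
    return 4 <= kb <= 256 and kb % 4 == 0

print(valid_smtape_record_kb(240), valid_smtape_record_kb(250))   # -> True False
```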


NetApp with the Data ONTAP Operating System


Cluster-Mode Only (Data ONTAP 8.1 Cluster-Mode):
Clustered scale-out (24-node NAS; 4-node SAN)
Namespace
NDOs
Management as a single system
Scalable and integrated multi-tenancy
NFSv4, NFSv4.1 (pNFS), SMB 2.0, and SMB 2.1
Onboard antivirus

7-Mode and Cluster-Mode:
Unified architecture
Storage-efficiency features
Snapshot copies and asynchronous volume SnapMirror
Intelligent caching with Flash Cache

7-Mode Only (Data ONTAP 8.1 7-Mode):
SnapLock software
SnapVault software and Open Systems SnapVault
Qtree and synchronous SnapMirror
MetroCluster
vFiler units
The FlexShare tool
IPv6, HTTP, FTP, SFTP, TFTP


NETAPP WITH THE DATA ONTAP OPERATING SYSTEM


Similarities Between Data ONTAP 8.1


7-Mode and Cluster-Mode
Same controllers and disk shelves
Unified storage: NFS, CIFS, FC, FCoE, and iSCSI

The WAFL file system


32-bit and 64-bit aggregates
RAID 4 and RAID-DP technology
FlexVol volumes
Qtrees for quotas
Snapshot copies
Asynchronous volume replication
HA pairs
Web-based UIs

SIMILARITIES BETWEEN DATA ONTAP 8.1 7-MODE AND CLUSTER-MODE


Transitioning from Data ONTAP 7-Mode to


Cluster-Mode
Data ONTAP 7-Mode and Cluster-Mode cannot be run simultaneously on the same controller (node).
Data ONTAP 7-Mode systems require wipe-clean and reinstallation in Cluster-Mode.
In-place transition of a 7-Mode system to a Cluster-Mode cluster is not available.
A data-migration service between existing Data ONTAP 8.1 7-Mode and new Cluster-Mode system environments is required:
NAS: The volume SnapMirror copy-based process preserves Snapshot and replication copies and storage efficiency.
SAN: The data-transfer-appliance-based (DTA-based) process moves only the active LUN data (no Snapshot copies).
Both services are disruptive to data access.



TRANSITIONING FROM DATA ONTAP 7-MODE TO CLUSTER-MODE


Exercise 3
Module 2: Reviewing the NetApp
Support Site

Time Estimate: 10 Minutes


EXERCISE 3
Please refer to your exercise guide.


Lesson 5
On-Box, Value-Added Software


LESSON 5: ON-BOX, VALUE-ADDED SOFTWARE


On-Box, Value-Added Software


Off-Box Storage Management

Off-Box Administration Tools

Data ONTAP 8.1 Cluster Mode


Data ONTAP 8.1 7-Mode for FAS Systems
and for V-Series Systems

WAFL Core Technology


Snapshot Technology
RAID 4 or RAID-DP Technology
NVRAM Operations
Aggregates and Volumes

On-Box, Value-Added Software

Protocol Support
FC and Ethernet


ON-BOX, VALUE-ADDED SOFTWARE


On-box, value-added software includes all of the features (some of them separately licensed) that are installed with and run within the Data ONTAP architecture. These features are not separate add-ons. They are always preinstalled on every FAS system that NetApp ships to customers.


Key Software Enhancements

Current Platforms (Customers Choose Structure):
Included software: iSCSI protocol; system management, data protection, storage efficiency, and performance optimization
Extended Value Software: over 30 software products to choose from, including Operations Manager, Protection Manager, Provisioning Manager, SnapManager for VI, SnapManager for Exchange, SnapManager for SharePoint, SnapManager for SAP, SnapManager for Oracle, SnapManager for SQL Server, SnapRestore technology, SnapMirror products, FlexClone software, MultiStore software, MetroCluster, SnapDrive software, more protocols, and more

New Platforms (Simplified Structure):
More Value (Now Standard):
Add OnCommand management software
Add continuous availability
Add secure multi-tenancy
Enhanced Flexibility:
Now choose the included protocol (iSCSI, FC, NFS, or CIFS).
Simpler to Configure:
Six key products plus protocols, available together as the Complete bundle:
SnapRestore technology
SnapMirror products
FlexClone software
SnapVault software
SnapManager software
Optional protocols

KEY SOFTWARE ENHANCEMENTS


More Value, Easier to Configure
Differences exist between the software structure on current systems and the software structure on new
FAS3200 and FAS6200 systems.
Key Points
With the new FAS3200 and FAS6200 systems, NetApp is rolling out a software structure that delivers more
value and simplifies system configurations.
Currently, midrange and high-end systems have some software included in the base and have a menu for
adding on more than 30 software products. In addition, iSCSI protocol is included with the system, while
other protocols, if needed, must be purchased separately.
The new systems have a simplified software structure. The three key features are:


More value, now standard with each system


Enhanced flexibility for customers to decide which protocol they want to include for free in their system
purchase
Add-on software that is simplified to six key products or available together as the Complete bundle, with
the option to buy any additional protocols


Data ONTAP On-Box Technology


SnapSuite Software Family: Quick Reference Guide
Snapshot: Instant self-service file recovery for end users
SnapRestore: Instant volume recovery, or recovery of large individual files
SnapMirror: SnapMirror Async and SnapMirror Sync remote replication over inexpensive IP; FC is now also supported
SnapVault: Heterogeneous, super-efficient, hourly disk-based online archiving with versioning up to weeks or months
SyncMirror: Synchronous RAID-1 local mirroring by means of disk shelf plexes; the RAID-1 remote mirroring product for disaster recovery is MetroCluster
SnapLock: SEC-compliant disk-based WORM technology

[Slide graphics: per-product icons (IP replication; Windows, Linux, Solaris, HP-UX, and AIX clients; plex0/plex1 mirroring) and a legend marking each product as either no license fee or license fee.]

DATA ONTAP ON-BOX TECHNOLOGY


Many NetApp products are based on Snapshot copies. Because these product names are so similar, they can
be confusing. This slide shows a SnapSuite Products Quick Reference Guide that can help you to keep the
products straight.
If a Snapshot copy of an entire volume exists, and something goes wrong, the entire volume can be restored to
its state when the Snapshot copy was created. Primarily, SnapRestore software is used to revert an entire file
system back to a point in time when a particular Snapshot copy was created.
That is great protection, but that is all inside the same storage appliance. A customer may want to have its
data replicated to another storage appliance and to another physical location. Two NetApp products,
SnapMirror and SnapVault software, can do that. So, what is the difference?
The first difference is positioning. SnapVault software is an archival application. It performs a disk-to-disk
backup and restore function and replaces tape in a given environment. So, as backup and restore technology,
you can have production on system A and the destination of SnapVault software going to system B in a
remote location. If something happens to the data on system A, the administrator can run a restore from
system B to system A. That certainly provides protection for system A, but the process of restoring system A
from system B can be time-consuming, because all of the original content must be copied over the wire from
B back to A.
SnapMirror software, by contrast, is a disaster recovery solution. System B is maintained as a mirror image of
system A. Unlike with SnapVault software, if something happens to system A in a SnapMirror environment,
one of the available options is to bring system B online instantly as the new production server. When system
A comes back, SnapMirror software enables you to resynchronize systems A and B and move production
back to A. Within the license structure of SnapVault software, you cannot bring the destination platform
online as the production server. And even if you turn a SnapVault destination into a production server, you
can never resynchronize it with the original source platform without a complete return to baseline.


SnapVault software is for archiving, as is suggested by its name.


SnapMirror software is for creating a mirror image for disaster recovery.

Data ONTAP On-Box Technology


A Closer Look at SnapMirror and SyncMirror Software
SnapMirror: SnapMirror Async and SnapMirror Sync provide remote replication over inexpensive IP; FC is now supported.
SyncMirror: Synchronous RAID-1 local mirroring via disk shelf plexes; the RAID-1 remote mirroring product for DR is MetroCluster.

DATA ONTAP ON-BOX TECHNOLOGY


A CLOSER LOOK AT SNAPMIRROR AND SYNCMIRROR SOFTWARE

There are three different SnapMirror modes that will be discussed in more detail in future modules:
SnapMirror Async (asynchronous)
SnapMirror Sync (synchronous)
SnapMirror Semi-Sync (semi-synchronous)

WHAT ARE THE DIFFERENCES BETWEEN THE ASYNCHRONOUS, SYNCHRONOUS, AND SEMI-SYNCHRONOUS MODES OF SNAPMIRROR?
The asynchronous mode of SnapMirror is destination driven, and changes are replicated to the destination on a schedule that is predetermined by the user. Any changes to the source data are completed and acknowledged back to the client immediately. Asynchronous SnapMirror can be used with either volumes or qtrees. Here is an example configuration of SnapMirror Async: fas1:vol1 fas2:vol2 - 0,30 * * *
Synchronous SnapMirror is a mode of replication that sends updates from the source system to the destination system as soon as they occur, rather than according to a predetermined schedule. Therefore, the write operations to the source system are not acknowledged to the client until they have been written to the destination system's NVRAM. This guarantees that data written on the source system is protected on the destination system, even if the entire source system fails. An example configuration of SnapMirror Sync is: fas1:vol1 fas2:vol2 - sync
The semi-synchronous mode of SnapMirror provides a middle ground that keeps the source and destination systems more closely synchronized than asynchronous mode does, but with less impact on performance. Configuration of semi-synchronous mode is identical to that of synchronous mode. Before Data ONTAP 7.3, semi-synchronous mode was tunable with an outstanding parameter that specified how many operations or seconds could be outstanding before the source system delayed acknowledging write operations from clients.


Starting with Data ONTAP 7.3, a new option called semi-sync is available, and the outstanding parameter functionality has been removed. When using semi-synchronous mode, writes are acknowledged as soon as the source system writes to its NVRAM. For more information, see the "SnapMirror Sync and SnapMirror Semi-Sync Overview and Design Consideration Guide" (TR-3326). An example configuration of SnapMirror Semi-Sync is: fas1:vol1 fas2:vol2 - semi-sync
Both synchronous and semi-synchronous modes of SnapMirror can only be used on volumes, not qtrees. All modes of SnapMirror can be used with both flexible and traditional volumes.
The vast majority of NetApp customers use asynchronous SnapMirror between two systems to update the mirror image as often as once per minute. SnapMirror in synchronous mode produces continuous, live updates between the two systems. Synchronous mode has very strict limits on bandwidth and on the distance between the two systems. Otherwise, latency will have too great an impact on application performance. Semi-sync mode is the middle ground between synchronous mode and asynchronous mode.
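The acknowledgment semantics of the three modes can be illustrated with a toy model (an illustration only, not real Data ONTAP behavior): sync updates the destination's NVRAM before acknowledging the client, while semi-sync and async acknowledge after the source NVRAM write and replicate later:

```python
def client_write(mode, source_nvram, dest_nvram, pending, data):
    # Toy model of SnapMirror acknowledgment semantics.
    source_nvram.append(data)
    if mode == "sync":
        dest_nvram.append(data)   # destination NVRAM updated before the ack
    else:
        # semi-sync: replicated shortly after the ack;
        # async: replicated later, on the configured schedule
        pending.append(data)
    return "ack"

src, dst, pending = [], [], []
client_write("sync", src, dst, pending, "block-1")
client_write("async", src, dst, pending, "block-2")
print(dst, pending)   # -> ['block-1'] ['block-2']
```

The model makes the trade-off visible: only sync guarantees that an acknowledged write already exists on the destination, at the cost of waiting for it.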
SyncMirror
SyncMirror was designed to handle two issues that are extremely important to data center managers: RTO (recovery time objective) and RPO (recovery point objective). Customers want to minimize both the time that it takes to recover from a failure event and the amount of data that is lost.
For instant recovery, SyncMirror provides two mirrors (known internally as plexes) on separate failure domains. If one mirror goes out, then you have the other mirror instantly available. The recovery time is essentially zero. This meets the customer objective of minimizing RTO.
And to meet a customer's recovery point objective, SyncMirror provides synchronous data replication. By recovery point, we are referring to the point at which your mirrored data is out of phase with your primary production data. With SyncMirror, the mirrored data on both mirrors is always up to date, up to the second. So if one mirror goes down due to unexpected fire, power loss, or user error, the system can maintain continuous data availability by accessing the surviving mirror, which is fully synchronized with the latest data.
Another feature of SyncMirror is that it is integrated with our active-active clustered failover configuration, which can provide near-instantaneous failover both locally and, with MetroCluster, over a metropolitan area.
MetroCluster allows you to split a NetApp system across two locations for unified high availability (HA) and disaster recovery (DR) protection. You take an active-active configuration and split it across a distance of as far as 100 kilometers. In the event of a disaster, terrorist attack, user mismanagement, or even a disgruntled employee who decides to destroy everything at one site, you still have instant access to a fully synchronized, up-to-the-second mirrored copy that can be as far as 100 kilometers away.
SyncMirror is also tightly integrated with Data ONTAP for simplicity and ease of use. It is easy to administer and maintain over time, easy to install for new systems, and easy to upgrade for existing systems.


Lesson 6
Protocol Support


LESSON 6: PROTOCOL SUPPORT


Data ONTAP Protocol Support


Off-Box Storage Management

Off-Box Administration Tools

Data ONTAP 8.1 Cluster Mode


Data ONTAP 8.1 7-Mode for FAS Systems
and for V-Series Systems

WAFL Core Technology


Snapshot Technology
RAID 4 or RAID-DP Technology
NVRAM Operations
Aggregates and Volumes

On-Box, Value-Added Software

Protocol Support
FC and Ethernet


DATA ONTAP PROTOCOL SUPPORT


Next you'll hear about protocol support. In this case, protocol support refers to network file-sharing protocols, SAN protocols, and application-layer protocols that NetApp sometimes collectively describes as data-access protocols.


Cluster-Mode Networking Overview


Data Network
SAN and NAS

High Availability

High Availability

Cluster Interconnect
10GbE

Management Network
Data ONTAP Cluster-Mode Cluster


CLUSTER-MODE NETWORKING OVERVIEW


Cluster Network Standardization


Approach:
This is the standard configuration for cluster interconnect switches in Cluster-Mode configurations.
New clusters require the standard switch configurations for the cluster and management network.

Benefits:
Is engineered by NetApp
Ensures networking design best practices:
Dual cluster network switches for redundancy
Sufficient interswitch bandwidth: eight ports per switch
Standard hardware, software, and configurations:
Are used throughout the QA process to ensure quality
Enable quicker problem resolution when using known configurations


CLUSTER NETWORK STANDARDIZATION


Cluster Switch Requirements


Cluster interconnect switches:
Cisco Nexus 5010 and Cisco Nexus 5020
Wire-rate 10GbE connectivity between storage controllers:
1 x 10GbE connection from each node to each switch (two ports per node total)
Interswitch bandwidth: eight ports per switch

Cluster management switch:
Cisco Catalyst 2960
Management connections for storage controllers and shelves

Same switch configuration for all supported storage controllers


CLUSTER SWITCH REQUIREMENTS


The FAS2040 system connects into a cluster by using onboard 1GbE ports. The first 8 ports of the Cisco Nexus 5010 and the first 16 ports of the Cisco Nexus 5020 can be either 1GbE or 10GbE, depending on the SFP that is used. NetApp has released a new 1GbE SFP to enable the FAS2040 system to participate in clusters. All other controllers remain at 10GbE. A best practice is not to mix 1GbE and 10GbE nodes in the same cluster.


Cluster Configuration Overview


2 to 18 Nodes:
- Two Cisco Nexus 5010:
  - 20 x 10GbE ports
  - Eight ports used for Inter-Switch Links (ISLs)
  - One rack unit each
  - Expansion module (one module, 8 x 10GbE) required for 12 to 18 nodes
- Two Cisco Catalyst 2960:
  - 24 ports of 10/100 Ethernet
  - One rack unit each

20 to 24 Nodes:
- Two Cisco Nexus 5020:
  - 40 x 10GbE ports
  - Eight ports used for ISLs
  - Two rack units each
- Two Cisco Catalyst 2960:
  - 24 ports of 10/100 Ethernet
  - One rack unit each


CLUSTER CONFIGURATION OVERVIEW


A Data ONTAP 8.1 Cluster-Mode cluster that uses Nexus 5010 switches for the cluster network can have a
maximum of 8 x FAS2040 nodes in the cluster.
A Data ONTAP 8.1 Cluster-Mode cluster that uses Nexus 5020 switches for the cluster network can have a
maximum of 16 x FAS2040 nodes in the cluster.


Configuration Overview
Function             | Switch                              | Max Nodes | Configurable in NetApp Cabinet | Supported NICs
Cluster interconnect | Cisco NX-5010                       | 12        | Yes                            | X1117A-R6, X1107A-R6, X1008A-R6
Cluster interconnect | Cisco NX-5010 with expansion module | 18        | Yes                            | X1117A-R6, X1107A-R6, X1008A-R6
Cluster interconnect | Cisco NX-5020                       | 24        | No                             | X1117A-R6, X1107A-R6, X1008A-R6
Management network   | Cisco Catalyst 2960-24TT            | 2 to 24   | No                             | (none listed)


CONFIGURATION OVERVIEW
Refer to the compatibility matrix for more details:
http://now.netapp.com/knowledge/docs/olio/guides/cisco/Cluster_Mode_Compatibility.pdf


Data ONTAP Components


Data Access Protocols

The four core data access protocols are:
- CIFS (Common Internet File System, developed by Microsoft)
- NFS (Network File System, developed by Sun Microsystems: NFSv2, NFSv3, and NFSv4)
- FC (Fibre Channel Protocol)
- iSCSI (SCSI over TCP/IP)

All protocols, including iSCSI, are priced the same, with the first protocol provided for free no matter which one the customer chooses (except on FAS2240 systems).

DATA ONTAP COMPONENTS


DATA ACCESS PROTOCOLS

Four data-access protocols are most important to NetApp products:
- CIFS, developed by Microsoft
- NFS, developed by Sun Microsystems
- iSCSI
- FC

These protocols are referred to at NetApp as the core four, the core protocols, or just core. At NetApp, the importance of these protocols is reflected in the fact that they have their own engineering group. In most systems, 99% of the data that comes on or off a NetApp system goes through one or a combination of these four protocols.
When you recommend a core protocol to a customer, it is important to know which ones are included with
Data ONTAP software, which require a separate license, and which require an additional license fee. Each of
the four core protocols requires a separate license key. However, NFS, CIFS, and FC require an additional fee
for the license, while the iSCSI license is free.
Because every customer needs at least one of these protocols, the core protocol licenses are always included
as separate line items as a part of each deal that is set up in the Quote tool, CustomerEdge, and PartnerEdge.
For customer convenience, the licenses are preloaded in each storage system at the factory prior to shipping. When the customer turns the system on, the core licenses that the customer ordered are already active, and the system walks the customer through any necessary setup.


Data ONTAP Components


Other Protocols

- HTTP and HTTPS (not a full-fledged HTTP server)
- FTP: full FTP and TFTP implementation
- NDMP
- SNMP
- SMTP
- Telnet, Remote Shell (RSH), Secure Shell (SSH), and Remote Procedure Call


DATA ONTAP COMPONENTS


OTHER PROTOCOLS

Beyond the core four, additional protocols are supported by Data ONTAP software. For example, Data
ONTAP software can use HTTP and HTTPS to get and put files, although it is not a full-fledged Web server.
NetApp has no plan to become a replacement for Apache or IIS or any other full-featured Web server.
Data ONTAP also offers a full implementation of FTP and TFTP. Since Data ONTAP 7.0, the FTP server is native code that is compiled in C, as everything else is: a full-fledged, robust implementation of FTP.

Other supported protocols include:
- NDMP for doing backups
- SNMP for monitoring the system with any SNMP system
- SMTP, because, while NetApp is not an e-mail server, it can send SMTP messages
- Telnet, Remote Shell (RSH), and Secure Shell (SSH) for access to the system
- SSH and HTTPS for security purposes

All of the NetApp management tools use secure Remote Procedure Calls to send API instructions back and forth to the system.


Lesson 7
Unified Connect


LESSON 7: UNIFIED CONNECT


Unified Connect
In Data ONTAP 8.0.1 7-Mode, NetApp introduces Unified Connect:
- Consolidates FCoE- and IP-based traffic over the same connection
- Includes support for FC-to-FCoE implementation

[Slide diagram: hosts with a CNA and an HBA connect over 10GbE/FCoE and 1/2/4G Fibre Channel to a Cisco Nexus 5010 with an N5K-M1008 FC module, which in turn connects to a UTA on a Data ONTAP 8.0.1 7-Mode controller]


UNIFIED CONNECT
CNA means converged network adapter, a technology that supports data networking (TCP/IP) and storage networking (FC) traffic on a single I/O adapter. CNAs support Enhanced Ethernet and Fibre Channel over Ethernet (FCoE).
HBA means host bus adapter, an I/O adapter that sits between the host computer's bus and the FC loop and manages the transfer of information between the two channels. To minimize the impact on host processor performance, the HBA performs many low-level interface functions automatically or with minimal processor involvement.
UTA means unified target adapter. With this adapter, customers can run FCoE and IP traffic through the same
port and on the same wire, which eliminates the need and expense for separate SAN and LAN adapters and
cables.
For detailed information regarding Unified Connect, see NetApp Unified Connect Technical Overview and
Implementation.


Unified Connect Infrastructure (1 of 2)


Today

[Slide diagram: hosts with CNAs carry FCoE, iSCSI, NFS, and CIFS to a 10GbE switch (FCoE-enabled); FC traffic continues to FC ports, while NFS, CIFS, and iSCSI travel over 10GbE to the storage controller's UTA]


UNIFIED CONNECT INFRASTRUCTURE (1 OF 2)


With mixed workloads, customers have had to build many different connection layers for servers and storage to communicate.


Unified Connect Infrastructure (2 of 2)


With Unified Connect

[Slide diagram: hosts with CNAs carry FCoE, iSCSI, NFS, and CIFS through a 10GbE switch (FCoE-enabled) to a UTA on the storage controller]

Key Feature Benefits:
- True end-to-end network conversion
- Increased efficiency and simplified management
- Extension of the unified architecture benefits

Business Value:
- Streamlining of IT operations, which results in lower operating costs
- True data-center consolidation
- Ability to react to market demands faster


UNIFIED CONNECT INFRASTRUCTURE (2 OF 2)


Unified Connect allows all protocols to run over the UTA. Because everything can run over Ethernet, customers can fully consolidate their IT environments, with no need for separate cards or separate FC switches. NetApp is the only storage vendor to offer this capability and, as such, is a leader in helping customers to consolidate their environments.


Ethernet Unifies Data-Center Storage


Solve All Your Use Cases

[Slide diagram: NFS, CIFS, iSCSI, and FCoE serve outsourced data centers, new data centers and remote offices, and traditional FC data centers, all unified on 10GbE with Enhanced Ethernet and DCB]

- Increased asset and storage utilization
- Simplified storage and data management
- Reduced costs through consolidation
- Improved storage and network efficiencies

Unified File and Block


ETHERNET UNIFIES DATA-CENTER STORAGE


In addition to lower complexity and costs and improved efficiencies and utilization, Ethernet storage enables
unification of file and block data, with NFS, CIFS, iSCSI, and FCoE all running on 10GbE and supporting
and benefiting from the DCB standard.
Now that you know why Ethernet-based storage is important, consider why NetApp is the best vendor for
Ethernet storage.


Lesson 8
Off-Box Storage-Management
and Administration Tools


LESSON 8: OFF-BOX STORAGE-MANAGEMENT AND ADMINISTRATION TOOLS


Off-Box Storage Management


- Off-Box Storage Management
- Off-Box Administration Tools
- Data ONTAP 8.1 Cluster-Mode
- Data ONTAP 8.1 7-Mode for FAS Systems and for V-Series Systems
- WAFL Core Technology
- Snapshot Technology
- RAID 4 or RAID-DP Technology
- NVRAM Operations
- Aggregates and Volumes
- On-Box, Value-Added Software
- Protocol Support
- FC and Ethernet


OFF-BOX STORAGE MANAGEMENT


Next you'll hear about the off-box storage-management and administration tools that are available from NetApp. These products are all add-ons that are not automatically installed with Data ONTAP software. You have learned about two of them: SnapDrive and SnapManager software. What are the others?


Lesson 9
NetApp OnCommand
Management Software


LESSON 9: NETAPP ONCOMMAND MANAGEMENT SOFTWARE


The NetApp OnCommand Product Portfolio
MANAGE
- Control: OnCommand System Manager, OnCommand Report, My AutoSupport
- Automate: OnCommand Unified Manager, SnapManager software
- Analyze: OnCommand Insight

IT INTEGRATION
- Access: OnCommand Plug-in for VMware, OnCommand Plug-ins for Microsoft
- Develop: Open management SDK community


THE NETAPP ONCOMMAND PRODUCT PORTFOLIO


The NetApp management software portfolio maps to the current NetApp product offerings.


NetApp Open Management Interfaces


Flexibility to Choose the Right Solution (1 of 2)

[Slide diagram: in-house management tools integrating with NetApp open management interfaces]


NETAPP OPEN MANAGEMENT INTERFACES


FLEXIBILITY TO CHOOSE THE RIGHT SOLUTION (1 OF 2)

NetApp has adopted an open strategy; the BMC integration serves as a template to be leveraged with other partners.
Today NetApp has engaged with partners who fall into three broad categories:

IT service-management and orchestration platforms from vendors like BMC, CA, HP, IBM, and Fujitsu
(Resource Orchestrator)
Management products that are provided by virtualization vendors
Home-grown management platforms or the emerging cloud-management platforms

These management platforms consolidate the management of multiple elements and give services providers
the ability to manage and orchestrate their infrastructures from a single management console.
The NetApp differentiator is our partner strategy and integration. This is in contrast to our competitors, who
provide access to third-party management platforms but also compete with their own management platforms.


System Manager 2.0


Key Features and Benefits

Feature                 | Benefits
Windows and Linux support | Familiar browser look and feel
Data ONTAP support      | Both 7-Mode and Cluster-Mode support
Discovery and setup     | Discover new and existing storage systems and configure them using a simplified setup wizard
Navigation              | Manage one cluster at a time
Storage                 | Simplify initial storage configuration on new systems; use FlexVol, deduplication, compression, provisioning, and Snapshot copies to improve storage efficiency; server virtualization wizard for VMware ESX Server
Multiprotocol support   | CIFS, NFS, iSCSI, FCP, and FCoE supported
Cluster management      | New cluster configurations are detected; V-Server management; automatically detects the HA partner and groups HA pairs together
Monitoring and alerting | Dashboards for single systems and clusters, with graphing, notifications, and reminders


SYSTEM MANAGER 2.0


KEY FEATURES AND BENEFITS

Additional key features and benefits are listed on the slide.


External to Data ONTAP Software


Products (1 of 3)

SnapDrive software:
- Windows
- UNIX


EXTERNAL TO DATA ONTAP SOFTWARE


PRODUCTS (1 OF 3)

The technical reason for the existence of SnapDrive software is related to the problem of host-side caching. In a SAN environment, the host system has a cache. Writes are committed to this host-side cache and flushed on a schedule that is unknown to the storage system controller. If the storage system controller creates a Snapshot copy, it has no idea what is in that host-side cache. The cache may be partway through a root inode update. The result may be a bad Snapshot copy, with anywhere from a few missing files and some corrupt files to a completely unreadable file system. So the technical reason for the existence of SnapDrive software is to coordinate Snapshot copies with the host OS. Essentially, SnapDrive software tells the host OS to:
- Synchronize its disks or flush its cache
- Create a Snapshot copy
- Bring production back to normal

This coordination can happen quickly when it is integrated into the OS. It happens in a few clock cycles, but
integration with the OS is important so that the storage system controller can guarantee that every write is
committed at the time that a Snapshot copy is created.
Another important reason for the existence of SnapDrive software is to enable provisioning and management of backup and restore activities on the NetApp system from the server. SnapDrive software provides OS-level integration that enables the server administrator to manage everything by using SnapDrive software: creating Snapshot copies, performing restores back to previous Snapshot copies, creating new drives, mounting new drives, putting file systems on new drives, and so on. The server administrator has control over these activities and does not depend on a storage team. SnapDrive software includes a complete set of tools that communicate over the Manage ONTAP API back to the storage system to control all of this management from the host server side.
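The three coordination steps can be sketched as a small ordering wrapper. This is a teaching illustration only: the callables are placeholders, and real SnapDrive software drives the host OS and the Manage ONTAP API rather than these hypothetical functions.

```python
def consistent_snapshot(flush_cache, create_snapshot, resume_io):
    """Run the coordination steps in the order SnapDrive enforces:
    quiesce and flush the host cache, create the Snapshot copy on
    the storage system, then return the host to normal operation."""
    log = []
    flush_cache()            # host OS flushes its write cache to stable storage
    log.append("flush")
    try:
        create_snapshot()    # storage controller creates the Snapshot copy
        log.append("snapshot")
    finally:
        resume_io()          # host I/O resumes even if the snapshot call fails
        log.append("resume")
    return log

# Placeholder callables standing in for host-OS and storage-API calls
steps = consistent_snapshot(lambda: None, lambda: None, lambda: None)
print(steps)  # ['flush', 'snapshot', 'resume']
```

Putting the resume step in a finally block reflects the design point in the text: the host must never be left with I/O quiesced, whatever happens to the Snapshot request.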


Key Points:


- Eliminate the manual management of NetApp storage with the NetApp protocol-agnostic solution.
- Combine storage virtualization with native disk and volume management.
- Automatically back up and restore data.
- Create OS- and application-consistent Snapshot copies of data.
- Gain virtualization support for VMware ESX Server and Microsoft Hyper-V Server.


External to Data ONTAP Software


Products (2 of 3)

SnapManager software:
- Databases
- Messaging
- Virtualization


EXTERNAL TO DATA ONTAP SOFTWARE


PRODUCTS (2 OF 3)

Described collectively as the Application Suite within the overall NetApp Manageability Software Family, the following SnapManager releases are available at this time:
- SnapManager for Exchange
- SnapManager for Oracle and SnapManager for SAP (UNIX and Windows)
- SnapManager for SharePoint
- SnapManager for SQL Server
- SnapManager for Virtual Infrastructure
- SnapManager for Hyper-V

Later in the course, you'll use some of these products in labs and course modules.


External to Data ONTAP Software


Products (3 of 3)

- SnapProtect
- Symantec NetBackup with Replication Director
- NetApp Syncsort Backup (NSB)
- Open Systems SnapVault
- NearStore Personality License


EXTERNAL TO DATA ONTAP SOFTWARE


PRODUCTS (3 OF 3)

Another off-box NetApp software product for storage management is called Open Systems SnapVault. It uses the same protocol as SnapVault but is for use when the source is administered by an open system such as Windows, Linux, or commercial UNIX, and NetApp storage is the destination. Open Systems SnapVault is very important for remote office environments: remote offices that are too small to have their own primary storage systems dedicated to their sites, but that have servers that need to be backed up. Doing remote office backup can be a problem for any IT environment. Many have implemented tape at their remote offices, but changing the tape can become an administrative burden that is neglected or not performed regularly. Open Systems SnapVault offers a disk-to-disk backup solution that eliminates the need to change tapes.
Open Systems SnapVault was first developed by BakBone, the company that licensed NetApp protocols to
create Open Systems SnapVault. Now NetApp has created its own version. Some of these OEM versions
(Syncsort, BakBone) have different features, but each gives server administrators the ability to back up
disparate systems onto NetApp storage. Another useful feature of Open Systems SnapVault is that once the
source is backed up to NetApp storage, it is a readable, mountable, viewable file system. It is read-only and it
is very easy to verify that the backup is good once it gets to the NetApp system.
NearStore Personality License is a license option that can be installed on any FAS3000 or FAS6000 system to optimize that system for data protection and retention applications. Adding the NearStore on FAS license enables more concurrent streams for SnapVault and SnapMirror, enables SnapVault for NetBackup, and adds support for deduplication. NearStore on FAS systems utilize Data ONTAP for secondary storage environments and support all NetApp SnapX applications for data protection and retention near-line storage. NearStore on FAS is a general-purpose storage system that can be utilized in disk-to-disk backup, data archival, and data retention environments.
When you want to use a storage system for backup, you should optimize the storage system for backup by enabling the NearStore personality license.
When enabled, the nearstore_option license does the following:

- Allows a higher number of concurrent SnapMirror and SnapVault replication operations when the system is used as a destination
- Allows SnapVault for NetBackup to be enabled on FAS3000 and FAS6000 series systems


High-level Portfolio Positioning


[Slide decision flow]
Is this a NetBackup customer with NAS as the primary NetApp workload?
- Yes: Symantec NetBackup
- Else: Are catalog and tape the top backup workflow concerns?
  - Yes: SnapProtect software (FAS primary); NSB (third party to FAS secondary)
  - Otherwise, position by administrator role:
    - App Admin: Application SnapManagers
    - VI Admin: SMVI, SMHV, and VSC
    - Storage Admin: OnCommand (5.0) Protection Manager

HIGH-LEVEL PORTFOLIO POSITIONING


Exercise 4
Module 2: Creating Aggregates and
Volumes

Time Estimate: 30 Minutes


EXERCISE 4
Please refer to your exercise guide.


Lesson 10
OnCommand Insight Assure,
Perform, and Plan


LESSON 10: ONCOMMAND INSIGHT ASSURE, PERFORM, AND PLAN


OnCommand Management Software


Service Automation and Analytics

OnCommand (NEW), for service automation:
- Policy-based workflows
- Service catalog for SLAs
- Integrates Provisioning Manager, Protection Manager, Operations Manager, SMVI, and SMHV
- Device management, problem detection, monitoring and reporting

OnCommand Insight (NEW), for service analytics:
- Capacity planning
- Service management
- Performance analytics
- Multivendor and multiprotocol
- Formerly SANscreen and Akorri BalancePoint

System Manager:
- Simple storage-device management


ONCOMMAND MANAGEMENT SOFTWARE


SERVICE AUTOMATION AND ANALYTICS

Manage
NetApp provides the capabilities to help customers to maximize the effectiveness of their IT infrastructures in
meeting and adapting to changing service levels with minimal cost and effort (Efficiency). This is
accomplished with tools that manage the NetApp infrastructure by delivering storage and service efficiency
(Control, Automate, and Analyze). Additionally, NetApp management helps to analyze the entire multivendor
infrastructure stack to assess and ensure optimal efficient use.
The circular arrow indicates that the operations of control, automate, and analyze represent an ongoing
process with IT management.
Control
How do I manage my NetApp storage infrastructure more effectively?
Control provides centralized management, monitoring, and reporting tools to optimize a customer's NetApp storage and meet business policy requirements:


- Proactive real-time problem alerting and detection
- Comprehensive monitoring and reporting to assess the health of the storage infrastructure. Customers get a better view of what is deployed and how it is utilized, which enables them to improve storage-capacity utilization and increase the productivity and efficiency of their IT administrators.
- Achievement of compliance and conformance with business policies by using enterprise-wide configuration management and distributed policy setting


Automate
How can I reduce the time and complexity of provisioning and protecting my NetApp infrastructure?
Enabling service automation allows for the elimination of manual processes that lead to errors and costly
downtime. By using policy-based automation, customers can standardize the utilization of their storage
infrastructures. The service catalog lets customers define service levels that specify attributes of the storage
infrastructure. This allows for automating the tasks of provisioning and protection and frees the administrator
for more valuable projects.
Analyze
I need detailed visibility into my infrastructure to gain service efficiencies and deliver on SLAs.
Customers can gain a holistic view of their storage infrastructures as a unified set of services by using
analysis, discovery, correlation, service paths, simulation, and root-cause analysis.
Through the NetApp Analyze capabilities, customers get visibility into complex, multivendor, multiprotocol
storage services.
- Capacity management: Customers can continually improve storage efficiency and reduce capex and opex with efficient capacity management to identify, plan, forecast, and provide the right amount of storage on the right platform.
- Virtual machine (VM) optimization: Customers can get service-path visibility into virtual infrastructure environments so that they can plan and optimize the alignment of VMs and storage and eliminate capacity and performance concerns.
- Assurance monitoring: Customers can provide storage service monitoring and assurance visibility into networked storage assets to quickly understand their availability, performance, relationships, and utilization.

Akorri
With the recent acquisition of Akorri, the OnCommand family's capabilities are strengthened with performance-capacity analytics that allow customers to plan capacity, predict issues before they happen, and troubleshoot issues if they do occur.


OnCommand Insight:
High-Level Product Overview
Balance (Predictability):
- Map service health
- Optimize workloads
- Predict and resolve problems

Assure (Availability):
- Ensure configuration SLOs
- Identify the cause of service issues
- Plan and validate service changes
- Audit changes

Perform (Optimization):
- Manage and optimize resource usage
- Get storage service performance metrics
- Align service tiers

Plan (Efficiency):
- Manage and plan capacity
- Trend, forecast, and report
- Be cost-aware
- Enable chargeback and accountability


ONCOMMAND INSIGHT: HIGH-LEVEL PRODUCT OVERVIEW


OnCommand Insight comprises four products, but this module focuses on Assure, Perform, and Plan.
OnCommand Insight Assure automatically discovers all resources and provides a complete end-to-end view
of an entire service path. With OnCommand Assure, customers can see exactly which resources are used and
who is using them. Customers can establish policies based on best practices, which enables Insight Assure to
monitor and alert on violations that fall outside those policies.
Insight Assure is also a powerful tool for modeling and validating planned changes to minimize impact and
downtime for consolidations and migrations. Insight Assure can be used to identify candidates for
virtualization and tiering.
Insight Perform correlates resources to business applications, which enables customers to optimize resources
and better align resources with business requirements. Customers can reclaim orphaned storage and retier
resources to get the most out of their current investments.
Insight Plan provides trending, forecasting, and reporting for capacity management. Insight Plan reports on
usage by business unit, application, data center, and tenants. Insight Plan provides user accountability and
cost awareness, which enables customers to generate automated chargeback reporting by business unit and
application.
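The chargeback reporting that Insight Plan automates amounts to grouping measured usage by business unit and applying a cost rate. The following is a minimal sketch of that rollup; the usage records, field layout, and rate are invented for illustration, and Insight Plan derives the real figures from discovered resources.

```python
from collections import defaultdict

# Hypothetical usage records: (business unit, application, gigabytes used)
usage = [
    ("Finance", "OracleERP", 500),
    ("Finance", "Exchange", 200),
    ("Engineering", "BuildFarm", 1200),
]

RATE_PER_GB = 0.25  # assumed monthly cost per gigabyte

def chargeback_by_unit(records, rate):
    """Sum usage per business unit and convert it to a monthly charge."""
    totals = defaultdict(int)
    for unit, _app, gigabytes in records:
        totals[unit] += gigabytes
    return {unit: gigabytes * rate for unit, gigabytes in totals.items()}

print(chargeback_by_unit(usage, RATE_PER_GB))
# {'Finance': 175.0, 'Engineering': 300.0}
```

The same grouping key could be swapped for application, data center, or tenant to produce the other report dimensions the text mentions.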


Backup:
Customers need end-to-end visibility into complex virtualized environments. With OnCommand Insight, IT
has a single pane of glass through which it monitors and manages its heterogeneous environment. That
visibility also puts IT in a position where IT proactively manages the environment so that IT can ensure that it
meets SLAs on availability and performance. IT can also ensure that configurations are in line with service
requirements. IT can implement best practices and view vulnerabilities and violations to drive availability and
efficiency.
After IT has the environment under control, IT can analyze and optimize the existing resources with service
analytics. All of the data that is captured is stored, and IT can then review and report on actual usage and
better plan for capacity. This way, IT buys only what it needs. Service analytics also means that IT can report
costs, and this can be used for chargeback of storage services, part of an overall chargeback strategy.


Insight Plan and Perform

NetApp Confidential

93

INSIGHT PLAN AND PERFORM


When you open the Insight Perform data warehouse, you see the data marts that are contained in the data
warehouse for performance and capacity. Next in this course, you'll dive into the Volume Daily Performance
data mart and view some of the detailed performance reports that come from Insight Perform.
Insight Perform correlates performance across the entire environment, from the application to the storage, to
provide customers with performance metrics for their applications.
If you drill down to the Volume Daily Performance data mart, you can see several types of reports that are
ready to run. In the next few slides, you'll view these.


Managing Business Entities


Create business entities:
Tenant (for cloud)
Line of business
Business unit
Project


MANAGING BUSINESS ENTITIES


Insight Plan introduces a hierarchical approach to business-level storage usage and reporting. Flat business
units are replaced with a more detailed tenant, line-of-business, business-unit, and project tree that can be
drilled into and populated at any level. Usage reporting can now be accomplished at the tenant level, which
gives cloud and service providers the tools to report to their customers at any level. Additionally, reporting
can still be carved up at any level, down to the application.
Essentially, customers can report on tenants that have lines of business, business units, and projects with
applications, so customers have the full spectrum of usage.
These business entities are added into OnCommand Insight Plan. Reporting is accomplished from the local
Insight Plan server and rolled up to the data warehouse (DWH) for enterprise-level reporting. The next two
slides show examples of local and DWH reporting.
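The hierarchical rollup described above can be sketched in a few lines of Python. This is only an illustration of the reporting idea, not Insight Plan's implementation; the entity names and the $0.05-per-GB rate are invented for the example.

```python
# Illustrative sketch: roll up storage usage along a tenant > line of business >
# business unit > project hierarchy, then price it for chargeback reporting.
from collections import defaultdict

# Each record: (tenant, line_of_business, business_unit, project, used_gb)
usage = [
    ("acme", "retail", "web", "storefront", 500),
    ("acme", "retail", "web", "checkout", 300),
    ("acme", "finance", "erp", "ledger", 1200),
]

def rollup(records, depth):
    """Sum usage grouped by the first `depth` levels of the hierarchy."""
    totals = defaultdict(int)
    for *path, used_gb in records:
        totals[tuple(path[:depth])] += used_gb
    return dict(totals)

by_tenant = rollup(usage, 1)   # ("acme",) -> 2000 GB
by_lob = rollup(usage, 2)      # retail -> 800 GB, finance -> 1200 GB
# Chargeback at an invented $0.05/GB rate, reported per line of business:
chargeback = {path: gb * 0.05 for path, gb in by_lob.items()}
```

The same `rollup` call with `depth=3` or `depth=4` reports at the business-unit or project level, which mirrors the ability to carve reporting at any level of the tree.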


Measuring and Managing Risk While Using Thin Provisioning

Risk associated with thin-provisioning overcommitment
Questions to answer:
Can I allocate more storage?
Am I at risk now?
Was I at risk before? For how long?
When will I be at risk?


MEASURING AND MANAGING RISK WHILE USING THIN PROVISIONING


How does NetApp calculate the risk-assessment ratio? Are these standard reports or custom reports in Report
Studio?
Insight Plan provides answers to the following questions when customers leverage thin-provisioning
technologies:
Can I allocate more storage?
Am I at risk now?
Was I at risk before? For how long?
When will I be at risk?
Insight Plan provides multiple enterprise reports that show trending and help with forecasting storage needs in
a thin-provisioned environment.
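The arithmetic behind these questions can be sketched briefly. This is not how Insight Plan computes its reports; the 90% risk threshold and the linear-growth forecast are illustrative assumptions.

```python
# Illustrative thin-provisioning risk arithmetic for a storage pool.

def overcommit_ratio(allocated_gb, physical_gb):
    """How much capacity has been promised relative to what physically exists."""
    return allocated_gb / physical_gb

def at_risk(used_gb, physical_gb, threshold=0.9):
    """'Am I at risk now?': usage has neared physical capacity (assumed 90% line)."""
    return used_gb / physical_gb >= threshold

def days_until_risk(used_gb, physical_gb, daily_growth_gb, threshold=0.9):
    """'When will I be at risk?': simple linear-growth forecast."""
    headroom_gb = physical_gb * threshold - used_gb
    return max(0.0, headroom_gb / daily_growth_gb)

# A pool with 10 TB physical capacity, 25 TB allocated, 8 TB used, growing 50 GB/day:
ratio = overcommit_ratio(25_000, 10_000)        # 2.5x overcommitted
risky_now = at_risk(8_000, 10_000)              # False: 80% used, under the 90% line
days_left = days_until_risk(8_000, 10_000, 50)  # 20 days of headroom
```

"Was I at risk before, and for how long?" is the same `at_risk` test applied to historical samples, which is why the reports depend on the captured usage history.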


Optimizing Storage Tiers


OPTIMIZING STORAGE TIERS


Insight Plan easily shows customers how tiering strategies work and exactly which application and business
units use each tier of storage.


Lesson 11
The FlexShare Tool


LESSON 11: THE FLEXSHARE TOOL



FlexShare Prioritization of Service Tool: Challenges and Benefits

Workload Challenges:
Specific applications and workloads require priority service control.
Without quality-of-service control, some workloads can interrupt critical operations or transactions.
Priorities can change based on time and demand.

A Standard Feature in Data ONTAP 7G Software:
Effective storage consolidation is achieved.
Each volume is assigned one of five priority levels.
Critical workloads get the fastest response when the controller is fully loaded.
The storage administrator can make dynamic adjustments.

FLEXSHARE PRIORITIZATION OF SERVICE TOOL


CHALLENGES AND BENEFITS



The FlexShare Tool Gives Priority Service to the Most Important Workloads

Saturated Controller Under Same Workloads:
Workload latency is similar when a controller is fully loaded and the FlexShare tool is not used.
With the FlexShare tool, significant reductions in latency are seen in high-priority volumes.
Latency for other volumes is based on their priority settings.

[Chart: latency in milliseconds (0-80) for high-, medium-, and low-priority volumes on a saturated
controller, shown without and with the FlexShare tool]


THE FLEXSHARE TOOL GIVES PRIORITY SERVICE TO THE MOST IMPORTANT WORKLOADS
The FlexShare tool is enabled when system bottlenecks occur:

CPU
NVRAM
Disk

Control of priority by volume: five priority levels
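The effect of priority levels under saturation is easy to see with a toy model: give each volume a share of the controller's service capacity in proportion to a weight for its level. This is purely a conceptual sketch; the five level names and the weights are invented here, and FlexShare's actual scheduling is not this simple.

```python
# Conceptual sketch of priority-weighted servicing on a saturated controller.
# Five levels, as in FlexShare; the specific weights are invented.
WEIGHTS = {"very_high": 16, "high": 8, "medium": 4, "low": 2, "very_low": 1}

def service_shares(volume_priorities):
    """Map {volume: level} to {volume: fraction of controller capacity}."""
    total = sum(WEIGHTS[level] for level in volume_priorities.values())
    return {vol: WEIGHTS[level] / total for vol, level in volume_priorities.items()}

vols = {"oltp_db": "very_high", "home_dirs": "medium", "scratch": "very_low"}
shares = service_shares(vols)  # oltp_db gets 16/21 of the capacity
# Queueing delay falls as a volume's share rises, so when the controller is
# fully loaded, the high-priority volume sees much lower latency.
```

When the controller is not saturated, all volumes are serviced promptly and the weights have little visible effect, which matches the behavior described on the previous slide.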


Control of System and Client Workloads

System priority set high for critical deadlines:
Backup
Disaster recovery
Client priorities set high for application and database control:
Exchange
SAP
Oracle

[Chart: system load (I/O per second) over time for system and client workloads, alternating between
"Prioritize Client over System" and "Prioritize System over Client"]

Dynamic Control of System Resources


CONTROL OF SYSTEM AND CLIENT WORKLOADS



FlexShare Example: Volumes on FC and SATA Disks

[Diagram: clients and servers connect through a switch to a FAS storage system running Data ONTAP
software with the FlexShare tool; volumes reside on a high-priority FC aggregate and a medium-priority
SATA aggregate]

Storage administrators can:
Prioritize data access in mixed storage environments
Set priority for volumes on FC disks higher than priority for volumes on SATA disks


FLEXSHARE EXAMPLE
VOLUMES ON FC AND SATA DISKS



Module Summary
Now that you have completed this module, you should be able to:
Identify NetApp core software:
On-box features of Data ONTAP software
Off-box features of Data ONTAP software
Describe the on-box and off-box capabilities of NetApp software


MODULE SUMMARY



Module 3
Core Hardware Technology


MODULE 3: CORE HARDWARE TECHNOLOGY

3-1

NetApp Accredited Storage Architect Professional Workshop: Core Hardware Technology

2012 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.

NetApp University - Do Not Distribute

Module Overview
This module focuses on NetApp core hardware
technology:
Fabric Attached Storage (FAS) Systems
V-Series Systems
Storage Drive Technology
Resources and tools


MODULE OVERVIEW


Module Objectives
After this module, you should be able to:
Describe NetApp enterprise hardware:
FAS systems
V-Series systems
Storage Acceleration Appliance, FlexCache storage device, and high-availability (HA) devices
E-Series systems
Identify the available drive types:
Fibre Channel (FC)
SAS
SATA
Solid-state disk (SSD)
Identify available resources



MODULE OBJECTIVES


Lesson 1
Hardware


LESSON 1: HARDWARE


Agenda
Overview
FAS2X00
FAS3X00
FAS6X00
SATA, SAS, FC, SSD
Remote LAN Module
(RLM), Embedded
Switch Hub (ESH),
AT-FCX
Performance
parameters

Virtualization
solutions, including
V-Series systems
Resources


AGENDA


Networked Storage Topology

[Diagram: enterprise and departmental SAN (block) hosts attach over Fibre Channel and iSCSI; enterprise
and departmental NAS (file) hosts attach over the corporate LAN and dedicated Ethernet; both paths
terminate on NetApp FAS storage]

NETWORKED STORAGE TOPOLOGY


Our network topology is extensive, from SAN block-based connectivity with FC and iSCSI to NAS file-based
attachment to LANs and dedicated Ethernet connectivity. We bring a unique offering to the marketplace,
because our systems are so flexible that one storage controller can handle communication from either SAN or
NAS. This flexibility provides a big advantage to midsized companies that have an immediate need for NAS
storage but want to move toward a SAN-style infrastructure.


Industry-Leading Systems Portfolio: Truly Unified

Broad System Portfolio

Protocols: FC, FCoE, iSCSI, NFS, CIFS
Cost and Performance: Flash Cache, SSD, FlexCache
Unified Management:
Same tools and processes: learn once, run everywhere
Integrated data management
Integrated data protection

One Architecture for Many Workloads

INDUSTRY-LEADING SYSTEMS PORTFOLIO: TRULY UNIFIED


The NetApp systems portfolio is truly unified.
Key points:
Unified Storage Architecture is much more than support for multiple protocols on one storage array. In most
environments of scale, multiple protocols are not run on the same box. The real benefits of unified storage are
at an architecture level, not at a box level.
The big question is how to achieve the lowest cost profile while meeting the SLAs for a particular workload
or mix of workloads. Consider the following questions:
Why buy more than is needed?
The ability to grow and scale from low-end to high-end systems on one architecture means that customers
don't have to apply a "rip-and-replace" approach to one of the most costly parts of their IT operations: the
processes and skill sets that are required to deliver IT services to their users.
How can we help customers who are invested in an infrastructure other than ours benefit from our IT
efficiencies?
Our ability to virtualize SAN systems with V-Series enables customers to achieve the benefits of
standardization, data protection, and storage efficiency even if they are currently running EMC, HDS, or HP
storage systems.
How can customers achieve multiple cost-performance profiles within the same architecture?
We use flash-assist technologies and caching techniques to achieve high performance from low-cost drives.
Thereby, we enable what some people refer to as "tierless" storage. A unified architecture means that
customers don't need to apply a rip-and-replace approach when additional I/O or, more likely, a mix of
additional I/O and cost profiles is needed for multiple applications and storage needs.


How does standardization reduce cost?
As the number of architectures decreases, efficiency and flexibility increase. Customers can increase storage
utilization by using one architecture, rather than using a multi-array approach that requires division of the
architecture. The ability to handle multiple workloads and deploy multiple technology options across one
architecture provides customers with the flexibility to deal with change. It is unlikely that the storage
requirements of today and the storage requirements 12 to 18 months from now will be the same.
Delivery of a unified set of tools, a unified set of processes, and one way of performing disaster recovery,
backup, provisioning, management, and maintenance produces massive benefits in terms of complexity
reduction. Complexity reduction quickly translates into cost reduction.


Evolving Data-Center Design

[Diagram: evolution from application-based silos (the traditional approach, with management, apps, servers,
network, and storage dedicated per silo) through zones of virtualization to private and public clouds, forming
a flexible and efficient shared IT infrastructure]

A flexible and efficient foundation is essential.

EVOLVING DATA-CENTER DESIGN


What dominates our discussions with IT organizations today is the application silo model. Until a few years
ago, the application-based silo was the primary provisioning model for servers and storage.
The application-based approach begins with an application and builds a dedicated infrastructure under it:
services and storage are carved out for the application and its users. Typically, silos are independent of each
other. Often, different choices in regard to servers and storage are made for different silos. Each silo requires
specialized skills, and often an organization is defined around a tier of service, with dedicated SAN teams or
dedicated NAS teams, tier 1 or tier 2, and so on. When an application is rolled out, the first step is to purchase
and rack new hardware and infrastructure. This process can require months, so months may pass before an
application is placed into production. Then, when the roll-out is complete, it is difficult to share resources.
Excess capacity and horsepower that is stranded in one silo can't be allocated to another application or
repurposed to roll out a new application.
But server virtualization is changing this situation and paving the way for a completely different architecture,
an architecture that enables one pool of resources to be shared across multiple clients. Server virtualization
has a compelling value proposition and a profound implication. The value proposition is simple: most servers
are underutilized. When multiple applications are run on one server, server footprint is reduced, utilization is
increased, manpower needs are reduced, and money is saved. British Telecom, for example, reduced from
3,000 servers to just over 100 blades.
Virtualization allowed applications to be decoupled from hardware. Now, applications are mobile. They can
move from server to server for load balancing, from data center to data center for disaster recovery, and into
and out of the cloud for capacity bursting, flexibility, and cost. IT organizations can build a broad,
homogeneous, horizontal server infrastructure that is capable of running multiple applications simultaneously.
And server virtualization breaks the cycle of having to install new hardware in order to deploy new
applications. Resources can flow to where they are needed. Applications can be moved around, and a degree
of standardization can be achieved.


However, some companies discover that their storage infrastructure doesn't provide them with the level of
flexibility and efficiency that they need. We work with companies every day who are spending most of their
virtualization-resource efficiency gains on their storage infrastructure. So, a fundamentally different approach
is required for storage.
We were early to recognize the need for virtualized storage. We have been delivering virtualized storage for
years. Customers want to build not only a broad horizontal infrastructure that can run multiple applications
for servers but also a storage infrastructure that maximizes efficiency.
The silo model is being replaced by the virtualization model. The model of running multiple applications on a
server infrastructure that is optimized for flexibility, speed, and scale leads to a broader shared IT
infrastructure. Various terms are used: virtual data center, dynamic data center, virtual dynamic data center,
internal cloud, and private cloud. We use the term shared IT infrastructure.
We expect the silo and virtualized models to coexist for years, but eventually application-based silos will be
relegated to legacy applications that will never be migrated or to a small set of key applications in the data
center that warrant their own dedicated infrastructures. As time passes, the vast majority of storage and the
vast majority of the applications will move to the shared infrastructure.
NetApp is the clear leader in the new shared IT infrastructure world. Our underlying architecture and design
approach, the partnerships that we have built in the market, and our commitment to customer success make us
the storage foundation of choice for virtualized, shared infrastructure.


Refreshes and Additions

More powerful, affordable, and flexible systems for midsize businesses and distributed enterprises:

FAS2040: 408 TB, 136 drives
FAS2240: 432 TB, 144 drives
FAS/V3210: 720 TB, 240 drives
FAS/V3240: 1,800 TB, 600 drives, 1-TB Flash Cache
FAS/V3270: 2,880 TB, 960 drives, 2-TB Flash Cache
FAS/V6210: 3,600 TB, 1,200 drives, 3-TB Flash Cache
FAS/V6240: 4,320 TB, 1,440 drives, 6-TB Flash Cache
FAS/V6280: 4,320 TB, 1,440 drives, 8-TB Flash Cache

Unified Storage Architecture

REFRESHES AND ADDITIONS


Most of the systems within our FAS portfolio were refreshed last year.
In November 2011, the entry line was refreshed. We are now adding a new member to the family, the
FAS2240 system. This system becomes the flagship, high-end offering of the entry line. This powerful system
comes in 2U and 4U configurations.
We introduced new fixed configurations for the FAS2040 system with upgraded technology at a lower price.
The FAS2040 system is now the entry-level offering for our Enterprise portfolio, replacing the FAS2020
system and beating its price point. The old FAS2040 SKUs and the FAS2020 SKU reached end of
availability (EOA) on November 8, 2011.
All products in our line support Data ONTAP 8.0, providing a truly unified system portfolio. Regardless of
where customers enter or purchase into our Enterprise line, they gain the increased efficiency and flexibility
that is offered by Data ONTAP 8.0. With these enhancements to our portfolio, we offer not only a system that
can compete with competitors such as VNXe but also a no-compromise portfolio that can beat VNXe and
other competitors.
Key points:


A truly unified portfolio


The best storage platform for efficient IT infrastructure
An approach that differentiates our offerings from competitors' offerings
Most efficient
Extremely flexible (in terms of performance, capacity, expandability)
Delivering the best value to the customer


V-Series Open Storage Controllers: V6200 and V3200 Systems

V-Series systems build on current storage investments to satisfy unmet needs.

V3210: 480 TB
V3240: 1,200 TB
V3270: 1,920 TB
V6210: 2,400 TB
V6240: 2,880 TB
V6280: 2,880 TB

Support for Disk Arrays from Major Storage Vendors

V-SERIES OPEN STORAGE CONTROLLERS: V6200 AND V3200 SYSTEMS


There are two new V-Series systems: the V6200 series and the V3200 series.
Key points:


To complement the new FAS systems, we offer six new V-Series systems.
V-Series systems support disk arrays from major storage vendors.
V-Series systems build on the customer's current storage investment to satisfy unmet needs.
V-Series systems enable customers to gain the benefits that NetApp can deliver.


Enhanced Data ONTAP 8 for More Value

EFFICIENCY (to address data growth): SATA or Flash Cache, RAID-DP, thin provisioning, Snapshot
copies, deduplication, thin replication, virtual copies, data compression

FLEXIBILITY:
Unified connectivity, allowing all workloads to be consolidated over Ethernet
In-line data compression, which extends efficiency and increases utilization
Nondisruptive data mobility for improved data management

Flexibility and Efficiency, Shared IT Infrastructures

ENHANCED DATA ONTAP 8 FOR MORE VALUE


Key points:
Thousands of customers have deployed Data ONTAP 8.
New features include support for Flash Cache, compression, Unified Connect, and DataMotion for Volumes.


Lesson 2
FAS2000 Series


LESSON 2: FAS2000 SERIES



FAS2000 Series: Architecture Highlights

The FAS2000 series is a NetApp entry-level enterprise platform:
Fast CPU and memory architecture
High-availability cluster in a box
Either SATA or SAS storage architecture
Increased onboard I/O connectivity
The series introduces BMC (Baseboard Management Controller) remote management technology.
SAS and SATA disks are available.
The series is RoHS-compliant (Restriction of Hazardous Substances).

FAS2000 SERIES: ARCHITECTURE HIGHLIGHTS


FAS2000 controllers were added to the product line in 2009. They have fast CPUs and memory and were
designed to operate within a high-availability architecture. The FAS2000 systems replaced the FAS200
systems, which were popular products for remote offices and small company installations.
The FAS2000 systems use a high-performance storage technology called SAS. Baseboard Management
Controller (BMC) is a remote management feature that is unique to the FAS2000 series. The BMC feature is
similar to the RLM port that is available on the FAS3100 and FAS6000 systems.
Both SATA and SAS disk drives are available internal to the box, and FC and SATA drives can be used
externally through the expansion shelves. FAS2000 systems are RoHS-compliant.


FAS2000 Storage Systems

The NetApp FAS2000 series, featuring the Data ONTAP operating system:
Leading performance and efficiency
Unified storage architecture
Easy and affordable scaling
Superior data protection
Easy deployment and management

FAS2000 STORAGE SYSTEMS


Segment-leading performance and efficiency
Unified Storage Architecture to handle multiple SAN and NAS workloads
Easy and affordable scaling to meet ever-increasing storage needs
Superior data protection to deliver frequent backups, simple and quick recovery, and cost-effective availability
Easy deployment and management without the need for extensive storage expertise


Entry FAS Systems

More powerful, affordable, and flexible systems for midsized businesses and distributed enterprises:
FAS2240 (new system): 432 TB
FAS2040 (new price): 408 TB

Start right. Keep it simple. Grow smart.

ENTRY FAS SYSTEMS


FAS2000 systems allow customers to Start Right, Keep It Simple, and Grow Smart.
The FAS2240 system is the flagship product for the entry line:
Provides more power to meet the needs of demanding midsized or distributed enterprise deployments
Has SAS connectors
The FAS2020 system is preconfigured with drives to provide excellent value for smaller, value-oriented IT
organizations.
The FAS2050 system has been discontinued.


FAS2000 Series: Key Specifications

                        FAS2040                  FAS2240-2        FAS2240-4        FAS3210
Form Factor             2U                       2U               4U               3U
Chassis Depth           24 inches                19 inches        24 inches        24 inches
Max Storage             408 TB                   374 TB           430 TB           720 TB
Max Drive Count         136                      144              144              240
10GbE Support           No                       Yes              Yes              Yes
Flash Cache             No                       No               No               No
V-Series                No                       No               No               Yes
MetroCluster Support    No                       No               No               Yes
OS Version              Data ONTAP 7.3 and 8.x   Data ONTAP 8.1   Data ONTAP 8.1   Data ONTAP 7.3 and 8.x
SSD                     Yes                      Yes              Yes              Yes

All figures represent dual-controller configurations.

FAS2000 SERIES: KEY SPECIFICATIONS


Each FAS2000 system is an "all-in-one" system; that is, all components are inside the unit.
In the FAS2000 series, as in the FAS200 series, a controller and a storage shelf are built into one unit. The
FAS2240 is one of the two latest additions to the FAS product line. In FAS2000 systems, SAS drives are
used.
External DS14 shelves can be added: up to 84 additional FC or SATA spindles. At this time, there are no
external SAS drives.
SAS or SATA drives may be present in the controller head units of the FAS2040, FAS2240, and FAS2020
systems. The FAS2020 system is smaller than the FAS2040 and FAS2240 systems. The FAS2020 system has
12 SAS or SATA drives and the ability to add two additional shelves.
FAS2000 systems can be clustered. The FAS200 systems, which the FAS2000 systems replaced, cannot be
clustered. All FAS2000 systems are capable of the four core protocols and have FC connectivity. Because the
FAS2040 and FAS2240 systems have a PCI slot, they can be expanded. FAS200 systems cannot be
expanded. Typically, the additional port is used for expansion shelves.
The interconnect is across the backplane of the chassis. There is no separate CFO card.
The FAS2050 system has been discontinued.


FAS2240: More Software Value Included

Included at no additional cost:
All protocols
Data ONTAP Essentials key technologies:
Performance optimization: FlexShare quality-of-service tool
Storage efficiency: deduplication, thin provisioning, compression, NetApp Snapshot copies, RAID-DP
technology
Core management: OnCommand management software (System Manager, provisioning capability,
protection capability)
High availability: RAID-DP technology, NetApp Snapshot copies, device-specific module (DSM) and
multipath I/O (MPIO), SyncMirror software, Open Systems SnapVault
Secure multi-tenancy: MultiStore software

FAS2240: MORE SOFTWARE VALUE INCLUDED


The FAS2240 system is equipped with all protocols and with Data ONTAP Essentials at no extra cost.
Data ONTAP Essentials provides the following key features:


Simplified performance optimization: the FlexShare quality-of-service tool
Industry-leading storage efficiency: deduplication, thin provisioning, compression, NetApp Snapshot
copies, and RAID-DP technology
Simplified management: OnCommand management software (System Manager to optimize day-to-day
performance, provisioning capability to streamline storage provisioning, and protection capability to help
secure business-critical data)
Increased availability: RAID-DP technology, Snapshot technology, DSM and MPIO, SyncMirror
software, and Open Systems SnapVault
Secure multi-tenancy: MultiStore software


Simple Software Structure

FAS2040
Included Software ($0): All Protocols; Data ONTAP Base:
Snapshot copies, RAID-DP technology, NearStore disk storage, FlexVol volumes, FlexShare tool, thin
provisioning, deduplication, compression, SyncMirror software
System Manager, FilerView tool
Optional Software: Eight Options, including the Server Pack, Application Pack, Advanced Pack, Protection
Pack, Windows Bundle, Virtualization Bundle, and Complete Bundle

FAS2240
Included Software ($0): All Protocols; Data ONTAP Essentials:
The features of Data ONTAP Base plus automated management and secure multi-tenancy (the Foundation
Pack)
Optional Software: Six Options: SnapRestore, SnapMirror, SnapVault, FlexClone, SnapManager Suite, and
Complete Bundle

SIMPLE SOFTWARE STRUCTURE


The updated FAS2040 system includes all protocols and is offered at a price that is comparable to the price of
the FAS2020 system.
The FAS2040 system is equipped with Data ONTAP Base, which includes the software listed in the top-left
cell of the table. These items are provided at no additional cost to the customer. Therefore, even with our
entry-level system, customers receive the industry-leading efficiency tools that NetApp is known for. Also,
customers can use System Manager to experience greater control, better visibility, and increased simplicity in
managing their environments. The FAS2040 system retains its current pack and bundle structure, so
customers who want additional capabilities can choose one or more of eight software options.
The FAS2240 system also includes all protocols and all components of Data ONTAP Essentials. So, the
FAS2240 system has the same software structure that our mid-level FAS3200 systems and high-end FAS6200
systems have.
In addition to providing all of the features provided by Data ONTAP Base, Data ONTAP Essentials
automates management via OnCommand management software and adds the secure multi-tenancy features
that are provided by MultiStore software.
To add software, customers just turn on a license. They can purchase enhanced capabilities one-by-one or
within a bundle.

Portfolio Positioning
Optimized for Virtualization and Consolidation

Unified Storage Architecture

FAS3210
Enterprise-class availability and data
protection
Ability to scale to meet rapid data growth
Flexibility for diverse workloads
FAS2240 New System!
Two to three times greater performance
over previous models
Investment protection
Most efficient use of the IT budget

FAS2040
Low acquisition cost
Easy order process
Comprehensive solution

PORTFOLIO POSITIONING
OPTIMIZED FOR VIRTUALIZATION AND CONSOLIDATION

FAS3210

High availability and scalability for larger environments


Enterprise-class availability and data protection for critical applications
Higher scalability that adapts readily to rapid data growth
Flexibility to support diverse workloads

FAS2240 New System!

Improved performance to support demanding workloads


Two to three times improvement in performance
Investment protection for growing business needs
Industry-leading efficiency that maximizes utilization of IT budgets

FAS2040 New Price!


Popular entry platform now priced for best value


Low acquisition cost for smaller IT organizations
Easy-to-order configurations
A comprehensive solution at an entry-price point


Lesson 3
FAS3200 Series


LESSON 3: FAS3200 SERIES


The FAS3200 and V3200 Series
Perfect Building Block for Shared IT Infrastructure

FAS3270, FAS3240, FAS3210
- The best value for mixed workloads
- Future-ready flexibility and scalability: 50% more PCIe connectivity; up to 2 PB of storage capacity
- Unified architecture and Data ONTAP 8.0, the storage-efficiency leader

THE FAS3200 AND V3200 SERIES


FAS/V3200 systems are the perfect building blocks for shared IT infrastructures. The three new systems are
FAS/V3210, FAS/V3240, and FAS/V3270.
FAS/V3200 systems offer the best value for mixed workloads. The systems were designed to cost-effectively
deliver a strong combination of benefits and the flexibility that supports mixed workloads.
FAS/V3200 systems also provide the scalability and flexibility that enables customers to be future-ready:

50% more PCIe slots (12 versus 8) provide more connectivity options or room for more Flash Cache
modules (up to 2 TB in the FAS/V3270 system).
Scalability of up to 2 PB of storage capacity handles requirement increases, especially for virtualized
shared storage environments.

FAS/V3200 systems provide higher performance than FAS3100 systems provide (typically ~25% gain for the
FAS/V3270 system over the FAS3170 system and ~50% gain for the FAS3240 system over the FAS3140
system).
With the FAS/V3200 systems, an additional service processor and an alternate control path (ACP) enable
additional diagnostics and nondisruptive recovery (same as with FAS/V6200 systems).
FAS/V3200 systems also leverage the advantages of Data ONTAP 8 and the NetApp Unified Storage
Architecture (one OS, consistent management software, multiple protocols, integrated data protection, and
multiple tiers of storage) to provide industry-leading storage efficiency. For example, deduplication and
compression help customers control data growth.


FAS3200 Systems: When to Sell
Target Applications and Customers
- FAS3270: business and virtualization applications; storage consolidation and server virtualization
- FAS3240: mixed workloads (the best price and performance); enterprise and midsized-business
  customers; Windows storage consolidation
- FAS3210: midsized businesses and Windows storage consolidation

FAS3200 SYSTEMS: WHEN TO SELL


Target applications and customers for FAS3200 systems:

Business and virtualization applications


Storage consolidation and server virtualization
Windows storage consolidation
Enterprises and midsized businesses, primarily the FAS3210 system for midsized businesses

Enterprise and midsized-business customers appreciate the value that the FAS3200 series delivers through its
efficiency, flexibility (through expandability and scalability), and performance.
The FAS3270 system is great for enterprise midrange storage. It serves as a building block for shared IT
infrastructure and facilitates storage consolidation.
The FAS3240 system is the flagship product, with strong fundamentals in the price-sensitive enterprise space.
It is particularly useful for mixed workloads and delivers scalability and performance at a great price.
The FAS3210 system is particularly useful for mixed workloads in the medium-to-small-enterprise (MSE)
market and for Windows storage consolidation.
Additional opportunities are available to MSE customers:

The V3210 system


Flash Cache, which automatically boosts performance

NOTE: Flash Cache is not offered in the FAS2000 family, and there is not a V2000 family for MSE
customers.


Key FAS/V3200 Features
- I/O: I/O expansion module (50% more slots); two PCIe v2.0 (Gen 2) slots in the controller; onboard SAS ports
- Performance: midrange performance improvements
- Scaling: 15% higher spindle count and capacity; HA available in 3U and 6U footprints
- RASM: service processor (SP) remote management, with capabilities beyond RLM and BMC; addition of
  high-end RASM capabilities
- Flexibility and continued price and performance leadership

KEY FAS/V3200 FEATURES


RASM (Reliability, Availability, Serviceability, Manageability)
RLM (Remote LAN Module)
BMC (Baseboard Management Controller)


Flexibility and Space Efficiency: FAS/V3240 and FAS/V3270
- FAS3100: 6U, dual-enclosure HA
- FAS/V32x0: 6U single-enclosure HA with more expansion slots (same rack space, greater performance,
  and additional expansion capabilities), or 3U single-enclosure HA (half the rack space and greater
  performance)

FLEXIBILITY AND SPACE EFFICIENCY


FAS/V3200 Platform Transitions
- FAS2050 (significant performance issues, capacity constraints, I/O limitations, no onboard SAS ports,
  no Data ONTAP 8.0 support): solution is the FAS/V3210
- FAS3100 (performance issues, excessive footprint, no onboard SAS ports): solution is the FAS/V3240A
  or FAS/V3270A
- FAS3100 (performance issues, significant I/O limitations, no onboard SAS ports): solution is the
  FAS/V3240AE or FAS/V3270AE

FAS/V3200 PLATFORM TRANSITIONS

FAS/V3200 Configuration Flexibility
A 3U chassis can hold either two controllers (single-enclosure HA) or a controller plus an I/O expansion
module.

FAS/V3200 CONFIGURATION FLEXIBILITY


FAS/V3200 Slots and Interfaces

Standalone controller:
- 2 PCIe v2.0 (Gen 2) x8 slots (top and bottom: full height and full length)
- Management port (wrench): SP and e0M
- Private management: ACP (wrench with lock)
- 2 x 6-Gb SAS (0a, 0b)
- 2 x HA interconnect (c0a, c0b)
- Serial console port
- 2 x 4-Gb FC (0c, 0d)
- 2 x GbE (e0a, e0b)
- USB port (not currently used)

I/O expansion module:
- 4 x PCIe x8 slots (full length and full height)

FAS/V3200 SLOTS AND INTERFACES


FAS/V3200 I/O Expansion Module
- Not hot-swappable:
  - If the I/O expansion module (IOXM) is removed, the controller panics.
  - If the IOXM is inserted into a running FAS/V3200 system, it is not recognized until the controller is
    rebooted.
- 4 full-length PCIe v1.0 (Gen 1) x8 slots

FAS/V3200 I/O EXPANSION MODULE


To enable the use of SAS with the FAS/V3210 system, a SAS card must be added to the controller. The
addition of the SAS card requires one slot.


FAS/V3200 Motherboard Layout
(Figure labels: RTC coin cell, NVMEM battery, DIMMs, CPU0, CPU1, USB, CPU airflow guide (open), and
PCIe card area (slots 1 and 2).)

FAS/V3200 MOTHERBOARD LAYOUT


FAS/V3200 Detailed Memory Configurations

DIMM Bank   Description   FAS3210   FAS3240   FAS3270
DIMM-4      Main          Empty     Empty     Yes
DIMM-3      Main          Empty     Empty     Yes
DIMM-2      Main          Yes       Yes       Yes
DIMM-1      Main          Yes       Yes       Yes
DIMM-NV2    NVMEM         Empty     Yes       Yes
DIMM-NV1    NVMEM         Yes       Yes       Yes

The NVMEM banks (DIMM-NV1 and DIMM-NV2) are battery-backed DIMMs.

FAS/V3200 DETAILED MEMORY CONFIGURATIONS


FAS/V3200 USB Flash Module
- Boot device for the Data ONTAP operating system and for environment variables
- Replacement for CompactFlash, with the same resiliency levels as CompactFlash
- 2-GB density
- Field-replaceable unit (FRU)

FAS/V3200 USB FLASH MODULE


FAS3200 Series Product Comparison

Feature                 FAS3270        FAS3270    FAS3240        FAS3240    FAS3210
                        (expanded I/O)            (expanded I/O)
Maximum raw capacity    1,920 TB       1,920 TB   1,200 TB       1,200 TB   480 TB
Maximum disk drives     960            960        600            600        240
Memory                  32 GB          32 GB      16 GB          16 GB      8 GB
Maximum Flash Cache     2 TB           2 TB       1 TB           1 TB       N/A
PCIe expansion slots    12             --         12             --         --

Controller form factor: expanded-I/O models use dual-enclosure HA (two controllers in two 3U chassis, 6U
total); the other models use single-enclosure HA (two controllers in a single 3U chassis).
Onboard I/O: 4-Gb FC, 6-Gb SAS, and GbE.
Storage networking supported: FC; FCoE; IP SAN (iSCSI); NFS; CIFS; HTTP; FTP.
OS version: Data ONTAP 8.

FAS3200 SERIES PRODUCT COMPARISON


FAS/V3200 Key Specifications

Specification                   FAS/V3170        FAS/V3210   FAS/V3240   FAS/V3270
Memory                          32 GB            8 GB        16 GB       32 GB
NVRAM                           4 GB             1 GB        2 GB        4 GB
I/O expansion module            --               --          Yes         Yes
Maximum number of PCIe slots    8                4           12*         12*
Maximum number of spindles      840              240         600         960
Maximum capacity**              1,680 TB         480 TB      1,200 TB    1,920 TB
Maximum aggregate size          70 TB            50 TB       50 TB       70 TB
Data ONTAP                      7.2.5+           7.3.5 and 8.0.1 (all FAS/V3200 models)

Onboard I/O: 4 x GbE and 8 x 4-Gb FC on the FAS/V3170; 4 x 6-Gb SAS, 4 x GbE, and 4 x 4-Gb FC on the
FAS/V3200 models. Processor details are listed in the notes below.

* With I/O expansion module

** For Data ONTAP 8.0 and earlier, maximum capacity is half the amount that is specified.

FAS/V3200 KEY SPECIFICATIONS


Key technical specifications for FAS3200 systems:

FAS3170: two 64-bit dual-core 2.6 GHz


FAS3210: one 64-bit dual-core 2.3 GHz
FAS3240: one 64-bit quad-core 2.3 GHz
FAS3270: two 64-bit dual-core 3.0 GHz

Key points:

Three new FAS/V3200 systems: FAS/V3210, FAS/V3240, and FAS/V3270


Three primary differences between the three models: expandability, scalability, and performance

The expansion capabilities of the FAS/V3270 system and the FAS/V3240 system are equal (on-board
connectivity and 12 PCIe slots) for host and back-end connectivity and for Flash Cache (for example, up to 2
TB of Flash Cache in the FAS3270 system). Both systems have more expandability than the FAS/V3210
system (four PCIe slots and on-board I/O).
There are two versions of the FAS/V3270 and FAS/V3240 systems: with and without expanded I/O. Most
customers choose to purchase and deploy the expanded I/O systems (which are 6U tall, instead of 3U tall)
because the additional height enables 12 PCIe slots, for additional connectivity and for Flash Cache modules.
The FAS/V3270 system can scale up to almost 2 PB of storage capacity. The FAS/V3240 system can scale up
to 1.2 PB. The FAS3210 system can scale up to 480 TB.
Among the FAS/V3200 systems, the FAS/V3270 system delivers the highest performance, and the
FAS/V3240 system delivers more performance than the FAS/V3210 system. These differences are
determined by the characteristics of the multicore processors and the amount of memory that are designed
into each system.


Compared to the current FAS/V3100 systems, the new FAS/V3200 systems offer:


Greater flexibility, as a result of expandability and scalability improvements (the FAS/V3270 system
offers 50% more PCIe slots and 15% more storage capacity than the FAS/V3170 system offers)
Improved performance (increase varies by systems)
Higher availability (from the new service processor and the alternate control path)

Mini-Exercise: I Want a New Card


Assume that your customer has a FAS3240 system. The
customer is impressed with its capabilities, so much so
that the customer wants to use its features for additional
projects.
To build the desired configurations, the customer needs
more ports, so the customer wants to buy a dual-port
optical GbE adapter card.

The controller is running Data ONTAP 8.1 7-Mode.


Identify the part number of the card and the slot numbers
in which the card can be placed.
Refer to the System Configuration Guide for the FAS3240, which can be accessed
from the NetApp Support site.


MINI EXERCISE: I WANT A NEW CARD


Assume that your customer wants to expand his FAS3240 system. He needs an additional dual-port optical
adapter. In this exercise, you identify the required part number and the available expansion ports.
1. On your laptop, log in to the NetApp Support site: https://now.netapp.com/eservice/SupportHome.jsp
2. When the Support site loads, look for Documentation. On the right, in the More Resources box, click
Interoperability, and then click System Configuration Guide.
3. On the left side of the screen, part way down, locate and select System Configuration Guide.
4. From the drop-down menu, select Release 8.0.1 7 Mode, and click Go.
5. On the left, locate and select NetApp storage systems.
6. From the FAS3000/6000 menu, select FAS3240 and Expansion slots/cards.
7. In the center of the screen, select Expansion Slot Assignments for a FAS3240A in an HA environment.
8. Locate the card part number and the relevant expansion slot numbers.
You can depend upon the accuracy of the data that the System Configuration Guide provides. The guide is
updated constantly, and NetApp engineers are committed to ensuring that the data is accurate and timely. For
each release of the Data ONTAP operating system, the data is encoded in a file. Systems expect to be properly
configured (as prescribed by the guide) and recognize when they are not properly configured.


Key FAS/V3200 Series Features:

HA Controller Configuration Comparisons

Feature          FAS3210         FAS3140         FAS3240        FAS3160                FAS3270        FAS3170
Memory           8 GB            8 GB            16 GB          16 GB                  32 GB          32 GB
NVRAM            NV8 (4 Gb)      NV7 (1 GB)      NV8 (4 Gb)     NV7 (4 GB)             NV8 (4 Gb)     NV7 (4 GB)
PCIe I/O slots   4               --              4 or 12        --                     4 or 12        --
Max spindles     240             420             600            672                    960            840
Max capacity     480 TB          420 TB          1,200 TB       672 TB                 1,920 TB       840 TB
Data ONTAP       8.0 and later   7.2.5 and later 8.0 and later  7.2.6/7.3.1 and later  8.0 and later  7.2.5 and later

Onboard I/O: the FAS/V3200 systems provide onboard 4-Gb FC, 6-Gb SAS, and GbE ports.

KEY FAS/V3200 SERIES FEATURES: HA CONTROLLER CONFIGURATION


COMPARISONS
COMPARISONS
The earlier systems in the comparison use PCI-X and 2-Gb FC. Information about the systems can be found on
PartnerCenter and SalesEdge.
The FAS/V3240 and FAS/V3270 systems use the faster PCIe bus architecture and offer 4 or 12 expansion slots. These midrange systems are powerful boxes that carry a solid load.


Lesson 4
FAS6200 Series

NetApp Confidential

LESSON 4: FAS6200 SERIES


FAS6200 and V6200 Series
FAS6280, FAS6240, FAS6210
- High performance for demanding workloads: double the performance of other FAS systems; ongoing
  performance gains via the Data ONTAP 8 operating system
- Future-ready scalability and flexibility: up to 3 PB of capacity and double the PCIe connectivity of
  other FAS systems; built-in 10-GbE, 8-Gb FC, and 6-Gb SAS
- Enhanced enterprise-class availability: service processor and alternate control path (ACP)

FAS6200 AND V6200 SERIES


The three FAS6200 systems (FAS6210, FAS6240, and FAS6280) are designed for large-scale, shared IT
infrastructures.
FAS6200 systems provide the performance that the most demanding workloads require. FAS6200 systems
deliver twice the performance that other FAS systems deliver. Performance will continue to increase, as the
Data ONTAP 8 operating system is enhanced and tuned.
FAS6200 systems provide the scalability and flexibility that is required to be future-ready:


Ability to scale to 3 PB of storage capacity to handle increasing requirements, especially for virtualized
shared storage environments
Flexibility in regard to connectivitymore than twice the number of PCIe slots that FAS6000 systems
provide
PCIe slots that can be used with Flash Cache modules to further increase performance (up to 8 TB in
FAS6280)
Built-in, high-bandwidth connectivity (10-GbE, 8-Gb FC, and 6-Gb SAS), ready to meet any
connectivity requirement that future deployments require
Enhancement of enterprise-class availability
An additional service processor and an alternate control path (ACP) that enable additional diagnostics and
nondisruptive recovery


FAS6200 and V6200 Highlights


NetApp high-end platform series
Significant increase in performance: leading-edge
performance technology
Integrated, high-performance ports
Enhanced configuration flexibility
I/O expansion module that increases slot count
HA options in either 6U or 12U form factor

Higher spindle limits and capacities


Significant RASM improvements
Support that began with the launch of Data
ONTAP 8.0.1 RC2

FAS6200 AND V6200 HIGHLIGHTS


FAS6200 System Positions
Target applications and customers: large shared workloads; demanding performance and capacity
requirements; virtualization and technical applications; large enterprises and cloud-service providers
- FAS6280: cloud computing platform
- FAS6240: enterprise workload consolidation platform
- FAS6210: virtualization platform

FAS6200 SYSTEM POSITIONS


The target applications and customers for the FAS6200 series are:

Large shared workloads with demanding performance and capacity requirements


Virtualization and technical applications
Large enterprises and cloud-service providers


FAS6200 Performance
- PCIe: PCIe v2.0 (Gen 2) x8 architecture; more than twice the slot expandability of the FAS6080 system
- Core: latest 64-bit processing architecture; faster memory, and more memory, than the FAS6080 system
  provides
- I/O: onboard 10-GbE and 8-Gb FC; integrated 10-GbE stateless off-load
- Up to 3.6 times the performance of the FAS6080 system

FAS6200 PERFORMANCE
Stateless off-load is functionality that previously resided in operating system software or hardware and is
now handled by a chipset, firmware, or software on the 10-GbE NIC.


Key FAS6200 Features
- RASM: persistent write log (de-stages NVRAM contents to flash); service processor (next-generation RLM)
- Flexibility: I/O expansion module (IOXM); flexible HA configurations in 6U and 12U
- Scaling: support for more spindles and more capacity; enough ports and the ability to use all-SAS storage
- Configuration flexibility with significant scaling and RASM improvements

KEY FAS6200 FEATURES


RASM (Reliability, Availability, Serviceability, Manageability)


Optimize Performance and Reduce Costs
- Flash Cache is a standard feature: 1 TB included with FAS6240 and FAS6280 systems
- The Virtual Storage Tier is data-driven, real-time, and self-managing
- Intelligent caching and storage efficiency enable the NetApp Virtual Storage Tier:
  - Real-time promotion of hot data
  - Performance scaling on demand
  - Efficient use of Flash and HDDs
  - Simple installation, no administration

OPTIMIZE PERFORMANCE AND REDUCE COSTS


For orders placed after September 12, 2011, 1TB of Flash Cache (512 GB per controller) is included with the
FAS/V6240 and FAS/V6280 systems (but not with the FAS/V6210 system).
Together, NetApp intelligent caching (Flash Cache) and storage efficiency features (for example,
deduplication) enable the virtual storage tier (VST), which optimizes performance and reduces costs. VST is
highly effective for virtualization environments, databases, messaging, and numerous applications.
With VST, NetApp introduced a better approach: intelligent caching. This technology is optimized for Flash
and is not simply an adaptation of older-generation disk-tier solutions.
The NetApp VST promotes hot data to performance storage without moving the data. The data block is
copied to the VST, but the hot block remains on hard-disk media. With this approach, the operational disk I/O
operations that other approaches require to move data between tiers are not needed. Also, when the
activity of the hot data on Flash trends down and data blocks become cold, the inactive data blocks are
overwritten with new hot-data blocks. Again, the data is not moved.
This no-movement approach is highly efficient. It not only eliminates wasteful operational I/O but also
enables the application of advanced efficiency features such as data deduplication and thin provisioning.
Granularity is key to the ability to place the most efficient amount of data into the intelligent cache. NetApp
VST uses a block size of 4K. This granularity prevents cold data from being promoted along with hot data.
Contrast this approach with other companies' approaches, which promote data blocks that are measured in MB
or even GB.
VST is simple to install and works out of the box with its default settings. The flexibility of VST enables the
creation of multiple classes of service by enabling or disabling the placement of data into the VST on a
volume-by-volume basis.
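The copy-based promotion idea described above can be sketched in a few lines of Python. This is a conceptual illustration only, assuming a simple LRU policy; the class and names are invented for this sketch and are not NetApp code. Hot blocks are copied into a fixed-size cache while the disk copy stays in place, and cold entries are simply overwritten rather than migrated back.

```python
from collections import OrderedDict

class VstCacheSketch:
    """Toy model of copy-based block promotion (not NetApp code)."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block_id -> data, kept in LRU order

    def read(self, block_id, disk):
        if block_id in self.cache:            # hit: serve from the cache
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = disk[block_id]                 # miss: read from disk...
        self.cache[block_id] = data           # ...and copy (promote) it
        if len(self.cache) > self.capacity:   # cache full: overwrite the
            self.cache.popitem(last=False)    # coldest entry in place
        return data

disk = {i: f"block-{i}" for i in range(8)}    # the disk copy never moves
vst = VstCacheSketch(capacity_blocks=2)
for block_id in (0, 1, 2):                    # block 0 turns cold and is
    vst.read(block_id, disk)                  # overwritten, not migrated
```

After the three reads, only blocks 1 and 2 remain in the cache; block 0 was evicted by overwrite while its on-disk copy stayed untouched, which mirrors the no-movement behavior described above.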


NetApp Enterprise-Class HA
Better than 5x9 availability
Demonstrated in real customer environments
Validated in a white paper by industry-analyst firm IDC

New enterprise-class availability features


Lights-out management via a new service processor
Nondisruptive recovery through a storage alternate control
path (ACP)

Continuous data availability for mission-critical applications


(MetroCluster software eliminating planned and unplanned
downtime)


NETAPP ENTERPRISE-CLASS HA
NetApp designs enterprise-class high-availability into its storage products.
The NetApp portfolio delivers proven data availability (the whole storage infrastructure: system, disk shelves,
and software). Across thousands of customer deployments, AutoSupport data shows better than 5x9 (99.999%)
availability. The industry-analyst firm IDC validated this finding in a white paper (on the Field Portal and on
NetApp.com).
With the FAS6200 series (and with the FAS3200 series), enterprise-class high availability is further enhanced
via provision of these features:

Service processor, for lights-out management


Alternate control path to storage, for nondisruptive recovery

With the HA software that is provided with the Data ONTAP system and the MetroCluster software that
customers can purchase, mission-critical applications are protected and planned and unplanned downtime is
eliminated.


Flexibility and Space Efficiency
- FAS/V6210: 6U single-enclosure HA (versus 12U dual-enclosure HA for the FAS6000): half the rack
  space and much higher performance
- FAS/V6240 and FAS/V6280: 12U dual-enclosure HA: the same rack space with much higher performance
  and additional expansion capabilities

FLEXIBILITY AND SPACE EFFICIENCY


FAS/V6200 Platform Transitions
- FAS/V3170 (performance challenges; limited 10-GbE and 8-Gb FC I/O): solution is the FAS/V6210
- FAS/V6040 (performance challenges; significant I/O limitations): solution is the FAS/V6240
- FAS/V6080 (insufficient performance; significant I/O limitations): solution is the FAS/V6280A

FAS/V6200 PLATFORM TRANSITIONS


FAS6200 Base Configurations

System    Single Chassis        Dual Chassis
FAS6210   FAS6210, FAS6210A     NA (being evaluated)
FAS6240   FAS6240               FAS6240A
FAS6280   FAS6280               FAS6280A

FAS6200 BASE CONFIGURATIONS


NVRAM8 Architecture
- The NVRAM8 is non-standard in height.
- The dedicated controller slot 2 is based on power and cooling requirements.
- FRUs can be installed and removed without tools.

NVRAM8 ARCHITECTURE
NVRAM8 is two cards in one: the interconnect hardware card for HA and the NVRAM electronics card. In
this regard, NVRAM8 is similar to NVRAM5 and NVRAM6. However, unlike with NVRAM5 and
NVRAM6, NVRAM8's HA and NVRAM functions are handled by separate chips.
NVRAM is a key element of NetApp technology. It enables writes to disk to be completed efficiently. It
accomplishes this task by allowing writes to be delayed until they can be performed in one burst and by
ensuring that the data is not lost by power outage or system panic before the burst is committed to disk.
The HA function carries the process one step further by linking two controllers into a redundant pair. The HA
link enables one controller to perform high-speed updates of the other controller's NVRAM with data that is
not yet committed to disk. If one controller fails, the other controller completes the tasks that the failed
controller did not complete.
No longer is battery power used to hold contents in DRAM memory for a minimum of three days. Instead,
when system power is lost unexpectedly, NVRAM8 performs a de-stage operation. The contents of DRAM
are moved to flash components within a minute of the power loss and then the card shuts down completely.
The battery is not needed to preserve customer data. When system power is restored, the Data ONTAP system
transfers the contents that are in the flash components back to DRAM and replays the NVRAM log from
DRAM memory.
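The journaling behavior described above can be sketched in a few lines. This is an illustrative model only, not ONTAP code: writes are acknowledged as soon as they are logged to stable memory, committed to disk in bursts, and replayed from the log after an unexpected restart.

```python
# Illustrative sketch (not ONTAP code) of NVRAM-style write journaling:
# log first, acknowledge immediately, commit in bursts, replay on recovery.

class JournaledStore:
    def __init__(self):
        self.nv_log = []   # stands in for battery/flash-backed NVRAM
        self.disk = {}     # stands in for committed on-disk state

    def write(self, key, value):
        self.nv_log.append((key, value))  # journaled to stable store first
        return "ack"                      # client sees a completed write

    def flush(self):
        for key, value in self.nv_log:    # one sequential burst to disk
            self.disk[key] = value
        self.nv_log.clear()               # journal is empty after commit

    def recover(self):
        self.flush()                      # replay the log after power loss

store = JournaledStore()
store.write("a", 1)
store.write("b", 2)
store.recover()      # simulate a restart before the normal flush
print(store.disk)    # {'a': 1, 'b': 2}
```

Because the log survives the outage, nothing acknowledged to the client is lost even though the burst to disk had not yet happened.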
Like NVRAM5 and NVRAM6, NVRAM8 uses InfiniBand as the protocol for the interconnection between
the redundant pairs of controllers for HA solutions. With the advent of NVRAM8, the speed of the link
doubled from SDR (2.5 Gb per second per lane, or 10 Gb per second per link) for NVRAM5 and NVRAM6 to
DDR (5 Gb per second per lane or 20 Gb per second per link) for NVRAM8. Like NVRAM7 in Spectre
(FAS3100 series), a chassis with two controllers does not need external cables to make the HA connection.
NVRAM8 features an additional high-speed connector to the controller board. This connector is part of a
physical link over the midplane to the other controller. A special LED on the PCI bracket lights up when two
controllers with NVRAM8 are present in a chassis.

NVRAM6
512 MB/2 GB DIMM
IB CFO Connectors
3-Cell Battery
2-Cell Battery (2 GB Version Only)


NVRAM6
For both the software and hardware modules, the NVRAM card is referenced. The NVRAM6 card is
currently used on NetApp systems.
The battery uses lithium-ion technology. Three or five 1.95 Ah cells provide a total of 5.9 or 9.8 Ah at 4.1 V. If an external power failure occurs, this configuration can supply onboard power for at least three days.
Each card contains two independent chargers. Together, the chargers charge the battery in less than 10 hours.
When the system is powered on, each charger is ON by default. Safety circuitry is built into the battery pack.
Each card has two InfiniBand CFO connections. A card has one or two batteries: if a card has 512 MB of memory, it has one three-cell battery; if a card has 2 GB of memory, it has a second battery. In either configuration, NetApp guarantees at least 72 hours of battery life.
Typically, a battery lasts longer than 72 hours, but NetApp guarantees at least 72 hours. Some people say that
72 hours (3 days) is not very long. However, 72 hours can be sufficient to enable the processes that prevent
data loss.
In most cases, within standard storage environments, backup power is available. When power is restored to a system and the system is rebooted, the Data ONTAP operating system replays the dirty writes and commits them to disk. At that point, a clean shutdown can be executed via the halt command. Then, NVRAM contains no data: the system shuts down completely, and all data is committed to disk. If all data can be removed from NVRAM within the three days that the battery provides power, no data is lost.


FAS/V6200 Series Features: HA Controllers

                              FAS/V3170        FAS/V6210  FAS/V6080        FAS/V6240  FAS/V6280
Processor cores               8                16         16               16         24
Memory                        32 GB            48 GB      64 GB            96 GB      192 GB*
NVRAM                         4 GB             8 GB       4 GB             8 GB       8 GB
Expansion I/O module (IOXM)   --               --         --               Yes        Yes
Maximum number of PCIe slots                                               24         24
Onboard 8-Gb FC               --               8 or 16    --               8 or 16    8 or 16
Onboard 10-GbE                --                          --
Onboard 6-Gb SAS              --               0 or 8     --               0 or 8     0 or 8
Maximum number of spindles    840              1200       1176             1440       1440
Maximum capacity              1680 TB**        2400 TB    2352 TB**        2880 TB    2880 TB
Maximum aggregate             70 TB            70 TB      100 TB           100 TB     100 TB
Data ONTAP release            7.2.5 and later  8.0.1      7.2.4 and later  8.0.1      8.0.1

* Requires Data ONTAP 8.0.2. With Data ONTAP 8.0.1, memory is 96 GB.
** Requires Data ONTAP 8.0 or later. With earlier releases, maximum capacity is half the specified value.
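The maximum-capacity row tracks the spindle counts directly: each figure is the spindle limit multiplied by 2 TB per drive (the 2 TB drive size is inferred from the ratios, not stated on the slide). A quick check:

```python
# Sanity check: maximum capacity = maximum spindles x 2 TB drives.
# (2 TB per drive is inferred from the table's ratios, not stated on it.)
max_spindles = {"FAS/V3170": 840, "FAS/V6210": 1200,
                "FAS/V6080": 1176, "FAS/V6240": 1440}
max_capacity_tb = {model: n * 2 for model, n in max_spindles.items()}
print(max_capacity_tb)
# {'FAS/V3170': 1680, 'FAS/V6210': 2400, 'FAS/V6080': 2352, 'FAS/V6240': 2880}
```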


FAS/V6200 SERIES FEATURES: HA CONTROLLERS


FAS3170 uses 2 x 64-bit dual-core 2.6 GHz.
FAS/V6210 uses 2 x 64-bit quad-core 2.26 GHz.
FAS6080 uses 2 x 64-bit quad-core 2.6 GHz.
FAS/V6240 uses 2 x 64-bit quad-core 2.53 GHz.
FAS/V6280 uses 2 x 64-bit hex-core 2.93 GHz.


Comparison
FAS6240/6280 vs. FAS6040/6080

Model    CPU                  Memory            IO    Max Spindles (FC/SAS)
FAS6040  4 cores @ 2.6 GHz    16 GB @ 333 MHz   12.5  840 FC
FAS6240  8 cores @ 2.53 GHz   48 GB @ 1066 MHz  18.5  1176 FC, 1440 SAS
FAS6080  8 cores @ 2.6 GHz    32 GB @ 333 MHz   12.5  840 FC
FAS6280  12 cores @ 2.93 GHz  96 GB @ 1066 MHz  18.5  1176 FC, 1440 SAS


COMPARISON
Similar

One in a box
NVRAM as a separate FRU

Different


RLM: replaced by SP
SP Console: accessed via the system console port (CTRL-g)
NVRAM: only in slot 2
I/O: more and more flexible
IOXM: splitting the I/O into two modules
More I/O: 19 PCIe card equivalents versus 14 equivalents in the same 6U-height chassis
Wrench port: replaces the RLM port
e0M and e0P: added
Locked wrench port: added
10-GbE ports: added
USB port: added
FC and SAS I/O board ports: added
Different chassis: different servicing model
LCD: eliminated
I/O expansion on separate module (IOXM): added
USB boot media: instead of CF
NVRAM8: instead of NVRAM6
NVRAM: no change in slots for HA versus standalone


Comparison
FAS6210 vs. FAS3170

Model    CPU                 Memory            IO   Max Spindles (FC/SAS)
FAS3170  4 cores @ 2.6 GHz   16 GB @ 667 MHz   5.5  840 FC
FAS6210  8 cores @ 2.26 GHz  24 GB @ 1066 MHz  8.5  1008 FC, 1200 SAS

COMPARISON
Similar

Two in a box
Horizontal PCIe slots
Onboard FC and GbE ports
Management Ethernet port
IB that is run over the midplane for HA

Different



RLM: replaced by SP
SP Console: accessed over the system console port (CTRL-g)
NVRAM: a separate FRU
10 GbE and I/O slots: added
High-line power (220V): required
More I/O: nine PCI card equivalents versus six equivalents in the same 3U-height tray
Locked wrench port: added
USB port: added
NVRAM8: in slot 2 versus NVRAM7 on Mobo
FC/SAS IO board: added
USB boot media: instead of CF


Extending Our Reach

Optimized for data management (Data ONTAP): shared infrastructure in the data center, serving headquarters, regional centers, remote offices, and franchises.

Optimized for performance (E-Series): departmental and vertical applications, including tech apps, vertical apps, analytics, web apps, FMV, and HPC.

EXTENDING OUR REACH


E-Series


NetApp Technology Overview

Data ONTAP 8.1
- Supports NAS (NFS, CIFS) and SAN (FC, FCoE, iSCSI) protocols
- Targets Enterprise IT and Cloud Infrastructure markets
- Meets robust data management requirements: Snapshot copies, near-zero space instant cloning, data protection, disaster recovery, unified management

E-Series
- Supports SAN and DAS (FC, SAS, InfiniBand, iSCSI) protocols
- Targets big bandwidth and big data markets
- Designed to be the highest performance with the best rack density
- Modular flexibility supports configurations customized for customer needs
- Bullet-proof reliability and availability designed to ensure continuous high-speed data delivery

NETAPP TECHNOLOGY OVERVIEW


E-Series Controller Models

E2600 and E5400
- Dual active controllers
- Support intermixed SAS and SSD drive types
- Support disk shelves for expansion with 12, 24, or 60 drives


E-SERIES CONTROLLER MODELS


There are three E5400 models, each of which has a unique form factor. The E5460 is a 4U, 60-drive system;
the E5424 is a 2U, 24-drive system; and the E5412 is a 2U 12-drive system.
Each system has dual controllers, supports a range of SAS drive types, as well as the ability to intermix the
different drive technologies.
With these three unique models, the E5400 provides a variety of starting points to best meet solution and/or
customer requirements.
The E5460 is a great fit for big data solutions in that it delivers the highest combination of performance and capacity. The E5460 delivers up to 6 gigabytes per second of sustained bandwidth and supports up to 180 terabytes of raw capacity. Additionally, the E5460 supports the widest range of drive technologies, from high-performance SSDs to high-capacity near-line SAS drives, making it a great fit for any environment.
For performance density, the E5424 delivers the highest bandwidth per U. With up to four gigabytes per second on reads and 2.5 gigabytes per second on writes, nothing packs more throughput into so little space. Its 2.5-inch drives deliver great performance per watt, and the E5424 meets the NEBS Level 3 and ETSI Telco specifications.
The E5412 is a great choice for smaller configurations and, like the E5424, meets the NEBS Level 3 and ETSI Telco specifications.
In many cases, these three models deliver the performance, density, and capacity required for building big data solutions. But when the situation calls for more capacity or performance, each system supports expansion through any of its three disk shelf options: the DE6600, DE5600, or DE1600.
Let's take a look at these now.


A New Platform for a New Market

NetApp's unified architecture and Data ONTAP operating system will continue to target Enterprise IT and Cloud Infrastructure markets
- Robust data management requirements

NetApp will use the E-Series platform to enter the emerging Big Bandwidth and Big Data markets
- Focused on pure performance and data protection
- Available only as part of an E-Series solution


A NEW PLATFORM FOR A NEW MARKET


Before we move forward, let's take a look at the high-level positioning for the two NetApp platforms.
ONTAP will continue to focus on the high feature content markets, such as Enterprise IT and Cloud infrastructures, where robust data management features are required.
The E-Series platform will be used to enter the emerging big bandwidth and big data markets where the focus is on pure performance and data protection. For these markets we will create solutions based on the E-Series platform. It's important to note that E-Series storage is only available directly from NetApp as part of a Big Data solution.
Big data means different things to different people, so before we move on let's put some framework around what we mean by big data. We actually see big data as three fairly unique opportunities. The first is analytics, which ranges from structured enterprise-class data warehousing solutions, such as Teradata, to a new generation of appliance-like devices coupled with open-source software to build scalable, cost-effective compute farms for data analysis.
The second big data opportunity is bandwidth. These environments, such as high-performance computing, rich media, video, and so forth, are generating enormous amounts of data and put unnatural stresses and strains on traditional storage systems.
The third market is around content, which is the age-old problem of having the rate of unstructured data growth greatly exceed the rate of scale in conventional systems.
So we see the whole ecosystem of big data in these three dimensions, which you'll see referred to as ABC for analytics, bandwidth, and content. And for these markets we'll use the E-Series platform to create solutions tailored for new verticals, which we'll look at now.


E-Series Solutions

- Hadoop: packaged, ready-to-deploy modular Hadoop cluster improves usable capacity and performance
- HPC, seismic processing: high-bandwidth, high-density platform stores large volumes of 2D, 3D, and 4D seismic data with scalable growth
- Full Motion Video: turnkey solution provides a single architecture for ingest, exploitation, and dissemination
- HPC, Lustre: massively parallel distributed file system for large-scale cluster computing
- Media Content Management: multi-petabyte capture and playback platform for rich media content creators
- StorageGRID: object-based storage with petabyte scale for distributed image, video, and records repositories

E-Series storage is only available as part of these solutions.


E-SERIES SOLUTIONS
We've identified six initial Big Data solutions for the E-Series platform. The first, which was announced back
in May, is a full motion video solution. The FMV solution combines Quantum StorNext software and E5400
storage to create a single architecture for ingest, exploitation and dissemination. The FMV solution can
deliver over 20 gigabytes per second of read and write throughput and over a petabyte of raw storage in a
single rack.
The other solutions, which will roll out over the coming months, include three more bandwidth solutions:
Media Content Management and two HPC solutions -- seismic processing and Lustre. The first analytic
solution released will be for Hadoop. And the initial content solution is StorageGRID.
These six solutions are the only way to purchase E-Series storage directly from NetApp. And for each of these
solutions, a custom-configured E-Series storage system is tested and integrated with 3rd party software to
create a turnkey solution designed to meet the specific requirements of that vertical. Additional training
courses, presentations and collateral are available for each of these solutions.
NOTE: It's important to note that this course covers the full feature set and capabilities of the E-Series
platform. Solutions built on the E-Series are architected to include the specific product attributes that best
meet the workload, capacity, and form factor requirements for that vertical. As a result, some of the features
and capabilities discussed in this course are not offered or relevant for a given E-Series solution. Please refer
to solution documentation and collateral for an understanding of the E-Series attributes offered as part of the
solution.


Exercise 5
Module 3: Case Study Overview
Time Estimate: 30 Minutes


EXERCISE 5
Please refer to your exercise guide.


Lesson 5
Hard-Drive Technology


LESSON 5: HARD-DRIVE TECHNOLOGY


Drive Types

SATA (serial advanced technology attachment)
- SATA interface
- 7.2K, 3.5 inch form-factor

SAS (serial-attached SCSI)
- SAS interface
- 10K, 2.5 inch form-factor
- 15K, 3.5 inch form-factor

FC (Fibre Channel)
- FC interface
- 10K and 15K, 3.5 inch form-factor

SSD (solid-state drive)
- Flash-based drive that uses a SATA interface
- 3.5 inch form-factor

DRIVE TYPES


SATA

SATA characteristics are:
- Enhanced parallel ATA
- Faster transfer speeds (more than 150 MBps)
- Thin cable connections (7-pin)

Primary SATA storage is:
- A storage hardware option for controllers
- Intended for primary applications
- Intended to match application storage requirements with solution costs

SATA
Primary SATA storage was introduced in May 2005. For the past couple of years, NetApp has used SATA
storage on FAS systems. SATA storage is intended for primary applications. SATA storage enables NetApp
to provide customized solutions.
The target markets for SATA are latency-insensitive primary applications. Latency considerations are very important. ATA drives are inexpensive and widely available, but they are slow. To maintain less than 20-ms latency, an ATA drive can provide approximately 40 IOPS. However, to maintain the same level of latency, a 15,000 RPM FC drive can provide approximately 200 IOPS.
You must carefully consider latency. You must ensure that, on installation, SATA drives are placed where latency is not relevant. For example, you might use SATA drives in home-directory environments and read-only data warehouses.
Where latency is critical, do not use SATA drives. Therefore, in most cases, you should not use SATA drives in production environments.


SATA Storage: Target Markets

Latency-insensitive primary applications
- Home directories
- Data warehouses

Instances in which primary applications do not require peak storage performance
- To determine suitability, analysis is required.

For situations for which SATA storage is appropriate:
- Target highly competitive deals
- Deny opportunities to competitors
- Craft finely tuned solutions


SATA STORAGE: TARGET MARKETS


SATA drives should be used only if primary applications do not require peak storage performance. To
determine whether the use of SATA drives is appropriate, analysis is mandatory!
If SATA storage is appropriate, recommend it. SATA storage provides a cost opportunity, because SATA
storage is cheaper than FC storage.
NOTE: If SATA drives are placed into a production environment that is beyond their capabilities, the drives
must be replaced, and the customer loses confidence in NetApp and in the people who recommended the use
of SATA drives. Before you recommend the use of SATA drives, analyze the sizing and performance
requirements of the situation.


SAS
SAS is the logical evolution of SCSI that
Satisfies the enterprise data-center
requirement for scalability, performance,
reliability, and manageability
Shares an electrical and physical interface
with SATA
Provides unprecedented choices for server
and storage-subsystem deployment
Source: SCSI Trade Association Organization


SAS
The term SAS refers to serial-attached SCSI drives. Basically, SAS drives and FC drives are the same,
but the SAS interface uses serial communication.


SAS Usage
When compared to SATA, SAS provides
these advantages:
Higher performance
Higher I/O per second
Faster response times

Higher I/O per second is required for small, random-read-intensive application workloads (typical of Microsoft Exchange and OLTP).


SAS USAGE
SAS drives and FC drives have identical performance profiles, but management and reliability considerations
make SAS drives the more attractive solution.
With SAS, the limit on the number of devices that can be connected is determined by bandwidth.
With FC, the maximum number of addressable devices is 128. Therefore, there can be only four shelves per
loop. With SAS, additional loops can be created, so there is no port burn (as there is on FC in very large
system environments).
Bandwidth over SAS can be better than bandwidth over FC. SAS drives are currently a little less expensive
than FC drives.
Few SAS storage devices are available, and no standalone storage systems have SAS drives. Sun is the only
NetApp competitor that offers a SAS-class drive.


Similarities Between SAS and FC Drives

Except for the drive interface, SAS drives and FC drives are mechanically the same:
- Same magnetic, mechanical, electronic, and microcode technologies
- Same rotational speeds
- Same reliability

                                  FC*                 SAS*
Rotational speed                  15,000 RPM          15,000 RPM
Average rotational latency        2.0 ms              2.0 ms
Seek time (average read/write)    3.5 or 4.0 ms       3.5 or 4.0 ms
Transfer rate (maximum)           125 MBps sustained  125 MBps sustained
Number of interface ports         2                   2

* For FC and SAS drive specifications: http://www.seagate.com/docs/pdf/datasheet/disc/ds_cheetah_15k_5.pdf
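The identical latency figures fall out of the physics: average rotational latency is the time for half a revolution at the drive's spindle speed. A quick check (the 7,200 RPM value is added here for comparison with SATA; it is not in the table above):

```python
# Average rotational latency is half a revolution:
# latency_ms = 0.5 revolution / (rpm / 60 s) * 1000.
def avg_rotational_latency_ms(rpm):
    return 0.5 * 60.0 / rpm * 1000.0

print(avg_rotational_latency_ms(15000))  # 2.0 ms, as in the table
print(avg_rotational_latency_ms(7200))   # ~4.17 ms for a 7,200 RPM SATA drive
```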


SIMILARITIES BETWEEN SAS AND FC DRIVES


SAS and FC drives spin at the same speeds. There are 10K and 15K SAS drives. The only difference between
SAS and FC drives is the interface.
SAS has matured as a drive option. Unlike FC drives, SAS drives have management traffic on one channel
and data traffic on another channel. Therefore, a loop initialization primitive (LIP) storm, which can easily
occur on FC drives, cannot occur on SAS drives.
If a storm occurs, it occurs on the management channel and does not affect data traffic. On SAS, every device
can be reset. On FC, device resets are quite disruptive, and the loop may or may not stay up.
In the FC-Arbitrated Loop (FC-AL) protocol, a device that enters the loop and attempts to initialize sends out
a LIP to request an address. All other activity on the loop stops, as each device re-establishes its connection
within the new configuration. A LIP storm occurs when all of the drives on an FC-AL loop (which may be a
large number) attempt to change or re-establish their names and numbers on the loop. Because SAS uses a
separate channel for drive management, LIP storms do not affect the data transmission channels.


Differences Between SAS and FC

FC-AL: There are 2 Gbps per loop (perhaps 4 Gbps).
SAS: There are 12 Gbps per aggregate with a 4x-wide port.

FC-AL: Drives are attached to other drives; drive isolation is challenging.
SAS: Drives are isolated from one another.

FC-AL: Connections between host bus adapters (HBAs) and drives are virtual, passing through all drives on the loop.
SAS: Direct, point-to-point HBA-to-drive connections go only through expanders; drives are completely isolated from one another.

FC-AL: Discovery, management, and I/O operations cannot coexist; LIP storms disrupt traffic.
SAS: Discovery, management, and I/O operations can coexist; BROADCAST storms are squelched by expanders.

FC-AL: One physical link connects HBAs to drives.
SAS: 4x-wide ports link HBAs to expanders; stuck links can be unstuck.

FC-AL: Addressability is limited to 126 arbitrated loop physical addresses (ALPAs).
SAS: Addressability is limited only by performance considerations (2^64 addresses).

DIFFERENCES BETWEEN SAS AND FC


Improving Disk Response Time

A customer wants 2,000 I/O per second at 20 ms. How many drives are needed?
- ATA drives at 5,400 RPM: about 75
- ATA drives at 7,200 RPM: about 50
- FC drives at 15,000 RPM: about 11

Which is cheapest? The FC drives at 15,000 RPM cost 30% more per drive but are the least expensive way to meet the requirement.


IMPROVING DISK RESPONSE TIME


If 2,000 IOPS at 20 ms latency is needed, about 75 ATA drives running at 5,400 RPM, or 50 ATA drives running at 7,200 RPM, or 11 FC drives running at 15,000 RPM are needed.
Although the FC drives running at 15,000 RPM cost about 30% more per drive, they are the least expensive
way to meet a high IOPS fixed-latency requirement.
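The drive counts follow from dividing the target IOPS by each drive type's latency-constrained per-drive IOPS. A quick sketch; the 40 and 200 IOPS figures are the approximate values quoted earlier in this module, while the ~27 IOPS figure for a 5,400 RPM drive is inferred from the slide's "about 75" answer rather than stated directly:

```python
import math

# Drives needed to hit a target IOPS while each drive stays within its
# latency budget (here, 20 ms).
def drives_needed(target_iops, iops_per_drive):
    return math.ceil(target_iops / iops_per_drive)

print(drives_needed(2000, 27))    # 75: ATA at 5,400 RPM (~27 IOPS inferred)
print(drives_needed(2000, 40))    # 50: ATA at 7,200 RPM
print(drives_needed(2000, 200))   # 10: FC at 15,000 RPM (slide allows ~11 for headroom)
```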


SSD Versus HDD

Advantages of SSD
- Significantly higher random-read performance compared to a hard-disk drive (HDD) at 15,000 RPM
- Low read latency
- Low cost per IOP
- No moving parts that can fail
- Very fast reconstruct times (<20 min)

Disadvantages of SSD
- Small capacity
- High cost per GB
- Use-based consumable lifetime

SSD VERSUS HDD


Reasons for Using RAID-DP Technology

System Reliability Event                                                             FC     SATA
Typical disk drive replacements (per year per 100 drives)                            13     25
Bit error likelihood (per spindle)                                                   0.2%   2.3%
Bit error likelihood, single parity (per reconstruction of an 8-drive RAID 4/5 set)  1.6%   18.4%
Bit error likelihood, dual parity (per reconstruction of an 8-drive RAID-DP set)     < 1 in a billion

To enable primary application reliability, RAID-DP technology is required:
- SATA drives are twice as likely to fail.
- Drive failures result in RAID reconstructions: twice as many SATA reconstructions.
- Assuming five reconstructions per year, the use of RAID 5 promises an almost 100% chance of data loss from bit error.
- RAID-DP technology eliminates the bit-error risk.


REASONS FOR USING RAID-DP TECHNOLOGY


What is the likelihood of two drives within a RAID group failing simultaneously? The answer depends on the
definition of failure.
If failure refers to the hardware failure of a drive, the likelihood of two drives failing simultaneously is very
small (tiny).
If failure refers to the following scenario, the likelihood significantly increases:
1. One failure occurs.
2. The system performs the reconstruction process.
3. During the reconstruction, a bit error occurs on a drive.
The system considers the bit error to be a second failure, and all data is lost.
When a bit error occurs, the drive is still usable (a good drive with probably nothing wrong). The drive can be
reformatted and reinitialized, but the data cannot be recovered. A RAID failure has occurred.
The likelihood that a bit error will occur on a SATA drive during reconstruction is approximately 18%. Thus,
one reconstruction in five is expected to fail. The frequency of occurrence is a function of (a) the size of the
disk drive and (b) the typical drive-level, error-correction capabilities.
Therefore, because RAID-DP technology eliminates the bit-error risk, it is recommended for all drives.
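The single-parity risk can also be estimated directly from the per-spindle likelihoods: a reconstruction of an 8-drive RAID 4 set must read the 7 surviving drives without hitting a bit error. A sketch of that calculation (the slide's 1.6% and 18.4% figures use a simpler spindle-count multiple, so the results differ slightly):

```python
# Probability that at least one bit error occurs while reading the 7
# surviving drives of an 8-drive single-parity set during reconstruction.
def reconstruct_failure_probability(per_spindle, surviving_drives=7):
    return 1.0 - (1.0 - per_spindle) ** surviving_drives

print(round(reconstruct_failure_probability(0.002), 4))  # FC: 0.0139 (~1.4%)
print(round(reconstruct_failure_probability(0.023), 4))  # SATA: 0.1503 (~15%)
```

Either way the conclusion is the same: roughly one SATA reconstruction in five to seven is expected to hit a bit error, which is why dual parity is recommended.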


When to Sell Which Drive

If a customer requires bytes, not performance, sell ATA drives: inexpensive only if there is no response-time requirement.

If the customer requires latency-constrained performance, sell SAS or FC drives at 15,000 RPM:
- High cost per byte
- Lowest cost per operation at a fixed latency

If the customer requires the lowest latency in the smallest space, sell SSD drives:
- Highest cost per byte
- Lowest latency and fewest drives


WHEN TO SELL WHICH DRIVE


If a customer requires bytes without performance, ATA drives are the best choice. They work well for archive
systems, some home-directory environments, and perhaps some read-only data warehouses.
ATA drives are inexpensive only if there is no response-time requirement. If latency-constrained performance
is required, FC drives at 15,000 RPM are recommended. These FC drives have a high cost per byte but the
lowest cost per operation at a fixed latency.


Exercise 6
Module 3:
WEB: Using the Field Portal

Time Estimate: 15 Minutes


EXERCISE 6
Please refer to your exercise guide.


Lesson 6
Disk-Shelf Technology


LESSON 6: DISK-SHELF TECHNOLOGY


NetApp DS4243: Reduce Data Footprint

- Dense, space-efficient design allows optimal use of data center resources: up to 72 TB of raw storage in 4U.
- NetApp deduplication for FAS minimizes the use of the system resources that are required for primary data, backup data, and archival data.
- NetApp Snapshot software enables data protection without performance impact and with minimal consumption of storage space.


NETAPP DS4243: REDUCE DATA FOOTPRINT


See Hardware Universe for the most current specifications.
Dense, space-efficient design enables optimal use of data-center resources.
Minimize the use of system resources. Primary data, backup data, and archival data can be deduplicated with
nominal impact on data-center operations.
NetApp Snapshot software (the original and most functional point-in-time copy technology) protects data
without performance impact and with minimal consumption of storage space.


DS4243: Flexible Storage for NetApp Unified Storage Architecture

The architecture of the DS4243, which is industry-standard and based on a Storage Bridge Bay (SBB), provides flexibility for future redeployments.
The DS4243 provides one disk shelf for all tiers of storage.
The IOM modules of the DS4243 define the connectivity personality of the disk shelf.
The DS4243 is available with SAS drives, which are performance-optimized, or with SATA drives, which are capacity-optimized.
The DS4243 is available in 12-drive and 24-drive configurations.


DS4243: FLEXIBLE STORAGE FOR NETAPP UNIFIED STORAGE ARCHITECTURE


Industry-standard architecture that is based on Storage Bridge Bay (SBB) leverages several storage
connectivity technologies to provide flexibility for future deployments.
NetApp Unified Storage Architecture enables customers to choose not only the right protocol, right storage
tier, and right performance but also the right price-point to address their changing business needs.

Mature, second-generation, point-to-point SAS-based architecture enables high resiliency and fault
isolation and recovery.
Frame-array-class resiliency, enabled by ACP, provides secure, out-of-band management communication
that is separate from the data path of disks.
Multiple redundant components are combined with a nondisruptive upgrade capability.
NetApp RAID-DP technology (low-overhead, high-performance RAID 6) provides greater data
protection and capacity utilization than RAID 5 and RAID 1+0 technologies provide.

SATA requires two power sources, and SAS requires four power sources.
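The capacity-utilization claim for RAID-DP above can be made concrete with simple arithmetic. The sketch below is illustrative only: the group sizes are hypothetical examples, not NetApp sizing recommendations. The point is that a 16-disk RAID-DP group matches the usable fraction of a typical RAID 5 group while surviving two failures, and far exceeds RAID 1+0.

```python
# Illustrative usable-capacity arithmetic behind the RAID-DP claim.
# Group sizes here are hypothetical examples, not sizing guidance.

def usable_fraction(data_disks, redundancy_disks):
    """Fraction of raw capacity left after parity/mirror overhead."""
    return data_disks / (data_disks + redundancy_disks)

raid_dp = usable_fraction(14, 2)   # 16-disk RAID-DP group: 2 parity disks
raid_5  = usable_fraction(7, 1)    # 8-disk RAID 5 group: 1 parity disk
raid_10 = usable_fraction(8, 8)    # RAID 1+0: every disk is mirrored

print(f"RAID-DP : {raid_dp:.1%} usable, survives any two disk failures")
print(f"RAID 5  : {raid_5:.1%} usable, survives one disk failure")
print(f"RAID 1+0: {raid_10:.1%} usable")
```

At equal parity overhead per data disk, RAID-DP's double-parity protection is the differentiator rather than raw capacity.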


Advantages Provided by the DS4243

Greater density: 30% denser storage capacity with 24 drives in 4U (versus 14 drives in 3U with the DS14)

High-resiliency features:
  ACP for out-of-band management
  Point-to-point SAS technology
  SBB (Storage Bridge Bay): mature, stable, industry-standard modules for storage controller connectivity

Greater bandwidth: up to 12-Gbps SAS (four wide SAS ports, each at 3 Gbps) versus 4-Gbps FC

Reduced power consumption:
  Greater than 10% reduction in the number of watts consumed per TB of storage
  Power efficiency greater than 80%


ADVANTAGES PROVIDED BY THE DS4243


The DS4243 delivers greater density: 30% denser storage capacity, with 24 drives in 4U (versus 14 drives in 3U with the DS14).
Greater Resiliency
ACP is an out-of-band management architecture that isolates management communication from the data path.
The use of out-of-band management for disk subsystems has historically been found only in high-end, frame-array storage systems. In an out-of-band management implementation, disk health is monitored using a
communications path that is separate from the data path. To provide an out-of-band implementation, the
DS4243 uses dedicated Ethernet ports for ACP.
With current FC-AL technologies, management communication and the data path often use the same wire.
Therefore, certain classes of errors can hang the connection between the disk subsystem and the storage
controller. Incorporating out-of-band management capability with the SAS architecture helps to circumvent
these types of error conditions. Point-to-point SAS technology isolates drive errors and prevents them from
bringing down an entire loop.
Greater Bandwidth
The DS4243 uses wide SAS ports. The ports enable four data-communication paths, each of 3-Gbps SAS
bandwidth. Together, the ports can accommodate up to 12-Gbps bandwidth (compared to the 4-Gbps
bandwidth that FC accommodates). Because few workloads push the bounds of the 4-Gbps bandwidth that the
DS14 provides, few workloads experience significant performance improvement from the SAS wide ports on
the DS4243. However, the ports provide investment protection. Future controller upgrades will be able to take
advantage of the additional bandwidth.
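The bandwidth figures above are simple to check: a wide SAS port is four lanes in parallel. A quick sketch of the arithmetic, using the raw signaling rates from the text and ignoring encoding overhead:

```python
# Aggregate bandwidth of a DS4243 wide SAS port versus 4-Gbps FC.
# Raw signaling rates from the text above; encoding overhead ignored.

SAS_LANES = 4        # a wide SAS port bundles four physical lanes
GBPS_PER_LANE = 3    # 3-Gbps SAS (IOM3)
FC_GBPS = 4          # 4-Gbps FC on the DS14

sas_total = SAS_LANES * GBPS_PER_LANE
print(f"wide SAS port: {sas_total} Gbps; FC: {FC_GBPS} Gbps; "
      f"ratio: {sas_total / FC_GBPS:.0f}x")
# wide SAS port: 12 Gbps; FC: 4 Gbps; ratio: 3x
```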
Reduced Power Consumption
Greater than 10% reduction in the number of watts consumed per TB of storage
Power supplies that offer power efficiency greater than 80%


Decision: DS4243 or DS14

Sell the DS4243 for:
  FAS/V6000 systems
  FAS/V3100 systems
  New sales of FAS2040 and FAS2050 systems
  New SA200, SA300, and SA600 systems

Sell the DS14 for:
  FAS/V6000, FAS3100, FAS3070, FAS3040, and FAS2050 systems when no PCIe slot is available
  FAS2020 configurations with external expansion
  MetroCluster capability
  DC-power systems
  Situations in which every DS14 must be at least five meters away from adjacent controllers or shelves

NOTE: Some DS14 EOA plans have been announced.
NOTE: Except for the FAS2040 system, it is assumed that a PCIe slot is available for a SAS HBA in the FAS/V controller.


DECISION: DS4243 OR DS14


When should customers consider the DS4243 for their installations?
Customers must ensure that a PCI-e I/O slot is available in the FAS/V controller for the SAS HBA.
Availability of the slot is required for connectivity to the DS4243.
By using a SAS HBA in the FAS/V controller, customers can connect to the DS4243 disk shelf with
FAS/V6030, FAS/V6040, FAS/V6070, FAS/V6080, FAS/V3170, FAS/V3160, FAS/V3140, FAS/V3070, and
FAS2050 systems. Customers can add DS4243 disk shelves to the systems that are installed in their
infrastructure, provided a PCI-e I/O slot is available in the FAS/V controller for the SAS HBA.
MetroCluster configurations are not supported with DS4243. A DS4243 with the IOM3 modules uses SAS
cables that are limited in distance to 5m. Because MetroCluster configurations need to support distances up to
100 km, they require FC connectivity. To address the distance limitation with SAS, NetApp will, in the
future, make available an FC-SAS bridge module for the DS4243. Meanwhile, customers should use the DS14
for their MetroCluster configurations.
Additionally, customers who require a DC-power solution must use DS14 configurations, at least until a
DC-powered DS4243 becomes available.


The DS2246 Disk Shelf

DS2246: Greater Density and Speed


Twenty-four 2.5-inch small form-factor drives in only 2U of rack space
6-Gbps SAS interconnect and backplane
10,000-RPM SAS disk drives with a size of 450 GB or 600 GB
30% to 50% lower power consumption than with a DS4243 shelf
Same availability and resiliency features as provided by the DS4243 shelf


THE DS2246 DISK SHELF


For an audience of IT directors and managers and technical contributors, these are the key points:

Small form-factor (2.5 inch) drives that make it possible to shrink a 24-drive shelf from 4U to 2U
Doubled SAS interconnect bandwidth (to 6 Gbps)
Same-size SAS drives as with the DS4243, with slower (10,000 RPM) rotation and equivalent initial
pricing
Approximately 20% lower IOPS per drive (OLTP workload) for 60% higher IOPS per rack unit
A shelf that is as dependable as the DS4243, the most dependable NetApp shelf ever

The 15,000 RPM drives that are available in the 2.5-inch SFF are one-fourth to one-half the size of the 10,000
RPM drives and cost twice as much per GB. For this reason, NetApp offers only the 10,000 RPM drives in
the SFF.
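The density claim in the notes above follows directly from the shelf geometry. A short sketch, using a hypothetical per-drive IOPS figure (the absolute number is invented for illustration; only the ratios matter):

```python
# Why ~20% lower per-drive IOPS still yields ~60% higher IOPS per rack
# unit: the DS2246 packs 24 drives into 2U instead of 4U.
# BASE_IOPS is a made-up illustrative figure; only the ratios matter.

BASE_IOPS = 100.0                                 # hypothetical 15k-RPM drive
ds4243_iops_per_u = 24 * BASE_IOPS / 4            # 24 drives in 4U
ds2246_iops_per_u = 24 * (0.8 * BASE_IOPS) / 2    # 24 drives in 2U, -20% each

gain = ds2246_iops_per_u / ds4243_iops_per_u - 1
print(f"IOPS per rack unit: +{gain:.0%}")  # IOPS per rack unit: +60%
```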


SSDs in a NetApp Disk Shelf

SSDs can provide consistently fast response times.
SSDs are supported in the highly reliable DS4243 disk shelf.
Twenty-four 100-GB SSDs can be used per shelf.
SSDs are available with high-performance NetApp FAS and V-Series storage controllers.


SSDS IN A NETAPP DISK SHELF


For an audience of IT directors and managers and technical contributors, these are the key points:

SSDs are best suited for random, read-intensive workloads that require consistently fast response times.
SSDs are available in the DS4243, which houses 24 drives in 3.5-inch form-factor carriers.
Each shelf provides approximately 2 TB of raw capacity.
For best results, this very fast media should be matched with a high-performance storage controller.
SSD requires four power sources.

Data ONTAP 8.0.1 or later is required. Supported systems: FAS and V-Series 3160, 3170, 3240, 3270, 6040,
6080, 6210, 6240, and 6280.
Auto-tiering software is not available. Use Flash Cache instead of SSDs when (a) the workload is random-read-intensive, (b) the hot data is unknown or dynamic, and (c) an administration-free approach is desired.


Storage to Meet a Variety of Needs


Storage Media
  High-capacity HDDs (SATA; 7,200 RPM; 3.5-inch; 1 TB or 2 TB)
  Small form-factor HDDs (SAS; 10,000 RPM; 2.5-inch SFF; 450 GB or 600 GB)
  Performance HDDs (SAS; 15,000 RPM; 3.5-inch; 300 GB, 450 GB, or 600 GB)
  Ultraperformance SSDs (SATA; 3.5-inch; 100 GB)

Disk Shelves
  DS4243: high capacity and performance; 4U height with twenty-four 3.5-inch drives
  DS2246: storage and performance density; 2U height with twenty-four 2.5-inch SFF drives


STORAGE TO MEET A VARIETY OF NEEDS


Key points in regard to storage media choices for SAS-connected NetApp disk shelves

3-81

NetApp offers several choices.


Two new media choices and a new disk shelf were first shipped in September 2010.


Storage Alternatives
SELECTION CRITERIA                            STORAGE MEDIA                                      DISK SHELF
Performance density; storage density          Small form-factor HDDs (SAS, 10k RPM, 2.5" SFF)    DS2246 (2U, 24 drives)
Best performance with hard disk drives        High-performance HDDs (SAS, 15k RPM, 3.5")         DS4243 (4U, 24 drives)
Lowest latency; max performance density       Solid-state drives (SSDs)                          DS4243 (4U, 24 drives)
Max capacity; cost per gigabyte;              High-capacity HDDs (SATA, 7.2k RPM, 3.5")          DS4243 (4U, 24 drives)
max storage density

This table appears in the new NetApp Disk Shelves and Storage Media datasheet.


STORAGE ALTERNATIVES
This table provides information that matches customer needs with media and disk-shelf options.


Comparison of NetApp Disk Shelves


SPECIFICATION                 DS2246                           DS4243                           DS14mk4 FC
Rack units                    2                                4                                3
Drives per shelf enclosure    24                               24                               14
Drives per rack unit          12                               6                                ~4.7
High-capacity disks           NA                               SATA; 7,200 RPM;                 SATA; 7,200 RPM;
                                                               1 TB and 2 TB                    1 TB and 2 TB
Performance disks             2.5-inch SFF SAS; 10,000 RPM;    3.5-inch SAS; 15,000 RPM;        3.5-inch FC; 15,000 RPM;
                              450 GB and 600 GB                300 GB, 450 GB, or 600 GB        300 GB, 450 GB, or 600 GB
SSDs                          NA                               100 GB                           NA
Multipath high availability   Yes                              Yes                              Yes
Out-of-band management        Yes                              Yes                              No
MetroCluster support          No                               No                               Yes
DC power option               No                               No                               Yes

This table appears in the Compare tab of the Disk Shelves and Storage Media page on NetApp.com.

COMPARISON OF NETAPP DISK SHELVES


The DS14mk2 AT is discontinued.
The DS14 shelf configuration can be obtained only when ordering MetroCluster.
Shelves earlier than the DS14mk4 FC do not work with Data ONTAP 8.1.


Lesson 7
Back-End Connectivity


LESSON 7: BACK-END CONNECTIVITY


IOM Modules

The industry-standard SBB-based architecture provides flexibility for future deployments and a mature, stable connectivity architecture for disk enclosures.
IOM modules define the connectivity of the disk shelf.
IOM is the SAS equivalent of the AT-FCx and ESH4 modules in DS14 disk shelves.
Dual redundant IOMs, which are standard per shelf, provide resilient multipath high availability (MPHA) connectivity.
Each IOM contains two ACP ports and two SAS ports.
IOM3 provides 3-Gbps SAS connectivity on the DS4243.
IOM6 provides 6-Gbps SAS connectivity on the DS2246.
IOM3 and IOM6 are not interchangeable.

IOM MODULES
IOM is defined as input/output module.
The number 3 refers to 3-Gbps SAS.
Each IOM3 contains two ACP and two SAS ports.
Dual redundant IOMs, which are standard for the DS4243, provide resilient multipath high availability
(MPHA).


Lesson 8
Flash Cache


LESSON 8: FLASH CACHE


NetApp Flash Cache: Advantages


Optimize Performance and Reduce Cost
  Improve average latency for random reads.
  Increase I/O throughput of disk-bound storage systems without adding disk drives.
  Reduce cost by using fewer, larger disk drives.
  Effectively service file services, databases, messaging, and virtual infrastructure.
  Predict your results before buying for an existing storage system.


NETAPP FLASH CACHE: ADVANTAGES


Flash Cache modules are PCIe cards that provide enterprise-class, single-level cell (SLC) flash memory and
custom memory-management units. The cards fit into the expansion slots of a storage controller.
NetApp Flash Cache modules are intelligent read caches that contain user data and NetApp metadata. The
word intelligent is used because what is cached is determined by which of three modes of operation is
selected. For detailed information about operation modes, see the Technical FAQ.
Active data flows automatically into the cache and all storage behind the controller is subject to caching.
When the disk subsystem (not the CPU) is the obstacle, the traditional way to increase I/O throughput is to
add disks. If additional capacity is not needed, the addition of disks wastes storage. With caching, the storage
system's I/O throughput is increased without the addition of disks.
This caching approach is effective for workloads that are random in character and read-intensive. Examples
include file services, OLTP databases, messaging, and virtual infrastructure.
With NetApp Flash Cache modules, results can be predicted. A Data ONTAP 7.3 and later feature called
Predictive Cache Statistics simulates the presence of a cache under the workload. The feature can predict
whether adding cache would be helpful and how much cache should be added.


Flash Cache: the Optimum Configuration


How to Increase Performance and Decrease Cost

Configure only with FC or SAS disks:
  Additional disk drives provide IOPS.
  Inefficient use of storage capacity, power, and space.

Configure with FC or SAS disks and Flash Cache:
  Disks provide capacity and IOPS.
  Flash Cache provides IOPS and reduces latency.
  Storage, power, and space costs are reduced.

Configure with SATA disks and Flash Cache:
  More storage capacity.
  An IOPS boost for SATA drives.
  Cost savings for storage, power, and space.


FLASH CACHE: THE OPTIMUM CONFIGURATION


HOW TO INCREASE PERFORMANCE AND DECREASE COST

For customers who need new storage systems, position Flash Cache against disks. Flash Cache provides
multiple ways to increase performance and decrease cost.
Many storage systems are configured with a large number of high-performance disks to provide adequate read
I/O throughput. As a result, storage capacity, power, and rack space are wasted.
With Flash Cache in the configuration, disks provide the capacity and some of the I/O throughput (IOPs).
Flash Cache provides additional IOPs and faster response times. Eliminating unneeded disks can reduce the
purchase price of a system and provide ongoing power and rack-space savings.
Flash Cache can be combined with SATA drives to maximize capacity, minimize the number of disks, and
obtain good performance.


Synergy of Flash Cache and Deduplication


Reducing the Duration of Boot Storms in a Virtual Infrastructure

[Diagram: virtual machines VM 1 through VM n read during a boot storm from a deduplicated volume of VM boot images on a storage controller with Flash Cache.]

After a master block is cached, all virtual block duplicates are read at cache speed.


SYNERGY OF FLASH CACHE AND DEDUPLICATION


REDUCING THE DURATION OF BOOT STORMS IN A VIRTUAL INFRASTRUCTURE

With the unique ability of NetApp to combine deduplication of primary storage with intelligent caching, the
duration of boot storms within a virtual infrastructure can be reduced.
NOTE: The same effect is realized when FlexClone software is used with Flash Cache.
Flash Cache is deduplication-aware. That is, Flash Cache caches a deduplicated block only once and satisfies
read requests for all corresponding virtual blocks from the cache at least 10 times faster than going to disk.
A NetApp partner, Corporate Technologies Inc., published test results showing that, when NetApp
deduplication and intelligent caching were used together, the duration of a boot storm was reduced from 15
minutes to 4 minutes. The original Performance Acceleration Module, precursor to Flash Cache, was used in
this test. Here is a link to a blog with the test results:
http://ctistrategy.com/2009/11/01/vmware-boot-storm-netapp/
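The dedupe-aware caching effect described above can be sketched as a toy model. The VM and block counts below are invented for illustration; the point is only that one cached physical "master" block satisfies reads for every logical duplicate:

```python
# Toy model of dedupe-aware caching: many VM boot images share the same
# physical blocks, so caching one master block serves every duplicate.
# VM and block counts are illustrative only.

class DedupeAwareCache:
    def __init__(self):
        self.cache = set()       # physical blocks currently cached
        self.disk_reads = 0
        self.cache_hits = 0

    def read(self, physical_block):
        if physical_block in self.cache:
            self.cache_hits += 1
        else:
            self.disk_reads += 1
            self.cache.add(physical_block)

# 100 VMs booting from clones of one deduplicated 1,000-block image:
# every VM's logical block N maps to the same physical block N.
cache = DedupeAwareCache()
for vm in range(100):
    for block in range(1000):
        cache.read(block)

print(f"disk reads: {cache.disk_reads}, cache hits: {cache.cache_hits}")
# disk reads: 1000, cache hits: 99000
```

Only the first VM's reads go to disk; the other 99 VMs are served entirely from the cache, which is the boot-storm reduction the partner test measured.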


Setting Expectations
Effective with Random-Read Workloads
Databases
File services
VMware, Hyper-V, and Citrix
Microsoft Exchange and SharePoint
Engineering and software development


SETTING EXPECTATIONS
Set realistic expectations as to when Flash Cache will help and when it will not help.
Using Flash Cache helps with many workloads but not with all workloads. Read caching is most effective for
small-block, random read-intensive workloads.
Flash Cache is not significantly helpful for sequential or write-intensive workloads or for CPU-based
problems.


Lesson 9
Performance


LESSON 9: PERFORMANCE


Performance Topics
Management of fragmentation
Memory of NetApp systems
CIFS performance
Comparison: iSCSI versus FC


PERFORMANCE TOPICS


Fragmentation Management: Reallocation


Available in Data ONTAP 7.0 and later, running in the background at non-busy times
Useful for:
  Improving spatial locality of files and LUNs
  Solving sequential read-performance problems
Requiring these cautions:
  Reallocation rewrites files.
  You cannot move data that is locked into Snapshot copies.
  If Snapshot copies are present, sufficient free space is required.
  Rewritten data is changed data, and SnapMirror software moves the changed blocks.

FRAGMENTATION MANAGEMENT: REALLOCATION


In regard to fragmentation, NetApp is often criticized. However, all random-access storage media fragment.
NetApp can argue that, because of the WAFL file system, NetApp's fragmentation problem is smaller than
most vendors' fragmentation problems. The NetApp system determines where blocks are written and lays out
blocks in the most efficient way. WAFL lays out complete stripes much more often than other vendors'
systems do.
Certainly, fragmentation still happens, because the systems of both NetApp and NetApp's competitors delete
stripes from and open holes in stripes. However, competitors create stripes that contain holes. So, NetApp's
issue and competitors' issues are very different.
To fix the fragmentation that is bound to occur, the NetApp system enables the reallocation of blocks:
rewriting data and arranging it in clean stripes on the storage system. Because the stripes are laid out
sequentially, sequential read-performance problems are reduced.
NOTE: Reallocation works by rewriting files; it cannot move data that is locked in Snapshot copies.
Therefore, rewrites of blocks for a file in a Snapshot copy look like a delta. They require more storage space
and increase the size of the delta on SnapVault or SnapMirror relationships.
NetApp recommends against running a full reallocation. Because the system assumes that all data is new, a
100% delta is created in the next Snapshot copy. In essence, the process creates a new baseline for the
replication relationship, and thus creates a need for 100% more storage space. A full reallocation should be
considered only for volumes that are less than 50% full.
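The less-than-50%-full guidance is simple block arithmetic: a Snapshot copy pins every original block in place, and a full reallocation writes each rewritten block to a new location. A minimal sketch, with block counts invented for illustration:

```python
# Why a full reallocation needs the volume to be under ~50% full when a
# Snapshot copy exists: blocks locked in the Snapshot copy cannot move,
# so every rewritten block consumes a new block while the old one stays
# pinned. Block counts are illustrative only.

def blocks_after_reallocation(used_blocks, fraction_rewritten):
    """Blocks consumed after rewriting part of the data while one
    Snapshot copy pins all the original blocks."""
    rewritten = int(used_blocks * fraction_rewritten)
    return used_blocks + rewritten   # pinned old copies + new locations

volume_blocks = 1000
used_blocks = 600                    # volume is 60% full

needed = blocks_after_reallocation(used_blocks, 1.0)
print(needed)                        # 1200
print(needed > volume_blocks)        # True: the full reallocation cannot fit
```

At exactly 50% full (500 of 1,000 blocks), the rewrite just fits, which is where the guidance comes from.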


Reallocation: How, When, and What


Full Reallocation, Defragmented

[Diagram: after a full reallocation, data blocks are rewritten into clean, contiguous stripes.]

REALLOCATION: HOW, WHEN, AND WHAT


Typically, a user tells the system what to reallocate and when to reallocate. A low-priority process runs in the
background when the system isn't busy, and if the system becomes busy, the process stops. In this case, small
amounts of data are reallocated every day.
If small amounts of data are reallocated each day, the system creates small deltas, never creating a large
delta. If reallocation is turned on at the beginning of the creation of a FlexVol volume, the reallocation
process can be controlled, never creating a big delta in a Snapshot copy.
No other RAID reallocation tool performs like NetApp's reallocation process. In most competitors'
environments, to lay out data in a clean form, a migration must be performed. NetApp reallocation can be
performed live, with little effect on system performance.
Another example of when reallocation is useful is the addition of disks. Assume that an aggregate contains 16
disks and 1 RAID group and that you decide to add a RAID group of 16 disks. Immediately after the addition,
data resides on only the original 16 disks (thus on half of the disks). To spread the data across all 32 disks,
you must reallocate. If you do not reallocate, new writes span all 32 drives, but the original data resides on
only the original 16 disks.
In the future, NetApp intends to enable the reallocation process to move Snapshot blocks. Moving Snapshot
blocks is a complex process. For example, a block held in a Snapshot copy might have 25 pointers to it, each
pointer located on a different inode map. Therefore, the move process requires not only the physical movement
of the block but also a cascade of inode updates. This intensive operation will probably be available as an
option.
Windows hole punching is similar to reallocation. An issue in regard to LUNs is the inability to identify when
a deletion has occurred. The SCSI command set does not include a delete command. A delete is a code
statement that says "this inode is free." But such blocks can be identified only through NTFS.


With the hole-punching feature, blocks that are being freed are identified. Once identified, the WAFL system
can be used to free the blocks. As NetApp engineers gain more understanding of the file systems that are
contained within a LUN, NetApp will offer more features and improve upon its current features. For example,
the reallocation process will be more successful now that the blocks that are used within a LUN are
distinguished from the blocks that are not used within the LUN.


Memory Considerations
More memory on a host helps in these situations:
  A few large host systems
  Large data sets that are not shared among hosts
  Application data footprints that are very wide

More memory on a controller helps in these situations:
  Many hosts, with shared data
  Large metadata needs (such as the presence of many big, deep directories)
  Certain applications with wide data footprints (such as the Exchange application and large databases)


MEMORY CONSIDERATIONS
More memory on the host is almost always helpful.
More memory on the controller helps hosts to connect, helps with large metadata needs, and helps
applications with wide data footprints.


Relationship Between Memory and Disks


For some configurations, performance trade-offs are required:
  With more memory, fewer disks may be needed; with less memory, more disks may be needed.
  Slower disks need more memory; faster disks need less memory.

When considering the memory-disk question, you should:
  Consider the customer's applications and architecture
  Understand the customer's environment
  Use NetApp whitepapers and sizing tools

RELATIONSHIP BETWEEN MEMORY AND DISKS


For some configurations, trade-offs between the amount of memory and the number of disks are required.
In a highly competitive deal, you may be tempted to tweak the performance setup. However, we recommend
that you do not do so. In most cases, you should focus on high disk performance and not try to balance
memory and disks against each other.
For FlexVol considerations, we are pushing toward flexible volumes and aggregates. In almost all
environments, performance of all of the disks and creation of a big I/O pool are huge wins. There are some
small trade-offs. For example, FlexVol volumes have additional metadata needs, so they have additional
memory needs. But those needs are quite small compared to the gain that is achieved by creating large pools
of IOPS, the flexibility that is gained by cloning features, and so on.


CIFS Performance
CIFS is not a high-performance protocol. Each connection has a low performance demand, but tens of thousands of CIFS users produce a high-performance load.
Consolidation of CIFS users requires that you:
  Consider and size carefully
  Use the home directory sizing guide
  Use the Custom Application Sizing tool
  When possible, correlate CIFS usage with statistics collection (very powerful)
You must be aware of:
  Anti-virus needs
  Advanced CIFS features (for example, SMB signing, quotas, and oplocks)
With consolidation, CIFS environments are expected to be high performance, so the environments require careful attention.


CIFS PERFORMANCE
CIFS is not a high-performance protocol, but tens of thousands of CIFS users create a high-performance load.
Consolidation of CIFS users provides a great opportunity. Administrators appreciate the benefits. And, they
expect consolidation to produce fantastic performance.
Be aware of and careful about anti-virus needs and advanced CIFS features. One such CIFS feature is SMB
server signing. This feature was introduced in a Windows 2000 service pack and is included in Windows
Server 2003. If two systems are enabled for SMB server signing (enabled by default on Microsoft systems),
the signing occurs automatically. An MD5 signature is added to every packet that is transmitted between the
two systems. The addition of the MD5 signatures adds a huge load to the CPU.
NetApp supports SMB server signing. However, if it is turned on, the CPU will probably peg and
performance will decrease. At this time, there is little demand for SMB signing. If customers begin to demand
SMB signing, we will qualify an MD5 offload card and offload all of the calculations to a daughterboard to
reduce the impact on the CPU.
Be aware of the impact on performance. Make sure that customers understand what CIFS is doing, as MD5
signatures are complicated and can impair performance on the client side. MD5 signing is not only a server feature.
Other features to be aware of are quotas and oplocks.
Consolidated CIFS environments are expected to be high performance, so you should ensure that very large
CIFS environments are sized appropriately.


iSCSI Versus FC
The choice is often a business, political, or philosophical choice, not a technical choice.
When technical factors are considered:
  Customers who have FC tend to prefer FC.
  Customers who don't have FC tend to prefer iSCSI.
These factors affect performance:
  iSCSI software is easy and cheap.
  iSCSI uses more CPU (on host and storage), which is often not an issue.
  iSCSI hardware has a typical NIC cost; CPU consumption is less than with software.
  In regard to bandwidth, an FC wire is typically two times an Ethernet wire; however, this fact is rarely an issue (just use multiple wires).
For most cases, iSCSI performance is similar to FC performance, so performance considerations rarely determine the choice.

NetApp Confidential

95
ISCSI VERSUS FC
The choice between iSCSI and FC is often a question of business, politics, or philosophy, rarely a technical
question. NetApp systems work effectively and efficiently with both iSCSI and FC, so whatever the customer
prefers is the right choice.
Typically, customers who already have FC choose FC, because they want to leverage the environment that
they have. Typically, customers who do not have FC choose iSCSI, because starting a new fabric requires a
large investment.
The only performance caveats concern software initiators, which place more load on the CPU.
Bandwidth aggregation makes iSCSI relatively competitive with FC. For most cases, iSCSI performance is
similar to FC performance.
Exercise 7
Module 3: Case Study
Student Activity 1
Time Estimate: 30 Minutes

EXERCISE 7
Please refer to your exercise guide.
Lesson 10
V-Series Systems
LESSON 10: V-SERIES SYSTEMS
V-Series Logical Topology
[Diagram: FC and iSCSI LUNs are provisioned from FlexVol volumes on the V-Series front end. The aggregate is built from disk RAID groups, each of which maps to a storage-array LUN on the storage-array back end.]
V-SERIES LOGICAL TOPOLOGY
This diagram illustrates the concept of back-end and front-end LUNs. Basically, the storage-array LUNs that
use the array vendor's RAID grouping are moved into a Data ONTAP aggregate.
After an aggregate is established, provisioning is performed as it is within any NetApp storage system, and
front-end LUNs, either iSCSI or FC, are then established within the FlexVol environment.
The underlying storage system presents LUNs to the V-Series system. Then the LUNs are used as if they
were disks. The options are to configure the system so that it presents one massive LUN or to obtain multiple
LUNs and stripe RAID 0 across them from within the V-Series system. RAID 0 is the only RAID type that you
see on live NetApp systems in V-Series configurations.
The Traditional Heterogeneous Storage Model
[Diagram: Vendors A, B, and C each provide separate enterprise and departmental SAN and NAS environments over FC, iSCSI, and Ethernet LAN.]
Each vendor's environment has a unique management interface and data-management suite.
THE TRADITIONAL HETEROGENEOUS STORAGE MODEL
In a traditional environment, it is very common to require that different architectures be deployed from
different vendors, depending on the size of the environment and on the protocols that are involved.
This requirement produces management complexity, and complexity increases cost. NetApp competitors offer
no single solution for managing these various arrays and configurations.
The V-Series system, running the Data ONTAP multiprotocol architecture, addresses the management
problem that is created by having islands of storage configuration. If a customer is running the environment
that is depicted in the illustration, NetApp's approach is appealing, because it provides
centralized management configurations that enable the presentation of a consolidated view.
Designed to Work in Demanding Heterogeneous Environments
[Diagram: Enterprise and departmental SAN (FC, iSCSI) and NAS (CIFS, NFS) clients connect through NetApp V-Series systems to heterogeneous back-end storage, including DMX, pooled into aggregates, volumes, and disk RAID groups.]
- Single, simple management interface for heterogeneous storage
- Ability to scale out performance and capacity
- Multiprotocol controller consolidates NAS and SAN storage: CIFS and NFS; iSCSI and FC
- The controller virtualizes and pools heterogeneous storage.
DESIGNED TO WORK IN DEMANDING HETEROGENEOUS ENVIRONMENTS
V-Series systems are designed to be interoperable in the most demanding heterogeneous storage
environments and to deliver features that traditional SAN solutions cannot deliver.
How a V-Series system works:
- Is both hardware and software
- Connects to heterogeneous storage arrays
- Pools storage
- Is multiprotocol (FC, iSCSI, and FCoE)
- Provides for centralized management
- Scales out for DMX only today
V-Series Systems: Multiprotocol Storage
- Use of a multiprotocol controller that enables environments with heterogeneous arrays to run the Data ONTAP operating system
- Ability to consolidate file and block workloads into one system: NFS and CIFS; iSCSI and FCP
- Support for IBM, HDS, HP, EMC, Fujitsu, and 3PAR storage arrays
- Use of one simple management interface for heterogeneous storage
- Ability to adapt dynamically to changing performance and capacity needs
http://eng-web.nane.netapp.com/projects/V-Series/V-Series_Sales_Edge/index.html
V-SERIES SYSTEMS: MULTIPROTOCOL STORAGE
V-Series systems provide the only heterogeneous storage solution that unifies NAS, SAN, and IP SAN under
one storage architecture. Instead of using the diverse architectures that were previously required to manage
heterogeneous environments, NetApp's customers use one management interface and one set of software to
run everything. NetApp thus provides its customers a much simpler, much more cost-effective
solution.
http://eng-web.nane.netapp.com/projects/V-Series/V-Series_Sales_Edge/index.html
Exercise 8
Module 3: WEB: Using the Performance Sizing Tool
Time Estimate: 30 Minutes

EXERCISE 8
Please refer to your exercise guide.
Lesson 11
Resources
LESSON 11: RESOURCES
Resources
- NetApp public Web site for software information: http://www.netapp.com/us/products/managementsoftware/
- NetApp Support site and Partner Center: http://support.netapp.com/
- NetApp Field Portal: https://fieldportal.netapp.com/viewcontent.asp?
RESOURCES
Locations Useful to SEs
- Storage System Site Preparation Guide: http://support.netapp.com/search?q=Site+preparation+Guide&access=a&output=xml_no_dtd&site=gs&client=gs&proxystylesheet=gs&getfields=*&filter=p
- System Configuration Guide: http://support.netapp.com/knowledge/docs/hardware/NetApp/syscfg/index.shtml
- Documentation Page: http://now.netapp.com/NOW/knowledge/docs/docs.shtml
- Hardware Universe: https://communities.netapp.com/community/netapp_partners_network/netapp_tools/hardware_universe
- Parts Finder: http://rmasrv/parts/
- Software downloads: http://support.netapp.com/portal/download
LOCATIONS USEFUL TO SES
Module Summary
Now that you have completed this module, you should be able to:
- Describe NetApp enterprise hardware:
  - FAS systems
  - V-Series systems
  - Storage acceleration appliance, FlexCache storage device, and high-availability (HA) devices
  - StorageGRID object-based solution
- Identify the available drive types:
  - Fibre Channel (FC)
  - SAS
  - SATA
  - Solid-state disk (SSD)
- Identify available resources
MODULE SUMMARY
Module 4
Storage-Efficiency Strategy
MODULE 4: STORAGE-EFFICIENCY STRATEGY
Module Overview
This module focuses on the following topics:
Information technology costs and spending
The NetApp advantage
Storage-efficiency solutions
Areas in which storage efficiency reduces IT
spending
MODULE OVERVIEW
Module Objectives
After this module, you should be able to:
Identify the major cost and spending
components that NetApp customers face in IT
Define and describe the NetApp advantage
List the key software solutions in the NetApp
storage-efficiency strategy
List the key savings areas in customer
environments
MODULE OBJECTIVES
Ask Your Customers What They Would Do If They Could Do the Following
- Use 50% less storage
- Cut IT expenses by half
- Avoid building a new data center
- Reduce data-center power and cooling loads
- Speed IT response to business needs
- Accelerate time to market
- Deliver IT as a service
ASK YOUR CUSTOMERS WHAT THEY WOULD DO IF THEY COULD DO THE FOLLOWING
Here is a script that you can use for presenting the advantages of NetApp solutions to customers:
I'd like to ask you a few questions to help me understand your business drivers. Perhaps some of these will
resonate with the challenges that you face. What if you could buy 50% less storage in your virtualized
environment by using time-proven, industry-leading storage-efficiency technologies and best-practice
implementation? We guarantee that you'll need 50% less. Moreover, we guarantee that you'll need 35% less
storage even if you continue to use your existing storage assets.
What if you could cut total IT spending in half? We did this for Sensis, an Australian provider of online
information services, with an IP storage network and creation of best practices for storage administration.
What if you could continue to grow but avoid having to build a new data center? We did this for Thomson
Reuters (in fact, we helped Thomson Reuters to defer investing in three new data centers) and for British
Telecom (BT). We also did it for ourselves. Are you interested in delivering power savings to your business that
will help you to fulfill new environmental responsibility objectives and meet emerging data-center regulatory
pressures?
We can help you to speed up time-consuming processes, like provisioning and backups, that inhibit agility
and expose you to risk, while delivering storage efficiency that will help you to provision and back up
according to an extremely competitive business model.
Do you plan to deliver IT as a service, either through your enterprise cloud or by outsourcing? We can help
you with either approach. NetApp provides storage and data management for the leading as-a-service
providers in the market today. Providers of storage-as-a-service, software-as-a-service, infrastructure-as-a-service, and platform-as-a-service choose NetApp to support their market offerings.
They choose NetApp because of the flexible, unified architecture and the broad data-protection, retention, and
business-continuity solutions that, when combined with storage efficiency, allow them to offer the most
complete and responsive services in a compelling pricing model. Services from Oracle, SAP, Facebook,
Navitaire, T-Systems, Siemens, BT, Iron Mountain, and the world's most popular online music service all
have NetApp at the heart of their offerings.
What are your priorities? Dialog with customers about their businesses, challenges, and goals. These are the
challenges and opportunities with which we help customers around the world.
Software Efficiencies
- RAID 6 protection (RAID-DP technology) helps to protect against double-disk failure with little performance penalty. (Save up to 46%)
- Thin provisioning (FlexVol technology) creates flexible volumes that appear to be a certain size but draw from a much smaller pool. (Save up to 33%)
- Thin replication (SnapVault and SnapMirror software) makes data copies for disaster recovery and backup and uses a minimal amount of space. (Save up to 95%)
- Snapshot copies are point-in-time copies that write only changed blocks, with minimal performance penalty. (Save over 80%)
- Virtual copies (FlexClone volumes) are near-zero-space, instant, virtual copies. Only subsequent changes in the cloned dataset get stored. (Save over 80%)
- Deduplication removes data redundancies in primary and secondary storage. (Save up to 95%)
- Data compression reduces the footprint of primary and secondary storage. (Save up to 87%)
SOFTWARE EFFICIENCIES
- RAID 6 (RAID-DP technology) protects against double-disk failure without sacrificing performance or adding disk-mirroring overhead.
- Thin provisioning (FlexVol technology) keeps a common pool of storage readily available to all applications.
- Thin replication (SnapVault and SnapMirror software) enables block-level, incremental data backup and replication for significant storage and bandwidth savings.
- Snapshot copies provide instant, point-in-time data copies with minimal storage space.
- Virtual copies (FlexClone volumes) use virtual cloning to create on-demand, space-efficient virtual clones of volumes, LUNs, and individual files.
- Deduplication across applications and protocols identifies, validates, and removes redundant data blocks from volumes for up to 95% disk savings.
- Data compression is performed inline and immediately reduces the amount of stored data.
Hardware Efficiencies: Using High-Density Disk Drives
High-performance storage utilizes:
- SATA high-density disk drives
- RAID-DP technology
- Flash Cache
[Diagram: a RAID-DP group with 11 data disks (D) and 2 parity disks (P)]
The net effect is:
- Six times higher density per watt
- Three to seven times higher capacity per rack (for example, 144-GB FC versus 1-TB SATA)
- An increase in both storage efficiency and performance
HARDWARE EFFICIENCIES: USING HIGH-DENSITY DISK DRIVES
Explain the use of large-capacity SATA drives in enterprise applications.
- Flash Cache is the current brand name for PAM II, the next-generation card that replaces the original Performance Acceleration Module (PAM).
- The use of 1-TB SATA drives instead of 144-GB FC drives results in seven times more capacity.
- Because SATA drives store much more data per disk, resiliency is important. RAID-DP technology provides this resiliency but without the capacity overhead of disk mirroring.
- The new PAM can effectively increase the read performance of SATA drives, which allows you to use SATA drives in more applications.
- The combination of SATA drives, RAID-DP technology, and PAM radically changes what constitutes high-performance storage.
FlexClone Virtual Clones
[Diagram: a 6-TB production database (gold copy) is cloned for test and development storage. With FlexClone software: 8 TB of storage for 1 copy and 4 clones. Without FlexClone software: 30 TB of storage for 5 full copies.]
FLEXCLONE VIRTUAL CLONES
This figure shows the effect of NetApp FlexClone technology on storage efficiency. You can:
- Create a virtual clone copy of the primary dataset
- Choose to store only data changes between parent volume and clone
- Quickly create copies of production data to test product lifecycle management (PLM) software upgrades before deployment
- Quickly create sandbox environments for test and QA
FlexClone copies are invaluable in testing and development environments. Instead of provisioning a large
amount of storage capacity to perform application testing, the production application data is shared with the
test data, which results in extreme efficiency.
Network Boot with NetApp FlexClone Software
- LUN cloning reduces the required capacity for duplicate boot images.
- The master boot image is replicated with space-efficient copies.
- Clones are created in seconds or minutes.
- The benefits are reduced capacity requirements and rapid server deployment.
[Diagram: servers with standard 1-Gb/10-GbE NICs boot through an Ethernet switch from cloned boot LUNs that are derived from a master boot LUN.]
NETWORK BOOT WITH NETAPP FLEXCLONE SOFTWARE
FlexVol Thin Provisioning
[Diagram: FlexVol volumes totaling 2 TB (200 GB, 1 TB, 50 GB, 150 GB, 100 GB, 300 GB, and 200 GB) provisioned on 1 TB of physical storage.]
- Over 90% of NetApp systems utilize thin provisioning.
- Thin provisioning:
  - Enables users to create flexible volumes that virtually allocate storage with a fraction of the physical space
  - Streamlines capacity provisioning
- The average increase in utilization is 33%, and utilization often increases over 100%.
FLEXVOL THIN PROVISIONING
Explain the effect of NetApp thin provisioning on storage efficiency.
NetApp thin provisioning allows users to oversubscribe their data volumes, which results in high-utilization
models. Think of thin provisioning as just-in-time storage.
It is not uncommon for users to report 100% or greater raw-versus-usable storage utilization based on thin
provisioning alone.
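The oversubscription arithmetic from the slide can be checked directly. The volume sizes below are the hypothetical figures from the slide diagram: flexible volumes that together advertise 2 TB while the underlying aggregate holds only 1 TB.

```python
# Hypothetical figures matching the slide: seven flexible volumes that
# together advertise 2 TB on an aggregate with 1 TB of physical storage.
volume_sizes_gb = [200, 1000, 50, 150, 100, 300, 200]

provisioned_gb = sum(volume_sizes_gb)   # 2000 GB advertised to applications
physical_gb = 1000                      # 1 TB actually installed
oversubscription = provisioned_gb / physical_gb

print(f"{provisioned_gb} GB provisioned on {physical_gb} GB physical "
      f"({oversubscription:.1f}x oversubscribed)")
```

As long as actual consumption stays below the physical pool, the applications never notice that the volumes are thinly provisioned.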
Thin Replication
[Diagram: primary systems send thin transfers of SnapMirror and SnapVault stored copies to a secondary system.]
- SnapMirror and SnapVault technology simplify disk-based disaster recovery and backup. After an initial copy, each subsequent transfer moves only changed blocks.
- Over 50% of NetApp systems utilize thin replication.
- NetApp thin replication enables virtual restores of full, point-in-time data at granular levels that competitors cannot provide.
- Deduplication of the primary data offers further space reductions.
THIN REPLICATION
Thin replication is a term that is used to describe SnapMirror and SnapVault technologies in the context of
storage efficiency.
Because only incremental block changes are transferred after the baseline copy is made, SnapMirror and
SnapVault technologies improve efficiency.
Deduplication can be easily combined with thin replication for even greater savings. The resulting thin
transfers reduce storage space requirements at the source and the destination and also reduce WAN traffic.
Efficiency for Any Protocol
- NetApp LUNs are data objects with block disk attributes and are created within FlexVol volumes.
- All NetApp storage-efficiency features apply equally in SAN and network-attached storage (NAS) environments.
- With the NetApp Unified Storage Architecture, a single system can provide SAN and NAS capabilities and easily be reconfigured for changing business needs.

Data ONTAP Software: Logical View
Abstraction Layer                  | Efficiency Feature
LUN, FlexVol volumes               | Thin provisioning, thin replication, Snapshot copies, virtual copies, deduplication
Aggregate, RAID group, disk drive  | RAID-DP technology, SATA drives
EFFICIENCY FOR ANY PROTOCOL
Storage efficiency applies equally to SAN and network-attached storage (NAS), as shown on the right side of
this slide.
The NetApp storage platform, Data ONTAP software, incorporates layers of abstraction to improve data
management and storage efficiency.
A LUN is a file object that is created within a FlexVol volume and that looks like a block-based SAN device.
NetApp Deduplication Overview
- Proven technology: over 30,000 systems licensed for deduplication
- Designed for enterprise storage:
  - Integrated tightly with Data ONTAP software
  - Available on FAS and V-Series systems
  - Suitable for primary, backup, and archival storage tiers
- Broad customer platform options: multiple platforms that scale in capacity, performance, and price
- Deduplication storage efficiencies for reduced costs:
  - Reduce physical data storage costs
  - Reduce space, power, and cooling costs
  - Store more data per physical storage system
NETAPP DEDUPLICATION OVERVIEW
FAS deduplication is a general-purpose space-reduction feature that is available on FAS systems. When FAS
deduplication is enabled, all data in the specified flexible volume can be scanned at intervals and duplicate
blocks removed, resulting in reclaimed disk space. NOTE: FAS deduplication is not supported on the V-Series,
R100, R150, FAS250, and FAS270 systems or on the 800 and 900 series controllers.
FAS deduplication runs as a background process, and the system can perform any other operation during this
process. FAS deduplication is a post-processing task and is performed on a volume at an average rate of
30-50 MB/sec (108-180 GB/hour). Up to eight volumes can be deduplicated simultaneously. It is important to note
that although the deduplication process runs as a low-priority background task, deduplicating eight volumes
simultaneously will place a significant load on the system.
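The quoted rate conversion is easy to verify, and it lets you estimate how long a deduplication pass will take on a volume of a given size. The 2-TB volume below is a hypothetical sizing example, not a figure from the course.

```python
# Convert the quoted deduplication scan rate of 30-50 MB/sec into GB/hour:
# multiply by 3,600 seconds per hour and divide by 1,000 MB per GB (decimal).
def mb_per_sec_to_gb_per_hour(rate_mb_s: float) -> float:
    return rate_mb_s * 3600 / 1000

low = mb_per_sec_to_gb_per_hour(30)   # 108.0 GB/hour
high = mb_per_sec_to_gb_per_hour(50)  # 180.0 GB/hour
print(f"{low:.0f}-{high:.0f} GB/hour")

# Rough worst-case pass time for a hypothetical 2-TB volume:
volume_gb = 2000
print(f"{volume_gb / low:.1f} hours at the low end of the range")
```

Estimates like this help you schedule deduplication in a window when the background load will not compete with production I/O.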
Deduplication of Storage That Is Not NetApp Storage
[Diagram: enterprise and departmental SAN (FC, iSCSI) and NAS (CIFS, NFS) clients connect through NetApp V-Series systems; NetApp deduplication applies to the back-end volumes, aggregates, and disk RAID groups.]
DEDUPLICATION OF STORAGE THAT IS NOT NETAPP STORAGE
This slide shows the V-Series architecture, which virtualizes and pools heterogeneous storage at the back end.
The V-Series controller is a multiprotocol controller that provides both NAS and SAN capabilities. No other
vendor provides a single controller that can serve the NFS, HTTP, and CIFS protocols and also provide FC and
iSCSI LUNs.
V-Series architecture delivers centralized management for provisioning, disaster recovery, backup recovery,
compliance, and retention across heterogeneous storage at the back end.
How Does Deduplication Work?
- Storage systems use inodes and reference pointers to read and write data.
- NetApp deduplication uses multiple pointers to reference a single block.
- The same basic technology has been used in NetApp Snapshot copies for over 15 years.
[Diagram: two inodes whose indirect blocks point to data blocks; after deduplication, pointers from both inodes reference a single shared data block.]
HOW DOES DEDUPLICATION WORK?
Storage systems use reference pointers to read and write data. That's necessary so that users can find data
after they've written it.
Look at the four data blocks at the bottom of the graphic. The two green blocks indicate that the data is the
same. By eliminating the redundant block and pointing its reference at the original block, users effectively
free the bottom-right block, making that space available back to the storage system. That is
fundamentally how deduplication works from a data-structure standpoint.
Deduplication consists of two major components: WAFL (Write Anywhere File Layout) block sharing and
finding common blocks.
A reference-count metafile keeps track of how many times a given block appears in qtrees in the active file
system. In effect, this is an array that is indexed by VVBN. The size of each entry is 16 bits (8 bits used), so
the metafile requires 0.5 GB per TB of volume space. The maximum sharing for a block is 256.
How Does Deduplication Work?
- Removes duplicate 4K WAFL blocks
- Uses a postprocess that can be scheduled
- Is transparent to applications
- Supports any interface or protocol
- Is a low-priority background process
HOW DOES DEDUPLICATION WORK?
The following are highlights of what the NetApp deduplication implementation achieves.
Key Messages:
- Deduplication finds and removes duplicate 4K WAFL blocks.
- The most popular configuration is to schedule deduplication at slow times. (Other configurations are threshold, manual, and through the SnapVault scheduler.)
- Because deduplication occurs at a low level, it is transparent to applications.
- In addition, NetApp can accommodate any interface that is supported by FAS systems.
- Deduplication for FAS systems runs as a low-priority background process.
NetApp Deduplication
- Over 20,000 systems utilize NetApp deduplication.
- Deduplication removes redundant data blocks from volumes, regardless of application or protocol.
- With deduplication, users can recoup 50% of their capacity on average, and up to 95% for some datasets and environments.
- Only NetApp offers deduplication for primary, secondary, and archival storage tiers.
NETAPP DEDUPLICATION
Explain the effect of NetApp deduplication on storage efficiency.
Deduplication searches for and removes duplicate data.
NetApp is the market leader in deduplication, and thousands of customers use deduplication in production.
NetApp deduplication is different in that it can be applied to a broad variety of applications and storage tiers,
including primary storage, replicated storage, backup storage, and archival storage.
Inside NetApp Deduplication
Deduplication removes duplicate WAFL blocks and:
- Has no charge to add the deduplication license
- Is enabled volume by volume
- Includes 4K block-level deduplication
- Works with any interface or protocol: CIFS, NFS, FC, iSCSI, and NDMP
- Is application-transparent and content-agnostic
- Requires minimal overhead:
  - Write performance overhead is approximately 7%.
  - Read performance overhead is approximately 0%.
- Includes these features that were released in January 2009:
  - Larger volume sizes
  - Checkpoint restart
  - Performance improvements
INSIDE NETAPP DEDUPLICATION
Adding deduplication to a NetApp storage environment is a 10-minute process: add the licenses, enable the
volumes to be deduplicated, and schedule the deduplication to run at specified intervals.
Benefits of NetApp Deduplication
Time-Based Deduplication
[Diagram: backups 1-4 of original, deduplicated, and new data; actual storage consumed is a fraction of the total.]
Backup data: NetApp deduplication removes redundant data from backups. Savings are displayed as a ratio, for example, 20:1 space savings after 30 backups.
Volume-Based Deduplication
[Diagram: duplicates in the original data volume are identified and removed, reducing the actual storage consumed.]
Nonbackup data: NetApp deduplication removes redundant data from a single volume. Savings are displayed as a percentage of the total volume, for example, user data reduced by 30% after deduplication.
BENEFITS OF NETAPP DEDUPLICATION
The presenter must understand that the top part of this slide doesn't discuss deduplication; it refers to backup.
Because of Snapshot copies, users don't copy data when they take a backup; only the changed blocks are
moved. The bottom example is the more common view and understanding of deduplication.
NetApp has changed the rules of deduplication. No longer only for backup data, NetApp deduplication
provides space savings across all storage tiers: primary, backup, and archival data.
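The two reporting styles on the slide, a ratio for backup data and a percentage for volume data, describe the same quantity. A small sketch converts between them:

```python
# Convert between the two reporting styles on the slide: backup savings
# as a ratio (e.g. 20:1) and volume savings as a percentage (e.g. 30%).
def ratio_to_percent_saved(ratio: float) -> float:
    """A 20:1 ratio means 1/20 of the space remains, i.e. 95% saved."""
    return (1 - 1 / ratio) * 100

def percent_saved_to_ratio(percent: float) -> float:
    """30% saved means 70% remains, i.e. roughly a 1.43:1 reduction."""
    return 1 / (1 - percent / 100)

print(round(ratio_to_percent_saved(20), 1))   # 95.0 (% saved)
print(round(percent_saved_to_ratio(30), 2))   # 1.43 (reduction ratio)
```

Keeping the conversion straight matters when comparing a "20:1" backup claim against a "30% reduction" primary-storage claim: the first is 95% savings, the second about a 1.43:1 ratio.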
NetApp Data Compression


Lower TCO
Space savings for varied datasets
Compression for primary, secondary, and
archive storage


NETAPP DATA COMPRESSION


Increase storage savings on a variety of datasets
Integrated tightly with Data ONTAP
Available on FAS and V-Series Systems
Suitable for primary, backup, and archival storage tiers


How Does Compression Work? (1 of 3)


Data is compressed inline.
A scanner is available for pre-existing data.
No application awareness is required.
Compression supports FAS and V-Series
systems.


HOW DOES COMPRESSION WORK? (1 OF 3)


A brief overview of compression:

Is inline; reduces write I/O


Is enabled on a volume basis
Provides a compression scanner for pre-existing data and background processes
Is transparent to applications
Supports FAS and V-Series systems


How Does Compression Work? (2 of 3)


With 32 KB compression groups

[Diagram: a 192 KB file is divided into six 32 KB compression groups.]

HOW DOES COMPRESSION WORK? (2 OF 3)

Write operations of a new file are sent to the storage system.


The file is broken into compression groups of 32k.
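The grouping step can be sketched in Python (a conceptual illustration, not Data ONTAP code):

```python
def compression_groups(data, group_size=32 * 1024):
    """Split a file's data into fixed-size compression groups.

    Each group is compressed independently, so a 192 KB file
    yields six 32 KB groups.
    """
    return [data[i:i + group_size] for i in range(0, len(data), group_size)]

file_data = bytes(192 * 1024)           # a 192 KB file
groups = compression_groups(file_data)
print(len(groups))                      # 6
```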


How Does Compression Work? (3 of 3)


Immediate space savings with inline compression

[Diagram: the six 32 KB compression groups of a 192 KB file are compressed inline and consume 60 KB on disk.]

HOW DOES COMPRESSION WORK? (3 OF 3)

Compression works inline and replaces repeating patterns of data within a compression group.
This provides immediate space savings.
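A conceptual sketch of the inline step, using zlib as a stand-in for the storage system's compression engine (the actual algorithm and savings differ):

```python
import zlib

GROUP = 32 * 1024  # compression-group size

def compress_groups(data):
    """Compress each 32 KB group independently and report on-disk size.

    Illustrative only: zlib stands in for the real compression engine.
    """
    groups = [data[i:i + GROUP] for i in range(0, len(data), GROUP)]
    return sum(len(zlib.compress(g)) for g in groups)

# A 192 KB file with repeating patterns compresses well.
file_data = b"abcd1234" * (192 * 1024 // 8)
on_disk = compress_groups(file_data)
print(on_disk < len(file_data))  # True: immediate space savings
```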


How Does Compression Work with Deduplication?

Cumulative space savings with postprocess deduplication

[Diagram: postprocess deduplication of the compressed 192 KB file leaves 24 KB of compressed and deduplicated data on disk.]

HOW DOES COMPRESSION WORK WITH DEDUPLICATION?

Deduplication works postprocess and removes duplicate blocks of data on disk.


Customers can get greater cumulative savings.
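The postprocess step can be illustrated with a block-hashing sketch (conceptual only; this is not the Data ONTAP implementation, which also byte-verifies candidate blocks before sharing them):

```python
import hashlib

BLOCK = 4 * 1024  # WAFL block size

def dedupe_blocks(data):
    """Return the number of unique 4 KB blocks in the data.

    Identical blocks are detected (here by hash) and stored once.
    """
    seen = set()
    for i in range(0, len(data), BLOCK):
        seen.add(hashlib.sha256(data[i:i + BLOCK]).digest())
    return len(seen)

# 12 zero-filled blocks plus 4 identical nonzero blocks.
data = bytes(BLOCK) * 12 + b"\x01" * BLOCK * 4
print(dedupe_blocks(data))  # 2: two unique blocks for 16 logical blocks
```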


NetApp Data Compression Architecture Requirements
Data ONTAP 8.0.1 and later versions:
Is available for 7-Mode and Cluster-Mode

Installation of free licenses:


Compression
Deduplication

FlexVol software requirements:

A 64-bit aggregate
A maximum volume size of 16 TB
Deduplication enabled (does not need to be run)
Compression enabled per FlexVol volume


NETAPP DATA COMPRESSION ARCHITECTURE REQUIREMENTS

Data compression requirements


Data ONTAP 8.0.1 7-Mode
Written agreement (policy-variance request, or PVR) to control use cases (more details later in this course)
Free licenses (deduplication and compression)
Deduplication licensed and enabled on volume (but not necessarily scheduled)
Only 64-bit aggregates
A maximum volume size of 16 TB
Data ONTAP 8.0.1 7-Mode: It supports only 7-Mode (not Cluster-Mode) configurations.
Deduplication
NetApp data compression requires deduplication to be enabled on the same volume. After you enable
deduplication, you can choose to enable data compression. You do not need to schedule deduplication to run;
you only have to enable it on the same volume.
Free license
NetApp data compression requires both the deduplication and the compression license. Both are free.
64-bit aggregates
NetApp data compression does not support 32-bit aggregates. No plans exist for supporting 32-bit aggregates.
Enabled per FlexVol volume
Compression works on FlexVol volumes only, not on traditional volumes.
Maximum volume size
The volume-size limit is the same as for deduplication. For Data ONTAP 8.0.1, the limit is 16 TB for all supported storage systems.
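The requirements can be summarized as a checklist. The following Python sketch is purely illustrative; the field names are hypothetical, not a real Data ONTAP API:

```python
def compression_supported(volume):
    """Check a volume description against the stated requirements.

    Hypothetical helper for training purposes only.
    """
    checks = [
        volume["ontap_version"] >= (8, 0, 1),  # Data ONTAP 8.0.1 or later
        volume["aggregate_bits"] == 64,        # 32-bit aggregates unsupported
        volume["size_tb"] <= 16,               # volume-size limit
        volume["dedup_enabled"],               # enabled, not necessarily scheduled
        volume["type"] == "flexvol",           # traditional volumes unsupported
    ]
    return all(checks)

vol = {"ontap_version": (8, 0, 1), "aggregate_bits": 64,
       "size_tb": 10, "dedup_enabled": True, "type": "flexvol"}
print(compression_supported(vol))  # True
```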

Compression and Deduplication


[Diagram: raw data is compressed inline and then deduplicated postprocess, leaving compressed and deduplicated data on disk.]

Immediate space savings with inline compression
Cumulative space savings with postprocess deduplication

COMPRESSION AND DEDUPLICATION


When you have compression and deduplication enabled on a FlexVol volume, you get immediate space
savings with compression and cumulative savings with postprocess deduplication. The total savings are not
necessarily the sum of the individual savings. Refer to When to Select Deduplication and the Compression
Best Practice Guide for more details.
Compression can reduce the footprint of the initial data that is written to disk, and deduplication removes
duplicate WAFL blocks.
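One way to see why the totals are not additive: if the two effects were independent, the combined savings would be 1 - (1 - c)(1 - d). This is only a rule of thumb (actual results depend on the data), but it matches, for example, the home-directory row of the savings table:

```python
def combined_savings(compression_pct, dedup_pct):
    """Estimate combined savings if the two effects were independent.

    Rule-of-thumb sketch only; real combined savings depend on the data.
    """
    c = compression_pct / 100
    d = dedup_pct / 100
    return round((1 - (1 - c) * (1 - d)) * 100)

# Home directories: 50% compression and 30% deduplication
# yield 65% combined, not 80%.
print(combined_savings(50, 30))  # 65
```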


Compression and Deduplication Savings


Representative Savings by Application

Dataset                 Compression  Deduplication  Combined  NetApp
                        Savings      Savings        Savings   Recommendation
Geoseismic files        40%          3%             40%       Compression
Engineering data files  55%          30%            75%       Both
Virtual servers         55%          70%            70%       Deduplication
Home directories        50%          30%            65%       Both
Database and Biz Apps   65%          0%             65%       Compression
Exchange 2010           35%          18%            37%       Both*
Exchange 2003           37%          3%             38%       Compression

Dataset type legend: primary and secondary; backup and archive only.

*Exchange 2010: deduplication for primary; compression for backup and archive

These are typical space savings; actual results may vary. Use the Space Savings Estimation Tool (SSET) v3.0.


COMPRESSION AND DEDUPLICATION SAVINGS


REPRESENTATIVE SAVINGS BY APPLICATION

These are sample savings that NetApp has achieved with internal and customer testing. Actual customer savings are highly dependent on the data type and data layout. NetApp highly recommends that you test your actual data both with the Space Savings Estimation Tool (SSET) and in a test and development environment.
For Data ONTAP 8.0.1, NetApp recommends running only nonperformance-sensitive applications, such as file services and IT infrastructure, on the primary storage infrastructure. Other data types may be good targets for backup and archive tiers.


Primary Storage Use Cases


Customers who:
Look for ways to reduce primary storage
consumption
Do not want to run a third-party compression
solution
Achieve only low deduplication savings
Have applications that are not performance-critical:
File Services
Engineering applications
Seismic data

PRIMARY STORAGE USE CASES


For primary storage, the use cases are limited to those that are not performance-critical, typically applications that run on SATA drives or NAS, such as file services, engineering applications, and seismic data.
Compression and deduplication:
With deduplication, NetApp is the established market leader, with more than 87,000 licenses and more than one exabyte of data being deduplicated.
Deduplication and compression utilize the Data ONTAP architecture.
Advanced deduplication increases the number of applications for which NetApp can offer space savings.
Only NetApp offers deduplication and compression for primary, secondary, and archival storage tiers.
Compression provides immediate space savings, and postprocess deduplication provides additional savings.
Users often recoup 50% or more of their capacity.
The combination eliminates the need for an off-box solution.


Secondary and Archive Storage Use Cases

Customers who:
Look for ways to reduce backup and archive
storage consumption
Have backup and archive jobs that consume too
much disk space
Cannot store enough backups because of space
constraints
Want to use SnapVault software or another backup solution
Look for ways to reduce the cost of backup and
archive tiers

SECONDARY AND ARCHIVE STORAGE USE CASES


For secondary storage, the use cases are much less limited and may include applications such as file services, databases, and Exchange. Backup and archive solutions that already perform compression do not benefit much from data compression. These customers may choose to disable the compression feature on their backup and archive solutions to avoid the resource overhead that compression causes. Note that enabling compression on a backup and archive tier increases the time that it takes to complete a backup. NetApp recommends that you test this in your environment before implementation.


NetApp Uses 50% Less Power, Cooling, and Space

[Charts: three bar charts comparing NetApp with the competition on power (VA per usable TB), cooling load (BTU per hour per usable TB), and space (total rack units per 10 TB); NetApp shows savings of roughly 51% to 53% in each category.]

Possible range based on environment-specific factors and typical environments

Source: Oliver Wyman Study: Making Green IT a Reality, November 2007. Competitors: EMC CLARiiON and HP EVA.


NETAPP USES 50% LESS POWER, COOLING, AND SPACE


Storage efficiency translates to environment savings.
Power, cooling, and data-center space are increasingly important to NetApp customers. NetApp consistently
outperforms EMC and HP in environmental impact. Details can be found in the Oliver Wyman study that is
noted on this slide.
Most NetApp customers face increasing pressure on space, power, and cooling in their data centers. Some are running out of space. Others can't get new power from their utility. Some customers find that today's high densities have maxed out their cooling infrastructure.
With governments around the world scrutinizing data-center power consumption and the increasing global pressure for environmental stewardship, the power, cooling, and space benefits of NetApp solutions can help customers to directly address those concerns and challenges.
NetApp solutions can cut footprint, power, and cooling loads by half. Most NetApp customers see extremely favorable ROIs, often paying themselves back in under a year. BT's ROI was eight months. Its annual power savings alone were $2.4M.
NetApp has a host of products that can help customers to get more from their systems, which eliminates the
need for wholesale changes, extends the life of some of their infrastructure that they may not be ready to
replace, and enables customers to extend their mixed-vendor storage arrays with NetApp capabilities without
having to take those systems out of production.


Unified Protection for the Entire Data Center Environment
Continuous availability: six nines of uptime
Disaster recovery for complete site protection
Backup and recovery for Snapshot copies and for tape environments
Archive and compliance for long-term retention and ongoing access
Security to encrypt data while the system is up and during scheduled downtime

[Diagram: NetApp primary storage alongside other vendors' storage through virtualization]


UNIFIED PROTECTION FOR THE ENTIRE DATA CENTER ENVIRONMENT


NetApp provides a range of disk-based backup solutions, such as the following:

Systems that automate remote office backup


Virtual tape solutions that address the difficulty of slow tape-based backup and recovery
Heterogeneous solutions that can be used in any primary storage environment

NetApp data-protection solutions safeguard enterprise data, integrate easily with customers' existing infrastructure, help customers to meet backup requirements, and help customers to recover rapidly when needed. And, by storing only copies of changed data, NetApp delivers protection that helps customers to affordably protect their data, their businesses, and their reputations.
NetApp disk-based backup solutions can be used in combination with NetApp mirroring and replication
technologies to provide the most cost-effective business-continuance solutions that are available today. Tested
and proven for complex environments, NetApp technology allows customers to mirror data to a remote site
and delivers records of changes at any interval that the customers choose. When a failure occurs, a customer
can retrieve the desired copy of the data instantaneously and quickly resume business without disruption of
service.
NetApp software-based archiving and compliance solutions are unique in the industry. Not only do these solutions totally eliminate the cost of redundant compliance storage by using a single copy for both backup and compliance, they also eliminate the need for dedicated compliance systems. Only the NetApp unified architecture lets customers consolidate e-mail, file, database, enterprise resource planning (ERP), and CMS data on a single platform.
NetApp also provides industry-leading information classification and management, which enables data
discovery to mitigate litigation and compliance risks and enables efficient management of storage tiers to
lower the cost of archival storage.


Lesson 1
Tools


LESSON 1: TOOLS


The Efficient IT Calculator (1 of 2)


THE EFFICIENT IT CALCULATOR (1 OF 2)


The Efficient IT calculator has been enhanced and now quantifies savings when NetApp deduplication is used for FAS and VTL systems and for primary and archival datasets. Tool users see personalized reports that show how much money, space, and time they can save by using deduplication.
This tool can be found on the Field Portal.


The Efficient IT Calculator (2 of 2)


Interactively illustrates
storage-efficiency benefits
Calculates power and
cooling savings
Automates e-mail to all
registrants with a summary
of results:
Archival
Backup
Disaster recovery
Development and testing


THE EFFICIENT IT CALCULATOR (2 OF 2)


Review the Efficient IT calculator.
Key Messages:

Speak to the greater topic of the Efficient IT calculator.


The Efficient IT calculator now includes data compression along with deduplication.


The SSET

This tool is available on the NetApp Field Portal.


THE SSET
This tool is a confidential NetApp product. It is intended for use only by NetApp employees and authorized
NetApp partners when analyzing data at current or prospective NetApp customer accounts. By installing this
software, you agree to keep the tool and its results confidential to NetApp, the NetApp authorized partner, and
the customer account.
Overview:
FAS deduplication is a NetApp storage space-saving technology that increases stored data efficiency by
deduplicating and storing only unique data.
The SSET for Linux crawls through all the files in the specified path and estimates the space savings that will
be achieved by FAS deduplication.
NOTE: This tool reports the percentage of duplicate data that is found in the file system and not the amount
of data that is actually saved by enabling FAS deduplication. The tool is for estimation only.
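Conceptually, such an estimator scans data in fixed-size blocks and reports the fraction that is duplicated. A rough sketch (illustrative only; SSET's actual scanning logic is not shown here):

```python
import hashlib

def duplicate_percentage(data, block=4 * 1024):
    """Estimate the percentage of duplicate fixed-size blocks.

    Reports duplicate data found, not guaranteed savings.
    """
    blocks = [data[i:i + block] for i in range(0, len(data), block)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return round(100 * (1 - len(unique) / len(blocks)))

# 10 blocks, only 2 unique: 80% of the blocks are duplicates.
data = bytes(4096) * 8 + b"\x02" * 4096 * 2
print(duplicate_percentage(data))  # 80
```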


The SSET 3.0


Analyzes existing data
Predicts savings for:
Deduplication only
Compression only
Both (compression followed by deduplication)

Is run from Windows or Linux clients with read access to data
Does not require data to be on NetApp storage


THE SSET 3.0


Version 3.0 of the Space Savings Estimation Tool provides support for three configurations:

Deduplication only
Compression only
Combined savings, compression followed by deduplication

SSET scans local, CIFS-mapped, or NFS-mounted file systems only. It can analyze data from any source; in other words, it does not require the data to be on NetApp storage. It can be run from either a Windows or a Linux machine. The tool is limited to evaluating a maximum of 2 TB of data. If the path contains more than 2 TB of data, the tool indicates that the maximum size has been reached and presents the results for the 2 TB of data that it has processed.
Currently, the tool is available only to NetApp field and partner personnel. They can run it at the customer site but must remove it when they are finished testing. This tool cannot be left with the customer.
SSET 3.0 is currently available by request only to Sandra Moulton but will be released to the Field Portal with Data ONTAP 8.0.1.


What Is NetApp Realize?


NetApp Realize:
Is a sales tool that helps you to win deals by proving the
business value of NetApp solutions

Is a platform for creating a tailored financial analysis


Calculates key values, such as:

Return On Investment
Net Present Value
Total Cost of Operations
Payback period

Includes capital cost and operating expense savings


Calculates values for one to five years
Incorporates NetApp storage-efficiency technologies

WHAT IS NETAPP REALIZE?


Creating an Analysis

Identify the customer and set up a new analysis
Edit default financial values
Identify the customer's data requirements and existing storage
Propose one or more solutions to replace existing storage
Calculate the cost savings of your proposed solutions, the ROI, and the payback period
Present results

CREATING AN ANALYSIS
1. Set up your analysis.
NetApp enters your information and reviews default assumptions for your financial environment.
2. Identify the existing storage environment.
NetApp works with you to identify data requirements and existing storage technology.
3. Propose solutions.
Propose storage solutions that NetApp believes are appropriate alternatives to existing solutions.
4. Analyze and compare.
NetApp Realize analyzes the financial impact of proposed solutions and shows savings and benefits
compared to the existing system.
5. Present results.
Use the NetApp Realize outputs as a summary to allow you to take the next step.


NetApp Realize Certification by IDC


NetApp Realize is certified by International Data Corporation
(IDC):
Typical savings of 40% to 65%

Accurate methodology
Intelligent defaults:
Power costs
Labor costs

Floor space costs


Others


NETAPP REALIZE CERTIFICATION BY IDC


NetApp Realize Functionality Overview


NetApp Realize models and compares the following:
Existing storage (can be a competitor's proposal)
Your proposal 1: the default V-Series system
Your proposal 2: the default FAS system

Existing storage can be:


Replaced
Left intact without expansion
Phased out over time

NetApp Realize calculates the cost savings that are


available with your proposed systems compared to
the cost of the existing storage system.


NETAPP REALIZE FUNCTIONALITY OVERVIEW


NetApp Realize Financial Methodology


NetApp Realize calculates the cost savings
for each proposal.
Cost savings = the cost of staying with existing
storage less the cost of purchasing and operating
your proposed system.
NetApp Realize calculates the ROI for each
proposal:
ROI = the cost savings of your proposed system
divided by the investment that is required to
purchase and operate the proposed system.
ROI takes the time value of money into
consideration.

NETAPP REALIZE FINANCIAL METHODOLOGY
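The underlying finance formulas can be sketched as follows. The numbers and the simple payback formula are illustrative only; NetApp Realize's actual model covers capital and operating costs over one to five years:

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows (year 0 first).

    Discounting is how ROI takes the time value of money into account.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_years(investment, annual_saving):
    """Simple (undiscounted) payback period in years."""
    return investment / annual_saving

# Illustrative figures only, not NetApp Realize's defaults.
investment = 100_000
annual_saving = 150_000
print(round(payback_years(investment, annual_saving), 2))  # 0.67 (about 8 months)
print(npv(0.10, [-investment] + [annual_saving] * 3) > 0)  # True: positive NPV
```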


NetApp Synergy: a Suite of Applications

http://synergy.netapp.com


NETAPP SYNERGY: A SUITE OF APPLICATIONS


Having the correct tools is probably the single most important factor in being efficient and productive when
you perform service-delivery work. For every delivery hour that you can save, you increase your margins,
reduce risk, and improve customer satisfaction.
NetApp has worked hard to ensure that the NetApp services teams have high-quality, feature-rich tools. As
part of the NetApp partner program, to ensure your success, NetApp has extended those tools to you for your
use and benefit.
NetApp Synergy is a suite of applications that can assist you with pre-sales through post-sales activities.
NetApp has more than a dozen applications that you can choose from to meet your specific needs, but today,
this course focuses on Storage Design Studio (SDS).
NetApp Learning Center training courses are available for this tool. See the NetApp Learning Center for
availability.


Storage Design Studio


Design, configure, and document
An application that:
Enables you to build accurate, detailed
configurations of NetApp storage controllers
Provides the capability to rapidly design a
solution, make changes to the design, and
see results immediately
Generates configuration scripts
Generates high-quality, as-built storage-solution models and detailed documentation


STORAGE DESIGN STUDIO


After your proposed solution is agreed upon and the deal is closed, it is critical that you deploy the
components with accuracy and efficiency. SDS helps you to ensure that all your deployments proceed as
quickly, accurately, and effectively as possible.
SDS is an application plug-in that offers fine-grain configuration, automated storage provisioning, and
integrated Word and Visio documentation.


Exercise 9
Module 4: Locating the Storage-Efficiency Whiteboard

Time Estimate: 10 Minutes


EXERCISE 9
Please refer to your exercise guide.


Module Summary
Now that you have completed this module, you
should be able to:
Identify the major cost and spending
components that NetApp customers face in IT
Define and describe the NetApp advantage
List the key software solutions in the NetApp
storage-efficiency strategy
List the key savings areas in customer
environments


MODULE SUMMARY


Module 5
Enterprise Data Storage


MODULE 5: ENTERPRISE DATA STORAGE

5-1

NetApp Accredited Storage Architect Professional Workshop: Enterprise Data Storage

2012 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.

NetApp University - Do Not Distribute

Module Overview
This module focuses on the following topics:
Consolidation
Contrast with SAN
Thin provisioning and space reservations
Positioning
Network-attached storage (NAS) architecture


MODULE OVERVIEW


Module Objectives
After this module, you should be able to:
Discuss the challenges of consolidation in:
Windows environments
UNIX environments

Discuss the advantages of NetApp SAN technology
Articulate the advantages of thin provisioning
and space-reservation technology
Discuss NetApp NAS architecture


MODULE OBJECTIVES


Lesson 1
Consolidation


LESSON 1: CONSOLIDATION


Windows File Serving: Consolidation


The Data ONTAP operating system functions as the
Windows file server.
A high number of Windows servers exists per NetApp
controller. (The controller is much more scalable than
with Windows.)
You can usually size a Windows file server simply
based on customer storage needs. This can be a
great point of entry to an account.
The maximum number of users, shares, open files,
and so on, is noted in documents. The numbers are
generally high, based on system memory.


WINDOWS FILE SERVING: CONSOLIDATION


The Data ONTAP operating system is the Windows file server, and it replaces Windows boxes. Windows
administrators may be put in other roles because of this shift, so be mindful of the political implications of
your sales pitch.
Multiple Windows systems map to one NetApp controller: it depends on a customer's servers, how much traffic they have, and how powerful the servers are.
Windows servers tend to be lightweight boxes with many disks connected to them. They are not usually big,
high-performance systems.
Consolidation as large as 80-to-1 has been seen in Windows environments, but as little as 5-to-1 has also been
seen.
Generally, you can size the Windows file system based on a customers storage need.
CIFS is generally not used in high-performance types of environments. Performance is usually less of a
question.
In large environments, pay attention to maximum users, shares, and open files. Scale is based on system
memory, which is documented in standard Data ONTAP documentation.


A Seamless Transition
Integration with Windows Infrastructure
[Diagram: a typical Windows file-serving environment before NetApp, with separate CIFS servers for user home directories, project shares, and software development, CAD, and similar data; and an efficient, highly available environment after NetApp, with the same CIFS shares consolidated on NetApp storage and integrated with a Microsoft Active Directory server.]

Integrate into a Windows environment:
Active Directory and Group Policy support
Kerberos and Lightweight Directory Access Protocol (LDAP) support
Leveraging of existing Windows administration tools such as Microsoft Management Console (MMC)
Integration with MS Volume Shadow Copy Service (VSS)


SEAMLESS TRANSITION
INTEGRATION WITH WINDOWS INFRASTRUCTURE

This image represents a typical Windows file-serving environment. The computers at the top may be
thousands of users on the network who access data on the servers below. You can see that each server is
independent, with its own backup systems, storage capacity, and administrative needs. In a typical file-server
environment, hundreds or even thousands of these servers may exist. The challenge here is maintaining all of
the servers, backing them up, keeping them up-to-date, and utilizing storage assets effectively.
Here we see the same users, but now they are accessing one consolidated server that interoperates fully with
the Windows environment. From the perspective of the Windows clients and administrators, it looks like a
Windows file server that allows them to leverage existing Windows applications and tools. NetApp integrates
with Active Directory, supports Kerberos authentication and Group Policy Objects, and integrates with
Volume Shadow Copy Service (VSS), which is Microsoft's snapshot implementation. It also works with
existing software for backup, storage management, and antivirus scanning.
The benefits are many. Because customers are moving their data from slow, unreliable servers with direct-attached storage (DAS) to a highly reliable enterprise-class file server, they get highly available storage. They
can also consolidate hundreds of file servers with minimal impact on users and take advantage of pooled
storage for more efficient use of storage resources. With pooled storage, customers also get the ability to
expand their storage capacity without disruption for just-in-time provisioning, which also greatly increases
their storage-utilization levels. Another benefit is heterogeneous file sharing, which allows rapid, secure
access for data sharing to both UNIX and Windows users. Because the storage systems to manage are fewer
and simpler, customers get simplified data management. Finally, because NetApp is built on open systems, it
seamlessly integrates with existing software and hardware.
This is a big opportunity.


File Storage and Serving


File Services
Network file storage and serving
CIFS and NFS protocols
Desktops and servers
Typical Uses
User data and home directories
Shared project files
Application files and data
Consolidate file servers and direct-attached storage (DAS) with NetApp.


FILE STORAGE AND SERVING


UNIX File Serving


Consolidation:
May not actually consolidate boxes
Definitely consolidate management overhead, such
as volume and RAID management in appliances

Performance:
NetApp has the best real-world performance in
multiple workloads.
Performance comparison is available on the SPEC
SFS97 Web site.

Robust backup options, from tape backup all the way to MetroCluster

UNIX FILE SERVING


Unlike most Windows file servers, UNIX servers tend to be large, powerful systems with multiple processors
and large amounts of RAM, so customers will probably not consolidate servers.
Customers can definitely consolidate management overhead:

Volume and RAID management


Better Snapshot copy functionality
Better backup

NetApp has the best real-world performance. It has been in NFS the longest, since its inception.
The SPEC Web site has excellent NFS performance numbers:

Independent
Full disclosure of configurations
Results from most NetApp competitors
http://www.spec.org

The Data ONTAP GX operating system currently holds the SPEC record for the fastest throughput yet
recorded.


Mixed UNIX and Windows


File-Serving Environments
NetApp provides UNIX and Windows access to
the same file: the traditional NetApp multiprotocol:
A likely advantage in an engineering or graphics
shop
Guaranteed file locking (see TR3014 and TR3024)
Linux to the desktop?

NetApp is an established leader in NFS:


Original NetApp controllers are best-of-breed NFS
servers.
NetApp made the first NFS server that is supported
by Oracle for databases.

MIXED UNIX AND WINDOWS FILE-SERVING ENVIRONMENTS


Most environments have at least one team, group, or department that has Linux as its standard desktop
operating system. That is the classical mixed environment. That is no problem for NetApp.
Data ONTAP can provide simultaneous NFS and CIFS access to the same file system, the same files. In
these multiprotocol environments, Data ONTAP guarantees file locking across the two environments. If a
Windows user has a file open for edits, that file lock is honored from the NFS side and vice versa. Many
customers use cross-protocol file access. However, avoid the mixed security style.
There are three security styles. A security style is a piece of metadata that exists on a qtree; every qtree has to
have a security style. A system-wide default is set based on the last-licensed NAS protocol or by the
administrator. Any qtree's security style can be changed on the fly, within certain rules.
The three security styles are:

UNIX
NTFS
Mixed

Seeing those three names, customers assume that if they have a mixed environment, they must set the mixed
security style. That is not true. Any of these settings can give full, read, and write access to both CIFS and
NFS clients. The best practice is to choose the dominant security style, usually the one that needs to be able to
set the security, and use that one for the qtree security style. Users connecting from the opposite protocol are
mapped to the dominant protocol.
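The best practice described above can be sketched at a Data ONTAP 7-Mode console (the volume and qtree names here are hypothetical; check the command reference for your release):

```
# Show each qtree's current security style:
fas1> qtree status

# Set the dominant style (NTFS in this sketch) on a hypothetical qtree:
fas1> qtree security /vol/vol1/projects ntfs
```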
The /etc/usermap.cfg file defines mappings.
For example, a Windows user comes in to a UNIX-style qtree. The account is mapped to a UNIX account.
The UNIX account permissions are read and applied to the Windows user. If the mapped UNIX account has
access, the Windows account will have access.
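As an illustrative sketch of such mappings (the domain and account names are hypothetical), /etc/usermap.cfg entries follow a simple pattern: == maps in both directions, => maps Windows to UNIX only, and <= maps UNIX to Windows only.

```
# Bidirectional map between a Windows account and a UNIX account
ENGR\jsmith == jsmith

# One-way map (Windows to UNIX) for all remaining users in the domain
ENGR\* => pcuser
```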


When is the mixed security style useful? Only if there is a strong business case for having both security styles
active in the same file system. Some files will have read, write, execute bits from UNIX. Some files will have
ACLs and ACEs from NTFS. A given file or directory can only have one type of security or the other. On
individual files, it is a problem, but it is a manageable problem. Folders are a more difficult challenge, given
the intricacies of NTFS permission inheritance.


NFS for Database


NFS is often an option in a traditional SAN
environment:
NetApp NFS solutions have been Oracle-certified since 1997.
NFS can be simpler to deploy.
NFS has less stringent space-reservation
requirements.

Most NetApp and UNIX database


deployments use NFS, and many of these
predate NetApp SAN solutions.

NFS FOR DATABASE


NFS is a good option for this traditional SAN space.
NFS:

Has been certified by Oracle since 1997


Can be simpler to deploy
Does not require space reservations (see more on space reservations in the SAN section of this course)

Most NetApp UNIX database deployments are on NFS:

Many of these are because they predate Data ONTAP SAN functionality.
High database performance is possible over NFS.
Oracle does a large portion of its internal production over NFS.

NetApp has excellent technical reports, co-authored with Oracle, that detail optimal configuration.
Setting the right combination of mount options is critical for high-performance database implementation.
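As an illustration only (the exact options depend on the host OS, NFS version, and the relevant NetApp/Oracle technical report; the server, export, and mountpoint names below are hypothetical), an Oracle data-file mount on Linux might use options along these lines:

```
# /etc/fstab entry for an Oracle data-file volume over NFSv3
netapp1:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600  0 0
```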


Consolidate (1 of 2)
Challenges:
Excessive file-server
sprawl to handle
growth
Poor server and
storage utilization
Silos of file storage
File Servers with
Dedicated Storage


CONSOLIDATE (1 OF 2)
Windows file servers were originally intended for small workgroups and have eventually scaled up to provide
department-level storage.
These file servers, however, do not scale well for many reasons, which results in silos of file storage that can
be accessed only by their respective file servers.
With NetApp, customers can:


Break the cycle of deploying more file servers

Choose the right platform for current and future needs


Consolidate (2 of 2)
NetApp File Services

File Servers

The NetApp Solution:


Consolidate Linux,
UNIX, and Windows
file storage.
Eliminate file servers.
Optimize storage
efficiency.
Improve utilization by
50% or more.

"Our storage utilization went up from 43% to 76% once we moved to NetApp." (Beaumont Hospitals)

CONSOLIDATE (2 OF 2)


Manage
End Users

Challenges
Managing individual file
servers (a time-consuming
process)
Patches
Antivirus
Backup

Security

Deploying new services


New equipment
Additional resources
Server
Administrator

Storage
and Server
Administrators

Storage
and Backup
Administrators

Coordination of activities
among multiple groups


MANAGE


Simplifying Management
The NetApp Solution:

End
Users

Consolidate management:
Infrastructure, storage,
and files

File Management

Daily tasks automated with


policies

Windows
Administrator

Speed time to deployment:


Is deployed in minutes, not
hours or days

Storage Management

Allows administrators to
manage file systems and
storage

Storage
Administrator
"Everyone now has time to develop new technologies instead of being involved with current
operations and maintenance. That's the best ROI." (IAI)


SIMPLIFYING MANAGEMENT
Deploy new storage and services with integrated solutions in minutes versus hours or even days.
Windows administrators directly manage storage without having to rely on their storage counterparts.


Improve Data Protection (1 of 2)


Challenges:
Long, disruptive backups:
Infrequent because of
overhead and disruption
Network-intensive
because of constant
replication

Complicated file
restoration:
Multiple administrators
and systems
Unreliable tape media

IMPROVE DATA PROTECTION (1 OF 2)


Improve Data Protection (2 of 2)


The NetApp Solution:
Near-instant backups:
Frequent backups that lead to
better protection
Mirroring to secondary storage
Secure, compliant copies

Simple user-service restores:


Present a file-system view of
previous backups
Integrate with Windows VSS

"NetApp storage is trusted with our intellectual property, the TI crown jewels; that's the real
proof of our confidence in NetApp solutions." (Texas Instruments)


IMPROVE DATA PROTECTION (2 OF 2)


NetApp solutions provide:

Near-instantaneous incremental backups:

Simple self-service restore:


Allow you to perform frequent backups and provide better protection


Allow you to mirror data and backups to secondary storage, including creating secure, compliant copies

Presents a file-system view of previously backed up copies to end users or Windows administrators
Integrates with Windows VSS


Scale to Handle Growth


Challenges:
Growth in files:
Deploy new file servers

Results in storage silos


Servers and Storage
in Confined Silos

Inaccurate capacity and


performance planning

Poor utilization:
Missed SLAs
Compromised file
protection and availability

Difficulty adding storage


services without additional
resources

SCALE TO HANDLE GROWTH


Integration with Server


and Network Infrastructure
Before NetApp
Home
Directories

Shared
Storage

With NetApp

Software
Development

Home
Directories

Shared
Storage

Active Directory
(AD), LDAP,
or Network
Information
Service (NIS)

Software
Development

AD,
LDAP,
or NIS


INTEGRATION WITH SERVER AND NETWORK INFRASTRUCTURE


NetApp provides seamless integration with the environment while consolidating multiple file servers.
Customers can integrate seamlessly with authentication environments such as
Microsoft Active Directory (AD), AD LDAP, OpenLDAP, AD Kerberos, and MIT Kerberos.
Consolidation allows customers to consolidate their data that serves applications such as home directories,
shared storage, custom applications, technical applications, and software development.
Many file servers are consolidated into one NetApp system, which reduces the administrative overhead and
TCO.
NetApp has supported Windows 2008 AD and the SMB 2.0 protocol since Data ONTAP 7.2.4 and Data ONTAP
7.3.1, respectively.


Exercise 10
Module 5: Demonstrating
Windows File Server

Time Estimate: 60 Minutes


EXERCISE 10
Please refer to your exercise guide.


Lesson 2
Contrast with SAN


LESSON 2: CONTRAST WITH SAN


NetApp SAN Advantages (FC and IP)


Easier administration:
LUNs that are not tied to disk; no disk management
Industry-leading flexibility for changes

Simple data provisioning: LUN creation, growing, and shrinking
LUN clones or FlexClone volumes for testing,
report generation, and verification
A hardened storage subsystem with integrated
data protection: RAID-DP technology, SyncMirror
software, MetroCluster, and Snapshot technology


NETAPP SAN ADVANTAGES (FC AND IP)


The products that are based on Snapshot technology create an advantage for NetApp. These products provide
the ability to mirror from a Snapshot copy, lock down a Snapshot copy for compliance, and so on.
SAN provides easier administration:

LUNs are not directly tied to disk; no disk management is required.


Industry-leading flexibility is provided for changes.

Data provisioning is simple. LUN creation, growing, and shrinking are simple. With SnapDrive software and
Windows 2008, you can shrink a disk. Windows 2003 does not provide this capability. A best practice is to
use Volume Manager and add and remove LUNs (but do not shrink them).
NetApp offers LUN cloning for test environments, for reports, and for other purposes.
NetApp offers a hardened storage subsystem with integrated data protection among other features.
The majority of NetApp technologies apply to SAN environments and to NAS.


Advantages of IP SAN (iSCSI)


Leverages the network:
Administration expertise
No switch qualification
WAN for disaster recovery; all NetApp LUNs are
the same

Is within Internet Engineering Task Force (IETF) standards
Is growing in popularity quickly
Includes Microsoft iSCSI Initiator at no charge
Is a great way to grow account presence

ADVANTAGES OF IP SAN (ISCSI)


While customers generally do not use the same wires to run iSCSI, they can leverage their existing
administration expertise.
Ethernet switches do not have to be qualified.
Because iSCSI is implemented over IP, it can be implemented over a WAN for disaster-recovery purposes
much more cheaply than FC can:

All NetApp LUNs are the same. Frequently, customers use FC in production but iSCSI in failover or
disaster-recovery scenarios.
In Exchange and database environments, many customers use FC in production, but the maintenance
activities, such as the Exchange verification process, take place over iSCSI.

iSCSI is growing in popularity. Software initiators are freely available on most platforms.


LUN: Functional Summary


LUNs are implementations of local disk storage
that are supported on the controller.
LUN
Transport

Uses FC or iSCSI protocol

Access to data

Uses SCSI-3 commands

Interface

FC: FC host bus adapter (HBA) and driver


iSCSI: Ethernet port and software initiator, TCP offload
engine (TOE) card, or HBA
Fibre Channel over Ethernet (FCoE): Unified Target
Adapter (UTA) or converged network adapter (CNA)

Format

New Technology File System (NTFS), UFS, and VxFS

How many

Multiple can exist at the root of a volume or qtree


LUN: FUNCTIONAL SUMMARY


LUN stands for logical unit number. The phrase is not particularly descriptive; it refers to tape or disk devices
that are attached to a system.
In summary, a LUN uses FC or iSCSI protocol as the transport for the same SCSI commands that are used for
data access in DAS and uses an interface, either an FC host bus adapter (HBA) or one of three options with
iSCSI:

All software with a plain Ethernet network interface card (NIC)


Half software, half hardware with a TOE card
All hardware with a hardware iSCSI initiator

Because NetApp provides block-level access, the format of the file system can be anything.
The number of LUNs at the root of a volume or qtree is not limited by Data ONTAP architecture.
Generally speaking, for manageability, because Snapshot copies are always volume-wide, each LUN should
have its own flexible volume.


Steps for Creating and Mapping a LUN


1. Display the initiators worldwide port names
(WWPNs).
2. Create an initiator group (igroup).
3. Create the LUN.
4. Map the LUN to a single igroup.
5. Display the mapped LUN.
You can use SnapDrive software to do all this.


STEPS FOR CREATING AND MAPPING A LUN


1. Display all visible initiators from the Data ONTAP operating system.
2. Create an initiator group (igroup) for the initiator or initiators.
iSCSI and FC LUNs get igroups.
3. Create the LUN, which is a file in the WAFL (Write Anywhere File Layout) file system.
4. Map the LUN to one or more igroups.
When the mapping occurs, if all is well, the drive appears, ready to use, on the host that connects to it.
Additional host-side steps are required to prepare the drive:
5. Partition the drive.
6. Format the partition or partitions.
7. Mount the formatted partition.
All of these steps can be performed through SnapDrive software.
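The storage-side steps above can be sketched at a Data ONTAP 7-Mode console. This is an illustrative sketch only: the controller prompt, igroup name, WWPN, volume, and LUN path are all hypothetical, and the exact command options depend on your Data ONTAP release.

```
# 1. Display the initiators that the controller can see:
fas1> fcp show initiators

# 2. Create an FC igroup for a Windows host:
fas1> igroup create -f -t windows win_host1 10:00:00:00:c9:6b:76:49

# 3. Create a 100-GB LUN in its own flexible volume:
fas1> lun create -s 100g -t windows /vol/winvol1/lun0

# 4. Map the LUN to the igroup:
fas1> lun map /vol/winvol1/lun0 win_host1

# 5. Display the mapped LUN:
fas1> lun show -m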


SnapDrive Software
Extending NetApp Simplicity to SANs
Windows and UNIX hostbased storage provisioning

Storage Administrator

Dynamic volume
management

Database Administrator (DBA)


LAN or WAN

SnapDrive
Software

SnapDrive
Software

OS-consistent Snapshot
copies

SnapDrive Microsoft Cluster


Software Services (MSCS)
Application Server

Near-instantaneous restores
that use the SnapRestore
feature
Multipathing

Storage
Network

Cluster awareness

OS-consistent replication
SnapDrive best practices:
http://now.netapp.com/NOW/knowledge/docs/other/best_practices_snapdrive/bestpract.htm


SNAPDRIVE SOFTWARE
EXTENDING NETAPP SIMPLICITY TO SANS

SnapDrive software is part of the NetApp server suite and comprises SnapDrive for Windows and SnapDrive
for UNIX.
SnapDrive software allows all storage-provisioning activities to be managed from the host
(the server administrator):

LUN creation
igroup creation
Mapping
Partitioning
Formatting
Mounting

Windows and UNIX versions:

Windows 2000 Server and Windows Server 2003


AIX, Solaris, HP-UX, Red Hat Linux, SUSE, and Oracle Enterprise Linux

SnapDrive software handles dynamic volume management. SnapDrive software provides OS-consistent
Snapshot copies. This is the most important technical reason for having SnapDrive software (along with the
management reasons). Customers get near-instantaneous restores with SnapRestore software, which is
multipathing-aware and cluster-aware, and customers get OS-consistent replication.


Best Practices:


Do not create LUNs on the root storage system volume /vol/vol0.


For better Snapshot copy management, do not create LUNs on the same storage system volume if those
LUNs must be connected to different hosts.
If multiple hosts share the same storage system volume, create a qtree on the volume to store all LUNs for
the same host.
SnapDrive for Windows allows administrators to shrink or grow the size of LUNs. Never expand a LUN
from the storage system; otherwise, the Windows partition does not expand properly.
Make an immediate backup after expanding the LUN so that its new size is reflected in the Snapshot
copy. Restoring a Snapshot copy that is made before the LUN is expanded shrinks the LUN to its former
size.
Do not place LUNs on the same storage system volume as other data; for example, do not place LUNs in
volumes that have CIFS or NFS data.
Calculate the LUN size according to application-specific sizing guides, and calculate for Snapshot usage
if Snapshot copies are enabled.
Depending on the volume or available SnapReserve space, use the option for volume automatic grow or
automatic delete to avoid a volume-full condition that is due to poor storage sizing.


SnapDrive for Windows


Components
Interfaces:
MMC
SnapDrive command-line interface (CLI)

Services:
Core NetApp SnapDrive services
Data ONTAP Virtual Disk Services (VDS), which interacts
with Windows VDS for disk and volume management

Data ONTAP VSS, which interacts with Windows VSS for


Snapshot copy management

Initiators:
iSCSI
FC


SNAPDRIVE FOR WINDOWS


COMPONENTS

Best Practices
Refer to the NetApp Interoperability Matrix and check the following items:

Confirm that SnapDrive for Windows supports the environment.


For specific information about requirements, see the SnapDrive 6.2 for Windows Installation and
Administration Guide.
See the FC and iSCSI Configuration Guide for Data ONTAP.
Always download the latest Host Utilities from the download section of the NetApp Support site.
NetApp recommends that you perform all procedures from the system console and not from a terminal
service client.

After you complete the preceding checklist, see the steps in the SnapDrive 6.2 for Windows Installation and
Administration Guide for the details of how to install SnapDrive for Windows.
Refer to the SnapDrive 6.2 for Windows Release Notes for the latest fixes, known issues, and documentation
corrections.


SnapDrive for Windows


Features

Space reclamation (NTFS hole punching)


Globally unique identifier (GUID) partition
table (GPT) disk partition
igroup management
Thin provisioning of LUNs


SNAPDRIVE FOR WINDOWS


FEATURES

Space reclamation:

Is a process that allows blocks that are marked free in the NTFS metadata block to be freed on the Data
ONTAP LUN
Does not require a new license
Provides better space utilization

Globally unique identifier (GUID) partition table (GPT) partitions are part of the extensible firmware
interface (EFI). This standard is phasing out BIOS, which relies on master boot record (MBR) partitions.
MBR:

Supports four primary partitions or three primary partitions and an extended partition with up to 128
logical drives
Has a maximum size for a basic volume of two terabytes
Contains only one copy of the partition table

GPT:


Can have 128 primary partitions


Can be up to 18 exabytes logically, but Windows file systems impose a limit of 256 terabytes
Contains two copies of its partition table, has CRC32 fields for partition data-structure integrity, and on
checksum failure can recover itself from a backup copy


Key Features
The following are the key features of SnapDrive for Windows:

Enhancement of online storage configuration, LUN expansion and shrinking, and streamlined
management
Support for connections of up to 168 LUNs
Integration with Data ONTAP Snapshot technology, which creates point-in-time images of data that is
stored on LUNs

SnapDrive 6.2 for Windows: Best Practices

Works in conjunction with SnapMirror software to facilitate disaster recovery from either asynchronously
or synchronously mirrored destination volumes
Enables SnapVault updates of qtrees to a SnapVault destination
Enables management of SnapDrive software on multiple hosts
Enhances support on Microsoft cluster configurations
Simplifies iSCSI session management
Enables technology for SnapManager products

NOTE: igroup management controls igroup creation and naming within SnapDrive software.
Thin provisioning of LUNs:


Controls less than 100% of the fractional space reservations from SnapDrive software
Monitors fractional reserve usage


SnapDrive for UNIX


Features
Maps file system mountpoints to newly created or existing LUNs
Supports all major UNIX platforms including AIX, Solaris, HP-UX,
Red Hat Linux, SUSE, and Oracle Enterprise Linux
Grows file systems on demand in a nondisruptive way
Supports protocols like FC, iSCSI, and NFS
Provides host-driven Snapshot technology:
Snapshot copy of volumes in same NetApp storage

Snapshot copy of volumes across NetApp storage systems


Uses NetApp Manage ONTAP for secure communication with
NetApp storage systems
Supports a range of multipathing and clustering technologies
(appropriate to host)


SNAPDRIVE FOR UNIX


FEATURES

SnapDrive software offers a supported way to ensure that the file systems and volumes are consistent when a
Snapshot copy is created.
A customer must still ensure that the application that is running on the file system and volume is consistent
before the customer creates the Snapshot copy.
Host-driven Snapshot technology provides a one-step process for quiescing and synchronizing file-based
systems and volumes before creating a Snapshot copy to ensure data integrity.
With SnapDrive for UNIX 3.0, the Windows and UNIX versions are similar, with the exception that
SnapDrive for Windows has a GUI.


Lesson 3
Flexible Volumes Space Guarantee


LESSON 3: FLEXIBLE VOLUMES SPACE GUARANTEE


Flexible Volume Creation


Space Guarantee Types

Flexible volume creation can be performed by


using CLI commands or System Manager.
The actual space is allocated from the
containing aggregate's file system space.
This allocation is controlled by the flexible
volume's space guarantee option.
Three space guarantee types are available:
Volume

None

File


FLEXIBLE VOLUME CREATION


SPACE GUARANTEE TYPES

When you create a FlexVol volume, you must worry about the space guarantee type. Three different concepts
that can be confusing, because they are similarly named, are:

Snapshot reserve: the space that is reserved for active Snapshot copies on a volume (20% by default; can
be adjusted)
Space guarantee: a method of guaranteeing that writable space is available for the volume
Space reservation: primarily a LUN mechanism that is used to guarantee expected writable space

The focus of the next section is on the three space guarantee types for FlexVol volumes:


Volume
None
File
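A hedged 7-Mode sketch of creating a flexible volume with each guarantee type (the aggregate and volume names are hypothetical; check the vol create syntax for your Data ONTAP release):

```
# Volume guarantee: space is reserved in the aggregate at creation time.
fas1> vol create projvol -s volume aggr1 200g

# No guarantee: space is taken from the aggregate only as data is written.
fas1> vol create thinvol -s none aggr1 200g

# File guarantee: space is reserved only for space-reserved files, such as LUNs.
fas1> vol create lunvol -s file aggr1 200g
```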


Flexible Volume Resizing


The CLI Command

The vol size command is used to resize a


flexible volume.
Syntax:
vol size <vol_name> [[+|-]<size>[k|m|g|t]]

Command: vol size FlexVol 50m
Result: FlexVol will now be 50 MB

Command: vol size FlexVol +50m
Result: FlexVol will be increased by 50 MB to 100 MB

Command: vol size FlexVol -25m
Result: FlexVol will be decreased by 25 MB to 75 MB


FLEXIBLE VOLUME RESIZING


THE CLI COMMAND

The important feature of FlexVol volumes is the ability to resize them. To do that, an administrator can use
a command similar to this one: vol size FlexVol 50m. As long as the data in the volume does not exceed the
size that is entered, the command immediately shrinks the volume to 50 MB. It does not actually move any data
around on disk or shrink any tables; the Data ONTAP operating system adjusts an accounting number. As
long as the space is free, the system allows the change. An administrator cannot destroy data by doing this;
the system protects the administrator from making a fatal error.
Here is an interesting situation that is related to the resizing of volumes:
A Windows administrator calls in while on a Windows system that is connected over CIFS to the NetApp
system. From the Windows system, the administrator sees that a 100-GB file system is running out of space. The
administrator needs more space to get a project done. The administrator is familiar with the storage systems, logs in,
and runs a command to increase the volume size to 110 GB. When the administrator goes back to the Windows system
and looks at it, it reports that 88 GB are available. The administrator thinks that something is wrong and dials
1-800-4NETAPP. What just happened?
This is not a bug. This is correct behavior, assuming default configurations. If Windows is reporting 100 GB
that is usable by the file system, how large is the volume that is hosting it?
The default reserve for Snapshot copies is 20%, which means that the underlying volume is actually 125 GB.
So when the administrator issued the command to change the volume size to 110 GB, the administrator actually
shrank the volume by 15 GB to 110 GB. The volume still has the 20% reserve for Snapshot copies, so the
administrator reduced the size of the active file system to 88 GB, because 88 GB plus the 20% reserve takes the
volume back to 110 GB.


This is an example of why NetApp recommends that before you do anything, you run the vol size command
with no arguments. Doing so shows you the current size of your volume. Then, when you see 125 GB as the
volume size, you will remember the reserve for Snapshot copies. This gives you the opportunity to do the
correct math and resize the volume appropriately.
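The arithmetic in this story can be sketched in a few lines (illustrative Python, not a NetApp tool; it assumes the default 20% Snapshot reserve and uses integer math for clarity):

```python
# Illustrative sketch of the accounting above: with the default 20%
# Snapshot reserve, the host sees only 80% of the volume size.

SNAP_RESERVE_PCT = 20  # Data ONTAP default Snapshot reserve

def active_fs_gb(volume_gb, reserve_pct=SNAP_RESERVE_PCT):
    """Space visible to the host file system, in GB."""
    return volume_gb * (100 - reserve_pct) // 100

def volume_gb_for(active_gb, reserve_pct=SNAP_RESERVE_PCT):
    """Volume size needed to present a given active file-system size."""
    return active_gb * 100 // (100 - reserve_pct)

print(volume_gb_for(100))  # 125: the volume behind the "100-GB" Windows view
print(active_fs_gb(110))   # 88: what Windows reports after resizing to 110 GB
```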


Aggregate and Flexible Volume Removal


Aggregates cannot be removed until all flexible
volumes on the aggregate are removed.
Flexible volumes and aggregates can be removed
by using CLI commands and System Manager.
For flexible volumes, use these CLI commands:
In Cluster-Mode, you must unmount the volume
before offlining it.
vol offline <FlexVol-name> and
vol destroy <FlexVol-name>

For aggregates, use these CLI commands:


aggr offline <aggr-name> and
aggr destroy <aggr-name>

AGGREGATE AND FLEXIBLE VOLUME REMOVAL


The following question comes up frequently: Now that I have multiple volumes that are hosted inside a
single physical container, is it bad if I accidentally delete that container? The answer is yes. For that reason,
you cannot remove an aggregate until all of the FlexVol volumes are removed. For a FlexVol volume to be
removed, it must be taken offline and destroyed. Both commands require multiple confirmations to complete the
removal: you must press ENTER four times to destroy each FlexVol volume, and then provide four more
acknowledgments to destroy an entire aggregate. Removal of volumes and deletion of aggregates cannot be done
accidentally. NetApp has a built-in
safety net. Be aware, though, that after you destroy the physical container, you cannot revert to a Snapshot
copy, because all of the Snapshot data was destroyed.
If you need to resurrect FlexVol volumes that were destroyed, you must have aggregate Snapshot copies.
After a FlexVol volume is destroyed, its own Snapshot copies are destroyed with it, so it cannot be restored
from those; it can, however, be restored from an aggregate Snapshot copy (if one was enabled). That restore also
reverts everything in the aggregate (which may contain multiple volumes) to get that FlexVol volume back. This may
seem like a big-hammer way to protect yourself from making a mistake, but it is possible.


Lesson 4
Thin Provisioning and Space
Reservations


LESSON 4: THIN PROVISIONING AND SPACE RESERVATIONS



Storage Scenarios with NetApp


Local and DAS
SAN-attached storage that is backed by
NetApp technology
SAN-attached storage and Snapshot copies
that are backed by NetApp technology
SAN-attached storage, Snapshot copies, and
space reservations that are backed by NetApp
technology


STORAGE SCENARIOS WITH NETAPP


This presentation will walk through four storage scenarios:
Local direct-attached storage: This scenario provides an example of a direct-attached disk as our basis for
comparison.
SAN-attached storage backed by NetApp: Next, we compare a common SAN environment, without Snapshot
technology.
SAN-attached storage with Snapshot copies, backed by NetApp: This scenario adds WAFL Snapshot copies
and explains the potential pitfalls they can introduce.
SAN-attached storage with Snapshot copies and space reservations, backed by NetApp.


DAS Systems
Local:
The OS owns the drive and its storage space.
No other system can access the drive.
Any blocks can be accessed at any time.

Example: a 10-block hard disk, a 3-block file, a
5-block file, and a 4-block file


DAS SYSTEMS


DAS Scenario
Local:
Write three-block file
Write five-block file
Write four-block file?

FS ENOSPC = Normal Condition


DAS SCENARIO
In this case, the file system layer of the host OS issues an ENOSPC (no space left on device) error. This is a normal
condition. The OS responds by reporting back to the user or application that no space is available, and the
write fails. Well-written applications have no problem with this message.


The SAN-Attached File System


Backed by NetApp technology
The OS thinks it has a hard disk, so it thinks it
owns the disk and its storage space.
No other system can access it.
All blocks can be accessed at any time.
Example: a 10-block LUN, a 3-block file, a 5-block file, and a 4-block file


THE SAN-ATTACHED FILE SYSTEM


The SAN-attached file system, backed by NetApp, looks and feels like a local direct-attached disk to the OS,
so the same assumptions are in place:


The OS owns the drive and its storage space.


No other system can access it.
Any blocks can be accessed at any time.


SAN-Attached Scenario
Backed by NetApp technology
Write three-block file
Write five-block file
Write four-block file?
Delete three-block file
Write four-block file?

FS ENOSPC = Normal Condition


SAN-ATTACHED SCENARIO
The file system layer of the host OS issues the ENOSPC error. This is a normal condition, and the
OS and applications respond normally.


The Problem: SAN and Snapshot Copies


Backed by NetApp Snapshot technology
Write three-block file
Create a Snapshot
copy
Write five-block file
Delete three-block file

Write four-block file?

The WAFL (Write Anywhere File Layout) file system


ENOSPC = Disk Failure


THE PROBLEM: SAN AND SNAPSHOT COPIES


In this case, it is the WAFL file system that issues the ENOSPC error in response to the host OS file
system's request for four blocks. The host FS layer does not expect to be denied access to blocks that it
knows, by its own accounting, should be available. The FS layer assumes a hard error has occurred on the
underlying disk and immediately disconnects the presumed-failed disk. For applications that access the disk,
this is a catastrophic failure with no guarantee that the data is left in a consistent state.
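The accounting behind this failure can be shown with a toy model (illustrative Python only; ToyVolume and its methods are invented for this sketch and are not WAFL code): a Snapshot copy keeps deleted blocks allocated, so the host's free-space math and the storage system's diverge.

```python
# Toy model (not WAFL code) of the 10-block scenario above: a Snapshot
# copy pins blocks, so deleting the 3-block file frees nothing on disk.

class ToyVolume:
    def __init__(self, blocks):
        self.blocks = blocks       # total blocks in the LUN's volume
        self.live = 0              # blocks in the active file system
        self.pinned = 0            # blocks held only by Snapshot copies
        self.has_snapshot = False

    def free(self):
        return self.blocks - self.live - self.pinned

    def write(self, n):
        if n > self.free():
            raise OSError("ENOSPC")  # what WAFL reports to the host
        self.live += n

    def snapshot(self):
        self.has_snapshot = True

    def delete(self, n):
        self.live -= n
        if self.has_snapshot:
            self.pinned += n         # Snapshot copy still references the blocks

vol = ToyVolume(10)
vol.write(3)       # three-block file
vol.snapshot()     # Snapshot copy taken
vol.write(5)       # five-block file
vol.delete(3)      # host frees 3 blocks, but the Snapshot copy pins them
try:
    vol.write(4)   # host thinks 5 blocks are free; WAFL has only 2
except OSError as err:
    print(err)     # ENOSPC: the host FS treats this as a disk failure
```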


The Solution: Space Reservations (1 of 2)


Example:
A 10-block LUN
Snapshot copies
100% space reservation
This guarantees that WAFL file system
ENOSPC cannot occur.


THE SOLUTION: SPACE RESERVATIONS (1 OF 2)


The Solution: Space Reservations (2 of 2)


Write three-block file
Snapshot copy
Write five-block file
Snapshot copy
Delete three-block file
Write four-block file?
Snapshot copy?

The Snapshot operation failed: No space is left on the device.


LUN writes are protected: The Snapshot copy may not be allowed.


THE SOLUTION: SPACE RESERVATIONS (2 OF 2)


Exercise 11
Module 5: Demonstrating
Windows iSCSI

Time Estimate: 30 Minutes


EXERCISE 11
Please refer to your exercise guide.


Lesson 5
Positioning


LESSON 5: POSITIONING


FC SAN and iSCSI Qualification Matrices


Matrices:
Are kept up-to-date
Are available to customers and channel partners

Unlike NAS, SAN (including iSCSI for now) requires


qualification for hosts, switches, and (FC SAN) HBAs.
If you do not see a combination that you need, ask for it.
Visit the Support Site for FC and iSCSI deployments:
http://support.netapp.com/


FC SAN AND ISCSI QUALIFICATION MATRICES


The most important issues to consider when you plan SAN implementations are the qualification matrices. In
the matrix are entries for:

Supported protocols
Notes about firmware, specific HBAs, and other hardware-specific data
Which versions of the Host Utilities are supported
Which host OS versions are supported
Which physical platforms are supported
Whether a software initiator is available and if NetApp supports it
Any versioning restrictions
Driver availability
What volume managers are available and which ones are supported by NetApp
What multipathing software is available
Which file systems are supported by NetApp
The Data ONTAP version
Details on clustering
SnapDrive versions

A line entry exists for each possible supported configuration. The matrix is about 230 pages long, and it
grows constantly. Unlike NAS, SAN requires qualification of everything that is mentioned above and more. If
you do not see a combination that you need, ask for it. As NetApp works to grow SAN presence, the
sustaining NetApp engineering department works hard to get you support for whatever combination you need
as quickly as it can. This may take a few weeks but generally does not significantly increase the sales cycle.


Which Protocol?
FC SAN is usually the highest-performance option;
however, performance is not always the top criterion.
SAN is application-independent.

NAS is mostly OS-independent: Standard protocols


are in UNIX, Linux, and Windows.
If a Windows server runs the application, use SAN.
Because no space reservation for Snapshot copies
exists, NAS can be easier to administer.
Be a trusted advisor, but let the customer decide.


WHICH PROTOCOL?
When you talk about the core four, you must talk about which protocol you want in a given environment.
Thankfully, most customers have this well established by the time that people come in to service accounts. FC
SAN usually provides the highest-performance option, yet performance is not always the top criterion,
especially if the customer does not have FC infrastructure; it is expensive to create that infrastructure if it
does not exist. Certainly SAN has the advantage of being totally application-independent. It looks like a
hard drive, so anything that can run on a local disk can run on SAN.
NAS is mostly OS-independent. Because NFS has been around as a standard for so much longer than
CIFS, better support is usually available in the NFS world.
You must verify that an application will still be supported if the customer moves it off of DAS. For a Windows
server that runs an application, use SAN. Some, but not many, circumstances exist in which a Windows
application can use CIFS; Microsoft requires the use of SAN for most applications.
NAS can be easier to administer, because only one file-system layer exists, and that layer is NetApp. Using
NAS makes it easier to manage Snapshot copies, and you do not have to worry about space reservations.
Be a trusted advisor, but let customers make their own decisions.
Usually customers have reasons for their choices that are defined and in place.
Frequently, these are not technical reasons.
Be aware that some of their reasons may be political.


The NetApp Solution


Top Technical Selling Points
Truly unified storage:
UNIX, Windows, Linux, NAS, and SAN: one OS
Better data-management strategies:
Now that you have unified, how do you back up?
High scalability, not just on the specification sheet
Best provisioning and storage utilization:
Data ONTAP 8.0 architecture with FlexVol
technology
A wide range of service offerings:
Different needs at different sites for different
customers

THE NETAPP SOLUTION


TOP TECHNICAL SELLING POINTS

NetApp technical selling points are unified storage, better management solutions, scalability, provisioning,
service offerings, and so on.


Lesson 6
Fractional Reserve


LESSON 6: FRACTIONAL RESERVE


Snapshot and Fractional Reserve (1 of 4)


The Snapshot reserve sets aside space in a
volume for backups. It may expand into the active
file system.
The fractional reserve is used in calculating the
amount of space in a volume that is set aside to
ensure overwrites. If a LUN is completely filled
and a Snapshot copy is created, then, with a
fractional reserve of 100%, enough space is
guaranteed to completely overwrite the LUN and
still preserve the old data through the Snapshot
copy.


SNAPSHOT AND FRACTIONAL RESERVE (1 OF 4)


Snapshot and Fractional Reserve (2 of 4)


The fractional reserve may not be an efficient use of space:
Even with the fractional
reserve set to 100%, the
system still ran out of space.

The fractional reserve


only delayed the
inevitable.

Example 1: Fully Provisioned

Conclusion: Better Snapshot copy


management is needed.


SNAPSHOT AND FRACTIONAL RESERVE (2 OF 4)


The fractional reserve and the overwrite reserve only delay the inevitable. The volume eventually becomes
full. The solution is that better Snapshot copy management is needed.


Snapshot and Fractional Reserve (3 of 4)


The fractional reserve may not be an efficient use of space:
Conclusion:
Thin Provisioning

Administrators must plan a


larger volume size to provide
for the guaranteed overwrite
reserve.

And the LUN may never


need the overwrite
reserve.

Example 2: Fully Provisioned


SNAPSHOT AND FRACTIONAL RESERVE (3 OF 4)


A LUN may never use the overwrite reserve that the fractional reserve provides. To minimize storage usage,
consider applying thin provisioning to a LUN by using a non-space-reserved LUN or by setting the fractional
reserve to a value lower than 100%.
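As a rough illustration of the trade-off (an assumed sizing rule for teaching, not an official NetApp formula), the space a volume must provide for a space-reserved LUN grows with the fractional reserve:

```python
# Illustrative sizing sketch (an assumption, not an official NetApp
# formula): space a volume must hold for a space-reserved LUN with
# Snapshot copies, given a fractional (overwrite) reserve percentage.

def volume_needed_gb(lun_gb, fractional_reserve_pct, snapshot_gb):
    overwrite_reserve = lun_gb * fractional_reserve_pct // 100
    return lun_gb + overwrite_reserve + snapshot_gb

print(volume_needed_gb(100, 100, 20))  # 220: fully provisioned overwrite reserve
print(volume_needed_gb(100, 0, 20))    # 120: thin-provisioned overwrite reserve
```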


Snapshot and Fractional Reserve:


Conclusion (4 of 4)
If you need guaranteed overwrites, use the
fractional reserve.
To minimize space usage, you can disable the
fractional reserve.
Snapshot copies can fill a volume if not managed
properly, which may prevent writes to a LUN if the
volume guarantee is none.
To better manage Snapshot copies,
administrators can:
Delete Snapshot copies (automatically)
Expand the size of the volume that contains the LUN

SNAPSHOT AND FRACTIONAL RESERVE: CONCLUSION (4 OF 4)


To better manage Snapshot copies within a volume that contains a LUN, administrators can set up a policy to
delete Snapshot copies automatically or set up a policy to expand the size of the volume that contains the
LUN.


Snapshot Reserve
Snapshot reserve defines a percentage of the
volume that is reserved for Snapshot copies:
Set at the volume level
Historically set to zero for volumes that are used with SAN environments
Example:
netapp> snap reserve
Volume vol_SAN1: current snapshot reserve is 20% or 2097152 k-bytes.
NOTE: Although the active file system cannot consume disk space that is reserved for Snapshot copies, Snapshot copies can exceed
the Snapshot reserve and consume disk space that is normally available to the active file system.


SNAPSHOT RESERVE
The Snapshot reserve specifies a set percentage of disk space for Snapshot copies. By default, the Snapshot
reserve is 20% of disk space. The Snapshot reserve can be used only by Snapshot copies, not by the active file
system. This means that if the active file system runs out of disk space, any disk space that remains in the
Snapshot reserve is not available for active file system use.
NOTE: Although the active file system cannot consume disk space that is reserved for Snapshot copies,
Snapshot copies can exceed the Snapshot reserve and consume disk space that is normally available to the
active file system.
The Snapshot reserve is not a reservation of physical disk; it is an amount of space to be counted against
Snapshot copies.


Snapshot Automatic Delete


The snap autodelete command determines when
(and if) Snapshot copies are automatically deleted. It is set at
the volume level:
snap autodelete <vol-name> [on|off|show|reset]

If autodelete is enabled, then options can be set:
snap autodelete <vol-name> <option> <value>

Options and values:
commitment: try, disrupt
trigger: volume, snap_reserve, space_reserve
target_free_space: 1-100
delete_order: oldest_first, newest_first
defer_delete: scheduled, user_created, prefix, none
prefix: <string>

SNAPSHOT AUTOMATIC DELETE


Volume Autosize (1 of 2)
To grow the volume:
vol autosize determines if a volume should grow
when nearly full.
Both snapshot autodelete and vol autosize
use the value wafl_reclaim_threshold:
Data ONTAP 7.1 to Data ONTAP 7.2.3: 98%
Data ONTAP 7.2.4 and later versions (threshold
depends on volume size):
Variable name (volume size): threshold value
wafl_reclaim_threshold_t (tiny volumes, less than 20 GB): 85%
wafl_reclaim_threshold_s (small volumes, 20 GB to less than 100 GB): 90%
wafl_reclaim_threshold_m (medium volumes, 100 GB to less than 500 GB): 92%
wafl_reclaim_threshold_l (large volumes, 500 GB to less than 1 TB): 95%
wafl_reclaim_threshold_xl (extra-large volumes, 1 TB and up): 98%


VOLUME AUTOSIZE (1 OF 2)
This value, when changed from the default, is not persistent; it reverts to the default value after a reboot. To
change this value (for example, to 90% for tiny volumes of less than 20 GB) and make it persist across reboots,
add the following lines to the /etc/rc file on both controllers:
priv set -q diag;
setflag wafl_reclaim_threshold_t 90;
priv set;
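The size-dependent thresholds in the table above can be expressed as a small lookup. The following sketch is illustrative Python only (it assumes 1 TB = 1,024 GB for the size boundaries):

```python
# Sketch of the Data ONTAP 7.2.4+ wafl_reclaim_threshold table above,
# mapping a volume size in GB to its default threshold percentage.

def reclaim_threshold_pct(volume_gb):
    if volume_gb < 20:       # tiny        (wafl_reclaim_threshold_t)
        return 85
    if volume_gb < 100:      # small       (wafl_reclaim_threshold_s)
        return 90
    if volume_gb < 500:      # medium      (wafl_reclaim_threshold_m)
        return 92
    if volume_gb < 1024:     # large       (wafl_reclaim_threshold_l)
        return 95
    return 98                # extra large (wafl_reclaim_threshold_xl)

print(reclaim_threshold_pct(10))   # 85
print(reclaim_threshold_pct(750))  # 95
```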


Volume Autosize (2 of 2)
Configuration:
Is set at the volume level
Can use these values:
ON:
Increment size (default 5% of original size)
Maximum size (default 120% of original size)

OFF:
vol autosize vol_name [-m
size[k|m|g|t]]
[-i size[k|m|g|t]] [on|off|reset]


VOLUME AUTOSIZE (2 OF 2)
Volume autosize can be run only a maximum of 10 times on any particular volume. If you set the incremental
size too small, you cannot expand it as much as you may want to. For that reason, it is generally
recommended that you use the -m and -i switches when configuring the volume autosize feature to set the
maximum size and the increment size to something larger than the defaults.
NOTE: The volume can grow only to a maximum size that is 10 times the original volume size.
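Under the defaults described above (5% increment, 120%-of-original maximum), a volume has only a few growth steps available, which is why larger -m and -i values are recommended. The following is an illustrative Python sketch, not Data ONTAP code:

```python
# Sketch (not Data ONTAP code) of vol autosize growth with the default
# 5% increment and 120%-of-original maximum described above.

def autosize_steps(original_gb, increment_pct=5, max_pct=120):
    """Return the sequence of sizes the volume grows through."""
    size = original_gb
    increment = original_gb * increment_pct // 100
    maximum = original_gb * max_pct // 100
    sizes = []
    while size + increment <= maximum:
        size += increment
        sizes.append(size)
    return sizes

print(autosize_steps(100))  # [105, 110, 115, 120]: only four growth steps
```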


Administrator's Choice
Administrators can choose which procedure to
employ first:
snap autodelete
vol autosize
Use the volume option:
try_first
Possible values:
snap_delete
volume_grow (default)

Example:

vol options vol_name try_first snap_delete



ADMINISTRATOR'S CHOICE
Configurations can get complex. If you have doubts about the recommended best practices for reservations,
consult this guide: Technical Report: Thin Provisioning in a NetApp SAN or IP SAN Enterprise
Environment at http://media.netapp.com/documents/tr-3483.pdf.


Exercise 12
Module 5: Case Study:
Student Activity 2

Time Estimate: 30 Minutes


EXERCISE 12
Please refer to your exercise guide.


Module Summary
Now that you have completed this module, you
should be able to:
Discuss the challenges of consolidation in:
Windows environments
UNIX environments

Discuss the advantages of NetApp SAN


technology
Articulate the advantages of thin provisioning
and space-reservation technology
Discuss NetApp NAS architecture

MODULE SUMMARY


Module 6
Business Applications


MODULE 6: BUSINESS APPLICATIONS


Module Overview
This module focuses on the following topics:
The value of NetApp systems to applications
Messaging and collaboration
Database added value
Technical applications
Server virtualization specifics


MODULE OVERVIEW


Module Objectives
After this module, you should be able to:
Discuss why companies should use NetApp
technology for applications
Discuss the value of WAFL (Write Anywhere
File Layout) for load-balancing databases
Articulate the value and history of using
NetApp systems for messaging and
collaboration
Discuss the value of using NetApp systems in
database environments

MODULE OBJECTIVES


Lesson 1
The NetApp Value to Applications


LESSON 1: THE NETAPP VALUE TO APPLICATIONS


Why Use NetApp Systems


for Applications?
A few of the reasons to use NetApp systems are:
Snapshot copies
Data and Snapshot management and replication
Flexibility and ease of use
Dynamic provisioning
Performance
iSCSI solutions that are provided by a market leader
Cost-effective FC solutions that are gaining market
recognition

Excellent high-end FC, clustering, and network


multipath I/O (MPIO) options

WHY USE NETAPP SYSTEMS FOR APPLICATIONS?


NetApp systems work well with Microsoft Exchange, so well that some NetApp Software Engineers consider
Exchange environments to be the easiest sell for NetApp products. When you demonstrate SnapManager
software for Exchange to an administrator, that administrator becomes eager to see more and to put the
software into an environment. This is a good NetApp solution.
Exchange 2010 is now out and starting to be implemented widely in the customer world. NetApp now
supports Exchange 2010. SnapManager 6.0 for Exchange is available.
Why use NetApp hardware and software solutions for Exchange?


Snapshot technology
Flexible provisioning
Aggregates
Spreading of data across many spindles to get optimized performance
Good I/O per second performance
Excellent FC options
Clustering
Multipath network I/O (MPIO)
Windows integration


Flexible Volumes (Exchange Example)


An aggregate with flexible volumes:
Total disks are available to all
flexible volumes.
Volumes are logical and flexible,
not constrained by hardware.
Volumes can be sized as
needed.
Volumes are easy to
manage with maximum I/O
performance.
[Figure: a Data ONTAP 7G aggregate, a pool of physical disks with increased aggregate disk I/O
bandwidth, hosting flexible volumes that contain data LUNs and log LUNs with host data]

FLEXIBLE VOLUMES (EXCHANGE EXAMPLE)


In this example, when you use flexible volumes on an aggregate, all of the volumes share the I/Os per second of
all of the disks. If one volume or LUN suddenly receives a lot of hot traffic, it does not matter, because the load
is spread out and equalized across all of the disks in the aggregate.
Much best-practice information is available for setting up Exchange environments. Many Exchange
environments keep their data and logs on the same aggregate. Some environments keep data on one aggregate
and logs on another. It depends on the environment and its traffic profile. NetApp has technical reports that
discuss best-practice configurations.
Because so many disk I/Os per second are required, large aggregates with flexible volumes striped across
them are always a big win for Exchange environments.


Lesson 2
SnapManager
Management Software


LESSON 2: SNAPMANAGER MANAGEMENT SOFTWARE


Why Not Use Native Management Tools?


To back up:
No scheduling is available; you must manually start the
backup.
The process is resource-intensive; Microsoft does not
recommend that you run it during production.
Granularity is poor; it is limited to the site level.

To restore one file:


You must first restore the entire database onto a
nonproduction server.
You must then manually copy a single file onto a
production server.
You cannot prevent the loss of important metadata,
histories, and security settings that are associated with the
file.

WHY NOT USE NATIVE MANAGEMENT TOOLS?


The native tools back up only databases and search indexes. The administrator must manually back up front-end files. Microsoft recommends that users keep images of the Web servers. The native tools require high
restore time and provide low availability during the restore process. Also, no out-of-the-box scheduling
mechanism exists. You must use the command line with Windows Task Scheduler to schedule backups.
The bottom line is that customers need a third-party data-protection solution.


Lesson 3
Messaging and Collaboration


LESSON 3: MESSAGING AND COLLABORATION


Exchange on NetApp Systems


NetApp systems were the first with Microsoft on
iSCSI.
NetApp is firmly committed to iSCSI and FC.
SnapManager for Exchange versions now
support Exchange 2010.
The NetApp Volume Shadow Copy Service (VSS)
hardware provider is integrated into SnapDrive
data-management software (no separate driver is
required).
SnapManager 6.0 for Exchange supports 64-bit.


EXCHANGE ON NETAPP SYSTEMS


NetApp systems were the first third-party systems to be supported by Microsoft on iSCSI and Exchange. The
two companies are firmly committed to each other and have a good relationship in place. SnapManager for
Microsoft Exchange versions support all Exchange versions from Exchange 5.5 to Exchange 2010. Volume
Shadow Copy Service (VSS) hardware providers are integrated into SnapDrive 3.1 software, so after you
have that version of SnapDrive software, you are ready to go. There is 64-bit support for Exchange 2010 and
SnapManager 6.0 for Exchange.


SnapManager for Exchange 6.0


Technology Overview
[Diagram: an Exchange server alone and an Exchange server with SnapManager for Exchange. SnapManager for Exchange works through VSS and SnapDrive software, which uses Data ONTAP APIs over FC and iSCSI to manage the Exchange database, transaction logs, and Snapshot copies.]

SNAPMANAGER FOR EXCHANGE 6.0


TECHNOLOGY OVERVIEW

This slide shows the basic architecture of SnapManager for Exchange.


The key points are:
 SnapManager for Exchange (SME) is based on the Microsoft VSS framework.
 NetApp SnapDrive for Windows is used by SME to communicate with the NetApp storage systems.


NetApp Software for Exchange


SnapManager software:
 Provides rapid online backups and restores by integrating with the Exchange backup API, running Esefile verification, and automating log replay
 Includes an intuitive UI and wizards for configuration, backup, and restoration

SnapDrive software:
 Provides dynamic disk and volume expansion
 Supports Ethernet and FC environments
 Supports Microsoft Cluster Services (MSCS) and NetApp controller failover (CFO) for high availability
 Is required for Windows SnapManager products and included with UNIX SnapManager products

Single Mailbox Recovery (SMBR) software restores a single message, mailbox, or folder from a Snapshot backup to a live Exchange server or .pst file (an optional feature).

NETAPP SOFTWARE FOR EXCHANGE


NetApp offers specialized software for Exchange environments:

SnapManager software
SnapDrive software: runs in both Ethernet and FC environments
Single Mailbox Recovery (SMBR) software
Operations Manager: provides a central management console for NetApp systems

SnapManager software is the primary piece of software that everyone thinks about in an Exchange
environment. It facilitates rapid online backups and restores. It integrates directly with the Exchange API and
performs Esefile verification. This course discusses that later, but that is an important piece, as is automated
log replay. SnapManager software also provides a nice UI and wizards for configuration, backup, and restore.
SnapManager software depends on SnapDrive software. Because it is a SAN environment, SnapDrive
software is required on the back end to manage OS-consistent Snapshot copies and the file systems
themselves.
SMBR is a good tool for pulling out a single message, an entire mailbox, a folder, or whatever you need to
pull out of a backup and then restoring it to a live Exchange server or to a separate .pst file.


Single Mailbox Restore (Exchange)


 Use PowerControls software.
 Quickly access Exchange data that is stored in online Snapshot backups.
 Select any data, down to a single message.
 Restore the data to one of two locations:
  - An offline mail file:
    The file is in personal storage file (.pst) format.
    Open the file in Microsoft Outlook.
  - The user's mailbox:
    Connect to a live Exchange server.
    Copy data directly to the user's mailbox.
    Data is instantly available.

SINGLE MAILBOX RESTORE (EXCHANGE)


With NetApp SMBR software, you can provide better service, reduce infrastructure expenses, and improve
productivity for Exchange administrators. NetApp SnapManager for Exchange, when combined with NetApp
SMBR software, enables you to create near-instantaneous online backups of Exchange databases and to verify
that the backups are consistent so that you can rapidly recover Exchange data at any level of granularity:
storage group, database, folder, single mailbox, or single message.
Single mailbox restore is from PowerControls software. Many other products provide it, but when combined
with NetApp Snapshot technology, it becomes more powerful.
Single mailbox restore makes the process of restoring items from a mailbox a simple help-desk function
rather than an IT operation such as pulling and restoring tapes. This tool is effective and efficient, especially
in versions of Exchange earlier than Exchange 2007.


Exchange Server Performance


 The server needs megacycles for networking, user activity, and database verifications, among others.
 The iSCSI software initiator requires more CPU.
 FC and iSCSI hardware initiators scale 10% to 15% further.

EXCHANGE SERVER PERFORMANCE


Because of the I/O load on an Exchange system, NetApp products may not help to increase the number of
users that can be sustained by one system. Another aspect to be aware of is that if a customer uses iSCSI
with software initiators, the customer needs more CPU headroom. This overhead can range from 10% to
15%, depending on the system load. The customer may want to use a hardware initiator for easier scaling.


Data Resiliency and Efficiency


[Diagram: Site A with a Client Access Server and a database availability group (DAG). SnapManager for Exchange and SMBR protect the active Database A and the replica Database B; backups are taken at 9:00 AM, 9:15 AM, and 9:30 AM, and NetApp deduplication is applied to the backup storage.]

DATA RESILIENCY AND EFFICIENCY


Description
This course now introduces two notions:

Storage resiliency: provided by separate storage systems for the active and passive nodes. In addition,
these storage systems can be clustered.
Space efficiency: provided by the NetApp deduplication feature that is run against the NetApp Exchange
volumes on the storage

Advantages of having SME:

All the advantages of the previous scenario remain.


You also now drive down space consumption at the passive copy. This further reduces the need for
additional storage space.

A database availability group (DAG) is a set of up to 16 Microsoft Exchange Server 2010 Mailbox servers that
provide automatic database-level recovery from a database, server, or network failure. Mailbox servers in a
DAG monitor each other for failures. When a Mailbox server is added to a DAG, it works with the other
servers in the DAG to provide automatic, database-level recovery from database, server, and network
failures.


Exercise 13
Module 6: Case Study:
Student Activity 3

Time Estimate: 30 Minutes


EXERCISE 13
Please refer to your exercise guide.


Lesson 4
Database Specifics


LESSON 4: DATABASE SPECIFICS


Database Administrators
Versus Storage Administrators
Typically, a database administrator (DBA) gives a storage
administrator storage and space layout requirements, and the
storage administrator is responsible for allocating the storage space
that is needed.
Unfortunately, the DBA and the storage administrator are driven by
two different goals:
 The storage administrator wants to keep costs down and get high
 storage-utilization rates.
 The DBA wants to get as much storage space as possible to avoid
 problems later on.


DATABASE ADMINISTRATORS VERSUS STORAGE ADMINISTRATORS


Traditionally, a battle exists between database administrators (DBAs) and storage administrators. DBAs
always want more space, and storage administrators always want to use less space by achieving higher
utilization from available disks.


NetApp Database and Application Solutions

NetApp has partnerships, solution sets, and
resources for the following:
Oracle (database and applications)
IBM DB2
SQL Server
Sybase
SAP


NETAPP DATABASE AND APPLICATION SOLUTIONS


In addition to Microsoft for Exchange, NetApp has partnerships with Oracle, IBM for DB2, Microsoft SQL
Server, Sybase, and SAP. Because SAP always runs on top of another database, SAP is included here.
NetApp SnapManager for SAP is currently only for SAP running on Oracle, which currently is only on
Solaris. SnapManager software for each of these products provides a similar suite of functionality as
previously described: provisioning the storage, working with flexible volumes, and using Snapshot copies,
SnapMirror relationships, and the SnapRestore feature.


Database and Database Object Creation and Modification

Pain point: Creating duplicates of databases is a time-consuming and difficult process and uses valuable storage resources.

NetApp solution: An easy, space-efficient, and relatively inexpensive way to make copies of a database for testing, quality assurance (QA), and development. FlexClone software is the key feature that facilitates the solution.

DATABASE AND DATABASE OBJECT CREATION AND MODIFICATION


The next pain point is database and data object creation and modification: in other words, creation of
duplicates of databases. This process is time-consuming and difficult and uses system resources.
NetApp FlexClone software is an inexpensive way of making copies of a database for testing, quality
assurance (QA), and development. This feature is important in database environments and is probably where
FlexClone software is the most obvious fit, although it has many uses outside of the application world.


Volume Cloning: How It Works


1. Start with a volume.
2. Create a Snapshot copy.
3. Create a clone (a new volume based on the Snapshot copy).
4. Modify the original volume.
5. Modify the cloned volume.
Result: independent volume copies that are efficiently stored.

[Diagram: Volume 1, the Snapshot copy of Volume 1, and the cloned volume all point to the same data written to disk; only the changed blocks of Volume 1 and of the cloned volume consume new space.]

VOLUME CLONING: HOW IT WORKS


Here are all of the blocks on disk. At this point, you make a Snapshot copy of these blocks. No space is used
at this time. The Snapshot copy is a read-only copy.
The next step is to create a cloned volume based on the Snapshot copy. The clone ties to the same blocks of
active data as the Snapshot copy and uses them as its base.
As changes are made to the data, the changes are tracked separately. The changes to the clone do not affect
the original volume, and changes to the original volume do not affect the clone. The advantage is that space
requirements do not double with the clone. Because it shares blocks with the original volume, only changed
data takes up additional space.
This makes the clone space-efficient and near-instantaneous to create, because no data movement occurs, only
replication of pointers in the metadata to the original data blocks.
Much less space is used, and much less time is spent creating the clone. Given a 2-TB database, making a
physical copy takes hours. With FlexClone software, the moment that the command is typed, the cloned
volume is available and ready to use. DBAs love cloning. Typically, you take the clones from the mirror to
offload the additional I/O from production spindles, but this is not a requirement.
In the case of a database failure, using FlexClone software, an administrator can perform the restore, get
production up and running, and take a clone off of it, prior to the restore, to run tests and scenarios to
determine what happened.
If the administrator makes changes and realizes that the copy must be independent, the administrator can use a
clone-splitting command. At that point, in the background, the controller copies all of those blocks so that
they are separate blocks on disk that exist completely independently of each other as separate volumes.
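The copy-on-write behavior described above can be sketched in a few lines of Python. This is an illustrative model, not NetApp code: the `Volume` class and its methods are invented names, and a dict of block numbers stands in for the on-disk block pointers.

```python
# Illustrative model of FlexClone-style cloning (not NetApp code).
# A dict of block number -> data stands in for a volume's block pointers.
# Copying the dict models copying pointers in the metadata; the block
# contents themselves are shared, not duplicated.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)  # copy the pointer table, not the data

    def snapshot(self):
        # A Snapshot copy is a read-only copy of the pointer table;
        # no block data is copied, so it consumes no space at creation.
        return dict(self.blocks)

    @classmethod
    def clone_from(cls, snap):
        # A clone starts with the Snapshot copy's pointers as its base.
        return cls(snap)

    def write(self, blkno, data):
        # Changed data goes to new blocks; shared blocks stay untouched.
        self.blocks[blkno] = data

vol = Volume({0: "base0", 1: "base1"})
snap = vol.snapshot()                 # near-instantaneous, no data movement
clone = Volume.clone_from(snap)

vol.write(0, "parent-change")         # does not affect the clone
clone.write(1, "clone-change")        # does not affect the original volume

assert clone.blocks[0] == "base0"     # still shared with the Snapshot copy
assert vol.blocks[1] == "base1"
assert snap == {0: "base0", 1: "base1"}
```

Only the changed blocks consume new space; the clone-splitting command described above corresponds to deep-copying every remaining shared block so that the two volumes become fully independent.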


Cloning for Testing and Development


Traditional Approach
1. Prepare target system volumes for the database files and database file system.
2. Shut down the source database or put the database into online backup mode.
3. Copy the data file to the target system volumes.
4. Repeat steps 2 and 3 for each data file.
5. Restart the source database.
6. Copy the database file system to the target system.
7. Configure the target system database server.
8. Restart the new target database server.
9. Roll forward redo logs if required.
(Involves the DBA, the server administrator, and the storage administrator; 728 hours total time every year for cloning 10 production systems.)

NetApp Approach
1. Select the source clone.
2. Select the target system.
3. Click the mouse a few times to submit selections.
(9 hours total time every year for cloning 10 production systems.)

CLONING FOR TESTING AND DEVELOPMENT


If you clone 10 production systems, each with a 500-GB Oracle database, expect to need at least one clone per
week (for example, for patching, schema and database extension testing, and database upgrades).
Traditional time per database clone is approximately 14 hours on a 100-Mbps network or 1.4 hours on a
1-Gbps network.
NetApp time per database clone is less than one minute.
Now examine the 1-Gbps network times from the example:


Total time = 10 systems x 1.4 hours per clone x 52 weeks per year = 728 hours per year
Total time for NetApp = 10 systems x 1/60 of an hour per clone x 52 weeks per year = approximately 9
hours per year
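The yearly totals follow from simple arithmetic; the numbers in this quick check are taken from the example above.

```python
# Verify the yearly cloning-time totals from the example above.
systems, weeks = 10, 52
traditional_hours_per_clone = 1.4     # per clone on a 1-Gbps network
netapp_hours_per_clone = 1 / 60       # less than one minute per clone

traditional = systems * traditional_hours_per_clone * weeks
netapp = systems * netapp_hours_per_clone * weeks

print(round(traditional), round(netapp))  # 728 9
```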


SnapManager for SQL Server: Overview


Provides integrated data management for SQL Server 2000, 2005, and 2008 databases:
 Automated, fast, and space-efficient backups by using Snapshot technology
 Automated, fast, and granular restores and recovery by using SnapRestore technology
 Integration with the SnapMirror product family for database replication
 Tight integration with Microsoft technologies such as MSCS and volume mount points


SNAPMANAGER FOR SQL SERVER: OVERVIEW


The integration of SnapManager for Microsoft SQL Server is similar to that of SnapManager for Exchange.
SnapManager for SQL Server:


Manages virtual devices


Integrates with VSS
Creates Snapshot copies
Updates SnapMirror relationships
Creates consistent Snapshot copies by quiescing writes
Integrates into the API in a way that is similar to SnapManager for Exchange
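The quiesce-then-snapshot sequence that SnapManager drives through VSS can be sketched as follows. All names here (`Database`, `quiesced`, `take_snapshot`) are illustrative, not the SnapManager or VSS API.

```python
# Sketch of the quiesce -> Snapshot copy -> resume sequence (illustrative only).
from contextlib import contextmanager

class Database:
    def __init__(self):
        self.writes_frozen = False

    def freeze(self):
        # Flush buffers and hold new writes so the on-disk image is consistent.
        self.writes_frozen = True

    def thaw(self):
        self.writes_frozen = False

@contextmanager
def quiesced(db):
    db.freeze()
    try:
        yield db
    finally:
        db.thaw()  # writes always resume, even if the snapshot step fails

def take_snapshot(db):
    assert db.writes_frozen, "a consistent copy requires quiesced writes"
    return "consistent-snapshot"       # near-instantaneous on the controller

db = Database()
with quiesced(db):
    backup = take_snapshot(db)
print(backup, db.writes_frozen)        # consistent-snapshot False
```

The point of the context manager is that the write freeze lasts only for the seconds the Snapshot copy takes, and writes are guaranteed to resume afterward.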


Sizing
For SQL and all databases, use the database
sizing tool, which calculates space and
performance:
https://sizers.netapp.com

You can find many how-to technical reports


on the external Web:
http://www.netapp.com/library/tr/


SIZING
Sizing is important in database environments, perhaps more so than in Exchange environments. Exchange is a
specialized database.
NetApp has a good database sizer, similar to the Exchange sizer. This sizer is ready for major supported
databases: Oracle, Microsoft SQL, Sybase, and DB2. It sizes for space and performance.
By clicking the second link that is shown here, you can find many technical reports, including reports that
NetApp co-authored with Oracle and, in some cases, Red Hat. You can find the best practices as
recommended by all three parties.


Lesson 5
Server Virtualization

NetApp Confidential

LESSON 5: SERVER VIRTUALIZATION


Virtualization Increases Storage Demands


                                               Before              After
                                               Virtualizing        Virtualizing
                                               Servers*            Servers

The number of applications per server          1                   More than 10
The number of down applications on
storage failure                                1                   More than 10
The amount of lost data on dual-disk failure   1x                  10x
The number of physical servers                 10x                 1x
The backup data volume                         1x                  10x
The possibility of meeting the backup window   Feasible            Maybe not
Disaster recovery                              Costly and complex  More complex
Provisioning                                   Slow and complex    Slow and complex
                                                                   (storage servers)

* Typical configuration: DAS, RAID 5, and tape backup

VIRTUALIZATION INCREASES STORAGE DEMANDS


Server virtualization allows dramatic levels of server consolidation, often in the range of 10:1. This gets over
the old silo design of one application to one server.
However, a storage failure in a virtualized server can take down 10 applications, not just one. This leads to a
need for more reliable storage.
A dual-disk failure (or more commonly, a failure with a media error on rebuild) means that data sets of 10
applications must be reloaded, not just one. This means that a company needs something better than RAID 5.
With 10 times more data on a server, a company may not be able to make its backup windows, so it needs
faster backup.
In addition, with IT operations that are more and more critical, disaster recovery continues to increase in
priority. Disaster recovery is difficult in a direct-attached storage (DAS) environment but becomes practical
with virtualized servers and storage-based disaster recovery.
While server virtualization enhances server provisioning greatly, the result is fast server and slow storage
provisioning, unless other means of storage provisioning are integrated.


Flash Cache Use Case:
an Opportunity for Deduplication

 Clones consume storage that is equal to the size of the template.
 Clones are 100% identical: OS software, patches, software drivers, and application data.
 To deduplicate virtual machine (VM) blocks, use Flash Cache to help to accelerate concurrent data access.

[Diagram: an ESX server with several .vmdk files (each an OS plus application) in Data Store A. On a NetApp FAS system, FlexVol technology above the RAID layer eliminates the duplicate data and accelerates access; traditional enterprise RAID arrays store every copy. .vmdk = Virtual Machine Disk]

FLASH CACHE USE CASE: AN OPPORTUNITY FOR DEDUPLICATION


VMware provides a great opportunity for deduplication with NetApp. VMware stores redundant data in each
virtual machine (VM), such as the OS, patches, and software applications that are common to every VM.
NetApp can reduce redundant data to a single instance with deduplication. This can save as much as 90% of
space, which significantly reduces storage costs.
This capability is unique to NetApp and is a strong selling point against the competition. Only NetApp can
perform deduplication on primary data.
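Block-level deduplication of near-identical VM images can be sketched with content hashing. This is an illustrative model, not the Data ONTAP implementation; the `dedupe` function and the 4-KB block size are assumptions for the example.

```python
# Illustrative block-level deduplication via content fingerprints
# (not the Data ONTAP implementation).
import hashlib

BLOCK = 4096  # assume 4-KB blocks

def dedupe(volumes):
    store = {}      # fingerprint -> single physical copy of the block
    pointers = {}   # (volume name, logical block number) -> fingerprint
    for name, data in volumes.items():
        for i in range(0, len(data), BLOCK):
            block = data[i:i + BLOCK]
            fp = hashlib.sha256(block).hexdigest()
            store.setdefault(fp, block)          # keep each block only once
            pointers[(name, i // BLOCK)] = fp
    return store, pointers

# Ten VMs cloned from one template share almost all of their blocks.
template = b"guest-os-image" * 600
vms = {f"vm{i}": template + f"vm{i}".encode() for i in range(10)}

store, pointers = dedupe(vms)
logical = sum(len(d) for d in vms.values())
physical = sum(len(b) for b in store.values())
print(f"space savings: {1 - physical / logical:.0%}")
```

The savings depend on how much of each VM is shared template data; with identical OS images the shared blocks collapse to a single physical copy, which is where the large reductions quoted above come from.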


Accelerating the Adoption of Virtualization

                     VMware Technologies        NetApp Technologies

Server               Transparent memory         Deduplicated array cache
                     sharing
Storage              Linked clones              FlexClone zero-cost clones
                     VMDK thin provisioning     Storage that uses thin
                                                provisioning and deduplication
Server-to-storage    FC, iSCSI, NFS,            FC, iSCSI, NFS,
interconnect         FCoE, and CIFS             FCoE, and CIFS
Application          Disaster-recovery          Dynamic FlexShare
performance          solutions and virtual      quality-of-service tool
                     applications

ACCELERATING THE ADOPTION OF VIRTUALIZATION


This diagram shows the building block features of VMware on NetApp and how they address system areas.


Always-On Server and Data Mobility


Microsoft Live Migration
 Nondisruptive migration of VMs across physical machines
 Storage-vendor independent

NetApp Data Motion
 Migration of data stores across NetApp storage systems
 Storage-array balancing
 Technology refresh
 Capacity management
 Moves hundreds to thousands of data stores in a single operation

[Diagram: Hyper-V hosts migrate VMs between physical machines while Data Motion moves data between systems in a storage pool.]

ALWAYS-ON SERVER AND DATA MOBILITY


Explain how NetApp extends continuous server availability to storage.
Microsoft Live Migration enables VMs to be moved from one physical server to another without disrupting
applications for purposes of workload balancing, resource optimization, maintenance, upgrades, and so on.
NetApp Data Motion complements Microsoft Live Migration.
The benefits of Data Motion are:

 No planned downtime for:
  - Storage-capacity expansion
  - Scheduled maintenance outages
  - Technology refresh
  - Software upgrades

 Improved SLA flexibility:
  - Dynamic load balancing
  - Adjustable storage tiers

 Application transparency:
  - Performance
  - Transaction integrity


Citrix Essentials for XenServer


The data-management capabilities of the Data ONTAP
operating system are directly integrated in XenServer.
[Diagram: eight VMs (VM1 through VM8), each running its application and OS, are managed by the Vserver administrator; the underlying storage pool is managed by the storage administrator.]

CITRIX ESSENTIALS FOR XENSERVER


Codeveloped with Citrix, the XenServer integrated adaptor enables server administrators to manage NetApp
storage directly from the XenCenter console.
The NetApp integrated storage adaptor for Citrix XenServer enables your customers' server administrators to
increase productivity by managing common storage functions within the XenCenter console.
With NetApp solutions, storage is provisioned the instant that customers create a VM.
Accelerate test and development or production from weeks to minutes with instant storage provisioning and
cloning of XenServer VMs.
Protect the XenServer virtual infrastructure with automated data protection and recovery of VMs without
impacting application servers.


Lesson 6
Virtual Desktop Infrastructure (VDI)


LESSON 6: VIRTUAL DESKTOP INFRASTRUCTURE (VDI)


The Promise of Virtual Desktops


Simplify desktop management:
Reduce the need for intensive technical support.
Reduce the number of PC images.

Lower costs by addressing staffing costs and


data-recovery costs.
Reduce data loss by making backup less
challenging and therefore more likely.
Improve security and compliance:
Control data portability.
Centralize continuous security upgrades and
patches.

THE PROMISE OF VIRTUAL DESKTOPS


Virtual desktops are rapidly being adopted by organizations because of the potential improvements to desktop
computing.
Key Points:
Virtual desktops can simplify desktop management. For example, VDI reduces the number of desktop images
that must be managed and maintained by eliminating the need for a different image for each desktop model.
VDI even allows employees to use their own PCs, which enables IT to offload PC hardware support.
VDI promises lower costs, especially reduced administrative requirements and PC refresh costs. Note that
while costs can be reduced, the savings generally appear in the TCO rather than in up-front costs, because of
the investment in data-center infrastructure.
An early driver for VDI adoption was the ability to reduce data loss by moving data from the desktop to the
data center, where backup and disaster recovery can be applied more consistently.
The transfer of data from the desktop also improves data security. Access to data is controlled centrally, and
this avoids security exposure if a PC is stolen.
Finally, a major reason why companies adopt virtual desktops is to streamline the migration to Windows 7 by
rolling Windows 7 out centrally and by virtualizing applications that don't run within Windows 7.


Storage Challenges
Storage costs increase: If 500 servers require much
space, what about 50,000 desktops?
High availability is critical:
It is important to maintain access to desktop systems.
Performance blockages occur when thousands of
systems boot at the same time.

Mass deployment time frames are lengthy because of


the need to provision VMs and storage for tens and
hundreds of desktops at a time.
Storage is central to the security and control of user
data, so regular backups, data retention, and
immutable storage are required.

STORAGE CHALLENGES


The Promise of Virtual Desktops


Simplify desktop management
Lower costs by addressing staffing costs and
data-recovery costs
Reduce data loss
Improve security and compliance
Streamline OS upgrades such as to
Windows 7


THE PROMISE OF VIRTUAL DESKTOPS




Typical VMware View Architecture


[Diagram: clients (laptops, desktops, and thin clients) connect through a desktop broker (VMware View Composer as the connection broker) to individual and pooled virtual desktops, which run as VMs on the hypervisor (VMware ESX) on physical servers in the data center.]
TYPICAL VMWARE VIEW ARCHITECTURE


35

Flexible Storage for Virtual Desktops


Create unified storage for virtual desktops:
Efficiency for both individual and pooled desktops
Provision thousands of VMs in minutes
SAN for desktops; NAS for user data
Create instantaneous clones
Scale capacity in real time
Support thousands of desktops per system
Meet any virtual desktop requirement with a single system.
[Diagram: a single storage pool serves virtual desktops over SAN and user data storage over NAS.]


NetApp Confidential

FLEXIBLE STORAGE FOR VIRTUAL DESKTOPS


36

Business Continuance for Desktops


Instant backup and recovery
Transparent recovery from component failure
Automatic failover for system and site failures
Recovery in minutes from larger regional disasters
[Diagram: users in Building 1 and Building 2 fail over to a disaster-recovery site that is protected
by SnapMirror software with SRM; View users stay connected.]


NetApp Confidential

BUSINESS CONTINUANCE FOR DESKTOPS


37

Provision Thousands of VMs in Minutes


Rapid cloning of individual desktops
Thin clone technology that minimizes storage use
Ability to provision thousands of VMs in less than 10 minutes
[Diagram: desktop golden images (two Windows Vista desktops and one Windows XP desktop) are cloned
per desktop or per data store to produce virtual desktops in cloned data stores.]
NetApp Confidential

38

PROVISION THOUSANDS OF VMS IN MINUTES


Key point: New FlexClone granular cloning capabilities are ideal for server and desktop environments.
FlexClone technology can now clone at the subvolume level for both file and LUN cloning.
The primary use case is VDI: clone individual VMs.
This capability has been available since Data ONTAP 7.3.1 software at no additional cost.
FlexClone technology enables you to provision 5,000 VMs in less than 30 minutes. Later in this course, a
demonstration shows more about this topic.
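The subvolume (file-level) cloning described in the notes can be driven from the Data ONTAP command line. The following is a hedged sketch for clustered Data ONTAP; the Vserver, volume, and file names are illustrative, and exact options vary by release:

```shell
# Clone one desktop's virtual disk from a golden VMDK inside the same
# FlexVol volume (names are examples only).
volume file clone create -vserver vs1 -volume vdi_datastore \
    -source-path /gold/win7_gold.vmdk \
    -destination-path /clones/desktop001.vmdk

# Repeat (for example, in a loop) to stamp out thousands of desktops;
# each clone shares blocks with the golden image until it is written to.
```

In practice this command is usually issued by tooling such as the Rapid Cloning Utility rather than by hand.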


Automated Desktop Provisioning


The NetApp Rapid Cloning Utility:
An integrated cloning and provisioning utility for VMware VMs
A vCenter plug-in
The ability to import VMs into VMware View Manager
No charge to NetApp customers

NetApp Confidential

AUTOMATED DESKTOP PROVISIONING


39

Traditional VDI Storage Deployment


Storage is provisioned and connected as a data store.
Virtual desktops store their data on VMs.
Virtual disks are copied one at a time to create VMs.
[Diagram: four 500-GB data stores, each containing VDI virtual machines, consume more than 2,000 GB
of allocated space on a traditional storage array.]

NetApp Confidential
40

TRADITIONAL VDI STORAGE DEPLOYMENT


Note the process and time that are required to create storage and copy VMware clones from template Virtual
Machine Disks (VMDKs).


VDI Storage and Provisioning with FlexClone Software

Create a primary image data store and perform deduplication.
Create FlexClone data stores as needed, instantly, and with zero space.
Data stores and virtual disks are immediately available.
[Diagram: a 500-GB gold master data store is cloned into data stores A, B, and C, each consuming
0 GB; the NetApp FAS system has only 500 GB allocated in total.]

NetApp Confidential

41

VDI STORAGE AND PROVISIONING WITH FLEXCLONE SOFTWARE


After the first data store and set of VMDKs are created, all subsequent provisioning is immediate and
consumes no additional storage.
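As a rough illustration of the workflow on the slide, the zero-space clone data stores can be created from the gold master with commands like the following. This is a sketch for clustered Data ONTAP; the Vserver, volume, and Snapshot names are invented for the example:

```shell
# Take one Snapshot copy of the 500-GB gold master data store...
volume snapshot create -vserver vs1 -volume gold_ds -snapshot base

# ...then create zero-space FlexClone volumes A, B, and C from it.
volume clone create -vserver vs1 -flexclone clone_a \
    -parent-volume gold_ds -parent-snapshot base
volume clone create -vserver vs1 -flexclone clone_b \
    -parent-volume gold_ds -parent-snapshot base
volume clone create -vserver vs1 -flexclone clone_c \
    -parent-volume gold_ds -parent-snapshot base

# Each clone presents a full 500-GB data store but consumes space only
# for blocks that diverge from the gold master.
```

A FlexClone license is required for volume cloning on the releases this course covers.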


Lesson 7
The FlexPod Solution

NetApp Confidential

LESSON 7: THE FLEXPOD SOLUTION


42

Introducing the FlexPod Solution


Benefits:
Low-risk standardized shared infrastructure supporting a wide range of environments
Highest possible data center efficiency
IT flexibility, providing business agility: scale out or up, but manage resource pools
Shared infrastructure for a wide range of environments and applications

Features:
Cisco UCS B-Series with Cisco UCS Manager
Cisco Nexus family switches
NetApp FAS storage
10 GE and FCoE
Complete bundle: a complete data center in a single rack
Performance-matched stack
Step-by-step deployment guides
Solutions guide for multiple environments
Multiple classes of computing and storage supported in a single FlexPod
Centralized management: NetApp OnCommand and Cisco UCS Manager

NetApp Confidential

43

INTRODUCING THE FLEXPOD SOLUTION


The FlexPod solution is a best-of-breed infrastructure foundation that supports virtualized and
nonvirtualized workloads by using Cisco UCS and Nexus (servers and network) and NetApp FAS (storage
systems): best-of-breed unified compute, unified network, and unified storage.
NetApp will soon introduce the FlexPod for VMware solution, the first FlexPod solution to be launched.
The FlexPod solution is built around three key capabilities:
Lower risk, with a validated, simplified data-center solution and a cooperative support model for a safe
and proven journey to virtualization and toward the cloud
Enabled business agility, with flexible IT that scales out and up to fit multiple use cases and environments
such as SAP, Exchange 2010, SQL, VDI, and secure multi-tenancy (SMT)
Reduced TCO, with higher data-center efficiency, fewer operational processes, reduced
energy consumption, and maximized resources


Data-Center Efficiency and Flexibility


High density, low cost, less power, and less space
Rapid response to changing business needs

Cisco UCS Platform and Unified Fabric:
High-density virtualization and computing
Cable once and consolidate wiring
10 GE unified and virtualized fabric
Resource pooling

VMware vSphere (the industry's leading server-virtualization technology):
VMware vMotion and Storage vMotion
VMware Distributed Resource Scheduler
Resource pooling

NetApp FAS (guaranteed storage efficiency):
RAID-DP technology and deduplication
Thin provisioning
Space-efficient clones
Thin replication
NetApp DS2246 disk shelves
Resource pooling

NetApp Confidential

44

DATA-CENTER EFFICIENCY AND FLEXIBILITY


This slide outlines the technologies that each vendor brings to jointly drive data-center efficiency.
In this example, the vendors' technologies act together to drive higher efficiencies than are possible
individually.


Secure Multi-Tenancy That Is Built on FlexPod for VMware

Layer on or enable software:
NetApp MultiStore and the FlexShare tool
VMware vShield zones and applications
VMware vSphere Enterprise Plus
Security hardening
The Cisco Nexus 1000V series
Cisco SAFE architecture

Enable capabilities:
Multi-tenancy and secure separation
Service availability and disaster recovery
Service management
Service assurance
Workload isolation and mobility

The Enhanced Secure Multi-Tenancy (SMT) Cisco Validated Design, released October 2010

NetApp Confidential

45

SECURE MULTI-TENANCY THAT IS BUILT ON FLEXPOD FOR VMWARE


The Enhanced SMT deployment guide will be built on FlexPod for VMware infrastructure.
Layering on top of the FlexPod solution allows full-blown cloud solutions such as secure multi-tenancy to be
built. (See the recently released Enhanced Secure Multi-Tenancy CVD.)


Build on Existing Investments


[Diagram: the FlexPod stack is built up in layers: unify computing with the Cisco Unified Computing
System, unify the fabric with the Cisco Nexus family, and virtualize storage with NetApp V-Series
systems in front of an existing FC SAN storage array.]
Protect investments.
Achieve benefits in each layer as you go.
Move stepwise rather than all at the same time.

NetApp Confidential

46

BUILD ON EXISTING INVESTMENTS


Rather than taking a "big bang" or forklift-replacement approach, you can evolve to the FlexPod solution
and get benefits as you go.
Because different equipment may not be on the same lease cycle, this approach protects existing investments and
allows you to move stepwise at each layer toward the FlexPod solution while you start to receive the FlexPod
benefits.
The cooperative support agreement covers a wide range of configurations.


Module Summary
Now that you have completed this module, you
should be able to:
Discuss why companies should use NetApp
technology for applications
Discuss the value of the WAFL file system for
load-balancing databases
Articulate the value and history of using NetApp
systems for messaging and collaboration
Discuss the value of using NetApp systems in
database environments
NetApp Confidential

MODULE SUMMARY


47

Module 7
Data Protection

NetApp Confidential

MODULE 7: DATA PROTECTION


Module Overview
This module focuses on NetApp Data Protection
solutions:
Cluster-Mode
OnCommand
SnapVault
Compliance

NetApp Confidential

MODULE OVERVIEW


Module Objectives
After this module, you should be able to:
Discuss general trends in the data-protection
market
Articulate the value of the NetApp
OnCommand application
Discuss the challenges that the NetApp
SnapVault feature solves
Discuss the challenges and solutions that are
involved in compliance and information-lifecycle management
NetApp Confidential

MODULE OBJECTIVES


Lesson 1
Data Protection

NetApp Confidential

LESSON 1: DATA PROTECTION


NetApp Integrated Data Protection


[Diagram: a primary data center with application agents, Snapshot copies, and continuous availability
is managed and monitored centrally; SnapMirror replicates to a DR site for disaster recovery and for
clones for development and test; SnapVault and OSSV back up the primary data center and a remote
office to an archive, along with application backup.]
NetApp Confidential

NETAPP INTEGRATED DATA PROTECTION


Start) Show the availability components and local backups: Snapshot copies with SnapManager, continuous
availability, disaster recovery, and business agility.
Build 1) Show the addition of backup for long-term protection, both for NetApp systems and for competitors'
(remote office) systems.
Build 2) Show the addition of archive and enterprise content management (ECM) for long-term retention and
compliance requirements.


Data ONTAP 8.1 Cluster-Mode Replication


Intra- and intercluster data
protection (DP) mirrors
Volume-level replication
Mirrors Snapshot copies
Storage efficiency aware
Replicate between any
aggregate types
Supports all protocols

NetApp Confidential

DATA ONTAP 8.1 CLUSTER-MODE REPLICATION


Data ONTAP 8.1 Cluster-Mode SnapMirror

Intra- and intercluster replication options
Asynchronous volume SnapMirror
Storage efficiency savings are preserved
[Diagram: volumes and LUNs on a primary cluster in the main data center are mirrored over the WAN to
a secondary cluster in a remote data center.]
NetApp Confidential

DATA ONTAP 8.1 CLUSTER-MODE SNAPMIRROR

8.0 mirrors are not upgradable to 8.1 mirrors.
No 7-Mode to Cluster-Mode replication
Asynchronous only
No SMoFC (SnapMirror over FC)
In 8.1, no support for:
Cascading
Vserver-level management
Vserver DR
Tape or disk seeding
All DP mirrors require licenses on source and destination clusters.
LS mirrors require no license.
Configuring LS mirrors for Vserver root volumes is a best practice.


Intracluster Data Protection Replication


Provides local on-cluster data protection
Target volumes can be in the same Vserver as the source or in a different Vserver
Data transfers over cluster interconnects
[Diagram: a read-write (RW) source volume is mirrored to a data protection (DP) volume within the
same cluster, off the data network.]

NetApp Confidential

INTRACLUSTER DATA PROTECTION REPLICATION


Intercluster Data Protection Replication


Replication between volumes that reside on different clusters enables disaster recovery.
Data transfers across the WAN use intercluster LIF connections.
[Diagram: a read-write (RW) source volume replicates over an intercluster LIF connection across the
WAN to a data protection (DP) destination volume.]
NetApp Confidential

INTERCLUSTER DATA PROTECTION REPLICATION
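An intercluster relationship like the one on the slide is typically set up in three steps: create an intercluster LIF on each node, peer the clusters, then create and initialize the mirror. The following is a hedged sketch; addresses, node, aggregate, and volume names are placeholders, and exact syntax varies by Data ONTAP release:

```shell
# 1. On each cluster, create an intercluster LIF per node (one node shown).
network interface create -vserver cluster1-01 -lif icl1 -role intercluster \
    -home-node cluster1-01 -home-port e0c \
    -address 10.0.1.10 -netmask 255.255.255.0

# 2. Peer the two clusters, pointing at the remote intercluster LIFs.
cluster peer create -peer-addrs 10.0.2.10 -username admin

# 3. On the destination, create a DP volume, then create and baseline the mirror.
volume create -vserver vs_dr -volume vol1_dst -aggregate aggr1 -type DP -size 1t
snapmirror create -source-path vs1:vol1 -destination-path vs_dr:vol1_dst -type DP
snapmirror initialize -destination-path vs_dr:vol1_dst
```

After the baseline transfer, `snapmirror update` (manual or scheduled) keeps the destination current.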


Replication for Load Sharing Mirrors


Read-only mirrors of a NAS FlexVol volume
Scale client read requests to increase data throughput and balance workload across nodes
Transparent to the namespace: clients are automatically directed to a read-only mirror
Simultaneously schedule automatic resynchronization of all mirrors

NetApp Confidential

10

REPLICATION FOR LOAD SHARING MIRRORS


A FlexVol volume is scaled to enhance read performance via a load-balancing, asynchronous mirroring capability.
Some applications require scaling read throughput well beyond write throughput, and this scaling option
is an effective choice in those cases.
Best practice: create a mirror on each node, including the node that hosts the source volume, so that access is
always local.
Typical use cases:
Reference data
Shared libraries or binaries
Netboot images
Use load-sharing mirrors where read-only access throughput is needed.
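The best practice of placing a load-sharing mirror on every node might be scripted roughly as follows. This is a sketch for clustered Data ONTAP; the Vserver, volume, and aggregate names are illustrative:

```shell
# Create one load-sharing (LS) mirror of the Vserver root volume per node.
volume create -vserver vs1 -volume root_ls1 -aggregate aggr_node1 -type DP
volume create -vserver vs1 -volume root_ls2 -aggregate aggr_node2 -type DP
snapmirror create -source-path vs1:rootvol -destination-path vs1:root_ls1 -type LS
snapmirror create -source-path vs1:rootvol -destination-path vs1:root_ls2 -type LS

# Baseline the whole LS set once, then resynchronize it on demand or on a schedule.
snapmirror initialize-ls-set -source-path vs1:rootvol
snapmirror update-ls-set -source-path vs1:rootvol
```

Note that clients read from the LS mirrors, so changes to the source volume become visible only after the LS set is updated.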


Lesson 2
NetApp OnCommand
Management Software

NetApp Confidential

LESSON 2: NETAPP ONCOMMAND MANAGEMENT SOFTWARE


11

OnCommand Products
Service Automation and Analytics

OnCommand Service Automation:
Capacity planning
Service automation
Service management
Policy-based workflows

OnCommand Insight Service Analytics (OnCommand Insight Balance, OnCommand Insight Assure,
OnCommand Insight Perform, and OnCommand Insight Plan):
Performance analytics
Service catalog for SLAs
Multivendor, multiprotocol

OnCommand Unified Manager and Virtual Storage Console:
Device management
Problem detection
Monitoring and reporting

System Manager:
Simple storage device management

NetApp Confidential

12

ONCOMMAND PRODUCTS
SERVICE AUTOMATION AND ANALYTICS

To help customers achieve the storage efficiency that they require, the newest release of OnCommand
management software groups multiple products into one family and unifies multiple capabilities into one
product.
OnCommand products are designed to make NetApp storage the best choice for physical, virtual, and cloud
environments.
Control NetApp storage with System Manager and My AutoSupport.
System Manager provides simple, workflow-based wizards that automate device-management tasks.
Administrators can quickly set up and efficiently manage NetApp SAN and NAS systems.
Automate NetApp storage infrastructures via OnCommand Unified Manager and SnapManager software.
OnCommand Unified Manager integrates the functions of Provisioning Manager, Protection Manager, and
Operations Manager into one user interface. Through one view, customers can monitor their shared
storage environment and drill down to define storage-service levels and policy-based workflows.
SnapManager software provides the ability to connect to and manage from various platforms, including
from virtualized platforms.
Analyze shared IT infrastructures via the OnCommand Insight products.
OnCommand Insight products provide visibility and optimization across heterogeneous storage
infrastructures. The products that were formerly known as SANscreen and Akorri BalancePoint have been
integrated into OnCommand Insight. With OnCommand Insight, customers can optimize performance,
plan capacity requirements, and ensure that they are meeting their service-level needs.
Insight (SANscreen): Assure, Plan, and Protect


OnCommand Management Products


Simplicity, Efficiency, and Flexibility
OnCommand management software delivers efficiency savings by unifying storage
operations, provisioning, and protection for both physical and virtual resources.

Simple:
Single unified approach
Physical and virtual service
Provide effective storage for the virtualized data center

Efficient:
Automation and analytics
Storage efficiency
Service efficiency
Reduce IT spend by up to 50%

Flexible:
Visibility and insight
Open API that integrates with third-party management products and hypervisors
Rapidly respond to changing demands

NetApp Confidential

13

ONCOMMAND MANAGEMENT PRODUCTS


SIMPLICITY, EFFICIENCY, AND FLEXIBILITY

NetApp OnCommand products enable IT storage teams to unify the operation, provisioning, and protection of
their organization's data and deliver efficiency savings.
Key benefits that enable the savings:
Simple. A unified approach and one set of tools enables management of physical worlds, virtual worlds,
and service-delivery systems. Therefore, NetApp storage is the most effective storage for the virtualized
data center.
Efficient. Automation and analytics capabilities deliver on storage and service efficiency, reducing IT
capex and opex spend by up to 50%.
Flexible. Tools provide visibility and insight into complex, multiprotocol, multivendor environments and
provide open APIs that enable integration with third-party orchestration frameworks and hypervisors.
Therefore, OnCommand products provide a flexible solution that enables rapid response to changing
demands.


OnCommand
Integrated Storage Management and Automation

Integrated offering:
Capabilities provided by multiple earlier products
Unified, extensible policy infrastructure
Server, VM, and application aware
Provisioning, cloning, backup/recovery, and DR policies
Infrastructure-wide RBAC with delegated management
Extensible to other applications
One configuration repository for reporting, event, and audit logs
Unified view, interface choice

Uniform management across workloads:
Snapshot naming, backup-type and retention-period specification, prescripting and post-scripting, and policy
extensions

NetApp Confidential

14

ONCOMMAND
INTEGRATED STORAGE MANAGEMENT AND AUTOMATION

OnCommand management software is the fifth generation of NetApp storage-resource management products.
To improve administrative efficiency, OnCommand products integrate numerous, previously separate
capabilities. These capabilities were previously identified as Provisioning Manager, Protection Manager,
Operations Manager, SnapManager for Virtual Infrastructure (VMware), and SnapManager for Hyper-V
(Microsoft).
OnCommand software provides a unified platform. The unified platform enables creation and extension of
policies that can be specific to servers, VMs, and applications. It centralizes provisioning, cloning,
backup-and-recovery, and disaster-recovery policies and provides security features such as role-based access
control (RBAC) and delegated manageability.
OnCommand software enables management across workloads for snapshot naming, backup-type and
retention-period specification, prescripting and postscripting, and policy extension. It integrates the back end
into one configuration repository for reporting, event, and audit logs and provides one dashboard from which
storage resources can be viewed and interface options can be selected.
OnCommand software is included with the purchase of NetApp storage hardware.


Lesson 3
Components and Architecture

NetApp Confidential

LESSON 3: COMPONENTS AND ARCHITECTURE


15

OnCommand Components
Two packages:
Core services: physical storage manageability
Host services: virtualization plug-ins

NetApp Confidential

16

ONCOMMAND COMPONENTS
OnCommand 5.0 has been packaged into the core and host services, based on physical or virtual
management capabilities.
The core services comprise the core manageability software, pertaining to the tools for physical storage.
The host package encompasses the host plug-ins, based on the type of virtual infrastructure that is supported.
For example, the host package installs the services to monitor and manage the virtual infrastructure (VIM).
When you install host services in a VMware environment, the OnCommand 5.0 host plug-in for vCenter
Server is also automatically installed.


OnCommand Architecture
[Diagram: front-end GUIs (the OnCommand console, NetApp Management Console, Operations Manager
console, and the vSphere Client GUI) connect to back-end servers and services (the DataFabric Manager
server and the OnCommand host services APIs, with Hyper-V plug-ins, VMware plug-ins, and SnapDrive
for Windows) that manage the storage systems.]

NetApp Confidential

17

ONCOMMAND ARCHITECTURE
The architecture diagram identifies the basic components of the OnCommand core and host packages.
The color-coding distinguishes the core components (orange) from the host components (green).
Solid boxes identify front-end GUIs that users interact with directly, and the dashed boxes identify back-end
servers or services that are not directly visible to the user.
The OnCommand console serves as the GUI from which Hyper-V objects are managed and, alternatively, as
the GUI from which VMware objects are managed. The OnCommand console launches the Operations Manager
console and NetApp Management Console, from which the physical environment is managed.
The DataFabric Manager server can be installed in the standard edition or the express edition.
OnCommand host services cache schedules, catalogs, and events for short periods and enable execution
without the DataFabric Manager server.
The plug-ins for Hyper-V and VMware are collections of primitives that enable connection into Hyper-V and
VMware environments.
SnapDrive for Windows software is used only within the Hyper-V environment. It is used for storage
discovery and to manage LUNs and Snapshot copies.
The vSphere Client GUI is native VMware software that is used by the VMware administrator for virtual
environment administration. OnCommand software provides the GUI with access to the storage environment.


OnCommand User Interface Choices

OnCommand Dashboard
vCenter
Microsoft System Center
Customer Portal
Partner Portal

NetApp Confidential

18

ONCOMMAND USER INTERFACE CHOICES


OnCommand software provides a unified dashboard that identifies all storage resources (for at-a-glance status
and metrics) and provides various interface choices.
OnCommand software continuously monitors and analyzes the health of the environment and provides
visibility across the environment. It identifies what is deployed and displays utilization information, enabling
customers to improve their storage-capacity utilization and increase the productivity and efficiency of their IT
administrators.
The dashboard's panels contain information about the system and provide cumulative information about
various aspects of the environment:
Availability: information about the storage controllers and vFiler units that are discovered and monitored
by OnCommand (for example, the number of controllers and units that are down)
Events: status of the storage and server objects. The top five events (ranked by degree of severity) are
listed.
Full Soon Storage: identification of aggregates and volumes that are near capacity (based on the number
of days before capacity will be reached)
Fastest Growing Storage: identification of aggregates and volumes for which space usage is increasing
rapidly and information about growth rate and trend for specific aggregates and volumes
Dataset Overall Status: status of the environment
Resource Pools: identification of the resource pools that, given current usage levels, may experience
space shortages
External Relationship Lags: information about the relative percentages of external SnapVault, qtree
SnapMirror, and volume SnapMirror relationships (with lag times in error, warning, and normal status)
Unprotected Data: number of unprotected storage and server objects that are being monitored

In addition, views are available through virtualization platforms and through self-service customer portals.
The portals are available through the Service Catalog capability and the integrated partner frameworks.

OnCommand Dashboard in Detail

NetApp Confidential

19

ONCOMMAND DASHBOARD IN DETAIL


Using the OnCommand dashboard to review information provides visibility across your storage environment
by continuously monitoring and analyzing its health. You get a view of what is deployed and how it is being
utilized, enabling you to improve your storage-capacity utilization and increase the productivity and efficiency
of your IT administrators. This unified dashboard gives at-a-glance status and metrics, which is far more
efficient than having to use multiple resource-management tools. This web-based interface uses a common web
framework called NWF.
The dashboard is a user-interface window that contains information panels about the system.
NetApp OnCommand has various dashboard panels that provide cumulative information about various aspects
of your environment.
The Availability dashboard panel provides information about the storage controllers and vFiler units that are
discovered and monitored by OnCommand. You can also view the number of controllers and vFiler units that
are in the down state.
The Events dashboard panel provides information about the status of the storage and server objects by listing
the top five events based on their severity.
The Full Soon Storage dashboard panel displays aggregates and volumes that are reaching their capacity. The
information displayed in this panel is based on the number of days in which the capacity threshold will be
breached at the current growth rate.
The Fastest Growing Storage dashboard panel displays aggregates and volumes for which space usage is
increasing rapidly. It also displays the growth rate and trend for a specific aggregate or volume.
The Dataset Overall Status dashboard panel displays the number of datasets in overall error status, overall
warning status, or overall normal status.
The Resource Pools dashboard panel displays the resource pools that may face potential space shortages, based
on current usage levels.
7-19

NetApp Accredited Storage Architect Professional Workshop: Data Protection

2012 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.

NetApp University - Do Not Distribute

The External Relationship Lags dashboard panel displays the relative percentages of external SnapVault,
qtree SnapMirror, and volume SnapMirror relationships with lag times in error, warning, and normal status.
The Unprotected Data dashboard panel displays the number of unprotected storage and server objects that
are being monitored.
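The "days until full" estimate that drives the Full Soon Storage panel can be sketched as simple arithmetic. This is an illustrative model only; the function name and parameters are assumptions for this sketch, not OnCommand code:

```python
def days_until_full(used_gb, capacity_gb, growth_gb_per_day, threshold=1.0):
    """Estimate days until used space reaches threshold * capacity."""
    if growth_gb_per_day <= 0:
        return float("inf")  # not growing: the threshold is never breached
    remaining_gb = threshold * capacity_gb - used_gb
    return max(remaining_gb, 0) / growth_gb_per_day

# A 500-GB volume with 440 GB used, growing 5 GB/day, breaches 90% in 2 days
print(days_until_full(440, 500, 5, threshold=0.9))  # -> 2.0
```

A panel like Fastest Growing Storage would rank volumes by the growth rate used here, while Full Soon Storage ranks them by the resulting number of days.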


Lesson 4
Key Functionality

NetApp Confidential

LESSON 4: KEY FUNCTIONALITY


20

OnCommand: Operations and Provisioning

RBAC
Policies
Monitoring
Reporting
View reports to identify potential storage savings from deduplication
Assign preconfigured services to datasets
Provision and protect at same time

NetApp Confidential

21

ONCOMMAND: OPERATIONS AND PROVISIONING


OnCommand simplifies and standardizes storage operations. Standardized configuration accelerates
deployment and mitigates operational risks. OnCommand software delivers storage management features that
enable business policy compliance. Compliance is achieved by using enterprise-wide configuration
management, distributed policy setting, and customized reporting.
OnCommand is intuitive and helps improve the productivity of storage administrators. The operations
capability of the product helps storage administrators resolve problems faster and improve capacity utilization
by providing a full picture of NetApp storage resources. With just a few clicks, administrators can drill down
to detailed storage system information. And by replacing repetitive, time-intensive tasks with policy-based
automation, they become more productive.
Role-based access control on the centralized console makes it possible for server and database administrators
to perform self-service provisioning. Because these tasks are performed only within the limits of policies
defined by IT architects and based on company business requirements, the system remains stable, efficiently
configured, and under control. Policies that can be assigned to datasets include capacity, storage reliability,
space-provisioning requirements, access mechanisms, and security settings.
Another valuable dimension of operations management is monitoring, analysis, and reporting.
With OnCommand, you can continuously monitor and analyze the health of your storage environment and
thus maintain visibility of what is being deployed and how it is being utilized. This improves both storage
capacity utilization and administrator efficiency.
By streamlining provisioning, OnCommand software enables customers to increase operating efficiency and
eliminate hands-on complexity. The complexity of the underlying storage can be removed for easier
downstream administration. OnCommand allows you to provision and protect data at the same time: the
moment you provision storage, you protect it. No additional steps or time are required.


OnCommand increases operating efficiency and eliminates hands-on complexity by streamlining provisioning.
It provides the ability to provision and protect data at the same time; no additional steps or time are required.
Provisioning with OnCommand allows the automation of complex provisioning processes. Services can be
defined granularly by the storage architect and then be easily and consistently selected by downstream
administrators.
To maximize use of your resources, OnCommand automates NetApp storage efficiency features, including
thin provisioning and primary data deduplication. This automation eliminates unnecessary and wasteful
overprovisioning and provides storage only when needed. In addition, during the provisioning process,
OnCommand can automatically select the best resource to meet a request. As resource pools approach full
allocation, the system can automatically issue alerts and suggest ways to increase available space.


OnCommand: Protection

Grouping of similar requirements
Preset policies
Protection status at a glance
A simplified process
Alerts

NetApp Confidential

22

ONCOMMAND: PROTECTION
OnCommand software simplifies the process of protecting enterprise data by enabling administrators to group
data into datasets and apply preset policies to the datasets. It automatically correlates datasets and underlying
physical storage resources, so administrators do not need to think in terms of the storage infrastructure.
OnCommand software helps protect data by providing administrators with an easy-to-use management
console that they can use to quickly configure and control all SnapMirror, SnapVault, Open Systems
SnapVault (OSSV), and SnapManager operations. Administrators can apply data-protection policies
consistently, automate complex protection processes, and pool backup and replication resources.
A simple dashboard provides an at-a-glance view of comprehensive data-protection information, including
information about unprotected data. The software enables administrators to apply predefined policies to the
data, thus minimizing the potential for error. OnCommand software also provides e-mail alerting to enable
issues to be analyzed and corrected before they significantly impact data protection.


OnCommand Plug-Ins

VMware: Virtual Storage Console
Microsoft: ApplianceWatch PRO for Microsoft System Center

NetApp Confidential

23

ONCOMMAND PLUG-INS
OnCommand plug-ins for VMware and Microsoft provide access to OnCommand control and automation
features from those respective management frameworks.


What is the Storage Service Catalog?

[Diagram: subscribers and a storage architect interact through self-service portals and Data Center
Orchestration; the Service Catalog maps the logical view (application, server, network, and storage services)
to the technology view (metrics, policies, and resource pools).]

Included free with OnCommand
Enables storage as a service
Automates manual processes
Unique to NetApp

NetApp Confidential

24

WHAT IS THE STORAGE SERVICE CATALOG?


The Storage Service Catalog, a component of OnCommand software, is a key service-automation
differentiator for NetApp. It enables storage-provisioning policies, data-protection policies, and storage
resource pools to be integrated into a single service offering that administrators can choose when provisioning
storage. The catalog not only automates much of the provisioning process but also automates a variety of
storage-management tasks that are associated with the policies.
The catalog provides a layer of abstraction between the storage consumer and the details of the storage
configuration, creating storage as a service. The service levels that are defined with the catalog specify and
map policies to the attributes of the pooled storage infrastructure. The higher level of abstraction between
service levels and physical storage enables elimination of complex, manual work and encapsulates storage
and operational processes together for optimal, flexible, and dynamic allocation of storage.
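The mapping the catalog performs, from a named service level down to concrete provisioning and protection policies, can be sketched as a simple lookup. The structure, the service-level names, and the policy names below are illustrative assumptions drawn from this module, not an actual OnCommand data model:

```python
# Illustrative-only sketch of the Storage Service Catalog idea: a named
# service level bundles provisioning and protection policies, so the
# storage consumer chooses "Gold" instead of individual storage settings.
SERVICE_CATALOG = {
    "Gold": {
        "provisioning": ["thin_provisioning", "deduplication"],
        "protection": ["SnapVault", "SnapMirror"],
        "resource_pool": "tier1",
    },
    "Bronze": {
        "provisioning": ["thin_provisioning"],
        "protection": [],
        "resource_pool": "tier3",
    },
}

def provision(size_gb, service_level):
    """Expand a service-level request into the concrete policy set."""
    svc = SERVICE_CATALOG[service_level]
    return {"size_gb": size_gb, **svc}

# An administrator specifies only size and service level:
request = provision(800, "Gold")
print(request["protection"])  # -> ['SnapVault', 'SnapMirror']
```

The abstraction is the point: the consumer never names SnapVault or a resource pool directly; the architect's catalog entry does.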


Lesson 5
Ecosystem Integration

NetApp Confidential

LESSON 5: ECOSYSTEM INTEGRATION


25

APIs and SDK: Choosing the Right Solution

Custom Management
Virtualization Management
In-House Management Tools
Enterprise Management

NetApp Confidential

26

APIS AND SDK: CHOOSING THE RIGHT SOLUTION


NetApp is developing an ecosystem that delivers the value that partner products can provide, while assuring
flexibility and choice for customers. The result is a solution that addresses the unique needs of the
end-customer environment. Key technologies that enable this differentiation are an open API and a free
Software Development Kit (SDK).
The companies that provide IT integration within the NetApp ecosystem represent some of the best-known
names in the industry (such as virtualization-management solutions from Microsoft, VMware, and Citrix and
enterprise-management frameworks from BMC Software, CA, HP, IBM, and Fujitsu).
The OnCommand SDK and the open APIs provide partner platforms with tighter integration at a higher
storage-abstraction layer, thus enabling policy-based automation for protection and provisioning tasks on
NetApp storage.
The goal of NetApps partnerships and of NetApps integration with management and orchestration vendors
is to enable customers to manage their infrastructure from end to endapplications, servers, networks, and
storage.
This strategy enables customers to choose the right solution for their problem and evolve their solution over
time.


NetApp and Multivendor

Control and automation for NetApp storage
Analysis for heterogeneous infrastructure end to end

[Diagram: zones of virtualization management span application silos, ITaaS (aka internal cloud), and external
cloud services, each managed across apps, servers, network, and storage.]

NetApp Confidential

NETAPP AND MULTIVENDOR


27

Storage and Service Efficiency

"I need two 800-GB Oracle instances at the Gold service level." - Application Administrator

[Diagram: the request flows through a Data Center Orchestration Framework (Service Analysis, Service
Catalog, Service Measurement, Policy Infrastructure). Two VMs are provisioned on tier 1 servers, and two
800-GB LUNs are provisioned at the GOLD SLA with thin provisioning, deduplication, SnapVault, and
SnapMirror.]

NetApp Confidential

28

STORAGE AND SERVICE EFFICIENCY


The diagram illustrates how an orchestration framework, the Storage Service Catalog, and analysis capability
can be integrated to enable automated end-to-end management of shared IT infrastructures.
An application administrator requests storage at a high service level.
The request moves to the OnCommand Storage Service Catalog, where predefined policies pair datasets with
service levels for performance, availability, efficiency, and protection. To ensure capacity savings, the process
can include deduplication.
Defined availability and protection levels automatically create backup and replication actions.
Similarly, newly provisioned VMs trigger the policy-based SLAs that are used with physical resources.
SnapManager for VMware and SnapManager for Hyper-V enable the integration.
Finally, Insight analysis products track changes, collect performance data, and send alert messages in regard
to significant events and threshold status.


Storage and Service Efficiency

[Diagram: the same flow as on the previous slide - Application Administrator, Data Center Orchestration
Framework (Service Analysis, Service Catalog, Service Measurement, Policy Infrastructure), SnapVault, and
SnapMirror - annotated to highlight Service Efficiency and Storage Efficiency.]

NetApp Confidential

29

STORAGE AND SERVICE EFFICIENCY


This example illustrates the concept of efficiency, the key value that OnCommand management software
provides.
The example shows how a storage service that is built with OnCommand policy, automation, service-catalog,
and virtualization-awareness capabilities, coupled with NetApp analysis products and integrated with a portal
or orchestration platform, delivers the service and storage efficiency savings that IT organizations require
today.


BMC and NetApp Automation

IT Services: "Provision two VMs, 100 GB each at GOLD SLA"

[Diagram: the request passes through BMC Atrium and an adapter, across the network layer, to NetApp
management of a logical pool of storage, which provisions two 100-GB VMs at the GOLD SLA with thin
provisioning, deduplication, and RAID-DP, plus disaster recovery and off-site replication.]

Atrium manages NetApp storage:
Full-stack automated provisioning
Storage that is automatically provisioned and protected by defined SLAs
Defined SLAs that automatically deliver storage and service efficiency

NetApp Confidential

30

BMC AND NETAPP AUTOMATION


BMC has implemented a software adapter that uses the NetApp open APIs and the NetApp SDK and that
takes full advantage of the Storage Service Catalog to enable full-stack, automated provisioning from BMC's
Business Service Management (BSM) product.
The slide illustrates how a system administrator can automatically provision VMs and storage at a particular
service level. Because service levels are defined through the service catalog, the provisioning process
automatically allocates the storage and protection processes.
This example depicts the integration of a management platform with NetApp management software to enable
service delivery of storage and to leverage NetApp efficiency technologies.


BMC and NetApp Service Management

[Diagram: a BMC stack (Service Desk, Remedy, SIM, Atrium CMDB) connects through SNMP traps and a
connector to the OnCommand Insight Server and the OnCommand Insight Data Warehouse, with a storage
admin managing the infrastructure.]

Business Service Management:
Provide full dependency mapping from the business services to the storage services
Enable automatic Remedy ticket creation for business services as a result of a storage issue
Provide extension of business-level impact analysis into storage
Bidirectional update for the Atrium CMDB and the OnCommand Insight data warehouse

NetApp Confidential

31

BMC AND NETAPP SERVICE MANAGEMENT


This integration covers two use cases:
Asset Management
Discover storage arrays and hosts
Manage the relationship between hosts and storage arrays, and report on capacity, operational recovery, and
replication service
Impact Management
Model and identify the impact of storage availability on business services
Integrate OnCommand Insight data to enable helpdesk tracking and risk mitigation for storage services

Connector availability?
Target is Q3 FY11. The connector is bidirectional: it imports application and business-unit (line-of-business)
information that exists in the CMDB into OnCommand Insight. The information can be used for violation
management, capacity management, and chargeback (assuming the customer has a capacity manager
license).
How frequent is the update of SIM?
Near real time, upon OnCommand Insight SNMP trap generation.


What is done automatically?

The extraction of data from the OnCommand Insight data warehouse, its transformation into a service
model, and the loading of the service model into the CMDB
Importing application and business-unit information from the CMDB to OnCommand Insight

What will I have to do in the CMDB?

Assuming no conflicts exist with the data, everything is done automatically.
When a conflict occurs (for example, server information cannot be found in the CMDB), the CMDB
administrator must resolve it.

What about SIM integration?

The CMDB administrator must look at the traps that are captured from OnCommand Insight and, using the
server and storage information in each trap, find the storage service in the CMDB and change its status.
This requires manual configuration.


NetApp OnCommand Product Portfolio

MANAGE
Control: OnCommand System Manager, My AutoSupport
Automate: OnCommand Unified Manager, Workflow Automation, SnapManager and SnapDrive software
Analyze: OnCommand Insight Balance, OnCommand Insight Assure, OnCommand Insight Perform,
OnCommand Insight Plan

IT INTEGRATION
Access: Virtual Storage Console, OnCommand plug-in for Microsoft, OnCommand plug-ins for BMC, CA,
Tivoli, etc.
Develop: OnCommand API and SDK

NetApp Confidential

NETAPP ONCOMMAND PRODUCT PORTFOLIO


The NetApp management software portfolio, mapped to our current product offerings.


32

OnCommand Portfolio

Enterprise License, Sold Separately: OnCommand Insight
Value-Added, Controller-Based Pricing Attach Sale: SnapManager Suite
Included with Controller (Data ONTAP Essentials): OnCommand 5.0 (Operations Manager, Protection
Manager, Provisioning Manager) and System Manager 2.0

NetApp Confidential

33

ONCOMMAND PORTFOLIO
Most of the components of OnCommand software are delivered with NetApp hardware.
System Manager, which provides basic storage-system management, is ideal for customers who have only a
few controllers. The 2.0 version, which was available as of August 2011, is included with the purchase of a
storage system.
Similarly, OnCommand management software is provided with NetApp storage systems. OnCommand
software is recommended for use with multiple controllers, to enable efficient management of larger
environments. It was available as of September 2011. OnCommand and System Manager are included within
the Data ONTAP Essentials bundle.
To take full advantage of virtualization-aware capabilities, customers must purchase the SnapManager suite,
which includes entitlement to the SMVI and SMHV products.
Finally, NetApp analysis capabilities are provided by the OnCommand Insight products (formerly the
SANscreen and Akorri products). The Insight products have capacity-based enterprise licenses, available separately.


OnCommand Software: Advantages

Delivers storage and service efficiency for NetApp storage
Integrates operations, provisioning, and protection offerings
Reduces IT spend by up to 50% and is ideal for the virtualized data center
Enables storage as a service automation through the Storage Service Catalog
Enables integration with ecosystem partners, a capability that is unique to NetApp
NetApp Confidential

34

ONCOMMAND SOFTWARE: ADVANTAGES


With NetApp OnCommand, you have a single, unified approach to managing your storage simply, efficiently,
and flexibly. OnCommand helps you better control your data and storage, automate common and complex
tasks, and better analyze how to evolve your capacity to meet business needs and help lower costs.
OnCommand delivers on both storage and service efficiency.
Using automation and analytics, OnCommand can help you lower operational costs and better plan your
growth, which can reduce your IT spend by as much as 50%.
Finally, NetApp storage and OnCommand management software provide the ideal shared-storage
infrastructure for the virtualized data center.


Lesson 6
SnapVault Software

NetApp Confidential

LESSON 6: SNAPVAULT SOFTWARE


35

What Is SnapVault Software?

A data protection solution for heterogeneous storage environments
Software that performs disk-to-disk backup and recovery, which is ideal for use with NearStore near-line
disk storage
A solution that is designed to address the pain points that are associated with tape:
Intelligent data movement that reduces network traffic and production-system impact
Frequent backups that ensure superior data protection
Use of NetApp Snapshot technology to significantly reduce the amount of backup media that is needed
NetApp Confidential

36

WHAT IS SNAPVAULT SOFTWARE?


SnapVault is the NetApp native Data ONTAP backup, recovery, and archive solution. It is ideal for use with
NearStore near-line storage.
SnapVault software:

Doesn't require a NearStore Personality License or NearStore hardware


Is designed to address the pain points that are associated with tape
Uses intelligent data movement, transferring only the changes that are made at the block level
During data transfers, reduces traffic across the network
Reduces the impact on production systems
Can perform backups more frequently, because less data is backed up
Is based on Snapshot technology
Reduces the amount of backup media that is needed

SnapVault software works in controller-to-controller environments and in open systems environments (Open
Systems SnapVault) and is usually implemented with a NearStore ATA-based secondary backup system.
SnapVault software can be used in disaster-recovery scenarios, if used in conjunction with SnapMirror
products. SnapVault software does not create read/write copies, and data becomes active only after it is
restored to a FAS system.


Traditional Backup: Challenges

[Diagram: remote offices back up to tape and ship tapes to an offsite location; in the data center, FAS servers
(via NDMP) and Windows and UNIX servers with heterogeneous storage back up to a tape library.]

Challenges
Inability to hit shrinking backup windows amid storage growth and 24x7 demand
Restores that require too much time and frequently fail
Remote office backups that are challenging and prohibitively costly
Increasing operating costs for management and media
Infrequent backups

NetApp Confidential

37

TRADITIONAL BACKUP: CHALLENGES


The traditional approach is to back up to tape, using backup software such as Veritas and Legato. In this
case, the backup is performed via file-level transfers. To back up a laptop, backup software such as Connected
TLM or Veritas NetBackupPro is used to transfer block-level changes.
The tape solution can be used to back up heterogeneous storage and operating-system and application
environments. Software is installed on a backup server, and tape is attached to the server, either directly or
through a storage network.
Full backups back up all data and typically occur on weekends. Incremental backups usually back up only
changed files and occur between full backups (for example, nightly). Remote backups are performed within
the infrastructures that are located in remote offices. To enable disaster recovery, tapes are sent offsite.


SnapVault Backup: Solution

[Diagram: remote offices with Windows, UNIX, and FAS servers, and a data center with heterogeneous
storage, send block-level incremental backups to a SnapVault secondary; each incremental backup is a full
file system image.]

Features and Benefits
Accelerated backups
Accelerated and guaranteed restores
Significantly less manual intervention and support
Network-efficient backups: only changed blocks are sent
Media-efficient backups: only changed blocks are stored, and incremental backups are kept forever
Extremely fast and granular restores from online disk
More frequent backups, as often as hourly

NetApp Confidential

38

SNAPVAULT BACKUP: SOLUTION


When the SnapVault solution is used, a full backup of all systems is performed on the NearStore system.
Thereafter, all backups are incremental, and only changed blocks are stored on disk.
Storing only changed blocks dramatically reduces the amount of information that is stored on disk. For
backups from one NetApp system to another, only changed blocks are sent across the network and only
changed blocks are stored.
The data that is stored, including the data from all incremental backups, is in file format and can be viewed as
a full backup image. Whether you want to view a backup that was performed four hours ago or four days ago,
you can quickly locate the backup and have a full view into the environment as it was at the time the backup
was performed. You do not need to backtrack step-by-step to view the data or locate the information that you
need.
Both the tape process and the SnapVault process perform incremental backups, but the tape process performs
the backups by file, and the SnapVault process performs the backups by block.
A SnapVault incremental backup is the equivalent of a full backup. For each day, only the changed blocks are
moved, but all of the previously backed-up blocks are active. So, every day, the full file system is visible.
How do you restore data?
Assume that you need to restore data the night before your full (weekly) backup is to be performed. With the
traditional solution, to restore the data, you must apply seven incremental (nightly) backups. With the
SnapVault solution, each incremental backup is full (because all previously backed up blocks are active and
accessible), so data can be restored via a one-step process, rather than via a multiple-step process.


How do you verify that a backup is good?


With tape, to determine whether a backup is good, you must read the whole backup tape. And, you must hope
that the tape is readable. Tape is a volatile medium that is easy to damage. SnapVault software saves the
backup as a file system that can be read, written to, mounted, and browsed. You can view the SnapVault file
structure and see the data that has been backed up.


How SnapVault Backup Works

Administrators set up a backup relationship, backup schedule, and retention policy.
Multiple qtrees can be backed up to one volume, if the qtrees have the same schedule and policy.
A backup job is initiated based on a backup schedule and can back up multiple systems.
After the initial (level 0) transfer, all backup jobs are incremental.
A backup job moves data from the SnapVault primary location to the SnapVault secondary location.
Controllers transfer changed blocks to the SnapVault secondary location.
Open systems transfer changed files to the SnapVault secondary location.
NetApp Confidential

39

HOW SNAPVAULT BACKUP WORKS


Administrators set up backup relationships, schedules, and retention policies. For example, a source might
create a Snapshot copy every 30 minutes and retain the four most current copies, one from six hours ago, and
one daily copy starting 24 hours ago. The SnapVault system might move changed blocks from only the daily
24-hour Snapshot copies. A one-to-one correlation between the Snapshot copy policy at the source and the
Snapshot copy policy at the destination is not required.
In regard to SnapVault operations:


Multiple qtrees can be backed up to one volume, if the qtrees have the same schedule and policy.
SnapVault is qtree-based in native NetApp environments, so it always backs up to a qtree.
A job moves data from the SnapVault primary location (source) to the SnapVault secondary location
(destination).
One job can pull data from multiple SnapVault primary locations.
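The retention behavior described above, many short-lived copies on the primary versus fewer but longer-lived copies on the secondary, can be sketched as a simple pruning rule. The counts and the `prune` helper are purely illustrative, not SnapVault configuration:

```python
def prune(snapshots, keep):
    """Keep only the most recent `keep` snapshots (list is oldest-first)."""
    return snapshots[-keep:] if keep < len(snapshots) else list(snapshots)

primary = [f"hourly.{h}" for h in range(48)]   # two days of frequent copies
secondary = [f"daily.{d}" for d in range(90)]  # three months of dailies

print(len(prune(primary, 4)))     # primary retains only the 4 most recent
print(len(prune(secondary, 52)))  # secondary retains far more history
```

The point of the asymmetry is that the source pays for frequency while the destination pays for depth of history.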


SnapVault Operations

Faster backups
Disk and network efficiency
Online access to hundreds of full backups
Delete Snapshot copies without impacting others
Recreate full copy of data
Store only incremental changes
Transfer only incremental changes
Baseline transfer on first backup

[Diagram: active LUN and file-system data blocks on primary storage, with Snapshot copies 1 through 3, are
transferred by SnapVault to a NetApp NearStore system as SnapVault backups 1 through 3.]

NetApp Confidential

40

SNAPVAULT OPERATIONS
Here is an example of a baseline transfer. At some point, all of the active data on the primary system (source)
needs to be moved to the secondary system (destination). Because the transfer is based on a Snapshot copy, as
the transfer is processed, changes are occurring and production is continuing in the source. Therefore, there
may be more Snapshot copies on the source than on the destination.
After the baseline transfer is completed, you can create a Snapshot copy and interact with the file system
(view, browse, mount LUNs, and so on). During this time, changes continue on the source. Because the
SnapVault secondary data is based on a Snapshot copy, the data never has to be quiesced, because it was
quiesced before the Snapshot copy was created.
The destination does not request all of the blocks that are within the Snapshot copy; rather, it requests only the
changed blocks. The destination and source views of the data are unique. The destination view is more
backup-focused. Production on the source system is not affected by the data transfer of the SnapVault
operation.
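The changed-block behavior described above can be modeled in a few lines of Python. This is an illustrative toy model (the dictionaries stand in for Snapshot block maps), not NetApp code:

```python
# Toy model of SnapVault's incremental transfer: only blocks that
# changed since the previous Snapshot copy cross the wire.

def changed_blocks(prev_snapshot, curr_snapshot):
    """Return the block numbers whose contents differ or are new."""
    return {
        blk for blk, data in curr_snapshot.items()
        if prev_snapshot.get(blk) != data
    }

# Baseline: every block of the first Snapshot copy is transferred.
snap1 = {0: "AAAA", 1: "BBBB", 2: "CCCC"}
baseline = changed_blocks({}, snap1)

# Later copy: one block rewritten, one appended; only those two move.
snap2 = {0: "AAAA", 1: "bbbb", 2: "CCCC", 3: "DDDD"}
update = changed_blocks(snap1, snap2)
```

The destination asks for exactly the `update` set, which is why production on the source is barely affected.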


SnapVault Backup Flow Diagram

Setup: local copy, LAN copy, or SnapMirror to tape

Initial Full Backup:
- Backup images are in file format on disk.
- Backups are immediately and easily verifiable.
- The backup provides a reliable and redundant form of disk storage.

Incremental Backup:
- Incremental backups are created forever.
- Changed blocks are transferred for controllers.
- Changed blocks or files are transferred for open systems.
- Only changed blocks are stored for all systems.
- All backup copies are full images.

SnapMirror:
- SnapMirror and/or SnapVault secondary to remote location using SnapMirror software for disaster recovery

Tape Backup:
- Use the NDMP backup application to back up data to tape at any time.
- No backup window is needed.
- Tape resources are centralized and used efficiently.

SNAPVAULT BACKUP FLOW DIAGRAM


SnapVault software protects the data on a SnapVault primary system by maintaining multiple read-only
versions of the data on a SnapVault secondary system. The SnapVault secondary system is a data storage
system (such as a NearStore system or a controller) that runs Data ONTAP.
First, a complete copy of the dataset is pulled across the network to the SnapVault secondary system. The
initial (baseline) transfer may require some time to complete, as the transfer duplicates the entire source
dataset (much like a level-zero backup to tape).
Establishing the baseline can be a time-consuming process, especially with a large file system and a
low-throughput pipe. For example, transferring a 2-TB system over a 128-KB line can
require months. In such a situation, many customers make the baseline transfer by shipping the
SnapVault system side-by-side with the primary system and making the baseline transfer locally. Then, they
ship the SnapVault system to its final destination and start replicating the changed blocks on the Snapshot
copy. Another option is to mirror the data, place it on tape, and restore the tape at the destination.
Baseline transfers can also occur over the wire.
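The months claim is easy to sanity-check with back-of-the-envelope arithmetic; the sketch below assumes a sustained 128-KB/s line and ignores protocol overhead:

```python
# Back-of-the-envelope time for the baseline transfer quoted above:
# 2 TB pushed through a 128-KB/s line, ignoring protocol overhead.
dataset_bytes = 2 * 2**40            # 2 TB
line_bytes_per_sec = 128 * 2**10     # 128 KB/s
seconds = dataset_bytes / line_bytes_per_sec
days = seconds / 86400               # about 194 days, i.e. months
```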
Each subsequent backup transfers only the data blocks that have changed since the previous backup
(incremental backups forever). For some NetApp replication relationships, the baseline transfers were made
eight or nine years ago, and all backups since that time have been incremental. Block-level incremental
backups are available for both controller-to-controller SnapVault and Open Systems SnapVault, although the
process for determining which blocks have changed is quite different.
When the initial full backup is performed, the SnapVault secondary system stores the data in a WAFL file
system and creates a Snapshot copy of the data. A Snapshot copy is a read-only, point-in-time version of a
dataset. Each Snapshot copy can be thought of as a full backup (although it consumes only a fraction of the
space). A Snapshot copy is created each time a backup is performed, and a large number of Snapshot copies
can be maintained, according to a schedule configured by the backup administrator. Each Snapshot copy
consumes an amount of disk space that is equal to the differences between it and the previous Snapshot copy.
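Because each copy costs only its delta, a long retention chain is far cheaper than the same number of full backups. A rough space model, with purely illustrative sizes:

```python
# Space model for "every Snapshot copy is a full backup": a chain of
# N copies costs the baseline plus the deltas, not N times the baseline.
# Sizes in GB are purely illustrative.
baseline_gb = 1000                   # initial dataset
nightly_delta_gb = 20                # change between consecutive copies
copies = 90                          # about three months of nightly backups

snapvault_gb = baseline_gb + (copies - 1) * nightly_delta_gb
full_backups_gb = copies * baseline_gb   # equivalent traditional fulls
```

With these numbers, 90 restorable "full" copies fit in under 3 TB instead of 90 TB.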

Data protection of the secondary system is common. A SnapVault secondary system can be protected by
either backup to tape or backup to another disk-based system (such as to a NearStore system). To back up to a
secondary SnapVault system (such as to a NearStore system), you can create a volume-based SnapMirror
relationship. To back up a secondary system to a tape library, you can use SnapMirror technology to mirror to
tape or perform an NDMP backup to tape.
SnapVault software and SnapMirror technology are built on the same protocol, which transmits blocks across
the WAN. They are designed specifically for WAN links. The progress of a transfer is recorded on both the
source and the destination. Therefore, if a transfer is interrupted, it does not need to be restarted from the
beginning. The transfer stream includes numerous checkpoints, points at which the transfer can be resumed.
At most, a transfer might have to repeat the transfer of a couple of hundred blocks. The transfer takes as
much bandwidth from the pipe as it can obtain but can be throttled as needed.


SnapVault Management
Command line
OnCommand
Third-party backup solution


SNAPVAULT MANAGEMENT
SnapVault management options include the following:

- Command line
- OnCommand
- Third-party backup solutions:
  - NetVault from BakBone
  - Syncsort
  - SnapVault for NetBackup
  - Tivoli
The third-party options are a part of NetApp original equipment manufacturer (OEM) relationships.


Open Systems SnapVault

- Is an agent on the host
- Creates backups based on Snapshot copies
- Has these options:
  - The host sends files, and the SnapVault system determines which blocks have changed and stores them.
  - The host monitors block changes and sends only the changed blocks to the SnapVault system.
- Works with different host agents:
  - NetApp Open Systems SnapVault
  - Third-party backup solutions
- Check the link below for the latest information:
  http://now.netapp.com/NOW/products/interoperability/

OPEN SYSTEMS SNAPVAULT


Open Systems SnapVault extends the functionality of native SnapVault software to heterogeneous
environments. To back up data from Windows, Linux, or Solaris servers or from commercial UNIX platforms
(HP-UX or AIX), you can use Open Systems SnapVault. It is an agent on the host that can transfer data to a
SnapVault system and create backups that are based on Snapshot copies. The source system that runs the
agent does not have Snapshot copies or WAFL, so change tracking must be handled differently in that case.
The delta can be managed in either of two ways:

The host sends an entire file across the wire and allows the SnapVault controller to figure out which
blocks have changed. Because the previous and current versions of the data are stored locally, the
controller can easily perform the comparison. This option requires high bandwidth.
The host can maintain a database of each 4-KB chunk of the file's data and run checksums to determine which
blocks have changed. The host sends only the 4-KB chunks that are different. This option requires much
less bandwidth but places a very large CPU load on the source system.

In a typical remote office, the CPU-intensive option is fine, because the office probably shuts down at night.
Because the server is sitting idle, it can run the checksums.
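The second option can be sketched as follows. This is only an illustration of the per-4-KB-chunk checksum idea, not the actual Open Systems SnapVault database format or wire protocol; SHA-256 stands in for whatever checksum the agent uses:

```python
import hashlib

CHUNK = 4096  # Open Systems SnapVault tracks changes per 4-KB chunk

def chunk_digests(data):
    """Checksum every 4-KB chunk of a file's contents."""
    return [
        hashlib.sha256(data[i:i + CHUNK]).hexdigest()
        for i in range(0, len(data), CHUNK)
    ]

def changed_chunks(old_digests, data):
    """Indexes of chunks whose checksum differs from the stored database."""
    new_digests = chunk_digests(data)
    return [
        i for i, d in enumerate(new_digests)
        if i >= len(old_digests) or old_digests[i] != d
    ]

before = b"a" * CHUNK + b"b" * CHUNK + b"c" * CHUNK
db = chunk_digests(before)                 # host-side checksum database
after = b"a" * CHUNK + b"X" * CHUNK + b"c" * CHUNK + b"d" * 10
deltas = changed_chunks(db, after)         # only the changed chunks are sent
```

Only the chunks in `deltas` cross the wire, which is why this mode is so bandwidth-efficient despite its CPU cost.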
Various host agents are available:

- BakBone, the original creator
- The NetApp version of Open Systems SnapVault
- Syncsort
- CommVault

Syncsort is the only host agent that can perform bare-metal restores. With this type of restore, you place a
floppy in the system and boot from the floppy. This method restores the entire operating system over the wire
from SnapVault software. Other host agents need an OS to recover into: install Windows, then install Open
Systems SnapVault, and then start the restore.

SnapVault Backup Process: Open Systems SnapVault

- Open Systems SnapVault uses the local, internal database for relationship information, metadata, indexing, and block-level incremental (BLI) data, so you need storage space on the open system.
- Read-only Snapshot copies are vaulted on the secondary storage system.
- Phases in the transfer process include the following:
  - Phase I: The file system is scanned, and the directory structure is built.
  - Phase II: Datasets are transferred.
  - Phase III: Acknowledgements are sent, and Softlock negotiations occur.

SNAPVAULT BACKUP PROCESS: OPEN SYSTEMS SNAPVAULT


Open Systems SnapVault and regular SnapVault use the same process for defining mappings between
primary directories and secondary qtrees. In both cases, the schedule is set up on the secondary system.
However, a major difference is that, in an Open Systems SnapVault environment, the file system is scanned
for changed files, and checksums are performed on the changed files and their associated data blocks. The
checksums are then compared to the checksums from the last backup, and changed blocks are sent to the
destination SnapVault system.
Unlike with Data ONTAP, with Open Systems SnapVault, there is no Snapshot copy on the primary system.
Phases of transfer:

Phase I is a resource-intensive process. The time required for the process varies, depending on the size of
the dataset. The process can be lengthy. If block-level incremental (BLI) backups (or checksum
calculations) are enabled, the period of time is extended. During a baseline transfer, if BLI is enabled and
set to high, checksums are performed on every 4-KB chunk of every file. For an incremental backup, if
BLI is enabled and set to high, checksums are performed on every 4-KB chunk of every file that has
changed.
Phase II transfers the dataset to the SnapVault secondary (destination).
Phase III occurs when acknowledgements are sent to the host system. The acknowledgements confirm the
dataset transfer.

NOTE: The checksum calculations, the local Open Systems SnapVault agent database (history, metadata),
and a temporary directory are all stored on the primary system. Allow sufficient space for all of these items on
the primary system in Open Systems SnapVault.


Controller and Open System Comparison

Incremental Backup
- Controller SnapVault: Snapshot technology is used to transfer changed blocks.
- Open Systems SnapVault: BLI is used in the primary system to transfer changed files.

Source Data
- Controller SnapVault: All non-qtree data can be backed up to one qtree.
- Open Systems SnapVault: The directory or sub-directory can be backed up to one qtree.

Tape Restore
- Controller SnapVault: A one-step process restores from tape to the primary system.
- Open Systems SnapVault: A two-step process restores from tape to the secondary system and then restores to the primary system.

Snapshot on Primary System
- Controller SnapVault: A Snapshot copy is created, or an existing Snapshot copy is used, on the SnapVault primary system.
- Open Systems SnapVault: The live file system is backed up; a Snapshot copy is not needed.

CONTROLLER AND OPEN SYSTEM COMPARISON


A SnapVault backup that is based on Data ONTAP differs from an Open Systems SnapVault backup in the
following ways:


For SnapVault backups, changed blocks are based on Snapshot copies. For Open Systems SnapVault
backups, changed blocks are based on host or vault monitoring.
SnapVault backups are qtree-based. Whether the source data is a directory, subdirectory, or NetApp qtree,
it is backed up to a qtree.
For Open Systems SnapVault backups, tape restores are more complicated.
When the SnapVault backup is based on Data ONTAP, restoring from a native tape to a native primary
system is a one-step process. The system can skip the destination and return directly to the source. With
Open Systems SnapVault backups, restoration is a two-step process. The backup must be restored to a
NetApp box and then pushed from the box to the original source.


Exercise 14
Module 7: Demonstration: SnapVault
Time Estimate: 45 Minutes


EXERCISE 14
Please refer to your exercise guide.



Lesson 7
Compliance and Permanence


LESSON 7: COMPLIANCE AND PERMANENCE



Compliance Drivers and Requirements

Market Drivers: litigation protection; regulations such as SEC 17a-4, Basel II, Sarbanes-Oxley, Check 21, NASD 3010/3110, DOD 5015.2, the Patriot Act, SB 1386, Gramm-Leach-Bliley, HIPAA, 21 CFR Part 11, and the UK Data Protection Act

Compliance Requirements:
- Data Permanence: immutable storage; data authenticity; data integrity; data replication
- Privacy and Security: authorization; access controls; encryption; auditing; secure deletion

Most companies are subjected to multiple regulations.

COMPLIANCE DRIVERS AND REQUIREMENTS


Worldwide, regulations dictate the way businesses store information. In addition, enterprises store
information securely to protect their intellectual property and to defend themselves against litigation. External
regulations and internal corporate-governance requirements significantly impact data-storage needs. The
regulations and requirements can be divided into two categories: 1) data permanence and 2) privacy and
security.
Data permanence can be defined as the need to store data in a form that can be proven not to have changed
over a period of years. Data-permanence requirements specify data retention elements such as:

Immutable storage: referred to as write once, read many (WORM) storage, which is storage from which
data cannot be deleted or modified for the duration of the retention period
Data authenticity: the ability to prove that data was written on the media accurately the first time
Data integrity: the ability to prove that data has not been altered since it was first written and that the
integrity of the data will be protected for the retention period
Data replication: the storing of a data copy that is separate from the original copy to ensure data
availability, even in the case of disaster

The following requirements control authorized access to private user and company data:

- Authorization: allowing data access to authorized individuals
- Access controls: limiting individual rights to perform certain actions with the data
- Encryption: protecting the privacy of data in transmission or at rest
- Auditing: keeping a log of who did what to the data and when
- Secure deletion: deleting data so that it can never be recovered

Typically, enterprises are subject to a variety of regulations. These regulations may mandate a matrix of
requirements and cut across data permanence, privacy, and security. Enterprises should take a big-picture,
long-term view of compliance storage, rather than focusing on storage infrastructures that meet only current
requirements.

SnapLock Usage: Process

1. Use the SnapLock Compliance or SnapLock Enterprise license to license SnapLock software.
2. Create a SnapLock volume or aggregate, but realize that you cannot convert a volume to a SnapLock volume.
3. Share or export the SnapLock volume.
4. Copy the write-enabled file over NFS or CIFS.
5. Change the last access time to reflect the retention date.
6. After the file is stored on the SnapLock volume, use a script or an application to change the permissions to read only.

SNAPLOCK USAGE: PROCESS


SnapLock compliance software is a production-side compliance solution. Two licenses are available with
SnapLock:

The SnapLock Compliance license allows the removal of data only after the compliance window (as
determined by the compliance clock) is completed. Before then, the data can be removed only through
physical destruction of the drives.
The SnapLock Enterprise license allows disks to be erased. This type of compliance is used for business-compliance
rules, not for regulatory compliance. The lock cannot be undone; rather, the physical container
that holds the data must be destroyed, which destroys the data.

To manage data:
1. An administrator creates a SnapLock volume or aggregate (a physical-layer container).
2. The container is shared, and a copy of the file is moved.
3. The most recent access time is changed to reflect the retention date, and the permissions are set to read
only.
4. The file is stored until the retention date arrives.
This process is intended for only structured and semi-structured data sets, so an application can control the
details.
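From the client's point of view, steps 5 and 6 are ordinary file-metadata operations. The sketch below performs them on a throwaway local file; on a real SnapLock volume, removing the write permissions is what commits the file to WORM state:

```python
import os, stat, tempfile, time

def commit_to_worm(path, retention_epoch):
    """Set the file's atime to the retention date, then drop write bits.

    On a SnapLock volume this sequence locks the file until the
    retention date; on an ordinary file system it only simulates it.
    """
    st = os.stat(path)
    os.utime(path, (retention_epoch, st.st_mtime))       # atime carries the retention date
    os.chmod(path, stat.S_IMODE(st.st_mode) & ~0o222)    # read-only transition = WORM commit

# Demonstration on a throwaway temporary file.
fd, path = tempfile.mkstemp()
os.write(fd, b"record to retain")
os.close(fd)

retention = int(time.time()) + 7 * 365 * 86400           # retain for roughly 7 years
commit_to_worm(path, retention)

mode = stat.S_IMODE(os.stat(path).st_mode)               # no write bits remain
atime = int(os.stat(path).st_atime)                      # equals the retention date
```

In practice a script or application drives this over NFS or CIFS against the SnapLock share, as the slide describes.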


SnapLock Usage: Technical Details

- The vol command has been changed:
  - vol create: use the -L switch to specify SnapLock software.
  - The System Manager browser-based administration tool does not support the SnapLock option.
- In Data ONTAP 7-Mode, aggr is the command:
  1. Use aggr create volume_name -L to create SnapLock aggregates.
  2. Create flexible volumes on the SnapLock aggregate.
- The type of SnapLock volume depends on the type of license: SnapLock Compliance or SnapLock Enterprise.

SNAPLOCK USAGE: TECHNICAL DETAILS


Stress to partners and customers that NetApp systems cannot undo the creation of a SnapLock volume. Once
created, a SnapLock volume is permanent. Customers and partners should follow the best practice guidelines
for creating and maintaining SnapLock volumes, as detailed in the following paper:
http://www.netapp.com/tech_library/3263.html.
Changes to volume commands for SnapLock usage include the following:

Data ONTAP 7G introduced the -L switch to the vol and aggr commands for creating compliant
data stores.
Starting in Data ONTAP 6.4.1, once a SnapLock volume is created, it cannot be destroyed.
Because vol copy essentially destroys WORM data, a copy is not allowed to a destination SnapLock
volume.

You can lock SnapLock aggregates and traditional volumes. A standard aggregate cannot contain a flexible
volume that contains only compliance data. Only compliance aggregates can contain flexible volumes that
contain compliance data.
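Putting the notes above together, the 7-Mode sequence looks roughly like this; the names are placeholders, and the -L argument takes compliance or enterprise according to the installed license:

```
system> aggr create aggr_worm -L compliance 14
system> vol create flex_worm aggr_worm 500g
```

Once aggr_worm exists, every flexible volume created in it inherits the SnapLock property, and neither the aggregate nor its volumes can be converted back.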


Exercise 15
Module 7: Case Study
Student Activity 4

Time Estimate: 35 Minutes


EXERCISE 15
Please refer to your exercise guide.



Module Summary
Now that you have completed this module, you
should be able to:
Discuss general trends in the data-protection
market
Articulate the value of the NetApp OnCommand
application
Discuss the challenges that the NetApp
SnapVault feature solves
Discuss the challenges and solutions that are
involved in compliance and information-lifecycle
management

MODULE SUMMARY



Module 8
Disaster Recovery


MODULE 8: DISASTER RECOVERY


Module Overview
This module covers the following topics:
Disaster-recovery overview
Recovery objectives
SnapMirror software features
MetroCluster overview
Positioning


MODULE OVERVIEW


Module Objectives
After this module, you should be able to:
- Discuss the NetApp disaster-recovery architecture:
  - Recovery point objective (RPO)
  - Recovery time objective (RTO)
- Articulate the key features of SnapMirror software
- Discuss the benefits of MetroCluster technology
- Position NetApp products and services for disaster recovery

MODULE OBJECTIVES


Lesson 1
Disaster Recovery


LESSON 1: DISASTER RECOVERY


Recovery Objectives

[Chart: cost and availability versus currency of data. Solution cost rises as the recovery point moves from days (weekly and daily backup) through hours and minutes (remote vaulting, remote journaling, database replication) to seconds (hot standby mirroring) and continuous operations. Source: Deloitte and Touche]

RECOVERY OBJECTIVES
This graphic shows that the cost of a solution increases as the recovery point objective (RPO), the point to
which you want to be able to recover, becomes closer and closer to real time (or immediate). The concept is
valid, and most businesses have multiple types of data with multiple priorities along this curve.
Not many environments have a business need for continuous operations for any class of data. Financial
environments are the most common that have a real-time recovery point. For some types of companies,
online transaction processing (OLTP) requires data that is current up to the last I/O operation.


Causes of Unplanned Downtime

Probability of occurrence by cause:

Operational Failures (40%): people and process issues; infrastructure changes; configuration and problem management
Application Failures (40%): bugs; performance issues; change-management process
Component and System Failures (10%): controller failure; host bus adapter (HBA) and port failure; disk failure; shelf failure; FC loop failure
Site Failures (7%): HVAC failures; power failures; building fire; plumbing accidents
Regional Disasters (3%): terrorist attacks; electric grid failures; natural disasters (floods, hurricanes, earthquakes)
Architectural failures; planned downtime

CAUSES OF UNPLANNED DOWNTIME


Many reasons exist for unplanned downtime, including operational failures, application failures, component
and system failures, site failures, and regional disasters. Of that unplanned downtime, 40% is caused by
operational failures (operator errors), another 40% is caused by application failures, and the remaining 20% is
caused by component and system failures, site failures, and disasters. Storage system failures account for a
negligible percentage of all system and component failures as a result of the storage resiliency features that
are built into all of the most widely used systems.
The types of failures are summarized as follows:


Operational failures: The proliferation of storage silos, multiple architectures, and products and
technologies that are not interoperable makes IT infrastructures increasingly complex. This complexity
often results in a rise in operator errors.
Application failures: As new functionality is added to applications that must support multiple underlying
architectures, complexity increases, and so does the likelihood of an application failure.
Component and system failures: Failures in system components often result in long recovery times and in
data corruption. A high level of storage resiliency is essential to preventing downtime and loss of data.
Site failures and regional disasters: Of the different types of unplanned downtime, site failures and
regional disasters are the least likely to occur, but they are responsible for the highest costs when they do
happen.


The State of the Market

Operational and Application Failures
- Past: reliance on tape for all backup and recovery needs
- Present: increasing data-center complexity that results in downtime because of operator errors; a need for faster disk-based data recovery

Component and System Failures
- Past: emphasis on high-availability (HA) clustered solutions
- Present: well addressed by the top enterprise storage vendors

Site Failures
- Past: disaster-recovery protection for only the most mission-critical applications
- Present: increasing need for HA and disaster-recovery solutions

Regional Disasters
- Past: cost and complexity as barriers to widespread adoption
- Present: server and storage consolidation; a need to protect a broader set of applications cost-effectively

THE STATE OF THE MARKET


Past

Organizations used to rely on tape for most of their backup and recovery needs when they needed to
recover a previous copy of data after a failure occurred.
Because the goal was never to go down, IT organizations put a heavy emphasis on high-availability (HA),
clustered solutions.
Only the most mission-critical applications were protected against disasters with synchronous replication
solutions. Because of limited budgets, the rest of the applications were not covered under a disaster-recovery plan. Off-site shipment of tapes was the last level of protection for these applications.

Present


Increasing data-center complexity results in an increase in operator errors, which, in turn, leads to
increased downtime.
Tape is no longer adequate as a backup and recovery medium. It takes too long to back up and recover
and doesn't meet the requirements of increasingly strict SLAs.
Storage system failures account for a negligible percentage of all system and component failures as a
result of the storage resiliency features that are built into all of the most widely used systems.
The need for disaster recovery solutions is increasing because of terrorist activities, recent disasters, and
the need for compliance with the Sarbanes-Oxley Act (SOX).
As customers continue to consolidate expensive UNIX servers onto commodity clusters, they look at a
consolidated disaster-recovery plan for a broader set of applications.


NetApp Software Architecture

[Chart: NetApp products mapped against cost and availability, from low-level to high-level SLAs: Snapshot copies (daily backup) and SnapRestore software (application recovery); SnapVault software (block-level incremental backups); asynchronous SnapMirror (asynchronous replication); Synchronous SnapMirror, SyncMirror software, and high availability (synchronous replication); MetroCluster and LAN/WAN clustering (continuous operations)]

NETAPP SOFTWARE ARCHITECTURE


This is where NetApp products roughly fit on the same style of curve. This is not a precise mapping. What is
relevant is that NetApp has products that allow customers to reach any level of recovery point that the
customers need.


Disaster-Recovery Architecture

[Diagram: a major data center with UNIX, Windows, and Windows NT servers, clustered and nonclustered NetApp servers, MetroCluster, and a FAS system with NPL running SnapVault software, connected over an IP or FC network to a remote data center (FAS system with NPL, SnapVault software) and to a disaster-recovery site with a FAS system with NPL, a backup server, and a tape library]
DISASTER-RECOVERY ARCHITECTURE
This is a representation of a hypothetical disaster-recovery architecture. Many NetApp customers have these
cascading environments, especially large customers with multiple sites around the world.
As you see, multiple classes of data are handled in different ways. Some of the data is immediately mirrored
by using SnapMirror technology directly to a remote site. Some of the data is stored by SnapVault software
locally. Eventually, all of the data is mirrored by SnapMirror technology to the remote site.
The example also shows the use of Open Systems SnapVault, MetroCluster, regular clusters, and some standalone systems. All of these are mirrored to a third site where they have their backup structure. This entire
operation can be performed by using NetApp technology, which provides a single-vendor solution.
NetApp uses the same design internally. The primary NetApp disaster-recovery center is in Sacramento,
California, 75 miles away in a straight line from NetApp headquarters in Sunnyvale, California. The
advantage is that the primary NetApp disaster-recovery center is outside the most dangerous earthquake zone,
so it is theoretically safer. Everything gets replicated to the Sacramento site, and then the most critical data
gets replicated to Amsterdam, the NetApp European headquarters. From there, it gets replicated to Bangalore,
India, which is the largest NetApp Asia-Pacific office, and from Bangalore it is replicated back to North
America to the Research Triangle Park facility. Each of those sites has its own primary data, so that primary
data is also replicated out to the other sites.


Flexible Disaster Recovery

Protects and accelerates business with 60% lower TCO:
- One to many and many to one
- Any platform to any platform: any FAS system; FC or SATA disk
- Replication between NetApp and third-party storage (through V-Series systems)
- The ability to tune to meet business requirements: synchronous, semi-synchronous, or asynchronous
- Support for all applications and protocols

[Diagram labels: Application Integration, MetroCluster, SnapMirror Software]

FLEXIBLE DISASTER RECOVERY


Consider these key values:
First, look at SnapMirror software as a flexible solution. SnapMirror software is primarily used as a disaster
recovery solution. SnapMirror software replicates unique data blocks at high speeds over LAN, WAN, or FC
networks to minimize bandwidth utilization and provide protection against unplanned downtime.
Customers now use it for business intelligence, data distribution, and development and testing to maximize
utilization of their disaster-recovery site for better ROI. This is enabled by FlexClone technology, which
creates instantaneous, space-efficient clones off your SnapMirror copy on the disaster-recovery site to run
your other business activities without impacting your production-site operations.
SnapMirror software leverages the NetApp Unified Storage Architecture, which means that customers can use
a single product that can replicate between tiers of NetApp storage (which can be FC systems on the primary
and SATA systems on the disaster-recovery site) and between third-party storage by using NetApp V-Series
systems for investment protection.
Customers can also use multiple replication modes (synchronous, semi-synchronous, and asynchronous) to
tune their RPO to meet their business needs.
SnapMirror software also supports all applications and protocols (including FC, iSCSI, NFS, and CIFS).
All these benefits of SnapMirror software apply equally well to virtual and traditional physical environments.


Disaster Recovery for Virtual Environments

(Diagram: VM1, VM2, and VM3 at the primary data center are replicated by SnapMirror software to a disaster-recovery site; on site failure, the VMs come up at the disaster-recovery site from the replicated data in the virtual storage partition.)

Broad support to meet needs: VMware, Microsoft Hyper-V, Citrix XenServer
Integrated with VMware SRM: enables automated virtual machine (VM) failover
Leverages storage efficiency: up to 90% less primary and disaster-recovery storage, and up to 70% less network utilization
Designed for shared architectures: secure multi-tenancy across virtual storage partitions

DISASTER RECOVERY FOR VIRTUAL ENVIRONMENTS


The benefits of SnapMirror software can be realized in virtual environments regardless of the vendor. NetApp
works with VMware, Microsoft Hyper-V, and Citrix XenServer.
Customers can extend the power of SnapMirror software to virtualize storage environments, for example,
with VMware Site Recovery Manager for rapid, reliable, and affordable automated site-disaster recovery.
Enhanced application protection for virtualized applications through integration with SnapMirror software
means that customers can achieve high levels of availability through instantaneous recovery and access of
data through failed-over virtual machines (VMs) on the secondary site. Together, these products provide
customers with a robust disaster recovery solution that reduces the risk, cost, and complexity that is associated
with traditional disaster-recovery approaches.
From an efficiency perspective, you know that SnapMirror software provides thin replication by leveraging
the many storage-efficient technologies that NetApp has had for many years, including Snapshot copies,
RAID-DP technology, and deduplication. SnapMirror software has introduced a built-in network-compression
capability to help to reduce customers' network bandwidth utilization. Data transfers are accelerated to free
the network for other uses. And because customers can replicate more often, that means a lower RPO and at
no additional cost, which means no additional hardware costs, no additional license costs, and no extra
devices to manage. In lab testing, NetApp has seen bandwidth utilization reduced by 72% for Oracle data, a
63% reduction for home directory, and 53% for Exchange. One customer, North American Banking
Company, uses SnapMirror compression and has seen bandwidth utilization decrease by 66%, which saves the
company an estimated $10,000.
Finally, customers can virtually partition storage and provide secure multi-tenancy with the ability to replicate
data across partitions with the knowledge that the data is protected.
DCI, a financial-services company, says, "NetApp software takes care of automating replication and recovery
processes, and VMware SRM automates the failover. Should we ever experience a site disaster, in a matter of
minutes we can be up and running at the DR facility. And it costs us about 50% less than before."

Lesson 2
SnapMirror Software


LESSON 2: SNAPMIRROR SOFTWARE


SnapMirror Overview
SnapMirror software replicates a file system on one controller to a
read-only copy on another controller:
Replication is volume-based (traditional or flexible) or
qtree-based.
Based on Snapshot technology, only changed blocks are copied
after the initial mirror is established.
Asynchronous and synchronous operations are possible.

SnapMirror software runs over IP and FC.


Data is read-accessible at remote sites.
One to many means multiple copies.
Many to one means consolidation.
Cascading and multihop configurations allow follow-on destinations.
Resynchronization is easy.
Scheduling and throttling is easy.

SNAPMIRROR OVERVIEW
NetApp has disaster-recovery relationships that go back and forth between all five of the NetApp sites around
the world. The product that makes that possible is SnapMirror software.
SnapMirror technology replicates a file system on one controller to a read-only copy on another controller.
The replication can be volume-based or qtree-based, depending on the circumstances of the transfer. Like
SnapVault software, SnapMirror software is based on Snapshot technology, so only the changed blocks must
be moved after the initial baseline is in place. SnapMirror software can be asynchronous or synchronous in its
transfer type and can run over IP or FC.
Customers can have one source that goes to many destinations or have many sources that go to one
destination. SnapMirror technology can cascade and be utilized in multihop scenarios. Probably the most
important difference is the resynchronization process. If you move production to the destination and make
changes there, you must be able to get those changes back to the original source. That is easy to do with
SnapMirror technology: Like SnapVault software, SnapMirror software is easy to schedule and throttle.
SnapMirror software was the first replication product from NetApp and came out in 1997. SnapVault
software was then based on the SnapMirror technology, which utilizes the same underlying engine.
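As a concrete sketch of the setup in Data ONTAP 7-Mode (the system and volume names here are hypothetical), a basic asynchronous volume SnapMirror relationship is established by restricting the destination volume, initializing the baseline, and adding a schedule to /etc/snapmirror.conf:

```
# On the source, allow the destination system to pull data:
srcfiler> options snapmirror.access host=dstfiler

# On the destination, restrict the target volume and create the baseline copy:
dstfiler> vol restrict vol_dst
dstfiler> snapmirror initialize -S srcfiler:vol_src dstfiler:vol_dst

# /etc/snapmirror.conf on the destination; the last four fields are the
# update schedule (minute, hour, day of month, day of week):
srcfiler:vol_src  dstfiler:vol_dst  -  0 * * *
```

After initialization, snapmirror status on either controller shows the relationship, its state, and its lag.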


Volume SnapMirror Software


Versus Qtree SnapMirror Software
Volume SnapMirror software:
Replication of the entire volume:
Snapshot copies and qtrees replicate.

Volumes must be the same type (traditional or flexible).

Block-based replication

Qtree SnapMirror software:


Replicates only the qtree
Can consolidate qtrees from multiple systems
Provides logical, file-based replication
Has no volume type or Data ONTAP version
requirements
Is asynchronous only


VOLUME SNAPMIRROR SOFTWARE VERSUS QTREE SNAPMIRROR SOFTWARE


SnapMirror technology can be configured for whole volumes or individual qtrees in a volume. Volume
SnapMirror technology replicates an entire volume and all the associated Snapshot copies to the secondary,
including the volume's qtrees. The replicated volume looks identical to the source volume, including the
Snapshot copies. Volume SnapMirror technology can be used only on volumes of the same type: both
traditional or both flexible volumes. Volume SnapMirror technology is a block-based replication. Therefore,
earlier versions of Data ONTAP architecture cannot understand file-system transfers from later versions.
Qtree SnapMirror technology is used between qtrees, regardless of the type of the volume (traditional or
flexible). Qtrees from different sources can be replicated to a destination, and the Snapshot copy schedules on
the source and destination are independent of each other. Qtree SnapMirror replication is logical replication:
All the files and directories are created in the destination file system. Therefore, replication can occur between
different versions of Data ONTAP software. Qtree SnapMirror technology can operate only in asynchronous
mode.
Volume SnapMirror replication cannot occur from later to earlier versions of Data ONTAP software;
however, the reverse is possible. If Volume SnapMirror technology is configured to replicate from an earlier
to a later version, customers should upgrade the earlier version of the source as soon as possible. This allows
customers to resynchronize (reversing the replication relationship) during a disaster-recovery scenario. This is
also true for synchronous SnapMirror technology; however, qtree SnapMirror technology does not have this
restriction.
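As a sketch, the qtree form of the initialize command uses full /vol/<volume>/<qtree> paths rather than bare volume names (names here are hypothetical; the destination qtree must not already exist):

```
# Qtree SnapMirror baseline; note the path form instead of a volume name:
dstfiler> snapmirror initialize -S srcfiler:/vol/vol_src/q_home dstfiler:/vol/vol_dst/q_home

# Matching /etc/snapmirror.conf entry on the destination (updates at 15 past each hour):
srcfiler:/vol/vol_src/q_home  dstfiler:/vol/vol_dst/q_home  -  15 * * *
```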


SnapMirror Modes

Synchronous SnapMirror (replicates every write):
No data-loss exposure
A replication distance of less than 100 km
Some performance impact

Semi-Synchronous SnapMirror (replicates every write):
Seconds of data exposure
No performance impact

Asynchronous SnapMirror (replicates changed blocks at set intervals):
From one minute to hours of data exposure
No distance limit
No performance impact

SNAPMIRROR MODES
SnapMirror software can be configured into three replication modes. All are available with a single license.
The first mode is synchronous SnapMirror. In this solution, the data at the disaster-recovery site exactly
matches the data at the primary site. This is achieved by replicating every data write to the remote location
and not acknowledging to the host that the write has occurred until the remote systems confirm that the data
has been written. This solution provides the least data loss, but a limit of 50 to 100 km exists before latency
becomes too great, because the host application must wait for an acknowledgment from the remote NetApp
devices.
Semi-synchronous SnapMirror allows customers to achieve a near-zero-data-loss disaster recovery solution
without performance impact on the host application. The solution also allows customers to perform
synchronous-type replication over longer distances. When data is written to the primary storage, an
acknowledgment is immediately sent back, which eliminates the latency impact on the host. In the
background, SnapMirror software tries to maintain as close to synchronous communication as possible with
the remote system. SnapMirror software has user-defined thresholds that control how far out of synchronicity
the source and remote copy datasets are allowed to get.
Asynchronous SnapMirror allows customers to replicate data at adjustable frequencies. Customers can do this
type of point-in-time replication as frequently as once per minute or as infrequently as once in several days.
No distance limitation exists, and the mode is frequently used to replicate across long distances to protect
against regional disasters. Only the blocks that change between each replication are sent, which minimizes
network usage.
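In Data ONTAP 7-Mode, the mode is selected per relationship by what is placed in the schedule field of /etc/snapmirror.conf. A sketch with hypothetical names (the exact semi-synchronous keyword varies by Data ONTAP release):

```
# Synchronous: every write is replicated before it is acknowledged
srcfiler:vol_oltp  dstfiler:vol_oltp  -  sync

# Semi-synchronous: allowed to lag the source by a small, bounded amount
srcfiler:vol_erp   dstfiler:vol_erp   -  semi-sync

# Asynchronous: point-in-time updates, here as frequently as once per minute
srcfiler:vol_home  dstfiler:vol_home  -  * * * *
```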


Synchronous, Asynchronous,
or Semi-Synchronous?
People assume that they need synchronous
mirroring.
Synchronous SnapMirror issues include:

Cache
The communication line going down
Distance limitations
Latency limitations
Performance impact

Most customers go with Asynchronous SnapMirror:


A Snapshot copy every minute
A guaranteed consistent file system every minute

SYNCHRONOUS, ASYNCHRONOUS, OR SEMI-SYNCHRONOUS?


Companies assume that they need synchronous mirroring to have the best protection. The key question is:
What is the recovery point? The customer must take a realistic view of the company's needs and consider the
implications. Synchronous replication is not always the best choice for the situation.
Customers should consider these points when they decide on a level of synchronization:

Operation caching: If a line goes down, what happens? How should the recovery occur?
Distance limitations
Latency limitations
The performance impact of a down communication line or system failover

Most NetApp customers choose asynchronous mirroring for the following reasons:

A Snapshot copy is created every minute.
Asynchronous mirroring guarantees a consistent file system, whether it is SAN or network-attached storage (NAS), every minute. (To most NetApp customers, guaranteed consistency is more valuable than synchronous replication with all of the limitations that come with it.)

SnapMirror Sync Error Handling


Automated fallback to async mode when
connection is disrupted
Attempts to reestablish sync mode at one-minute intervals
Automatic reestablishment of sync operations
as soon as is possible


SNAPMIRROR SYNC ERROR HANDLING


If there are problems with the network, synchronous replication might go into an asynchronous mode.
Ordinarily, the source and destination controllers periodically communicate with each other to maintain the
connection. In the event of a network outage, synchronous SnapMirror goes into an asynchronous mode if the
periodic communication is disrupted. When in asynchronous mode, the source controller tries to communicate
with the destination controller once every minute until communication is reestablished. Once communication
is reestablished, the source controller asynchronously replicates data to the destination every minute until
synchronous replication can be reestablished.


SnapMirror Network Compression

Enables compression over the network to minimize network bandwidth consumption
Is configurable per SnapMirror relationship
Uses the industry-standard gzip algorithm
Uses a compression ratio that depends on the data set type (reported in the snapmirror status -l output)

(Diagram: in an asynchronous SnapMirror transfer, data is read and compressed at the source, travels compressed across the wire, and is uncompressed and written at the destination.)

SNAPMIRROR NETWORK COMPRESSION


This slide shows how SnapMirror network compression works.
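As a sketch of how compression is turned on in Data ONTAP 7-Mode (the connection and interface names are hypothetical), the relationship references a named connection defined in /etc/snapmirror.conf, and compression is enabled as a relationship argument:

```
# /etc/snapmirror.conf on the destination
# Define a named connection between source and destination interfaces...
dr_conn = multi(srcfiler-e0a, dstfiler-e0a)

# ...and reference it in the relationship, with compression enabled:
dr_conn:vol_src  dstfiler:vol_dst  compression=enable  0 * * *
```

snapmirror status -l on the destination then reports the compression ratio achieved for the data set.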


SnapMirror Flexibility

Multiple hops: sync to a nearby site, then async onward
Many-to-one
Cascading
Asymmetric replication: FAS to FAS w/NPL
Heterogeneous replication with V-Series systems: enterprise storage array (behind a V-Series system) to FAS

SNAPMIRROR FLEXIBILITY
Multiple hops can be used to protect against site disasters (with a synchronous replication solution) and
regional disasters (with an asynchronous replication solution). SnapMirror technology can also replicate from
multiple data centers to a central disaster-recovery site, where you can centralize your tape backup
infrastructure, which reduces your costs.


SnapMirror Licensing
SnapMirror software is one product with two
licenses:
Synchronous SnapMirror
Asynchronous SnapMirror

No separate source and destination licenses


exist: A single controller can be a source and
a destination at the same time.


SNAPMIRROR LICENSING
When customers buy SnapMirror technology, they get everything, but two license numbers exist:

One for synchronous SnapMirror (which also enables semi-synchronous mode)

Another for asynchronous SnapMirror

A user can change the relationship between synchronous, semi-synchronous, and asynchronous modes. The
relationship can be set up in any way as long as the baseline is established. The modes can be changed
without performance impact or baseline resynchronization.
No separate source or destination license exists. Because only one license exists for both source and
destination, the same box can be a destination and a source.
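A sketch of the corresponding commands in Data ONTAP 7-Mode (the license codes shown are placeholders, not real codes):

```
filer> license add XXXXXXX    # snapmirror license (asynchronous)
filer> license add XXXXXXX    # snapmirror_sync license (synchronous and semi-synchronous modes)
filer> options snapmirror.enable on
```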


SnapMirror Software and SnapVault
Software: Product Comparison

SnapMirror Software                         SnapVault Software
Can be scheduled to run every minute        Can be scheduled to back up every hour
Provides no Snapshot coalescing             Provides Snapshot coalescing*
Provides no Snapshot copy management        Provides additional Snapshot copy management
Can transfer two ways                       Can transfer one way only
Mirrors volumes or qtrees                   Backs up qtrees
Can use a read-write destination            Always uses a read-only destination
Does not support open systems               Can back up open systems

*Coalescing reduces the number of overhead Snapshot copies that are needed on the secondary system, which allows customers to keep more backup copies online.

SNAPMIRROR SOFTWARE AND SNAPVAULT SOFTWARE: PRODUCT COMPARISON


The differences between SnapMirror technology and SnapVault software may be confusing at first. Here is a
summary.
SnapMirror software is set to run every minute, while SnapVault software is normally scheduled no more
than once every hour. SnapMirror software performs no Snapshot copy coalescing or management, while
SnapVault software performs both.
In a SnapMirror relationship, the Snapshot copies are the same on the destination as on the source. With
SnapVault software, a different schedule is used, which is synchronized with the backup scenario. The blocks
that are stored on the destination may be different from those on the source; only those blocks that are
necessary to maintain the Snapshot copies are stored on the destination. SnapVault software manages blocks
differently on the destination than how SnapVault software manages what is visible on the source.
With SnapMirror technology, transfers can go two ways. With SnapVault software, transfers are one-way
only. With SnapVault software, users do not ever intend for the destination to become production, so users do
not need to synchronize data in the other direction, although users can restore data; whereas with SnapMirror
software, not only can relationships go both directions between machines but those relationships can be easily
reversed.
SnapMirror software can be used to mirror volumes or qtrees. SnapVault software backs up qtrees only.
The destination can easily be made read-write in a SnapMirror relationship. With SnapVault software, the
destination is always read-only. SnapVault software can also back up open systems with Open Systems SnapVault.
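The two-way, easily reversed nature of SnapMirror relationships looks roughly like this at the 7-Mode command line (names hypothetical):

```
# Disaster: break the mirror so the destination becomes writable production storage
dstfiler> snapmirror break vol_dst

# After the original site is repaired, resynchronize the DR-site changes back:
srcfiler> snapmirror resync -S dstfiler:vol_dst srcfiler:vol_src

# Finally, reverse again to restore the original replication direction:
dstfiler> snapmirror resync -S srcfiler:vol_src dstfiler:vol_dst
```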


Lesson 3
MetroCluster


LESSON 3: METROCLUSTER


MetroCluster Overview
Design Goals

The primary goal of a MetroCluster is to provide


mission-critical applications and redundant
storage services in the case of site-specific
disasters (for example, fire or long-term power
loss).
MetroCluster tolerates site-specific disasters
with minimal interruption to mission-critical
applications and zero data loss by
synchronously mirroring data between two sites.

METROCLUSTER OVERVIEW
DESIGN GOALS

The primary goal of MetroCluster is to provide mission-critical applications with redundant storage services
in the event of site-specific disasters such as fire or long-term power loss.
MetroCluster can also be described as follows. MetroCluster is designed to tolerate site-specific disasters with
minimal interruption to mission-critical applications and zero data loss by synchronously mirroring data
between two sites.
You should adjust the focus depending on whom you are talking to. Some NetApp clients focus on the
redundancy of data; others focus on the recoverability of the system.


Two Types of Failure Scenarios
Disasters

Disasters require an operator to confirm the disaster and manually run the cf forcetakeover -d command before a cluster failover can occur.

TWO TYPES OF FAILURE SCENARIOS


Failures can be a result of acts of nature or something going wrong in the system.
An act of nature obviously is the worse scenario of the two, both in human terms and physical terms, but it is
also worse because you cannot tell the difference between the sudden destruction of a site and a network
outage between the two sites. So, after this type of disaster, the system will not fail over automatically. If all
communications are suddenly lost, an automatic failover is not performed. This contrasts with a standard side-by-side cluster, in which case the system would fail over.
If there is something going on in a system, such as an internal failure, the system knows it is going down, and
it will send a signal across the line so the other system knows to take over, causing an automatic failover to
occur.
In a natural disaster, an administrator must declare that a disaster has happened and tell the other system to do
the takeover, so the system avoids a split-brain scenario and data corruption. You want to avoid split brain in
any clustered environment. Some customers have automated this process. They have decided that if three
independent network connections fail simultaneously, they assume it is a real disaster and have a script that
sends the takeover command. But many customers leave the disaster failover process as a manual process.
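At the console, the manual declaration is a single command on the surviving node; cf forcetakeover -d is the MetroCluster-specific form that proceeds even though the failed site's disks are unreachable (a sketch; the prompt name is hypothetical):

```
# On the surviving controller, declare the disaster and take over the partner's identity:
survivor> cf forcetakeover -d

# After the failed site is repaired and the mirrors are rejoined, return service:
survivor> cf giveback
```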


Two Types of Failure Scenarios


Normal Cluster Failover Events


TWO TYPES OF FAILURE SCENARIOS


NORMAL CLUSTER FAILOVER EVENTS


MetroCluster

MetroCluster is a cost-effective replication solution for combined high-availability and SyncMirror disaster recovery within a campus or metro area.

(Diagram: a FAS or V-Series system with its disks at a major data center, connected over the LAN/SAN to a nearby office.)

Configurations:
Stretch MetroCluster provides campus disaster recovery protection; it can stretch up to 500m.
Fabric MetroCluster provides metropolitan disaster recovery protection; it can stretch up to 100km with FC switches.
V-Series MetroCluster

METROCLUSTER
MetroCluster is a way to stretch a cluster beyond the 500-meter distance limitation. This is very valuable for
sites that need a cluster across a campus or metropolitan area, because it tolerates some localized failures
while still running as a cluster with failover integration. This is very popular in industries and countries
where a metropolitan separation is mandated for disaster recovery.
A MetroCluster configuration comprises the following components and requires the following licenses:

HA pair (cf license): provides automatic failover capability between sites in the case of hardware failures.

SyncMirror software (syncmirror_local license): provides an up-to-date copy of data at the remote site; data is ready for access after failover without administrator intervention.

Controller failover (cf_remote license): provides a mechanism for the administrator to declare a remote-site disaster and initiate a site failover through a single command for ease of use.

FC switches (vendor-specific): provide controller connectivity between sites that are greater than 500* meters apart; they enable sites to be located at a safe distance from each other.


The MetroCluster Difference

MetroCluster provides continuous availability within a single data center and across data centers in adjacent floors, buildings, and metropolitan areas.

FAS3000, FAS3100, FAS3200, FAS6000, FAS6200, and V-Series systems are supported.

(Diagram: deployment scopes range from within a data center, to across floors, buildings, or campuses with stretch MetroCluster (up to 500 m), to across a city or metropolitan area with fabric MetroCluster (up to 100 km).)

THE METROCLUSTER DIFFERENCE


MetroCluster can address a customer's continuous-availability requirements whether MetroCluster is
deployed inside a data center, at multiple locations in a building, or across a city or metropolitan area up to a
distance of 100 km. This enables a level of availability that goes beyond the HA features of a single array,
which makes MetroCluster a highly versatile solution.
MetroCluster supports NetApp FAS3000, FAS3100, FAS3200, FAS6000, FAS6200, and V-Series systems.


Stretch MetroCluster
Campus Distances

Similar to a local HA configuration, but with longer cables:
Less than or equal to 500m at 2 Gbps
Less than or equal to 270m at 4 Gbps

(Diagram: Building A and Building B connected over dark fiber; volumes X and Y are mirrored at both sites across the HA interconnect (FC-VI), A-loop, and B-loop.)

STRETCH METROCLUSTER
CAMPUS DISTANCES


MetroCluster: Overview

(Diagram: Host 1 and System1 at Data Center #1; Host 2 and System2 at Data Center #2. Each system's Pool 0 disks are local to it, and its mirrored Pool 1 disks are at the other site.)

SyncMirror over a distance

NOTE: This diagram displays the fabric-attached MetroCluster configuration; stretch MetroCluster is available also.

METROCLUSTER: OVERVIEW
MetroCluster combines the reliability of a high-availability pair with the synchronous replication of
SyncMirror over a distance.
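A sketch of the SyncMirror half from the 7-Mode command line (the aggregate name is hypothetical; pool 1 must contain enough matching spare disks at the remote site):

```
# Add a second, synchronously mirrored plex to an existing aggregate:
controller> aggr mirror aggr0

# Verify that both plexes are online and see which pool each plex draws from:
controller> aggr status -v aggr0
controller> sysconfig -r
```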


MetroCluster and SyncMirror

Combines RAID 1 and RAID 4 / RAID-DP

(Diagram: an aggregate with two plexes, Plex 0 built from Pool 0 disks and Plex 1 built from Pool 1 disks.)

Pools set by disk ownership (software only for Data ONTAP 8.0 7-Mode and later)

METROCLUSTER AND SYNCMIRROR


Stretch MetroCluster
Connectivity

The cluster heartbeat:
Is through InfiniBand
Uses nonvolatile RAM (NVRAM) cards, except the FAS3100 series, which requires an FC-VI card
Is less than or equal to 500m when using a point-to-point connection*
Uses FC cabling
Uses patch panels to reduce distance

Disk shelves:
Support up to the platform limit
Are 2 Gb or 4 Gb
Are ATA-supported
Use disk ownership that is the same as with fabric MetroCluster

*Must use OM3 type cabling (or better) to achieve 500m

STRETCH METROCLUSTER
CONNECTIVITY

Two versions of MetroCluster exist: fabric and stretch.


Stretch is for short distances of up to 500m and with a direct FC connection between the systems.
Fabric is the long-distance version, for up to 30 km out of the box or up to 100 km with a policy-variance
request (PVR).
The heartbeat for an HA pair uses the InfiniBand connections on the nonvolatile RAM (NVRAM) card of the
FAS6000 series.
Because the FAS3100 series uses a chassis connection (dual-controller chassis) for the heartbeat, a stretch
MetroCluster requires an FC-VI card.


Fabric MetroCluster
Metropolitan Area Distances

100 km with policy-variance request (PVR)

The switched MetroCluster deployment uses high-powered, longwave SFPs in Brocade switches to achieve distance.

(Diagram: Building A and Building B connected through FC switches over dark fiber; volumes X and Y are mirrored at both sites across the HA interconnect (FC-VI), A-loop, and B-loop.)

FABRIC METROCLUSTER
METROPOLITAN AREA DISTANCES

Two versions of MetroCluster exist: fabric and stretch.


Stretch is for short distances of up to 500m and with a direct FC connection between the systems.
Fabric is the long-distance version, for up to 30 km out of the box or up to 100 km with a policy-variance
request (PVR).
Functionally, the switched MetroCluster environment is identical to the nonswitched environment. The major
exception is the distance that can be achieved with the switched back end.
Here is an example of the long-distance version. The cluster interconnect, the NVRAM mirroring, the
heartbeat, and the disk mirroring go over dark fiber. As with standard clusters, production runs on both sides:
volume X is mirrored over to X prime, and volume Y is in production on the other side and is mirrored over
to Y prime.
The mirroring of data can go in both directions and frequently is performed both ways. Brocade switches are
used to achieve the distance; the switch must be one of the specific Brocade models that NetApp sells with
the solution.


Fabric MetroCluster
Connectivity (1 of 2)

Cluster interconnect: VI over FC (versus SCSI):
- FC-VI (HA interconnect) card required
FC switches:
- Disk and controller interconnect
- Brocade switches (see NetApp documentation for current models), licensed for full fabric (multiswitch fabric)
- No support for customer-supplied switches
Configuring for long distances:
- Up to 10 km: four longwave SFPs
- Greater than 10 km:
  - Four extended longwave SFPs (Brocade-certified)
  - Extended-distance license required; buffer credits set accordingly

FABRIC METROCLUSTER
CONNECTIVITY (1 OF 2)

Because all connections, including the heartbeat for the HA pair, have been moved onto an FC-switched environment, a fabric MetroCluster requires an FC-VI card in the FAS6000 and FAS3000 series.
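The slide's point that the extended-distance license sets buffer credits accordingly can be made concrete with a rule-of-thumb calculation. The sketch below is illustrative only: the formula and the full-frame assumption are common fabric planning approximations, not NetApp or Brocade specifications.

```python
# Rule-of-thumb estimate of buffer-to-buffer (B2B) credits needed to keep
# a long Fibre Channel ISL fully utilized.  Assumes full-size FC frames
# and uses the widely quoted planning approximation
#     credits ~= (distance_km * speed_gbps / 2) + 6
# Neither the formula nor the margin of 6 is a NetApp specification.

def required_bb_credits(distance_km: float, speed_gbps: float) -> int:
    """Estimate B2B credits for a long-distance ISL at the given speed."""
    return int(distance_km * speed_gbps / 2) + 6

# A 100 km fabric MetroCluster ISL at 2 Gbps:
print(required_bb_credits(100, 2))   # → 106
# The 30 km out-of-the-box distance at 2 Gbps:
print(required_bb_credits(30, 2))    # → 36
```

Note how the credit requirement scales with both distance and link speed, which is why the extended-distance license matters well before the 100 km limit.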


Fabric MetroCluster
Connectivity (2 of 2)

Storage ports:
- Can be 2 Gbps: two dual-port FC HBAs or four onboard ports
- Can be 4 Gbps: four onboard ports (model-specific) or quad-port FC HBAs
Disk shelves:
- Shelves on each loop must be the same speed.
- Two shelves per loop is the maximum.
- ATA shelves are not supported.
- Depending on the FAS system, ownership is determined by software or hardware rules.
- Disk shelves are attached to the same ports on both switches (hardware ownership).

FABRIC METROCLUSTER
CONNECTIVITY (2 OF 2)

In many customer solutions, NetApp uses Brocade 200E switches for physical connectivity. The MetroCluster fabric operates well in switched environments.
Switches are prewired, preconfigured, internal components of MetroCluster. Just as you do not have a choice of disks, you do not have a choice of switches.
Only one Inter-Switch Link (ISL) connection exists between each pair of switches. Any switch port can be used.
Trunking is not supported.
The cluster interconnect uses a VI interconnect card (X1922A): VI over FC (versus SCSI). The card is a variant of the standard QLogic QLA2352, currently a 2-Gb card.


MetroCluster: SAS Support

[Diagram: controllers FMC1-1 and FMC1-2 connected through the fabric to SAS shelf stacks S1, S2, S3, and S4 by way of ATTO FibreBridge 6500N bridges; fabric-attached shown, stretch also supported.]

METROCLUSTER: SAS SUPPORT


Initially, fabric-attached MetroCluster supported only Fibre Channel (FC) arbitrated-loop shelves. With Data ONTAP 8.1 operating in 7-Mode, NetApp introduces support for an FC-to-SAS (Serial Attached SCSI) bridge, which maintains the distance benefits of FC while leveraging newer SAS disk and shelf technology.
The FC-to-SAS bridge is the ATTO FibreBridge 6500N. This bridge has the following features:
- A Fibre Channel (FC) to Serial Attached SCSI (SAS) bridge from ATTO Technology that supports the SAS disk shelves (DS4243 and DS2246) in stretch and fabric MetroCluster configurations
- Two 8-Gb Fibre Channel SFP+ ports
- Two x4 6-Gb SAS QSFP+ ports (only SAS port A is used; port B is disabled and not usable)
- Two Ethernet ports
- One serial port
- Standard 1U, 19-inch rack-mount form factor
- Management through Ethernet (recommended) or RS-232
- Single integrated power supply (100-240 V AC)
Always check the NetApp Interoperability Matrix.


Shared Fabric

[Diagram: two fabric MetroCluster pairs (FMC1-1/FMC1-2 and FMC2-1/FMC2-2) sharing switches S1, S2, S3, and S4; two ISLs connect the sites on ports 17 and 18 of each switch.]

SHARED FABRIC
Prior to Data ONTAP 8.1, a single fabric MetroCluster (FMC) uses four dedicated switches, which carry HA-interconnect and storage traffic. This means that a storage administrator who needs two fabric MetroCluster setups uses eight switches, and these switches might be under-utilized in some environments. If the existing switches and ISLs are carrying less than 50 percent of their maximum capacity, the storage administrator may opt for a shared fabric configuration. In this configuration, two fabric MetroCluster setups use just four switches.
The example on the slide illustrates a simple shared fabric MetroCluster scenario. The connections described are not the only way to connect these storage systems, disks, and switches; treat them only as an example that clarifies the solution. In this setup, FMC1 and FMC2 form two fabric MetroCluster pairs that share the switches and the ISLs between the switches. The switches are named S1, S2, S3, and S4, with domain IDs 1, 2, 3, and 4, respectively. For simplicity, assume that each storage controller has two FC-VI ports and two HBA ports, with one of each connected to the primary switch and one of each to the secondary switch. FMC1 storage controllers connect their FC-VI and HBA ports to switch ports 0 and 2, respectively; FMC2 storage controllers use port 1 for FC-VI and port 3 for HBA. The disk shelves connect to the switches through ports 4, 5, 6, and 7, and two ISLs run on ports 17 and 18 of every switch. In summary, this configuration has F-ports on 0, 1, 2, and 3; E-ports on 17 and 18; and L-ports on 4, 5, 6, and 7.
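The port assignments in this shared-fabric example can be written down as a small lookup table. The port numbers restate the course example; the data structure itself is only an illustration, not a switch configuration format.

```python
# Port roles per switch in the shared-fabric example.  Ports 0-3 are
# F-ports (controller FC-VI and HBA connections), 4-7 are L-ports (disk
# shelf loops), and 17-18 are E-ports (the shared ISLs).
PORT_ROLES = {
    "F-port (controller FC-VI and HBA)": [0, 1, 2, 3],
    "L-port (disk shelf loops)": [4, 5, 6, 7],
    "E-port (ISL)": [17, 18],
}

def role_of(port: int) -> str:
    """Return the role a given switch port plays in this layout."""
    for role, ports in PORT_ROLES.items():
        if port in ports:
            return role
    return "unused"

print(role_of(17))  # → E-port (ISL)
print(role_of(5))   # → L-port (disk shelf loops)
```

A table like this makes it easy to verify that the two MetroCluster pairs never contend for the same switch ports, which is the premise of the shared-fabric design.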


Supported MetroCluster

Two-chassis configurations:
- Each chassis is single-enclosure and standalone:
  - FAS3210 controller with blank
  - FAS3240 and FAS3270 controller with IOXM
Two chassis with single-enclosure high availability (twin):
- Supported on all three FAS3200 systems
- Not directly quotable, but supported

SUPPORTED METROCLUSTER


Supported Configurations

Storage system platforms:
- FAS3040/3070
- FAS31XX/FAS32XX
- FAS60XX/FAS62XX
Fabric-attached MetroCluster supports:
- Brocade switches:
  - Brocade 200E
  - Brocade 5000
  - Brocade 300
  - Brocade 5100
- Brocade Fabric Operating System version 6.0.x or later
- Brocade licenses:
  - Full-fabric license
  - Extended-distance license (if over 10 km)
  - Ports-on-demand licenses for additional ports, if necessary

SUPPORTED CONFIGURATIONS


Stretch MetroCluster Support

Storage system platforms:
- FAS3040/3070
- FAS31XX/32XX
- FAS60XX/62XX
Disk ownership method:
- Software only
Interconnect hardware:
- FC-VI adapter
- Copper-to-fiber converters for interconnect (FAS32XX and FAS62XX)
See the MetroCluster Compatibility Matrix on the NOW site.

STRETCH METROCLUSTER SUPPORT


Best Practices
MetroCluster

Choose the correct controller based on normal sizing tools and methods.
Connections are important:
- Calculate distances properly.
- Use the correct fiber for the job.
- Verify the correct SFPs, required cables, and patch panels.
Remember that mirroring requires two times the disk requirements.
Remember the MetroCluster spindle maximums.
Remember the speed restrictions.

BEST PRACTICES
METROCLUSTER

Controller sizing for MetroCluster is the same as for a standard active-active system configuration.
Be aware of the impact of SyncMirror software:
- Under heavy load, write performance decreases by approximately five percent.
- If you activate reads from both plexes, read performance can increase.
- Mirrored plexes result in half the usable maximum spindle count.
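The doubling effect of mirrored plexes amounts to a quick arithmetic check at sizing time. The sketch below is illustrative; the 672-spindle platform limit is a hypothetical example, not a NetApp specification, so substitute the documented maximum for the platform in question.

```python
# SyncMirror keeps two plexes, so the physical spindle count is double
# the usable data spindle count.  The 672-spindle limit is hypothetical.
def mirrored_spindles(usable_spindles: int, platform_spindle_max: int = 672):
    """Return (physical spindles required, whether it fits the platform)."""
    physical = usable_spindles * 2          # one copy per plex
    fits = physical <= platform_spindle_max
    return physical, fits

print(mirrored_spindles(300))  # → (600, True)
print(mirrored_spindles(400))  # → (800, False)
```

The second call shows why the halved usable spindle maximum matters: a configuration that fits comfortably on an unmirrored system can exceed the platform limit once it is mirrored.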

Connections are important:
- Stretch MetroCluster interconnect is InfiniBand with a Multi-Fiber Push-On (MPO) adapter. (Check the customer's patch panels.)
- Remember to account for patch panels in distance and link-budget calculations.
- Ensure that the correct type of fiber is in use.
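The patch-panel and distance advice amounts to an optical link-budget check. The loss figures in this sketch (0.4 dB/km fiber attenuation, 0.5 dB per patch-panel connector pair, a 3 dB safety margin, and the SFP power levels in the example) are typical planning assumptions, not NetApp specifications.

```python
# Hypothetical optical link-budget check for a MetroCluster fiber run.
# All loss figures are common planning numbers, not NetApp values.
FIBER_LOSS_DB_PER_KM = 0.4   # single-mode fiber at 1310 nm, typical
CONNECTOR_LOSS_DB = 0.5      # per patch-panel connector pair, typical
SAFETY_MARGIN_DB = 3.0       # headroom for aging and splices

def link_budget_ok(distance_km, patch_panels, tx_power_dbm, rx_sensitivity_dbm):
    """Compare total path loss against the optical budget of the SFP pair."""
    loss = (distance_km * FIBER_LOSS_DB_PER_KM
            + patch_panels * CONNECTOR_LOSS_DB
            + SAFETY_MARGIN_DB)
    budget = tx_power_dbm - rx_sensitivity_dbm   # total dB available
    return loss <= budget, loss, budget

# 25 km run through 4 patch panels with a longwave SFP (-3 dBm TX, -20 dBm RX):
ok, loss, budget = link_budget_ok(25, 4, -3.0, -20.0)
print(ok, round(loss, 1), round(budget, 1))  # → True 15.0 17.0
```

Every patch panel adds to the loss column, which is why the notes insist on counting them: a run that is fine on fiber distance alone can still fail once connector losses are included.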


High Availability
Summary

Review the Data ONTAP 8.1 7-Mode High-Availability Configuration Guide for information about:
- Fault tolerance
- Nondisruptive software upgrades
- Nondisruptive hardware maintenance
- Specifications and comparisons

HIGH AVAILABILITY
SUMMARY

HA provides fault tolerance and the ability to perform nondisruptive upgrades and maintenance. Configuring storage systems in an HA pair provides the following benefits:
Fault tolerance: When one node fails or becomes impaired, a takeover occurs, and the partner node continues to serve the failed node's data.
Nondisruptive software upgrades: When you halt one node and allow takeover, the partner node continues to serve data for the halted node while you upgrade the node you halted.
Nondisruptive hardware maintenance: When you halt one node and allow takeover, the partner node continues to serve data for the halted node while you replace or repair hardware in the node you halted.


Module Summary

Now that you have completed this module, you should be able to:
- Discuss the NetApp disaster-recovery architecture:
  - RPO
  - RTO
- Articulate the key features of SnapMirror software
- Discuss the benefits of MetroCluster technology
- Position NetApp products and services for disaster recovery

MODULE SUMMARY



THANK YOU
