
NetApp University

Data ONTAP 7.3 Fundamentals


Student Guide

NetApp University - Do Not Distribute


NETAPP UNIVERSITY

Data ONTAP® 7.3 Fundamentals


Student Guide
Version Number: 5.0
Release Number: Data ONTAP 7.3
Course Number: STRSW-ED-ILT-DOTF-REV05
Student Guide Part Number: STRSW-ED-ILT-DOTF-REV05-SG



ATTENTION
The information contained in this guide is intended for training use only. This guide contains information
and activities that, while beneficial for the purposes of training in a closed, non-production environment,
can result in downtime or other severe consequences; it is therefore not intended as a reference guide.
This guide is not a technical reference and should not, under any circumstances, be used in
production environments. To obtain reference materials, refer to the NetApp product
documentation on the NOW site at http://now.netapp.com.

COPYRIGHT
© 2008 NetApp. All rights reserved. Printed in the U.S.A. Specifications subject to change
without notice.
No part of this book covered by copyright may be reproduced in any form or by any means—graphic,
electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval
system—without prior written permission of the copyright owner.
NetApp reserves the right to change any products described herein at any time and without notice.
NetApp assumes no responsibility or liability arising from the use of products or materials described
herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product or
materials does not convey a license under any patent rights, trademark rights, or any other intellectual
property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.

RESTRICTED RIGHTS LEGEND


Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph
(c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013
(October 1988) and FAR 52.227-19 (June 1987).

TRADEMARK INFORMATION
NetApp, the NetApp logo, Go further, faster, FAServer, NearStore, NetCache, WAFL, DataFabric,
FilerView, SecureShare, SnapManager, SnapMirror, SnapRestore, SnapVault, Spinnaker Networks,
the Spinnaker Networks logo, SpinAccess, SpinCluster, SpinFS, SpinHA, SpinMove, SpinServer, and
SpinStor are registered trademarks of Network Appliance, Inc. in the United States and other countries.
Network Appliance, Data ONTAP, ApplianceWatch, BareMetal, Center-to-Edge, ContentDirector, gFiler,
MultiStore, SecureAdmin, Smart SAN, SnapCache, SnapDrive, SnapMover, Snapshot, vFiler, Web
Filer, SpinAV, SpinManager, SpinMirror, and SpinShot are trademarks of NetApp, Inc. in the United
States and/or other countries.
Apple is a registered trademark and QuickTime is a trademark of Apple Computer, Inc. in the United
States and/or other countries.
Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the
United States and/or other countries.
RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered
trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the
United States and/or other countries.
All other brands or products are trademarks or registered trademarks of their respective holders and
should be treated as such.
NetApp is a licensee of the CompactFlash and CF Logo trademarks.

2 Data ONTAP® 7.3 Fundamentals: Welcome

© 2008 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.
TABLE OF CONTENTS
WELCOME ............................................................................................................................1
MODULE 1: OVERVIEW ......................................................................................................1-1
MODULE 2: INSTALLATION AND CONFIGURATION ......................................................2-1
MODULE 3: BASIC ADMINISTRATION .............................................................................3-1
MODULE 4: ADMINISTRATION SECURITY ......................................................................4-1
MODULE 5: NETWORKING ................................................................................................5-1
MODULE 6: PHYSICAL STORAGE MANAGEMENT ..........................................6-1
MODULE 7: LOGICAL STORAGE MANAGEMENT ...........................................................7-1
MODULE 8: CIFS .................................................................................................................8-1
MODULE 9: NFS ...................................................................................................................9-1
MODULE 10: QTREES AND SECURITY STYLE ..............................................................10-1
MODULE 11: SAN ...............................................................................................................11-1
MODULE 12: SNAPSHOT COPIES ...................................................................................12-1
MODULE 13: WRITE AND READ REQUEST PROCESSING............................................13-1
MODULE 14: SYSTEM DATA COLLECTION ....................................................................14-1
MODULE 15: FLEXSHARE ................................................................................................15-1
MODULE 16: NDMP FUNDAMENTALS ............................................................................16-1
MODULE 17: ACTIVE-ACTIVE CONTROLLER CONFIGURATION .................................17-1
MODULE 18: FINAL WORDS .............................................................................................18-1

Welcome



Data ONTAP® 7.3
Fundamentals
Release#: Data ONTAP 7.3
Course Part#: STRSW-ED-ILT-DOTF-REV05

DATA ONTAP® 7.3 FUNDAMENTALS

Logistics

• Introductions
• Schedule (start time, breaks, lunch, close)
• Telephones and messages
• Food and drinks
• Restrooms

© 2008 NetApp. All rights reserved. 2

LOGISTICS

Safety

• Alarm signal
• Evacuation route
• Assembly area
• Electrical safety


SAFETY

Course Objectives

By the end of this course, you should be able to:

• List and describe types of storage systems (enterprise, virtualized, nearline, and high-performance computing)
• Use the NOW™ (NetApp on the Web) knowledge base to resolve technical issues
• Describe the Data ONTAP® operating system
• Identify client-server and multi-tiered storage systems
• Manage and configure system interfaces
• Identify and use naming services
• Use a storage system console to access and execute commands


COURSE OBJECTIVES

Course Objectives (Cont.)

• Explain features of the WAFL® (Write Anywhere File Layout) file system
• Identify and manage Snapshot™ copies
• Create and manage aggregates, volumes, and disks
• Configure and manage Network File System (NFS) exports
• Set up and manage Common Internet File System (CIFS) protocol shares
• Describe storage area network (SAN) features in Data ONTAP
• Identify techniques to collect performance data for a storage system


COURSE OBJECTIVES (CONT.)

Course Agenda

• Day 1
  – Data ONTAP Fundamentals
  – Installation and Configuration
  – Basic Administration
• Day 2
  – Administration Security
  – Networking
  – Physical Storage Management


COURSE AGENDA

Course Agenda (Cont.)

• Day 3
  – Logical Storage Management
  – Common Internet File System
  – Network File System
  – Qtrees and Security Styles
• Day 4
  – Storage Area Networks
  – Snapshot Copies
  – Write and Read Request Processing
  – System Data Collection


COURSE AGENDA (CONT.)

Course Agenda (Cont.)

• Day 5
  – FlexShare™
  – Active-Active Controller Configuration
  – NDMP Fundamentals
  – Final Words


COURSE AGENDA (CONT.)

Information Sources

• NOW™ (NetApp on the Web) site
  – http://NOW.NetApp.com
• NetApp University
  – http://www.netapp.com/us/services/university/
• NetApp University Support
  – http://netappusupport.custhelp.com


INFORMATION SOURCES

Typographic Conventions

Convention: Italic font
Type of information:
• Book titles
• Words or characters that require special attention
• Variable names or placeholders for information you must supply, for example:
  Enter the following command:
  ifstat [-z] {-a interface}
  where interface is the name of the interface for which you want to view statistics.

Convention: Monospaced font
Type of information:
• Command names, daemon names, and option names
• Information displayed on the system console or other computer monitors
• The contents of files

Convention: Bold monospaced font
Type of information:
• Words or characters you type, for example:
  Enter the following command:
  options httpd.enable on


TYPOGRAPHIC CONVENTIONS

© 2008 Network Appliance, Inc. All rights reserved. Specifications are subject to change without notice. NetApp, the Network Appliance logo,
NearStore, SnapLock, and SnapVault are registered trademarks and Network Appliance, DataFort, FlexClone, and FlexVol are trademarks of
Network Appliance, Inc. in the U.S. and/or other countries. Windows is a registered trademark of Microsoft Corporation. UNIX is a registered
trademark of The Open Group. Oracle is a registered trademark of Oracle Corporation. All other brands or products are trademarks or registered
trademarks of their respective holders and should be treated as such.
Overview



MODULE 1: DATA ONTAP FUNDAMENTALS

Overview
Module 1
Data ONTAP® 7.3 Fundamentals

OVERVIEW

Module Objectives

By the end of this module, you should be able to:

• State the advantages, features, and functions of a storage system
• Identify the key features of NetApp® product series
• Distinguish between SAN and NAS topologies
• Describe the basic functions of the Data ONTAP operating system
• Access the NOW™ (NetApp on the Web) knowledge base to obtain software and hardware documentation

MODULE OBJECTIVES

Products


PRODUCTS

Storage System

• The primary function of a storage system is to store data.
• NetApp® enterprise storage systems function as "unification engines" that simultaneously support FC and IP SANs, and NAS.


STORAGE SYSTEM

STORAGE-SYSTEM ARCHITECTURES

The primary function of a storage system is to store data. NetApp storage systems have
integrated disks that store data in a variety of network and storage environments, including
SAN and NAS.

The two main protocols used in SAN are Fibre Channel Protocol (FCP) and Internet Small
Computer System Interface (iSCSI). The two main protocols used in NAS are Network File
System (NFS) and Common Internet File System (CIFS). NetApp storage systems use the
Data ONTAP operating system to ensure simple operation, speed, and reliability.

Unified Storage

[Slide diagram: a NetApp FAS system provides unified storage, serving SAN data as blocks over FC and iSCSI, and NAS data as files over CIFS and NFS, on the corporate Ethernet LAN]


UNIFIED STORAGE

SAN
• Is a block-based storage system
• Makes data available over the network
• Uses FC and iSCSI protocols

NAS
• Is a file-based storage system
• Makes data available over the network
• Uses NFS and CIFS protocols

The NetApp SAN and unified storage architecture provides an outstanding level of
investment protection and flexibility. The fabric-attached storage (FAS) system at the bottom
of the diagram implies one "box." However, the actual storage environment includes small and large FAS
systems, and NearStore® systems.

NetApp Data ONTAP 7G Products

Hardware:
• Data center storage: FAS3000 series and FAS6000 series (unified, enterprise-class storage)
• Remote office/departmental storage: FAS200 series and FAS2000 series
• Near-line storage: NearStore® on FAS (economical secondary storage)
• Storage virtualization: V-Series (dynamic virtualization for heterogeneous storage)

Data ONTAP® 7G Operating System – NAS, FC SAN, IP SAN

Software:
• Provisioning and volume management: FlexClone®, FlexVol®, MultiStore®
• Performance management: FlexCache™, FlexShare™
• Data protection: MetroCluster, Open Systems SnapVault®, ReplicatorX™, SnapMirror®, SnapRestore®, Snapshot™, SnapVault, SnapValidator®, SyncMirror®
• Data retention: SnapLock®, LockVault™
• Manageability: ApplianceWatch™ for MOM, CommandCentral™ Storage, File Storage Resource Manager, FilerView®, Operations Manager, Protection Manager, Provisioning Manager, SnapDrive®, SnapManager®, VFM® (Virtual File Manager)
• System resiliency: Active-active controller configuration, RAID-DP®

Some of these products are included at no additional cost; Open Systems SnapVault, ReplicatorX, and VFM are not restricted to Data ONTAP 7G environments.


NETAPP DATA ONTAP 7G PRODUCTS

NETAPP HARDWARE

NETAPP FAS

NetApp FAS systems comprise some of the largest families of compatible storage systems in
the storage industry today. NetApp FAS systems integrate easily into complex enterprise
environments and provide shared access to UNIX, Microsoft® Windows®, Linux®, and
Web data while simultaneously supporting FC SAN, IP SAN, iSCSI, and NAS. FAS systems
are designed to consolidate and serve data for e-mail, Enterprise Content Management
(ECM), technical applications, files, home directories, and Web content.

NEARSTORE

NearStore is a near-line storage system that uses disk drives organized in disk shelves.
NearStore systems store data that requires faster access than tape can provide but is accessed
less frequently than primary data. The product requires a software license to allow ATA and FC
disks on the same system. NearStore combines the Data ONTAP operating system with
inexpensive serial advanced technology attachment (SATA) disks to provide disk-storage
performance and flexibility at near-tape storage costs.

V-SERIES SYSTEMS

V-Series systems are virtual storage systems for multiprotocol, multivendor storage
environments. V-Series systems enable simultaneous NAS and SAN access to existing FC SAN
infrastructures.

NETAPP SOFTWARE

The NetApp manageability software family consists of four suites that provide software tools
for effective data management.

The NetApp Application Suite delivers increased productivity and flexibility across the
entire enterprise. The various NetApp SnapManager® software products enable you to
improve data availability, reduce unexpected data loss, and increase storage management
flexibility by leveraging the power of integrated NetApp storage systems.

The NetApp Server Suite includes the SnapDrive® and ApplianceWatch™ product families.
SnapDrive provides a server-aware alternative to maintaining manual host connections to
underlying NetApp storage systems. ApplianceWatch products integrate with third-party
system management tools from HP, IBM, and Microsoft. ApplianceWatch allows
administrators to view, monitor, and manage NetApp storage systems from within their
respective system-management environments.

The NetApp Data Suite, consisting of Protection Manager and VFM® (Virtual File Manager,
Enterprise Edition and Migration Edition), provides effective tools for abstracting storage and
enables administrators and users to think in terms of data and data management rather than
the underlying storage.

With the NetApp Storage Suite of products, including Operations Manager, File Storage
Resource Manager, SAN Manager, and Command Central Storage, you will be able to do
more with less. Instead of managing separate physical storage systems, you can view and
manage multiple devices from central consoles.

"Not restricted to Data ONTAP 7G environments" means:

• In the case of Open Systems SnapVault®, the agents run on systems that do not use the Data
ONTAP operating system (UNIX, Linux, or Windows systems). The agents communicate with a
SnapVault secondary, which must run on a machine equipped with Data ONTAP 7G.
• ReplicatorX™ and VFM software run on machines that are not running Data ONTAP 7G,
and can be sold into environments where no Data ONTAP 7G-based storage system has
been sold. They are identified here because combining them with storage systems equipped with
Data ONTAP and software that is based on Data ONTAP (for example, FlexClone®) can be quite
powerful.

NETAPP STORAGE SUITE:
NETAPP COMMANDCENTRAL STORAGE BY SYMANTEC

NetApp CommandCentral™ Storage (CCStorage) by Symantec® provides a centralized
operational console for delivering storage management services in large-scale, heterogeneous,
FC and iSCSI SAN environments. It complements the NetApp native storage management
technology and integrates storage resource management, performance and policy
management, provisioning, and SAN management capabilities. The efficient management of
heterogeneous SAN storage resources underpins service-level agreements, ensuring improved
performance and availability. In addition, CCStorage offers customizable, policy-based
management to automate notification, recovery, and other user-definable actions.

NETAPP STORAGE SUITE:
FILE STORAGE RESOURCE MANAGER

File Storage Resource Manager (FSRM) enables you to better understand the types of files in
your storage environment. Data is classified in terms of file size, file age, modification
history, access history, file type usage, and file owner usage. This data-classification
information gives you a clear picture of data within your organization. It allows you to put
policies in place that eliminate stale data, unused data, and data stored by personnel who are
no longer employed by your company.

FSRM also provides quota management. Administrators can set soft thresholds and hard
quotas. The system then issues policy violation notices. These FSRM features all contribute
to better storage utilization.

NETAPP STORAGE SUITE:
OPERATIONS MANAGER

NetApp Operations Manager (formerly DataFabric® Manager) delivers comprehensive
monitoring and management for NetApp enterprise-storage and content-caching
environments. From a central point of control, Operations Manager provides alerts, reports,
and configuration tools to keep your storage infrastructure in line with your business
requirements, for maximum availability and reduced total cost of ownership (TCO).

No other single management application provides the same level of NetApp monitoring and
management for NetApp FAS systems and NearStore near-line storage systems. The detailed
performance and health monitoring tools available through Operations Manager give
administrators proactive information to help resolve potential problems before they occur,
and to troubleshoot problems faster when they do occur.

PROTECTION MANAGER

Protection Manager is an intuitive and innovative backup and replication-management
software package for NetApp disk-based data-protection environments. Protection Manager
delivers greater assurance of data protection and higher productivity by providing policy-based
management (including automated data-protection setup). This management
application allows you to apply consistent data-protection policies across the enterprise,
automate complex data-protection processes, and pool backup and replication resources for
higher utilization.

PROVISIONING MANAGER

Provisioning Manager provides automated, policy-based provisioning for NetApp NAS and
SAN environments. The software automates manual and repetitive provisioning processes,
increasing the productivity of administrators, and improves the availability of data by providing
policy compliance for provisioned storage.

Provisioning Manager improves the productivity of storage administrators by automating
repetitive provisioning processes and allowing the delegation of provisioning activities to
others. This product also helps improve capacity utilization by leveraging flexible volumes
and intelligently provisioning storage from resource pools.

FAS2000 Series Overview

Features common to the FAS2020 and FAS2050:
• High-performance Serial-Attached SCSI (SAS) infrastructure
• SAS or SATA drives configurable internally
• Single controller or dual controller for high availability (HA)
• Unified storage for iSCSI, NAS, FC SAN, FC tape, and SCSI tape
• Dual GigE ports and dual 4Gb FC ports per controller
• Onboard remote platform management

FAS2020:
• 12 internal drive bays (2U)
• No PCIe slots
• Up to 4 external shelves

FAS2050:
• 20 internal drive bays (4U)
• 1 PCIe slot per controller
• Up to 6 external shelves


FAS2000 SERIES OVERVIEW


The NetApp FAS2000 series systems support internal disk-drive configurations. The
FAS2050 features 20 drives in a 4U chassis and provides flexibility with PCIe expansion
slots. The SAS-attached storage provides the same high performance and quality as FC
drives, but the interface also supports lower-cost SATA drives.

You can use the FAS2000 series to manage your dispersed, expanding, and complex data
requirements, and leverage the common operating system, management tools, backup and
restore functions, and disaster-recovery solutions to support your special business needs. The
FAS2000 series also helps reduce system costs by providing high data availability and lower
downtime costs.

FAS3000 Series Overview
The FAS3000 series features:
• A modular system with integrated I/O
  – Eight 4Gb FC and eight GbE ports onboard
  – Meets the needs of most configurations
  – Occupies six rack units (RU)
• Superior scalability
  – Up to 504 TB storage capacity
  – Up to 504 FC or SATA spindles
  – Six PCI slots
  – Up to 32 FC ports
  – Up to 32 GbE ports
• Built-in, enterprise-class manageability with enhanced capabilities through RLM

FAS3000 SERIES OVERVIEW


The above summary assumes active-active, failover configuration for FAS3000 Series
systems.

Use the FAS3000 to manage up to twice as much data. The FAS3000 can use the Data
ONTAP FlexShare™ software to dynamically adjust workload priorities so that important
applications always get a fast response. Other key benefits include:
• A single platform to satisfy multiple requirements for SAN, NAS, primary storage, and secondary
storage
• Consistent, stable performance when creating Snapshot copies
• Easy upgrades and expansion within the FAS3000 family, and to the high-end FAS6000 family

Storage capacity for the FAS3000 is dependent on the number of spindles and the per-disk
storage density. The 504 TB maximum capacity assumes 504 SATA drives (with 1 TB each).

Enhanced manageability is achieved through the addition of the optional Remote LAN
Module (RLM). Key features offered by the RLM include failure alerts through e-mail
notifications, real-time monitoring and event logging, console redirection, and remote
power-cycling.

FAS6000 Series Overview
The FAS6000 series features:
• A modular system with integrated I/O
  – Sixteen 4Gb FC* and twelve GbE ports onboard***
  – Occupies twelve RU***
• Superior scalability
  – Up to 1,176 TB storage capacity**
  – Up to 1,176 FC or SATA spindles**
  – Up to 56 FC ports***
  – Up to 52 GbE ports***
• Built-in, enterprise-class manageability with enhanced capabilities through RLM

* Available on FAS6040 and FAS6080
** Requires Data ONTAP 7.2.4 or higher
*** Enterprise active-active configuration

FAS6000 SERIES OVERVIEW


The FAS6000 series is designed for the largest enterprise applications and demanding
technical applications. Because they are highly scalable and very flexible, the FAS6000
systems are ideally suited for storage consolidation supporting hundreds of applications.

Data can be stored either in file or block format. Supported protocols include FCP, NFS,
CIFS, HTTP, and iSCSI.

Shelf Compatibility

[Slide table: a compatibility matrix of disk shelves (DS14, grey; DS14mk2, grey; DS14mk4, black; R100/R150, ATA shelf; DS14mk2-AT, black) against platforms. FAS6000, FAS3000, and FAS2000 systems each support four of the five shelf types; R200 and FAS2XX systems each support two.]


SHELF COMPATIBILITY

DISK SHELVES

NetApp storage systems use disk shelves as an alternative to sequential-access tape drives.
Each storage system has several FC ports at the rear of the system where you can attach disk
shelves. The number of ports varies by storage system.

NAS Versus SAN


NAS VERSUS SAN

NAS Versus SAN Topology

[Slide diagram: a NetApp FAS system serves SAN data as blocks over FC and iSCSI, and NAS data as files over CIFS and NFS, on the corporate Ethernet LAN]


NAS VERSUS SAN TOPOLOGY


NAS, which is file-based, was initially designed for data sharing in a LAN environment and
incorporates file system capabilities into the storage device. In a NAS environment, servers
are connected to a storage system through a standard Ethernet network and use standard file
access protocols such as NFS and CIFS to make storage requests. Local file system calls from
clients are redirected to the NAS device, which provides shared file storage for all clients. If
clients are desktop systems, the NAS device provides "serverless" file serving. If clients are
server systems, the NAS device offloads the data management overhead from the servers.

SAN, which is block-based, lowers TCO and increases the performance and availability of
corporate storage resources. Because of these benefits, SANs that are based on FC technology
have become standard in many corporate data centers.

NetApp IP SAN (iSCSI) encapsulates SCSI block-storage commands into Ethernet packets
for transport over IP networks, enabling companies to leverage standard, familiar Ethernet
networking infrastructures to create affordable SANs.
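The client-side difference between the two models can be illustrated with a few typical host commands (the filer name, export path, share name, and portal address below are hypothetical):

```
# NAS over NFS: a UNIX client mounts an exported file system by path
mount -t nfs filer1:/vol/vol1/home /mnt/home

# NAS over CIFS: a Windows client maps a share by name
net use H: \\filer1\home

# IP SAN over iSCSI: a Linux host discovers block targets on the same
# Ethernet network (open-iscsi sendtargets discovery)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
```

In the NAS cases the storage system owns the file system and clients ask for files; in the iSCSI case the host receives a raw block device and formats it with its own file system.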

Data ONTAP Supported Protocols

Data ONTAP supports the following protocols:
• NAS
  – NFS
  – CIFS
  – FTP
  – HTTP
  – WebDAV
• SAN
  – FCP
  – iSCSI

[Slide diagram: NFS, CIFS, and iSCSI traffic reaches the Data ONTAP platform over the LAN (TCP/IP); FCP traffic arrives over FC]


DATA ONTAP SUPPORTED PROTOCOLS


NFS: The NFS protocol allows UNIX and PC NFS clients to mount file systems to local
mount points. The storage appliance supports NFS v2, NFS v3, and NFS v4, and supports NFS
over User Datagram Protocol (UDP) and Transmission Control Protocol (TCP).

CIFS: The CIFS protocol supports Windows 2000, Windows for Workgroups, and Windows
NT 4.0.

HTTP: HTTP enables Web browsers to display files that are stored on the storage
appliance.

FTP: FTP enables UNIX clients to remotely transfer files to and from the storage
appliance.

WebDAV: Web-based Distributed Authoring and Versioning (WebDAV) enables certain
applications to create, modify, and access files over HTTP-extended methods.

FCP or iSCSI: The FCP and iSCSI protocols enable a storage device to communicate with one
or more hosts running operating systems such as Solaris™ or Windows in a SAN
environment. You can also configure logical units (LUNs) for multiprotocol block access,
present storage as files for file access, or both.
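As a sketch of how these services are switched on from the storage system console (a minimal example for Data ONTAP 7G; the license code shown is a placeholder):

```
license add XXXXXXX        # install the protocol license (placeholder code)
options nfs.tcp.enable on  # allow NFS mounts over TCP
exportfs -a                # export all file systems listed in /etc/exports
iscsi start                # start the iSCSI block service
fcp start                  # start the FCP block service
```

On Data ONTAP 7G, each protocol generally requires its license to be installed before the corresponding service can be started.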

Architecture


ARCHITECTURE

Data ONTAP Architecture

(Diagram: multiple clients connect over the network to the Data ONTAP stack — Network | Protocols | WAFL® | RAID | Storage — supported by system memory and NVRAM, with the storage layer writing to the physical disks.)

DATA ONTAP ARCHITECTURE


Data ONTAP is the operating system that all NetApp storage systems use. It consists of:
• Network Interface: The point of interconnection between the storage system and the
network. The network layer delivers incoming data to RAM through the kernel and supporting libraries.
• Protocol Stack: The protocol stack takes the data placed into RAM by the network layer to
process further. Processing is based on protocols (CIFS, NFS, FCP, iSCSI, FTP, HTTP).
• Write Anywhere File Layout (WAFL): WAFL is an intelligent file system that actively
optimizes write performance by identifying the most effective way to lay out data on the disk.
• RAID Layer: Provides RAID 4 and RAID-DP™ protection for the data processed by
WAFL. The RAID layer creates the stripes used to calculate parity, protects data by
performing RAID scrubs, and assists in the reconstruction of failed disks.
• Storage: Manages data transfer to and from the disks. The storage layer is responsible for
writing to the disks and optimizes the write process according to the data delivered by
WAFL and the RAID layer.
• Nonvolatile Random Access Memory (NVRAM): NVRAM provides a battery-backed log of all
incoming write requests. All transactions that change the state of the file system are processed in
system RAM and logged in NVRAM. Write requests are acknowledged when the request is stored
in both system RAM and NVRAM. Because writes are processed in system RAM, NVRAM
provides battery-backed protection against data loss only in emergency situations. NVRAM is read
only after an improper shutdown, most often due to an emergency loss of power.
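The acknowledgment rule described above — a write is acknowledged only once it exists in both system RAM and NVRAM — can be modeled with a small sketch. All names here are invented for illustration; this is not Data ONTAP internals:

```python
class ToyFiler:
    """Toy model of NVRAM-logged writes; not Data ONTAP internals."""

    def __init__(self):
        self.ram = {}        # file system state processed in system RAM
        self.nvram_log = []  # battery-backed journal of write requests

    def write(self, path, data):
        self.ram[path] = data                # 1. process in system RAM
        self.nvram_log.append((path, data))  # 2. log in NVRAM
        return "ack"  # acknowledged only once both copies exist

    def replay_after_improper_shutdown(self):
        """NVRAM is read only after an improper shutdown: replay the log."""
        recovered = {}
        for path, data in self.nvram_log:
            recovered[path] = data
        return recovered

filer = ToyFiler()
filer.write("/vol/vol0/etc/rc", b"hostname system")
print(filer.replay_after_improper_shutdown())  # state rebuilt from the log
```

The design point the sketch captures: because the journal survives power loss, the client can be acknowledged before the data reaches disk.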

NetApp on the Web

NetApp on the Web (NOW) is the NetApp online support and services Web site.

NETAPP ON THE WEB

NETAPP ON THE WEB (NOW) SITE

The NOW knowledge database provides a source for support, information, and
documentation. NOW is a NetApp customer- and employee-driven knowledge base
accessible at either:

http://www.netapp.com

or

http://now.netapp.com

After you have logged into the NOW database, the Service and Support page is displayed.
From this page, you can access the following administrative support:
• Technical Assistance
• Submit or check the status of a technical assistance case
• Submit or check the status of a Return Materials Authorization (RMA)
• Find bug reports
• Documentation
• Downloads
• Your product information
• Troubleshooting solutions

Simulate ONTAP Benefits

Simulate ONTAP™ provides the following benefits:
ƒ Simulates the experience of administering a
NetApp storage system with all features
available
ƒ Allows administrators to:
– Experiment with new features (simulator runs
with a full set of licenses)
– Practice procedures in a non-production
environment
– Experiment with new versions of Data ONTAP


SIMULATE DATA ONTAP BENEFITS


The Data ONTAP Simulator for Linux provides you with the experience of administering a
NetApp storage system―complete with a "simulated" set of populated disks―with all the
features of Data ONTAP at your disposal.

NOTE: Support for the Data ONTAP Simulator is on a best-effort volunteer basis; therefore,
support is not guaranteed. Please do not call the Global Support Centers for Simulator support
issues.

SIMULATOR REQUIREMENTS
• Hardware
ƒ Intel® processor-based PC
ƒ Network card
ƒ Recommended 256 MB main memory
ƒ Minimum 250 MB free hard drive space
• Linux installed, running, and networked
• Tested on Red Hat® Linux 7.1 through 9.0, SUSE 8.1, and SUSE 8.2
• Must log in as root for installation

Summary

In this module, you should have learned to:


ƒ State the advantages, features, and functions
of a storage system
ƒ Identify the key features of NetApp product
series
ƒ Distinguish between SAN and NAS topologies
ƒ Describe the basic functions of the Data
ONTAP operating system
ƒ Access the NOW knowledge base to obtain
software and hardware documentation


MODULE SUMMARY

Exercise
Module 1: Data ONTAP
Fundamentals
Estimated Time: 45 minutes

EXERCISE
Please refer to your Exercise Guide for more instruction.



MODULE 2: INSTALLATION AND CONFIGURATION

Installation and
Configuration
Module 2
Data ONTAP® 7.3 Fundamentals

INSTALLATION AND CONFIGURATION

Module Objectives
By the end of this module, you should be able to:
ƒ Access the NOW site for the following documents:
– NetApp Configuration Guide
– Data ONTAP System Administration Guide
ƒ Locate hardware components using Parts Finder
ƒ Collect data for installation using a configuration
worksheet
ƒ Interpret the network interface configuration
ƒ Set up console access for a storage system
ƒ Configure a storage system using the setup
command
ƒ Describe how to perform Data ONTAP software
upgrades and reboots


MODULE OBJECTIVES

Documentation


DOCUMENTATION

NOW Support

NOW is an online support and services Web site that provides:
ƒ Product support through knowledge base
ƒ Documentation
– Parts Finder
– Data ONTAP guides
ƒ System Administration Guide
ƒ Software Setup Guide
– Configuration Guide
– Configuration Worksheet


NOW SUPPORT

PRODUCT DOCUMENTATION
Product documentation and additional information about your new storage system is available
online on the NOW site at http://now.netapp.com. From the NOW site, you can view a list of
all licenses purchased for your storage appliance.

ADDITIONAL INFORMATION
For the latest information about your version of Data ONTAP, see the Data ONTAP Release
Notes and Read Me First documents.

SOFTWARE
The system software is preinstalled; you do not need CDs or system boot diskettes to install or
configure a new storage system. If for any reason you need to reinstall the system software,
you can obtain it from the NOW site.

Parts Finder


PARTS FINDER

The Parts Finder Web site allows you to search the parts database for spare parts and view
details about the part. There are currently three methods to search for a part:
• By part number
• By the description provided in the sysconfig output
• By category

System Administration Guide


SYSTEM ADMINISTRATION GUIDE

The System Administration Guide describes how to configure, operate, and manage NetApp
storage systems running Data ONTAP 7.3 software. This guide provides information about
all storage system models.

Software Setup Guide


SOFTWARE SETUP GUIDE

The Software Setup Guide describes how to set up and configure storage systems running
Data ONTAP 7.3 software. This guide provides information about all supported storage
system models.

System Configuration Guide


SYSTEM CONFIGURATION GUIDE

The System Configuration Guide Web site provides configuration information for all NetApp
storage systems running multiple versions of Data ONTAP. It also provides a table of
component compatibilities for both normal environments and high-availability configurations.

Console Access


CONSOLE ACCESS

Console Access

DB9-RJ45

Connection to
console port

Bits per second 9600


Data bits 8
Parity None
Stop bits 1
Flow control None


CONSOLE ACCESS

For console access, you can connect a terminal (or terminal server) to the storage system
console port through a standard RS-232 connection, such as a DB9-to-DB9 serial cable (null
modem), with the following settings for the serial communication port:
• Bits per second: 9600
• Data bits: 8
• Parity: None
• Stop bits: 1
• Flow control: None

Console access can be password protected.
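A quick sanity check on the settings above: each asynchronous character frame costs one start bit, eight data bits, no parity bit, and one stop bit — ten bits per byte — so 9600 bits per second moves at most 960 bytes per second over the console:

```python
# 1 start bit + 8 data bits + 0 parity bits + 1 stop bit = 10 bits per byte
bits_per_second = 9600
frame_bits = 1 + 8 + 0 + 1

bytes_per_second = bits_per_second // frame_bits
print(bytes_per_second)  # 960
```

That modest throughput is one reason the console is used for setup and debugging rather than day-to-day data transfer.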


NOTE: For illustration purposes, the figure above shows a standard PC or laptop computer
connected to the storage system console. The adminhost then uses a terminal emulation
program to access the console session.

Console Access from a Terminal Server
RS-232

RS-232

RS-232

RS-232

Terminal
Server

TCP/IP Network


CONSOLE ACCESS FROM A TERMINAL SERVER

To avoid the sometimes difficult task of connecting several consoles in the lab, you can
instead connect a terminal server. A terminal server is a specialized computer with several
console ports that allows administrators to access a storage system console through the
network, which is important when rebooting a system or debugging hardware.
You can access the storage system from the console by using the IP address of the terminal
server with the port number for the storage system.
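Reaching a console this way is ordinary TCP: you connect to the terminal server's IP address on the port mapped to that storage system's console line. The sketch below fakes the terminal server with a local socket so it can run anywhere; the address, port, and prompt are stand-ins, not real equipment:

```python
import socket
import threading

def fake_terminal_server(server_sock):
    """Stand-in for one terminal-server port: greet and hang up."""
    conn, _ = server_sock.accept()
    conn.sendall(b"system> ")  # pretend storage system console prompt
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # a real terminal server has a fixed IP
srv.listen(1)
threading.Thread(target=fake_terminal_server, args=(srv,)).start()

# The administrator connects to the terminal server's IP address on the
# TCP port mapped to the storage system's console line.
client = socket.socket()
client.connect(srv.getsockname())
print(client.recv(64).decode())  # the console prompt appears
client.close()
srv.close()
```

In practice you would simply run telnet (or ssh, on terminal servers that support it) against that IP and port instead of writing a client.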

Storage System Administration Interfaces

A. Console (RS-232)
B. Telnet
C. Terminal server
D. SSH (secure shell)
E. rsh (non-interactive)
F. FilerView (Web-based; GUI)
G. Operations Manager (Web-based; GUI)

(Diagram: each interface reaches the Data ONTAP CLI shell through the RS-232 console port, the network interface, or HTTP.)


STORAGE SYSTEM ADMINISTRATION INTERFACES

A. RS-232 (console)―Data ONTAP shell
B. Telnet―Telnet service opens the Data ONTAP shell. Only one session (console or
Telnet) runs the shell at a time; input is echoed on the other access (an option can turn the
echo off).
NOTE: There should never be more than one Data ONTAP shell running at one time.
C. Terminal server―Connection to the RS-232 port allows telnet access equivalent to
console access
D. SSH―Secure Shell
E. RSH―Remote Shell
F. FilerView® (HTTP)
G. Operations Manager (HTTP)

Data ONTAP Command Format

command subcommand –option argument

Example:
system> aggr create -t raid_dp <aggrname>

NOTE: Refer to the Commands Manual for specific information.


DATA ONTAP COMMAND FORMAT
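The four-part format shown on the slide (command, subcommand, option, argument) can be exercised with a toy parser. This is an illustration only — the real parsing happens inside the Data ONTAP shell on the storage system:

```python
def parse(cli: str) -> dict:
    """Split a 'command subcommand -option argument' line (toy parser)."""
    parts = cli.split()
    parsed = {"command": parts[0], "subcommand": parts[1],
              "options": {}, "arguments": []}
    i = 2
    while i < len(parts):
        if parts[i].startswith("-"):
            parsed["options"][parts[i]] = parts[i + 1]  # option + its value
            i += 2
        else:
            parsed["arguments"].append(parts[i])
            i += 1
    return parsed

print(parse("aggr create -t raid_dp aggr1"))
# {'command': 'aggr', 'subcommand': 'create',
#  'options': {'-t': 'raid_dp'}, 'arguments': ['aggr1']}
```

Reading commands in these four slots makes the Commands Manual entries easier to navigate.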

Command Levels
Privilege Level   Prompt   Commands For
Administrative    >        Administration
Advanced          *>       Special tasks:
                           ƒ Troubleshooting
                           ƒ System tuning
                           ƒ Testing
                           ƒ Displaying statistics


COMMAND LEVELS

Data ONTAP provides two separate sets of commands based on privilege level, either
administrative or advanced. You can set the privilege level using the priv command.
The administrative level provides access to commands that are sufficient for managing your
storage system. The advanced level provides access to these same administrative commands
as well as additional troubleshooting commands.
Advanced-level commands should be used only with the guidance of NetApp technical
support. When you use advanced-level commands, the following warning is displayed:
“Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by Network Appliance personnel.”

Changing Command Levels
priv set Command
ƒ Use the priv set command to change
command levels:
priv set level
ƒ Change to the advanced command level:
system> priv set advanced
system*>
ƒ Change to the administrative command level:
system> priv set admin
system>


CHANGING COMMAND LEVELS: PRIV SET COMMAND

The initial privilege level for the console and for each RSH session is Administrative. Use the
priv set command to change the privilege level. When you change to the advanced level,
the prompt includes an asterisk.
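The asterisk in the prompt is the visual cue that you are at the advanced level. Modeled as a sketch (an invented helper, not Data ONTAP code):

```python
def prompt(hostname: str, level: str) -> str:
    """Render the CLI prompt for a privilege level (illustration only)."""
    if level == "advanced":
        return hostname + "*>"
    return hostname + ">"

print(prompt("system", "admin"))     # system>
print(prompt("system", "advanced"))  # system*>
```

Checking for the asterisk before running a risky command is a cheap habit that prevents running advanced commands by accident.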

Viewing Manual (man) Pages From the CLI

ƒ To view a man page from the CLI:
system> man <command>
ƒ To view information about man pages:
system> help
ƒ To view a list of commands:
system> ?

NOTE: Data ONTAP commands are case-sensitive.


VIEWING MANUAL (MAN) PAGES FROM THE CLI

When using the command line interface (CLI), you can get CLI syntax help by entering the
name of the command followed by help or the question mark (?).
For a list of all commands available at the current privilege level (administrative or
advanced), at the CLI prompt, type the question mark (?).

Command References on
Data ONTAP FilerView Menu


COMMAND REFERENCES ON DATA ONTAP FILERVIEW MENU

FilerView is an HTTP/Web-based graphical management interface that enables you to
manage most storage system functions from a Web browser rather than by entering commands
at the console, through a telnet session, with the rsh command, or through scripts or
configuration files.
You can also use FilerView to view information about the storage system, its physical storage
units (such as adapters, disks, and RAID groups), and its data storage units (such as
aggregates, volumes, and LUNs). You can also use FilerView to view statistics about network
traffic.
FilerView is easy to use and provides access to online Help, which explains Data ONTAP
features and how to use them.

Reboot and Installation


REBOOT AND INSTALLATION

Halting a Storage System

ƒ Use the halt command to shut down a storage system:
system> halt
system> halt –t [interval]
– Flushes all data from memory to disk
– Avoids potential data loss for clients
ƒ Use the reboot command to halt a system
and automatically reboot it:
system> reboot
system> reboot –t [interval]


HALTING A STORAGE SYSTEM

Use the halt command to perform an orderly shutdown that flushes file system updates to
disk and clears the NVRAM.

REASONS TO USE THE HALT COMMAND


The storage system stores requests in NVRAM. For the following reasons, always execute the
halt command before turning off the storage system:
• The halt command flushes all data from memory to disk, eliminating a potential point of failure.
• The halt command avoids potential data loss on CIFS clients.
If a CIFS client is disconnected from the storage system, the users' applications are terminated
and changes made to open files since the last save are lost.

IMPORTANT
Always warn CIFS users in advance when you halt the storage system. This gives users a
chance to save changes and avoid losing data when the CIFS service is interrupted.

NOTE: To properly display CIFS shutdown messages on clients running Windows 95 or


Windows for Workgroups, you must configure the WinPopup program to receive messages.
Clients running Windows NT and Windows XP automatically display messages from the
storage system and need no additional configuration.

USING THE REBOOT COMMAND
Using the reboot command is the same as halting and then booting the storage system.
During a reboot, the contents of the storage system NVRAM are flushed to disk and the
storage system sends a warning message to CIFS clients.

Boot Sequence


BOOT SEQUENCE

Boot Sequence

As the storage system boots, press Ctrl-C to display the special boot menu on the console.

CFE version 1.2.0 based on Broadcom CFE: 1.0.35


Copyright (C) 2000,2001,2002,2003 Broadcom Corporation.
Portions Copyright (C) 2002,2003 Network Appliance Corporation.

CPU type 0x1040102: 600MHz


Total memory: 0x20000000 bytes (512MB)

Starting AUTOBOOT press any key to abort...


Loading: 0xffffffff80001000/8659992 Entry at 0xffffffff80001000
Starting program at 0xffffffff80001000
Press CTRL-C for special boot menu
.........................................


BOOT SEQUENCE

The special boot menu or maintenance menu (1-5 menu) is displayed either when a boot
variable is set or Ctrl-C is pressed during boot.
You can also enter one of the following boot options at the boot environment prompt:
• CFE> for FAS200, FAS3020, and FAS3050 systems
• LOADER> for FAS3040, FAS3070, and FAS6000 series systems

BOOT_ONTAP
The boot_ontap option boots the current Data ONTAP software release stored on the
CompactFlash card. By default, the storage system automatically boots this release if you do
not select another option from the basic menu.

BOOT_PRIMARY
The boot_primary option boots the Data ONTAP release stored on the CompactFlash card
as the primary kernel. This option overrides the firmware AUTOBOOT_FROM environment
variable if it is set to a value other than PRIMARY. By default, the boot_ontap and
boot_primary commands load the same kernel.

BOOT_BACKUP
The boot_backup option boots the backup Data ONTAP release from the CompactFlash
card. The backup release is created during the first software upgrade to preserve the kernel
that was preinstalled on the storage system. It provides a "known good" release from which
you can boot the storage system if it fails to automatically boot the primary image.

NETBOOT
The netboot option boots from a Data ONTAP version stored on a remote HTTP or TFTP
(Trivial File Transfer Protocol) server. The netboot option enables you to:
• Boot an alternative kernel if the CompactFlash card becomes damaged
• Upgrade the boot kernel for several devices from a single server
To enable netboot, you must configure networking for the storage system (using Dynamic
Host Configuration Protocol or static IP address) and place the boot image on a configured
server.
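The server side of netboot can be any plain HTTP (or TFTP) server that exposes the boot image. The sketch below serves a temporary directory with Python's standard library; the directory and file names are hypothetical placeholders, not a real Data ONTAP image:

```python
import http.server
import tempfile
import threading
import urllib.request
from functools import partial
from pathlib import Path

# Hypothetical directory holding a placeholder boot image
docroot = tempfile.mkdtemp()
Path(docroot, "netboot.img").write_bytes(b"\x7fKERNEL")

handler = partial(http.server.SimpleHTTPRequestHandler, directory=docroot)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A storage system configured for netboot would be pointed at this URL
url = f"http://127.0.0.1:{server.server_port}/netboot.img"
print(urllib.request.urlopen(url).read())  # b'\x7fKERNEL'
server.shutdown()
```

Hosting the image centrally like this is what makes the "upgrade several devices from a single server" use case practical.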

Accessing the Flash Boot Commands

As the storage system boots, the special boot menu allows you to control the booting sequence.
Special boot options menu will be available.

NetApp Release 7.3RC1: Wed Mar 5 02:17:31 PST 2008


Copyright (c) 1992-2008 Network Appliance, Inc.
Starting boot on Fri May 16 19:50:18 GMT 2008

(1) Normal boot.


(2) Boot without /etc/rc.
(3) Change password.
(4) Initialize all disks.
(4a)Same as option 4, but create a flexible root volume.
(5) Maintenance mode boot.

Selection (1-5)?


ACCESSING THE FLASH BOOT COMMANDS

The menu choices available from the special boot menu allow you to continue booting the
storage appliance under normal or special conditions.
Menu selections 2 and 5 are used for troubleshooting. Selection 4 (or 4a) is usually
performed at the beginning of a system installation. To choose a selection, enter the option
number at the command line.

SPECIAL BOOT MENU OPTIONS

(1) Normal boot.
This option allows the system to boot as normal.

(2) Boot without /etc/rc.
This option performs a normal boot but bypasses execution of the /etc/rc file. Following this
boot, the system runs normally, but without the configuration normally provided to it by the
/etc/rc file and system daemons. To make the system fully operational, you can enter the
commands in the /etc/rc file manually.
As a general rule, use this option when there is something in the /etc/rc file causing the
storage appliance to misbehave. Often, only the ifconfig, nfs on, and exportfs -a
commands are executed manually, allowing NFS to become operational. The /etc/rc file is
then edited to remove any offending lines and the system is rebooted. In this scenario, CIFS is
disabled and cannot be restarted until the system is rebooted.

(3) Change password.
This option allows you to change the root password of the filer. It is usually used when you
forget the current password and cannot use the online passwd command.

(4) Initialize all disks.
(4a) Same as option 4, but creates a flexible root volume.
These options zero all the disks in the storage appliance and re-enter the setup menu. They are
typically used only once, during system reinstallation. The option first prompts you to confirm
your choice. After confirming, there is no way to retrieve data that was previously on the
disks. Zeroing the disks can take time (sometimes hours), depending on how many disks there
are and the capacity of each disk.
NOTE: Do not use this option unless you are certain you want to initialize your disks.

(5) Maintenance mode boot.
This option enters a special system mode with only a small subset of commands available. It is
usually used to diagnose hardware problems (often disk-related). In maintenance mode,
WAFL volumes are recognized but not used, the /etc/rc file is not interpreted, and few
system services are started. NFS and CIFS cannot be used. Disk reconstructions do not occur.
No file system upgrade occurs, even if the system software is newer than the OS release
previously installed.

Boot Sequence

As the storage system boots, if you do not press Ctrl-C (or you select Normal boot), the system:
1. Loads the Data ONTAP kernel into physical memory
from the CompactFlash.
2. Checks the root volume on the physical disk.
...
Wed Apr 7 20:53:00 GMT [mgr.boot.reason_ok:notice]: System rebooted.
CIFS local server is running.
system> Wed Apr 7 20:53:01 GMT [console_login_mgr:info]: root logged
in from console
Wed Apr 7 20:53:23 GMT [NBNS03:info]: All CIFS name registrations
complete for local server

system>


BOOT SEQUENCE

Boot Sequence (Cont.)

As the storage system boots and the Data ONTAP kernel is loaded, the system reads in the
following configuration and system files from the /etc directory:
ƒ /etc/rc file (boot initialization)
ƒ /etc/registry file (option configurations)
ƒ /etc/hosts file (local name resolution)
Wed Apr 7 20:52:50 GMT [fmmbx_instanceWorke:info]: Disk 0b.18 is a
primary mailbox disk
...Loading Volume vol0
Wed Apr 7 20:52:53 GMT [rc:notice]: The system was down for 64 seconds
Wed Apr 7 20:52:54 GMT [dfu.firmwareUpToDate:info]: Firmware is up-to-date on all disk drives
Wed Apr 7 20:52:58 GMT [ltm_services:info]: Ethernet e0a: Link up
add net default: gateway 10.32.91.1
Wed Apr 7 20:53:00 GMT [mgr.boot.floppy_done:info]: NetApp Release
7.3RC1 boot complete.


BOOT SEQUENCE (CONT.)

NORMAL BOOT SEQUENCE


The /etc/rc, /etc/registry, and /etc/hosts files (as well as others) are read during
a normal boot.
NOTE: Additional files might be used depending on the protocols that have been licensed.
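For orientation, a minimal /etc/rc might look like the fragment below. The values are hypothetical (modeled on the worksheet conventions used later in this module); your file will differ:

```
hostname system
ifconfig e0 10.10.10.100 netmask 255.255.255.0
route add default 10.10.10.1 1
routed on
exportfs -a
```

Each line is an ordinary Data ONTAP command replayed at boot, which is why skipping the file (special boot menu option 2) leaves the system up but unconfigured.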

Installation

There are three types of Data ONTAP software updates:
ƒ New installations (systems with Data
ONTAP preinstalled)
ƒ Upgrades
ƒ Redeployments


INSTALLATION

1. Familiarize yourself with the requirements for the release you are installing.
2. Consider the following:
• Requirements for upgrading to Data ONTAP from your existing software
• Potential changes to your system following the upgrade
• Appropriate upgrade method for storage systems in an active-active configuration
• If you run the SnapMirror® software, you must identify storage systems with destination
and source volumes
3. Perform any necessary upgrades before upgrading to Data ONTAP. These upgrades
might include:
• Storage system firmware
• Disk firmware
4. Obtain the Data ONTAP system files from the NOW site at http://now.netapp.com/.
5. Install the Data ONTAP system files, and then download them to your storage
system(s).

Using the Setup Script

ƒ Runs automatically during initial system


configuration
ƒ Use the configuration worksheet to prepare
ƒ Run script at any time
– Use to change existing configuration
ƒ Reboot for changes to take effect
– Alternative to a reboot:
system> source /etc/rc


USING THE SETUP SCRIPT

The setup command can be run from the storage system CLI at any time; however, it is
usually run during initial system configuration when it is invoked automatically.
Do not run the setup command unless you want to reconfigure your system.

Configuration


CONFIGURATION

Configuration Worksheet

(Example worksheet entries: host name NetApp1; domain OurDomain; time zone GMT; location Bldg. 1; language en_US; administration host adminhost at 10.10.10.20; interface e0 at 10.10.10.100, netmask 255.255.255.0; default gateway eng_router at 10.10.10.1; name servers at 10.10.10.100 and 10.10.10.200; interface group vif1 with four links e4a, e4b, e4c, and e4d.)

CONFIGURATION WORKSHEET

The setup script requires information specific to your network environment. A configuration
worksheet is provided in the Software Setup Guide. You can use the worksheet to gather the
necessary configuration information.
NOTE: You may not need to complete every field on this worksheet, depending on your
specific installation requirements.

HOST NAME
The host name is the name by which the storage appliance is identified on the network. If the
storage appliance is licensed for the NFS protocol, the host name can be no longer than 32
characters. If the storage system is licensed for the CIFS protocol, the host name can be no
longer than 15 characters. The host name must be unique for each storage appliance in a
cluster.

PASSWORD
The storage system requires a password before granting administrative access on the console,
through a telnet session, or through the remote shell protocol.

TIME ZONE
For a list of valid time zones, see the Setup Guide. The time zone must be identical on both
storage appliances in a clustered system.

LOCATION
The location is a description of the physical location of the storage system. This information
sets the SNMP location information.

LANGUAGE
Language refers to the language used for multiprotocol storage systems when both the CIFS
and NFS protocols are licensed. For a list of supported languages and language abbreviations,
see the Setup Guide. The language must be identical on both storage appliances in a cluster.

ADMINISTRATION HOST
The administration host is a client computer that is allowed to access the storage appliance
through a telnet session or through the remote shell protocol. In /etc/exports, adminhost
is granted root access to / so that it can access and modify the configuration files in /etc. All
other NFS clients are granted access only to /home. If no adminhost is specified, all clients
are granted root access to the root directory (not recommended for sites where security is a
concern).
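As a sketch of the resulting export rules on a Data ONTAP 7.x system (the adminhost name matches the example above; exact paths and export options vary by release and configuration):

```
# /etc/exports (sketch) -- adminhost is granted root access to the root volume
/vol/vol0       -sec=sys,rw=adminhost,root=adminhost,nosuid
# all other NFS clients are limited to /home
/vol/vol0/home  -sec=sys,rw,nosuid
```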

ETHERNET
If your network uses standard Ethernet or Gigabit Ethernet (GbE) interfaces, you must gather
the following information for each interface:
• Network interface name―The name of the Ethernet (or GbE) interface, which depends on the slot in which the Ethernet card is installed. Examples include e0 (for single-port Ethernet), e1 (for GbE), and e3a, e3b, e3c, e3d (for quad-port Ethernet). Data ONTAP automatically assigns network interface names as it discovers them.
• IP address―A unique address for each network interface.
• Subnet mask (network mask)―The subnet mask for the network to which each network
interface is attached. Example: 255.255.255.0
• Partner IP address (interface to take over)―If your storage system is licensed for cluster
takeover, record the interface name or IP address belonging to the partner that this interface should
take over.
• Jumbo frames―Jumbo frames are packets that are longer than the standard Ethernet (IEEE
802.3) frame size of 1,518 bytes. Because jumbo frames are not part of the IEEE standard, the
frame size definition for jumbo frames is vendor-specific. The most commonly used jumbo frame
sizes are 9,018 bytes and higher.
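As an illustration of how the worksheet entries map to the CLI (values are taken from the sample worksheet; your interface names and addresses will differ), an interface can later be reconfigured with ifconfig:

```
system> ifconfig e0 10.10.10.30 netmask 255.255.255.0
system> ifconfig e0
```

The second command displays the interface settings so you can verify that the address and netmask took effect.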

VIRTUAL INTERFACE
Specify the interface name rather than the interface IP address.

ROUTER (ROUTING GATEWAY)


Identify the gateway name and IP address for the primary gateway that is used for routing
outbound network traffic.

DNS DOMAIN
Enter the name of your network domain name service (DNS). The DNS domain name must
be identical on both storage systems in an active-active configuration. Record the IP
addresses of your DNS servers.

NIS DOMAIN NAME


Record the name of your NIS domain. The NIS domain is used to authenticate users. The NIS
domain name must be identical on both storage systems in an active-active configuration.

NIS SERVERS
Enter the IP or host names of your preferred NIS servers.

WINDOWS DOMAIN
If your site uses Windows servers, it has one or more Windows domains. Record the name of
the Windows domain to which the storage appliance should belong.

WINS SERVERS
Identify the servers that handle your Windows Internet Name Service (WINS) name
registrations, queries, and releases. If you choose to make the storage appliance visible
through WINS, you should record up to four WINS IP addresses.

WINDOWS 2000 USER (WINDOWS 2000 ADMINISTRATOR)


This is the name of the administrative Windows 2000 domain user who has sufficient
privileges to add this storage appliance to the Windows 2000 domain. This entry is required
only if you are using a Windows 2000 domain.

WINDOWS 2000 USER PASSWORD


This is the domain password to be used for the Windows 2000 user or administrative user. If
your domain is a Windows 2000 domain, you must use the password for the user who has
sufficient privileges to add this storage appliance to the domain.

ACTIVE DIRECTORY
This is the container for the storage appliance accounts, which can be either the default of
computers or a previously created organizational unit (OU) specified by you. The path for the
OU must be specified in reverse order and separated by commas. Example: If the path is
eng\dev\mgmt, the active directory distinguished name is: ou=mgmt, ou=dev, ou=eng.
NOTE: A user in the Windows 2000 domain can create the account in advance, and then
Data ONTAP updates that account.

Network Interface Configuration

• The NetApp storage system automatically names interfaces
• Interface names are based on:
  – Network type
  – Motherboard bus slot number
  – Port number, for multiport interfaces


NETWORK INTERFACE CONFIGURATION

For storage systems with PCI buses, the network interface naming is as follows:
• Ethernet interface names start with the letter e, followed by the slot number (for example, e0) and, on multiport cards, the port number. If the adapter card is a quad-port Ethernet card, examples of interface names are e1a and e1b.
• PCI-based storage systems automatically create host names for network interfaces by appending the interface name to the storage system host name. For example, if the storage system is named toaster, the host name for e0 is toaster-e0, and the host name for e1a is toaster-e1a.

Administration Host

• Any workstation that is either an NFS or a CIFS client on the network
  – Designated by you during setup
• Benefits of an administration host:
  – Limits access to the storage system’s root file system
  – Provides a text editor to edit configuration files
  – Provides the ability to administer a storage system remotely


ADMINISTRATION HOST

Setup


SETUP

The setup Script

Please enter the new hostname []: system


Do you want to configure virtual network interfaces? [n]:
Please enter the IP address for Network Interface e0a []: 10.10.10.30
Please enter the netmask for Network Interface e0a [255.0.0.0]: 255.255.255.0
Please enter media type for e0a {100tx-fd, tp-fd, 100tx, tp, auto (10/100/1000)}
[auto]:
Please enter flow control for e0a {none, receive, send, full} [full]:
Do you want e0a to support jumbo frames? [n]:
Would you like to continue setup through the web interface? [n]:
Please enter the name or IP address of the default gateway: 10.10.10.1
The administration host is given root access to the filer's
/etc files for system administration. To allow /etc root access
to all NFS clients enter RETURN below.
Please enter the name or IP address of the administration host: adminhost
Please enter the IP address for adminhost: 10.10.10.20
Please enter timezone [GMT]: US/Pacific
Where is the filer located? []: Bldg. 1


THE SETUP SCRIPT

The setup script installs new versions of /etc/rc, /etc/hosts, /etc/exports, /etc/resolv.conf, /etc/hosts.equiv, and /etc/dgateways to reflect the new configuration. When setup is complete, the new configuration does not take effect until the storage system is rebooted. You can reconfigure a storage system at any time by typing setup at the console prompt (this is not recommended unless you are performing a new installation). If a reconfiguration is performed, the old contents of these six configuration files are saved in rc.bak, hosts.bak, exports.bak, resolv.conf.bak, hosts.equiv.bak, and dgateways.bak.
Empty brackets ([ ]) indicate that there is no default setting for the question. When the brackets contain a value, the displayed value is the default.

Check Software Version & License Status
system> version
NetApp Release 7.3RC1: Wed Mar 5 02:17:31 PST 2008

system> sysconfig -v
NetApp Release 7.3RC1: Wed Mar 5 02:17:31 PST 2008
System ID: 0084166726 (NetApp1)
System Serial Number: 3003908 (NetApp1)
slot 0: System Board 599 MHz (TSANTSA D0)
    Model Name:         FAS250
    Part Number:        110-00016
    Revision:           D0
    Serial Number:      280646
    Firmware release:   CFE 1.2.0
    Processors:         2
    Processor revision: B2
    Processor type:     1250
    Memory Size:        510 MB
    NVMEM Size:         64 MB of Main Memory Used

system> license
nfs site ABCDEFG
cifs site BCDEFGH
http site CDEFGHI
cluster not licensed
snapmirror not licensed
snaprestore not licensed


CHECK SOFTWARE VERSION & LICENSE STATUS

SOFTWARE VERSION
You can verify your software version by using the sysconfig or sysconfig -v command,
or by using the version command. One way to verify the software version on the disks is to
change to the /etc/boot directory and then view the link to which that directory points.

FIRMWARE VERSION
There are two primary ways to verify your firmware version: use sysconfig -v, or halt the
storage system and type version from the OK prompt. Ensure that the firmware version on
your system is what it should be and that you have the most current version of the firmware
for your platform.

LICENSES
Use the license command to verify that all licenses are listed for your storage appliance.
The licenses that are displayed in the autosupport log are encrypted versions of the actual
licenses. If all authorized licenses are not displayed using this command, contact NetApp.
NOTE: You can also access the license information using FilerView. Under Filer, select
Manage Licenses.

Message Logging
• Message logging is done by a syslogd daemon
• The /etc/syslog.conf configuration file on the storage system's root volume determines how system messages are logged
• Messages can be sent to:
  – The console
  – A file
  – A remote system
• By default, all system messages are sent to the console and logged in the /etc/messages file
• You can access the /etc/messages file via:
  – An NFS or CIFS client (discussed later in this course)
  – The FilerView administration tool

MESSAGE LOGGING

The syslog contains information and error messages that the storage system displays on the
console and logs in the /etc/messages file.
To specify the types of messages that the storage system logs, use the Syslog Configuration
page in FilerView to edit the /etc/syslog.conf file. This file specifies which types of
messages are logged by the syslogd daemon. (A daemon is a process that runs in the
background, rather than under the direct control of a user.)

The /etc/syslog.conf File

• The /etc/syslog.conf file consists of lines with space-separated fields in the following format:
  facility.level action
• The facility parameter specifies the subsystem from which the message originated
• The level parameter describes the severity level of the message
• The action parameter specifies where to send messages


THE /ETC/SYSLOG.CONF FILE

By default, the /etc/syslog.conf file does not exist; however, there is a sample
/etc/syslog.conf file. To view a manual page, enter the man syslog.conf command.
The facility parameter uses one of the following keywords: kern, daemon, auth, cron,
or local7.
The level parameter is a keyword from the following ordered list (higher to lower):
emerg, alert, crit, err, warning, notice, info, debug.
The action parameter can be in one of three forms:
• A pathname (beginning with a leading slash)―Selected messages will be appended to the
specified log file.
• A hostname (preceded by ‘@’)―Selected messages will be forwarded to the syslogd daemon on
the named host.
• /dev/console―Selected messages are written to the console.
For more information about /etc/syslog.conf settings, see the System Administration
Guide.
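A hypothetical /etc/syslog.conf fragment illustrating the three action forms described above (the remote host name loghost is an assumption):

```
# facility.level   action
# Append kernel messages of err severity and above to a file:
kern.err        /etc/messages
# Forward all messages of warning severity and above to a remote syslogd:
*.warning       @loghost
# Write emergency messages to the console:
*.emerg         /dev/console
```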

Upgrades


UPGRADES

Options to Upgrade
• NetApp periodically releases new versions of Data ONTAP
• A release family is the set of releases sharing a major version; upgrades go from major release to major release
  – The first two digits of the release are the version number (examples: 7.0, 7.1, 7.2)
  – Point releases have a new third digit (examples: 7.2.1.1, 7.2.2L1)
• Download upgrades from the NOW site
• Install an upgrade using either:
  – The CLI:
    software update <subcommand>
  – A Windows or UNIX client to unzip or untar to the shared or mounted /etc directory


OPTIONS TO UPGRADE

The process of upgrading a storage system can vary from release to release. Because disk
firmware is usually tied to the Data ONTAP version, there may be additional considerations
besides the ones identified here. For more information about the specific release you are
upgrading to, including instructions for downgrading to a previous version of Data ONTAP,
see the Upgrade Guide.
NOTE: The information presented in this course represents a generalized approach to
upgrading the operating system. Because some specifics may change between releases, be
sure to consult the appropriate guide for more information.

The software update Command
To upgrade Data ONTAP, enter the command: software update
This command:
• Copies the Data ONTAP release file into /etc/software
• Unzips the Data ONTAP release file
• Updates /etc/boot (to be loaded to CompactFlash)
• Places other files in /etc (such as FilerView HTML files and disk firmware)

[Slide graphic: the root volume, with /etc containing /software, /disk_fw, /boot, and other system files]

NOTE: The system continues running the previous version of Data ONTAP until rebooted.
© 2008 NetApp. All rights reserved. 39

THE SOFTWARE UPDATE COMMAND

The software update command allows you to upgrade Data ONTAP from the console
without using CIFS or NFS.
Before using this command, you must first establish an HTTP host and download the
appropriate software (the executable file for the specific platform, for example,
XX_setup_i.exe) from the NOW site to the host. After the files are downloaded to an
HTTP server, you can install them on any storage appliance.
The software update command requires the fewest steps and the least amount
of time to upgrade Data ONTAP. To upgrade Data ONTAP using the software update
command:
1. Verify the HTTP host and source URL.
2. Execute the software update command.
3. Reboot Data ONTAP.
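A minimal console sketch of these steps (the Web server name and release file name are hypothetical; step 1, verifying the host and URL, is done outside the storage system):

```
system> software update http://webserver/733_setup_q.exe
system> reboot
```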

Module Summary

In this module, you should have learned:
• The NOW site provides an excellent source for documentation and support.
• The Configuration Worksheet aids in initial configuration of a storage system.
• Console access provides direct feedback on events that occur on storage systems.
• Pressing Ctrl-C during boot prompts the user with a special boot menu.
• The setup command runs when a storage system is first turned on.
• The software command provides a simple way to upgrade Data ONTAP.

MODULE SUMMARY

Exercise
Module 2: Installation and
Configuration
Estimated Time: 45 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.



MODULE 3: BASIC ADMINISTRATION

Basic Administration
Module 3
Data ONTAP® 7.3 Fundamentals

BASIC ADMINISTRATION

Module Objectives

By the end of this module, you should be able to:
• Connect remotely to the CLI of a FAS system using the console and a remote host
• Access FilerView to manage a storage system
• Execute commands using the console, a remote host, and FilerView
• Demonstrate how to use commands to analyze a FAS system
• Configure and manage the AutoSupport service for a FAS system

MODULE OBJECTIVES

Graphical Interfaces


GRAPHICAL INTERFACES

Administration Options

A storage system may be managed from a:
• GUI
• CLI


ADMINISTRATION OPTIONS

Graphical Interfaces

A storage system can be managed through multiple graphical interfaces:
• FilerView administration tool
• Operations Manager (formerly DataFabric® Manager)
• Microsoft® Windows® interfaces such as Computer Management for certain CIFS functionality


GRAPHICAL INTERFACES

FilerView Administration Tool

http://<system name or ipaddress>/na_admin


FILERVIEW ADMINISTRATION TOOL

The FilerView interface is a Web-based graphical management interface that enables you to
manage most storage system functions from a Web browser rather than from the console, a
telnet session, or the rsh command. FilerView supports Windows, UNIX, Solaris, Linux,
and HP-UX® environments.
To access a storage system from a client using FilerView, complete the following steps:
1. Start your Web browser.
2. Enter the following URL:
http://filername/na_admin
Where filername is either the fully qualified name (or the short name) or the IP address
of your storage system.
3. Click FilerView.
A new browser window is displayed. If you are running SecureAdmin™ 2.1.1 or later,
click Secure FilerView to start an encrypted browser session.
4. Select a management function. If prompted, supply an administrative user name and
password.

Operations Manager

Operations Manager is a Web application that discovers, monitors, and manages NetApp storage from a management console for maximum availability, reduced TCO, and business policy compliance.


OPERATIONS MANAGER

NetApp Operations Manager (formerly DataFabric Manager) delivers comprehensive


monitoring and management for NetApp enterprise-storage and content-caching
environments. From a central point of control, Operations Manager provides alerts, reports,
and configuration tools to keep your storage infrastructure in line with your business
requirements for maximum availability and reduced TCO.
Operations Manager is a simple, centralized administration tool that enables comprehensive
management of enterprise-storage and content-delivery infrastructures. No other single
management application provides the same level of NetApp monitoring and management for
NetApp FAS systems and NearStore near-line storage. The detailed performance and health
monitoring tools available through Operations Manager gives administrators proactive
information to help resolve potential problems before they occur, and to troubleshoot
problems faster if and when they do occur.

Alternative GUIs

Some operating systems have other interfaces, such as Windows Computer Management, that interact with the functionality of the storage system.


ALTERNATIVE GUIS

Microsoft Windows 2000 Server and later, as well as client operating systems such as Microsoft Windows XP and later, provide Computer Management, a management console that is able to connect to a storage system. The Microsoft Management Console (MMC) can also be used to remotely administer a storage system.

Command Line
Interface


COMMAND LINE INTERFACE

Command Line Interface

The CLI is a powerful tool to administer storage systems and can be accessed through:
• Console
• Telnet
• Remote LAN Module (RLM)
• FilerView CLI
• Remote Shell (RSH)
• Secure Shell (SSH)

system> Wed Apr 7 20:53:01 ...
logged in from console
system>


COMMAND LINE INTERFACE

Telnet to the Console

Login>
Password>


TELNET TO THE CONSOLE

The ability to telnet to a console is useful because it allows remote access to that console.
However, it is limited by the fact that only one telnet session at a time is allowed.
The storage system can be configured to allow telnet access only by certain hosts using one
of the following commands:
options trusted.hosts
or
options telnet.hosts (deprecated)
The storage system can be configured to terminate an unused telnet session automatically
using the following commands:
options autologout.telnet.enable
options autologout.telnet.timeout
To terminate a telnet session immediately without waiting for the timeout period, you can
also use the logout telnet command from RSH or the console.
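The options above can be combined as in this console sketch (the host name adminhost and the 60-minute timeout are illustrative values):

```
system> options trusted.hosts adminhost
system> options autologout.telnet.enable on
system> options autologout.telnet.timeout 60
system> logout telnet
```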

Remote LAN Module

The RLM, which is available in some hardware configurations, provides:
• Remote access to your storage system regardless of the system state
• Continuous power and secure access

[Slide graphic: the RLM port on the system's rear panel]


REMOTE LAN MODULE

The RLM is a management card available on FAS6000/V6000 and FAS3000/V3000 series


systems that provides remote platform management capabilities.
The RLM is powered by standby voltage that is available as long as the system has input
power to at least one of the appliance's power supplies. Therefore, the RLM is always
operational, regardless of the system operating state.

RLM PROVIDES A SECURE CONNECTION


The RLM was designed with security as a core requirement and uses a dedicated 10/100 Mbps Ethernet interface. You must use a Secure Shell (SSH) client to log in to the RLM; it does not support plaintext protocols such as telnet or RSH. To log in to the RLM, you can continue to use the same user credentials as defined in Data ONTAP.

ADDITIONAL FEATURES
The RLM provides secure, out-of-band access to the system that can be used regardless of the
system state. The RLM offers a number of remote management capabilities for NetApp
systems including remote access, monitoring, troubleshooting, logging, and alerting.

Some additional features of the RLM:
• Remote access to the storage system console without using a serial terminal or a terminal
concentrator
• Remote access to control system power if you need to power off, power on, or power cycle the
system remotely without using a LAN-based power strip
• Remote initiation of a core dump without requiring the use of the Non-Maskable Interrupt (NMI)
button on the system
• Remote access to hardware system event logs even when the system is down

The RLM also extends the AutoSupport capabilities of the NetApp system by sending alerts
or "down-filer" notifications through an AutoSupport message when the system goes down,
regardless of whether or not the system is able to send AutoSupport messages. These
AutoSupport messages provide proactive alerts to NetApp to help provide you with faster
service.

CONFIGURING THE RLM


You can configure an RLM using Data ONTAP in one of three ways:
1. During the initial setup
2. Using the setup command
3. Using the rlm setup command
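A minimal console sketch of the third method (the interactive prompts of the setup dialog are omitted; the second command confirms the resulting configuration):

```
system> rlm setup
system> rlm status
```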

FilerView CLI


FILERVIEW CLI

FilerView allows you to access a telnet session to administer the storage system from a CLI.
NOTE: Only one telnet session at a time is allowed, regardless of whether you are using the
FilerView CLI editor or an alternative telnet emulator.

CLI Session Limitations

A storage system allows only one session and one user at a time. Attempts to create additional sessions will generate an error:
Too many users logged in! Please try again later. Connection closed.

To create two separate sessions to use at the same time:
telnet.distinct.enable

NOTE: If separate sessions are not created, administrators must share the same session.

CLI SESSION LIMITATIONS

The FilerView interface allows only one telnet session at a time. If you try to open a telnet
session in FilerView or the CLI directly, and there is already an active session open, you will
receive an error message.
When you close the Use Command Line window in FilerView, the telnet session is also
closed.
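A one-line console sketch of enabling separate console and telnet sessions via the option named on the slide:

```
system> options telnet.distinct.enable on
```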

Remote Shell

RSH allows you to issue console commands without a console session.

To use RSH, enable it with the following:
options rsh.enable on
• If there is a console password, you must modify /etc/hosts.equiv to allow access
• Multiple RSH commands can be issued using the following syntax:
  – UNIX
    rsh <storage_system> “<command>[; <command>]”
  – Windows
    rsh <storage_system> -l <user> “<command>”


REMOTE SHELL

To configure RSH access from a host:


• Verify that RSH is enabled on the storage system by using the options rsh command.
If RSH is disabled, enable it on the storage system using the options rsh.enable on
command.
• If no console password has been set, RSH access is allowed without a hosts.equiv file. Setting the console password subsequently prevents RSH access until a hosts.equiv file is created. An entry for the server and account name must be created in the storage system’s /etc/hosts.equiv file. For example, if the Windows server IP is 192.168.2.2 and the login name is user1, the following entry should be in the hosts.equiv file on the storage system: 192.168.2.2 user1

NOTE: Be sure to add an empty line or new line at the end of the hosts.equiv file. Also,
in Windows, the RSH "username" is required in the /etc/hosts.equiv file. In UNIX, if
the RSH "username" is omitted, only "root" may access RSH from the host(s) listed in the
/etc/hosts.equiv file. This allows user1 to run RSH from the Windows host.
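Putting the pieces together (the IP address and user are the examples from above; the storage system name and the commands issued are illustrative):

```
# Entry in the storage system's /etc/hosts.equiv (end the file with a newline):
192.168.2.2 user1

# From the Windows host, as user1:
rsh system -l user1 "version"

# From a UNIX host, issuing two commands in one call:
rsh system "version; uptime"
```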

Secure Shell

SSH is a network protocol that allows data exchange over a secure channel. SSH also:
• Allows for secure administrative access to the storage system
• Requires no license
• Can be implemented using the secureadmin command or by using FilerView

For a complete description of the SSH feature, see the System Administration Guide.


SECURE SHELL

USING FILERVIEW TO EXECUTE THE SECUREADMIN COMMAND


The secureadmin command makes it very difficult for anyone to intercept a storage system administrator's password over the network, because the password and all administrative communications are encrypted. The secureadmin command also provides a
secure communication channel between a client and the storage system using one or both of
the following protocols:
• SSH―Provides a secure remote shell and interactive network session. The secureadmin
command supports SSH 1.x clients and SSH 2.0 clients.
• SSL―Provides secure Web access for FilerView and Data ONTAP Application Program
Interfaces (APIs).

Common Commands


COMMON COMMANDS

Basic Administration Commands
system> ?
? halt nfs snapvault
aggr help nfsstat snmp
backup hostname nis software
cf httpstat options source
cifs ifconfig orouted storage
config ifstat partner sysconfig
dafs igroup passwd sysstat
date ipsec ping timezone
df ipspace priv traceroute
disk iscsi qtree ups
disk_fw_update iswt quota uptime
dns license rdate useradmin
download lock reboot version
dump logger restore vfiler
echo logout rmc vif
ems lun route vlan
environment man routed vol
exportfs maxfiles rsm vscan
fcp mt savecore wcc
fcstat nbtstat secureadmin ypcat
file ndmpcopy setup ypgroup
filestats ndmpd shelfchk ypmatch
fpolicy netdiag snap ypwhich
ftp netstat snapmirror
system>


BASIC ADMINISTRATION COMMANDS

At the normal administration privilege level, entering a question mark (?) at the command
line displays the commands available to a system administrator for disk management,
networking, system management, physical and virtual interface configuration, and related
tasks.
Some of these commands are simple; some take arguments; some perform obvious functions,
such as backup, ping, or help. To display a brief description of a command, enter help
followed by the command name. To display the full syntax of a command, including its
associated arguments, enter the command name by itself on the command line.
You can also use FilerView to access the manual pages for each command.

Advanced Privilege Commands
system> priv set advanced
Warning: These advanced commands are potentially dangerous; use
them only when directed to do so by Network Appliance personnel.
df              led_on           quota        sysstat
disk            led_on_all       rdate        test_lcd
disk            led_on_off       rdfile       timezone
disk_fw_update  led_test         reboot       toe
disk_list       led_test_one     registry     traceroute
disk_stat       license          remote       ups
dns             lmem_stat        restore      uptime
download        lock             result       useradmin
dump            log              revert_to    version
echo            logger           rm           vfiler
ems             logout           rmc          vif
environ         ls               rmt          vlan
environment     lun              rod          vol
exit            man              route        vscan
exportfs        maxfiles         routed       wafl
fcadmin         mbstat           rsm          wafl_susp
fcp             mem_scrub_stats  rtag         wcc
fcstat          mt               savecore     wrfile
file            mv               scsi         ypcat
filestats       nbtstat          secureadmin  ypgroup
fpolicy         ndmpcopy         setup        ypmatch
ftp             ndmpd            sh           ypwhich
getXXbyYY
system*> priv set admin


ADVANCED PRIVILEGE COMMANDS

Advanced privilege commands are additional commands that provide more control and access
to the storage system. In some cases, these commands are simply normal commands with
additional arguments or options available.
CAUTION: Because advanced privilege commands are potentially dangerous, they should
only be used by knowledgeable personnel.
You can access advanced privilege commands using the priv set advanced command.
This command changes the command-line prompt by embedding an asterisk (*) in the prompt
when advanced privileges are enabled. To return to basic administration mode, use the priv
set admin command.
There are additional administration commands that are considered advanced but are available
in the basic administration mode. However, these commands are hidden and do not appear
when you enter help while in the basic administration mode.
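The prompt change described above can be pictured with a small sketch (plain Python, not ONTAP code; the hostname "system" matches the examples in this module):

```python
def prompt(hostname: str, advanced: bool) -> str:
    """Return the console prompt; advanced privilege embeds an
    asterisk, as priv set advanced does on the storage system."""
    return f"{hostname}{'*' if advanced else ''}> "

# basic administration mode vs. advanced privilege mode
assert prompt("system", advanced=False) == "system> "
assert prompt("system", advanced=True) == "system*> "
```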

Basic System Configuration

The following are basic system configuration commands:


ƒ System configuration
– sysconfig
– sysconfig -v
ƒ RAID configuration
– sysconfig -r
– vol status -r
ƒ Disk configuration
– sysconfig -d
ƒ Check your configuration
– sysconfig -c


BASIC SYSTEM CONFIGURATION

Many console commands provide storage system configuration information. You can use
these commands to:
• Check your system configuration
• Monitor system status
• Verify the correct system

Configuring Your
System


CONFIGURING YOUR SYSTEM

Configuring Your System

To change the configuration of a storage system, use one of the following methods:
ƒ CLI
– options
– vol options
ƒ Configuration files
– /etc/hosts
– /etc/rc
– /etc/hosts.equiv
– /etc/dgateways
ƒ FilerView
ƒ AutoSupport to report configurations (discussed later)


CONFIGURING YOUR SYSTEM

SYSTEM CONFIGURATION OPTIONS: COMMANDS VERSUS CONFIGURATION FILES


Use system or volume-options commands to change configurable storage system software
options. These options commands:
• Can all be entered on the console, with some available using FilerView
• Are automatically added to the storage appliance’s registry in the /etc/registry file
• Are persistent across reboots
• Do not require that you edit any configuration files
To make non-options configurations permanent, you must edit configuration files such as
/etc/rc, /etc/hosts.equiv, /etc/dgateways, and /etc/hosts.

CLI Commands

ƒ System options:
options [option name] [value]
Example: options rsh.enable on
NOTE: If no value is entered, the current value is displayed.
ƒ Volume options:
vol options volname [option name] [value]
ƒ Aggregate options:
aggr options aggrname [option name] [value]


CLI COMMANDS

The options or vol options commands are used to change configurable storage system
software options.
If no options are specified, then the current value of all available options is printed. If an
option is specified with no value, then the current value of that option is printed. If only a
part of an option is specified with no value, then the list of all options that start with the
partial-option string is printed. This is similar to the UNIX grep command.
The default value for most options is off, which means that the option is not set.
Changing the value to on enables the option. For most options, the only valid values are:
• On (also expressed as yes, true, or 1) in any combination of upper and lower case
• Off (also expressed as no, false, or 0) in any combination of upper and lower case

If the default is not set to off, the description of an option indicates the default setting along
with allowable values (if it is not an on-or-off option).
For options that accept string values, use a double quote ("") as the option argument if you
want to set the option to be the null string. Normally, arguments are limited to 255 characters
in length.
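A rough Python model of the value rules and partial-name matching described above (the option names in the example dictionary are illustrative, and this sketch only mimics the console's behavior):

```python
# the on/off synonyms listed above, matched case-insensitively
_TRUE = {"on", "yes", "true", "1"}
_FALSE = {"off", "no", "false", "0"}

def parse_option_value(value: str) -> bool:
    """Normalize an on/off option value; any combination of upper
    and lower case is accepted, per the rules above."""
    v = value.strip().lower()
    if v in _TRUE:
        return True
    if v in _FALSE:
        return False
    raise ValueError(f"not an on/off value: {value!r}")

def match_options(options: dict, partial: str) -> dict:
    """List every option whose name starts with the partial string,
    the way the console does when given a partial option name."""
    return {name: val for name, val in options.items()
            if name.startswith(partial)}

opts = {"rsh.enable": "on", "rsh.access": "*", "raid.scrub.enable": "on"}
assert parse_option_value("On") is True
assert parse_option_value("FALSE") is False
assert sorted(match_options(opts, "rsh")) == ["rsh.access", "rsh.enable"]
```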

Registry Files

Registry files contain many persistent configurations.
IMPORTANT: The registry should not be edited directly.

File                      Usage
/etc/registry             Current registry
/etc/registry.lastgood    Copy of registry after last successful boot
/etc/registry.bck         First-level backup
/etc/registry.default     Default registry


REGISTRY FILES

REGISTRY DATABASE
Persistent configuration information and other data is stored in a registry database.
There are several backups of the registry database that are automatically used if the original
registry becomes unusable. The /etc/registry.lastgood is a copy of the registry as it
existed after the last successful boot.
The /etc/registry is edited by Data ONTAP and should not be manually edited.
Configuration commands such as the network interface configuration (ifconfig) must
remain in the /etc/rc file.

Editing Configuration Files

The following are important configuration files:


ƒ /etc/rc
ƒ /etc/hosts
ƒ /etc/hosts.equiv
There are three ways to edit a configuration file:
ƒ Access the /etc directory and use local
editing tools
ƒ Use commands on the console
ƒ Use FilerView to edit configuration files


EDITING CONFIGURATION FILES

EDITING BOOT CONFIGURATION FILES


The storage system boot configuration files contain commands that run automatically when
the storage system is booted. These files are located in the /etc directory of the storage
appliance’s root volume.
The rc configuration file contains:
• Network interface configuration information
• Commands that automatically export NFS mounts
The hosts file is used for name-machine resolution, while the hosts.equiv is used to
develop a trust relationship for certain applications such as RSH.

RECOVERING FROM CONFIGURATION FILE ERRORS


The storage system can become inaccessible if:
• You introduce errors into the /etc/rc file during editing
• The network interface configuration commands in the /etc/rc file specify an incorrect address
or the file contains other misconfigured items
You can correct errors through the console using the ifconfig and exportfs commands.
Or, if CIFS is enabled, you can access the /etc/rc file from a CIFS client and then correct
the NFS export error after the network interface is properly configured.

Editing Configurations from Host

#Auto-generated by setup Sun Mar 29 15:05:20 GMT 1970
127.0.0.1 localhost
10.254.134.43 DEV-FAS250 DEV-FAS250-e0a
# 0.0.0.0 DEV-FAS250-e0b


EDITING CONFIGURATIONS FROM HOST

To edit a configuration file from a host, the storage system must be configured with a NAS
protocol such as CIFS or NFS. An adminhost can then access the /etc directory to edit the
configuration file.

ADMINHOST
The term adminhost is used to describe an NFS or CIFS client machine that is able to view
and modify configuration files stored in the /etc directory of the storage system’s root
volume.
The storage system grants root permissions to the adminhost after the setup procedure is
complete.

CLIENT  REQUIREMENTS                                 PRIVILEGE

NFS     The host name must be entered in the         Mount the storage system root directory
        /etc/hosts.equiv file. The setup procedure   with root permissions, and then edit
        automatically populates this file.           configuration files. Enter storage system
                                                     commands using a remote shell program
                                                     such as RSH.

CIFS    The user must be a member of the             Edit configuration files by accessing the
        Administrators domain.                       \\storagesystem\C$ share.

Console Editing

To edit files on the console:


1. Make a backup copy of the file.
2. Read the file using rdfile.
3. Write the file using wrfile.

IMPORTANT: When the wrfile command is issued, the original file is deleted. Therefore, it
is important to first use the rdfile command. To append to a file without deleting it, use
wrfile -a.


CONSOLE EDITING

The console command rdfile displays the present contents of an ASCII text file. If the file
doesn’t exist or is empty, this command returns nothing.
For example, to display the /etc/hosts file from the CLI, enter rdfile /etc/hosts.
The console command wrfile creates or re-creates a file when executed.
For example, to re-create the /etc/hosts file from the CLI, enter wrfile /etc/hosts,
and then enter the contents of the file and press Ctrl-C to commit the file. Issue the command
rdfile to verify the contents of the file.
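The truncate-versus-append distinction above is the same one Python's file modes make. This local-file analogy (ordinary files standing in for the storage system's /etc files) shows why reading first matters:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "hosts")

# wrfile-like: opening with mode "w" truncates, so whatever was in the
# file before is gone -- hence the advice to rdfile (read) it first
with open(path, "w") as f:
    f.write("127.0.0.1 localhost\n")

# wrfile -a-like: mode "a" appends without destroying existing content
with open(path, "a") as f:
    f.write("10.254.134.43 DEV-FAS250\n")

# rdfile-like: read the file back to verify what was written
with open(path) as f:
    contents = f.read()

assert contents.splitlines() == ["127.0.0.1 localhost",
                                 "10.254.134.43 DEV-FAS250"]
```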

System Configuration Using FilerView

Configuration settings are accessed through the left navigation pane.


SYSTEM CONFIGURATION USING FILERVIEW

FilerView has a variety of menu items that display and allow configuration of storage
system settings.

AutoSupport


AUTOSUPPORT

AutoSupport Mail Host

Email Server


AUTOSUPPORT MAIL HOST

AutoSupport is a call home feature included in the Data ONTAP and NetCache software for
all NetApp systems. AutoSupport is an integrated and efficient monitoring and reporting tool
that constantly monitors the health of your system.
AutoSupport allows storage systems to send messages to the NetApp Technical Support team
and to other designated addressees when specific events occur. The AutoSupport message
contains useful information for Technical Support to identify and solve problems quickly and
proactively.
You can also subscribe to the abbreviated version of urgent AutoSupport messages through
alphanumeric pages, or you can customize the type of message alerts you want to receive.
The AutoSupport Message Matrices list all the current AutoSupport messages in order of
software version.

WHY SHOULD AUTOSUPPORT BE ENABLED?


AutoSupport allows the system to send messages directly to system administrators and
NetApp Technical Support, which has a dedicated team continually enhancing AutoSupport
analysis tools.

TURNING TRADITIONAL SUPPORT PROCESSES AROUND


NetApp automatically analyzes AutoSupport messages, immediately notifying our Technical
Support team when there is a critical alert. Technical support then initiates an investigation.
This means that you don’t have to report the problem. In some cases, status messages provide
enough information for NetApp to take corrective action right away. For example, when
NetApp receives a "Failed Disk Trigger" message, we automatically ship a replacement drive
to you (per the terms of your maintenance agreement).

STAYING AHEAD OF POTENTIAL PROBLEMS
Not all AutoSupport messages lead to immediate actions. Warning messages in the syslog can
point to suspect components. Often, this initiates corrective action that can prevent unplanned
disruption. Our AutoSupport analysis tools also monitor syslog messages for known
configuration issues. A config alert notifies our support team of a configuration issue that
could lead to system instability. Finally, being connected through AutoSupport means that
NetApp already has details about your current system configuration.

NetApp strongly recommends that AutoSupport be configured on all your systems.

AutoSupport

The autosupport daemon monitors the storage system's operations and sends automatic
messages to technical support. Technical support contacts you with any reported issues.
AutoSupport messages are generated:
ƒ When events occur on the storage system that
require corrective action
ƒ When you initiate a test message
ƒ When the system reboots
ƒ Once a week (usually after 12 a.m. on
Sundays)

AUTOSUPPORT

AUTOSUPPORT DAEMON
NetApp storage systems use an AutoSupport daemon to control how messages are sent to
NetApp Technical Support. The AutoSupport daemon is enabled by default on a storage
system.

HOW AUTOSUPPORT WORKS


In order to continuously monitor your system status and health, AutoSupport:
• Is automatically triggered by the kernel once a week to send information to the e-mail addresses
specified in the autosupport.to option before the messages file is backed up. In addition, the
options command can be used to invoke the AutoSupport mechanism to send this information.
• Sends a message in response to events that require corrective action from the system administrator
or NetApp Technical Support.
• Sends a message when the system reboots.

AutoSupport E-Mail Events

Events E-Mail Subject Line


Low NVRAM battery BATTERY_LOW

Disk failure DISK_FAIL!!!

Disk scrub detected checksum errors DISK_SCRUB CHECKSUM ERROR

Shutdown occurred because of overheating OVER_TEMPERATURE_SHUTDOWN!!!

Partial RPS failure occurred REBOOT

Disk shelf error occurred SHELF_FAULT

Spare disk failure occurred SPARE DISK FAILED

Weekly backup of /etc/messages occurred WEEKLY_LOG

Successful cluster takeover of partner CLUSTER TAKEOVER COMPLETE

Unsuccessful cluster takeover CLUSTER TAKEOVER FAILED

Cluster takeover of virtual filer REBOOT (CLUSTER TAKEOVER)

Cluster giveback occurred CLUSTER GIVEBACK COMPLETE


AUTOSUPPORT E-MAIL EVENTS

AutoSupport e-mails can be triggered by the following events:


• Weekly logs (/etc/messages)
• System reboots
• Low NVRAM batteries
• Disk, fan, and power supply failures
• Shelf faults
• File system growing too large
• Administrator-defined SNMP traps

To read descriptions of some of the AutoSupport messages you might receive, access the
NOW site and search for AutoSupport Message Matrices. You can view either the online
version or the version in the Data ONTAP guide.

AutoSupport E-mail Contents

ƒ The options autosupport.content command
specifies how much to send
– options autosupport.content [Minimal|Complete]
ƒ The default is Complete
ƒ Complete AutoSupports are stored in:
– /etc/log/autosupport


AUTOSUPPORT E-MAIL CONTENTS

AutoSupport e-mails contain the following information:


• Output of system commands
• Message date and time stamp
• NetApp software version
• Storage system ID and hostname
• Software licenses enabled
• SNMP contact information and location (if configured)
• Contents of the /etc/messages file
• Contents of the /etc/serialnum file (if created)

AutoSupport messages also contain additional information specific to your storage system.
This information helps identify crucial parameters required for follow-up handling of the
triggering event.
To control the detail level of event messages and weekly reports, use the options command
to specify the value of autosupport.content as complete or minimal. Complete
AutoSupport messages are required for normal technical support. Minimal AutoSupport
messages omit sections and values that might be considered sensitive information, and reduce
the amount of information sent. However, keep in mind that choosing minimal greatly affects
the level of support NetApp is able to provide.
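One way to picture the complete-versus-minimal trade-off is as a filter over message sections. In this hypothetical sketch, the section names are invented for illustration; they do not reflect the actual AutoSupport message layout:

```python
# invented labels standing in for "sections that might be sensitive"
SENSITIVE_SECTIONS = {"messages_file", "hostname", "volume_names"}

def autosupport_sections(sections: dict, content: str = "complete") -> dict:
    """Return the sections to send; 'minimal' omits the ones that
    might be considered sensitive, reducing what is transmitted."""
    if content == "minimal":
        return {name: body for name, body in sections.items()
                if name not in SENSITIVE_SECTIONS}
    return dict(sections)

full = {"version": "7.3", "licenses": "cifs,nfs", "messages_file": "..."}
assert "messages_file" in autosupport_sections(full, "complete")
assert "messages_file" not in autosupport_sections(full, "minimal")
assert "version" in autosupport_sections(full, "minimal")
```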

AutoSupport Configuration Options

AutoSupport commands:
ƒ options autosupport.support.enable
[on|off]
ƒ options autosupport.mailhost
[host1,…,host5]
ƒ options autosupport.to
[address1,…,address5]
ƒ options autosupport.from
ƒ options autosupport.content
ƒ options autosupport.noteto
ƒ options autosupport.doit [message]
ƒ options autosupport.enable [on|off]


AUTOSUPPORT CONFIGURATION OPTIONS

The commands listed above are some of the AutoSupport configuration commands available
from the console.
The following table is an abbreviated version of the AutoSupport options list. See the
command reference for a full list of options and descriptions.

EXAMPLE:  options autosupport.enable off
RESULT:   Disables the AutoSupport daemon. The default is on.

EXAMPLE:  options autosupport.support.enable off
RESULT:   Disables the AutoSupport notification to NetApp. The default is on.
          This option is superseded (overridden) by the value of
          autosupport.enable.

EXAMPLE:  options autosupport.mailhost maildev1,mailengr1
RESULT:   Specifies two mail host names: maildev1 and mailengr1. (You can
          enter up to five mail host names.) Hostname is the hostname of the
          SMTP mail host(s) that will receive AutoSupport e-mail messages.
          The default is the hostname of the adminhost specified during setup.

EXAMPLE:  options autosupport.to jjandar@netapp.com,ssmith@netapp.com
RESULT:   Specifies two recipients (jjandar and ssmith) of AutoSupport e-mail
          messages. Address is an SMTP e-mail address. You can specify up to
          five addresses. NOTE: Do not enter autosupport@netapp.com if
          autosupport.support.enable is on.

EXAMPLE:  options autosupport.from techsupport
RESULT:   Defines the user, techsupport, as the sender of the notification.

Testing AutoSupport

To send an AutoSupport test message, complete the following steps:
1. Configure AutoSupport using the options
command.
2. Run the following command on the storage
system console:
options autosupport.doit testing
The message “System Notification Sent”
appears on the console.


TESTING AUTOSUPPORT

Module Summary

In this module, you should have learned how to:


ƒ Connect remotely to the CLI of a FAS system
using the console and a remote host
ƒ Access FilerView to manage a storage system
ƒ Execute commands using the console, a
remote host, and FilerView
ƒ Demonstrate how to use commands to analyze
a FAS system
ƒ Configure and manage the AutoSupport
service for a FAS system


MODULE SUMMARY

Exercise
Module 3: Basic Administration
Estimated Time: 60 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

Admin Security



MODULE 4: ADMINISTRATION SECURITY

Administration
Security
Module 4
Data ONTAP® 7.3 Fundamentals

ADMINISTRATION SECURITY

4-1 Data ONTAP® 7.3 Fundamentals: Administration Security

© 2008 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.
NetApp University - Do Not Distribute
Module Objectives

By the end of this module, you should be able to:


ƒ Restrict administrative access
ƒ Restrict console and FilerView access
ƒ Configure a client machine as an adminhost to
manage a storage system


MODULE OBJECTIVES

Storage System Access

To access a storage system, administrative users can log into the storage system (or
FilerView).
ƒ Multiple administrators are allowed.
ƒ Login information can be tracked in the syslog
(/etc/messages file), including:
– User name
– Time of access
– Node name or address
ƒ Administrative operations are tracked in the
audit log (/etc/log/auditlog).

STORAGE SYSTEM ACCESS

To manage a storage system, you can use the default system administration account, or root.
You can also create additional administrator user accounts using the useradmin command.
Administrator accounts are beneficial because:
• You can give administrators and groups of administrators differing levels of administrative access
to your storage systems.
• You can limit an individual administrator's access to specific storage systems by giving him or her
an administrative account only on those systems.
• Having different administrative users allows you to display information about who is performing
what commands on a storage system.
The auditlog file keeps a record of all administrator operations performed on a storage system
and the administrator who performed it, as well as any operations that failed due to insufficient
capabilities.
• You can assign each administrator to one or more groups whose assigned roles (sets of
capabilities) determine what operations they are authorized to carry out on the storage system.
• If a storage system running CIFS is a member of a domain or a Windows workgroup, domain
user accounts authenticated on the Windows domain can use any available method to access the
storage system.

THE AUDIT LOG
An audit log is a record of commands executed at the console through a telnet shell, an SSH
shell, or by using the rsh command. All commands executed in a source-file script are also
recorded in the audit log. Audit log data is stored in the /etc/log directory in the auditlog
file. Administrative HTTP operations, such as those resulting from the use of FilerView, are
also logged. The maximum size of the auditlog file is specified using the
auditlog.max_file_size option. By default, Data ONTAP is configured to save an audit
log.

Role-Based Access Control
Role-Based Access Control (RBAC) is a mechanism for managing a
set of actions (capabilities) that a user or administrator can perform
on a storage system.
ƒ A role is created.
ƒ Capabilities are granted to the role.
ƒ Groups are assigned to one or more roles.
ƒ Users are assigned to groups.

Groups Roles Capabilities


ROLE-BASED ACCESS CONTROL

Role-based access control (RBAC) specifies how users and administrators can use a
particular computing environment.
Most organizations have multiple system administrators, some of whom require more
privileges than others. By selectively granting or revoking privileges for each user, you can
customize the degree of access that each administrator has to the system.
RBAC allows you to define sets of capabilities (roles) that apply to one or more users. Users
are assigned to groups based on their job functions, and each group is granted a set of roles to
perform those functions.
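The chain described above (users assigned to groups, groups assigned to roles, roles granted capabilities) maps naturally onto a few dictionaries. The following is a minimal sketch; the role, group, user, and capability names are invented, loosely patterned on examples in this module:

```python
# roles: each grants a set of capabilities (names are illustrative)
roles = {
    "admin": {"login-telnet", "cli-vol", "security-passwd", "api-vol-list"},
    "audit": {"api-snmp-get", "api-snmp-get-next"},
}
# groups are assigned to one or more roles; users belong to groups
groups = {"Administrators": {"admin"}, "Auditors": {"audit"}}
users = {"jdoe": {"Administrators"}, "monitor": {"Auditors"}}

def capabilities_of(user: str) -> set:
    """Union of the capabilities granted through every role of every
    group the user belongs to -- the chain described above."""
    caps = set()
    for group in users.get(user, ()):
        for role in groups.get(group, ()):
            caps |= roles.get(role, set())
    return caps

assert "api-snmp-get" in capabilities_of("monitor")
assert "security-passwd" in capabilities_of("jdoe")
assert "login-telnet" not in capabilities_of("monitor")
```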

Capabilities and Roles


CAPABILITIES AND ROLES

Capabilities

ƒ Capabilities are predefined privileges that allow users to execute commands or take
other specified actions.
ƒ A role is a set of capabilities.
ƒ The following capabilities are predefined:
– Login
– CLI
– Security
– API


CAPABILITIES

A capability is a privilege granted to a role to execute commands or take other specified
actions.
Data ONTAP uses four types of capabilities:
• Login rights―These capabilities begin with login- and are used to control which access
methods an administrator is permitted to use for managing the system.
• CLI rights―These capabilities begin with cli- and are used to control which commands an
administrator may use in the Data ONTAP CLI.
• API rights―These capabilities begin with api- and are used to control which API commands
may be used. API commands are usually executed by programs, not administrators. However, you
might want to restrict a specific program to certain APIs by creating a special user account for it,
or you might want to have a program authenticate as the administrator who is using the program
and then be limited by that administrator’s roles.
• Security rights―These capabilities begin with security- and are used to control an
administrator’s ability to use advanced commands or change passwords for other users.
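Because each capability type is identified by its name prefix, classifying a capability list is straightforward. A small sketch, using capability names patterned on the four types above:

```python
def capability_type(name: str) -> str:
    """Classify a capability by its prefix, per the four types above."""
    prefixes = (("login-", "login"), ("cli-", "CLI"),
                ("api-", "API"), ("security-", "security"))
    for prefix, kind in prefixes:
        if name.startswith(prefix):
            return kind
    return "unknown"

assert capability_type("login-telnet") == "login"
assert capability_type("cli-cifs") == "CLI"
assert capability_type("api-snmp-get") == "API"
assert capability_type("security-priv-advanced") == "security"
```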

Roles

A role is a defined set of capabilities.


ƒ Data ONTAP includes several predefined
roles.
ƒ Administrators can create additional roles or
modify existing roles.

Admin Role
Capabilities

Login capability
Security capability
CLI capability
API capability


ROLES

A role is a collection of capabilities or rights to execute certain functions. Usually, a role is
created to assign a task or tasks to a particular group of users.

Predefined Administrative Roles

• Root—Grants all possible capabilities
• Admin—Grants all CLI, API, login, and security capabilities
• Power—Grants the ability to:
  – Invoke all cifs, exportfs, nfs, and useradmin CLI commands
  – Make all cifs and nfs API calls
  – Log in using Telnet, HTTP, RSH, and SSH sessions
• Audit—Grants the ability to make snmp-get and snmp-get-next API calls
• None—Grants no administrative capabilities

PREDEFINED ADMINISTRATIVE ROLES

• Roles have assigned capabilities that can be modified.


• The useradmin role list command is used to view capabilities assigned to each role.
• To assign users to a system, you must first assign them to a group that has specified capabilities.

Groups and Users


GROUPS AND USERS

Groups

• A group is:
  – A collection of users
  – Associated with one or more roles
• Groups have permissions and access levels that are defined by roles



GROUPS

A group is a collection of users or domain users. It is important to remember that the groups
defined in Data ONTAP are separate from other groups such as groups defined in the
Microsoft Active Directory server or an NIS environment. This is true even if the groups
defined in the Microsoft Active Directory and the groups defined in Data ONTAP have the
same name.
When creating new users or domain users, Data ONTAP requires that you specify a group.
Therefore, you should create appropriate groups before defining users or domain users.

Predefined Groups

• Administrators—Grants all CLI, API, login, and security capabilities
• Power Users—Grants the ability to invoke cifs, nfs, and useradmin CLI commands, manage cifs and nfs API calls, and log in using Telnet, HTTP, RSH, and SSH sessions
• Backup Operators—None
• Users—Grants the ability to make snmp-get and snmp-get-next API calls
• Guests—None
• Everyone—None

PREDEFINED GROUPS

To create or modify a group, start by giving the group capabilities associated with one or
more predefined or customized roles.

Users

A user is:
• An individual account that may or may not have capabilities defined for the storage system
• Part of a group
All administrative users have a unique login name and password.



USERS

User Creation Requirements—Role

• Current role definitions can be viewed using the CLI command:
  useradmin role list [role]
  – No role specified → general information for all roles
  – Specific role → detailed information about a particular role
• Create a new role using the CLI command:
  useradmin role add <rolename> -a <capability>,…
  – Capability can be one or more of login, CLI, security, or API capabilities
  – Each capability can be refined to a specific subset

USER CREATION REQUIREMENTS—ROLE

User Creation Requirements—Group

• Current group definitions can be viewed using the CLI command:
  useradmin group list [group_name]
  – No group specified → general information for all groups
  – Specific group → detailed information about a particular group
• Create a new group using the CLI command:
  useradmin group add <groupname> -r <role>,…
A group must be associated with one or more roles.


USER CREATION REQUIREMENTS—GROUP

Purpose of Local Users

Local users on the storage system:
• Are used for administrative console access
• In CIFS:
  – Provide a list of authenticated users with Microsoft® Windows® workgroup authentication
  – Provide access to users when there is no domain controller access with Windows domain authentication
• In NFS, provide access to the storage system


PURPOSE OF LOCAL USERS

Local users are often used to delegate configuration duties to other administrators. However, local users are also created if the storage system is configured to perform local authentication with CIFS or NFS protocols (for example, when the storage system’s CIFS server is configured for Windows workgroup authentication).

Security Administration
• User accounts are managed from the CLI only (no FilerView interface) using the following command:
  useradmin
  – This command allows you to list, add, and delete users.
  – The user account is maintained in the /etc/registry file.
• User authentication is performed locally on the storage system.



SECURITY ADMINISTRATION

Security Administration (Cont.)

• Password control is defined by security options in:
  system> options security
• Password management is defined by the CLI command:
  passwd
NOTE:
• The root user ID cannot be deleted
• There is no initial password for root
• Root has full admin rights to the machine without login if there are no other user definitions or password settings


SECURITY ADMINISTRATION (CONT.)

SECURITY OPTIONS

OPTION: security.passwd.firstlogin.enable {on|off}
DESCRIPTION: Specifies whether new users, and users logging in for the first time after another user has changed their password, must change their password. The default value for this option is off.
NOTE: If you enable this option, you must ensure that all groups have login-telnet and cli-passwd capabilities. Users in groups that do not have these capabilities cannot log in to the storage system.

OPTION: security.passwd.lockout.numtries num
DESCRIPTION: Specifies the number of allowable login attempts before a user account is disabled. The default value for this option is 4,294,967,295.

OPTION: security.passwd.rules.enable {on|off}
DESCRIPTION: Specifies whether a password-composition check is performed when new passwords are specified. If this option is set to on, passwords are checked against the rules specified in this table and rejected if they do not pass the check. If this option is set to off, the check is not performed. The default value for this option is on. By default, this option does not apply to root or administrator users.

OPTION: security.passwd.rules.everyone {on|off}
DESCRIPTION: Specifies whether the password-composition check is performed for the root and administrator users. If the security.passwd.rules.enable option is set to off, this option does not apply. The default value for this option is off.

OPTION: security.passwd.rules.history num
DESCRIPTION: Specifies the number of previous passwords checked against a new password to disallow repeats. The default value for this option is 0, meaning that password reuse is not checked.

SECURITY OPTIONS (CONT.)

OPTION: security.passwd.rules.maximum max_num
DESCRIPTION: Specifies the maximum number of characters in a password. The default value for this option is 256.
NOTE: This option can be set to a value greater than 16, but a maximum of 16 characters are used to match the password. Users with passwords longer than 14 characters cannot log in through Windows interfaces, so if you are using Windows, do not set this option higher than 14.

OPTION: security.passwd.rules.minimum min_num
DESCRIPTION: Specifies the minimum number of characters in a password. The default value for this option is 8.

OPTION: security.passwd.rules.minimum.alphabetic min_num
DESCRIPTION: Specifies the minimum number of alphabetic characters in a password. The default value for this option is 2.

OPTION: security.passwd.rules.minimum.digit min_num
DESCRIPTION: Specifies the minimum number of digit characters (numbers from 0 to 9) in a password. The default value for this option is 1.

OPTION: security.passwd.rules.minimum.symbol min_num
DESCRIPTION: Specifies the minimum number of symbol characters (white space and punctuation characters) in a password. The default value for this option is 0.
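The default composition rules above (minimum length 8, at least 2 alphabetic characters, at least 1 digit, 0 symbols) can be illustrated with a short Python sketch. This mirrors the documented defaults only; it is not NetApp code.

```python
# Illustrative check against the default security.passwd.rules.* values.
# Not NetApp code -- a teaching sketch of the documented rule set.

def password_ok(pw, minimum=8, min_alpha=2, min_digit=1, min_symbol=0):
    alpha = sum(c.isalpha() for c in pw)      # alphabetic characters
    digits = sum(c.isdigit() for c in pw)     # digit characters 0-9
    symbols = sum(not c.isalnum() for c in pw)  # whitespace/punctuation
    return (len(pw) >= minimum and alpha >= min_alpha
            and digits >= min_digit and symbols >= min_symbol)

print(password_ok("netapp1x"))   # True: 8 chars, 7 letters, 1 digit
print(password_ok("netapp"))     # False: too short, no digit
```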

User Access

• Administrative activities are logged
• For security purposes, each administrative user must be identified
• Only administrative users can log in to the storage system
• The syslog file records console logins according to the following:
  – User name (up to 32 characters, not case-sensitive)
  – Time of access
  – Node name and address

USER ACCESS

User Creation Requirements—User

• Current user definitions can be viewed using the CLI command:
  useradmin user list [user]
  – No user specified → general information for all users
  – Specific user → detailed information about a particular user
• Create a new user using the CLI command:
  useradmin user add <username> -g <group>,…
  – Password may be required (see security options)
  – User must be associated with one or more groups

USER CREATION REQUIREMENTS—USER
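Because a user must reference one or more existing groups, and a group must reference one or more existing roles, the three useradmin commands are issued in role, group, user order. A minimal Python sketch of that bookkeeping follows; the names and capabilities are hypothetical, and this is not how Data ONTAP stores the data internally.

```python
# Illustrative sketch of the useradmin object model: roles hold
# capabilities, groups reference roles, users reference groups.
# Hypothetical names; not NetApp code.

roles = {"cifs_admin": {"cli-cifs*", "login-ssh"}}
groups = {}   # group name -> set of role names
users = {}    # user name  -> set of group names

def group_add(name, role_names):
    # A group must be associated with one or more existing roles.
    assert role_names and all(r in roles for r in role_names)
    groups[name] = set(role_names)

def user_add(name, group_names):
    # A user must be associated with one or more existing groups,
    # which is why roles and groups are created first.
    assert group_names and all(g in groups for g in group_names)
    users[name] = set(group_names)

def effective_capabilities(user):
    # A user's capabilities come from the roles of all of their groups.
    caps = set()
    for g in users[user]:
        for r in groups[g]:
            caps |= roles[r]
    return caps

group_add("cifs_admins", ["cifs_admin"])
user_add("bob", ["cifs_admins"])
print(sorted(effective_capabilities("bob")))
```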

Administration Host
Access


ADMINISTRATIVE HOST ACCESS

Administration Host

• The setup command requests the name and IP address of the adminhost
  – This is a UNIX/Linux host that has access to mount /etc from the storage system
  – When mounted, root on the adminhost has root access to /etc
• If provided, the adminhost is granted access to the root volume for administrative purposes
• If not provided, all NFS clients will be granted root access to the root volume (this is not recommended)


ADMINISTRATION HOST

The term adminhost is used to describe an NFS or CIFS client machine that has the ability to
view and modify configuration files stored in the /etc directory of the storage system’s root
volume.
When you designate a workstation as an administration host, the storage system's root file
system (/vol/vol0 by default) is accessible only to the specified workstation in the
following ways:
• If the storage system is licensed for the CIFS protocol, the root file system is accessible as a share named C$.
• If the storage system is licensed for the NFS protocol, the root file system is accessible by NFS
mounting.
You can designate additional administration hosts after setup by modifying the storage
system's NFS exports and CIFS shares.

ADMINISTRATION HOST PRIVILEGES


The storage system grants root permissions to the administration host after the setup
procedure is complete. The following table describes administration host privileges.

IF THE ADMINISTRATION HOST IS AN NFS CLIENT, YOU CAN:
• Mount the storage system root directory and edit configuration files from the administration host
• Enter Data ONTAP commands using a Remote Shell connection

IF THE ADMINISTRATION HOST IS A CIFS CLIENT, YOU CAN:
• Connect to the storage system as root or administrator, and then edit configuration files from any CIFS client

Restricting Access

• To improve security, you can configure the storage system to allow logins only from trusted hosts. This option can be configured using:
  – The CLI command:
    options trusted.hosts [hostname|*|-]
  – FilerView
• You may specify up to five clients to be given Telnet, RSH, and FilerView privileges


RESTRICTING ACCESS

When restricting access using the options trusted.hosts command:
• Host names should be entered as a comma-separated list with no spaces.
• Enter an asterisk (*) to allow access to all clients (this is the default).
• Enter a hyphen (-) to disable access to the server.
• This value is ignored for Telnet if options telnet.access is set, and is ignored for administrative HTTP if options httpd.admin.access is set.
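The three documented forms of the trusted.hosts value (asterisk, hyphen, or a comma-separated host list) can be illustrated with a short Python sketch. This is a teaching illustration only, not NetApp code.

```python
# Illustrative parser for the documented trusted.hosts forms:
# '*' allows all clients (default), '-' disables access, and a
# comma-separated list (no spaces) names the trusted hosts.

def is_trusted(option_value, host):
    if option_value == "*":      # default: allow all clients
        return True
    if option_value == "-":      # disable access to the server
        return False
    return host in option_value.split(",")

print(is_trusted("*", "adminhost"))        # True
print(is_trusted("-", "adminhost"))        # False
print(is_trusted("host1,host2", "host2"))  # True
print(is_trusted("host1,host2", "host3"))  # False
```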

Module Summary

In this module, you should have learned to:
• Restrict administrative access control by configuring and managing users, groups, and roles
• Restrict console and FilerView access
• Configure a client host as an adminhost to manage a storage system


MODULE SUMMARY

Exercise
Module 4: Administration Security
Estimated Time: 30 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

Networking



MODULE 5: NETWORKING

Networking
Module 5
Data ONTAP® 7.3 Fundamentals

NETWORKING

Module Objectives

By the end of this module, you should be able to:
• Identify the configuration of network settings and components in Data ONTAP
• Explain the main features and uses of naming services
• Explain the function of /etc/hosts, NIS, and DNS
• Configure Data ONTAP for name resolution in /etc/nsswitch.conf
• Use host files to troubleshoot name resolution
• Explain routing tables in Data ONTAP
• Identify how a FAS system routes packets
• Define and create virtual interfaces (VIFs)
• Discuss the operation and method for routing in VLANs


MODULE OBJECTIVES

Interface Configuration


INTERFACE CONFIGURATION

Interface Configuration

• Initial interface configuration
  – Configured by the setup command
• After initial setup, you can create and modify the interface configuration using:
  – CLI with the ifconfig command
  – FilerView
• Interface configuration is stored in the /etc/rc file, which is read when the storage system boots


INTERFACE CONFIGURATION

From the CLI, the ifconfig command displays and configures network interfaces for a
storage system.
The following are ifconfig command examples:
• Display network interface configurations:
ifconfig -a
• Change an interface IP address:
ifconfig interface 10.10.10.XX
• Bring down an interface:
ifconfig interface down
• Bring up an interface:
ifconfig interface up

The /etc/rc file configures the interface settings during boot. You can edit this
configuration on the storage system using the wrfile command, from FilerView, or from an
adminhost using CIFS/NFS.
Example: Using the ifconfig command in the /etc/rc file:
ifconfig interface 10.10.10.XX netmask 255.255.252.0 up
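Putting these pieces together, a minimal /etc/rc might contain lines like the following. The host name, interface name, and addresses are examples only; the route and routed lines are typical additions for setting the default gateway, not requirements.

```
hostname system1
ifconfig e0a 10.10.10.10 netmask 255.255.252.0 up
route add default 10.10.10.1 1
routed on
```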

Interface Configuration (Cont.)

The storage system supports the following network types:
• Ethernet 10/100 Base-T
• 1G Ethernet
• 10G Ethernet (Data ONTAP 7.2 or later)

Storage systems with multiple-port Ethernet adapters use letters to identify each port.

Network Type / Letter:
• Ethernet / e

Port Number / Letter:
• 1 / a
• 2 / b
• 3 / c
• 4 / d


INTERFACE CONFIGURATION (CONT.)

Your storage system supports the following interface types:


• Ethernet―including quad-port Ethernet adapters
• Gigabit Ethernet (GbE)
• Asynchronous transfer mode (ATM)―Emulated LAN and FORE/IP
• Onboard network interfaces (supported on FAS250, FAS270, FAS3000/V3000, and
FAS6000/V6000 systems)
• 10 GbE TCP offload engine (TOE) network interface card (NIC)

Your storage system also supports the following virtual network interface types:
• Virtual interface (VIF)
• Virtual local area network (VLAN)
• Virtual hosting (VH)

Interface Naming Example

Interface Type Slot Port Interface Name


Ethernet 0 (onboard) 1 e0a
Ethernet 0 (onboard) 2 e0b
Ethernet 3 1 e3a
Ethernet 3 2 e3b
Ethernet 3 3 e3c
Ethernet 3 4 e3d


INTERFACE NAMING EXAMPLE

For physical interfaces, interface names are assigned automatically based on the slot where
the network adapter is installed.
VLAN interfaces are named in the interface-vlan_id format, where interface is the name of the physical interface on which the VLAN is configured (which encodes the slot where the network adapter is installed), and vlan_id is the identifier of that VLAN. For example, e8-2, e8-3, and e8-4 are three VLAN interfaces for VLANs 2, 3, and 4, configured on interface e8.
You can assign names to vifs and emulated LAN interfaces.
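The slot-and-port convention in the table above can be expressed as a tiny Python helper. This is an illustration, not a NetApp API; it assumes a four-port adapter with ports lettered a through d.

```python
# Illustrative helper deriving physical interface names from the
# naming convention shown above: 'e' + slot number + port letter.

def interface_name(slot, port):
    """Return the interface name for a slot and 1-based port number."""
    return "e{}{}".format(slot, "abcd"[port - 1])

print(interface_name(0, 1))  # e0a (onboard adapter, port 1)
print(interface_name(3, 4))  # e3d (slot 3, port 4)
```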

Managing Interfaces: ifconfig

• Network interface configuration parameters:
  – IP address
  – Netmask address
  – Broadcast address
  – Media type and speed
  – Maximum Transmission Unit (MTU)
  – Flow control (Gigabit Ethernet II controller only)
  – Up or down state
• To display current status:
  – ifconfig -a
• Interface configuration changes are not permanent until entered into the /etc/rc file

MANAGING INTERFACES: IFCONFIG

INTERFACE CONFIGURATION PARAMETERS


There are seven common network interface parameters that can be configured:
• IP address
• Netmask address
• Broadcast address
• Media type and speed
• Maximum Transmission Unit (MTU)
• Flow control for the GbE II controller
• Up or down state
Changes made using the CLI are not permanent until added to the /etc/rc file using either
the CLI or FilerView.

NETWORK PARAMETER DESCRIPTIONS
• IP address―Standard format is used for IP addresses (for example, 192.168.23.10). IP addresses
are mapped to host names in the /etc/hosts file.
• Netmask and broadcast address―Standard format is used for netmask and broadcast addresses
(for example, 255.255.255.0 for netmask, and 192.168.1.255 for broadcast address).
• Media type and speed―The following media types can be configured:
[ mediatype { tp | tp-fd | 100tx | 100tx-fd | 1000fx | auto } ]
• MTU―Use a smaller interface MTU value if a bridge or router on the attached network cannot
break large packets into fragments.
• Flow control for the GbE II controller―The original GbE controller supports only full duplex,
not flow control. The GbE Controller II negotiates flow control with an attached device that
supports autonegotiation. However, if autonegotiation fails on either device, the flow control
setting that was entered using the ifconfig command is used. The following flow control
settings can be configured:
[ flowcontrol { none | receive | send | full } ]
• Up or down state―The state of any interface can be configured up or down.

IFCONFIG COMMAND SYNTAX

NetApp> ifconfig
usage: ifconfig [ -a | [ <interface>
[ [ alias | -alias ] <address> ] [ up | down ]
[ netmask <mask> ] [ broadcast <address> ]
[ mtusize <size> ]
[ mediatype { tp | tp-fd | 100tx | 100tx-fd | 1000fx | auto } ]
[ flowcontrol { none | receive | send | full } ]
[ trusted | untrusted ]
[ wins | -wins ]
[ [ partner { <address> | <interface> } ] | [ -partner ] ] ] ]

Managing Interfaces: FilerView

NOTE: Modifications made in FilerView are persistent in the /etc/rc file.


MANAGING INTERFACES: FILERVIEW

FilerView provides a convenient and administrator-friendly way to manage network interfaces.


In addition, FilerView offers advanced functionality such as vif and VLAN management.
Any modifications made using FilerView are persisted in the /etc/rc file and will persist
through reboot.

Managing Interfaces: FilerView (Cont.)


MANAGING INTERFACES: FILERVIEW (CONT.)

FilerView provides the same level of control over a storage system’s interfaces as the
ifconfig command in the CLI.

Managing Interfaces: CLI

• To configure the current status:
  ifconfig
• To display permanent settings:
  rdfile /etc/rc
• To change permanent settings:
  wrfile /etc/rc
  – Command overwrites the existing file
  – Existing information can be cut and pasted
  – Press Control-C to save changes and exit
  – To activate changes to the /etc/rc file, reboot or issue source /etc/rc

MANAGING INTERFACES: CLI

Name Resolution


NAME RESOLUTION

Host-Name Resolution

A storage system must be able to resolve host names to valid IP addresses.
• Host-name resolution is commonly used in:
  – Processing CIFS requests
  – Processing NFS requests
  – Authenticating RSH sessions
  – Many other services


HOST-NAME RESOLUTION

MAINTAINING HOST INFORMATION


Data ONTAP uses the following methods to resolve host information on a storage system:
• /etc/hosts
• DNS
• NIS
By default, the storage system tries first to resolve host names locally using the /etc/hosts file, in the search order defined by the /etc/nsswitch.conf file.
DNS and NIS can be configured using the setup command during installation of a storage
system. Therefore, many of the commands and files included in this lesson are executed
automatically. Usually, NIS or DNS commands are only entered manually when:
• NIS or DNS was not configured during setup
• You need to make a change to a configuration

Host-Name Resolution (Cont.)
Data ONTAP stores and maintains host information in the following locations:
• /etc/hosts file
• DNS server
• Network Information Service (NIS) server

In host-name resolution:
• The /etc/nsswitch.conf file controls the order in which these three locations are checked.
• Data ONTAP stops checking locations when a valid IP address is returned.
NOTE: For convenience, you can use the Host Name Resolution Policy Wizard in FilerView.


HOST-NAME RESOLUTION (CONT.)

SPECIFYING A RESOLUTION SEARCH ORDER


The /etc/nsswitch.conf file displays the order in which a storage system searches for
resolution. For example, to resolve host names, a storage appliance uses the search order list
for hosts and (in this example) searches first using the /etc/hosts file, then NIS, and then
DNS. Each line in the /etc/nsswitch.conf file uses this format. You can change the
default search order for host-name resolution at any time by modifying this file. After a
storage system resolves the host name, the search ends.
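The search-order behavior can be sketched in Python as follows. This is an illustration only; the resolver functions and addresses below are stand-ins, not real lookups.

```python
# Illustrative sketch of the /etc/nsswitch.conf search order for the
# 'hosts' line: each source is tried in order, and the search stops
# at the first valid answer.

def resolve(hostname, order, resolvers):
    for source in order:                 # e.g. ["files", "nis", "dns"]
        address = resolvers[source](hostname)
        if address is not None:          # stop at the first valid IP
            return address
    return None

# Stand-in resolvers (hypothetical data, not real lookups):
resolvers = {
    "files": lambda h: {"system1": "192.168.23.10"}.get(h),  # /etc/hosts
    "nis":   lambda h: None,                                 # NIS miss
    "dns":   lambda h: "10.0.0.99",                          # DNS answer
}

print(resolve("system1", ["files", "nis", "dns"], resolvers))  # 192.168.23.10
print(resolve("other",   ["files", "nis", "dns"], resolvers))  # 10.0.0.99
```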

/etc/hosts Configuration

Local IP and name resolution is provided by /etc/hosts.
To modify /etc/hosts, use:
• The rdfile and wrfile commands in the CLI
• An adminhost
• FilerView


/ETC/HOSTS CONFIGURATION

USING THE /ETC/HOSTS FILE FOR HOST-NAME RESOLUTION


Because the /etc/hosts file is checked first and changes in it take effect immediately, it is
important to keep this file current. You can edit the file using a standard editing program.
When using a standard editing program, be sure to include a blank line at the end. The
following is the /etc/hosts format:
IP address hostname alias(es)

/ETC/HOSTS FILE FUNCTION


The /etc/hosts file is automatically generated during the storage system setup procedure
as part of the data installation process. It is populated at that time with IP addresses and host
names.
NOTES:
• The default IP address for the local host (storage appliance) is listed in the /etc/hosts file.
• Installed cards without IP addresses are included in the /etc/hosts file, but are commented
out.
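Following the format above, a minimal /etc/hosts might look like this. The addresses, host name, and alias are examples only; note the blank line at the end.

```
# /etc/hosts  (IP address, host name, then optional aliases)
127.0.0.1      localhost
192.168.23.10  system1  system1-e0a

```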

/etc/hosts Configuration: FilerView


/ETC/HOSTS CONFIGURATION: FILERVIEW

You can also manage the /etc/hosts file using the FilerView Web application. To access
and edit the file from FilerView, from the main menu, select Manage Hosts File from the
Network node.

DNS Configuration

The DNS provides a centralized mechanism for host-name resolution in Windows and UNIX environments.

To configure the DNS:
• In FilerView, use the Host Name Resolution Policy Wizard
• In the CLI, use:
  – The setup command
  – options dns.*
  – The dns command


DNS CONFIGURATION

USING DNS FOR HOST-NAME RESOLUTION


DNS matches domain names to IP addresses and enables you to centrally maintain host
information so that you do not have to update the /etc/hosts file every time you add a new
host to the network. This is particularly helpful if you have several storage appliances on your
network.
DNS is configured using options and commands.
To make the configuration commands permanent, enter them in the /etc/rc file. The
/etc/rc file is automatically generated during the setup procedure as part of the Data
ONTAP installation process. If you choose to set up DNS at that time, the file is populated
with DNS configuration information.
Use the information command dns info to display the status of the DNS resolver, a list of
DNS servers, the state of each DNS server, the default domain configured on the storage
appliance, and a list of other domains that will be used with unqualified names for name
lookup.

Example: options dns.domainname dns_campus2
Result: Sets the DNS domain name to dns_campus2

Example: options dns.enable on
Result: Enables DNS

NIS
In UNIX environments, NIS provides:
• A centralized mechanism for host-name resolution
• User authentication

The storage system can participate as an NIS client or slave.

To configure NIS:
• In FilerView, use the Host Name Resolution Policy Wizard
• In the CLI, use:
  – The setup command
  – options nis.*
  – The nis command


NIS

USING NIS FOR HOST-NAME RESOLUTION


The NIS client service provides information about security-related parameters on a network
such as hosts, user passwords, user groups, and netgroups.
NIS enables you to centrally maintain host information, so you don't have to update the
/etc/hosts file on every storage appliance on your network.
While the storage system can be an NIS client and can query NIS servers for host
information, it cannot be an NIS server. As an NIS client, the storage system can be
configured as an NIS slave using the options nis.slave.enable command. The storage
system then downloads NIS maps from the NIS master servers defined in nis.servers. The
storage system NIS slave checks the master servers every 45 minutes. Downloaded maps are
stored under /etc/yp/<NIS_domain>/. If you want to use NIS as the primary method for
host resolution, specify it ahead of the other methods in the /etc/nsswitch.conf file.

Host Name Resolution Policy Wizard:
FilerView
To ease configuration, use the FilerView Host
Name Resolution Policy Wizard:


HOST NAME RESOLUTION POLICY WIZARD: FILERVIEW

The FilerView Host Name Resolution Policy Wizard configures:


• DNS
• NIS
• Name service configuration file (/etc/nsswitch.conf)
To ensure all changes are persisted, you must click the Commit button when the wizard is
finished.

Host Resolution Policy Wizard:
FilerView (Cont.)
Choose a resolution method:


HOST NAME RESOLUTION POLICY WIZARD: FILERVIEW (CONT.)

FILERVIEW METHODS OF RESOLUTION


The Host Name Resolution Policy Wizard configures DNS and NIS, but does not configure
the /etc/hosts file, which is the default host-resolution mechanism. The /etc/hosts
file is a method that is always available to resolve names to addresses.

Host Resolution Policy Wizard:
FilerView (Cont.)
Provide DNS parameters:


HOST RESOLUTION POLICY WIZARD: FILERVIEW (CONT.)

Host Resolution Policy Wizard:
FilerView (Cont.)
List DNS server address(es):


HOST RESOLUTION POLICY WIZARD: FILERVIEW (CONT.)

DNS SETTINGS
DNS server parameters can also be configured through the CLI using the setup command.

Host Resolution Policy Wizard:
FilerView (Cont.)
Specify NIS information:


HOST RESOLUTION POLICY WIZARD: FILERVIEW (CONT.)

IMPORTANT: You must click Commit to persist changes.

Host Resolution Policy Wizard:
FilerView (Cont.)
Specify NIS Group Parameters:


HOST RESOLUTION POLICY WIZARD: FILERVIEW (CONT.)

Host Resolution Policy Wizard:
FilerView (Cont.)
Specify the order for the Name Service
Configuration:


HOST RESOLUTION POLICY WIZARD: FILERVIEW (CONT.)

Host Resolution Policy Wizard:
FilerView (Cont.)
Commit the changes:


HOST RESOLUTION POLICY WIZARD: FILERVIEW (CONT.)

Route Resolution


ROUTE RESOLUTION

Route Information

A route defines the direction to a network or host.
To display the current routing table:
ƒ In CLI, use netstat -r
system> netstat -r
Routing tables
Internet:
Destination Gateway Flags Refs
default 66.166.149.161 UGS 14
66.166.149.160/2 link#1 UC 0
66.166.149.161 0:20:6f:10:25:7a UHL

ƒ FilerView


ROUTE INFORMATION

ROUTING
A storage appliance does not function as a router for other network hosts, even if it has
multiple network interfaces. However, the storage appliance does route its own packets.
To display the default and explicit routes your storage appliance uses to route its own
packets, use the netstat -r command to view the current routing table. The netstat
command displays network-related data structures.

MODIFYING THE ROUTING TABLE


The route command allows you to manually manipulate the network routing table for a
specific host or network specified by destination.
To add or delete a specific host or network route in the routing table, use route.

COMMAND RESULT
route add default 10.10.10.1 1
    Adds a default route through 10.10.10.1 with a metric (hop) of 1.
route delete 193.20.8.173 193.20.4.254
    Deletes the route to destination 193.20.8.173 connecting through 193.20.4.254.
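The routing decision itself follows the usual longest-prefix-match rule: a packet takes the most specific matching route, and falls back to the default route when nothing else matches. A simplified sketch of that rule, using Python's standard ipaddress module and the (hypothetical) routes from the table above:

```python
import ipaddress

def pick_route(dest: str, routes: dict, default_gw: str) -> str:
    """Return the gateway for dest: the most specific matching route wins;
    otherwise fall back to the default gateway."""
    addr = ipaddress.ip_address(dest)
    matches = [(net, gw) for net, gw in routes.items()
               if addr in ipaddress.ip_network(net)]
    if not matches:
        return default_gw
    # The longest prefix (largest prefixlen) is the most specific route.
    best = max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)
    return best[1]

# Hypothetical routing table built from the example entries above.
routes = {"193.20.8.0/24": "193.20.4.254"}
print(pick_route("193.20.8.173", routes, "10.10.10.1"))  # → 193.20.4.254
```

This is only an illustration of the matching rule, not the storage appliance's actual routing code.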

The netstat Command

ƒ Use the netstat -r command to view the
network routing tables
ƒ Use the netstat -nr command to view the
network routing tables with numeric IP
addresses (instead of resolved names)
ƒ Use the netstat -rs command to display
per-protocol statistics


THE NETSTAT COMMAND

The netstat command symbolically displays the contents of various network-related data
structures. There are a number of output formats, depending on the options chosen. Use the
man page to see all available options.

The route Command

ƒ Use the route -s command to show routing
tables
ƒ Use the route -f command to flush all
gateway entries in the routing table
ƒ Use the route -ns command to view the
network routing tables with IP addresses
(instead of name resolution)


THE ROUTE COMMAND

The route command allows you to manually manipulate the network routing table for a
specific host or network specified by destination.

Virtual Interfaces


VIRTUAL INTERFACES

Virtual Interfaces

Virtual interfaces (VIFs) allow:


ƒ Trunking of one or more Ethernet interfaces
ƒ Increased throughput to and from the storage system
[Figure: virtual IP presented over an EtherChannel switch, with load balancing across links]

VIFs can be configured as:
ƒ Single-mode trunks
ƒ Multimode trunks

VIRTUAL INTERFACES

A vif is a group of Ethernet interfaces working together as a logical unit. You can group up to
16 Ethernet interfaces into a single logical interface.
The following are some advantages of vifs over single-network interfaces:
• Higher throughput―Multiple interfaces work as one
• Fault tolerance―If one vif interface goes down, the remaining interfaces maintain the connection
to the network
• Protection against a switch port becoming a single point of failure

Single-Mode VIF

In single mode, only one interface is active. The other interface is on standby.
Provides failover capability.
[Figure: single-mode vif over interfaces e0 and e1; the standby interface takes over if the active link fails]


SINGLE-MODE VIF

SINGLE-MODE TRUNK
Vifs are also known as trunks, virtual aggregations, link aggregations, or EtherChannel
virtual interfaces.
Trunks can be single-mode or multimode. In a single-mode trunk, one interface is active
while the other interface is on standby.
NOTE: A failure signals the inactive interface to take over and maintain the connection with
the switch.

Multimode VIF

In multimode, all interfaces are active and share a MAC address.
Provides multiplex capability.
[Figure: multimode vif over interfaces e0, e1, and e2]


MULTIMODE VIF

MULTIMODE TRUNK
In a multimode trunk, all interfaces are active, providing greater speed when multiple hosts
access the storage appliance. Because the switch determines how the load is balanced among
the interfaces, it must support manually configurable trunking.
In the figure above, three active interfaces comprise the multimode trunk. If any one of the
three interfaces fails, the storage appliance remains connected to the network.
Not all switches provide this capability. Check with your switch manufacturer for more
information.

Second-Level VIF

Switch and NIC failures are transparent to clients when communicating to system A through
the Vif_A “super” virtual interface.
Defining a virtual interface at this level provides resilience against a quad NIC failure.
[Figure: Switch X and Switch Y connect to multimode vifs (Vif_XA, Vif_YA, Vif_XB, Vif_YB)
built on quad Ethernet cards; the multimode vifs are grouped into second-level “super” vifs.
Final step: active-active configuration]


SECOND-LEVEL VIF

A second-level vif is a group of multimode vifs. If a primary multimode vif fails, a second-
level vif provides a standby multimode vif. You can use second-level vifs on a single storage
system or in a cluster.
You can set up your storage system with two double-link multimode vifs where each vif is
connected to a different switch that is capable of link aggregation over multiple ports. You
can then set up a second-level single-mode vif that contains both of the multimode vifs.
When you configure the second-level vif using the vif create command, only one of the
two multimode vifs is brought up as the active link. If all the underlying interfaces in the
active vif fail, the second-level vif activates the link corresponding to the other vif.
In the example above, “Quad” is an Ethernet card with four Ethernet ports.

Load Balancing
Load balancing is supported for multimode VIFs only:
ƒ IP-based (default)
ƒ MAC-based
ƒ Round-robin (not recommended)

Load balancing assumes an even distribution of IP addresses, such as the following:

e0: 10.10.10.1, 10.10.10.5, 10.10.10.9,  10.10.10.13
e1: 10.10.10.2, 10.10.10.6, 10.10.10.10, 10.10.10.14
e2: 10.10.10.3, 10.10.10.7, 10.10.10.11, 10.10.10.15
e3: 10.10.10.4, 10.10.10.8, 10.10.10.12, 10.10.10.16


LOAD BALANCING

Load balancing ensures that all the interfaces in a multimode vif are equally utilized for
outbound traffic. Load balancing is supported for multimode trunks only, and assumes an even
distribution of hosts. There are three methods of load balancing (IP-based is the default):
• IP-based―The outgoing interface is selected based on the storage system's and the client's IP addresses.
• MAC-based―The outgoing interface is selected based on the storage system's and the client's
MAC (Media Access Control) addresses.
• Round robin―All the interfaces are selected on a rotating basis.
Both the IP-based and MAC-based methods use a formula to determine which interface to use
for outgoing frames: the exclusive OR (XOR) of the last four bits of the source and
destination addresses determines which interface to return data on.
NOTE: The round-robin method provides true load balancing, but may cause out-of-order
packet delivery and retransmissions due to overruns.
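As a rough illustration of the XOR idea described above (this is not Data ONTAP's actual implementation), the IP-based method can be sketched in Python: XOR the last four bits of the source and destination addresses, then map the result onto the available interfaces:

```python
# Illustrative sketch only: demonstrates the XOR-of-last-four-bits idea,
# not Data ONTAP's real interface-selection code.

def pick_interface(src_ip: str, dst_ip: str, num_interfaces: int) -> int:
    """Return the index of the trunk member to use for this src/dst pair."""
    src_last4 = int(src_ip.rsplit(".", 1)[1]) & 0x0F  # last 4 bits of source
    dst_last4 = int(dst_ip.rsplit(".", 1)[1]) & 0x0F  # last 4 bits of destination
    return (src_last4 ^ dst_last4) % num_interfaces

# A given client/storage pair always maps to the same interface,
# while different clients spread across the trunk members.
print(pick_interface("10.10.10.1", "10.10.10.100", 4))  # → 1
```

Because the mapping is deterministic, packets of one conversation stay on one link (avoiding reordering), which is why round robin is the only method that risks out-of-order delivery.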

Creating a VIF from the CLI:
Single-level Example
ƒ The named virtual interface is treated as a
single interface: ifconfig vif_name
ƒ Entries created on the command line are not
permanent
system> vif create single SingVif1 e3a e3b
system> ifconfig SingVif1 172.17.200.201 netmask
255.255.255.0 mediatype 100tx-fd up
system> vif favor e3a
system> ifconfig SingVif1
SingVif1:flags=1148043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
inet 172.17.200.201 netmask 0xffffff00 broadcast 172.17.200.255
ether 02:a0:98:03:28:8e (Disabled virtual interface)


CREATING A VIF FROM THE CLI: SINGLE-LEVEL EXAMPLE

You create and modify a trunk using vif commands. A trunk name must be unique, begin
with a letter, contain no spaces, and not exceed 15 characters. After you create the
trunk, configure it like a regular network interface using the ifconfig command.
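In the ifconfig output above, the hex netmask 0xffffff00 corresponds to 255.255.255.0, and the broadcast address is derived from the address and mask. The relationship can be checked with Python's standard ipaddress module:

```python
import ipaddress

# The address/netmask pair from the single-mode vif example above.
iface = ipaddress.ip_interface("172.17.200.201/255.255.255.0")

print(iface.network)                     # 172.17.200.0/24
print(iface.network.broadcast_address)   # 172.17.200.255
print(int(iface.netmask) == 0xFFFFFF00)  # True
```

The broadcast address is simply the interface address with all host bits set, which is why 172.17.200.201/255.255.255.0 yields 172.17.200.255.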

USING FILERVIEW TO CONFIGURE A VIF


In addition to creating vifs using the CLI, you can use FilerView to create and configure any
type of vif.

STEP 1
Enter the command:
vif create single vif_name [interface_list]
where vif_name is the name of the vif
and interface_list is a list of the interfaces you want the vif to consist
of.
NOTE: You must ensure that all interfaces to be included in the vif are configured down.
You can use the ifconfig command to configure an interface down.
Example:
To create a single-mode vif with the name SingleTrunk1:
vif create single SingleTrunk1 e0 e1

STEP 2
Enter the command:
ifconfig vifname IP_address netmask mask
where vifname is the name of the vif
IP_address is the IP address for this interface
and mask is the network mask for this interface.
Example:
To configure an IP address of 10.120.5.74 and a netmask of 255.255.255.0 on the single-
mode vif SingleTrunk1 that was created in the previous example:
ifconfig SingleTrunk1 10.120.5.74 netmask 255.255.255.0
STEP 3
To change the active interface in a single-mode vif, enter the command:
vif favor interface
where interface is the name of the interface you want to be active.
Example:
To specify the interface e1 as preferred:
vif favor e1
STEP 4
To check the status of your new interface, enter the command:
ifconfig vifname
where vifname is the name of the new interface.
STEP 5
To make a new vif permanent, update the /etc/rc file by entering the command:
wrfile -a /etc/rc
ifconfig SingleTrunk1 10.120.5.74 netmask 255.255.255.0
vif favor e1
STEP 6
Verify your changes to the /etc/rc file by entering one of the following commands:
rdfile /etc/rc
source /etc/rc

Creating a VIF from the CLI:
Multilevel Example
system> vif create multi multiVif2 e3a e3b e3c e3d
system> ifconfig multiVif2 172.17.200.202 netmask
255.255.255.0 mediatype 100tx-fd up
system> ifconfig multiVif2
multiVif2:flags=1148043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
inet 172.17.200.202 netmask 0xffffff00 broadcast 172.17.200.255
ether 02:a0:98:03:28:8e (Disabled virtual interface)


CREATING A VIF FROM THE CLI: MULTILEVEL EXAMPLE

This procedure enables you to create a static or dynamic multilevel vif on your storage
system. By default, the load-balancing method based on IP address is used for a multilevel
vif. However, you can select another method when creating the vif. Once a load-balancing
method has been assigned to a vif, it cannot be changed.

STEP 1
To create a static multimode vif, enter the command:
vif create multi vif_name -b {rr|mac|ip} [interface_list]
Or, to create a dynamic multimode vif, enter the command:
vif create lacp vif_name -b {rr|mac|ip} [interface_list]
where -b specifies one of the following load-balancing methods:
• rr―round robin
• mac―based on MAC address
• ip―based on IP address (default)
NOTE: For dynamic multimode vifs, use the ip load-balancing method.
vif_name is the name of the vif
and interface_list is a list of the interfaces that make up the vif.
NOTE: You must ensure that all interfaces to be included in the vif are configured down.
You can use the ifconfig command to configure an interface down.

Example
To create a multimode vif made up of interfaces e0, e1, e2, and e3 using MAC-based load
balancing:
vif create multi MultiTrunk1 -b mac e0 e1 e2 e3

STEP 2
Enter the command:
ifconfig vifname IP_address netmask mask
where vifname is the name of the vif
IP_address is the IP address for this interface
and mask is the network mask for this interface.

Creating a VIF from the CLI:
Second-Level VIF Example
system> vif create multi multiVif1 e3a e3b
system> vif create multi multiVif2 e3c e3d
system> vif create single L2vif multiVif1 multiVif2
system> ifconfig L2vif 172.17.200.206 netmask
255.255.255.0 mediatype 100tx-fd up
system> ifconfig L2vif
L2vif:flags=1148043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
inet 172.17.200.206 netmask 0xffffff00 broadcast 172.17.200.255
ether 02:a0:98:03:28:8c (Disabled virtual interface)


CREATING A VIF FROM THE CLI: SECOND-LEVEL VIF EXAMPLE

This procedure creates a second-level vif called vif_name on a single storage system with
two multimode vifs called vif_name1 and vif_name2. vif_name1 is composed of
two physical interfaces, if1 and if2, and vif_name2 is composed of two physical
interfaces, if3 and if4.

STEP 1
To create two multimode interfaces, enter the commands:
vif create multi -b {rr|mac|ip} vif_name1 if1 if2
vif create multi -b {rr|mac|ip} vif_name2 if3 if4
where -b specifies one of the following load-balancing methods:
• rr―round robin
• mac―based on MAC address
• ip―based on IP address (default)

NOTE: You must ensure that all interfaces to be included in the vif are configured down.
You can use the ifconfig command to configure an interface down.
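Following the slide example above, the remaining steps group the two multimode vifs into a single-mode second-level vif and configure it like any other interface (the names are the placeholders used in Step 1):

```
vif create single vif_name vif_name1 vif_name2
ifconfig vif_name IP_address netmask mask
```

Only one of the two multimode vifs is brought up as the active link; if all interfaces in the active vif fail, the second-level vif activates the other vif.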

vif Commands

ƒ vif create [single|multi] <vif_name> [-b [rr|ip|mac]] [<interface_list>]
ƒ vif delete <vif_name> [interface_list]
ƒ vif destroy <vif_name>
ƒ vif add <vif_name> <interface_list>
ƒ vif [favor|nofavor] <interface>
ƒ vif status [<vif_name>]
ƒ vif stat vif_name [interval]

VIF COMMANDS

EXAMPLE RESULT
vif create single SingleTrunk e1 e2
    Creates a single-mode trunk vif on interfaces e1 and e2. Enter this command into
    the /etc/rc file to make it persistent over reboots.
vif delete MultiTrunk1 e2 e3
    Removes interfaces e2 and e3 from MultiTrunk1.
vif destroy SingleTrunk1
    Deletes the trunk SingleTrunk1.
vif add MultiTrunk1 e2 e3
    Adds interfaces e2 and e3 to MultiTrunk1. Modify the existing vif create
    command in the /etc/rc file to make it persistent over reboots.
vif favor e0
    Specifies the active interface in a single-mode trunk. If no link is specified, the
    active link is randomly chosen. Enter this command into the /etc/rc file to make it
    persistent over reboots.
vif status SingleTrunk1
    Displays the status of the specified virtual interface. If no interface is specified,
    displays the status of all virtual interfaces.
vif stat SingleTrunk1 10
    Displays the number of packets received and transmitted on each interface. You can
    specify the time interval (in seconds) at which the statistics are displayed. If no
    number is entered, statistics are displayed by default at two-second intervals.

Creating a VIF with FilerView

After the VIF is created, assign it an address using ifconfig.


CREATING A VIF WITH FILERVIEW

After a vif is created using either the CLI vif command or FilerView, you must assign an
address to the vif. To configure the vif as if it were a single interface, use the ifconfig
command.

Virtual LANs


VIRTUAL LANS

Virtual LANs

Virtual LANs (VLANs) provide:


ƒ Increased IP network security
ƒ Optimized packet routing
[Figure: VLAN 0, VLAN 1, and VLAN 2 spanning switch ports on Floor 1 and Floor 2]


VIRTUAL LANS

A virtual local area network (VLAN) is a switched network that is logically segmented by
function, project team, or applications. End stations can be grouped by department, by
project, or by security level. End stations can be geographically dispersed and still be part of
the broadcast domain in a switched network.

ADVANTAGES OF VLANS
• Ease of administration―VLANs enable a logical grouping of users who are physically
dispersed. Moving to a new location does not interrupt membership in a VLAN. Similarly,
changing job functions does not require moving the end station because it can be reconfigured into
a different VLAN.
• Confinement of broadcast domains―VLANs reduce the need for routers on the network to
contain broadcast traffic. Packet flooding is limited to the switch ports on the VLAN.
• Reduction of network traffic―Because the broadcast domains are confined to the VLAN, traffic
on the network is significantly reduced.
• Enforcement of security―End stations on one VLAN cannot communicate with end stations on
another VLAN unless a router is connected between them.

Creating a VLAN from the CLI

system> ifconfig e3b down


system> vlan create e3b 10
vlan: e3b-10 has been created
system> ifconfig e3b-10 172.17.200.201 netmask
255.255.255.0 mediatype 100tx-fd up
system> ifconfig –a
e3b:flags=80908043<BROADCAST,RUNNING,MULTICAST,TCPCKSUM,VLAN> mtu 1500
ether 00:a0:98:03:28:8f (auto-1000t-fd-up) flowcontrol full


CREATING A VLAN FROM THE CLI

Use the vlan create and the ifconfig commands to create and configure a VLAN.

The vlan create command:


• Creates a VLAN interface
• Includes the VLAN interface in one or more VLAN groups as specified by the VLAN identifier
• Enables VLAN tagging
• Enables (optionally) GVRP on the VLAN interface

After creating the VLAN interface with the vlan command, you can configure it using the
ifconfig command.

vlan Commands

ƒ Use the following commands for VLANs:


– vlan create –g on <ifname> <vlanid …>
– vlan delete [-q] <ifname> <vlanid>
– vlan add <ifname> <vlanid [vlanid …]>
– vlan stat <ifname> <vlanid>
– vlan modify –g [on|off] <ifname>
ƒ Supported VLAN IDs are 1–4094
NOTE: VLAN ID 1 is used by a number of switch vendors.
ƒ VLANs over VIFs are supported
ƒ Use the /etc/rc file to persist configurations during
reboot


VLAN COMMANDS

A VLAN is created using the vlan create command in the CLI or in FilerView. After
creating the trunk, you can configure it like any other regular network interface using the
ifconfig command.

EXAMPLE RESULT
vlan create -g on e4 2 3 4
    Creates three VLANs on interface e4, named e4-2, e4-3, and e4-4. The -g on
    option enables GVRP on the VLANs. Enter this command in the /etc/rc file to
    make it persistent over reboots.
vlan delete -q e8 2
    Removes VLAN e8-2. If the interface was configured up, a message appears asking
    you to confirm the deletion.
vlan add e8 3
    Adds e8-3 to the VLAN. Enter this command in the /etc/rc file to make it
    persistent over reboots.
vlan stat e4 10
    Displays the number of packets received and transmitted on each interface. You can
    specify the time interval (in seconds) at which the statistics are displayed. If no
    number is entered, statistics are displayed by default at two-second intervals.
vlan modify -g off e8
    Excludes interface e8 from participating in GVRP. Enter this command in the
    /etc/rc file to make it persistent over reboots.
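The supported 1–4094 VLAN ID range comes from the 12-bit VID field of the IEEE 802.1Q tag, in which VID 0 and 4095 are reserved. A small sketch, treating the 4-byte tag as a plain integer (illustrative, not Data ONTAP code):

```python
TPID = 0x8100  # 802.1Q tag protocol identifier

def make_tag(vid: int, pcp: int = 0) -> int:
    """Build a 32-bit 802.1Q tag: TPID (16 bits) | PCP (3) | DEI (1) | VID (12)."""
    if not 1 <= vid <= 4094:  # 0 and 4095 are reserved by the standard
        raise ValueError("VLAN ID must be 1-4094")
    return (TPID << 16) | (pcp << 13) | vid

def vid_of(tag: int) -> int:
    """Extract the 12-bit VLAN ID from a tag."""
    return tag & 0x0FFF

print(hex(make_tag(10)))  # tag for VLAN 10, as created on e3b above
```

Because the VID field is only 12 bits wide, no switch or storage system can carry a VLAN ID outside this range.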

Module Summary
In this module, you should have learned to:
ƒ Use the ifconfig command to configure interfaces
ƒ Identify host-name resolution methods:
– /etc/hosts file
– DNS
– NIS
ƒ Explain how a VIF is a single virtual interface created
from multiple physical interfaces
ƒ Identify trunking modes supported on the storage
system:
– Single mode―failover
– Multimode―increased bandwidth
ƒ Explain how VLANs increase IP network security by
tagging specific packets with the appropriate VLAN ID

MODULE SUMMARY

Exercise
Module 5: Networking
Estimated Time: 45 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

Physical Storage



MODULE 6: PHYSICAL STORAGE MANAGEMENT

Physical Storage
Management
Module 6
Data ONTAP® 7.3 Fundamentals

PHYSICAL STORAGE MANAGEMENT

6-1 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

© 2008 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.
NetApp University - Do Not Distribute
Module Objectives

By the end of this module, you should be able to:


ƒ Describe Data ONTAP RAID technology
ƒ Identify a disk in a disk shelf based on its ID
ƒ Execute commands to determine disk ID
ƒ Identify a hot-spare disk in an FAS system
ƒ Calculate usable disk space
ƒ Describe the effects of using multiple disk
types
ƒ Execute aggregate commands in Data ONTAP
ƒ Define and create an aggregate


MODULE OBJECTIVES

Disks


DISKS

Disks

ƒ All data is stored on disks


ƒ To understand how physical media is
managed in your storage system, we will
address:
– Disk types
– Disk qualification
– Disk ownership
– Spare disks


DISKS

Supported Disk Topologies

FC-AL

FAS2000 FAS3000 FAS6000


SATA

FAS2000 FAS3000 FAS6000 R200


SUPPORTED DISK TOPOLOGIES

FIBRE CHANNEL ARBITRATED LOOP


Fibre Channel-Arbitrated Loop (FC-AL) is a Fibre Channel topology that requires no switch.
NetApp uses this topology to connect supported storage controllers and disk shelves in a type
of loop.

SERIAL ATA
Serial ATA (SATA) is a successor of the Advanced Technology Attachment (ATA) standard.
NetApp uses this topology to connect supported storage controllers and disk shelves in a
high-speed serial link.

Disk Qualification

Use only NetApp-qualified disks.
Modifying the Disk Qualification Requirement file can cause your storage system to halt.


DISK QUALIFICATION

NetApp storage systems only support disks qualified by NetApp. Disks must be purchased
from NetApp or an approved reseller.

UNQUALIFIED DISKS
Data ONTAP automatically detects unqualified disks. If you attempt to use an unqualified
disk, Data ONTAP responds by issuing a “delayed forced shutdown” warning, giving you 72
hours to remove and replace the unqualified disk before a forced system shutdown occurs.
In addition, when Data ONTAP detects an unqualified disk it takes the following actions:
• Provides notification through syslog entries, console messages, and AutoSupport
• Generates an automatic error message and delayed forced shutdown if the
/etc/qual_devices file is modified
• Marks unsupported drives as “unqualified.”

DISK QUALIFICATION
If you install a new disk drive into your disk shelf and the storage system responds with an
unqualified disk error message, you must remove the disk and replace it with a qualified disk.

To correct an unqualified disk error and avoid a forced shutdown, complete the following
steps:
1. Remove any disk drives not provided by NetApp or an authorized NetApp vendor or reseller.
2. To update your list of qualified disks, download and install the most recent
/etc/qual_devices file from http://now.netapp.com/NOW/download/tools/diskqual/.
3. If the unqualified disk error message persists after installing an up-to-date
/etc/qual_devices file, try reinstalling the/etc/qual_devices file.
4. If the reinstallation fails, remove the unqualified disk and contact NetApp Technical Support.

DISK FIRMWARE UPDATES


Disk firmware updates are included in new Data ONTAP upgrades. However, if you receive a
NetApp Field Alert recommending that you install a new version of the disk firmware, and
you are not upgrading the operating system, you can download disk firmware separately.
Download disk-drive firmware images from the NOW site at:
http://now.netapp.com/NOW/download/tools/diskfw/.
NOTE: From the disk firmware download page on the NOW site, you can download either
an individual disk firmware image or a tarball that contains all the current disk firmware
images.
To install disk firmware images from the tarball, complete the following steps:
1. Download either the all.gz or all.zip tarball file (depending on which format you choose).
2. From your client, mount (or map) to the /vol/vol0 root volume of the storage system. Next,
change directory to the /etc/disk_fw directory of the storage system.
3. If the format for the tarball file is all.zip (for Windows clients), then run an unzipping program
like WinZip to uncompress and extract disk firmware files.
If the format for the tarball file is all.gz (for UNIX clients), then run gunzip all.gz to
create a tar file named all, and then use tar xvf all to extract all of the disk firmware
images.

The /etc/disk_fw directory should now contain all the current disk firmware images.

Disk Ownership

Disks must be assigned to (owned by) a controller.
Software Disk Ownership (ownership is assigned):
• FAS270
• FAS3000 series
• FAS2000 series
• FAS6000 series

Hardware Disk Ownership (ownership is based on the slot used):
• R200
• FAS250
• FAS3000 series

© 2008 NetApp. All rights reserved. 7

DISK OWNERSHIP

Disk ownership can be hardware-based or software-based.

HARDWARE-BASED OWNERSHIP
In hardware-based disk ownership, disk ownership and pool membership are determined by
the slot position of the HBA or onboard port and by the shelf module port to which the HBA
is connected.

SOFTWARE-BASED OWNERSHIP
In software-based disk ownership, disk ownership and pool membership are determined by the
storage system administrator. Data ONTAP might also set disk ownership and pool
membership automatically, depending on the initial configuration. Slot position and shelf
module port do not affect disk ownership.

Disk Ownership
system> sysconfig -r
Volume vol0 (online, normal) (block checksums)
  Plex /vol0/plex0 (online, normal, active)
    RAID group /vol0/plex0/rg0 (normal)

RAID Disk  Device  HA  SHELF  BAY  CHAN  Used (M...
parity     3a.16   4a  1      0    FC:A  17000/...
data       3b.17   4a  1      1    FC:A  17000/...

Disk ID = Loop_id.Device_id


DEVICE OWNERSHIP

DISK ID
Disks are numbered in all storage systems. Disk numbering allows you to:
• Interpret messages displayed on your screen such as command output or error messages
• Quickly locate a disk associated with a displayed message
To determine a disk ID, use the sysconfig -r, vol status -r, or aggr status -r
commands.
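As a quick sketch of how the two parts of a disk ID fit together, the following Python helper (an illustration of ours, not a Data ONTAP tool) splits an ID such as 3a.16 into its loop and device components:

```python
def parse_disk_id(disk_id):
    """Split a disk ID such as '3a.16' into (loop_id, device_id).

    The loop ID (for example, '3a') names the adapter slot and port;
    the device ID is the disk's logical position on that loop.
    """
    loop_id, device_id = disk_id.split(".")
    return loop_id, int(device_id)

print(parse_disk_id("3a.16"))  # ('3a', 16)
```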

Disk Ownership: Loop_id

The loop_id is the designation for the slot and port where an adapter is located. In the
illustration on the slide, the loop_id is 3b.

[Figure: rear view of a storage controller chassis, showing PCI slots 1 through 4, onboard
FC ports 0a–0d, Ethernet ports e0a–e0d, RLM, console, and LVD SCSI (0e) ports. The adapter
in slot 3, port b is labeled 3b.]

DEVICE OWNERSHIP: LOOP_ID

Disks are numbered based on a combination of loop_id and device_id, represented as
Loop_ID.Device_ID, as shown in the graphic.
Loop ID is the adapter number associated with the disk, while Device ID is the logical loop
ID of the disk.
In the figure above, the disk with the ID 3a is on the loop connected to host adapter port A,
slot 3.

Disk Ownership: Device_id

Shelf bays are numbered 13 through 0 (Bay Number) in the illustration.

Shelf ID   Bay Number   Device ID
1          13–0         29–16
2          13–0         45–32
3          13–0         61–48
4          13–0         77–64
5          13–0         93–80
6          13–0         109–96
7          13–0         125–112


DEVICE OWNERSHIP: DEVICE_ID

FC LOOP IDS
The table above shows the numbering system for FC loop IDs.
For DS14 Series shelves, the following IDs are reserved (not used): 0-15, 30-31, 46-47, 62-
63, 78-79, 94-95, 110-111.
The numbering system in the table above can be summarized by the following formula:
DS14 Disk/Loop ID = DS14 Shelf ID * 16 + Bay Number
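The formula can be checked with a small sketch (the function name is ours, not a Data ONTAP command):

```python
# Device IDs skipped on DS14 loops (reserved, per the list above)
RESERVED_IDS = set(range(16)) | {30, 31, 46, 47, 62, 63,
                                 78, 79, 94, 95, 110, 111}

def ds14_device_id(shelf_id, bay_number):
    """DS14 device ID = shelf ID * 16 + bay number (shelves 1-7, bays 0-13)."""
    if not (1 <= shelf_id <= 7) or not (0 <= bay_number <= 13):
        raise ValueError("DS14 shelves are 1-7, bays are 0-13")
    return shelf_id * 16 + bay_number

# Shelf 2: bays 0-13 map to device IDs 32-45, matching the table row 45-32
assert ds14_device_id(2, 0) == 32
assert ds14_device_id(2, 13) == 45
# No computed device ID ever lands on a reserved ID
assert all(ds14_device_id(s, b) not in RESERVED_IDS
           for s in range(1, 8) for b in range(14))
```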

The fcstat device_map Command

• Use the fcstat command to troubleshoot disks and shelves.
• Use the fcstat device_map command to display a map of the relative physical
positions of drives on an FC loop.


THE FCSTAT DEVICE_MAP COMMAND

An FC loop is a logically closed loop from a frame transmission perspective. Consequently,
signal integrity problems caused by a component upstream will be seen as problem symptoms
by components downstream. The relative physical position of drives on a loop is not
necessarily related directly to their loop IDs (which are, in turn, determined by the drive shelf
IDs). To determine the relative physical positions of drives on a loop, use the device_map
subcommand.
The device_map subcommand displays:
• The relative physical position of drives on the loop as if the loop was one flat space
• The mapping of devices to shelves, which allows you to quickly correlate disk IDs with shelf
tenancy

Matching Disk Speeds

• When creating an aggregate or traditional volume, Data ONTAP selects disks:
  – With the same speed
  – That match the speed of existing disks
• Data ONTAP verifies that adequate spares are available


MATCHING DISK SPEEDS

If disks with different speeds are present on a NetApp system (for example, 10,000 RPM and
15,000 RPM disks), Data ONTAP attempts to avoid mixing them in one aggregate or
traditional volume.
By default, Data ONTAP selects disks:
• With the same speed when creating an aggregate or traditional volume in response to the following
commands:
• aggr create
• vol create
• That match the speed of existing disks in the aggregate or traditional volume that requires
expansion or mirroring in response to the following commands:
• aggr add
• aggr mirror
• vol add
• vol mirror

If you use the -d option to specify a list of disks for commands that add disks, the operation
fails if disk speeds differ from each other or differ from the speed of disks already included in
the aggregate or traditional volume. The commands for which the -d option will fail in this
case are aggr create, aggr add, aggr mirror, vol create, vol add, and vol
mirror. For example, if you enter aggr create vol4 -d 9b.25 9b.26 9b.27 and two
of the disks are different speeds, the operation fails.

When using the aggr create or vol create commands, you can use the -R rpm option to
specify the type of disk used based on its speed. The -R rpm option is necessary only for
systems that contain disks with different speeds. Typical values for rpm are 5400, 7200,
10000, and 15000. The -R option cannot be used with the -d option.
If you are going to specify a disk speed and you are not sure of its actual speed, use the
sysconfig -r command first to determine the actual disk speed.
NOTE: It is possible to use the -f option to override the RPM check, but NetApp does not
recommend this practice. Using the -f option in this situation can produce an aggregate or
traditional volume that does not meet performance expectations.
Data ONTAP periodically checks to see if adequate spares are available for the storage
system. Only disks with matching speeds are considered acceptable spares. However, if a disk
fails and a spare with matching speed is not available, Data ONTAP may use a spare with a
different speed for RAID reconstruction.
NOTE: If an aggregate or traditional volume includes disks with different speeds, and
adequate spares are present, you can use the disk replace command to replace
mismatched disks. Data ONTAP uses Rapid RAID Recovery to copy these disks to more
appropriate replacements.
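The spare-selection behavior described above can be sketched as follows. This is an illustrative model of ours, not Data ONTAP code; spares are represented as hypothetical (disk name, RPM) pairs:

```python
def choose_spare(spares, failed_disk_rpm):
    """Prefer a spare whose speed matches the failed disk.

    If no matching-speed spare exists, fall back to any available spare,
    as Data ONTAP may do during RAID reconstruction.
    """
    matching = [disk for disk, rpm in spares if rpm == failed_disk_rpm]
    if matching:
        return matching[0]
    return spares[0][0] if spares else None

spares = [("9b.25", 10000), ("9b.26", 15000)]
assert choose_spare(spares, 15000) == "9b.26"              # matching speed preferred
assert choose_spare([("9b.25", 10000)], 15000) == "9b.25"  # fallback to mismatched spare
```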

Using Multiple Disk Types in an Aggregate

• Drives in an aggregate can be:
  – Different speeds
  – On the same shelf or on different shelves
• Avoid mixing drive types within an aggregate
• The spares pool is global

USING MULTIPLE DISK TYPES IN AN AGGREGATE

MIXING AND MATCHING DISKS


The storage system allows the use of various disk sizes, which sometimes occurs when disks
are purchased after the original equipment is set up.
However, different-sized disks require different versions of Data ONTAP and different disk
shelves. For specific information about your system, see the System Configuration Guide on
the NOW site.
You must ensure that parity and hot spare disks are as large as the largest disk in a RAID
group so they can support all the stripes on the data disks. When creating RAID groups with
disks of different sizes, Data ONTAP assigns parity to the largest disk. If you later add larger
disks to the RAID group, Data ONTAP reassigns parity to the largest of those disks.
NOTE: While mixing disk sizes in a volume is a supported configuration, this can lead to
suboptimal volume performance. NetApp recommends that all disks in a volume be the same
size.

Spare Disks

• What is the purpose of spare disks?
  – Increase aggregate capacity
  – Replace failed disks
  – A spare disk is zeroed automatically when it is brought into use
• It is best to zero drives in the spares pool in advance, allowing Data ONTAP to use the
drives immediately:
system> disk zero spares


SPARE DISKS

ADDING SPARES
You can add spare disks to an aggregate to increase its capacity. If the spare is larger than the
other data disks, it becomes the parity disk; however, its excess capacity is not used unless
another disk of similar size is added. A second larger disk added later has full use of the
additional capacity.

REPLACING FAILED DISKS WITH SPARES


If a disk fails, a spare disk is automatically used to replace the failed disk. If a larger disk is
selected to replace a smaller disk, the excess capacity of the larger disk is not used.

ZEROING USED DISKS


After you have assigned ownership to a disk, you can add that disk to an aggregate on the
storage system that owns it, or leave it as a spare disk on that storage system. If the disk has
been used previously in another aggregate, you should zero the disk (using the disk zero
spares command) to reduce delays when the disk is used.

Sizing


SIZING

Disk Sizing

To properly provision NetApp storage systems, you must know how disk sizes are calculated:
• All disks are right-sized
• Count the size of data disks, not parity disks
  NOTE: The df command does not reflect parity disks.
• Use df -h to view the output in a human-readable format


DISK SIZING

CALCULATING USABLE AND PHYSICAL SPACE FOR A VOLUME


In a volume, usable space is different from physical capacity. To display the amount of free
(usable) disk space in one or all volumes on a storage system use the df command. The total
amount of disk space shown in the df output is less than the sum of available space on all
disks installed in the volume.

RAID GROUP PARITY DISK(S)


The df command does not display information about parity disks. If a volume is configured
for RAID-4 protection, it has one parity disk. If a volume is configured for RAID-DP
protection, it has two parity disks.
NOTE: Data ONTAP 6.5 and later supports RAID 4 and RAID-DP, which you can assign on
a per-volume basis.

Right-Sizing

Disk Type   Disk Size   Right-Sized Capacity   Available Blocks
FC/SCSI     72 GB       68 GB                  139,264,000
            144 GB      136 GB                 278,528,000
            300 GB      272 GB                 557,056,000
ATA/SATA    160 GB      136 GB                 278,258,000
            250 GB      212 GB                 434,176,000
            320 GB      274 GB                 561,971,200
            500 GB      423 GB                 866,531,584

NOTE: ATA drives have only 512 bytes per sector, so one of every nine sectors is used for
block checksum allocation. This costs 1/9 of raw capacity (one checksum sector for every
eight data sectors, or 12.5% of the data sectors).


RIGHT-SIZING

Some disks have slightly more capacity, depending on the manufacturer and model. Disk
drives in the same size category that are made by different manufacturers can differ slightly in
size. Data ONTAP "right sizes" these disks and makes all usable disk space the same. Right-
sizing ensures that disks are compatible regardless of manufacturer.
When you add a new disk, Data ONTAP reduces the amount of space available for user data
on that disk by rounding down. This maintains compatibility across disks from different
manufacturers. This means that the available disk space displayed using an informational
command such as sysconfig is less than each disk’s rated capacity. The table above,
reprinted from the Storage Management Guide, shows how Data ONTAP rounds down
available disk space.
NOTE: Existing disks in an upgraded system are not automatically right-sized. Right-sizing
is applied only to disks that are added to the storage system. To compare physical space and
usable space, and to determine if disks are right-sized, use sysconfig -r.

FC VERSUS ATA
FC drives have 520 bytes per sector, while ATA drives have 512 bytes per sector. When data
is written to an FC disk, the checksum can be saved within the sector. For ATA drives,
however, the checksum must be stored in separate sectors. Because of this, a disk with 512
bytes per sector has only 8/9 of the usable space of an equivalent disk with 520 bytes per
sector: one checksum sector for every eight data sectors, an overhead of 12.5% relative to the
data sectors.
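The 8/9 arithmetic can be verified directly. A sketch, with illustrative sector counts:

```python
def ata_usable_bytes(total_sectors, sector_size=512):
    """With block checksums on 512-byte-sector ATA drives, every ninth
    sector stores checksums for the previous eight data sectors."""
    checksum_sectors = total_sectors // 9
    return (total_sectors - checksum_sectors) * sector_size

# 8 of every 9 sectors hold data...
assert ata_usable_bytes(900) == 800 * 512
# ...so the overhead is 1/9 of raw capacity, or 12.5% of the data sectors
assert (900 // 9) / 800 == 0.125
```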

Usable Disk Space

• When disks are right-sized, 10% of the space is reserved for WAFL. This reserved space is:
  – Used for system and core usage to maximize disk efficiency
  – Similar to reserved space in other operating systems (for example, the UNIX FFS)
• The space that remains after right-sizing is usable disk space, which can be used for either:
  – Traditional volumes
  – Aggregates with flexible volumes

USABLE DISK SPACE

Similar to the UNIX FFS (Fast File System), the storage system reserves 10% of its total disk
space for efficiency. The df command does not count this 10% as part of the file system
space.

Disk Space Allocation: Aggregates
• Aggregates with a traditional volume: each aggregate has 10% allocated for WAFL.
• Traditional volumes: each volume has 20% allocated for Snapshot reserve. The remainder
is used for client data.
• Snapshot reserve: the amount of space allocated for Snapshot reserve is adjustable. To use
this space for data (not recommended), you must manually override the 20% allocation used
for Snapshot copies.

[Figure: of the total aggregate space, 10% is WAFL overhead and 90% is WAFL aggregate
space; of that, 80% is available for data and 20% is the adjustable Snapshot reserve.]


DISK SPACE ALLOCATION: AGGREGATES

DISK SPACE ALLOCATION FOR TRADITIONAL VOLUMES

AGGREGATES
The size of an aggregate depends on the number and size of disks allocated to it. In an
aggregate, 10% is allocated for WAFL.

TRADITIONAL VOLUMES
An aggregate can include only one traditional volume. A traditional volume has 20%
allocated for Snapshot reserve, with no aggregate overhead.

SNAPSHOT RESERVE
Like flexible volumes, the space used for the Snapshot reserve in a traditional volume can be
expanded into user space as required by the system. This expansion could occur, for example,
if numerous changes are made to the active file system. If necessary, the Snapshot reserve
expands into user space as Snapshot copies are made, regardless of the designated Snapshot
reserve percentage.
You can manually reallocate disk space using the snap reserve command. However,
unless you specifically readjust the user data space on a volume, it will never exceed 80% of
the usable disk space.
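Putting the default percentages together, usable data space in a traditional volume can be estimated as follows. A sketch of ours based on the allocations above; actual Data ONTAP accounting differs in details:

```python
def trad_vol_usable_gb(right_sized_gb, wafl_reserve=0.10, snap_reserve=0.20):
    """Estimate usable data space in a traditional volume.

    10% of right-sized capacity goes to WAFL reserve; by default, 20% of
    the remainder is Snapshot reserve, leaving 72% for user data.
    """
    return right_sized_gb * (1 - wafl_reserve) * (1 - snap_reserve)

# 1,000 GB of right-sized disk space leaves roughly 720 GB for data
assert abs(trad_vol_usable_gb(1000) - 720) < 1e-9
```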

Disk Space Allocation: Flexible Volumes
• Aggregates: each aggregate has 5% allocated for Snapshot reserve and 10% allocated for
WAFL.
• Flexible volumes: each volume has 20% allocated for Snapshot reserve. The remainder is
used for client data.
• Snapshot reserve: the amount of space allocated for Snapshot reserve is adjustable. To use
this space for data (not recommended), you must manually override the allocation used for
Snapshot copies.

[Figure: of the total aggregate space, 10% is WAFL overhead, 5% is the adjustable aggregate
Snapshot reserve, and the remainder holds the FlexVol volumes; each FlexVol (FlexVol1
through FlexVol#n) allocates 80% for data and 20% for its .snapshot reserve.]

DISK SPACE ALLOCATION: FLEXIBLE VOLUMES

AGGREGATES
The size of an aggregate depends on the number and size of disks allocated to it. Five percent
of the aggregate is allocated as Snapshot reserve for aggregate Snapshot copies, while 10%
of the aggregate is allocated for WAFL.

FLEXIBLE VOLUMES
An aggregate can include more than one flexible volume. However, each flexible volume
allocates 20% for Snapshot reserve. To use the Snapshot reserve space for data (not
recommended), you must manually override the allocation used for Snapshot copies, allowing
the remainder to be used for client data.

SNAPSHOT RESERVE
The Snapshot reserve for aggregates does not automatically expand into the WAFL aggregate
space. When space is needed for Snapshot copies, by default, the older aggregate Snapshot is
deleted to accommodate a new Snapshot. You can adjust the Snapshot reserve size in an
aggregate using the snap reserve –A command.
In volumes, the space used for the Snapshot reserve expands into user space as required by
the system. This expansion could occur, for example, if numerous changes are made to the
active file system. If necessary, the Snapshot reserve expands into the user space as Snapshot
copies are taken, regardless of the designated Snapshot reserve percentage. You can manually
reallocate disk space using the snap reserve command.
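Using the default percentages shown on the slide, the same estimate for a flexible volume adds the 5% aggregate Snapshot reserve. Again, this is an illustrative sketch of ours, not Data ONTAP's exact accounting:

```python
def flexvol_usable_gb(right_sized_gb, wafl=0.10, aggr_snap=0.05, vol_snap=0.20):
    """Estimate usable data space in a flexible volume: 10% WAFL reserve,
    then 5% aggregate Snapshot reserve, then 20% volume Snapshot reserve."""
    return right_sized_gb * (1 - wafl) * (1 - aggr_snap) * (1 - vol_snap)

# 1,000 GB right-sized -> 900 GB after WAFL -> 855 GB after the aggregate
# Snapshot reserve -> roughly 684 GB of data space at default settings
assert abs(flexvol_usable_gb(1000) - 684) < 1e-9
```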

Disk Protection


DISK PROTECTION

Disk Protection

• Data ONTAP protects disks through:
  – RAID
  – Disk scrubbing
• Data ONTAP can assist in recovering from disk failures


DISK PROTECTION

RAID Groups

• RAID groups are a collection of data disks and parity disks
• RAID groups provide protection through parity
• Data ONTAP organizes disks into RAID groups
• Data ONTAP supports:
  – RAID 4
  – RAID-DP™


RAID GROUPS

A RAID group includes several disks linked together in a storage system. While there are
different implementations of RAID, Data ONTAP supports only RAID 4 and RAID-DP. To
understand how to manage disks and volumes, it is important to first understand the concept
of RAID.
In Data ONTAP, each RAID 4 group consists of one parity disk and one or more data disks.
The storage system assigns the role of parity disk to the largest disk in the RAID group.
When a data disk fails, the storage system identifies the data on the failed disk and rebuilds a
hot spare with that data.
RAID-DP provides double-parity protection against a single- or double-disk failure within a
RAID group. The minimum number of disks in a RAID-DP group is three—one data disk,
one parity disk, and one double-parity (DP) disk.
NOTE: If a parity disk fails, it can be rebuilt from data on the data disks.
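The parity arithmetic behind these group layouts can be sketched simply (our illustration):

```python
def data_disks(group_size, raidtype):
    """Count the data disks in a RAID group: RAID 4 dedicates one disk
    to parity; RAID-DP dedicates two (row parity plus double parity)."""
    parity_disks = {"raid4": 1, "raid_dp": 2}[raidtype]
    if group_size <= parity_disks:
        raise ValueError("group too small to hold any data disks")
    return group_size - parity_disks

assert data_disks(3, "raid_dp") == 1   # minimum RAID-DP group
assert data_disks(2, "raid4") == 1     # minimum RAID 4 group
```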

RAID 4 Technology

• RAID 4 protects against data loss that results from a single-disk failure in a RAID group
• A RAID 4 group requires a minimum of two disks:
  – One parity disk
  – One data disk

[Figure: one parity disk followed by seven data disks.]


RAID 4 TECHNOLOGY

RAID 4 protects against data loss due to a single-disk failure within a RAID group.
Each RAID 4 group contains the following:
• One parity disk (assigned to the largest disk in the RAID group)
• One or more data disks
Using RAID 4, if one disk block goes bad, the parity disk in that disk's RAID group is used to
recalculate the data in the failed block, and then the block is mapped to a new location on the
disk. If an entire disk fails, the parity disk prevents any data from being lost. When the failed
disk is replaced, the parity disk is used to automatically recalculate its contents. This is
sometimes referred to as row parity.
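Row parity is a bytewise XOR across the data disks, which is why any one lost disk can be recomputed from the survivors. A minimal sketch with tiny illustrative blocks:

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks (RAID 4 row parity)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"\x01\x02\x03", b"\x04\x05\x06", b"\x07\x08\x09"]
parity = xor_blocks(data)

# "Lose" the second disk, then rebuild it from parity plus the survivors
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]
```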

RAID-DP Technology
• RAID-DP protects against data loss that results from double-disk failures in a RAID group
• A RAID-DP group requires a minimum of three disks:
  – One parity disk
  – One double-parity disk
  – One data disk

[Figure: one parity disk, one double-parity disk, and six data disks.]


RAID-DP TECHNOLOGY

RAID-DP technology protects against data loss due to a double-disk failure within a RAID
group.
Each RAID-DP group contains the following:
• One data disk
• One parity disk
• One double-parity disk
RAID-DP employs the traditional RAID 4 horizontal row parity. However, in RAID-DP, a
diagonal parity stripe is calculated and committed to the disks when the row parity is written.
For more information about RAID-DP processes, see Technical Report 3298 at
http://www.netapp.com/library/tr/3298.pdf.

RAID Group Size

RAID-DP
NetApp Platform                         Minimum Group Size   Maximum Group Size   Default Group Size
All storage systems (with SATA disks)   3                    16                   14
All storage systems (with FC disks)     3                    28                   16

RAID 4
NetApp Platform                         Minimum Group Size   Maximum Group Size   Default Group Size
FAS270                                  2                    14                   7
All other storage systems (with SATA)   2                    7                    7
All other storage systems (with FC)     2                    14                   8


RAID GROUP SIZE

RAID groups can include anywhere from 2 to 28 disks, depending on the platform and RAID
type. For best performance and reliability, NetApp recommends using the default RAID
group size.
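One way to see how a disk count maps onto RAID groups: groups are filled up to the configured size, so the last group may be partial. This is a simplified model of ours, not Data ONTAP's exact layout logic:

```python
def raid_group_sizes(n_disks, group_size=16):
    """Fill RAID groups to group_size; any remainder forms a final,
    smaller group (simplified sketch of aggregate layout)."""
    full, remainder = divmod(n_disks, group_size)
    return [group_size] * full + ([remainder] if remainder else [])

# 42 disks at the FC RAID-DP default of 16 yields two full groups and one
# partial group
assert raid_group_sizes(42, 16) == [16, 16, 10]
assert raid_group_sizes(32, 16) == [16, 16]
```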

Data Reliability

• RAID-level checksums enhance data protection and reliability
• Two processes:
  – options raid.media_scrub
    • Checks for media errors only
    • If enabled, runs continuously in the background
  – options raid.scrub (also called “disk scrubbing”)
    • Checks for media errors
    • Corrects parity consistency

DATA RELIABILITY

Media scrubbing checks disk blocks for physical errors. Disk scrubbing checks disk blocks on
all disks in the storage system for media errors and logical parity errors.
If Data ONTAP identifies media errors or inconsistencies, it repairs them by reconstructing
the data from parity data, and then rewriting the data back to the data disk. Disk scrubbing
reduces the chance of data loss from media errors that occur during reconstruction.

RAID Checksums

• Zone Checksums (ZCS)
  – Eight 512-byte sectors (4,096 bytes) per block
  – Every 64th block checksums the previous 63
  – WAFL never uses these checksum blocks; RAID does
  – Available for V-Series
• Block Checksums (BCS)
  – Eight 512-byte sectors (4,096 bytes) per block
  – Every 64th block checksums the previous 63
  – Every sector checksums itself
  – Faster than ZCS
  – The standard for FC, SCSI, and V-Series disks
  – 8/9ths for ATA disks; every ninth sector checksums the previous eight

RAID CHECKSUMS
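The every-64th-block layout on the slide can be sketched as an index calculation (illustrative only):

```python
def checksum_block_indices(n_blocks):
    """With zone checksums, every 64th 4,096-byte block holds checksums
    for the previous 63 data blocks."""
    return [i for i in range(n_blocks) if i % 64 == 63]

indices = checksum_block_indices(128)
assert indices == [63, 127]
# 63 of every 64 blocks remain available for data
assert (128 - len(indices)) / 128 == 63 / 64
```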

Comparing Media and RAID Scrubs
A media scrub:
• Is always running in the background when the storage system is not busy
• Looks for unreadable blocks at the lowest level (0s and 1s)
• Is unaware of the data stored in a block
• Takes corrective action when it finds too many unreadable blocks on a disk (sends
warnings or fails a disk, depending on findings)

A RAID scrub:
• Is enabled by default
• Can be scheduled or disabled
  – Disabling is not recommended
• Uses RAID checksums
• Reads a block and then checks the data
• If the RAID scrub finds a discrepancy between the RAID checksum and the data read, it
re-creates the data from parity and writes it back to the block
• Ensures that data has not become stale by reading every block in an aggregate, even when
users haven’t accessed the data


COMPARING MEDIA AND RAID SCRUBS

DISK SCRUB
Storage systems use disk scrubbing to protect data from media errors or bad sectors on a disk.
Each disk in a RAID group is scanned for errors. If errors are identified, they are repaired by
reconstructing data from parity and rewriting the data. Without this process, a disk media
error could cause a multiple disk failure while running in degraded mode.
Automatic RAID scrub is enabled by default. If you prefer to control the timing of RAID
scrubs, you can turn off the automatic scrubs. You can also manually start and stop disk
scrubbing regardless of the current value (on or off) of the raid.scrub.enable option.

ERROR MESSAGE
    Inconsistent parity on volume volume_name, RAID group n, stripe #n. Rewriting bad
    parity block on volume volume_name, RAID group n.
CAUSE
    Inconsistent parity block

ERROR MESSAGE
    Rewriting bad parity block on volume volume_name, RAID group n, stripe #n.
CAUSE
    Media error on the parity disk or a data disk

ERROR MESSAGE
    Multiple bad blocks found on volume volume_name, RAID group n, stripe #n.
CAUSE
    More than one bad block

ERROR MESSAGE
    Scrub found n parity inconsistencies. Scrub found n media errors. Disk scrubbing
    finished.
CAUSE
    Disk scrubbing complete

About Disk Scrubbing

• Automatic RAID scrub:
  – By default, begins at 1 a.m. on Sundays
  – Schedule can be changed by an administrator
  – Duration can be specified by an administrator
• Manual RAID scrub overrides automatic settings
  – To scrub disks manually:
    options raid.scrub.enable off
    And then:
    aggr scrub start
  – To view scrub status:
    aggr scrub status aggr_name


ABOUT DISK SCRUBBING

RAID Group Options

options raid.timeout

options raid.reconstruct.perf_impact

options raid.scrub.enable

options raid.scrub.perf_impact

vol options <volname> raidtype

aggr options <aggrname> raidtype

NOTE: For a complete list of RAID options, see your product documentation.


RAID GROUP OPTIONS

options raid.timeout 36
    Changes the amount of time the system will operate in degraded mode from the default
    (24 hours) to 36 hours.
options raid.reconstruct.perf_impact low
    Changes the amount of system resources allocated to reconstruction of data from the
    default (medium) to low (runs when nothing else is running). Can also be set to high
    (runs except when CPIO is running).
options raid.scrub.enable on
    Enables RAID scrub to occur automatically at 1 a.m. on Sundays.
options raid.scrub.perf_impact low
    Sets scrub performance impact to low. Can also be set to medium or high.
vol options vol0 raidtype raid_dp
    Changes the RAID type of RAID groups for vol0 to RAID-DP. Default RAID type is
    RAID 4.
aggr options aggr0 raidtype raid4
    Changes the RAID type of RAID groups for aggr0 to RAID 4. Default RAID type is
    RAID-DP.

disk Commands

• disk fail diskname
• disk remove diskname
• disk swap
• disk unswap
• disk replace [start|stop]
• disk zero spares
• disk scrub [start|stop]
• disk sanitize

© 2008 NetApp. All rights reserved. 32

DISK COMMANDS

EXAMPLE                    RESULT
disk fail 4a.16            Fails the file system disk, 4a.16.
disk remove 4a.17          Removes the spare disk, 4a.17.
disk swap                  Prepares (quiets) the external SCSI bus for a swap (not required for FC-AL loops).
disk unswap                Undoes disk swap (not required for FC-AL loops).
disk scrub stop            Stops disk scrubbing.
disk replace start 4a.16   Replaces disk 4a.16 with a hot spare.
disk zero spares           Zeros all unzeroed RAID spare disks.
disk sanitize start 4a.18  Starts removal of all disk data by overwriting disk 4a.18 several times.

6-34 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Disk Failures

[Figure: Volume 1 contains RAID Group 0, made up of one parity disk and three data disks; hot-spare disks sit outside the RAID group. Spares are global.]

© 2008 NetApp. All rights reserved. 33

DISK FAILURES

If you have a disk failure, you can use the sysconfig -r command to determine which disk
has failed. You can also obtain the same information using the vol status –r or aggr
status –r commands.
Ideally, your system is equipped with appropriate hot-spare disks. In the figure above, hot
spares are part of the storage system, but are not part of a RAID group. When a disk fails in
this configuration, the storage system automatically rebuilds data or parity on an available hot
spare disk.

REPLACING DISKS
In addition to using hot spares, you can also replace a failed disk by hot swapping it, which
means that the disk is removed or installed while the storage system is running. Hot swapping
allows new disks to be added with minimal interruption to a file system.
If two disks are removed at the same time from a RAID 4 group, a double-disk failure occurs
and data loss results. If the volume uses RAID-DP, the data is protected.
If a volume contains more than one RAID 4 group, two disks in a volume can fail as long as
the disks are not in the same RAID group.

6-35 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Degraded Mode
• Degraded mode occurs when:
  – A single disk fails in a RAID 4 group with no spares
  – Two disks fail in a RAID-DP group with no spares
• Degraded mode operates for 24 hours, during which time:
  – Data is still available
  – Performance is less than optimal
• Data must be recalculated from parity until the failed disk is replaced
• CPU usage increases to calculate from parity
• System shuts down after 24 hours
• To change the time interval, use the options raid.timeout command
• If an additional disk in the RAID group fails during degraded mode, the result is data loss
© 2008 NetApp. All rights reserved. 34

DEGRADED MODE

If you experience a single-disk failure in a RAID 4 group or a double-disk failure in a
RAID-DP group and have no hot spares available, the system operates in “degraded” mode for
a specified length of time (default is 24 hours). If access to the system is limited for a
specified period, such as a long weekend, you might consider setting this to a longer time
interval using the options raid.timeout command.
NOTE: Because a longer timeout interval increases the risk of a second disk failure (and
resulting data loss), consider carefully the interval for degraded mode before changing it from
the default.
While the system does not perform optimally in degraded mode, no data is lost. However, if
the failed disk is not replaced during this interval, the system shuts down.
If a second disk in a RAID 4 group fails while the system is operating in degraded mode, it
constitutes a double-disk failure and data loss results. (In a RAID-DP group, no data loss
occurs unless a third disk fails.) For this reason, failed disks and used hot-spare disks should
be replaced as soon as possible.
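The degraded-mode behavior described above rests on RAID parity arithmetic: a missing disk's block is the byte-wise XOR of the surviving data blocks and the parity block. A minimal, generic RAID 4-style sketch in Python (illustrative only, not NetApp code):

```python
from functools import reduce

def parity(blocks):
    """Compute a RAID 4-style parity block as the byte-wise XOR of data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def reconstruct(surviving_blocks, parity_block):
    """Recover a lost block: XOR the parity block with all surviving data blocks."""
    return parity(surviving_blocks + [parity_block])

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks from three data disks
p = parity(data)                      # block on the parity disk

# Disk 1 fails; its block is rebuilt from the survivors plus parity.
rebuilt = reconstruct([data[0], data[2]], p)
assert rebuilt == data[1]
```

This XOR is exactly the recalculation that raises CPU usage during degraded mode: every read of the failed disk triggers a reconstruction like the one above.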

6-36 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Replacing a Failed Disk by Hot Swapping

• Hot swapping is the process of removing or installing a disk drive while the system is running, and allows for:
  – Minimal interruption
  – The addition of new disks as needed
• Removing two disks from a RAID 4 group:
  – Double-disk failure
  – Data loss will occur
• Removing two disks from a RAID-DP group:
  – Degraded mode
  – No data loss
© 2008 NetApp. All rights reserved. 35

REPLACING A FAILED DISK BY HOT SWAPPING

6-37 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Replacing Failed Disks

[Figure: A failed 750 GB disk in a group of 750 GB disks is replaced by a 1 TB disk.]

NOTE: Disk right-sizing occurs if a smaller disk is replaced by a larger one.

© 2008 NetApp. All rights reserved. 36

REPLACING FAILED DISKS

When replacing a failed disk, the size of the new disk must be equal to or larger than the
usable space of the replaced disk to accommodate all the data blocks on the failed disk.
If the usable space on the replacement disk is larger than the failed disk, the replacement disk
is right-sized to the capacity of the failed disk. The extra space on the disk is not usable.
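The right-sizing rule above can be captured in a few lines. This is an illustrative sketch; `usable_capacity` is a hypothetical helper, not a Data ONTAP function:

```python
def usable_capacity(failed_gb, replacement_gb):
    """Right-sizing sketch: a replacement disk must be at least as large as
    the failed disk; any extra space is trimmed to the failed disk's size."""
    if replacement_gb < failed_gb:
        raise ValueError("replacement disk is smaller than the failed disk")
    return failed_gb  # extra space on a larger replacement is not usable

# A 1 TB replacement for a failed 750 GB disk is right-sized to 750 GB.
assert usable_capacity(750, 1000) == 750
```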

6-38 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Aggregates

© 2008 NetApp. All rights reserved. 37

AGGREGATES

6-39 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Aggregates

Aggregates represent physical storage:
• Made up of one or more RAID groups
• A RAID group includes:
  – One or more data disks
  – Parity disks:
    • RAID 4 has only one parity disk
    • RAID-DP has two parity disks
• Data is striped for parity protection
• A flexible volume depends on an aggregate for physical storage

© 2008 NetApp. All rights reserved. 38

AGGREGATES

To support the differing security, backup, performance, and data-sharing requirements of
users, physical data storage resources on your storage system can be grouped into one or
more aggregates. Aggregates provide storage for the volume or volumes they contain.
Each aggregate has its own RAID configuration, plex structure, and set of assigned disks.
When you create an aggregate without an associated traditional volume, you can use it to hold
one or more FlexVol® volumes (logical file systems that share the physical storage resources,
RAID configuration, and plex structure of the aggregate container). When you create an
aggregate with a traditional volume tightly-bound, it can contain only that volume.
A single storage system supports up to 100 aggregates (including traditional volumes).

6-40 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

© 2008 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.
NetApp University - Do Not Distribute
Naming Rules for Aggregates

An aggregate name must:
– Begin with either a letter or the underscore character (_)
– Contain only letters, digits, and underscore characters (_)
– Contain no more than 255 characters

© 2008 NetApp. All rights reserved. 39

NAMING RULES FOR AGGREGATES

AGGREGATE NAMES
Aggregate names must follow the naming conventions shown above. The same rules apply to
naming volumes.

6-41 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Adding an Aggregate

• To add an aggregate using the CLI:
  aggr create
• To add an aggregate using FilerView, use the Aggregate Wizard
• When adding aggregates, you must have the following information available:
  – Aggregate name
  – Parity (RAID-DP is the default)
  – RAID group size (minimum)
  – Disk selection method
  – Disk size
  – Number of disks (including parity)
© 2008 NetApp. All rights reserved. 40

ADDING AN AGGREGATE

6-42 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Creating an Aggregate Using the CLI

The following is an example of a CLI entry used to create an aggregate:

system> aggr create aggr_name 24

• Creates an aggregate called aggr_name with 24 disks
• By default, this aggregate uses RAID-DP
• Using the default RAID group size, 4 of the 24 disks are parity drives

© 2008 NetApp. All rights reserved. 41

CREATING AN AGGREGATE USING THE CLI

EXAMPLES: CREATING AN AGGREGATE USING THE CLI

The following command creates the aggregate newaggr, with a RAID group size of 8,
consisting of disks with disk IDs 8a.16, 8a.17, 8a.18, and 8a.19:
aggr create newaggr -r 8 -d 8a.16 8a.17 8a.18 8a.19

The following command creates the aggregate newfastaggr, with 20 disks, the default RAID
group size, and all disks with 15,000 RPM:
aggr create newfastaggr -R 15000 20

The following command creates the aggregate newFCALaggr with 15 FC-AL disks:
aggr create newFCALaggr -T FCAL 15
NOTE: Because FC and SAS disks are considered to be the same type, if SAS disks are
present, they might be used.
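The parity-disk arithmetic behind the "4 of 24 disks" figure on the slide can be sketched as follows. This assumes a default RAID-DP group size of 16 disks (the actual default varies by platform and disk type):

```python
import math

def parity_disk_count(total_disks, rg_size=16, parity_per_group=2):
    """Count parity disks when total_disks are split into RAID groups of at
    most rg_size disks; RAID-DP uses two parity disks per group."""
    groups = math.ceil(total_disks / rg_size)
    return groups * parity_per_group

# 24 disks with the assumed default group size form two RAID-DP groups,
# so 4 of the 24 disks hold parity and 20 hold data.
assert parity_disk_count(24) == 4
assert 24 - parity_disk_count(24) == 20
```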

6-43 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Common Aggregate Commands
ƒ aggr create <aggrname> [options]
<disklist>
ƒ aggr add <aggrname> [options] <disklist>
ƒ aggr status <aggrname> [options]
ƒ aggr rename <aggrname> <new-aggrname>
ƒ aggr show_space [-b] <aggrname>
ƒ aggr offline {<aggrname> | <plexname>}
ƒ aggr online {<aggrname> | <plexname>}
ƒ aggr destroy {<aggrname> | <plexname>}

© 2008 NetApp. All rights reserved. 42

COMMON AGGREGATE COMMANDS

Aggregate commands are similar to vol commands except that they are performed on an
aggregate. In fact, many aggr commands work on traditional volumes, and many vol
commands work on aggregates. For a complete list of commands, see your product
documentation.

6-44 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Creating an Aggregate Using
the FilerView Aggregate Wizard

© 2008 NetApp. All rights reserved. 43

CREATING AN AGGREGATE USING THE FILERVIEW AGGREGATE


WIZARD

6-45 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Aggregate Size

In Data ONTAP versions prior to 7.3:
• Aggregate size is calculated using: sysconfig –r
• All disks in the aggregate (parity and data) are included

In Data ONTAP 7.3:
• Aggregate size is calculated using the size of data disks
• Only data disks in the aggregate are included (parity disks are excluded)

© 2008 NetApp. All rights reserved. 44

AGGREGATE SIZE

CALCULATING AGGREGATE SIZE


The sysconfig command displays configuration information about your storage system.
Without any arguments, the output includes the Data ONTAP version number and a separate
line for each I/O device on the storage system.
The –r option of the sysconfig command displays RAID configuration information. The
command prints information about all aggregates, volumes, file system disks, spare disks, and
failed disks.

6-46 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Module Summary

In this module, you should have learned that:
• NetApp supports FC-AL and SATA disk drives
• You can use either FilerView or the CLI to find disk information
• Data ONTAP organizes disks into RAID groups
• RAID groups consist of data disks and parity disks
• Degraded mode occurs when a single disk fails in a RAID 4 group with no spares, or two disks fail in a RAID-DP group with no spares

© 2008 NetApp. All rights reserved. 45

MODULE SUMMARY

6-47 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Exercise
Module 6: Physical Storage
Management
Estimated Time: 60 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

6-48 Data ONTAP® 7.3 Fundamentals: Physical Storage Management

Logical Storage



MODULE 7: LOGICAL STORAGE MANAGEMENT

Logical Storage
Management
Module 7
Data ONTAP® 7.3 Fundamentals

LOGICAL STORAGE MANAGEMENT

7-1 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Module Objectives

By the end of this module, you should be able to:
• Explain volume concepts in Data ONTAP
• Define and create traditional and flexible volumes
• Define a FlexClone® volume
• Execute vol and qtree commands

© 2008 NetApp. All rights reserved. 2

MODULE OBJECTIVES

7-2 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Storage Concepts

© 2008 NetApp. All rights reserved. 3

STORAGE CONCEPTS

7-3 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

WAFL File System

• To recap, an aggregate:
  – Is a collection of disks
  – Represents physical storage
• A flexible volume is a collection of stored data (including the directory) within an aggregate
• WAFL keeps track of:
  – The aggregate
  – The flexible volumes in the aggregate
  – All the data in the flexible volumes
• In Data ONTAP, the file system is called Write Anywhere File Layout (WAFL)
© 2008 NetApp. All rights reserved. 4

WAFL FILE SYSTEM

7-4 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Volumes

• Volumes represent logical storage
  – Data ONTAP allows up to 500 volumes per storage system
  – Volumes are accessible through supported protocols
• Data ONTAP provides two types of volumes:
  – Traditional
  – Flexible

© 2008 NetApp. All rights reserved. 5

VOLUMES

Volumes are file systems that contain user data accessible through one or more access
protocols supported by Data ONTAP, including NFS, CIFS, HTTP, WebDAV, FTP, FCP,
and iSCSI. To maintain multiple, space-efficient, point-in-time data images for the purpose of
backup and recovery, you can create one or more Snapshot copies of the data in a volume.
Data ONTAP limits a storage system to 100 aggregates, but within those aggregates you can
create up to 500 traditional and flexible volumes.

7-5 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Naming Rules for Volumes

A volume name must:
– Begin with either a letter or the underscore character (_)
– Contain only letters, digits, and underscore characters (_)
– Contain no more than 255 characters

© 2008 NetApp. All rights reserved. 6

NAMING RULES FOR VOLUMES

7-6 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Root Volumes

ƒ One root volume per storage system


ƒ Root volume /etc directory is used at boot
ƒ All volume path names begin with /vol
– The /vol path name is not a directory
– The /vol is not accessible
– You must mount each volume separately
ƒ Root volume can be either traditional or flexible

© 2008 NetApp. All rights reserved. 7

ROOT VOLUMES

The storage system contains a root volume that was created when the system was initially set
up. The default root volume name is /vol/vol0.
Storage systems with Data ONTAP 7.0 or later preinstalled have a FlexVol volume for a root
volume. Systems running earlier versions of Data ONTAP have a traditional root volume.

7-7 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Root Volumes: Example

/vol           Special virtual root path
  /vol0        Root volume
    /etc       Directory (configuration information)
  /users       Volume
    /cheryl    Directory

© 2008 NetApp. All rights reserved. 8

ROOT VOLUMES: EXAMPLE

Each storage system has only one root volume, although the designated root volume can be
changed. The root volume is used to start up the storage system. It is the only volume with
root attributes, meaning that its /etc directory is used for configuration information.
Volume path names begin with /vol. For example:
/vol/vol0
where vol0 is the name of the volume
/vol/users/cheryl
where cheryl is a directory on the users volume
NOTE: The /vol path is not a directory. It is a special virtual root path that the storage
system uses to mount other directories. You cannot mount /vol to view all of the volumes on
the storage system. You must mount each volume separately.

7-8 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Traditional Volumes

© 2008 NetApp. All rights reserved. 9

TRADITIONAL VOLUMES

7-9 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Traditional Volumes

• A traditional volume is:
  – Contained by a single, dedicated aggregate
  – Independent from other aggregates
• Disks are dedicated to a volume
• You can increase volume size by adding disks
• Disks are organized into RAID groups

© 2008 NetApp. All rights reserved. 10

TRADITIONAL VOLUMES

A traditional volume is a volume contained by a single, dedicated aggregate that is tightly
coupled with the container aggregate. The only way to grow a traditional volume is to add
entire disks to the aggregate container. It is impossible to decrease the size of a traditional
volume.
All volumes created with Data ONTAP versions earlier than 7.0 are traditional volumes. If
you upgrade to Data ONTAP 7.0, your volumes and data remain unchanged, and the
commands used to manage volumes and data are still supported.

7-10 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Aggregates and FlexVol Volumes

In FlexVol® volumes:
• The primary unit of data storage and management is still the WAFL volume
• The aggregate contains the physical storage
• Volumes are no longer tied to physical storage
• There can be multiple FlexVol volumes per aggregate
• Storage space can be dynamically reallocated

[Figure: Multiple FlexVol volumes drawing space from a single aggregate built from RAID groups RG1, RG2, and RG3.]
© 2008 NetApp. All rights reserved. 13

AGGREGATES AND FLEXVOL VOLUMES

Flexible volumes are logical data containers that can be sized, resized, managed, and moved
independently from the underlying physical storage without disrupting normal operations.
As shown in the figure above, an aggregate is defined as a pool of many disks from which
space is allocated to volumes (volumes are shown as FlexVol and FlexClone entities). From
an administrator’s perspective, volumes remain the primary unit of data management. But
transparently to the administrator, flexible volumes now refer to logical entities, not (directly)
to physical storage.
Flexible volumes are volumes that are no longer bound by the limitations of the disks on
which they reside. A FlexVol volume is simply a “pool” of storage that can be sized based on
how much data you want to store in the volume, not the physical disk capacity. You can
increase or decrease a FlexVol volume on the fly without any downtime. In a flexible
volume, all spindles in the aggregate are available at all times. Flexible volumes can run
I/O-bound applications much faster than traditional volumes of the same size.
Flexible volumes provide these additional benefits while preserving the familiar semantics of
volumes and the current set of volume-specific data management and space-allocation
capabilities.

7-11 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

How Aggregates and FlexVol Volumes Work

• Create the aggregate
  – RAID groups are created as a result
• Create FlexVol 1
  – Only metadata space is used
  – There is no preallocation of blocks to a specific volume
• Create FlexVol 2
  – WAFL allocates aggregate space as data is written
• Populate the volumes

[Figure: FlexVol volumes vol1, vol2, and vol3 sharing one aggregate built from RAID groups RG1, RG2, and RG3.]

© 2008 NetApp. All rights reserved. 14

HOW AGGREGATES AND FLEXVOL VOLUMES WORK

An aggregate is a representation of physical storage space provided by a combination of
RAID groups. When a flexible volume is created, some space is used for the metadata
associated with the volume, but no other physical space in the aggregate is used. Aggregate
space is used as data is written to the flexible volume.
Metadata, also called metainformation, is data about data. In a file system, metadata is data
that describes the file system, such as the location of a file’s data blocks, and file information
such as owner, permissions, creation and modification dates, and so on.

7-12 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Aggregates and FlexVol Volume
Components
• FlexVol volumes are logical storage containers that:
  – Can grow or shrink nondisruptively
  – Can be just a few MB in size or as large as (or larger than) the aggregate
  – Use physical storage space the way qtrees do
  – Preserve all other volume-level properties
• Aggregates are a physical storage pool

© 2008 NetApp. All rights reserved. 15

AGGREGATES AND FLEXVOL VOLUME COMPONENTS

FLEXIBLE VOLUMES
A flexible volume (also called a FlexVol volume) is a volume that is loosely coupled to its
container aggregate. Because the volume is managed separately from the aggregate, you can
create small FlexVol volumes (20 MB or larger), and then increase or decrease the size of
FlexVol volumes in increments as small as 4 kB.
Advantages of flexible volumes:
• You can create flexible volumes almost instantaneously. These volumes:
• Can be as small as 20 MB
• Are limited to aggregate capacity (if guaranteed)
• Can be as large as the volume capacity supported for your storage system (not guaranteed)
• You can increase and decrease a flexible volume while online, allowing you to:
• Resize without disruption
• Size in any increment (as small as 4 kB)
• Size quickly
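The sizing constraints above (20 MB minimum, 4 kB increments) can be expressed as a small sketch. This is illustrative only; `legal_flexvol_size` is a hypothetical helper, not a Data ONTAP routine:

```python
KB = 1024
MIN_FLEXVOL = 20 * 1024 * KB     # 20 MB minimum, per the text
INCREMENT = 4 * KB               # sizes change in 4 kB steps

def legal_flexvol_size(requested_bytes):
    """Round a requested FlexVol size up to the next 4 kB increment and
    enforce the 20 MB minimum."""
    rounded = -(-requested_bytes // INCREMENT) * INCREMENT  # ceiling division
    return max(rounded, MIN_FLEXVOL)

assert legal_flexvol_size(1) == MIN_FLEXVOL
assert legal_flexvol_size(21 * 1024 * KB + 1) == 21 * 1024 * KB + INCREMENT
```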

7-13 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Flexible Volumes

© 2008 NetApp. All rights reserved. 16

FLEXIBLE VOLUMES

7-14 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Flexible Volumes

• Flexible volumes manage the logical layer independent of the physical layer.
• Multiple flexible volumes can exist within a single aggregate.
© 2008 NetApp. All rights reserved. 17

FLEXIBLE VOLUMES

CONSIDERATIONS FOR FLEXIBLE VOLUMES

Within an aggregate, you can create one or many flexible volumes.

FLEXIBLE VOLUME SPACE GUARANTEE

You can choose to overcommit space with flexible volumes. For more information, see
Technical Report 3348, at http://www.netapp.com/library/tr/3348.pdf.

BACKUP
You can size your flexible volumes for convenient, volume-wide data backup.

7-15 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Increasing I/O Performance With FlexVol Volumes

• Regular volumes:
  – Volume performance is limited by the number of disks in the volume
  – “Hot” volumes can’t be helped by disks on other volumes
• FlexVol volumes:
  – Spindle sharing makes total aggregate performance available to all volumes

© 2008 NetApp. All rights reserved. 18

INCREASING I/O PERFORMANCE WITH FLEXVOL VOLUMES

7-16 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Improving Space Utilization With FlexVol Volumes

• Traditional volumes:
  – Free space is scattered across volumes
  – Free space is not available to other volumes
• FlexVol volumes:
  – No preallocation of free space
  – Free space is available for use by other volumes or new volumes

[Figure: Four traditional volumes, each with stranded free space, contrasted with four FlexVol volumes sharing a single pool of free space in one aggregate.]

© 2008 NetApp. All rights reserved. 19

IMPROVING SPACE UTILIZATION WITH FLEXVOL VOLUMES

7-17 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Creating a Flexible Volume

• To create a flexible volume using the CLI:
  vol create <volname> <aggrname> <size>[k|m|g|t]
• To create a flexible volume using FilerView, use the Volume Wizard
• When creating a flexible volume, you must have the following information available:
  – Volume name
  – Aggregate name
  – Language
  – Space guarantee settings
© 2008 NetApp. All rights reserved. 20

CREATING A FLEXIBLE VOLUME

When you create a FlexVol volume, you must provide the following information:
• A name for the volume
• The name of the container aggregate

The size of a FlexVol volume must be at least 20 MB and no more than 16 TB (or whatever is
the largest size your system configuration supports).
In addition, you can provide the following optional FlexVol volume values:
• Language (the default language is the language of the root volume)
• Space-guarantee setting for the new volume

7-18 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Creating a Flexible Volume Using the CLI

The following is an example of a CLI entry used to create a flexible volume:

system> vol create flexvol_name aggr1 20G

This command creates a 20 GB flexible volume called flexvol_name in aggregate aggr1.

© 2008 NetApp. All rights reserved. 21

CREATING A FLEXIBLE VOLUME USING THE CLI

THE VOL COMMAND

EXAMPLE                       RESULT
vol create vol2 2             Creates the new volume, vol2, from two spare disks. You can
                              specify disks of a certain size, enter a specific list of
                              disks, or specify how many to add.
vol create vol2 -n 3          Displays the command that the system would execute, without
                              actually making any changes. In this example,
                              vol create vol2 -d 0b.28 0b.27 0b.26 is returned. The -n
                              option is useful for displaying automatically selected disks.
vol create flexvol aggr1 20G  Creates the new 20 GB volume, flexvol, on aggr1.
vol add vol1 3                Adds three disks to the existing traditional volume, vol1.
vol status vol1               Displays the volume size, options, and so on, for vol1.
vol rename vol2 vol3          Changes the name of volume vol2 to vol3.
vol options vol3              Displays current options settings for vol3.
vol offline vol3              Removes volume vol3 from active use without restarting.
vol online vol3               Reactivates volume vol3, which was offline.
vol restrict vol3             Places volume vol3 in restricted mode.
vol destroy vol1              Turns volume vol1 (if offline) back into individual spare disks.
vol size flexvol 30G          Changes the size of the flexvol volume to 30 GB.
vol size flexvol +10g         Increases the size of the flexvol volume by 10 GB.
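The [[+|-]<size>[k|m|g|t]] argument accepted by vol size can be mimicked with a small parser. This is a hypothetical sketch of the syntax, not NetApp's implementation:

```python
def parse_size_arg(arg):
    """Parse a vol size argument like '30G', '+10g', or '-25m' into
    (mode, bytes), where mode is 'set', 'grow', or 'shrink'."""
    units = {"k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4}
    mode = {"+": "grow", "-": "shrink"}.get(arg[0], "set")
    if arg[0] in "+-":
        arg = arg[1:]                       # strip the sign
    number, unit = arg[:-1], arg[-1].lower()
    return mode, int(number) * units[unit]

assert parse_size_arg("30G") == ("set", 30 * 1024**3)
assert parse_size_arg("+10g") == ("grow", 10 * 1024**3)
assert parse_size_arg("-25m") == ("shrink", 25 * 1024**2)
```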

7-19 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Creating a Flexible Volume Using FilerView

To create a flexible volume using FilerView:


Volumes > Add > Volume Wizard

© 2008 NetApp. All rights reserved. 22

CREATING A FLEXIBLE VOLUME USING FILERVIEW

7-20 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Resizing a Flexible Volume
Use the vol size command to resize a flexible volume.
Syntax:
vol size <vol-name> [[+|-]<size>[k|m|g|t]]

COMMAND                  RESULT
vol size flexvol 50m     FlexVol volume size is changed to 50 MB
vol size flexvol +50m    FlexVol volume size is increased by 50 MB
vol size flexvol -25m    FlexVol volume size is decreased by 25 MB

© 2008 NetApp. All rights reserved. 23

RESIZING A FLEXIBLE VOLUME

7-21 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

FlexClone Volume Clones

• Enables multiple, instant data-set clones with no storage overhead
• Provides dramatic improvement for application test and development environments
• Renders competitive methods archaic

© 2008 NetApp. All rights reserved. 24

FLEXCLONE VOLUME CLONES

FlexClone volume clones provide an efficient way to copy data for:
• Manipulation
• Projection operations
• Upgrade testing
Data ONTAP allows you to create a volume duplicate, with the original volume and clone
volume sharing the same disk space for storing unchanged data.

7-22 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

How Volume Cloning Works

Volume cloning:
• Starts with a FlexVol volume
• Takes a Snapshot copy of the volume
• Creates a clone (a new volume based on the Snapshot copy)
• Modifies the original volume
• Modifies the cloned volume

Result: Independent volume copies are efficiently stored.

[Figure: An aggregate containing the parent FlexVol volume, its Snapshot copy, and the clone volume that shares the Snapshot copy’s blocks.]
© 2008 NetApp. All rights reserved. 25

HOW VOLUME CLONING WORKS

FlexClone volumes are managed similarly to regular FlexVol volumes, with a few key
differences.
The following is a list of important facts about FlexClone volumes:
• FlexClone volumes are a point-in-time, writable copy of the parent volume. Changes made to the
parent volume after the FlexClone volume is created are not reflected in the FlexClone volume.
• You can only clone FlexVol volumes. To create a copy of a traditional volume, you must use the
vol copy command, which creates a distinct copy with its own storage.
• Before you create FlexClone volumes, you must install the FlexClone license.
• FlexClone volumes are fully functional volumes managed just like the parent volume using the
vol command.
• FlexClone volumes always exist in the same aggregate as parent volumes.
• FlexClone volumes can be cloned.
• FlexClone volumes and parent volumes share the same disk space for common data. This means
that creating a FlexClone volume is instantaneous and requires no additional disk space (until
changes are made to the clone or parent).
• A FlexClone volume is created with the same space guarantee as the parent.
• While a FlexClone volume exists, there are some operations on the parent that are not allowed.
• You can sever the connection between the parent and the clone. This is called splitting the
FlexClone volume. Splitting removes all restrictions on the parent volume and causes the
FlexClone to use its own storage.
IMPORTANT: Splitting a FlexClone volume from its parent volume deletes all existing Snapshot
copies of the FlexClone volume and disables the creation of new Snapshot copies while the
splitting operation is in progress.
• Quotas applied to a parent volume are not automatically applied to the clone.
7-23 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

• When a FlexClone volume is created, existing LUNs in the parent volume are also present in the
FlexClone volume, but are unmapped and offline.
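After cloning, a LUN in the clone can be brought back into service by placing it online and mapping it to an initiator group. The following is a hedged sketch; the clone name, LUN path, and igroup name are hypothetical and do not come from the course text, so verify the exact syntax on your Data ONTAP release:

```
system> lun online /vol/clone1/lun0
system> lun map /vol/clone1/lun0 win_igroup 0
```

The lun map command associates the LUN with an initiator group so hosts in that group can access it.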

7-24 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Flexible Volume Clone Syntax
Use the vol clone create command to create a flexible
volume clone.
Syntax:
vol clone create <vol-name> [-s none | file | volume] -b <parent_flexvol> [<parent_snapshot>]
The following is an example of a CLI entry used to create a flexible
volume clone:
vol clone create clone1 -b flexvol1

system> vol status clone1
Volume   State    Status            Options
clone1   online   raid_dp, flex     guarantee=volume(disabled)
                  Clone, backed by volume 'flexvol1', snapshot 'clone_clone1.1'
         Containing aggregate: 'aggr1'

© 2008 NetApp. All rights reserved. 26

FLEXIBLE VOLUME CLONE SYNTAX

7-25 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Splitting Volumes

• Split volumes when most of the data on a volume is not shared
• Shared blocks are replicated in the background

Result: A new, permanent volume is created for forking project data.

© 2008 NetApp. All rights reserved. 27

SPLITTING VOLUMES

Splitting a FlexClone volume from its parent removes any space optimizations currently
employed by the FlexClone volume. After the split, both the FlexClone volume and the
parent volume require the full space allocation specified by their space guarantees. After the
split, the FlexClone volume becomes a normal FlexVol volume.
When splitting clones, keep in mind the following:
• When you split a FlexClone volume from its parent, all existing Snapshot copies of the FlexClone
volume are deleted.
• During the split operation, no new Snapshot copies of the FlexClone volume can be created.
• Because the clone-splitting operation is a copy operation that could take some time to complete,
Data ONTAP provides the vol clone split stop and vol clone split status
commands to stop clone-splitting or check the status of a clone-splitting operation.
• The clone-splitting operation executes in the background and does not interfere with data access to
either the parent or the clone volume.
• If you take the FlexClone volume offline while clone-splitting is in progress, the operation is
suspended. When you bring the FlexClone volume back online, the splitting operation resumes.
• Once a FlexClone volume and its parent volume have been split, they cannot be rejoined.

7-26 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

vol clone split Command

The following are vol clone split command options:
• vol clone split start volname
• vol clone split stop
• vol clone split status [volname]
• vol clone split estimate [volname]

© 2008 NetApp. All rights reserved. 28

VOL CLONE SPLIT COMMAND

HOW TO VIEW THE RESULTS OF A CLONE SPLIT COMMAND

Example of the vol clone split start and vol clone split status commands:
vol clone split start clone1
Tue Oct 12 23:49:43 GMT [wafl.scan.start:info]: Starting volume clone
split on volume clone1.
Clone volume 'clone1' will be split from its parent.
Monitor system log or use 'vol clone split status' for progress.
vol clone split status
Volume 'clone1', 117193 of 364077 inodes processed (32%)
18578 blocks scanned. 18472 blocks updated.

7-27 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Destroying a Volume

Destroying a volume is a two-step process:
1. Take the volume offline:
   vol offline <volname>
2. Destroy the volume using the vol destroy command:
   vol destroy <volname>

© 2008 NetApp. All rights reserved. 29

DESTROYING A VOLUME

There are two reasons to destroy a volume:


• You no longer need the data it contains
• You copied the data it contains elsewhere
When you destroy a traditional volume, you also destroy the container aggregate dedicated to
that volume and convert the parity disk and all data disks back into hot spares. After the
traditional volume is destroyed, you can use the disks in other aggregates, traditional
volumes, or storage systems.
When you destroy a FlexVol volume, all the disks in its container aggregate remain assigned
to that container aggregate.
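The two-step sequence can be sketched at the console; the volume name vol_old is hypothetical:

```
system> vol offline vol_old
system> vol destroy vol_old
```

The vol destroy command prompts for confirmation before removing the volume; answer y to proceed.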

7-28 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Qtrees

© 2008 NetApp. All rights reserved. 30

QTREES

7-29 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Qtrees

• A qtree is a logically defined file system that exists as a special subdirectory at
  the root of a volume
• When creating a qtree, you can:
  – Partition data within a volume
  – Establish special quota requirements
• The maximum number of qtrees is 4,995 per volume
• Qtrees look like directories to the client
• Qtrees can be removed from the client by removing the directory, or by using FilerView
© 2008 NetApp. All rights reserved. 31

QTREES

You might consider creating a qtree for the following reasons:


• You can easily create qtrees for managing and partitioning data within a volume.
• You can create a qtree to assign user- or workgroup-based usage quotas (soft or hard) and limit the
amount of storage space that a specific user or group of users can consume on the qtree to which
they have access.

CREATING QTREES
When you want to group files without creating a volume, you can create qtrees instead. When
creating qtrees, you can group files using any combination of the following criteria:
• Security style
• Oplocks setting
• Quota limit
• Backup unit

QTREE LIMITATIONS
The primary limitation of qtrees is that there is a maximum of 4,995 qtrees allowed per
volume on a storage system.
NOTE: When you enter a df command with a qtree path name on a UNIX client, the
command displays the smaller of the client file-system limit and the storage system
disk space, which can make the qtree look fuller than it actually is.
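A minimal sketch of creating and then listing a qtree; the volume name vol1 and qtree name projects are hypothetical:

```
system> qtree create /vol/vol1/projects
system> qtree status vol1
```

The qtree status command lists each qtree in the volume along with its security style and oplocks setting.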

7-30 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Adding a Qtree

• To add a qtree using the CLI:
  qtree create <fullpath>
• To add a qtree using FilerView:
  FilerView > Volumes > Qtree > Add

© 2008 NetApp. All rights reserved. 32

ADDING A QTREE

QTREE ADVANTAGES

BACKING UP QTREES
You can back up individual qtrees to:
• Add flexibility to your backup schedules
• Modularize backups by backing up only one set of qtrees at a time
• Limit the size of each backup to one tape
Many NetApp software products (such as SnapMirror and SnapVault) are “qtree-aware.”
Because a qtree is a smaller increment than an entire volume, working at the qtree
level lets you perform backup and recovery of files quickly.

7-31 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Module Summary

In this module, you should have learned that:

• Data ONTAP provides volumes for organizing data
• Traditional volumes are tightly coupled with their aggregate and disks
• Flexible volumes are loosely coupled with their aggregate and disks
• FlexClone volumes enable multiple, instant data-set clones with no storage overhead

© 2008 NetApp. All rights reserved. 33

MODULE SUMMARY

7-32 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

Exercise
Module 7: Logical Storage
Management
Estimated Time: 40 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

7-33 Data ONTAP® 7.3 Fundamentals: Logical Storage Management

CIFS



MODULE 8: CIFS

CIFS
Module 8
Data ONTAP Fundamentals

CIFS

8-1 Data ONTAP® 7.3 Fundamentals: CIFS

Module Objectives

By the end of this module, you should be able to:

• Describe the CIFS environment
• Configure the storage system to participate in the CIFS environment
• Share a resource on the storage system
• Map a drive from a client to the shared resource on the storage system

© 2008 NetApp. All rights reserved. 2

MODULE OBJECTIVES

8-2 Data ONTAP® 7.3 Fundamentals: CIFS

CIFS Overview

© 2008 NetApp. All rights reserved. 3

CIFS OVERVIEW

8-3 Data ONTAP® 7.3 Fundamentals: CIFS

CIFS Definition

• CIFS is a Microsoft network file-sharing protocol that evolved from the Server
  Message Block (SMB) protocol
• In a CIFS environment, any application that processes network I/O can access and
  manipulate files and folders (directories) on:
  – The local system
  – Remote servers
• CIFS supports SMB 1.0 and SMB 2.0

© 2008 NetApp. All rights reserved. 4

CIFS DEFINITION

The Common Internet File System (CIFS) is a Microsoft network file-sharing protocol that
evolved from the Server Message Block (SMB) protocol.
When using CIFS, any application that processes network I/O can access and manipulate files
and folders (directories) on remote servers similar to the way it accesses and manipulates files
and folders on the local system.

8-4 Data ONTAP® 7.3 Fundamentals: CIFS

User Authentication

In a CIFS environment, the storage system authenticates users in one of four ways:

• Active Directory authentication
• Microsoft® Windows NT® 4.0 domain authentication
• Windows workgroup authentication
• Authentication for non-Windows workgroups

NOTE: This module focuses only on Active Directory authentication.

© 2008 NetApp. All rights reserved. 5

USER AUTHENTICATION

For information about methods of authenticating users other than Active Directory, see the
Data ONTAP CIFS Administration course.

8-5 Data ONTAP® 7.3 Fundamentals: CIFS

Storage System Joining a Domain

Diagram: clients, the storage system as a member server, and a domain controller;
joining the domain registers the storage system's machine name in the directory of
machine accounts.

© 2008 NetApp. All rights reserved. 6

STORAGE SYSTEM JOINING A DOMAIN

JOINING A STORAGE SYSTEM TO A WINDOWS DOMAIN


A Windows domain is a group of computers that share a common directory database located
on a domain controller. Each domain has a unique name, provides access to user and group
accounts, and enables centralized administration of user and group accounts.
Joining a storage system to a Windows domain provides:
• Centralized administration
• Integration into the Windows topology
• User authentication performed by the domain controllers

MEMBER SERVER
The storage system joins the Windows topology as a member server with member-server
privileges in the Active Directory environment.

8-6 Data ONTAP® 7.3 Fundamentals: CIFS

User Authentication on a
Storage System in a Domain
Diagram: Client B requests user authentication; the domain controller authenticates
Client B; a session is then established between Client B and the storage system
(member server).

© 2008 NetApp. All rights reserved. 7

USER AUTHENTICATION ON A STORAGE SYSTEM IN A DOMAIN

For storage systems in a domain, domain users browse the storage system for available shares
and then request access to that share.
User authentication is performed centrally on the domain controller, establishing a user
session with a storage system.
For user authentication on storage systems in a domain:
• Users must be authorized to access a share and its resources
• Data access on the storage system requires a network login to the storage system

8-7 Data ONTAP® 7.3 Fundamentals: CIFS

Setting Up and
Configuring CIFS

© 2008 NetApp. All rights reserved. 8

SETTING UP AND CONFIGURING CIFS

8-8 Data ONTAP® 7.3 Fundamentals: CIFS

Preparing for CIFS

To prepare a storage system to support Windows client users, complete the following
steps:
1. License CIFS.
2. Perform the initial CIFS configuration by running the cifs setup program.

© 2008 NetApp. All rights reserved. 9

PREPARING FOR CIFS

STEPS TO SET UP CIFS


During CIFS setup, you can perform the following tasks:
• Assign or remove WINS servers
• Configure the storage system Active Directory site information (if not already configured)
• Join the storage system to a domain or change domains
• Automatically generate /etc/passwd and /etc/group files when Network Information
Service (NIS) or Lightweight Directory Access Protocol (LDAP) is enabled

8-9 Data ONTAP® 7.3 Fundamentals: CIFS

Step 1: License CIFS

• The CIFS license may be preinstalled at the factory
• To install a CIFS license, use either:
  – The Data ONTAP CLI:
    license add <cifslicense>
  – FilerView

© 2008 NetApp. All rights reserved. 10

STEP 1: LICENSE CIFS

USING FILERVIEW
To license CIFS from FilerView:
1. From the left navigation pane, click Filer and then click Manage Licenses.
2. Enter the CIFS license number.
3. Click Apply.

When the CIFS license is preinstalled, the cifs setup script runs immediately after the
setup script.

8-10 Data ONTAP® 7.3 Fundamentals: CIFS

Step 2: Set Up CIFS

• There are two ways to join a storage system to a CIFS environment:
  – The Data ONTAP CLI:
    cifs setup
  – FilerView
• If setup is successful, the CIFS server starts automatically

© 2008 NetApp. All rights reserved. 11

STEP 2: SET UP CIFS

JOIN A DOMAIN USING THE CLI


To join a domain using the CLI, run the cifs setup command. This begins a series of
prompts that requests the information necessary to add or change a computer account for a
storage system into a Windows 2000, Windows 2003, or Windows 2008 domain.

8-11 Data ONTAP® 7.3 Fundamentals: CIFS

CIFS Setup Wizard

To access the CIFS Setup Wizard:


CIFS > Configure > Setup Wizard

© 2008 NetApp. All rights reserved. 12

CIFS SETUP WIZARD

To set up CIFS using the FilerView wizard:


1. Select FilerView > CIFS > Configure > Setup Wizard.
The Setup Wizard screen is displayed.
2. Click Next.
The CIFS Setup Wizard helps you configure your storage system for CIFS access. You can
run the wizard any time to change the settings. CIFS stops and then restarts when the wizard
is finished.

8-12 Data ONTAP® 7.3 Fundamentals: CIFS

CIFS Setup Wizard (Cont.)

© 2008 NetApp. All rights reserved. 13

CIFS SETUP WIZARD (CONT.)

FILER NAME
The name of the storage system appears on the Filer Name screen of the CIFS Setup Wizard.
You can add a description of the storage system here. This description is available from the
CLI by typing cifs comment.
NOTE: Older environments might use WINS instead of DNS to resolve names.

8-13 Data ONTAP® 7.3 Fundamentals: CIFS

CIFS Setup Wizard (Cont.)

© 2008 NetApp. All rights reserved. 14

CIFS SETUP WIZARD (CONT.)

AUTHENTICATION
On the Authentication screen of the CIFS Setup Wizard, select the type of Windows
authentication your system will use.
NOTE: This module covers Windows Active Directory Domain only. For information about
other user authentication methods, see the Data ONTAP CIFS Administration course, and the
File Access and Protocols Management Guide.

8-14 Data ONTAP® 7.3 Fundamentals: CIFS

CIFS Setup Wizard (Cont.)

© 2008 NetApp. All rights reserved. 15

CIFS SETUP WIZARD (CONT.)

When entering an administrator name on the CIFS Setup Wizard Domain screen, the
Windows Administrator must have domain privileges to create a new computer account in
Active Directory.

8-15 Data ONTAP® 7.3 Fundamentals: CIFS

CIFS Setup Wizard (Cont.)

NOTE: If your storage system supports CIFS clients only, set the
Security style to NTFS Only. Otherwise, use the default
Multi-Protocol.

© 2008 NetApp. All rights reserved. 16

CIFS SETUP WIZARD (CONT.)

SECURITY STYLE
The Security Style screen of the CIFS Setup Wizard is where you specify the type of security
to be used as the default on the storage system.
If FilerView is used to configure CIFS, the default security style is none.
If the CLI is used to configure CIFS:
• The default security style is NTFS only if CIFS is licensed.
• The default security style is Multi-Protocol if CIFS and NFS are licensed.
NOTE: Changing the default security style does not change existing files and directories,
only newly created files and directories.

8-16 Data ONTAP® 7.3 Fundamentals: CIFS

CIFS Setup Wizard (Cont.)

© 2008 NetApp. All rights reserved. 17

CIFS SETUP WIZARD (CONT.)

8-17 Data ONTAP® 7.3 Fundamentals: CIFS

CIFS Services

Syntax: cifs terminate [-t time] [host]

Diagram: cifs terminate Host1 ends only the CIFS sessions opened by Host1;
cifs terminate with no host ends the CIFS sessions of all hosts (Host1 through
Host4) and shuts down the CIFS service.

© 2008 NetApp. All rights reserved. 18

CIFS SERVICES

STARTING AND STOPPING THE CIFS PROTOCOL


The cifs terminate command stops the CIFS service. If a single host is named, all CIFS
sessions opened by that host are terminated. If a host is not specified, all CIFS sessions are
terminated and the CIFS service is shut down.
If you run the cifs terminate command without specifying a time until shutdown and
there are users with open files, you are prompted to enter the number of minutes to delay
before terminating. If the CIFS service is terminated immediately on a host that has one or
more files open, users will not be able to save changes. You can use the -t option to warn of
an impending service shutdown. If you execute cifs terminate from rsh, you must
supply the -t option.

EXAMPLE                            RESULT
cifs terminate -t 10 gloriaswan    Terminates the session for the host gloriaswan in
                                   10 minutes. Alerts are sent periodically to the
                                   affected host(s).
cifs terminate -t 0                Terminates all CIFS sessions immediately for all
                                   clients.
cifs restart                       Reconnects the storage system to the domain
                                   controller, and then restarts the CIFS service.

8-18 Data ONTAP® 7.3 Fundamentals: CIFS

Reconfiguring CIFS

• Use the cifs terminate command to disconnect users and stop the CIFS service
• Use the cifs setup command to reconfigure the CIFS service
• The storage system automatically attempts to restart the CIFS service with the new
  CIFS configuration

© 2008 NetApp. All rights reserved. 19

RECONFIGURING CIFS

To reconfigure CIFS, you must run the cifs setup program again, and then enter new
configuration settings. You can use cifs setup to change the following CIFS settings:
• WINS server addresses
• Security style (multiprotocol or NTFS-only)
• Authentication (Windows domain, Windows workgroup, or UNIX password)
• File system used by the storage system
• Domain or workgroup to which the storage system belongs
• Storage system name

PREREQUISITES FOR RECONFIGURING CIFS


Before reconfiguring CIFS, you must meet the following prerequisites:
• The CIFS service must be terminated.
• If you want to change the storage system's domain, the storage system must be able to
communicate with the primary domain controller for the domain in which you want to install the
storage system. You cannot use the backup domain controller for installing the storage system.
• If you want to change the name of the storage system, you must create a new computer account on
the domain controller. (This is not necessary if you are using Windows 2000.)

8-19 Data ONTAP® 7.3 Fundamentals: CIFS

• Your storage system and the domain controllers in the same domain must be synchronized with
the same time source. If the time on the storage system and the time on the domain controllers are
not synchronized, the following error message is displayed:
Clock skew too great
For a detailed description of how to set up time synchronization services, see the Storage
Management Guide.
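One common way to synchronize a storage system is to point its time daemon at the same NTP source that the domain controllers use. The following is a hedged sketch using the Data ONTAP timed options; the server name is hypothetical, and the Storage Management Guide remains the authoritative procedure:

```
system> options timed.proto ntp
system> options timed.servers dc1.example.com
system> options timed.enable on
```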

8-20 Data ONTAP® 7.3 Fundamentals: CIFS

CIFS Sessions

© 2008 NetApp. All rights reserved. 20

CIFS SESSIONS

8-21 Data ONTAP® 7.3 Fundamentals: CIFS

Displaying CIFS Sessions

You can display the following types of CIFS session information:

• A summary of session information
• Share and file information for one or all connected users
• Security information for one or all connected users

© 2008 NetApp. All rights reserved. 21

DISPLAYING CIFS SESSIONS

You can display the following types of session information:


• A summary of session information, including storage system information, and the number of open
shares and files opened by each connected user
• Share and file information about one connected user or all connected users, including:
• Names of shares opened by a specific connected user or all connected users
• Access levels of opened files
• Security information about a specific connected user or all connected users, including the UNIX
UID, and a list of UNIX groups and Windows groups to which the user belongs.
NOTE: The number of open shares shown in the session information includes the hidden
IPC$ share.

8-22 Data ONTAP® 7.3 Fundamentals: CIFS

Using the CLI for CIFS Sessions
system> cifs sessions
Server Registers as ' NetApp1 ' in Windows 2000 domain 'EDSVCS'
Filer is using en_US for DOS users
Selected domain controller \\DEVDC for authentication
========================================
PC (user) #shares #files
TPILLON2-L2K (EDSVCS\administrator - root)
1 0
system> cifs sessions -s
users
Security Information
TPILLON2-L2K (EDSVCS\administrator - root)
***************
UNIX uid = 0
user is a member of group daemon (1)
user is a member of group daemon (1)

NT membership
EDSVCS\Administrator
EDSVCS\Domain Users
EDSVCS\Domain Admins
BUILTIN\Users
BUILTIN\Administrators
User is also a member of Everyone, Network Users,
Authenticated Users
***************

© 2008 NetApp. All rights reserved. 22

USING THE CLI FOR CIFS SESSIONS

To display a summary of information about the storage system and connected users, use the
cifs sessions command without arguments. For information about a single connected
user, you can specify the user, machine name, or IP address, or use the -s option to obtain
security information about one or all connected users.

EXAMPLE                        RESULT
cifs sessions                  Displays a summary of all connected users.
cifs sessions growe            Displays information about the user, files opened by
                               the user, and the access level of the open files.
cifs sessions growe_NT
cifs sessions 192.168.33.3     Displays information about the host, files opened by
                               the host, and the access level of the open files.
cifs sessions -s               Displays security information about all connected
                               users.
cifs sessions -s growe_NT      Displays security information about the connected
                               machine.

8-23 Data ONTAP® 7.3 Fundamentals: CIFS

Using FilerView for CIFS Sessions

© 2008 NetApp. All rights reserved. 23

USING FILERVIEW FOR CIFS SESSIONS

To obtain CIFS session information using FilerView, complete the following steps:
1. From the FilerView main menu, select CIFS > Session Report.
2. Enter a user name or PC name.
3. To view session information, click Sessions or Security.
NOTE: If you leave the name field blank and select one of the option buttons, a full session
or security report on all connected users is displayed. Current session status is displayed at the
bottom of the CIFS Session Report screen.

8-24 Data ONTAP® 7.3 Fundamentals: CIFS

CIFS Shares

© 2008 NetApp. All rights reserved. 24

CIFS SHARES

8-25 Data ONTAP® 7.3 Fundamentals: CIFS

Creating and Managing Shares

Shares can be created and managed using:

• The CLI:
  system> cifs shares -add webfinal /vol/vol1/webfinal
• Windows Computer Management
• FilerView
© 2008 NetApp. All rights reserved. 25

CREATING AND MANAGING SHARES

SHARES
When creating CIFS shares, there is a limitation with Windows Computer Management. For
more information, see the Data ONTAP CIFS Administration course and the File Access
and Protocols Management Guide.

8-26 Data ONTAP® 7.3 Fundamentals: CIFS

The cifs shares Command

• Display shares:
  cifs shares [<share_name>]
• Add shares:
  cifs shares -add <share_name> <path> [-comment description]
  [-forcegroup name] [-maxusers n]
• Change shares:
  cifs shares -change <share_name> [-comment description]
  [-forcegroup name] [-maxusers n]
• Delete shares:
  cifs shares -delete <share_name>

© 2008 NetApp. All rights reserved. 26

THE CIFS SHARES COMMAND

You can use the CLI or FilerView to create and modify shares.

CREATING AND ACCESSING SHARES USING THE CLI


To display one or more shares, add a share, change a share, or delete a share, use the cifs
shares command.

PARAMETERS USED WITH CIFS SHARES


The parameters used with the cifs shares command allow you to modify or display CIFS
shares information. Share settings can be changed at any time, even if the share is in use.

8-27 Data ONTAP® 7.3 Fundamentals: CIFS

PARAMETER      WHAT IT DOES
sharename      Name of the share that CIFS users will use to access the directory on
               the storage system. If the sharename already exists, this command with
               the -add option fails.
-comment       Option that precedes a description field if a comment is intended.
-forcegroup    Assigns a name to the group that will have access to all files created
name           by CIFS users in the UNIX environment. The group must be a predefined
               group in the UNIX group database. Specifying a forcegroup is useful only
               if the share is in a UNIX or mixed qtree. By default, shares give
               everyone full control.
-maxusers      Specifies a maximum number of simultaneous users, depending on the
               storage system memory. If the number of users is not specified, the
               default is Unlimited.
description    Describes the purpose of the share and contains only characters in the
               current code page. It is required by the CIFS protocol and is displayed
               in the share list in Network Neighborhood. If the description contains
               spaces, enclose it in single quotes.
-nocomment     Specifies that there is no description.
group name     The name of the group in the UNIX group database. This group will own
               all files created in the share.
-noforcegroup  Specifies that no particular UNIX group owns files that are created in
               the share. Files that are created belong to the same group as the owner
               of the file.
-nomaxusers    Specifies no maximum number of users who can have simultaneous access.
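Combining several of these parameters, the following is a hedged example; the share name, path, comment, user limit, and group are all hypothetical:

```
system> cifs shares -add eng /vol/vol1/eng -comment 'Engineering data' -maxusers 50
system> cifs shares -change eng -forcegroup engineers
system> cifs shares eng
```

The final command displays the share so you can confirm the changed settings.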

The cifs shares Command: Example

system> cifs shares -add pub /vol/vol0/pub -comment "new pub"


system> cifs shares
Name Mount Point Description
---- ----------- -----------
ETC$ /etc Remote Administration
BUILTIN\Administrators / Full Control
HOME /vol/vol0/home Default Share
everyone / Full Control
C$ / Remote Administration
BUILTIN\Administrators / Full Control
pub /vol/vol0/pub new pub
everyone / Full Control
system> cifs shares -delete pub
system> cifs shares
Name Mount Point Description
---- ----------- -----------
ETC$ /etc Remote Administration
BUILTIN\Administrators / Full Control
HOME /vol/vol0/home Default Share
everyone / Full Control
C$ / Remote Administration
BUILTIN\Administrators / Full Control

© 2008 NetApp. All rights reserved. 27

THE CIFS SHARES COMMAND: EXAMPLE

Managing Share Permissions

Share permissions can be managed using:


ƒ CLI:
  system> cifs shares -add webfinal /vol/vol1/webfinal
ƒ Windows Computer Management
ƒ FilerView


MANAGING SHARE PERMISSIONS

PROVIDING ACCESS TO SHARES


After you have created shares, you can use the cifs access command to set or modify the
access control list (ACL) for that share. This command grants or removes access by
specifying the share, the rights, and the user or group.

EXAMPLE RESULT
cifs access webfinal tuxedo Gives full Windows NT access to the group tuxedo on the
Full Control webfinal share.
cifs access webfinal Gives read/write access to the user engineering\jbrown on
engineering\jbrown rw the webfinal share.

The cifs access Command

ƒ Set share permission


cifs access <sharename> [-g]
[user|group] <rights>
ƒ Remove share permission
cifs access -delete <sharename>
[user|group]


THE CIFS ACCESS COMMAND

PARAMETERS USED WITH THE CIFS ACCESS COMMAND


Parameter What It Does
-g Specifies that the user is the name of the UNIX group. Use this option when you have a
UNIX group and a UNIX user, or Windows NT user or group with the same name.

user Specifies that the user or group for the ACL entry can be a Windows NT user or group (if the
storage system uses NT domain authentication), or can be the special group, everyone.

group Specifies the user or group for the ACL entry. Can be a Windows NT user or group (if the
storage system uses NT domain authentication), or can be the special group, everyone.

rights Assigns either Windows NT or UNIX-style rights. Windows NT rights are: No Access,
Read, Change, and Full Control.
-delete Removes the ACL entry for the named user on the share.
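The grant-and-remove behavior described in the table can be pictured with a small in-memory model. This is only an illustrative sketch (the share and user names are made up, and Data ONTAP stores share ACLs quite differently), but it captures the semantics: one ACL entry per user or group on a share, granting again replaces the entry, and -delete removes it.

```python
# Toy model of cifs access semantics (illustrative only, not Data ONTAP
# code): each share keeps one ACL entry per user or group.

share_acls = {}  # share name -> {user_or_group: rights}

def cifs_access(share, who, rights):
    """Grant `rights` to `who` on `share`, replacing any earlier entry."""
    share_acls.setdefault(share, {})[who] = rights

def cifs_access_delete(share, who):
    """Remove the ACL entry for `who` on `share`, if present."""
    share_acls.get(share, {}).pop(who, None)

cifs_access("webfinal", "tuxedo", "Full Control")
cifs_access("webfinal", "engineering\\jbrown", "Change")
cifs_access_delete("webfinal", "engineering\\jbrown")
print(share_acls["webfinal"])   # {'tuxedo': 'Full Control'}
```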

The cifs access Command: Example

system> cifs access eng engineering "Full Control"


system> cifs shares eng
Name Mount Point Description
---- ----------- -----------
eng /vol/vol1/eng Eng Share
EDSVCS\engineering / Full Control
system> cifs access eng jbrown Read
system> cifs shares eng

Name Mount Point Description


---- ----------- -----------
eng /vol/vol1/eng Eng Share
EDSVCS\jbrown / Read
EDSVCS\engineering / Full Control

system> cifs access -delete eng jbrown


Name Mount Point Description
---- ----------- -----------
eng /vol/vol1/eng Eng Share
EDSVCS\engineering / Full Control


THE CIFS ACCESS COMMAND: EXAMPLE

Client Access


CLIENT ACCESS

Mapping the Share


MAPPING THE SHARE

Mapping the Share (Cont.)


MAPPING THE SHARE (CONT.)

Other CIFS Administration Resources

For more information about CIFS administration, see the


Data ONTAP CIFS Administration course. This advanced
course covers:
ƒ Different CIFS user authentication methods:
– Workgroup
– Active Directory
– Windows NT 4.0 domain
– Non-Windows workgroup
ƒ Advanced configuration
ƒ Collecting CIFS statistics
ƒ CIFS performance tuning
ƒ Troubleshooting CIFS


OTHER CIFS ADMINISTRATION RESOURCES

Module Summary

In this module, you should have learned that:


ƒ CIFS is a protocol for accessing files on
computers over a network
ƒ The CIFS server on the storage system must be
licensed and configured
ƒ The cifs shares command creates, views,
modifies, and deletes shares
ƒ The cifs access command grants permissions
to users and groups
ƒ The cifs sessions command displays users and
connections
ƒ The cifs terminate command terminates
CIFS sessions

MODULE SUMMARY

Exercise
Module 8: CIFS
Estimated Time: 20 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

NFS



MODULE 9: NFS

NFS
Module 9
Data ONTAP® 7.3 Fundamentals

NFS

9-1 Data ONTAP® 7.3 Fundamentals: NFS

© 2008 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.
NetApp University - Do Not Distribute
Module Objectives

By the end of this module, you should be able to:


ƒ Explain NFS implementation in Data ONTAP
ƒ License NFS on a storage system
ƒ Explain the purpose and format of
/etc/exports
ƒ List and define the export specification options
ƒ Explain the rules for exports
ƒ Describe the use of the exportfs command
ƒ Mount an export on a UNIX host
ƒ Add an export for an adminhost


MODULE OBJECTIVES

NFS Overview


NFS OVERVIEW

NFS Overview

ƒ NFS allows network systems (clients) to share


files and directories that are stored and
administered centrally from a storage system
ƒ The following platforms usually support NFS:
– Sun Microsystems® Solaris™
– Linux
– HP-UX
– And more


NFS OVERVIEW

The Network File System (NFS) is a protocol originally developed by Sun Microsystems in
1984 that allows users on a client computer to access files over a network as easily as if the
files were stored on local disks. NFS, like many other protocols, builds on
the Open Network Computing Remote Procedure Call (ONC RPC) system. The NFS protocol
is specified in RFC 1094, RFC 1813, and RFC 3530.

Exported Resources Overview

[Slide diagram: storage system SS1 contains volumes vol0 (etc, home) and
flexvol1 (data_files, eng_files, misc_files) and is connected over the
network to clients]


EXPORTED RESOURCES OVERVIEW

In the diagram above, SS1 contains resources that many users need such as data_files,
eng_files, and misc_files.
Before Client1 can use a resource, SS1 must export the resource and Client1 must mount
it. A user on Client1 can then change to the directory (cd) that contains the mounted
resource and access it as if it were stored locally (assuming that permissions are set
appropriately).

Setting Up and
Configuring NFS


SETTING UP AND CONFIGURING NFS

Setting up NFS

ƒ Configure NFS using either:


– The CLI
– FilerView
ƒ When setting up NFS, you must have:
– An NFS license code
– Determined if you are enabling NFS over TCP,
UDP, or both
– Determined which version of NFS to enable


SETTING UP NFS

Configuring NFS Using the CLI

To use the CLI to configure NFS on a storage


system, complete the following steps:
1. License NFS on the storage system:
license add <nfslicensecode>
Executing this command starts the rpc.mountd
and nfsd daemons.
2. Set NFS options:
options nfs


CONFIGURING NFS USING THE CLI

When you license NFS on a storage system, it starts the daemons (rpc.mountd and nfsd)
that handle NFS RPC protocol.
The following are NFS configurable options:
• nfs.v3.enable
• nfs.v4.enable
• nfs.tcp.enable
• nfs.udp.xfersize

Configuring NFS Using FilerView

To configure NFS using FilerView:


FilerView > NFS > Configure


CONFIGURING NFS USING FILERVIEW

The Configure NFS screen in FilerView enables you to configure NFS for use on the storage
system.
The following are NFS configurable parameters:
• NFS License
• NFS Enable
• PCNFS Enabled
• PCNFS umask
• WebNFS Enable
• Client statistics
• NFS Over TCP
• NFS Version 3
• Report Maximum

Exporting Resources


EXPORTING RESOURCES

Exporting Resources

To make resources available to remote clients,


the resource must be exported.
ƒ To export a resource persistently:
– Edit the /etc/exports file with new entry
– Execute the exportfs -a command
– Use FilerView
ƒ To export a resource temporarily, use the
exportfs -i -o command


EXPORTING RESOURCES

To export resources, use one of the following methods:


• For persistence across reboots, specify the resources to export in the /etc/exports file; and
then execute the exportfs -a command to make changes effective immediately
• For temporary access, use the exportfs command to export resources not specified in the
/etc/exports file, or to export resources specified in the file but with different access
permissions

FIVE RULES FOR CREATING EXPORTS


1. You must export each volume separately. If you create, rename, or destroy a volume, the
/etc/exports file is updated automatically. This functionality can be disabled using the
options nfs.export.auto-update switch.
2. The storage system must be able to resolve host names if used in exports:
/etc/hosts, NIS, DNS
3. Access must be granted in a positive way:
• A host is excluded when it is not listed or it is preceded by a dash (-).
• If no host is specified, all hosts have access.
4. Subdirectories of parent exports can be exported with different option specifications.
5. Permissions are determined by matching the longest prefix to the access permissions in the
/etc/exports file.
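Rule 5 (longest-prefix matching) can be sketched in a few lines. This is an illustrative model only, not Data ONTAP code, and the export entries below are made up: when a requested path matches more than one /etc/exports entry, the entry whose path is the longest prefix supplies the permissions.

```python
# Sketch of rule 5: the export entry whose path is the longest prefix of
# the requested path determines the access permissions.

def longest_prefix_export(path, exports):
    """Return the (export_path, options) pair whose export_path is the
    longest path-component prefix of `path`, or None if nothing matches."""
    best = None
    for export_path, options in exports.items():
        # A match must end on a path-component boundary, or be exact.
        if path == export_path or path.startswith(export_path + "/"):
            if best is None or len(export_path) > len(best[0]):
                best = (export_path, options)
    return best

exports = {
    "/vol/vol0": "-rw=adminhost",
    "/vol/vol0/home": "-rw=host1:host2",
}

# /vol/vol0/home/jdoe matches both entries; the longer prefix applies.
print(longest_prefix_export("/vol/vol0/home/jdoe", exports))
```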

Adding an Export: /etc/exports

Each entry specifies the full path to the directory that is exported. The first
option follows a dash; additional options are separated by commas. Host names
within an option are separated by colons.

/vol/vol0/pubs -rw=host1:host2,root=host1
/vol/vol1 -rw=host2
/vol/vol0/home

In the first entry, the rw option allows host1 and host2 to mount the pubs
directory, and the root option gives root permissions for the pubs directory to
host1. The second entry gives read-write permissions to host2 only; all other
hosts have no access. The third entry specifies no options, so all hosts can
mount the /vol/vol0/home directory as read-write.


ADDING AN EXPORT: /ETC/EXPORTS

System administrators must control how NFS clients access files and directories on a storage
system. Exported resources are resources made available to hosts. NFS clients can only
mount resources that have been exported from a storage system licensed for NFS.
To export directories, add an entry for each directory to the /etc/exports file, using the
full path to the directory and options. The full path name must include /vol.
Export specifications use the following options to restrict access:
• root = list of hosts, netgroup names, and subnets
• rw = list of hosts, netgroup names, and subnets
• ro = list of hosts, netgroup names, and subnets
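The structure of one /etc/exports line, as described above, can be broken down mechanically. The following parser is purely illustrative (Data ONTAP's real parser handles much more syntax): the path is followed by a dash-prefixed, comma-separated option list, and options such as root=, rw=, and ro= carry colon-separated host lists.

```python
# Illustrative parser for one /etc/exports line (not Data ONTAP code).

def parse_export_line(line):
    path, _, option_str = line.partition(" -")
    options = {}
    for opt in option_str.split(","):
        name, eq, value = opt.partition("=")
        # root=, rw=, ro= carry host lists; a bare flag (e.g. nosuid)
        # is stored as True.
        options[name] = value.split(":") if eq else True
    return path.strip(), options

path, opts = parse_export_line("/vol/vol0/pubs -rw=host1:host2,root=host1")
print(path, opts)
```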

Test Your Knowledge

Match each requirement to the correct /etc/exports entry:

1. Allow root access to /vol/vol0 by adminhost
2. Allow read-write access to /vol/vol0/home by host1 and host2
3. Allow read-write access to /vol/vol1 by host1 and
   read-only access by host3

a. /vol/vol1 -rw=host2
b. /vol/vol0 -rw=adminhost,root=adminhost
c. /vol/vol0/home -rw=host1:host2
d. /vol/vol0 -ro=host2
e. /vol/vol1 -rw=host1,ro=host3
f. /vol/vol1 -rw=host1,root=host3
g. /vol/vol0/home -rw=host1,ro=host2
h. /vol/vol0 -ro=adminhost,root=adminhost


TEST YOUR KNOWLEDGE

Exporting


EXPORTING

The exportfs Command

After adding an export to /etc/exports, use


the exportfs -a command to load the exports.
system> exportfs -a

system> exportfs
/vol/flexvol/qtree -sec=sys,rw=10.254.232.12
/vol/vol0/home -sec=sys,rw,root=10.254.232.12,nosuid

system> rdfile /etc/exports


#Auto-generated by setup Mon Apr 30 08:32:21 GMT 2007
/vol/flexvol/qtree -sec=sys,rw=10.254.232.12
/vol/vol0/home -sec=sys,rw,root=10.254.232.12,nosuid

system>


THE EXPORTFS COMMAND

To specify which file system paths Data ONTAP automatically exports when NFS starts up,
add export entries to (or remove them from) the /etc/exports file. To manually export or
unexport file system paths, use the exportfs command in the storage system CLI.

EDITING THE /ETC/EXPORTS FILE


To add export entries to (or remove from) the /etc/exports file, use a text editor on an
NFS client that has root access to the storage system.
The following is an example of /etc/exports file entries:
#Auto-generated by setup Mon Mar 24 14:39:40 PDT 2008
/vol/vol0 -sec=sys,ro,rw=sunhost,root=sunhost,nosuid
/vol/vol0/home -sec=sys,rw,root=sunhost,nosuid
/vol/flexvol1 -sec=sys,rw,root=sunhost,nosuid

Adding an Export Using FilerView


ADDING AN EXPORT USING FILERVIEW

OPTION DESCRIPTION
Root Access The root option specifies that the root on the client has root permissions for the
resource when it is mounted from the storage system.

Read-Write The rw option gives read-write access to specific hosts. If no host is specified, all
Access hosts have read-write access.
Read-Only The ro option gives read-only access to specific hosts. If no host is specified, all
Access hosts have read-only access.
Anonymous User The anon option specifies the UID to which the root user on the client is
ID mapped.

Adding an Export Using FilerView (Cont.)


ADDING AN EXPORT USING FILERVIEW (CONT.)

EXPORT PATH
You must specify the full path name of the exported resource as the export path.
Example:
/vol/vol0

Adding an Export Using FilerView (Cont.)


ADDING AN EXPORT USING FILERVIEW (CONT.)

Managing NFS Exports


MANAGING NFS EXPORTS

When Data ONTAP receives a mount request from a client, it compares the path name in the
mount request to the path names of the exported resources contained in the /etc/exports
file. If Data ONTAP finds a match, the NFS client is allowed to mount the resource.
FilerView enables you to insert and modify export lines in the file, add options for specific
hosts to an existing export line, and delete an existing export line. After adding export lines,
you can then export all resources for client access.
The FilerView Manage NFS Exports screen enables you to manage NFS exports on the
storage system. NFS clients can only mount resources after they have been exported
(meaning that they have been made available for mounting).
You can export a resource to the following targets:
• Hosts―Hosts are individual computers.
• Netgroups―When exporting a resource, if you do not want to list names of individual hosts as
targets, you can specify a predefined netgroup as the target.
• Subnets―Exporting to a subnet has the same effect as exporting to individual hosts on the subnet
without having to list individual host names. If all hosts on the same subnet should mount the same
resource, export that resource to the subnet.

When exporting a resource, keep in mind the following:
• You must specify the complete path name for a resource to be exported.
• You cannot export /vol, which is not a path name to a file, directory, or volume. If you want to
export all volumes on the storage system, you must export each volume separately.
• Each line in the /etc/exports file can contain up to 4,096 noncommented characters. The
number of characters allowed for comments is unlimited.
• The /etc/exports file contains a list of resources that can be exported. When the storage
system is rebooted, Data ONTAP exports all resources in this file.

Temporary Exports

Use the exportfs command to create


in-memory exports:
  exportfs -i -o
Example:
  exportfs -i -o ro=host1 /vol/vol0/home
NOTE: When the storage system reboots, this export will be gone.


TEMPORARY EXPORTS

THE EXPORTFS COMMAND

EXAMPLE                           RESULT
exportfs -a                       Exports all entries in the /etc/exports file.
exportfs -i -o ro=host1:host3     Exports /vol/vol0/home to host1 and host3 with
  /vol/vol0/home                  read-only access. This is an example of a
                                  temporary export from the CLI.
exportfs -u /vol/vol1             Unexports /vol/vol1.
exportfs -ua                      Unexports all exports.
exportfs -av                      Exports all entries in the /etc/exports file
                                  and, with the verbose option, prints the path
                                  name of each export.
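The difference between a persistent export (recorded in /etc/exports and reloaded at boot) and a temporary -i export (in-memory only) can be sketched with a toy model. This is purely illustrative; no real export table works this way, and the paths are made up.

```python
# Toy contrast between persistent exports and temporary (-i) exports.

etc_exports = []   # contents of /etc/exports, survives a reboot
in_memory = []     # the live export table

def exportfs_persistent(path):     # like editing /etc/exports + exportfs -a
    etc_exports.append(path)
    in_memory.append(path)

def exportfs_temporary(path):      # like exportfs -i
    in_memory.append(path)

def reboot():
    # At boot, only entries from /etc/exports are re-exported.
    in_memory.clear()
    in_memory.extend(etc_exports)

exportfs_persistent("/vol/vol1")
exportfs_temporary("/vol/vol0/home")
reboot()
print(in_memory)   # ['/vol/vol1'] -- the temporary export is gone
```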

Common exportfs Options

ƒ Display all current exports:


exportfs
ƒ Add exports to the /etc/exports file:
exportfs -p [options] path
ƒ Reload exports from the /etc/exports file:
exportfs -r
ƒ Unload all exports:
exportfs -uav
ƒ Unload a specific export:
exportfs -u [path]
ƒ Unload an export and remove it from /etc/exports:
exportfs -z [path]


COMMON EXPORTFS OPTIONS

To export a file system path and add a corresponding export entry to the /etc/exports file,
enter the following command:
exportfs -p [options] path
NOTE: If you do not specify an export option, Data ONTAP automatically exports
the file system path with the rw and sec=sys export options.
To export all file system paths specified in the /etc/exports file and unexport all file
system paths not specified in the /etc/exports file, enter the following command:
exportfs -r
To unexport all file system paths without removing the corresponding export entries from the
/etc/exports file, enter the following command:
exportfs -uav
To unexport a file system path without removing the corresponding export entry from the
/etc/exports file, enter the following command:
exportfs -u path
To unexport a file system path and remove the corresponding export entry from the
/etc/exports file, enter the following command:
exportfs -z path

Mounting


MOUNTING

Mounting From a Client
To mount an export from a client:
1. Telnet or log in to the host.
2. Create a directory as a mountpoint for the storage appliance.
3. Mount the exported directory in the host directory you just
created.
4. Change directories to the mounted export.
5. Enter ls -l to verify that the storage appliance is mounted and
accessible.

telnet 10.32.30.20 (1)


# mkdir /system-vol2 (2)
# mount system:/vol/vol2 /system-vol2 (3)
# cd /system-vol2 (4)
NetApp1-vol2$ ls -l (5)
-rwxr-xr-x root 719634 FEB 11 2004 ,general
-rwxr-xr-x root 719634 FEB 13 2004 ,policy


MOUNTING FROM A CLIENT

Use the mount command to mount an exported NFS directory from another machine.
An alternate way to mount an NFS export is to add a line to the /etc/fstab (called
/etc/vfstab on some UNIX systems). This line must specify the NFS server host name,
the exported directory on the server, and the local machine directory where the NFS share is
to be mounted. For more information, see the NFS documentation for your client.
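An /etc/fstab entry for the example above might look like the following on a Linux client. The mount options shown are common choices, not taken from the course, and field layout varies slightly between UNIX systems:

```
# device            mount point     type   options        dump  pass
system:/vol/vol2    /system-vol2    nfs    rw,hard,intr   0     0
```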

Other NFS Administration Resources

For more information about NFS administration,


see the Data ONTAP NFS Administration
course. This advanced course covers:
ƒ Exporting resources across domains, subnets,
and netgroups
ƒ Advanced configuration
ƒ NFS statistics gathering
ƒ NFS performance tuning
ƒ NFS troubleshooting


OTHER NFS ADMINISTRATION RESOURCES

Module Summary

In this module, you should have learned that:


ƒ NFS is a protocol that allows clients to access
files over a network.
ƒ The /etc/exports file defines exports.
ƒ You can configure NFS on a storage system
using the CLI or FilerView.
ƒ After you export a resource, you can mount the
exported file system on a UNIX host.


MODULE SUMMARY

Exercise
Module 9: NFS
Estimated Time: 45 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

Qtrees



MODULE 10: QTREES AND SECURITY STYLES

Qtrees and Security


Styles
Module 10
Data ONTAP® 7.3 Fundamentals

QTREES AND SECURITY STYLES

10-1 Data ONTAP® 7.3 Fundamentals: Qtrees and Security Styles

© 2008 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.
NetApp University - Do Not Distribute
Module Objectives

By the end of this module, you should be able to:


ƒ Describe multiprotocol configuration on a
storage system, including the
/etc/usermap.conf file
ƒ Explain the purpose of a security style
ƒ Configure a security style setting for a qtree
and a volume
ƒ Explain the difference between security styles
and access types using the CLI and FilerView
ƒ Explain, create, and manage quotas using the
CLI and FilerView

MODULE OBJECTIVES

Qtrees


QTREES

Qtrees

ƒ A qtree is a logically defined file system that


exists as a special subdirectory at the root of a
volume
ƒ When creating a qtree, you can:
– Partition data within a volume
– Establish special quota requirements
ƒ The maximum number of qtrees is 4,995 per
volume
ƒ Qtrees look like a directory to the client
ƒ Qtrees can be removed from the client by
removing the directory or using FilerView

QTREES

Qtrees are similar to flexible volumes, but have the following unique characteristics:
• Qtrees allow you to set security styles.
• Qtrees allow you to set oplocks for CIFS clients.
• Qtrees allow you to set up and apply quotas.
• Qtrees are used as a backup unit for SnapMirror and SnapVault.

Qtree Advantages

ƒ Set a qtree security style without affecting the


security style of other qtrees in a volume
(discussed later)
ƒ Set CIFS oplocks, if appropriate, without
affecting the settings of projects in other qtrees
ƒ Use tree quotas to limit the disk space and
number of files available to each qtree in a
volume (discussed later)
ƒ Back up and restore qtrees


QTREE ADVANTAGES

QTREE FOR BACKUP


Backing up individual qtrees provides the following benefits:
• Adds flexibility to backup schedules
• Modularizes backups by backing up only one set of qtrees at a time
• Limits the size of each backup to one tape
Many products that use NetApp software (such as SnapMirror and SnapVault) are “qtree-
aware.” Because you are working at a smaller increment than the entire volume, working at
the qtree level allows rapid file backup and recovery.

CIFS OPLOCKS
CIFS oplocks (opportunistic locks) enable the CIFS client in certain file-sharing scenarios to
perform client-side caching of read-ahead, write-behind, and lock information. A client can
then work with a file (read or write it) without regularly reminding the server that it needs
access to the file. This improves performance by reducing network traffic. For more
information about CIFS oplocks, see the Data ONTAP CIFS Administration course.

Security Styles

ƒ Every qtree and volume has a security style


setting that determines whether files in that
qtree or volume can use Windows NT ACLs or
UNIX security
ƒ There are three security styles:
– NTFS
– UNIX
– Mixed
ƒ To change the security style using the CLI:
qtree security <fullpath>
[ntfs|unix|mixed]


SECURITY STYLES

The following describes the three qtree security styles:


• NTFS
• For CIFS clients, security is handled using Windows NTFS ACLs.
• For NFS clients, the NFS UID (user id) is mapped to a Windows SID (security identifier) and
its associated groups. These mapped credentials are used to determine file access based on the
NTFS ACL.
• UNIX
• Just like UNIX, files and directories have UNIX permissions.
• Mixed
• Both NTFS and UNIX security is allowed. A file or directory can have either Windows NT
permissions or UNIX permissions.
• The default file security style is the style most recently used to set permissions on that file.

Security Styles

Security Style   Hosts That Can Change     CIFS Client Access       NFS Client Access
                 Security/Permissions      Determined by            Determined by

unix             NFS clients               UNIX permissions         UNIX permissions
                                           (Windows user names
                                           mapped to a UNIX
                                           account)

mixed            NFS and CIFS clients      Depends on the last client to set security
                                           settings (permissions)

ntfs             CIFS clients              Windows NT ACLs          Windows NT ACLs
                                                                    (UNIX user names
                                                                    mapped to a Windows
                                                                    account)


SECURITY STYLES

MULTIPROTOCOL SECURITY ADMINISTRATION

Volumes and qtrees have security styles that allow for multiprotocol access. These security
styles are UNIX, NTFS (for Windows), and mixed. To use the NTFS and mixed security
styles, a storage system must be configured as part of a Windows NT domain. On any
storage system licensed for CIFS and NFS, Windows and UNIX clients can access any
volume or qtree if they have access rights. What differs is security and permissions. The
security style specifies who can modify the security properties of files and directories in a
volume or qtree. The security style also specifies the process for authenticating users.
To accommodate new users or files that require a different style, you can change the security
style of a qtree.

TYPES OF FILE ACCESS


Data ONTAP offers the following types of file access on a storage system with both CIFS
and NFS licenses:
• CIFS access to Windows files
• CIFS access to UNIX files
• NFS access to UNIX files
• NFS access to Windows files

SECURITY STYLES
• UNIX―Files and directories have UNIX-style permissions.
• Mixed―Both NTFS and UNIX security styles are allowed. A file or directory can have either
  Windows NT permissions or UNIX permissions, and the security style is determined on a
  file-by-file basis.
• NTFS―Files and directories have Windows NT file-level permissions through access control
  lists (ACLs).

HOSTS THAT CAN CHANGE SECURITY


• NFS clients―Only NFS clients can change file permissions and ownership.
• NFS and CIFS clients―Both CIFS and NFS clients can change security.
• CIFS clients only―Only CIFS clients can change file permissions and ownership.

HOW CIFS CLIENT ACCESS IS DETERMINED


• UNIX permissions―If a CIFS client requests access, the CIFS user name is mapped to a UNIX
user name and associated with a UNIX UID and group ID (GID). Access is determined by the
rights in an ACL at the share level and is limited by the UNIX permissions assigned to the file.
• Last client to set security―Both CIFS and NFS clients can change security. This can cause
  confusion, because the effective security of a file or directory is dictated by the most
  recent client to set it.
• Windows NT ACLs―Windows NT ACLs determine CIFS client access.

HOW NFS CLIENT ACCESS IS DETERMINED


• UNIX permissions―NFS accesses to UNIX files comply with UNIX security rules.
• Last client to set security―Both CIFS and NFS clients can change security. This can cause
  confusion, because the effective security of a file or directory is dictated by the most
  recent client to set it.
• Windows NT ACLs―Windows NT ACLs determine client access. If an NFS client requests
access, the NFS user name is mapped to a CIFS user name and ACLs are applied based on the user
name and associated SID.
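The determination rules above reduce to a small decision function. The following is an illustrative model only (not NetApp code); the last_set argument simply stands in for whichever client type most recently set permissions in a mixed-style qtree.

```python
# Illustrative model of the access-determination rules above (not NetApp
# code). security_style is the qtree or volume setting; last_set records
# which client type most recently set permissions in a mixed-style qtree.

def governing_check(security_style, last_set="unix"):
    """Return the permission scheme consulted for a client access request."""
    if security_style == "unix":
        return "UNIX permissions"       # CIFS names are mapped to UNIX accounts
    if security_style == "ntfs":
        return "Windows NT ACLs"        # NFS names are mapped to Windows accounts
    if security_style == "mixed":
        # Whichever client set permissions last decides the scheme
        return "UNIX permissions" if last_set == "unix" else "Windows NT ACLs"
    raise ValueError("unknown security style: %s" % security_style)
```

Note that for unix and ntfs styles the answer does not depend on which protocol the requester uses; only the name mapping differs.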

Adding a Qtree

• To add a qtree using the CLI:
  qtree create <fullpath>
• To add a qtree using FilerView:
  FilerView > Volumes > Qtree > Add


ADDING A QTREE

CREATING QTREES USING FILERVIEW


To use FilerView to create, modify, and manage qtrees:
1. From the FilerView main menu, select Volumes > Qtrees.
2. Select Manage > Add Qtree.
3. From the Add Qtree menu, select the qtree volume, enter the qtree name, and then click Add.

You can set the security style and oplocks state when you create the qtree, or you can modify
them later by completing the following steps:
1. From the FilerView main menu, select Volumes > Qtrees.
2. From the Qtrees menu, select Manage.
3. Open the Modify menu by selecting the name of the qtree you want to change.
4. After you modify the settings, click Apply.
The new qtree is listed and the changes are updated on the FilerView screen.

Qtree Commands
system> qtree create /vol/vol2/updates
system> qtree security /vol/vol2/updates mixed
system> qtree oplocks /vol/vol2/updates disable
system> qtree create /vol/vol3/show03
system> qtree security /vol/vol3/show03 ntfs
system> qtree status
Volume   Tree     Style  Oplocks   Status
-------- -------- -----  --------  ------
vol0              unix   enabled   normal
vol0     mktg     ntfs   enabled   normal
vol1              unix   enabled   normal
vol2              unix   enabled   normal
vol2     updates  mixed  disabled  normal
vol3              ntfs   enabled   normal
vol3     show02   mixed  enabled   normal
vol3     show03   ntfs   enabled   normal


QTREE COMMANDS

qtree create pubs
    Creates the qtree pubs. If the qtree path name does not begin with a
    slash (/), the qtree is created in the root volume.

qtree create /vol/projects/engr
    Creates the qtree engr in the /vol/projects/ volume.

qtree security / ntfs
    Applies NTFS security to the files and directories in the root volume.

qtree security /vol/projects/ mixed
    Applies mixed security to the files and directories in the projects
    volume.

qtree oplocks /vol/projects/engr enable
    Enables oplocks for files and directories in the engr qtree.

qtree oplocks /vol/projects/ disable
    Disables oplocks for the files and directories of qtree 0 in the
    projects volume.

Managing Qtrees

• To manage qtrees using the CLI:
  qtree status
• To manage qtrees using FilerView:
  FilerView > Volumes > Qtree > Manage


MANAGING QTREES

Multiprotocols


MULTIPROTOCOLS

Multiprotocols

• NetApp storage systems support heterogeneous environments:
  – CIFS (usually associated with Windows NTFS)
  – NFS (usually associated with the UNIX file system)
• Setting the security style of a volume or qtree to:
  – UNIX does not prevent CIFS users from gaining access
  – NTFS does not prevent NFS users from gaining access
NOTE: The storage system’s multiprotocol feature must be properly
configured.


MULTIPROTOCOLS

NetApp storage systems support both NFS-style and CIFS-style file permissions. NFS-style
file permissions are widely used in UNIX systems, while CIFS-style file permissions are used
in Windows when communicating over networks. Because the ACL security model for CIFS
is richer than the NFS file security model used in UNIX, you cannot perform one-to-one
mapping between them. This issue has led vendors of multiprotocol file storage products to
develop nonmathematical strategies to blend the two systems and make them compatible.
This section explains the NetApp approach to this issue of incompatibility.

NAS File Access: Four Scenarios

There are four basic scenarios for NAS file access:
1. A UNIX client accesses a UNIX file (not multiprotocol)
2. A PC client accesses a file with an ACL (not multiprotocol)
3. A UNIX client accesses a file with an ACL
4. A PC client accesses a file with UNIX permissions

Because the first two scenarios are not multiprotocol scenarios, this module
focuses on scenarios 3 and 4.

NAS FILE ACCESS: FOUR SCENARIOS

MULTIPROTOCOL SCENARIOS
For the purpose of this section, the assumption is that all UNIX files have no ACLs. This is
not always true, as ACLs are preserved when changing qtree styles. But ACLs on files in a
UNIX qtree are ignored when performing permissions checks, so the net effect is as if no file
in a UNIX qtree ever has an ACL.

The /etc/usermap.cfg File

The /etc/usermap.cfg file maps between Windows NT and UNIX accounts when:
• The Windows NT account name does not match the desired UNIX account name
• A different UNIX account is required
• A different Windows NT account is required


THE /ETC/USERMAP.CFG FILE

The /etc/usermap.cfg file allows you to map a Windows user to a UNIX user, or a
UNIX user to a Windows user.

The /etc/usermap.cfg File (Cont.)

Format for File Entries

[IP_qualifier:] NT_domain\NT_user [direction] [IP_qualifier:] UNIX_user

Example /etc/usermap.cfg entries:
"Bob Garg" == bobg
mktg\Roy => nobody
engr\Tom => ""
uguest <= *


THE /ETC/USERMAP.CFG FILE (CONT.)

FORMAT FOR FILE ENTRY

IP_QUALIFIER FIELD
The IP_qualifier field contains an IP address that narrows the match. For example,
192.4.1.0/24 narrows possible matches to the 192.4.1.0 class C subnet. The IP_qualifier can
be a host name or a network name (for example, corpnet/255.255.255.0 specifies the corpnet
subnet). The IP_qualifier is an optional parameter.

NTDOMAIN\NTUSER FIELD
This field contains a user name and an optional domain name. If the Windows NT user name
is empty or specified as “” on the destination side of the map entry, the matching UNIX name
is denied access. If the storage system uses local accounts for authentication, the domain
name is the storage system name. On the source side of the map entry, using the domain name
specifies the domain in which the user resides. On the destination side of the entry, the
domain specifies the domain used for the mapped UNIX entry. If an NT user name contains
spaces, enclose the name in quotation marks.

DIRECTION FIELD
This field specifies whether the entry maps Windows to UNIX, UNIX to Windows, or in
both directions. The direction field uses one of three values: == indicates that mapping is
bidirectional (the entry maps from Windows to UNIX and from UNIX to Windows), =>
maps only from Windows to UNIX, and <= maps only from UNIX to Windows.
NOTE: Omitting the direction field is the same as using ==.

UNIX USER FIELD
This field contains a name in the UNIX password database.
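Putting the fields together, an entry can be split mechanically. The sketch below is illustrative only — parse_usermap_entry and its return keys are invented names, not part of Data ONTAP — and it handles quoted NT names, the optional IP qualifiers, and the default == direction:

```python
# Illustrative parser (not NetApp code) for the entry layout above:
#   [IP_qualifier:] NT_domain\NT_user  direction  [IP_qualifier:] UNIX_user
# Quoted NT names ("Bob Garg") and an omitted direction (treated as ==)
# are handled.

import shlex

def parse_usermap_entry(line):
    tokens = shlex.split(line, posix=False)   # keep backslashes literal
    direction = "=="
    for d in ("==", "=>", "<="):
        if d in tokens:
            direction = d
            tokens.remove(d)
            break
    left, right = tokens[0], tokens[1]

    def split_side(tok):
        qualifier = None
        if ":" in tok:
            qualifier, tok = tok.split(":", 1)
        return qualifier, tok.strip('"')      # drop quoting around NT names

    nt_qual, nt_name = split_side(left)
    ux_qual, ux_name = split_side(right)
    return {"nt": nt_name, "unix": ux_name, "direction": direction,
            "nt_qualifier": nt_qual, "unix_qualifier": ux_qual}
```

For example, parsing "Bob Garg" == bobg yields the NT name Bob Garg, the UNIX name bobg, and a bidirectional mapping.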

The /etc/usermap.cfg File (Cont.)

Guidelines for Using Asterisks


uguest <= *
uguest => *
homeuser\* == *
*\root => administrator
*\root <= root


THE /ETC/USERMAP.CFG FILE (CONT.)

GUIDELINES FOR USING ASTERISKS


The asterisk (*) is a wildcard character. When using an asterisk in user name fields, follow
these guidelines:
• If the asterisk is on the source side, any user maps to the name specified on the destination side.
• If the destination side contains an asterisk but the source side does not, no mapping is performed.
• If both the source and destination sides contain an asterisk, the corresponding name is mapped. In
this example, all UNIX users map to corresponding names in the home users domain.
The asterisk is also used in the domain name field. When using an asterisk in the domain
name field, follow these guidelines:
• If the asterisk is on the source side, the specified name in any domain maps to the specified UNIX
name.
• If the asterisk is on the destination side, the specified UNIX name maps to a Windows name in any
trusted environment.
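The user-name rules above can be sketched as a tiny matcher. This is illustrative only — it models the stated guidelines, not how Data ONTAP actually implements matching:

```python
# Sketch (not NetApp code) of the asterisk rules above for the user-name
# field: a source-side "*" matches any name; a destination-side "*" copies
# the source name through only when the source side is also wild.

def apply_wildcard(rule_src, rule_dst, name):
    """Return the mapped name, or None if the rule performs no mapping."""
    if rule_src == "*" and rule_dst == "*":
        return name                  # both wild: map to the corresponding name
    if rule_src == "*":
        return rule_dst              # any source name maps to the fixed name
    if rule_dst == "*":
        return None                  # wild destination, fixed source: no mapping
    return rule_dst if rule_src == name else None
```

Under these rules, uguest <= * maps every source-side user to uguest, while a fixed source with a wild destination performs no mapping at all.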

UNIX User Accessing an NTFS Qtree

1. Look up the user name for the UID in /etc/passwd or NIS
2. Look up the name in /etc/usermap.cfg (Domain\user <= UNIX_user)
   – If the user is mapped to "": reject
   – If the entry maps to a user name: use that Windows name
   – If there is no entry: use DOMAIN_of_filer\Username
3. Authenticate with the domain
   – Accept: multiprotocol accept
   – Reject: map to wafl.default_nt_user (if configured); otherwise reject

UNIX USER ACCESSING AN NTFS QTREE

When a UNIX client requests access to a file without UNIX permissions, the ACL on the file
is used to determine whether access is granted. The UID of the requester is validated in the
/etc/passwd file (or NIS or LDAP). If CIFS domain authentication is configured, the valid
user is looked up in the /etc/usermap.cfg file, mapped to a Windows user or SID, and
then passed to the domain for authentication. If CIFS domain authentication is configured
but there is no entry in /etc/usermap.cfg, then
Domain_Of_The_Storage_System\UNIX_name is passed to the domain for authentication. If
the user is rejected, the storage system will attempt authentication as the guest user, if
configured, or as the default user (sometimes referred to as the generic user). If the user
authentication is valid, the user is granted the share- and file-level access assigned to that
account. Otherwise, the user is denied access.
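That decision flow can be condensed into a sketch. Everything here is illustrative, not NetApp code — the function name, the dict-based map, and the domain_accepts callback are stand-ins for the real usermap lookup and domain authentication:

```python
# Decision-flow sketch (not NetApp code) for a UNIX user reaching an NTFS
# qtree, following the steps above. usermap maps UNIX names to Windows
# names ("" means explicitly denied); domain_accepts stands in for domain
# authentication; default_nt_user models the guest/generic fallback account.

def authorize_unix_user(unix_user, usermap, filer_domain,
                        domain_accepts, default_nt_user=None):
    """Return the effective Windows identity, or None if access is denied."""
    mapped = usermap.get(unix_user)
    if mapped == "":                             # mapped to "": denied outright
        return None
    # With no usermap entry, try FILER_DOMAIN\unix_name against the domain
    candidate = mapped if mapped else filer_domain + "\\" + unix_user
    if domain_accepts(candidate):
        return candidate
    return default_nt_user                       # guest/default user, if configured
```

The key design point the flow illustrates: the mapping file only chooses an identity; the domain still has the final say on whether that identity authenticates.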

Windows NT User Accessing a UNIX Qtree

1. The client sends its domain, user name, and password
2. Authenticate with the domain
   – Reject: guest access (if configured); otherwise reject
   – Accept: continue
3. Look up the name in /etc/usermap.cfg
   – If the user is mapped to "": reject
   – If the entry maps to a user name, or there is no entry: look up the
     UNIX user ID in /etc/passwd or NIS
   – If no matching UNIX account exists: map to wafl.default_unix_user
4. Multiprotocol accept

WINDOWS NT USER ACCESSING A UNIX QTREE

When a user on a PC client requests access to a file without an ACL, access is granted or
denied based on the UNIX permissions on the file. The Windows SID of the requester is
looked up on the domain controller (or in the local user database) to acquire a user name.
The user name is then mapped to a UNIX user name using the name-mapping feature. The
mapped user name is looked up in the /etc/passwd file (or NIS or LDAP) to acquire the
mapped UID and GIDs. These are then compared to the UID, GID, and associated
permissions for the file, exactly as if the requester were the mapped UNIX user.
To troubleshoot authentication, you can enable the following option:
options cifs.trace_login
NOTE: This option should be disabled when not troubleshooting authentication.

Multiprotocol Security Administration

Use these guidelines to keep entries simple and easy to understand:
• Make Windows and UNIX user names the same whenever possible. If the
  names are identical, you do not need to create map entries in the
  /etc/usermap.cfg file.
• Avoid confusing entries that map the same user to different user names.
• Use IP qualifiers only to restrict access.


MULTIPROTOCOL SECURITY ADMINISTRATION

MAPPING
By default, Windows names map to identical user names in the UNIX space. For example,
the Windows user bob maps to the UNIX user bob. You can use the user mapping file
/etc/usermap.cfg to map PC users to UNIX users that are named differently, and to
handle other special or generic users such as root, guest, administrator, nobody, and so on.

Quotas


QUOTAS

Quotas

• Quotas are used to:
  – Limit the amount of disk space that can be used
  – Track disk space usage
  – Warn of excessive usage
• Quota targets:
  – Users
  – Groups
  – Qtrees

QUOTAS

Quotas are important tools for managing the use of disk space on your storage system. A
quota is a limit that is set to control or monitor the number of files or amount of disk space an
individual or group can consume. Quotas allow you to manage and track the use of disk space
by clients on your system.
A quota is used to:
• Limit the amount of disk space or the number of files that can be used
• Track the amount of disk space or number of files used, without imposing a limit
• Warn users when disk space or file usage is high

Managing Quotas in FilerView

To manage quotas in FilerView:
FilerView > Volumes > Quotas


MANAGING QUOTAS IN FILERVIEW

Adding a Quota Using
the Quota Rule Wizard
Step 1: Quota Type


ADDING A QUOTA USING THE QUOTA RULE WIZARD

QUOTA TYPE
The type of a quota can be one of the following:
• User—Indicated by a UNIX UID or a Windows ID
• Group—Indicated by a UNIX GID
• Qtree—Represented by the qtree path name
User quotas, group quotas, and tree quotas are stored in the /etc/quotas file. You can edit
this file at any time.
Quotas are based on a Windows account name, UNIX ID, or GID in both NFS and CIFS
environments.
To implement CIFS quotas, the system administrator must maintain the /etc/passwd file
(so that CIFS users can obtain UIDs if those users are going to create UNIX files) and the
/etc/group file (so that CIFS users can obtain GIDs), or use an NIS server.
Qtree quotas do not require UIDs or GIDs. If you only implement qtree quotas, it is not
necessary to maintain the /etc/passwd and /etc/group files (or NIS services).

Adding a Quota Using
the Quota Rule Wizard (Cont.)
Step 2: Limits


ADDING A QUOTA USING THE QUOTA RULE WIZARD (CONT.)

LIMITS

DISK COLUMN
The Disk Space Hard Limit field lists the maximum disk space allocated to the quota target.
This hard limit cannot be exceeded. If the limit is reached, messages are sent to the user and
console, and SNMP traps are generated.
Use the abbreviations G for gigabytes, M for megabytes, and K for kilobytes. This field
accepts either uppercase or lowercase letters. If you omit the size abbreviation, the system
assumes K (kilobytes). Do not leave this field blank. If you want to track usage without
imposing a limit, enter a dash (-).
The maximum values for the Disk Space Hard Limit are:
• 4,294,967,295 K
• 4,194,303 M
• 4,095 G
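The size notation above can be modeled with a few lines. This is an illustrative sketch, not NetApp code — it simply encodes the rules stated in this section:

```python
# Sketch (not NetApp code) of the size notation above: an optional K/M/G
# suffix in either case, kilobytes assumed when the suffix is omitted,
# and "-" meaning that usage is tracked without imposing a limit.

def parse_quota_size(field):
    """Return the limit in kilobytes, or None when no limit is imposed."""
    if field == "-":
        return None
    multipliers = {"K": 1, "M": 1024, "G": 1024 * 1024}
    suffix = field[-1].upper()
    if suffix in multipliers:
        return int(field[:-1]) * multipliers[suffix]
    return int(field)                 # bare number: kilobytes assumed
```

For example, 50M and 50m both work out to 51,200 KB, and a bare 100 means 100 KB.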

FILES COLUMN
The Files Hard Limit field specifies the maximum number of files the quota target can use.
To track the number of files without imposing a quota, enter a dash (-) in this field or leave
it blank. You can use the size abbreviations (uppercase or lowercase) or enter an absolute
value, such as 15000.
The maximum value for the Files Hard Limit is 4,294,967,295.
NOTE: The value for the Files Hard Limit field must be on the same line in your quotas file
as the value for the Disk field; otherwise, the Files field is ignored.

THRESHOLD COLUMN
The Threshold field specifies the limit at which write requests trigger messages to the
console. If the threshold is exceeded, the write still succeeds, but a warning is logged to the
console.
The Threshold field uses the same format as the Disk field.
Do not leave this field blank. The value following Files is always assigned to the Threshold
field. If you do not want to specify a threshold limit, enter a dash (-) here.
The maximum values for the Threshold are:
• 4,294,967,295 K
• 4,194,303 M
• 4,095 G

SOFT DISK COLUMN


The Disk Space Soft Limit field specifies the disk space that can be used before a warning is
issued. If this limit is exceeded, a message is logged to the console and an SNMP trap is
generated.
When the soft disk limit returns to normal, another syslog message and SNMP trap are
generated. The Disk Space Soft Limit field has the same format as the Disk Space Hard
Limit field. If you do not want to specify a soft limit, enter a dash (-) or leave this field blank.
Maximum value for Disk Space Soft Limit: 4,294,967,295 K
NOTE: The Disk Space Soft Limit value must be on the same line as the value for the Disk
Space Hard Limit field; otherwise, soft disk limit is ignored. The sdisk limit is the NFS
equivalent of a CIFS threshold.

SOFT FILES COLUMN


The Files Soft Limit field specifies the number of files that can be used before a warning is
issued. If the soft limit is exceeded, a warning message is logged to the storage system
console and an SNMP trap is generated. When the soft files limit returns to normal, another
syslog message and SNMP trap are generated. The Files Soft Limit field has the same format
as the Files Hard Limit field. If you do not want to specify a soft files limit, enter a dash (-)
or leave the field blank.
The maximum value for the Files Soft Limit is 4,294,967,295.
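The hard, threshold, and soft limits described above interact in a simple way that the following sketch makes explicit. It is illustrative only (not NetApp code), and models disk limits in kilobytes:

```python
# Sketch (not NetApp code) of how the hard, threshold, and soft limits
# described above interact for a write: exceeding the hard limit fails the
# write, while crossing the threshold or a soft limit only logs a warning.

def check_write(used_kb, write_kb, hard_kb=None, threshold_kb=None, soft_kb=None):
    """Return (allowed, warnings) for a proposed write."""
    new_total = used_kb + write_kb
    if hard_kb is not None and new_total > hard_kb:
        return False, ["disk quota exceeded"]    # write is refused
    warnings = []
    if threshold_kb is not None and new_total > threshold_kb:
        warnings.append("threshold exceeded")    # console message only
    if soft_kb is not None and new_total > soft_kb:
        warnings.append("soft limit exceeded")   # syslog message + SNMP trap
    return True, warnings
```

Only the hard limit ever fails the write; the threshold and soft limits exist to warn the administrator before the hard limit is hit.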

Adding a Quota Using
the Quota Rule Wizard (Cont.)
Step 3: Commit


ADDING A QUOTA USING THE QUOTA RULE WIZARD (CONT.)

Changes in the /etc/quotas file are not persistent until you click Commit.

Turning Quotas On or Off


TURNING QUOTAS ON OR OFF

QUOTA COMMANDS
The table below lists commands you can use to manage quotas in each volume. If there is
only one volume on the system, you can omit the volume name on all of these commands.
Quota commands are persistent across reboots.

USING FILERVIEW
You can also manage quotas using FilerView. To access the Quota functions, enter the
storage system address in your browser and click the Volumes option. Select Quotas >
Manage.

USING THE QUOTA COMMANDS

quota on vol1
    Activates quotas on vol1 based on the contents of the /etc/quotas file.

quota resize vol1
    Activates changes to quotas on vol1 based on the contents of the
    /etc/quotas file.

qtree create /vol/vol2/techpubs
    Creates a special directory, /vol/vol2/techpubs, at the root of volume
    vol2.

Qtree Statistics

To display the number of NFS/CIFS operations resulting from user access to
files in a qtree, use qtree stats:

system> qtree stats
No qtrees are in use in Volume vol1
No qtrees are in use in Volume vol0
Volume   Tree      NFS ops CIFS ops
-------- --------- ------- --------
flexvol1 datatree1       9      262
system>


QTREE STATISTICS

To help you determine what qtrees are incurring the most traffic, the qtree stats
command enables you to display statistics about user accesses to files in the qtrees on your
system. This information can identify traffic patterns to help with qtree-based load balancing.
The storage system maintains counters for each quota tree in each of the storage system’s
volumes. These counters are not persistent.
To reset the qtree counters, use the -z flag.
The values displayed by the qtree stats command correspond to the operations on the
qtrees that have occurred since the volume (where the qtrees exist) was created, or since it
was made online on the storage system (either through a vol online command or a reboot),
or since the counters were last reset, whichever occurred most recently.
If you do not specify a volume name in the qtree stats command, the statistics for all
qtrees on the storage system are displayed. Otherwise, only the statistics for qtrees in the
specified volume are displayed.
Similarly, if you do not specify a name with the -z flag, the counters are reset on all qtrees
and all volumes.
The qtree stats command displays the number of NFS and CIFS accesses on the
designated qtrees since the counters were last reset. The qtree stats counters are reset
when one of the following actions occurs:
• System is booted
• Volume containing the qtree is brought online
• Counters are explicitly reset using the qtree stats -z command

Quota Errors

• “Disk quota exceeded”—Results from requests that cause a user or group
  to exceed an applicable quota
• “Out of disk space”—Results from requests that cause the number of
  blocks or files in a qtree to exceed the qtree limit
• Root or Windows administrator account:
  – Group quotas do not apply
  – Tree quotas do apply

QUOTA ERRORS

EXCEEDING QUOTAS
Quotas are set to warn you that limits are being approached, allowing you to act before users
are affected.
For all quota types, Data ONTAP sends console messages when the quota is exceeded and
when it returns to normal. SNMP traps for quota events are also initiated. Additional
messages are sent to the client when hard quota limits are exceeded.
NOTE: Threshold quotas in CIFS are the same as soft quotas in NFS.

QUOTA ERROR MESSAGES


When receiving a write request, Data ONTAP checks to see if the file to be written is in a
qtree. If the write would exceed the tree quota, the following error message is sent to the
console:
tid tree_ID: tree quota exceeded on volume vol_name.
If the qtree is not full, but the write would cause either the user or group quota to be
exceeded, Data ONTAP logs one of the following errors:
uid user_ID: disk quota exceeded on volume vol_name
gid group_ID: disk quota exceeded on volume vol_name

ERROR MESSAGES RECEIVED BY CLIENTS


When hard quota limits are violated, Data ONTAP returns an out of disk space error to
the NFS write request, or a disk full error to the CIFS write request.

MESSAGES FOR NFS CLIENTS
The client OS version and application determine which messages the user sees. If a UNIX
client mounts a storage system without the noquota option, every time the user logs in, the
client login program checks to see if the user has reached disk and file quotas. If a hard quota
has already been reached, the client displays a message to alert the user before displaying the
system prompt.
Not all versions of UNIX perform this quota check, and messages vary depending on version.

MESSAGES FOR CIFS CLIENTS


If a write from a CIFS client causes a hard quota to be exceeded, the message displayed to the
user depends on the operating system and the application.
An application might display the following message:
Cannot write file filename
Or, when a user tries to copy a file using the Windows Explorer in Windows 95, the
following error is displayed:
Cannot create or replace filename: cannot read from the source
file or disk.

Editing Quota Rules

To edit quota rules using FilerView:
FilerView > Volumes > Quotas > Edit Rules


EDITING QUOTA RULES

Quota Rules

• New users or groups created after the default quota is in effect will have
  the default value
• Users or groups that do not have a specific quota defined will have the
  default value
• Configurable rules (/etc/quotas fields) are:

# Target          Type                Disk  Files Thold Sdisk Sfiles
*                 user@/vol/vol2      50M   15K   45M   -     10K
/vol/home/usr/x1  user                50M   10K   45M   -     -
21                group               750M  75K   700M  -     9000
/vol/eng/proj     tree                100M  75K   90M   -     -
writers           group@/vol/techpub  75M   75K   70M   -     -
acme\cheng        user@/vol/vol2      200M  -     150M  -     -
tonyp@acme.com    user                -     -     -     -     -
rtaylor           user@/vol/vol2      200M  -     150M  -     -
s-1-5-32-544      user@/vol/vol2      200M  -     150M  -     -

NOTE: Columns are separated by white space.


QUOTA RULES

CONTENTS OF THE /ETC/QUOTAS FILE

TARGET COLUMN
The Target column identifies what the quota is applied against. In the example above,
there are multiple equivalent ways to specify the target. These entries provide target
UIDs (for users) or GIDs (for groups) to the storage system. The ID numbers must not be
0. The system checks quotas every time it receives a write request, so it is important to use a
target that won’t change over time unless you account for the change in the quotas file.
NOTE: Do not use the backslash (\) or “at” sign (@) in UNIX quota targets. Data ONTAP
interprets these characters as part of Windows names.

DEFAULT QUOTAS
You can create a default quota for users, groups, or qtrees. A default quota applies to quota
targets that are not explicitly referenced in the /etc/quotas file.

OVERRIDING DEFAULT QUOTAS


If you do not want Data ONTAP to apply a default quota to a particular target, you can create
an entry in the /etc/quotas file for that target so that the explicit quota overrides the
default quota.
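The override behavior reduces to a simple lookup. The sketch below is illustrative only — the function and argument names are invented, and real quota entries carry more fields than a single limit:

```python
# Sketch (not NetApp code) of default-quota resolution: a target with an
# explicit /etc/quotas entry uses that entry; any other target of the same
# type falls back to the default ("*") entry, if one exists.

def effective_quota(target, explicit_rules, default_rule=None):
    """Return the limit that applies to target."""
    if target in explicit_rules:
        return explicit_rules[target]   # explicit entry overrides the default
    return default_rule                 # otherwise the default quota applies
```

This is why an explicit entry for a heavy user coexists cleanly with a "*" default: resolution always prefers the explicit entry.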

Quota Report


QUOTA REPORT

THE QUOTA REPORT COMMAND


The quota report command prints the current file and space consumption for each user,
group, and qtree. Using a path argument, it displays information about all quotas that apply to
the files in the path. Space consumption and disk limits are rounded up and reported in
multiples of 4 KB.
In the example above, the quota report command is used with the –u option. For targets
with multiple IDs, this report shows the first ID on the first line of each report entry. Other
IDs are shown on separate lines with one ID per line. Each ID is followed by its original
quota specifier, if any. The default is to display one ID per target.
The following are options available with the quota report command:
• -s prints soft and hard limit values for each user, group, and qtree
• -u shows the first ID on the first line of each report entry for targets with multiple IDs. Other IDs
are shown on separate lines with one ID per line. Each ID is followed by its original quota specifier,
if any. The default is to display one ID per target.
• -x shows all IDs (separated by commas) on the first line of each report entry for targets with
multiple IDs. The report also shows the threshold column. Columns are tab-delimited.
• -t prints the threshold of the quota entry. If omitted, the warning threshold is not included.
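The 4-KB rounding of reported space consumption can be sketched as follows (illustrative
Python; `round_up_4k` is a hypothetical helper, not a Data ONTAP command):

```python
def round_up_4k(num_bytes):
    """Round a byte count up to the next multiple of 4 KB (4096 bytes),
    the granularity in which quota reports show space consumption and
    disk limits."""
    block = 4096
    return ((num_bytes + block - 1) // block) * block

print(round_up_4k(1))      # 4096
print(round_up_4k(8192))   # 8192
print(round_up_4k(10000))  # 12288
```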

Resizing Quotas

• Quota information is stored in the
/etc/quotas file
• Resizing adjusts the active quotas in a specific
volume to reflect changes in the
/etc/quotas file
• The values in the /etc/quotas file are
displayed in FilerView when you select:
FilerView > Volumes > Quotas > Edit Rules

© 2008 NetApp. All rights reserved. 32

RESIZING QUOTAS

THE QUOTA RESIZE COMMAND


The quota resize command adjusts currently active quotas to reflect changes in the
/etc/quotas file. For example, if you edit an entry in /etc/quotas to increase a quota,
executing the quota resize command causes the change to take effect. To view active
quotas, create a quota report before and after the quota resize.
Use quota resize only when quotas are already set for the volume. The quota resize
command implements additions and changes to the /etc/quotas file.

Resizing Quotas (Cont.)

For changes to take effect, you must select Resize.

© 2008 NetApp. All rights reserved. 33

RESIZING QUOTAS (CONT.)

USING FILERVIEW
You can also use FilerView to view reports showing quota usage for volumes. To access the
Reports function, enter your storage system address into your browser, and select Volumes.
Then click Quotas > Report.

Quota Information

• Beginning with Data ONTAP version 7.3,
AutoSupport contains the following quota
information:
– A collection of quota statistics, including a set of
new counters that collect quota statistics
– The quota configuration file (/etc/quotas)
– The user mapping file (/etc/usermap.cfg)
• Quota information is included in AutoSupport as
attachments

© 2008 NetApp. All rights reserved. 34

QUOTA INFORMATION

QUOTA INFORMATION IN AUTOSUPPORT


Starting with Data ONTAP 7.3, AutoSupport includes quota information, which improves
NetApp's response to quota-related support questions. Before Data ONTAP 7.3, if you had a
quota-related support question, you had to gather the appropriate information yourself and
send it to NetApp support, which sometimes created a delay of several days. With the
inclusion of quota information in AutoSupport, this information is sent to NetApp
automatically. Quota information in AutoSupport also allows NetApp to store quota statistics
that are useful for analysis.
Quota information is included in AutoSupport as an attachment. The attachment name
appears in the format YYYYMMDDHHMM.N.quotas.gz. For privacy protection, the contents of
the quota files are scrambled.
The AutoSupport attachment contains three types of quota information:
• A collection of quota statistics
• The configuration file, /etc/quotas
• The user-mapping file, /etc/usermap.cfg

Module Summary

In this module, you should have learned that:


• Qtrees allow administrators to create logical
file systems within a volume
• Quotas are applied to qtrees
• The /etc/usermap.cfg file allows Windows
users to be mapped to a UNIX account or
UNIX users to be mapped to a Windows
account
• The /etc/quotas file defines qtree, volume,
and user quotas for specific users as well as
the default user
© 2008 NetApp. All rights reserved. 35

MODULE SUMMARY

Exercise
Module 10: Qtrees and Security
Styles
Estimated Time: 60 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

SAN



MODULE 11: SAN

SAN
Module 11
Data ONTAP® 7.3 Fundamentals

SAN

Module Objectives

By the end of this module, you should be able to:


• Explain the purpose of a SAN
• Identify supported SAN configurations
• Distinguish between FC and iSCSI protocols
• Define a LUN and explain LUN attributes
• Use the lun setup command and FilerView
to create iSCSI LUNs
• Access and manage a LUN from a Windows
host
• Define SnapDrive® and its features

© 2008 NetApp. All rights reserved. 2

MODULE OBJECTIVES

SAN Overview

© 2008 NetApp. All rights reserved. 3

SAN OVERVIEW

What is a SAN?

[Diagram: A NetApp FAS system serving both SAN (blocks) and NAS (files) — hosts
connect over FC or iSCSI for SAN access, and over the corporate Ethernet LAN for NAS access]

© 2008 NetApp. All rights reserved. 4

WHAT IS A SAN?

The Storage Networking Industry Association (SNIA) defines a Storage Area Network (SAN)
as “a network whose primary purpose is the transfer of data between computer systems and
storage elements and among storage elements.”
There are usually three parts to a SAN: host, fabric network, and storage system. SANs may
also be directly connected (host to storage system).

SAN Protocols

[Diagram: WAFL architecture exposing block services through the SAN protocols FCP and
iSCSI — encapsulated SCSI is carried over a Fibre Channel network (FC interface) or a
TCP/IP network (Ethernet interface)]

© 2008 NetApp. All rights reserved. 5

SAN PROTOCOLS

Network access to LUNs on a NetApp storage system can be either through an FC network or
a TCP/IP-based network. Both of these protocols carry encapsulated SCSI commands as the
data transport mechanism.

SAN Components

© 2008 NetApp. All rights reserved. 6

SAN COMPONENTS

SAN Components

• Hosts
– Supported platforms are Windows, Solaris,
AIX®, HP-UX, Linux, NetWare®, and VMware®
– Referred to as “initiators”
• Connectivity
– Direct-attach
– Network—Network can be fabric (FCP) or IP
(iSCSI)
• Storage system
– Allocates blocks to an initiator group (igroup)
– Referred to as “targets”
© 2008 NetApp. All rights reserved. 7

SAN COMPONENTS

STORAGE DEVICES
The storage devices in a SAN are the NetApp storage systems.

FABRIC OR NETWORK
Fabrics and networks provide any-to-any connectivity between servers and storage devices.
• FC fabrics use FC switches.
• Ethernet networks use standard Ethernet switches.

HOSTS
Hosts are connected to the fabric or network using:
• FC SAN—Host bus adapters (HBAs) in an FC environment
• IP SAN—Standard NICs or HBAs in an IP SAN environment

FC Components

Host bus adapter (HBA)


• Storage systems and hosts have HBAs with
one or more ports.
• The HBA is identified by a Worldwide Node
Name (WWNN).
• Each port on the HBA is identified by a
Worldwide Port Name (WWPN), which is used
to create igroups.
HBA WWNN
20:00:00:2b:34:26:a6:54

HBA WWPN
21:00:00:2b:34:26:a6:54
22:00:00:2b:34:26:a6:54

© 2008 NetApp. All rights reserved. 8

FC COMPONENTS

HOW IS AN FC SAN IMPLEMENTED?


An FC SAN requires HBAs (supplied by vendors such as Emulex®) to connect hosts and
storage systems to each other or to FC switches. Each FCP node is represented by a WWNN
and a WWPN. The WWPNs are used to create igroups, and LUNs are mapped to these
igroups. This allows a host (whose WWPN is in a specific igroup) to access the LUNs that
are mapped to that igroup.
The FC specification for naming nodes and ports on nodes is loosely followed. Each device is
given a globally unique WWNN and an associated WWPN for each port on the node.
WWNNs and WWPNs are made up of 16 hexadecimal digits grouped together in twos, with a
colon separating each pair (for example, 21:00:00:2b:34:26:a6:54).
The FC WWN (worldwide name) specification states that the first four digits of a node should
be 1N:NN, where 1 defines it as a node, and the next three numbers (N:NN) are reserved for
WWPN port definition.
The first four digits of a port should be 2N:NN, where the 2 defines it as a port, and the next
three numbers (N:NN) are reserved for WWPN port definition.
In the example above, 21:00 represents Port 1 (Port 2 is 22:00). Port 1 could also have been
represented as 20:01 (or 20:02 for Port 2). HBA vendors follow naming conventions
loosely, and, in many cases, they use 20:00 for the node and 21:00 for the port.
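The textual form described above (16 hexadecimal digits grouped in colon-separated
pairs) can be checked with a short sketch (illustrative Python, not part of Data ONTAP;
`is_valid_wwn` is a hypothetical helper):

```python
import re

# A worldwide name such as 21:00:00:2b:34:26:a6:54 consists of
# 16 hex digits written as 8 colon-separated pairs.
WWN_RE = re.compile(r"^([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}$")

def is_valid_wwn(name):
    """Return True if name has the WWNN/WWPN textual format."""
    return bool(WWN_RE.match(name))

print(is_valid_wwn("21:00:00:2b:34:26:a6:54"))  # True
print(is_valid_wwn("21:00:2b:34"))              # False
```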

iSCSI Components

• A host is configured with one of the following:


– Software initiator with a standard NIC
– TCP Offload Engine (TOE) with a software
initiator
– iSCSI HBA
• A host initiator is identified by an iSCSI node
name, such as the following:
– iqn.1998-02.com.netapp:sn12345678
– eui.1234567812345678
• A storage system is configured with either:
– iSCSI HBA
– Standard NIC
© 2008 NetApp. All rights reserved. 9

ISCSI COMPONENTS

HOW IS AN ISCSI SAN IMPLEMENTED?


As in FC, nodes in iSCSI are identified by unique node names that are used to create igroups.
There are two formats for node names: iqn and eui. LUNs are mapped to the igroups, and
hosts whose node names are in an igroup can see the LUNs mapped to that igroup.
You can implement iSCSI on a host using one of the following methods:
• Initiator software that uses the host’s standard Ethernet interfaces
• An iSCSI HBA
• TOE adapter that offloads TCP/IP processing (iSCSI protocol processing is performed by host
software)
Storage controllers are connected to the network through standard Ethernet interfaces or
through target HBAs.
Nodes are identified in IP SAN environments by a node identifier using the following
formats:
• iqn.1998-02.com.netapp:sn12345678
• eui.1234567812345678
The host node names are used to create igroups, which control host access to LUNs.
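The two node-name formats above can be distinguished with a small sketch (illustrative
Python; `node_name_format` is a hypothetical helper, not an exhaustive RFC 3720 validator):

```python
import re

# iqn.yyyy-mm.{reversed domain name}[:{any string}]
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[^:]+(:.*)?$")
# eui.{EUI-64 address}: exactly 16 hex digits
EUI_RE = re.compile(r"^eui\.[0-9a-fA-F]{16}$")

def node_name_format(name):
    """Classify an iSCSI node name as 'iqn', 'eui', or 'invalid'."""
    if IQN_RE.match(name):
        return "iqn"
    if EUI_RE.match(name):
        return "eui"
    return "invalid"

print(node_name_format("iqn.1998-02.com.netapp:sn12345678"))  # iqn
print(node_name_format("eui.1234567812345678"))               # eui
```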

Initiator/Target Relationship
[Diagram: Host (initiator), fabric/network, and controller (target) — a Windows or UNIX
host reaches the storage system's SAN services through FC HBAs (SCSI over FC), iSCSI
HBAs, or Ethernet NICs with an iSCSI driver over TCP/IP (SCSI over TCP/IP), while
direct-attached storage uses local SCSI adapters; on the controller, Data ONTAP serves
LUNs through WAFL and RAID on FC or SATA disks]

© 2008 NetApp. All rights reserved. 10

INITIATOR/TARGET RELATIONSHIP

An initiator is a host in a SAN. The initiator communicates directly with a target, or over a
network. The network can be either IP-based with iSCSI, or a fabric with FCP. The controller
or target receives calls from the initiator and allows it access to LUNs.

LUN Overview

A LUN (Logical Unit Number):


• Is a logical representation of storage
• Is configured as a single disk
• Appears as a local disk on the host
• Is centrally stored, but managed by a host at the
block level

© 2008 NetApp. All rights reserved. 11



LUN OVERVIEW

Block-based storage is defined by logical unit numbers (LUNs). A LUN is a logical


representation of a physical unit of storage.
A LUN is a collection, or part of a collection, of physical or virtual disks that are configured
as a single disk. LUNs appear as local disks that can be formatted and managed by the host.
LUNs are managed at the block level, so Data ONTAP does not interpret the data.

Setting Up a SAN

© 2008 NetApp. All rights reserved. 12

SETTING UP A SAN

Setting Up a SAN

To set up a SAN:
1. License the appropriate SAN protocol on the
storage system.
2. Create a volume or qtree where the LUN will
reside (apply quotas when appropriate).
3. Verify the SAN protocol driver is on.
4. Configure the host initiator.
5. Create the LUN and igroup, and then
associate the igroup to the LUN.

© 2008 NetApp. All rights reserved. 13

SETTING UP A SAN

Review Questions

• How do you license the appropriate SAN
protocol on the storage system?
• How do you create a volume or qtree for a
LUN?

© 2008 NetApp. All rights reserved. 14

REVIEW QUESTIONS

Managing FCP or iSCSI

• After licensing, the FCP or iSCSI drivers can
be activated
• To manage the FCP or iSCSI protocols:
– Using the CLI
• FCP
fcp [subcommand]
Example: fcp status
• iSCSI
iscsi [subcommand]
Examples: iscsi start or iscsi status
– Using FilerView

© 2008 NetApp. All rights reserved. 16

MANAGING FCP OR ISCSI

After protocols are licensed, you must start the services. For FCP, this requires a reboot. For
iSCSI, you can issue the iscsi start command.
You can administer FCP and iSCSI from both the CLI and FilerView.

Configuring the Initiator

• Only the initiator(s) identified in the LUN igroup
can access the LUN through the protocol
specified (FCP or iSCSI)
• There are different methods for setting up the
initiator depending on:
– Host operating system
– SAN protocol used (FCP or iSCSI)
– Method of connection (HBA or software initiator)

© 2008 NetApp. All rights reserved. 17

CONFIGURING THE INITIATOR

This module focuses on Windows platforms using the iSCSI Software Initiator from
Microsoft. For more information about configuring iSCSI and FCP LUNs on other platforms,
see the Data ONTAP SAN Administration course.

iSCSI Software Initiator

© 2008 NetApp. All rights reserved. 18

ISCSI SOFTWARE INITIATOR

CONFIGURING MICROSOFT ISCSI SOFTWARE INITIATOR


To configure the iSCSI Software Initiator:
2. Install the Microsoft iSCSI Software Initiator.
3. In the Discovery tab, specify the storage system’s IP address as a target portal.

iSCSI Software Initiator (Cont.)

© 2008 NetApp. All rights reserved. 19

ISCSI SOFTWARE INITIATOR (CONT.)

CONFIGURING MICROSOFT ISCSI SOFTWARE INITIATOR (CONT.)

4. The storage system's iSCSI target node name is displayed in the Targets list.


5. Select the target node name and click Log On.

iSCSI Software Initiator (Cont.)

© 2008 NetApp. All rights reserved. 20

ISCSI SOFTWARE INITIATOR (CONT.)

CONFIGURING MICROSOFT ISCSI SOFTWARE INITIATOR (CONT.)

6. In the Log On to Target window, if you select “Automatically restore this connection when the
system reboots,” then the connection appears in the Persistent Targets tab.

Creating LUNs

Create LUNs using one of the following methods:


• The CLI:
– lun create
– lun setup
• FilerView
• SnapDrive (covered later in this module)

© 2008 NetApp. All rights reserved. 21

CREATING LUNS

METHODS FOR CREATING LUNS


You can create a LUN using one of the following methods:
1. Use the lun create command on the storage system.
a. This command creates only the LUN.
b. When using this command, you must complete the following additional configuration
steps:
i. Create initiator groups using the igroup create command.
ii. Map the LUN to an initiator group using the lun map command.
iii. Add a portset (FCP only).
2. Use the lun setup command on the storage system. This command is a wizard that walks you
through creating and mapping the LUN and igroup.
3. Use FilerView on a client host.
4. Use SnapDrive on a client host.

The lun setup Command

system> lun setup


This setup will take you through the steps needed to create LUNs
and to make them accessible by initiators. You can type ^C (Control-C)
at any time to abort the setup and no unconfirmed changes will be made
to the system.
Do you want to create a LUN? [y]: y
Multiprotocol type of LUN (image/solaris/windows/hpux/aix/linux) [image]: windows
A LUN path must be absolute. A LUN can only reside in a volume or
qtree root. For example, to create a LUN with name "lun0" in the
qtree root /vol/vol1/q0, specify the path as "/vol/vol1/q0/lun0".
Enter LUN path: /vol/winvol/tree1/lun0
A LUN can be created with or without space reservations being enabled.
Space reservation guarantees that data writes to that LUN will never
fail.
Do you want the LUN to be space reserved? [y]: y
Size for a LUN is specified in bytes. You can use single-character
multiplier suffixes: b(sectors), k(KB), m(MB), g(GB) or t(TB).
Enter LUN size: 12g
You can add a comment string to describe the contents of the LUN.
Please type a string (without quotes), or hit ENTER if you don't
want to supply a comment.
Enter comment string: Windows LUN

© 2008 NetApp. All rights reserved. 22

THE LUN SETUP COMMAND

CREATING AN ISCSI LUN FOR WINDOWS

When creating an iSCSI LUN for Windows, the LUN will have the following attributes:

OPERATING SYSTEM WINDOWS


Path /vol/winvol/tree1/lun0
Space Reservation Yes (Y)
Size 12 gigabyte (12g)
igroup Type iSCSI
Node Name iqn.1991-05.com.microsoft:slu2-
win.edsvcs.netapp.com
OS for Initiator Group Windows

LUN ID 0
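The single-character size multipliers accepted by lun setup (b, k, m, g, t) can be sketched
as follows (illustrative Python; `parse_size` is a hypothetical helper, and Data ONTAP may
adjust the final LUN size slightly for disk geometry, which is why the transcript reports
12889013760 bytes for 12g):

```python
# b = 512-byte sectors; k/m/g/t = binary KB/MB/GB/TB multiples.
MULTIPLIERS = {"b": 512, "k": 2**10, "m": 2**20, "g": 2**30, "t": 2**40}

def parse_size(spec):
    """Convert a size string such as '12g' to a byte count."""
    suffix = spec[-1].lower()
    if suffix in MULTIPLIERS:
        return int(spec[:-1]) * MULTIPLIERS[suffix]
    return int(spec)  # bare number: already bytes

print(parse_size("12g"))  # 12884901888
```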

The lun setup Command (Cont.)

The LUN will be accessible to an initiator group. You can use an


existing group name, or supply a new name to create a new initiator
group. Enter '?' to see existing initiator group names.
Name of initiator group: ?
Existing initiator groups:
Name of initiator group: salesigroup
Type of initiator group salesigroup (FCP/iSCSI) [FCP]: iscsi
An iSCSI initiator group is a collection of initiator node names. Each
node name can begin with either 'eui.' or 'iqn.' and should be in the
following formats: eui.{EUI-64 address} or iqn.yyyy-mm.{reversed domain
name}:{any string}
Eg: iqn.2001-04.com.acme:storage.tape.sys1.xyz or eui.02004567A425678D
You can separate node names by commas. Enter '?' to display a list of
connected initiators. Hit ENTER when you are done adding node names to
this group.
Enter comma separated nodenames: ?
Initiators connected on adapter iswta:
iSCSI Initiator Name Group
iqn.1991-05.com.microsoft:slu2-win.edsvcs.netapp.com

Adapter iswtb is running on behalf of the partner.


Enter comma separated nodenames: iqn.1991-05.com.microsoft:slu2-win.edsvcs.netapp.com
Enter comma separated nodenames:

© 2008 NetApp. All rights reserved. 23

THE LUN SETUP COMMAND (CONT.)

The lun setup Command (Cont.)

The initiator group has an associated OS type. The following are


currently supported: solaris, windows, hpux, aix, linux or default.
OS type of initiator group "salesigroup" [windows]: windows
The LUN will be accessible to all the initiators in the
initiator group. Enter '?' to display LUNs already in use
by one or more initiators in group "salesigroup".
LUN ID at which initiator group "salesigroup" sees "/vol/winvol/tree1/lun0"[0]: 0
LUN Path : /vol/winvol/tree1/lun0
OS Type : windows
Size : 12.0g (12889013760)
Comment : Windows LUN
Initiator Group : salesigroup
Initiator Group Type : iSCSI
Initiator Group Members : iqn.1991-05.com.microsoft:slu2-win.edsvcs.netapp.com
Mapped to LUN-ID : 0
Do you want to accept this configuration? [y]: y
Do you want to create another LUN? [n]: n
NetApp1> lun show -m
LUN path Mapped to LUN ID
-------------------------------------------------------------
/vol/winvol/tree1/lun0 salesigroup 0
system>

© 2008 NetApp. All rights reserved. 24

THE LUN SETUP COMMAND (CONT.)

Creating a LUN Using FilerView

© 2008 NetApp. All rights reserved. 25

CREATING A LUN USING FILERVIEW

To create a LUN using the FilerView LUN Wizard, complete the following steps:
1. In the FilerView left navigation pane, click LUNs > Wizard.
2. To open the Specify LUN Parameters window, click Next.
3. Follow the Wizard instructions and enter all requested information.
4. To open the Commit Changes window, click Next.
5. If the information displayed is correct, click Commit.

Accessing a LUN

© 2008 NetApp. All rights reserved. 26

ACCESSING A LUN

Accessing a LUN

© 2008 NetApp. All rights reserved. 27

ACCESSING A LUN

After creating a LUN with the lun setup command, use Windows Disk Management on the
host to prepare the LUN for use. The new LUN should be visible as a local disk. If it is not,
click the Action button in the toolbar, and then click Rescan Disks.
Disk Management will:
1. Write the disk signature
2. Partition the disk
3. Format the disk

Accessing a LUN (Cont.)

© 2008 NetApp. All rights reserved. 28

ACCESSING A LUN (CONT.)

To open the Create Partition Wizard, right-click the bar that represents the unallocated disk
space, and then select Create Partition. Or, from the Action dropdown menu in the Computer
Management window, you can click All Tasks > Create Partition.

Accessing a LUN (Cont.)

© 2008 NetApp. All rights reserved. 29

ACCESSING A LUN (CONT.)

Select either a Primary or Extended partition.

Accessing a LUN (Cont.)

© 2008 NetApp. All rights reserved. 30

ACCESSING A LUN (CONT.)

Create a Primary partition no larger than the maximum size available. Choose the partition
size and drive letter.

Accessing a LUN (Cont.)

© 2008 NetApp. All rights reserved. 31

ACCESSING A LUN (CONT.)

Accept the default drive assignment or use the drop-down list to select a different drive.
Partition the drive using the settings shown, but change the Volume Label to an appropriate
Windows volume name that represents the LUN you are creating.
Review the settings specified and then click Finish.

Accessing a LUN (Cont.)

© 2008 NetApp. All rights reserved. 32

ACCESSING A LUN (CONT.)

Verify that the LUN appears as a local drive in Disk Management. If it appears as a local
drive, you can then copy files to the new disk and treat it like any other local disk.

SnapDrive

© 2008 NetApp. All rights reserved. 33

SNAPDRIVE

SnapDrive

• SnapDrive ensures consistent LUN Snapshot
copies and is available:
– From NetApp to manage a LUN from a host
– For the Windows, Solaris, Linux, AIX, and HP-
UX platforms
• SnapDrive can create an iSCSI LUN on the
storage system and automatically attach it to
the client host
NOTE: If SnapDrive is used to create a LUN, you must use
SnapDrive to manage that LUN. Do not use the CLI to delete,
rename, or otherwise manage a LUN created by SnapDrive.

© 2008 NetApp. All rights reserved. 34

SNAPDRIVE

SnapDrive is management software for Windows 2000, Windows 2003, and Windows 2008
systems that provides virtual-disk and Snapshot management on the client side. Use
SnapDrive to create FCP or iSCSI LUNs on a Windows host.
SnapDrive includes three main components:
1. Windows 2000 service
2. Microsoft Management Console (MMC) plug-in
3. CLI
SnapDrive includes the same features as the lun setup command on the storage system, but
it also adds the ability to attach LUNs to the Windows host and integrates LUN use with
other NetApp applications, such as SnapManager.

SnapDrive for Windows

© 2008 NetApp. All rights reserved. 35

SNAPDRIVE FOR WINDOWS

SnapDrive for Windows provides an interface for Windows to interact with LUNs directly.
SnapDrive also:
• Enables online storage configuration, LUN expansion, and streamlined management
• Integrates Snapshot technology to create point-in-time images of data stored in LUNs

Other SAN Administration Resources
For more information about SAN administration, see the
SAN Administration on Data ONTAP 7.3 course.
This advanced course covers:
• Creating FCP and iSCSI LUNs from the CLI
• Creating FCP and iSCSI LUNs from SnapDrive with
Windows
• Creating FCP and iSCSI LUNs from SnapDrive with
Solaris
• Configuring Solaris hosts for FCP and iSCSI
• Configuring Windows hosts for FCP and iSCSI
• Configuring other hosts, such as Linux, HP-UX, and
AIX
• SAN in a clustered storage system environment
• SAN performance tuning
• SAN troubleshooting
© 2008 NetApp. All rights reserved. 36

OTHER SAN ADMINISTRATION RESOURCES

Module Summary

In this module, you should have learned that:


• A SAN is a Storage Area Network
• A LUN is a Logical Unit Number
• The lun setup command and FilerView are
common ways to create LUNs
• iSCSI is a simple but effective protocol for
connecting to a LUN
• To access a LUN from a Windows host, you
use Disk Management
• SnapDrive creates a LUN and connects to that
LUN quickly and securely
© 2008 NetApp. All rights reserved. 37

MODULE SUMMARY

Exercise
Module 11: SAN
Estimated Time: 45 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

Snapshot Copies



MODULE 12: SNAPSHOT COPIES

Snapshot Copies
Module 12
Data ONTAP® 7.3 Fundamentals

SNAPSHOT COPIES

Module Objectives

By the end of this module, you should be able to:


• Describe the function of Snapshot copies
• Explain the benefits of Snapshot copies
• Identify and execute Snapshot commands
• Create and delete a Snapshot copy
• Configure and modify Snapshot options
• Explain the importance of the .snapshot directory
• Describe the disk space consumed by Snapshot copies for volumes and aggregates
• Schedule a Snapshot copy through FilerView
• Configure and manage the Snapshot reserve


MODULE OBJECTIVES

Overview


OVERVIEW

Snapshot Technology

• A Snapshot copy is a read-only image of a volume at a point in time
• The benefits of Snapshot technology are:
  – Nearly instantaneous application data backups
  – Fast recovery of data lost due to:
    • Accidental data deletion
    • Accidental data corruption
• Snapshot technology is the foundation for:
  – SnapRestore®, SnapDrive®, and FlexClone®
  – SnapManager®, SnapMirror®, and SnapVault®

SNAPSHOT TECHNOLOGY

The Snapshot technology is a key element in the implementation of the WAFL (Write
Anywhere File Layout) file system.
• A Snapshot copy is a read-only, space-efficient, point-in-time image of data in a volume or
aggregate
• A Snapshot copy is only a “picture” of the file system and does not contain any data file content
• Snapshot copies are used for backup and error recovery
Data ONTAP automatically creates and deletes Snapshot copies of data in volumes to support
commands related to Snapshot technology.

Snapshot and WAFL

• Snapshot technology is a core feature of the storage system's WAFL file system
• Every file in WAFL has at least one inode that organizes its data
• Each volume can contain up to 255 Snapshot copies


SNAPSHOT AND WAFL

Inodes

• An inode is a data structure that is used to represent file system objects such as files and directories
• An inode is a 192-byte structure that describes a file's attributes, including:
  – Type of file (regular file, directory, link, and so on)
  – Size
  – Owner, group, permissions
  – Pointer to xinode (ACLs)
  – Complete file data, if the file is 64 bytes or less
  – Pointers to data blocks


INODES

WAFL FILES AND INODES


WAFL inodes are similar to Berkeley Fast File System (FFS) inodes. The Veritas and
Microsoft file systems are based on the Berkeley FFS, which forces writes to preallocated
locations. The primary difference between WAFL and the Berkeley FFS is that WAFL
writes data and metadata blocks to the next available locations instead of to predefined
locations.

ROOT INODES
The most important metadata file is the inode file, which contains the inodes that describe all
other files in the file system. The inode that describes the inode file itself is the root inode.
The root inode is a fixed-disk location.
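The inode fields listed on the slide can be pictured with a small sketch. This is hypothetical Python for illustration only; the field names are ours, and this is not the actual 192-byte on-disk layout:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Inode:
    # Fields paraphrasing the slide; names are illustrative, not WAFL's.
    file_type: str                        # "file", "directory", "link", ...
    size: int                             # bytes
    owner: str
    group: str
    permissions: int                      # e.g. 0o644
    xinode: Optional[int] = None          # pointer to ACL metadata
    inline_data: Optional[bytes] = None   # whole file content if <= 64 bytes
    block_pointers: List[int] = field(default_factory=list)

# A 40-byte file fits entirely in the inode; no data blocks are needed.
tiny = Inode("file", 40, "root", "other", 0o644, inline_data=b"x" * 40)
assert tiny.block_pointers == []
```

This illustrates why very small files consume no data blocks: their content lives inside the inode itself.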

Snapshot Copies and Inodes

• A Snapshot copy is a copy of the root inode of a volume
• The inodes of a Snapshot copy are read-only
• When the Snapshot inode is created:
  – It points to exactly the same disk blocks as the root inode
  – A brand-new Snapshot copy consumes only the space for the inode itself


SNAPSHOT COPIES AND INODES

A Snapshot copy is a frozen, read-only image of a traditional volume, a FlexVol volume, or
an aggregate that reflects the state of the file system at the time the Snapshot copy was
created. Snapshot copies are your first line of defense for backing up and restoring data. You
can configure the Snapshot copy schedule.

Managing Inodes

• To verify the number of inodes:
  df -i
• To increase the maximum number of inodes:
  maxfiles


MANAGING INODES

LEVEL 4 INODES
For file sizes between 64 GB and 8 TB, the single-indirect blocks in Level 3 inodes become
double-indirect blocks. These double-indirect blocks reference 1,024 single-indirect blocks,
each of which references up to 1,024 4-kB data blocks.

DF -I
The df -i command displays the number of inodes in a volume. For more information about
this command, see the manual pages.

MAXFILES
The maxfiles command increases the number of inodes designated in a volume. For more
information about this command, see the manual pages.

How Snapshot Works

[Diagram: in the active file system, file X points to disk blocks A, B, and C]


HOW SNAPSHOT WORKS

Before a Snapshot copy is created, there is a file system tree pointing to data blocks that
contain content. When the Snapshot copy is created, a copy of the file structure metadata is
created. The Snapshot copy points to the same data blocks.

How Snapshot Works (Cont.)

[Diagram: the active file system and the Snapshot copy both point to file X's disk blocks A, B, and C; the blocks are "frozen" on disk]

• Consistent point-in-time copy
• Ready to use (read-only)
• Consumes no space*

* With the exception of the 4 kB replicated root inode block that defines the Snapshot


HOW SNAPSHOT WORKS (CONT.)

There is no significant impact on disk space when a Snapshot copy is created. Because the
file structure takes up little space, and no data blocks must be copied to disk, a new Snapshot
copy consumes almost no additional disk space. In this case, the phrase “Consumes no space”
really means no appreciable space. The so-called “top-level root inode,” which is necessary
to define the Snapshot copy, is 4 kB.

How Snapshot Works (Cont.)

[Diagram: a client sends new data for block C of file/LUN X; WAFL writes the new data to a new block, C', so the active file system points to A, B, and C' while the Snapshot copy still points to A, B, and C]


HOW SNAPSHOT WORKS (CONT.)

Snapshot copies begin to use space when data is deleted or modified. WAFL writes the new
data to a new block (C’) on the disk and changes the root structure for the active file system
to point to the new block.
Meanwhile, the Snapshot copy still references the original block C. As long as there is a
Snapshot copy referencing a data block, the block remains unavailable for other uses. This
means that Snapshot copies start to consume disk space only as the file system changes after a
Snapshot copy is created.
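The copy-on-write sequence described above can be sketched as a toy model. This is plain Python, not Data ONTAP code; the block names mirror the figure:

```python
# Toy model of WAFL copy-on-write: the active file system and a Snapshot
# copy are just mappings from file names to lists of block IDs.
active = {"X": ["A", "B", "C"]}

# Creating a Snapshot copy duplicates only the metadata (the pointers),
# not the data blocks themselves.
snapshot = {name: list(blocks) for name, blocks in active.items()}

# A client overwrites the data in block C: WAFL writes a new block C'
# and repoints the active file system; the Snapshot copy is untouched.
active["X"] = ["A", "B", "C'"]

# Block C is still held by the Snapshot copy, so it cannot be reused;
# this is the moment the Snapshot copy starts to consume space.
held_blocks = {b for blocks in snapshot.values() for b in blocks}
assert "C" in held_blocks
```

The key design point mirrored here is that nothing is copied at Snapshot creation time; space is consumed only when blocks diverge afterward.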

How Snapshot Works (Cont.)

[Diagram: the active version of file X points to disk blocks A, B, and C'; the Snapshot version still points to blocks A, B, and C]

• The active version of X is now composed of blocks A, B, and C'
• The Snapshot version of X remains composed of blocks A, B, and C
• WAFL atomically moves the active file system to a new consistent state


HOW SNAPSHOT WORKS (CONT.)

Creating Snapshot
Copies


CREATING SNAPSHOT COPIES

Taking a Snapshot Copy

• Administrators can take Snapshot copies of:
  – Aggregates
    • The default Snapshot reserve for an aggregate is 5% of the aggregate
    • Restoring an aggregate Snapshot copy restores all volumes within that aggregate
  – Volumes
    • The default Snapshot reserve for a volume is 20% of the volume
    • Administrators can restore the entire volume or one or more files
• To change the amount of Snapshot reserve:
  snap reserve [ -A | -V ] [volume_name] [percent]


TAKING A SNAPSHOT COPY

VOLUMES
Snapshot copies for traditional and flexible volumes are stored in special subdirectories that
can be made accessible to Windows and UNIX clients so that users can access and recover
their own files without assistance. The maximum number of Snapshot copies per volume is
255.

AGGREGATES
In an aggregate, 5% of space is reserved for Snapshot copies. In normal, day-to-day
operations, aggregate Snapshot copies are not actively managed by a system administrator.
For example, Data ONTAP automatically creates Snapshot copies of aggregates to support
commands related to the SnapMirror software, which provides volume-level mirroring.
NOTE: Even if the Snapshot reserve is 0%, you can still create Snapshot copies. If there is no
Snapshot reserve, Snapshot copies take their blocks from the active file system.

Snapshot Reserve
• Aggregates: Each aggregate has 5% allocated for Snapshot reserve and 10% allocated for WAFL.
• Flexible volumes: Each volume has 20% allocated for Snapshot reserve. The remainder is used for client data.
• Snapshot reserve: The amount of space allocated for Snapshot reserve is adjustable. To use this space for data (not recommended), you must manually override the allocation used for Snapshot copies.

[Diagram: aggregate space is divided into WAFL overhead (10%), volume space, and the aggregate Snapshot reserve (5%); within each volume, the active file system uses 80% and the volume snap reserve uses 20%]


SNAPSHOT RESERVE

HOW DISK SPACE IS ALLOCATED FOR TRADITIONAL VOLUMES

AGGREGATES
The size of an aggregate depends on the number and size of disks allocated to it. Ten percent
of the aggregate is allocated for WAFL. Five percent of the aggregate is allocated for the
Snapshot reserve.

FLEXIBLE VOLUMES
By default, a flexible volume has 20% of its space allocated for the Snapshot reserve. More
than one flexible volume can exist in an aggregate. Each flexible volume, however, has 20%
allocated for Snapshot reserve. To use the Snapshot reserve space for data (not
recommended), you must manually override the allocation for Snapshot copies. The
remainder of space can be used for client data.

SNAPSHOT RESERVE
The Snapshot reserve for an aggregate does not automatically expand into the WAFL
aggregate space. When space is needed for Snapshot copies, older aggregate Snapshot copies
are, by default, replaced by new Snapshot copies. The aggregate Snapshot reserve size is
adjustable using the snap reserve -A command.
On volumes, the Snapshot reserve space expands into user space as required by the
system (for example, if numerous changes are made to the active file system). If necessary, as
new Snapshot copies are created, the Snapshot reserve expands into user space regardless of
the designated Snapshot reserve amount. You can manually reallocate disk space using the
snap reserve command.
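The default splits described on this page work out as simple arithmetic. The following is a hypothetical Python sketch using the 10%, 5%, and 20% figures above; sizes are in kilobytes, and the function names are ours:

```python
def aggregate_split(raw_kb, wafl_pct=10, aggr_reserve_pct=5):
    """Split raw aggregate space into WAFL overhead, aggregate Snapshot
    reserve, and space usable for volumes (default 10% / 5% / 85%)."""
    wafl = raw_kb * wafl_pct // 100
    reserve = raw_kb * aggr_reserve_pct // 100
    return wafl, reserve, raw_kb - wafl - reserve

def volume_split(vol_kb, snap_reserve_pct=20):
    """Split a flexible volume into its Snapshot reserve (default 20%)
    and the space available to the active file system."""
    reserve = vol_kb * snap_reserve_pct // 100
    return reserve, vol_kb - reserve

# A 1,000-kB aggregate: 100 kB WAFL overhead, 50 kB aggregate Snapshot
# reserve, 850 kB left for volumes.
wafl, aggr_snap, usable = aggregate_split(1000)
# A volume filling that space: 170 kB Snapshot reserve, 680 kB for data.
vol_snap, afs = volume_split(usable)
```

Changing the percentage argument models the effect of the snap reserve command described above.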

Creating Snapshot Copies

• Snapshot copies can be:
  – Scheduled
  – Manual
• To manually create Snapshot copies, use either:
  – The CLI:
    snap create [ -A | -V ] [volname] [snapshotname]
  – FilerView
• To rename a Snapshot copy:
  snap rename [ -A | -V ] [volname] [oldsnapname] [newsnapname]


CREATING SNAPSHOT COPIES

SNAPSHOT COMMANDS
In the snap command, the option A is used for aggregates and the option V is used for
volumes. If neither A nor V is specified, volume is the default.
The following table lists the commands used to create and manage Snapshot copies. If you
omit the volume name from any of these commands, the command will apply to the root
volume.

EXAMPLE                                         RESULT
snap create engineering test                    Creates the Snapshot copy test in the engineering volume.
snap list engineering                           Lists all available Snapshot copies in the engineering volume.
snap delete engineering test                    Deletes the Snapshot copy test in the engineering volume.
snap delete -a vol2                             Deletes all Snapshot copies in vol2.
snap rename engineering nightly.0 firstnight.0  Renames the Snapshot copy nightly.0 to firstnight.0 in the engineering volume.
snap reserve vol2 25                            Changes the Snapshot reserve to 25% on vol2.
snap sched vol2 0 2 6@8,12,16,20                Sets the automatic schedule on vol2 to save the following Snapshot copies: 0 weekly, 2 nightly, and 6 hourly, taken at 8 a.m., 12 p.m., 4 p.m., and 8 p.m.

Restoring Snapshot
Copies


RESTORING SNAPSHOT COPIES

Recovering Data

• When recovering data, you have two options:
  – Copy the data from a Snapshot copy
  – Use SnapRestore
• To copy data from a Snapshot copy:
  – Locate the Snapshot copy
  – Recover the copy from the .snapshot directory
    • To overwrite, copy to the original location
    • For a new, writable version, copy to a new location


RECOVERING DATA

USING SNAPSHOT COPIES TO RECOVER DATA


To recover data, you can:
• Restore a file from a Snapshot copy
• Use SnapRestore (license required)

To restore a file from a Snapshot copy:


1. Locate the Snapshot copy that contains the correct version of the file.
2. Restore the file from the .snapshot directory.
• To overwrite existing data, copy to the original location.
• To restore a writeable version, copy to a new location.

Snapshot Options

• Disable automatic Snapshot copies:
  vol options volname nosnap [on|off]
• Make the .snapshot directory invisible to clients, and turn off access to the .snapshot directory:
  vol options volname nosnapdir [on|off]
• Make the ~snapshot directory visible to CIFS clients:
  options cifs.show_snapshot [on|off]
• Hide the .snapshot directory from NFS clients:
  options nfs.hide_snapshot [on|off]


SNAPSHOT OPTIONS

The following options control the creation of Snapshot copies and access to those copies
and to Snapshot directories on a volume:
• Disable automatic Snapshot copies. Setting the nosnap option to on disables automatic Snapshot
creation. You can still create Snapshot copies manually at any time.
• Make the .snapshot directory invisible to clients and turn off access to the .snapshot
directory. Setting the nosnapdir option to on disables access to the Snapshot directory that is
present at client mountpoints and at the root of CIFS directories, and makes the Snapshot directories
invisible. (NFS uses .snapshot for directories, while CIFS uses ~snapshot.) By default, the
nosnapdir option is off (directories are visible).
• Make the ~snapshot directory visible to CIFS clients by completing the following steps:
1. Turn the cifs.show_snapshot option on.
2. Turn the nosnapdir option off for each volume in which you want directories to be visible.
NOTE: You must also ensure that Show Hidden Files and Folders is enabled on your
Windows system.

SNAPSHOT OPTIONS

EXAMPLE RESULT
vol options vol2 nosnap on Disables automatic Snapshot copies for vol2.

vol options vol2 nosnapdir on Makes the .snapshot (or ~snapshot) directory invisible
to clients.
options cifs.show_snapshot on Makes the ~snapshot directory visible to CIFS clients.

SHOWING SNAPSHOT DIRECTORIES USING FILERVIEW


To make Snapshot directories visible, you can also use FilerView to change CIFS
configuration settings.

The .snapshot Directory
[Diagram: an NFS client mounts vol0 at the mountpoint /mnt/system. At the root of vol0 is the .snapshot directory, which contains one directory per Snapshot copy: for example, nightly.0 (files on vol0 as of the previous midnight) and nightly.1 (files on vol0 as of the night before last)]


THE .SNAPSHOT DIRECTORY

The .snapshot directory is at the root of a volume.


In the figure above, the directory structure is shown for an NFS client mounting vol0 to the
mountpoint /mnt/system.

Snapshot View from a UNIX Client
# pwd
/system/.snapshot
# ls -l
total 240
drwxrwxrwx 9 root other 12288 Jan 29 16:19 hourly.0
drwxrwxrwx 9 root other 12288 Jan 29 16:19 hourly.1
drwxrwxrwx 9 root other 12288 Jan 29 16:19 hourly.2
drwxrwxrwx 9 root other 12288 Jan 29 16:19 hourly.3
drwxrwxrwx 9 root other 12288 Jan 29 16:19 hourly.4
drwxrwxrwx 9 root other 12288 Jan 29 16:19 hourly.5
drwxrwxrwx 9 root other 12288 Jan 29 16:19 nightly.0
drwxrwxrwx 9 root other 12288 Jan 29 16:19 nightly.1
drwxrwxrwx 9 root other 12288 Jan 29 16:19 weekly.1
drwxrwxrwx 9 root other 12288 Jan 29 16:19 weekly.2
#


SNAPSHOT VIEW FROM A UNIX CLIENT

SNAPSHOT DIRECTORIES
Every volume in your file system contains a special Snapshot subdirectory that allows you to
access earlier versions of the file system to recover lost or damaged files.

VIEWING SNAPSHOT COPIES FROM A UNIX CLIENT


The Snapshot subdirectory appears to NFS clients as .snapshot. The .snapshot
directories are usually hidden and are not displayed in directory listings.
To view a .snapshot directory, complete the following steps:
1. On the storage appliance, log in as root and ensure that the nosnapdir option is set to off.
2. To view hidden directories, from the NFS mountpoint, enter the ls command with the -a (all)
option.
When listing the client Snapshot directories, the date/time stamp is usually the same for all
directories. To find the actual date/time of each Snapshot, use the snap list command on
the storage system.

Snapshot View from a Windows Client

Snapshot copies are visible to Windows clients that have File Manager configured to display hidden files.


SNAPSHOT VIEW FROM A WINDOWS CLIENT

Snapshot directories are hidden on Windows clients. To view them, you must first configure
File Manager to display hidden files, and then navigate to the root of the CIFS share and open
the ~snapshot folder.
The subdirectory for Snapshot copies appears to CIFS clients as ~snapshot. Files displayed
here are those files created automatically for specified intervals. Manually created Snapshot
copies would also be listed here.

RESTORING A FILE
To restore a file from the ~snapshot directory, rename or move the original file, then copy
the file from the ~snapshot directory to the original directory.

Scheduling Snapshot
Copies


SCHEDULING SNAPSHOT COPIES

Scheduling Snapshot Copies

• Default schedule:
  – One nightly Snapshot copy, Monday through Saturday, at midnight (12 a.m.)
  – Four hourly Snapshot copies, at 8 a.m., 12 p.m., 4 p.m., and 8 p.m.
• Retains:
  – The two most recent nightly copies
  – The six most recent hourly copies
• First in, first out:
  – The oldest nightly Snapshot copy is deleted first
  – The oldest hourly Snapshot copy is deleted first

SCHEDULING SNAPSHOT COPIES

Snapshot Schedule
Snapshot command syntax:
snap sched [volume_name [weeks [days [hours[@list]]]]]

Example: snap sched vol2 0 2 6@8,12,16,20

• Entering the command with no arguments prints the current Snapshot schedule for all volumes in the system.
• Entering the command with just a volume name argument prints the schedule for the specified volume.
• The weeks, days, and hours values specify how many Snapshot copies are saved for each interval (weekly, nightly, and hourly). In the example, the zero means that no weekly Snapshot copy is made or saved.
• The hours value has an optional @list attached to it. Enter values there to specify the times at which hourly Snapshot copies are taken. If you do not enter a list, the default is on the hour.

The Snapshot schedule above keeps the following Snapshot copies for vol2:
• No weekly Snapshot copies
• Two nightly Snapshot copies
• Six hourly Snapshot copies, taken at 8 a.m., 12 p.m., 4 p.m., and 8 p.m.

SNAPSHOT SCHEDULE

THE SNAP SCHED COMMAND


The snap sched command sets a schedule to automatically create Snapshot copies and
specifies how many of each type are stored. When the limit is reached, the oldest Snapshot
copy for each interval is deleted and replaced by a new Snapshot copy.
The figure above shows a default schedule specifying that Snapshot copies are taken at
8 a.m., 12 p.m., 4 p.m., and 8 p.m., and that the two most recent nightly Snapshot copies
and the six most recent hourly Snapshot copies are kept.
Snapshot copies are like a picture of a volume. The only difference between a weekly
Snapshot copy and a nightly or hourly copy is the time at which the Snapshot copy was
created and any data that was changed between the Snapshot copies.
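The weeks/days/hours[@list] form can be unpacked mechanically. The following is a hypothetical parser written purely for illustration; Data ONTAP itself does not expose such a function:

```python
def parse_snap_sched(spec):
    """Parse a 'weeks days hours[@h1,h2,...]' schedule string, e.g.
    '0 2 6@8,12,16,20' means keep 0 weekly, 2 nightly, and 6 hourly
    Snapshot copies, with the hourly copies taken at 8, 12, 16, and 20."""
    weeks, days, hours = spec.split()
    if "@" in hours:
        hours, times = hours.split("@")
        at = [int(t) for t in times.split(",")]
    else:
        at = None  # no list given: hourly copies default to on the hour
    return {"weekly": int(weeks), "nightly": int(days),
            "hourly": int(hours), "at": at}

# The example schedule from the slide:
sched = parse_snap_sched("0 2 6@8,12,16,20")
```

Running the parser on the slide's example yields weekly=0, nightly=2, hourly=6, with hourly copies at 8, 12, 16, and 20 on the 24-hour clock.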

Using FilerView to
Schedule Snapshot Copies


USING FILERVIEW TO SCHEDULE SNAPSHOT COPIES

The Snapshot copy feature is turned on by default and uses a preset schedule until it is
changed by an administrator using the snap sched command, or the FilerView graphical
interface.

To modify the schedule for an existing Snapshot copy using FilerView:


1. From the FilerView left navigation pane, select Volumes > Snapshots > Configure.
2. Enter the new Snapshot copy schedule information.
3. To activate the schedule, click Apply.

To create a brand new Snapshot copy using FilerView:


1. From the FilerView left navigation pane, select Volumes > Snapshots > Add.
2. Select a volume and insert a new Snapshot copy name.
3. To create the new Snapshot copy, click Add.
To view a list of current Snapshot copies using FilerView:
From the FilerView left navigation pane, select Volumes > Snapshots > Manage.

To manually delete individual Snapshot copies from a volume using FilerView:


1. From the FilerView left navigation pane, select Volumes > Snapshots > Manage.
2. Click the check box next to the Snapshot copy you want to delete.
3. Click Delete.
4. To verify the deletion, click OK.

Space Usage


SPACE USAGE

Using the CLI to Monitor Space Used

• To monitor space used by Snapshot copies, use either:
  – The CLI:
    snap list
  – FilerView
• To determine how much space you will get back:
  – snap reclaimable
  – snap delta
• To delete a particular Snapshot copy:
  snap delete [ -A | -V ] [volumename] [snapshotname]
• To delete all Snapshot copies:
  snap delete [ -A | -V ] -a [volumename]


USING THE CLI TO MONITOR SPACE USED

The snap list Command
Snapshot list example:

system> snap list
Volume vol0
working...
  %used     %total  date          name
  --------  ------  ------------  --------
   0% ( 0%)  0% (0%)  Apr 20 12:00  hourly.0
  17% (20%)  1% (1%)  Apr 20 10:00  hourly.1
  33% (20%)  2% (1%)  Apr 20 08:00  hourly.2

• %used: the relationship between accumulated Snapshot copies and the total disk space consumed by the active file system. Values in parentheses show the contribution of this individual Snapshot copy.
• %total: the relationship between accumulated Snapshot copies and the total disk space consumed by the volume. Values in parentheses show the contribution of this individual Snapshot copy.
• date: the date and time the Snapshot copy was taken, shown on the 24-hour clock. In this example, the times reflect the hours set in the automatic Snapshot schedule.
• name: scheduled Snapshot copies are automatically renumbered as new ones are taken, so that the most recent is always ".0". This also ensures that the copy with the highest number is always the oldest.


THE SNAP LIST COMMAND

The snap list command displays a single line of information for each Snapshot copy in a
volume. In the Snapshot List Example in the figure above, a list of Snapshot copies is
displayed for the vol0 volume.
The following is a description of each column in the list:
• %used—Shows the relationship between accumulated Snapshot copies and the total disk space
consumed by the active file system. Values in parentheses show the contribution of this individual
Snapshot copy.
• %total—Shows the relationship between accumulated Snapshot copies and the total disk space
consumed by the volume. Values in parentheses show the contribution of this individual Snapshot
copy.
• date—Shows the date and time the Snapshot copy was taken. Time is indicated on the 24-hour
clock, and in this example, reflects the hours set in the automatic Snapshot copy schedule.
• name—Lists the names of each of the saved Snapshot copies. Scheduled Snapshot copies are
automatically renumbered as new ones are created so that the most recent copy is always .0. This
also ensures that the file with the highest number (in this case, hourly.2) is always the oldest
Snapshot copy.

EXAMPLES: SNAP LIST


The following examples demonstrate how the %used values in snap list command output
relate to the size of Snapshot copies, and how to determine which Snapshot copies to delete to
reclaim the most space.

EXAMPLE 1: NO CHANGES HAVE BEEN MADE TO THE VOLUME SINCE THE CREATION OF
SNAPSHOT COPIES
On the NetApp storage system, /vol/vol0 has 100 MB in use. No data has changed since the
Snapshot copy was taken. The snap list command output shows:
NetApp> snap list
Volume vol0
working...
%used %total date name
---------- ---------- -------- --------
0% ( 0%) 0% ( 0%) Apr 20 08:00 hourly.0
The space used by the hourly.0 snapshot is 0%. No changes have been made to the files in
/vol/vol0, so no blocks have changed between the Snapshot copy and the active file
system.

EXAMPLE 2: CHANGES HAVE BEEN MADE TO THE VOLUME SINCE THE CREATION OF
SNAPSHOT COPIES
At 9:30 a.m., a 20-MB file is deleted and a new 20 MB file is created. At 10 a.m., a new
hourly Snapshot copy is taken. The snap list command output now shows:
NetApp> snap list
Volume vol0
working...
%used %total date name
-------- -------- --------- --------
0% ( 0%) 0% ( 0%) Apr 20 10:00 hourly.0
20% ( 20%) 1% ( 1%) Apr 20 08:00 hourly.1
The hourly.1 Snapshot copy now consumes space because it holds the blocks for the
20-MB file that was deleted from the active file system. The hourly.0 Snapshot copy
consumes no space because no changes were made to the volume after this Snapshot copy
was created.
EXAMPLE 3: CHANGES HAVE BEEN MADE TO THE VOLUME BETWEEN THE SNAPSHOT
CREATIONS
At 11:30 a.m., the 20-MB file created at 9:30 a.m. was deleted. At 12 noon, the hourly.0
Snapshot is created. The snap list command output now shows:
NetApp> snap list
Volume vol0
working...
%used %total date name
---------- ---------- ---------- --------
0% ( 0%) 0% ( 0%) Apr 20 12:00 hourly.0
17% ( 20%) 1% ( 1%) Apr 20 10:00 hourly.1
33% ( 20%) 2% ( 1%) Apr 20 08:00 hourly.2

In this list, hourly.2 and hourly.1 each hold 20 MB of data that no longer exists in the
active file system (AFS). However, each Snapshot copy references different blocks on the
system's disks.

The space used by the Snapshot copies (the parenthesized values in the snap list output are
per-copy; the first column is cumulative):

• Space held by hourly.1 alone = 20 MB held in hourly.1 ÷ (20 MB in hourly.1 + 80 MB in vol0 AFS) × 100% = 20%
• Space held by hourly.2 alone = 20 MB held in hourly.2 ÷ (20 MB in hourly.2 + 80 MB in vol0 AFS) × 100% = 20%
• Cumulative space used through hourly.1 = 20 MB ÷ (40 MB in all Snapshot copies + 80 MB in vol0 AFS) × 100% ≈ 17%
• Cumulative space used through hourly.2 = (20 MB in hourly.2 + 20 MB in hourly.1) ÷ (40 MB in all Snapshot copies + 80 MB in vol0 AFS) × 100% ≈ 33%
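The percentages in the snap list output above can be reproduced with a few lines of arithmetic. A minimal sketch (illustrative Python, not NetApp code), assuming 80 MB of data in the vol0 active file system and 20 MB of unique blocks held by each of hourly.1 and hourly.2:

```python
# Illustrative sketch of the %used arithmetic in the snap list example.
# Per-copy values count only that copy's blocks; cumulative values count
# that copy's blocks plus all older copies' blocks.

AFS_MB = 80                                  # active file system data in vol0
held = {"hourly.1": 20, "hourly.2": 20}      # MB of unique blocks per copy

def pct_alone(name):
    """Space used by one copy as a % of (that copy + AFS)."""
    return round(100 * held[name] / (held[name] + AFS_MB))

def pct_cumulative(names):
    """Cumulative %used: these copies' blocks over (all copies + AFS)."""
    total_snap = sum(held.values())
    used = sum(held[n] for n in names)
    return round(100 * used / (total_snap + AFS_MB))

print(pct_alone("hourly.1"))                        # 20
print(pct_cumulative(["hourly.1"]))                 # 17
print(pct_cumulative(["hourly.1", "hourly.2"]))     # 33
```

These match the 17% and 33% shown in the first column of the example output.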

The snap reclaimable and snap delta
Commands
• snap reclaimable
system> snap reclaimable vol0 hourly.0 nightly.0
Processing (Press Ctrl-C to exit) ...........................
snap reclaimable: Approximately 47108 Kbytes would be freed.
• snap delta (provides the rate of change)
system> snap delta vol0
Volume vol0
working...
From Snapshot  To         kB changed  Time      Rate(kB/hour)
-------------------------------------------------------------
nightly.0      AFS        46932       0d 23:00  3911.000
nightly.1      nightly.0  16952       1d 00:00  4237.705
nightly.2      nightly.1  16952       1d 00:00  4237.705

© 2008 NetApp. All rights reserved. 34

THE SNAP RECLAIMABLE AND SNAP DELTA COMMANDS

NEW SNAP COMMANDS

THE SNAP RECLAIMABLE COMMAND


If your storage system is running Data ONTAP 7.0 or later, the snap reclaimable
command provides an easy method to determine the amount of space that can be freed by
deleting a Snapshot copy. This command can be run against a single Snapshot copy or
multiple Snapshot copies. The snap reclaimable command can take some time to run,
because it is calculating blocks unique to the Snapshot copy (or copies) queried.
For example, a snap list command shows the following Snapshot copies on vol0:
NetApp> snap list
Volume vol0
%used %total date name
---------- -------- ---------- --------
4% ( 4%) 0% ( 0%) Apr 20 08:00 hourly.0
5% ( 0%) 0% ( 0%) Apr 20 00:00 nightly.0
You can run the snap reclaimable command to determine how much space you can save
by deleting these Snapshot copies:
NetApp> snap reclaimable vol0 hourly.0 nightly.0
Processing (Press Ctrl-C to exit)
..................................................
snap reclaimable: Approximately 47,108 kB would be freed.

THE SNAP DELTA COMMAND
If your storage system is running Data ONTAP 7.0 or later, the snap delta command
provides an easy method to determine the rate of data change between Snapshot copies on a
volume. This command can be run for a single Snapshot copy, multiple Snapshot copies, or
all volumes on the storage system.
A possible application for this command could be in planning SnapMirror updates. For
example, if you are planning to implement SnapMirror and need to know the approximate
rate of change between Snapshot copy intervals (to estimate the size of the SnapMirror
transfers), the snap delta command can be used to display this rate:
NetApp> snap list
Volume vol0
working...
%used %total date name
-------- -------- ------------ --------
4% ( 4%) 0% ( 0%) Apr 20 00:00 nightly.0
5% ( 1%) 0% ( 0%) Apr 19 00:00 nightly.1
5% ( 0%) 0% ( 0%) Apr 18 00:00 nightly.2
NetApp> snap delta vol0
Volume vol0
working...
From Snapshot To kB changed Time Rate (kB/hour)
---------- ------ ---------- -------- ---------
nightly.0 AFS 46,932 0d 23:00 3,911.000
nightly.1 nightly.0 16,952 1d 00:00 4,237.705
nightly.2 nightly.1 16,952 1d 00:00 4,237.705
In this example, the rate of change is about 16,952 kB per day, assuming that one Snapshot
per day is created.
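The arithmetic behind such a SnapMirror sizing estimate can be sketched as follows (illustrative Python; the rate_kb_per_hour helper is hypothetical and assumes Rate is simply kB changed divided by elapsed hours, per the column headings):

```python
# Illustrative sketch: estimating SnapMirror transfer sizes from snap
# delta figures. Assumes Rate = kB changed / elapsed hours; the values
# below are illustrative, not taken from a live system.

def rate_kb_per_hour(kb_changed, days, hours):
    """Rate of change, in kB per hour, over the elapsed interval."""
    elapsed_hours = days * 24 + hours
    return kb_changed / elapsed_hours

daily_change_kb = 16952      # change between nightly copies taken 24 h apart
print(round(rate_kb_per_hour(daily_change_kb, 1, 0), 1))   # 706.3 kB/hour

# Rough total for a week of nightly SnapMirror updates:
print(daily_change_kb * 7)   # 118664 kB
```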

The snap autodelete Command

• Allows the user to define a policy to automatically delete Snapshot copies when the volume is nearly full
• Command syntax:
snap autodelete <vol-name> [on | off | show | reset | help]
snap autodelete <vol-name> <option> <value>...
• Supported options and corresponding values:
commitment try, disrupt
trigger volume, snap_reserve, space_reserve
target_free_space 1-100
delete_order oldest_first, newest_first
defer_delete scheduled, user_created, prefix, none
prefix <string>

© 2008 NetApp. All rights reserved. 35

THE SNAP AUTODELETE COMMAND

The snap autodelete command provides a way to automatically manage Snapshot copies.

EXAMPLES
To enable autodelete on volume vol1:
snap autodelete vol1 on

To disable autodelete on vol1:


snap autodelete vol1 off

To display the current autodelete setting for vol1:


snap autodelete vol1 show

To reset all the autodelete options to the default for vol1:


snap autodelete vol1 reset

The snap autodelete Command:
commitment Option
What Snapshot copies can autodelete remove?
• The user can protect certain kinds of Snapshot copies from deletion
• The commitment option defines:
– try
Deletes only Snapshot copies that are not locked by any data mover, recovery, or clone operation (NOT LOCKED)
– disrupt
Deletes Snapshot copies locked by applications that move data (such as SnapMirror), dump data, and restore data (the mirror and dump operations are aborted)

© 2008 NetApp. All rights reserved. 36

THE SNAP AUTODELETE COMMAND: COMMITMENT OPTION

The snap autodelete Command:
commitment Option (Cont.)
snap autodelete <vol-name> <option> <value>...
Supported options and corresponding values:
commitment try, disrupt
trigger volume, snap_reserve, space_reserve
target_free_space 1-100
delete_order oldest_first, newest_first
defer_delete scheduled, user_created, prefix, none
prefix <string>

© 2008 NetApp. All rights reserved. 37

THE SNAP AUTODELETE COMMAND: COMMITMENT OPTION (CONT.)

The snap autodelete Command:
trigger Option
When does snap autodelete occur?
When the selected trigger criterion is nearly full:
• volume
The volume is nearly full (98%)
• snap_reserve
The Snapshot reserve is nearly full
• space_reserve
The reserved space is nearly full (useful for volumes with fractional_reserve < 100)

© 2008 NetApp. All rights reserved. 38

THE SNAP AUTODELETE COMMAND: TRIGGER OPTION

The snap autodelete Command:
trigger Option (Cont.)
snap autodelete <vol-name> <option> <value>...
Supported options and corresponding values:
commitment try, disrupt
trigger volume, snap_reserve, space_reserve
target_free_space 1-100
delete_order oldest_first, newest_first
defer_delete scheduled, user_created, prefix, none
prefix <string>

© 2008 NetApp. All rights reserved. 39

THE SNAP AUTODELETE COMMAND: TRIGGER OPTION (CONT.)

The snap autodelete Command:
target_free_space Option
When does snap autodelete stop?
• When the free space in the trigger criterion reaches a user-specified percentage, snap autodelete stops
– This percentage is controlled by the value of target_free_space
– The default percentage is 80%
snap autodelete <vol-name> <option> <value>...
Supported options and corresponding values:
commitment try, disrupt
trigger volume, snap_reserve, space_reserve
target_free_space 1-100
delete_order oldest_first, newest_first
defer_delete scheduled, user_created, prefix, none
prefix <string>

© 2008 NetApp. All rights reserved. 40

THE SNAP AUTODELETE COMMAND: TARGET_FREE_SPACE OPTION
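Taken together, the trigger, target_free_space, and delete_order options describe a simple control loop. A minimal sketch of that loop (hypothetical Python, not actual Data ONTAP logic; the 98% volume trigger follows the earlier slide, and the target value here is chosen purely for illustration):

```python
# Illustrative sketch of the snap autodelete control loop. Assumptions:
# the trigger fires when the volume is >= 98% full, copies are deleted
# oldest_first, and deletion stops once free space reaches the
# target_free_space percentage. Not actual Data ONTAP code.

def autodelete(capacity_mb, used_mb, snapshots, target_free_space):
    """snapshots: list of (name, size_mb), oldest first. Returns deleted names."""
    deleted = []
    if used_mb / capacity_mb < 0.98:             # trigger: volume nearly full?
        return deleted
    for name, size in list(snapshots):           # delete_order: oldest_first
        free_pct = 100 * (capacity_mb - used_mb) / capacity_mb
        if free_pct >= target_free_space:        # target reached: stop deleting
            break
        deleted.append(name)
        used_mb -= size                          # reclaim the copy's blocks
    return deleted

snaps = [("nightly.2", 10), ("nightly.1", 10), ("hourly.0", 5)]
print(autodelete(100, 99, snaps, target_free_space=20))
# ['nightly.2', 'nightly.1']
```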

The snap autodelete Command:
order Options
In what order are Snapshot copies deleted?
• The delete_order option defines the age order. If the value is set to:
– oldest_first
Delete the oldest Snapshot copies first
– newest_first
Delete the newest Snapshot copies first
snap autodelete <vol-name> <option> <value>...
Supported options and corresponding values:
commitment try, disrupt
trigger volume, snap_reserve, space_reserve
target_free_space 1-100
delete_order oldest_first, newest_first
defer_delete scheduled, user_created, prefix, none
prefix <string>

© 2008 NetApp. All rights reserved. 41

THE SNAP AUTODELETE COMMAND: ORDER OPTIONS

The snap autodelete Command:
order Options (Cont.)
In what order are Snapshot copies deleted?
• The defer_delete option defines which copies are deferred until last
• If the value is set to:
– scheduled
Delete the scheduled Snapshot copies last (identified by the scheduled Snapshot naming convention)
– user_created
Delete the administrator-created Snapshot copies last
– prefix
Delete the Snapshot copies with names matching the prefix string last

© 2008 NetApp. All rights reserved. 42

THE SNAP AUTODELETE COMMAND: ORDER OPTIONS (CONT.)

The snap autodelete Command:
order Options (Cont.)
snap autodelete <vol-name> <option> <value>...
Supported options and corresponding values:
commitment try, disrupt
trigger volume, snap_reserve, space_reserve
target_free_space 1-100
delete_order oldest_first, newest_first
defer_delete scheduled, user_created, prefix, none
prefix <string>

© 2008 NetApp. All rights reserved. 43

THE SNAP AUTODELETE COMMAND: ORDER OPTIONS (CONT.)

The snap autodelete Command:
order Options (Cont.)
In what order are Snapshot copies deleted?
• The prefix option–value pair is considered only when defer_delete is set to prefix; otherwise, it is ignored.
snap autodelete <vol-name> <option> <value>...
Supported options and corresponding values:
commitment try, disrupt
trigger volume, snap_reserve, space_reserve
target_free_space 1-100
delete_order oldest_first, newest_first
defer_delete scheduled, user_created, prefix, none
prefix <string>

© 2008 NetApp. All rights reserved. 44

THE SNAP AUTODELETE COMMAND: ORDER OPTIONS (CONT.)

Using FilerView to Monitor Space Used

© 2008 NetApp. All rights reserved. 45

USING FILERVIEW TO MONITOR SPACE USED

Module Summary

In this module, you should have learned:

• The function and benefits of Snapshot copies
• How to create and delete Snapshot copies
• How to schedule Snapshot copies using the CLI and FilerView
• How disk space is consumed by a Snapshot copy for volumes and aggregates

© 2008 NetApp. All rights reserved. 46

MODULE SUMMARY

Exercise
Module 12: Snapshot Copies
Estimated Time: 45 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

MODULE 13: WRITE AND READ REQUEST PROCESSING

Write and Read Request Processing
Module 13
Data ONTAP® 7.3 Fundamentals

WRITE AND READ REQUEST PROCESSING

Module Objectives

By the end of this module, you should be able to:

• Describe how data is written to and read from a WAFL file system on a volume
• Describe the WAFL file system, including consistency points, RAID management, and storage levels
• Describe how RAID is used to protect disk data
• Describe how the WAFL file system processes write and read requests

© 2008 NetApp. All rights reserved. 2

MODULE OBJECTIVES

Data ONTAP Simplified

[Diagram: clients connect over the network to Data ONTAP; requests pass through the network stack and protocol layers to WAFL® (backed by system memory and NVRAM), then to RAID and the storage layer, and finally to the physical disks.]

© 2008 NetApp. All rights reserved. 3

DATA ONTAP SIMPLIFIED

Data ONTAP is the operating system that all NetApp storage systems use. Data ONTAP,
which simplifies storage management and helps ensure business continuity, is built on three
fundamental elements that provide speed, reliability, and safety for NetApp storage systems:
• Real-time mechanism for process execution
• WAFL file system with NVRAM support
• RAID manager

DATA FLOW
Client systems interact with Data ONTAP through the OS networking layer, with the protocol
layer providing appropriate protocol interfaces. Read and write requests are processed by the
WAFL layer and its associated memory. NVRAM is used to create a backup copy of the
WAFL buffers to prevent data loss. The WAFL determines where data is read from or written
to, and forwards this information to the RAID manager. The RAID manager calculates the
parity value required to protect the stored data.
With the WAFL data placement and RAID information computed, the storage layer writes the
blocks to the appropriate disks, and then Data ONTAP determines the new consistency point.

Write Requests

© 2008 NetApp. All rights reserved. 4

WRITE REQUESTS

Write Requests

• Write requests are received by Data ONTAP through multiple interface protocols:
– CIFS
– NFS
– FCP
– iSCSI
– HTTP
– WebDAV
• Write requests are buffered into:
– System memory
– NVRAM
© 2008 NetApp. All rights reserved. 5

WRITE REQUESTS

Write Request Data Flow: Write Buffer

[Diagram: write requests from a SAN host, a UNIX client, and a Windows client enter through the network stack and the NFS, CIFS, and SAN protocol services, are buffered in system memory, and are logged as NVLOG entries in NVRAM; the WAFL, RAID, and storage layers sit below.]

© 2008 NetApp. All rights reserved. 6

WRITE REQUEST DATA FLOW: WRITE BUFFER

WRITE REQUEST PROCESSING


Write requests are received from clients. Each write request is stored in a memory buffer,
and a copy of each request is stored in NVRAM. The WAFL acknowledges a request once it
is stored both in memory and in battery-backed NVRAM.

Consistency Point

• A consistency point is a completely self-consistent image of the file system. Although the file system is dynamic, if it were frozen momentarily to capture its structure, that captured image would be a consistency point
• A consistency point occurs when designated data is written to disk and a new root inode is determined
• A consistency point occurs for multiple reasons, including the following:
– One bank of the NVRAM card is full
– 10 seconds have elapsed

© 2008 NetApp. All rights reserved. 7

CONSISTENCY POINT

TRIGGERING A CONSISTENCY POINT


A consistency point (CP) is a completely self-consistent image of the entire file system. A CP
is accomplished after data has been written to the disk and a new root inode is determined.
Although there are multiple events that trigger CPs, two primary triggers occur when:
• One bank of NVRAM is full
• 10 seconds have elapsed
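The two primary triggers can be sketched as a small decision function (illustrative Python; the bank size constant is hypothetical, and this is not Data ONTAP's actual scheduler):

```python
# Illustrative sketch of the two primary CP triggers named above: a CP
# is forced when one NVRAM bank fills, or when 10 seconds have elapsed
# since the last CP. Toy logic only.

NVRAM_BANK_KB = 16 * 1024        # hypothetical bank size for illustration
CP_INTERVAL_SECONDS = 10

def cp_needed(bank_used_kb, seconds_since_last_cp):
    if bank_used_kb >= NVRAM_BANK_KB:                   # one NVRAM bank is full
        return True
    if seconds_since_last_cp >= CP_INTERVAL_SECONDS:    # 10-second timer fired
        return True
    return False

print(cp_needed(16 * 1024, 2))   # True  (bank full)
print(cp_needed(512, 10))        # True  (timer)
print(cp_needed(512, 3))         # False
```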

Consistency Point (Cont.)

• During a consistency point, Data ONTAP flushes writes to disk
– It always writes to new data blocks
– The volume is always consistent on disk
• When Data ONTAP flushes memory to disk:
– It updates the file system “atomically,” meaning that the entire write must be completed or the entire write is rolled back
– This includes all metadata
– After checking, the NVRAM is cleared

© 2008 NetApp. All rights reserved. 8

CONSISTENCY POINT (CONT.)

At least once every 10 seconds, the WAFL generates a CP (an internal Snapshot copy) so that
disks contain a completely self-consistent version of the file system. When the storage system
boots, the WAFL always uses the most recent CP on the disks. This means you don’t have to
spend time checking the file system, even after power loss or hardware failure. The storage
system boots in a minute or two, with most of the boot time devoted to spinning up disk
drives and checking system memory.
The storage system uses battery-backed NVRAM to avoid losing data write requests that
might have occurred after the most recent CP. During a normal system shutdown, the storage
system turns off protocol services, flushes all cached operations to disk, and turns off the
NVRAM. When the storage system restarts after a power loss or hardware failure, it replays
into system RAM any protocol requests stored in NVRAM that are not on the disk.
To view the CP types that the storage system is currently using, use the sysstat -x 1 command.
CPs triggered by the timer, a Snapshot copy, or internal synchronization are normal. Other
types of CPs can occur from time to time.

ATOMIC OPERATIONS
An atomic operation in computer science refers to a set of operations that can be combined so
that they appear to the rest of the system to be a single operation, with only two possible
outcomes: success or failure.
To accomplish an atomic operation, the following conditions must be met:
1. Until the entire set of operations is complete, no other process can be “aware” of the changes being
made.
2. If any one operation fails, then the entire set of operations fail and the system state is restored to its
state prior to the start of any operations.
Source: http://en.wikipedia.org/wiki/Atomic_operation
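A minimal sketch of these two conditions (illustrative Python, using an in-memory dictionary in place of a file system; not actual WAFL code):

```python
# Illustrative sketch of an atomic update: either every operation in the
# set is applied, or the state is restored as if none had run. Toy
# in-memory model only.

import copy

def apply_atomically(state, operations):
    """Apply all operations to state, or roll back to the prior state."""
    snapshot = copy.deepcopy(state)      # saved image for rollback
    try:
        for op in operations:
            op(state)
        return True                      # success: the whole set applied
    except Exception:
        state.clear()
        state.update(snapshot)           # failure: restore the prior state
        return False

def set_root(s):
    s["root"] = 2

def fail(s):
    raise IOError("simulated disk failure")

fs = {"root": 1}
print(apply_atomically(fs, [set_root, fail]), fs)   # False {'root': 1}
```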

Write Request Data Flow: WAFL to RAID

[Diagram: the same client-to-storage data path, highlighting the WAFL layer handing the buffered writes down to the RAID layer.]

© 2008 NetApp. All rights reserved. 9

WRITE REQUEST DATA FLOW: WAFL TO RAID

WAFL
The WAFL provides shorter response times to write requests by saving a copy of write
requests in system memory and battery-backed NVRAM, and then immediately sending
acknowledgments. This process is different from traditional servers that must write requests
to the disk before acknowledging them. The WAFL delays writing data to the disk, which
provides more time to collect multiple write requests and determine how to optimize storing
data across multiple disks in a RAID group. Because NVRAM is battery-backed, you don’t
have to worry about losing data.
The following are some key facts about the WAFL:
• There is no fixed location for data except the superblock.
• Metadata is stored in files.
• Everything is a file.
• The WAFL is always free to optimize the data layout.

By batching writes, the WAFL:


• Allows Data ONTAP to convert multiple small file writes into one sequential disk write
• Distributes data across all disks in a large array (meaning no overloaded disk drives, or hot spots)

Consistency Point: WAFL to RAID

• The RAID layer calculates the parity of the data:
– To protect it from one or more disk failures
– To protect stripes of data
• If a data disk fails, the missing information can be calculated from parity
• The storage system can be configured as either:
– RAID 4—Allows one disk failure in the RAID group
– RAID-DP—Allows up to two disk failures in the RAID group

© 2008 NetApp. All rights reserved. 10

CONSISTENCY POINT: WAFL TO RAID

RAID 4 AND RAID-DP PROTECTION


WAFL then hands off data to the RAID subsystem, which calculates parity and then passes
the data and parity to the storage layer, where the data is committed to the disks. RAID uses
parity to reconstruct broken disks. Parity scrubs, which proactively identify and solve
problems, are performed at the RAID level using checksum data.
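The reconstruction-from-parity idea can be shown with a toy XOR example (illustrative Python, not Data ONTAP's implementation; RAID-DP's second, diagonal parity is omitted):

```python
# Illustrative sketch of single-parity RAID: parity is the XOR of the
# data blocks in a stripe, so any one missing block can be recomputed
# from the surviving blocks plus parity. Toy byte-level example.

from functools import reduce

def parity(blocks):
    """XOR all blocks column-wise to produce the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def reconstruct(surviving_blocks, parity_block):
    """XOR the survivors with parity to recover the missing block."""
    return parity(surviving_blocks + [parity_block])

stripe = [b"\x0f\x0f", b"\xf0\xf0", b"\xaa\x55"]   # three data "disks"
p = parity(stripe)

# "Fail" disk 1 and rebuild it from the other data disks plus parity:
rebuilt = reconstruct([stripe[0], stripe[2]], p)
print(rebuilt == stripe[1])   # True
```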

Write Request Data Flow: RAID to Storage

[Diagram: the client-to-storage data path, highlighting the RAID layer computing parity for the 4-KB blocks and passing data and parity down to the storage layer.]

© 2008 NetApp. All rights reserved. 11

WRITE REQUEST DATA FLOW: RAID TO STORAGE

RAID LAYER
Storage drivers move data between system memory and the storage adapters, and ultimately
to the disks. The disk driver component reassembles writes into larger I/Os and monitors
which disks have failed. The SCSI driver creates the appropriate SCSI commands for the
reads and writes it receives.

Consistency Point: RAID to Storage

• The storage layer commits the data and parity to the physical disks
• Two atomic checksums are calculated to verify that the data is written consistently on the disks
• The root inode is updated to point to the new file inodes on the disk
• The NVRAM is flushed and made available
• The consistency point is now complete

© 2008 NetApp. All rights reserved. 12

CONSISTENCY POINT: RAID TO STORAGE

Write Request Data Flow: Storage Writes

[Diagram: the client-to-storage data path, highlighting the storage layer committing the data and parity blocks to the physical disks.]

© 2008 NetApp. All rights reserved. 13

WRITE REQUEST DATA FLOW: STORAGE WRITES

STORAGE LAYER
The storage layer transfers data to the physical disks. After data is written to the disks, the
root inode is updated, a CP is completed, and the NVRAM bank is cleared.

NVRAM

• Data ONTAP writes from system memory
– NVRAM is never used for normal write operations
– NVRAM is backed up with a battery
• If a system failure occurs before the completion of a consistency point, the data is written from NVRAM when the system is brought back online (or by the partner machine in an active-active environment)

© 2008 NetApp. All rights reserved. 14

NVRAM

The NVRAM is viewed as a log of write requests.


When a write request is received:
• The request is logged to NVRAM. During normal processing, NVRAM is not read. It is simply a
log of requests for action (including any necessary data such as the contents of a write request).
• The request is acted upon. The main memory of the storage system is used for processing requests.
Buffers are read from the network and from the disk, and then processed according to the
directions received as CIFS or NFS requests. If the same action needs to be repeated, NVRAM
holds the necessary instructions.
The NVRAM is normally cleared after a CP is created and is never read back. However, if the
storage system crashes, the data from NVRAM is processed as if the storage system was
receiving those same requests a second time. Because WAFL does not have an opportunity to
atomically update the file system when a crash occurs, the storage system responds to the
NVRAM request in the same way it would have if the request had been received again
through the network.
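The log-and-replay behavior described above can be sketched as follows (illustrative Python; the Filer class is a toy model, not Data ONTAP code):

```python
# Illustrative sketch of NVRAM as a request log: writes are logged and
# acted on in memory, the log is cleared at each consistency point, and
# after a crash any un-cleared log entries are simply replayed.

class Filer:
    def __init__(self):
        self.disk = {}       # state as of the last consistency point
        self.memory = {}     # in-flight state in system memory
        self.nvlog = []      # NVRAM: log of requests, normally never read

    def write(self, key, value):
        self.nvlog.append((key, value))   # 1. log the request to NVRAM
        self.memory[key] = value          # 2. act on it in system memory

    def consistency_point(self):
        self.disk = dict(self.memory)     # flush memory to disk atomically
        self.nvlog.clear()                # then clear the NVRAM log

    def crash_and_recover(self):
        self.memory = dict(self.disk)         # boot from the most recent CP
        replay, self.nvlog = self.nvlog, []   # take the log, then replay it
        for key, value in replay:
            self.write(key, value)            # process requests a second time

f = Filer()
f.write("a", 1)
f.consistency_point()
f.write("b", 2)          # logged to NVRAM, but no CP yet
f.crash_and_recover()    # memory rebuilt from disk plus NVRAM replay
print(f.memory)          # {'a': 1, 'b': 2}
```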

Read Requests

© 2008 NetApp. All rights reserved. 15

READ REQUESTS

Read Requests

• Every time a read request is received, the WAFL does one of the following:
– Reads the data from system memory, also known as “cache”
– Reads the data from the disks
• The cache is populated by:
– Data recently written to disk
– Data recently read from disk

© 2008 NetApp. All rights reserved. 16

READ REQUESTS

READ CACHE
Data ONTAP includes several built-in, read-ahead algorithms. These algorithms are based on
patterns of usage, which helps ensure the read-ahead cache is used efficiently.

FIVE STEPS IN A READ


The following process occurs when a read request is received:
1. The network layer receives an incoming read request (read requests are not logged to NVRAM).
2. If the requested data is located in cache, it is returned immediately to the requesting client.
3. If the requested data is not located in cache, WAFL initiates a read request from the disk.
4. Requested blocks and intelligently chosen read-ahead data are sent to cache.
5. The requested data is sent to the requesting client.

NOTE: In the read process, cache is used to refer to system memory.
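The five steps can be modeled with a small read-through cache (illustrative Python; the ReadPath class and its two-block read-ahead are hypothetical, not Data ONTAP's algorithms):

```python
# Illustrative sketch of the five read steps above: serve from cache
# when the block is resident, otherwise read from disk and populate the
# cache, including a simple read-ahead of the next blocks. Toy model.

class ReadPath:
    def __init__(self, disk, read_ahead=2):
        self.disk = disk            # dict: block number -> data
        self.cache = {}             # system memory ("cache")
        self.read_ahead = read_ahead

    def read(self, block):
        if block in self.cache:                        # step 2: cache hit
            return self.cache[block], "cache"
        for b in range(block, block + 1 + self.read_ahead):
            if b in self.disk:                         # steps 3-4: disk read
                self.cache[b] = self.disk[b]           #  plus read-ahead
        return self.cache[block], "disk"               # step 5: return data

rp = ReadPath({0: "a", 1: "b", 2: "c", 3: "d"})
print(rp.read(0))   # ('a', 'disk')
print(rp.read(1))   # ('b', 'cache')  -- satisfied by read-ahead
```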

Read Request Data Flow: Cache

[Diagram: a read request arrives through the network stack and protocol services and is satisfied directly from the system memory buffer (cache), without going to disk.]

© 2008 NetApp. All rights reserved. 17

READ REQUEST DATA FLOW: CACHE

READ REQUESTS FROM CACHE


When a read request is received from a client, the WAFL determines whether to read data
from the disk or respond to the request using the cache buffers. The cache may include data
that was recently written to or read from the disk.

Read Request Data Flow: Read from Disk

[Diagram: a read request that misses the cache is passed through the WAFL and RAID layers to the storage layer; the blocks read from disk populate the cache before the data is returned to the client.]

© 2008 NetApp. All rights reserved. 18

READ REQUEST DATA FLOW: READ FROM DISK

READ REQUEST FROM DISK


Read requests that cannot be satisfied from the read cache are retrieved from the disk. The
read cache is then updated with the new disk information for subsequent read requests.

Module Summary

In this module, you should have learned how:

• The WAFL plans and manages storage
• RAID protects data
• The storage layer manages data transfer to and from disk

© 2008 NetApp. All rights reserved. 19

MODULE SUMMARY

Exercise
Module 13: Write and Read Request
Processing
Estimated Time: 10 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

MODULE 14: SYSTEM DATA COLLECTION

System Data Collection
Module 14
Data ONTAP® 7.3 Fundamentals

SYSTEM DATA COLLECTION

Module Objectives

By the end of this module, you should be able to:

• Use the sysstat, stats, statit, and options commands
• Describe the factors that affect RAID performance
• Execute commands to collect data about write throughput
• Execute commands to verify the operation of hardware, software, and network components
• Identify commands and options used to obtain configuration and status information
© 2008 NetApp. All rights reserved. 2

MODULE OBJECTIVES

System Health

Performance problems can originate from multiple sources. To avoid some of these problems, check or monitor the following:
• Disk configuration
– Disk status
– Write performance
– Read performance
• RAID configuration
• Connectivity configuration
• Performance measures

© 2008 NetApp. All rights reserved. 3

SYSTEM HEALTH

CHECKING THE SYSTEM


Good performance results from hardware, software, and communication protocols working
together at optimal limits. The failure, or underperformance, of one element in the system can
negatively impact others. For this reason, it is important to monitor your system and use the
NetApp command tools available to adjust the system accordingly, which reduces latency,
improves data throughput, and achieves optimal performance.

Disk Status


DISK STATUS

Disk Status

• Monitor disks:
  – shelfchk
  – led_on diskid and led_off diskid
• Storage Health Monitor:
  – Simple storage system management service
  – Automatically initiates during system boot
  – Provides background monitoring of individual disk performance
  – Detects impending disk problems before they actually occur
  – disk shm_stats

DISK STATUS

VERIFYING FC SHELF CONNECTIONS


To confirm proper connection between the storage appliance and the FC shelves, use the
shelfchk command. The shelfchk command verifies that the host adaptors on the storage
appliance are communicating with the disk shelves. The command prompts you to verify
whether specified LEDs are on or off. Because you must be able to see the LEDs, enter the
command from a nearby console.

CHECKING DISK LED FUNCTION


To verify that LEDs are working on all disks, run the led_on and led_off tests. To use
these commands, you must be operating in advanced mode.
NOTE: The led_on and led_off tests can also be used to identify the address where disks
are located.
To verify that LEDs are working on all disks, complete the following steps:
1. To set command privileges to advanced, enter priv set advanced.
2. To turn on LEDs on a specific disk, enter led_on [device_name].
3. Locate the disk on the shelf and verify that the LEDs are lit.
4. To turn off the device LEDs, enter led_off [device_name].
5. To return command privileges back to the basic administration mode, enter priv set admin.

STORAGE HEALTH MONITOR
The Storage Health Monitor (SHM) is a simple storage system management service that is
automatically initiated during system startup. It provides background monitoring of individual
disk performance.
Instead of detecting problems when a disk failure occurs, the SHM detects impending disk
problems before they occur, giving you the opportunity to replace the disk before any client
data problems occur.
SHM messages are written to two text files in the /etc directory and can then be reported
through SNMP, AutoSupport, and syslog, depending on what error metrics you specify.
The SHM provides three message levels:
• Urgent (current problem)
• Non-urgent (potential problem)
• Informational (general status information)
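The three message levels above lend themselves to a small triage helper when reviewing syslog output in bulk. The sketch below maps the shm: message prefixes described in this module to their categories; the mapping and function are illustrative, not an official NetApp parser:

```python
# Hypothetical severity triage for Storage Health Monitor syslog lines.
# The prefix-to-category mapping mirrors the messages described in this
# module; it is illustrative only.

SHM_CATEGORIES = {
    "disk has reported a predicted failure": "urgent",
    "link failure detected": "urgent",
    "disk I/O completion times too long": "non-urgent",
    "possible link errors on disk": "non-urgent",
    "disk returns excessive recovered errors": "non-urgent",
    "intermittent instability on the loop": "informational",
}

def classify_shm(line: str) -> str:
    """Return urgent / non-urgent / informational for an shm: syslog line."""
    if not line.startswith("shm:"):
        return "not-shm"
    body = line[len("shm:"):].strip()
    for prefix, category in SHM_CATEGORIES.items():
        if body.startswith(prefix):
            return category
    return "unknown"

print(classify_shm("shm: disk I/O completion times too long: disk 8a.16, serial number X123"))
# non-urgent
```

A helper like this makes it easy to filter a day's worth of messages down to the urgent ones that require immediate disk replacement.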

Syslog Messages
• shm: disk has reported a predicted failure (PFA) event: disk XX, serial_number XXXX
• shm: link failure detected, upstream from disk: id XX, serial_number XXXXX
• shm: disk I/O completion times too long: disk XX, serial number XXXXX
• shm: possible link errors on disk: id XX, serial number XXXXX
• shm: disk returns excessive recovered errors: disk XX, serial number XXXXX
• shm: intermittent instability on the loop that is attached to Fibre Channel adapter: id XXX, name XXXXX


SYSLOG MESSAGES

shm: disk has reported a predicted failure (PFA) event: disk XX,
serial_number XXXX
Description: The disk's internal error processing and logging algorithm computation
results are exceeding an internally set threshold. The disk will likely fail in a matter
of hours.
Category: Urgent
Action required: Replace the disk
shm: link failure detected, upstream from disk: id XX, serial_number
XXXXX
Description: An FC disk (or cable, if disks are in different disk shelves) might be
malfunctioning, causing an open loop condition. This results in a sync loss of more
than 100 milliseconds for a downstream disk that reported it as a link failure.
Category: Urgent
Action required: Shut down the storage appliance. Use disk scrub on each disk, and
replace the disks and cable one at a time to determine which component is
malfunctioning. Replace the malfunctioning disk or cable.

shm: disk I/O completion times too long: disk XX, serial number XXXXX
Description: Either the disk is old and slow, or it is internally recovering errors and
taking too long to complete an I/O. This message also indicates that there are too
many I/O timeouts and retries on a disk. The disk could also be frequently returning
the Command Aborted status. All these issues can produce a low data-throughput rate
for this specific disk and a reduction in overall system performance.
Category: Non-urgent
Action required: The disk is prone to failure and should be replaced.
shm: possible link errors on disk: id XX, serial number XXXXX
Description: One of a group of four FC disks in a disk shelf (or any connecting
cable) might be malfunctioning. This results in a large number of invalid CRC frames
and data under-runs on the loop. The invalid CRC and under-run count has crossed
the specified threshold several times.
Category: Non-urgent
Action required: Shut down the storage appliance and remove the disks and cables
one at a time to determine which component is malfunctioning. Replace the
malfunctioning disk or cable.
shm: disk returns excessive recovered errors: disk XX, serial number
XXXXX
Description: Either the disk has found media or hardware errors (unrecovered
errors), or it has internally recovered a large number of errors. The disk might also be
returning a Command Aborted status. The errors returned have exceeded the bit error
rate specified by the disk vendor.
Category: Non-urgent
Action required: The disk is failure prone; you should replace it.
shm: intermittent instability on the loop that is attached to Fibre
Channel adapter: id XXX, name XXXXX
Description: An FC adapter, attached disk shelf, disk, cable, or connector might have
caused instability on the FC-AL loop, which resulted in I/O completion rates below a
set threshold.
Category: Informational
Action required: None

Write Performance


WRITE PERFORMANCE

Write Performance Commands

Use the following commands to research write performance:

Command   Function
sysstat   Displays current statistics
statit    Displays disk utilization
stats     Displays performance data


WRITE PERFORMANCE COMMANDS

COMMANDS FOR RESEARCHING WRITE PERFORMANCE


When planning a drive configuration that optimizes write performance, you must make
choices based on thorough knowledge of current system performance, user needs, and
resource constraints.

Write Performance: sysstat Command

system> sysstat -c 10 -s 5
CPU NFS CIFS HTTP Net kB/s Disk kB/s Tape kB/s Cache
in out read write read write age
2% 0 0 0 0 0 9 23 0 0 >60
0% 0 0 0 0 0 0 0 0 0 >60
5% 0 0 0 0 0 21 27 0 0 >60
1% 0 0 0 0 0 0 0 0 0 >60
5% 0 0 0 0 0 20 28 0 0 >60
1% 0 0 0 0 0 0 0 0 0 >60
4% 0 0 0 0 0 21 26 0 0 >60
1% 0 0 0 0 0 0 0 0 0 >60
5% 0 0 0 0 0 22 27 0 0 >60
0% 0 0 0 0 0 0 0 0 0 >60
--
Summary Statistics (10 samples 5.0 secs/sample)
CPU NFS CIFS HTTP Net kB/s Disk kB/s Tape kB/s Cache
in out read write read write age
Min
0% 0 0 0 0 0 0 0 0 0 >60
Avg
2% 0 0 0 0 0 9 13 0 0 >60
Max
5% 0 0 0 0 0 22 28 0 0 >60
system*>


WRITE PERFORMANCE: SYSSTAT COMMAND

USING THE SYSSTAT COMMAND TO CHECK CPU STATUS


The best command for viewing system utilization is sysstat [interval], where
interval is the incremental interval in seconds (the default is every 15 seconds). The
sysstat command is a little like a speedometer for your storage system, allowing you to
view real-time activity per second.
The statistics displayed by the sysstat command should help you answer questions such as:
• Is the system usage steady or does it fluctuate?
• Is the CPU percentage high without corresponding input/output activity?

INTERPRETING SYSSTAT RESULTS


The following is a description of sysstat command results:
• CPU—Displays an average of the busiest CPUs
NOTE: The sysstat -M command displays statistics for each CPU in a multiprocessor system.
• NFS—Number of NFS operations per second
• CIFS—Number of CIFS operations per second
• HTTP—Number of HTTP operations per second
• Net kB/s in and out—The kilobytes per second of data requested from the network as a read or
write
This is the network traffic displayed in kB/s, which tells you how much network traffic the storage
appliance is handling, how constant that traffic is, and if the system is exceeding its network traffic
limitations.

• Disk kB/s reads and writes—Shows disk activity
Disk reads occur if data is not cached. Disk writes should occur ideally every 10 seconds.
• Cache age—Displays the age, in minutes, of the oldest read-only blocks in the buffer cache (not
information relevant to diagnosing performance)
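The Min/Avg/Max summary that sysstat -s prints can also be reproduced from captured samples, which is handy when comparing runs against a baseline. A minimal sketch, assuming each sample is reduced to a (cpu_pct, disk_read_kbps, disk_write_kbps) tuple; real sysstat output has more columns:

```python
# Sketch: summarize CPU% across captured sysstat samples, mirroring the
# Min/Avg/Max summary block. The tuple layout below is an assumption for
# illustration, not the full sysstat column set.

def summarize(samples):
    """Return (min, avg, max) of CPU% across samples, like sysstat -s."""
    cpus = [s[0] for s in samples]
    return min(cpus), sum(cpus) / len(cpus), max(cpus)

samples = [(2, 9, 23), (0, 0, 0), (5, 21, 27), (1, 0, 0)]
lo, avg, hi = summarize(samples)
print(lo, avg, hi)  # 0 2.0 5
```

Comparing the average against the maximum quickly answers the first question above: steady usage shows the two close together, while fluctuating usage shows a large gap.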

The stats Command:
System Performance
• The stats command displays statistical data about the storage system and is capable of displaying statistics on every aspect of the storage system
• Statistics returned using the stats command are based on the following hierarchy:
  – Objects—Any entity in the system is an object (physical or logical, including volumes, aggregates, qtrees, disks, and NICs)
  – Instances—An object such as a volume called nfsflex, an aggregate called aggr1, or a disk identified as 0b.17
  – Counters—The counters associated with particular objects and instances


THE STATS COMMAND: SYSTEM PERFORMANCE

Data ONTAP has a layer built into its architecture that collects data from several of its
subsystems. The stats command provides access (through the CLI or scripts) to a set of
predefined data-collection tools in Data ONTAP known as counters. These counters provide
you with information about your storage system, either instantaneously or over a period of
time. You can use the stats command and other tools such as the Microsoft Windows utility
(perfmon) to gather statistics from this layer.

OBJECTS, INSTANCES, AND COUNTERS


The stats counters are grouped by the objects for which they provide data. These objects
include:
• Physical entities such as a system, processor or disk
• Logical entities such as a volume or aggregate
• Protocols such as iSCSI or FCP
• Other modules on the storage system
For a complete list of the stats objects, use the stats list objects command.
Each object can have zero or more instances on your storage system, depending on your
system configuration. Each object instance has its own name. For example, a system with two
processors has the instance names processor0 and processor1.
Counters have an associated privilege mode. If you are not operating with sufficient
privileges for specific counters, the counters are not recognized as valid.
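The object:instance:counter hierarchy shows up directly in the paths that stats accepts and prints (for example, volume:vol0:write_latency). The helpers below split such a path and convert a latency value with a unit suffix; both are illustrative sketches, and the unit suffixes handled are assumptions based on the sample output in this module:

```python
# Sketch: split a stats counter path of the form object:instance:counter
# (the hierarchy described above) and convert a latency with a unit
# suffix. Instance names containing ':' would need a smarter parser.

def parse_stats_path(path: str):
    """Split object:instance:counter, assuming ':' appears only as a separator."""
    object_name, instance, counter = path.split(":", 2)
    return object_name, instance, counter

UNIT_TO_US = {"us": 1, "ms": 1000}  # assumed suffixes, per the sample output

def latency_us(value: str) -> float:
    """Convert values like '171.50us' to microseconds."""
    for suffix, factor in UNIT_TO_US.items():
        if value.endswith(suffix):
            return float(value[: -len(suffix)]) * factor
    return float(value)

print(parse_stats_path("volume:vol0:write_latency"))
print(latency_us("171.50us"))  # 171.5
```

Normalizing units this way matters when trending latency counters over time, since the CLI may print the same counter in different units at different magnitudes.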

The stats Command:
Examples of Objects and Instances
• Examples of objects:
  – aggregate
  – volume
  – qtree
  – disk
  – cifs
  – nfs
  – lun
• Examples of instances:
  – /vol/vol0, /vol/nfstree, 0b.18
  – /vol/flex1/lun_test
  – cifs_ops, cifs_latency, cifs_read_ops

THE STATS COMMAND: EXAMPLES OF OBJECTS AND INSTANCES

Use the stats command list and show options to view current objects and instances. The
following are some examples of stat commands for objects and instances:
stats list objects
Displays the names of objects active in the system for which data is available.
stats list instances
Displays the list of active instance names for a specific object.
stats list counters [ object_name ]
Displays a list of all counters associated with an object.
stats explain counters [ object_name [ counter_name ] ]
Displays an explanation for specific counter(s) in a specific object, or all counters in
all objects if no object_name or counter_name is provided.
stats show
Shows all or selected statistics in various formats.

For more information about the stats command, see your manual.

The stats Command (Cont.)

The stats command can be executed in one of three ways, based on the frequency of displays:
• Once—Current counter values are displayed
  stats show
• Repeating—Counter values are displayed at a fixed interval
  stats show -i 1
• Period—Counter values are gathered over a single period of time and then displayed
  stats start then stats stop


THE STATS COMMAND (CONT.)

VIEWING CURRENT COUNTER VALUES


To view information about the system state from the CLI, use the stats show command in
singleton mode.
The following table shows some examples of using the stats show command to view
system state information.
To view statistics from all counters for all instances of an object (here, system):
  > stats show system
To view statistics from all counters for a specific instance (here, processor0):
  > stats show processor:processor0
To view all current statistics for the volume vol0:
  > stats show volume:vol0

For more information about counters, see Chapter 8 of the Performance Advisor
Administration Guide.

GATHERING COUNTER VALUES OVER A PERIOD OF TIME
In addition to using the stat command to view the current system state, you can also use it
to gather system information for a single period of time.
The following table shows some examples of using the stats command to gather counter
values over a period of time.
To start collecting system information:
  > stats start -I processor:processor0
To display interim results without stopping the background stats command:
  > stats show -I processor:processor0
To stop collecting system information and output the final results:
  > stats stop -I processor:processor0
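Conceptually, period-mode collection takes a counter snapshot at stats start, another at stats stop, and reports the change over the elapsed interval. A minimal sketch of that delta/rate computation, with hypothetical counter names:

```python
# Sketch of what period-mode collection does conceptually: diff two
# snapshots of monotonically increasing counters and divide by elapsed
# time. Counter names below are illustrative.

def rate_per_second(start_snapshot, stop_snapshot, elapsed_seconds):
    """Compute per-second rates from two counter snapshots."""
    return {
        name: (stop_snapshot[name] - start_snapshot[name]) / elapsed_seconds
        for name in start_snapshot
    }

start = {"nfs_ops": 1000, "disk_write_kb": 50000}
stop = {"nfs_ops": 4000, "disk_write_kb": 110000}
print(rate_per_second(start, stop, 60.0))
# {'nfs_ops': 50.0, 'disk_write_kb': 1000.0}
```

This is also why period mode gives a truer picture of sustained load than a single instantaneous sample: momentary spikes are averaged over the whole interval.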

The stats Command (Cont.)
• Use stats list counters to see what is available
• The statistics available through the stats infrastructure are also available using other tools such as perfmon, perfstat, and Operations Manager
• The following are examples of stats commands:
system> stats show cifs:cifs:cifs_latency
cifs:cifs:cifs_latency:1.92m

system> stats show volume:vol0:write_latency
volume:vol0:write_latency:171.50us


THE STATS COMMAND (CONT.)

STATS LIST COMMANDS

The following are some examples of stats list commands:

stats list counters


Displays the list of all counters associated with an object. If -p is specified, the
counters used by the preset are listed. If neither -p nor object_name is specified,
then all counters for all objects are listed.
stats list objects
Displays the names of objects active in the system for which data is available. If -p is
specified, the objects used by the preset are listed.
stats list instances
Displays the list of active instance names for a particular object. If -p is specified, the
instances used by the preset are listed. If neither -p nor object_name is specified,
then all instances for all objects are listed.

Client-Side Tools: Windows Command
The Windows perfmon utility:
• Connects to the storage system from Windows
• Requires CIFS to be licensed and running on the storage system
• Receives output from the stats command and graphs the data

NOTE: To view the Add Counters screen, in the Performance window, click the plus sign (+).

CLIENT-SIDE TOOLS: WINDOWS COMMAND

The perfmon performance monitoring tool is integrated into the Microsoft Windows
operating system. If you use storage systems in a Windows environment, you can use
perfmon to access many of the counters and objects available through the Data ONTAP
stats command.

USING PERFMON TO ACCESS SYSTEM PERFORMANCE STATISTICS


To use perfmon to access storage system performance statistics, you must specify the name
or IP address of the storage system as the counter source. The lists of performance objects and
counters reflect the objects and counters available from Data ONTAP.
NOTE: The default sample rate for perfmon is once per second. Depending on which
counters you choose to monitor, that sample rate could cause a small performance
degradation on the storage system. If you want to use perfmon to monitor storage system
performance, we recommend that you change the sample rate to once per 10 seconds. You
can change the sample rate using the System Monitor Properties.

Read Performance


READ PERFORMANCE

Read Performance

• Data ONTAP is optimized for write performance
• Read performance could decrease over time
  NOTE: Efficient use of cache can offset some disk performance issues.
• Optimization:
  – To measure optimization: reallocate measure [vol | file]
  – To resolve optimization: reallocate start <pathname>


READ PERFORMANCE

WAFL is optimized for write performance. To enhance performance, WAFL does the
following:
• Writes adjacent blocks in files that are adjacent on the disk (whenever possible). As the file system
grows, blocks may not be written on an immediately adjacent disk, but the blocks will still be
close together.
• Reserves 10% of the disk space to increase the probability of blocks being available at or near
optimal locations.
• Handles interleaved writes much better than other file systems because WAFL does not
immediately write-allocate data. By holding the write data in system memory until a CP is
generated, WAFL can write-allocate a lot of data from a particular file into contiguous blocks.
• Minimizes the impact on write performance with the "write anywhere" allocation scheme, which minimizes disk seeks for writes.

The write optimizations can lead to decreased file and LUN read performance as the file
system ages because files are written to the best place on the disks for write performance. As
the file system expands and WAFL has fewer options for writing blocks, it may have to write
blocks that are not immediately adjacent on the disk. Using flexible volumes and the
autosize volume option can help prevent problems. In addition, WAFL uses built-in,
multiple-read cache algorithms to offset any potential performance degradation.
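One intuitive way to picture the layout degradation that reallocate measure addresses is to look at how many of a file's consecutive logical blocks remain physically adjacent on disk. The sketch below computes such an adjacency ratio; this is an illustration of the concept only, not Data ONTAP's actual optimization metric:

```python
# Illustrative only: quantify layout fragmentation by counting how many
# consecutive logical blocks of a file are physically adjacent on disk.
# This is NOT Data ONTAP's actual reallocate algorithm.

def adjacency_ratio(disk_blocks):
    """Fraction of consecutive logical blocks that are adjacent on disk."""
    pairs = list(zip(disk_blocks, disk_blocks[1:]))
    adjacent = sum(1 for a, b in pairs if b == a + 1)
    return adjacent / len(pairs)

freshly_written = [100, 101, 102, 103, 104]   # contiguous layout
aged_layout = [100, 101, 507, 508, 90]        # scattered after aging
print(adjacency_ratio(freshly_written))  # 1.0
print(adjacency_ratio(aged_layout))      # 0.5
```

A ratio near 1.0 corresponds to the freshly written case where sequential reads need few seeks; as the file system ages and the ratio drops, sequential read performance degrades, which is what reallocate start corrects.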

RAID Configuration


RAID CONFIGURATION

RAID Groups

[Figure: volumes /vol0, /vol1, and /vol2, each built on a RAID group rg0; one volume spans a second RAID group, rg1.]

RAID GROUPS

RELATIONSHIP BETWEEN VOLUMES AND RAID GROUPS


• Each aggregate has at least one RAID group, and each RAID group belongs to only one aggregate.
• When a new aggregate is created, a new RAID group is also created with two parity disks and at
least one data disk.
• When disks are added that exceed the specified or maximum RAID group size, new RAID groups
are automatically created for an aggregate.
• You can increase or decrease RAID group size using the aggr options aggr_name
raidsize option.

RAID Group Size and Composition

The following are some examples of poor RAID configuration choices:
• Unnecessarily using multiple RAID groups
• Using mixed disk sizes
• Configuring RAID groups with wide variations in capacity
• Configuring RAID groups with only one or two data disks
• Configuring RAID groups with a number of disks larger than the default


RAID GROUP SIZE AND COMPOSITION

DETERMINING RAID GROUP SIZE AND COMPOSITION


When initially configuring the storage system, it is important to properly size the number of
drives and RAID groups. While write performance can benefit from more drives, any change
might be masked by the effect of NVRAM and the efficient manner in which WAFL manages
write operations. Configuring multiple RAID groups in a volume should not impact
performance. However, improper configuration can significantly impact performance.
For best results when configuring RAID, use the default RAID group and then follow the
guidelines in the Technical Report 3437, Storage Best Practices and Resiliency Guide at
http://www.netapp.com/us/library/technical-reports/tr-3437.html.

Initial RAID Group Configuration

• Limit the number of disks in a RAID group to the recommended numbers
• Ensure that each RAID group in an aggregate has approximately the same capacity
• Ensure that each RAID group in an aggregate has at least three data disks
• Use disks of the same size within a RAID group to optimize write performance
• Use RAID-DP to protect against disk failures


INITIAL RAID GROUP CONFIGURATION

Adding Disks to Existing RAID Groups

• Add RAID groups when the applied load is stressing the drives in the current array
• Add RAID groups and disks before the file system or aggregate is 80% to 90% full
• Add disks in groups
• Plan data expansion so that no fewer than three data disks are used for any RAID group


ADDING DISKS TO EXISTING RAID GROUPS

RECOMMENDATIONS FOR ADDING DISKS TO EXISTING RAID GROUPS

RAID GROUP SIZES


The maximum RAID group size is 28 (26 data disks and two parity disks when using RAID-DP). When creating an aggregate, if you do not specify a RAID group size, the system uses the default.

CONSIDERATIONS FOR SIZING RAID GROUPS


Configuring an optimum RAID group size for an aggregate requires some trade-offs. You
must decide what is the most important feature of the aggregate you are configuring—speed
of recovery, assurance against data loss, or maximization of data storage space.
In most cases, the default RAID group size is the best size for your RAID groups. However,
you can change the maximum size of those groups.

Monitoring Connectivity


MONITORING CONNECTIVITY

Connectivity
Use the following to monitor connectivity:
• MAC
  – ifconfig
  – ifstat
  – arp
• TCP/IP
  – /etc/rc and /etc/hosts
  – ping
  – netstat -r
• Protocols
  – nfsstat
  – cifs stat
  – nbtstat


CONNECTIVITY

MONITORING CONNECTIVITY ISSUES AT THE MAC LEVEL


Connectivity problems can arise with functions at the MAC, TCP/IP, and protocol layers. At
the MAC level, you can use the commands in the following table to view various connectivity
statistics and settings.
ifstat -a
  Displays status information for all interfaces. To view status information about a specific interface, enter ifstat [interface_name] (for example, ifstat ns1). If the numbers for collisions, CRCs, or runt frames are high, this could indicate a problem with the media type or card.
arp
  Displays the contents of the address resolution table (hostname/IP address) so it can be modified. This command can also help to identify duplicate MAC addresses.
arp -a
  Displays all current contents of the table.
arp -d
  Deletes or flushes a bad MAC address from the ARP table.
arp -s
  Adds a new entry.

Performance Measures


PERFORMANCE MEASURES

Measuring NFS Performance

• options nfs.per_client_stats.enable [on|off]
• Recommended to disable when not using nfsstat -l

The output of nfsstat -l includes the server name and address, mount flags, current read and write sizes, retransmissions count, and timers used for dynamic retransmission. The display shows the breakdown on this mountpoint of lookups, reads, writes, and all operations; the average deviation and the settings for retransmissions of each type are also displayed, along with round-trip response times for specific NFS operations.

Data ONTAP NFS Output - Command: nfsstat -l
/n/homesystem from homesystem.corp.com:/home
Flags: vers=2,proto=udp,auth=unix,hard,intr,dynamic,rsize=8192,wsize=8192,retrans=5
Lookups: sttr=7(17ms), dev=4(20ms), cur=2(40ms)
Reads:   sttr=12(30ms), dev=4(20ms), cur=3(40ms)
Writes:  sttr=21(52ms), dev=5(25ms), cur=5(100ms)
All:     sttr=7(7ms), dev=4(20ms), cur=2(40ms)


MEASURING NFS PERFORMANCE

You can track the performance of each NFS server by routinely collecting statistics in the
background across all subnets. One of the most important ways to measure performance is to
capture response times for each NFS operation such as writes, reads, lookups, and get
attributes, so the data can be analyzed by the server and the file system.
You can obtain statistics for NFS operations by server (where the storage system is the NFS
server) by enabling the per-client stats option and running nfsstat -l. Once you
establish site-specific baseline measurements, you can compare your system’s performance
against optimum benchmark configurations, or against its own performance at different times.
Any changes from the baseline can indicate problems that require further analysis.
To measure NFS performance, use the sysstat and nfsstat commands.
• To display real-time NFS operations every second on your console, enter sysstat 1, or you
can view the output using FilerView.
• To focus the output on counters related to response times on Solaris NFS clients, run nfsstat -m.
• To reset statistics and counters to zero, use nfsstat -z.
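Baseline comparisons are easier when the timing fields are pulled out of the nfsstat output programmatically. The sketch below extracts the millisecond values from one operation row of nfsstat -l style output; the line format follows the sample shown in this module, and the field names (sttr/dev/cur) come from that example, not from a parser NetApp ships:

```python
# Sketch: pull millisecond timings out of an nfsstat -l style line so
# baseline runs can be compared over time. Format follows the sample
# output in this module; illustrative only.
import re

TIMING = re.compile(r"(\w+)=\d+\((\d+)ms\)")

def parse_timings(line: str) -> dict:
    """Map sttr/dev/cur to their millisecond values for one operation row."""
    return {name: int(ms) for name, ms in TIMING.findall(line)}

line = "Writes: sttr=21(52ms), dev=5(25ms), cur=5(100ms)"
print(parse_timings(line))  # {'sttr': 52, 'dev': 25, 'cur': 100}
```

Once each run is reduced to numbers like these, any drift from the site-specific baseline stands out immediately and can be flagged for further analysis.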

Measuring CIFS Performance

Analyzing smb_hist output:

CIFS request time processing: (46457) - milliseconds units
  0ms   1ms   2ms  3ms  4ms  5ms  6ms  7ms
13175 17752  5111  664  451  478  570  568
<16ms <24ms <32ms <40ms <48ms <56ms <64ms unused
 4039  2309   569  165   61   21   10    0

The number in parentheses (46457) is the total number of operations since smb_hist statistics were last reset. The column headings represent millisecond (ms) time stamps for operations. Every other row displays the number of operations that took place in the interval in the row above it; in this example, 13,175 operations happened in less than .5 ms. The time interval window lies halfway between the values for adjacent columns; in this example, 165 operations occurred in the 36-ms to 44-ms window.


MEASURING CIFS PERFORMANCE

To measure CIFS performance, you can use the sysstat and smb_hist commands.
To display CIFS operations per second on the console, enter the sysstat 1 command, or
use FilerView.
To view CIFS throughput statistics, complete the following steps:
Click the CLI window to step through the process.
1. Set the command privileges to advanced.
2. To zero the counters, enter smb_hist -z.
3. Wait long enough to get a good sample.
4. To view CIFS statistics generated since the reset, enter smb_hist.
5. Review first section of output.

In the example in the figure above, the first part of the smb_hist output shows there were
13,175 operations that occurred in less than .5 milliseconds (ms), 17,752 operations that
occurred in the window between 0.5 ms and 1.5 ms, and 5,111 operations that occurred in the
window between 1.5 ms and 2.5 ms, and so on. In normal situations, as the interval window
gets larger, the number of operations that take that long decreases to zero.
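The "halfway between adjacent columns" rule used above can be written out as a tiny calculation, which helps when explaining which latency window a given column covers. This mirrors the worked example in the text; it is not a NetApp-shipped tool:

```python
# Sketch: compute the time window covered by one smb_hist column, using
# the "halfway between adjacent columns" rule. Boundaries in ms;
# illustrative only.

def window(prev_ms, col_ms, next_ms):
    """Return the (low, high) window in ms for the column labeled col_ms."""
    return ((prev_ms + col_ms) / 2, (col_ms + next_ms) / 2)

print(window(32, 40, 48))  # (36.0, 44.0) -> the 36-44 ms window
print(window(0, 1, 2))     # (0.5, 1.5)  -> the window for the 1ms column
```

The two printed windows correspond to the examples in the text: the <40ms column covers 36-44 ms, and the 1ms column covers 0.5-1.5 ms.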

Obtaining Statistics

The statit command:

• Is an advanced-mode command used for more detailed analysis of system performance
• Gathers per-second statistics averaged over the length of time it is running in the background
• Shows statistics representing all physical and some logical objects on the storage system
• Collects data that mostly represents rates at which things are happening


OBTAINING STATISTICS

Using the statit Command
to Obtain Statistics
To obtain statistics using the statit command, complete the following steps:
1. To enter advanced privilege mode, enter:
priv set advanced
2. To begin collecting statistics, enter:
statit -b
3. After 30 seconds (or as long as necessary), to end statistics collection and include NFS statistics, enter:
statit -e -n
4. To return to normal admin privilege mode, enter:
priv set admin


USING THE STATIT COMMAND TO OBTAIN STATISTICS

To run the statit command from a client with rsh, issue:

rsh storage_system “priv set advanced; statit -b”

Wait 30 to 60 seconds for statistics to be collected. The statit command runtime is
determined by many factors based on what events need to be captured. If you are establishing
a baseline, then 30 seconds might be enough. If you are running a benchmark test that takes
five minutes, then you should run the statit command for at least five minutes. The
statit command averages results over the time period that it runs. Therefore, if you run the
statit command for a longer period of time, it will “smooth out” the statistics collected.
After collecting statistics for a period of time, enter the command:

rsh storage_system -l root:passwd “priv set advanced; statit -e”
Analyze the results.
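The “smoothing” effect of a longer statit run can be illustrated with synthetic numbers: the same bursty workload, averaged over wider windows, varies far less. The per-second samples in this Python sketch are made up, for illustration only.

```python
# Illustrate why a longer statit run "smooths out" per-second rates:
# the same bursty workload, averaged over wider windows, varies less.
# The per-second samples below are synthetic, for illustration only.

def window_averages(samples, width):
    return [sum(samples[i:i + width]) / width
            for i in range(0, len(samples), width)]

# Synthetic ops/sec: a quiet baseline with periodic bursts.
samples = [100, 100, 900, 100, 100, 900, 100, 100, 900, 100, 100, 900]

short = window_averages(samples, 1)      # 1-second view: swings 100..900
long_view = window_averages(samples, 6)  # 6-second view: nearly flat

print("1s windows range:", min(short), "to", max(short))
print("6s windows range:", min(long_view), "to", max(long_view))
```

This is why the collection window should match what you are measuring: a long window hides the very bursts a short baseline capture would reveal.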

Obtaining Statistics
The report generated is divided into the following
statistics sections:
• CPU
• Multiprocessor
• CSMP domain switches
• Miscellaneous
• WAFL
• RAID
• Network interface
• Disk
• Aggregate
• Spares and other disks
• FCP
• iSCSI
• Tape

OBTAINING STATISTICS

CPU Statistics
506.934263 time (seconds) 100 %
275.044317 system time 54 %
23.412966 rupt time 5 % (7022 rupts x 0 usec/rupt)
251.466451 non-rupt system time 50 %
271.837944 idle time 44 %
439.543653 time in CP 92 % 100 %
21.837230 rupt time in CP 5 % (132 rupts x 0 sec/rupt)


CPU STATISTICS

The first section of the statistics report provides CPU statistics.

In the example in the figure above:


275.044317 system time 54 %:
Shows the percentage of time the CPUs were busy.
23.412966 rupt time 5 % (7022 rupts x 0 usec/rupt):
Shows the percentage of time the CPUs spent at interrupt level, and the number of interrupts received.
271.837944 idle time 44 %:
Shows the percentage of time the CPUs executed the idle loop.
439.543653 time in CP 92 % 100 %:
Shows the percentage of time the system was in a consistency point, flushing data to
disk.
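Each percentage in this section is simply that bucket's share of total elapsed time. The arithmetic can be sketched in a few lines of Python; the seconds below are round illustrative values, not the slide's exact figures.

```python
# Each percentage in the CPU section is that bucket's share of total
# elapsed time. The seconds below are round illustrative values, not
# the slide's exact figures.

def cpu_percentages(total_seconds, **buckets):
    return {name: round(100 * seconds / total_seconds)
            for name, seconds in buckets.items()}

pcts = cpu_percentages(
    500.0,
    system=270.0,  # rupt time + non-rupt system time
    rupt=25.0,
    idle=220.0,
)
print(pcts)  # each value is a whole-number percent of elapsed time
```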

Multiprocessor Statistics
Multiprocessor Statistics (per second)
cpu0 cpu1 total
sk switches 1378.09 46.82 1424.91
hard switches 1175.27 29.15 1204.42
domain switches 103.89 16.08 119.96
CP rupts 0.00 0.00 0.00
nonCP rupts 100.00 0.00 100.00
nonCP rupt usec 0.00 0.00 0.00
Idle 1000000.00 1000000.00 2000000.00
kahuna 0.00 0.00 0.00
network 0.00 0.00 0.00
storage 0.00 0.00 0.00
exempt 0.00 0.00 0.00
raid 0.00 0.00 0.00
target 0.00 0.00 0.00
netcache 0.00 0.00 0.00
netcache2 0.00 0.00 0.00


MULTIPROCESSOR STATISTICS

The second section of the report includes multiprocessor statistics for multiple CPUs.

Miscellaneous Statistics

Miscellaneous Statistics (per second)


1893.73 hard context switches
0.00 NFS operations
0.00 CIFS operations
0.00 HTTP operations
0.00 NetCache URLs
0.00 streaming packets
0.00 network KB received
0.00 network KB transmitted
18.16 disk KB read
61.30 disk KB written
0.28 NVRAM KB written
0.00 nolog KB written
0.00 WAFL® bufs given to clients
0.00 checksum cache hits ( 0%)
0.00 no checksum - partial buffer
0.00 DAFS operations
0.00 FCP operations
0.00 iSCSI operations


MISCELLANEOUS STATISTICS

The miscellaneous section of the statistics report includes rates (or counts) for many
operations. The statistics from this section most commonly viewed are:
• NFS, CIFS, and HTTP operations
• Network KB transmitted and received
• Disk KB read and written
• FCP and iSCSI operations

WAFL Rates
WAFL Statistics (per second) 0.00 blocks over-written
5.96 name cache hits ( 62%) 0.28 wafl_timer generated CP
3.69 name cache misses ( 38%) 0.00 snapshot generated CP
19.30 inode cache hits ( 100%) 0.00 wafl_avail_bufs generated CP
0.00 inode cache misses ( 0%) 0.00 dirty_blk_cnt generated CP
55.06 buf cache hits ( 100%) 0.00 full NV-log generated CP
0.00 buf cache misses ( 0%) 0.00 back-to-back CP
0.00 blocks read 0.00 flush generated CP
0.00 blocks read-ahead 0.00 sync generated CP
0.00 chains read-ahead 0.00 wafl_avail_vbufs generated CP
0.00 blocks speculative read-ahead 55.06 non-restart messages
5.11 blocks written 0.00 IOWAIT suspends
0.57 stripes written 604852 buffers


WAFL RATES

The WAFL section of the statistics report displays WAFL rates (or counts). The statistics
from this section most commonly viewed are:
• All cache hits and misses
• Inode cache hits and misses
• Per second rates for all the CP types
All cache hits and misses and inode cache hits and misses provide information about read
performance. Generally, it is considered good to have more hits than misses. However, there
are many factors to consider when analyzing these numbers, such as the fact that a file that is
only read once does not reside in cache. This would be true for most backup applications.
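The hit percentages shown in this section are computed as hits divided by hits plus misses. For example, the name-cache rates from the slide (5.96 hits and 3.69 misses per second) reproduce the 62% shown:

```python
# The hit percentages in the WAFL section are hits / (hits + misses).
# Reproducing the name-cache figure from the slide (5.96 hits/s and
# 3.69 misses/s yield the 62% shown):

def hit_rate(hits, misses):
    return hits / (hits + misses)

name_cache = hit_rate(5.96, 3.69)
print(f"name cache hit rate: {name_cache:.0%}")
```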

Network Interface Statistics

Network Interface Statistics (per second)


iface side bytes packets multicasts errors collisions
e0 recv 171.69 2.55 0.00 0.00 0.00
xmit 115.22 1.42 0.00 0.00 0.00
e9 recv 0.00 0.00 0.00 0.00 0.00
xmit 0.00 0.00 0.00 0.00 0.00
e6 recv 0.00 0.00 0.00 0.00 0.00
xmit 0.00 0.00 0.00 0.00 0.00
vh recv 0.00 0.00 0.00 0.00 0.00
xmit 0.00 0.00 0.00 0.00 0.00


NETWORK INTERFACE STATISTICS

The Network Interface section of the statistics report provides network interface statistics,
including rates for:
• Packets and bytes transmitted and received
• Transmit and receive errors
• Collisions

Disk Statistics
Disk Statistics (per second)
ut% is the percent of time the disk was busy.
xfers is the number of data transfer commands issued per second.
xfers = ureads + writes + cpreads + greads + gwrites

chain is the average number of 4K blocks per command.


usecs is the average disk round trip time per 4K block.
disk ut% xfers ureads--chain-usecs writes--chain-usecs cpreads-chain-usecs
/vol0/plex0/rg0:
8a.16 5 3.69 0.57 1.00 94500 ...
8a.21 4 3.12 0.57 1.00 39500 ...


DISK STATISTICS

The Disk section of the statistics report provides statistics for each drive. Some of the column
headings are defined at the top of the screen.
Beginning with the fourth column of data, the report uses hyphens in the column headings to
group related information. For example, user reads and the associated chain and round-trip
times are linked in the heading ureads--chain-usecs.
The following list defines some of the column headings on the Disk statistics report:
• disk—Indicates which drives are included in the statistics.
• ut%—Shows the drive utilization averaged per second, that is, the percentage of elapsed time that the
driver had a request outstanding.
Utilization rates of more than 80% might suggest an I/O bottleneck.
• xfers—Shows the total number of transfers (reads and writes) averaged per second. Most
drives are capable of 50 to 100 input/output operations per second (IOPS).
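The 80% guideline above can be turned into a simple check. This Python sketch assumes the (disk, ut%) pairs have already been parsed out of the Disk Statistics section; disk "8b.22" and its 93% utilization are hypothetical, added to show a flagged drive.

```python
# Flag drives whose ut% suggests an I/O bottleneck, using the 80%
# guideline above. The (disk, ut%) pairs are assumed to have been
# parsed from the Disk Statistics section already; disk "8b.22" and
# its 93% utilization are hypothetical, added to show a flagged drive.

BUSY_THRESHOLD = 80  # percent

def busy_disks(stats, threshold=BUSY_THRESHOLD):
    return [disk for disk, ut in stats if ut > threshold]

stats = [("8a.16", 5), ("8a.21", 4), ("8b.22", 93)]
print(busy_disks(stats))  # only drives above the threshold
```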

Aggregate, Spares, and Disk Statistics
Aggregate statistics:
Minimum 0 0.00 0.00 0.00 0.00 0.00 0.00
Mean 1 0.28 0.00 0.28 0.00 0.00 0.00
Maximum 5 3.69 0.57 3.12 0.00 0.00 0.00

Spares and other disks:


8b.16 2 1.70 1.70 1.00 10167 0.00 .... . 0.00 .... . 0.00 .... . 0.00 ..
8b.17 0 0.00 0.00 .... . 0.00 .... . 0.00 .... . 0.00 .... . 0.00 .... .
8b.18 0 0.00 0.00 .... . 0.00 .... . 0.00 .... . 0.00 .... . 0.00 .... .


AGGREGATE, SPARES, AND DISK STATISTICS

This section of the report displays aggregate, spares, and other disk statistics.

FCP, iSCSI, and Tape Operations

FCP Statistics (per second)


0.00 FCP Bytes recv 0.00 FCP Bytes sent
0.00 FCP ops

iSCSI Statistics (per second)


0.00 iSCSI Bytes recv 0.00 iSCSI Bytes xmit
0.00 iSCSI ops

Interrupt Statistics (per second)


2000.15 Clock 3.97 Fast Enet
47.68 FCAL 4.54 int_22
3.41 FCAL 2059.75 total


FCP, ISCSI, AND TAPE OPERATIONS

The last three sections of the report display statistics for FCP, iSCSI, and tape operations.

Other Resources: Data Collection and
Performance
For more information about data collection and performance, see the
Fundamentals of Performance Analysis course.
This advanced course shows you how to:
• Analyze data using recommended methodology to correlate performance data into performance analysis information
• Monitor performance using performance tools, and establish a baseline of expected throughput and response times for storage systems under planned and increasing workloads
• Perform capacity planning by monitoring performance and comparing baseline information over time to determine when a storage system will reach maximum capacity
• Perform tuning for optimal performance for protocols such as CIFS, NFS, and SAN (including locating resources with tuning guidelines for database scenarios)
• Perform bottleneck analysis


OTHER RESOURCES: DATA COLLECTION AND PERFORMANCE

Module Summary

In this module, you should have learned that:

• sysstat provides storage system statistics
• statit provides low-level statistics on the storage system, averaged over the length of time that the command runs
• stats provides many statistics
• To gauge system performance:
– Check disk configuration
– Monitor connectivity
– Check storage system configuration


MODULE SUMMARY

Exercise
Module 14: System Data Collection
Estimated Time: 30 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

FlexShare



MODULE 15: FLEXSHARE

FlexShare
Module 15
Data ONTAP® 7.3 Fundamentals

FLEXSHARE

Module Objectives

By the end of this module, you should be able to:

• Describe and use FlexShare to tune and improve performance
• Monitor FlexShare performance changes
• Describe specific cases that could benefit from quality of service (QoS) improvements using FlexShare


MODULE OBJECTIVES

FlexShare

FlexShare facilitates increased storage resource control in the following ways:

• Volume priorities
• Hints for cache buffers
• No license required


FLEXSHARE

FlexShare is a tool provided by Data ONTAP that enables you to use priorities and hints to
increase your control over how your storage system resources are used:
• Priorities are assigned to volumes to establish relative priorities between the following:
– Different volumes
For example, you could specify that operations on /vol/db are more important than
operations on /vol1/test.
– Client data accesses and system operations
For example, you could specify that client accesses are more important than SnapMirror
operations.
• Hints are used to affect the way cache buffers are handled for a particular volume.

FlexShare Application Scenarios
• Scenario 1―A mission-critical database is on the same storage system as user home directories
– Use FlexShare to ensure that database accesses are assigned a higher priority than accesses to home directories
• Scenario 2―System operations are negatively impacting client accesses
– Use FlexShare to ensure that client accesses are assigned a higher priority than system operations
• Scenario 3―Volumes have different caching requirements
– Example: a database log volume that does not need to be cached after writing
– Use the cache buffer policy hint to help Data ONTAP determine how to manage the cache buffers for volumes with different caching requirements

FLEXSHARE APPLICATION SCENARIOS

WHEN TO USE FLEXSHARE


If your storage system consistently provides the performance required for your environment,
then you do not need FlexShare. If, however, your storage system sometimes does not deliver
sufficient performance to some of its users, you can use FlexShare to increase your control
over storage system resources and ensure that those resources are being used most effectively
for your environment.
The following sample scenarios describe how FlexShare can be used to set priorities for the
use of system resources:
• You have different applications on the same storage system. For example, if you have a
mission-critical database on the same storage system as user home directories, you can use
FlexShare to ensure that database accesses are assigned a higher priority than accesses to home
directories.
• You want to reduce the impact of system operations such as SnapMirror on client data
accesses. You can use FlexShare to ensure that client accesses are assigned a higher priority than
system operations.
• You have volumes with different caching requirements. For example, if you have a database
log volume that does not need to be cached after writing, or a heavily accessed volume that must
remain cached as much as possible, you can use the cache buffer policy hint to help Data ONTAP
determine how to manage the cache buffers for those volumes.

FlexShare Characteristics

• No performance guarantees
• Priority levels are relative
• Enable on both nodes of an active-active configuration
• Must have the priority feature enabled


FLEXSHARE CHARACTERISTICS
NO PERFORMANCE GUARANTEES
FlexShare enables you to construct a priority policy that helps Data ONTAP manage system
resources optimally for your application environment, but does not guarantee performance.

PRIORITY LEVELS ARE RELATIVE


When you set a priority level on a volume or operation, you are not giving that volume or
operation an absolute priority level. Instead, you are providing a hint to Data ONTAP about
how to set priorities for accesses to that volume (or operations of that type) relative to other
accesses or operations. For example, if you set the priority level on each of your volumes to
the highest level, it will not improve the performance of your system. In fact, doing so would
not produce any change in performance.

USING FLEXSHARE IN AN ACTIVE-ACTIVE CONFIGURATION


If you use FlexShare in active-active configuration storage systems, you must ensure that
FlexShare is enabled or disabled on both nodes. Otherwise, a takeover could cause
unexpected results.
After a takeover occurs, the FlexShare priorities you set for volumes on the node that was
taken over are still operational, and the takeover node creates a new priority policy by
merging the policies configured on each individual node. For this reason, make sure that the
priorities you configure on each node work well together.

Effects of Volume Operations on FlexShare
Priorities
Volume Operation Effect on FlexShare Settings
Deletion FlexShare settings removed
Rename FlexShare settings unchanged
FlexClone Volume Creation Parent volume settings unchanged
FlexShare settings for new FlexClone
volume unset (as for a newly created
volume)
Copy Source volume settings unchanged
FlexShare settings for destination
volume unset (as for a newly created
volume)
Offline or Online FlexShare settings preserved


EFFECTS OF VOLUME OPERATIONS ON FLEXSHARE PRIORITIES

Global I/O Concurrency Option

• FlexShare limits concurrent I/O based on the following criteria:
– Volume priority
– Disk type
• For most applications, the default value is correct
• Change concurrent I/O only for nonstandard disks or loads


GLOBAL I/O CONCURRENCY OPTION

Disks have a maximum number of concurrent I/O operations they can support, which varies
according to disk type. FlexShare limits the number of concurrent I/O operations per volume
based on multiple values, including volume priority and disk type.
For most customers, the default io_concurrency value is correct and should not be
changed. If you have nonstandard disks or loads, your system performance could be improved
by changing the value of the io_concurrency option.
NOTE: Because the io_concurrency option affects the entire system, use caution when
changing its value, and monitor system performance to ensure that this option actually does
improve performance.
For more information about FlexShare, see the na_priority(1) man page or the NOW site at
http://now.netapp.com/NOW.

Assigning Priorities to Volume Data
Access
To assign priorities to volume data access:
1. Ensure that FlexShare is enabled:
priority on
2. Specify the priority for a volume:
priority set volume vol_name
level=priority_level
3. (Optional) Verify the priority level:
priority show volume [-v] vol_name


ASSIGNING PRIORITIES TO VOLUME DATA ACCESS

USING FLEXSHARE TO ASSIGN VOLUME DATA ACCESS PRIORITIES

ASSIGNING PRIORITY TO A VOLUME RELATIVE TO OTHER VOLUMES


You can use FlexShare to assign a relative priority to a volume, which causes accesses to that
volume to be assigned a priority that is higher or lower than that of other volumes on the
storage system.
NOTE: For best results, when you set the priority for any one volume, set the priority for all
volumes on the system.
To assign a priority to a volume relative to other volumes, complete the following steps:
1. Ensure that FlexShare is enabled for your storage system by entering the following command:
priority on
2. Specify the priority for the volume by entering the following command:
priority set volume vol_name level=priority_level
where vol_name is the name of the volume for which you want to set the priority, and
priority_level is one of the following values: VeryHigh, High, Medium, Low, and
VeryLow.
Example: The following command sets the priority level for the dbvol volume as high as
possible. This causes accesses to the dbvol volume to receive higher priority than accesses to
volumes with a lower priority.
system> priority set volume dbvol level=VeryHigh
3. You can optionally verify the priority level of the volume by entering the following command:
priority show volume [-v] vol_name
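Because the note above recommends setting the priority on all volumes once you set it on any one, it can be convenient to generate the full set of console commands from a single policy table. A minimal Python sketch; the volume names and levels below are examples only, and the output is just the commands you would then run on the storage system.

```python
# Generate a "priority set volume" command for every volume in one
# policy table, following the guidance to set the priority on all
# volumes whenever you set it on one. The volume names and levels
# below are examples only.

LEVELS = {"VeryHigh", "High", "Medium", "Low", "VeryLow"}

def priority_commands(policy):
    for volume, level in policy.items():
        if level not in LEVELS:
            raise ValueError(f"unknown priority level: {level}")
        yield f"priority set volume {volume} level={level}"

policy = {"dbvol": "VeryHigh", "homedirs": "Medium", "testvol": "Low"}
for command in priority_commands(policy):
    print(command)
```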

Assign Priorities to System and User
Operations
To assign priorities to system and user
operations:
1. Ensure that FlexShare is enabled:
priority on
2. Specify the priority:
priority set volume vol_name system
3. (Optional) Verify the volume priority levels:
priority show volume -v vol_name


ASSIGN PRIORITIES TO SYSTEM AND USER OPERATIONS

ASSIGNING PRIORITY TO SYSTEM OPERATIONS RELATIVE TO USER OPERATIONS


If system operations (for example, SnapMirror transfers or backup operations) are negatively
affecting user accesses to the storage system, you can use FlexShare to assign a priority to
system operations that is lower than user operations for any volume.
NOTE: Synchronous SnapMirror updates are not considered system operations because they
are performed from NVRAM when the primary operation is initiated. This means that
synchronous SnapMirror updates are affected by the target volume priority, but not by the
relative priority of system operations for that volume.

PROCEDURE
To assign a priority to system operations relative to user operations for a specific volume,
complete the following steps:
1. Ensure that FlexShare is enabled for your storage system by entering the following
command:
priority on
2. Specify the priority for system operations for the volume by entering the following
command:
priority set volume vol_name system=priority_level
where vol_name is the name of the volume for which you want to set the priority of
system operations, and priority_level is one of the following values: VeryHigh,
High, Medium, Low, VeryLow, or a number from 1 to 100.
The number indicates the priority of system operations: when both user and system
operations are requested, the system operations are selected over the user operations
that percentage of the time, and the user operations are selected the rest of the time.

NOTE: Setting the priority of system operations to 30 does not mean that 30 percent of
storage system resources are devoted to system operations. Rather, when both user and
system operations are requested, the system operations will be selected over the user
operations 30 percent of the time, and the other 70 percent of the time the user operation
is selected.
3. You can optionally verify the priority levels of the volume by entering the following
command:
priority show volume [-v] vol_name
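The selection semantics described in the note can be sketched as a weighted choice between contending operations. This Python simulation is only an illustration of the 30/70 split described above, not the actual Data ONTAP scheduler.

```python
# Sketch of the note above: with a system priority of 30, contended
# scheduling picks the system operation about 30% of the time and the
# user operation the other 70%. This is only an illustration of the
# described semantics, not the actual Data ONTAP scheduler.
import random

def contended_system_wins(system_pct, trials, seed=1):
    rng = random.Random(seed)
    return sum(1 for _ in range(trials) if rng.random() < system_pct / 100)

wins = contended_system_wins(30, 100_000)
print(f"system operations selected {wins / 100_000:.1%} of the time")
```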

Buffer Cache Policy

• Settings:
– keep
– reuse
– default
• Setting the policy:
– priority on
– priority set volume vol_name cache=policy


BUFFER CACHE POLICY

You can use FlexShare to give Data ONTAP a hint about how to manage the buffer cache of
a volume.
NOTE: While this capability provides direction in the form of a hint to Data ONTAP,
ultimately Data ONTAP determines how a buffer is reused based on multiple factors,
including the hint.
The following are the possible values for the buffer cache policy:
• keep—Instructs Data ONTAP to wait as long as possible before reusing the cache buffers.
This value can improve performance for a volume that is accessed frequently and has a high
incidence of multiple accesses to the same cache buffers.
• reuse—Instructs Data ONTAP to make buffers for this volume available for reuse quickly.
You can use this value for volumes that are written but rarely read, such as database log
volumes, or volumes that have a data set so large that keeping the cache buffers probably
won’t increase the hit rate.
• default—Instructs Data ONTAP to use the default system cache buffer policy for this
volume.

SETTING THE VOLUME BUFFER CACHE POLICY
You can use FlexShare to influence how Data ONTAP determines when to reuse buffers. To
set the buffer cache policy for a specific volume, complete the following steps.
1. If you haven’t already done so, ensure that FlexShare is enabled for your storage system
by entering the following command:
priority on
2. Specify the cache buffer policy for the volume by entering the following command:
priority set volume vol_name cache=policy
Example: The following command sets the cache buffer policy for the testvol1 volume
to keep, which instructs Data ONTAP to avoid reusing buffers for this volume when
possible.
priority set volume testvol1 cache=keep
3. You can optionally verify priority levels of the volume by entering the following
command:
priority show volume [-v] vol_name

Removing or Disabling FlexShare Policies

• To temporarily disable a FlexShare priority:
system> priority set volume [volname] service=off
• To remove a FlexShare priority:
system> priority delete volume [volname]


REMOVING OR DISABLING FLEXSHARE POLICIES

REMOVING A FLEXSHARE PRIORITY FROM A VOLUME


You can temporarily disable a FlexShare priority for a particular volume, or you can remove
the priority completely.

TEMPORARILY DISABLING A FLEXSHARE PRIORITY


To temporarily disable a FlexShare priority for a specific volume, you can set the service
option for that volume to off, which puts the volume back into the default queue.
Example: The following command temporarily disables a FlexShare priority for the
testvol1 volume:
system> priority set volume testvol1 service=off

REMOVING A FLEXSHARE PRIORITY


To completely remove FlexShare priority settings from a specific volume, you can use the
priority delete command, which puts the volume back into the default queue.
Example: The following command completely removes FlexShare priority settings for the
testvol1 volume:
system> priority delete volume testvol1

Default Volume Priority

To specify or modify the default volume priority:


priority set default option=value
[option=value]


DEFAULT VOLUME PRIORITY

MODIFYING THE DEFAULT PRIORITY


If you have not assigned a priority to a volume, then that volume is assigned the default
priority for your storage system. The default value is Medium.
NOTE: The default priority is also used for all aggregate operations. Changing the default
priority to VeryHigh or VeryLow could have unintended consequences.
To change the default volume priority, enter the following command:
priority set default option=value [option=value]
where option is either level or system, and the possible values for these options are
the same as for assigning priorities for a specific volume.
Example: The following command sets the default priority level for volumes to Medium,
while setting the default system operations priority to Low.
priority set default level=Medium system=Low

Module Summary

In this module, you should have learned to:

• Describe and use FlexShare to tune and improve performance
• Monitor FlexShare performance changes
• Describe specific cases that could benefit from quality of service (QoS) using FlexShare


MODULE SUMMARY

Exercise
Module 15: FlexShare
Estimated Time: 30 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

NDMP



MODULE 16: NDMP FUNDAMENTALS

NDMP
Fundamentals
Module 16
Data ONTAP® 7.3 Fundamentals

NDMP FUNDAMENTALS

Module Objectives

By the end of this module, you should be able to:


• Describe Network Data Management Protocol (NDMP) concepts and models
• Enable and configure NDMP on storage systems
• Describe NDMP dump and restore phases
• Display and analyze NDMP event logs
• Understand and optimize NDMP performance
• Use ndmpcopy to perform full and incremental backups
• Find additional NDMP tools and information


MODULE OBJECTIVES

INTRODUCTION
This module describes how to use Network Data Management Protocol (NDMP) services on
your storage system to enable network-based backup and recovery using NDMP-enabled
commercial backup applications. It also explains how to monitor NDMP services running on
the storage system and to use ndmpcopy to migrate data efficiently within or between storage
systems.

NDMP Overview


NDMP OVERVIEW

NDMP Overview

• NDMP is an open standard that allows backup applications to control native backup and recovery functions on NetApp storage systems and other NDMP servers.
• NDMP-compliant backup applications interact with the ndmpd process on the storage system.
• NDMP requests from backup applications prompt the storage system to invoke native dump and restore commands to initiate backups and restores.


NDMP OVERVIEW

NDMP is an open standard for centralized control of data management across the enterprise. NDMP enables backup software vendors to provide support for NetApp storage systems without having to port client code.
An NDMP-compliant solution separates the flow of backup and restore control information from the flow of data to and from the backup media. These solutions invoke the Data ONTAP operating system’s native dump and restore commands to back up data from, and restore data to, a NetApp storage system.
NDMP also provides low-level control of tape devices and media changers.
Using data protection services through backup applications that support NDMP offers a
number of advantages:
• Provides sophisticated scheduling of data protection operations across multiple storage systems.
• Provides media management and tape inventory management services to eliminate or minimize
manual tape handling during data protection operations.
• Supports data catalogue services that simplify the process of locating specific recovery data. Direct Access Recovery optimizes retrieval of specific data from large backup tape sets.
• Supports multiple topology configurations, allowing efficient sharing of secondary storage
resources (tape library) through the use of three-way network data connections.

NDMP Support Matrix
Partner products and supported versions, by Data ONTAP release:
• Atempo® Time Navigator™ (6.2: 3.6; 6.3: 3.6; 6.4: 3.7; 6.5: TBD; 7.0: TBD)
• BakBone® NetVault® (6.2: 6.5.2; 6.3: 6.5.2; 6.4: 7.0; 6.5: 7.0, 7.1; 7.0: 7.1.1)
• CommVault® Galaxy® (6.2: 4.1; 6.3: 4.1; 6.4: 4.2; 6.5: TBD; 7.0: 5.9)
• BrightStor® ARCserve® (6.2: 9; 6.3: 9; 6.4: 9; 6.5: TBD; 7.0: 11.1)
• HP® Data Protector (6.2: 5.0; 6.3: 5.0; 6.4: 5.0, 5.1; 6.5: TBD; 7.0: 5.5)
• Legato® NetWorker™ (6.2: 6.2; 6.3: 6.1.3, 6.2; 6.4: 6.1.3, 6.2, 7.0; 6.5: TBD; 7.0: 7.2)
• Syncsort® Backup Express (6.2: 2.1.4, 2.1.5; 6.3: 2.1.4, 2.1.5; 6.4: 2.1.5; 6.5: TBD; 7.0: 2.3)
• IBM® Tivoli® Storage Manager (6.2: 5.0, 5.1; 6.3: 5.0, 5.1; 6.4: 5.2; 6.5: TBD; 7.0: 5.3)
• Veritas® NetBackup™ (6.2: 3.4, 3.4.1, 4.5; 6.3: 4.5, 5.0 [Data ONTAP 6.3.3 or later only]; 6.4: 4.5, 5.0 [Data ONTAP 6.4.2 or later only]; 7.0: 4.5, 5.0, 5.1)


NDMP SUPPORT MATRIX

THIRD-PARTY NDMP SOLUTIONS


Solutions based on NDMP can centrally manage and control backup and recovery of highly
distributed data while minimizing network traffic. These products can direct a NetApp
storage system to back itself up to a locally attached tape drive without sending the backup
data over the network. NDMP-based solutions are designed to assure data protection and
efficient restoration in the event of data loss. These solutions include many control and
management features—such as discovery, configuration, scheduling, media management,
tape library control, and user interface—not available using the native dump and restore
commands for NetApp storage systems.
In 1996, NetApp partnered with Intelliguard to create NDMP. Since then, the two companies
have promoted the industry standardization of NDMP.
In the compatibility matrix in the figure above, key backup vendors and their NDMP
solutions are listed. To obtain a complete list of third-party NDMP backup applications and
software versions, see the online documentation on the NOW site.
NDMP third-party solutions provide:
• Central management and control of highly distributed data
• Local backup of NetApp storage systems (without sending data over the network)
• Control of robotics in tape libraries
• Data protection in a mixed server environment with UNIX, Windows NT, and NetApp storage
systems
• Investment protection with established backup strategies

Gigabit Ethernet Tape to SAN
[Figure: Gigabit Ethernet Tape-to-SAN configuration. Clients connect over an Ethernet LAN to UNIX and NT application servers and a backup host; the servers attach over Gigabit Ethernet to a tape SAN containing tape libraries such as Spectra Logic and Quantum|ATL.]

GIGABIT ETHERNET TAPE TO SAN

NetApp delivers both certified FC fabric Tape-to-SAN backup solutions and Gigabit Ethernet
(GbE) Tape-to-SAN solutions. These solutions are made possible through our joint
partnerships with industry leaders in the fields of tape automation, fabric switches, and
backup software. They offer significant benefits for enterprise customers over tape devices
attached (through SCSI) directly to NetApp storage systems. Specifically, these two Tape-to-
SAN solutions offer the following benefits:
• Tape sharing and amortization of tape resources
• Extended distances from data to centralized tape backup libraries
• Minimized impact of backups on servers on the network
• Tape drive hot-swapping
• Dynamic tape configuration changes without shutting down the NetApp storage system
The GbE Tape-to-SAN configurations allow multiple NetApp storage systems to
concurrently transfer data over GbE to one or more tape libraries that support NDMP. This
architecture allows each drive inside the tape library to be seen as a shared resource and an
NDMP server. A clear advantage of this configuration is the demonstrated interoperability of
Ethernet-based components.

Fibre Channel Tape to SAN
[Figure: Fibre Channel Tape-to-SAN configuration. Clients connect over an Ethernet LAN to UNIX and NT application servers and a backup host; the servers attach over Fibre Channel to a tape SAN containing tape libraries such as Spectra Logic, StorageTek, Quantum|ATL, ADIC, and IBM.]

FIBRE CHANNEL TAPE TO SAN

Together with a third-party NDMP-based data protection solution that supports technology
known as dynamic drive sharing, both FC and GbE Tape-to-SAN solutions enable you to
dynamically allocate tape drives in a larger library to NetApp storage systems as needed for
backup or recovery operations. This eliminates the need to dedicate expensive tape devices to
each system.
These solutions help provide essential elements to enterprise customers seeking to maximize
the availability of their NetApp storage. You can replace or upgrade tape devices with no
impact on the system's ability to serve data to clients. Drives can be dynamically added or
removed without requiring any downtime.
For information about certified backup solutions, see the NOW site.

NDMP Terminology and Components

• NDMP client
  – The backup application is the NDMP client.
  – NDMP clients submit requests to an NDMP server, and then receive replies and status back from the NDMP server.
• NDMP server
  – A process or service that runs on the NetApp storage system.
  – The NDMP server processes requests from NDMP clients, and then returns reply and status information to the NDMP client.


NDMP TERMINOLOGY AND COMPONENTS

In the following definitions, the primary storage is the system that performs the NDMP Data
Service and the secondary system is the one that performs the NDMP Tape Service.
• Data Management Application (DMA)—Also called the backup application. The DMA controls
the NDMP session. Veritas NetBackup and Legato Networker are examples of backup
applications.
• NDMP Service—Provides Data Service, Tape Service, and SCSI Service.
• Control Connection—Bidirectional TCP/IP connection that carries external data representation
standard (XDR) encoded NDMP messages between the DMA and the NDMP server. The Control
Connection is analogous to an NDMP session on the storage system.
• Data Connection—Establishes a connection between the two NDMP systems that carry the data
stream; either internal to the NetApp storage system (local) or TCP/IP (remote).
• Data Service—NDMP service that transfers data between the primary storage system (where the
data on disks resides) and the Data Connection.
• Tape Service—NDMP service that transfers data between the secondary storage and the Data
Connection, allowing the DMA to manipulate and access secondary storage.

Typical NDMP Backup Session
[Figure: Typical NDMP backup session. A DMA host (holding the content index) exchanges NDMP control messages over a TCP/IP network with the NDMP Data Service on the primary storage system and the NDMP Tape Service on the secondary storage system. Payload data flows over a separate data connection (TCP/IP or IPC) between the two services, while notifications, file history, and log messages flow back to the DMA.]

TYPICAL NDMP BACKUP SESSION

The figure above represents a storage system (primary) to storage system (secondary) to tape
data protection topology, where the backup operation is driven by a DMA host (NDMP
client).
The DMA opens connections to, and activates NDMP services in, both storage systems.
Control messages to the services configure the services and create a data connection between
them. More control messages initiate and start the backup; the data service creates the
payload (backup image) and writes it to the data connection, where the tape service receives
it.
Log messages and notifications are sent from the services to the DMA.

CONTROL CONNECTION
• XDR encoded
• DMA-server exchanges
• Well-known, registered TCP port (10000)
• DMA “manages” servers through request/reply control exchanges
• Server-initiated log and notification posts (short, unidirectional)
• Server-initiated file history transfers (bulk data, unidirectional)

DATA CONNECTIONS
• Opaque byte stream with no XDR encoding
• Server-server exchanges
• DMA “manages” data connection between peer servers

• Non-reserved TCP ports are assigned by “listening” server
• Server-initiated backup-stream transfers (bulk data, unidirectional)

NDMP Connection Information

• NDMP uses a TCP/IP connection to a dedicated port
• NDMP does not require a CIFS, NFS, HTTP, FCP, or iSCSI protocol license
• The storage system listens for NDMP requests on port 10000 when ndmpd is enabled
• All messages are encoded using the external data representation (XDR) standard (see RFC 1014 for more information)
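The XDR encoding mentioned above boils down to a few simple primitives: 4-byte big-endian integers, and length-prefixed byte strings padded to a 4-byte boundary. The following sketch illustrates those two primitives only; the "message" built at the end is an invented toy, not an actual NDMP message layout.

```python
import struct

def xdr_uint(n: int) -> bytes:
    """Encode an unsigned int as 4 big-endian bytes (XDR, RFC 1014)."""
    return struct.pack(">I", n)

def xdr_string(s: bytes) -> bytes:
    """Encode a variable-length string: 4-byte length, then the data,
    zero-padded to a multiple of 4 bytes."""
    pad = (4 - len(s) % 4) % 4
    return xdr_uint(len(s)) + s + b"\x00" * pad

# A toy "message": a sequence number, a message code, and a text field.
msg = xdr_uint(7) + xdr_uint(0x0107) + xdr_string(b"ndmp")
print(msg.hex())  # 0000000700000107000000046e646d70
```

Every field lands on a 4-byte boundary, which is why NDMP control messages can be decoded unambiguously by either end of the connection.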


NDMP CONNECTION INFORMATION

NDMP Tape Backup
Topologies


NDMP TAPE BACKUP TOPOLOGIES

NDMP Tape Backup Topologies

• Storage system to local tape (direct-attached)
• Storage system to network-attached tape library
• Storage system to storage system to tape
• Storage system to server to tape
• Server to storage system to tape


NDMP TAPE BACKUP TOPOLOGIES

NDMP supports a number of topologies and configurations between backup applications and
storage systems or other NDMP servers providing data (file systems) and tape services.
The NDMP protocol specification allows the following backup configurations:
• Local backup from a NetApp storage system to a direct-attached tape device
• Three-way backup from a NetApp storage system to a network-attached tape library
• Three-way backup from a NetApp storage system through the network to another NetApp storage
system with a local tape device
• Backup from a NetApp storage system through the network to a UNIX or Windows NT backup
server with a local tape device
• Backup from a UNIX or Windows NT server through the network to a NetApp storage system
with a local tape device

Storage System to Local Tape (Direct-Attached)

• The storage system to local tape topology provides the best performance.
• The distance between the storage system and the tape is limited by SCSI/FC.
• SCSI-attached tape drives are dedicated to a single storage system.

[Figure: The backup server exchanges NDMP control messages and file history with the NDMP server (storage system) across the LAN boundary; backup data flows from the storage system directly to its locally attached tape device.]

STORAGE SYSTEM TO LOCAL TAPE (DIRECT-ATTACHED)

In the simplest configuration, a backup application backs up data from a storage system to a
tape subsystem attached to the storage system. The NDMP control connection exists across
the network boundary. The NDMP data connection that exists within the storage system
between the data and tape services is called an NDMP local configuration.

Storage System to Network-Attached Tape Library

• Dynamic drive sharing without additional software
• No distance limit between the source storage system and the tape library
• Performance is dependent on network architecture and storage system resources

[Figure: The backup server exchanges NDMP control messages and file history with the storage system; backup data flows over the network from the storage system to the NDMP-enabled tape library.]

STORAGE SYSTEM TO NETWORK-ATTACHED TAPE LIBRARY

NDMP-enabled tape libraries provide a variation of the three-way configuration. In this case,
the tape library attaches directly to the TCP/IP network, and then communicates with the
backup application and the storage system through an internal NDMP server.

Storage System to Storage System to Tape

• Tape devices are shared between multiple storage systems
• No distance limit between the source storage system and the tape
• Performance is dependent on network architecture

[Figure: The backup server exchanges NDMP control messages and file history with the source storage system; backup data flows over the network to a second storage system with a locally attached tape device.]

STORAGE SYSTEM TO STORAGE SYSTEM TO TAPE

A backup application can also back up data from a storage system to a tape library (a media
changer with one or more tape drives) attached to another storage system. In this case, the
NDMP data connection between the data and tape services is provided by a TCP/IP network
connection. This configuration is referred to as an NDMP three-way storage-system-to-storage-system configuration.

Storage System to Server to Tape

• The tape device is shared across all backups
• No distance limit between the source storage system and the tape
• Performance is dependent on network and server architecture

[Figure: The backup server exchanges NDMP control messages and file history with the storage system; backup data flows over the network from the storage system to a tape device attached to the backup server.]

STORAGE SYSTEM TO SERVER TO TAPE

The storage-system-to-server configuration allows storage system data to be backed up to a tape library attached to the backup application host, or to another data server system.

Server to Storage System to Tape

• Tape devices attached to the storage system are used for all backups
• No distance limit between the source system and the tape
• Performance is dependent on network, server architecture, and storage system resources

[Figure: The backup server exchanges NDMP control messages and file history with the NDMP server; backup data flows over the network from the data server to a tape device attached to the storage system.]

SERVER TO STORAGE SYSTEM TO TAPE

The server-to-storage-system configuration allows server data to be backed up to a tape library attached to a storage system.

Using Tape Devices with NDMP

• Tape devices can be attached through the NetApp storage appliance:

[Figure: The backup server holds an NDMP control connection to the NDMP server; the tape drives and robot are attached to the NDMP server (storage system).]

• Tape devices can be attached through the backup server:

[Figure: The backup server holds an NDMP control connection to the NDMP server; the tape drives and robot are attached to the backup server.]

NOTE: When sharing a tape device with the backup server, always attach/configure the device through the backup server.

USING TAPE DEVICES WITH NDMP

When using NDMP, the storage system can read from or write to the following devices:
• Stand-alone tape drives or tapes in a tape library that is attached to the storage system
• Tape drives or tape libraries attached to the workstation that runs the backup application
• Tape drives or tape libraries attached to a workstation or storage system on your network
• NDMP-enabled tape libraries attached to your network
NOTE: To use NDMP to manage your tape library, you must set the tape stacker autoload
setting to off. Otherwise, the system won’t allow media-changer operations to be controlled
by the NDMP backup application.

NAMING CONVENTIONS FOR TAPE LIBRARIES


The following names are used to refer to tape libraries: mcn or /dev/mcn; sptn or /dev/sptn.
Tape libraries can also be aliased to worldwide names (WWNs).
• To view the tape libraries that are recognized by the system, use sysconfig –m.
• To display the names currently assigned to libraries on the storage system, use storage
show mc.
• To display the aliases of tape drives, use storage show tape.
For more information about tape aliasing and tape commands, see the Data ONTAP Data
Protection Tape Backup and Recovery Guide on the NOW site.

Enabling and
Configuring NDMP


ENABLING AND CONFIGURING NDMP

Enabling and Configuring NDMP
• NDMP is disabled by default. To enable it:
  ndmpd on or options ndmpd.enable on
• The version must match the version configured on the NDMP backup application. To configure it:
  ndmpd version { 2 | 3 | 4 }
• By default, no host access control is configured. To configure it:
  options ndmpd.access
• To configure NDMP authentication methods:
  options ndmpd.authtype
  – A combination of challenge, plaintext, or both
  – SnapVault and SnapMirror management requires challenge

ENABLING AND CONFIGURING NDMP

To enable a storage system for basic management by an NDMP backup application, you must
enable the storage system’s NDMP support, and specify the configured NDMP version of the
backup application, host IP address, and authentication method.
To prepare a storage system for NDMP management, complete the following steps.
1. Enable the NDMP service:
system>options ndmpd.enable on
When disabling ndmpd, the storage system continues processing all requests for sessions already
established, but rejects new sessions.
2. Specify the NDMP version to support on the storage system. This version must match the version
configured on the NDMP backup application server:
system>ndmpd version {2|3|4}
Data ONTAP supports NDMP versions 2, 3, and 4 (4 is the default value).
The storage system and the backup application must agree on a version of NDMP to be used for
each NDMP session. When the backup application connects to the storage system, the storage
system sends the default version back. The application can choose to use that default version and
continue with the session. However, if the backup application uses an earlier version, it begins
version negotiation, asking if each version is supported, to which the storage system responds with
a yes or no.
Because some backup applications do not support version negotiation, the ndmpd version
command controls the maximum and default NDMP version allowed. If you know your backup
application does not support NDMP version 4, and does not negotiate versions, you can use this
command to define the maximum version Data ONTAP supports so that the application can
operate correctly.
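The version negotiation described above can be sketched as follows. This is an illustrative model, not Data ONTAP code: the function names and the client's preference list are invented, but the logic mirrors the text, with the server capped at a configured maximum version and the client probing for a version both sides accept.

```python
SERVER_MAX_VERSION = 4          # set with: ndmpd version {2|3|4}
SUPPORTED = {2, 3, 4}           # versions Data ONTAP can speak

def server_supports(version: int) -> bool:
    """Server side: answer a client's 'do you support version X?' probe."""
    return version in SUPPORTED and version <= SERVER_MAX_VERSION

def negotiate(client_versions):
    """Client side: probe its versions, highest first, until one is accepted."""
    for v in sorted(client_versions, reverse=True):
        if server_supports(v):
            return v
    return None  # no common version: the session cannot proceed

print(negotiate([2, 3]))  # an older, v3-capable client settles on 3
```

A backup application that cannot negotiate at all simply uses the server's default, which is why lowering `ndmpd version` is the workaround for such applications.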

3. If you want to specify a restricted set of NDMP backup-application hosts that can connect to the
storage system, set the following option:
system>options ndmpd.access {all|legacy|host[!]=hosts|if[!]=interfaces}
Where:
• all is the default value, which permits NDMP sessions with any host
• legacy restores the values in effect before a Data ONTAP version upgrade; in the case of Data ONTAP 6.2, the legacy value is equal to all
• host=hosts allows a specified host or a comma-separated list of hosts to run NDMP
sessions on this storage system; the hosts can be specified by either host name or IP address
• host!=hosts blocks a specified host or a comma-separated list of hosts from running NDMP sessions on this storage system; the hosts can be specified by either host name or IP address
• if=interfaces allows NDMP sessions through a specified interface or a comma-separated list of interfaces on this storage system
• if!=interfaces blocks NDMP sessions through a specified interface or a comma-separated list of interfaces on this storage system
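The host-matching behavior of these option values can be sketched as a small filter. This is an illustration of the rules as described above, not Data ONTAP source; the sample option strings and host names are invented for the demo.

```python
def ndmp_access_allowed(option: str, client: str) -> bool:
    """Decide whether a client host may open an NDMP session,
    given an ndmpd.access-style option value (host forms only)."""
    if option in ("all", "legacy"):          # legacy equals all on 6.2
        return True
    if option.startswith("host!="):          # deny-list form
        blocked = option[len("host!="):].split(",")
        return client not in blocked
    if option.startswith("host="):           # allow-list form
        allowed = option[len("host="):].split(",")
        return client in allowed
    return False  # unrecognized value: deny, in this sketch

print(ndmp_access_allowed("host=10.1.1.5,backup1", "backup1"))  # True
print(ndmp_access_allowed("host!=rogue", "backup1"))            # True
```

The `if=` / `if!=` forms work the same way, except the match is on the storage system interface the request arrived through rather than on the client host.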
4. Specify the authentication method through which users are allowed to start NDMP sessions with
the storage system. This setting must include an authentication type supported by the NDMP
backup application:
system>options ndmpd.authtype {challenge|plaintext|challenge,plaintext}
The challenge authentication method is generally the preferred, and more secure,
authentication method. Challenge is the default type.
With the plaintext authentication method, the login password is transmitted as clear text.
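The security difference is easy to see in a generic challenge-response sketch: only a digest derived from the password crosses the wire, never the password itself. Note this is NOT the exact NDMP_AUTH_MD5 digest layout defined by the NDMP specification; the password and the digest construction here are simplified for illustration.

```python
import hashlib
import os

PASSWORD = b"example-ndmp-password"   # hypothetical shared secret

def make_challenge() -> bytes:
    """Server side: send a fresh random challenge for each login attempt."""
    return os.urandom(64)

def respond(password: bytes, challenge: bytes) -> bytes:
    """Both sides: combine the challenge with the secret and hash it."""
    return hashlib.md5(challenge + password).digest()

challenge = make_challenge()
client_digest = respond(PASSWORD, challenge)   # this is what crosses the wire
server_digest = respond(PASSWORD, challenge)   # server computes it locally
print(client_digest == server_digest)          # matching digests authenticate
```

Because the challenge changes every time, a captured digest cannot be replayed, which is why challenge is the preferred authtype and plaintext is best avoided.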

Enabling and Configuring NDMP (Cont.)
• Creating a local backup account:
  useradmin useradd <backupuser>
• Setting the NDMP password length:
  By default, Data ONTAP generates a 16-character password. If the DMA does not support this, reduce the password length to 8 characters. To set the password length:
  options ndmpd.password_length { 8 | 16 }
• Generating an encoded NDMP password:
  ndmpd password <backupuser>
• Enabling the NDMP connection log:
  options ndmpd.connectlog.enabled { off | on }
• Including or excluding files with a changed ctime from incremental dumps:
  options ndmpd.ignore_ctime.enabled { on | off }

ENABLING AND CONFIGURING NDMP (CONT.)

5. If you have operators without root privileges on the storage system that will be carrying out tape-
backup operations through the NDMP backup application, then add a new backup user to the
Backup Operators useradmin group list:
system>useradmin user add backupuser -g "Backup Operators"
6. Specify an 8- or 16-character NDMP password length (the default value is 16):
system>options ndmpd.password_length
7. Generate an NDMP password for the new user:
system>ndmpd password backupuser
NOTE: If you change the password to your regular storage system account, repeat this procedure
to obtain your new system-generated, NDMP-specific password.
8. Enable logging of NDMP connection attempts with the storage system:
system>options ndmpd.connectlog.enabled on
This enables Data ONTAP to log NDMP connection attempts in the /etc/messages file.
These entries can help you determine if and when authorized or unauthorized users are attempting
to start NDMP sessions. The default for this option is off.

Log entries for attempted NDMP connections or operations include the following fields:
• Time
• Thread
• NDMP request and action (allow or refuse)
• NDMP version
• Session ID
• Source IP (address where the NDMP request originated)
• Destination IP (address of the storage system receiving the NDMP request)
• Source port (port through which the NDMP request was transmitted)
• Storage system port through which the NDMP request was received
Example:
Friday Aug 25 16:45:17 GMT ndmpd.access allowed for version = 4, sessid = 34,
from src ip = 172.29.19.40, dst ip = 172.29.19.95, src port = 63793,
dst port = 10000.
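A small parsing sketch can pull these fields out of such an entry in /etc/messages, for example to audit which hosts are attempting NDMP sessions. The regular expression and field names below are our own, matched against a cleaned-up copy of the example line; real log lines may vary.

```python
import re

LINE = ("ndmpd.access allowed for version = 4, sessid = 34, "
        "from src ip = 172.29.19.40, dst ip = 172.29.19.95, "
        "src port = 63793, dst port = 10000.")

PATTERN = re.compile(
    r"ndmpd\.access (?P<action>allowed|refused).*?"
    r"version = (?P<version>\d+), sessid = (?P<sessid>\d+).*?"
    r"src ip = (?P<src_ip>[\d.]+), dst ip = (?P<dst_ip>[\d.]+), "
    r"src port = (?P<src_port>\d+), dst port = (?P<dst_port>\d+)"
)

m = PATTERN.search(LINE)
# Report whether the attempt was allowed, and from where.
print(m.group("action"), m.group("src_ip"), m.group("dst_port"))
```

Refused entries parsed the same way would reveal unauthorized hosts probing port 10000.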
9. Include or exclude files with the ctime changed from incremental dumps according to your
backup requirements:
system>options ndmpd.ignore_ctime.enabled { on | off }
When this option is on, files whose ctime has changed can be excluded from storage system incremental dumps, because other processes (such as virus scanning) often alter the ctime of files.
When this option is off, backup on the storage system includes all files with a change or modified time later than that of the previous-level dump.

NDMP Status and Session Information

• Displaying NDMP status and session information
  – To determine whether a session is operating as expected:
    ndmpd status [session]
  – To debug an NDMP session if there are problems:
    ndmpd probe [session]
• Terminating NDMP sessions
  – To terminate a specific session:
    ndmpd kill session#
  – To terminate all NDMP sessions:
    ndmpd killall

NDMP STATUS AND SESSION INFORMATION

DISPLAYING NDMP SESSION INFORMATION


To display NDMP session information, use the following command:
system>ndmpd status [session]
where session is the specific session number you want the status of, from 0 to 99.
To display the status of all current sessions, leave session blank.
In the following example, the command displays information about session 4:
system>ndmpd status 4
ndmpd ON.
Session: 4
Active
version: 3
Operating on behalf of primary host.
tape device: not open
mover state: Idle
data state: Idle
data operation: None
To display detailed NDMP session information, use the following command:
system>ndmpd probe [session]

In the following example, the command displays detailed status information for session 4:
system>ndmpd probe 4
ndmpd ON.
Session: 4
isActive: TRUE
protocol version: 3
effHost: Local
authorized: FALSE
client addr: 10.10.10.12.47154
spt.device_id: none
spt.ha: -1
spt.scsi_id: -1
spt.scsi_lun: -1
tape.device: rst0a
tape.mode: Read/Write
mover.state: Active
mover.mode: Read
mover.pauseReason N/A
mover.haltReason N/A
mover.recordSize: 10240
mover.recordNum: 315620
mover.dataWritten: 3231948800
mover.seekPosition: 0
mover.bytesLeftToRead: 0
mover.windowOffset: 0
mover.windowLength: -1
mover.position: 0
mover.connect.addr_type:LOCAL
data.operation: Backup
data.state: Active
data.haltReason: N/A
data.connect.addr_type: LOCAL
data.bytesProcessed: 3231989760

TERMINATING NDMP SESSIONS
To terminate a specific session:
system>ndmpd kill session#
where session# is the specific NDMP session you want to terminate, from 0 to 99.
To terminate all NDMP sessions:
system>ndmpd killall
These kill commands allow non-responding sessions to be cleared without the need for a reboot, because the ndmpd off command waits until all sessions are inactive before turning off the NDMP service.

NDMP Dump and Restore Format
The Data ONTAP dump adheres to the Solaris ufsdump format.
• Dump format:
  – Phases I and II: Build the map of files and directories, and collect file history and attribute information
  – Phase III: Dump directory entries to tape
  – Phase IV: Dump files
  – Phase V: Dump ACLs
• Restore format:
  – Phase I: Restore directories
  – Phase II: Restore files

NDMP DUMP AND RESTORE FORMAT

Data ONTAP dump adheres to the Solaris ufsdump format.


Before taking a look at the NDMP dump and restore format, it is important to understand the
following terms:
• Every file on a file system has an identifier associated with it, called the inode. A file system
typically has inodes preallocated on it.
• The inode file is a special file on the file system that contains a list of all the inodes and their
details.
• The inode map is an array with as many elements as there are inodes on the volume. The
inode number serves as the index for its corresponding entry in the array. A value of 1 in an
entry indicates that the corresponding file will be present in a particular backup.
• The offset map is an array with as many elements as there are inodes on the volume. The
inode number serves as the index for its corresponding entry in the array. If an inode exists in a
backup, its corresponding entry contains the physical address on tape that marks the beginning of
the file data in the backup image.

PHASES AND SUBPHASES OF A DUMP OPERATION


Phase I generates the list of files that need to be backed up. The output of this phase is the
inode map. An inode that is to be backed up has its corresponding entry set to 1; an inode
that is not to be backed up has its entry set to 0.
Phase II is essentially a no-op for level 0 backups. It does nothing except rewrite the inode
map generated in Phase I to the tape.

Phase III writes the entire directory structure for what is being backed up to the tape. Phase
III includes two subphases:
• Phase IIIa is the early ACLs phase. This phase dumps ACLs for the data set to the tape. This step
could take more time if a lot of files in the data set have ACLs.
• Phase IIIb was introduced in Data ONTAP 6.4. This phase is executed only for NDMP backups
that have File History turned on. The output of this phase is the offset map. For each file on any
given backup, the offset map contains the physical address on the tape that marks the beginning of
the file in the backup image.
Phase IV dumps the actual file data on tape. This phase operates in the inode order. A smaller
inode number is guaranteed to be found before a larger inode number.
Phase V is a duplicate of Phase IIIa. This is what traditionally existed in the NetApp native
dump, and this phase is retained for backward compatibility.
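The relationship between the inode map (Phase I) and the offset map (Phase IIIb) can be illustrated with a short Python sketch. The data structures, sizes, and function names below are invented for illustration only; they are not Data ONTAP internals.

```python
# Illustrative model of the dump-phase bookkeeping described above.
# All structures are simplified stand-ins, not Data ONTAP internals.

def build_inode_map(num_inodes, inodes_to_back_up):
    """Phase I: mark each inode 1 if it is part of the backup, else 0."""
    inode_map = [0] * (num_inodes + 1)  # indexed by inode number
    for ino in inodes_to_back_up:
        inode_map[ino] = 1
    return inode_map

def build_offset_map(inode_map, file_sizes, start_offset=0):
    """Phase IIIb: record the tape offset where each backed-up file begins.

    Phase IV writes files in ascending inode order, so the offsets can be
    computed in a single pass over the inode map.
    """
    offset_map = [None] * len(inode_map)
    offset = start_offset
    for ino, included in enumerate(inode_map):
        if included:
            offset_map[ino] = offset
            offset += file_sizes[ino]
    return offset_map

inode_map = build_inode_map(8, inodes_to_back_up={2, 5, 7})
offset_map = build_offset_map(inode_map, file_sizes={2: 4096, 5: 1024, 7: 512})
```

Because Phase IV is guaranteed to dump a smaller inode number before a larger one, a single pass is enough to compute every file's starting position on tape.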

PHASES IN A RESTORE OPERATION


Phase I: Restore directories.
Phase II: Restore files.

Dump and Restore Event Logs

• Event logging is off by default


To enable event logging:
backup.log.enable { off | on }
• Event log files
– Stored in the /etc/log/backup log file
– Rotated once a week
– Saved for up to six weeks
• Event log message format:
type timestamp identifier event (event_info)


DUMP AND RESTORE EVENT LOGS

Data ONTAP automatically logs significant events and the times at which they occur during
dump and restore operations. You might want to view event log files to verify if a backup was
successful, to gather statistics on backup operations, or to use information contained in past
event-log files to help diagnose problems with dump and restore operations.
Event logging is turned off by default. To enable event logging:
system>options backup.log.enable on
All dump and restore events are recorded in the log file named backup in the /etc/log/ directory.
Once a week, log files are rotated. The /etc/log/backup file is copied to
/etc/log/backup.0, the /etc/log/backup.0 file is copied to /etc/log/backup.1,
and so on. The system saves the log files for up to six weeks. This means you can have up to
seven message files (/etc/log/backup.0 through /etc/log/backup.5, plus the current
/etc/log/backup file).
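The weekly rotation scheme described above can be sketched in a few lines of Python. This is an illustration of the rotation logic only; file names are modeled on the guide's description, and renames are used for the shifted files (the guide describes the files as being copied, but the end state is the same).

```python
import os

def rotate_backup_logs(log_dir, keep=6):
    """Rotate the backup log the way described above:
    backup -> backup.0, backup.0 -> backup.1, ..., dropping backup.5."""
    oldest = os.path.join(log_dir, "backup.%d" % (keep - 1))
    if os.path.exists(oldest):
        os.remove(oldest)  # backup.5 falls off the end after six weeks
    # Shift backup.4 -> backup.5, ..., backup.0 -> backup.1
    for n in range(keep - 2, -1, -1):
        src = os.path.join(log_dir, "backup.%d" % n)
        if os.path.exists(src):
            os.rename(src, os.path.join(log_dir, "backup.%d" % (n + 1)))
    current = os.path.join(log_dir, "backup")
    if os.path.exists(current):
        os.rename(current, os.path.join(log_dir, "backup.0"))
```

After six rotations a message can therefore appear in up to seven files: the current backup file plus backup.0 through backup.5.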

EVENT LOG MESSAGE FORMAT


For each event, a message is written to the backup log file in the following format:
type timestamp identifier event (event_info)
Example:
dmp Fri Aug 25 18:54:56 PDT /vol/vol0/home(5) Start (level 0,
NDMP)

Each log message begins with one of the type indicators:
Type Meaning
log Logging event
dmp Dump event
rst Restore event

The timestamp field shows the date and time of the event.
The identifier field for a dump event includes the dump path and the unique ID for the
dump. The identifier field for a restore event uses only the restore destination path name
as a unique identifier. Logging-related event messages do not include an identifier field.
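A log line in this format can be split into its fields with a small parser. The regular expression below is a hypothetical sketch built from the format shown above; it targets dmp and rst lines (logging events, which lack the identifier field, would need separate handling), and real log lines may vary.

```python
import re

# Sketch of a parser for the documented format:
#   type timestamp identifier event (event_info)
LINE_RE = re.compile(
    r"^(?P<type>dmp|rst|log)\s+"
    r"(?P<timestamp>\w{3}\s+\w{3}\s+\d+\s+[\d:]+\s+\w+)\s+"  # e.g. Fri Aug 25 18:54:56 PDT
    r"(?P<identifier>\S+)\s+"                                 # dump path or restore destination
    r"(?P<event>\w+)"                                         # Start, End, Phase_change, ...
    r"(?:\s+\((?P<event_info>[^)]*)\))?"                      # optional parenthesized info
)

def parse_backup_log_line(line):
    """Return the fields of one backup log line as a dict, or None."""
    m = LINE_RE.match(line.strip())
    return m.groupdict() if m else None

rec = parse_backup_log_line(
    "dmp Fri Aug 25 18:54:56 PDT /vol/vol0/home(5) Start (level 0, NDMP)")
```

Such a parser is one way to gather statistics on past backups or to check programmatically whether a dump ended with an End or an Abort event.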

START AND STOP LOGGING EVENTS


The event field of a message that begins with log contains one of the following events:
• Start_Logging: Indicates the beginning of logging, or that logging has been turned back on
after being disabled
• Stop_Logging: Indicates that logging has been turned off

DUMP EVENTS
The event field for a dump event contains an event type followed by event-specific
information in parentheses. The following list describes the events, their meanings, and the
related event information that might be recorded for a dump operation.
• Start: a dump or NDMP dump begins. Event information: the dump level and the type of dump.
• Restart: a dump restarts. Event information: the dump level.
• End: the dump completed successfully. Event information: the amount of data processed.
• Abort: the operation aborts. Event information: the amount of data processed.
• Options: specified options are listed. Event information: all options and their associated
values, including NDMP options.
• Tape_open: the tape is open for read/write. Event information: the new tape device name.
• Tape_close: the tape is closed for read/write. Event information: the tape device name.
• Phase_change: the dump is entering a new processing phase. Event information: the new phase name.
• Error: the dump encounters an unexpected event. Event information: the error message.
• Snapshot: a snapshot is created or located. Event information: the name and time of the snapshot.
• Base_dump: a base dump entry in the /etc/dumpdates file has been located (for incremental
dumps only). Event information: the level and time of the base dump.

The log file for a dump operation begins with either a Start or Restart event and ends
with either an End or Abort event.
The following is an example of the output for a dump operation:
dmp Fri Aug 25 01:11:22 GMT /vol/vol0/(1) Start (Level 0)
dmp Fri Aug 25 01:11:22 GMT /vol/vol0/(1) Options (b=63,
B=1000000,u)
dmp Fri Aug 25 01:11:22 GMT /vol/vol0/(1) Snapshot
(snapshot_for_backup.6, Sep 20 01:11:21 GMT)
dmp Aug 25 01:11:22 GMT /vol/vol0/(1) Tape_open (nrst0a)
dmp Aug 25 01:11:22 GMT /vol/vol0/(1) Phase_change (I)
dmp Aug 25 01:11:24 GMT /vol/vol0/(1) Phase_change (II)
dmp Aug 25 01:11:24 GMT /vol/vol0/(1) Phase_change (III)
dmp Aug 25 01:11:26 GMT /vol/vol0/(1) Phase_change (IV)
dmp Aug 25 01:14:19 GMT /vol/vol0/(1) Tape_close (nrst0a)
dmp Aug 25 01:14:20 GMT /vol/vol0/(1) Tape_open (nrst0a)
dmp Aug 25 01:14:54 GMT /vol/vol0/(1) Phase_change (V)
dmp Aug 25 01:14:54 GMT /vol/vol0/(1) Tape_close (nrst0a)
dmp Aug 25 01:14:54 GMT /vol/vol0/(1) End (1224 MB)

RESTORE EVENTS
The event field for a restore event contains an event type followed by event-specific
information in parentheses. The following list describes the events, their meanings, and the
related event information that might be recorded for a restore operation.
• Start: a restore or NDMP restore begins. Event information: the restore level and the type of restore.
• Restart: a restore restarts. Event information: the restore level.
• End: the restore completed successfully. Event information: the number of files and amount of
data processed.
• Abort: the operation aborts. Event information: the number of files and amount of data processed.
• Options: specified options are listed. Event information: all options and their associated
values, including NDMP options.
• Tape_open: the tape is open for read/write. Event information: the new tape device name.
• Tape_close: the tape is closed for read/write. Event information: the tape device name.
• Phase_change: the restore is entering a new processing phase. Event information: the new phase name.
• Error: the restore encounters an unexpected event. Event information: the error message.

The log file for a restore operation begins with either a Start or Restart event and ends
with either an End or Abort event.

The following is an example of the output for a restore operation:


rst Fri Aug 25 02:24:22 GMT /vol/rst_vol/ Start (level 0)
rst Fri Aug 25 02:24:22 GMT /vol/rst_vol/ Options (r)
rst Fri Aug 25 02:24:22 GMT /vol/rst_vol/ Tape_open (nrst0a)
rst Fri Aug 25 02:24:23 GMT /vol/rst_vol/ Phase_change (Dirs)
rst Fri Aug 25 02:24:24 GMT /vol/rst_vol/ Phase_change (Files)
rst Fri Aug 25 02:39:33 GMT /vol/rst_vol/ Tape_close (nrst0a)
rst Fri Aug 25 02:39:33 GMT /vol/rst_vol/ Tape_open (nrst0a)
rst Fri Aug 25 02:44:22 GMT /vol/rst_vol/ Tape_close (nrst0a)
rst Fri Aug 25 02:44:22 GMT /vol/rst_vol/ End (3516 files, 1224
MB)

The following is an example of the output for an aborted restore operation:
rst Fri Aug 25 02:13:54 GMT /rst_vol/ Start (Level 0)
rst Fri Aug 25 02:13:54 GMT /rst_vol/ Options (r)
rst Fri Aug 25 02:13:54 GMT /rst_vol/ Tape_open (nrst0a)
rst Fri Aug 25 02:13:55 GMT /rst_vol/ Phase_change (Dirs)
rst Fri Aug 25 02:13:56 GMT /rst_vol/ Phase_change (Files)
rst Fri Aug 25 02:23:40 GMT /vol/rst_vol/ Error (Interrupted)
rst Fri Aug 25 02:23:40 GMT /vol/rst_vol/ Tape_close (nrst0a)
rst Fri Aug 25 02:23:40 GMT /vol/rst_vol/ Abort (3516 files, 598 MB)

Using the ndmpcopy Command to
Copy Data
• The ndmpcopy command:
– Used to transfer data between storage systems
that support NDMP v3 or v4
– Can carry out full and incremental transfers
– Limits incremental transfers to a maximum of
two levels (one full and up to two incremental)
– Applies NetApp to NetApp only
– Syntax:
ndmpcopy [options] source_hostname:source_path
destination_hostname:destination_path

© 2008 NetApp. All rights reserved. 25

USING THE NDMPCOPY COMMAND TO COPY DATA

The ndmpcopy command enables you to transfer file system data between storage systems
that support NDMP v3 or v4 and the UNIX file system (UFS) dump format.
Using the ndmpcopy command, you can carry out both full and incremental data transfers.
However, incremental transfers are limited to a maximum of two levels (one full and no more
than two incremental). You can transfer full or partial volumes, qtrees, or directories, but not
individual files.
To copy data within a storage system or between storage systems using ndmpcopy, use the
following command from the source or the destination system, or from a storage system that
is not the source or the destination:
system>ndmpcopy [options] source_hostname:source_path
destination_hostname:destination_path
where source_hostname and destination_hostname can be host names or IP
addresses. If destination_path does not specify a volume (or specifies a
nonexistent volume), the root volume is used.

The following list describes the available options for the ndmpcopy command.
• -sa username:[password]: source authorization; specifies the user name and password for
connecting to the source storage system.
• -da username:[password]: destination authorization; specifies the user name and password for
connecting to the destination storage system.
• -st {challenge|text}: sets the source authentication type to be used when connecting to the
source storage system.
• -dt {challenge|text}: sets the destination authentication type to be used when connecting to
the destination storage system. By default, challenge is the authentication type used. The text
authentication type exchanges the user name and password in clear text; the challenge
authentication type exchanges them in encrypted form.
• -l level: sets the dump level used for the transfer to the specified value of level. Valid
values are 0, 1, and 2, where 0 indicates a full transfer and 1 or 2 indicates an incremental
transfer. The default is 0.
• -d: enables ndmpcopy debug log messages (which appear in the root volume /etc/log
directory) to be generated. The ndmpcopy debug log file names are in the form
ndmpcopy.yyyymmdd.
• -f: enables forced mode. This mode enables overwriting system files in the /etc directory
on the root volume.
• -h: prints the help message.
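How these options combine into a command line can be sketched with a small helper. This builds the argument list only, for illustration; it performs no transfer, and the validation shown (dump level 0 to 2) is the only check it makes.

```python
def build_ndmpcopy_command(src_host, src_path, dst_host, dst_path,
                           level=0, sa=None, da=None, st=None, dt=None):
    """Assemble an ndmpcopy argument list from the documented options.

    Illustrative sketch only: it mirrors the documented syntax, nothing more.
    """
    if level not in (0, 1, 2):
        raise ValueError("dump level must be 0, 1, or 2")
    cmd = ["ndmpcopy"]
    if sa:
        cmd += ["-sa", sa]  # source user:password
    if da:
        cmd += ["-da", da]  # destination user:password
    if st:
        cmd += ["-st", st]  # challenge | text
    if dt:
        cmd += ["-dt", dt]
    cmd += ["-l", str(level)]
    cmd += ["%s:%s" % (src_host, src_path), "%s:%s" % (dst_host, dst_path)]
    return cmd

# Hypothetical host names, used only to show the resulting argument order
cmd = build_ndmpcopy_command("systemA", "/vol/vol0", "systemB", "/vol/vol1", level=1)
```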

NDMP Online
Documentation


NDMP ONLINE DOCUMENTATION

NetApp strongly recommends you read the following documents to gain a complete
understanding of NDMP and its integration with other partner backup solutions. All of these
references are available on the NOW site.

TECHNICAL REPORTS
TR-3066: Data Protection Strategies for Network Appliances Storage Systems

NDMP SPECIFICATIONS
http://www.ndmp.org

MANUAL
Data Protection Tape Backup and Recovery Guide for Data ONTAP (latest release
recommended)

Exercise
Module 16: NDMP Fundamentals
Estimated Time: 30 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

Active-Active



MODULE 17: ACTIVE-ACTIVE CONTROLLER CONFIGURATION

Active-Active
Controller
Configuration
Module 17
Data ONTAP® 7.3 Fundamentals

ACTIVE-ACTIVE CONTROLLER CONFIGURATION

Module Objectives

By the end of this module, you should be able to:


• Describe active-active controller configuration
• Explain how the active-active configuration works
• Describe software requirements for active-active configurations
• List three modes of operation for active-active configurations
• Describe the effects on client connections of failover and giveback operations
• Describe best practices for active-active controller configurations

MODULE OBJECTIVES

Active-Active Controller Configuration

• Consists of two identical storage controllers, with each:
– Connected to its own disk shelves
– Connected to the other's disk shelves
– Storing and saving data independently during normal operations
• If a storage controller fails, the surviving partner serves the data of the failed controller


ACTIVE-ACTIVE CONTROLLER CONFIGURATION

An active-active configuration is two storage systems (nodes) whose controllers are


connected to each other either directly or through switches.
The nodes are connected to each other through a cluster adapter or NVRAM adapter, which
allows one node to serve data to the disks on its failed partner node. Each node continually
monitors its partner, mirroring data for the partner’s NVRAM.

Active-Active is for High Availability

Configuring storage systems in an active-active controller configuration provides the
following high-availability benefits:
• Fault tolerance
• Nondisruptive software upgrades
• Nondisruptive hardware maintenance


ACTIVE-ACTIVE IS FOR HIGH AVAILABILITY

BENEFITS OF ACTIVE-ACTIVE CONFIGURATIONS


Active-active configurations provide fault tolerance and the ability to perform nondisruptive
upgrades and maintenance.
Active-active configurations provide the following benefits:
• Fault tolerance—When one node fails or becomes impaired, a takeover occurs and the partner
node continues to serve the data of the failed node.
• Nondisruptive software upgrades—When you halt one node and allow takeover, the partner
node continues to serve data for the halted node, allowing you to upgrade the halted node.
• For more information about nondisruptive software upgrades, see the Data ONTAP Upgrade
Guide.
• Nondisruptive hardware maintenance—When you halt one node and allow takeover, the
partner node continues to serve data for the halted node, allowing you to replace or repair
hardware on the halted node.

Configuration Characteristics

• Connected through a cluster interconnect
• Uses two or more loops
• Each node owns its own spare disks
• Each node has two mailbox disks on the root volume


CONFIGURATION CHARACTERISTICS

CHARACTERISTICS OF NODES IN AN ACTIVE-ACTIVE CONFIGURATION


Nodes in an active-active configuration have a number of common characteristics, regardless
of what type of active-active configuration. These characteristics include:
• The nodes are connected to each other through a separate cluster interconnect consisting of
adapters and cables, through which they do the following:
• Continually verify that the other node is functioning
• Mirror log data for the partner’s NVRAM
• Synchronize the partner node’s time
• The nodes use two or more loops in which:
• Each node manages its own disks
• Each node in takeover mode manages its partner’s disks
• NOTE: For systems using software-based disk ownership, disk ownership is established by
Data ONTAP or an administrator, rather than the location of the disk on a disk shelf. For more
information about disk ownership, see the Data ONTAP Storage Management Guide.
• The nodes own their spare disks and do not share them with the other node
• The nodes each have two mailbox disks on the root volume (four mailbox disks if the root
volume is mirrored using SyncMirror) that are used to perform the following tasks:
• Maintain consistency between the pair of nodes
• Continually verify that the partner node is running or whether it has performed a takeover
• Store configuration information not specific to any particular node
• The nodes can reside on the same Windows domain or on different domains

Types of Active-Active Configurations

• Standard active-active―Local node connected to partner node
• SyncMirror―Standard active-active configuration plus synchronously written mirrors of
volumes called “plexes”
• MetroCluster―An extended version of SyncMirror for surviving site outages
– Distance between partners can be up to 100 km
– Uses specific Brocade switch models


TYPES OF ACTIVE-ACTIVE CONFIGURATIONS

STANDARD
In a standard active-active configuration, Data ONTAP ensures that each node monitors the
functioning of its partner through a heartbeat signal sent between the nodes. The NVRAM
data from one node is mirrored by its partner and each node can take over the partner’s disks
if the partner node fails. Also, the nodes synchronize each other’s time.

ACTIVE-ACTIVE WITH SYNCMIRROR


Mirroring data protects it from the following problems that could cause data loss:
• The failure or loss of two or more disks in a RAID 4 aggregate
• The failure or loss of three or more disks in a RAID-DP aggregate
In addition, the failure of an FC-AL adapter, loop, or disk shelf module does not require a
failover in a mirrored, active-active configuration.
Similar to standard active-active configurations, if either node in a mirrored active-active
configuration becomes impaired or cannot access its data, the other node automatically serves
the impaired node’s data until the problem is corrected.

METROCLUSTER
MetroCluster provides the same advantages of mirroring as mirrored active-active
configurations, with the additional ability to initiate failover if an entire site becomes lost or
unavailable.

If there is a failure or loss of two or more disks in a RAID 4 aggregate, or three or more disks
in a RAID-DP aggregate, your data is protected.
The failure of an FC-AL adapter, loop, or ESH2 module does not require a failover.
In addition, a MetroCluster enables you to use a single command to initiate a failover if an entire
site becomes lost or unavailable. If a disaster occurs at one of the node locations and destroys
your data there, your data not only survives on the other node, but can also be served by that
node while you address the issue or rebuild the configuration.

Requirements for Standard Active-Active

• Architecture compatibility
• Storage capacity
• Disk and disk shelf compatibility
• Cluster interconnect adapters and cables installed
• Nodes attached to the same networks
• Same software licensed and enabled


REQUIREMENTS FOR STANDARD ACTIVE-ACTIVE

SETUP REQUIREMENTS AND RESTRICTIONS FOR STANDARD ACTIVE-ACTIVE


CONFIGURATIONS

STORAGE CAPACITY
The number of disks in a standard active-active configuration must not exceed the maximum
configuration capacity. In addition, the total amount of storage attached to each node must not
exceed the capacity of a single node.
To determine your maximum configuration capacity, see the System Configuration Guide at
http://now.netapp.com/NOW/knowledge/docs/hardware/hardware_index.shtml.
NOTE: When a failover occurs, the takeover node temporarily serves data from all the
storage in the active-active configuration. When the single-node capacity limit is less than the
total active-active configuration capacity limit, the total disk space in a cluster can be greater
than the single-node capacity limit. It is acceptable for the takeover node to temporarily serve
more than the single-node capacity would normally allow, as long as it does not own more
than the single-node capacity.

DISKS AND DISK-SHELF COMPATIBILITY


• Both FC and SATA storage are supported in standard active-active configurations, as long as the
two storage types are not mixed on the same loop.
• If needed, a node can have only FC storage and the partner node can have only SATA storage.
• Cluster interconnect adapters and cables must be installed.
• Nodes must be attached to the same network and the NICs must be configured correctly.
• System features such as CIFS, NFS, or SyncMirror must be licensed and enabled on both
nodes.

NOTE: If a takeover occurs, the takeover node can only provide the functionality for the licenses
installed on it. If the takeover node does not have a license that was being used by the partner node to
serve data, your active-active configuration loses functionality after a takeover.

Standard Active-Active Configuration
[Diagram: two controllers in an active-active configuration, attached to the network and joined
by a cluster interconnect; FC-AL A loops and B loops connect the controllers to the disk shelves]


STANDARD ACTIVE-ACTIVE CONFIGURATION

CONFIGURATION VARIATIONS
The following list describes some configuration variations that are supported for standard
active-active configurations:
• Asymmetrical configurations—One node has more storage than the other. This configuration
is supported as long as neither node exceeds the maximum capacity limit.
• Active/passive configurations—The passive node has only a root volume, while the active
node has all the remaining storage, and services all data requests during normal operation. The
passive node responds to data requests only if it has taken over the active node.
• Shared loops or stacks—If your standard active-active configuration is using software-based
disk ownership, you can share a loop or stack between the two nodes. This is particularly useful
for active/passive configurations.
• Multipath Storage—Provides a redundant connection from each node to each disk. This
configuration can prevent some types of failovers.

Enabling the License

1. License
Example:
license add abcedfg
2. Reboot
Example:
reboot
3. Enable the service on one of the two systems
Example:
cf enable
4. Check the status
Example:
cf status


ENABLING THE LICENSE

To add the license, enter the following command on both node consoles for each required
license:
license add xxxxxx
where xxxxxx is the license code you received for the feature.
To reboot both nodes, enter the following command:
reboot
To enable the license, enter the following command on the local node console:
cf enable
To verify that controller failover is enabled, enter the following command on each node
console:
cf status

Setting Matching Node Options

To set matching node options:


1. View and note the values of the options for
local and partner nodes.
2. Verify that the options settings are the same.
3. Correct any mismatched options.


SETTING MATCHING NODE OPTIONS

Because some Data ONTAP options must be the same on both the local and partner nodes,
you should use the options command to check these options on each node, and change them
as necessary.
To set matching node options, complete the following steps:
1. Display and record option values on the local and partner nodes by entering the following
command on each console:
options
The current option settings for the node are displayed on the console. The output displayed is
similar to the following:
options autosupport.doit DONT
options autosupport.enable on
2. Verify that the options are set to the same value for both nodes. The comments in the output
are as follows:
• Value might be overwritten in takeover
• Same value required in local+partner
• Same value in local+partner recommended
3. Correct any mismatched options using the following command:
options option_name option_value
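The comparison in step 2 is easy to automate once the option values from each console are collected. The sketch below assumes the values have already been gathered into simple dicts; option names here are examples, and parsing the real `options` output is left out.

```python
def find_mismatched_options(local_opts, partner_opts, must_match):
    """Return the options in `must_match` whose values differ between nodes.

    Inputs are plain dicts of option name -> value; a missing option is
    treated as a mismatch against its partner's value.
    """
    mismatched = {}
    for name in must_match:
        local_value = local_opts.get(name)
        partner_value = partner_opts.get(name)
        if local_value != partner_value:
            mismatched[name] = (local_value, partner_value)
    return mismatched

# Example option names; real deployments would check every option whose
# comment says the same value is required on local and partner.
mismatches = find_mismatched_options(
    {"timezone": "US/Pacific", "ndmpd.enable": "on"},
    {"timezone": "US/Eastern", "ndmpd.enable": "on"},
    must_match=["timezone", "ndmpd.enable"])
```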

Parameters That Must Be the Same

Parameter Setting for


Date date, rdate

NDMP NDMP (on or off)

Published route table route

Route routed (on or off)

Time zone timezone


PARAMETERS THAT MUST BE THE SAME

The parameters listed in the figure above must be the same on both nodes so that takeover is
smooth and data is correctly transferred between the nodes.

Three Modes of Operation

In active-active controller configurations, there are three modes of operation:
1. Normal
2. Takeover
3. Giveback


THREE MODES OF OPERATION

The following are descriptions of the three modes of operation:


1. Normal—The state of a storage system in an active-active configuration when there is no
takeover.
2. Takeover—The condition used to interact with a storage system after it has taken over its
partner.
3. Giveback—The return of identity from the emulated storage system to the failed storage
system, resulting in a return to normal operation. Giveback mode is the reverse of takeover mode.

How Partners Communicate

In an active-active controller configuration, partners communicate through the interconnect
cable with a heartbeat.
• System state is written to disk in a “Mailbox”
• Data not committed to disk is written to the local and partner NVRAM
• If the heartbeat is not received by the partner node, takeover can occur


HOW PARTNERS COMMUNICATE

To ensure that both nodes in an active-active controller configuration maintain the correct and
current status of the partner node, heartbeat information and node status are stored on each
node in the mailbox disks. The mailbox disks are a redundant set of disks used in
coordinating takeover or giveback operations. If one node stops functioning, the surviving
partner node uses the information on the mailbox disks to perform takeover processing, which
creates a virtual storage system. In the event of an interconnect failure, the mailbox heartbeat
information prevents an unnecessary failover from occurring. Moreover, if cluster
configuration information that is stored on the mailbox disks is out of sync during boot, the
active-active controller nodes automatically resolve the situation. The FAS system failover
process is extremely robust, preventing split-brain issues from occurring.
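The role the mailbox disks play in preventing unnecessary failovers can be sketched as a simple decision function. This is a toy model of the behavior described above, not Data ONTAP logic; the timeout value and function names are invented for illustration.

```python
import time

HEARTBEAT_TIMEOUT = 15.0  # seconds; illustrative value, not a Data ONTAP setting

def should_take_over(last_interconnect_beat, last_mailbox_beat, now=None):
    """Decide whether the partner appears to be dead.

    Simplified model of the behavior described above: the interconnect
    heartbeat going quiet is not enough on its own, because a partner that
    is still updating its mailbox disks has suffered an interconnect
    failure, not a node failure.
    """
    now = time.time() if now is None else now
    interconnect_stale = (now - last_interconnect_beat) > HEARTBEAT_TIMEOUT
    mailbox_stale = (now - last_mailbox_beat) > HEARTBEAT_TIMEOUT
    return interconnect_stale and mailbox_stale
```

Requiring both heartbeat channels to go stale is what prevents a severed interconnect from triggering a split-brain scenario in which both nodes try to serve the same disks.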

Active-Active Controllers and NVRAM

• Each node reserves half of the total NVRAM for the partner’s data
• During takeover, the surviving partner combines the split NVRAM
[Diagram: NVRAM during normal mode. Storage System 1 holds its own NVLOG plus a mirror of
Storage System 2’s NVLOG; Storage System 2 holds its own NVLOG plus a mirror of Storage
System 1’s NVLOG.]
© 2008 NetApp. All rights reserved. 14

ACTIVE-ACTIVE CONTROLLERS AND NVRAM

NVRAM
Data ONTAP uses the WAFL file system to manage data processing and NVRAM to
guarantee data consistency before committing writes to disks. If the storage controller
experiences a power failure, the most current data is protected by the NVRAM, and file
system integrity is maintained.
In the active-active controller environment, each node reserves half of the total NVRAM size
for the partner node’s data to ensure that exactly the same data exists in NVRAM on both
storage controllers. Therefore, only half of the NVRAM in the active-active controller is
dedicated to the local node. If failover occurs, when the surviving node takes over the failed
node, all WAFL checkpoints stored in NVRAM are flushed to disk. The surviving node then
combines the split NVRAM. After the surviving node restores disk control and data
processing to the recovered failed node, all NVRAM data belonging to the partner node is
flushed to disk during the giveback operation.
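The half-and-half NVRAM split described above can be sketched as a toy model. This is not actual Data ONTAP internals; the class and sizes are illustrative only, showing why a node serves its own writes from only half of its NVRAM in normal mode and reclaims the whole NVRAM after a takeover.

```python
# Toy model (not Data ONTAP internals) of the NVRAM split: each node
# dedicates half of its NVRAM to its own NVLOG and mirrors the partner's
# NVLOG into the other half. On takeover, the survivor flushes both logs
# to disk and combines the split NVRAM for its own use.

class Node:
    def __init__(self, name: str, nvram_mb: int):
        self.name = name
        self.local_nvlog_mb = nvram_mb // 2     # half for this node's writes
        self.partner_mirror_mb = nvram_mb // 2  # half mirrors the partner

    def take_over(self) -> int:
        """Flush both NVLOGs, then combine the split NVRAM."""
        flushed = self.local_nvlog_mb + self.partner_mirror_mb
        self.local_nvlog_mb = flushed           # whole NVRAM is now local
        self.partner_mirror_mb = 0
        return flushed

node = Node("system1", nvram_mb=64)
print(node.local_nvlog_mb)    # 32 -- only half serves local writes normally
print(node.take_over())       # 64 -- the survivor combines the split NVRAM
```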

HOW THE INTERCONNECT WORKS


The interconnect adapters are a critical component in the active-active controller
configuration. Data ONTAP uses these adapters to transfer system data between the partner
nodes, which maintains data synchronization in the NVRAM on both controllers. Other
critical information is also exchanged through the interconnect adapters, including the
heartbeat signal, system time, and details about temporary disk unavailability due to pending
disk-firmware updates.

When Does a Takeover Occur?

In an active-active configuration, a takeover occurs when:
• A node undergoes a software or system failure that leads to a panic
• A node undergoes a system failure (for example, a loss of power) and cannot reboot
• There is a mismatch between the disks that one node recognizes and the disks that the other node recognizes
• One or more network interfaces configured to support failover become unavailable
• A node cannot send heartbeat messages to its partner
• A node is halted without using the -f flag
• A takeover is manually initiated

WHEN DOES A TAKEOVER OCCUR?

In an active-active configuration, a takeover occurs when:

• A node experiences a software or system failure that leads to a panic.
• A node experiences a system failure (for example, a loss of power) and cannot reboot.
  NOTE: If the storage for a node loses power at the same time, a standard takeover is not possible. For MetroClusters in this situation, you can initiate a forced takeover.
• There is a mismatch between the disks that one node sees and the disks that the other node sees.
• One or more network interfaces that are configured to support failover become unavailable.
• A node cannot send heartbeat messages to its partner. This could happen if the node experiences a hardware or software failure that does not result in a panic but still prevents it from functioning correctly.
• You halt one of the nodes without using the -f flag.
• You initiate a takeover manually.

What Happens When a Takeover Occurs

[Diagram: An active-active controller configuration with redundant FC-AL loops. The surviving partner assumes the identity of the failed node, accesses the failed node's disks through the B loops, and serves its data to clients.]

WHAT HAPPENS WHEN A TAKEOVER OCCURS

When a takeover occurs, the functioning partner node takes over the functions and disk drives
of the failed node by creating an emulated storage system that:
• Assumes the identity of the failed node
• Accesses the failed node’s disks and serves its data to clients
The partner node maintains its own identity and its own primary functions, but also handles
the added functionality of the failed node through the emulated node.

During a Takeover

• Surviving partner has two identities, with each identity able to access only the appropriate volumes and networks
• You can access the failed node using rsh and telnet commands

DURING A TAKEOVER

When a takeover occurs, the surviving partner has two identities: its own and that of its
partner. These identities exist simultaneously on the same storage system. Each identity can
access only the appropriate volumes and networks. You can send commands or log in to
either storage system by using the rsh command, allowing remote scripts that invoke
storage system commands through a remote-shell connection to continue normal operations.

ACCESS WITH RSH


Commands sent to the failed node through a remote shell connection are serviced by the
partner node, as are rsh command requests.

ACCESS WITH TELNET


If you log in to a failed node through a telnet session, a message is displayed alerting you that
your storage system has failed and to log in to the partner node. If you are logged in to the
partner node, you can access the failed node or its resources from the partner node using the
partner command.

During a Giveback

• The giveback command terminates the emulated node on the partner: cf giveback
• The failed node resumes normal operation (including serving data)
• The active-active configuration resumes normal operation

DURING A GIVEBACK

After a partner node is repaired and operating normally, you can use the cf giveback
command to return operations to the partner.
When the failed node is functioning again, the following events can occur:
• You initiate a cf giveback command that terminates the emulated node on the partner.
• The failed node resumes normal operation, serving its own data.
• The active-active configuration resumes normal operation, with each node ready to take over for
its partner if the partner fails.

Active-Active Commands

• cf enable | disable
• cf takeover
• cf partner
• cf giveback
• cf status
• aggr status -r
• halt -f
• partner

ACTIVE-ACTIVE COMMANDS

• To enable or disable the controller failover feature, use the cf enable or cf disable command.
• To manually initiate a takeover, use the cf takeover command.
• To display the name of a partner node, use the cf partner command.
• To return control to a partner that has been taken over, use the cf giveback command.
• To determine whether the controller failover feature is enabled and the partner node in an active-active configuration is up, use the cf status command.
• To display information about the disks on both the local and partner nodes, use the sysconfig or the aggr status commands.
• To halt a node and prevent its partner from taking over, use the halt -f command.
• To access the failed node or its resources from the surviving node during a takeover, use the partner command.

Failover Effects on Client Connections

• For clients or applications using stateless connection protocols, I/O requests are suspended during the takeover/giveback period
• Connections can resume when the takeover/giveback process is complete
• For CIFS and NFS v4, sessions are lost
• Stateful clients and applications may (and usually do) attempt to re-establish the session

FAILOVER EFFECTS ON CLIENT CONNECTIONS

Client disruption, although minimal, can still occur in the active-active controller
environment during the takeover and giveback processes.
When one node in an active-active configuration encounters an error and stops processing
data, its partner detects the failed (or failing) status of the partner and takes over all data
processing from that controller. If the partner is confirmed down, the surviving storage
controller then initiates the failover process to assume control of all services from the failed
storage controller. This period is referred to as takeover time for clients.
After the failed storage controller is repaired, you can return all services to the repaired
storage controller by issuing the cf giveback command on the surviving storage controller
serving all clients. This command triggers the giveback process, and the repaired storage
controller boots when the giveback operation is complete. This process is referred to as
giveback time for clients.
Therefore, the takeover and giveback period for clients equals the sum of the takeover time
plus the giveback time, as represented in the following equations:
• Takeover time = time to detect controller error (mailbox disks not responding) and initiate
takeover + time required for takeover to complete (synchronize the WAFL logs).
• Giveback time = time required to release partner’s disks + time to replay the WAFL log + time
to start all services (NFS/NIS/CIFS, and so on) and process export rules
• Total time = takeover time + giveback time.
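The equations above can be written as simple arithmetic. All component times below are illustrative placeholder values in seconds, not measured figures for any FAS system.

```python
# The takeover/giveback timing equations, written as arithmetic.
# Every component time is an illustrative placeholder, not a measurement.

detect_failure  = 15   # detect controller error (mailbox disks not responding)
sync_wafl_logs  = 30   # time required for the takeover to complete
takeover_time   = detect_failure + sync_wafl_logs

release_disks   = 10   # release the partner's disks
replay_wafl_log = 20   # replay the WAFL log
start_services  = 25   # start NFS/NIS/CIFS and process export rules
giveback_time   = release_disks + replay_wafl_log + start_services

total_time = takeover_time + giveback_time
print(total_time)   # total client-visible window in this sketch
```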

NOTE: For clients or applications using stateless connection protocols, I/O requests are
suspended during the takeover and giveback periods, but resume when the takeover and
giveback processes are complete. For CIFS, sessions are lost, but the application could—and
generally does—attempt to re-establish the session.

The amount of time required for takeover and giveback is critical. With newer versions of
Data ONTAP, this time has been decreasing. In some instances, if the network is unstable or
the storage controller is configured incorrectly, total time can be very long.

Best Practices for Active-Active Configurations

• Test failover and giveback operations before placing active-active controllers into production
• Monitor:
  – Performance of the network
  – Performance of disks and storage shelves
  – CPU utilization of each controller to ensure it does not exceed 50%
• Enable AutoSupport

BEST PRACTICES FOR ACTIVE-ACTIVE CONFIGURATIONS

Thoroughly test newly installed active-active controllers before moving them into
production.
General best practices require comprehensive testing of all mission-critical systems before
introducing them into a production environment. Active-active controller testing should
include not only takeover and giveback, or functional testing, but performance evaluation as
well. Extensive testing validates planning.
Monitor network connectivity and stability.
Unstable networks not only affect total takeover and giveback times, they adversely affect all
devices on the network in various ways. NetApp storage controllers are typically connected to
the network to serve data, so if the network is unstable, the first symptom is degradation of
storage-controller performance and availability. Client service requests are retransmitted
many times before reaching the storage controller, appearing to the client as slow responses
from the storage controller. In a worst-case scenario, an unstable network can cause
communication to time out, making the storage controller appear to be unavailable.
During takeover and giveback operations in the active-active controller environment, storage
controllers attempt to connect to numerous types of servers on the network, including
Windows domain controllers, DNS, NIS, LDAP, and application servers. If these systems are
unavailable or the network is unstable, the storage controller continues to retry establishing
communications, which delays takeover or giveback times.

Module Summary

In this module, you should have learned to:
• Describe the active-active controller configuration
• Explain how the active-active configuration works
• Describe software requirements for active-active configurations
• List three modes of operation for active-active configurations
• Describe the effects of failover and giveback operations on client connections
• Describe best practices for active-active controller configurations

MODULE SUMMARY

Exercise
Module 17: Active-Active Controller
Configuration
Estimated Time: 30 minutes

EXERCISE

Please refer to your Exercise Guide for more instruction.

MODULE 18: FINAL WORDS

Final Words
Module 18
Data ONTAP® 7.3 Fundamentals

FINAL WORDS

18-1 Data ONTAP® 7.3 Fundamentals: Final Words

© 2008 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.
NetApp University - Do Not Distribute
Module Objectives

By the end of this module, you should be able to:
• Recall major areas in this course
• Identify available resources

MODULE OBJECTIVES

Unified Storage

[Diagram: A NetApp FAS system provides unified storage: SAN (blocks) access over FC and iSCSI, and NAS (files) access over NFS and CIFS, all through the corporate Ethernet LAN.]

UNIFIED STORAGE

Console Access

system> version
NetApp Release 7.3RC1: Wed Mar 5 02:17:31 PST 2008

system> sysconfig -v
NetApp Release 7.3RC1: Wed Mar 5 02:17:31 PST 2008
System ID: 0084166726 (NetApp1)
System Serial Number: 3003908 (NetApp1)
slot 0: System Board 599 MHz (TSANTSA D0)
        Model Name:         FAS250
        Part Number:        110-00016
        Revision:           D0
        Serial Number:      280646
        Firmware release:   CFE 1.2.0
        Processors:         2
        Processor revision: B2
        Processor type:     1250
        Memory Size:        510 MB
        NVMEM Size:         64 MB of Main Memory Used

system> license
nfs  site ABCDEFG
cifs site BCDEFGH
http site CDEFGHI
cluster     not licensed
snapmirror  not licensed
snaprestore not licensed

CONSOLE ACCESS

FilerView

http://<system name or IP address>/na_admin

FILERVIEW

AutoSupport

[Diagram: The storage system sends AutoSupport messages through an e-mail server.]

AUTOSUPPORT

Role-Based Access Control

Role-based access control (RBAC) is a mechanism for managing a set of actions (capabilities) that a user or administrator can perform on a storage system.
• A role is created.
• Capabilities are granted to the role.
• Groups are assigned to one or more roles.
• Users are assigned to groups.

[Diagram: Users belong to groups, groups are assigned roles, and roles carry capabilities.]

ROLE-BASED ACCESS CONTROL
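The four-step model on the slide can be sketched as a minimal data structure. The role, group, user, and capability names below are illustrative, not actual Data ONTAP identifiers; the point is only that a user's effective capabilities are the union over the user's groups and those groups' roles.

```python
# Minimal RBAC sketch: capabilities are granted to roles, groups are
# assigned roles, and users belong to groups. All names are hypothetical.

roles = {
    "volume-operator": {"volume-read", "volume-snapshot"},
    "admin":           {"volume-read", "volume-write", "license-modify"},
}
groups = {
    "backup-team":    {"volume-operator"},
    "storage-admins": {"admin"},
}
users = {
    "alice": {"backup-team"},
    "bob":   {"storage-admins"},
}

def capabilities(user: str) -> set:
    """Union of capabilities from every role of every group of the user."""
    caps = set()
    for group in users.get(user, ()):
        for role in groups.get(group, ()):
            caps |= roles.get(role, set())
    return caps

print("volume-snapshot" in capabilities("alice"))   # True
print("license-modify" in capabilities("alice"))    # False
```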

Disks and Data Protection

[Diagram: Volume 1 is built on RAID Group 0, which contains a parity disk and three data disks; spare disks sit outside the RAID group. Spares are global.]

DISKS AND DATA PROTECTION

Aggregates

• Represent the physical storage
• Made up of one or more RAID groups
• A RAID group:
  – One or more data disks
  – Parity disks
    • RAID 4 has only one parity disk
    • RAID-DP has two parity disks
• Data is striped for parity protection
• Volumes are allocated space from an aggregate

AGGREGATES

How They Work: Aggregates and FlexVol Volumes

• Create RAID groups
• Create aggregate
• Create FlexVol 1
  – Only metadata space used
  – No preallocation of blocks to a specific volume
• Create FlexVol 2
  – WAFL allocates space from the aggregate as data is written
• Populate volumes

[Diagram: FlexVol volumes vol1, vol2, and vol3 all draw space from a single aggregate composed of RAID groups RG1, RG2, and RG3.]

HOW THEY WORK: AGGREGATES AND FLEXVOL VOLUMES

How Snapshot Works

[Diagram: File X in the active file system and in a Snapshot copy both point to disk blocks A, B, and C. When the data in block C changes, the new data is written to a new block C'.]

• Active version of X is now composed of blocks A, B, C'
• Snapshot version of X remains composed of blocks A, B, C
• Atomically moves active data to a new consistent state

HOW SNAPSHOT WORKS
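The slide's behavior can be sketched as a toy copy-on-write model. This is not WAFL's on-disk format; it only illustrates that a Snapshot copy preserves the old block pointers while a write allocates a new block instead of overwriting in place.

```python
# Toy copy-on-write sketch (not WAFL's actual on-disk structures):
# a Snapshot copy is just a saved set of block pointers, and a write
# allocates a new block rather than overwriting the old one.

blocks = {"A": "data-a", "B": "data-b", "C": "data-c"}   # "disk" blocks
active_x = ["A", "B", "C"]          # block pointers for file X

snapshot_x = list(active_x)         # Snapshot: copy only the pointers

# Modify the third block of file X: write new data to a new block C'.
blocks["C'"] = "data-c-new"
active_x[2] = "C'"

print(active_x)     # active file X now uses blocks A, B, C'
print(snapshot_x)   # the Snapshot version still points at A, B, C
```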

NFS

[Diagram: Storage system SS1 exports directories from vol0 (etc, home) and flexvol1 (data_files, eng_files, misc_files) to NFS clients over a network connection.]

NFS

CIFS

[Diagram: Client B requests a session with the storage system, which is a member server in a domain. The domain controller authenticates Client B, and the session with Client B is established.]

CIFS

SAN Protocols

[Diagram: The WAFL architecture provides block services to the SAN protocols. SCSI commands are encapsulated in FCP over a Fibre Channel network and in iSCSI over a TCP/IP network.]

SAN PROTOCOLS

Data ONTAP Simplified

[Diagram: Clients connect over the network to Data ONTAP, where the network protocol, WAFL, and RAID layers, backed by NVRAM and memory, manage data on the physical disks.]

DATA ONTAP SIMPLIFIED

FlexShare

FlexShare provides increased control of storage resources in the following ways:
• Volume priorities
• Hints for cache buffers
• No license required

FLEXSHARE

Standard Active-Active Configuration

[Diagram: Two controllers connected by an interconnect, each attached to its own disk shelves through primary FC-AL A loops and to its partner's shelves through secondary FC-AL B loops.]

STANDARD ACTIVE-ACTIVE CONFIGURATION

Additional Data ONTAP Resources

• Education
  – Data ONTAP CIFS Administration
  – Data ONTAP NFS Administration
  – Data ONTAP SAN Administration Basics
  – Data Protection and Retention
  – Fundamentals of Performance Analysis
• Web sites
  – NOW
  – NetApp (www.netapp.com)

ADDITIONAL DATA ONTAP RESOURCES

Thank You!
Please fill out an evaluation.
