

NETAPP UNIVERSITY

SAN Implementation Workshop


Student Guide
Course Number: STRSW-ILT-SANIMP-R1
Catalog Number: STRSW-ILT-SANIMP-R1-SG
Revision: 2.0


ATTENTION
The information contained in this guide is intended for training use only. This guide contains information and activities that, while beneficial for the purposes of training in a closed, non-production environment, can result in downtime or other severe consequences and therefore are not intended as a reference guide. This guide is not a technical reference and should not, under any circumstances, be used in production environments. To obtain reference materials, refer to the NetApp product documentation located at http://now.netapp.com/.

COPYRIGHT
© 2009 NetApp. All rights reserved. Printed in the U.S.A. Specifications subject to change without notice.
No part of this book covered by copyright may be reproduced in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written permission of the copyright owner.
NetApp reserves the right to change any products described herein at any time and without notice.
NetApp assumes no responsibility or liability arising from the use of products or materials described
herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product or
materials does not convey a license under any patent rights, trademark rights, or any other intellectual
property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.

RESTRICTED RIGHTS LEGEND


Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013 (October 1988) and FAR 52.227-19 (June 1987).

TRADEMARK INFORMATION
NetApp, the NetApp logo, Go further, faster, Data ONTAP, ApplianceWatch, BareMetal, Center-to-Edge, ContentDirector, gFiler, MultiStore, SecureAdmin, Smart SAN, SnapCache, SnapDrive,
SnapMover, Snapshot, vFiler, Web Filer, SpinAV, SpinManager, SpinMirror, SpinShot, FAServer,
NearStore, NetCache, WAFL, DataFabric, FilerView, SecureShare, SnapManager, SnapMirror,
SnapRestore, SnapVault, Spinnaker Networks, the Spinnaker Networks logo, SpinAccess, SpinCluster,
SpinFS, SpinHA, SpinMove, SpinServer, and SpinStor are trademarks or registered trademarks of
NetApp, Inc. in the United States and other countries.
Apple is a registered trademark and QuickTime is a trademark of Apple Computer, Inc. in the United
States and/or other countries.
Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the
United States and/or other countries.
RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered
trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the
United States and/or other countries.
All other brands or products are trademarks or registered trademarks of their respective holders and
should be treated as such.
NetApp is a licensee of the CompactFlash and CF Logo trademarks.


TABLE OF CONTENTS
WELCOME
MODULE 1: SAN REVIEW
MODULE 2: WINDOWS FC CONNECTIVITY
MODULE 3: WINDOWS ISCSI CONNECTIVITY
MODULE 4: WINDOWS LUN ACCESS
MODULE 5: VSPHERE OVERVIEW
MODULE 6: VSPHERE ISCSI CONNECTIVITY
MODULE 7: VSPHERE FC CONNECTIVITY
MODULE 8: VSPHERE LUN ACCESS
MODULE 9: RED HAT OVERVIEW
MODULE 10: RED HAT FC CONNECTIVITY
MODULE 11: RED HAT ISCSI CONNECTIVITY
MODULE 12: RED HAT LUN ACCESS
MODULE 13: LUN PROVISIONING
MODULE 14: SAN MANAGEMENT
MODULE 15: SAN TROUBLESHOOTING
APPENDIX 1: FC DETAILS
APPENDIX 2: INTRODUCTION TO FCOE
APPENDIX 3: INTERNET STORAGE NAME SERVICE
APPENDIX 4: MULTIPLE CONNECTION SESSIONS
APPENDIX 5: STORAGE MANAGER FOR SANS
APPENDIX 6: HYPER-V INTRODUCTION
APPENDIX 7: SAN TROUBLESHOOTING ON WINDOWS
APPENDIX 8: NFS DATASTORES
APPENDIX 9: SERVER CONSOLIDATION
APPENDIX 10: NPIV TROUBLESHOOTING
APPENDIX 11: VMWARE SNAPSHOTS AND NETAPP SNAPSHOT COPIES
APPENDIX 12: VIRTUAL STORAGE CONSOLE
APPENDIX 13: SNAPDRIVE FOR UNIX (LINUX VERSION)


SAN Implementation
Workshop
Course Number: STRSW-ILT-SANIMP-R1

SAN IMPLEMENTATION WORKSHOP


Logistics
Introductions
Schedule (start time, breaks, lunch, close)
Telephones and messages
Food and drinks
Restrooms


LOGISTICS


Safety
Alarm signal
Evacuation route
Assembly area
Electrical safety


SAFETY


Course Objectives
By the end of this course, you should be able to:
Define and describe storage area networks using Fibre Channel Protocol (FCP) and Internet SCSI (iSCSI)
Configure Windows Server 2008 R2, vSphere (ESX 4.0), Red Hat 5.3, and Data ONTAP for iSCSI connectivity
Configure Windows Server 2008 R2, vSphere (ESX 4.0), Red Hat 5.3, and Data ONTAP for FC connectivity
Create and access a LUN by way of FCP and iSCSI from Windows Server 2008 R2, vSphere (ESX 4.0), and Red Hat 5.3

COURSE OBJECTIVES


Course Objectives (Cont.)

Install and use SnapDrive for Windows and SnapDrive for UNIX (in Appendix 13) to create LUNs, take Snapshot copies of LUNs, restore LUNs from a Snapshot copy, and remove LUNs
Size, clone, back up, and recover LUNs on Windows Server 2008 R2, vSphere (ESX 4.0), and Red Hat 5.3
Troubleshoot SAN connectivity and performance issues

COURSE OBJECTIVES (CONT.)


Course Agenda

Day 1
Introductions and Overview
Module 1: SAN Review
Module 2: Windows FC Connectivity
Module 3: Windows iSCSI Connectivity
Module 4: Windows LUN Access

Day 2
Module 5: vSphere Overview
Module 6: vSphere iSCSI Connectivity
Module 7: vSphere FC Connectivity
Module 8: vSphere LUN Access

COURSE AGENDA


Course Agenda (Cont.)

Day 3
Module 9: Red Hat Overview
Module 10: Red Hat FC Connectivity
Module 11: Red Hat iSCSI Connectivity
Module 12: Red Hat LUN Access
Module 13: LUN Provisioning
Module 14: Data ONTAP SAN Management
Module 15: Data ONTAP SAN Troubleshooting

NOTE: Additional materials may be found in the appendixes.

COURSE AGENDA (CONT.)


Information Sources

NOW™ (NetApp on the Web)
http://NOW.NetApp.com

NetApp University
http://www.netapp.com/us/services/university/

NetApp University Support
http://netappusupport.custhelp.com

INFORMATION SOURCES


Typographic Conventions

Italic font
Book titles.
Words or characters that require special attention.
Variable names or placeholders for information you must supply. For example, an ifstat command looks like this:
ifstat -z -a <interface>
The name of the interface for which you want to view statistics is interface.

Monospaced font
Command names, daemon names, and option names.
Information displayed on the system console or other computer monitors.
The contents of files.

Bold monospaced font
Words or characters you type. For example, enter the following command:
options httpd.enable on
license add <code1> <code2>

TYPOGRAPHIC CONVENTIONS


SAN Review
Module 1
SAN Implementation Workshop

SAN REVIEW


Module Objectives

By the end of this module, you should be able to:
Describe the differences between network-attached storage (NAS) and storage area network (SAN)
List the methods to implement SAN solutions
Define logical unit number, initiator, and target
Describe ports, worldwide names, and worldwide port names
List the steps to implement a SAN

MODULE OBJECTIVES


Assumptions of the Course

This course is unlike other NetApp courses; it is not designed to teach you the concepts.
This class provides the opportunity to implement FC and IP SANs on the following platforms:
Microsoft Windows Server 2008 R2
VMware vSphere (also known as ESX 4.0)
Red Hat Enterprise Linux 5.3
In this module, we will review the basics of a SAN.

ASSUMPTIONS OF THE COURSE


SAN Review


SAN REVIEW


Question

What do NAS and SAN stand for, and what is the difference?

(Diagram: a NetApp FAS system provides NAS access over the corporate LAN via NFS and CIFS, and SAN access via iSCSI, FCoE, and FC.)

QUESTION
Operating systems and applications request data either at the block level or the file level.
Network-attached storage (NAS) provides file-level access to data on a storage system.
Access is by way of a network, using Data ONTAP services such as CIFS and NFS.
Storage area networks (SANs) provide block-level access to data on a storage system. SAN
solutions can be any mixture of iSCSI or FC protocols. When both SAN and NAS storage are
present on the same storage system, it is referred to as unified storage.


SCSI

A SAN uses the Small Computer System Interface (SCSI) protocol over a distance.
SCSI features:
Block-level access
Efficiency
Lower overhead
Resiliency

SCSI
Small Computer System Interface (SCSI) is a set of standards that define commands, protocols, and interfaces used to transmit data. SCSI allows low-level block access to data in units of 512-byte blocks. This is highly efficient and has low overhead compared to NAS or file-level access. SCSI also has a high level of resiliency, which makes it well suited for use as an enterprise-level protocol.


SCSI on Host and Controller

(Diagram: on the host, the application, file system, and SCSI driver sit above a SCSI adapter attached to direct-attached devices, or above a Fibre Channel connection to the controller, where SAN services and WAFL manage direct-attached storage.)

SCSI ON HOST AND CONTROLLER


Traditionally, storage is attached to a local machine. SCSI is used for transmitting data
between a host and peripheral devices either through SCSI adapters or other adapters that
communicate using SCSI commands.


Logical Unit

(Diagram: the host's application, file system, and SCSI driver access a logical unit on the controller.)

The logical unit is accessed by a logical unit number (LUN).
The virtual disk is a single file on the server.

LOGICAL UNIT


Question

What are the benefits of using a NetApp SAN over a direct-attached storage (DAS) device?

Availability: Data is generally more reliable than it is with DAS.
Storage utilization: Space can be assigned to optimize usage.
Centralized management: Provisioning is centralized.
Disaster recovery and backup: Backup and recovery are centralized.

QUESTION
SAN provides numerous benefits over a traditional DAS environment:
Availability: As data is moved off of local hosts and onto enterprise storage arrays, data access is generally more reliable.
Storage utilization: Space can be assigned based upon appropriate need, allowing higher optimization of the storage.
Management: Because all storage is centralized with SAN, management of the storage becomes more efficient and effective.
Disaster recovery and backup: Centralizing data allows for easier and more effective backup and recovery strategies.


Terms Review


TERMS REVIEW


Question

What is the host called in a SAN? The initiator.
What is the storage system called in a SAN? The target.

(Diagram: the host, running the application, file system, and SCSI driver, is the initiator; the controller, running SAN services and WAFL and presenting the LUN, is the target.)

QUESTION
Initiators, including Windows and UNIX-type hosts, are consumers or clients within a SCSI
relationship. Targets, including NetApp controllers and storage arrays, present data as logical
units and are the servers within a SCSI relationship.


Question

How may a SAN be implemented?

Fibre Channel (FC)
Referred to as FC SAN
Uses Fibre Channel Protocol (FCP) to communicate:
Physical | FC Frame | SCSI
Uses Fibre Channel over Ethernet (FCoE) to communicate:
Ethernet | FCoE | FC Frame | SCSI

Internet Protocol (IP)
Referred to as IP SAN
Uses Internet SCSI (iSCSI) to communicate:
Ethernet | IP | TCP | iSCSI | SCSI

QUESTION
LUNs on a NetApp storage system can be accessed through either a Fibre Channel (FC SAN) fabric using the Fibre Channel Protocol (FCP) or an Ethernet network using the Fibre Channel over Ethernet (FCoE) or Internet SCSI (iSCSI) protocols. In all cases, the transport protocols (FCP, FCoE, or iSCSI) carry encapsulated SCSI commands as the data transport mechanism.
iSCSI is an IETF standard found here: http://www.ietf.org/rfc/rfc3720.txt?number=3720


Question

What are the ports called in an IP SAN and an FC SAN?

In an IP SAN, the initiator's TCP/IP and iSCSI drivers communicate through an Ethernet port.
In an FC SAN, the initiator's FC driver communicates through a Fibre Channel port or a Converged Network Adapter (CNA).

(Diagram: the initiator and the target each connect to the IP SAN through Ethernet ports and to the FC SAN through FC ports.)

QUESTION
Data is communicated over ports. In an IP SAN, the data is communicated by way of
Ethernet ports. In an FC SAN, the data is communicated over Fibre Channel ports.


Question

What are the nodes and ports called in an FC SAN?

Nodes are identified by a worldwide node name (WWNN); ports are identified by a worldwide port name (WWPN).

(Diagram: the initiator has WWNN 20:00:00:2b:34:26:a6:56 and WWPN 21:00:00:2b:34:26:a6:56; the target has WWNN 50:0a:09:80:86:f7:c7:86 and WWPN 50:0a:09:81:86:f7:c7:86.)

QUESTION
In FC SAN, a worldwide node name (WWNN) describes a machine while a worldwide port
name (WWPN) describes a physical port attached to that machine.
The FC specification for the naming of nodes and ports on those nodes can be fairly
complicated. Each device is given a globally unique WWNN and an associated WWPN for
each port on the node. WWNNs and WWPNs are 64-bit addresses made up of 16
hexadecimal digits grouped together in twos with a colon separating each pair (for example,
21:00:00:2b:34:26:a6:54).
The first number in the address defines what the other numbers in the address represent, according to the FC specification. The first number is generally a 1, 2, or 5. For QLogic initiator host bus adapters (HBAs), the first number is generally a 2. For Emulex initiator HBAs, the first number is generally a 1. Finally, a NetApp storage system is assigned a 5.


Question

What are the nodes and ports called in an IP SAN?

Nodes are identified by a worldwide node (WWN) name, such as iqn.1999-04.com.a:host on the initiator and iqn.1998-02.com.netapp:ss1 on the target. Ports are called portals; on the target, portals are grouped into a target portal group (TPG).

(Diagram: the initiator's local network connection is a portal; the target's Ethernet ports form a target portal group.)

QUESTION
In an IP SAN, the worldwide node (WWN) describes a machine while the portal describes a physical port. Each iSCSI node must have a node name. There are two possible node name formats.

IQN-TYPE DESIGNATOR
The format of this node name is conventionally iqn.yyyy-mm.backward_naming_authority:unique_device_name. This is the most popular node name format and is the default used by a NetApp storage system. The components of the logical name are the following:
The type designator, iqn, followed by a period (.)
The date when the naming authority acquired the domain name, followed by a period
The name of the naming authority, optionally followed by a colon (:)
A unique device name

EUI-TYPE DESIGNATOR
The format of this node name is eui.nnnnnnnnnnnnnnnn. The components of the logical name are the following:
The type designator itself, eui, followed by a period (.)
Sixteen hexadecimal digits
Example: eui.123456789ABCDEF0
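
As an illustration, the target's default IQN can be displayed from the Data ONTAP console. This is a minimal sketch; the serial-number suffix shown is a hypothetical example:
system> iscsi nodename
iSCSI target nodename: iqn.1992-08.com.netapp:sn.101176399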


Question

How can we connect an initiator to a target?

Directly connected
Connected through a switch

(Diagram: the initiator connects to the target either directly or through an Ethernet or Fibre Channel switch.)

QUESTION


Implementing SAN


IMPLEMENTING SAN


Designing and Planning a SAN Solution

1. Installation preparation
Purpose: Determine compatibility issues that need to be resolved
Tools: System Configuration Guide, FC and iSCSI Configuration Guide, system configuration forms, Interoperability Matrix Tool

2. SAN assessment
Purpose: Collect all information that will be needed to plan the implementation
Tools: Interoperability Matrix Tool

3. Solution verification
Purpose: Ensure that the solution satisfies the customer's requirements
Tools: Interoperability Matrix Tool

4. Gap analysis
Purpose: Identify any activities that need to be performed to fill gaps
Tools: Interoperability Matrix Tool

5. Assessment phase review and sign-off
Purpose: Review the proposed solution and obtain customer sign-off

DESIGNING AND PLANNING A SAN SOLUTION


The System Configuration Guide can be found here:
http://now.netapp.com/NOW/knowledge/docs/hardware/NetApp/syscfg/
The FC & iSCSI Configuration Guide can be found here:
http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/index.shtml


Interoperability Matrix Tool

Found at: http://now.netapp.com/matrix
Requires a NOW account for authentication

Select Storage Area Network (SAN) to configure a SAN solution.
Then choose your configuration to see if it is supported, or better yet, don't purchase hardware until you validate that it is supported.
The tool displays the number of configurations that match the current filter.

INTEROPERABILITY MATRIX TOOL


The NetApp Interoperability Matrix tool is a Web-based utility. You can use the tool to
search for information about configurations for NetApp products that work with third-party
products and components qualified by NetApp.
The Interoperability Matrix tool contains both supported and certified NetApp configurations.
A configuration consists of components, such as operating systems, host utilities, and
switches, that NetApp has identified. NetApp assigns the configuration a name, which is a
concatenation of the timestamp and an auto-generated number. For example, a configuration
created on 21 November 2008 is assigned a configuration name similar to 20081121021727786.
NOTE: Supported configurations are those that have been qualified by NetApp. Certified configurations are ones that have been qualified by another company to work with NetApp components. Do not confuse a supported configuration with a configuration that has a status of "Supported." If a supported or certified configuration has been reviewed and approved by NetApp, the status of the configuration can be "Supported" or "PVR required," regardless of which company qualified the components. A configuration in the "PVR required" state indicates that the configuration has been evaluated and can be supported for all customers upon approval of the associated PVR.


Interoperability Matrix Tool (Cont.)

All configurations are in the SAN Report, available as a spreadsheet or PDF.

INTEROPERABILITY MATRIX TOOL (CONT.)


Reports are available for all configurations in the format of a PDF or an Excel spreadsheet.


Interoperability Matrix Tool (Cont.)

Select your hardware, OS, or software, and then click Add Components to add them to your proposed configuration.

INTEROPERABILITY MATRIX TOOL (CONT.)


You can search the configurations for NetApp products that work with third-party products
and components that are qualified by NetApp by using the configuration search feature.


Interoperability Matrix Tool (Cont.)

After the platform is added, the remaining list is filtered.
Click the results count to show the matching configurations.
Click the trash can to remove an item from the filter.

INTEROPERABILITY MATRIX TOOL (CONT.)


Interoperability Matrix Tool (Cont.)

The supported configuration(s) based upon the filter are displayed and can be exported to Excel and PDF.

NOTE: Best practice is to print or save to PDF the supported configuration of your solution once it is purchased, installed, and configured.

INTEROPERABILITY MATRIX TOOL (CONT.)


To export the search results of a single storage solution to a Microsoft Excel worksheet,
complete the following steps:
1. Click Excel.
The Export Settings dialog box appears.
2. Select the appropriate component type based on how you want to group the worksheets
from the Worksheet Grouping list.
3. Select the columns to be exported from the Component Types to export list.
4. Click Export.
The search results are grouped based on your requirements and are exported to an Excel
sheet.
To export your search results to a PDF, complete the following steps:
1. Click Pdf.
The Pdf Export Settings dialog box appears.
2. Select the appropriate grouping option from Grouping, based on how you want to group the search results.
3. Select the columns to be exported from Component Types to export.
4. Click Export.
The search results are grouped based on your requirements and are exported to a PDF.


Steps to Implement a SAN

1. Have an initiator discover a target portal
2. Create a session between the initiator and the target (make the bindings persistent between reboots)
3. Create an igroup on the storage system if necessary
4. Create a LUN on the storage system
5. Map the LUN to an igroup on the storage system
6. Find the LUN on the host
7. Prepare the disk for the host OS if necessary

STEPS TO IMPLEMENT A SAN


1. Have an Initiator Discover a Target

Ethernet (iSCSI): You must tell the host where to discover the target, either by host name (IP) or by a name service.
Fibre Channel: When the ports are active, discovery is automatic, whether directly connected or through a name service.

(Diagram: the initiator connects to the target over Ethernet and over Fibre Channel.)

1. HAVE AN INITIATOR DISCOVER A TARGET


The first step in a SAN implementation is discovery of a target. In an IP SAN, an administrator must tell the client where to discover a target, either by connecting directly by host name (IP) or through a name service (iSNS). See Appendix 3. In an FC SAN, if the ports are active, discovery is automatic, whether directly connected or through a name service. For more information about IP SAN discovery, see Module 3. For more information concerning FC SAN discovery, see Module 2.
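
On the Data ONTAP side, one way to see the portal information that an iSCSI initiator must be given is the console; a minimal sketch, assuming the iSCSI service is licensed and running (the output depends on the system's network interfaces):
system> iscsi portal show
(lists the IP address, TCP port number, and interface of each target portal)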


1. Have an Initiator Discover a Target (Cont.)

(Diagram: the initiator may have a single path or multiple paths to the target, over Ethernet or Fibre Channel.)

1. HAVE AN INITIATOR DISCOVER A TARGET (CONT.)


1. Have an Initiator Discover a Target (Cont.)

(Diagram: an active-active controller configuration, with the initiator connected over both Ethernet and Fibre Channel.)

iSCSI does not connect over the interconnect (IC).
FC connects over the IC.

1. HAVE AN INITIATOR DISCOVER A TARGET (CONT.)


2. Create a Session

A session associates an initiator with a target; persisting the session is possible to ensure availability after reboot.

(Diagram: a session established between the initiator and the target, over Ethernet or Fibre Channel.)

2. CREATE A SESSION
Sessions associate the initiators with targets. A session may be persisted to ensure availability
after a host reboots.
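
For iSCSI, one hedged way to confirm from the Data ONTAP side that a session has been established is the console (a minimal sketch; the exact output varies by Data ONTAP version):
system> iscsi session show
(lists each session and the node name of the initiator that created it)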


3. Create an igroup

Place the WWN (iqn / eui) in igroups for iSCSI.
Place the WWPN in igroups for FC.

(Diagram: on the target, My_IP_igroup contains iqn.1999-04.com.a:system with OS type Windows, and My_FC_igroup contains 21:00:00:2b:34:26:a6:56 with OS type Windows.)

3. CREATE AN IGROUP
An igroup is a group of one or more initiators that have access to a target. In IP SAN, an
administrator references an initiator by WWN. In FC SAN, an administrator references an
initiator by WWPN.
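
On a Data ONTAP 7-mode console, creating the two example igroups from the figure might look like the following minimal sketch (the igroup names and initiator names are the example values shown above):
Create an iSCSI igroup (-i) with a Windows OS type:
system> igroup create -i -t windows My_IP_igroup iqn.1999-04.com.a:system
Create an FC igroup (-f) with a Windows OS type:
system> igroup create -f -t windows My_FC_igroup 21:00:00:2b:34:26:a6:56
Verify both igroups:
system> igroup show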


4. Create a Logical Unit

(Diagram: two logical units, LUNa and LUNb, are created on the target alongside the existing igroups.)

4. CREATE A LOGICAL UNIT


The next step in implementing a SAN is to create a logical unit. A logical unit is a logical
representation of a physical unit of storage. It is a collection of, or a part of, physical or
virtual disks configured as a single disk. When you create a logical unit, it is automatically
striped across many physical disks. Data ONTAP manages logical units at the block level, so
it cannot interpret the file system or data in a logical unit.
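
Assuming a volume such as /vol/vol1 already exists, creating a logical unit like LUNa from the console might look like this sketch (the path, size, and OS type are hypothetical example values):
Create a 10-GB logical unit for a Windows host:
system> lun create -s 10g -t windows /vol/vol1/LUNa
Verify the logical unit:
system> lun show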


5. Map a Logical Unit to an Igroup

NOTE: This step is also called LUN masking.

Map a logical unit to an igroup by assigning a logical unit number.

(Diagram: LUNa is mapped to My_FC_igroup and assigned LUN ID 2.)

5. MAP A LOGICAL UNIT TO AN IGROUP


The logical unit is mapped to an igroup and referenced by an ID. The logical unit is then
referred to as a logical unit number, or LUN.
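
Continuing the sketch from the previous steps, mapping LUNa to the FC igroup with LUN ID 2 and verifying the result might look like this (the path and igroup name are the hypothetical example values):
system> lun map /vol/vol1/LUNa My_FC_igroup 2
Show the LUN-to-igroup mappings:
system> lun show -m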


6. Find the Disk

LUNa and LUNb are identified by the client OS.

(Diagram: on the initiator, the mapped LUNs appear under /dev as disk 1, disk 2, and disk 3.)

6. FIND THE DISK


The LUN is then identified by the client operating system. From the host, LUNs appear as
local disks, allowing you to format and store data on them.


6. Find the Disk (Cont.)

If multiple paths exist, the same LUN might appear more than once.

(Diagram: with a second WWPN, 21:00:00:2b:34:26:a6:57, added to My_FC_igroup, the initiator now also sees disk 4 and disk 5 under /dev.)

6. FIND THE DISK (CONT.)


If multiple paths exist, the LUN will appear more than once unless multipathing software is
used.


7. Prepare the Disk for the Host OS

Treat LUNs as a single disk or combine them together using a volume manager.

(Diagram: the disks under /dev on the initiator can be used individually or combined.)

7. PREPARE THE DISK FOR THE HOST OS


LUNs may be used as single disk or combined together using a host-based volume manager.


7. Prepare the Disk for the Host OS (Cont.)

Label, format, add a file system, and mount the LUN per the OS.

(Diagram: on the initiator, the formatted disks are mounted as luna and lunb under /.)

7. PREPARE THE DISK FOR THE HOST OS (CONT.)


Finally, the logical unit must be labeled and formatted, a file system added, and the LUN mounted by the client OS.


LUN Setup Complete

(Diagram: the initiator's file system now sits on the mounted LUNs, luna and lunb, which map through the /dev disks and the Ethernet and Fibre Channel paths to LUNa and LUNb on the target.)

LUN SETUP COMPLETE


Evaluation

How did you do?
If you are unfamiliar with many of the concepts in this module, please see the SAN Administration on Data ONTAP course for instructional development of the SAN concepts.
This course is focused on providing the steps to implement a SAN on the following platforms:
Microsoft Windows Server 2008 R2
VMware vSphere (ESX 4.0)
Red Hat Enterprise Linux 5.3

EVALUATION


Module Summary


MODULE SUMMARY


Module Summary

In this module, you should have learned to:
Describe the differences between network-attached storage (NAS) and storage area network (SAN)
List the methods to implement SAN solutions
Define logical unit number, initiator, and target
Describe ports, worldwide names, and worldwide port names
List the steps to implement a SAN

MODULE SUMMARY


Exercise
Module 1: SAN Review
Estimated Time: 30 minutes

EXERCISE
Please refer to your Exercise Guide materials for more instructions.


Windows FC
Connectivity
Module 2
SAN Implementation Workshop

WINDOWS FC CONNECTIVITY


Module Objectives

By the end of this module, you should be able to:
Describe multiple-path implementation with Fibre Channel (FC) connectivity
Describe how to configure FC ports on Windows and NetApp systems
Describe commands and utilities to identify the worldwide node name (WWNN) and worldwide port name (WWPN) on Windows and NetApp systems
Use commands and utilities to examine FC switch activity

MODULE OBJECTIVES


FC Connectivity Configuration

The following are the steps to configure an FC SAN:
1. Determine the FC topology
2. Verify host HBA configuration, drivers, firmware, cables, and multipathing software
3. Configure the switch (if in the topology)
4. Configure the target(s)
5. Configure the initiator(s)
6. Cable the devices together
7. Implement FC zoning (if required)

FC CONNECTIVITY CONFIGURATION


FC Topology


FC TOPOLOGY


FC Connectivity

Types of topologies:
1. Direct-attached
2. Single fabric
3. Dual fabric

(Diagram: hosts connected over Fibre Channel to a NetApp FAS system.)

FC CONNECTIVITY
NOTE: Fibre Channel-Arbitrated Loop (FC-AL), private loop and public loop topologies
are not discussed in this presentation.


Direct-Attached Topology

Also called point-to-point.

(Diagram: a Windows host connected directly to the storage controller over Fibre Channel.)

DIRECT-ATTACHED TOPOLOGY
Initially, Fibre Channel (FC) point-to-point topologies were seen as a replacement for the parallel SCSI bus, to overcome bandwidth and distance limitations. FC at 100 MB/s was superior to SCSI at 10 to 20 MB/s, and as SCSI progressed to 40, 80, then 160 MB/s, FC stayed ahead with 200 MB/s and then 400 MB/s. SCSI bandwidth was reaching a ceiling where FC at 200 MB/s was just getting started. FC point-to-point also overcame the severe distance limitations of SCSI, but one limitation remained: it connected one initiator to one target, supporting only the simplest topology. This provides limited connectivity.


Problems with Direct-Attached Topology

Does not scale
No fault tolerance

(Diagram: multiple Windows hosts, each requiring its own dedicated Fibre Channel connection to the controller.)

PROBLEMS WITH DIRECT-ATTACHED TOPOLOGY


Direct-attached topology does not scale and is therefore not appropriate for enterprise
environments. There is also no fault tolerance. If the cable or an HBA is defective, then a host
will lose all connectivity with its storage as in the example above.


Single-Fabric Topology

(Diagram: a Windows host connected through two switches joined by an Inter-Switch Link (ISL) to the storage controllers; switches joined by an ISL form a single fabric.)

SINGLE-FABRIC TOPOLOGY
The switched fabric uses a 24-bit addressing scheme with a 64-bit WWPN and WWNN. This scheme has over 16 million possible addresses, and each initiator-target pair gets a dedicated non-blocking path to ensure full bandwidth.
In this configuration, all devices are connected to FC switches.
Single fabric is a switched fabric topology where the servers are attached to NetApp storage controllers through a single FC fabric. This fabric may consist of multiple FC switches that are connected together.


Problems with Single-Fabric Topology

Only one name server per fabric
Not fully redundant

(Diagram: a Windows host attached to the storage controller through a single fabric.)

PROBLEMS WITH SINGLE-FABRIC TOPOLOGY


Dual Fabric Topology

Two switches are required; switches that are not connected to each other form a dual fabric.
Cluster the initiator for complete redundancy.

(Diagram: a Windows host with two HBAs, each attached to a separate, unconnected switch, which in turn connect to the storage controllers.)

DUAL FABRIC TOPOLOGY


Topology Summary

To optimize connectivity:
Architect clustered hosts
Implement a dual-fabric design with multiple paths
Configure active-active storage systems

NOTE: The exercise environment has an active-active storage configuration with single hosts and a single switch.

TOPOLOGY SUMMARY


FC Exercise Environment

(Diagram: a Windows Server 2008 R2 host on switch port 6; Storage System 1 with port 0c on switch port 0 and port 0d on switch port 1; Storage System 2 with port 0c on switch port 2 and port 0d on switch port 3; all connected to a single Fibre Channel switch.)

FC EXERCISE ENVIRONMENT


Data ONTAP


DATA ONTAP


High Availability

The exercise environment's storage systems have been configured as a high-availability (HA) pair.
To configure controller failover:
system and system2> license add xxxxxx
system and system2> reboot
system> cf enable

HIGH AVAILABILITY
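After cf enable, the failover state can be confirmed from either controller; a minimal sketch (the exact wording of the output varies by Data ONTAP version):
system> cf status
(reports whether controller failover is enabled and whether the partner is up)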


Data ONTAP as an FC Target

Data ONTAP 6.3 and later has support for FC.
Data ONTAP features:
Built-in FC functionality
Simple LUN creation and management

To properly configure Data ONTAP for FC connectivity:
Confirm the FC HBA(s) / port(s)
Configure and verify the Fibre Channel protocol
Configure the FC target HBA(s)
Identify the worldwide node name (WWNN)
Identify the worldwide port name (WWPN)

DATA ONTAP AS AN FC TARGET


HBA Confirmation

Confirm the current HBAs:
system> sysconfig -a

To identify the type of the on-board FC ports:
system> fcadmin config
                    Local
Adapter  Type       State        Status
----------------------------------------
0a       initiator  CONFIGURED   online
0b       initiator  CONFIGURED   online
0c       initiator  CONFIGURED   online
0d       initiator  CONFIGURED   online

NOTE: Add-on cards are configured to be either an initiator or target and cannot be changed.

HBA CONFIRMATION
The fcadmin utility manages the Fibre Channel adapters used by the storage subsystem. Use
these commands to show link- and loop-level protocol statistics, list the storage devices
connected to the storage system, and configure the personality of embedded adapters.


HBA Confirmation (Cont.)

To change an onboard interface from an initiator to a target:
system> fcadmin config -d 0b            (down the interface)
system> fcadmin config -t target 0b     (reconfigure the interface as a target)
system> reboot                          (a reboot is required)

To change an onboard interface from a target to an initiator:
system> fcadmin config -d 0b
system> fcadmin config -t initiator 0b
system> reboot

HBA CONFIRMATION (CONT.)


Configuring FC Services in Data ONTAP

1. Verify that the Fibre Channel protocol service is running:
fcp status
2. Verify that FC is licensed (license the FC service if needed):
license
license add XXXXXX
3. Start the FC service:
fcp start

CONFIGURING FC SERVICES IN DATA ONTAP


Configuring FC HBA in Data ONTAP

1. List the installed target HBAs:
fcp show adapters
2. Take an HBA offline:
fcp config adapter down
3. Set the target HBA speed to the FC switch port's speed to improve takeover / giveback performance:
fcp config adapter speed [auto|1|2|4]
4. Bring an HBA online:
fcp config adapter up

CONFIGURING FC HBA IN DATA ONTAP
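As a concrete sketch of these steps, assuming the target HBA is 0c and the attached switch port runs at 4 Gb:
Take the HBA offline:
system> fcp config 0c down
Match the switch port speed:
system> fcp config 0c speed 4
Bring the HBA back online:
system> fcp config 0c up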


Identify WWNN in Data ONTAP

The WWNN uniquely identifies the storage system.
The default WWNN is generated from a serial number in the system's NVRAM and stored on disk; it normally doesn't need to be changed.

To identify the WWNN:
system> fcp nodename
Fibre Channel nodename: 50:0a:09:80:86:f7:c7:86 (500a098086f7c786)

To change the WWNN (available in Data ONTAP 7.3.1.1 and later):
system> fcp nodename [-f] new_wwnn

IDENTIFY WWNN IN DATA ONTAP


Identify WWPN in Data ONTAP

A WWPN uniquely identifies an FC HBA port.
WWPNs are determined by:
WWNN
Controller failover mode (cfmode)
Internal port index

To verify the default WWPNs:
system> fcp portname show
Portname                  Adapter
--------                  -------
50:0a:09:81:86:f7:c7:86   0c
50:0a:09:82:86:f7:c7:86   0d

To change a WWPN (available in Data ONTAP 7.3.1.1 and later):
system> fcp portname set adapter

IDENTIFY WWPN IN DATA ONTAP


Within Fibre Channel (FC) SAN, worldwide port names (or WWPNs) uniquely identify each
Fibre Channel port. Each 64-bit address is determined by three factors. The first factor is the
worldwide node name (WWNN), which is the unique identifier for the NetApp storage
system running as an FC target device server. The second factor is the controller failover
mode (cfmode) currently set on the NetApp storage system. The third and final factor is that
each FC target port has an internal port index range that assists in assigning the WWPNs.


FCP cfmodes Defined

Controller failover mode (cfmode) determines how the HBAs of storage systems in an active-active configuration:
Log in to the fabric
Provide access to local and partner LUNs

Both storage systems in the active-active configuration must have the same setting.
fcp show cfmode to verify the current setting
fcp set cfmode to set the cfmode
Setting the cfmode requires advanced mode:
priv set advanced

FCP CFMODES DEFINED


Cfmode only applies to Fibre Channel environments in an active-active NetApp storage
controller configuration. The cfmode determines how target ports do the following:
Log in to the fabric
Handle local and partner traffic for a cluster
Provide access to local and partner LUNs in a cluster
In the original release of Data ONTAP 6.3, which included SAN support for Fibre Channel,
cfmode standby was the implied default. There was not a setting for cfmode in that release,
and it was not called cfmode standby. However, when Data ONTAP 6.5 was released, four
cfmodes were introduced. One of these modes was standby. The others were partner, mixed
and dual fabric. In Data ONTAP 7.1, a new cfmode called single system image (SSI) became
available. SSI is the default cfmode for new installations with Data ONTAP 7.2. The
availability of standby, partner, mixed and dual fabric modes is dependent on the storage
controller model, Data ONTAP version, and/or the use of 2-Gb or 4-Gb FC ports. With Data
ONTAP 7.3, the only available cfmode is single system image.


FCP cfmode Types

Prior to Data ONTAP 7.3, there were five types of cfmodes:

partner
Supported systems: All systems except the FAS270c, FAS20x0, FAS3040, FAS3070, FAS31x0, FAS60x0, and any FAS system with a 4-Gb or 8-Gb target FC adapter
Benefits and limitations: Supports all host OS types; supports all switches

single_image
Supported systems: All systems
Benefits and limitations: Supports all host OS types; supports all switches; makes all LUNs available on all target ports

dual_fabric
Supported systems: FAS270c only
Benefits and limitations: Supports all host OS types; requires fewer switch ports; does not support all switches (requires switches that support public loop)

standby
Supported systems: All systems except the FAS270c, FAS20x0, FAS31x0, FAS6040, FAS6080, and FAS6030 / FAS6070 with a 4-Gb or 8-Gb FC adapter
Benefits and limitations: Requires more switch ports; supports only Windows and Solaris hosts

mixed
Supported systems: All systems except the FAS270c, FAS20x0, FAS30x0, FAS31x0, FAS6040, FAS6080, and FAS6030 / FAS6070 with a 4-Gb or 8-Gb FC adapter
Benefits and limitations: Supports all operating systems; does not support all switches (requires switches that support public loop)

FCP CFMODE TYPES


There are five possible cfmodes on the storage controller. Only one cfmode can be set per
each storage controller, and in a cluster situation the cfmode must be the same for both
systems.
STANDBY
The standby mode is supported on all systems except the FAS270c. It supports only Windows
and Solaris operating systems. In addition, this mode requires additional switch ports.
PARTNER
The partner mode is supported on all systems except the FAS270c and the FAS6000 series.
All switches and host operating systems are supported.
DUAL-FABRIC
The dual-fabric mode is only supported on a FAS270c. All host operating systems are
supported by this mode. This mode requires a switch that supports a public loop.
MIXED
The mixed mode is supported on all systems except the FAS270c and the FAS6000 series.
Mixed mode supports all host operating systems, but requires a switch that supports a public
loop.
SINGLE IMAGE
The single image mode is supported on all systems, switches, and host operating systems.
This mode makes all LUNs available on all target ports.


FCP cfmode Types (Cont.)


Data ONTAP 7.3 and later supports only the single_image cfmode
Partner and standby are supported only on upgrades of existing systems that
currently support and use these cfmodes
After an upgrade to 7.3+, a storage system pair may only be (re)configured to
single_image

single_image cfmode
Each active-active pair has a single WWNN, allowing both storage systems in the
active-active configuration to function as a single Fibre Channel storage system
All LUNs in an HA pair are available on all ports in the HA pair
Multipathing software is required

FCP CFMODE TYPES (CONT.)


The single_image cfmode setting became available in Data ONTAP 7.1. This cfmode setting is
the default for new installations of Data ONTAP 7.2 and later; upgrades to Data ONTAP 7.2
retain the cfmode from the previous version. In SSI cfmode, an active-active storage
controller configuration has a single WWNN, and both systems in the configuration function
as a single Fibre Channel node. Each node in the cluster shares the partner node's LUN map
information.
All LUNs in the cluster are available on all ports in the cluster by default. As a result,
more paths to each LUN exist on the cluster, and any port can provide access to both local
and partner LUNs. You can limit the LUNs available on a subset of ports by defining port
sets and binding them to an igroup. Any host in the igroup can then access the LUNs only by
connecting to the target ports in the port set, as in the sketch that follows.
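A minimal sketch of that port set workflow (the port set name ps_win1, the igroup name win1_igroup, and the port list are hypothetical examples):

system> portset create -f ps_win1 system:0c system2:0c
system> igroup bind win1_igroup ps_win1
system> portset show

After the bind, hosts in win1_igroup can reach their LUNs only through the ports in ps_win1.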


Single Image Example


[Diagram: A host connects through Fabric 1 and Fabric 2 to an active-active
configuration of Controller 1 and Controller 2, each with target ports 0c and
0d and its own LUNs. Solid blue lines are paths to the LUNs being served by
Controller 1; dashed green lines are paths to the LUNs being served by
Controller 2. Multipathing software is required.]

SINGLE IMAGE EXAMPLE


LUNs from both controllers are visible through a single physical (and logical) port.
A single FC port is a primary path for a LUN served by that controller and a secondary path
for a LUN on the partner controller.


Single Image Example (Cont.)


[Diagram: The same dual-fabric configuration. Switch/Fabric 1 experiences a
failure, and the multipathing (MP) layer works around the failure using the
remaining paths through Fabric 2. Solid blue lines are paths to the LUNs being
served by Controller 1; dashed green lines are paths to the LUNs being served
by Controller 2. Multipathing software is required.]

SINGLE IMAGE EXAMPLE (CONT.)


Switch 1 experiences a failure. The host multipathing software layer works around that failure
to reroute the I/O through the secondary path to the LUN.


Single Image Example (Cont.)


[Diagram: The same configuration. Controller 1 experiences a failure;
Controller 2 takes over all operations, and the MP layer works around the
failure. Solid blue lines are paths to the LUNs being served by Controller 1;
dashed green lines are paths to the LUNs being served by Controller 2.
Multipathing software is required.]

SINGLE IMAGE EXAMPLE (CONT.)


When Controller 1 experiences a failure, Controller 2 takes over all operations. The host
multipathing software layer works around the failure by rerouting I/O through the Controller
2 path to the LUN.


Single Image Example (Cont.)


[Diagram: The same dual-fabric, active-active configuration, illustrating the
path flexibility of single system image. Solid blue lines are paths to the
LUNs being served by Controller 1; dashed green lines are paths to the LUNs
being served by Controller 2. Multipathing software is required.]

SINGLE IMAGE EXAMPLE (CONT.)


Single system image can protect against controller and fabric failure with as little as a single
port. However, NetApp recommends a fully redundant configuration if more ports are
available. This flexibility allows single system image cfmode to support models like the
FAS270c, as well as models like the FAS3000 and FAS6000 series.


Single Image Example (Cont.)


[Diagram: The host connects directly to Controller 1 and Controller 2 (ports
0c and 0d), with both links operating in loop mode rather than through
fabrics. Solid blue lines are paths to the LUNs being served by Controller 1;
dashed green lines are paths to the LUNs being served by Controller 2.
Multipathing software is required.]

SINGLE IMAGE EXAMPLE (CONT.)


The FAS270c and FAS3000 active-active controllers may be directly attached to the host
when single image cfmode is used. This configuration is not supported with any FAS800 or
FAS900 series system (which use the standby, partner, or mixed cfmodes). Single system
image supports direct attachment because it allows ports to alternate between fabric
point-to-point login and individual loop. All LUNs remain available on a single port in
the event that one of the links fails.


fcp config (Single System Image Mode)


system or system2> fcp nodename
Fibre Channel nodename: 50:0a:09:80:86:f7:c7:86 (500a098086f7c786)

system or system2> fcp show cfmode
fcp show cfmode: single_image

system> fcp config
0c: ONLINE <ADAPTER UP> PTP Fabric
    host address 011000
    portname 50:0a:09:81:96:f7:c7:86 nodename 50:0a:09:80:86:f7:c7:86
    mediatype auto speed auto
0d: ONLINE <ADAPTER UP> PTP Fabric
    host address 011100
    portname 50:0a:09:82:96:f7:c7:86 nodename 50:0a:09:80:86:f7:c7:86
    mediatype auto speed auto

system2> fcp config
0c: ONLINE <ADAPTER UP> PTP Fabric
    host address 011200
    portname 50:0a:09:81:86:f7:c7:86 nodename 50:0a:09:80:86:f7:c7:86
    mediatype auto speed auto
0d: ONLINE <ADAPTER UP> PTP Fabric
    host address 011300
    portname 50:0a:09:82:86:f7:c7:86 nodename 50:0a:09:80:86:f7:c7:86
    mediatype auto speed auto

FCP CONFIG (SINGLE SYSTEM IMAGE MODE)


This is an example of the adapter settings for single_image cfmode. Notice that all node
names are identical and that the media type is set to auto. This means that the ports log into
the fabric using point-to-point mode. If point-to-point mode fails, then the ports will try loop
mode.


Windows


WINDOWS


Windows as an FC Initiator
NetApp has supported Windows as an FC initiator OS since Windows 2000 Server
Windows Server 2008 has many advantages over previous versions:
  New tools: Storage Explorer, Storage Manager for SANs
  Multipath I/O (MPIO)
  Built-in FC drivers
Windows must be properly configured for FC connectivity

WINDOWS AS AN FC INITIATOR
Windows Server 2008 provides many new features that make configuring an FC SAN easier.
Storage Explorer provides a one-stop interface for investigating local HBAs as well as the
FC switches, if present. Storage Manager for SANs, available in Windows Server 2003 R2 and
later, is an additional tool for configuring a SAN environment. Storage Manager for SANs
requires the Virtual Disk Service add-in provided by NetApp at the NOW site.


Windows FC Design and Installation


1. Verify host operating system releases, required patches, and the NetApp
   Windows Host Utility Kit against the Interoperability Matrix
   Use the System Properties dialog to verify the release
2. Install compatible host bus adapters (HBAs)
3. Install and configure required HBA drivers and utilities
4. Verify an HBA:
   Emulex: use HBAnyware
   QLogic: use SANsurfer
   All HBA types: Device Manager dialog
5. Install a compatible NetApp Windows Host Utility Kit
   Provides Perl scripts to diagnose and troubleshoot Windows
   Example: windows_info provides diagnostic information

WINDOWS FC DESIGN AND INSTALLATION


Host utilities contain software tools and documentation that allow you to configure a host in a
NetApp SAN environment.
NOTE: Host utilities were formerly called Host Attach and Support Kits. Kits that were
released before this naming convention changed are still called Host Attach and Support Kits.
The term host utilities will be used in this course, but be aware that NetApp is in the process
of transitioning to this name.
Host utilities are available from the Download Software page on the NOW site at:
http://now.netapp.com/NOW/cgi-bin/software


Windows Implementation
After installation, to configure a Windows Emulex or QLogic implementation:
Verify the HBA is enabled
Identify the WWNN on the host HBA(s)
Identify the WWPN on the host HBA(s)
Verify connectivity between the initiator(s) and target

WINDOWS IMPLEMENTATION


Windows/Emulex Implementation
Verify that Windows Server 2008 has identified the HBA(s)
[Screenshot: In Device Manager, select Storage controllers; the enabled Emulex
HBAs are listed. Double-click an HBA to investigate it.]

WINDOWS/EMULEX IMPLEMENTATION


Windows/Emulex Implementation (Cont.)


Identify the driver associated with the HBA(s)
[Screenshot: Driver properties for an Emulex HBA; other HBAs will be at a
different location. The properties dialog provides more information.]

WINDOWS/EMULEX IMPLEMENTATION (CONT.)


Windows/Emulex Implementation (Cont.)


Verify that the HBA(s) are connected on Windows Server 2008
[Screenshot: Locate the correct server; in this example, two Emulex HBAs are
installed.]

WINDOWS/EMULEX IMPLEMENTATION (CONT.)


Windows/Emulex Implementation (Cont.)


Identify the local WWPN(s) on Windows Server 2008
[Screenshot: Select one of the HBAs; its WWNN and WWPN are displayed.]

WINDOWS/EMULEX IMPLEMENTATION (CONT.)


Discovery
[Diagram: An initiator and a target connected over Fibre Channel. When ports
are active (and properly zoned), discovery is automatic.]

DISCOVERY
Within FC SAN, discovery is automatic unless switch zoning prevents it. See Appendix 1 for
a discussion about switch zoning.


Data ONTAP Discovery of Initiators


Verify connectivity from the storage system:

system> fcp show initiators
Initiators connected on adapter 0c:
Portname                  Group
-----------------------   -----
10:00:00:00:c9:6b:77:b4

(The portname listed is the Windows WWPN.)

NOTE: For convenience, you may assign an alias to the Windows WWPN

DATA ONTAP DISCOVERY OF INITIATORS


WWPN Aliases
In large FC installations, it can be difficult to identify WWPNs because of
their cryptic 64-bit names
Example: 10:00:00:00:c9:6b:77:b4
For convenience, a WWPN may be assigned a name, or alias, within Data ONTAP
Both target and initiator ports may be aliased

WWPN ALIASES
One common problem administrators face in large Fibre Channel installations is
distinguishing between WWPNs, because of their cryptic 64-bit naming convention. With
Data ONTAP 7.3, administrators can assign a WWPN a more convenient name, or alias, to
make identification easier. Aliases may be used for both target and initiator ports.


fcp wwpn-alias Command


To alias a WWPN, use:
  fcp wwpn-alias set [-f] alias wwpn
Example:
  system> fcp wwpn-alias set WIN1-FC 10:00:00:00:c9:6b:77:b4

To remove an alias, use:
  fcp wwpn-alias remove {-a alias ... | -w wwpn}
Examples:
  system> fcp wwpn-alias remove -a WIN1-FC WIN2-FC
  (removes the listed aliases)
  system> fcp wwpn-alias remove -w 10:00:00:00:c9:6b:77:b4
  (removes all aliases for a particular WWPN)

To show aliases, use:
  fcp wwpn-alias show [-a alias | -w wwpn]

FCP WWPN-ALIAS COMMAND


To alias a WWPN, an administrator uses the new fcp wwpn-alias set command. The -f switch
can be used to force reassignment of an existing alias to a new WWPN. A WWPN may have
multiple aliases, but each alias can be assigned to only one WWPN.
To remove an alias, an administrator may use the new fcp wwpn-alias remove command with
either the -a switch, to remove one or more particular aliases, or the -w switch, to
remove all aliases for a given WWPN.
To verify all aliases, an administrator may use the fcp wwpn-alias show command. The
output can be limited to a particular alias with the -a switch or to a particular WWPN
with the -w switch.


Alias Rules
A storage system can have up to 1,024 aliases
An alias can contain the following characters:
A-Z, a-z, 0-9, '_', '-', '.', '{', '}' and no spaces
Many aliases may be associated with a single WWPN, but each alias will be
assigned to only one WWPN
Use fcp wwpn-alias help subcommand for more information on a subcommand

ALIAS RULES
The following rules apply to WWPN aliases:

A storage system can have up to 1,024 aliases.


An alias can have the following characters: A-Z, a-z, 0-9, '_','-','.','{','}' but no spaces.
Many aliases may be associated with a single WWPN, but each alias will be assigned to
only one WWPN.
Use the fcp wwpn-alias help <subcommand> function for more information on a particular
subcommand.


WWPN Aliases Example


system> fcp wwpn-alias set WIN1-FC 10:00:00:00:c9:6b:77:b4

system> fcp wwpn-alias show
WWPN                      Alias
-----------------------   -------
10:00:00:00:c9:6b:77:b4   WIN1-FC

system> fcp show initiators
Initiators connected on adapter 0c:
Portname                  Group
-----------------------   -----
10:00:00:00:c9:6b:77:b4
WWPN Alias(es): WIN1-FC

WWPN ALIASES EXAMPLE


Windows Discovery of Targets


Verify connectivity from Windows Server 2008
[Screenshot: In Storage Explorer, select the Brocade fabric; the storage
system's WWPNs and the Windows WWPNs show up.]

WINDOWS DISCOVERY OF TARGETS


Binding
Binding, or mapping, in FC SAN is the process of associating an OS device name
with a target's worldwide port name
Persistent binding in FC SAN is the process of ensuring that the same binding
occurs even after a host OS reboot
NOTE: Binding is done automatically in most modern operating systems;
therefore, it does not need to be manually configured

BINDING


Windows/Emulex Binding

[Screenshot: From the HBAnyware Device Properties view, the current bindings
and current SCSI IDs are shown (SCSI ID = bus number, target ID). Do not use
persistent binding.]

WINDOWS/EMULEX BINDING
There are no native Windows Server 2003 or 2008 tools for verifying binding of an initiator
and one or more targets. Therefore, an administrator must use third-party tools such as
HBAnyware from Emulex.
NOTE: Do not use FC-persistent binding within a Windows SAN environment.


Command-Line Interface for Server Core


Command-line interface (CLI) commands for FC management are available from the
HBA vendors:
  Emulex: hbacmd
  QLogic: scli
Commands are available to:
  Verify connectivity
  Interrogate the fabric
  Manage bindings
  Verify configuration
  Administer VPORTs (discussed in module 7)

COMMAND-LINE INTERFACE FOR SERVER CORE
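Neither vendor CLI is covered in depth here, but as an illustrative sketch (subcommands and output vary by tool version, so treat these invocations as assumptions to verify against the vendor documentation):

PS C:\> hbacmd ListHBAs    # Emulex: list the installed HBAs with their WWNNs/WWPNs
PS C:\> scli -i            # QLogic: display general information for the installed HBAs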


Multipath I/O


MULTIPATH I/O


Multiple Paths to a LUN


[Diagram: A Windows Server 2008 R2 host attaches to a Fibre Channel switch
(switch port 6). Storage System 1 attaches its ports 0c and 0d to switch
ports 0 and 1; Storage System 2 attaches its ports 0c and 0d to switch ports
2 and 3.]

MULTIPLE PATHS TO A LUN


Multiple Paths
When multiple paths to a LUN are present, the same LUN appears multiple times:
one instance for each available path
[Screenshots: the same LUN in Disk Management without MPIO (one disk instance
per path) and with MPIO (a single disk instance)]

MULTIPLE PATHS


Optimized or Non-optimized Paths


Not all paths are necessarily equal
Optimized (also called primary or favored) = an active path between initiator
and target; all such paths have the same latency level
Non-optimized (also called secondary or unfavored) = an inactive path between
initiator and target; latency differs between paths

OPTIMIZED OR NON-OPTIMIZED PATHS


Multipath Access
Symmetric: all paths are favored or optimized
Asymmetric: only certain paths are favored or optimized
But how do you determine optimized or non-optimized paths?
[Diagram: a host reaching a LUN through ports 0c and 0d on each controller of
the pair]

MULTIPATH ACCESS


Windows Multipath FC Implementation


Storage management software available for Windows:
Native Windows Disk Management
Veritas Storage Foundation

Native Disk Management in W2K8 will use ALUA by


default with Microsoft native Device Specific Module
(DSM); enable AULA on the igroup
system> igroup set my_igroup alua on
Pre-defined igroup Default is off

When possible, use the NetApp DSM and not enabled


AULA on the igroup
system> igroup set my_igroup alua off

This course focuses on native Disk Management,


NetApp DSM and Emulex HBAs
2009 NetApp. All rights reserved.

WINDOWS MULTIPATH FC IMPLEMENTATION
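A hedged verification sketch for the ALUA setting (the igroup name is a hypothetical example, and the igroup show output is paraphrased rather than verbatim):

system> igroup set my_igroup alua on
system> igroup show -v my_igroup
(the verbose listing includes the igroup's ALUA state)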


Windows Native Multipath I/O


Windows Server 2008 can be configured to support Multipath I/O
[Screenshot: in Server Manager, right-click Features and select Add Feature;
the Multipath I/O feature is added. No reboot is needed in R2.]

WINDOWS NATIVE MULTIPATH I/O
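The same feature can be added from the command line on Windows Server 2008 R2; a sketch assuming the ServerManager PowerShell module, with an mpclaim check afterward:

PS C:\> Import-Module ServerManager
PS C:\> Add-WindowsFeature Multipath-IO    # install the MPIO feature
PS C:\> mpclaim -s -d                      # list the disks MPIO has claimed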


Windows Native Multipath I/O (Cont.)


Out of the box, Windows Multipath I/O comes with a generic DSM
NetApp recommends installing the NetApp Data ONTAP DSM
Check the Interoperability Matrix for the recommended version

WINDOWS NATIVE MULTIPATH I/O (CONT.)


NetApp Data ONTAP DSM 3.3.1


Features:
  Supports Windows Server 2008 R2
  Supports multiple load-balancing policies
  Support for claiming iSCSI LUNs
  Coexists with other DSMs
  Multiprotocol LUN support (simultaneous iSCSI and FC paths to the same LUN)
Requirements:
  Windows Server 2003, 2008, or 2008 R2
  Data ONTAP 7.2.2+

NETAPP DATA ONTAP DSM 3.3.1


The Data ONTAP DSM for Windows MPIO is a Device Specific Module (DSM) that works
with the Microsoft Windows MPIO drivers (mpdev.sys, mpio.sys, and mpspfltr.sys) to
manage multiple paths between NetApp and IBM N series storage systems and Windows
host computers. The DSM includes the storage-system-specific intelligence needed to
correctly identify paths and to manage path failure and recovery.


Install NetApp DSM


INSTALL NETAPP DSM


Module Summary


MODULE SUMMARY


Module Summary
In this module, you should have learned to:
Describe multiple path implementation with Fibre Channel (FC) connectivity
Describe how to configure FC ports on Windows and NetApp systems
Describe commands and utilities to identify the worldwide node name (WWNN) and
worldwide port name (WWPN) on Windows and NetApp systems
Use commands and utilities to examine FC switch activity

MODULE SUMMARY


Exercise
Module 2: Windows FC Connectivity
Estimated Time: 45 minutes

EXERCISE
Please refer to your Exercise Guide materials for more instructions.


Windows iSCSI
Connectivity
Module 3
SAN Implementation Workshop

WINDOWS ISCSI CONNECTIVITY


Module Objectives
By the end of this module, you should be able to:
Describe multiple path implementation with iSCSI connectivity
Configure network ports on Windows and NetApp systems
Identify the worldwide node name (WWN) on Windows and NetApp systems
Set up and verify multiple path iSCSI connectivity between Windows and NetApp
systems

MODULE OBJECTIVES


iSCSI Review
iSCSI encapsulates SCSI-3 command frames in IP packets
[Diagram: the protocol stack — Ethernet, IP, TCP, iSCSI, SCSI]
Uses TCP port 3260
IETF standard documented in RFC 3720
http://www.ietf.org/rfc/rfc3720.txt?number=3720
The iSCSI standard specifies:
  Connection negotiation
  Authentication methods
  Device discovery

ISCSI REVIEW


iSCSI Node Names Review


Symbolic name used to uniquely identify nodes (targets and initiators)
Two naming formats:
  IQN (iSCSI Qualified Name): defined by RFC 3720, a DNS-based naming format
    (type . date . registered domain name : organizationally defined name)
    Example: iqn.1992-08.com.netapp:sn.101171811
  EUI (Extended Unique Identifier): an OUI-based scheme similar to WWN
    (type . OUI + organizationally defined)
    Example: eui.495f1de83eb8

ISCSI NODE NAMES REVIEW


Valid characters for iSCSI node names are:
  ASCII dash ('-'), dot ('.'), colon (':'), lowercase letters ('a' through 'z'),
  and digits ('0' through '9')
  Maximum size is 223 bytes
  No white space is allowed
  Uppercase characters are converted to lowercase
IQN
  iqn.yyyy-mm.backward_naming_authority:device
  yyyy-mm is the month and year in which the naming authority acquired the
  domain name
  backward_naming_authority is the reverse domain name of the entity
  responsible for naming this device
  device is the unique host name for the device
Extended Unique Identifier (EUI)
  eui + . + 16 hexadecimal digits
NOTE: Older Microsoft iSCSI software initiators allowed an underscore character
in the IQN name. Data ONTAP complies with the iSCSI specification and will not
recognize it.


IP Connectivity


IP CONNECTIVITY


IP Connectivity
Types of topologies:
1. Direct-attached
2. Network
[Diagram: hosts connecting to a NetApp FAS system]

IP CONNECTIVITY


Direct-Attached Topology
[Diagram: A Windows host attached directly to the storage system over iSCSI;
this topology is also called point-to-point.]

DIRECT-ATTACHED TOPOLOGY
In direct-attached topologies, servers (or hosts) are directly attached to the NetApp controller
using a crossover cable. It is not possible to directly attach to more than one controller in a
high-availability configuration.


Problems with Direct-Attached Topology


[Diagram: Many Windows hosts, each directly attached to the storage system
over iSCSI. This topology does not scale and provides no fault tolerance.]

PROBLEMS WITH DIRECT-ATTACHED TOPOLOGY


Direct-attached topologies do not scale and are therefore not appropriate for enterprise
environments. There is also no fault tolerance. If the cable or an adapter is defective, then a
host will lose all connectivity with its storage.


Network Topology
[Diagram: Windows hosts attached to the storage system through dual Ethernet
switches over iSCSI. Cluster the initiator for complete redundancy; dual
switches provide extra redundancy.]

NETWORK TOPOLOGY
In a network environment, servers are attached to NetApp controllers through Ethernet
switches. This network may consist of multiple Ethernet switches in any configuration.
There are two types of switched environments, dedicated Ethernet and shared Ethernet. In a
dedicated Ethernet, there is no extraneous network traffic. The network is totally dedicated to
iSCSI and related management traffic. Such a network is typically located in a secure data
center. Direct-attached and dedicated Ethernet networks represent approximately 90% of
current iSCSI deployments. In a shared Ethernet, the network is shared with other traffic or a
corporate Ethernet network. This typically introduces firewalls, routers, and IP security
(IPsec) into the Ethernet network.


iSCSI Adapter Types

Because iSCSI implements SAN over IP, administrators have choices on how to
connect to an iSCSI network:
  NIC and iSCSI soft initiator
  TOE and iSCSI soft initiator
  iSCSI hardware initiator/HBA
[Diagram: Three protocol stacks (application, SCSI, iSCSI, TCP, IP, network
interface). With a standard NIC, everything below the application is server
processing; with a TOE, TCP/IP processing moves to the NIC; with an iSCSI
HBA, iSCSI and TCP/IP processing both move to the HBA.]

ISCSI ADAPTER TYPES


There are three forms of iSCSI initiators: iSCSI software initiators with standard Ethernet
network interface cards (NICs), iSCSI software initiators with Ethernet TCP offload engine
(TOE) NICs, and iSCSI hardware initiators (host bus adapters, or HBAs). The iSCSI software
initiator over standard Ethernet NIC solution is easy to install and has no extra costs as most
modern hosts have one or two onboard gigabit Ethernet NICs standard. The drawback of this
solution is that the host CPU must process iSCSI and TCP/IP protocol data and application
performance may suffer due to the cost of the iSCSI and TCP/IP load.
The TOE offloads TCP from the host processor, allowing the host to focus on application
processing. TOEs do not offload iSCSI processing from the host and therefore require an
iSCSI software initiator.
With an iSCSI HBA, all of the functions associated with iSCSI are moved off the host
processor(s) and onto a card, similar to an FC HBA or a SCSI HBA. This allows the host
processor to work solely on the application, handing all storage-related work to the iSCSI
HBA. The iSCSI hardware initiator solution is more expensive and has platform compatibility
considerations. In addition, the iSCSI HBA may provide support for SAN boot or the ability
to boot from the iSCSI LUN on the NetApp storage system. Each path may also have a
different single point of failure, clustering support capabilities, platform support
considerations, and integration with NetApp SnapDrive.
NOTE: The iSCSI Support Matrix on the NOW site contains the most up-to-date
information.


Topology Summary
To optimize connectivity:
  Architect clustered hosts
  Implement dual switches with multiple paths
  Configure active-active storage systems
NOTE: The exercise environment will implement the high-availability
(active-active) storage system configuration with several multiple path
techniques

TOPOLOGY SUMMARY


IP Exercise Environment
[Diagram: A Windows Server 2008 R2 host connects through the iSCSI network to
Storage System 1 and Storage System 2, each with interfaces e0a, e0b, and e0c.
On storage systems without e0M, e0a is used for management access only.]

IP EXERCISE ENVIRONMENT


Discovery
For an initiator and target to communicate, the initiator must discover the
target
Proper configuration of the initiator OS is required for discovery
Discovery is accomplished over TCP port 3260
We will investigate:
  Data ONTAP setup
  Windows Server 2003/2008 software initiator with a standard NIC

DISCOVERY
NOTE: Discovery generally occurs over TCP port 3260. Therefore, this port must not be
blocked by firewalls positioned between the initiator and target on a network.


Data ONTAP


DATA ONTAP


Data ONTAP as an iSCSI Target


Data ONTAP 6.4 and later has support for iSCSI
Data ONTAP features:
Built-in iSCSI service
Simple LUN creation and management

Data ONTAP must be properly configured for


iSCSI connectivity
1. Configure IP interfaces
2. Configure iSCSI services
3. Configure the iSCSI interfaces
4. Identify the worldwide name (WWN)
2009 NetApp. All rights reserved.

DATA ONTAP AS AN ISCSI TARGET


Configuring Interfaces
1. List the available interfaces:
   ifconfig -a
2. Take an interface offline:
   ifconfig interface_name down
3. Configure the interface:
   ifconfig interface_name ipaddress
4. Bring the interface online:
   ifconfig interface_name up
NOTE: Virtual interfaces (interface groups) may also be configured for use
with the iSCSI service. A combined sketch of these steps follows.
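A combined sketch of the four steps (the interface name and addressing are lab-style examples, and the netmask is an assumption for illustration):

system> ifconfig -a
system> ifconfig e0b down
system> ifconfig e0b 10.254.134.75 netmask 255.255.255.0
system> ifconfig e0b up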

CONFIGURING INTERFACES


Configuring iSCSI Services in Data ONTAP


1. Verify the iSCSI service is running:
iscsi status

2. Verify iSCSI is licensed (license it if needed):


license
license add XXXXXX

3. Start the iSCSI service:


iscsi start

2009 NetApp. All rights reserved.

CONFIGURING ISCSI SERVICES IN DATA ONTAP
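A hedged transcript of those three steps (the license code is a placeholder, and the status messages are paraphrased, not verbatim output):

system> iscsi status
(reports whether the iSCSI service is running)
system> license add XXXXXX
system> iscsi start
(the console confirms that the iSCSI service has started)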


Verify Interfaces
Verify the interface is enabled for iSCSI:
  iscsi interface show
  (By default, all interfaces are enabled)
To enable an interface for iSCSI traffic:
  iscsi interface enable interface_name
To disable iSCSI traffic for a particular interface:
  iscsi interface disable interface_name

VERIFY INTERFACES


Interface Access List


Administrators may force initiators to access a
storage system through certain interfaces
iscsi interface accesslist add
initiator_name {-a|interface_name}

By default, all initiators may use any interface


that is enabled for iSCSI traffic
To display the current access list, use:
iscsi interface accesslist show

To remove an entry from the access list, use:


iscsi interface accesslist remove
initiator_name {-a|interface_name}

2009 NetApp. All rights reserved.

INTERFACE ACCESS LIST


The accesslist feature controls initiator access to network interfaces. By default, an initiator
does not have an access list and can access the storage system through any network interface.
The administrator can create an access list using the add subcommand:
iscsi interface accesslist add initiator {-a | interface...}
This creates an access list for initiator with a list of network interface names.
The specified initiator can only log in through the network interfaces in the access list.
If the specified initiator sends a SendTargets request, it will receive a list of target addresses.
These target addresses are associated with only those network interfaces that are included in
the access list.
An existing access list can be edited by means of the add and remove subcommands:
iscsi interface accesslist remove <initiator> [-a | <interface>...]
The -a parameter adds or removes all network interfaces. When the last network interface is
removed from an access list, the access list itself is removed.
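As a sketch of restricting an initiator to the iSCSI data interfaces (the initiator name reuses this module's Windows example, and e0b/e0c are assumed to be the iSCSI-enabled interfaces):

system> iscsi interface accesslist add iqn.1991-05.com.microsoft:win e0b e0c
system> iscsi interface accesslist show
system> iscsi interface accesslist remove iqn.1991-05.com.microsoft:win -a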


Identifying WWN in Data ONTAP


To identify the WWN:
system> iscsi nodename
iSCSI target nodename: iqn.1992-08.com.netapp:system

Remember WWPNs are not used within iSCSI

2009 NetApp. All rights reserved.

IDENTIFYING WWN IN DATA ONTAP


Interfaces on the Storage System


Verify interface status:
system> ifconfig -a
...
e0b: flags=108042<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:a0:98:03:28:8f (auto-unknown-down) flowcontrol full
e0c: flags=108042<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:a0:98:03:28:8f (auto-unknown-down) flowcontrol full
...

Enable the interface:


system> ifconfig
...[system:
system> ifconfig
...[system:

e0b 10.254.134.75 up
netif.linkUp:info]: Ethernet e0b: Link up.
e0c 10.254.134.81 up
netif.linkUp:info]: Ethernet e0c: Link up.

Ensure that the iSCSI service may use the interface:


system> iscsi interface enable e0b e0c

2009 NetApp. All rights reserved.

INTERFACES ON THE STORAGE SYSTEM


Windows


WINDOWS


Windows as an iSCSI Initiator


NetApp has supported Windows as an iSCSI initiator
OS since Windows 2000 Server
Always check Interoperability Matrix Tool for current
supported Operating Systems

Windows Server 2008 has many advantages over


previous versions
New tools such as Storage Explorer and Storage
Manager for SANs

Windows Server 2008 R2 has many advantages over


Windows Server 2008
User interface enhancement and redesign
iSCSI boot support for up to 32 paths

Windows must be properly configured for iSCSI


connectivity over a standard network interface
2009 NetApp. All rights reserved.

WINDOWS AS AN ISCSI INITIATOR


Windows iSCSI Design and Installation


1. Verify host operating system releases, required
patches, and NetApp iSCSI Host Utility Kit with the
Interoperability Matrix Tool and FC and iSCSI
configuration guides
Use System Properties Dialog

2. Identify and verify a network interface is properly


configured or install a supported iSCSI HBA or TOE
3. In Windows Server 2008, the software initiator is
preinstalled
4. Install compatible NetApp Windows Host Utility Kit
Provides Perl scripts to monitor and diagnose iSCSI on
Windows

2009 NetApp. All rights reserved.

WINDOWS ISCSI DESIGN AND INSTALLATION


Windows/NIC Implementation
After installation, to configure a Windows standard NIC software initiator
implementation:
1. Identify the local network interface(s) to use
2. Verify the iSCSI Initiator driver is enabled and the service is started
3. Identify the WWN for the local Windows host
4. Identify which method of discovery to use and enter the storage system's
   portal IP address or iSNS address
5. Configure authentication security if necessary
6. Verify discovery and log on to the storage system

WINDOWS/NIC IMPLEMENTATION


Windows/NIC Implementation (Cont.)


1. Identify and configure the local interfaces


WINDOWS/NIC IMPLEMENTATION (CONT.)


Windows/NIC Implementation (Cont.)


2. Verify the iSCSI Initiator driver is enabled and the iSCSI Initiator
   service is started
[Screenshot: the Windows Server 2008 R2 version is shown. NOTE: In Windows
Server 2003, the Device Manager category is called SCSI and RAID controllers.]

WINDOWS/NIC IMPLEMENTATION (CONT.)


Windows/NIC Implementation (Cont.)


iSCSI initiator may be configured through:
Storage Explorer
iSCSI Initiator Properties Dialog

Select
and then
configure

2009 NetApp. All rights reserved.

WINDOWS/NIC IMPLEMENTATION (CONT.)


The Microsoft iSCSI software initiator may be configured through either Storage Explorer in
Windows Server 2008 or the iSCSI Initiator Properties dialog in Windows Server 2003 or
2008.


Windows/NIC Implementation (Cont.)


3. Identify WWN using iSCSI Initiator Properties
Select first
Current WWN
To change the WWN

2009 NetApp. All rights reserved.

WINDOWS/NIC IMPLEMENTATION (CONT.)


The local host WWN appears on the Configuration tab of the iSCSI Initiator Properties
dialog. It generally does not need to be changed.


WWN Spoofing

iSCSI node names are:
  Spoof-able
  Sniff-able
  Can be attacked
NetApp recommends using authentication methods and other security techniques
(discussed later)

WWN SPOOFING


Discovery
[Diagram: An initiator and a target connected over Ethernet. Discovery is not
automatic.]

DISCOVERY
Unlike FC discovery, iSCSI discovery is not automatic.


Windows/NIC Implementation (Cont.)


4. Discovery with iSCSI Initiator Properties Dialog
Click here to discover
a target portal
Never
change
the port
or add iSNS
server to poll
Add storage
systems IP address
To set security
2009 NetApp. All rights reserved.

WINDOWS/NIC IMPLEMENTATION (CONT.)


From the Discovery tab of the iSCSI Initiator Properties dialog, an administrator may set
the method of discovering targets. Discovery may be accomplished either by adding a target
portal address or by polling an Internet Storage Name Service (iSNS) server. See Appendix
3 for a discussion of iSNS.
When adding a target portal address, if authentication is required, the administrator sets
it by clicking the Advanced button. iSCSI authentication is discussed in detail next.


iSCSI Authentication in Windows


5. To increase security, iSCSI may be configured
to require authentication
Authentication methods:
CHAP
Unidirectional - targets will authenticate initiators
Bidirectional - initiators and targets will authenticate
each other

RADIUS

IPsec could also be used increase security


This course will discuss using CHAP
authentication, but will not use it in the exercise

2009 NetApp. All rights reserved.

ISCSI AUTHENTICATION IN WINDOWS


There are two methods of authenticating systems in iSCSI:
CHALLENGE HANDSHAKE AUTHENTICATION PROTOCOL (CHAP)
CHAP requires a known secret that is shared by both ends of the authentication. There are
four basic steps of unidirectional authentication:
1. After the completion of the link establishment phase, the target sends a challenge
   message to the initiator.
2. The initiator responds with a one-way hash function of the shared secret.
3. The target checks the response against its own calculation of the expected hash value.
4. At random intervals, the target sends a new challenge to the initiator and repeats
   steps 1, 2, and 3.
NOTE: In bidirectional authentication, the process is also implemented in reverse.
REMOTE AUTHENTICATION DIAL-IN USER SERVICE (RADIUS)
RADIUS is a networking protocol that uses access servers to provide centralized management
of access to large networks.
Proper authentication resists man-in-the-middle attacks as well as other attacks. IPsec
can also be used to increase security.


iSCSI Unidirectional CHAP Authentication


Configure the discovery
to use the CHAP
authentication

Check to configure

system> iscsi security add


-i iqn.1991-05.com.microsoft:win
-s CHAP
-n iqn.1991-05.com.microsoft:win
-p thisismysecret

To configure
bidirectional, check here
and then...
2009 NetApp. All rights reserved.

ISCSI UNIDIRECTIONAL CHAP AUTHENTICATION


To configure unidirectional (inbound) CHAP authentication for the Microsoft Software
Initiator, enter the iscsi security add command on the storage system and enter the user name
and shared secret as appropriate. By convention, the user name is generally the same as the
WWN.
The Microsoft iSCSI Software Initiator requires both the initiator and target CHAP passwords
to be at least 12 bytes if IPsec encryption is not being used. The maximum password length is
16 bytes regardless of whether IPsec is used.
NOTE: Data ONTAP provides an iscsi security generate command that creates a random
128-bit key that may, in some cases, be used as the password.
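To confirm the entry, the configured authentication methods can be listed; a minimal sketch (output paraphrased rather than verbatim):

system> iscsi security show
(lists each configured initiator with its authentication method)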


iSCSI Bidirectional CHAP Authentication


Set Windows CHAP secret

Set CHAP secret


from switch -o

On the storage system:


system> iscsi security add
-i iqn.1991-05.com.microsoft:win
-s CHAP
-n iqn.1991-05.com.microsoft:win
-p thisismysecret
-o thisismysecret2
-m iqn.1991-05.com.microsoft:win
2009 NetApp. All rights reserved.

ISCSI BIDIRECTIONAL CHAP AUTHENTICATION


To add bidirectional (inbound and outbound) authentication to the Microsoft software
initiator, check mutual authentication on the Advanced Settings dialog (see the previous
page) and then add the CHAP secret given with switch -o of the iscsi security add command.
NOTE: The user name is the same as the WWN, but the shared secret is different. This is
because the user name and password combination cannot be the same for inbound and outbound
settings on a storage system.


Windows/NIC Implementation (Cont.)


6. Discovered in the iSCSI Initiator Properties dialog
[Screenshot: the storage system is discovered, and both target portals are
discovered (new in Windows Server 2008 R2)]

WINDOWS/NIC IMPLEMENTATION (CONT.)


Binding
iSCSI binding, or logging on, is the process of creating a session between an
initiator and a target
Persistent binding ensures that an initiator binds to a target after a reboot
of the initiator OS

BINDING


Windows/NIC Implementation (Cont.)


Targets in the iSCSI Initiator Properties Dialog
Storage system
is discovered
Click here to
connect

Best
practice:
Check
both

To change the interface


that is used with which
to connect
2009 NetApp. All rights reserved.

WINDOWS/NIC IMPLEMENTATION (CONT.)


Windows/NIC Implementation (Cont.)


Connection in the iSCSI Initiator Properties dialog
NOTE: A console message appears on the storage system:
[system: iscsi.notice:notice]: ISCSI: New session from initiator
iqn.1991-05.com.microsoft:dev05s2.development.netappu.com at IP addr
10.254.144.75

WINDOWS/NIC IMPLEMENTATION (CONT.)


Windows/NIC Implementation (Cont.)


Disconnect in the iSCSI Initiator Properties dialog
[Screenshot: select the target first, then select the session and click
Disconnect; this can also disconnect all sessions for a particular target]

WINDOWS/NIC IMPLEMENTATION (CONT.)


To disconnect, select the session from the target Properties dialog and click Disconnect.


Windows/NIC Implementation (Cont.)


iSCSI persistent binding


WINDOWS/NIC IMPLEMENTATION (CONT.)


While logging into a target, an administrator may optionally set the target as a favorite target
that automatically logs in when the local host is booted.
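From iscsicli, the persistent (favorite) targets can be reviewed with the ListPersistentTargets command covered later in this module; a minimal sketch (output paraphrased):

> listpersistenttargets
(lists each target that will be logged in to automatically at boot)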


Command-Line Interface Commands


Windows Server 2008 provides fully functional
command-line interface (CLI)
Important for:
Windows Server 2008 Server Core
Scripting
Windows Server 2008 Server Core is a minimal server
installation option; provides:
Low-maintenance
Limited attack surface
Limited functionality
Command prompt interface
NOTE: In the exercise environment, we will use PowerShell
2009 NetApp. All rights reserved.

COMMAND-LINE INTERFACE COMMANDS


iscsicli Command
Start the iscsicli utility:

PS C:\Users\Administrator> iscsicli
Microsoft iSCSI Initiator Version 6.1 Build 7600
[iqn.1991-05.com.microsoft:win] Enter command or ^C to exit

Verify the initiator:

> listinitiators
Initiators List:
Root\ISCSIPRT\0000_0

ISCSICLI COMMAND


iscsicli Command (Cont.)

Add target portals:
> qaddtargetportal 10.254.144.75
The operation completed successfully.

List target portals:
> listtargetportals
Total of 1 portals are persisted:
    Address and Socket    : 10.254.144.75 3260
    Symbolic Name         :
    Initiator Name        :
    Port Number           : <Any Port>
    Security Flags        : 0x0
    Version               : 0
    Information Specified : 0x0
    Login Flags           : 0x0

ISCSICLI COMMAND (CONT.)


iscsicli Command (Cont.)

List targets:
> listtargets
Targets List:
    iqn.1992-08.com.netapp:system

Verify the discovery method:
> targetinfo iqn.1992-08.com.netapp:system
Discovery Mechanisms :
    "SendTargets:*10.254.144.75 0003260 Root\ISCSIPRT\0000_0


ISCSICLI COMMAND (CONT.)


iscsicli Command (Cont.)

Log in to the target (creates the first connection):
> qlogintarget iqn.1992-08.com.netapp:system
Session Id is 0xfffffa8002936018-0x4000013700000004
Connection Id is 0xfffffa8002936018-0x4

Verify the discovery method:
> targetinfo iqn.1992-08.com.netapp:system
Discovery Mechanisms :
    "SendTargets:*10.254.144.75 0003260 Root\ISCSIPRT\0000_0


ISCSICLI COMMAND (CONT.)


iscsicli Command (Cont.)

List session information:
> sessionlist
Total of 1 sessions

Session Id          : fffffa8002936018-4000013700000004
Initiator Node Name : iqn.1991-05.com.microsoft:win
Target Node Name    : (null)
Target Name         : iqn.1992-08.com.netapp:system
ISID                : 40 00 01 37 00 00
TSID                : 06 00
Number Connections  : 1

Connections:
    Connection Id    : fffffa8002936018-4
    Initiator Portal : 0.0.0.0/3780
    Target Portal    : 10.254.144.75/3260
    CID              : 01 00


ISCSICLI COMMAND (CONT.)


iscsicli Command (Cont.)

Additional commands available:
 Session management:
   LogoutTarget
   PersistentLoginTarget
   ListPersistentTargets
   RemovePersistentTarget
   ChapSecret...
 Connection management:
   AddConnection
   RemoveConnection
 Other commands are also available; see the sketch below.
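For example, an illustrative sketch that reuses the session ID from the sessionlist output above to tear the session down, and then checks the persistent-target list:

> logouttarget fffffa8002936018-4000013700000004
> listpersistenttargets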



ISCSICLI COMMAND (CONT.)


Multipath Techniques


MULTIPATH TECHNIQUES


Multiple Paths to a LUN

Diagram: a Windows Server 2008 R2 host with multiple iSCSI paths to interfaces
e0a, e0b, and e0c on Storage System 1 and Storage System 2.
NOTE: e0a is used for management access only on storage systems without e0M.

MULTIPLE PATHS TO A LUN


Multipath Access
Symmetric: all paths are favored (optimized); for example, two 10-Gb paths to
the LUN.
Asymmetric: only certain paths are favored (optimized); for example, a 10-Gb
path and a 1-Gb path to the LUN.


MULTIPATH ACCESS


iSCSI Optimized/Non-optimized Paths

The iSCSI protocol generally doesn't require ALUA; the mechanism for
determining the optimized path is already present.
 Always verify whether ALUA is supported by using the Interoperability Matrix.

However, you can set the path priority:

Enable ALUA on the igroup associated with the initiator(s):
system> igroup set igroup1 alua yes

Set the path priority:
system> iscsi tpgroup alua set e0c optimized
system> iscsi tpgroup alua set e0b non-optimized
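To confirm that the setting took effect (a quick check, assuming the igroup name used above), the verbose igroup listing includes the ALUA state:

system> igroup show -v igroup1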


ISCSI OPTIMIZED/NON-OPTIMIZED PATHS


Techniques for Multipath iSCSI

At the initiator:
 Link aggregation
 Multiple Connections per Session (MCS)
 Multipathing technology layer, or Multipath I/O (MPIO)

At the target:
 Link aggregation (interface groups); see the Data ONTAP Fundamentals course
 Target portal groups (TPGs), which are related to sessions and connections
 Setting optimized or non-optimized paths

TECHNIQUES FOR MULTIPATH ISCSI


Link Aggregation
Implemented by the IEEE 802.3ad specification; called teaming, channel
bonding, and interface groups.
Pros:
 Supports all network protocols
Cons:
 Not supported with the Microsoft iSCSI software initiator
 The same path is often used to avoid out-of-order delivery
Diagram: one TCP connection travels via two paths; the host stack runs from
the disk-class driver through the SCSI layer, iSCSI initiator, and TCP/IP to
a NIC driver (GbE) and a teaming driver that bonds two NICs to the storage
system.


LINK AGGREGATION
Host-side NIC Teaming (802.3ad) is supported by NetApp on many non-Windows hosts.


Multiple Connections per Session

MCS creates multiple paths starting at the iSCSI session layer.
Pros:
 Supported by the Microsoft software initiator 2.0 and later
 No dependency on the Ethernet infrastructure
Cons:
 Not supported by iSCSI HBAs
 Both the initiator and the target must support MCS
Diagram: the host stack runs from the disk-class driver through the SCSI
layer to a single iSCSI initiator, which drives two TCP/IP stacks, NIC
drivers, and NICs to the storage system.


MULTIPLE CONNECTIONS PER SESSION


Multiple Connections per Session (MCS): MCS creates multiple paths starting at the
iSCSI session layer of the storage stack. Both the iSCSI initiator (host) and the iSCSI target
(controller) need to support multi-connection sessions in order to configure sessions with
multiple connections. MCS requires a software initiator. MCS should not be confused with
the Microsoft Cluster Service or Microsoft Consulting Services.
See Appendix 4 for a discussion of MCS.


Multipathing Technology Layer (MPIO)

The classic method of multipathing.
Pros:
 Achieves multipath with FC and iSCSI
 Supports multiple load-balancing algorithms
 Supports software and hardware initiators
Cons:
 Usually requires add-on software (a Device Specific Module)
Diagram: the host stack inserts a multipathing technology layer between the
disk-class driver and the SCSI layer; below it, two iSCSI initiator, TCP/IP,
NIC driver, and NIC stacks lead to the storage system.


MULTIPATHING TECHNOLOGY LAYER (MPIO)


Multipath Input/Output (MPIO): The "classic" way to do multipathing is to insert a separate
multipathing layer into the storage stack. This method is not specific to iSCSI or to any
underlying transport, and is the standard way to achieve multipathing access to Fibre Channel
and even parallel SCSI targets. There are multiple implementations of this type of
multipathing on the various operating systems. The MPIO infrastructure offered by Microsoft
is the standard way to do this on Windows Server technologies. With the Microsoft MPIO,
each storage vendor supplies a device-specific module for its storage array.


ONTAP DSM: In the past, the ONTAP DSM (previously called the NTAP DSM) was
bundled with SnapDrive. Beginning with SnapDrive 4.2, there is a separate
installer for the ONTAP DSM.
Microsoft iSCSI DSM: The Microsoft iSCSI DSM is supported in active-passive
and active-active modes. It requires a software initiator.
VERITAS DSM: The Veritas DSM is supported beginning with Windows Server 2003.


Target Portal Groups

Target portal group (TPG):
 One or more interfaces are assigned to a TPG
 Interfaces may be physical or virtual
 By default, one interface per TPG
One iSCSI session per TPG: when an initiator discovers a target, a session
will be established with the TPG.


TARGET PORTAL GROUPS


A target portal group is a set of one or more storage system network interfaces that can be
used for an iSCSI session between an initiator and a target. A target portal group is identified
by a name and a numeric tag.
For iSCSI sessions that use multiple connections, all of the connections must use interfaces in
the same target portal group. Each interface belongs to one and only one target portal group.
Interfaces can be physical interfaces or logical interfaces (VLANs and vifs).
Starting with Data ONTAP 7.1, you can explicitly create target portal groups and assign tag
values.
Because a session can use interfaces in only one target portal group, you may want to put all
of your interfaces in one large group. However, some initiators are also limited to one session
with a given target portal group. To support MPIO, you need to have one session per path,
and therefore more than one target portal group.
If you do not plan to use multi-connection iSCSI sessions, you do not need to create target
portal groups.
If you do plan to use multi-connection sessions, create a target portal group that contains all
of the interfaces you want to use for one iSCSI session. However, note that you cannot
combine iSCSI hardware-accelerated interfaces with standard iSCSI storage system interfaces
in the same target portal group.
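For instance, an illustrative sketch with a hypothetical group name: a user-defined target portal group containing two interfaces for an MCS session could be created and then verified as follows (tpgroup management is covered further in Appendix 4):

system> iscsi tpgroup create tpg_mcs e0b e0c
system> iscsi tpgroup show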


Connections Versus Sessions

Four sessions (4 TPGs) with one connection (1 interface) each; this is the
configuration used by Microsoft's Multipath I/O (MPIO):
SESSION 64 - CONN 64/1
SESSION 65 - CONN 65/1
SESSION 66 - CONN 66/1
SESSION 67 - CONN 67/1

One session (1 TPG) with four connections (4 interfaces); this is the
configuration used by Microsoft's Multiple Connections per Session (MCS):
SESSION 68 - CONN 68/1, CONN 68/2, CONN 68/3, CONN 68/4

CONNECTIONS VERSUS SESSIONS


The diagram provides two examples illustrating sessions and connections, and how they
relate to each other.
The top example shows four sessions each with one connection. Microsoft MPIO would use a
session and connection configuration like this. The MPIO software manages how the sessions
are used to move data between the host and the target LUNs.
The bottom example shows four connections within a single session. Both examples have
four connections. The difference in the examples is in the number of sessions.


Target Portal Groups (Cont.)

List target portal groups:
system> iscsi tpgroup show
TPGTag  Name         Member Interfaces
1000    e0a_default  e0a
1001    e0b_default  e0b
1002    e0c_default  e0c
1003    e0d_default  e0d

There is no need to change the defaults unless you are supporting MCS (see
Appendix 4).


TARGET PORTAL GROUPS (CONT.)


Multipath I/O


MULTIPATH I/O


Windows Multipath iSCSI Implementation

Windows supports two techniques of iSCSI multipathing:
 Device Specific Module (DSM) for Multipath I/O
 MCS (see Appendix 4)
Windows requires an initiator that can be either:
 An iSCSI HBA initiator
 A software initiator
The Microsoft software initiator is generally used:
 Windows Server 2003: you must install the software initiator
 Windows Server 2008: the software initiator is preinstalled
Storage management software available for Windows:
 Native Windows Disk Management
 Veritas Storage Foundation
This course focuses on an MPIO implementation that uses native Disk
Management with the Microsoft software initiator.

WINDOWS MULTIPATH ISCSI IMPLEMENTATION


iSCSI MPIO on Windows

To enable iSCSI MPIO on Windows Server:
1. Install and/or enable NICs on the initiator and target
2. Verify IP connectivity
3. Add the Multipath I/O feature
4. Install the NetApp DSM
5. Allow the Microsoft software initiator to discover the target portals
6. Create two sessions (different target portal groups) with the target
7. Verify multiple paths (see the sketch below)
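One way to perform step 7 from the command line, sketched here on the assumption that the mpclaim tool installed with the Windows Server 2008 R2 Multipath I/O feature is available:

PS C:\> mpclaim -s -d      # summary of all MPIO disks
PS C:\> mpclaim -s -d 0    # per-path detail for MPIO disk 0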

ISCSI MPIO ON WINDOWS


Add Multipath I/O Feature

(Screenshot) NOTE: No reboot is needed in R2.


ADD MULTIPATH I/O FEATURE


Windows Native Multipath I/O (Cont.)

Out of the box, native multipath I/O comes with a generic Device Specific
Module (DSM). NetApp recommends installing the NetApp Data ONTAP DSM; verify
the recommended version with the Interoperability Matrix.


WINDOWS NATIVE MULTIPATH I/O (CONT.)


NetApp Data ONTAP DSM 3.3.1

Features:
 Supports Windows Server 2008 R2
 Supports multiple load-balancing policies
 Supports claiming iSCSI LUNs
 Coexists with other DSMs
 Multiprotocol LUN support (simultaneous iSCSI and FC paths to the same LUN)
Requirements:
 Windows Server 2003, 2008, or 2008 R2
 Data ONTAP 7.2.2 or later

NETAPP DATA ONTAP DSM 3.3.1


First Session Created

The initiator has now connected with the first target portal (first session);
a screenshot callout shows where to click to add another session. The Target
Portal Group ID shown for the session corresponds to the TPG tag on the
storage system:

system> iscsi tpgroup show
TPGTag  Name         Member Interfaces
1000    e0a_default  e0a
1001    e0b_default  e0b
1002    e0c_default  e0c
1003    e0d_default  e0d

FIRST SESSION CREATED


Connect to the Second Portal

Connect to the second portal, using a different initiator IP and target
portal IP than the first session. This functionality is available because
multiple target portal groups were identified during discovery.

CONNECT TO THE SECOND PORTAL


Second Session Created

The initiator has now connected with both target portals (second session).
The target portal group ID again corresponds to the TPG tag:

system> iscsi tpgroup show
TPGTag  Name         Member Interfaces
1000    e0a_default  e0a
1001    e0b_default  e0b
1002    e0c_default  e0c
1003    e0d_default  e0d

SECOND SESSION CREATED


Module Summary


MODULE SUMMARY


Module Summary
In this module, you should have learned to:
 Describe multiple-path implementation with iSCSI connectivity
 Configure network ports on Windows and NetApp systems
 Identify the worldwide node name (WWN) on Windows and NetApp systems
 Set up and verify multiple-path iSCSI connectivity between Windows and
  NetApp systems

MODULE SUMMARY


Exercise
Module 3: Windows
iSCSI Connectivity
Estimated Time: 30 minutes

EXERCISE
Please refer to your Exercise Guide materials for more instructions.


Windows LUN
Access
Module 4
SAN Implementation Workshop

WINDOWS LUN ACCESS


Module Objectives
By the end of this module, you should be able to:
Describe the steps to allow a Windows Server 2008 R2 initiator to access a
LUN on a storage system


MODULE OBJECTIVES


LUN Access Overview


LUN ACCESS OVERVIEW


LUN Access
To connect an initiator to a target's LUN:
1. Create an igroup if necessary
2. Create the LUN
3. Map the LUN to the igroup
4. Find the LUN on the initiator
5. Prepare the LUN as a new disk on the initiator


LUN ACCESS


1. Create an igroup
Diagram: an initiator connects to the target over Ethernet (iSCSI) or Fibre
Channel (FC). Two example igroups on the target:
  My_IP_igroup: iqn.1999-04.com.a:system (OS type: Windows)
  My_FC_igroup: 21:00:00:2b:34:26:a6:56 (OS type: Windows)
Place the WWN (IQN or eui) in igroups for iSCSI; place the WWPN in igroups
for FC.

1. CREATE AN IGROUP
If necessary, create an igroup to provide access to a LUN.
Initiator groups (igroups) are tables of host identifiers (FC WWPNs or iSCSI WWNs) that are
used to control access to LUNs. Typically, you want all of the host's host bus adapters
(HBAs) or software initiators to have access to a LUN. If you are using multipathing
software or have clustered hosts, each HBA or software initiator of each clustered host needs
redundant paths to the same LUN.
You can create igroups that specify which initiators have access to the LUNs either before or
after you create LUNs, but you must create igroups before you can map a LUN to an igroup.
Initiator groups can have multiple initiators, and multiple igroups can have the same initiator.
However, you cannot map a LUN to multiple igroups that have the same initiator.
NOTE: An initiator cannot be a member of igroups of differing operating systems types
(ostypes). Also, a given igroup can be used for FC or iSCSI, but not both.


Steps to Create an igroup

1. Optionally, verify initiator connectivity:
   fcp show initiators
   iscsi initiator show

2. Create the igroup and place the initiators into the igroup:
   igroup create {-i|-f} -t ostype igroup_name [node ...]
    -i = iSCSI igroup
    -f = FC igroup
    ostype = solaris, windows, hpux, aix, linux, netware, vmware
    node:
      an iSCSI igroup takes a worldwide node name (WWN: an IQN or eui)
      an FC igroup takes a worldwide port name (WWPN, which may be aliased)

3. Verify the igroup:
   igroup show

STEPS TO CREATE AN IGROUP


Use the igroup create command to configure an igroup on a storage system. Note that you
add nodes to the igroup and, therefore, the optional step of listing the currently connected
initiators is provided in the first step.


Data ONTAP Configuration

Add WWPNs to the igroup:
system> igroup create -f -t windows iWIN_fcp
system> igroup add iWIN_fcp WIN1-FC WIN2-FC

Verify the igroup:
system> igroup show -v
iWIN_fcp (FCP) (ostype: windows):
    10:00:00:00:c9:6b:77:b3 (logged in on: 0d, 0c)
        WWPN Alias(es): WIN1-FC
    10:00:00:00:c9:6b:77:b4 (logged in on: 0d, 0c)
        WWPN Alias(es): WIN2-FC

NOTE: The paths on which each initiator is connected are displayed.

DATA ONTAP CONFIGURATION


2. Create a Logical Unit


Diagram: an initiator connects to the target over Ethernet or Fibre Channel.
LUNs may exist in any volume or qtree with no NAS data. In the example, LUNa
exists in vol1 in aggr1 (/vol/vol1/LUNa.lun), and LUNb exists in vol2 in
aggr1 (/vol/vol2/LUNb.lun).


2. CREATE A LOGICAL UNIT

LUNs may exist in any volume or qtree. When creating traditional or flexible
volumes that contain LUNs, follow these guidelines:
 Do not create any LUNs in the system's root volume. Data ONTAP uses this
  volume to administer the storage system. The default root volume is
  /vol/vol0.
 Ensure that no other files or directories exist in a volume that contains a
  LUN. If this is not possible and you are storing LUNs and files in the same
  volume, use a separate qtree to contain the LUNs.
 If multiple hosts share the same volume, create a qtree on the volume to
  store all LUNs for the same host. This is a recommended best practice that
  simplifies LUN administration and tracking.
 Ensure that the volume option create_ucode is set to on (vol options
  <volname> create_ucode on). Data ONTAP requires that the path of a volume
  or qtree containing a LUN be in the Unicode format. This option is off by
  default when you create a volume. It is important to enable this option for
  volumes that will contain LUNs.
 To simplify management, use naming conventions for LUNs and volumes that
  reflect their ownership or the way that they are used.


Steps to Create a LUN

Create the aggregate for the LUN:
system> aggr create aggr_SAN 7

Create the volume for the LUN:
system> vol create vol_SAN0 aggr_SAN 10g

Set the Snapshot policy for the volume (more on this in Module 13):
system> snap reserve vol_SAN0 0
system> vol options vol_SAN0 nosnap on

Optional: Create a qtree for the LUN:
system> qtree create /vol/vol_SAN0/qtSAN0

STEPS TO CREATE A LUN


Steps to Create a LUN (Cont.)

1. Create a LUN:
   lun create -s size -t ostype lun_path
    size = in bytes by default
      Use m for megabytes
      Use g for gigabytes
      NOTE: LUN sizing is discussed in detail in Module 13
    ostype = solaris, windows, hpux, aix, linux, netware, vmware,
      windows_gpt, or windows_2008
    lun_path
      The LUN path begins with /vol/{VolumeName}/[qtreeName]
      The last portion of the path is the LUN name
      Example: /vol/vol_SAN0/qtSAN0/lun0

2. Verify the LUN:
   lun show
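Putting the syntax together with the volume and qtree created on the previous slide, an illustrative sketch:

system> lun create -s 5g -t windows_2008 /vol/vol_SAN0/qtSAN0/lun0
system> lun show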

STEPS TO CREATE A LUN (CONT.)

Use the lun create command to create a LUN.
The host ostype indicates the type of operating system running on the host
that accesses the LUN, which also determines the following:
 Geometry used to access data on the LUN
 Minimum LUN sizes
 Layout of data for multiprotocol access


3. Map a Logical Unit to an igroup

NOTE: This step is also called LUN masking.
Diagram: map a logical unit to an igroup by assigning a logical unit number.
In the example, LUNa is mapped to My_IP_igroup (iqn.1999-04.com.a:system, OS
type: Windows), and LUNb is mapped to My_FC_igroup (21:00:00:2b:34:26:a6:56,
OS type: Windows) with LUN ID 2.


3. MAP A LOGICAL UNIT TO AN IGROUP


When you map the LUN to the igroup, you grant the initiators in the igroup access to the
LUN. If you do not map a LUN, the LUN is not accessible to any hosts. Data ONTAP
maintains a separate LUN map for each igroup to support a large number of hosts and to
enforce access control.


Steps to Mask the LUN

1. Map a LUN to an igroup:
   lun map lun_path igroup_name [lun_id]
    lun_path = path name of a LUN
    igroup_name = name of an initiator group
    lun_id = unique identification number that the initiator uses when the
      LUN is mapped to it; if not entered, it is automatically assigned
   Example:
   lun map /vol/vol1/qtree1/luna My_IP_igroup 1

2. Verify the LUN mapping:
   lun show -m
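Continuing the example above, an illustrative sketch of the verification step (the exact output columns vary by Data ONTAP release):

system> lun map /vol/vol1/qtree1/luna My_IP_igroup 1
system> lun show -m
LUN path                 Mapped to       LUN ID   Protocol
-----------------------------------------------------------
/vol/vol1/qtree1/luna    My_IP_igroup         1      iSCSI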


STEPS TO MASK THE LUN


Use the lun map command to map an igroup to a LUN.
You map a LUN to an igroup by specifying the following attributes:
LUN NAME
Specify the path name of the LUN to be mapped.
INITIATOR GROUP
Specify the name of the igroup that contains the hosts that will access the LUN.
LUN ID
Assign a number for the LUN ID, or accept the default LUN ID. Typically, the default LUN
ID begins with 0 and increases in increments of one as each additional LUN is created. The
host associates the LUN ID with the location and path name of the LUN. The range of valid
LUN ID numbers depends on the host. For detailed information, see the documentation
provided with your host utilities kit.


LUN Creation
Using Wizards


LUN CREATION USING WIZARDS


LUN Creation Using Wizards

You can ease the creation of LUNs and igroups by using wizards.
Administrators can use:
 The lun setup command
 SnapDrive (discussed later in this module and with the Linux material)
 NetApp System Manager (discussed later in this course)
 Provisioning Manager


LUN CREATION USING WIZARDS


lun setup Command

LUN creation, optional igroup creation, and LUN-to-igroup mapping may be
accomplished with a single command: lun setup. This wizard-like command
prompts the user for the relevant information. The result of the command is a
newly created LUN mapped to a new or existing igroup.


LUN SETUP COMMAND


The lun setup command prompts you through the process of creating a LUN, creating an
igroup, and mapping the LUN to an igroup. The volume where the LUN will reside must be
created before running lun setup.


Creating a LUN with lun setup

system> lun setup
This setup will take you through the steps needed to
create LUNs and to make them accessible by initiators.
You can type ^C (Control-C) at any time to abort the
setup and no unconfirmed changes will be made to the
system.
Do you want to create a LUN? [y]: y
Multiprotocol type of LUN
(solaris/windows/hpux/aix/linux/netware/vmware/
windows_gpt/windows_2008) [linux]: windows
A LUN path must be absolute. A LUN can only reside in a
volume or qtree root. For example, to create a LUN with
name "lun0" in the qtree root /vol/vol1/q0, specify the
path as "/vol/vol1/q0/lun0".
Enter LUN path: /vol/SAN/lun1

CREATING A LUN WITH LUN SETUP


Multiprotocol type of LUN: Specify the OS that will be accessing the LUN.
LUN path: Specify the name of the LUN and where it will be located. This is referred to as
the long_lun_path (example: /vol/SAN/lun1).


Creating a LUN with lun setup (Cont.)


A LUN can be created with or without space reservations
being enabled.
Space reservation guarantees that data writes to that
LUN will never fail.
Do you want the LUN to be space reserved? [y]: y
Size for a LUN is specified in bytes. You can use
single-character multiplier suffixes: b(sectors), k(KB),
m(MB), g(GB) or t(TB).
Enter LUN size: 1g
You can add a comment string to describe the contents of
the LUN.
Please type a string (without quotes), or hit ENTER if
you don't want to supply a comment.
Enter comment string: Windows LUN

CREATING A LUN WITH LUN SETUP (CONT.)


Space reservations: Specify whether you want the LUN created with space reservations
enabled. By default, space reservations are enabled.
LUN size: Specify the size of the LUN.
Comment: Create a comment or a brief description about the LUN.


Creating a LUN with lun setup (Cont.)


The LUN will be accessible to an initiator group. You
can use an existing group name, or supply a new name to
create a new initiator group. Enter '?' to see existing
initiator group names.
Name of initiator group []: ?
No existing initiator groups.
Name of initiator group []: iWIN_fcp
Type of initiator group iWIN_fcp (FCP/iSCSI) [FCP]: FCP


CREATING A LUN WITH LUN SETUP (CONT.)


Initiator group name: Create or specify an igroup.
Type of initiator group: If you entered a new igroup name, specify which protocol will be
used by the hosts in the igroup.


Creating a LUN with lun setup (Cont.)

A Fibre Channel Protocol (FCP) initiator group is
a collection of initiator port names. Each port
name (WWPN) is 16 hexadecimal digits, separated
(only) by optional colon (:) characters. You can
separate port names by commas. Enter '?' to
display a list of connected initiators. Hit ENTER
when you are done adding port names to this group.
Enter comma separated portnames: ?
Initiators connected on adapter 0c:
Portname                   Group
10:00:00:00:c9:2d:9f:76
10:00:00:00:c9:2d:a0:63


CREATING A LUN WITH LUN SETUP (CONT.)


FCP: Entering a question mark (?) for portname will display all the initiators connected by
way of FCP. This includes initiators on any hosts accessed using FCP.
iSCSI: The (?) will not display iSCSI initiators that are connected. To see the connected
iSCSI initiators, run the command iscsi initiator show on the storage system.


Creating a LUN with lun setup (Cont.)


Enter comma separated portnames:
10:00:00:00:c9:2d:9f:76
Enter comma separated portnames: <CR>
The initiator group has an associated OS type. The
following are currently supported: solaris,
windows, hpux, aix, linux, netware or vmware.
OS type of initiator group "iWIN_fcp" [windows]:
windows
The LUN will be accessible to all the initiators
in the initiator group. Enter '?' to display LUNs
already in use by one or more initiators in group
"iWIN_fcp".
LUN ID at which initiator group "iWIN_fcp" sees
"/vol/SAN/lun1" [0]: 1

CREATING A LUN WITH LUN SETUP (CONT.)


Enter port names: Enter the port names for initiators you want to include in your initiator
group.
Select an OS type for the initiator group: OS type governs the finer details of SCSI protocol
interaction with the initiators, including cluster failover behavior.
LUN ID: LUN ID allows LUN to igroup mapping. The lun setup command will use the next
available ID by default.


Creating a LUN with lun setup (Cont.)

LUN Path                : /vol/SAN/lun1
OS Type                 : windows
Size                    : 1.0g (1077511680)
Comment                 : Windows LUN
Initiator Group         : iWIN_fcp
Initiator Group Type    : FCP
Initiator Group Members : 10:00:00:00:c9:2d:9f:76
Mapped to LUN-ID        : 1

Do you want to accept this configuration? [y]: y

Do you want to create another LUN? [n]: n


CREATING A LUN WITH LUN SETUP (CONT.)


Verify the configuration: Verify the configuration you have entered, and then accept or reject
the configuration by typing y (yes) or n (no). Select n when asked if you want to create
another LUN in order to exit lun setup.


Windows Setup


WINDOWS SETUP


Windows Steps
To connect an initiator to a target's LUN:
1. Create an igroup
2. Create the LUN
3. Map the LUN to the igroup
4. Find the LUN on the initiator
5. Prepare the LUN as a new disk on the initiator


WINDOWS STEPS


4. Find the LUN on Windows

Using the Disk Management tool, rescan disks: select Disk Management, then
right-click and choose Rescan Disks.

4. FIND THE LUN ON WINDOWS

On a Windows host, you must partition and format any new LUN. To perform
these tasks on Windows, use the Disk Management tool. First, access Computer
Management by right-clicking My Computer and selecting Manage. Select Disk
Management.
NOTE: The Disk Management tool will see and treat the LUN as though it is a
local disk.
Within Computer Management, expand Storage and double-click Disk Management.
In order for Disk Management to discover the new LUNs (virtual disks), select
Action > Rescan Disks. From the Action menu, you can see tasks that may be
performed on the new disk(s).


4. Find the LUN on Windows (Cont.)

(Screenshot) The LUN appears; the LUN is offline.


4. FIND THE LUN ON WINDOWS (CONT.)


5. Preparing the LUN for Windows

Bring the LUN online: right-click and select Online.
NOTE: The LUN is not initialized.

5. PREPARING THE LUN FOR WINDOWS


The disk must be brought online before we can use it. To bring the disk online, right-click
Disk # and select Online.


5. Preparing the LUN for Windows (Cont.)

Initialize the LUN in Windows Server 2008: right-click and select Initialize.
NOTE: MBR is the default.


5. PREPARING THE LUN FOR WINDOWS (CONT.)

The disk must be initialized before it is formatted and partitioned. To
initialize a disk, right-click Disk # and select Initialize. The
administrator may select either a Master Boot Record (MBR) or a GUID
Partition Table (GPT).


Partition Styles
Master Boot Record (MBR):
 Traditional style
 Uses a partition table on the first sector of the disk
 Supports up to 2 TB unless dynamic disks are used
 Two partition types:
   Primary: used to format and mount directly
   Extended: used to create logical drives (format and mount logical drives)

GUID Partition Table (GPT):
 Supported in all versions of Windows Server 2008 (default for 64-bit
  versions)
 Partition data is stored in redundant primary and backup tables
 Uses a CRC32 checksum to verify partition data between tables
 Supports up to 18 EB and up to 128 partitions per disk
 Cannot be used on removable disks


PARTITION STYLES
To convert from MBR to GPT, use convert gpt.
To convert from GPT to MBR, use convert mbr.
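These are DiskPart commands; a minimal sketch, assuming the target is the empty disk 2 used in the CLI examples later in this module:

DISKPART> select disk 2
DISKPART> convert gpt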


5. Preparing the LUN for Windows (Cont.)

The LUN is now ready for provisioning. Windows administrators may use either:
 Disk Management's New Simple Volume Wizard in Windows Server 2008
 Disk Management's Create Partition Wizard in Windows Server 2003
 Share and Storage Management
   Available on Windows Server 2008 or later
   One location for administering shares and storage
   We will examine this tool in this module
 Storage Manager for SANs (see appendix)
   Available on Windows Server 2003 R2 or later
   Requires the Data ONTAP VDS Hardware Provider, downloadable at the NOW
    site

5. PREPARING THE LUN FOR WINDOWS (CONT.)


The LUN is now ready for provisioning. Administrators may use one of several possible
tools to configure the LUN.
DISK MANAGEMENT
Disk Management is the traditional method for provisioning LUNs. In Windows Server 2003,
the Create Partition Wizard is available while in Windows Server 2008, the same wizard has
been renamed as the New Simple Volume Wizard.
SHARE AND STORAGE MANAGEMENT
The Share and Storage Management tool is only available in Windows Server 2008 and later.
Later in this presentation, we will review the Provision Storage Wizard within the new Share
and Storage Management tool.
STORAGE MANAGER FOR SANS
Storage Manager for SANs helps you create and manage LUNs on Fibre Channel and iSCSI
disk drive subsystems that support Virtual Disk Service (VDS) in your storage area network
(SAN). See Appendix 5 for a discussion of using Storage Manager for SANs.


Basic or Dynamic Disks

Windows allows administrators to treat a LUN as a single disk or to combine
it with other dynamic disks using a volume manager.
Basic disks:
 Are like other disks
 Used to manage a LUN as a single disk
Dynamic disk types:
 Spanned
 Striped
 Mirrored
 RAID-5
NOTE: We will first examine the basic disk configuration and then look at a
spanned dynamic disk configuration.

BASIC OR DYNAMIC DISKS


5. Preparing the LUN for Windows (Cont.)

Windows Server 2008 has Disk Management's New Simple Volume Wizard:
right-click and select New Simple Volume, and the wizard will launch.


5. PREPARING THE LUN FOR WINDOWS (CONT.)


In Windows Server 2003, to partition the disk, select Action > All Tasks > Create Partition or
right-click the unallocated box and select Create Partition.
In Windows Server 2008, to partition the disk, select Action > All Tasks > New Simple
Volume or right-click the unallocated box and select New Simple Volume.
After the disk is initialized, partitioned, and formatted, you should test access to the disk by
navigating to it and creating a file.
NOTE: The presentation displays the Windows Server 2008 version of the wizard. There are
only cosmetic changes between the Windows Server 2003 and Windows Server 2008 version.


5. Preparing the LUN for Windows (Cont.)

New Simple Volume Wizard (Cont.): specify the volume size, then specify the
method with which to mount the volume.

5. PREPARING THE LUN FOR WINDOWS (CONT.)


5. Preparing the LUN for Windows (Cont.)

New Simple Volume Wizard (Cont.): specify the format, then verify the
configuration and finish.

5. PREPARING THE LUN FOR WINDOWS (CONT.)


5. Preparing the LUN for Windows (Cont.)

Alternatively, a LUN may be formatted through Share and Storage Management in
Windows Server 2008.


5. PREPARING THE LUN FOR WINDOWS (CONT.)


The Share and Storage Management Tool is new in Windows Server 2008. This one tool
allows administrators to provision storage and share it to network users through a single
interface.


5. Preparing the LUN for Windows (Cont.)

Navigate to the Volumes tab to investigate the current volume configuration,
then select Provision Storage to configure the LUN.


5. PREPARING THE LUN FOR WINDOWS (CONT.)


The Provision Storage wizard is an alternative to the New Simple Volume Wizard of
Windows Server 2008 or the Create Partition Wizard of Windows Server 2003.


5. Preparing the LUN for Windows (Cont.)

Because our LUN is identified as a disk, select that option, and then click
Next.

5. PREPARING THE LUN FOR WINDOWS (CONT.)


5. Preparing the LUN for Windows (Cont.)

Select the appropriate disk, and then select Next.


5. PREPARING THE LUN FOR WINDOWS (CONT.)


5. Preparing the LUN for Windows (Cont.)

Determine the size of your volume, and select Next.


5. PREPARING THE LUN FOR WINDOWS (CONT.)


5. Preparing the LUN for Windows (Cont.)

Determine how to mount the new volume, and select Next.


5. PREPARING THE LUN FOR WINDOWS (CONT.)


5. Preparing the LUN for Windows (Cont.)

Determine whether to format the volume, and then select Next.


5. PREPARING THE LUN FOR WINDOWS (CONT.)


5. Preparing the LUN for Windows (Cont.)

Review, and if correct, select Create.


5. PREPARING THE LUN FOR WINDOWS (CONT.)


5. Preparing the LUN for Windows (Cont.)

NOTE: You may want to share out the new volume.


5. PREPARING THE LUN FOR WINDOWS (CONT.)


NOTE: The Provision Storage Wizard can launch the Share Storage Wizard after
completing.


5. Preparing the LUN for Windows (Cont.)


The new volume is now available for use


5. PREPARING THE LUN FOR WINDOWS (CONT.)


4. Find the LUN with CLI

Start the DiskPart utility:
PS C:\Users\Administrator> diskpart

Rescan for disks:
DISKPART> rescan

List the disks currently visible:
DISKPART> list disk

  Disk ###  Status   Size     Free     Dyn  Gpt
  --------  -------  -------  -------  ---  ---
  Disk 0    Online     68 GB      0 B
  Disk 1    Online   2055 MB  1088 KB   *
  Disk 2    Offline  2055 MB  2055 MB

The discovered LUN (Disk 2) is offline.


4. FIND THE LUN WITH CLI


5. Preparing the LUN with CLI

Select the offline disk (the LUN):
DISKPART> select disk 2

Online the disk:
DISKPART> online disk

List details of the selected disk:
DISKPART> detail disk

NETAPP LUN Multi-Path Disk Device
Disk ID: 00000000
Type   : iSCSI
Status : Online
Path   : 0
Target : 0
LUN ID : 1
Location Path : UNAVAILABLE
Current Read-only State : Yes
Read-only : Yes
Boot Disk : No
Pagefile Disk : No
Hibernation File Disk : No
Crashdump Disk : No
Clustered Disk : No

There are no volumes.

NOTE: The LUN defaults to read-only.


5. PREPARING THE LUN WITH CLI


5. Preparing the LUN with CLI (Cont.)

Make the disk writable:
DISKPART> attributes disk clear readonly

Create a primary partition:
DISKPART> create partition primary

List partitions and identify the partition number:
DISKPART> list partition

  Partition ###  Type     Size     Offset
  -------------  -------  -------  -------
  Partition 1    Primary  2054 MB    64 KB

5. PREPARING THE LUN WITH CLI (CONT.)


5. Preparing the LUN with CLI (Cont.)

Select partition 1:
DISKPART> select partition 1

List the details of the selected partition:
DISKPART> detail partition

Partition 1
Type  : 06
Hidden: No
Active: No
Offset in Bytes: 65536

  Volume ###  Ltr  Label  Fs  Type       Size     Status   Info
  ----------  ---  -----  --  ---------  -------  -------  --------
* Volume 4                    Partition  2054 MB  Healthy  Offline

NOTE: A volume has already been created.

5. PREPARING THE LUN WITH CLI (CONT.)


5. Preparing the LUN with CLI (Cont.)

Mount the disk (C:\Data\CLI is a pre-created folder):
DISKPART> assign mount=C:\Data\CLI

Format the partition:
DISKPART> format fs=NTFS label="My CLI Volume" quick

List volumes:
DISKPART> list volume

  Volume ###   Ltr  Label         Fs    Type       Size     Status    Info
  -----------  ---  ------------  ----  ---------  -------  --------  ----
  Volume 0     D                        DVD-ROM        0 B  No Media
  Volume 1                              Partition   100 MB  Healthy   Offline
  Volume 2     C                  NTFS  Partition    68 GB  Healthy   Boot
  Volume 3     E                  NTFS  Partition  2022 MB  Healthy
* Volume 4     C:\Data\CLI\       NTFS  Partition  2054 MB  Healthy

Exit DiskPart:
DISKPART> exit

5. PREPARING THE LUN WITH CLI (CONT.)


Dynamic Disks
In this example, two 5-GB LUNs were created and brought online: right-click
each and select Initialize.

DYNAMIC DISKS


Dynamic Disks (Cont.)

Convert the LUNs to dynamic disks: with the LUNs initialized, right-click and
select Convert to Dynamic Disk....

DYNAMIC DISKS (CONT.)


Dynamic Disks (Cont.)

Start the New Spanned Volume Wizard: with the LUNs now dynamic, right-click
and select New Spanned Volume....

DYNAMIC DISKS (CONT.)


Dynamic Disks (Cont.)

New Spanned Volume Wizard (Cont.): disk 2 is added to the spanned volume.


DYNAMIC DISKS (CONT.)


Dynamic Disks (Cont.)

New Spanned Volume Wizard (Cont.): with disk 2 selected, only 2 GB of the
total 5 GB is used; the total size of the spanned volume is shown.

DYNAMIC DISKS (CONT.)


Dynamic Disks (Cont.)

New Spanned Volume Wizard (Cont.): specify the mount point, and format the
volume.


DYNAMIC DISKS (CONT.)


Dynamic Disks (Cont.)

New Spanned Volume Wizard (Cont.): review the settings, and click to create
the spanned volume.


DYNAMIC DISKS (CONT.)


Dynamic Disks (Cont.)

New spanned volume created: volume E is a spanned volume.


DYNAMIC DISKS (CONT.)


Windows Stack
Diagram: the Windows storage stack. Mount points (the spanned volume E: and
volume F:) sit on NTFS file systems; the file systems are added on Disk 1 and
Disk 2, which are the formatted LUNs. The SCSI device paths are
Port 3,4 / Bus 0 / Target 0 / LUN 1
(SCSI\DISK&VEN_NETAPP&PROD_LUN\4&61E00BC&0&000001) and
Port 3,4 / Bus 0 / Target 0 / LUN 2
(SCSI\DISK&VEN_NETAPP&PROD_LUN\4&29BC0C71&0&000100).


WINDOWS STACK


File Services Role

After the LUN is available, the File Services role may be added to Windows
Server 2008. Roles are added through Server Manager.


FILE SERVICES ROLE

The File Services server role in the Windows Server 2008 operating system
provides technologies that help manage storage, enable file replication,
manage shared folders, ensure fast file searching, and enable access for UNIX
client computers. See Microsoft's TechNet Web site for more details:
http://technet.microsoft.com/en-us/library/cc730983(WS.10).aspx.


Exercise
Module 4: Windows LUN Access
Tasks 1-6
Estimated Time: 45 minutes

EXERCISE
Please refer to your Exercise Guide for more instruction.


SnapDrive for
Windows


SnapDrive
SnapDrive software provides:
 Simple storage provisioning of LUNs (virtual disks)
 Consistent data Snapshot copies
 Automation for backups and recoveries
SnapDrive is available for:
 Solaris
 Windows
 AIX
 HP-UX
 Linux
This course investigates SnapDrive 6.1 for Windows

SnapDrive for
Windows


SnapDrive 6.1 for Windows
 Features:
  Supports Windows Server 2008 R2
  Runs as a Windows service
  Communicates through RPC, HTTP, or HTTPS
   RPC requires CIFS to be set up
  Integrates with Performance Manager
  Supports VMware guest OS features such as VMotion
 Two interfaces:
  Command-line interface (CLI): sdcli.exe
  Graphical user interface (GUI): integrates into Microsoft Management Console (MMC)

SNAPDRIVE 6.1 FOR WINDOWS


SnapDrive for Windows software integrates with the Windows volume manager so that storage systems can serve as storage devices for application data in Windows Server 2003 and 2008 environments.
SnapDrive manages LUNs on a storage system, making this storage available as local disks on Windows hosts. This allows Windows hosts to interact with the LUNs just as if they belonged to a direct-attached disk array.
SnapDrive for Windows provides the following additional features:
 It enables online storage configuration, LUN expansion, and streamlined management.
 It integrates Snapshot technology, which creates point-in-time images of data stored on LUNs.
 It works in conjunction with SnapMirror software to facilitate disaster recovery from asynchronously mirrored destination volumes.
 When used with Data ONTAP 7.1 or later, it allows for fractional reserve monitoring and rapid LUN restoration.
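Because sdcli.exe mirrors the GUI operations covered in the following pages, the same tasks can be scripted. A small sketch of two read-only checks; the drive letter T is an example, and exact switches can vary by SnapDrive release:

C:\> sdcli disk list
C:\> sdcli snap list -d T

The first command enumerates the SnapDrive-managed LUNs; the second lists the Snapshot copies for the virtual disk mounted at T.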


Install SnapDrive 6.1 for Windows
 Requires .NET Framework 3.5 SP1
 Requires a valid license key or a controller with SnapDrive licensed
 Requires IP-to-hostname resolution
 Default ports on host:
  RPC 808
  HTTP 4094
  HTTPS 4095
 Default ports on Data ONTAP:
  RPC 445
  HTTP 80
  HTTPS 443

INSTALL SNAPDRIVE 6.1 FOR WINDOWS


When installing SnapDrive 6.1 for Windows, several hotfixes are required; see the documentation for details. .NET Framework 3.5 SP1 is also required. During configuration, an administrator chooses a default method to communicate between SnapDrive and a storage system. The default port for RPC is 808, 4094 for HTTP, and 4095 for HTTPS.


RPC Configuration
 The SnapDrive service runs under a local administrator's account
 The same administrator account must be configured as an administrator on the storage system
 To communicate with a storage system, you can add a domain user to the local user account:
  system> useradmin domainuser add Development\SDService -g Administrators
 Or use pass-through authentication:
  system> useradmin user add SDService -g Administrators

RPC CONFIGURATION
NOTE: For HTTP or HTTPS configuration, you do not have to use the same user as the SnapDrive service.
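To confirm the account configuration on the controller, you can list the configured users from the Data ONTAP CLI; a quick check (SDService is the example account from the slide):

system> useradmin user list
system> useradmin domainuser list -g Administrators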


SnapDrive for Windows
 Integrates into MMC

SNAPDRIVE FOR WINDOWS


The SnapDrive for Windows interface is an MMC snap-in that appears under the Storage
node by default.


Storage System Management
 Configure the storage systems to manage

STORAGE SYSTEM MANAGEMENT


To manage a storage system from SnapDrive, select Storage System Management and then add the appropriate storage system.


SnapDrive Services
Local SnapDrive Server


LUN Creation with SnapDrive for Windows
 Select Disk and then Create Disk

LUN CREATION WITH SNAPDRIVE FOR WINDOWS


LUNs managed by SnapDrive are located under Disks. Select Create Disk to start the Create
Disk Wizard.


Create Disk Wizard

Select a predefined
storage system


Create Disk Wizard (Cont.)


Create Disk Wizard (Cont.)


Create Disk Wizard (Cont.)


CREATE DISK WIZARD (CONT.)


NOTE: This wizard page appears only if properties need to be altered on the storage system's volume.


Create Disk Wizard (Cont.)


Create Disk Wizard (Cont.)


Create Disk Wizard (Cont.)

Success


CREATE DISK WIZARD (CONT.)

To view the LUNs (virtual disks) created with SnapDrive, expand SnapDrive, the local SnapDrive services, and Disk. The LUN appears with a series of identifiers.
In the example, the Disk Identification is listed as LUN [4,0,0,0](T:\). The number 4 refers to the SCSI port that is being used to maintain a connection to the LUN. The first 0 refers to the bus number. The second 0 refers to the target ID, and the third 0 refers to the LUN number assigned by either the storage system or SnapDrive. Finally, the letter T is the local drive letter from which the disk may be accessed.
NOTE: This information is very valuable when troubleshooting FC and iSCSI-based SANs.


Snapshot Creation with SnapDrive


Created a file on the virtual disk


Snapshot Creation with SnapDrive (Cont.)

Select


Snapshot Creation with SnapDrive (Cont.)
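The Snapshot copy created through SnapDrive is an ordinary volume Snapshot copy on the controller, so it can also be confirmed from the Data ONTAP CLI. A quick check, assuming the LUN lives in a volume named vol_sd (an example name):

system2> snap list vol_sd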


Snapshot Restore with SnapDrive


Delete a file on the virtual disk

The test file is deleted


Snapshot Restore with SnapDrive (Cont.)

Select the Snapshot copy, then click Restore Disk


Snapshot Restore with SnapDrive (Cont.)


Performs a single file Snapshot restore
SnapRestore must be licensed
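For comparison, a single-file restore can also be driven from the Data ONTAP CLI when SnapRestore is licensed; a minimal sketch, with /vol/vol_sd/lun0 and sdsnap1 as example LUN and Snapshot names. Note that SnapDrive coordinates the restore with the host file system, which a raw CLI restore does not:

system2> snap restore -t file -s sdsnap1 /vol/vol_sd/lun0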


Snapshot Restore with SnapDrive (Cont.)


Verify the file is restored on the virtual disk

The test file is back


LUN Removal with SnapDrive for Windows

To remove, right-click and select Delete Disk


Exercise
Module 4: Windows LUN Access
Tasks 7-10
Estimated Time: 40 minutes

EXERCISE
Please refer to your Exercise Guide for more instruction.


Multipath I/O in
Windows


Windows MPIO
[Flowchart: how a LUN is claimed by the Windows MPIO framework]
1. A device (LUN) arrives and is discovered/detected by the adapter/PCI bus, which hands it to PnP/Disk Manager.
2. The DSM modules are interrogated; either the NetApp DSM or the Microsoft DSM claims the device.
3. If the path is already known, the device is grouped under the existing pseudo node/device; if not, a new pseudo device is created.
4. A claimed device is exposed as a pseudo device in place of the real device (with MPIO) through the port driver; a device that no DSM claims is not exposed.

Verify Paths
The LUN should appear in the DSM interface
 Right-click and pick from the menu to select a load-balancing policy
 Optionally, set the load-balancing policy

Load-Balancing Policies
Least Queue Depth
High performance
Default Policy
Optimizes distribution of I/O load


Load-Balancing Policies (Cont.)


Least Weighted Paths
Active-Passive policy
Determines best path and assigns it as the
active path


Load-Balancing Policies (Cont.)


Auto Assign


Load-Balancing Policies (Cont.)


Failover


Load-Balancing Policies (Cont.)


Round Robin


Load-Balancing Policies (Cont.)


Round Robin with Subset


Multiple Path MPIO Configuration


MPIO Panel
Within Control Panel, the MPIO panel can be used to verify MPIO services
 NetApp LUN managed by the Windows MPIO framework
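On Windows Server 2008 R2, the built-in mpclaim.exe utility gives a command-line view of the same information; a quick sketch (the disk number is an example and will differ on your host):

C:\> mpclaim -s -d
C:\> mpclaim -s -d 14

The first command summarizes all MPIO disks and their load-balance policies; the second shows the individual paths for MPIO Disk 14.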


MPIO Panel (Cont.)
 Using NetApp DSM

Registered DSMs: 2
================
DSM Name          Version
----------------  --------------------
Microsoft DSM     006.0001.07600.16385
Data ONTAP DSM    003.0003.25090.093
...

Data ONTAP DSM
==============
MPIO Disk14: 04 Paths, Least Queue Depth, SN: C4n3J4ROg/Mi
Supported Load Balance Policies: FOO
Path ID           State             SCSI Address
------------------------------------------------
0000000003000101  Active/Optimized  ...
Adapter: Emulex LightPulse HBA - Storport Mini
Controller: C1677CB400000000 ...

NOTE: Redacted output

Exercise
Module 4: Windows LUN Access
Tasks 11-12 (optional: Task 13)
Estimated Time: 15 minutes

EXERCISE
Please refer to your Exercise Guide for more instruction.


Module Summary


Module Summary
In this module, you should have learned to:
Describe the steps to allow a Windows Server
2008 R2 initiator to access a LUN on a storage
system


vSphere Overview
Module 5
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
Describe virtualization and how it can be used
to promote server efficiency
Explain methods of mapping NetApp storage
to vSphere datastores
List the interfaces used to administer vSphere


Virtualization Overview
[Diagram: before virtualization, each application and operating system is bound to its own x86 machine (CPU, memory, NIC, disk); after virtualization, multiple operating systems and applications share one x86 machine through the VMware virtualization layer]

Before virtualization:
 Single OS image per machine
 Software and hardware tightly coupled
 Running multiple applications on the same machine often creates conflict
 Underutilized resources
 Inflexible and costly infrastructure

After virtualization:
 Hardware-independence of operating system and applications
 Virtual machines can be provisioned to any system
 Can manage OS and application as a single unit by encapsulating them into virtual machines

VIRTUALIZATION OVERVIEW
Virtualization is described this way in VMware materials: "Virtualization is an abstraction layer that decouples the physical hardware from the operating system to deliver greater IT resource utilization and flexibility."
Virtualization allows multiple virtual machines with heterogeneous operating systems to run in isolation, side by side on the same physical machine.
Each virtual machine has its own set of virtual hardware (for example, RAM, CPU, NIC, and so on) upon which an operating system and applications are loaded.
VMware virtualizes servers; NetApp virtualizes storage.
There are other server virtualization applications, such as Microsoft Virtual Server, or Xen in Red Hat Enterprise Linux 5.0.


Hosted and Bare-Metal Strategies
[Diagram: a hosted stack, where the virtualization layer runs on a host operating system, next to a bare-metal stack, where the VMware virtualization layer with its service console runs directly on x86 hardware]

Hosted architecture:
 Installs and runs as an application
 Relies on host OS for device support and physical resource management
 VMware products: VMware Server, Workstation

Bare-metal (hypervisor) architecture:
 Lean virtualization-centric kernel
 Service console for agents and helper applications
 VMware products: VMware ESX

HOSTED AND BARE-METAL STRATEGIES


Hosted: The VMware server or workstation software runs as an application on top of a
hosting OS (typically on top of Windows).
Bare-metal: The VMware ESX Server product is an OS in itself based on Linux. ESX runs
directly on x86 hardware (no hosting OS).


Server Consolidation
[Diagram: two physical servers, each with its own application, operating system, and hardware, consolidated as VMs on one ESX host; each VM's .vmdk files are stored on a local Virtual Machine File System (VMFS) datastore]
 For an enterprise solution, cluster ESX and attach NetApp storage

SERVER CONSOLIDATION
With vSphere and NetApp storage, administrators may transform their data centers by converting physical machines to virtual machines (P2V).
vSphere stores virtual machine (VM) data in datastores. Datastores can be shared by more than one VM as shown in this diagram.
VMs can access their storage by way of:
 Virtual disks (VMDKs) stored on the VMware File System (VMFS), accessed through FCP or iSCSI
 VMDKs stored on NFS
 Raw device mappings (RDMs) by way of FC or iSCSI


vSphere or ESX 4.0
 VMware vSphere is the fourth generation of VMware's ESX server
 New in vSphere:
  Native NIC iSCSI multipathing
  New Pluggable Storage Architecture
  VMDirectPath to assign a PCI adapter directly to a virtual machine
  Growing virtual disks and VMware vStorage VMFS volumes while they are live
 NOTE: This course uses ESX 4.0 and vSphere interchangeably

VSPHERE OR ESX 4.0


VMDirectPath I/O allows a guest operating system on a virtual machine to directly access
physical PCI and PCIe devices connected to a host.


NetApp and vSphere


VMFS Datastore
[Diagram: an ESX cluster in which VM1 through VM4 each attach a virtual disk (VDisk0) from a shared VMFS datastore; the datastore is a LUN in a flexible volume on a NetApp FAS array, accessed over FC, FCoE, or iSCSI, and holds each VM's .vmx and .vmdk files]

VMFS DATASTORE
The VMware Virtual Machine File System (VMFS) is a high-performance clustered file
system that provides datastores, which are shared storage pools. VMFS datastores can be
configured with LUNs accessed by Fibre Channel, iSCSI, or Fibre Channel over Ethernet.
VMFS allows traditional LUNs to be accessed simultaneously by every ESX Server in a
cluster.
VMFS provides the VMware administrator with a fair amount of independence from the
storage administrator. By deploying shared datastores, the VMware administrator is free to
provision storage to virtual machines as needed. In this design, most data management
operations are performed exclusively through VMware vCenter Server.
This storage design can be challenging in the area of performance monitoring and scaling. Because shared datastores serve the aggregated I/O demands of multiple VMs, this architecture doesn't natively allow a storage array to identify the I/O load generated by an individual VM. This issue can be exacerbated by spanning VMFS volumes across multiple LUNs.


NAS Datastore
[Diagram: an ESX cluster in which VM1 through VM4 each attach a virtual disk (VDisk0) from a shared NFS datastore; the datastore is a flexible volume on a NetApp FAS array, accessed over NFS, and holds each VM's .vmx and .vmdk files]

NAS DATASTORE
In addition to VMFS, vSphere allows a customer to leverage enterprise-class NFS servers in
order to provide datastores with concurrent access by all of the nodes in an ESX cluster. This
method of access is very similar to that with VMFS. NFS provides high performance, the
lowest per-port storage costs (as compared to Fibre Channel solutions), and some advanced
data management capabilities.
Deploying VMware with NetApp NFS datastores is the easiest means to integrate VMware
virtualization technologies directly with the NetApp WAFL (Write Anywhere File Layout)
file system, our advanced data management and storage virtualization engine.
See appendix 8 for more information about NAS datastores.
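For orientation, attaching an NFS datastore is a two-step operation; a minimal sketch, assuming an exported flexible volume /vol/nfs_ds, a storage interface at 10.254.133.239, an ESX hostname of esx_host, and a datastore label of nfs_ds (all example values):

system2> exportfs -p rw=esx_host,root=esx_host /vol/nfs_ds
# esxcfg-nas -a -o 10.254.133.239 -s /vol/nfs_ds nfs_ds

The exportfs command runs on the storage system; esxcfg-nas runs on the ESX host.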


Raw Device Mapping (RDM) Datastore
[Diagram: VMs in an ESX cluster open, read, and write their own LUNs directly over FC, FCoE, or iSCSI; the .vmx and .vmdk mapping files reside in a VMFS datastore]
 The .vmx/.vmdk files could be in the flexible volume with the LUN or in a separate volume

RAW DEVICE MAPPING (RDM) DATASTORE


ESX allows virtual machines to have direct access to LUNs for specific use cases such as P2V clustering or storage vendor management tools. This type of access is referred to as a raw device mapping and can be configured with Fibre Channel, iSCSI, and Fibre Channel over Ethernet. In this design, ESX acts as a connection proxy between the VM and the storage array.
Unlike VMFS and NFS, RDMs are not used to provide shared datastores.
RDMs are an enabling technology for solutions such as virtual-machine and physical-to-virtual-machine host-based clustering, such as with Microsoft Cluster Server (MSCS). RDMs provide traditional LUN access to a host; therefore, they can achieve high individual disk I/O performance, and they can be easily monitored for disk performance by a storage array.
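An RDM can also be created from the ESX command line with vmkfstools; a minimal sketch in which the device ID and file path are placeholders for your environment (-r creates a virtual-compatibility RDM, -z a physical-compatibility RDM):

# vmkfstools -r /vmfs/devices/disks/naa.<device_id> /vmfs/volumes/datastore1/vm1/vm1_rdm.vmdk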


Datastore Comparison Table

Capability / Feature                          FC/FCoE                  iSCSI          NFS
-------------------------------------------   ----------------------   ------------   ------------
Format                                        VMFS or RDM              VMFS or RDM    NetApp WAFL
Maximum number of datastores or LUNs          256                      256            64
Maximum datastore size                        64 TB                    64 TB          16 TB*
Maximum LUN / NAS file system size            2 TB                     2 TB           16 TB*
Recommended VMDKs per LUN / NAS file system   16                       16             250
Optimal queue depth per LUN / file system     64                       64             N/A
Available link speeds                         4 and 8 Gb FC / 10 GbE   1 and 10 GbE   1 and 10 GbE

* Data ONTAP 8.0 supports up to 100 TB per aggregate

DATASTORE COMPARISON TABLE


Differentiating what is available with each type of datastore and storage protocol can require
considering many points. The following table compares the features available with each
storage option.


vSphere Interfaces


vSphere Graphical Interfaces

vCenter Server:
 Requires additional license
 Centralized management of multiple ESX hosts
 Automatic detection of changes
 Runs as a Windows service

vSphere Client:
 No additional license required
 Used to manage a single ESX host
 No automatic detection of changes
 Installs to local machine

vSphere Web Access:
 No additional license required
 Perform basic VM management and configuration
 No automatic detection of changes
 http://esx_host/ui

vSphere Client

Selected

Default
Datastore


vSphere Command-Line Interface (CLI)

vMA:
 A pre-built VM hosted in ESX that contains the CLI package

CLI package, runs in:
 Windows shell
 PowerShell
 Linux

Service Console:
 Requires shell user account access
 Root account disabled (in the exercise, you will change this)

CLI Examples
 If a CLI package or the vMA is used, commands must have connection information:
  esxcli --server esx_ip --username user --password passwd ...
 If the user logs in to the Service Console with the appropriate user account, no additional connection information is needed:
  esxcli ...
 Commands might differ between interfaces:

CLI packages/vMA   Service Console   Description
vicfg-vmknic       esxcfg-vmknic     NIC management
vicfg-vswitch      esxcfg-vswitch    Virtual switch management
esxcli             esxcli            Storage plug-in management
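For example, listing the VMkernel NICs looks like this in each interface; esx_ip and the credentials are placeholders:

# vicfg-vmknic --server esx_ip --username user --password passwd -l
# esxcfg-vmknic -l

The first form is issued from a CLI package or the vMA; the second from the Service Console.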


User Accounts Within vSphere


Selected

Right-click
to add new
user

Strong
passwords
required


Module Summary


Module Summary
In this module, you should have learned to:
Describe virtualization and how it can be used
to promote server efficiency
Explain methods of mapping NetApp storage
to vSphere datastores
List the interfaces used to administer vSphere


Exercise
Module 5: vSphere Overview
Time Estimate: 30 minutes

EXERCISE
Please refer to your Exercise Guide materials for more instructions.


vSphere iSCSI
Connectivity
Module 6
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
Describe multiple path implementation with
iSCSI connectivity for vSphere and NetApp
systems
Configure network ports on vSphere systems
Identify the worldwide node (WWN) on
vSphere systems
Set up and verify multiple path iSCSI
connectivity between vSphere and NetApp
systems

IP Connectivity


iSCSI SAN Storage Virtualization
[Diagram: VMs in an ESX cluster attach virtual disks backed by a VMFS datastore on a NetApp LUN; the virtualization layer reaches the LUN either through the iSCSI software initiator over standard NICs or through an iSCSI HBA, across the LAN to the NetApp FAS array]

vSphere Network Modeling
[Diagram: physical Ethernet adapter definitions mapped to vSphere Ethernet modeling]


vSphere Network Modeling (Cont.)
 vSwitches model physical switches, with NICs (vmnics) associated with the vSwitch
 VMkernels associate vmnics with an IP range
 VMkernels support iSCSI or NFS stacks using the Pluggable Storage Architecture (PSA)
 Port Groups define bandwidth limitation and VLAN tagging policies for each member NIC
 The Service Console communicates over vmnic0 with the vswif0 IP address by default

VSPHERE NETWORK MODELING (CONT.)

Switch virtualization (vSwitch):
 Associates physical NICs with a virtual switch
 Designate a subnet to the switch
 Use separate switches to isolate VMs
Port virtualization (VMkernel):
 Creates a port between VM(s) and the network
 Associates a specific switch's NIC(s) to a particular IP address
 NICs can be active, failover, or inactive
 Multiple active NICs use teaming (IEEE x.x)
 Has the pluggable storage architecture (PSA)


VMkernel's PSA
[Diagram: within the VMkernel, the Pluggable Storage Architecture (PSA) contains the Native Multipathing Plug-in (NMP), which is built from the Storage Array Type Plug-in (SATP) and the Path Selection Plug-in (PSP), alongside third-party Multiple Multipathing Plug-ins (MPP)]

VMKERNELS PSA
vSphere has a new architecture that allows third-party multiple multipathing plug-ins (MPP) to take complete control of the path failover and load-balancing operations for specific storage devices. Alternatively, the Native Multipathing Plug-in (NMP) supports all storage arrays on the VMware storage hardware compatibility list (HCL) and provides default path selection algorithms for both failover (by way of the Storage Array Type Plug-in, or SATP) and load-balancing operations (by way of the Path Selection Plug-in, or PSP).


ESX Network Design for IP Storage
 The goal is to design a network that:
  Is redundant across physical switches
  Uses multiple physical paths simultaneously
  Can scale to additional physical interfaces
 Two high-level options:
  Traditional Ethernet: two storage subnets, multiple storage and host IPs
  Cross-stack EtherChannel: one storage subnet, multiple storage IPs, and IP load balancing

ESX NETWORK DESIGN FOR IP STORAGE


When designing an ESX network for IP storage, the goal is a network that is redundant across physical switches and that can use multiple physical paths simultaneously. You also want a network that can scale to additional physical interfaces. There are two high-level ESX network designs that will meet this goal: one with and one without cross-stack EtherChannel.
The design that uses traditional Ethernet uses two storage subnets, with multiple storage and multiple host IPs.
The cross-stack EtherChannel design uses one storage subnet, with multiple storage IPs, and IP load balancing.


Traditional Ethernet
Multiple storage and ESX IPs are required
ESX requires two VMkernel ports, each on a different subnet
Storage node requires IP on each subnet


TRADITIONAL ETHERNET
By contrast, the traditional IP storage design without cross-stack EtherChannel uses multiple
storage IPs and multiple ESX IPs. There must be two VMkernel ports on the ESX server,
each on a different subnet. In addition, the storage node needs an IP address on each subnet.
This design uses single-mode vifs between the storage controllers and the Ethernet
infrastructure.


Traditional Ethernet (Cont.)


VMkernel port NIC teaming properties for two VMkernel ports
Connections are manually balanced by selecting a different active
adapter for each VMkernel port


TRADITIONAL ETHERNET (CONT.)


When using the traditional IP storage design without cross-stack EtherChannel, open the NIC
Teaming tab on each VMkernel port Properties screen and select the Link status only network
failover option. Manually balance connections by first selecting Override vSwitch Failover
Order and by selecting a different Active Adapter for each VMkernel port.


Traditional Ethernet (Cont.)


Distribute datastores across storage IP addresses


TRADITIONAL ETHERNET (CONT.)

This slide shows the traditional Ethernet design, without cross-stack EtherChannel, with datastores distributed across storage IP addresses.


Subnet Failover on Link Failure


On link failure, the affected subnet or connection moves to the
other NIC, sharing the link


SUBNET FAILOVER ON LINK FAILURE


On link failure, the affected subnet or connection moves to the other NIC and shares the link.


Cross-Stack EtherChannel
Multiple storage IPs are required
ESX host requires one VMkernel port


CROSS-STACK ETHERCHANNEL
This slide shows IP storage and cross-stack EtherChannel. Notice that multiple storage IP
addresses are required but the ESX host needs only one VMkernel port. Multi-mode vifs
provide link redundancy on the storage side.


Cross-Stack EtherChannel (Cont.)


VMkernel port NIC Teaming properties for cross-stack
EtherChannel
IP load balancing balances connections across links


CROSS-STACK ETHERCHANNEL (CONT.)

When using the cross-stack EtherChannel design, open the NIC Teaming tab on the VMkernel port Properties screen and select Route based on IP hash load balancing and Link status only network failover. The ESX server then IP load balances connections across the available links.


IP Exercise Environment
[Diagram: an ESX 4.0 host connected over iSCSI to Storage System 1 and Storage System 2, each with interfaces e0a, e0b, and e0c]
 e0a is used for management access only, on storage systems without e0M

Data ONTAP


Review from Day 1
 iSCSI service licensed and started (using system 2 for iSCSI):
  system2> license add XXXXXX
  system2> iscsi start
 The following interfaces are configured for iSCSI (NOTE: no TPGroup changes):
  system2> iscsi interface show
  Interface e0a disabled
  Interface e0b enabled
  Interface e0c enabled
  Interface e0d disabled
 Identify the WWN on NetApp storage:
  system2> iscsi nodename
  iSCSI target nodename: iqn.1992-08.com.netapp:sn.101201757
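The storage-side provisioning that pairs with this review follows the same pattern as day 1; a sketch with example names (the volume, LUN path, igroup name, and the ESX IQN are placeholders):

system2> lun create -s 10g -t vmware /vol/vmfs_vol/lun0
system2> igroup create -i -t vmware ig_esx iqn.1998-01.com.vmware:esx
system2> lun map /vol/vmfs_vol/lun0 ig_esx 0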


vSphere (ESX 4.0)


ESX as an iSCSI Initiator
 NetApp has supported ESX as an iSCSI initiator OS since ESX 3.0
 ESX 4.0 has many advantages over previous versions:
  New implementation of the iSCSI stack
  True multiple path over standard NICs
  Pluggable Storage Architecture (PSA) for multipath I/O support
 ESX 4.0 must be properly configured for iSCSI connectivity over a standard network interface

ESX 4.0 iSCSI Design and Installation
1. Verify the ESX release, required patches, and ESX Host Utilities Kit (HUK) with the Interoperability Matrix
    The Interoperability Matrix can be found on the NOW site
2. Install a compatible ESX HUK from NetApp
    Provides Perl scripts to monitor and diagnose iSCSI on vSphere
    Example: esx_info provides information about the iSCSI configuration
3. Configure standard network interface(s) or install supported iSCSI HBA(s)

ESX/NIC iSCSI Implementation
After installation, to configure an ESX standard NIC software initiator implementation:
1. Configure the virtual infrastructure:
    Identify the local network adapter(s) and switch(es) to use
    Configure VMkernel(s)
    Configure jumbo frames if desired
2. Enable the Software iSCSI Client and associate VMkernel(s) to the initiator
3. Identify the WWN (IQN or eui) for the local ESX host
4. Identify which method of discovery to use and enter the storage system's portal IP address
5. Configure authentication security if necessary
6. Verify discovery and log on to the storage system

Configure Virtual Infrastructure
 vSwitch considerations:
  NetApp recommends separating IP-based storage traffic from public IP network traffic by implementing separate physical network segments or VLAN segments
  It is recommended to not allow routing of data between these networks
 Normally, administrators would:
  Create a new vSwitch to isolate VMs
  Add two or more adapters to the second switch
  Configure a service console port for the second switch
 Exercise environment:
  Only two NICs are configured
  A second vSwitch will not be configured; vSwitch0 is used

Configure Virtual Infrastructure
1. Identify the local network adapter(s)
  Currently two adapters
  One vSwitch with only one NIC
  Need to add the second adapter to the vSwitch

Configure Virtual Infrastructure (Cont.)
1. Add a NIC to the vSwitch
  After the wizard completes, the NIC is added to the vSwitch

Configure Virtual Infrastructure (Cont.)


1. First VMkernel creation

NOTE: VMkernel
creation wizard is
abridged in this slide

Configure Virtual Infrastructure (Cont.)


1. Second VMkernel creation

New
VMkernel

NOTE: VMkernel
creation wizard is
abridged in this slide

Configure Virtual Infrastructure (Cont.)
 VMkernel 1 and VMkernel 2: a 1:1 mapping of adapters to VMkernels

Jumbo Frame Support
 ESX can be configured to use jumbo frame packets up to 9000 bytes
  The network must support jumbo frames end-to-end
  Enabled at the vSwitch:
   # vicfg-vswitch -m MTU vSwitch
  Enabled at the VMkernel:
   # esxcfg-vmknic -a -i ip -n netmask -m MTU port_group_name
 Data ONTAP supports jumbo frames
  Configure on the interface that uses iSCSI:
   system2> ifconfig e0b mtusize 9000
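Filled in with example values (vSwitch1, the 192.168.10.0/24 subnet, and the port group name iSCSI-PG1 are placeholders), the sequence looks like this:

# vicfg-vswitch -m 9000 vSwitch1
# esxcfg-vmknic -a -i 192.168.10.21 -n 255.255.255.0 -m 9000 iSCSI-PG1
system2> ifconfig e0b mtusize 9000
system2> ifconfig e0b

Running ifconfig on the interface without options verifies that the mtusize setting took effect.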

Software iSCSI Client


2. Enable the Software iSCSI Client
Select

Check to enable

or # esxcfg-swiscsi --enable


Associate VMkernels with Initiator
 Log in to the Service Console or use the CLI packages to issue the following commands (vmk0 and vmk1 are the VMkernel names; vmhba33 is the initiator name):
  # esxcli swiscsi nic add -n vmk0 -d vmhba33
  # esxcli swiscsi nic add -n vmk1 -d vmhba33
 Verify:
  # esxcli swiscsi nic list -d vmhba33
  vmk0
      pNic name: vmnic0
      ipv4 address: 10.254.135.203
      ipv4 net mask: 255.255.252.0
      ipv6 addresses:
      mac address: 00:21:5e:6f:2c:a0
      mtu: 1500
      toe: false
      tso: true
      tcp checksum: false
      vlan: true
      link connected: true
      ethernet speed: 1000
      packets received: 531701
      packets sent: 8719
      NIC driver: bnx2
      driver version: 1.6.9
      firmware version: 4.4.1
      ...
  vmk1
      ...

ESX Worldwide Node Name (WWNN)
3. Identify the ESX WWNN

Discovery
[Diagram: an initiator connected to a target across Ethernet]
 Discovery is not automatic

Discovery Methods
4. Configure discovery method
 Dynamic (Send Targets): obtains a list of accessible target portals from the NetApp storage system
 Static: can only access particular targets at the NetApp storage system designated by the ESX administrator

Dynamic Discovery Configuration
 Enter the storage system's iSCSI-enabled interface address
 A separate control sets security (CHAP, covered on the following pages)

Authentication Security Method
5. To increase security, iSCSI may be configured to require authentication
 Authentication method: CHAP
  Unidirectional: targets authenticate initiators
  Bidirectional: initiators and targets authenticate each other
 This course discusses CHAP authentication but does not use it in the exercise

ESX Unidirectional CHAP Configuration


system2> iscsi security add
-i iqn.1998-01.com.vmware:esx
-s CHAP
-n iqn.1998-01.com.vmware:esx
-p thisismysecret

To configure
bidirectional, check here
and then...


ESX Bidirectional CHAP Configuration


system2> iscsi security add
-i iqn.1998-01.com.vmware:esx
-s CHAP
-n iqn.1998-01.com.vmware:esx
-p thisismysecret
-m iqn.1992-08.com.netapp:ss
-o thisismysecret2


Limit an Interface from Discovery
 Restricting connection over specific interfaces:
  Select the interface to remove, and click to remove it, or:
   system2> iscsi interface disable e0a
  or
   system2> iscsi interface accesslist

ESX Discovery from CLI
 Verify iSCSI discovery from the CLI:
  # vmkiscsi-tool -D vmhba33
  ===Discovery Properties for Adapter vmhba33===
  iSnsDiscoverySettable        : 0
  ...
  staticDiscoverySettable      : 0
  staticDiscoveryEnabled       : 1
  sendTargetsDiscoverySettable : 0
  sendTargetsDiscoveryEnabled  : 1
  slpDiscoverySettable         : 0
  DISCOVERY ADDRESS            : 10.254.133.239
  STATIC DISCOVERY TARGET
    NAME    : iqn.1992-08.com.netapp:sn.101201757
    ADDRESS : 10.254.133.239:3260
  STATIC DISCOVERY TARGET
    NAME    : iqn.1992-08.com.netapp:sn.101201757
    ADDRESS : 10.254.133.240:3260

ESX Binding
A session occurs automatically:
# vmkiscsi-tool -T vmhba33
NAME: iqn.1992-08.com.netapp:sn.101201757
ALIAS:
DISCOVERY METHOD FLAGS: 0
SEND TARGETS DISCOVERY SETTABLE: 0
SEND TARGETS DISCOVERY ENABLED: 0
Portal 0: 10.254.133.239:3260
Portal 1: 10.254.133.240:3260

To view the session on the storage system:


system2> iscsi session show
Session 25
Initiator Information
Initiator Name: iqn.1998-01.com.vmware:esx
ISID: 00:02:3d:00:00:03

Module Summary


Module Summary
In this module, you should have learned to:
Describe multiple path implementation with
iSCSI connectivity for vSphere and NetApp
systems
Configure network ports on vSphere systems
Identify the worldwide node (WWN) on
vSphere systems
Set up and verify multiple path iSCSI
connectivity between vSphere and NetApp
systems

Exercise
Module 6: vSphere iSCSI Connectivity
Estimated Time: 30 minutes

EXERCISE
Please refer to your Exercise Guide materials for more instructions.


vSphere FC Connectivity
Module 7
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
Describe multiple path implementation with
Fibre Channel (FC) connectivity for vSphere
and NetApp systems
Configure FC ports on vSphere systems
Identify the worldwide node name (WWNN)
and worldwide port name (WWPN) on vSphere
systems
Set up and verify multiple path FC connectivity
between vSphere and NetApp systems

FC Topology


FC SAN Storage Virtualization

(Diagram: VMs with virtual disks (VDisk0) run on an ESX cluster; each VM's virtual SCSI controller passes I/O through the virtualization layer to FC HBAs and CNAs (FCoE), across the fabric and LAN to a VMFS datastore holding the VM files (1.vmx/1.vmdk, 2.vmx/2.vmdk) on a LUN in a flexible volume on a NetApp FAS array.)


N-Port ID Virtualization (NPIV)

When the VM is powered on, a VPORT is created.
Each VPORT appears as a physical HBA to the fabric and gets its own WWNN and WWPN.
WWPNs must be zoned properly and masked in igroups.
HBAs and the switch must be NPIV compatible.
LUNs must be RDM.

(Diagram: two VMs on ESX, each with a VPORT behind the physical HBAs, reaching its own VDisk0/LUN on the NetApp FAS array over FCP.)

N-PORT ID VIRTUALIZATION (NPIV)

N-Port ID Virtualization (NPIV) allows HBAs to create VPORTs that can be assigned a separate WWNN and WWPN on the fabric. These VPORTs can then be associated with a VM, allowing unique access to the VM's raw device mapping (RDM) LUN.
NPIV requires the following:
Either an Emulex HBA with NPIV-capable firmware or a QLogic 4-Gb HBA
Brocade switches with Fabric OS 5.1.0 or later
Cisco switches with SAN-OS 3.0(1)
McData switches with E/OS 8.0
Data ONTAP 7.2 or later
ESX 3.5 or later
Always check the Interoperability Matrix for the current supported configurations.


N-Port ID Virtualization (Cont.)

Reasons for NPIV:
VM-level chargebacks: I/O throughput tracked
per VM enabling application or user-level
chargebacks
Bi-directional association of VMs with storage:
Trace VM to RDM and RDM back to VM
VM migration: VMware VMotion supports
preservation of VPORT ID when a VM is moved
to a new ESX server
HBA upgrades: Physical adapters can be
replaced or upgraded with minimal change to
SAN configuration

N-PORT ID VIRTUALIZATION (CONT.)

NPIV gives storage administrators tighter control over their storage. Specifically, NPIV allows:
VM-level chargebacks, which allow I/O for a specific VM to be tracked using the virtual WWPN.
Bidirectional association of VMs with storage, which allows SAN administrators to trace from VM to LUN when troubleshooting connectivity issues.
VM migration, which is possible because VMware VMotion preserves the VPORT ID when a VM is moved to a new ESX Server.
HBA upgrades: physical adapters can be upgraded or replaced with only minimal impact to SAN configurations such as zoning.


FC Exercise Environment
ESX 4.0 Server: HBA on switch port 8
Storage System 1: 0c on switch port 0, 0d on switch port 1
Storage System 2: 0c on switch port 2, 0d on switch port 3
All connections are Fibre Channel.


Data ONTAP


Review from Day 1

FC service was licensed and started:
system and system2> license add XXXXXX
system and system2> fcp start

The following interfaces were configured as FC targets:
system and system2> fcadmin config
                   Local
Adapter  Type      State       Status
------------------------------------------------
0a       initiator CONFIGURED  online
0b       initiator CONFIGURED  online
0c       target    CONFIGURED  online
0d       target    CONFIGURED  online

Review from Day 1 (Cont.)

Configure the dual storage systems as an active-active configuration.

License cluster and reboot:
system and system2> license add XXXXXX
system and system2> reboot

Verify cfmode:
system> fcp show cfmode
fcp show cfmode: single_image

Enable the cluster:
system> cf enable

Verify the active-active relationship:
system> cf status
Cluster enabled, system2 is up.

Review from Day 1 (Cont.)

system or system2> fcp nodename
Fibre Channel nodename: 50:0a:09:80:86:f7:c7:86 (500a098086f7c786)

system> fcp config
0c: ONLINE <ADAPTER UP> PTP Fabric  host address 011000
    portname 50:0a:09:81:96:f7:c7:86  nodename 50:0a:09:80:86:f7:c7:86
    mediatype auto  speed auto
0d: ONLINE <ADAPTER UP> PTP Fabric  host address 011100
    portname 50:0a:09:82:96:f7:c7:86  nodename 50:0a:09:80:86:f7:c7:86
    mediatype auto  speed auto

Verify the configuration on the partner:
system2> fcp config
0c: ONLINE <ADAPTER UP> PTP Fabric  host address 011200
    portname 50:0a:09:81:86:f7:c7:86  nodename 50:0a:09:80:86:f7:c7:86
    mediatype auto  speed auto
0d: ONLINE <ADAPTER UP> PTP Fabric  host address 011300
    portname 50:0a:09:82:86:f7:c7:86  nodename 50:0a:09:80:86:f7:c7:86
    mediatype auto  speed auto

vSphere (ESX 4.0)


ESX as an FC Initiator
NetApp has supported ESX as an FC initiator OS since ESX 3.0.
ESX 4.0 has many advantages over previous versions:
NPIV support
Updated popular HBA drivers
Pluggable Storage Architecture (PSA) for multipath I/O support
ESX must be properly configured for FC connectivity.

ESX 4.0 FC Design and Installation

1. Verify the ESX release, required patches, and the VMware Host Utilities Kit (HUK) against the Interoperability Matrix
   (The Interoperability Matrix can be found on the NOW site)
2. Install compatible host bus adapters (HBAs)
3. Install and configure required HBA drivers and utilities
4. Verify an HBA
   Emulex: Use HBAnyware
   QLogic: Use SANsurfer
   All: Use the lspci command
5. Install the compatible VMware HUK from NetApp:
   Provides Perl scripts to monitor and diagnose FC on vSphere
   Example: esx_info provides information about the FC configuration, such as the adapters file

vSphere/Emulex Implementation
After installation, to configure a vSphere/Emulex
implementation:
Verify the HBA is enabled
Identify the WWNN on the host
Identify the WWPN on the host
Verify connectivity between the initiator and
target

vSphere/Emulex Implementation (Cont.)

Verify that ESX 4.0 has identified the HBA(s). (Screenshot: the Storage Adapters view with an HBA selected.)

vSphere/Emulex Implementation (Cont.)

Verify ESX 4.0 has identified the HBA(s) (Cont.):
# cd /opt/netapp/santools
# ./esx_info fc
...
# cd /tmp/netapp/netapp/esx_info
# cat adapters
adapter name:      vmhba3
WWPN:              10000000c958299a
WWNN:              20000000c958299a
driver name:       lpfc820
model:             111-00308
model description: NetApp 111-00308 4Gb 2-port PCI-X2 Fibre Channel Adapter
...
adapter name:      vmhba4
...
(Same as the output of ./sanlun fcp show adapter -v.)

vSphere/Emulex Implementation (Cont.)

Verify ESX 4.0 has identified the HBA(s) (Cont.):
# lspci | grep -i Fibre
14:01.0 Fibre Channel: Emulex Corporation LP11000 4Gb Fibre Channel Host Adapter (rev 01)
14:01.1 Fibre Channel: Emulex Corporation LP11000 4Gb Fibre Channel Host Adapter (rev 01)

Identify the driver associated with the HBA(s):
# vmkload_mod -l | grep -i "lpfc"
lpfc820 0x418030089000 0x72000 0x417ff0e9dca0 0xd000 33 Yes

lpfc is the Emulex driver name; lpfc820 is the specific driver name, and the listing confirms that the driver is loaded.

vSphere/Emulex Implementation (Cont.)

Verify the HBA(s) configuration:
# cd /opt/netapp/santools
# ./config_hba --query
lpfc820 enabled=1 options=lpfc_devloss_tmo=120

or, using the specific loaded driver name:

# esxcfg-module -g lpfc820
lpfc820 enabled=1 options=lpfc_devloss_tmo=120

Configure the HBA(s):
# ./config_hba --configure

(This is done during the HUK install; only revisit it when LUNs are present.)


vSphere/Emulex Implementation (Cont.)

Identify the local WWNN/WWPN(s) on ESX 4.0. (Screenshot: with an HBA selected, the WWNN and the WWPN for the selected HBA are displayed.)


vSphere/Emulex Implementation (Cont.)

Identify the local WWNN on ESX 4.0 (the grep string must match exactly):
# esxcfg-info | grep -i "Node Number"
|--World Wide Node Number......0x20000000c958299a
|--World Wide Node Number......0x20000000c958299b

Identify the local WWPN(s) on ESX 4.0 (again, match the string exactly):
# esxcfg-info | grep -i "Port Number"
|--World Wide Port Number......0x10000000c958299a
|--World Wide Port Number......0x10000000c958299b

The WWNN and WWPNs are also available from:
# cd /opt/netapp/santools
# ./sanlun fcp show adapter -v


Discovery
(Diagram: initiator and target connected over Fibre Channel. When ports are active, discovery is automatic.)


Data ONTAP Discovery of Initiators

Verify connectivity from the storage system:
system2> fcp show initiators
Initiators connected on adapter 0c:
Portname                 Group
------------------------------
10:00:00:00:c9:58:29:9b
10:00:00:00:c9:58:29:9a
Initiators connected on adapter 0d:
Portname                 Group
------------------------------
10:00:00:00:c9:58:29:9b
10:00:00:00:c9:58:29:9a
(These port names are the ESX WWPNs.)

NOTE: For convenience, you may assign an alias to the ESX WWPN.
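A minimal sketch of assigning such an alias (syntax per the Data ONTAP fcp wwpn-alias command; the alias name esx01 is made up for illustration):

system2> fcp wwpn-alias set esx01 10:00:00:00:c9:58:29:9a
system2> fcp wwpn-alias show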

vSphere Discovery of Targets

Verify connectivity from ESX 4.0 using HBAnyware (provide a local WWPN):
# cd /usr/sbin/hbanyware
# ./hbacmd allnodeinfo 10:00:00:00:c9:58:29:9a
All Node Info for 10:00:00:00:c9:58:29:9a
Node Type      : WWPN
FCP ID         : 10000                     (Storage System 1, 0c port)
SCSI Bus Number: 0
SCSI Target Num: 0
Node WWN       : 50:0A:09:80:86:88:37:5D
Port WWN       : 50:0A:09:81:96:88:37:5D
OS Device Name : /proc/scsi/lpfc820/40,0
...

All four target ports (0c and 0d) within the high-availability pair are visible.


VSPHERE DISCOVERY OF TARGETS

Node Type      : WWPN
FCP ID         : 10200
SCSI Bus Number: 0
SCSI Target Num: 2
Node WWN       : 50:0A:09:80:86:88:37:5D
Port WWN       : 50:0A:09:81:86:88:37:5D
OS Device Name : /proc/scsi/lpfc820/40,2

Node Type      : WWPN
FCP ID         : 10300
SCSI Bus Number: 0
SCSI Target Num: 3
Node WWN       : 50:0A:09:80:86:88:37:5D
Port WWN       : 50:0A:09:82:86:88:37:5D
OS Device Name : /proc/scsi/lpfc820/40,3



Module Summary
In this module, you should have learned to:
Describe multiple path implementation with
Fibre Channel (FC) connectivity for vSphere
and NetApp systems
Configure FC ports on vSphere systems
Identify the worldwide node name (WWNN)
and worldwide port name (WWPN) on vSphere
systems
Set up and verify multiple path FC connectivity
between vSphere and NetApp systems

Exercise
Module 7: vSphere FC Connectivity
Estimated Time: 60 minutes

EXERCISE
Please refer to your Exercise Guide materials for more instructions.


vSphere LUN Access
Module 8
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
Describe the steps to allow a vSphere initiator to access a LUN on a storage system as a VMFS datastore
Describe the steps to allow a vSphere initiator to create a VM with a raw device mapping (RDM) disk from a storage system's LUN

LUN Access Review


LUN Access
To connect an initiator to a target's LUN (steps 1 through 3 are sketched below):
1. Create an igroup if necessary
2. Create the LUN
3. Map the LUN to the igroup
4. Find the LUN on the initiator
5. Prepare the LUN as a new disk on the
initiator
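A minimal command-line sketch of steps 1 through 3 on the storage system (the igroup name, LUN path, size, and WWPN are this course's example values; the System Manager wizards shown later perform the same work):

system2> igroup create -f -t vmware My_FC_igroup 10:00:00:00:c9:58:29:9a
system2> lun create -s 40g -t vmware /vol/vol_VMFS_fcp/qt_Store1/lun1
system2> lun map /vol/vol_VMFS_fcp/qt_Store1/lun1 My_FC_igroup 0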


LUN Setup

(Diagram: the initiator's file system reaches the target over both Ethernet (iSCSI) and Fibre Channel.
My_IP_igroup: iqn.1998-01.com.vmware:esx, OS type vmware, mapped to LUNa.
My_FC_igroup: 10:00:00:00:c9:58:29:9a, OS type vmware, mapped to LUNb.)


Data ONTAP Setup


Data ONTAP SAN Configuration

Create storage containers:
Create an aggregate
Create a volume (ensure automatic Snapshot copies are disabled)
Create a qtree (optional)

In this exercise, we will use NetApp System Manager to perform these tasks: select the HA pair and click Add to create an aggregate, then create the volume and qtree.


DATA ONTAP SAN CONFIGURATION

NetApp System Manager provides comprehensive management of one or more arrays through a simple, easy-to-use, intuitive GUI.
NetApp System Manager is a Windows MMC 3.0 application that supports discovery, setup, FC, iSCSI, CIFS, NFS, deduplication, provisioning, thin provisioning, Snapshot technology, and configuration management of multiple NetApp storage systems from a single pane of glass.
To learn more, go to the NOW site.


Create an igroup
(Screenshot: make the selections shown first, and then add the ESX WWPNs.)


Create a LUN
(Screenshot: make the selections shown first.)
For a VMFS datastore, the type will be VMware.
For an RDM datastore, the type will be specific to the OS installed.
At the end of the wizard, the LUN will be created and mapped appropriately.

LUN Decisions for ESX

When creating a LUN, keep the following in mind:
One LUN per VMFS datastore.
You might want fewer, larger LUNs:
- More flexibility to create VMs without storage changes
- More flexibility for resizing virtual disks
- Fewer VMFS datastores to manage
You might want more, smaller LUNs:
- More efficient use of storage
- More flexibility in multipathing policy
- Microsoft Cluster requires each cluster disk on its own LUN


vSphere Setup


vSphere Steps
To connect an initiator to a target's LUN:
1. Create an igroup
2. Create the LUN
3. Map the LUN to the igroup
4. Find the LUN on the initiator
5. Prepare the LUN as a new disk on the
initiator
NOTE: Step 5 differs depending on whether
you are creating a VMFS or RDM datastore


4. Find the LUN on vSphere (iSCSI)

(Screenshot: select the iSCSI storage adapter; rescan if the LUN doesn't appear.)
The identifier encodes: storage adapter : channel # : target # : LUN #

4. Find the LUN on vSphere (FC)

(Screenshot: select the FC storage adapter; the LUN appears. Fine-tune with the Host Utilities Kit; the next slide shows how to configure LUN discovery.)

Configure the new LUN with optimal NetApp/ESX settings:
# ./config_hba --configure

Configuring LUN Discovery

(Screenshot: ESX advanced settings. One setting limits scans on the target to logical units numbered 0 - 255; another, when set to 1, looks for nonsequential logical unit numbering.)


4. Find the LUN on vSphere (CLI)

With the Host Utilities Kit:
# cd /opt/netapp/santools
# ./sanlun lun show
controller: lun-pathname                   device filename  adapter  protocol  lun size             lun state
system2: /vol/vol_VMFS_fcp/qt_Store1/lun1  /dev/sdc         vmhba3   FCP       40.0g (42953867264)  GOOD

The vmkdisk name is the identifier: naa.60a980004335432d576f525844457678


4. Find the LUN on vSphere (CLI)

With ESX (NMP = Native Multipathing Plug-in):
# esxcli nmp device list
...
naa.60a980004335432d576f525844457678
   Device Display Name: NETAPP Fibre Channel Disk (naa.60a980004335432d576f525844457678)
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on;explicit_support=off;explicit_allow=on;alua_followover=on;{TPG_id=2,TPG_state=AO}{TPG_id=3,TPG_state=ANO}}
   Path Selection Policy: VMW_PSP_MRU
   Path Selection Policy Device Config: Current Path=vmhba4:C0:T2:L0
   Working Paths: vmhba4:C0:T2:L0
...
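A hedged sketch of changing the path selection policy for this one device (the device ID is the one listed above; verify the esxcli nmp device setpolicy syntax on your ESX 4.0 build):

# esxcli nmp device setpolicy --device naa.60a980004335432d576f525844457678 --psp VMW_PSP_RR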


Manage Paths


Manage Paths on the Service Console

With ESX (SATP = Storage Array Type Plug-in):
# esxcli nmp satp setdefault -psp <PSP> -satp <SATP>

PSP types:
VMW_PSP_RR = Round robin
VMW_PSP_FIXED = Fixed
VMW_PSP_MRU = Most recently used

SATP types supported for NetApp arrays:
VMW_SATP_DEFAULT_AA = Symmetric access
VMW_SATP_ALUA = Asymmetric access
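For example, to make round robin the default PSP for the ALUA SATP, following the syntax shown above (on some ESX 4.0 builds the subcommand is spelled setdefaultpsp, so verify with esxcli nmp satp --help):

# esxcli nmp satp setdefault -psp VMW_PSP_RR -satp VMW_SATP_ALUA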


5. Preparing the LUN for VMFS

Creating a VMFS Datastore (wizard screenshots follow)



5. Preparing the LUN for VMFS (Cont.)
(Screenshot: give the datastore a name.)



Create a VM in a VMFS Datastore

(Screenshot: give a name to the VM.)


Create a VM in a VMFS Datastore (Cont.)

(Screenshot: choose the VMFS datastore.)







Using Datastores in vSphere

You can use datastores to create secondary hard disks for existing VMs. RDMs can also be used if available. The secondary disk will show up in the guest OS.

Multiple Disk Prioritization

Change the priority of multiple disks to maximize throughput of thrashed disks.
Possible choices: Low, Normal, High, Custom


Physical Files Investigation

Browse the datastore to investigate files (screenshot).


How VMs Access Data on a SAN

1. The guest OS reads or writes what it sees as a SCSI disk.
2. Device drivers in the VM's operating system communicate with the virtual SCSI controllers.
3. The virtual SCSI controller forwards the command to the VMkernel.
4. The VMkernel locates the .vmdk in VMFS, maps the request to a block in the .vmdk, and sends the I/O to the iSCSI initiator or FC HBA.
5. The software initiator or HBA sends the I/O to NetApp storage.

(Diagram: application and OS inside the VM, above the VMware virtualization layer and VMkernel on x86/x64 hardware with CPU, memory, NIC, HBA, and disk.)


Raw Device Mapping (RDM) Devices


Create a VM with an RDM

Create a LUN with a type associated with the guest OS, and then start the VM creation wizard. Make sure you choose Custom.



Create a VM with an RDM (Cont.)

(Screenshot: check here to set NPIV settings, if using NPIV; the properties dialog appears.)


Setting NPIV Settings

(Screenshot: select the option to generate new WWNs; the VM will later have its WWNN(s) and WWPN(s).)
NOTE: Add these WWPN(s) to igroups.

NPIV Observed
Before boot-up of an NPIV-supported VM:
switch> switchshow
...
Area Port Media Speed State   Proto
=====================================
  0    0   id    N2   Online  F-Port  50:0a:09:81:96:88:37:5d
  1    1   id    N2   Online  F-Port  50:0a:09:82:96:88:37:5d
  2    2   id    N2   Online  F-Port  50:0a:09:81:86:88:37:5d
  3    3   id    N2   Online  F-Port  50:0a:09:82:86:88:37:5d
...
  8    8   id    N4   Online  F-Port  10:00:00:00:c9:58:29:9a   (only one connection; non-NPIV)
  9    9   id    N4   Online  F-Port  10:00:00:00:c9:58:29:9b

switch> portcfgshow
Ports of Slot 0    0  1  2  3  4  5  6  7    8  9 10 11   12 13 14 15
...
NPIV capability   ON ON ON ON ON ON ON ON   ON ON ON ON   ON ON ON ON
...
(NPIV capability is turned on for all ports.)

NPIV Observed (Cont.)

After boot-up of an NPIV-supported VM:
switch> switchshow
...
Area Port Media Speed State   Proto
=====================================
  0    0   id    N2   Online  F-Port  50:0a:09:81:96:88:37:5d
  1    1   id    N2   Online  F-Port  50:0a:09:82:96:88:37:5d
  2    2   id    N2   Online  F-Port  50:0a:09:81:86:88:37:5d
  3    3   id    N2   Online  F-Port  50:0a:09:82:86:88:37:5d
...
  8    8   id    N4   Online  F-Port  2 NPIV public   (two connections; NPIV)
  9    9   id    N4   Online  F-Port  2 NPIV public

switch> portshow 8
...
portWwn: 20:08:00:05:1e:02:99:c4
portWwn of device(s) connected:
  27:ee:00:0c:29:00:05:a7
  10:00:00:00:c9:58:29:9a

switch> portshow 9
...
portWwn: 20:08:00:05:1e:02:99:c4
portWwn of device(s) connected:
  27:ee:00:0c:29:00:06:a7
  10:00:00:00:c9:58:29:9a

The NPIV VPORT's WWPNs (27:ee:...) show up alongside the physical HBA WWPNs.

NPIV Observed (Cont.)

After boot-up of an NPIV-supported VM (Cont.):
switch> portloginshow 8
Type  PID     World Wide Name          credit df_sz cos
=====================================================
 fe   010801  27:ee:00:0c:29:00:05:a7  16     2048  c   scr=3
 fe   010800  10:00:00:00:c9:58:29:9a  16     2048  c   scr=3
 ff   010801  27:ee:00:0c:29:00:05:a7  12     2048  c   d_id=FFFFFC
 ff   010800  10:00:00:00:c9:58:29:9a  12     2048  c   d_id=FFFFFC

switch> portloginshow 9
Type  PID     World Wide Name          credit df_sz cos
=====================================================
 fe   010901  27:ee:00:0c:29:00:06:a7  16     2048  c   scr=3
 fe   010900  10:00:00:00:c9:58:29:9b  16     2048  c   scr=3
 ff   010901  27:ee:00:0c:29:00:06:a7  12     2048  c   d_id=FFFFFC
 ff   010900  10:00:00:00:c9:58:29:9b  12     2048  c   d_id=FFFFFC


Physical Files Investigation

Browse the datastore to investigate files (screenshot). For an RDM, the .vmdk is a pointer to the LUN.


Alignment Issues


VMFS Datastore and Alignment Problems

(Diagram: an ESX cluster with VM1 through VM4, each with a .vmx and .vmdk file on a VMFS datastore reached over FC / FCoE / iSCSI on a NetApp FAS array.)
VMFS LUNs are set to the vmware type; however, each vmdk holds a file system of some other OS type, depending on the guest OS. This can lead to problems...
NOTE: RDM and other LUNs don't suffer from this problem if the LUN type attribute is set correctly.

Partition Alignment and Misalignment

Failure to align the VM's file system partitions to the physical storage array can hurt performance, turning a single I/O request from the VM into multiple I/O operations.
(Diagram: 512-byte LBAs overlaid on 4-KB WAFL blocks; an unaligned write creates partial writes in WAFL.)

Identifying Partition Alignment on a VM

In the VM, use msinfo32.exe to identify the current partition starting offset. (Screenshot: Windows Server 2003 has an offset of 32256.)


MBRscan Tool
Within the ESX Host Utilities Kit 5.1, use MBRscan to identify current alignment:
# cd /opt/netapp/santools
# ./mbrscan --all
Building file list...
--------------------
/vmfs/volumes/4a96738c-187213b0-a38d-00215e6f2ca0/Dev05s2/Dev05s2-flat.vmdk p1 (NTFS) lba:2048 offset:1048576 aligned:Yes
/vmfs/volumes/4a96738c-187213b0-a38d-00215e6f2ca0/Dev05s2/Dev05s2-flat.vmdk p2 (NTFS) lba:206848 offset:105906176 aligned:Yes
--------------------
/vmfs/volumes/4a96738c-187213b0-a38d-00215e6f2ca0/Win2003 FC VMFS/Win2003 FC VMFS-flat.vmdk p1 (NTFS) lba:63 offset:32256 aligned:No

The Windows Server 2008 R2 (64-bit) partitions were aligned correctly from install; MBRscan confirms that the Windows Server 2003 partition, with its offset of 32,256, is not aligned with the WAFL blocks.


MBRalign Tool
Within the ESX Host Utilities Kit 5.1, use MBRalign to correct alignment on existing VMs:
# cd /opt/netapp/santools
# ./mbralign "/vmfs/volumes/4a96738c-187213b0-a38d-00215e6f2ca0/Win2003 FC VMFS/Win2003 FC VMFS-flat.vmdk"
Part  Type  old LBA  New Start LBA  New End LBA  Length in KB
P1    07    63       64             16755796     8377866
NOTICE:
This tool does not check for the existence of Virtual Machine snapshots or linked clones.
The use of this tool on a vmdk file that has a snapshot or linked clone associated with it can result in unrecoverable data loss and/or data corruption.
Are you sure that no snapshots/linked clones exist for this vmdk? (y/n)y


MBRalign Tool (Cont.)

Within the ESX Host Utilities Kit 5.1, use MBRalign to correct alignment on existing VMs (Cont.):
Creating a backup of /vmfs/volumes/4a96738c-187213b0-a38d-00215e6f2ca0/Win2003 FC VMFS/Win2003 FC VMFS.vmdk
Creating a backup of /vmfs/volumes/4a96738c-187213b0-a38d-00215e6f2ca0/Win2003 FC VMFS/Win2003 FC VMFS-flat.vmdk
Creating a copy the Master Boot Record
Working on partition P1 (3): Starting to migrate blocks from 32256 to 32768.
12801 read ops in 4 sec. 98.88% read (24.87 mB/s). 98.88% written (24.87 mB/s)
Working on space not in any partition: Starting to migrate blocks.
100.00 percent complete. 100.00 percent written.
Making adjustments to /vmfs/volumes/4a96738c-187213b0-a38d-00215e6f2ca0/Win2003 FC VMFS/Win2003 FC VMFS-flat.vmdk.
Adjusting the descriptor file.

(This changes the alignment from 32256 to 32768.)

Verify Alignment Changes

Within the ESX Host Utilities Kit 5.1, use MBRscan to verify alignment:
# cd /opt/netapp/santools
# ./mbrscan --all
Building file list...
--------------------
Failed to open /vmfs/volumes/4a76cd5b-ed77e188-1d1d-00215e6f2ca2/esxconsole-4a76cd30-32f4-2f68-f710-00215e6f2ca0/esxconsole-flat.vmdk - [Device or resource busy]
...
--------------------
/vmfs/volumes/4a96738c-187213b0-a38d-00215e6f2ca0/Win2003 FC VMFS/Win2003 FC VMFS-flat.vmdk p1 (NTFS) lba:64 offset:32768 aligned:Yes
--------------------

MBRscan confirms that the Windows Server 2003 partition now starts at offset 32768 and is aligned with the WAFL blocks.


Verify Alignment Change in VM

In the VM, use msinfo32.exe to verify the new partition starting offset. (Screenshot: Windows Server 2003 confirms the new offset.)


Properly Aligned Partitions for New VMs

If you have new Windows Server 2003 VMs, you can pre-create the partition table to be properly aligned before installation (a compact sketch of the DISKPART session follows the steps):
1. Boot the VM with Microsoft's Windows Preinstall Environment (WinPE) CD
2. Select Start -> Run and enter DISKPART
3. Enter Select Disk0
4. Enter Create Partition Primary Align=32
5. Reboot the VM with the WinPE CD
6. Install the operating system as normal
NOTE: Similar steps can be taken with other operating systems.
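As listed above, the session looks roughly like this (align=32 starts the partition on a 32-KB boundary, which is a multiple of the 4-KB WAFL block; prompts are illustrative):

DISKPART> select disk 0
DISKPART> create partition primary align=32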


Module Summary
In this module, you should have learned to:
Describe the steps to allow a vSphere initiator to
access a LUN on a storage system as a VMFS
datastore
Describe the steps to allow a vSphere initiator to
create a VM with a raw device mapping (RDM) disk
from a storage systems LUN

Exercise
Module 8: vSphere LUN Access
Estimated Time: 60 minutes

EXERCISE
Please refer to your Exercise Guide materials for more instructions.


Red Hat Overview
Module 9
SAN Implementation Workshop


Module Overview
In this module, we will cover the following:
Describe Red Hat Enterprise Linux
Explain how NetApp storage is ideal for LUNs
managed by Red Hat Enterprise Linux


Linux
Linux is a free UNIX-type operating system originally created by Linus Torvalds.
Linux:
Is open source
Is licensed under the GNU GPL
Has a kernel developed by the Linux kernel mailing list


Red Hat and Linux

Red Hat Enterprise Linux 5 (RHEL 5):
Supports Linux kernel 2.6.18
Server editions:
- Red Hat Enterprise Linux Advanced Platform: for mission-critical and enterprise computer systems
- Red Hat Enterprise Linux: for supported network servers; limited to two CPUs
RHEL 5 updates:
- Update 3: the latest stable build during the building of this course
- Update 4: introduces hypervisor technology

Red Hat and Storage

RHEL 5 supports:
SAN:
- FC SAN: the FC protocol by way of Emulex and QLogic HBAs
- IP SAN: iSCSI by way of a built-in software initiator and standard NICs
Logical Volume Manager (LVM) for creating and managing pools of virtual storage
DM Multipath for managing multiple paths to a LUN

NetApp and RHEL 5

NetApp provides an integrated solution that enables storage, delivery, and management of data and content to meet your business needs.
Together, Red Hat and NetApp provide an excellent platform for applications such as:
Oracle
Virtualization with Red Hat Enterprise Linux 5.4


Module Summary
In this module, you should have learned to:
Describe Red Hat Enterprise Linux
Explain how NetApp storage is ideal for LUNs
managed by Red Hat Enterprise Linux


Exercise
Due to hardware constraints, your Windows
Server 2008 R2 (W2K8R2) machine has been
reimaged to Red Hat Enterprise Linux (RHEL)
5.3
The WWPNs of the W2K8R2 are now the
WWPNs of RHEL 5.3
To prepare for this day, this exercise asks you:
To offline all FC-attached LUNs that were
associated with the W2K8R2 FC igroups
To delete the W2K8R2 FC igroups


Exercise
Module 9: Red Hat Overview
Estimated Time: 10 minutes

EXERCISE
Please refer to your Exercise Guide materials for more instructions.


Red Hat FC Connectivity
Module 10
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
Describe multiple path implementation with
Fibre Channel (FC) connectivity for Red Hat
and NetApp systems
Configure FC ports on Red Hat systems
Identify the worldwide node name (WWNN)
and worldwide port name (WWPN) on Red Hat
systems
Set up and verify multiple path FC connectivity
between Red Hat and NetApp systems

FC Topology


FC Exercise Environment
Red Hat 5.3 host: HBA on switch port 6
Storage System 1: 0c on switch port 0, 0d on switch port 1
Storage System 2: 0c on switch port 2, 0d on switch port 3
All connections are Fibre Channel.

Data ONTAP


Review from Day 1

FC protocol service was licensed and started:
system and system2> license add XXXXXX
system and system2> fcp start

The following interfaces were configured as FC targets:
system and system2> fcadmin config
                   Local
Adapter  Type      State       Status
------------------------------------------------
0a       initiator CONFIGURED  online
0b       initiator CONFIGURED  online
0c       target    CONFIGURED  online
0d       target    CONFIGURED  online

Review from Day 1 (Cont.)

Configure the dual storage systems as an active-active configuration.

License cluster and reboot:
system and system2> license add XXXXXX
system and system2> reboot

Verify cfmode:
system> fcp show cfmode
fcp show cfmode: single_image

Enable the cluster:
system> cf enable

Verify the active-active relationship:
system> cf status
Cluster enabled, system2 is up.

Review from Day 1 (Cont.)

system or system2> fcp nodename
Fibre Channel nodename: 50:0a:09:80:86:f7:c7:86 (500a098086f7c786)

system> fcp config
0c: ONLINE <ADAPTER UP> PTP Fabric  host address 011000
    portname 50:0a:09:81:96:f7:c7:86  nodename 50:0a:09:80:86:f7:c7:86
    mediatype auto  speed auto
0d: ONLINE <ADAPTER UP> PTP Fabric  host address 011100
    portname 50:0a:09:82:96:f7:c7:86  nodename 50:0a:09:80:86:f7:c7:86
    mediatype auto  speed auto

Verify the configuration on the partner:
system2> fcp config
0c: ONLINE <ADAPTER UP> PTP Fabric  host address 011200
    portname 50:0a:09:81:86:f7:c7:86  nodename 50:0a:09:80:86:f7:c7:86
    mediatype auto  speed auto
0d: ONLINE <ADAPTER UP> PTP Fabric  host address 011300
    portname 50:0a:09:82:86:f7:c7:86  nodename 50:0a:09:80:86:f7:c7:86
    mediatype auto  speed auto


Red Hat


Red Hat as an FC Initiator

NetApp has supported Red Hat as an FC initiator OS since Red Hat 3.
Red Hat 5.3 has many advantages over previous versions:
Updated FC and iSCSI drivers
Better scalability
Red Hat must be properly configured for FC connectivity.


Red Hat 5.3 Design and Installation

1. Verify the host operating system release, required patches, and the NetApp Linux Host Utilities Kit against the Interoperability Matrix
   Use /etc/redhat-release and uname -a to verify the Red Hat version (see the example after this list)
   The Interoperability Matrix can be found on the NOW site
2. Install compatible host bus adapters (HBAs)
3. Install and configure required HBA drivers and utilities, if needed
4. Verify an HBA:
   All HBA types: lspci
   Emulex: Use /usr/sbin/lpfc/lputil or HBAnyware
   QLogic: Use /usr/bin/scli or SANsurfer
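For example, a hedged sketch of the version checks in step 1 (the host name and exact kernel string will vary; 2.6.18-128.el5 is the RHEL 5.3 kernel used later in this module):

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.3 (Tikanga)
# uname -a
Linux rhel53 2.6.18-128.el5 #1 SMP ... x86_64 GNU/Linux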

Red Hat/Emulex Implementation

After installation, to configure a Red Hat/Emulex implementation:
1. Identify the correct HBA driver
2. Verify that Red Hat has correctly identified the HBA and loaded the driver
3. Identify the WWNN and WWPN on the host
4. Verify connectivity between the initiator and target

Red Hat/Emulex Implementation (Cont.)

Verify that Red Hat has identified the HBA(s):
# cd /sys/class/scsi_host
# ls
host0  host1  host2
(host1 and host2 are the Emulex HBA(s), in this case one HBA with two ports.)

Identify the driver associated with the HBA(s):
# cd host1
# ls
... fwrev
... modeldesc
... modelname
... npiv_info
... portnum
... serialnum
... lpfc_drvr_version
# cat fwrev
2.72A2 (B3F2.72A2), sli-3
# cat lpfc_drvr_version
Emulex LightPulse Fibre Channel SCSI driver 8.2.0.33.3p

(modelname carries the NetApp part number of NetApp-sold HBAs; lpfc_drvr_version shows the currently installed driver.)
2009 NetApp. All rights reserved.

RED HAT/EMULEX IMPLEMENTATION (CONT.)

10-13

SAN Implementation Workshop: Red Hat FC Connectivity

2009 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.

NetApp University - Do Not Distribute

Red Hat/Emulex Implementation (Cont.)


The Interoperability Matrix requires an update to the driver based upon the existing firmware:
# tar zxf lpfc_2.6_driver_kit-8.2.0.39-1.tar.gz
# cd lpfc_2.6_driver_kit-8.2.0.39-1
# ./lpfc-install ...
# reboot

The driver can be found here (2.6.18-128.el5 is the kernel build number of Linux):
# ls /lib/modules/2.6.18-128.el5/kernel/drivers/scsi/lpfc
lpfc.ko

To verify the driver when loaded:
# modprobe -c | grep lpfc

To load the driver (if needed):
# modprobe -v lpfc

10-14

Red Hat/Emulex Implementation (Cont.)


Install the compatible Linux Host Utility Kit (HUK)
Provides Perl scripts to configure and tune Red Hat
Example: the sanlun application to manage LUNs from Red Hat

HUK requires packages to be installed:
libnl.so = libnl-1.0-0.10.pre5.5.x86_64.rpm        (use the 64-bit version)
libnl.so = libnl-1.0-0.10.pre5.5.i386.rpm          (use the 32-bit version)
HBAnyware = elxlinuxapps-4.0a31-8.2.0.39-1-1.tar

NOTE: You must install these libraries in this order
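
A minimal install-and-verify sequence for the 64-bit host used in these slides might look like this (file name taken from the list above; output representative):

# rpm -ivh libnl-1.0-0.10.pre5.5.x86_64.rpm
# rpm -q libnl
libnl-1.0-0.10.pre5.5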

10-15

Red Hat/Emulex Implementation (Cont.)


Install HBAnyware:
# tar xvf elxlinuxapps-4.0a31-8.2.0.39-1-1.tar
# cd elxlinuxapps-4.0a31-8.2.0.39-1-1
# ./install
Select desired mode of operation for HBAnyware
1 Local Mode : HBA's on this Platform can be managed by
HBAnyware clients on this Platform Only.
2 Managed Mode: HBA's on this Platform can be managed by
local or remote HBAnyware clients.
3 Remote Mode : Same as '2' plus HBAnyware clients on this
Platform can manage local and remote HBA's.
...

10-16

Red Hat/Emulex Implementation (Cont.)


Install HUK:
# tar zxf netapp_linux_host_utilities_5_0.tar.gz
# cd netapp_linux_host_utilities_5_0
# ./install
# cd /opt/netapp/santools
# ./san_version
NetApp Linux Host Utilities version 5.0

Identify the WWPN of the HBA(s):
# cd /opt/netapp/santools
# sanlun fcp show adapters
host1    WWPN:10000000c96b77b4
host2    WWPN:10000000c96b77b3
(host1 and host2 are the same names seen under /sys/class/scsi_host)
or
# cat /sys/class/fc_host/host1/port_name
0x10000000c96b77b4
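
To list every FC port in one pass, a small loop over sysfs also works (a sketch; the WWPNs shown match those above):

# for h in /sys/class/fc_host/host*; do echo "$h: $(cat $h/port_name)"; done
/sys/class/fc_host/host1: 0x10000000c96b77b4
/sys/class/fc_host/host2: 0x10000000c96b77b3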

10-17

Discovery and Session Creation


[Diagram: an FC initiator and an FC target connected through the Fibre Channel fabric. When ports are active (and properly zoned), discovery is automatic.]

10-18

Data ONTAP Discovery of Initiators


Verify connectivity from the storage system:
system> fcp show initiators
Initiators connected on adapter 0c:
Portname                     Group
--------                     -----
10:00:00:00:c9:6b:77:b3
10:00:00:00:c9:6b:77:b4
(These are the Red Hat WWPNs.)

NOTE: For convenience, alias the Red Hat WWPNs
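
If your Data ONTAP release includes the fcp wwpn-alias command, one way to set such aliases is shown below; the alias names are examples only:

system> fcp wwpn-alias set rhel_port1 10:00:00:00:c9:6b:77:b4
system> fcp wwpn-alias set rhel_port0 10:00:00:00:c9:6b:77:b3
system> fcp wwpn-alias show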

10-19

Module Summary

10-20

Module Summary
In this module, you should have learned to:
Describe multiple path implementation with
Fibre Channel (FC) connectivity for Red Hat
and NetApp systems
Configure FC ports on Red Hat systems
Identify the worldwide node name (WWNN)
and worldwide port name (WWPN) on Red Hat
systems
Set up and verify multiple path FC connectivity
between Red Hat and NetApp systems
10-21

Exercise
Module 10: Red Hat FC Connectivity
Estimated Time: 30 minutes

Please refer to your Exercise Guide materials for more instructions.

10-22

Red Hat iSCSI Connectivity
Module 11
SAN Implementation Workshop

11-1

Module Objectives
By the end of this module, you should be able to:
Describe multiple path implementation with
iSCSI connectivity for Red Hat and NetApp
systems
Configure network ports on Red Hat systems
Identify the worldwide node (WWN) on Red
Hat systems
Set up and verify multiple path IP connectivity
between Red Hat and NetApp systems

11-2

IP Connectivity

11-3

IP Exercise Environment
[Diagram: the RHEL host connects over Ethernet to ports e0b and e0c on Storage System 1 and Storage System 2 for iSCSI. The e0a ports are used for management access only, for storage systems without e0M.]

11-4

Data ONTAP

11-5

Review from Day 1


iSCSI service licensed and started (using system 1 for iSCSI):
system> license add XXXXXX
system> iscsi start

The following interfaces are configured for iSCSI:
system> iscsi interface show
Interface e0a disabled
Interface e0b enabled
Interface e0c enabled
Interface e0d disabled
(NOTE: No TPGroup changes.)

Identify the WWN on NetApp storage:
system> iscsi nodename
iSCSI target nodename: iqn.1992-08.com.netapp:system

11-6

Red Hat

11-7

Red Hat as an iSCSI Initiator


NetApp has supported Red Hat as an iSCSI initiator OS since Red Hat 4
Red Hat 5.3 has many advantages over previous versions, with:
Packages providing iSCSI device drivers and utilities
An iSCSI software initiator for standard network interfaces
Red Hat must be configured properly for iSCSI connectivity over a standard network interface

11-8

Red Hat 5.3 Design and Installation


1. Verify host operating system releases, required patches, and the NetApp Linux Host Utility Kit against the Interoperability Matrix:
   Use /etc/redhat-release and uname -a to verify the Red Hat version
   The Interoperability Matrix can be found on the NOW site
2. Install iSCSI software packages and patches (see the check after this list):
   # rpm -ivh iscsi-initiator-utils-6.2.0.868-0.18.el5.x86_64.rpm
3. Identify and verify that a network interface is properly configured
4. Install the compatible Linux Host Utility Kit
   Provides Perl scripts to configure and tune Red Hat for iSCSI
   Example: the sanlun application to manage LUNs from Red Hat
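
To confirm the package landed, query it back (version string matches the kit above):

# rpm -q iscsi-initiator-utils
iscsi-initiator-utils-6.2.0.868-0.18.el5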
11-9

Red Hat/NIC Implementation


After installation, to configure a Red Hat standard
NIC software initiator implementation:
1. Identify the local network interface(s) to use
2. Verify the iSCSI service and WWN for host
3. Configure authentication security if necessary
4. Identify which method of discovery to use and
enter the storage systems portal IP address
5. Verify discovery and sessions with the target

11-10

Red Hat/NIC Implementation (Cont.)


1. Using the software initiator, Red Hat supports iSCSI over a standard network interface
Investigate and configure interfaces:
# ifconfig -a
eth0    Link encap:Ethernet  HWaddr 00:21:5E:6F:18:C4
        inet addr:10.254.132.63  Bcast:10.254.135.255  Mask:255.255.252.0
        inet6 addr: fe80::221:5eff:fe6f:18c4/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
...
eth1    Link encap:Ethernet  HWaddr 00:21:5E:6F:18:C6
        BROADCAST MULTICAST  MTU:1500  Metric:1 ...
(eth1 is the unconfigured adapter; configure it with ifconfig and add it to the rc scripts.)
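
A minimal way to bring eth1 up and make the setting persistent is shown below; the IP address is illustrative only:

# ifconfig eth1 10.254.134.63 netmask 255.255.252.0 up
# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.254.134.63
NETMASK=255.255.252.0
ONBOOT=yes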
11-11

Red Hat/NIC Implementation (Cont.)


2. View the iSCSI service and initiator node name
Start the iSCSI service:
# service iscsi start

Verify the status of the iSCSI service:
# service iscsi status
iscsid (pid 5236 5235) is running...

Identify the initiator's node name:
# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel        (the Red Hat WWN)
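
To have the service start on every boot, the standard RHEL runlevel tooling applies (a sketch, not part of the original slide):

# chkconfig iscsi on
# chkconfig --list iscsi
iscsi   0:off  1:off  2:on  3:on  4:on  5:on  6:off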
11-12

Discovery
[Diagram: an iSCSI initiator and target connected over Ethernet. Discovery is not automatic.]

11-13

iSCSI Authentication in Red Hat 5


3. To increase security, iSCSI may be configured to require authentication
Authentication methods:
CHAP session authentication
- Unidirectional: targets will authenticate initiators
- Bidirectional: initiators and targets will authenticate each other
CHAP discovery (send targets) authentication
- Unidirectional: targets will authenticate initiators
- Bidirectional: initiators and targets will authenticate each other
This course will discuss using CHAP session authentication, but will not use it in the exercise

11-14

iSCSI Unidirectional CHAP Authentication


Configure the initiator's user name and password (uncomment these lines):
# vi /etc/iscsi/iscsid.conf
...
node.session.auth.username = username
node.session.auth.password = thisismysecret

To configure CHAP, enable CHAP authentication (again, uncomment):
# vi /etc/iscsi/iscsid.conf
...
node.session.auth.authmethod = CHAP
# service iscsi restart

On the storage system, register the CHAP secret (the same user name and password):
system> iscsi security add
-i iqn.1994-05.com.redhat:rhel
-s CHAP
-p thisismysecret
-n username
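
To confirm the entry on the storage system, iscsi security show lists the registered initiators (output abbreviated and representative):

system> iscsi security show
init: iqn.1994-05.com.redhat:rhel auth: CHAP ...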

11-15

iSCSI Bidirectional CHAP Authentication


Configure unidirectional CHAP and then configure the reverse direction:
system> iscsi security add
-i iqn.1994-05.com.redhat:rhel
-s CHAP
-p thisismysecret
-n username
-o thisismysecret2
-m username2
(The inbound and outbound user name and password pairs cannot be the same.)

On Red Hat, register the storage system's CHAP secret (don't forget to uncomment):
# vi /etc/iscsi/iscsid.conf
...
node.session.auth.username_in = username2
node.session.auth.password_in = thisismysecret2

On Red Hat, restart the iSCSI service:
# service iscsi restart

11-16

Red Hat/NIC Implementation


4. Discovery is possible through one of:
Static discovery
- iSCSI targets are added manually
Send-targets discovery
- The IP address of the target is added
- The initiator communicates with the target over port 3260
Internet Storage Name Service (iSNS)
- Centralized management of discovery and configuration of iSCSI networks
This course will focus on the send-targets discovery method

11-17

Red Hat/NIC Implementation (Cont.)


Set up iSCSI interfaces (for multiple paths); use vi or iscsiadm to configure:
# iscsiadm -m iface -I iface0 --op=new
# iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
# iscsiadm -m iface -I iface1 --op=new
# iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1
(eth0 and eth1 are Red Hat's interface names.)

Verify the iSCSI interfaces:
# iscsiadm -m iface
iface0 tcp,default,eth0
iface1 tcp,default,eth1
# ls /var/lib/iscsi/ifaces
iface0 iface1

11-18

Red Hat/NIC Implementation (Cont.)


Set up send-targets discovery with the interfaces (10.254.133.239 is the IP address of an iSCSI-enabled interface on the storage system):
# iscsiadm -m discovery -t sendtargets -p 10.254.133.239 -I iface0 -I iface1
10.254.133.239:3260,1001 iqn.1992-08.com.netapp:system
10.254.133.239:3260,1001 iqn.1992-08.com.netapp:system
10.254.133.240:3260,1002 iqn.1992-08.com.netapp:system
10.254.133.240:3260,1002 iqn.1992-08.com.netapp:system
(Each target portal is discovered by each Red Hat interface.)

11-19

Red Hat/NIC Implementation (Cont.)


5. Explore the iSCSI targets discovered:
# iscsiadm -m node --op=show
#BEGIN RECORD 2.0-868
node.name = iqn.1992-08.com.netapp:system
node.tpgt = 1002
node.startup = automatic
iface.hwaddress = 00:21:5E:6F:18:C6
iface.iscsi_ifacename = iface1
iface.net_ifacename = default
iface.transport_name = tcp
node.discovery_address = 10.254.133.239
...

#BEGIN RECORD 2.0-868


node.name = iqn.1992-08.com.netapp:system
node.tpgt = 1002
node.startup = automatic
iface.hwaddress = 00:21:5E:6F:18:C4
iface.iscsi_ifacename = iface0
...
11-20

Red Hat/NIC Implementation (Cont.)


Observe the discovered targets:
# iscsiadm -m node
10.254.133.239:3260,1001 iqn.1992-08.com.netapp:system
10.254.133.239:3260,1001 iqn.1992-08.com.netapp:system
10.254.133.240:3260,1002 iqn.1992-08.com.netapp:system
10.254.133.240:3260,1002 iqn.1992-08.com.netapp:system

11-21

Red Hat/NIC Implementation (Cont.)


Create a session with the discovered targets:
# iscsiadm -m node -l
Logging in to [iface: iface1, target: iqn.1992-08.com.netapp:system, portal: 10.254.133.240,3260]
Logging in to [iface: iface0, target: iqn.1992-08.com.netapp:system, portal: 10.254.133.240,3260]
Logging in to [iface: iface1, target: iqn.1992-08.com.netapp:system, portal: 10.254.133.239,3260]
Logging in to [iface: iface0, target: iqn.1992-08.com.netapp:system, portal: 10.254.133.239,3260]
...

11-22

Red Hat iSCSI Sessions


View the current sessions:
# iscsiadm -m session
tcp: [1] 10.254.133.240:3260,1002 iqn.1992-08.com.netapp:system
tcp: [2] 10.254.133.240:3260,1002 iqn.1992-08.com.netapp:system
tcp: [3] 10.254.133.239:3260,1001 iqn.1992-08.com.netapp:system
tcp: [4] 10.254.133.239:3260,1001 iqn.1992-08.com.netapp:system

View the sessions on the storage system:
system> iscsi session show
Session 30
Initiator Information
Initiator Name: iqn.1994-05.com.redhat:rhel
ISID: 00:02:3d:01:00:00
Initiator Alias: dev05s2.development.netappu.com
Session 31
Initiator Information
...

11-23

Module Summary

11-24

Module Summary
In this module, you should have learned to:
Describe multiple path implementation with
iSCSI connectivity for Red Hat and NetApp
systems
Configure network ports on Red Hat systems
Identify the worldwide node (WWN) on Red
Hat systems
Set up and verify multiple path IP connectivity
between Red Hat and NetApp systems

11-25

Exercise
Module 11: Red Hat iSCSI Connectivity
Estimated Time: 30 minutes

Please refer to your Exercise Guide materials for more instructions.

11-26

Red Hat LUN Access
Module 12
SAN Implementation Workshop

12-1

Module Objectives
By the end of this module, you should be able to:
Describe the steps to allow a Red Hat initiator
to access a LUN on a storage system

12-2

LUN Access Review

12-3

LUN Access
To connect an initiator to a target's LUN:
1. Create an igroup if necessary
2. Create the LUN
3. Map the LUN to the igroup
4. Find the LUN on the initiator
5. Prepare the LUN as a new disk on the
initiator

12-4

LUN Access Review


[Diagram: the initiator's file system reaches LUNa and LUNb on the target over Ethernet (iSCSI) and Fibre Channel. My_IP_igroup contains iqn.1999-05.com.redhat:rhel (OS type: linux); My_FC_igroup contains 10:00:00:00:c9:6b:77:b4 (OS type: linux).]

12-5

Data ONTAP Setup

12-6

Data ONTAP SAN Configuration


Create the storage containers; in this exercise, you can use either the command-line interface or NetApp System Manager:
Create an aggregate:                            system> aggr create ...
Create a volume:                                system> vol create ...
Ensure automatic Snapshot copies are disabled:  system> vol options ... nosnap on
Create a qtree (optional):                      system> qtree create ...
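
Put together for the exercise environment, the sequence might look like this (names and sizes are placeholders, not mandated by the course):

system> aggr create aggr1 16
system> vol create vol_SAN1 aggr1 10g
system> vol options vol_SAN1 nosnap on
system> qtree create /vol/vol_SAN1/qt1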

12-7

Create an igroup
You can create an igroup (LUN masking) using:
Command-line interface (CLI)
NetApp System Manager
Provisioning Manager
Host Utilities Kit

# cd /opt/netapp/santools
# sanlun fcp show adapter -c
Enter this controller command to create an initiator group for this system:
igroup create -f -t linux "dev05s2.development.netappu.com" 10000000c96b77b4 10000000c96b77b3
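
After running the suggested command on the controller, igroup show confirms the membership (the login state shown is illustrative):

system> igroup show
    dev05s2.development.netappu.com (FCP) (ostype: linux):
        10:00:00:00:c9:6b:77:b4 (logged in on: 0c)
        10:00:00:00:c9:6b:77:b3 (logged in on: 0d)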
12-8

Create a LUN
You can create a LUN using:

CLI
NetApp System Manager
Provisioning Manager
SnapDrive (see appendix)
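
From the CLI, creating and mapping a LUN for the Red Hat host might look like this (the path, igroup name, and LUN ID are placeholders):

system> lun create -s 2g -t linux /vol/vol_SAN1/qt1/lun0
system> lun map /vol/vol_SAN1/qt1/lun0 dev05s2.development.netappu.com 0
system> lun show -m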

12-9

Red Hat Setup

12-10

Red Hat Steps


To connect an initiator to a target's LUN:
1. Create an igroup
2. Create the LUN
3. Map the LUN to the igroup
4. Find the LUN on the initiator
5. Prepare the LUN as a new disk on the
initiator

12-11

4. Find the LUN / 5. Prepare the LUN


Rescan the HBA:
# cd /usr/sbin/lpfc
# ./lun_scan

Use the fdisk -l command:
# fdisk -l
...
Disk /dev/sdb: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Disk /dev/sdc doesn't contain a valid partition table
...
(The LUN shows up 8 times, as sdb through sdi; therefore use multipathing.)

12-12

Multipath I/O

12-13

Multipath I/O Configuration

Red Hat may be configured for multipath I/O (MPIO) using:
DM-Multipath:
- Allows native support
- Provides redundancy
- Improves performance
Veritas Storage Foundation:
- Heterogeneous storage management software
In this module, we will focus on DM-Multipath

12-14

Device Identifiers
RHEL will see a total of eight devices without MPIO in your exercise environment:
/dev/sdb ... /dev/sdi
[Diagram: the RHEL host has multiple paths to LUNs on Storage System 1 and Storage System 2.]
Under DM-Multipath's control, these devices may be seen as:
/dev/mapper/mpathn - used when creating logical volumes
/dev/mpath/mpathn  - a convenience list under one directory; never use
/dev/dm-n          - internal use; never use
(The mpathn names appear when the user_friendly_names option is set to yes.)

12-15

DM-Multipath Configuration Overview


1. Install/verify the device-mapper-multipath rpm
2. Edit the multipath.conf file:
   Comment out the default blacklist
   Change any of the existing defaults as needed
   Save the configuration file
3. Start the multipath daemons
4. Create the multipath device with the multipath command

12-16

DM-Multipath Configuration
1. Install/verify device-mapper-multipath:
In Red Hat 5.3, DM-Multipath is installed by default
Verify the rpm installation:
# rpm -q device-mapper
device-mapper-1.02.28-2.el5
device-mapper-1.02.28-2.el5
# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.7-23.el5

12-17

DM-Multipath Configuration (Cont.)


2. Edit the multipath.conf file
Comment out the default blacklist:
#blacklist {
#        devnode "*"
#}

Verify that user_friendly_names is set to yes:
defaults {
        user_friendly_names yes
}

Disable the local drive from using DM-Multipath:
blacklist {
        devnode "^sda$"
}

12-18

DM-Multipath Configuration (Cont.)


2. Edit the multipath.conf file (Cont.)
Set up the NetApp device:
devices { device {
        vendor "NetApp"
        product "LUN"
        path_grouping_policy group_by_prio
        getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout "/sbin/santools/mpath_prio_ontap /dev/%n"
        features "1 queue_if_no_path"
        path_checker readsector0
        failback immediate
}}

path_grouping_policy values:
failover = 1 path per priority group
multibus = all valid paths in 1 priority group
group_by_serial = 1 priority group per serial number
group_by_prio = 1 priority group per priority value
group_by_node_name = 1 priority group per WWNN

12-19

DM-Multipath Configuration (Cont.)


3. Configure the multipath daemons
Start up (load the driver, then start the service):
# modprobe dm-multipath
# service multipathd start
Starting multipathd daemon:

Reload:
# service multipathd reload

Stop:
# service multipathd stop

Restart:
# service multipathd restart

Check the status:
# service multipathd status

12-20

DM-Multipath Configuration (Cont.)


4. Create the multipath device with the multipath command:
# multipath -v2
create: mpath0 (360a980004335432d576f526c526d6d6a) NETAPP,LUN
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][n/a]
\_ round-robin 0 [prio=16][undef]
 \_ 1:0:2:0 sdd 8:48   [undef][ready]
 \_ 1:0:3:0 sde 8:64   [undef][ready]
 \_ 2:0:2:0 sdh 8:112  [undef][ready]
 \_ 2:0:3:0 sdi 8:128  [undef][ready]
\_ round-robin 0 [prio=4][undef]
 \_ 1:0:0:0 sdb 8:16   [undef][ready]
 \_ 1:0:1:0 sdc 8:32   [undef][ready]
 \_ 2:0:0:0 sdf 8:80   [undef][ready]
 \_ 2:0:1:0 sdg 8:96   [undef][ready]
(The LUN is now visible on all eight paths.)

12-21

Single Disk or Manage by Volume Manager


[Diagram: the initiator mounts LUNa and LUNb under /, with the disks appearing under /dev; the target presents the LUNs through My_IP_igroup (iqn.1999-05.com.redhat:rhel, OS type linux) and My_FC_igroup (10:00:00:00:c9:6b:77:b4, OS type linux) over Ethernet and Fibre Channel. Treat LUNs as a single disk or combine them using a volume manager.]

LUNs may be used as a single disk or combined together using a host-based volume manager.

12-22

Single Disk Configuration


Format the multipath device:
# fdisk /dev/mapper/mpath0
Command (m for help): n
Command action
e
extended
p
primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1): 1
Last cylinder or +size or +sizeM or +sizeK
(1-261, default 261): 261
Command (m for help): t
Hex code (type L to list codes): 83
Command (m for help): w
12-23

Single Disk Configuration (Cont.)


Create a file system in the multipath device:
# mkfs -t ext3 /dev/mapper/mpath0

Mount the multipath device:
# mkdir /mnt/lunx
# mount /dev/mapper/mpath0 /mnt/lunx

Test access:
# cd /mnt/lunx
# touch foo

12-24

Volume Managers
Red Hat 5.3 supports the following volume managers:
Logical Volume Manager (LVM) 2
- Native support
- Flexible capacity
- New ASCII metadata format
Veritas
This module and the accompanying exercise will focus on LVM2

12-25

LVM2 Architecture
[Diagram: LUN 1 and LUN 2 become physical volumes; the physical volumes are combined into a volume group, from which logical volumes are allocated.]

To create an LVM2 logical volume, the physical volumes and/or LUNs are combined into a volume group (VG). This creates a pool of disk space out of which LVM2 logical volumes (LV) can be allocated. This is very much like how NetApp can assign physical disks to an aggregate (the analog of the LVM2 volume group) and then create flexible volumes (the analog of the LVM2 logical volume) out of the resources of the aggregate.

12-26

LVM2 Configuration Overview


1. Initialize partitions for an LVM2 volume group:
   This step labels the partitions appropriately
2. Create a volume group
3. Create a logical volume from the resources in a volume group
4. Create a file system with the logical volume
5. Mount the logical volume:
   Create a mount point
   Mount the logical volume

12-27

LVM2 Configuration
1. Initialize partitions for an LVM2 volume group
Identify the LUNs to add to the volume group:
# multipath -v1 -l
mpath0 mpath2

# dd if=/dev/zero of=/dev/mapper/mpath0 bs=512 count=1
(Not required if it is a new LUN; LVM2 requires no partition table.)

Create physical volumes:
# lvm
lvm> pvcreate /dev/mapper/mpath0 /dev/mapper/mpath2

View physical volumes:
lvm> pvs
PV          VG          Fmt   Attr  PSize   PFree
/dev/dm-2               lvm2  --    2.00G   2.00G
/dev/dm-3               lvm2  --    2.00G   2.00G
/dev/sda2   VolGroup00  lvm2  a-    68.12G  0

12-28

LVM2 Configuration (Cont.)


2. Create a volume group:
lvm> vgcreate vgGroup1 /dev/mapper/mpath0 /dev/mapper/mpath2
lvm> vgs
VG          #PV  #LV  #SN  Attr    VSize   VFree
VolGroup00    1    2    0  wz--n-  68.12G  0
vgGroup1      2    0    0  wz--n-  3.99G   3.99G
lvm> vgdisplay vgGroup1
--- Volume group ---
VG Name               vgGroup1
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  1
VG Access             read/write
VG Status             resizable
....

12-29

LVM2 Configuration (Cont.)


3. Create a logical volume from the resources in a volume group:
lvm> lvcreate -L 3G -n lvVolA vgGroup1
lvm> lvs
LV      VG        Attr    LSize ...
...
lvVolA  vgGroup1  -wi-a-  3.00G
lvm> lvdisplay -v /dev/vgGroup1/lvVolA
Using logical volume(s) on command line
--- Logical volume ---
LV Name          /dev/vgGroup1/lvVolA
VG Name          vgGroup1
LV UUID          ViZevW-AxTx-UUKe-ASig-SDOG-J3FN-feRDf7
LV Write Access  read/write
LV Status        available
...

12-30

LVM2 Configuration (Cont.)


4. Create a file system with the logical volume:
EXT3 file system:
# mkfs -t ext3 /dev/vgGroup1/lvVolA

GFS file system:
# gfs_mkfs -p lock_nolock -j 1 /dev/vgGroup1/lvVolA

NOTE: The exercise environment will use the EXT3 file system

12-31

LVM2 Configuration (Cont.)


5. Mount the logical volume
Create a mount point and mount (or use /etc/fstab):
# mkdir /mnt/lvVolAfs
# mount /dev/mapper/vgGroup1-lvVolA /mnt/lvVolAfs

Test access:
# cd /mnt/lvVolAfs
# touch foo
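
If you do persist the mount, a representative /etc/fstab entry using the names above is shown here; the _netdev option defers mounting until networking is up, which matters for iSCSI-backed LUNs:

/dev/mapper/vgGroup1-lvVolA  /mnt/lvVolAfs  ext3  _netdev  0 0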

12-32

Red Hat Stack


[Diagram: the Red Hat storage stack, bottom to top.
SCSI devices path:          LUN 1 = /dev/sdb - /dev/sdi; LUN 2 = /dev/sdj - /dev/sdm
DM-Multipath devices path:  LUN 1 = /dev/mapper/mpath0; LUN 2 = /dev/mapper/mpath2
Single volume:              an EXT3 file system on /dev/mapper/mpath0, mounted at /mnt/lunx
LVM2:                       LUN 1 and LUN 2 join a volume group; logical volumes are allocated from it, and an EXT3 file system on one logical volume is mounted at /mnt/lvVolAfs]

12-33

Module Summary

12-34

Module Summary
In this module, you should have learned to:
Describe the steps to allow a Red Hat initiator
to access a LUN on a storage system

12-35

Exercise
Module 12: Red Hat LUN Access
Estimated Time: 60 minutes

Please refer to your Exercise Guide materials for more instructions.

12-36

LUN Provisioning
Module 13
SAN Implementation Workshop

13-1

Module Objectives
By the end of this module, you should be able to:
Describe how and when a LUN consumes space from its containing volume
Discuss backup guarantees through Snapshot reserve
Discuss the overwrite guarantee for space-reserved LUNs
Analyze the default LUN configuration and two thin provisioning configurations

13-2

Space-Reserved LUNs

13-3

Space Reservation
LUNs are either:
Space reserved, that is, full provisioning (default)
- The LUN size is reserved, or taken out of the volume, when the LUN is created
- May provide overwrite protection, depending upon the fractional reserve setting (discussed later)
- Only allowed on a FlexVol volume with a volume or file space guarantee
Non-space reserved, that is, thin provisioning
- Data is taken out of the volume only when written to the LUN by the host client

LUN reservation is set by:
lun create -o noreserve ...
lun set reservation {enable|disable}
or during the lun setup command:
... Do you want the LUN to be space reserved? [y]:

13-4

Protection for LUNs


When creating a LUN, two objectives become apparent:
How to create backups for a LUN
How to ensure that an application can continue to write to a LUN

To analyze these objectives, this presentation will investigate three concepts:
Snapshot copies
Snapshot reserve
Fractional reserve

13-5

Backup Guarantee

13-6

Snapshot Copies
Snapshot copies allow quick and efficient backups of volumes that contain a LUN
However, when taking Snapshot copies, Data ONTAP cannot guarantee that no application is writing to the LUN
Result: possibly inconsistent Snapshot copies

To guarantee consistent Snapshot copies:
Shut down applications using the LUN
Unmount the file system
Use APIs to activate hot backup modes:
SnapDrive
SnapManager

13-7

Snapshot Reserve
Snapshot reserve defines a percentage of the volume that is reserved for Snapshot copies
Set at the volume level:
system> snap reserve
Volume vol_SAN1: current snapshot reserve is 20% or 2097152 k-bytes.
[Diagram: Volume 1 with 20% of its space set aside as Snapshot reserve.]
Volumes used in SAN environments should have the Snapshot reserve set to zero
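
Setting the reserve to zero is a one-line change; the volume name comes from the example above, and the output is representative:

system> snap reserve vol_SAN1 0
system> snap reserve
Volume vol_SAN1: current snapshot reserve is 0% or 0 k-bytes.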

13-8

Verify Space Available for a LUN


lun maxsize returns the maximum possible
size of a LUN on a given volume or qtree with
or without Snapshot reserve
system> lun maxsize /vol/vol_SAN1
Space available for a LUN of type: solaris, aix, hpux,
linux, netware, vmware, openvms or image
Without snapshot reserve:
5.9g (6356467712)
With snapshot reserve:
2.0g (2134900736)
With complete snapshot reserve:
2.0g (2134900736)
Space available for a LUN of type: windows, windows_gpt or
windows_2008
Without snapshot reserve:
5.9g (6349916160)
With snapshot reserve:
2.0g (2130347520)
With complete snapshot reserve:
2.0g (2130347520)
13-9

Overwrite Guarantee

13-10

Overwrite Protection
Overwrites
When a Snapshot copy is first taken, the data
blocks are owned by the active file system and
the Snapshot copy
Only as the data is overwritten in a LUN are the
data blocks exclusively owned by the Snapshot
copy

Fractional reserve
Provides extra protection for ensuring that a host
can write to a LUN that has Snapshot copies
Independent of Snapshot reserve
13-11

Fractional Reserve Defined


Fractional reserve:
A volume attribute:
vol options vol_name fractional_reserve [%]
Fractional reserve can be between 0% and 100%
- Defaults to 100%
- Less than 100% allows thin provisioning of the Snapshot space of LUNs
A multiplier for space-reserved LUNs
- Space is reserved when a Snapshot copy is taken
- Equal to the LUN's Snapshot size, up to the total overwrite protection size
- Total overwrite protection = LUN size x fractional reserve
The overwrite protection space is used only after all other space in the volume
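
A quick worked example of the multiplier, using the 2-GB LUN from the walkthrough that follows:

Total overwrite protection = LUN size x fractional reserve
                           = 2 GB x 100% = 2 GB

With fractional reserve lowered to 50%, the same LUN would set aside at most 1 GB for overwrites.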
13-12

Fractional Reserve Example


A 10-GB volume is created:
80% Active File System (AFS) and 20% Snapshot Reserve (SSR)
[Diagram: the df -r report for vol1, with space reservation enabled and fractional reserve 100%; the volume bar is drawn in 1-GB segments.]

13-13

Fractional Reserve Example (Cont.)


Create a space-reserved LUN:
lun create -s 2g -t windows /vol/vol1/LUN1
[Diagram: the df -r report now shows LUN1 (2 GB) fully reserved within vol1, alongside the SSR; fractional reserve 100%.]

13-14

Fractional Reserve Example (Cont.)


Write 100% to the LUN
[Diagram: df -r report; LUN1 (2 GB) is now full, its reservation unchanged; fractional reserve 100%.]

13-15

Fractional Reserve Example (Cont.)


First Snapshot copy is created:
snap create vol1 snap1
[Diagram: df -r report; actual LUN usage is still 4 GB (the LUN plus its overwrite reserve); fractional reserve 100%.]

13-16

Fractional Reserve Example (Cont.)


Overwrite the entire LUN
[Diagram: df -r report; actual LUN usage is still 4 GB; fractional reserve 100%.]

13-17

Fractional Reserve Example (Cont.)


Second Snapshot copy is created:
snap create vol1 snap2
[Diagram: df -r report; actual LUN usage is still 4 GB; fractional reserve 100%.]

13-18

Fractional Reserve Example (Cont.)


Overwrite the entire LUN again
[Diagram: df -r report; actual LUN usage is still 4 GB; fractional reserve 100%.]

13-19

Fractional Reserve Example (Cont.)


Third Snapshot copy is created:
snap create vol1 snap3
[Diagram: df -r report; actual LUN usage is still 4 GB; fractional reserve 100%.]

13-20

Fractional Reserve Example (Cont.)


Overwrite the entire LUN again
[Diagram: df -r report; actual LUN usage is still 4 GB; fractional reserve 100%.]

13-21

Fractional Reserve Example (Cont.)


Fourth Snapshot copy is created:
snap create vol1 snap4
[Diagram: df -r report; actual LUN usage is still 4 GB; fractional reserve 100%.]

13-22

Fractional Reserve Example (Cont.)


Overwrite the entire LUN again
[Diagram: df -r report; actual LUN usage is now 2 GB; fractional reserve 100%.]

13-23

Fractional Reserve Example (Cont.)


Fifth Snapshot copy fails:
Data ONTAP cannot guarantee the overwrite space (= Snapshot size x fractional reserve)
[Diagram: df -r report; the volume has run out of space; fractional reserve 100%.]

13-24

Analysis

13-25

Snapshot and Fractional Reserve


Snapshot reserve sets aside space in a volume for backups
- The Snapshot reserve may expand into the active file system
Fractional reserve is used in calculating the amount of space in a volume set aside to ensure overwrites
- If a LUN is completely filled and a Snapshot copy is taken, then with a fractional reserve of 100% there is enough space guaranteed to completely overwrite the LUN and still preserve the old data through the Snapshot copy

13-26

Snapshot and Fractional Reserve (Cont.)


Fractional reserve may not be an efficient use of space
Even with fractional reserve set to 100%, the system still ran out of space
Fractional reserve only delayed the inevitable

Example 1: Fully Provisioned
Conclusion: better management of Snapshot copies is needed

13-27

Snapshot and Fractional Reserve (Cont.)


Fractional reserve may not be an efficient use of space
Administrators have to plan a larger volume size to provide for the guaranteed overwrite reserve
And the LUN may never need the overwrite reserve

Example 2: Fully Provisioned
Conclusion: thin provisioning

13-28

Snapshot / Fractional Reserve: Conclusion


Use fractional reserve if you need guaranteed overwrites
To minimize space usage, you may consider disabling fractional reserve

However, Snapshot copies can fill up a volume if not managed properly
This could prevent writes to a LUN if no space is available

To better manage Snapshot copies, administrators may:
Delete Snapshot copies (automatically)
Expand the size of the volume that contains the LUN

13-29

Snapshot Automatic Delete

Snapshot autodelete determines when (if) Snapshot copies will be automatically deleted
Set at the volume level:
snap autodelete vol [on|off|show|reset]
If autodelete is enabled, then options may be set:
snap autodelete vol option value

Option              Values
commitment          try, disrupt
trigger             volume, snap_reserve, space_reserve
target_free_space   1-100
delete_order        oldest_first, newest_first
defer_delete        scheduled, user_created, prefix, none
prefix              <string>
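For example, a hedged sketch of enabling autodelete so that copies are removed oldest-first when the Snapshot reserve fills; the volume name is hypothetical:

system> snap autodelete vol_SAN1 trigger snap_reserve
system> snap autodelete vol_SAN1 delete_order oldest_first
system> snap autodelete vol_SAN1 on
system> snap autodelete vol_SAN1 show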


Volume Autosize
You might want to grow the volume
vol autosize determines if a volume should grow when nearly full
Both Snapshot autodelete and vol autosize use the value wafl_reclaim_threshold
Data ONTAP 7.1 - 7.2.3: 98%
Data ONTAP 7.2.4 and later (the threshold depends on volume size):

Variable Name               Volume Size                       Value
wafl_reclaim_threshold_t    tiny vols < 20G                   threshold = 85%
wafl_reclaim_threshold_s    small vols from 20G to < 100G     threshold = 90%
wafl_reclaim_threshold_m    medium vols from 100G to < 500G   threshold = 92%
wafl_reclaim_threshold_l    large vols from 500G to < 1T      threshold = 95%
wafl_reclaim_threshold_xl   xlarge vols from 1T up            threshold = 98%

VOLUME AUTOSIZE
This value, when changed from the default, is not persistent; it will change back to the default
value after a reboot. So if you want to change this value (for example, to 90% for tiny volumes of
less than 20G) and have it persist after reboots, then you should add the following lines to
EACH /etc/rc file on BOTH controllers:
priv set -q diag;
setflag wafl_reclaim_threshold_t 90;
priv set;


Volume Autosize (Cont.)

Configuration:
Set at the volume level
Possible values:
ON
Increment size (default 5% of original size)
Maximum size (default 120% of original size)
OFF

vol autosize vol_name [-m size[k|m|g|t]] [-i size[k|m|g|t]] [on|off|reset]

VOLUME AUTOSIZE (CONT.)
NOTE: Volume autosize can only be run a maximum of 10 times on any particular volume.
If you set the increment size too small, you will not be able to expand the volume as much as you
may like. For that reason, you generally should use the -m and -i switches when configuring
the volume autosize feature to set the increment size and the maximum size to something
larger than the defaults. ALSO NOTE: The volume can grow only to a maximum size that is
10 times the original volume size. (A configuration example follows.)
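A hedged example of enabling autosize with explicit limits; the volume name and sizes are hypothetical:

system> vol autosize vol_SAN1 -m 500g -i 25g on
system> vol autosize vol_SAN1
(the second command, with no arguments, should report the current autosize settings)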


Administrator's Choice
Administrators may choose which procedure to employ first:
Snapshot autodelete
vol autosize
Use the volume option try_first
Possible values:
snap_delete
volume_grow (default)

Example:
vol options vol_name try_first snap_delete


Configurations


Thin Provisioning Configuration #1

Configuration        Value
Guarantee            volume
LUN reservation      on
Fractional Reserve   0%
Snapshot Reserve     20%
Auto delete          snap_reserve
Auto grow            on
try_first            snap_delete

Easy to manage
Sacrifices Snapshot copies before LUNs
Doesn't use shared space of the aggregate until auto grow is used
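A minimal, hedged CLI sketch of applying configuration #1 to a hypothetical volume vol_SAN1 and LUN lun0 (the command forms are Data ONTAP 7G; verify them against your release):

system> vol options vol_SAN1 guarantee volume
system> lun set reservation /vol/vol_SAN1/lun0 enable
system> vol options vol_SAN1 fractional_reserve 0
system> snap reserve vol_SAN1 20
system> snap autodelete vol_SAN1 trigger snap_reserve
system> snap autodelete vol_SAN1 on
system> vol autosize vol_SAN1 on
system> vol options vol_SAN1 try_first snap_delete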

Thin Provisioning Configuration #2

Configuration        Value
Guarantee            volume
LUN reservation      on
Fractional Reserve   0%
Snapshot Reserve     0%
Auto delete          volume
Auto grow            on
try_first            auto_grow

Uses shared space of the aggregate
Deletes Snapshot copies when the maximum size is reached
Volumes are independent of each other

Module Summary


Module Summary
In this module, you should have learned to:
Describe how and when a LUN consumes space from its containing volume
Discuss backup guarantees through Snapshot reserve
Discuss the overwrite guarantee for space-reserved LUNs
Analyze the default LUN configuration and two thin provisioning configurations


Exercise
Module 13: LUN Provisioning
Estimated Time: 45 minutes

EXERCISE
Please refer to your Exercise Guide materials for more instructions.


SAN Management
Module 14
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
Perform administrative tasks on FC target ports
Perform administrative tasks on LUNs
Perform administrative tasks on igroups


Port Management


Port Management
Storage administrators may need to perform the
following tasks while managing FC target ports
on a storage system:
Enable and disable the FC target port
(previously discussed)
Assign onboard FC ports to be targets
(previously discussed)
Designate an alias for FC initiator or target
ports (previously discussed)
Configure FC attributes on FC target ports
(previously discussed)
Configure the WWPN on an FC target port

Port Assignment CLI Command

With Data ONTAP 7.3 and later, WWPNs may be directly assigned by administrators
An assigned WWPN may only be chosen from a set of valid WWPNs
Use the fcp subcommand: portname
In active-active environments, you can only set the WWPNs for locally managed ports, not the partner's ports


Steps to Assign a WWPN on an FC Port

1. Use fcp portname show to display all assigned WWPNs
2. Use fcp config adapter down to take the adapter offline
3. Use fcp portname set adapter wwpn to assign an unused WWPN to a port
   (or use fcp portname swap adapter1 adapter2 to swap WWPNs between ports)
4. Use fcp config adapter up to bring the adapter online again
5. Validate the new configuration

Example: fcp portname show

Use fcp portname show -v to display used and unused WWPNs:
Example:

system> fcp portname show -v

Portname                  Adapter
--------                  -------
50:0a:09:81:96:f7:c7:86   0c
50:0a:09:82:96:f7:c7:86   0d
50:0a:09:83:96:f7:c7:86   unused
50:0a:09:84:96:f7:c7:86   unused
50:0a:09:85:96:f7:c7:86   unused
50:0a:09:86:96:f7:c7:86   unused
50:0a:09:87:96:f7:c7:86   unused
50:0a:09:88:96:f7:c7:86   unused
...


Setting FC Target Ports' WWPNs

The adapter must be offline before changing its WWPN:
fcp config adapter down

Use fcp portname set to configure the port
Example:
system> fcp portname set 0d 50:0a:09:83:96:f7:c7:86
WARNING! Changing WWPN on an adapter may cause
initiator(s) fail to reconnect to this adapter.
Do you want to continue (y/n)? y
Change will take effect when adapter is onlined.


Swapping FC Target Ports' WWPNs

The WWPNs of two ports may be swapped
Adapters must be offline to perform this action
Use the fcp portname swap command
Example:
system> fcp portname swap 0c 0d
WARNING! Changing WWPNs on adapters may cause
initiator(s) fail to reconnect to these adapters.
Do you want to continue (y/n)? y
Changes will take effect when adapters are onlined.

Bring the adapters online after configuration:
fcp config adapter up


LUN Management


LUN Management
Storage administrators often need to perform the
following tasks while managing LUNs on a
storage system:
Disable or enable a LUN
Add a comment to a LUN
Rename a LUN
Resize a LUN
Clone a LUN
Remove a LUN


LUN Offline/Online
To control the availability of a LUN, administrators may:
Take a LUN offline
lun offline lun_path
Example:
system> lun offline /vol/vol_SAN1/lun0

Bring a LUN online
lun online lun_path
Example:
system> lun online /vol/vol_SAN1/lun0


Add a Comment to a LUN

A LUN comment is usually used to identify the purpose of a LUN
To add a comment:
lun comment lun_path [comment]
Example:
system> lun comment /vol/vol_SAN1/lun0 For Payroll

NOTE: If you are using spaces within the comment, place the comment within quotes (see the example below)

To view a LUN comment:
lun comment lun_path
lun show -v
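For instance, a comment containing spaces must be quoted; the comment text here is hypothetical:

system> lun comment /vol/vol_SAN1/lun0 "For Payroll - nightly batch jobs"
system> lun comment /vol/vol_SAN1/lun0
For Payroll - nightly batch jobs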

Rename a LUN
To rename a LUN:
lun move old_lun_path new_lun_path
Example:
system> lun move /vol/vol_SAN1/lun0
/vol/vol_SAN1/newlun
NOTE: Both paths must be in the same volume


Resize a LUN
Administrators may increase or decrease the size of a LUN
NOTE: It is usually more reliable to create a new LUN and copy the data to it than to resize an existing LUN
A host file system supports only limited usage of this feature:
Windows supports resizing only on basic disks
Veritas does not support resizing on version 3.5 or earlier
A LUN can only grow to 10x its original size
To resize a LUN (see the example below):
Take the LUN offline
lun resize [-f] lun_path new_size
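A hedged sketch of growing a hypothetical 2-GB LUN to 4 GB:

system> lun offline /vol/vol_SAN1/lun0
system> lun resize /vol/vol_SAN1/lun0 4g
system> lun online /vol/vol_SAN1/lun0

After the resize, the host must rescan and grow its file system onto the new space.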

Clone a LUN
A LUN clone is a point-in-time, writable copy of a LUN in a Snapshot copy
The LUN clone shares space with the LUN in its backing Snapshot copy
Initially, the LUN clone's content exists in the backing Snapshot copy, while changed data is written to the active file system
A LUN clone may be used for testing

CLONE A LUN
To clone a LUN, complete the following steps:
1. To begin the clone operation, enter the following command:
   lun clone start lun_clone_path
2. To display the progress of the clone operation, enter the following command:
   lun clone status lun_clone_path
3. To display all the clones, enter either of the following commands:
   lun clone show [lun-path]
   lun show -c
4. If you need to stop the clone process, enter the following command:
   lun clone stop lun-path


Clone a LUN (Cont.)

Create a Snapshot copy of the volume that contains the LUN:
system> snap create vol_SAN1 s0

Diagram: the active file system and the Snapshot file system share blocks. The legend distinguishes unallocated blocks in the LUN, blocks visible from a lower (shared) layer of the LUN, and blocks allocated in the same LUN (same inode) at a given layer.


Clone a LUN (Cont.)

Create a LUN clone with the s0 Snapshot copy as its backing:
system> lun clone create /vol/vol_SAN1/lun0clone -b /vol/vol_SAN1/lun0 s0

Diagram: the LUN clone in the active file system carries an attribute pointer to its Snapshot backup; only the changed blocks differ between the clone and the backing copy. The legend distinguishes unallocated blocks, blocks visible from a lower (shared) layer, blocks allocated in the same LUN (same inode) at a given layer, and blocks allocated in the LUN at a given layer with backing.


Clone a LUN (Cont.)

Snapshot copies that are used as the backing Snapshot copy for a LUN clone are in a busy state and cannot be deleted:
Use lun snap usage to identify which LUN clones are using a particular Snapshot copy
Example:
system> lun snap usage vol_SAN1 s0
Active:
LUN: /vol/vol_SAN1/lun0clone
Backed By: /vol/vol_SAN1/.snapshot/s0/lun0

NOTE: You must split or destroy the LUN clone to delete the Snapshot copy
Use lun show -v to display all LUNs and LUN clones with their backing Snapshot copies

Clone a LUN (Cont.)

Start the split:
system> lun clone split start /vol/vol_SAN1/lun0clone

Diagram: the split copies the shared blocks out of the backing Snapshot copy so that the clone in the active file system no longer holds an attribute pointer to the Snapshot backup. The legend is the same as on the previous page.


Remove a LUN
The process to remove a LUN depends on the initiator OS
This course will investigate:
Windows Server 2003/2008/2008 R2
vSphere
Red Hat Enterprise Linux 5
Regardless of the initiator OS, the first step is to stop all applications that are using the LUN as storage


Remove a LUN: Windows Server

Take the LUN offline
(Screen shot: in Disk Management, right-click the disk and select Offline)


Remove a LUN: Windows Server (Cont.)

In Windows Server 2008, you can't take the disk offline, so use Device Manager to uninstall the device
(Screen shot: in Device Manager, right-click the disk device and select Uninstall)


Remove a LUN: Windows Server (Cont.)

Regardless of whether the LUN was accessed through FC or iSCSI, the following steps are used to destroy it (FC example):
Take the LUN offline:
system> lun offline /vol/vol_SAN2/lun3
Unmap the LUN from the igroup:
system> lun unmap /vol/vol_SAN2/lun3 iWin_fcp
Destroy the offline LUN:
system> lun destroy /vol/vol_SAN2/lun3


Remove a LUN: Windows Server (Cont.)

Rescan, and the LUN should disappear
(Screen shot: in Disk Management, right-click and select Rescan Disks)


Remove a LUN: vSphere

Remove the VM(s) using the LUN
(Screen shot: right-click the virtual machine and select Remove)


Remove a LUN: vSphere (Cont.)

Removing a LUN destroys data
(Screen shot: right-click the datastore and select Delete)
To keep the data, take the LUN offline on the storage system

Remove a LUN: vSphere (Cont.)

Rescan, and the LUN should disappear
(Screen shot: the LUN is gone)


Remove a LUN: vSphere (Cont.)

Regardless of whether the LUN was accessed through FC or iSCSI, the following steps are used to destroy it (iSCSI example):
Take the LUN offline:
system> lun offline /vol/lun2/lun2
Unmap the LUN from the igroup:
system> lun unmap /vol/lun2/lun2 win-1617b81a
Destroy the offline LUN:
system> lun destroy /vol/lun2/lun2


Remove a LUN: Linux

Unmount the logical volume
# umount /dev/mapper/vgGroup1-lvVolA
Remove the mount point
# rmdir /mnt/lvVolA
Deactivate the logical volume
# lvchange -an vgGroup1/lvVolA
Remove the logical volume
# lvremove /dev/vgGroup1/lvVolA


Remove a LUN: Linux (Cont.)

Deactivate the volume group:
# vgchange -an vgGroup1

Either remove a physical volume (LUN) from the group:
# pvs -o+pv_used
  PV         VG         Fmt  Attr PSize  PFree  Used
  /dev/dm-2  vgGroup1   lvm2 a-   2.00G  2.00G  0
  /dev/dm-3  vgGroup1   lvm2 a-   2.00G  2.00G  0
  /dev/sda2  VolGroup00 lvm2 a-   68.12G 0      68.12G
# vgreduce vgGroup1 /dev/dm-3
# vgchange -ay vgGroup1

Or destroy the volume group altogether:
# vgremove vgGroup1


Remove a LUN: Linux (Cont.)

Check the SCSI device definitions (one LUN; record the SCSI device IDs):
# multipath -d
...
create: mpath2 (360a980004335432d576f52726b663350) NETAPP,LUN
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][n/a]
\_ round-robin 0 [prio=8][undef]
 \_ 3:0:0:0 sdj 8:144 [undef][ready]
 \_ 4:0:0:0 sdk 8:160 [undef][ready]
 \_ 5:0:0:0 sdl 8:176 [undef][ready]
 \_ 6:0:0:0 sdm 8:192 [undef][ready]

Remove a LUN: Linux (Cont.)

List the current multipath definitions:
# ls -l /dev/mapper
...
brw-rw---- 1 root disk 253, 2 Sep 13 13:53 mpath0
brw-rw---- 1 root disk 253, 3 Sep 13 13:53 mpath2
brw-rw---- 1 root disk 253, 5 Sep 14 08:31 mpath3
...

Remove the LUN's multipath definition, and the definition is gone:
# multipath -f mpath2
# ls -l /dev/mapper
...
brw-rw---- 1 root disk 253, 2 Sep 13 13:53 mpath0
brw-rw---- 1 root disk 253, 5 Sep 14 08:31 mpath3
...

Remove a LUN: Linux (Cont.)

List the current SCSI device definitions (one LUN, seen once per path):
# ls -l /sys/bus/scsi/drivers/sd
...
lrwxrwxrwx 1 root root 0 Sep 14 09:47 3:0:0:0 -> ../../../../devices/platform/host3/session1/target3:0:0/3:0:0:0
lrwxrwxrwx 1 root root 0 Sep 14 09:47 3:0:0:1 -> ../../../../devices/platform/host3/session1/target3:0:0/3:0:0:1
lrwxrwxrwx 1 root root 0 Sep 14 09:47 4:0:0:0 -> ../../../../devices/platform/host4/session2/target4:0:0/4:0:0:0
lrwxrwxrwx 1 root root 0 Sep 14 09:47 4:0:0:1 -> ../../../../devices/platform/host4/session2/target4:0:0/4:0:0:1
lrwxrwxrwx 1 root root 0 Sep 14 09:47 5:0:0:0 -> ../../../../devices/platform/host5/session3/target5:0:0/5:0:0:0
lrwxrwxrwx 1 root root 0 Sep 14 09:47 5:0:0:1 -> ../../../../devices/platform/host5/session3/target5:0:0/5:0:0:1
lrwxrwxrwx 1 root root 0 Sep 14 09:47 6:0:0:0 -> ../../../../devices/platform/host6/session4/target6:0:0/6:0:0:0
lrwxrwxrwx 1 root root 0 Sep 14 09:47 6:0:0:1 -> ../../../../devices/platform/host6/session4/target6:0:0/6:0:0:1
...

Remove a LUN: Linux (Cont.)

Take the LUN offline:
system> lun offline /vol/lun2/lun2
Unmap the LUN from the igroup:
system> lun unmap /vol/lun2/lun2 linux-igroup
Destroy the offline LUN:
system> lun destroy /vol/lun2/lun2


Remove a LUN: Linux (Cont.)

Remove the SCSI definitions (from within /sys/bus/scsi/drivers/sd):
# echo 1 > 3:0:0:0/delete
# echo 1 > 4:0:0:0/delete
# echo 1 > 5:0:0:0/delete
# echo 1 > 6:0:0:0/delete

Verify that the local devices are gone:
# ls -l /sys/bus/scsi/drivers/sd
...
lrwxrwxrwx 1 root root 0 Sep 14 09:47 3:0:0:1 -> ../../../../devices/platform/host3/session1/target3:0:0/3:0:0:1
lrwxrwxrwx 1 root root 0 Sep 14 09:47 4:0:0:1 -> ../../../../devices/platform/host4/session2/target4:0:0/4:0:0:1
lrwxrwxrwx 1 root root 0 Sep 14 09:47 5:0:0:1 -> ../../../../devices/platform/host5/session3/target5:0:0/5:0:0:1
lrwxrwxrwx 1 root root 0 Sep 14 09:47 6:0:0:1 -> ../../../../devices/platform/host6/session4/target6:0:0/6:0:0:1
...

igroup Management


igroup Management
Storage administrators often need to perform the
following tasks while managing igroups on a
storage system:
Map and unmap LUNs and igroups
Define port sets for an igroup


Map or Unmap igroups and LUNs

To unmap a LUN from an igroup:
Take the LUN offline
lun unmap lun_path igroup LUN_ID
Example:
system> lun offline /vol/vol_SAN1/lun0
system> lun unmap /vol/vol_SAN1/lun0 iUnix_fcp 0

To map a LUN to an igroup:
lun map lun_path igroup LUN_ID
Example:
system> lun map /vol/vol_SAN1/lun0 iUnix_fcp 0


Port Sets
A port set is a collection of ports associated with an igroup
If an igroup is not associated with a port set, then an initiator that belongs to that igroup can see its target LUNs on all ports
If an igroup is associated with a port set, then an initiator that belongs to that igroup can see its target LUNs only on the ports belonging to the port set


Port Set Administration

To create a port set:
system> portset create -f myportset
NOTE: -f means FC
Only FC port sets are currently supported

Add a port to a port set:
system> portset add myportset 0c
system> portset show
myportset (FCP):
   ports:
      system 0c
      system2 0c

To associate a port set with an igroup:
system> igroup create -f -t solaris -a myportset newigroup
NOTE: The port set cannot be empty for an igroup to bind to it

Port Set Administration (Cont.)

To bind an igroup to a port set (without creating the igroup):
system> igroup bind newigroup myportset

To disassociate an igroup from a port set:
system> igroup unbind newigroup

To destroy a port set:
system> portset destroy myportset


Module Summary


Module Summary
In this module, you should have learned to:
Perform administrative tasks on FC target ports
Perform administrative tasks on LUNs
Perform administrative tasks on igroups


Exercise
Module 14: SAN Management
Estimated Time: 30 minutes

EXERCISE
Please refer to your Exercise Guide materials for more instructions.


SAN Troubleshooting
Module 15
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
Describe how to diagnose a problem within a
SAN environment
Review diagnostic tools and techniques
available for:
Data ONTAP


SAN Troubleshooting


Origin of Problems

Diagram: problems can originate in three places. On the initiator (a Windows or UNIX host), the stack runs from the application through the file system and SCSI driver down to either an FC driver with FC HBAs or an iSCSI driver over TCP/IP with iSCSI HBAs or Ethernet NICs. In the fabric/network, traffic is SCSI over FC or SCSI over TCP/IP (iSCSI). On the target (the storage system running Data ONTAP), the stack runs from the FC and iSCSI drivers through SAN services, WAFL, and RAID down to the LUN on FC- or SATA-attached or direct-attached storage.


The SAN Troubleshooting Process

Is there a problem with the fabric or network?
Is there a problem with one of your switches?
Is there a problem with the target?
Is there a problem with the host?


Diagnose the Problem

Understand the SAN environment
What is the fabric or network configuration?
Are you using Fibre Channel protocol or iSCSI?
What kind of zoning or VLANs are in use (if any)?
What is the switch type?
Has any new software been installed on the host or target?
What is the storage system configuration?
What is the storage system model?
What version of Data ONTAP is running?
Is FC or iSCSI licensed and started?
Is the logical unit number (LUN) created and mapped to the correct igroup?
Is the system cabled in an active-active storage controller configuration?


Diagnose the Problem (Cont.)

Understand the SAN environment
What host(s) are involved, and what is their configuration?
What is the OS version? Are the necessary patches installed?
Is this an iSCSI or FC solution? If iSCSI, do you have an HBA or a software initiator?
What applications are running (SQL Server, Oracle, Exchange, and so on)?
What host-side multipathing and/or clustering options are being used?
Is SnapDrive being used?

Determine the Nature of the Problem

Can the host and target communicate?
Is the problem performance related?
Are LUNs visible to the host?
Do you receive error messages?
Did the host panic? Did the storage system (target) panic?
When does the problem occur?
After reboot (of host or target)?
During heavy load?
Does the problem occur when SnapMirror is started?

Fabric and Network Troubleshooting

What is the fabric or network protocol (FC or iSCSI)?
What type of fabric or network is being used?
Direct attached (FC or iSCSI) - Issues may be related to cabling or port hardware
Single fabric (FC) - Issues may be related to switches or zoning
Dual fabric (FC) - Issues may be related to switches or zoning
Network attached, dedicated or shared Ethernet (iSCSI) - Issues may be related to VLANs, a firewall, or router tables

FC Fabric Checklist
Do you have path connectivity?
Are the switches on? Do they have unique
domain IDs?
If multiple switches are present, do they have
identical fabric parameters?
Are zones set up correctly?
Are switches running correct firmware
versions?
Are the ports configured correctly?
Are hard and soft zones configured correctly?
Is the physical cabling correct?

iSCSI Storage Network Checklist

Do you have path connectivity?
Are duplex and jumbo frame Ethernet parameters configured correctly?
Are you using a firewall?
Are VLANs configured correctly?
Are hosts and controllers cabled correctly? Is port hardware functioning correctly?
Was the port state reconfigured by an administrator (shared Ethernet)?
Are clients configured correctly to find the DNS and iSNS servers?

Data ONTAP


Data ONTAP Troubleshooting

To troubleshoot a SAN environment in Data ONTAP:
Verify the SAN configuration on a storage system
Display general usage statistics
Analyze FC statistics
Analyze iSCSI statistics
Verify read and write performance


Verify LUN Environment

system> lun config_check -v
Checking igroup ostype & fcp cfmode compatibility
======================================================
No Problems Found
Checking local and partner cfmode
======================================================
No Problems Found
Checking for down fcp interfaces
======================================================
The following FCP HBAs appear to be down
0c OFFLINED BY USER/SYSTEM
Checking initiators with mixed/incompatible settings
======================================================
No Problems Found

(The down FCP HBA, 0c, flagged above is a potential problem.)


Verify LUN Environment (Cont.)

Checking igroup ALUA settings
======================================================
No Problems Found
Checking for nodename conflicts
======================================================
No Problems Found
Checking for initiator group and lun map conflicts
======================================================
No Problems Found
Checking for igroup ALUA conflicts
======================================================
No Problems Found


Usage Statistics
system> uptime
12:57pm up 4:42, 0 NFS ops, 0 CIFS ops, 0 HTTP ops, 70 FCP ops, 425 iSCSI ops
system> availtime
Service statistics as of Thu Apr 24 12:57:20 PDT 2008
System (UP). First recorded (15393111) on Mon Oct 29 09:05:29 PDT 2007
 P  16, 1441184, 1439532, Fri Dec 14 06:44:46 PST 2007
 U   3, 542, 206, Wed Nov 14 14:28:39 PST 2007
 PF  1, 36368, 36368, Sat Dec 15 02:36:13 PST 2007
...
FCP (UP). First recorded (11421816) on Fri Dec 14 07:13:44 PST 2007
 P  12, 14325, 13548, Mon Apr 7 18:48:06 PDT 2008
 PF  1, 36375, 36375, Sat Dec 15 02:36:20 PST 2007
iSCSI (UP). First recorded (13485623) on Tue Nov 20 09:56:57 PST 2007
 P  12, 5394, 3078, Thu Apr 17 15:43:29 PDT 2008
 PF  1, 36375, 36375, Sat Dec 15 02:36:20 PST 2007


Display FC Protocol Statistics

system> fcp stats
Statistics for adapter 0c
Read Ops:              0     Write Ops:             0
Other Ops:             25    KBytes In:             0
KBytes Out:            0     Adapter Resets:        1
Frame Overruns:        0     Frame Underruns:       0
Initiators Connected:  3     Link Breaks:           0
LIP Resets:            0     SCSI Requests Dropped: 0
Interrupts:            66    Spurious Interrupts:   0
Total Logins:          11    Total Logouts:         8
CRC Errors:            0     Adapter Qfulls:        0
Queue Depth:           0

Statistics for adapter 0d
Read Ops:              0     Write Ops:             0
Other Ops:             30    KBytes In:             0
KBytes Out:            0     Adapter Resets:        0
Frame Overruns:        0     Frame Underruns:       0
Initiators Connected:  4     Link Breaks:           0
LIP Resets:            0     SCSI Requests Dropped: 0
Interrupts:            88    Spurious Interrupts:   0
Total Logins:          8     Total Logouts:         4
CRC Errors:            0     Adapter Qfulls:        0
Queue Depth:           0


Display FC Protocol Statistics (Cont.)

Statistics for adapter vtic
Read Ops:               0    Write Ops:              0
Other Ops:              6    KBytes In:              1
KBytes Out:             0    out_of_vtic_cmdblks:    0
out_of_vtic_msgs:       0    out_of_vtic_resp_msgs:  0
out_of_bulk_msgs:       0    out_of_bulk_buffers:    0
out_of_r2t_buffers:     0
...

Zero out the statistics to get a clear reading:
system> fcp stats -z


Display iSCSI Statistics

system> iscsi stats
iSCSI PDUs Received
 SCSI-Cmd:  226 | Nop-Out:    8 | SCSI TaskMgtCmd: 0
 LoginReq:  620 | LogoutReq: 21 | Text Req:       5
 DataOut:     0 | SNACK:      0 | Unknown:        0
 Total: 880
iSCSI PDUs Transmitted
 SCSI-Rsp:   89 | Nop-In:     8 | SCSI TaskMgtRsp: 0
 LoginRsp:  620 | LogoutRsp: 21 | TextRsp:         5
 Data_In:   218 | R2T:        0 | Asyncmsg:        2
 Reject:      0
 Total: 963


Display iSCSI Statistics (Cont.)

iSCSI CDBs
 DataIn Blocks:  218 | DataOut Blocks: 0
 Error Status:     8 | Success Status: 218
 Total CDBs: 226
iSCSI ERRORS
 Failed Logins:    0 | Failed TaskMgt: 0
 Failed Logouts:   0 | Failed TextCmd: 0
 Protocol: 0
 Digest: 0
 PDU discards (outside CmdSN window): 0
 PDU discards (invalid header):       0
 Total: 0

Zero out the statistics to get a clear reading:
system> iscsi stats -z


Read and Write Performance of a LUN

To improve the read/write performance of a LUN, it is important to lay out the LUN (a single file) sequentially within the aggregate (see the example below):
system> reallocate on
Reallocation scans are on
system> reallocate start /vol/vol_SAN1/lun0

NOTE: This creates a normal reallocation scan that runs every 24 hours
To disable the scans:
system> reallocate stop /vol/vol_SAN1/lun0
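As a hedged sketch, you can check the layout and any scheduled scan before and after with reallocate status (the LUN path is hypothetical):

system> reallocate status -v /vol/vol_SAN1/lun0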


nSANity
nSANity is an extensible diagnostic data collection framework
Download it from the NOW site (Windows and Linux versions)
Supports:
Data ONTAP 7G, GX, and 8.0 7-Mode
Switches
Cisco
Brocade
QLogic
Hosts
Windows Server 2003 and 2008
VMware 3.0.1, 3.5, and 4.0
Linux with any 2.6 kernel-based distribution
Solaris
More supported platforms are coming soon


nSANity (Cont.)
With nSANity, you can interrogate your Windows, Linux, and storage systems:
C:\nsanity-win32> nsanity.exe windows://localhost
Using internal command definitions
Starting collection @ 2009-09-08 17:30:49
Beginning data collection for windows @ localhost
Running select * from Win32_OperatingSystem
Running select * from Win32_QuickFixEngineering
Running select * from Win32_LogicalDisk
Running select * from Win32_LogicalDiskRootDirectory
Running select * from Win32_LogicalDiskToPartition
...
Collected data saved to file, 20090909003049_nsanity.xml.gz

nSANity (Cont.)
You can then pass this file off to your NetApp Professional Services representative to read the resulting XML output


Module Summary


Module Summary
In this module, you should have learned to:
Describe how to diagnose a problem within a
SAN environment
Review diagnostic tools and techniques
available for:
Data ONTAP


Additional Resources
Education
Instructor-led training
Design and Implement VMware Solutions on NetApp Storage
Design and Implement MS Hyper-V Solutions on NetApp Storage
Design and Implement Virtualization Solutions on NetApp Storage
Online courses
SnapDrive for Windows Overview and Install
SnapDrive for Windows Operate and Troubleshoot
Web sites
NOW (NetApp on the Web) site: http://NOW.NetApp.com
NetApp: http://www.netapp.com


Thank you!
Please fill out an evaluation.


FC Details
Appendix 1
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
Describe FC details important for configuring and troubleshooting FC topologies


Fabric Cabling


Cabling: Types
Multimode fiber
Typically orange or aqua blue in color
Cable type is always printed on the cable
50 or 62.5 µm core
50/125 = 50 µm core and 125 µm cladding
Singlemode fiber
Not directly supported by NetApp
Typically yellow in color
9 µm core

Cabling: Illustration
Multimode fiber has a 50 µm or 62.5 µm core
Singlemode fiber has an 8.3/9 µm core
Cable jacket: The outermost layer of the fiber cable
Strengthening fibers: The fibers that help protect the core against damage during installation or from being crushed
Coating: The layer of thicker plastic that surrounds the cladding and helps protect the fiber core
Cladding: The layer that protects the core and causes the necessary reflection to allow light to travel through the fiber-core segment
Core: The physical component that transports the optical data signal, made up of a continuous strand of glass; the core's diameter is measured in microns

Cabling: Distances
Multimode optical cable distances (all wavelengths below are 850 nm):

Media Type   Speed   Distance   Channel Attenuation (dB)
50 µm        1 Gb    2-500 m    3.25
62.5 µm      1 Gb    2-300 m    2.55
50 µm        2 Gb    2-300 m    2.55
62.5 µm      2 Gb    2-150 m    2.03
50 µm        4 Gb    2-150 m    -
62.5 µm      4 Gb    2-70 m     -


Cabling: Distances

Note: Distances assume perfect runs with no joints, patch panels, and so on; your mileage may vary.


Cabling: Cable Attenuation

Do not mix 50 µm and 62.5 µm fiber
This mixing commonly occurs with extension cables
Patch panels and fiber extension cables significantly reduce total cable length
ENSURE that you have accounted for every segment of the cable run
If a patch panel is in a run, you MUST determine channel attenuation
Cable distance formulas are available on the NOW site
CABLING: CABLE ATTENUATION
Channel attenuation is the maximum amount of signal loss, in decibels (dB), supported by the
interface specification over the maximum stated cable and data rate parameters. The loss
value is based upon a direct point-to-point connection that has a single source and single
destination connection and continuous fiber, with no patch or splice connections.
For 50 and 62.5 µm at 850 nm:
Cable attenuation: 3.5 dB/km
Connector pair loss: 0.75 dB
Splice loss: 0.3 dB
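A worked example using these loss values (the run lengths are hypothetical): a 300 m run of 50 µm fiber at 2 Gb with one patch panel (one connector pair) and one splice loses roughly
(0.300 km x 3.5 dB/km) + 0.75 dB + 0.3 dB = 1.05 + 0.75 + 0.3 = 2.10 dB
which fits within the 2.55 dB channel attenuation budget from the distance table; a second patch panel (another 0.75 dB) would exceed it.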


Cabling: Connectors
Most commonly seen:
Lucent Connector (LC)
Siemens Connector (SC)
Straight Tip (ST)
Multi-fiber Push-On (MPO)
Small Form Factor Pluggable (SFP)

FC Addressing


Nodes Attached to a Fabric

Nodes (ports) link to a fabric
They send out a FLOGI (fabric login)
They get back a Fibre Channel ID (unique)
They register with the fabric as an initiator or target
Initiators can request the targets that are visible to them (zoning)
Initiators then can PLOGI (log in to a target port), which creates a session

Diagram: host initiator ports and target ports (a JBOD and a storage system) all attach to ports on Switch1.
NODES ATTACHED TO A FABRIC
Nodes (ports) link to a fabric
Send out a FLOGI
Get back a Fibre Channel ID (unique ID)
Register with the fabric as initiator or target
An initiator can ask for the targets it is allowed to see (zoning)
Initiators can then PLOGI (log in to a target port)
This creates a session


Port Types in Switched Configuration

Diagram: hosts and the storage system attach through N_Ports to F_Ports on Switch1 and Switch2; the two switches connect to each other through E_Ports over an ISL; the JBOD attaches through an NL_Port to an FL_Port; unoccupied switch ports are U_Ports.
PORT TYPES IN SWITCHED CONFIGURATION
Device port types:
N_Port: Node port, a fabric device directly attached (host or RAID)
NL_Port: Node loop port, a device attached to a loop (just a bunch of disks, also called JBOD) - FAS270 (virtual port)
Switch port types:
E_Port: An expansion port that is used for Inter-Switch Links
ISL: An Inter-Switch Link is a link that connects switches together utilizing E_Ports
F_Port: A fabric port to which an N_Port (node) attaches
FL_Port: A fabric loop port is a port to which an NL_Port (loop device) attaches
G_Port: A generic port that is in a transitional state, either to become an E_Port or an F_Port
U_Port: A universal port that is waiting to become some other port
An N_Port connects to an F_Port
An NL_Port connects to an FL_Port
Switch port types are typically auto-configured
E_Ports carry frames between switches for configuration and fabric management. They serve as a conduit between switches for frames destined to remote N_Ports and NL_Ports. E_Ports support class 2, class 3, and class F service.


There are several less common switch port types:
QLogic and Cisco:
TL_Port: In translative loop port (TL port) mode, an interface functions as a translative loop port. It may be connected to one or more private loop devices (NL ports). TL mode allows nonfabric-aware devices to communicate with fabric devices.
Cisco-specific:
TE_Port: In trunking E port (TE port) mode, an interface functions as a trunking expansion port. It may be connected to another TE port to create an extended ISL (EISL) between two switches.
Brocade-specific:
EX_Port: An E_Port from a router to an edge fabric; the router terminates EX_Ports, preventing fabric merges
L_Port: A loop port that is only displayed in switchshow output
VE_Port: A virtual E_Port that terminates at the switch and does not propagate fabric services or routing topology information from one edge fabric to the other
VEX_Port: A virtual E_Port that terminates at the switch and does not propagate fabric services or routing topology information from one edge fabric to the other, when an FCIP connection is involved


Identifiers

Each switch has a unique Domain ID
Each port has an associated Area ID
If the port is associated with multiple nodes (such as an NL_Port), the Port Number is something other than 00

Diagram: hosts, the storage system, and the JBOD attach to switch ports as on the previous page; the switch carries the Domain ID, each switch port carries an Area ID, and loop devices are distinguished by Port Number.


Fibre Channel ID
Each device in the fabric is assigned a 24-bit address
during FLOGI
The format is XXYYZZ, where:
XX = Domain ID
YY = Area ID
ZZ = Port Number (AL_PA)

24-bit address layout: Domain ID (8 bits) | Area ID (8 bits) | Port Number (8 bits)

FIBRE CHANNEL ID
The node address, also known as the arbitrated loop physical address (AL_PA), identifies a device on a Fibre Channel Arbitrated Loop (FC-AL). For example, it could identify a particular disk within an FC-AL JBOD. For point-to-point connections between the switch and the edge device, the node address (ZZ) is set to 0x00.
The domain ID usually represents the domain ID assigned to the FC switch where the FC node connects.
The area ID usually represents the port number on the FC switch where the FC node connects (for example, if a host is connected to port 77, area ID = 0x4D).
Cisco switches use a different method for determining the area ID.
The domain ID and the area ID parts of the FC ID are assigned during the fabric login (FLOGI) process by the FC login server fabric service. The login server fabric service is discussed further in the fabric services section.


Domain ID
A unique number between 1 and 239 that
identifies the switch (or virtual switch) in a
fabric
Unless configured statically, the principal
switch manages domain ID addressing
Recommendation is to use static domain IDs
Useful when hard zoning
Needed for persistent FC IDs

Brocade: Use the configure command to set the Domain ID and other parameters

DOMAIN ID
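A typical Brocade sequence for setting a static domain ID looks like the following sketch. The exact prompts vary by Fabric OS version, and the value 18 is only an example; the switch must be disabled before configure will change fabric parameters:

switch> switchdisable
switch> configure
  Fabric parameters (yes, y, no, n): [no] y
    Domain: (1..239) [1] 18
  (accept the defaults for the remaining prompts)
switch> switchenable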


Switch Roles
Principal switch:
Default behavior is that the switch with the
lowest WWN will become the principal switch
Principal switch is defined during a fabric
change event
A new switch cannot become the principal switch
when it joins a stable fabric

Recommendation is to designate a core switch as principal

SWITCH ROLES


Principal Switch
Principal switch functions:
Assigns domain IDs to subordinate switches
Merges zoning information
Exchanges FSPF routing tables
Exchanges SNS information
In Brocade fabrics, handles time synchronization for all the switches in the fabric

PRINCIPAL SWITCH
If you are not using static domain IDs, preferred IDs, or contiguous assignment (Cisco), then when a switch attempts to join the fabric, it sends a random domain ID to the principal switch for approval. The principal switch either accepts the ID (if it is not in use) or denies it (if it is in use).


Fibre Channel Login


Types of Fibre Channel Login
Fabric login
Port login
Process login


FIBRE CHANNEL LOGIN


Fabric Login
[Diagram: the two-switch fabric during fabric login; hosts, a disk, and JBODs are attached to Switch1 and Switch2, which are joined by E_Ports.]
FLOGI: Fabric login is the process by which a node makes a logical connection to the fabric domain

FABRIC LOGIN
After a fabric-capable Fibre Channel device is attached to a fabric switch, it carries out a fabric login, or FLOGI.
Similar to port login, FLOGI is an extended link service command that sets up a session between two participants. With FLOGI, a session is created between an N_Port or NL_Port and the switch.
An N_Port sends a FLOGI frame that contains its node name, its N_Port name, and service parameters to the well-known address 0xFFFFFE (the login server).


Fabric Services: Login Service


Handles FLOGI
A port that needs to connect to the fabric must log in with this server
The node sends a FLOGI frame with the source ID (S_ID) field filled in only with its AL_PA value
The login service then sends a response with the destination ID (D_ID) field filled in with the device's AL_PA and the newly assigned domain and area values

FABRIC SERVICES: LOGIN SERVICE


AL_PA: Arbitrated loop physical address, also known as the node address. Used with Fibre Channel Arbitrated Loop (FC-AL) devices; otherwise, AL_PA = 0.


FC ID in Data ONTAP
Before FLOGI:
system> fcp show adapter -v 0c
Slot:               0c
Description:        Fibre Channel Target Adapter 0c
                    (Dual-channel, QLogic 2322 (2362) rev. 3)
Status:             ONLINE
Host Port Address:  0x000000 ...   (no fabric ID)

After FLOGI:
system> fcp show adapter -v 0c
Slot:               0c
Description:        Fibre Channel Target Adapter 0c
                    (Dual-channel, QLogic 2322 (2362) rev. 3)
Status:             ONLINE
Host Port Address:  0x010000 ...   (fabric ID assigned)

FC ID IN DATA ONTAP


Port Login
[Diagram: the two-switch fabric during port login; a host N_Port establishes a session with a disk N_Port across the fabric.]
PLOGI: Port login is a port-to-port login process in which initiators establish sessions with targets

PORT LOGIN
Port login, or PLOGI, is used to establish a session between two N_Ports (devices). It is required before any upper-level commands or operations can be performed. The two N_Ports swap service parameters and make themselves known to each other.
In a point-to-point topology, PLOGI is the process by which an FC initiator port establishes a connection with an FC target port. In a switched topology, initiator and target ports also perform PLOGI with fabric services such as the name server before establishing sessions with each other.


Fabric Services: Name Service


The simple name server is also known as the SNS
The name server is a database used to store information about devices attached to a fabric
The name server gets information from a device through the PLOGI frame at initialization and through subsequent registration frames
Maps the 24-bit fabric address to the 64-bit WWN
Hosts query the name server for all attached devices and their capabilities

FABRIC SERVICES: NAME SERVICE
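Conceptually, the name server is a lookup table that maps fabric addresses to WWNs. The following is a toy Python sketch, not any vendor's implementation, and the values are made up:

# 24-bit FC ID -> 64-bit WWPN
sns = {
    0x120100: 0x10000000C92B702C,  # a host HBA
    0x12040F: 0x21000011C61A33BD,  # a JBOD disk on AL_PA 0x0F
}
# A registered host queries for the devices it is allowed to see:
for fc_id, wwpn in sns.items():
    print(hex(fc_id), hex(wwpn))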


Process Login
[Diagram: the two-switch fabric during process login; related processes on an originating N_Port and a responding N_Port establish an image pair.]
PRLI: Process login is used to set up the environment between related processes on an originating N_Port and a responding N_Port

PROCESS LOGIN
A group of related processes is collectively known as an image pair. Use of process login is
optional from the perspective of the Fibre Channel FC-2 layer, but may be required by a
specific upper-level protocol, as in the case of SCSI-FCP mapping.


Fabric Addressing: Example


[Diagram: two switches joined by an ISL. The switch with Domain ID 18 connects nodes with FC IDs 120100 and 120200, a RAID at 120300, and a JBOD at 12040f (AL_PA 15). The switch with Domain ID 202 connects a node at ca0100 and a RAID at ca0200.]

FABRIC ADDRESSING: EXAMPLE


Domain ID: 18 = 0x12
Domain ID: 202 = 0xCA
AL_PA for the JBOD: 15 = 0x0F
AL_PA for point-to-point nodes: 0x00
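The slide's addresses follow directly from packing the three hex fields; a quick illustrative check in Python:

# Domain 0x12, area 0x01, AL_PA 0x00 -> FC ID 120100
print('%02x%02x%02x' % (18, 1, 0))    # 120100
# Domain 0x12, area 0x04, AL_PA 0x0F -> FC ID 12040f (the JBOD)
print('%02x%02x%02x' % (18, 4, 15))   # 12040f
# Domain 0xca, area 0x02, AL_PA 0x00 -> FC ID ca0200
print('%02x%02x%02x' % (202, 2, 0))   # ca0200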


FC Services


FC SERVICES


Fabric Services
All switches implement a set of services that are distributed across all the devices
Types of fabric services:
Login service - 0xFFFFFE
Name service - 0xFFFFFC
Registered state change notification (RSCN) - 0xFFFFFD

FABRIC SERVICES
Switches distribute fabric information among themselves through Class F service frames.
Switches exchange information in their servers so that all individual switch servers contain
the same information. This creates a singular fabric entity and makes it appear that there is
only one of each type of server.
Like nodes, fabric services have addresses, but the address of a fabric service is a fixed value
called a well-known address. Well-known addresses are reserved by the FC standard.


Fabric Services: RSCN


Provides a state change notification service to registered nodes
Devices that use this service are generally hosts that want to keep track of a number of storage targets
A device registers for state change notification by transmitting a state change registration (SCR) frame to the well-known address
When there is a change in fabric topology, the switch controller transmits a registered state change notification frame to the device to let it know that there has been a change
It is then up to the device to query the name server to assess the state of the fabric

FABRIC SERVICES: RSCN


FC Routing
Mechanisms


FC ROUTING MECHANISMS


Routing Mechanisms
A complex fabric can be made of interconnected switches and directors
[Diagram: several switches and directors interconnected by ISLs.]

ROUTING MECHANISMS


Routing Mechanism: FSPF


Fabric shortest path first (FSPF) is the standard path selection protocol in FC
[Diagram: switches interconnected by ISLs.]

ROUTING MECHANISM: FSPF

FSPF keeps track of the links on all switches and associates a cost with each link
Cost is proportional to the number of hops
FSPF computes paths from a switch to all other switches in the fabric by adding the cost of all links traversed by the path
FSPF chooses the path that minimizes the cost


Routing Mechanism: Trunking


Trunking is a feature of switches that enables
traffic to be distributed across available ISLs
while still preserving in-order delivery


ROUTING MECHANISM: TRUNKING


ISLs Without Trunking


ISLs provide for connection between switches
Any switch in the fabric can have one or more links to another switch in the fabric
ISLs are allocated in a round-robin fashion to share the load on the system (S_IDs are assigned a default path through a specific ISL)
Adding an ISL between switches is dynamic and can be done while the switch is active

ISLS WITHOUT TRUNKING


Adding a new Inter-Switch Link (ISL) will result in a routing re-computation and a new allocation of ISL links between source and destination ports. Similarly, removing a link will result in a fabric shortest path first (FSPF) routing re-computation across the fabric and possible fabric reconfiguration.
Adding ISLs will cause routing traffic and zoning data to be updated across the fabric through a spanning tree. The total number of ISLs is not as relevant as the amount of configuration changing, as each change will result in a recalculation of routes in the fabric. When numerous fabric reconfigurations occur (removing or adding links, rebooting a switch, and so on), the load on the switches' CPUs will be increased and some fabric events may time out while waiting on a CPU response. This occurs only during fabric reconfiguration activities and does not affect frame traffic per se, only tasks that require use of the CPU. No CPU intervention is required for normal frame routing; this is all done by switch hardware.
No more than eight ISLs between any two switches are supported. More than eight ports can be used on a switch for ISL traffic as long as no more than eight go to a single adjacent switch.
NOTE: A spanning tree connects all switches from the so-called principal switch to all subordinate switches. This tree spans in a way such that each switch (or leaf of the tree) is connected to other switches, even if there is more than one ISL between them; that is to say, there are no loops.


ISLs Without Trunking (Cont.)


[Diagram: four parallel ISLs between two directors; one heavily used ISL diminishes the throughput available to other traffic assigned to it.]
The switch guarantees in-order delivery.
However, if one Fibre Channel device loads up its assigned ISL heavily and for lengthy periods of time, a second device assigned to this same ISL may not get all of its data through in a timely manner.

ISLS WITHOUT TRUNKING (CONT.)


ISLs With Trunking


Combines two to eight ISLs into a single logical connection between switches
[Diagram: directors connected by an ISL trunk; traffic is balanced across the trunked links, so no single link's load impairs the others.]

ISLS WITH TRUNKING


ISL trunking:
- Load is balanced among the ISLs in the trunk
- Minimizes congestion
- Is typically auto-configured
ISL trunking is an optional license for Brocade switches.


Cisco PortChannels
Trunking is a commonly used term in the storage industry
However, the Cisco SAN-OS software and switches in the Cisco MDS 9000 Family implement trunking and PortChanneling as follows:
PortChanneling enables several physical links to be combined into one aggregated logical link
Trunking enables a link transmitting frames in the EISL format to carry (trunk) traffic for multiple VSANs
When trunking is operational on an E port, that E port becomes a TE port
A TE port is specific to switches in the Cisco MDS 9000 Family
An industry-standard E port can link to other vendors' switches and is referred to as a nontrunking interface

CISCO PORTCHANNELS


Fabric Events


FABRIC EVENTS


Fabric Connectivity: FSPF


Fabric shortest path first
The standard path selection protocol used by Fibre Channel fabrics
FSPF keeps track of the links on all switches in the fabric and associates a cost with each link
Enabled by default on all switches
Automatically calculates the best path between any two switches in a fabric

FABRIC CONNECTIVITY: FSPF


FSPF dynamically computes routes throughout a fabric by establishing the shortest and quickest path between any two switches.
It selects an alternative path in the event of the failure of a given path. FSPF supports multiple paths and automatically computes an alternative path around a failed link.
It provides a preferred route when two equal paths are available. The preferred route is chosen by different algorithms by different vendors.
FSPF uses a topology database to keep track of the state of the links on all switches in the fabric and associates a cost with each link.
Each time a new switch enters the fabric, a link state record (LSR) is sent to the neighboring switches and then flooded throughout the fabric.
Link costs are based on speed and hop count:
1 Gb = 1,000
2 Gb = 500
Links can be given a static cost; a reference for doing so is provided here:
http://www.redbooks.ibm.com/abstracts/tips0034.html?Open
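To make the cost model concrete, here is a minimal Python sketch of FSPF-style least-cost path selection over a fabric graph. The topology, names, and the 4-Gb cost of 250 are assumptions for illustration; real switches compute this in firmware from their link state databases:

import heapq

def fspf_path(fabric, src, dst):
    """Dijkstra over ISL costs; fabric maps switch -> {neighbor: link cost}."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in fabric[node].items():
            if neighbor not in seen:
                heapq.heappush(heap, (cost + link_cost, neighbor, path + [neighbor]))
    return None

# Hypothetical fabric: 1 Gb ISL costs 1,000; 2 Gb costs 500; 4 Gb costs 250
fabric = {"sw1": {"sw2": 500, "sw3": 1000},
          "sw2": {"sw1": 500, "sw3": 250},
          "sw3": {"sw1": 1000, "sw2": 250}}
print(fspf_path(fabric, "sw1", "sw3"))  # (750, ['sw1', 'sw2', 'sw3']): two fast hops beat one slow link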


Fabric Events: RSCN


Registered state change notification (RSCN)
A Fibre Channel service that informs hosts about changes in the fabric
Hosts receive this information by registering with the fabric controller (0xFFFFFD)
It is up to the nodes to query the name server again to obtain the new information
The details of the changed information are not delivered by the switch in the RSCN that is sent to the nodes

FABRIC EVENTS: RSCN


Fabric Events: RSCN (Cont.)


An RSCN is typically disruptive; I/O momentarily pauses
It significantly impacts tape devices and hosts that require streaming reads or writes
The impact of an RSCN can be mitigated through zoning or RSCN suppression (each switch vendor implements it differently)
Configure RSCN suppression for initiators only

FABRIC EVENTS: RSCN (CONT.)


Fabric Events: RSCN (Cont.)


An RSCN is generated when:
Disks join or leave the fabric
A name server registration change occurs
A host joins or leaves the fabric
A new zone is activated
If each host is in a separate zone, RSCN suppression is not required

FABRIC EVENTS: RSCN (CONT.)


Brocade FC Switches: Licensing


Many features are optional:
Advanced zoning
ISL trunking
Advanced performance monitoring
Extended fabric: allows higher port buffer credits for long-distance connections
Fabric watch: SAN health monitor
Full fabric
VL2 switches support only 2 switches in a fabric
VL4 switches support only 4 switches in a fabric

BROCADE FC SWITCHES: LICENSING


Brocade FC Switches: Licensing (Cont.)


Read the product bulletins for each switch:
http://web.netapp.com/engineering/projects/releases/sledgehammer/brocade/brocade_project_page.htm
On newer model switches, do not assume all ports are licensed:
The 200E ships by default with 8 SFPs and only 8 of its 16 ports licensed
It can be upgraded to 12 or 16 ports (license)
The 200E is a no-fabric switch by default
The 4900 ships by default with only 32 of its 64 ports licensed
Always check the customer's equipment for a full fabric license through licenseShow

BROCADE FC SWITCHES: LICENSING (CONT.)


Brocade FC Switches: Licensing (Cont.)


NetApp packages the following licenses:
Restricted fabric (3200/3800/3250/3850/200E):
Web tools
Advanced zoning
Full fabric (3200/3800/3250/3850/200E/3900/4100/4900/48000):
Web tools
Advanced zoning
Fabric watch
All switches may add the extra-cost optional licenses of Advanced Performance Monitoring and the Distance bundle, which includes licenses for extended fabrics, remote switch, and trunking

BROCADE FC SWITCHES: LICENSING (CONT.)


FC Zoning


FC ZONING


Zoning
A zone is a list of Fibre Channel ports or nodes
that can communicate with each other in a
Fibre Channel fabric
A zone is essentially a filter that limits the
information the name server returns to the
initiator during a query


ZONING
Devices can be members of more than one zone. This enables the creation of zones that contain some, but not all, of the same members. This type of configuration is referred to as overlapping zones. In this case, a concept of "most permissible access" is employed: as long as at least one zone contains both the initiator and the target, the target is made available to the initiator, even though other zones containing the initiator do not contain the target device.
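The "most permissible access" rule is easy to state in code. The following is a toy Python model, not a switch implementation, and the zone and device names are made up:

zones = {
    "ZNE_A": {"host1", "target1"},
    "ZNE_B": {"host1", "host2"},
}

def can_access(initiator, target):
    # Access is granted if at least one zone contains both devices
    return any(initiator in members and target in members
               for members in zones.values())

print(can_access("host1", "target1"))  # True, via ZNE_A
print(can_access("host2", "target1"))  # False, no shared zone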


Zoning: Reasons for Use


Provides control over which nodes in a fabric
can view and access which resources
Increases network security
Limits data loss or corruption
Reduces or eliminates cross-talk between
initiator HBAs
Limits available paths


ZONING: REASONS FOR USE


Some hosts require that the number of available paths between each host and target pair be limited. For example, HP PV-Links can use only eight paths; additional paths can expose the LUN to unregulated access and possibly to data corruption.
Further, reducing the number of paths provides the following two benefits:
It simplifies administration of attached disks on a UNIX host by making the output of host platform tools (such as iostat, sanlun, and ioscan/lsdev) more concise and easier to read.
It eliminates the additional CFO time that occurs with single-path active-passive setups. For example, consider a 32-path setup where the multipathing software uses a single-path active-passive path selection algorithm, as required by DotHill SANpath on AIX hosts clustered with high-availability cluster multiprocessing (HACMP). If a cluster failover occurs, the host must attempt and fail all 16 primary paths consecutively before reaching the failed storage system's partner through a secondary path. This can take much longer than the regular failover time. Reducing the number of paths to eight or fewer can prevent this increased downtime.
With fewer paths to monitor, zoning results in a limited number of registered state change notifications (RSCNs) to a given group of devices on a storage area network (SAN), thereby improving SAN availability and performance.
By allowing data access to be restricted, zoning enhances the security of the cluster.


Zoning: Reasons for Use (Cont.)


Reduces fabric service traffic
Increases overall stability
Separates tape-based SAN traffic from disk-based SAN traffic
Separates NetApp SAN traffic from other vendors' SAN traffic (recommended)

ZONING: REASONS FOR USE (CONT.)


Whenever a change occurs in the name server, such as a device addition to, or removal from,
the fabric, the fabric OS generates a State Change Notification (SCN). In the absence of
zoning, an SCN is sent to all devices in the fabric, with each device querying the name server
to determine how the membership of the fabric has changed. This process occurs even if the
fabric change does not affect the device being notified. Especially in large fabrics, this can
result in a significant amount of fabric service traffic (although typically for only a short
time).
For instance, if a new initiator joins the fabric, there is little reason to notify all the other
initiators of the change because initiators do not usually communicate with each other. An
equivalent situation exists with targets. Because targets generally do not communicate with
each other, they have little use in being notified about the addition or removal of another
target.
Although all devices are supposed to handle SCN traffic without affecting normal operation,
this is not always the case. Thus, the overall stability of the fabric increases after zoning is
implemented: the fabric can restrict SCN delivery to only those devices in zones which
contain the added device. The same holds true when a device is removed from the fabric. The
list of available devices is restricted to just those of interest to the initiator, rather than all
devices in the fabric.


Zone Configuration
To configure a zone:
Configure the default zoning mode
Identify the type of zone to create
Create the zone
Add fabric objects to the zone
Create the zone configuration
Identify how to enforce zoning
This appendix focuses on Brocade's Fabric OS 6.2

ZONE CONFIGURATION


Default Zoning Mode


The default zoning mode determines what occurs if no zoning is implemented or if there is no effective zone configuration
Choices:
All access
No access
To set the default zoning mode:
switch> defzone --noaccess [--allaccess]
To view the default zoning mode:
switch> defzone --show

DEFAULT ZONING MODE


Zone Types
Different vendors have different types of zones
Brocade with Fabric OS 6.2:

Regular zones
Broadcast zones
Traffic isolation zones
QoS zones
Redirection zones

This module will focus on regular zones


ZONE TYPES
Regular zones control access between devices.
A broadcast zone restricts broadcast packets (well-known address 0xFFFFFF) to only those devices that are members of the broadcast zone. A broadcast zone does not by itself allow access among its members; if you want to allow or restrict access between any devices, you must create regular zones for that purpose.
A traffic isolation zone is an implementation of the traffic isolation routing feature, which restricts ISL communication between switches by using failover or load-balancing techniques.
Quality of Service (QoS) zones allow administrators to assign high or low priority to prioritize traffic.
Frame redirection zones provide a means to redirect traffic flow between a host and a target to virtualization and encryption applications, so that those applications can perform without the host and target having to be reconfigured.
Please see the Fabric OS Administrator's Guide for your version of Brocade's Fabric OS.


Best Practices of Zoning


Always zone using the switch with the highest Fabric OS level
Zone using the core switch rather than an edge switch
Zone using an enterprise-class platform rather than a switch
NetApp recommends creating a separate zone with a 1:1 mapping between an initiator port and a target port
If this is not possible, create zones from each initiator port to all target ports

BEST PRACTICES OF ZONING


Zoning: Naming Conventions


If working on an existing SAN, use the
established naming convention
Naming convention goals
Simple
Meaningful
Consistent

Use A-Z, a-z, 0-9, _


Zoning Implementation Strategies for Brocade
SAN Fabrics
Search for Zoning_Imp_WP_00.pdf at
www.brocade.com

2009 NetApp. All rights reserved.

ZONING: NAMING CONVENTIONS


Zoning Implementation Strategies for Brocade SAN Fabrics:
http://www.brocade.com/san/pdf/whitepapers/Zoning_Imp_WP_00.pdf


Zoning: Naming Conventions (Cont.)


Alias prefixes:
SRV: Server
STO: Storage
TPE: Tape
VRA: Virtualization appliance
Examples:
SRV_MAILSERVER_SLT5: Server with hostname mailserver, HBA in PCI slot 5
TPE_LTO9_SNG: Tape with LTO drive number 9, single-attached
STO_DSK3456_5C: Storage unit with serial ID 3456, port C on the fifth card
STO_FAS3050c_C1_1a: Controller 1 of a FAS3050c, connected to target port 1a

ZONING: NAMING CONVENTIONS (CONT.)


Zoning: Naming Conventions (Cont.)


For dual fabrics, append a fabric identifier
SRV_MAILSERVER_SLT5_fabA
SRV_MAILSERVER_SLT5_fabric_A


ZONING: NAMING CONVENTIONS (CONT.)


Zoning: Naming Conventions (Cont.)


Zones:
The zone name should identify the host or cluster
Use the ZNE_ prefix
Include the fabric name if applicable
Zone for a host:
ZNE_MAILSERVER_SLT5 (single fabric)
ZNE_MAILSERVER_SLT5_fabric_A
Zone for a cluster:
ZNE_MSCS_EXCH_fabric_A, which would contain:
SRV_MSCS_EXCH1_fabric_A
SRV_MSCS_EXCH2_fabric_A
STO_x

ZONING: NAMING CONVENTIONS (CONT.)


Zoning: Naming Conventions (Cont.)


Zone sets / configurations:
Use the PROD_ prefix for the primary fabric configuration:
PROD_A or PROD_FABRIC_A
PROD_B or PROD_FABRIC_B
For temporary configurations, use descriptive names:
BACKUP_A
RECOVERY_2
TEST_18jun02

ZONING: NAMING CONVENTIONS (CONT.)


Zoning: Naming Conventions (Cont.)


Valid characters: a-z, A-Z, 0-9, $-^_
Alias names cannot use a hyphen (-); use an underscore (_) instead
NetApp Zoning Central: search for Zoning.html at http://eng-web.rtp.netapp.com

ZONING: NAMING CONVENTIONS (CONT.)


NetApp Zoning Central

Browse to http://eng-web.rtp.netapp.com/techpubs/docs/zoning/zoning.html


Zone Creation
To create a regular zone:
switch> zonecreate "zonename", "member[;member...]"
switch> cfgsave
zonename is the logical name of the zone
member is a single member or a list of members; each member can be:
A domain,port pair
A device WWNN
A device WWPN
A zone alias (a collection of device WWNNs or WWPNs created with the aliCreate command)
To add members to the zone:
switch> zoneadd "zonename", "member[;member...]"
switch> cfgsave
Save the configuration (cfgsave) after any modifications

ZONE CREATION
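For example, a minimal sketch using the commands above; the alias names, zone name, and WWPNs are hypothetical, following the naming conventions described earlier:

switch> alicreate "SRV_MAILSERVER_SLT5", "10:00:00:00:c9:2b:70:2c"
switch> alicreate "STO_FAS3050a_0c", "50:0a:09:83:87:d9:2b:7d"
switch> zonecreate "ZNE_MAILSERVER_SLT5", "SRV_MAILSERVER_SLT5; STO_FAS3050a_0c"
switch> cfgsave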


Zoning: Zone Sets


A zone set consists of one or more zones
A zone set can be activated or deactivated as
a single entity across all switches in the fabric
Only one zone set can be activated at any time
A zone can be a member of more than one
zone set
Zone set = Brocade configuration


ZONING: ZONE SETS


Zoning: Zone Sets (Cont.)


[Diagram: two zone sets on the same switch. Zone set ZNE_Host1_fab_a contains zones ZNE_Host1_fab_a and ZNE_Host2_fab_a; zone set NEW_fab_a contains zones ZNE_Host1_fab_a_new and ZNE_Host2_fab_a_new. Each zone groups a host alias (SRV_Host1_1a or SRV_Host2_1a) with storage aliases (STO_FAS3050a_0c, STO_RAID1_p1, STO_RAID2_p1).]

ZONING: ZONE SETS (CONT.)


Zone Configuration
To create a zone configuration:
switch> cfgcreate "cfgname", "member[;member...]"
switch> cfgsave
cfgname is the logical name of the zone configuration
member is a zone name or a list of zone names
To enable and disable a zone configuration:
switch> cfgenable cfgname
switch> cfgdisable cfgname
To view the active configuration:
switch> cfgactvshow
You can also add and remove members and delete zone configurations

ZONE CONFIGURATION
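Continuing the hypothetical example from the zone creation page:

switch> cfgcreate "PROD_FABRIC_A", "ZNE_MAILSERVER_SLT5"
switch> cfgsave
switch> cfgenable "PROD_FABRIC_A"
switch> cfgactvshow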


Zone Enforcement
Zones have a list of fabric objects
Zone enforcement can be:
Software-enforced
Hardware-enforced

ZONE ENFORCEMENT


Software-Enforced Zoning
Called "soft zoning," fabric-based zoning, or session-based zoning
Prevents hosts from discovering unauthorized target devices
Ensures that the name server does not return any information to an unauthorized initiator in response to a name server query
Does not prohibit access to the device; it just limits what a name service query returns

SOFTWARE-ENFORCED ZONING


Hardware-Enforced Zoning
Called "hard zoning" or ASIC-enforced zoning
Prevents hosts from discovering and communicating with unauthorized target devices
Each frame is checked by hardware (ASIC) before the packet is delivered

HARDWARE-ENFORCED ZONING


Zone Enforcement Identification


Brocade's Fabric OS uses hardware-enforced zoning (for each zone) whenever the fabric membership or zone configuration changes
NOTE: Connecting a device specified by WWN to a port specified in a domain,port zone results in loss of hardware enforcement in both zones
To identify the current zone enforcement mechanism:
switch> portzoneshow

ZONE ENFORCEMENT IDENTIFICATION


Module Summary


MODULE SUMMARY


Module Summary
In this module, you should have learned to:
Describe FC details important to configure and
troubleshoot FC topologies


MODULE SUMMARY


Introduction to
FCoE
Appendix 2
SAN Implementation Workshop

INTRODUCTION TO FCOE


Module Objectives
By the end of this module, you should be able to:
Distinguish the differences between Fibre
Channel (FC), Fibre Channel over Ethernet
(FCoE) and Internet SCSI (iSCSI) protocols
Describe path implementation with FCoE
connectivity
Describe how to configure FCoE ports on
Windows and NetApp systems


MODULE OBJECTIVES


Protocol Stacks

OSI layer         iSCSI      FCoE            FC
--------------    --------   -------------   -------------
6 Presentation    SCSI       FC-4 Protocol   FC-4 Protocol
5 Session         iSCSI      FC-3 Service    FC-3 Service
4 Transport       TCP        FC-2 Frame      FC-2 Frame
3 Network         IP         FCoE Mapping    -
2 Data Link       MAC        MAC             FC-1 Data
1 Physical        Physical   Physical        FC-0 Physical

PROTOCOL STACKS


FCoE Connectivity Configuration


The following are the steps to configure an FCoE SAN:
1. Determine the FC topology
2. Cable the nodes
3. Configure the target(s)
4. Configure the initiator(s)
5. Verify connectivity

FCOE CONNECTIVITY CONFIGURATION


FCoE Topology


FCOE TOPOLOGY


FCoE: Past Topologies


[Diagram: an FC initiator connected through an FC fabric to an FC target, alongside a separate Ethernet network carrying Web traffic.]
Before FCoE, Ethernet and FC were discrete networks

FCOE: PAST TOPOLOGIES


FCoE: Current Topologies


[Diagram: an FCoE initiator connects to an FCoE-enabled switch, which attaches to both the Ethernet network and the FC fabric; an FC initiator and an FC target remain on the FC fabric.]
With FCoE and an FCoE-enabled switch, initiators can communicate with FC and FCoE networks
There are still two discrete networks

FCOE: CURRENT TOPOLOGIES


FCoE: Future Topologies


[Diagram: multiple FCoE initiators in a lossless Ethernet cloud (1 Gb or 10 Gb) connect through an FCoE-enabled switch to the Ethernet network and the FC fabric.]
Add FCoE initiators in the Ethernet cloud

FCOE: FUTURE TOPOLOGIES


FCoE: Future Topologies (Cont.)


[Diagram: the same topology with an FCoE target added to the lossless Ethernet cloud.]
Add FCoE targets in the Ethernet cloud

FCOE: FUTURE TOPOLOGIES (CONT.)


FCoE: Traffic Flow
[Diagram: blade servers or racks of servers with CNAs or software FCoE send FCoE frames over 10-Gb Ethernet to an FCoE-enabled switch (Cisco Nexus 5020 available; Brocade soon), which forwards native FC frames into a 4-Gb or 8-Gb FC fabric containing FC targets.]

FCOE: TRAFFIC FLOW


Current FC + FCoE = SINGLE SAN


[Diagram: an FCoE initiator and an FC initiator both reach an FC target through an FCoE-enabled switch and the FC fabric, over a standard FC session.]
Consolidation without management issues
Stateless; the switch just encapsulates and decapsulates
Single namespace, single management space

CURRENT FC + FCOE = SINGLE SAN


Future FC + FCoE = SINGLE SAN


[Diagram: the same consolidation with an FCoE target attached on the Ethernet side as well.]
Consolidation without management issues
Stateless; the switch just encapsulates and decapsulates
Single namespace, single management space

FUTURE FC + FCOE = SINGLE SAN


Network Topology
[Diagram: Windows hosts with Converged Network Adapters (CNAs) connect through an FCoE 10-GbE switch, which requires lossless Ethernet, to Storage System 1; iSCSI traffic shares the same network.]

NETWORK TOPOLOGY
Requires lossless Ethernet: either physical channel pause (non-consolidated) or Priority-based Flow Control (PFC, IEEE 802.1Qbb)
Requires mini jumbo frames (~2.5 KB), because there is no segmentation of FC frames
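The mini jumbo requirement follows from FC frame sizes. The arithmetic below is illustrative only; the exact per-field overheads are an assumption and vary slightly by implementation:

fc_frame_max = 4 + 24 + 2112 + 4 + 4   # SOF + header + max payload + CRC + EOF = 2,148 bytes
fcoe_overhead = 14 + 4 + 14 + 4        # Ethernet header + 802.1Q tag + FCoE header + FCS (approximate)
print(fc_frame_max + fcoe_overhead)    # ~2,184 bytes: larger than a 1,500-byte MTU, hence ~2.5 KB frames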


FCoE Adapter Types

Since FCoE implements SAN over Ethernet, administrators have choices in how to connect to an FCoE network:
[Diagram: two protocol stacks compared. With a 10-Gb NIC and an FCoE soft initiator, the SCSI, FCoE-layer, and net-device processing run in server software above the Ethernet driver. With an FCoE hardware initiator (CNA), that processing is offloaded to the NIC/HBA.]

FCOE ADAPTER TYPES


For Data ONTAP 7.3.2, the product requirements for this HBA are:
Support a product-quality FCoE target function. The level of features and quality is expected to be comparable to an 8G FC target HBA.
Support a demo-quality NIC function, to be used by the iSCSI software target. Although the driver for the NIC function was implemented in 7.3.2, it is disabled by default, so a user cannot use the NIC function. Enabling the NIC function requires going through the PVR process.


Switch Configuration


SWITCH CONFIGURATION


Switch Configuration Overview


Configuring a Cisco Nexus 5020 for FCoE
3 step process for loading and enabling FCoE
Load the license
Install license <licensefile>

Enable the FCoE feature


Via Gui or feature fcoe

Reboot the switch


Reload

2009 NetApp. All rights reserved.

SWITCH CONFIGURATION OVERVIEW


FCoE Switch: Cisco Nexus 5020


FCOE SWITCH: CISCO NEXUS 5020


Switch Configuration Steps


How to configure a port to accept FCoE:
Enter configuration mode
# conf t
Create a virtual interface group (VIG)
(config)# int vig 4
Bind a physical interface to the VIG
(config-if)# bind int eth 1/4
Create a virtual FC port on the VIG, then enable the port
(config-if)# int vfc 4/1
(config-if)# no shut
Create a virtual Ethernet port on the VIG, then enable the port
(config-if)# int veth 4/1
(config-if)# no shut
Exit the VIG configuration mode
(config-if)# exit
Force-enable FCoE on the physical Ethernet port
(config)# int eth 1/4
(config-if)# fcoe mode on

SWITCH CONFIGURATION STEPS


Switch Configuration Example


-----------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status     SFP  Oper  Oper    Port
                 Mode   Trunk                  Mode  Speed   Channel
                        Mode                         (Gbps)
-----------------------------------------------------------------------------
fc2/1      1     auto   on     trunking   swl  TE    4       --
fc2/2      1     auto   on     sfpAbsent  --   --    --      --

-----------------------------------------------------------------------------
Interface     Status      IP Address  Speed  MTU   Port Channel
-----------------------------------------------------------------------------
Ethernet1/27  sfpIsAbsen  --          --     1500  --
Ethernet1/28  notConnect  --          10000  1500  --
Ethernet1/34  up          --          10000  1500  --

-----------------------------------------------------------------------------
Interface      Status    IP Address  Speed  MTU   Port Channel
-----------------------------------------------------------------------------
vethernet17/1  init      --          --     1500  --
vethernet19/1  holdDown  --          --     1500  --
vethernet34/1  up        --          10000  1500  --

-----------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk               Mode  Speed   Channel
                        Mode                      (Gbps)
-----------------------------------------------------------------------------
vfc17/1    1     F      --     down    --   --    --      --
vfc18/1    1     F      --     init    --   --    --      --
vfc34/1    1     F      --     up      --   F     auto    --

SWITCH CONFIGURATION EXAMPLE


Target Configuration


TARGET CONFIGURATION


Data ONTAP as an FCoE Target


Data ONTAP 7.3.1 and later has support for
FCoE
Data ONTAP 7.3.1 supported the following
configuration:

VMware ESX 3.5U3,


RHEL 5.3, Windows
2003/2008; QLogic,
Emulex CNA

Cisco Nexus 5020

NetApp FAS/V-Series with


QLogic Target
Card(X1138A-R6)

2009 NetApp. All rights reserved.

DATA ONTAP AS AN FCOE TARGET


Here are the three key project deliverables:
1. The host initiators are being qualified by the SAN Engineering/QA teams. These are third-party HBAs referred to as CNAs (Converged Network Adapters). The two initial vendors are QLogic (part number QLE8042) and Emulex (part numbers 21000 and 21002). Both CNAs are two-port, and both support a maximum of 4-Gb FC bandwidth (next-generation cards will support a maximum of 10-Gb FC bandwidth). Both CNAs autonegotiate to 10GbE. Both CNAs use SFP+ pluggable modules and support copper and optical SFP+ pluggables. The NetApp qualifications are not tied to any new Host Utilities or software release. The IMT will be updated with a new FCoE protocol once the qualifications complete. The initial OSs to support FCoE are Windows, ESX, and Linux.
2. The Cisco Nexus 5020 switch. This switch supports 40 10GbE ports capable of FCoE and, through an optional plug-in component, 8 native FC ports. The 5010 is planned for early 2009.
3. The NetApp FCoE target expansion card (internal code name Mercury). This new adapter is tied to Data ONTAP 7.3.1. The internal part number is X1138A-R6. This is essentially the same QLogic CNA initiator card. Sales and support for this target HBA are purely via PVR. This is a gen-1 card from QLogic, so there are future cards on the horizon to reduce cost and power consumption.


First Generation CNA functional operation

[Diagram: the CNA exposes a 10-Gb Ethernet port feeding separate FCoE-engine, NIC, and FC components on the card, then connects over PCIe to the host. Regular Ethernet traffic flows through to the NIC and the existing host NIC driver; all MAC, FCoE, and Ethernet handling happens on the card, making it transparent to the existing host/target FC driver.]

FIRST GENERATION CNA FUNCTIONAL OPERATION


Data ONTAP as an FCoE Target - Phase 2


NetApp introduces Phase 2 of FCoE support in Data ONTAP 7.3.2
Data ONTAP 7.3.2 supported the following target configuration:
QLogic QLE8152 dual-port 10-GbE-to-PCIe CNA (X1109A-R6)
Brocade BR 1020 dual-port 10-Gbps FCoE/CEE CNA (X1113A-R6)

DATA ONTAP AS AN FCOE TARGET - PHASE 2


2nd Generation CNA functional operation


[Diagram: the same functional flow as the first-generation card, but the FCoE engine, NIC, and FC functions are combined into a single new integrated component on the card.]

2ND GENERATION CNA FUNCTIONAL OPERATION


Configuration Examples - Target

system> fcp show adapters
Slot:          4a
Description:   Fibre Channel Target Adapter 4a
               (Dual-channel, QLogic 2432 (8432) rev. 3)
Adapter Type:  Local
Status:        ONLINE
FC Nodename:   50:0a:09:80:87:d9:2b:7d (500a098087d92b7d)
FC Portname:   50:0a:09:83:87:d9:2b:7d (500a098387d92b7d)
Standby:       No

Slot:          1a
Description:   Fibre Channel Target Adapter 1a
               (Dual-channel, QLogic 2532 (2562) rev. 2)
Adapter Type:  Local
Status:        ONLINE
FC Nodename:   50:0a:09:80:87:d9:2b:7d (500a098087d92b7d)
FC Portname:   50:0a:09:85:87:d9:2b:7d (500a098587d92b7d)
Standby:       No

CONFIGURATION EXAMPLES - TARGET


Configuration Examples - Target (Cont.)


system> fcp show adapters -v 4a
Slot:                    4a
Description:             Fibre Channel Target Adapter 4a
                         (Dual-channel, QLogic 2432 (8432) rev. 3)
Status:                  OFFLINED BY USER/SYSTEM
Host Port Address:       0x000000
Firmware Rev:            4.5.2
PCI Bus Width:           64-bit
PCI Clock Speed:         33 MHz
FC Nodename:             50:0a:09:80:87:d9:2b:7d (500a098087d92b7d)
FC Portname:             50:0a:09:83:87:d9:2b:7d (500a098387d92b7d)
Cacheline Size:          16
FC Packet Size:          2048
SRAM Parity:             Yes
External GBIC:           No
Data Link Rate:          0 GBit
Adapter Type:            Local
Fabric Established:      No
Connection Established:  Loop
Mediatype:               auto
Partner Adapter:         None
Standby:                 No
Target Port ID:          0x3

CONFIGURATION EXAMPLES - TARGET (CONT.)


Initiator Configuration


INITIATOR CONFIGURATION


Windows Initiator

Windows versions
  2003 x32 and x64
  2008 x32 and x64, including Hyper-V

Drivers
  Both QLogic and Emulex FCoE CNA drivers are available for download at the vendor's website
  NOTE: FCoE CNAs cannot be installed in the same box as FC HBAs.

Host Utilities
  WHU 5.0 supports the FCoE CNAs
  The correct SNIA API is included with the drivers
  Both FCoE cards use the same HBA timeout setting as their FC HBA counterparts.
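Once the CNA driver and WHU are in place and the host is zoned to the target, the CNA should log in to the storage system like any ordinary FC initiator. A minimal check from the Data ONTAP side (the WWPN and igroup shown are hypothetical examples; output abbreviated):

system> fcp show initiators
Initiators connected on adapter 4a:
Portname                     Group
21:00:00:1b:32:aa:bb:cc      win2008_igroup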
QLogic and Emulex 4-Gb HBAs represented over $10M in revenue in FY2008, split roughly 50/50 between QLogic and Emulex.


Configuration Examples - Host


VMware Initiator Support

VMware versions
  ESX 3.5u2
  ESXi 3.5u2
  ESX 3.5u3
  ESXi 3.5u3

Drivers
  ESX 3.5 Update 2 - Both QLogic and Emulex FCoE CNA drivers are available for download at http://www.vmware.com
  ESX 3.5 Update 3 - Drivers should be inbox (included with the release)
  NOTE: Just like with Windows, FCoE CNAs cannot be installed in the same box as FC HBAs.

Host Utilities
  EHU 5.0 supports the FCoE CNAs
  The QLogic SNIA API still needs updating (EHU 5.0 ships v4.00build12); see BURT #312983 for more information.
  The correct Emulex SNIA API is included in EHU 5.0
  Both FCoE cards use the same HBA timeout setting as their FC HBA counterparts. (This information will be in the EHU 5.0 TOI.)


Linux Initiator Support

Linux versions
  RHEL 5u2 x32 and x64

Drivers
  Both QLogic and Emulex FCoE CNA drivers are available for download at the vendor's website
  NOTE: FCoE CNAs cannot be installed in the same box as FC HBAs.

Host Utilities
  HU 4.1.2
  Both FCoE cards use the same HBA timeout setting as their FC HBA counterparts.

Module Summary


Module Summary
In this module, you should have learned to:
  Distinguish the differences among the Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and Internet SCSI (iSCSI) protocols
  Describe path implementation with FCoE connectivity
  Describe how to configure FCoE ports on Windows and NetApp systems


Internet Storage Name Service
Appendix 3
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
  Describe Internet Storage Name Service (iSNS) on Windows Server 2008 R2
  Configure a storage system to use an iSNS server


Internet Storage Name Service Overview


Internet Storage Name Service (iSNS)

iSNS provides services similar to those found in Fibre Channel networks
  iSNS server: a repository of iSCSI nodes
    Initiators
    Targets
    Management
  Dynamic repository = nodes are pinged to determine whether they are still present on the fabric
  Windows Server 2008 R2 integrates iSNS with Active Directory

Discovery Domains

iSNS organizes initiators and targets into Discovery Domains (DDs)
  DDs are like zones in Fibre Channel fabrics
  DDs partition the resources in your SAN

[Screen shot: the Default DD and a new DD named DD-W2KR2]


Discovery Domain Sets

Discovery Domains can then be added to a Discovery Domain Set (DDS)
  A DDS groups one or more DDs
  A DDS is then enabled or disabled (activating or deactivating the DDs within it)

[Screen shot: DDS-W2KR2 containing the DD-W2KR2 discovery domain]


Windows Server 2008 R2's iSNS


Install iSNS Feature

Add the iSNS Server feature and complete the installation

iSNS Interfaces

You can also use Storage Explorer, found under the Start menu


Create a Discovery Domain


Discovery Domain
The new discovery domain


Create a Discovery Domain Set


The new initiator in the discovery domain


Adding DD to a DDS

The new discovery domain set is disabled

Drag the discovery domain to the discovery domain set


Adding DD to a DDS (Cont.)

Discovery Domain added to the Discovery Domain Set

Enable the Discovery Domain Set by clicking here


Configuration Complete

The default Discovery Domain Set is disabled

Nodes register with the default Discovery Domain if it is present


iSNS on Data ONTAP

Data ONTAP has supported iSNS since Data ONTAP 6.4
Data ONTAP 7.3.1.1 supports:
  iSNS communication over IPv4 and IPv6

To configure iSNS:
  Configure the iSNS service
  Start the iSNS service
  Verify that the iSNS service registered successfully with the iSNS server


Configuring iSNS Service

Configure the iSNS service, pointing at the IP address of the iSNS server (the W2K8 R2 host):
system> iscsi isns config 10.254.132.63

Start the iSNS service:
system> iscsi isns start

Verify the iSNS service:
system> iscsi isns show
iSNS Entity id:       NOT CONFIGURED
iSNS Server ip-addr:  10.254.132.63
iSNS Status:          Enabled
(Still hasn't registered... wait a moment, then check again.)
system> iscsi isns show
iSNS Entity id:       isns:00000003
iSNS Server ip-addr:  10.254.132.63
iSNS Status:          Enabled


Configuration Complete

The storage system successfully registered with the iSNS server

If a node registered with the default Discovery Domain, drag and drop it into your customized DD


Windows/NIC Implementation (Cont.)

iSCSI Initiator Properties dialog - Discovery tab

Add the iSNS server to poll
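The same iSNS registration can be scripted with the built-in iscsicli utility instead of the Properties dialog; a minimal sketch, assuming the iSNS server address used earlier in this module:

C:\> iscsicli AddiSNSServer 10.254.132.63
C:\> iscsicli ListiSNSServers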


Windows/NIC Implementation (Cont.)

iSCSI Initiator Properties dialog - the storage system is discovered


Module Summary


Module Summary
In this module, you should have learned to:
  Describe Internet Storage Name Service (iSNS) on Windows Server 2008 R2
  Configure a storage system to use an iSNS server


Multiple Connection Sessions
Appendix 4
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
Describe multiple connection sessions (MCS)
in Windows Server 2008 R2


Connections versus Sessions

Microsoft's Multipath I/O (MPIO): four sessions (4 TPGs) with one connection (1 interface) each
  SESSION 64 - CONN 64/1
  SESSION 65 - CONN 65/1
  SESSION 66 - CONN 66/1
  SESSION 67 - CONN 67/1

Microsoft's Multiple Connections per Session (MCS): one session (1 TPG) with four connections (4 interfaces)
  SESSION 68 - CONN 68/1, CONN 68/2, CONN 68/3, CONN 68/4

MCS


iSCSI MCS on Windows

To enable iSCSI MCS on Windows (steps 2 and 4 can also be driven from the command line; see the sketch after this list):
1. Install and enable NICs on the target
2. Verify IP connectivity
3. Create a new TPG on the storage system and add the necessary interfaces
4. Set the Microsoft software initiator to discover both interfaces with the new TPG
5. Connect to the TPG
6. Create more than one connection with the Microsoft software initiator
7. Verify multiple paths
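A minimal command-line sketch of steps 2 and 4, assuming the target interface addresses that appear later in this module (10.254.144.75 and 10.254.144.81):

C:\> ping 10.254.144.75
C:\> ping 10.254.144.81
C:\> iscsicli AddTargetPortal 10.254.144.75 3260
C:\> iscsicli ListTargets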

Create a New Target Portal Group

To create a TPG:
system> iscsi tpgroup create mytp

To assign interfaces to a TPG (force it, because the interfaces already belong to another TPG):
system> iscsi tpgroup add -f mytp e0b e0c

Verify the TPG:
system> iscsi tpgroup show
TPGTag  Name          Member Interfaces
1       mytp          e0b, e0c
1000    e0a_default   e0a
1001    e0b_default   (none)
1002    e0c_default   (none)
1003    e0d_default   e0d

Place more than one interface into the TPG to allow failure of a connection without disrupting the iSCSI session

Discovery
Discover the storage system


Discovery (Cont.)
Discover the storage system


Connect to the Target Portal Group

Log on to the TPG

Choose one of the portals (that is, interfaces) that you want to be the first connection


Configuring MCS

One session with one connection

An error occurs if Log on is pressed twice


Configuring MCS (Cont.)

Verify the current sessions (only one session):
system> iscsi session show
Session 58
Initiator Information
  Initiator Name: iqn.1991-05.com.microsoft:windowsmachine
  ISID: 40:00:01:37:00:00

Verify the current connections (only one connection; no new connections yet):
system> iscsi connection show -v
Session connections
  Connection 58/1:
    State: Full_Feature_Phase
    Remote Endpoint: 10.254.132.63:50055
    Local Endpoint: 10.254.144.75:3260
    Local Interface: e0b


Configuring MCS (Cont.)

Click Connections (one connection so far)


Configuring MCS (Cont.)


Set the load balance policy and click Apply


Configuring MCS (Cont.)

Add another connection

Select a different source IP and target portal


Verify Paths

The new connection shows up: one session, two connections


Verify Paths (Cont.)

Verify the current sessions (still only one session):
system> iscsi session show
Session 58
Initiator Information
  Initiator Name: iqn.1991-05.com.microsoft:windowsmachine
  ISID: 40:00:01:37:00:00

Verify the current connections (now two connections):
system> iscsi connection show -v
...
Session connections
  Connection 58/1:
    State: Full_Feature_Phase
    Remote Endpoint: 10.254.132.63:50055
    Local Endpoint: 10.254.144.75:3260
    Local Interface: e0b
  Connection 58/2:
    State: Full_Feature_Phase
    Remote Endpoint: 10.254.132.64:50061
    Local Endpoint: 10.254.144.81:3260
    Local Interface: e0c


Module Summary


Module Summary
In this module, you should have learned to:
Describe multiple connection sessions (MCS)
in Windows Server 2008 R2


Storage Manager for SANs
Appendix 5
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
  Explore Windows Server 2008 R2's Storage Manager for SANs tool to create a LUN


Storage Manager for SANs


Storage Manager for SANs

Windows Server 2003 R2 introduced a new tool for SAN administrators to manage LUNs on enterprise storage systems
Windows Server 2008 R2 also makes the Storage Manager for SANs (SMfS) tool available to SAN administrators
To use SMfS (see the DiskRAID sketch after this list):
  Install the SMfS feature
  Install the Data ONTAP VDS Hardware Provider
  Configure LUNs with SMfS
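SMfS drives the Virtual Disk Service (VDS), the same stack used by Microsoft's command-line DiskRAID utility. Once the hardware provider is installed, a quick way to confirm that Windows can see it is a sketch like the following (the provider name shown is illustrative, and DiskRAID subcommand support varies by Windows version and provider):

C:\> diskraid
DISKRAID> list providers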


Install SMfS Feature

Add the SMfS feature and complete the installation

Install VDS Hardware Provider

NetApp provides, at the NOW site, a VDS Hardware Provider that communicates with NetApp storage systems
  Works with FC configurations using Data ONTAP 7.0.3 and later
  Works with iSCSI configurations using Data ONTAP 7.1 and later
  Reboot after the installation


Configure VDS Hardware Provider

Launch the VDS Hardware Provider from Control Panel

Add the storage system to the provider


Storage Manager for SANs


Manage Server Connections


Create a LUN


Create a LUN (Cont.)


Create a LUN (Cont.)


Create a LUN (Cont.)


Create a LUN (Cont.)


Create a LUN (Cont.)

The wizard creates:
1. A volume
2. A LUN with new igroup(s)


Module Summary


Module Summary
In this module, you should have learned to:
  Explore Windows Server 2008 R2's Storage Manager for SANs tool to create a LUN


Hyper-V Introduction
Appendix 6
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
  Describe Microsoft's Hyper-V
  Implement Hyper-V on Microsoft Windows Server 2008 R2
  Configure a LUN for Hyper-V to host VMs


Hyper-V


What is Hyper-V?
Hyper-V (previously code-named Viridian) is Microsoft's new server virtualization technology
Hypervisor model
  Does not run on top of a host OS
  Loads at boot time
  Creates a layer of virtualization between the server hardware and the operating systems it hosts
Optional feature of Windows Server 2008
  Will also be available as a stand-alone product called Microsoft Hyper-V Server
Free feature
  Included with the Windows Server 2008 license


Windows Server 2008 R2 Role

[Screen shot: the Hyper-V role selected]


Windows Server 2008 R2 Role (Cont.)


Windows Server 2008 R2 Role (Cont.)

Selected interfaces to be used with VMs


Windows Server 2008 R2 Role (Cont.)


Windows Server 2008 R2 Role (Cont.)


Hyper-V Administration


How is Hyper-V Different from VMware ESX?

VMware ESX Server - integrated hypervisor
  Drivers run within the hypervisor
  Requires ESX virtual device drivers
  [Diagram: a console OS plus VM 1 and VM 2 (guest OSs) sit on the VMware hypervisor, which contains the drivers, on the physical hardware]

Microsoft Hyper-V - micro-kernel hypervisor
  Drivers run within the guest and parent OSs
  Requires VSP (modified) parent OS drivers
  Requires VSC (modified) guest OS drivers
  [Diagram: the parent OS runs the virtualization stack and its drivers; VM 1 and VM 2 (child OSs) run their own drivers; all sit on the Microsoft hypervisor, on the physical hardware]


How the I/Os are Processed

Microsoft Hyper-V
  [Diagram: the parent OS (VM0) contains the I/O service and the device drivers; the guest VMs (VM1-VMn) run their guest OSs and apps above the hypervisor and reach the shared devices through the parent OS]
  Requires special drivers, installed as part of the integration components

VMware ESX Server
  [Diagram: an optional console OS (VM0) sits alongside the guest VMs; the I/O service and device drivers live inside the hypervisor, which owns the shared devices]
  Does not require specialized drivers in the guest OS


Provisioning Storage for Hyper-V

Create a VM using NetApp storage:
  Create a new VM and click Next


Provisioning Storage for Hyper-V (Cont.)

Create a VM using NetApp storage (Cont.):
  Place the VM's configuration file on a NetApp LUN (one that is formatted and mounted) for better protection


Provisioning Storage for Hyper-V (Cont.)


Create a VM using NetApp storage (Cont.)


Provisioning Storage for Hyper-V (Cont.)

Create a VM using NetApp storage (Cont.):
  Create the Virtual Hard Disk (VHD) file on a NetApp LUN


Provisioning Storage for Hyper-V (Cont.)


Create a VM using NetApp storage (Cont.)


Provisioning Storage for Hyper-V (Cont.)

Create a VM using NetApp storage (Cont.):
  The new VM uses a VHD on a NetApp LUN


Module Summary


Module Summary
In this module, you should have learned to:
  Describe Microsoft's Hyper-V
  Implement Hyper-V on Microsoft Windows Server 2008 R2
  Configure a LUN for Hyper-V to host VMs


SAN Troubleshooting on Windows
Appendix 7
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
Review diagnostic tools and techniques
available for Windows
Define and configure queue depths on
Windows


Windows SAN Troubleshooting


Windows Troubleshooting
To troubleshoot a SAN environment in Windows:
  View Fibre Channel statistics
  Verify partitions within Windows
  Use the scripts provided in the Host Utilities Kit (HUK)
  Use the SnapDrive data collection utility to capture diagnostic information (if available)


View FC Statistics
To view FC statistics in Windows Server 2008:
  Select the HBA, right-click, and select Properties
  Select the tab to view statistics


View FC Statistics (Cont.)

To view FC statistics in Windows Server 2008:
  [Screen shot: port statistics]


View FC Statistics (Cont.)

To view FC statistics in Windows Server 2003:
  [Screen shots: a QLogic example and an Emulex example]


Verify Partition Configuration

To verify the partition configuration from Windows:
C:\> wmic partition get BlockSize, StartingOffset, Name, Index, Type
BlockSize  Index  Name                   StartingOffset  Type
512        0      Disk #1, Partition #0  33571840        GPT: Basic Data
512        0      Disk #2, Partition #0  65536           Installable File System
512        0      Disk #0, Partition #0  32256           Unknown
512        1      Disk #0, Partition #1  32901120        Installable File System
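The StartingOffset column is the key to LUN alignment: an offset that divides evenly by the storage system's 4,096-byte block size keeps host partitions and WAFL blocks aligned. Working through the rows above as an example: 65536 / 4096 = 16, so that partition is aligned; 32256 / 4096 = 7.875 (the classic 63-sector x 512-byte MBR offset), so that partition is misaligned; 33571840 / 4096 = 8196.25 and 32901120 / 4096 = 8032.5 are likewise not whole numbers.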


Host Utilities Kit Scripts

The Windows Host Utilities Kit provides the following scripts:
  brocade_info.exe - collects information about installed Brocade switches
  cisco_info.exe - collects information about installed Cisco switches
  fcconfig.exe - used by the installation program to set HBA timeout values
  fcconfig.ini - used by the fcconfig.exe program
  filer_info.exe - collects information about the storage system
  hba_info.exe - collects information about the host bus adapters (HBAs) installed on the host machine
  mcdata_info.exe - collects information about installed McDATA switches
  qlogic_info.exe - collects information about installed QLogic switches
  san_version.exe - displays the version of the Host Utilities
  windows_info.exe - collects configuration information about your OS


Windows SnapDrive Data Collection Utility

What is SnapDrvDc?
  SnapDrvDc.exe is a Windows utility that gathers Windows host and storage system information
  Downloadable at the NOW site
  The data is helpful for troubleshooting
  rsh must be enabled to use SnapDrvDc.exe
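Because SnapDrvDc.exe reaches the storage system over rsh, confirm that rsh is enabled on the controller before running a collection; a minimal sketch on the Data ONTAP side (the host must also be allowed in /etc/hosts.equiv):

system> options rsh.enable on
system> options rsh.enable
rsh.enable                   on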


Tools - SnapDrvDc Utility

1. Add the controller's DNS name here, and click Add
2. Click here to start the data collection
3. Information is gathered for the host and all storage systems you have added

Sample of Files Collected

The files are placed in a directory named NetApp_SNAPDRV_DC_XX-XX-XX
The folder is created wherever you placed the SnapDrvDc.exe file


View the Log

The following is a sample of the data collection successes and failures:
SnapDrive Data collection tool file version:2.0.0.336
=======================================================
Successfully dumped the application eventlog
Successfully dumped the system eventlog
Successfully opened registry key =
SYSTEM\CurrentControlSet\Services\Eventlog\Application\SnapDrive
Successfully read registry key information = EventMessageFile
Successfully opened registry key =
SYSTEM\CurrentControlSet\Services\SWSvc
Successfully read registry key information = DependOnGroup
Successfully opened registry key =
SYSTEM\CurrentControlSet\Services\SWSvc
...


Host Bus Adapters


HBA API
An application programming interface for the management of Fibre Channel host bus adapters
  Controlled by SNIA (Storage Networking Industry Association)
  Platform independent
  Vendor independent
  Interoperable

The HBA API is a Storage Networking Industry Association (SNIA) specification that provides a vendor-neutral interface for managing aspects of HBAs. SNIA has since turned the specification over to T11.
For more information, see http://www.snia.org/tech_activities/hba_api/ and ftp://ftp.t11.org/t11/docs/02-149v0.pdf.


Command Queues - Device to HBA

[Diagram: per-device queues for Disk 1, Disk 2, Disk 3 ... Disk N feed into the host HBA queue]

Command queue - the number of outstanding commands allowed per LUN and per target
  Devices: the command queue limit is imposed by the OS and sometimes by the initiator
  Initiators: limit the command queue per target
  Targets: one command queue serves all initiators and LUNs

Initiators are configured to limit the number of commands they send to a target and LUN. Limiting the number of outstanding commands is also called a throttle. The optimal throttle value is one that does not allow the host to overutilize target queues.


Queues Transmission Example

[Diagram: two transmission timelines - the top with queue depth = 1, the bottom with queue depth = 10]

Little's Law describes the relationship between throughput and latency. It measures throughput as work in progress divided by response time. This has implications for queues - the number of outstanding commands (per target or per LUN). As response time increases, the queue depth must become larger in order to get the same number of I/Os. To translate Little's Law into storage I/O terminology, throughput is the number of I/Os completed per measurement period (for example, per second). The maximum needed queue depth, then, is equal to the number of I/Os per second (throughput) multiplied by the response time.
The example at the top shows a queue depth of 1; the example at the bottom demonstrates a queue depth of 10. The response time (latency) is the same regardless of the queue depth, but throughput is much better when more I/Os are in flight simultaneously. Maximum performance is achieved when throughput is greatest.
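As a worked example of that relationship (the numbers are illustrative, not from the course lab): at a response time of 5 ms (0.005 s), sustaining 2,000 I/Os per second requires a queue depth of about 2,000 x 0.005 = 10 outstanding commands, while a queue depth of 1 at the same 5-ms latency caps the host at roughly 1 / 0.005 = 200 IOPS.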


Module Summary


Module Summary
In this module, you should have learned to:
Review diagnostic tools and techniques
available for Windows
Define and configure queue depths on
Windows


NFS Datastores
Appendix 8
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
  Connect a NetApp volume as a vSphere NFS datastore


NFS Datastores


NAS Datastore

[Diagram: an ESX cluster with one datastore holding VM1-VM4, each with a VDisk0. The ESX hosts' NICs connect across the LAN, using NFS, to the NIC on a NetApp FAS array. A flexible volume on the array contains the backing files 1.vmx/1.vmdk through 4.vmx/4.vmdk.]


Data ONTAP NFS Configuration

License NFS:
system2> license add xxxxxxx

Create a volume on an existing aggregate:
system2> vol create nfs_store aggr1 20g

Turn access-time updates off:
system2> vol options nfs_store no_atime_update on

Ensure the volume's security style is UNIX:
system2> qtree security /vol/nfs_store unix


Data ONTAP NFS Configuration (Cont.)

Verify the exports:
system2> exportfs
/vol/nfs_store -sec=sys,rw,nosuid

New volumes are automatically exported (default):
system2> options nfs.export.auto-update
nfs.export.auto-update       on

Verify connectivity:
system2> ifconfig e0a
e0a: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
     inet 10.254.144.91 netmask 0xfffffc00 broadcast 10.254.147.255
     ether 00:a0:98:09:f2:42 (auto-1000t-fd-up) flowcontrol full
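If you want to restrict or persist the export rather than rely on the auto-generated rule, exportfs -p writes the rule to /etc/exports. A minimal sketch; the ESX VMkernel address shown (10.254.144.21) is a hypothetical example:

system2> exportfs -p sec=sys,rw=10.254.144.21,root=10.254.144.21 /vol/nfs_store
system2> exportfs
/vol/nfs_store -sec=sys,rw=10.254.144.21,root=10.254.144.21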

Data ONTAP NFS Configuration (Cont.)

Alternatively, you can use System Manager to create an NFS datastore:


Configure Virtual Infrastructure

To configure the virtual infrastructure with standard NICs:
1. Configure the virtual infrastructure
   Identify the local network interface(s) to use
   Configure the switch
   Configure the VMkernel(s)
   Configure jumbo frames if desired
2. Verify the NFS connection to the storage system (see the sketch after this list)
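A minimal sketch of step 2 from the ESX service console, assuming the storage interface shown earlier (10.254.144.91) and a datastore label of nfs_store (output omitted):

# vmkping 10.254.144.91
# esxcfg-nas -a -o 10.254.144.91 -s /vol/nfs_store nfs_store
# esxcfg-nas -l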


vSphere NFS Configuration

Verify the maximum number of NFS datastores (verify that it is set to 64)


vSphere NFS Configuration (Cont.)

Verify the TCP/IP heap size settings (verify 30 for the heap size and 120 for the heap maximum)


vSphere NFS Configuration (Cont.)

Verify the NFS heartbeat settings (verify 12 for the heartbeat frequency and 10 for the maximum failures)


vSphere NFS Configuration (Cont.)

Create the datastore and select Network File System


vSphere NFS Configuration (Cont.)

Enter the path to the export


vSphere NFS Configuration (Cont.)

The NFS datastore is created; now create a VM in the datastore


Module Summary


Module Summary
In this module, you should have learned to:
  Connect a NetApp volume as a vSphere NFS datastore


Server Consolidation
Appendix 9
SAN Implementation Workshop


Module Objectives
By the end of this module, you should be able to:
Migrate servers from unreliable storage to
reliable NetApp storage


Serious Datacenter Problem

You can set up new servers with SAN-booting techniques on reliable NetApp storage
However, most existing datacenters contain many pre-existing servers running on unreliable storage

With NetApp and VMware, you can change this


vCenter


vCenter
vCenter provides an enterprise management interface for the vSphere hosts in your data center


vCenter Features
vCenter provides additional ways to report on storage with Storage Views


vCenter Features (Cont.)

vCenter provides additional ways to report on storage with Storage Views (Cont.)


vCenter Plug-ins
Additional features are available through vCenter's plug-ins


Guided Consolidation


vCenter's Guided Consolidation

Guided Consolidation is a plug-in to vCenter that allows small enterprises to find servers by domain, IP, host name, or a range of IPs and convert them from physical to virtual (P2V)


Install vCenter Converter

After the installation, make the plug-in available to vCenter


Verify Configuration

Everything is installed and running properly


Analysis Phase Start


Analysis Phase

The analysis phase takes at least 24 hours to gather information about the servers you plan to consolidate


Analysis Phase Complete

The analysis phase
is complete; the server is
ready for consolidation
to NetApp storage

Or right-click and
manually consolidate
the server

ANALYSIS PHASE COMPLETE

Plan Consolidation Wizard

This example shows
only one server, but
the wizard supports
many servers

The wizard automatically
chooses the best
NetApp datastore,
or you can choose
one manually

PLAN CONSOLIDATION WIZARD

Consolidation Phase

Consolidation in progress

CONSOLIDATION PHASE

Consolidation Complete
Your important
servers are now
running on reliable
NetApp storage

CONSOLIDATION COMPLETE

Module Summary

MODULE SUMMARY

Module Summary
In this module, you should have learned to:
Migrate servers from unreliable storage to
reliable NetApp storage

MODULE SUMMARY

NPIV
Troubleshooting
Appendix 10
SAN Implementation Workshop

NPIV TROUBLESHOOTING


Module Objectives
By the end of this module, you should be able to:
Convert a VMFS .vmdk to an RDM .vmdk
Describe the VPORT creation flow and
troubleshoot NPIV

MODULE OBJECTIVES

Converting a VMFS
Disk Image to RDM

CONVERTING A VMFS DISK IMAGE TO RDM

VMFS Disk Image to RDM


NPIV requires RDM-based VMs
If you have existing VMFS-based disk images,
you can convert the VMFS image to an RDM:
# vmkfstools -i <from_disk> <to_disk> -d
<rdm:|rdmp:><device>
<from_disk> = Name of the existing VMFS .vmdk
<to_disk> = Name of the new RDM .vmdk
<rdm:|rdmp:> = Disk type to map through VMFS
(virtual or physical compatibility)
<device> = Raw device name of the SAN disk

Example:
# vmkfstools -i
/vmfs/volumes/storage1/rhel5/rhel5.vmdk
/vmfs/volumes/storage1/rhel5-rdm.vmdk -d
rdm:/vmfs/devices/disks/vmhba4:0:0:0
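To confirm that the new .vmdk is an RDM pointer rather than a flat copy, you
can query it with vmkfstools. A minimal check against the example path above;
the exact output wording can vary by ESX release:

# vmkfstools -q /vmfs/volumes/storage1/rhel5-rdm.vmdk
Disk /vmfs/volumes/storage1/rhel5-rdm.vmdk is a Non-passthrough Raw Device Mapping
Maps to: vml....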
VMFS DISK IMAGE TO RDM

NPIV Troubleshooting

NPIV TROUBLESHOOTING

Setting NPIV Settings


Verify the generated
new WWNs

Later, the VM will have
the WWNN(s) and WWPN(s)
assigned
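The generated WWNs are recorded in the VM's .vmx configuration file, so they
can also be checked from the Service Console. A quick sketch, reusing the
example VM path from elsewhere in this guide (the values shown are
illustrative):

# grep -i wwn /vmfs/volumes/storage1/rhel5/rhel5.vmx
wwn.node = "27ee000c290004a7"
wwn.port = "27ee000c290005a7,27ee000c290006a7"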

SETTING NPIV SETTINGS

LUN Masking Verification


Verify WWPN(s) added to igroups
system2> igroup show iESX_fcp
iESX_fcp (FCP) (ostype: vmware):
    10:00:00:00:c9:58:29:9a (logged in on: vtic, 0d, 0c)
      WWPN Alias(es): ESX1-FC
    10:00:00:00:c9:58:29:9b (logged in on: vtic, 0c, 0d)
      WWPN Alias(es): ESX2-FC
    27:ee:00:0c:29:00:05:a7 (logged in on: 0c, vtic, 0d)
    27:ee:00:0c:29:00:06:a7 (logged in on: 0d, vtic, 0c)

The last two initiators are the VPORT WWPNs
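The logins can also be checked from the storage system side. A hedged example
using the Data ONTAP fcp command, with the output abbreviated:

system2> fcp show initiator
Initiators connected on adapter 0c:
    Portname                  Group
    10:00:00:00:c9:58:29:9a   iESX_fcp
    27:ee:00:0c:29:00:05:a7   iESX_fcp
...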

LUN MASKING VERIFICATION

VPORT Creation Flow


Power on the VM (from the powered-off state)

If the NPIV option is off, the RDM is mounted
through the physical HBA port and VM power-on
completes

If the NPIV option is on:
1. The VMkernel requests VPORT creation
2. The driver sends an FDISC to the fabric
3. Fabric PLOGI
4. The SCSI target reports LUNs
5. The RDM is mounted through the NPIV port
6. VM power-on completes

If the switch or the HBA/firmware does not
support NPIV, the operation is unsuccessful and
the RDM falls back to being mounted through the
physical HBA port
VPORT CREATION FLOW

Verify Supported HBA(s)


From the Service Console:
# cd /proc/scsi/lpfc820
# ls
4 5
# cat 4
Emulex LightPulse Fibre Channel SCSI driver 8.2.0.30.49vmw
NetApp 111-00308 4Gb 2-port PCI-X2 Fibre Channel Adapter on PCI
bus 14 device 08 irq 145 port 0
BoardNum: 0
Firmware Version: 2.72A2 (B3F2.72A2)
Portname: 10:00:00:00:c9:58:29:9a
Nodename: 20:00:00:00:c9:58:29:9a
SLI Rev: 3
NPIV Supported: VPIs max 100 VPIs used 1
RPIs max 512 RPIs used 18

This example uses Emulex; file 4 is port 0.
Verify the supported firmware version.
"VPIs used 1" confirms that one VPORT was created.
VERIFY SUPPORTED HBA(S)

Verify Supported HBA(s) (Cont.)


From the Service Console (Cont.):
Vport List:
Vport DID 0x10801, vpi 1, state 0x20
Portname: 27:ee:00:0c:29:00:05:a7
Nodename: 27:ee:00:0c:29:00:04:a7
Link Up - Ready:
PortID 0x10800
Fabric
Current speed 4G
Physical Port Discovered Nodes: Count 4
t00 DID 010000 State 06 WWPN 50:0a:09:81:96:88:37:5d WWNN 50:0a:09:80:86:88:37:5d
t01 DID 010100 State 06 WWPN 50:0a:09:82:96:88:37:5d WWNN 50:0a:09:80:86:88:37:5d
t02 DID 010200 State 06 WWPN 50:0a:09:81:86:88:37:5d WWNN 50:0a:09:80:86:88:37:5d
t03 DID 010300 State 06 WWPN 50:0a:09:82:86:88:37:5d WWNN 50:0a:09:80:86:88:37:5d

The Vport List entry confirms VPORT usage.
VERIFY SUPPORTED HBA(S) (CONT.)

Verify Brocade Support for NPIV


Verify that NPIV is enabled on the Brocade switch:
switch> portcfgshow
Ports of Slot 0    0  1  2  3   4  5  6  7   8  9 10 11  12 13 14 15
...
NPIV capability   ON ON ON ON  ON ON ON ON  ON ON ON ON  ON ON ON ON
...

NPIV is turned on for all ports

VERIFY BROCADE SUPPORT FOR NPIV

NPIV Observed
After boot-up of an NPIV-supported VM:
switch> switchshow
...
Area Port Media Speed State   Proto
=====================================
  0    0   id    N2   Online  F-Port 50:0a:09:81:96:88:37:5d
  1    1   id    N2   Online  F-Port 50:0a:09:82:96:88:37:5d
  2    2   id    N2   Online  F-Port 50:0a:09:81:86:88:37:5d
  3    3   id    N2   Online  F-Port 50:0a:09:82:86:88:37:5d
...
  8    8   id    N4   Online  F-Port 2 NPIV public
  9    9   id    N4   Online  F-Port 2 NPIV public

Ports 8 and 9 each show two connections (NPIV)

switch> portshow 8
...
portWwn: 20:08:00:05:1e:02:99:c4
portWwn of device(s) connected:
  27:ee:00:0c:29:00:05:a7
  10:00:00:00:c9:58:29:9a

switch> portshow 9
...
portWwn: 20:08:00:05:1e:02:99:c4
portWwn of device(s) connected:
  27:ee:00:0c:29:00:06:a7
  10:00:00:00:c9:58:29:9b

The NPIV WWPNs show up among the connected devices


NPIV OBSERVED

NPIV Observed (Cont.)


After boot-up of an NPIV-supported VM (Cont.):
switch> portloginshow 8
Type  PID     World Wide Name          credit df_sz cos
=====================================================
 fe  010801  27:ee:00:0c:29:00:05:a7    16   2048   c  scr=3
 fe  010800  10:00:00:00:c9:58:29:9a    16   2048   c  scr=3
 ff  010801  27:ee:00:0c:29:00:05:a7    12   2048   c  d_id=FFFFFC
 ff  010800  10:00:00:00:c9:58:29:9a    12   2048   c  d_id=FFFFFC

switch> portloginshow 9
Type  PID     World Wide Name          credit df_sz cos
=====================================================
 fe  010901  27:ee:00:0c:29:00:06:a7    16   2048   c  scr=3
 fe  010900  10:00:00:00:c9:58:29:9b    16   2048   c  scr=3
 ff  010901  27:ee:00:0c:29:00:06:a7    12   2048   c  d_id=FFFFFC
 ff  010900  10:00:00:00:c9:58:29:9b    12   2048   c  d_id=FFFFFC

NPIV OBSERVED (CONT.)

Module Summary

MODULE SUMMARY

Module Summary
In this module, you should have learned to:
Convert a VMFS .vmdk to an RDM .vmdk
Describe the VPORT creation flow and
troubleshoot NPIV

MODULE SUMMARY

VMware Snapshots
and NetApp
Snapshot Copies
Appendix 11
SAN Implementation Workshop

VMWARE SNAPSHOTS AND NETAPP SNAPSHOT COPIES


Module Objectives
By the end of this module, you should be able to:
Distinguish between VMware snapshots and
NetApp Snapshot copies

MODULE OBJECTIVES

Snapshot
Technologies

SNAPSHOT TECHNOLOGIES

VMware Snapshots
ESX provides snapshot functionality

Managed in vCenter
Only supported on VMDKs and virtual RDMs
Limited to 32 snapshots
VMs can remain online while taking snapshots
VMDKs are locked, and disk I/O is written to log files
VM memory is also snapped and logged
More VMs and/or snapshots degrade performance
Up to a 30% performance penalty

VMware recommends storage vendor snapshots for
performance and scalability
VMware recommends storage vendor snapshots for VM
backup and recovery

Typical use cases

Upgrading and patching the guest OS in a VM
Enablement of Storage vMotion, vRDMs, and linked clones

VMWARE SNAPSHOTS
Performance degradations are per VMware's documentation.

NetApp Snapshot Copies


NetApp Snapshot copies can be easily integrated

Maximum of 255 per volume
Supports all datastore types
VMs can remain online while taking a Snapshot copy
Operation is scripted (see TR-3428)
A single script can coordinate Snapshot copies of an
entire datacenter
For crash consistency, VMware snapshots are called to
prepare the VM for a NetApp Snapshot copy
No performance degradation
Required for consistent FlexClone, LUN clone,
SnapMirror, and SnapVault operations
Group data in FlexVol volumes by Snapshot policy

Typical use case

Regular backup and recovery operations of VM virtual
disks

NETAPP SNAPSHOT COPIES


The VMs need to be quiesced (set in hot backup mode) before NetApp Snapshot copies are
taken of the storage that provisions the VMware datastores used by each VM. These
operations can be scripted as shown in TR-3428, NetApp and VMware ESX 3.0 Storage
Best Practices. This is very similar in principle to the way a database needs to be quiesced
before its storage is snapped.
NetApp Snapshot copies can be taken of storage provisioning NFS, RDM (physical and
virtual), and VMFS datastores; that is, all VMware datastore types.
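As a rough sketch of the kind of script that TR-3428 describes, the following
quiesces each VM with a VMware snapshot, takes one NetApp Snapshot copy of the
backing volume, and then releases the VMware snapshots. The datastore path,
storage system name, volume name, and the use of SSH to the storage system are
assumptions for illustration only:

#!/bin/sh
# Quiesce every VM on the datastore with a VMware snapshot
for VMX in /vmfs/volumes/storage1/*/*.vmx
do
    vmware-cmd "$VMX" createsnapshot backup "quiesce for Snapshot copy" 1 0
done
# One NetApp Snapshot copy of the volume backing the datastore
ssh root@system1 snap create SANvol nightly_vmware
# Release the VMware snapshots
for VMX in /vmfs/volumes/storage1/*/*.vmx
do
    vmware-cmd "$VMX" removesnapshots
done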


NetApp Snapshot Caveats


VMDKs on VMFS
Snapshot copy is at the FlexVol volume level
The entire VMFS is backed up and restored
VMDK restore is a copy-out operation from the
Snapshot copy to the production datastore

VMDKs on NFS
Single File SnapRestore for VMDK recovery

RDMs
SnapRestore or LUN clone for RDM recovery
Can be connected by any server (virtual or physical)
Easy single-file recovery

NETAPP SNAPSHOT CAVEATS


VMDKs on VMFS: LUN clone or Single File SnapRestore (LUN clone recommended)
VMDKs on NFS: Single File SnapRestore or FlexClone (Single File SnapRestore preferred)
RDMs: Single File SnapRestore, LUN clone, or FlexClone all work
Single-file recovery from Snapshot copies is easy with RDMs; with VMDKs the disks cannot
be released, so in practice recovery is done through a proxy server.


SnapMirror and SnapVault


Ensuring that VMs are crash consistent is critical

To ensure that a VM will not require a check disk
during boot, and to avoid possible data loss, take
consistent Snapshot copies, then replicate them

Data layout is critical

Group datacenter datastores by Snapshot policy
Each datacenter should have its own volume
Use multiple volumes per datacenter for multiple
replication schedules
Separate swap, page file, and user and system temp
from OS datastores

RDM DR recovery requires re-creating the map files


SNAPMIRROR AND SNAPVAULT

VMware Clones
VM clones duplicate:

VM configuration file (.vmx file)
VM virtual disk(s) (.vmdk file)
In VirtualCenter, RDMs clone to VMDKs by default

Strengths

Can be completed within VirtualCenter
Easily deploys new VMs

Areas of concern

Consumes storage (no thin provisioning*)
Cloning takes a long time to complete

Typical use case

Permanent VM deployment from a template

VMWARE CLONES
*Thin provisioning can be accomplished at the command line.
VMware clones are good for permanent VM deployment from a template.


NetApp Clones
Permanent VM deployment (from a template)
NFS through Single File SnapRestore
Clones internally rather than over Ethernet from ESX
RDM through LUN clone

Temporary VM deployment
For test, dev, training, demos, and so on
NFS through FlexClone
VMFS and RDM by way of LUN clone or FlexClone

Typical use case

Fast temporary deployment of VMs for training and demos

NETAPP CLONES
NetApp clones are good for fast temporary deployment of VMs for training, test, dev, demos,
and so on. For VMs with either RDMs or NFS VMDKs, disconnect the disk first and then
complete the VM template.
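For instance, a temporary clone for a demo might be created on the storage
system with LUN clone or FlexClone. A minimal sketch with hypothetical volume,
LUN, and Snapshot copy names (FlexClone requires the flex_clone license):

system> snap create SANvol vm_base
system> lun clone create /vol/SANvol/rhel5_clone.lun -b /vol/SANvol/rhel5.lun vm_base
system> vol clone create SANvol_clone -b SANvol vm_base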


VMware Thin Provisioning


VMware thin provisioning is available on
VMDKs
It is not available on RDMs
NFS VMDKs are always thin provisioned
There is no way to thin provision within
VirtualCenter; you must script it (see the
sketch after the notes below)

Thin-provisioned VMDKs
Have performance degradation
Trade capacity for performance
Can be converted to zero-thick VMDKs

VMWARE THIN PROVISIONING


VMDKs cannot be converted to thin-provisioned format.
Zero thin: space in the VMDK file is allocated on demand.
Zero thick: empty space is pre-allocated in the VMDK file.
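For example, from the ESX Service Console a new thin-provisioned VMDK can be
created with vmkfstools, or an existing disk can be copied to a new thin disk
(a copy, not an in-place conversion). The paths and size are hypothetical:

# vmkfstools -c 10g -d thin /vmfs/volumes/storage1/rhel5/rhel5_thin.vmdk
# vmkfstools -i /vmfs/volumes/storage1/rhel5/rhel5.vmdk
/vmfs/volumes/storage1/rhel5/rhel5_thin2.vmdk -d thin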


NetApp Thin Provisioning


NetApp thin provisioning is available on
VMFS, NFS, and RDMs
No performance degradation
VMFS

Reduces overhead when using multiple datastores
Can be combined with thin-provisioned VMDKs
for maximum savings

NFS

Reduces FlexVol volume size

RDMs

Reduces FlexVol volume size
Reduces the overhead of unused LUN capacity
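A minimal sketch of thin provisioning on the storage system for a VMFS
datastore LUN, with hypothetical names and sizes: remove the volume space
guarantee, then create the LUN without a space reservation:

system> vol create SANvol aggr1 500g
system> vol options SANvol guarantee none
system> lun create -s 400g -t vmware -o noreserve /vol/SANvol/vmfs1.lun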

NETAPP THIN PROVISIONING

Module Summary

MODULE SUMMARY

Module Summary
In this module, you should have learned to:
Distinguish between VMware snapshots and
NetApp Snapshot copies

MODULE SUMMARY

Virtual Storage
Console
Appendix 12
SAN Implementation Workshop

VIRTUAL STORAGE CONSOLE


Module Objectives
By the end of this module, you should be able to:
Describe the NetApp Virtual Storage Console
and how it assists in administering virtual
machines that use NetApp storage

MODULE OBJECTIVES

Virtual Storage
Console

VIRTUAL STORAGE CONSOLE

Virtual Storage Console


Virtual Storage Console (VSC)
Is a plug-in to VMware vCenter
Provides a Storage Management pane to show
discovered NetApp controllers and ESX hosts,
along with information related to each
Provides a drop-down menu per ESX host for
setting NetApp-recommended HBA/CNA
adapter settings, MPIO settings, and/or NFS
settings

VIRTUAL STORAGE CONSOLE


NetApp Virtual Storage Console (VSC) is a NetApp plug-in for vSphere that provides
support similar to the ESX Host Utilities. VSC supports the following:

Provides a Storage Management pane to show discovered NetApp controllers and ESX
hosts, along with information related to each (for example, IP information, Data ONTAP
version, status, capacity, usage, protocols, and adapter and MPIO status settings).
Provides a drop-down menu per ESX host for setting NetApp-recommended HBA/CNA
adapter settings, MPIO settings, and/or NFS settings.


Virtual Storage Console (Cont.)


Virtual Storage Console (VSC)
Provides a Storage Details pane to show
controller datastore info in addition to detailed
LUN and volume info, deduplication settings, and
capacity readings (datastore, LUN, volume)
Provides a Data Collection pane to allow for
controller, ESX host, and switch info script data
collection. VSC also provides limited integration
with nSANity from the ToolChest. VSC 1.0 does
not package nSANity.
NFS storage details panel
VIRTUAL STORAGE CONSOLE (CONT.)


VSC also supports the following:

Provides a Storage Details pane to show controller datastore information in addition to
detailed LUN and volume info, deduplication settings, and capacity readings for
datastores, LUNs, and volumes.
Provides a Data Collection pane to allow for controller, ESX host, and switch info script
data collection. VSC also provides limited integration with nSANity from the ToolChest.
VSC 1.0 does not package nSANity.
NFS storage details panel.


VSC 1.0 Requirements


VSC 1.0

Supports ESX 4.0 and ESXi 4.0


Offers limited support for ESX 3.5 and ESXi 3.5
Requires vCenter Server 4.0
Supports Data ONTAP 7.3.1.1 and later for all
SAN and NAS functions

VSC 1.0 REQUIREMENTS

VSC: Integrated Storage View


The VSC tab

SAN and NAS support

Automatic discovery

VSC: INTEGRATED STORAGE VIEW

VSC: Virtual Storage Infrastructure

Storage system configuration

ESX host configuration

VSC: VIRTUAL STORAGE INFRASTRUCTURE

VSC: Configuration Optimization

Optimize host settings

VSC: CONFIGURATION OPTIMIZATION

VSC: Controller Status and Capacity

Status

Deduplication savings

VSC: CONTROLLER STATUS AND CAPACITY

VSC: Support Tools

Data collection with support tools

VSC: SUPPORT TOOLS

VSC Takeaways
Integrated Virtual Host and Storage
Management
See storage backend and host views of ESX
datastores (SAN and NAS)

Storage Status and Capacity Information
Know how much of your datastore is in use and
how much is available

Simplified Configuration for Improved RAS
Provides automated tools for host settings and
configuration optimization

VSC TAKEAWAYS

Module Summary

MODULE SUMMARY

Module Summary
In this module, you should have learned to:
Describe the NetApp Virtual Storage Console
and how it assists in administering virtual
machines that use NetApp storage

MODULE SUMMARY

SnapDrive for UNIX


(Linux version)
Appendix 13
SAN Implementation Workshop

SNAPDRIVE FOR UNIX (LINUX VERSION)


Module Objectives
By the end of this module, you should be able to:
Describe the steps to configure SnapDrive for
UNIX on Linux

MODULE OBJECTIVES

SnapDrive for UNIX

SNAPDRIVE FOR UNIX

SnapDrive 4.1 for UNIX


Features:
Runs as a daemon service
Supports Red Hat DM-Multipath and Veritas
Dynamic Multi-Pathing
Intelligent provisioning and management of storage
Creates and restores Snapshot copies
Volume-based single-file Snapshot restore
Integrates with SnapManager software through a
Web service
NOTE: SnapDrive does not support FC drivers and
Open-iSCSI
Command-line interface:
# snapdrive ...
SNAPDRIVE 4.1 FOR UNIX

Installation of SnapDrive on Linux


Install the application:
# rpm -U -v netapp.snapdrive.linux_4_1_1.rpm
Preparing packages for installation...
netapp.snapdrive-4.1.1-1
Starting snapdrive daemon: Successfully started
daemon

The daemon starts automatically

Explore the application:
# cd /opt/NetApp/snapdrive
# ls
bin   diag  docs  SDU4_1_1_notice.txt  snapdrive.conf
snapdrived.rcscript  snapdrived_cron  snapdrived_var.rcscript
# cd bin
# ls
snapdrive snapdrived

snapdrive.conf is the configuration file; snapdrive is the
main executable, and snapdrived is the daemon

INSTALLATION OF SNAPDRIVE ON LINUX

Installation of SnapDrive on Linux (Cont.)


SnapDrive requires two support libraries:
sg3_utils-libs-1.25-4.el5.i386.rpm
# rpm -U -vh sg3_utils-libs-1.25-4.el5.i386.rpm

sg3_utils-1.25-4.el5.i386.rpm
# rpm -U -vh sg3_utils-1.25-4.el5.i386.rpm
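To confirm that both packages are installed, you can query the RPM database;
a simple check, with the versions matching the packages installed above:

# rpm -q sg3_utils-libs sg3_utils
sg3_utils-libs-1.25-4.el5
sg3_utils-1.25-4.el5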

INSTALLATION OF SNAPDRIVE ON LINUX (CONT.)

SnapDrive Daemon
To use SnapDrive 4.1 for Linux, start the
daemon:
# cd /opt/NetApp/snapdrive
# snapdrived start
Successfully started daemon

To stop the daemon:
# snapdrived stop
Successfully stopped daemon

Status of the daemon:
# snapdrived status
Snapdrive Daemon Version    : 4.1.1 (Change 942392 Built Fri Jul 17 04:56:45 PDT 2009)
Snapdrive Daemon start time : Sun Sep 27 12:01:55 2009
Total Commands Executed     : 5
Job Status: No command in execution

NOTE: You must be logged in as the root user to
execute daemon commands
SNAPDRIVE DAEMON

Set Up SnapDrive Access


By default, SnapDrive 4.1 uses SSL
Configure the storage system for SSL (follow
the wizard):
system> secureadmin setup ssl

Configure SSL on RHEL (follow the wizard
where prompted):
# openssl genrsa 1024 > host.key
# chmod 400 host.key
# openssl req -new -x509 -nodes -sha1 -days 365 -key
host.key > host.cert
...
# openssl x509 -noout -fingerprint -text < host.cert >
host.info
# cat host.cert host.key >
/opt/NetApp/snapdrive/snapdrive.pem
# rm host.key
# chmod 400 /opt/NetApp/snapdrive/snapdrive.pem
SET UP SNAPDRIVE ACCESS

Set Up SnapDrive Access (Cont.)


Verify snapdrive.conf values
# cat snapdrive.conf
...
#use-https-to-sdu-daemon=on
#contact-https-port-sdu-daemon=4095
#sdu-daemon-certificatepath=/opt/NetApp/snapdrive/snapdrive.pem
# location of https server certificate ...

Uncomment these lines if you are using a secure connection

SET UP SNAPDRIVE ACCESS (CONT.)

Set Up SnapDrive Access (Cont.)


Verify snapdrive.conf values
# cat snapdrive.conf
...
#PATH="/sbin:/usr/sbin:/bin:/usr/bin:/opt/NTAP/SANToolkit/
bin:/opt/sanlun/bin" # toolset search path
...

Verify that the path to the Host Utilities Kit is correct

Verify that the OS can properly resolve host names
Requires both forward and reverse resolution
For example, through DNS or /etc/hosts:
# vi /etc/hosts
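For example (the host names and addresses here are hypothetical), add matching
entries and then confirm forward and reverse resolution with getent, which
consults /etc/hosts as well as DNS:

# cat /etc/hosts
10.254.134.38   rhel.example.com     rhel
10.254.134.50   system1.example.com  system1
# getent hosts rhel.example.com
10.254.134.38   rhel.example.com     rhel
# getent hosts 10.254.134.38
10.254.134.38   rhel.example.com     rhel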
SET UP SNAPDRIVE ACCESS (CONT.)

Set Up SnapDrive Access (Cont.)


Verify current permissions to a storage system:
# snapdrive config access list storage_system_1_IP
ALL ACCESS
Commands allowed:
snap create
snap restore
snap delete
snap rename
storage create
storage resize
snap connect
storage connect
storage delete
snap disconnect
storage disconnect

SET UP SNAPDRIVE ACCESS (CONT.)

Set Up SnapDrive Access (Cont.)


Set the user name and password for access:
# snapdrive config set root storage_system_1_IP
Password:
Verify Password:

NOTE: The root user is used here for educational
purposes only

SET UP SNAPDRIVE ACCESS (CONT.)

Creating a LUN with SnapDrive


To create a LUN:
# snapdrive storage create -lun system:/vol/SANvol/lun
-lunsize 100m

To create a LUN, provide LUN access, add a file
system, and mount the virtual disk:
# snapdrive storage create -fs /mnt/lun -lun
system:/vol/SANvol/lun -lunsize 100m -nolvm -fstype ext3
-igroup my_ig
(my_ig is an igroup created earlier)
# cd /mnt/lun
# touch foo
# ls
foo  lost+found

CREATING A LUN WITH SNAPDRIVE

LUN Management
Configure the Linux environment variable, then
restart the SnapDrive daemon:
# export LVM_SUPPRESS_FD_WARNINGS=1
# snapdrived stop
# snapdrived start

List the LUNs available:
# snapdrive storage list -all

Host devices and file systems:
raw device:/dev/sdj1 mount point:/mnt/lun (persistent) fstype ext3

device   ... size  proto  state   clone  lun path                backing snapshot
------   ... ----  -----  -----   -----  ----------------------  ----------------
/dev/sdj     100m  fcp    online  No     system:/vol/SANvol/lun  -

LUN MANAGEMENT

Snapshot Creation with SnapDrive


To create a Snapshot copy of a mounted LUN:
# snapdrive snap create -fs /mnt/lun -snapname my_snap
Starting snap create /mnt/lun
WARNING: DO NOT CONTROL-C!
If snap create is interrupted, incomplete snapdrive
generated data may remain on the filer volume(s), which
may interfere with other snap operations.
Successfully created snapshot my_snap on
system:/vol/SANvol
snapshot my_snap contains:
file system: /mnt/lun

SNAPSHOT CREATION WITH SNAPDRIVE

List Snapshot Copies with SnapDrive


List the Snapshot copies for a mounted LUN:
# snapdrive snap list -v -fs /mnt/lun
snap name                   host  date          snapped
--------------------------------------------------------------
system:/vol/SANvol:my_snap  rhel  Oct 22 16:23  /mnt/lun
host OS: Linux 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38 EST 2008
snapshot name: my_snap
file system: type: ext3  mountpoint: /mnt/lun

lun path                 dev paths
----------------------   ---------
system:/vol/SANvol/lun   /dev/sdj

LIST SNAPSHOT COPIES WITH SNAPDRIVE

Snapshot Restore with SnapDrive


To restore a mounted LUN:
# pwd
/mnt/lun
# ls
foo lost+found
# rm foo                     (remove a file)
# ls
lost+found
# cd ..                      (verify that you are not
# pwd                         in the mountpoint)
/mnt
# snapdrive snap restore -fs /mnt/lun -snapname my_snap
...
# cd lun
# ls
foo lost+found               (the file is restored)

NOTE: Requires a snaprestore license on the storage system

SNAPSHOT RESTORE WITH SNAPDRIVE

Removing a LUN with SnapDrive


To remove a mounted LUN:
# pwd                        (verify that you are not
/mnt                          in the mountpoint)
# ls
lun otherlun
# snapdrive storage delete -fs /mnt/lun
delete file system /mnt/lun
- fs /mnt/lun ... deleted
- LUN system:/vol/SANvol/lun ... deleted
# ls
otherlun

REMOVING A LUN WITH SNAPDRIVE

Module Summary

MODULE SUMMARY

Module Summary
In this module, you should have learned to:
Describe the steps to configure SnapDrive for
UNIX on Linux

MODULE SUMMARY
