
Clustered Data ONTAP

MCC Deployment
April 10th 2013

Metro Cluster C-Mode


Two Sites
MCC deployment consists of two C-Mode clusters, one cluster at each of two sites.
Cross-site synchronous disaster recovery.
Connectivity between sites:
One or more Ethernet WAN link(s)
Four fiber links for FCVI and storage traffic

Terminology 1
FC - Fibre Channel is a high-speed, low-latency network technology primarily used for storage networking.

FC Fabric - A network topology where FC nodes connect with each other via one or more FC switches.

Local / Remote - Relative terms referring to one of the two SITES in an MCC.

Site - Geographical location of each cluster. There are two (2) SITES in an MCC.

Cluster Peering - A management relationship between two clusters that allows the clusters to exchange data and request remote operations.

FCVI - Fibre Channel Virtual Interface. The FCVI adapter uses RDMA operations to copy the local NVRAM section to the remote NVRAM section.

Terminology 2

RDMA - Remote Direct Memory Access is a direct memory access from the memory of one computer into that of another without involving either one's operating system. This permits high-throughput, low-latency networking, which is especially useful in massively parallel computer clusters.

SAS - Serial Attached SCSI. Newer NetApp disk shelves have SAS connectivity.

FC-SAS Bridge - Converts SAS traffic into Fibre Channel traffic. FC overcomes the distance limitations of the SAS interface. ATTO FibreBridge.

ISL - Inter-Switch Link. A physical link between Fibre Channel switches used to extend the FC fabric. MCC configurations connect sites with two (2) ISL links.

8-Pack MCC Diagram

What's Really Different


[Diagram: Site A and Site B connected by redundant FC fabrics, with Brocade FC switches and ATTO FC bridges at each site.]

ATTO (Armadillo) converts SAS to FC. This puts short-range SAS traffic onto long-range FC media.
Each ATTO provides a single path to storage.

Redundant Fabrics

FC Zoning
Think of an FC zone as a VLAN: nothing more than a traffic-isolation zone.
MCC requires hard zoning (port zoning).
Each Brocade needs to have a different domain ID.
Keep it simple!
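The domain-ID requirement can be checked with standard Brocade FOS commands; a minimal sketch (the prompt name is illustrative):

mcc-brcd-fab-a1:admin> switchshow
(the switchDomain field must be unique on each switch within a fabric)
mcc-brcd-fab-a1:admin> fabricshow
(lists every switch in the fabric with its domain ID and name, so duplicates are easy to spot)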

Brocade zoning example

Goal: We want ports 0 and 1 on both switches to be in the same zone.
Member format: (domain ID, switch port)

zone: mcc_3240_4_storage
1,0
1,1
2,0
2,1
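One way to build the example zone is with the standard Brocade FOS zoning commands, run once per fabric; a minimal sketch (the prompt name is illustrative; the zone name and members match the example above):

mcc-brcd-fab-a1:admin> zonecreate "mcc_3240_4_storage", "1,0; 1,1; 2,0; 2,1"
mcc-brcd-fab-a1:admin> zoneshow "mcc_3240_4_storage"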

FC Zones
Each fabric will have two zones.
Effective configuration:

cfg: mcc_3240_4
zone: mcc_3240_4_fcvi
1,6 <-- node A1 FCVI port 0
1,7 <-- node A2 FCVI port 0
2,6 <-- node B1 FCVI port 0
2,7 <-- node B2 FCVI port 0
zone: mcc_3240_4_storage
1,0 <-- Fibre bridge A1
1,1 <-- node A1 port 0c
1,2 <-- node A2 port 0c
2,0 <-- Fibre bridge B1
2,1 <-- node B1 port 0c
2,2 <-- node B2 port 0c
mcc-brcd-fab-b1:FID128:admin>
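An effective configuration like the one above can be assembled and activated with standard Brocade FOS commands; a minimal sketch (members match the listing above; cfgenable replaces the active zoning configuration, so double-check before running it):

mcc-brcd-fab-b1:admin> zonecreate "mcc_3240_4_fcvi", "1,6; 1,7; 2,6; 2,7"
mcc-brcd-fab-b1:admin> zonecreate "mcc_3240_4_storage", "1,0; 1,1; 1,2; 2,0; 2,1; 2,2"
mcc-brcd-fab-b1:admin> cfgcreate "mcc_3240_4", "mcc_3240_4_fcvi; mcc_3240_4_storage"
mcc-brcd-fab-b1:admin> cfgenable "mcc_3240_4"
mcc-brcd-fab-b1:admin> cfgsave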

Showing backend FC Fabric


eng-mcc-site-a1> storage show switch
Switch:   WWN[1:000:00051e:0ffd2f]
Fabric:   WWN[1:000:00051e:bd5f16]
Name:     fab-a1
Domain:   1
Type:     switch
Version:  v6.1.0
Vendor:   Brocade Communications, Inc.

Switch:   WWN[1:000:00051e:bd5f16]
Fabric:   WWN[1:000:00051e:bd5f16]
Name:     fab-a2
Domain:   2
Type:     switch
Version:  v6.2.0c
Vendor:   Brocade Communications, Inc.

Switch:   WWN[1:000:000533:47f585]
Fabric:   WWN[1:000:000533:47f585]
Name:     fab-b1
Domain:   1
Type:     switch
Version:  v6.4.0a
Vendor:   Brocade Communications, Inc.

Switch:   WWN[1:000:000533:2bef34]
Fabric:   WWN[1:000:000533:47f585]
Name:     fab-b2
Domain:   2
Type:     switch

Finding FCVI ports

eng-mcc-site-a1> sysconfig -v 1
slot 1: FCVI Host Adapter 1a (QLogic 2532(2562) rev. 2, F-port
Physical link: UP
FC Node Name: 21:00:00:1b:32:11:d4:78
SFP Capabilities: 2, 4, 8 or 16 Gbit
Link Data Rate: 8 Gbit
Switch Port: fab-a1:5
Switch Vendor: Brocade Communications, Inc.
Switch Model: 66.1
Switch Firmware Version: v6.1.0
slot 1: FCVI Host Adapter 1b (QLogic 2532(2562) rev. 2, F-port
Physical link: UP
FC Node Name: 21:01:00:1b:32:31:d4:78
SFP Capabilities: 2, 4, 8 or 16 Gbit
Link Data Rate: 8 Gbit
Switch Port: fab-b1:5
Switch Vendor: Brocade Communications, Inc.
Switch Model: 71.2
Switch Firmware Version: v6.4.0a
eng-mcc-site-a1>

MCC Disk Assignment


MCC mirrors each aggregate to a remote set of disks.
Each node will have two disk pools. Each pool will contain the same number and capacity of disks.
Pool 0 (always local)
Pool 1 (always remote)

Example, node A1:
Pool 0 = shelf 00 (local)
Pool 1 = shelf 04 (remote)

Example, node B1:
Pool 0 = shelf 06 (local)
Pool 1 = shelf 02 (remote)

Useful Disk / Storage Commands

After Full Steam is installed, boot all nodes to Maintenance mode.
storage show shelf
- Displays the full shelf name for disk assignment
storage show disk -p
- Shows disk paths; also shows link breaks

Storage show disk -p


*> storage show disk -p
PRIMARY                   PORT  SECONDARY                 PORT  SHELF  BAY
------------------------  ----  ------------------------  ----  -----  ---
mcc-brcd-fab-b1:0.126L13  B     mcc-brcd-fab-a1:0.126L39  A     2      12
mcc-brcd-fab-a1:0.126L40  A     mcc-brcd-fab-b1:0.126L14  B     2      13
mcc-brcd-fab-b1:0.126L15  B     mcc-brcd-fab-a1:0.126L41  A     2      14
mcc-brcd-fab-a1:0.126L42  A     mcc-brcd-fab-b1:0.126L16  B     2      15
mcc-brcd-fab-a2:0.126L1   A     mcc-brcd-fab-b2:0.126L27  B     0      0
mcc-brcd-fab-a2:0.126L2   A     mcc-brcd-fab-b2:0.126L28  B     0      1
mcc-brcd-fab-b2:0.126L29  B     mcc-brcd-fab-a2:0.126L3   A     0      2
mcc-brcd-fab-b2:0.126L30  B     mcc-brcd-fab-a2:0.126L4   A     0      3
mcc-brcd-fab-b2:0.126L31  B     mcc-brcd-fab-a2:0.126L5   A     0      4
mcc-brcd-fab-b2:0.126L32  B     mcc-brcd-fab-a2:0.126L6   A     0      5
mcc-brcd-fab-b2:0.126L33  B     mcc-brcd-fab-a2:0.126L7   A     0      6

Storage show shelf


Shelf name:   fab-b2:0.shelf3
Channel:      fab-b2:0
Module:       B
Shelf id:     3
Shelf UID:    50:0a:09:80:01:00:1b:c1
Shelf S/N:    6000287610
Term switch:  N/A
Shelf state:  ONLINE
Module state: OK

                 Partial Path  Link    Invalid  Running    Loss    Phy      CRC     Phy
Disk    Port     Timeout       Rate    DWord    Disparity  Dword   Reset    Error   Change
Id      State    Value (ms)    (Gb/s)  Count    Count      Count   Problem  Count   Count
-------------------------------------------------------------------------------------------
[SQR0]  OK       7             6.0     0        0          0       0        0       1
[SQR1]  OK       7             6.0     0        0          0       0        0       1
[SQR2]  OK       7             6.0     0        0          0       0        0       1

Disk Assignment by shelf


*> disk
usage: disk <options>
Options are:
  assign {<disk_name> | all | [-T <storage type> | -shelf <shelf name>] [-n <count>] | auto}
         [-p <pool>] [-o <ownername>] [-s <sysid>] [-c block|zoned] [-f]
         - assign a disk to a filer, or all unowned disks by specifying "all" or <count> number of unowned disks
  dump_globals - disk driver global variables
  encrypt { lock | rekey | destroy | sanitize | show | unlock } - perform tasks specific to self-encrypting disks
  fail [-i] [-f] <disk_name> - fail a file system disk
  maint { start | abort | status | list } - run maintenance tests on one or more disks
  remove [-w] <disk_name> - remove a spare disk
  replace {start [-f] [-m] <disk_name> <spare_disk_name>} | {stop <disk_name>}
          - replace a file system disk with a spare disk, or stop replacing
  sanitize { start | abort | status | release } - sanitize one or more disks
  scrub { start | stop } - start or stop disk scrubbing
  show [-o <ownername> | -s <sysid> | -n | -v | -a] - lists disks and owners
  simpull <disk_name1> [<disk_name2> [<disk_name3> ...]] - simulate one or more disk pulls
  simpush [<sim_disk_path_name1> [<sim_disk_path_name2> [<sim_disk_path_name3> ...]] | -l]
          - simulate one or more disk pushes or list available disks to push
  zero spares - Zero all spare disks

*> disk assign -shelf fab-b2:0.shelf3 -n 24 -p 0
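A node's remote pool is assigned the same way, just with -p 1 and a remote shelf; a minimal sketch (the remote shelf name is a placeholder and must match what storage show shelf reports on your system):

*> disk assign -shelf fab-b2:0.shelf3 -n 24 -p 0
(local shelf into pool 0, as in the example above)
*> disk assign -shelf fab-a2:0.shelf5 -n 24 -p 1
(remote shelf into pool 1; the shelf name here is illustrative)
*> disk show -v
(verify owner and pool for every assigned disk)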

MCC Setup Overview 1

1. Connect the SAS cables to the Armadillos.
2. Connect the Armadillos to the FC switches.
3. Connect the FC-VI adapters to the FC switches.
4. Connect the filers' FC ports to the FC switches.
5. Configure the Armadillos, FC switches, and 10G cluster switches, and set the shelf IDs. See Disk shelf IDs.
6. Zone the Armadillos and storage initiators into single-target, single-initiator zones. See FC Zoning.
7. Zone the FC-VI ports. See FC Zoning.
8. Install an MCC-capable ONTAP version.
9. Done with the wiring and OS install; now some checks to make sure everything is okay.
   Go to Maintenance mode.
   Check the initiators on each controller. Via "fcadmin link_state" you should see that the state is up and connected to a switch.

MCC Setup Overview 2


   You should be able to see all the disks from each controller via "disk show".
   At a minimum, verify you can see all the Armadillo ports via "storage show bridge", "fcadmin device_map", or "fcadmin channels".
   Make sure each disk has two paths to the controller. Use "storage show disk -p".
   See Checking Storage Zoning and Multi-Path.
   Check the FC-VI connections in "sysconfig -v". The FC-VI slot will show that it is connected to the switch.
   (A consolidated verification sketch follows this list.)

10. In Maintenance mode, assign the disks. Refer to the disk assignment section for details.
11. Reboot the node and do a full zero of the disks. RAID SyncMirror doesn't like uninitialized parity.
12. On one site, pick one node and its HA partner. Create a cluster on one and have the other join the cluster. Follow the same steps for the nodes on the other site.
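A consolidated verification sketch for the checks in step 9, using only the commands already named in this deck (output varies by platform, so only the commands and what to look for are shown):

*> fcadmin link_state
(each initiator should report that the link is up and connected to a switch)
*> storage show bridge
(every Armadillo / FibreBridge should be visible)
*> disk show -v
(all disks from both sites should be visible)
*> storage show disk -p
(each disk should list both a primary and a secondary path)
*> sysconfig -v
(each FC-VI adapter should show the switch and port it is logged in to)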

MCC Setup Overview 3


13. Follow the cluster setup. Make sure storage failover is active and the cluster is healthy before moving to the next steps.
14. Complete the cluster-peer join to the remote site's cluster. See Cluster Peer Join.
15. Enable Metro Cluster on both clusters. See Enable Metro Cluster.
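For steps 13 and 14, a minimal sketch using standard clustered Data ONTAP commands (the peer address is a placeholder; cluster peer create prompts for remote-cluster credentials, and exact options can vary by release):

mcc-3240-4-a1-a2::> storage failover show
mcc-3240-4-a1-a2::> cluster peer create -peer-addrs <remote intercluster LIF IP>
mcc-3240-4-a1-a2::> cluster peer show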

Setting HA-Partner SYSID


Mirror the root aggregate
Create one non-root aggregate on each cluster and set up mirroring
Create Vservers on the ROOT AGGREGATE on both cluster pairs
Enable Metro Cluster

Enabling MCC 1
Mirror the root aggregate:
mcc-3240-4-a1-a2::*> aggr mirror -aggregate <root aggregate name>

Create one non-root aggregate on each cluster and set up mirroring:
mcc-3240-4-a1-a2::*> aggr create -aggregate <aggregate name> -diskcount 6 -nodes <node name> -mirror true
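One way to confirm the mirror took effect is to look at the aggregate's plexes; a minimal sketch (the aggregate name is a placeholder, and the exact fields shown vary by release):

mcc-3240-4-a1-a2::*> aggr show -aggregate <aggregate name> -instance
(a mirrored aggregate lists two plexes, one built from pool 0 disks and one from pool 1 disks)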

Enabling MCC 2
Create Vservers (3) on the ROOT AGGREGATE on both cluster pairs (4-pack):
mcc-3240-4-a1-a2::*> vserver create -vserver CRS_Vserver -rootvolume CRS_Vserver_vol -aggr <root aggr name> -ns-switch nis -rootvolume-security-style unix -language en_US
mcc-3240-4-a1-a2::*> vserver create -vserver MCC_Vserver -rootvolume MCC_Vserver_vol -aggr <root aggr name> -ns-switch nis -rootvolume-security-style unix -language en_US
mcc-3240-4-a1-a2::*> vserver create -vserver vs0 -rootvolume vs0_vol -aggr <root aggr name> -ns-switch nis -rootvolume-security-style unix -language en_US
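A quick check that the Vservers were created and their root volumes are online; a minimal sketch (column layout varies by release):

mcc-3240-4-a1-a2::*> vserver show
mcc-3240-4-a1-a2::*> volume show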

Enabling MCC 3
When Metro Cluster is successfully enabled at both sites, run the following command to check Metro Cluster health.

mcc-3240-4-a1-a2::metrocluster*> show
Cluster              Node                 State
-------------------- -------------------- -----------
mcc-3240-4-a1-a2
                     mcc-3240-4-a1        Enabled
                     mcc-3240-4-a2        Enabled
mcc-3240-4-b1-b2
                     mcc-3240-4-b1        Enabled
                     mcc-3240-4-b2        Enabled
4 entries were displayed.
