
Dual VIOS Configuration

http://doc.baidu.com/view/67132630b90d6c85ec3ac601.html
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/redp4224.html

The basis of setting up dual VIO servers will always be:

* Setup First Virtual I/O Server
* Test Virtual I/O setup
* Setup Second Virtual I/O Server
* Test Virtual I/O setup

Once the Virtual I/O Servers are prepared, focus moves to the client partitions:

* Setup client partition(s)
* Test client stability
In the following, no hints are provided on testing, so there are just two parts rather than four: set up the VIO servers, then set up the clients. The setup is based on a DS4300 SAN storage server; your attribute and interface names may be different.

Setting up Multiple Virtual I/O Servers (VIOS)


The key issue is that the LUNs exported from the SAN environment must always be available to every
VIOS that is going to provide the LUN via the Virtual SCSI interface to Virtual SCSI client
partitions (or client LPARs). The hdisk attribute that must be set on each VIOS, for each LUN that
will be used in an MPIO configuration, is the reserve policy (reserve_policy here; depending on the
storage driver it may be called reserve_setting or reserve_lock). When there are problems, the first
thing I check is whether this attribute has been set to no_reserve.
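To check it, list the disk attributes (hdisk4 is just an example name; substitute your own LUN):

# lsattr -El hdisk4 | grep reserve

The value shown should be no_reserve.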
But it is not just this one setting. For failover we also need to set up the adapters. Note: to change any
of these attributes the adapters and disks need to be in a Defined state - OR - you make the changes
to the ODM and reboot. The second (reboot) option is required when you cannot get the disks
and/or adapters offline (varyoffvg followed by several rmdev -l commands).
To make an ODM-only change, just add the parameter -P to the chdev -l commands below. And be sure
not to use rmdev -d -l <device>, as that deletes the device definition (including your customized
attributes) from the ODM, and you would have to start over.
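For example, to push the reserve policy change into the ODM only (the running device keeps its current setting until the next boot):

# chdev -l hdisk4 -a reserve_policy=no_reserve -P

followed by a reboot to make the new value active.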

This example is using an IBM DS4000 storage system. Your adapter names may be different.
First, make sure the disks are offline. If no clients are using the disks this is easy; otherwise you
must get the client partition to varyoffvg the volume groups and run rmdev -l against each disk/LUN.
Once the disks are not being used, the following commands bring the disks and adapters offline (into
a Defined state). The command cfgmgr will bring them back online with the new settings
active. Do not run cfgmgr until AFTER the disk attributes have been set.
Note: I have used the command oem_setup_env to switch from user padmin to root. Further below I
will switch back to padmin (a $ prompt marks padmin commands, a # prompt marks root commands).
# rmdev -l dar0 -R
# rmdev -l fscsi0 -R
Update the adapter settings:
# chdev -l fscsi0 -a fc_err_recov=fast_fail
# chdev -l fscsi0 -a dyntrk=yes
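For background: fc_err_recov=fast_fail tells the FC adapter driver to fail I/O quickly when the fabric reports a link event instead of retrying for a long time, and dyntrk=yes enables dynamic tracking, so AIX can follow a device whose N_Port ID changes on the fabric. Both help a failover configuration react promptly. You can verify the settings with:

# lsattr -El fscsi0 | egrep "fc_err_recov|dyntrk"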
On my system the first four disks are local, and I have 8 LUNs that I want to make MPIO-ready
for the clients. So, with both VIO servers having the disks offline, or only one VIO server
active, it is simple to set the disks up for no_reserve. I also want the VIO server to be
aware of any PVID the clients put on, or modify on, these disks. Finally, I make sure the client sees
the disk as "clean".
# for i in 4 5 6 7 8 9 10 11
> do
> chdev -l hdisk$i -a pv=yes
> chdev -l hdisk$i -a reserve_policy=no_reserve
> chpv -C hdisk$i ## this erases the disk, so be careful!
> done
Now I can activate the disks again using:
# cfgmgr
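If all went well, the disks come back Available with the new attributes in place. A quick sanity check (again with hdisk4 as the example):

# lsdev -Cc disk
# lsattr -El hdisk4 | grep reserve_policy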

I assume you already know how to configure VSCSI clients (you have already configured at least
two VIO servers!!), so I'll skip over the commands on the HMC for configuring the client
partition.
Once the client LPAR has been configured - but before it is activated - you need to map the LUNs
available on the VIOS to the client. The command to do this is mkvdev.
Assuming that the VIO servers are set up with identical disk numbering, any command you use on the
first VIOS can be repeated on the second VIOS.
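One way to confirm that both VIO servers really see the same LUNs under the same hdisk names is to compare PVIDs (visible because we set pv=yes earlier). Run this on each VIOS and check that the IDs match disk for disk; if the numbering differs, map by PVID rather than by hdisk name:

$ lspv

With that confirmed, map the first LUN: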
$ mkvdev -vdev hdisk4 -vadapter vhost0
vtscsi0 Available
You can verify the virtual target (vt) mapping using the command:
$ lsmap -vadapter vhost0
Repeat these commands on the other VIOS(s) and make sure they are all connected to the same
VSCSI client (VX-TY). The VX part is the most important - it signifies that it is the same client
partition (X == client partition number). The TY part is the Virtual I/O bus slot number and will be
different for each mapping; the differing TY values represent the multiple paths.
The easiest way to set up the client drivers is simply to install AIX. The AIX installation sees the MPIO
capability, installs the drivers, and creates the paths. After AIX reboots there are a few changes
we want to make so that AIX recovers automatically in case one of the VIO servers goes offline or
loses its connection to the SAN.

For each LUN coming from the SAN do:


# chdev -l hdiskX -a hcheck_mode=nonactive -a hcheck_interval=20 \
-a algorithm=failover -P
As this probably includes your rootvg disks, a reboot is required.
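After the reboot, each SAN LUN should show one path per VIO server, along these lines (your hdisk and vscsi numbers will differ):

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1

If one VIOS goes down its path flips to Failed, I/O continues on the other path, and with hcheck_interval set the failed path returns to Enabled automatically once the VIOS is back.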

What's the quickest way to get to know your network team? Just bring down the entire
network.
I actually know of people who have caused network outages by misconfiguring dual VIOS.
However, this isn't another of my scary stories--I just want to tell you how to avoid stirring
up your own broadcast network storm.

Start with this sample command:

mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 2 -attr ha_mode=auto ctl_chan=ent1

When you run this command, make sure that each VIOS is set up to use the same control
channel VLAN (ent1 in this case). If not, the two servers will be unable to communicate
with one another. And if that happens, each will respond as if the other VIOS is down, and
each will attempt to function as the primary server.
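You can see which side currently holds the primary role from the SEA statistics; ent3 below stands for whatever device name your mkvdev command returned:

$ entstat -all ent3 | grep State

The active SEA reports State: PRIMARY and the standby reports State: BACKUP.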

From IBM Support:

"A Shared Ethernet Adapter (SEA) can be used to connect a physical network to a virtual
Ethernet network. It provides the ability for several client partitions to share one physical
adapter. SEA can only be configured on the Virtual I/O Server (VIOS) and requires the
POWER Hypervisor and Advanced POWER Virtualization feature. The SEA, hosted on the
VIOS, acts as a Layer-2 bridge between the internal and external network.

"One SEA on one VIOS acts as the primary (active) adapter and the second SEA on the
second VIOS acts as a backup (standby) adapter. Each SEA must have at least one virtual
Ethernet adapter with the
'Access external network' flag (previously known as trunk flag) checked. This enables the
SEA to provide bridging functionality between the two VIO servers.

"This adapter on both the SEAs has the same PVID, but will have a different priority value.
A SEA in ha_mode (Failover mode) might have more than one trunk adapters, in which
case all should have the same priority value. The priority value defines which of the two
SEAs will be the primary and which will be the backup. The lower the priority value, the
higher the priority -- e.g. an adapter with priority 1 will have the highest priority. An
additional virtual Ethernet adapter, which belongs to a unique VLAN on the system, is used
to create the control channel between the SEAs, and must be specified in each SEA when
configured in ha_mode. The purpose of this control channel is to communicate between
the two SEA adapters to determine when a failover should take place."

In other words: When setting up VIOS, you must set up a control channel so that the two
servers can communicate with one another. You also need to establish one VIOS as the
primary server and the other as the backup.
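Note that the priority deciding which VIOS is primary is not part of the mkvdev command at all: it is the trunk priority of the virtual Ethernet adapter, set in each partition profile on the HMC (for example, priority 1 on VIOS1 and priority 2 on VIOS2). The mkvdev command itself can then be identical on both servers, along the lines of the earlier example (adapter names depend on your configuration):

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 2 -attr ha_mode=auto ctl_chan=ent1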

An IBM lab guide states the consequences of misconfiguring your SEAs:


"In this section, you will create the control channel virtual Ethernet adapters on VIOS1 and
VIOS2, which will communicate on VLAN ID 12. It is very important to create this adapter
on both VIOS partitions before creating SEA adapters to support failover for the same
VLAN. Failing to have proper control channel configuration can result in causing a
broadcast storm when both SEA adapters are activated on the same VLAN (VLAN ID 2 in
this case).

"First you will create the control channel adapters on each VIOS partition. These control
channel adapters are used to determine the health of the SEAs and are required to avoid a
broadcast storm (which can result when two trunking virtual adapters are available on the
same VLAN)."

In another part of this document, we read:

"Failing to have proper control channel configuration can result in causing a broadcast
storm when both SEA adapters are activated on the same VLAN (VLAN ID 2 in this case)."

And again:

"When you run the mkvdev -sea command, it is very important that you specify the
ha_mode and ctl_chan attributes. If you fail to do this, creation of the primary adapter on
VIOS2 could result in a network broadcast storm."

And again:

"STOP!!! Before you continue to the next step, ask a lab instructor to determine that you
have the correct adapter configuration. Failure to properly configure an SEA failover
scenario can result in a broadcast storm than can affect the entire lab network."

A network guy I know recommends enabling BPDU Guard on our Cisco switches to try to address
this issue. This website seems to agree with that assessment:

"As a precaution, you can enable Bridge Protocol Data Unit (BPDU) Guard on the switch
ports connected to the physical adapters of the SEA. BPDU Guard detects looped Spanning
Tree Protocol BPDU packets and shuts down the port. This helps prevent broadcast storms
on the network."
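On a Cisco IOS switch, that precaution would look something like this (the interface name is just an example; check with your network team before enabling it):

interface GigabitEthernet0/1
 spanning-tree bpduguard enable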

Maybe some networking gurus out there can let us know whether using BPDU Guard is advisable
on our VIOS-connected ports.

Even those of us who routinely work with VIOS shouldn't get cocky, because one wrong
move can take out a network. So be careful. The stakes are high.
