Professional Documents
Culture Documents
Certified documentation
according to DIN EN ISO 9001:2000
To ensure a consistently high quality standard and
user-friendliness, this documentation was created to
meet the regulations of a quality management system which
complies with the requirements of the standard
DIN EN ISO 9001:2000.
cognitas. Gesellschaft für Technik-Dokumentation mbH
www.cognitas.de
Contents

1 Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.1 Notational Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Compatibility Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
5.7 iSCSI Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.7.1 Editing Registry Timeout Values for iSCSI Initiator . . . . . . . . . . . . . . . . 24
5.7.2 iSCSI Single-Controller, Direct Attached Configuration . . . . . . . . . . . . . . 25
5.7.3 iSCSI Dual Controller, Switch Attached Configuration . . . . . . . . . . . . . . . 26
6 Using vdisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.1 Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
8.2 Troubleshooting LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
9.5 Taking Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
11.1 SNMP Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
1 Preface
This guide describes how to install and initially configure the following array models:
This guide does not apply to the FibreCAT SX40 model, which is covered by separate documents.
Convention        Meaning
Italics           Commands, options, file names and path names are written in italic letters in continuous text
fixed font        Input and output in examples
<variable>        Placeholder for a variable value
semi-bold         Highlights text
Quotation marks   Quoted names of chapters and documents
I                 Marks additional information
V CAUTION         Marks a caution notice
[Figure: rear view of the controller, showing the host ports, CLI port, Ethernet port and expansion port.]
The figure above shows a FibreCAT SX60 / SX80 / SX88 model as an example. These models have two FC host ports and one SAS expansion port per controller.
The FibreCAT SX80 iSCSI is equipped with two Ethernet host ports per controller instead of the FC host ports.
[Figure: rear view of the FibreCAT SX100 controller, showing the power switch, host ports, CLI port, Ethernet port and expansion ports.]
The figure above shows a FibreCAT SX100 model. It has four FC host ports and two SAS
expansion ports per controller.
On the front side of a controller or expansion enclosure, each hard disk drive has two LEDs.
For a detailed overview of the hardware components and LEDs, please consult the
FibreCAT SX Series Operating Manual.
Before installing your FibreCAT SX, make sure you read the supplied Safety manual
and the FibreCAT SX Release Notes at
http://ts.fujitsu.com/products/storage/disk/fibrecat_sx/index.html.
Also consult the installation poster before proceeding with the installation of your
FibreCAT SX.
Task overview:

- Unpack the FibreCAT SX base enclosure box, and check its contents. Also unpack and check the FibreCAT SX expansion enclosures if you will be using these.
- Connect the optional FibreCAT SX expansion enclosures to the base enclosure (Installing and Cabling FibreCAT Enclosures on page 13).
- Switch on the optional expansion enclosures first, then switch on the FibreCAT SX base enclosure. Switch on the host computer or computers last.
- Connect the Ethernet port on the FibreCAT SX to your LAN, and open the web-based interface, called the FibreCAT SX Manager (FSM) (Establishing a Connection to the FibreCAT SX on page 17).
- Set the parameters for the FibreCAT SX interconnects (Preparing and Connecting the FibreCAT SX and Servers on page 19).
- Install, connect and configure the (FC or iSCSI) Host Bus Adapter or Adapters in your servers (Preparing and Connecting the FibreCAT SX and Servers on page 19).
- If you will be using Windows Server: install and set up the SES driver and, in case of a path redundancy configuration, install the MPIO DSM tool.
- Build vdisks (Using vdisks on page 41).
- Build volumes (Mapping Volumes to Hosts on page 47).
- Map the volumes to hosts and use them on the hosts (Mapping Volumes to Hosts on page 47 and Using LUNs on Hosts on page 51).
The FibreCAT SX80, SX88 and SX80 iSCSI support up to four expansion enclosures.
Figure 5: Expanding the FibreCAT SX80 or SX88 or SX80 iSCSI With Two Enclosures
When you install an expansion enclosure, you can plug and unplug cables while the system
is up or down.
As you can see in the illustrations, the connections can be set up in such a way that both expansion enclosures are directly connected to the base enclosure. The advantage of this approach is that if one expansion enclosure fails, the other one remains accessible.
Figure 6: Expanding the FibreCAT SX80 or SX88 or SX80 iSCSI With Three Enclosures
The FibreCAT SX100 supports up to eight expansion enclosures. When connecting multiple
expansion enclosures to a FibreCAT SX100, distribute expansion enclosures as evenly as
possible between expansion channels. For example, if connecting four expansion enclosures to a single controller enclosure, attach two expansion enclosures to channel 0 and
two to channel 1.
[Figure: SAS cabling diagrams for the FibreCAT SX100. The Out ports of Controller A and Controller B in the base enclosure (Enclosure ID 0) are chained through the In and Out ports of the expansion enclosures (Enclosure IDs 1 through 8), distributed evenly between the two expansion channels.]
Before using the FibreCAT SX Manager web-based interface (FSM), ensure that your web
browser is properly configured according to the following guidelines:
Because the FSM uses popup windows to indicate the progress of user-requested
tasks, disable any browser features or tools that block popup windows.
To optimize performance, set your browser to never check for newer versions of stored
pages.
To optimize display, use a color monitor and set its color quality to the highest setting.
For Internet Explorer, to ensure you can navigate beyond the WBI login page, set the
local-intranet security option to medium or medium-low.
The FibreCAT SX controller enclosure equipped with two FC RAID controllers has four FC
connections, two per controller. To maintain redundancy, connect one data host to both
Controller A and Controller B.
For more details regarding the host port configuration, please consult the FibreCAT SX Series Operating Manual.
Fiber optic cables are fragile. Do not bend, twist, fold, pinch, or step on the fiber
optic cables. Doing so can degrade performance or cause data loss.
3. Connect the other end of each fiber optic cable to the HBAs as shown in the figures in the section "Connecting Data Hosts Directly" in the FibreCAT SX Series Operating Manual.
[Figure 8: Highly Available Configuration Connected Through Redundant Switches and HBAs. Both data hosts see the A+B LUNs.]
Fujitsu MultiPath is only supported on PRIMERGY servers. Its advantage over MPIO, though, is greater ease of use through a graphical user interface. For installation and configuration instructions for MultiPath, consult the manual that came with your software, and check for an updated manual on http://ts.fujitsu.com/support/downloads.html. Also check there for a (future) white paper about the use of MultiPath with the FibreCAT SX.
MPIO is not a product, but a specification and a driver development kit. Vendors such as
Fujitsu use the driver development kit for developing a Device Specific Module (DSM) that
provides the MPIO functionality in Windows 2000 Server, Windows Server 2003 and
Windows Server 2003 R2.
To implement MPIO, follow these steps.
1. Run setup.exe on every server running Windows Server that will connect to your FibreCAT SX using multipathing. Accept the default settings.
2. Open the configuration program that was installed. It uses a command line interface; information about its syntax can be found by entering help.
3. Identify the Fibre Channel Host Bus Adapters by entering devinfo. In most cases, you will see that the FC Host Bus Adapters operate in RoundRobin mode.
4. In order to load balance traffic across all controllers, type devinfo all weighted.
In Windows Server Disk Management, your FibreCAT SX will now also show up as a FibreCAT SX Multipath Device.
All Fibre Channel ports on the FibreCAT SX must be in loop mode for failover to
happen properly.
CAUTION!
Use caution when editing the Windows registry. Editing the wrong entry or setting an incorrect value can introduce errors that cause the system to malfunction. Create a registry backup before following the instructions in this section.
Single-controller mode is only supported in direct attached configurations. In single-controller mode the system is set to a failed-over state to support a future upgrade to a dual-controller system. When configuring a single-controller system you will need to assign a ghost host IP configuration to the system. This ensures that when an additional controller is inserted, no additional configuration is required.
Ensure that when configuring a single-controller system you assign a host port IP address to both the active controller and the (ghost) B controller.
Figure 10: High-Availability Connection Through Two Switches to Two Dual-Port Data Hosts (iSCSI)
During active-active operation, both controllers' mapped volumes are visible to both data
hosts.
[Figure: active-active dual-controller iSCSI configuration. Both data hosts see the A & B volumes; the controller ports present the A volumes at the A0 and A1 IP addresses and the B volumes at the B0 and B1 IP addresses.]
A dual-controller FibreCAT SX80 iSCSI storage system uses port 0 of each controller as
one failover pair and port 1 of each controller as a second failover pair. If one controller fails,
all mapped volumes remain visible to all hosts. Dual IP-address technology is used in the
failed over state, and is largely transparent to the host system. However, for complete fault
tolerance, host-based path failover software is recommended.
The IP addresses of port A0 and B0 have to be in the same subnet. Likewise, the
IP addresses of port A1 and B1 have to be in the same subnet.
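As an illustration of this subnet rule, a valid addressing scheme might look like the following (all addresses here are hypothetical examples, not defaults of the system):

```
Port A0  192.168.10.200  netmask 255.255.255.0   (failover pair 0)
Port B0  192.168.10.205  netmask 255.255.255.0   (failover pair 0, same subnet as A0)
Port A1  192.168.20.200  netmask 255.255.255.0   (failover pair 1)
Port B1  192.168.20.205  netmask 255.255.255.0   (failover pair 1, same subnet as A1)
```

With this scheme, if Controller A fails, ports B0 and B1 can take over the A0 and A1 addresses within their respective subnets.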
Setting IP addresses for each iSCSI host port (called a target portal) located on the
storage system.
Logging on to iSCSI host ports on each controller module (called a target) from the data
host to initiate connectivity between the data host and the storage system.
1. Double-click the Microsoft iSCSI Software Initiator icon located on the desktop of the
host system.
2. In the Target Portals area of the Discovery tab, click Add.
3. Enter the IP address of an iSCSI host port on your storage system, leave the Port field
set at 3260, and click Add.
4. Repeat Step 2 and Step 3, adding IP addresses for the remaining iSCSI host ports on
the storage system.
IP addresses for storage system host ports (targets) are identified on the data host.
5. On the Targets tab, verify that two targets have been configured (.a and .b).
If two targets are not configured, one or more of the following issues may need to be
resolved:
Controller enclosure host port addresses may not be set correctly on the data host.
Cables between the controller enclosure and/or switches and/or data hosts may not
be connected correctly.
Correct the issue, return to the Targets tab and click Refresh.
6. If two targets are configured, select the first target (controller module) and click Log On.
7. On the Log On to Target dialog, set the following options:
a) For connectivity settings to persist across system reboots, check Automatically
Restore this Connection When the System Boots.
b) For fault-tolerant configurations, select Enable Multi-path.
i. At the Local Adapter field, select Microsoft iSCSI Initiator from the drop-down menu.
ii. At the Source IP field, select the IP address for the local data Ethernet port that is on the same subnet as the first target portal (iSCSI host port) to which you want the host to connect.
iii. At the Target Portal field, select the IP address for the iSCSI host port on the target (controller module) to which you are connecting.
Repeat the log-on procedure (Step i through Step iii) to initiate connectivity for the second target portal on the selected target.
8. To allow LUN access through all available ports during failover, change default multipathing settings as follows:
a) On the Targets tab, select the target and click Details.
b) On the Devices tab of the Target Properties dialog, select the first device and click
Advanced.
c) On the MPIO tab of the Device Details dialog, select Round Robin from the Load
Balance Policy drop-down menu and click OK.
d) Repeat Step b and Step c for all devices listed.
9. Repeat tasks in Step 6, Step 7, and Step 8 for the second target.
10. On the Persistent Targets tab, verify that two entries appear for each controller (.a and .b)
for a total of four connections.
Configuring more than one session per controller port will use additional host interface
resources and may cause failover to function improperly.
If two persistent targets are not configured for each controller host port, complete the
following steps to remove and reconfigure targets:
a) Select each entry and click Remove.
b) Log off each connection by selecting Targets > Details > Sessions > Log Off.
c) Verify that IP addresses were set correctly. If not, correct IP address settings.
d) Log on again for each target using the instructions in this section, starting at Step 2.
The data host can now communicate with the controllers through iSCSI Ethernet host
ports.
For more information, please refer to the "Microsoft iSCSI Software Initiator Version 2.X User's Guide".
Usage: Benefits of 4-Gbit/sec speed and robust storage for video streaming/editing;
individual server systems
Example Business: Small business with a single server system, video production
Usage: Individual users have direct access to a file server; users can store all CAD work
on robust storage
Figure 14: Single-Controller, Direct Attached Connection to Two Single-Port Data Hosts
[Figure 15: High Availability, Dual-Controller, Direct Attached to One Dual-Port Data Host. The port interconnects are interconnected by the FibreCAT SX Manager; the host sees the A + B LUNs.]
[Figure 16: High-Availability, Dual-Controller, Direct Attached to Two Dual-Port Data Hosts. The port interconnects are interconnected by the FibreCAT SX Manager; both hosts see the A & B LUNs.]
CAUTION!
If a controller fails, the host loses access to volumes owned by that controller.
Usage: Benefits of 4-Gbit/sec speed and robust storage for video streaming/editing;
individual user systems
Figure 17: High-Performance, Dual-Controller, Direct Attached Connection to Two Dual-Port Data Hosts, Non-Fault-Tolerant
5.9.6 High Availability, Dual-Controller, Through One Switch to One Dual-Port Data Host
This connection is for configurations in which there is a single switch. It requires that host
port interconnects are set to Straight-through.
Advantage: Low cost; enables adding more servers, which provides more applications; enables expansion of user account storage space for files and video
Figure 18: High-Availability, Dual-Controller Connection Through One Switch to One Dual-Port Data Host
The FC switch must be configured according to port zoning rules (only one initiator
and one target per zone) and with fixed speed and topology settings.
Usage: Cluster servers for fault tolerance of OS and server; dual switches for fault
tolerance of switches; dual controllers for fault tolerance of storage (very high fault
tolerance and also high speed)
Advantage: Achieves higher uptime rate (99% uptime) by eliminating many points of
failure; enables more growth of additional cluster servers; enables more growth of data
storage; offers additional security through switches
Figure 19: High-Availability, Dual-Controller Connection Through Two Switches to Two or More Dual-Port Data
Hosts
The FC switch must be configured according to port zoning rules (only one initiator
and one target per zone) and with fixed speed and topology settings.
Advantage: Low cost; multiple web servers; switch access to allow multiple hosts
Figure 20: High-Availability, Dual-Controller Connection Through a Switch to Two or More Dual-Port Data Hosts
The FC switch must be configured according to port zoning rules (only one initiator
and one target per zone) and with fixed speed and topology settings.
Usage: High level of security; file server; web server; email server; SQL server
Advantage: Switch access allows multiple hosts; the switch adds extra security with zoning on storage ports, which allows volumes to be mapped to specific zones
Figure 21: High-Availability, Dual-Controller Connection Through a Switch to Two or More Dual-Port Data Hosts
The FC switch must be configured according to port zoning rules (only one initiator
and one target per zone) and with fixed speed and topology settings.
6 Using vdisks
6.1 Glossary
A virtual disk ("vdisk") is a set of disks that form a RAID configuration, otherwise known as
an array.
The resilience of a vdisk can be improved by assigning one or more spare disks. The
FibreCAT SX offers a choice between two types of spare disks:
A local spare disk is dedicated to a specific RAID set and will only be used if a disk in that RAID set fails.
A global spare disk will be used to replace the first hard disk that fails in any RAID set.
The process for creating and modifying vdisks is described in Manually Creating a Virtual
Disk on page 43.
A volume is a certain amount of disk capacity on a vdisk, otherwise known as a LUN. To a
large extent, a volume behaves like a partition of a PC hard disk, so it can be formatted with
a file system such as NTFS if you use Microsoft Windows Server 2003. Also, you can assign
a drive letter to it, e.g. E:. However, unlike partitions on a hard disk that is located within a
server or directly attached to it, volumes in a FibreCAT SX can be mapped to multiple hosts,
which is a prerequisite for clustering.
The process for creating and modifying volumes is described in Mapping Volumes to
Hosts on page 47.
A host is a server that will be accessing one or more volumes on a FibreCAT SX. During or
after the creation of a volume, you can decide to what extent it will be mapped (or exposed)
to hosts. Only after a volume has been mapped to a host can it be used by that host. The
relationship between physical disks, virtual disks, volumes and partitions is illustrated in
Figure 22.
The process for creating and modifying volume-to-host mappings is described in Mapping
Volumes to Hosts on page 47.
Be very careful when mapping volumes to hosts, and only map hosts to volumes on
an as-needed basis. Superfluous volume-to-host mapping may lead to loss of data,
for instance when a Linux volume is mapped to a Windows host and the volume,
whose format isn't recognized by Windows, is formatted.
Once a volume has been exposed to a host, it can be used by that host. The process for using a volume on a host is described in Using LUNs on Hosts on page 51.
The FibreCAT SX also allows automatic creation of vdisks. With manual creation
you can exercise more control over the parameters, so this manual only describes
manual creation.
Before you create a vdisk manually, you need to make the following decisions:

Decision                                       Changeable later?
The disks that you want to use for the vdisk   Yes; RAID sets can be expanded, with the exception of RAID 1 sets
Assign local spare disks to this vdisk         Yes
The vdisks and volumes on those vdisks can be used right away, even before the
initialization has completed.
Click Manage.
Under Virtual Disk Creation Method, choose Manual Virtual Disk Creation (Detail-based).
Choose a virtual disk name and the RAID level, then click Create New Virtual Disk. Under
Select Drives to Add to Virtual Disk, select the drives that you want to include in this vdisk.
Click Calculate Virtual Disk Size in order to check the net capacity that this selection will
provide.
Click Continue.
Set your choice for Would you like to name your volumes to No.
Don't change the default block size. Doing so will degrade performance in most
cases. Only change the block size if you have explicit instructions.
Click Add Volumes to start the creation of the vdisk and volumes.
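The net capacity reported by Calculate Virtual Disk Size depends on the chosen RAID level. As a rough sketch of that arithmetic (these are the standard RAID capacity rules and ignore any metadata overhead the controller may reserve; they are an illustration, not the controller's exact algorithm):

```python
def net_capacity_gb(raid_level: int, disk_count: int, disk_size_gb: float) -> float:
    """Approximate usable vdisk capacity, ignoring controller overhead."""
    if raid_level == 0:                       # striping, no redundancy
        return disk_count * disk_size_gb
    if raid_level == 1:                       # mirroring: half the raw capacity
        return disk_count * disk_size_gb / 2
    if raid_level == 5:                       # one disk's worth of parity
        return (disk_count - 1) * disk_size_gb
    if raid_level == 6:                       # two disks' worth of parity
        return (disk_count - 2) * disk_size_gb
    raise ValueError("unsupported RAID level in this sketch")

# e.g. five 500-GB disks in RAID 5 yield roughly 2000 GB of net capacity
print(net_capacity_gb(5, 5, 500.0))  # 2000.0
```

Comparing such an estimate with the value shown by Calculate Virtual Disk Size is a quick sanity check of your drive selection.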
By default, the Show global spares screen will be shown. Global spares are shown using
the color grey.
In the box Select Drives to Add as Global Spares, select the disk or disks that you want to
use as global spares.
When you change ownership of a virtual disk, the volume-to-host mappings for the virtual disk's volumes become invalid. After changing ownership, you must reassign volumes to hosts before the volumes can be used. Assigning volumes to hosts is covered in Mapping Volumes to Hosts on page 47.
In the box Global Host Port List, you see the WWN or IQN of each port that has been
found in the infrastructure to which you connected the FibreCAT SX. Enter a port
nickname, then click Edit Name.
You will only see WWNs or IQNs if the FibreCAT SX has already been
connected and can actually see the ports. If for instance you have set up zones
on your switch and then assigned the FibreCAT SX to a specific zone, only the
ports in the same zone will be shown. If the WWNs/WWPNs or IQNs don't show
up automatically, they can be added manually as described in the next step.
If the port that you want to name does not appear in the Global Host Port List, you can
enter the port WWN and the port nickname in the box Add Port to Global Host Port List,
and then click Add New Port. The port will be added to the Global Host Port List.
In this section, "host" refers to a port on a server or switch. If you want to access a volume through multiple ports, e.g. for setting up MPIO, you must map the volume to each port. This is illustrated in Figure 23.
In the box Select a virtual disk to view volume information, select the virtual disk where the
volume that you want to map to one or more hosts is located.
In the box Volume Menu for the selected vdisk, you can see which volume is about to be mapped ("Selected X" is shown at the left of this volume). Change this selection by clicking on the volume that you want to map.
In the box Map Host or Default, enter the host WWN or IQN and/or host name, the LUN, and Port 0 Access and Port 1 Access. The LUN (logical unit number) is a number (0-127) assigned to a logical unit.
You must set the same value to Port 0 Access and Port 1 Access.
Each host port on the server expects to see unique LUNs, so avoid duplicate LUNs.
Click Map Host or Default.
Host names other than Default are only shown if you set port nicknames. Setting
port nicknames makes it easier to identify ports on servers and switches. The
process for setting and changing port nicknames is described in section Managing
the Host List on page 47.
If you choose Default for Host WWN or Host Name, the volume will be visible to all
servers. This may jeopardize security and reliability. We therefore recommend that
you only expose a volume to the host or hosts that need access.
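Because each host port must see unique LUNs, it can help to think of the mapping as per-host-port bookkeeping of used numbers. A small hypothetical sketch of that bookkeeping (the class and method names are purely illustrative and are not part of the FSM or its CLI):

```python
class LunTable:
    """Tracks which LUNs (0-127) are already mapped for one host port."""

    def __init__(self):
        self.used = set()

    def assign(self, lun=None):
        """Reserve the requested LUN, or the lowest free one if none is given."""
        if lun is None:
            lun = next(n for n in range(128) if n not in self.used)
        if not 0 <= lun <= 127:
            raise ValueError("LUN must be in the range 0-127")
        if lun in self.used:
            raise ValueError("LUN %d is already mapped on this host port" % lun)
        self.used.add(lun)
        return lun

host_a = LunTable()
host_a.assign(0)          # map the first volume at LUN 0
print(host_a.assign())    # next free LUN -> 1
```

Keeping such a record per host port makes duplicate-LUN mistakes easy to spot before you click Map Host or Default.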
On the Windows Server, start the tool for managing your Host Bus Adapter (for FC
configurations: HBAnywhere if you use an Emulex controller, SANsurfer if you use a
QLogic controller).
Run a discovery cycle after assigning or reassigning a volume to this server. In HBAnywhere, this is done by clicking View, then Start discovery cycle. You should now see the
LUN that you associated with the volume.
In some cases HBAnywhere does not show new LUNs after a discovery cycle.
Restarting HBAnywhere is then usually sufficient.
When the new LUN shows up, use the Windows Device Manager to discover the new disk. To do so, follow these steps.
Click Start.
For every FibreCAT SX volume that has been mapped to this server, you will now see
an entry FibreCAT_SX1 SCSI Disk Device. Right-click on each disk, and click Properties.
The Location field shows the LUN (you may have to scroll right to see the LUN).
If the LUN that you have exposed does not yet appear, right-click on Disk Drives, then
click Scan for hardware changes.
If the new LUN does not appear in the list, use the troubleshooting checklist that
you will find in Troubleshooting LUNs on page 52.
New LUNs will be shown as Disk [n], Unknown, [size], Not initialized.
Continue configuring and using the new disks as you would with any other newly added
disks, i.e. initialize, partition and format.
The word Volume here is unrelated to the same term as used in the context of the FibreCAT SX.
Check your host mapping settings. Did you use the correct WWN or host name for the
volume? Did you use a LUN that is not already being used on the server? Did you
choose the same value for Port 0 Access and Port 1 Access, i.e. read-only or
read+write?
Check your settings in HBAnywhere or SANsurfer. For instance, the speed set in the
FibreCAT SX must be the same as the speed set for the port to which it is connected
on the switch or on the server. Read Configuring the FC Ports on the FibreCAT SX on
page 19 for details.
Check the software prerequisites as needed according to the latest FibreCAT SX information at
http://ts.fujitsu.com/products/storage/disk/fibrecat_sx/index.html.
If you have replaced your HBA, you have to modify the volume mapping accordingly.
Before you start using snapshots, consider reading the white paper "Data
Protection with FibreCAT SX Snapshots", which you can download from
http://ts.fujitsu.com/products/storage/disk/fibrecat_sx/index.html.
In the box Select a virtual disk to view volume information, click on the vdisk where you want
to store the snapshots for this controller. Your choice will be confirmed by means of a
blue box around the vdisk.
In the box Create Snap Pool, enter the Snap Pool Size and the Snap Pool Name. Next, click
Create Snap Pool. In the box Volume Menu for this vdisk, the snap pool will be shown in
blue.
Consider setting the size of the snap pool to 10% of the capacity of the volumes of
which snapshots will be taken. You can increase the size of the snapshot pool later,
but you cannot decrease it.
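The 10% rule of thumb above is easy to compute in advance; a small sketch of the arithmetic (purely illustrative, the function name is not part of any FibreCAT tool):

```python
def recommended_snap_pool_gb(volume_sizes_gb, fraction=0.10):
    """Suggest a snap pool size as a fraction (default 10%) of the combined
    capacity of the volumes that will be snapshot-enabled."""
    return sum(volume_sizes_gb) * fraction

# e.g. two volumes of 400 GB and 600 GB -> 100 GB snap pool
print(recommended_snap_pool_gb([400, 600]))  # 100.0
```

Since the snap pool can be grown but not shrunk, erring on the generous side of this estimate is the safer choice.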
If you have a FibreCAT SX with two controllers, repeat this process for the other
controller. To do so, select a vdisk for which you set the other controller as the
preferred controller.
Before you can enable a volume for snapshots, you must create a snap pool for the
preferred controller of the vdisk where the volume is located. Please refer to section
Creating a Snap Pool on page 54 for the process of creating a snap pool.
In the box Select a virtual disk to view volume information, click on the vdisk where the
volume that you want to snapshot-enable is located.
In the box Volume Menu for the selected vdisk, you can see which volume is about to be snapshot-enabled ("Selected X" is shown at the left of this volume). Change this selection by clicking on the volume that you want to snapshot-enable.
In the box Convert to Master Volume for the volume that you are about to snapshot-enable, select the snap pool that you want to use. Next, click Convert to Master Volume.
Before you can build a master volume, you must create a snap pool for the preferred
controller of the vdisk where the master volume will be located. Please refer to
section Creating a Snap Pool on page 54 for the process of creating a snap pool.
Click Create Master Volume in the Manage menu at the left side of the screen.
In the box Create Master Volume on Virtual Disk, select a snap pool, enter a volume size,
enter a volume name and enter a default LUN. Next, click Create Master Volume.
In the box Select a virtual disk to view volume information, click on the vdisk where the
volume of which you want to make a snapshot is located.
In the box Volume Menu for the selected vdisk, you can see for which volume you are
about to create a snapshot ("Selected X" is shown at the left of this volume). Change
this selection by clicking on the volume of which you want to take a snapshot.
You can only take a snapshot of a master volume, i.e. a volume that has been snapshot-enabled. The type of such a volume is listed as Master volume in the volume menu. Refer to section Enabling an Existing Volume for Snapshots on page 54 for snapshot-enabling standard volumes (volumes that have not yet been snapshot-enabled).
In the box Enable snapshot, enter a descriptive name of your snapshot, then click Take
snapshot. Your snapshot will be listed in the Volume menu.
Consider including the date and time that you take a snapshot. This will make it
easier to select the right snapshot when rolling back a volume.
As standard, the FibreCAT SX60 allows two snapshots per controller. The FibreCAT SX80, SX88, SX100 and SX80 iSCSI allow four snapshots per controller. Support for retaining more snapshots can be obtained. The maximum number of snapshots may change in future releases of the FibreCAT SX. Check the FibreCAT SX Release Notes for the actual number.
Some applications like relational database management systems are often
used with multiple volumes, e.g. separate volumes for the database itself and
for the transaction log. If this is the case in your infrastructure, you should
synchronize the snapshot creation of all volumes. This can be done by using a
VSS-aware backup application. If you don't have such an application, this can
be done through the CLI by entering the following command:
create snapshots master-volumes [master-volumes] [snapshot names]
For example, if your application uses volumes Vdisk1V1 and Vdisk1V2, you
might enter the following command:
create snapshots master-volumes Vdisk1V1,Vdisk1V2 Snap1,Snap2
This feature is very important because it is the only way to learn about the date and
time that a snapshot was taken.
In the box Select a virtual disk to view volume information, click on the vdisk where the
snapshot volume is located.
In the box Volume Menu for the selected vdisk, you can see for which volume snapshot information is shown ("Selected X" is shown at the left of this volume). Change this selection by clicking on the snapshot volume.
In the box Volume status, you see all information related to this snapshot.
Visit
http://ts.fujitsu.com/products/storage/disk/fibrecat_sx/index.html
where you can download a white paper about integration between CA BrightStor
ARCserve and FibreCAT SX snapshots.
The following actions are available for the Critical Policy:
- Notify only
- Delete snapshots
- Invalidate snapshots
- Halt writes
Repeat this procedure for every snap pool unless you accept the default settings.
In the box Select a virtual disk to view volume information, click on the vdisk where the snap
pool for which you want to set the policy is located.
In the box Volume Menu for the selected vdisk, you can see which snap pool is selected ("Selected X" is shown at the left of it). Change this selection by clicking on the snap pool for which you want to set a policy.
In the box Change Policy Configuration, set the Warning Policy, i.e. the utilization of the
snap pool at which you want to be notified.
In the same box, set the Error Policy, i.e. the utilization of the snap pool at which an
automated action such as deletion of the oldest snapshot should happen. Also, set the
action.
In the same box, set the Critical Policy, i.e. the action that should be taken automatically
if the snap pool reaches 99% utilization.
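The three policies form an escalating scale of utilization thresholds. A sketch of that logic (the fixed 99% critical level comes from the text; the default warning and error levels below are assumptions chosen only for illustration):

```python
def policy_action(utilization, warning=75.0, error=90.0):
    """Return the snap-pool policy tier triggered at a given utilization (%).
    Warning and error thresholds are configurable; critical is fixed at 99%."""
    if utilization >= 99.0:
        return "critical"   # e.g. halt writes or delete snapshots
    if utilization >= error:
        return "error"      # automated action such as deleting the oldest snapshot
    if utilization >= warning:
        return "warning"    # notify the administrator
    return "ok"

print(policy_action(80.0))  # warning
```

The important property is the ordering: the Warning Policy must fire below the Error Policy, which in turn must fire below the fixed 99% Critical Policy.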
As an alternative to the FibreCAT SX Manager, you can use the Microsoft Storage Manager for SANs to configure the vdisks and volumes on your FibreCAT SX. This kind of configuration requires the installation of the VDS provider and the CAPI proxy.
One of the tools that use VDS is Microsoft Storage Manager for SANs, which is included in
Windows Server 2003 R2. Storage Manager for SANs allows the creation and modification
of virtual disks and volumes, and provides LUN mapping. Since it relies on VDS you don't
have to know the specifics of the FibreCAT SX in order to carry out these tasks.
To install Storage Manager, follow these steps.
Install the VDS Provider and CAPI Provider on the server that will be used for managing
the FibreCAT SX as needed according to the latest FibreCAT SX information at
http://ts.fujitsu.com/products/storage/disk/fibrecat_sx/index.html.
Click Start.
Click on Details.
Click OK.
Click Next.
Click Start.
Use Create LUN for creating a virtual disk with a volume (here called LUN) on the
FibreCAT SX.
Use Assign LUN to expose the volume (here called LUN) to one or more servers.
The other actions (Delete LUN, Extend LUN, Rename LUN and Unassign LUN) are self-explanatory.
You can only create volumes (here called LUNs) on free disks in the FibreCAT SX.
For assigning local or global spare disks, for using other RAID levels, and for
other advanced functions, you must use the FSM (FibreCAT SX Manager) or the CLI.
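Because the configuration is exposed through VDS, other VDS clients can perform the same LUN operations from a command prompt, for example Microsoft's diskraid.exe utility. The session below is an illustrative sketch, not taken from the FibreCAT SX documentation; command syntax varies between diskraid versions, so consult its built-in help.

```
DISKRAID> list providers              (the FibreCAT VDS provider should appear)
DISKRAID> select subsystem 0          (select the FibreCAT SX subsystem)
DISKRAID> create lun simple size=100  (create a LUN on free disks; check "help create")
DISKRAID> list luns                   (verify that the new LUN is listed)
```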
Enter the IP address of your ServerView management server in the field SNMP Trap
Host IP Address.
By default, the FibreCAT SX uses the SNMP community Public as its read community and
Private as its write community. To change this, follow these steps.
Open the Command Line Interface and log in as described in Using CLI Commands to
Assign an IP Address on page 17.
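The community strings can then presumably be changed with the set snmp-parameters command. The parameter names below are assumptions based on the FibreCAT SX CLI family and may differ on your firmware; check the CLI reference.

```
# Sketch only -- parameter names are assumptions; see the CLI reference.
set snmp-parameters read-community ourpublic write-community ourprivate
set snmp-parameters trap-host-list 192.168.1.50   # ServerView management server
```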
Next, determine to what extent you want to monitor your FibreCAT SX with ServerView
using the following steps.
Make sure that Notification Enabled in the column SNMP Traps is set to Enable.
Critical events are events that require operator intervention. We recommend that you
enable this setting.
Warning events are events that should be looked at; they may indicate that
preventive action is required. We recommend that you enable this setting.
Informational events are purely informational; no action is required.
Consider enabling this setting initially and switching it off later if you feel
that you no longer need this level of monitoring and alerting.
Verify that the message Your change was successful is shown at the top of the screen.
Open the ServerView server list by browsing to your ServerView Management Server,
e.g. by browsing from your PC to http://<yourserver>/serverview.
Right-click anywhere in the server browser or server list, and click Add new server.
In the tab Server Address, enter the IP address of your FibreCAT SX, then click Search.
ServerView should then find your FibreCAT SX.
Open the tab Network/SNMP, and set the SNMP community name.
In the ServerList screen, you can click on the name of the FibreCAT SX, which is shown
as a hyperlink, to open the FibreCAT SX FSM.
If the FibreCAT SX raised an alarm, you can open the ServerView S2 Alarm Monitor,
which will display a screen similar to the one shown in Figure 25.
If you want to use monitoring and alerting preferences other than those you set
for your (PRIMERGY) servers, click on monitoring and configure the alarm settings.
Manage: enables access to all functions on the Monitor and Manage menus.
Only one Manage-level user and up to five Monitor-level users can be logged in
concurrently. SX Manager distinguishes users by their IP addresses. If you log in to
SX Manager using multiple browser instances on the same management host, SX
Manager considers all instances as a single user; actions you take in one instance
are reflected in the other instances on the same host. Do not log in more than once
from the same host.
Diagnostic: advanced user rights plus access to troubleshooting functions for use by
service technicians.
To modify a user's configuration, select a user name from the dropdown list and click
Select Username to Modify. The Modify Selected User panel appears.
Click Update User. When processing is complete, the updated System User List panel
appears.
Figures
Figure 1: FibreCAT SX 60 / SX80 / SX88 Rear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Figure 2: FibreCAT SX100 Rear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Figure 3: Hard Disk Drives on the Front of a Controller Enclosure . . . . . . . . . . . . . . . . . 9
Figure 4: Expanding a FibreCAT SX With One Enclosure. . . . . . . . . . . . . . . . . . . . . . . . 13
Figure 5: Expanding the FibreCAT SX80 or SX88 or SX80 iSCSI
With Two Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Figure 6: Expanding the FibreCAT SX80 or SX88 or SX80 iSCSI
With Three Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Figure 7: Expanding the FibreCAT SX100 With up to Eight Enclosures
(non-redundant) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Figure 8: Highly Available Configuration Connected Through Redundant
Switches and HBAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Figure 9: Single-Controller, Direct Attached Connection to a Single Data Host (iSCSI) . 25
Figure 10: High-Availability Connection Through Two Switches to Two Dual-Port
Data Hosts (iSCSI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Figure 11: iSCSI Storage Presentation During Normal, Active-Active Operation . . . . . . 27
Figure 12: iSCSI Storage Presentation During Failover . . . . . . . . . . . . . . . . . . . . . . . . . 28
Figure 13: Single-Controller, Direct Attached to One Single-Port Data Host. . . . . . . . . . 31
Figure 14: Single-Controller, Direct Attached Connection to Two Single-Port
Data Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Figure 15: High Availability, Dual-Controller, Direct Attached to One Dual-Port
Data Host. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Figure 16: High-Availability, Dual-Controller, Direct Attached to Two Dual-Port
Data Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Safety manual
Basic safety information including the handling of racks and rack mount enclosures.
Supplied with the hardware as a printed manual.