Before data transactions can occur, the two N_Ports must perform an N_Port login to
assign N_Port addresses or N_Port IDs.
Arbitrated loop is still commonly used for JBODs (just a bunch of disks) and for mass
storage to support NAS processors.
Arbitrated loop is a shared gigabit transport, as with shared Ethernet or Token Ring
segments.
There are usually no more than four to six initiators making requests to those drives
(in cascade mode).
The design allows large configurations on a single loop without dividing the bandwidth
down to the level of ordinary Ethernet.
Fig: A daisy chain arbitrated loop between a server and two storage arrays
One of the most useful features of a loop hub is bypass circuitry at each port,
which allows the loop to circumvent a disabled or disconnected node while
maintaining operation.
A cascade is simply a normal cable connection between a port on one hub and a
port on another.
Cascading consumes one port on the first and last hubs in a chain, and two ports
on intervening ones.
Cascading three 8-port hubs, for example, would yield 20 usable ports, with
4 of the 24 physical ports sacrificed to achieve the cascade.
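The cascade arithmetic above can be sketched as a small helper (an illustrative function, not from any SAN toolkit):

```python
def usable_ports(hubs: int, ports_per_hub: int) -> int:
    """Ports left for node connections after cascading hubs in a chain.

    Cascading consumes one port on the first and last hubs and two
    ports on each intervening hub: 2 * (hubs - 1) ports in total.
    """
    if hubs < 1:
        raise ValueError("need at least one hub")
    consumed = 0 if hubs == 1 else 2 * (hubs - 1)
    return hubs * ports_per_hub - consumed
```

For three 8-port hubs this yields 24 - 4 = 20 usable ports, matching the example.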
Private loop: If no switch connection exists, the upper two bytes of this address are
zeroed to x'00 00'.
Public loop : If the loop is attached to a fabric and if an NL_Port supports fabric login,
the switch assigns a positive value to the upper two bytes (and possibly the last byte).
In both public and private arbitrated loops, the last byte of the 24-bit port address is
referred to as the arbitrated loop physical address (AL_PA).
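The private/public address split described above can be illustrated with a short sketch (function names are illustrative):

```python
def split_port_address(addr: int) -> tuple[int, int]:
    """Split a 24-bit NL_Port address into (loop identifier, AL_PA).

    The upper two bytes form the loop identifier; the last byte is
    the arbitrated loop physical address (AL_PA).
    """
    loop_id = (addr >> 8) & 0xFFFF
    al_pa = addr & 0xFF
    return loop_id, al_pa

def is_private_loop(addr: int) -> bool:
    """A private loop device has its upper two bytes zeroed (x'00 00')."""
    return (addr >> 8) & 0xFFFF == 0
```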
The 1-byte AL_PA provides a compact addressing scheme and lets you include a device's
identity as part of a 4-byte ordered set.
The total number of AL_PAs available for arbitrated loop addressing is 127.
The loop-specific ordered sets may include AL_PAs in the last two byte positions;
negative running disparity is maintained if these values are neutral-disparity characters.
If the destination and source AL_PAs have neutral disparity, the Open transmission word
will leave the running disparity negative, which satisfies the requirement for the next
start of frame (SOF).
Because the AL_PA values are selected on the basis of neutral disparity, a listing of
AL_PA hex values appears to jump randomly over some byte values and skip others.
This is significant for some Fibre Channel disk drives, which allow the user to set
jumpers or dip switches on a controller card to manually assign a fixed AL_PA.
A server with an AL_PA of x'01' will have a statistically higher chance of winning
arbitration against lower-priority contenders, although arbitrated loop also provides
safeguards against starvation of any port.
An NL_Port's AL_PA can change with every initialization of the loop or reset of the
device.
Arbitrated loop initialization guarantees that each attached device will have a unique
AL_PA.
After loop initialization, the loop enters a stable monitoring mode and begins (or resumes)
normal activity.
loop initialization primitive sequences (LIPs): A number of ordered sets have been
defined to cover the various conditions that an NL_Port may sense as it launches the
initialization process.
Example :
If the HBA's server is power-cycled:
The HBA immediately begins transmitting LIP(F7, F7) onto the loop.
An HBA attached to an active loop is moved from one hub port to another:
The HBA momentarily loses its received signal and therefore starts streaming
LIP(F8, F7) onto the loop.
The receiver of the HBA is broken, or the receive cable is broken, and the server
has been power cycled:
The F8 indicates that the HBA senses loop failure.
The AL_PS is the source AL_PA of the alert.
ME - CSE [SAN- Unit II] 13
Fig : An NL_Port streaming LIP(F8)s onto an arbitrated loop
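As a rough summary of the LIP examples above, a small lookup might look like this (the scenario names are illustrative; only the ordered-set values come from the text):

```python
# Map loop conditions to the LIP ordered set an NL_Port streams.
# F7 in the first position = port is (re)initializing with no valid AL_PA;
# F8 in the first position = port senses loop failure.
LIP_FOR_CONDITION = {
    "power_cycled": ("F7", "F7"),
    "moved_to_new_hub_port": ("F8", "F7"),
    "receiver_broken_after_power_cycle": ("F8", "F7"),
}

def lip(condition: str) -> str:
    """Render the LIP a port would stream for a given condition."""
    first, second = LIP_FOR_CONDITION[condition]
    return f"LIP({first}, {second})"
```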
As each downstream device receives the LIP stream, it enters a state known as Open-
Init, which suspends any current operations and prepares the device for the loop
initialization procedure.
The LIPs are forwarded along the loop until all NL_Ports, including the originator, are in
an Open-Init condition.
Arbitrated loop also provides a selective reset LIP that is directed by one NL_Port to
another, but the selective reset LIP(AL_PD, AL_PS) may cause the target device to reboot.
Loop initialization provides a selection process to determine which device will be the
temporary loop master.
Once selected, the loop master is responsible for conducting the rest of the
initialization procedure and returning the loop to normal operation.
Loop master selection is conducted with LISM frames, each of which contains a port type
identifier (x'00' for an FL_Port, x'EF' for an NL_Port) and the sender's 64-bit
World-Wide Name (WWN).
The first task of the temporary loop master is to issue a series of four frames that will
allow each participant on the loop to select a unique AL_PA.
The frame format contains a 128-bit field that represents an abbreviated mapping of all
possible AL_PAs.
The position of each bit in the AL_PA map corresponds to the sequential list of AL_PAs,
beginning with x'00' and ending with x'EF'.
The header of each AL_PA map frame contains an identifier that defines which of the
loop devices is allowed to select an AL_PA.
The first frame issued has an identifier of loop initialization fabric address, or LIFA.
NL_Ports that have logged in with the fabric have permission to select the bit
corresponding to their fabric-assigned AL_PA.
The second frame, identified as loop initialization previous address (LIPA), lets ports
reclaim an AL_PA they held before. When the frame is issued for the third time, the
identifier is changed to loop initialization hard address, or LIHA.
The last frame issued has an identifier of loop initialization soft address or LISA.
This frame is for any NL_Ports that did not qualify for or were unsuccessful in the
previous rounds.
After the LISA frame returns to the temporary loop master, each loop device will have a
unique AL_PA.
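The round-by-round selection described above can be sketched as a simplified simulation. This is illustrative only: a real loop circulates 128-bit bitmap frames, the valid AL_PAs are limited to the 127 neutral-disparity values, and the rounds correspond to fabric-assigned, previously held, hard-wired, and soft addresses.

```python
def assign_al_pas(devices):
    """Simulate the four-round AL_PA claim so every device ends unique.

    `devices` maps a device name to its preferences, e.g.
    {"hard": 0x01} for a jumpered address. Returns {device: AL_PA}.
    """
    claimed = set()
    result = {}
    for phase in ("fabric", "previous", "hard"):
        for name, prefs in devices.items():
            want = prefs.get(phase)
            if name not in result and want is not None and want not in claimed:
                claimed.add(want)
                result[name] = want
    # Soft round: any remaining device takes a free value. (Placeholder
    # pool; real AL_PAs are the 127 neutral-disparity byte values.)
    soft_pool = [a for a in range(0x01, 0xF0) if a not in claimed]
    for name in devices:
        if name not in result:
            result[name] = soft_pool.pop(0)
    return result
```

Note how two devices jumpered to the same hard address cannot collide: the second falls through to the soft round, which is exactly the guarantee loop initialization provides.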
At this point the loop initialization process could be closed and the loop returned to
normal operation.
A positional map is useful for determining how AL_PAs are physically positioned in
a loop topology. It gives the diagnostic capability and the opportunity to fine-tune a loop
configuration for optimal performance.
As it circulates, the frame fills with a positional map that details how loop devices are
physically positioned in relation to one another. When the positional map is complete,
the loop master distributes it in a loop initialization loop position, or LILP, frame.
After the LILP frame (if all devices support positional mapping) or the LISA frame
(if some do not), the temporary loop master finishes loop initialization by issuing a
close (CLS) ordered set, followed by IDLEs.
Proper selection of components, HBAs, disks, and arbitrated loop hubs will help to
ensure that the topology change implied by loop initialization does not impact
loop stability.
Both login processes are part of a set of extended link services that are used to
exchange communication parameters and identities and thus establish logical sessions
between devices on the topology.
The server attempts to establish login sessions with targets by issuing PLOGI frames
addressed to each of the 126 possible NL_Port AL_PAs.
The targets that accept the PLOGI from the server will return an ACC (accept) frame to
the server, informing it of the target's WWN, buffer-to-buffer credit capability, maximum
frame size support, and so on.
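The PLOGI probing described above might be sketched as follows (`send_plogi` is a hypothetical callback standing in for the HBA driver; it returns the ACC login parameters or None if no device answers at that AL_PA):

```python
def discover_targets(send_plogi, al_pas):
    """Probe each candidate AL_PA with a PLOGI and collect ACC responses.

    Returns a dict of AL_PA -> login parameters (WWN, buffer-to-buffer
    credit, maximum frame size, and so on) for every port that accepted.
    """
    sessions = {}
    for al_pa in al_pas:
        acc = send_plogi(al_pa)
        if acc is not None:
            sessions[al_pa] = acc
    return sessions
```

On a private loop this brute-force walk over the 126 possible NL_Port AL_PAs is practical; the fabric name server discussed later exists precisely so that a 24-bit address space never has to be probed this way.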
At some point, an association must be made between the link-level AL_PAs and the
application's logical definition of SCSI bus, target, and LUN (logical unit number)
addressing.
The extended link services that interface to FC-2 can determine which devices are out
there.
The FC-4 protocol mapping (for example, Fibre Channel Protocol for SCSI-3) can
determine what can be done with them.
The responsibility for maintaining the association between lower-level AL_PAs and upper-
level SCSI addressing is assumed by the device driver of the HBA or controller installed in
an initiator.
This layer, called the loop port state machine, resides between the FC-1 encoding/decoding
function and FC-2's frame management and flow control functions.
These processes are expressed in 11 different states and are controlled by a number of
input and output variables.
As requests are handed down by the FC-2 layer, the loop port state machine must determine
its current state and decide which further actions are required to fulfill the pending request.
Example:
When the upper layer has frames to send, the loop port state machine must transition
from its normal monitoring state.
Enter an arbitrating state to gain access to the loop.
When it wins arbitration, it changes to an arbitration won state.
Then changes to an open state as it notifies the target that a transaction is
under way.
When the frame transmission is complete, the loop port state machine enters a
transmitted close or received close state, and finally it returns to monitoring mode.
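The walk in this example can be modeled minimally. The standard defines 11 states; this sketch covers only the happy path for a sending port, and the state names are illustrative:

```python
from enum import Enum, auto

class LoopPortState(Enum):
    MONITORING = auto()
    ARBITRATING = auto()
    ARBITRATION_WON = auto()
    OPEN = auto()
    XMITTED_CLOSE = auto()

# Happy-path transitions for a port that has frames to send.
SEND_PATH = {
    LoopPortState.MONITORING: LoopPortState.ARBITRATING,
    LoopPortState.ARBITRATING: LoopPortState.ARBITRATION_WON,
    LoopPortState.ARBITRATION_WON: LoopPortState.OPEN,
    LoopPortState.OPEN: LoopPortState.XMITTED_CLOSE,
    LoopPortState.XMITTED_CLOSE: LoopPortState.MONITORING,
}

def walk_send_path():
    """Follow the send path once, returning the states visited in order."""
    state, path = LoopPortState.MONITORING, []
    for _ in range(len(SEND_PATH)):
        state = SEND_PATH[state]
        path.append(state)
    return path
```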
Having won arbitration, a device resets its access bit and will be able to arbitrate
again only when no other devices are arbitrating.
This scheme allows even low-priority devices on the loop to win arbitration and thus
prevents starvation of any port.
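A toy simulation of the access-fairness window, under the simplifying assumption that the lowest eligible AL_PA always wins each round:

```python
def fair_arbitration_rounds(al_pas, rounds):
    """Simulate access fairness (simplified sketch).

    Lower AL_PA = higher priority. A winner clears its access bit and
    may arbitrate again only once every contender's bit is cleared,
    at which point all bits are set again (a new access window).
    Returns the list of winners per round.
    """
    access = {p: True for p in al_pas}
    winners = []
    for _ in range(rounds):
        eligible = [p for p, ok in access.items() if ok]
        if not eligible:                      # window exhausted: reset bits
            access = {p: True for p in al_pas}
            eligible = list(al_pas)
        winner = min(eligible)                # lowest AL_PA wins
        access[winner] = False
        winners.append(winner)
    return winners
```

Even the lowest-priority port (the highest AL_PA value) wins within one window, which is how the scheme prevents starvation.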
An arbitrate primitive, ARB(x), is transmitted whenever a loop device needs access to the
loop.
The arbitration strategy has design implications for SANs because it may be helpful for
manipulating fairness to increase performance.
The following are the application-driven criteria that you should consider when
implementing arbitrated loop:
Bandwidth requirements:
Applications such as online transaction processing (OLTP), Internet service
provider (ISP) Web servers, and relational database queries may have more
predictable bandwidth requirements. A configuration with one server (equipped with
an efficient HBA) and an 8-drive JBOD provides enough throughput to nearly
saturate a 100MBps Fibre Channel pipe on a 1Gbps loop. Most enterprise
applications do not require sustained throughput; radiology, geological resource
mapping, prepress, and other applications that require very large file transfers
are the exception.
Distance requirements:
Arbitrated loop is a closed ring topology, with the total circumference
determined by the distances between nodes. You can easily calculate the total
propagation delay incurred by the loop's circumference by multiplying the lengths
(both transmit and receive) of copper and fiber-optic cabling deployed by the
appropriate delay factor. A single 10km link to an NL_Port, for example, would
cause a 50-microsecond propagation delay in each direction, or 100 microseconds
total.
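The delay arithmetic can be checked in a couple of lines, assuming the common rule of thumb of roughly 5 microseconds of propagation delay per kilometer of fiber:

```python
FIBER_DELAY_US_PER_KM = 5.0  # ~5 ns per meter of fiber-optic cable

def link_delay_us(length_km: float) -> tuple[float, float]:
    """Delay contributed by one link: (one direction, both directions).

    Because the loop traverses both the transmit and receive fibers,
    a 10 km link adds 50 us each direction, 100 us in total.
    """
    one_way = length_km * FIBER_DELAY_US_PER_KM
    return one_way, 2 * one_way
```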
Managed or unmanaged environments:
Arbitrated loop hubs are available with various levels of management
capability, from simple enclosure services to enhanced analyzer-type functionality.
Managed hubs usually provide a graphical interface, typically written in Java or
platform-specific programming languages. Mission-critical applications, populous
loops, and heterogeneous (multivendor) configurations, however, almost demand the
higher level of visibility that management provides.
High-availability requirements:
The traditional SCSI architecture does not lend itself to high-availability
configurations. The networking characteristics of SANs make it much easier to design
and implement high availability to storage. High-availability configurations using dual
loops are given additional reliability if managed hubs are used for each loop.
The introduction of new devices to an arbitrated loop further divides the shared
bandwidth, whereas adding new devices on the ports of a fabric increases the aggregate
bandwidth.
Cut-through technology is also used in Ethernet switches to speed packet routing from
port to port.
When a frame enters a fabric port from an attached N_Port, cut-through logic examines
only the link-level destination ID (D_ID) of the frame.
In Fibre Channel, the destination ID is the 24-bit port address of the target N_Port.
A routing decision is based on the D_ID, and the frame is switched by internal routing
logic to the appropriate port.
Because the D_ID resides in the first 4-byte word of the frame header, a routing decision
can be made immediately as the frame enters the fabric port.
Fabrics, unlike arbitrated loop, are not restricted to an address space based on
transmission word running disparity (AL_PAs).
Fabric-attached N_Ports are assigned a 24-bit port address, which theoretically allows
for 2^24, or more than 16 million, possible addresses.
The address allocation scheme introduced by the Fibre Channel switch fabric (FC-SW)
standard divides the 24-bit address into three one-byte fields.
The most significant byte is the domain, the next significant is the area, and the least
significant is the port.
The 8-bit domain field has a number of reserved addresses, leaving 239 available for
assignment to individual switches.
This allows each switch in an extended fabric to have a unique identifier and also limits
the largest possible fabric to 239 interconnected switches.
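The domain/area/port split can be expressed directly (a sketch; the field comments summarize the text):

```python
def parse_fabric_address(addr: int) -> dict:
    """Decompose a 24-bit fabric port address into its three fields."""
    return {
        "domain": (addr >> 16) & 0xFF,  # identifies the switch
        "area":   (addr >> 8) & 0xFF,   # e.g. an FL_Port or group of F_Ports
        "port":   addr & 0xFF,          # individual port (AL_PA on a loop)
    }

# 256 domain values minus the reserved addresses leaves 239 for switches,
# which bounds the largest possible fabric.
MAX_SWITCHES = 239
```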
In practice, most SANs do not connect more than a few switches in a single fabric.
You can use the area field to identify individual FL_Ports that support loops, or groups
of F_Ports.
Example: Multi-port card installed in a switch chassis, which can be governed by a
separate part of the routing engine.
Fabrics provide a number of services that rationalize activity on the transport and
facilitate adds, moves, and changes of attached devices.
Fabric login services allow N_Ports and public loop NL_Ports to register with the fabric
for communication.
Simple Name Server (SNS) functions streamline the process of discovery between
communicating ports.
State Change Notification (SCN) allows all fabric devices to keep current with
topology changes caused by the introduction or removal of devices.
An N_Port uses a 24-bit port address of x'00 00 00' when it logs in to the fabric. This
allows the fabric to assign the appropriate port address to the device, based on the
domain/area/port address format.
The newly assigned address is contained in the destination (D_ID) field of the ACC
response.
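Fabric login can be sketched as follows (`fabric_assign` is a hypothetical fabric-side allocator; the real exchange uses FLOGI and ACC frames on the wire):

```python
WELL_KNOWN_UNASSIGNED = 0x000000  # S_ID used before the fabric assigns one

def flogi(fabric_assign, s_id=WELL_KNOWN_UNASSIGNED):
    """Sketch of fabric login.

    The port logs in with a source address of x'00 00 00'; the fabric's
    ACC response carries the newly assigned 24-bit address (in its
    domain/area/port format) in the D_ID field.
    """
    if s_id != WELL_KNOWN_UNASSIGNED:
        raise ValueError("FLOGI is sent before an address is assigned")
    acc = {"d_id": fabric_assign()}   # ACC addressed to the new port address
    return acc["d_id"]
```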
During the subsequent loop initialization sequence, the NL_Port will now have a fabric-
assigned address that it can select during the LIFA phase.
Fabric login establishes the initial relationship between the switch and an attached device.
Fabrics rationalize the process of device discovery by providing a name server function in
each switch.
The SNS is a database of objects that allows each fabric device to register or query useful
information, including the name, address, and class-of-service capability of other
participants.
The name server contains a table of both 24- bit addresses and the corresponding 64-bit
WWNs.
Querying the name server relieves the file server of the need to poll 15.5 million
addresses. The file server can then perform PLOGIs with each reported address to
discover, via upper-layer commands, which devices are SCSI-3 targets.
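A toy name-server registry illustrating register-then-query instead of address polling (a sketch, not the actual FC-GS wire protocol):

```python
class SimpleNameServer:
    """Toy SNS: devices register their attributes; peers query by type
    instead of polling the whole 24-bit address space."""

    def __init__(self):
        self._db = {}  # 24-bit address -> attributes

    def register(self, address, wwn, fc4_type):
        """Record a device's WWN and supported FC-4 protocol."""
        self._db[address] = {"wwn": wwn, "fc4_type": fc4_type}

    def query_by_fc4(self, fc4_type):
        """Return addresses supporting an FC-4 type (e.g. 'SCSI-3')."""
        return [addr for addr, rec in self._db.items()
                if rec["fc4_type"] == fc4_type]
```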
This function is provided by State Change Notification (SCN) and a variant, Registered
State Change Notification (RSCN).
SCN and RSCN are voluntary functions in that an N_Port or NL_Port must register its
interest in being notified of network changes if another device alters state.
A fabric service, RSCN allows all devices that have registered for notification to be
informed if a topology change occurs.
Segmentation transforms what was previously a physical arbitrated loop into a virtual
arbitrated loop.
You can implement this strategy by manipulating the upper two bytes, or loop identifier,
of the private loop device's port address.
On the private loop segment, the loop identifier is x'00 00', indicating that the device
is non-fabric.
Translational private loop support is thus better suited to the access of private loop storage
by fabric-capable servers.
This manipulation implies a passive role for the private loop device, because it would
require considerable overhead to allow private loop servers to discover and communicate
with fabric-capable storage.
Zoning resolves these potential problems by assigning devices or ports to separate zones
or groups, and enforcing separation via selective switching.
Port-based zoning is less involved, because you can use filtering logic in the routing
engine to maintain the assignment of ports to one or more zones.
A classic application for zoning is tape backup for heterogeneous operating systems.
Port zoning is relatively more secure than WWN-based zoning because an intruder cannot
spoof port zoning by manipulating Fibre Channel frame information.
Zone merging is now part of the switch-to-switch protocol and, like SNS exchange,
requires additional handshaking between switches before the fabric can become stable and
operational.
The scalability promised by Fibre Channel is being challenged at every step of the
technology's adoption, forcing standards to prove themselves in workable products.
Extended fabrics built for disaster recovery applications employ multiple switches
separated by long fiber-optic runs, DWDM (dense wave division multiplexing), or IP
tunneling.
Latency across multiple switches is also an issue, and switch vendors have recommended
hop count limitations of three to seven switches to avoid timing issues.
Like OSPF, FSPF is a link state routing protocol. Link state routing protocols were
developed as an enhancement to distance vector routing protocols such as RIP (Routing
Information Protocol).
Distance vector protocols simply determine the optimum path based on the number of hops
from source to destination.
Optimum routes through FSPF Fibre Channel fabrics are based on a single metric,
expressed in the following formula: FSPF cost = S x 1,000 x 1.0625 [Gbps] / line rate
[Gbps].
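Assuming S is an administrative scaling factor that defaults to 1 and the line rate is expressed in gigabaud (1.0625 for a 1Gbps link, 2.125 for 2Gbps), the cost formula can be computed as:

```python
GBAUD = {"1G": 1.0625, "2G": 2.125, "4G": 4.25}  # Fibre Channel line rates

def fspf_cost(line_rate_gbaud: float, s: float = 1.0) -> float:
    """FSPF link cost = S x 1,000 x 1.0625 / line rate (in Gbaud).

    With S = 1, a 1Gbps link costs 1000 and a 2Gbps link costs 500,
    so FSPF prefers faster links when choosing routes.
    """
    return s * 1000 * 1.0625 / line_rate_gbaud
```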
The principal switch facility was developed to automate link layer addressing in Fibre
Channel fabrics.
If multiple E_Port connected switches are powered up or enabled simultaneously, they
will attempt to contact their peers by issuing frames to a well-known address of x'FF FF
FD', which represents the fabric controller of each switch.
This frame flooding continues until a preset timeout value (F_S_TOV, or fabric stability
timeout value) is reached. In the end, the switch with the highest principal switch priority is
determined, and address blocks are assigned.
The build fabric link service is nondisruptive and enables a newly introduced switch to
gain a unique Domain_ID and participate in the enlarged fabric.
New switches must be either disabled or powered off before they are inserted into an
operational fabric, and switch-to-switch links over wide area connections must be
monitored for stability.
The most common loop devices on a fabric switch are JBODs, which appear on a
switch port as a series of downstream NL_Ports on a common loop.
It may also be reasonable, however, to group servers on a loop segment if the
bandwidth required per server is well below the 1Gbps or 2Gbps bandwidth provided by
the switch.
JBODs do not have the throughput of RAID, but they offer an economical alternative
to high-performance RAID storage.
The ability to integrate loop and fabric components in a single solution is also useful
for customers who have already deployed loop-based SANs in various departments and
now wish to tie them together into an enterprise-wide storage network.