STORAGE PROTOCOLS

FC/FC-AL/FC-SW
ESCON/FICON
InfiniBand
CIFS
NFS*
FCoE
FCoTR

---------------------------------

InfiniBand (abbreviated IB) is a computer-networking communications standard used in high-performance computing that features very high throughput and very low latency. It is used for data interconnect both among and within computers. InfiniBand is also used as either a direct or switched interconnect between servers and storage systems, as well as an interconnect between storage systems.

As of 2014 it was the most commonly used interconnect in supercomputers. Mellanox and Intel manufacture InfiniBand host bus adapters and network switches, and in February 2016 it was reported[2] that Oracle Corporation had engineered its own InfiniBand switch units and server adapter chips for use in its own product lines and by third parties. Mellanox IB cards are available for Solaris, RHEL, SLES, Windows, HP-UX, VMware ESX,[3] and AIX.[4] InfiniBand is designed to be scalable and uses a switched-fabric network topology. As an interconnect, IB competes with Ethernet, Fibre Channel, and proprietary technologies[5] such as Intel Omni-Path.

InfiniBand is a type of communications link for data flow between processors and I/O devices that offers throughput of up to 2.5 gigabytes per second and support for up to 64,000 addressable devices. Because it is also scalable and supports quality of service (QoS) and failover, InfiniBand is often used as a server interconnect in high-performance computing (HPC) environments.

The internal data flow system in most PCs and server systems is inflexible and relatively slow. As the amount of data coming into and flowing between components in the computer increases, the existing bus system becomes a bottleneck. Instead of sending data in parallel (typically 32 bits at a time, but in some computers 64 bits) across the backplane bus, InfiniBand specifies a serial (bit-at-a-time) bus. Fewer pins and other electrical connections are required, saving manufacturing cost and improving reliability. The serial bus can carry multiple channels of data at the same time in a multiplexed signal. InfiniBand also supports multiple memory areas, each of which can be addressed by both processors and storage devices.
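
Those memory areas are exposed to adapters through registration. The sketch below registers a plain buffer as an RDMA memory region using the Linux libibverbs API, producing the keys through which the local processor (lkey) and remote devices (rkey) can address it. This is a minimal illustration, not text from the InfiniBand specification; it assumes a machine with an InfiniBand HCA and libibverbs installed, and it omits most error handling.

    /* Minimal sketch: registering a memory region with libibverbs.
     * Assumes an InfiniBand HCA and libibverbs (compile with -libverbs). */
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0]) {
            fprintf(stderr, "no InfiniBand devices found\n");
            return 1;
        }
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);   /* protection domain */

        /* A plain buffer becomes a memory area addressable by remote
         * devices once it is registered with the adapter. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* lkey identifies the region locally, rkey to remote peers. */
        printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }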

The InfiniBand Trade Association views the bus itself as a switch, because control information determines the route a given message follows in getting to its destination address. InfiniBand uses 128-bit addresses in the format of Internet Protocol Version 6 (IPv6), which enables an almost limitless amount of device expansion.
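
In practice those addresses surface to software as 128-bit global identifiers (GIDs), which share the IPv6 address format. A short sketch, under the same libibverbs assumptions as above, that prints the GID of an adapter's first port in IPv6 notation:

    /* Sketch: reading an InfiniBand port's GID, a 128-bit address
     * in IPv6 format. Same assumptions as the previous sketch. */
    #include <infiniband/verbs.h>
    #include <stdio.h>

    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0])
            return 1;
        struct ibv_context *ctx = ibv_open_device(devs[0]);

        union ibv_gid gid;
        if (ibv_query_gid(ctx, 1, 0, &gid) == 0) {  /* port 1, GID index 0 */
            /* Print the 16 raw bytes in the familiar IPv6 notation. */
            for (int i = 0; i < 16; i += 2)
                printf("%02x%02x%s", gid.raw[i], gid.raw[i + 1],
                       i < 14 ? ":" : "\n");
        }

        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }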

With InfiniBand, data is transmitted in packets that together form a communication called a message. A message can be a remote direct memory access (RDMA) read or write operation, a channel send or receive message, a reversible transaction-based operation, or a multicast transmission. Like the channel model many mainframe users are familiar with, all transmission begins or ends with a channel adapter. Each processor (your PC or a data center server, for example) has what is called a host channel adapter (HCA) and each peripheral device has a target channel adapter (TCA). These adapters can potentially exchange information that ensures security or operation at a given quality of service level.
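
To make the message types concrete, the sketch below posts an RDMA write work request through a host channel adapter via libibverbs. Everything here is illustrative: it assumes a queue pair qp that has already been connected to a peer (setup, typically done with the rdma_cm library, is omitted), a registered local region mr/buf as in the earlier sketch, and a remote_addr and rkey obtained from the peer out of band.

    /* Sketch: posting an RDMA write message through an HCA.
     * 'qp' must be an already-connected queue pair; 'remote_addr' and
     * 'rkey' describe the peer's registered memory. Connection setup
     * is omitted. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr, void *buf,
                        size_t len, uint64_t remote_addr, uint32_t rkey)
    {
        /* Scatter/gather entry describing the local source buffer. */
        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = (uint32_t)len,
            .lkey   = mr->lkey,
        };

        /* The work request: an RDMA write, a message that lands directly
         * in the peer's memory without a receive being posted there. */
        struct ibv_send_wr wr, *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.wr_id               = 1;
        wr.opcode              = IBV_WR_RDMA_WRITE;
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion */
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        return ibv_post_send(qp, &wr, &bad_wr);      /* 0 on success */
    }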

The InfiniBand specification was developed by merging two competing designs: Future I/O, developed by Compaq, IBM, and Hewlett-Packard, and Next Generation I/O, developed by Intel, Microsoft, and Sun Microsystems.
----------------------------------------------------
