
Alcatel-Lucent OmniSwitch
Hardware
- OS 9600 / OS 9700 / OS 9800: CMM & NIs, system architecture
- OS 6850
- OS 6200

Alcatel IP networking portfolio

Comprehensive end-to-end network solutions

[Portfolio diagram: Access LAN, Aggregation LAN, Core LAN, WAN/MAN and WLAN segments, covered by the OmniStack LS 6200, OmniSwitch 6850, OmniSwitch 7000, OmniSwitch 9000, OmniAccess 700, OmniAccess WLAN and the ESS/SR 7450/7750]

Consistency of network services

[Diagram: consistent services across the range: High Availability, Brick Operating System, Extensive Manageability and Enhanced Security, with the OmniVista 2500, VitalSuite, VitalQIP and OA SafeGuard applications]

OmniSwitch Range

OS9800: 18 slot chassis (16 NI slots)
- Back-plane architecture
- Fan trays (hot swappable)

OS9700: 10 slot chassis (8 NI slots)
- Back-plane architecture
- Fan trays (hot swappable)

Common to OS9700/OS9800:
- 2 CMMs (hot swappable): management, switching fabric
- 8/16 NIs (hot swappable)
- 3/4 PSUs (hot swappable): N+1 redundancy, 110/220V input (AC), 48V input (DC)

OS9600: 5 slot chassis (4 NI slots)
- Back-plane architecture
- Fan trays (hot swappable)
- 2 PSUs (hot swappable)
- 1 CMM (hot swappable)

[Chassis photos: OS9800, OS9700, OS9600, OS6850]

OmniSwitch 9600
5 Slot Chassis
- 4 slots for NIs
  - NIs are common across the OS9000 family
  - NIs are hot swappable & not slot dependent
- 1 slot for the OS9600-CMM mgt. module (control & switching fabric)

Backplane capacity: 960 Gbps
Switch fabric performance: 24/48 Gbps per slot; 192 Gbps per chassis
Maximum aggregated throughput: 35.7 Mpps per slot; 142.8 Mpps per chassis

Hardware specifications
- L2: 16K hosts, 4K VLANs
- L3: 8K hosts (IPv4), 4K interfaces, 12K LPM (IPv4)
- ACL: 2K network policies for L1/L2/L3/L4 classification
- QoS: 8 priorities, 2K policers, egress shaping

OmniSwitch 9700
10 Slot Chassis
- 8 slots for NIs
  - NIs are common across the OS9000 family
  - NIs are hot swappable & not slot dependent
- 2 slots for OS9700-CMM mgt. modules
  - Active/Standby redundancy for Control
  - Active/Active redundancy for Switching Fabric

Backplane capacity: 960 Gbps
Switch fabric performance: 24/48 Gbps per slot with 1/2 CMMs installed; 384 Gbps per chassis
Maximum aggregated throughput: 35.7 Mpps per slot; 285 Mpps per chassis

Hardware specifications
- L2: 16K hosts, 4K VLANs
- L3: 8K hosts (IPv4), 4K interfaces, 12K LPM (IPv4)
- ACL: 2K network policies for L1/L2/L3/L4 classification
- QoS: 8 priorities, 2K policers, egress shaping

OmniSwitch 9800
18 Slot Chassis
- 2 slots for OS9800-CMM mgt. modules
  - Active/Standby redundancy for Control
  - Active/Active redundancy for Switching Fabric
- 16 slots for NIs
  - NIs are common across the OS9000 family
  - NIs are hot swappable & not slot dependent

Backplane capacity: 1.92 Tbps
Switch fabric performance: 24/48 Gbps per slot with 1/2 CMMs installed; 768 Gbps per chassis
Maximum aggregated throughput: 35.7 Mpps per slot; 570 Mpps per chassis

Hardware specifications
- L2: 16K hosts, 4K VLANs
- L3: 8K hosts (IPv4), 4K interfaces, 12K LPM (IPv4)
- ACL: 2K network policies for L1/L2/L3/L4 classification
- QoS: 8 priorities, 2K policers, egress shaping

System Architecture
Fabric Load Sharing

Principle of operation for Fabric Load Sharing
- Intra-module traffic is processed & forwarded locally
- Inter-module traffic is forwarded through the Virtual Switching Fabric
- Each CMM provides a 24 Gbps connection to each module
- Full switching capacity is reached in a dual-CMM configuration


[Diagram: NI-1 … NI-8 connected to the Virtual Switching Fabric formed by CMM-A and CMM-B; 35.7 Mpps per slot, locally or through the fabric]

OS-9700 Backplane (1)
- High speed frame bus
- Data path: FBUS+ (24 Gbps)
- Active FBUS+ links

[Diagram: the forwarding engine of each NI (NI-1 … NI-8) has an active FBUS+ link to the CFM on CMM-A and another to the CFM on CMM-B]

Fabric Load Balancing

[Diagram: a standard forwarding engine (PHYs, CPU, RAM, PoE feed/socket) with FBUS+ links to the fabric on the primary CMM (CMM-A) and on the secondary CMM (CMM-B); hash-based selection picks the link, for the fabric as for a port trunk]

Hashing formula: IP SA + IP DA + TCP/UDP port numbers, or MAC SA + MAC DA
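A minimal sketch of this hash-based selection in Python, assuming a simple additive hash over the fields named above (the actual ASIC hash function is not specified here); the Packet fields and the fbus_link helper are hypothetical names.

from dataclasses import dataclass

@dataclass
class Packet:
    ip_sa: int = 0   # IPv4 source address
    ip_da: int = 0   # IPv4 destination address
    l4_src: int = 0  # TCP/UDP source port
    l4_dst: int = 0  # TCP/UDP destination port
    mac_sa: int = 0
    mac_da: int = 0
    is_ip: bool = True

def fbus_link(pkt: Packet, n_links: int = 2) -> int:
    # IP traffic hashes IP SA + IP DA + L4 ports; non-IP traffic hashes
    # MAC SA + MAC DA. Same flow -> same link -> packet order preserved.
    if pkt.is_ip:
        key = pkt.ip_sa + pkt.ip_da + pkt.l4_src + pkt.l4_dst
    else:
        key = pkt.mac_sa + pkt.mac_da
    return key % n_links  # 0 = fabric on CMM-A, 1 = fabric on CMM-B

print(fbus_link(Packet(ip_sa=0x0A000001, ip_da=0x0A000002, l4_src=1024, l4_dst=80)))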

System Architecture
Distributed Processing

Principle of operation for Distributed Processing
- Each module provides a high-performance CPU
- The CMM CPUs are responsible for management & overall coordination
- The module CPUs are responsible for most operations
- The management bus (BBus) is a dedicated Gigabit Ethernet bus
- Each module CPU has a direct connection to each CMM CPU
[Diagram: the CPUs of NI-1 … NI-8 connected to the CPUs of CMM-A and CMM-B over the dedicated Gigabit Ethernet management bus]

OS-9700 Backplane (2)
- High speed connections between modules
- Control path: BBUS+ (2 Gbps)
- Active and standby BBUS+ links
- Carries control traffic such as STP BPDUs, ARP requests, routing updates and management access

[Diagram: the CPU of each NI (NI-1 … NI-8) has an active BBUS+ link to the CPM on CMM-A and a standby BBUS+ link to the CPM on CMM-B]

OS-9000-CMM
CMM: 192 Gbps (119 Mpps)
- Based on a new fabric board providing 24 Gbps per slot (switch fabric: 8 x 10G)
- Based on a brand-new processor board:
  - CPU: Freescale MPC8540, 833 MHz
  - 8 MB boot Flash
  - RAM: 256 MB (DDR SDRAM)
  - Internal Flash: CF 128 MB
  - External Flash: USB
  - EMP: RJ45 (10/100/1000)
  - Console: RJ45
  - New AOS LEDs: OK1, OK2, Control, Fabric, Temp, Fan & PSU
- The processor board and the fabric board are logically independent but physically one board

[Diagram: processor board (CPU, RAM, Flash, USB, Ethernet switch) and fabric board (8 x 10G switch fabric) with connections to the redundant CMM and NI slots #1 to #8]

CMM Failover

Primary CPU failure
- A software crash or processor failure on the primary CMM triggers a failover to the standby CMM processor
- The failing CPU reboots and becomes the standby CPU
- NI/line card CPUs detect the failure through a backplane signal and switch to the new control plane
- Fabric load sharing continues => no data plane interruption

Fabric failure
- Detected by monitoring error rates & data traffic on the FBUS+ links
- Whenever a CFM failure is detected, that fabric card is disabled
- Fabric load sharing is disabled; the NIs use the remaining CFM only
- The second fabric card continues to operate normally

CMM Failover

Distributed architecture
- The ARP table & Layer 3 FDB are duplicated & synchronized on each NI and on the CMM

[Diagram: the ARP cache on the CMM kept in sync over the BBUS with the ARP caches on the NIs]

During fail-over
- The chassis informs the NIs: no more L3 (ARP and FDB) learning
- Normal forwarding continues based on the current knowledge
- The secondary CMM:
  - retrieves the ARP table from one NI
  - retrieves the L3 LPM and host tables from one NI and adds an "old" tag to each entry
  - adds/updates new entries with a "new" tag
  - flushes all old entries after a timer expires
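A minimal sketch of that tag-and-flush resynchronization; the Fib class and method names are hypothetical, and only the old/new tagging and the timed flush come from the slide.

class Fib:
    def __init__(self):
        self.entries = {}  # prefix -> (next_hop, tag)

    def resync_from_ni(self, ni_entries):
        # New primary CMM: import one NI's table, tagging everything "old".
        for prefix, next_hop in ni_entries.items():
            self.entries[prefix] = (next_hop, "old")

    def learn(self, prefix, next_hop):
        # Entries learned or re-confirmed after failover get the "new" tag.
        self.entries[prefix] = (next_hop, "new")

    def flush_old(self):
        # Called when the post-failover timer expires.
        self.entries = {p: e for p, e in self.entries.items() if e[1] == "new"}

fib = Fib()
fib.resync_from_ni({"10.0.0.0/8": "192.0.2.1", "10.1.0.0/16": "192.0.2.2"})
fib.learn("10.1.0.0/16", "192.0.2.2")  # re-learned entry keeps its forwarding state
fib.flush_old()                        # the stale 10.0.0.0/8 entry is removed
print(fib.entries)                     # {'10.1.0.0/16': ('192.0.2.2', 'new')}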

OS9000 GNIs
- OS9-GNI-C24: 24 ports 10/100/1000 using RJ45 connectors
- OS9-GNI-U24: 24 ports GigE using LC connectors (SFP)
- OS9-GNI-P24: 24-port 10/100/1000 (RJ45) w/ PoE; requires a dedicated power shelf
- OS9-GNI-C20L: 20-port 10/100 using RJ45 connectors (SW upgradeable to 10/100/1000, "Lite" concept) plus 2-port 100/1000 using SFP connectors
- OS9-GNI-C48T: 48 ports 10/100/1000Base-T/TX using 8 Mini-RJ21 connectors

Gigabit Ethernet density (SFP, RJ45 and Mini-RJ21 connectors):
- 9600 = 96 (24-port NIs) / 192 (48-port NIs)
- 9700 = 192 / 384
- 9800 = 384 / 768

OS9000 XNIs
- OS9-XNI-U2: 2 ports 10GigE using XFP transceivers
- OS9-XNI-U6: 6 ports 10GigE using XFP transceivers
  - Ports 1-3 => channel A (12 Gbps)
  - Ports 4-6 => channel B (12 Gbps)

10 Gigabit Ethernet density (XFP connectors):
- 9600 = 8, 9700 = 16, 9800 = 32 (with OS9-XNI-U2)
- 9600 = 24, 9700 = 48, 9800 = 96 (with OS9-XNI-U6)

OS9000 Forwarding Engine

[Block diagram: the standard forwarding engine pipeline (Parser feeding the Switching, Routing, Classification and Security Engines; Buffer Management and Traffic Management; Modification at egress) with local packet memory, serving groups of 4 x GigE or 1 x 10GigE ports; the NI connects to the switch fabric (8 x 10G) on the CMM fabric boards, which are logically independent from the processor board but physically one board]

All Fwd Engines share the same architecture (the standard Fwd Engine):
- In-chip memory tables
- Broadcom 56500 series "Firebolt"

[Diagram: the standard forwarding engine with its PHYs, CPU, RAM and PoE feed/socket, linked over FBUS+ to the fabrics on the primary and secondary CMMs]

Parser
- Parses the first 128 bytes of each packet
- Partial parsing on FBUS+ ports (FBUS+ header); full parsing (128 bytes) is only needed on the original ingress Ethernet ports
- The extracted information is used by the subsequent engines (switching, routing & classification) and by sFlow
- Extra information is available on the fabric interface: the FBUS+ header carries fields extracted by the ingress parser, such as the packet type and the MC/UC bit
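As a rough illustration of what a 128-byte parse yields, a sketch using standard Ethernet field offsets; the actual FBUS+ header layout is proprietary and not described here, so only generic fields are extracted.

import struct

def parse_first_128(frame: bytes) -> dict:
    hdr = frame[:128]  # the parser only examines the first 128 bytes
    mac_da = hdr[0:6]
    (ethertype,) = struct.unpack("!H", hdr[12:14])
    return {
        "mc_bit": bool(mac_da[0] & 0x01),  # MC/UC bit = group bit of the DA
        "ethertype": ethertype,
        "is_ip": ethertype == 0x0800,
    }

frame = bytes.fromhex("01005e000001" "aabbccddeeff" "0800") + bytes(114)
print(parse_first_128(frame))  # {'mc_bit': True, 'ethertype': 2048, 'is_ip': True}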



Security Engine
DoS attack detection: packets are dropped when they match any of the following conditions
- SIP = DIP for IPv4/IPv6 packets
- TCP packets with control flags = 0 and sequence number = 0
- TCP packets with the FIN, URG and PSH bits set and sequence number = 0
- TCP packets with the SYN and FIN bits set
- TCP source port = TCP destination port
- First TCP fragment does not carry the full TCP header (less than 20 bytes)
- TCP header with a fragment offset value of 1
- UDP source port = UDP destination port
- ICMP ping packets whose payload is larger than the programmed ICMP maximum size
- Fragmented ICMP packets
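A sketch of these drop conditions applied to a simplified packet model; the real checks run in the NI hardware, and ICMP_MAX stands in for the programmed ICMP maximum size.

from dataclasses import dataclass, field

ICMP_MAX = 512  # assumed programmed maximum ping payload size

@dataclass
class Pkt:
    sip: str = ""; dip: str = ""
    proto: str = ""                       # "tcp" / "udp" / "icmp"
    sport: int = 0; dport: int = 0
    flags: set = field(default_factory=set)
    seq: int = 0
    frag_offset: int = 0; is_fragment: bool = False
    tcp_hdr_len: int = 20; payload_len: int = 0

def is_dos(p: Pkt) -> bool:
    if p.sip == p.dip:                                      return True  # LAND
    if p.proto == "tcp":
        if not p.flags and p.seq == 0:                      return True
        if {"FIN", "URG", "PSH"} <= p.flags and p.seq == 0: return True  # Xmas
        if {"SYN", "FIN"} <= p.flags:                       return True
        if p.sport == p.dport:                              return True
        if p.is_fragment and p.frag_offset == 0 and p.tcp_hdr_len < 20:
            return True                                     # tiny first fragment
        if p.frag_offset == 1:                              return True
    if p.proto == "udp" and p.sport == p.dport:             return True
    if p.proto == "icmp":
        if p.payload_len > ICMP_MAX:                        return True
        if p.is_fragment:                                   return True
    return False

print(is_dos(Pkt(sip="10.0.0.1", dip="10.0.0.1", proto="tcp")))  # True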


Switching Engine
Provides the switching information:
- VLAN type select
- VLAN look-up
- L2 unicast look-up (VLAN+MAC)
- L2 multicast look-up (non-IP multicast)

Standard Fwd Engine key figures:
- 4K VLANs
- 256 Spanning Trees
- 16K MACs
- 1K mobile/authenticated MACs
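A toy model of the L2 unicast look-up: the FDB is keyed on (VLAN, MAC), so the same MAC can be learned independently per VLAN; flooding on a miss is standard bridge behaviour, and the 16K cap echoes the key figures above.

MAX_MACS = 16 * 1024

fdb = {}  # (vlan, mac) -> egress port

def learn(vlan, mac, port):
    # Refuse new entries once the table is full; refreshing is always allowed.
    if len(fdb) < MAX_MACS or (vlan, mac) in fdb:
        fdb[(vlan, mac)] = port

def forward(vlan, dst_mac):
    # Return the egress port, or None to flood within the VLAN.
    return fdb.get((vlan, dst_mac))

learn(10, "00:11:22:33:44:55", 3)
print(forward(10, "00:11:22:33:44:55"))  # 3
print(forward(20, "00:11:22:33:44:55"))  # None -> flood in VLAN 20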

Routing Engine
Provides the routing information (IPv4/IPv6):
- L3 unicast look-up
- L3 multicast look-up
- LPM: Longest Prefix Match, wire rate from the first packet

Standard Fwd Engine key figures:
- 12K LPM / 8K hosts
  - 1 entry per IPv4 entry (LPM/host)
  - 2 entries per IPv6 entry (LPM/host)
  - with a maximum of 8K next hops
- 4K interfaces
- 128 tunnels
- IPX is handled in software
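A minimal longest-prefix-match sketch: the hardware LPM table is a specialized structure, but a route table answered by the longest matching prefix reproduces the semantics.

import ipaddress

routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "next-hop A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop B",
    ipaddress.ip_network("0.0.0.0/0"):   "default",
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    # Among all covering prefixes, the longest one wins.
    best = max((n for n in routes if addr in n), key=lambda n: n.prefixlen)
    return routes[best]

print(lookup("10.1.2.3"))   # next-hop B (/16 beats /8)
print(lookup("192.0.2.1"))  # default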


Classification Engine and Buffer Management
Classification Engine
- Known as the Content Aware Processor or Fast Filtering Processor (FFP)
- Supports Access Control Lists (filtering), QoS (prioritization / priority mapping), policers and counters
- Provides the classification according to the user-defined rules
Buffer Management
- Manages a total of 16K 128-byte buffers
- Tracks the number of buffers in use
- Queuing and scheduling control
- Controls memory thresholds
- Congestion control: monitoring of ingress/egress buffers
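As an illustration of what one of those policers does, a single-rate token-bucket sketch; the rate and burst values are illustrative, not the switch's configuration.

class Policer:
    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8.0   # token refill rate in bytes/second
        self.burst = burst_bytes     # bucket depth
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def police(self, now: float, pkt_bytes: int) -> str:
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return "green"  # conforming: forward
        return "red"        # non-conforming: drop

p = Policer(rate_bps=1_000_000, burst_bytes=1500)
print(p.police(0.0, 1500))  # green (bucket starts full)
print(p.police(0.0, 1500))  # red (no tokens left at t=0)
print(p.police(1.0, 1500))  # green again after one second of refill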



Traffic Management and Modification
Traffic Management
- Queuing: 8 CoS queues per egress port; pre-configured queue-length thresholds
- Scheduling options: Strict Priority (default), Weighted Round Robin, Deficit Round Robin (see the sketch below)
- Shaping (per port or per flow): packet rate control
Modification
- Packets are modified for: VLAN tag insertion/removal, L3 routed-packet modification, tunneling
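A sketch of Deficit Round Robin, one of the scheduling options listed above: each CoS queue receives a quantum of bytes per round and may transmit while it has deficit, with leftover deficit carried to the next round. Queue contents and quanta are illustrative.

from collections import deque

queues = {0: deque([300, 200]), 1: deque([1500, 100])}  # packet sizes in bytes
quantum = {0: 500, 1: 1000}                             # bytes added per round
deficit = {q: 0 for q in queues}

def drr_round():
    sent = []
    for q, pkts in queues.items():
        if not pkts:
            continue
        deficit[q] += quantum[q]
        while pkts and pkts[0] <= deficit[q]:
            size = pkts.popleft()
            deficit[q] -= size
            sent.append((q, size))
        if not pkts:
            deficit[q] = 0  # an emptied queue forfeits its leftover deficit
    return sent

print(drr_round())  # [(0, 300), (0, 200)] - queue 1's 1500 B packet must wait
print(drr_round())  # [(1, 1500), (1, 100)] - its deficit reached 2000 B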


First Packet wire rate
- OSPF, RIP and BGP4 populate the RIB
- The software FIB on each NI is updated via IPC from the CMM
- Routing/forwarding is wire rate from the first packet because the LPM hardware table is updated before the packet is received

[Diagram: the RIB on the CMM pushes updates over IPC to the software FIB and the LPM hardware table on each NI]

OmniSwitch 6850
- Triple-speed 10/100/1000 Gigabit interfaces
- System status and slot indicator LEDs
- 10Gig uplinks
- 20 or 44 unshared 10/100Base-T ports (upgradeable to 1000)
- 4 shared combo 10/100/1000Base-T ports
- 4 combo SFP slots for 1000Base-X connections
- Console port (RJ-45); USB port (USB 2.0)
- Maximum aggregated throughput: 101.2 Mpps
- 24 and 48 port models, all stackable
- Delivers Power over Ethernet (PoE)
- Replaceable and redundant power supplies (AC, DC)

OmniSwitch 6850
2 built-in 10 Gig stacking ports

10 Gig optics (XFP):
- 10GBase-SR: MMF, 300 m
- 10GBase-LR: SMF, 10 km
- 10GBase-ER: SMF, 40 km
- 10GBase-ZR: SMF, 80 km

Gigabit/Fast Ethernet optics (SFP):
- 100Base-FX, 100Base-LX10, 100Base-BX10
- 1000Base-T SFP
- 1000Base-SX, 1000Base-LX, 1000Base-LH
- Dual rate optics

OS6850-Lite
- 20/44-port 10/100 (RJ45), SW upgradeable to 10/100/1000
- 4-port combo: 10/100/1000 (RJ45) or 1000 (SFP)
- Supported models: OS6850-24L / OS6850-48L, OS6850-P24L / OS6850-P48L

OS6850-U24X
- 22-port 100/1000 (SFP)
- 2-port combo: 10/100/1000 (RJ45) or 100/1000 (SFP)

OmniSwitch 6850 (OS6850-48)
- Stack LED
- RJ-45 console
- 4 miniGBIC / copper combo ports

For the combo ports (SFP):
- TX: 100Base-TX transceiver (RJ45)
- T: 1000Base-T MiniGBIC transceiver
- SX: 1000Base-SX MM (LC), MMF up to 550 m
- LX: 1000Base-LX SM (LC), SMF up to 10 km
- LH-70: 1000Base-LH (long haul), SMF up to 70 km
- CWDM Gig SFP

For the 10 Gig ports (XFP):
- 10G-XFP-LR: 10 Gigabit Ethernet optical transceiver, SMF up to 10 km
- 10G-XFP-SR: 10 Gigabit Ethernet optical transceiver, MMF up to 300 m

OmniSwitch 6850 Stacking
- All models in the 6850 family are stackable (including the OS6850-U24X)
- 2 dedicated 10 Gigabit stacking links on each model
- Up to 8 chassis in a stack: 384 Gigabit ports, 16 10 Gig ports
- PoE and non-PoE models can be mixed
- Distributed and resilient management
- 40G full-duplex stack loop
- Smart Continuous Switching
- Image / config rollback
- Virtual chassis, single IP for management
- Hot swap everything
- Across-stack features: 802.3ad, 802.1w, OSPF ECMP, VRRP
- Primary, secondary, idle and pass-through elements in the stack
- Each module in the stack is capable of acting as primary
- Stack module IDs are set using the CLI and displayed on the panel:

sw1> more boot.slot.cfg
boot slot 1

OmniSwitch 6850 Stacking

Primary Management Module selection
- Chassis MAC address (lowest)
- Saved slot number (boot.slot.cfg)
- Chassis uptime

Secondary Management Module selection
- The switch connected to the primary's stacking port A
- Second lowest slot value

Idle switches behave like NI cards in a chassis

Pass-through mode
- Entered on a duplicate slot number, or forced by >stack clear slot
- No disruption of the stack

[Diagram: a stack of six elements: primary, secondary and four idle]

Inserting switches into an existing stack
- Avoid duplicate saved slot numbers
- Never attempt to operate more than eight switches in a single stack
- Make sure all switches are running the same software version
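A sketch composing the three primary-selection criteria listed above into one comparison key; the exact tie-break order is firmware-defined, so the order used here (saved slot, then uptime, then lowest MAC) is an assumption.

from dataclasses import dataclass

@dataclass
class Element:
    name: str
    saved_slot: int  # from boot.slot.cfg
    uptime_s: int
    mac: str

def elect_primary(stack):
    # Lower saved slot wins, then longer uptime, then lowest chassis MAC.
    return min(stack, key=lambda e: (e.saved_slot, -e.uptime_s, e.mac))

stack = [Element("sw1", 1, 5000, "00:e0:b1:00:00:02"),
         Element("sw2", 2, 9000, "00:e0:b1:00:00:01")]
print(elect_primary(stack).name)  # sw1 (lowest saved slot number)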

OmniSwitch 6850 Stacking methods
- A stack of eight switches in a crossed pattern: stacking port A to stacking port B
- A stack of eight switches in a straight pattern: stacking port A to stacking port A, stacking port B to stacking port B
- In both patterns a redundant stacking cable connection exists between the top and bottom switches; it is required for effective redundancy across the stack

OmniStack LS 6200
- Stackable 24/48+4 1U fixed-configuration all-in-one designs
- 2 Gig combo ports (copper, miniGBIC); 100Base-FX SFPs supported
- 2 10/100/1000 RJ-45 copper stacking ports; uses standard Ethernet cabling/connectors for dedicated stacking
- PoE versions
- L2 switching with advanced L3/L4 services

OmniStack 6200 Overview
- Stackable edge switch, available in 3 flavors:
  - 12, 24 or 48 ports 10/100
  - 12, 24 or 48 ports 10/100 PoE
  - 24 ports 100BaseX with external SFPs
- 2 Gig combo ports (copper, miniGBIC)
  - 100Base-FX SFPs supported: 100Base-FX, 100Base-LX, 100Base-BX
  - 1000Base-SX, 1000Base-LX, 1000Base-LH
- 2 10/100/1000 RJ-45 copper stacking ports
  - Uses standard Ethernet cabling/connectors for dedicated stacking
  - In standalone mode the Gigabit stacking ports are available as 2 additional network Gigabit ports (copper)
- L2 switching with advanced L3/L4 services

Models:
- OS-LS-6212 / OS-LS-6212P: 12 x 10/100 (PoE on the P model), 2 x 10/100/1000, 2 x combo
- OS-LS-6224 / OS-LS-6224P: 24 x 10/100 (PoE on the P model), 2 x 10/100/1000, 2 x combo
- OS-LS-6248 / OS-LS-6248P: 48 x 10/100 (PoE on the P model), 2 x 10/100/1000, 2 x combo
- OS-LS-6224U: 24 x 100BaseX SFP optics (100Base-FX, 100Base-BX)

OmniStack LS 6200

Availability
- Resilient stacking, across-stack features & management
- Backup power supply
- DC power
- 802.1w, 802.1s
- 802.3ad
- QoS: flow based, 4 egress queues (strict, WRR), L3 stamping & mapping, rate limiting per flow/port

Security
- RADIUS & TACACS+
- MAC, protocol & IP subnet VLANs
- Port mapping (private VLAN)
- MAC address lockdown
- 802.1x
- Extended access control lists
- SSL, SSH, SNMPv3
- Multicast IPTV VLAN registration
- VLAN stacking
- DHCP snooping option 82

Manageability
- Industry-standard CLI
- Dual image/config
- WebView element manager
- GVRP, AMAP
- Virtual cable tester
