
Data Centre Design For The Small to Medium Enterprise

Nic Rouhotas, Technology Solutions Architect

BRKDCT-2218
Agenda
Design Evolution
Optics & Cabling considerations
STP, vPC and Spine/Leaf
Scaling Brownfield and Greenfield Data Center Networks
Building Blocks, Availability Zones and Segmentation
Data Centre Interconnect
Programmability, Automation and Orchestration
Summary
LOTS of Related Sessions

Session ID | Title | Presenter | Date / Time
BRKDCT-2375 | End-to-End Application-Centric Data Center | Carlos Pereira | Check on CiscoLive App
BRKDCT-2334 | Data Centre Deployments and Best Practices with NX-OS | Brenden Buresh | Check on CiscoLive App
BRKDCT-3378 | Building Data Center Networks with VXLAN BGP-EVPN | Lukas Krattiger | Check on CiscoLive App
BRKDCT-2134 | Data Centre Interconnect with OTV and Other Solutions | Dave Jansen | Check on CiscoLive App
BRKDCT-2615 | How to Achieve True Active-Active Data Centre Infrastructures | John Schaper | Check on CiscoLive App
BRKACI-2001 | Integration and Interoperation of Existing Nexus Networks into an ACI Architecture | Azeem Suleman | Check on CiscoLive App
BRKDCN-2025 | Maximizing Network Programmability and Automation with Open NX-OS | Nicolas Delecroix | Check on CiscoLive App
BRKACI-2004 | How to Setup an ACI Fabric from Scratch | Camillo Rossi | Check on CiscoLive App
BRKDCT-2218 2017 Cisco and/or its affiliates. All rights reserved. Cisco Public 4
Design Evolution
Small and Medium Business

What is a Small to Medium Business in the Australian market?

According to ASIC, a small business entity has annual revenue of less than $25m and fewer than 50 employees.*
A medium business has 200 or fewer employees and annual revenue of around $150m.

Australian SMBs make up 97% of all Australian businesses, produce one third of total GDP, and employ 4.7 million people.*
Many different aspects to today's Data Centers

O/S, Server Virtualisation & Containers:
Operating Systems; Hypervisors; Virtual Machine Managers; Containers (e.g. Linux, Docker); Microservices

Automation/Orchestration/Management:
Device-by-Device vs System Level; Manual vs Scripted vs Programmable; SDN: none, s/w add-on, built-in; Management: CLI, GUI, API; Cloud, Service, Infrastructure Orchestration; Network Centric > Policy Centric

Compute Type & Form Factor:
x86 rack & blade servers; UCS Managed (blade and/or rack); Non-x86/legacy (P-Series, mainframe etc.)

Network/Fabric Build:
Type (Traditional; CLOS); Capabilities and ease of integration; Scale & efficiency; L2 vs L3 based; Central vs Distributed; Flood & Learn vs Control Plane; Oversubscription 25:1 to 1:1; Speed (10M-100G+)

Compute Connectivity:
Varying speeds 10M/100M, 1G/10G/25G; NIC/HBA/CNA interfaces per server; NIC teaming models; Cabling to ToR, MoR, EoR

Facility & Cabling:
Power/cooling (per rack/row/hall); Cabling style: End of Row (EoR), Middle of Row (MoR), Top of Rack (ToR); Intra-rack: copper, direct attach; Inter-rack: AOC, MMF, SMF; WAN, Campus, OOB; Brownfield/legacy DC network; DC Interconnect (DCI); Public Cloud

Network Services/Peripheral:
Firewalls; Load balancers; Distributed, HyperConverged

Storage & Protocols:
NAS; SAN; IP (file, block), Fibre Channel, FCoE
Why Data Center Fabrics?

Definition: an ensemble of switches that behave and get configured like a single giant switch.

Flexibility: allows workload mobility, VLANs everywhere
Robustness: reduced failure domains, L2/L3 boundary on leafs, anycast gateway
(Virtual) network services move out to border leafs; policy-based service chaining
Performance: full cross-sectional bandwidth (any-to-any) with ECMP; avoid oversubscription
Latency: deterministic at scale, a single hop away
Scalability: add end nodes while maintaining oversubscription
Cost: fixed switches vs modular switches

(Diagram: north-south traffic between the Data Center fabric and the Internet, public cloud, offsite DC and mobile clients; east-west traffic among storage (FC, FCoE, iSCSI/NAS), services and server/compute; orchestration/monitoring via API.)
Designing Small to Medium Sized Data Centers

Defining Small to Medium:
Require dedicated DC switches, moving away from a collapsed core with campus
Mostly virtualised, with some bare-metal servers

Scalability:
Size for current needs; reuse components in larger designs

Design Options:
Feature choice + priority = tradeoffs

Where the industry is going:
Programmability, Automation (SDN buzz)
Virtual Network Services

(Diagram: client access via WAN/DCI and Campus; L3/L2 boundary at the DC switches; FC, FCoE and iSCSI/NAS storage.)
Network Topologies

Ring Star Full Mesh Hub

Tree N-Tiered Spine Leaf

Examples of DC Networking Topologies

Switching Architecture Changes
Shifting of Internal Architecture

(Diagram: older linecard designs with shared DBUS/RBUS/EOBC buses and a central crossbar give way to switch-on-chip (SoC) based designs.)

Design shifts resulting from increasing gate density and bandwidth:
10/100M > 100M/1G > 1G/10G > 10G/100G
Fabric Evolution
Shift from scale-up to scale-out

STP > vPC > FabricPath > VXLAN
(Traditional L2 designs > overlay-based L3 fabrics)
Which network model would YOU choose?
Example: 350 VMs; 25 bare-metal servers @ 1/10GE (HA); IP-based storage

Option 1 - Collapsed/Single Tier: a pair of switches, each with 96 front-facing L2 ports, L3 at the top.
Option 2 - Spine-Leaf: L3 fabric with an overlay (e.g. VXLAN, ACI).

Typical 1RU leaf: 48 downlink ports, 6-12 uplink/breakout ports.

Considerations:
Scalability?
Fixed vs modular switches?
Oversubscription?
Cost (today vs tomorrow)?
Management and automation?
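The oversubscription question above is simple arithmetic: total downlink bandwidth divided by total uplink bandwidth. A minimal Python sketch, using the illustrative 48-downlink / 6-12-uplink leaf from this slide:

```python
def oversubscription(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    """Downlink-to-uplink bandwidth ratio for an access/leaf switch."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# 48 x 10G downlinks with 6 x 40G uplinks -> 480G / 240G = 2:1
print(oversubscription(48, 10, 6, 40))   # 2.0
# Using all 12 uplinks at 40G makes the same switch non-oversubscribed (1:1)
print(oversubscription(48, 10, 12, 40))  # 1.0
```

A 2:1 or 3:1 ratio is a common starting point; the right target depends on the traffic profile of the attached hosts.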
Optics & Cabling Considerations
Distance and speed depend on the cable medium and type.

Cabling the Network Topology:
Twinax = Direct Attach Copper (CX1)
AOC = Active Optical Cable
RJ-45 = Cat5e, Cat6a, Cat7/7a, Cat8
MMF = Multimode Fiber
SMF = Single-mode Fiber

Traditional physical layer connectivity (Core/Distribution/Access) vs Spine-Leaf, with example type/distance/speed:

Fabric links (10/40/100G):
Twinax; AOC (10G <15m; 40G <15m; 100G <30m)
MMF (10G <400m; 40G <150m; 100G <100m)
SMF (40G LR4 <10km; 40G ER4 <40km; 100G LR4 <10km; 100G ER4-L <25km)

Host links (1/10/25G):
Twinax copper; AOC (10/40G <15m)
Copper RJ-45 (10G <100m; 40G <30m)
MMF (<400m)
Host to Switch Cabling Style

Top of Rack (ToR): intra-rack interconnects between servers and switches
Middle of Row (MoR): inter-rack interconnects
End of Row (EoR): inter-rack interconnects

Not all options listed:
Twinax Direct Attach Copper (passive CX1: 10G 1/3/5m; 40G 1/3/5m; 100G 1-3m) (active: 10G 7-10m; 40G 7-15m) - Copper
Active Optical Cable (AOC) (10G 7/10m; 40G 1-15m; 100G 1-30m) - Fiber
Copper Base-T/RJ-45 connector (10G Cat6A <100m; 40G Cat7/7A 30-60m; 100G Cat8 <30m) - Copper
Fabric Extender Transceiver (FET)* (FET-10G; FET-40G <150m with MPO-12) *Requires N2K and a supporting parent switch - Fiber
MMF (10G BiDi <150m; 40G SR4 <300m; 100G SR10/SR4 <100m) - Fiber
SMF (10G LR; 10G ER <40km; 40G LR <2km; 100G LR4 <10km; 100G ER4 <40km) - Fiber

Pictures from jimenez_3bq_01_0711.pdf, 802.3bq Ethernet Alliance
ToR Access/Leaf Switch Model Selection
(Example: Nexus 9300 series switch with 48 downlink ports and 6-12 uplink/breakout ports)

UPLINKS
SFP+ models: 10G
QSFP+ models: 10G (breakout, QSA), 40G
QSFP28 models: 10/25/40/50/100G combinations

DOWNLINKS
RJ-45 models (combo examples): 10M/100M/1000M*, 1G/10G, 100M/1G/10G
SFP+ models (combo examples): 10G, 1/10G, 10/25G, 1/10/25G

* 10/100M not always available on newer 1/10/25G models
Multi-Medium at the Row Level

Leaf pair with SFP+ downlinks: pluggable optics (e.g. Twinax DAC, AOC, MMF)
Leaf pair with RJ-45 downlinks: 1/10GBase-T (e.g. Cat 6A RJ-45)
Direct Attached Cabling - What's Better?
Twinax Direct Attach Cabling (passive or active) vs AOC

AOC (Active Optical Cables):
AOC cables are thinner than Twinax copper cables (roughly 3mm vs 10mm diameter for QSFP 40G)
Enable improved airflow vs Twinax copper
Primarily used for ToR-to-server or ToR-to-leaf connectivity
No electromagnetic interference with AOC
Reach up to 30m (copper cables have reach limitations at increasing speeds)
Lighter and more flexible, which significantly simplifies cable management
Network fiber migration with increasing data rates

Speed | Multimode Fiber | Single-mode Fiber
1G    | 1km             | 10km
10G   | 400m            | 10km
40G   | 150m            | 2km, 10km
100G  | 100m            | 500m, 2km, 10km

Link distances shorten at higher data rates on OM4 multimode fiber.
Short-reach optics for single-mode fiber are emerging at higher data rates.
Multi-Mode Fiber (MMF): Fiber Core Implications
10G vs 40G vs 100G (IEEE)

10G optical link (10GBase-SR): duplex LC connector (SFP+; SC connector on X2/XENPAK) - 2 fiber cores, 1 Tx/Rx pair

40G optical link (40GBase-SR4): MPO-12 connectors - 8 fiber cores used (4 Tx/Rx pairs); MPO-12 leaves 4 fiber cores unused

100G optical link (100GBase-SR10): MPO-24 connectors - 20 fiber cores used (10 Tx/Rx pairs); MPO-24 leaves 4 fiber cores unused
Parallel vs BiDi Duplex
QSFP-40G-SR4 vs QSFP-40G-SR-BD (BiDi) Optics

QSFP-40G-SR4: 4 x 10G over a 12-fiber infrastructure; 12-fiber ribbon cable with MPO connectors at both ends. Higher cost to upgrade from 10G to 40G due to the 12-fiber infrastructure per interface pair.

QSFP-40G-SR-BD: 2 x 20G over duplex multimode fiber with duplex LC connectors at both ends. Using duplex multimode fiber lowers the cost of upgrading from 10G to 40G by leveraging the existing 10G multimode infrastructure. Reach: 150m (OM4); 100m (OM3).
Transceiver form factors (for your reference)
10G, 40G, 100G

(Image: physical dimensions of 10G/40G transceivers vs 100G transceivers.)
Optics Pluggable Interfaces (for your reference)
SFP & QSFP, QSFP28

SLIC = Subscriber Line Interface Converter (QSFP to SFP; QSFP to QSFP)
BiDi = Bidirectional optics (<150m)
QSA = QSFP to SFP Adapter
** = Roadmap

SFP/SFP+ (H x W x D): 8.5 x 13.4 x 56.5mm
QSFP+/QSFP28 (H x W x D): 13.5 x 18.4 x 72.4mm

Pluggable options (SFP/SFP+ cage): 1G SFP; 10G SFP+; 25G SFP28
Pluggable options (QSFP cage): 1G SFP (via QSA); 10G SFP+ (via QSA); 25G SFP+ (via QSFP to SFP+ SLIC); 40G QSFP+, BiDi; 50G (via SLIC)**; 100G QSFP28, BiDi**

Host: 1/10/25G. Network: 10/40/50/100G.

Refer to the Compatibility Matrix:
http://www.cisco.com/c/en/us/products/collateral/interfaces-modules/transceiver-modules/datasheet-c78-736282.html
http://www.cisco.com/c/en/us/td/docs/interfaces_modules/transceiver_modules/compatibility/matrix/10GE_Tx_Matrix.html
http://www.cisco.com/c/en/us/td/docs/interfaces_modules/transceiver_modules/compatibility/matrix/40GE_Tx_Matrix.html
http://www.cisco.com/c/en/us/td/docs/interfaces_modules/transceiver_modules/compatibility/matrix/100GE_Tx_Matrix.html
Breakout Support
NOTE: Breakout support is platform hardware and software dependent.

QSFP to SFP breakout for high-density SFP use:

"Cisco NX-OS supports breakout interfaces. The breakout command works at the module level and splits the 40G interface of a module into four 10G interfaces. The module is reloaded and the configuration for the interface is removed when the command is executed." - snippet from the NX-OS configuration guide

(Diagram: a 40G-SR4/CSR4 QSFP+ splits into four 10G-SR SFP+ via an MPO-LC breakout cable; a 4x10G-LR QSFP+ splits into four 10G-LR SFP+ with LC-duplex ends.)

Cisco QSFP to SFP breakout examples:
Breakout 40G QSFP to 4x10G SFP+ (LR, active copper, or AOC)
Breakout 100G QSFP to 10x10G SFP+ (LR or ER-L)
Breakout 100G QSFP to 4x25G (passive copper)
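As a concrete illustration of the configuration-guide snippet above, splitting a 40G port into four 10G interfaces on a Nexus 9000 looks roughly like this (module and port numbers are hypothetical; exact syntax varies by platform and NX-OS release):

```
switch(config)# interface breakout module 1 port 49 map 10g-4x
! After the module reloads, the port appears as four member interfaces:
switch(config)# interface Ethernet1/49/1-4
switch(config-if-range)# description 10G breakout members
```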
Impact of Link Speed
Three non-oversubscribed topologies, each with 20x10Gbps downlinks:
20x10Gbps uplinks; 5x40Gbps uplinks; 2x100Gbps uplinks

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6627738
Impact of Link Speed - Intuition
Higher-speed links improve ECMP efficiency.

With 11 x 10Gbps flows (55% load):
20x10G uplinks: probability of 100% throughput is about 3%
5x40G uplinks: probability of 100% throughput is about 75%
2x100G uplinks: probability of 100% throughput is about 99%
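The probabilities above can be checked with a small Monte Carlo sketch: place 11 x 10G flows on uplinks via random (ECMP-style) hashing and count the trials in which no uplink ends up carrying more traffic than its line rate. Function and parameter names are mine, not from the session:

```python
import random
from collections import Counter

def p_full_throughput(n_links, link_gbps, n_flows=11, flow_gbps=10, trials=100_000):
    """Probability that random ECMP placement leaves no uplink oversubscribed."""
    per_link = link_gbps // flow_gbps  # flows one uplink can carry at line rate
    ok = 0
    for _ in range(trials):
        # Hash each flow to a random uplink and count flows per uplink
        counts = Counter(random.randrange(n_links) for _ in range(n_flows))
        if max(counts.values()) <= per_link:
            ok += 1
    return ok / trials

for links, speed in [(20, 10), (5, 40), (2, 100)]:
    print(f"{links}x{speed}G uplinks: P(100% throughput) ~ {p_full_throughput(links, speed):.0%}")
```

With these parameters the estimates land near the slide's 3%, 75% and 99% figures: fewer, faster links leave much more headroom per link for unlucky hash placements.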
Design Considerations
Increased bandwidth utilization due to 40G/100G speedup

8x10G uplinks over 8x10G downlinks: expected max effective throughput = 65.6%
2x40/100G uplinks over 8x10G downlinks: expected max effective throughput = 86.33%

Uplink speedups avoid hashing collisions, providing effective utilization of the available capacity.
A speedup to 40G on uplinks for 10G-attached servers results in flow completion times (FCT) that are 12-40% lower than those of a 10G fabric.
Data Center Cabling Architecture Considerations (for your reference)

Optics will drive the physical architecture.
Employ a main cross-connect at an MDA for best management, scalability and growth.
Plan for future device optics density and reach; fiber density, parallel (SM/MM); <300m (MM).
Use low-loss connectors (<0.35dB/fiber) to enable the MDA.
Allocate rack space for orderly fiber-count growth and network device interface scale.
Plan for future high-density 10G/25G server access to be addressed by 25/40/100GE breakout cables.
Switch architecture type for access: modular or ToR.
Choice of SM or MM, duplex or parallel: the adoption time frame will determine the choice.

MDA: Main Distribution Area
STP, vPC and Spine/Leaf
History Lesson: Spanning Tree

Spanning Tree was introduced around 1985 to prevent loops.

32 years ago, we also saw:
Windows 1.0
DNS come out of academia
The first Nintendo Entertainment System
People searching for the Turbo button on i286 PCs

Since then, most DC designs have been built to work around STP.
Virtual Port Channel (vPC)

vPC was invented to overcome STP limitations (port channels became an IEEE standard in 2000 with 802.3ad).
Not perfect, but a good workaround.
STP is still there on every link.
Human error, misconfiguration, or a bug can still cause issues.
Virtual Port Channel (vPC) Fabric

vPC northbound & southbound:
More efficient than native STP, but STP is still running.
Dual-sided (back-to-back) vPC between two vPC domains: another good workaround - a mini-fabric.
Configuration can become complex as switch counts grow.
vPC makes two switches look like one... but what about 4 switches?
Why Spine-Leaf Design? Flexibility and Efficiency

Need more host ports? Add a leaf.
Need even more host ports? Add another leaf.
To speed up flow completion times, add more backplane: spread the load across more spines, lowering per-spine utilization.

* FCT = Flow Completion Time; lower FCT = faster applications.

Example scale-out with 40G fabric ports and 10G host ports:
96 ports: 2x48 10G (960 Gbps total)
144 ports: 3x48 10G (1440 Gbps total)
192 ports: 4x48 10G (1920 Gbps total)
Multi-path Fabric Based Designs - FabricPath

Natural migration from vPC
MAC-in-MAC encapsulation
Easy to turn on (Nexus 5/6/7K)
No STP within the fabric; BPDUs don't traverse the fabric
Distributed L3 gateway at the edge; "VLAN anywhere" notion
TRILL = standards-based, limited capabilities
FabricPath = Cisco-proprietary features
L3-Based Fabrics

Every link forwarding
L3 routing: fast convergence
VXLAN overlay (MAC in UDP)
Flood-and-learn vs BGP-EVPN control plane
STP might still exist at the edges, but not within the fabric
vPC still needed at the edge
Spine/Leaf: flexible and efficient design, with consistent hop count & latency
Scaling Brownfield and Greenfield Data Center Networks

Scaling a vPC-based DC design

Start: a vPC pair as the access layer (L3/L2 boundary at the pair) with hosts attached, VLANs 100-150.
Scaling a vPC-based DC design

Step 2: add a consolidated core/aggregation layer above two access-layer vPC pairs (VLANs 100-150 and VLANs 151-200).
Scaling a vPC-based DC design

Step 3: the core/aggregation layer can be distributed or consolidated; the L3/L2 boundary moves to the core/agg layer, with the access-layer vPC pairs below (VLANs 100-150 and VLANs 151-200).
Integrating ACI with an existing network

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/migration_guides/migrating_existing_networks_to_aci.html

Integrating ACI with an existing network
ACI Pod use cases: new DC, row upgrade, new application

The existing distributed or consolidated core/agg layer connects via dual-sided vPC to the border leafs of an ACI fabric (VXLAN based). Existing access layers keep VLANs 100-150 and 151-200; the ACI pod serves VLANs 201-250.
Integrating ACI with an existing network
ACI Pod use cases: new DC, row upgrade, new application

Over time the ACI spine layer can absorb the core/agg role: the remaining access layer (VLANs 100-150) connects through ACI leafs that also act as border leafs.
Building Blocks, Availability Zones and Segmentation

Building Block Design

The WAN zone and Campus zone connect to the DC site's building block zone.
Layer 3 routing for segmentation; the demarcation is L3 routing only.
Border leaf/transit switches connect the Data Center building block to the campus core or WAN.
Connect OOB management into the campus core or WAN.
Segmentation Granularity

From coarse to fine levels of segmentation/isolation/visibility:
Basic DC network segmentation (e.g. DMZ, POD, shared services)
Segment by application lifecycle (e.g. DEV, TEST, PROD)
Network-centric segmentation (e.g. VLAN 1, VXLAN 2, VLAN 3)
Per application-tier / service-level segmentation (e.g. WEB, APP, DB)
Intra-EPG micro-segmentation
Container security micro-segmentation
Network Centric Deployment
Leverage known networking constructs: VLANs, IP addresses/subnets, flood domains etc.

Existing network (Nexus, Catalyst, etc.): VLAN 100 = 10.1.1.0/24; VLAN 200 = 20.1.1.0/24

Mapped into a VXLAN EVPN fabric:
VLAN 100 > L2 VNI 100100 (MAC_A, IP_A: 10.1.1.2)
VLAN 200 > L2 VNI 100200 (MAC_B, IP_B: 20.1.1.2)
Both within VRF VNI 300100

Mapped into an ACI fabric:
Tenant > private network (VRF)
BD Blue (10.1.1.0/24) with EPG Blue-100; BD Red (20.1.1.0/24) with EPG Red-200
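On the VXLAN EVPN side, the VLAN-to-VNI mapping above is configured per leaf in NX-OS roughly as follows. This is a minimal sketch of the network-centric mapping only; the VNI numbers come from the slide, the VRF name is hypothetical, and uplink/BGP/multicast details are omitted:

```
vlan 100
  vn-segment 100100      ! VLAN 100 (10.1.1.0/24) -> L2 VNI 100100
vlan 200
  vn-segment 100200      ! VLAN 200 (20.1.1.0/24) -> L2 VNI 100200

vrf context TENANT-1     ! hypothetical tenant VRF
  vni 300100             ! shared L3 VNI for both subnets

interface nve1
  host-reachability protocol bgp
  member vni 100100
    suppress-arp
  member vni 100200
    suppress-arp
```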
Policy Centric Deployment (with ACI)

Leverage grouped constructs at the network level and the application-policy level.
Abstraction with application network profiles; policy oriented.
Automation of network services.
APIC controller: declaration of intention, translating network constructs via policy.

An application profile describes tiers as EPGs (e.g. Web, App, DB) with FW/SLB services between them, plus external connectivity (L3 Outside).

EPs (endpoints) are devices that attach to the network either virtually or physically, e.g.:
Virtual machine
Physical server (running bare metal or a hypervisor)
External Layer 2 device
External Layer 3 device

Combined network model: virtual ports, physical ports, external L2 VLANs, external L3 subnets, firewalls, load balancers.
Data Center Design Requirements
Things will fail, so how can we protect ourselves?

* http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html
DCN Building Blocks with PoDs

PoD = Point of Delivery: a collection of compute, storage, and network resources that conforms to a standard operating footprint and shares the same failure domain.

(Diagram: spine and leaf nodes form a Layer 3 fabric; PoDs of RJ-45 and SFP+ attached hosts connect to leafs; the Data Center fabric connects out to DCI, Campus and WAN.)
DCN Building Blocks with Availability Zones
Using Amazon Web Services terms

Global multi-site data center deployment: Regions contain Availability Zones, which contain Pods.
Multi-Pod
Smaller instances of fabrics with distinct control planes but shared policy and management planes.

PoD = Point of Delivery: a collection of compute, storage, and network resources that conforms to a standard operating footprint and shares the same failure domain.

Pods (e.g. PoD 1, 2, 3) interconnect over a Layer 3 Inter-Pod Network (IPN) via transit leaf / IPN nodes.

Within or across DC locations using:
VXLAN EVPN Multi-Pod
ACI Multi-Pod
"Creative" use of Multi-Pod
Smaller instances of fabrics with distinct control planes but shared policy and management planes.

Development cycle: run Test/Dev, Pre-Prod (Stage) and Prod as separate pods in one Multi-Pod.
Evergreen concept: run stack version n and version n+1 side by side as pods, testing against n+1 before cutover.
Data Center Interconnect
IT Trends - Distributed Data Centers: Building the Data Center Cloud

Distributed data center goals:
Seamless workload mobility between multiple datacenters
Distributed applications closer to end users
Pool and maximize global compute resources
Ensure business continuity and disaster avoidance with workload mobility, distributed deployments and clustered applications

Small to Medium Enterprises often rely on two datacenters.
NX-OS Overlay Transport Virtualization (OTV) Technology
Extend VLANs across datacenters

OTV works across pod types: classical pods (Spanning Tree Protocol), scalable pods (vPC, N-tier design) and leaf-spine pods (VXLAN, ACI).

Features:
Simplified Layer 2 / VLAN extensions
Restricted fault domain (STP domain), loop prevention
Optimized multicast replication
Dual homing
Works over dark fibre, MPLS or IP

Benefits:
Field proven, very mature
Enables L2 elasticity across DCs
Simplifies virtual machine mobility
Extends Layer 2 without the risks of a large fault domain
Simple: 3 easy commands!

In the news:
M3 linecards with NX-OS 8.0
OTV loopback join interface for the multicast-based OTV control plane
Wire-rate 256-bit AES MACsec on all ports at all speeds, along with OTV
Data Centre Interconnect Options with OTV

Options for L2 interconnect between two sites: OTV on ASR1000 routers at the WAN/DCI edge, or OTV on Nexus 7000 at the aggregation layer (the L3/L2 boundary).
Virtual DC services in software at each site: virtualised servers with Nexus 1000v, vPath and CSR 1000v.
VXLAN as a Data Center Interconnect?

DCI is an architectural discussion; VXLAN is an encapsulation technique.
DCI has always been an architectural discussion, but recently some vendors have made it an encapsulation discussion.
VXLAN can absolutely fit into a DCI architecture if you handle it CAREFULLY.
VXLAN as a Data Center Interconnect?

Enablers for VXLAN in a DCI implementation:
VXLAN with a BGP EVPN fabric end-to-end
Host reachability (MAC/IP) information is managed and distributed end-to-end

When to use a VXLAN/EVPN stretched fabric:
Across short distances (Metro), private L3 DCI, IP multicast available end-to-end
Multiple greenfield DCs with VXLAN/EVPN; continuity of control/data plane
Host reachability is end-to-end

Caveats:
VXLAN for DCI is not mature yet
No architectural protection mechanisms at the edge; the VXLAN tunnel terminates on DCI leaf nodes
Use hardware protection mechanisms: storm control, BPDU Guard, HMM route tracking
Control plane with MAC learning, ARP suppression
Traffic is encapsulated and de-encapsulated on each far-end side (bud-node behaviour)
Preference for OTV or ACI Multi-Pod (a safer approach)
DCI with ACI Multi-Fabric Options

Single APIC cluster / single domain (BC-DA):
Stretched fabric: one ACI fabric across Site 1 and Site 2
Multi-Pod (6 pods today; 12+ pods on roadmap): pods over an IP network with MP-BGP EVPN, one APIC cluster - more scale, more automation

Multiple APIC clusters / multiple domains (DR):
Dual-fabric connected (L2 and L3 extension): ACI Fabric 1 and ACI Fabric 2 linked at L2/L3
Multi-Site (future): sites over an IP network with MP-BGP EVPN - more flexibility
VXLAN for DCI: Final Considerations

VXLAN is an encapsulation, not an architecture.
VXLAN for DCI is no longer in its infancy... but still be very careful when using VXLAN for DCI: you have to take care of many moving parts.
It's like building your own custom animal:
o Looks cool
o ... but takes special care
Prefer OTV or ACI Multi-Pod
... or rely on Cisco AS experience
Programmability, Automation and Orchestration
Why Network Programmability?

More speed. More repeatability. Fewer mistakes. More flexibility. More innovation.
Programmability, Automation, Orchestration

Network-enabled applications, layered top-down:
Orchestration: CloudCenter, UCSD (cloud | on-prem)
(via open northbound RESTful APIs)
Automation: NFM; controllers, configuration management tools
(via open & programmable, standards-based southbound APIs)
Open device programmability: physical and virtual network infrastructure
Controlling, Managing & Automating the Fabric

Above the fabric:
IT service management (e.g. ServiceNow)
Multi-cloud (e.g. Cisco CloudCenter)
Infrastructure orchestration & workflow automation for private cloud compute | network | storage (e.g. Cisco UCS Director, OpenStack, Puppet, Chef, Ansible, Salt etc.), using APIs for connection creation, expansion, reporting and fault management

Network-centric option: standalone NX-OS, device by device or a do-it-yourself L3 fabric with overlay; choice of NMS or fabric manager (e.g. DCNM, NFM).
Policy-centric, application-centric option: Application Centric Infrastructure (ACI) - a turnkey system driven by an APIC cluster with application policy.
Choice in Automation and Programmability
Open NX-OS

Application Centric (ACI): turnkey integrated solution with security, centralized management, compliance and scale; automated, application-centric policy model with embedded security; broad and deep ecosystem.

Programmable Fabric: BGP EVPN standards-based fabric; Nexus Fabric Manager / DCNM 10 for automation and management across N2K-N9K.

Programmable Network (Open NX-OS): application-optimized networks with Segment Routing; 3rd-party controller support; modern NX-OS with NX-API, REST/YANG/OpenConfig; DevOps toolsets used for network management (Puppet, Chef, Ansible etc.).

Common thread: automation, APIs, controllers and tool-chains.
Nexus Programmability                    Nexus 7K   Nexus 5K/6K   Nexus 9K

Provisioning & Orchestration
  Puppet/Chef/Ansible                    Shipping   Shipping      Shipping
  PoAP                                   Shipping   Shipping      Shipping
  OpenStack                              Shipping   Shipping      Shipping
  XMPP                                   Shipping   Shipping      Shipping

Protocols and Data Models
  NetConf (SSH/XML)                      Shipping   Shipping      Shipping
  NX-API CLI (JSON/XML)                  Shipping   Shipping      Shipping
  NX-API REST                            Future     Future        Shipping
  YANG                                   Future     Future        Shipping
  RESTconf/gRPC                          Future     Future        Shipping
  Streaming Telemetry                    Future     No            Shipping

Programmatic Interfaces
  Native Python                          Shipping   Shipping      Shipping
  Linux Container                        Shipping   Shipping      Shipping
  Guest Shell                            Future     Future        Shipping
  OpenFlow                               Shipping   Future        Shipping

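As a quick illustration of the NX-API CLI (JSON/XML) interface listed above, here is a minimal sketch (not an official client) of building a JSON-RPC request for it. The body shape and the /ins endpoint path follow the documented NX-API format; the switch hostname is a placeholder, and authentication and the actual HTTP transport are left out so the sketch stays self-contained.

```python
# Minimal sketch of an NX-API CLI (JSON-RPC) request.
# The switch name is a placeholder; the HTTP POST itself is omitted.
import json


def build_nxapi_payload(commands):
    """Build a JSON-RPC request body for a list of CLI commands."""
    return [
        {
            "jsonrpc": "2.0",
            "method": "cli",
            "params": {"cmd": cmd, "version": 1},
            "id": i + 1,
        }
        for i, cmd in enumerate(commands)
    ]


def nxapi_url(switch):
    """URL of the NX-API endpoint on a given switch."""
    return "https://%s/ins" % switch


if __name__ == "__main__":
    body = build_nxapi_payload(["show version", "show interface brief"])
    print(nxapi_url("n9k-leaf-1"))
    print(json.dumps(body, indent=2))
```

In practice the payload would be POSTed to the URL with basic authentication; the same body structure works for show and configuration commands alike.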
Open Fabric Programming: GitHub repository (for your reference)

https://github.com/datacenter/

USE CASE: Enterprise Customer, Igniting the Programmable Network
Fabric Provisioning Automation with Open NX-OS, Ignite and Puppet

Customer Benefits

Reduced Operational Costs
- Fully automate day-zero fabric provisioning with Ignite
- Automate full day-one configuration management with Puppet and Open NX-OS

Eliminate Errors
- Use a repeatable process for consistent provisioning
- Enable idempotent configurations; push changes only when necessary

https://github.com/datacenter/ignite
https://github.com/cisco/cisco-network-puppet-module

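The "push changes only when necessary" idea behind idempotent configuration can be sketched without any tooling: compare desired state to current state and emit only the difference. This is a toy illustration; get_current_vlans() is a stub standing in for a live device query (e.g. "show vlan brief" over NX-API), and all names here are hypothetical.

```python
# Toy sketch of idempotent change calculation: only the missing
# VLANs produce configuration commands.


def get_current_vlans():
    """Stub for a live query of VLANs already configured on the switch."""
    return {10, 20}


def vlan_commands(desired_vlans):
    """Return only the config commands required to reach the desired set."""
    missing = desired_vlans - get_current_vlans()
    return ["vlan %d" % v for v in sorted(missing)]


# VLANs 10 and 20 already exist, so only VLAN 30 is pushed
print(vlan_commands({10, 20, 30}))
```

Configuration management tools such as Puppet apply this compare-then-act pattern per resource, which is why repeated runs leave an already-compliant device untouched.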
USE CASE: Commercial Cloud Provider
Tenant Onboarding Automation with Ansible and Open NX-OS

Customer Benefits

Fast and secure tenant onboarding
- Time reduced from days to minutes with Open NX-OS and Ansible
- Ansible's secure, agentless architecture means there are no agents to exploit or update

Drive Compliance and Consistency
- Eliminate configuration drift across the network
- Continuously check the network for standards and model compliance

https://developer.cisco.com/opennxos
http://github.com/datacenter

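The drift check described above reduces to comparing a golden baseline against running-config lines and reporting what is missing. A toy sketch, with illustrative baseline lines; a compliance run (Ansible, in this use case) would apply this per device and remediate the differences.

```python
# Toy sketch of a configuration-drift check against a golden baseline.
# The baseline lines are illustrative, not a recommended standard.

GOLDEN_BASELINE = [
    "feature nxapi",
    "feature interface-vlan",
    "ip domain-lookup",
]


def drifted_lines(running_config):
    """Return the baseline lines absent from the running configuration."""
    running = {line.strip() for line in running_config.splitlines()}
    return [line for line in GOLDEN_BASELINE if line not in running]


sample = "feature nxapi\nip domain-lookup\n"
print(drifted_lines(sample))
```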
Programming a Fabric (for your reference)

A lot of work is being done to provide customers maximum flexibility in programming and automation interfaces:
- Free Open Programmability book:
  http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/nexus9000/sw/open_nxos/programmability/guide/Programmability_Open_NX-OS.pdf
- New community site dedicated to NX-OS programmability: https://opennxos.cisco.com

A lot of work has been done to increase available knowledge on network programming across all Cisco products:
- DevNet: if you haven't visited, please do: https://devnet.cisco.com
- SANDBOX! Free 24x7 hosted labs
Along the Spectrum from CLI to ACI
A New Way To Do Fabric Management

CLI -> Basic Element Manager -> Scripting to the CLI and/or API -> ACI
Fabric Management Functionality
Common Customer Asks

Fabric management automation is of high interest:
- Many fabrics are based on multiple protocols such as VXLAN, BGP EVPN, IS-IS and multicast
- New protocols, new configurations, new things to learn
- A simple tool eases the burden of adoption
- CVDs/best practices done for you!
Cisco Nexus Fabric Manager (NFM)
A fabric-aware tool that simplifies and automates VXLAN deployments

- Intelligent fabric lifecycle management: connection, creation, expansion, reporting and fault management
- Automates via knowledge of the underlying fabric
- Simplifies fabric management via a GUI
- Actively builds and manages the VXLAN/EVPN underlay fabric and overlay connectivity
- Manages Nexus 9000 switches: spine/leaf only, max 50 leafs (current)
Data Center Network Manager (DCNM) 10
An extensible, automatable Network Management System for Storage, LAN and Programmable (NX-OS mode) fabrics

- Advanced feature support including vPC, VXLAN, POAP, storage zoning, slow drain, OTV, topology, software image, configuration and template management
- Aids troubleshooting through monitoring, reporting and topology views
- Covers the Cisco Nexus (2K, 5K, 7K, 9K) and Cisco MDS switching families
- Multi-site, multi-DC capable: FP/VXLAN underlay+overlay solution templates, multi-fabric, multi-site
USE CASE: Financial Customer
Open NX-OS Network Analytics with Splunk Forwarder

Customer Requirements
- Monitor CPU, disks, fans, temperature sensors
- Monitor network statistics (packets, drops, errors etc.)
- Extensible framework to include additional data sources/attributes on demand

Customer Benefits
- Network visibility and operational intelligence at scale
- Pro-active problem identification
- Faster MTTR (Mean Time To Repair)
- One Splunk dashboard for network monitoring alongside applications, servers and services

Splunk add-on and dashboard for Cisco network devices (incl. Nexus): https://github.com/inspired/cisco_ios

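The forwarding stage of a setup like this ultimately flattens device statistics into lines Splunk indexes well. A toy sketch: render per-interface counters (as an NX-API poll might return them) as key=value event lines. The counter names are illustrative, not an exact NX-OS schema.

```python
# Toy sketch: flatten per-interface counters into key=value event
# lines suitable for a log forwarder. Counter names are illustrative.


def to_splunk_event(interface, counters):
    """Render one interface's counters as a single key=value event line."""
    pairs = " ".join("%s=%s" % (k, v) for k, v in sorted(counters.items()))
    return "interface=%s %s" % (interface, pairs)


print(to_splunk_event("Ethernet1/1", {"rx_packets": 1200, "tx_errors": 0}))
```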
FCAPS and Automation

Application Centric (ACI), an integrated stack:
- Turnkey integrated solution with security, centralized management, compliance and scale
- Automated application centric-policy model with embedded security
- Streamlined workflow management
- Broad and deep ecosystem
- FCAPS (Fault, Configuration, Accounting, Performance, Security) covered by integrated tools

Programmable Fabric (e.g. VTS):
- Integrated stack or a-la-carte automation
- Lifecycle functions for connection, creation, expansion, reporting and fault management

Programmable Network (Open NX-OS):
- Modern NX-OS with enhanced NX-APIs
- DevOps toolset used for network management (Puppet, Chef, Ansible etc.)
- Customer script-based operations and workflows
- FCAPS covered by external tools
ACI Manages Infrastructure through Abstraction

An Application Network Profile abstracts the logical state from the physical infrastructure: external connectivity, SLB configuration, FW configuration, host connectivity, ACLs, QoS and network path forwarding are expressed once as policy instead of being configured device by device.
ACI APIC Interfaces
- CLI
- GUI
- REST API: the full object model is exposed (objects, attributes, children, relationships, relative and distinguished names) as JSON or XML
- SDKs and toolkits: ACI Toolkit, Cobra (Python SDK), Ruby
  https://github.com/datacenter

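A minimal sketch of the REST pattern just described: every APIC object is posted as JSON under its distinguished name. The fvTenant class name and the /api/mo/uni.json path follow the published object model; the APIC address and tenant name are placeholders, and authentication plus the actual HTTP POST are omitted so the sketch stays self-contained.

```python
# Minimal sketch of an APIC REST payload: a tenant object posted
# under the policy universe (uni). Hostname and name are placeholders.
import json


def tenant_payload(name):
    """JSON body creating a tenant object in the APIC object model."""
    return {"fvTenant": {"attributes": {"name": name}}}


def post_url(apic_host):
    """Target URL for posting objects under the policy universe (uni)."""
    return "https://%s/api/mo/uni.json" % apic_host


print(post_url("apic1.example.com"))
print(json.dumps(tenant_payload("SMB-Prod")))
```

The Cobra SDK and acitoolkit generate exactly this kind of payload from Python objects, which is why the GUI, CLI and SDKs are all equivalent front ends to the same object model.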
Different People, Different Tools

From powerful/complex to simple/rigid: SDK, REST API, APIC GUI, APIC CLI
Custom Dashboards with Ruby SDK

https://github.com/datacenter/aci

Common Ask: Simplify Layer 2 Integration
A brownfield network (e.g. a pair of Cat3750 access switches carrying VLAN 100 and VLAN 200) connects into the ACI fabric, and VLAN 100 and VLAN 200 are extended into the fabric.
Network Centric Application Plus (NCAplus): Open Source Tool
A tool to deploy classic VLANs into ACI, tailored to the customer's process.
- Deploy L2 networks easily without having to use the ACI GUI
- Covers networks, access definitions and monitoring
https://github.com/datacenter/NCAplus
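A toy sketch of the network-centric mapping a tool like NCAplus automates: one bridge domain and one EPG per classic VLAN, named after the VLAN ID. The fvBD/fvAEPg class names follow the ACI object model, but the tenant and application profile names here are hypothetical, and the payloads are simplified (no subnets, domains or path bindings).

```python
# Toy sketch: build simplified bridge-domain and EPG payloads for one
# classic VLAN. Tenant and app-profile names are hypothetical.


def vlan_to_aci_objects(vlan_id, tenant="legacy", app_profile="l2-networks"):
    """Build bridge-domain and EPG payloads for one classic VLAN."""
    name = "VLAN%d" % vlan_id
    bd = {"fvBD": {"attributes": {
        "name": name,
        "dn": "uni/tn-%s/BD-%s" % (tenant, name)}}}
    epg = {"fvAEPg": {"attributes": {
        "name": name,
        "dn": "uni/tn-%s/ap-%s/epg-%s" % (tenant, app_profile, name)}}}
    return bd, epg


bd, epg = vlan_to_aci_objects(100)
print(epg["fvAEPg"]["attributes"]["dn"])
```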
ACI Network Automation for NetOps
Cisco Open Source Projects for the Data Center: https://github.com/datacenter

- ACI SDK (Cobra): the complete API SDK for the Cisco APIC
- acitoolkit: a basic toolkit for accessing the Cisco APIC
  (https://github.com/datacenter/acitoolkit)
- NCAplus: an application designed for ACI to simplify the creation of L2 networks
ACI App Center: https://aciappcenter.cisco.com/

App Center: New Apps include Contract Viewer and VisuDash

Programmability

Think different!
When the shoe doesn't fit, see if it's possible to customize the shoe.

Key Takeaways
- Cisco has many options for building data center networks; you own the key
- All solutions can start small and grow
- No Cisco solution has to be a rip and replace
- Spine-leaf does not have to be expensive
- Programmable fabrics provide new tools for simplified operations
- Automated fabrics provide new methods of managing DC networking
Call to Action
- Visit www.ciscolive.com for presentations and recordings
- Visit the World of Solutions and look for Data Center
- Meet the Engineer
- Visit the DevNet Zone and see what they have to offer!
Q&A

Complete Your Online Session Evaluation
- Give us your feedback and receive a Cisco Live 2017 cap by completing the overall event evaluation and 5 session evaluations.
- All evaluations can be completed via the Cisco Live Mobile App.
- Caps can be collected Friday 10 March at Registration.

Learn online with Cisco Live!
Visit us online after the conference for full access to session videos and presentations: www.CiscoLiveAPAC.com
Thank you
