
Cisco Complete Nexus portfolio

Deployment best practices


BRKDCT-2204


Session Goal
Understand how to design a scalable data center based upon customer
requirements
How to choose among the different flavors of design using the Nexus family.
Share a case study.


Recommended Sessions
BRKARC-3470: Cisco Nexus 7000 Hardware Architecture
BRKARC-3452: Cisco Nexus 5000/5500 and 2000 Switch Architecture
BRKARC-3471: Cisco NX-OS Software Architecture
BRKVIR-3013: Deploying and Troubleshooting the Nexus 1000v Virtual Switch
BRKDCT-2048: Deploying Virtual Port Channel in NX-OS
BRKDCT-2049: Overlay Transport Virtualization
BRKDCT-2081: Cisco FabricPath Technology and Design
BRKDCT-2202: FabricPath Migration Use Case
BRKDCT-2121: VDC Design and Implementation Considerations with Nexus 7000
BRKRST-2509: Mastering Data Center QoS
BRKDCT-2214: Ultra Low Latency Data Center Design - End-to-end design approach
BRKDCT-2218: Data Center Design for the Small and Medium Business


Session Agenda
Nexus Platform Overview
Data Center Design and Considerations
Case Study #1: Green Field Data Center Design
Key Takeaways


Data Center Architecture

Life used to be easy:
- The data centre switching design was based on the hierarchical switching we used everywhere
- Three tiers: Access, Aggregation and Core
- L2/L3 boundary at the aggregation
- Add in services and you were done

What has changed? Almost everything:
- Hypervisors
- Cloud: IaaS, PaaS, SaaS
- MSDC (massively scalable data centers)
- Ultra low latency
- Competition (merchant silicon, ...)
- We now sell compute!

[Diagram: three-tier topology with Core, Aggregation (L2/L3 boundary), Services and Access]

Data Center Drivers

Business challenges:
- Business agility
- Regulatory compliance
- Security threats
- Budget constraints

Technology trends:
- Cloud
- Big Data
- Proliferation of devices
- Energy efficiency

Data Centre Architecture

There is no single design anymore. Spectrum of design evolution:

Ultra Low Latency (High Frequency Trading):
- Layer 3 & multicast
- No virtualization
- Limited physical scale
- Nexus 3000 & UCS
- 10G edge moving to 40G

HPC/Grid:
- Layer 3 & Layer 2
- No virtualization
- iWARP & RoCE
- Nexus 2000, 3000, 5500, 7000 & UCS
- 10G moving to 40G

Virtualized Data Center (SP and Enterprise):
- Hypervisor virtualization
- Shared infrastructure
- Heterogeneous
- 1G edge moving to 10G
- Nexus 1000v, 2000, 5500, 7000 & UCS

MSDC:
- Layer 3 edge (iBGP, IS-IS)
- 1000s of racks
- Homogeneous environment
- No hypervisor virtualization
- 1G edge moving to 10G
- Nexus 2000, 3000, 5500, 7000 & UCS

[Diagram: racks of blade chassis illustrating each design point]

Cisco DC Switching Portfolio

- Platforms spanning scalability and LAN/SAN convergence: Nexus 7000, Nexus 5000, Nexus 3000, Nexus 2000, Nexus 4000, B22 FEX, Nexus 1010, Nexus 1000V
- Cisco NX-OS: one OS from the hypervisor to the data center core
- Pillars: convergence, VM-aware networking, 10/40/100G switching, fabric extensibility, cloud mobility

Nexus 7000 Series

Broad range of deployment options; highest 10GE density in modular switching.

- Nexus 7004: 7 RU; 440 Gig/slot; max 10/40/100GE ports 96/12/4; side-to-rear airflow; 4 x 3kW AC power supplies; small to medium core / data center edge
- Nexus 7009: 14 RU; 550 Gig/slot; 336/42/14; side-to-side airflow; 2 x 6kW AC/DC or 2 x 7.5kW AC; data center and campus core
- Nexus 7010: 21 RU; 550 Gig/slot; 384/48/16; front-to-back airflow; 3 x 6kW AC/DC or 3 x 7.5kW AC; data center
- Nexus 7018: 25 RU; 550 Gig/slot; 768/96/32; side-to-side airflow; 4 x 6kW AC/DC or 4 x 7.5kW AC; large scale data center

Nexus 5596UP & 5548UP: Virtualized Data Center Access

Innovations:
- Unified Port capability (10GE / 1GE / FCoE / 8G native FC): investment protection in action
- Layer-2 and Layer-3 support
- FEX support (24 per switch at L2)
- Multihop FCoE and lossless Ethernet (FCoE, iSCSI, NAS)
- Cisco FabricPath (future)
- Reverse airflow / DC-power options

Benefits / use cases:
- High density 1RU/2RU ToR switches
- Proven, resilient NX-OS and designs
- Low, predictable latency at scale

Cisco Nexus 2000 Series: Platform Overview

- N2148T: 48-port 1000M host interfaces, 4 x 10G uplinks
- N2224TP: 24-port 100/1000M host interfaces, 2 x 10G uplinks
- N2248TP: 48-port 100/1000M host interfaces, 4 x 10G uplinks
- N2232PP: 32-port 1/10G FCoE host interfaces, 8 x 10G uplinks

Changing the Device Paradigm

Cisco Nexus 7000 or Cisco Nexus 5500 + Cisco Nexus 2000 FEX = a distributed high density edge switching system (up to 4096 virtual Ethernet interfaces).

Nexus 3000: For Ultra Low Latency

1RU NX-OS switches for 1/10/40G connectivity; major wins in HFT/Web 2.0.
- Nexus 3048: 48 ports, 100M/1GE
- Nexus 3064: 64 ports, 1/10GE
- Nexus 3016: 16 ports, 10/40GE

Robust NX-OS with differentiated feature set:
- Wire-rate L2/L3 feature set
- vPC, PTP, configurable CoPP, ERSPAN
- Power-On Auto-Provisioning, IPv6

For: High-Frequency Trading | Big Data | Web 2.0

Cisco Nexus 1000V

- Nexus 1000V VSM + VEMs hosting VMs on VMware vSphere, managed through VMware vCenter
- Nexus 1000V VSM + VEMs hosting VMs on Windows 8 Hyper-V, managed through SCVMM

Consistent architecture, feature set and network services ensure operational transparency across multiple hypervisors.

Virtual Services for Nexus 1000V

Virtual services attach to VMs through the Nexus 1000V switch (VMware or Hyper-V):
- VSG: Virtual Security Gateway
- Virtual ASA: Adaptive Security Appliance
- vWAAS: Wide Area Acceleration Services
- NAM: Network Analysis Module

Customer benefits:
- Operational consistency across physical and virtual networks
- Network team manages physical and virtual networks
- Integrated advanced Cisco NX-OS networking features
- Support for existing Cisco virtual network services

Session Agenda
Nexus Platform Overview
Data Center Design and Considerations
Case Study #1: Green Field Data Center Design
Key Takeaways


Data Center Architecture Building Blocks

Functional blocks: Data Center Interconnect (DCI), Virtualization, Business & Enterprise Applications, Management, Security, Compute, Storage, Facilities.

[Diagram: enterprise core connecting Intranet, Extranet and Internet/DMZ data centers, each behind its own security perimeter with data center core, aggregation, access and DC services layers, plus a service POD]

- Allow customization within blocks while maintaining the overall architecture
- Blocks are aligned to meet business and technical requirements

A Cloud Ready Data Center Architecture

Cisco Virtualized Multi-Tenant Data Center (VMDC): a validated reference architecture that delivers a highly scalable, available, secure, flexible, and efficient data center infrastructure.
- Proven layered approach
- Reduced time to deployment
- Reduced risk
- Increased flexibility
- Improved operational efficiency

http://www.cisco.com/en/US/partner/solutions/ns340/ns414/ns742/ns743/ns1050/landing_vmdc.html

What Makes Designing Networks for the Data Center Different?

- Extremely high density of end nodes and switching
- Power, cooling, and space management constraints
- Mobility of servers a requirement, without DHCP
- The most critical shared end-nodes in the network; high availability required with very small service windows
- Multiple logical multi-tier application architectures built on top of a common physical topology
- Server load balancing, firewall, and other services required

The Evolving Data Centre Architecture
Data Center 2.0 (Physical Design == Logical Design)

- The IP portion of the data center architecture has been based on the hierarchical switching design
- Workload is localized to the aggregation block
- Services are localized to the applications running on the servers connected to the physical pod
- Mobility is often supported via a centralized cable plant
- The architecture is often based on a design optimized for control plane stability within the network fabric

Goal #1: Understand the constraints of the current approach (de-couple the elements of the design)
Goal #2: Understand the options we have to build a more efficient architecture (re-assemble the elements into a more flexible design)

[Diagram: Core / Aggregation (L2-L3 boundary) / Services / Compute]

Evolving Data Centre Architecture
Design Factor #1 to Re-Visit: Where are the VLANs?

- As we move to new designs we need to re-evaluate the L2/L3 scaling and design assumptions: where is the L2/L3 boundary? What is the VLAN/subnet ratio? What is the VLAN span?
- Need to consider VLAN usage: policy assignment (QoS, security, closed user groups), IP address management, scalability (ARP, STP, FHRP)
- Some factors are fixed (e.g. ARP load); some can be modified by altering the VLAN/subnet ratio
- Still need to consider L2/L3 boundary control plane scaling: ARP scaling (how many L2-adjacent devices), FHRP, PIM, IGMP, and STP logical port count (BPDUs generated per second)

Goal: Evaluate which elements can change in your architecture; a rough sizing sketch follows.
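A minimal back-of-the-envelope sketch of that control-plane sizing. Every input below (hosts per VLAN, VLAN count, trunk count, per-host ARP rate) is an illustrative assumption, not a figure from this session:

```python
# Rough L2/L3 boundary sizing; all inputs are illustrative assumptions.
def l2_boundary_scale(hosts_per_vlan, vlans, trunks, arp_per_host_per_min=2.0):
    """Estimate control-plane load at the aggregation (L2/L3) boundary."""
    arp_entries = hosts_per_vlan * vlans            # L2-adjacent devices to track
    arp_per_sec = arp_entries * arp_per_host_per_min / 60.0
    stp_logical_ports = trunks * vlans              # per-VLAN STP instances on trunks
    return arp_entries, arp_per_sec, stp_logical_ports

# Example: 250 hosts per VLAN, 200 VLANs spanning 40 access trunks.
entries, rate, logical = l2_boundary_scale(hosts_per_vlan=250, vlans=200, trunks=40)
print(f"ARP entries={entries}, ARP msgs/sec={rate:.0f}, STP logical ports={logical}")
```

Halving the VLAN span or the VLAN/subnet ratio in this model directly halves the ARP table and the STP logical port count, which is exactly the trade-off the slide asks you to evaluate.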

Evolving Data Centre Architecture
Design Factor #2 to Re-Visit: Where are the Pods?

- Network Pod: repeatable physical, compute and network infrastructure including the L2/L3 boundary equipment. The pod is traditionally the L2 failure domain (fate-sharing domain).
- Access Pod: collection of compute nodes and network ports behind a pair of access switches (10GE uplinks).
- Compute Pod: collection of compute nodes behind a single management domain or HA domain.

Network and fabric design questions that depend on the choice of the compute pod:
- How large is a pod?
- Is workload local to a pod?
- Are services local to a pod?

Evolving Data Centre Architecture
Design Factor #2 to Re-Visit: Where are the Pods? (continued)

- The efficiency of the power and cooling for the data center is largely driven by the physical/logical layout of the compute pod
- The design of the network and the cable plant interoperate to define the flexibility available in the architecture
- The evolution of server and storage connectivity is driving changes in the cable plant

Evolving Data Centre Architecture
Design Factor #2 to Re-Visit: Where are the cables?

[Two figure-only slides: cable plant layout examples]

Evolving Data Centre Architecture
Design Factor #3 to Re-Visit: How is the compute attached?

- How is striping of workload across the physical data center accomplished (rack, grouping of racks, blade chassis, ...)?
- How is the increase in the percentage of devices attached to SAN/NAS impacting the aggregated I/O and cabling density per compute unit?
- Goal: Define the unit of compute I/O and how it is managed (how does the cabling connect the compute to the network and fabric)

[Diagram: rack mount vs. blade chassis vs. integrated UCS]

Evolving Data Centre Architecture
Design Factor #4 to Re-Visit: Where is the Edge?

- Classic server: separate NIC and HBA on the PCI-E bus; the edge of the network and fabric sits at the physical adapters (e.g. Eth 2/12 and FC 3/11 on the access switch), with Ethernet and Fibre Channel on separate links
- Converged Network Adapter: provides virtualization of the physical media; a single 10GbE link carries Ethernet and Fibre Channel (veth and vFC on the switch), but there are still two PCI addresses on the bus
- Hypervisor: provides virtualization of PCI-E resources; vNICs map to VETH interfaces while the VMFS/SCSI stack rides the same adapters
- SR-IOV adapter: provides multiple PCIe resources directly (Eth 1, FC 2, Eth 3, FC 4, ... Eth 126) over 10GE with VNTag, with hypervisor pass-thru

Compute and fabric edge are merging.

Evolving Data Centre Architecture
Design Factor #5 to Re-Visit: How is the storage connected?

The flexibility of a unified fabric transport: any RU to any spindle.

[Diagram: host I/O stacks for FCoE SAN, iSCSI appliance, iSCSI gateway, NAS appliance and NAS gateway; block I/O carried over FC, FCoE or IP (iSCSI) vs. file I/O (NFS/CIFS) over IP, all converging on the same fabric]

Evolving Data Centre Architecture
Design Factor #6 to Re-Visit: Where Are the Services?

- In the non-virtualized model, services are inserted into the data path at choke points; the logical topology matches the physical
- Virtualized workload may require a re-evaluation of where the services are applied and how they are scaled
- Virtualized services are associated with the virtual machine (Nexus 1000v & vPath: VSG, vWAAS)
- Virtual machine isolation (VXLAN)

Hierarchical Design Network Layers: Defining the Terms

- Data Center Core: routed layer which is distinct from the enterprise network core; provides scalability to build multiple aggregation blocks
- Aggregation Layer: provides the boundary between layer-3 routing and layer-2 switching; point of connectivity for service devices (firewall, LB, etc.)
- Access Layer: provides the point of connectivity for servers and shared resources; typically layer-2 switching

[Diagram: enterprise network above the data center core; layer 3 links from core to aggregation, layer 2 trunks from aggregation to access]

Data Center Core Layer Design: Core Layer Function & Key Considerations

- High speed switching, 100% layer 3
- Fault domain isolation between the enterprise network and the DC
- AS / area boundary
- Routing table scale
- Fast routing convergence

Data Center Core Layer Design: Commonly Deployed Platform and Modules

Platform: Nexus 7K.
- M1: L2/L3/L4 with large forwarding tables and a rich feature set
- F2: low cost, high density, high performance, low latency and low power

                       M1-10G LC        M2-10G/40G/100G* LC   F2-Series LC
Min. software          4.0 and later*   6.1 or above          6.0(1) and later
Fabric connection      80G              240G/200G*            480G*
L3 IPv4 unicast        128K/1M          128K/1M               32K
L3 IPv4 multicast      32K              32K                   16K
L3 IPv6 unicast        6K/350K          Up to 350K            16K
L3 IPv6 multicast      16K              16K                   8K
ACL entries            64/128K          128K                  16K
MPLS, LISP and OTV     M-Series modules only (N/A on F2)

Module selection guidance:
- Classic layer 3 core: M1 or F2
- Large routing and ACL tables: M1
- High density line-rate 10G: F2
- MPLS: M1

Data Center Aggregation Layer Design

The virtualized aggregation layer provides:
- L2 / L3 boundary
- Access layer connectivity point: STP root, loop-free features
- Service insertion point
- Network policy control point: default GW, DHCP relay, ACLs

Data Center Aggregation Layer Design: Commonly Deployed Platform and Modules

Platform: N7K/N5K. Compare on features (L3, OTV, etc.), scalability, performance and port density:

                       M1-10G LC   F1-Series LC    F2-Series LC    N5500 with L3
Min. software          4.0*        5.1(1)          6.0(1)          5.0(3)N1(1)
Fabric connection      80G         230G            480G*           --
L3 IPv4 unicast
(routing/MAC table)    128K/1M     --              32K             8K
L3 IPv4 multicast      32K         --              16K             2K
MAC entries            128K        16K (per SoC)   16K (per SoC)   32K
FEX support            Yes*        No              Yes             Yes
L2 port channel        8 active    16 active       16 active       16 active
LISP and OTV           M1 only
FabricPath / FCoE      F1, F2 and N5500 (not M1)

Data Center Aggregation Layer Design: Key Design Considerations

- Data center physical infrastructure: POD design & cabling infrastructure
- Size of the layer 2 domain
- Oversubscription ratio: traffic flow; number of access layer switches to aggregate
- Scalability requirements
- Service insertion: service chassis vs. appliance; firewall deployment model; load balancer deployment model

Data Center Access Layer Design: Access Layer & Virtualized Edge

The access layer provides:
- Host connectivity point
- Mapping from virtual to physical
- L2 services: LACP, VLAN trunking

The virtualized edge provides:
- Virtual host connectivity point
- Virtual extension of access services
- Network policy enforcement point

Data Center Access Layer Design: Key Considerations & Commonly Deployed Platforms

Key considerations:
- Physical infrastructure: ToR vs. MoR
- Server types: 1G vs. 10G; active/active or active/standby NIC teaming; single-attached
- Oversubscription ratio: number of servers and uplinks
- Virtual access requirements: virtual machine visibility; virtual machine management boundary

                       N5548/N5596   N5010/N5020   N7K F-Series LC
Fabric throughput      960G/1.92T    520G/1.04T    230G/480G
Port density           48/96         26/52         32/48 per LC
No. of VLANs           4096          512           4096
MAC entries            32K           16K           16K (per SoC)
No. of FEXs            24            12            32 (F2 only)
1G FEX ports           1152          576           32/48 per LC
10G FEX ports          768           384           32/48 per LC
8G native FC ports     48/96         6/12          --
FabricPath             Yes           No            Yes

Cisco FEX-Link: Virtualized Access Switch
Changing the device paradigm

- De-coupling of the Layer 1 and Layer 2 topologies
- New approach to the structured building block
- Simplified management model: plug and play provisioning, centralized configuration
- Technology migration with minimal operational impact
- Long term TCO due to ease of component upgrade

Evolutionary Fabric Edge: Mixed 1/10G, FC/FCoE, Rack and Blade

- Consolidation for all servers, both rack and blade, onto the same virtual switch
- Support for 1G with migration to 10G, and FC with migration to FCoE
- 1G server racks are supported by 1G FEX (2248TP, 2224TP) or future-proofed with 1/10G FEX (2232PP or 2232TM)
- 10G server racks are supported by the addition of 10G FEX (2232PP, 2232TM, 2248PQ)
- 1G, 10G and FCoE connectivity for HP or Dell blade chassis
- Support for direct connection of HBAs to Unified Ports on the Nexus 5500UP
- Support for NPV-attached blade switches during FC to FCoE migration

Data Center Interconnect Design: Data Center Interconnect Drivers

- DC to DC IP connectivity (L3: IP routed service)
- DC to DC LAN extension (L2: EoMPLS, EoMPLSoGRE, with WAAS): workload scaling with vMotion, GeoCluster, disaster recovery, non-disruptive DC migration
- Storage extension and replication (FC over DWDM/CWDM between SANs)

[Diagram: main and backup data centers interconnected at the L3, L2 and storage layers]

Data Center Interconnect Design: DCI LAN Extension Key Considerations

- STP domain isolation
- Multihoming and loop avoidance
- Unknown unicast flooding and broadcast storm control
- FHRP redundancy and localization
- Scalability and convergence time

Three Nexus based options: OTV, vPC, FabricPath

Cisco FabricPath

FabricPath combines switching (easy configuration, plug & play, provisioning flexibility) with routing (multi-pathing/ECMP, fast convergence, high scalability): it brings Layer 3 routing benefits to flexible Layer 2 bridged Ethernet networks.

[Diagram: classic STP design with blocked links and 16:1 oversubscription vs. a fully non-blocking FabricPath fabric with 2:1 and 8:1 stages between pods]

Overlay Transport Virtualization: Technology Pillars

OTV is a MAC-in-IP technique to extend Layer 2 domains over any transport.

- Dynamic encapsulation: no pseudo-wire state maintenance, optimal multicast replication, multipoint connectivity, point-to-cloud model
- Protocol learning: preserves the failure boundary, built-in loop prevention, automated multi-homing, site independence

Platform support: the Nexus 7000 was the first platform to support OTV (since the 5.0 NX-OS release); the ASR 1000 now also supports OTV (since the 3.5 XE release).

DCI Architectures

- Leverage OTV capabilities on the Nexus 7000 (greenfield) and the ASR 1000 (brownfield)
- Build on top of the traditional DC L3 switching model (L2-L3 boundary in aggregation; the core is pure L3)
- Possible integration with the FabricPath/TRILL model (a FabricPath domain joined over an OTV virtual link)

[Diagram: greenfield Nexus 7K and brownfield ASR 1K OTV edge placements relative to the L2/L3 boundary]

Overlay Transport Virtualization

- Extensions over any transport (IP, MPLS)
- Automated built-in multihoming
- Failure boundary preservation
- End-to-end loop prevention
- Optimal BW utilization (no head-end replication)
- ARP optimization

OTV forwards by MAC address, but a MAC learned from a remote site resolves to the IP address of the far-end OTV edge device instead of a local interface. From the slide's example (VLAN 100; West site edge at IP A, East site edge at IP B):

West site MAC table: MAC 1 -> Eth 2, MAC 2 -> Eth 1, MAC 3 -> IP B, MAC 4 -> IP B
East site MAC table: MAC 1 -> IP A, MAC 2 -> IP A, MAC 3 -> Eth 3, MAC 4 -> Eth 4
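A toy model of the lookup the slide illustrates; the table entries mirror the West site example above, and the function names are purely illustrative:

```python
# Toy model of an OTV MAC table: locally learned MACs map to a physical
# interface, MACs learned from a remote site map to the IP address of that
# site's OTV edge device (MAC-in-IP forwarding).
west_mac_table = {
    ("vlan100", "MAC1"): ("interface", "Eth2"),   # local host
    ("vlan100", "MAC2"): ("interface", "Eth1"),   # local host
    ("vlan100", "MAC3"): ("overlay", "IP B"),     # learned from the East site
    ("vlan100", "MAC4"): ("overlay", "IP B"),
}

def forward(table, vlan, dst_mac):
    kind, next_hop = table[(vlan, dst_mac)]
    if kind == "interface":
        return f"bridge out {next_hop}"                  # classic L2 switching
    return f"encapsulate in IP, send to {next_hop}"      # OTV: frame inside IP

print(forward(west_mac_table, "vlan100", "MAC3"))
```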

Fabric Simplicity, Scale and Flexibility: Nexus Edge, Core & Boundary Nodes

Isolation of function when possible:
- The Nexus spine (redundant and simple) provides transport
- The Nexus compute edge provides media type and a scaled control plane
- The Nexus boundary (OTV, LISP, MPLS) provides localization of complex functions

[Diagram: spine, edge and boundary nodes connecting racks of blade chassis]

Session Agenda
Nexus Platform Overview
Data Center Design and Considerations
Case Study #1: Green Field Data Center Design
Key Takeaways


Case Study #1
Data Center High Level Requirements

- A leading online higher education institution
- More than 500,000 students, 24,000 faculty members
- Approximately 1200 servers and 600 VMs across 5 data centers
- Current data centers have reached the limits of switching, power, and cooling capacity
- Business decision made to build two new green field data centers to consolidate and to provide DR capability

Customer business challenges:
- x10G based virtualized next generation data center architecture
- No STP blocking topology
- Firewall protection for secured servers
- Support vMotion within and between data centers
- Network team gains visibility into VM networking

Virtualized Access Layer Requirements

- 400 10G-capable server connections
- 30 ESX servers with roughly 600 VMs
- 800 1G connections for standalone servers and the out-of-band management network
- Support both active/active and active/standby NIC teaming configurations
- Network team manages the network; server team manages servers/virtual machines
- Network policies are retained during vMotion

Data Center Access Layer Design

- N2K acting as a remote line card to reduce the number of devices to manage
- Migration to ToR for 10GE servers, or selective 1GE server racks if required (mix of ToR and EoR)
- Mixed cabling environment (optimized as required)
- Flexible support for future requirements
- Nexus 5000/2000 mixed ToR & EoR: combination of EoR (End of Row) and ToR (Top of Rack) cabling

Access Layer Port Counts & Oversubscription

For 10G servers off the 5596s:
- Total 10G ports = 20 * 32 = 640
- Server NIC utilization = 50%
- Total uplink BW = 16 * 10 = 160G
- Oversubscription ratio = 160 / (640 * 0.5 * 10) = 1/20

For 1G servers off the 5548s:
- Total 1G ports = 20 * 48 = 960
- Server NIC utilization = 50%
- Total uplink BW = 8 * 10 = 80G
- Oversubscription ratio = 80 / (960 * 0.5) = 1/6
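A quick sanity check of those two calculations; a minimal sketch using the port counts, uplink counts and the 50% utilization assumption from the slide:

```python
# Oversubscription = uplink bandwidth / active server-facing bandwidth (Gbps).
def oversubscription(ports, port_gbps, utilization, uplinks, uplink_gbps=10.0):
    server_bw = ports * port_gbps * utilization   # offered load from servers
    uplink_bw = uplinks * uplink_gbps             # bandwidth toward aggregation
    return uplink_bw / server_bw

# 10G servers off the 5596s: 640 ports, 16 x 10G uplinks -> 0.05 = 1/20
print(oversubscription(ports=640, port_gbps=10, utilization=0.5, uplinks=16))
# 1G servers off the 5548s: 960 ports, 8 x 10G uplinks -> ~0.167 = 1/6
print(oversubscription(ports=960, port_gbps=1, utilization=0.5, uplinks=8))
```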

N1KV Gains Visibility Into VM Environment

Cisco Nexus 1000V:
- Software based, built on Cisco NX-OS
- Compatible with all switching platforms
- Maintains the vCenter provisioning model unmodified for server administration; allows network administration of the virtual network via the familiar Cisco NX-OS CLI
- Policy-based VM connectivity
- Mobility of network and security properties
- Non-disruptive operational model

Nexus 1000V Uplink Options

MAC pinning (spanning-tree style active/passive):
- UCS blade server environments; 3rd party blade server environments in non-MCEC topologies
- channel-group auto mode on mac-pinning

Single switch port-channel:
- Port-channel with a single upstream switch, or upstream switches that do not support MCEC
- channel-group auto mode [active | passive]

Port-channel with two switches (Multi-Chassis EtherChannel):
- Any server connected to upstream switches that support MCEC
- channel-group auto mode [active | passive]
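The selection above reduces to a small decision table; a hypothetical helper, not Cisco tooling, with the two inputs taken from the option descriptions:

```python
# Hypothetical decision helper for the three Nexus 1000V uplink options above.
def n1kv_uplink_mode(upstream_supports_mcec: bool, single_upstream_switch: bool) -> str:
    if upstream_supports_mcec:
        # LACP port-channel spanning both upstream switches (vPC/VSS style MCEC)
        return "channel-group auto mode active"
    if single_upstream_switch:
        # Port-channel confined to one upstream switch
        return "channel-group auto mode active"
    # No MCEC and multiple upstream switches: pin vNICs to individual uplinks
    return "channel-group auto mode on mac-pinning"

print(n1kv_uplink_mode(upstream_supports_mcec=False, single_upstream_switch=False))
```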

Access Layer Design Highlights

Requirement -> Solution:
- Flexible cabling, ease of management -> N5K/2K provide mixed ToR & EoR; configuration is only done on the 5Ks
- 1G and 10G server connectivity with active/active and active/standby NIC teaming -> straight-through FEX supports all the NIC teaming options (note: enhanced vPC (EVPC) provides flexible server/FEX topology)
- vMotion within the data center -> N5K operates in layer 2 only to make larger layer 2 adjacency possible
- Visibility to VMs -> N1KV provides network visibility to VMs
- Network team manages the network, server team manages servers -> clear management boundary defined by N1KV

Aggregation Requirements

Facility:
- Drop any server anywhere in the data center

L2-L3:
- Layer 2 domain within the data center
- No STP blocking topology

Service layer:
- Secured zone and non-secured zone
- FW protection between zones, no FW protection within a zone
- LB service is required for web servers; the server needs to track the client IP
- High performance FW and LB are required
- NAM and IPS solutions are also required

Physical Infrastructure and Network Topology: Physical to Logical Mapping

[Figure-only slide]

Aggregation Oversubscription Ratio

Large layer 2 domain with a single pair of 7Ks. Worst case calculation:
- Assume all the traffic is north-south bound
- Assume 100% utilization from the 5Ks
- All the ports operated in dedicated mode
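A worked example of that worst case, continuing the helper style from the access-layer slide. The deck does not give the uplink counts here, so both figures below are hypothetical (the actual design targets 15:1, per the design highlights later):

```python
# Hypothetical worst case: 20 access pairs each offering 160G northbound at
# 100% utilization, against 32 x 10G dedicated-mode ports toward the 7K pair.
south_bw = 20 * 160   # Gbps offered by the access layer
north_bw = 32 * 10    # Gbps of aggregation uplink capacity
print(f"aggregation oversubscription = {south_bw / north_bw:.0f}:1")  # 10:1
```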

Service Integration at Aggregation Layer

- Service chassis vs. appliance
- Virtual services with the Nexus 1000V: a Virtual Service Node (VSN) serving VMs across VDCs in the virtual/cloud data center
  - Virtual appliance form factor
  - Dynamic instantiation/provisioning
  - Service transparent to VM mobility
  - Supports scale-out
  - Large scale multitenant operation

Service Integration: Physical Design

- High performance solution: ASA5585 firewall and IPS, ACE30 module
- Low TCO: repurposed 6500 as the service chassis
- Most scalable: NAM module inside the service chassis; available slots for future expansion

Firewall Logical Deployment Model

Transparent (bridging) mode: the FW bridges VLAN pairs (e.g. outside VLAN 10 / inside VLAN 11, 30/31, 40/41) while the router remains the gateway.
- Pros: easy to implement
- Cons: 8 bridge-groups per context

Routing mode: the FW routes between VLANs and acts as the gateway for the protected subnets.
- Pros: more scalable
- Cons: configuration complexity
- Variants on the Nexus: VRF sandwich (external VRF - FW - internal VRF) and VDC sandwich (external VDC - FW - internal VDC)

Load Balancer Logical Deployment Model

Transparent (bridging) mode: the LB bridges VLAN pairs (e.g. VLAN 10 / VLAN 11) in the data path.
- Pros: ease of deployment and multicast support
- Cons: 8 bridge-groups per context

Routing mode: the LB routes between client-side and server-side VLANs.
- Pros: separate STP domain
- Cons: no routing protocol support

One-arm mode: the LB hangs off a single VLAN (e.g. VLAN 201), so only load-balanced flows traverse it.
- Pros: non-LB traffic bypasses the LB
- Cons: SNAT or PBR required

Service Integration: Logical Design

VRF sandwich design:
- Three VRFs created on the aggregation N7K
- Server default gateway is the aggregation N7K
- No VRF route leaking

LB in transparent mode:
- Tracking the client IP is possible
- Multicast applications behind the LB are possible
- Two ACE contexts plus an admin context

FW in routing mode:
- FW provides routing between the VRFs
- Two FW contexts plus admin and system contexts

Spanning Tree Recommendations

- Port roles: N = network port; E = edge (portfast) port type; "-" = normal port type; combined with BPDUguard, Rootguard and Loopguard
- Layer 3 between the data center core and aggregation; Layer 2 with STP + Rootguard on aggregation links toward access; Layer 2 with STP + BPDUguard on server-facing access edge ports
- vPC domain at aggregation: the primary vPC peer is HSRP active and primary STP root; the secondary vPC peer is HSRP standby and secondary root

vPC Best Practice Features

- vPC auto-recovery (reload restore): increases high availability; allows one vPC device to assume the STP/vPC primary role and bring up all local vPCs when the other vPC peer device stays down after a DC power outage
- vPC peer-gateway: service continuity; allows a vPC switch to act as the active gateway for packets addressed to the peer router MAC
- vPC orphan-ports suspend: increases high availability; when vPC peer-links go down, the vPC secondary shuts down all vPC member ports as well as orphan ports, which avoids single-attached devices (FW, LB or NIC-teamed hosts) getting isolated during a vPC peer-link failure
- vPC ARP sync: improves convergence time for Layer 3 flows after the vPC peer-link comes back up
- vPC peer-switch: improves convergence time; virtualizes both vPC peer devices so they appear as a single STP root

See BRKDCT-2048: Deploying Virtual Port Channel in NX-OS.

Aggregation Layer Design Highlights

Requirement -> Solution:
- Drop any server anywhere in the DC; vMotion within the DC -> a single pair of 7Ks provides a data-center-wide layer 2 domain
- No STP blocking topology -> double-sided vPC between 7K and 5K eliminates blocking ports
- FW protection between the secure zone and the non-secure zone -> FW virtualization and VDC sandwich design provide logical separation and protection
- Web servers require load balancing service -> LB in transparent mode provides service on a per-VLAN basis
- High throughput services and future scalability -> a mix of service chassis and appliance design provides flexible and scalable service choices
- Low subscription ratio (target 15:1) -> M1 10G line cards configured in dedicated mode provide a lower subscription ratio

Core Layer Design

- Nexus 7010 with redundant M1 line cards
- 10G layer 3 port channels to the aggregation switches
- OSPF as the IGP; inject a default route into the data center
- Fault domain separation via BGP: eBGP peering with the enterprise network and with the remote data center
- The DC interconnects are connected to the core N7Ks

DCI Requirements and Design Choices

Requirements:
- L2 connectivity to provide workload scaling with vMotion
- Data replication between the data centers
- Potential 3rd data center

Design choices:
- OTV VDCs at the aggregation layer of each data center, with IGP + PIM peering and IGMPv3 toward the core (PIM interface, L3 join interface, L2 internal interface)
- Dark fiber between DC 1 and DC 2; vPC and FabricPath toward the access layer and server farms

[Diagram: DC 1 and DC 2, each with core, aggregation (OTV VDCs) and access layers above the server farms, interconnected by dark fiber]

OTV Design

Why OTV over the dark fiber links:
- Native STP and broadcast isolation
- Easy to add a 3rd site
- Existing multicast core

Design:
- OTV VDC on the aggregation switches (vPC between the aggregation pairs in each data center)
- No HSRP localization (phase 1): simplifies configuration; latency via dark fiber is minimal

Nexus 1000v Deployment for VM Mobility

- Both VSMs (active/standby pair) in the same data center; VEMs on vSphere hosts in both data centers
- Layer 3 control on the Nexus 1000V
- Stretched cluster supports live vMotion (5 ms latency) over the OTV layer 2 extension across dark fiber
- vCenter (active) in Data Center #1, with the vCenter SQL/Oracle database replicated to Data Center #2
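A rough feel for what the 5 ms constraint means in distance, assuming the figure is a round-trip budget. The propagation rate is a standard approximation (light covers roughly 200 km per millisecond in fiber), not a number from the deck:

```python
# Propagation in fiber: ~5 microseconds per km (refractive index ~1.5),
# i.e. roughly 200 km per millisecond one way.
KM_PER_MS = 200

def max_fiber_km(rtt_budget_ms: float) -> float:
    """One-way fiber distance that fits a round-trip latency budget,
    ignoring switching and serialization delay (the real limit is lower)."""
    return rtt_budget_ms / 2 * KM_PER_MS

print(max_fiber_km(5.0))  # 500.0 km upper bound for a 5 ms RTT vMotion budget
```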

Traffic Flow for VM Mobility: vMotion Between Data Centers

- Virtual machines still use the original ACE and gateway after vMotion
- Traffic will trombone the DCI link for vMotioned virtual machines
- No HSRP localization or source NAT

Overall Design Highlights: Case Study #1

Requirement -> Solution:
- x10G based virtualized next generation data center architecture -> Nexus 7K/5K/2K provide a scalable x10G architecture with end-to-end virtualization
- No STP blocking topology -> double-sided vPC between 7K and 5K eliminates blocking ports
- FW protection between the secure zone and the non-secure zone -> FW virtualization and VRF sandwich design provide logical separation and protection
- Support vMotion within and between data centers -> the L2/L3 boundary is placed at the aggregation layer to provide a data-center-wide layer 2 domain; OTV provides layer 2 extension between data centers
- Network team gains visibility to VM networking -> the Nexus 1000v provides a clear management boundary between the network and server teams; network policy through N1KV is implemented on VMs

Key Takeaways

- The Nexus family and NX-OS are designed for the modern data center architecture
- The 3-tier design model (Core, Aggregation, Access) ensures high availability and scalability
- Nexus 5K/2K offer flexible cabling solutions at the access layer
- Nexus 7K/5K double-sided vPC supports a non-blocking topology and a larger layer 2 domain; FabricPath is the new trend
- Nexus 7K virtualization provides flexible service insertion at the aggregation layer
- OTV/FabricPath/vPC simplify DCI and migration solutions
- Nexus 1000v provides network policy control and visibility into VMs, and offers integrated virtual services (VSG, vWAAS, NAM, ASA) at the VM level

THANK YOU for Listening & Sharing Your Thoughts
