
Cisco ASR 9000 System

Architecture
Javed Asghar, Technical Marketing Architect - Speaker

Dennis Cai, Distinguished Engineer, Technical Marketing Question Manager

CCIE #6621, R&S, Security

BRKARC-2003
Swiss Army Knife Built for the Edge Routing World
Cisco ASR9000 Market Roles:
1. High-End Aggregation & Transport (Carrier Ethernet, Cable/MSO, Mobile Backhaul)
   1. Mobile Backhaul
   2. L2/Metro Aggregation
   3. CMTS Aggregation
   4. Video Distribution & Services
2. DC Gateway Router (Web/OTT, DC Gateway)
   1. DC Interconnect
   2. DC WAN Edge
   3. Web/OTT
3. Services Router (Multiservice Edge, Broadband Gateway, Large Enterprise WAN)
   1. Business Services
   2. Residential Broadband
   3. Converged Edge/Core
   4. Enterprise WAN
Other ASR9000 or Cisco IOS XR Sessions You Might Be Interested In
- BRKSPG-2904: ASR-9000/IOS-XR - Understanding Forwarding, Troubleshooting the System and XR Operations
- TECSPG-3001: Advanced - ASR 9000 Operation and Troubleshooting
- BRKSPG-2202: Deploying Carrier Ethernet Services on ASR9000
- BRKARC-2024: The Cisco ASR9000 nV Technology and Deployment
- BRKMPL-2333: E-VPN & PBB-EVPN: the Next Generation of MPLS-based L2VPN
- BRKARC-3003: ASR 9000 New Scale Features - Flexible CLI (Configuration Groups) & Scale ACLs
- BRKSPG-3334: Advanced CG NAT44 and IOS XR Deployment Experience
Agenda
- ASR9000 Hardware Overview
- Line Card System Architecture
  - Typhoon, Trident, SIP-700, VSM
  - Tomahawk
  - Interface QoS Capability
- Switch Fabric Architecture
  - Bandwidth and Redundancy Overview
  - Fabric QoS / Virtual Output Queuing
- Packet Flow and Control Plane Architecture
  - Unicast
  - Multicast
  - L2
- IOS-XR Overview
  - 32-bit and 64-bit OS
  - IOS-XRv 9000 Virtual Forwarder
  - Netconf/Yang
ASR9000 Hardware Overview
ASR9k Chassis Portfolio Offers Maximum Flexibility - Physical and Virtual
- Compact & Powerful: small footprint with full IOS-XR feature capabilities for distributed environments (BNG, pre-aggregation, etc.)
- Flexible Service Edge: optimized for ESE and MSE with high multi-dimensional scale for medium to large sites
- High Density Service Edge, Access/Aggregation and Core: scalable, ultra high density service routers for large, high-growth sites
- nV Satellites: ASR 9000v, ASR 901/903; virtual: IOS XRv on x86

Chassis lineup (form factor / capacity):
- ASR 9001 / 9001-S: fixed, 2RU, 120 Gbps
- ASR 9904: 2 LC, 6RU, 2.4 Tbps
- ASR 9006: 4 LC, 10RU, 3.2 Tbps
- ASR 9010: 8 LC, 21RU, 6.4 Tbps
- ASR 9910: 8 LC, 21RU, 24 Tbps
- ASR 9912: 10 LC, 30RU, 30 Tbps
- ASR 9922: 20 LC, 44RU, 60 Tbps

Roles across the portfolio: MSE, E-MSE, Peering, P/PE, CE, Mobility - all on IOS XR.
Edge Linecard Silicon Slice Evolution

Past - Trident class (120G/slot):
- Trident NPU: 90nm, 15 Gbps
- Octopus: 130nm, 60 Gbps
- Santa Cruz: 130nm, 90 Gbps
- LC CPU: PowerPC dual core, 1.2 GHz

Now - Typhoon class (360G/slot):
- Typhoon NPU: 55nm, 60 Gbps
- Skytrain FIA: 65nm, 60 Gbps
- Sacramento: 65nm, 220 Gbps
- LC CPU: PowerPC quad core, 1.5 GHz

Future - Tomahawk class (800G/slot):
- Tomahawk NPU: 28nm, 240 Gbps
- Tigershark FIA: 28nm, 200 Gbps
- SM15 crossbar: 28nm, 1.2 Tbps
- LC CPU: x86 6 core, 2 GHz
ASR 9001 Compact Chassis - shipping since IOS-XR 4.2.1 (May 2012)
- 2RU, side-to-side airflow; front-to-back airflow option with air-flow baffles (4RU, requires V2 fan)
- Two sub-slots with MPAs; supported MPAs: 20x1GE, 2x10GE, 4x10GE, 1x40GE
- Fixed 4x10G SFP+ ports
- Redundant (AC or DC) power supplies, field replaceable
- Fan tray, field replaceable
ASR-9001 System Architecture Overview
[Diagram: two Typhoon NPUs, each serving one MPA bay (2/4x10GE, 20xGE, 1x40GE) plus two on-board 10GE SFP+ ports, connect through FIAs to a single internal switch fabric ASIC; the LC CPU and RP CPU interconnect over the internal EOBC.]
- RP CPU and linecard CPU have the same architecture as the larger systems
- Uses a single crossbar fabric ASIC (due to the smaller bandwidth requirements)
ASR 9001-S Compact Chassis - shipping since IOS-XR 4.3.1 (May 2013)
- 2RU, side-to-side airflow; front-to-back airflow option with air-flow baffles (4RU, requires V2 fan)
- Two sub-slots with MPAs; supported MPAs: 20x1GE, 2x10GE, 4x10GE, 1x40GE
- Pay as you grow: low entry cost, SW license upgradable to a full 9001
- 60G of bandwidth is disabled by software; a SW license enables it
ASR-9001S System Architecture Overview
[Diagram: same hardware as the ASR 9001; the second Typhoon NPU slice and its MPA bay are disabled by default and upgradable via license.]
- RP CPU and linecard CPU have the same architecture as the larger systems
- Uses a single crossbar fabric ASIC (due to the smaller bandwidth requirements)
Cisco ASR 9006 Overview
Chassis: side-to-back airflow at 10RU; front-to-back airflow with air-flow baffles at 13RU (vertical).
- Total Capacity: 3.68T
- Capacity per Slot: 920G
- Slots: 6 slots - 4 Line Cards and 2 RSPs
- Rack size: 10RU
- Power: 1 Power Shelf, 4 Power Modules; 2.1 KW DC / 3.0 KW AC supplies
- Fan: side-to-side airflow, optional baffle for front-to-back airflow; 2 Fan Trays, FRU
- RSPs: Integrated Fabric, 1+1 Redundancy
- Line cards: Tomahawk, Typhoon, VSM, SIP700 & SPAs
Cisco ASR 9010 Overview
- Total Capacity: 7.36T
- Capacity per Slot: 920G
- Slots: 10 slots - 8 Line Cards and 2 RSPs
- Rack size: 21RU
- Power: 2 Power Trays; 2.1 KW DC / 3.0 KW AC supplies, or 4.4 KW DC / 6.0 KW AC supplies
- Fan: front-to-back airflow; 2 Fan Trays, FRU
- RSPs: Integrated Fabric, 1+1 Redundancy
- Line cards: Tomahawk, Typhoon, VSM, SIP700 & SPAs
Cisco ASR 9904 Overview (SW: XR 5.1.0, September 2013)
- Total Capacity: 6T
- Capacity per Slot: 3T
- Slots: 4 slots - 2 Line Cards and 2 RSPs
- Rack size: 6RU (10RU with front-to-back air-flow baffles)
- Power: 1 Power Tray, 4 Power Modules; 2.1 KW DC / 3.0 KW AC supplies
- Fan: side-to-side airflow, front-to-back optional; 1 Fan Tray, FRU
- RSPs: Integrated Fabric, 1+1 Redundancy
- Line cards: Tomahawk, Typhoon, SIP700, VSM
Cisco ASR 9912 Overview (SW: XR 4.3.2, shipping)
- Total Capacity: 30T
- Capacity per Slot: 3T
- Slots: 10 slot chassis
- Rack Size: 30 RU
- Power: 4 Power Trays; 2.1 KW DC / 3.0 KW AC supplies, or 4.4 KW DC / 6.0 KW AC supplies
- Fan: 2 Fan Trays, front-to-back airflow
- RP: 1+1 RP redundancy
- Fabric (SFC): 6+1 fabric redundancy
Cisco ASR 9922 Overview
- Total Capacity: 60T
- Capacity per Slot: 3T
- Slots: 20 Line cards, 2 RP, 7 SFC
- Rack Size: 44 RU (full rack)
- Power: 4 Power Trays; 2.1 KW DC / 3.0 KW AC supplies, or 4.4 KW DC / 6.0 KW AC supplies
- Fan: 4 Fan Trays, front-to-back airflow
- RP: 1+1 RP redundancy
- Fabric (SFC): 6+1 fabric redundancy
- Line cards: Tomahawk, Typhoon, VSM
Cisco ASR 9910 Overview (target: IOS XR 6.1, Q1CY16)
- Total Capacity: 24T
- Capacity per Slot: 3T
- Slots: 10 slots - 8 Line Cards and 2 RSPs
- Rack size: 21RU
- Power: 2 Power Trays; 4.4 KW DC / 6.0 KW AC supplies
- Fan: front-to-back airflow; 2 Fan Trays, FRU
- RSPs: Integrated Fabric, 1+1 Redundancy
- Fabric Cards: 5 fabric cards on the rear of the chassis for additional capacity; 230G per FC at FCS; up to 6+1 redundancy using the RSPs' integrated fabric
- Line cards: Tomahawk, Typhoon, VSM, SIP700
Note: this information may change until FCS.

Cisco ASR 9910 Details (target: IOS XR 6.1, Q1CY16)
Greater capacity, greater flexibility:
- Start with 2 RSPs, then add fabric cards to scale beyond 460G per slot
- ASR 9910 comes prepared with sufficient power and cooling to support high-density 100G line cards
- Available with IOS XR 6.1 with the same feature parity as the rest of the ASR 9000 family
[Chassis: 2 Fan Trays, 8 Line Card Slots, 2 RSPs, 5 Fabric Cards, 2 Power Trays.]
This information may change until FCS.
ASR-9910 Mid-plane Architecture
New mid-plane architecture for greater flexibility:
[Diagram: eight 1G/10G/40G/100G line cards (LC0-LC7) connect across the mid-plane to RSP0/RSP1 (230G each) and to fabric cards 0-4 (230G each); power trays 0/1 feed the chassis. The mid-plane separates line-side, fabric, and control connectivity.]
ASR 9000 Route Switch Processor
- Common for ASR 9904, ASR 9006, and ASR 9010
- Common internal HW with the RP for feature parity on IOS XR
- Integrated multi-stage switch fabric
- TR and SE memory options
- Time and synchronization support

RSP440 vs. RSP880:
- Availability: Q1CY12 / Q1CY15
- Processor: four cores, 2.1GHz / eight cores, 2.2GHz
- NPU bandwidth: 60G / 240G
- Fabric planes: 5 / 7
- Fabric capacity: 440G / 880G
- Memory: 6G TR, 12G SE / 16G TR, 32G SE
- SSD: 2x 16GB slim SATA / 2x 32GB slim SATA
- LC support: Typhoon & Trident / Tomahawk & Typhoon
ASR 9900 Route Processor
- Common for ASR 9912 and ASR 9922
- Built for massive control plane scale
- Ultra high speed control plane with multi-core Intel CPU
- Huge scale through high memory options
- Time and synchronization support

RP1 vs. RP2:
- Availability: Q1CY12 / Q1CY15
- Processor: four cores, 2.1GHz / eight cores, 2.2GHz
- NPU bandwidth: 60G / 240G
- Fabric planes: 5 / 7
- Memory: 6G TR, 12G SE / 16G TR, 32G SE
- SSD: 2x 16GB slim SATA / 2x 32GB slim SATA
- LC support: Typhoon / Tomahawk & Typhoon
Route Switch Processors and Route Processors
RSP used in ASR9904/9006/9010; RP used in ASR9922/9912.
2nd-gen RP and fabric ASIC: RSP440 (9904/9006/9010), RP1 (9922/9912). 3rd-gen: RSP880 (9904/9006/9010), RP2 (9922/9912).
- Processors: RSP440/RP1: Intel x86, 4 core 2.27 GHz; RSP880/RP2: Intel x86 (Ivy Bridge EP), 8 core 2GHz
- RAM: RSP440/RP1: 6GB (-TR), 12GB (-SE); RSP880/RP2: 16GB (-TR), 32GB (-SE)
- SSD: RSP440/RP1: 2x 16GB slim SATA; RSP880/RP2: 2x 32GB slim SATA
- nV EOBC ports: RSP440/RP1: 2x 1G/10G SFP+; RSP880/RP2: 4x 1/10G SFP+
- Punt BW: RSP440/RP1: 10GE; RSP880/RP2: 40GE
- Switch fabric bandwidth:
  - RSP440: 220G + 220G (9006/9010), 385G + 385G (9904) - fabric integrated on the RSP
  - RSP880: 450G + 450G (9006/9010), 800G + 800G (9904) - fabric integrated on the RSP
  - RP1: 660G + 110G (separate fabric cards)
  - RP2: 1.61T + 230G (separate fabric cards)
ASR 9900 - Switch Fabric Cards
- Common for ASR 9912 and ASR 9922; 7 fabric card slots
- Decoupled, multi-stage switch fabric hardware
- True HW separation between control and data plane
- Add bandwidth per slot easily & independently
- Similar architecture to CRS
- In-service upgrade

SFC110 vs. SFC2:
- Availability: Q1CY12 / Q1CY15
- Fabric capacity per SFC: 110G / 230G
- Fabric capacity per line card slot: 660G N+1, 770G N+0 / 1.38T N+1, 1.61T N+0
- Fabric redundancy: N+1 / N+1
- LC support: Typhoon / Tomahawk & Typhoon
New ASR-9922 and ASR-9006 V2 Fans
- ASR 9922-FAN-V2: IOS XR 5.2.2; ASR 9006-FAN-V2: IOS XR 5.3.0
- V2 fan provides higher cooling capacity for ultra high density cards
- Motor and blade shape optimized to produce higher CFM
- New material capable of dissipating more heat
- In-service upgrade
- PAYG power: N+1 redundancy for DC, N+N redundancy for AC
- Typhoon and Tomahawk LCs supported with V2
- Plan for the future beyond 1T per slot with V3

Power consumption per line card:
- Typhoon, 360G/slot: 2.8 Watts/G at 27C, 3.5 Watts/G at 40C
- Tomahawk, 1T/slot: 1.6 Watts/G at 27C, 1.9 Watts/G at 40C

Power supply versions:
- V2: 3.0KW AC / 2.1KW DC - ASR9006, ASR9010, ASR9904, ASR9912, ASR9922
- V3: 6.0KW AC / 4.4KW DC - ASR9010, ASR9912, ASR9922
Benefit of Power Reductions for 100G
4 Tbps: Tomahawk vs. Typhoon in a provider-owned European data center
- 4x the 100G density vs. Typhoon
- Power cost: $3,460 per port per year in power savings; $1,384,000 savings over 10 years
Notes: 9922 with 20x 2x100G vs. 9912 with 5x 8x100G; 8-year facility amortization with 1.7 PUE, 40C max, France power (USD $0.19 per kWh), N:N power redundancy.
Line Card System Architecture
1. Typhoon, Trident, SIP-700, VSM
2. Tomahawk
3. Interface QoS
Modular SPA Linecard: SIP-700
20Gbps, feature-rich, high-scale, low-speed interfaces (4 SPA bays)
- Scalability: L3 interface, route, and session protocol scale for MSE needs; IOS-XR base for high scale and reliability
- Quality of Service: 128k queues, 128k policers, H-QoS capability, color policing
- High Availability: distributed control and data plane, IC stateful switchover, MR-APS
- Powerful & flexible QFP processor: flexible uCode architecture for feature richness
- L2 + L3 services: FR, PPP, HDLC, MLPPP, LFI, L3VPN, MPLS, Netflow, 6PE/6VPE
- SPA support: ChOC-3/12/48 (STM1/4/16); POS OC3/STM1, OC12/STM4, OC-48/STM16, OC192/STM64; ChT1/E1, ChT3/E3, CEoPs, ATM
ASR 9000 Ethernet Line Card Overview
First-generation LCs (-L, -B, -E), Trident NPU: 15Gbps, ~15Mpps, bi-directional:
- A9K-40G, A9K-4T, A9K-8T/4, A9K-2T20G, A9K-8T, A9K-16T/8
Second-generation LCs (-TR, -SE), Typhoon NPU: 60Gbps, ~45Mpps, bi-directional:
- A9K-24x10GE, A9K-36x10GE, A9K-2x100GE (A9K-1x100G)
- A9K-MOD80, A9K-MOD160 with MPAs: 20x1GE, 2x10GE, 4x10GE, 8x10GE, 1x40GE, 2x40GE
Suffixes: -L low queue, -B medium queue, -E large queue, -TR transport optimized, -SE service edge optimized.
ASR 9000 80-360G Typhoon Class Linecards
Hyper intelligence & service scalability (24x10GE, 36x10GE, 2x100GE, modular 80G & 160G):
- High control plane scale: 4M IPv4 or 2M IPv6 FIB per line card; 2M MACs learned in hardware
- High performance: line-rate performance on all line cards
- End-to-end internal system QoS
- Efficient multicast replication
- Micro-CPU based forwarding chip
- Feature flexibility, future proven: programmable forwarding tables
Network Processor Architecture Details
NPU complex: a multi-core forwarding chip attached to TCAM, lookup memory, stats memory, and frame memory.
- TCAM: VLAN tag, QoS, and ACL classification
- Stats memory: interface statistics, forwarding statistics, etc.
- Frame memory: buffers, queues
- Lookup memory: forwarding tables - FIB, MAC, adjacencies
TR vs. SE:
- Different TCAM/frame/stats memory sizes give different per-LC QoS, ACL, and logical interface scale
- Same lookup memory size gives the same system-wide scale; mixing different LC variants doesn't impact system-wide scale
(-TR: transport optimized, -SE: service edge optimized)


MAC Learning and Sync
Hardware-based MAC learning, ~4Mpps per NP:
1. The NP learns the MAC address in hardware (around 4Mpps).
2. The NP floods a MAC notification message (data plane) to all other NPs in the system to sync up the MAC address system-wide. MAC notification and MAC sync are done entirely in hardware.
[Diagram: the learning NP on LC1 floods the MAC notification through its FIA and the switch fabric to every other NP on LC1 and LC2; the LC CPUs and the RP are not involved in the data-plane learning path.]
Typhoon LC: 24x10G Ports Architecture
[Diagram: eight groups of 3x10GE SFP+ ports, one Typhoon NPU per group (30G each); pairs of NPUs feed a FIA at 60G, and four FIAs (90G each) connect to the line-card-local fabric complex (next-gen switch fabric ASIC), which connects at 8x55G to the RSP440 switch fabric on RSP0 and RSP1.]
- No congestion point; non-blocking everywhere
Typhoon NPU BW and performance:
1. 120G and 90Mpps uni-directional
2. 60G and 45Mpps full-duplex (each direction, ingress/egress)
Typhoon Line Card Architectures
[Diagrams: 36x10G port LC, 2x100G port LC, MOD80 LC, MOD160 LC.]
Trident vs. Typhoon Major Feature Scale
- FIB routes (v4/v6): 1.3M/650K* vs. 4M/2M
- Multicast FIB: 32K vs. 128K
- MAC addresses: 512K* vs. 2M
- L3 VRFs: 4K vs. 8K
- Bridge domains / VFI: 8K vs. 64K
- PW: 64K vs. 128K
- L3 subinterfaces / LC: 4K vs. 8K (TR), 20K (SE)
- L2 interfaces (EFPs) / LC: 4K (-L), 16K (-B), 32K (-E) vs. 16K (TR), 64K (SE)
- MPLS labels: 256K vs. 1M
- Queue scale (per NP): 64K egress + 32K ingress vs. 192K egress + 64K ingress
- Policer scale (per NP): 32K/64K (-L,-B/-E) vs. 32K/256K (-TR/-SE)
* Requires scale-profile configuration to reach maximum FIB or MAC on the Trident line card. On Typhoon, FIB and MAC have dedicated memory.
ASR9k Full-Mesh IPv4 Performance
Setup: ASR9010 with 2x RSP-440 and 8x Typhoon 24x10G = 192 ports; IXIA Tx/Rx on all ports; full mesh of flows = ~72,000 flows (36k full-duplex flows).
A9k-24x10GE L3 Multicast Scale Performance - Setup and Profile
Scale profile:
1. Unidirectional multicast over L3 sub-interfaces
2. 2k (S,G) mroutes
3. 24 OIFs per (S,G) mroute
Setup: ASR 9010 with A9k-24x10GE cards; IXIA multicast source on one 10G source-facing port (port 10); IXIA multicast receivers with IGMP joins on the receiver-facing ports (ports 0, 1, ..., 21, 22, 23).
A9k-24x10GE L3 Multicast Scale - Throughput Performance Results
[Chart: aggregate Rx throughput at 2k (S,G) and 24 OIFs per (S,G). The card sustains 240 Gbps aggregate across frame sizes from 64B to 9000B; the corresponding packet rate falls from 357Mfps at 64B (203M at 128B, 109M at 256B, 56M at 512B, 29M at 1024B, 20M at 1518B, 15M at 2048B, 7M at 4096B) down to 3Mfps at 9000B.]
Virtualized Services Module (VSM) Overview
Data center compute embedded in the ASR 9000 (service VMs 1-4 over an OS/hypervisor):
- 4x Intel 10-core x86 CPUs
- 2 Typhoon NPUs for hardware network processing
- 120 Gbps of raw processing throughput
- HW acceleration: 40 Gbps of hardware-assisted crypto throughput; hardware assist for regex matching
- Virtualization: KVM hypervisor; service VM lifecycle management integrated into IOS-XR
Applications: CGN (shipping), IPsec SecGW (shipping), Firewall (on radar), Anti-DDoS* (Q1CY15), 3rd-party apps* (Q2/Q3 CY15)
VSM Details
[Diagram: the virtualized services sub-module (x86 CPUs with DDR3 DIMMs and crypto engines) connects over 10GE/FCoE SFP+ data paths to the router infrastructure sub-module - two Typhoon NPUs and the LC-local fabric complex - which connects to the switch fabric on RSP/RP 0 and 1.]
New Tomahawk LCs
- Tomahawk 8x100GE CPAK line card: LAN only, shipping in 5.3.0; LAN/WAN/OTN in April 2015 (5.3.1)
- Tomahawk 4x100GE CPAK line card: LAN/WAN/OTN, April 2015 (5.3.1)
Flex 100G CPAK - investment protection: start with 10GE and upgrade to 100GE in the future.
CPAK options: 100GE ER4, 100GE LR4, 100GE SR10, 10x10-LR (10x10GE or 2x40GE breakout).
Tomahawk LC: 8x100GE Architecture
Slice-based architecture with an Ivy Bridge LC CPU. Each of the four slices:
- CPAK PHY (100G/40G/10G): MACsec, Suite B+, G.709/OTN, clocking
- Tomahawk NP (240G): L2/L3/L4 lookups, all VPN types, all feature processing, mcast replication, egress port QoS, ACL, etc.
- Tigershark FIA (240G): VoQ buffering, fabric credits, mcast hashing, scheduler for fabric and complex DWRR, RBH, replication, FPOE, auto-spread
- Central crossbar (SM15)
Per-slice power management (100-200W power savings):

PE1(admin-config)# hw-module power disable slice [0-3] location
Tomahawk vs. Typhoon Hardware Capability (Tomahawk-SE vs. Typhoon-SE scale)
- MPLS labels: 1M vs. 1M
- MAC addresses: 2M at FCS, 6M future vs. 2M
- FIB routes (v4/v6): 4M(v4)/2M(v6) in XR, 10M(v4)/5M(v6) possible in a future release (search memory) vs. 4M(v4+v6)
- VRFs: 16K vs. 8K/LC
- Bridge domains: 256K/LC vs. 64K/LC
- TCAM: 80Mb vs. 40Mb
- Packet buffer: 12GB/NPU, 200ms vs. 2GB/NPU, 100ms
- EFPs: 256K/LC (64K/NP) vs. 64K/LC
- L3 subinterfaces (incl. BNG): 128K (64K/NP) vs. 20K/LC (system)
- Egress queues: 1M/NPU (4M for 8x100GE) vs. 256K/NPU
- Policers: 512K/NPU vs. 256K
Note: actual available LC/NPU capacity depends on software release and is subject to change.
Tomahawk Line Card CPU
CPU subsystem:
- Intel Ivy Bridge EN, 6 cores @ ~2GHz, with 3 DIMMs
- Integrated acceleration engine for crypto, pattern matching, and compression
- 1x 32GB SSD

Typhoon LC CPU vs. Tomahawk-SE LC CPU:
- Processor: P4040, 4 core 1.5GHz vs. Ivy Bridge EN, 6 core 2GHz
- LC CPU memory: 4GB vs. 24GB
- Cache (Tomahawk): L1: 32KB instructions per core; L2: 256KB; L3: 2.5MB per core
Tomahawk Modular LCs: MOD400 & MOD200
- MPAs supported: 2x100GE CPAK, 1x100GE CPAK, 20x10GE SFP+, and all Typhoon MPAs
- Flexibility to use CPAK 10G/40G/100G optics on the 100G MPAs
- Two flavors for flexibility: MOD400 has 2 Tomahawk ASICs (FCS August 2015); MOD200 has 1 Tomahawk ASIC (FCS October 2015)

MOD200 support matrix (bays EP0/EP1):
- Combo 1: EP0 2x100G-MPA, EP1 none
- Combo 2: EP0 20x10G-MPA, EP1 none
- Combo 3: EP0 1x100G or any Typhoon MPA, EP1 1x100G or any Typhoon MPA

MOD400 support matrix:
- Combo 1: EP0 2x100G-MPA, EP1 2x100G-MPA
- Combo 2: EP0 20x10G-MPA, EP1 20x10G-MPA
- Combo 3: EP0 2x100G-MPA, EP1 20x10G-MPA
- Combo 4: EP0 1x100G or any Typhoon MPA, EP1 2x100G-MPA or 20x10G-MPA
Tomahawk Line Card Architectures
[Diagrams: 8x100G/80x10G port LC, 4x100G/40x10G port LC, MOD400 LC, MOD200 LC.]
12x100G Tomahawk LC "Skyhammer"
High-level differences, 8x100GE "Octane" vs. 12x100GE "Skyhammer":
- 8-port 100GE with 4 NPU slices vs. 12-port 100GE with 6 NPU slices
- CPAK optics vs. QSFP28 optics
- 5 fabrics (7-fabric in roadmap for Nov/Dec 2015 FCS) vs. 7-fabric card
- Compatible with all 99xx-series and 90xx-series chassis vs. supported only in 99xx-series chassis
- Full L3VPN/L2VPN feature support vs. only L3 features in phase 1, L2 support in phase 2
- External TCAM for high QoS/ACL scale (192K total TCAM entries) vs. no external TCAM, only the 5Mb internal TCAM (32K entries)
- 32K sub-interfaces vs. 1K sub-interfaces
- 4M v4 / 2M v6 routes vs. under discussion
Tomahawk TCAM Optimized Scale (Tomahawk-TR vs. Tomahawk-SE vs. Skyhammer)
- MPLS labels: 1M (all)
- MAC addresses: 2M, 6M future (all)
- FIB routes (v4/v6), search memory: 4M(v4)/2M(v6) in XR, 10M/5M future (all)
- Mroute/MFIB (v4/v6): 128K/32K, 256K/64K future (all)
- VRFs: 8K, 16K future (all)
- Bridge domains: 64K (all)
- TCAM: 1/4 of the SE TCAM vs. 80Mbit TCAM vs. 5Mbit internal TCAM
- Packet buffer: 100ms (6G/NPU) vs. 200ms (12G/NPU) vs. 100ms
- EFPs: 16K/LC vs. 128K/LC (64K/NP) vs. N/A
- L3 subinterfaces (incl. BNG): 8K vs. 128K (64K/NP) vs. 1K
- IP/PPP/LAC subscriber sessions per LC: 16K vs. 256K (64K/NP) vs. N/A
- Egress queues: 8 queues/port + nV satellite queues vs. 1M/NPU (4M for 8x100GE) vs. 8 queues/port
- Policers: 32K/NPU vs. 512K/NPU
- QoS/ACL (v4/v6): 16K v4 or 4K v6 ACEs/LC vs. 98K v4 / 16K v6 ACEs vs. 24K v4 / 1.5K v6
ASR9k Optical Solution Before the IPoDWDM LC
ASR9k + NCS2k optical shelf: integrated nV optical system. The 100GE linecard (CFP/CPAK SR10 transceiver) uses a low-cost SR10 interconnect (CXP/CPAK) to a coherent transponder on the NCS2k. FCS: mid-CY15.
400G IPoDWDM LC Overview (Tomahawk-Based LC)
- Tomahawk-based linecard; feature and scale parity with the other TR and SE Tomahawk cards
- 2x CFP2-based DWDM ports (100G, 200G); BPSK, QPSK, 16QAM modulation options
- 96 channels, ITU-T 50GHz spacing; FlexSpectrum support
- HD FEC, SD FEC (3000+ km without regeneration)
- 20x 10GE SFP+ ports (SR, LR, ZR, CWDM, DWDM)
- Flexible port options up to 400 Gbps total capacity:
  - 2x 200G DWDM (CFP2), or
  - 2x 100G DWDM (CFP2) + 20x 10G (SFP+), or
  - 1x 100G + 1x 200G DWDM (CFP2) + 10x 10G (SFP+)
- OTN and pre-FEC FRR
- Target FCS: mid-CY15 (5.3.2) for 100G DWDM; 10G gray ports and 200G DWDM in a future release
400G IPoDWDM LC Internals - Specific HW Components
[Diagram: two slices, each with a 200G-capable coherent CFP2 port feeding an Etna HD-FEC FPGA and an X240 gearbox into a Tomahawk NP (with NP memory and TCAM) and a FIA (with VoQ memory) toward the SM15 central fabric; an X240 SerDes mux also aggregates the 20x 10GE SFP+ ports across the two slices; Ivy Bridge CPU complex on board.]
Tomahawk Per-Slice MACsec PHY Capability
MACsec security standards compliance:
- IEEE 802.1AE-2006
- IEEE 802.1AEbn-2011 (256-bit key)
- IEEE 802.1AEbw-2013 (extended packet numbering)
Security suites supported:
- AES-GCM-128, 128-bit key (32-bit packet number)
- AES-GCM-256, 256-bit key (32-bit packet number)
- AES-GCM-XPN-128, extended packet number counter (64 bits)
- AES-GCM-XPN-256, extended packet number counter (64 bits)
Unique security attributes per security association (SA):
- 10G port = 32 SAs; 40G port = 128 SAs; 100G port = 256 SAs
Per-slice port combinations supported (CPAK):
- 2x100G, 20x10G, 4x40G, 1x100G + 10x10G, 2x40G + 10x10G, 2x40G + 1x100G
All Tomahawk LC variants support MACsec: 8x100G, 4x100G, MOD-400, MOD-200.
ASR9k MACsec Phase 1: XR 5.4.0 Release
SP/CE/DCI/Enterprise use cases:
- Use case #1: link MACsec in an MPLS/IP topology - MACsec links on the CE-PE and PE-P hops
- Use case #2: link MACsec over LAG members - per-member-link MACsec with inheritance on the LAG
- Use case #3: CE port-mode MACsec over L2VPN - MKA between DCI/CE endpoints, port mode, across the L2VPN CE/WAN
- Use case #4: VLAN clear-tags MACsec over L2VPN - MKA between DCI/CE endpoints with the VLAN tags in the clear
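A minimal configuration sketch for link MACsec (use case #1), assuming the ASR9k MACsec CLI of this era; the keychain name KC-MKA, policy name MP-XPN256, interface, and hex CAK are illustrative placeholders:

! Pre-shared CAK/CKN for the MKA session (64-hex-digit key for aes-256-cmac; value is illustrative)
key chain KC-MKA macsec
 key 01
  key-string 1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef cryptographic-algorithm aes-256-cmac
  lifetime 00:00:00 january 01 2015 infinite
!
! MACsec policy selecting one of the supported cipher suites
macsec-policy MP-XPN256
 cipher-suite GCM-AES-XPN-256
 security-policy must-secure
!
! Attach MACsec on both ends of the link
interface HundredGigE0/1/0/0
 macsec psk-keychain KC-MKA policy MP-XPN256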


Secure L2VPN as a Service:
PW/EVPN/PBB (any L2) circuit encryption using MACsec
[Diagram: clear frames (DA|SA|VLAN|PAYLOAD|FCS) arrive on EFPs of 10G ports 1 and 3. On port 2 the frame is encrypted inside the PHY, inserting the SECTAG and ICV (DA|SA|VLAN|SECTAG|PAYLOAD|ICV|FCS) before the NPU and fabric. On port 3 the encrypted frame bypasses MACsec inside the PHY and is carried over an EoMPLS PW with the VC label (VC Label|DA|SA|VLAN|SECTAG|PAYLOAD|ICV|FCS), using a PHY secure-channel loopback + clear tags (Cisco IPR).]
Tomahawk MACsec Raw Performance and Scale
AES-GCM-256 raw performance (full-duplex):
- Per LC slice: 200Gbps
- Per LC slot: 800Gbps
- Per chassis: ASR9006 = 3.2Tbps, ASR9010 = 6.4Tbps, ASR9904 = 1.6Tbps, ASR9912 = 8Tbps, ASR9922 = 16Tbps
AES-GCM-256 raw scale per system:
- Total MACsec ports: 10G = 1,600; 40G = 320; 100G = 160
- Total MACsec SAs: 10G Tx/Rx SAs = 51,200; 40G Tx/Rx SAs = 40,960; 100G Tx/Rx SAs = 40,960
Customer Profile Tomahawk Performance
Generally targets >3.3x the performance of a Typhoon system.
Customer benchmark: 2x 100GE per Tomahawk NPU vs. 6x 10GE per Typhoon NPU.
- Web/OTT 1 (Typhoon reference: 160B IP @126mpps at 6x10G):
  - LSR: top label swap + bundle + TE tunnel + iMarking + eMarking/queue - Tomahawk LR frame 192B @ 32.7Mpps, 3.8x
  - LER: MPLS imposition + ieQoS + iNF + ieBundle + RSVP-TE/FRR - 293B @ 22.7Mpps, 5.5x
- Web/OTT 2 (Typhoon reference: 160B IP @126mpps at 6x10G):
  - LSR: label swap + bundle + TE tunnel + iMarking + eMarking/queuing - 128B @ 32.7Mpps, 3.8x
  - LER: IP + bundle + TE tunnel + iMarking + eMarking/queuing + iNF + recursive - 340B @ 19.8Mpps, 6.3x
- Tier-1 ISP/Peering 1: IPv4 & v6 recursive + in/out ACL + uRPF + in NF + iMAC accounting (Typhoon reference: 460B IP @52mpps) - 582B @ 12.1Mpps, 4.2x
- Tier-1 SP/Peering 2: IP to IP (recursive) + QoS(out) + ACL(in+out) + Netflow(1:5K, in) + bundle (Typhoon reference: 286B IP @81.6mpps) - iMix LR @ 6x10G, >3.3x
Tomahawk Baseline Performance
Generally targets 3.3x the performance of a Typhoon system.
Benchmark set: 2x 100GE per Tomahawk NPU vs. 6x 10GE per Typhoon NPU.
Tomahawk (240G per NPU, 149Mpps NPU capacity):
- IPv4 forwarding: 123B Eth @ 174.6Mpps
- IPv6: 128B Eth @ 169Mpps
- IPv4 NR + Rx ACL: 196B Eth @ 115Mpps
- IPv6 NR + Rx ACL: 225B Eth @ 102Mpps
- IPv4 + ingress policing + in NF (1:1k): 233B Eth @ 104Mpps
- L2 xconnect / VPLS + QoS + out shaping: 449B Eth @ 53.3Mpps
Typhoon (60G per NPU, 45Mpps NPU capacity):
- IPv4 forwarding: 148B Eth @ 44.6Mpps
- IPv6: 178B Eth @ 37.8Mpps
- IPv4 NR + Rx ACL: 300B Eth @ 23.4Mpps
- IPv6 NR + Rx ACL: 205B Eth @ 33.4Mpps
- IPv4 + ingress policing + in NF (1:1k): 316B Eth @ 22.3Mpps
- L2 xconnect / VPLS + QoS + out shaping: 445B Eth @ 16.1Mpps
Performance increase ratios (as printed): 4x BW, 3.3x pps capacity; per feature column: 4.48x, 4.48x, 3.07x, 4.42x, 3.91x, 3.3x.
Tomahawk Roadmap
Timeline: Jan 2015 (XR 5.3.0), April 2015 (XR 5.3.1), August 2015 (XR 5.3.2), Nov 2015 (XR 6.0.0), Mar 2016 (XR 6.1.0).
- Line cards across the timeline: 8x100GE LAN PHY*, 4x100GE, MOD200 OTN, 8x100GE OTN*, MOD400, 400G IPoDWDM, 12x100G 7-fab
- Commons: RSP880, RP2, SFC2
- Platform coverage per card (as shown on the slide): all platforms; 9910/9912/9922; 90xx/9904/9910; 9912/9922
* Oversubscribed on 9010, 9006 with a single RSP.
Tomahawk Line Card Port QoS Overview
- 4-priority interface queuing: PQ1, PQ2, PQ3 strict priority; the remaining queues are CBWFQs
- Configurable QoS policies using the IOS XR MQC CLI
- A QoS policy is applied to an interface (physical, bundle, or logical*) attachment point:
  - Main interface: MQC applied to a physical port takes effect for traffic that flows across all sub-interfaces on that physical port; it will NOT coexist with an MQC policy on a sub-interface** - you can have either a port-based or a sub-interface-based policy on a given physical port
  - L3 sub-interface
  - L2 sub-interface (EFP)
- The QoS policy is programmed into hardware microcode and the queuing ASIC on the line card NPU (see the sketch below)
* Some logical interfaces can apply a QoS policy, for example PWHE and BVI.
** A simple flat QoS policy on the main interface can coexist with sub-interface-level H-QoS in the ingress direction.
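A minimal flat MQC sketch of this model, assuming the class-maps VOICE and BUSINESS are already defined; all names and rates are illustrative:

! Egress port policy: one strict-priority class plus CBWFQs
policy-map PORT-EGRESS
 class VOICE
  priority level 1
  police rate 1 gbps
 class BUSINESS
  bandwidth percent 40
 class class-default
  queue-limit 50 ms
!
! Port-based attachment: applies to all traffic on the physical port
interface TenGigE0/0/0/0
 service-policy output PORT-EGRESS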
Tomahawk Line Card Port QoS Overview (cont.)
- Dedicated queuing ASIC / TM (traffic manager) per NPU for QoS functions
- -SE and -TR LC versions have different queue buffer/memory sizes and different numbers of queues
- 5-level flexible queuing/scheduling hierarchy: port groups (L0), ports (L1), logical ports (L2), sub-interfaces (L3), classes (L4)
- Egress & ingress shaping and policing
- Three strict-priority scheduling levels with priority propagation
- Flexible & granular classification and marking: full layer 2, full layer 3/4 IPv4, IPv6, MPLS
5-Level Hierarchy QoS Queuing Overview
Hierarchy: L0 port group -> L1 port -> L2 logical port (S-VLAN or VLAN group) -> L3 sub-interface/EFP (C-VLAN) -> L4 classes.
- L0 and L1 level schedulers are automated by the TM, not user configurable
- L2, L3, and L4 can be flexibly mapped to a parent-level scheduler
- The hierarchy levels used are determined by how many nested levels a policy-map is configured for and which sub-interface it is applied to
- Up to 16 classes per child/grandchild level (L4)
A nested policy sketch follows.
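A sketch of a two-level nested policy exercising the L3 (sub-interface shaper) and L4 (classes) levels of this hierarchy; the class-map VOICE and all names/rates are illustrative:

! Child (L4): per-class scheduling inside the sub-interface
policy-map CHILD
 class VOICE
  priority level 1
  police rate 100 mbps
 class class-default
  bandwidth remaining percent 100
!
! Parent (L3): shapes the whole sub-interface, then hands off to the child
policy-map SVLAN-PARENT
 class class-default
  shape average 500 mbps
  service-policy CHILD
!
interface TenGigE0/0/0/1.100
 service-policy output SVLAN-PARENT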
Queue/Scheduler Hierarchy MQC Capabilities
- L1 (port): shape (PIR, or port shaper)
- L2 (logical port): 2-parameter scheduler - shape, BRR weight (Bw or BwR)
- L3 (sub-interface): 3-parameter scheduler - shape, bandwidth, BRR weight
- L4 (class): 2-parameter - shape, BRR weight (Bw or BwR), priority (P1/P2/P3 plus normal-priority queues), WRED/queue-limit
QoS Classification Criteria
Classification on L2 interfaces/EFPs or L3 interfaces, match-all or match-any:
- L2 header fields: inner/outer CoS, inner/outer VLAN, DEI, source/destination MAC address*
- L3 header fields: outer EXP, DSCP/TOS, TTL, TCP flags, source/destination L4 ports, protocol, source/destination IPv4 address*
- Internal marking: discard-class, qos-group
Notes:
- Supports match-all or match-any
- Max 8 match statements per class, max 8 match entries per match statement
- Not all header fields can be used in one MQC policy-map (see details next)
A class-map sketch follows.
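A small class-map sketch against these criteria, using standard IOS XR MQC matches; names and values are illustrative:

! L3 match-any: either DSCP EF or topmost EXP 5 classifies the packet
class-map match-any VOICE
 match dscp ef
 match mpls experimental topmost 5
!
! L2 match-all: outer VLAN 100 and CoS 3 must both match
class-map match-all BUSINESS-SVLAN
 match vlan 100
 match cos 3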
Tomahawk 3-Level Policing
- Supports grandparent policing; conform-aware coupling between child and parent, and between parent and grandparent
- Only a 1R2C policer at the grandparent level
- Color-aware policing: 1R2C and MEF 2R3C; no IETF 2R3C
- The color value can be one of: outer CoS, EXP, precedence, qos-group, DSCP, or discard-class
- Only one conform-color value per policer; other values are treated as exceed-color
- Coupled color-aware policing:
  - Child level: the incoming packet's color field value is used for color matching (parent or child remarking does not take effect for color matching)
  - Parent level: child-level marking, if any, is effective in matching the color at the parent level
  - Grandparent level: only 1R2C without color awareness
Supported combinations (child / parent / grandparent):
- No policer / no policer / policer - allowed
- Policer / no policer / policer - allowed
- No policer / policer / policer - allowed
- Policer / policer / policer - allowed
- Coupled child / coupled parent / policer - rejected
- Coupled child / no policer / coupled parent - rejected
- Policer / coupled child / coupled parent - allowed
A 3-level sketch follows.
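A minimal sketch of the allowed policer/policer/policer combination from the table above (non-coupled, with a 1R2C policer at the grandparent level); names and rates are illustrative:

! Child level: per-class policer
policy-map POL-CHILD
 class GOLD
  police rate 50 mbps
!
! Parent level
policy-map POL-PARENT
 class class-default
  police rate 150 mbps
  service-policy POL-CHILD
!
! Grandparent level: 1R2C only, no color awareness
policy-map POL-GRANDPARENT
 class class-default
  police rate 300 mbps
  service-policy POL-PARENT
!
interface TenGigE0/0/0/2.200
 service-policy input POL-GRANDPARENT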
Shaping, Policing Overhead Accounting
- L2 frame length is used by default, without preamble and IFG
- The same length is used for ingress/egress and for all QoS actions
- A QoS policy can be configured to take arbitrary L1 framing and L2 overhead into account (see the sketch below)
Ethernet frame overhead (bytes): inter-frame gap 12, preamble 7, SFD 1, DA 6, SA 6, VLAN 4, length/type 2, payload 46-1500, FCS 4.
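A sketch of overhead accounting at policy-attachment time, assuming the ASR9k `account user-defined` option on service-policy; the value 20 approximates the L1 overhead (12B IFG + 7B preamble + 1B SFD):

interface TenGigE0/0/0/3
 ! Add 20 bytes per frame to the shaper/policer math to emulate L1 rates
 service-policy output PORT-EGRESS account user-defined 20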
Switch Fabric Architecture
1. Bandwidth and Redundancy Overview
2. Fabric QoS / Virtual Output Queuing
Cisco ASR 9000 High Level System Architecture - At a Glance
[Diagram: on each linecard, slices of PHY -> NP -> FIA connect over SerDes to the crossbar switch fabric (SM15); the LC CPU sits beside the FIAs, and the RSP/RP hosts the fabric and its own CPU.]
- Multi-stage (1, 2, 3) fabric operation
- Scalable & flexible
- Slice-based data plane
- Fully distributed & redundant system
- Switch fabric integrated on the RSP, or separated
ASR 9006/9010 Switch Fabric Overview
3-stage non-blocking fabric (separate unicast and multicast crossbars):
- Fabric frame format: super-frame
- Fabric load balancing: unicast is per-packet, multicast is per-flow
- Active-active fabric: each Typhoon linecard connects at 8x55Gbps to the unicast and multicast crossbars on RSP0 and RSP1 (RSP440); a fabric arbiter on each RSP grants access
Fabric bandwidth:
- 8x55Gbps = 440Gbps/slot with dual RSPs
- 4x55Gbps = 220Gbps/slot with a single RSP
ASR9k End-to-End System QoS Overview
- End-to-end priority (P1, P2, 2x best-effort) propagation
- Unicast VoQ and backpressure
- Unicast and multicast separation
Path through the system:
1. Ingress port QoS (ingress NP)
2. VoQs on the ingress FIA: 4 VoQs per SFP virtual port in the entire system; up to 8K VoQs per Tigershark (TSK) FIA vs. 4K per Skytrain (SKT) FIA
3. 4 priority egress destination queues (DQs) per SFP virtual port (VQI) on the egress FIA, aggregated at egress port rate
4. Egress port QoS (egress NP)
ASR9k Virtual Output Queuing (VoQ) System Architecture
- VoQ components (where they live)
- Egress NPU backpressure and VoQ in action
- Result: no head-of-line blocking (HOLB)
ASR9904 RSP880 Switch Fabric Architecture
Active/active 3-stage fabric; scales to 1.6Tbps LCs.
- Fabric frame format: super-frame
- Fabric load balancing: unicast is per-packet, multicast is per-flow
- Each Tomahawk linecard's SM15 fabric stage connects at 2x 5x115Gbps (~1.15Tbps) to the SM15 fabric and arbiter on each RSP880
Fabric bandwidth:
- 10x 115Gbps ~ 1.15Tbps/slot with dual RSPs
- 5x 115Gbps ~ 575Gbps/slot with a single RSP
ASR90xx RSP880 and Mixed LC Operation
- Fabric frame format: super-frame
- Fabric load balancing: unicast is per-packet, multicast is per-flow
- A Typhoon ingress linecard connects to the RSP880 SM15 fabric at 8x55Gbps; a Tomahawk egress linecard (with its own SM15 stage) connects at 8x115Gbps
Fabric bandwidth:
- Typhoon LC: 8x55Gbps = 440Gbps/slot with dual RSPs; 4x55Gbps = 220Gbps/slot with a single RSP
- Tomahawk LC: 8x115Gbps ~ 900Gbps/slot with dual RSPs; 4x115Gbps ~ 450Gbps/slot with a single RSP
ASR90xx RSP440 and Mixed LC Operation
- Fabric frame format: super-frame
- Fabric load balancing: unicast is per-packet, multicast is per-flow
- With RSP440 fabrics, both the Typhoon linecard and the Tomahawk linecard (via its SM15 stage) connect at 8x55Gbps
Fabric bandwidth (both LC types):
- 8x55Gbps = 440Gbps/slot with dual RSPs
- 4x55Gbps = 220Gbps/slot with a single RSP
ASR99xx Switch Fabric Card (FC2) Overview
6+1 all-active 3-stage fabric planes; scales to 1.6Tbps LCs.
- Fabric frame format: super-frame
- Fabric load balancing: unicast is per-packet, multicast is per-flow
- Each Tomahawk linecard's SM15 stage connects to the FC2 planes at 5x 2x115G bi-directional = 1.15Tbps (120G raw per link)
Fabric bandwidth:
- 10x 115Gbps ~ 1.15Tbps/slot with 5x FC2
- 8x 115Gbps ~ 920Gbps/slot with 4x FC2
ASR9922/12 SFC2 and Mixed-Generation LCs
When using 3rd-gen fabric cards:
- Tomahawk LC: (5-1) x 2x115G bi-directional = 920Gbps (protected)
- Typhoon LC: (5-1) x 2x55G bi-directional = 440Gbps (protected)
- Fabric lanes 6 & 7 are only used towards 3rd-gen 7-fabric-plane linecards
ASR9K Tomahawk Fabric Redundancy and BW Allocation Summary for 8x100G LCs with RSP880s/SFCv2
(fabric redundancy / per-slot fabric BW / per-8x100G LC data BW / QoS-priority protection / system)
- Dual RSP880s: 920G / 800G / Y - ASR9006, ASR9010
- Single RSP880: 460G / 400G / Y - ASR9006, ASR9010
- Dual RSP880s: 1.15T / 800G / Y - ASR9904
- Single RSP880: 575G / 500G / Y - ASR9904
- 5x SFCv2: 1.15T / 800G / Y - ASR9912, ASR9922
- 4x SFCv2: 920G / 800G / Y - ASR9912, ASR9922
- 3x SFCv2: 690G / 600G / Y - ASR9912, ASR9922
- 2x SFCv2: 460G / 400G / Y - ASR9912, ASR9922
Packet Flow and Control Plane
Architecture
1. Unicast
2. Multicast
3. L2
ASR9000 Fully Distributed Control Plane
- LPTS (Local Packet Transport Services): control-plane policing of punted packets
- RP CPU: routing, MPLS, IGMP, PIM, HSRP/VRRP, etc.
- LC CPU: ARP, ICMP, BFD, NetFlow, OAM, etc.
[Diagram: control packets are classified by LPTS in the NP and punted either to the LC CPU through the FIA, or across the switch fabric and punt FPGA to the RP CPU.]
An LPTS policing sketch follows.
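A minimal sketch of tuning LPTS hardware policers, assuming the IOS XR `lpts pifib hardware police` CLI; flow names vary by release and the pps rates are illustrative:

lpts pifib hardware police
 ! Rate-limit punted BGP packets from configured peers
 flow bgp configured rate 2500
 ! Rate-limit punted OSPF unicast packets
 flow ospf unicast default rate 1000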
Layer 3 Control Plane Overview
- Protocols (BGP, OSPF, ISIS, EIGRP, static, LDP, RSVP-TE) feed the RIB and LSD on the RP
- FIB and label state is distributed to the FIB/AIB on each LC over the internal EOBC; ARP on the LC CPU feeds the AIB
- FIB and adjacency are kept as separate structures programmed into the LC NPU
- The hierarchical FIB table structure enables prefix-independent convergence: TE/FRR, IP/FRR, BGP, link bundles
Abbreviations: AIB: Adjacency Information Base; RIB: Routing Information Base; FIB: Forwarding Information Base; LSD: Label Switch Database.
The show commands below can be used to walk these structures.
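A hedged set of standard IOS XR show commands for walking RIB -> FIB -> adjacency on a line card; the prefix and location are placeholders:

RP/0/RSP0/CPU0:router# show route 203.0.113.0/24
RP/0/RSP0/CPU0:router# show cef 203.0.113.0/24 detail location 0/0/CPU0
RP/0/RSP0/CPU0:router# show adjacency detail location 0/0/CPU0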
IOS-XR 2-Stage Lookup Packet Flow - Unicast Packet Flow
- The ingress lookup yields the packet's egress port and applies ingress features
- The egress lookup performs the packet rewrite and applies egress features
- All ingress packets are switched by the central switch fabric
[Diagram: packets enter a Typhoon NP on the ingress LC, pass through the ingress FIA and LC fabric ASIC to the central switch fabric, then through the egress FIA to the egress Typhoon NP and out the 100GE MAC/PHY.]
L3 Unicast Forwarding - Packet Flow (Simplified) Example
Ingress NPU (from wire to fabric):
1. TCAM: packet classification
2. rxIDB: source interface info; the L3 lookup key is (VRF-ID, IP DA)
3. L3FIB: FIB lookup; Rx LAG hashing selects the LAG member; ACL and QoS lookups happen in parallel
4. rx-adj: next-hop info (egress NPU); packet rewrite adds the system headers (ECH type: L3_UNICAST) and the packet is sent to the switch fabric port
Egress NPU (from fabric to wire):
1. The ECH type tells the egress NPU which type of lookup to execute; L3_UNICAST => L3FIB lookup
2. L3FIB: FIB lookup; ACL and QoS lookups happen before rewrite
3. tx-adj: next-hop info
4. txIDB: destination interface info; Tx LAG hashing; packet rewrite, then out to the wire
L3 Multicast Control Plane
1. Joins arrive (IGMP, PIM, MLD, MLDP, etc.)
2. The NPU punts joins directly to the RP, bypassing the LC CPU
3. Each protocol updates multicast info (VRF, route, olist, etc.) into the MRIB
4. The MRIB downloads all multicast info (VRF, route, olist, etc.) to the MFIB in each LC
5. The MFIB programs the hardware with the multicast info: Typhoon NPU, FIA, and LC fabric
A basic multicast-routing configuration sketch follows.
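A minimal sketch of enabling this control plane in IOS XR; the static RP address is a placeholder:

multicast-routing
 address-family ipv4
  ! Enable multicast forwarding and PIM on all interfaces
  interface all enable
!
router pim
 address-family ipv4
  rp-address 192.0.2.1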
L2 Multicast Control Plane
1. Joins arrive (IGMP, PIM, MLD, etc.)
2. The NPU punts joins directly to the RP, bypassing the LC CPU
3. Each snooping protocol updates the L2FIB (mrouter, port list, input port, BD, etc.)
4. The RP L2FIB downloads the L2 multicast info to each LC L2FIB
5. The LC L2FIB programs the hardware with the multicast info: Typhoon NPU, FIA, and LC fabric
Multicast Replication Model Overview - 2-Stage Replication
Multicast replication in the ASR9k resembles an SSM tree. The 2-stage replication model:
1. Fabric-to-LC replication
2. Egress NP OIF replication
The ASR9k doesn't use the inferior binary-tree or rooted unary-tree replication models.
FGID (Slotmask)
The FGID is a bitmask over physical slots: setting bit N replicates to physical slot N.

FGIDs, 10-slot chassis (logical slot / physical slot / binary mask / hex):
- LC7: 9 / 1000000000 / 0x0200
- LC6: 8 / 0100000000 / 0x0100
- LC5: 7 / 0010000000 / 0x0080
- LC4: 6 / 0001000000 / 0x0040
- RSP0: 5 / 0000100000 / 0x0020
- RSP1: 4 / 0000010000 / 0x0010
- LC3: 3 / 0000001000 / 0x0008
- LC2: 2 / 0000000100 / 0x0004
- LC1: 1 / 0000000010 / 0x0002
- LC0: 0 / 0000000001 / 0x0001

FGIDs, 6-slot chassis:
- LC3: 5 / 0000100000 / 0x0020
- LC2: 4 / 0000010000 / 0x0010
- LC1: 3 / 0000001000 / 0x0008
- LC0: 2 / 0000000100 / 0x0004
- RSP1: 1 / 0000000010 / 0x0002
- RSP0: 0 / 0000000001 / 0x0001

FGID calculation examples (10-slot chassis):
- LC6: 0x0100
- LC1 + LC5: 0x0002 | 0x0080 = 0x0082
- LC0 + LC3 + LC7: 0x0001 | 0x0008 | 0x0200 = 0x0209
ASR9k NG-MVPN (MLDP/P2MP-TE) Packet Flow - Imposition PE
ASR9k NG-MVPN (MLDP/P2MP-TE) Packet Flow - Disposition PE
2-Stage Forwarding Model - VPLS Packet Flow Example (Known Unicast)
[Diagram: an Ethernet frame enters the ingress Typhoon NP, which performs the L2 lookup and prepends the local fabric header plus the VC label (Fab Hdr | VC | ETH) before the switch fabric; the egress NP removes the fabric header, pushes the LDP transport label, and performs the L2 rewrite (LDP | VC | ETH) before the frame leaves the 100GE MAC/PHY.]
IOS-XR Architecture
1. 32-bit and 64-bit OS
2. IOS-XRv 9000 Virtual Forwarder
3. Netconf/Yang Programmability
Cisco IOS - A Recap
- Cisco IOS (1990s): monolithic "IOS blob" - control, data, and management planes in one image over a shared operational infra and kernel
- Cisco IOS-XE (2000s): IOSd (OSPF, BGP, PIM, SNMP, XML, LPTS, NetFlow, QoS, ACL, ...) running over a Linux/BinOS kernel
- Cisco IOS-XR (2003-14): fully distributed infra on a 32-bit QNX kernel, with the same process set distributed across nodes
- Cisco Virtual IOS-XR (present day): 64-bit Linux kernels over a virtualization layer; XR code (v1/v2), a separate System Admin plane, and hosted apps
Incremental development, with industry-leading investment protection.
32-bit XR Key Concepts
- Two-stage commit
- Config history database
- Rollback
- Atomic vs. best-effort commits
- Multiple config sessions
- Etc.
A short commit/rollback session sketch follows.
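A brief session sketch of two-stage commit and rollback (the annotations after "!" describe each step; the static route is illustrative):

RP/0/RSP0/CPU0:router(config)# router static address-family ipv4 unicast 0.0.0.0/0 192.0.2.1
RP/0/RSP0/CPU0:router(config)# show configuration        ! stage 1: inspect the uncommitted target config
RP/0/RSP0/CPU0:router(config)# commit                    ! stage 2: apply atomically to the running config
RP/0/RSP0/CPU0:router# show configuration commit list    ! config history database
RP/0/RSP0/CPU0:router# rollback configuration last 1     ! undo the most recent commit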
XR Command Modes
Exec - normal operations, monitoring, routing and CEF:
  RP/0/RP0/CPU0:router#
  show ipv4 interfaces brief / show running-config / show install active / show cef summary location 0/5/CPU0
Config - configuration for the L3 node:
  RP/0/RP0/CPU0:router(config)#
  router bgp 100 / taskgroup admins / policy-map foo / mpls ldp / ipv4 access-list block-junk
Admin - chassis operations, outside of SDRs:
  RP/0/RP0/CPU0:router(admin)#
  show controllers fabric plane all (CRS) / config-register 0x0 / show controllers fabric clock (12K) / install add (also in SDR)
Admin Config:
  RP/0/RP0/CPU0:router(admin-config)#
  sdr backbone location 0/5/* / pairing reflector location 0/3/* 0/4/*
Cisco IOS XR (Virtualized)
- Distributed system architecture: scale, fault isolation, control plane expansion
- Enhanced OS infrastructure: scale, SMP support, 64-bit CPU support, open-source apps
- Virtualization: ISSU, control and admin plane separation, simplified SDR architecture
- ISSU and HA architecture: provides Zero Packet Loss (ZPL) and Zero Topology Loss (ZTL) to avoid service outages
- Fault management: carrier-class fault handling, correlation, and alarm management
- Available since IOS XR 5.0 on NCS6K; rollout to other platforms: NCS4K, ASR9K, Skywarp, Fretta, Sunstone, and Rosco
IOS-XRv 9000: Efficient Virtual Forwarder
High-end, feature-rich data plane on x86: the IOS-XRv control plane plus an innovative virtual forwarder.
x86-optimized SW-based hardware assists:
- 2x10G line-rate FDX with PCIe pass-through
- SW hierarchical traffic manager with 3-level H-QoS: 512K queues @ 20G FDX performance per core
- High-speed interface classification and fine-grained load balancing
- SW policers that are color aware and nearly 4x faster than DPDK-based SW routers
- SW TCAM with a logical super-key & heuristic-cuts algorithms
- Data plane optimized for fast convergence
Forwarder performance: 30Gbps+ throughput on a single CPU core (TM with a million queues, 3-layer H-QoS); 20Gbps+ with features for IMIX traffic on a single socket.
Forwarding and feature path: IPv4, IPv6, MPLS, segment routing, ACLs, uRPF, packet replication, marking, policing, BFD.
- Portable 64-bit C code (to ARM-based platforms)
- Common code base with the Cisco nPower X family
NETCONF/YANG Supported on XR
NETCONF - NETwork CONFiguration Protocol:
- Network management protocol that defines management operations
- Initial focus on configuration, but extended for monitoring operations
- First standard: RFC 4741 (December 2006); latest revision: RFC 6241 (June 2011)
- Does not define the content (the YANG data) carried in management operations
YANG - Yet Another Next Generation:
- Data modeling language to define the NETCONF payload
- Defined in the context of NETCONF, but not tied to NETCONF
- Addresses gaps in SMIv2 (the SNMP MIB language); previous failed attempt: SMIng
- First approved standard: RFC 6020 (October 2010)
A sketch of enabling the NETCONF agent follows.
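A minimal sketch of enabling NETCONF over SSH on IOS XR, assuming the 64-bit XR `netconf-yang agent` CLI (older 32-bit releases use a different syntax):

! Enable SSHv2 and the NETCONF SSH subsystem
ssh server v2
ssh server netconf vrf default
! Start the NETCONF/YANG agent
netconf-yang agent ssh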
Complete Your Online Session Evaluation
Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.
Complete your session surveys through the Cisco Live mobile app or your computer on Cisco Live Connect.
Don't forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online.
Thank you
