CORD: Fabric - Open Networking Foundation

Transcription
CORD: FABRIC
An Open-Source Leaf-Spine L3 Clos Fabric
Saurav Das, Principal System Architect, ONF
In collaboration with:

ONF Operator Member Survey
36 Responders
February 6, 2015
Date Created: Monday, January 26, 2015
Date Ended: Saturday, February 07, 2015
36 Total Responses
Problem: Today’s Telco Central Offices (COs)

[Diagram: CO appliances - Message Router, DPI, SGSN/GGSN/PDN-GW, CDN, Session Border Controller, Firewall, Carrier Grade NAT, PE Router, BNG]

• Large number of COs, evolved over 40-50 years
• Fragmented, non-commodity hardware; physical install per appliance per site
• Nearly 300+ unique deployed appliances: a huge source of CAPEX/OPEX
• Not geared for agility/programmability; does not benefit from commodity hardware
CORD: Central Office Re-architected as a Datacenter

[Diagram: ONOS SDN control plane and XOS NFV orchestration (NFVI) over a leaf-spine fabric of spine and leaf switches on commodity hardware; GPON OLT MACs and a simple CPE/ONT on the access link, metro/core links on the egress side; virtualized applications vOLT, vBNG, and vCPE alongside services such as DHCP, LDAP, and RADIUS]
Open-Source Leaf-Spine Fabric

• Open-source, SDN-based, bare-metal white-box fabric
• HA; scales to 16 racks; OF 1.3; topology discovery, configuration, GUI, CLI, troubleshooting, ISSU
• Fabric control application: addressing, ECMP routing (see the sketch below), recovery, interoperability, API support
• ONOS controller cluster managing white-box leaf and spine switches
• Slow I/O (PON OLT MACs) on the access links; fast I/O on the metro/core links
• CORD Pod: up to 16 racks
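
The ECMP routing that the fabric control application programs can be pictured as a hash over the flow 5-tuple selecting one spine-facing uplink, so all packets of a flow stay on one path while flows spread across the spines. A minimal sketch of that idea, with hypothetical port numbers; this is illustrative, not the ONOS API:

```python
import hashlib

def ecmp_uplink(flow_5tuple, uplink_ports):
    # Hash the 5-tuple so every packet of a flow takes the same uplink,
    # while different flows spread across all spine-facing ports.
    key = "|".join(str(f) for f in flow_5tuple).encode()
    bucket = int(hashlib.md5(key).hexdigest(), 16) % len(uplink_ports)
    return uplink_ports[bucket]

uplinks = [49, 50, 51, 52]                      # hypothetical 40G uplink ports
flow = ("10.0.1.5", "10.0.3.7", 6, 51512, 80)   # src, dst, proto, sport, dport
print(ecmp_uplink(flow, uplinks))               # deterministic per flow
```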
White Box SDN Switch

• Spine switch: 32 x 40G QSFP+ ports, downlinks to leaf switches; GE management port
• Leaf switch: 48 x 10G ports (10GBASE-T or 10G SFP+) down to servers in the same rack (subnet); 6-12 x 40G ports uplink to different spine switches, with ECMP across all uplink ports
• Leaf/spine switch software stack, speaking OpenFlow 1.3 to the controller:
  Indigo OF Agent -> OF-DPA API -> OF-DPA -> BRCM SDK API -> BRCM ASIC, with ONL and ONIE on OCP bare-metal hardware

OCP: Open Compute Project; ONL: Open Network Linux; ONIE: Open Network Install Environment; BRCM: Broadcom merchant silicon ASICs; OF-DPA: OpenFlow Data Plane Abstraction

SPRING-OPEN: Segment Routing on Bare Metal Hardware
Learn more: https://wiki.onosproject.org/display/ONOS/Segment+Routing
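
SPRING-OPEN runs the fabric as an IP/MPLS network using segment routing: each switch is assigned a node segment ID (SID), and traffic crossing the fabric carries the destination leaf's SID as an MPLS label. A minimal sketch of that forwarding model, assuming made-up SID values and port maps (this is not the SPRING-OPEN code):

```python
NODE_SID = {"leaf1": 101, "leaf2": 102}   # hypothetical node segment IDs

def push_sid(ip_packet, dst_leaf):
    # Ingress leaf pushes the destination leaf's node SID as an MPLS label.
    return {"mpls_label": NODE_SID[dst_leaf], "payload": ip_packet}

def spine_forward(frame, label_to_port):
    # A spine forwards on the label toward the owning leaf and, as the
    # penultimate hop, pops the label before delivery (PHP).
    out_port = label_to_port[frame["mpls_label"]]
    return out_port, frame["payload"]

pkt = {"dst_ip": "10.0.2.9"}
frame = push_sid(pkt, "leaf2")
print(spine_forward(frame, {101: 1, 102: 2}))   # -> (2, {'dst_ip': '10.0.2.9'})
```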
First Step: PoC Demo at Solution Showcase

• Segment-routed fabric control: ONOS controller cluster
• SDN-controlled L3 leaf-spine Clos fabric
• 4 racks, 2 servers/rack, Dell S4810 bare metal, ONOS Cardinal controller cluster

CORD: Fabric demo scenarios:
• ECMP routing
• Policy-driven traffic engineering
• Analytics-driven traffic engineering
• Control plane failure recovery

CORD Roadmap – From Demo to Deployment
• Jan’15: AT&T and ONOS project define CORD solution POC
• June’15: CORD POC demo at ONS
• Dec’15: Lab trials with CORD POD
• June’16: CORD trial deployments – phase 1
• Dec’16: CORD trial deployments – phase 2
• 2017: Deployments by multiple Service Providers
Note: these timelines are ON.Lab’s projections and forward-looking.
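
The analytics-driven traffic engineering scenario rests on sFlow: switches export 1-in-N packet samples, a collector scales them back up to per-flow rate estimates, and flows above a threshold are flagged as elephants for re-routing. A hedged sketch of the detection step, with assumed sampling rate and threshold rather than the actual sFlow-RT/ONOS integration:

```python
from collections import defaultdict

SAMPLING_RATE = 1024            # assumed 1-in-1024 packet sampling
ELEPHANT_BPS = 100_000_000      # assumed threshold: ~100 Mb/s

def estimate_rates(samples, window_s):
    # samples: iterable of (flow 5-tuple, sampled packet size in bytes)
    byte_est = defaultdict(int)
    for flow, nbytes in samples:
        byte_est[flow] += nbytes * SAMPLING_RATE   # scale up by sampling rate
    return {f: b * 8 / window_s for f, b in byte_est.items()}

def elephants(samples, window_s=5):
    # Flag flows whose estimated rate crosses the elephant threshold.
    return [f for f, bps in estimate_rates(samples, window_s).items()
            if bps >= ELEPHANT_BPS]

window = [(("10.0.1.5", "10.0.3.7", 6, 51512, 80), 1500)] * 400
print(elephants(window))   # ~0.98 Gb/s estimated -> flagged for re-routing
```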
Summary

• CORD Fabric
  • Open-source
  • Spine-leaf architecture: L3 Clos
  • Bare metal hardware
  • SDN based: no use of distributed protocols
  • OF 1.3 multi-tables & ECMP groups (sketched after this list)
  • ONOS cluster controllers
  • IP/MPLS network using Segment Routing
  • sFlow-based analytics for TE of elephant flows

• Next?
  • Integration with vCPE-vOLT-NFaaS
  • Special CORD requirements, e.g. QinQ
  • Pod-based deployment requirements, e.g. BGP peering
  • Move to open source hardware, i.e. OCP/ONL/ONIE/OF-DPA
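
As a rough illustration of the "OF 1.3 multi-tables & ECMP groups" bullet: packets traverse a chain of flow tables (VLAN, termination MAC, L3 unicast) whose final entry points at a select group with one bucket per spine uplink. The table names, match fields, and ports below are simplified assumptions in the spirit of OF-DPA, not its exact table IDs:

```python
# One ECMP "select" group with a bucket per spine uplink; the ASIC hashes
# each flow onto one bucket.
ecmp_group = {"type": "select",
              "buckets": [{"output": p} for p in (49, 50, 51, 52)]}

# Simplified table chain: VLAN -> termination MAC -> L3 unicast -> group.
pipeline = [
    {"table": "vlan", "match": {"vlan_vid": 10},           "goto": "tmac"},
    {"table": "tmac", "match": {"eth_dst": "router-mac"},  "goto": "l3"},
    {"table": "l3",   "match": {"ipv4_dst": "10.0.3.0/24"},
     "action": {"group": ecmp_group}},
]

for stage in pipeline:
    print(stage["table"], "->", stage.get("goto") or stage["action"])
```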