Implementing QoS with Nexus and NX-OS

Transcription

BRKRST-2930
Follow us on Twitter for real time updates of the event:
@ciscoliveeurope, #CLEUR
Housekeeping
 We value your feedback- don't forget to complete your online session
evaluations after each session & the Overall Conference Evaluation
which will be available online from Thursday
 Visit the World of Solutions and Meet the Engineer
 Visit the Cisco Store to purchase your recommended readings
 Please switch off your mobile phones
 After the event don’t forget to visit Cisco Live Virtual:
www.ciscolivevirtual.com
© 2012 Cisco and/or its affiliates. All rights reserved.
Cisco Public
Session Goal
 This session will provide a technical description of the NX-OS QoS capabilities and hardware implementations of QoS functions on the Nexus 7000, 5500/5000, 3000 and Nexus 2000. It will also include a design- and configuration-level discussion of best practices for use of the Cisco Nexus family of switches in implementing QoS for Medianet, in addition to new QoS capabilities leveraged in the Data Centre to support FCoE, NAS, iSCSI and vMotion.
 This session is designed for network engineers involved in network switching design. A basic understanding of QoS and operation of the Nexus 2000/5000/5500/7000 series switches is assumed.
Implementing QoS with Nexus and NX-OS
Agenda
 Nexus and QoS
 New QoS Requirements
 New QoS Capabilities
 Understanding Nexus QoS Capabilities and Configuration
 Nexus 7000
 Nexus 5500
 Nexus 2000
 Nexus 3000
 Applications of QoS with Nexus
 Converting a Voice/Video IOS (Catalyst 6500) QoS Configuration to an NX-OS (Nexus 7000) Configuration
 Configuring Storage QoS Policies on Nexus 5500 and 7000 (FCoE & iSCSI)
Evolution of QoS Design
Switching Evolution and Specialization
 Quality of Service is not just about protecting voice and video anymore
 Campus specialization
• Desktop-based Unified Communications
• Blended wired & wireless access
 Data Center specialization
• Compute and storage virtualization (vMotion)
• Cloud computing
 Consolidation of more protocols onto the fabric
• Storage: FCoE, iSCSI, NFS
• Inter-process and compute communication (RoCE, vMotion, …)
NX-OS QoS Design Requirements
Where are we starting from?
 VoIP and Video are now mainstream technologies
 Ongoing evolution to the full spectrum of Unified
Communications
 High Definition Executive Communication Application requires
stringent Service-Level Agreement (SLA)
Reliable Service—High Availability Infrastructure
Application Service Management—QoS
NX-OS QoS Design Requirements
QoS for Voice and Video is implicit in current Networks
Application Class | Per-Hop Behavior | Admission Control | Queuing & Dropping | Application Examples
VoIP Telephony | EF | Required | Priority Queue (PQ) | Cisco IP Phones (G.711, G.729)
Broadcast Video | CS5 | Required | (Optional) PQ | Cisco IP Video Surveillance / Cisco Enterprise TV
Realtime Interactive | CS4 | Required | (Optional) PQ | Cisco TelePresence
Multimedia Conferencing | AF4 | Required | BW Queue + DSCP WRED | Cisco Unified Personal Communicator, WebEx
Multimedia Streaming | AF3 | Recommended | BW Queue + DSCP WRED | Cisco Digital Media System (VoDs)
Network Control | CS6 | - | BW Queue | EIGRP, OSPF, BGP, HSRP, IKE
Call-Signaling | CS3 | - | BW Queue | SCCP, SIP, H.323
Ops / Admin / Mgmt (OAM) | CS2 | - | BW Queue | SNMP, SSH, Syslog
Transactional Data | AF2 | - | BW Queue + DSCP WRED | ERP Apps, CRM Apps, Database Apps
Bulk Data | AF1 | - | BW Queue + DSCP WRED | E-mail, FTP, Backup Apps, Content Distribution
Best Effort | DF | - | Default Queue + RED | Default Class
Scavenger | CS1 | - | Min BW Queue (Deferential) | YouTube, iTunes, BitTorrent, Xbox Live
NX-OS QoS Design Requirements
QoS for Voice and Video is implicit in current Networks
4-Class Model: Realtime | Signaling / Control | Critical Data | Best Effort
8-Class Model: Voice | Interactive Video | Streaming Video | Call Signaling | Network Control | Critical Data | Scavenger | Best Effort
12-Class Model: Voice | Realtime Interactive | Multimedia Conferencing | Broadcast Video | Multimedia Streaming | Call Signaling | Network Control | Network Management | Transactional Data | Bulk Data | Scavenger | Best Effort
(The class models expand over time as application requirements grow.)
http://www.cisco.com/en/US/docs/solutions/Enterprise/WAN_and_MAN/QoS_SRND_40/QoSIntro_40.html#wp61135
NX-OS QoS Design Requirements
Attributes of Voice and Video
[Figure: voice packets are small, constant-size audio samples sent at a fixed 20 msec interval; video packets are bursts of large, variable-size packets (roughly 200-1400 bytes) sent per video frame every 33 msec]
NX-OS QoS Design Requirements
Trust Boundaries – What have we trusted?
 The trust boundary sits at the access-edge switches
 Conditionally trusted endpoints (example: IP Phone + PC)
 Secure endpoints (example: software-protected PC with centrally-administered QoS markings)
 Unsecure endpoints
NX-OS QoS Design Requirements
What else do we need to consider?
 The Data Center adds a number of new traffic types and requirements
• No drop, IPC, storage, vMotion, …
 New protocols and mechanisms
• 802.1Qbb, 802.1Qaz, ECN, …
Spectrum of Design Evolution
Ultra Low Latency
• Queueing is designed out of the network whenever possible
• Nanoseconds matter

HPC/Grid
• Low latency
• Bursty traffic (workload migration)
• IPC
• iWARP & RoCE

Virtualized Data Center
• vMotion, iSCSI, FCoE, NAS, CIFS
• Multi-tenant applications
• Voice & video

MSDC
• ECN & Data Center TCP
• Hadoop and incast loads on the server ports

[Figure: racks of blade servers illustrate each data centre type]
NX-OS QoS Requirements
What do we trust, and where do we classify and mark?
 Data Centre architectures provide a new set of trust boundaries
 The virtual switch extends the trust boundary into the memory space of the hypervisor
 Converged and virtualized adapters provide for local classification, marking and queuing

[Figure: N7K – CoS/DSCP marking, queuing and classification; CoS/DSCP-based queuing in the extended fabric (vPC); N5K – CoS/DSCP marking, queuing and classification; N2K – CoS marking, with CoS-based queuing in the extended fabric; CNA/A-FEX – classification and marking at the trust boundary; N1KV – classification, marking & queuing for VMs]
NX-OS QoS Requirements
CoS or DSCP?
 We have non-IP based traffic to consider again
• FCoE – Fibre Channel over Ethernet
• RoCE – RDMA over Converged Ethernet
 DSCP is still marked, but CoS will be required and used in Nexus Data Center designs
PCP/CoS | Network priority | Acronym | Traffic characteristics
1 | 0 (lowest) | BK | Background
0 | 1 | BE | Best Effort
2 | 2 | EE | Excellent Effort
3 | 3 | CA | Critical Applications
4 | 4 | VI | Video, < 100 ms latency
5 | 5 | VO | Voice, < 10 ms latency
6 | 6 | IC | Internetwork Control
7 | 7 (highest) | NC | Network Control
IEEE 802.1Q-2005
NX-OS QoS Requirements
Where do we put the new traffic types?
 In this example of a Virtualized Multi-Tenant Data Center there is
a potential overlap/conflict with Voice/Video queueing
assignments, e.g.
COS 3 – FCoE ‘and’ Call Control
COS 5 – NFS ‘and’ Voice bearer traffic
Traffic Type | Network Class | CoS | Class, Property, BW Allocation
Infrastructure | Control | 6 | Platinum, 10%
Infrastructure | vMotion | 4 | Silver, 20%
Tenant | Gold, Transactional | 5 | Gold, no drop, 30%
Tenant | Silver, Transactional | 2 | Bronze, 15%
Tenant | Bronze, Transactional | 1 | Best effort, 10%
Storage | FCoE | 3 | No Drop, 15%
Storage | NFS datastore | 5 | Silver
Non Classified | Data | 1 | Best Effort
Implementing QoS with Nexus and NX-OS
Agenda
 Nexus and QoS
 New QoS Requirements
 New QoS Capabilities
 Understanding Nexus QoS Capabilities
 Nexus 7000
 Nexus 5500
 Nexus 2000
 Nexus 3000
 Nexus 1000v
 Applications of QoS with Nexus
 Voice and Video
 Storage & FCoE
 Future QoS Design Considerations (Data Center TCP, ECN, optimized TCP)
Data Center Bridging Control Protocol
DCBX Overview - 802.1Qaz
 Negotiates Ethernet capabilities (PFC, ETS, CoS values) between DCB-capable peer devices
 Simplifies management: allows configuration and distribution of parameters from one node to another
 Responsible for logical link up/down signaling of Ethernet and Fibre Channel
 DCBX is LLDP with new TLV fields
 The original pre-standard CIN (Cisco, Intel, Nuova) DCBX utilized additional TLVs
 DCBX negotiation failures result in:
• per-priority-pause not enabled on CoS values
• vfc not coming up, when DCBX is being used in an FCoE environment

[Figure: DCBX negotiation between switch and CNA adapter]
dc11-5020-3# sh lldp dcbx interface eth 1/40
Local DCBXP Control information:
  Operation version: 00  Max version: 00  Seq no: 7  Ack no: 0
Type/Subtype  Version  En/Will/Adv  Config
006/000       000      Y/N/Y        00
<snip>
https://www.cisco.com/en/US/netsol/ns783/index.html
Priority Flow Control
FCoE Flow Control Mechanism – 802.1Qbb
 Enables lossless Ethernet using PAUSE based on a COS as defined
in 802.1p
 When link is congested, CoS assigned to “no-drop” will be PAUSED
 Other traffic assigned to other CoS values will continue to transmit
and rely on upper layer protocols for retransmission
 Not only for FCoE traffic
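On the Nexus 5500 a no-drop (PFC) class is created through the standard MQC model; the following is a minimal sketch, assuming CoS 4 carries the lossless traffic (class names, qos-group and CoS values are illustrative, not from this session):

```text
! Classify CoS 4 into a user-defined system class (qos-group 2)
class-map type qos match-any CM-LOSSLESS
  match cos 4
policy-map type qos PM-QOS-IN
  class CM-LOSSLESS
    set qos-group 2
! Make that system class lossless: PFC PAUSE is generated instead of dropping
class-map type network-qos CM-NQ-LOSSLESS
  match qos-group 2
policy-map type network-qos PM-NQ
  class type network-qos CM-NQ-LOSSLESS
    pause no-drop
system qos
  service-policy type qos input PM-QOS-IN
  service-policy type network-qos PM-NQ
```

The `pause no-drop` action in the network-qos policy is what turns the class into a PFC "virtual lane".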
[Figure: a native Fibre Channel link uses buffer-to-buffer credits (R_RDY) for flow control; on an Ethernet link, PFC sends STOP/PAUSE for only the congested CoS "virtual lane" (one of eight transmit queues) while the other queues continue to transmit]
Enhanced Transmission Selection (ETS)
Bandwidth Management – 802.1Qaz
 Prevents a single traffic class from "hogging" all the bandwidth and starving other classes
 When a given load doesn't fully utilize its allocated bandwidth, it is available to other classes
 Helps accommodate classes of a "bursty" nature
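On the Nexus 5500, this ETS behavior maps onto a "type queuing" policy; a minimal sketch with illustrative percentages (the weights are DWRR shares, enforced only under congestion, and `class-fcoe` assumes the FCoE classes are present):

```text
policy-map type queuing PM-ETS
  class type queuing class-fcoe
    bandwidth percent 40
  class type queuing class-default
    bandwidth percent 60
system qos
  service-policy type queuing output PM-ETS
```

When one class is idle, its share is redistributed to the others, matching the behavior in the bullets above.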
[Figure: on a 10 GE link, three classes (HPC, storage, LAN) offer between 2 G/s and 6 G/s of traffic at times t1 through t3; ETS guarantees each class its configured share while redistributing unused bandwidth, so realized utilization tracks the offered load without any class being starved]
Data Center TCP
Explicit Congestion Notification (ECN)
ECN is an extension to TCP/IP that provides end-to-end congestion notification without dropping packets. Both the network infrastructure and the end hosts must support ECN for it to function properly. ECN uses the two least significant bits of the DiffServ byte in the IP header to encode four values. During periods of congestion a router marks the ECN field as Congestion Experienced (binary 11); the receiving host then notifies the source host to reduce its transmission rate.
ECN field values in the IP header:
00 – Non ECN-Capable Transport
10 – ECN Capable Transport, ECT(0)
01 – ECN Capable Transport, ECT(1)
11 – Congestion Experienced
ECN Configuration:
The configuration for enabling ECN is very similar to the previous WRED example, so only the
policy-map configuration with the ecn option is displayed for simplicity.
N3K-1(config)# policy-map type network-qos traffic-priorities
N3K-1(config-pmap-nq)# class type network-qos class-gold
N3K-1(config-pmap-nq-c)# congestion-control random-detect ecn
WRED and ECN are always
applied to the system policy
Notes: When configuring ECN ensure there are not any queuing policy-maps
applied to the interfaces. Only configure the queuing policy under the system policy.
Implementing QoS with Nexus and NX-OS
Agenda
 Nexus and QoS
 Nexus and NX-OS
 New QoS Capabilities and Requirements
 Understanding Nexus QoS Capabilities
 Nexus 7000
 Nexus 5500
 Nexus 2000
 Nexus 3000
 Nexus 1000v
 Applications of QoS with Nexus
 Voice and Video
 Storage & FCoE
 Hadoop and Web 2.0
 Future QoS Design Considerations (Data Center TCP, ECN, optimized TCP)
Nexus 7000 I/O Module Families
M and F Series Line Cards
 M family – L2/L3/L4 with large forwarding tables and
rich feature set
N7K-M148GT-11/N7K-M148GT-11L
N7K-M108X2-12L
N7K-M148GS-11/N7K-M148GS-11L
N7K-M132XP-12/
N7K-M132XP-12L
 F family – High performance, low latency, low power
and streamlined feature set
N7K-F132XP-15
N7K-F248XP-25
Now Shipping
Nexus 7000 M1 I/O Module
QoS Capabilities
 Modular QoS CLI Model
 3-step model to configure and apply policies:
– Define match criteria (class-map)
– Associate actions with the match criteria (policy-map)
– Attach set of actions to interface (service-policy)
 Two types of class-maps/policy-maps (C3PL provides option of type)
– type qos – to configure marking rules (default type)
– type queuing – to configure port based QoS rules
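A minimal sketch of the 3-step model on the Nexus 7000 (class and policy names, DSCP and CoS values are illustrative):

```text
! Step 1: define match criteria
class-map type qos match-any CM-VOICE
  match dscp 46
! Step 2: associate actions with the match criteria
policy-map type qos PM-MARK-IN
  class CM-VOICE
    set cos 5
! Step 3: attach the set of actions to an interface
interface ethernet 1/1
  service-policy type qos input PM-MARK-IN
```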
[Figure: ingress linecard (PHY, R2D2, EARL) and egress linecard (EARL, R2D2, PHY): ingress queuing policies are enforced at the ingress port ASIC, ingress QoS policies at the ingress forwarding engine; egress QoS policies are enforced at the egress forwarding engine, egress queuing policies at the egress port ASIC]
Nexus 7000 M1 I/O Module
QoS Ingress Capabilities
Applied at the ingress port ASIC:

Input Queuing & Scheduling
 CoS-to-queue mapping
 Bandwidth allocation (DWRR)
 Buffer allocation
 Congestion avoidance (WRED [1] and tail drop)

Applied at the ingress forwarding engine (ingress pipe):

Ingress Mutation
 CoS mutation
 IP Precedence mutation
 IP DSCP mutation

Ingress Classification
 Class-map matching criteria: CoS, IP Precedence, IP DSCP, QoS Group, Discard Class, ACL-based (SMAC/DMAC, IP SA/DA, protocol, L4 ports, L4 protocol fields)

Marking
 Set CoS, IP Precedence, DSCP

Ingress Policing
 1-rate 2-color and 2-rate 3-color aggregate policing
 Shared policers
 Color-aware policing
 Policing actions: transmit, drop, change CoS/IP Prec/DSCP, markdown, set QoS Group or Discard Class

1. WRED on ingress GE ports only
Nexus 7000 M1 I/O Module
QoS Egress Capabilities
Applied at the forwarding engine (egress pipe):

Egress Classification
 Class-map matching criteria: ACL-based (L2 SA/DA, IP SA/DA, protocol, L4 port range, L4 protocol-specific field), CoS, IP Precedence, DSCP, protocols (non-IP), QoS Group, Discard Class

Marking
 CoS, IP Precedence, IP DSCP

Egress Policing
 1-rate 2-color and 2-rate 3-color aggregate policing
 Shared policers
 Color-aware aggregate policing
 Policing actions: transmit, drop, change CoS/IP Prec/DSCP, markdown

Egress Mutation
 CoS mutation, IP Precedence mutation, IP DSCP mutation

Applied at the egress port ASIC:

Output Queuing & Scheduling
 CoS-to-queue mapping
 Bandwidth allocation
 Buffer allocation
 Congestion avoidance (WRED & tail drop)
 Priority queuing
 SRR (no PQ)
How to Configure Queuing on Nexus 7000
Key concept: Queuing service policies
 Queuing service policies leverage port ASIC capabilities to map traffic to
queues and schedule packet delivery
 Define queuing classes
Class maps that define the COS-to-queue mapping
i.e., which COS values go in which queues?
 Define queuing policies
Policy maps that define how each class is treated
i.e., how does the queue belonging to each class behave?
 Apply queuing service policies
Service policies that apply the queuing policies
i.e., which policy is attached to which interface in which direction?
Queuing Classes
 class-map type queuing – Configure COS-queue mappings
 Queuing class-map names are static, based on port-type and
queue
tstevens-7010(config)# class-map type queuing match-any
  1G ingress port type:  2q4t-in-q1, 2q4t-in-q-default
  10G ingress port type: 8q2t-in-q1 through 8q2t-in-q7, 8q2t-in-q-default
  1G egress port type:   1p3q4t-out-pq1, 1p3q4t-out-q2, 1p3q4t-out-q3, 1p3q4t-out-q-default
  10G egress port type:  1p7q4t-out-pq1, 1p7q4t-out-q2 through 1p7q4t-out-q7, 1p7q4t-out-q-default

tstevens-7010(config)# class-map type queuing match-any 1p3q4t-out-pq1
tstevens-7010(config-cmap-que)# match cos 7
tstevens-7010(config-cmap-que)#
 Configurable only in default VDC
Changes apply to ALL ports of specified type in ALL VDCs
Changes are traffic disruptive for ports of specified type
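For example, a sketch of remapping CoS 5 into the 1G egress priority queue alongside CoS 7 (recall this must be done in the default VDC, applies to all 1p3q4t ports in all VDCs, and is disruptive; the CoS values are illustrative):

```text
tstevens-7010(config)# class-map type queuing match-any 1p3q4t-out-pq1
tstevens-7010(config-cmap-que)# match cos 5,7
```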
Queuing Policies
 policy-map type queuing – Define per-queue behavior such as queue
size, WRED, shaping
tstevens-7010(config)# policy-map type queuing pri-q
tstevens-7010(config-pmap-que)# class type queuing 1p3q4t-out-pq1
tstevens-7010(config-pmap-c-que)#
  bandwidth   no   priority   queue-limit   random-detect   set   shape   exit
tstevens-7010(config-pmap-c-que)#
 Note that some "sanity" checks are only performed when you attempt to tie the policy to an interface
e.g., WRED on ingress 10G ports
Queue Attributes
 priority — defines queue as the priority queue
 bandwidth — defines WRR weights for each queue
 shape — defines SRR weights for each queue
Note: enabling shaping disables PQ support for that port
 queue-limit — defines queue size and defines tail-drop
thresholds
 random-detect — sets WRED thresholds for each
queue
Note: WRED and tail-drop parameters are mutually exclusive
on a per-queue basis
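A sketch combining these attributes in one egress queuing policy for the 1G port type (the weights are illustrative; real values depend on the design, and the DWRR weights apply only to the non-priority queues):

```text
policy-map type queuing OUT-1G-SKETCH
  class type queuing 1p3q4t-out-pq1
    priority
  class type queuing 1p3q4t-out-q2
    bandwidth percent 40
  class type queuing 1p3q4t-out-q-default
    bandwidth percent 60
```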
Queuing Service Policies
 service-policy type queuing – Attach a queuing policy-map to an interface
 Queuing policies always tied to physical port
 No more than one input and one output queuing policy
per port
tstevens-7010(config)# int e1/1
tstevens-7010(config-if)# service-policy type queuing input my-in-q
tstevens-7010(config-if)# service-policy type queuing output my-out-q
tstevens-7010(config-if)#
QoS “Golden Rules”
Assuming DEFAULTS –
 For bridged traffic, COS is preserved, DSCP is
unmodified
 For routed traffic, DSCP is preserved, DSCP[0:2] (as
defined by RFC2474) copied to COS
For example, DSCP 40 (b101000) becomes COS 5 (b101)
 Changes to default queuing policies, or application of
QoS marking policies, can modify this behavior
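As a sketch of the last point, a qos marking policy applied at ingress changes what the defaults then act on (the ACL, names and port are illustrative): traffic remarked to DSCP 18 (b010010) will, when routed, leave with CoS 2 (b010) under the default copy rule.

```text
ip access-list ACL-APP
  permit tcp any any eq 1521
class-map type qos match-any CM-APP
  match access-group name ACL-APP
policy-map type qos PM-REMARK-IN
  class CM-APP
    set dscp 18
interface ethernet 1/1
  service-policy type qos input PM-REMARK-IN
```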
Implementing QoS with Nexus and NX-OS
Agenda
 Nexus and QoS
 Nexus and NX-OS
 New QoS Capabilities and Requirements
 Understanding Nexus QoS Capabilities
 Nexus 7000
 Nexus 5500
 Nexus 2000
 Nexus 3000
 Applications of QoS with Nexus
 Voice and Video
 Storage & FCoE
 Hadoop and Web 2.0
Nexus 5000/5500 QoS
QoS Capabilities and Configuration
 Nexus 5000 supports a new set of QoS capabilities designed to
provide per system class based traffic control
 Lossless Ethernet—Priority Flow Control (IEEE 802.1Qbb)
 Traffic Protection—Bandwidth Management (IEEE 802.1Qaz)
 Configuration signaling to end points—DCBX (part of IEEE
802.1Qaz)
 These new capabilities are added to and managed by the common
Cisco MQC (Modular QoS CLI) which defines a three-step
configuration model
 Define matching criteria via a class-map
 Associate action with each defined class via a policy-map
 Apply policy to entire system or an interface via a service-policy
 Nexus 5000/7000 leverage the MQC qos-group capabilities to
identify and define traffic in policy configuration
Nexus 5000/5500 QoS
Packet Forwarding: Ingress Queuing
Traffic is queued in the ingress interface buffers, providing a cumulative scaling of buffers for congested ports.
 In typical Data Center access designs, multiple ingress access ports transmit to a few uplink ports
 Nexus 5000 and 5500 utilize an ingress queuing architecture
 Packets are stored in ingress buffers until the egress port is free to transmit
 Ingress queuing provides an additive effect: the total queue size available is equal to [number of ingress ports x queue depth per port]
 Statistically, ingress queuing provides the same advantages as shared-buffer memory architectures
[Figure: egress queue 0 is full and the link congested; the backlog is held across the ingress port buffers]
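As a worked example of that additive effect (the port count is illustrative; the per-class q-size of 243,200 bytes is the value reported by `show queuing` on a Nexus 5020 later in this session):

```text
effective buffering for one congested egress port
  = (number of ingress ports sending to it) x (per-port ingress queue size)
  e.g. 31 ingress ports x 243,200 bytes ≈ 7.5 MB
```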
Nexus 5000/5500 QoS
Virtual Output Queues
 Nexus 5000 and 5500 use an 8-queue QoS model for unicast traffic
 Traffic is queued in the ingress buffer until the egress port is free to transmit the packet
 To prevent head-of-line blocking (HOLB), Nexus 5000 and 5500 use a Virtual Output Queue (VoQ) model
 Each ingress port has a unique set of 8 virtual output queues for every egress port (1024 ingress VoQs = 128 destinations x 8 classes on every ingress port)
 If queue 0 is congested for any port, traffic in queue 0 for all the other ports is still able to be transmitted
 Common shared buffer on ingress; VoQs are pointer lists, not physical buffers
[Figure: packets for Eth 1/20 are held in its VoQ because egress queue 0 on Eth 1/20 is full, while a packet for Eth 1/8 can be sent to the Unified Crossbar Fabric because egress queue 0 on Eth 1/8 is free]
Nexus 5000/5500 QoS
QoS Policy Types
 There are three QoS policy types used to define system behavior (qos, queuing, network-qos)
 There are three policy attachment points to apply these policies to:
• Ingress interface
• System as a whole (defines global behavior)
• Egress interface
[Figure: ingress UPC and egress UPC connected by the Unified Crossbar Fabric]
Policy Type | Function | Attach Point
qos | Define traffic classification rules | system qos; ingress interface
queuing | Strict priority queue; Deficit Weighted Round Robin | system qos; egress interface; ingress interface
network-qos | System class characteristics (drop or no-drop, MTU), buffer size, marking | system qos
Nexus 5500 QoS
QoS Defaults
 QoS is enabled by default (it is not possible to turn it off)
 Three default classes of service are defined when the system boots up:
• Two for control traffic (CoS 6 & 7)
• Default Ethernet class (class-default: all others)
 The Cisco Nexus 5500 switch supports five user-defined classes plus the one default drop system class
 FCoE queues are not pre-allocated
 When configuring FCoE, the predefined service policies must be added to the existing QoS configuration:

# Predefined FCoE service policies
service-policy type qos input fcoe-default-in-policy
service-policy type queuing input fcoe-default-in-policy
service-policy type queuing output fcoe-default-out-policy
service-policy type network-qos fcoe-default-nq-policy
Nexus 5500 Series
Layer 3 QoS Configuration
 Internal QoS information determined by the ingress Carmel (UPC) ASIC is not passed to the Lithium L3 ASIC; the packet qos-group is not carried, so Lithium leverages the dot1p CoS value
 All routed traffic therefore needs to be marked with a dot1p CoS value, which is used to:
• Queue traffic to and from the Lithium L3 ASIC
• Restore the qos-group for egress forwarding
 It is mandatory to set CoS for the frame in the network-qos policy, with a one-to-one mapping between a qos-group and a CoS value
 Classification can be applied to physical interfaces (L2 or L3, including L3 port-channels), not to SVIs
 On initial ingress, the packet is matched by the qos policy and associated with a qos-group for queuing and policy enforcement; if traffic is congested on ingress to the L3 ASIC it is queued on the ingress UPC ASIC, and the routed packet is queued on egress from Lithium based on dot1p

class-map type network-qos nqcm-grp2
  match qos-group 2
class-map type network-qos nqcm-grp4
  match qos-group 4
policy-map type network-qos nqpm-grps
  class type network-qos nqcm-grp2
    set cos 4
  class type network-qos nqcm-grp4
    set cos 2
Nexus 5500 Series
Layer 3 QoS Configuration
 Apply the "type qos" and network-qos policies for classification on the L3 interfaces and on the L2 interfaces (or simply system-wide)
 Apply the "type queuing" policy at the system level in the egress direction (output)
 Trident has CoS queues associated with every interface:
• 8 unicast CoS queues
• 4 multicast CoS queues
 The individual dot1p priorities are mapped one-to-one to the unicast CoS queues, dedicating a queue to every traffic class
 With only 4 multicast queues available, the user needs to explicitly map dot1p priorities to the multicast queues:
• wrr-queue cos-map <queue ID> <CoS Map>

Nexus-5500(config)# wrr-queue cos-map 0 1 2 3
Nexus-5500(config)# sh wrr-queue cos-map
MCAST Queue ID    Cos Map
0                 0 1 2 3
1
2                 4 5
3                 6 7
Nexus 5000/5500 QoS
Mapping the Switch Architecture to ‘show queuing’
dc11-5020-4# sh queuing int eth 1/39
Interface Ethernet1/39 TX Queuing
qos-group  sched-type  oper-bandwidth
    0        WRR           50
    1        WRR           50

Interface Ethernet1/39 RX Queuing
qos-group 0
q-size: 243200, HW MTU: 1600 (1500 configured)
drop-type: drop, xon: 0, xoff: 1520
Statistics:
  Pkts received over the port             : 85257
  Ucast pkts sent to the cross-bar        : 930
  Mcast pkts sent to the cross-bar        : 84327
  Ucast pkts received from the cross-bar  : 249
  Pkts sent to the port                   : 133878
  Pkts discarded on ingress               : 0
Per-priority-pause status                 : Rx (Inactive), Tx (Inactive)
<snip – other classes repeated>
Total Multicast crossbar statistics:
  Mcast pkts received from the cross-bar  : 283558

Note: "Pkts discarded on ingress" counts packets arriving on this port but dropped from the ingress queue due to congestion on the egress port.
Configuring QoS on the Nexus 5500
Create New System Class
Step 1 Define qos Class-Map
N5k(config)# ip access-list acl-1
N5k(config-acl)# permit ip 100.1.1.0/24 any
N5k(config-acl)# exit
N5k(config)# ip access-list acl-2
N5k(config-acl)# permit ip 200.1.1.0/24 any
N5k(config)# class-map type qos class-1
N5k(config-cmap-qos)# match access-group name acl-1
N5k(config-cmap-qos)# class-map type qos class-2
N5k(config-cmap-qos)# match access-group name acl-2
N5k(config-cmap-qos)#
Step 2 Define qos Policy-Map
N5k(config)# policy-map type qos policy-qos
N5k(config-pmap-qos)# class type qos class-1
N5k(config-pmap-c-qos)# set qos-group 2
N5k(config-pmap-c-qos)# class type qos class-2
N5k(config-pmap-c-qos)# set qos-group 3
Step 3 Apply qos Policy-Map under
“system qos” or interface
N5k(config)# system qos
N5k(config-sys-qos)# service-policy type qos input policy-qos
Notes:
 The example creates two system classes for traffic with different source address ranges
 Supported matching criteria:

N5k(config)# class-map type qos class-1
N5k(config-cmap-qos)# match ?
  access-group  Access group
  cos           IEEE 802.1Q class of service
  dscp          DSCP in IP(v4) and IPv6 packets
  ip            IP
  precedence    Precedence in IP(v4) and IPv6 packets
  protocol      Protocol

 The qos-group range for user-configured system classes is 2-5
 A policy under system qos is applied to all interfaces
 A policy under an interface is preferred if the same type of policy is applied under both system qos and the interface:

N5k(config)# interface e1/1-10
N5k(config-if-range)# service-policy type qos input policy-qos
Configuring QoS on the Nexus 5500
Create New System Class (Continued)
Step 4 Define network-qos Class-Map

N5k(config)# class-map type network-qos class-1
N5k(config-cmap-nq)# match qos-group 2
N5k(config-cmap-nq)# class-map type network-qos class-2
N5k(config-cmap-nq)# match qos-group 3

 Match qos-group is the only option for a network-qos class-map
 The qos-group value is set by the qos policy-map on the previous slide

Step 5 Define network-qos Policy-Map

N5k(config)# policy-map type network-qos policy-nq
N5k(config-pmap-nq)# class type network-qos class-1
N5k(config-pmap-nq-c)# class type network-qos class-2

 No action tied to a class indicates default network-qos parameters
 Policy-map type network-qos is used to configure the no-drop class, MTU, ingress buffer size and 802.1p marking

Step 6 Apply network-qos policy-map under system qos context

N5k(config-pmap-nq-c)# system qos
N5k(config-sys-qos)# service-policy type network-qos policy-nq
N5k(config-sys-qos)#

 Default network-qos parameters:
Network-QoS Parameter | Default Value
Class type            | Drop class
MTU                   | 1538
Ingress buffer size   | 20.4 KB
Marking               | No marking
Configuring QoS on the Nexus 5500
Strict Priority and Bandwidth Sharing
 Create new system class by using policy-map types qos and network-qos (previous two slides)
 Then define and apply policy-map type queuing to configure strict priority and bandwidth sharing
 Check the queuing and bandwidth allocation with the command show queuing interface
N5k(config)# class-map type queuing class-1
N5k(config-cmap-que)# match qos-group 2
N5k(config-cmap-que)# class-map type queuing class-2
N5k(config-cmap-que)# match qos-group 3
N5k(config-cmap-que)# exit
Define queuing class-map
N5k(config)# policy-map type queuing policy-BW
N5k(config-pmap-que)# class type queuing class-1
N5k(config-pmap-c-que)# priority
N5k(config-pmap-c-que)# class type queuing class-2
N5k(config-pmap-c-que)# bandwidth percent 40
N5k(config-pmap-c-que)# class type queuing class-fcoe
N5k(config-pmap-c-que)# bandwidth percent 40
N5k(config-pmap-c-que)# class type queuing class-default
N5k(config-pmap-c-que)# bandwidth percent 20
Define queuing policy-map
N5k(config-pmap-c-que)# system qos
N5k(config-sys-qos)# service-policy type queuing output policy-BW
N5k(config-sys-qos)#
Apply queuing policy under system qos or egress interface
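A quick way to sanity-check a queuing policy like the one above: the priority class is serviced strictly, and the remaining classes' bandwidth percentages are expected to account for the full link. A minimal sketch (class names taken from the slide; this encodes the 100% rule used in the example, not a statement of every platform's enforcement):

```python
# Queuing policy from the slide: class-1 is strict priority, the rest
# share the link by WRR bandwidth percentages.
policy = {
    "class-1": {"priority": True},
    "class-2": {"bandwidth_percent": 40},
    "class-fcoe": {"bandwidth_percent": 40},
    "class-default": {"bandwidth_percent": 20},
}

def wrr_percent_total(policy):
    """Sum the bandwidth percentages of the non-priority (WRR) classes."""
    return sum(c.get("bandwidth_percent", 0)
               for c in policy.values()
               if not c.get("priority"))

assert wrr_percent_total(policy) == 100
```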
Configuring QoS on the Nexus 5500
Set Jumbo MTU
 Nexus 5000 supports different MTU for each system
class
 MTU is defined in network-qos policy-map
 No interface level MTU support on Nexus 5000
 Following example configures jumbo MTU for all
interfaces
N5k(config)# policy-map type network-qos policy-MTU
N5k(config-pmap-uf)# class type network-qos class-default
N5k(config-pmap-uf-c)# mtu 9216
N5k(config-pmap-uf-c)# system qos
N5k(config-sys-qos)# service-policy type network-qos policy-MTU
N5k(config-sys-qos)#
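One way to confirm that the new MTU took effect is to filter the per-class MTU out of the queuing output shown later in this session (the interface number is illustrative):

```
N5k# show queuing interface ethernet 1/1 | include MTU
```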
Configuring QoS on the Nexus 5500
Adjust N5k Ingress Buffer Size
Step 1 Define qos class-map

N5k(config)# ip access-list acl-1
N5k(config-acl)# permit ip 100.1.1.0/24 any
N5k(config-acl)# exit
N5k(config)# ip access-list acl-2
N5k(config-acl)# permit ip 200.1.1.0/24 any
N5k(config)# class-map type qos class-1
N5k(config-cmap-qos)# match access-group name acl-1
N5k(config-cmap-qos)# class-map type qos class-2
N5k(config-cmap-qos)# match access-group name acl-2
N5k(config-cmap-qos)#

Step 2 Define qos policy-map

N5k(config)# policy-map type qos policy-qos
N5k(config-pmap-qos)# class type qos class-1
N5k(config-pmap-c-qos)# set qos-group 2
N5k(config-pmap-c-qos)# class type qos class-2
N5k(config-pmap-c-qos)# set qos-group 3

Step 3 Apply qos policy-map under system qos

N5k(config)# system qos
N5k(config-sys-qos)# service-policy type qos input policy-qos

Step 4 Define network-qos class-map

N5k(config)# class-map type network-qos class-1
N5k(config-cmap-nq)# match qos-group 2
N5k(config-cmap-nq)# class-map type network-qos class-2
N5k(config-cmap-nq)# match qos-group 3

Step 5 Set ingress buffer size for class-1 in network-qos policy-map

N5k(config)# policy-map type network-qos policy-nq
N5k(config-pmap-nq)# class type network-qos class-1
N5k(config-pmap-nq-c)# queue-limit 81920 bytes
N5k(config-pmap-nq-c)# class type network-qos class-2

Step 6 Apply network-qos policy-map under system qos context

N5k(config-pmap-nq-c)# system qos
N5k(config-sys-qos)# service-policy type network-qos policy-nq
N5k(config-sys-qos)#

Step 7 Configure bandwidth allocation using queuing policy-map
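Step 7 is not shown on the slide; a minimal sketch following the pattern from the "Strict Priority and Bandwidth Sharing" slide earlier (class names reused from the steps above, percentages illustrative):

```
N5k(config)# class-map type queuing class-1
N5k(config-cmap-que)# match qos-group 2
N5k(config-cmap-que)# class-map type queuing class-2
N5k(config-cmap-que)# match qos-group 3
N5k(config)# policy-map type queuing policy-BW
N5k(config-pmap-que)# class type queuing class-1
N5k(config-pmap-c-que)# bandwidth percent 50
N5k(config-pmap-c-que)# class type queuing class-2
N5k(config-pmap-c-que)# bandwidth percent 30
N5k(config-pmap-c-que)# class type queuing class-default
N5k(config-pmap-c-que)# bandwidth percent 20
N5k(config-pmap-c-que)# system qos
N5k(config-sys-qos)# service-policy type queuing output policy-BW
```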
Configuring QoS on the Nexus 5500
Configure no-drop system class
Step 1 Define qos class-map
N5k(config)# class-map type qos class-nodrop
N5k(config-cmap-qos)# match cos 4
N5k(config-cmap-qos)#
Step 2 Define qos policy-map
N5k(config)# policy-map type qos policy-qos
N5k(config-pmap-qos)# class type qos class-nodrop
N5k(config-pmap-c-qos)# set qos-group 2
Step 3 Apply qos policy-map under
system qos
N5k(config)# system qos
N5k(config-sys-qos)# service-policy type qos input policy-qos
Step 4 Define network-qos Class-Map
N5k(config)# class-map type network-qos class-nodrop
N5k(config-cmap-nq)# match qos-group 2
Step 5 Configure class-nodrop as no-drop
class in network-qos policy-map
N5k(config)# policy-map type network-qos policy-nq
N5k(config-pmap-nq)# class type network-qos class-nodrop
N5k(config-pmap-nq-c)# pause no-drop
Step 6 Apply network-qos policy-map
under system qos context
N5k(config-pmap-nq-c)# system qos
N5k(config-sys-qos)# service-policy type network-qos policy-nq
N5k(config-sys-qos)#
Step 7 Configure bandwidth allocation
using queuing policy-map
Configuring QoS on the Nexus 5500
Configure CoS Marking
Step 1 Define qos class-map

N5k(config)# ip access-list acl-1
N5k(config-acl)# permit ip 100.1.1.0/24 any
N5k(config-acl)# exit
N5k(config)# class-map type qos class-1
N5k(config-cmap-qos)# match access-group name acl-1
N5k(config-cmap-qos)#

Step 2 Define qos policy-map

N5k(config)# policy-map type qos policy-qos
N5k(config-pmap-qos)# class type qos class-1
N5k(config-pmap-c-qos)# set qos-group 2

Step 3 Apply qos policy-map under system qos

N5k(config)# system qos
N5k(config-sys-qos)# service-policy type qos input policy-qos

Step 4 Define network-qos class-map

N5k(config)# class-map type network-qos class-1
N5k(config-cmap-nq)# match qos-group 2

Step 5 Enable CoS marking for class-1 in network-qos policy-map

N5k(config)# policy-map type network-qos policy-nq
N5k(config-pmap-nq)# class type network-qos class-1
N5k(config-pmap-nq-c)# set cos 4

Step 6 Apply network-qos policy-map under system qos context

N5k(config-pmap-nq-c)# system qos
N5k(config-sys-qos)# service-policy type network-qos policy-nq
N5k(config-sys-qos)#

Step 7 Configure bandwidth allocation for new system class using queuing policy-map
Configuring QoS on the Nexus 5500
Check System Classes
N5k# show queuing interface ethernet 1/1
Interface Ethernet1/1 TX Queuing
qos-group  sched-type  oper-bandwidth
    0      WRR             20
    1      WRR             40
    2      priority         0
    3      WRR             40

(Strict priority and WRR configuration)

Interface Ethernet1/1 RX Queuing
qos-group 0: class-default
  q-size: 163840, MTU: 1538
  drop-type: drop, xon: 0, xoff: 1024
  Statistics:
    Pkts received over the port            : 9802
    Ucast pkts sent to the cross-bar       : 0
    Mcast pkts sent to the cross-bar       : 9802
    Ucast pkts received from the cross-bar : 0
    Pkts sent to the port                  : 18558
    Pkts discarded on ingress              : 0
  Per-priority-pause status                : Rx (Inactive), Tx (Inactive)

(Packet and drop counters are shown for each class)

qos-group 1: class-fcoe
  q-size: 76800, MTU: 2240
  drop-type: no-drop, xon: 128, xoff: 240
  Statistics:
    Pkts received over the port            : 0
    Ucast pkts sent to the cross-bar       : 0
    Mcast pkts sent to the cross-bar       : 0
    Ucast pkts received from the cross-bar : 0
    Pkts sent to the port                  : 0
    Pkts discarded on ingress              : 0
  Per-priority-pause status                : Rx (Inactive), Tx (Inactive)

(Per-priority-pause status shows the current PFC status)

qos-group 2: user-configured system class class-1
  q-size: 20480, MTU: 1538
  drop-type: drop, xon: 0, xoff: 128
  Statistics:
    Pkts received over the port            : 0
    Ucast pkts sent to the cross-bar       : 0
    Mcast pkts sent to the cross-bar       : 0
    Ucast pkts received from the cross-bar : 0
    Pkts sent to the port                  : 0
    Pkts discarded on ingress              : 0
  Per-priority-pause status                : Rx (Inactive), Tx (Inactive)

qos-group 3: user-configured system class class-2
  q-size: 20480, MTU: 1538
  drop-type: drop, xon: 0, xoff: 128
  Statistics:
    Pkts received over the port            : 0
    Ucast pkts sent to the cross-bar       : 0
    Mcast pkts sent to the cross-bar       : 0
    Ucast pkts received from the cross-bar : 0
    Pkts sent to the port                  : 0
    Pkts discarded on ingress              : 0
  Per-priority-pause status                : Rx (Inactive), Tx (Inactive)

Total Multicast crossbar statistics:
  Mcast pkts received from the cross-bar   : 18558
N5k#
Implementing QoS with Nexus and NX-OS
Agenda
 Nexus and QoS
 Nexus and NX-OS
 New QoS Capabilities and Requirements
 Understanding Nexus QoS Capabilities
 Nexus 7000
 Nexus 5500
 Nexus 2000
 Nexus 3000
 Nexus 1000v
 Applications of QoS with Nexus
 Voice and Video
 Storage & FCoE
 Hadoop and Web 2.0
 Future QoS Design Considerations (Data Center TCP, ECN, optimized TCP)
Nexus 2000 QoS
Tuning the Port Buffers
 Each Fabric Extender (FEX) has local port
buffers
 You can control the queue limit for a specified
Fabric Extender for egress direction (from the
network to the host)
 You can use a lower queue limit value on the
Fabric Extender to prevent one blocked
receiver from affecting traffic that is sent to
other non-congested receivers ("head-of-line
blocking”)
 A higher queue limit provides better burst
absorption and less head-of-line blocking
protection
(Diagram: Nexus 5000 Gen 2 UPCs and Unified Crossbar Fabric connected to the Nexus 2000 FEX ASIC)
N5k/N2k QoS Processing Flow
1 Incoming traffic is classified based on CoS.
2 Queuing and scheduling at egress of NIF.
3 Traffic classification, buffer allocation, MTU check and CoS marking at N5k ingress.
4 Queuing and scheduling at N5k egress.
5 CoS-based classification at the ingress of NIF ports.
6 Queuing and scheduling at egress of HIF ports. Egress tail drop for each HIF port.

(Diagram: Nexus 5000 Unified Switch Fabric with Unified Port Controllers, connected to a FEX (2148, 2248, 2232))
Nexus 2000 QoS
Tuning the Port Buffers
 Each Fabric Extender (FEX) has local port
buffers
 You can use a lower queue limit value on the
Fabric Extender to prevent one blocked receiver
from affecting traffic that is sent to other non-congested receivers ("head-of-line blocking”)
 A higher queue limit provides better burst
absorption and less head-of-line blocking
protection
 You can control the queue limit for a specified Fabric Extender for egress direction (from the network to the host)

(Diagram: 10G source (NFS) on a Nexus 5000 Gen 2 UPC, Unified Crossbar Fabric, 40G fabric link to the Nexus 2000 FEX ASIC, 1G sink)

# Disabling the per port tail drop threshold
dc11-5020-3(config)# system qos
dc11-5020-3(config-sys-qos)# no fex queue-limit
dc11-5020-3(config-sys-qos)#

# Tuning of the queue limit per FEX HIF port
dc11-5020-3(config)# fex 100
dc11-5020-3(config-fex)# hardware N2248T queue-limit ?
  <CR>
  <2560-652800>  Queue limit in bytes
dc11-5020-3(config-fex)# hardware N2248T queue-limit 356000
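The configurable range from the CLI help above can be captured in a quick check (a sketch; 2560-652800 bytes is the range shown for the N2248T):

```python
# Valid FEX queue-limit range in bytes, per the CLI help above.
QUEUE_LIMIT_MIN = 2560
QUEUE_LIMIT_MAX = 652800

def valid_queue_limit(limit_bytes):
    """True if the value would be accepted by 'hardware N2248T queue-limit'."""
    return QUEUE_LIMIT_MIN <= limit_bytes <= QUEUE_LIMIT_MAX

assert valid_queue_limit(356000)         # value used in the example
assert not valid_queue_limit(1_000_000)  # beyond the supported range
```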
Nexus 2248TP-E
32MB Shared Buffer

• Speed mismatch between 10G NAS and 1G server requires QoS tuning
• Nexus 2248TP-E utilizes a 32MB shared buffer to handle larger traffic bursts
• Hadoop, NAS, AVID are examples of bursty applications
• You can control the queue limit for a specified Fabric Extender for egress direction (from the network to the host)
• You can use a lower queue limit value on the Fabric Extender to prevent one blocked receiver from affecting traffic that is sent to other non-congested receivers ("head-of-line blocking")

Tune the 2248TP-E to support an extremely large burst (Hadoop, AVID, …):

N5548-L3(config-fex)# hardware N2248TPE queue-limit 4000000 rx
N5548-L3(config-fex)# hardware N2248TPE queue-limit 4000000 tx
N5548-L3(config)# interface e110/1/1
N5548-L3(config-if)# hardware N2248TPE queue-limit 4096000 tx

(Diagram: 10G-attached source (NAS array) sending NAS/iSCSI/NFS traffic through the 2248TP-E to a 1G-attached server hosting VMs)
Nexus 2248TP-E Counters
N5596-L3-2(config-if)# sh queuing interface e110/1/1
Ethernet110/1/1 queuing information:
  Input buffer allocation:
  Qos-group: 0
  frh: 2
  drop-type: drop
  cos: 0 1 2 3 4 5 6
  xon       xoff      buffer-size
  ---------+---------+-----------
  0         0         65536

(The ingress queue limit is configurable)

  Queueing:
  queue  qos-group  cos            priority  bandwidth  mtu
  ------+----------+--------------+---------+----------+-----
  2      0          0 1 2 3 4 5 6  WRR       100        9728

  Queue limit: 2097152 bytes

(Egress queues: CoS-to-queue mapping, bandwidth allocation, MTU, and the configurable egress queue limit)

  Queue Statistics:
  ---+----------------+-----------+------------+----------+------------+------
  Que|Received /      |Tail Drop  |No Buffer   |MAC Error |Multicast   |Queue
  No |Transmitted     |           |            |          |Tail Drop   |Depth
  ---+----------------+-----------+------------+----------+------------+------
  2rx|         5863073|          0|           0|         0|            |     0
  2tx|    426378558047|   28490502|           0|         0|           0|     0
  ---+----------------+-----------+------------+----------+------------+------
<snip>

(Tail drops are drops due to oversubscription; counters are kept per port, per queue)
Implementing QoS with Nexus and NX-OS
Agenda
 Nexus and QoS
 Nexus and NX-OS
 New QoS Capabilities and Requirements
 Understanding Nexus QoS Capabilities
 Nexus 7000
 Nexus 5500
 Nexus 2000
 Nexus 3000
 Nexus 1000v
 Applications of QoS with Nexus
 Voice and Video
 Storage & FCoE
 Hadoop and Web 2.0
 Future QoS Design Considerations (Data Center TCP, ECN, optimized TCP)
Nexus 3000 QoS
Overview
 QoS is enabled by default on the Nexus 3000 (NX-OS Default)
 All ports are “trusted” (CoS/DSCP/ToS values are preserved) by default
 Default interface queuing policy uses QoS-Group 0 (Best Effort - Drop class),
WRR (tail drop), 100% throughput (bandwidth percent)
 Unicast and Multicast traffic defaults to a 50% WRR bandwidth ratio of the egress interface traffic data rate (system-wide configuration)
 The default interface MTU is 1500 bytes (system-wide configuration)
 Control plane traffic destined to the CPU is prioritized by default to improve
network stability.
QoS Policy Types (CLI):

Type (CLI)   | Description                                                 | Applied To…
-------------+-------------------------------------------------------------+--------------------
QoS          | Packet Classification based on Layer 2/3/4 (Ingress)        | Interface or System
Network-QoS  | Packet Marking (CoS), Congestion Control WRED/ECN (Egress)  | System
Queuing      | Scheduling - Queuing Bandwidth % / Priority Queue (Egress)  | Interface or System
Nexus 3000 QoS
Shared Memory Architecture
Buffer/Queuing Block

A pool of 9MB of buffer space is divided between an egress reserved pool and a dynamically shared pool: 80% shared, 20% per-port reserved.

Multi-level scheduling: per-port, per-group Deficit Round Robin.

(Diagram: egress ports 1 through 64, each with unicast queues UC 0-7 and multicast queues MC 0-3, drawing on the 9MB total)
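For a rough sense of the split, assume (for illustration only; the slide does not state a per-port division) that the 20% reserved pool is spread evenly across 64 egress ports:

```python
# 9MB total pool, 80% dynamically shared / 20% egress reserved (from the slide).
TOTAL_BUFFER = 9 * 1024 * 1024
shared = TOTAL_BUFFER * 80 // 100   # dynamically shared pool
reserved = TOTAL_BUFFER - shared    # egress reserved pool

# Hypothetical even split of the reserved pool across 64 ports.
per_port_reserved = reserved // 64  # roughly 28.8 KB per port

assert shared + reserved == TOTAL_BUFFER
```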
Nexus 3000 QoS Configuration
WRR Example
The next six slides contain configuration and verification examples for creating an ingress
classification policy and an egress queuing policy to prioritize egress traffic if congestion
occurs on the egress interface. The ingress classification policy trusts the IP DSCP values
assigned by hosts and maps them into QoS-Groups. The egress queuing policy assigns a predefined bandwidth percentage to each traffic class.
Example Traffic Class Definitions:

Traffic Class          | QoS-Group | Throughput Percentage
-----------------------+-----------+----------------------
Gold                   | 1         | 40
Silver                 | 2         | 30
Bronze                 | 3         | 20
Best Effort (Default)  | 0         | 10

(Diagram: ingress traffic scheduled onto the egress interface at 40/30/20/10 percent)

Traffic is prioritized based on each class's bandwidth percentage; excess traffic is dropped based on the bandwidth ratios.
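Under sustained congestion each class is guaranteed its WRR share of the egress link. A sketch using the class definitions above (a 10 Gb/s link is assumed for illustration):

```python
# WRR bandwidth percentages from the traffic class table above.
classes = {"Gold": 40, "Silver": 30, "Bronze": 20, "Best Effort": 10}

LINK_GBPS = 10  # illustrative egress link speed

def guaranteed_gbps(name):
    """Minimum throughput a class receives when the egress link is congested."""
    return LINK_GBPS * classes[name] / 100

assert sum(classes.values()) == 100  # percentages account for the whole link
assert guaranteed_gbps("Gold") == 4.0
assert guaranteed_gbps("Best Effort") == 1.0
```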
Traffic Classification Configuration
Ingress traffic is classified based on IP DSCP values and associated to different QoS-Groups.
In this example, the hosts are trusted and set the IP DSCP values. However, if the hosts were
not trusted, a classification policy could be configured to set/rewrite the DSCP values.
N3K-1(config)# class-map type qos match-all qos-group-1
N3K-1(config-cmap-qos)# description Gold
N3K-1(config-cmap-qos)# match dscp 46
N3K-1(config-cmap-qos)# class-map type qos match-all qos-group-2
N3K-1(config-cmap-qos)# description Silver
N3K-1(config-cmap-qos)# match dscp 36
N3K-1(config-cmap-qos)# class-map type qos match-all qos-group-3
N3K-1(config-cmap-qos)# description Bronze
N3K-1(config-cmap-qos)# match dscp 26
N3K-1(config)# policy-map type qos traffic-classification
N3K-1(config-pmap-qos)# class qos-group-1
N3K-1(config-pmap-c-qos)# set qos-group 1
N3K-1(config-pmap-c-qos)# class qos-group-2
N3K-1(config-pmap-c-qos)# set qos-group 2
N3K-1(config-pmap-c-qos)# class qos-group-3
N3K-1(config-pmap-c-qos)# set qos-group 3
N3K-1(config)# interface ethernet 1/30
N3K-1(config-if)# service-policy type qos input traffic-classification

(Define the class-maps and match the DSCP values; define the policy-map and set the QoS-Groups; apply the policy-map to the interface or system.)
Queuing (WRR) Configuration
Egress traffic is matched on QoS-Group and guaranteed a percentage of bandwidth when
traffic exceeds the egress Ethernet interface throughput. It is important to note that the
class-default has to be modified to prevent the bandwidth percentage from being greater
than 100%. In the example below the class-default has been reduced to 10% from 100%.
N3K-1(config)# class-map type queuing qos-group-1
N3K-1(config-cmap-que)# description Gold
N3K-1(config-cmap-que)# match qos-group 1
N3K-1(config-cmap-que)# class-map type queuing qos-group-2
N3K-1(config-cmap-que)# description Silver
N3K-1(config-cmap-que)# match qos-group 2
N3K-1(config-cmap-que)# class-map type queuing qos-group-3
N3K-1(config-cmap-que)# description Bronze
N3K-1(config-cmap-que)# match qos-group 3
N3K-1(config)# policy-map type queuing traffic-priorities
N3K-1(config-pmap-que)# class type queuing qos-group-1
N3K-1(config-pmap-c-que)# bandwidth percent 40
N3K-1(config-pmap-c-que)# class type queuing qos-group-2
N3K-1(config-pmap-c-que)# bandwidth percent 30
N3K-1(config-pmap-c-que)# class type queuing qos-group-3
N3K-1(config-pmap-c-que)# bandwidth percent 20
N3K-1(config-pmap-c-que)# class type queuing class-default
N3K-1(config-pmap-c-que)# bandwidth percent 10
N3K-1(config)# interface ethernet 1/10
N3K-1(config-if)# service-policy type queuing output traffic-priorities

(Define the class-maps and match the QoS-Group values; define the policy-map and set the bandwidth percentages; apply the policy-map to the interface or system.)
Queuing - Network-QoS
Configuration
The network-qos policy instantiates the QoS-Groups when applied to the system policy. This
enables the QoS-Groups and interface statistics collection per QoS-Group.
N3K-1(config)# class-map type network-qos qos-group-1
N3K-1(config-cmap-nq)# match qos-group 1
N3K-1(config-cmap-nq)# class-map type network-qos qos-group-2
N3K-1(config-cmap-nq)# match qos-group 2
N3K-1(config-cmap-nq)# class-map type network-qos qos-group-3
N3K-1(config-cmap-nq)# match qos-group 3
Define the Class-Maps and match
the QoS-Group values
N3K-1(config)# policy-map type network-qos qos-groups
N3K-1(config-pmap-nq)# class type network-qos qos-group-1
N3K-1(config-pmap-nq)# class type network-qos qos-group-2
N3K-1(config-pmap-nq)# class type network-qos qos-group-3
Define the Policy-Map and
match the Class-Maps previously
defined
N3K-1(config)# system qos
N3K-1(config-sys-qos)# service-policy type network-qos qos-groups
Apply the Policy-Map to the
system
Notes: QoS-Group 0 is already included in the default QoS policy.
Queuing – Priority Queue
This configuration example puts all packets with an IP DSCP value of 46 (EF) into a priority queue that is scheduled before any other queues (i.e. QoS-Group 0). The ingress policy classifies packets on the ingress interface matching DSCP 46 and puts them in QoS-Group 1. The egress queuing policy matches QoS-Group 1 and configures that QoS-Group as a priority queue. All other traffic is placed in QoS-Group 0 (Best Effort/Drop queue).
N3K-1(config)# class-map type qos match-all dscp-priority
N3K-1(config-cmap-qos)# match dscp 46
Ingress Classification Configuration:
The qos service-policy can be applied
per interface or per system
N3K-1(config)# policy-map type qos dscp-priority
N3K-1(config-pmap-qos)# class dscp-priority
N3K-1(config-pmap-c-qos)# set qos-group 1
N3K-1(config)# interface ethernet 1/30
N3K-1(config-if)# service-policy type qos input dscp-priority
N3K-1(config)# class-map type queuing dscp-priority
N3K-1(config-cmap-que)# match qos-group 1
Egress Queue Configuration:
The queuing service-policy can be
applied per interface or per system
N3K-1(config)# policy-map type queuing dscp-priority
N3K-1(config-pmap-que)# class type queuing dscp-priority
N3K-1(config-pmap-c-que)# priority
N3K-1(config)# interface ethernet 1/10
N3K-1(config-if)# service-policy type queuing output dscp-priority
N3K-1(config)# class-map type network-qos qos-group-1
N3K-1(config-cmap-nq)# match qos-group 1
N3K-1(config)# policy-map type network-qos qos-groups
N3K-1(config-pmap-nq)# class type network-qos qos-group-1
N3K-1(config)# system qos
N3K-1(config-sys-qos)# service-policy type network-qos qos-groups
Network-QoS Configuration:
The network-qos service-policy is applied per system
Queuing – WRED
The default WRR Queue behavior is to tail drop packets when congestion is experienced. A
network-qos policy can be configured to enable WRED, which drops packets prior to
experiencing congestion (based on min/max/probability ratios). This is beneficial for
applications that use TCP, since the source can reduce its transmission rate when the TCP
stream experiences lost packets.
N3K-1(config)# class-map type qos match-all class-gold
N3K-1(config-cmap-qos)# match dscp 8
Traffic Classification:
Match packets with an IP DSCP of 8 and transmit them in QoS-Group 1
N3K-1(config)# policy-map type qos traffic-classification
N3K-1(config-pmap-qos)# class class-gold
N3K-1(config-pmap-c-qos)# set qos-group 1
N3K-1(config)# interface ethernet 1/20
N3K-1(config-if)# service-policy type qos input traffic-classification
N3K-1(config)# class-map type network-qos class-gold
N3K-1(config-cmap-nq)# description Gold
N3K-1(config-cmap-nq)# match qos-group 1
Network-QoS:
Match packets in QoS-Group 1 and
enable WRED for the QoS-Group
N3K-1(config)# policy-map type network-qos traffic-priorities
N3K-1(config-pmap-nq)# class type network-qos class-gold
N3K-1(config-pmap-nq-c)# congestion-control random-detect
N3K-1(config)# system qos
N3K-1(config-sys-qos)# service-policy type network-qos traffic-priorities
Notes: Bandwidth percentages were not configured in this example to keep it simple.
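The WRED behavior described above can be sketched generically (this is the textbook algorithm, not NX-OS internals; thresholds and the maximum drop probability are illustrative):

```python
import random

def wred_drop(avg_queue_depth, min_th, max_th, max_p):
    """Generic WRED decision: no early drops below min_th, certain drop at or
    above max_th, and a linearly rising drop probability in between."""
    if avg_queue_depth < min_th:
        return False
    if avg_queue_depth >= max_th:
        return True
    drop_p = max_p * (avg_queue_depth - min_th) / (max_th - min_th)
    return random.random() < drop_p

# Below the min threshold nothing is dropped; at or above the max, everything is.
assert wred_drop(10, min_th=40, max_th=80, max_p=0.1) is False
assert wred_drop(90, min_th=40, max_th=80, max_p=0.1) is True
```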
Implementing QoS with Nexus and NX-OS
Agenda
 Nexus and QoS
 Nexus and NX-OS
 New QoS Capabilities and Requirements
 Understanding Nexus QoS Capabilities
 Nexus 7000
 Nexus 5500
 Nexus 2000
 Nexus 3000
 Nexus 1000v
 Applications of QoS with Nexus
 Voice and Video
 Storage & FCoE
 Hadoop and Web 2.0
 Future QoS Design Considerations (Data Center TCP, ECN, optimized TCP)
Converting Catalyst 6500 to Nexus 7000
What’s Different?
 Biggest change is introduction of “queuing policies” to apply port-based QoS configuration
 Catalyst 6500 uses platform-specific syntax for port QoS
 “mls”, “rcv-queue”, “wrr-queue”, etc. commands
 Nexus 7000 uses “modular QoS CLI” (MQC) to apply both
queuing and traditional QoS (marking/policing) policies
 Class maps to match traffic
 Policy maps to define actions to take on each class
 Service policies to tie policy maps to interfaces/VLANs in a particular
direction
Typical Catalyst 6500 Egress Port QoS
Configuration
mls qos
!
interface range gig3/1-48
 mls qos trust dscp
 wrr-queue bandwidth 100 150 200          !For 1p3q8t
 wrr-queue bandwidth 100 150 200 0 0 0 0  !For 1p7q8t
 wrr-queue cos-map 1 1 1
 wrr-queue cos-map 1 2 0
 wrr-queue cos-map 2 8 4
 wrr-queue cos-map 2 2 2
 wrr-queue cos-map 3 4 3
 wrr-queue cos-map 3 8 6 7
 priority-queue cos-map 1 5

Annotations from the slide:
- "mls qos" enables QoS. Not needed: QoS is always enabled on Nexus 7000.
- "mls qos trust dscp" defines trust behavior. Not needed: DSCP is preserved (trusted) by default.
- "wrr-queue bandwidth" defines DWRR weights. Nexus 7000 uses bandwidth statements in queuing policy-maps.
- The cos-map commands define CoS-to-queue mapping and CoS-to-threshold mapping. Nexus 7000 uses match statements and queue-limit commands. Example: "2 8 4" maps CoS 4 to queue 2, threshold 8.
Equivalent Nexus 7000 Egress Queuing Policy
class-map type queuing match-any 1p7q4t-out-pq1
  match cos 5
class-map type queuing match-any 1p7q4t-out-q2
  match cos 3,6-7
class-map type queuing match-any 1p7q4t-out-q3
  match cos 2,4
class-map type queuing match-any 1p7q4t-out-q-default
  match cos 0-1
!
policy-map type queuing 10G-qing-out
  class type queuing 1p7q4t-out-pq1
    priority level 1
    queue-limit percent 15
  class type queuing 1p7q4t-out-q2
    queue-limit percent 25
    queue-limit cos 6 percent 100
    queue-limit cos 7 percent 100
    queue-limit cos 3 percent 70
    bandwidth remaining percent 22
  class type queuing 1p7q4t-out-q3
    queue-limit percent 25
    queue-limit cos 4 percent 100
    queue-limit cos 2 percent 50
    bandwidth remaining percent 33
  class type queuing 1p7q4t-out-q-default
    queue-limit percent 35
    queue-limit cos 1 percent 50
    queue-limit cos 0 percent 100
    bandwidth remaining percent 45
!
int e1/1
  service-policy type queuing output 10G-qing-out

Annotations from the slide:
- CoS-to-queue mapping is defined in queuing class-maps (configurable for each port type in each direction).
- Behavior for each queue is defined in the queuing policy-map: "priority level 1" defines the priority queue; "queue-limit percent" sizes the queue; "queue-limit cos … percent" defines CoS-to-threshold mapping; "bandwidth remaining percent" defines the DWRR weight for the queue ("bandwidth remaining" is required when using a PQ).
- Tie the policy-map as a service-policy to the appropriate interface type in the appropriate direction.
ESE QoS SRND for Catalyst 6500
interface range TenGigabitEthernet4/1 - 4
 wrr-queue queue-limit 5 25 10 10 10 5 5
 wrr-queue bandwidth 5 25 20 20 20 5 5
 wrr-queue random-detect 1
 wrr-queue random-detect 2
 wrr-queue random-detect 3
 wrr-queue random-detect 4
 wrr-queue random-detect 5
 wrr-queue random-detect 6
 wrr-queue random-detect 7
 wrr-queue random-detect min-threshold 1 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 1 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 2 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 2 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 3 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 3 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 4 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 4 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 5 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 5 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 6 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 6 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 7 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 7 100 100 100 100 100 100 100 100
 wrr-queue cos-map 1 1 1
 wrr-queue cos-map 2 1 0
 wrr-queue cos-map 3 1 4
 wrr-queue cos-map 4 1 2
 wrr-queue cos-map 5 1 3
 wrr-queue cos-map 6 1 6
 wrr-queue cos-map 7 1 7
 priority-queue cos-map 1 5

Annotations from the slide:
- SRND queuing configuration for the Catalyst 6500 1p7q8t port type.
- "wrr-queue queue-limit" allocates buffer space to the non-PQs; "wrr-queue bandwidth" sets the DWRR weights for the non-PQs.
- "wrr-queue random-detect <queue>" enables WRED on the non-PQs; the min-threshold and max-threshold lines set the WRED thresholds for the non-PQs.
- Queue roles: PQ: VoIP; Q7: STP; Q6: RPs; Q5: call signaling and critical data; Q4: NMS/transactional data; Q3: video; Q2: best effort; Q1 (WRED threshold 1): scavenger/bulk.
Mapping QoS SRND to Nexus 7000 (1)
class-map type queuing match-any 1p7q4t-out-pq1
  match cos 5                          ! PQ: VoIP
class-map type queuing match-any 1p7q4t-out-q2
  match cos 7                          ! Q2: STP
class-map type queuing match-any 1p7q4t-out-q3
  match cos 6                          ! Q3: RPs
class-map type queuing match-any 1p7q4t-out-q4
  match cos 4                          ! Q4: Call sig and critical data
class-map type queuing match-any 1p7q4t-out-q5
  match cos 3                          ! Q5: NMS/transactional data
class-map type queuing match-any 1p7q4t-out-q6
  match cos 2                          ! Q6: Video
class-map type queuing match-any 1p7q4t-out-q7
  match cos 0                          ! Q7: Best effort
class-map type queuing match-any 1p7q4t-out-q-default
  match cos 1                          ! Q-Default: Scavenger/bulk
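When building these class-maps, every CoS value 0-7 should land in exactly one queue. A quick check of the mapping above:

```python
# CoS-to-queue mapping taken from the class-maps above.
cos_to_queue = {
    "1p7q4t-out-pq1": {5},
    "1p7q4t-out-q2": {7},
    "1p7q4t-out-q3": {6},
    "1p7q4t-out-q4": {4},
    "1p7q4t-out-q5": {3},
    "1p7q4t-out-q6": {2},
    "1p7q4t-out-q7": {0},
    "1p7q4t-out-q-default": {1},
}

seen = set()
for queue, cos_values in cos_to_queue.items():
    # Each CoS value may appear in only one queue's class-map.
    assert not (seen & cos_values), f"CoS value reused in {queue}"
    seen |= cos_values

# All eight 802.1p CoS values must be mapped somewhere.
assert seen == set(range(8))
```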
Mapping QoS SRND to Nexus 7000 (2)
policy-map type queuing 10G-SRND-out
  class type queuing 1p7q4t-out-pq1
    priority level 1
    queue-limit percent 10
  class type queuing 1p7q4t-out-q2
    queue-limit percent 10
    bandwidth remaining percent 5
    random-detect cos-based
    random-detect cos 7 minimum-threshold percent 80 maximum-threshold percent 100
  class type queuing 1p7q4t-out-q3
    queue-limit percent 10
    bandwidth remaining percent 5
    random-detect cos-based
    random-detect cos 6 minimum-threshold percent 80 maximum-threshold percent 100
  class type queuing 1p7q4t-out-q4
    queue-limit percent 15
    bandwidth remaining percent 20
    random-detect cos-based
    random-detect cos 4 minimum-threshold percent 80 maximum-threshold percent 100
  class type queuing 1p7q4t-out-q5
    queue-limit percent 10
    bandwidth remaining percent 20
    random-detect cos-based
    random-detect cos 3 minimum-threshold percent 80 maximum-threshold percent 100

Annotations from the slide: "priority level 1" defines the PQ; "queue-limit percent" sizes the PQ; "random-detect cos-based" enables CoS-based WRED for the queue; the minimum/maximum-threshold values set the WRED thresholds; "bandwidth remaining percent" sets the DWRR weight for the queue.

Presenter note: I actually question enabling WRED on network control queues as described in the SRND… Your choice…
Mapping QoS SRND to Nexus 7000 (3)
  class type queuing 1p7q4t-out-q6
    queue-limit percent 10
    bandwidth remaining percent 20
    random-detect cos-based
    random-detect cos 2 minimum-threshold percent 80 maximum-threshold percent 100
  class type queuing 1p7q4t-out-q7
    queue-limit percent 30
    bandwidth remaining percent 25
    random-detect cos-based
    random-detect cos 0 minimum-threshold percent 80 maximum-threshold percent 100
  class type queuing 1p7q4t-out-q-default
    queue-limit percent 5
    bandwidth remaining percent 5
    random-detect cos-based
    random-detect cos 1 minimum-threshold percent 80 maximum-threshold percent 100
!
int e1/1
  service-policy type queuing output 10G-SRND-out

Presenter note: I chose slightly different queue-limit sizes vs the SRND; when all 8 queues are enabled, the sum of queue-limit percentages must equal 100. Tie the policy-map to the interface as an output queuing service-policy.
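The "must equal 100" constraint above is easy to verify for this policy:

```python
# queue-limit percentages from the 10G-SRND-out policy above.
queue_limit_percent = {
    "pq1": 10, "q2": 10, "q3": 10, "q4": 15,
    "q5": 10, "q6": 10, "q7": 30, "q-default": 5,
}

# With all 8 queues enabled, the queue-limit percentages must sum to 100.
assert sum(queue_limit_percent.values()) == 100
```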
Summary
 MQC configuration for both queuing and marking/policing policies
Departure from platform-specific Catalyst 6500 configuration model
 Initially, the queuing policy configuration model generates some
confusion
But it's modular and self-documenting
 99% of needed QoS features exist in NX-OS
DSCP-to-queue perhaps biggest gap
 A few key default changes:
QoS always enabled
Default port behavior is "trust"
 Port QoS config conversion from Catalyst 6500 IS possible ;)
Implementing QoS with Nexus and NX-OS
Agenda
 Nexus and QoS
 Nexus and NX-OS
 New QoS Capabilities and Requirements
 Understanding Nexus QoS Capabilities
  Nexus 7000
  Nexus 5500
  Nexus 2000
  Nexus 3000
  Nexus 1000v
 Applications of QoS with Nexus
  Voice and Video
  Storage & FCoE
  Hadoop and Web 2.0
 Future QoS Design Considerations (Data Center TCP, ECN, optimized TCP)
Priority Flow Control – Nexus 5000/5500
Operations Configuration – Switch Level
 On the Nexus 5000, once feature fcoe is configured, two classes are created by default:

policy-map type qos default-in-policy
  class type qos class-fcoe
    set qos-group 1
  class type qos class-default
    set qos-group 0

 class-fcoe is configured to be no-drop with an MTU of 2158:

policy-map type network-qos default-nq-policy
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158

 Enabling the FCoE feature on the Nexus 5548/96 does not create the no-drop policies
automatically as on the Nexus 5010/20
 Must add the policies under system qos:

system qos
  service-policy type qos input fcoe-default-in-policy
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type network-qos fcoe-default-nq-policy
Nexus 5000/5500 QoS
Priority Flow Control and No-Drop Queues
 Tuning of the lossless queues to support a variety of use cases
 Extended switch-to-switch no-drop traffic lanes
 Support for 3 km no-drop switch-to-switch links with Nexus 5000 and 5500
(inter-building DCB FCoE links)
 Increased number of no-drop service lanes (4) for RDMA and other
multi-queue HPC and compute applications

Configs for a 3000 m no-drop class:

        Buffer size    Pause Threshold (XOFF)   Resume Threshold (XON)
N5020   143680 bytes   58860 bytes              38400 bytes
N5548   152000 bytes   103360 bytes             83520 bytes

5548-FCoE(config)# policy-map type network-qos 3km-FCoE
5548-FCoE(config-pmap-nq)# class type network-qos 3km-FCoE
5548-FCoE(config-pmap-nq-c)# pause no-drop buffer-size 152000 pause-threshold 103360 resume-threshold 83520
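The larger buffer and XOFF values for the 3 km case come from the data still in flight after PFC pause is signalled: the longer the fibre, the more bytes arrive after XOFF before the far end actually stops. A rough back-of-the-envelope sketch of that headroom (a simplified model with assumed MTU and delay figures; the platform's real thresholds also account for internal latencies):

```python
# Rough sketch of the PFC headroom needed above the XOFF threshold: after
# pause is sent, round-trip in-flight bytes plus roughly one maximum-size
# frame per direction keep arriving. Simplified model with assumed numbers;
# actual Nexus thresholds include additional internal delays.
LINK_BPS = 10e9            # 10 GE
PROP_DELAY_S_PER_M = 5e-9  # ~5 ns per metre in fibre
MTU = 2240                 # assumed frame size for the estimate

def pfc_headroom_bytes(cable_m, link_bps=LINK_BPS, mtu=MTU):
    rtt = 2 * cable_m * PROP_DELAY_S_PER_M   # round-trip propagation delay
    in_flight = link_bps / 8 * rtt           # bytes on the wire during RTT
    return int(in_flight + 2 * mtu)          # + one frame serializing each way

print(pfc_headroom_bytes(100))    # short intra-rack run
print(pfc_headroom_bytes(3000))   # 3 km inter-building run
```

Even this crude model shows why a 3 km lossless link needs tens of kilobytes more headroom per no-drop class than an in-rack link.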
Priority Flow Control – Nexus 7K & MDS
Operations Configuration – Switch Level

N7K-50(config)# system qos
N7K-50(config-sys-qos)# service-policy type network-qos default-nq-7e-policy

 No-Drop PFC w/ MTU 2K set for Fibre Channel

show policy-map system

  Type network-qos policy-maps
  =====================================
  policy-map type network-qos default-nq-7e-policy
    class type network-qos c-nq-7e-drop
      match cos 0-2,4-7
      congestion-control tail-drop
      mtu 1500
    class type network-qos c-nq-7e-ndrop-fcoe
      match cos 3
      match protocol fcoe
      pause
      mtu 2112

show class-map type network-qos c-nq-7e-ndrop-fcoe

  Type network-qos class-maps
  =============================================
  class-map type network-qos match-any c-nq-7e-ndrop-fcoe
    Description: 7E No-Drop FCoE CoS map
    match cos 3
    match protocol fcoe

Policy template choices:

Template               Drop CoS          (Priority)   NoDrop CoS   (Priority)
default-nq-8e-policy   0,1,2,3,4,5,6,7   5,6,7        -            -
default-nq-7e-policy   0,1,2,4,5,6,7     5,6,7        3            -
default-nq-6e-policy   0,1,2,5,6,7       5,6,7        3,4          4
default-nq-4e-policy   0,5,6,7           5,6,7        1,2,3,4      4
Enhanced Transmission Selection - N5K
Bandwidth Management
 When configuring FCoE, by default each class is given 50% of the
available bandwidth
 Can be changed through QoS settings when higher demands for
certain traffic exist (e.g. HPC traffic, more Ethernet NICs)

N5k-1# show queuing interface ethernet 1/18
Ethernet1/18 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0         WRR            50
        1         WRR            50

(Figure: traditional server with 1 Gig FC HBAs and 1 Gig Ethernet NICs)

 Best Practice: Tune the FCoE queue to provide equivalent capacity
to the HBA that would have been used (1G, 2G, …)
Enhanced Transmission Selection – N5K
Changing ETS Bandwidth Configurations
 Create classification rules first by defining and applying policy-map
type qos
 Define and apply policy-map type queuing to configure strict
priority and bandwidth sharing
pod3-5010-2(config)# class-map type queuing class-voice
pod3-5010-2(config-cmap-que)# match qos-group 2
pod3-5010-2(config-cmap-que)# class-map type queuing class-high
pod3-5010-2(config-cmap-que)# match qos-group 3
pod3-5010-2(config-cmap-que)# class-map type queuing class-low
pod3-5010-2(config-cmap-que)# match qos-group 4
pod3-5010-2(config-cmap-que)# exit
pod3-5010-2(config)# policy-map type queuing policy-BW
pod3-5010-2(config-pmap-que)# class type queuing class-voice
pod3-5010-2(config-pmap-c-que)# priority
pod3-5010-2(config-pmap-c-que)# class type queuing class-high
pod3-5010-2(config-pmap-c-que)# bandwidth percent 50
pod3-5010-2(config-pmap-c-que)# class type queuing class-low
pod3-5010-2(config-pmap-c-que)# bandwidth percent 20
pod3-5010-2(config-pmap-c-que)# class type queuing class-fcoe
pod3-5010-2(config-pmap-c-que)# bandwidth percent 30   <-- FCoE traffic given 30% of the 10GE link
pod3-5010-2(config-pmap-c-que)# class type queuing class-default
pod3-5010-2(config-pmap-c-que)# bandwidth percent 0
pod3-5010-2(config-pmap-c-que)# system qos
pod3-5010-2(config-sys-qos)# service-policy type queuing output policy-BW
pod3-5010-2(config-sys-qos)#
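A common slip with queuing policies like the one above is ending up with bandwidth percentages that do not sum to 100 across the non-priority classes. A minimal sketch of that sanity check (the class names mirror the example, but the checker itself is mine, not an NX-OS feature):

```python
# Sanity-check an ETS-style queuing policy: the priority class gets strict
# scheduling and is excluded; the remaining classes' bandwidth percents
# should sum to exactly 100. Illustrative helper, not an NX-OS command.
def validate_queuing_policy(policy):
    """policy: dict mapping class name -> 'priority' or integer bandwidth %."""
    shares = [v for v in policy.values() if v != "priority"]
    return sum(shares) == 100

policy_bw = {
    "class-voice":   "priority",  # strict priority, excluded from ETS shares
    "class-high":    50,
    "class-low":     20,
    "class-fcoe":    30,
    "class-default":  0,
}
print(validate_queuing_policy(policy_bw))  # True
```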
ETS – Nexus 7000
Bandwidth Management

n7k-50-fcoe-2# show queuing interface ethernet 4/17
Egress Queuing for Ethernet4/17 [System]
------------------------------------------------
Template: 4Q7E
------------------------------------------------
Group  Bandwidth%  PrioLevel  Shape%
------------------------------------------------
    0          80          -       -
    1          20          -       -
------------------------------------------------
Que#  Group  Bandwidth%  PrioLevel  Shape%  CoSMap
------------------------------------------------
   0      0           -       High       -  5-7
   1      1         100          -       -  3
   2      0          50          -       -  2,4
   3      0          50          -       -  0-1

Ingress Queuing for Ethernet4/17 [System]
------------------------------------------------
Trust: Trusted
------------------------------------------------
Group  Qlimit%
------------------------------------------------
    0       70
    1       30
------------------------------------------------
Que#  Group  Qlimit%  IVL  CoSMap
------------------------------------------------
   0      0       45    0  0-1
   1      0       10    5  5-7
   2      1      100    3  3
   3      0       45    2  2,4
DC Design Details
iSCSI Storage Considerations
• iSCSI and DCB
• Where does PFC make sense in the non-FCoE design?
  Flow control from the array to the switch
  Flow control from the server to the switch
  Extending buffering from the switch to the connected device
 End to End – need to consider network oversubscription carefully!
  Fibre Channel and FCoE leverage very low levels of oversubscription
  No Drop for FC works due to capacity planning
• Where does ETS make sense?
• Anywhere you want to guarantee capacity
DC Design Details
iSCSI Storage Considerations – TCP or PFC
1. Steady state traffic is within end-to-end network capacity
2. Burst traffic arrives from a source
3. 'No Drop' traffic is queued
4. Buffers begin to fill and PFC flow control is initiated
5. All sources are eventually flow controlled

(Figure: iSCSI array on a 10G link bursting 4G toward servers on 1G links, across a 10G inter-switch link)

• TCP is not invoked immediately, as frames are queued rather than dropped
• Is this the optimal behaviour for your oversubscription?
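The timeline above can be put into rough numbers: with lossless queues, the time before PFC starts pausing upstream is simply the XOFF headroom divided by the rate mismatch. A minimal sketch with illustrative figures (none of the rates or buffer sizes below come from the slides):

```python
# Rough model of steps 2-4 above: offered traffic exceeds the egress rate,
# so the lossless queue grows at the excess rate until it hits XOFF and
# PFC pauses the sources. Numbers are illustrative, not from the slides.
def time_to_xoff_ms(xoff_bytes, offered_bps, drain_bps):
    """Milliseconds until the queue reaches its XOFF threshold."""
    excess_bps = offered_bps - drain_bps
    if excess_bps <= 0:
        return float("inf")  # queue never fills; PFC never triggers
    return xoff_bytes * 8 / excess_bps * 1000

# e.g. 100 KB of XOFF headroom, 12G offered into a 10G egress port:
print(round(time_to_xoff_ms(100_000, 12e9, 10e9), 3))
```

Sub-millisecond numbers like this are why PFC reacts long before TCP's drop-driven congestion control would, which is exactly the behaviour the slide asks you to weigh against your oversubscription ratio.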
Nexus 5500 and iSCSI - DCB
PFC (802.1Qbb) & ETS (802.1Qaz)
• The iSCSI TLV will be supported in the 5.2 release (CY12) – 3rd-party
adapters are not validated until that release
• Functions in the same manner as the FCoE TLV
• Communicates to the compatible adapter using DCBX (LLDP)
• Steps to configure:
  Configure class maps to identify iSCSI traffic
  Configure policy maps to define marking, queueing and system behaviour
  Apply the policy maps

Identify iSCSI traffic:

class-map type qos class-iscsi
  match protocol iscsi
  match cos 4

class-map type queuing class-iscsi
  match qos-group 4

policy-map type qos iscsi-in-policy
  class type qos class-fcoe
    set qos-group 1
  class type qos class-iscsi
    set qos-group 4
Nexus 5500 and iSCSI - DCB
PFC (802.1Qbb) & ETS (802.1Qaz)

Define policies to be signaled to the CNA:

policy-map type queuing iscsi-in-policy
  class type queuing class-iscsi
    bandwidth percent 10
  class type queuing class-fcoe
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 80

Define switch queue BW policies:

policy-map type queuing iscsi-out-policy
  class type queuing class-iscsi
    bandwidth percent 10
  class type queuing class-fcoe
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 80

Define the iSCSI MTU and, if a single-hop topology, no-drop behaviour:

class-map type network-qos class-iscsi
  match qos-group 4
policy-map type network-qos iscsi-nq-policy
  class type network-qos class-iscsi
    set cos 4
    pause no-drop
    mtu 9216
  class type network-qos class-fcoe

system qos
  service-policy type qos input iscsi-in-policy
  service-policy type queuing input iscsi-in-policy
  service-policy type queuing output iscsi-out-policy
  service-policy type network-qos iscsi-nq-policy
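Notice how qos-group 4 has to line up across all three policy types (qos classification, queuing, network-qos) for the iSCSI class to work end to end. A minimal sketch of a consistency check for that pattern (a helper of my own for illustration, not an NX-OS tool):

```python
# The iSCSI class above is tied together by qos-group 4 across three policy
# types. This illustrative check confirms every group referenced by a queuing
# or network-qos class-map was actually assigned by the qos classification
# policy, catching a common copy-paste mismatch.
def check_qos_groups(qos_sets, queuing_matches, network_qos_matches):
    assigned = set(qos_sets.values())
    referenced = set(queuing_matches.values()) | set(network_qos_matches.values())
    return referenced - assigned  # empty set means the config is consistent

qos_sets    = {"class-fcoe": 1, "class-iscsi": 4}  # policy-map type qos
queuing     = {"class-iscsi": 4}                   # class-map type queuing
network_qos = {"class-iscsi": 4}                   # class-map type network-qos
print(check_qos_groups(qos_sets, queuing, network_qos))  # set() -> consistent
```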
Conclusion
 You should now have a good understanding
of QoS implementation using the Nexus
Data Center switches …
 Any questions?
Recommended Reading
Please complete your Session Survey
We value your feedback
 Don't forget to complete your online session evaluations after each session.
Complete 4 session evaluations & the Overall Conference Evaluation
(available from Thursday) to receive your Cisco Live T-shirt
 Surveys can be found on the Attendee Website at www.ciscolivelondon.com/onsite
which can also be accessed through the screens at the Communication Stations
 Or use the Cisco Live Mobile App to complete the
surveys from your phone, download the app at
www.ciscolivelondon.com/connect/mobile/app.html
1. Scan the QR code
(Go to http://tinyurl.com/qrmelist for QR code reader
software, alternatively type in the access URL above)
2. Download the app or access the mobile site
3. Log in to complete and submit the evaluations
http://m.cisco.com/mat/cleu12/
Thank you.