OpenFlow Hands-on Tutorial
HOT-Interconnects 2010, Google Campus, Mountain View, CA

Yiannis Yiakoumis, Stanford
Masayoshi Kobayashi, NEC
Brandon Heller, Stanford
plus help from others during the hands-on session
Our Introductions
Your Introductions
Goals of this Tutorial
• By the end, everyone should know:
– what OpenFlow is
– how it's used and how you can use it
– where it's going
– how OpenFlow compares to other platforms
– how OpenFlow fits in the Software-Defined Networking (SDN) spectrum
• Present a useful mix of hands-on and lecture-based content
• Have fun
Agenda

| Time | Description |
|------|-------------|
| 8:30am-9:15am | Introduction: Motivation, History, Interface |
| 9:15am-10:30am | Begin Hands-on Portion (learn tools, build switch) |
| 10:30am-11:00am | SDN and Current Status: Software-Defined Networking, Virtualization, Vendors, Demos, Deployments |
| 11:00am-12:10pm | Continue Hands-on Portion (build switch, run on hardware, build router/firewall) |
| 12:10pm-12:30pm | Conclusions: Community, Next Steps, Evaluations |

• breaks will be during the hands-on portion
• feel free to ask any kind of OpenFlow question during the hands-on
Why OpenFlow?
Research Stagnation
• Lots of deployed innovation in other areas
– OS: filesystems, schedulers, virtualization
– DS: DHTs, CDNs, MapReduce
– Compilers: JITs, vectorization
• Networks are largely the same as years ago
– Ethernet, IP, WiFi
• Rate of change of the network seems slower in comparison
– Need better tools and abstractions to demonstrate and deploy
Closed Systems (Vendor Hardware)
• Stuck with interfaces (CLI, SNMP, etc.)
• Hard to meaningfully collaborate
• Vendors starting to open up, but not usefully
• Need a fully open system – a Linux equivalent
Open Systems

| | Performance | Scale | Fidelity (Real User Traffic?) | Complexity | Open |
|---|---|---|---|---|---|
| Simulation | medium | medium | no | medium | yes |
| Emulation | medium | low | no | medium | yes |
| Software Switches | poor | low | yes | medium | yes |
| NetFPGA | high | low | yes | high | yes |
| Network Processors | high | medium | yes | high | yes |
| Vendor Switches | high | high | yes | low | no |

There is a gap in the tool space: none have all the desired attributes!
Ethane, a precursor to OpenFlow
Centralized, reactive, per-flow control
[Figure: a central Controller manages a network of Flow Switches connecting Host A and Host B]
See the Ethane SIGCOMM 2007 paper for details
OpenFlow: a pragmatic compromise
• + Speed, scale, fidelity of vendor hardware
• + Flexibility and control of software and simulation
• Vendors don't need to expose implementation
• Leverages hardware inside most switches today (ACL tables)
OpenFlow Example
[Figure: a Controller PC speaks the OpenFlow protocol (SSL/TCP) to an OpenFlow client in the software layer of an Ethernet switch; the client programs the flow table in the hardware layer. The switch has ports 1-4, with hosts 1.2.3.4 and 5.6.7.8 attached. An example flow table entry:]

| MAC src | MAC dst | IP Src | IP Dst | TCP sport | TCP dport | Action |
|---|---|---|---|---|---|---|
| * | * | * | 5.6.7.8 | * | * | port 1 |
OpenFlow Basics: Flow Table Entries
Each entry has three parts: a Rule, an Action, and Stats (packet + byte counters).

Actions:
1. Forward packet to port(s)
2. Encapsulate and forward to controller
3. Drop packet
4. Send to normal processing pipeline
5. Modify fields

Rule fields (+ mask indicating which fields to match): Switch Port, VLAN ID, MAC src, MAC dst, Eth type, IP Src, IP Dst, IP Prot, L4 sport, L4 dport
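To make the Rule/Action/Stats structure concrete, here is a minimal Python sketch (illustrative only; the field names are ours, not OpenFlow's wire format or any real switch code). Omitted match fields act as wildcards, and the counters live with the entry:

```python
# Minimal sketch of a flow table entry: Rule (match), Action, Stats.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FlowEntry:
    match: Dict[str, object]   # field -> value; a missing field is a '*' wildcard
    actions: List[str]         # e.g. ["output:6"], ["controller"], ["drop"]
    packets: int = 0           # per-entry packet counter (Stats)
    bytes: int = 0             # per-entry byte counter (Stats)

    def matches(self, pkt: Dict[str, object]) -> bool:
        # Every non-wildcarded field must equal the packet's value.
        return all(pkt.get(k) == v for k, v in self.match.items())

def lookup(table: List[FlowEntry], pkt: Dict[str, object]):
    """Return the first matching entry and update its counters."""
    for entry in table:
        if entry.matches(pkt):
            entry.packets += 1
            entry.bytes += int(pkt.get("len", 0))
            return entry
    return None  # table miss: a real switch would send the packet to the controller
```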
Examples: Switching

| Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport | Action |
|---|---|---|---|---|---|---|---|---|---|---|
| * | * | 00:1f:.. | * | * | * | * | * | * | * | port6 |

Flow Switching

| Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport | Action |
|---|---|---|---|---|---|---|---|---|---|---|
| port3 | 00:20.. | 00:1f.. | 0800 | vlan1 | 1.2.3.4 | 5.6.7.8 | 4 | 17264 | 80 | port6 |

Firewall

| Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport | Action |
|---|---|---|---|---|---|---|---|---|---|---|
| * | * | * | * | * | * | * | * | * | 22 | drop |

Examples: Routing

| Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport | Action |
|---|---|---|---|---|---|---|---|---|---|---|
| * | * | * | * | * | * | 5.6.7.8 | * | * | * | port6 |

VLAN Switching

| Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport | Action |
|---|---|---|---|---|---|---|---|---|---|---|
| * | * | 00:1f.. | * | vlan1 | * | * | * | * | * | port6, port7, port9 |
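The same example entries can be rendered as plain Python data for a self-contained sketch (field names are illustrative; a real deployment would install them via dpctl or a controller). Note the effect of ordering under simple first-match semantics: the firewall's drop rule has to be checked before the broader forwarding rules, or port-22 traffic would be forwarded.

```python
# The example flow entries above as Python dicts (illustrative sketch only).
# Omitted fields are wildcards; matching here is linear, first-match-wins.
TABLE = [
    ({"l4_dport": 22}, "drop"),                                       # firewall
    ({"in_port": "port3", "mac_src": "00:20..", "mac_dst": "00:1f..",
      "eth_type": "0800", "vlan_id": "vlan1", "ip_src": "1.2.3.4",
      "ip_dst": "5.6.7.8", "ip_proto": 4, "l4_sport": 17264,
      "l4_dport": 80}, "port6"),                                      # flow switching
    ({"mac_dst": "00:1f..", "vlan_id": "vlan1"}, "port6,port7,port9"),# VLAN switching
    ({"mac_dst": "00:1f.."}, "port6"),                                # switching
    ({"ip_dst": "5.6.7.8"}, "port6"),                                 # routing
]

def action_for(pkt):
    """First-match lookup over TABLE; a miss goes to the controller."""
    for match, action in TABLE:
        if all(pkt.get(k) == v for k, v in match.items()):
            return action
    return "controller"

# e.g. SSH traffic is dropped regardless of destination:
assert action_for({"l4_dport": 22, "mac_dst": "00:1f.."}) == "drop"
```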
Centralized vs Distributed Control
Both models are possible with OpenFlow.
• Centralized Control: a single controller manages all OpenFlow switches.
• Distributed Control: multiple controllers each manage a subset of the OpenFlow switches.
Flow Routing vs. Aggregation
Both models are possible with OpenFlow.

Flow-Based:
• Every flow is individually set up by the controller
• Exact-match flow entries
• Flow table contains one entry per flow
• Good for fine-grain control, e.g. campus networks

Aggregated:
• One flow entry covers large groups of flows
• Wildcard flow entries
• Flow table contains one entry per category of flows
• Good for large numbers of flows, e.g. backbone
Reactive vs. Proactive (pre-populated)
Both models are possible with OpenFlow.

Reactive:
• First packet of a flow triggers the controller to insert flow entries
• Efficient use of flow table
• Every flow incurs a small additional flow setup time
• If the control connection is lost, the switch has limited utility

Proactive:
• Controller pre-populates the flow table in the switch
• Zero additional flow setup time
• Loss of the control connection does not disrupt traffic
• Essentially requires aggregated (wildcard) rules

A sketch of the reactive pattern follows.
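Here is the reactive model as a toy learning switch. The controller and switch APIs are made up for illustration (this is not NOX or any real controller framework): the first packet of a flow misses in the flow table and reaches the controller, which learns the source and installs an entry so later packets stay in the datapath.

```python
# Toy reactive controller (hypothetical API, for illustration only).
class StubSwitch:
    """Stand-in datapath so the sketch can run without real hardware."""
    def flood(self, pkt):              print("flood", pkt)
    def send(self, pkt, port):         print("send", pkt, "-> port", port)
    def install(self, match, actions): print("install", match, actions)

class ReactiveController:
    def __init__(self, switch):
        self.switch = switch
        self.mac_to_port = {}                       # learned MAC -> port

    def on_packet_in(self, in_port, pkt):
        # Called only on a flow-table miss (the reactive case).
        self.mac_to_port[pkt["mac_src"]] = in_port  # learn the source
        out = self.mac_to_port.get(pkt["mac_dst"])
        if out is None:
            self.switch.flood(pkt)                  # unknown destination
        else:
            # Pay the setup cost once: install an entry, then forward.
            self.switch.install(match={"mac_dst": pkt["mac_dst"]},
                                actions=["output:%d" % out])
            self.switch.send(pkt, out)

ctl = ReactiveController(StubSwitch())
ctl.on_packet_in(1, {"mac_src": "a", "mac_dst": "b"})  # miss: floods
ctl.on_packet_in(2, {"mac_src": "b", "mac_dst": "a"})  # installs entry for "a"
```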
Examples of OpenFlow in Action
• VM migration across subnets
• mobility
• energy-efficient data center network
• WAN aggregation
• network slicing
• default-off network
• scalable Ethernet
• scalable data center network
• load balancing
• formal model solver verification
• distributing FPGA processing
More detail on demos to come later.
What OpenFlow Can't Do (1)
• Non-flow-based (per-packet) networking
– e.g. wireless protocols (CTP), per-packet processing (sample 1% of packets)
– yes, this is a fundamental limitation
– BUT OpenFlow can provide the plumbing to connect these systems
• Use all tables on switch chips
– yes, a major limitation (cross-product issue)
– BUT an upcoming OF version will expose these
What OpenFlow Can't Do (2)
• New forwarding primitives
– BUT provides a nice way to integrate them
• New packet formats/field definitions
– BUT a generalized OpenFlow (2.0) is on the horizon
• Optical circuits
– BUT efforts are underway to apply the OpenFlow model to circuits
• Low-setup-time individual flows
– BUT can push down flows proactively to avoid delays
– Only a fundamental issue when speed-of-light delays are large
– Getting better with time
Where it's going
• OF v1.1: extensions for WAN, late 2010
– multiple tables: leverage additional tables
– tags and tunnels
– multipath forwarding
• OF v2+
– generalized matching and actions: an "instruction set" for networking
Tutorial Flow
1. set up VM, learn tools, run provided hub
2. turn hub into an Ethernet switch
3. modify switch to push down flows
4. add multiple switch support
5. add firewall capabilities to switch
6. modify switch to handle IP forwarding
7. run switch on a real network
8. on your own...
Stuff you'll use
• NOX
• Reference Controller/Switch
• Open vSwitch
• Mininet
• cbench
• iperf
• tcpdump
• Wireshark
Tutorial Layout
OpenFlow Tutorial: 3-hosts-1-switch topology
[Figure: controller c0 listens on loopback (127.0.0.1:6633); switch s1 is a user-space OpenFlow process, reachable by dpctl at 127.0.0.1:6634; virtual hosts h2 (10.0.0.2), h3 (10.0.0.3), and h4 (10.0.0.4) attach to s1 via virtual ethernet pairs (s1-eth0/h2-eth0, s1-eth1/h3-eth0, s1-eth2/h4-eth0).]
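The tutorial VM typically starts a topology like this from the command line (something like `sudo mn --topo single,3 --controller remote`); the equivalent can also be done from Mininet's Python API. A sketch, assuming a standard Mininet install (exact host names and IPs may differ from the figure, which numbers hosts h2-h4):

```python
#!/usr/bin/env python
# Sketch: recreate the 1-switch, 3-host tutorial topology with Mininet's
# Python API and point it at a controller listening on loopback:6633.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo

net = Mininet(topo=SingleSwitchTopo(3),  # s1 plus three hosts on veth pairs
              controller=lambda name: RemoteController(
                  name, ip='127.0.0.1', port=6633))
net.start()
net.pingAll()  # exercises the datapath; succeeds once a controller is running
net.stop()
```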
Virtualized Network Setup
[Figure: controllers 1..N sit above a FlowVisor, which controls OF Switches 1-3; wireless clients join via SSID "OpenFlow Tutorial" and reach a web server. The OpenFlow connections form the control plane; the switches form the datapath.]
Hands-on Tutorial
First Hands-on Session
Software-Defined Networking (SDN) and OpenFlow
Current Internet: Closed to Innovations in the Infrastructure
[Figure: many closed boxes, each a proprietary operating system running apps on specialized packet forwarding hardware, with no open interfaces between them]
"Software Defined Networking" approach to open it
[Figure: the same hardware boxes, but apps now run on a common Network Operating System that controls every specialized packet forwarding element]
The "Software-Defined Network"
1. Open interface to hardware
2. At least one good network operating system; extensible, possibly open-source
3. Well-defined open API for apps
[Figure: apps running on a Network Operating System that drives simple packet forwarding hardware through the open interface]
Isolated "slices"
Many operating systems, or many versions.
[Figure: Network Operating Systems 1-4, each running its own apps, share the same simple packet forwarding hardware through a virtualization or "slicing" layer, with open interfaces above and below]
Virtualizing OpenFlow
Switch-based virtualization: exists for NEC and HP switches, but not flexible enough for GENI.
[Figure: production VLANs use the normal L2/L3 processing; Research VLAN 1 and Research VLAN 2 each get their own flow table and controller.]

FlowVisor-based virtualization:
[Figure: Heidi's, Aaron's, and Craig's controllers each speak the OpenFlow protocol to a FlowVisor (with policy control), which in turn speaks OpenFlow to the switches.]
Use Case: VLAN-Based Partitioning
• Basic idea: partition flows based on ports and VLAN tags
– Traffic entering the system (e.g. from end hosts) is tagged
– VLAN tags are consistent throughout the substrate

| Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport |
|---|---|---|---|---|---|---|---|---|---|
| * | * | * | * | 1,2,3 | * | * | * | * | * |
| * | * | * | * | 4,5,6 | * | * | * | * | * |
| * | * | * | * | 7,8,9 | * | * | * | * | * |
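As a sketch of what this partitioning means operationally (an illustrative Python data model, not FlowVisor's actual configuration format): each slice owns the flowspace whose VLAN ID falls in its set, and switch events are delegated to the owning controller.

```python
# Illustrative flowspace-slicing policy (not FlowVisor's real config format;
# slice names and controller addresses are made up).
SLICES = {
    "research1":  {"vlan_ids": {1, 2, 3}, "controller": "tcp:10.0.0.1:6633"},
    "research2":  {"vlan_ids": {4, 5, 6}, "controller": "tcp:10.0.0.2:6633"},
    "production": {"vlan_ids": {7, 8, 9}, "controller": "tcp:10.0.0.3:6633"},
}

def controller_for(pkt):
    """Delegate a switch event to the controller owning its VLAN flowspace."""
    for name, s in SLICES.items():
        if pkt.get("vlan_id") in s["vlan_ids"]:
            return s["controller"]
    return None  # outside every slice: the slicing layer handles or drops it
```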
FlowVisor-Based Virtualization
Separation not only by VLANs, but by any L1-L4 pattern (e.g. broadcast, multicast, http, load-balancer).
[Figure: per-pattern slices, each with its own controller, connect through the FlowVisor and policy control to the OpenFlow switches.]
Use Case: New CDN - Turbo Coral ++
• Basic idea: build a CDN where you control the entire network
– All traffic to or from Coral IP space controlled by experimenter
– All other traffic controlled by default routing
– Topology is the entire network
– End hosts are automatically added (no opt-in)

| Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport |
|---|---|---|---|---|---|---|---|---|---|
| * | * | * | * | * | 84.65.* | * | * | * | * |
| * | * | * | * | * | * | 84.65.* | * | * | * |
| * | * | * | * | * | * | * | * | * | * |
Use Case: Aaron's IP
– A new layer 3 protocol
– Replaces IP
– Defined by a new Ether Type

| Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport |
|---|---|---|---|---|---|---|---|---|---|
| * | * | * | AaIP | * | * | * | * | * | * |
| * | * | * | !AaIP | * | * | * | * | * | * |
Demo Previews
Videos (go to openflow.org, click on the right-side link)

OpenFlow Demonstration Overview

| Topic | Demo |
|---|---|
| Network Virtualization | FlowVisor |
| Hardware Prototyping | OpenPipes |
| Load Balancing | PlugNServe |
| Energy Savings | ElasticTree |
| Mobility | MobileVMs |
| Traffic Engineering | Aggregation |
| Wireless Video | OpenRoads |
Demo Infrastructure with Slicing
[Figure: WiMax, WiFi APs, OpenFlow switches, flows, and packet processors, all sliced by the FlowVisor]

FlowVisor Creates Virtual Networks
Each demo presented here (OpenPipes, the PlugNServe load-balancer, OpenRoads, ...) runs in an isolated slice of Stanford's production network. The FlowVisor slices OpenFlow networks, creating multiple isolated and programmable logical networks on the same physical topology.
OpenPipes
• Plumbing with OpenFlow to build hardware systems
– Partition hardware designs
– Mix resources
– Test
Plug-n-Serve: Load-Balancing Web Traffic using OpenFlow
Goal: load-balancing requests in unstructured networks

What we are showing:
 OpenFlow-based distributed load-balancer
 Smart load-balancing based on network and server load
 Allows incremental deployment of additional resources

OpenFlow means...
 Complete control over traffic within the network
 Visibility into network conditions
 Ability to use existing commodity hardware

This demo runs on top of the FlowVisor, sharing the same physical network with other experiments and production traffic.
Dynamic Flow Aggregation on an OpenFlow Network

Scope:
• Different networks want different flow granularity (ISP, backbone, ...)
• Switch resources are limited (flow entries, memory)
• Network management is hard
• Current solutions: MPLS, IP aggregation

How does OpenFlow help?
• Dynamically define flow granularity by wildcarding arbitrary header fields (see the sketch below)
• Granularity is in the switch flow entries; no packet rewrite or encapsulation
• Create meaningful bundles and manage them using your own software (reroute, monitor)

Higher flexibility, better control, easier management, experimentation
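A sketch of one way to compute such a bundle (illustrative only, not code from the demo): keep the header fields all flows share and wildcard the rest.

```python
# Illustrative aggregation: collapse exact-match flows into one wildcard rule
# by keeping only the header fields every flow agrees on.
def aggregate(flows):
    """flows: list of dicts of header fields. Returns the common match;
    fields that differ across flows are dropped (i.e. wildcarded)."""
    common = dict(flows[0])
    for f in flows[1:]:
        common = {k: v for k, v in common.items() if f.get(k) == v}
    return common

web_flows = [
    {"ip_dst": "5.6.7.8", "l4_dport": 80, "ip_src": "1.2.3.4"},
    {"ip_dst": "5.6.7.8", "l4_dport": 80, "ip_src": "9.9.9.9"},
]
print(aggregate(web_flows))  # {'ip_dst': '5.6.7.8', 'l4_dport': 80}
```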
Intercontinental VM Migration
Moved a VM from Stanford to Japan without changing its IP. The VM hosted a video game server with active network connections.
ElasticTree: Reducing Energy in Data Center Networks
• Shuts off links and switches to reduce data center power
• Choice of optimizers to balance power, fault tolerance, and bandwidth
• OpenFlow provides network routes and port statistics
• The demo:
– Hardware-based 16-node Fat Tree
– Your choice of traffic pattern, bandwidth, optimization strategy
– Graph shows live power and latency variation
Demo credits: Brandon Heller, Srini Seetharaman, Yiannis Yiakoumis, David Underhill
OpenFlow Implementations (Switch and Controller)

Stanford Reference Implementation
• Linux-based software switch
• Released concurrently with the specification
• Kernel and user-space implementations
• Note: no v1.0 kernel-space implementation
• Limited by host PC, typically 4x 1 Gb/s
• Not targeted for real-world deployments
• Useful for development, testing
• Starting point for other implementations
• Available under the OpenFlow License (BSD-style) at http://www.openflow.org
Wireless Access Points
• Two flavors:
– OpenWRT-based
• LinkSys, TP-LINK, ...
• Binary images available from Stanford
– Vanilla software (full Linux)
• Only runs on PC Engines hardware
• Debian disk image
• Available from Stanford
• Both implementations are software-only.
NetFPGA
• NetFPGA-based implementation
– Requires PC and NetFPGA card
– Hardware accelerated
– 4 x 1 Gb/s throughput
• Maintained by Stanford University
• $500 for academics, $1000 for industry
• Available at http://www.netfpga.org
Open vSwitch
• Linux-based software switch
• Released after the specification
• Not just an OpenFlow switch; also supports VLAN trunks, GRE tunnels, etc.
• Kernel and user-space implementations
• Limited by host PC, typically 4x 1 Gb/s
• Available under the Apache License (BSD-style) at http://www.openvswitch.org
OpenFlow Vendor Hardware
Products and prototypes by market segment:
• Core router: Juniper MX-series (prototype), Cisco Catalyst 6k (prototype)
• Enterprise / campus: Cisco Catalyst 3750 (prototype), Arista 7100 series (Q4 2010), HP ProCurve 5400 and others, NEC IP8800
• Data center: Pronto
• Circuit switch: Ciena CoreDirector (prototype)
• Wireless: WiMAX (NEC)
more to follow...
HP ProCurve 5400 Series (+ others)
• Chassis switch with up to 288 ports of 1G or 48x10G (+ other interfaces available)
• Line-rate support for OpenFlow
• Deployed in 23 wiring closets at Stanford
• Limited availability for campus trials
• Contact HP for support details
Credits: Praveen Yalagandula, Jean Tourrilhes, Sujata Banerjee, Rick McGeer, Charles Clark
NEC IP8800
• 24x/48x 1GE + 2x 10GE
• Line-rate support for OpenFlow
• Deployed at Stanford
• Available for campus trials
• Supported as a product
• Contact NEC for details:
– Don Clark ([email protected])
– Atsushi Iwata ([email protected])
Credits: Atsushi Iwata, Hideyuki Shimonishi, Jun Suzuki, Masanori Takashima, Nobuyuki Enomoto, Philavong Minaxay, Shuichi Saito (NEC/NICT), Tatsuya Yabe, Yoshihiko Kanaumi (NEC/NICT)
Pronto Switch
• Broadcom-based 48x1Gb/s + 4x10Gb/s
• Bare switch - you add the software
• Supports the Stanford Indigo release
• See the openflow.org blog post for more details
Stanford Indigo Firmware for Pronto
• Source available under the OpenFlow License to parties that have an NDA with BRCM in place
• Targeted for research use and as a baseline for vendor implementations (but not direct deployment)
• No standard Ethernet switching - OpenFlow only!
• Hardware accelerated
• Supports v1.0
• Contact Dan Talayco ([email protected])
Toroki Firmware for Pronto
• Fastpath-based OpenFlow implementation
• Full L2/L3 management capabilities on switch
• Hardware accelerated
• Availability TBD
Ciena CoreDirector
• Circuit switch with experimental OpenFlow support
• Prototype only
• Demonstrated at Super Computing 2009
Juniper MX Series
• Up to 24 ports of 10GE, or 240 ports of 1GE
• OpenFlow added via the Junos SDK
• Hardware forwarding
• Deployed in Internet2 in NY and at Stanford
• Prototype
• Availability TBD
Credits: Umesh Krishnaswamy, Michaela Mezo, Parag Bajaria, James Kelly, Bobby Vandalore
Cisco 6500 Series
• Various configurations available
• Software forwarding only
• Limited deployment as part of demos
• Availability TBD
• Work on other Cisco models in progress
Credits: Pere Monclus, Sailesh Kumar, Flavio Bonomi
NOX Controller
• Available at http://NOXrepo.org
• Open source (GPL)
• Modular design, programmable in C++ or Python
• High-performance (usually the switches are the limit)
• Deployed as the main controller in Stanford
Credits: Martin Casado, Scott Shenker, Teemu Koponen, Natasha Gude, Justin Pettit
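For flavor, here is a skeleton of a NOX Python component, reconstructed from memory of the NOX 0.x API (names and the packet-in handler signature may differ between releases; the pyswitch example shipped with NOX is the authoritative reference):

```python
# Skeleton NOX component (reconstructed from memory of the NOX 0.x Python
# API; treat names and signatures as approximate and check pyswitch in the
# NOX tree before relying on them).
from nox.lib.core import Component, CONTINUE

class Tutorial(Component):
    def __init__(self, ctxt):
        Component.__init__(self, ctxt)

    def install(self):
        # Ask NOX to deliver packet-in events to our handler.
        self.register_for_packet_in(self.handle_packet_in)

    def handle_packet_in(self, dpid, inport, reason, length, bufid, packet):
        # Flow setup / forwarding decisions would go here.
        return CONTINUE

    def getInterface(self):
        return str(Tutorial)

def getFactory():
    class Factory:
        def instance(self, ctxt):
            return Tutorial(ctxt)
    return Factory()
```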
Simple Network Access Control (SNAC)
• Available at http://NOXrepo.org
• Policy + Nice GUI
• Branched from NOX long ago
• Available as a binary
• Part of Stanford deployment
Stanford Reference Controller
• Comes with reference distribution
• Monolithic C code – not designed for extensibility
• Ethernet flow switch or hub
Deployments

OpenFlow Deployment at Stanford
• Switches (23)
• APs (50)
• WiMax (1)

Live Stanford Deployment Statistics
http://yuba.stanford.edu/ofhallway/wide-ofv1.html
GENI OpenFlow deployment (2010)
8 universities and 2 national research backbones

Three EU projects similar to GENI: OFELIA, SPARC, CHANGE
Pan-European experimental facility
[Figure: per-island capabilities across the facility - L2 packet, emulation, wireless, content delivery; L2 packet, wireless, routing; L2 packet, optics, content delivery; L2 packet, shadow networks; L2/L3 packet, optics, content delivery]
Other OpenFlow deployments
• Japan - 3-4 universities interconnected by JGN2plus
• Interest in Korea, China, Canada, ...
An Experiment of an OpenFlow-enabled Network (Feb. 2009 - Sapporo Snow Festival Video Transmission)
[Figure: the KOREA OpenFlow Network (Seoul, Suwon, Daejeon, Daegu, Gwangju, Busan) runs OpenFlow switches (Linux PCs) under a NOX OpenFlow controller, connected over a VLAN on KOREN. A video clip of the Sapporo snow festival is transmitted from a studio in Sapporo, Japan to the TJB Broadcasting Company (Daejeon, KOREA) via the Asahi Broadcasting Corporation (ABC) server in Osaka, JAPAN.]
Highlights of Deployments
• Stanford deployment
– McKeown group for a year: production and experiments
– To scale later this year to entire building (~500 users)
• Nation-wide trials and deployments
– 7 other universities and BBN deploying now
– GEC9 in Nov 2010 will showcase nation-wide OF
– Internet2 and NLR to deploy before GEC9
• Global trials
– Over 60 organizations experimenting

2010 is likely to be a big year for OpenFlow.
Current Trials
• 68 trials/deployments spanning 13 countries
Hands-on Session 2
Second Hands-On Session
Getting Involved
Interest in OpenFlow
[Figure: live website access statistics]
Ways to get involved
• Ask and answer questions on mailing lists:
– openflow-discuss
– openflow-spec
• Share and update wiki content
• Submit bug reports and/or patches to the OF reference implementation

Are you innovating in your network?
Slide Credits
• Guido Appenzeller
• Nick McKeown
• Guru Parulkar
• Rob Sherwood
• Lots of others

Thanks!
http://www.openflow.org/
Before leaving, fill out the survey (see link at the top of the wiki page)