ITM 13.1: Converging Optics, Switching and Routing in the Next-Gen Data Center
Scott Wilkinson
Sr. Director of Technical Marketing
Data Center World – Certified Vendor Neutral
Each presenter is required to certify that their presentation will be vendor-neutral.
As an attendee, you have the right to enforce this no-sales-pitch policy by alerting the speaker if you feel the session is not being presented in a vendor-neutral fashion. If the issue continues to be a problem, please alert Data Center World staff after the session is complete.
ITM 13.1: Converging Optics, Switching and Routing in the Next-Gen Data Center
Session Description
Through the integration of the optical and packet layers, as well as automation of the control and data plane functions, new designs are possible that address the requirements of next-generation data center internetworking.
Agenda
What are DC operators saying
Optical Technologies
Packet Technologies
Convergence
SDN/NFV
IP is the end-to-end protocol
IP-related technology: fiber, WDM, Ethernet, and MPLS
SDN for intra- and inter-data-center networks
NFV will partially displace SBCs, route reflectors, and security functions
Service orchestration requires focus and development
Complexity is Increasing
• More locations
• More services
• More complex services
• Faster service turn-up
• Higher bandwidth
• New business models
Business Models Proliferate
(Diagram: over time, operators move from owning to renting to outsourcing each layer of the stack: Real Estate & Power, Connectivity & Bandwidth, Servers & Storage, Operating Systems & Applications.)
Comparing Leasing Dark Fiber vs. Buying Lit Services – Buy Side
Data Center Challenges
Protocol Complexity
• VLAN and WAN scalability issues
• Tunnel & overlay complexity
Operational Friction
• Manual network configuration
• Slow service creation & repair
Legacy Equipment Integration
• Virtual machine and virtual node alignment
• SDN and NFV are difficult to apply to legacy equipment
Agenda
What are DC operators saying
Optical Technologies
Packet Technologies
Convergence
SDN/NFV
Fiber Takes Center Stage
Speed is Everything
10G Becomes Common
(Chart: number of ports per year, CY05–CY17, for sub-10G, 10G, 40G, and 100G across enterprise/access and data centers. Source: Infonetics.)
10Gbps today is what 1Gbps was two years ago and what T1/E1 was ten years ago.
10Gbps is becoming the technology of choice for extending enterprise services to customers.
10Gbps is the standard for data center interconnect and is taking over inside the data center.
10G ROI
10Gbps point-to-point service delivery: either enterprise service delivery to an end customer or a data center interconnect service between sites, with fiber optimization required.
Solution: optical transport
• Approximately $10k startup cost (chassis, commons) for both ends
• Approximately $8k per 10Gbps service cost for both ends
• Revenue of ~$6k per 10Gbps service per month
• ROI breakeven of approximately 2 months
• ~$2 million revenue in Year 1 at a deployment rate of 1 service added per month
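As a sanity check on these figures, here is a minimal breakeven sketch. It is an illustration only: the `breakeven_month` helper and the linear one-service-per-month ramp are assumptions layered on the slide's numbers, not the presenter's model.

```python
# Hedged sketch: when does cumulative service revenue cover cumulative
# equipment cost? Inputs mirror the slide's 10G figures; the ramp
# (one new service per month) is the slide's stated deployment rate.

def breakeven_month(startup_cost, cost_per_service, revenue_per_service,
                    services_per_month=1, horizon=36):
    """First month where cumulative revenue >= cumulative cost, else None."""
    cum_cost = startup_cost
    cum_revenue = 0
    active = 0
    for month in range(1, horizon + 1):
        cum_cost += services_per_month * cost_per_service  # gear for new services
        active += services_per_month
        cum_revenue += active * revenue_per_service        # all live services bill
        if cum_revenue >= cum_cost:
            return month
    return None

# 10G case: $10k startup, $8k per service, ~$6k/month per service
print(breakeven_month(10_000, 8_000, 6_000))
```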
100Gbps Becomes Economical
100Gbps standardization and demand have driven costs down much faster than expected.
• Coherent single-wavelength solutions are nearing the cost and size of multi-wavelength solutions, which simplifies network design
• The move to pluggable optics throughout the industry separates platform costs from optics costs
• Metro and gray optics have been available for several years; coherent pluggables first became available in 2014
Optics are becoming smaller:
Date      | Access | Long Haul
2014      | CFP2   | CFP
2015      | CFP4   | CFP2
2016/2017 | QSFP   | CFP4
100G ROI
100Gbps point-to-point service delivery: either enterprise service delivery to an end customer or a data center interconnect service between sites, with fiber optimization required.
Solution: optical transport
• Approximately $10k startup cost (chassis, commons) for both ends
• Approximately $47k per 100Gbps service cost for both ends with metro optics ($87k with coherent optics)
• Revenue of ~$25k per 100Gbps service per month
• ~$4 million revenue in Year 1 at a realistic deployment rate of 1 service added per month
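The same hypothetical `breakeven_month` helper from the 10G sketch covers the 100G case by swapping in these inputs:

```python
# 100G case: $10k startup, $47k per service with metro optics
# ($87k with coherent), ~$25k/month revenue per service.
print(breakeven_month(10_000, 47_000, 25_000))  # metro optics
print(breakeven_month(10_000, 87_000, 25_000))  # coherent optics
```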
Wavelength Division Multiplexing
WDM is increasingly an option within and outside the data center for capacity expansion ("virtual fiber").
WDM has traditionally been a rigid, fixed technology, but new, flexible wavelength grids have been developed that allow programmability at the WDM layer:
• No longer a fixed 50GHz grid
• 12.5GHz channels defined that can be combined arbitrarily
• Especially useful for higher-bandwidth channels
• Essential for flexible modulation schemes
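To make the flex-grid idea concrete, here is a small sketch of the channel arithmetic. The 6.25 GHz central-frequency step and 193.1 THz anchor come from my reading of the ITU-T G.694.1 flexible grid, not from the slides, and the example bandwidths are illustrative.

```python
# Flexible-grid arithmetic: a channel occupies m x 12.5 GHz slot widths,
# centered at 193.1 THz + n x 6.25 GHz (ITU-T G.694.1 flex grid).
import math

SLOT_GHZ = 12.5          # slot-width granularity
CENTER_STEP_GHZ = 6.25   # central-frequency granularity
ANCHOR_THZ = 193.1       # grid anchor frequency

def slots_needed(signal_bw_ghz: float) -> int:
    """Smallest number of 12.5 GHz slots covering the signal bandwidth."""
    return math.ceil(signal_bw_ghz / SLOT_GHZ)

def center_freq_thz(n: int) -> float:
    """Nominal central frequency for grid index n."""
    return ANCHOR_THZ + n * CENTER_STEP_GHZ / 1000.0

# A 37.5 GHz coherent carrier fits in 3 slots; a 75 GHz superchannel
# takes 6 -- combinations a fixed 50 GHz grid cannot express.
print(slots_needed(37.5), slots_needed(75.0), center_freq_thz(4))
```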
Dynamic ROADMs
Once WDM is introduced, routing of optical wavelengths is essential for complete flexibility.
Reconfigurable Optical Add/Drop Multiplexers (ROADMs) allow all-optical wavelength re-routing.
Colorless, directionless, and contentionless (CDC) technologies are recently developed but now widely deployed.
Media Conversion
(Example: 100GBASE-SR10 on one side, 100GBASE-LR4 on the other.)
Application Requirements
• Two switches/routers with fixed and unmatched 100G interface types
• Relatively short distance between the switches/routers
• Cost-sensitive application
• Multiple media conversion needs in the same location
• Other 100G needs in the same location
Benefits
• Single-card converter; no need to use back-to-back transponders
• Density: multiple media converters per chassis
• Low cost
• Bonus: a single-slot card leaves space for other services
Multiservice Data Center Interconnect
(Up to 800Gbps total.)
Application Requirements
• Dedicated fiber pair available
• New service initiation, or rollover of existing services for fiber consolidation
• Multiprotocol & multirate
• Distances from a few km to 130km (170km with Raman amplification)
• Redundant or non-redundant
• All services 10Gbps or less: 4G/8G FC/FICON, 10Gbps InfiniBand, OC-48/192, 10GbE
Benefits
• Same as for Application #1, PLUS:
• Reduced complexity
• Reduced spares
• Can transport low-speed services on the same fiber
10G x 10G Consolidation on 100G
(10 x 10GbE per wavelength; 100Gbps to 8Tbps total; spans of 20km to 1000km.)
Application Requirements
• Up to 800 10GbE services
• Desire to use fewer fibers
• Consolidate existing services
• New services on limited fiber spans
• 100G pluggable optics chosen based on network requirements
• 20km to 1000km maximum distance
• 100Gbps to 8Tbps bandwidth
Benefits
• Low cost
• Low power
• Fiber efficient
10G & 40G Consolidation on 100G
(Same optical span options as the 10GbE cases; any mix of 10GbE and 40GbE over gray, metro, or coherent optics.)
Application Requirements
• Same requirements as 10x10GbE multiplexing, but add 40GbE to the mix
• Any mix of 40GbE and 10GbE, without restriction
• May or may not know in advance which 40GbE interfaces are required
Benefits
• Benefits of the previous applications, PLUS:
• All 40G QSFP+ types supported; no need for extra hardware
• 40G handled natively, not as 4x10G
• No need to plan the 40/10 mix in advance; the same hardware supports any mix of 10GbE and 40GbE
• Use pluggable optics
ROADM on Resilient Add/Drop Ring
Application Requirements
• Customer base with changing bandwidth requirements and locations
• The demand is not at 100% today, and future demand is unpredictable
• The network has more than 2 nodes, and more nodes may need to be added
• Desire to sell wavelength services to end customers
Benefits
• Easy to change large demands quickly
• Respond to customer demands and offer new, dynamic services to increase revenue possibilities
• Easy to add capacity on ring/mesh-type networks
• Respond to new customer requests quickly for faster time to revenue
• Easy to manipulate alien wavelengths
• Offer new services that monetize unused assets
Meshed Content Delivery Network
Application Requirements
• Centralized content distribution with remote cache sites
• Traffic demand changes daily, weekly, or gradually over time
• Bandwidth between sites needs to exist at all times
• Bandwidth between specific sites needs to be large during cache backfilling
• Location of sites to be backfilled changes regularly
• Up to 4 directions per node (East, West, North, South)
Benefits
• ROADMs allow enormous amounts of bandwidth to be shifted quickly and painlessly
• 100Gbps integration allows efficient use of wavelengths
• Intelligent control plane makes path setup and teardown simple
• "Virtualization" of a complex optical network to higher-order applications
Agenda
What are DC operators saying
Optical Technologies
Packet Technologies
Convergence
SDN/NFV
Traffic and Processing Needs Increase
Applications x VMs x Traffic = Compute Cycles
(Chart: CPU cycles/packet for increasingly demanding processing, with values ranging from 70 to 26,000.)
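A minimal sketch of the multiplication on this slide, assuming compute demand is simply the product of applications, VMs per application, per-VM packet rate, and per-packet cost. The function name and all input values are illustrative:

```python
# Illustrative sketch of "Applications x VMs x Traffic = Compute Cycles".
# Inputs are placeholders, not measurements from the chart.

def compute_cycles_per_sec(applications, vms_per_app,
                           packets_per_sec_per_vm, cycles_per_packet):
    """Total CPU cycles/second implied by the slide's multiplication."""
    return applications * vms_per_app * packets_per_sec_per_vm * cycles_per_packet

# 10 apps x 20 VMs x 50k pps x 1,500 cycles/packet = 1.5e10 cycles/s,
# i.e. roughly five fully loaded 3 GHz cores of packet processing.
print(compute_cycles_per_sec(10, 20, 50_000, 1_500))
```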
Packet Technologies
(Diagram: customer C-VLANs carried inside provider S-VLANs across a VPLS cloud, under common network/service management.)
Key Packet Solution Elements
• Data Plane: L2 in the access and VPLS in the core
• OAM: end-to-end via MEF SOAM; VPLS OAM via BFD and VCCV
• Protection: G.8032 and G.8031 in the access; FRR (RFC 4090) in the VPLS cloud
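As a small illustration of the S-VLAN/C-VLAN stacking in the diagram, here is a Scapy sketch of a QinQ frame. The MAC addresses, VLAN IDs, and destination are arbitrary; 0x88A8 is the 802.1ad S-tag ethertype:

```python
# QinQ frame: provider S-VLAN on the outside, customer C-VLAN inside.
from scapy.all import Ether, Dot1Q, IP, Raw

frame = (Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb", type=0x88A8)
         / Dot1Q(vlan=1000)   # S-VLAN: provider service tag
         / Dot1Q(vlan=42)     # C-VLAN: customer tag
         / IP(dst="192.0.2.1")
         / Raw(b"payload"))

frame.show()  # inspect the stacked tags the VPLS core will carry
```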
SDN and NFV
SDN and NFV are expected to be a significant share of the packet market by 2018.
The value of NFV is in the virtualized network function software—the applications—rather than the orchestration and control; VNFs make up over 90% of the NFV software segment.
SDN and NFV exemplify the telecom industry's shift from hardware to software: SDN and NFV software are projected to make up three-quarters of total SDN/NFV revenue in 2018.
Software Defined Networking
• Network modularity and virtualization achieved by centralized, software-based control and programmable network devices
• Driven by university and cloud data center operators, steered by the ONF – a consortium of more than 100 members (Board: Google, Facebook, Verizon, DT, Microsoft, Berkeley, ...)
Network Function Virtualization
• Virtualization of dedicated network appliances via software on servers
• Driven by carriers, steered by ETSI NFV – a consortium of 150 operators, equipment vendors, and IT companies (Board: Verizon, Telefonica, NTT, DT, BT)
The Move to Programmable Hardware
• An FPGA is like a chip that can be coded and upgraded, but it is limited in space and expensive.
• The fastest processing is in the packet processor, but adding new features requires a new chip.
• The CPU is the most flexible and easiest to upgrade, but it is also the slowest.
X86 Processing in the Network
(Diagram: x86 processing placed at the customer premises, in metro aggregation, at the CO/PoP, and in the IP/MPLS network, with managed services coordinated by an SDN controller.)
Agenda
What are DC operators saying
Optical Technologies
Packet Technologies
Convergence
SDN/NFV
Converging Equipment Layers (L0 through L3)
ROADM / Packet / OTN Element (POTP)
• Fully functional Layer 0/1/2
• Wavelength and OTN router bypass
• Multi-layer core or metro core switch
Packet
• 100GbE interfaces
• 10GbE interfaces
• Packet switching
• Layer 2 / Layer 3 protocols
OTN
• Dense high-bitrate line cards
• Dense low-bitrate client cards
• OTN switching
ROADM
• Wavelength switching
• DWDM and CWDM integration
Packet / OTN Element
• Packet-to-OTN translation and encapsulation
• OTN-layer router bypass
• OTN/packet aggregation
Packet / ROADM Element
• Packet access to the dynamic optical network
• Wavelength-layer router bypass
OTN / ROADM Element
• OTN access to the dynamic optical network
• Core OTN pre-aggregation
Convergence: Services
The first priority is providing services to end customers (not just bandwidth).
• Putting colored optics into a router, while technically meeting the definition, does not count as layer convergence
• True packet/optical systems have intelligent control across all layers
• Programmability is starting to be tested in some networks and will lead to convergence in the future
Example
• Centralized content distribution with remote cache sites
• Traffic demand changes daily, weekly, or gradually over time
– Bandwidth between sites needs to exist at all times
– Bandwidth between specific sites needs to be large during cache backfilling
– Location of sites to be backfilled changes regularly
Convergence: Automation
The second priority is to reduce operational expenses by automating the network.
• This is one of the primary goals for SDN at the packet/service layer
• Optical/transport SDN functionality is slowly developing
• Stackable hardware for fast bandwidth increases
• Flexible muxponder options to quickly integrate services onto wavelengths
• Combined service monitoring for fault isolation and troubleshooting
• Integrated signal testing at all layers
• Full Layer 1/2/3 routing and control capabilities; capabilities will not be sacrificed for convergence
• Large switch fabric; will not compete against low-cost top-of-rack and end-of-aisle switches
• Limited TDM, although OTN shows promise for low-latency and differentiated services
Convergence: Intelligence
The ultimate goal is an intelligent network that responds quickly and automatically to internal and external service demands.
• This is the point where true multi-layer integration will make the most progress
• High-capacity internal and external links create a pool of resources that can be virtualized and made responsive
• Does not necessarily require that the integration exists in a single, converged element
(Diagram: a software-defined network drawing on a shared resource pool.)
Router Bypass
• Minimize and optimize expensive core router ports
• Switch wavelengths unless OTN is required
• Switch OTN unless MPLS is required
• Only hand off to the router the traffic that needs to be routed
(Diagram: a multi-layer MPLS/OTN/wavelength network. Wavelength and large-bandwidth services, such as consolidated enterprise data or a 100GbE service carried as a wavelength, are switched at Layer 0 between ROADMs. Wavelengths with OTN sub-rate services are dropped into the OTN switch; MPLS-based services are dropped into the packet switch from the OTN switch or as wavelengths; only traffic that requires routing is consolidated and handed off to the core router. LSO is used to manage across layers.)
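A minimal sketch of the bypass decision in the bullets above, assuming each service is labeled with the highest layer it actually needs. The `Service` type and layer names are illustrative, not a product's data model:

```python
# Router bypass: keep each service at the lowest layer that satisfies it,
# so only traffic that truly needs routing consumes core router ports.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    needs_otn: bool = False      # needs OTN grooming / sub-rate muxing
    needs_mpls: bool = False     # needs MPLS/packet services
    needs_routing: bool = False  # needs an L3 lookup in the core router

def switching_layer(svc: Service) -> str:
    """Pick the cheapest sufficient layer, mirroring the slide's bullets."""
    if svc.needs_routing:
        return "core router"
    if svc.needs_mpls:
        return "packet switch (MPLS)"
    if svc.needs_otn:
        return "OTN switch"
    return "ROADM (wavelength bypass)"

for s in (Service("100GbE wavelength"),
          Service("sub-rate mux", needs_otn=True),
          Service("L2 VPN", needs_mpls=True),
          Service("Internet transit", needs_routing=True)):
    print(f"{s.name} -> {switching_layer(s)}")
```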
Converging Equipment Layers and Service Management
(Diagram: an orchestration & management layer, with policy control & rules and analytics (telemetry, reporting, inventory), spanning the packet and optical layers and driving service creation, service change, service monitoring, service troubleshooting, and service reporting.)
Agenda
What are DC operators saying
Optical Technologies
Packet Technologies
Convergence
SDN/NFV
SDN Across Data Center & WAN
SDN today: control within the data center.
SDN tomorrow: control across the WAN, with dynamic grouping of functions, elements, and VMs spanning data center and WAN.
Optical SDN Standards
Several bodies are working in this area: the ONF, OIF, ITU, IETF, and others. There are some concerns about competing standards, but for now there does not appear to be a conflict.
Open Networking Foundation (ONF)
• Optical Transport Working Group (OTWG): chartered to address SDN and OpenFlow-based control capabilities for optical transport networks
• Defines a target architecture for controlling optical transport networks
• Use cases: private enterprise or cloud optical networks, data center interconnect, packet/optical convergence
Optical Internetworking Forum (OIF)
• Programmable Virtual Network Services specification
• SDN for Transport framework document
• APIs for transport SDN
Data Center Operators Say
• 69%: faster time to market
• 41%: lower TCO
• Service agility with NFV
• Lower risk when introducing new services
• New business models: pay as you go, CapEx-to-OpEx transformation
Nick Fischbach
Director of Network & Platform Strategy and Architecture
Data Center Operators Say
Real progress was shown at the OFC conference last month
Verizon’s view is that the optical layer has too many
analog components and is moving too fast with too many
proprietary implementations to allow full optical SDN
implementation. Note that this view is not shared by
others who presented at OFC (e.g. Google, China Telecom,
and the OIF).
In a network operator study, respondents said that Layer 0
would benefit the least from SDN (Layer 3 will benefit the
most). In an updated survey, IP/Optical was listed as an
option and immediately moved to the top.
Source: Sterling Perrin, Heavy Reading
There have been demonstrations of transport SDN, including an OIF/ONF demonstration that included Ciena, Fujitsu, Coriant, NEC, ADVA, and more.
In a later presentation, a demonstration of a “prototype”
network built with China Telecom was described. This
demonstration did not use existing standards (which are
not complete yet), but was used as a way to test the
progress of proposed standards. The goal was to
“abstract” a multi-vendor optical network. Ciena and
Coriant controllers were used in the demonstration.
Source: Jonathan Sadler, Coriant / OIF
Google’s view of transport SDN is optical platforms
with no controller so that the SDN controller can talk
directly to “intelligent” transponders. Intelligent in this
case means the ability to validate the session and
respond to requests from the controller.
Source: Tad Hofmeister, Google
PacNet (recently purchased by Telstra) presented their transport
SDN solution, which is very interesting and impressive. It is
primarily a bandwidth-on-demand service, but does have some
additional elements.
PacNet has now added customer-facing APIs to allow truly
application-aware networking. Customer applications can, for
example, query the network to determine the optimal time for
lowest cost connectivity. The time from bandwidth request to
connectivity is less than 2 minutes. Customers can even request
connectivity to other customers.
Pricing models are being developed on the fly as PacNet learns
more about what their customers need. For example, the latency
charges are a new innovation where customers can specify
different latencies at different prices. Also, routes that are
underused are priced lower than routes that are highly used.
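Purely as an illustration of the kind of customer-facing bandwidth-on-demand query described above: this is not PacNet's actual API; the endpoint, fields, and response shape are all hypothetical.

```python
# Hypothetical bandwidth-on-demand request: an application asks the
# network when a 10G path between two data centers will be cheapest.
import json
import urllib.request

req = urllib.request.Request(
    "https://bod.example.net/api/quotes",  # hypothetical endpoint
    data=json.dumps({
        "a_end": "HKG-DC1",
        "z_end": "SYD-DC2",
        "bandwidth_gbps": 10,
        "max_latency_ms": 120,             # latency-tiered pricing
        "window_hours": 24,                # search for the cheapest slot
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    quote = json.load(resp)
print(quote["cheapest_start_time"], quote["price_usd"])  # hypothetical fields
```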
4 Key Things You Have Learned During this Session
1. Recent advances in optical technologies and how they apply to your network.
2. The growth of SDN/NFV as a real option at the packet layer, and soon at the optical layer.
3. How convergence of layers can improve services, automation, and network intelligence.
4. How data center operators can increasingly and dynamically change their network services to match evolving customer needs.
Thank you
Scott Wilkinson
Sr. Director, Technical Marketing
[email protected]