
Design Challenges for Next-Generation, High-Speed Ethernet: 40 and 100 GbE
Sponsored by: Ethernet Alliance®
Panel Organizer: John D'Ambrosia, Sr. Scientist, Force10 Networks;
Chair, IEEE 802.3ba Task Force
Ethernet Alliance University Program
Purpose:
• Facilitate collaboration between academia and the Ethernet industry
• Help students acquire practical perspectives on academic theories
• Encourage academia in engineering and computer science programs to become more involved in developing new Ethernet concepts

Who Benefits:
Faculty
• Speaking and press opportunities
• Publication of technical papers
• Connect with industry peers
• Potential research funding
Students
• Network with industry and standards leaders
• Contribute research studies on Ethernet technologies
• White paper challenge
• Internship program
[Slide: Academic Members, January 2009]
Panel Overview
• Ilango Ganga – Intel Corporation: High-speed server adoption
• Joel Goergen – Force10 Networks: Anatomy of a high-speed chassis
• Adam Healey – LSI Corporation: Electrical interfaces for 40 and 100 Gigabit Ethernet
• David Stauffer – IBM Corporation: Challenges surrounding higher-bandwidth serdes, channels, and backplane technologies
[Figure: Installed Base of 10 GE Port Shipments by Major Platform Type — port shipments in millions, 2001–2008, broken out by Switches, Routers, Servers, and Optical. Source: Dell'Oro Group, 2009.]
[Figure: Potential 10 GE Ports for Higher-Speed Aggregation — port shipments in millions, 2003–2008. Source: Dell'Oro Group, 2009.]
[Figure: 10 GE Server Connectivity – All Server Types — port shipments in millions, 2008–2013 forecast, split between adapter cards and ports directly on the motherboard. Source: Dell'Oro Group, 2009.]
Design Challenges for Next Gen Ethernet – Server End Station Perspective
Ilango Ganga
Communications Architect, Intel Corporation
Editor-in-Chief, IEEE P802.3ba Task Force (40GbE and 100GbE)
Computing and Networking
• 40G optimized for server/compute bandwidth and server traffic aggregation needs
• 100G optimized for network core and network aggregation needs
[Figure: Ethernet bandwidth growth projections, 1995–2020 — core networking bandwidth doubling every ≈18 months and server I/O doubling every ≈24 months, with Gigabit, 10 Gigabit, 40 Gigabit, and 100 Gigabit Ethernet rate milestones. Source: An Overview: Next Generation of Ethernet – IEEE 802 HSSG_Tutorial_1107.]
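As a rough illustration of the doubling-period argument in the figure, the short sketch below projects when demand crosses 40 and 100 Gb/s from a 10 Gb/s baseline; the 2005 baseline year is an assumption chosen only to make the arithmetic concrete.

    # Rough projection from the doubling periods cited in the figure above.
    # The 2005 baseline year and 10 Gb/s starting demand are illustrative assumptions.
    from math import log2

    def years_to_reach(target_gbps, base_gbps=10.0, doubling_months=18):
        """Years for demand to grow from base_gbps to target_gbps at the given doubling period."""
        return log2(target_gbps / base_gbps) * doubling_months / 12.0

    BASE_YEAR = 2005  # assumed year at which demand crosses 10 Gb/s

    for label, months in (("core networking (~18 mo)", 18), ("server I/O (~24 mo)", 24)):
        for target_gbps in (40, 100):
            year = BASE_YEAR + years_to_reach(target_gbps, doubling_months=months)
            print(f"{label}: demand reaches {target_gbps} Gb/s around {year:.1f}")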
Server I/O BW drivers
• Higher system processing capability
  – Multi-core processors
  – Higher-speed memory, system buses, and next-generation process technologies
• Server virtualization
  – Consolidation of multiple logical servers in a single physical server
• Clustered servers
  – Scientific, financial, oil/gas exploration, and engineering workloads
• Converged networking and storage
  – Multiple I/O connections converging to a single connection with fabric virtualization
• Internet applications
  – IPTV, Web 2.0

The transition to 10GbE and multiple 10GbE will drive the future transition to 40GbE.
System capabilities & design constraints
• System & I/O capabilities
  – Today's server systems are capable of 10GbE
  – I/O convergence is happening at 10GbE
  – Systems capable of handling multiple 10GbE from 2009 onward
  – Next-generation I/O bus upgrades (e.g., PCIe Gen3; see the check after this list)
  – Blade backplanes/midplanes are capable of multiple 10G lanes; 4-lane backplanes are scalable to 40G (KR → KR4)
• Design constraints
  – Performance
  – Cost
  – Power
  – Density (form factor/size)
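To make the next-generation I/O bus point concrete, here is a back-of-the-envelope check (not from the slides) of whether x8 host-bus slots can carry a 40GbE stream, using the published PCIe per-lane rates and line codings.

    # Back-of-the-envelope PCIe bandwidth check for a 40GbE adapter.
    # PCIe Gen2: 5 GT/s per lane, 8b/10b coding. PCIe Gen3: 8 GT/s per lane, 128b/130b coding.

    def pcie_payload_gbps(lanes, gts_per_lane, coding_efficiency):
        """Raw payload bandwidth per direction, ignoring TLP/DLLP protocol overhead."""
        return lanes * gts_per_lane * coding_efficiency

    gen2_x8 = pcie_payload_gbps(8, 5.0, 8 / 10)     # ~32 Gb/s: marginal for 40GbE
    gen3_x8 = pcie_payload_gbps(8, 8.0, 128 / 130)  # ~63 Gb/s: comfortable headroom

    print(f"PCIe Gen2 x8: {gen2_x8:.1f} Gb/s per direction")
    print(f"PCIe Gen3 x8: {gen3_x8:.1f} Gb/s per direction")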
High-speed LAN controllers
• Today's 10G LAN controllers handle more and more advanced packet processing in hardware, for example:
  – Packet classification
  – I/O virtualization
  – Protocol offloads
  – MAC/serdes
  – Dual-port handling
• Design challenges for packet processing capabilities at 40G speeds (quantified in the sketch after this list):
  – Fixed power constraints for PCI adapters and blade adapters
  – Advanced packet processing at multiple 10G (e.g., 4x10G) and 40G
  – Integration of 40G MAC and serdes technologies; can leverage multiple 10G serdes technology
  – Host bus upgrades to next-generation system I/O speeds
  – Convergence of NIC/HBA/virtualization models in a single controller
  – SW challenges to scaling
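The packet-processing challenge can be quantified: at line rate with minimum-size frames, the controller has only a handful of nanoseconds per packet. The sketch below uses standard Ethernet framing arithmetic, not figures from the presentation.

    # Worst-case packet rate and per-packet time budget for a line-rate controller.
    # A minimum Ethernet frame is 64 bytes; each frame also occupies 8 bytes of
    # preamble/SFD and 12 bytes of inter-frame gap on the wire (20 bytes total).

    def min_frame_rate_mpps(line_rate_gbps, frame_bytes=64, overhead_bytes=20):
        bits_per_frame_on_wire = (frame_bytes + overhead_bytes) * 8
        return line_rate_gbps * 1e9 / bits_per_frame_on_wire / 1e6

    for rate in (10, 40, 100):
        mpps = min_frame_rate_mpps(rate)
        print(f"{rate:3d} GbE: {mpps:6.1f} Mpps worst case, {1e3 / mpps:5.1f} ns per packet")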
Summary
• Server consolidation, storage & network convergence, clustering, and video applications will drive the need for higher I/O bandwidths
  – Consolidation with 10G / multiple 10G, and then 40G
• Multi-core processors, next-generation system buses, and blade backplane/midplane systems are expected to be capable of 40G I/O speeds in a three-year time frame
• Performance/cost/power constraints will drive the design choices for 40G network controllers
  – Implementations are expected to leverage 10G technologies for faster time to market
The Call for Industry Research
on Next-Generation
Electrical Signaling
Joel Goergen
Vice President of Technology,
Chief Scientist
Force10 Networks
Anatomy of a 100 Gbps Solution: Chassis
• Chassis design issues to consider (see the perspective sketch after this list):
  – Backplane and channel signaling for higher internal speeds
  – Lower system BER
  – Connectors
  – N+1 switch fabric
  – Reduced EMI
  – Clean power routing architecture
  – Thermal and cooling
  – Cable management
• All design aspects must also meet local regulatory standards
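The "lower system BER" item is easy to put in perspective: the sketch below shows how often a single bit error is expected at a given line rate and BER, which is why chassis-level targets sit well below the 1e-12 typical of a single link. The BER values are illustrative targets, not figures from the talk.

    # Mean time between bit errors for a link running at a given BER.
    # The BER values are illustrative targets, not numbers from the presentation.

    def seconds_between_errors(line_rate_gbps, ber):
        """Average seconds per errored bit = 1 / (bit rate * BER)."""
        return 1.0 / (line_rate_gbps * 1e9 * ber)

    for ber in (1e-12, 1e-15):
        for rate_gbps in (10, 100):
            t = seconds_between_errors(rate_gbps, ber)
            readable = f"{t:.0f} s" if t < 3600 else f"{t / 3600:.1f} h"
            print(f"{rate_gbps:3d} Gb/s at BER {ber:g}: one bit error every ~{readable}")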
Anatomy of a 100 Gbps Solution: Interface / Connectors

[Block diagram: 100 Gbps line-card datapath. A 100G MAC/PHY connects to the NPU over 10 x CEI-11G-SR lanes running Interlaken/SPI-S; the egress side uses the same 10 x CEI-11G-SR Interlaken/SPI-S interface, and CEI-11G-LR lanes run to the backplane. Around the NPU sit ingress packet parsing, ingress/egress lookup, and ingress packet edit blocks backed by 200 MSPS lookup CAMs, 400 MHz DDRII+ lookup database SRAM, 400 MHz QDRII+ link-list SRAM, and 1 GHz DDR ingress buffer SDRAM, plus clock, reset, PCI Express, and test pins. Annotations call out package and die size, power, memory, and per-interface signal counts.]
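The memory and CAM speeds annotated in the diagram follow from the worst-case packet arrival rate; the sketch below (illustrative, assuming one lookup per minimum-size packet) shows why roughly 150 million lookups per second are needed at 100 Gb/s, and hence why 200 MSPS CAMs and 400 MHz SRAMs appear in the datapath.

    # Why the lookup devices in a 100G datapath need ~150-200 MSPS:
    # at line rate with minimum-size (64-byte) frames, every packet needs a lookup.

    WIRE_OVERHEAD_BYTES = 20  # preamble/SFD + inter-frame gap

    def lookups_per_second(line_rate_gbps, frame_bytes=64, lookups_per_packet=1):
        bits_on_wire = (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
        return line_rate_gbps * 1e9 / bits_on_wire * lookups_per_packet

    rate = lookups_per_second(100)
    print(f"100 GbE, 64-byte frames: {rate / 1e6:.1f} M lookups/s "
          f"-> a 200 MSPS CAM leaves ~{200e6 / rate:.2f}x headroom")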
Anatomy of a 100 Gbps Solution: Signal Integrity

[Figure: SERDES — the building block of an interface — shown driving backplane traces, with three numbered regions (#1, #2, #3) marking segments of the channel.]
Design Challenges for Next-Generation, High-Speed Ethernet: 40 and 100 GbE
Adam Healey
LSI Corporation
Electrical interfaces for 40 and 100 Gb/s Ethernet
• XLAUI (40 Gb/s) and CAUI (100 Gb/s)
  – Chip-to-chip
  – Chip-to-module (retimed)
• Parallel Physical Interface (PPI)
  – Chip-to-module (limiting)
  – 40 and 100 Gb/s
• Copper cable assembly
  – Up to 10 m
  – 40GBASE-CR4 (40 Gb/s)
  – 100GBASE-CR10 (100 Gb/s)
• 40 Gb/s Backplane Ethernet
  – Up to 1 m and 2 connectors
  – 40GBASE-KR4

[Figure: Block diagrams of the n-lane (n = 4 or 10) electrical interfaces — ASIC-to-ASIC links, retimed and limiting ASIC-to-module links with laser drivers, lasers, limiters, and detectors behind a WDM optical interface, and 4:10 / 10:4 gearboxes between lane counts.]

Interfaces consist of an aggregation of 10 Gb/s serial lanes.
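Since each interface is an aggregation of 10 Gb/s serial lanes, the per-lane signaling rate follows from the payload rate plus 64b/66b coding overhead; the short sketch below works out the familiar 10.3125 GBd lane rate (standard arithmetic, not taken from the slides).

    # Per-lane signaling rate for the n x 10 Gb/s electrical interfaces.
    # The 40/100 GbE PCS uses 64b/66b coding, so each lane carries 66/64
    # times its share of the aggregate payload rate.

    def lane_rate_gbd(aggregate_gbps, lanes, coding_overhead=66 / 64):
        return aggregate_gbps / lanes * coding_overhead

    print(f"XLAUI: 40 Gb/s over 4 lanes   -> {lane_rate_gbd(40, 4):.4f} GBd per lane")
    print(f"CAUI:  100 Gb/s over 10 lanes -> {lane_rate_gbd(100, 10):.4f} GBd per lane")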
Design considerations
• Expand the scope of 10 Gb/s Backplane Ethernet (10GBASE-KR)
  – 10GBASE-KR is the basis of the specifications for backplane and copper cable assemblies
  – Loss and noise profiles of cable assemblies and the associated host card wiring are distinct from the backplane – interoperability must be confirmed
• Define the superset serdes
  – A single serdes core that supports multiple interface standards
  – Flexible interface that can face either the backplane or the faceplate
  – Common receptacle for optical module and copper cable assembly
• Signal integrity challenges
  – Increase in density of 10 Gb/s channels
  – Increase in trace routing distance to satisfy routing constraints (more loss)
  – Increase in crosstalk
• Testability
  – Test each lane of the multi-lane interface in isolation (multiplies test time)
  – Test interface as an aggregate (multiplies test equipment)
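The test-time point can be quantified: claiming a low BER with statistical confidence requires observing a large number of error-free bits on every lane, and doing that lane by lane multiplies the total. The sketch below uses the standard zero-error bound N ≈ -ln(1-CL)/BER; the 1e-12 target and 95% confidence level are assumptions for illustration.

    # How per-lane BER testing multiplies test time on a multi-lane interface.
    # To claim BER <= target at confidence CL with zero observed errors,
    # roughly N = -ln(1 - CL) / BER bits must be observed per lane.
    from math import log

    def test_seconds_per_lane(ber_target, lane_rate_gbps, confidence=0.95):
        bits_needed = -log(1 - confidence) / ber_target
        return bits_needed / (lane_rate_gbps * 1e9)

    per_lane = test_seconds_per_lane(1e-12, 10.3125)  # assumed 1e-12 target per lane
    print(f"Per lane: {per_lane:.0f} s (~{per_lane / 60:.1f} min)")
    print(f"10 lanes tested serially: ~{10 * per_lane / 60:.0f} min")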
An eye to the future
• Future demand for higher density implies the need for a narrower interface (see the pin-count comparison after the figure)

[Figure: First generation vs. next generation of the module interface. First generation: the ASIC drives CAUI (10 x 10 Gb/s) across the pluggable module boundary to a 10:4 gearbox, which feeds a 100GBASE-LR4 or 100GBASE-ER4 optical engine (4 x 25 Gb/s) — four colors multiplexed by WDM on transmit and demultiplexed into four detectors on receive. Next generation: the ASIC drives a 4 x 25 Gb/s electrical interface (OIF CEI-28-SR?) directly across the module boundary to a 4:4 retimer in front of the same 4 x 25 Gb/s WDM optics.]
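The density argument shows up directly in the signal count at the pluggable module boundary; the comparison below is simple bookkeeping, assuming one differential pair per lane per direction (the "CEI-28-SR-style" label is just shorthand for a hypothetical 4 x 25 Gb/s electrical interface).

    # Signal count at the module boundary: 10 x 10 Gb/s vs. 4 x 25 Gb/s.
    # Assumes one differential pair (two pins) per lane per direction (TX and RX).

    def module_signal_pins(lanes, directions=2, pins_per_pair=2):
        return lanes * directions * pins_per_pair

    for name, lanes, lane_rate in (("CAUI, first generation", 10, 10),
                                   ("CEI-28-SR-style, next generation", 4, 25)):
        pins = module_signal_pins(lanes)
        print(f"{name}: {lanes} x {lane_rate} Gb/s -> {pins} high-speed signal pins")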
IBM Server and Technology Group
Design Challenges for Next-Generation,
High-Speed Ethernet: 40 and 100 GbE
DesignCon 2009
February 4, 2009
David R. Stauffer
Senior Technical Staff Member
IBM ASIC Design Center
OIF Physical & Link Layer Working Group Chair
Bandwidth Density Projections
• Bandwidth growth forecasts historically show networking applications doubling bandwidth every 18 months. This is the motivation for 40/100 GbE (802.3ba) standards development.

[Figure: Ethernet bandwidth growth projections, 1995–2020 — core networking doubling every ≈18 months, server I/O doubling every ≈24 months, with Gigabit through 100 Gigabit Ethernet rate milestones.]

• Although early 40/100 GbE systems will depend on 10 Gb/s backplane Serdes technology (802.3ap), this leads to an unmanageable number of differential pairs to meet system bandwidth (see the sketch below).
• Conclusion: higher-bandwidth Serdes technology will be required; ~25 Gb/s is optimal.

[Figure: Number of high-speed differential pairs (0–3500) vs. switch capacity (0–5000 Gb/s) for 5 Gb/s, 10 Gb/s, and 25 Gb/s per-lane serdes rates.]
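The "unmanageable number of differential pairs" claim is straightforward bookkeeping; the sketch below (illustrative: it assumes full-duplex links with one pair per lane per direction and ignores coding overhead and fabric redundancy) reproduces the trend in the figure.

    # Backplane differential-pair count vs. switch capacity and per-lane serdes rate.
    # Assumes full-duplex links (one pair per direction per lane) and ignores
    # coding overhead and any switch-fabric redundancy.

    def diff_pairs(switch_capacity_gbps, lane_rate_gbps):
        lanes_per_direction = switch_capacity_gbps / lane_rate_gbps
        return 2 * lanes_per_direction

    capacity_gbps = 5000
    for lane_rate in (5, 10, 25):
        print(f"{capacity_gbps} Gb/s fabric at {lane_rate:2d} Gb/s/lane: "
              f"{diff_pairs(capacity_gbps, lane_rate):.0f} differential pairs")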
Serdes & Channel Evolution
• Achieving 25 Gb/s serial data on backplanes requires evolutionary advances in both Serdes and backplane technology for a cost-effective solution.
• Backplane advances need to address:
  – Sdd21 loss targets (see the proposed CEI-25-LR Sdd21 limits in the figure)
  – Crosstalk minimization (better connectors?)
• Serdes advances need to address:
  – Improved performance in the presence of crosstalk
  – Power per 25 Gb/s link less than 1.5x the power per 10 Gb/s link (restated as energy per bit in the sketch below)

[Figure: SDD21 magnitude (0 to −40 dB) vs. frequency (0 to 25 GHz), showing proposed CEI-25-LR maximum and minimum loss limit lines.]

[Figure: Serial link block diagram — transmitter (serializer, transmit equalization, driver) connected over n lanes to receiver (clock and data recovery, receive equalization, deserializer).]
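The power target can be restated in energy-per-bit terms; the sketch below (the 100 mW baseline is an assumed, illustrative number) shows that holding a 25 Gb/s link to less than 1.5x the power of a 10 Gb/s link amounts to roughly a 40% reduction in energy per bit.

    # Restating the serdes power target as energy per bit (pJ/bit).
    # The 100 mW figure for a 10 Gb/s link is an assumed baseline, not a measured value.

    def energy_per_bit_pj(power_mw, rate_gbps):
        # mW divided by Gb/s is numerically equal to pJ/bit
        return power_mw / rate_gbps

    p10_mw = 100.0          # assumed power of a 10 Gb/s link
    p25_mw = 1.5 * p10_mw   # target ceiling for a 25 Gb/s link

    e10 = energy_per_bit_pj(p10_mw, 10)
    e25 = energy_per_bit_pj(p25_mw, 25)
    print(f"10 Gb/s link: {e10:.1f} pJ/bit")
    print(f"25 Gb/s link at 1.5x power: {e25:.1f} pJ/bit "
          f"({(1 - e25 / e10) * 100:.0f}% lower energy per bit)")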
Significant Issues
• Backplane technology:
  – Sdd21 insertion loss targets must be achieved without significant impacts to manufacturing yield or cost.
  – Advanced materials may be required, but only if acceptable manufacturing yield is achievable.
  – Advanced design techniques (e.g., broadside coupling) may be required.
  – Better connectors are needed to minimize crosstalk, reflections, etc.
• Serdes technology:
  – The signaling solution must be evolutionary to meet power targets and allow current levels of integration on ASIC chips.
  – Crosstalk is a significant concern at higher baud rates; current crosstalk cancellation schemes do not work generically in backplane environments.
  – FEC schemes can achieve the required performance, but at a cost in power and latency that has so far not found market acceptance.
  – Multi-level signaling schemes have not shown promise.
Questions?