Internet2:
Technology Innovation and
Distributed Infrastructure
Guy Almes
Internet2 Project
<[email protected]>
NANOG Meetings
Denver — February 1, 1999
Overview
Universities, Engineering, and
Applications
Technical Innovation
Distributed Infrastructure
The challenge before us
Universities, by their nature,
• mix teaching and research
• collaborate with scholars at other universities
Thus, they need advanced applications for
• conferencing
• remote instrument access
• digital libraries
What networks will these need?
Applications and engineering
Applications motivate engineering; engineering enables applications.
What makes this hard?
Combination of:
• high bandwidth
• wide area
• intrinsically bursty applications
Need for multicast
Need for quality of service
Need for measurements
Internet2 History / Status
Initiated 1-Oct-96 by 34 research
universities
(NGI Program announced one week later)
UCAID incorporated Oct-97
Board of Directors drawn from university
presidents
Staff mainly in three locations
Compact, growing set of international
partners
History/Status, continued
We now have about 140 universities
A few dozen corporate members also
make key contributions
Key goal: create and support advanced
applications
Key infrastructure tactic: campus,
gigaPoP, backbone structure
Working Group Progress
IPv6
Measurement
Multicast
Network Management
Network Storage
Quality of Service
Routing
Security
Topology
Technical Innovation:
Measurement
Chairs: David Wasley, Univ California
and Matt Zekauskas, Internet2 staff
Focus:
• Places to measure:
at campuses, at gigaPoPs, within interconnect(s)
• Things to measure:
traffic utilization
performance: delay and packet loss
traffic characterization
[Diagram, shown in three stages: measurement points spanning Backbone ‘A’ and Backbone ‘B’]
Active Measurements of
Performance
IETF IPPM WG defining one-way delay
Take all delay to be due to:
• Propagation
• Transmission
• Queuing
Variation in delay suggests congestion
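To make the decomposition concrete, here is a minimal Python sketch (not from the talk) that estimates the queueing component of one-way delay samples: propagation and transmission delay are fixed for a given path and packet size, so the minimum observed delay approximates their sum, and the excess is attributed to queueing. The sample values are made up.

```python
def queueing_delay_estimates(one_way_delays):
    """Estimate the queueing component of each one-way delay sample.

    Propagation and transmission delay are fixed for a given path and
    packet size, so the minimum observed delay approximates their sum;
    anything above that floor is attributed to queueing.
    """
    floor = min(one_way_delays)
    return [d - floor for d in one_way_delays]

# Made-up samples (seconds), e.g. from GPS-synchronized probe hosts.
samples = [0.0212, 0.0214, 0.0251, 0.0213, 0.0309]
print(queueing_delay_estimates(samples))  # spikes hint at congestion
```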
Passive Measurements of
Traffic Characterization
OC3MON and OC12MON
• Developed by MCI vBNS engineering with
NLANR group at UCSD
• passive taps into fiber links
• extracts IP packet headers
• gradually improving maturity
Help understand nature of Internet use
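As a rough illustration of the header-extraction step, this Python sketch decodes the fixed 20-byte IPv4 header from a captured packet. It is a hypothetical stand-in for what the OCxMON capture hardware does, not its actual code.

```python
import socket
import struct

IPV4_HEADER = struct.Struct("!BBHHHBBH4s4s")  # fixed 20-byte header

def parse_ipv4_header(raw: bytes) -> dict:
    """Decode the fixed part of an IPv4 header from captured bytes."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, cksum, src, dst) = IPV4_HEADER.unpack(raw[:20])
    return {
        "version": version_ihl >> 4,
        "header_bytes": (version_ihl & 0x0F) * 4,
        "total_len": total_len,
        "ttl": ttl,
        "protocol": proto,               # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }
```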
Technical Innovation: Multicast
Chair: Kevin Almeroth,
Univ California at Santa Barbara
Focus: Make native IP multicast scalable
and operationally effective
• Must be coordinated across backbones,
gigaPoPs, and campuses
• Must be coordinated with unicast routing
1999: A key year for multicast
In the past, multicast has meant ‘MBone’
• core set of committed users and engineers
• ‘legacy’ non-scalable approaches to routing
Our hope:
• PIM-Sparse Mode
• MBGP, MSDP, etc.
• enable scalable use of high-speed multicast
flows throughout the Internet2 structure
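For a sense of what native multicast gives the application, here is a minimal Python receiver that joins a group; the group address and port are hypothetical. The IP_ADD_MEMBERSHIP option triggers an IGMP join on the local subnet, and PIM-Sparse Mode, MBGP, and MSDP in the network then build the wide-area delivery tree.

```python
import socket
import struct

GROUP, PORT = "233.0.32.1", 5004   # hypothetical group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group: the kernel sends an IGMP membership report, and the
# routers' multicast machinery grafts us onto the delivery tree.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(1500)
    print(f"{len(data)} bytes from {sender[0]}")
```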
Technical Innovation:
Quality of Service
Chair: Ben Teitelbaum, Internet2 staff
Focus: Multi-network IP-based QoS
• Relevant to advanced applications
• Interoperability: carriers and kit
• Architecture
• QBone distributed testbed
Big Problem #1: Understanding
Application Requirements
Range of poorly-understood needs
• Both intolerant and tolerant apps important
• Many apps need absolute, per-flow QoS
assurances
• Adaptive apps may require a minimum level of
QoS, but can exploit additional network
resources if available
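A minimal sketch of the adaptive-application idea, assuming a hypothetical loss measurement is available each interval: back off toward the minimum usable rate under congestion, and probe upward when the path has headroom. The constants are illustrative.

```python
MIN_RATE_KBPS, MAX_RATE_KBPS = 128, 1500   # illustrative bounds

def adapt_rate(rate_kbps: int, loss_fraction: float) -> int:
    """Return the next sending rate given the observed loss fraction."""
    if loss_fraction > 0.02:                       # congestion: back off,
        return max(MIN_RATE_KBPS, rate_kbps // 2)  # but keep the minimum
    return min(MAX_RATE_KBPS, rate_kbps + 64)      # headroom: probe upward
```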
Big Problem #2: Scalability
# flows through core >> # flows through edge
Goal: keep per-flow state out of the core
Design principles
• Put “smarts” in edge routers
• Allow core routers to be fast and dumb
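One way to see the split: the edge classifies per-flow once and encodes the result in each packet's DS field, so core routers match only six bits per packet. A minimal sketch of setting the field from an end host follows; the EF code point, address, and port are illustrative.

```python
import socket

# DSCP 46 (Expedited Forwarding) occupies the upper six bits of the
# former IPv4 TOS byte; a "dumb" core need only match these bits.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
sock.sendto(b"marked datagram", ("192.0.2.10", 9000))
```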
Big Problem #3:
Interoperability
Interoperability between separately administered and designed clouds (campus networks, gigaPoPs, and backbone networks such as vBNS and Abilene), and between multiple implementations of network elements, is crucial if we are to provide end-to-end QoS.
DiffServ Architecture
[Diagram: DiffServ architecture, source to destination]
• Bandwidth brokers (BB): perform admissions control, manage network resources, configure leaf and edge devices
• Leaf router: polices and marks individual flows
• Ingress edge router: classifies, polices, and marks aggregates
• Egress edge router: shapes aggregates
• Core routers: carry marked aggregates between the clouds
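A toy admission-control check in the spirit of a bandwidth broker, assuming hypothetical link capacities and a fixed Premium share per link; a real broker would also configure the leaf and edge devices it admits flows through.

```python
# Illustrative capacities (Mb/s) for two links on the path.
LINK_CAPACITY_MBPS = {("leaf", "edge"): 100, ("edge", "core"): 622}
reserved_mbps = {link: 0 for link in LINK_CAPACITY_MBPS}

def admit(path, peak_rate_mbps, premium_fraction=0.1):
    """Admit a flow only if it fits the Premium share of every link."""
    for link in path:
        budget = LINK_CAPACITY_MBPS[link] * premium_fraction
        if reserved_mbps[link] + peak_rate_mbps > budget:
            return False
    for link in path:
        reserved_mbps[link] += peak_rate_mbps
    return True

print(admit([("leaf", "edge"), ("edge", "core")], 5))  # True
print(admit([("leaf", "edge"), ("edge", "core")], 8))  # False: 5 + 8 > 10
```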
Premium Service
Emulates a leased line
Contract: peak rate profile
PHB = “forward me first”
(e.g. priority queuing, WFQ)
Policing rule = drop out-of-profile packets
On egress, clouds need to shape Premium
aggregates to mask induced burstiness
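The peak-rate contract and policing rule can be sketched as a token bucket, a common realization (the rate and bucket depth here are arbitrary): packets that find enough tokens are in profile and get the Premium PHB; the rest are dropped.

```python
import time

class TokenBucketPolicer:
    """Drop packets that exceed a peak-rate profile."""

    def __init__(self, rate_bytes_per_s: float, bucket_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = bucket_bytes
        self.tokens = bucket_bytes
        self.last = time.monotonic()

    def conforms(self, packet_len: int) -> bool:
        now = time.monotonic()
        # Refill at the contracted rate, capped at the bucket depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True    # in profile: forward with the Premium PHB
        return False       # out of profile: drop

policer = TokenBucketPolicer(125_000, 3_000)  # ~1 Mb/s peak, small burst
```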
Internet2 “QBone”
A “meta-testbed” for absolute diff-serv services
Many Internet2 clouds already keenly interested
in experimenting with diff-serv
Objectives:
• Fostering interoperability among participant clouds
• Encouraging collective problem solving
• Creating opportunities for inter-disciplinary dialogue
• Growing a snowball of participating clouds
Desired in the testbed: technical diversity, topological diversity, contiguity
Summary
Internet2’s WGs focus on the project’s needs
Complement IETF WGs
Membership by invitation of chair
Distributed Infrastructure
Campuses:
• scalable 10/100 Mb/s
• multicast
GigaPoPs:
• scalable access to wide-area resources
Backbones:
• vBNS
• Abilene
Recent progress and challenges
Early gigaPoPs getting stronger
Recent major advances:
• CalREN2
• Great Plains Network
• Northern Crossroads
JET Collaboration
Joint Engineering Team
• federal NGI agencies
• Internet2
NGIX effort
• exchange points appropriate for Internet2,
NGI, and similar non-US networks
Ideal: connect universities and labs with
advanced performance/functionality
Abilene: Design and
Status
Guy Almes
Internet2 Project
<[email protected]>
NANOG Meetings
Denver — February 1, 1999
Abilene and Internet2
Internet2 as infrastructure:
• 140+ campus LANs
• about 35 gigaPoPs
• a few interconnect backbones
Abilene is the 2nd Backbone
• OC-48 trunks from Qwest
• Cisco 12008 routers with IP/Sonet
• OC-3 and OC-12 access to gigaPoPs
Abilene Core at 29-Jan-99
[Map: core nodes at Seattle, Sacramento, Los Angeles, Denver, Kansas City, Houston, Indianapolis, Cleveland, Atlanta, and New York]
Abilene Architecture
Core Architecture
Access Architecture
Network Operations Center
• at Indiana University
Schedule:
• 14-Apr-98: announced
• Sep-98: demonstrated
• 29-Jan-99: operational
Abilene Architecture: Core
Router Nodes located at Qwest PoPs
• Cisco 12008 GSR
• ICS Unix PC: IPPM and Network Mgmt
• Cisco 3640 Remote Access for NOC
• 100BaseT LAN and ‘console port’ access
• Remote 48v DC Power Controllers
Initially, ten Router Nodes
Abilene: by end of February 1999
[Map: the same ten core nodes as at 29-Jan-99]
Abilene Architecture: Access
Access Nodes
• Located at Qwest PoPs
• Sonet: Connects Local to Long-distance
Initially, about 120 Access Nodes:
• This list grows as the Qwest Sonet plant grows
Abilene, with Some Access Nodes
[Map: router nodes and access nodes at Seattle, Eugene, Sacramento, Oakland, Los Angeles, Anaheim, Phoenix, Salt Lake City, Albuquerque, Denver, Lincoln, Kansas City, Minneapolis, Chicago, Dallas, Houston, New Orleans, Indianapolis, Detroit, Cleveland, Columbus, Pittsburgh, Nashville, Atlanta, Miami, Raleigh, Washington, Wilmington, Philadelphia, Trenton, Newark, New York, New Haven, Westfield, and Boston]
Abilene NOC
Located at Indiana University
Excellent Operations and Engineering
Skills
Commitment evidenced in Abilene
Rollout
Schedule
Design work: Mar-98 and ongoing
Rack design: May-98 to Jul-98
Initial assembly / testing: Jul-98 to Aug-98
Router Nodes / Interior Lines: Jul-98
Demo network installed: Sep-98
Production began: 29-Jan-99
Completion of OC-48 Core: mid-1999
Continuing improvement: ongoing
Jun-99: Core Architecture
[Map: planned Jun-99 core topology over the same ten nodes]
Sep-99: Core Architecture
[Map: planned Sep-99 core topology, adding Washington to the existing ten nodes]
Outline of Engineering Issues
Routing:
• OSPF, BGP4, Routing Arbiter Database
Multicast
• PIM-Sparse Mode, MBGP, MSDP
Measurements
• Surveyor: One-way delay and loss
• Traffic utilization
• End-to-end flows, with gigaPoP help
• OC3MON: passive measurements
Broader Internet2, NGI, and
International Advanced Networks
• Initial NGIX sites
• Possible CA*net3 peering sites
• StarTap