High Performance Networking for Colleges and
Universities: From the Last Kilometer to a
Global Terabit Research Network
TERENA Networking Conference 2001
Antalya, Turkey
Michael A. McRobbie PhD
Vice President for Information Technology
and Chief Information Officer
Steven Wallace
Chief Technologist and Director
Advanced Network Management Laboratory
Indiana Pervasive Computing Research Initiative
Network-Enabled Science and Research in the 21st Century
• Science and research are becoming progressively more global, with network-enabled worldwide collaborative communities rapidly forming in a broad range of areas
• Many are based around a few expensive –
sometimes unique – instruments or distributed
complexes of sensors that produce vast amounts of
data
• These global communities will carry out research
based on this data
Network-Enabled Science and Research in the 21st Century
• This data will be:
– collected via geographically distributed
instruments
– analyzed by supercomputers and large
computer clusters
– visualized with advanced 3-D display
technology and
– stored in massive or large data storage systems
• All of this will be distributed globally
Examples of Network-Enabled
Science
• The NSF-funded Grid Physics Network's (GriPhyN) need for petascale virtual data grids (i.e. capable of analyzing petabyte datasets)
– Compact Muon Solenoid (CMS) and A Toroidal LHC Apparatus (ATLAS) experiments using the Large Hadron Collider (LHC) located at CERN [> 2.5 Gb/s]
– Laser Interferometer Gravitational Wave Observatory (LIGO)
[200GB-5TB data sets needing 2.5 Gb/s or greater for reasonable
transfer times]
– Atacama Large Millimeter Array (ALMA)
• Collaborative video (e.g. HDTV) [20Mb/s]
• Sloan Digital Sky Survey (SDSS) [> 1Gb/s]
A Vision for 21st Century Network-Enabled Science and Research
The vision is for this global infrastructure and
data to be integrated into “Grids” –
seamless global collaborative environments
tailored to the specific needs of individual
scientific communities
Components of Global Grids
• High-performance networks are fundamental to integrating Global Grids
• There are, very broadly speaking, three
components to Global Grids:
– Campus Networks (the last kilometer)
– National and Regional Research and Education
Networks (NRRENs)
– Global connections between NRRENs
Impediments to Global Grids
• Of these three components, on a world-wide scale, investment and engineering are adequate only for NRRENs
– Campus networks rarely provide scalable
bandwidth to the desktop commensurate with
speeds of campus connections to NRRENs
– Global connections between NRRENs are
major bottlenecks – they are very slow
compared to NRREN backbone speeds
Building Global Grids
To build a true Global Grid requires:
• Scalable campus networks providing ubiquitous
high bandwidth connections to every desktop
commensurate with campus connections to
NRRENs
• Global connectivity between NRRENs of
comparable speeds to the NRREN backbones,
which is also stable, persistent and of production
quality like the NRRENs themselves
Presentation Overview
This talk describes:
1. Some NRRENs and their common
characteristics
2. “Grid Ready” Campus Networks with Indiana
University’s network architecture and
management of IT as an example
3. A solution to the global connectivity problem
that scales to a terabit global research network
1. NRRENs
• Abilene
– OC48 -> OC192
– OC48-connected GigaPoPs (moving to min. OC12)
– ITN provider
– 1 Gb/s sustained data rates seen
• CAnet3
• US Fed nets (e.g. ESnet)
• DANTE -> GEANT
• APAN
• CERNET
NRRENs
• OC48 (2.4 Gb/s) backbone implemented today
– Moving to OC192 (9.6 Gb/s) as next evolution
• Institutions access backbone at OC12 or greater (a
few connections at OC48)
• Native high-speed IPv4
• Support for IPv6 (but at much lower performance
due to router constraints)
NRRENs
• Advanced Services
– Typically run as open (visible) networks, not a commercial service
– IP multicast deployed in most backbones, but still not as production-grade as unicast, and not reliable internationally
– QoS: mixed results, still in its infancy. Very little deployed across more than one network
• Intra-regional interconnect speeds range from OC3 to OC12, soon to be OC48 in some cases
2.1 Campus Networks: the Critical Last
Kilometer
• Grid applications will require "guaranteed" multi-Mbps bandwidth per application, end to end
• That is, these speeds must be sustained from the desktop, through the
various levels of the campus network to the NRREN and then to the
application target
• Thus campus networks must be architected to provide scalable levels
of connectivity to NRRENs as their speeds increase
• It makes no sense to have a shared 10Mbps desktop connection
(delivering at most about 1 Mbps) into a 10,000 Mbps (10Gbps)
NRREN!!
• Providing appropriate levels of connectivity to the desktop and
scalable campus network architectures to support them is a top
priority in IT strategic planning at US Universities
• A sizable portion of telecommunications budgets can go to this
(e.g. $25M at Indiana University in 00/01)
Grid-Ready Campus Network
Architectures
• The three main levels of campus network
architectures are:
– Desktop/intrabuilding (the last 100 meters)
– Interbuilding (connecting groups of buildings)
– Backbone (connecting those groups)
• Each level must provide progressively more
capacity and the whole architecture must be
scalable
• Campus networks commonly have two external
connections
– Commercial Internet
– Regional gigaPoPs (to NRRENs)
Campus Networking (last 100 meters)
• 10Mb/s switched to the desktop
– Adequate for VHS-quality video distribution
• Video conferencing
• Digital library content such as CD-quality music
– Gigabyte datasets transferred in 15 minutes
• 100Mb/s switched to the desktop
– Cost of 10/100Mb switch ports less than $50USD
– Gigabyte datasets transferred in 2 minutes
– Suitable for HDTV distribution
• 1000Mb/s switched to the desktop now practical over Cat5 copper
– Cost of 100/1000Mb switch ports less than $500USD
– Terabyte datasets transferred in 2.5 hours
• Fiber to the desktop probably reserved for 10Gb/s and beyond but
necessary between buildings
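The transfer-time figures above can be sanity-checked with simple line-rate arithmetic. The sketch below is idealized (payload bits divided by raw line rate, no protocol overhead; the slide's rounder numbers presumably allow for real-world efficiency), and the function name is ours, not from the talk:

```python
def transfer_minutes(dataset_bytes, link_bps):
    """Idealized transfer time in minutes: payload bits divided by line rate."""
    return dataset_bytes * 8 / link_bps / 60

# 1 GB over switched 10 Mb/s: ~13 min (the slide's "15 minutes")
gb_at_10mbps = transfer_minutes(1e9, 10e6)
# 1 GB over switched 100 Mb/s: ~1.3 min (the slide's "2 minutes")
gb_at_100mbps = transfer_minutes(1e9, 100e6)
# 1 TB over switched 1 Gb/s: ~2.2 hours (the slide's "2.5 hours")
tb_at_1gbps_hours = transfer_minutes(1e12, 1e9) / 60
```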
Grid Ready Campus Network Enablers
• Gigabit wire-speed ASIC-based routers
– currently providing 1 Gb/s uplinks, next
generation will support 10 Gb/s uplinks
– QoS support in hardware
• Commodity priced Ethernet switches that
support 10/100 Mb/s and 1 Gb/s
connections
2.2 Indiana University’s Grid Ready
Campus Network
• Switched 10Mbps standard for all 55,000+
desktops
• Switched 100Mbps available on request
• 1Gbps available in selected cases
• OC12 (622 Mb/s) Internet2 connectivity
• Native support for IP multicast in both the layer 2
Ethernet switches and the layer 3 routers
• Support for DiffServ based quality of service
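As an illustration of the DiffServ support mentioned above: an application can request a per-hop behavior by setting the DSCP bits of the IP TOS byte on its socket. This is a minimal sketch using the standard Expedited Forwarding code point (DSCP 46); whether the marking is honored end to end depends on campus and NRREN QoS policy, which, as noted earlier, is still immature across network boundaries.

```python
import socket

EF_DSCP = 46  # Expedited Forwarding, a standard DiffServ code point
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# DSCP occupies the upper six bits of the IP TOS byte, hence the shift:
# datagrams sent on this socket now carry DSCP 46 (TOS byte 0xB8)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
```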
2.3 Managing IT in US Higher
Education
• In the US IT is recognized as being:
– of central importance in higher education
– fundamental to teaching, learning & research
– essential to responsible and accountable
institutional management
– a source of institutional competitive advantage
• It is also a major source of expenditure in
US universities: (fully costed) between 5 &
10% of an institution’s total budget
Responsibility for Managing IT
• US Universities have elevated IT to a portfolio of
central importance reporting directly to the president
or chief academic officer
• This portfolio tends to be the responsibility of the
chief information officer (CIO)
• University CIOs are typically responsible for central
IT and support of distributed IT (e.g. in departments,
schools, faculties)
• This parallels earlier developments in US
business
Strategic Planning for IT
• Given the vital importance of IT, US
universities have developed IT strategic
plans
• These plans guide the institution’s future
development and investment in IT
• They are also used to leverage considerable
additional public and private funding for
university IT infrastructure
2.4 An Example: Indiana University
• Founded in 1820
• State public university with:
- $1.9B budget (99-00) with 27% from the State of
Indiana
- 7 campuses State-wide (two largest and research
intensive campuses in Bloomington and Indianapolis)
- 97,150 students
- 4,276 faculty
- 9,844 appointed staff
- 42,000+ course sections
- $1B endowment
IT at Indiana University
• CIO position created in 96; reports directly to IU President
• Responsible for central IT on all campuses
- Central IT budget from all sources $100M
- 1,200 staff
- Departments, schools & faculties expend about a further $50M
• Central IT comprises
- telecommunications (e.g. data, voice, video on campus, intra &
interstate, & internationally) represents 35% of the budget
- research & academic computing (e.g. supercomputing, massive data
storage, large-scale VR) represents 13% of the budget
- teaching & learning technologies (e.g. user support/education,
desktop life-cycle funding, classroom IT, enterprise software
licensing, student labs, Web support) represents 28% of the budget
- administrative computing (enterprise information systems –
e.g. student, financial, HR & library systems, enterprise
databases & storage) represents 24% of the budget
Strategic Planning for IT at Indiana
University
• Goal of Indiana University to be a leader in the “…use and
application of information technology”.
• CIO responsible for developing IT Strategic Plan to
achieve this goal:
- first University-wide IT Strategic Plan
- used IT Committee system; 200 people involved in preparation
- prepared December 97 to May 98, then discussed University-wide
& approved by President and Board of Trustees, December 98
- CIO responsible for implementation
- 5 year plan consisting of 10 major recommendations; 68 actions
(http://www.indiana.edu/~ovpit/strategic/)
- Implementation Plan and full costings developed in parallel
- Full cost $210M over 5 years; $120M in new funding from the
State, $90M in re-programmed University funding
3.1 Towards a Global Terabit
Research Network (GTRN)
• Global network-enabled collaborative Grids require a true high-speed global research and education network that:
– is of production quality (managed as production service,
redundant, stable, secure)
– is persistent (is based on a long-term agreement with carrier(s) and
others)
– provides a uniform form of connection globally through global
network access points (GNAPs)
– provides interconnect speeds comparable to NRREN backbone speeds (presently OC48 going to OC192)
– scales to a terabit per second data rate during this decade
Impediments to Building Global
Grids
• International connections very slow compared with
NRREN backbone speeds
• Long term funding uncertain (e.g. NSF HPIIS
program)
• Global connection effort not well-coordinated
• Overly reliant on transit through US infrastructure
• Frequently connections are via ATM or IP clouds,
making management of advanced services difficult
• Poor coordination of advanced service deployment
• Extreme difficulty ensuring reasonable end-to-end
performance
Connectivity to US Transit Infrastructure

• Asia Pacific: APAN 100 Mb/s; CERNET 10 Mb/s; SingAREN 45 Mb/s; TANet2 45 Mb/s
• Europe: NORDUnet 400 Mb/s; RENATER2 45 Mb/s; SURFnet 155 Mb/s; MIRnet 10 Mb/s; IUCC 45 Mb/s; DANTE 600 Mb/s; CERN 155 Mb/s
• Americas: CA*net3 1.2 Gb/s; CUDI 45 Mb/s; AMPATH 45 Mb/s
STAR TAP and International Transit Network (ITN)
• STAR TAP, CAnet3 and Abilene provide some level of international transit across North America
– Abilene offers convenient international transit at
multiple landing sites, however transit not offered to
other NRRENs (e.g. ESnet)
• STAR TAP requires a connection to AADS best
effort ATM service (reducing the ability to deploy
QoS)
Indiana University
• Firsthand experience with these difficulties
through its Global NOC
• http://globalnoc.iu.edu/
3.2 Towards a GTRN
• A single global backbone interconnecting global
network access points (GNAPs) that provide
peering within a country or region
• Global backbone speeds comparable to those of NRRENs, i.e. OC192 in 2002
• Based on stable carrier infrastructure
• Persistent based on long-term (5-10 year)
agreements with carriers, router vendors and
optical transmission equipment vendors
Towards a GTRN
• Scalable – e.g. OC768 by 2004, multiple
wavelengths running striped OC768s by 2005,
terabit/sec transmission by 2006
• GNAPs connect at OC48 and above, scaling up as backbone speeds scale up
• Production service with 24x7x365 management
through a global NOC
• Coordinated global advanced service deployment
(e.g. QoS, IPv6, multicast)
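The scaling path above is consistent with rough SONET line-rate arithmetic (nominal OC-n rate of n × 51.84 Mb/s; the wavelength count is our back-of-envelope figure, not from the talk):

```python
OC1 = 51.84e6  # SONET STS-1/OC-1 line rate in b/s

oc48 = 48 * OC1    # ~2.5 Gb/s (today's NRREN backbones)
oc192 = 192 * OC1  # ~10 Gb/s (planned for 2002)
oc768 = 768 * OC1  # ~40 Gb/s per wavelength (planned for 2004)

# Striping multiple OC-768 wavelengths toward a terabit backbone (2005-06):
wavelengths_for_terabit = 1e12 / oc768  # ~25 wavelengths
```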