The 21st Century Internet:
A Planetary-Scale Grid
Powered by Intel Processors
Invited Talk in Intel’s Forum and Seminar Series
Hillsboro, OR
February 19, 2002
Larry Smarr
Department of Computer Science and Engineering
Jacobs School of Engineering, UCSD
Director, California Institute for Telecommunications and Information Technology
The 21st Century Internet: A Planetary-Scale Grid
Powered by Intel Processors
After twenty years, the "S-curve" of building out the wired internet, with hundreds of
millions of PCs as its end points, is flattening out. At the same time, several new "S-curves"
are reaching their steep slope as ubiquitous computing begins to sweep the planet. As a
result, there will be a vast expansion in heterogeneous end points to a new wireless internet,
moving IP throughout the physical world. Billions of internet-connected cell phones,
embedded processors, hand-held devices, sensors, and actuators will lead to radical new
applications. The resulting vast increase in data streams, augmented by the advent of
mass-market broadband to homes and businesses, will drive the backbone of the internet to
an optical lambda-switched network of tremendous capacity. Powering this global grid will be
Intel processors, arranged in "lumps" of various sizes. At the high end will be very large,
tightly coupled IA-64 clusters, exemplified by the new NSF TeraGrid. The next level will be
optically connected IA-32 PC clusters, which I have termed OptIPuters. Forming the floor of
the pyramid of power will be peer-to-peer computing and storage, which will increasingly
turn the individual Intel PC "dark matter" of the Grid into a vast universal power source
for this emergent planetary computer. More speculative will be possible peer-to-peer
wireless links of hand-held and embedded processors, such as the Intel StrongARM
Pocket PC processor. I will describe how the newly formed Cal-(IT)2 Institute is
organizing research in each of these areas. Large-scale "Laboratories for Living in the
Future" are being designed, some of which provide opportunities for collaboration with
Intel researchers.
Governor Davis Has Initiated Four New
Institutes for Science and Innovation
• California Institute for Bioengineering, Biotechnology, and Quantitative Biomedical Research
• Center for Information Technology Research in the Interest of Society
• California NanoSystems Institute
• California Institute for Telecommunications and Information Technology
[Map of participating UC campuses: UCD, UCSF, UCM, UCB, UCSC, UCSB, UCLA, UCI, UCSD]
www.ucop.edu/california-institutes
Cal-(IT)2 Has Over Sixty Industrial Sponsors
From a Broad Range of Industries
Akamai Technologies Inc.
AMCC
Ampersand Ventures
Arch Venture Partners
The Boeing Company
Broadcom Corporation
Conexant Systems, Inc.
Connexion by Boeing
Cox Communications
DaimlerChrysler
Diamondhead Ventures
Dupont iTechnologies
Emulex Corporation
Enosys Markets
Enterprise Partners VC
Entropia, Inc.
Ericsson Wireless Comm.
ESRI
Extreme Networks
Global Photon Systems
Graviton
IBM
Interactive Vis. Systems
IdeaEdge Ventures
The Irvine Company
Intersil Corporation
[Sectors represented: Computers, Communications, Software, Sensors, Biomedical, Automotive, Startups, Venture Capital]
Irvine Sensors Corporation
JMI, Inc.
Leap Wireless International
Link, William J. (Versant Ventures)
Litton Industries, Inc.
MedExpert International
Merck
Microsoft Corporation
Mindspeed Technologies
Mission Ventures
NCR
Newport Corporation
Nissan Motors
Oracle
Orincon Industries
Panoram Technologies
Polexis
Printronix
QUALCOMM Incorporated
R.W. Johnson Pharma. R.I.
SAIC
Samueli, Henry (Broadcom)
SBC Communications
San Diego Telecom Council
SciFrame, Inc.
Seagate Storage Products
SGI
Silicon Wave
Sony
STMicroelectronics, Inc.
Sun Microsystems
TeraBurst Networks
Texas Instruments
Time Domain
Toyota
UCSD Healthcare
The Unwired Fund
Volkswagen
WebEx
$140 M Match From Industry
Cal-(IT)2 -- An Integrated Approach
to Research on the Future of the Internet
220 UCSD & UCI Faculty
Working in Multidisciplinary Teams
With Students, Industry, and the Community
State Gives $100 M Capital for New Buildings and Labs
www.calit2.net
Experimental Chip Design
with Industrial Partner Support
A Multiple Crystal Interface Phase-Locked Loop (PLL) for a Bluetooth Transceiver, with Voltage-Controlled Oscillator (VCO) Realignment to Reduce Noise
Source: Ian Galton, UCSD ECE, CWC
Clean Room Will House
Microanalysis and Nanofabrication Labs
Superconducting Flux Pinning by Magnetic Dots:
Nickel Nanoarray on Niobium Thin Film
Applications:
Increases in Current Carrying Capability of Superconducting Tapes
And Reduction of Noise in Ultra-Sensitive Magnetic Field Detectors
UCSD Used
Electron Beam Lithography
To Create Ni Nanodots
With a Spacing of ~500 nm
UCI Used Photolithography
To Link Device
to Macro World
“Commensurability” Effects From
the Matching of the Nanoarray and
the Superconductor Vortex Lattice
M. I. Montero, O. M. Stoll, I. K. Schuller, UCSD
M. Bachman, G-P Li, UCI
Cal-(IT)2
“Living-in-the-Future” Laboratories
• Technology Driven
– Ubiquitous Connectivity
– SensorNets
– Knowledge and Data Systems
– LambdaGrid
• Application Driven
– Ecological Observatory
– AutoNet
– National Repository for Biomedical Data
• Culturally Driven
– Interactive Technology and Popular Culture
The Convergence of Computing, Media and New Art Forms Is
Creating a New Cultural Landscape for the 21st Century
Cal-(IT)2 Is Bringing Together
Interdisciplinary Researchers from
UC San Diego and UC Irvine
to Develop the Modalities, Methodologies,
Vocabularies and Technologies
of This Emerging Landscape
Sheldon Brown, UCSD
Simon Penny, UCI
Adriene Jenik, UCSD
Robert Nideffer, UCI
Antoinette LaFarge, UCI
Lev Manovich, UCSD
Amy Alexander, UCSD
Celia Pearce, UCI
Miller Puckette, UCSD
Dan Frost, UCI
Peter Otto, UCSD
Sheldon Brown, UCSD
Larry Carter, UCSD
Geoff Voelker, UCSD
Mike Bailey, UCSD
Edo Stern, UCSD
Computer Gaming Is a Major Focus
The Ingredients for Cultural Transformation of the New Media Arts Layer:
• Networked Computing Environment
• Computing As Vehicle for Media Delivery
• Computing As Social Space
• Ubiquity of High-Resolution Graphics and Audio
• Gaming As the Domain Where All of These Elements Are Brought Together
Computer Gaming As the Primary Media Realm for a New Generation's Development of Media/Social Literacy/Proficiency
Sheldon Brown, UCSD
PC Architecture Development
Provoked by Computer Gaming
PS2 vs. PC:
• PS2: Dynamic Data Stream, Static Instruction Set – architecture optimized for real-time processing of multimedia data
• PC: Static Data Stream, Dynamic Instruction Set – document-processing architecture
Sheldon Brown, UCSD
The Next S-Curves of Internet Growth:
A Mobile Internet Powered by a Planetary Grid
• Wireless Access--Anywhere, Anytime
– Broadband Speeds
– “Always Best Connected”
• Billions of New Wireless Internet End Points
– Information Appliances
– Sensors and Actuators
– Embedded Processors
• Emergence of a Distributed Planetary Grid
– Broadband Becomes a Mass Market
– Internet Develops Parallel Lambda Backbone
– Scalable Distributed Computing Power
– Storage of Data Everywhere
A Planetary Scale Grid
Powered by Intel Processors
Nature of Lump | Number of Processors per Lump | Number of National-Scale Lumps | Typical Processor | Speed of WAN Connection | Example
High-Perf. PC Cluster | 1000s | 4 | Intel IA-64 | 10-100 Gbps | TeraGrid
OptIPuter PC Cluster | 10s-100s | 1000s | Intel IA-32 | 1 Gbps | Dedicated Cluster
PC | 1 | Millions | Intel IA-32 | 1-100 Mbps | Entropia
Embedded Processors | 1 | Hundreds of Millions | Intel StrongARM | 100 Kbps-10 Mbps | AutoNet, Pocket PCs, Cell Phones
The Grid is "Lumpy"
[Historical slides shown here; sources: Smarr Talk January 1998, Smarr Talk 2000]
The NSF TeraGrid
Lambda Connected Linux PC Clusters
This will Become the National Backbone to Support Multiple
Large Scale Science and Engineering Projects
• Caltech (Applications): 0.5 TF, 0.4 TB Memory, 86 TB Disk
• Argonne (Visualization): 1 TF, 0.25 TB Memory, 25 TB Disk
• SDSC (Data): 4.1 TF, 2 TB Memory, 250 TB Disk
• NCSA (Compute): 8 TF, 4 TB Memory, 240 TB Disk
• TeraGrid Backbone: 40 Gbps (Note: Weakly Optically Coupled Compared to Cluster I/O)
• Partners: Intel, IBM, Qwest
www.intel.com/eBusiness/casestudies/snapshots/ncsa.htm
Large Data Challenges
in Medicine and Earth Sciences
• Challenges
– Each Data Object is 3D and Gigabytes
– Data are Generated and Stored in Distributed Archives
– Research is Carried Out on Federated Repository
• Requirements
– Computing → PC Clusters
– Communications → Dedicated Lambdas
– Data → Large Objects, WAN Database Operations
– Visualization → Collaborative Volume Algorithms
• Response
– OptIPuter Research Project
– UCSD, UCI, USC, UIC, NW Large ITR Proposal
– Potential Industrial Partners
– IBM, HP, Intel, Microsoft, Nortel, Ciena, Velocita, SBC
NIH is Funding a National-Scale Grid
Which is an OptIPuter Application Driver
Biomedical Informatics
Research Network
(BIRN)
Part of the UCSD CRBS
National Partnership for Advanced Computational Infrastructure
Center for Research on Biological Structure
NIH Plans to Expand
to Other Organs
and Many Laboratories
StarLight
International Wavelength Switching Hub
[Diagram: StarLight interconnects Asia-Pacific links, SURFnet, CERN, CANARIE, Seattle, Portland, NYC, AMPATH, and the TeraGrid (Caltech, SDSC); *ANL, UIC, NU, UC, IIT, MREN]
Source: Tom DeFanti, Maxine Brown
UICStarLight Metro OptIPuter
Int’l GE, 10GE
16x1 GE
16x10 GE
Metro GE, 10GE
16-Processor
McKinley at UIC
10x1 GE
+
1x10GE
16-Processor
Montecito/Shavan
o at StarLight
Nat’l GE, 10GE
Nationals: Illinois, California, Wisconsin, Indiana, Washington…
Internationals: Canada, Holland, CERN, Tokyo…
Metro Lambda Grid Optical Data Analysis
“Living Laboratory”
• High-Resolution Visualization Facilities (SDSC, UCSD, SIO)
– Data Analysis
– Crisis Management
• Distributed Collaboration
– Optically Linked
– Integrate Access Grid
• Data and Compute
– PC Clusters
– AI Data Mining
• Driven by Data-Intensive
Applications
– Civil Infrastructure
– Environmental Systems
– Medical Facilities
Linking Control Rooms
Cox, Panoram,
SAIC, SBC, SGI, IBM,
TeraBurst Networks
UCSD Healthcare
SD Telecom Council
Some Research Topics
in Metro OptIPuters
• Enhance Security Mechanisms:
– End-to-End Integrity Check of Data Streams
– Access Multiple Locations With Trusted Authentication Mechanisms
– Use Grid Middleware for Authentication, Authorization, Validation,
Encryption and Forensic Analysis of Multiple Systems and
Administrative Domains
• Distribute Storage While Optimizing Storewidth:
– Distribute Massive Pools of Physical RAM ("Network Memory" – sketched after this slide)
– Develop Visual TeraMining Techniques to Mine Petabytes of Data
– Enable Ultrafast Image Rendering
– Create, for Optical Storage Area Networks (OSANs):
– Analysis and Modeling Tools
– OSAN Control and Data Management Protocols
– Buffering Strategies & Memory Hierarchies for WDM Networks
UCSD, UCI, USC, UIC, & NW
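To make the "network memory" item on the slide above concrete, here is a minimal, hypothetical Python sketch of striping a large data object across pools of remote RAM so reads can be served from distributed memory rather than local disk. This is not OptIPuter project code; the `RamPool` and `NetworkMemory` names are invented for illustration.

```python
# Hypothetical sketch: striping an object across distributed RAM pools
# ("network memory"). Not OptIPuter code; names are illustrative only.

class RamPool:
    """A remote machine contributing spare physical RAM."""
    def __init__(self, host, capacity_bytes):
        self.host = host
        self.capacity = capacity_bytes
        self.blocks = {}              # block_id -> bytes

    def put(self, block_id, data):
        self.blocks[block_id] = data

    def get(self, block_id):
        return self.blocks[block_id]

class NetworkMemory:
    """Round-robin striping of fixed-size blocks across RAM pools."""
    def __init__(self, pools, block_size=1 << 20):   # 1 MB blocks
        self.pools = pools
        self.block_size = block_size
        self.index = {}               # object name -> list of (pool, block_id)

    def store(self, name, data):
        placement = []
        for i in range(0, len(data), self.block_size):
            pool = self.pools[(i // self.block_size) % len(self.pools)]
            block_id = f"{name}:{i // self.block_size}"
            pool.put(block_id, data[i:i + self.block_size])
            placement.append((pool, block_id))
        self.index[name] = placement

    def load(self, name):
        return b"".join(pool.get(bid) for pool, bid in self.index[name])

# Usage: stripe a 4 MB object across three pools and read it back.
pools = [RamPool(f"node{i}", 512 << 20) for i in range(3)]
netmem = NetworkMemory(pools)
netmem.store("volume-slice-0", b"\x00" * (4 << 20))
assert netmem.load("volume-slice-0") == b"\x00" * (4 << 20)
```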
A Planetary Scale Grid
Powered by Intel Processors
Nature of Lump | Number of Processors per Lump | Number of National-Scale Lumps | Typical Processor | Speed of WAN Connection | Example
High-Perf. PC Cluster | 1000s | 4 | Intel IA-64 | 10-100 Gbps | TeraGrid
OptIPuter PC Cluster | 10s-100s | 1000s | Intel IA-32 | 1 Gbps | Dedicated Cluster
PC | 1 | Millions | Intel IA-32 | 1-100 Mbps | Entropia
Embedded Processors | 1 | Hundreds of Millions | Intel StrongARM | 100 Kbps-10 Mbps | AutoNet, Pocket PCs, Cell Phones
The Grid is "Lumpy"
[Historical slides shown here; sources: Smarr Talk 1997, 1998, 1999]
Cal-(IT)2 Latest Dedicated
Linux Intel IA-32 Cluster
• World’s Most Powerful
Dedicated Oceanographic
Computer
– 512 Intel Processors
– Dedicated December 2001
– Simulates Global Climate Change
• IBM Cal-(IT)2 Industrial Partner
• NSF and NRO Federal Funds
• Scripps Institution of
Oceanography
– Center for Observations,
Modeling and Prediction
– Director Detlef Stammer
A Planetary Scale Grid
Powered by Intel Processors
Nature of Lump | Number of Processors per Lump | Number of National-Scale Lumps | Typical Processor | Speed of WAN Connection | Example
High-Perf. PC Cluster | 1000s | 4 | Intel IA-64 | 10-100 Gbps | TeraGrid
OptIPuter PC Cluster | 10s-100s | 1000s | Intel IA-32 | 1 Gbps | Dedicated Cluster
PC | 1 | Millions | Intel IA-32 | 1-100 Mbps | Entropia
Embedded Processors | 1 | Hundreds of Millions | Intel StrongARM | 100 Kbps-10 Mbps | AutoNet, Pocket PCs, Cell Phones
The Grid is "Lumpy"
Source: Smarr Talk 1997
Early Peer-to-Peer NT/Intel System
NCSA Mosaic (1994)NCSA Symbio (1997)Microsoft (1998)
Source: Smarr Talk 1997
Entropia’s Planetary Computer
Grew to a Teraflop in Only Two Years
The Great Internet Mersenne Prime (2^P-1) Search (GIMPS)
Found the First Million Digit Prime
www.entropia.com
Eight 1000p IBM Blue Horizons
Deployed in Over 80 Countries
Peer-to-Peer Computing and Storage
Is a Transformational Technology
“The emergence of Peer-to-Peer computing
signifies a revolution in connectivity
that will be as profound to
the Internet of the future
as Mosaic was to the Web of the past.”
--Patrick Gelsinger, VP and CTO, Intel Corp.
March 2001
Bio-Pharma is
the P2P Killer Application
Enterprise P2P, PC Clusters,
and Internet Computing
Forbes 11.27.00
Evolution of Peer-to-Peer
Distributed Computing
Entropia DCGrid™ 5.0
III. Binary Code with Open Scheduling System (No Integration)
II. Binary Code Integration
I. Source Code Integration
• Three Successive Technology Phases – Different Application Integration Models
• These Models Enable Increasing Numbers of Applications
Entropia DCGrid™
Enterprise System Architecture and Elements
[Diagram: the DCGrid™ Manager 5.0 accepts Jobs and breaks them into Subjobs; the DCGrid™ Scheduler 5.0 matches Subjobs to resources across the Job Management, Scheduling, and Resource Management layers]
• Job Management: Manage Applications, Ensembles of Subjobs, Application Management
• Scheduling: Match Subjobs to Appropriate Resources and Execute, User Account Management
• Resource Management: Manage and Condition Underlying Desktop and Network Resources
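As a rough illustration of the three layers on this slide, here is a minimal Python sketch (hypothetical, not Entropia's actual API) in which a job is split into subjobs, matched to idle desktop resources, and dispatched.

```python
# Hypothetical sketch of the job / scheduling / resource layers above.
# Not Entropia code; class and method names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Resource:                      # Resource Management layer
    host: str
    idle: bool = True

@dataclass
class Subjob:
    job_name: str
    work_item: str
    assigned_to: str = ""

@dataclass
class Job:                           # Job Management layer
    name: str
    work_items: list

    def split(self):
        return [Subjob(self.name, w) for w in self.work_items]

def schedule(subjobs, resources):    # Scheduling layer
    """Match each subjob to the first idle desktop resource."""
    for sj in subjobs:
        for r in resources:
            if r.idle:
                sj.assigned_to, r.idle = r.host, False
                break
    return subjobs

# Usage: a virtual-screening job split over three desktop PCs.
pool = [Resource(f"desktop-{i}") for i in range(3)]
job = Job("virtual-screening", ["ligand-A", "ligand-B", "ligand-C"])
for sj in schedule(job.split(), pool):
    print(sj.work_item, "->", sj.assigned_to)
```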
DCGrid Performance Scales Linearly
[Charts: throughput vs. number of clients for HMMER (sequences per hour), AUTODOCK and GOLD (compounds per hour), and DOCK (packets per hour); Entropia throughput scales linearly with the number of clients, compared against single-CPU SGI and Sun baselines]
Adding Brilliance to Mobile Clients
with Internet Computing
• Napster Meets SETI@Home
– Distributed Computing and Storage
• Assume Ten Million PCs in Five Years
– Average Speed: Ten Gigaflops
– Average Free Storage: 100 GB
• Planetary Computer Capacity (tallied in the sketch below)
– 100,000 TeraFLOPS Speed
– 1 Million TeraBytes Storage
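A quick back-of-the-envelope check of those capacity figures, written as a small Python sketch; the assumptions are the slide's own (ten million PCs, ten gigaflops and 100 GB of free storage each).

```python
# Back-of-the-envelope tally of the planetary computer capacity above.
pcs = 10_000_000                 # ten million PCs assumed in five years
flops_per_pc = 10e9              # ten gigaflops each
free_storage_gb = 100            # 100 GB free disk each

total_teraflops = pcs * flops_per_pc / 1e12
total_terabytes = pcs * free_storage_gb / 1000

print(f"{total_teraflops:,.0f} TeraFLOPS")   # 100,000 TeraFLOPS
print(f"{total_terabytes:,.0f} TeraBytes")   # 1,000,000 TeraBytes
```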
A Mobile Internet
Powered by a Planetary Scale Computer
A Planetary Scale Grid
Powered by Intel Processors
Nature of Lump | Number of Processors per Lump | Number of National-Scale Lumps | Typical Processor | Speed of WAN Connection | Example
High-Perf. PC Cluster | 1000s | 4 | Intel IA-64 | 10-100 Gbps | TeraGrid
OptIPuter PC Cluster | 10s-100s | 1000s | Intel IA-32 | 1 Gbps | Dedicated Cluster
PC | 1 | Millions | Intel IA-32 | 1-100 Mbps | Entropia
Embedded Processors | 1 | Hundreds of Millions | Intel StrongARM | 100 Kbps-10 Mbps | AutoNet, Pocket PCs, Cell Phones
The Grid is "Lumpy"
We Are About to Transition
to a Mobile Internet
Third Generation Cellular Systems
Will Add Internet, QoS, and High Speeds
[Chart: internet subscribers in millions, 1999-2005, with the Mobile Internet curve rising toward 2,000 million and growing past the Fixed Internet curve]
Source: Ericsson
Future Wireless Technologies Are
a Strong Academic Research Discipline
Center for
Wireless Communications
Two Dozen ECE and CSE Faculty
• Low-Powered Circuitry: RF, Mixed A/D, ASIC, Materials
• Antennas and Propagation
• Communication Theory
• Communication Networks
• Multimedia Applications
Topics include: architecture, changing environment, modulation, media access, smart antennas, channel coding, scheduling, adaptive arrays, protocols, multiple access, end-to-end QoS, multi-resolution compression, hand-off
Source: UCSD CWC
Operating System Services for
Power / Performance Management
• Management of Power and Performance
– Efficient Way to Exchange Energy/Power-Related Info Among Hardware, OS, and Applications
– Power-Aware API (sketched below)
[Layer diagram, top to bottom: Application; Power-Aware API; Power-Aware Middleware; Operating System with POSIX services and PA-OSL Modified OS Services; PA-HAL Hardware Abstraction Layer; Hardware]
Rajesh Gupta, UCI, Cal-(IT)2
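A minimal Python sketch of how such a power-aware API might let an application and the OS exchange energy information; the layer names (PA-OSL, PA-HAL) are taken from the diagram, but the classes and methods are invented to illustrate the layering, not the project's actual interface.

```python
# Hypothetical sketch of a power-aware API layered over PA-OSL / PA-HAL.
# Illustrative only; not the Cal-(IT)2 project's actual interface.

class PAHAL:
    """Hardware abstraction layer exposing energy state."""
    def __init__(self):
        self.cpu_mhz = 400
    def battery_millijoules(self):
        return 18_000_000            # pretend battery gauge reading
    def set_cpu_frequency(self, mhz):
        self.cpu_mhz = mhz           # e.g. voltage/frequency scaling

class PAOSL:
    """Modified OS services mediating between applications and the HAL."""
    def __init__(self, hal):
        self.hal = hal
    def remaining_energy(self):
        return self.hal.battery_millijoules()
    def request_power_mode(self, mode):
        self.hal.set_cpu_frequency(100 if mode == "low" else 400)

class PowerAwareAPI:
    """What the application sees."""
    def __init__(self, osl):
        self.osl = osl
    def hint_workload(self, interactive):
        # Application tells the OS about its needs; the OS adjusts hardware.
        self.osl.request_power_mode("high" if interactive else "low")
    def energy_budget(self):
        return self.osl.remaining_energy()

# Usage: a background task hints it is not interactive, so the CPU slows.
api = PowerAwareAPI(PAOSL(PAHAL()))
api.hint_workload(interactive=False)
print(api.energy_budget(), "mJ remaining")
```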
Using Students to Invent the Future
of Widespread Use of Wireless PDAs
• Makes Campus “Transparent”
– See Into Departments, Labs, and Libraries
• Year-Long "Living Laboratory" Experiment, 2001-02
– 500+ Wireless-Enabled HP PocketPC PDAs
– Wireless Cards from Symbol, Chips from Intersil
– Incoming Freshmen in Computer Science and
Engineering
• Software Developed
– ActiveClass: Student-Teacher Interactions
– ActiveCampus: Geolocation and Resource Discovery
– Extensible Software Infrastructure for Others to Build On
• Deploy to New UCSD Undergrad College Fall 2002
– Sixth College Will be “Born Wireless”
– Theme: Culture, Art, and Technology
– Study Adoption and Discover New Services
Cal-(IT)2 Team: Bill Griswold, Gabriele Wienhausen
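To make the ActiveCampus-style geolocation and resource discovery idea concrete, here is a small hypothetical Python sketch: a PDA reports the 802.11 access point it hears, the service maps that to a campus location, and nearby resources are listed. This is an illustration only, not the ActiveCampus implementation; the access-point names and resources are made up.

```python
# Hypothetical sketch of geolocation + resource discovery on a wireless PDA.
# Not ActiveCampus code; the access-point map and resources are invented.

AP_LOCATIONS = {                     # 802.11 access point -> campus place
    "ap-cse-2f": "CSE Building, 2nd floor",
    "ap-library-w": "Library, west wing",
}

RESOURCES = {                        # campus place -> nearby resources
    "CSE Building, 2nd floor": ["Graphics Lab", "TA office hours", "Printer cse2-hp"],
    "Library, west wing": ["Reserve desk", "Group study rooms"],
}

def locate(access_point):
    """Map the strongest access point heard by the PDA to a place."""
    return AP_LOCATIONS.get(access_point, "unknown location")

def discover(access_point):
    """Return resources near the PDA's estimated location."""
    place = locate(access_point)
    return place, RESOURCES.get(place, [])

# Usage: a freshman's PocketPC hears the CSE-building access point.
place, nearby = discover("ap-cse-2f")
print(place, "->", ", ".join(nearby))
```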
ActiveCampus Explorer:
PDA Interface
[Screenshots of the ActiveCampus Explorer interface on a wireless PDA]
Source: Bill Griswold, UCSD CSE
The Cal-(IT)2 Grid Model for
Wireless Services Middleware
[Layer diagram: Applications run over a Wireless Services Interface providing Data Management, Real-Time Services, Power Control, Location Awareness, Mobile Code, and Security, layered on the UCI and UCSD wireless infrastructures]
J. Pasquale, UCSD
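A minimal, hypothetical Python sketch of what such a wireless services interface could look like to an application, with one stub per service named in the diagram; the class and method names are invented for illustration, not taken from the Cal-(IT)2 middleware.

```python
# Hypothetical sketch of a wireless services interface for grid applications.
# Service names follow the diagram above; the API itself is invented.

class WirelessServicesInterface:
    def __init__(self, infrastructure):
        self.infrastructure = infrastructure   # e.g. "UCSD" or "UCI" WLAN

    # Data Management: fetch data, caching locally while the link is weak.
    def fetch(self, url, cache_ok=True):
        return f"(data from {url} via {self.infrastructure})"

    # Real-Time Services: reserve bandwidth for a media stream.
    def reserve_bandwidth(self, kbps):
        return {"granted_kbps": min(kbps, 384)}

    # Power Control: ask the device to favor battery life.
    def set_power_profile(self, profile):
        return profile in ("low", "balanced", "performance")

    # Location Awareness: coarse position from the access point heard.
    def locate(self):
        return {"building": "engineering", "floor": 2}

    # Mobile Code: run a small task nearer the data.
    def submit_mobile_code(self, fn, *args):
        return fn(*args)

    # Security: authenticate the user to campus services.
    def authenticate(self, user, token):
        return bool(user) and bool(token)

# Usage: an application on a PDA locating itself and fetching data.
wsi = WirelessServicesInterface("UCSD")
print(wsi.locate(), wsi.fetch("http://campus.example/map"))
```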
Wireless Internet Puts
the Global Grid in Your Hand
802.11b Wireless
Interactive Access to:
• State of Computer
• Job Status
• Application Codes
Cellular Internet is Already Here
At Experimental Sites
• UCSD Has Been First Beta Test Site
– Qualcomm’s 1xEV Cellular Internet
• Optimized for Packet Data Services
– Uses a 1.25 MHz channel
– 2.4 Mbps Peak Forward Rate
– Part of the CDMA2000 Tech Family
– Can Be Used as Stand-Alone
• Chipsets in Development Support:
– PacketVideo's PVPlayer™ MPEG-4
– gpsOne™ Global Positioning System
– Bluetooth
– MP3
– MIDI
– BREW
[Photo: Rooftop HDR Access Point]
Automobiles will Become
SensorNet Platforms
• AutoNet Concept
– Make Cars Mobile, Ad Hoc, Wireless, Peer-to-Peer Platforms
– Distributed Sensing, Computation, and Control
– Autonomous Distributed Traffic Control
– Mobile Autonomous Software Agents
– Decentralized Databases
• ZEVNET Partners
– UCI Institute for Transportation Studies Testbed
– UCSD Computer Vision and Robotics Research Lab (CVRRL)
[Diagram labels: Clean Limited-Range Mobility, Urban Mobility, Rigid Line-Haul Performance, Congestion-Free Flow]
Will Recker, UCI and Mohan Trivedi, UCSD, Cal-(IT)2
ZEVNet
Current Implementation
Source: Will Recker, UCI, Cal-(IT)2
[Diagram elements: GPS and multiple on-board sensors; extensible data collection unit; CDPD wireless modem; ISP and Internet; service-provider website and project website; activity diary and tracing records; initial interview, on-line survey, pre-travel planning, and post-travel updating; REACT! and TRACER applications; currently 50 Toyotas]
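As a hypothetical illustration of the data path in the diagram above, the sketch below assembles a GPS-plus-sensor tracing record on the in-vehicle data collection unit and hands it to the wireless upload step; the field names and the endpoint URL are invented, not taken from the ZEVNet implementation.

```python
# Hypothetical sketch of an in-vehicle tracing record being collected and
# uploaded, loosely following the ZEVNet diagram. Field names and the
# endpoint are invented for illustration.
import json
import time

def read_gps():
    # Placeholder for the GPS receiver on the data collection unit.
    return {"lat": 33.6405, "lon": -117.8443}

def read_sensors():
    # Placeholder for on-board sensors (speed, battery state, etc.).
    return {"speed_mph": 28.0, "battery_pct": 64}

def build_tracing_record(vehicle_id):
    return {
        "vehicle": vehicle_id,
        "timestamp": time.time(),
        "gps": read_gps(),
        "sensors": read_sensors(),
    }

def upload(record, endpoint="http://zevnet.example/trace"):
    # Stand-in for sending the record over the CDPD modem to the ISP;
    # here we just serialize it as the payload that would be sent.
    payload = json.dumps(record)
    print(f"POST {endpoint} ({len(payload)} bytes)")
    return payload

upload(build_tracing_record("toyota-rav4-ev-07"))
```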
Embedded and Networked Intelligence
• On-Campus Navigation Enabled
– Web Service and Seamless WLAN Connectivity
– 50 Compaq Pocket PCs
• Virtual Device / Instrument Control Over Bluetooth Links
• Energy-Aware Application Programming
• Battery-Aware Communication Links
Source: Rajesh Gupta, UCI, Cal-(IT)2
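Finally, a small hypothetical Python sketch of the battery-aware communication idea on this last slide: the application widens its reporting interval and trims its payload as the battery drains. The thresholds and function names are invented, not taken from the project.

```python
# Hypothetical sketch of battery-aware communication scheduling.
# Thresholds and names are invented; this only illustrates the idea.

def report_interval_seconds(battery_pct):
    """Send less often as the battery drains."""
    if battery_pct > 60:
        return 10          # plenty of energy: report every 10 s
    if battery_pct > 25:
        return 60          # conserve: once a minute
    return 300             # critical: once every 5 minutes

def trim_payload(sample, battery_pct):
    """Drop optional fields when energy is scarce."""
    if battery_pct <= 25:
        return {"loc": sample["loc"]}          # essentials only
    return sample

sample = {"loc": (32.88, -117.23), "rssi": -61, "temp_c": 21.5}
for battery in (90, 40, 15):
    print(battery, "%:", report_interval_seconds(battery), "s,",
          trim_payload(sample, battery))
```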