
Global Virtual Organizations for Data Intensive Science: Creating a Sustainable Cycle of Innovation

Harvey B. Newman, Caltech

WSIS Pan-European Regional Ministerial Conference, Bucharest, November 7-9, 2002

Challenges of Data Intensive Science and Global VOs

Geographical dispersion: of people and resources

Scale: tens of Petabytes per year of data

Complexity: scientific instruments and information

5000+ Physicists; 250+ Institutes; 60+ Countries

Major challenges associated with:

Communication and collaboration at a distance

Managing globally distributed computing & data resources

Cooperative software development and physics analysis

New Forms of Distributed Systems: Data Grids

Emerging Data Grid User Communities

Grid Physics Projects (GriPhyN/iVDGL/EDG): ATLAS, CMS, LIGO, SDSS; BaBar/D0/CDF

NSF Network for Earthquake Engineering Simulation (NEES): integrated instrumentation, collaboration, simulation

Access Grid; VRVS: supporting new modes of group-based collaboration

And: Genomics, Proteomics, ...

The Earth System Grid and EOSDIS

Federating Brain Data

Computed MicroTomography

Virtual Observatories

Grids are Having a Global Impact on Research in Science & Engineering

Global Networks for HENP and Data Intensive Science

National and international networks, with sufficient capacity and capability, are essential today for:

The daily conduct of collaborative work in both experiment and theory

Data analysis by physicists from all world regions

The conception, design and implementation of next generation facilities, as “global (Grid) networks”

“Collaborations on this scale would never have been attempted, if they could not rely on excellent networks” – L. Price, ANL

Grids Require Seamless Network Systems with Known, High Performance

High Speed Bulk Throughput BaBar Example [and LHC]

Driven by:

Data volume: HENP data rates, e.g. BaBar ~500 TB/year, growing with Moore's law

Data rate from experiment >20 MBytes/s [5-75 times more at LHC]

A Grid of multiple regional computer centers (e.g. Lyon-FR, RAL-UK, INFN-IT; CA: LBNL, LLNL, Caltech) needs copies of the data

Need high-speed networks and the ability to utilize them fully

High speed today = 1 TB/day (~100 Mbps full time); see the conversion sketch below

Develop 10-100 TB/day capability (several Gbps full time) within the next 1-2 years

Data volumes are more than doubling each year, driving Grid and network needs
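As a quick check of the rates quoted above, here is a minimal Java sketch (my own example, not from the talk) that converts a daily data volume into the average rate needed to move it: 1 TB/day comes out near 93 Mbps, and 100 TB/day near 10 Gbps.

    // Minimal sketch (illustrative, not from the talk): convert a daily data
    // volume into the sustained network rate needed to move it in 24 hours.
    public class DailyVolumeToRate {
        // Average rate in Mbps for `terabytesPerDay` TB moved in one day
        // (1 TB = 1e12 bytes = 8e6 megabits).
        static double requiredMbps(double terabytesPerDay) {
            double megabitsPerDay = terabytesPerDay * 8.0e6;
            double secondsPerDay = 24 * 3600;
            return megabitsPerDay / secondsPerDay;
        }

        public static void main(String[] args) {
            for (double tb : new double[] {1, 10, 100}) {
                System.out.printf("%5.0f TB/day -> %8.1f Mbps sustained%n", tb, requiredMbps(tb));
            }
        }
    }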

HENP Major Links: Bandwidth Roadmap (Scenario) in Gbps

Year   Production             Experimental             Remarks
2001   0.155                  0.622-2.5                SONET/SDH
2002   0.622                  2.5                      SONET/SDH; DWDM; GigE integration
2003   2.5                    10                       DWDM; 1 + 10 GigE integration
2005   10                     2-4 X 10                 λ switching; λ provisioning
2007   2-4 X 10               ~10 X 10; 40 Gbps        1st generation λ Grids
2009   ~10 X 10 or 1-2 X 40   ~5 X 40 or ~20-50 X 10   40 Gbps λ switching
2011   ~5 X 40 or ~20 X 10    ~25 X 40 or ~100 X 10    2nd generation λ Grids; Terabit networks
2013   ~Terabit               ~Multi-Tbps              ~Fill one fiber

Continuing the trend: ~1000 times bandwidth growth per decade; we are rapidly learning to use and share multi-Gbps networks

AMS-IX Internet Exchange Throughput: Accelerating Growth in Europe (NL)

[Charts: monthly traffic showing 4X growth in 14 months, 8/01 - 10/02; hourly traffic on 11/02/02 on a 0-10 Gbps scale]

HENP & world bandwidth growth: 3-4 times per year, 2 to 3 times Moore's law

National Light Rail Footprint

SEA POR SAC SVL LAX OGD FRE SDG PHO OLG DEN STR DAL KAN CHI CLE PIT RAL NAS NYC WDC BOS WAL 15808 Terminal, Regen or OADM site Fiber route ATL NLR

Buildout Starts November 2002

Initially 4 10 Gb Wavelengths

To 40 10Gb Waves in Future NREN Backbones reached 2.5-10 Gbps in 2002 in Europe, Japan and US; US: Transition now to optical, dark fiber, multi-wavelength R&E network

    

Distributed System Services Architecture (DSSA): CIT/Romania/Pakistan

Agents: autonomous, auto-discovering, self-organizing, collaborative, using a Lookup / Discovery Service

"Station Servers" (static) host mobile "Dynamic Services"

Station Servers interconnect dynamically; they form a robust fabric in which mobile agents travel, with a payload of (analysis) tasks (an illustrative sketch follows this slide)

Adaptable to Web services (OGSA) and many platforms

Adaptable to ubiquitous, mobile working environments

Managing global systems of increasing scope and complexity, in the service of science and society, requires a new generation of scalable, autonomous, artificially intelligent software systems
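The station-server idea can be pictured with a small, purely illustrative Java sketch; the names below (StationServer, DynamicService, the in-memory LOOKUP map) stand in for the lookup/discovery service and the mobile agents, and are not the actual DSSA implementation.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative sketch only: static "station servers" register in a shared
    // lookup registry and host mobile "dynamic services" that can migrate
    // between stations with their task payload.
    public class StationServer {
        static final Map<String, StationServer> LOOKUP = new ConcurrentHashMap<>(); // toy lookup/discovery service

        interface DynamicService {           // the mobile agent's payload
            void runOn(StationServer host);  // e.g. an analysis task
        }

        private final String name;

        StationServer(String name) {
            this.name = name;
            LOOKUP.put(name, this);          // self-register so peers can discover this station
        }

        void accept(DynamicService service) { service.runOn(this); }

        // Hand a mobile service to another station found via the lookup registry.
        void migrate(DynamicService service, String targetStation) {
            StationServer target = LOOKUP.get(targetStation);
            if (target != null) target.accept(service);
        }

        public static void main(String[] args) {
            StationServer cit = new StationServer("CIT");
            new StationServer("Bucharest");
            cit.accept(host -> System.out.println("analysis task running at " + host.name));
            cit.migrate(host -> System.out.println("task migrated to " + host.name), "Bucharest");
        }
    }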

MonALISA: A Globally Scalable Grid Monitoring System

By I. Legrand (Caltech); deployed on the US CMS Grid

Agent-based dynamic information / resource discovery mechanism

Implemented in Java/Jini; SNMP; WSDL / SOAP with UDDI

Part of a global "Grid Control Room" service

http://cil.cern.ch:8080/MONALISA/
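As a rough illustration of the agent-based publish-and-query pattern described above (not MonALISA's real API), a toy Java version with hypothetical farm and metric names might look like this:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Rough illustration of the publish/query idea; this does not use
    // MonALISA's real classes, and the farm/metric names are hypothetical.
    public class FarmMonitorSketch {
        // registry: farm name -> (metric name -> latest value)
        static final Map<String, Map<String, Double>> REGISTRY = new ConcurrentHashMap<>();

        // A farm-side monitoring agent pushes its latest measurements.
        static void publish(String farm, String metric, double value) {
            REGISTRY.computeIfAbsent(farm, f -> new ConcurrentHashMap<>()).put(metric, value);
        }

        // A "Grid Control Room" client pulls values on demand.
        static Double query(String farm, String metric) {
            Map<String, Double> metrics = REGISTRY.get(farm);
            return metrics == null ? null : metrics.get(metric);
        }

        public static void main(String[] args) {
            publish("USCMS-Caltech", "cpuLoad", 0.42);    // hypothetical farm and metrics
            publish("USCMS-Caltech", "freeDiskTB", 3.1);
            System.out.println("cpuLoad = " + query("USCMS-Caltech", "cpuLoad"));
        }
    }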

History - Throughput Quality Improvements from US to World

TCP bandwidth < MSS / (RTT * sqrt(loss))    (1)

~80% annual improvement in achieved throughput: a factor of ~100 over 8 years

Progress, but the Digital Divide is maintained: action is required
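For concreteness, the sketch below evaluates the bound in Eq. (1) with an assumed 1460-byte MSS and a 120 ms transatlantic-scale RTT at a few example loss rates, showing why very low packet loss is essential for high per-flow throughput:

    // Evaluating the bound of Eq. (1), BW < MSS / (RTT * sqrt(loss)), for an
    // assumed 1460-byte MSS and a 120 ms transatlantic-scale RTT. The loss
    // rates are example values, not measurements from the talk.
    public class TcpThroughputBound {
        // Bound in Mbps for MSS in bytes, RTT in seconds, packet loss probability.
        static double boundMbps(double mssBytes, double rttSeconds, double loss) {
            double bytesPerSecond = mssBytes / (rttSeconds * Math.sqrt(loss));
            return bytesPerSecond * 8 / 1e6;
        }

        public static void main(String[] args) {
            double mss = 1460, rtt = 0.120;
            for (double loss : new double[] {1e-3, 1e-5, 1e-7}) {
                System.out.printf("loss %.0e -> < %.1f Mbps%n", loss, boundMbps(mss, rtt, loss));
            }
        }
    }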

[Chart: NREN core network size (Mbps-km) per country, logarithmic scale (source: http://www.terena.nl/compendium/2002); leading: Nl, Ir; in transition: Gr, Pl; advanced: Ch, Es, It, Hu, Fi, Cz; lagging: Ro, Ukr]

Perspectives on the Digital Divide: international, local, regional, political

Building Petascale Global Grids: Implications for Society

Meeting the challenges of Petabyte-to-Exabyte Grids, and Gigabit-to-Terabit Networks, will transform research in science and engineering

These developments could create the first truly global virtual organizations (GVO)

If these developments are successful, and deployed widely as standards, this could lead to profound advances in industry, commerce and society at large

By changing the relationship between people and “persistent” information in their daily lives

Within the next five to ten years

Realizing the benefits of these developments for society, and creating a sustainable cycle of innovation, compels us

TO CLOSE the DIGITAL DIVIDE

Recommendations

To realize the Vision of Global Grids, governments, international institutions and funding agencies should:

Define international IT policies (for instance on AAA: authentication, authorization and accounting)

Support establishment of international standards

Provide adequate funding to continue R&D in Grid and Network technologies

Deploy international production Grid and Advanced Network testbeds on a global scale

Support education and training in Grid & Network technologies for new communities of users

Create open policies, and encourage joint development programs, to help

Close the Digital Divide

The WSIS RO meeting, starting today, is an important step in the right direction

Some Extra Slides Follow

IEEAF: Internet Educational Equal Access Foundation; Bandwidth Donations for Research and Education

Next Generation Requirements for Physics Experiments

Rapid access to event samples and analyzed results drawn from massive data stores

From Petabytes in 2002, ~100 Petabytes by 2007, to ~1 Exabyte by ~2012.

Coordinating and managing the large but LIMITED computing, data and network resources effectively

Persistent access for physicists throughout the world, for collaborative work

Grid Reliance on Networks

Advanced applications such as Data Grids rely on seamless operation of Local and Wide Area Networks

With reliable, quantifiable high performance

Networks, Grids and HENP

Grids are changing the way we do science and engineering

Next generation 10 Gbps network backbones are here: in the US, Europe and Japan; across oceans

Optical Nets with many 10 Gbps wavelengths will follow

Removing regional and last-mile bottlenecks, and compromises in network quality, is now on the critical path

Network improvements are especially needed in SE Europe, So. America; and many other regions:

Romania; India, Pakistan, China; Brazil, Chile; Africa

Realizing the promise of Network & Grid technologies means:

Building a new generation of high performance network tools and artificially intelligent, scalable software systems

Strong regional and inter-regional funding initiatives to support these groundbreaking developments

Closing the Digital Divide

What HENP and the World Community Can Do

Spread the message: ICFA SCIC, IEEAF et al. can help

Help identify and highlight specific needs (to work on): policy problems; last mile problems; etc.

Encourage joint programs [Virtual Silk Road project; Japanese links to SE Asia and China; AMPATH to So. America]

NSF & LIS proposals: US and EU to South America

Make direct contacts, arrange discussions with gov't officials

ICFA SCIC is prepared to participate where appropriate

Help Start, Get Support for Workshops on Networks & Grids

Encourage, help form funded programs

Help form Regional support & training groups [Requires Funding]

LHC Data Grid Hierarchy

CERN/Outside resource ratio ~1:2; Tier0 : (sum of Tier1s) : (sum of Tier2s) ~1:1:1

Experiment Online System (raw data at ~PByte/sec) feeds Tier 0+1 at CERN at ~100-400 MBytes/sec; CERN: 700k SI95, ~1 PB disk, tape robot

Tier 1: regional centers, e.g. IN2P3, RAL, INFN, FNAL (200k SI95, 600 TB), connected at ~2.5-10 Gbps

Tier 2: Tier2 centers, connected at ~2.5 Gbps

Tier 3: institutes, with physics data caches, connected at 0.1-10 Gbps

Tier 4: physicists' workstations

Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels

Why Grids?

1,000 physicists worldwide pool resources for petaop analyses of petabytes of data

A biochemist exploits 10,000 computers to screen 100,000 compounds in an hour

Civil engineers collaborate to design, execute, & analyze shake table experiments

Climate scientists visualize, annotate, & analyze terabyte simulation datasets

An emergency response team couples real-time data, weather model, and population data

(Argonne / Chicago)

Why Grids? (contd)

Scientists at a multinational company collaborate on the design of a new product

A multidisciplinary analysis in aerospace couples code and data in four companies

An HMO mines data from its member hospitals for fraud detection

An application service provider offloads excess load to a compute cycle provider

An enterprise configures internal & external resources to support e-business workload

Grids: Why Now?

Moore's law improvements in computing produce highly functional endsystems

The Internet and burgeoning wired and wireless networks provide universal connectivity

Changing modes of working and problem solving emphasize teamwork and computation

Network exponentials produce dramatic changes in geometry and geography

9-month doubling of network capacity: double Moore's law!

1986-2001: x340,000; 2001-2010: x4,000?
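A back-of-envelope check of these figures (my own arithmetic, not from the slide): converting each overall growth factor into an equivalent doubling time gives roughly 9 to 10 months in both periods, consistent with the quoted 9-month doubling.

    // Back-of-envelope check: convert an overall growth factor over a period
    // into an equivalent doubling time in months.
    public class DoublingTime {
        static double doublingTimeMonths(double growthFactor, double years) {
            double doublings = Math.log(growthFactor) / Math.log(2.0);
            return years * 12.0 / doublings;
        }

        public static void main(String[] args) {
            // 1986-2001: x340,000 over 15 years -> ~9.8 months per doubling
            System.out.printf("1986-2001: %.1f months per doubling%n", doublingTimeMonths(340_000, 15));
            // 2001-2010: x4,000 over 9 years -> ~9.0 months per doubling
            System.out.printf("2001-2010: %.1f months per doubling%n", doublingTimeMonths(4_000, 9));
        }
    }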


A Short List: Revolutions in Information Technology (2002-7)

Scalable data-intensive metro and long-haul network technologies:

DWDM: 10 Gbps, then 40 Gbps per wavelength; 1 to 10 Terabits/sec per fiber

10 Gigabit Ethernet (see www.10gea.org); 10GbE / 10 Gbps LAN/WAN integration

Metro buildout and optical cross connects

Dynamic provisioning: dynamic path building; "Lambda Grids"

Defeating the "Last Mile" problem (wireless; or Ethernet in the First Mile)

3G and 4G Wireless Broadband (from ca. 2003); and/or Fixed Wireless “Hotspots”

Fiber to the Home

Community-Owned Networks

Grid Architecture

Application layer

Collective layer ("Coordinating multiple resources"): ubiquitous infrastructure services, application-specific distributed services

Resource layer ("Sharing single resources"): negotiating access, controlling use

Connectivity layer ("Talking to things"): communication (Internet protocols) & security

Fabric layer ("Controlling things locally"): access to, & control of, resources

(Compare the Internet protocol stack: Application, Transport, Internet, Link)
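Purely as an illustration of this layering, and not of any real Grid toolkit API, the toy Java sketch below composes invented Fabric, Connectivity, Resource and Collective interfaces, with an application call on top of the Collective layer:

    // Illustration only of the layering above; interface names mirror the
    // slide's layer names, not any real Grid toolkit API.
    public class GridLayersSketch {
        interface Fabric       { String controlLocalResource(String action); } // "controlling things locally"
        interface Connectivity { String secureMessage(String payload); }       // "talking to things"
        interface Resource     { String negotiateAccess(String request); }     // "sharing single resources"
        interface Collective   { String coordinate(String... requests); }      // "coordinating multiple resources"

        public static void main(String[] args) {
            Fabric fabric = action -> "local:" + action;
            Connectivity net = payload -> "authenticated(" + payload + ")";
            // Each layer builds on the one below it.
            Resource resource = request -> fabric.controlLocalResource(net.secureMessage(request));
            Collective broker = requests -> {
                StringBuilder plan = new StringBuilder("plan[");
                for (String r : requests) plan.append(resource.negotiateAccess(r)).append(";");
                return plan.append("]").toString();
            };
            // The application sits on top of the Collective layer.
            System.out.println(broker.coordinate("cpu@tier2", "data@tier1"));
        }
    }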


LHC Distributed Computing Model: HENP Data Grids Versus Classical Grids

Grid projects have been a step forward for HEP and LHC: a path to meet the “LHC Computing” challenges

But: the differences between HENP Grids and classical Grids are not yet fully appreciated

The original Computational and Data Grid concepts are largely stateless, open systems: known to be scalable

Analogous to the Web

The classical Grid architecture has a number of implicit assumptions

The ability to locate and schedule suitable resources, within a tolerably short time (i.e. resource richness)

Short transactions; Relatively simple failure modes

HEP Grids are data-intensive and resource constrained:

Long transactions; some long queues

Schedule conflicts; [policy decisions]; task redirection

A lot of global system state to be monitored and tracked

Upcoming Grid Challenges: Building a Globally Managed Distributed System

Maintaining a Global View of Resources and System State

End-to-end System Monitoring

Adaptive Learning: new paradigms for execution optimization (eventually automated)

Workflow Management, Balancing Policy Versus Moment-to-moment Capability to Complete Tasks

Balance High Levels of Usage of Limited Resources Against Better Turnaround Times for Priority Jobs

Goal-Oriented; Steering Requests According to (Yet to be Developed) Metrics

Robust Grid Transactions In a Multi-User Environment

Realtime Error Detection, Recovery

Handling User-Grid Interactions: Guidelines; Agents

Building Higher Level Services, and an Integrated User Environment for the Above

Interfacing to the Grid: Above the Collective Layer

(Physicists') Application Codes

Experiments' Software Framework Layer: needs to be modular and Grid-aware, with an architecture able to interact effectively with the Grid layers

Grid Applications Layer

(Parameters and algorithms that govern system operations)

Policy and priority metrics

Workflow evaluation metrics

Task-Site Coupling proximity metrics (a toy scoring sketch follows this slide)

Global End-to-End System Services Layer

Monitoring and tracking of component performance

Workflow monitoring and evaluation mechanisms

Error recovery and redirection mechanisms

System self-monitoring, evaluation and optimization mechanisms
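As a toy example of the policy/priority and task-site coupling metrics listed in the Grid Applications Layer above (my own illustration, with invented weights), a placement score might combine policy priority, data locality and current site load:

    // Toy example (invented weights and inputs) of a task-site coupling score
    // combining policy priority, data locality and current site load.
    public class TaskSiteScoreSketch {
        static double score(double policyPriority,  // 0..1, from VO policy
                            double dataLocality,    // fraction of input data already at the site
                            double siteLoad) {      // 0..1, current utilization
            double wPolicy = 0.5, wLocality = 0.3, wLoad = 0.2;
            return wPolicy * policyPriority + wLocality * dataLocality + wLoad * (1.0 - siteLoad);
        }

        public static void main(String[] args) {
            System.out.printf("Tier1 (data local, busy):  %.2f%n", score(0.8, 0.9, 0.7));
            System.out.printf("Tier2 (little data, idle): %.2f%n", score(0.8, 0.2, 0.1));
        }
    }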

DataTAG Project

[Network map: links among UK SuperJANET4, IT GARR-B, NL SURFnet, FR Renater, GEANT, GENEVA, New York / STARLIGHT, ABILENE, ESNET, STAR-TAP and CALREN]

EU-solicited project. Partners: CERN, PPARC (UK), Amsterdam (NL) and INFN (IT); and US (DOE/NSF: UIC, NWU and Caltech).

Main aims:

Ensure maximum interoperability between US and EU Grid projects

Transatlantic testbed for advanced network research

2.5 Gbps wavelength triangle from 7/02 (10 Gbps triangle in 2003)

TeraGrid (www.teragrid.org): NCSA, ANL, SDSC, Caltech

A preview of the Grid hierarchy and networks of the LHC era

[Network map: OC-48 (2.5 Gb/s) Abilene links plus multiple 10 GbE waves (Qwest) and multiple 10 GbE on I-WIRE dark fiber, connecting Chicago (Starlight / NW Univ, UIC, ANL, Ill Inst of Tech, Univ of Chicago, multiple carrier hubs), Urbana (NCSA/UIUC), Indianapolis (Abilene NOC), Caltech and San Diego]

Source: Charlie Catlett, Argonne

Transoceanic networking integrated with the Abilene, TeraGrid, regional nets and continental network infrastructures in the US, Europe, Asia and South America

Baseline BW for the US-CERN link (HENP Transatlantic WG, DOE+NSF); baseline evolution typical of major HENP links 2001-2006:

FY2001   FY2002   FY2003   FY2004   FY2005   FY2006
310      622      2500     5000     10000    20000    (Mbps)

DataTAG 2.5 Gbps research link in Summer 2002; 10 Gbps research link by approx. mid-2003

HENP As a Driver of Networks: Petascale Grids with TB Transactions

Problem: extract "small" data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores

Survivability of the HENP global Grid system, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time.

Example: take 800 seconds to complete the transaction. Then:

Transaction Size (TB)    Net Throughput (Gbps)
1                        10
10                       100
100                      1000 (capacity of one fiber today)

Summary: Providing switching of 10 Gbps wavelengths within ~3 years, and Terabit switching within 5-8 years, would enable "Petascale Grids with Terabyte transactions", as required to fully realize the discovery potential of major HENP programs, as well as other data-intensive fields.
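The table's arithmetic can be reproduced directly, as in this short Java check: moving N terabytes in 800 seconds requires N x 10 Gbps of sustained throughput.

    // Reproducing the table's arithmetic: moving N terabytes in 800 seconds
    // needs N * 8e12 bits / 800 s of sustained throughput (10 Gbps per TB).
    public class TransactionThroughput {
        static double requiredGbps(double terabytes, double seconds) {
            double bits = terabytes * 8e12;   // 1 TB = 1e12 bytes = 8e12 bits
            return bits / seconds / 1e9;
        }

        public static void main(String[] args) {
            for (double tb : new double[] {1, 10, 100}) {
                System.out.printf("%5.0f TB in 800 s -> %6.0f Gbps%n", tb, requiredGbps(tb, 800));
            }
        }
    }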

National Research Networks in Japan

SuperSINET:

Started operation January 4, 2002

Support for 5 important areas: HEP, Genetics, Nano-Technology, Space/Astronomy, Grids

Provides 10 wavelengths: a 10 Gbps IP connection and 7 direct intersite GbE links; some connections to 10 GbE in JFY2002

[Network map: WDM paths and IP routers linking Tokyo, KEK, U Tokyo, NII Chiba, NII Hitot., IMS, ISAS, NAO, NIG, Nagoya U, Osaka U, Kyoto U, ICR Kyoto-U, NIFS and the Internet]

HEPnet-J:

Will be re-constructed with MPLS-VPN in SuperSINET

Proposal: two TransPacific 2.5 Gbps wavelengths, and a Japan-CERN Grid testbed by ~2003

National R&E Network Example Germany: DFN TransAtlantic Connectivity Q1 2002

2 X 2.5 Gbps (STM-16) now: NY-Hamburg and NY-Frankfurt

ESnet peering at 34 Mbps

Direct peering to Abilene and CANARIE expected

UCAID will add another 2 OC48's; proposing a Global Terabit Research Network (GTRN)

FSU Connections via satellite: Yerevan, Minsk, Almaty, Baikal

Speeds of 32 - 512 kbps

SILK Project (2002): NATO funding

Links to Caucasus and Central Asia (8 Countries)

Currently 64-512 kbps

Propose VSAT for 10-50 X BW: NATO + State Funding

Modeling and Simulation: MONARC System

The simulation program developed within MONARC (Models Of Networked Analysis at Regional Centers) uses a process-oriented approach for discrete event simulation, and provides a realistic modelling tool for large scale distributed systems.
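To make the process-oriented, discrete-event idea concrete, here is a generic minimal event loop in Java (an illustration only, not MONARC code): events are held in a time-ordered queue and the simulation clock advances from one event to the next.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Generic minimal discrete-event loop (illustration only, not MONARC code):
    // events sit in a time-ordered queue and the clock jumps event to event.
    public class DiscreteEventSketch {
        record Event(double time, String description) {}

        public static void main(String[] args) {
            PriorityQueue<Event> queue = new PriorityQueue<>(Comparator.comparingDouble(Event::time));
            // Hypothetical activities at a regional centre, in simulated seconds.
            queue.add(new Event(0.0,   "job submitted at a Tier1 centre"));
            queue.add(new Event(120.0, "input data transfer complete"));
            queue.add(new Event(900.0, "analysis job finished"));

            double clock = 0.0;
            while (!queue.isEmpty()) {
                Event next = queue.poll();
                clock = next.time();          // advance the simulation clock
                System.out.printf("t=%7.1f s  %s%n", clock, next.description());
            }
        }
    }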

SIMULATION of Complex Distributed Systems for LHC

Globally Scalable Monitoring Service

[Architecture diagram, by I. Legrand: farm monitors collect data via push & pull (rsh & ssh scripts; SNMP) and register with Lookup Services; clients and other services locate RC Monitor Services through proxies and dynamic discovery; supporting components include a component factory, GUI marshaling, code transport and RMI data access]

MONARC SONN: 3 Regional Centres Learning to Export Jobs

[Simulation snapshot, by I. Legrand, at Day = 9: three regional centres, CERN (30 CPUs), CALTECH (25 CPUs) and NUST (20 CPUs), linked at 1 MB/s with 150 ms RTT; the plotted per-centre values are 0.83, 0.73 and 0.66]

COJAC (CMS ORCA Java Analysis Component): Java3D, Objectivity via JNI, Web Services; demonstrated Caltech-Rio de Janeiro (Feb.) and Chile

Internet2 HENP WG [*]

Mission: To help ensure that the required

National and international network infrastructures (end-to-end)

Standardized tools and facilities for high performance and end-to-end monitoring and tracking [GridFTP; bbcp; …]

Collaborative systems

are developed and deployed in a timely manner, and used effectively to meet the needs of the US LHC and other major HENP Programs, as well as the at-large scientific community.

To carry out these developments in a way that is broadly applicable across many fields

Formed an Internet2 WG as a suitable framework: October 2001

[*] Co-Chairs: S. McKee (Michigan), H. Newman (Caltech); Sec’y J. Williams (Indiana)

Website: http://www.internet2.edu/henp ; also see the Internet2 End-to-end Initiative: http://www.internet2.edu/e2e

Bucharest MAN for Ro-Grid

[Ring diagram: NOC and POPs at Victoriei, Romana, Gara de Nord, Universitate, Eroilor, Izvor, Unirii and Palat Telefoane, built from Cat3550-24-L3 and Cat4000 L3 switches and C7206/C7513 routers with Gigabit interfaces; 1G links with 1G backup links; ICI connected at 100 Mbps, IFIN at 10/100/1000 Mbps]

GEANT connection: December 1, 2002

RoEdu Network

[National map: RoEdu POPs across Romania (Timişoara, Arad, Oradea, Satu Mare, Baia Mare, Suceava, Iaşi, Cluj, Tg-Mureş, Braşov, Craiova, Constanţa, Bucureşti and many others), with links of 2 Mbps (POP), 2 Mbps (backup), 8 Mbps, 34 Mbps and 155 Mbps]