Open Science Grid Middleware
Open Science Grid: Project Statement & Vision
Transform compute- and data-intensive science through a cross-domain, self-managed, national distributed cyberinfrastructure that brings together campus and community infrastructure, facilitating the research of Virtual Organizations at all scales.
[Figure: VO Jobs Running on OSG in 2006]
Why the effort is important:
Sustained growth in the needs of traditional compute- and data-intensive science;
The steady stream of scientific domains that add to and expand the role of computing and data processing in their discovery process;
Coupled with the administrative and physical distribution of compute and storage resources and the increasing size, diversity and scope of scientific collaborations.
[Figure: Facility (preliminary commitments), projected 2006-2009: CPU in millions of SpecInt2000 and cache disk in petabytes, broken down by VO (ATLAS, CMS, LIGO, STAR, others).]
Goals of the OSG:
Support data storage, distribution & computation for High Energy, Nuclear & Astro Physics collaborations, in particular delivering to the schedule, capacity and capability needed for LHC and LIGO science.
Engage and benefit other research and science communities of all scales by progressively supporting their applications.
Educate & train students, administrators & educators.
Operate & evolve a petascale Distributed Facility across the US, providing guaranteed & opportunistic access to shared compute & storage resources (a job-submission sketch follows this list).
Interface & federate with campus, regional, and other national & international grids (including EGEE & TeraGrid).
Provide an integrated, robust software stack for the Facility & applications, tested on a well-provisioned, at-scale validation facility.
Evolve the capabilities offered by the Facility by deploying externally developed new services & technologies.
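To make the "guaranteed & opportunistic access" goal concrete, the sketch below shows one way a VO member might hand a job to a remote OSG compute element through Condor-G, the grid-submission mode of Condor shipped in the VDT. The gatekeeper hostname, script names and file names are hypothetical placeholders, the exact submit-file keywords can vary between middleware versions, and a valid grid/VOMS proxy is assumed to already exist; treat this as an illustrative sketch under those assumptions, not the project's prescribed workflow.

#!/usr/bin/env python
"""Illustrative sketch only: submit one opportunistic job to an OSG
compute element via Condor-G.  The gatekeeper and file names below are
hypothetical placeholders, not real OSG resources."""

import subprocess
import textwrap

# Hypothetical pre-WS GRAM gatekeeper at a participating site.
GATEKEEPER = "gatekeeper.example-site.edu/jobmanager-condor"

# Minimal Condor-G submit description (grid universe).  A valid VOMS
# proxy (e.g. obtained earlier with voms-proxy-init) is assumed.
submit_description = textwrap.dedent(f"""\
    universe      = grid
    grid_resource = gt2 {GATEKEEPER}
    executable    = analyze.sh
    arguments     = run42.dat
    output        = analyze.out
    error         = analyze.err
    log           = analyze.log
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    queue
    """)

with open("analyze.sub", "w") as f:
    f.write(submit_description)

# condor_submit hands the job to the local Condor-G scheduler, which
# forwards it to the remote gatekeeper and tracks it in analyze.log.
subprocess.run(["condor_submit", "analyze.sub"], check=True)

In this picture, "opportunistic" simply means the job runs when the remote site has spare capacity available to the VO, rather than against a guaranteed allocation at that site.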
Challenges: Sociological and Technical
Develop the organizational and management structure of an open consortium that drives such a CI.
Develop the organizational and management structure for the project that builds, operates and evolves such a CI.
Maintain and evolve a software stack capable of offering powerful and dependable capabilities to the NSF and DOE scientific communities.
Operate and evolve a dependable facility.
Boston University
California Institute of Technology
Cornell University
Indiana University
Renaissance Computing Institute
University of California, San Diego
University of Florida
University of Iowa
University of Wisconsin, Madison
http://www.opensciencegrid.org
Brookhaven National Laboratory
Columbia University
Fermi National Accelerator Laboratory
Lawrence Berkeley National Laboratory
Stanford Linear Accelerator Center
University of Chicago/Argonne National Laboratory
Computational Science: Here, There and Everywhere
Grid of Grids: From Local to Global
Global Research & Shared Resources
Software Stack
Software Release Process
Timeline & Milestones (preliminary)
Timeline axis: 2006 (project start) through 2011, with the end of Phase I around 2008 and the end of Phase II around 2010.
LHC: Contribute to the Worldwide LHC Computing Grid; LHC simulations; LHC event data distribution and analysis; support 1,000 users; 20 PB data archive.
LIGO: Contribute to LIGO workflow and data analysis; LIGO S5 data run; LIGO Data Grid dependent on OSG; Advanced LIGO.
STAR, CDF, D0, Astrophysics: CDF simulation, then CDF simulation and analysis; D0 simulations and D0 reprocessing; STAR data distribution and jobs; 10K jobs per day.
Additional science communities: +1 community per year through 2011.
Facility security (risk assessment, audits, incident response, management, operations, technical controls): security plan V1 and 1st audit, followed by annual risk assessments and audits.
Facility operations and metrics: increase robustness and scale; operational metrics defined and validated each year; interoperate and federate with campus and regional grids.
VDT and OSG software releases (major release every 6 months, minor updates as needed): VDT 1.4.0, 1.4.1, 1.4.2, … with incremental updates; OSG 0.6.0, 0.8.0, 1.0, 2.0, 3.0, …; dCache with accounting and auditing; federated monitoring and role-based information services; VDS with SRM authorization; common software distribution with TeraGrid; transparent data and job movement with TeraGrid (a data-movement sketch follows this timeline); EGEE using VDT 1.4.X; transparent data management with EGEE.
Extended capabilities & increased scalability and performance for jobs and data to meet stakeholder needs: SRM/dCache extensions; "just in time" workload management; VO services infrastructure; integrated network management; data analysis (batch and interactive) workflow; improved workflow and resource selection; work with SciDAC-2 CEDS and security with Open Science.
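The data-movement milestones above (SRM/dCache, transparent data and job movement with TeraGrid, transparent data management with EGEE) ultimately rest on GridFTP/SRM transfers between storage elements. As a minimal, hypothetical sketch, the fragment below copies a single file between two GridFTP endpoints with globus-url-copy, which the VDT distributes; the hostnames and paths are invented, real sites publish their own SRM/GridFTP URLs, and a valid grid proxy is assumed.

#!/usr/bin/env python
"""Illustrative sketch only: move one file between two GridFTP endpoints,
the kind of transfer that sits underneath SRM/dCache data movement.
The hostnames and paths are placeholders, not real OSG storage elements."""

import subprocess

SOURCE = "gsiftp://se1.example-site.edu/data/run42.dat"
DEST = "gsiftp://se2.example-lab.gov/scratch/run42.dat"

# globus-url-copy (shipped in the VDT) performs the transfer between the
# two endpoints; a valid grid proxy (e.g. from grid-proxy-init) is assumed.
subprocess.run(["globus-url-copy", SOURCE, DEST], check=True)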