SLAC Goals and SLAC Role in LHC Computing
Richard P. Mount
SLUO Meeting, July 17, 2009
Evaluating US ATLAS Needs
Data Production Tasks:
• BNL Tier 1 (+ Tier 2s) meet the pledged requirements.
Simulation:
• Could absorb all Tier 2 capacity and more; the physics impact of scaling back is not clear.
Analysis:
• The Computing Model allocates 20% to 50% of Tier 2 capacity to analysis.
• Is this realistic quantitatively?
• Is this realistic qualitatively (data-intensive analysis)?
• Could Tier 3s provide what appears to be missing?
US ATLAS T1 + T2 in 2010
[Charts: projected 2010 US ATLAS Tier 1 + Tier 2 CPU capacity (excluding Tier 3s), broken into Production CPU (Tier 1), Simulation CPU (Tier 2) and Analysis CPU (Tier 2), under two assumptions: analysis gets 20% of Tier 2, and analysis gets 50% of Tier 2.]
BaBar and CDF
[Charts: CPU usage breakdowns, excluding "Tier 3s". BaBar Tier As in 2006: Production CPU, Simulation CPU, Analysis CPU. CDF Fermilab site in 2008 (+ simulation done mainly at universities; thanks to Rick Snider): Analysis - Core, Analysis - Other, Simulation - Core, Simulation - Analysis-Specific, N-tupling, Production.]
Data-Intensive Analysis
1. BaBar (and Tevatron Run II) experience demonstrated a vital role for major "Tier-A" computer centers specializing in high-throughput, data-intensive work.
2. Physics demands chaotic, apparently random access to data. This is a hardware/software/management challenge.
3. Achieving high availability with a complex data-intensive workload is hard!
4. Running simulation at major data-intensive facilities fails to exploit them fully (BaBar ran most simulation on university facilities).
5. Significant university (Tier 3) facilities have always been very important for final-stage analysis.
SLAC Strengths
1. Pushing the envelope of Data Intensive Computing, e.g. Scalla/xrootd (in use at the SLAC T2; see the data-access sketch below);
2. Design and implementation of efficient and scalable computing systems (1000s of boxes);
3. Strongly supportive interactions with the university community (and 10 Gbit/s to Internet2).
Plus a successful ongoing computing operation:
• Multi-tiered, multi-petabyte storage
• ~10,000 CPU cores
• Space/power/cooling continuously evolving
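
To make item 1 concrete, here is a minimal sketch of what client-side data access through Scalla/xrootd looks like, using ROOT's TFile::Open on a root:// URL (a long-standing ROOT capability). The redirector hostname, file path and tree name are hypothetical placeholders, not SLAC or ATLAS specifics.

// Minimal sketch: client-side read of an xrootd-served file via ROOT.
// Hostname, path and tree name are hypothetical.
#include "TFile.h"
#include "TTree.h"
#include <iostream>

void read_over_xrootd() {
  // TFile::Open recognizes root:// URLs and routes the I/O through the
  // xrootd client, so remote files are read much like local ones.
  TFile *f = TFile::Open("root://xrootd.example.org//atlas/user/sample.root");
  if (!f || f->IsZombie()) {
    std::cerr << "could not open file over xrootd" << std::endl;
    return;
  }

  TTree *t = 0;
  f->GetObject("physics", t);   // hypothetical tree name
  if (t) {
    std::cout << "entries: " << t->GetEntries() << std::endl;
  }
  f->Close();
}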
SLAC Laboratory Goals
1. Maintain, strengthen and exploit the Core Competency in Data Intensive Computing;
2. Collaborate with universities, exploiting the complementary strengths of universities and SLAC.
ATLAS Western Analysis Facility
Concept
1. Focus on data-intensive analysis on a "major-HEP-computing-center" scale;
2. Flexible and Nimble to meet the challenge of rapidly evolving analysis needs;
3. Flexible and Nimble to meet the challenge of evolving technologies:
• Particular focus on the most effective role for solid-state storage (together with enhancements to data-access software);
4. Close collaboration with US ATLAS university groups:
• Make best possible use of SLAC-based and university-based facilities.
5. Coordinate with ATLAS Analysis Support Centers.
ATLAS Western Analysis Facility
Possible Timeline
1. Today: Interest from "Western" ATLAS university groups in co-locating ARRA-funded equipment at SLAC (under discussion).
2. 2010: Tests of various types of solid-state storage in various ATLAS roles (conditions DB, TAG access, PROOF-based analysis, xrootd access to AOD …). Collaborate with BNL and ANL. (A sketch of such a test follows this list.)
3. 2010: Re-evaluation of ATLAS analysis needs after experience with real data.
4. 2011 on: WAF implementation as part of the overall US ATLAS strategy.
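
As an illustration of the kind of test listed in item 2, this is the general shape of a PROOF-based analysis pass over xrootd-resident files in ROOT; it is not an ATLAS-specific test plan. The PROOF master, redirector, file names, tree name and selector are all hypothetical.

// Sketch: distributing a TSelector over a PROOF cluster, reading input
// ntuples through xrootd. All names below are hypothetical.
#include "TProof.h"
#include "TChain.h"

void proof_over_xrootd() {
  // Connect to a (hypothetical) PROOF master; workers are enrolled
  // according to the cluster configuration.
  TProof::Open("proof://proof-master.example.org");

  // Chain together ntuple files served by an xrootd redirector.
  TChain chain("physics");   // hypothetical tree name
  chain.Add("root://xrootd.example.org//atlas/ntup/run1.root");
  chain.Add("root://xrootd.example.org//atlas/ntup/run2.root");

  // Hand the chain to PROOF and run a selector on the workers.
  chain.SetProof();
  chain.Process("MySelector.C+");   // hypothetical selector source
}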
ATLAS Western Analysis Facility
Ideas on Hosting Costs and Terms
Model the power bill + infrastructure maintenance + support labor for $1M of CPU/Disk/Network equipment (worked through below):
 Approximately $75k + $25k + $50k per year (~15% of $1M)
Total 4-year cost for a $1M system = $1M + $0.6M
Two (of the) possible models for hosting:
1. Cash economy: University pays SLAC ~15% of the purchase price each year;
2. Barter economy: University allows SLAC to use ~36% of the hardware for the SLAC HEP Program (ATLAS, FGST …).
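
Spelling out the arithmetic behind these numbers (the last step, which reads the ~36% barter fraction as the hosting cost's share of the total 4-year cost, is an interpretation, not something stated on the slide):

\[
\underbrace{\$75\text{k}}_{\text{power}}
+ \underbrace{\$25\text{k}}_{\text{infrastructure}}
+ \underbrace{\$50\text{k}}_{\text{labor}}
= \$150\text{k/yr} \approx 15\% \times \$1\text{M}
\]
\[
\text{4-year total} = \$1\text{M} + 4 \times \$0.15\text{M} = \$1.6\text{M}
\]
\[
\frac{\text{hosting}}{\text{total}} = \frac{\$0.6\text{M}}{\$1.6\text{M}} = 0.375,
\quad\text{i.e. close to the quoted }{\sim}36\%.
\]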
Hosting at BNL or SLAC
Next Steps
1. BNL and SLAC should agree on a common methodology for hosting-cost calculation and its presentation to potential customers;
2. Get DOE blessing (informal reactions from DOE are "OK if nobody asks us for more money for anything").