ATLAS Software Installation at UIUC
“Mutability is immutable.” - Heraclitus ~400 B.C.E.
D. Errede, M. Neubauer
Goals:
1) ability to analyze data locally (with constraints)
2) ability to connect to other machines around the world
3) ability to contribute to the detector commissioning and calibration locally
4) ability to store reasonable quantities of data for ease of analysis
Status:
1) - not completely.
2) - not completely.
3) - presently done at CERN (Irene Vichou); an effort is under way to make contributions locally (see S. Errede's talk).
4) - in discussion with the local UIUC computing center on access to available disk space.
Ability to analyze data locally
1) Accomplished:
* Installation of ATLAS release software locally, also installed per machine (there are restrictions on NFS-type installation); this reduces disk-space usage.
* ROOT software for ntuple/tree analysis exists. We do not expect to be able to generate large data sets of complete Monte Carlo data locally, because of the well-known restrictions on generation time. Perhaps fast MC data can be generated locally (?).
* Installation of the Scientific Linux 3 and 4 platform on several local linux machines.
* Installation of the Open Science Grid User Interface, allowing access to large farms (as clients, not as a Computing Element). Software installed and simple Condor submission tested (a minimal submission sketch follows this list); however, not all local idiosyncrasies are understood yet. Installed per user, not globally for the group, yet.
* Installation of dq2 software to access data files from the OSG.
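To illustrate the kind of simple Condor submission tested above, the sketch below writes a minimal submit description and hands it to condor_submit. The executable name, file names, and settings are placeholders, not our actual test job.

# Sketch: submit a trivial job through the locally installed OSG/Condor client.
# Assumes condor_submit is on the PATH; "run_test.sh" is a placeholder executable.
import subprocess
import textwrap

submit_description = textwrap.dedent("""\
    universe   = vanilla
    executable = run_test.sh
    output     = test_$(Cluster).$(Process).out
    error      = test_$(Cluster).$(Process).err
    log        = test_$(Cluster).log
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    queue 1
""")

with open("test.sub", "w") as f:
    f.write(submit_description)

# condor_submit prints the assigned cluster id on success.
subprocess.run(["condor_submit", "test.sub"], check=True)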
Ability to analyze data locally
Not accomplished:
* Installation of the OSG UI centrally for all users.
Ability to connect to other machines around the world
2) Accomplished:
* Security issues related to logging into CERN lxplus machines resolved (LCG access and dq2 are working; running ROOT remotely is prohibitively slow), as well as SLAC and BNL linux machines (a sketch of the proxy/dq2 steps follows this list).
* OSG UI installation allowing access to large OSG farms; tested with simple jobs but not yet fully exercised.
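A minimal sketch, assuming a grid proxy tool (voms-proxy-init / voms-proxy-info) and a dq2 listing command are on the PATH, of the connectivity steps referred to above; the exact dq2 command names and the dataset name are assumptions and may differ from the locally installed client.

# Sketch: obtain a grid proxy and list the files in an ATLAS dataset with the
# dq2 client. The command names (voms-proxy-*, dq2-ls) and the dataset name
# are assumptions; substitute whatever the local installation provides.
import subprocess

def have_valid_proxy():
    # voms-proxy-info -exists returns 0 only if an unexpired proxy is present.
    result = subprocess.run(["voms-proxy-info", "-exists"], capture_output=True)
    return result.returncode == 0

if not have_valid_proxy():
    # Prompts for the grid-certificate pass phrase.
    subprocess.run(["voms-proxy-init", "-voms", "atlas"], check=True)

# List the contents of a (placeholder) dataset known to the dq2 catalog.
subprocess.run(["dq2-ls", "-f", "some.atlas.dataset.name"], check=True)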
Ability to contribute to the detector commissioning and calibration locally
3) See Steve Errede's talk. Presumably dealing with ROOT ntuples is the primary requirement here. ROOT exists and is usable (a minimal PyROOT sketch follows).
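As a concrete example of dealing with ROOT ntuples, the sketch below opens a file and loops over a TTree with PyROOT; the file name, tree name, and branch name are placeholders, not the actual commissioning ntuple contents.

# Sketch: loop over an ntuple/TTree with PyROOT and fill a histogram.
# "ntuple.root", "CollectionTree", and the branch "muon_pt" are placeholders.
import ROOT

f = ROOT.TFile.Open("ntuple.root")
tree = f.Get("CollectionTree")

hist = ROOT.TH1F("h_pt", "muon pT;pT [GeV];entries", 100, 0.0, 200.0)

for event in tree:               # PyROOT iterates over the tree entries directly
    hist.Fill(event.muon_pt)     # branch name is an assumption

print("entries processed:", tree.GetEntries())
f.Close()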
Ability to store reasonable quantities of data for ease of analysis
4) Discussion with CITES and NCSA on access to some fraction of a petabyte of disk. NCSA has resources that we can perhaps tap into in the future. We already have a 10 Gbit/s connection to CITES and from there to Chicago's "hub" (see the back-of-the-envelope transfer estimate below).
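A back-of-the-envelope estimate of what the 10 Gbit/s link means for moving analysis-sized datasets; the 30% effective-utilization factor is an assumption, not a measurement.

# Sketch: how long does it take to move 1 TB over the 10 Gbit/s link?
# The 30% effective-utilization factor is an assumption, not a measurement.
link_gbps = 10.0                  # nominal link speed, gigabits per second
efficiency = 0.30                 # assumed effective utilization
dataset_tb = 1.0                  # dataset size in terabytes

dataset_bits = dataset_tb * 8e12  # 1 TB = 8e12 bits
seconds = dataset_bits / (link_gbps * 1e9 * efficiency)
print("~%.0f minutes per TB" % (seconds / 60.0))  # roughly 44 minutes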
ARE WE DONE?
ARE WE CLOSE TO BEING DONE?
NO.
We will be establishing a systematic analysis effort locally.
(see Mark Neubauer’s talk)
-------------------------------------------------------------------------------------------------------------
The HECA State of Illinois grant (D. Errede, P.I.) contributed over $200k toward computing equipment etc. from 2000-2004, which will be used by the ATLAS group here, though in limited fashion due to memory constraints: 16 dual-processor boxes for muon collaboration use.
-------------------------------------------------------------------------------------------------------------
Next project:
* Setting up 3 linux machines locally (OSG00, 01, X0) with SL3/4 platforms on which to install ATLAS software for testing. The software changes and is updated constantly, hence we will be starting with a 'clean slate' on which to work. We know that some software doesn't work 'on top of' other software, for instance. Important to this effort will be the authority to use root privileges on our machines.
The addition of Mark Neubauer to this effort is a tremendous contribution, given his background in installing the Condor batch queue system at CDF, his knowledge of other required languages such as Python, and, as can be seen below, his good ideas for the next step/project.
Overall comment: because of the evolving nature of the ATLAS software, what is easy at the present time would have been quite difficult earlier on.
“The world according to Mark”
Toward a Tier-3 @ UIUC: The “Problem”
• The scale of computing requirements for the LHC experiments is unprecedented in HEP’s history
  - ATLAS: ~3×10³ collaborators spread across the globe
  - 3 PB / year RAW + 1 PB / year ESD
• Much progress has been made on the pieces necessary to get ATLAS physics done on globally distributed computing (GRID)
  - Many complex pieces to handle authentication, VOs, job submission/handling/monitoring, data handling, etc.
• The system needs to be exercised from the perspective of a physicist “just trying to get their physics done” (i.e. an end user)
  - Convenient and flexible interface to authentication, dataset creation/consumption, job submission, monitoring, output retrieval (a sketch of this end-user workflow follows this slide)
  - Support and reliability of service, deserving of a production system
• These goals have not yet been fully achieved in ATLAS but need to be for the experiment to be successful!
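To make the end-user perspective concrete, here is a high-level sketch of the workflow such an interface would have to cover, with each step delegated to the kinds of tools discussed earlier; every call and name below is a placeholder, not an official ATLAS interface.

# Sketch of the end-to-end workflow an end user needs: authenticate, locate a
# dataset, submit jobs, retrieve output. Each step simply wraps whatever tool
# (proxy client, dq2, Condor/GRID submission) is available locally; none of
# these calls is an official ATLAS interface.
import subprocess

def authenticate():
    subprocess.run(["voms-proxy-init", "-voms", "atlas"], check=True)

def locate_dataset(name):
    subprocess.run(["dq2-ls", "-f", name], check=True)

def submit_jobs(submit_file):
    subprocess.run(["condor_submit", submit_file], check=True)

def retrieve_output(job_log):
    # In a simple Condor setup the output comes back via file transfer;
    # the log file name here is only a stand-in.
    print("check", job_log, "and the transferred output files")

if __name__ == "__main__":
    authenticate()
    locate_dataset("some.atlas.dataset.name")  # placeholder dataset name
    submit_jobs("analysis.sub")                # placeholder submit file
    retrieve_output("analysis.log")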
Toward a Tier-3 @ UIUC: A “Solution”?
• Deploy a “large” computing cluster configured as a Tier-3 and operated as a model Tier-3 in terms of reliability and utilization for doing physics @ a home institution
• Q: How large is “large”? A: Large enough for the ATLAS computing organization to pay attention and for UIUC to get our physics done. Could be as large as most Tier-2 sites.
Why do this at UIUC?
• We have an enormous amount of high-end computing infrastructure and technical expertise to pull this off!
• Infrastructure includes HEP & MRL computing, NCSA, and the recent availability of a 10 Gbit/s network pipe to Chicago (ATLAS Tier-2 Center)
• Technical expertise includes the HEP group, networking experts, and a new addition: Mark Neubauer

Neubauer: at MIT, then UCSD with F. Wurthwein:
• Led the complete redesign of CDF analysis computing -> the CDF Analysis Facility (CAF)
• CAF Project Leader (2001-2004)
• Involved in the subsequent migration to Condor and the utilization of offsite computing (one of, if not the, first operating GRIDs in HEP)
Goals:
• Drive ATLAS into a successful computing model from the analysis end
• Get our physics done, and strengthen our collaborations (e.g. FTK)
Toward a Tier-3 @ UIUC: A Prototype
• Have recently assembled a prototype system to begin work on an ATLAS Tier-3 @ UIUC (many thanks to Dave Lesny)
• 3 dual-processor Xeon boxes, 2 GB of memory, coming out of existing equipment.
Toward a Tier-3 @ UIUC
[Figure: Proto-CAF (Oct 2001); FNAL CAF + 9 DCAFs (now); Proto-Tier-3 @ UIUC (picture to be inserted)]
Toward a Tier-3 @ UIUC
“The time to repair the roof is when the sun is shining” – JFK