ATLAS Tier 2 Plans


Computing development projects: GRIDS
M. Turala
The Henryk Niewodniczanski Institute of Nuclear Physics PAN
and the Academic Computing Center Cyfronet AGH, Kraków
Warszawa, 25 February 2005
Outline
- computing requirements of the future HEP experiments
- HEP world-wide computing models and related grid projects
- Polish computing projects: PIONIER and GRIDS
- Polish participation in the LHC Computing Grid (LCG) project
LHC data rate and filtering
Data preselection in real time:
- many different physics processes
- several levels of filtering
- high efficiency for events of interest
- total reduction factor of about 10^7
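The quoted reduction factor can be cross-checked against the rates given on the next slide (a minimal illustrative arithmetic sketch):

    # Rough check of the overall on-line reduction factor (illustrative only;
    # the rates are those quoted on the following slide)
    nominal_rate_hz = 1e9     # ~10^9 interactions per second at design luminosity
    recorded_rate_hz = 100.0  # ~100 events per second written to storage
    print(f"total reduction factor ~ {nominal_rate_hz / recorded_rate_hz:.0e}")  # -> 1e+07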
Data rate for LHC p-p events
Typical parameters:
- Nominal rate: ~10^9 events/s (luminosity 10^34 cm^-2 s^-1, collision rate 40 MHz)
- Registration rate: ~100 events/s (270 events/s)
- Event size: ~1 MB/event (2 MB/event)
- Running time: ~10^7 s/year
- Raw data volume: ~2 PB/year/experiment
- Monte Carlo data: ~1 PB/year/experiment
The rate and volume of HEP data double every 12 months!
Already today the BaBar, Belle, CDF and D0 experiments produce 1 TB/day.
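As a quick consistency check of the quoted volumes (a sketch assuming the figures above and the nominal 10^7 s of running per year):

    # Illustrative estimate of the yearly raw-data volume per experiment
    rate_hz = 200             # registration rate, between the quoted ~100 and 270 events/s
    event_size_bytes = 1e6    # ~1-2 MB per event
    seconds_per_year = 1e7    # effective running time per year
    volume_pb = rate_hz * event_size_bytes * seconds_per_year / 1e15
    print(f"raw data ~ {volume_pb:.0f} PB/year")  # -> ~2 PB/year, as quoted above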
Data analysis scheme
[Data-flow diagram, from M. Delfino: detector -> event filter (selection & reconstruction, 35K SI95, 0.1-1 GB/s out of 1-100 GB/s in) -> raw data (1 PB/year) -> event reconstruction (250K SI95, ~100 MB/s) -> event summary data (200 TB/year, ~200 MB/s) -> batch physics analysis (350K SI95) -> processed data (analysis objects) -> interactive data analysis by thousands of scientists; event simulation (500 TB) provides an additional input.]
Multi-tier model of data analysis
LHC computing model (Cloud)
[Diagram of the multi-tier ("cloud") model: the CERN Tier0/CERN computing centre in the middle, surrounded by Tier1 regional centres (USA: Brookhaven and FermiLab, UK, France, Italy, NL, Germany, ...), which in turn serve Tier2 centres, university groups (Uni a ... Uni n, Uni x, Uni y, Uni b), laboratories (Lab a, Lab b, Lab c, Lab m) and physics-department desktops.]
ICFA Network Task Force (1998): required network bandwidth (Mbps)

                                                         1998         2000        2005
BW Utilized Per Physicist (and Peak BW Used)             0.05-0.25    0.2-2       0.8-10
                                                         (0.5-2)      (2-10)      (10-100)
BW Utilized by a University Group                        0.25-10      1.5-45      34-622
BW to a Home Laboratory or Regional Center               1.5-45       34-155      622-5000
BW to a Central Laboratory Housing Major Experiments     34-155       155-622     2500-10000
BW on a Transoceanic Link                                1.5-20       34-155      622-5000

100-1000x bandwidth increase foreseen for 1998-2005
See the ICFA-NTF Requirements Report:
http://l3www.cern.ch/~newman/icfareq98.html
LHC computing – specifications for Tier0 and Tier1

CERN (Tier0)           ALICE    ATLAS    CMS      LHCb
CPU (kSI95)            824      690      820      225
Disk Pool (TB)         535      410      1143     330
Aut. Tape (TB)         3200     8959     1540     912
Shelf Tape (TB)        -        -        2632     310
Tape I/O (MB/s)        1200     800      800      400
Cost 2005-7 (MCHF)     18.1     23.7     23.1     7.0

Tier1                  ALICE    ATLAS    CMS      LHCb
CPU (kSI95)            234      209      417      140
Disk Pool (TB)         273      360      943      150
Aut. Tape (TB)         400      1839     590      262
Shelf Tape (TB)        -        -        683      55
Tape I/O (MB/s)        1200     800      800      400
# Tier1                4        6        5        5
Cost av (MCHF)         7.1      8.5      13.6     4.0
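If "Cost av" is read as the average cost of a single Tier1 centre (an interpretation, not stated on the slide), the table implies rough totals along these lines:

    # Rough cost totals implied by the table, assuming "Cost av" is the average
    # cost of one Tier1 centre (an assumption made for illustration)
    tier0_mchf = {"ALICE": 18.1, "ATLAS": 23.7, "CMS": 23.1, "LHCb": 7.0}
    tier1 = {"ALICE": (4, 7.1), "ATLAS": (6, 8.5), "CMS": (5, 13.6), "LHCb": (5, 4.0)}  # (# Tier1, avg MCHF)
    total = sum(tier0_mchf.values()) + sum(n * cost for n, cost in tier1.values())
    print(f"Tier0 plus all Tier1 centres: ~{total:.0f} MCHF for 2005-2007")  # -> ~239 MCHF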
Development of Grid projects
EU FP5 Grid Projects 2000-2004
from M. Lemke at CGW04
(EU Funding: 58 M€)

[Timeline chart of FP5 grid projects (DataGrid, EuroGrid, DAMIEN, DataTAG, CrossGrid, GridLab, AVO and others) starting 1/10/2000, 1/10/2001 and 1/10/2002, grouped as:]
- Infrastructure: DataTAG
- Computing: EuroGrid, DataGrid, DAMIEN
- Tools and Middleware: GridLab, GRIP
- Applications: EGSO, CrossGrid, BioGrid, FlowGrid, Moses, COG, GEMSS, Grace, MammoGrid, OpenMolGrid, SeLeNe
- P2P / ASP / Webservices: P2People, ASP-BP, GRIA, MMAPS, GRASP, GRIP, WEBSI
- Clustering: GridStart
Strong Polish Participation in FP5 Grid Research Projects
from M. Lemke at CGW04

2 Polish-led projects (out of 12):
- CrossGrid: CYFRONET Cracow, ICM Warsaw, PSNC Poznan, INP Cracow, INS Warsaw
- GridLab: PSNC Poznan

[Map of CROSSGRID partners]

Significant share of funding to Poland versus EU25:
- FP5 IST Grid Research Funding: 9.96%
- FP5 wider IST Grid Project Funding: 5%
- GDP: 3.8%
- Population: 8.8%
CrossGrid testbeds
16 sites in 10 countries, about 200 processors and 4 TB of disk storage
Middleware: from EDG 1.2 to LCG-2.3.0
Testbeds for:
- development
- production
- testing
- tutorials
- external users
Last week CrossGrid successfully concluded its final review.
CrossGrid applications
- Medical: blood-flow simulation, supporting vascular surgeons in the treatment of arteriosclerosis
- Flood: flood prediction and simulation based on weather forecasts and geographical data
- Physics: distributed data mining in high-energy physics, supporting the LHC collider experiments at CERN
- Meteo / Pollution: large-scale weather forecasting combined with air-pollution modelling (for various pollutants)
Grid for real time data filtering
Studies on the possible use of remote computing farms for event filtering; in 2004 beam-test data were shipped to Cracow and back to CERN in real time.
LHC Computing Grid project (LCG)
Objectives
- design, prototyping and implementation of the computing environment for the LHC experiments (MC, reconstruction and data analysis):
  - infrastructure
  - middleware
  - operations (VO)
Schedule
- phase 1 (2002-2005, ~50 MCHF): R&D and prototyping (up to 30% of the final size)
- phase 2 (2006-2008): preparation of a Technical Design Report and Memoranda of Understanding, deployment (2007)
Coordination
- Grid Deployment Board: representatives of the world HEP community, supervising LCG grid deployment and testing
From F. Gagliardi at CGW04
Computing Resources – Dec. 2004
[World map marking countries providing resources and countries anticipating joining EGEE/LCG]
In EGEE-0 (LCG-2):
- 91 sites
- >9000 CPUs
- ~5 PB storage
Three Polish institutions are involved:
- ACC Cyfronet Cracow
- ICM Warsaw
- PSNC Poznan
Polish investment covers the local infrastructure; EGEE supports the operations.
Polish Participation in LCG project
Polish Tier2
- INP / ACC Cyfronet Cracow
  - resources (plans for 2004):
    - 128 processors (50%)
    - storage: disk ~10 TB, tape (UniTree) ~10 TB (?)
  - manpower: engineers/physicists ~1 FTE + 2 FTE (EGEE)
  - ATLAS data challenges – qualified in 2002
- INS / ICM Warsaw
  - resources (plans for 2004):
    - 128 processors (50%)
    - storage: disk ~10 TB, tape ~10 TB
  - manpower: engineers/physicists ~1 FTE + 2 FTE (EGEE)
  - connected to the LCG-1 world-wide testbed in September 2003
Polish networking - PIONIER
[Map of the PIONIER fibre network with its MAN nodes and links towards GEANT, Stockholm and Prague; legend: installed fibre, PIONIER nodes, fibres and nodes planned in 2004]
5200 km of fibre installed, connecting 21 MAN centres
Good connectivity of the HEP centres to the MANs:
- IFJ PAN to MAN Cracow: 100 Mb/s -> 1 Gb/s
- INS to MAN Warsaw: 155 Mb/s
Multi-lambda connections are planned.
from the report of PSNC to ICFA ICIC, Feb. 2004 (M. Przybylski)
PC Linux cluster at ACC Cyfronet
CrossGrid – LCG-1
- 4 nodes (1U, 2x PIII 1 GHz, 512 MB RAM, 40 GB HDD, 2x Fast Ethernet 100 Mb/s)
- 23 nodes (1U, 2x Xeon 2.4 GHz, 1 GB RAM, 40 GB HDD, Ethernet 100 Mb/s + 1 Gb/s)
- HP ProCurve switch: 40 ports 100 Mb/s, 1 port 1 Gb/s (uplink to the Internet)
- monitoring: 1U KVM unit (keyboard, touch pad, LCD)
Last year 40 nodes with IA-64 processors were added; for 2005, investments in 140 Linux 32-bit processors and 20-40 TB of disk storage are planned.
ACC Cyfronet in LCG-1
Sept. 2003: sites taking part in the initial LCG service (red dots)
[World map of the initial LCG-1 sites, including Kraków, Poland and Karlsruhe, Germany]
Small test clusters at 14 institutions and a grid middleware package (mainly parts of EDG and VDT) -> a global Grid testbed
from K-P. Mickel at CGW03:
"This is the very first really running global computing and data Grid, which covers participants on three continents."
Linux Cluster at INS/ICM
CrossGrid – EGEE – LCG
PRESENT STATE
- cluster at the Warsaw University (Physics Department)
- Worker Nodes: 10 CPUs (Athlon 1.7 GHz)
- Storage Element: ~0.5 TB
- Network: 155 Mb/s
- LCG 2.3.0, registered in the LCG Test Zone
NEAR FUTURE (to be ready in June 2005)
- cluster at the Warsaw University (ICM)
- Worker Nodes: 100-180 CPUs (64-bit)
- Storage Element: ~9 TB
- Network: 1 Gb/s (PIONIER)
from K. Nawrocki
PC Linux Cluster at ACC Cyfronet
CrossGrid – EGEE – LCG-1
LCG cluster at ACC Cyfronet: statistics for 2004

            CPU time [s]    Walltime [s]    CPU time [h]    Walltime [h]
ATLAS       73,775,472      90,001,024      20,493.2        25,000.3
ALICE       8,379,585       12,868,726      2,327.7         3,574.6
LHCb        37,698,467      42,027,750      10,471.8        11,674.4
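The hours in the table are simply the seconds divided by 3600; a small sketch of the conversion, which also shows the CPU-time to wall-time ratio per experiment:

    # Convert the 2004 accounting figures from seconds to hours and compute
    # the CPU-time / wall-time ratio (figures taken from the table above)
    usage_seconds = {                  # (CPU time, walltime) in seconds
        "ATLAS": (73_775_472, 90_001_024),
        "ALICE": (8_379_585, 12_868_726),
        "LHCb":  (37_698_467, 42_027_750),
    }
    for vo, (cpu, wall) in usage_seconds.items():
        print(f"{vo}: {cpu/3600:,.0f} CPU-h, {wall/3600:,.0f} wall-h, ratio {cpu/wall:.0%}")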
ATLAS DC Status
DC2 Phase I started at the beginning of July and is finishing now.
3 Grids were used:
- LCG (~70 sites, up to 7600 CPUs)
- NorduGrid (22 sites, ~3280 CPUs (800), ~14 TB)
- Grid3 (28 sites, ~2000 CPUs)
ATLAS DC2 CPU usage: LCG 41%, NorduGrid 30%, Grid3 29%
[Pie charts: "ATLAS DC2 - CPU usage" and "ATLAS DC2 - LCG - September 7", the latter giving per-site shares for about 30 LCG sites, from at.uibk to uk.rl]
In total:
- ~1350 kSI2k-months
- ~120,000 jobs
- ~10 million events fully simulated (Geant4)
- ~27 TB of data
from L. Robertson
at C-RRB, Oct. 2004
All three Grids have proven usable for real production.
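For a sense of scale, simple division of the totals quoted above gives rough averages (an illustrative sketch only):

    # Rough per-job and per-event averages implied by the DC2 totals above
    events = 10e6       # ~10 million fully simulated events
    jobs = 120_000      # ~120,000 jobs
    data_tb = 27        # ~27 TB produced
    print(f"~{events / jobs:.0f} events per job, ~{data_tb * 1e6 / events:.1f} MB per event")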
Polish LHC Tier2 - future
„In response to the LCG MoU draft document, and using data from the PASTA report, plans for the Polish Tier2 infrastructure have been prepared; they are summarized in the table below.
                     2001    2002    2003    2004    2005    2006    2007    2008    2009    2010
CPU (kSI2000)        -       4       20      40      100     150     500     1000    1500    2000
Disk, LHC (TBytes)   -       1       5       10      20      30      100     300     500     600
Tape, LHC (TBytes)   -       5       5       10      10      20      50      100     180     180
WAN (Mbits/s)        34      155     622     10000   10000   10000   10000   20000   20000   20000
Manpower (FTE)       -       2       3       4       4       5       6       6       6       6
It is planned that in the next few years the LCG resources will grow incrementally, mainly due to local investments. A step is expected around 2007, when the matter of LHC computing funding should finally be resolved."
from the report to LCG GDB, 2004
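For scale, the overall growth implied by the plan can be read from the first and last planned values of each row (a small illustrative sketch):

    # Overall growth implied by the Polish Tier2 plan, using the first and last
    # planned values of each row in the table above (illustrative only)
    plan = {
        "CPU (kSI2000)":      (4, 2000),
        "Disk, LHC (TBytes)": (1, 600),
        "Tape, LHC (TBytes)": (5, 180),
    }
    for name, (first, last) in plan.items():
        print(f"{name}: ~{last / first:.0f}x growth over the plan period")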
Thank you for your attention