GridPP - UK Computing for Particle Physics

E-Science and LCG-2
PPAP Summary
• Results from GridPP1/LCG1
• Value of the UK contribution to LCG?
• Aims of GridPP2/LCG2
• UK special contribution to LCG2?
• How much effort will be needed to continue activities during the LHC era?
Tony Doyle - University of Glasgow
PPAP, 26 October 2004
Outline
1. What has been achieved in GridPP1? [7']
• GridPP I (09/01-08/04): Prototype - complete
2. What is being attempted in GridPP2? [6']
• GridPP II (09/04-08/07): Production - short timescale
• What is the value of a UK LCG Phase-2 contribution?
3. Resources needed in the medium-long term? [10']
• (09/07-08/10): Exploitation - medium term
• Focus on resources needed in 2008
• (09/10-08/14): Exploitation - long term
Executive Summary
• Introduction - the Grid is a reality
• Project Management - a project was/is needed (under control via Project Map)
• Resources - deployed according to planning
• CERN - Phase 1 .. Phase 2
• Middleware - prototype(s) made impact
• Applications - fully engaged (value added)
• Tier-1/A - Tier-1 production mode
• Tier-2 - resources now being utilised
• Dissemination - UK flagship project
• Exploitation - preliminary planning
Ref: http://www.gridpp.ac.uk/
GridPP Deployment Status
GridPP deployment is part of LCG (currently the largest Grid in the world).
The future Grid in the UK is dependent upon LCG releases.
Three Grids on a global scale in HEP (similar functionality):
• LCG: 82 sites (14 GridPP), 7300 CPUs (1500 GridPP)
• Grid3 [USA]: 29 sites, 2800 CPUs
• NorduGrid: 30 sites, 3200 CPUs
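A quick arithmetic check of the UK share of LCG implied by these numbers (a minimal Python sketch using only the site and CPU figures listed above; it is consistent with the "UK ~20% of LCG" figure quoted on a later slide):

```python
# Rough check of the UK (GridPP) share of LCG implied by the list above.
lcg_sites, gridpp_sites = 82, 14
lcg_cpus, gridpp_cpus = 7300, 1500

print(f"UK share of LCG sites: {gridpp_sites / lcg_sites:.0%}")  # ~17%
print(f"UK share of LCG CPUs:  {gridpp_cpus / lcg_cpus:.0%}")    # ~21%
```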
Deployment Status (26/10/04)
• Incremental releases: significant improvements in reliability, performance and scalability
– within the limits of the current architecture
– scalability is much better than expected a year ago
• Many more nodes and processors than anticipated
– installation problems of last year overcome
– many small sites have contributed to MC productions
• Full-scale testing as part of this year's data challenges
• GridPP "The Grid becomes a reality" - widely reported
[Screenshots: technology and site coverage, including British Embassy (USA) and British Embassy (Russia) web pages]
Data Challenges
• Ongoing..
• Grid and non-Grid production
• Grid now significant
• ALICE - 35 CPU years: Phase 1 done, Phase 2 ongoing
• CMS - 75 M events and 150 TB: first of this year's Grid data challenges (LCG)
Entering Grid Production Phase..
ATLAS Data Challenge
• ATLAS DC2 - LCG - September 7
• 7.7 M GEANT4 events and 22 TB
• UK ~20% of LCG
• Ongoing..
• Production across the 3 Grids
• ~150 CPU years so far
• Largest total computing requirement
• Small fraction of what ATLAS need..
Entering Grid Production Phase..
[Pie chart: ATLAS DC2 CPU usage by site - LCG 41%, NorduGrid 30%, Grid3 29%; sites include CERN, TRIUMF, FZK, IN2P3, INFN, NIKHEF and the UK sites uk.bham, uk.ic, uk.lancs, uk.man and uk.rl. Totals: ~1350 kSI2k.months, ~95,000 jobs, ~7.7 million events fully simulated (Geant4), ~22 TB.]
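A rough back-of-envelope reading of the DC2 totals above (a sketch only; the 30-day month and the ~1 kSI2k-per-CPU conversion are assumptions, not figures from the slide):

```python
# Back-of-envelope reading of the ATLAS DC2 totals quoted above.
# Assumptions (not from the slide): a 30-day month and ~1 kSI2k per typical CPU.
events = 7.7e6           # fully simulated Geant4 events
data_tb = 22.0           # TB written
cpu_ksi2k_months = 1350  # total CPU used
jobs = 95_000

mb_per_event = data_tb * 1e6 / events          # ~2.9 MB/event
events_per_job = events / jobs                 # ~80 events/job
seconds_per_month = 30 * 24 * 3600
ksi2k_s_per_event = cpu_ksi2k_months * seconds_per_month / events

print(f"{mb_per_event:.1f} MB/event, {events_per_job:.0f} events/job")
print(f"~{ksi2k_s_per_event:.0f} kSI2k.s/event "
      f"(~{ksi2k_s_per_event/60:.0f} min on a ~1 kSI2k CPU)")
```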
LHCb Data Challenge
• 424 CPU years (4,000 kSI2k.months), 186 M events
• UK's input significant (>1/4 of total) - see the sketch below
• LCG(UK) resource:
– Tier-1 7.7%
– Tier-2 sites: London 3.9%, South 2.3%, North 1.4%
• DIRAC:
– Imperial 2.0%, L'pool 3.1%, Oxford 0.1%, ScotGrid 5.1%
Entering Grid Production Phase..
[Plot: 186 M events produced in Phase 1, running at 3-5 x 10^6 events/day overall and 1.8 x 10^6/day with LCG in action; phases marked "DIRAC alone", "LCG in action", "LCG paused", "LCG restarted", "Completed".]
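The ">1/4 of total" statement can be checked directly by summing the UK shares quoted above (a minimal sketch):

```python
# Sum of the UK contributions quoted on this slide.
uk_shares = {
    "Tier-1 (LCG)": 7.7, "London (LCG)": 3.9, "South (LCG)": 2.3, "North (LCG)": 1.4,
    "Imperial (DIRAC)": 2.0, "L'pool (DIRAC)": 3.1, "Oxford (DIRAC)": 0.1,
    "ScotGrid (DIRAC)": 5.1,
}
total = sum(uk_shares.values())
print(f"UK total: {total:.1f}% of produced events")  # ~25.6%, i.e. just over 1/4
```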
Paradigm Shift
Transition to Grid… 424 CPU years in total:
• May: 89% : 11% (11% of DC'04)
• Jun: 80% : 20% (25% of DC'04)
• Jul: 77% : 23% (22% of DC'04)
• Aug: 27% : 73% (42% of DC'04)
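Taking the second number in each monthly pair as the Grid share and the "x% of DC'04" figures as monthly weights (an assumption consistent with the "transition to Grid" message; the weights do sum to 100%), the overall Grid fraction of the 424 CPU-year production works out as follows:

```python
# Overall Grid share of the production, weighting each month by its DC'04 fraction.
months = {            # (weight of DC'04, Grid share)
    "May": (0.11, 0.11),
    "Jun": (0.25, 0.20),
    "Jul": (0.22, 0.23),
    "Aug": (0.42, 0.73),
}
overall = sum(weight * grid for weight, grid in months.values())
print(f"Overall Grid share: {overall:.0%}")  # ~42% of the data challenge
```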
What was GridPP1?
• A team that built a working prototype grid of significant scale:
– > 1,500 (7,300) CPUs
– > 500 (6,500) TB of storage
– > 1,000 (6,000) simultaneous jobs
GridPP Goal: To develop and deploy a large scale science Grid in the UK for the use of the Particle Physics community
• A complex project where 82% of the 190 tasks for the first three years were completed
[GridPP1 Project Map (status date 1-Jan-04): 1 CERN (LCG Creation, Applications, Fabric, Technology, Deployment); 2 DataGrid (WP1-WP8); 3 Applications (ATLAS, ATLAS/LHCb, LHCb, CMS, BaBar, CDF/D0, UKQCD, Other); 4 Infrastructure (Tier-A, Tier-1, Tier-2, Testbed, Rollout, Data Challenges); 5 Interoperability (Int. Standards, Open Source, Worldwide Integration, UK Integration); 6 Dissemination (Presentation, Participation, Engagement); 7 Resources (Deployment, Monitoring, Developing). Tasks tracked with per-metric status: metric OK / metric not OK / task complete / task overdue / due within 60 days / not due soon / not active.]
Aims for GridPP2? From Prototype to Production
• 2001 - separate experiments, resources, multiple accounts: BaBarGrid, BaBar, CDF, ATLAS, ALICE, LHCb, CMS; CERN Computer Centre, RAL Computer Centre, 19 UK Institutes
• 2004 - prototype Grids: EGEE, SAMGrid (D0), GANGA, EDG, ARDA, LCG; CERN Prototype Tier-0 Centre, UK Prototype Tier-1/A Centre, 4 UK Prototype Tier-2 Centres
• 2007 - 'One' Production Grid: LCG; CERN Tier-0 Centre, UK Tier-1/A Centre, 4 UK Tier-2 Centres
Planning: GridPP2 ProjectMap
GridPP2 Goal: To develop and deploy a large scale production quality grid in the UK for the use of the Particle Physics community

[GridPP2 Project Map: areas include LCG (Tier-A/Tier-1, Tier-2, Deployment, Applications); M/S/N (Metadata, Data & Storage Management, Workload Management, Security, Information & Monitoring, Network); LHC Apps (ATLAS, CMS, LHCb, Ganga); Non-LHC Apps (BaBar, CDF, D0/SAMGrid, UKQCD, PhenoGrid, Portal); Production Grid (Grid Deployment, Grid Operations, Grid Technology, Computing Fabric, Experiment Support, Middleware Support, LHC Deployment); Management (Planning); External (Dissemination, Knowledge Transfer, Interoperability, Engagement)]
Need to recognise future requirements in each area…
Tier 0 and LCG: Foundation Programme
• Aim: build upon Phase 1
• Ensure development programmes are linked
• Project management: GridPP - LCG
• Shared expertise
F. LHC Computing Grid Project (LCG Phase 2) [review]
• LCG establishes the global computing infrastructure
• Allows all participating physicists to exploit LHC data
• Earmarked UK funding being reviewed
Required Foundation: LCG Deployment
Tier 0 and LCG: RRB meeting today
• Jos Engelen proposal to RRB members (Richard Wade [UK]) on how a 20 MCHF shortfall for LCG Phase 2 can be funded
• Funding from UK (£1m), France, Germany and Italy for 5 staff. Others?
• Spain to fund ~2 staff. Others at this level?
• Now vitally important that the LCG effort established predominantly via UK funding (40%) is sustained at this level (~10%)
• URGENT
Value to the UK? Required Foundation: LCG Deployment
What lies ahead? Some mountain climbing..
• Annual data storage: 12-14 PetaBytes per year
• 100 Million SPECint2000 (about 100,000 3 GHz Pentium 4 PCs)
• Importance of step-by-step planning… pre-plan your trip, carry an ice axe and crampons and arrange for a guide…
[Illustration: CD stack holding 1 year of LHC data (~20 km) vs Concorde (15 km) vs "We are here" (1 km)]
• In production terms, we've made base camp
• Quantitatively, we're ~7% of the way there in CPU (7,000 of 100,000) and disk (4 PB of the 12-14 PB/year x 3-4 years needed) - see the sketch below
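The "~7% of the way there" estimate can be reproduced from the figures on this slide (a sketch; the 13 PB/year and 3.5-year midpoints are assumptions used to collapse the quoted ranges):

```python
# The "~7% of the way there" figures, recomputed from the numbers on this slide.
cpu_now, cpu_needed = 7_000, 100_000   # 3 GHz Pentium 4 equivalents
disk_now_pb = 4                        # PB currently deployed
disk_needed_pb = 13 * 3.5              # 12-14 PB/year over 3-4 years (midpoints)

print(f"CPU:  {cpu_now / cpu_needed:.0%}")          # 7%
print(f"Disk: {disk_now_pb / disk_needed_pb:.0%}")  # ~9% (range roughly 7-11%)
```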
Grid and e-Science Support in 2008
What areas require support?
• IV - Running the Tier-1 Data Centre
• IV - Hardware annual upgrade
• IV - Contribution to Tier-2 Sysman effort and (non-PPARC) hardware
• IV - Frontend Tier-2 hardware
• IV - Contribution to Tier-0 support
• III - One M/S/N expert in each of 6 areas
• III - Production manager and four Tier-2 coordinators
• II - Application/Grid experts (UK support)
• I - ATLAS Computing MoU commitments and support
• I - CMS Computing MoU commitments and support
• I - LHCb Core Tasks and Computing Support
• I - ALICE Computing support
• I - Future experiments adopt e-Infrastructure methods
• No GridPP management (assume production mode established + devolved management to Institutes)
Key: I. Experiment Layer; II. Application Middleware; III. Grid Middleware; IV. Facilities and Fabrics
PPARC Financial Input: GridPP1 Components (6/Feb/2004)
• CERN £5.67m - LHC Computing Grid Project (LCG): Applications, Fabrics, Technology and Deployment
• Tier-1/A £3.74m - UK Tier-1/A Regional Centre: Hardware and Manpower
• DataGrid £3.57m - European DataGrid (EDG): Middleware Development
• Applications £2.08m - Grid Application Development: LHC and US Experiments + Lattice QCD
• Operations £1.84m - Management, Travel etc
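The slide does not quote a total; summing the component values shown gives the overall GridPP1 envelope (a minimal sketch, with the Operations slice read as covering management and travel):

```python
# Sum of the GridPP1 funding components listed above (GBP millions).
gridpp1_components_m = {
    "CERN (LCG)": 5.67,
    "Tier-1/A": 3.74,
    "DataGrid (EDG)": 3.57,
    "Applications": 2.08,
    "Operations (management, travel etc)": 1.84,
}
print(f"Total: ~GBP {sum(gridpp1_components_m.values()):.2f}m")  # ~16.9m
```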
PPARC Financial Input: GridPP2 Components (May 2004)
• A. Management, Travel, Operations (Mgr £0.75m, Ops £1.00m, Travel)
• B. Middleware, Security, Network Development (M/S/N £2.62m)
• C. Grid Application Development: LHC and US Experiments + Lattice QCD + Phenomenology (Applications £2.75m)
• D. Tier-2 Deployment: 4 Regional Centres - M/S/N support and System Management (Tier-2 Operations)
• E. Tier-1/A Deployment: Hardware, System Management, Experiment Support (Tier-1/A Hardware, Tier-1/A Operations)
• F. LHC Computing Grid Project (LCG Phase 2) [review] (LCG-2)
[Other pie chart values shown: £0.69m, £0.88m, £2.40m, £2.79m, £3.02m]
IV. Hardware Support

UK Tier-1 (2008): CPU total 4.2 MSI2k; disk total 3.8 PB; tape total 2.3 PB
UK Tier-2 (2008): CPU total 8.0 MSI2k; disk total 1.0 PB

Total resources required and planned in all Tier-1 Centres (except CERN) for the first full year of data taking (2008); all data is preliminary:

Resource type | ALICE | ATLAS | CMS  | LHCb | Total required (note 1) | Planned at Tier-1 centres (note 2) | Balance
CPU (MSI2k)   | 9.1   | 16.6  | 12.6 | 9.5  | 47.8                    | 41.4                               | -13%
Disk (PBytes) | 3.0   | 9.2   | 8.7  | 1.3  | 22.2                    | 10.1                               | -55%
Tape (PBytes) | 3.6   | 6.0   | 6.6  | 0.4  | 16.6                    | 17.8                               | +7%

Notes:
1. Requirements will be reviewed by LHCC in January 2005
2. Current planning includes estimates of resources for which funding has not yet been secured

• Global shortfall of Tier-1 CPU (13%) and disk (55%) - see the sketch below
• UK Tier-1 input corresponds to ~40% (~10%) of global disk (CPU)
• UK Tier-2 CPU and disk resources significant
• Rapid physics analysis turnaround is a necessity
• Priority is to ensure that ALL required software (experiment, middleware, OS) is routinely deployed on this hardware well before 2008
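The Balance column in the table above is reproduced by (planned - required) / required, as the following sketch shows:

```python
# Reproduce the "Balance" column of the Tier-1 table: (planned - required) / required.
required = {"CPU (MSI2k)": 47.8, "Disk (PBytes)": 22.2, "Tape (PBytes)": 16.6}
planned  = {"CPU (MSI2k)": 41.4, "Disk (PBytes)": 10.1, "Tape (PBytes)": 17.8}

for resource, req in required.items():
    balance = (planned[resource] - req) / req
    print(f"{resource}: {balance:+.0%}")   # -13%, -55%, +7%
```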
III. Middleware, Security and Network
M/S/N builds upon UK strengths as part of international development. Six areas, grouped as Middleware / Security / Network:
• Middleware: Grid Data Management, Configuration Management, Storage Interfaces, Information Services
• Security
• Network: Network Monitoring
Require some support expertise in each of these areas in order to maintain the Grid.
II. Application Middleware
Examples: AliEn, BaBar, GANGA, Phenomenology, Lattice QCD, SAMGrid, CMS
[SAMGrid architecture diagram; names in "quotes" are SAM-given software component names:
• Client applications: web, command line, D0 framework C++ codes, Python codes, Java codes
• Collective services: Request Formulator and Planner ("Dataset Editor"), Request Manager ("Project Master"), Cache Manager ("Station Master"), Job Manager ("Station Master"), Storage Manager ("File Storage Server"), Job Services, Data Mover ("Stager", "Optimiser"), Significant Event Logger, Naming Service, Catalog Manager, Database Manager, SAM Resource Management, batch systems (LSF, FBS, PBS, Condor)
• Connectivity and resource: CORBA, UDP, catalog protocols, file transfer protocols (ftp, bbftp, rcp), GridFTP, mass storage system protocols (e.g. encp, HPSS), SAM-specific user/group/node/station registration, GSI and bbftp 'cookie' authentication and security
• Fabric: tape storage elements, disk storage elements, compute elements, LANs and WANs, code repository, resource and services catalog, replica catalog, meta-data catalog
• Shaded components will be replaced, enhanced or added using PPDG and Grid tools]
Require some support expertise in each of these areas in order to maintain the Grid applications. Need to develop e-Infrastructure portals for new experiments starting up in the exploitation era.
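To make the layered picture concrete, here is a purely illustrative Python sketch of the idea behind it (client request, collective services, fabric). The classes, methods and dataset name are hypothetical and are not SAM/SAMGrid or LCG APIs; only the site names are taken from earlier slides:

```python
# Illustrative sketch only: a toy "collective services" layer that places a job
# near the data held by fabric-level storage elements. Not real SAMGrid code.
from dataclasses import dataclass, field

@dataclass
class ReplicaCatalog:                      # collective-service layer
    replicas: dict[str, list[str]] = field(default_factory=dict)

    def register(self, dataset: str, storage_element: str) -> None:
        self.replicas.setdefault(dataset, []).append(storage_element)

    def locate(self, dataset: str) -> list[str]:
        return self.replicas.get(dataset, [])

@dataclass
class JobManager:                          # decides where a job should run,
    catalog: ReplicaCatalog                # preferring sites that hold the data

    def place_job(self, dataset: str, sites_with_cpu: list[str]) -> str:
        for se in self.catalog.locate(dataset):
            if se in sites_with_cpu:
                return se                  # run where a replica already exists
        return sites_with_cpu[0]           # otherwise fall back and stage data

catalog = ReplicaCatalog()
catalog.register("dc2.geant4.simul", "uk.rl")    # hypothetical dataset name;
catalog.register("dc2.geant4.simul", "ch.cern")  # site names from earlier slides
print(JobManager(catalog).place_job("dc2.geant4.simul", ["uk.bham", "uk.rl"]))  # uk.rl
```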
ATLAS UK e-science forward look (Roger Jones)
Current core and infrastructure activities:
• Run Time Testing and Validation Framework, tracking and trigger instantiations
• Provision of ATLAS Distributed Analysis & production tools
• Production management
• GANGA development
• Metadata development
• ATLFast simulation
• ATLANTIS Event Display
• Physics Software Tools
~11 FTEs, mainly ATLAS e-science with some GridPP & HEFCE
Current Tracking and Trigger e-science:
• Alignment effort ~6 FTEs
• Core software ~2.5 FTEs
• Tracking tools ~6 FTEs
• Trigger ~2 FTEs
The current e-science funding will only take us (at best) to first data.
Expertise required for the real-world problems and maintenance.
Note that for the HLT, installation and commissioning will continue into the running period because of staging.
Both will move from development to optimisation & maintenance.
Need ~15 FTE (beyond existing rolling grant) in 2007-09 - continued e-science/GridPP support
CMS UK e-science forward look (Dave Newbold)

Work area                                  | Current FTEs | FTEs 2007-09                     | FTEs 2009-
Computing system / support (e-science WP1) | 2.0          | 3.0 - ramp up for running phase  | 3.0 - steady state
Monitoring / DQM (e-science WP3)           | 2.5          | 2.0 - initial running            | 1.5 - support / maintenance
Tracker software (e-science WP4)           | 2.0          | 1.5 - initial running            | 1.0 - support / maintenance
ECAL software (e-science WP5)              | 2.0          | 1.5 - initial running            | 1.0 - support / maintenance
Data management (GridPP2)                  | 1.5          | 1.5 - final deployment / support | 1.5 - support / maintenance
Analysis system (GridPP2)                  | 1.5          | 1.0 - final deployment / support | 1.0 - support / maintenance
Total                                      | 11.5         | 10.5                             | 9.0

Work area descriptions:
• Computing system / support: development / tuning of computing model + system; management; user support for T1 / T2 centres (globally); liaison with LCG ops
• Monitoring / DQM: online data gathering / 'expert systems' for CMS tracker, trigger
• Tracker / ECAL software: installation / calibration support; low-level reconstruction codes
• Data management: PhEDEx system for bulk offline data movement and tracking; system-level metadata; movement of HLT farm data online (new area)
• Analysis system: CMS-specific parts of distributed analysis system on LCG

NB: 'first look' estimates; will inevitably change as we approach running
Need ~9 FTE (beyond existing rolling grant) in 2007-09 - continued e-science/GridPP support
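A quick consistency check of the FTE table above (a minimal sketch; the column sums reproduce the quoted totals):

```python
# Column sums of the CMS forward-look FTE table.
rows = {                 # work area: (current, 2007-09, 2009-)
    "Computing system / support": (2.0, 3.0, 3.0),
    "Monitoring / DQM":           (2.5, 2.0, 1.5),
    "Tracker software":           (2.0, 1.5, 1.0),
    "ECAL software":              (2.0, 1.5, 1.0),
    "Data management":            (1.5, 1.5, 1.5),
    "Analysis system":            (1.5, 1.0, 1.0),
}
totals = [sum(col) for col in zip(*rows.values())]
print(totals)            # [11.5, 10.5, 9.0], matching the Total row
```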
LHCb UK e-science forward look (Nick Brook)
Current core activities:
• GANGA development
• Provision of DIRAC & production tools
• Development of conditions DB
• The production bookkeeping DB
• Data management & metadata
• Tracking
• Data Challenge Production Manager
~10 FTEs, mainly GridPP, e-science and studentships with some HEFCE support
Will move from development to maintenance phase - UK pro rata share of LHCb core computing activities ~5 FTEs
Current RICH & VELO e-science:
• RICH: UK provides the bulk of the RICH s/w team, including the s/w coordinator; ~7 FTEs, about 50:50 e-science funding + rolling grant/HEFCE
• VELO: UK provides the bulk of the VELO s/w team, including the s/w coordinator; ~4 FTEs, about 50:50 e-science funding + rolling grant/HEFCE
• ALL essential alignment activities for both detectors through e-science funding
Will move from development to maintenance and operational alignment: ~3 FTEs for alignment in 2007-09
Need ~9 FTE (core + alignment + UK support) in 2007-09 - continued e-science support
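The ~9 FTE request can be decomposed using the numbers on this slide (a sketch; the UK-support piece is not stated explicitly, so it is inferred here by difference):

```python
# Decomposition of the ~9 FTE request using the figures quoted on this slide.
core_computing = 5      # UK pro rata share of LHCb core computing activities
alignment = 3           # RICH + VELO alignment effort in 2007-09
requested = 9
uk_support = requested - core_computing - alignment   # inferred, not stated
print(f"Implied UK support effort: ~{uk_support} FTE")  # ~1 FTE
```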
Grid and e-Science funding requirements
• Priorities in the context of a financial snapshot in 2008: Grid (£5.6m p.a.) and e-Science (£2.7m p.a.)
[Pie charts: Grid (£5.6m p.a.) split across Tier-1, Tier-2, Tier-1/Tier-2 operations, LCG, M/S/N, Applications, operations and travel; e-Science (£2.7m p.a.) split ATLAS 37%, LHCb 19%, CMS 19%, New Expts. 19%, Travel 4%, ALICE 2%]
• Assumes no GridPP project management
• Savings?
– EGEE Phase 2 (2006-08) may contribute
• UK e-Science context is:
1. NGS (National Grid Service)
2. OMII (Open Middleware Infrastructure Institute)
3. DCC (Digital Curation Centre)
(To be compared with Road Map: not a bid - preliminary input)
• Timeline?
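Converting the 2008 snapshot into absolute amounts (a minimal sketch; the combined figure matches the £8.3m quoted in the Summary, and the per-area amounts follow from the e-Science pie percentages above):

```python
# Absolute amounts implied by the 2008 snapshot (GBP millions per annum).
grid_pa, escience_pa = 5.6, 2.7
print(f"Combined: GBP {grid_pa + escience_pa:.1f}m p.a.")   # 8.3m, as in the Summary

escience_split = {"ATLAS": 0.37, "LHCb": 0.19, "CMS": 0.19,
                  "New Expts.": 0.19, "Travel": 0.04, "ALICE": 0.02}
for area, frac in escience_split.items():
    print(f"{area}: ~GBP {frac * escience_pa:.2f}m p.a.")   # e.g. ATLAS ~1.0m p.a.
```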
Grid and e-Science Exploitation Timeline?
• PPAP initial input - Oct 2004
• Science Committee initial input
• PPARC call assessment (2007-2010) - 2005
• Science Committee outcome - Oct 2005
• PPARC call - Jan 2006
• PPARC close of call - May 2006
• Assessment - Jun-Dec 2006
• PPARC outcome - Dec 2006
• Institute Recruitment/Retention - Jan-Aug 2007
• Grid and e-Science Exploitation - Sep 2007 onwards
• Note: if the assessment from PPARC internal planning differs significantly from this preliminary advice from PPAP and SC, then earlier planning is required.
Summary
1. What has been achieved in GridPP1?
• Widely recognised as successful at many levels
2. What is being attempted in GridPP2?
• Prototype to Production - typically the most difficult phase
• UK should invest further in LCG Phase-2
3. Resources needed for Grid and e-Science in the medium-long term?
• Current Road Map: ~£6m p.a.
• Resources needed in 2008 estimated at £8.3m
• Timeline for decision-making outlined..