
Experience with the EDG and LCG Grids
Stephen Burke
Rutherford Appleton Laboratory
Stephen Burke – DESY seminar - 1/12/2003
Outline
- Brief description of Grids
- Grid projects
- EDG middleware
- EDG and LCG testbeds
- Experience so far
- Outlook

Many slides "borrowed" from other people!
Grid Concepts
What is a Grid?
- Everyone seems to have a different definition!
- Basic features:
  - Public key technology, X509 certificates
  - "Virtual organisations"
  - Resources are not centrally managed, and are usually shared
  - Resource brokering, load balancing
  - Common information system
  - Standard protocols
- Originally more aimed at sharing supercomputers
  - HEP is more interested in farms
Grids in HEP
- Typical scenario:
  - Hundreds to thousands of CPUs per site
  - A few large centres - Tier 1
  - Tens of major sites - Tier 2
  - Many more local sites – Tier 3 (?)
  - Data volumes of many Pb
- Can consume essentially unlimited CPU power
  - "Embarrassingly parallel", events are all independent
LHC computing challenge (MONARC model)

[Diagram: the MONARC tier hierarchy, roughly as follows]
- One bunch crossing per 25 ns; 100 triggers per second; each event is ~1 Mbyte
- Online System (~PBytes/sec from the detector) → Offline Farm, ~20 TIPS, at ~100 MBytes/sec
- Tier 0: CERN Computer Centre, >20 TIPS, fed at ~100 MBytes/sec
- Tier 1: regional centres (French, Italian, RAL, US, …), connected at ~Gbits/sec or by air freight
- Tier 2: centres such as ScotGRID++, ~1 TIPS each, connected at ~Gbits/sec
- Tier 3: institute servers (~0.25 TIPS) with physics data caches, connected at 100 - 1000 Mbits/sec
- Tier 4: physicists' workstations
- Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels, and data for these channels should be cached by the institute server
- 1 TIPS = 25,000 SpecInt95; PC (1999) = ~15 SpecInt95
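As a rough cross-check of these numbers (my own arithmetic, not from the slides; the ~1e7 seconds of effective data taking per year is an assumed, commonly quoted figure):

# Rough cross-check of the MONARC numbers above (illustration only).
trigger_rate_hz = 100         # triggers per second (from the slide)
event_size_mb = 1.0           # each event is ~1 Mbyte (from the slide)
seconds_per_year = 1e7        # assumed effective running time per year

rate_mb_per_s = trigger_rate_hz * event_size_mb
volume_pb_per_year = rate_mb_per_s * seconds_per_year / 1e9   # MB -> PB

print(f"raw rate   ~ {rate_mb_per_s:.0f} MBytes/sec")    # ~100 MBytes/sec, as in the diagram
print(f"raw volume ~ {volume_pb_per_year:.0f} PB/year")  # ~1 PB/year of raw data per experiment
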
Some comments
- MONARC "tier" model pre-dates the Grid
  - A hierarchical model doesn't really fit with the Grid
  - More about service levels than resources?
- Individual universities sometimes have large resources
  - Thousands of CPUs, tens of Tb of disk storage
  - Central sites (CERN, FNAL – DESY?) like to be in control!
- WAN bandwidth is ~ infinite
  - Bottlenecks are now at site level
- HEP "virtual organisations" are pretty real
Grid Projects
Globus
- Provides the basic infrastructure for ~all Grids
  - GSI – basic security services using extended PKI
  - MDS – information system based on LDAP
  - GRAM – "gatekeeper" front-end to batch systems
  - GridFTP – GSI-enabled high-performance FTP
- Globus provides a toolkit, not an integrated system
- Globus is a research project
  - More aimed at defining protocols than implementation
  - Global Grid Forum is a de facto standards body
The EU DataGrid (EDG) project
- 9.8 M euro EU (IST) funding over 3 years
- Three year phased developments & demos (2001-2003)
- Project objectives:
  - Middleware for fabric & Grid management (mostly funded by the EU)
  - Large scale testbed (mostly funded by the partners)
  - Production quality demonstrations (partially funded by the EU)
  - To collaborate with and complement other European and US projects
  - Contribute to Open Standards and international bodies:
    - Co-founder of Global Grid Forum and host of GGF1 and GGF3
    - Industry and Research Forum for dissemination of project results
- Total of 21 partners
  - Research and Academic institutes as well as industrial companies
  - Main partners:
LCG
- LHC Computing Grid – much more than just a Grid!
- Should provide the full computing infrastructure for the LHC experiments
- Mainly a customer of other Grid projects, but may do development where essential
- Not a research project, the focus is on a working, production-quality system
LCG project structure
- Applications Area: development environment, joint projects, data management, distributed analysis
- Middleware Area: provision of a base set of grid middleware – acquisition, development, integration, testing, support
- CERN Fabric Area: large cluster management, data recording, cluster technology, networking, computing service at CERN
- Grid Deployment Area: establishing and managing the Grid Service – middleware certification, security, operations, registration, authorisation, accounting

- Phase 1 – 2002-05: development of common software; prototyping and operation of a pilot computing service
- Phase 2 – 2006-08: acquire, build and operate the LHC computing service
- Joint with a new European project, EGEE (Enabling Grids for e-Science in Europe)
Deploying the LCG service
- Middleware:
  - Testing and certification
  - Packaging, configuration, distribution and site validation
  - Support – problem determination and resolution; feedback to middleware developers
- Operations:
  - Grid infrastructure services
  - Site fabrics run as production services
  - Operations centres – trouble and performance monitoring, problem resolution – 24x7 globally
- Support:
  - Experiment integration – ensure optimal use of system
  - User support – call centres/helpdesk – global coverage; documentation; training
EGEE
- Follow-on project from EDG
- Substantially larger scale
  - €32m for 2 years vs. €10m over three years
  - 70 partners!
- Initially for 2 years, starting April 1st 2004
  - Possible 2-year extension
- Complex relationship with LCG
EGEE Activities
- EGEE includes 11 Activities:
- Services
  - SA1: Grid Operations, Support and Management
  - SA2: Network Resource Provision
- Joint Research
  - JRA1: Middleware Engineering and Integration
  - JRA2: Quality Assurance
  - JRA3: Security
  - JRA4: Network Services Development
- Networking
  - NA1: Management
  - NA2: Dissemination and Outreach
  - NA3: User Training and Education
  - NA4: Application Identification and Support (HEP is in here)
  - NA5: Policy and International Cooperation
Other main US & European Grid projects
[Diagram: the Grid project landscape]
- US projects → the Virtual Data Toolkit (VDT)
- European projects → the DataGrid Toolkit
- Many national, regional Grid projects - GridPP(UK), INFN-grid(I), NorduGrid, Dutch Grid, …
EDG Middleware
EDG Structure
- Six middleware work areas:
  - Job submission/control
  - Data management
  - Information and monitoring
  - Fabric management
  - Mass Storage
  - Networking
- Security
- Three application areas:
  - HEP
  - Earth observation
  - Biomedical
- Testbed operation
- Dissemination group
Security
- Very important, but left out of the original plan!
  - Effort taken from other work areas
  - Liaison with many other projects
- Major activities:
  - Development of procedures for Certificate Authorities
  - Security for java-based web services
  - Site-level authorisation and mapping to unix accounts (sketched below)
  - VO membership services (VOMS)
- Many areas still missing (accounting, auditing, resource reservation, fine-grained authorisation, delegation, encryption, …)
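To make "site-level authorisation and mapping to unix accounts" concrete, here is a toy sketch of the idea; the DNs, VO lists and account names are invented, and the real EDG machinery (grid-mapfiles, LCAS/LCMAPS, pool accounts) is considerably more involved:

# Toy illustration of site-level authorisation: map a certificate subject
# (DN) to a local account for its VO.  All names here are made up.
VO_MEMBERS = {
    "atlas": {"/O=Grid/O=UKHEP/OU=ral.ac.uk/CN=Some User"},
    "cms":   {"/O=Grid/O=INFN/OU=cnaf.infn.it/CN=Another User"},
}

POOL_ACCOUNTS = {"atlas": ["atlas001", "atlas002"], "cms": ["cms001"]}

def map_user(subject_dn):
    """Return (vo, local account) for an authorised DN, or None if rejected."""
    for vo, members in VO_MEMBERS.items():
        if subject_dn in members:
            return vo, POOL_ACCOUNTS[vo][0]  # a real system leases a free pool account
    return None

print(map_user("/O=Grid/O=UKHEP/OU=ral.ac.uk/CN=Some User"))  # ('atlas', 'atlas001')
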
Job submission
- Command line tools to submit, query and cancel a job, and to retrieve the output
- Jobs are sent to a Resource Broker which manages the job lifecycle
- Files can be transferred in input and output "sandboxes"
  - This is OK for file sizes up to a few tens of Mb
- The EDG view of computing resources is a Computing Element (CE) mapped to a batch queue in an underlying batch system (PBS, LSF, BQS, …)
- Jobs usually run under "anonymous" accounts
  - gid is the VO name
- JDL uses Condor ClassAds to define matchmaking
  - Possible CEs are selected to match the requirements, and then ranked according to a user-supplied algorithm
A Job Submission Example
[Diagram: the job submission chain – UI, Resource Broker, Job Submission Service, Compute Element, Storage Element, Replica Catalogue, Information Service, Logging & Book-keeping. Roughly:]
- After authentication and authorisation, the UI sends the JDL and the input "sandbox" to the Resource Broker
- The broker matches the job against information published in the Information Service and dataset locations from the Replica Catalogue
- The expanded JDL goes to the Job Submission Service, which submits the job to the chosen Compute Element as Globus RSL; the job can read data from a Storage Element
- Job submit events and job status are recorded in Logging & Book-keeping, which the user can query; the output "sandbox" is returned to the UI
JDL example
Executable = "test.sh";
Arguments = "arg1 arg2";
StdOutput = "test.out";
StdError = "test.err";
InputSandbox = "test.sh";
OutputSandbox = {"test.out", "test.err"};
VirtualOrganisation = "atlas";
InputData = "lfn:burke.test.file";
DataAccessProtocol = "file";
RetryCount = 5;
Requirements = other.GlueCEPolicyMaxCPUTime > 3600 &&
               Member("ATLAS6.0.4", other.GlueHostApplicationSoftwareRunTimeEnvironment);
Rank = other.GlueCEStateFreeCPUs - other.GlueCEStateWaitingJobs;
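The broker's select-and-rank step for a JDL like this can be pictured with a small sketch (my own illustration, not the actual Workload Management code); each dictionary stands for the GLUE attributes a CE publishes:

# Sketch of matchmaking for the JDL above: keep the CEs that satisfy the
# Requirements expression, then pick the one with the highest Rank.
candidate_ces = [
    {"id": "ce1", "GlueCEPolicyMaxCPUTime": 2880, "GlueCEStateFreeCPUs": 10,
     "GlueCEStateWaitingJobs": 0,
     "GlueHostApplicationSoftwareRunTimeEnvironment": ["ATLAS6.0.4", "LCG-1"]},
    {"id": "ce2", "GlueCEPolicyMaxCPUTime": 7200, "GlueCEStateFreeCPUs": 3,
     "GlueCEStateWaitingJobs": 5,
     "GlueHostApplicationSoftwareRunTimeEnvironment": ["ATLAS6.0.4"]},
]

def requirements(ce):  # the Requirements expression from the JDL above
    return (ce["GlueCEPolicyMaxCPUTime"] > 3600 and
            "ATLAS6.0.4" in ce["GlueHostApplicationSoftwareRunTimeEnvironment"])

def rank(ce):          # the Rank expression from the JDL above
    return ce["GlueCEStateFreeCPUs"] - ce["GlueCEStateWaitingJobs"]

matches = [ce for ce in candidate_ces if requirements(ce)]
best = max(matches, key=rank)   # raises ValueError if no CE matches
print(best["id"])               # -> ce2 (ce1 fails the MaxCPUTime requirement)
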
Job submission features
Many other features, largely untested so far:
- Job dependencies (DAGs)
- Checkpointing
- Proxy renewal
- Automatic uploading of output files
- Parallel processing with MPICH
- Interactive jobs
- Scheduling by data access cost
- Fuzzy ranking
- Accounting (economic model)
Data management
- File catalogues store the locations of replicas of a given file
- Replicas are stored on Storage Elements (SE)
- Data is transferred with GridFTP
- Tools are provided to manage replicas and update the catalogues
- Network performance information is used to optimise the choice of replica
Naming conventions
- Globally Unique Identifier (GUID) - A non-human readable unique identifier for a file, e.g. "guid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6"
- Logical File Name (LFN) – A non-unique alias created by a user to refer to a file, e.g. "lfn:cms/20030203/run2/track1"
- Site URL (SURL) - The identifier for a file replica on a storage system, e.g. "srm://tbn03.nikhef.nl/mcprod/higgs10_1"
- Transfer URL (TURL) – A URL for a particular protocol which can be used to access a copy of a file, e.g. "gsiftp://tbn03.nikhef.nl/gridstore/atlas/mcprod/higgs10_1"

[Diagram: Logical File Names 1…n all map to a single GUID, which maps to the Physical File SURLs 1…n]
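A small sketch of how the four kinds of name fit together, using the example identifiers above; the TURL mapping shown is a toy (as the slide's own example shows, the real SE may map the SURL on to a different local path):

# Illustration of the EDG naming scheme: many LFN aliases -> one GUID -> one
# SURL per replica; a TURL is derived from a SURL for a chosen protocol.
guid = "guid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6"

lfns = ["lfn:cms/20030203/run2/track1"]             # user-chosen aliases for the GUID

surls = ["srm://tbn03.nikhef.nl/mcprod/higgs10_1"]  # one entry per replica

def turl_for(surl, protocol="gsiftp"):
    """Toy SURL -> TURL mapping; in EDG this is done by the SE/Replica Manager."""
    return protocol + "://" + surl.split("://", 1)[1]

print(turl_for(surls[0]))   # gsiftp://tbn03.nikhef.nl/mcprod/higgs10_1
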
Replication: basic functionality
- Each file has a unique Grid ID. Locations corresponding to the GUID are kept in the Replica Location Service.
- Users may assign aliases to the GUIDs. These are kept in the Replica Metadata Catalog.
- Files have replicas stored at many Grid sites on Storage Elements.
- The Replica Manager provides atomicity for file operations, assuring consistency of SE and catalog contents.

[Diagram: the Replica Manager sits between the user and the Replica Metadata Catalog, the Replica Location Service and the Storage Elements]
Mass storage
- Many mass storage systems in use
  - Also "bare" disk servers
- Grid services need a uniform interface
  - SRM project: LBL, Jefferson, FNAL, CERN, RAL
  - Transaction based: create/write/commit and cache/getTURL/read (sketched below)
  - EDG provides an implementation (edg-se)
- Interface is hidden behind the Replica Manager
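The transaction-based pattern can be sketched with a toy, in-memory storage class; this only illustrates the create/write/commit and cache/getTURL/read call sequences, it is not the SRM or edg-se interface:

# Toy illustration of the transaction-based access pattern described above.
class ToyStorage:
    def __init__(self):
        self._disk = {}       # committed files, keyed by SURL
        self._pending = {}    # open write transactions

    def create(self, surl):                  # start a write transaction
        self._pending[surl] = bytearray()

    def write(self, surl, data):
        self._pending[surl] += data

    def commit(self, surl):                  # file becomes visible only now
        self._disk[surl] = bytes(self._pending.pop(surl))

    def cache(self, surl):                   # a real MSS would stage tape -> disk here
        assert surl in self._disk

    def get_turl(self, surl, protocol="gsiftp"):
        return protocol + "://" + surl.split("://", 1)[1]

se = ToyStorage()
surl = "srm://se.example.org/mcprod/higgs10_1"
se.create(surl); se.write(surl, b"event data"); se.commit(surl)
se.cache(surl); print(se.get_turl(surl))     # gsiftp://se.example.org/mcprod/higgs10_1
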
Information and monitoring
- Globus solution (MDS) is based on LDAP (see the query sketch below)
  - No tables or relations
  - Hierarchical data flow
  - Hard to update schema to publish new information
  - Scalability and performance problems
- EDG has developed a new system called R-GMA
  - Relational model, tables, SQL-style queries
  - Distributed system
  - Easy to publish new information
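For illustration, an MDS GRIS/GIIS (or the LCG BDII) can be queried with a few lines of python-ldap; the host name below is invented, and port 2135 and the mds-vo-name=local,o=grid base DN were the conventional MDS settings at the time, so treat them as assumptions to check against your own setup:

# Hedged sketch of an LDAP query against an MDS-style information system.
import ldap  # python-ldap package

conn = ldap.initialize("ldap://gris.example.org:2135")   # invented host
results = conn.search_s(
    "mds-vo-name=local,o=grid",            # conventional MDS base DN
    ldap.SCOPE_SUBTREE,
    "(objectClass=GlueCE)",                # GLUE schema CE entries
    ["GlueCEUniqueID", "GlueCEStateFreeCPUs", "GlueCEStateWaitingJobs"],
)
for dn, attrs in results:
    print(dn, attrs)
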
R-GMA (Relational Grid Monitoring Architecture)
- Use the GMA from GGF
  - A relational implementation
  - Applied to both information and monitoring
- Powerful data model and query language
  - All data modelled as tables
  - SQL can express most queries in one expression
  - Creates the impression that you have an RDBMS

[Diagram: the GMA triangle – Producers and Consumers find each other through a Registry; consumers execute one-off or streamed queries against producers]
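The relational idea can be illustrated in a few lines, using sqlite purely as a stand-in for R-GMA's "virtual database"; the table and column names are invented and this is not the R-GMA API:

# Producer/consumer view of a relational information system (illustration).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE CEStatus (ceId TEXT, freeCPUs INT, waitingJobs INT)")

# "producer" side: publish a few tuples into the table
db.executemany("INSERT INTO CEStatus VALUES (?, ?, ?)",
               [("ce1", 10, 0), ("ce2", 3, 5)])

# "consumer" side: one SQL expression answers the question
for (ce_id,) in db.execute(
        "SELECT ceId FROM CEStatus ORDER BY freeCPUs - waitingJobs DESC"):
    print(ce_id)
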
GLUE schema
- Joint project between European and US Grids to have a common information schema
- Currently covers CE and SE, networking still in development
- Some problems found, particularly for the SE, so changes are under discussion
- http://www.cnaf.infn.it/~sergio/datatag/glue/index.htm
LCFG (Local ConFiGuration)
- LCFG was originally developed by the Computer Science Department of Edinburgh University
- Widely used fabric management tool, whose purpose is to handle automated installation and configuration
- Basic features:
  - automatic installation of O.S.
  - installation/upgrade/removal of all (rpm-based) software packages
  - centralized configuration and management of machines
  - extendible to configure and manage EDG middleware and custom application software
- LCFGng: updated version of LCFG, customized to EDG needs. This is the version used by EDG 2.0
- CERN uses a new product called Quattor developed by EDG
LCFGng system architecture
[Diagram: LCFGng system architecture – the LCFGng config files are compiled into per-node XML profiles ("Make Profile") and published by a web server; each client node fetches its profile over HTTP (rdxprof), loads it into a local cache (ldxprof), and the LCFG objects/components (e.g. inet, auth) act on it]
- Abstract configuration parameters for all nodes are stored in a central repository
- A collection of agents read the configuration parameters and either generate traditional config files or directly manipulate various services
Testbeds
The EDG Applications Testbed
- The testbed has been running continuously since November 2001
- Five core sites: CERN, CNAF, Lyon, NIKHEF, RAL
  - Peaked at around 15 sites, 900 CPUs, 10 Tb of disk in Storage Elements (plus local storage). Also Mass Storage Systems at CERN, Lyon, RAL, IFAE and SARA.
- Key dates:
  - Feb 2002, release 1.1 with basic functionality
  - April 2002, release 1.2: first production release, used for ATLAS tests in August
  - Nov 2002 - Feb 2003, release 1.3/1.4: bug fixes incorporating a new Globus version, stability much improved. Used for CMS stress test, and more recently ALICE and LHCb production tests.
  - September 2003, release 2.0 with major new functionality
  - Release 2.1 deployed last week!
LCG-1
- Not a testbed!
  - Still a prototype, but focused on production
- Tries to take only middleware which is in a stable state
- Initially rolled out to core "tier-1" sites
  - Configuration hand-crafted by deployment team at CERN
  - Tier-1 sites should support tier-2
- Experiments starting to use it
- LCG-2 coming ~ now
  - Not a major upgrade, but will be the platform for MC challenges in 2004
LCG-1 software
- LCG-1 (LCG1-1_0_2) is:
  - VDT Virtual Data Toolkit (Globus 2.2.4) delivered through iVDGL
  - EDG WP1 (Resource Broker)
  - EDG WP2 (Replica Management tools)
    - One central RMC and LRC for each VO, located at CERN, ORACLE backend
  - Several bits from other WPs (Config objects, InfoProviders, Packaging …)
  - GLUE 1.1 (Information schema) + few essential LCG extensions
  - MDS based Information System with LCG enhancements
  - SE-Classic (disk based only, gridFTP); MSS via GFAL and SRM soon
  - EDG components approx. edg-2.0 version
- LCG modifications:
  - Job managers to avoid shared filesystem problems (GASS Cache, etc.)
  - MDS – BDII LDAP
  - Globus gatekeeper enhancements (some accounting and auditing features, log rotation)
  - Many, many bug fixes to EDG and Globus/VDT
LCG service status
- Certification and distribution process established
- Middleware package – components from:
  - European DataGrid (EDG)
  - US (Globus, Condor, PPDG, GriPhyN) → the Virtual Data Toolkit
- Agreement reached on principles for registration and security
- Rutherford Lab (UK) to provide the initial Grid Operations Centre
- FZK (Karlsruhe) to operate the Call Centre
- The "certified" release was made available to 13 centres on 1 September
  - Academia Sinica Taipei, BNL, CERN, CNAF, FNAL, FZK, IN2P3 Lyon, KFKI Budapest, Moscow State Univ., Prague, PIC Barcelona, RAL, Univ. Tokyo
- Next steps – get the experiments going & expand to other centres
Experience
Application tests
- Atlas set up a "task force" in August 2002 to repeat part of their MC data challenge, and have since done smaller-scale tests with EDG and LCG
- CMS did a stress-test at the end of 2002, as part of their data challenge
  - They have also investigated the use of R-GMA for monitoring of production jobs
- ALICE and LHCb have done limited tests, effectively treating EDG as "just another batch system"
Non-LHC applications
- BaBar have done some limited tests, but as a running experiment have been waiting for a production-quality system
- D0 have been gridifying their SAM data management system, and have recently done some tests with R-GMA
  - CDF are also adopting SAM
- Earth observation and biomedical groups also have ongoing evaluations
- Substantial generic testing by the "Loose Cannons"
Lessons learnt - general
Many problems and limitations found, but also a lot of progress
- Having real users on an operating testbed on a fairly large scale is vital – many problems emerged which had not been seen in local testing.
- Problems with configuration are at least as important as bugs – integrating the middleware into a working system takes as long as writing it!
- Grids need different ways of thinking by users and system managers. A job must run anywhere it lands. Sites are not uniform, so jobs should make as few demands as possible.
Deployment - EDG
- EDG provides a complete "grid in a box" using LCFG
  - Install everything with standard configurations on bare machines and you can have a working (ish) grid in ~ 1 week
- Very hard to install by hand, configuration is extremely delicate – change anything and it breaks!
  - And the problems are hard to diagnose
  - Misconfigurations at 1 site can affect the whole Grid
- Hard to install on an existing system, although a CE can feed into existing batch systems fairly easily
  - But batch workers will need Grid client software if apps use it
- Tight system requirements
  - RH 7.3, gcc 3.2.2, specific versions of many RPMs
  - Clashes even within EDG, can't install all middleware together
Deployment - LCG
- LCG have a much more tightly controlled deployment process than EDG, and have only deployed to the major sites so far
  - But still many teething problems – especially for sites new to the Grid
  - Huge load on the deployment team at CERN
- Many issues around the running of a real production system are only beginning to be addressed
  - Yet to see if the current system will scale to dozens of sites, hundreds of users, thousands of jobs, …
Job submission
- Some limitations in rate per broker (~ 10 jobs per minute)
  - Can have multiple brokers
  - Generally fails gracefully if limits are exceeded
- Broker needs a heavyweight machine – and a good sysadmin!
- Can be very sensitive to poor or incorrect information from Information Providers, or propagation delays
  - Resource discovery may not work
  - Resource ranking algorithms are error prone
  - Black holes – "all jobs go to RAL"
- The job submission chain is complex and fragile
  - UI – NS – WM – Condor – GRAM – PBS/LSF/…
  - Hard to diagnose problems
Information systems and monitoring
- It has not been possible to arrive at a stable, hierarchical, dynamic system based on MDS
  - System comes to a halt with a high query rate, hierarchy propagates problems upwards, timeouts don't always work
  - LCG uses a patched system with a static cache (BDII)
  - MDS schema is inflexible, LDAP is limited
- R-GMA is conceptually much better, but stability so far has not been good enough for a production system
  - Many bugs found, improving rapidly
  - Registry replication scheme and secure interface designed but not yet implemented
- Monitoring/debugging information is fairly limited so far
  - We need to develop a set of monitoring tools for all system aspects
Replica catalogues
- Can be deployed against MySQL or Oracle
- Tested up to O(200k) files
- LRC is designed to be distributed, but so far both EDG and LCG have only deployed a single catalogue
  - RMC is not yet designed to be replicated
- Wildcard queries can be slow
  - Apparently due to transmission of data via java/XML
- No consistency checking against disk content
- Secure interface designed but not yet deployed
- Support for metadata is limited, but developing
- No file collections yet, only single files
Data management
- Intuitive interface, works well in general
- Deals with single files only
- Very slow: can take ~20 seconds to register a short file
  - Use of java/web services the main culprit?
- Client-based system, no queues, retries, delegation …
- Fault tolerance not perfect
  - Errors not always detected
  - System can be left in an inconsistent state
- File naming scheme is complex and hard to understand
Use of mass storage
- EDG and LCG have a "classic SE" which is just a machine with a large disk and a GridFTP server
  - LCG only use this so far
  - Works OK, but no space management, no interface to tape MSS
- SRM project to have a common access protocol for MSS
  - edg-se is (almost) an implementation; still under development
  - LCG is expecting others, e.g. for Castor and Enstore
- SRB may be an alternative?
- LCG has developed GFAL (Grid File Access Layer) as a unix-style interface to files on the grid
  - Will become a true unix file system
Virtual Organisation (VO) management
- Current system works fairly well, but has many limitations
- LDAP-based VO servers are a single point of failure
- No authorisation/accounting/auditing/…
  - EDG has no security work package! (but there is a security group)
  - EGEE and LCG have security as a major area
- New software (VOMS/LCAS/LCMAPS) in release 2
  - Being tested but not yet integrated
- Experiments will also need to gain experience about how a VO should be run
User view of the testbeds
- Site configuration is very complex; there is usually one way to get it right and many ways to be wrong!
  - LCFG (the fabric management system) is a big help in ensuring uniform configuration, but can't be used at all sites
- Services should fail gracefully when they hit resource limits
  - The Grid must be robust against failures and misconfiguration
- We need a user support system
  - Currently mainly done via mailing lists, which is not ideal!
  - FZK Karlsruhe is setting up a user support centre for LCG
- Many HEP experiments (and the EDG middleware at the moment) require outbound IP connectivity from worker nodes, but many farms these days have "Internet Free Zones"
  - Various solutions are possible, discussion is needed
Other issues
- Documentation – EDG has thousands of pages of documentation, but it can be very hard to find what you want
  - LCG is working on documentation for users and sysadmins
- Development of user-friendly portals to services
  - Several projects underway, but no standard approach yet
- How do we distribute the experiment application software?
  - LCG has a recent proposal – still have to see how it works
- Interoperability/federation – is it possible to use multiple Grids operated by different organisations?
- Scaling – can we make a working Grid with hundreds of sites?
Outlook
EDG
- Currently winding up
- Final reports being finalised, should be ready by December 15th
- Final EU review in February
- EDG testbed will probably last in some form up to the review, after that the situation is unclear
EGEE/LCG
- EGEE officially starts on April 1st
  - Will initially take over the LCG system and middleware, plus other EDG software (R-GMA, VOMS …) which is not yet in LCG
- EGEE and LCG are strongly coupled (much management and resources in common) but are not identical
  - How does the US part of LCG relate to EGEE?
  - EGEE has to cater for non-HEP users
  - Where do non-LHC HEP experiments fit in?
- Both LCG and EGEE are committed to running a production Grid in 2004!
OGSA
- Globus have announced a move to a new interface
  - Open Grid Services Architecture
  - Extends web services
- Likely to be the future of Grids, but still in an early stage
  - Current systems will last for at least another year
- Will HEP applications themselves become Grid services?
Summary
- Grids have come a long way in three years, but still have a long way to go
- Some areas barely getting started
  - Particularly how to operate a production system
- LHC needs a working system next year
  - And 2007 is no longer so far away!
- Many projects, all pulling in slightly different directions
- We will be living in Interesting Times