Transcript Slide 1
GRID Analysis Environment for LHC Particle Physics
Scientific Exploration at the High Energy Physics Frontier
LHC Data Grid Hierarchy: developed at Caltech
[Diagram: the experiment's Online System streams ~PByte/sec of raw data to the Tier 0+1 CERN Center (PBs of disk; tape robot) at ~100-1500 MBytes/sec. Tier 1 centers (IN2P3, FNAL, INFN, RAL) connect to CERN at ~10-40 Gbps and onward toward Tier 2 at 2.5-10 Gbps. CERN/Outside Resource Ratio ~1:2; Tier0:(Tier1):(Tier2) ~1:1:1.]
Physics experiments consist of large collaborations: CMS and ATLAS each encompass about 2000 physicists from approximately 150 institutes (300-400 physicists in 30 institutes in the US).
HEP Challenges: Frontiers of Information Technology
•Rapid access to PetaByte/ExaByte data stores
•Secure, efficient, transparent access to heterogeneous, worldwide-distributed computing and data
•A collaborative, scalable distributed environment that enables thousands of physicists to carry out physics analysis
•Tracking the state and usage patterns of computing and data resources, to make possible rapid turnaround and efficient utilization of resources
Grid Analysis Environment (GAE)
•The “Acid Test” for Grids; crucial for the LHC experiments
•Large, diverse, distributed community of users
•Support for 100s to 1000s of analysis tasks, shared among dozens of sites
•Widely varying task requirements and priorities
•Need for priority schemes, robust authentication and security
•Operates in a severely resource-limited and policy-constrained global system
•Dominated by collaboration policy and strategy
•Requires real-time monitoring; task and workflow tracking; decisions often based on a global system view
•Where physicists learn to collaborate on analysis across the country, and across world regions
•Focus is on the LHC CMS experiment, but the architecture and services can potentially be used in other (physics) analysis environments
[Diagram, continued: Tier 2 centers connect to Tier 1 at ~2.5-10 Gbps and serve institutes (Tier 3) holding physics data caches; institutes connect at 0.1 to 10 Gbps down to Tier 4 workstations.]
Tens of Petabytes by 2007-8. An Exabyte ~5-7 years later.
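To give a sense of scale, a quick back-of-the-envelope calculation shows why data of this size must live in a distributed tier hierarchy rather than being pulled over the network on demand. This is a minimal sketch: the link speeds and data volumes come from the figure above, while the 80% sustained-efficiency assumption and the code itself are illustrative only.

```python
# Rough transfer-time estimate for moving LHC-scale data over the
# tier links quoted in the hierarchy diagram (illustrative only).

def transfer_days(data_bytes, link_gbps, efficiency=0.8):
    """Days needed to move data_bytes over a link of link_gbps,
    assuming a sustained `efficiency` fraction of the nominal rate."""
    bits = data_bytes * 8
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86400.0

PETABYTE = 1e15  # bytes

# One petabyte over a 10 Gbps Tier 0 - Tier 1 link:
print(f"1 PB over 10 Gbps:   {transfer_days(PETABYTE, 10):.1f} days")
# Ten petabytes over a 2.5 Gbps Tier 1 - Tier 2 link:
print(f"10 PB over 2.5 Gbps: {transfer_days(10 * PETABYTE, 2.5):.0f} days")
```

Even at the nominal Tier 0 to Tier 1 rates, a single petabyte takes on the order of weeks to move, which is why analysis work is scheduled close to the data.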
Emerging Vision: A Richly Structured, Global Dynamic System
[Architecture diagram: analysis clients — a web browser, ROOT (analysis tool), Python, Cojac (detector viz.)/IGUANA (CMS viz. tool), an “Analysis Flight Deck”, JobMon, JobStatus and MCPS clients, and other clients — connect to a Clarens web server providing discovery, ACL management and certificate-based access to the Grid services: MCPS and Runjob workflow execution, JobStatus, catalogs (metadata, virtual data, replica), the Sphinx scheduler with fully abstract, partially abstract and fully concrete planners, data management, compute sites running applications (ROOT, FAMOS, ORCA) with DCache storage and BOSS execution, MonALISA and JobMon monitoring clients, network reservation, and an execution priority manager.]
•MonALISA based monitoring services provide global views of the system
•MonALISA based components proactively manage sites and networks based on monitoring information (a toy sketch of such a decision follows below)
•The Clarens portal and MonALISA clients hide the complexity of the Grid services from the client, but can expose it in as much detail as required, e.g. for monitoring
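As an illustration of the kind of decision such a monitoring-driven component might take, here is a toy sketch of steering new work toward the least congested site. The sites, metrics, and scoring rule are invented for the example; this is not an actual GAE or MonALISA algorithm.

```python
# Toy illustration of proactive, monitoring-driven site selection
# (invented metrics and weights; not an actual GAE/MonALISA algorithm).

sites = {
    # site: (cpu_load fraction 0-1, link_utilization fraction 0-1)
    "CERN": (0.90, 0.75),
    "FNAL": (0.40, 0.30),
    "INFN": (0.60, 0.85),
}

def congestion(cpu_load, link_util, w_cpu=0.5, w_net=0.5):
    """Simple weighted congestion score; lower is better."""
    return w_cpu * cpu_load + w_net * link_util

best_site = min(sites, key=lambda s: congestion(*sites[s]))
print("submit next job to:", best_site)   # -> FNAL
```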
[Diagram labels, continued: MonALISA planning; Grid-wide execution service (BOSS); global command & control.]
The GAE Architecture
•Analysis clients talk standard protocols (HTTP, XML-RPC, SOAP, JSON, RMI) to the Clarens Grid Service Portal
•Enabling selection of workflows (e.g. Monte Carlo simulation, data transfer, analysis)
•Generated jobs are submitted to the scheduler, which creates a plan based on monitoring information
•Submission of jobs and feedback on job status; a sketch of this client-portal interaction follows below
[Diagram labels: Tier2 site; workflow definitions; a global view of the system; proactive in minimizing Grid traffic jams.]
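A minimal sketch of how an analysis client could drive this flow over one of the listed protocols (XML-RPC). The portal URL and the method names used here (workflow.list, workflow.submit, job.status) are hypothetical placeholders for illustration, not the actual Clarens/MCPS interface.

```python
# Hypothetical client-side view of the GAE flow: pick a workflow,
# submit a job through the portal, then poll its status.
import time
import xmlrpc.client

# Placeholder portal endpoint; a real deployment would use an
# authenticated HTTPS connection with the user's Grid certificate.
portal = xmlrpc.client.ServerProxy("https://gae-portal.example.org/clarens")

# 1. Discover the workflows the user is allowed to run (hypothetical method).
workflows = portal.workflow.list()

# 2. Submit an analysis job against one workflow (hypothetical method).
job_id = portal.workflow.submit({"workflow": "mc-simulation",
                                 "dataset": "/CMS/example/dataset",
                                 "events": 10000})

# 3. Poll for feedback on the job status (hypothetical method).
while True:
    status = portal.job.status(job_id)
    print(f"job {job_id}: {status}")
    if status in ("done", "failed"):
        break
    time.sleep(30)
```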
GAE development (services): implementations developed within the Physics and CS community, associated with GAE components:
•MCPS. Policy-based job submission and workflow management portal, developed in collaboration with FNAL and UCSD
•JobStatus. Access to job status information through Clarens and MonALISA, developed in collaboration with NUST
•JobMon. Implements a secure and authenticated method for users to access running Grid jobs, developed in collaboration with FNAL
•BOSS. Uniform job submission layer, developed in collaboration with INFN
•SPHINX. Grid scheduler, developed at UFL
•CAVES. Analysis code sharing environment, developed at UFL
•Core services (Clarens): discovery, authentication, proxy, remote file access, access control management, Virtual Organization management
The Clarens Web Service Framework
[Diagram: multiple clients and 3rd-party applications connect over http/https to a Clarens web server hosting services, which in turn can call other servers.]
A portal system providing a common infrastructure for deploying Grid-enabled web services.
•Features:
•Access control to services
•Session management
•Service discovery and invocation
•Virtual Organization management
•PKI based security
•Good performance (over 1400 calls per second)
•Role in GAE:
•Connects clients to Grid or analysis applications
•Acts in concert with other Clarens servers to form a P2P network of service providers
•Two implementations:
•Python/C using the Apache web server
•Java using Tomcat servlets
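As an illustration of the portal's role, here is a minimal sketch of a client invoking a Clarens-hosted service over HTTPS with certificate-based authentication. The endpoint URL and certificate paths are placeholders, and the example relies only on standard XML-RPC introspection (system.listMethods) rather than any specific Clarens service API.

```python
# Hypothetical example of talking to a Clarens-style portal with
# certificate-based (PKI) authentication over HTTPS.
import ssl
import xmlrpc.client

# The user's Grid certificate and key would normally come from a proxy;
# the paths and URL below are placeholders.
context = ssl.create_default_context()
context.load_cert_chain(certfile="usercert.pem", keyfile="userkey.pem")

server = xmlrpc.client.ServerProxy(
    "https://clarens.example.org/clarens/", context=context)

# Standard XML-RPC introspection: list the methods the server exposes
# (service discovery in its simplest form).
for method in server.system.listMethods():
    print(method)
```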
MonALISA
•A distributed monitoring service system using JINI/Java and WSDL/SOAP technologies.
•Acts as a dynamic service system and provides the functionality to be discovered and used by any other services or clients that require such information.
•Can integrate existing monitoring tools and procedures to collect parameters describing computational nodes, applications and network performance.
[Figure: monitoring the SC04 Bandwidth Challenge, 101 Gbps]
•Provides the monitoring information from large and distributed systems to a set of loosely coupled "higher level services" in a flexible, self-describing way. This is part of a loosely coupled service architectural model to perform effective resource utilization in large, heterogeneous distributed centers.
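To make the "flexible, self-describing" parameter model concrete, here is a small conceptual sketch of monitoring parameters flowing from distributed nodes to a loosely coupled higher-level service. It is purely illustrative, written in Python with invented names; MonALISA itself is a Java/JINI system with WSDL/SOAP interfaces, and this is not its API.

```python
# Conceptual sketch of self-describing monitoring parameters being
# aggregated by a "higher level service" (illustrative only).
from dataclasses import dataclass
from collections import defaultdict
import time

@dataclass
class Parameter:
    farm: str        # reporting site/farm
    cluster: str     # logical group, e.g. "CPU nodes" or "WAN links"
    node: str        # individual node or link
    name: str        # parameter name, e.g. "load1", "throughput_mbps"
    value: float
    timestamp: float

class HigherLevelService:
    """Keeps the latest value of every parameter it has been sent."""
    def __init__(self):
        self.latest = defaultdict(dict)   # (farm, cluster, node) -> {name: value}

    def update(self, p: Parameter):
        self.latest[(p.farm, p.cluster, p.node)][p.name] = p.value

    def farm_average(self, name):
        """Global view: average a parameter over all nodes, per farm."""
        per_farm = defaultdict(list)
        for (farm, _, _), params in self.latest.items():
            if name in params:
                per_farm[farm].append(params[name])
        return {farm: sum(v) / len(v) for farm, v in per_farm.items()}

svc = HigherLevelService()
now = time.time()
svc.update(Parameter("CERN", "CPU nodes", "node01", "load1", 0.9, now))
svc.update(Parameter("CERN", "CPU nodes", "node02", "load1", 0.5, now))
svc.update(Parameter("FNAL", "CPU nodes", "node01", "load1", 0.3, now))
print(svc.farm_average("load1"))   # -> roughly {'CERN': 0.7, 'FNAL': 0.3}
```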
Clients: Java client, ROOT (analysis tool), IGUANA (CMS viz. tool), ROOT-CAVES client (analysis sharing tool), … any app that can make XML-RPC/SOAP calls.
GRID Enabled Analysis: User view of a collaborative desktop
Clarens provides a ROOT plug-in that allows the ROOT user to gain access to Grid services via the portal, for example to access ROOT files at remote locations (a hypothetical sketch of such access is given below).
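A minimal sketch of what "access ROOT files at remote locations via the portal" could look like, shown from the Python side rather than from the ROOT plug-in. The endpoint and the file.ls/file.read method names are invented for illustration and are not the documented Clarens file service.

```python
# Hypothetical remote file access through a Clarens-style portal.
# The endpoint and the file.ls/file.read method names are invented
# for illustration; they are not the documented Clarens file service.
import xmlrpc.client

portal = xmlrpc.client.ServerProxy("https://clarens.example.org/clarens/")

# List ROOT files available in a remote dataset directory.
for entry in portal.file.ls("/store/user/example/"):
    print(entry)

# Fetch one file's bytes and save it locally so ROOT can open it.
data = portal.file.read("/store/user/example/events.root")
with open("events.root", "wb") as f:
    f.write(data.data if isinstance(data, xmlrpc.client.Binary) else data)
```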
[Screenshots of the collaborative desktop: IGUANA (viz. app.), the Clarens portal, MonALISA (monitoring) and ROOT (analysis); policy-based access to workflows; VO management; Clarens Grid portal (remote) file access; secure cert-based access to services through a browser; authentication.]
More information:
GAE web page: http://ultralight.caltech.edu/web-site/gae
Clarens web page: http://clarens.sourceforge.net
MonALISA: http://monalisa.cacr.caltech.edu/
SPHINX: http://sphinx.phys.ufl.edu/
This work is partly supported by the Department of Energy as part of the Particle Physics DataGrid project (DOE/DHEP and MICS) and by the National Science Foundation (NSF/MPS and CISE). Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Department of Energy or the National Science Foundation.
[Additional diagram labels: Authorization; Logging; Shell; Key Escrow.]