
ECECS Lecture 18 Grid Computing

Citation: B. Ramamurthy / SUNY Buffalo

Globus Material

The presentation is based on the two main publications on grid computing listed below:

1. The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration, by Ian Foster, Carl Kesselman, Jeffrey Nick, and Steven Tuecke, 2002.

2. The Anatomy of the Grid: Enabling Scalable Virtual Organizations, by Ian Foster, Carl Kesselman, and Steven Tuecke, 2001.

Both papers are available at http://www.globus.org/research/papers.html

Grid Technology

• Grid technologies and infrastructures support the sharing and coordinated use of diverse resources in dynamic, distributed “virtual organizations”.

• Grid technologies are distinct from technology trends such as Internet, enterprise, distributed, and peer-to-peer computing. But these technologies can benefit from growing into the “problem space” addressed by grid technologies.

Virtual Organization: Problem Space

• An industrial consortium formed to develop a feasibility study for a next-generation supersonic aircraft undertakes a highly accurate multidisciplinary simulation of the entire aircraft.

• A crisis-management team responds to a chemical spill by using local weather and soil models to estimate the spread of the spill, planning and coordinating evacuation, notifying hospitals, and so forth.

• Thousands of physicists come together to design, create, operate, and analyze products by pooling computing, storage, and networking resources to create a Data Grid.

Resource Sharing Requirements

• Members should be trustful and trustworthy.

• Sharing is conditional.

• Should be secure.

• Sharing should be able to change dynamically over time.

• Need for discovery and registering of resources.

• Can be peer to peer or client/server.

• Same resource may be used in different ways.

• All of these requirements point to the need for a well-defined architecture and protocols.

Grid Definition

• Architecture identifies the fundamental system components, specifies the purpose and function of these components, and indicates how these components interact with each other.

• Grid architecture is a protocol architecture, with protocols defining the basic mechanisms by which VO users and resources negotiate, establish, manage, and exploit sharing relationships.

• Grid architecture is also a standards-based, open services architecture that facilitates extensibility, interoperability, portability, and code sharing.

• APIs and toolkits are also being developed.

Grid Services Architecture

[Figure: Grid services architecture]
• Applications: high-energy physics data analysis, collaborative engineering, regional climate studies, parameter studies, on-line instrumentation
• Application Toolkit Layer: high throughput, data intensive, collaborative design, remote visualization, remote control
• Grid Services Layer: information, security, resource management, data access, fault detection, …
• Grid Fabric Layer: transport, instrumentation, multicast, control interfaces, QoS mechanisms, …

Architecture

[Figure: The layered grid architecture alongside the Internet protocol architecture]
• Grid layers: Application, Collective, Resource, Connectivity, Fabric
• Internet layers: Application, Transport, Internet, Link

Fabric Layer

• Fabric layer: Provides the resources to which shared access is mediated by Grid protocols.

• Example: computational resources, storage systems, catalogs, network resources, and sensors.

• Fabric components implement local, resource specific operations.

• Richer fabric functionality enables more sophisticated sharing operations.

• Sample resources: computational resources, storage resources, network resources, code repositories, catalogs.

Connectivity Layer

• Communicating easily and securely.

• Connectivity layer defines the core communication and authentication protocols required for grid-specific network functions.

• This enables the exchange of data between fabric layer resources.

• Support for this layer is drawn from TCP/IP’s IP, TCP, and DNS layers.

• Authentication solutions: single sign on, etc.

Resources Layer

• The resource layer defines protocols, APIs, and SDKs for secure negotiation, initiation, monitoring, control, accounting, and payment of sharing operations on individual resources.

• Two classes of protocols, information protocols and management protocols, define this layer.

• Information protocols are used to obtain information about the structure and state of a resource, e.g., its configuration, current load, and usage policy.

• Management protocols are used to negotiate access to a shared resource, specifying, for example, QoS requirements, advance reservation, etc.

Collective Layer

• Coordinating multiple resources.

• Contains protocols and services that capture interactions among a collection of resources.

• It supports a variety of sharing behaviors without placing new requirements on the resources being shared.

• Sample services: directory services, coallocation, brokering and scheduling services, data replication service, workload management services, collaboratory services.

Applications Layer

• These are user applications that operate within VO environment.

• Applications are constructed by calling upon services defined at any layer.

• Each of the layers is well defined by its protocols and provides access to useful services.

• Well-defined APIs also exist for working with these services.

• The Globus Toolkit implements these layers and supports grid application development.

Globus Toolkit Services

• Security (GSI): PKI-based security (authentication) service
• Job submission and management (GRAM): uniform job submission
• Information services (MDS): LDAP-based information service
• Remote file management (GASS): remote storage access service
• Remote data catalogue and management tools: supported by Globus 2.0, released in 2002

Part II

High-level services

Sample of High-Level Services

• Resource brokers and co-allocators: DUROC, Nimrod/G, Condor-G, Gridbus Broker
• Communication & I/O libraries: MPICH-G, PAWS, RIO (MPI-IO), PPFS, MOL
• Parallel languages: HPC++, CC++, Nimrod parameter specification
• Collaborative environments: CAVERNsoft, ManyWorlds
• Others: MetaNEOS, NetSolve, LSA, AutoPilot, WebFlow

The Nimrod-G Grid Resource Broker

• A resource broker for managing, steering, and executing task-farming (parameter sweep/SPMD model) applications on the Grid, based on deadlines and computational economy.

• Based on users’ QoS requirements, the broker dynamically leases services at runtime depending on their quality, cost, and availability.

• Key features:
– A single window to manage & control an experiment
– Persistent and programmable task-farming engine
– Resource discovery
– Resource trading
– Scheduling & predictions
– Generic dispatcher & grid agents
– Transportation of data & results
– Steering & data management
– Accounting

• Uses Globus: MDS, GRAM, GSI, GASS

Condor-G: Condor for the Grid

• Condor is a high-throughput scheduler
• Condor-G uses Globus Toolkit libraries for:
– Security (GSI)
– Managing remote jobs on the Grid (GRAM)
– File staging & remote I/O (GSI-FTP)
• Grid job management interface & scheduling: a robust replacement for the Globus Toolkit programs (the Globus Toolkit focus is on libraries and services, not end-user vertical solutions)
• Supports single or high-throughput apps on the Grid: a personal job manager which can exploit Grid resources

Production Grids & Testbeds

• Production deployments underway at:
– NSF PACIs National Technology Grid
– NASA Information Power Grid
– DOE ASCI
– European Grid
• Research testbeds:
– EMERGE: advance reservation & QoS
– GUSTO: Globus Ubiquitous Supercomputing Testbed Organization
– Particle Physics Data Grid
– World-Wide Grid (WWG)

Production Grids & Testbeds

[Figures: NASA’s Information Power Grid, the Alliance National Technology Grid, and the GUSTO testbed]

World Wide Grid (WWG)

[Figure: The World Wide Grid testbed and GMonitor/MEG visualisation demonstrated at SC 2002, Baltimore. Sites include: Australia — Melbourne + Monash U (Gridbus + Nimrod-G), VPAC, Physics Solaris workstations, Grid Market Directory; Asia — AIST Japan (Solaris cluster), Osaka University (cluster), Doshisha (Linux cluster), Korea (Linux cluster); North America — ANL (SGI/Sun/SP2), NCSA (cluster), Wisconsin (PC/cluster), NRC Canada, and many others; Europe — ZIB (T3E/Onyx), AEI (Onyx), CNR (cluster), CUNI/CZ (Onyx), Poznan (SGI/SP2), Vrije U (cluster), Cardiff (Sun E6500), Portsmouth (Linux PC), Manchester (O3K), Cambridge (SGI), and many others; all connected over the Internet.]

Example Applications Projects (via Nimrod-G or Gridbus)

• Molecular docking for drug discovery: docking molecules from chemical databases with a target protein
• Neuroscience: brain activity analysis
• High-energy physics: Belle detector data analysis
• Natural language engineering: analyzing audio data (e.g., to identify the emotional state of a person!)

Example Application Projects

• Computed microtomography (ANL, ISI): real-time, collaborative analysis of data from an X-ray source (and electron microscope)
• Hydrology (ISI, UMD, UT; also NCSA, Wisc.): interactive modeling and data analysis
• Collaborative engineering (“tele-immersion”): CAVERNsoft @ EVL
• OVERFLOW (NASA): large CFD simulations for aerospace vehicles

Example Application Experiments

• Distributed interactive simulation (CIT, ISI): record-setting SF-Express simulation
• Cactus: astrophysics simulation, visualization, and steering, including trans-Atlantic experiments
• Particle Physics Data Grid: high-energy physics distributed data analysis
• Earth Systems Grid: climate modeling data management

The Globus Advantage

• A flexible Resource Specification Language which provides the power needed to express the required constraints
• Services for resource co-allocation, executable staging, remote data access, and I/O streaming
• Integration of these services into high-level tools:
– MPICH-G: grid-enabled MPI
– globus-job-*: flexible remote execution commands
– Nimrod-G: grid resource broker
– Gridbus: grid business infrastructure
– Condor-G: high-throughput broker
– PBS, GRD: meta-schedulers

Resource Management

• The Resource Specification Language (RSL) is used to communicate requirements
• The Globus Resource Allocation Manager (GRAM) API allows programs to be started on remote resources, despite local heterogeneity
• A layered architecture allows application-specific resource brokers and co-allocators to be defined in terms of GRAM services

Resource Management Architecture

[Figure: Resource management architecture. An application passes RSL to a broker; the broker specializes the RSL using queries to the information service and hands ground RSL to a co-allocator, which splits it into simple ground RSL requests for the GRAMs of individual local resource managers (e.g., LSF, EASY-LL, NQE).]

GRAM Components

[Figure: GRAM components. A client uses MDS client API calls to locate resources via the Grid Index Info Server and to get resource information from the Grid Resource Info Server, which queries the current status of the resource. The client then issues GRAM client API calls to request resource allocation and process creation. Inside the site boundary, the gatekeeper authenticates the request through the Globus Security Infrastructure, parses the RSL (using the RSL library), and creates a job manager; the job manager allocates and creates processes through the local resource manager, monitors and controls them, and delivers state-change callbacks to the GRAM client.]

A simple run

[raj@belle raj]$ globus-job-run belle.anu.edu.au /bin/date
Mon May 3 15:05:42 EST 2004

Resource Specification Language (RSL)

• Common notation for the exchange of information between components
– Syntax similar to MDS/LDAP filters
• RSL provides two types of information:
– Resource requirements: machine type, number of nodes, memory, etc.
– Job configuration: directory, executable, arguments, environment
• An API is provided for manipulating RSL

RSL Syntax

• Elementary form: parenthesized clauses
– (attribute op value [ value … ] )
• Operators supported:
– <, <=, =, >=, >, !=
• Some supported attributes:
– executable, arguments, environment, stdin, stdout, stderr, resourceManagerContact, resourceManagerName
• Unknown attributes are passed through
– May be handled by subsequent tools
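To make the attribute list concrete, here is a small illustrative RSL string (the program name, file names, and limits are made up); it combines job-configuration attributes with simple resource requirements, using only attributes listed above or in the examples on the following slides:

&(executable=myprog)
 (arguments=input.dat)
 (stdout=myprog.out)
 (stderr=myprog.err)
 (count=4)
 (max_time=60)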

Constraints: “&”

• Example: globusrun -o -r belle.anu.edu.au "&(executable=/bin/date)"
• For example:
  &(count>=5)(count<=10)(max_time=240)(memory>=64)(executable=myprog)
  “Create 5-10 instances of myprog, each on a machine with at least 64 MB of memory, that is available to me for 4 hours”

Disjunction: “|”

• For example:
  &(executable=myprog)
   ( | (&(count=5)(memory>=64))
       (&(count=10)(memory>=32)))
• Create 5 instances of myprog on a machine that has at least 64 MB of memory, or 10 instances on a machine with at least 32 MB of memory

Multirequest: “+”

• A multi-request allows us to specify multiple resource needs, for example:
  +(&(count=5)(memory>=64)(executable=p1))
   (&(network=atm)(executable=p2))
– Execute 5 instances of p1 on a machine with at least 64 MB of memory
– Execute p2 on a machine with an ATM connection
• Multi-requests are central to co-allocation

Co-allocation

• Simultaneous allocation of a resource set
– Handled via optimistic co-allocation based on free nodes or queue prediction
– In the future, advance reservations will also be supported
• globusrun and globus-job-* will co-allocate specific multi-requests (a sketch follows below)
– Uses a Globus component called the Dynamically Updated Request Online Co-allocator (DUROC)
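As a rough sketch only, a co-allocated multi-request can be handed to globusrun as a single RSL string, with each sub-request naming its own resource manager (the contacts anlsp2 and uhsp2 are reused from the lowering example later in these slides; whether your toolkit version needs an extra flag for multi-requests is an assumption):

globusrun "+(&(resourceManagerContact=anlsp2)(count=5)(memory>=64)(executable=p1))
            (&(resourceManagerContact=uhsp2)(network=atm)(executable=p2))"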

DUROC Functions

• Submit a multi-request
• Edit a pending request
– Add new nodes, edit out failed nodes
• Commit to a configuration
– Delay to the last possible minute
– Barrier synchronization
• Initialize the computation
– Bootstrap library
• Monitor and control the collection

DUROC Architecture

[Figure: DUROC architecture. A controlling application submits an RSL multi-request to DUROC, which splits it into subjobs handled by separate resource managers (RM1–RM4, running Jobs 1–5). The application receives subjob status, can edit the pending request, and the controlled jobs synchronize at a barrier.]

RSL Creation Using globus-job-run

• globus-job-run can be used to generate RSL from command-line arguments:
  globus-job-run -dumprsl \
    -: host1 -np N1 [-s] executable1 args1 \
    -: host2 -np N2 [-s] executable2 args2 \
    ... > rslfile
– -np: number of processors
– -s: stage file
– argument options for all RSL keywords
– -help: description of all options

Job Submission Interfaces

• The Globus Toolkit includes several command-line programs for job submission:
– globus-job-run: interactive jobs
– globus-job-submit: batch/offline jobs
– globusrun: flexible scripting infrastructure
• Other high-level interfaces:
– General purpose: Nimrod-G, Condor-G, PBS, GRD, etc.
– Application specific: ECCE’, Cactus, web portals

globus-job-run

• For running interactive jobs
• Additional functionality beyond rsh
– Ex: run a 2-process job with executable staging:
  globus-job-run -: host -np 2 -s myprog arg1 arg2
– Ex: run 5 processes across 2 hosts:
  globus-job-run \
    -: host1 -np 2 -s myprog.linux arg1 \
    -: host2 -np 3 -s myprog.aix arg2
– For a list of arguments, run: globus-job-run -help

globus-job-submit

• For running batch/offline jobs
– globus-job-submit: submit a job
• Same interface as globus-job-run
• Returns immediately
– globus-job-status: check job status
– globus-job-cancel: cancel a job
– globus-job-get-output: get job stdout/stderr
– globus-job-clean: clean up after a job
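A sketch of the batch lifecycle, assuming a hypothetical job on a host used elsewhere in these notes; globus-job-submit prints a job contact string, which the other commands take as their argument:

JOB=$(globus-job-submit belle.anu.edu.au /bin/date)   # submit; returns immediately with a job contact
globus-job-status "$JOB"                              # poll until the job reports DONE
globus-job-get-output "$JOB"                          # fetch the job's stdout/stderr
globus-job-clean "$JOB"                               # remove the job's cached output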

globusrun

• Flexible job submission for scripting
– Uses an RSL string to specify the job request
– Contains an embedded globus-gass-server
• Defines the GASS URL prefix in an RSL substitution variable: (stdout=$(GLOBUSRUN_GASS_URL)/stdout)
– Supports both interactive and offline jobs
• Complex to use
– Must write RSL by hand (a sketch follows below)
– Must understand its esoteric features
– Generally you should use the globus-job-* commands instead
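A minimal hand-written invocation might look like the sketch below: -r names the gatekeeper as in the constraint example earlier, and the stdout redirection uses the substitution variable from this slide. The single quotes keep the shell from expanding the variable; whether an additional option is needed to start the embedded GASS server depends on the toolkit version, so treat this as an assumption:

globusrun -r belle.anu.edu.au \
    '&(executable=/bin/date)(stdout=$(GLOBUSRUN_GASS_URL)/stdout)'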

Resource Brokers

[Figure: Resource brokers. User-level requests such as “Run a distributed interactive simulation involving 100,000 entities”, “Create a shared virtual space with participants X, Y, and Z”, and “Perform a parameter study involving 10,000 separate trials” go to domain-specific brokers (a DIS-specific broker, a collaborative-environment-specific resource broker, a parameter-study-specific broker). These translate them into resource-level requests such as “Supercomputers providing 100 GFLOPS, 100 GB, < 100 msec latency”, which a supercomputer resource broker resolves against the Information Service into concrete offers (“80 nodes on the Argonne SP, 256 nodes on the CIT Exemplar, 300 nodes on the NCSA O2000”). A simultaneous-start co-allocator then issues “Run SF-Express on 80/256/300 nodes” requests to the Argonne, CIT, and NCSA resource managers.]

Brokering via Lowering

• Resource location by refining an RSL expression (RSL lowering):

(MFLOPS=1000)

  ↓

(&(arch=sp2)(count=200))

  ↓

(+(&(arch=sp2)(count=120)(resourceManagerContact=anlsp2))
  (&(arch=sp2)(count=80)(resourceManagerContact=uhsp2)))

Remote I/O and Staging

• Tell GRAM to pull the executable from a remote location
• Access files from a remote location
• stdin/stdout/stderr from a remote location

What is GASS?

(a) GASS file access API
– Replace open/close with globus_gass_open/globus_gass_close; read/write calls can then proceed directly
(b) RSL extensions
– URLs used to name executables, stdout, stderr
(c) Remote cache management utility
(d) Low-level APIs for specialized behaviors

GASS Architecture

[Figure: GASS architecture. (a) The GASS file access API is used from application code:]

  main() {
    fd = globus_gass_open(…)
    …
    read(fd, …)
    …
    globus_gass_close(fd)
  }

[(b) RSL extensions pass URLs to GRAM, e.g. &(executable=https://…); (c) the remote cache is managed with the globus-gass-cache utility; (d) low-level APIs allow customizing the cache and the GASS server, which works alongside HTTP and FTP servers and a local cache.]

GASS File Naming

• URL encoding of resource names:
  https://quad.mcs.anl.gov:9991/~bester/myjob
  (protocol :// server address / file name)
• Other examples:
  https://pitcairn.mcs.anl.gov/tmp/input_dataset.1
  https://pitcairn.mcs.anl.gov:2222/./output_data
  http://www.globus.org/~bester/input_dataset.2
• Supports http & https
• Supports ftp & gsiftp

GASS RSL Extensions

• executable, stdin, stdout, and stderr can be local files or URLs
• executable and stdin are loaded into the local cache before the job begins (on the front-end node)
• stdout and stderr are handled via GASS append mode
• The cache is cleaned after the job completes

GASS/RSL Example

&(executable=https://quad:1234/~/myexe)
 (stdin=https://quad:1234/~/myin)
 (stdout=/home/bester/output)
 (stderr=https://quad:1234/dev/stdout)

Example GASS Applications

• On-demand, transparent loading of data sets
• Caching of data sets
• Automatic staging of code and data to remote supercomputers
• (Near) real-time logging of application output to a remote server

GASS File Access API

• Minimal changes to the application
• globus_gass_open(), globus_gass_close()
– Same as open(), close() but use URLs instead of filenames
– Caches the URL in case of multiple opens
– Return descriptors to files in the local cache or sockets to the remote server
• globus_gass_fopen(), globus_gass_fclose()

GASS File Access API (cont)

• Support for different access patterns:
– Read-only (from local cache)
– Write-only (to local cache)
– Read-write (to/from local cache)
– Write-only, append (to remote server)

globus_gass_open()/close()

[Figure: globus_gass_open()/close() flow. On globus_gass_open(), if the URL is not already in the cache the file is downloaded into the cache; the cached file is then opened and a cache reference added. On globus_gass_close(), if the file was modified the changes are uploaded, and the cache reference is removed.]

GASS File API Semantics

• Copy-on-open into the cache if not truncate or write-only-append, and not already in the cache
• Copy-on-close from the cache if not read-only, and no other copies are open
• Multiple globus_gass_open() calls share the local copy of the file
• Append to the remote file if write-only-append: e.g., for stdout and stderr
• Reference counting keeps track of open files

globus-gass-server

• Simple file server
– Run by the user wherever necessary
– Secure https protocol, using GSI
– APIs for embedding the server into other programs
• Example: globus-gass-server -r -w -t
– -r: allow files to be read from this server
– -w: allow files to be written to this server
– -t: tilde expand (~/… → $(HOME)/…)
– -help: list all options

GRAM & GASS: Putting It Together

[Figure: How globus-job-run combines GRAM and GASS: (1) derive the contact string for the gatekeeper, (2) build the RSL string from the command-line arguments, (3) start up a GASS server, (4) submit the request to the gatekeeper, which hands it to a job manager that runs the program, and (5) return the program’s output to the local stdout through the GASS server.]
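The same flow can be approximated by hand as a sketch (the local host name and port are made up, and globus-job-run performs all of these steps automatically); the GASS URL form for stdout follows the GASS/RSL example later in these notes:

# 3. start a personal GASS server; it prints the URL it is listening on
#    (assumed here to be https://myhost.example.org:20000)
globus-gass-server -r -w &
# 1-2. the contact string and the RSL are written by hand in this sketch
# 4. submit the request to the gatekeeper; 5. stdout flows back through the GASS server
globusrun -r belle.anu.edu.au \
    '&(executable=/bin/date)(stdout=https://myhost.example.org:20000/dev/stdout)'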

Example: A Simple Broker

• Select machines based on availability
– Use MDS queries to get current host loads
– Look at the output and decide which machines to use
• Generate RSL based on the selection
– globus-job-run -dumprsl can assist
• Execute globusrun, feeding it the RSL generated in the previous step (a sketch follows below)
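A minimal shell sketch of these steps; the candidate hosts, the load attribute name (taken from the grid-info-search output later in this section), and the assumption that the dumped RSL can be passed straight to globusrun are all illustrative:

# 1. query MDS on each candidate host and note the reported free CPU
for h in belle.anu.edu.au quad.mcs.anl.gov; do
    echo "== $h =="
    grid-info-search -x -h "$h" | grep Mds-Cpu-Free-1minX100
done
# (suppose belle.anu.edu.au reports the most free CPU)

# 2. generate RSL for the chosen host
globus-job-run -dumprsl -: belle.anu.edu.au -np 1 /bin/date > job.rsl

# 3. execute globusrun, feeding it the generated RSL
globusrun -r belle.anu.edu.au "$(cat job.rsl)"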

GRAM & GASS

• Using RSL with globusrun
• Running globus-gass-server
• Modifying a program to use globus_gass_open() to read files remotely from a GASS server

Globus Components In Action

[Figure: Globus components in action. On the local machine, mpirun builds an RSL multi-request that globusrun parses; grid-proxy-init creates a user proxy certificate from the X.509 user certificate; DUROC splits the multi-request into single RSL requests, which GRAM clients submit over GSI to the GRAM gatekeepers of the remote machines. On each remote machine the GRAM job manager starts the application through the local mechanism (e.g., PBS on AIX, Unix fork on Solaris), GASS clients fetch files from the GASS server on the local machine, and the application processes communicate via Nexus/MPI.]


MDS: Monitoring and Discovery Service

• Learn how to use the MDS to locate resources and determine their characteristics
• Locate resources
– Where are resources with the required architecture, installed software, available capacity, network bandwidth, etc.?
• Determine resource characteristics
– What are the physical characteristics, connectivity, and capabilities of a resource?

The Need for Information

• System information is critical to operation of the grid and construction of applications – How does an application determine what resources are available?

– What is the “state” of the computational grid?

– How can we optimize an application based on configuration of the underlying system?

• We need a general information infrastructure to answer these questions

Using Information for Resource Brokering

[Figure: Using information for resource brokering. A request such as “10 GFlops, EOS data, 20 Mb/sec -- for 20 mins” goes to the resource broker, which performs location and selection against the Metacomputing Directory Service (“What computers?” “What speed?” “When available?”) and then asks the Globus Resource Allocation Managers (GRAM) for “50 processors + storage from 10:20 to 10:40 pm”. GRAM sits in front of local schedulers such as fork, LSF, EASY-LL, Condor, etc.]

Examples of Useful Information

• Characteristics of a compute resource
– IP address, software available, system administrator, networks connected to, OS version, load
• Characteristics of a network
– Bandwidth and latency, protocols, logical topology
• Characteristics of the Globus infrastructure
– Hosts, resource managers

Grid Information Service

• Provides access to static and dynamic information regarding system components
• A basis for configuration and adaptation in heterogeneous, dynamic environments
• Requirements and characteristics:
– Uniform, flexible access to information
– Scalable, efficient access to dynamic data
– Access to multiple information sources
– Decentralized maintenance

MDS

• Stores information in a distributed directory
– The directory is stored in a collection of LDAP servers
– Each server is optimized for a particular function
• The directory can be updated by:
– Information providers and tools
– Applications (i.e., users)
– Back-end tools which generate information on demand
• Information is dynamically available to:
– Tools
– Applications

Directory Service Functions

• White pages
– Look up the IP number, amount of memory, etc., associated with a particular machine
• Yellow pages
– Find all the computers of a particular class or with a particular property (see the sketch below)
• Temporary inconsistencies are often considered okay
– In a distributed system, you often do not know the state of a resource until you actually use it
– Information is often used as “hints”
– Information itself can contain a TTL, etc.
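A yellow-pages-style query might be sketched as follows; the filter uses LDAP syntax, the attribute names come from the grid-info-search output shown later in this section, and it is an assumption that your installation's grid-info-search accepts a filter and attribute list in exactly this form:

# find i686 machines that currently report a mostly idle CPU
grid-info-search -x -h belle.anu.edu.au \
    '(&(Mds-Computer-isa=i686)(Mds-Cpu-Free-1minX100>=100))' \
    Mds-Computer-isa Mds-Cpu-Free-1minX100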

MDS Approach

• Based on LDAP
– Lightweight Directory Access Protocol v3 (LDAPv3)
– Standard data model
– Standard query protocol
• Globus-specific schema
– Host-centric representation
• Globus-specific tools
– GRIS, GIIS
– Data discovery, publication, …

[Figure: applications and middleware access GRIS and GIIS servers through the LDAP API; the servers draw on sources such as SNMP, NWS, NIS, and other LDAP directories.]

MDS Components

• Uses standard LDAP servers
– OpenLDAP, Netscape, Oracle, etc.
• Tools for populating & maintaining MDS
– Integrated with the Globus Toolkit server release, not of concern to most Globus users
– Discover/update static and dynamic information
• APIs for accessing & updating MDS contents
– C, Java, Perl (LDAP API, JNDI)
• Various tools for manipulating MDS contents
– Command-line tools, shell scripts & GUIs

Anonymous Grid info search

• grid-info-search -x -h belle.anu.edu.au

….

Mds-Computer-isa: i686
Mds-Computer-platform: i686
Mds-Computer-Total-nodeCount: 1
Mds-Cpu-Cache-l2kB: 512
Mds-Cpu-features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
Mds-Cpu-Free-15minX100: 400
Mds-Cpu-Free-1minX100: 400
Mds-Cpu-Free-5minX100: 400
Mds-Cpu-model: Intel(R) Xeon(TM) CPU 2
…

Summary

• MDS provides the information needed to perform dynamic resource discovery and configuration
– A critical component of resource brokers
• MDS is based on existing directory service standards (LDAPv3)