Internet Computing and the Emerging Grid



Nimrod/G and the Grid Market
"A Case for Economy Grid Architecture for Service Oriented Global Grid Computing"
Rajkumar Buyya, David Abramson, Jon Giddy
Monash University, Melbourne, Australia
www.buyya.com/ecogrid
www.gridcomputing.com
Scalable HPC: Breaking Administrative Barriers

[Figure: performance grows as computing scales out across ever-wider
administrative barriers. Individual: desktop; Group/Department: SMPs or
supercomputers; Campus: local cluster; State/National: enterprise
cluster/Grid; Globe: global cluster/Grid; Inter Planet/Universe:
inter-planet cluster/Grid??]
Why Grids? Large-scale exploration needs them: killer applications.

Solving grand-challenge applications using computer modeling, simulation
and analysis:
•Aerospace
•Internet & Ecommerce
•Life Sciences
•CAD/CAM
•Digital Biology
•Military Applications
Players in Grid Computing

What do users want?
Users in the Grid Economy and Their Strategies

Grid Consumers
•Execute jobs of varying problem size and complexity
•Benefit by selecting and aggregating resources wisely
•Trade off timeframe against cost
•Strategy: minimise expenses

Grid Providers
•Contribute "idle" resources for executing consumer jobs
•Benefit by maximising resource utilisation
•Trade off local requirements against market opportunity
•Strategy: maximise return on services
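The two strategies can be made concrete with a toy pricing model: a provider posts different peak and off-peak rates, and a cost-minimising consumer simply picks the resource with the lowest posted rate. A minimal sketch; the resource names, rates, and the 9-to-17 peak window are all illustrative assumptions, not part of the GRACE protocol.

```python
# Toy Grid economy: providers post peak/off-peak rates,
# consumers pick the cheapest posted rate (illustrative only).

def posted_price(peak_rate, offpeak_rate, hour):
    """Advertised G$ rate for a given hour (peak assumed 9-17)."""
    return peak_rate if 9 <= hour < 17 else offpeak_rate

def cheapest_resource(resources, hour):
    """Consumer strategy 'minimise expenses': lowest posted rate wins."""
    return min(resources,
               key=lambda r: posted_price(r["peak"], r["offpeak"], hour))

resources = [
    {"name": "cluster-A", "peak": 20, "offpeak": 5},
    {"name": "cluster-B", "peak": 5, "offpeak": 10},
]
print(cheapest_resource(resources, hour=10)["name"])  # peak hours -> cluster-B
```

Note how the "best" resource flips with the clock: off-peak, cluster-A becomes the cheaper choice, which is the same effect the experiments later in the deck exploit.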
[Figure: the problem-solving approach "mixes and matches" three building
blocks: object-oriented design, the Internet/WWW, and a
market/computational economy.]
Grid Architecture for Computational Economy

[Diagram:]

Grid User: runs the Application on top of a Grid Resource Broker
(Grid Explorer, Schedule Advisor, Trade Manager, Deployment Agent,
Job Control Agent).
Grid Middleware Services: sign-on, resource information queries
("Info?"), Grid Market Services, Information Server(s), Health Monitor,
secure trading, QoS.
Grid Service Providers: Grid Node 1 … Grid Node N, each running a Trade
Server with Pricing Algorithms, plus Accounting, Resource Reservation,
Resource Allocation, JobExec, Storage and misc. services over resources
R1, R2, …, Rm.
Economy Grid = Globus + GRACE

[Layered architecture diagram:]

Grid Apps.: Science, Engineering, Commerce, Portals, ActiveSheet
Grid Tools (high-level services and tools): globusrun, MPI-G, CC++,
Nimrod/G, GlobusView, Grid Status, DUROC, QBank, eCash
Grid Middleware (core services): Globus Security Interface, MDS, GRAM,
GARA, Nexus, GASS, Heartbeat Monitor, GRACE-TS, GMD, GBank
Grid Fabric (local services): Condor, LSF, GRD, PBS, MPI-IO, JVM, TCP,
UDP, Linux, Irix, Solaris
Nimrod/G: A Grid Resource Broker

A resource broker for managing and steering task-farming (parametric
sweep) applications on computational Grids, based on deadlines and a
computational economy.

Key Features
•A single window to manage and control an experiment
•Resource discovery
•Trading for resources
•Resource composition and scheduling
•Steering and data management
•Lets the user study the behaviour of output variables across a range
of different input scenarios
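A parametric sweep is, at bottom, one job per point in the cross product of the parameter ranges. A minimal sketch of that expansion; the parameter names and values below are invented for illustration (real Nimrod/G experiments are described in declarative plan files, not Python):

```python
from itertools import product

# Illustrative parameter ranges; a sweep runs one job per combination.
parameters = {
    "angle": [0, 15, 30],
    "pressure": [1.0, 2.5],
}

def expand_sweep(parameters):
    """Yield one job description per point in the parameter cross product."""
    names = list(parameters)
    return [dict(zip(names, values))
            for values in product(*parameters.values())]

jobs = expand_sweep(parameters)
print(len(jobs))  # 3 x 2 = 6 jobs
```

Sweeps grow multiplicatively with each added parameter, which is why the broker's job is aggregating many distributed resources rather than queuing on one machine.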
A Glance at the Nimrod/G Broker

[Diagram: Nimrod/G clients connect to the Nimrod/G Engine, which drives
a Schedule Advisor, a Trading Manager (TM) and a Grid Dispatcher over a
shared Grid Store. The Grid Explorer (GE) queries Grid Information
Server(s) (GIS). Grid middleware (Globus, Legion, Condor-G, Ninf, etc.)
connects the dispatcher to Globus-, Legion- and Condor-enabled nodes,
each running a local Resource Manager (RM) and a Trade Server (TS).]
A Nimrod/G Client: Deadline and Cost

[Figure: screenshot of a Nimrod/G client with deadline and cost
(budget) controls, beside a map of Virginia marking Legion hosts and
Globus hosts; the host "Bezek" is in both the Globus and Legion
domains.]
Adaptive Scheduling Algorithms

Algorithm           Execution Time            Execution Cost
                    (not beyond deadline)     (not beyond budget)
------------------  ------------------------  -----------------------
Time Minimisation   Minimise                  Limited by budget
Cost Minimisation   Limited by deadline       Minimise
None Minimisation   Limited by deadline      Limited by budget
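The Cost Minimisation row can be sketched as a greedy assignment: fill the cheapest resources first and spill onto dearer ones only as far as the deadline forces. This is a simplification for illustration, not the published Nimrod/G algorithm; the node counts are assumed, and it assumes one resource per price level.

```python
# Greedy cost-minimisation sketch: cheapest resources first,
# subject to finishing all jobs within the deadline.

def cost_min_schedule(n_jobs, job_minutes, deadline_minutes, resources):
    """resources: list of (rate_in_G$, node_count).
    Returns {rate: jobs assigned to the resource with that rate}."""
    assignment = {}
    remaining = n_jobs
    for rate, nodes in sorted(resources):          # cheapest rate first
        # each node can run one job at a time until the deadline
        slots = nodes * (deadline_minutes // job_minutes)
        take = min(remaining, slots)
        if take:
            assignment[rate] = take
        remaining -= take
    if remaining:
        raise ValueError("deadline infeasible even using all resources")
    return assignment

# 165 five-minute jobs, one-hour deadline (workload from the
# experimentation slide); the three (rate, nodes) pairs are assumed.
plan = cost_min_schedule(165, 5, 60, [(5, 8), (10, 10), (20, 60)])
```

Here the dearest resource is never touched: the two cheaper ones already provide enough slots before the deadline, which is exactly the behaviour the strategy table prescribes.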
The broker schedules adaptively in a loop:
1. Discover resources and establish rates
2. Compose and schedule
3. Distribute jobs
4. Evaluate and reschedule, discovering more resources when needed
At each iteration it asks: are the requirements met? How many jobs
remain, and how much of the deadline and budget?
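The loop above can be written as a control skeleton. Discovery, trading and dispatch are stubbed out with a toy broker; only the control flow is taken from the slide, and every name below is an assumption made for the sketch.

```python
class ToyBroker:
    """Stand-in for the real discovery/trading/dispatch machinery."""
    def discover(self):
        return ["r1"]
    def rates(self, resources):
        return {"r1": 1}
    def schedule(self, jobs, rates, budget, deadline):
        return min(jobs, 10)        # plan: how many jobs to hand out
    def dispatch(self, plan):
        return plan, plan, 1        # (jobs done, G$ spent, minutes taken)

def adaptive_loop(jobs, budget, deadline, broker):
    resources = broker.discover()            # 1. discover resources
    rates = broker.rates(resources)          #    ... and establish rates
    rounds = 0
    while jobs > 0:
        plan = broker.schedule(jobs, rates, budget, deadline)  # 2. compose & schedule
        done, spent, took = broker.dispatch(plan)              # 3. distribute jobs
        jobs, budget, deadline = jobs - done, budget - spent, deadline - took
        rounds += 1
        # 4. evaluate: can the remaining jobs still meet deadline & budget?
        if jobs and (budget <= 0 or deadline <= 0):
            raise RuntimeError("requirements cannot be met")
        if jobs:
            # reschedule, discovering more resources if any have appeared
            resources = broker.discover()
            rates = broker.rates(resources)
    return rounds

rounds = adaptive_loop(jobs=25, budget=100, deadline=10, broker=ToyBroker())
```

The point of the skeleton is that scheduling decisions are revisited every round, so price changes or newly discovered resources alter the plan mid-experiment.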
Inter-Continental Grid (testbed)

Australia
•Monash Uni.: Nimrod/G; Linux cluster (Globus + Legion + Condor/G);
Solaris WS (Globus/Legion, GRACE_TS)
North America
•ANL: SGI/Sun/SP2
•USC-ISI: SGI
•UVa: Linux cluster
Asia/Japan
•Tokyo I-Tech.; ETL, Tsukuba: Linux cluster (Globus + GRACE_TS)
Europe (Globus + GRACE_TS)
•ZIB/FUB: T3E/Mosix
•Cardiff: Sun E6500
•Paderborn: HPCLine
•Lecce: Compaq SC
•CNR: cluster
•Calabria: cluster
•CERN: cluster
•Poznan: SGI/SP2
All sites are connected over the Internet.
Experimentation on the Grid

Workload:
•165 jobs, each needing 5 minutes of CPU time
•Deadline: 1 hour; budget: 800,000 units
•Strategy: minimise cost while meeting the deadline

Execution cost with cost optimisation:
•AU peak time: 471,205 G$
•AU off-peak time: 427,155 G$
Resource                  Owner              Grid services    Peak time (G$)  Off-peak cost (G$)
------------------------  -----------------  ---------------  --------------  ------------------
Linux cluster (60 nodes)  Monash, Australia  Globus/Condor    20              5
IBM SP2 (80 nodes)        ANL, Chicago, US   Globus/LL        5               10
Sun (8 nodes)             ANL, Chicago, US   Globus/Fork      5               10
SGI (96 nodes)            ANL, Chicago, US   Globus/Condor-G  15              15
SGI (10 nodes)            ISI, LA, US        Globus/Fork      10              20
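With the rates tabulated above, a job's cost is simply its CPU time multiplied by the posted rate for the current tariff period. A minimal sketch; the slides do not state the time base of the G$ rates, so the CPU-time figure of 300 units used below is illustrative.

```python
# Rates taken from the resource table above: resource -> (peak, off-peak) G$.
rates = {
    "Linux cluster (Monash)": (20, 5),
    "IBM SP2 (ANL)": (5, 10),
    "Sun (ANL)": (5, 10),
    "SGI (ANL)": (15, 15),
    "SGI (ISI)": (10, 20),
}

def job_cost(resource, cpu_units, peak):
    """Cost of one job: CPU time x the rate in effect (units assumed)."""
    peak_rate, offpeak_rate = rates[resource]
    return cpu_units * (peak_rate if peak else offpeak_rate)

peak_cost = job_cost("Linux cluster (Monash)", 300, peak=True)
off_cost = job_cost("Linux cluster (Monash)", 300, peak=False)
```

The Monash cluster is four times cheaper off-peak, consistent with the lower AU off-peak total reported on the experimentation slide.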
Execution @ AU Peak Time

[Chart: number of jobs executing (y-axis, 0 to 12) against time in
minutes (x-axis, 0 to about 54) on each resource: Linux cluster -
Monash (20), Sun - ANL (5), SP2 - ANL (5), SGI - ANL (15), SGI - ISI
(10). The parenthesised legend numbers match the peak-time G$ rates.]
Execution @ AU Off-peak Time

[Chart: number of jobs executing (y-axis, 0 to 12) against time in
minutes (x-axis, 0 to about 60) on each resource: Linux cluster -
Monash (5), Sun - ANL (10), SP2 - ANL (10), SGI - ANL (15), SGI - ISI
(20). The parenthesised legend numbers match the off-peak G$ rates.]