
FutureGrid
NSF
September 15 2010
Geoffrey Fox
[email protected]
http://www.infomall.org http://www.futuregrid.org
Director, Digital Science Center, Pervasive Technology Institute
Associate Dean for Research and Graduate Studies, School of Informatics and Computing
Indiana University Bloomington
FutureGrid Key Concepts I
• FutureGrid provides a testbed with a wide variety of computing services to its users
– Supporting users developing new applications and new middleware using Cloud, Grid and Parallel computing (Hypervisors – Xen, KVM, ScaleMP, Linux, Windows, Nimbus, Eucalyptus, Hadoop, Globus, Unicore, MPI, OpenMP …)
– Software supported by FutureGrid or by users
– ~5000 dedicated cores distributed across the country
• The FutureGrid testbed provides its users:
– A rich development and testing platform for middleware and application users looking at interoperability, functionality and performance
– Each use of FutureGrid is an experiment that is reproducible
– A rich education and teaching platform for advanced cyberinfrastructure classes
– The ability to collaborate with US industry on research projects
FutureGrid Key Concepts II
• Cloud infrastructure supports loading of general images on hypervisors like Xen; FutureGrid dynamically provisions software as needed onto “bare metal” using a Moab/xCAT-based environment
• Key early user-oriented milestones:
– June 2010: initial users
– November 2010-September 2011: increasing number of users allocated by FutureGrid
– October 2011: FutureGrid allocatable via the TeraGrid process
• To apply for FutureGrid access or get help, go to the homepage www.futuregrid.org, or send email to [email protected]. You should receive an automated reply within minutes, and contact from a live human no later than the next (U.S.) business day. Please email the PI, [email protected], if problems arise.
FutureGrid Partners
• Indiana University (Architecture, core software, Support)
– Collaboration between research and infrastructure groups
• Purdue University (HTC Hardware)
• San Diego Supercomputer Center at University of California San Diego (INCA, Monitoring)
• University of Chicago/Argonne National Labs (Nimbus)
• University of Florida (ViNe, Education and Outreach)
• University of Southern California Information Sciences Institute (Pegasus to manage experiments)
• University of Tennessee Knoxville (Benchmarking)
• University of Texas at Austin/Texas Advanced Computing Center (Portal)
• University of Virginia (OGF, Advisory Board and allocation)
• Center for Information Services and GWT-TUD from Technische Universität Dresden (VAMPIR)
Red institutions (on the original slide) have FutureGrid hardware
FutureGrid Organization
[Organization chart: the PI, with an Advisory Committee and an Executive Committee (PI and co-PIs, Project Manager), an Operations and Change Management Committee, and a Software Architect; functional areas cover Computers and Network, Software, User Support (Core Basic Support, Advanced User Support), Performance, Training/Education/Outreach, Images/Appliances, Portal/Web Site, and Systems Management]
Compute Hardware

System type               | # CPUs | # Cores | TFLOPS | Total RAM (GB) | Secondary Storage (TB) | Site | Status
Dynamically configurable systems:
IBM iDataPlex             | 256    | 1024    | 11     | 3072           | 339*                   | IU   | Operational
Dell PowerEdge            | 192    | 768     | 8      | 1152           | 30                     | TACC | Being installed
IBM iDataPlex             | 168    | 672     | 7      | 2016           | 120                    | UC   | Operational
IBM iDataPlex             | 168    | 672     | 7      | 2688           | 96                     | SDSC | Operational
Subtotal                  | 784    | 3136    | 33     | 8928           | 585                    |      |
Systems not dynamically configurable:
Cray XT5m                 | 168    | 672     | 6      | 1344           | 339*                   | IU   | Operational
Shared memory system TBD  | 40     | 480     | 4      | 640            | 339*                   | IU   | New System TBD
IBM iDataPlex             | 64     | 256     | 2      | 768            | 1                      | UF   | Operational
High Throughput Cluster   | 192    | 384     | 4      | 192            |                        | PU   | Not yet integrated
Subtotal                  | 464    | 1792    | 16     | 2944           | 1                      |      |
Total                     | 1248   | 4928    | 49     | 11872          | 586                    |      |
(* shared storage, counted once in the subtotals)
Storage Hardware

System Type                | Capacity (TB) | File System | Site | Status
DDN 9550 (Data Capacitor)  | 339           | Lustre      | IU   | Existing System
DDN 6620                   | 120           | GPFS        | UC   | New System
SunFire x4170              | 96            | ZFS         | SDSC | New System
Dell MD3000                | 30            | NFS         | TACC | New System
Network & Internal Interconnects
• FutureGrid has a dedicated network (except to TACC) and a network fault and delay generator
• Experiments can be isolated on request; IU runs the network for NLR/Internet2
• (Many) additional partner machines will run FutureGrid software and be supported (but allocated in specialized ways)
Machine        | Name    | Internal Network
IU Cray        | xray    | Cray 2D Torus SeaStar
IU iDataPlex   | india   | DDR IB, QLogic switch with Mellanox ConnectX adapters; Blade Network Technologies & Force10 Ethernet switches
SDSC iDataPlex | sierra  | DDR IB, Cisco switch with Mellanox ConnectX adapters; Juniper Ethernet switches
UC iDataPlex   | hotel   | DDR IB, QLogic switch with Mellanox ConnectX adapters; Blade Network Technologies & Juniper switches
UF iDataPlex   | foxtrot | Gigabit Ethernet only (Blade Network Technologies; Force10 switches)
TACC Dell      | alamo   | QDR IB, Mellanox switches and adapters; Dell Ethernet switches
FutureGrid: a Grid/Cloud/HPC Testbed
• Operational: IU Cray; IU, UCSD, UF & UC IBM iDataPlex; network and NID (Network Impairment Device)
• TACC Dell has finished acceptance tests
[Map of sites with INCA node operating mode statistics; diagram shows the private FG network connected to the public network through the NID]
Network Impairment Device
• Spirent XGEM Network Impairments Simulator for jitter, errors, delay, etc.
• Full bidirectional 10G with 64-byte packets
• Up to 15 seconds of introduced delay (in 16 ns increments)
• 0-100% introduced packet loss in 0.0001% increments
• Packet manipulation in the first 2000 bytes
• Frame sizes up to 16k
• Tcl for scripting, HTML for manual configuration (a quantization sketch follows below)
• Need more proposals to use it (we have one from the University of Delaware)
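The delay and loss granularities above determine which impairment settings are actually realizable. As a rough illustration (not the device's real interface, which is driven through Tcl scripts or the HTML console), a helper can snap requested values onto the advertised 16 ns delay grid and 0.0001% loss grid:

# Minimal sketch: quantize requested impairments to the XGEM's advertised
# granularity (16 ns delay steps up to 15 s; 0.0001% loss steps).
# Illustrative only; the real device is configured via Tcl or HTML.

DELAY_STEP_NS = 16                 # delay settable in 16 ns increments
MAX_DELAY_NS = 15 * 10**9          # up to 15 seconds of introduced delay
LOSS_STEP_PCT = 0.0001             # loss settable in 0.0001% increments

def quantize_delay_ns(requested_ns: int) -> int:
    """Snap a requested one-way delay to the nearest representable value."""
    clipped = max(0, min(requested_ns, MAX_DELAY_NS))
    return round(clipped / DELAY_STEP_NS) * DELAY_STEP_NS

def quantize_loss_pct(requested_pct: float) -> float:
    """Snap a requested packet-loss percentage to the 0.0001% grid."""
    clipped = max(0.0, min(requested_pct, 100.0))
    return round(round(clipped / LOSS_STEP_PCT) * LOSS_STEP_PCT, 4)

# e.g. emulate a 50 ms cross-country link with ~0.25% loss
print(quantize_delay_ns(50_000_000))  # 50000000 ns, already on the grid
print(quantize_loss_pct(0.25007))     # 0.2501, the nearest step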
FutureGrid Usage Model
• The goal of FutureGrid is to support research on the future of distributed, grid, and cloud computing
• FutureGrid will build a robustly managed simulation environment and testbed to support the development and early use in science of new technologies at all levels of the software stack: from networking to middleware to scientific applications
• The environment will mimic TeraGrid and/or general parallel and distributed systems – FutureGrid is part of TeraGrid (but not part of the formal TeraGrid process for the first two years)
– Supports Grids, Clouds, and classic HPC
– It will mimic commercial clouds (initially IaaS, not PaaS)
– Expect FutureGrid PaaS to grow in importance
• FutureGrid can be considered a (small, ~5000-core) Science/Computer Science Cloud, but it is more accurately a virtual machine or bare-metal based simulation environment
• This testbed will succeed if it enables major advances in science and engineering through collaborative development of science applications and related software
Some Current FutureGrid Early Uses
• Investigate metascheduling approaches on Cray and iDataPlex
• Deploy Genesis II and Unicore endpoints on Cray and iDataPlex clusters
• Develop new Nimbus cloud capabilities
• Prototype applications (BLAST) across multiple FutureGrid clusters and Grid’5000
• Compare Amazon and Azure with FutureGrid hardware running Linux, Linux on Xen, or Windows for data-intensive applications
• Test ScaleMP software shared memory for genome assembly
• Develop genetic algorithms on Hadoop for optimization
• Attach power monitoring equipment to iDataPlex nodes to study power use versus usage characteristics
• Industry (Columbus, IN) running CFD codes to study combustion strategies that maximize energy efficiency
• Support evaluation needed by XD TIS and TAS services
• Investigate performance of the Kepler workflow engine
• Study scalability of SAGA in different latency scenarios
• Test and evaluate new algorithms for phylogenetics/systematics research in the CIPRES portal
• Investigate performance overheads of clouds in parallel and distributed environments
• Support tutorials and classes in cloud, grid and parallel computing (IU, Florida, LSU)
~12 active/finished users out of ~32 early user applicants
Grid Interoperability
from Andrew Grimshaw
• Colleagues,
• FutureGrid has as two of its many goals the creation of a Grid middleware testing and interoperability testbed, as well as the maintenance of standards-compliant endpoints against which experiments can be executed. We at the University of Virginia are tasked with bringing up three stacks as well as maintaining standard endpoints against which these experiments can be run.
• We currently have UNICORE 6 and Genesis II endpoints functioning on X-Ray (a Cray). Over the next few weeks we expect to bring two additional resources, India and Sierra (essentially Linux clusters), on-line in a similar manner (Genesis II is already up on Sierra). As called for in the FutureGrid program execution plan, once those two stacks are operational we will begin to work on g-lite (with help we may be able to accelerate that). Other standards-compliant endpoints are welcome in the future, but not part of the current funding plan.
• I’m writing the PGI and GIN working groups to see if there is interest in using these resources (endpoints) as a part of either GIN or PGI work, in particular in demonstrations or projects for OGF in October or SC in November. One of the key differences between these endpoints and others is that they can be expected to persist. These resources will not go away when a demo is done. They will be there as a testbed for future application and middleware development (e.g., a metascheduler that works across g-lite and Unicore 6).
• RENKEI/NAREGI: We are interested in participation in the interoperation project. We have a prototype middleware which can submit and receive jobs using the HPCBP specification. Can we have more detailed information on your endpoints (authentication, data staging method, and so on) and the participation conditions of the demonstrations/projects?
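For concreteness: endpoints that speak HPCBP accept jobs described in JSDL, submitted through an OGSA-BES CreateActivity call. The following is a minimal sketch of building such a JSDL document, using the core JSDL 1.0 POSIX application element; the SOAP submission step, endpoint URLs and credentials are omitted and site-specific:

# Minimal sketch: build a JSDL job document of the kind an HPCBP/BES
# endpoint consumes. Namespaces are from the OGF JSDL 1.0 specification;
# the BES CreateActivity SOAP call that submits it is omitted here.
import xml.etree.ElementTree as ET

JSDL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"

def make_job(executable, *args):
    job = ET.Element(f"{{{JSDL}}}JobDefinition")
    desc = ET.SubElement(job, f"{{{JSDL}}}JobDescription")
    app = ET.SubElement(desc, f"{{{JSDL}}}Application")
    posix = ET.SubElement(app, f"{{{POSIX}}}POSIXApplication")
    ET.SubElement(posix, f"{{{POSIX}}}Executable").text = executable
    for arg in args:
        ET.SubElement(posix, f"{{{POSIX}}}Argument").text = arg
    return ET.tostring(job, encoding="unicode")

# e.g. a trivial job to check that an endpoint accepts and runs work
print(make_job("/bin/echo", "hello from FutureGrid"))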
OGF’10 Demo
[Map: CloudBLAST deployed across SDSC, UF and UC in the US and, behind the Grid’5000 firewall, Rennes, Lille and Sophia]
ViNe provided the necessary inter-cloud connectivity to deploy CloudBLAST across 5 Nimbus sites, with a mix of public and private subnets.
Typical Performance Study
[Chart: bioinformatics performance compared across Linux, Linux on VM, Windows, Azure and Amazon]
Education on FutureGrid
• Build up tutorials on supported software
• Support development of curricula requiring privileges and system-destruction capabilities that are hard to grant on conventional TeraGrid
• Offer a suite of appliances (customized VM-based images) supporting online laboratories
• Supported ~200 students in the Virtual Summer School on “Big Data” July 26-30 with a set of certified images – the first offering of the FutureGrid 101 class; TeraGrid ’10 “Cloud technologies, data-intensive science and the TG”; CloudCom conference tutorials Nov 30-Dec 3 2010
• Experimental class use in the fall semester at Indiana, Florida and LSU
300+ Students learning about Twister & Hadoop
MapReduce technologies, supported by FutureGrid.
July 26-30, 2010 NCSA Summer School Workshop
http://salsahpc.indiana.edu/tutorial
[Map of participating sites: Washington University, University of Minnesota, Iowa State, IBM Almaden Research Center, University of California at Los Angeles, San Diego Supercomputer Center, Michigan State, Univ. Illinois at Chicago, Notre Dame, Johns Hopkins, Penn State, Indiana University, University of Texas at El Paso, University of Arkansas, University of Florida]
FutureGrid Layered Software Stack
[Diagram: layered software stack]
User Supported Software usable in Experiments, e.g. OpenNebula, Charm++, other MPI implementations, Bigtable
Software Components
• Portals, including “Support”, “Use FutureGrid” and “Outreach”
• Monitoring – INCA, Power (GreenIT)
• Experiment Manager: specify/workflow
• Image Generation and Repository
• Intercloud Networking (ViNe)
• Virtual Clusters built with virtual networks
• Performance library
• Rain, or Runtime Adaptable InsertioN Service: schedule and deploy images
• Security (including use of an isolated network), authentication, authorization
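The intended interplay of the image components above is: generate an image, register it in the repository, then “rain” it onto hardware. Below is a toy sketch of that flow; every class and function here is hypothetical, standing in for the actual portal and command-line tools:

# Hypothetical sketch of the image workflow implied above:
# generate -> register in repository -> rain onto nodes.
# None of these names are the real FutureGrid API.

def generate_image(base_os, packages):
    """Stand-in for the image generation service."""
    return {"os": base_os, "packages": sorted(packages)}

class ImageRepository:
    """Stand-in for the image repository (vetting and storage omitted)."""
    def __init__(self):
        self.images = {}
    def register(self, name, spec):
        self.images[name] = spec
        return name

def rain(repo, image_name, nodes):
    """Stand-in for Rain: deploy a stored image onto provisioned nodes."""
    spec = repo.images[image_name]
    print(f"deploying {spec['os']} + {spec['packages']} onto {nodes} nodes")

repo = ImageRepository()
name = repo.register("hadoop-rhel5", generate_image("rhel5", {"java", "hadoop"}))
rain(repo, name, nodes=16)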
FutureGrid Software Architecture
• Flexible architecture allows one to configure resources based on images
• Managed images allow us to create similar experiment environments
• Experiment management allows reproducible activities
• Through our modular design we allow different clouds and images to be “rained” onto hardware
• Note: the software will eventually be supported at “TeraGrid Production Quality”
• Will support deployment of “important” middleware including the TeraGrid stack, Condor, BOINC, gLite, Unicore, Genesis II, MapReduce, Bigtable …
– Will accumulate more supported software as the system is used!
• Will support links to external clouds, GPU clusters etc.
– Grid’5000 was an initial highlight, with the OGF29 Hadoop deployment over Grid’5000 and FutureGrid
– Interested in more external system collaborators!
Dynamic Provisioning Examples
• Need to provision:
– Linux or Windows O/S
– Linux (or Windows) O/S on hypervisors (KVM, Xen, ScaleMP)
– Appliances – O/S plus application/middleware on bare metal or hypervisors
• Give me a virtual cluster with 30 nodes based on Xen
• Give me 15 KVM nodes each in Chicago and Texas linked to Azure and Grid’5000
• Give me a Eucalyptus environment with 10 nodes
• Give me 32 MPI nodes running first on Linux and then on Windows, with Cray/iDataPlex/Dell comparisons
• Give me a Hadoop or Dryad environment with 160 nodes
– Compare with Amazon and Azure
• Give me 1000 BLAST instances linked to Grid’5000
• Give me two 8-node (64-core) ScaleMP instances on Alamo and India
(A submission sketch for the bare-metal case follows below.)
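Requests like these ultimately flow through the Moab/xCAT environment described earlier. Here is a rough sketch of the bare-metal case as a user script; the “os=” resource key and the image names are illustrative assumptions, not the exact FutureGrid syntax:

# Rough sketch: drive Moab/xCAT dynamic provisioning from a script.
# The "os=" resource key and image names below are assumptions for
# illustration; actual syntax and names are site-specific.
import subprocess

def request_nodes(n_nodes, os_image, script, walltime="01:00:00"):
    """Submit a job whose resource request asks Moab to have xCAT
    provision the nodes with a particular OS image."""
    cmd = ["msub",
           "-l", f"nodes={n_nodes},walltime={walltime},os={os_image}",
           script]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()  # the queued job ID

# e.g. "32 MPI nodes running first on Linux and then on Windows"
for image in ("statelessrhel5", "windowshpc2008"):  # hypothetical image names
    print(image, "->", request_nodes(32, image, "run_mpi_benchmark.sh"))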
Dynamic Provisioning Experiment: Logical View
Dynamic Provisioning Results
[Chart: total provisioning time (minutes) for requests of 4, 8, 16 and 32 nodes]
Time elapsed between requesting a job and the job’s reported start time on the provisioned node. The numbers here are an average of 2 sets of experiments.
Provisioning times for nodes in a 32-node request
[Chart: per-node provisioning times for the RHEL stateless image, nodes 1-32]
The nodes took an average of 3 minutes and 45 seconds to switch from the stateful to the stateless image, with a standard deviation of 14 seconds.
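The quoted mean and standard deviation can be reproduced from per-node timings in the obvious way; below is a small sketch, with hypothetical sample values in the charts' h:mm:ss format:

# Compute mean and standard deviation of per-node provisioning times.
# The sample values are hypothetical placeholders, not measured data.
import statistics

def to_seconds(hms):
    h, m, s = (int(part) for part in hms.split(":"))
    return 3600 * h + 60 * m + s

samples = ["0:03:36", "0:03:45", "0:03:58", "0:03:31", "0:03:52"]
seconds = [to_seconds(t) for t in samples]

print("mean  %.0f s" % statistics.mean(seconds))   # 224 s, i.e. about 3:44
print("stdev %.0f s" % statistics.stdev(seconds))  # sample standard deviation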
Phase III Process View
Security Issues
• Need to provide dynamic, flexible usability while preserving system security
• The process is still evolving, but the initial approach involves:
• Encouraging use of an “as a Service” approach, e.g. “Database as a Service” rather than “Database in your image”; clearly possible for some cases, as in “Hadoop as a Service”
– Commercial clouds use aaS for databases, queues, tables, storage …
– Makes complexity linear in the number of features rather than exponential, as it would be if every image had to be supported with or without each feature (see the counting sketch below)
• Have a suite of vetted images (here “images” includes customized appliances) that can be used by users with suitable roles
– Typically do not allow root access; can be VM-based or not
– Users can create images and request that they be vetted
• “Privileged images” (e.g. allowing root access) use VMs and network isolation
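The linear-versus-exponential claim is just counting: with n optional features, vetting one image per feature combination means 2^n images, while the “as a Service” approach needs one managed service per feature. A two-line check:

# With n optional features: one image per feature subset is exponential,
# one managed service per feature is linear.
features = ["Hadoop", "MySQL", "MPI", "queues", "blob storage"]
n = len(features)
print("image variants to vet:", 2 ** n)  # 32 subsets of 5 features
print("services to operate: ", n)        # 5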
FutureGrid Interaction with Commercial Clouds
• We support experiments that link Commercial Clouds and FutureGrid, with one or more workflow environments and portal technology installed to link components across these platforms
• We support environments on FutureGrid that are similar to Commercial Clouds and natural for performance and functionality comparisons
– These can both be used to prepare for using Commercial Clouds and as the most likely starting point for porting to them
– One example would be support of MapReduce-like environments on FutureGrid, including Hadoop on Linux and Dryad on Windows HPCS, which are already part of the FutureGrid portfolio of supported software
• We develop expertise in, and support porting to, Commercial Clouds from other Windows or Linux environments
• We support comparisons between, and integration of, multiple commercial Cloud environments – especially Amazon and Azure in the immediate future
• We develop tutorials and expertise to help users move to Commercial Clouds from other environments
FutureGrid Viral Growth Model
• Users apply for a project
• Users improve/develop some software in the project
• This project leads to new images which are placed in the FutureGrid repository
• Project reports and other web pages document use of the new images
• Images are used by other users
• And so on, ad infinitum …
CloudCom 2010: 200 papers submitted to the main track; 4 days of tutorials