vonlaszewski-gridcat.ppt

Bio
• Gregor von Laszewski is conducting state-of-the-art work in Cloud
computing and GreenIT at Indiana University as part of the FutureGrid
project. During a two-year leave of absence from Argonne National
Laboratory he was an Associate Professor at Rochester Institute of
Technology (RIT). He worked between 1996 and 2007 for Argonne
National Laboratory and as a fellow at the University of Chicago.
• He has been involved in Grid computing since the term was coined. His
current research interests are in the areas of GreenIT, Grid & Cloud
computing, and GPGPUs. He is best known for his efforts in making
Grids usable and for initiating the Java Commodity Grid Kit, which
provides a basis for many Grid-related projects including the Globus
Toolkit (http://www.cogkits.org). His Web page is located at
http://cyberaide.org
• Recently worked on FutureGrid, http://futuregrid.org
• Master's degree in 1990 from the University of Bonn, Germany
• Ph.D. in 1996 from Syracuse University in computer science.
Cyberaide Creative:
On-Demand Cyberinfrastructure
Provision in Clouds
Casey Rathbone, Lizhe Wang,
Gregor von Laszewski, Fugang Wang
Outline
• Background and related work
• Problem definition
• System design
• Prototype performance results
• Current progress
• FutureGrid
• Conclusion
Why are we doing it?
[Figure: computing environments, Past vs. Now]
Grid/Cloud Computing
• Effective computing paradigm for distributed
high performance computing applications
• A number of production Grid infrastructures,
projects, applications:
– TeraGrid, EGEE, WLCG, FutureGrid, D-Grid …
• Disadvantages of current production Grids:
– Overloaded Grid middleware
– Complicated access interfaces and policies
– Limited QoS support
– No personalized computing environment provision
Grid/Cloud Computing
Features:
– On demand service provision
– Utility computing model: pay-as-you-go
– Customized computing environment provision
– Automatic and autonomous service management
– User centric interfaces with broad network access
– Scalable services with resource pooling
……
Cyberaide
• An open source project
– Originally created at Argonne Nat. Lab.
– Now Indiana University
• Some students from RIT
• PI: Dr. von Laszewski
• A middleware for Cyberinfrastructure
– Including Grids and Clouds
• Cyberaide virtual appliance
• Cyberaide shell
• Cyberaide mediator, Cyberaide server
• Cyberaide creative
Cyberaide
shell, mediator and server
Motivation: Cyberaide Creative
• Today's heterogeneous network architectures require teams of IT
specialists to deploy services effectively, which decreases the
accessibility of computing resources.
• Cyberaide Creative addresses this issue by
providing a platform for individuals to utilize
resources without needing intimate knowledge
of the hardware platform.
Research Topic
• Increasing accessibility to computing resources
with on-demand deployment on virtualized
hardware resources.
• Effectively shielding the end user from having to configure
specifications for each system
System Design
Use Case
• End-user configures a virtual appliance image
with the web interface
• Cyberaide Creative builds and stores the
virtual appliance
• End-user can then deploy instances of the virtual appliance onto
Cloud resources (sketched below)
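This use case can be illustrated with a short sketch. It assumes a hypothetical HTTP front end for Cyberaide Creative; the base URL, endpoint paths, and JSON fields are illustrative assumptions, not the actual Cyberaide Creative interface. Only the three-step workflow (configure, build and store, deploy) follows the description above.

```python
# Illustrative sketch of the Cyberaide Creative use case. The service URL,
# endpoints, and JSON fields are hypothetical.
import time
import requests

BASE = "https://creative.example.org/api"  # hypothetical service location

# 1. End user submits an appliance configuration (normally done via the web interface).
spec = {
    "name": "my-appliance",
    "base_os": "ubuntu-jeos",
    "packages": ["openssh-server"],
    "memory_mb": 1024,
}
appliance = requests.post(f"{BASE}/appliances", json=spec).json()

# 2. Cyberaide Creative builds and stores the virtual appliance image;
#    the client polls until the image is ready.
while requests.get(f"{BASE}/appliances/{appliance['id']}").json()["state"] != "ready":
    time.sleep(30)

# 3. The end user deploys instances of the stored appliance onto Cloud resources.
deployment = requests.post(
    f"{BASE}/appliances/{appliance['id']}/deploy",
    json={"target": "cloud-testbed", "instances": 2},
)
print("deployment request:", deployment.json())
```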
Virtual Cluster Deployment
Cyberaide Gridshell Deployment
Single Workstation Deployment
Virtual Machine Linpack
Performance Results
Demonstrates that there is a performance penalty for virtualized
deployments.
On-demand access to Cyberinfrastructures
• Users can now build desired cyberinfrastructures on demand, for
example production Grid environments.
• How are they then accessed?
• Interfaces of production Grids are strictly defined:
– Resource information
– Security
– Job submission and management
• Access resources of a production Grid:
– from ad-hoc clients
– without special client software or Grid expertise
– with on-demand access at runtime
Cyberaide Virtual Appliance:
overview
• Cyberaide Virtual Appliance
– Puts the Cyberaide shell, mediator and server into a virtual
machine
– Deploys the Cyberaide virtual appliance on demand to access a
production Grid
– Users access the production Grid via the Cyberaide virtual
appliance
• Advantages
– A Cyberaide virtual appliance can be deployed dynamically with
policy customization, such as user accounts and access URIs
(sketched below)
– Multiple users can share a Cyberaide virtual appliance and
thereby form a virtual organization (VO)
– A Cyberaide virtual appliance can be managed easily, for
example started, shut down, migrated, or duplicated
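A minimal sketch of what such per-deployment policy customization could look like. The AppliancePolicy structure and the deploy_appliance helper are illustrative assumptions, not part of the actual Cyberaide code base; they only show how one appliance image can be customized per deployment and shared by several users.

```python
# Illustrative sketch only: the policy fields and the deploy helper are
# assumptions used to show per-deployment customization and VO-style sharing.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AppliancePolicy:
    access_uri: str                                          # URI under which the mediator/server is reachable
    user_accounts: List[str] = field(default_factory=list)   # users sharing this appliance (a small VO)
    hypervisor: str = "kvm"                                   # target hypervisor

def deploy_appliance(image: str, policy: AppliancePolicy) -> None:
    """Stand-in for the deployment step: copy the image, inject the policy, boot the VM."""
    print(f"deploying {image} on {policy.hypervisor}, "
          f"reachable at {policy.access_uri}, users: {policy.user_accounts}")

# Two users share one appliance and thereby form a small virtual organization.
deploy_appliance(
    "cyberaide-appliance.img",
    AppliancePolicy(
        access_uri="https://appliance.example.org:8443",
        user_accounts=["alice", "bob"],
    ),
)
```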
Cyberaide virtual appliance:
Solutions
VMware Studio vs. JeOS VMBuilder

Criteria                     | VMware Studio                | JeOS VMBuilder
User interface               | Very good                    | Less comfortable
Supported OS                 | Ubuntu, SUSE, RedHat, CentOS | Ubuntu JeOS only
Supported hypervisors        | VMware                       | VMware, Xen and KVM
Automatic hypervisor support | Yes                          | No
Ease of use                  | Some technical problems      | Good

JeOS VMBuilder was selected.
Cyberaide virtual appliance:
Implementation
• Four configuration files for boot and login:
– A basic configuration file that defines basic parameters such as the
platform type (i386), the amount of memory of the virtual appliance,
packages that should be installed directly, etc.
– A hard-disk configuration file that defines the size of each
available (virtual) hard disk and the number and size of all the
partitions created on these disks.
– Boot.sh: a shell script executed during the first boot of the new
appliance.
– Login.sh: a shell script executed after the first login to the new
appliance.
• One script adapts the VMBuilder configuration files.
• A second script transfers the appliance to the target host and starts
it on the specified hypervisor (sketched below).
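The two helper scripts can be sketched roughly as follows. The vmbuilder options shown (-c, --part, --firstboot, --firstlogin) are standard JeOS VMBuilder flags; the file names, configuration contents, target host, and the use of scp and libvirt are illustrative assumptions rather than the actual Cyberaide scripts.

```python
# Sketch of the two helper scripts, written in Python for illustration.
# File names, host names, paths, and the libvirt domain name are assumptions.
import subprocess

def build_appliance(memory_mb: int = 1024) -> None:
    """Adapt the VMBuilder configuration and build the appliance with JeOS VMBuilder."""
    # Adapt the basic configuration file (contents here are only illustrative).
    with open("vmbuilder.cfg", "w") as cfg:
        cfg.write(f"[DEFAULT]\narch = i386\nmem = {memory_mb}\n")
    # Build an Ubuntu JeOS image for KVM, wiring in the four configuration files.
    subprocess.check_call([
        "vmbuilder", "kvm", "ubuntu",
        "-c", "vmbuilder.cfg",        # basic configuration file
        "--part", "partitions.txt",   # virtual hard-disk / partition layout
        "--firstboot", "boot.sh",     # executed during the first boot of the appliance
        "--firstlogin", "login.sh",   # executed after the first login
    ])

def transfer_and_start(image: str, host: str) -> None:
    """Transfer the appliance image to the target host and start it on the hypervisor."""
    subprocess.check_call(["scp", image, f"{host}:/var/lib/libvirt/images/"])
    # Assumes the target host runs libvirt/KVM and an appliance domain has been defined.
    subprocess.check_call(["ssh", host, "virsh", "start", "cyberaide-appliance"])

if __name__ == "__main__":
    build_appliance()
    transfer_and_start("ubuntu-kvm/appliance.qcow2", "cloud-node-01")
```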
Cyberaide Virtual Appliance:
Build process
Test result:
Web portal on TeraGrid
Test result:
performance evaluation on TeraGrid
Metric                                          | Value
Building time (basic OS packages)               | 10 minutes
Building time (full system image)               | 20 minutes
Deployment time                                 | 15 minutes
Total time                                      | 40 to 60 minutes
Virtual machine image size (basic OS packages)  | 400 MB
Virtual machine image size (full system image)  | 2.8 GB
Our work on Cloud computing
• Cyberaide virtual appliance (CloudComp’09)
• Cyberaide creative (GridCAT’09)
• Cyberaide onServe (submitted)
• On-demand ESD (accepted as a book chapter)
• e-Molst (accepted by CCPE)
FutureGrid
• The goal of FutureGrid is to support the research that
will invent the future of distributed, grid, and cloud
computing.
• FutureGrid will build a robustly managed simulation
environment or testbed to support the development
and early use in science of new technologies at all
levels of the software stack: from networking to
middleware to scientific applications.
• The environment will mimic TeraGrid and/or general
parallel and distributed systems
• This test-bed will enable dramatic advances in science
and engineering through collaborative evolution of
science applications and related software.
FutureGrid Partners
• Indiana University
• Purdue University
• University of Florida
• University of Virginia
• University of Chicago/Argonne National Laboratory
• University of Texas at Austin/Texas Advanced Computing Center
• San Diego Supercomputer Center at University of California San Diego
• University of Southern California Information Sciences Institute, University of Tennessee Knoxville
• Center for Information Services and GWT-TUD from Technische Universität Dresden
FutureGrid Hardware
System type              | Site | # CPUs | # Cores | TFLOPS | RAM (GB) | Secondary storage (TB) | Default local file system

Dynamically configurable systems
IBM iDataPlex            | IU   | 256    | 1024    | 11     | 3072     | 335*                   | Lustre
Dell PowerEdge           | TACC | 192    | 1152    | 12     | 1152     | 15                     | NFS
IBM iDataPlex            | UC   | 168    | 672     | 7      | 2016     | 120                    | GPFS
IBM iDataPlex            | UCSD | 168    | 672     | 7      | 2688     | 72                     | Lustre/PVFS
Subtotal                 |      | 784    | 3520    | 37     | 8928     | 542                    |

Systems not dynamically configurable
Cray XT5m                | IU   | 168    | 672     | 6      | 1344     | 335*                   | Lustre
Shared memory system TBD | IU   | 40**   | 480**   | 4**    | 640**    | 335*                   | Lustre
Cell BE Cluster          |      | 4      |         |        |          |                        |
IBM iDataPlex            | UF   | 64     | 256     | 2      | 768      | 5                      | NFS
High Throughput Cluster  | PU   | 192    | 384     | 4      | 192      |                        |
Subtotal                 |      | 552    | 2080    | 21     | 3328     | 10                     |

Total                    |      | 1336   | 5600    | 58     | 10560    | 552                    |
FutureGrid Architecture
FutureGrid Architecture
• The open architecture allows resources to be configured based on
images
• Shared images allow similar experiment environments to be created
• Experiment management allows reproducible activities to be managed
• Through our “stratosphere” design we allow different clouds and
images to be “rained” upon hardware.
FutureGrid Usage Scenarios
• Developers of end-user applications who want to develop
new applications in cloud or grid environments, including
analogs of commercial cloud environments such as Amazon
or Google.
– Is a Science Cloud for me?
• Developers of end-user applications who want to
experiment with multiple hardware environments.
• Grid middleware developers who want to evaluate new
versions of middleware or new systems.
• Networking researchers who want to test and compare
different networking solutions in support of grid and cloud
applications and middleware. (Some types of networking
research will likely best be done through the GENI
program.)
• Interest in performance makes bare-metal access important.
Selected FutureGrid Timeline
• October 1 2009 Project Starts
• November 16-19 SC09 Demo/F2F Committee
Meetings
• March 2010 FutureGrid network complete
• March 2010 FutureGrid Annual Meeting
• September 2010 All hardware (except Track IIC
lookalike) accepted
• October 1 2011 FutureGrid allocatable via
TeraGrid process – first two years by
user/science board led by Andrew Grimshaw
Conclusion
• Cyberaide: a lightweight middleware for Clusters, Grids and Clouds
– http://cyberaide.org
• Cyberaide creative:
– build cyberinfrastructures on demand
• Cyberaide virtual appliance:
– deploy middleware on demand to access cyberinfrastructures
• FutureGrid: http://futuregrid.org
Future work
• Cyberaide: a lightweight middleware for Clusters, Grids and Clouds
– http://cyberaide.org
• Cyberaide creative:
– build cyberinfrastructures on demand
• Cyberaide virtual appliance:
– deploy middleware on demand to access cyberinfrastructures
Acknowledgement
• Work conducted by Gregor von Laszewski is
supported (in part) by NSF CMMI 0540076
and NSF SDCI NMI 0721656.
• FutureGrid is supported by NSF grant
#0910812 - FutureGrid:
– An Experimental, High-Performance Grid Test-bed.