
Accessing and Managing
Multiple Clouds (Infrastructures)
with Cloudmesh
June 24 2014
BigSystem 2014 - Software-Defined Ecosystems at HPDC
Vancouver Canada
Gregor von Laszewski
Fugang Wang
Geoffrey Fox
Introduction
• Cloud computing has become an integral factor for managing
infrastructure by research organizations and industry.
• Public clouds: Amazon, Microsoft, Google, Rackspace, HP, and others.
• Private clouds: set up by internal Information Technology (IT) departments
and made available as part of the general IT infrastructure
• “HPC Clouds”: Non-hypervisor or high-performance hypervisor-based
systems managed like clouds
• Can we leverage all of them?
• How do we deal with frequently changing technologies?
• Minimal changes for users who only want to run an application!
• Use “Software Defined Infrastructure” and “Software Defined
Applications”
• FutureGrid has required this capability to build different software
environments dynamically on its hardware
• Describe our Cloudmesh software approach
CloudMesh Architecture
• Tightly integrated software infrastructure toolkit to deliver
• a software-defined distributed system encompassing virtualized and bare-
metal infrastructure, networks, application, systems and platform software with
a unifying goal of providing Computing Testbeds as a Service (CTaaS).
• This system is termed Cloudmesh to symbolize:
• The creation of a tightly integrated mesh of services targeting multiple IaaS
frameworks
• The ability to federate a number of resources from academia and industry.
This includes existing FutureGrid infrastructure, Amazon Web Services, Azure,
HP Cloud, and Karlsruhe, using several IaaS frameworks
• The creation of an environment in which it becomes easier to experiment with
platforms and software services while assisting with their deployment.
• The exposure of information to guide the efficient utilization of resources.
• Cloudmesh exposes both hypervisor-based and bare-metal
provisioning to users.
• Access through command line, API, and Web interfaces.
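For example, a toy sketch of the unified-access idea behind these interfaces (every class and method name below is invented for illustration; this is not the real Cloudmesh API):

    # Toy sketch of unified multi-cloud access; all names are invented.
    class FakeDriver:
        """Stands in for an OpenStack, EC2, or Azure driver in this sketch."""
        def boot(self, image, flavor):
            return "vm({}, {})".format(image, flavor)

    class CloudMeshSketch:
        def __init__(self):
            self.clouds = {}

        def activate(self, name, driver):
            # Register a cloud under a logical name.
            self.clouds[name] = driver

        def start_vm(self, cloud, image, flavor):
            # The same call works regardless of the backing IaaS framework.
            return self.clouds[cloud].boot(image, flavor)

    mesh = CloudMeshSketch()
    mesh.activate("india_openstack", FakeDriver())
    mesh.activate("aws", FakeDriver())
    print(mesh.start_vm("india_openstack", "ubuntu-14.04", "m1.small"))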
Cloudmesh Functionality
Cloudmesh User Interface
Cloudmesh Shell & bash & IPython
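The shell wraps the same functionality in an interactive prompt that can also be scripted from bash or used inside IPython. A purely hypothetical session (the command names below are illustrative assumptions, not the documented Cloudmesh syntax):

    $ cm
    cm> cloud list
    india_openstack  sierra_eucalyptus  aws  azure
    cm> vm start --cloud india_openstack --image ubuntu-14.04
    cm> exec experiment.cm    # replay the same commands from a file
    cm> quit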
Monitoring and Metrics Interface
• Service monitoring
• Energy/temperature monitoring
• Monitoring of provisioning
• Integration with other tools
• Nagios, Ganglia, Inca, FG Metrics, Monalytics
• Accounting metrics
FutureGrid offers
Computing Testbed as a Service
• Software (Application or Usage) – SaaS
• CS research use, e.g. test a new compiler or storage model
• Class usage, e.g. run GPU & multicore
• Applications
• Platform – PaaS
• Cloud, e.g. MapReduce
• HPC, e.g. PETSc, SAGA
• Computer Science, e.g. compiler tools, sensor nets, monitors
• Infrastructure – IaaS
• Software Defined Computing (virtual clusters)
• Hypervisor, bare metal
• Operating system
• Network – NaaS
• Software Defined Networks
• OpenFlow, GENI
• FutureGrid uses Testbed-aaS tools: provisioning, image management, IaaS
interoperability, NaaS and IaaS tools, experiment management, dynamic
IaaS/NaaS, and DevOps
• CloudMesh is a CTaaS tool that uses dynamic provisioning and image
management to provide custom environments for general target systems.
This involves (1) creating, (2) deploying, and (3) provisioning of one or
more images in a set of machines on demand
Background - FutureGrid
• Many requirements originate from FutureGrid.
• This is a high performance and grid testbed that allowed scientists to collaboratively
develop and test innovative approaches to parallel, grid, and cloud computing.
• Users can deploy their own hardware and software configurations on a
public/private cloud, and run their experiments.
• Provides an advanced framework to manage user and project affiliation and
propagates this information to a variety of subsystems constituting the FutureGrid
service infrastructure. This includes operational services to deal with authentication,
authorization and accounting.
• Important features of FutureGrid:
• Metric framework that allows us to create usage reports from all of our IaaS
frameworks. Developed from systems aimed at XSEDE
• Repeatable experiments can be created with a number of tools including
Cloudmesh. Provisioning of services and images can be conducted by Rain.
• Multiple IaaS frameworks including OpenStack, Eucalyptus, and Nimbus.
• Mixed operation model: a standard production cloud that operates on-demand, but
also a set of cloud instances that can be reserved for a particular project.
• FutureGrid is coming to an end, but its CTaaS tools are preserved as Cloudmesh
Functionality Requirements
• Provide virtual machine and bare-metal management in a multi-cloud
environment with very different policies, including
• FutureGrid resources,
• External clouds from research partners,
• Public clouds,
• My own cloud
• Provide multi-cloud services and deployments controlled by users & providers
• Enable raining of
• Operating systems (bare-metal provisioning),
• Services
• Platforms
• IaaS
• Deploy and give access to monitoring infrastructure across a multi-cloud
environment
• Support management of reproducible experiments
Usability Requirements
• Provide multiple interfaces including
• command line tool and command shell
• Web portal and RESTful services
• Python API
• Deliver a toolkit that is
• open source
• extensible
• easily deployable
• documented
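As a sketch of what driving Cloudmesh through the RESTful interface could look like from Python (the endpoint URL, paths, and JSON fields here are assumptions; the REST interface is still under development, as noted on the status slide):

    import requests

    BASE = "http://localhost:5000/api"  # placeholder Cloudmesh endpoint

    # List the clouds registered for the current user (hypothetical path).
    clouds = requests.get(BASE + "/clouds").json()
    print(clouds)

    # Request a new VM on one of them (hypothetical path and fields).
    r = requests.post(BASE + "/clouds/india/vms",
                      json={"image": "ubuntu-14.04", "flavor": "m1.small"})
    print(r.status_code, r.json())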
Cloudmesh Definitions I
• Project: The research activity to be supported by Cloudmesh. A
project has roles and users assigned. The roles imply which
types of SDDS can be used by users in the project
• FutureGrid has some roles but we need to expand them
• This definition is supported by the FutureGrid portal
• User: Project participants
• Users have individual authorization roles and roles inherited from projects
with which they are involved
• Users are assigned to projects by project lead
• Public projects can be joined by any Cloudmesh user
• Experiment: The activity unit for Cloudmesh
• SDDS: Software Defined Distributed System
• SDDSL: Specification Language for SDDS; it essentially already exists
in pieces from various sources
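Since SDDSL is only characterized at a high level here, the following is an invented illustration (as a Python data structure) of the kind of information an SDDSL request would have to carry; the field names are assumptions meant to mirror the definitions on this slide, not real SDDSL:

    # Invented example of an SDDS request; not real SDDSL.
    sdds_request = {
        "project": "fg-1234",
        "user": "gvonlasz",
        "experiment": "hadoop-vs-hpc",
        "slices": [
            {"infrastructure": "india", "provisioning": "bare-metal",
             "image": "centos-6-hadoop", "nodes": 8},
            {"infrastructure": "aws", "provisioning": "hypervisor",
             "image": "ubuntu-14.04", "nodes": 4},
        ],
        "monitoring": ["ganglia", "energy"],
    }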
Cloudmesh Definitions II
• Infrastructure: Clusters: Computers, Storage, Network with
some reason to be treated as one: Infrastructure has
• Type as in different Amazon Instance Types
• Management Structure
• Provisioning rules for administrators
• Usage rules for users of particular roles
• A current state
• A time interval ranging from transient to a longer term persistence and
including a scheduled start time
• Note that storage will often need to be persistent
• Virtual Infrastructure: Dynamically defined Slices of
Infrastructure
• Federated Virtual Infrastructure is a Software Defined
Distributed System SDDS assigned to a Cloudmesh user for an
Experiment in a Project
SDDS Software Defined Distributed Systems
• Cloudmesh builds infrastructure as SDDS consisting of one or more virtual clusters or slices
with extensive built-in monitoring
• These slices are instantiated on infrastructures with various owners
• Controlled by roles/rules of Project, User, infrastructure
[Figure: a user in a project submits a request in SDDSL through the Python or
REST API. CMPlan selects a plan using the user's roles and the properties of
each candidate infrastructure (cluster, storage, network, CPS): instance type,
current state, management structure, provisioning rules, and usage rules that
depend on user roles. CMProv provisions the requested SDDS as federated
virtual infrastructures (e.g. #1 Linux, #2 Windows, #3 Linux, #4 Mac OS X),
drawing on an image and template library and applying user-role and
infrastructure-rule dependent security checks. CMExec executes the request,
CMMon monitors it, and results are returned to a repository.]
• One needs general hypervisor and bare-metal slices to support FG research
• The experiment management system is intended to integrate ISI Precip, FG
Cloudmesh, and the tools the latter invokes
• It enables reproducibility in experiments
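Reading the figure as control flow, a toy sketch of the CMPlan → CMProv → CMExec → CMMon pipeline (every function and field name below is invented for illustration; none of this is Cloudmesh source code):

    # Toy sketch of the SDDS pipeline in the figure; all names invented.
    def cm_plan(request, roles):
        # Matchmaking: keep infrastructures whose usage rules admit these roles.
        return [i for i in request["candidates"] if roles & i["allowed_roles"]]

    def cm_prov(infra, request):
        # Provision one virtual infrastructure (slice) on a chosen resource.
        return {"infra": infra["name"], "image": request["image"]}

    def cm_exec(slices):
        # Federate the provisioned slices into a single SDDS.
        return {"slices": slices, "monitored": False}

    def cm_mon(sdds):
        # Attach the monitoring infrastructure to the running SDDS.
        sdds["monitored"] = True

    request = {"image": "ubuntu-14.04",
               "candidates": [{"name": "india", "allowed_roles": {"fg-user"}},
                              {"name": "aws", "allowed_roles": {"partner"}}]}
    plan = cm_plan(request, {"fg-user"})
    sdds = cm_exec([cm_prov(i, request) for i in plan])
    cm_mon(sdds)
    print(sdds)   # one slice on "india"; the "aws" candidate is filtered out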
Cloudmesh Definitions III
• Cloudmesh Image: The software that is loaded on an
Infrastructure to provision it.
• For nodes, image is loaded on bare metal or a hypervisor
• Images created as described below
• Cloudmesh Image Template: An abstract specification of an
Image used to define an implementation that is valid across
multiple Infrastructures: three steps
• Templates as a set of one or more scripts/XML specifications
• Generic or base images that can be modified on general devops principles.
• Host specific Images
• FutureGrid has a prototype Image and Template Library
• Note that templates are the preferred model, as the template description is what
we mean by software-defined systems
• However, in some cases one may only have an image; also, provisioning
speed is improved by taking templates and pre-generating images for
particular infrastructures
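A toy sketch of the image-versus-template trade-off just described (names invented for illustration): the template remains the software-defined source, while images generated from it are cached per infrastructure to speed up provisioning.

    # Toy sketch: cache images pre-generated from templates; names invented.
    _image_cache = {}

    def build_image(template, infrastructure):
        # Stand-in for applying the template's scripts to a base image.
        return "{}@{}".format(template, infrastructure)

    def get_image(template, infrastructure):
        key = (template, infrastructure)
        if key not in _image_cache:        # generate once per infrastructure
            _image_cache[key] = build_image(template, infrastructure)
        return _image_cache[key]           # later requests hit the cache

    print(get_image("hadoop-worker", "india"))   # built on first request
    print(get_image("hadoop-worker", "india"))   # served from the cache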
Cloudmesh Definitions IV
• Cloudmesh Matchmaker CMPlan chooses appropriate
Infrastructures that can be used by CMProv to satisfy a
user-requested SDDS (not implemented)
• CloudMesh Provisioner CMProv takes a user request in
SDDSL and a chosen Infrastructure and provisions that
infrastructure in accordance with user roles, the Infrastructure's
current state, and its management, usage, and provisioning rules, and
generates the requested virtual infrastructure
• CMProv uses appropriate Cloudmesh Images and Templates and
capabilities of Cloudmesh depend on availability of appropriate
images/templates
• CMExec produces the users’ requested SDDS as a federation of
Virtual Infrastructures created by CMProv
• CMMon sets up monitoring and experiment management
infrastructure (incomplete)
CloudMesh Administrative View of SDDS aaS
• CM-BMPaaS (Bare Metal Provisioning aaS) is a systems view and
allows Cloudmesh to dynamically generate anything and assign it as
permitted by user role and resource policy
• FutureGrid machines India, Bravo, Delta, Sierra, Foxtrot are like this
• Note this only implies user level bare metal access if given user is
authorized and this is done on a per machine basis
• It does imply dynamic retargeting of nodes to typically safe modes of
operation (approved machine images) such as switching back and forth
between OpenStack, OpenNebula, HPC on Bare metal, Hadoop etc.
• CM-HPaaS (Hypervisor based Provisioning aaS) allows Cloudmesh
to generate "anything" on the hypervisor allowed for a particular user
• Platform determined by images available to user
• Amazon, Azure, HPCloud, Google Compute Engine
• CM-PaaS (Platform as a Service) makes available an essentially fixed
Platform with configuration differences
• XSEDE with MPI HPC nodes could be like this as is Google App Engine and
Amazon HPC Cluster. Echo at IU (ScaleMP) is like this
• In such a case a system administrator can statically change base system but
the dynamic provisioner cannot
CloudMesh User View of SDDS aaS
• Note we always consider virtual clusters or slices with nodes
that may or may not have hypervisors
• BM-IaaS: Bare Metal (root access) Infrastructure as a
service with variants e.g. can change firmware or not
• H-IaaS: Hypervisor-based Infrastructure (Machine) as a
Service. The user is provided a collection of hypervisors on which to
build a system.
• Classic Commercial cloud view
• PSaaS: Physical or Platformed System as a Service, where the
user provides a configured image on either bare metal or a
hypervisor
• User could request a deployment of Apache Storm and Kafka to control
a set of devices (e.g. smartphones)
Cloudmesh Infrastructure Types
• Nucleus Infrastructure:
• Persistent Cloudmesh Infrastructure with defined provisioning rules and
characteristics and managed by CloudMesh
• Federated Infrastructure:
• Outside infrastructure that can be used by special arrangement such as
commercial clouds or XSEDE
• Typically persistent and often batch scheduled
• CloudMesh can use it within prescribed provisioning rules, with users
restricted to those with permitted access; interoperable templates allow
images to be shared with the nucleus
• Contributed Infrastructure
• Outside contributions to a particular Cloudmesh project managed by
Cloudmesh in this project
• Typically strong user role restrictions – users must belong to a particular
project
• Can implement a PlanetLab-like environment by contributing hardware that
can be generally used with bare-metal provisioning
Architecture
• Cloudmesh Management Framework for monitoring and operations, user and
project management, experiment planning, and deployment of services needed
by an experiment
• Provisioning and execution environments to be deployed on (or interfaced
with) resources to enable experiment management
• Resources: FutureGrid, SDSC Comet, IU Juliet
Building Blocks of Cloudmesh
• Internally uses Libcloud and Cobbler
• Accesses external systems/standards via abstractions
• OpenPBS, Chef
• OpenStack (including tools like Heat), AWS EC2, Eucalyptus,
Azure
• XSEDE user management (AMIE) via FutureGrid
• Implementing Slurm, OCCI, Ansible, Puppet
• Evaluating Razor, Juju, xCAT (the original Rain used this),
Foreman
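Since Libcloud is the internal workhorse for VM access, here is a minimal example of the kind of call Cloudmesh builds on (the credentials are placeholders; the region keyword reflects recent Libcloud versions):

    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    EC2 = get_driver(Provider.EC2)
    driver = EC2("ACCESS_KEY_ID", "SECRET_KEY", region="us-east-1")

    # One unified API across providers: the same calls work for other drivers.
    for node in driver.list_nodes():
        print(node.name, node.state, node.public_ips)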
User and Project Management
• FutureGrid user and project services simplify the application
processes needed to obtain user accounts and projects.
• We have demonstrated in FutureGrid the ability to create
accounts in a very short time, including vetting projects and
users – allowing fast turn-around times for the majority of
FutureGrid projects with an initial startup allocation.
• We have also shown that we can integrate with other services for user
management, such as XSEDE; we also have access to the technical team
that integrated OSG into XSEDE and to the XSEDE TAS project
• Cloudmesh re-uses this infrastructure and also allows users to
manage proxy accounts to federate to other IaaS services to
provide an easy interface to integrate them.
Experiment Planning - Future
• Imagine a shopping cart which will allow checking out
of predefined repeatable experiment templates.
• A cost is associated with making an experiment
• Clearing house of images
• Clearing house of complex deployments.
• Integrated accounting framework allowing a usage cost model
• The cost model will be based not only on the number of core hours
used, but also on the capabilities of the resource, the time, and the
special support it takes to set up the experiment. We will
expand upon the metrics framework of FutureGrid that allows
measuring of VM and HPC usage and associate this with cost
models. Benchmarks will be used to normalize the charge
models.
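As a worked illustration of such a charge model (all rates and factors below are invented for the example; they are not FutureGrid's actual charges):

    # Invented charge model: core hours scaled by resource capability and
    # time of day, normalized by a benchmark score, plus setup support.
    def experiment_cost(core_hours, capability_factor, time_factor,
                        setup_hours, base_rate=0.05, support_rate=50.0,
                        benchmark_norm=1.0):
        compute = core_hours * base_rate * capability_factor * time_factor
        return compute / benchmark_norm + setup_hours * support_rate

    # 1000 core hours on a GPU node (1.8x capability), off-peak (0.8x),
    # with 2 hours of setup support: 1000*0.05*1.8*0.8 + 2*50 = 172.0
    print(experiment_cost(1000, 1.8, 0.8, 2))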
Cloudmesh Provisioning and Execution
• Bare-metal Provisioning
• Originally developed a provisioning framework in FutureGrid based on xCAT and
Moab. (Rain)
• Due to limitations and significant changes between versions we replaced it with a
framework that allows the utilization of different bare-metal provisioners.
• At this time we have provided an interface for cobbler and are also targeting an
interface to OpenStack Ironic.
• Virtual Machine Provisioning
• An abstraction layer to allow the integration of virtual machine management APIs
based on the native IaaS service protocols. This helps in exposing features that
are otherwise not accessible when quasi-standard protocols such as EC2 are used
on non-AWS IaaS frameworks. It also avoids limitations that exist in current
implementations, such as using libcloud with OpenStack (see the sketch at the
end of this slide).
• Network Provisioning (Future)
• Utilize networks offering various levels of control, from standard IP connectivity to
completely configurable SDNs as novel cloud architectures will almost certainly
leverage NaaS and SDN alongside system software and middleware. FutureGrid
resources will make use of SDN using OpenFlow whenever possible though the
same level of networking control will not be available in every location.
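A minimal sketch of such an abstraction layer, as referenced above; the class and method names are invented for illustration and are not Cloudmesh's actual internals:

    from abc import ABC, abstractmethod

    class VMProvider(ABC):
        """Uniform interface over native IaaS protocols (invented names)."""

        @abstractmethod
        def boot(self, image, flavor):
            """Start a VM and return an identifier."""

        @abstractmethod
        def list_vms(self):
            """Return identifiers of running VMs."""

    class OpenStackNativeProvider(VMProvider):
        """Speaks the native Nova protocol, so OpenStack-only features
        stay accessible instead of being hidden behind an EC2 facade."""
        def boot(self, image, flavor):
            return "nova:{}:{}".format(image, flavor)  # stand-in for a Nova call

        def list_vms(self):
            return []                                  # stand-in for a Nova call

    class EC2Provider(VMProvider):
        """Speaks the EC2 protocol for AWS and EC2-compatible clouds."""
        def boot(self, image, flavor):
            return "ec2:{}:{}".format(image, flavor)   # stand-in for an EC2 call

        def list_vms(self):
            return []                                  # stand-in for an EC2 call

    provider = OpenStackNativeProvider()
    print(provider.boot("ubuntu-14.04", "m1.small"))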
Provisioning – Cont’d
• Storage Provisioning (Future)
• Bare-metal provisioning allows storage provisioning and
making it available to users
• Platform, IaaS, and Federated Provisioning (Current
& Future)
• Integration of Cloudmesh shell scripting, and the utilization of
DevOps frameworks such as Chef or Puppet.
• Resource Shifting (Current & Future)
• We demonstrated via Rain the shift of resources allocations
between services such as HPC and OpenStack or Eucalyptus.
• Developing intuitive user interfaces as part of Cloudmesh that
assist administrators and users through role and project based
authentication to move resources from one service to another.
Testing Resource Federation
• We successfully federated resources from
• Azure
• Any EC2 cloud
• AWS
• HP Cloud
• Karlsruhe Institute of Technology Cloud
• four FutureGrid clouds
• Various versions of OpenStack and Eucalyptus.
• It would be possible to federate with other clouds that run other
infrastructure such as Tashi or Nimbus.
• Integration with OpenNebula is desirable due to strong EU importance
CMMon Monitoring Components of CloudMesh
• Leverage best practices and expertise from projects including
FutureGrid and XSEDE now and with GENI possible in future
• Provide transparency of the infrastructure and deep, pervasive
instrumentation capabilities (bare metal up to application level)
• Commercial cloud monitoring focuses on load monitoring (app auto-scaling)
• Available to user experiments through the proposed shopping cart interface
• Easily configurable and extensible
• Other aspects
• Benchmarks
• Security monitoring
• Energy monitoring
Cloudmesh Monitoring and Accounting
• Cloudmesh must be able to access empirical data about the properties
and performance of the underlying infrastructure beyond what is
available from commercial cloud environments. The component of
Cloudmesh accomplishing this is called Cloud Metrics.
• We developed a federated cloud metric service that aggregates the
information from distributed clusters and a variety of heterogeneous
IaaS services, such as OpenStack, Eucalyptus, and Nimbus. The main
components of Cloudmesh Metrics enable
• (a) the measurement of the resource allocation across several IaaS platforms
• (b) the generation of data in regards to utilization
• (c) the comparison of data via definable metrics to mine the usage statistics
• (d) the display of the information through a convenient user interface
• (e) the availability of a simple command line interface and shell language, and
• (f) the automatic creation of resource reports in printed format for arbitrary time
periods.
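As a sketch of the kind of aggregation step (b) implies (the record format and function name are assumptions made for illustration):

    from collections import defaultdict

    def aggregate_utilization(records):
        """Sum per-cloud core hours from usage records collected from
        several IaaS services (record format is an assumption)."""
        totals = defaultdict(float)
        for r in records:
            totals[r["cloud"]] += r["cores"] * r["hours"]
        return dict(totals)

    print(aggregate_utilization([
        {"cloud": "india_openstack", "cores": 4, "hours": 12.5},
        {"cloud": "sierra_eucalyptus", "cores": 2, "hours": 3.0},
    ]))
    # -> {'india_openstack': 50.0, 'sierra_eucalyptus': 6.0}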
Operations Monitoring

Type of Monitoring | Tools Used | Types of Experiments
Physical host monitoring | Ganglia | Performance evaluation of domain science applications.
Energy monitoring | IPMI | Power/thermally driven data center & scheduling algorithms, consolidation, and mobile experiments.
Network monitoring | perfSONAR, Periscope | Essential for experiments ranging from HPC, in which messaging patterns and fabric contention are significant to performance, to distributed computing, in which data movement is a key cost.
IaaS monitoring | Synaps, Stackwatch | Auto-scaling experiments.
Low-level IaaS monitoring | Libvirt, libpcap | Experiments that are performance or energy oriented.
Application performance monitoring | PAPI/PAPI-V | Application performance analysis, including comparisons between virtual and bare-metal performance, as well as "steal time," i.e., the time used by other VMs in the cloud that might be included in "my" per-process timing results.
Integrated monitoring with analytics | Monalytics | Scalable distributed behavior monitoring, debugging, and anomaly detection in large-scale multi-tier, multi-runtime applications.
Operational infrastructure monitoring and accounting | Inca, IU metrics, Nagios | Adaptive application simulation experiments driven by real-world trace data (e.g., service uptime, usage).
CloudMesh Status
• First version of Cloudmesh released with a focus on the
development of three of its components. This includes
• virtual machine management in multi-clouds
• cloud metrics in multi-clouds
• and bare-metal provisioning.
• Cloudmesh has been successfully used in FutureGrid. A GUI
and a Cloudmesh shell are available for easy use.
• It has been used by users who deploy it on their local machines
• It has also been demonstrated as a hosted service.
• A RESTful interface to the management functionality is under
development.
• Cloudmesh is an open source project. It uses Python and
JavaScript.
• WE ARE OPEN, CONTACT [email protected] TO JOIN
Related Work – API IaaS libraries
(Python)
• Boto
• An integrated interface to current and future infrastructural services offered
by Amazon Web Services.
• Lots of interfaces to many services offered by AWS
• Apache libcloud
• Python library for interacting with many of the popular cloud service
providers using a unified API.
• Has limited image management functionality in EC2
• Has support for many providers. If one uses OpenStack, one should not use
the EC2 interface but the OpenStack-based interface.
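For example, selecting the native OpenStack driver in Libcloud looks like this (the endpoint and credentials are placeholders):

    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    # Native OpenStack driver rather than the EC2 compatibility layer.
    OpenStack = get_driver(Provider.OPENSTACK)
    driver = OpenStack(
        "username", "password",
        ex_force_auth_url="http://keystone.example.org:5000",
        ex_force_auth_version="2.0_password",
        ex_tenant_name="demo",
    )
    print(driver.list_images())  # full OpenStack image metadata is available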
Related Work - Phantom
• Phantom is a tool targeting users of IaaS
• Monitors the health of resources and automatically provisions and
configures new ones based on demand
• This is good for an individual user, but limits the flexibility for the
administrator of a cloud.
• Uses libcloud which has limitations
• How is Cloudmesh different?
• Not only EC2 clouds: Nova, Azure
• Support for native IaaS protocols not only libcloud
• Access to bare-metal provisioning.
Related - RightScale
• RightScale enables users to manage multi-cloud infrastructure
• Amazon Web Services (AWS)
• Rackspace Cloud
• Windows Azure
• Google Compute Engine
• Migrating workloads between private clouds and public clouds.
• It also offers a cloud cost estimator, allowing customers to
assess the expenses they are charged by comparing their workload
across various cloud providers.
• Cloudmesh offers users full control of the system
Other Related Efforts
• Cloud federation such as efforts planned for future versions of
OpenStack.
• Standards efforts
• Provide an interesting approach to multi-cloud interoperability
• Standards are good, but as libcloud shows, libraries that are de facto
standards have limitations (EC2 image management).
• They can limit the rapid innovation brought forward by individual IaaS
offerings, e.g., OpenStack (Nova) vs. EC2
• OCCI is an example of a standards effort not based at the API level.
Conclusions
• Design of a toolkit called Cloudmesh that allows access to multiple
clouds through convenient interfaces. This includes a command line, a
command shell, REST, as well as a graphical user interface.
• Cloudmesh is under active development and has shown its viability for
accessing more than EC2-based clouds. Native interfaces to
OpenStack, Azure, as well as any EC2-compatible cloud have been
delivered and virtual machine management enabled.
• An important contribution of Cloudmesh is that it provides a
sophisticated interface to bare-metal provisioning capabilities that can
be used not only by administrators, but also by authorized users. A
role-based authorization service makes this possible.
• Furthermore, we have developed a multi-cloud metrics framework that
leverages information from various IaaS frameworks.
• Future enhancements will include network and storage provisioning.
• PLEASE JOIN CLOUDMESH DEVELOPMENT ….