Science Clouds and Their Innovative Applications
July 10, 2012
2012 Second International Workshop on Cloud Computing and Future Internet
July 9 and 10, 2012, 2nd Floor Conf. Room, FIT Building, Tsinghua University, Beijing, China
Geoffrey Fox [email protected]
Informatics, Computing and Physics, Indiana University Bloomington
https://portal.futuregrid.org

Science Computing Environments
• Large Scale Supercomputers
– Multicore nodes linked by a high performance, low latency network
– Increasingly with GPU enhancement
– Suitable for highly parallel simulations
• High Throughput Systems such as the European Grid Initiative (EGI) or Open Science Grid (OSG), typically aimed at pleasingly parallel jobs
– Can use "cycle stealing"
– Classic example is LHC data analysis
• Grids federate resources, as in EGI/OSG, or enable convenient access to multiple backend systems including supercomputers
– Portals make access convenient, and workflow integrates multiple processes into a single job
• Specialized machines for visualization, shared memory parallelization, etc.

Clouds and Grids/HPC
• Synchronization/communication performance: Grids > Clouds > Classic HPC systems (i.e. synchronization and communication cost most on grids and least on classic HPC)
• Clouds naturally execute Grid workloads effectively, but are a less clear fit for closely coupled HPC applications
• Service Oriented Architectures, portals and workflow appear to work similarly in both grids and clouds
• For the immediate future, science may be supported by a mixture of
– Clouds – with some practical differences between private and public clouds in size and software
– High Throughput Systems (moving to clouds as convenient)
– Grids for distributed data and access
– Supercomputers ("MPI Engines") going to exascale

What Applications Work in Clouds
• Pleasingly parallel applications of all sorts, with roughly independent data or spawning independent simulations
– Long tail of science and integration of distributed sensors
• Commercial and science data analytics that can use MapReduce (some such applications) or its iterative variants (most other data analytics applications)
• Which science applications are using clouds?
– Many demonstrations – conferences, OOI, HEP, ….
– Venus-C (Azure in Europe): 27 applications, not using a scheduler, workflow or MapReduce (except roll your own)
– 50% of applications on FutureGrid are from Life Science, but there are more computer science projects than application projects in total
– Locally, the Lilly corporation is a major commercial cloud user (for drug discovery) but the Biology department is not

Parallelism over Users and Usages
• The "long tail of science" can be an important usage mode of clouds.
• In some areas like particle physics and astronomy, i.e. "big science", there are just a few major instruments generating now-petascale data, driving discovery in a coordinated fashion.
• In other areas such as genomics and environmental science, there are many "individual" researchers with distributed collection and analysis of data, whose total data and processing needs can match the size of big science.
• Clouds can conveniently provide scalable resources for this important aspect of science.
• This can be a map-only use of MapReduce if the different usages are naturally linked, e.g. exploring docking of multiple chemicals or alignment of multiple DNA sequences
– Collecting together or summarizing the multiple "maps" is a simple reduction (sketched below)
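Since the map-only pattern above carries most of the "long tail" workloads, here is a minimal sketch of it in Python. It is an illustration only: the standard multiprocessing module stands in for a cloud's independent workers, and docking_score is a hypothetical placeholder for a real docking or alignment code.

```python
# Minimal sketch of the "map only" pattern with a trivial reduction.
# docking_score is a hypothetical stand-in for an independent task such as
# docking one ligand or aligning one sequence.
from multiprocessing import Pool


def docking_score(ligand):
    # Placeholder for an embarrassingly parallel computation; a real run
    # would invoke a docking or alignment code here.
    return ligand, float(len(ligand) % 7) / 7.0


def summarize(results):
    # "Reduce" step: collecting together or summarizing the independent maps.
    best = max(results, key=lambda kv: kv[1])
    return {"count": len(results), "best_ligand": best[0], "best_score": best[1]}


if __name__ == "__main__":
    ligands = ["lig-%04d" % i for i in range(1000)]   # independent inputs
    with Pool() as pool:
        results = pool.map(docking_score, ligands)    # map: no communication
    print(summarize(results))                          # simple reduction
```

Because each map is independent, a failed task can simply be re-run without any synchronization with the others, which is the property that makes this pattern so cloud-friendly.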
27 Venus-C Azure Applications
• Civil Protection (1): Fire risk estimation and fire propagation
• Biodiversity & Biology (2): Biodiversity maps in marine species; Gait simulation
• Chemistry (3): Lead optimization in drug discovery; Molecular docking
• Civil Eng. and Arch. (4): Structural analysis; Building information management; Energy efficiency in buildings; Soil structure simulation
• Physics (1): Simulation of galaxies configuration
• Earth Sciences (1): Seismic propagation
• Mol., Cell. & Gen. Bio. (7): Genomic sequence analysis; RNA prediction and analysis; Systems biology; Loci mapping; Micro-array quality
• ICT (2): Logistics and vehicle routing; Social networks analysis
• Medicine (3): Intensive Care Unit decision support; IM radiotherapy planning; Brain imaging
• Mathematics (1): Computational algebra
• Mech., Naval & Aero. Eng. (2): Vessel monitoring; Bevel gear manufacturing simulation
(VENUS-C Final Review: The User Perspective, 11-12/7, EBC Brussels)

Internet of Things and the Cloud
• It is projected that there will be 24 billion devices on the Internet by 2020. Most will be small sensors that send streams of information into the cloud, where it will be processed and integrated with other streams and turned into knowledge that will help our lives in a multitude of small and big ways.
• It is not unreasonable to believe that we will each have our own cloud-based personal agent that monitors all of the data about our life and anticipates our needs 24x7.
• The cloud will become increasingly important as a controller of, and resource provider for, the Internet of Things.
• As well as today's use for smart phone and gaming console support, "smart homes" and "ubiquitous cities" build on this vision, and we could expect a growth in cloud-supported/controlled robotics.
• Natural parallelism over "things"
Internet of Things: Sensor Grids – A Pleasingly Parallel Example on Clouds
• A sensor ("Thing") is any source or sink of a time series
– In the thin client era, smart phones, Kindles, tablets, Kinects and web-cams are sensors
– Robots and distributed instruments, such as environmental measurement devices, are sensors
– Web pages, Google Docs, Office 365 and WebEx are sensors
– Ubiquitous cities/homes are full of sensors
• They have an IP address on the Internet
• Sensors – being intrinsically distributed – are Grids
• However, the natural implementation uses clouds to consolidate, control and collaborate with sensors
• Sensors are typically "small" and have pleasingly parallel cloud implementations

Sensors as a Service
[Diagram: sensors (and larger composite sensors) feed Sensors as a Service, whose output goes to Sensor Processing as a Service, which could use MapReduce]
• Open Source Sensor (IoT) Cloud: https://sites.google.com/site/opensourceiotcloud/

Sensor Cloud Architecture
• Originally the brokers were from NaradaBrokering
• Replacing them with ActiveMQ and Netty for streaming

Pub/Sub Messaging
• At its core, the Sensor Cloud is a pub/sub system
• Publishers send data to topics with no information about potential subscribers
• Subscribers subscribe to topics of interest and similarly have no knowledge of the publishers (a toy sketch follows)
• URL: https://sites.google.com/site/opensourceiotcloud/
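To make the decoupling concrete, here is a toy in-process sketch of topic-based pub/sub in Python. It is not the NaradaBrokering or ActiveMQ API; the Broker class and the topic name are illustrative only.

```python
# Toy topic-based pub/sub broker: publishers and subscribers share only
# topic names, never references to each other (the decoupling described above).
from collections import defaultdict
from typing import Callable, DefaultDict, List


class Broker:
    def __init__(self) -> None:
        self._subs: DefaultDict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, message: dict) -> None:
        # The publisher has no knowledge of who (if anyone) is listening.
        for callback in self._subs[topic]:
            callback(message)


if __name__ == "__main__":
    broker = Broker()
    # A "sensor" publishes a time series sample; a client subscribes to the topic.
    broker.subscribe("gps/device42", lambda m: print("got fix:", m))
    broker.publish("gps/device42", {"t": 0.0, "lat": 39.17, "lon": -86.52})
```

In the real Sensor Cloud a distributed broker plays the role of this in-process Broker class, so delivery latency depends on broker count and placement, which is what the measurements on the following slides explore.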
Some Typical Results
• GPS sensor (1 message per second, 1460-byte packets)
• Low-end video sensor (10 per second, 1024-byte packets)
• High-end video sensor (30 per second, 7680-byte packets)
• All with the NaradaBrokering pub-sub system – no longer the best choice

GPS Sensor: Multiple Brokers in Cloud
[Chart: latency (ms) versus number of clients (100 to 3000) for 1, 2 and 5 brokers; latencies stay roughly in the 0–120 ms range, with more brokers supporting more clients at a given latency]

High-end Video Sensor
[Chart: latency (ms) versus number of clients (100 to 1500) for 1, 2 and 5 brokers; latencies range up to roughly 700 ms]

Classic HPC, Clouds, and HPC + Clouds
• Classic Parallel Computing (HPC): typically SPMD (Single Program Multiple Data) "maps", processing particles or mesh points, interspersed with a multitude of low latency messages supported by specialized networks such as InfiniBand and technologies like MPI
– Often run large capability jobs with 100K (going to 1.5M) cores on the same job
– National DoE/NSF/NASA facilities run at 100% utilization
– Fault fragile and cannot tolerate "outlier maps" taking longer than others
• Clouds: MapReduce has asynchronous maps, typically processing data points, with results saved to disk; a final reduce phase integrates results from the different maps
– Fault tolerant and does not require map synchronization
– Map-only is a useful special case
• HPC + Clouds: Iterative MapReduce caches results between "MapReduce" steps and supports SPMD parallel computing with large messages, as seen in parallel kernels (linear algebra) in clustering and other data mining

4 Forms of MapReduce
(a) Map Only: BLAST analysis, parametric sweeps, pleasingly parallel problems
(b) Classic MapReduce: High Energy Physics (HEP) histograms, distributed search
(c) Iterative MapReduce: expectation maximization, clustering (e.g. Kmeans), linear algebra, PageRank
(d) Loosely Synchronous: classic MPI, PDE solvers, particle dynamics
• (a)–(c) are the domain of MapReduce and its iterative extensions – the target of Science Clouds; (d) is the domain of MPI, heading to exascale

Commercial "Web 2.0" Cloud Applications
• Internet search, social networking, e-commerce, cloud storage
• These are larger systems than used in HPC, with huge levels of parallelism coming from
– Processing of lots of users, or
– An intrinsically parallel Tweet or Web search
• Classic MapReduce is suitable (although the PageRank component of search is parallel linear algebra)
• Data intensive
• Do not need microsecond messaging latency

Data Intensive Applications
• Applications tend to be new and so can consider emerging technologies such as clouds
• Do not have lots of small messages, but rather large reduction (aka collective) operations
– New optimizations, e.g. for huge messages
– e.g. Expectation Maximization (EM) has a few broadcasts but is dominated by reductions
• Not clearly a single exascale job, but rather many smaller (though not sequential) jobs, e.g. to analyze groups of sequences
• Algorithms are not clearly robust enough to analyze lots of data
– Current standard algorithms, such as those in R, were not designed for big data
• Multidimensional Scaling (MDS) is iterative rectangular matrix-matrix multiplication controlled by EM
• Look in detail at Deterministically Annealed Pairwise Clustering (DA-PWC) as an EM example

Intermediate Step in DA-PWC
[Figure: MDS projection from high dimensional space to 3D at an intermediate step with 6 clusters; each of the 100K points is a sequence and the clusters are fungi families; there are 140 clusters at the end of the iteration]
• N = 100K points takes 10^5 core hours
• Scales between O(N) and O(N^2)

DA-PWC EM Steps (k runs over clusters; i, j over points; the slide marks Expectation (E) steps in red and Maximization (M) steps in black)
1) A(k) = -0.5 \sum_{i=1}^{N} \sum_{j=1}^{N} \delta(i,j) \langle M_i(k)\rangle \langle M_j(k)\rangle / \langle C(k)\rangle^2
2) B_i(k) = \sum_{j=1}^{N} \delta(i,j) \langle M_j(k)\rangle / \langle C(k)\rangle
3) \varepsilon_i(k) = B_i(k) + A(k)
4) \langle M_i(k)\rangle = \exp(-\varepsilon_i(k)/T) / \sum_{k'=1}^{K} \exp(-\varepsilon_i(k')/T)
5) \langle C(k)\rangle = \sum_{i=1}^{N} \langle M_i(k)\rangle
where \delta(i,j) is the distance between points i and j and T is the annealing temperature.
• Parallelize by distributing the points i across processes (see the sketch below)
• Clusters k in the simplest case are parameters held by all tasks – this fails when k reaches ~10,000; a real challenge for an automatic parallelizer!
• Either broadcasts of \langle M_i(k)\rangle and/or reductions
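A compact serial NumPy sketch of one E/M sweep of these updates is given below. It follows the formulas as reconstructed above (delta as the distance matrix, T as the temperature) but is an illustration, not the production DA-PWC code; the annealing schedule at the bottom is an arbitrary example.

```python
# Serial NumPy sketch of one DA-PWC EM sweep at temperature T.
# delta is the N x N pairwise distance matrix; M is the N x K matrix of
# cluster assignment probabilities <M_i(k)>.  Illustrative only.
import numpy as np


def dapwc_sweep(delta, M, T):
    C = M.sum(axis=0)                            # 5) <C(k)>  (a reduction over points)
    B = delta @ M / C                            # 2) B_i(k) = sum_j delta(i,j) <M_j(k)> / <C(k)>
    A = -0.5 * np.einsum("ik,ik->k", M, B) / C   # 1) A(k), reusing B to avoid the double sum
    eps = B + A                                  # 3) eps_i(k) = B_i(k) + A(k)
    logits = -eps / T
    logits -= logits.max(axis=1, keepdims=True)  # numerical stabilization
    W = np.exp(logits)
    return W / W.sum(axis=1, keepdims=True)      # 4) new <M_i(k)>, normalized over clusters


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    delta = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # toy distances
    M = rng.dirichlet(np.ones(4), size=200)                          # N=200, K=4
    for T in (2.0, 1.0, 0.5, 0.25):              # crude annealing schedule
        for _ in range(20):
            M = dapwc_sweep(delta, M, T)
    print(M.argmax(axis=1)[:10])
```

In a parallel run the rows of delta and M would be distributed over processes; the sums over i then become reductions and the updated assignments (or at least C) must be broadcast each iteration, exactly the broadcast-plus-reduction pattern the slide points out.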
Twister for Data Intensive Iterative Applications
[Diagram: each iteration broadcasts the smaller loop-variant data, runs Map (compute) tasks against the larger loop-invariant data, then performs a Reduce/Collective communication and barrier before the new iteration; generalize to arbitrary collectives]
• (Iterative) MapReduce structure with Map-Collective is the framework
• Twister runs on Linux or Azure
• Twister4Azure is built on top of Azure tables, queues and storage

MDS Azure 128 cores
[Charts: task execution time (s) versus map task ID, and number of executing map tasks versus elapsed time (s), for the MDSBCCalc and MDSStressCalc phases]
• Note that fluctuations limit performance
• Each step is two (a blue followed by a red) rectangular matrix multiplications

MDS Azure 128 cores: Twister4Azure versus Twister
[Charts: top – weak scaling of execution time (s) versus (number of cores × number of data points); bottom – execution time per block on 128 cores versus number of data points (102400 to 204800); both compare Twister4Azure, Twister and Twister4Azure Adjusted]
• Top is weak scaling
• Bottom is 128 cores with varying data size
• Twister runs on non-virtualized Linux
• "Adjusted" takes out the sequential performance difference

What to Use in Clouds: Cloud PaaS
• HDFS-style file system to collocate data and computing
• Queues to manage multiple tasks
• Tables to track job information
• MapReduce and Iterative MapReduce to support parallelism
• Services for everything
• Portals as the user interface
• Appliances and roles as customized images
• Software tools like Google App Engine, memcached
• Workflow to link multiple services (functions)
• Data-parallel languages like Pig; more successful than HPF?

What to Use in Grids and Supercomputers? HPC PaaS
• Services, portals and workflow, as in clouds
• MPI and GPU/multicore threaded parallelism
• GridFTP and high speed networking
• Wonderful libraries supporting parallel linear algebra, particle evolution, partial differential equation solution
• Globus, Condor, SAGA, Unicore, Genesis II for grids
• Parallel I/O for high performance in an application
• Wide-area file systems (e.g. Lustre) supporting file sharing
• This is a rather different style of PaaS from clouds – should we unify them?

Is PaaS a Good Idea?
• If you have existing code, PaaS may not be very relevant immediately
– You just need IaaS to put the code on clouds
• But surely it must be good to offer high level tools?
• For example, Twister4Azure is built on top of Azure tables, queues and storage
• Historically, HPCC 1990-2000 built MPI, libraries, (parallel) compilers ..
• Grids 2000-2010 built federation, scheduling, portals and workflow
• Clouds 2010-…. have a fresh interest in powerful programming models

aaS versus Roles/Appliances
• If you package a capability X as XaaS, it runs on a separate VM and you interact with it via messages
– SQLaaS offers databases via messages, similar to the old JDBC model
• If you build a role or appliance with X, then X is built into the VM and you just need to add your own code and run
– A generalized worker role builds in I/O and scheduling (toy sketch below)
• Let's take all capabilities – MPI, MapReduce, Workflow .. – and offer them as roles or aaS (or both)
• Perhaps workflow has a controller offered aaS with a graphical design tool, while the runtime is packaged in a role?
• Need to think through the packaging of parallelism (virtual clusters)
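To make the "generalized worker role" idea concrete, here is a deliberately simplified queue-driven worker in Python. The in-process queue and dict stand in for cloud queue and table services (as in the Cloud PaaS list above); none of this is the Azure API, and the handler is the only part a user would supply.

```python
# Toy "generalized worker role": pull tasks from a queue, run a user-supplied
# handler, record results in a table.  queue.Queue and a dict stand in for
# cloud queue and table services; this is an illustration, not a cloud SDK.
import queue
import threading


def worker_role(tasks, table, handler):
    while True:
        task = tasks.get()
        if task is None:            # sentinel: shut the role down
            tasks.task_done()
            return
        try:
            table[task["id"]] = {"status": "done", "result": handler(task["payload"])}
        except Exception as exc:    # design for failure: record, don't crash the role
            table[task["id"]] = {"status": "failed", "error": repr(exc)}
        tasks.task_done()


if __name__ == "__main__":
    tasks = queue.Queue()
    table = {}

    def sum_payload(payload):       # the user-supplied work
        return sum(payload)

    threads = [threading.Thread(target=worker_role, args=(tasks, table, sum_payload))
               for _ in range(4)]
    for t in threads:
        t.start()
    for i in range(10):
        tasks.put({"id": i, "payload": list(range(i))})
    for _ in threads:
        tasks.put(None)
    tasks.join()
    print(table)
```

Packaged as a role, the scheduling loop and queue/table I/O come prebuilt with the VM image, and only the handler changes per application.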
Private Clouds
• Defined here as non-commercial clouds used to support science
• What does it take to make private cloud platforms competitive with commercial systems?
• Plenty of work at the VM management level with Eucalyptus, Nimbus, OpenNebula, OpenStack
– Only now maturing
– Nimbus and OpenNebula are pretty solid but not widely adopted in the USA
– OpenStack and Eucalyptus have had recent major improvements
• Open source PaaS tools like Hadoop, Hbase, Cassandra, Zookeeper exist but are not integrated into the platform
• Need dynamic resource management in a "not really elastic" environment of limited size
• Federation of distributed components (as in grids) to make a decent-sized system

Architecture of Data Repositories?
• Traditionally governments set up repositories for data associated with particular missions
– For example EOSDIS (Earth Observation), GenBank (Genomics), NSIDC (Polar science), IPAC (Infrared astronomy)
– LHC/OSG computing grids for particle physics
• This is complicated by the volume of the data deluge, distributed instruments as in gene sequencers (maybe centralize?) and the need for intense computing like BLAST
– i.e. repositories need lots of computing?

Clouds as Support for Data Repositories?
• The data deluge needs cost-effective computing
– Clouds are by definition cheapest
– Need data and computing co-located
• Shared resources are essential (to be cost effective and large)
– Can't have every scientist downloading petabytes to a personal cluster
• Need to reconcile distributed (initial sources of) data with shared analysis
– Can move data to (discipline-specific) clouds
– How do you deal with multi-disciplinary studies?
• Data repositories of the future will have cheap data and elastic cloud analysis support?
– Hosted free if the data can be used commercially?

Using Science Clouds in a Nutshell
• High Throughput Computing; pleasingly parallel and grid applications
• Multiple users (long tail of science) and usages (parameter searches)
• Internet of Things (sensor nets), as in cloud support of smart phones
• (Iterative) MapReduce, including "most" data analysis
• Exploiting elasticity and platforms (HDFS, object stores, queues ..)
• Use worker roles, services, portals (gateways) and workflow
• Good strategies:
– Build the application as a service
– Build on existing cloud deployments such as Hadoop
– Use PaaS if possible
– Design for failure (see the retry sketch below)
– Use aaS offerings (e.g. SQLaaS) where possible
– Address the challenge of moving data
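The "design for failure" strategy usually reduces to making operations idempotent and retrying transient errors. A minimal, generic sketch (not tied to any particular cloud SDK) follows; flaky_upload is a made-up stand-in for a cloud storage or service call.

```python
# "Design for failure": a minimal retry-with-exponential-backoff wrapper.
# Cloud calls (storage, queues, service endpoints) are assumed to fail
# transiently; the operation should be idempotent so retries are safe.
import random
import time


def with_retries(operation, attempts=5, base_delay=0.5):
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise                # give up after the last attempt
            # exponential backoff with jitter to avoid synchronized retries
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))


if __name__ == "__main__":
    def flaky_upload():
        if random.random() < 0.7:    # simulate a transient failure
            raise IOError("transient storage error")
        return "uploaded"

    print(with_retries(flaky_upload))
```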
FutureGrid Key Concepts I
• FutureGrid is an international testbed modeled on Grid5000
– July 6 2012: 227 projects, >920 users
• Supporting international computer science and computational science research in cloud, grid and parallel computing (HPC)
– Industry and academia
• The FutureGrid testbed provides its users:
– A flexible development and testing platform for middleware and application users looking at interoperability, functionality, performance or evaluation
– FutureGrid is user-customizable, accessed interactively, and supports Grid, Cloud and HPC software with and without virtualization
– A rich education and teaching platform for advanced cyberinfrastructure (computer science) classes

FutureGrid Key Concepts II
• Rather than loading images onto VMs, FutureGrid supports Cloud, Grid and Parallel computing environments by provisioning software as needed onto "bare metal" using Moab/xCAT (need to generalize)
– Image library for MPI, OpenMP, MapReduce (Hadoop, (Dryad), Twister), gLite, Unicore, Globus, Xen, ScaleMP (distributed shared memory), Nimbus, Eucalyptus, OpenNebula, KVM, Windows …..
– Either statically or dynamically
• Growth comes from users depositing novel images in the library
• FutureGrid has ~4400 distributed cores with a dedicated network and a Spirent XGEM network fault and delay generator
[Diagram: choose an image (Image1 … ImageN) from the library, load it, run]

FutureGrid Partners
• Indiana University (architecture, core software, support)
• San Diego Supercomputer Center at University of California San Diego (INCA, monitoring)
• University of Chicago/Argonne National Labs (Nimbus)
• University of Florida (ViNe, education and outreach)
• University of Southern California Information Sciences (Pegasus to manage experiments)
• University of Tennessee Knoxville (benchmarking)
• University of Texas at Austin/Texas Advanced Computing Center (portal)
• University of Virginia (OGF, XSEDE software stack)
• Center for Information Services and GWT-TUD from Technische Universität Dresden (VAMPIR)
• The institutions shown in red on the slide (those appearing as sites in the hardware table below) have FutureGrid hardware

Compute Hardware
Name / System type / # CPUs / # Cores / TFLOPS / Total RAM (GB) / Secondary Storage (TB) / Site / Status
• india / IBM iDataPlex / 256 / 1024 / 11 / 3072 / 180 / IU / Operational
• alamo / Dell PowerEdge / 192 / 768 / 8 / 1152 / 30 / TACC / Operational
• hotel / IBM iDataPlex / 168 / 672 / 7 / 2016 / 120 / UC / Operational
• sierra / IBM iDataPlex / 168 / 672 / 7 / 2688 / 96 / SDSC / Operational
• xray / Cray XT5m / 168 / 672 / 6 / 1344 / 180 / IU / Operational
• foxtrot / IBM iDataPlex / 64 / 256 / 2 / 768 / 24 / UF / Operational
• Bravo / large disk & memory / 32 / 128 / 1.5 / 3072 (192 GB per node) / 192 (12 TB per server) / IU / Operational
• Delta / large disk & memory with 32 Tesla GPUs / 32 CPUs + 32 GPUs / 192 CPU cores + 14336 GPU cores / ?9 / 1536 (192 GB per node) / 192 (12 TB per server) / IU / Operational
• TOTAL cores: 4384

5 Use Types for FutureGrid
• 227 approved projects (~920 users) as of July 6 2012
– USA, China, India, Pakistan, lots of European countries
– Industry, government, academia
• Training, Education and Outreach (8%)
– Semester and short events; promising for small universities
• Interoperability test-beds (3%)
– Grids and clouds; standards; from the Open Grid Forum (OGF)
• Domain science applications (31%)
– Life science highlighted (18%), non-life science (13%)
• Computer science (47%)
– Largest current category
• Computer systems evaluation (27%)
– XSEDE (TIS, TAS), OSG, EGI
• Clouds are meant to need less support than other models; FutureGrid needs more user support …….
[Screenshot: the FutureGrid projects page, https://portal.futuregrid.org/projects]

Competitions
• Recent projects have competitions – the last one just finished
• Grand prize: a trip to SC12
• The next competition begins at the start of August, for our Science Cloud Summer School

Distribution of FutureGrid Technologies and Areas (220 projects)
• Technologies requested: Nimbus 56.9%, Eucalyptus 52.3%, HPC 44.8%, Hadoop 35.1%, MapReduce 32.8%, XSEDE Software Stack 23.6%, Twister 15.5%, OpenStack 15.5%, OpenNebula 15.5%, Genesis II 14.9%, Unicore 6 8.6%, gLite 8.6%, Globus 4.6%, Vampir 4.0%, Pegasus 4.0%, PAPI 2.3%
• Areas: Computer Science 35%, Technology Evaluation 24%, Life Science 15%, other Domain Science 14%, Education 9%, Interoperability 3%

FutureGrid Tutorials
• Cloud Provisioning Platforms: Using Nimbus on FutureGrid [novice]; Nimbus One-click Cluster Guide; Using OpenStack Nova on FutureGrid; Using Eucalyptus on FutureGrid [novice]; Connecting private network VMs across Nimbus clusters using ViNe [novice]; Using the Grid Appliance to run FutureGrid Cloud Clients [novice]
• Cloud Run-time Platforms: Running Hadoop as a batch job using MyHadoop [novice]; Running SalsaHadoop (one-click Hadoop) on HPC environment [beginner]; Running Twister on HPC environment; Running SalsaHadoop on Eucalyptus; Running FG-Twister on Eucalyptus; Running One-click Hadoop WordCount on Eucalyptus [beginner]; Running One-click Twister K-means on Eucalyptus
• Image Management and Rain: Using Image Management and Rain [novice]
• Storage: Using HPSS from FutureGrid [novice]
• Educational Grid Virtual Appliances: Running a Grid Appliance on your desktop; Running a Grid Appliance on FutureGrid; Running an OpenStack virtual appliance on FutureGrid; Running Condor tasks on the Grid Appliance; Running MPI tasks on the Grid Appliance; Running Hadoop tasks on the Grid Appliance; Deploying virtual private Grid Appliance clusters using Nimbus; Building an educational appliance from Ubuntu 10.04; Customizing and registering Grid Appliance images using Eucalyptus
• High Performance Computing: Basic High Performance Computing; Running Hadoop as a batch job using MyHadoop; Performance Analysis with Vampir; Instrumentation and tracing with VampirTrace
• Experiment Management: Running interactive experiments [novice]; Running workflow experiments using Pegasus; Pegasus 4.0 on FutureGrid Walkthrough [novice]; Pegasus 4.0 on FutureGrid Tutorial [intermediary]; Pegasus 4.0 on FutureGrid Virtual Cluster [advanced]

Selected List of Services Offered
• PaaS: Hadoop, Twister, Hive, Hbase
• IaaS: Nimbus, Eucalyptus, OpenStack, OpenNebula, ViNE
• Grid: Genesis II, Unicore, SAGA, Globus
• Bare Metal (HPC): MPI, OpenMP, ScaleMP, CUDA, HPCC
• Others: FG RAIN, Portal, Inca, Ganglia, XSEDE Software, Experiment Management (Pegasus)

RAINing on FutureGrid
[Diagram: FG RAIN dynamically provisions IaaS frameworks (Eucalyptus, Nimbus), PaaS frameworks (Hadoop, Dryad), parallel and cloud programming frameworks (MPI, OpenMP, Map/Reduce, ...), Grid frameworks (Globus, Unicore) and many more, using Moab/xCAT, alongside the FG performance monitor]
VM Image Management
[Diagram: the FutureGrid image management workflow]

Create Image from Scratch
[Charts: time (s) for 1, 2, 4 and 8 concurrent requests on CentOS and Ubuntu, broken down into (1) boot VM, (2) generate image, (3) compress image, (4) upload it to the repository; total times are on the order of several hundred to ~1400 seconds]

Create Image from Base Image
[Charts: time (s) for 1, 2, 4 and 8 concurrent requests on CentOS and Ubuntu, broken down into (1) retrieve/uncompress base image from the repository, (2) generate image, (3) compress image, (4) upload it to the repository]

Templated (Abstract) Dynamic Provisioning
• An abstract specification of an image is mapped to various HPC and Cloud environments (a hypothetical sketch follows):
– OpenNebula
– Moab/xCAT HPC – high overhead, as nodes need a reboot before use
– OpenStack – Essex replaces Cactus
– Eucalyptus – the current Eucalyptus 3 is commercial, while version 2 is open source
• Parallel provisioning is now supported
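As a purely hypothetical illustration of what a templated image specification might look like, the sketch below renders one abstract description for several provisioning targets. The field names, target entries and render_for function are invented for this example and are not the FG RAIN input format.

```python
# Hypothetical abstract image specification, rendered for different targets.
# Field names and target details are illustrative; this is not the FG RAIN format.
ABSTRACT_IMAGE = {
    "name": "bio-hadoop-node",
    "os": "centos",
    "packages": ["openmpi", "hadoop", "r-base"],
    "users": ["fgdemo"],
}

TARGETS = {
    "openstack":  {"image_format": "qcow2", "register_with": "image service"},
    "eucalyptus": {"image_format": "emi", "register_with": "walrus"},
    "hpc":        {"image_format": "netboot", "register_with": "moab/xcat"},
}


def render_for(target, spec):
    """Map the single abstract spec onto a concrete provisioning request."""
    concrete = dict(spec)
    concrete.update(TARGETS[target])
    return concrete


if __name__ == "__main__":
    for target in TARGETS:
        print(target, "->", render_for(target, ABSTRACT_IMAGE))
```

The point of the template is that the user writes the abstract description once, and the same image can be rained onto cloud or bare-metal HPC back ends.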
Evaluate Cloud Environments: Interfaces
• OpenStack (Cactus) ✓: EC2 and S3, REST interface
• OpenStack (Essex) ✓✓: EC2 and S3, REST interface, OCCI
• Eucalyptus (2.0) ✓: EC2 and S3, REST interface
• Eucalyptus (3.1) ✓✓: EC2 and S3, REST interface, OCCI
• Nimbus ✓: EC2 and S3, REST interface
• OpenNebula ✓✓✓: native XML/RPC, EC2 and S3, OCCI, REST interface

Hypervisor
• OpenStack ✓✓✓: KVM, Xen, VMware vSphere, LXC, UML and MS Hyper-V
• Eucalyptus ✓✓: KVM and Xen; VMware in the enterprise edition
• Nimbus ✓: KVM and Xen
• OpenNebula ✓✓: KVM, Xen and VMware

Networking
• OpenStack ✓✓✓: two modes, (a) flat networking and (b) VLAN networking; creates bridges automatically; uses IP forwarding for public IPs; VMs only have private IPs
• Eucalyptus ✓✓✓: four modes, (a) managed, (b) managed-noLAN, (c) system and (d) static; in (a) and (b) bridges are created automatically; IP forwarding for public IPs; VMs only have private IPs
• Nimbus ✓✓: IPs assigned using a DHCP server that can be configured in two ways; bridges must exist on the compute nodes
• OpenNebula ✓✓✓: networks can be defined to support ebtables, Open vSwitch and 802.1Q tagging; bridges must exist on the compute nodes; IPs are set up inside the VM

Software Deployment
• OpenStack ✓: software is composed of components that can be placed on different machines; compute nodes need to install OpenStack software
• Eucalyptus ✓: software is composed of components that can be placed on different machines; compute nodes need to install Eucalyptus software
• Nimbus ✓✓: software is installed on the frontend and the compute nodes
• OpenNebula ✓✓✓: software is installed on the frontend only

DevOps Deployment
• OpenStack ✓✓✓: Chef, Crowbar, (Puppet), juju
• Eucalyptus ✓: Chef*, Puppet* (*according to the vendor)
• Nimbus: no
• OpenNebula ✓✓: Chef, Puppet

Storage (Image Transfer)
• OpenStack ✓: Swift (http/s), Unix filesystem (ssh)
• Eucalyptus ✓: Walrus (http/s)
• Nimbus ✓: Cumulus (http/https)
• OpenNebula ✓: Unix filesystem (ssh, shared filesystem, or LVM with copy-on-write)

Authentication
• OpenStack (Cactus): X509 credentials, (LDAP)
• OpenStack (Essex) ✓✓✓: X509 credentials, LDAP
• Eucalyptus 2.0: X509 credentials
• Eucalyptus 3.1 ✓✓✓: X509 credentials, LDAP
• Nimbus ✓✓: X509 credentials, Grids
• OpenNebula ✓✓✓: X509 credentials, ssh RSA keypair, password, LDAP

Typical Release Frequency
• OpenStack ✓: <4 months
• Eucalyptus: >4 months
• Nimbus ✓: <4 months
• OpenNebula: >6 months

License
• OpenStack ✓✓: open source (Apache)
• Eucalyptus 2.0: open source ≠ commercial (3.0)
• Eucalyptus 3.1 ✓: open source, with commercial add-ons
• Nimbus ✓✓: open source (Apache)
• OpenNebula ✓✓: open source (Apache)

Cosmic Comments I
• Does Cloud + an MPI engine for computing + grids for data cover everything?
– Will current high throughput computing and cloud concepts merge?
• Need interoperable data analytics libraries for HPC and clouds
• Can we characterize data analytics applications?
– I said they are of modest size, and their kernels need reduction operations and are often full-matrix linear algebra (true?)
• Does a "modest-size private science cloud" make sense?
– Too small to be elastic?
• Should governments fund use of commercial clouds (or build their own)?
– Most science doesn't have privacy issues motivating private clouds
• Most interest in clouds is from "new" applications such as the life sciences

Cosmic Comments II
• Recent private cloud infrastructure (Eucalyptus 3, OpenStack Essex in the USA) is much improved
– Nimbus and OpenNebula are still good
• But are they really competitive with the commercial cloud fabric and runtime?
• Should we integrate HPC and Cloud platforms?
• There are more employment opportunities in clouds than in HPC and grids, so cloud-related activities are popular with students
• Science Cloud Summer School, July 30-August 3
– Part of a virtual summer school in computational science and engineering; we expect over 200 participants spread over 9 sites