HPC meeting Belfast 2009, Dominique Gillot, HP Life Science and Scientific Research EMEA. High Performance Computing: technology for better research outcomes. © 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.



Targeting the Convergence of Massive Scale-Out

Enterprise/HPC:
• Geo-Sciences
• Life & Materials Sciences
• Defense/Security
• Scientific Research

Massive Scale-Out:
• Photo/Video Sharing, Web 2.0
• Internet Commerce
• Interactive Media
• On-Line Gaming
• Digital Content Creation
• Streaming Media

Emerging business models: best performance = faster time to market
New metrics: best performance per watt, best performance per sq. ft.
Extreme pain points: data center constraints of power, cooling, space, manageability & automation of dynamic workloads
Market requirements: multiple go-to-market motions, optimized supply chain & unique products and services

Belfast 09 HP CONFIDENTIAL

HP leads the Scale-Out Market

HP market share and position, Q2'08 (all market shares and ranks are by revenue):
• Industry standard processors: 33.9% share. #1 in total Itanium-based servers (1), total x86-64-based servers (1), and total Opteron-based servers (1)
• Blades: 53.1% share. #1 in blades (1)
• Linux: 36.3% share. #1 in total Linux servers (1)
• HPC: 36.7% share. #1 in total HPC (2)
• HPC clusters: 32.4% share. #1 in HPC clusters (2)
• TOP500: 35.2% share. HP c-Class: 176 entries (3)

Sources: 1. IDC Worldwide Quarterly Server Tracker, Q2 2008; 2. IDC Worldwide Technical Server QView, Q2 2008; 3. TOP500 June 2008, www.top500.org

Why HP?

HP Leads in High Performance Computing

• HP has led the HPC market overall for the last 5 years (1)
• Clusters are driving the HPC market (1)
• Blades are the optimal cluster solution (1)

1Q08 HPC share by vendor (1): HP 33%, IBM 31%, Dell 21%, Sun 4%, SGI 1%, NEC 1%, Other 9%
1Q08 HPC clusters by vendor (1): HP 29%, Dell 28%, IBM 27%, Other 15%
1Q08 total blades by vendor (1): HP 47%, IBM 30%, Dell 7%, Other 15%

Source: 1. IDC, Worldwide Technical Server QView, Q1 2008. All market shares and ranks are by revenue.

HP’s Cloud Definition

An environment where highly scalable and elastic services can be easily consumed over the internet through a low-touch, pay-per-use business model.

Two very different roles, two very different perspectives:

Cloud Service Provider perspective: all the details related to providing a complete solution, at an attractive price, on a cost structure that leads to a profitable business model are your responsibility.
• You own and manage all of the IT assets
• You assume the specific costs and risks of the service components

Cloud Service Consumer perspective: all you need is a device and an internet connection to get the value.
• You don't need software, hardware, or technical knowledge
• You don't own the assets
• You don't assume the specific costs and risks of the service components

The complete HPC Environment

[Diagram: users connect over a high-speed interconnect to compute resources, storage servers and a storage farm, service nodes, and visualization resources.]

Unified Cluster Portfolio

• HPC cluster services
• HPC application, development and grid software portfolio
• Scalable visualization; ClusterPack; scalable data management
• Cluster management layer: XC, CMU and Insight Control; partner software; Microsoft Windows HPC Server 2008
• Operating environment and OS extensions: HP-UX, Linux, Windows
• HP cluster platforms: HP ProLiant and Integrity servers, HP BladeSystem, multiple interconnects

HPC Platforms for Scale-up and -out

HP ProLiant Servers: the world's best-selling rack-mountable servers, meeting a broad range of HPC application requirements.

HP BladeSystem c-Class Servers: the ideal platform for HPC clusters, delivering performance, density, efficiency and manageability.

HP Integrity Servers: highly scalable servers for large-memory, SMP and database workloads in mission-critical HPC environments.

HP BladeSystem c-Class Portfolio

A full range of 2P and 4P blades: server blades, workstation blades, storage blades.

Interconnect choices for LAN, SAN, and scale-out clusters:
• Virtual Connect
• LAN: 1G & 10G Ethernet NICs
• SAN: Fibre Channel
• InfiniBand 4X DDR

Unified management, choice of power, and complete services: assessment, implementation, support.

Meet diversified HPC challenges – all in the versatile c-Class infrastructure

• BL495c/BL480c: expanded memory and I/O capability, ideal for large-memory applications in O&G, EDA, and Life Sciences
• BL260c: cost-optimized, energy-efficient 2P blade; the best price/performance selection for small- to medium-scale HPC cluster deployments
• BL460c/BL465c: high-density, energy-efficient, balanced design for general-purpose HPC workloads
• BL680c/BL685c: 4P SMP servers for shared-memory single- or multi-threaded applications
• BL2x220c: extreme density and the highest performance per watt; the best choice of compute node for scale-out cluster deployments

New HP ProLiant BL2x220c G5: meeting the need for increased computing density for life sciences

HP ProLiant BL2x220c G5 Server Blade:
• Announced May 28, 2008
• Combines two independent servers in a single blade enclosure slot
• Ideal for requirements of hundreds of servers running in a single rack
• Scales to 128 servers (1,024 cores) in a single 42U rack: 3x the density of traditional 1U rack-mount servers
• Tight power and cooling requirements; the best performance per watt on the market

New ProLiant BL2x220c in a c7000 enclosure

HP ProLiant BL2x220c G5

Doubling down with the world's first 2-in-1 server blade: double the compute while cutting data center blade infrastructure cost in half.
• First industry-standard server to deliver 1,024 cores per standard 42U rack
• Lower cost per Gflop compared to the competition
• Increased performance, with up to 12.3 TeraFLOPS per rack

Designed for maximum power and cooling efficiency:
• ~5300W per 32 nodes at 100% utilization, compared to ~7400W for other blades

Broad choice of industry-standard servers, interconnects, and operating systems:
• Choice of processor architectures and interconnects (e.g. 1Gb/10Gb Ethernet, InfiniBand); Linux and Windows

Builds on the enormous success of HP BladeSystem c-Class:
• Leader in the x86 blades market
• Commanding role in the TOP500 supercomputing list, where its measured Mflops per watt surpasses approximately 99% of the supercomputers on the list

Scale-out computing just got twice as good!

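As a sanity check, the density and efficiency claims for the BL2x220c can be reproduced with simple arithmetic. The enclosure geometry below is an assumption (16 half-height slots per c7000, four enclosures per 42U rack); the TFLOPS and wattage figures come from the slides.

```python
# Back-of-the-envelope check of the BL2x220c density and efficiency claims.
# Enclosure geometry is assumed; performance and power figures are from the slides.

SERVERS_PER_BLADE = 2        # the 2-in-1 blade design
BLADES_PER_ENCLOSURE = 16    # half-height slots per c7000 (assumption)
ENCLOSURES_PER_RACK = 4      # four 10U enclosures in a 42U rack (assumption)
CORES_PER_SERVER = 8         # two sockets x quad-core

servers_per_rack = SERVERS_PER_BLADE * BLADES_PER_ENCLOSURE * ENCLOSURES_PER_RACK
cores_per_rack = servers_per_rack * CORES_PER_SERVER
print(servers_per_rack, cores_per_rack)      # 128 1024

# 12.3 TFLOPS per rack implies ~12 Gflop/s per core
rack_tflops = 12.3
gflops_per_core = rack_tflops * 1000 / cores_per_rack
print(round(gflops_per_core, 1))             # 12.0

# ~5300 W per 32 nodes -> whole-rack power and Mflop/s per watt
watts_per_32_nodes = 5300
rack_watts = watts_per_32_nodes * (servers_per_rack // 32)
mflops_per_watt = rack_tflops * 1e6 / rack_watts
print(round(mflops_per_watt))                # ~580
```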

Some interesting data

Eight years ago: ASCI White (IBM), #1 on the TOP500 in June 2001
• Peak performance: 12,288 Gflop/s
• Weight: 106 tons (with 160 TB storage)
• Power: 3 MW
• Cost: $110 million

Today: one rack full of BL2x220c
• Peak performance: 12,288 Gflop/s
• Weight: ~2,000 lbs (~1 ton), roughly 100x lighter
• Power: ~30 kW, roughly 100x less power
• Cost: ~$800K, more than 100x lower cost
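The "100x" ratios in the comparison above check out against the slide's own figures; a quick sketch (weights and costs are approximate, as quoted):

```python
# Ratio check for the ASCI White (2001) vs. one-rack-of-BL2x220c comparison.
# All figures are taken from the slide; costs are in thousands of dollars.
asci_white = {"gflops": 12288, "tons": 106, "kw": 3000, "cost_kusd": 110_000}
bl2x220c_rack = {"gflops": 12288, "tons": 1, "kw": 30, "cost_kusd": 800}

print(asci_white["kw"] / bl2x220c_rack["kw"])        # 100.0x less power
print(asci_white["tons"] / bl2x220c_rack["tons"])    # ~106x lighter
print(round(asci_white["cost_kusd"] / bl2x220c_rack["cost_kusd"]))  # 138, i.e. "more than 100x"
```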

HP Personal Supercomputer

C3000 "tower" enclosure running Windows HPC:
• 8 blade slots, 2x2 sockets, 4 cores each: 128 cores max
• 12 Gflop per core: 1.4 Teraflop
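The core count on this slide multiplies out directly. Note that 128 cores at 12 Gflop/s each is nominally ~1.5 TF, so the quoted 1.4 Teraflop figure appears to be slightly derated:

```python
# Arithmetic for the C3000 "personal supercomputer" slide.
slots = 8                 # blade slots in the C3000 tower
sockets_per_slot = 2 * 2  # "2x2 sockets" per slot
cores_per_socket = 4      # quad-core processors

cores = slots * sockets_per_slot * cores_per_socket
print(cores)              # 128

gflops_per_core = 12
peak_tflops = cores * gflops_per_core / 1000
print(peak_tflops)        # 1.536 (the slide quotes 1.4 TF)
```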

HP 9100 Extreme Data Storage System: an integrated hardware/software solution

Performance blocks, built on HP's industry-leading BladeSystem:
• Up to 12.8 cores/U
• Starting at 4 blocks, expandable up to 16 blocks
• Flexible industry-standard platform

Capacity blocks, with industry-leading density:
• Up to 12 TB/U
• Starting at 246 TB, expandable to over 820 TB
• Scales in 82 TB storage blocks

Software proven in high-availability environments:
• Integrated system management
• Multiple access protocols: NFS, HTTP, application I/O
• Applications can run directly on the blades

HP SFS G3, based on Lustre 1.6

Lustre 1.6 base with HP installation and maintenance tools on HP hardware:
• Integrated, tested, and supported on StorageWorks and ProLiant hardware
• Proven production-quality stability and resiliency
• HPC-ready: high metadata and I/O performance rates
• Leading Linux, open-source, multi-vendor, multi-platform
• POSIX, Red Hat, and SUSE compliant; clients from kernel.org, no client kernel mods needed
• Multiple high-speed interconnects: InfiniBand and 10GbE*

* 10GbE support scheduled for Q1 2008


StorageWorks MSA2000 storage for HP SFS G3

• High reliability and resiliency
− Fully resilient: no single points of failure
− Redundant paths, redundant storage servers, dual-coherent RAID controllers, dual-domain disks
• Scalable performance: tens of GB per second
− Scales out across parallel data servers (OSSs)
− High-performance RAID controllers: 1.5 GB/s per OSS pair
• High capacity: scale-out petabyte filesystems
− Up to 48 TB per MSA2000 controller pair, 192 TB per OSS pair
− At competitive MSA2000 prices
• Excellent resiliency
− Dual parity (RAID 6) plus a hot spare: nine drives for data plus three for resiliency
• Wide choice of disk types
− SATA for large-block bandwidth and capacity
− SAS for random, mixed, short I/O and bandwidth
− 146 GB to 1 TB drive capacity, 7.2K to 15K RPM
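The capacity figures above can be reproduced assuming 1 TB drives and the 12-drive-shelf RAID-6 layout (9 data + 2 parity + 1 spare) used throughout this configuration; the shelf count per controller pair is an assumption:

```python
# Capacity arithmetic for an SFS G3 building block, assuming 1 TB SATA drives
# and a 9 data + 2 parity + 1 hot spare layout per 12-drive shelf.
drives_per_shelf = 12
data_drives_per_shelf = 9
drive_tb = 1.0

shelves_per_controller_pair = 4   # controller shelf + 3 JBOD shelves (assumption)
raw_tb = drives_per_shelf * shelves_per_controller_pair * drive_tb
print(raw_tb)                     # 48.0 TB raw per MSA2000 controller pair

controller_pairs_per_oss_pair = 4
print(raw_tb * controller_pairs_per_oss_pair)   # 192.0 TB per OSS pair

usable_tb = data_drives_per_shelf * shelves_per_controller_pair * drive_tb
print(usable_tb)                  # 36.0 TB usable after parity and spares
```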

MSA2000 Scale-Out Configuration

Each OSS pair: two or four MSA2212fc arrays (250 MB/s each with RAID-6); 750 MB/s per OSS for large sequential reads/writes with four MSA2212fc.

Per MSA2212fc (AJ745A): 4 FC connections, 12 drives of raw capacity, and 1 or 3 JBOD shelves (AJ750A).
All shelves RAID-6: 9 data + 2 parity + 1 spare.
Interconnect: InfiniBand 4x DDR.

[Diagram: an MDS pair of ProLiant DL380 G5 servers (iLO, InfiniBand, and quad FC connections) plus an admin node, connected through two SANSwitch 2/16 fabrics. Metadata is held on an MSA2000fc shelf plus a JBOD shelf (MDS disk options: 146 GB or 300 GB 15K RPM SAS). Each OSS pair (DL380 #19/#20, etc.) drives four MSA2212fc controller shelves with JBOD expansion, configured as 750 GB x 12 RAID-6 SATA; OSS disk options range from 146/300 GB 15K RPM SAS to 500 GB/750 GB/1 TB 7200 RPM SATA.]

HP Modular Cooling System G2

• Cooling for high-density deployments: 35 kW of cooling capacity in a single rack, or 17.5 kW per rack when cooling two racks
• CTO-capable, supporting up to 2,000 lbs of IT equipment
• Uniform air flow across the front of the servers
• Automatic door-opening mechanism controlling both racks
• Adjustable temperature set point
• Removes 95% to 97% of the heat inside the racks
• Polycarbonate front door reduces ambient noise considerably
• Level 2 integration with HP SIM

HP Cluster Management Utility (CMU)

• Easy, low-cost, customizable utility
• Features:
− Scalable provisioning
− Configurable monitoring
− Remote cluster commands, with GUI and command-line interfaces
− HP SIM Level 1 integration
• Well adapted to customized HPC clusters
• Proven and scalable: over 150 customers, including TOP500 sites
• Broad HP hardware platform support

HP CMU Major Features

• Management (GUI and CLI)
− Day-to-day administration of the cluster from one central point
− Halt, (re)boot, or broadcast commands to a set of nodes
• Backup/cloning (GUI and CLI)
− Capture and deploy a golden image on all nodes (or groups of nodes), fast and scalable
− Scalable provisioning: 2,000+ nodes
• Monitoring
− View cluster activity in real time, at a glance
− Monitor many machines from one window
− Receive alerts when something special happens on a compute node or on a set of compute nodes
− Dynamic resource-group creation as jobs are submitted
• Coming soon: diskless support as a standard option

HP understands the basic architecture for life sciences research

Next Generation Life Sciences Focus areas for 2008

[Pipeline: Basic Research → Discovery & Development → Preclinical Development → Clinical Development]

Three key focus areas:

• Next Generation Sequencers (NGS)
• High Content Screening (HCS) devices
• Digital Pathology / Tissue Microarray (TMA), emerging

New instruments are driving significant new requirements for computing and storage: a single instrument can generate from 100 MB to TBs of data per run. The data challenges: analyze, integrate, store.
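To give a feel for the storage pressure this creates, a purely illustrative sizing sketch; the run rate and per-run volume below are hypothetical (the slide only states a 100 MB-to-TBs-per-run range):

```python
# Illustrative only: annual storage footprint of a single NGS instrument,
# with a hypothetical run rate and a mid-range per-run data volume.
runs_per_week = 2         # hypothetical duty cycle
tb_per_run = 1.0          # assumed, within the slide's "100 MB to TBs" range
weeks_per_year = 50

tb_per_year = runs_per_week * tb_per_run * weeks_per_year
print(tb_per_year)        # 100.0 TB/year, before analysis intermediates
```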

Next Generation Life Sciences

From development to production in 2008

Three major vendors in 2008, roughly 100x faster at 1/100th the cost of previous Sanger technology:

• Illumina / Solexa: Illumina Genome Analyzer
• Roche / 454: GS FLX Sequencer
• Applied Biosystems / Agencourt: SOLiD System

On the horizon: Helicos BioSciences, others

Common Compute and Storage Requirements: The Architecture for Next-Generation Research

Next-generation sequencing and analysis architecture in place at leading research institutions today

HP Understands Computational Requirements of Life Sciences

High-throughput workloads: bioinformatics, cheminformatics, chemical information systems, docking/screening
→ HP Cluster/Blades: HP CP3000 (Xeon), HP CP4000 (Opteron)

Large simulation: Monte Carlo simulations, stochastic dynamics, image processing and analysis, molecular mechanics/molecular dynamics, compute-intensive quantum mechanics
→ HP Cluster/Blades: HP CP3000 (Xeon), HP CP4000 (Opteron), HP CP6000 (Itanium)

Memory-intensive workloads: memory-intensive quantum mechanics, normal-mode calculations in molecular mechanics, informatics, in-memory database searches
→ HP Integrity servers, HP Superdome SMP: 16+ cores/node

http://techmktg.rsn.hp.com/lms/Application.htm

The complete HPC Environment

[Diagram: users connect over a high-speed interconnect to compute resources, storage servers and a storage farm, service nodes, and visualization resources.]

HP Remote Graphics Software

• Applications run natively on workstation blades with full hardware acceleration for 2D/3D graphics
• Key enabling technology for HP Blade Workstations
• Sender (CAD/CAE blade): captures, compresses and encrypts the video stream; applies keyboard and mouse events
• Network: unlimited distance
• Client: decompresses and displays the stream; captures keyboard and mouse events from a single keyboard and mouse

Remote Visualization

Use HP Remote Graphics Software or open-source VirtualGL/TurboVNC:
− Enables remote access to visualization nodes over a standard network
− The entire remote desktop environment is displayed on the local desktop
− Supports collaboration: more than one user can connect to the same remote node and collaboratively drive and view the applications

Remote virtual desktop over an Ethernet connection, to local Windows or Linux desktops.

SCI servers and storage

• Streamlined design
− Optimized for cost
− Optimized for power efficiency
• Designed for scale-out
− New dense servers
− New high-capacity storage
• Variations of standard platforms
− High memory capacity
− Dense storage footprint

SCI Infrastructure HP POD

• Industry-standard flexibility: 22 x 50U, 19-inch full-depth industry-standard racks supporting HP, Dell, IBM, Sun, Cisco, etc.
• Best-in-class density: support for 3,520 compute nodes, 12,000 LFF drives, or any combination
• Fast deployment: pre-integrated, configured and tested before shipment; shipped in six weeks from order, deployed worldwide
• Energy effectiveness: PUE ratio <1.25 (1.07 excluding the chiller)
• Infrastructure services portfolio: full lifecycle support services combining technology and facilities expertise
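PUE (power usage effectiveness) is total facility power divided by IT power, so the quoted ratios translate directly into facility overhead; the 1 MW IT load below is an assumed example, not a POD specification:

```python
# What the POD's quoted PUE figures imply for a hypothetical 1 MW IT load.
it_power_kw = 1000.0            # assumed IT load
pue_with_chiller = 1.25
pue_excl_chiller = 1.07

overhead_with = it_power_kw * (pue_with_chiller - 1)
overhead_excl = it_power_kw * (pue_excl_chiller - 1)
print(round(overhead_with))     # 250 kW of facility overhead at PUE 1.25
print(round(overhead_excl))     # 70 kW excluding the chiller
```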

Extending the vision: powerful catalysts for HPC innovation

• Parallel compositing
• Accelerators and multicore
• Dense computing
• HP UPC and HP SHMEM
• Parallel performance tools
• Grid-enabled SFS
• Visualization
• Computation
• Scale-up and -out
• Data management
• Grid and Adaptive Infrastructure
• Advanced power and cooling
• Converged fabric

Working collaboratively to accelerate the pace of innovation


Western Switzerland: Improving Bio codes performance

Partners: Roche, Novartis, CERN, Serono

VITAL-IT : A Bio-computing research centre focusing on performance

The Centre:
• Located near Lausanne
• Opened April 22, 2004
• A joint project between Intel, HP, and the Swiss Institute of Bioinformatics (SIB), now joined by Oracle
• Industrial advisory board
• Direct support from HP Labs

Objective: improve the efficiency of discovery by using the latest computing techniques, and improve the efficiency of bio codes.

Tools: a 2-teraflop system dedicated to bioinformatics research, using HP Itanium-based systems running Linux as well as Nocona-based systems.

Results: already striking results ………………

Team

• Solution architects
• Consultants
• Project / program managers
• Benchmarking
• ISV engineers
• System engineers
• Technical marketing

Located in Grenoble:
• Customer center
• Proof-of-solution & engineering lab
• Auditorium
• Training facilities
• Remote support
• Halo room
• Broadcast for remote demos