SAPPHIRE India - Indian Institute of Technology Kanpur

Transcript

Agenda …
• HPC Technology & Trends
• HPC Platforms & Roadmaps
• HP Supercomputing Vision
• HP Today
HPC Trend: Faster processors
• Processors inching ahead of each other: Itanium … Xeon … Opteron … Xeon …
• Big leap happens this year: 2005, Opteron over Xeon; 2006, Xeon over Opteron
[Chart: Opteron/Xeon performance advantage by segment – CAD, Visual Studio, Financial Modeling, O&G, DCC, CAE – with advantages ranging from 1% to 55%]
Industry Standard Processor choice & leadership
Choices at end of 2006:

Woodcrest
• Price/performance leadership with 32/64-bit co-existence
• Dual-core, 4 FLOPs/tick, DDR2 FBD memory
• New higher-performance chipsets
• Highest clock speed, peak performance, large cache
• Extensive 32-bit, and growing 64-bit, ecosystems
• 2p/4c nodes for highly parallel scale-out workloads

Rev F
• Price/performance leadership with 32/64-bit co-existence
• Dual-core, 2 FLOPs/tick, DDR2 memory
• 1GHz HyperTransport
• High bandwidth for sustained performance
• Extensive 32-bit, and growing 64-bit, ecosystems
• 2p/4c & 4p/8c nodes for moderate scale-out workloads

Montecito
• Highest-performance 64-bit processor core for sustained performance
• Dual-core, 4 FLOPs/tick, DDR2 memory
• New higher-performance sx2000 and zx2 chipsets
• Highest SMP scalability (to 64p/128c)
• HP-UX for mission-critical technical computing
• Extensive 64-bit ecosystem (and 32/64-bit on HP-UX)
• Scale-up and scale-out for complex workloads
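For orientation, the FLOPs/tick entries above convert to peak (not sustained) throughput by a simple identity; the worked line below assumes a 1.6 GHz dual-core part at 4 FLOPs/tick as an illustrative example, not a benchmark claim:

\[
\text{peak FLOPS} = \text{sockets} \times \frac{\text{cores}}{\text{socket}} \times \frac{\text{FLOPs}}{\text{tick}} \times \text{clock},
\qquad 1 \times 2 \times 4 \times 1.6\,\text{GHz} = 12.8\ \text{GFLOPS per socket}
\]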
Itanium, Opteron, Xeon comparative results 1HCY06
[Chart: relative performance of 17 ISV applications, normalized to the fastest Itanium result (bigger is better; y-axis 0.00–1.40), for rx1620/HP-UX/1.6GHz, rx1620/LINUX/1.6GHz, DL145/585/2.6GHz S-C, and DL140/360/3.6GHz. Applications include CFX, PAM-CRASH, RADIOSS, STAR-CD, ECLIPSE, ANSYS, ABAQUS Explicit, PowerFLOW, FLUENT, LS-DYNA, ABAQUS Standard, MSC.Nastran, GAUSSIAN, AMBER, DMol3, and CASTEP.]
Application performance is a qualitative number based on HP benchmarking results. Results are normalized to the faster Itanium operating environment and sorted by the Opteron:Itanium ratio. ISV compiler choices and optimization levels influence results as well as raw microprocessor capabilities.
Itanium, Opteron, Xeon comparative results 2HCY06
[Chart omitted; workload categories include EDA – Other: simulation, verification, synthesis, physical design.]
Application performance is a qualitative number based on HP & Intel benchmarking results. Results are normalized to the faster Itanium operating environment and sorted by the Opteron:Itanium ratio. ISV compiler choices and optimization levels influence results as well as raw microprocessor capabilities.
Broadest Suite of HPC platforms
• Integrity servers: rx1620, rx2620, rx4640, rx7620, rx8620, Superdome
• ProLiant servers: DL140, DL145, DL360, DL380, DL580, DL585
• HP Cluster Platforms 3000, 4000, and 6000
• Blades and blade clusters: BL2xp/BL3xp, BL45p, BL60p, BL460, BL680
• Workstations: xw8200, xw9300, nw8240, c8000
HP Blades for HPC
• Blades are the ideal platform for clusters
− Simplified management
− Designed for performance and scalability
− Reduced interconnect and network complexity
− High density
− Centralized power management
• Factors for blades adoption in HPC clusters:
− Performance parity with racked systems
− Price advantage shifts to blades
− Interconnect choice expands to …
Agenda …
• Grid Initiative at HP
• HPC Focus & Trends
• HP Supercomputing Vision
• HP Today
What is the “Supercomputing Utility” Vision?
• Develop and offer an open-standards, open-systems based Supercomputing Utility that can expand and grow over time and truly adapt to the changing enterprise and environment.
• The utility can deliver high computational throughput and support multiple applications with different characteristics and workloads.
• The fabric of this utility is a high-speed network, all linked to a large-scale data store.
• The environment is managed and controlled as a single system, and supports a dispersed workforce, with either direct log-in or grid access.
HP Vision for Supercomputing facility
• Computation: industry standard servers
• Data Management
• Visualization
Integration is the Key!
HP Unified Cluster Portfolio strategy
• Computation: HP Integrity & ProLiant servers, HP Cluster Platforms
• Data Management: HP StorageWorks Scalable File Share, Storage Grid
• Visualization: Scalable Visualization Array

Advancing the power of clusters with:
• Integrated solutions spanning computation, storage and visualization
• Choice of industry standard platforms, operating systems, interconnects, etc.
• HP engineered and supported solutions that are easy to manage and use
• Scalable application performance on complex workloads
• Extensive use of open source software
• Extensive portfolio of qualified development tools and applications
HP XC software for Linux – Leveraged Open Source
(Function – Technology: Features and Benefits)

• Distribution and Kernel – RHEL 3.0 compatible: Red Hat-compatible shipping product; POSIX enhancements; support for Opteron; ISV support
• Batch Scheduler – LSF 6.0: Platform LSF HPC premier scheduler; policy driven, with allocation controls and MAUI support; provides a migration path for AlphaServer SC customers
• Resource Management – SLURM: Simple Linux Utility for Resource Management; fault tolerant, highly scalable, uses a standard kernel
• MPI – HP-MPI 2.1: HP’s Message Passing Interface; provides a standard interface for multiple interconnects; MPICH compatible; supports MPI-2 functionality (see the sketch after this table)
• Inbound Network / Cluster Alias – LVS: Linux Virtual Server; high-availability virtual server project for managing incoming requests, with load balancing
• System Files Management – SystemImager, configuration tools, cluster database: SystemImager automates Linux installs, software distribution, and production deployment; supports complete bootable images; can use multicast; used at PNNL and Sandia
• Console – Telnet-based console commands: power control adaptable for HP integrated management processors; no need for terminal servers; reduced wiring
• Monitoring – Nagios, SuperMon: Nagios is a browser-based, robust host, service, and network monitor from open source; SuperMon supports high-speed, high-sample-rate, low-perturbation monitoring for clusters
• High Perf I/O – Lustre™ 1.2.x: Lustre™ parallel file system; high-performance, efficient, robust, scalable
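To make the MPI layer concrete, here is a minimal sketch of a message-passing program of the kind HP-MPI (or any MPICH-compatible implementation) compiles and launches; it uses only standard MPI calls, and the mpicc/mpirun commands in the comment are generic illustrations rather than HP XC-specific syntax.

/* ring.c - pass a token around all ranks once (run with 2+ ranks).
 * Generic build/run example (exact commands vary by MPI install):
 *   mpicc ring.c -o ring
 *   mpirun -np 4 ./ring
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */

    if (rank == 0) {
        token = 42;  /* rank 0 originates the token, then waits for it */
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("token made one full trip around %d ranks\n", size);
    } else {
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}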
High performance interconnects
• Infiniband
− Emerging industry standard; PCI-e attach
− IB 4x: speeds 1.8GB/s, <5μs MPI latency
− 24-port and 288-port switches
− Scalable topologies with federation of switches
• Myrinet
− Speeds up to 800MB/s, <6μs MPI latency
− 16-port, 128-port, and 256-port switches
− Scalable topologies with federation of switches
• Quadrics
− Elan 4: 800MB/s, <3μs MPI latency
− 8-port, 32-port, 64-port, and 128-port switches
− Scalable topologies with federation of switches
• GigE
− 60–80MB/s, >40μs MPI latency
[Diagrams: federated fat-tree examples – 24-port node-level switches (12 nodes each) under 288-port top-level switches; 128-port node-level switches (64 nodes each) under 264-port top-level switches.]
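One way to read these numbers side by side is the first-order alpha-beta model, where a message of n bytes costs T(n) = latency + n/bandwidth; the sketch below plugs in the figures quoted above (real performance also depends on protocol, message pattern, and contention):

/* msgtime.c - alpha-beta estimate T(n) = alpha + n/beta for the
 * interconnects above; figures are the slide's quoted numbers. */
#include <stdio.h>

int main(void)
{
    struct { const char *name; double alpha_us, beta_bytes_per_s; } nets[] = {
        { "InfiniBand 4x",   5.0, 1.8e9 },
        { "Myrinet",         6.0, 800e6 },
        { "Quadrics Elan 4", 3.0, 800e6 },
        { "GigE",           40.0,  70e6 },  /* midpoint of 60-80MB/s */
    };
    double n = 1e6;  /* one 1 MB message */

    for (int i = 0; i < 4; i++) {
        double t_us = nets[i].alpha_us + n / nets[i].beta_bytes_per_s * 1e6;
        printf("%-16s ~%7.0f us per 1 MB message\n", nets[i].name, t_us);
    }
    return 0;
}

At this message size, bandwidth dominates; at small message sizes the latency column decides, which is why Quadrics’ <3μs figure matters for tightly coupled codes.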
HP Cluster Platforms
• Factory pre-assembled hardware solution with optional software installation
− Includes nodes, interconnects, network, racks, etc., integrated & tested
• Configure to order from 5 nodes to 512 nodes (more by request)
− Uniform, worldwide specification and product menus
− Fully integrated, with HP warranty and support
• HP Cluster Platform 3000 – Compute nodes: ProLiant DL140 G2, ProLiant DL360 G4; Operating systems: Linux, Windows; Interconnects: GigE, IB, Myrinet
• HP Cluster Platform 4000 – Compute nodes: ProLiant DL145 G2, ProLiant DL585; Operating systems: Linux, Windows; Interconnects: GigE, IB, Myrinet, Quadrics
• HP Cluster Platform 6000 – Compute nodes: Integrity rx1620, Integrity rx2620; Operating systems: Linux, HP-UX; Interconnects: GigE, IB, Quadrics
Data Management
HP StorageWorks Scalable File Share (HP SFS)
• Customer challenge: I/O performance limitations
• HP SFS provides:
− Scalable performance: aggregate parallel read or write bandwidth from >1 GB/s to “tens of GB/s”; a 100-fold increase over NFS
− Scalable access: shared, coherent, parallel access across a huge number of clients; 1000s today, “10s of thousands” in the future
− Scalable capacity: multiple terabytes to multiple petabytes
• Based on breakthrough Lustre technology: open source, industry standards based
[Diagram: Linux cluster drawing scalable bandwidth from a scalable storage grid of smart cells]
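The scalable-bandwidth claim follows from Lustre’s striping design: files are striped across object storage servers, so aggregate throughput grows roughly linearly with server count until client links saturate. A first-order model (notation mine, not HP’s):

\[
B_{\text{agg}} \approx \min\left( N_{\text{OSS}} \times B_{\text{OSS}},\; N_{\text{clients}} \times B_{\text{client}} \right)
\]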
Scalable Visualization
• Customer challenge: visualization solutions are too expensive, proprietary, and not scalable
• HP Scalable Visualization Array (SVA)
− Open, scalable, affordable, high-end visualization solution based on industry-standard Sepia technology
− Innovative approach combining standard graphics adapters with accelerated compositing
− Yields a system that scales to clusters capable of displaying 100 million pixels or more
HP Scalable Visualization Array
Delivering the Vision
[Diagram: users reach log-in nodes over inbound connections; an XC compute cluster of app nodes shares a high-speed interconnect with HP SFS servers (object storage servers with OSTs, meta data servers (MDS), admin and services nodes, backed by a scalable HA storage farm) and with SVA rendering and compositing viz nodes plus service nodes, which drive a multi-panel display device over a pixel network.]
TIFR – Tata Institute of Fundamental Research
Computational Mathematics Laboratory (CML)
Industry: Scientific Research – Pune
• Challenges
− Current AlphaServer-based setup: increase computational power
− Explosive growth in new research: massive increase in performance needed
− Partnership for support services
• HP Solution
− 1 teraflop peak HP XC based on:
• CP6000 with 77 2-CPU/4GB Integrity rx1620 1.6GHz compute nodes and an Integrity rx2620 service node
• 288-port Infiniband switch
− HP Math Libraries for Linux on Itanium
− New CCN for collaboration on algorithms
• Results
− First step toward a massive supercomputer
− Improved ability to solve computationally demanding algorithms
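The 1 teraflop figure is consistent with the peak-throughput identity sketched earlier, using the Itanium rate of 4 FLOPs per tick:

\[
77 \times 2 \times 4 \times 1.6\,\text{GHz} \approx 986\ \text{GFLOPS} \approx 1\ \text{TFLOP peak}
\]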
“We need partners who complement our core competency in areas like complex hardware system design, microelectronics, nanotechnology and system software. This is where HP steps in, as it has been investigating HPC concepts for more than a decade and this has led to the creation of Itanium processors jointly with Intel.

“There is a need to build a giant hardware accelerator to address fundamental questions in computer science, which could not be answered until now, either by theory or experiment, to influence future development of the subject, facilitate scientific discoveries and solve grand challenges in various disciplines. This supercomputer, which will help us understand how to structure our algorithms for a larger system, is only a first step in that direction.”

Professor Naren Karmarkar
Head CML, TIFR
(Dr Karmarkar is a Bell Labs Fellow)
TI – Texas Instruments
Industry: Semiconductor Engineering / EDA – Bangalore
www.ti.com/asia/docs/india/index.html
• Challenges
− 5,000 processors already installed; additional cluster computing required
− Reduce design cycle time by 10X
− Datacenter now full; will turn to industry for Utility Computing
• HP Solution
− 5.6 teraflop peak Beowulf clusters based on:
• Cluster Platform 4000
− 500 compute nodes: ProLiant DL145 G2 2.8GHz 2P/2GB
− Gigabit Ethernet interconnect
− Support services
− Adding to 100+ existing DL585 servers
• Result
− Additional 1,000-processor cluster for development requirements
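The 5.6 teraflop figure likewise matches the Opteron rate of 2 FLOPs per tick quoted earlier:

\[
500 \times 2 \times 2 \times 2.8\,\text{GHz} = 5.6\ \text{TFLOPS peak}
\]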
IGIB – Institute of Genomics & Integrative Biology
Industry: BioTechnology / LMS – Delhi
• Challenges
− Current AlphaServer-based setup: increase computational power
− Explosive growth in new research: massive increase in performance needed
− Partnership for support
− Improve cost efficiencies
• HP Solution
− 4½ teraflop peak HP XC based on:
• CP3000 with 288 2-CPU/4GB ProLiant DL140 G2 3.6GHz nodes using Infiniband
• CP3000 with 24 2-CPU/4GB ProLiant DL140 G2 nodes as a test cluster
• Superdome, 12 TB StorageWorks EVA SAN
− Single point support service
− IGIB research staff collaboration
• Results
− HP India’s largest supercomputer
− One of the world’s most powerful research systems dedicated to Life Sciences
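Counting both CP3000 clusters, and assuming 2 double-precision FLOPs per tick for this Xeon generation (an assumption; the slide does not state the per-tick rate), the 4½ teraflop figure checks out:

\[
(288 + 24) \times 2 \times 2 \times 3.6\,\text{GHz} \approx 4.49\ \text{TFLOPS peak}
\]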
“HP’s Cluster Platform provides a scalable architecture that allows us to complete large, complex simulation experiments, such as molecular interactions and dynamics, virtual drug screening, protein folding, etc., much more quickly.

“This technology, combined with HP’s experience and expertise in life sciences, helps IGIB speed access to information, knowledge, and new levels of efficiency, which we hope will ultimately culminate in the discovery of new drug targets and predictive medicine for complex disorders with minimum side effects.”

Dr. Samir Brahmachari
Director, IGIB