The Complete IDC Intelligence Solution


Welcome To The 34th HPC User Forum Meeting
October 2009
Thank You To:
HLRS/University of Stuttgart
For Hosting The Meeting!
Thank You To Our Sponsors!
Altair Engineering
Bull
IBM
Microsoft
Introduction: Logistics
We have a very tight agenda (as usual)
 Please help us keep on time!
Review Agenda Times:
 Please take advantage of breaks and free time to network with attendees
 Note: We will post most of the presentations on the web site
HPC User Forum Mission
To improve the health of the high-performance computing industry through open discussions, information-sharing and initiatives involving HPC users in industry, government and academia, along with HPC vendors and other interested parties.
HPC User Forum Goals
Assist HPC users in solving their ongoing computing, technical and business problems
Provide a forum for exchanging information, identifying areas of common interest, and developing unified positions on requirements
 By working with users in other sectors and vendors
 To help direct and push vendors to build better products
 Which should also help vendors become more successful
Provide members with a continual supply of information on:
 Uses of high end computers, new technologies, high end best practices, market dynamics, computer systems and tools, benchmark results, vendor activities and strategies
Provide members with a channel to present their achievements and requirements to interested parties
1Q 2009 HPC Market Update
Q109 HPC Market Result – Down 16.8%
[Pie chart: 1Q 2009 HPC server revenue by competitive segment]
 HPC servers, total: $2.1B
 Supercomputers (over $500K): $802M
 Divisional ($250K - $500K): $237M
 Departmental ($100K - $250K): $754M
 Workgroup (under $100K): $282M
Source: IDC, 2009
Q109 Vendor Share in Revenue
[Pie chart: 1Q 2009 HPC server revenue share by vendor]
 HP 28.9%
 IBM 25.3%
 Dell 12.0%
 Other 11.7%
 NEC 9.4%
 Sun 3.6%
 Fujitsu 3.4%
 Cray 2.9%
 SGI 1.5%
 Bull 0.6%
 Appro 0.4%
 Dawning 0.3%
Q109 Cluster Vendor Shares
[Pie chart: 1Q 2009 HPC cluster revenue share by vendor]
 HP 30.8%
 Dell 21.7%
 Other 21.6%
 IBM 12.3%
 Sun 5.7%
 Fujitsu 2.6%
 NEC 1.8%
 Bull 1.1%
 SGI 1.1%
 Appro 0.7%
 Dawning 0.6%
HPC Compared To IDC Server Numbers
HPC Qview Tie To Server Tracker: 1Q 2009 Data
HPC Qview data focus: the complete system (“everything needed to turn it on”)
Tracker (QST) data focus: compute nodes
[Diagram: tying the two data sets together]
All WW servers as reported in the IDC Server Tracker (QST): $9.9B
HPC Qview components:
 HPC compute node revenues: ~$1.05B*
 HPC special revenue recognition services: ~$474M
 HPC revenue beyond base nodes: ~$576M
HPC Special Revenue Recognition Services: includes those sold through custom engineering, R&D offsets, or paid for over multiple quarters
HPC computer system revenues beyond the base compute nodes: includes interconnects and switches, in-built storage, scratch disks, OS, middleware, warranties, installation fees, service nodes, special cooling features, etc.
* This number ties the two data sets on an apples-to-apples basis
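As a cross-check, the three Qview components above sum back to the quarterly HPC total reported earlier: ~$1.05B (compute nodes) + ~$474M (special revenue recognition services) + ~$576M (revenue beyond base nodes) ≈ $2.1B.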
2010 IDC HPC Research Areas
• Quarterly HPC Forecast Updates
 Until the world economy recovers
• New HPC End-user Based Reports:
 Clusters, processors, accelerators, storage, interconnects, system software, and applications
 The evolution of government HPC budgets
 China and Russia HPC trends
• Power and Cooling Research
• Developing a Market Model For Middleware and Management Software
• Extreme Computing
• Data Center Assessment and Benchmarking
• Tracking Petascale and Exascale Initiatives
Agenda: Day One
12:45 HPC User Forum Welcome/Introductions: Steve Finn (Chair, HPC User Forum) and Earl Joseph (IDC)
13:00 HLRS Welcome/Introductions: Michael Resch, HLRS
13:15 Michael Resch, HLRS, A European View of HPC
13:45 Robert Singleterry, NASA HPC Directions, Issues and Concerns
14:15 Tom Sterling, Trends and New Directions in HPC
14:45 ISC Update
15:00 Break
15:45 Jim Kasdorf, Pittsburgh Supercomputer Center, National Science Foundation Directions
16:15 Erich Schelkle, ASCS/Porsche, End User HPC Site Update
16:45 Vijay K. Agarwala, Developing a Coherent Cyberinfrastructure from Local Campus to National Facilities
17:00 Thomas Eickermann, Juelich Research Center, PRACE Program Update
17:30 Networking Get-together
18:30 End of first day
Welcome To Day 2 Of The HPC User Forum Meeting
Agenda: Day Two
9:10 Welcome/Logistics – Earl Joseph and Steve Finn, BAE Systems
9:15 Jack Collins, National Cancer Institute Update, Directions and Concerns
9:45 Marie-Christine Sawley, ETH Zurich, CERN group, Data taking and analysis at unprecedented scale: the example of CMS
10:15 Paul Muzio, HPC Directions at the City University of New York
10:45 Bull Technology Update, Jean-Marc Denis
11:30 Break
11:45 Lutz Schubert, HLRS, Workflow Management
12:15 New Software Technology Directions at Microsoft, Wolfgang Dreyer
12:30 Wrap up and plans for future HPC User Forum meetings, Michael Resch, Earl Joseph and Steve Finn
12:35 Farewell and Lunch
Important Dates For Your Calendar
FUTURE HPC USER FORUM MEETINGS:
October 2009 International HPC User Forum Meetings:
 HLRS/University of Stuttgart, October 5-6, 2009
(midday to midday)
 EPFL, Lausanne, Switzerland, October 8-9, 2009
(midday to midday)
US Meetings:
 April 12 to 14, 2010 Dearborn, Michigan at the
Dearborn Inn
 September 13 to 15, 2010 Seattle, Washington
Thank You For Attending The 34th HPC User Forum Meeting
Questions?
Please email:
[email protected]
Or check out:
www.hpcuserforum.com
Welcome To The 35th HPC User Forum Meeting
October 2009
Thank You To:
Ecole Polytechnique Fédérale de
Lausanne (EPFL)
For Hosting The Meeting!
Thank You To Our Sponsors!
Altair Engineering
Bull
IBM
Microsoft
Introduction: Logistics
We have a very tight agenda (as usual)
 Please help us keep on time!
Review Agenda Times:
 Please take advantage of breaks and free time to network with attendees
 Note: We will post most of the presentations on the web site
HPC User Forum Mission
To improve the health of the high-performance computing industry through open discussions, information-sharing and initiatives involving HPC users in industry, government and academia, along with HPC vendors and other interested parties.
HPC User Forum Goals
Assist HPC users in solving their ongoing computing, technical and business problems
Provide a forum for exchanging information, identifying areas of common interest, and developing unified positions on requirements
 By working with users in other sectors and vendors
 To help direct and push vendors to build better products
 Which should also help vendors become more successful
Provide members with a continual supply of information on:
 Uses of high end computers, new technologies, high end best practices, market dynamics, computer systems and tools, benchmark results, vendor activities and strategies
Provide members with a channel to present their achievements and requirements to interested parties
1Q 2009 HPC Market Update
Q109 HPC Market Result – Down 16.8%
[Pie chart: 1Q 2009 HPC server revenue by competitive segment]
 HPC servers, total: $2.1B
 Supercomputers (over $500K): $802M
 Divisional ($250K - $500K): $237M
 Departmental ($100K - $250K): $754M
 Workgroup (under $100K): $282M
Source: IDC, 2009
Q109 Vendor Share in Revenue
[Pie chart: 1Q 2009 HPC server revenue share by vendor]
 HP 28.9%
 IBM 25.3%
 Dell 12.0%
 Other 11.7%
 NEC 9.4%
 Sun 3.6%
 Fujitsu 3.4%
 Cray 2.9%
 SGI 1.5%
 Bull 0.6%
 Appro 0.4%
 Dawning 0.3%
Q109 Cluster Vendor Shares
[Pie chart: 1Q 2009 HPC cluster revenue share by vendor]
 HP 30.8%
 Dell 21.7%
 Other 21.6%
 IBM 12.3%
 Sun 5.7%
 Fujitsu 2.6%
 NEC 1.8%
 Bull 1.1%
 SGI 1.1%
 Appro 0.7%
 Dawning 0.6%
HPC Compared To IDC Server Numbers
HPC Qview Tie To Server Tracker: 1Q 2009 Data
HPC Qview data focus: the complete system (“everything needed to turn it on”)
Tracker (QST) data focus: compute nodes
[Diagram: tying the two data sets together]
All WW servers as reported in the IDC Server Tracker (QST): $9.9B
HPC Qview components:
 HPC compute node revenues: ~$1.05B*
 HPC special revenue recognition services: ~$474M
 HPC revenue beyond base nodes: ~$576M
HPC Special Revenue Recognition Services: includes those sold through custom engineering, R&D offsets, or paid for over multiple quarters
HPC computer system revenues beyond the base compute nodes: includes interconnects and switches, in-built storage, scratch disks, OS, middleware, warranties, installation fees, service nodes, special cooling features, etc.
* This number ties the two data sets on an apples-to-apples basis
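As a cross-check, the three Qview components above sum back to the quarterly HPC total reported earlier: ~$1.05B (compute nodes) + ~$474M (special revenue recognition services) + ~$576M (revenue beyond base nodes) ≈ $2.1B.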
2010 IDC HPC Research Areas
• Quarterly HPC Forecast Updates
 Until the world economy recovers
• New HPC End-user Based Reports:
 Clusters, processors, accelerators, storage, interconnects, system software, and applications
 The evolution of government HPC budgets
 China and Russia HPC trends
• Power and Cooling Research
• Developing a Market Model For Middleware and Management Software
• Extreme Computing
• Data Center Assessment and Benchmarking
• Tracking Petascale and Exascale Initiatives
Agenda: Day One
14:00 HPC User Forum Welcome/Introductions, Steve Finn and Earl Joseph
14:15 EPFL Welcome/Introductions, Henry Markram, EPFL, and Giorgio Margaritondo, VP, EPFL
14:30 Neil Stringfellow, CSCS/ETHZ, HPC Strategy in Switzerland, Swiss National Supercomputing Centre
15:00 Henry Markram, Felix Schuermann, EPFL, and David Turek, IBM, "Blue Brain Project Update"
15:30 IBM Research Partnerships, Dave Turek
15:45 Altair Technology Update, Paolo Masera
16:00 Jack Collins, National Cancer Institute Update, Directions and Concerns
16:30 Break
16:45 Markus Schulz, CERN, High-throughput Computing
17:15 Robert Singleterry, NASA
18:00 End of First Day
Welcome To Day 2 Of The HPC User Forum Meeting
Agenda: Day Two
9:00 Welcome/Logistics – Earl Joseph and Steve Finn, BAE Systems, Summarizing the September '09 User Forum
9:00 Victor Reis, US Department of Energy
9:30 Alan Gray, EPCC End User Site Update, University of Edinburgh
10:00 Jim Kasdorf, Pittsburgh Supercomputer Center, "National Science Foundation Directions"
10:30 Thomas Eickermann, Juelich Supercomputing Centre, PRACE Project Update
11:00 Break
11:15 Panel on Using HPC to Advance Science-Based Simulation. Panel moderators: Henry Markram and Steve Finn. Panel members: Jack Collins, Thomas Eickermann, Victor Reis, Felix Schuermann, Markus Schulz and Neil Stringfellow
12:15 New Software Technology Directions at Microsoft
12:30 Wrap up and plans for future HPC User Forum meetings, Henry Markram, Earl Joseph and Steve Finn
12:45 Farewell and Lunch
Important Dates For Your Calendar
FUTURE HPC USER FORUM MEETINGS:
October 2009 International HPC User Forum Meetings:
 HLRS/University of Stuttgart, October 5-6, 2009
(midday to midday)
 EPFL, Lausanne, Switzerland, October 8-9, 2009
(midday to midday)
US Meetings:
 April 12 to 14, 2010 Dearborn, Michigan at the
Dearborn Inn
 September 13 to 15, 2010 Seattle, Washington
Thank You For Attending The 35th HPC User Forum Meeting
Questions?
Please email:
[email protected]
Or check out:
www.hpcuserforum.com
OEM Mix Of HPC Special Revenue Recognition Services
Non-SEC Reported Product Revenues = $474M
[Pie chart: special revenue recognition services by OEM]
 IBM 43%
 HP 30%
 Dell 12%
 Other 11%
 Sun 4%
Notes:
• Includes product sales that are not reported by OEMs as product revenue in a given quarter
 Sometimes HPC systems are paid for across a number of quarters or even years
• Includes NRE – if required for a specific system
• Includes custom engineering sales
• Some examples – Earth Simulator, ASCI Red, ASCI Red Storm, DARPA systems, and many small and medium HPC systems that are sold through a custom engineering or services group because they need extra things added
Areas Of HPC “Uplift” Revenues
How The $576M "Uplift" Revenues Are Distributed
[Pie chart: distribution of uplift revenues]
 Computer hardware (in cabinet) 45%
 Software 16%
 External storage 12%
 External interconnects 12%
 Bundled warranties 8%
 Misc. items 7%
Areas Of HPC “Uplift” Revenues
Notes:
* Computer hardware (in cabinet) -- hybrid nodes, service nodes, accelerators, GPGPUs, FPGAs, internal interconnects, in-built disks, in-built switches, special cabinet doors, special signal processing parts, etc.
* External interconnects -- switches, cables, extra cabinets to hold them, etc.
* External storage -- scratch disks, interconnects to them, cabinets to hold them, etc. (This excludes user file storage devices)
* Software -- includes both bundled and separately charged software if sold by the OEM or on the purchase contract: the operating system, license fees, the entire middleware stack, compilers, job schedulers, etc. (it excludes all ISV applications unless sold by the OEM and in the purchase contract)
* Bundled warranties
* Misc. items -- since the HPC taxonomy includes everything required to turn on the system and make it operational, this covers items like bundled installation services, special features and other add-on hardware, and even a special paint job if required
Special Paint Jobs Are Back …
http://www.afrl.hpc.mil/consolidated/hardware.php