
Welcome To The 33rd HPC User Forum Meeting, September 2009

Important Dates For Your Calendar

FUTURE HPC USER FORUM MEETINGS:

October 2009 International HPC User Forum Meetings:
- HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
- EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)

US Meetings:
- April 12-14, 2010, Dearborn, Michigan, at the Dearborn Inn
- September 2010, Seattle, Washington

Thank You To Our Meal Sponsors!
- Wednesday Breakfast -- Hitachi Cable America
- Wednesday Lunch -- Altair Engineering & AMD
- Wednesday Break -- Appro International
- Thursday Breakfast -- Mellanox Technologies
- Thursday Lunch -- Microsoft
- Thursday Break -- ScaleMP

A Petascale Trivia Question: How many years would 1,000 scientists have to calculate by hand to equal 1 second of work on a 0.1 PFLOPS supercomputer?

Assume each scientist can do 1 calculation every second, with no rest time (and a long life).

A Petascale Trivia Answer: 3,200 Years

0.1 PFLOPS × 1 second = 10^14 calculations ≈ 1,000 scientists × (365 × 24 × 60 × 60 seconds/year) × 3,200 years
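This is easy to check; a minimal Python sketch of the arithmetic, using only the figures given above:

```python
# Back-of-the-envelope check of the petascale trivia answer.
machine_rate = 0.1e15        # 0.1 PFLOPS = 1e14 calculations per second
scientists = 1_000
rate_per_scientist = 1.0     # one hand calculation per second, no rest
seconds_per_year = 365 * 24 * 60 * 60

# Years needed for the scientists to match one second of machine work:
years = (machine_rate * 1.0) / (scientists * rate_per_scientist * seconds_per_year)
print(f"{years:,.0f} years")  # ~3,171 years, which rounds to the quoted 3,200
```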

From DOD's new Mana supercomputer in Hawaii:
- A Dell PowerEdge M610 cluster with 1,152 nodes
- Each node contains two 2.8 GHz Intel Nehalem processors, for a total of 9,216 compute cores
- That gives it a PEAK performance of 103 TFLOPS, or about 0.1 PFLOPS

From MHPCC Acting Director David L. Stinson
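The quoted peak follows from the core count and clock rate; a short sketch, assuming 4 double-precision floating-point operations per core per cycle (typical for Nehalem-class cores, but an assumption not stated on the slide):

```python
# Peak-performance estimate for the Mana system described above.
nodes = 1_152
cores_per_node = 8       # two quad-core Nehalem processors per node
clock_hz = 2.8e9         # 2.8 GHz
flops_per_cycle = 4      # assumption: 4 DP FLOPs per core per cycle

cores = nodes * cores_per_node            # 9,216 cores
peak = cores * clock_hz * flops_per_cycle
print(f"{peak / 1e12:.1f} TFLOPS")        # ~103.2 TFLOPS, i.e. ~0.1 PFLOPS
```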

Tuesday Dinner Vendor Updates (10 min. only): IBM, Appro, Hitachi Cable, Luxtera, Mellanox, ScaleMP, Tech-X, Mitrionics

Welcome To The 33rd HPC User Forum Meeting, September 2009

Introduction: Logistics
- Ask Mary if you need a receipt
- Meals and events: Wednesday tour and dinner plans
- We have a very tight agenda (as usual): please help us keep on time!
- Review handouts
- Note: We will post most of the presentations on the web site
- Please complete the evaluation form

HPC User Forum Mission To improve the health of the high performance computing industry through open discussions, information sharing and initiatives involving HPC users in industry, government and academia, along with HPC vendors and other interested parties.

HPC User Forum Goals
- Assist HPC users in solving their ongoing computing, technical and business problems
- Provide a forum for exchanging information, identifying areas of common interest, and developing unified positions on requirements
  - By working with users in other sectors and with vendors
  - To help direct and push vendors to build better products, which should also help vendors become more successful
- Provide members with a continual supply of information on: uses of high-end computers, new technologies, high-end best practices, market dynamics, computer systems and tools, benchmark results, and vendor activities and strategies
- Provide members with a channel to present their achievements and requirements to interested parties

Important Dates For Your Calendar

FUTURE HPC USER FORUM MEETINGS:

October 2009 International HPC User Forum Meetings:
- HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
- EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)

US Meetings:
- April 12-14, 2010, Dearborn, Michigan, at the Dearborn Inn
- September 2010, Seattle, Washington

Thank You To Our Meal Sponsors!
- Wednesday Breakfast -- Hitachi Cable America
- Wednesday Lunch -- Altair Engineering
- Wednesday Break -- Appro International & AMD
- Thursday Breakfast -- Mellanox Technologies
- Thursday Lunch -- Microsoft
- Thursday Break -- ScaleMP

1Q 2009 HPC Market Update

Q109 HPC Market Result – Down 16.8%

HPC Servers: $2.1B
- Supercomputers (over $500K): $802M
- Divisional ($250K-$500K): $237M
- Departmental ($100K-$250K): $754M
- Workgroup (under $100K): $282M

Source: IDC, 2009

Q109 Vendor Share in Revenue: HP 28.9%, IBM 25.3%, Dell 12.0%, NEC 9.4%, Sun 3.6%, Fujitsu 3.4%, Cray 2.9%, SGI 1.5%, Bull 0.6%, Appro 0.4%, Dawning 0.3%, Other 11.7%

Q109 Cluster Vendor Shares: HP 30.8%, Dell 21.7%, IBM 12.3%, Sun 5.7%, Fujitsu 2.6%, NEC 1.8%, Bull 1.1%, SGI 1.1%, Appro 0.7%, Dawning 0.6%, Other 21.6%

HPC Compared To IDC Server Numbers

HPC Qview Tie To Server Tracker: 1Q 2009 Data

Server Tracker (QST) data focus: compute nodes. HPC Qview data focus: the complete system ("everything needed to turn it on"). All WW servers, as reported in the IDC Server Tracker: $9.9B.

1. HPC Qview compute node revenues: ~$1.05B*
2. HPC special revenue recognition services: ~$474M. Includes systems sold through custom engineering, R&D offsets, or paid for over multiple quarters.
3. HPC computer system revenues beyond the base compute nodes: ~$576M. Includes interconnects and switches, inbuilt storage, scratch disks, OS, middleware, warranties, installation fees, service nodes, special cooling features, etc.

* This number ties the two data sets on an apples-to-apples basis
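The tie-out between the two views is simple addition; a minimal sketch using the rounded figures above:

```python
# Reconciling the Server Tracker view with the HPC Qview ($M, 1Q 2009).
compute_nodes = 1_048   # Qview compute node revenue (ties to the Server Tracker)
special_rev = 474       # special revenue recognition services
uplift = 576            # revenue beyond the base compute nodes

qview_total = compute_nodes + special_rev + uplift
print(f"${qview_total:,}M")   # $2,098M -- the ~$2.1B HPC Qview total
```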

OEM Mix Of HPC Special Revenue Recognition Services (2)

Non-SEC-reported product revenues = $474M: IBM 43%, HP 30%, Dell 12%, Sun 4%, Other 11%

Notes:
- Includes product sales that are not reported by OEMs as product revenue in a given quarter; sometimes HPC systems are paid for across a number of quarters or even years
- Includes NRE, if required for a specific system
- Includes custom engineering sales
- Some examples: Earth Simulator, ASCI Red, ASCI Red Storm, DARPA systems, and many small and medium HPC systems that are sold through a custom engineering or services group because they need extra things added

Areas Of HPC "Uplift" Revenues (3)

How the $576M "uplift" revenues are distributed: Computer hardware (in cabinet) 45%, Software 16%, External storage 12%, External interconnects 12%, Bundled warranties 8%, Misc. items 7% (approximate dollar values are sketched after the notes below)

Areas Of HPC "Uplift" Revenues (3) -- Notes:
- Computer hardware (in cabinet): hybrid nodes, service nodes, accelerators, GPGPUs, FPGAs, internal interconnects, in-built disks, in-built switches, special cabinet doors, special signal processing parts, etc.
- External interconnects: switches, cables, extra cabinets to hold them, etc.
- External storage: scratch disks, interconnects to them, cabinets to hold them, etc. (excludes user file storage devices)
- Software: includes both bundled and separately charged software if sold by the OEM or on the purchase contract -- the operating system, license fees, the entire middleware stack, compilers, job schedulers, etc. (excludes all ISV applications unless sold by the OEM and in the purchase contract)
- Bundled warranties
- Misc. items: since the HPC taxonomy includes everything required to turn on the system and make it operational, this covers items like bundled installation services, special features and other add-on hardware, and even a special paint job if required
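Applying the distribution above to the $576M total gives the approximate dollar value of each uplift category; a quick sketch (the percentages are from the chart above, the dollar figures are derived):

```python
# Approximate dollar split of the $576M of 1Q09 HPC "uplift" revenue.
uplift_total = 576  # $M
shares = {
    "Computer hardware (in cabinet)": 0.45,
    "Software": 0.16,
    "External storage": 0.12,
    "External interconnects": 0.12,
    "Bundled warranties": 0.08,
    "Misc. items": 0.07,
}
assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares cover 100% of the uplift
for category, share in shares.items():
    print(f"{category}: ~${uplift_total * share:,.0f}M")
```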

Special Paint Jobs Are Back …

http://www.afrl.hpc.mil/consolidated/hardware.php

2010 IDC HPC Research Areas

- Quarterly HPC forecast updates (until the world economy recovers)
- New HPC end-user based reports: clusters, processors, accelerators, storage, interconnects, system software, and applications
  - The evolution of government HPC budgets
  - China and Russia HPC trends
- Power and cooling research
- Developing a market model for middleware and management software
- Extreme computing data center assessment and benchmarking
- Tracking petascale and exascale initiatives

Agenda: Day One, Wednesday Morning

8:10am Introductions and Welcome, Steve Finn and Earl Joseph
Morning Session Chair: Steve Finn
8:15am Weather/climate presentation from ORNL, Jim Hack
8:45am Weather/climate presentation from NCAR, Henry Tufo
9:15am Weather/climate presentation from NASA/Goddard, Phil Webster
9:45am Two short vendor technology updates (Altair and Sun)
10:15am Break
10:30am Weather/climate presentation from NRL Monterey, Jim Doyle
11:00am Weather and Climate Directions from an IBM perspective, Jim Edwards
11:25am Panel on HPC Weather/Climate/Earth Sciences Requirements & Directions; Moderators: Steve Finn and Earl Joseph
12:00pm Networking Lunch

Lunch Break Thanks to Altair Engineering

Please Return Promptly at 1:00pm

Thank You Altair Engineering For Lunch

Agenda: Day One, Wednesday Afternoon

Afternoon Session Chair: Paul Muzio
1:00pm HPC in Europe, HECToR Update, Andrew Jones, NAG
1:30pm DOD HPCMP Program Update, Larry Davis
2:00pm Weather/climate Research at Northrop Grumman, Glenn Higgins
2:25pm Weather and Climate Directions from a Cray perspective, Per Nyberg
2:50pm Panel on Government and Political Issues, Concerns and Ideas for New Directions; Moderator: Charlie Hayes
3:30pm DICE Parallel File System Project, Tracey Wilson
4:00pm NCAR HPC User Site Tour, return by 6:00pm
6:00pm Networking break and time for 1-on-1 meetings
6:30pm Special Dinner Event

Welcome To Day 2 Of The HPC User Forum Meeting

Thank You To Our Meal Sponsors!
- Wednesday Breakfast -- Hitachi Cable America
- Wednesday Lunch -- Altair Engineering
- Wednesday Break -- Appro International
- Thursday Breakfast -- Mellanox Technologies
- Thursday Lunch -- Microsoft
- Thursday Break -- ScaleMP

Agenda: Day Two, Thursday Morning

8:10am Welcome, Earl Joseph and Steve Finn
Morning Session Chair: Douglas Kothe
8:15am Power Grid Research at PNNL, Mo Khaleel
8:45am HPC Data Center Power and Cooling Issues, and New Ways to Measure HPC Systems, Roger Panton, Avetec
9:15am Compiler and Tools: User Requirements from ARSC, Edward Kornkven
9:45am New HPC Directions at Microsoft, Roger Barga
10:15am Break
10:30am Technical Panel on HPC Front-End Compiler Requirements and Directions; Moderators: Robert Singleterry, Vince Scarafino
12:15pm Networking Lunch


Lunch Break Thanks to Microsoft

Please Return Promptly at 1:00pm

Thank You Microsoft For Lunch

Agenda: Day Two, Thursday Afternoon

Afternoon Session Chair: Jack Collins
1:00pm ARL HPC User Site Update, Thomas Kendall
1:30pm Weather/climate presentation from NCAR, John Michalakes
2:00pm Technical Panel on HPC Application Scaling Issues, Requirements and Trends; Moderators: Doug Kothe and Paul Muzio; Panel members:
3:15pm Short vendor technology update (Microsoft)
3:30pm Break
4:00pm Weather/climate presentation from NASA Langley, Mike Little
4:30pm "Spider," the Largest Lustre File System, ORNL, Galen Shipman
5:00pm Meeting Wrap-Up and Future Meeting Dates, Earl Joseph and Steve Finn
5:00pm Meeting Ends

Important Dates For Your Calendar

FUTURE HPC USER FORUM MEETINGS:

October 2009 International HPC User Forum Meetings:
- HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
- EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)

US Meetings:
- April 12-14, 2010, Dearborn, Michigan, at the Dearborn Inn
- September 2010, Seattle, Washington

Thank You For Attending The 33rd HPC User Forum Meeting

Questions?

Please email: [email protected]

Or check out: www.hpcuserforum.com


HPC User Forum Steering Committee Meeting September 2009

How Did The Meeting Go?
- What worked well?
- What needs to be changed or improved?
- Dates and locations for the next Steering Committee meetings?
  - SC09 – Monday
  - January 2010, at NASA

Important Dates For Your Calendar

FUTURE HPC USER FORUM MEETINGS:

October 2009 International HPC User Forum Meetings:
- HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
- EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)

US Meetings:
- April 12-14, 2010, Dearborn, Michigan, at the Dearborn Inn
- September 2010, Seattle, Washington

Questions?

Please email: [email protected]

Or check out: www.hpcuserforum.com

Q408 vs. Q109

Revenue ($K):
Segment          Q408        Q109        Sequential Growth
Supercomputer      769,055     801,874     +4.3%
Divisional         373,534     237,496    -36.4%
Departmental       953,635     753,893    -20.9%
Workgroup          399,757     282,198    -29.4%
Grand Total      2,495,981   2,075,460    -16.8%

Shipments:
Segment          Q408        Q109        Sequential Growth
Supercomputer          464         342    -26.3%
Divisional           1,083         852    -21.3%
Departmental         6,762       4,291    -36.5%
Workgroup           27,697      16,189    -41.5%
Grand Total         36,006      21,845    -39.3%
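The sequential growth column is a simple quarter-over-quarter change; a minimal sketch reproducing the revenue figures:

```python
# Quarter-over-quarter HPC revenue growth, Q408 -> Q109 ($K).
revenue = {  # segment: (Q408, Q109)
    "Supercomputer": (769_055, 801_874),
    "Divisional": (373_534, 237_496),
    "Departmental": (953_635, 753_893),
    "Workgroup": (399_757, 282_198),
}
for segment, (q408, q109) in revenue.items():
    growth = (q109 - q408) / q408 * 100
    print(f"{segment}: {growth:+.1f}%")  # +4.3%, -36.4%, -20.9%, -29.4%
```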

HPC Qview Tie To Server Tracker: 1Q 2009 Data ($M)

                                            HP     IBM    Dell   Sun    Other   Total
HPC Server Revenue*                         $268   $178   $102   $36    $464    $1,048
HPC Special Revenue Recognition Services    $140   $207   $57    $20    $50     $474
Revenue Beyond Base Nodes                   $190   $140   $90    $19    $137    $576
Q1 HPC Total**                              $598   $525   $249   $75    $651    $2,098
Q1 Share                                    29%    25%    12%    4%     31%     100%

* This row ties to the Server Tracker.
** This total ties to the HPC Qview.


Government Panel Questions

Government Panel Questions #1

If you believe that the US's greatest asset in the next 25 years will be our ability to lead the world in the development of intellectual property:
- Do you believe the USG is providing sufficient investment to ensure US competitiveness in science and technology in general, and HPC in particular? Elaborate.
- What do you think the USG should or should not do to help HPC?

Government Panel Questions #2

Most hardware vendors will agree that profit margins on USG HPC procurements, especially those at the high end, are often negligible at best.

a. While it is generally understood that the USG is obligated to try to get the best value for its money, is there a greater obligation beyond a specific procurement for the USG's behavior towards the industry in general?

b. If you believe a healthy US HPC community is important for US competitiveness, what, if anything, should the USG specifically do to help the financial or business health of the US HPC vendors?

c. Should the vendors, via one or more of the industry groups, lobby for more lenient procurement terms, less stringent benchmarks, and lower penalties in advanced system procurements?

d. Or, should the vendors simply "no bid" more frequently, until the USG relaxes its procurement terms?

Government Panel Questions #3

Do you agree that the USG emphasis, especially within DOE and NSF, in the area of petascale and exascale computing is appropriate and the best use of USG funding for support of the US HPC industry and HPC technology development? Please elaborate.

Government Panel Questions #4

Over the past forty years or so, up to about the middle 1990s, industry traditionally followed the lead of the USG in adopting HPC technology. For example, Cray Research sold more Y-MP supercomputers to industry than to governments. Why hasn't US industry followed the lead of the USG in the race to petascale computing?

a. Is it because their traditional applications don't need to scale that high?

b. Is it because ISV software (and their own software) doesn't scale?

c. Is it because of the software per-CPU costs?

d. Will this affect US competitiveness?

e. What action should the USG take, if any, to encourage industry adoption of high-end HPC specifically, or HPC of any size in general?

Government Panel Questions #5

At the National Science Foundation there have been two major HPC system funding programs over the past three years:

a. The Track 1 Program, to fund the world's most powerful "leadership class" petascale supercomputers: an IBM system, developed under the DARPA HPCS Program, planned for installation at NCSA in 2011. (It is important to note that Cray is also developing a multi-petaFLOPS system under the DARPA HPCS Program, which is currently expected to be installed at Oak Ridge National Laboratory, funded by DOE.)

b. The Track 2 Program, four annual procurements to install "mid-range" systems smaller than the Track 1 system but of a size to bridge the gap between current HPC systems and more advanced petascale systems. The first Track 2 system was installed at TACC at the University of Texas. The second and third systems are scheduled for the University of Tennessee at ORNL and for the University of Pittsburgh and Carnegie Mellon at the Pittsburgh Supercomputing Center. The results of the fourth annual procurement, promised to be a multiple buy of up to four systems, have yet to be announced.

Questions:

a. Do you agree specifically with the NSF Track 1 and 2 programs, or do you think NSF's resources should have been, or should in the future be, distributed more broadly throughout academia? Why?

b. Now that the fourth and last Track 2 procurement is about over, what do you recommend NSF should do next with respect to HPC?

Government Panel Questions #6 Do you believe the USG is funding HPC software to the degree necessary to ensure US leadership?

a. Specifically with respect to the petascale programs?

b. With respect to assisting the ISVs or industrial corporations themselves?

c. What other actions would you recommend to improve the US posture with respect to software, especially for US competitiveness?

Government Panel Questions #7 Do you think programs like DARPA HPCS will lead the mainstream HPC industry toward higher productivity and performance, or will the technologies developed for these programs split off from most of HPC and go their own way?

Government Panel Questions #8

In summation, if any panel members have comments regarding HPC public policy not previously expressed, please take a few minutes to summarize the points of importance to you.