Overview of SPEC HPG Benchmarks
SPEC BOF SC2003
Matthias Mueller
High Performance Computing Center Stuttgart
[email protected]
Kumaran Kalyanasundaram,
G. Gaertner, W. Jones, R. Eigenmann,
R. Lieberman, M. van Waveren, and B. Whitney
SPEC High Performance Group
Outline
• Some general remarks about benchmarks
• Benchmarks currently produced by SPEC HPG:
– OMP
– HPC2002
Where is SPEC Relative to Other Benchmarks?
There are many metrics; each one has its purpose. They span a spectrum from raw computer hardware to full applications:
• Raw machine performance: Tflops
• Microbenchmarks: Stream
• Algorithmic benchmarks: Linpack
• Compact apps/kernels: NAS benchmarks
• Application suites: SPEC
• User-specific applications: custom benchmarks
Why do we need benchmarks?
• Identify problems: measure machine properties
• Time evolution: verify that we make progress
• Coverage: help vendors by providing representative codes
  – Increase competition through transparency
  – Drive future development (see SPEC CPU2000)
• Relevance: help customers choose the right computer
Comparison of different benchmark classes

Class       | Coverage | Relevance | Identify problems | Time evolution
Micro       | 0        | 0         | ++                | +
Algorithmic | -        | 0         | +                 | ++
Kernels     | 0        | 0         | +                 | +
SPEC        | +        | +         | +                 | +
Apps        | -        | ++        | 0                 | 0
SPEC OMP
• Benchmark suite developed by SPEC HPG
• Benchmark suite for performance testing of shared-memory processor systems
• Uses OpenMP versions of SPEC CPU2000 benchmarks
• SPEC OMP mixes integer and FP in one suite
• OMPM is focused on 4-way to 16-way systems
• OMPL is targeting 32-way and larger systems
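As a reminder of the programming model being exercised, the sketch below shows generic OpenMP loop-level parallelism of the kind the SPEC OMP codes rely on; it is an illustrative example, not code taken from the suite.

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

/* Generic OpenMP loop-level parallelism, similar in spirit to the
 * directives used in shared-memory codes (not actual SPEC OMP source). */
int main(void)
{
    static double a[N], b[N];
    double sum = 0.0;

    /* Initialize in parallel; each thread works on a chunk of the arrays. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        b[i] = 2.0 * i;
    }

    /* Parallel reduction: partial sums are combined across threads. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];

    printf("threads=%d dot=%g\n", omp_get_max_threads(), sum);
    return 0;
}
```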
SPEC HPC2002 Benchmark
• Full application benchmarks (including I/O) targeted at HPC platforms
• Currently three applications:
  – SPECenv: weather forecast
  – SPECseis: seismic processing, used in the search for oil and gas
  – SPECchem: computational chemistry, used in the chemical and pharmaceutical industries (GAMESS)
• Serial and parallel (OpenMP and/or MPI)
• All codes include several data sizes
Submitted Results

[Chart: number of submitted results per year (2001, 2002, 2003) for OMPM, OMPL, and HPC2002; vertical axis from 0 to 40.]
Details of SPEC OMP
SPEC OMP Applications

Code    | Application                         | Language | Lines
ammp    | Molecular dynamics                  | C        | 13500
applu   | CFD, partial LU                     | Fortran  | 4000
apsi    | Air pollution                       | Fortran  | 7500
art     | Image recognition / neural networks | C        | 1300
fma3d   | Crash simulation                    | Fortran  | 60000
gafort  | Genetic algorithm                   | Fortran  | 1500
galgel  | CFD, Galerkin FE                    | Fortran  | 15300
equake  | Earthquake modeling                 | C        | 1500
mgrid   | Multigrid solver                    | Fortran  | 500
swim    | Shallow water modeling              | Fortran  | 400
wupwise | Quantum chromodynamics              | Fortran  | 2200
CPU2000 vs. OMPL2001

Characteristic     | CPU2000           | OMPL2001
Max. working set   | 200 MB            | 6.5 GB
Memory needed      | 256 MB            | 8 GB
Benchmark runtime  | 30 min @ 300 MHz  | 9 hrs @ 300 MHz
Language           | C, C++, F77, F90  | C, F90, OpenMP
Focus              | Single CPU        | > 16 CPU system
System type        | Cheap desktop     | Engineering MP sys
Runtime            | 24 hours          | 75 hours
Runtime 1 CPU      | 24 hours          | 1000 hours
Run modes          | Single and rate   | Parallel
Number benchmarks  | 26                | 9
Iterations         | Median 3 or more  | 2 or more
Source mods        | Not allowed       | Allowed
Baseline flags     | Max of 4          | Any, same for all
Reference system   | 1 CPU @ 300 MHz   | 16 CPU @ 300 MHz
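The "Reference system" row is what reported results are normalized against. As background, SPEC-style composite scores are generally formed from per-benchmark ratios of reference run time to measured run time, combined with a geometric mean; the sketch below illustrates that calculation with made-up numbers (the times and benchmark count are placeholders, not data from CPU2000 or OMPL2001).

```c
#include <stdio.h>
#include <math.h>

/* Illustrative SPEC-style scoring: ratio = reference_time / measured_time,
 * overall metric = geometric mean of the ratios. The numbers below are
 * hypothetical placeholders, not actual reference or measured times. */
int main(void)
{
    const double ref[]      = { 9000.0, 12000.0, 7500.0 };  /* seconds, hypothetical */
    const double measured[] = {  450.0,   600.0,  300.0 };  /* seconds, hypothetical */
    const int n = (int)(sizeof ref / sizeof ref[0]);

    double log_sum = 0.0;
    for (int i = 0; i < n; i++) {
        double ratio = ref[i] / measured[i];
        log_sum += log(ratio);
        printf("benchmark %d: ratio %.1f\n", i, ratio);
    }
    printf("geometric mean: %.1f\n", exp(log_sum / n));
    return 0;
}
```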
SPEC OMPL Results: Applications with scaling to 128
SPEC OMPL Results: Superlinear scaling of applu
SPEC OMPL Results: Applications with scaling to 64
Details of SPEC HPC2002
SPEC ENV2002
• Based on the WRF weather model, a state-of-the-art, non-hydrostatic mesoscale weather model; see http://www.wrf-model.org
• The WRF (Weather Research and Forecasting) Modeling System development project is a multi-year effort undertaken by several agencies.
• Members of the WRF Scientific Board include representatives from EPA, FAA, NASA, NCAR, NOAA, NRL, USAF, and several universities.
• 25,000 lines of C and 145,000 lines of F90
SPEC ENV2002
• Medium data set: SPECenvM2002
  – 260x164x35 grid over the continental United States
  – 22 km resolution
  – Full physics
  – I/O associated with startup and the final result
  – Simulates weather for a 24-hour period starting Saturday, November 3rd, 2001 at 12:00 A.M.
• SPECenvS2002 is provided for benchmark researchers interested in smaller problems.
• Test and train data sets for porting and feedback.
• The benchmark runs use restart files that are created after the model has run for several simulated hours. This ensures that cumulus and microphysics schemes are fully developed during the benchmark runs.
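To put the medium data set in perspective, a quick back-of-the-envelope calculation of the problem size (approximate; the point count times the 22 km spacing only roughly equals the domain extent):

\[
260 \times 164 \times 35 = 1{,}492{,}400 \ \text{grid points}
\]
\[
(260 \times 22\,\text{km}) \times (164 \times 22\,\text{km}) \approx 5700\,\text{km} \times 3600\,\text{km}
\]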
SPEC HPC2002 Results: SPECenv scaling
SPEC HPC2002 Results: SPECseis scaling
SPEC HPC2002 Results: SPECchem scaling
Hybrid Execution for SPECchem
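The results referenced on this slide use hybrid execution, i.e. MPI ranks distributed across nodes combined with OpenMP threads inside each rank. The sketch below shows that generic pattern under that assumption; it is an illustration, not code from SPECchem/GAMESS.

```c
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

/* Generic hybrid MPI+OpenMP pattern: MPI ranks across nodes,
 * OpenMP threads inside each rank. Illustration only. */
int main(int argc, char **argv)
{
    int provided, rank, size;
    /* Request a threading level compatible with OpenMP regions. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n = 4000000;                 /* total work items */
    long chunk = n / size;                  /* even split across ranks */
    long lo = rank * chunk;
    long hi = (rank == size - 1) ? n : lo + chunk;

    double local = 0.0;
    /* Each rank parallelizes its own slice with OpenMP threads. */
    #pragma omp parallel for reduction(+:local)
    for (long i = lo; i < hi; i++)
        local += 1.0 / (double)(i + 1);     /* stand-in for real work */

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("ranks=%d threads/rank=%d sum=%g\n",
               size, omp_get_max_threads(), global);

    MPI_Finalize();
    return 0;
}
```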
Current and Future Work of SPEC HPG
• SPEC HPC:
– Update of SPECchem
– Improving portability, including tools
– Larger datasets
• New release of SPEC OMP:
– Inclusion of alternative sources
– Merge OMPM and OMPL on one CD
Adoption of new benchmark codes
• Remember that we need to drive future development!
• Updates and new codes are important to stay relevant.
• Possible candidates:
  – Should represent a type of computation that is regularly performed on HPC systems
  – We are currently examining CPU2004 for candidates
  – Your applications are very welcome! Please contact SPEC HPG or me <[email protected]> if you have a code for us.
Conclusion and Summary
• Results of OMPL and HPC2002:
  – Scalability of many programs to 128 CPUs
• Larger data sets show better scalability
• SPEC HPG will continue to update and improve the benchmark suites so that they remain representative of the work you do with your applications!
BACKUP
What is SPEC?
The Standard Performance Evaluation Corporation
(SPEC) is a non-profit corporation formed to
establish, maintain and endorse a standardized
set of relevant benchmarks that can be applied to
the newest generation of high-performance
computers. SPEC develops suites of benchmarks
and also reviews and publishes submitted results
from our member organizations and other
benchmark licensees.
For more details see http://www.spec.org
SPEC HPG = SPEC High-Performance Group
• Founded in 1994
• Mission: To establish, maintain, and endorse a
suite of benchmarks that are representative of
real-world high-performance computing
applications.
• SPEC HPG includes members from both industry and academia.
• Benchmark products:
– SPEC OMP (OMPM2001, OMPL2001)
– SPEC HPC2002 released at SC 2002
Currently active SPEC HPG Members
• Fujitsu
• HP
• IBM
• Intel
• SGI
• SUN
• UNISYS
• Purdue University
• University of Stuttgart
SPEC Members and Associates
• Members:
3DLabs * Advanced Micro Devices * Apple Computer, Inc. * ATI Research * Azul Systems, Inc. * BEA Systems * Borland * Bull S.A. * Dell * Electronic Data Systems * EMC * Encorus Technologies * Fujitsu Limited * Fujitsu Siemens * Fujitsu Technology Solutions * Hewlett-Packard * Hitachi Data Systems * IBM * Intel * ION Computer Systems * Johnson & Johnson * Microsoft * Mirapoint * Motorola * NEC - Japan * Network Appliance * Novell, Inc. * Nvidia * Openwave Systems * Oracle * Pramati Technologies * PROCOM Technology * SAP AG * SGI * Spinnaker Networks * Sun Microsystems * Sybase * Unisys * Veritas Software * Zeus Technology *
• Associates:
Argonne National Laboratory * CSC - Scientific Computing Ltd. * Cornell University * CSIRO * Defense Logistics Agency * Drexel University * Duke University * Fachhochschule Gelsenkirchen, University of Applied Sciences * Harvard University * JAIST * Leibniz Rechenzentrum - Germany * Los Alamos National Laboratory * Massey University, Albany * NASA Glenn Research Center * National University of Singapore * North Carolina State University * PC Cluster Consortium * Purdue University * Queen's University * Seoul National University * Stanford University * Technical University of Darmstadt * Tsinghua University * University of Aizu - Japan * University of California - Berkeley * University of Edinburgh * University of Georgia * University of Kentucky * University of Illinois - NCSA * University of Maryland * University of Miami * University of Modena * University of Nebraska - Lincoln * University of New Mexico * University of Pavia * University of Pisa * University of South Carolina * University of Stuttgart * University of Tsukuba * Villanova University * Yale University *
CPU2000 vs. OMPM2001

Characteristic     | CPU2000           | OMPM2001
Max. working set   | 200 MB            | 1.6 GB
Memory needed      | 256 MB            | 2 GB
Benchmark runtime  | 30 min @ 300 MHz  | 5 hrs @ 300 MHz
Language           | C, C++, F77, F90  | C, F90, OpenMP
Focus              | Single CPU        | < 16 CPU system
System type        | Cheap desktop     | MP workstation
Runtime            | 24 hours          | 34 hours
Runtime 1 CPU      | 24 hours          | 140 hours
Run modes          | Single and rate   | Parallel
Number benchmarks  | 26                | 11
Iterations         | Median 3 or more  | Worst of 2, median of 3
Source mods        | Not allowed       | Allowed
Baseline flags     | Max of 4          | Any, same for all
Reference system   | 1 CPU @ 300 MHz   | 4 CPU @ 350 MHz
Program Memory Footprints

Code    | OMPM2001 (MB) | OMPL2001 (MB)
wupwise | 1480          | 5280
swim    | 1580          | 6490
mgrid   | 450           | 3490
applu   | 1510          | 6450
galgel  | 370           | -
equake  | 860           | 5660
apsi    | 1650          | 5030
gafort  | 1680          | 1700
fma3d   | 1020          | 5210
art     | 2760          | 10670
ammp    | 160           | -

("-" indicates the code is not part of OMPL2001.)
SPEC ENV2002 – data generation
• The WRF datasets used in SPEC ENV2002 are
created using the WRF Standard Initialization (SI)
software and standard sets of data used in
numerical weather prediction.
• The benchmark runs use restart files that are
created after the model has run for several
simulated hours. This ensures that cumulus and
microphysics schemes are fully developed during
the benchmark runs.
SPECenv execution models on a Sun Fire 6800
• Medium data set scales better
• OpenMP best for small size
• MPI best for medium size
SPECseis execution models on a Sun Fire 6800
• Medium data set scales better
• OpenMP scales better than MPI
SPECchem execution models on a Sun Fire 6800
• Medium data set shows better scalability
• MPI is better than OpenMP
SPEC OMP Results
• 75 submitted results for OMPM
• 28 submitted results for OMPL
Vendor | Architecture | CPU            | Speed (MHz) | L1 Inst | L1 Data | L2     | L3
HP     | Superdome    | PA-8700+       | 875         | 0.75 MB | 1.5 MB  | -      | -
HP     | Superdome    | Itanium2       | 1500        | 16 KB   | 16 KB   | 256 KB | 6144 KB
SUN    | Fire 15K     | UltraSPARC III | 1200        | 32 KB   | 64 KB   | 8 MB   | -
SGI    | O3800        | R12000         | 400         | 32 KB   | 32 KB   | 8 MB   | -