Building the IESP Roadmap


Transcript: Building the IESP Roadmap

4.3.1 Pioneering Applications: Priority Research Directions
Criteria for Consideration
(1) Demonstrated need for Exascale
(2) Significant scientific impact in: basic physics, environment, engineering, life sciences, materials
(3) Realistic productive pathway (over 10 years) to exploitation of Exascale

Summary of Barriers & Gaps
What will Pioneering Apps do to address the barriers & gaps in associated Priority Research Directions (PRDs)?

Potential Impact on Software
What new software capabilities will result? What new methods and tools will be developed?

Potential impact on user community (usability, capability, etc.)
How will this realistically impact the research advances targeted by pioneering applications that may benefit from exascale systems? What is the timescale in which that impact may be felt?
4.3.1 PIONEERING APPLICATIONS
Pioneering applications with a demonstrated need for Exascale in order to have significant scientific impact on associated priority research directions (PRDs), with a productive pathway to exploitation of computing at the extreme scale.

Science Milestones roadmap (horizontal axis 2010-2019, with machine capability markers at 1 PF in 2010, 10 PF around 2012, 100 PF mid-decade, and 1 EF around 2018):
• Single hadron physics
• Multi-hadron physics
• Electroweak symmetry breaking
• Regional decadal climate
• Global coupled climate processes
• Integrated plasma core-edge simulations (around 10 PF, 2012)
• Whole-system burning plasma simulations (around 1 EF, 2018)
• New capability 1
PIONEERING APPS
• Technology drivers
– Advanced architectures with greater capability but with
formidable software development challenges
• Alternative R&D strategies
– Choosing architectural platform(s) capable of addressing
PRDs of Pioneering Apps on path to exploiting Exascale
• Recommended research agenda
– Effective collaborative alliance between Pioneering Apps,
CS, and Applied Math with an associated strong V&V effort
• Crosscutting considerations
– Identifying possible common areas of software development
need among the Pioneering Apps
– Addressing common need to attract, train, and assimilate
young talent into this general research arena
4.3.1 Pioneering Applications: High Energy Physics

Key challenges
• Achieving the highest possible sustained applications performance for the lowest cost
• Exploiting architectures with imbalanced node performance and inter-node communications
• Developing multi-layered algorithms and implementations to exploit on-chip (heterogeneous) capabilities, fast memory, and massive system parallelism
• Tolerance to and recovery from system faults at all levels over long runtimes

Summary of research direction
The applications community will develop:
• Multi-layer, multi-scale algorithms and implementations
• Optimised single-core/single-chip complex linear algebra routines
• Mixed precision arithmetic for fast memory access and off-chip communications (see the sketch following this slide)
• Algorithms that tolerate hardware without error detection/correction
• Verification of all components (algorithms, software and hardware)
• Data management and standards for shared data use

Potential impact on software component
Generic software components required:
• Performance analysis tools
• Highly parallel, high bandwidth I/O
• Efficient compilers for multi-layered parallel algorithms targeting heterogeneous architectures
• Automatic recovery from hardware/system errors
• Robust global file system and metadata standards

Potential impact on usability, capability, and breadth of community
• Stress testing and verification of exascale hardware and system software
• Development of new stochastic and linear solver algorithms
• Reliable fault-tolerant massively parallel systems
• Global data sharing and interoperability
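
The mixed-precision and optimised linear algebra items above are easiest to see in a small sketch. The following is a minimal, illustrative example of mixed-precision iterative refinement, not code from any lattice QCD package: the expensive solve runs in single precision (cheap memory traffic and communication), while residuals are accumulated in double precision. All matrix sizes and values are hypothetical.

```python
import numpy as np

def mixed_precision_solve(A, b, tol=1e-12, max_iter=20):
    """Solve A x = b by iterative refinement: the costly solve runs in
    float32, residuals are accumulated in float64."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                      # double-precision residual
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        dx = np.linalg.solve(A32, r.astype(np.float32))  # cheap correction
        x += dx.astype(np.float64)
    return x

# Hypothetical usage on a small, well-conditioned system
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500)) + 500 * np.eye(500)
b = rng.standard_normal(500)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))   # residual at double-precision level
```

The design point is the one the slide makes: the bulk of the arithmetic and data movement happens in the narrower format, while a small amount of double-precision work restores full accuracy.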
4.3.1 Pioneering Applications: High Energy Physics
• Technology drivers
– massive parallelism
– heterogeneous microprocessor architectures
• Alternative R&D strategies
– optimisation of computationally demanding kernels
– algorithms targeting all levels of hardware parallelism
• Recommended research agenda
– co-design of hardware and software
• Cross-cutting considerations
– automated fault tolerance at all levels
– multi-level algorithm implementation and optimisation
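
Automated fault tolerance over long runtimes, raised both in the HEP challenges and in the cross-cutting considerations above, is often handled at the application level with checkpoint/restart. Below is a minimal, generic sketch, not tied to any particular HEP code or checkpointing library; the file name, state layout, and checkpoint interval are assumptions.

```python
import os
import pickle
import tempfile

CHECKPOINT = "state.ckpt"   # hypothetical checkpoint file name

def save_checkpoint(step, state, path=CHECKPOINT):
    """Write the checkpoint atomically so a crash mid-write cannot corrupt it."""
    fd, tmp = tempfile.mkstemp(dir=".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)   # atomic rename

def load_checkpoint(path=CHECKPOINT):
    """Resume from the last checkpoint if one exists, otherwise start fresh."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            ckpt = pickle.load(f)
        return ckpt["step"], ckpt["state"]
    return 0, {"sum": 0.0}

def run(total_steps=1_000_000, ckpt_interval=10_000):
    step, state = load_checkpoint()
    while step < total_steps:
        state["sum"] += step * 1e-6      # stand-in for real physics work
        step += 1
        if step % ckpt_interval == 0:
            save_checkpoint(step, state)
    return state

if __name__ == "__main__":
    print(run())
```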
Industrial challenges in the Oil & Gas industry: Depth Imaging roadmap
[Figure: "Algorithmic complexity vs. corresponding computing power", 1995-2020. Vertical axes: algorithm complexity (flops, up to ~10^15) and HPC power PAU (TF); horizontal axis: year. Depth-imaging algorithms progress from asymptotic approximation imaging and paraxial isotropic/anisotropic imaging, through isotropic/anisotropic modeling and RTM, elastic modeling/RTM and isotropic/anisotropic FWI, to visco-elastic modeling, elastic FWI, visco-elastic FWI, and petro-elastic inversion. Sustained performance for different frequency content over an 8-day processing duration: 56 TF for 3-18 Hz (RTM), 900 TF for 3-35 Hz (elastic FWI), and 9.5 PF for 3-55 Hz (visco-elastic FWI).]
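
Using only the figures quoted in the chart, a rough back-of-envelope converts each sustained-performance milestone into the aggregate operation count implied by an 8-day processing run:

```python
# Aggregate floating-point operations implied by the depth-imaging roadmap:
# sustained rate (flop/s) x 8-day processing duration (s).
SECONDS = 8 * 24 * 3600          # 691,200 s

milestones = {
    "3-18 Hz RTM":               56e12,   # 56 TF sustained
    "3-35 Hz elastic FWI":       900e12,  # 900 TF sustained
    "3-55 Hz visco-elastic FWI": 9.5e15,  # 9.5 PF sustained
}

for name, rate in milestones.items():
    print(f"{name}: ~{rate * SECONDS:.1e} operations")
# roughly 3.9e19, 6.2e20, and 6.6e21 operations respectively
```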
High Performance Computing as key-enabler
[Figure, courtesy AIRBUS France: capability achieved during one night batch vs. available computational capacity (flop/s), 1980-2030. Machine capacity grows from 1 Giga (10^9) through 1 Tera (10^12), 1 Peta (10^15), and 1 Exa (10^18) towards 1 Zeta (10^21) flop/s, while capacity, measured as the number of overnight load cases run, grows from ~10^2 to ~10^6 (a factor of ~10^6). CFD fidelity advances from low-speed and high-speed RANS through unsteady RANS to LES, and the capability achieved in one night batch progresses from HS design data sets through CFD-based loads & HQ, aero optimisation & CFD-CSM, and full MDO with CFD-based noise simulation, towards real-time CFD-based in-flight simulation. "Smart" use of HPC power: algorithms, data mining, knowledge.]
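
The chart's two axes are linked by a simple capacity relation: the number of load cases that fit in one overnight batch is roughly the sustained rate times the length of the night divided by the cost of one case. The sketch below uses purely illustrative numbers; the 12-hour window and the per-case cost are assumptions, not Airbus figures.

```python
# Illustrative only: at a fixed cost per load case, overnight capacity
# scales linearly with sustained machine performance, which is the
# chart's "x10^6" jump from Tera- to Exa-class machines.
WINDOW_S = 12 * 3600        # assumed 12-hour overnight window
COST_PER_CASE = 1e14        # assumed flops per CFD load case (illustrative)

for label, rate in [("1 Tera", 1e12), ("1 Peta", 1e15), ("1 Exa", 1e18)]:
    cases = rate * WINDOW_S / COST_PER_CASE
    print(f"{label} flop/s sustained -> ~{cases:g} overnight load cases")
# 0.432, 432, 432000 cases: a factor of 10^6 from Tera to Exa
```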
Computational Challenges and Needs for Academic and Industrial Applications Communities
Roadmap 2003 → 2006 → 2007 → 2010 → 2015: computations with smaller and smaller scales in larger and larger geometries → a better understanding of physical phenomena → a more effective help for decision making → a better optimisation of the production (margin benefits).

Target geometries and physics along the roadmap:
• Consecutive thermal fatigue reactor event: no experimental approach up to now; computations enable a better understanding of the wall thermal loading in an injection, and knowing the root causes of the event allows a new design to be defined that avoids the problem. Computation with an LES approach for turbulence modelling and a refined mesh near the wall.
• Part of a fuel assembly / 3 grid assemblies: better understanding of vibration phenomena and wear-out of the rods.
• 9 fuel assemblies: will enable the study of side effects implied by the flow around neighbouring fuel assemblies.
• The whole vessel.

Computational requirements along the roadmap (mesh size, operation count, machine, storage, memory):
• 2003: 10^6 cells, 3×10^13 operations; Fujitsu VPP 5000 (1 of 4 vector processors), 2-month computation; ~1 GB of storage, 2 GB of memory
• 2006: 10^7 cells, 6×10^14 operations; IBM Power5 cluster (400 processors), 9-day computation; ~15 GB of storage, 25 GB of memory
• 2007: 10^8 cells, 10^16 operations; IBM Blue Gene/L, 20 Tflops during 1 month; ~200 GB of storage, 250 GB of memory
• 2010: 10^9 cells, 3×10^17 operations; 600 Tflops during 1 month; ~1 TB of storage, 2.5 TB of memory
• 2015: 10^10 cells, 5×10^18 operations; 10 Pflops during 1 month; ~10 TB of storage, 25 TB of memory

Bottlenecks identified along the roadmap include pre-processing that is not parallelized, mesh generation, solver scalability, and visualisation.
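
A quick consistency check on the roadmap figures above: memory scales almost linearly with mesh size (about 2-2.5 kB per cell), while the operation count per cell grows by roughly an order of magnitude across the roadmap. The numbers below are taken directly from the list above.

```python
# Per-cell memory and operation counts derived from the roadmap figures above.
stages = [  # (cells, operations, memory_bytes)
    (1e6,  3e13, 2e9),      # 2 GB
    (1e7,  6e14, 25e9),     # 25 GB
    (1e8,  1e16, 250e9),    # 250 GB
    (1e9,  3e17, 2.5e12),   # 2.5 TB
    (1e10, 5e18, 25e12),    # 25 TB
]
for cells, ops, mem in stages:
    print(f"{cells:.0e} cells: {mem / cells / 1e3:.1f} kB/cell, "
          f"{ops / cells:.1e} ops/cell")
# memory stays at ~2.0-2.5 kB per cell; ops/cell grows from 3e7 to 5e8
```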
From sequences to structures: HPC Roadmap
Roadmap 2009 → 2011 → 2015 and beyond. Grand Challenge GENCI/CCRT. Reference: Proteins 69 (2007) 415.

Computations using more and more sophisticated bio-informatics and physical modelling approaches → identification of protein structure and function.
• 2009: Identify all protein sequences using public resources and metagenomics data, and systematically model the proteins belonging to the family (Modeller software). 1 family: 5×10^3 cpu/~week, ~25 GB of storage, 500 GB of memory.
• 2011: Improve the prediction of protein structure by coupling new bio-informatics algorithms and massive molecular dynamics simulation approaches. 1 family: 5×10^4 cpu/~week, ~5 TB of storage, 5 TB of memory.
• 2015 and beyond: Systematic identification of the biological partners of proteins. 1 family: ~10^4×KP cpu/~week, ~5×CSP TB of storage, 5×CSP TB of memory.
(CSP: proteins structurally characterized, ~10^4.)
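
Reading the "cpu/~week" figures as that many cores occupied for roughly one week (an interpretation, since the slide does not spell out the unit), the per-family cost converts into core-hours as follows:

```python
# Rough conversion of the per-family figures above into core-hours,
# assuming "cpu/~week" means that many cores busy for about one week.
HOURS_PER_WEEK = 7 * 24  # 168

for label, cores in [("2009", 5e3), ("2011", 5e4)]:
    print(f"{label}: ~{cores * HOURS_PER_WEEK:.1e} core-hours per family")
# 2009: ~8.4e5 core-hours, 2011: ~8.4e6 core-hours
# (the 2015+ figure scales further with the KP factor quoted on the slide)
```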
PIONEERING APPS: Fusion Energy Sciences
Criteria for Consideration
(1) Demonstrated need for Exascale
-- FES applications currently utilize LCFs at ORNL and ANL, demonstrating
scalability of key physics with increased computing capability
(2) Significant Scientific Impact: (identified at DOE Grand Challenges Workshop)
-- high physics fidelity integration of multi-physics, multi-scale FES dynamics
-- burning plasmas/ITER physics simulation capability
(3) Productive Pathway (over 10 years) to Exploitation of Exascale
-- ability to carry out confinement simulations (including turbulence-driven
transport) demonstrates ability to include higher physics fidelity components with
increased computational capability
-- needed for both of the areas identified in (2) as priority research directions
PIONEERING APPS: Fusion Energy Sciences
Summary of Barriers & Gaps
(1) high physics fidelity integration of multi-physics, multi-scale FES dynamics
-- FES applications for macroscopic stability, turbulent transport, edge physics (where atomic processes are important), etc. have demonstrated at various levels of efficiency the capability of using existing LCFs
-- Associated Barrier & Gap: need to integrate/couple improved versions of such large
scale simulations to produce an experimentally validated integrated simulation capability
for scenario modeling of the whole device
(2) burning plasmas/ITER physics simulation capability
-- As FES enters the new era of burning plasma experiments on the reactor scale, capabilities are required for addressing the larger spatial scales and longer energy-confinement times
-- Associated Barrier & Gap: scales spanning the small gyroradius of the ions to the radial dimension of the plasmas will need to be addressed
• an order of magnitude greater spatial resolution is needed to account for the larger plasmas of interest
• a major increase is expected in the plasma energy confinement time (~1 second in the ITER device), together with the longer pulse of the discharges in these superconducting systems
• together these will demand simulations of unprecedented aggregate floating point operations (see the rough scaling sketch below)
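
As a rough, purely illustrative scaling argument (the factors below are assumptions, not numbers from the report), the bullets above compound multiplicatively: roughly ten times finer spatial resolution in two or three dimensions and an order of magnitude more simulated time imply a 10^3 to 10^4 increase in aggregate floating point operations over today's confinement simulations.

```python
# Illustrative scaling only: how the resolution and confinement-time
# factors quoted above compound into total cost growth.
spatial_factor = 10   # "order of magnitude greater spatial resolution"
dims = 2              # assumed number of spatial dimensions refined
time_factor = 10      # assumed increase in time steps for ITER-scale confinement times

cost_growth = spatial_factor**dims * time_factor
print(f"aggregate cost grows by roughly {cost_growth:,}x")   # 1,000x
# with three refined dimensions the same argument gives ~10,000x or more
```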
PIONEERING APPS: Fusion Energy Sciences
Potential Impact on Software
(1) What new software capabilities will result?
-- For each science driver and each exascale-appropriate application the approach for
developing new software capabilities will involve:
• Inventory current codes with respect to mathematical formulations, data structures,
current scalability of algorithms and solvers (e.g. Poisson solves) with associated
identification of bottlenecks to scaling, current libraries used, and
“complexity” with respect to memory, flops, and communication
• Inventory current capabilities for workflows, frameworks, V&V, uncertainty
quantification, etc. with respect to: tight vs. loose code coupling schemes for integration;
mgt. of large data sets from experiments & simulations; etc.
• Inventory expected software developmental tasks for the path to exascale (concurrency,
memory access, etc.)
• Inventory and carry out work-force needs assessments with respect to computer
scientists, applied mathematicians, and FES applications scientists.
(2) What new methods and tools will be developed?
-- Outcome from above inventory/assessment exercises should lead to development of
corresponding exascale relevant tools and capabilities.
PIONEERING APPS: Fusion Energy Sciences
Potential impact on user community
(usability, capability, etc.)
(1) How will this realistically impact the research advances targeted by FES that
may benefit from exascale systems?
-- The FES PRDs for (1) high physics fidelity integrated simulations and for
addressing (2) burning plasmas/ITER challenges will potentially be able to
demonstrate how the application of exascale computing capability can enable
the accelerated delivery of much needed modeling tools.
(2) What’s the timescale in which that impact may be felt?
-- As illustrated on Pioneering Apps Roadmap (earlier slide):
• 10 to 20 PF (2012) integrated plasma core-edge coupled simulations
• 1 EF (2018) whole-system burning plasma simulations applicable to ITER
PIONEERING APPS: Fusion Energy Sciences
• Technology drivers
– Advanced architectures with greater capability but with formidable
software development challenges (e.g., scalable algorithms and solvers;
workflows & frameworks; etc.)
• Alternative R&D strategies
– Choosing architectural platform(s) capable of addressing PRDs of FES
on path to exploiting Exascale
• Recommended research agenda
– Effective collaborative alliance between FES, CS, and Applied Math (e.g.,
SciDAC activities) with an associated strong V&V effort
• Crosscutting considerations
– Identifying possible common areas of software development needs with
Pioneering Apps (climate, etc.)
– Critical need in FES to attract, train, and assimilate young talent into this
field – in common with general computational science research arena