Center for Magnetic Reconnection Studies
The Magnetic Reconnection Code within the FLASH Framework
Timur Linde, Leonid Malyshkin, Robert Rosner, and Andrew Siegel
University of Chicago
June 5, 2003
Princeton, NJ
Overview

FLASH project in general
FLASH role in Magnetic Reconnection Code (MRC) development
What is FLASH? What is MRC?

Initially: AMR code for astrophysics problems on ASCI machines (compressible hydro + burning)
FLASH evolved into two things:
 a more general application code
 a framework for building/hosting new problems
FLASH physics modules + FLASH framework = FLASH application code
Hall MHD modules + FLASH framework = Magnetic Reconnection Code
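As a purely illustrative sketch of this composition idea (the names and interfaces below are hypothetical, not the actual FLASH module API), one can think of it as a framework that registers interchangeable physics modules:

```python
# Hypothetical sketch of "framework + physics modules = application";
# names and interfaces are illustrative, not the actual FLASH API.

class Framework:
    """Shared services (mesh, I/O, driver) offered to any physics module."""
    def __init__(self):
        self.modules = []

    def register(self, module):
        self.modules.append(module)

    def evolve(self, t_end, dt):
        t = 0.0
        while t < t_end:
            for m in self.modules:      # each module advances its own physics
                m.advance(dt)
            t += dt


class CompressibleHydro:
    def advance(self, dt):
        pass                            # hydro + burning update would go here


class HallMHD:
    def advance(self, dt):
        pass                            # Hall MHD update would go here


flash_app = Framework()                 # framework + astrophysics modules
flash_app.register(CompressibleHydro()) # -> "FLASH application code"

mrc = Framework()                       # same framework + Hall MHD modules
mrc.register(HallMHD())                 # -> "Magnetic Reconnection Code"
```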
Next:
 What physics modules does FLASH contain?
 What services does the FLASH framework contain?
FLASH breakdown

physics modules: (in)compressible hydro, relativistic hydro/MHD, resistive MHD, 2-D Hall MHD, (nuclear) reaction networks, time-dependent ionization, various equations of state, particles, self-gravity, Boltzmann transport, subgrid models, front-tracking
framework: block-structured AMR (PARAMESH), parallel I/O (HDF5), runtime visualization (pvtk), runtime performance monitoring (PAPI), generic linear solvers tied to the mesh, syntax/tool for building new solvers
code support (public, web-based):
 flash_test
 flash_benchmark
 coding standard verification
 bug/feature tracker
 user support schedule
 download: http://flash.uchicago.edu
General features of FLASH

Three major releases over four years
300,000+ lines (F90 / C / Python)
Good performance
 scalable on ASCI machines to 5K processors
 Gordon Bell prize (2000)
Emphasis on portability, interoperability
 standardization of AMR output format, data sharing via CCA
Flash 2.3

New release, scheduled June 1, 2003
 optimized multigrid solver (a sketch of the basic technique follows this list)
 significant improvements in documentation
 ported to Compaq Tru64
 2-D runtime visualization
 optimized uniform grid
 support for different mesh geometries
 FFT on uniform grid
 optimized multigrid on uniform grid
 PARAMESH 3.0
 parallel NetCDF I/O module
 implicit diffusion
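For readers unfamiliar with multigrid, the following is a minimal two-grid V-cycle for a 1-D Poisson problem. It is a sketch of the basic technique only; the actual FLASH solver works on the block-structured AMR mesh and is considerably more elaborate.

```python
import numpy as np

def smooth(u, f, h, sweeps):
    """Weighted-Jacobi smoothing for -u'' = f with u(0) = u(1) = 0."""
    for _ in range(sweeps):
        u[1:-1] = u[1:-1]/3 + (2/3) * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def coarse_solve(rc, hc):
    """Direct solve of the coarse-grid error equation -e'' = r (zero Dirichlet)."""
    m = len(rc) - 2
    A = (2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / hc**2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def two_grid_vcycle(u, f, h):
    """One V-cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = smooth(u, f, h, sweeps=3)                               # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / (h*h)    # residual of -u'' = f
    ec = coarse_solve(r[::2].copy(), 2*h)                       # restrict (injection) + solve
    e = np.zeros_like(u)
    e[::2] = ec                                                 # prolong: copy coarse points...
    e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])                      # ...and interpolate between them
    return smooth(u + e, f, h, sweeps=3)                        # correct, then post-smooth

# usage: solve -u'' = 1 on [0, 1]; a handful of cycles is enough
n = 65
h, f, u = 1.0/(n - 1), np.ones(n), np.zeros(n)
for _ in range(10):
    u = two_grid_vcycle(u, f, h)
```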
Flash 2.4

Final 2.x version (Sept 2004)
FLASH foci

Four initial major emphases
 Performance
 Testing
 Usability
 Portability
Later progress in extensibility/reuse: Flash v3.x
 Generalized mesh variable database
 FLASH component model
 FLASH Developer’s Guide
The future of Flash

Take this a step further: identify the “actors”
A. End-users
 run an existing problem
B. Module/problem contributors
 use the database module interface but are unaware of Flash internals
C. Flash developers
 work on general framework issues, utility modules, performance, portability, etc., according to the needs of astrophysics and (laboratory) code validation
Flash development successively focused on these three areas
 Flash1.x: emphasis on A
 Flash2.x: expand emphasis to B
 Flash3.x: expand emphasis to C
Note:
 Application scientists lean toward A and B; programmers/software engineers lean toward C; computer scientists can be involved at any level
 Everybody contributes to the design process; the software architect must make final decisions on how to implement the plan
FLASH and CMRS

Follows the typical pattern of FLASH collaborations
Prototyping, testing, results initially external to FLASH if desired
 Iowa AMR-based Hall MHD – Kai Germaschewski
 No “commitment” to FLASH
Interoperability strategy agreed upon
 how are solvers packaged?
 what data structures are used?
 what operations must the mesh support?
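A hedged illustration of that last question (the names below are hypothetical, not the actual FLASH or Iowa interfaces): the agreement has to pin down a contract of mesh operations along these lines before a solver written against one framework can run inside the other.

```python
# Hypothetical interface sketch: the kind of operations an AMR mesh
# typically has to expose to a solver; not FLASH's or the Iowa
# framework's actual API.
from abc import ABC, abstractmethod

class AMRMesh(ABC):
    @abstractmethod
    def leaf_blocks(self):
        """Iterate over the leaf blocks that currently own the solution."""

    @abstractmethod
    def get_var(self, block, name):
        """Return a named cell-centered variable on one block as an array."""

    @abstractmethod
    def fill_guard_cells(self, name):
        """Exchange guard-cell (ghost) data between neighboring blocks."""

    @abstractmethod
    def restrict(self, name):
        """Average fine-block data up to parent blocks (needed by multilevel solvers)."""

    @abstractmethod
    def prolong(self, name):
        """Interpolate parent-block data down to newly refined blocks."""

    @abstractmethod
    def adapt(self, criterion):
        """Refine/derefine the block structure according to a criterion."""
```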
[diagram: component model]
CMRS/Flash strategy

Move portable components between the FLASH/local frameworks as needs warrant
People strategy:
 The FLASH developer leading the FLASH single-fluid MHD work (Timur Linde) leads the Chicago MRC development
 CMRS supports a postdoctoral fellow (Leonid Malyshkin) fully engaged in developing/testing the MRC
 We also support a new graduate student (Claudio Zanni/U. Torino) working on the MRC and its extensions
Science strategy:
 The immediate target of our efforts is reconnection
 Specifically: what is the consequence of relaxing the “steady state” assumption of reconnection - can one have fast reconnection in time-dependent circumstances under conditions in which steady reconnection cannot occur?
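For context, two standard relations behind this question (textbook forms, not taken from the slides): the Hall term enters through the generalized Ohm's law, and the “steady state” assumption being relaxed is the one underlying the Sweet-Parker rate. Here η is the resistivity, n the electron density, v_A the Alfvén speed, and S the Lundquist number.

```latex
% Generalized Ohm's law with resistive and Hall terms
% (electron pressure and inertia neglected):
\mathbf{E} + \mathbf{v}\times\mathbf{B}
  \;=\; \eta\,\mathbf{J} \;+\; \frac{1}{n e}\,\mathbf{J}\times\mathbf{B}

% Steady-state (Sweet--Parker) reconnection rate for a current layer of length L:
\frac{v_{\rm in}}{v_{A}} \;\sim\; S^{-1/2},
\qquad S \equiv \frac{\mu_0\, L\, v_{A}}{\eta}
```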
Using FLASH

Some advantages of FLASH
 tested nightly
 constantly ported to new platforms
 I/O optimized independently
 visualization developed independently
 documentation manager
 user support
 bug database
 performance measured regularly
 AMR (tested/documented independently)
 coding standards enforcement scripts
 debugged frequently (lint, forcheck)
 sophisticated versioning, repository management
 possible interplay with other physics modules (particles, etc.)
Where are we now?

We have a working 3-D resistive/viscous AMR MHD code
 MRC v1.0 exists
 It has already been used by R. Fitzpatrick in his study of compressible reconnection
FLASH and 2-D Hall MHD have been joined and are being tested
 Required elliptic solves for the Helmholtz and Poisson equations (i.e., multigrid); generic forms are shown after this list
 Based on reusable components
 This was done by importing the Iowa Hall MHD code as a “module”, but using our own Poisson and Helmholtz solvers; hence we solve exactly the same equations as the Iowa “local framework”
We are now running comparisons of the MRC with the Iowa Hall MHD code
The next steps are
 Inclusion of full 3-D Hall MHD, again implemented in a staged manner (almost completed)
 More flexible geometry: cylindrical, toroidal
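For reference, the generic forms of the two elliptic problems mentioned in the list above. The specific right-hand sides and coefficients come from the 2-D Hall MHD formulation; for instance, an electron-inertia scale d_e typically leads to a Helmholtz-type inversion of the kind shown here (an assumption about the formulation, not a statement of the MRC's exact equations).

```latex
% Poisson equation (e.g., recovering a potential or stream function phi
% from a source s):
\nabla^{2}\phi = s(\mathbf{x})

% Helmholtz-type equation (e.g., recovering psi when an electron-inertia
% scale d_e enters the evolved quantity):
\left(1 - d_e^{2}\,\nabla^{2}\right)\psi = g(\mathbf{x})
```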
Concluding remarks

Code emphases:
 Standards of interoperability
  Simple: common I/O formats – can reuse postprocessing tools
  More complex: reusing solvers from one meshing package in another – libAMR (Colella)
  More complex: a standard interface for meshing packages
 Robustness, performance, portability, ease of use
Science emphases:
 Focus is on an astrophysically interesting and central problem
 The problem is also highly susceptible to laboratory verification
… which brings us to
Questions and discussion