Transcript Document

ICCMSE07 in Corfu, Greece: Sep. 25-30, 2007
Multi-physics Extension of OpenFMO Framework
Toshiya Takami, J. Maki, J. Ooba,
Y. Inadomi, H. Honda, R. Susukita,
K. Inoue, T. Kobayashi, R. Nogita,
M. Aoyagi
Research Institute for Information Technology, Kyushu University, Japan
Contents
1. Scientific Studies on Coupled Simulations
   – FMO in Water
   – 3D-RISM/FMO
   – Nonlinear Science
2. Computing Environment
   – Grid to Peta Computer
   – TeraGrid, EGEE, NAREGI → Peta-scale Computer
   – Mediator-API
   – Performance prediction
3. OpenFMO for Multi-physics Simulations
   – Multi-scale design of FMO with a skeleton and MO-API
   – One-sided-communication implementation for HPC
   – Open framework for Multi-physics Simulations
Fragment Molecular Orbital Method (1)
• The fragment MO (FMO) method, developed by Dr. Kitaura at AIST, Japan, is an approximate all-electron calculation method for large molecules.
• The target molecule is divided into fragments of one or two residues each. The SCF calculation of each fragment is performed under the ES potential made by the other fragments. After corrections with respect to pairs of fragments, the total energy is obtained.
• This algorithm has been implemented in ABINIT-MP (http://moldb.nihs.go.jp/abinitmp/), GAMESS (http://www.msg.ameslab.gov/GAMESS/), etc.
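In the standard FMO2 notation of the papers below (monomer energies E_I, dimer energies E_IJ), the pair corrections assemble the total energy as

  E \simeq \sum_I E_I + \sum_{I>J} \left( E_{IJ} - E_I - E_J \right).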
Works on FMO by Dr. Kitaura’s group:
K. Kitaura, E. Ikeo, T. Asada, T. Nakano and M. Uebayasi, Chem. Phys. Lett. 313, 701 (1999).
K. Kitaura, S. Sugiki, T. Nakano, Y. Komeiji and M. Uebayasi, Chem. Phys. Lett. 336, 163 (2001).
D.G. Fedorov and K. Kitaura, J. Chem. Phys. 120, 6832 (2004).
D.G. Fedorov and K. Kitaura, J. Chem. Phys. 121, 2483 (2004).
Fragment Molecular Orbital Method (2)
• The flow of the FMO calculation is represented in the figure below.
  – The main part is a self-consistent loop of SCF calculations of the fragments under the electrostatic potential made by the other fragments.
  – This loop is executed until the total electrostatic potential is converged.
  – After the convergence, fragment-pair calculations are carried out over all combinations of two fragments in order to improve the result.
• Parallel execution
  – Calculations of fragments and fragment-pairs can be parallelized.
  – Since the SCF calculation itself can also be parallelized, FMO is executed under a hierarchical parallelization scheme.
[Figure: schematic flow of FMO: Initialize → SCF over fragments (repeated until convergence) → fragment-pair calculations → total energy]
D.G. Fedorov, R.M. Olson, K. Kitaura, M.S. Gordon, and S. Koseki, J. Comp. Chem. 25, 872 (2004).
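As a minimal C-style sketch of this flow (all types and helper names here, such as monomer_scf and dimer_energy, are hypothetical placeholders, not OpenFMO's actual API):

/* Minimal sketch of the FMO2 flow shown above. */
typedef struct Fragment Fragment;   /* fragment data (opaque here)   */
typedef struct Esp Esp;             /* total electrostatic potential */

double monomer_scf(const Fragment *f, const Esp *esp);
double dimer_energy(const Fragment *a, const Fragment *b, const Esp *esp);
void   update_esp(Esp *esp, const Fragment *frag, int n_frag);
int    esp_converged(const Esp *esp);

double fmo2_total_energy(const Fragment *frag, int n_frag,
                         Esp *esp, double *E)
{
    /* Self-consistent loop: each fragment's SCF runs under the ES
     * potential of the others; the loop over fragments parallelizes
     * naturally, and each SCF can itself be parallelized (the
     * hierarchical scheme mentioned above). */
    do {
        for (int I = 0; I < n_frag; I++)
            E[I] = monomer_scf(&frag[I], esp);
        update_esp(esp, frag, n_frag);
    } while (!esp_converged(esp));

    /* Pair corrections over all fragment pairs. */
    double total = 0.0;
    for (int I = 0; I < n_frag; I++)
        total += E[I];
    for (int I = 0; I < n_frag; I++)
        for (int J = 0; J < I; J++)
            total += dimer_energy(&frag[I], &frag[J], esp) - E[I] - E[J];
    return total;
}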
RISM/SCF: MO in Aqueous Solution (1)
• SCF calculations of molecules in water
  – RISM/SCF [Ten-no, Hirata, Kato, 1993, 1994]
  – RISM/MCSCF [Sato, Hirata, Kato, 1996]
  – 3D-RISM/DFT [Kovalenko, Hirata, 1999]
  – 3D-RISM/SCF [Sato, Kovalenko, Hirata, 2000]
S. Ten-no, F. Hirata and S. Kato, CPL 214, 391 (1993).
S. Ten-no, F. Hirata and S. Kato, JCP 100, 7443 (1994).
H. Sato, F. Hirata and S. Kato, JCP 105, 1546 (1996).
A. Kovalenko and F. Hirata, JCP 110, 10095 (1999).
H. Sato, A. Kovalenko and F. Hirata, JCP 112, 9463 (2000).
• RISM (Reference Interaction Site Model)
  – Statistical mechanics of molecular liquids, without any fitting parameters
  – 3D-RISM is the 3D version of RISM.
F. Hirata, ed., "Molecular Theory of Solvation" (Kluwer Pub., 2003).
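As a one-line summary (standard RISM notation, quoted from the solvation-theory literature rather than from the slide): the site-site Ornstein-Zernike (RISM) equation relates the total and direct correlation functions h and c through the intramolecular correlation \omega and the site density \rho,

  h = \omega \ast c \ast \omega + \omega \ast c \ast \rho h,

and is solved together with a closure such as HNC, g(r) = \exp[-\beta u(r) + h(r) - c(r)], which is why no fitting parameters are needed.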
[Figure: from JCP 100, 7443 (1994)]
RISM/SCF: MO in Aqueous Solution (2)
• We have performed several test executions on small proteins in aqueous solution.
[Figures: met-enkephalin (75 atoms); chignolin (138 atoms)]
RISM/SCF: MO in Aqueous Solution (3)
• Our recent work: 3D-RISM/SCF calculation as a multi-physics simulation in molecular science
  – A fictitious parameter is introduced to find the many avoided-crossing structures in the orbital energies.
  – Localization of one-electron orbitals is analyzed through eigenvalue statistics (Brody analysis; see the formula below).
T. Takami, J. Maki, J. Ooba, T. Kobayashi, R. Nogita, and M. Aoyagi,
“Interaction and Localization of One-electron Orbitals in an Organic
Molecule: the Fictitious Parameter Analysis for Multi-physics
Simulations,” J. Phys. Soc. Jpn. 76, 013001 (2007).
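For reference, the Brody distribution used in such eigenvalue statistics interpolates between Poisson (\beta = 0) and Wigner (\beta = 1) level-spacing statistics; this standard form is quoted from the random-matrix literature, not from the slide itself:

  P_\beta(s) = (\beta + 1)\, b\, s^{\beta} \exp\!\left(-b\, s^{\beta+1}\right),
  \qquad b = \left[ \Gamma\!\left( \frac{\beta+2}{\beta+1} \right) \right]^{\beta+1}.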
[Figure: from JPSJ 76, 013001 (2007)]
Computing Environments
From Grid to Peta-scale Computing
• TeraGrid
  – http://www.teragrid.org/
• EGEE
  – http://www.eu-egee.org/
• NAREGI
  – http://www.naregi.org/
• Next-generation Supercomputer Project
  – http://www.nsc.riken.jp/
3D-RISM/FMO by Mediator-API
• The Mediator-API provides transformation and exchange of data between the components in a coupled simulation.
• It is parallelized with GridMPI to achieve electronic-state calculations of a protein molecule in water.
S. Ho, S. Itoh, S. Ihara and R. D. Schlichting, "Agent middleware for heterogeneous scientific simulations," in Proceedings of the ACM/IEEE SC 1998 Conference (SC'98), 1998, p. 15.
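As an illustration of the pattern only (this is not the actual Mediator-API; the function, tags, and data layout below are invented for the sketch), a coupled FMO component might exchange data with the solvent component through a mediator rank as follows; GridMPI is MPI-compatible, so plain MPI calls suffice:

/* Hypothetical sketch, NOT the actual Mediator-API. */
#include <mpi.h>

enum { TAG_CHARGES = 1, TAG_POTENTIAL = 2 };

/* FMO side: send fragment point charges to the mediator and
 * receive back the solvent ES potential sampled on the QM grid. */
void fmo_exchange(double *charges, int n_chg,
                  double *v_solv, int n_grid,
                  int mediator_rank, MPI_Comm comm)
{
    MPI_Send(charges, n_chg, MPI_DOUBLE, mediator_rank,
             TAG_CHARGES, comm);
    /* The mediator transforms the charges onto the 3D-RISM grid,
     * drives the solvent component, and interpolates the result
     * back onto the QM grid before this receive completes. */
    MPI_Recv(v_solv, n_grid, MPI_DOUBLE, mediator_rank,
             TAG_POTENTIAL, comm, MPI_STATUS_IGNORE);
}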
Performance Prediction of FMO
[Figure: 1,000,000 CPUs: 10,000 nodes of multi-core CPUs, each node about 100 times faster than the current P4; this may be available within ten years. We assume this type of hierarchical computer.]
• If we assume this hierarchical computer (10,000 nodes of 100-core CPUs):
  – The total execution time is represented as a quadratic function of the number of fragments Nf.
  – The execution time can be estimated for the 10,000-node computer with sufficient performance in each node.
  – An all-electron calculation of a molecule with 100,000 fragments (approx. 2,000,000 atoms) can be executed by FMO in a day.
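As a sketch of the claimed scaling (the coefficients a, b, c below are machine-dependent fit parameters, assumed here rather than taken from the talk):

  T_{\mathrm{FMO}}(N_f) \approx a + b\, N_f + c\, N_f^{2},

where the quadratic term presumably reflects the work over pairs of fragments (ESP contributions and fragment-pair calculations), whose number grows as N_f(N_f - 1)/2.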
[Figure: total execution time of FMO]
T. Takami, J. Maki, J. Ooba, Y. Inadomi, H. Honda, T. Kobayashi, R. Nogita, and M. Aoyagi, "Open-architecture Implementation of Fragment Molecular Orbital Method for Peta-scale Computing," to appear in Proceedings of HPCNano06, held at SC06, Tampa, FL (2007); arXiv:cs/0701075v1 [cs.DC].
OpenFMO Project
OpenFMO Project (1)
As shown above, FMO calculations can exhibit peta-scale performance on the next-generation supercomputer.
However, the present implementations have significant problems in:
• memory allocation,
• communication between processes,
and may not be executable on peta-scale machines.
We therefore began a project named OpenFMO. The main objective of this project is to construct an FMO program which can be executed on peta-scale computers.
OpenFMO Project (2)
This project stands for the following "Opennesses":
A) Open-Architecture Implementation of a Skeleton and APIs (Dr. Maki, Dr. Inadomi, Dr. Honda)
  – The layered structure of the control program (skeleton) and the molecular orbital API (MO-API) has been successfully developed. It was found that the one-sided communication implementation using MPI-2 functions outperforms the usual two-sided one based on MPI (see the sketch after this list).
B) Open Interface to Multi-physics Simulations (Dr. Kobayashi, T.T. (myself))
  – FMO can also be opened to multi-physics simulations. Since it is based on the electrostatic interaction between fragments, each fragment can be substituted by a general object which provides a static charge distribution.
C) Open-Source License
  – The source code of the OpenFMO skeleton program is publicly released under an open-source license such as the GPL.
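As an illustration of the one-sided scheme in item A, here is a minimal MPI-2 sketch; the window layout, FRAG_LEN, and the access pattern are my own assumptions, not OpenFMO's actual code. Each rank exposes its locally computed fragment data in a window, and any rank fetches it with MPI_Get without the owner posting a matching receive:

#include <mpi.h>
#include <stdlib.h>

#define FRAG_LEN 1024           /* doubles per fragment (assumed) */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Locally owned fragment data, exposed to all ranks. */
    double *mine = malloc(FRAG_LEN * sizeof(double));
    for (int i = 0; i < FRAG_LEN; i++)
        mine[i] = (double)rank;

    MPI_Win win;
    MPI_Win_create(mine, (MPI_Aint)(FRAG_LEN * sizeof(double)),
                   sizeof(double), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* One-sided fetch of the next rank's fragment data. */
    double *remote = malloc(FRAG_LEN * sizeof(double));
    int target = (rank + 1) % nprocs;

    MPI_Win_fence(0, win);                /* open access epoch  */
    MPI_Get(remote, FRAG_LEN, MPI_DOUBLE, target,
            0, FRAG_LEN, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);                /* close access epoch */

    /* remote[] now holds the target rank's data. */
    MPI_Win_free(&win);
    free(mine);
    free(remote);
    MPI_Finalize();
    return 0;
}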
T. Takami, J. Maki, J. Ooba, Y. Inadomi, H. Honda, T. Kobayashi, R. Nogita, and M. Aoyagi, "Open-architecture Implementation of Fragment Molecular Orbital Method for Peta-scale Computing," to appear in Proceedings of HPCNano06, held at SC06, Tampa, FL (2007); arXiv:cs/0701075v1 [cs.DC].
OpenFMO Project (3)
• Subjects Achieved and Future Schedule:
  – In 2006
    • A skeleton program based on the parallelization scheme of GAMESS-FMO (by Dr. Maki).
    • Interfaces of the MO-APIs (by Dr. Inadomi and Dr. Maki).
    • Web pages of OpenFMO (by Dr. Maki and T.T. (myself)).
  – The first half of 2007
    • A new-style skeleton which can be executed on peta-scale resources (by Dr. Maki and Dr. Inadomi, see below).
    • Determine the multi-physics interfaces (by Dr. Kobayashi and myself).
  – The latter half of 2007
    • Beta release of the multi-physics application.
J. Maki, Y. Inadomi, T. Takami, R. Susukita, H. Honda, J. Ooba, T. Kobayashi, R. Nogita, K. Inoue, and M. Aoyagi, "One-sided Communication Implementation in FMO Method," to appear in Proceedings of HPC Asia 2007.
OpenFMO web-site: http://www.OpenFMO.org/
Multi-physics Extension (1)
• Multi-physics/multi-scale applications will play an important role in benchmarks for peta-scale computers, since there is a limit to the scalability of any single application.
• From a scientific point of view, they will also be a significant milestone when we address complex multi-scale problems.
[Figures: multi-scales in a phenomenon; stack structure of the multi-simulator]
Multi-physics Extension (2)
• Conditions:
  – A light-weight, reconfigurable structure is required in order to keep up with the rapidly changing world of high-performance computing.
    • It must be very adaptive to the computational environments.
  – This program should be usable by a wide range of researchers, including beginners.
    • Reliability and stability of the program are required.
  – The open architecture must be preserved after the multi-physics extension.
    • It should be developed on a kind of "standard".
Multi-physics Extension (3)
Rapidly Changing Computing Environment
Multi-physics Extension (4)
• Component-based configuration: already done in the original OpenFMO using the Skeleton/MO-API implementation. Completed!!
• Standard communication protocol between the components
  – Web/Grid Services with many WS-* standards
  – RPC-based invocation / MPI parallel programming
  – ...
• Standard data representation, or Mediator-like APIs for transformation of the physical data (see the sketch after this list)
  – BMSML (BioMolecular Simulation Markup Language)
  – CML (Chemical Markup Language)
  – netCDF (Network Common Data Format)
  – ...
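As a sketch of exchanging physical data in one of these standard formats (netCDF here), a component might write a static charge distribution for another component to read; the file name, dimension names, and variable name are my own choices, not part of any of the standards listed:

#include <netcdf.h>

#define NX 64
#define NY 64
#define NZ 64

int main(void)
{
    /* Charge density on a regular grid (zero-initialized
     * placeholder; a real component would fill this in). */
    static double rho[NX][NY][NZ];

    int ncid, dimids[3], varid;
    nc_create("charge.nc", NC_CLOBBER, &ncid);
    nc_def_dim(ncid, "x", NX, &dimids[0]);
    nc_def_dim(ncid, "y", NY, &dimids[1]);
    nc_def_dim(ncid, "z", NZ, &dimids[2]);
    nc_def_var(ncid, "rho", NC_DOUBLE, 3, dimids, &varid);
    nc_enddef(ncid);
    nc_put_var_double(ncid, varid, &rho[0][0][0]);
    nc_close(ncid);
    return 0;
}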
Summary
• Several implementations and applications of (3D-)RISM/SCF as a multi-scale/multi-physics simulation were reviewed, where various interesting phenomena can be investigated as scientific simulations.
• I have also mentioned that these applications and studies are performed on rapidly changing computing environments: Grids, peta-scale computers, vector/scalar architectures, network topologies, and so on.
• We are still struggling with the multi-physics extension of OpenFMO, which was originally introduced as open-source software in order to avoid a dead end in the development of large and complex applications.
Acknowledgements
• Collaborated with
  – Dr. J. Maki, Dr. Y. Inadomi, Dr. H. Honda, Dr. R. Susukita, Prof. K. Inoue, Ms. R. Nogita, Dr. T. Kobayashi, Prof. M. Aoyagi (PSI/NAREGI Project members at Kyushu Univ.)
• Thanks to
  – Dr. T. Ikegami, Dr. S. Sekiguchi (AIST, Tsukuba)
  – Prof. S. Matsuoka (Tokyo Inst. Tech., Japan)
• Special Thanks to
  – Director K. Murakami, Prof. T. Nanri, Dr. F.-L. Gu
  – The organizers of this Symposium