SciDAC Accelerator Modeling Project
Kwok Ko and Robert D. Ryne
SciDAC PI meeting
Charleston, South Carolina
March 23, 2004
Outline
• Project overview
• Applications
• Collaborations in Applied Math and Computer Science
• Future Plans
DOE’s Facilities for the Future of Science: A 20-Year Outlook
The report is a testament to the importance of DOE/SC, and of particle accelerators in particular: of the 28 priorities on the list, nearly half are accelerator facilities.
Accelerator projects on the 20-year list:
• LCLS
• RIA
• CEBAF upgrade
• BTeV
• Linear Collider
• SNS upgrade
• RHIC II
• NSLS upgrade
• Super Neutrino Beam
• ALS upgrade
• APS upgrade
• eRHIC
• IBX
SciDAC Accelerator Modeling Project
Goal: Create a comprehensive simulation environment, capable of modeling a broad range of physical effects, to solve the most challenging problems in 21st-century accelerator science and technology.
Sponsored by: DOE/SC Office of High Energy Physics (formerly HENP) in collaboration w/ the Office of Advanced Scientific Computing Research.
SciDAC codes are having a major impact on existing accelerators and future projects
• PEP-II interaction region heating analysis (Omega3P, Tau3P, T3P)
• Simulation of beam-beam effects in Tevatron, PEP-II, RHIC, and LHC (BeamBeam3D)
• Discovery that self-ionization can lead to meter-long high-density plasma sources for plasma accelerators
• NLC accelerating structure design (Omega3P) & wakefield computation (Omega3P, S3P, Tau3P)
• Beam loss studies at the FNAL Booster (Synergia)
• Study of e-cloud instability in the LHC (QuickPIC)
• NLC peak surface fields and dark current simulations (Tau3P, Track3P)
• Gas jet modeling (Chombo/EB)
• RIA RFQ cavity modeling (Omega3P)
The SciDAC Accelerator Modeling Project team:
A multidisciplinary, multi-institutional team producing comprehensive
terascale accelerator design tools
• LBNL (AFRD): beam-beam; space charge in linacs & rings; parallel Poisson solvers
• UC Davis: particle & mesh visualization
• FNAL: space charge in rings; software integration; Booster experiments
• BNL: space charge in rings; wakefield effects; Booster experiments
• U. Maryland: Lie methods in accelerator physics ($\mathcal{M} = e^{:f_2:}\,e^{:f_3:}\,e^{:f_4:}\cdots$, $\mathcal{N} = A^{-1}\mathcal{M}A$)
• SLAC: large-scale electromagnetic modeling
• LANL: high-intensity linacs, computer model evaluation
• Stanford, LBNL (CRD): parallel linear solvers, eigensolvers, PDE solvers, AMR
• UCLA, USC, Tech-X, U. Colorado: plasma-based accelerator modeling; parallel PIC frameworks (UPIC)
• SNL: mesh generation
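For reference, the Lie-algebraic notation in the U. Maryland entry is standard beam-dynamics usage (not specific to this project): the transfer map is written in factored form
$$\mathcal{M} = e^{:f_2:}\, e^{:f_3:}\, e^{:f_4:} \cdots, \qquad :f:\,g \equiv [f, g],$$
where each $f_n$ is a homogeneous polynomial of degree $n$ in the phase-space variables ($f_2$ generates the linear map, the higher-order $f_n$ the nonlinear aberrations), and
$$\mathcal{N} = A^{-1} \mathcal{M} A$$
is the normal form of the map, obtained by a similarity transformation with a suitably chosen symplectic map $A$.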
Code Development
• Electromagnetics: Omega3P, Tau3P, T3P, S3P, Track3P
• Beam Dynamics: BeamBeam3D, IMPACT, MaryLie/IMPACT, Synergia, Langevin3D
• Advanced Accelerators: OSIRIS, VORPAL, QuickPIC, UPIC
IMPACT code suite user map
• RAL
• SLAC
• PSI
• LBNL
• GSI
• LANL
• KEK
• TX corp
• FNAL
• ORNL
• MSU
• BNL
• JLab
Collaborations with Applied Math and Computer Science
• SciDAC ISICs (TOPS, APDEC, TSTT), SAPP
• Eigensolvers and linear solvers
• Poisson solvers
• AMR
• Meshing & discretization
• Parallel PIC methods
• Partitioning
• Visualization
• Statistical methods
Outline
• Project overview
• Applications
• Collaborations in Applied Math and Computer Science
• Future Plans
Modeling the PEP-II Interaction Region
[Figure: full-scale Omega3P model of the interaction region from crotch to crotch (left crotch, center beam pipe, right crotch; two 2.65 m spans; e+ and e- beams). Courtesy K. Ko et al., SLAC]
Beam heating in the beamline complex near the IR limited PEP-II operation at high currents. Omega3P analysis helped in redesigning the IR for the upgrade.
Modeling the PEP-II Interaction Region
Left: trapped mode with the highest power loss calculated by Omega3P (5.28 GHz, 230 W). Right: power loss distribution about the interaction point (17.2 kW total from 330 modes).
Top: distributed mesh of the IR between the crotches only. Bottom: snapshot in time of the electric field due to two colliding beams, from a Tau3P time-domain simulation.
Tevatron Modeling
• Large computing requirement: each point requires 12 hrs × 1024 procs
• Recent result: good agreement for pbar lifetime vs. proton intensity

Courtesy Fermilab and LBNL
Beam-Beam Studies of PEP-II
• Collaborative study/comparison of beam-beam codes
• Predicted luminosity is sensitive to the number of slices used in the simulation
Modeling a Plasma Wakefield Accelerator with added
realism in full 3D models (OSIRIS, VORPAL)
Full EM PIC simulation of drive beam ionizing Lithium in a gas cell. Courtesy W. Mori et al, UCLA
Full-Scale Modeling of a 30-Cell Structure
• Distributed model on a mesh of half a million hexahedral elements
• Study of RF damage at high-power X-band operation using Tau3P & Track3P
Courtesy K. Ko et al., SLAC
NLC Accelerating Structure Design
Model of the 55-cell H60VG3 structure without damping manifolds, but with cell-to-cell variation to
provide Gaussian detuning
QuickPIC calculations have resulted in up to a 500x increase in performance over full EM PIC
[Figure: wake produced by an electron beam propagating through a plasma cell; longitudinal field E_z (mc·ω_p/e) vs. z (c/ω_p), comparing QuickPIC with OSIRIS (2D).]
Modeling beam loss in the Fermilab
Booster using Synergia
Booster simulation and experimental results.
(P. Spentzouris and J. Amundson, FNAL)
Outline
• Project overview
• Applications
• Collaborations in Applied Math and Computer Science
• Future Plans
Collaboration w/ SciDAC ISICs
• TOPS: linear algebra libraries, preconditioners, and eigensolvers for better convergence & accuracy
• APDEC: solvers based on block-structured AMR, and methods for AMR/PIC
• TSTT: gridding and meshing tools
Collaboration with TOPS:
Eigensolvers and Linear Solvers
[Bar chart: largest problem size attempted with three different linear solvers (SuperLU, WSMP, and CG with a hierarchical preconditioner) on NERSC's IBM SP; vertical axis from 0 to 100,000,000.]
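The cavity-mode computations behind this collaboration lead to large sparse generalized eigenvalue problems of the form K x = λ M x. A minimal sketch of the shift-invert approach, using SciPy on a stand-in Laplacian (this is not the Omega3P matrices or the TOPS solver stack, and the shift value and problem size are arbitrary assumptions):

```python
# Shift-invert solve of K x = lambda M x for a few interior eigenvalues.
# Stand-in matrices only: K is a 2D finite-difference Laplacian and M is the
# identity; real cavity problems use finite-element stiffness/mass matrices.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200                                   # stand-in problem on an n x n grid
lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
K = sp.kron(sp.identity(n), lap1d) + sp.kron(lap1d, sp.identity(n))
M = sp.identity(n * n, format="csr")

sigma = 0.05                              # target interior eigenvalues near this shift
vals, vecs = spla.eigsh(K, k=5, M=M, sigma=sigma, which="LM")
print(np.sort(vals))
```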
Collaboration w/ TOPS: Partitioning
Left: ParMETIS-partitioned model of a five-cell structure (top); the same structure partitioned with RCB1D, showing only two neighbors per partition but three partitions sharing one waveguide port (bottom). Right: comparison between ParMETIS and RCB1D on a 55-cell structure without waveguide ports.
Collaboration with APDEC

AMR for particle-in-cell
• Goal: develop a flexible suite of fast solvers for PIC codes, based on APDEC's Chombo framework for block-structured adaptive mesh refinement (AMR)
• Block-structured adaptive mesh solvers
• Fast infinite-domain boundary conditions
• Flexible specification of the interaction between grid and particle data
• Accurate representation of complex geometries
Collaboration with APDEC:
Benefits from Heavy Ion Fusion program
[Figure: AMR modeling of an HIF source and triode region in (r,z) geometry, with a fine grid patch around the source and tracking of the beam edge; high-resolution result compared with a low-resolution + AMR run.]
Courtesy of A. Friedman, P. Colella et al., LBNL
• In this example, we obtain a ~4x savings in computational cost for ~the same answer
Collaboration with APDEC:
Embedded boundary methods for gas jet modeling
Axisymmetric jet expanding into vacuum (the axis of symmetry is at the bottom). This uses AMR and APDEC's embedded boundary methods for gas dynamics.
Collaboration with TSTT: Meshing & Discretization
Hexahedral mesh of the PEP-II Interaction Region (excluding the crotches) generated with CUBIT for
Tau3P simulation using a transit beam to study wall heating
Collaboration with TSTT: AMR on Unstructured
Grids
Three steps of AMR applied to the Trispal cavity to refine regions of high wall loss (in red) for accurate
quality factor determination
SciDAC Accelerator Modeling Project
provides challenging visualization problems
Courtesy K.-L. Ma et al., UC Davis
Simulating high intensity beams & beam halos
Courtesy Andreas Adelmann (PSI) and
Cristina Siegerist (NERSC viz group)
Courtesy Andreas Adelmann and PSI viz group
Parallel Performance and Parallel
Implementation Issues
• Example: BeamBeam3D

PEs     time (sec)
128     1612
256      858
512      477
1024     303
2048     212
(The table above shows scaling using the weak-strong option.)
[Figure: performance of different parallelization techniques in the strong-strong case.]
Milestone: first-ever million-particle, million-turn, strong-strong simulation performed for the LHC
High-aspect-ratio solver based on the Integrated Green Function (IGF): the new algorithm provides < 1% accuracy using a 64x64 grid.
[Figure: comparison of the standard solver on 64x1024, 64x2048, 64x4096, 64x8192, and 64x16384 grids with the IGF solver on a 64x64 grid (black curve).]
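The idea behind the IGF is to build the convolution kernel from the Green function averaged over each grid cell rather than sampled at cell centers, which keeps the Poisson solve accurate even when the cells are strongly elongated. A minimal sketch in Python (2D, scaled units; this is not the project's production solver, and the grid sizes, midpoint sub-sampling, and names are illustrative assumptions):

```python
# Sketch only: cell-averaged ("integrated") Green function kernel for a 2D
# free-space Poisson solve in scaled units (solves  del^2 phi = -rho).
import numpy as np

def point_kernel(dx, dy, nx, ny):
    """G(r) = -ln(r)/(2*pi) sampled at cell centers (self cell crudely regularized)."""
    x = np.arange(-nx, nx + 1) * dx
    y = np.arange(-ny, ny + 1) * dy
    X, Y = np.meshgrid(x, y, indexing="ij")
    r = np.hypot(X, Y)
    r[nx, ny] = 0.5 * min(dx, dy)          # avoid ln(0) on the self cell
    return -np.log(r) / (2.0 * np.pi)

def integrated_kernel(dx, dy, nx, ny, nq=4):
    """Same kernel, but averaged over each dx-by-dy cell with an nq x nq
    midpoint rule; this is what keeps a high-aspect-ratio grid accurate."""
    s = (np.arange(nq) + 0.5) / nq - 0.5   # sub-sample offsets in (-1/2, 1/2)
    x = np.arange(-nx, nx + 1) * dx
    y = np.arange(-ny, ny + 1) * dy
    X, Y = np.meshgrid(x, y, indexing="ij")
    G = np.zeros_like(X, dtype=float)
    for sx in s * dx:
        for sy in s * dy:
            r = np.maximum(np.hypot(X + sx, Y + sy), 1e-12)
            G += -np.log(r) / (2.0 * np.pi)
    return G / nq**2

def potential(rho, kernel, dx, dy):
    """phi[i,j] = sum_{i',j'} G((i-i')dx,(j-j')dy) * rho[i',j'] * dx*dy.
    Direct sum for clarity; production codes do this convolution with FFTs."""
    nx, ny = rho.shape
    ii, jj = np.arange(nx), np.arange(ny)
    phi = np.empty_like(rho, dtype=float)
    for i in range(nx):
        for j in range(ny):
            phi[i, j] = np.sum(kernel[nx + i - ii[:, None],
                                      ny + j - jj[None, :]] * rho) * dx * dy
    return phi

# Example: a single charged cell on a strongly elongated grid (aspect ratio 100).
nx, ny, dx, dy = 32, 32, 1.0, 100.0
rho = np.zeros((nx, ny)); rho[nx // 2, ny // 2] = 1.0
phi_pt  = potential(rho, point_kernel(dx, dy, nx, ny), dx, dy)
phi_igf = potential(rho, integrated_kernel(dx, dy, nx, ny), dx, dy)
print(phi_pt[nx // 2, ny // 2], phi_igf[nx // 2, ny // 2])
```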
Comparisons with Experiments
• LANL proton radiography (single-particle optics)
• LANL LEDA beam halo experiment
• J-PARC front-end test (collab. w/ KEK/JAERI)
• FNAL booster
• BNL booster
• CERN PS (collab. w/ CERN, GSI)
Statistical Methods for Calibration and
Forecasting
• Determining the initial phase space distribution from 1D wire scan data
• Courtesy D. Higdon (LANL) et al.
Simulation of a high intensity proton beam through a series of quadrupole magnets. Statistical techniques were used to combine 1D profile monitor data with simulations to infer the 4D beam distribution. The figure shows the 90% intervals for the predicted profile at scanner #6 (shaded regions) and, for comparison, the observed data (black line). Only data from the odd-numbered scanners were used to make the prediction.
Outline
• Project overview
• Applications
• Collaborations in Applied Math and Computer Science
• Future Plans
NLC Accelerating Structure Design
Model of the 55-cell H60VG3 structure without damping manifolds, but with cell-to-cell variation to
provide Gaussian detuning
Model of the H60VG3 DDS structure showing damping manifolds and HOM couplers
3D First-Principles Fokker-Planck Modeling
• Requires the analog of 1000s of space-charge calculations per step
• "…it would be completely impractical (in terms of # of particles, computation time, and statistical fluctuations) to actually compute [the Rosenbluth potentials] as multiple integrals" (J. Math. Phys. 138 (1997))
• FALSE: feasibility demonstrated on parallel machines at NERSC and the ACL
[Figure: self-consistent diffusion coefficients compared with the Spitzer approximation. Previous approximate calculations, performed without parallel computation, were not self-consistent.]
Courtesy J. Qiang (LBNL) and S. Habib (LANL)
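To illustrate why the direct "multiple integral" evaluation becomes feasible on parallel machines: once the distribution is represented by simulation particles, the first two Rosenbluth potentials reduce to direct sums over particles, an operation that is expensive (O(N_field × N_source) per evaluation) but embarrassingly parallel. A schematic Python sketch, not the Langevin3D production algorithm; the particle counts and the Maxwellian test distribution are illustrative assumptions:

```python
# Schematic, serial illustration: with f(v) represented by simulation
# particles, h(v) = <1/|v - v'|> and g(v) = <|v - v'|> become direct sums.
import numpy as np

rng = np.random.default_rng(0)
n_source = 20_000            # particles representing the velocity distribution f(v)
n_field = 256                # velocities at which the potentials are evaluated
v_src = rng.normal(size=(n_source, 3))   # unit-thermal-speed Maxwellian (test case)
v_fld = rng.normal(size=(n_field, 3))

def rosenbluth_potentials(v_field, v_source):
    """Monte Carlo estimates of h(v) and g(v), averaging over source particles."""
    h = np.empty(len(v_field))
    g = np.empty(len(v_field))
    for i, v in enumerate(v_field):      # the outer loop parallelizes trivially
        d = np.linalg.norm(v_source - v, axis=1)
        d = np.maximum(d, 1e-12)         # guard against a zero separation
        h[i] = np.mean(1.0 / d)
        g[i] = np.mean(d)
    return h, g

h, g = rosenbluth_potentials(v_fld, v_src)
print(h[:3], g[:3])
```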
Optimization
• Accelerator system design including space charge
• Shape optimization
• Plasma afterburner