
Advanced Neutronics Modeling
GNEP Fast Reactor Working Group Meeting
Argonne, IL
August 22, 2007
M. A. Smith, C. Rabiti, D. Kaushik, C. Lee,
and W. S. Yang
Argonne National Laboratory
Current Status
 Current fast reactor physics analysis tools were developed during the 1985-1994 period as part of the IFR/ALMR programs
 Current approach is based on cumbersome multi-step calculations
– Cross sections self-shielded at ultra-fine group (~2000+) level with 0D/1D
spectrum calculation
– Spatial collapse to form regional broad group cross sections with 1D/2D
flux calculation
– Broad group (~33) nodal diffusion or transport 3D core calculations with
homogenized nodes
– A number of design calculations (fuel cycle analysis, heating calculation, reactivity coefficient calculation, control rod worth and shutdown margin evaluation, etc.) are based on the broad group 3D core calculation
 Current tools are judged to be adequate to begin the design process
– Extensive critical experiment and reactor operation database exists
– Validation and capabilities have evolved in parallel
Improvement Needs
 However, significant improvements are needed to allow more accurate and
economical design procedures
– Significant efforts are also required to utilize all the existing design tools on
modern computer frameworks
 Improved accuracy is needed to meet burner design challenges
– Radial blanket typically replaced by reflector
• Critical experiments (BFS-62, MUSE-4) show that reaction rates in the immediate reflector region are difficult to predict accurately
• Important for bowing (safety) and shielding considerations
– High leakage configurations also challenge design methods
• Transport effects are magnified
• Key reactivity coefficients (void worth) under-predicted
– Improved pin power and flux distributions
• Accurate pin power distributions for T/H calculations
• Accurate pin flux distributions for isotopic depletion prediction
Improvement Needs (Cont’d)
 Applicable range of problems needs to be extended
– Modeling of structure deformation (for accurate reactivity feedback)
– Neutron streaming in voided coolant condition
– Control assembly worth (relatively large heterogeneity effects)
– Shielding calculations (severe spectral transitions)
 A modern, integrated design tool is crucial to improve the current design
procedure, which is time-consuming and inefficient
– Eliminate the piecemeal nature, which is vulnerable to shortcomings in human performance, organizational skills, and project management
• Improved automation of data transfers among codes/modules
– Greatly improve the turn-around time for design iterations
– Utilize advances in computer science and software engineering
 Improved modeling in the integrated design tool allows radical improvements
– Ability to optimize the design (e.g., reduce nominal peak temperatures)
– New knowledge to alter and redirect the design features and approach
Objective and Approaches
 The final objective is to produce an integrated, advanced neutronics code that allows a high fidelity description of a nuclear reactor and simplifies the multi-step design process
– Integration with thermal-hydraulics and structural mechanics calculations
 Allow uninterrupted applicability to core design work
– Phased approach for multi-group cross section generation
• Simplified multi-step schemes to online generation
– Adaptive flux solution options from homogenized assembly geometries to
fully explicit heterogeneous geometries in serial and parallel environments
• Allow the user to smoothly transition from the existing homogenization
approaches to the explicit geometry approach
• Rapid turn-around time for scoping design calculations
• Detailed models for design refinement and benchmarking calculations
 Unified geometrical framework
– Finite element analysis to work within the existing tools developed for
structural mechanics and thermal-hydraulics
– Domain decomposition strategies for efficient parallelization
Adaptive Flux Solution Options
[Figure: adaptive flux solution options, from a homogenized assembly to homogenized pin cells, homogenized assembly internals, and a fully explicit assembly]
Work Scope of FY07
 Develop an initial working version (V.0) of deterministic neutron transport
solver in general geometry
– General geometry capability using unstructured finite elements
– First order form solution using method of characteristics
– Second order form solution using even-parity flux formulation
– Parallel capability for scaling to thousands of processors
– Adjoint capability for sensitivity and uncertainty analysis
 Targeted computational milestones
– An ABR full subassembly with fine structure geometrical description for
coupling with thermal-hydraulics calculation
– A whole ABR configuration with pin-by-pin description
 Multi-group cross section generation
– Develop an initial capability for investigating important physical phenomena
and identifying the optimum strategies for coupling with the neutron
transport code
• ANL techniques in the fast energy range
• ORNL techniques in the resonance range
General Geometry Capability
 Unstructured finite element mesh capabilities have been implemented
– CUBIT package is the primary mesh generation tool (hexahedral and
tetrahedral elements)
– Further research is required to reduce mesh generation effort and to robustly merge the meshes of individual geometrical components (a minimal data-layout sketch follows)
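For orientation, a hypothetical minimal sketch of the data layout such an unstructured mixed-element mesh implies (names and the 20-node quadratic hexahedron assumption are illustrative, not the actual UNIC/CUBIT format):

```python
# Illustrative container for an unstructured mixed mesh (linear tets plus
# quadratic hexes), sketching the data a general-geometry solver consumes.
from dataclasses import dataclass
import numpy as np

@dataclass
class UnstructuredMesh:
    vertices: np.ndarray     # (n_vertices, 3) float coordinates
    tets: np.ndarray         # (n_tets, 4) vertex indices, linear tetrahedra
    hexes: np.ndarray        # (n_hexes, 20) vertex indices, quadratic hexahedra (20-node assumed)
    cell_region: np.ndarray  # material/region id per cell, tets first then hexes

    def n_cells(self) -> int:
        return len(self.tets) + len(self.hexes)
```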
Accomplishments for Second Order Form Solutions
 PN2ND and SN2ND solvers have been developed to solve the steady-state,
second-order, even-parity neutron transport equation
– PN2ND: Spherical harmonic method in 1D, 2D and 3D geometries with FE
mixed mesh capabilities
– SN2ND: Discrete ordinates in 2D and 3D geometries with FE mixed mesh
capabilities
 These second order methods have been implemented on large scale parallel
machines
– Linear tetrahedral and quadratic hexahedral elements
– Fixed source and eigenvalue problems
– Arbitrarily oriented reflective and vacuum boundary conditions
– PETSc solvers are utilized to solve within-group equations
• Conjugate gradient method with SSOR and ICC preconditioners
• Other solution methods and preconditioners will be investigated
– Synthetic diffusion acceleration for within-group scattering iteration
– Power iteration method for the eigenvalue problem (a minimal sketch follows this list)
• Various acceleration schemes will be investigated
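As a reading aid, a minimal sketch of the outer/inner structure just described: power iteration for the k-eigenvalue problem with a conjugate-gradient inner solve. The operators A and F are generic symmetric positive definite stand-ins, not actual transport matrices, and the code is illustrative rather than the UNIC implementation:

```python
# Power iteration for A*phi = (1/k)*F*phi with a CG inner solve.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg

n = 20
A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()  # stand-in "loss" operator (SPD)
F = identity(n, format="csr")                                   # stand-in "fission" operator

phi = np.ones(n)
k = 1.0
for outer in range(500):
    src = (F @ phi) / k                    # scaled fission source from the previous iterate
    phi_new, info = cg(A, src)             # within-group solve by CG (preconditioned in practice)
    assert info == 0, "inner CG solve did not converge"
    k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)  # eigenvalue update from fission-source ratio
    done = abs(k_new - k) < 1e-8
    k, phi = k_new, phi_new / np.linalg.norm(phi_new)  # normalize to keep the iterate bounded
    if done:
        break
print(f"k = {k:.6f} after {outer + 1} outer iterations")
```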
Initial Benchmarking Based Upon Takeda Benchmarks
Reference keff: 1.09515 ± 0.00040 (rods out)

Angular order   VARIANT    PN2ND      SN2ND
1               1.07543    1.07335    -
3               1.09441    1.09305    1.09464
5               1.09577    1.09444    -
7               1.09614    1.09481    -
9               1.09628    1.09493    -
11              1.09636    1.09498    1.09485
23              -          -          1.09494

Error (pcm)     VARIANT    PN2ND      SN2ND
1               -1982      -2180      -
3               -74        -210       -51
5               62         -71        -
7               99         -34        -
9               113        -22        -
11              121        -17        -30
23              -          -          -21
 More spatial refinement is necessary for
VARIANT
 Serial computational performance is
comparable to other codes
– VARIANT P5 was 2 minutes
– PN2ND P5 was ~4 hours
– SN2ND S3 was ~10 minutes
– DFEM S4 was ~54 minutes (LANL, 2001)
 An improper preconditioner is an issue
– SSOR for PN2ND is the suspected bottleneck
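As a reading aid for the table above (inferred from the entries, not stated on the slide), the quoted errors correspond to

$$\text{error (pcm)} \approx (k_{\text{calc}} - k_{\text{ref}}) \times 10^{5}.$$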
ABTR Whole-Core Calculations
[Figure: ABTR core map (barrel ID = 2.27 m, equivalent core OD = 1.31 m) showing Inner Core (24), Outer Core (30), Fuel Test (6), Material Test (3), Primary Control (7), Secondary Control (3), Reflector (78), and Shield (48) assemblies]
 Four benchmark problems are being
analyzed
– All require P7 or S8 angular order
– 33, 100, and 230 groups are planned
 30º symmetry core with homogenized
assemblies
– ~40,000 spatial DOF
• ~100 processors
 120º periodic core with homogenized
assemblies
– ~400,000 spatial DOF
• ~500 processors
 30º symmetry core with homogenized pin
cells
– 1.7 million spatial DOF
• ~1000 processors
 Single assembly with explicit geometry
– 2.2+ million spatial DOF
• ~5000 processors
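As a reading aid, the slide's own numbers imply the following approximate per-processor workloads (a quick calculation, not stated on the slide; the Python is purely illustrative):

```python
# Spatial degrees of freedom per processor for the four planned benchmarks,
# computed directly from the numbers quoted on this slide.
cases = {
    "30-deg symmetry, homogenized assemblies": (40_000, 100),
    "120-deg periodic, homogenized assemblies": (400_000, 500),
    "30-deg symmetry, homogenized pin cells": (1_700_000, 1000),
    "single assembly, explicit geometry": (2_200_000, 5000),
}
for name, (dof, procs) in cases.items():
    print(f"{name}: ~{dof // procs} spatial DOF per processor")
```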
30º Symmetry Core with Homogenized Assemblies
 VARIANT result is 12 pcm off with P9
angular flux approximation
– CPU time was 12 hours
 SN2ND was run on Janus
– S10 solution took 8 hours on 1 processor
 PN2ND was run with a different cross section set
– P7 solution with SSOR took 1 hour on 132 processors
MCNP reference: 1.01921 ± 0.00004

Angular order   SN2ND (40,000 DOF)   Error (pcm)
4               1.02548              627
6               1.02040              119
8               1.02007              86
10              1.01982              61
120º Periodic Core with Homogenized Assemblies
 P11 solution of PN2ND on 512 processors took 2.1 hours (about 1093 processor-hours in total)
 Figure: MeTiS domain decomposition for 512 processors and the flux solution for group 14 of 33
Angular order   PN2ND (793,000 DOF)
1               1.00941
3               1.02274
5               1.02417
7               1.02457
9               1.02472
11              1.02478
Scalability Test of PN2ND
 Parallel performance (strong scaling) from 512 to 4096 Cray XT4 processors
– 120º periodic ABTR core with homogenized assemblies
– 33 group P5 calculation
– Mesh contains 587,458 quadratic tetrahedral elements and 793,668
vertices
• About 12 million space-angle degrees of freedom per energy group
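As a reading aid (the standard definition, not stated on the slide), strong-scaling efficiency relative to the 512-processor base case is

$$E(P) = \frac{512\,T(512)}{P\,T(P)},$$

so ideal scaling corresponds to $E(P) = 1$ as $P$ grows from 512 to 4096.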
30º Symmetry Core with Homogenized Pin Cells
 MCNP took 24 hours on 40 processors (40 processor-days)
 SN2ND and PN2ND calculations are in progress
– PN2ND P3 has been completed
Accomplishments for First Order Form Solution
 MOCFE solver has been developed to solve the steady-state, first-order neutron transport equation
– Method of characteristics in space and discrete ordinates in angle
– Linear tetrahedral and quadratic hexahedral elements
• Utilizes the very efficient Möller-Trumbore algorithm to find ray-triangle intersections (a sketch follows at the end of this slide)
• Surface of a quadratic hexahedral element is meshed with 48 triangles
– Synthetic diffusion acceleration with the use of PETSc parallel
matrices and vectors
 Have performed ray tracing for meshes containing 1 million elements
– Ray tracing accounts for a minor component of the total time
– Fully scalable process
 Algorithm analysis for parallelization is ongoing
 Single assembly T/H coupling calculation is ready to run once computational time is available
– Estimated time for the initial flux solution is 5 hours on 128 processors
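A minimal Python sketch of the Möller-Trumbore ray-triangle test referenced above (the standard published algorithm; function and variable names are illustrative, not the MOCFE implementation):

```python
# Moller-Trumbore: solve origin + t*direction = v0 + u*(v1-v0) + v*(v2-v0)
# via triple products; reject if (u, v) falls outside the triangle.
import numpy as np

def moller_trumbore(origin, direction, v0, v1, v2, eps=1e-12):
    """Return the ray parameter t at the hit point, or None for a miss."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                  # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det         # distance along the ray
    return t if t > eps else None

# Example: a +z ray from (0.2, 0.2, 0) hits the unit triangle in the z = 1 plane at t = 1.
print(moller_trumbore(np.array([0.2, 0.2, 0.0]), np.array([0.0, 0.0, 1.0]),
                      np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]),
                      np.array([0.0, 1.0, 1.0])))
```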
Mesh Refinement Study of MOCFE on Takeda 1 Benchmark
[Figure: error (pcm, -600 to 800) in keff (CR in), keff (CR out), and CR worth vs. number of elements (0 to 500,000)]
 Maximum ray area of 0.01 cm² and 80 angular directions
Scalability Test of MOCFE on Takeda 4 Benchmark
[Figure: measured vs. ideal wall-clock time (hr, 0 to 9) on 16 to 64 processors]
 85,538 elements without synthetic diffusion acceleration
Algebraic Collapsing Synthetic Acceleration of MOCFE
Single Assembly Geometry
 One group, fixed source scoping study for
T/H coupling calculation
Multi-group Cross Section Generation
 Phased approach was adopted to allow uninterrupted applicability to core design
– Tens of thousands of groups are required for an accurate representation of self-shielding effects
– Near term goal is to use 50-500 energy groups (the underlying flux-weighted collapse is sketched at the end of this slide)
• Simplifies the existing multi-step cross section generation schemes
– Improvements to resolved and unresolved resonances are underway
 Provide the user the option to choose the level of approximation
– Online cross section generation
• Libraries of ultra-fine group smooth cross sections and resonance parameters (or point-wise XS)
• Utilize fine group MOCFE solutions
– Fine group cross section libraries
• Functionalized XS data
• Subgroup method is being considered
[Figure: example point-wise cross sections for U-238 and Fe-56]
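As a reading aid, the flux-weighted collapse that defines any such multi-group cross section, in its textbook form (a generic statement, not the specific ANL/ORNL procedure):

$$\sigma_{x,g} = \frac{\int_{E_g}^{E_{g-1}} \sigma_x(E)\,\phi(E)\,dE}{\int_{E_g}^{E_{g-1}} \phi(E)\,dE}$$

Self-shielding enters because the weighting flux $\phi(E)$ dips where the cross section peaks.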
Multi-group Cross Section Generation (Cont’d)
 Starting set of methodologies
– Above resolved resonance energy range
• Ultra-fine group methodologies of MC2-2
– Resolved resonance energy range
• Ultra-fine group calculation of MC2-2 with analytic resonance integrals using a narrow resonance approximation (standard form sketched at the end of this slide)
• Hyper-fine group (almost point-wise) calculation of MC2-2 with
RABANL integral transport method
• Point-wise resonance calculation using CENTRM (ORNL)
– Thermal energy range
• CENTRM methodologies (ORNL)
 ENDF/B data processing
– ETOE-2: create MC2-2 libraries
– NJOY: point-wise resonance cross sections
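As a reading aid for the narrow resonance bullet above, the textbook narrow-resonance form of the effective self-shielded cross section (a generic statement, not the specific MC2-2 formulation):

$$\sigma_{x,\mathrm{eff}} = \frac{\int \sigma_x(E)\,\dfrac{dE}{E\,[\sigma_t(E)+\sigma_0]}}{\int \dfrac{dE}{E\,[\sigma_t(E)+\sigma_0]}}$$

where $\sigma_0$ is the background cross section per resonant-nuclide atom.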
Multi-group Cross Section Generation (Cont’d)
 Update and testing of the ETOE-2/MC2-2 system
– The MC2-2 code is currently being revised for eventual coupling with UNIC
and use in a parallel computing environment
• FORTRAN 90
• Current focus is on off-line cross section generation
– Use of ENDF/B-VII.0 data
• Required coding changes to ETOE-2
• Completed processing major actinides and structural material nuclides
 Preliminary tests of the ENDF/B-VII.0 libraries of MC2-2
– MC2-2/TWODANT R-Z modeling is compared with Monte Carlo R-Z
– Good agreement within 0.25% ∆ρ for larger systems with relatively soft spectra (Big-10, ZPR-6, and ZPPR-21)
– Multiplication factors overestimated by 0.22-0.35% ∆ρ for small systems (Flattop, Jezebel, and Godiva)
– Good agreement of C/E ratios of spectral indices within 2.7% (Godiva and
Jezebel)
Preliminary Results of ENDF/B-VII.0 Data
Assembly       MCNP keff ± pcm   TWODANT ∆ (pcm)
Flattop-25     1.00212 ± 35      227
Flattop-Pu     1.00072 ± 34      235
Flattop-23     0.99921 ± 34      257
Godiva         0.99996 ± 32      350
Jezebel-240    0.99944 ± 31      300
Jezebel-23     1.00007 ± 31      279
Jezebel        1.00028 ± 30      216
Big-10         0.99513 ± 19      -57
ZPR-6/6A       0.99609 ± 23      22
ZPR-6/7        0.98671 ± 22      -126
ZPPR-21A       0.99869 ± 20      -153
ZPPR-21B       0.99293 ± 20      75
ZPPR-21C       0.99923 ± 18      -25
ZPPR-21D       1.00345 ± 20      253
ZPPR-21E       1.00485 ± 20      184
ZPPR-21F       1.00612 ± 20      11

Spectral index             Experiment      TWODANT C/E ratio*
Godiva
  σf(Pu-239)/σf(U-235)     0.165 ± 0.002   0.973 ± 0.011
  σf(Np-237)/σf(U-235)     1.59 ± 0.03     0.987 ± 0.019
  σf(U-233)/σf(U-235)      0.837 ± 0.013   1.006 ± 0.015
  σf(U-238)/σf(U-235)      1.402 ± 0.025   0.989 ± 0.018
Jezebel
  σf(Pu-239)/σf(U-235)     0.214 ± 0.002   0.978 ± 0.011
  σf(Np-237)/σf(U-235)     1.578 ± 0.027   0.987 ± 0.017
  σf(U-233)/σf(U-235)      0.962 ± 0.016   1.018 ± 0.016
  σf(U-238)/σf(U-235)      1.448 ± 0.029   0.986 ± 0.020

* C/E: calculated / experimental values
Conclusions
 Code development is progressing well
– ETOE-2 and MC2-2 are being revised
– Second order solvers PN2ND and SN2ND have been developed
• Demonstrated good scalability to 4000 processors
– First order solver MOCFE has been developed including synthetic
acceleration scheme and quadratic hexahedral meshes
– Adjoint solver has not been completed
– Preconditioner study has yet to be completed
 Milestone calculations are the primary area still to be completed
– Jaguar (Cray XT4 at ORNL) is being expanded, and its job queue is typically saturated
• We are very grateful to ORNL for the CPU time
– BlueGene and JAZZ (ANL) are typically saturated
– Production machines are not effective for code development
• An average 4-8 hour queue wait for scoping runs is problematic
• A purchase request for a small cluster is in progress
– INCITE proposal was submitted in collaboration with ORNL for computer time on leadership-class machines (XT4 and BlueGene/P)
Plans for FY08
 Further development of high fidelity neutronics solvers
– Implement acceleration for steady state eigenvalue calculations
– Optimize acceleration schemes based upon geometry and cross section data
– Investigate strategy utilizing parallelization by group and space
– Formalize user interface and cross section management
 Develop a time dependent solution capability (kinetics)
 Develop a multi-group cross section generation code based on the ETOE-2/MC2-2
methodologies for use in a parallel computing environment
 Interface the new cross section code with the neutronics solvers
– Develop platform independent library interface of all key cross section data
– Investigate online cross section generation
 Complete the processing of ENDF/B-VII.0 data for fast reactor analysis work
 Verification and validation
– Systematic verification of multi-group cross section generation scheme
– Benchmark using ZPR critical experiments for fast reactors
• ZPR-6/6A or ZPR-6/7