Cadarache - 2 - Royal Institute of Technology


1

Monte-Carlo Primer

Motto: Sir, In your otherwise beautiful poem (The Vision of Sin) there is a verse which reads: "Every moment dies a man, every moment one is born." Obviously, this cannot be true, and I suggest that in the next edition you have it read

“Every moment dies a man every moment 1 and 1/16 is born”

Even this value is slightly in error but should be sufficiently accurate for poetry.

....Charles Babbage (in a letter to Lord Tennyson)

2

PROGRAM

• Introduction
• History of Monte Carlo
• Basics of Monte Carlo
• Random Number Generators
• Cross sections
• Monte Carlo vs. Deterministic Methods
• Particle Transport Simulations
• Estimators and Statistical Analysis
  – Non-analog Monte Carlo
  – Variance Reduction Techniques

3

Introduction: Monte Carlo

The Monte Carlo method has been used for almost 60 years to solve radiation transport problems in high energy physics, nuclear reactor analysis, radiation shielding, medical imaging, oil well-logging, etc. Individual particle histories are simulated using random numbers, highly accurate representations of particle interaction probabilities, and exact models of 3D problem geometry. Monte Carlo methods are sometimes the only viable methods for analyzing complex, demanding particle transport problems.

4

Modern applications of Monte Carlo

• Nuclear reactor design
• Quantum chromodynamics
• Radiation cancer therapy
• Traffic flow
• Stellar evolution
• Econometrics
• Dow-Jones forecasting
• Oil well exploration
• VLSI design

5

History of Monte Carlo

John von Neumann invented scientific computing in the 1940's:

• stored programs, "software"
• algorithms & flowcharts
• assisted with hardware design as well

"Ordinary" computers are called "von Neumann machines".

John von Neumann invented Monte Carlo particle transport in the 1940's:

• Highly accurate: no essential approximations
• Expensive: typically the "method of last resort"
• Monte Carlo codes for particle transport have been proven to work effectively on all computers: vector machines, parallel supercomputers, workstations, clusters of workstations, ...

6

Prominent Figures in Computing

If I have seen further it is by standing on the shoulders of Giants

from Isaac Newton's letter to Robert Hooke

• Gottfried Wilhelm Leibniz (1646-1716)
• Charles Babbage (1791-1871)
• Augusta Ada Byron (King), Countess of Lovelace (1815-1852): the first computer programmer; the programming language Ada (1979) is named after her

7

The First Digital Computer

Mid-1830’s

8

John von Neumann,

1903-57, American mathematician, b. Hungary; Ph.D. Univ. of Budapest, 1926. He came to the United States in 1930 and was naturalized in 1937. He taught (1930-33) at Princeton and after 1933 was associated with the Institute for Advanced Study. In 1954 he was appointed a member of the Atomic Energy Commission. A founder of the mathematical theory of games, he also made fundamental contributions to quantum theory and to the development of the atomic bomb (Stan Ulam was another great mathematician in this environment). He was a leader in the design and development of high-speed electronic computers; his development of MANIAC (an acronym for mathematical analyzer, numerical integrator, and computer) enabled the United States to produce and test (1952) the world's first hydrogen bomb.

With Oskar Morgenstern he wrote Theory of Games and Economic Behavior. Von Neumann's other writings include Mathematical Foundations of Quantum Mechanics (1926), The Computer and the Brain (1958), and Theory of Self-Reproducing Automata (1966).

9

”Parties and nightlife held a special appeal for von Neumann. While teaching in Germany, von Neumann had been a denizen of the Cabaret-era Berlin nightlife circuit. ”

10

Terminology

1) Deterministic: (defining equations not recovered in this transcript)

2) Probabilistic: (defining equations not recovered in this transcript)

11

Basics of M-C

Two basic ways to approach the use of Monte Carlo methods for solving the transport equation:

• mathematical technique for numerical integration
• computer simulation of a physical process

Each is "correct".

The mathematical approach is useful for: importance sampling, convergence, variance reduction, random sampling techniques, ...

The simulation approach is useful for: collision physics, tracking, tallying, ...

For the Monte Carlo approach, consider the integral form of the Boltzmann equation. Most theory on Monte Carlo deals with fixed-source problems; eigenvalue problems are needed for reactor physics calculations.

12

Major Components of a Monte Carlo Algorithm

The primary components of a Monte Carlo simulation method:

• Probability distribution functions (pdf's): the physical (or mathematical) system must be described by a set of pdf's.
• Random number generator: a source of random numbers uniformly distributed on the unit interval must be available.
• Sampling rule: a prescription for sampling from the specified pdf's, assuming the availability of random numbers on the unit interval.
• Scoring (or tallying): the outcomes must be accumulated into overall tallies or scores for the quantities of interest.
• Error estimation: an estimate of the statistical error (variance) as a function of the number of trials and other quantities.
• Variance reduction techniques: methods for reducing the variance in the estimated solution, and hence the computational time of the simulation.
• Parallelization and vectorization: algorithms that allow Monte Carlo methods to be implemented efficiently on advanced computer architectures.

13

Major Shortcoming of the Deterministic Approach: Too Many Simplifications

• Physical simplifications:
  – Collision model
  – Isotropic scattering in the Lab system
  – Continuous slowing-down, etc.
• Mathematical simplifications:
  – Diffusion model, boundary conditions, etc.
  – Multi-group model
  – Discretization, ...
• Technological simplifications:
  – Geometrical simplifications
  – Homogenization, ...

14 

Diffusion Model

At a free surface, the flux is extrapolated to zero at a distance d beyond the boundary:

Diffusion equation: d ≈ 0.66 λtr
Transport equation: d ≈ 0.71 λtr

15

Multi-Group Approximation

The energy range is divided into G groups, with group g spanning [E_g, E_{g-1}]:

E_min = E_G < E_{G-1} < ... < E_2 < E_1 < E_0 = E_max

Group cross sections are flux-weighted averages over each group:

$$\Sigma_g(\mathbf{r}) = \frac{\int_{E_g}^{E_{g-1}} \Sigma(\mathbf{r},E)\,\phi(\mathbf{r},E)\,dE}{\int_{E_g}^{E_{g-1}} \phi(\mathbf{r},E)\,dE}$$

16

Geometrical Simplifications

Discretization: in space, in angle (figures)

17

It completely falls out

18

Advantages of Monte Carlo

• Very complex 3-D configurations
• Continuous space variables
• Continuous angle variables
• Continuous energy variable

The errors in Monte Carlo calculations take the form of stochastic uncertainties

19

The Theoretical Foundation of MC

Given N independent, identically distributed (iid) random variables $x_1, x_2, \ldots, x_N$ with mean $\mu$ and variance $\sigma^2$, form the sample mean

$$\bar{x}_N = \frac{1}{N}\sum_{i=1}^{N} x_i$$

By the central limit theorem, as $N \to \infty$,

$$P\left(\mu - \frac{z\sigma}{\sqrt{N}} < \bar{x}_N < \mu + \frac{z\sigma}{\sqrt{N}}\right) \to \frac{1}{\sqrt{2\pi}}\int_{-z}^{z} e^{-t^2/2}\,dt$$

i.e., $\bar{x}_N \to \mu$, and the statistical uncertainty shrinks as $1/\sqrt{N}$.
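The 1/√N shrinkage of the statistical uncertainty can be seen directly in a small numerical experiment. A minimal sketch in Python (the function name and parameters are illustrative, not from any transport code):

```python
import random

def sample_mean_error(n, trials=2000, seed=1):
    """Average |x_bar - mu| over many repeated experiments, for x ~ U(0,1), mu = 0.5."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(rng.random() for _ in range(n))
        total += abs(s / n - 0.5)
    return total / trials

# Quadrupling N should roughly halve the error (1/sqrt(N) scaling).
e100 = sample_mean_error(100)
e400 = sample_mean_error(400)
```

The ratio e100/e400 comes out close to 2, as the central limit theorem predicts.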

20

Elements of Random Variables

A probability density function (pdf) f(x) satisfies

$$f(x) \ge 0, \qquad \int f(x)\,dx = 1$$

and under a change of variable y = y(x) the density transforms as

$$f(y) = f(x)\left|\frac{dx}{dy}\right|$$

21

Distribution Sampling

A pseudo-random number ξ is uniformly distributed on (0,1): P(ξ) = 1. We want to generate a series of random numbers distributed according to the number of mean free paths,

$$f(x) = e^{-x}, \qquad x \ge 0$$

Setting the cumulative distribution equal to ξ,

$$\xi = \int_0^x e^{-t}\,dt = 1 - e^{-x}$$

gives

$$x = -\ln(1-\xi)$$
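This inversion is one line of code. A minimal sketch in Python (illustrative; distances are in units of the mean free path):

```python
import math
import random

def sample_mfp(rng):
    """Inverse-CDF sampling of f(x) = exp(-x): x = -ln(1 - xi), xi ~ U(0,1)."""
    return -math.log(1.0 - rng.random())

rng = random.Random(42)
n = 200_000
xs = [sample_mfp(rng) for _ in range(n)]
mean = sum(xs) / n   # the unit-rate exponential has mean 1 mean-free-path
```

With a couple of hundred thousand samples the empirical mean lands within about one percent of the exact value 1.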

22

Rejection Technique

To sample x from f(x) on [a, b] with 0 ≤ f(x) ≤ f_max:

1. Choose two random numbers ξ1 and ξ2, uniform on (0,1).
2. Set x = a + ξ1 (b − a).
3. If ξ2 f_max ≤ f(x), accept x; otherwise reject and return to step 1.

23

Example of Simple Simulation

Sample free-flight distances $x_i$ distributed as $f(x) = \Sigma_t e^{-\Sigma_t x}$:

1) Generate a random number ξ.
2) Set $x_i = -\frac{1}{\Sigma_t}\ln\xi$.
3) Calculate the sample mean $\bar{x}_N = \frac{1}{N}\sum_{i=1}^{N} x_i$.

24

Principles of a Nuclear Reactor

(Neutron life-cycle diagram: N1 fission neutrons born at ~2 MeV, with ν ≈ 2.5 neutrons and 200 MeV released per fission; losses to leakage, non-fissile absorption, resonance absorption, and non-fuel absorption during slowing down to ~1 eV; N2 neutrons cause fission in the next generation, k = N2/N1.)

25 


26 

Flux/Current Estimators

Average flux in a cell of volume V, accumulated over N histories:

Collision estimator: $\bar\phi = \frac{1}{\Sigma_t V}\cdot\frac{1}{N}\sum_{i=1}^{N} c_i$, where $c_i$ is the number of collisions scored in the cell during history i.

Track-length estimator: $\bar\phi = \frac{1}{VN}\sum_{i=1}^{N} l_i$, where $l_i$ is the track length in the cell.

Surface estimators, for a surface of area A with normal $\mathbf{n}$:

Partial currents: $J^{\pm} = \frac{1}{AN}\sum_{i=1}^{N} n_i^{\pm}$; net current: $J = J^{+} - J^{-}$.

Surface flux: $\bar\phi = \frac{1}{AN}\sum_{i=1}^{N}\frac{1}{|\mu_i|}$, with $\mu_i$ the cosine between the particle direction $\mathbf{\Omega}$ and the surface normal $\mathbf{n}$.

The Essence of Monte Carlo

27

Random numbers + Probability distributions → Monte Carlo → RESULTS

28

The Essence of Monte Carlo

Monte Carlo

• Sampling
• Scoring (tallying)
• Error estimation
• Variance reduction techniques
• Vectorization and parallelization

29

Simple Monte Carlo Example:

Evaluate $G = \int_0^1 g(x)\,dx$ with $g(x) = \sqrt{1 - x^2}$.

Mathematical approach: for k = 1, ..., N, choose $\hat{x}_k$ randomly in (0,1); then

$$G = (1-0)\cdot[\text{average value of } g(x)] \approx \frac{1}{N}\sum_{k=1}^{N} g(\hat{x}_k) = \frac{1}{N}\sum_{k=1}^{N}\sqrt{1-\hat{x}_k^2}$$

Simulation approach ("darts game"): for k = 1, ..., N, choose $x_k$ and $y_k$ randomly in (0,1). If $x_k^2 + y_k^2 \le 1$, tally (register) a "hit".

30

Simulation approach: "darts game"

For k = 1, ..., N: choose $x_k$ and $y_k$ randomly in (0,1). If $x_k^2 + y_k^2 \le 1$, tally a "hit"; otherwise a "miss". Then

$$G = [\text{area under curve}] \approx (1\cdot 1)\cdot\frac{\text{number of hits}}{N}, \qquad N = \text{total number of shots}$$
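The darts game takes only a few lines of code. A sketch in Python (names are illustrative):

```python
import math
import random

def darts_estimate(n, rng):
    """Estimate G = area under sqrt(1 - x^2) on (0,1) by the fraction of 'hits'."""
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:   # the dart landed under the quarter circle
            hits += 1
    return hits / n

rng = random.Random(2024)
g = darts_estimate(200_000, rng)   # the true value is pi/4 ~ 0.785
```

Multiplying the estimate by 4 gives the familiar Monte Carlo estimate of π.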

M-C as integration tool

Monte Carlo is often the method of choice for applications with integration over many dimensions.

31

Examples: high-energy physics, particle transport (ADS)

Evaluate:

$$G = \int_{a_1}^{b_1}\int_{a_2}^{b_2}\cdots\int_{a_M}^{b_M} g(r_1, r_2, \ldots, r_M)\,dr_1\,dr_2\cdots dr_M$$

where $r_1, r_2, \ldots, r_M$ are all independent variables.

For k = 1, ..., N: for each m = 1, ..., M, choose $R_m(k)$ randomly in $(a_m, b_m)$. Then

$$G \approx (b_1 - a_1)(b_2 - a_2)\cdots(b_M - a_M)\cdot\frac{1}{N}\sum_{k=1}^{N} g\big(R_1(k), R_2(k), \ldots, R_M(k)\big)$$
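The same recipe works in any number of dimensions. A hedged sketch in Python (a generic box integrator; the test integrand is illustrative):

```python
import random

def mc_integrate(g, bounds, n, rng):
    """Monte Carlo estimate of the integral of g over the box given by bounds."""
    volume = 1.0
    for a, b in bounds:
        volume *= (b - a)
    total = 0.0
    for _ in range(n):
        # one random point per history, one coordinate per dimension
        point = [a + rng.random() * (b - a) for a, b in bounds]
        total += g(point)
    return volume * total / n

rng = random.Random(7)
# Example: integral of x*y*z over the unit cube = (1/2)^3 = 0.125
est = mc_integrate(lambda r: r[0] * r[1] * r[2], [(0.0, 1.0)] * 3, 100_000, rng)
```

Note that the cost does not grow exponentially with M; only the number of coordinates sampled per history does, which is why Monte Carlo wins in high dimensions.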

32

Probability Density Functions:

Continuous probability density f(x):

$$f(x) \ge 0; \qquad \text{Probability}\{a \le x \le b\} = \int_a^b f(x)\,dx; \qquad \text{Normalization: } \int f(x)\,dx = 1$$

Discrete probability density $f_k = f(x_k)$, k = 1, ..., N:

$$f_k \ge 0; \qquad \text{Probability}\{x = x_k\} = f_k; \qquad \text{Normalization: } \sum_{k=1}^{N} f_k = 1$$

33

Basic parameters

Mean, average, expected value (1st statistical moment):

$$\mu = \langle x\rangle = E[x] = \int x f(x)\,dx \;\text{ (continuous)}; \qquad \mu = \sum_{k=1}^{N} x_k f_k \;\text{ (discrete)}$$

Variance (2nd statistical moment) and standard deviation:

$$\operatorname{var}(x) = \sigma^2 = E\big[(x-\mu)^2\big] = \int (x-\mu)^2 f(x)\,dx; \qquad \sigma^2 = \sum_{k=1}^{N}(x_k-\mu)^2 f_k$$

Standard deviation: $\sigma = \sqrt{\sigma^2}$
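For a discrete density these moments are straightforward to compute. A small Python sketch (function name illustrative):

```python
import math

def basic_parameters(xs, fs):
    """Mean, variance, and standard deviation of a discrete distribution."""
    mean = sum(x * f for x, f in zip(xs, fs))
    var = sum((x - mean) ** 2 * f for x, f in zip(xs, fs))
    return mean, var, math.sqrt(var)

# Fair six-sided die: mean 3.5, variance 35/12
mean, var, sigma = basic_parameters([1, 2, 3, 4, 5, 6], [1 / 6] * 6)
```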

34

Basic functions

Function of a Random Variable

Consider g(x), where x is a random variable with density f(x):

$$E[g(x)] = \int g(x)\,f(x)\,dx \;\text{ (continuous)}; \qquad E[g(x)] = \sum_{k=1}^{N} g_k f_k \;\text{ (discrete)}$$

The key to MC methods is the notion of RANDOM sampling. The problem can be stated this way: given a probability density f(x), produce a sequence of x's distributed in the same manner as f(x).

35

• The use of random sampling distinguishes Monte Carlo from all other methods.
• When Monte Carlo is used to solve the integral Boltzmann transport equation:
  – Random sampling models the outcome of physical events (e.g., neutron collisions, fission process, source, ...)
  – Computational geometry models the arrangement of materials.

36

Transport Equation

 (

r

,

v

)      (

r

' ,

v

' )

C

(

v

' 

v

,

r

' )

dv

' 

Q

(

r

' ,

v

) 

T

(

r

' 

r

,

v

)

dr

' • • • • • • •

Assumptions

static homogeneous medium time-independent Markovian - next event depends only on current (r,v) not on previous events particles do not interact with each other neglect relativistic effects no long-range forces (particles fly in straight lines between events) material properties are not affected by particle reactions •  etc., etc.

can use superposition principle

37

Basis for Monte Carlo Solution Method

Let $\psi_k(p)$ be the density of particles emerging from their k-th collision at phase-space point p, with $\psi_0$ the source by definition. Expand $\psi$ into components according to the number of collisions:

$$\psi = \sum_{k=0}^{\infty} \psi_k, \qquad k = 0, 1, 2, \ldots$$

Note that collision k depends only on the results of collision k−1, and not on any prior collisions k−2, k−3, ...

38

Statistical approach to determining $\psi_k$:

$$\psi_k(p) = \int \psi_{k-1}(p')\, R(p' \to p)\, dp'$$

Interpret the terms in the following manner: $\psi_{k-1}(p')$ is the probability density for the occurrence of (k−1)-st collisions at p', and $R(p' \to p)$ is the probability that a (k−1)-st collision at p' results in a k-th collision at p.

Monte Carlo method:
1. Randomly sample p' from $\psi_{k-1}(p')$
2. Given p', randomly sample p from $R(p' \to p)$
3. If p lies within $dp_i$ at $p_i$, tally 1 in bin i

Repeat steps 1, 2, 3 N times; then

$$\hat{\psi}(p_i)\,dp_i \approx (\text{counts in bin } i)/N$$

39

Histories (trajectories)

After repeated substitution for $\psi_{k-1}$:

$$\psi_k(p) = \int\cdots\int \psi_0(p_0)\, R(p_0 \to p_1)\, R(p_1 \to p_2)\cdots R(p_{k-1} \to p)\, dp_0\, dp_1\cdots dp_{k-1}$$

A "history" (trajectory) is a sequence of states $(p_0, p_1, p_2, p_3, \ldots)$. For estimates in a given region, tally the occurrences for each collision of each history within the region.

40

Transport Equation

$$\psi_k(p) = \int\cdots\int \psi_0(p_0)\, R(p_0 \to p_1)\, R(p_1 \to p_2)\cdots R(p_{k-1} \to p)\, dp_0\, dp_1\cdots dp_{k-1}$$

Monte Carlo approach: generate a sequence of states $(p_0, p_1, \ldots, p_k)$ [i.e., a history] by:

• randomly sampling from the PDF for the source: $\psi_0(p_0)$
• randomly sampling from the PDF for the k-th transition: $R(p_{k-1} \to p_k)$

Generate estimates of results by averaging over M histories:

$$\langle A\rangle = \int A(p)\,\psi(p)\,dp \approx \frac{1}{M}\sum_{m=1}^{M}\sum_{k} A(p_{k,m})$$
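The history-generation idea can be illustrated with a deliberately simplified random walk: a one-speed particle in a 1-D slab with isotropic scattering (the geometry, names, and physics here are illustrative sketches, not any production code):

```python
import math
import random

def history_transmitted(thickness, scatter_prob, rng):
    """Follow one particle through a 1-D slab (lengths in mean free paths).

    Returns True if the particle leaks out the far side. One-speed transport
    with isotropic scattering is assumed for simplicity.
    """
    x, mu = 0.0, 1.0                          # source at x = 0, normally incident
    while True:
        x += mu * (-math.log(rng.random()))   # sample flight to the next collision
        if x >= thickness:
            return True                       # transmitted
        if x < 0.0:
            return False                      # leaked back out the near side
        if rng.random() >= scatter_prob:
            return False                      # absorbed at the collision site
        mu = 2.0 * rng.random() - 1.0         # isotropic scatter: new direction

rng = random.Random(1)
n = 100_000
# Pure-absorber check: transmission through 2 mfp should be exp(-2) ~ 0.135
t_abs = sum(history_transmitted(2.0, 0.0, rng) for _ in range(n)) / n
```

With scattering switched off the estimate can be checked against the analytic answer e^(-thickness), which is a useful sanity test for any transport sketch.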

41

Simulations

Simulation approach to particle transport: faithfully simulate the history of a single particle from birth to death. During the particle history:

• model collisions using physics equations & cross-section data
• model free flight using computational geometry
• tally the occurrences of events in each region

Select the source r, Ω, E randomly; track through the geometry, selecting the collision site randomly; apply collision physics, selecting the new Ω, E randomly. Repeat the simulation for many histories, accumulating the tallies.

Fundamental rule: think like a neutron (or other projectile, e.g. a proton).

42

MC rules for simulations

Source
• Random sampling of E, Ω: analytic, discrete, or piecewise-tabulated PDF's
• Computational geometry, r: sample from a region in 3-D space, or from a discrete PDF

Tracking
• Random sampling of d_collide: distance to collision, from the mean free path (exponential PDF)
• Computational geometry, d_geom: distance-to-boundary, ray-tracing, next-region, ...

Collisions
• Random sampling of E', Ω': analytic, discrete, or piecewise-tabulated PDF's
• Physics, σ, f(μ): cross-section data, angular PDF's, kinematics, ...

Tallies, Statistics, Variance Reduction
• Random sampling

43

Single particle

• random walk for the particle history
• simulate events, from birth to death
• tally events of interest

Batch of histories ("generation")

• random walk for many particle histories
• tally the aggregate behavior
(Select the source r, Ω, E randomly; track through the geometry, selecting collision sites randomly; apply collision physics, selecting new Ω, E randomly.)

Overall

• timesteps: geometry changes, material changes, fuel depletion, burnable absorbers, control rods

44

Random number generators

Truly random - defined as exhibiting "true" randomness, such as the time between "tics" from a Geiger counter exposed to a radioactive element.

Pseudorandom - defined as having the appearance of randomness, but nevertheless exhibiting a specific, repeatable pattern.

Quasi-random - defined as filling the solution space sequentially (in fact, these sequences are not at all random; they are just comprehensive at a preset level of granularity). For example, consider the integer space [0, 100]. One quasi-random sequence which fills that space is 0, 1, 2, ..., 99, 100. Another is 100, 99, 98, ..., 2, 1, 0. Yet a third is 23, 24, 25, ..., 99, 100, 0, 1, ..., 21, 22. Pseudorandom sequences which would fill the space are pseudorandom permutations of this set (they contain the same numbers, but in a different, "random" order).

45

Random number generators

Desirable Properties

Random numbers are used to determine: (1) attributes (such as outgoing direction, energy, etc.) of launched particles, and (2) interactions of particles with the medium. Physically, the following properties are desirable:

• The attributes of particles should not be correlated. The attributes of each particle should be independent of the attributes of any other particle.

• The attributes of particles should be able to fill the entire attribute space in a manner consistent with the physics. E.g., if we are launching particles into the hemispherical space above a surface, then we should approach completely filling the hemisphere with outgoing directions as the number of launched particles approaches infinity. At the very least, "holes" or sparseness in the outgoing directions should not affect the answers significantly. Also, if we are sampling from an energy distribution, then with an increasing number of particles we should reproduce the energy distribution better and better, until our simulated distribution is "good enough."

46

Random number generators

Desirable properties

Mathematically, the sequence of random numbers used to effect a Monte Carlo model should possess the following properties:

• Uncorrelated sequences - the sequences of random numbers should be serially uncorrelated. Any subsequence of random numbers should not be correlated with any other subsequence. Most especially, n-tuples of random numbers should be independent of one another. For example, if we are using the random number generator to generate outgoing directions so as to fill the hemispherical space above a point (or area), we should generate no unacceptable geometrical patterns in the distribution of outgoing directions.

• Long period - the generator should be of long period (ideally, the generator should not repeat; practically, the repetition should occur only after the generation of a very large set of random numbers).

47

• Uniformity - the sequence of random numbers should be uniform and unbiased. That is, equal fractions of random numbers should fall into equal "areas" in space. For example, if random numbers on [0,1) are to be generated, it would be poor practice were more than half to fall into [0, 0.1), presuming the sample size is sufficiently large. Often, when there is a lack of uniformity, there are n-tuples of random numbers which are correlated; in this case, the space might be filled in a definite, easily observable pattern. Thus, the properties of uniformity and uncorrelated sequences are loosely related.

• Efficiency - the generator should be efficient. In particular, a generator used on vector machines should be vectorizable, with low overhead. On massively parallel architectures, the processors should not have to communicate among themselves, except perhaps during initialization. This is not generally a significant issue: with minimal effort, random number generators can be implemented in a high-level language such as C or FORTRAN and be observed to consume well less than 1% of overall CPU time over a large suite of applications.

48

Multiplicative, Linear, Congruential Generators: the kind most commonly used for generating random integers.

$$\xi_i = (A\,\xi_{i-1} + B) \bmod M, \qquad \text{i.e.} \quad \xi_i = \operatorname{mod}(A\,\xi_{i-1} + B,\; M)$$

where $\xi_0$ is the "seed". In practice, for example:

$$\xi_i = \operatorname{mod}(5^{19}\,\xi_{i-1},\; 2^{48})$$

Conversion from integer to real: $r_i = \xi_i / M$ (a float on [0,1)).
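A minimal sketch of such a generator in Python, using the multiplier and modulus quoted above (for illustration only; production transport codes use carefully validated implementations):

```python
# Multiplicative congruential generator with multiplier 5**19, modulus 2**48.
A = 5 ** 19
M = 2 ** 48

def lcg(seed):
    """Yield floats on [0, 1) from xi_i = A * xi_(i-1) mod M (seed must be odd)."""
    xi = seed
    while True:
        xi = (A * xi) % M
        yield xi / M

gen = lcg(seed=1)
sample = [next(gen) for _ in range(10_000)]
mean = sum(sample) / len(sample)   # near 0.5 for a reasonably uniform stream
```

Python's arbitrary-precision integers make the 48-bit arithmetic exact; in C one would use 64-bit unsigned arithmetic and mask to 48 bits.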

49

Random numbers

The Essence of Monte Carlo

Random numbers + Probability distributions → Monte Carlo → RESULTS

50

$$\xi = 1 - e^{-\Sigma x}$$

51

THE HEART OF THE MONTE CARLO: sampling of the mean free path.

The probability of passing a distance x and then interacting in dx is

$$p(x)\,dx = \Sigma\, e^{-\Sigma x}\,dx$$

The cumulative probability for passing distances up to z is

$$\int_0^z \Sigma\, e^{-\Sigma x}\,dx = \left[-e^{-\Sigma x}\right]_0^z = 1 - e^{-\Sigma z} = \xi$$

thus

$$z = -\frac{1}{\Sigma}\ln(1-\xi) = -\frac{1}{\Sigma}\ln\xi'$$

where ξ, ξ' are random numbers on (0,1).

52

The particle state is $w(x, y, z, v_x, v_y, v_z, t)$, with w the statistical weight. One step of the random walk consumes a sequence of random numbers $\xi_i$:

• $\xi_1$: path length, $l = -\frac{1}{\Sigma_t}\ln\xi_1$
• $\xi_2$: choice of interaction (capture $\sigma_c$, fission $\sigma_f$, elastic scattering $\sigma_{s,el}$, inelastic scattering $\sigma_{s,in}$)
• $\xi_3$: choice of interaction angle

With implicit capture, the surviving weight after a collision is $w_1 = w\left(1 - \frac{\sigma_a}{\sigma_T}\right)$, giving the new state $w_1(x_1, y_1, z_1, v_{1x}, v_{1y}, v_{1z}, t_1)$.

Cross sections are probability distribution functions!

53

DATA LIBRARIES ARE OF TWO FORMS

Pointwise
• Cross sections are specified at a sufficient number of energy points to faithfully reproduce the evaluated cross section
• Used with: continuous-energy Monte Carlo codes

Multigroup
• All data are collapsed and averaged into energy groups
• Used with: diffusion codes, discrete-ordinates codes, multigroup Monte Carlo codes

54

CROSS-SECTION LIBRARIES

• Pointwise
  – User does not have to worry about: group structure, flux spectrum, self-shielding
  – Small amount of work in the processing code; large amount of work in the transport code
• Multigroup
  – User needs to worry about everything
  – Large amount of work in the processing code; small amount of work in the transport code

55

(Figure: 238Np fission cross section vs. neutron energy, 10^-3 eV to 10^7 eV.)

56

(Figure: 235U fission cross section, pointwise JEF2 data, with the resonance region indicated.)

57

(Figure: 235U fission and capture cross sections, JEF2; pointwise vs. multigroup representations.)

58

(Figure: 235U fission cross sections in the resonance region; pointwise and multigroup JEF2 data.)

59

Neutron cross section data contain

• Cross sections for reactions:
  – elastic scattering, (n,2n), (n,3n), (n,n'p)
  – inelastic scattering (n,n') continuum, (n,γ), (n,p) and (n,α)
  – total, absorption, proton production and α production
• Angular distributions for different reactions (45 in MCNP)
• Energy distributions for some reactions (4)
• Heating numbers and Q-values
• Photon production cross sections, angular distributions for (n,γ), (n,n'), (n,2n) and (n,3n)

60

Classes of cross-section data (MCNP)

C - continuous-energy neutron
D - discrete-reaction neutron
Y - neutron dosimetry
T - neutron S(α,β) thermal
P - continuous-energy photon
M - multigroup neutron
G - multigroup photon
E - continuous-energy electron

61

Major Neutron Physics Approximations

• (n,xn) sampled independently
• (n,f), (n,xn) happen instantly
• unresolved resonances treated as average cross sections
• no delayed gamma production
• no charged-particle production

62

Experimental data and theoretical codes → data analysis → evaluations (ENDF/B evaluations; T-2 evaluations) → processing codes (NJOY/TRANSX at T-2; MCPOINT and CLYDE at LLNL) → multigroup or continuous-energy files → file-checking codes → test data in transport codes → include data in library, with feedback to evaluators, experimentalists, and processors.

63

Approximations in generating data

• Evaluation assumptions:
  – choice of experiments
  – choice of representations, i.e., discrete lines for continuous distributions
  – interpolation (model codes)
  – bin sizes
  – thresholds, Q-values, particularly for elements
• Processing assumptions:
  – representation of angular distributions as equiprobable bins
  – resonance parameter treatment

64

Things to consider when choosing neutron cross-section data

• Differences in evaluators' philosophy
• Neutron energy spectrum
• Temperature at which the set was processed
• Availability of photon-production data
• Sensitivity of results to different evaluations
• Use the best data that you can afford

65

THERMAL EFFECTS

• Target motion (gas, liquid, solid) --> the incident neutron "sees" targets with various velocities, i.e., many different relative velocities
  – σ = σ(V_relative), so the average cross section is changed
  – kinematics are different
• neutron upscattering
• Doppler broadening

66

THERMAL EFFECTS (cont.)

• Lattice structure (solid)

At high energy: wavelength of neutron << lattice spacing. At low energy: wavelength of neutron ≈ lattice spacing.
• Cross sections show very jagged behaviour; each peak corresponds to a particular set of crystal planes
• Coherent scattering (interference of scattered waves): waves add constructively in some directions and destructively in others ==> angular distribution changed
• Bragg scattering

67

THERMAL EFFECTS (cont.)

• Molecular energy levels (liquid, solid): vibrational and rotational levels with ~0.1 eV spacing below a few eV. The neutron loses or gains energy in discrete amounts ==> modifies the DOUBLE-DIFFERENTIAL cross section $\sigma(E \to E', \mu)$ for thermal inelastic scattering.

68

Deterministic vs. Monte-Carlo

Deterministic:
• solves the transport equation (integro-differential) for the average particle behaviour
• gives fairly complete information (e.g. flux) throughout the phase space of the problem

Monte-Carlo:
• simulates individual particles and records some aspects of their behaviour
• supplies information only about the specific "sampled" distribution ("tally")

No transport equation need be written to solve a transport problem by Monte Carlo; nevertheless, the equation satisfied by the probability density of particles is the integral transport equation.

69

Deterministic

• the discrete-ordinates method visualises the phase space as divided into many small boxes, with particles moving from one box to another. As the boxes get progressively smaller, particles moving from box to box take a differential amount of time to move a differential distance in space ----> approaches the integro-differential transport equation

Monte-Carlo

• Monte-Carlo transports particles between events (e.g. collisions) that are separated in space and time. Neither differential space nor differential time are inherent parameters of Monte-Carlo. The integral equation

does not have

time or space derivatives

70

Phase space

mean value $\bar{X}_n$ of the simulation results

Accuracy: a measure of how close the mean is to the TRUE PHYSICAL quantity (TRUE VALUE) being estimated.
• Sometimes called "the systematic error"
• Monte Carlo cannot directly estimate this
• Factors related to accuracy:
  – code accuracy (physics models, etc.)
  – problem modelling (geometry, source, etc.)
  – user errors/abuse

71

Phase space

mean value $\bar{X}_n$ of the simulation results; TRUE VALUE

Precision: the uncertainty in $\bar{X}_n$ caused by the statistical fluctuations in the sampled $x_j$'s.
• Monte-Carlo can estimate this:

$$R = \frac{S_{\bar{X}}}{\bar{X}_n} = \left[\frac{1}{n}\left(\frac{\overline{x^2}}{\bar{X}_n^2} - 1\right)\right]^{1/2}$$

72

PRINCIPLES

• Central Limit Theorem: the distribution of the sum of n independent, identically distributed random variables approaches a normal distribution:

$$\lim_{n\to\infty} P\left(a \le \frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}} \le b\right) = \frac{1}{\sqrt{2\pi}}\int_a^b e^{-t^2/2}\,dt$$

Conditions: the mean and variance must exist (i.e., be finite).

73

PRINCIPLES

• Confidence intervals: for large n, the probability

$$P\left(\bar{X}_n - \frac{\lambda\sigma}{\sqrt{n}} \le \mu \le \bar{X}_n + \frac{\lambda\sigma}{\sqrt{n}}\right)$$

is ≈ 0.68 for λ = 1 and ≈ 0.95 for λ = 2.

• Probability distributions. Discrete probability density function: n discrete events with probabilities $p_1, p_2, \ldots, p_n$, where $p_i$ = probability per event.

74

PRINCIPLES

• Probability distributions. Continuous probability density function: p(x) = probability per unit of x.

Normalisation: $\int p(x)\,dx = 1$. Cumulative probability function: $P(x) = \int_{-\infty}^{x} p(x')\,dx'$.

75

DEFINITIONS

true mean: μ
estimated mean: $\bar{x}$

variance of the population:

$$\sigma^2 = \big\langle (x - \langle x\rangle)^2 \big\rangle = \langle x^2\rangle - (\langle x\rangle)^2$$

standard deviation of the population: $\sigma = \sqrt{\sigma^2}$

76

DEFINITIONS

estimated variance of the population:

$$S^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2 = \frac{N}{N-1}\left(\overline{x^2} - \bar{x}^2\right) \approx \overline{x^2} - \bar{x}^2$$

where $\overline{x^2} = \frac{1}{N}\sum_{i=1}^{N} x_i^2$

DEFINITIONS

estimated variance of the mean:

$$S_{\bar{x}}^2 = \frac{S^2}{N}$$

estimated relative error:

$$R = \frac{S_{\bar{x}}}{\bar{x}} = \left[\frac{1}{N}\left(\frac{\overline{x^2}}{\bar{x}^2} - 1\right)\right]^{1/2} = \left[\frac{\sum_{i=1}^{N} x_i^2}{\left(\sum_{i=1}^{N} x_i\right)^2} - \frac{1}{N}\right]^{1/2}$$
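The relative error needs only the running sums of the scores and their squares, which is how it is typically accumulated during a run. A small Python sketch (illustrative):

```python
import math

def relative_error(scores):
    """Estimated relative error R = S_xbar / xbar from a list of history scores."""
    n = len(scores)
    s1 = sum(scores)                    # running sum of scores
    s2 = sum(x * x for x in scores)     # running sum of squared scores
    # max() guards against tiny negative round-off when all scores are equal
    return math.sqrt(max(0.0, s2 / (s1 * s1) - 1.0 / n))

# Identical scores carry no spread, so R = 0; an all-or-nothing tally does not.
r_equal = relative_error([2.0] * 100)
r_mixed = relative_error([0.0] * 50 + [4.0] * 50)
```

For the mixed case above R works out to 0.1, entirely from the dispersion of the scores.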

78

Reminder:

f(x) = probability density function (PDF) = the history score distribution.

If $\int f(x)\,dx = 1$, then:

mean (1st moment): $\mu = \int x\, f(x)\,dx$

variance (2nd central moment): $\sigma^2 = \int (x - \mu)^2 f(x)\,dx$

vov (variance of the variance): based on the 4th central moment, $\int (x - \mu)^4 f(x)\,dx$

79

Key of Symbols in Monte Carlo

W - particle weight; W_s - source weight
E - particle energy (MeV); E_D - energy deposited (MeV)
μ - cosine of the angle between the surface normal and the trajectory
A - surface area (cm²); V - cell volume (cm³)
l - track length (cm)
p(μ) - probability density function; μ - cosine of the angle between the particle trajectory and the detector
λ_s - total mean free path to the detector
R - distance to the detector
σ_T(E) - microscopic total cross section (barns); σ_f(E) - fission cross section
H(E) - heating number (MeV/collision)
Q - fission heating Q-value (MeV)
ρ_a - atom density (atoms/barn·cm); m - cell mass (g)

80

ESTIMATORS

• Definitions:
  – Particle speed: v (cm/s)
  – Particle density: n (particles/cm³), a function of position, energy, angle, and time
  – Flux (particles/cm²·s): φ(r, E, t) = v n
  – Fluence (particles/cm²): Φ(r, E) = ∫ φ dt
  – Cell fluence estimator (track length): Φ = Σ W l / V, summing weight times track length over the tracks in the cell

 

81

• Total heating: $F_{TH} = \rho_a \int \sigma_T(E)\, H(E)\, \phi\, dE$

• Fission heating: $F_{fH} = \rho_a \int \sigma_f(E)\, Q\, \phi\, dE$

• $k_{eff}$ (track-length estimator): $k_{eff} = \rho_a V \int \nu\, \sigma_f(E)\, \phi\, dE$

82

DEFINITIONS

Precision: the uncertainty in $\bar{X}_N$ caused by the statistical fluctuations in the sampled $x_i$'s.
• Monte Carlo codes can estimate this (relative error):

$$R = \frac{S_{\bar{X}}}{\bar{X}_N} = \left[\frac{1}{N}\left(\frac{\overline{x^2}}{\bar{X}_N^2} - 1\right)\right]^{1/2}$$

User-controlled factors related to precision:
• forward versus adjoint calculation
• estimator type (tally)
• variance reduction technique
• number of histories run

83

DEFINITIONS

Efficiency: a measure of how quickly the desired precision is achieved.
• Monte Carlo codes can estimate this (Figure of Merit, FOM):

$$FOM = \frac{1}{R^2\, T}, \qquad R^2 \propto \frac{1}{N}, \quad T \propto N \quad (T = \text{calc. time})$$

The FOM should therefore be constant with N; a larger FOM means a more efficient calculation.
• Factors affecting efficiency:
  – history-scoring efficiency
  – dispersions in nonzero history scores
  – computer time per history
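The invariance of the FOM with N is a direct consequence of the two scalings. A trivial sketch in Python (the numbers are made up for illustration):

```python
def figure_of_merit(rel_error, time_seconds):
    """FOM = 1 / (R^2 * T); roughly constant as the number of histories grows."""
    return 1.0 / (rel_error ** 2 * time_seconds)

# R ~ 1/sqrt(N) and T ~ N, so doubling the run leaves the FOM unchanged:
fom_1 = figure_of_merit(0.02, 100.0)                 # N histories
fom_2 = figure_of_merit(0.02 / 2 ** 0.5, 200.0)      # 2N histories
```

In practice a drifting FOM during a run is a warning sign that the tally has not yet sampled all important regions of phase space.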

84

VARIANCE REDUCTION MOTIVATION

$$R^2 = \frac{\sum_{i=1}^{N} x_i^2}{\left(\sum_{i=1}^{N} x_i\right)^2} - \frac{1}{N}$$

Let q be the fraction of histories with nonzero score ($x_i \ne 0$). Splitting the error into contributions from the nonzero scores and from the zero scores,

$$R^2 = R_{non}^2 + R_{eff}^2, \qquad R_{non}^2 = \frac{\sum_{x_i \ne 0} x_i^2}{\left(\sum x_i\right)^2} - \frac{1}{qN}, \qquad R_{eff}^2 = \frac{1-q}{qN}$$

$R_{eff}^2$ can be a significant part of the relative error. Variance reduction techniques should strive to equalize the $x_i$ scores as well as reduce the fraction of zero scores.

85

VARIANCE REDUCTION TECHNIQUE

Technique                               Type
Time and energy cutoffs                 T
Weight window generator                 P
Energy splitting and Russian roulette   P
Forced collisions                       M
Exponential transform                   D

86

TYPES OF VARIANCE REDUCTION

(T) Truncation methods: truncate parts of phase space that do not contribute significantly to the solution.

(P) Population control methods: use particle splitting and Russian roulette to control the number of samples taken in various regions of phase space.

(M) Modified sampling methods: alter the statistical sampling of a problem to increase the "amount of information" squeezed from each trajectory. Sampling is done from distributions that send particles in desired directions or into other desired regions of phase space (such as time or energy), or that change the location or type of collision.

(D) Partially deterministic methods: circumvent part of the normal random-walk process (e.g. the next-event estimator).

87

VARIANCE REDUCTION TECHNIQUE

Implicit Capture

- a splitting process where the particle is split into absorbed weight (which is not followed) and surviving weight.

Applied after each collision. The surviving weight is given by:

W' = W \left( 1 - \frac{\Sigma_a}{\Sigma_T} \right)

\Sigma_a = absorption cross section of the collision nuclide
\Sigma_T = total cross section of the collision nuclide

Generally this decreases R_eff faster than it increases T.
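The surviving-weight update above is just a multiplication per collision. A minimal sketch (the function name and the cross-section values are illustrative, not from any particular code):

```python
def implicit_capture(weight, sigma_a, sigma_t):
    """Survival biasing: instead of killing the particle with probability
    Sigma_a/Sigma_t at a collision, always survive with the weight
    reduced by that absorbed fraction."""
    return weight * (1.0 - sigma_a / sigma_t)

w = 1.0
for _ in range(3):   # three collisions with Sigma_a/Sigma_t = 0.3
    w = implicit_capture(w, sigma_a=0.3, sigma_t=1.0)
print(w)             # 0.7**3 ≈ 0.343, up to float rounding
```

Because the weight decays geometrically instead of the history terminating at random, histories with ever-smaller weights accumulate, which is why implicit capture is normally paired with a weight cutoff.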

88

VARIANCE REDUCTION TECHNIQUE

Weight cutoff

- Russian roulette is played if a particle's weight drops below a user-specified weight cutoff. The particle is either killed or its weight is increased.

Unlike other cutoffs, weight cutoff is NOT biased.

Applied when W < WC2 \cdot R_j. Survival probability and surviving weight are given by:

P(\text{survive}) = \frac{W}{WC1 \cdot R_j}, \qquad W' = WC1 \cdot R_j

R_j = ratio of the source-cell importance to the cell-j importance.
This decreases R_non faster than it increases T.
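The weight-cutoff roulette can be sketched as below. The parameter names WC1/WC2/R_j follow the text; the default values are illustrative, and the check at the end confirms that the game preserves the expected weight (i.e. it is unbiased).

```python
import random

def weight_cutoff(w, rj, wc1=0.5, wc2=0.25, rng=random.random):
    """Russian roulette at the weight cutoff: if w < wc2*rj, survive with
    probability w/(wc1*rj) at weight wc1*rj, otherwise kill (weight 0)."""
    if w >= wc2 * rj:
        return w                 # above the cutoff: unchanged
    if rng() < w / (wc1 * rj):
        return wc1 * rj          # survivor's increased weight
    return 0.0                   # killed

random.seed(4)
survivors = [weight_cutoff(0.05, rj=1.0) for _ in range(100_000)]
# Unbiased: mean outcome = (0.05/0.5) * 0.5 = 0.05, the original weight.
print(sum(survivors) / len(survivors))
```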

89

VARIANCE REDUCTION TECHNIQUE

Geometry splitting and Russian roulette -

particles in important regions are increased in number (or weight) for better sampling and decreased in number in other regions.

Applied at surface crossing.

If the importance ratio n (new cell to previous cell) is an integer ≥ 2, the particle is split into n particles, each with weight W/n. If the ratio is less than one, Russian roulette is played.

This decreases R_eff faster than it increases T.

SPLITTING

90

[Figure: a particle of weight w crossing from a lower- to a higher-importance region is split into n = R_new / R_previous particles, each of weight w/n.]

91

RUSSIAN ROULETTE

Bang!

[Figure: a particle crossing from a higher- to a lower-importance region is rouletted.]

With cutoff weight w_cut, Russian roulette applies when

w < w_{cut} \cdot R_j, \qquad R_j = \text{ratio of source-cell importance to cell-}j\text{ importance}

survival probability: \frac{w}{w_{cut} R_j} \qquad new weight: w_{cut} R_j
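Geometry splitting and its Russian roulette counterpart at a surface crossing can be sketched together. This is a toy illustration (function name and importance values are assumptions): the list returned holds the weights of all particles continuing into the new cell.

```python
import random

def cross_surface(w, imp_new, imp_old, rng=random.random):
    """Splitting / Russian roulette at a surface crossing: split when the
    importance rises, roulette when it falls, keeping the expected total
    weight equal to w in both cases."""
    n = imp_new / imp_old
    if n >= 2 and n == int(n):
        return [w / n] * int(n)          # split into n particles of weight w/n
    if n < 1:
        return [w / n] if rng() < n else []  # survive with probability n
    return [w]                           # comparable importance: unchanged

random.seed(5)
print(cross_surface(1.0, imp_new=4, imp_old=1))  # [0.25, 0.25, 0.25, 0.25]
```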

92

VARIANCE REDUCTION TECHNIQUE

General source biasing

- allows the production of more source particles, with suitably reduced weights, in the more important ranges of each variable.

Most common variables: angle, energy, position.
This decreases R_non and R_eff faster than it increases T.

93

The END

94

EXPONENTIAL TRANSFORM

\Sigma_t^* = \Sigma_t (1 - p\mu)

where
\Sigma_t^* = artificially adjusted total cross section
\Sigma_t = true total cross section
p = the stretching parameter
\mu = cosine of the angle between the particle direction and the stretching direction

Requirements for preserving the expected collided and uncollided weights:

collided:   w_c \, \Sigma_t^* e^{-\Sigma_t^* s} = \Sigma_t e^{-\Sigma_t s}
            \Rightarrow\; w_c = \frac{\Sigma_t e^{-\Sigma_t s}}{\Sigma_t^* e^{-\Sigma_t^* s}} = \frac{e^{-p\mu\Sigma_t s}}{1 - p\mu}

uncollided: w_s \, e^{-\Sigma_t^* s} = e^{-\Sigma_t s}
            \Rightarrow\; w_s = \frac{e^{-\Sigma_t s}}{e^{-\Sigma_t^* s}} = e^{-p\mu\Sigma_t s}
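The transform can be exercised numerically: sample the flight distance from the stretched density \Sigma_t^* e^{-\Sigma_t^* s} and apply the collided-weight correction w_c, and weighted averages of the true distribution are preserved. A sketch under illustrative parameter values (function name and values are assumptions):

```python
import math
import random

def stretched_flight(sigma_t, p, mu, rng=random.random):
    """Sample a distance from Sigma_t* exp(-Sigma_t* s), with
    Sigma_t* = Sigma_t (1 - p*mu), and return (s, collided-weight
    correction w_c = exp(-p*mu*Sigma_t*s) / (1 - p*mu))."""
    sigma_star = sigma_t * (1.0 - p * mu)
    s = -math.log(1.0 - rng()) / sigma_star   # inverse-CDF sampling
    w_c = math.exp(-p * mu * sigma_t * s) / (1.0 - p * mu)
    return s, w_c

random.seed(6)
# The weighted mean of the sampled distances reproduces the true mean
# free path 1/Sigma_t, even though sampling is stretched (p*mu = 0.6).
n = 200_000
tot = sum(s * w for s, w in (stretched_flight(1.0, 0.6, 1.0) for _ in range(n)))
print(tot / n)  # ≈ 1/Sigma_t = 1.0
```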

95

VARIANCE REDUCTION CAUTIONS

"I could have it done in a much more complicated way," said the Red Queen, immensely proud.

....Lewis Carroll

Be cautious:
• the advantage of one variance reduction technique may cancel another
• balance user time with computation time
• analyse the results carefully
• make cells small enough that the ratio of neighbouring importances does not exceed a factor of 8-10

96

Monte Carlo and Transport Equation

Boltzmann Transport Equation - time-independent, linear. A general integral form:

\psi(r, v) = \int \left[ \int \psi(r', v') \, C(v' \to v, r') \, dv' + Q(r', v) \right] T(r' \to r, v) \, dr'

where
\psi(r, v) = particle collision density
Q(r', v) = source term
C(v' \to v, r') = collision kernel (change of velocity at fixed position)
T(r' \to r, v) = transport kernel (change of position at fixed velocity)

Angular and scalar flux:

\Phi(r, v) = \psi(r, v) / \Sigma_t(r, v) \qquad \text{(angular flux)}
\phi(r, v) = \int \Phi(r, v) \, d\Omega \qquad \text{(scalar flux)}

97

Monte Carlo and Transport Equation

Source term for the Boltzmann equation

Q(r, v) = S(r, v) \qquad \text{(fixed source)}

Q(r, v) = S(r, v) + \int \psi(r, v') F(v' \to v, r) \, dv' \qquad \text{(fixed source + fission)}

Q(r, v) = \frac{1}{k} \int \psi(r, v') F(v' \to v, r) \, dv' \qquad \text{(eigenvalue)}

where
S(r, v) = fixed source
F(v' \to v, r) = fission creation operator
k = eigenvalue (due to fission)

98

Photon Physics

• Coherent (Thomson) scattering + form factors
• Incoherent (Compton) scattering + form factors
• Pair production
• Photoelectric absorption and fluorescence
• Thick-target bremsstrahlung

99

Photon Physics Approximations

• Only K, L edges treated for photoelectric absorption
• Distance to collision sampled, not distance to scatter
• Thick/thin-target bremsstrahlung
• No distinction between pair and triplet production
• No anomalous scattering factors
• No photoneutrons

100

Surface flux

• Cell (track-length) flux estimator:

\bar{\Phi}_V = \frac{1}{V} \sum W d, \qquad d = \text{track length in the cell}

• Surface flux estimator (limit of a thin cell of thickness \delta \to 0, with d = \delta / |\mu|):

\bar{\Phi}_s = \lim_{\delta \to 0} \frac{1}{A\delta} \sum W \frac{\delta}{|\mu|} = \frac{1}{A} \sum \frac{W}{|\mu|}
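The track-length estimator above can be demonstrated on a toy problem: a monodirectional unit-weight source entering a 1-D purely absorbing slab, where the analytic flux per source particle is (1 - e^{-\Sigma_t L})/(\Sigma_t L). Everything here (function name, geometry, parameter values) is an illustrative assumption.

```python
import math
import random

def track_length_flux(n, sigma_t=1.0, slab=2.0, seed=7):
    """Track-length estimate of the scalar flux in a purely absorbing
    slab: score weight * track length inside the cell, then divide by
    the cell volume (slab thickness per unit area; W = 1 throughout)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s = -math.log(1.0 - rng.random()) / sigma_t  # distance to collision
        total += min(s, slab)                        # track length in the slab
    return total / (n * slab)

print(track_length_flux(100_000))  # ≈ (1 - e**-2)/2 ≈ 0.432
```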