Error Estimation (Monte Carlo Simulation): lecture notes for Mathematical Modeling and Simulation, Pakistan Institute of Engineering and Applied Sciences, Islamabad (PIEAS)

Monte Carlo Simulation

Lecture 16: Error Estimations

Course: NE-628

Dr. Nasir M Mirza
Department of Physics & Applied Mathematics,
Pakistan Institute of Engineering & Applied Sciences,
Room A-114, P.O. Nilore, Islamabad
email: [email protected]


Direct error estimation

Assume that x is a quantity we calculate during the course of a Monte Carlo simulation, i.e. a scoring variable, or simply a "score" or a "tally". The output of a Monte Carlo calculation is usually useless unless we can ascribe a probable error to it. The conventional approach to calculating the estimated error is as follows:

• Assume that the calculation calls for the simulation of N particle histories.

• Assign and accumulate the value x_i for the score associated with the i'th history, where 1 ≤ i ≤ N. Accumulate as well the square of the score, x_i^2, for the i'th history.

• Calculate the mean value of x:

\[ \langle x \rangle = \frac{1}{N} \sum_{i=1}^{N} x_i \]
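As a concrete illustration, here is a minimal Python sketch of this accumulation step, assuming a hypothetical run_history function that returns the score of one particle history (exponential scores are used purely as a stand-in for the real tally).

```python
import random

def run_history(rng):
    # Hypothetical score for one particle history; a stand-in for the
    # actual tally produced by the simulation.
    return rng.expovariate(1.0)

N = 100_000                     # number of particle histories
rng = random.Random(12345)

sum_x = 0.0                     # accumulates the scores x_i
sum_x2 = 0.0                    # accumulates the squared scores x_i^2
for i in range(N):
    x_i = run_history(rng)
    sum_x += x_i
    sum_x2 += x_i * x_i

mean_x = sum_x / N              # <x> = (1/N) * sum of the x_i
```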

Direct error estimation (continued)

• Estimate the variance associated with the distribution of the x_i:

\[ s_x^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \langle x \rangle)^2 = \frac{1}{N-1} \left( \sum_{i=1}^{N} x_i^2 - N \langle x \rangle^2 \right) \]

• The estimated variance of <x> is the standard variance of the mean:

\[ s^2(\langle x \rangle) = \frac{s_x^2}{N} \]

It is the error in <x> we are seeking, not the "spread" of the distribution of the x_i. Report the final result as x = <x> ± s(<x>).

Remarks:

The true mean and variance are not available to us in general; we must estimate them. The estimated mean <x> calculated in the equation above is an estimate of the true mean, and the estimated variance s_x^2 calculated in the equation above is an estimate of the true variance σ_x^2.
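Continuing the accumulation sketch above, the estimated variance and the standard error of the mean follow directly from the accumulated sums; the function below is a minimal illustration using the sum-of-squares form of the variance formula.

```python
def direct_error(sum_x: float, sum_x2: float, N: int) -> tuple[float, float]:
    """Return (<x>, s(<x>)) from the accumulated sums of x_i and x_i^2."""
    mean_x = sum_x / N
    var_x = (sum_x2 - N * mean_x ** 2) / (N - 1)   # s_x^2
    s_mean = (var_x / N) ** 0.5                    # s(<x>) = sqrt(s_x^2 / N)
    return mean_x, s_mean

# With the accumulators from the previous sketch:
# mean_x, s_mean = direct_error(sum_x, sum_x2, N)
# print(f"x = {mean_x:.5f} +/- {s_mean:.5f}")
```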


Batch statistics error estimation

In many cases, the estimation of means and variances using the methods described in the previous section is not feasible: a score may be a complicated object, or there may be many geometrical volume elements to consider, so it may require a lot of effort to stop after each history and compute the x_i's and x_i^2. Direct error estimation should be considered first; if it is not feasible, an alternative that is nearly as good is batch statistics error estimation. We present the procedure here.

Split the N histories into n statistical batches of w = N/n histories each. Note that n must be "large enough"; a standard choice is n = 30. The quantity accumulated for the j'th batch is called sum_j:

\[ \mathrm{sum}_j = \sum_{i=1}^{w} x_{ij} \]

Then compute the mean value of x:

\[ \langle x \rangle = \frac{1}{N} \sum_{j=1}^{n} \mathrm{sum}_j \]

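A minimal sketch of the batching step, again with a hypothetical run_history score function; only one running sum per batch is stored, so no per-history bookkeeping is needed.

```python
import random

def run_history(rng):
    # Hypothetical score for one particle history (placeholder for the real tally).
    return rng.expovariate(1.0)

n = 30                          # number of statistical batches (standard choice)
w = 1_000                       # histories per batch, w = N / n
N = n * w                       # total number of histories
rng = random.Random(2012)

sums = []                       # sums[j] holds sum_j for the j'th batch
for j in range(n):
    batch_sum = 0.0
    for i in range(w):
        batch_sum += run_history(rng)
    sums.append(batch_sum)

mean_x = sum(sums) / N          # <x> = (1/N) * sum over batches of sum_j
```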

Batch statistics error estimation (continued)

Estimate the variance associated with the distribution of the batch means x_j = sum_j / w:

\[ s_x^2 = \frac{1}{n-1} \sum_{j=1}^{n} (x_j - \langle x \rangle)^2 = \frac{1}{n-1} \left( \sum_{j=1}^{n} x_j^2 - n \langle x \rangle^2 \right) \]

The estimated variance of <x> is the standard variance of the mean:

\[ s^2(\langle x \rangle) = \frac{s_x^2}{n} \]

Report the final result as x = <x> ± s(<x>).

Remarks:

We use the above equations with n fixed (at, say, 30) because this gives a reasonable estimate of the error in <x>. Any large number will do, as long as we are within the range of applicability of the central limit theorem. There is some evidence that the calculated statistic depends weakly on the choice of n; therefore, it is important to report how your statistics were done when you publish your Monte Carlo results.
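Continuing the batching sketch, a small helper that turns the batch sums into batch means x_j, the variance s_x^2, and the standard error of the mean; the function name is, of course, just for illustration.

```python
def batch_error(sums: list[float], w: int) -> tuple[float, float]:
    """Return (<x>, s(<x>)) from per-batch sums, each accumulated over w histories."""
    n = len(sums)
    x_j = [s / w for s in sums]                     # batch means x_j = sum_j / w
    mean_x = sum(x_j) / n                           # equals (1/N) * sum over j of sum_j
    var_x = sum((x - mean_x) ** 2 for x in x_j) / (n - 1)   # s_x^2 over the x_j
    s_mean = (var_x / n) ** 0.5                     # s(<x>) = sqrt(s_x^2 / n)
    return mean_x, s_mean

# With the batch sums from the previous sketch:
# mean_x, s_mean = batch_error(sums, w)
# print(f"x = {mean_x:.5f} +/- {s_mean:.5f}")
```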


Combining Errors of Independent Runs

For m independent Monte Carlo runs, it is easy to derive the following relation:

\[ \langle x \rangle = \sum_{k=1}^{m} \frac{N_k}{N} \langle x \rangle_k \]

where <x>_k is the average value of x for the k'th run and N_k is the number of histories in the k'th run. The total number of histories is:

\[ N = \sum_{k=1}^{m} N_k \]

Then, assuming first-order propagation of independent errors, it is also easy to derive that

\[ s^2_{\langle x \rangle} = \sum_{k=1}^{m} \left( \frac{N_k}{N} \right)^2 s^2_{\langle x \rangle_k} \]

When m = 2:

\[ N = N_1 + N_2, \qquad \langle x \rangle = \frac{N_1}{N} \langle x \rangle_1 + \frac{N_2}{N} \langle x \rangle_2, \qquad s^2_{\langle x \rangle} = \left( \frac{N_1}{N} \right)^2 s^2_{\langle x \rangle_1} + \left( \frac{N_2}{N} \right)^2 s^2_{\langle x \rangle_2} \]

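A short sketch of these combination formulas, assuming each run has been summarized by its number of histories N_k, its mean <x>_k, and its error s(<x>_k); the numbers in the example are purely illustrative.

```python
def combine_runs(runs: list[tuple[int, float, float]]) -> tuple[float, float]:
    """Combine m independent runs given as (N_k, mean_k, s_mean_k) tuples,
    using history-weighted means and first-order propagation of independent errors."""
    N = sum(Nk for Nk, _, _ in runs)                      # N = sum of the N_k
    mean_x = sum(Nk * mk for Nk, mk, _ in runs) / N       # <x> = sum of (N_k/N) <x>_k
    var_mean = sum((Nk / N) ** 2 * sk ** 2 for Nk, _, sk in runs)
    return mean_x, var_mean ** 0.5

# m = 2 example with illustrative values:
mean_x, s_mean = combine_runs([(50_000, 1.002, 0.004), (150_000, 0.998, 0.002)])
print(f"x = {mean_x:.4f} +/- {s_mean:.4f}")
```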

The Error

Remarks:

Thus, by sampling the exponential three times and adding the three values, one can sample this pdf. In general, the sum of n such exponentially distributed variables has the pdf

\[ w(z) = \frac{a^n z^{n-1}}{(n-1)!} e^{-az} \]
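As an illustration of that remark, a sketch that samples w(z) by summing n independent exponential variates of rate a, each drawn by inverse-transform sampling.

```python
import math
import random

def sample_sum_of_exponentials(n: int, a: float, rng: random.Random) -> float:
    """Sample z with pdf w(z) = a^n z^(n-1) e^(-a z) / (n-1)! by adding
    n independent exponential variates of rate a."""
    z = 0.0
    for _ in range(n):
        u = rng.random()
        z += -math.log(1.0 - u) / a   # inverse CDF of the exponential pdf a e^(-a z)
    return z

rng = random.Random(7)
samples = [sample_sum_of_exponentials(3, 2.0, rng) for _ in range(10_000)]
print(sum(samples) / len(samples))    # sample mean should be close to n/a = 1.5
```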

This method of combining errors effectively increases the value of n, the number of statistical batches used in the calculation. Since the calculated statistics are thought to depend only weakly on n, it is preferable (if only marginally so, for the sake of consistency) to combine the x_i's (the raw data) into the standard number of statistical batches. This is easy to do by initializing the data arrays to the results of the previous run before the start of a new run, as sketched below.
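A minimal sketch of that bookkeeping, assuming the batch sums are written to a file at the end of each run and read back before the next one; the file name is hypothetical.

```python
import json
import os

STATE_FILE = "mc_batch_sums.json"      # hypothetical file holding the previous run's data

def load_previous_sums() -> list[float]:
    # Initialize the data arrays to the results of the previous run, if any.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return []

def save_sums(sums: list[float]) -> None:
    # Store the accumulated batch sums so the next run can continue from them.
    with open(STATE_FILE, "w") as f:
        json.dump(sums, f)

# Before a new run: sums = load_previous_sums(); keep accumulating into it,
# so the combined raw data stays in the standard number of statistical batches.
```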
