
Lecture 9: Nasir M Mirza

1 6/23/2012

**Monte Carlo Simulation**

Lecture 10: Rejection Techniques

**Course: NE-628**

Dr. Nasir M Mirza
Room A-114, Department of Physics & Applied Mathematics, Pakistan Institute of Engineering & Applied Sciences, P.O. Nilore, Islamabad
email: [email protected]

Docsity.com


**Rejection method**

While the invertible cumulative probability distribution function method is always possible, at least in principle, it is often impractical to calculate the inverse because it may be exceedingly complicated or contain mathematical structure that is difficult to control. Another approach is to use the rejection method.

In recipe form, the procedure is this:

1. Scale the probability distribution function by its maximum value, obtaining a new distribution function, f(x) = p(x)/p(xmax), which has a maximum value of 1, occurring at x = xmax.

Clearly, this method works only if the probability distribution function is not infinite anywhere and if it is not prohibitively difficult to determine the location of the maximum value. If the maximum cannot be determined easily, then overestimating it will work as well, but less efficiently.


Rejection Techniques

This is a technique for sampling any kind of probability distribution:

- A trial value for the random variable is selected and proposed.
- This value is subjected to one or more tests (involving one or more random variables).
- The value may be accepted or rejected.
- If it is rejected, the cycle of choosing and testing is repeated until an acceptance takes place.

An important property is that the normalization of the density need not be known explicitly. The disadvantage is that the method may have low efficiency.


Rejection Techniques

First find the maximum value of the pdf g(x). Then normalize to form the following quantity:

    h(x) = g(x)/gmax

1. Choose a random number x uniformly in the range [a, b].

2. Choose a second uniformly distributed random number y in [0, 1].

3. If h(x) is greater than or equal to y, then accept the value of x; otherwise return to step 1.

If the point chosen falls below the curve, it is part of that area and is accepted. If it falls above the curve, it is not part of the area and is rejected.

[Figure: normalized g(x) versus x-value, with accepted points falling below the curve and rejected points above it.]
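The three steps above can be sketched in Python (an illustrative sketch, not part of the lecture's MATLAB material; the pdf p(x) = 2x on [0, 1] is an assumed test case):

```python
import random

def rejection_sample(pdf, a, b, pdf_max):
    """Sample one value from pdf on [a, b] by rejection.

    pdf_max must be at least the maximum of pdf on [a, b];
    an overestimate still works, only less efficiently.
    """
    while True:
        x = a + (b - a) * random.random()   # step 1: trial value in [a, b]
        y = random.random()                 # step 2: second uniform number
        if pdf(x) / pdf_max >= y:           # step 3: accept if h(x) >= y
            return x

# illustrative pdf (an assumption, not from the lecture): p(x) = 2x on [0, 1]
random.seed(0)
samples = [rejection_sample(lambda x: 2.0 * x, 0.0, 1.0, 2.0)
           for _ in range(20000)]
mean = sum(samples) / len(samples)          # theoretical mean is 2/3
```

Note that only the ratio pdf(x)/pdf_max enters the test, which is why the normalization of the density never needs to be known.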


**Rejection method**

The efficiency of the rejection technique is defined as the fraction of trial points accepted:

    efficiency = ∫_a^b p(x) dx / [(b − a) p(xmax)] = 1 / [(b − a) p(xmax)]

where the last equality holds when p(x) is normalized on [a, b].

Example 1: Rejection Technique

Let us consider the pdf:

    f(x) = (4/π) · 1/(1 + x²),  0 ≤ x ≤ 1.

It is a monotonically decreasing function on the (0, 1) range and its maximum value is 4/π, attained at x = 0.

[Figure: f(x) versus x-value on [0, 1].]

The algorithm to sample such a pdf using the rejection technique is the following:

1. Generate ξ1 uniform on [0, 1] and set xo = ξ1.

2. Generate ξ2 uniform on [0, 1].

3. If ξ2 ≤ 1/(1 + xo²), accept xo; else repeat step 1.
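These steps can be sketched in Python (an illustrative translation, not the lecture's MATLAB; the sample mean is compared against the exact value E[x] = (2 ln 2)/π for this pdf):

```python
import random
from math import log, pi

def sample_f():
    """Sample f(x) = (4/pi)/(1 + x^2) on [0, 1] by rejection."""
    while True:
        x = random.random()                          # step 1: xo = xi_1
        if random.random() <= 1.0 / (1.0 + x * x):   # steps 2-3
            return x

random.seed(2)
n = 100000
mean = sum(sample_f() for _ in range(n)) / n
exact = 2.0 * log(2.0) / pi          # E[x] for this pdf, ~0.4413
```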


Rejection Technique

Example 1: continued

The algorithm can be rephrased as:

1. Generate ξ1 uniform on [0, 1] and set xo = ξ1.

2. Generate ξ2 uniform on [0, 1]. If ξ2 > 1/(1 + xo²), then repeat step 1; otherwise accept xo.

Another, more concise way is:

1. Generate ξ1 and ξ2, both uniform on [0, 1].

2. If ξ2(1 + ξ1²) > 1, repeat step 1; else accept xo = ξ1.


**Example 1: Using MATLAB**

% Program name: rejection_tech1.m
% Sampling from a given pdf using the rejection technique;
% the function is 4/(pi*(1+x*x)), 0 < x < 1.
% Random numbers come from the rand function on [0, 1].
N = 90000; max_bins = 80;
rand('state', 0)                  % initialize the generator
for j = 1:max_bins                % initialize
    ibin(j) = 0; xmid(j) = 0; ntheory(j) = 0;
end
i = 0;
while (i < N)                     % Monte Carlo loop: collect N accepted values
    x0 = rand; x1 = rand;
    gg = 1.0/(1 + x0*x0);         % h(x) = f(x)/f_max
    if (x1 <= gg)                 % accept the trial value
        i = i + 1;
        g(i) = x0;
    end
end
max_f = max(g);                   % maximum of the sampled values
size_bin = max_f/max_bins;        % size of bin
for kk = 1:N                      % histogram the accepted values
    for ii = 1:max_bins
        l_limit = (ii-1)*size_bin;
        u_limit = ii*size_bin;
        if ((g(kk) > l_limit) & (g(kk) <= u_limit))
            jj = ii;
        end
    end
    ibin(jj) = ibin(jj) + 1;      % one more for the bin
end


**Example 1: Rejection Technique**

% The last part of the program - continued
for k = 1:max_bins
    xmid(k) = (k - 0.5)*size_bin;        % compute bin mid-point
    fx = 4.0/(pi*(1 + xmid(k)^2));       % compute the expected pdf
    ntheory(k) = N*fx*size_bin;          % number of points by theory
end
plot(xmid, ibin, 'r+', xmid, ntheory, 'b.')

The results were obtained with N = 90000 and max_bins = 80.
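The same histogram-versus-theory comparison can be sketched in Python (an illustrative translation of the two MATLAB fragments; here the bins span [0, 1] exactly rather than [0, max(g)]):

```python
import random
from math import pi

# Sample N values from f(x) = (4/pi)/(1+x^2) on [0, 1] by rejection,
# then compare bin counts with the theoretical expectation.
random.seed(6)
N, max_bins = 90000, 80
samples = []
while len(samples) < N:
    x0, x1 = random.random(), random.random()
    if x1 <= 1.0 / (1.0 + x0 * x0):
        samples.append(x0)

size_bin = 1.0 / max_bins
ibin = [0] * max_bins
for s in samples:
    ibin[min(int(s / size_bin), max_bins - 1)] += 1

# expected number of points per bin: N * f(x_mid) * bin width
ntheory = [N * (4.0 / (pi * (1.0 + ((k + 0.5) * size_bin) ** 2))) * size_bin
           for k in range(max_bins)]
```

The counts in `ibin` fluctuate around `ntheory` with roughly Poisson scatter, which is what the red crosses and blue dots show in the lecture's plot.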


**Example 2: Rejection Technique**

Consider a singular probability distribution function:

    f(x) = (2/π) · 1/√(1 − x²),  0 ≤ x < 1.

This function is unbounded, so the straightforward rejection technique is not appropriate. Instead, we write:

    f(x) = (2/π) · 1/(√(1 − x) √(1 + x)),  0 ≤ x < 1.

Let

    g(x) = 1/(2√(1 − x));

then

    f(x)/g(x) = (4/π) · 1/√(1 + x),  0 ≤ x ≤ 1.

This ratio now has a least upper bound equal to 4/π, and it can be sampled using the rejection technique.


**Example 2: Rejection Technique**

Then the choice for the test function is:

    h(x) = [f(x)/g(x)] / (4/π) = 1/√(1 + x),  0 ≤ x ≤ 1.

STEP 1: First generate two uniform random numbers ξ1 and ξ2. Then sample x from the sampling function

    g(x) = 1/(2√(1 − x)),

using its cdf G(x) = 1 − √(1 − x):

    ξ1 = 1 − √(1 − x)  so that  x = 1 − (1 − ξ1)²,

or, since 1 − ξ1 is also uniform on [0, 1],

    x = 1 − ξ1².

STEP 2: If

    ξ2 ≤ 1/√(1 + x),

accept the x; else reject it and repeat step 1.

Step 2 can also be written as:

    ξ2²(1 + x) ≤ 1.
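The two steps can be sketched in Python (an illustrative translation; the sample mean is checked against the exact value E[x] = 2/π for this pdf):

```python
import random
from math import pi

def sample_singular():
    """Sample f(x) = (2/pi)/sqrt(1 - x^2) on [0, 1) via the factored form."""
    while True:
        xi1 = random.random()
        x = 1.0 - xi1 * xi1                          # STEP 1: invert cdf of g(x)
        if random.random() ** 2 * (1.0 + x) <= 1.0:  # STEP 2: xi2^2 (1+x) <= 1
            return x

random.seed(3)
n = 100000
mean = sum(sample_singular() for _ in range(n)) / n
exact = 2.0 / pi                 # E[x] for this pdf, ~0.6366
```

Although f(x) itself is unbounded at x = 1, every quantity in the sampler is finite, which is the whole point of the factorization.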


**Example 2: Rejection Technique**

% Program name: rejection_tech2.m
% Sampling from the pdf 2/(pi*sqrt(1-x*x)), 0 < x < 1.
% Random numbers come from the rand function on [0, 1].
N = 50000; max_bins = 1000;
rand('state', 0)                  % initialize the generator
for j = 1:max_bins                % initialize
    ibin(j) = 0; xmid(j) = 0; ntheory(j) = 0;
end
i = 0;
while (i < N)                     % Monte Carlo loop: collect N accepted values
    r1 = rand;
    x = 1.0 - r1^2;               % sample x from g(x) via its cdf
    r2 = rand;
    gg = r2*r2*(1.0 + x);         % rejection test: accept if r2^2 (1+x) <= 1
    if (gg <= 1.0)
        i = i + 1;
        g(i) = x;
    end
end
max_f = max(g);                   % maximum of the sampled values
size_bin = max_f/max_bins;        % size of bin
for kk = 1:N                      % histogram the accepted values
    for ii = 1:max_bins
        l_limit = (ii-1)*size_bin;
        u_limit = ii*size_bin;
        if ((g(kk) > l_limit) & (g(kk) <= u_limit))
            jj = ii;
        end
    end
    ibin(jj) = ibin(jj) + 1;      % one more for the bin
end


**Results for Example 2**

The results were obtained with N = 50000 and max_bins = 1000.


**Mixed methods**

As a next topic in elementary sampling theory we consider the "mixed method", a combination of the previous two methods.

Imagine that the probability distribution function is too difficult to integrate and invert, ruling out the direct approach without a great deal of numerical analysis, and that it is "spiky", rendering the rejection method inefficient. Many probability distributions have this objectionable character.

However, imagine that the probability distribution function can be factored as follows:

    p(x) = f(x) g(x)

where f(x) is an invertible function that contains most of the "spikiness", and g(x) is relatively flat but contains most of the mathematical complexity. The recipe is as follows:

1. Normalize f(x), producing f(x)/fmax, such that

    ∫_a^b (f(x)/fmax) dx = 1.

2. Normalize g(x), producing g(x)/gmax, such that this normalized function remains less than or equal to one in the range [a, b].

3. Using the direct method described previously, choose an x using normalized f(x) as the probability distribution function.

4. Using this x, apply the rejection technique with normalized g(x). That is, choose a random number, r, uniformly in the range [0, 1]. If r is less than or equal to normalized g(x), accept x; otherwise go back to step 3.

**Remarks:** With some effort, any mathematically complex, spiky function can be factored in this manner. The art boils down to the appropriate choice of normalized f(x) that leaves a normalized g(x) that is nearly flat.
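The four-step recipe can be sketched in Python (an illustrative example, not from the lecture: p(x) proportional to x(1 + x) on [0, 1], factored as f(x) = 2x, which is invertible, times g(x) = (1 + x)/2, which is flat and already bounded by 1):

```python
import random
from math import sqrt

def sample_mixed():
    """Sample p(x) proportional to x*(1 + x) on [0, 1] by the mixed method."""
    while True:
        # step 3: direct method for f(x) = 2x (cdf is x^2, so x = sqrt(xi))
        x = sqrt(random.random())
        # step 4: rejection test against normalized g(x) = (1 + x)/2 <= 1
        if random.random() <= (1.0 + x) / 2.0:
            return x

random.seed(4)
n = 100000
mean = sum(sample_mixed() for _ in range(n)) / n   # exact E[x] = 7/10
```

Because g(x) here stays between 1/2 and 1, at least half of all trials are accepted, illustrating why a nearly flat g(x) gives a high efficiency.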


% Program name: reject1.m
% Sampling from the function p(x) = 1/(x*log(2)) on [1, 2]
% by the rejection technique.
const = log(2); max_bins = 200; N = 30000;
rand('state', 0)                  % initialize the generator
for j = 1:max_bins                % initialize
    ibin(j) = 0; xmid(j) = 0; ntheory(j) = 0;
end
i = 0;
while (i < N)                     % Monte Carlo loop: collect N accepted values
    r1 = rand; r2 = rand;
    % direct-method alternative (commented out):
    % ff(i) = exp(r1*const);
    y = 1.0/(r1 + 1.0);           % h(x) = p(x)/p_max = 1/x with x = r1 + 1
    if (r2 <= y)                  % rejection test
        i = i + 1;
        ff(i) = r1 + 1.0;
    end
end
max_ff = 2.0;                     % upper end of the range
size_bin = (max_ff - 1.0)/max_bins;
for kk = 1:N                      % histogram the accepted values
    for ii = 1:max_bins
        l_limit = (ii-1)*size_bin + 1.0;
        u_limit = ii*size_bin + 1.0;
        if ((ff(kk) > l_limit) & (ff(kk) <= u_limit))
            jj = ii;
        end
    end
    ibin(jj) = ibin(jj) + 1;      % one more for the bin
end
for k = 1:max_bins
    xmid(k) = (k - 0.5)*size_bin + 1.0;   % compute bin mid-point
    fx = 1.0/(const*xmid(k));             % compute the expected pdf
    ntheory(k) = N*fx*size_bin;           % number of points by theory
end
plot(xmid, ibin, 'r+', xmid, ntheory, 'b.')
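The same sampler can be sketched in Python (an illustrative translation of the program above; the sample mean is checked against the exact value E[x] = 1/ln 2 for p(x) = 1/(x ln 2) on [1, 2]):

```python
import random
from math import log

def sample_reciprocal():
    """Sample p(x) = 1/(x ln 2) on [1, 2] by rejection; p_max is at x = 1."""
    while True:
        x = random.random() + 1.0        # trial value in [1, 2]
        if random.random() <= 1.0 / x:   # h(x) = p(x)/p_max = 1/x
            return x

random.seed(5)
n = 100000
mean = sum(sample_reciprocal() for _ in range(n)) / n
exact = 1.0 / log(2.0)               # E[x] = 1/ln 2, ~1.4427
```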
