
Monte Carlo Methods

MAE 715 – Atomistic Modeling of Materials, N. Zabaras (4/21/2009)

Material for these lecture notes was compiled from the references below:

L. Zhigilei, MSE 524 (Virginia Tech)

MIT’s 3.320 course notes (Prof. G. Ceder)

D. Chandler, Introduction to Modern Statistical Mechanics

D.A. McQuarrie, Statistical Thermodynamics or Statistical Mechanics

D. Frenkel, Introduction to Monte Carlo Methods

D. Frenkel and B. Smit, Understanding Molecular Simulation

M.E.J. Newman and G.T. Barkema, Monte Carlo Methods in Statistical Physics

K. Binder and D.W. Heermann, Monte Carlo Simulation in Statistical Physics

K. Kremer, Entangled Polymers: From Universal Aspects to Structure Property Relations

J. Baschnagel et al., Monte Carlo Simulation of Polymers: Coarse-Grained Models



Molecular Dynamics versus Monte Carlo

In an MD simulation, one way to calculate certain properties is by tracking them over time. If you can calculate a property for a given microscopic state (e.g. the energy or the volume), you obtain its macroscopic value as an average (an integral over time) over the simulation. There are two issues:

First, that average only includes states that you can reach with the MD simulation. For example, since you simulate over a finite time, excitations that occur on long time scales but determine the average energy would not be included in the average. In this case, statistical sampling may be more efficient.

These averages only include states that occur on the time scale of the MD simulation.

Second, you may be interested in computing average properties without being interested in simulating the system dynamics. Then you can sometimes use simpler (less compute-intensive) techniques than MD: you can compute these averages by sampling.


Statistical sampling versus MD

Sampling may be useful not only when you care about the averages rather than the dynamics itself, but also in situations where you do not know the dynamics.

In MD, you basically assume that the atoms move according to Newton's equations of motion (so you know the dynamics). There are certain material models for which you have no idea about the dynamics (e.g. a spin model).

A spin model is a model of magnetic moments being oriented in space: In many cases, you don’t know the spin dynamics. If all you want to know is e.g. the average magnetic moment, there is no way of doing dynamics on the spins. You’ll actually have to sample them.


A time scale problem: inter-mixing

Let us consider the example of a binary mixture. You may want to know the average energy of this system.

To average the energy, the system would have to go through many configurations in the simulation

Diffusion is required

In any given configuration, let’s assume that we have a method to calculate the energy of the system. At high temperature, the atoms will simply hop around and give you some average energy.

What kind of time scale do you need to get that average right?


A time scale problem: inter-mixing

Let us estimate the diffusion constant D required to get a significant number of atom exchanges.

To get an idea of the kind of hopping rate you need, you can assume that the atoms perform a random walk. Then the mean square displacement is the number of jumps taken, N, times the jump distance squared (N a²).

Recall the relation between the diffusion constant and the mean square displacement: ⟨r²(t)⟩ = 2dDt, where d is the dimensionality.

Now we can relate the jump rate Γ = ∂N/∂t to the diffusion constant:

⟨r²(t)⟩ = N a² and ⟨r²(t)⟩ = 2dDt ⇒ Γ a² = 2dD


A time scale problem: inter-mixing

If you know how long your simulation can last, you know what jump rate you need to see atoms jump in that simulation. So you can relate that to the diffusivity you need.

We know that the jump rate over a barrier is some vibrational frequency ν (the frequency with which the atoms try to go up the hill) times a success factor, the Boltzmann factor e^(−Ea/kT).

⟨r²(t)⟩ = N a², Γ a² = 2dD ⇒ D = Γ a² / (2d)

Γ = ν e^(−Ea/kT), where Ea is the activation barrier and ν the attempt frequency.


Implications for the activation barrier

Let us assume that ν ~ 10¹³ Hz, and say we want a jump rate Γ of 10¹⁰ Hz (one jump per 100 picoseconds, or per 0.1 nanosecond).

This means that the success rate (Boltzmann factor) has to be greater than Γ/ν = 10⁻³: one out of a thousand attempts has to take the atom across the barrier.

So, if you ran a 10 nanosecond simulation, you would have 100 jumps per atom, which would be reasonable for some equilibration (but in many cases still not enough).
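As a quick sanity check of what this implies for the barrier height (a back-of-the-envelope sketch, not taken from the notes; the temperature is an assumed value), the condition Γ/ν ≥ 10⁻³ bounds the barrier at Ea ≤ kT ln(ν/Γ):

    import math

    k_B = 8.617e-5          # Boltzmann constant in eV/K
    nu = 1e13               # attempt frequency (Hz), as assumed above
    gamma = 1e10            # desired jump rate (Hz)
    T = 1000.0              # assumed temperature in K

    # exp(-Ea/kT) >= gamma/nu  =>  Ea <= kT * ln(nu/gamma)
    Ea_max = k_B * T * math.log(nu / gamma)
    print(f"Largest affordable barrier at {T:.0f} K: {Ea_max:.2f} eV")  # ~0.6 eV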


Activation barrier in solids and liquids

In a solid, barriers are anywhere, typically, from 0.5 to 2 eV. So you will have problems in solids (not enough hopping).

In liquids you will have much faster transport. If you simulate a liquid at 1000 K, you will have a tremendous amount of hopping, and thus no problem: jumps occur on the order of once per 100 picoseconds. From this you can deduce what the activation energy has to be.


Long term averages and Monte Carlo

If long-time averages are all you care about, and excitations of the system occur beyond the time scale of molecular dynamics, it may be better to use statistical sampling methods (Monte Carlo).

The goal is to obtain a sample of microscopic states that is statistically significant for the long-time averages.


Monte Carlo: The two key issues

Monte Carlo is NOT another form of dynamics; it is just a sampling method.

If you want to get a sample, there are two things you have to decide:

Which population do you sample from? How do you set constraints on the population of states that you sample from?

With what probability do you sample them? Should you sample them unbiased, for example?

The biggest mistake in Monte Carlo is made in defining the constraints on the states you sample. Those constraints come from the thermodynamic boundary conditions you impose on your system.


Monte Carlo: Issues to be discussed

How do macroscopic constraints (e.g. the environment) relate to the simulation approach?

Macroscopic conditions correspond to fixing the averages of microscopic quantities.

Example: how do you constrain your states to model adsorption of H in Pd? Does H sit on the surface of Pd or in the subsurface? How do you investigate this?

How do you go from a microscopic description to macroscopic variables?


Review of statistical mechanics and thermodynamics

There is a relation between the macroscopic thermodynamics and the microscopic statistical ensemble. The statistical ensemble is the group of states over which you sample.

Macroscopic conditions (constant volume, temperature, number of particles, …) translate to the microscopic world as boundary conditions (constraints).

The microscopic system is defined by the thermodynamic variables that are held constant in the macroscopic world, e.g. (E,V,N) or (T,V,N).

The probability distribution for the microscopic system and its Hamiltonian are related to the macroscopic free energy function.


Review of statistical mechanics and thermodynamics

Any fixed thermodynamic variable sets a constraint on how you can sample. The constraints with which you sample come from the extensive thermodynamic variables.

Extensive thermodynamic variables are quantities that scale with system size (e.g. volume, number of atoms of a given type, etc.).

Intensive variables are the conjugates of the extensive variables and do not scale with size (temperature, pressure, chemical potential, etc.).

The probability with which you sample depends on the relevant Hamiltonian; the macroscopic counterpart of the Hamiltonian is the free energy function.

A Hamiltonian in microscopic space would correspond to the free energy function in the macroscopic space.


Conjugate thermodynamic variables

In the energy formulation, the conjugate variable pairs can be identified from the terms in the first law of thermodynamics: (T,S), (−p,V), (μ,N), … There is one extensive and one intensive variable in each pair.

You can write that from different contributions, TdS (heat flow), pdV (mechanical work done) and μdN (chemical work) where μ is the chemical potential and N is the number of atoms of a given type. These are conjugate pairs:

S is extensive and T intensive; V is extensive and p intensive; N is extensive and μ intensive.

To define your boundary conditions, you always need to specify one out of each pair. So, you need to say whether you are at constant S or constant T, whether you are at constant p or constant V, whether you are at constant μ or constant N. You cannot leave one unspecified.


Conjugate thermodynamic variables

To define your boundary conditions, you always need to specify one variable in each pair: (T,S), (−p,V), (μ,N), …

For example, if you have a closed box (nothing can go in or out), you are obviously working at a constant number of particles.

If you study a surface, you are usually working at constant chemical potential.

A classic mistake is simulating under constant N when you are in reality under constant μ. Surfaces are a really good example of that. Real surfaces are equilibrated with their environment. So they are under constant μ (chemical potential of the environment or under constant chemical potential from their bulk). With constant μ, the number of atoms on a surface can change.


Entropy formulation of the first law

In statistical mechanics, we often use the ‘entropy formulation’ of the first law:

dS = (1/T) dE + (p/T) dV − (μ/T) dN

In this formulation the conjugate pairs are (1/T, E), (p/T, V), (−μ/T, N).


Microcanonical, canonical and grand-canonical ensembles

Thermodynamic quantities are averages over the relevant set of microscopic states (the ensemble):

The ensemble is the collection of all possible microscopic states the system can be in, given the thermodynamic (macroscopic) boundary conditions.

E(E,V,N) -> microcanonical ensemble: e.g. a Newtonian system in a box with elastic walls.

E(T,V,N) -> canonical ensemble: e.g. Newtonian system in a box with non-elastic walls (walls equilibrated at temperature T)

E(T,V, μ) -> grand canonical ensemble: e.g. open system


Microcanonical ensemble: E(E,V,N)

The ensembles are determined by the extensive variables you keep constant.

The simplest choice is to keep (E,V,N) constant. The ensemble built on fixed energy, volume and number of particles is what is called the microcanonical ensemble.

A realization of that would be if you do Newtonian mechanics in a system in a box. If it’s in a closed box, the number of particles can’t change. The volume is fixed. And if you do Newtonian dynamics, the energy is fixed.

The basic form of molecular dynamics goes through a microcanonical ensemble


Canonical E(T,V,N), Grand-canonical E(T,V, μ)

If you have to control the average energy and you do that with the temperature, then you end up in what’s called the canonical ensemble.

If you release the constraint of a fixed number of particles, then again you have to take into account the field that sets your average number of particles: the chemical potential. You then have an ensemble with T, V and μ, which is called the grand canonical ensemble.

You could keep on constructing other ensembles by swapping intensive variables for extensive ones.

So that defines the states from which you should sample. The question is now, how do you sample?


How do you sample?

You should sample with the correct probability. The probability of a state ν in the ensemble is proportional to e^(−βHν), H being the Hamiltonian and β = 1/kT, normalized by the sum of that factor over all states, which is called the partition function Q:

Pν = e^(−βHν) / Q, where Q = Σν e^(−βHν)

The free energy is given by the log of the partition function:

F = −kT ln Q

So what should that Hamiltonian be? It should be the relevant microscopic Hamiltonian, one that includes all the things that can fluctuate in your system.


Sampling in a canonical ensemble

Here is how you can remember what goes into the Hamiltonian: think of your thermodynamic boundary conditions and take the relevant Legendre transform of the entropy. Those are essentially the pieces that go into your Hamiltonian.

Canonical ensemble: in this case, the first law is written as dS = (1/T) dE, or d(S − E/T) = 0, i.e. the relevant free energy is F = E − TS.

Thus for fixed (N,V,T): −F/T = S − ⟨E⟩/T. If you work at fixed N, V and T, the relevant free energy is the Helmholtz free energy F = E − TS. The part that goes into your Hamiltonian is the −E/T term: it is whatever sits in the Legendre-transform part of the entropy.


Sampling in a grand canonical ensemble E(T,V,μ)

In the grand-canonical ensemble, the Legendre transform of the entropy takes the form:

−F/T = S − E/T + μN/T

The part that goes into your Hamiltonian is the −(E − μN)/T term.

Fixed (T,V,μ): in this case, the first law is written as dS = (1/T) dE − (μ/T) dN, or d(S − E/T + μN/T) = 0, i.e. the relevant free energy is F = E − TS − μN.

It makes sense that you need this μN term in your Hamiltonian: without it, you have no way of controlling the number of particles.


Sampling on a fixed (T,P,N) ensemble

In this ensemble, the Legendre transform of the entropy takes the form:

−F/T = S − E/T − PV/T

The part that goes into your Hamiltonian is the −(E + PV)/T term.

Fixed (T,P,N): in this case, the first law is written as dS = (1/T) dE + (P/T) dV, or d(S − E/T − PV/T) = 0, i.e. the relevant free energy is F = E − TS + PV (the Gibbs free energy).

The probability of state ν is then

Pν = exp(−β(Eν + PVν)) / Σν′ exp(−β(Eν′ + PVν′))


Summary

The macroscopic boundary conditions are set by the constant extensive thermodynamic variables. That in turn defines the relevant ensemble and the probability with which you sample over an ensemble.

When you do that all correctly, you can get the averages of properties such as energy and volume with those probabilities.


Ergodicity principle

There is an implicit assumption being made here, one that is common in statistical mechanics: we have essentially said that following a system in time gives the same averages as sampling over its ensemble. The assumption is that, when you follow a system in time, it will eventually reach essentially all the states in its ensemble, i.e. all the states compatible with the boundary conditions.

If you put a system in a box, with a fixed number of particles, say at constant T, you are essentially saying that, if you wait long enough, the system will reach any possible configuration of atoms, because those are the configurations allowed in its ensemble. That is called the ergodicity principle.

The ergodicity principle says that if you wait long enough, the system will eventually reach all of its states. If you know it is going to reach all its states, and you know their probabilities, then you can simply sum over them. That is what you do in statistical mechanics, rather than doing the full dynamics.


Not all systems are ergodic

Note that not all systems are ergodic. An example is the harmonic oscillator. Think of its two coordinates: position and velocity.

Because it is a harmonic oscillator, r and p are correlated. If you think of the phase space of this system (position/momentum), the system traces out an ellipse. It never visits any other states. So we may say the harmonic oscillator is not ergodic.

In a 2D phase space, it’s not ergodic. But we know there is only one coordinate when the system is quantized. In this 1D space (amplitude of the normal mode), the system is ergodic.


Glassy states are non-ergodic

Some systems are “not ergodic” on normal time scales, but would be if one waited long enough, e.g. glasses.

Glasses are really difficult to do thermodynamics on, to integrate over the phase space because you really don’t know over what region to integrate.

When you are interested in computing the free energy of a glass, you do not know what ensemble to integrate over.

If you consider glass as fully random and you average over all the possible atom positions in the box, then you are actually getting the free energy of a liquid. The glass never samples all the possible positions so you could argue that it’s a liquid that is not ergodic or a liquid that is frozen in.


The Ising model

In the Ising model, we have a lattice of sites. At every site we can have an up-spin or a down-spin, so there is only a binary variable at every site. We mark that with +1 or −1, say +1 for up and −1 for down (a spin variable σi = +1 or −1).

The simplest Hamiltonian (energy function) is used here: it counts the nearest-neighbor bonds and associates an interaction (J) with them, H = −J Σ⟨ij⟩ σi σj.

When J > 0, ferromagnetic behavior; when J < 0 anti-ferromagnet


σi and σj are nearest neighbors in this model. If J > 0, you want to make σiσj positive to get a ferromagnet: if you have a +1 somewhere, you want a +1 next to it to get low energy. If J < 0, you want the spin products σiσj to be negative: if you have a plus spin, you want a minus spin next to it, so you have an anti-ferromagnet.


The Ising model is a model that we will use for any 2-state system (2 possibilities on the lattice site).

A binary alloy AB will be an example.


Monte Carlo as an integration method

The modern form originated with Ulam and Segrè at Los Alamos and the ENIAC computer (but it goes back to Fermi). Before its introduction as a “sampling” method, it was used as a method for integration of functions (Comte de Buffon, 1777).


A simple way to integrate a function is to draw a box around it and throw darts randomly into that box. If you track the fraction of times that you land below the curve (the probability of hitting below the curve), that tells you something about the integral under the curve!

(number of darts below the curve) / (total number of darts) ≈ (integral of the function) / (total area of the box)


Estimation of π
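The figure for this slide is not reproduced here, but the dart-throwing estimate it illustrates fits in a few lines of Python (a minimal sketch; the function name, dart count, and seed are my own choices):

    import random

    def estimate_pi(n_darts=1_000_000, seed=0):
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_darts):
            x, y = rng.random(), rng.random()   # dart thrown into the unit square
            if x * x + y * y < 1.0:             # landed under the quarter-circle arc
                hits += 1
        return 4.0 * hits / n_darts             # hits/total ~ pi/4

    print(estimate_pi())  # ~3.14, with statistical error ~ 1/sqrt(n_darts)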


Simple sampling

We will discuss simple sampling and importance sampling.

In simple sampling, you would randomly pick points ν in your phase space. This means that the states of your system have to be weighted with the correct probability Pν.

So you could randomly pick M states from the ensemble. Let's say each state ν carries some probability and the property of interest in that state is Aν. You get the average of A by weighting each Aν with the probability of its state.

The probability is proportional to e^(−βHν), with Hν the Hamiltonian of state ν:

⟨A⟩ ≈ Σν Aν e^(−βHν) / Σν e^(−βHν)


What we do not know is the partition function with which we have to normalize. After we sample a bunch of states, we can define an approximation to the partition function as the sum of the exponential over all the states we have sampled. We need that because the probabilities have to be normalized to 1.
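In code, simple sampling then looks roughly like the following sketch (the states list and the energy and prop functions are hypothetical placeholders for whatever system you study):

    import math
    import random

    def simple_sampling_average(states, energy, prop, beta, M, seed=0):
        """Pick M states uniformly at random and Boltzmann-weight them."""
        rng = random.Random(seed)
        weighted_sum = 0.0
        q_approx = 0.0                      # running approximation to Q
        for _ in range(M):
            s = rng.choice(states)          # unbiased pick from the ensemble
            w = math.exp(-beta * energy(s)) # weight the state should carry
            weighted_sum += w * prop(s)
            q_approx += w                   # normalizes the probabilities to 1
        return weighted_sum / q_approx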


Simple sampling does not work!

Statistical-mechanics integrals typically have significant contributions only from very small fractions of the 3N-dimensional phase space.

For example, for hard spheres, contributions come only from the regions of configuration space where no spheres overlap.

Simple sampling does not work, because one picks mainly states with low weight in the true partition function (i.e. states with high energy).

It only works for small phase spaces.

We have to restrict the sampling to the areas of space contributing to the integral

This leads to the concept of importance sampling


Sampling states proportionally to their multiplicity

With simple sampling you essentially pick states in proportion to their degeneracy (multiplicity).

Typically you pick states with higher entropy; recall that the entropy at a given energy is the log of the number of states at that energy:

S(E)/k = ln Ω(E), with d ln Ω(E)/dE = 1/kT > 0

So if you do simple sampling, you are going to pick states in proportion to Ω. Thus you will almost always pick states with high energy, because d ln Ω(E)/dE = 1/kT > 0 means the number of states at a given energy is an increasing function of energy.


Simple sampling for the Ising model

With simple sampling you essentially pick states in proportion to their degeneracy. For example, in the spin model, suppose you are studying a low temperature at which the system wants to be a ferromagnet (all spins aligned). If you pick spins randomly, you would never end up with a ferromagnetic configuration, because you keep picking high-energy states. So you would sample a lot of states in phase space, but never the relevant ones.

So if you simple sample you are almost never going to pick states at low energies because there are almost no states there. You are always going to pick up states at high energies because there are a lot of them.

(Figure: a ferromagnetic versus a paramagnetic configuration.)


For example, in the spin model, if you are at low temperature, the dominant low-energy states are the ferromagnetic states. However, if you randomly pick spin states, you will mostly pick disordered states; you will barely ever get any states with significant ferromagnetic patches.

In the figure below, the system lives somewhere around the red curve, but we are always sampling in the high-energy regime, i.e. we are always sampling the wrong states.

At some T, our system has an average energy and a fluctuation around it. The system states need to be sampled from the (red) distribution shown.


Simple versus importance sampling

In simple sampling you walk around Africa with a dipstick and you measure how deep the Nile is. Of course, most of the time you are not in the Nile. So, you get a lot of 0’s.

The smarter way is to first find the Nile (that’s putting a bias on your sample). You still want to have some randomness to walk around in the Nile, otherwise you are not getting the average depth of the Nile. But you want to be biased so you stay in it. That’s the idea of importance sampling.

Measuring the average depth of the Nile


Importance sampling

What is importance sampling? You want to sample the important (low-energy) states. If you want to set up a thermodynamically relevant ensemble, you know that your system spends most of its time in the low-energy states and fluctuates around them.

So you want to sample with bias and then correct your probabilities for that bias.

Picking states with a biased probability: importance sampling.

Can we pick states from the ensemble with a probability proportional to exp(−βE), rather than picking them at random and later weighting them by a probability?


In a random (simple) sample you pick states uniformly, and to average you weight the property by the correct probability:

⟨A⟩ ≈ Σν Aν e^(−βHν) / Σν e^(−βHν)

Alternatively, we could sample with a probability proportional to the real probability the state should have in the ensemble. If you pick states at the proper rate, you simply average the property directly,

⟨A⟩ ≈ (1/M) Σν Aν,

because the states already occur in the sample with a frequency proportional to the correct probability.

In simple sampling you pick things randomly. In importance sampling you shoot right away for picking states with the proper probability, so the low-energy states are sampled much more often than the high-energy states.


There will be properties where you need high energy states (e.g. maybe some properties are determined by extremes in your distribution, maybe it’s when you get to a certain energy that your system does something weird).

If you want to see that, then you should bias your system towards there.


How to construct probability weighted samples?

“walk” through phase space (Markov chain of states) visiting each state with proper probability (in the infinite time limit)

• Start from a random state i.

• Make a “move” (for example, pick a random displacement) to generate a trial state j, and calculate the energy of the new “trial” configuration.

• Accept j with some probability P(i → j).


Metropolis algorithm

If we are interested in just the thermodynamic behavior, we want to sample around the energy minimum. A common way to construct the probability-weighted sample is the Metropolis algorithm.

The idea is that you construct a Markov chain (really a sequence of states) in which, over a long time, you visit each state with the proper probability.

You start with some random state; call it μ. Then you pick a new trial state ν from μ and decide whether to accept the move: draw a random number R from 0 to 1; if P(μ → ν) > R, accept the new configuration, otherwise stay at the same place.

How do you choose P(μ → ν)? The Metropolis choice is the following:

P(μ → ν) = 1 if Eν − Eμ ≤ 0, and P(μ → ν) = e^(−β(Eν−Eμ)) if Eν − Eμ > 0


The Metropolis algorithm

We want to compute an average of measurements of A over configurations generated according to the distribution P(r^N).

To generate configurations according to the desired distribution P(r^N), we can create a random walk in phase space, sampling it with the ensemble distribution. This can be realized in different ways.

The Metropolis Monte Carlo algorithm uses a random walk in phase space with a transition probability from state m to state n equal to 1 if the move is downhill in energy (∆U_nm = U_n − U_m < 0). If the move is uphill in energy (∆U_nm > 0), the move is accepted with a probability given by the ratio of the probabilities of the final and initial states:

P(m → n) = exp(−∆U_nm / kT)


Conditions for generating the proper probability distribution

How does the Metropolis method produce states with the desired relative probability?

Start with some configuration with a non-vanishing Boltzmann factor. Generate a new configuration by giving one or more particles a small, random displacement; this is called a trial configuration. Decide whether to accept or reject the trial configuration, depending on the Boltzmann factors of the old and the trial configurations.

The acceptance/rejection is phrased in terms of a transition probability P(i → j). The above describes a Markov process. In order to achieve an equilibrium distribution, this Markov process must obey two requirements:

It must be ergodic, i.e. there must be a “path” from every state i to every state j.

It must obey detailed balance (microscopic reversibility).


Detailed balance condition

Let us set up a random walk through the configuration space (a so-called Markov ‘chain of configurations’) by introducing fictitious dynamics.

The “time” t is computer time (it marks the number of iterations of the procedure), not real time; our statistical system is considered to be in equilibrium, and thus real time is irrelevant.

P(m,t) is the probability of being in configuration m at time t, P(n,t) the probability of being in configuration n at time t, and P(m → n) the transition probability of going from state m to state n per unit time. Then we can write:

P(m, t+1) = P(m, t) + Σn [ P(n → m) P(n, t) − P(m → n) P(m, t) ]

At large t, once the arbitrary initial configuration is “forgotten,” we want P(m,t) to become P(m). For an equilibrium distribution, it would in principle be sufficient to have the same probability of arriving at a certain state m and departing from it:

Σn pn P(n → m) = Σn pm P(m → n)

The LHS represents all possible ways of arriving in m, and the RHS represents the total probability of leaving m.


Σn pn P(n → m) = Σn pm P(m → n)

This can be rewritten, using the normalization Σn P(m → n) = 1, as:

Σn pn P(n → m) = pm Σn P(m → n) = pm

However, there is no guarantee that applying these transition probabilities leads to an equilibrium distribution irrespective of the initial state!

Indeed, the system can be in a so-called limit cycle (dynamic equilibrium), where the probability distribution is not invariant after one transition, but only after n applications of the transition matrix Pnm.

This is an absolute disaster, as one cannot be sure that the states generated by such a choice of P are distributed according to the desired pm.


The solution lies in the following condition of detailed balance:

P(n → m) P(n, t) = P(m → n) P(m, t)

Instead of just requiring that the total probability of leaving a state equals the total probability of arriving at it, we now impose the much stronger requirement that every elementary move is balanced by its reverse. Clearly, this satisfies the original condition Σn pn P(n → m) = Σn pm P(m → n). In addition, it eliminates limit cycles, since all transitions are reversible.


More on detailed balance condition

Detailed balance can be applied to any probability distribution:

P(n → m) P(n, t) = P(m → n) P(m, t)

If we choose the Boltzmann distribution, with Unm = Un − Um:

P(m → n) / P(n → m) = P(n) / P(m) = [ (1/Z) exp(−Un/kT) ] / [ (1/Z) exp(−Um/kT) ] = exp(−Unm/kT)

Z does not appear in this expression; it involves only quantities that we know (T) or can easily calculate (Unm).

There are many possible choices of P which satisfy detailed balance. Each choice provides a dynamic method of generating an arbitrary probability distribution.

Does the Metropolis algorithm satisfy the detailed balance condition?


Conditions for generating proper probability distribution

Let us make sure that the Metropolis algorithm satisfies the detailed balance condition. Based on the Metropolis algorithm, the transition probabilities must satisfy:

P(m → n) / P(n → m) = P(n) / P(m) = exp(−Unm/kT)

If U(n) > U(m), then P(m → n) = exp(−Unm/kT) and P(n → m) = 1, so

P(m → n) / P(n → m) = exp(−Unm/kT)

If U(n) < U(m), then P(m → n) = 1 and P(n → m) = exp(−Umn/kT), so

P(m → n) / P(n → m) = 1 / exp(−Umn/kT) = exp(−Unm/kT)

Thus, the Metropolis Monte Carlo algorithm generates a new configuration n from a previous configuration m so that the transition probability P(m -> n) satisfies the detailed balance condition.


Summary of the Metropolis Monte Carlo

1. Choose the initial configuration and calculate its energy.

2. Make a “move” (for example, pick a random displacement) and calculate the energy of the new “trial” configuration.

3. Decide whether to accept the move:

If ∆Unm = Un − Um < 0, accept the new configuration.

If ∆Unm = Un − Um > 0, calculate P(m → n) = exp(−∆Unm/kT).

4. Draw a random number R from 0 to 1. If P(m → n) > R, accept the new configuration; otherwise, stay at the same place.

5. Repeat from step 2, accumulating sums for averages (if the atom is retained at its old position, the old configuration is re-counted as a new state in the random walk).


Implementation of Metropolis Monte Carlo

1. Choose the initial configuration of the system and calculate its energy.

2. Loop through all the particles:

For each particle, pick a random displacement d = (random# − 0.5) · dmax for the x, y and z coordinates. Here dmax is the maximum displacement, and random# is drawn from 0 to 1.

Calculate the energy change ∆U due to the displacement, and decide whether to accept the move based on the Metropolis criterion:

if ∆U < 0, accept the new configuration;

if ∆U > 0, calculate the transition probability P = exp(−∆U/kT), draw a random number R from 0 to 1, and accept the new configuration if P > R; otherwise, keep the old one.

Accumulate sums for averages (if the atom is retained at its old position, the old configuration is re-counted as a new state).

Pick the next particle…

3. If the number of MC cycles is less than the maximum number of cycles, go back to step 2.
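A minimal sketch of one such cycle in Python (the pair_energy function, the cubic periodic box, and all parameter names are illustrative assumptions, not code from the course):

    import math
    import random

    def mc_cycle(pos, pair_energy, beta, dmax, box, rng):
        """One Metropolis cycle: attempt a random displacement of every particle."""
        accepted = 0
        for i, old in enumerate(pos):
            # trial move: displace each coordinate by (random# - 0.5) * dmax
            trial = tuple((old[k] + (rng.random() - 0.5) * dmax) % box for k in range(3))
            # energy change from interactions with all other particles
            dU = sum(pair_energy(trial, pos[j], box) - pair_energy(old, pos[j], box)
                     for j in range(len(pos)) if j != i)
            if dU < 0.0 or rng.random() < math.exp(-beta * dU):
                pos[i] = trial          # accept the move
                accepted += 1
            # else: keep the old position (it is re-counted in the averages)
        return accepted / len(pos)      # acceptance ratio for this cycle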


Thermodynamic averages

You then have your property of interest as a function of sample index (often called “Monte Carlo time”). You can think of it as a trajectory through phase space.

So you track the property and accumulate its average. Typically people cut off the first part: remember that you started from a random state, which may be very far from the low-energy states. The initial state may have very high energy and, thereby, some atypical properties; by cutting off the first part you often get much faster convergence of the average.

You start sampling after the system has relaxed towards the minimum: the equilibration passes are not used in averaging, and the average is taken from there on.

This is not a dynamical trajectory, only an efficient way to sample the phase space.


Random number generators

Remember that you want to execute a move with a given probability (the Boltzmann factor e^(−βΔE), which is between 0 and 1).

You pick random numbers uniformly between 0 and 1. The probability that the random number is less than e^(−βΔE) is exactly P = e^(−βΔE).

So you pick the random number and if it’s below that exponential, you execute. If it’s above, you don’t execute. Now you execute with the correct probability.

Random numbers give you a way of drawing from a probability distribution.


The random number set is discretized

A random number generator generates integers between 0 and some maximum, e.g. 2^N, and then divides by 2^N so that you are between 0 and 1. Your set of random numbers is thus discretized.

Often what matters is the gap between 0 and the smallest nonzero random number, because very high-energy excitations fall in that first gap.

Your random number is now exactly 0 with probability 1/2^N, which is way too high; with perfectly continuous random numbers that probability would be zero.

If your system is stuck and it doesn’t have a lot of low energy excitations left, it tends to execute high energy excitations with too high probability.

When rand = 0, any excitation is executed, since exp(−βΔE) > 0 for any ΔE.


Suppose you have a perturbation algorithm that occasionally picks very high-energy perturbations (say 100 eV above the ground state, or one atom on top of another). Normally that is not a problem if your random number generator is continuous: the probability of drawing 0 is essentially zero, and thus the move will never get accepted. However, with discrete random numbers you hit 0 at a rate of 1/2^N, and then you execute any move, because e^(−βΔE) is small but not 0.

In a regular sampling run, this problem appears more often than once per 2^N passes.

We solve this problem by adding a small ε > 0 to the random number; this way you never hit 0.

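The guard itself is a one-liner; a sketch (the value of ε here is an arbitrary choice):

    import random

    EPS = 1e-12  # assumed tiny shift; the exact value is not critical

    def safe_uniform(rng=random):
        """Uniform random number that can never be exactly 0."""
        return rng.random() + EPS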


Application to the Ising model

How would you implement the Metropolis algorithm in a magnetic model (the Ising model)? You could initialize the spins randomly, or you could start with a ferromagnetic configuration. The closer you start to your equilibrium state, the sooner you start sampling the appropriate states.

An obvious perturbation is to pick a spin and flip it over: if it was “up” you make it “down”, and if it was “down” you make it “up”.

1. Start with some spin configuration.
2. Randomly pick a site and consider flipping the spin on that site.
3. Compute the energy change ΔE for that perturbation.
4. If ΔE < 0, accept the perturbation; if ΔE > 0, accept it with probability e^(−ΔE/kT).
5. Go back to 2.
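A minimal, runnable sketch of this loop for the 2D square-lattice Ising model (the lattice size, temperature, sweep count, and seed are arbitrary choices of mine):

    import math
    import random

    def ising_metropolis(L=32, kT_over_J=2.0, sweeps=1000, seed=0):
        rng = random.Random(seed)
        spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
        beta = 1.0 / kT_over_J                       # J = 1 in these units
        for _ in range(sweeps * L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nn              # cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] *= -1                    # accept the flip
        m = sum(map(sum, spins)) / (L * L)           # magnetization per site
        return spins, m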


METROPOLIS ALGORITHM

You start with some configuration of the system, which could be pretty much anything.

You choose a perturbation of the system.

You compute the energy change of that perturbation. If the energy goes down, you accept the perturbation. If the energy goes up, you accept it with a probability given by the Boltzmann factor e^(−βΔE). Then you repeat the process.


THE ISING SPIN MODEL

In the Ising spin model you have up and down spins on given lattice sites, with the simple interaction Hamiltonian H = −J Σ⟨ij⟩ σi σj.

Our perturbations are simply the following: we pick a spin and we flip it over, following steps 1–5 above.


Tabulation of moves in the Ising model

You will need to calculate the energy for the perturbation. If it goes down, you accept it. If it doesn’t, you accept it with Boltzmann probability. And you go back.

You can do this very fast. You need a random number to pick a spin in the lattice (or two random numbers if it is 2-dimensional). You have to compute the energy; if your Hamiltonian is built from σiσj products, that goes really fast. Then you have to compute e^(−ΔE/kT). You could probably do 50 million moves per second.


Tabulation of discrete environments speeds MC simulation

In these models you can quickly calculate the excitation energy. For example, in a ferromagnetic state, say an up central spin surrounded by 4 up spins, you can easily calculate the spin-flip energy: you start with 4 parallel bonds, and if you flip the central spin you have 4 anti-parallel bonds, so ΔE = 8J > 0 (J = bond energy). The excitation occurs at a rate e^(−βΔE).

If you just have a nearest neighbor interaction, the number of different excitation energies you can get is finite (there is only a small number of possible nearest neighbor environments). You can then tabulate them and their exponential at a given temperature. This leads to an extremely fast MC code.

In an efficient MC code, you can easily reach 100 million spin-flip attempts per second. If you have discrete environments, it can be efficient to tabulate them ahead of time: you can tabulate their energies ΔE and the exponentials e^(−βΔE), thus saving a lot of computation time.

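A sketch of such a lookup table for the square lattice (the function and variable names are mine; with z = 4 neighbors, the neighbor-spin sum can only be −4, −2, 0, 2 or 4):

    import math

    def acceptance_table(J, kT, z=4):
        """Precompute exp(-dE/kT) for every possible nearest-neighbor environment."""
        table = {}
        for s in (-1, 1):                       # spin to be flipped
            for nn in range(-z, z + 1, 2):      # sum of the z neighbor spins
                dE = 2.0 * J * s * nn           # flip cost, as in the 8J example above
                table[(s, nn)] = 1.0 if dE <= 0 else math.exp(-dE / kT)
        return table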


Selecting forms of the Hamiltonian

If you did a lattice model where you can change the overall spin (you are flipping up spin and down spin), you would typically have an interaction term and a field term in that Hamiltonian.

You could think of the field term as a chemical potential for the spin which tries to essentially force the spin to have a given average value (this field term is needed when the average spin is not conserved).

Then the probability needs to be weighted with the Boltzmann factor of a Hamiltonian containing both pieces, schematically

H = J Σ⟨ij⟩ σi σj (interaction term) − h Σi σi (field term)


Two dimensional Ising model: Energy vs. MCS

This is the 2D square-lattice model. The temperature is measured in units of J/k, with k the Boltzmann constant. Here kT/J = 2, which is below the Curie temperature of this model; this is a ferromagnetic model. The transition temperature is at about kT/J ≈ 2.27 in this model.


We start up randomly. The energy is sort of high, it kind of fluctuates for a while and then starts going down. Then it seems to relax fairly well and then it just kind of fluctuates around an average (if you average these samples you get the right energy).


Ising model: Trajectories for the magnetization

This is the magnetization as a function of temperature. Notice that the system is clearly ferromagnetic as we start heating it up: we have a preferred up-spin magnetization that slowly decreases with temperature, until at the transition temperature (kT/J ≈ 2.27) the system becomes paramagnetic. This is a 2nd-order transition.

Magnetization is the difference between how much up-spin and how much down-spin you have.


Detecting phase transitions

How do you detect phase transitions?

Usually you do Monte Carlo or molecular dynamics.

It's one thing to look at the atoms; it's another to figure out exactly what they are doing. For example, it is not easy in MD to figure out, just by looking at the atoms, at which temperature melting occurs.

Similarly, if you have a lattice model, you cannot easily figure out the paramagnetic transition temperature by just looking at the spins. Close to that transition, just below and just above, the configurations look pretty similar, yet one is still the ferromagnetic phase and the other is the paramagnetic phase.

You need to look at physical properties to detect phase transitions


You can figure out if you have a phase transition through the thermodynamic quantities that you sample.

If you sample the energy, you typically use the fact that the energy is discontinuous at first order transitions. It’s not so at 2nd order transitions.

Energy discontinuity indicates 1st order transition

If you work at constant chemical potential, you will have concentration discontinuities at first order transitions.

A concentration discontinuity (when working at constant chemical potential) indicates a 1st-order transition.


Detecting phase transitions using the heat capacity

The heat capacity is given by the fluctuations of the energy: C = (⟨E²⟩ − ⟨E⟩²)/(kT²). It is essentially a measure of how much the energy fluctuates around its average.

The heat capacity is infinite at a 1st-order transition (but this is difficult to spot), and it has a log-like singularity at 2nd-order transitions.

The figure shows the heat capacity of our earlier 2D model as a function of the normalized temperature. The transition is near 2.24; this is actually a logarithmic divergence to infinity.

http://bartok.ucsc.edu/peter/java/ising/ising.html
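From the sampled energies, this fluctuation formula is a one-liner to evaluate (a sketch in units where k = 1, which is an assumption of mine):

    def heat_capacity(energies, kT):
        """C = (<E^2> - <E>^2) / (k T^2), in units where k = 1."""
        n = len(energies)
        mean_e = sum(energies) / n
        mean_e2 = sum(e * e for e in energies) / n
        return (mean_e2 - mean_e ** 2) / kT ** 2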


H Adsorption on Pd (100)

Let us discuss adsorption of hydrogen on a palladium (100) surface. Pd is an fcc metal, so the (100) surface is essentially a centered square lattice. Remember that if you take the (001) plane of an fcc cube, it is a centered square lattice, which becomes a simple square lattice if you rotate it by 45°. At least topologically, this is a very simple model.

On this square lattice we are going to look at hydrogen adsorption. We need to know whether or not there is a hydrogen at every lattice site.

We use a lattice Hamiltonian with occupation variables that are either 1 or 0:

pi = 1 when the site is occupied by H, pi = 0 when it is not.


Only when two neighboring sites i and j are both occupied do you actually get an interaction energy between the two hydrogens. If you use spins instead, you have contributions from all σiσj terms, since the spin variables take the values +1 or −1.

You can write your Hamiltonian as something like a sum over ij bonds, with the restriction that i and j are nearest neighbors, with a nearest-neighbor interaction W1: H = W1 Σ⟨ij⟩ pi pj. That gives you the interaction between the hydrogens.



If you treat this as an open system, you have a chemical-potential-like term, which is essentially the difference between the adsorption energy and whatever chemical potential you have in your source of hydrogen.

We can now transform our two terms (interaction + field) from the lattice Hamiltonian into a spin Hamiltonian. We use the spin transformation

σi = 2pi − 1, i.e. pi = (1 + σi)/2

so that pi = 1 corresponds to σi = +1 and pi = 0 to σi = −1. The sum Σi pi is the total amount of H on the surface.


If you transform your lattice-model Hamiltonian into a spin-model Hamiltonian, you will see the relation between the interaction V1 in the spin model and W1 in the lattice model: the interaction in the lattice model is a factor of 4 larger than the interaction in the spin model. The chemical potential in the spin model also relates to the adsorption energy in the lattice model.

A spin model is often mathematically much easier to deal with than a lattice model.
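As a quick check of that factor of 4 (my reconstruction of the algebra, not reproduced from the slides; z denotes the coordination number of the lattice), substitute pi = (1 + σi)/2 into the interaction term:

    W_1 \sum_{\langle ij\rangle} p_i p_j
      = \frac{W_1}{4} \sum_{\langle ij\rangle} (1+\sigma_i)(1+\sigma_j)
      = \frac{W_1}{4} \sum_{\langle ij\rangle} \sigma_i \sigma_j
        + \frac{W_1 z}{4} \sum_i \sigma_i + \text{const}

so the spin-model interaction is V1 = W1/4, and the leftover linear term is absorbed into the chemical-potential (field) part.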


Spin vs lattice form of the Hamiltonian

In the lattice model it is easier to interpret quantities in a physical sense. The reason is that there you only have an interaction when both i and j are occupied by a hydrogen, so W1 is a true H–H interaction on the surface.

In the spin model, you have terms for any occupation of i and j. When i and j are both hydrogen, you have the interaction energy V1. When i and j are both vacant, they are −1, −1, so σiσj = +1 and you again get V1. And when there is a hydrogen on only one of i and j, you have +1 and −1, so you get −V1. This makes V1 harder to interpret physically.

However, we always tend to use spin models because they are mathematically more elegant. The variables σi are symmetric around 0 (by taking +1 and –1) and this gives you a lot of useful properties.


Canonical or grand-canonical ensemble to get H/Pd (100) phase diagram

We are interested in the phase diagram for H adsorption on Pd (100): as a function of coverage (what fraction of the sites is covered) and temperature, we want to see what the phase diagram looks like.

For equilibration, it is always much faster to work in the grand canonical ensemble than in the canonical ensemble.

If you plot the hydrogen coverage versus temperature, you might think of initializing the system at a given H coverage (the dotted line) and then just heating it and watching it.

(Phase diagram sketch: temperature T versus H coverage, from no H to full H coverage; the dotted line marks a fixed coverage.)


The problem there occurs when you are in 2-phase regions. If you are in a canonical system and you are in a 2-phase region, your system has to phase-separate within your cell. This means that it may make a region with a lot of Hydrogen (left) and a region with less Hydrogen (right).

(Figure: within the two-phase region the computational cell phase-separates, with a phase boundary between an H-rich patch and an H-poor patch.)


If you are in a 2-phase region, you need a phase boundary in your cell, and that boundary carries energy with it. Because the simulated system is small (maybe a 40×40 cell, or even a 100×100 cell), the boundary has a non-negligible effect on the total energy, whereas in real systems the coexisting patches are much bigger, so the boundary contributes proportionally much less energy.

If you work in the canonical ensemble and you are in this 2-phase region, your energy depends somewhat on the system size (the size of the Monte Carlo box), because the system size determines the relative proportion of boundary to bulk. So you get a different energy.

It is much better to do grand-canonical simulations, where you essentially assume some hydrogen chemical potential and use a Hamiltonian with interaction and chemical-potential terms, e.g.:

H = (1/2) Σi,j Vij σi σj − μH Σi σi


H/Pd (100) phase diagram

So here we have an interaction term and a field term, the chemical-potential term μH Σi σi, that controls the amount of hydrogen:

H = (1/2) Σi,j Vij σi σj − μH Σi σi

How will you see phase boundaries then? Essentially, at a given temperature, you tweak the chemical potential (like tweaking the pressure!). If you start with a very low (very negative) μH, then Σi σi wants to be low, so the spins drift to −1.

Depending on your convention (say −1 is a vacant site and +1 is hydrogen), that means you start near zero hydrogen. As you crank up the chemical potential, you get more and more hydrogen in your system. The nice thing is that if there are two-phase regions, you essentially jump across them.
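A sketch of such a grand-canonical scan (the sign conventions, lattice size, and parameter values are illustrative assumptions of mine; a jump in the coverage curve marks a first-order transition):

    import math
    import random

    def coverage_vs_mu(mu_values, L=32, kT=1.0, V=1.0, sweeps=500, seed=0):
        """Scan mu_H for H = (1/2) sum_ij V s_i s_j - mu_H sum_i s_i on a square lattice."""
        rng = random.Random(seed)
        s = [[-1] * L for _ in range(L)]             # start from the empty surface
        beta, curve = 1.0 / kT, []
        for mu in mu_values:
            for _ in range(sweeps * L * L):
                i, j = rng.randrange(L), rng.randrange(L)
                nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                      + s[i][(j + 1) % L] + s[i][(j - 1) % L])
                dE = 2.0 * s[i][j] * (mu - V * nn)   # flip cost for this Hamiltonian
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    s[i][j] *= -1
            theta = sum((1 + x) / 2 for row in s for x in row) / (L * L)
            curve.append((mu, theta))                # coverage jumps across 2-phase regions
        return curve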


Let’s say that this phase diagram simply looked like the blue line in the figure (in reality it doesn’t). Let’s say it was just a miscibility gap. On the left is essentially the state with no Hydrogen and on the right is full coverage.


In between (e.g. dotted blue line) you have co-existence between patches of full coverage and no Hydrogen. If you do this by cranking up the chemical potential, you’ll essentially move from no Hydrogen (point A) to full coverage (point B) and you will never live in the two-phase region.



If you plot the hydrogen coverage (vertical axis) as a function of the chemical potential μH, the coverage initially rises only very slowly.

Then at some critical chemical potential, you’ll jump up. This discontinuity is the width of your 2-phase region (see also horizontal width shown in Fig. on the right).


So you never live in a two-phase region when you do grand canonical because you work only with intensive variables in your thermodynamics.

You essentially jump across the 1st-order transitions rather than living in the discontinuity that defines them. This is a more efficient way to get phase diagrams.


Binary solid mixtures

The spin can indicate whether a site is occupied by an A or a B atom -> a model for binary solid mixtures.

You can write down the Hamiltonian for a binary alloy: for each bond ij, take the probability of an AB bond times the AB interaction, plus the probability of an AA bond times the AA interaction, plus the probability of a BB bond times the BB interaction.


Moves from one state to another don’t have to be physical

The key thing in Monte Carlo is not to be constrained by physical principles in selecting your moves from one state to another. The point is not to do a physical move but to efficiently sample the phase space.

Let's revisit the spin model: a spin going from up to down or vice versa is a physical move. In some cases, it is much more efficient to flip over whole patches of spins. When you are near a 2nd-order transition in the material, the fluctuations in the system are very long-range: whole patches of spins move together, so you get much better convergence if you flip whole patches of spins (say 10 at a time) rather than attempting one at a time.

“Dynamics” in Monte Carlo is not real, hence you can pick any “perturbations” that satisfy the criterion of detailed balance and a priori probabilities.


Moves in Monte Carlo: Kawasaki dynamics

Let's say you have two nearest-neighbor atoms, A and B, on a lattice. If you want to equilibrate that, you could pick the obvious dynamics and interchange A and B. This is an obvious move because it mimics diffusion: atoms interchange and equilibrate like that. This position-exchange move on the lattice model is called Kawasaki dynamics.

(Sketch: two neighboring A and B atoms swap positions.)


Moves in Monte Carlo: Glauber dynamics

It turns out that a much faster move for equilibration is actually changing the identity of the atoms. Rather than interchanging A and B at different positions, you take an atom and, if it is A, you try to turn it into B; if it is B, you try to turn it into A. That is called Glauber dynamics.

(Sketch: a site's occupant changes identity in place, A ↔ B.)

If you do these identity-flip (Glauber) moves, you will need some kind of chemical potential.


Moves in Monte Carlo: Glauber vs. Kawasaki dynamics

In Kawasaki (exchange) dynamics the composition is fixed. You just interchange the positions of A and B atoms, so you are not changing the number of A or B: you conserve the concentration.

In Glauber (identity-flip) dynamics you change the composition: you do not change the total number of particles (#A + #B = constant), but you do change the concentrations of the A and B species, because whenever you flip an A into a B you change the concentration.



Moves in Monte Carlo: the chemical-potential term

For identity-flip moves you need some kind of chemical-potential term in your Hamiltonian. That chemical potential is the difference μA − μB between the chemical potentials of A and B; this difference drives the concentration (essentially, the probability of having A on each site).


86MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Kawasaki vs. Glauber, closed vs. open systems

Glauber dynamics equilibrates hundreds of times faster and more easily in many cases. If you started with a situation where you had too much A in one area and too much B in another, Kawasaki dynamics would take a long time to equilibrate the system by slowly diffusing A and B atoms. With Glauber dynamics, the system senses that there is too much A in one area and it starts turning into B, and the B in the other area starts turning into A. So it tends to be a much more efficient way.


Open systems tend to equilibrate faster than closed systems. So if you can make your system open and still end up with the same end result, it will go much faster. This is essentially because you give open systems more degrees of freedom (you can change their chemistry).

87MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Selection of random moves in a fluid

A liquid is a bunch of atoms in a box that you could study by tracking them with MD. You could also get their thermodynamic averages just by sampling all the possible positions of the atoms.

Which perturbation would you pick for a liquid?

Anything that is consistent with the degrees of freedom of your ensemble is fine.

You pick an atom and then you pick a random displacement vector for that atom, and that's your perturbation. Then you calculate the energy. If it goes down, you accept the move. If not, you accept it with a probability.

You pick the orientation and magnitude of the displacement vector randomly. Typically, you choose the (Δx, Δy, Δz) coordinates out of a range [−a/2, a/2].

88MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Selection of random moves in a fluid

What do you take as "a"?

The density of a liquid is only about 10% less than that of a solid, so most of the space is filled by atoms.

If you take "a" really big (of the order of the box length), you are not going to accept a lot of your perturbations (in a dense liquid, your atoms will very often end up on top of other atoms).

So, if you take "a" very big, you will in principle equilibrate the density easily, but you are not going to accept a lot of your moves.

If you pick "a" too small, then you are going to have a high acceptance rate (your atom is just moving a little bit). But now you have a high degree of correlation in your averages. Essentially, all the states that you put in your Markov chain look the same (if you barely displace your atoms, they all look alike). It's going to take you a really long time to get good averages.

89MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Selection of random moves in a fluid

You could almost think of Monte Carlo with very small perturbations as a kind of molecular dynamics: the atoms move slowly under a sort of random force, along something like real trajectories, rather than jumping wildly through phase space.

The nice thing about moves like these is that you can actually adjust "a" during the simulation to obtain a reasonable acceptance rate.

If you set "a" too big, then you'll get a low acceptance rate. If you set "a" too small, you get a high acceptance rate.

You typically shoot for an acceptance ratio anywhere from 0.3 to 0.5.
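A minimal sketch of that adjustment, assuming a hypothetical helper attempt_move(a) that tries one Metropolis move with maximum displacement "a" and returns True on acceptance (the 0.3-0.5 window is the one quoted above):

def tune_step(attempt_move, a, batch=1000):
    # Try a batch of moves with the current maximum displacement "a"
    accepted = sum(attempt_move(a) for _ in range(batch))
    ratio = accepted / batch
    if ratio > 0.5:
        a *= 1.1   # accepting too often: steps are too timid, enlarge them
    elif ratio < 0.3:
        a *= 0.9   # accepting too rarely: steps are too bold, shrink them
    return a, ratio

In practice this tuning is done during equilibration and "a" is then frozen, since changing the move set while collecting averages formally breaks detailed balance.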

90MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

What perturbations to pick?

You want to be somewhere where on the order of a third to half of the moves you try are accepted. If they don't get accepted, you are wasting all your effort calculating energies.

If you are doing this with quantum mechanics, then you want to optimize your MC moves because every energy evaluation costs you a lot of time. If you are doing this with simple hard-sphere models, you can afford to attempt a lot of moves and calculate a lot of energies for moves you never execute.

1) randomly pick an atom
2) displace it by a random amount between two limits
3) compute ΔE = E_new − E_old
4) if ΔE < 0, accept the perturbation; if ΔE > 0, accept it with probability e^(−βΔE)
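A self-contained sketch of this algorithm in Python (for clarity the total energy is recomputed each time; in a real code you would evaluate only the local energy change of the moved atom; energy_fn and the (N, 3) position array are illustrative assumptions):

import numpy as np

def metropolis_displacement(pos, energy_fn, beta, a, rng):
    """One Metropolis displacement move on an (N, 3) array of positions."""
    i = rng.integers(len(pos))                       # 1) randomly pick an atom
    old_r = pos[i].copy()
    e_old = energy_fn(pos)
    pos[i] = old_r + rng.uniform(-a / 2, a / 2, 3)   # 2) random displacement
    dE = energy_fn(pos) - e_old                      # 3) Delta E = E_new - E_old
    if dE < 0 or rng.random() < np.exp(-beta * dE):  # 4) Metropolis criterion
        return True                                  # accept: keep the new position
    pos[i] = old_r                                   # reject: restore the old position
    return False

Here rng is a numpy Generator, e.g. rng = np.random.default_rng(0).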

91MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Moves in Monte Carlo

In MC, you can pretty much choose any type of move. We already discussed diffusion-like exchange moves and identity-flip moves.

Looking carefully at your system can really help you do useful moves.

Let’s consider studying Oxygen transport on Platinum surfaces (e.g. Pt (111), a triangular lattice).

You have Oxygen sitting on particular lattice sites, but let us say that, for some reason, the Oxygen wants to form strongly bound triplet clusters. That means that in different places on the lattice you have these triplet clusters.

Low energy excitation: a triplet of O atoms moves around as one unit.

92MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Moves in Monte Carlo

If your perturbation mechanism is moving one Oxygen into a vacant site, the dynamics of this kind of system becomes really poor. The reason is that when you anneal the system, it forms these triplet clusters, but in your Monte Carlo you can't move them anymore.

In the perturbation where you take one of the Oxygens and move it to a vacant site, you break up the triplet. Since the triplet clusters are very stable, these perturbations are very high in energy.

What essentially happens is that your Monte Carlo doesn’t move any more because your excitations are very high in energy even though the system may have low energy excitations.

High energy excitation: one O atom moves away to a vacant site, breaking up the triplet.

93MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Moves in Monte Carlo

Moving the triplet over as a whole keeps the triplet together, and if the triplets only interact weakly with their surroundings, that's a low energy excitation.

In conclusion, especially at low temperature, you should look at your system to see if there are stable ordered units, e.g. molecular units. It then pays to move those units as a whole; this can be a much more efficient perturbation. However, these kinds of moves are more complicated to implement.

This is by far the biggest problem in complicated systems -- they tend to have multiple energy scales. As soon as you freeze out one of them, it becomes hard to equilibrate the lower energy scales.

94MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Long chain molecules

If you go to more complex geometries, the perturbation algorithms in MC get more tedious.

Let's say you do a long chain molecule, a polymer. Because of the fixed angles of certain C-C bonds, the chain can only rotate around these bonds.

So, often, for a polymer, people don't use all the possible degrees of freedom. An sp2 or sp3 Carbon bond has a certain bonding angle, so all you can do is rotate around the bond while keeping that bonding angle. This essentially restricts the rotational degrees of freedom of every Carbon joint along the polymer chain axis: fixed bond angle, but rotation about the bond still possible.

Sample the configuration space by sampling possible values of the free rotations. Often only small perturbations are possible, since even small rotations can lead to large atom movements.

95MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Polymers or bio-molecules in real space

[Figure: polymer chain pivoting about the carbon labeled A]

Doing Monte Carlo on polymers in real space requires all kinds of tricks to find good perturbations and make the algorithms more efficient. This is a difficult problem.

You then have to sample with those degrees of freedom. In the figure, you want to rotate one end of the chain around the Carbon labeled A: you keep one part of the chain fixed and swing the rest of the chain around.

This is a very inefficient algorithm: if this is a really long chain, even a small rotation ω swings the far end of the chain really far, and it is going to bump into other atoms (a small rotation can cause a large movement).

The difficult thing about polymers is that your atoms can move freely a little bit, but they are all connected. Building that constraint into the perturbation is not trivial.

It is hard to find excitations that sample all your DOF but that don’t make large pieces of your chain bump into other things.

96MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Polymers on a lattice

For the reasons just discussed, people do polymers on a lattice. A really simple way of representing a polymer is by snaking it along a lattice.

People have even built lattice models with many more lattice points per unit length of the polymer, so that you start to approach a continuum.

The lattice algorithms are an important class of Monte Carlo algorithms in polymer dynamics.

How would you do Monte Carlo perturbations on a polymer on a lattice?

97MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Polymers on a lattice

For the polymer chain shown, of length 11, you could build essentially all self-avoiding random walks of length 11, and that is the ensemble of your polymer of chain length 11. For a polymer of length 11 that's quite doable. However, if you go to a length of 50, that's not doable anymore.

How would you design an algorithm to perturb chains like that? Remember, it doesn't have to be realistic as long as it samples your space.

There are 3 or 4 different types of perturbations you can do on such a chain to sample the space.

How could you make the blue line in the figure move on the lattice and take on different configurations?

98MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Polymers on a lattice: Moves

Move set: crankshaft, kink jump, end rotation, slithering snake, etc.

Here is an obvious move: you can take the end and fluctuate it. For example, take the blue segment at the end and move it to either location of the red segments shown in the figure.

[Figure: end segment and its two possible new positions]

99MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Polymers on a lattice: Moves

You could slide the snake forward. That means you randomly pick where you put the next segment, or you take the end piece and put it at the front with a random orientation.

You could also do what is called the kink jump: whenever you have a square corner, you can kink it over to the opposite corner. That's a symmetric perturbation.

In 3D this becomes the crankshaft move, because in 3D you can rotate such a piece like a crankshaft.
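A minimal sketch of the kink-jump move for a chain on a 2D square lattice (the list-of-sites representation and the self-avoidance check are illustrative assumptions):

import random

def kink_jump(chain):
    """Attempt one kink jump on `chain`, a list of (x, y) lattice sites of
    consecutive monomers. Returns True if the move was carried out."""
    occupied = set(chain)
    i = random.randrange(1, len(chain) - 1)     # pick an interior monomer
    (x0, y0), (x1, y1), (x2, y2) = chain[i - 1], chain[i], chain[i + 1]
    if x0 != x2 and y0 != y2:                   # the three sites form a corner
        new = (x0 + x2 - x1, y0 + y2 - y1)      # opposite corner of the square
        if new not in occupied:                 # respect self-avoidance
            chain[i] = new
            return True
    return False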

100MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Heisenberg magnet – continuous spin model

Instead of an Ising magnet with 2 states, up and down, we'll look at a Heisenberg magnet. Here you have a fixed-size spin, but it can rotate continuously in space, i.e. it can take any direction in space (not discretized). So the spin vector has 3 components: x, y and z.

The Hamiltonian is very similar to an Ising Hamiltonian. You can define the interaction energy through the dot product.

If the spins are parallel, the dot product is 1. If they are anti-parallel, the dot product is –1. If they are orthogonal, it's 0. And you can have anything in between. The J has a very similar meaning here: J>0 will give you ferromagnets and J<0 anti-ferromagnets. Also, you can have a field term to control the net spin.
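A sketch of the Hamiltonian just described (unit spin vectors S_i, exchange constant J, applied field H):

E = -J \sum_{\langle ij \rangle} \mathbf{S}_i \cdot \mathbf{S}_j - \mathbf{H} \cdot \sum_i \mathbf{S}_i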

101MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Heisenberg magnet – continuous spin model

First we need a coordinate system. Since we conserve the amount of spin (the norm of the vector), we can just use spherical coordinates θ ∈ (0°, 180°), φ ∈ (0°, 360°) and fixed r.

If you didn’t think carefully, you’d pick θ randomly from 0 to 180 and φ from 0 to 360.

This would be a big mistake. The reason is that when θ=0, all the states for φ from 0° to 360° map onto the same state, whereas when θ=90°, all the states for φ from 0° to 360° are different states. So the density with which you sample θ should depend on its value.

How would we do MC perturbations on this system?

102MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Distributions of θ and φ

For a given θ, the number of states that have that angle θ is proportional to the slice of the sphere shown in the figure (~2π sinθ dθ). You need to pick θ with a weight proportional to sinθ, because sinθ=0 when θ=0 (there are very few states there) and the number of states is maximal at θ=90°. How do you do that?

A uniform distribution of θ and φ does not give a uniform distribution of vector orientations in space: for small θ, all φ give a spin oriented along the z-axis.

Area ~ 2π sinθ dθ

You have to go from a homogeneous distribution of random numbers (from –1 to 1) to the distribution ½ sinθ, where θ is between 0° and 180° (the factor ½ is introduced here for normalization).

103MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Sampling variables that are not homogeneously distributed in phase space

So, what you need to do is find some relation between θ and x (where x is what you pick from your random number generator), such that θ is distributed like ½ sinθ. You can do that by requiring that a given interval around θ contains the same number of states as the corresponding interval around x.

P(θ) dθ = P(x) dx
½ sinθ dθ = ½ dx
dx/dθ = sinθ
x = –cosθ
θ = cos⁻¹(–x)

This is how you can go from a uniform distribution to some other distribution. If you pick x uniformly and map it through this function, you'll get a properly distributed value of θ. The other angle φ you can pick uniformly from 0° to 360°.
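A minimal sketch of this sampling (numpy Generator API; drawing cosθ uniformly on [–1, 1] is the same recipe as θ = cos⁻¹(–x), since the sign of x is irrelevant by symmetry):

import numpy as np

def random_unit_vector(rng):
    """Uniformly distributed direction on the unit sphere."""
    phi = rng.uniform(0.0, 2.0 * np.pi)   # phi picked uniformly
    cos_theta = rng.uniform(-1.0, 1.0)    # cos(theta) uniform <=> theta ~ (1/2) sin(theta)
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    return np.array([sin_theta * np.cos(phi),
                     sin_theta * np.sin(phi),
                     cos_theta])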

104MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Heisenberg magnet – continuous spin model

We set up a ferromagnetic Heisenberg model and look at the heat capacity. The horizontal axis is the temperature and the vertical axis the heat capacity.

[Figure, two panels: computed result with the continuous spin model discussed; computed result ``with a more accurate method'']

105MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Heisenberg magnet – continuous spin model

Looking at the result on the right, you would think that there must be a phase transition at low T (because there is a maximum in the heat capacity). However, this is not the case, and if you do it right you get the heat capacity shown on the left.

[Figure, two panels: computed result with the continuous spin model discussed; computed result ``with a more accurate method'']

106MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Heisenberg magnet – Low temperature spin waves

Heat capacity is a measure of fluctuations. If you don't allow your system to fluctuate enough, you artificially reduce its heat capacity. Here we are not doing a good job of sampling certain fluctuations; that's why the computed heat capacity spuriously goes down as T goes toward 0. The missing fluctuations have no counterpart in discrete models like lattice models: they are spin waves.

In a lattice model, the lowest energy fluctuation is a finite amount of energy above the ground state. If you think of your ferromagnetic model, the lowest energy fluctuation is flipping one spin over, which costs you 8J (2J for each of the four bonds on a square lattice). It's a finite amount above the ground state.

In a continuous model, you have excitations that are an infinitesimal amount above the ground state. These fluctuations are reachable at any temperature: if they are an infinitesimal amount above the ground state, then you can excite them as soon as T is an infinitesimal amount above 0. In a Heisenberg model the equivalent excitations are spin waves.

107MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Heisenberg magnet – Low temperature spin waves

We may have thought that, in a ferromagnet, the lowest-order fluctuation is to take one spin and randomly reorient it. However, the lowest energy fluctuation is a spin wave.

In a spin wave, you move every spin just a little bit relative to the next one. If you actually calculate the energy of this and take the long wavelength limit, you find that the energy is only an infinitesimal amount above the ground state. As the wavelength gets shorter, the energy of the fluctuation goes up.

You can see now that we don't get this perturbation (small amounts of spin canting over a very long wavelength) with our sampling algorithm. In a ferromagnetic state, our algorithm picks a spin and gives it a random new orientation; at low enough temperature, that's a finite energy perturbation, so you are never going to accept it.

108MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Heisenberg magnet – Low temperature spin waves

We need to use other perturbations to fix this problem. If you have a spin at a given orientation, rather than picking a random new orientation of the spin, you randomize it just within a small cone, i.e. you only allow a certain amount of deviation from the current angle. As you make that deviation smaller and smaller, you start getting spin waves, because every spin is only perturbed a very small amount at a time. If you do that, then you get the correct heat capacity shown.

Computed result with spin-wave perturbations
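One simple way to realize a cone-restricted move (a sketch; the Gaussian-kick construction is an illustrative choice, not necessarily the one behind the figure):

import numpy as np

def perturb_in_cone(spin, delta, rng):
    """Nudge the unit vector `spin` by a small random kick of size ~delta
    and renormalize; small delta keeps the new direction inside a narrow
    cone around the old one."""
    new_spin = spin + rng.normal(0.0, delta, 3)
    return new_spin / np.linalg.norm(new_spin)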

109MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Specific heat capacity must be 0 at 0 K

Notice from the figure that the computed heat capacity is finite at 0 K. However, if the heat capacity is non-zero at 0 K, you have infinite entropy. Recall that:

s = ∫₀ᵀ (c_v / T′) dT′

So as T goes to 0, c_v has to go to 0 as well, or the integral diverges. There must be something wrong with our earlier model.

What's wrong is that our system has these infinitesimally small excitation energies because it's a continuum model, but we don't know what these excitations are.

110MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Specific heat capacity at low temperatures In any real system the states are quantized and the first excitation is always a finite energy above the ground state.

If the first excitation is a finite energy above the ground state, there will always be some temperature low enough to freeze out that excitation, so that it no longer contributes to the heat capacity.

Thus the heat capacity goes to 0 because the energy states are quantized.

So, any time you work with a model with continuous degrees of freedom (e.g. the Heisenberg magnet), it will give you a finite heat capacity at 0 K, and that's wrong.

111MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Specific heat capacity at low temperatures

To solve this you would need to know the quantized excitations.

In an Ising model we know the eigenstates. The eigenstates are states with just one spin perturbed, from, say, +1 to –1. So we’re actually getting the right excitation as we go to the ground state.

If you had atoms, your continuous degrees of freedom are the displacements.

In a solid you find your quantized states with phonon diagonalization.

What are the quantized excitations in a Heisenberg magnet?

That’s an open problem!

112MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Accuracy of MC to obtain ensemble properties

In MC, you can get the answer as accurately as you want or can afford.

There are essentially no approximations in the Metropolis algorithm itself, and we are not concerned here with inaccuracies in the Hamiltonian.

We are mainly interested in inaccuracies in the integration (the ensemble averaging).

Where do inaccuracies come from when you do MC sampling?

How accurate is the sampling?

You get two sources of error:

The fact that you take a finite sample (i.e. you don't collect the whole ensemble)

The fact that you work with a finite system size (periodic boundaries)

113MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Error from finite size effects

Since the real system is essentially infinite and your simulation cell is not, you often impose periodic boundary conditions. The resulting errors are called ``finite size effects''.

For example, if you set up a 20 x 20 lattice, you can't get anything in there that has a periodicity of, let's say, 55. Thus you can't get truly random configurations.

In anything with periodic boundary conditions, you don't have truly random configurations, because you force the repeat unit to repeat.

To get more and more randomness, you can make the periodic box bigger, but in the end a real liquid is non-periodic: you can't fully represent a random liquid using periodicity.

Notice that you also impose symmetry from your periodic boundary conditions.

A real liquid has perfect spherical symmetry (as a point group, every direction is identical). By simulating it with a cubic repeat unit you essentially impose cubic symmetry. For many problems this is not important, but it is something to consider.

114MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Size effects

If your system has some periodicity that doesn't fit in your simulation box, you are messing up your system. In a solid, often what it's going to do is form some kind of phase boundaries to try to fit. Thus if your system has a real periodicity, your box size should be commensurate with that periodicity.

The other obvious thing: if your system is really way too small, you can start seeing interaction between the perturbation in your Monte Carlo box and its own image coming from the periodic boundary conditions. Usually you need pretty small systems to actually run into that.

Near 2nd order transitions, the fluctuations in a system grow without bound: the correlation length of the fluctuations diverges, and at the transition it is actually infinite.

For example, in a ferromagnet that goes through a 2nd order transition, there will be whole patches of spins flipping over, and the spatial correlation (the distance over which they do that) becomes infinite at the transition. That means there will be a point where your cell is, by definition, not large enough to capture those fluctuations.

115MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Size effects near 2nd order transition

If you work in a finite cell, you remove all fluctuations above a certain wavelength. That does things to the heat capacity. Remember that the heat capacity is a measure of the fluctuations: it's (⟨E²⟩ − ⟨E⟩²)/kT².

Thus the heat capacity or magnetic susceptibility at a 2nd order transition depends on the system size, even when normalized by the number of particles or spins in your system. That's what is shown in the figure for the susceptibility, i.e. the fluctuation of the spin rather than of the energy.

This goes like the square root of N where N is the number of particles in your system.

116MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Size effects near 2nd order transition

If you see phase transitions in your system and you want to know whether they are 1st or 2nd order, this scaling effect is the test to do.

2nd order derivatives of the free energy (heat capacities, magnetic susceptibilities, chemical susceptibilities, etc.) tend to show peaks near phase transitions. Since you'll often see peaks in them even when you are not at a phase transition, the way to know that it's a 2nd order transition is often just to look at the scaling with system size.

117MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Sampling error

The sampling error is the easiest one to deal with because you can quantify it. Most people just run long enough and hope it's long enough.

We are talking about quantities that are defined in the microscopic states; the macroscopic value is just the average over the microscopically defined variables, e.g. things like volume, energy or concentration.

We denote by Ā the true average of A in the ensemble (the ensemble is some distribution of A's). Let's also denote by ⟨A⟩ the average that we actually collect in our finite sample.

In the figure shown, σ_dist is the standard deviation of A in the distribution.

For example, if the property we were looking at was the energy, then the standard deviation is essentially the heat capacity: the heat capacity is a measure of the standard deviation (the fluctuations) in the ensemble.

[Figure: A fluctuates with some spread σ_dist around the true average]

118MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Accuracy of MC to obtain ensemble averages

If you take a finite sample from this distribution, you will not get the true average. If you do it a lot of times, you can see how the sampled average is distributed.

If the sample you take is uncorrelated, then the standard deviation of the average is essentially the standard deviation of the distribution divided by the square root of the sample size m:

σ_ave = σ_dist / √m

119MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Accuracy of MC to obtain ensemble averages

If you pick samples from the distribution (e.g. a sample with only 4 members), you are going to get some average, but it won't be exactly right. If you take a sample of 4 many times, you are going to get different values of the average. If you plot how that average is distributed, and your original distribution was a Gaussian, the average is also a Gaussian. You can look at the standard deviation of your average, and it turns out that, if your sample is uncorrelated, the two are related as shown above.

120MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Accuracy of MC to obtain ensemble averages

In our simulation, we actually have some idea of the standard deviation of the distribution because we take a very large sample: we can approximate σ_dist by the standard deviation of our sample. You can easily get both ⟨A⟩ and σ_dist in your simulation.

Then if you know the sample size m, you have an idea of the error on your average.


121MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Sample correlation

What we discussed earlier is true if the samples that you take are uncorrelated.

This premise is not satisfied in the Markov chain that we set up through a Metropolis algorithm. Indeed, the samples are not uncorrelated because every step in the Markov chain comes from the previous one!

You can see why that is. If you did Monte Carlo on a liquid and you picked really small perturbations (e.g. you just move an atom within a range of 0.1 Angstrom), you will almost always accept the moves, but obviously your states are very correlated: one state in the Markov chain has one atom displaced by 0.1 Angstrom relative to the previous one. So you have a high degree of correlation.

Correlation reduces the quality of your sampling: the quality of your sample is polluted by the correlation time. The extreme limit is when you attempt perturbations with very high energy that will almost never be accepted. Then your Markov chain just repeats identical states for a long time (if you don't accept the new state, you keep the old one in the Markov chain). So again you have a high degree of correlation.

122MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Correlation function and correlation time

The correlation time can be calculated from the correlation function (defined below).

Let us look at a quantity A as a function of time, A(t) (and time could be the distance along your Markov chain). Let's say we get a bunch of scattered points.

You are, essentially, trying to measure if what comes at any time, say, t, can be predicted from what you had at time t - τ.

[Figure: scattered time series A(t), comparing the values at t–τ and t]

123MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Correlation function and correlation time

Can A(t) be predicted from A(t – τ)?


If it can be, then what happens at t and t-τ are correlated over this distance τ.

δA(t) = A(t) − ⟨A⟩ is the perturbation away from equilibrium (how far A is away from the average at time t), and δA(t+τ) is how far A is away from the average at time t+τ. So you are testing whether these are correlated.
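In standard notation (presumably the form on the original slide, which did not survive transcription), the normalized correlation function and the correlation time are:

c_A(\tau) = \frac{\langle \delta A(t)\, \delta A(t + \tau) \rangle}{\langle \delta A^2 \rangle}, \qquad \tau_A = \int_0^{\infty} c_A(\tau)\, d\tau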

124MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Correlation function and correlation time

In a liquid, where the system is purely random over long times, you have a short correlation time: after 1 second the liquid has no memory of time 0. You'll have a finite correlation time.

You are looking at what this fluctuation looks like when you normalize it by the overall fluctuations in the system.

The correlation function, c_A(τ), essentially measures over what time you keep correlation.

125MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Correlation function and correlation time

If you plot the correlation function for a system with some degree of disorder, it will start at 1 (that's how the function is normalized) and then decay.

There are systems where the correlation time is infinite. In a perfect harmonic oscillator if you look at the velocity versus time, it just oscillates back and forth. Once you know what happens at one time, you can exactly predict what happens at another time. This system has an infinite correlation time.

[Figure: c_A(τ) starting at 1 and decaying with τ]

126MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Average gets polluted by the correlation time

The quality of the average gets diluted by the correlation time. If you have correlation in your sample, the standard deviation of the average is what we had before (the standard deviation for the uncorrelated sample) multiplied by a factor that depends on the correlation time. And the correlation time is the integral of the correlation function.

τ_A is the integrated time over which you have correlation.

The extra cost of keeping track of these quantities and computing τ_A is insignificant.
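The relation presumably intended here, consistent with the factor (2×7.5 + 1) used a few slides below, is:

\sigma_{\mathrm{ave}}^2 \approx \frac{\sigma_{\mathrm{dist}}^2}{m} \left( 1 + 2 \tau_A \right)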

127MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Correlation time in Ising models

If your energy calculation is fast (e.g. as in the lattice magnetic spin model), 95% of the time can be spent in the random number generator. Random number generation is relatively expensive because of the arithmetic required to produce each number.

People often use shortcuts to reduce the number of random numbers you have to draw. Let's say you do a 3D spin model. First of all, you have to randomly pick the spin you are trying to perturb; that takes 3 random numbers if you have to randomize x, y, z. Then you need a fourth random number to compare with e^(−βΔE). That's 4 random numbers per attempted move.

We usually don't randomly pick spins in the lattice. We run through the lattice sequentially, because it saves the 3 random number operations. Most of the time this works out fine, except in the example discussed below.
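A sketch of such a sequential sweep for a 2D Ising lattice (2D instead of 3D just for brevity; with E = −J Σ σ_i σ_j over nearest neighbors, flipping one spin costs ΔE = 2J σ_xy Σ_nn σ_nn, so only the single acceptance random number is needed per site):

import numpy as np

def sweep_sequential(spins, beta, J, rng):
    """One pass through an L x L lattice of +/-1 spins, visiting sites in order."""
    L = spins.shape[0]
    for x in range(L):
        for y in range(L):
            nn = (spins[(x + 1) % L, y] + spins[(x - 1) % L, y] +
                  spins[x, (y + 1) % L] + spins[x, (y - 1) % L])
            dE = 2.0 * J * spins[x, y] * nn          # energy cost of flipping this spin
            if dE < 0 or rng.random() < np.exp(-beta * dE):
                spins[x, y] *= -1                    # flip accepted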

Let us see a few correlation times in the lattice spin model and in the H on Pd system that we discussed earlier.

128MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Correlation time in Ising models

Here's the correlation time in the lattice model. This is the H-Pd system at different temperatures. The horizontal scale is the number of Monte Carlo passes. We show the correlation time in units of Monte Carlo time (1 unit of time here is 1 pass through the whole lattice; so, if you have a 10 by 10 lattice, that's 100 perturbations).

This is a system that undergoes a 2nd order transition near T equal to 8 or 9. Below that you are in an ordered state; above it, in a disordered state.

129MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Correlation time in Ising models

Let's look at the low temperature (T=2), where you are in an ordered state. The correlation function drops very quickly to 0, and then you have these spurious oscillations.

You don’t expect large correlation times at either low or high temperature. Why is that?

At low temperature you barely have any fluctuations. At high temperatures everything is so random that it's uncorrelated. At intermediate temperatures you have the longest correlations, in particular near phase transitions.

130MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Correlation time in Ising models

At T=6, the correlation time is a little longer. Remember the correlation time is the integral under this curve. Then you've got similar spurious fluctuations again.

At T=8, you are very close to the transition and you have a fairly long correlation time. The curve is roughly a triangle from 1 to 15, so we have about 7.5 under the integral. This means that the quality of our averaging goes down by a factor (2×7.5 + 1) = 16 here. So the quality of our average is pretty poor near the 2nd order transition.

131MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Correlation time in Ising models

Then at high temperature you again have a short correlation time.

132MAE 715 – Atomistic Modeling of Materials N. Zabaras (4/21/2009)

Random number generation has a period to it!

The spurious fluctuations you see here are too big to be noise: they don't go away when you take a longer sample. They are shown to emphasize an important aspect of the random number generator.

Your random number generator has a period to it. Also, if you sequentially move through the lattice, you're visiting every site with a fixed time periodicity. At some point, that periodicity starts locking in with the periodicity of the random number generator: certain spins are no longer flipped randomly, because their updates always fall at the same point in the random number sequence.

You can solve this problem by randomly jumping ahead in the spins you touch in the lattice, or by once in a while randomly jumping ahead in the random number generator.
