
Runge-Kutta Method

Adrian Down

April 25, 2006

1 Second-order Runge-Kutta method

1.1 Review

Last time, we began to develop methods to obtain approximate solutions to the differential equation ẋ = f(t, x(t)) with error of O(h²), where h is the spacing of the mesh used to calculate the approximation. To construct these methods, our goal was to minimize the local truncation error. We saw that, in general, the global truncation error should be one order lower in h than the local truncation error.

Last time, we introduced an approximation that generalized the Modified Euler method. The general form of the expression was,

y(t+h) − y(t) = ω₁hF(t) + ω₂hf(t + αh, y(t) + βhF(t))

where F(t) ≡ f(t, y(t)). Our proposal was to choose ω₁, ω₂, α and β such that the local truncation error of the approximation is O(h³), from which we expect the desired global truncation error to be O(h²).
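This family of schemes is easy to state as code. A minimal sketch (the function and parameter names are illustrative, not from the notes):

```python
def two_stage_step(f, t, y, h, w1, w2, alpha, beta):
    # One step of y(t+h) - y(t) = w1*h*F(t) + w2*h*f(t + alpha*h, y(t) + beta*h*F(t)),
    # where F(t) = f(t, y(t)).
    F = f(t, y)
    return y + w1 * h * F + w2 * h * f(t + alpha * h, y + beta * h * F)
```

Any choice of the four parameters gives a one-step method; the derivation below picks them so that the local truncation error is O(h³).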

1.2 Computation

We began to evaluate this condition last time by Taylor expanding the function f up to second order in h. Since f is a function of two variables, we used the Taylor formula for multiple dimensions. We obtained,

x(t+h) − x(t) = h(ω₁ + ω₂)F(t) + h²ω₂ (αfₜ + βFfₓ) + O(h³)

where subscripts indicate partial differentiation. The partial derivatives are to be evaluated at (t, x(t)).


The Taylor series of the left side is easily computed,

x(t+h) − x(t) = hẋ(t) + (h²/2) ẍ(t) + O(h³)

Since these two expressions are equal, we match terms in powers of h and cancel coefficients, so that only terms of O(h³) remain. Matching terms in h yields two equations,

(ω₁ + ω₂)F(t) = ẋ(t)

ω₂ (αfₜ + βFfₓ) = (1/2) ẍ(t)

The first equation can be simplified using the definition of f,

F(t) = f(t, x(t)) = ẋ

⇒ ω₁ + ω₂ = 1

The second equation can be solved by taking the time derivative of the differential equation. This yields partial derivatives, which can be compared with those on the left of the equation,

ẋ = f(t, x(t)) ⇒ ẍ = fₜ + fₓẋ = fₜ + Ffₓ

⇒ ω₂αfₜ + ω₂βFfₓ = (1/2) fₜ + (1/2) Ffₓ

This equation must hold for all t and x, so it must be that the coefficients of the partial derivatives are separately equal,

ω₂α = ω₂β = 1/2
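The time-derivative identity ẍ = fₜ + Ffₓ used above can be spot-checked numerically. A sketch with an illustrative choice of f (not from the notes):

```python
import math

# Spot-check ẍ = f_t + F*f_x for f(t, x) = t*x, whose exact solution
# through x(0) = 1 is x(t) = exp(t^2 / 2).
f = lambda t, x: t * x
x = lambda t: math.exp(t * t / 2)

t = 0.7
F = f(t, x(t))
eps = 1e-6
# Central-difference approximations of the partial derivatives at (t, x(t)).
f_t = (f(t + eps, x(t)) - f(t - eps, x(t))) / (2 * eps)
f_x = (f(t, x(t) + eps) - f(t, x(t) - eps)) / (2 * eps)
# Second time derivative of the exact solution: ẍ(t) = (1 + t^2) exp(t^2 / 2).
xdd = (1 + t * t) * x(t)
print(abs(f_t + F * f_x - xdd) < 1e-6)   # True
```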

1.3 Solutions

We now have three equations and four unknowns. This system is underdetermined, meaning that we should expect a one-parameter family of solutions.

One possible choice of parameters is,

ω₁ = 1/2, ω₂ = 1/2, α = 1, β = 1

This choice corresponds to the trapezoid rule approximation scheme. Another possible choice of parameters is,

ω₁ = 0, ω₂ = 1, α = 1/2, β = 1/2

This choice corresponds to the Modified Euler method developed earlier.
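Both parameter choices can be exercised with the same driver. A minimal sketch (names are illustrative, and the test problem ẏ = y is an assumption, not from the notes):

```python
import math

def two_stage_step(f, t, y, h, w1, w2, alpha, beta):
    # One step of the general scheme, with w1, w2, alpha, beta as in the text.
    F = f(t, y)
    return y + w1 * h * F + w2 * h * f(t + alpha * h, y + beta * h * F)

def solve(f, t0, y0, t1, n, params):
    # March from t0 to t1 in n equal steps.
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = two_stage_step(f, t, y, h, *params)
        t += h
    return y

f = lambda t, y: y                   # exact solution: y(t) = e^t
trapezoid = (0.5, 0.5, 1.0, 1.0)     # first parameter choice above
mod_euler = (0.0, 1.0, 0.5, 0.5)     # second parameter choice above
for params in (trapezoid, mod_euler):
    e1 = abs(solve(f, 0.0, 1.0, 1.0, 50, params) - math.e)
    e2 = abs(solve(f, 0.0, 1.0, 1.0, 100, params) - math.e)
    # Halving h should cut the error by roughly 4 for an O(h^2) method.
    print(e1 / e2)
```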

2 Fourth-order Runge-Kutta method

2.1 Motivation

The fourth-order Runge-Kutta method is commonly used in science and engineering applications. However, the computations required are less than optimal, as we will see.

2.2 Formulation

The approximation scheme for the fourth-order Runge-Kutta method is,

y(t+h) − y(t) = (h/6) {F₁ + 2F₂ + 2F₃ + F₄}

where the Fᵢ's are recursively defined below.

Note. The Fᵢ's are constructed such that Fᵢ → F(t) ≡ f(t, y(t)) as h → 0. The factor of 1/6 normalizes the sum, since the weights of the four Fᵢ's total 1 + 2 + 2 + 1 = 6.

The definition of the Fᵢ's can be understood intuitively as attempts to evaluate the function f at the midpoint and endpoints of the mesh interval over which the function is being approximated. F₁ is the value of the function f at the left endpoint of the interval,

F₁ = f(t, y(t))

In the spirit of the Modified Euler method, F₂ is analogous to an attempt at evaluating the function f at the midpoint of the approximation interval,

F₂ = f(t + h/2, y(t) + (h/2)F₁)


F₃ is analogous to a second, more accurate attempt to evaluate f at the midpoint of the interval,

F₃ = f(t + h/2, y(t) + (h/2)F₂)

F₄ is an attempt to evaluate f at the right endpoint of the interval,

F₄ = f(t + h, y(t) + hF₃)

Note. The fourth-order Runge-Kutta method can also be used in the case that f is a vector function, provided that f is not a function of time. In this case f is a function only of position; this is called the autonomous case.
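Putting the four stages together gives a compact routine. A sketch of one step in the scalar case (the function name is illustrative):

```python
def rk4_step(f, t, y, h):
    # The four stage values defined above.
    F1 = f(t, y)
    F2 = f(t + h / 2, y + (h / 2) * F1)
    F3 = f(t + h / 2, y + (h / 2) * F2)
    F4 = f(t + h, y + h * F3)
    # Weighted average; the weights 1, 2, 2, 1 sum to 6.
    return y + (h / 6) * (F1 + 2 * F2 + 2 * F3 + F4)
```

Stepping ẏ = y from y(0) = 1 with h = 0.1 reproduces eᵗ to roughly six digits after ten steps.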

3 Multi-step methods

3.1 Motivation

The fourth-order Runge-Kutta method is not computationally optimal because the function f must be evaluated four times to calculate the approximation at each mesh point. Computations could be prohibitive if the function f is complicated.

Multi-step methods attempt to avoid this problem by creating an approximation based on the values of the approximation at previous points. Although such methods reduce computations, some may not be stable in all situations. When using multi-step methods, it is necessary to verify the convergence of the method.

3.2 Two-step method

3.2.1 Setup

We find an explicit two-step method for obtaining an approximation at the mesh point t + h based only on the values of the approximation and the function F at the previous two mesh points t and t − h. Forming a general linear combination of these points,

y(t+h) + a₂y(t) + a₁y(t−h) = h {A₂F(t) + A₁F(t−h)}

where F(t) = f(t, y(t)). Our strategy is to choose the coefficients aᵢ and Aᵢ to make the approximation consistent, meaning that the error is as small as is reasonable.


3.2.2 Taylor expand the local truncation error

As always, we substitute the exact solution x(t) into the given differential equation to determine the local truncation error. Subtracting the right-hand side of the above equation from the left-hand side,

LHS − RHS = x(t+h) + a₂x(t) + a₁x(t−h) − h {A₂ẋ(t) + A₁ẋ(t−h)}

Our goal is to cancel as many orders of h as possible from the right side of the above expression.

Performing the Taylor expansions of the terms in x(t ± h),

x(t+h) + a₂x(t) + a₁x(t−h) = {1 + a₂ + a₁} x(t) + h {1 − a₁} ẋ(t) + (h²/2) {1 + a₁} ẍ(t) + (h³/3!) {1 − a₁} x⃛(t) + O(h⁴)

Taylor expanding the terms in ẋ,

−h {A₂ẋ(t) + A₁ẋ(t−h)} = −h(A₁ + A₂)ẋ(t) + h²A₁ẍ(t) − (h³/2) A₁x⃛(t) + O(h⁴)

3.2.3 Match coefficients of h

As before, we match coefficients of terms in powers of h. Since there are four relevant powers of h, we obtain four equations, which determine the four unknowns.

Matching powers of h,

h⁰ : 1 + a₁ + a₂ = 0

h¹ : 1 − a₁ = A₁ + A₂

h² : (1/2)(1 + a₁) = −A₁

h³ : (1/3!)(1 − a₁) = A₁/2

The solution to this system of linear equations is,

a₁ = −5, a₂ = 4, A₁ = 2, A₂ = 4

With this choice of coefficients, we can create a two-step approximation that has local truncation error of O(h⁴), and so we expect the global truncation error to be O(h³). However, we will see that because we have attempted to cancel the third-order terms, this method is not stable.
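The instability can be seen directly. The driver below is a sketch (the test problem ẏ = −y and the exact seeding of the second value are illustrative choices): it implements y(t+h) = −4y(t) + 5y(t−h) + h{4F(t) + 2F(t−h)} and watches the error explode even for a small step size.

```python
import math

def two_step(f, t0, y0, y1, h, n):
    # y_{k+1} = -4*y_k + 5*y_{k-1} + h*(4*f_k + 2*f_{k-1}),
    # started from two initial values y0, y1 at times t0, t0 + h.
    ys = [y0, y1]
    for k in range(1, n):
        t = t0 + k * h
        ys.append(-4 * ys[k] + 5 * ys[k - 1]
                  + h * (4 * f(t, ys[k]) + 2 * f(t - h, ys[k - 1])))
    return ys

f = lambda t, y: -y                  # exact solution: y(t) = e^{-t}
h = 0.01
# Seed the second value with the exact solution to isolate the instability.
ys = two_step(f, 0.0, 1.0, math.exp(-h), h, 100)
# The homogeneous recurrence has characteristic polynomial r^2 + 4r - 5,
# whose parasitic root r = -5 amplifies every small error by about 5 per
# step, so the computed answer at t = 1 is garbage.
print(abs(ys[-1] - math.exp(-1.0)) > 1.0)   # True
```

The first few steps are very accurate, exactly as the O(h⁴) local truncation error predicts; the blow-up comes from the parasitic root, not from the local error.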
