Ordinary Differential Equations

Gabriel Nagy
Mathematics Department, Michigan State University, East Lansing, MI, 48824.
[email protected]
August 27, 2019

CONTENTS

2.1.2. Solutions to the Initial Value Problem 80
2.1.3. Properties of Homogeneous Equations 81
2.1.4. The Wronskian Function 85
2.1.5. Abel's Theorem 86
2.1.6. Exercises 89
2.2. Reduction of Order Methods 90
2.2.1. Special Second Order Equations 90
2.2.2. Conservation of the Energy 93
2.2.3. The Reduction of Order Method 98
2.2.4. Exercises 101
2.3. Homogeneous Constant Coefficients Equations 102
2.3.1. The Roots of the Characteristic Polynomial 102
2.3.2. Real Solutions for Complex Roots 106
2.3.3. Constructive Proof of Theorem 2.3.2 108
2.3.4. Exercises 111
2.4. Euler Equidimensional Equation 112
2.4.1. The Roots of the Indicial Polynomial 112
2.4.2. Real Solutions for Complex Roots 115
2.4.3. Transformation to Constant Coefficients 117
2.4.4. Exercises 119
2.5. Nonhomogeneous Equations 120
2.5.1. The General Solution Formula 120
2.5.2. The Undetermined Coefficients Method 121
2.5.3. The Variation of Parameters Method 125
2.5.4. Exercises 130
2.6. Applications 131
2.6.1. Review of Constant Coefficient Equations 131
2.6.2. Undamped Mechanical Oscillations 132
2.6.3. Damped Mechanical Oscillations 134
2.6.4. Electrical Oscillations 136
2.6.5. Exercises 139

Chapter 3. Power Series Solutions 141
3.1. Solutions Near Regular Points 143
3.1.1. Regular Points 143
3.1.2. The Power Series Method 144
3.1.3. The Legendre Equation 151
3.1.4. Exercises 154
3.2. Solutions Near Regular Singular Points 155
3.2.1. Regular Singular Points 155
3.2.2. The Frobenius Method 158
3.2.3. The Bessel Equation 162
3.2.4. Exercises 167
Notes on Chapter 3 168

Chapter 4. The Laplace Transform Method 173
4.1. Introduction to the Laplace Transform 175
4.1.1. Overview of the Method 175
4.1.2. The Laplace Transform 176
4.1.3. Main Properties 180
4.1.4. Solving Differential Equations 183
4.1.5. Exercises 185
4.2. The Initial Value Problem 186
4.2.1. Solving Differential Equations 186
4.2.2. One-to-One Property 187
4.2.3. Partial Fractions 189
4.2.4. Higher Order IVP 194
4.2.5. Exercises 196
4.3. Discontinuous Sources 197
4.3.1. Step Functions 197
4.3.2. The Laplace Transform of Steps 198
4.3.3. Translation Identities 199
4.3.4. Solving Differential Equations 203
4.3.5. Exercises 208
4.4. Generalized Sources 209
4.4.1. Sequence of Functions and the Dirac Delta 209
4.4.2. Computations with the Dirac Delta 211
4.4.3. Applications of the Dirac Delta 213
4.4.4. The Impulse Response Function 214
4.4.5. Comments on Generalized Sources 217
4.4.6. Exercises 220
4.5. Convolutions and Solutions 221
4.5.1. Definition and Properties 221
4.5.2. The Laplace Transform 223
4.5.3. Solution Decomposition 225
4.5.4. Exercises 229

Chapter 5. Systems of Linear Differential Equations 231
5.1. General Properties 232
5.1.1. First Order Linear Systems 232
5.1.2. Existence of Solutions 234
5.1.3. Order Transformations 235
5.1.4. Homogeneous Systems 238
5.1.5. The Wronskian and Abel's Theorem 242
5.1.6. Exercises 246
5.2. Solution Formulas 247
5.2.1. Homogeneous Systems 247
5.2.2. Homogeneous Diagonalizable Systems 249
5.2.3. Nonhomogeneous Systems 256
5.2.4. Exercises 259
5.3. Two-Dimensional Homogeneous Systems 260
5.3.1. Diagonalizable Systems 260
5.3.2. Non-Diagonalizable Systems 263
5.3.3. Exercises 266
5.4. Two-Dimensional Phase Portraits 267
5.4.1. Real Distinct Eigenvalues 268
5.4.2. Complex Eigenvalues 271
5.4.3. Repeated Eigenvalues 273
5.4.4. Exercises 275

Chapter 6. Autonomous Systems and Stability 277
6.1. Flows on the Line 279
6.1.1. Autonomous Equations 279
6.1.2. Geometrical Characterization of Stability 281
6.1.3. Critical Points and Linearization 283
6.1.4. Population Growth Models 286
6.1.5. Exercises 290
6.2. Flows on the Plane 291
6.2.1. Two-Dimensional Nonlinear Systems 291
6.2.2. Review: The Stability of Linear Systems 292
6.2.3. Critical Points and Linearization 294
6.2.4. The Stability of Nonlinear Systems 297
6.2.5. Competing Species 299
6.2.6. Exercises 302

Chapter 7. Boundary Value Problems 303
7.1. Eigenfunction Problems 304
7.1.1. Two-Point Boundary Value Problems 304
7.1.2. Comparison: IVP and BVP 305
7.1.3. Eigenfunction Problems 308
7.1.4. Exercises 312
7.2. Overview of Fourier Series 313
7.2.1. Fourier Expansion of Vectors 313
7.2.2. Fourier Expansion of Functions 315
7.2.3. Even or Odd Functions 320
7.2.4. Sine and Cosine Series 321
7.2.5. Applications 324
7.2.6. Exercises 326
7.3. The Heat Equation 327
7.3.1. The Heat Equation (in One-Space Dim) 327
7.3.2. The IBVP: Dirichlet Conditions 329
7.3.3. The IBVP: Neumann Conditions 332
7.3.4. Exercises 339

Chapter 8. Review of Linear Algebra 341
8.1. Linear Algebraic Systems 342
8.1.1. Systems of Linear Equations 342
8.1.2. Gauss Elimination Operations 346
8.1.3. Linear Dependence 349
8.1.4. Exercises 350
8.2. Matrix Algebra 351
8.2.1. A Matrix is a Function 351
8.2.2. Matrix Operations 352
8.2.3. The Inverse Matrix 356
8.2.4. Computing the Inverse Matrix 358
8.2.5. Overview of Determinants 359
8.2.6. Exercises 362
8.3. Eigenvalues and Eigenvectors 363
8.3.1. Eigenvalues and Eigenvectors 363
8.3.2. Diagonalizable Matrices 370

CHAPTER 1

First Order Equations

We start our study of differential equations in the same way the pioneers in this field did. We show particular techniques to solve particular types of first order differential equations. The techniques were developed in the eighteenth and nineteenth centuries, and the equations include linear equations, separable equations, Euler homogeneous equations, and exact equations. This way of studying differential equations soon reached a dead end. Most differential equations cannot be solved by any of the techniques presented in the first sections of this chapter. People then tried something different. Instead of solving the equations, they tried to show whether an equation has solutions or not, and what properties such solutions may have.
This is less information than obtaining the solution, but it is still valuable information. The results of these efforts are shown in the last sections of this chapter. We present theorems describing the existence and uniqueness of solutions to a wide class of first order differential equations.

[Figure: slope field and solution curves of the equation $y' = 2\cos(t)\cos(y)$.]

1.1. Linear Constant Coefficient Equations

1.1.1. Overview of Differential Equations. A differential equation is an equation where the unknown is a function, and both the function and its derivatives may appear in the equation. Differential equations are essential for a mathematical description of nature; they lie at the core of many physical theories. For example, let us just mention Newton's and Lagrange's equations for classical mechanics, Maxwell's equations for classical electromagnetism, Schrödinger's equation for quantum mechanics, and Einstein's equation for the general theory of gravitation. We now show what differential equations look like.

Example 1.1.1.
(a) Newton's law: Mass times acceleration equals force, $ma = f$, where $m$ is the particle mass, $a = d^2x/dt^2$ is the particle acceleration, and $f$ is the force acting on the particle. Hence Newton's law is the differential equation
$$ m\,\frac{d^2x}{dt^2}(t) = f\Big(t,\, x(t),\, \frac{dx}{dt}(t)\Big), $$
where the unknown is $x(t)$, the position of the particle in space at the time $t$. As we see above, the force may depend on time, on the particle position in space, and on the particle velocity. Remark: This is a second order Ordinary Differential Equation (ODE).
(b) Radioactive Decay: The amount $u$ of a radioactive material changes in time as follows,
$$ \frac{du}{dt}(t) = -k\,u(t), \qquad k > 0, $$
where $k$ is a positive constant representing radioactive properties of the material. Remark: This is a first order ODE.
(c) The Heat Equation: The temperature $T$ in a solid material changes in time and in three space dimensions, labeled by $\mathbf{x} = (x, y, z)$, according to the equation
$$ \frac{\partial T}{\partial t}(t,\mathbf{x}) = k\left(\frac{\partial^2 T}{\partial x^2}(t,\mathbf{x}) + \frac{\partial^2 T}{\partial y^2}(t,\mathbf{x}) + \frac{\partial^2 T}{\partial z^2}(t,\mathbf{x})\right), \qquad k > 0, $$
where $k$ is a positive constant representing thermal properties of the material. Remark: This is a first order in time and second order in space PDE.
(d) The Wave Equation: A wave perturbation $u$ propagating in time $t$ and in three space dimensions, labeled by $\mathbf{x} = (x, y, z)$, through a medium with wave speed $v > 0$ satisfies
$$ \frac{\partial^2 u}{\partial t^2}(t,\mathbf{x}) = v^2\left(\frac{\partial^2 u}{\partial x^2}(t,\mathbf{x}) + \frac{\partial^2 u}{\partial y^2}(t,\mathbf{x}) + \frac{\partial^2 u}{\partial z^2}(t,\mathbf{x})\right). $$
Remark: This is a second order in time and space Partial Differential Equation (PDE).

The equations in examples (a) and (b) are called ordinary differential equations (ODE): the unknown function depends on a single independent variable, $t$. The equations in examples (c) and (d) are called partial differential equations (PDE): the unknown function depends on two or more independent variables, $t$, $x$, $y$, and $z$, and its partial derivatives appear in the equations. The order of a differential equation is the highest derivative order that appears in the equation. Newton's equation in example (a) is second order, the decay equation in example (b) is first order, and the wave equation in example (d) is second order in time and space.

(c) It makes sense that we have a free constant $c$ in the solution of the differential equation. The differential equation contains a first derivative of the unknown function $y$, so finding a solution of the differential equation requires one integration. Every indefinite integration introduces an integration constant. This is the origin of the constant $c$ above.

Proof of Theorem 1.1.2: First consider the case $b = 0$, so $y' = a\,y$, with $a \in \mathbb{R}$.
Then,
$$ y' = a\,y \;\Rightarrow\; \frac{y'}{y} = a \;\Rightarrow\; \big(\ln(|y|)\big)' = a \;\Rightarrow\; \ln(|y|) = at + c_0, $$
where $c_0 \in \mathbb{R}$ is an arbitrary integration constant, and we used the Fundamental Theorem of Calculus in the last step, $\int \big(\ln(|y|)\big)'\, dt = \ln(|y|)$. Compute the exponential on both sides,
$$ y(t) = \pm e^{at + c_0} = \pm e^{c_0}\, e^{at}, \quad\text{denote } c = \pm e^{c_0} \;\Rightarrow\; y(t) = c\, e^{at}, \quad c \in \mathbb{R}. $$
This is the solution of the differential equation in the case that $b = 0$. The case $b \neq 0$ can be converted into the case above. Indeed,
$$ y' = a\,y + b \;\Rightarrow\; y' = a\Big(y + \frac{b}{a}\Big) \;\Rightarrow\; \Big(y + \frac{b}{a}\Big)' = a\Big(y + \frac{b}{a}\Big), $$
since $(b/a)' = 0$. Denoting $\tilde y = y + (b/a)$, the equation above is $\tilde y' = a\,\tilde y$. We know all the solutions to that equation,
$$ \tilde y(t) = c\, e^{at}, \; c \in \mathbb{R} \;\Rightarrow\; y(t) + \frac{b}{a} = c\, e^{at} \;\Rightarrow\; y(t) = c\, e^{at} - \frac{b}{a}. $$
This establishes the Theorem. □

Remark: We solved the differential equation above, $y' = a\,y$, by transforming it into a total derivative. Let us highlight this fact in the calculation we did,
$$ \big(\ln(|y|)\big)' = a \;\Rightarrow\; \big(\ln(|y|) - at\big)' = 0 \;\Leftrightarrow\; \psi(t, y(t))' = 0, \quad\text{with } \psi = \ln(|y(t)|) - at. $$
The function $\psi$ is called a potential function. This is how the original differential equation gets transformed into a total derivative,
$$ y' = a\,y \;\longrightarrow\; \psi' = 0. $$
Total derivatives are simple to integrate,
$$ \psi' = 0 \;\Rightarrow\; \psi = c_0, \quad c_0 \in \mathbb{R}. $$
So the solution is
$$ \ln(|y|) - at = c_0 \;\Rightarrow\; \ln(|y|) = c_0 + at \;\Rightarrow\; y(t) = \pm e^{c_0 + at} = \pm e^{c_0}\, e^{at}, $$
and denoting $c = \pm e^{c_0}$ we re-obtain the formula $y(t) = c\, e^{at}$. In the case $b \neq 0$ a potential function is $\psi(t, y(t)) = \ln\big(\big|y(t) + \frac{b}{a}\big|\big) - at$.

Example 1.1.5. Find all solutions to the constant coefficient equation $y' = 2y + 3$.

Solution: Let us pull a common factor $2$ on the right-hand side of the equation,
$$ y' = 2\Big(y + \frac{3}{2}\Big) \;\Rightarrow\; \Big(y + \frac{3}{2}\Big)' = 2\Big(y + \frac{3}{2}\Big). $$
Denoting $\tilde y = y + (3/2)$ we get
$$ \tilde y' = 2\,\tilde y \;\Rightarrow\; \frac{\tilde y'}{\tilde y} = 2 \;\Rightarrow\; \ln(|\tilde y|)' = 2 \;\Rightarrow\; \ln(|\tilde y|) = 2t + c_0. $$
We now compute exponentials on both sides, to get
$$ \tilde y(t) = \pm e^{2t + c_0} = \pm e^{2t}\, e^{c_0}, $$
and denoting $c = \pm e^{c_0}$, then $\tilde y(t) = c\, e^{2t}$, $c \in \mathbb{R}$. Since $\tilde y = y + \frac{3}{2}$, we get
$$ y(t) = c\, e^{2t} - \frac{3}{2}, \qquad\text{where } c \in \mathbb{R}. $$
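The solution family found in Example 1.1.5 can be spot-checked numerically. The snippet below is not part of the original text; it is a small sketch that compares a centered finite-difference approximation of $y'$ with the right-hand side $2y + 3$ for several values of the free constant $c$.

```python
# Numerical spot-check of Example 1.1.5 (not from the original text):
# every member of the family y(t) = c e^{2t} - 3/2 should satisfy
# y' = 2 y + 3, for any value of the constant c.
import math

def y(t, c):
    """Candidate solution y(t) = c e^{2t} - 3/2."""
    return c * math.exp(2 * t) - 1.5

h = 1e-6  # step for the centered finite difference
for c in (-2.0, 0.0, 0.5, 3.0):
    for t in (-1.0, 0.0, 0.7):
        dy = (y(t + h, c) - y(t - h, c)) / (2 * h)   # approximates y'(t)
        rhs = 2 * y(t, c) + 3
        assert abs(dy - rhs) < 1e-5, (c, t, dy, rhs)

print("y(t) = c e^{2t} - 3/2 satisfies y' = 2y + 3 at all sampled points")
```

Such a check does not replace the proof above, but it is a quick way to catch algebra mistakes when solving constant coefficient equations by hand.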
Remark: We converted the original differential equation $y' = 2y + 3$ into a total derivative of a potential function, $\psi' = 0$. The potential function can be computed from the step
$$ \ln(|\tilde y|)' = 2 \;\Rightarrow\; \big(\ln(|\tilde y|) - 2t\big)' = 0, $$
so a potential function is $\psi(t, y(t)) = \ln\big(\big|y(t) + \frac{3}{2}\big|\big) - 2t$. Since the equation is now $\psi' = 0$, all solutions are $\psi = c_0$, with $c_0 \in \mathbb{R}$. That is,
$$ \ln\Big(\Big|y(t) + \frac{3}{2}\Big|\Big) - 2t = c_0 \;\Rightarrow\; \ln\Big(\Big|y(t) + \frac{3}{2}\Big|\Big) = 2t + c_0 \;\Rightarrow\; y(t) + \frac{3}{2} = \pm e^{2t + c_0}. $$
If we denote $c = \pm e^{c_0}$, then we get the solution we found above, $y(t) = c\, e^{2t} - \frac{3}{2}$.

1.1.4. The Integrating Factor Method. The argument we used to prove Theorem 1.1.2 cannot be generalized in a simple way to all linear equations with variable coefficients. However, there is a way to solve linear equations with both constant and variable coefficients: the integrating factor method. We now give a second proof of Theorem 1.1.2 using this method.

Second Proof of Theorem 1.1.2: Write the equation with $y$ on one side only, $y' - a\,y = b$, and then multiply the differential equation by a function $\mu$, called an integrating factor,
$$ \mu\, y' - a\,\mu\, y = \mu\, b. \tag{1.1.5} $$
Now comes the critical step. We choose a positive function $\mu$ such that
$$ -a\,\mu = \mu'. \tag{1.1.6} $$
For any function $\mu$ solution of Eq. (1.1.6), the differential equation in (1.1.5) has the form
$$ \mu\, y' + \mu'\, y = \mu\, b. $$
But the left-hand side is a total derivative of a product of two functions,
$$ \big(\mu\, y\big)' = \mu\, b. \tag{1.1.7} $$
This is the property we want in an integrating factor, $\mu$. We want to find a function $\mu$ such that the left-hand side of the differential equation for $y$ can be written as a total derivative, just as in Eq. (1.1.7). We only need to find one such function $\mu$. So we go back to Eq. (1.1.6), the differential equation for $\mu$, which is simple to solve,
$$ \mu' = -a\,\mu \;\Rightarrow\; \frac{\mu'}{\mu} = -a \;\Rightarrow\; \big(\ln(|\mu|)\big)' = -a \;\Rightarrow\; \ln(|\mu|) = -at + c_0. $$
Computing the exponential of both sides in the equation above we get
$$ \mu = \pm e^{c_0 - at} = \pm e^{c_0}\, e^{-at} \;\Rightarrow\; \mu = c_1\, e^{-at}, \qquad c_1 = \pm e^{c_0}. $$
Since $c_1$ is a constant which will cancel out from Eq.
(1.1.5) anyway, we choose the integration constant $c_0 = 0$, hence $c_1 = 1$. The integrating function is then $\mu(t) = e^{-at}$. This function is an integrating factor, because if we start again at Eq. (1.1.5), we get
$$ e^{-at}\, y' - a\, e^{-at}\, y = b\, e^{-at} \;\Rightarrow\; e^{-at}\, y' + \big(e^{-at}\big)'\, y = b\, e^{-at}, $$
where we used the main property of the integrating factor, $-a\, e^{-at} = \big(e^{-at}\big)'$. Now the product rule for derivatives implies that the left-hand side above is a total derivative,
$$ \big(e^{-at}\, y\big)' = b\, e^{-at}. $$
The right-hand side above can be rewritten as a derivative, $b\, e^{-at} = \big(-\frac{b}{a}\, e^{-at}\big)'$, hence
$$ \Big(e^{-at}\, y + \frac{b}{a}\, e^{-at}\Big)' = 0 \quad\Leftrightarrow\quad \Big[\Big(y + \frac{b}{a}\Big)\, e^{-at}\Big]' = 0. $$
We have succeeded in writing the whole differential equation as a total derivative. The differential equation is the total derivative of a potential function, which in this case is
$$ \psi(t, y) = \Big(y + \frac{b}{a}\Big)\, e^{-at}. $$
Notice that this potential function is the exponential of the potential function found in the first proof of this Theorem. The differential equation for $y$ is a total derivative,
$$ \frac{d\psi}{dt}(t, y(t)) = 0, $$
so it is simple to integrate,
$$ \psi(t, y(t)) = c \;\Rightarrow\; \Big(y(t) + \frac{b}{a}\Big)\, e^{-at} = c \;\Rightarrow\; y(t) = c\, e^{at} - \frac{b}{a}. $$
This establishes the Theorem. □

We solve the example below following the second proof of Theorem 1.1.2.

Example 1.1.6. Find all solutions to the constant coefficient equation
$$ y' = 2y + 3. \tag{1.1.8} $$
Solution: Write the equation in (1.1.8) as follows, $y' - 2y = 3$. Multiply this equation by the integrating factor $\mu(t) = e^{-2t}$,
$$ e^{-2t}\, y' - 2\, e^{-2t}\, y = 3\, e^{-2t} \quad\Leftrightarrow\quad e^{-2t}\, y' + \big(e^{-2t}\big)'\, y = 3\, e^{-2t}. $$

We now solve the same problem above, but now using the formulas in Theorem 1.1.2.

Example 1.1.7. Find all solutions to the constant coefficient equation
$$ y' = 2y + 3. \tag{1.1.9} $$
Solution: The equation above is the case $a = 2$ and $b = 3$ in Eq. (1.1.3). Therefore, using these values in the expression for the solution given in Eq. (1.1.4), we obtain
$$ y(t) = c\, e^{2t} - \frac{3}{2}. $$
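The key claim of the second proof is that the potential function $\psi(t, y) = (y + b/a)\, e^{-at}$ is constant along every solution curve. That can be observed numerically; the following sketch is not part of the original text and uses $a = 2$, $b = 3$ as in Example 1.1.7, with an arbitrarily chosen constant $c$.

```python
# Checking numerically (not from the original text) that the potential
# function from the second proof, psi(t, y) = (y + b/a) e^{-a t}, stays
# constant along solutions of y' = a y + b. Here a = 2, b = 3.
import math

a, b, c = 2.0, 3.0, 0.8          # c is the free constant in the solution

def y(t):
    """The general solution y(t) = c e^{at} - b/a for the chosen c."""
    return c * math.exp(a * t) - b / a

def psi(t, yt):
    """Potential function from the second proof of Theorem 1.1.2."""
    return (yt + b / a) * math.exp(-a * t)

# On the solution curve, psi(t, y(t)) should equal the constant c
# at every time t, since (y + b/a) e^{-at} = c e^{at} e^{-at} = c.
values = [psi(t, y(t)) for t in (-1.0, 0.0, 0.5, 2.0)]
assert all(abs(v - c) < 1e-12 for v in values)
print("psi(t, y(t)) =", values[0], "= c along the whole solution")
```

The same computation, with `a` and `b` replaced by functions of `t`, previews why the integrating factor method extends to the variable coefficient equations of the next section.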
The initial condition $y(0) = 1$ selects only one solution,
$$ 1 = y(0) = c + \frac{1}{3} \;\Rightarrow\; c = \frac{2}{3}. $$
We get the solution
$$ y(t) = \frac{2}{3}\, e^{-3t} + \frac{1}{3}. $$

Notes. This section corresponds to Boyce-DiPrima [3] Section 2.1, where both constant and variable coefficient equations are studied. Zill and Wright give a more concise exposition in [17] Section 2.3, and a one-page description is given by Simmons in [10] in Section 2.10. The integrating factor method is shown in most of these books, but unlike them, here we emphasize that the integrating factor changes the linear differential equation into a total derivative, which is trivial to integrate. We also show here how to compute the potential functions for the linear differential equations. In § 1.4 we solve (nonlinear) exact equations, and nonexact equations with integrating factors. We solve these equations by transforming them into a total derivative, just as we did in this section with the linear equations.

1.1.6. Exercises.

1.1.1.- Find the differential equation of the form $y' = f(y)$ satisfied by the function
$$ y(t) = 8\, e^{5t} - \frac{2}{5}. $$

1.1.2.- Find constants $a$, $b$, so that $y(t) = (t + 3)\, e^{2t}$ is a solution of the IVP
$$ y' = a\,y + e^{2t}, \qquad y(0) = b. $$

1.1.3.- Find all solutions $y$ of $y' = 3y$.

1.1.4.- Follow the steps below to find all solutions of $y' = -4y + 2$.
(a) Find the integrating factor $\mu$.
(b) Write the equation as a total derivative of a function $\psi$, that is, $y' = -4y + 2 \;\Leftrightarrow\; \psi' = 0$.
(c) Integrate the equation for $\psi$.
(d) Compute $y$ using part (c).

1.1.5.- Find all solutions of $y' = 2y + 5$.

1.1.6.- Find the solution of the IVP $y' = -4y + 2$, $y(0) = 5$.

1.1.7.- Find the solution of the IVP
$$ \frac{dy}{dt}(t) = 3\, y(t) - 2, \qquad y(1) = 1. $$

1.1.8.- Express the differential equation
$$ y' = 6\,y + 1 \tag{1.1.14} $$
as a total derivative of a potential function $\psi(t, y)$, that is, find $\psi$ satisfying $y' = 6\,y + 1 \;\Leftrightarrow\; \psi' = 0$. Integrate the equation for the potential function $\psi$ to find all solutions $y$ of Eq. (1.1.14).
1.1.9.- Find the solution of the IVP $y' = 6\,y + 1$, $y(0) = 1$.

1.1.10.- * Follow the steps below to solve $y' = -3y + 5$, $y(0) = 1$.
(a) Find any integrating factor $\mu$ for the differential equation.
(b) Write the differential equation as a total derivative of a potential function $\psi$.
(c) Use the potential function to find the general solution of the differential equation.
(d) Find the solution of the initial value problem above.

1.2. Linear Variable Coefficient Equations

In this section we obtain a formula for the solutions of variable coefficient linear equations, which generalizes Equation (1.1.4) in Theorem 1.1.2. To get this formula we use the integrating factor method, already used for constant coefficient equations in § 1.1. We also show that the initial value problem for variable coefficient equations has a unique solution, just as happens for constant coefficient equations.

In the last part of this section we turn our attention to a particular nonlinear differential equation, the Bernoulli equation. This nonlinear equation has a particular property: it can be transformed into a linear equation by an appropriate change of the unknown function. Then one solves the linear equation for the changed function using the integrating factor method. The last step is to transform the changed function back into the original function.

1.2.1. Review: Constant Coefficient Equations. Let us recall how we solved the constant coefficient case. We wrote the equation $y' = a\,y + b$ as follows,
$$ y' = a\Big(y + \frac{b}{a}\Big). $$
The critical step was the following: since $b/a$ is constant, then $(b/a)' = 0$, hence
$$ \Big(y + \frac{b}{a}\Big)' = a\Big(y + \frac{b}{a}\Big). $$
At this point the equation was simple to solve,
$$ \frac{\big(y + \frac{b}{a}\big)'}{\big(y + \frac{b}{a}\big)} = a \;\Rightarrow\; \ln\Big(\Big|y + \frac{b}{a}\Big|\Big)' = a \;\Rightarrow\; \ln\Big(\Big|y + \frac{b}{a}\Big|\Big) = c_0 + at. $$
We now compute the exponential on both sides, to get
$$ \Big|y + \frac{b}{a}\Big| = e^{c_0 + at} = e^{c_0}\, e^{at} \;\Rightarrow\; y + \frac{b}{a} = (\pm e^{c_0})\, e^{at}, $$
and calling $c = \pm e^{c_0}$ we got the formula
$$ y(t) = c\, e^{at} - \frac{b}{a}. $$
This idea can be generalized to variable coefficient equations, but only in the case where $b/a$ is constant. For example, consider the case $b = 0$ and $a$ depending on $t$. The equation is $y' = a(t)\, y$, and we can solve it as follows,
$$ \frac{y'}{y} = a(t) \;\Rightarrow\; \ln(|y|)' = a(t) \;\Rightarrow\; \ln(|y(t)|) = A(t) + c_0, $$
where $A = \int a\, dt$ is a primitive or antiderivative of $a$. Therefore,
$$ y(t) = \pm e^{A(t) + c_0} = \pm e^{A(t)}\, e^{c_0}, $$
so we get the solution $y(t) = c\, e^{A(t)}$, where $c = \pm e^{c_0}$.

Example 1.2.1. The solutions of $y' = 2t\, y$ are $y(t) = c\, e^{t^2}$, where $c \in \mathbb{R}$.

However, the case where $b/a$ is not constant is not so simple to solve; we cannot add zero to the equation in the form of $0 = (b/a)'$. We need a new idea. We now show an idea that works with all first order linear equations with variable coefficients: the integrating factor method.

Using that $-3\, t^{-4} = (t^{-3})'$ and $t^2 = \big(\frac{t^3}{3}\big)'$, we get
$$ t^{-3}\, y' + (t^{-3})'\, y = \Big(\frac{t^3}{3}\Big)' \;\Rightarrow\; \big(t^{-3}\, y\big)' = \Big(\frac{t^3}{3}\Big)' \;\Rightarrow\; \Big(t^{-3}\, y - \frac{t^3}{3}\Big)' = 0. $$
This last equation is a total derivative of a potential function $\psi(t, y) = t^{-3}\, y - \frac{t^3}{3}$. Since the equation is a total derivative, this confirms that we got a correct integrating factor. Now we need to integrate the total derivative, which is simple to do,
$$ t^{-3}\, y - \frac{t^3}{3} = c \;\Rightarrow\; t^{-3}\, y = c + \frac{t^3}{3} \;\Rightarrow\; y(t) = c\, t^3 + \frac{t^6}{3}, $$
where $c$ is an arbitrary constant.

Example 1.2.4. Find all solutions of $t\, y' = -2y + 4t^2$, with $t > 0$.

Solution: Rewrite the equation as
$$ y' = -\frac{2}{t}\, y + 4t \quad\Leftrightarrow\quad a(t) = -\frac{2}{t}, \quad b(t) = 4t. \tag{1.2.6} $$
Rewrite again, $y' + \frac{2}{t}\, y = 4t$. Multiply by a function $\mu$,
$$ \mu\, y' + \frac{2}{t}\, \mu\, y = \mu\, 4t. $$
Choose $\mu$ solution of
$$ \frac{2}{t}\, \mu = \mu' \;\Rightarrow\; \ln(|\mu|)' = \frac{2}{t} \;\Rightarrow\; \ln(|\mu|) = 2\ln(|t|) = \ln(t^2) \;\Rightarrow\; \mu(t) = \pm t^2. $$
We choose $\mu = t^2$. Multiply the differential equation by this $\mu$,
$$ t^2\, y' + 2t\, y = 4t\, t^2 \;\Rightarrow\; \big(t^2\, y\big)' = 4t^3. $$
If we write the right-hand side also as a derivative,
$$ \big(t^2\, y\big)' = \big(t^4\big)' \;\Rightarrow\; \big(t^2\, y - t^4\big)' = 0. $$
So a potential function is $\psi(t, y(t)) = t^2\, y(t) - t^4$. Integrating on both sides we obtain
$$ t^2\, y - t^4 = c \;\Rightarrow\; t^2\, y = c + t^4 \;\Rightarrow\; y(t) = \frac{c}{t^2} + t^2. $$

1.2.3. The Initial Value Problem. We now generalize Theorem 1.1.4 (initial value problems have unique solutions) from constant coefficient to variable coefficient equations. We start by introducing the initial value problem for a variable coefficient equation, a simple generalization of Def. 1.1.3.

Definition 1.2.2. The initial value problem (IVP) is to find all solutions $y$ of
$$ y' = a(t)\, y + b(t), \tag{1.2.7} $$
that satisfy the initial condition
$$ y(t_0) = y_0, \tag{1.2.8} $$
where $a$, $b$ are given functions and $t_0$, $y_0$ are given constants.

Remark: Equation (1.2.8) is the initial condition of the problem. Although the differential equation in (1.2.7) has infinitely many solutions, the associated initial value problem has a unique solution.

Theorem 1.2.3 (Variable Coefficients IVP). Given continuous functions $a$, $b$ with domain $(t_1, t_2)$, and constants $t_0 \in (t_1, t_2)$ and $y_0 \in \mathbb{R}$, the initial value problem
$$ y' = a(t)\, y + b(t), \qquad y(t_0) = y_0, \tag{1.2.9} $$
has the unique solution $y$ on the domain $(t_1, t_2)$, given by
$$ y(t) = y_0\, e^{A(t)} + e^{A(t)} \int_{t_0}^{t} e^{-A(s)}\, b(s)\, ds, \tag{1.2.10} $$
where the function $A(t) = \int_{t_0}^{t} a(s)\, ds$ is a particular antiderivative of function $a$.

Remark: In the particular case of a constant coefficient equation, where $a, b \in \mathbb{R}$, the solution given in Eq. (1.2.10) reduces to the one given in Eq. (1.1.12). Indeed,
$$ A(t) = \int_{t_0}^{t} a\, ds = a\,(t - t_0), \qquad \int_{t_0}^{t} e^{-a(s - t_0)}\, b\, ds = -\frac{b}{a}\, e^{-a(t - t_0)} + \frac{b}{a}. $$
Therefore, the solution $y$ can be written as
$$ y(t) = y_0\, e^{a(t - t_0)} + e^{a(t - t_0)} \Big( -\frac{b}{a}\, e^{-a(t - t_0)} + \frac{b}{a} \Big) = \Big(y_0 + \frac{b}{a}\Big)\, e^{a(t - t_0)} - \frac{b}{a}. $$

Proof of Theorem 1.2.3: Theorem 1.2.1 gives us the general solution of Eq. (1.2.9),
$$ y(t) = c\, e^{A(t)} + e^{A(t)} \int e^{-A(t)}\, b(t)\, dt, \qquad c \in \mathbb{R}. $$
Let us use the notation $K(t) = \int e^{-A(t)}\, b(t)\, dt$, and then introduce the initial condition in (1.2.9), which fixes the constant $c$,
$$ y_0 = y(t_0) = c\, e^{A(t_0)} + e^{A(t_0)}\, K(t_0). $$
So we get the constant $c$,
$$ c = y_0\, e^{-A(t_0)} - K(t_0). $$
Using this expression in the general solution above,
$$ y(t) = \big(y_0\, e^{-A(t_0)} - K(t_0)\big)\, e^{A(t)} + e^{A(t)}\, K(t) = y_0\, e^{A(t) - A(t_0)} + e^{A(t)}\, \big(K(t) - K(t_0)\big). $$
Let us introduce the particular primitives $\hat A(t) = A(t) - A(t_0)$ and $\hat K(t) = K(t) - K(t_0)$, which vanish at $t_0$, that is,
$$ \hat A(t) = \int_{t_0}^{t} a(s)\, ds, \qquad \hat K(t) = \int_{t_0}^{t} e^{-A(s)}\, b(s)\, ds. $$
Then the solution $y$ of the IVP has the form
$$ y(t) = y_0\, e^{\hat A(t)} + e^{A(t)} \int_{t_0}^{t} e^{-A(s)}\, b(s)\, ds, $$
which is equivalent to
$$ y(t) = y_0\, e^{\hat A(t)} + e^{A(t) - A(t_0)} \int_{t_0}^{t} e^{-(A(s) - A(t_0))}\, b(s)\, ds, $$
so we conclude that
$$ y(t) = y_0\, e^{\hat A(t)} + e^{\hat A(t)} \int_{t_0}^{t} e^{-\hat A(s)}\, b(s)\, ds. $$
Once we rename the particular primitive $\hat A$ simply by $A$, we establish the Theorem. □

We solve the next example following the main steps in the proof of Theorem 1.2.3 above.

Example 1.2.5. Find the function $y$ solution of the initial value problem
$$ t\, y' + 2y = 4t^2, \quad t > 0, \qquad y(1) = 2. $$
Solution: In Example 1.2.4 we computed the general solution of the differential equation,
$$ y(t) = \frac{c}{t^2} + t^2, \qquad c \in \mathbb{R}. $$
The initial condition implies that
$$ 2 = y(1) = c + 1 \;\Rightarrow\; c = 1 \;\Rightarrow\; y(t) = \frac{1}{t^2} + t^2. $$

Example 1.2.6. Find the solution of the problem given in Example 1.2.5, but this time using the results of Theorem 1.2.3.

Solution: We find the solution simply by using Eq. (1.2.10). First, find the integrating factor function $\mu$ as follows:
$$ A(t) = -\int_{1}^{t} \frac{2}{s}\, ds = -2\big[\ln(t) - \ln(1)\big] = -2\ln(t) \;\Rightarrow\; A(t) = \ln(t^{-2}). $$
The integrating factor is $\mu(t) = e^{-A(t)}$, that is,
$$ \mu(t) = e^{-\ln(t^{-2})} = e^{\ln(t^2)} \;\Rightarrow\; \mu(t) = t^2. $$
Note that Eq. (1.2.10) contains $e^{A(t)} = 1/\mu(t)$. Then, compute the solution as follows,
$$ y(t) = \frac{1}{t^2} \Big( 2 + \int_{1}^{t} s^2\, 4s\, ds \Big) = \frac{2}{t^2} + \frac{1}{t^2} \int_{1}^{t} 4s^3\, ds = \frac{2}{t^2} + \frac{1}{t^2}\,(t^4 - 1) = \frac{2}{t^2} + t^2 - \frac{1}{t^2} \;\Rightarrow\; y(t) = \frac{1}{t^2} + t^2. $$

1.2.4. The Bernoulli Equation.
In 1696 Jacob Bernoulli solved what is now known as the Bernoulli differential equation, a first order nonlinear differential equation. The following year Leibniz solved this equation by transforming it into a linear equation. We now explain Leibniz's idea in more detail.

Definition 1.2.4. The Bernoulli equation is
$$ y' = p(t)\, y + q(t)\, y^n, \tag{1.2.11} $$
where $p$, $q$ are given functions and $n \in \mathbb{R}$.

Then the integrating factor is $\mu(t) = e^{-A(t)}$. In this case we get
$$ \mu(t) = e^{-\ln(t^2)} = e^{\ln(t^{-2})} \;\Rightarrow\; \mu(t) = \frac{1}{t^2}. $$
Therefore, the equation for $v$ can be written as a total derivative,
$$ \frac{1}{t^2} \Big( v' - \frac{2}{t}\, v \Big) = \frac{2}{3}\, t^2 \;\Rightarrow\; \Big( \frac{v}{t^2} - \frac{2}{9}\, t^3 \Big)' = 0. $$
The potential function is $\psi(t, v) = \frac{v}{t^2} - \frac{2}{9}\, t^3$, and the solution of the differential equation is $\psi(t, v(t)) = c$, that is,
$$ \frac{v}{t^2} - \frac{2}{9}\, t^3 = c \;\Rightarrow\; v(t) = t^2 \Big( c + \frac{2}{9}\, t^3 \Big) \;\Rightarrow\; v(t) = c\, t^2 + \frac{2}{9}\, t^5. $$
Once $v$ is known we compute the original unknown $y = \pm v^{3/2}$, where the double sign is related to taking the square root. We finally obtain
$$ y(t) = \pm\Big( c\, t^2 + \frac{2}{9}\, t^5 \Big)^{3/2}. $$

Notes. This section corresponds to Boyce-DiPrima [3] Section 2.1, and Simmons [10] Section 2.10. The Bernoulli equation is solved in the exercises of Section 2.4 in Boyce-DiPrima, and in the exercises of Section 2.10 in Simmons.

1.2.5. Exercises.

1.2.1.- Find all solutions of $y' = 4t\, y$.

1.2.2.- Find the general solution of $y' = -y + e^{-2t}$.

1.2.3.- Find the solution $y$ to the IVP $y' = y + 2t\, e^{2t}$, $y(0) = 0$.

1.2.4.- Find the solution $y$ to the IVP
$$ t\, y' + 2\, y = \frac{\sin(t)}{t}, \qquad y\Big(\frac{\pi}{2}\Big) = \frac{2}{\pi}, $$
for $t > 0$.

1.2.5.- Find all solutions $y$ to the ODE
$$ \frac{y'}{(t^2 + 1)\, y} = 4t. $$

1.2.6.- Find all solutions $y$ to the ODE $t\, y' + n\, y = t^2$, with $n$ a positive integer.

1.2.7.- Find the solutions to the IVP $2t\, y - y' = 0$, $y(0) = 3$.

1.2.8.- Find all solutions of the equation $y' = y - 2\sin(t)$.

1.2.9.- Find the solution to the initial value problem
$$ t\, y' = 2\, y + 4t^3 \cos(4t), \qquad y\Big(\frac{\pi}{8}\Big) = 0. $$

1.2.10.- Find all solutions of the equation $y' + t\, y = t\, y^2$.
1.2.11.- Find all solutions of the equation $y' = -x\, y + 6x\,\sqrt{y}$.

1.2.12.- Find all solutions of the IVP
$$ y' = y + \frac{3}{y^2}, \qquad y(0) = 1. $$

1.2.13.- * Find all solutions of $y' = a\, y + b\, y^n$, where $a \neq 0$, $b$, and $n$ are real constants with $n \neq 0, 1$.

1.3. Separable Equations

1.3.1. Separable Equations. More often than not, nonlinear differential equations are harder to solve than linear equations. Separable equations are an exception: they can be solved just by integrating on both sides of the differential equation. We tried this idea to solve linear equations, but it did not work. However, it works for separable equations.

Definition 1.3.1. A separable differential equation for the function $y$ is
$$ h(y)\, y' = g(t), $$
where $h$, $g$ are given functions.

Remark: A separable differential equation $h(y)\, y' = g(t)$ has the following properties:
• The left-hand side depends explicitly only on $y$, so any $t$ dependence is through $y$.
• The right-hand side depends only on $t$.
• And the left-hand side is of the form (something on $y$) $\times\, y'$.

Example 1.3.1.
(a) The differential equation
$$ y' = \frac{t^2}{1 - y^2} $$
is separable, since it is equivalent to
$$ \big(1 - y^2\big)\, y' = t^2 \;\Rightarrow\; g(t) = t^2, \quad h(y) = 1 - y^2. $$
(b) The differential equation $y' + y^2 \cos(2t) = 0$ is separable, since it is equivalent to
$$ \frac{1}{y^2}\, y' = -\cos(2t) \;\Rightarrow\; g(t) = -\cos(2t), \quad h(y) = \frac{1}{y^2}. $$
The functions $g$ and $h$ are not uniquely defined; another choice in this example is $g(t) = \cos(2t)$, $h(y) = -\frac{1}{y^2}$.
(c) The linear differential equation $y' = a(t)\, y$ is separable, since it is equivalent to
$$ \frac{1}{y}\, y' = a(t) \;\Rightarrow\; g(t) = a(t), \quad h(y) = \frac{1}{y}. $$
(d) The equation $y' = e^y + \cos(t)$ is not separable.
(e) The constant coefficient linear differential equation $y' = a_0\, y + b_0$ is separable, since it is equivalent to
$$ \frac{1}{a_0\, y + b_0}\, y' = 1 \;\Rightarrow\; g(t) = 1, \quad h(y) = \frac{1}{a_0\, y + b_0}. $$
(f) The linear equation $y' = a(t)\, y + b(t)$, with $a \neq 0$ and $b/a$ nonconstant, is not separable.
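For a separable equation $h(y)\, y' = g(t)$, the defining feature is that the quantity $H(y) - G(t)$, built from any antiderivatives $H$ of $h$ and $G$ of $g$, is constant along solutions. The sketch below, which is not part of the original text, illustrates this for the equation of Example 1.3.1(a), $(1 - y^2)\, y' = t^2$, where $H(y) = y - y^3/3$ and $G(t) = t^3/3$. It integrates the ODE with a classical fourth order Runge-Kutta step, a numerical method not discussed in the text, and watches the conserved quantity.

```python
# Numerical illustration (not from the original text): along solutions of
# the separable equation (1 - y^2) y' = t^2 from Example 1.3.1(a), the
# quantity H(y) - G(t) = (y - y^3/3) - t^3/3 stays constant.
def f(t, y):
    return t**2 / (1.0 - y**2)          # y' = g(t)/h(y)

def rk4_step(t, y, h):
    """One classical Runge-Kutta 4 step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h*k1/2)
    k3 = f(t + h/2, y + h*k2/2)
    k4 = f(t + h, y + h*k3)
    return y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)

def invariant(t, y):
    return (y - y**3/3) - t**3/3        # H(y) - G(t)

t, y, h = 0.0, 2.0, 1e-3                # start at y(0) = 2, away from y = ±1
c0 = invariant(t, y)                    # value fixed by the initial condition
for _ in range(1000):                   # integrate up to t = 1
    y = rk4_step(t, y, h)
    t += h
assert abs(invariant(t, y) - c0) < 1e-8
print("H(y) - G(t) stayed at", c0, "to within 1e-8")
```

The initial value $y(0) = 2$ is an arbitrary choice for the illustration; it keeps the trajectory away from the singular lines $y = \pm 1$ where $h(y) = 0$.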
From the last two examples above we see that linear differential equations, with $a \neq 0$, are separable for $b/a$ constant, and not separable otherwise.

Definition 1.3.3. A function $y$ is a solution in implicit form of the equation $h(y)\, y' = g(t)$ iff the function $y$ is a solution of the algebraic equation
$$ H\big(y(t)\big) = G(t) + c, $$
where $H$ and $G$ are any antiderivatives of $h$ and $g$. In the case that the function $H$ is invertible, the solution $y$ above is given in explicit form iff it is written as
$$ y(t) = H^{-1}\big(G(t) + c\big). $$
In the case that $H$ is not invertible or $H^{-1}$ is difficult to compute, we leave the solution $y$ in implicit form.

We now solve the same example as in Example 1.3.3, but now we just use the result of Theorem 1.3.2.

Example 1.3.4. Use the formula in Theorem 1.3.2 to find all solutions $y$ to the equation
$$ y' = \frac{t^2}{1 - y^2}. \tag{1.3.4} $$
Solution: Theorem 1.3.2 tells us how to obtain the solution $y$. Writing Eq. (1.3.4) as
$$ \big(1 - y^2\big)\, y' = t^2, $$
we see that the functions $h$, $g$ are given by
$$ h(y) = 1 - y^2, \qquad g(t) = t^2. $$
Their primitive functions, $H$ and $G$ respectively, are simple to compute,
$$ h(y) = 1 - y^2 \;\Rightarrow\; H(y) = y - \frac{y^3}{3}, \qquad g(t) = t^2 \;\Rightarrow\; G(t) = \frac{t^3}{3}. $$
Then, Theorem 1.3.2 implies that the solution $y$ satisfies the algebraic equation
$$ y(t) - \frac{y^3(t)}{3} = \frac{t^3}{3} + c, \tag{1.3.5} $$
where $c \in \mathbb{R}$ is arbitrary.

Remark: Sometimes it is simpler to remember ideas than formulas. So one can solve a separable equation as we did in Example 1.3.3, instead of using the solution formulas, as in Example 1.3.4. (Although in the case of separable equations both methods are very close.)

In the next example we show that an initial value problem can be solved even when the solutions of the differential equation are given in implicit form.

Example 1.3.5. Find the solution of the initial value problem
$$ y' = \frac{t^2}{1 - y^2}, \qquad y(0) = 1. \tag{1.3.6} $$
Solution: From Example 1.3.3 we know that all solutions to the differential equation in (1.3.6) are given by
$$ y(t) - \frac{y^3(t)}{3} = \frac{t^3}{3} + c, $$
where $c \in \mathbb{R}$ is arbitrary. This constant $c$ is now fixed with the initial condition in Eq. (1.3.6),
$$ y(0) - \frac{y^3(0)}{3} = \frac{0}{3} + c \;\Rightarrow\; 1 - \frac{1}{3} = c \;\Leftrightarrow\; c = \frac{2}{3} \;\Rightarrow\; y(t) - \frac{y^3(t)}{3} = \frac{t^3}{3} + \frac{2}{3}. $$
So we can rewrite the algebraic equation defining the solution functions $y$ as the (time dependent) roots of a cubic (in $y$) polynomial,
$$ y^3(t) - 3y(t) + t^3 + 2 = 0. $$

Example 1.3.6. Find the solution of the initial value problem
$$ y' + y^2 \cos(2t) = 0, \qquad y(0) = 1. \tag{1.3.7} $$
Solution: The differential equation above can be written as
$$ -\frac{1}{y^2}\, y' = \cos(2t). $$
We know, from Example 1.3.2, that the solutions of the differential equation are
$$ y(t) = \frac{2}{\sin(2t) + 2c}, \qquad c \in \mathbb{R}. $$
The initial condition implies that
$$ 1 = y(0) = \frac{2}{0 + 2c} \;\Leftrightarrow\; c = 1. $$
So the solution to the IVP is given in explicit form by
$$ y(t) = \frac{2}{\sin(2t) + 2}. $$

Example 1.3.7. Follow the proof in Theorem 1.3.2 to find all solutions $y$ of the equation
$$ y' = \frac{4t - t^3}{4 + y^3}. $$
Solution: The differential equation above is separable, with
$$ h(y) = 4 + y^3, \qquad g(t) = 4t - t^3. $$
Therefore, it can be integrated as follows:
$$ \big(4 + y^3\big)\, y' = 4t - t^3 \;\Leftrightarrow\; \int \big(4 + y^3(t)\big)\, y'(t)\, dt = \int (4t - t^3)\, dt + c_0. $$
Again the substitution $y = y(t)$, $dy = y'(t)\, dt$, implies that
$$ \int (4 + y^3)\, dy = \int (4t - t^3)\, dt + c_0 \;\Leftrightarrow\; 4y + \frac{y^4}{4} = 2t^2 - \frac{t^4}{4} + c_0. $$
Calling $c_1 = 4c_0$ we obtain the following implicit form for the solution,
$$ y^4(t) + 16y(t) - 8t^2 + t^4 = c_1. $$

Example 1.3.8. Find the solution of the initial value problem below in explicit form,
$$ y' = \frac{2 - t}{1 + y}, \qquad y(0) = 1. \tag{1.3.8} $$
Solution: The differential equation above is separable with
$$ h(y) = 1 + y, \qquad g(t) = 2 - t. $$
Their primitives are respectively given by
$$ h(y) = 1 + y \;\Rightarrow\; H(y) = y + \frac{y^2}{2}, \qquad g(t) = 2 - t \;\Rightarrow\; G(t) = 2t - \frac{t^2}{2}. $$
Therefore, the implicit form of all solutions $y$ to the ODE above is given by
$$ y(t) + \frac{y^2(t)}{2} = 2t - \frac{t^2}{2} + c, $$
with $c \in \mathbb{R}$. The initial condition in Eq. (1.3.8) fixes the value of the constant $c$ as follows,
$$ y(0) + \frac{y^2(0)}{2} = 0 + c \;\Rightarrow\; 1 + \frac{1}{2} = c \;\Rightarrow\; c = \frac{3}{2}. $$
We conclude that the implicit form of the solution $y$ is given by
$$ y(t) + \frac{y^2(t)}{2} = 2t - \frac{t^2}{2} + \frac{3}{2} \;\Leftrightarrow\; y^2(t) + 2y(t) + (t^2 - 4t - 3) = 0. $$
The explicit form of the solution can be obtained by realizing that $y(t)$ is a root of the quadratic polynomial above. The two roots of that polynomial are given by
$$ y_\pm(t) = \frac{1}{2}\Big[ -2 \pm \sqrt{4 - 4(t^2 - 4t - 3)} \Big] \;\Leftrightarrow\; y_\pm(t) = -1 \pm \sqrt{-t^2 + 4t + 4}. $$
We have obtained two functions, $y_+$ and $y_-$. However, we know that there is only one solution to the initial value problem. We can decide which one is the solution by evaluating them at the value $t = 0$ given in the initial condition. We obtain
$$ y_+(0) = -1 + \sqrt{4} = 1, \qquad y_-(0) = -1 - \sqrt{4} = -3. $$
Therefore the solution is $y_+$, that is, the explicit form of the solution is
$$ y(t) = -1 + \sqrt{-t^2 + 4t + 4}. $$

1.3.2. Euler Homogeneous Equations. Sometimes a differential equation is not separable but it can be transformed into a separable equation by changing the unknown function. This is the case for differential equations known as Euler homogeneous equations.

Definition 1.3.4. An Euler homogeneous differential equation has the form
$$ y'(t) = F\Big(\frac{y(t)}{t}\Big). $$

Since the numerator and denominator are homogeneous of degree one, we multiply them by "1" in the form $(1/t)/(1/t)$, that is,
$$ y' = \frac{\Big(2y - 3t - \dfrac{y^2}{t}\Big)}{(t - y)}\, \frac{(1/t)}{(1/t)}. $$
Distribute the factors $(1/t)$ in numerator and denominator, and we get
$$ y' = \frac{2\,\big(\frac{y}{t}\big) - 3 - \big(\frac{y}{t}\big)^2}{1 - \big(\frac{y}{t}\big)} \;\Rightarrow\; y' = F\Big(\frac{y}{t}\Big), \qquad\text{where}\quad F\Big(\frac{y}{t}\Big) = \frac{2\,\big(\frac{y}{t}\big) - 3 - \big(\frac{y}{t}\big)^2}{1 - \big(\frac{y}{t}\big)}. $$
So the equation is Euler homogeneous and it is written in the standard form.

Example 1.3.12. Determine whether the equation $(1 - y^3)\, y' = t^2$ is Euler homogeneous.

Solution: If we write the differential equation in the standard form, $y' = f(t, y)$, then we get
$$ f(t, y) = \frac{t^2}{1 - y^3}. $$
But
$$ f(ct, cy) = \frac{c^2 t^2}{1 - c^3 y^3} \neq f(t, y), $$
hence the equation is not Euler homogeneous.

1.3.3. Solving Euler Homogeneous Equations.
In § 1.2 we transformed a Bernoulli equation into an equation we knew how to solve, a linear equation. Theorem 1.3.6 trans- forms an Euler homogeneous equation into a separable equation, which we know how to solve. Theorem 1.3.6. The Euler homogeneous equation y′ = F (y t ) for the function y determines a separable equation for v = y/t, given by v′( F (v)− v ) = 1 t . Remark: The original homogeneous equation for the function y is transformed into a sep- arable equation for the unknown function v = y/t. One solves for v, in implicit or explicit form, and then transforms back to y = t v. Proof of Theorem 1.3.6: Introduce the function v = y/t into the differential equation, y′ = F (v). We still need to replace y′ in terms of v. This is done as follows, y(t) = t v(t) ⇒ y′(t) = v(t) + t v′(t). Introducing these expressions into the differential equation for y we get v + t v′ = F (v) ⇒ v′ = ( F (v)− v ) t ⇒ v ′( F (v)− v ) = 1 t . The equation on the far right is separable. This establishes the Theorem. 1.3. SEPARABLE EQUATIONS 33 Example 1.3.13. Find all solutions y of the differential equation y′ = t2 + 3y2 2ty . Solution: The equation is Euler homogeneous, since f(ct, cy) = c2t2 + 3c2y2 2(ct)(cy) = c2(t2 + 3y2) c2(2ty) = t2 + 3y2 2ty = f(t, y). Next we compute the function F . Since the numerator and denominator are homogeneous degree “2” we multiply the right-hand side of the equation by “1” in the form (1/t2)/(1/t2), y′ = (t2 + 3y2) 2ty ( 1 t2 ) ( 1 t2 ) ⇒ y′ = 1 + 3 (y t )2 2 (y t ) . Now we introduce the change of functions v = y/t, y′ = 1 + 3v2 2v . Since y = t v, then y′ = v + t v′, which implies v + t v′ = 1 + 3v2 2v ⇒ t v′ = 1 + 3v 2 2v − v = 1 + 3v 2 − 2v2 2v = 1 + v2 2v . We obtained the separable equation v′ = 1 t (1 + v2 2v ) . We rewrite and integrate it, 2v 1 + v2 v′ = 1 t ⇒ ∫ 2v 1 + v2 v′ dt = ∫ 1 t dt+ c0. The substitution u = 1 + v2(t) implies du = 2v(t) v′(t) dt, so∫ du u = ∫ dt t + c0 ⇒ ln(u) = ln(t) + c0 ⇒ u = eln(t)+c0 . 
But u = eln(t)ec0 , so denoting c1 = e c0 , then u = c1t. So, we get 1 + v2 = c1t ⇒ 1 + (y t )2 = c1t ⇒ y(t) = ±t √ c1t− 1. C Example 1.3.14. Find all solutions y of the differential equation y′ = t(y + 1) + (y + 1)2 t2 . Solution: This equation is Euler homogeneous when written in terms of the unknown u(t) = y(t) + 1 and the variable t. Indeed, u′ = y′, thus we obtain y′ = t(y + 1) + (y + 1)2 t2 ⇔ u′ = tu+ u 2 t2 ⇔ u′ = u t + (u t )2 . Therefore, we introduce the new variable v = u/t, which satisfies u = t v and u′ = v + t v′. The differential equation for v is v + t v′ = v + v2 ⇔ t v′ = v2 ⇔ ∫ v′ v2 dt = ∫ 1 t dt+ c, with c ∈ R. The substitution w = v(t) implies dw = v′ dt, so∫ w−2 dw = ∫ 1 t dt+ c ⇔ −w−1 = ln(|t|) + c ⇔ w = − 1 ln(|t|) + c . 34 1. FIRST ORDER EQUATIONS Substituting back v, u and y, we obtain w = v(t) = u(t)/t = [y(t) + 1]/t, so y + 1 t = − 1 ln(|t|) + c ⇔ y(t) = − t ln(|t|) + c − 1. C Notes. This section corresponds to Boyce-DiPrima [3] Section 2.2. Zill and Wright study separable equations in [17] Section 2.2, and Euler homogeneous equations in Section 2.5. Zill and Wright organize the material in a nice way, they present first separable equations, then linear equations, and then they group Euler homogeneous and Bernoulli equations in a section called Solutions by Substitution. Once again, a one page description is given by Simmons in [10] in Chapter 2, Section 7. 1.4. EXACT DIFFERENTIAL EQUATIONS 37 So, the differential equation is not exact. C The following examples show that there are exact equations which are not separable. Example 1.4.3. Show whether the differential equation below is exact or not, 2ty y′ + 2t+ y2 = 0. Solution: We first identify the functions N and M . This is simple in this case, since (2ty) y′ + (2t+ y2) = 0 ⇒ N(t, y) = 2ty, M(t, y) = 2t+ y2. The equation is indeed exact, since N(t, y) = 2ty ⇒ ∂tN(t, y) = 2y, M(t, y) = 2t+ y2 ⇒ ∂yM(t, y) = 2y, } ⇒ ∂tN(t, y) = ∂yM(t, y). Therefore, the differential equation is exact. 
C

Example 1.4.4. Show whether the differential equation below is exact or not, sin(t) y′ + t² e^y y′ − y′ = −y cos(t) − 2t e^y.

Solution: We first identify the functions N and M. Rewriting the equation as (sin(t) + t² e^y − 1) y′ + (y cos(t) + 2t e^y) = 0, we can see that N(t, y) = sin(t) + t² e^y − 1 ⇒ ∂tN(t, y) = cos(t) + 2t e^y, and M(t, y) = y cos(t) + 2t e^y ⇒ ∂yM(t, y) = cos(t) + 2t e^y. Therefore, ∂tN(t, y) = ∂yM(t, y), and the equation is exact. C

1.4.2. Solving Exact Equations. Exact differential equations can be rewritten as a total derivative of a function, called a potential function. Once they are written in such a way they are simple to solve.

Theorem 1.4.2 (Exact Equations). If the differential equation N(t, y) y′ + M(t, y) = 0 (1.4.1) is exact, then it can be written as dψ/dt (t, y(t)) = 0, where ψ is called a potential function and satisfies N = ∂yψ, M = ∂tψ. (1.4.2) Therefore, the solutions of the exact equation are given in implicit form as ψ(t, y(t)) = c, c ∈ R.

Remark: The condition ∂tN = ∂yM is equivalent to the existence of a potential function, a result proven by Henri Poincaré around 1880.

Theorem 1.4.3 (Poincaré). Continuously differentiable functions N, M, on t, y, satisfy ∂tN(t, y) = ∂yM(t, y) (1.4.3) iff there is a twice continuously differentiable function ψ, depending on t, y, such that ∂yψ(t, y) = N(t, y), ∂tψ(t, y) = M(t, y). (1.4.4)

Remarks: (a) A differential equation defines the functions N and M. The exact condition in (1.4.3) is equivalent to the existence of ψ, related to N and M through Eq. (1.4.4). (b) If we recall the definition of the gradient of a function of two variables, ∇ψ = ⟨∂tψ, ∂yψ⟩, then the equations in (1.4.4) say that ∇ψ = ⟨M, N⟩.

Proof of Theorem 1.4.3: (⇒) This part of the proof is not given here; see [9]. (⇐) We assume that the potential function ψ is given and satisfies N = ∂yψ, M = ∂tψ. Since ψ is twice continuously differentiable we have ∂t∂yψ = ∂y∂tψ, hence ∂tN = ∂t∂yψ = ∂y∂tψ = ∂yM.
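The Poincaré condition ∂tN = ∂yM is also straightforward to check with a computer algebra system. The sketch below is our own illustration, not part of the original text, and the choice of the sympy library is an assumption; it verifies the condition for the equation of Example 1.4.4 and shows it fails for a linear equation y′ = a y + b with a ≠ 0 (the situation of Example 1.4.2):

```python
# Check the Poincare exactness condition  d_t N = d_y M  symbolically.
# Illustration only: the book verifies this condition by hand.
import sympy as sp

t, y = sp.symbols('t y')

def is_exact(N, M):
    """True iff N(t, y) y' + M(t, y) = 0 satisfies d_t N = d_y M."""
    return sp.simplify(sp.diff(N, t) - sp.diff(M, y)) == 0

# Example 1.4.4:  N = sin(t) + t^2 e^y - 1,  M = y cos(t) + 2 t e^y
N1 = sp.sin(t) + t**2 * sp.exp(y) - 1
M1 = y * sp.cos(t) + 2 * t * sp.exp(y)
print(is_exact(N1, M1))                      # True: the equation is exact

# A linear equation y' = a y + b with a != 0, written as
# 1 * y' + (-(a y + b)) = 0, is not exact:
a, b = sp.symbols('a b', nonzero=True)
print(is_exact(sp.Integer(1), -(a*y + b)))   # False
```

Both partial derivatives of Example 1.4.4 reduce to cos(t) + 2t eʸ, so the difference simplifies to zero; for the linear equation the difference is the nonzero constant a.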
In our next example we verify that a given function ψ is a potential function for an exact differential equation. We also show that the differential equation can be rewritten as a total derivative of this potential function. (In Theorem 1.4.2 we show how to compute such potential function from the differential equation, integrating the equations in (1.4.4).) Example 1.4.5 (Verification of a Potential). Show that the differential equation 2ty y′ + 2t+ y2 = 0. is the total derivative of the potential function ψ(t, y) = t2 + ty2. Solution: we use the chain rule to compute the t derivative of the potential function ψ evaluated at the unknown function y, d dt ψ(t, y(t)) = ( ∂yψ ) dy dt + ( ∂tψ ) = (2ty) y′ + (2t+ y2). So the differential equation is the total derivative of the potential function. To get this result we used the partial derivatives ∂yψ = 2ty = N, ∂tψ = 2t+ y 2 = M. C Exact equations always have a potential function ψ, and this function is not difficult to compute—we only need to integrate Eq. (1.4.4). Having a potential function of an exact equation is essentially the same as solving the differential equation, since the integral curves of ψ define implicit solutions of the differential equation. Proof of Theorem 1.4.2: The differential equation in (1.4.1) is exact, then Poincaré Theorem implies that there is a potential function ψ such that N = ∂yψ, M = ∂tψ. 1.4. EXACT DIFFERENTIAL EQUATIONS 39 Therefore, the differential equation is given by 0 = N(t, y) y′(t) +M(t, y) = ( ∂yψ(t, y) ) y′ + ( ∂tψ(t, y) ) = d dt ψ(t, y(t)), where in the last step we used the chain rule. Recall that the chain rule says d dt ψ ( t, y(t) ) = (∂yψ) dy dt + (∂tψ). So, the differential equation has been rewritten as a total t-derivative of the potential func- tion, which is simple to integrate, d dt ψ(t, y(t)) = 0 ⇒ ψ(t, y(t)) = c, where c is an arbitrary constant. This establishes the Theorem. Example 1.4.6 (Calculation of a Potential). 
Find all solutions y to the differential equation 2ty y′ + 2t+ y2 = 0. Solution: The first step is to verify whether the differential equation is exact. We know the answer, the equation is exact, we did this calculation before in Example 1.4.3, but we reproduce it here anyway. N(t, y) = 2ty ⇒ ∂tN(t, y) = 2y, M(t, y) = 2t+ y2 ⇒ ∂yM(t, y) = 2y. } ⇒ ∂tN(t, y) = ∂yM(t, y). Since the equation is exact, Lemma 1.4.3 implies that there exists a potential function ψ satisfying the equations ∂yψ(t, y) = N(t, y), (1.4.5) ∂tψ(t, y) = M(t, y). (1.4.6) Let us compute ψ. Integrate Eq. (1.4.5) in the variable y keeping the variable t constant, ∂yψ(t, y) = 2ty ⇒ ψ(t, y) = ∫ 2ty dy + g(t), where g is a constant of integration on the variable y, so g can only depend on t. We obtain ψ(t, y) = ty2 + g(t). (1.4.7) Introduce into Eq. (1.4.6) the expression for the function ψ in Eq. (1.4.7) above, that is, y2 + g′(t) = ∂tψ(t, y) = M(t, y) = 2t+ y 2 ⇒ g′(t) = 2t Integrate in t the last equation above, and choose the integration constant to be zero, g(t) = t2. We have found that a potential function is given by ψ(t, y) = ty2 + t2. Therefore, Theorem 1.4.2 implies that all solutions y satisfy the implicit equation ty2(t) + t2 = c, for any c ∈ R. The choice g(t) = t2 + c0 only modifies the constant c. C 42 1. FIRST ORDER EQUATIONS We now check the condition for exactness, ∂tÑ = µ ′, ∂yM̃ = −aµ, and we get that ∂tÑ = ∂yM̃ the equation is exact } ⇔ { µ′ = −aµ µ is an integrating factor. Therefore, the linear equation y′ = a y + b is semi-exact, and the function that transforms it into an exact equation is µ(t) = e−A(t), where A(t) = ∫ a(t) dt, which in § 1.2 we called it an integrating factor. C Now we generalize this idea to nonlinear differential equations. Theorem 1.4.5. 
If the equation N(t, y) y′ +M(t, y) = 0 (1.4.9) is not exact, with ∂tN 6= ∂yM , the function N 6= 0, and where the function h defined as h = ∂yM − ∂tN N (1.4.10) depends only on t, not on y, then the equation below is exact, (eHN) y′ + (eHM) = 0, (1.4.11) where H is an antiderivative of h, H(t) = ∫ h(t) dt. Remarks: (a) The function µ(t) = eH(t) is called an integrating factor. (b) Any integrating factor µ is solution of the differential equation µ′(t) = h(t)µ(t). (c) Multiplication by an integrating factor transforms a non-exact equation N y′ +M = 0 into an exact equation. (µN) y′ + (µM) = 0. This is exactly what happened with linear equations. Verification Proof of Theorem 1.4.5: We need to verify that the equation is exact, (eH N) y′ + (eHM) = 0 ⇒ Ñ(t, y) = eH(t)N(t, y), M̃(t, y) = eH(t)M(t, y). We now check for exactness, and let us recall ∂t(e H) = (eH)′ = h eH , then ∂tÑ = h e H N + eH ∂tN, ∂yM̃ = e H ∂yM. Let us use the definition of h in the first equation above, ∂tÑ = e H ( (∂yM − ∂tN) N N + ∂tN ) = eH ∂yM = ∂yM̃. So the equation is exact. This establishes the Theorem. 1.4. EXACT DIFFERENTIAL EQUATIONS 43 Constructive Proof of Theorem 1.4.5: The original differential equation N y′ +M = 0 is not exact because ∂tN 6= ∂yM . Now multiply the differential equation by a nonzero function µ that depends only on t, (µN) y′ + (µM) = 0. (1.4.12) We look for a function µ such that this new equation is exact. This means that µ must satisfy the equation ∂t(µN) = ∂y(µM). Recalling that µ depends only on t and denoting ∂tµ = µ ′, we get µ′N + µ∂tN = µ∂yM ⇒ µ′N = µ (∂yM − ∂tN). So the differential equation in (1.4.12) is exact iff holds µ′ = (∂yM − ∂tN N ) µ. The solution µ will depend only on t iff the function h(t) = ∂yM(t, y)− ∂tN(t, y) N(t, y) depends only on t. If this happens, as assumed in the hypotheses of the theorem, then we can solve for µ as follows, µ′(t) = h(t)µ(t) ⇒ µ(t) = eH(t), H(t) = ∫ h(t) dt. 
Therefore, the equation below is exact, (e^H N) y′ + (e^H M) = 0. This establishes the Theorem.

Example 1.4.9. Find all solutions y to the differential equation (t² + t y) y′ + (3t y + y²) = 0. (1.4.13)

Solution: We first verify whether this equation is exact: N(t, y) = t² + ty ⇒ ∂tN(t, y) = 2t + y, and M(t, y) = 3ty + y² ⇒ ∂yM(t, y) = 3t + 2y, therefore the differential equation is not exact. We now verify whether the extra condition in Theorem 1.4.5 holds, that is, whether the function in (1.4.10) is y independent; h = (∂yM(t, y) − ∂tN(t, y))/N(t, y) = ((3t + 2y) − (2t + y))/(t² + ty) = (t + y)/(t(t + y)) = 1/t ⇒ h(t) = 1/t. So, the function h = (∂yM − ∂tN)/N is y independent. Therefore, Theorem 1.4.5 implies that the non-exact differential equation can be transformed into an exact equation. We need to multiply the differential equation by a function µ solution of the equation µ′(t) = h(t) µ(t) ⇒ µ′/µ = 1/t ⇒ ln(µ(t)) = ln(t) ⇒ µ(t) = t, where in the second equation we have chosen the integration constant to be zero. Then, multiplying the original differential equation in (1.4.13) by the integrating factor µ we obtain (3t² y + t y²) + (t³ + t² y) y′ = 0. (1.4.14) This latter equation is exact, since Ñ(t, y) = t³ + t²y ⇒ ∂tÑ(t, y) = 3t² + 2ty, and M̃(t, y) = 3t²y + ty² ⇒ ∂yM̃(t, y) = 3t² + 2ty, so we get the exactness condition ∂tÑ = ∂yM̃. The solution y can be found as we did in the previous examples in this section. That is, we find the potential function ψ by integrating the equations ∂yψ(t, y) = Ñ(t, y), (1.4.15) ∂tψ(t, y) = M̃(t, y). (1.4.16) From the first equation above we obtain ∂yψ = t³ + t²y ⇒ ψ(t, y) = ∫(t³ + t²y) dy + g(t). Integrating the right-hand side above we arrive at ψ(t, y) = t³y + (1/2) t²y² + g(t). Introducing this expression for ψ into Eq. (1.4.16), 3t²y + ty² + g′(t) = ∂tψ(t, y) = M̃(t, y) = 3t²y + ty² ⇒ g′(t) = 0. A solution to this last equation is g(t) = 0. So we get a potential function ψ(t, y) = t³y + (1/2) t²y².
All solutions y to the differential equation in (1.4.13) satisfy the equation t3 y(t) + 1 2 t2 ( y(t) )2 = c0, where c0 ∈ R is arbitrary. C We have seen in Example 1.4.2 that linear differential equations with a 6= 0 are not exact. In Section 1.2 we found solutions to linear equations using the integrating factor method. We multiplied the linear equation by a function that transformed the equation into a total derivative. Those calculations are now a particular case of Theorem 1.4.5, as we can see it in the following Example. Example 1.4.10. Use Theorem 1.4.5 to find all solutions to the linear differential equation y′ = a(t) y + b(t), a(t) 6= 0. (1.4.17) 1.4. EXACT DIFFERENTIAL EQUATIONS 47 Remark: Sometimes, in the literature, the equations N y′ +M = 0 and N +M x′ = 0 are written together as follows, N dy +M dx = 0. This equation deserves two comments: (a) We do not use this notation here. That equation makes sense in the framework of differential forms, which is beyond the subject of these notes. (b) Some people justify the use of that equation outside the framework of differential forms by thinking y′ = dy dx as real fraction and multiplying N y′+M = 0 by the denominator, N dy dx +M = 0 ⇒ N dy +M dx = 0. Unfortunately, y′ is not a fraction dy dx , so the calculation just mentioned has no meaning. So, if the equation for y is exact, so is the equation for its inverse x. The same is not true for semi-exact equations. If the equation for y is semi-exact, then the equation for its inverse x might or might not be semi-exact. The next result states a condition on the equation for the inverse function x to be semi-exact. This condition is not equal to the condition on the equation for the function y to be semi-exact. Compare Theorems 1.4.5 and 1.4.7. Theorem 1.4.7. 
If the equation M x′ +N = 0 is not exact, with ∂yM 6= ∂xN , the function M 6= 0, and where the function ` defined as ` = − (∂yM − ∂xN) M depends only on y, not on x, then the equation below is exact, (eLM)x′ + (eLN) = 0 where L is an antiderivative of `, L(y) = ∫ `(y) dy. Remarks: (a) The function µ(y) = eL(y) is called an integrating factor. (b) Any integrating factor µ is solution of the differential equation µ′(y) = `(y)µ(y). (c) Multiplication by an integrating factor transforms a non-exact equation M x′ +N = 0 into an exact equation. (µM)x′ + (µN) = 0. Verification Proof of Theorem 1.4.7: We need to verify that the equation is exact, (eLM)x′ + (eLN) = 0 ⇒ M̃(x, y) = eL(y)M(x, y), Ñ(x, y) = eL(y)N(x, y). We now check for exactness, and let us recall ∂y(e L) = (eL)′ = ` eL, then ∂yM̃ = ` e LM + eL ∂yM, ∂xÑ = e H ∂xN. 48 1. FIRST ORDER EQUATIONS Let us use the definition of ` in the first equation above, ∂yM̃ = e L ( − (∂yM − ∂xN) M M + ∂yM ) = eL ∂xN = ∂xÑ . So the equation is exact. This establishes the Theorem. Constructive Proof of Theorem 1.4.7: The original differential equation M x′ +N = 0 is not exact because ∂yM 6= ∂xN . Now multiply the differential equation by a nonzero function µ that depends only on y, (µM)x′ + (µN) = 0. We look for a function µ such that this new equation is exact. This means that µ must satisfy the equation ∂y(µM) = ∂x(µN). Recalling that µ depends only on y and denoting ∂yµ = µ ′, we get µ′M + µ∂yM = µ∂xN ⇒ µ′M = −µ (∂yM − ∂xN). So the differential equation (µM)x′ + (µN) = 0 is exact iff holds µ′ = − (∂yM − ∂xN M ) µ. The solution µ will depend only on y iff the function `(y) = −∂yM(x, y)− ∂xN(x, y) M(x, y) depends only on y. If this happens, as assumed in the hypotheses of the theorem, then we can solve for µ as follows, µ′(y) = `(y)µ(y) ⇒ µ(y) = eL(y), L(y) = ∫ `(y) dy. Therefore, the equation below is exact, (eLM)x′ + (eLN) = 0. This establishes the Theorem. Example 1.4.11. 
Find all solutions to the differential equation( 5x e−y + 2 cos(3x) ) y′ + ( 5 e−y − 3 sin(3x) ) = 0. Solution: We first check if the equation is exact for the unknown function y, which depends on the variable x. If we write the equation as N y′ +M = 0, with y′ = dy/dx, then N(x, y) = 5x e−y + 2 cos(3x) ⇒ ∂xN(x, y) = 5 e−y − 6 sin(3x), M(x, y) = 5 e−y − 3 sin(3x) ⇒ ∂yM(x, y) = −5 e−y. Since ∂xN 6= ∂yM , the equation is not exact. Let us check if there exists an integrating factor µ that depends only on x. Following Theorem 1.4.5 we study the function h = ( ∂yM − ∂xN ) N = −10 e−y + 6 sin(3x) 5x e−y + 2 cos(3x) , which is a function of both x and y and cannot be simplified into a function of x alone. Hence an integrating factor cannot be function of only x. Let us now consider the equation for the inverse function x, which depends on the 1.4. EXACT DIFFERENTIAL EQUATIONS 49 variable y. The equation is M x′ +N = 0, with x′ = dx/dy, where M and N are the same as before, M(x, y) = 5 e−y − 3 sin(3x) N(x, y) = 5x e−y + 2 cos(3x). We know from Theorem 1.4.6 that this equation is not exact. Both the equation for y and equation for its inverse x must satisfy the same condition to be exact. The condition is ∂xN = ∂yM , but we have seen that this is not true for the equation in this example. The last thing we can do is to check if the equation for the inverse function x has an integrating factor µ that depends only on y. Following Theorem 1.4.7 we study the function ` = − (∂yM − ∂xN) M = − ( −10 e−y + 6 sin(3x) )( 5 e−y − 3 sin(3x) ) = 2 ⇒ `(y) = 2. The function above does not depend on x, so we can solve the differential equation for µ(y), µ′(y) = `(y)µ(y) ⇒ µ′(y) = 2µ(y) ⇒ µ(y) = µ0 e2y. Since µ is an integrating factor, we can choose µ0 = 1, hence µ(y) = e 2y. If we multiply the equation for x by this integrating factor we get e2y ( 5 e−y − 3 sin(3x) ) x′ + e2y ( 5x e−y + 2 cos(3x) ) = 0,( 5 ey − 3 sin(3x) e2y ) x′ + ( 5x ey + 2 cos(3x) e2y ) = 0. 
This equation is exact, because if we write it as M̃ x′ + Ñ = 0, then M̃(x, y) = 5 ey − 3 sin(3x) e2y ⇒ ∂yM̃(x, y) = 5 ey − 6 sin(3x) e2y, Ñ(x, y) = 5x ey + 2 cos(3x) e2y ⇒ ∂xN(x, y) = 5 ey − 6 sin(3x) e2y, that is ∂yM̃ = ∂xÑ . Since the equation is exact, we find a potential function ψ from ∂xψ = M̃, ∂yψ = Ñ . Integrating on the variable x the equation ∂xψ = M̃ we get ψ(x, y) = 5x ey + cos(3x) e2y + g(y). Introducing this expression for ψ into the equation ∂yψ = Ñ we get 5x ey + 2 cos(3x) e2y + g′(y) = ∂yψ = Ñ = 5x e y + 2 cos(3x) e2y, hence g′(y) = 0, so we choose g = 0. A potential function for the equation for x is ψ(x, y) = 5x ey + cos(3x) e2y. The solutions x of the differential equation are given by 5x(y) ey + cos(3x(y)) e2y = c. Once we have the solution for the inverse function x we can find the solution for the original unknown y, which are given by 5x ey(x) + cos(3x) e2 y(x) = c C Notes. Exact differential equations are studied in Boyce-DiPrima [3], Section 2.6, and in most differential equation textbooks. 52 1. FIRST ORDER EQUATIONS Proof of Theorem 1.5.4: We know that the amount of a radioactive material as function of time is given by N(t) = N0 e −kt. Then, the definition of half-life implies, N0 2 = N0 e −kτ ⇒ −kτ = ln (1 2 ) ⇒ kτ = ln(2). This establishes the Theorem. Remark: A radioactive material, N , can be expressed in terms of the half-life, N(t) = N0 e (−t/τ) ln(2) ⇒ N(t) = N0 eln[2 (−t/τ)] ⇒ N(t) = N0 2−t/τ . From this last expression is clear that for t = τ we get N(τ) = N0/2. Our first example is about dating remains with Carbon-14. The Carbon-14 is a radioac- tive isotope of Carbon-12 with a half-life of τ = 5730 years. Carbon-14 is being constantly created in the upper atmosphere—by collisions of Carbon-12 with outer space radiation— and is accumulated by living organisms. While the organism lives, the amount of Carbon-14 in the organism is held constant. 
The decay of Carbon-14 is compensated with new amounts when the organism breathes or eats. When the organism dies, the amount of Carbon-14 in its remains decays. So the balance between normal and radioactive carbon in the remains changes in time.

Example 1.5.1. Bone remains in an ancient excavation site contain only 14% of the Carbon-14 found in living animals today. Estimate how old the bone remains are. Use that the half-life of Carbon-14 is τ = 5730 years.

Solution: Suppose that t = 0 is set at the time when the organism dies. If at the present time t1 the remains contain 14% of the original amount, that means N(t1) = (14/100) N(0). Since Carbon-14 is a radioactive substance with half-life τ, the amount of Carbon-14 decays in time as follows, N(t) = N(0) 2^(−t/τ), where τ = 5730 years is the Carbon-14 half-life. Therefore, 2^(−t1/τ) = 14/100 ⇒ −t1/τ = log2(14/100) ⇒ t1 = τ log2(100/14). We obtain that t1 ≈ 16,253 years. The organism died more than 16,000 years ago. C

Solution: (Using the decay constant k.) We write the solution of the radioactive decay equation as N(t) = N(0) e^(−kt), kτ = ln(2). Write the condition for t1, to be 14% of the original Carbon-14, as follows, N(0) e^(−kt1) = (14/100) N(0) ⇒ e^(−kt1) = 14/100 ⇒ −kt1 = ln(14/100), so t1 = (1/k) ln(100/14). Recalling the expression for k in terms of τ, that is, kτ = ln(2), we get t1 = τ ln(100/14)/ln(2). We get t1 ≈ 16,253 years, which is the same result as above, since log2(100/14) = ln(100/14)/ln(2). C

1.5.2. Newton’s Cooling Law. In 1701 Newton published, anonymously, the results of his homemade experiments, done fifteen years earlier. He focused on the time evolution of the temperature of objects that rest in a medium with constant temperature. He found that the difference between the temperatures of an object and the constant temperature of a medium varies geometrically towards zero as time varies arithmetically.
This was his way of saying that the difference of temperatures, ∆T, depends on time as (∆T)(t) = (∆T)₀ e^(−t/τ), for some initial temperature difference (∆T)₀ and some time scale τ. Although this is called a “Cooling Law”, it also describes objects that warm up. When (∆T)₀ > 0, the object is cooling down, but when (∆T)₀ < 0, the object is warming up. Newton knew pretty well that the function ∆T above is a solution of a very particular differential equation. But he chose to put more emphasis on the solution rather than on the equation. Nowadays people think that differential equations are more fundamental than their solutions, so we define Newton’s cooling law as follows.

Definition 1.5.5. The Newton cooling law says that the temperature T at a time t of a material placed in a surrounding medium kept at a constant temperature Ts satisfies (∆T)′ = −k (∆T), with ∆T(t) = T(t) − Ts, and k > 0 a constant characterizing the thermal properties of the material.

Remark: Newton’s cooling law for ∆T is the same as the radioactive decay equation. But now the initial temperature difference, (∆T)(0) = T(0) − Ts, can be either positive or negative.

Theorem 1.5.6. The solution of Newton’s cooling law equation (∆T)′ = −k (∆T) with initial data T(0) = T0 is T(t) = (T0 − Ts) e^(−kt) + Ts.

Proof of Theorem 1.5.6: Newton’s cooling law is a first order linear equation, which we solved in § 1.1. The general solution is (∆T)(t) = c e^(−kt) ⇒ T(t) = c e^(−kt) + Ts, c ∈ R, where we used that (∆T)(t) = T(t) − Ts. The initial condition implies T0 = T(0) = c + Ts ⇒ c = T0 − Ts ⇒ T(t) = (T0 − Ts) e^(−kt) + Ts. This establishes the Theorem.

Example 1.5.2. A cup with water at 45 C is placed in a cooler held at 5 C. If after 2 minutes the water temperature is 25 C, when will the water temperature be 15 C?

Solution: We know that the solution of the Newton cooling law equation is T(t) = (T0 − Ts) e^(−kt) + Ts, and we also know that in this case we have T0 = 45, Ts = 5, T(2) = 25. In this example we need to find t1 such that T(t1) = 15. In order to find that t1 we first need to find the constant k, T(t) = (45 − 5) e^(−kt) + 5 ⇒ T(t) = 40 e^(−kt) + 5. Now use the fact that T(2) = 25 C, that is, 20 = T(2) = 40 e^(−2k) ⇒ ln(1/2) = −2k ⇒ k = (1/2) ln(2). Having the constant k we can now go on and find the time t1 such that T(t1) = 15 C. From T(t) = 40 e^(−t ln(√2)) + 5 we get 10 = 40 e^(−t1 ln(√2)) ⇒ t1 = 4. C

1.5.3. Mixing Problems. We study the system pictured in Fig. 3. A tank has a salt mass Q(t) dissolved in a volume V(t) of water at a time t. Water is pouring into the tank at a rate ri(t) with a salt concentration qi(t). Water is also leaving the tank at a rate ro(t) with a salt concentration qo(t). Recall that a water rate r means water volume per unit time, and a salt concentration q means salt mass per unit volume. We assume that the salt entering the tank gets instantaneously mixed. As a consequence the salt concentration in the tank is homogeneous at every time. This property simplifies the mathematical model describing the salt in the tank. Before stating the problem we want to solve, we review the physical units of the main fields involved in it. Denote by [ri] the units of the quantity ri. Then we have [ri] = [ro] = Volume/Time, [qi] = [qo] = Mass/Volume, [V] = Volume, [Q] = Mass.

Figure 3. Description of a water tank problem: water flows in at rate ri with concentration qi(t), and out at rate ro with concentration qo(t); the instantaneously mixed tank holds volume V(t) and salt mass Q(t).

Definition 1.5.7. A Mixing Problem refers to water coming into a tank at a rate ri with salt concentration qi, and going out of the tank at a rate ro and salt concentration qo, so that the water volume V and the total amount of salt Q, which is instantaneously mixed, in the tank satisfy the following equations,
V′(t) = ri(t) − ro(t), (1.5.1)
Q′(t) = ri(t) qi(t) − ro(t) qo(t), (1.5.2)
qo(t) = Q(t)/V(t), (1.5.3)
r′i(t) = r′o(t) = 0.
(1.5.4) The first and second equations above are just the mass conservation of water and salt, respectively. Water volume and mass are proportional, so both are conserved, and we chose the volume to write down this conservation in Eq. (1.5.1). This equation is indeed a conservation because it says that the water volume variation in time is equal to the difference of volume time rates coming in and going out of the tank. Eq. (1.5.2) is the salt 1.5. APPLICATIONS OF LINEAR EQUATIONS 57 Solution: The first step to solve this problem is to find the solution Q of the initial value problem Q′(t) = a(t)Q(t) + b(t), Q(0) = Q0, where function a and b are given in Eq. (1.5.6). In this case they are a(t) = − ro (ri − ro) t+ V0 ⇒ a(t) = − r V0 , b(t) = ri qi(t) ⇒ b(t) = 0. The initial value problem we need to solve is Q′(t) = − r V0 Q(t), Q(0) = Q0. From Section 1.1 we know that the solution is given by Q(t) = Q0 e −rt/V0 . We can now proceed to find the time t1. We first need to find the concentration Q(t)/V (t). We already have Q(t) and we now that V (t) = V0, since ri = ro. Therefore, Q(t) V (t) = Q(t) V0 = Q0 V0 e−rt/V0 . The condition that defines t1 is Q(t1) V (t1) = 1 100 Q0 V0 . From these two equations above we conclude that 1 100 Q0 V0 = Q(t1) V (t1) = Q0 V0 e−rt1/V0 . The time t1 comes from the equation 1 100 = e−rt1/V0 ⇔ ln ( 1 100 ) = −rt1 V0 ⇔ ln(100) = rt1 V0 . The final result is given by t1 = V0 r ln(100). C Example 1.5.5 (Nonzero qi, for V (t) = V0). Consider a mixing problem with equal con- stant water rates ri = ro = r, with only fresh water in the tank at the initial time, hence Q0 = 0 and with a given initial volume of water in the tank V0. Then find the function salt in the tank Q if the incoming salt concentration is given by the function qi(t) = 2 + sin(2t). Solution: We need to find the solution Q to the initial value problem Q′(t) = a(t)Q(t) + b(t), Q(0) = 0, where function a and b are given in Eq. (1.5.6). 
In this case we have a(t) = − ro (ri − ro) t+ V0 ⇒ a(t) = − r V0 = −a0, b(t) = ri qi(t) ⇒ b(t) = r [ 2 + sin(2t) ] . We are changing the sign convention for a0 so that a0 > 0. The initial value problem we need to solve is Q′(t) = −a0Q(t) + b(t), Q(0) = 0. 58 1. FIRST ORDER EQUATIONS The solution is computed using the integrating factor method and the result is Q(t) = e−a0t ∫ t 0 ea0sb(s) ds, where we used that the initial condition is Q0 = 0. Recalling the definition of the function b we obtain Q(t) = e−a0t ∫ t 0 ea0s [ 2 + sin(2s) ] ds. This is the formula for the solution of the problem, we only need to compute the integral given in the equation above. This is not straightforward though. We start with the following integral found in an integration table,∫ eks sin(ls) ds = eks k2 + l2 [ k sin(ls)− l cos(ls) ] , where k and l are constants. Therefore,∫ t 0 ea0s [ 2 + sin(2s) ] ds = [ 2 a0 ea0s ]∣∣∣t 0 + [ ea0s a20 + 2 2 [ a0 sin(2s)− 2 cos(2s) ]]∣∣∣t 0 , = 2 a0 q ( ea0t − 1 ) + ea0t a20 + 2 2 [ a0 sin(2t)− 2 cos(2t) ] + 2 a20 + 2 2 . With the integral above we can compute the solution Q as follows, Q(t) = e−a0t [ 2 a0 ( ea0t − 1 ) + ea0t a20 + 2 2 [ a0 sin(2t)− 2 cos(2t) ] + 2 a20 + 2 2 ] , recalling that a0 = r/V0. We rewrite expression above as follows, Q(t) = 2 a0 + [ 2 a20 + 2 2 − 2 a0 ] e−a0t + 1 a20 + 2 2 [ a0 sin(2t)− 2 cos(2t) ] . (1.5.9) C t y 2 f(x) = 2− 8 5 e−x Q(t) Figure 5. The graph of the function Q given in Eq. (1.5.9) for a0 = 1. 1.5. APPLICATIONS OF LINEAR EQUATIONS 59 1.5.4. Exercises. 1.5.1.- A radioactive material decays at a rate proportional to the amount present. Initially there are 50 mil- ligrams of the material present and after one hour the material has lost 80% of its original mass. (a) Find the mass of the material as function of time. (b) Find the mass of the material after four hours. (c) Find the half-life of the material. 
1.5.2.- A vessel with liquid at 18 C is placed in a cooler held at 3 C, and after 3 minutes the temperature drops to 13 C.
(a) Find the differential equation satisfied by the temperature T of the liquid in the cooler at time t.
(b) Find the temperature of the liquid as a function of time once it is put in the cooler.
(c) Find the liquid cooling constant.

1.5.3.- A tank initially contains V0 = 100 liters of water with Q0 = 25 grams of salt. The tank is rinsed with fresh water flowing in at a rate of ri = 5 liters per minute and leaving the tank at the same rate. The water in the tank is well-stirred. Find the time such that the amount of salt in the tank is Q1 = 5 grams.

1.5.4.- A tank initially contains V0 = 100 liters of pure water. Water enters the tank at a rate of ri = 2 liters per minute with a salt concentration of qi = 3 grams per liter. The instantaneously mixed mixture leaves the tank at the same rate it enters the tank. Find the salt concentration in the tank at any time t > 0. Also find the limiting amount of salt in the tank in the limit t → ∞.

1.5.5.- A tank with a capacity of Vm = 500 liters originally contains V0 = 200 liters of water with Q0 = 100 grams of salt in solution. Water containing salt with a concentration of qi = 1 gram per liter is poured in at a rate of ri = 3 liters per minute. The well-stirred water is allowed to pour out of the tank at a rate of ro = 2 liters per minute. Find the salt concentration in the tank at the time when the tank is about to overflow. Compare this concentration with the limiting concentration as t → ∞ if the tank had infinite capacity.

62 1. FIRST ORDER EQUATIONS

Using the triangle inequality for norms and the sum of a geometric series, one computes the following,

‖yn − yn+m‖ = ‖yn − yn+1 + yn+1 − yn+2 + · · · + yn+(m−1) − yn+m‖
≤ ‖yn − yn+1‖ + ‖yn+1 − yn+2‖ + · · · + ‖yn+(m−1) − yn+m‖
≤ (rⁿ + r^{n+1} + · · · + r^{n+m−1}) ‖y1 − y0‖
= rⁿ (1 + r + r² + · · · + r^{m−1}) ‖y1 − y0‖
≤ rⁿ ((1 − rᵐ)/(1 − r)) ‖y1 − y0‖.
Now choose the positive constant b such that b < min{a, 1/k}, hence 0 < r < 1. In this case the sequence {yn} is a Cauchy sequence in the Banach space C(Db), with norm ‖ ‖, hence converges. Denote the limit by y = limn→∞ yn. This function satisfies the equation y(t) = y0 + ∫ t t0 f(s, y(s)) ds, which says that y is not only continuous but also differentiable in the interior of Db, hence y is solution of the initial value problem in (1.6.1). The proof of uniqueness of the solution follows the same argument used to show that the sequence above is a Cauchy sequence. Consider two solutions y and ỹ of the initial value problem above. That means, y(t) = y0 + ∫ t t0 f(s, y(s) ds, ỹ(t) = y0 + ∫ t t0 f(s, ỹ(s) ds. Therefore, their difference satisfies ‖y − ỹ‖ = max t∈Db ∣∣∣∫ t t0 f(s, y(s)) ds− ∫ t t0 f(s, ỹ(s)) ds ∣∣∣ 6 max t∈Db ∫ t t0 ∣∣f(s, y(s))− f(s, ỹ(s))∣∣ ds 6 k max t∈Db ∫ t t0 |y(s)− ỹ(s)| ds 6 kb ‖y − ỹ‖. Since b is chosen so that r = kb < 1, we got that ‖y − ỹ‖ 6 r ‖y − ỹ‖, r < 1 ⇒ ‖y − ỹ‖ = 0 ⇒ y = ỹ. This establishes the Theorem. Example 1.6.2. Use the proof of Picard-Lindelöf’s Theorem to find the solution to y′ = 2 y + 3 y(0) = 1. Solution: We first transform the differential equation into an integral equation.∫ t 0 y′(s) ds = ∫ t 0 (2 y(s) + 3) ds ⇒ y(t)− y(0) = ∫ t 0 (2 y(s) + 3) ds. Using the initial condition, y(0) = 1, y(t) = 1 + ∫ t 0 (2 y(s) + 3) ds. 1.6. NONLINEAR EQUATIONS 63 We now define the sequence of approximate solutions: y0 = y(0) = 1, yn+1(t) = 1 + ∫ t 0 (2 yn(s) + 3) ds, n > 0. We now compute the first elements in the sequence. We said y0 = 1, now y1 is given by n = 0, y1(t) = 1 + ∫ t 0 (2 y0(s) + 3) ds = 1 + ∫ t 0 5 ds = 1 + 5t. So y1 = 1 + 5t. Now we compute y2, y2 = 1+ ∫ t 0 (2 y1(s)+3) ds = 1+ ∫ t 0 ( 2(1+5s)+3 ) ds ⇒ y2 = 1+ ∫ t 0 ( 5+10s ) ds = 1+5t+5t2. So we’ve got y2(t) = 1 + 5t+ 5t 2. 
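The iteration above is entirely mechanical, so it can be delegated to a computer. The following sketch (ours, not part of the text; it assumes the specific right-hand side 2y + 3 and y(0) = 1 of this example) represents each iterate as a list of polynomial coefficients and reproduces the iterates computed above:

```python
from fractions import Fraction

def picard_step(coeffs):
    """One Picard step for y' = 2y + 3, y(0) = 1:
    y_{n+1}(t) = 1 + integral from 0 to t of (2 y_n(s) + 3) ds.
    Polynomials are lists of coefficients, coeffs[k] multiplying t**k."""
    integrand = [2 * c for c in coeffs]   # 2 y_n(s)
    integrand[0] += 3                     # + 3
    # term-by-term integration: c s^k integrates to c t^(k+1)/(k+1)
    integral = [Fraction(0)] + [Fraction(c, k + 1) for k, c in enumerate(integrand)]
    integral[0] += 1                      # add the initial condition y(0) = 1
    return integral

y = [Fraction(1)]                         # y_0(t) = 1
for _ in range(3):
    y = picard_step(y)

# y_3(t) = 1 + 5 t + 5 t^2 + (10/3) t^3, matching the hand computation
assert y == [1, 5, 5, Fraction(10, 3)]
```

Using exact rational coefficients (`Fraction`) instead of floats keeps the iterates identical to the hand computation, with no rounding.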
Now y3,

y3 = 1 + ∫₀ᵗ (2 y2(s) + 3) ds = 1 + ∫₀ᵗ (2(1 + 5s + 5s²) + 3) ds,

so we have

y3 = 1 + ∫₀ᵗ (5 + 10s + 10s²) ds = 1 + 5t + 5t² + (10/3) t³.

So we obtained y3(t) = 1 + 5t + 5t² + (10/3) t³. We now rewrite this expression so we can get a power series expansion that can be written in terms of simple functions. The first step is done already: we write the powers of t as tⁿ, for n = 1, 2, 3,

y3(t) = 1 + 5 t¹ + 5 t² + (5(2)/3) t³.

We now multiply each term by one so that we get the factorials n! in each term,

y3(t) = 1 + 5 t¹/1! + 5(2) t²/2! + 5(2²) t³/3!.

We then realize that we can rewrite the expression above in terms of powers of (2t), that is,

y3(t) = 1 + (5/2) (2t)¹/1! + (5/2) (2t)²/2! + (5/2) (2t)³/3! = 1 + (5/2) ((2t) + (2t)²/2! + (2t)³/3!).

From this last expression it is simple to guess the N-th approximation,

yN(t) = 1 + (5/2) ((2t) + (2t)²/2! + (2t)³/3! + · · · + (2t)ᴺ/N!) = 1 + (5/2) Σ_{k=1}^{N} (2t)ᵏ/k!.

Recall now the power series expansion for the exponential,

e^{at} = Σ_{k=0}^{∞} (at)ᵏ/k! = 1 + Σ_{k=1}^{∞} (at)ᵏ/k! ⇒ Σ_{k=1}^{∞} (at)ᵏ/k! = e^{at} − 1.

Then, the limit N → ∞ is given by

y(t) = lim_{N→∞} yN(t) = 1 + (5/2) Σ_{k=1}^{∞} (2t)ᵏ/k! = 1 + (5/2)(e^{2t} − 1).

One last rewriting of the solution and we obtain

y(t) = (5/2) e^{2t} − 3/2.

C

Remark: The differential equation y′ = 2y + 3 is of course linear, so the solution to the initial value problem in Example 1.6.2 can be obtained using the methods in Section 1.1,

e^{−2t} (y′ − 2y) = 3 e^{−2t} ⇒ e^{−2t} y = −(3/2) e^{−2t} + c ⇒ y(t) = c e^{2t} − 3/2;

and the initial condition implies

1 = y(0) = c − 3/2 ⇒ c = 5/2 ⇒ y(t) = (5/2) e^{2t} − 3/2.

Example 1.6.3. Use the proof of Picard-Lindelöf’s Theorem to find the solution to

y′ = a y + b,   y(0) = ŷ0,   a, b ∈ R.

Solution: We first transform the differential equation into an integral equation,

∫₀ᵗ y′(s) ds = ∫₀ᵗ (a y(s) + b) ds ⇒ y(t) − y(0) = ∫₀ᵗ (a y(s) + b) ds.

Using the initial condition, y(0) = ŷ0,

y(t) = ŷ0 + ∫₀ᵗ (a y(s) + b) ds.
We now define the sequence of approximate solutions: y0 = y(0) = ŷ0, yn+1(t) = ŷ0 + ∫ t 0 (a yn(s) + b) ds, n > 0. We now compute the first elements in the sequence. We said y0 = ŷ0, now y1 is given by n = 0, y1(t) = y0 + ∫ t 0 (a y0(s) + b) ds = ŷ0 + ∫ t 0 (a ŷ0 + b) ds = ŷ0 + (a ŷ0 + b)t. So y1 = ŷ0 + (a ŷ0 + b)t. Now we compute y2, y2 = ŷ0 + ∫ t 0 [a y1(s) + b] ds = ŷ0 + ∫ t 0 [ a(ŷ0 + (a ŷ0 + b)s) + b ] ds = ŷ0 + (aŷ0 + b)t+ (a ŷ0 + b) at2 2 So we obtained y2(t) = ŷ0 + (aŷ0 + b)t+ (a ŷ0 + b) at2 2 . A similar calculation gives us y3, y3(t) = ŷ0 + (aŷ0 + b)t+ (a ŷ0 + b) at2 2 + (a ŷ0 + b) a2t3 3! . We now rewrite this expression so we can get a power series expansion that can be written in terms of simple functions. The first step is done already, to write the powers of t as tn, for n = 1, 2, 3, y3(t) = ŷ0 + (aŷ0 + b) (t)1 1! + (a ŷ0 + b) a t2 2! + (a ŷ0 + b) a 2 t 3 3! . 1.6. NONLINEAR EQUATIONS 67 Recall now that the power series expansion for the exponential eat = ∞∑ k=0 (at)k k! = 1 + ∞∑ k=1 (at)k k! . so we get y(t) = 1 + (e 5 2 t 2 − 1) ⇒ y(t) = e 52 t 2 . C Remark: The differential equation y′ = 5t y is of course separable, so the solution to the initial value problem in Example 1.6.4 can be obtained using the methods in Section 1.3, y′ y = 5t ⇒ ln(y) = 5t 2 2 + c. ⇒ y(t) = c̃ e 52 t 2 . We now use the initial condition, 1 = y(0) = c̃ ⇒ c = 1, so we obtain the solution y(t) = e 5 2 t 2 . Example 1.6.5. Use the Picard iteration to find the solution of y′ = 2t4 y, y(0) = 1. Solution: We first transform the differential equation into an integral equation.∫ t 0 y′(s) ds = ∫ t 0 2s4 y(s) ds ⇒ y(t)− y(0) = ∫ t 0 2s4 y(s) ds. Using the initial condition, y(0) = 1, y(t) = 1 + ∫ t 0 2s4 y(s) ds. We now define the sequence of approximate solutions: y0 = y(0) = 1, yn+1(t) = 1 + ∫ t 0 2s4 yn(s) ds, n > 0. We now compute the first four elements in the sequence. The first one is y0 = y(0) = 1, the second one y1 is given by n = 0, y1(t) = 1 + ∫ t 0 2s4 ds = 1 + 2 5 t5. 
68 1. FIRST ORDER EQUATIONS So y1 = 1 + (2/5)t 5. Now we compute y2, y2 = 1 + ∫ t 0 2s4 y1(s) ds = 1 + ∫ t 0 2s4 ( 1 + 2 5 s5 ) ds = 1 + ∫ t 0 ( 2s4 + 22 5 s9 ) ds = 1 + 2 5 t5 + 22 5 1 10 t10. So we obtained y2(t) = 1 + 2 5 t5 + 22 52 1 2 t10. A similar calculation gives us y3, y3 = 1 + ∫ t 0 2s4 y2(s) ds = 1 + ∫ t 0 2s4 ( 1 + 2 5 s5 + 22 52 1 2 s10 ) ds = 1 + ∫ t 0 ( 2s4 + 22 5 s9 + 23 52 1 2 s14 ) ds = 1 + 2 5 t5 + 22 5 1 10 t10 + 23 52 1 2 1 15 t15. So we obtained y3(t) = 1+ 2 5 t5 + 22 52 1 2 t10 + 23 53 1 2 1 3 t15. We now try reorder terms in this last expression so we can get a power series expansion we can write in terms of simple functions. This is what we do: y3(t) = 1 + 2 5 (t5) + 22 53 (t5)2 2 + 23 54 (t5)3 6 = 1 + 2 5 (t5) 1! + 22 52 (t5)2 2! + 23 53 (t5)3 3! = 1 + ( 25 t 5) 1! + ( 25 t 5)2 2! + ( 25 t 5)3 3! . From this last expression is simple to guess the n-th approximation yN(t) = 1 + N∑ n=1 ( 25 t 5)n n! , which can be proven by induction. Therefore, y(t) = lim N→∞ yN(t) = 1 + ∞∑ n=1 ( 25 t 5)n n! . Recall now that the power series expansion for the exponential eat = ∞∑ k=0 (at)k k! = 1 + ∞∑ k=1 (at)k k! . so we get y(t) = 1 + (e 2 5 t 5 − 1) ⇒ y(t) = e 25 t 5 . C 1.6. NONLINEAR EQUATIONS 69 1.6.2. Comparison of Linear and Nonlinear Equations. The main result in § 1.2 was Theorem 1.2.3, which says that an initial value problem for a linear differential equation y′ = a(t) y + b(t), y(t0) = y0, with a, b continuous functions on (t1, t2), and constants t0 ∈ (t1, t2) and y0 ∈ R, has the unique solution y on (t1, t2) given by y(t) = eA(t) ( y0 + ∫ t t0 e−A(s) b(s) ds ) , where we introduced the function A(t) = ∫ t t0 a(s) ds. From the result above we can see that solutions to linear differential equations satisfiy the following properties: (a) There is an explicit expression for the solutions of a differential equations. (b) For every initial condition y0 ∈ R there exists a unique solution. 
(c) For every initial condition y0 ∈ R the solution y(t) is defined for all (t1, t2). Remark: None of these properties hold for solutions of nonlinear differential equations. From the Picard-Lindelöf Theorem one can see that solutions to nonlinear differential equations satisfy the following properties: (i) There is no explicit formula for the solution to every nonlinear differential equation. (ii) Solutions to initial value problems for nonlinear equations may be non-unique when the function f does not satisfy the Lipschitz condition. (iii) The domain of a solution y to a nonlinear initial value problem may change when we change the initial data y0. The next three examples (1.6.6)-(1.6.8) are particular cases of the statements in (i)-(iii). We start with an equation whose solutions cannot be written in explicit form. Example 1.6.6. For every constant a1, a2, a3, a4, find all solutions y to the equation y′(t) = t2( y4(t) + a4 y3(t) + a3 y2(t) + a2 y(t) + a1 ) . (1.6.4) Solution: The nonlinear differential equation above is separable, so we follow § 1.3 to find its solutions. First we rewrite the equation as( y4(t) + a4 y 3(t) + a3 y 2(t) + a2 y(t) + a1 ) y′(t) = t2. Then we integrate on both sides of the equation,∫ ( y4(t) + a4 y 3(t) + a3 y 2(t) + a2 y(t) + a1 ) y′(t) dt = ∫ t2 dt+ c. Introduce the substitution u = y(t), so du = y′(t) dt,∫ (u4 + a4 u 3 + a3 u 2 + a2 u+ a1 ) du = ∫ t2 dt+ c. Integrate the left-hand side with respect to u and the right-hand side with respect to t. Substitute u back by the function y, hence we obtain 1 5 y5(t) + a4 4 y4(t) + a3 3 y3(t) + a2 2 y(t) + a1 y(t) = t3 3 + c. This is an implicit form for the solution y of the problem. The solution is the root of a polynomial degree five for all possible values of the polynomial coefficients. But it has been 72 1. FIRST ORDER EQUATIONS Figure 7. The function f as a slope of a segment. Definition 1.6.3. 
A direction field for the differential equation y′(t) = f(t, y(t)) is the graph on the ty-plane of the values f(t, y) as slopes of small segments.

We now show the direction fields of a few equations.

Example 1.6.10. Find the direction field of the equation y′ = y, and sketch a few solutions to the differential equation for different initial conditions.

Solution: Recall that the solutions are y(t) = y0 eᵗ. The direction field is shown in Fig. 8. C

Figure 8. Direction field for the equation y′ = y.

Example 1.6.11. Find the direction field of the equation y′ = sin(y), and sketch a few solutions to the differential equation for different initial conditions.

Solution: The equation is separable, so the solutions are

ln |(csc(y0) + cot(y0))/(csc(y) + cot(y))| = t,

for any y0 ∈ R. The graphs of these solutions are not simple to draw. But the direction field is simpler to plot, and can be seen in Fig. 9. C

Figure 9. Direction field for the equation y′ = sin(y).

Example 1.6.12. Find the direction field of the equation y′ = 2 cos(t) cos(y), and sketch a few solutions to the differential equation for different initial conditions.

Solution: We do not need to compute the explicit solution of y′ = 2 cos(t) cos(y) to have a qualitative idea of its solutions. The direction field can be seen in Fig. 10. C

Figure 10. Direction field for the equation y′ = 2 cos(t) cos(y).

CHAPTER 2

Second Order Linear Equations

Newton’s second law of motion, ma = f, is maybe one of the first differential equations ever written. It is a second order equation, since the acceleration is the second time derivative of the particle's position function. Second order differential equations are more difficult to solve than first order equations. In § 2.1 we compare results on linear first and second order equations.
While there is an explicit formula for all solutions to first order linear equations, no such formula exists for all solutions to second order linear equations. The most one can get is the result in Theorem 2.1.7. In § 2.2 we introduce the Reduction of Order Method, which finds a new solution of a second order equation when we already know one solution of the equation. In § 2.3 we find explicit formulas for all solutions to linear second order equations that are both homogeneous and with constant coefficients. These formulas are generalized to nonhomogeneous equations in § 2.5. In § 2.6 we describe a few physical systems modeled by second order linear differential equations.

2.1. Variable Coefficients

We studied first order linear equations in § 1.1-1.2, where we obtained a formula for all solutions to these equations. We could say that we know all that can be known about solutions to first order linear equations. However, this is not the case for solutions to second order linear equations, since we do not have a general formula for all solutions to these equations. In this section we present two main results. The first one is Theorem 2.1.2, which says that there are solutions to second order linear equations when the equation coefficients are continuous functions. Furthermore, these solutions have two free parameters that can be fixed by appropriate initial conditions. The second result is Theorem 2.1.7, which is the closest we can get to a formula for solutions to second order linear equations without sources (homogeneous equations). To know all solutions to these equations we only need to know two solutions that are not proportional to each other. The proof of Theorem 2.1.7 is based on Theorem 2.1.2 plus an algebraic calculation and properties of the Wronskian function, which are derived from Abel’s Theorem.

2.1.1. Definitions and Examples.
We start with a definition of second order linear differential equations. After a few examples we state the first of the main results, Theo- rem 2.1.2, about existence and uniqueness of solutions to an initial value problem in the case that the equation coefficients are continuous functions. Definition 2.1.1. A second order linear differential equation for the function y is y′′ + a1(t) y ′ + a0(t) y = b(t), (2.1.1) where a1, a0, b are given functions on the interval I ⊂ R. The Eq. (2.1.1) above: (a) is homogeneous iff the source b(t) = 0 for all t ∈ R; (b) has constant coefficients iff a1 and a0 are constants; (c) has variable coefficients iff either a1 or a0 is not constant. Remark: The notion of an homogeneous equation presented here is different from the Euler homogeneous equations we studied in § 1.3. Example 2.1.1. (a) A second order, linear, homogeneous, constant coefficients equation is y′′ + 5y′ + 6 = 0. (b) A second order, linear, nonhomogeneous, constant coefficients, equation is y′′ − 3y′ + y = cos(3t). (c) A second order, linear, nonhomogeneous, variable coefficients equation is y′′ + 2t y′ − ln(t) y = e3t. (d) Newton’s law of motion for a point particle of mass m moving in one space dimension under a force f is mass times acceleration equals force, my′′(t) = f(t, y(t), y′(t)). (e) Schrödinger equation in Quantum Mechanics, in one space dimension, stationary, is − ~ 2 2m ψ′′ + V (x)ψ = E ψ, 2.1. VARIABLE COEFFICIENTS 79 where ψ is the probability density of finding a particle of mass m at the position x having energy E under a potential V , where ~ is Planck constant divided by 2π. C Example 2.1.2. Find the differential equation satisfied by the family of functions y(t) = c1 e 4t + c2 e −4t, where c1, c2 are arbitrary constants. Solution: From the definition of y compute c1, c1 = y e −4t − c2 e−8t. 
Now compute the derivative of the function y,

y′ = 4c1 e^{4t} − 4c2 e^{−4t}.

Replace c1 from the first equation above into the expression for y′,

y′ = 4(y e^{−4t} − c2 e^{−8t}) e^{4t} − 4c2 e^{−4t} ⇒ y′ = 4y + (−4 − 4) c2 e^{−4t},

so we get an expression for c2 in terms of y and y′,

y′ = 4y − 8c2 e^{−4t} ⇒ c2 = (1/8)(4y − y′) e^{4t}.

At this point we can compute c1 in terms of y and y′, although we do not need it for what follows. Anyway,

c1 = y e^{−4t} − (1/8)(4y − y′) e^{4t} e^{−8t} ⇒ c1 = (1/8)(4y + y′) e^{−4t}.

We do not need c1 because we can get a differential equation for y from the equation for c2. Compute the derivative of that equation,

0 = c′2 = (1/2)(4y − y′) e^{4t} + (1/8)(4y′ − y′′) e^{4t} ⇒ 4(4y − y′) + (4y′ − y′′) = 0,

which gives us the following second order linear differential equation for y,

y′′ − 16 y = 0.

C

Example 2.1.3. Find the differential equation satisfied by the family of functions

y(t) = c1/t + c2 t,   c1, c2 ∈ R.

Solution: Compute y′ = −c1/t² + c2. Get one constant from y′ and put it in y,

c2 = y′ + c1/t² ⇒ y = c1/t + (y′ + c1/t²) t,

so we get

y = c1/t + t y′ + c1/t ⇒ y = 2c1/t + t y′.

Compute the constant from the expression above,

2c1/t = y − t y′ ⇒ 2c1 = t y − t² y′.

Since the left-hand side is constant,

0 = (2c1)′ = (t y − t² y′)′ = y + t y′ − 2t y′ − t² y′′,

82 2. SECOND ORDER LINEAR EQUATIONS

Proof of Theorem 2.1.4: This is a straightforward calculation,

L(c1y1 + c2y2) = (c1y1 + c2y2)′′ + a1 (c1y1 + c2y2)′ + a0 (c1y1 + c2y2).

Recall that derivation is a linear operation, and then reorder terms in the following way,

L(c1y1 + c2y2) = (c1y′′1 + a1 c1y′1 + a0 c1y1) + (c2y′′2 + a1 c2y′2 + a0 c2y2).

Introduce the definition of L back on the right-hand side. We then conclude that

L(c1y1 + c2y2) = c1 L(y1) + c2 L(y2).

This establishes the Theorem.

The linearity of an operator L translates into the superposition property of the solutions to the homogeneous equation L(y) = 0.

Theorem 2.1.5 (Superposition).
If L is a linear operator and y1, y2 are solutions of the homogeneous equations L(y1) = 0, L(y2) = 0, then for every constants c1, c2 holds L(c1 y1 + c2 y2) = 0. Remark: This result is not true for nonhomogeneous equations. Proof of Theorem 2.1.5: Verify that the function y = c1y1 + c2y2 satisfies L(y) = 0 for every constants c1, c2, that is, L(y) = L(c1y1 + c2y2) = c1 L(y1) + c2 L(y2) = c1 0 + c2 0 = 0. This establishes the Theorem. We now introduce the notion of linearly dependent and linearly independent functions. Definition 2.1.6. Two functions y1, y2 are called linearly dependent iff they are propor- tional. Otherwise, the functions are linearly independent. Remarks: (a) Two functions y1, y2 are proportional iff there is a constant c such that for all t holds y1(t) = c y2(t). (b) The function y1 = 0 is proportional to every other function y2, since holds y1 = 0 = 0 y2. The definitions of linearly dependent or independent functions found in the literature are equivalent to the definition given here, but they are worded in a slight different way. Often in the literature, two functions are called linearly dependent on the interval I iff there exist constants c1, c2, not both zero, such that for all t ∈ I holds c1y1(t) + c2y2(t) = 0. Two functions are called linearly independent on the interval I iff they are not linearly dependent, that is, the only constants c1 and c2 that for all t ∈ I satisfy the equation c1y1(t) + c2y2(t) = 0 are the constants c1 = c2 = 0. This wording makes it simple to generalize these definitions to an arbitrary number of functions. Example 2.1.7. (a) Show that y1(t) = sin(t), y2(t) = 2 sin(t) are linearly dependent. (b) Show that y1(t) = sin(t), y2(t) = t sin(t) are linearly independent. 2.1. VARIABLE COEFFICIENTS 83 Solution: Part (a): This is trivial, since 2y1(t)− y2(t) = 0. Part (b): Find constants c1, c2 such that for all t ∈ R holds c1 sin(t) + c2t sin(t) = 0. 
Evaluating at t = π/2 and t = 3π/2 we obtain

c1 + (π/2) c2 = 0,   c1 + (3π/2) c2 = 0 ⇒ c1 = 0, c2 = 0.

We conclude: the functions y1 and y2 are linearly independent. C

We now introduce the second main result in this section. If you know two linearly independent solutions to a second order linear homogeneous differential equation, then you actually know all possible solutions to that equation. Any other solution is just a linear combination of the previous two solutions. We repeat that the equation must be homogeneous. This is the closest we can get to a general formula for solutions to second order linear homogeneous differential equations.

Theorem 2.1.7 (General Solution). If y1 and y2 are linearly independent solutions of the equation L(y) = 0 on an interval I ⊂ R, where L(y) = y′′ + a1 y′ + a0 y, and a1, a0 are continuous functions on I, then there are unique constants c1, c2 such that every solution y of the differential equation L(y) = 0 on I can be written as the linear combination

y(t) = c1 y1(t) + c2 y2(t).

Before we prove Theorem 2.1.7, it is convenient to state the following definitions, which come out naturally from this Theorem.

Definition 2.1.8.
(a) The functions y1 and y2 are fundamental solutions of the equation L(y) = 0 iff y1, y2 are linearly independent and L(y1) = 0, L(y2) = 0.
(b) The general solution of the homogeneous equation L(y) = 0 is the two-parameter family of functions ygen given by

ygen(t) = c1 y1(t) + c2 y2(t),

where the arbitrary constants c1, c2 are the parameters of the family, and y1, y2 are fundamental solutions of L(y) = 0.

Example 2.1.8. Show that y1 = eᵗ and y2 = e^{−2t} are fundamental solutions to the equation

y′′ + y′ − 2y = 0.

Solution: We first show that y1 and y2 are solutions to the differential equation, since

L(y1) = y′′1 + y′1 − 2y1 = eᵗ + eᵗ − 2eᵗ = (1 + 1 − 2) eᵗ = 0,
L(y2) = y′′2 + y′2 − 2y2 = 4 e^{−2t} − 2 e^{−2t} − 2 e^{−2t} = (4 − 2 − 2) e^{−2t} = 0.

It is not difficult to see that y1 and y2 are linearly independent.
It is clear that they are not proportional to each other. A proof of that statement is the following: Find the constants c1 and c2 such that 0 = c1 y1 + c2 y2 = c1 e t + c2 e −2t t ∈ R ⇒ 0 = c1 et − 2c2 e−2t 84 2. SECOND ORDER LINEAR EQUATIONS The second equation is the derivative of the first one. Take t = 0 in both equations, 0 = c1 + c2, 0 = c1 − 2c2 ⇒ c1 = c2 = 0. We conclude that y1 and y2 are fundamental solutions to the differential equation above.C Remark: The fundamental solutions to the equation above are not unique. For example, show that another set of fundamental solutions to the equation above is given by, y1(t) = 2 3 et + 1 3 e−2t, y2(t) = 1 3 ( et − e−2t ) . To prove Theorem 2.1.7 we need to introduce the Wronskian function and to verify some of its properties. The Wronskian function is studied in the following Subsection and Abel’s Theorem is proved. Once that is done we can say that the proof of Theorem 2.1.7 is complete. Proof of Theorem 2.1.7: We need to show that, given any fundamental solution pair, y1, y2, any other solution y to the homogeneous equation L(y) = 0 must be a unique linear combination of the fundamental solutions, y(t) = c1 y1(t) + c2 y2(t), (2.1.5) for appropriately chosen constants c1, c2. First, the superposition property implies that the function y above is solution of the homogeneous equation L(y) = 0 for every pair of constants c1, c2. Second, given a function y, if there exist constants c1, c2 such that Eq. (2.1.5) holds, then these constants are unique. The reason is that functions y1, y2 are linearly independent. This can be seen from the following argument. If there are another constants c̃1, c̃2 so that y(t) = c̃1 y1(t) + c̃2 y2(t), then subtract the expression above from Eq. (2.1.5), 0 = (c1 − c̃1) y1 + (c2 − c̃2) y2 ⇒ c1 − c̃1 = 0, c2 − c̃2 = 0, where we used that y1, y2 are linearly independent. This second part of the proof can be obtained from the part three below, but I think it is better to highlight it here. 
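For the fundamental pair y1(t) = eᵗ, y2(t) = e^{−2t} of Example 2.1.8, the uniqueness of the constants can be made concrete: c1, c2 are recovered from the data y(0), y′(0) by solving the 2×2 linear system y(0) = c1 + c2, y′(0) = c1 − 2c2. A small sketch (ours, not part of the text), using plain Cramer's rule:

```python
import math

def coefficients_from_data(d1, d2):
    """Solve d1 = c1 + c2, d2 = c1 - 2*c2 for (c1, c2).
    The coefficient matrix [[1, 1], [1, -2]] has determinant -3 != 0,
    so the solution exists and is unique."""
    det = (1) * (-2) - (1) * (1)          # = -3
    c1 = (d1 * (-2) - 1 * d2) / det       # Cramer's rule, first column replaced
    c2 = (1 * d2 - 1 * d1) / det          # Cramer's rule, second column replaced
    return c1, c2

# Recover the coefficients of y(t) = 2 e^t + e^{-2t} from y(0) = 3, y'(0) = 0:
c1, c2 = coefficients_from_data(3.0, 0.0)
assert (c1, c2) == (2.0, 1.0)

# Sanity check: the reconstructed y matches the initial data at t = 0.
y  = lambda t: c1 * math.exp(t) + c2 * math.exp(-2 * t)
dy = lambda t: c1 * math.exp(t) - 2 * c2 * math.exp(-2 * t)
assert abs(y(0.0) - 3.0) < 1e-12 and abs(dy(0.0)) < 1e-12
```

The nonzero determinant here is exactly the Wronskian of y1, y2 evaluated at t0 = 0, which is the quantity studied in the rest of the proof.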
So we only need to show that the expression in Eq. (2.1.5) contains all solutions. We need to show that we are not missing any other solution. In this third part of the argument enters Theorem 2.1.2. This Theorem says that, in the case of homogeneous equations, the initial value problem L(y) = 0, y(t0) = d1, y ′(t0) = d2, always has a unique solution. That means, a good parametrization of all solutions to the differential equation L(y) = 0 is given by the two constants, d1, d2 in the initial condition. To finish the proof of Theorem 2.1.7 we need to show that the constants c1 and c2 are also good to parametrize all solutions to the equation L(y) = 0. One way to show this, is to find an invertible map from the constants d1, d2, which we know parametrize all solutions, to the constants c1, c2. The map itself is simple to find, d1 = c1 y1(t0) + c2 y2(t0) d2 = c1 y ′ 1(t0) + c2 y ′ 2(t0). We now need to show that this map is invertible. From linear algebra we know that this map acting on c1, c2 is invertible iff the determinant of the coefficient matrix is nonzero,∣∣∣∣y1(t0) y2(t0)y′1(t0) y′2(t0) ∣∣∣∣ = y1(t0) y′2(t0)− y′1(t0)y2(t0) 6= 0. 2.1. VARIABLE COEFFICIENTS 87 We now show one application of Abel’s Theorem. Example 2.1.11. Find the Wronskian of two solutions of the equation t2 y′′ − t(t+ 2) y′ + (t+ 2) y = 0, t > 0. Solution: Notice that we do not known the explicit expression for the solutions. Neverthe- less, Theorem 2.1.12 says that we can compute their Wronskian. First, we have to rewrite the differential equation in the form given in that Theorem, namely, y′′ − (2 t + 1 ) y′ + ( 2 t2 + 1 t ) y = 0. Then, Theorem 2.1.12 says that the Wronskian satisfies the differential equation W ′12(t)− (2 t + 1 ) W12(t) = 0. This is a first order, linear equation for W12, so its solution can be computed using the method of integrating factors. That is, first compute the integral − ∫ t t0 (2 s + 1 ) ds = −2 ln ( t t0 ) − (t− t0) = ln ( t20 t2 ) − (t− t0). 
Then, the integrating factor µ is given by

µ(t) = (t0²/t²) e^{−(t−t0)},

which satisfies the condition µ(t0) = 1. So the solution W12 is given by

(µ(t) W12(t))′ = 0 ⇒ µ(t) W12(t) − µ(t0) W12(t0) = 0,

so the solution is

W12(t) = W12(t0) (t²/t0²) e^{(t−t0)}.

If we call the constant c = W12(t0)/[t0² e^{t0}], then the Wronskian has the simpler form

W12(t) = c t² eᵗ.

C

We now state and prove the statement we need to complete the proof of Theorem 2.1.7.

Theorem 2.1.13 (Wronskian II). If y1, y2 are fundamental solutions of L(y) = 0 on I ⊂ R, then W12(t) ≠ 0 on I.

Remark: Instead of proving the Theorem above, we prove an equivalent statement, its contrapositive.

Corollary 2.1.14 (Wronskian II). If y1, y2 are solutions of L(y) = 0 on I ⊂ R and there is a point t1 ∈ I such that W12(t1) = 0, then y1, y2 are linearly dependent on I.

Proof of Corollary 2.1.14: We know that y1, y2 are solutions of L(y) = 0. Then, Abel’s Theorem says that their Wronskian W12 is given by

W12(t) = W12(t0) e^{−A1(t)},

for any t0 ∈ I. Choosing the point t0 to be t1, the point where by hypothesis W12(t1) = 0, we get that W12(t) = 0 for all t ∈ I. Knowing that the Wronskian vanishes identically on I, we can write

y1 y′2 − y′1 y2 = 0 on I.

If either y1 or y2 is the zero function, then the set is linearly dependent. So we can assume that both are not identically zero. Let’s assume there exists t2 ∈ I such that y1(t2) ≠ 0. By continuity, y1 is nonzero in an open neighborhood I1 ⊂ I of t2. So in that neighborhood we can divide the equation above by y1²,

(y1 y′2 − y′1 y2)/y1² = 0 ⇒ (y2/y1)′ = 0 ⇒ y2/y1 = c on I1,

where c ∈ R is a constant. So we conclude that y2 is proportional to y1 on the open set I1. That means that the function y(t) = y2(t) − c y1(t) satisfies

L(y) = 0,   y(t2) = 0,   y′(t2) = 0.

Therefore, the existence and uniqueness Theorem 2.1.2 says that y(t) = 0 for all t ∈ I. This finally shows that y1 and y2 are linearly dependent.
This establishes the Theorem.

2.1.6. Exercises.

2.1.1.- Find the constants c and k such that the function y(t) = c tᵏ is a solution of
−t³ y′′ + t² y′ + 4t y = 1.

2.1.2.- Let y(t) = c1 t + c2 t² be the general solution of a second order linear differential equation L(y) = 0. By eliminating the constants c1 and c2, find the differential equation satisfied by y.

2.1.3.- (a) Verify that y1(t) = t² and y2(t) = 1/t are solutions to the differential equation
t² y′′ − 2y = 0,   t > 0.
(b) Show that y(t) = a t² + b/t is a solution of the same equation for all constants a, b ∈ R.

2.1.4.- Find the longest interval where the solution y of the initial value problems below is defined. (Do not try to solve the differential equations.)
(a) t² y′′ + 6y = 2t, y(1) = 2, y′(1) = 3.
(b) (t − 6) y′′ + 3t y′ − y = 1, y(3) = −1, y′(3) = 2.

2.1.5.- If the graph of y, a solution to a second order linear differential equation L(y(t)) = 0 on the interval [a, b], is tangent to the t-axis at some point t0 ∈ [a, b], then find the solution y explicitly.

2.1.6.- Can the function y(t) = sin(t²) be a solution on an open interval containing t = 0 of a differential equation
y′′ + a(t) y′ + b(t) y = 0,
with continuous coefficients a and b? Explain your answer.

2.1.7.- Compute the Wronskian of the following functions:
(a) f(t) = sin(t), g(t) = cos(t).
(b) f(x) = x, g(x) = x eˣ.
(c) f(θ) = cos²(θ), g(θ) = 1 + cos(2θ).

2.1.8.- Verify whether the functions y1, y2 below are a fundamental set for the differential equations given below:
(a) y1(t) = cos(2t), y2(t) = sin(2t),
y′′ + 4y = 0.
(b) y1(t) = eᵗ, y2(t) = t eᵗ,
y′′ − 2y′ + y = 0.
(c) y1(x) = x, y2(x) = x eˣ,
x² y′′ − x(x + 2) y′ + (x + 2) y = 0.

2.1.9.- If the Wronskian of any two solutions of the differential equation
y′′ + p(t) y′ + q(t) y = 0
is constant, what does this imply about the coefficients p and q?
2.1.10.- * Suppose y1 is a solution of the IVP

y′′1 + a1 y′1 + a0 y1 = 0,   y1(0) = 0,   y′1(0) = 5,

and y2 is a solution of the IVP

y′′2 + a1 y′2 + a0 y2 = 0,   y2(0) = 0,   y′2(0) = 1,

that is, the same differential equation and the same initial condition for the function, but different initial conditions for the derivatives. Then show that the functions y1 and y2 must be proportional to each other, y1(t) = c y2(t), and find the proportionality factor c.

Hint 1: Theorem 2.1.2 says that the initial value problem

y′′ + a1 y′ + a0 y = 0,   y(0) = 0,   y′(0) = 0,

has a unique solution, and it is y(t) = 0 for all t.

Hint 2: Find the initial value problem satisfied by the function yc(t) = y1(t) − c y2(t), and fine-tune c so that Hint 1 applies.
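Looking back at Example 2.1.11: the functions y1(t) = t and y2(t) = t eᵗ happen to solve the equation t² y′′ − t(t + 2) y′ + (t + 2) y = 0 of that example (this is not stated there, but is easy to verify), so the Wronskian formula W12(t) = c t² eᵗ obtained from Abel's Theorem can be checked directly. A small numeric sketch (ours, not part of the text):

```python
import math

# Residual of the equation t^2 y'' - t (t + 2) y' + (t + 2) y = 0
# from Example 2.1.11, evaluated on given values of y, y', y''.
def residual(y, dy, d2y, t):
    return t**2 * d2y - t * (t + 2) * dy + (t + 2) * y

for t in (0.5, 1.0, 2.0, 3.7):
    e = math.exp(t)
    # y1 = t:       y1' = 1,            y1'' = 0
    assert abs(residual(t, 1.0, 0.0, t)) < 1e-9
    # y2 = t e^t:   y2' = (1 + t) e^t,  y2'' = (2 + t) e^t
    assert abs(residual(t * e, (1 + t) * e, (2 + t) * e, t)) < 1e-9
    # Wronskian W12 = y1 y2' - y1' y2 = t^2 e^t, i.e. c t^2 e^t with c = 1
    W = t * (1 + t) * e - 1.0 * (t * e)
    assert abs(W - t**2 * e) < 1e-9
```

The check confirms both that y1, y2 are solutions and that their Wronskian has exactly the form predicted by Abel's Theorem, here with c = 1.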