Welcome to Calculus. I'm Professor Ghrist, and we're about to begin Lecture 48 on Numerical ODEs. Let's retreat from the digital back to the analog world and see how the two relate. We have used our intuition for smooth calculus to tell us things about the discrete world. In this lesson, we're going to see how the discrete calculus helps us solve smooth problems, in particular the problem of solving ordinary differential equations. In this chapter, we've been using our intuition from differential calculus to understand discrete calculus. But the inference can run backwards as well, and in this lesson we're going to use our discrete understanding to solve problems in differential calculus. In particular, we're going to consider differential equations. Recall some of the main classes. In the simplest case, one just integrates both sides of the differential equation. Others are not so easy. But if the equation is, say, separable, then again, we can integrate and solve. In more complex examples still, say a linear differential equation, we can apply an integrating factor. This is great, but it will not help us solve the general case. And indeed, at some point, calculus fails to help, and we have to turn to a numerical method for solution. We're going to focus on the initial value problem for an ODE of the form dx/dt = f(x, t). We can think of that differential equation as defining a slope field in the x-t plane that tells us how x changes with t, or at what rate. Now, given some initial condition x_0 at an initial time t_0, what we want to do is approximate the final value x* at some final time t*. Now, we don't know the solution x(t), and we're not going to be able to use calculus to get that analytic solution. So what we're going to do is try to approximate that final value, x*, as best we can. To approximate, we're going to use sequences. Let's begin, first of all, along the time axis.
We're going to write a sequence t_n that goes from t_0, t_1, all the way up to t_N, where t_0 is the initial time and t_N is our final time, t*. This is going to be a uniform grid, meaning that we have a step size h = (t* - t_0)/N, the final time minus the initial time, divided by N. We are, likewise, going to approximate the x values by a sequence x_n, where x_0 is the initial condition on x. And hopefully our final x value, x_N, is close enough to x* that we have a good approximation. The simplest way to do this is called Euler's method. Perhaps you've seen it before, but we're going to look at it from a discrete calculus perspective. We're going to take our differential equation and discretize it using sequences, replacing the derivative with a literal quotient of differences. What does this mean? Well, if we consider the nth time step, then on the right we have f evaluated at (x_n, t_n). On the left we have the quotient of differences: Delta x is x_(n+1) - x_n, and Delta t is t_(n+1) - t_n. Of course, since we're using a uniform grid, that Delta t is simply equal to h, our step size. And now, with a little bit of rearrangement, we obtain Euler's method. This is an update rule. It tells you that if you know x_n and t_n, you can obtain x_(n+1) as x_n + h times f evaluated at (x_n, t_n). That's the method, but what does it mean geometrically? Well, if we are at time step t_n and we have our approximation x_n, we'd like to get to an approximation at t_(n+1). We don't know the true solution that passes through that point. But we do know the value of f there. That is telling us the slope. What Euler's method is really doing is projecting forward along that slope to time t_(n+1); at heart, Euler's method is a linearization. So why does it work? Well, it works because we repeat this at each time step, linearizing and approximating as we move from time step to time step. Let's look at a simple example, a differential equation that we already know how to solve: dx/dt = x.
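The update rule can be sketched in a few lines of code. This is a minimal illustration of the method just described; the function name euler and its signature are my own, not the lecture's.

```python
def euler(f, x0, t0, t_final, N):
    """Approximate x(t_final) for dx/dt = f(x, t) via Euler's method.

    Takes N uniform steps of size h = (t_final - t0) / N,
    applying the update x_(n+1) = x_n + h * f(x_n, t_n).
    """
    h = (t_final - t0) / N
    x, t = x0, t0
    for _ in range(N):
        x = x + h * f(x, t)  # linearize: project forward along the slope at (x_n, t_n)
        t = t + h
    return x
```

Each pass through the loop is exactly one linearization step: evaluate the slope field at the current point, then project forward by h.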
We'll choose an initial time of 0, an initial value x_0 = 1, and a final time t* = 1. What is x*? Well, we already know that the solution to this equation is e^t, and so x* really should be e. But let's see what happens when we choose a step size of h, that is, the final time minus the initial time over N; in other words, 1/N. Euler's method tells us that x_(n+1) = x_n + h times f(x_n, t_n). What's f in this case? It's the right-hand side of the ODE, in this case just the x value. So we can simplify Euler's update rule to x_(n+1) = x_n + h x_n. Factoring out the x_n, we get the quantity (1 + h) times x_n. We've seen this before. This is really a recursion relation: the shift operator E applied to the sequence x equals (1 + h) times x. Now, you may recall from our lectures on discrete calculus that we know how to solve such simple recurrence relations. The sequence x is really x_0 times (1 + h)^n. It is that coefficient of 1 + h that determines the solution. This means that our final value, x_N, is x_0 times (1 + h)^N. If we substitute in 1/N for h, then we obtain something that should look familiar: (1 + 1/N)^N. You may recall that for N large, this is a very good approximation to e, our desired final value. What happens when we do an example, the answer to which we don't already know? Consider dx/dt = cos t + sin x with an initial time of 0, an initial value of 0, and a final time of pi. I don't know what the final value should be. This is not a separable equation, and so we're going to fire up the computer. Program in, let's say, 5 time steps, and see what it says. We will break up the time interval into five equal subintervals, and then use Euler's method to approximate what x_5 should be. And we get a value of about 2.4. Now, that's with 5 time steps. If we use a greater number of time steps, Euler's method will return, hopefully, a more accurate approximation to that final value.
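Here is a rough sketch of that computer experiment, written as a plain Euler loop like the one just described. The variable names are mine, not the lecture's.

```python
import math

def f(x, t):
    """Right-hand side of the ODE dx/dt = cos t + sin x."""
    return math.cos(t) + math.sin(x)

N = 5                    # number of time steps
h = math.pi / N          # uniform grid on [0, pi]
x, t = 0.0, 0.0          # initial condition: x(0) = 0
for _ in range(N):
    x = x + h * f(x, t)  # Euler update
    t = t + h
print(x)                 # about 2.41, matching the lecture's value of "about 2.4"
```

Re-running with larger N (say, 80) gives the refined estimates the lecture quotes next.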
And as we move toward, let's say, 80 time steps, we seem to be converging. The final answer looks like it ought to be about 2.25 or 2.24, something like that. It's hard to say without running more examples. Now, why is it that Euler's method works and seems to converge? How quickly does it converge? Well, we're going to think from a Taylor series point of view. Consider what happens when we expand the solution x(t) about t_0. x(t_0 + h) is, by a Taylor series, x_0 + h times the derivative of x with respect to t, evaluated at t_0, plus something in big O of h^2. Now, what's that derivative evaluated at t_0? That is simply f at (x_0, t_0). And here we see Euler's method hiding inside of the Taylor series. Euler's method is really just Taylor expansion, dropping all of the terms of order two and greater. Because of this, we say that Euler's method is a 1st-order method, and that the error you make at each step is in big O of h^2, where h is our time step, remember. Now, what is the net error when you do N such steps? Well, it's N times something in big O of h^2. You might think, aha, N is a constant, so I can forget about it since we're doing big O. No, N depends on h. It's really some constant, the change in time, divided by h. And so, being careful, we see that the net error is in big O of h. We'll look at two other methods for doing numerical ODEs. The first is called the midpoint method. And it begins easily enough: just like the Euler method, one projects forward. Call this change in x that we get from Euler's method kappa. Now, this is the midpoint method, so what we're going to do is take the time value that is in between t_n and t_(n+1) and try to approximate what the differential equation is doing there. We will use evaluation at that midpoint to obtain a new slope, pull that back to x_n, and then project that forward at that midpoint slope to get x_(n+1).
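The geometric recipe just described can be sketched as a single update step. This is my own hedged sketch (the name midpoint_step is not from the lecture): take the Euler projection kappa, re-evaluate the slope at the temporal midpoint, and project forward at that new slope.

```python
def midpoint_step(f, x, t, h):
    """One step of the midpoint method for dx/dt = f(x, t).

    kappa is Euler's projected change in x; the slope is then
    re-evaluated at the midpoint time t + h/2, using x + kappa/2.
    """
    kappa = h * f(x, t)                        # Euler's projection forward
    return x + h * f(x + kappa / 2, t + h / 2)  # project at the midpoint slope
```

One call advances the approximation one step; iterating it over the uniform grid gives the full method.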
Now, when we write all this out algebraically, it involves this constant kappa, and it looks a little bit complicated. You don't have to memorize this, but what you do need to know is that this is a second-order method. That means that we're really doing a Taylor expansion and keeping all the terms up to and including degree 2. The step error in the midpoint method is in big O of h^3, meaning that the net error is in big O of h^2. This is a better approximation, at the expense of being algebraically more complex. Now, with that in mind, you might wonder: wouldn't it be possible to do higher-order approximations using midpoints, pulling back, evaluating again, pushing forward, pulling back, and then averaging everything all up together? Indeed this is possible, and this is a wonderful method called the Runge-Kutta method. There are a lot of equations that go into this. You don't have to memorize them. What you need to know is that, first, this exists, and second, it's very helpful even if it seems rather mysterious. This is a 4th-order method, a very good approximation. Your step error is in big O of h^5, and your net error is in big O of h^4. Let's compare all these methods together in the one simple example that we know how to solve: dx/dt = x, with the initial time 0, the initial value 1, the final time 1, and our final value, x*, which is known to be e. We've already seen what happens in the general case for Euler as h goes to zero. Let's be dumb. Let's use one step. We're going to take our step size h = 1. In Euler's method, what do we get? Well, we get simply that x_1 = x_0 + h times x_0. Since x_0 = 1, we get an approximation of 1 + 1. That's not a terrible approximation to e, given that we've only done one step. But that's all we get from Euler. Now, with the midpoint method, we already know what our kappa value is: kappa = h times f(x_0, t_0) = 1. What happens when we take the formula for the midpoint method?
Well, we get a value of x_1 = x_0 + h times (x_0 + one-half kappa). That means we get 1 + 1 + one-half. That's a better approximation to e. Finally, when we go all the way and write down all of those formulae for the Runge-Kutta 4th-order method, then, with a little bit of algebra, you'll see we get 1 + 1 + one-half + one-sixth + one twenty-fourth. And now you see the pattern. These numerical methods are really just giving higher and higher terms from the Taylor series. The moral of our story is that by looking at things from a Taylor series point of view, you can understand errors. We've now seen just a small taste of how discrete or digital methods help us solve smooth calculus problems, but this is by no means the only possible example. In our next lesson, we'll consider how to use the discrete calculus to solve problems involving definite integrals.
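As a closing check, that one-step comparison can be reproduced in a few lines of code. This sketch uses the standard stage formulas for the classical 4th-order Runge-Kutta method; the variable names are mine, and the lecture does not ask you to memorize these formulas.

```python
def f(x, t):
    return x  # the model equation dx/dt = x

x0, t0, h = 1.0, 0.0, 1.0  # one giant step across [0, 1]

# Euler: x1 = x0 + h f(x0, t0)
euler = x0 + h * f(x0, t0)

# Midpoint: project halfway with kappa, re-evaluate the slope there
kappa = h * f(x0, t0)
midpoint = x0 + h * f(x0 + kappa / 2, t0 + h / 2)

# Classical 4th-order Runge-Kutta: four slope evaluations, averaged
k1 = f(x0, t0)
k2 = f(x0 + h * k1 / 2, t0 + h / 2)
k3 = f(x0 + h * k2 / 2, t0 + h / 2)
k4 = f(x0 + h * k3, t0 + h)
rk4 = x0 + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

print(euler, midpoint, rk4)  # 2.0, 2.5, 2.70833..., versus e = 2.71828...
```

The three outputs are exactly the partial sums of the Taylor series for e: 1 + 1, then 1 + 1 + 1/2, then 1 + 1 + 1/2 + 1/6 + 1/24.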