Welcome to the last week of our course, Analysis of a Complex Kind. We'll study infinite series, power series, and Taylor series this week. And we'll even get to the Riemann zeta function, as well as the Riemann hypothesis and its relation to prime numbers. Let's start with a definition. An infinite series is just an infinite sum: we're going to add up infinitely many numbers. What does that mean? We say such a series converges to a number S, and this S is also going to be a complex number, if the sequence of partial sums converges to S. The sequence of partial sums is the sequence you get when you stop at a certain point and then move that stopping point farther and farther out; the nth partial sum is the sum of the first n terms. You've probably seen infinite series before in the context of real numbers. Now these a_k's are complex numbers. And one can easily see that a series of complex numbers converges if and only if the corresponding series of real parts converges and the corresponding series of imaginary parts converges. Let's start with an example. Consider the series of the z to the k's, where z is some complex number. For example, if z were equal to i over 2, then for k equal to 0 you get 1, for k equal to 1 you get i over 2, for k equal to 2 you get i squared over 4, then i cubed over 8, and so forth. You could split that up into real and imaginary parts: the real parts are 1, then minus one-fourth, then plus one-sixteenth, and so forth. And the corresponding imaginary parts are one-half (from the i over 2 term), then minus one-eighth (since i cubed is minus i), then plus one thirty-second, and so forth. So you could look at the series that way and determine, for each particular z, whether the two corresponding real series converge.
Or we can look at this series more generally, for a general z, and try to find a general way to figure out where the series converges. And that's what we're going to do. The partial sums are given by simply adding from k equals 0 to n: 1 + z + z squared + z cubed, all the way to z to the n. And here's a great trick to find a closed formula that will help us figure out whether these partial sums Sn have a limit as n goes to infinity. So here's the trick. Sn, again, is the sum 1 plus z plus z squared, all the way to z to the n. And now we look at z times Sn; in other words, we multiply each term by z. So 1 times z is z, z times z is z squared, and so forth, all the way to the last term: z to the n times z is z to the n plus 1. So z times Sn is pretty close to Sn, except that the first term of 1 isn't there, and there's an extra last term. So when I take Sn and subtract from it z times Sn, most of the terms cancel out. All that's left is the 1 from the Sn term and the extra z to the n plus 1 from the z times Sn term. So Sn minus z times Sn is 1 minus z to the n plus 1. But now on the left-hand side I can factor out an Sn; that's Sn times (1 minus z). And so I can divide both sides of the equation by 1 minus z and find that Sn is 1 minus z to the n plus 1, over 1 minus z. All of a sudden I have a closed-form expression for Sn, and I can figure out whether it converges as n goes to infinity. And it turns out that depends on the value of z. Remember, if z is less than 1 in absolute value and I take higher and higher powers of z, then those powers converge to the origin. On the other hand, when z is bigger than 1 in absolute value, the modulus of the powers gets larger and larger, and z to the n goes to infinity. When z is equal to 1 in absolute value, the powers of z simply walk around the unit circle, but they don't converge, because they keep walking around the unit circle.
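As a quick aside (not part of the lecture), the telescoping trick above can be spot-checked numerically. This is just a sketch; the test point z = 0.3 + 0.4i and the cutoff n = 10 are arbitrary choices.

```python
# Sanity check of the closed form Sn = (1 - z^(n+1)) / (1 - z)
# for the partial sums of the geometric series.

z = 0.3 + 0.4j  # arbitrary test point with |z| < 1
n = 10

# Direct partial sum: 1 + z + z^2 + ... + z^n
direct = sum(z**k for k in range(n + 1))

# Closed form obtained by subtracting z*Sn from Sn and dividing by 1 - z
closed = (1 - z**(n + 1)) / (1 - z)

print(abs(direct - closed))  # tiny, on the order of floating-point error
```

The two values agree up to rounding, for any z different from 1.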
In other words, if z is less than 1 in absolute value, then this term z to the n plus 1 goes to 0, so that in the limit Sn goes to 1 over 1 minus z. And we can say that the sum from k equals 0 to infinity, so that series we were looking at, equals 1 over 1 minus z, for z less than 1 in absolute value. So what happens when z is greater than or equal to 1 in absolute value? Here's a theorem: if a series converges, then the a_k's must go to 0 as k goes to infinity. So it's a necessary condition for the a_k's to go to 0 in order for the series to converge, but it's actually not sufficient. There are examples of series where the a_k terms do go to 0, yet the series does not converge; they don't go to 0 fast enough to make the series converge. So the a_k's have to go to 0 at a certain rate. But if the series converges, then the a_k's definitely have to go to 0. In our example, if z is greater than or equal to 1 in absolute value, then certainly z to the k does not go to 0. It actually goes to infinity if z is bigger than 1 in absolute value, and it stays at 1 in absolute value for z's of absolute value 1. And therefore the series of the z to the k's cannot converge, by this theorem, because those z to the k's don't go to 0. We say the series diverges for z greater than or equal to 1 in absolute value. Let's now analyze the real and imaginary parts of this equation. So we found that for z less than 1 in absolute value, the sum of the z to the k's is 1 over 1 minus z. What does that tell us about the real and imaginary parts? Let's write z in polar form: that's r e to the i theta. Then z to the k can be found by taking the radius to the power k, and then, by de Moivre's formula, e to the i theta to the kth power is equal to e to the i k theta. And e to the i k theta, that is cosine of k theta plus i sine of k theta. So the sum of the z to the k's is the same as the sum of r to the k cosine of k theta, which is the real part of the series of the z to the k's, plus i times the sum of r to the k sine of k theta, which is the imaginary part.
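The limiting value 1 over 1 minus z can also be checked numerically, here at the lecture's example point z = i/2. This is an illustration only; 60 terms is an arbitrary cutoff, plenty since the powers of z shrink geometrically.

```python
# The geometric series sums to 1/(1-z) when |z| < 1.
# z = i/2 is the example from the lecture.

z = 0.5j
N = 60  # |z|^60 = 2^-60, negligible

partial = sum(z**k for k in range(N))

print(partial)      # close to 1/(1 - z)
print(1 / (1 - z))  # mathematically equals 0.8 + 0.4i
```

Note that for |z| >= 1 the same partial sums never settle down, matching the divergence argument above.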
We just rewrote the left-hand side of this equation in terms of real and imaginary parts. Now let's rewrite the right-hand side of the same equation. What is 1 over 1 minus z? How do we split that up into real and imaginary parts? Again, we write z as r e to the i theta. And we remember a trick: in order to find 1 over a complex number, we multiply top and bottom by the complex conjugate. What is the complex conjugate of 1 minus r e to the i theta? The conjugate of 1 is 1, because 1 is real; r is real as well; and the conjugate of e to the i theta is e to the minus i theta. So the conjugate is 1 minus r e to the minus i theta, and we multiply top and bottom by that. The top we can expand, because e to the minus i theta, recall, is cosine of minus theta, and cosine of minus theta is the same as cosine of theta because cosine is an even function, plus i times sine of minus theta, and sine of minus theta is minus sine of theta because sine is an odd function. So the top becomes 1 minus r times cosine theta, minus minus, so plus r times sine theta, times i. The denominator we can multiply through: we have 1 times 1, which is 1; then 1 times minus r e to the minus i theta; then minus r e to the i theta times 1; and finally minus r e to the i theta times minus r e to the minus i theta, which gives you this r squared. So in the denominator we have the terms minus r times (e to the i theta plus e to the minus i theta). But e to the i theta plus e to the minus i theta, if we were to divide it by 2, would be cosine theta; we can't just divide by 2, we have to then multiply by 2 to make it right. So these two terms in the denominator equal minus r times 2 cosine theta, and that's what you see here on the right. Altogether, we have found the real part of 1 over 1 minus z, right here, because these terms are all real, and the imaginary part is right here. We also found the real part of the sum of the z to the k's, that's right here, and the imaginary part, that's right here.
Because we have an equality right here, the left-hand side and the right-hand side are the same. That means the real parts must agree with each other and the imaginary parts must agree with each other. In other words, the real parts agree: the sum of r to the k cosine k theta must equal 1 minus r cosine theta, divided by 1 minus 2 r cosine theta plus r squared, and that's what you see right here. Furthermore, the imaginary parts must agree: the sum of r to the k sine of k theta must equal r sine theta over the same denominator as before. Here's another example. Let's look at the sum of i to the k over k, for k from 1 to infinity. Does that series converge? Let's write out a few terms. Notice that the series starts at 1, because we can't really have it start at 0; we would be dividing by 0, and we don't want to do that. Series can start at other points, and that doesn't influence their convergence, because we're only changing finitely many terms, which affect the value of the series but not whether it converges. So for k equals 1 we get i over 1. For k equals 2 we get i squared over 2, so minus one-half. For k equals 3 we get i cubed over 3, which is minus i over 3. For k equals 4 we get i to the fourth over 4, which is 1 over 4. And then we keep going: plus i over 5, minus 1/6, minus i over 7, plus 1/8, and so forth. So does this series converge? We could split it up into real and imaginary parts, but let's first make a false start by looking at the series of absolute values. The absolute value of i to the k over k is simply 1 over k. That series is known as the harmonic series, and it is known to diverge, even though 1 over k definitely goes to 0 as k goes to infinity; again, that's not sufficient for convergence of the series. Here, by the way, is a way to see why the harmonic series does not converge. You write the series of the 1 over k's as 1 plus one-half plus one-third plus one-fourth and so on, and then you group these numbers in a certain smart way.
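The two identities just derived can be spot-checked numerically. This is a sketch, not part of the lecture; the values r = 0.7 and theta = 1.1 are arbitrary, and 200 terms suffice since r to the 200 is negligible.

```python
import math

# Spot check of the identities, for 0 <= r < 1:
#   sum_{k>=0} r^k cos(k*theta) = (1 - r*cos(theta)) / (1 - 2*r*cos(theta) + r^2)
#   sum_{k>=0} r^k sin(k*theta) =      r*sin(theta)  / (1 - 2*r*cos(theta) + r^2)

r, theta = 0.7, 1.1  # arbitrary test values
N = 200              # truncation point; r^200 is vanishingly small

cos_sum = sum(r**k * math.cos(k * theta) for k in range(N))
sin_sum = sum(r**k * math.sin(k * theta) for k in range(N))

denom = 1 - 2 * r * math.cos(theta) + r**2
print(cos_sum, (1 - r * math.cos(theta)) / denom)  # the two agree
print(sin_sum, r * math.sin(theta) / denom)        # likewise
```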
We group 1/3 + 1/4 together and observe that 1/3 is greater than or equal to 1/4, so this term, 1/3 + 1/4, is greater than or equal to 1/4 + 1/4, which is one-half. Similarly, we group one-fifth through one-eighth together, and observe that one-fifth is greater than or equal to one-eighth, one-sixth is greater than or equal to one-eighth, one-seventh is greater than or equal to one-eighth, and one-eighth is equal to one-eighth. So altogether these four terms are greater than or equal to 4 times one-eighth, which is one-half. Then we take the next eight terms and do the same thing, and we keep being able to squeeze out one-halves, lots of one-halves. You get infinitely many one-halves, which means the series becomes as large as we want it to. It does not converge: wherever I stop, the partial sums keep getting bigger and bigger by significant amounts, and therefore the series diverges. So if I put absolute values around the terms of the series, it diverges. But how about the series itself, without the absolute values? Does it converge? We'll have to split it up into real and imaginary parts, like I did at the top. And we noticed, when we wrote out the individual terms of the series, that when k is even, taking i to an even power makes it a real number; it's either 1 or minus 1 that we get. So when k is of the form 2n, where n is a natural number, then i to the k is really i to the 2n. And i to the 2n can be written as i squared to the power n, so it's just minus 1 to the n. That's definitely real; it's 1 or negative 1. On the other hand, when k is odd, then k is of the form 2n plus 1; those are all the odd numbers. Then i to the k is of the form i to the 2n plus 1. And I can write that as i times i to the 2n, and since i to the 2n is minus 1 to the n, this i to the 2n plus 1 becomes i times minus 1 to the n. That's purely imaginary; it's either i or negative i, depending on whether n is even or odd.
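The grouping argument says the partial sum through 2 to the m terms is at least 1 + m/2, so the harmonic series grows past any bound. A small numerical illustration (not part of the lecture):

```python
# The grouping argument: the harmonic partial sum through 2**m terms
# is at least 1 + m/2, so the partial sums grow without bound.

def harmonic(n):
    """Partial sum 1 + 1/2 + ... + 1/n of the harmonic series."""
    return sum(1 / k for k in range(1, n + 1))

for m in range(1, 15):
    h = harmonic(2**m)
    print(m, h, 1 + m / 2)  # h always meets or exceeds 1 + m/2
```

The growth is painfully slow (roughly like the logarithm of n), which is exactly why the terms "don't go to 0 fast enough".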
Therefore, the series of the i to the k over k's can be split up into the real part, which is this one right here, and the imaginary part, which is this one right here. Notice that the first series starts with n equals 1, because the first even number we get that way is 2, which is the smallest even number we have. The second series starts at n equals 0, because when n is equal to 0 I get the exponent 1, the first odd number, so it starts at the correct first odd number. We can now simplify a little bit. I'm going to pull the 2 in the first series outside of the series, so we get this one-half, and I rewrite i to the 2n as minus 1 to the n. And so the series becomes one-half times the sum of minus 1 to the n over n. It looks almost like the harmonic series, except for the minus 1 to the n right there. In the second one I'm going to pull an i out, thereby finding a minus 1 to the n in the numerator and 2n plus 1 in the denominator. Now let's look at these two series. What is the sum of minus 1 to the n over n? That's called the alternating harmonic series. It's almost like the harmonic series, but the signs alternate between 1 and negative 1. And the claim is that the alternating harmonic series converges. How would you see that? You can see it by actually drawing a number line. Say 0 is here and negative 1 is here, and let's look at the partial sums of the series. We start at negative 1; that's the first partial sum. Then we add one-half to that, so we get to here: negative 1 plus one-half. Next we subtract one-third from that, so maybe that gets us here: negative 1 plus one-half minus one-third. Next we add one-fourth, which is less than what I subtracted before. Then I subtract one-fifth, then I add one-sixth, I subtract one-seventh, I add one-eighth. As you can see, this will narrow down onto one point somewhere here. The sequence of partial sums actually converges; one can make this precise.
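The narrowing-down picture can be illustrated numerically. As an aside not covered in the lecture, the limit of this particular series is known to be the negative of the natural logarithm of 2; the standard alternating-series bound says the error after n terms is at most 1 over n plus 1.

```python
import math

# Partial sums of the alternating harmonic series sum_{n>=1} (-1)**n / n.
# Its value is -ln(2); successive partial sums close in on it from both sides.

def alt_harmonic(n):
    """Partial sum of (-1)**k / k for k = 1..n."""
    return sum((-1)**k / k for k in range(1, n + 1))

limit = -math.log(2)
for n in (10, 100, 1000):
    print(n, alt_harmonic(n), abs(alt_harmonic(n) - limit))
```

The printed errors shrink like 1/n, slow but steady, matching the back-and-forth walk on the number line.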
But that's the main reason why the alternating harmonic series converges. A similar argument shows that the imaginary part here converges as well. So therefore this series of the i to the k over k's converges; it just didn't converge when we put absolute values around it. So there's a difference there, and we give it a name: absolute convergence. We say a series of complex numbers converges absolutely if, when you put absolute values around the a_k's, the series still converges. We've seen examples. The series of the z to the k's actually converges absolutely: as long as the absolute value of z is less than 1, the argument that we gave for convergence doesn't change. You can still find the partial sums, and nothing changes. So the series converges, and also converges absolutely, as long as the absolute value of z is less than 1. On the other hand, the series of the i to the k over k converges, but not absolutely. Here's a theorem: if a series converges absolutely, then it also converges. The converse is not true: we saw an example of a series that converges but does not converge absolutely. So absolute convergence implies convergence, but the reverse direction is not true. Furthermore, you have an infinite version of the triangle inequality: the absolute value of the value of the original series is bounded above by the sum of the absolute values of the a_k's. Let's look at an example. Let's again look at the series of the z to the k's. We know it converges absolutely as long as the absolute value of z is less than 1, and therefore the previous theorem tells us the absolute value of the sum is bounded by the sum of the absolute values. And since the absolute value of z to the k is equal to the absolute value of z, raised to the k, that's what I wrote over here. But the left-hand side we found to be 1 over 1 minus z; this is the value of the sum, and then we simply put absolute values around it.
The right-hand side equals 1 over 1 minus the absolute value of z, because I'm adding powers of the absolute value of z here, so the same argument we made earlier to find the value of the geometric series applies and shows that the value of this series is 1 over 1 minus the absolute value of z. Therefore, this inequality right here shows that the absolute value of 1 over 1 minus z must be less than or equal to 1 over 1 minus the absolute value of z. By the way, you could have found the same inequality simply by observing that 1 minus z, the denominator of this expression, is greater than or equal to 1 minus the absolute value of z in absolute value, by the reverse triangle inequality, and then taking reciprocals, which flips the inequality sign around. In the next lecture we'll study specific series, namely power series. These are series of the form: the sum, k from 0 to infinity, of a coefficient a_k times z minus z0 to the k. We'll study these for a fixed value of z0, and we'll let z vary. And as z varies, it turns out these series form analytic functions, as long as we stay within the region where the series converge.
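Before we leave this lecture, the inequality derived above can be spot-checked numerically. This is an illustration only, not part of the lecture; the random sample points inside the unit disk are arbitrary.

```python
import cmath
import math
import random

# Spot check of |1/(1-z)| <= 1/(1-|z|) at random points with |z| < 1.

random.seed(0)
violations = 0
for _ in range(1000):
    r = random.uniform(0, 0.99)
    theta = random.uniform(0, 2 * math.pi)
    z = cmath.rect(r, theta)  # z = r * e^(i*theta)
    # Small slack for floating-point rounding near the equality case theta = 0
    if abs(1 / (1 - z)) > 1 / (1 - abs(z)) + 1e-9:
        violations += 1

print(violations)  # 0: the inequality holds at every sampled point
```

Equality occurs exactly when z is real and non-negative, where 1 minus z equals 1 minus the absolute value of z.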