0:04

Welcome to the second lecture in the last week of our course,

Analysis of a Complex Kind.

Today we'll learn about a special type of series, namely the power series.

A power series, also often called a Taylor series,

centered at a point z0 in the complex plane,

is a series of the form: sum, k from 0 to infinity, of ak times (z - z0) to the k.

So z0 is this fixed number and z is something arbitrary we can plug in there.

So we can therefore vary this z right here.

We've already seen a few examples of power series.

For example, last class we looked at the series z to the k.

And we found out that it converges

as long as you plug in a z whose absolute value is less than one.

So in our power series notation, there's just a one here for

the coefficient ak, so the ak's were all equal to one.

And the z0 is simply not there, so z0 = 0.
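As a quick numerical sanity check (not part of the lecture; the test point z is arbitrary), the partial sums of this geometric series do approach 1/(1 - z) whenever |z| < 1:

```python
# Partial sums of the geometric series sum_{k>=0} z^k at a point with |z| < 1.
# The choice z = 0.3 + 0.4j (|z| = 0.5) is an arbitrary test value.
z = 0.3 + 0.4j
partial = sum(z**k for k in range(200))   # truncated series
exact = 1 / (1 - z)                       # known closed form of the sum
print(abs(partial - exact))               # negligible difference
```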

Here's another example.

The series: sum of (-1) to the k over 2 to the k, times z to the 2k.

1:12

So this series only has even powers of z, so

all the odd coefficients are equal to 0.

ak = 0 if k is odd, and

the even coefficients are of the form a2k =

(-1) to the k divided by 2 to the k.

And we have z0 = 0 again.

How can we find out where the series converges?

We can actually rewrite this series by pulling the exponent

k out of each of these individual terms,

thereby finding the series equals the sum of (-z squared over 2), quantity to the kth

power; because if I brought the k back in, that would be back to the left-hand side.

Now if I call this entire term in parentheses w,

2:08

I get a series of w to the k which I completely understand already.

I know that the series of w to the k converges when

w is less than one in absolute value.

And it diverges when w is greater than or equal to one.

What does that mean in terms of z?

Well the absolute value of w is the absolute value of -z squared divided by 2.

When is that less than one?

Well that is less than one when the absolute value of z squared

is less than two.

The negative sign goes away in the absolute value and

then multiply through by two.

When is the absolute value of z squared less than two?

Well, that's the case when the absolute value of z is less than the square root

of two, and so we know that this series, this original series,

converges when the absolute value of z is less than the square root of two, and

diverges when the absolute value of z is greater than or equal to the square root of two.
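A hedged numerical illustration (not from the lecture; the test values of z are arbitrary): the series is geometric in w = -z squared over 2, so inside |z| < sqrt(2) its partial sums approach 1/(1 - w) = 2/(2 + z squared), while outside, the individual terms already blow up.

```python
# The series sum_k (-z^2/2)^k is geometric in w = -z^2/2.
def partial_sum(z, n=400):
    w = -z**2 / 2
    return sum(w**k for k in range(n))

z_in = 1.0                                           # |z| = 1 < sqrt(2)
print(abs(partial_sum(z_in) - 2 / (2 + z_in**2)))    # essentially 0

z_out = 1.5                                          # |z| = 1.5 > sqrt(2)
w_out = -z_out**2 / 2                                # |w| = 1.125 > 1
print(abs(w_out**200))                               # individual terms already huge
```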

3:13

Here is the general theorem: for any power series, there exists a number R, and this R is between 0 and infinity.

Infinity is allowable as a value for R, and 0 is as well.

Such that this series converges absolutely

as long as z- z0 is less than R in absolute value and

then diverges when z- z0 is bigger than R in absolute value.

Remember what absolute convergence means?

It means that if I put absolute values around all the terms of the series,

the series still converges.

We noticed that absolute convergence implies regular convergence.

But the reverse is not necessarily true.

4:24

such that in the disk of radius R centered at the point z0 (it's a

little hard to read here, but this is the z0 right here),

In that disk, I have convergence.

So for all z's in the blue shaded region I get convergence.

Outside of the disk I get divergence.

When I plug in z's from out here, the series diverges.

The theorem says nothing about z values on this boundary.

It gives us no information.

We've seen an example where the series diverges everywhere on the boundary but

all kinds of things can happen on the boundary.

5:15

We call this uppercase R, the radius of the big disk,

the radius of convergence of the power series.

And you've probably heard of the radius of convergence before,

when you studied series on the real line.

And the word radius made very little sense because you got an interval of convergence

and not a disk.

Now the word radius makes sense, because it's the radius of a disk:

we found that a power series converges for z values from a disk and

diverges outside of the disk.

This is called the disk of convergence, and its radius is the radius of convergence.

6:27

Next let's look at the power series k from 0 to infinity, k to the k z to the k.

So these k to the k's, those are my ak's.

So why don't we write out a few terms of this series? When k = 0,

zero to the zeroth power is by definition one.

So I get 1 times z to the 0 which is 1.

When k = 1, I get one to the first power times z, that's z.

When k = 2, I get two to the second power, that's 4 times z squared.

When k = 3, I get three cubed which

is 27 z cubed, and so forth.

7:10

So, the terms, the coefficients in front of the powers of z, grow quite rapidly,

as k gets larger, and larger.

So, for what values of z does this series converge?

Well, obviously, when z is equal to 0,

all the terms are 0, so that's quite obviously a point of convergence.

Let's now pick an arbitrary z that's not zero.

And we observe the following.

If I look at the absolute value of k to the k times z to the k,

I can pull this exponent k outside.

7:39

And I get (k times the absolute value of z), quantity to the power k.

But notice that in this k times the absolute value of z, z is fixed.

I can pick a large enough k, for

example greater than two over the absolute value of z,

and this term is greater than or equal to two to the k.

Which means the individual terms of my power series, they don't go to 0,

but a series whose terms don't go to 0 cannot converge.

The terms going to 0 is a necessary, not a sufficient, condition

for convergence; since my terms don't go to 0, this series does not converge.

And so it cannot converge for any z that's non-zero.

Therefore the radius of convergence of this power series is 0,

there is no disk around the origin in which the series converges.

So this power series has radius of convergence 0.
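A quick numerical illustration of the argument (not from the lecture; the particular z and k are arbitrary): the terms in absolute value are (k times |z|) to the k, and once k exceeds 2 over |z|, each term exceeds 2 to the k.

```python
# For sum k^k z^k, the terms in absolute value are (k*|z|)^k.
# Fix any z != 0; once k > 2/|z|, each term exceeds 2^k, so the terms
# cannot tend to 0 and the series diverges. z = 0.01 is an arbitrary choice.
z = 0.01
k = 300                      # k > 2/|z| = 200
term = (k * abs(z)) ** k     # equals 3.0**300 here, astronomically large
print(term > 2 ** k)
```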

How about if I look at z to the k divided by k to the k?

Well, again, let's pick an arbitrary z and observe that the absolute value of z

to the k divided by k to the k, I can do the same trick and pull these ks out.

It's the absolute value of z divided by k to the kth power.

Now when I pick k large enough, for example, bigger than 2 times the absolute

value of z, then this term, (absolute value of z over k) to the k, is bounded above by one-half to the k.

9:05

So most of the terms of the power series except for

the first few are less than one-half to the k in absolute value.

But this series of one-half to the k is a geometric series, and

we know that the sum of one-half to the k,

k from 0 to infinity, converges, and we actually know what it is.

Its sum is 1 over 1- one-half,

which is actually 2.

So, since the absolute values of my terms are bounded above by the terms

of a series that converges, a known theorem (the comparison test) tells us that

the original series converges as well, even absolutely.

So no matter what value of z I choose, eventually for

large enough k, this is going to be true.

And since finitely many terms of a power series do not influence its convergence,

we find that the series converges no matter what z I plug in there.
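A numerical illustration (not from the lecture; the value of z is an arbitrary test point): the partial sums of this series stabilize even for a fairly large z.

```python
# Partial sums of sum_{k>=1} (z/k)^k stabilize even for a large z,
# illustrating convergence everywhere. (The lecture's series starts at
# k = 0 with the constant term 1, which doesn't affect convergence.)
z = 10.0 + 5.0j                                  # arbitrary "large" z, |z| ~ 11.2
S = lambda n: sum((z / k) ** k for k in range(1, n))
print(abs(S(400) - S(100)))                      # the tail is tiny
```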

10:41

And now I look at this function, f(z) is given by the sum ak(z-

z0) to the k where z varies.

Now let z vary inside the disk of convergence.

And the claim is that the outcome is an analytic function

in that disk of convergence.

And more is true.

An analytic function has a derivative and

I can find that derivative by differentiating the series term by term,

so I can pull the derivative inside of this infinite sum.

We know that we can pull derivatives inside of finite sums.

But, for an infinite sum, that's a whole different matter.

So, that's something that needs to be proved.

But, it can be proved that, for this infinite sum,

which converges inside the disc of radius R.

I can differentiate inside the infinite sum and

therefore I find that f'(z) is the sum, k from 1 to infinity, of

ak times k times (z - z0) to the (k - 1)st power.

Notice I started the series at 1 here, because when k = 0,

the derivative of that term is 0; it goes away.

The second derivative can be found similarly.

I just take another derivative inside, so

I find the sum of ak times k times (k - 1) times (z - z0) to the (k - 2),

and now the sum starts at 2.

In particular, let's look at the nth derivative of f at z.

So I keep taking derivatives.

The nth derivative is going to be

something like the sum k from n to infinity and

then here I have ak times k(k- 1) and

so forth, through k- n + 1,

times z- z0 to the k- n.

Now I plug in z0 for

z right there.

That makes each (z - z0) equal to 0.

So as soon as this power k - n is at least 1,

the term vanishes, because I have 0 to some positive power, which is 0.

Only when this exponent is equal to 0, do I get a 1.

So when the exponent is equal to 0, that's the case when k is equal to n.

So only the first term of my power series survives.

The nth derivative of f at z0 is that first term of the power series, and

that's going to be an times n, times n - 1, and so

forth, all the way down through n - n + 1, which is 1.

That product is the factorial of n, so the nth derivative of f at z0 is an times n factorial.

I wrote this out back here again, the kth derivative, so

I can put it in terms of k instead of in terms of n.

The kth derivative of f at z0 is ak times k factorial, and I can solve that for ak:

ak is therefore the kth derivative of f at z0, divided by k factorial, for all k.

So that is amazing.

What this says is that not only do I get an analytic function

14:09

inside the disk of convergence.

But in fact,

these ak's are strongly related to the derivatives of that analytic function,

namely the kth derivative of this analytic function at the point z0,

the center of the disk divided by k factorial is equal to that ak.
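As a quick numerical check of this coefficient formula (not part of the lecture; the function and the step size h are chosen for illustration), take the earlier example f(z) = 2/(2 + z squared), whose power series has a2 = -1/2, and estimate f''(0) by a central difference:

```python
import math

# Check a_k = f^{(k)}(z0) / k! for f(z) = 2/(2 + z^2) = sum (-1)^k/2^k z^{2k},
# which has a_2 = -1/2 and is centered at z0 = 0.
f = lambda z: 2 / (2 + z**2)
h = 1e-4
f2 = (f(h) - 2 * f(0) + f(-h)) / h**2   # central-difference estimate of f''(0)
a2 = f2 / math.factorial(2)             # coefficient formula: f''(0) / 2!
print(a2)                               # close to -0.5
```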

Let's look at some examples.

So remember, the series z to the k has radius of convergence 1.

Therefore, by the theorem, this function f(z),

given by the sum k=0 to infinity z to the k, is an analytic function.

We kind of knew that.

14:46

We found what the sum is, right?

We know f(z) is 1 over 1- z.

We know an explicit formula.

And we know that is analytic.

In fact, we know this is actually an analytic function in the whole complex

plane minus the value of 1.

We know this function agrees with the power series in the disk of radius 1 and

the theorem says this function is analytic.

But it also says I can differentiate term by term.

So, let's take the derivative and

differentiate term by term inside the sum as the theorem tells us.

We find that f'(z) can be found as the sum k = 1 to infinity, k times z to the k- 1.

Often it is desirable to express a power series where z appears with exponent k and

not k - 1.

I therefore can do a shift of my summation index.

15:36

Because of the powers of z: when k = 1, I get a power of 0.

When k = 2, I get a power of 1.

So I have all the powers of z from 0 to infinity here.

And therefore I can write this sum in terms of powers of z, and I notice

the number in front of each power of z is one bigger than the exponent, so for exponent k it must be k + 1.

This is called a shift of the index.

If you wanted to do this in a formal way,

you could introduce a new index, say l, with l = k - 1.

Then k = l + 1, so I get the sum over l from 0 to infinity,

and wherever I see a k, I write l + 1.

16:49

So now remember, we also know that f(z) is 1/(1 - z), and so

we know the derivative of 1/(1 - z), which is 1 over (1 - z), quantity squared.

So I found two ways to express the derivative of the function f.

And since those are derivatives of the same function,

these two expressions must agree with each other.

And I therefore find that the sum of (k + 1) times z to the k

is 1 over (1 - z), quantity squared.

So I was actually able to evaluate a new series and find its value, using

the fact that the derivative of my power series can be taken term by term.
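A numerical check of this identity (not from the lecture; the test point z is arbitrary, chosen with |z| < 1):

```python
# Check the derived identity sum_{k>=0} (k+1) z^k = 1/(1-z)^2 for |z| < 1.
z = 0.2 - 0.3j                               # arbitrary point, |z| ~ 0.36
lhs = sum((k + 1) * z**k for k in range(300))
rhs = 1 / (1 - z) ** 2
print(abs(lhs - rhs))                        # negligible difference
```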

18:20

we're talking about an analytic function in a disk.

I can actually evaluate this.

I have an antiderivative for (z - z0) to the k, namely 1 over (k + 1)

times (z - z0) to the (k + 1).

And then you need to evaluate that from z0 to w.

When I do that, plugging in z0 gives 0.

When I plug in w, I get 1 over k + 1 times w- z0 to the k + 1.

Let's apply this theorem again

to our power series z to the k, which we know has radius of convergence R = 1.

So for any w whose absolute value is less than one,

we can therefore integrate this power series from 0 to w.

Where again by that integral, we mean the integral over any path

from 0 to w within the disc.

20:45

And therefore, instead of integrating the whole series as we did above, we could

just as well integrate the function 1 over (1 - z).

Well, an antiderivative of 1 over (1 - z)

in the disk of radius one centered at the origin is minus Log of (1 - z).

Because the derivative of minus Log(1 - z) is minus

1 over (1 - z) times the derivative of the inside function, which gets rid of that

minus sign, so the derivative of minus Log(1 - z) is indeed 1 over (1 - z).

And if I again plug in the upper bound w, I get minus Log(1 - w), and

at the lower bound the primitive is equal to 0.

We used here that the logarithm of 1 minus z is analytic in this disc.

So let me remind you why that is the case.

The logarithm function itself

is analytic in the complex plane minus the negative real axis.

So that's where Log z is analytic.

21:46

Now we're looking at logarithm of one minus Z which shifts this region

to the complex plane minus the portion of the positive real axis starting at one.

So that's where Log(1 - z) is analytic,

and also minus Log(1 - z); the minus sign doesn't change that.

And therefore in particular it's analytic in this disk of radius 1.

Centered at the origin.

That's what we're interested in. That's where our path is located.

That's why- log(1- z) forms a primitive of 1/1- z in that disk.

So we have shown that the integral from 0 to w of the power series sum of z to the k

is equal to the power series, k from 1 to infinity, of w to the k over k.

But we've also shown that the same integral is equal to minus

log of one minus w.

22:42

Since both of these functions are equal to the integral to the original power series,

the two right hand sides must agree.

So it must be the case that the sum, k from 1 to infinity,

of w to the k over k is equal to minus Log(1 - w).
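A numerical check of this identity (not from the lecture; the test point w is arbitrary, with |w| < 1; Python's cmath.log is the principal-branch logarithm, matching the Log used here):

```python
import cmath

# Check the integrated series: sum_{k>=1} w^k / k = -Log(1 - w) for |w| < 1.
w = 0.4 + 0.2j                               # arbitrary point, |w| ~ 0.45
lhs = sum(w**k / k for k in range(1, 400))
rhs = -cmath.log(1 - w)                      # principal branch
print(abs(lhs - rhs))                        # negligible difference
```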

We therefore found a power series expansion for the logarithm function.

Now, Log(1 - w) sounds a little awkward, so let's say z = 1 - w.

In other words, replace 1 - w and call it z, and

then bring the negative sign over to the other side, to

find a nicer way of writing this formula: Log of z is equal to minus,

that's the minus sign that came over, the sum, k from 1 to infinity.

And if z is equal to 1 minus w then w is equal to 1 minus z so

we find 1 minus z to the power k divided by k.

And typically, if you write a power series, you want it in the form (z - z0) to the k,

not (z0 - z) to the k.

So our z0 is equal to 1, but we want to interchange these two terms here; pulling a factor

of -1 out of each (1 - z) to the k produces an extra factor of (-1) to the k.

24:20

And in this disk, this identity holds:

I have a power series that is equal to the logarithm function there.

Let's just write down the first few terms.

When k = 1, I find (-1) squared,

that's 1, divided by 1, times (z - 1) to the first power.

So the first term is (z- 1).

The next term is (-1) cubed divided by 2, so

minus (z - 1) squared over 2;

next I get plus (z - 1) cubed over 3, and so forth.

25:02

So, it seems quite important to find out where a power series converges and

we've looked at some examples and

found in our examples the radius of convergence of the power series.

How would you find it in general though?

If I gave you an arbitrary power series, how would you determine

its radius of convergence?

That's a question we're going to deal with in the next lecture.