So we can now think about different channels and their properties, and there's this very important notion called the channel capacity.

So let's first think about the different error modes that a channel might exhibit.

So here's something called the binary symmetric channel, often abbreviated BSC, and the binary symmetric channel says that when I send a zero, then with probability 0.9 I get a zero, but with probability 0.1 I get a one.

And conversely, when I send a one, I get a one with probability 0.9, and a zero with probability 0.1.
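These transition probabilities can be sketched as a small simulation; the function name and the 0.1 crossover probability here are just illustrative, not from the lecture's materials:

```python
import random

def bsc(bits, p=0.1):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ 1 if random.random() < p else b for b in bits]

random.seed(0)
sent = [0, 1] * 5000
received = bsc(sent)
flips = sum(s != r for s, r in zip(sent, received))
# The fraction of flipped bits should come out close to p = 0.1.
```

Running this repeatedly, the observed flip fraction concentrates around 0.1, matching the transition probabilities above.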

You can see why it's called the binary symmetric channel: the error is symmetric between zero and one. A different error model is an erasure channel, where the bits don't get corrupted, they just get destroyed.

Now the big difference with this binary erasure channel, abbreviated BEC, is that when a bit gets destroyed, I know it. That is, what I get at the end is a question mark, as opposed to a bit where I don't actually know whether it was the original bit that was sent or a corrupted one; here I know the bit was destroyed.
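The erasure behavior can be sketched the same way; the `'?'` marker standing in for an erased symbol is just an illustrative choice:

```python
import random

def bec(bits, p=0.1):
    """Binary erasure channel: each bit is erased (replaced by '?') with probability p."""
    return ['?' if random.random() < p else b for b in bits]

random.seed(0)
sent = [0, 1] * 5000
received = bec(sent)
# Unlike the BSC, the receiver knows exactly which positions were lost:
erased = [i for i, b in enumerate(received) if b == '?']
```

Every symbol that does arrive is guaranteed correct; only the erased positions carry no information, and the receiver can list them.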

And finally, here is what's called a Gaussian channel,

where the noise that gets added on top of the signal is actually analog noise.

And so there is sort of a Gaussian distribution of the received value relative to the signal that was sent. And it turns out these different channels have very different capacities.

And capacity, we'll define exactly what implications it has in a moment. Very smart people have computed the capacity for these different kinds of channels, and it turns out, for example, that the capacity of this binary symmetric channel is a little bit over 0.5, whereas the capacity of the binary erasure channel is 0.9, where 0.9 is the probability of getting the correct bit.

And if you think about it, that makes perfect sense: the capacity of a channel where you know which bits were erased is higher than one where you have to figure out whether bits are right or wrong.

So this is a less severe, more benign type of noise.
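These numbers can be checked against the standard capacity formulas: C = 1 − H2(p) for a BSC with crossover probability p, C = 1 − e for a BEC with erasure probability e, and, for the Gaussian channel, the Shannon–Hartley form C = ½·log2(1 + SNR) per channel use. A sketch; the SNR value below is just an example:

```python
from math import log2

def h2(p):
    """Binary entropy function, in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

c_bsc = 1 - h2(0.1)          # about 0.531 -- "a little bit over 0.5"
c_bec = 1 - 0.1              # 0.9 -- one minus the erasure probability
c_awgn = 0.5 * log2(1 + 4.0) # Gaussian channel at SNR = 4; wider noise means lower SNR, lower capacity
```

As the transcript notes, the BEC capacity (0.9) comes out well above the BSC capacity (about 0.531) for the same 0.1 error probability.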

And what you see over here is the capacity of the Gaussian channel, and it's an expression that involves things like the width of this Gaussian: for example, the wider it is, the lower the capacity. Now, Shannon, in a very famous result

known as Shannon's theorem, related the notion of channel capacity and bit error probability in a way that defines an extremely sharp boundary

between codes that are feasible and codes that are infeasible.

So let's look at the diagram. The x-axis is the rate of the code. Remember, the rate was k over n in the example that we gave, and here the rate is measured in multiples of the channel capacity. So once you define the channel capacity, we can look at the rate of a code, and each code could sit at a different point of the spectrum in terms of its rate.
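As a concrete (hypothetical, not from the lecture) instance of rate: a code that maps k = 4 message bits into n = 7 coded bits, such as a (7,4) Hamming code, sits at rate 4/7, and dividing by the channel capacity places it on the diagram's x-axis:

```python
k, n = 4, 7            # an illustrative (7,4) block code
rate = k / n           # about 0.571
capacity = 0.531       # e.g. the BSC with 0.1 crossover probability
x = rate / capacity    # rate as a multiple of capacity; points with x > 1 are forbidden
```

With these illustrative numbers the ratio comes out just above 1, so on that particular channel such a code could not drive the error probability arbitrarily low.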

On the other axis, we have the bit error probability. So obviously, lower is better in terms of bit error probability.

And what Shannon proved is that for this region over here, the region that I'm marking here in blue, the attainable region, you can construct codes that achieve any point in this space. That is, for any point in this two-dimensional space of rate and bit error probability, you can construct a code that achieves it. He didn't show it with a constructive argument, it was a non-constructive argument, but he proved that such codes existed. Conversely, he showed that anything that

falls on this side of the boundary, the forbidden region, is just not attainable. That is, no matter how clever a coding theorist you are, you could not construct a code that had a rate above a certain value and a bit error probability below a certain value, which is the shape of this boundary that we see over here. And you can see why the channel capacity

is called capacity: as the bit error probability goes to zero, the boundary approaches a rate of one, that is, one times the channel capacity.
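One standard way the boundary's shape is written (a sketch, not necessarily the lecture's notation): if a residual bit error probability pb is tolerated, rates up to C / (1 − H2(pb)) are attainable, and as pb shrinks this tends to exactly C:

```python
from math import log2

def h2(p):
    """Binary entropy function, in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def max_rate(capacity, pb):
    """Highest attainable rate when a bit error probability of pb is tolerated."""
    return capacity / (1 - h2(pb))

C = 0.531  # e.g. the BSC(0.1) capacity
# The attainable rate approaches C as pb shrinks, and rises as more errors are tolerated.
rates = [max_rate(C, pb) for pb in (1e-6, 1e-3, 1e-1)]
```

This reproduces the curved boundary in the diagram: tolerating more bit errors buys a somewhat higher rate, but demanding near-zero error pins the rate at the capacity.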

So this was Shannon's theorem, and as we said, it gave a non-constructive proof that the blue region was attainable. But the question is how you can achieve something that's close to the Shannon limit. And up to a certain point in time, the mid-90s, this was sort of a diagram of what was achievable in terms of codes. And so this is a diagram we'll see

several times in a minute. So here on the x-axis we have the signal-to-noise ratio, measured in dB. And on the y-axis, we have the log of the

bit error probability. And what you see here are the codes that were used. First, up here, we see the uncoded version, and you can see that the uncoded version is not very good: it has very high bit error rates, and higher here means worse, so very bad. And, of course, as the signal-to-noise ratio moves to the left, the error rate grows.
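The shape of that uncoded curve can be sketched if we assume BPSK signaling over an AWGN channel (an assumption; the lecture doesn't specify the modulation), where the bit error rate is ½·erfc(sqrt(Eb/N0)):

```python
from math import erfc, sqrt

def uncoded_ber(ebn0_db):
    """Bit error rate of uncoded BPSK over an AWGN channel."""
    ebn0 = 10 ** (ebn0_db / 10)   # convert dB to a linear ratio
    return 0.5 * erfc(sqrt(ebn0))

# The error rate grows steeply as the signal-to-noise ratio falls (moving left on the plot).
bers = [uncoded_ber(db) for db in (8, 4, 0)]
```

Plotted on a log scale against dB, this gives the familiar waterfall curve; coded systems shift that waterfall left, toward the Shannon limit.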

And what you see over here in these two other lines are the bit error rate curves achieved by two NASA space missions, Voyager and Cassini, and you can see that there was a good improvement between Voyager and Cassini, which is 1977 to 2004.