So again, here is the source we'll work with.

It emits three symbols; here are the probabilities, and here is the Huffman code that we designed based on these statistics.

The difference between this source and the one we used in the previous slide is that the probability of S2 stayed the same, but the probability of S3 was, with respect to the previous case, reduced considerably, by 0.15, and that 0.15 was added to the probability of the first symbol.

So this is a source with a skewed probability of its symbols: one symbol has a very high probability and the others very small ones.

If you recall from a couple of slides ago, this is a case for which we were arguing that Huffman can be an inefficient code: the case of a small alphabet, which this toy example of course has, combined with Pmax greater than one half.

If you recall, in that case of Pmax greater than or equal to 0.5, the average codeword length, or the rate, and I use these two terms interchangeably, is less than the entropy plus Pmax plus 0.086.

So with Pmax equal to 0.95, this term, Pmax plus 0.086, equals 1.036, which is greater than one, so the upper bound becomes a rather loose one.
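To make the looseness of the bound concrete, here is a small sketch that builds a Huffman code for a skewed three-symbol source and compares its rate against the entropy-plus-Pmax-plus-0.086 bound. Pmax = 0.95 is taken from the slide; how the remaining 0.05 splits between S2 and S3 is not stated, so the values below are an assumption for illustration.

```python
import heapq
import math

def huffman_code(probs):
    """Return a {symbol_index: codeword} dict for the given probabilities."""
    # Each heap entry: (subtree probability, tiebreak id, symbol indices in subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    codes = {i: "" for i in range(len(probs))}
    tiebreak = len(probs)
    while len(heap) > 1:
        # Merge the two least probable subtrees, prepending a bit to each side.
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1:
            codes[i] = "0" + codes[i]
        for i in s2:
            codes[i] = "1" + codes[i]
        heapq.heappush(heap, (p1 + p2, tiebreak, s1 + s2))
        tiebreak += 1
    return codes

# Assumed probabilities: Pmax = 0.95 as in the slide; the 0.03/0.02 split is a guess.
probs = [0.95, 0.03, 0.02]
codes = huffman_code(probs)

rate = sum(p * len(codes[i]) for i, p in enumerate(probs))
entropy = -sum(p * math.log2(p) for p in probs)
bound = entropy + max(probs) + 0.086   # entropy + Pmax + 0.086

print(f"rate = {rate:.3f}, entropy = {entropy:.3f}, bound = {bound:.3f}")
```

Under these assumed probabilities the Huffman code assigns a 1-bit codeword to the dominant symbol and 2-bit codewords to the other two, giving a rate of 1.05 bits/symbol against an entropy of about 0.335 bits/symbol, so the gap to the bound of roughly 1.37 illustrates just how loose the upper bound is for this skewed source.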