Just flipping the sign gives minus log z, and we're interested only in the range where z is between zero and one, so we get rid of the rest.

And so we're just left with this part of the curve, and that's what the curve on the left looks like.

Now, this cost function has a few interesting and desirable properties.


First, notice that if h(x) = 1, that is, if the hypothesis predicts that y = 1, and if indeed y = 1, then the cost = 0.

That corresponds to this point down here, right? We're only considering the case of y = 1 here, and if h(x) = 1, then the cost, down here, is equal to 0.

And that's where we'd like it to be because if we correctly predict the output

y, then the cost is 0.

But now notice also that as h(x) approaches 0, so as the output of the hypothesis approaches 0, the cost blows up and goes to infinity.

What this captures is the intuition that if the hypothesis outputs 0, that's like the hypothesis saying the chance that y equals 1 is equal to 0.

It's kind of like going to our medical patient and saying the probability that you have a malignant tumor, the probability that y = 1, is zero.

So, it's like absolutely impossible that your tumor is malignant.
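The behavior just described, zero cost for a confident correct prediction and cost blowing up toward infinity for a confident wrong one, can be sketched with a few sample values. This is a minimal sketch of the y = 1 branch of the cost from this lecture, Cost = -log(h(x)); the function name `cost_y1` is just an illustrative label:

```python
import math

def cost_y1(h):
    """Cost for a training example with y = 1: -log(h(x))."""
    return -math.log(h)

# Confident, correct prediction: h(x) = 1 gives zero cost.
print(cost_y1(1.0))

# As h(x) approaches 0, the cost blows up toward infinity:
# the hypothesis said y = 1 was (nearly) impossible, but y = 1 happened.
for h in (0.5, 0.1, 0.01):
    print(h, cost_y1(h))
```

Running this shows the cost growing without bound as h(x) shrinks toward 0, which is exactly the penalty we want for a hypothesis that confidently rules out the true label.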