In this video, we're going to run through three special-case functions,

which give us interesting results when differentiated.

The first example we're going to work through is the function f of x,

equals one over x, which you can see plotted in the corner.

Now take a minute to notice that the gradient of this function is negative everywhere,

except at the point x=0,

where we can't see what it is.

And actually something quite interesting must be happening at this point.

As on the negative side,

the function drops down presumably towards negative infinity,

but then it somehow reemerges from above on the positive side.

This sudden break in our otherwise smooth function

is what we refer to as a discontinuity.

We've mentioned already that the operation divide by zero is undefined,

which means that this function simply doesn't have a value at the point x=0.

But, what about the gradient?

Well, let's sub our function into the differentiation expression to investigate.

So, if we take f of x equals one over x,

we can say that f dash of x must equal the limit as delta x goes to zero of

one over (x plus delta x), minus one over x, all divided by delta x.

And we look at this thing and we say we're going to have to rewrite this top row,

the numerator, as a single fraction.

So we're going to have to combine these two fractions here,

which means that we're going to have to make the denominators the same.

So multiply top and bottom of the first fraction by x, and top and bottom of the second fraction by x plus delta x.

So we can say that this is the limit of x over x(x + delta x),

minus (x + delta x) over x(x + delta x),

all divided by, once again, delta x.

Looking at these two, we can see that we've got

an x and a minus x, both with the same denominator.

So we can subtract these now,

and the x and the minus x cancel.

So we can say okay,

this thing is going to be the limit of minus delta x

divided by x(x+delta x) all divided by delta x.

Now at this point, you can see that you've got a delta x at the very top,

and a delta x at the bottom, so once again,

these two terms cancel each other out.

And what you're left with is the limit as delta x goes to

zero of minus one divided by, opening up this bracket,

x squared plus x delta x.

And this, once again, is where the magic of limits comes into play.

So we look at this function and we say,

okay, this x delta x term here has got a delta x in it.

This means that as delta x becomes very, very small,

this term itself is going to become very

small and therefore eventually become irrelevant.

So what we can say is as we apply our limit,

we can actually ignore this term here entirely,

and we get minus one divided by x squared,

which looks like this.
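This limiting process is easy to check numerically. Here's a short Python sketch (my own illustration, not part of the video) that evaluates the rise-over-run difference quotient for f of x equals one over x, and watches it approach minus one over x squared as delta x shrinks:

```python
# Numerical sketch: the rise-over-run gradient of f(x) = 1/x
# should approach -1/x**2 as delta_x shrinks towards zero.

def f(x):
    return 1 / x

def rise_over_run(f, x, delta_x):
    # The same difference quotient used in the derivation above.
    return (f(x + delta_x) - f(x)) / delta_x

x = 2.0
for delta_x in (0.1, 0.01, 0.001):
    print(delta_x, rise_over_run(f, x, delta_x))

print("exact:", -1 / x ** 2)
```

As delta x gets smaller, the estimates close in on the exact value of minus a quarter at x equals two.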

So, as we realised just by looking,

this derivative function is negative everywhere and like our base function,

the derivative is also undefined at x equals zero.

For the second function we're going to look at,

I'll start simply by explaining what it does.

It's a function that has the special property that the value of the function f of x,

is always equal to the value of its own gradient f dash of x.

Now there is a boring function which has

this property which is the function f of x equals zero,

because as it's a horizontal line,

clearly both the function and the gradient are zero everywhere.

But there's also a much more interesting case.

We're not going to work through the rigorous derivation,

but we can skip through some of the more interesting bits.

So let's start by noticing that our mystery function

must either be always positive or always negative.

As if it ever tried to cross the horizontal axis,

then both the function and the gradient would be zero, and so we'd get stuck.

So we'd just be at our boring function zero again.

The next thing to realise is that by virtue of always increasing or always decreasing,

it can never return to the same value again.

Plenty of functions could fit these criteria,

and focusing on the positive case,

they all look something like this.

However, besides the zero function,

there is only one function that will satisfy all our demands.

This is the exponential function, e to the x,

where e is Euler's number,

named after the 18th-century mathematician Leonhard Euler.

The number e, which is approximately 2.718,

is very important for the study of calculus.

But more than that, e like pi,

turns up all over mathematics and seems to be

written all over the fabric of the universe.

As differentiating e to the x gives us e to the x, clearly,

we can just keep differentiating this thing as

many times as we'd like and nothing is going to change.

This self-similarity is going to come in very handy.
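To see this self-similarity in action, here's a small numerical check (my own sketch, not part of the course) that the rise-over-run gradient of e to the x matches the value of the function itself:

```python
import math

# Numerical sketch: for f(x) = e**x, the rise-over-run gradient
# at any point should match the function's own value there.

def gradient(f, x, delta_x=1e-6):
    # A symmetric rise-over-run estimate of the gradient.
    return (f(x + delta_x) - f(x - delta_x)) / (2 * delta_x)

for x in (-1.0, 0.0, 2.0):
    print(x, gradient(math.exp, x), math.exp(x))
```

At every point, the two printed values agree to several decimal places.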

The last special-case functions that we're going to talk about in this video

are the trigonometric functions sine and cosine.

You may recall that for a right-angled triangle,

sine of the angle x multiplied by

the hypotenuse r gives you the length of the side opposite the angle.

And the graph of sine x looks like this.

Let's take a look at this function and see if we can

work out what shape its derivative would be by eye.

So sine x starts with

a positive gradient, which gently decreases until it reaches zero at the top of the bump,

and then the gradient becomes negative until it gets to the bottom of the next bump,

and it turns out the derivative of sine x is actually just cosine x.

Now what happens when we differentiate cosine x?

Well actually, we get minus sine x. Differentiating a third time gives us minus cosine x.

And then, amazingly, differentiating

a fourth time brings us all the way back to our original function, sine x.

And then the pattern of course repeats.
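The same rise-over-run check works here too. This Python sketch (my own, not from the video) compares the numerical gradient of sine x against cosine x at a few points:

```python
import math

# Numerical sketch: the rise-over-run gradient of sin(x)
# should match cos(x) at every point.

def gradient(f, x, delta_x=1e-6):
    # A symmetric rise-over-run estimate of the gradient.
    return (f(x + delta_x) - f(x - delta_x)) / (2 * delta_x)

for x in (0.0, math.pi / 4, math.pi / 2):
    print(x, gradient(math.sin, x), math.cos(x))
```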

This self-similarity may remind you somewhat of the exponential function we discussed above,

and that is because these trigonometric functions are

actually just exponentials in disguise,

albeit quite a convincing disguise,

which we won't be discussing here.

Many of the details of the functions that we've talked

about in this video were skimmed over rather quickly.

But for the purposes of this particular course,

all I need you to understand is that

differentiation is fundamentally quite a simple concept:

even when you might not be able to battle through all the algebra,

you're ultimately still just looking for the rise-over-run gradient at each point.

This pragmatic approach to Calculus is going to come up

again when we start talking about calculating gradients with computers.

Sometimes we really can just find an equation for

the gradient as we've been doing for all of our examples so far.

However, if instead of a nice smooth function we just have

discrete data points, then it can seem as if there's nothing for us to differentiate.

But as we shall see,

rise over run comes back once again to save the day.
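As a preview, here's a minimal Python sketch (my own illustration, not from the video) of rise over run applied to discrete data. The samples are fabricated from x squared just so we can see what the gradient estimates should look like:

```python
# Numerical sketch: with only discrete data points, we can still
# estimate a gradient by taking rise over run between neighbours.

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x ** 2 for x in xs]  # stand-in samples; pretend they were measured

gradients = [
    (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])  # rise over run
    for i in range(len(xs) - 1)
]
print(gradients)
```

Each estimate sits close to the true gradient, two x, at the middle of its interval.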