In the last couple of lectures,
we've been talking about Algorithm Analysis and
Big-O Notation and that probably still feels pretty abstract to you.
So in this lecture,
we'll actually go through some concrete examples.
Before we do, you should take an in-video quiz.
Okay, the big idea here with our algorithm analysis is,
let's say we actually came up with a math equation,
a function for how long our particular algorithm would take to execute,
like the example I gave you.
The first thing we do is throw away all the plain numbers;
all we keep are the terms that have an n in them.
Once we do that,
we can just pick the highest-order term,
the term that will dominate the rest.
So n squared grows much faster than n log n, which grows faster than
n, and we can throw everything away except for the n squared in the example.
So the example was O(n squared).
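Just as a hypothetical illustration (these coefficients are made up; they're not the exact equation from the earlier lecture), suppose the timing function were

\[
T(n) = 3n^2 + 5n\log_2 n + 7n + 12.
\]

Throwing away the plain number 12 leaves the three terms with an n in them, and picking the highest-order term, the one that dominates as n grows, gives O(n squared).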
Okay, so some examples.
We'll start at the lowest cost, or fastest, complexity, and that's constant time.
O(1) is the way we specify that something takes constant time.
Now, that doesn't mean no time.
It doesn't mean instantaneous.
It just means that the time it takes to do it has no dependence whatsoever on n. So,
one good example of this is accessing a specific element in an array.
So it doesn't matter if our array has one thing in it or 20,000 things in it.
Accessing a particular element takes constant time because what we do is,
we take the index,
we multiply it by how large each element
is and then we add it to the base address of the array.
So it doesn't matter how big the array is.
We do a multiplication,
we do an addition,
and there we are with the address of the element we're trying to access.
So that's a constant time operation, where
the time it takes to do that operation is not dependent on n.
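Here's a minimal C# sketch of that idea (the array name and contents are made up just for illustration):

```csharp
using System;

class ConstantTimeDemo
{
    static void Main()
    {
        int[] scores = { 4, 8, 15, 16, 23, 42 };

        // Accessing scores[3] is O(1): under the hood the runtime computes
        // address = baseAddress + 3 * sizeof(int),
        // one multiplication and one addition, no matter how long the array is.
        int value = scores[3];
        Console.WriteLine(value); // 16
    }
}
```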
The next one up, and there are actually
some more obscure complexities in between the ones we're talking about,
but I'm covering the most common ones,
is O(log n), and that's called
logarithmic. And this log, by the way, is log base two.
It's not log base 10 or natural log or anything like that;
it's log base two.
And a good example of that is binary search, so let's walk
through a binary search example so you can see why it's O(log n).
Here's our binary search example.
We're going to look for the number three in this list of numbers that's right next to it.
And there are 11 items in that list.
We'll come back to that comment in a few minutes.
So what we do is we check the middle of the list.
Now, it's important to note that this list has to be
sorted in either ascending or descending order.
We can do a binary search on either sorting order,
but most people will sort lists in ascending order,
so that's how we did it here.
So we checked the middle of the list and it's 21 and three is less than 21.
Now, because the list is sorted,
we know that if three appears in the list,
then it's going to be to the left of 21.
And so we split the list in half, and we now look for three in that half:
1, 3, 5, 12, 17.
We check the middle of that list and that's five,
and we know three is less than five.
So now we look for three in the two-element list 1, 3.
Now, we check the middle of the list and you may well say,
well there is no middle,
there's only two values.
And so we have to consistently decide as we run
our binary search algorithm to either round down or round up.
I'm going to round down,
because when we implement this that's how it will work.
And so now, we're going to compare three to one,
and three is greater than one.
So we're going to look for three in the one-element list, 3, and we found it.
And of course, if that number had been four,
we wouldn't have found it,
but we would have been done anyway because we got down to a list of one,
and that list of one wasn't three.
But in this case, we found it and that's great.
Because we split the list in half every single time,
this is an O(log n) algorithm,
log base two again, not log base 10 or natural log or anything like that.
And for this list of 11 items,
log base two of 11 is slightly higher than three, so we have to say four,
and it turns out that we did four comparisons here.
We checked against 21, and 5, and 1, and 3.
And so, as you can see,
the number of comparisons that we need to do when we do a binary search is O(log n).
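Here's a minimal C# sketch of binary search using the same round-down convention from the walkthrough. The first six values match the lecture's list; the values after 21 weren't shown, so I've filled in arbitrary ascending numbers:

```csharp
using System;

class BinarySearchDemo
{
    // Returns the index of target in a sorted array, or -1 if it's not there.
    // Each pass through the loop halves the search range, so it's O(log n).
    static int BinarySearch(int[] sorted, int target)
    {
        int low = 0;
        int high = sorted.Length - 1;
        while (low <= high)
        {
            int mid = (low + high) / 2; // integer division rounds down
            if (sorted[mid] == target)
                return mid;             // found it
            if (target < sorted[mid])
                high = mid - 1;         // target can only be in the left half
            else
                low = mid + 1;          // target can only be in the right half
        }
        return -1; // the range shrank to nothing without finding target
    }

    static void Main()
    {
        int[] list = { 1, 3, 5, 12, 17, 21, 24, 27, 30, 35, 40 };
        Console.WriteLine(BinarySearch(list, 3)); // 1 (the index of the value 3)
    }
}
```

On this list it performs the same four comparisons we counted: against 21, then 5, then 1, then 3.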
Now, it's important to note that because each comparison takes a certain amount of time,
and because, depending on how the rounding works, we
might end up doing one additional comparison,
the actual equation for this might be something like 3 log n + 3.
And the important thing to know is that
we don't care about constants: if we can put an
upper bound like 20 log n on top of our function,
then we don't care that we had 3 log n + 3; it's O(log n).
So the great thing about algorithm analysis is, any number,
any actual number we end up with,
we can just throw away when we're converting to Big-O Notation.
So this is an O(log n) algorithm: even
if we were precisely trying to calculate how many clock cycles it would take
and got 3 log n + 3, for example,
we don't care, because we throw away the numbers.
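One way to see that on paper (just a sketch; the constant 4 is chosen for illustration): for all n of at least 8, log base two of n is at least 3, so

\[
3\log_2 n + 3 \le 3\log_2 n + \log_2 n = 4\log_2 n,
\]

and an upper bound that's a constant times log n is exactly what O(log n) means.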
Okay, we have three more examples to at least briefly discuss.
The next one is O(n).
So this is linear in n, and the best example of this is doing a linear search.
Let's say, you were looking for a particular card in a deck of
cards and you didn't have the cards in any sort of order.
So to look for a particular card,
you have to look at the first one,
and if it's not the card look at the second one,
and if it's not the card, look at the third one, and so on.
And remember, we're doing worst-case analysis here.
In the worst case,
we might find the card we're looking for at
the very end of the deck, or we might not find it at all.
And in both of those cases,
we had to look at n cards,
where n is defined as the number of cards in
the deck we're searching with our linear search.
So linear search is an O(n) operation.
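Here's a minimal C# sketch of that linear search (the deck is just a small made-up string array):

```csharp
using System;

class LinearSearchDemo
{
    // Returns the index of target, or -1 if it's not in the deck.
    // In the worst case the loop runs n times, so this is O(n).
    static int LinearSearch(string[] deck, string target)
    {
        for (int i = 0; i < deck.Length; i++)
        {
            if (deck[i] == target)
                return i; // found it after i + 1 looks
        }
        return -1; // looked at all n cards and never found it
    }

    static void Main()
    {
        string[] deck = { "7 of clubs", "ace of spades", "2 of hearts" };
        Console.WriteLine(LinearSearch(deck, "ace of spades")); // 1
    }
}
```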
The next one is something called log-linear;
it's O(n log n),
and we see this most commonly in various sorting algorithms.
So there is something called merge sort;
there are many sorting algorithms that are n log n,
but merge sort is one such algorithm that actually takes n times log n time.
Now the interesting thing about it is that,
we already saw log n with the binary search and we
saw that it was log n because we split something in half each time.
The way merge sort works is we split
something in half each time, and that's where the log n comes from.
But then we have to actually merge those sorted sub-lists
back together, and that merging takes n time.
So that's why we get n log n.
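We're not covering sorting algorithms in this class, but here's a minimal merge sort sketch in C# just so you can see where the two factors come from: the recursive halving gives the log n levels, and the merging at each level touches all n elements:

```csharp
using System;

class MergeSortDemo
{
    // Recursively split the list in half; the halving is where the log n comes from.
    static int[] MergeSort(int[] a)
    {
        if (a.Length <= 1)
            return a; // a list of 0 or 1 items is already sorted

        int mid = a.Length / 2;
        int[] left = MergeSort(a[..mid]);  // C# 8 range syntax copies the slice
        int[] right = MergeSort(a[mid..]);
        return Merge(left, right);         // merging at each level takes n time
    }

    // Merge two sorted arrays into one sorted array; touches every element once.
    static int[] Merge(int[] left, int[] right)
    {
        int[] result = new int[left.Length + right.Length];
        int i = 0, j = 0, k = 0;
        while (i < left.Length && j < right.Length)
            result[k++] = left[i] <= right[j] ? left[i++] : right[j++];
        while (i < left.Length)
            result[k++] = left[i++];
        while (j < right.Length)
            result[k++] = right[j++];
        return result;
    }

    static void Main()
    {
        int[] sorted = MergeSort(new[] { 21, 3, 17, 1, 12, 5 });
        Console.WriteLine(string.Join(", ", sorted)); // 1, 3, 5, 12, 17, 21
    }
}
```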
Finally, quadratic means O(n squared), and we
see O(n squared) in lots of basic sorting algorithms like bubble sort,
insertion sort, and selection sort.
We're actually not covering the sorting algorithms in this class,
but you will probably come across sorting
algorithms somewhere in your game programming career.
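And as a quick sketch of where the n squared comes from (again, the sorting algorithms themselves aren't covered in this class), here's bubble sort in C#, with two nested loops that each run up to n times:

```csharp
using System;

class BubbleSortDemo
{
    // Two nested loops, each running up to n times: roughly n * n comparisons.
    static void BubbleSort(int[] a)
    {
        for (int pass = 0; pass < a.Length - 1; pass++)       // up to n passes
        {
            for (int i = 0; i < a.Length - 1 - pass; i++)     // up to n comparisons per pass
            {
                if (a[i] > a[i + 1])
                {
                    (a[i], a[i + 1]) = (a[i + 1], a[i]);      // swap out-of-order neighbors
                }
            }
        }
    }

    static void Main()
    {
        int[] numbers = { 5, 1, 4, 2, 3 };
        BubbleSort(numbers);
        Console.WriteLine(string.Join(", ", numbers)); // 1, 2, 3, 4, 5
    }
}
```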
So these last three are the highest we'll go
with the sort of practical things that we'll do in programming.
There are other algorithms that are even worse,
so we won't worry about those, thank goodness.
Okay, you should do an in-video quiz, and
the answer is that it's the number of loop iterations that tends to be
the key thing in our algorithm analysis; we'll see that a
lot as we actually continue with our algorithm analysis work.
To recap, in this lecture,
we saw some algorithm analysis examples that actually used Big-O Notation.
From this point forward,
we'll do our algorithm analysis on C Sharp source code because
C Sharp is just a language that we use to express our algorithms.
Those step-by-step plans for solving a problem are the code we write.
And so we'll do our analysis based on
the actual source code that implements the algorithms that we want to analyze.