0:05

>> In order for binary search to be effective,

we need to be able to sort the items in the whitelist.

So let's begin by looking at a method for getting that done.

Sorting is a very fundamental problem.

How do we rearrange N items to put them in ascending order?

In our example, we have these names in the whitelist in an array, and what we

want to do is rearrange them within the array, so they appear in sorted order.

That's what we need for binary search to be effective.

Now, sorting's a very fundamental problem with lots of applications not just

binary search.

It's used in statistics and databases, data compression, bioinformatics,

computer graphics, scientific computing.

The number of applications of sorting is too numerous to list.

In fact, it has been estimated that a substantial fraction of

all cycles spent on all computers are devoted to sorting,

even today, to set up for all these different types of applications.

Let's begin with a pop quiz on sorting.

This is something to think about.

It might be a job interview question.

In fact, we've got a famous example of that next.

>> [APPLAUSE]

>> Now senator, you're here at Google and

I like to think of the presidency as a job interview.

>> [LAUGH]

>> Now, it's hard to get a job

as president and you're going through the rigors now.

It's also hard to get a job at Google. >> [LAUGH]

>> We have questions and

we ask our candidates questions and

this one is from Larry Schwimmer. >> [LAUGH]

>> You guys think I'm kidding?

It's right here. >> [LAUGH]

>> What is the most efficient way to sort

a million 32-bit integers? >> [LAUGH]

>> Well-

>> I'm sorry maybe.

>> No, no, no, no, no.

I think the bubble sort would be the wrong way to go.

>> [LAUGH]

>> Come on, who told him this?

I didn't see computer science in your background.

>> We've got our spies in there.

>> [LAUGH]

>> Let's ask a different interview

question. >> [LAUGH]

>> So that was just before the 2008

election, and Senator Obama, I guess, did fine with that interview question.

People still use this interview question today.

2:45

So the first sorting method that we're going to look at

is called insertion sort, and here's how it works.

So what we do is we consider each of the items in turn and

we bubble them up through the array into their proper position.

It's called insertion sort.

We go down through the array.

Each item gets exchanged with the larger ones above it to get to its proper

position.

You think of it as lighter than them and it bubbles up.

So everything above the current item, in this case, Carol is already in order.

And then once we complete this bubbling up process,

then we've got it for the current item as well.

And everything below is untouched.

This method, there's Carol going into position.

So this method is not exactly bubble sort.

We don't teach bubble sort anymore, because it's even slower than this method

and we're going to look at better methods for sure just in this lecture.

But you get the idea.

So let's take a look at insertion sort.

So here's a trace of what happens.

When we consider Alice, Wendy moves down to make room for Alice.

Same with Dave and Walter.

Carlos comes, Dave, Walter and Wendy move down to make room for

Carlos, then Carol, then Erin bubbles up and so forth.

At every point, the elements of the array that we've

already considered are in order, and the other part we have not touched.

So now, Bob comes in.

That bubbles up almost all the way to the beginning.

Again, think of the new key as lighter than the ones that

are bigger than it; it bubbles up to its right level.

So that's a trace and the last one is Victor goes up just a couple of positions.

So that's a trace of insertion sort in operation.

So let's look at the Java implementation of insertion sort.

So we go through for i from 1 to n-1, and

then we're going to look at the indices that

are less than i, moving down through the array.

So we start at j=i.

As long as j's not 0, we decrement it, and then what do we do?

We compare the element at the next lower index with our current one.

And if it's bigger, then we exchange them, where exch is our method for

exchanging a[j-1] and a[j] in the array.

If we get to the point where our current element is bigger than

the one with one index lower, then we're done.

It's found its position.

That's insertion sort, and there's the exchange method.

That's one of the earliest little computing techniques that we learned.
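The loop just described might be sketched in Java roughly like this; it's a sketch based on the lecture's description, with `compareTo` standing in for whatever comparison the course's code uses:

```java
public class Insertion {

    // Exchange a[i] and a[j] in the array.
    private static void exch(String[] a, int i, int j) {
        String temp = a[i];
        a[i] = a[j];
        a[j] = temp;
    }

    // Insertion sort: each item is exchanged with the larger
    // ones before it until it reaches its proper position.
    public static void sort(String[] a) {
        int n = a.length;
        for (int i = 1; i < n; i++) {
            for (int j = i; j > 0; j--) {
                if (a[j - 1].compareTo(a[j]) > 0) exch(a, j - 1, j);
                else break;  // the current item has found its position
            }
        }
    }
}
```

The inner loop stops as soon as the current item is no smaller than the one before it, which is exactly the "it's found its position" condition from the lecture.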

And so now, here's the test client, which just calls

readAllStrings on StdIn and puts the strings in an array.

Sort it using this method and then print it out.

So if we use that for our array of 16 elements,

then it will put them in sorted order.

That's insertion sort.

Simple sorting method that is going to do this job.

Now again, we're going to do empirical tests.

We can also do a mathematical model, but let's do the empirical test.

Again, as before, we're going to take an array of length n of

10-character strings.

So that's using our generator.

And if we do 20,000, it takes a second.

And 40,000, it takes 4 seconds.

6:45

And as we keep going, eventually it's getting pretty slow for

320,000, and our ratio is going towards 4.

It takes about 4 hours to sort a million names.

Computers today are pretty fast, but 4 hours is a long time.
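A doubling test along these lines can be sketched as follows; the exact times depend on your machine, and the random-string generator here is just a stand-in for the generator used in the lecture:

```java
import java.util.Random;

public class Doubling {

    // Stand-in generator: n random 10-character strings.
    static String[] randomStrings(int n, Random rng) {
        String[] a = new String[n];
        for (int i = 0; i < n; i++) {
            StringBuilder sb = new StringBuilder();
            for (int k = 0; k < 10; k++)
                sb.append((char) ('a' + rng.nextInt(26)));
            a[i] = sb.toString();
        }
        return a;
    }

    // Insertion sort, as described in the lecture.
    static void sort(String[] a) {
        for (int i = 1; i < a.length; i++)
            for (int j = i; j > 0 && a[j - 1].compareTo(a[j]) > 0; j--) {
                String t = a[j - 1]; a[j - 1] = a[j]; a[j] = t;
            }
    }

    // Time one trial of sorting n random strings, in seconds.
    static double timeTrial(int n, Random rng) {
        String[] a = randomStrings(n, rng);
        long start = System.nanoTime();
        sort(a);
        return (System.nanoTime() - start) / 1e9;
    }

    public static void main(String[] args) {
        Random rng = new Random();
        double prev = timeTrial(1000, rng);
        // Double n and watch the ratio of successive times head towards 4.
        for (int n = 2000; n <= 16000; n *= 2) {
            double t = timeTrial(n, rng);
            System.out.printf("%6d %8.3f %6.1f%n", n, t, t / prev);
            prev = t;
        }
    }
}
```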

And again, our hypothesis is going to be that the order of growth is about N

squared, and this data confirms it. You can see why if you think about the way the bubbling works.

Each of the N elements moves, on average,

about halfway back through the items before it.

So that's about N over 2 positions for each of the N elements,

which adds up to about N squared over 4 positions in total, so the order of growth is about N squared.
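That back-of-the-envelope estimate is easy to check; this sketch counts the exchanges insertion sort performs on a random array and compares the count with N squared over 4 (the array size is just an illustrative choice):

```java
import java.util.Random;

public class ExchangeCount {

    // Insertion sort on ints, returning the number of exchanges performed.
    static long sortAndCount(int[] a) {
        long exchanges = 0;
        for (int i = 1; i < a.length; i++)
            for (int j = i; j > 0 && a[j - 1] > a[j]; j--) {
                int t = a[j - 1]; a[j - 1] = a[j]; a[j] = t;
                exchanges++;
            }
        return exchanges;
    }

    public static void main(String[] args) {
        int n = 10000;
        Random rng = new Random();
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = rng.nextInt();
        long count = sortAndCount(a);
        // For a random array, the count comes out close to n*n/4.
        System.out.printf("exchanges = %d, n*n/4 = %d%n", count, (long) n * n / 4);
    }
}
```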

It's not going to scale.

And if your company grows to 10 million names,

it's going to take 10 days, and Alice knows that sounds like a bad idea.

So we need a better sorting method than insertion sort.

7:47

Now, that brings up just the idea of Moore's Law and

I want to talk more carefully about algorithms that scale.

So ever since the 70s, when we first had integrated circuits, it's been true that,

as the founder of Intel, Gordon Moore, postulated, about every two

years, you're going to double the number of transistors in an integrated circuit.

So that means that we're going to get twice as much memory every two years,

because our memories are implemented on integrated circuits and

our processor speed is going to double every two years.

Those are really important implications.

So one thing that I've noticed during my career is that what that means is that

if you buy a new computer and

write a program that touches every word of its memory, whatever it does,

it's going to take a few seconds. One of the first computers I used had just

tens of thousands of words of memory and could do 10,000 instructions per second, and

I'll talk about this type of computer later on.

It took a few seconds to access every word in that computer,

then we got to millions, tens of millions.

And nowadays, you have billions of words of memory.

But you can also execute billions of instructions per second.

So to get to everywhere in the computer, it's going to be a couple of seconds.

It doesn't matter what kind of computer it is, roughly.

So what does that mean in terms of programs that we run on computers?

That's the idea of scalability.

So what you need is an algorithm whose running time doubles when

the problem size doubles.

The idea is when you get a new computer, you can fit a problem that's twice as big.

So your algorithm had better be able to

manage that, at least at that scale.

So for example, if the order of growth is N,

problem size doubles, then the running time doubles.

N log N is also pretty close, but N squared doesn't make it.

We already saw, according to our doubling test, that

if the running time is quadratic, then when the problem size doubles,

the running time goes up by a factor of four, and it's even worse for N cubed.
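Those doubling factors follow directly from the orders of growth; a small sketch makes them concrete (the value of n here is an arbitrary illustrative size):

```java
public class GrowthRatios {

    public static void main(String[] args) {
        double n = 1_000_000;
        // Factor by which the running time grows when the problem size doubles:
        double linear       = (2 * n) / n;                                   // 2
        double linearithmic = (2 * n * Math.log(2 * n)) / (n * Math.log(n)); // just over 2
        double quadratic    = (2 * n) * (2 * n) / (n * n);                   // 4
        double cubic        = (2 * n) * (2 * n) * (2 * n) / (n * n * n);     // 8
        System.out.printf("N: %.2f, N log N: %.2f, N^2: %.2f, N^3: %.2f%n",
                linear, linearithmic, quadratic, cubic);
    }
}
```

Only the first two keep pace with a computer that doubles in size and speed; the quadratic and cubic factors are what make algorithms like insertion sort fail to scale.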

So the thing is if you're solving the problem you're solving now and

you get a faster computer, then you can do it in half the time.

But better than that, if you take on a bigger problem,

you can solve it in the same time

it took you before to solve the smaller problem, and that's real progress,

if you have an algorithm that scales.

If you have a quadratic algorithm or worse, an algorithm that doesn't scale,

then you can still solve the problems you're solving now in half the time.

Your computer is twice as fast, after all.

But if you take on a bigger problem, because you have more memory,

it's going to take you twice as long to solve it and

that immediately leads to frustration.

And so you have to have algorithms that scale, if you want to keep pace with

Moore's Law and that comes up in application after application and

sorting's the first example.

So what we're looking for is a sorting algorithm that scales.

One that runs in time proportional to N log N, not in quadratic time like insertion sort.