You're not going to compute more by building some new machine. So these results had profound implications, and it's really amazing that Turing came up with them so long ago, before computers came into use at all. It means that we don't have to try to invent a more powerful machine or language, and it enables rigorous study of the computation that we can expect to do in the natural world. That's the basis for the theory of computation, and it's related to the study of algorithms because it gives us a simple and universal model of computation.

But we haven't talked about efficiency. How long does it take for an Android to simulate an iPhone, and so forth? What about cost? What about time? That's been our focus when developing algorithms, and what we want to think about next is how to deal with that cost. But first, I just want to

lay out the evidence that we have for the Church-Turing thesis. That is, it's been eight decades and nobody has found a machine that can compute more than we've been computing with the machines we have. The other thing that was really convincing to the mathematicians of the mid-20th century is that people tried all different types of models of computation. Church was working on something called the untyped lambda calculus, which is actually the basis for modern programming languages, and other mathematicians were working on recursive function theory, grammars, and other things. Eventually they were all able to use each of these formalisms to express the others, and therefore to prove them all equivalent. That carries through to the modern day with all programming languages: you can write a simulator or a compiler for one programming language in another, or for different random access machines, or for simple machines like cellular automata, or even for computing using biological operations on DNA, or quantum computers that you may have heard about.

All of these things have turned out to be equivalent, so there's a lot of confidence that the Church-Turing thesis holds.

But what about efficiency? This gives us a similar starting point for the study of algorithms: which algorithms are useful in practice? To decide what counts as useful, we're going to consider time, and we're going to say that an algorithm is useful not just when it would solve the problem in theory, but when it would actually solve the problem in the time we have available. And just to set a clear, bright dividing line, we'll consider an algorithm to be useful in practice, or efficient, if its running time is guaranteed to be bounded by some polynomial in the size of the input, for all inputs. For the purposes of this theory, we don't even care what the exponent is; it could be n to the 100th. Now, you might argue that's not a very useful dividing line, but it is a useful starting point, and as you'll see, even this starting point has told us a great deal about the kinds of algorithms that are useful in practice. This work was started in the 1950s, also here at Princeton, and many people looked at models to try to understand what it meant. The dividing line, as I've said, is between polynomial and exponential, where exponential means some constant greater than one raised to the n-th power. We'll talk more

about that in just a second. So sorting is useful: its running time is bounded by a polynomial. But here's another problem, finding the best traveling salesperson tour on n points. If you're going to try all possible tours, that algorithm is not useful in practice, because n factorial is not bounded by a polynomial in n; its running time actually grows like n to the n-th.
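As a concrete sketch of what "try all possible tours" means, here is a brute-force solver that checks every permutation. The 4-city distance matrix is made up purely for illustration:

```python
from itertools import permutations

def tour_length(tour, dist):
    """Length of the closed tour visiting `tour` in order and returning home."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(dist):
    """Fix city 0 as the start and try all (n-1)! orderings of the rest."""
    n = len(dist)
    best = min(((0,) + perm for perm in permutations(range(1, n))),
               key=lambda tour: tour_length(tour, dist))
    return best, tour_length(best, dist)

# A tiny symmetric 4-city instance (distances chosen arbitrarily):
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour, length = brute_force_tsp(dist)
print(tour, length)  # an optimal tour of length 18
```

Even with the start city fixed, the loop examines (n-1)! tours, so the instance above takes 6 checks but 20 cities would already take over 10^17 — this factorial blowup is exactly why the brute-force algorithm falls on the wrong side of the dividing line.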

So this is a useful starting point for our theory, because it's very broad and robust. If we can put some algorithms in the class with sorting, and other algorithms in the class with traveling-salesperson-by-brute-force-search, that's going to be useful: it tells us which algorithms we want to use. We can definitely think about using the first, but we can't think about using the second.
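To put rough numbers on that dividing line, here's a small sketch comparing a polynomial cost n^3 with an exponential cost 2^n; the assumed machine speed of 10^9 operations per second is illustrative only:

```python
OPS_PER_SECOND = 10**9  # assumed machine speed; the exact figure doesn't matter

def seconds(ops):
    """Time to execute `ops` operations on the assumed machine."""
    return ops / OPS_PER_SECOND

for n in (10, 50, 100):
    print(n, seconds(n**3), seconds(2**n))
```

At n = 100 the polynomial algorithm finishes in a millisecond, while the exponential one needs about 10^21 seconds, thousands of times the estimated age of the universe. A faster computer only nudges the largest solvable n up by a few units, because each added input element doubles the exponential running time.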

And actually, unlike Turing machines, where we wouldn't literally play Turing machine to solve a practical problem, in this theory it's often the case that the polynomial-time algorithms will scale to help us solve large problems, because the constants usually tend to be small. But the key is the idea of exponential growth, and I just want to be sure to convince everybody that when it comes to exponential growth, it's not about the technology. It's a dividing line: no matter how fast your computer is, you can't run an exponential algorithm and expect to solve a large problem. Here's just a thought experiment to reconvince everybody of that. Let's imagine that

we had a huge, giant parallel computing device, so big that it has as many processors as there are electrons in the universe. We'll put the most powerful supercomputer we can imagine today on every electron in the universe, and then we'll run each processor for the estimated lifetime of the universe. So how big a problem can we solve? Well, it's estimated that there are ten to the 79th electrons in the universe, and supercomputers nowadays can do maybe ten to the thirteenth instructions per second; you can quibble about these numbers and make it ten to the twentieth if you want. The estimated age of the universe in seconds is ten to the seventeenth, and if you don't like that, make it ten to the thirtieth. Multiply all those numbers together and you get a number that is tiny compared to, say, a thousand factorial: ten to the seventeenth times ten to the thirteenth times ten to the 79th is only ten to the 109th.
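That multiplication, and the comparison with a thousand factorial, can be checked directly; the figures are just the lecture's rough estimates:

```python
from math import factorial

electrons   = 10**79   # estimated electrons in the universe
ops_per_sec = 10**13   # instructions/second for one supercomputer
age_seconds = 10**17   # estimated age of the universe in seconds

total_ops = electrons * ops_per_sec * age_seconds
print(total_ops == 10**109)        # True: 79 + 13 + 17 = 109

# Even this budget is dwarfed by factorial growth:
print(total_ops < factorial(1000)) # True
print(len(str(factorial(1000))))   # 1000! has 2568 digits
```

So even with a supercomputer on every electron in the universe running for the lifetime of the universe, an algorithm that needs on the order of 1000! steps could not finish on an input of size 1000.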