We now consider another technique for solving TSP exactly.

It is called Dynamic Programming.

In contrast to the Branch and Bound method,

the running time of the dynamic programming solution that we are going to design depends only on the number of nodes in the input graph, and not on its structure or on any heuristics used.

So its running time is always O(2^n · n^2).

So, first of all,

Dynamic Programming itself is probably the most powerful algorithmic paradigm.

Usually the rough idea of this method is the following: given some instance of a problem, instead of solving it directly, try to break it into smaller subproblems, probably not into independent pieces but into some collection of smaller subproblems. Then try to solve them one by one, always reusing the solutions to previous subproblems, until you finally reach the initial problem.

So, another technical trick of Dynamic Programming solutions is that the solutions to all subproblems are usually stored in some data structure, usually called a table.

So, this is done in order to avoid recomputing the same thing again and again.
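To make the table idea concrete, here is a tiny memoization sketch in Python (my own illustration, not from the lecture): the cache plays the role of the table, so each subproblem is solved only once.

```python
from functools import lru_cache

calls = 0  # count how many subproblems are actually computed

@lru_cache(maxsize=None)  # the "table": stores every solved subproblem
def fib(n):
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(30)
# Without the table, the naive recursion would make millions of calls;
# with it, each of the 31 subproblems (n = 0..30) is computed exactly once.
```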

So, what is a reasonable sub problem for the case of the Traveling Salesman problem?

Well, it is actually suggested by the Branch and Bound technique that we just considered.

So when we are trying to construct the cycle, what we actually do is construct some subpath of the cycle, right? Some initial part. So we start at some initial vertex, we end at some node, and we keep track of what we have already constructed.

So let's then define a sub problem as follows.

First of all, assume that our nodes are indexed from zero to n-1.

So if we have n nodes, we number them from zero to n-1. Assume also that we always start our cycle at node zero. We can assume this without loss of generality, because we can select any node to be the starting node of our cycle.

Then assume that S is some subset of our nodes and i is some element of this subset.

We assume also that S contains the node zero itself.

Then, let's define the function C as follows.

C(i, S) is equal to the length of an optimal path that starts at node zero, visits all the nodes from the set S, and ends at node i.

So, in the graph, it looks as follows.

We start within the node zero then we go to some nodes,

and some other nodes, and some other nodes,

and some other nodes, and finally,

we end up in the node i,

and S actually is a set of all these visited nodes.

So S contains, this one, that one, that one, that one, and that one and so on.

So, this is just the definition of a function.

And it will turn out that computing this function for various subproblems one by one will allow us to solve the initial problem of finding the optimal cycle in the input graph.
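For instance, once C(i, S) is known for the full set of nodes, the optimal cycle length follows by closing the path back to node zero. A minimal sketch (names like `C` and `w` are my own; it assumes `C` maps pairs `(i, frozenset)` to path lengths and `w` is the weight matrix):

```python
def optimal_cycle_length(C, w, n):
    # An optimal cycle is an optimal path 0 -> ... -> i over all n nodes,
    # closed by the final edge from i back to 0; we try every last node i.
    full = frozenset(range(n))
    return min(C[(i, full)] + w[i][0] for i in range(1, n))
```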

What we're going to do roughly is to compute

this function by gradually increasing the size of S. So initially,

S is going to consist of just one, two, or three nodes.

Definitely, when we need to visit just one or two or three nodes,

the problem is much easier than for visiting all n nodes.

And then we will gradually increase the size of S

until S becomes equal to the set of all nodes of our input graph.
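In code, this order of subproblems can be sketched as follows (a sketch of my own; in Python, `itertools.combinations` already yields the subsets of a fixed size):

```python
from itertools import combinations

n = 4
# Visit the sets S in order of increasing size; node 0 is always a member,
# and the remaining nodes of S are drawn from 1..n-1.
sets_in_order = []
for size in range(1, n):  # number of nodes in S besides node 0
    for rest in combinations(range(1, n), size):
        sets_in_order.append(frozenset(rest) | {0})
```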

So this is the main idea, but we will now,

of course, extend it and fill in all the details.

So there are two corner cases. When i is equal to zero and S is {0} (this zero in braces corresponds to the set whose single element is the node zero), then, in this case, C(0, {0}) is equal to zero.

So this just corresponds to the fact that if we start at node zero and do not need to visit any other nodes, then we do not need to traverse any edges, and, in this case, the length is equal to zero.

On the other hand, if i is equal to zero,

but the size of S is greater than one, then the corresponding value is plus infinity.

This just corresponds to the fact that such a path is, in fact, forbidden, because C(0, S) is actually the length of a path that starts at node zero but also ends at node zero, while we are allowed to visit every node just once. So this is just a degenerate case, and we define the value as plus infinity for convenience; it will be needed in our code.
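In code, these two corner cases might look like this (a hypothetical helper of my own; it returns None when neither corner case applies):

```python
INF = float("inf")

def base_case(i, S):
    # C(0, {0}) = 0: we start at node 0 and need not traverse any edges.
    if i == 0 and S == frozenset([0]):
        return 0
    # C(0, S) = +infinity for |S| > 1: a path may visit node 0 only once,
    # so it cannot both start and end there while visiting other nodes.
    if i == 0 and len(S) > 1:
        return INF
    return None  # not a corner case; handled by the recurrence relation
```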

Now, probably the most important part of any dynamic programming solution is the recurrence relation for its subproblems.

So remember what we need to find.

So we need to compute C(i, S).

Now, what we would like to do is to

write the so-called recurrence relation for the function C(i, S).

So we are going to express C(i, S) through the values of the same function for other end nodes and for smaller sets S. So,

what is C(i, S)?

This is the length of an optimal path starting from zero, visiting all nodes from the set S, and ending at node i.

Now, let's do the following.

Let's consider the second to last node on this path.

So assume that we start from zero, we visit all the nodes of S,

and this is i,

and this is the previous node on this optimal path.

So assume that this path is optimal,

which means that its length is equal to C(i, S).

And j is some unknown node, that is second-to-last node on this path.

So the question is, what can be said about this path going from zero to j?

First of all,

it visits all of the nodes of the initial path except for the node i. So this is S-{i}: we subtract from S the one-element subset containing i, so everything except for i.

And what is even more important is that this particular subpath must itself be optimal. So it must be a shortest path starting from zero, ending at j, and visiting all the nodes from S-{i}.

This is just because our initial path is optimal, right? Because if there were some other, shorter path that visits all the nodes from S-{i} and ends at j, then the initial path would not be optimal.

So the only problem is that we actually don't know the value of j, right?

For an optimal path that starts from zero and ends at i,

there could be several potential values for j.

But that issue is going to be solved in an easy fashion.

So we are going just to enumerate all possibilities for j.

And this finally brings us to the following recurrence relation.

So C(i, S) is equal to the minimum, over all j that belong to S and are not equal to i,

of the following sum.

So, it is C(j, S-{i}), which is the same as starting at zero, visiting all nodes in S except for i, and ending at j; and then we add the weight w(j, i) of the last edge. This is what we pay for the last step. And we just take the minimum of such values. And this is the value of C(i, S).

And why is that? Well, this is just because if we have a path that starts at zero, ends at i, and goes through all the nodes of S, then, if we consider its second-to-last node j, we are sure that this prefix is optimal, namely that its length is equal to C(j, S-{i}); and the weight of the last edge, we know, is equal to w(j, i). So the length of this path is equal to this sum.

And this is a recurrence relation.

And this is actually the heart of our Dynamic Programming solution.
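As a preview of that code, here is one possible way to put the base cases and the recurrence together in Python (a sketch under my own naming; `w` is an n-by-n weight matrix, and the forbidden C(0, S) values appear implicitly as missing table entries looked up as infinity):

```python
from itertools import combinations

def tsp_length(w):
    # Exact TSP via the recurrence C(i, S) = min over j of C(j, S-{i}) + w(j, i).
    n = len(w)
    INF = float("inf")
    C = {(0, frozenset([0])): 0}          # base case C(0, {0}) = 0
    for size in range(2, n + 1):          # grow |S| so smaller sets come first
        for rest in combinations(range(1, n), size - 1):
            S = frozenset(rest) | {0}
            for i in rest:
                C[(i, S)] = min(C.get((j, S - {i}), INF) + w[j][i]
                                for j in S - {i})
    full = frozenset(range(n))
    # Close the cycle by returning from the last node i to node 0.
    return min(C[(i, full)] + w[i][0] for i in range(1, n))
```

Note that the table has O(2^n · n) entries and each is computed in O(n) time, which matches the O(2^n · n^2) running time mentioned above.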

So in the next video,

we are going to discuss some implementation remarks,

and then we'll finally write down the code based on this Dynamic Programming idea.