The last approach that we're going to consider in

this project is called the local search approach.

Its main idea is quite simple.

So we start with some solution,

and by solution I mean just any cycle visiting all the nodes of our graph,

and we also fix at the beginning some integer parameter d.

And then we keep repeating the following.

While it is possible to locally improve our solution,

and by saying to locally improve our solution,

I mean the following: we take our cycle with n edges.

We remove some d edges from it.

This creates 2d nodes of degree one.

And then we try to reconnect these 2d nodes in a different manner.

If it is possible to do this and to improve

our solution s and to get some solution s prime,

then we just replace s with s prime and continue this while,

and we stop when it is not possible, okay?
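To make this concrete, here is a minimal sketch in Python of the d equal to two case, commonly known as 2-opt. The function names and the distance matrix `dist` are my own illustrative choices, not part of the lecture:

```python
import itertools

def tour_length(tour, dist):
    """Total length of the cycle visiting nodes in the order given by `tour`."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def local_search_2opt(tour, dist):
    """While some pair of edges can be removed and the endpoints reconnected
    to give a shorter cycle, perform that improvement (d = 2 local search)."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i, j in itertools.combinations(range(n), 2):
            # Reversing the segment tour[i+1..j] removes the two edges
            # (tour[i], tour[i+1]) and (tour[j], tour[(j+1) % n]) and
            # reconnects the four endpoints in the other possible way.
            candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
            if tour_length(candidate, dist) < tour_length(tour, dist):
                tour, improved = candidate, True
    return tour
```

For example, on the four corners of a unit square, this repairs a crossing tour into the optimal cycle of length 4.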

So this is a reasonable algorithm.

So what is clear about it,

is that it returns some locally optimum solution.

So at least, we are sure that when we return

some solution, it is not possible to improve it by changing some d of its edges.

At the same time there is

no guarantee of course whether this solution is globally optimum or not.

This only means that by changing the edges in it,

we can not improve it, right?

At the same time,

if it is not possible to improve it with, for example,

d equal to two,

then it may still be possible to improve it with d equal to three.

Okay, we will see such an example in a minute.

So by increasing d,

we actually increase our chances of finding a global optimum solution.

At the same time we also increase

the running time of a single iteration of our procedure,

just because when d for example is equal to two,

we need to iterate through all possible pairs

of edges of our current cycle.

For any such pair,

we need to check whether it is possible to remove it and

then to connect the resulting nodes in a different fashion to improve the solution.

But when d is equal to three,

we need to use all possible triples of such edges.

So from n squared possible pairs of edges we go

to n cubed possible triples of edges and so on, okay?

Now let me show you an example.

So assume that we have a solution like this,

it is clearly suboptimal, right?

Because it would be much better to use only vertical and horizontal edges.

And we can fix it just by changing two edges.

Namely, let's remove these two edges.

So we remove them and we add two new edges and this definitely gives a better cycle.

So sometimes, even when d is equal to two,

it is possible to fix the solution,

to improve it, okay?

At the same time there are cases when d equal to two is not enough,

and this is one of such cases,

because in this case,

clearly it doesn't make any sense to visit this point through these two guys, right?

So to fix this solution,

we would need actually to change three edges,

so we would need to remove these three edges, okay?

And instead of them,

add the following three edges.

So we can fix this solution so our algorithm can

realize that this solution is suboptimal if d is equal to three.

But unfortunately it is not possible to realize it when d is equal to two.

Okay, so we can fix it when d is equal to three,

but we cannot when d is equal to two.

Okay, so once again there is a trade-off between the quality of the solution,

the quality of the returned solution,

and the running time of a single iteration.

So what this means is that when we increase d,

we increase the running time of an iteration because

we need to consider many more possibilities for the edges.

At the same time, we also increase our chances of

being able to fix a current solution, okay?

At the same time, even for reasonable choices of d,

it is still not excluded that the number of

iterations of the while loop is going to be exponential,

and even then the resulting solution may be poor, unfortunately.

This is the bad news: there are bad examples for this local search algorithm.

At the same time, this same algorithm is again known to work well in practice.

So practical experiments actually reveal that for

many practical instances this algorithm works quite well.

And also in practice, more complicated versions of local search are used,

where we occasionally allow moves that do not decrease the weight of the current cycle,

but instead temporarily increase it, okay?

This approach is called simulated annealing.
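A minimal sketch of the acceptance rule typically used in simulated annealing (the function name and signature are illustrative, not a fixed API): an improving move is always accepted, while a worsening move is accepted with a probability that shrinks as the cost increase grows and as the temperature drops.

```python
import math
import random

def accept_move(current_cost, candidate_cost, temperature, rng=random.random):
    """Metropolis-style acceptance rule: always accept improvements, and
    accept a worsening move with probability exp(-delta / temperature)."""
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True  # the candidate is no worse, take it
    return rng() < math.exp(-delta / temperature)
```

As the temperature is lowered over the course of the run, worsening moves become rarer and rarer, and the procedure degenerates into plain local search.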

Okay, now time to summarize.

So what we considered in this project are basically two types of algorithms.

The first category is exact algorithms,

and the second one is approximation algorithms.

So among exact algorithms, the most naive approach is

to go through all possible candidate solutions.

It is called brute force,

its running time is roughly n factorial and this is catastrophically slow.

Already for moderately small values of n, it is not going to work on your laptop.
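The brute force approach can be sketched as follows (Python, with names of my own choosing). Fixing node 0 as the starting point leaves (n-1)! cyclic orders to try:

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Try every cycle through node 0 and keep the shortest one.
    Runs in roughly n! time, so it is hopeless beyond tiny n."""
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len
```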

Okay, then the next algorithm that we considered,

the next exact algorithm is called branch and bound.

This can be considered also as a pruned search tree.

So we try to extend the current permutation, or the current path,

but each time when we realize that it can no longer be

extended to an optimal solution we stop immediately, we stop this branch.

So this algorithm depends on the heuristics used for branching and for bounding,

and in practice it works quite well.

There are no guarantees for its running time:

sometimes it is as bad as n factorial, but sometimes it is much better.

And then finally, we considered a dynamic

programming solution whose running time is n squared times two to the n,

so this allows us to solve instances of size

up to 15 or 18, depending on the programming language and on the implementation.
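The dynamic programming solution referred to here is the Held-Karp algorithm. A compact sketch (variable names are my own): C[mask][v] is the length of the shortest path that starts at node 0, visits exactly the nodes in the bitmask `mask`, and ends at v.

```python
def held_karp(dist):
    """Held-Karp dynamic programming for TSP: O(n^2 * 2^n) time, exact answer."""
    n = len(dist)
    INF = float("inf")
    C = [[INF] * n for _ in range(1 << n)]
    C[1][0] = 0  # the path consisting of node 0 alone
    for mask in range(1 << n):
        if not mask & 1:  # every partial path must contain the start node 0
            continue
        for v in range(n):
            if C[mask][v] == INF:
                continue
            # Extend the path ending at v by one unvisited node u.
            for u in range(n):
                if mask & (1 << u):
                    continue
                new_cost = C[mask][v] + dist[v][u]
                if new_cost < C[mask | (1 << u)][u]:
                    C[mask | (1 << u)][u] = new_cost
    full = (1 << n) - 1
    # Close the cycle by returning from the last node to node 0.
    return min(C[full][v] + dist[v][0] for v in range(1, n))
```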

Okay, and again, all these three algorithms always return an optimal solution.

The only issue is the running time, okay?

And then we considered approximation algorithms.

So the first one is the nearest neighbor heuristic.

So the good news about this algorithm is that it is quite efficient.

Its running time is n squared where n is the number of nodes.

But unfortunately there are no good guarantees about the solutions that it returns.

So we know that for general graphs it can return

a very poor solution, and even for metric instances,

it may return a solution which is

roughly log n times worse than an optimal one.
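The nearest neighbor heuristic can be sketched in a few lines (names are illustrative): from the current node, always move to the closest node that has not been visited yet.

```python
def nearest_neighbor(dist, start=0):
    """Greedy heuristic: repeatedly go to the closest unvisited node.
    O(n^2) time, but with no constant-factor quality guarantee."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda u: dist[last][u])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```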

Then we considered the minimum spanning tree based algorithm.

So for metric instances it is guaranteed to return

a solution which is only at most two times longer than an optimal solution.

And it is also extremely efficient.

So it works almost in linear time in the size of the graph.
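A sketch of this MST-based 2-approximation: build a minimum spanning tree, then output its nodes in depth-first preorder, which (in a metric instance) costs at most twice the weight of the tree. For simplicity I use an O(n^2) version of Prim's algorithm on the dense distance matrix rather than a near-linear-time variant; all names here are my own.

```python
from collections import defaultdict

def mst_approx_tour(dist):
    """2-approximation for metric TSP: MST (Prim) + DFS preorder walk."""
    n = len(dist)
    in_tree = [False] * n
    parent = [0] * n
    best = [float("inf")] * n
    best[0] = 0
    adj = defaultdict(list)  # children lists of the spanning tree
    for _ in range(n):
        # Pick the cheapest node not yet in the tree.
        v = min((u for u in range(n) if not in_tree[u]), key=lambda u: best[u])
        in_tree[v] = True
        if v != 0:
            adj[parent[v]].append(v)
        for u in range(n):
            if not in_tree[u] and dist[v][u] < best[u]:
                best[u], parent[u] = dist[v][u], v
    # DFS preorder from the root, skipping already-visited nodes, is the tour.
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(adj[v]))
    return tour
```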

And finally, we considered local search,

which can be seen as a customizable approximation algorithm.

So it also returns a solution with no guarantees on its quality,

but we can play with parameters to try to increase the quality of the solution.

So I hope that you enjoyed

our journey through these approaches, and I hope that I was able to convince you that

the travelling salesman problem is a very important combinatorial optimization problem.