Welcome back. This is Discrete Optimization, Local Search, part six.

Okay, so what are we going to do today? We'll start talking about the topic of escaping local minima. And we'll introduce a very important notion, which is connectivity, and talk a little bit about various neighborhoods and how they relate to this notion.

The big goal here is that we are setting the stage for introducing techniques that will allow us to actually find higher-quality solutions, not just producing moves that are improving all the time, but allowing the search to do more interesting things. But a requirement, you know, in general, is that you need some property of your neighborhood to do good things, okay?

So, remember what we did: the guarantee that we have, when we apply greedy moves inside a neighborhood, is that when we complete the search, the configuration that we find is a local minimum.

Okay, which basically means that if you are at that position in the space and you look around, there is no place you can go which is better. That's what it means to be a local optimum, okay? You cannot move to a place which is better, okay?

But that doesn't guarantee that this place is actually good in the grand scheme of things. There may be something really good somewhere else, and you don't know how to get to it, okay? So, we have no guarantee of optimality, and therefore, in practice, escaping these local minima of poor quality is a very important issue.

And once again, the next lecture will show you two really interesting techniques to do that. What we are doing today is setting the stage for that, okay? So this is a beautiful picture done by Carlton, right? You see, this is kind of the neighborhood, the entire region of the space, okay?

So you see, in this particular case, this neighborhood is colored into cold and hot regions. And hot means high quality, cold means terrible, right?

And so, local search picks a point for you. You define the neighborhood of that point. And then, what we have done so far is take a direction which is improving, you know, an improving move. And so we follow this, and then we can continue finding a path, and hopefully we get to a good region of the space, okay? So that's what we have been doing so far, okay?

Now, the landscape of the objective function, okay, can be complex like this one, right? So, what you see here is the quality of the various solutions, and we are minimizing, okay? So this axis is essentially all the possible solutions, and this axis is the quality.

And what you see here is that there are various local minima in this landscape, right? So this is a local minimum, this is also a local minimum, and this one is actually pretty interesting: this is the global minimum. Okay, so that's the global minimum in this particular case. We want to get there, but the searches that we have seen so far are not going to get you there, right?

Now, one of the things that you really don't want to happen is this, okay? You don't want to define your problem, okay, in such a way that it's not connected, which means that if you start from that particular point, you can move inside that region, right, and find a good point in that region, which is here, but you have no way to actually get there. There is no move that will get you there, and that's terrible. The entire region here, which is where the good things are, is not accessible if you start from there. Okay, so we don't want that. This is not good, okay?

So we have no way of moving from one side of the neighborhood to the other one. And so we want to avoid that, and the connectivity requirement that I'm going to talk about is exactly about avoiding this.

Okay, so this is the definition, but essentially what it means is that you want to be able to go from any configuration to at least one optimal solution. Okay, so in a sense, you're going to talk about a neighborhood N. Remember that a neighborhood is something which, given a configuration, gives you the set of configurations where you can go, okay. So neighborhood N is connected if, from every configuration s, anywhere, okay, there is an optimal solution o that you can reach by taking local moves. And what is a local move? Well, you start from s, okay, that's the state s0. Then you can take a local move to go to s1, then to s2, then to s3, and so on, up to sn, which is the optimal solution that you are looking for, and every one of these configurations can be reached by a local move. Which basically means that s_i is in the neighborhood of s_{i-1}, right?

So this is the definition of connectivity; intuitively, it's very simple. It basically means that you can apply local moves from any configuration to get to the optimum. It doesn't mean that you will get there, right? It just means that there is a path, okay? Afterwards, we need to find that path. But what we know is that, for sure, there is a path to an optimal solution, okay?
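This definition can be sketched directly in code. Below is a minimal illustration, not from the lecture: the names `is_witness_path` and `flip_neighborhood` are made up, and the toy neighborhood (flip one bit of a tuple) is just an assumed example to make the check concrete.

```python
# A path s0, s1, ..., sn witnesses connectivity if every step is a
# legal local move, i.e., each s_i is in the neighborhood of s_{i-1}.

def is_witness_path(path, neighborhood):
    """Check that each configuration is a neighbor of the previous one."""
    return all(path[i] in neighborhood(path[i - 1])
               for i in range(1, len(path)))

# Toy example: configurations are bit tuples; a local move flips one bit.
def flip_neighborhood(s):
    return {s[:i] + (1 - s[i],) + s[i + 1:] for i in range(len(s))}

path = [(0, 0), (1, 0), (1, 1)]
print(is_witness_path(path, flip_neighborhood))  # True: each step flips one bit
```

Note that the check says nothing about objective values along the path; connectivity only asks that the moves exist.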

Now, connectivity doesn't guarantee optimality, as I just told you, right? Because so far, our local search has been greedy, okay? So, what happens when you are greedy, okay? When you are greedy, let's say this is the starting point. When you are there, you're going to take moves that are basically improving, okay? So, if you take moves that are improving, you're going to get to this local minimum. Okay, so you take another point over there, a beautiful point, you go down, drop, and that's where you get. You don't get here, right? Because if you try to improve all the time, you're not going to get there. You would have to go up the hill and then go down, right? Which is not something that we have been able to do so far.

And obviously, you know, you have all these points here, oops, I want to show you that, because they are so beautiful, right, that are going to all these places, but they don't get to the global optimum. Okay, so in a sense, connectivity doesn't guarantee optimality, but it's still a very good property to have. Because once we introduce some of the techniques that we'll see, you know, in the next lecture, in connected neighborhoods you may have more hope, and actually, in some cases, guarantees to get to the actual optimal solution.
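The greedy behavior described here can be sketched on a toy one-dimensional landscape. Everything below is an illustrative assumption (the landscape values and the name `greedy_descent` are not from the lecture); it just shows greedy improving moves stopping at the first local minimum they reach.

```python
# Greedy local search on a 1-D landscape (we are minimizing).
# There is a local minimum at index 2 and the global minimum at index 6;
# starting on the left, improving moves get trapped at index 2.

landscape = [9, 5, 1, 4, 7, 3, 0, 6]  # objective value at each position

def greedy_descent(f, start):
    i = start
    while True:
        # neighbors: one step left or right
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(f)]
        best = min(neighbors, key=lambda j: f[j])
        if f[best] >= f[i]:        # no improving move: local minimum
            return i
        i = best

print(greedy_descent(landscape, 0))  # 2: stuck at the local minimum, not index 6
```

Starting from index 4 instead, the same procedure does slide down to the global minimum at index 6, which is exactly why where you start, and whether you can sometimes go uphill, matters so much.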

So, what you see here is graph coloring. Remember, we had this beautiful problem where you would color all these vertices such that two adjacent nodes had to be colored with different colors. Okay, and the neighborhood that we chose was very simple, right? It was changing the color of a node. And usually that neighborhood is connected, and I'm going to give you a simple algorithm. This is actually very interesting, because we are cheating like hell, but this is what you do when you want to show that something is connected, right?

So what you have to do is start from the configuration, and you have to show a path that brings you to the optimal solution. But we can make a really nice assumption: we can actually postulate that we know the optimal solution. So if I know the optimal solution and I'm in a particular configuration, I can design a very simple algorithm to get to the optimal solution. What do I do? I look at every node, and if that node, okay, doesn't have the right value, which means that its value is different from the value in the optimal solution, I simply assign it the value which is in the optimal solution. So I do that for all the nodes, okay; these moves are all legal, right? And then essentially I get to the optimal solution.
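The repair procedure just described can be written down in a few lines. This is a hedged sketch: it assumes colorings are stored as tuples of colors indexed by node, and the names `path_to_optimum`, `s`, and `opt` are made up for the example. The "cheat" is visible in the signature: the function takes the optimal coloring as an input.

```python
# Connectivity argument for graph coloring with the
# "change the color of one node" neighborhood: knowing an optimal
# coloring `opt`, repair the current coloring one node at a time.
# Each repair is a single local move (one node changes color).

def path_to_optimum(s, opt):
    """Return the sequence of configurations from s to opt."""
    current = list(s)
    path = [tuple(current)]
    for node, color in enumerate(opt):
        if current[node] != color:   # wrong color: one local move fixes it
            current[node] = color
            path.append(tuple(current))
    return path

s = (1, 1, 2, 3)     # current coloring (one color per node)
opt = (2, 1, 3, 3)   # assumed-known optimal coloring
for step in path_to_optimum(s, opt):
    print(step)
```

Each consecutive pair of configurations differs in exactly one node's color, so every step is inside the neighborhood, which is all the proof needs.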

So, this is very easy, right? But all the connectivity proofs are like that. You basically assume that you have the optimal solution, and you have a particular configuration, and you show you can actually get there. Now, this is a very simple one, right? Some of them are very complicated, okay, but this gives you the essence of these kinds of proofs, okay? Now, remember car sequencing, okay?

So, in car sequencing, once again, we were scheduling these cars on the assembly line, with production units that had to put the options on while the cars were moving, right? And we had to satisfy the capacity constraints on these production units to make sure that we could put the options on the cars, right?

So, what we designed was this beautiful neighborhood, right, where we would basically look at the cars and swap them, okay? And here you see the options of the various cars, okay? So we have a swap neighborhood, and this swap neighborhood is connected, okay?

And why? Let's look at the algorithm. Actually, let's go back and look at the sequence, right? We can take this sequence and look at the values: either you have the right type of car in a slot or you don't, okay? If you don't have the right type of car, what are you going to do? Well, there is a car of that type somewhere else inside the sequence. You can swap them, and now what do you have? Well, the first slot is right. You move to the next one, okay? At the next one, you look: is it right or not? Is it the value that you have in the optimal solution? If it is, fine, you move to the third one. If it's not, you look for the right type of car, and you swap these two things, okay? And so you can continue doing that, and that's how you show connectivity, okay?

So, once again, that's what you see there. You have an algorithm which iterates from the first to the last slot of the assembly line. You look at whether the value for the configuration s at position i, that's s_i, is the same as the value of the slot at position i in the optimal solution, that's o_i. If they are different, what you do is look for a particular type of car inside the sequence s, further away, obviously, which is of the same type as the value inside the optimal solution. And you swap these two things. And you continue like that, and obviously, you know that you will find one, right, because it's in the optimal solution. And therefore, you fix all these slots, and you get the optimal solution at the end. So what I'm showing you here is essentially that, by doing swaps, you will always get to the optimal solution by a sequence of moves.

Now, once again, I want to repeat this: it doesn't mean that your algorithm is going to find that sequence, right? Because we are always trying to find improving moves and things like this. We would have to do other things. So in practice, some of these steps here may be degrading the quality of your solution tremendously. And that's the challenge in local search, right? So, let me look at something which is a little bit more complex, right? This is the traveling salesman problem.

Remember, we had all these cities that we had to visit exactly once, okay, and we were trying to minimize the total distance. And we introduced this simple neighborhood, one of the simplest, which is called 2-OPT. Now, remember, 2-OPT was basically taking two edges and replacing them by two other edges.

yourself, that is this neighborhood connected, okay?

So, very simple question. Because it has only two possible answer,

right? Yes or no?

So, think about it. Is it connected or not?