Welcome back. This is discrete optimization, and we are moving to the second computational part for optimization, which is called local search. This is the outline for today. We're going to start with a basic introduction to local search, and then we'll cover a very simple local search heuristic, called max/min-conflict, to introduce most of the concepts that we will be dealing with in the next couple of lectures. So, a couple of years ago, when my son was very young, I gave him the eight queens problem to solve, okay? I wanted to see how he would approach it.

And so this is what he did: he took all these queens that I gave him, and he put them on the chess board. Okay? And then he started moving these queens, trying to find a configuration where none of the queens were attacking each other. Okay? So, he didn't try to prune the search space, or try to extend a partial solution. He just moved the queens around.

And this is exactly what local search is about. So, what you're going to do is move from configuration to configuration by doing local moves: moving the configuration from its current state to a state which is very close to it, changing it a little bit, in the hope of getting closer to a particular solution. Okay?

So local search, okay, is basically working on complete assignments of values to the decision variables, and then it is essentially modifying those assignments. Okay?

So how is this different from constraint programming? Well, essentially, there we were working with partial assignments and trying to see if we could extend them. And, to make sure that we could extend them, we were trying to prune the search space as much as possible, so that we would not try to extend them with a value which was not feasible. Local search is way more brutal, okay?

So we accept that we're going to violate some of these constraints, and then we just try to eliminate those violations as we proceed, okay? So, how do we drive a local search? Well,

there will be different things that we can do, okay?

And it depends essentially on the nature of the problem.

If you have a satisfaction problem, what you will do is always start with an infeasible configuration; otherwise you would already have a solution, and therefore there would be no problem to solve.

And then what we are going to do is move from this infeasible configuration toward feasibility, trying to essentially make this configuration less and less infeasible, in a sense. In pure optimization problems, the only

thing that you have is an objective function, you have no constraints.

Okay, and what you are trying to do is essentially go to the optimal solution,

so you start with a sub-optimal solution. A local search is going to try to move

progressively towards optimality. And then, you have constrained

optimization problems which are the most complicated, of course, you have both

constraints and optimization. And I will review some of the options later on, once we have seen both satisfaction and optimization problems, because there are many ways to actually address those problems with local search. Okay?

Now let's look at satisfaction problems first, and how to drive the search towards feasibility. And the key idea here is essentially that we have this satisfaction problem and we will transform it into an optimization problem. Okay?

So we use the concept of violation. Essentially, at any point, we have a configuration, okay, basically this assignment of values to the decision variables. And we know that there are a bunch of constraints that are violated, okay?

We can count them, for instance. And then, and this is only one of the ways to do it, we can try to minimize the number of violations of these constraints. So we want to move from a configuration to another configuration where there are fewer violations.

Okay? Now how do we do that?

Well we have to decide what is a local move.

And once again, and we'll see that in the next couple of lectures, there are many

ways you can move from one configuration to the next.

The simplest thing is just to select a decision variable and change its value. Okay? So, a very simple move.
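To make this concrete, here is a small sketch of that move in Python, assuming a configuration is represented as a dict mapping each decision variable to its value, and `domains` gives the candidate values per variable (both names are mine, purely for illustration): each neighbor differs from the current configuration in the value of exactly one variable.

```python
# A sketch of the "change one variable" move: each neighbor of a
# configuration differs from it in the value of exactly one variable.

def neighbors(config, domains):
    for var, value in config.items():
        for new_value in domains[var]:
            if new_value != value:
                neighbor = dict(config)      # copy the current configuration
                neighbor[var] = new_value    # change a single variable
                yield neighbor
```

For example, with two binary variables, the configuration {'x': 0, 'y': 1} has exactly two such neighbors.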

Okay? Now, once you have defined all the moves that you can take, you have to decide which one you're really going to select and execute. Okay?

And once again there are many choices, and we'll see many different strategies over the next couple of lectures. But the approach we are going to pursue today is simply a max/min-conflict approach, which is essentially a slight generalization of the min-conflict heuristic, which has been very successful in a variety of problems, okay? So what is the max/min-conflict heuristic

doing? Essentially, what we do is choose a variable that appears in a large number of violations. So, we look at a variable which is really violating a lot, one of those variables really violating a lot of constraints. And then what we try to do is change the value of that variable so that we decrease the number of violations the most. Okay? So, let me just illustrate that.

Okay, so remember, this is the constraint programming model for the queens problem. And what is interesting to see in that particular model are all the constraints that you see over there, right? So, some of these constraints, once you assign values to the variables, are going to be violated, and others are going to be satisfied, okay.

Now, visually, this is what it looks like, right? So when you look at the board, you see here there is a constraint which is violated, and this is another constraint which is violated, okay? So now, when we look at the max/min-conflict heuristic, what we want to do first is compute the various constraint violations in which the particular variables appear, okay? So we look at this particular queen here

and now we look at the constraints that it is violating, and of course, what we have to do is look at the vertical, the diagonals, and the horizontal lines, okay? So in this particular case, we see that this particular variable appears in exactly one conflict, right?

This queen is attacking that one. Okay?

So we move to the second queen, okay? And what we see there is that this queen attacks that queen, and it attacks this one, and so this particular queen appears in two violations.

Okay? And we can do that for all the queens. So this is another queen which basically has two violations. The next one is very aggressive: it has three violations, and so on and so forth. And so essentially, what you see here are all the violations that these variables appear in, and the number that you see over here essentially is the total number of constraints that are violated, okay?
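The violation counts in this picture can be computed directly. Here is a minimal sketch, assuming the configuration is a list `queens` where `queens[c]` is the row of the queen in column `c`, so only rows and diagonals can conflict (this representation and the names are my assumption, not from the lecture):

```python
def conflicts(queens, col):
    # number of other queens attacking the queen in column `col`:
    # same row, or same diagonal (row distance equals column distance)
    row = queens[col]
    return sum(1 for other, r in enumerate(queens)
               if other != col and (r == row or abs(r - row) == abs(other - col)))

def total_violations(queens):
    # each attacking pair is counted once per endpoint, so divide by two
    return sum(conflicts(queens, c) for c in range(len(queens))) // 2
```

Since every attacking pair is seen from both of its endpoints, the per-queen counts sum to twice the total number of violated constraints.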

Now, the first thing that a max/min-conflict heuristic will do is take a queen which appears in the largest number of violations. Okay? So you have a queen here which is very aggressive, appearing in three violations.

That's the queen that you're going to select.

Okay? So that's the first step.

Now we have the queen that we really want to move, because this queen is very aggressive. Okay? So what do we do? Now we have to choose the value that we are going to assign to that particular queen.

And what we can do is decide: okay, what happens if I move this queen to that particular position? How does that change the constraint violations? And what you can see in this particular case is that instead of having three violations, we would have only two; we are attacking these two queens.

And then you can do the same thing for the next queen.

Here, in this particular case, the queen is still very aggressive and still has three violations, okay? And then we continue doing this essentially for every one of the possible positions for that particular queen.

Okay, and what we see at that point is that, if we want to reduce the violations the most, we have to choose one of these values, and we can choose randomly, let's say the first one here. Okay, and at that point what we do is place the queen at that particular position; that's the local move. We chose the queen, we chose a new position for it, and we performed the move. Okay?

So at that point we have a new configuration, and we can redo all the computations that we have done. We can find that there are five violations overall. We can find the number of violations in which all the queens are participating. And then we can select one of the queens participating in the most violations and start deciding where to move it. In this particular case, only one of the moves is going to decrease the number of violations, so that is where we move the queen. We have a new configuration, so what do

we do? Exactly the same thing.

Now we have four violations; we reduced the number of violations by one. We recompute the violations of the variables, we select the one which appears in the most violations, and we move it again to a position that will decrease the number of violations.

And we keep doing this: now we move another queen, and we have three violations. We continue moving queens: we have another queen which has two violations; we move it and decrease the violations by one. We keep doing this: another queen appears in one violation, we decrease the violations to one, and then finally we are able to reduce the violations to zero, and we have a feasible solution. Okay?
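Putting the whole walkthrough together, a minimal sketch of this max/min-conflict search could look as follows, using the same column-based representation for the queens as before; the function names, the step limit, and the random tie-breaking are my own choices, not prescribed by the lecture:

```python
import random

def conflicts(queens, col):
    # number of other queens attacking the queen in column `col`
    row = queens[col]
    return sum(1 for other, r in enumerate(queens)
               if other != col and (r == row or abs(r - row) == abs(other - col)))

def max_min_conflict(n, max_steps=10000, seed=0):
    rng = random.Random(seed)
    queens = [rng.randrange(n) for _ in range(n)]        # random start
    for _ in range(max_steps):
        counts = [conflicts(queens, c) for c in range(n)]
        worst = max(counts)
        if worst == 0:
            return queens                                # feasible solution
        # max step: pick a queen appearing in the most violations
        col = rng.choice([c for c in range(n) if counts[c] == worst])
        # min step: move it to a row that decreases its conflicts the most
        best_rows, best = [], None
        for row in range(n):
            queens[col] = row
            s = conflicts(queens, col)
            if best is None or s < best:
                best_rows, best = [row], s
            elif s == best:
                best_rows.append(row)
        queens[col] = rng.choice(best_rows)
    return None                                          # stuck within the budget
```

This greedy version can still get stuck or cycle on a plateau; escaping such situations is exactly the topic announced for the later lectures.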

So what is important to see here is that essentially we start from an infeasible configuration which has a number of constraint violations, and we try to move to configurations which have fewer and fewer violations. And at every step, we try to do that by selecting a queen, in this particular case, which appears in the largest number of violations, and moving it to a place which decreases the violations the most. This is just one of the possible

strategies. We'll see many strategies in the next couple of lectures, but this is essentially illustrating the concept behind local search. Okay?

So, in a sense, the way to think about local search is that you have an objective function that you are trying to optimize. Okay? And then what the local moves are doing is defining a neighborhood: that's essentially the set of configurations that are close to the configuration that we are considering right now, okay? In the sense that the neighborhood, from a particular configuration, is going to give you a set of configurations, and those are the places where we can move.

Okay? So, in the particular example of the queens problem, the neighboring configurations were simply the ones that you can reach by selecting one of the queens which is most violated and moving it to a position which decreases the violations the most.

Okay? So, in a sense, you can think of local search as a graph exploration. You start from a particular point, and then you move from one configuration to another configuration, and so on, and you try to go to configurations which are closer and closer to feasibility, in a sense. Okay? So that's what we do.

Okay? So the key concept in local search is the concept of a local minimum. Okay? That's what a local search does: it gets you to a local minimum. And what is a local minimum?

It's a state inside the configuration space where, wherever you move, wherever your neighbors are, okay, they are going to be worse off than where you are. So it's a position where, when you look at every neighbor n, okay, of the configuration c that you are considering right now, the value of the objective function of that neighbor is going to be greater than or equal to the current value of the objective function.
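In symbols, writing f for the objective function and N(c) for the neighborhood of the configuration c, this definition of a local minimum reads:

```latex
c \text{ is a local minimum} \iff \forall n \in N(c) : f(n) \ge f(c)
```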

Okay? So essentially, you are at a point where, wherever you move, you are going to get worse, or at best as good as you are right now.

Okay? So, in a sense, the basic kernel of a local search is going to select a configuration, a seed, initially, using some method. Then it computes the neighborhood, okay, that's a set of configurations. Actually, it computes the set of all the neighbors that you can reach and which are better than the configuration that you are in right now. So we look at essentially all the neighbors which are decreasing the value of the objective function, okay? If that set is not empty, you select one of these configurations, move to it, and then you recompute the set of configurations which you can move to.

Okay? So basically, local search is this kind of iteration where you move from one position to a better position, and you keep doing that until you reach a local minimum.
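That iteration can be sketched generically. Here is a minimal version, assuming the caller supplies the objective function `f`, a `neighbors` function, and a seed configuration (the names are illustrative):

```python
def greedy_local_search(seed_config, f, neighbors):
    c = seed_config
    while True:
        # the set of neighbors that strictly improve the objective
        improving = [n for n in neighbors(c) if f(n) < f(c)]
        if not improving:
            return c                  # local minimum: no neighbor is better
        c = min(improving, key=f)     # select an improving neighbor (here: the best)
```

For instance, minimizing f(x) = (x - 3)^2 over the integers, with neighbors x - 1 and x + 1, this walks from 0 up to the minimum at 3 and stops there, since neither neighbor improves.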

Okay? So local search gives you no guarantees. Okay, the only thing that it is going to give you is a local minimum, and that local minimum can be arbitrarily bad, or arbitrarily good. Okay, so one of the things that we'll see

in the lectures later on is how to escape these local minima, or how to find local minima that are of high quality, not getting stuck in a poor-quality local optimum. There are many, many ways to do this, but we'll talk about this later on, once we have seen many of the concepts behind local search itself. So for the moment, we'll use this greedy local search algorithm, which is always trying to improve and gets stuck at some point in a local minimum. Afterwards, we'll see how to escape.

Okay? Now, in satisfaction problems, what we saw is that the function f we minimize is actually the number of constraint violations. Okay? And usually, when this function reaches zero, we know that we have a feasible solution. If we get stuck in a local minimum whose value is higher than zero, the configuration that we found is not a feasible solution. Okay?

So, there are many ways to measure these constraint violations. And once again, this is a topic that we will return to, but what we have done so far is basically looking at each constraint and deciding if the constraint is violated or not. There are other ways to actually measure these violations: you can consider how violated a constraint is, and we'll talk about that later on as well.
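As a tiny illustration of that difference, take a toy constraint x + y <= c (the constraint and the function names are mine, purely for illustration): the first measure only says whether the constraint is violated, the second says by how much.

```python
def is_violated(x, y, c):
    # boolean measure: the constraint x + y <= c is either violated or not
    return x + y > c

def violation_degree(x, y, c):
    # graded measure: by how much the constraint x + y <= c is violated
    return max(0, x + y - c)
```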