And then the second aspect here is basically the transportation cost. We look at the pair (w, c), and if w is serving c, that variable y_wc is going to be 1, so we want to incur the transportation cost t_wc, okay? And of course, if the variable y is 0, then essentially there is no cost incurred; these two are not connected. Okay?
So this is the transportation cost, okay?
So fixed costs, transportation costs. We have the objective function and we
have the entire model okay? So, this is the entire MIP model for
warehouse location. Okay?
Objective function, fixed cost, variable cost.
Okay? Then we have the constraints here, which basically state that you can serve a customer with a warehouse only if the warehouse is open. Then you have to serve every customer exactly once: you look at all the warehouses, and exactly one of them should serve that customer. And then these two types of variables have to be 0 or 1. Okay?
We open up a warehouse or we don't; warehouse w serves customer c or it doesn't. Okay?
And that's the entire model. Okay?
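To make this concrete, here is a minimal sketch of that warehouse location model written with the open-source PuLP modeler. The data at the top (the warehouse and customer lists, fixed_cost, transport_cost) are made-up placeholders standing in for the fixed costs f_w and transportation costs t_wc of the lecture; I write x_w for the "warehouse w is open" variable and y_wc for "warehouse w serves customer c".

```python
# A minimal sketch of the warehouse location MIP described above, using the
# open-source PuLP modeler. The data below are made-up placeholders, not
# values from the lecture.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, value

warehouses = ["w1", "w2"]
customers = ["c1", "c2", "c3"]
fixed_cost = {"w1": 100, "w2": 120}                                      # f_w
transport_cost = {("w1", "c1"): 10, ("w1", "c2"): 20, ("w1", "c3"): 25,
                  ("w2", "c1"): 22, ("w2", "c2"): 12, ("w2", "c3"): 8}   # t_wc

model = LpProblem("warehouse_location", LpMinimize)
x = {w: LpVariable(f"x_{w}", cat=LpBinary) for w in warehouses}          # open w?
y = {(w, c): LpVariable(f"y_{w}_{c}", cat=LpBinary)
     for w in warehouses for c in customers}                             # w serves c?

# Objective: fixed costs for the open warehouses plus transportation costs.
model += (lpSum(fixed_cost[w] * x[w] for w in warehouses)
          + lpSum(transport_cost[w, c] * y[w, c]
                  for w in warehouses for c in customers))

# You can serve a customer with a warehouse only if that warehouse is open.
for w in warehouses:
    for c in customers:
        model += y[w, c] <= x[w]

# Every customer is served by exactly one warehouse.
for c in customers:
    model += lpSum(y[w, c] for w in warehouses) == 1

model.solve()
print(value(model.objective), {w: value(x[w]) for w in warehouses})
```

The two constraint families are exactly the ones just described: the linking constraints y_wc <= x_w, and the "exactly one warehouse per customer" constraints.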
So, it's interesting, when you look at these models, that the variables here are 0/1. Okay?
And I'm going to come back to that, but we could have an alternative model. We could think of, you know, instead of having all these 0/1 variables y_wc for all the warehouses, why don't I simply have a variable y_c, which denotes the warehouse that is serving customer c, okay?
Now, this is a question for you, okay? So try to build a MIP model using the decision variables y_c, okay? And see what the challenges are. It's a very interesting exercise: try to see what is going to be tough if you use these decision variables instead of the 0/1 variables that I showed you, okay?
So, the point here is that, in general, MIP models love 0/1 variables, okay? You'll see a huge number of 0/1 variables in MIP models, and typically very few general integer variables, okay? The integer variables are typically going to be 0/1. So it's like, you know, MIP models like to have decisions which are actually yes or no, okay?
So, very different from constraint programming models in that sense, okay?
But the good point is that once you have the 0/1 variables, expressing the linear constraints is typically very simple, okay?
So, in a sense, you use these very simple 0/1 variables, and then all the constraints are, in general, simple to state. Okay? Sometimes they have many indices, but they are easy to state, okay?
So, it doesn't mean that modeling in MIP is easy, and we'll see some of these issues later on. There are still many, many possible models you can design. What are the basic decision variables that you are going to use?
What are the constraints? What are the objectives?
Everything that we said for constraint programming is still valid.
Finding a good model is not necessarily easy, okay?
And we'll see some of these things in a moment, actually, or in the next lecture.
Okay, so now we have a beautiful MIP model; how do we solve it? This has been an active research area for many, many decades, and it's still a very active research area; it's a fascinating area, okay? And one of the basic techniques to solve MIP problems, okay, is branch and bound.
Okay? Remember branch and bound: we saw it in the first lecture, or the second lecture, with bounding and branching. Okay, bounding is finding an optimistic evaluation of the objective function, an optimistic relaxation; we love relaxing, right?
And then branching is basically splitting the problem into subproblems, and then applying the same algorithm to the subproblems.
Okay? So these are the two basic steps.
Now what is nice about MIP models is that they have a very, very natural
relaxation, okay? Which is linear programming.
And that's why we talked about linear programming so much.
We have this linear relaxation that we can use as the bounding part of branch and bound for MIP models.
Okay? So, the key idea is that you're going to remove the integrality constraints, and you have a relaxation, okay?
So, this is the only thing that you do, right?
You look at this big MIP model. If you remove the integrality
constraints, what do you have? You have a linear program.
You can solve that very fast, and that's what we're going to do, okay?
So, the bounding part, the optimistic relaxation, is going to be basically relaxing the integrality constraints. You enlarge the set of values that the variables can take, so it's a relaxation.
Okay? So, basically, branch and bound for MIP models is going to solve the linear relaxation.
Okay? At a particular node, you're going to do that: you look at that node and say, oh, I want to solve the linear relaxation. And if the linear relaxation is worse than the best solution you have found so far, you can prune that node away, okay?
It basically means that the subproblem associated with that node is provably suboptimal.
You can get rid of it. Okay?
If not, okay, you look at this linear relaxation more closely, and you check whether it has all integer values, which can happen, right? And if that happens, it means that in this large space of solutions, the linear programming relaxation returns an integer value for all of these variables.
And that means that, obviously, it's a solution to the overall MIP problem, well, for that node, right? So, essentially, you have found a feasible solution, and you can update the best solution found so far, okay?
Now, if neither of these two things applies, then you need to branch, okay? And the interesting thing about MIP models is that they generally use the linear relaxation to drive the branching. Okay?
And there are many ways to do this, okay, so I'm just going to mention one here. But this is also a very active area: how you branch, what you use, whether you use the objective function, whether you use the constraints, and so on.
But here is the simplest scheme that uses the linear relaxation.
So, what you're going to do is look at this relaxation, and you know that some of the integer variables have a fractional value, and they should not. Okay?
So, you take one of those, say a variable x which has a fractional value f, and you create two subproblems. Okay?
One of them is going to say that x has to be smaller than or equal to the floor of f, that is, the largest integer which is smaller than f. Okay? That's one of the subproblems.
And the other subproblem is going to say that x is greater than or equal to the ceiling of f, that is, the smallest integer which is larger than f. Okay?
These are the two subproblems: you add these constraints to the set of constraints that you have, and then you repeat the branch and bound, okay?
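As a concrete illustration of these steps, here is a minimal sketch of the branch-and-bound scheme for a 0/1 maximization problem of the form max c.x subject to Ax <= b, with the linear relaxation solved by scipy's linprog (which minimizes, hence the negated objective). The function name and the problem format are my own choices for this sketch; it is a teaching toy, not how production MIP solvers are engineered.

```python
# A teaching sketch of branch and bound with a linear-programming relaxation,
# for a 0/1 maximization problem:  max c.x  subject to  A x <= b,  x binary.
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A, b):
    best_val, best_x = -math.inf, None

    def solve_node(lo, hi):
        nonlocal best_val, best_x
        # Bounding: solve the linear relaxation (0/1 relaxed to the bounds lo..hi).
        res = linprog(-np.asarray(c), A_ub=A, b_ub=b,
                      bounds=list(zip(lo, hi)), method="highs")
        if not res.success:                    # infeasible subproblem
            return
        relax_val = -res.fun                   # optimistic evaluation of this node
        if relax_val <= best_val:              # provably no better: prune the node
            return
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                           # all integer: a feasible MIP solution
            best_val, best_x = relax_val, np.round(res.x)
            return
        # Branching: pick a fractional variable and split on floor / ceiling.
        i, f = frac[0], res.x[frac[0]]
        solve_node(lo, hi[:i] + [math.floor(f)] + hi[i + 1:])   # x_i <= floor(f)
        solve_node(lo[:i] + [math.ceil(f)] + lo[i + 1:], hi)    # x_i >= ceil(f)

    n = len(c)
    solve_node([0] * n, [1] * n)
    return best_val, best_x
```

For 0/1 variables, the floor/ceiling split simply fixes the chosen variable to 0 in one child and to 1 in the other, which is exactly what happens in the knapsack example coming up.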
So, the key point here: if you remember, constraint programming was obsessed with constraints, okay? It was basically branching using feasibility information. Here we are obsessed with the objective.
We are getting this, well, hopefully good, relaxation of the objective, an optimistic evaluation of the objective.
And then we use this linear relaxation, which is like, ooh, this is a very good solution, okay? But it's not feasible, and we try to use that for branching, okay?
So, in a sense, we are branching using optimality information, okay?
So, summarizing what I just said: we are basically focusing on the objective here. Okay, once again I'm exaggerating; of course, MIP systems in practice also focus on the constraints, but it's good to exaggerate to drive the point home.
So, we are basically focusing on the objective, and the relaxation gives you an optimistic bound on the best value that the MIP can ever get.
Okay? The pruning is based, obviously, on the objective and on sub-optimality. Okay?
So, when you detect that the linear relaxation is not better than the best solution you've found so far, you just prune that node away, okay? And obviously, the relaxation has been obtained by relaxing the integrality constraints, okay?
And we have a global view of all the constraints. In constraint programming, all the constraints were acting independently; here we relax the integrality constraints, but when we look at the other constraints, we look at them globally.
Okay, so we have this linear program, which looks at the constraints globally but has simply removed the integrality constraints.
Okay? So, in the knapsack, in a sense, when you look at the relaxation, what we did (this is basically the MIP model for the knapsack) is only look at these 0/1 constraints that you see there, and basically replace them by the fact that, oh, the variable doesn't have to take the values 0 or 1; it can take any value between 0 and 1.
Okay? That's the linear relaxation, and we can do that for any MIP model, right? We just remove these integrality constraints, okay?
Now, in the case of the knapsack, the linear relaxation is the same as the greedy approach that I've shown you, right? Order the items by the most value per unit of weight, okay, and select them in that order, okay? Until you get to an item that doesn't fit entirely, and you take a fractional amount of that item to get the relaxation.
So, that's what the linear programming relaxation is going to do. I'm going to talk a lot about this linear programming relaxation in the next couple of classes.
It's a beast, and you'll see it has very interesting behavior, okay?
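Here is a minimal sketch of that greedy computation of the knapsack's linear relaxation; the function and variable names are mine, chosen for the sketch.

```python
# Greedy computation of the knapsack's linear relaxation: sort the items by
# value per unit of weight, take them whole while they fit, then take a
# fraction of the first item that does not fit.
def knapsack_lp_relaxation(values, weights, capacity):
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    x = [0.0] * len(values)
    total, remaining = 0.0, capacity
    for i in order:
        if weights[i] <= remaining:            # the item fits entirely
            x[i] = 1.0
            total += values[i]
            remaining -= weights[i]
        else:                                  # take only a fraction, then stop
            x[i] = remaining / weights[i]
            total += values[i] * x[i]
            break
    return total, x
```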
Now, when we branch, we branch on the fractional variable, and essentially, in this particular case of the knapsack, the fractional variable is going to be the item with the largest value-to-weight ratio that we can't actually fit entirely in the knapsack.
We fit all of the items which are more valuable per unit of weight, right, with a better ratio. Okay? And then we are looking at that last item that we would want to put into the knapsack, but we can only put in a fraction of it.
That's essentially the item we're going to branch on, because it is the only item with a fractional value, okay?
Now, the question that I have for you to think about at this point is this: we're going to branch two ways. These are 0/1 variables, right? So, when we round the fraction downwards or upwards, what you are going to get is essentially 0 on one side and 1 on the other side. So, we either don't take the item or we take the item.
Now, when we don't take the item, can you think of what the linear relaxation is going to do, okay? A lot of the game in MIP is figuring out what the linear relaxation is going to do. And it's very capricious, you'll see.
Okay? So, in a sense, what I'm asking you is: if we don't take the item, what's going to happen? Or, if we take the item, what's going to happen to this linear relaxation? Can you figure that out? Okay?
So, in a sense, the question that I'm asking you is: which value is going to become fractional if you don't take the item?
Okay, think about it. I decide not to take the item.
All the items that were more valuable, what's going to happen to them?
Well, you're still going to take them. And then what you're going to have is a fractional value for one or more items later on.
Okay? So that's what's going to happen if you don't take the item.
If you take the item, what happens? Ooh, before, you couldn't put this item in the knapsack; now you've decided, I want to take that item. So that means some of the items that were taken before won't fit inside the knapsack anymore, okay? And so an item that was more valuable per unit of weight is now going to become fractional, okay?
So, this is the kind of reasoning you sometimes need: think about what the linear relaxation is doing, because that will give you insight into how to actually improve your models, okay?
So, this is a very simple illustration, because this is a really, really simple example, but it's going to drive two points home, okay?
And, you know, you'll see them. So, once again, we had this knapsack with three items, okay?
The value of item i is v_i, the weight of item i is w_i, and you have items 1, 2, 3. Okay?
The most valuable item per unit of weight is item 3. Okay, 35 divided by 3 is pretty good, above 10. Okay? Then 45 divided by 5 is 9, and 48 divided by 8 is 6, right?
And so, therefore, this is the most valuable item, that one is the next, and this is the least valuable item in terms of the value-to-weight ratio. Okay?
So, at the top of the tree, at the root node, what you have is a value of 92, and that value of 92 is basically obtained by selecting items. Well, I think we can let the linear relaxation do this, and this is what the linear relaxation does, okay?
The linear relaxation is going to take value 1 for x1 and for x3: I'm taking item 1, I'm taking item 3. Okay? These are the two items with the best ratios.
And then I take only a fraction of the second item, because I can't put the entire item in, okay? Once again, these weigh 5 and 3, that one weighs 8, so I can only take a quarter of that guy.
And this is the value you get: you sum 45 and 35, that's 80, and then a quarter of 48, that's 12, so 92 is the value of the linear relaxation. So, the linear relaxation is basically giving you this solution and that particular value.
Okay? What are we going to do? Okay? I told you: we look at this solution and we ask, ooh, which value is fractional? In this particular case, it's x2, okay?
So, we're going to branch on x2, and we're going to create two subproblems, okay? One where x2 is 0, and one where x2 is 1.
Okay? What happens if x2 is 0, okay?
So, that's the node that you see over there.
Okay? Now, the linear relaxation is 80, okay? You can only select items 1 and 3.
Okay? So, we solve the linear relaxation at that node, and what do we obtain? x1 is equal to 1, x2 is equal to 0, x3 is equal to 1.
That's the linear relaxation, and the linear relaxation here finds an integral solution. We know that we don't have to branch anymore.
We are fine, okay; this is the best we would be able to do. It's all integral, okay, so we don't have to branch anymore. It's done, okay?
So, the linear relaxation here is telling you, in a sense, when to stop branching, okay? So, we have found a solution, and we evaluate it, okay?
The other thing that we wanted to do is basically choose x2 equal to 1, okay?
So, we fix the value of x2 to 1, we solve the linear relaxation again, and the linear relaxation is going to give us a value of 71.33.
Okay? You see that now there is this value of x3 which is fractional, okay; it was actually not fractional in the original relaxation, okay?
So, we know that we would have to continue branching, but what we have seen here is that the value of this linear relaxation is 71.33, while the best solution that I have found so far is 80, so I can actually prune that node away already.
So, with one branching, essentially, I solved this problem optimally.
Okay? So, this is essentially illustrating the various concepts here. At a particular node, you solve the linear relaxation. Okay?
The value of the objective function of that linear program is basically giving you an optimistic evaluation of what the MIP can do at that particular node. Okay?
And then you look at the variables over there, and you take a fractional variable to actually branch on, okay?
And then you branch left and you branch right based on that fractional value. Okay?
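To tie the example together, here is a small self-contained computation that reproduces the three relaxation values from the walkthrough: 92 at the root, 80 in the x2 = 0 branch, and 71.33 in the x2 = 1 branch. The knapsack capacity is not stated here; 10 is assumed, since that is the value consistent with taking items 1 and 3 whole plus a quarter of item 2 (5 + 3 + 8/4 = 10).

```python
# The three-item knapsack from the walkthrough; capacity 10 is an assumption
# consistent with the quoted numbers.
v, w, K = [45, 48, 35], [5, 8, 3], 10

# Root relaxation: items 1 and 3 have the best value/weight ratios and fit;
# the leftover capacity holds only a quarter of item 2.
frac2 = (K - w[0] - w[2]) / w[1]              # (10 - 5 - 3) / 8 = 0.25
root = v[0] + v[2] + frac2 * v[1]             # 45 + 35 + 12 = 92.0

# Branch x2 = 0: items 1 and 3 fit exactly, and the solution is integral.
left = v[0] + v[2]                            # 80

# Branch x2 = 1: item 2 uses 8 units, 2 remain, so x3 becomes fractional (2/3).
frac3 = (K - w[1]) / w[2]                     # 2 / 3
right = v[1] + frac3 * v[2]                   # 48 + 23.33... = 71.33...

print(root, left, right)                      # 92.0 80 71.333...
```

Since 71.33 is below the incumbent 80, the x2 = 1 node is pruned, which is exactly the one-branching argument above.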
Now, one of the questions that you have to think about is: when is branch and bound going to be effective? Okay?
It is going to be really effective when the value of the linear relaxation is tight, that is, when the linear programming relaxation is very, very close to the optimal value of the MIP.
Okay? Is that sufficient?
Huh? Think about it.
So, if I tell you, I guarantee, okay, that this linear relaxation is always within 0.5% of the best value of the MIP, is that sufficient for the MIP to actually behave nicely all the time?
Okay? Question for you.
Okay? I wish we had quizzes in this class.
We don't, but anyway. So think about it.
Is it sufficient? Okay?
So, what is a good MIP model? A good MIP model is going to be a model
which has a good linear relaxation. It's not necessarily sufficient.
Okay? I answered that question, right?
But this is a necessary condition, you want a model with a very good linear
relaxation. Why?
Because you're going to prune the search space much better.
Okay? Now, which variable should we branch on? We have a lot of variables, and it seems that we have plenty of variables with fractional values. Which one are we going to branch on?
Okay? One good heuristic is to branch on the most fractional variable, okay?
Why would we do that? Well, once again, let's exaggerate. Assume that you have a variable which is not very fractional, okay? So it's something like 0.99999999999, okay? And it can take the values 0 or 1, okay?
When it is 0.9999999, I hope I used the same number of nines, that means it's really close to 1, and probably branching there doesn't make too much sense.
If a variable is 0.5, what does that mean? It means that the linear relaxation is saying: hmm, maybe I should take this item; hmm, maybe I should not take this item; hmm, I don't really know. Okay?
And so these are the types of variables that you want to branch on, okay?
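Here is a minimal sketch of that "most fractional" rule, assuming the solution of the linear relaxation is given as a list of values; the function name and the tolerance are my own choices.

```python
# Pick the variable whose fractional part is closest to 0.5, i.e. the one the
# linear relaxation is most undecided about. Returns None if the solution is
# already integral (up to the tolerance).
def most_fractional_index(x_relax, tol=1e-6):
    best_i, best_dist = None, tol
    for i, v in enumerate(x_relax):
        dist = abs(v - round(v))     # distance to the nearest integer, in [0, 0.5]
        if dist > best_dist:
            best_i, best_dist = i, dist
    return best_i

# Example: with [1.0, 0.9999999, 0.5] this picks index 2, not the near-integral index 1.
```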
So, this is a basic introduction to MIP.
Next time we are going to see really cute things on how you can actually build MIP models, and what it means to be a good MIP model.
Okay? Come back.
Thank you.