0:00

Okay, so: discrete optimization, part six, the last part. What I am going to talk about today is where this dual is coming from and why it is useful. We saw that it was beautiful, but we had no idea where it magically came from, and that is what I want to cover in this lecture. I also want to tell you a little bit about what it means, how you can understand these things, what the meaning of these dual variables is, and then what the dual is useful for. So, the first part: understanding where it came from.

This is an example that I took from [UNKNOWN] book. If you have to read two books on linear programming, this has to be one of them. What you see there is a linear program: I am maximizing my profit, maximizing this objective function subject to these constraints. And the question I am asking is: can I bound this maximum? Is there an optimistic evaluation of that maximum? What I am showing you here is one such bound: I am claiming that the maximum value of the objective function is at most 110.

Why? Look at this constraint and look at this one, these two constraints. This right-hand side is 55, this one is 110: double. Why? Because I took that constraint and doubled every one of its coefficients. So instead of a 5, I get a 10; instead of a 1, I get a 2. Why did I do that? Because as soon as I do that, you can see that this 10 is greater than the corresponding coefficient in the objective function. This value is also greater; all the coefficients here are greater than those of the objective function. And since the variables are nonnegative, for any feasible solution this left-hand side has to be greater than the value of the objective function, by definition. Therefore, in this particular case, I know that 110 is an upper bound: I will never be able to get higher than 110. Now, this is a terrible upper bound, of course, but I am going to show you better ones.

So, look at these two constraints here. I take these two constraints and I add them together. If I add them, I get the constraint that you see here: 4x1 plus 3x2 and so on, smaller or equal to 58. Now, once again, you can do the same reasoning. Look at all the coefficients: once again, they are always greater than or equal to the coefficients inside the objective function. This 4 is the 4 of the objective function; the next one is a 3 instead of a 1, so I know that the sum of these two terms is at least the sum of the corresponding objective terms. I continue the same way: the 5 in the objective function becomes a 6, and the 3 remains a 3. So I know that this left-hand side is always going to be greater than the value of the objective function, whatever values I find for the variables. But this time the right-hand side is 58, a much smaller value. So I know that 58 is already a bound on the objective value in this particular case. Wow.

So what I am showing you here is that by combining these constraints, and by making sure that the combination satisfies the basic requirement, that its coefficients have to be greater than or equal to the coefficients of the objective function, I can bound the value of the objective function. And that is very nice.
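This bounding recipe can be checked mechanically. Here is a small Python sketch; the coefficient data is assumed, reconstructed from the numbers quoted in the lecture (right-hand sides 1, 55, 3; objective 4x1 + x2 + 5x3 + 3x4), so treat it as an illustration rather than the exact slide.

```python
# Bounding a maximization LP from above with conic combinations of its rows.
# Data is assumed from the lecture's numbers, not taken from the slides.

c = [4, 1, 5, 3]                 # objective: max 4x1 + x2 + 5x3 + 3x4
A = [[ 1, -1, -1,  3],           # row 1: ... <= 1
     [ 5,  1,  3,  8],           # row 2: ... <= 55
     [-1,  2,  3, -5]]           # row 3: ... <= 3
b = [1, 55, 3]

def upper_bound(y):
    """If y >= 0 and the combined row dominates c coordinate-wise,
    then y . b is a valid upper bound on the optimum (since x >= 0)."""
    assert all(yi >= 0 for yi in y)
    combined = [sum(y[i] * A[i][j] for i in range(3)) for j in range(4)]
    assert all(combined[j] >= c[j] for j in range(4)), "row does not dominate c"
    return sum(yi * bi for yi, bi in zip(y, b))

print(upper_bound([0, 2, 0]))    # doubling row 2 gives the bound 110
print(upper_bound([0, 1, 1]))    # adding rows 2 and 3 gives the bound 58
```

Any nonnegative multipliers that pass the dominance check give a valid bound; searching for the best such multipliers is exactly what comes next.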

So, in a sense: yes, yes, but what is the relationship to the dual? Look, what I am going to do is take an arbitrary positive combination, it is called a conic combination, of these constraints. To do that, I am going to use y1, y2, and y3 as the coefficients with which I combine these inequalities. [laughs] Think about it: y1, y2, y3, does that remind you of something? So I take these y1, y2, y3, and I multiply the various constraints that you have there by them. And I know that when I take this combination, I have to be smaller or equal to this particular value here, the combination of the right-hand sides: if I multiply a left-hand side by a nonnegative y, it stays smaller or equal to y times the right-hand side. So if I multiply these constraints by y1, y2, y3 and add them up, the result has to be smaller than this expression.

And once again, look at the result: hey, objective function. If the combined coefficients dominate the objective coefficients, I know that the value of this combination has to be greater than the value of the objective function; any such combination is an upper bound. But what I want is the tightest upper bound. So I minimize: I want to find the y's that minimize the value of this bounding expression. And as I told you before, the y's have to satisfy some constraints: when I look at the coefficients of a variable and multiply them by the y's, I have to make sure that the result is greater than the coefficient of that variable in the primal objective. So, in a sense, what are these? Well, these are the dual constraints, and this is the dual objective function. The whole thing here is just a trick to bound the value of the primal: the dual is basically bounding the primal. And that is what I told you before: when the primal was minimizing, the dual was maximizing, and the dual was always a lower bound on the value of the primal.

Here, I am maximizing, so of course the dual is minimizing, and the dual here is always an upper bound on the value of the primal. This is a systematic derivation of why that is true. And of course, once you notice, ooh, this bound is itself an interesting linear program, you can start studying its properties. That is essentially what I showed you last time: the objective value of the primal at optimality is equal to the objective value of the dual at optimality. So this value here that you are minimizing is going to be exactly the optimal value of the primal. That is the derivation; that is where this dual is coming from, essentially.
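The recipe above is completely mechanical, so the dual can be written down mechanically as well. A sketch of that construction for a primal of the form max c.x subject to Ax <= b, x >= 0; the numeric data is assumed for illustration.

```python
# For  max c.x  s.t.  A x <= b, x >= 0, the derivation gives the dual
#   min b.y  s.t.  A^T y >= c, y >= 0.
# The data below is an assumed 3-constraint, 4-variable example.

A = [[ 1, -1, -1,  3],
     [ 5,  1,  3,  8],
     [-1,  2,  3, -5]]
b = [1, 55, 3]
c = [4, 1, 5, 3]

m, n = len(A), len(A[0])

dual_c = b                                                  # minimize b . y
dual_A = [[A[i][j] for i in range(m)] for j in range(n)]    # A transposed
dual_b = c                                                  # one row per x_j

# Print the dual constraints: each primal variable yields one dual row.
for row, rhs in zip(dual_A, dual_b):
    print(" + ".join(f"{a}*y{i + 1}" for i, a in enumerate(row)), ">=", rhs)
```

Note the swap: primal right-hand sides become the dual objective, primal objective coefficients become the dual right-hand sides, and the matrix is transposed.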

Now I am going to talk about complementary slackness, which starts to convey what these dual variables mean. This is a very interesting topic in mathematical programming in general; here we apply it to linear programming. What these equations are saying is that if you have x-star and y-star, optimal solutions to the primal and the dual, these conditions have to be satisfied. Let me go over them. Look at a constraint of the primal; it is an inequality. The condition says that either this inequality is tight at the optimal solution, or the corresponding dual variable has to be zero. So this is linking the primal optimal solution and the dual optimal solution: either that primal constraint is tight, or the dual variable is zero. And the second condition expresses the same thing for the dual: either a dual constraint is tight, or the corresponding primal variable is zero. Very interesting. A very interesting relationship between these things.
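These conditions are easy to check numerically. Below is a sketch on a small made-up LP whose optimal pair I verified by hand; none of these numbers come from the lecture.

```python
# Complementary slackness check on a tiny assumed LP:
#   max 2x1 + x2   s.t.   x1 <= 2,  x2 <= 3,  x1 + x2 <= 4,  x >= 0.
# Hand-checked optima: x* = (2, 2) with value 6, y* = (1, 0, 1) with value 6.

A = [[1, 0], [0, 1], [1, 1]]
b = [2, 3, 4]
c = [2, 1]
x_star = [2, 2]
y_star = [1, 0, 1]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

# Either primal constraint i is tight at x*, or y_i* = 0.
for i, row in enumerate(A):
    slack = b[i] - dot(row, x_star)
    assert y_star[i] * slack == 0, f"violated on primal row {i}"

# Either dual constraint j is tight at y*, or x_j* = 0.
for j in range(len(c)):
    dual_slack = sum(y_star[i] * A[i][j] for i in range(len(A))) - c[j]
    assert x_star[j] * dual_slack == 0, f"violated on dual row {j}"

print("objectives match:", dot(c, x_star), "=", dot(b, y_star))
```

The second constraint, x2 <= 3, has slack at the optimum (x2 = 2), and sure enough its dual variable y2* is 0.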

So, you can guess which values are zero on one side depending on which constraints are tight on the other side. Now, let me give you an economic interpretation of this. Once again, I am looking at this linear program, and we are maximizing, so think of maximizing profit. Now you have constraints; for instance, they could be capacity constraints on your production. So this b_i over there is telling you: this is as much as you can produce. You want to maximize your profit, but you are limited in what you can produce. Now, look at this t_i. What I am asking is: ooh, assume that I can increase my production capacity a tiny bit. What is going to happen? This is essentially what these dual variables are telling you. If you increase your capacities a little bit, the value of the objective function changes in a tiny fashion as well: it is still going to be at least as good as the primal objective value z-star that you see there, but it increases by t_i times y_i-star for every one of these dual variables. Which basically means that if you increase the capacities in these various constraints, the objective value increases by each capacity increase multiplied by the value of the corresponding dual variable. So the dual variable is, in a sense, telling you how much relaxing that particular constraint, increasing that particular capacity, brings to the objective function. Now, this is only valid when t_i is sufficiently small, because at some point, if you make t_i sufficiently big, you may change the basis, and so on and so forth; you may change the nature of the solution fundamentally. But when you stay very close to the current optimum, this is what the duals are telling you.
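This shadow-price reading can be tested on a toy problem. The sketch below solves a tiny assumed two-variable LP by brute-force vertex enumeration (not by the simplex method), then nudges each right-hand side by t = 0.1 and compares the gain with y_i* times t; the data and the dual optimum y* = (1, 0, 1) are made up and hand-checked.

```python
# Shadow prices: nudging right-hand side b_i by a small t changes the
# optimum by about y_i* * t.  Assumed LP: max 2x1 + x2, s.t. x1 <= 2,
# x2 <= 3, x1 + x2 <= 4, x >= 0, with dual optimum y* = (1, 0, 1).

def solve_2d(A, b, c):
    """Max c.x over A x <= b, x >= 0 by enumerating constraint-pair vertices."""
    rows = A + [[1, 0], [0, 1]]          # nonnegativity as extra rows
    rhs = b + [0, 0]
    best = None
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            det = rows[i][0] * rows[j][1] - rows[i][1] * rows[j][0]
            if det == 0:
                continue                 # parallel rows: no vertex
            x1 = (rhs[i] * rows[j][1] - rows[i][1] * rhs[j]) / det
            x2 = (rows[i][0] * rhs[j] - rhs[i] * rows[j][0]) / det
            feasible = (x1 >= -1e-9 and x2 >= -1e-9 and
                        all(r[0] * x1 + r[1] * x2 <= v + 1e-9
                            for r, v in zip(A, b)))
            if feasible:
                val = c[0] * x1 + c[1] * x2
                best = val if best is None else max(best, val)
    return best

A = [[1, 0], [0, 1], [1, 1]]
b = [2, 3, 4]
c = [2, 1]
y_star = [1, 0, 1]                       # hand-checked dual optimum

z = solve_2d(A, b, c)                    # 6.0
t = 0.1
for i in range(3):
    bumped = list(b)
    bumped[i] += t
    print(f"bump b{i + 1}: gain {solve_2d(A, bumped, c) - z:.4f}, "
          f"predicted {y_star[i] * t:.4f}")
```

The constraint with slack (x2 <= 3) has a zero dual, and bumping its capacity buys nothing; the two tight constraints pay off at exactly their dual rates.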

One last thing I have to tell you: duality in the tableau. Remember, when we presented the tableau, you always start with a basis, and you know that at the end, when you are at the optimal solution, you read the value of the objective function there. Now, look at the first basis: either the artificial variables, or a basis that you found after the first phase, something like that. Assume these are artificial variables. The cost of these things in the objective function is zero. So what do you know? When you look at the formula for the reduced costs, the c_j has to be zero. You also know that the matrix of these columns, A_j, is the identity matrix. So essentially, what remains of the formula in those positions is c_B A_B^-1, which you recognize as the simplex multipliers. So what you will find in those locations of the final tableau are the simplex multipliers, which are the dual variables. Essentially, if you solve the primal, you can always look inside your tableau and you will find the values of the dual variables, the solution to the dual. So not only do you know that the dual has a particular optimal cost, you can actually retrieve the optimal dual solution.
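The identity y = c_B A_B^-1 can be checked directly. A sketch, again on an assumed tiny LP (max 2x1 + x2 with x1 <= 2, x2 <= 3, x1 + x2 <= 4, in standard form with slacks s1, s2, s3); the optimal basis {x1, x2, s2} and the dual optimum were verified by hand.

```python
# Recovering the dual solution as the simplex multipliers y = c_B * B^{-1}.
# Assumed LP: max 2x1 + x2, s.t. x1 <= 2, x2 <= 3, x1 + x2 <= 4 (slacks s1..s3);
# at the optimum x = (2, 2) the basic variables are {x1, x2, s2}.

def solve_linear(M, rhs):
    """Solve M z = rhs by Gauss-Jordan elimination with naive pivoting."""
    n = len(M)
    M = [row[:] + [rhs[i]] for i, row in enumerate(M)]   # augmented matrix
    for col in range(n):
        piv = next(r for r in range(col, n) if abs(M[r][col]) > 1e-12)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]
    return [row[-1] for row in M]

# Basis matrix B: the columns of (x1, x2, s2) in the standard-form constraints.
B = [[1, 0, 0],
     [0, 1, 1],
     [1, 1, 0]]
c_B = [2, 1, 0]     # objective coefficients of the basic variables

# y B = c_B  is the same linear system as  B^T y = c_B.
B_T = [[B[i][j] for i in range(3)] for j in range(3)]
y = solve_linear(B_T, c_B)
print(y)            # the dual optimum for this data: [1.0, 0.0, 1.0]
```

The recovered multipliers match the dual optimum y* = (1, 0, 1) for this data, exactly as the tableau argument predicts.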

Now, why is this thing useful? We have this beautiful theory; we know where it comes from; we know what it means from an economic standpoint, for instance. But what is it good for? So, let us look at this example. When we run the primal simplex, we have a primal that is feasible.

11:41

The dual is not feasible until you get to optimality. You start with the primal, you start minimizing, and the primal stays feasible as you move from point to point; during all these steps, the dual is infeasible. At some point, you get to the optimal solution, and the dual becomes optimal and, at the same time, feasible. When you do the dual simplex, it is the opposite: the dual is always feasible, and the primal is not until you reach optimality.

So in a sense, what I am telling you is that we can solve the primal, and at optimality we have a solution to both the primal and the dual; or we can solve the dual, and at optimality we also have a solution to both. So, which do you do? Well, there is one case where it is actually very useful to use the dual. Assume that you optimize the primal: ta-ta-ta-boom, you have the optimal solution. And then somebody comes and says: ooh, I forgot to tell you, I have this one extra constraint. If the constraint is already satisfied, it is not interesting. But assume the constraint is not satisfied; what is going to happen? Well, the dual is still going to be feasible. Why? Because the new primal constraint becomes a new variable in the dual, and if you give that variable the value zero, of course the dual solution is still feasible: you keep the same solution otherwise. The primal is not feasible; its new constraint is violated. So what you can do is say: ooh, but my dual is feasible, so I can optimize the dual. I get to optimality, and at that point I know that the primal is going to be feasible as well. So in a sense, when I add a constraint to an optimized primal and that constraint is violated, I do not have a basic feasible solution to restart from in the primal, but I have one in the dual. I can optimize the dual, get back to optimality, and there I know that the primal is feasible and optimal again. Very nice. You will see why this is very nice in the next couple of lectures, when we do mixed integer programming.
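The key fact here, that the old dual solution stays feasible after a constraint is added, is easy to verify. A sketch on a small assumed LP with a hand-checked optimal pair; the forgotten constraint is made up too.

```python
# Adding a constraint after optimality: the old primal optimum can become
# infeasible, while the old dual solution, extended with a zero for the
# new dual variable, stays dual-feasible, so the dual simplex can restart.
# Assumed LP: max 2x1 + x2, s.t. x1 <= 2, x2 <= 3, x1 + x2 <= 4, x >= 0,
# with hand-checked optima x* = (2, 2) and y* = (1, 0, 1).

A = [[1, 0], [0, 1], [1, 1]]
b = [2, 3, 4]
c = [2, 1]
x_star = [2, 2]
y_star = [1, 0, 1]

new_row, new_rhs = [1, 2], 5       # forgotten constraint: x1 + 2x2 <= 5

# The old primal optimum violates it: 2 + 2*2 = 6 > 5.
assert sum(a * x for a, x in zip(new_row, x_star)) > new_rhs

A.append(new_row)
b.append(new_rhs)
y_ext = y_star + [0]               # give the new dual variable the value 0

# Dual feasibility  A^T y >= c  still holds with the extended y.
for j in range(len(c)):
    assert sum(y_ext[i] * A[i][j] for i in range(len(A))) >= c[j]

print("old dual solution still feasible:", y_ext)
```

So the dual side has a feasible starting point ready, which is exactly why the dual simplex is the natural tool after adding a cut.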

Now, the other thing we can do is run the primal and the dual on exactly the same tableau. Remember, we have this primal over there. And what is the dual? The dual takes the objective function of the primal and makes it a right-hand side. Where is my right-hand side? Here. And we do the same thing the other way: the right-hand side of the primal becomes the objective function of the dual. And then you have this big matrix, and the only thing you do is transpose it. That is what you have here: these two things are the same. So when you have the primal, assume I do not have this explicit representation of the dual; I see only the primal. If I want to see the dual in the primal, I do this, and now I see the dual. So, in a sense, I do not need this second tableau, because it is already there; it is just a different way of looking at the same thing. So, now look at this.

If I run the primal algorithm, the primal simplex, what do I do? I select an entering variable and then I select a leaving variable. If I look at the other side, what does that mean? Ooh, in the dual it means I select a leaving variable and then an entering variable. It is just another way of doing it, and it is fine, because it performs the pivoting operation in exactly the same fashion. And when I do the dual simplex, selecting an entering variable and then a leaving variable in the dual, what does that correspond to? Well, the leaving variable on this side is the entering variable on the primal side, and vice versa. These things are the same: if I pivot here, I have the same pivot there, and if I pivot there, whoops, I have the same pivot here. So when I add a new constraint and it is violated, and I look at the dual, what do I do in the dual? I select a variable to enter the basis and another variable to leave the basis. Which basically means that here, on the primal side, I am also selecting a variable, but this time a primal variable, to enter the basis, and another variable to leave the basis. So when my constraint is violated in the primal and I look at the dual, the dual is basically fine: I am just doing a pivot operation there, and the same thing happens here. But of course, I never have to use the second tableau. I can always use one of the tableaus and do everything there; I just have to turn my head, and then it is fine. And of course, you could do some steps in the primal and some steps in the dual, and you can do all kinds of beautiful things, using the same tableau.

So, this is the relationship between the primal and the dual. These are things you can exploit, and we will exploit them; you will understand more of this when we do mixed integer programming. But the fact that we have these two views is really very useful. So next time, we are going to move to mixed integer programming, which is going to be the third big approach to combinatorial optimization, and it will obviously rely on linear programming. Thank you very much.