0:02

Cao Cao led his army towards the south in anticipation

of a war with the Sun-Liu alliance.

Cao Cao had a much larger and stronger army than the alliance, but his soldiers, being residents of the north, were not familiar with sailing on rivers.

The soldiers became very nauseated during the voyage to Eastern Wu.

Zhuge Liang and Zhou Yu planned to send Pang Tong, a famous scholar at the time, to infiltrate Cao Cao's camp.

They hoped Pang Tong would trick Cao Cao into believing that binding his ships together with iron chains would stabilize them and prevent his army from becoming sick.

In reality, this would give the alliance an opportunity to launch a widespread and destructive fire attack on the ships.

Pang Tong went to Cao Cao's camp to plant this idea, and

knowing him to be a notable scholar, Cao Cao trusted his advice.

0:55

He asked him to start by packing the square-shaped house ships together.

Pang Tong came up with an arrangement, but Cao Cao thought it wasn't stable enough, as there were too many gaps and the span of the packing was too large.

He asked Pang Tong to come up with a tighter and more stable arrangement.

Pang Tong was unable to do so and

traveled back to the allies to ask Zhuge Liang for help using the tablet.

>> So, Cao Cao has a very large army, and with that army come auxiliaries.

So, these are officials, families of the soldiers, other kinds of camp followers.

And in order to make them mobile, he's housing them on these ships,

these square housing ships.

1:37

But Pang Tong says, well, look, a lot of the people in your camp, the followers, are from the north, and they find these ships very unsettling, because they move around in the river, and it gives them nausea.

It means that they're not going to be performing to the best of their ability.

So, what you should be doing is tying them all together to make one big sort

of house raft, with lots and lots of these house ships joined together.

And that will make it much more stable, and

then these people will be much more comfortable on the ships.

And so, he suggests, here's a pattern,

where we can join all those ships together to make one big raft.

2:15

And, so we can see that these are all squares of different sizes.

So, Cao Cao looks at this thing and says, look, hang on, there are these gaps in here.

I mean, surely, we can make a better square.

Obviously, the tighter the square is, the more stable the whole raft would be,

because there'll be more ships next to each other.

So, he says, well, what I really want you to do is, yes, we should tie the ships up, but we should try to minimize the size of this rectangle to make it as stable as possible.

2:43

And so that gives us the multiple square packing problem.

So, we're going to be given a number of squares of sizes 1x1 up to nxn, with various numbers of each kind of square.

And then we're going to pack the squares into a rectangle of

the smallest possible area in order to solve Cao Cao's problem.

So, here's our model for the problem.

It's not too difficult.

So, here n is the number of square sizes, the set of sizes just runs from 1 to n, and then for each square size, we have a number of copies.

And then we can do a bit of reasoning: the maximum length of rectangle we'd ever need is what we get if we take the lengths of all the squares and add them together. We'd certainly have a solution with that length, just by lining up all the squares one after the other.

And we're also going to compute the minimum area: if I sum up the areas of all the squares, then the area of the rectangle that contains them obviously has to be at least that big.

3:39

So, that's our minimum area, mina. The principal decisions are these two: the height and the width of this rectangle. And we can note that the height has to be at least n, otherwise we couldn't fit a square of size n in it. And it doesn't have to be any longer than maxl. And similarly, the width is between those bounds.

It'll also be worthwhile to introduce the area, which is what we're actually trying to minimize: height times width. We can also give it fairly tight bounds. We know that the area has to be at least this minimum area.

And of course, it needs to be no bigger than n times maxl,

which would be a rectangle big enough to just fit all the squares in a row.

4:12

All right, and we can also calculate the total number of squares,

which is just adding up this number of copies of each type.

And that'll give us this array for all the squares that we actually have to pack.

And basically our main decisions are going to be then for

each of the squares, what's the x position and what's the y position?

And notice, as I've said many times, when we introduce these decision variables, we're trying to add tight bounds on the variables, to reduce the amount of search we have to do to find a solution.
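Putting the parameters, derived bounds, and decision variables together, the declarations might look roughly like this in MiniZinc (names such as `copies`, `maxl`, `mina`, and `nsq` are my reconstruction of the model on the slide):

```minizinc
int: n;                                   % number of square sizes
set of int: SIZE = 1..n;
array[SIZE] of int: copies;               % copies of each square size

int: maxl = sum(i in SIZE)(i * copies[i]);      % all squares in a row
int: mina = sum(i in SIZE)(i * i * copies[i]);  % total area of the squares

var n..maxl: height;                      % must fit an n x n square
var n..maxl: width;
var mina..n*maxl: area = height * width;  % objective, with tight bounds

int: nsq = sum(i in SIZE)(copies[i]);     % total number of squares to pack
set of int: SQUARES = 1..nsq;
array[SQUARES] of var 0..maxl-1: x;       % bottom-left corner of each square
array[SQUARES] of var 0..maxl-1: y;
```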

4:44

All right, so here's an example of the answer to a square packing problem.

So, here we're packing 1x1, 2x2, 3x3, and 4x4 squares together. And it uses a height of 5 and a width of 7, so we have a total area of 35. And it's hard to see how to do better for this particular packing problem.

5:04

So, it's going to be useful to have some auxiliary variables to make our model.

So, we're going to have an array which for

each square laid out gives us the size of that square.

So, we're trying to build this array here, right.

So, we have, say, three of size 1, two of size 2, five of size 3, four of size 4, and three of size 5.

So, this is just some calculation to do that.

So, we're actually going to use variables here.

And we're just going to use a global cardinality constraint to enforce the counts.

So, the global cardinality is going to make sure that we have the right number of

copies of each square in this array.

And then we're just going to order all the values in this size array, so

it's increasing.

And that'll force that there's only one solution, which is exactly this one.
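A sketch of that construction, assuming the declarations above and the `global_cardinality` and `increasing` constraints from MiniZinc's standard globals library:

```minizinc
include "globals.mzn";

% one entry per square to pack, giving its size
array[SQUARES] of var SIZE: size;

% the right number of copies of each square size appears in size
constraint global_cardinality(size, [i | i in SIZE], copies);

% order the entries, so there is exactly one solution for size
constraint increasing(size);
```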

5:53

All right, now we have the size of each square, right? So, we were trying to get, for each one of the squares, the size it has.

Now we can easily add the rest of the constraints.

So, we need that each square fits inside the rectangle.

So, if we take its right-hand side, it has to be less than or equal to the width.

And we take the position of its top, which is its y-coordinate plus its size,

that has to be less than or equal to the height.

And then we need to make sure that for each pair of squares s1 and s2,

they don't overlap.

6:31

One of these disjuncts says that s1 is to the left of s2, one says it's to the right, this one says that s1 is below s2, and this one says that s1 is above s2.

And so one of those four things has to happen,

that's enough to make sure those squares don't overlap.

And then we're going to minimize the area.

So, that gives us our model.
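Assuming the declarations above, those constraints and the objective might be written as:

```minizinc
% every square fits inside the rectangle
constraint forall(s in SQUARES)
                 (x[s] + size[s] <= width /\ y[s] + size[s] <= height);

% no two squares overlap: s1 is left of, right of, below, or above s2
constraint forall(s1, s2 in SQUARES where s1 < s2)
                 (x[s1] + size[s1] <= x[s2] \/ x[s2] + size[s2] <= x[s1]
               \/ y[s1] + size[s1] <= y[s2] \/ y[s2] + size[s2] <= y[s1]);

% minimize the area of the enclosing rectangle
solve minimize area;
```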

6:55

So, that looks like this, right?

So, you can see, we've actually packed very, very neatly,

every square into this thing of size 35.

But let's look at the real instance, that Cao Cao needs to have solved.

So, here we have seven different sizes of square, and for some of them there are no copies; there are no 3x3 ships that we need to tie up.

We can see, we get a solution quickly and if we keep going, we get better solutions.

And after seven minutes, we find a solution of 504, but even after one more hour, we don't improve it. Now, possibly this is the optimal solution, but we don't know. This is very common for hard combinatorial optimization problems: we can find a good solution but may not be able to prove it optimal.

7:40

So, how can we improve our model because we would like to get the best

possible solution?

So, we're going to add global constraints, obviously,

we can add redundant constraints and we can use symmetry breaking.

All of these things will help reduce the amount of search

it takes to answer this problem.

So, the first thing is a global constraint, and as you should be used to by now, if there's some combinatorial substructure which is common, then there'll be a global constraint which captures it.

And two-dimensional non-overlap is a very common thing in any kind of 2D packing problem, so there is a global constraint to do that; it's called diffn. It should probably be called diff2, because here the n is 2, for the 2 dimensions.

So, what does it take?

It takes the x coordinates of the rectangles, the y coordinates of the rectangles, the size in the x direction of each of those rectangles, and the size in the y direction, and makes sure that none of those rectangles overlap.

And so if we go back to our model, we can remove this non-overlap constraint and

replace it with just a single global constraint, which is this one here, right?

So, the x-coordinates are x, the y-coordinates are y, the sizes in the x-direction are size, and the sizes in the y-direction are also size.

So, that global constraint will do all of that non-overlap for us and

much more efficiently.
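Since our rectangles are squares, the same `size` array serves as both the x-extent and the y-extent, so the replacement is a single line (assuming the declarations above):

```minizinc
include "globals.mzn";

% no two squares overlap, in one global constraint
constraint diffn(x, y, size, size);
```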

8:57

Now, another thing we can do,

is we can note that cumulative is very close to a packing constraint.

So, if there is a packing, then a cumulative constraint, which just adds up

the amount of each size that we use in one dimension, will also hold.

Which means we can add redundant cumulative constraints to our packing

problem.

And that will improve the propagation and hence the solving.

Because the cumulative constraint may be able to do things before the diffn constraint can kick in.

9:24

And so we can add a cumulative constraint.

So, basically saying that the amount of height we use at any position in our packing can only be up to the height that we're choosing.

So, how much height are we using?

We can think of these boxes as tasks.

They start at the x coordinate, because the x coordinate is going along this way; they have a duration, which is their size, the size of the square; and they have a resource usage, which is also the size.

So, that cumulative constraint must be true in any correct packing.

And we can similarly reason about the width: the total width used at any height can only be up to the width. We look at the y-coordinate; the duration is now the size, going upwards, and the resource usage is the size going across. And that total usage at any point has to be less than or equal to this width that we're trying to minimize.
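The two redundant cumulative constraints might look like this, viewing each square as a task in each dimension (assuming the declarations above):

```minizinc
include "globals.mzn";

% project the packing onto the x-axis:
% tasks start at x, run for size, and use size units of height
constraint cumulative(x, size, size, height);

% project the packing onto the y-axis:
% tasks start at y, run for size, and use size units of width
constraint cumulative(y, size, size, width);
```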

All right, so in general, we have to be careful.

Cumulative constraints do not enforce packing.

So, here's an example, even when the x positions are fixed, which shows this.

So, if we have all of these rectangles here and

we're trying to pack them in a box.

Now if we look at it, it looks like it fits.

If we think about the cumulative constraint, we'll look here and say,

we're not using too much height at this point because if we add them up,

we get something less than this bound.

But if we try to actually pack them, right?

If we try to push that 6 down, then it would push this 3 at the bottom out of the box. So, even though this arrangement satisfies the cumulative, because we're not using too much resource anywhere, it's not a valid packing. That's because the cumulative really sees the problem like this, right? And you can see that this is a cumulative solution, but it's not a packing solution, because there's no way of keeping rectangle 4 in its rectangular shape.

11:03

All right, but we have some very important things that we can take advantage of here,

which is symmetries.

So, when we have the squares of the same size, they're completely interchangeable.

So, if I have a square of size 2x2 here and another one here and

I swap them, then I'm going to get exactly the same solution.

And so we need to make sure that only one of these symmetric solutions is going to be explored. And the way to do that is to order the placement of those squares.

So we're going to order them on their x, y coordinates.

So basically, we want to say, that the first square of size 2x2 is

sort of below and to the left of any other square.

11:39

Well, indeed, it's a lexicographical ordering, or actually, we'll make it above and to the right. So, we're going to enforce a lexicographical ordering on the squares, so that (x1, y1), say the position of the first square of size 2x2, is lexicographically greater than the position of the second square of the same size.

And of course, this means that either x1 is greater than x2, or x1 equals x2 and y1 is greater than y2.

12:06

And of course, we have a global constraint to do that kind of thing.

So, lex_greater takes two arbitrary-length arrays and enforces that this array here is lexicographically greater than that one.

All right, there is its MiniZinc signature there,

and that will be a very useful global constraint for many things, but

typically, most commonly used in symmetry breaking, like the example we have here.

12:29

So, now we're going to have to order our squares, and to do this, we're going to have to do a little bit of work, because we need to find all the squares of the same size.

So, first we are going to build this base array which just says,

it's the position just before the first square.

So, base of i is the position just before the first square of size i in our layout of all nsq squares.

So, I can do that by just summing up the number of copies of all the squares

of size smaller than this one.

12:59

And once I've got that base array, then I can build up my lexicographic constraints.

These make sure that squares of the same size are lexicographically ordered in decreasing order.

So, here's what we're doing, we're iterating over all possible square sizes.

And then we're looking at all the copies of that square size except the last one, because we're going to compare each one with the one after it.

And then we're just basically adding this lex constraint saying that each square of this size has an (x, y) coordinate that is lex greater than that of the next square of this size. And you see, we're using base of i plus j here to give us the right position in the x array for the squares of this size i.
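The base array and the symmetry-breaking constraints together might look like this (assuming the declarations above; `lex_greater` is in MiniZinc's standard globals library):

```minizinc
include "globals.mzn";

% base[i]: position just before the first square of size i,
% i.e. the sum of the copies of all smaller sizes
array[SIZE] of int: base = [ sum(j in 1..i-1)(copies[j]) | i in SIZE ];

% consecutive squares of the same size are lexicographically ordered
constraint forall(i in SIZE, j in 1..copies[i]-1)
                 (lex_greater([x[base[i]+j],   y[base[i]+j]],
                              [x[base[i]+j+1], y[base[i]+j+1]]));
```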

13:45

All right, now we solve the model, and indeed, we find a better solution than we found after one hour before. And in fact, we've proved it optimal in just 32 seconds. So, obviously, adding all of this extra stuff to our model made a very significant difference to how fast it could go.

14:17

So, we can make another improvement.

We used the constraint solver here to build this size array. Now, really, that size array is fixed, right? Once we know the number of copies of every ship, we could have built this array without using constraints at all.

So, that's likely to be much more efficient, not because this takes very long to solve, but because if we just build it directly, then MiniZinc, when it's compiling the model, knows these fixed sizes and can make some more optimizations, which may not be possible with the version where only the solver works out the sizes.

14:56

So, how do we build this array?

Well, it's a little bit complicated.

We're going to iterate over all the i's in 1..nsq; those are the positions we're trying to give sizes to. And we're looking for all the j's where i is greater than base of j, so all the square sizes j whose base lies below i. That's basically going to be 1, 2, 3, up to the largest size whose base is still below i. And then we take the maximum of those, and that's actually going to give us, for each position i, which square size should be in that position. It takes a little bit of thought, but it works.
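As a fixed parameter array, that calculation might be written as follows (assuming the base array from the symmetry-breaking step):

```minizinc
% size[i] is the largest square size whose base lies strictly below i
array[SQUARES] of int: size =
    [ max([ j | j in SIZE where i > base[j] ]) | i in SQUARES ];
```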

15:31

All right, so what we've seen here is a packing problem, a 2D packing problem.

It's a very common use of CP in the real world to solve this kind of

packing problems.

And there are lots of varieties; they're very complex discrete optimization problems, and they also typically have things like symmetries.

So, we've seen the use of diffn to encode 2D non-overlap.

And you can think of this as a direct extension of the disjunctive constraint, which encodes 1D non-overlap, if you think about it that way.

And we've also seen that we can use cumulative constraints for

helping our packing problems.

So, they're redundant for packing problems, but they're very useful for

improving our solving.