Of course, finally, some constraints will be treated as penalty functions.

And ideally, to do this,

we want to unflatten the constraints as much as possible.

Basically, we want to build an expression tree which evaluates the penalty.

So if you think about what happens for this constraint here, which was not

an easy one to build a neighborhood for and treat as implicit,

then it actually gets flattened into two reified linear inequalities and an array_bool_or.

But really, only the last constraint is soft,

and so it is only the disjunction that we have to keep true.

So we can actually just keep these as left-hand-side definitions:

this one defines the variable I3 and this one the variable I6.

And we can soften the last constraint into this penalty, which is basically saying,

well, at least one of them should be true.

So the penalty is min(1 - I3, 1 - I6).

So you can see, if both of them take the value 0, then we'll get a penalty of 1.

Otherwise, we'll get a penalty of zero.

In other words, the constraint is satisfied.
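As a concrete sketch of this satisfaction-only penalty, here it is in Python (rather than MiniZinc), with I3 and I6 as 0/1 values; the function name is just for illustration:

```python
def or_penalty(i3: int, i6: int) -> int:
    """0/1 penalty for the softened disjunction: 1 only when both
    reified Booleans are false, 0 as soon as either one holds."""
    return min(1 - i3, 1 - i6)

print(or_penalty(0, 0))  # both disjuncts false -> 1
print(or_penalty(1, 0))  # one disjunct holds  -> 0
```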

So that's not very good.

Because really,

we're only keeping track of whether this constraint is satisfied or not.

And remember, we've talked about penalties.

We said, when we're doing a penalty, what we want to do is make sure that

we're sort of describing the amount of violation of the constraint.

So a better way of mapping this to soft constraints is to build penalty functions

for each of the subconstraints and then join them up.

So here, we can build a penalty constraint for this x+-y+6, which is this max(0,

y=x+6).

So obviously, if the constraint is satisfied, this expression will be zero or negative,

and so we just get zero. If the constraint is unsatisfied,

then this will take a positive value, and that will be the penalty.

And you can see, the more unsatisfied it is, the worse the penalty is.
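A small Python sketch of such a graded penalty, assuming the subconstraint is x + 6 <= y (an assumption reconstructed from the max(0, ...) expression; the actual constraint is on the slide):

```python
def penalty_le(x: int, y: int) -> int:
    """Violation of the assumed constraint x + 6 <= y: zero when it is
    satisfied, growing linearly with how far it is from holding."""
    return max(0, x + 6 - y)

print(penalty_le(0, 10))  # satisfied      -> 0
print(penalty_le(5, 3))   # violated by 8  -> 8
```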

We can do the same for this other constraint here, and then we can just take

the minimum of the two penalties, because this is an or constraint.

So we really want to say, okay,

if it's almost satisfied, we take the disjunct closest to being satisfied.

That's how much penalty we should pay. And notice that's much better,

because the penalty now records the actual amount of violation of the constraint.
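Joining the two subpenalties with a minimum, assuming the original disjunction is x + 6 <= y \/ y + 6 <= x (again an assumption reconstructed from the slide):

```python
def disjunction_penalty(x: int, y: int) -> int:
    # Penalty of each disjunct, then the minimum: we only pay for
    # the disjunct that is closest to being satisfied.
    return min(max(0, x + 6 - y), max(0, y + 6 - x))

print(disjunction_penalty(0, 10))  # one disjunct holds        -> 0
print(disjunction_penalty(4, 5))   # best disjunct violated by 5 -> 5
```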

And this is what native constraint-based local search solvers do, but

it's not currently done by the MiniZinc local search solvers.

So it actually weakens them quite substantially.

For the global violation, then, we really want these penalties times multipliers:

for each soft constraint we add a Lagrangian penalty term.

We've seen that the Lagrangian approach to penalization is very robust.

Because basically, if we keep finding that a constraint is hard to satisfy,

its multiplier grows, so it gets more and more important. We then take this global violation,

basically these penalties times the multipliers, summed to give the total,

add it to the objective, and that gives us the local search objective.
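A minimal Python sketch of this scheme; the additive multiplier update and step size are illustrative assumptions, not the behaviour of any particular solver:

```python
def search_objective(objective, penalties, multipliers):
    """Local search objective: the original objective plus the
    multiplier-weighted sum of constraint violations."""
    return objective + sum(l * p for l, p in zip(multipliers, penalties))

def update_multipliers(multipliers, penalties, step=1):
    """Constraints that stay violated get ever-larger multipliers,
    so satisfying them becomes more and more important."""
    return [l + step * p for l, p in zip(multipliers, penalties)]

print(search_objective(10, [0, 3], [1, 1]))  # 10 + 0*1 + 3*1 -> 13
print(update_multipliers([1, 1], [0, 3]))    # -> [1, 4]
```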

So local search basically splits the variables into three categories.

First, there are the left-hand sides of the one-way constraints.

Typically, most of the local variables are introduced variables,

variables introduced to evaluate an expression, and

they will be handled by this one-way constraint solving.

There'll be other variables which appear in an implicit constraint.

So often,

these are the actual original search variables from the original problem.

And we hope that we have an implicit constraint which gives us a neighborhood

and a way of changing the values while still maintaining that constraint, and

there'll be another set.

So basically, variables which appear in constraints which we

don't treat as implicit and we just have to work around them.

Basically, they become just search variables.

Now those are just given random values,

because we have no implicit constraint to give them an initial value.

The implicit vars are given a value from the implicit constraint,

which controls them.

And then we can just calculate the initial values for these expression vars,

those ones defined by the one-way constraint.

And then search is just picking an implicit constraint and doing a move for

that implicit constraint, or picking an independent variable

and just doing a move on that independent variable,

which is typically just giving it another value in its domain.

We do the appropriate move, and then we have to re-evaluate all these

one-way constraints to find out the new penalty and decide what to do.
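Putting the three variable classes together, here is a toy Python sketch of the initialisation and one search step; all names and the 50/50 move choice are assumptions for illustration:

```python
import random

def initialise(independent_domains, implicit_init, one_way_defs, state):
    # Independent vars: random initial values, since no implicit
    # constraint controls them.
    for v, dom in independent_domains.items():
        state[v] = random.choice(dom)
    # Implicit vars: an initial assignment satisfying the implicit
    # constraint (e.g. a permutation for an alldifferent).
    implicit_init(state)
    # Expression vars: defined by the one-way constraints.
    for v, f in one_way_defs.items():
        state[v] = f(state)

def step(independent_domains, implicit_moves, one_way_defs, state):
    # Either do a move for an implicit constraint, or reassign one
    # independent variable within its domain.
    if implicit_moves and random.random() < 0.5:
        random.choice(implicit_moves)(state)
    else:
        v, dom = random.choice(list(independent_domains.items()))
        state[v] = random.choice(dom)
    # Re-evaluate the one-way constraints to get the new penalty.
    for v, f in one_way_defs.items():
        state[v] = f(state)
```

An implicit constraint here would supply both the initialiser (e.g. building a permutation) and a set of moves (e.g. swaps) that preserve it.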