Now, I've done this in the context of a very simple scenario, where I broke up my model into the factors that existed in the original specification. So, for example, I had factors that were pairwise potentials, and I had a slave for each such factor. But that doesn't have to be the case.

In fact, the only constraint I need to satisfy is that each slave is defined as a subset of factors that admits a tractable solution, so that each one can be optimized efficiently when considered in isolation. So, for example, if we return to this network, one thing we could do is introduce two larger slaves instead of the four smaller slaves F, G, K, and H: one corresponding to F and G together, the F,G slave, and the other representing K and H together, the K,H slave. In this case, when we think about agreement, we still have the four slaves, one through four, that represent the individual variables. So now, we require that the F,G slave agrees with the one-slave on X1, the two-slave on X2, and the three-slave on X3, and so we have this expression, which has a lambda term for each of these three variables.

We have a similar expression for the K,H slave, which in this case has to agree with the one-slave on X1, the four-slave on X4, and the three-slave on X3. If we now think about the singleton slaves, take X1 for example: X1 has to agree with the F,G slave, so there is a term to encourage that, and also with the K,H slave, because it's present in both. On the other hand, the variable X2 occurs only in the first slave, and so it has only a single agreement term, with the F,G slave; and similarly for X3 and X4. So, how do we then construct the slaves?
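Before answering that, the agreement machinery just described can be made concrete in code. The sketch below is illustrative only: the random potentials, the step-size schedule, and the brute-force slave solvers are assumptions of this sketch, not the lecture's setup. It builds the F,G and K,H slaves plus four singleton slaves, runs projected-subgradient dual decomposition on the lambda (agreement) terms, and checks weak duality against the exact MAP value:

```python
import itertools
import random

random.seed(0)
vals = (0, 1)

# Hypothetical random potentials on four binary variables X1..X4 (indices
# 0..3): F(X1,X2) and G(X2,X3) are grouped into one large slave, K(X3,X4)
# and H(X4,X1) into another, plus one singleton slave per variable.
def rand_pot(arity):
    return {a: random.uniform(-1, 1)
            for a in itertools.product(vals, repeat=arity)}

F, G, K, H = (rand_pot(2) for _ in range(4))
node = [rand_pot(1) for _ in range(4)]

# slave name -> (scope, local score over an assignment to that scope)
slaves = {
    'FG': ((0, 1, 2), lambda x: F[x[0], x[1]] + G[x[1], x[2]]),
    'KH': ((2, 3, 0), lambda x: K[x[0], x[1]] + H[x[1], x[2]]),
}
for i in range(4):
    slaves['X%d' % (i + 1)] = ((i,), (lambda i: lambda x: node[i][x])(i))

# one dual variable lam[s][i][v] per slave s, variable i in its scope, value v
lam = {s: {i: {v: 0.0 for v in vals} for i in sc}
       for s, (sc, _) in slaves.items()}
owners = {i: [s for s, (sc, _) in slaves.items() if i in sc]
          for i in range(4)}

def solve_slave(s):
    """Maximize this slave's score plus its agreement (lambda) terms."""
    sc, score = slaves[s]
    return max(itertools.product(vals, repeat=len(sc)),
               key=lambda x: score(x) + sum(lam[s][i][x[k]]
                                            for k, i in enumerate(sc)))

dual = 0.0
for t in range(300):
    arg, dual = {}, 0.0
    for s, (sc, score) in slaves.items():
        x = solve_slave(s)
        arg[s] = dict(zip(sc, x))
        dual += score(x) + sum(lam[s][i][x[k]] for k, i in enumerate(sc))
    # projected subgradient step: push each slave's local argmax toward the
    # average over the slaves sharing that variable; the updates for each
    # variable sum to zero, so the lambdas stay a valid decomposition
    eta = 0.5 / (1 + t)
    for i in range(4):
        for v in vals:
            avg = sum(arg[s][i] == v for s in owners[i]) / len(owners[i])
            for s in owners[i]:
                lam[s][i][v] += eta * (avg - (arg[s][i] == v))

# weak duality: the decomposed bound is never below the true MAP value
def joint(xs):
    return (F[xs[0], xs[1]] + G[xs[1], xs[2]] + K[xs[2], xs[3]]
            + H[xs[3], xs[0]] + sum(node[i][(xs[i],)] for i in range(4)))

map_val = max(joint(xs) for xs in itertools.product(vals, repeat=4))
```

Because the dual variables for each variable sum to zero, the decomposed objective is an upper bound on the true MAP score at every iteration; driving the slaves toward agreement tightens that bound.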

In pairwise networks, what we typically do is divide the factors into a set of disjoint trees, and this is, in fact, what I showed on the previous slide: we divided the network into one tree that went through the first two edges and a second tree that went through the other two edges. The trees are disjoint, which means every edge factor is assigned to exactly one tree. And because they are trees, we know how to optimize them efficiently using techniques like variable elimination or clique trees. But we also discussed that there are other classes of factors that offer tractable inference.

So, for example, we talked about matchings, or models that are associative, or regular, or supermodular, whatever we call them. All of these are tractable classes, and we can take a whole bunch of factors that fit into one of these tractable categories and optimize them together in one fell swoop. We don't need to divide them into separate edges.
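As a minimal concrete instance of a tractable slave, here is max-product variable elimination on a two-edge chain; the potentials are made up for illustration, but the forward-pass-plus-backtracking pattern is exactly what makes tree slaves cheap to optimize:

```python
import itertools
import random

random.seed(1)
vals = (0, 1)

# Hypothetical chain-structured slave X1 - X2 - X3 with two random edge
# potentials; a tree slave like this is optimized exactly by max-product
# variable elimination: one forward pass, then backtracking.
edges = [{(a, b): random.uniform(-1, 1) for a in vals for b in vals}
         for _ in range(2)]

# forward pass: eliminate X1, then X2, recording back-pointers
msg = {a: 0.0 for a in vals}
back = []
for pot in edges:
    new_msg, ptr = {}, {}
    for b in vals:
        best_a = max(vals, key=lambda a: msg[a] + pot[a, b])
        new_msg[b] = msg[best_a] + pot[best_a, b]
        ptr[b] = best_a
    back.append(ptr)
    msg = new_msg

# backtrack from the best value of the last variable
x3 = max(vals, key=lambda b: msg[b])
x2 = back[1][x3]
x1 = back[0][x2]
best_val = msg[x3]

# brute force over all 8 joint assignments agrees with the elimination result
brute = max(edges[0][a, b] + edges[1][b, c]
            for a, b, c in itertools.product(vals, repeat=3))
```

The cost grows linearly in the number of edges rather than exponentially in the number of variables, which is the whole point of choosing tree-structured slaves.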

So let's look at an example of this. Remember, a while ago, we talked about the problem of 3D cell reconstruction, where we have a cell that we're imaging, and we're taking various two-dimensional slices through that cell, and we'd like to reconstruct its three-dimensional structure. So here, as we discussed, we have the slices, and we would like to put the beads that we see, these little gold beads, in correspondence across the different images, trying to figure out that this bead corresponds to that bead over there. And from that, we can subsequently obtain the 3D reconstruction.

Now, when we last talked about this, we described it as a matching problem, where the matching weight between a point here and a point there is derived from the similarity of both the location in the 2D slice and the appearance of the local neighborhood. But it turns out that neither of these is sufficiently distinctive to robustly determine that a point in one image matches a given point in the other image. And so, in order to do this with reasonable success, we need another set of potentials: pairwise potentials that, roughly speaking, preserve the distances between these marker positions.

These potentials tell us that if we want to correspond this point over here to this point, and this point to that point, then the distances between them should also be roughly preserved.
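One plausible form for such a potential, an assumption of this sketch rather than the exact function used in the system described, simply penalizes the discrepancy between the two inter-bead distances:

```python
import math

def dist(p, q):
    """Euclidean distance between two 2-D bead positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical distance-preservation potential: if bead pi in one slice maps
# to qj in the other, and bead pk maps to ql, then the two inter-bead
# distances should roughly agree; larger discrepancies score lower.
def pair_potential(pi, pk, qj, ql):
    return -abs(dist(pi, pk) - dist(qj, ql))
```

A perfectly preserved pair scores 0, for example `pair_potential((0, 0), (3, 4), (1, 1), (4, 5))`, since both inter-bead distances are 5.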

Now, this set of distance potentials is relatively sparse, and can therefore usually be handled with exact inference. And the matching potentials are not complicated on their own, because matching is a tractable problem. But if you put the two together, it turns out to be a difficult problem to solve.

In fact, you can show that a joint set of potentials like this, combining these two different categories, is NP-hard to optimize. But using dual decomposition, we can solve each of these pieces separately and then communicate information between them, so each of them forms a tractable slave.
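To make the matching slave concrete, here is a minimal sketch with made-up bead coordinates and a location-only weight (the appearance term is omitted); it recovers a known bead correspondence by maximizing the total matching weight over one-to-one assignments:

```python
import itertools
import math

# Hypothetical bead positions in slice A, and the same beads in slice B,
# permuted and slightly perturbed; by construction, bead j in slice B is
# bead perm[j] of slice A.
slice_a = [(0.0, 0.0), (8.0, 0.0), (0.0, 8.0), (8.0, 8.0)]
perm = [2, 0, 3, 1]
shift = [(0.1, -0.2), (-0.1, 0.3), (0.2, 0.1), (-0.3, -0.1)]
slice_b = [(slice_a[perm[j]][0] + shift[j][0],
            slice_a[perm[j]][1] + shift[j][1]) for j in range(4)]

def weight(i, j):
    """Location-based matching weight: closer beads match better."""
    (xa, ya), (xb, yb) = slice_a[i], slice_b[j]
    return -math.hypot(xa - xb, ya - yb)

# the matching slave: maximize total weight over one-to-one assignments;
# brute force over permutations here, since the instance is tiny
n = len(slice_a)
match = max(itertools.permutations(range(n)),
            key=lambda m: sum(weight(i, m[i]) for i in range(n)))
recovered = [perm[match[i]] for i in range(n)]  # maps each bead to itself
```

For realistic numbers of beads, one would of course not enumerate permutations; the Hungarian algorithm (for example, `scipy.optimize.linear_sum_assignment`) solves the same maximization in polynomial time, which is exactly what makes this slave tractable.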