So here in 8.2 we're going to continue our study of

don't cares in multilevel logic networks,

how they arise implicitly as a consequence of the structure of the network.

In the previous lecture we showed how some don't cares arise because of

the structure of the logic in front of a node in the Boolean network.

What we're now going to show is that it's also possible for

the nodes in back of your node,

the nodes in between you and the output, how

the structure of that logic can also create internal don't cares.

So let's go back to our example and see how that happens.

So let's continue our tour of how

you pull implicit don't cares out of a multilevel logic network.

What I had sort of shown you in the previous lecture was that I could pull a bunch of

don't cares out of the X,

b, and Y inputs to the f node and roughly

speaking I'm just going to draw sort of a line through

this logic network which is what we were showing previously.

So this is a Boolean network that still has X = ab as a node

Y = b + c as a node feeding an f = Xb + bY + XY node.

There's a new node here,

a node between f and the output called Z = fXd.

And what I sort of showed you in the previous lecture, which was

8.1, was that by looking at nodes in

front of the f node I can find some patterns of X and b and Y.

What I'm going to show you now in lecture

8.2 is I'm going to show you that if you look at the nodes behind node f,

the nodes between f and the output, there are some more don't cares.

So here's the new question I want to ask.

Suppose that f is not a primary output so f passes through some other stuff,

the Z node to become a primary output.

When does the value of the output of node f actually affect the primary output Z? Or, said

conversely, when does it not matter what f is, because the value of f does not affect Z?
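One standard way to make this question precise, running a little ahead of the lecture's own development, is the Boolean difference: Z is sensitive to f exactly when Z evaluated with f = 0 differs from Z evaluated with f = 1. For Z = fXd that works out to:

```latex
\frac{\partial Z}{\partial f}
  = Z\big|_{f=0} \oplus Z\big|_{f=1}
  = (0 \cdot X d) \oplus (1 \cdot X d)
  = X d
```

So Z notices f only when Xd = 1, and every pattern with Xd = 0 makes Z insensitive to f, which is exactly what the table walk-through confirms row by row.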

Let's go look at patterns of the f, X,

and d inputs at node Z and see if we can find

some of these values that make Z not care about

f. So what we're going to look at now is inputs to the Z node

and we're going to see if we can find

some interesting context, some interesting relationships.

So let's ask the question,

when is Z sensitive to the value of f?

So I've got the logic network shown again here.

X = ab, Y = b + c feeding into the f = Xb + bY

+ XY node whose output goes to the Z = fXd node to become the primary output Z.

And I want to ask the question when is Z sensitive to the value of f and

so I've got a table; it's also got three columns, like the last lecture.

The columns are f, X,

and d and the column to the right says does f affect Z?

Now note please that the rows of the table are not in

the standard sort of Boolean order.

What's actually happening is the first two rows are when X and d are

zeros and then the leftmost column is when f is zero and one.

And then the second pair of rows is when X and d

are zero one and again f moving from zero to one.

The third pair of rows,

X and d are one zero again f moving from zero to one.

The fourth pair of rows, X and d one one,

f moving from zero to one.

The reason I have f highlighted in a different color is I want to ask the question,

whenever X and d take on this set of values,

for the first set of rows zero zero,

does f affect Z if f changes from zero to a one?

Does anything happen?

So let's just go look.

What happens if X and d are zero zero?

Does it matter if f is zero or f is a one?

So there's a little arrow here that says Delta f,

f moving between zero and one, and the answer is no.

Look, the Z node is basically an AND gate; if X and d are zeros,

then Z is zero and f does not affect Z.

What if X and d are zero and one?

Again if f is a zero versus f is a one,

does it actually affect the Z node?

No it does not, it's an AND gate,

Z is still a zero.

What if X and d are one and zero?

Does it matter if f is a zero or one does it affect the Z node?

No. Hey it's an AND gate, Z is still zero.

Ah, what if X and d are one and one?

Now does it matter if f is a zero versus f is a one?

Yes. Now it matters.

Now the output actually depends on f. But in the other cases, when X and d are zero zero,

zero one, and one zero respectively, Z does not depend on f. It doesn't matter.
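The walk through the table can also be reproduced mechanically. Here is a minimal Python sketch, with names of my own choosing rather than the lecture's, that tabulates for each X, d pair whether flipping f changes Z = fXd:

```python
from itertools import product

def Z(f, x, d):
    # The Z node is just a 3-input AND: Z = f * X * d
    return f & x & d

for x, d in product([0, 1], repeat=2):
    # f affects Z exactly when Z differs between f = 0 and f = 1
    sensitive = Z(0, x, d) != Z(1, x, d)
    print(f"X={x} d={d}: does f affect Z? {'yes' if sensitive else 'no'}")
```

Only the X = 1, d = 1 case prints yes, matching the fourth pair of rows in the table.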

So question, can we use this information to find new patterns

of X and b and Y to help us simplify the f node?

Because remember, these rows are patterns of f, X, and d, not patterns of X and b and Y.

Like the things we did in the previous lecture,

we start by looking at the nodes that are

contextually next to the node we want to simplify and we ask

some questions and then we use some techniques that you don't know yet that are in

the next couple of lectures to sort of pull out the patterns

that are of value, the impossible patterns.

So looking at what we now know about f and X

and d, can we find some patterns of X, b, and Y with which to simplify f?

And the answer is yes.

So just to be clear: what patterns at the input of f

are don't cares because they make the Z output insensitive to changes in f? And again,

it has to be values of X and b and Y and what we discover when we

stare at the table of f and X and d,

the "does f affect Z?" table, what we see is that if X and d are zero zero,

if X and d are zero one,

if X and d are one and zero,

then Z is insensitive.

But I only get to look at patterns of X and b and Y,

I can't pay attention to the d. So the only thing that's of

consequence here is that I notice that whenever X is zero,

and I'm just going to go highlight that over here,

whenever X is a zero it is

the case that the network outputs Z is

insensitive to f. And so the pattern that I can get,

that I can pull out that's legitimate for me to use,

is that a pattern of X,

b, and Y where X is a zero and b and Y are don't care is in fact a new don't care.

That's a don't care because the output is no longer affected by my f value.
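Dropping d can be made mechanical too. One standard way, sketched below with hypothetical helper names, is to complement the condition under which f affects Z and then universally quantify d away, since f's input cone cannot see d: a pattern is usable only if it is a don't care for both d = 0 and d = 1.

```python
def f_affects_Z(x, d):
    # Boolean difference of Z = f*X*d with respect to f; reduces to x & d
    return (1 & x & d) ^ (0 & x & d)

def odc(x, d):
    # Observability don't care: 1 where Z ignores the value of f
    return 1 - f_affects_Z(x, d)

# Universally quantify d away: keep only X values that are
# don't cares for BOTH d = 0 and d = 1.
usable = [x for x in (0, 1) if odc(x, 0) and odc(x, 1)]
print(usable)  # only X = 0 survives
```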

So can we use this new don't care pattern X = 0 to simplify f?

Yes we can.

The pattern X, b, Y = 0, don't care, don't care at the input of f will make

the network's Z output insensitive to

changes in f. And so I've got the same network shown below,

X = ab, Y = b + c,

they both feed the f = Xb + bY + XY node whose output passes through Z = fXd.

And I've got the Karnaugh map drawn up again: Xb on

the horizontal axis for columns, Y on the vertical axis for two rows.

And I've got the entire right three columns

of the Karnaugh map are either don't cares or ones.

The top row is don't cares,

the bottom row is one, one, don't care across the right three columns.

And if I now put the X = 0 pattern in as a new kind of don't care,

what I get is that everything where X is zero becomes a

don't care. So the entire first column, zero zero, becomes a

don't care, and the entire second column, zero one, also

becomes a don't care, the one in the zero one one slot being replaced.

And so the amazing thing now is that

the entire Karnaugh map is either made out of don't cares or it has

a single one in the XbY one one one cell

and so the unavoidable conclusion is that the value of f,

if I were to actually sort of simplify this thing, is that f is just equal to one.

And if I, you know, just do it,

oh, there it is, you know,

f is one; that's what I'd circle in the Karnaugh map.

I'd circle every one of the eight squares in the Karnaugh map.

So that's pretty amazing you know.

As a consequence of having the X node and the Y node feeding the f node and

having the Z node consuming the f node

before it actually generates a real primary output,

f which started out as something kind of complicated is replaceable by the constant one.

Just because of all of these implicit

don't cares that happen because of the structure of the logic network.
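That f can really be replaced by the constant 1 is easy to double-check by brute force. The sketch below, my own check rather than something from the lecture, evaluates the whole network for all 16 input patterns and confirms Z never changes:

```python
from itertools import product

def network_Z(a, b, c, d, f_const=None):
    x = a & b                            # X = ab
    y = b | c                            # Y = b + c
    if f_const is None:
        f = (x & b) | (b & y) | (x & y)  # f = Xb + bY + XY
    else:
        f = f_const                      # replace f by a constant
    return f & x & d                     # Z = fXd

assert all(network_Z(a, b, c, d) == network_Z(a, b, c, d, f_const=1)
           for a, b, c, d in product([0, 1], repeat=4))
print("replacing f by 1 leaves Z unchanged on all 16 inputs")
```

The check works because whenever X = 1 we have a = b = 1, which forces f = 1 anyway; and whenever X = 0, the AND at Z masks f completely.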

And so if we, you know,

just go look at this,

look what happened to f. On the top I have the old network X = ab is a node,

Y = b + c is a node,

they feed the f = Xb + bY + XY node that

in turn goes through the Z = fXd node to become primary output Z.

What happened? Well down below I still got the X = ab node, you know,

I still got the Y = b + c node and

the b primary input, but they feed a node that just says gone.

I don't need the f node anymore.

You know I can really replace that with a one in the Z node.

It turns out that I still need the X node to create

X for Z but I can also put a big X through the Y node.

I don't need the Y node either.

This network has become dramatically simplified because I was able to

extract some don't cares from the structure of the rest of the Boolean logic network,

pass those don't cares to the two level optimizer,

something like espresso that's operating on

the two level form inside every node and simplify things.
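To make that hand-off concrete: a two-level minimizer such as espresso takes the don't-care set right alongside the ON-set. Written from memory rather than from the lecture, a PLA description of the f node after all the implicit don't cares are folded in might look roughly like this, with '-' in the output column marking the don't-care set:

```
# f over inputs X, b, Y: the single care point left is X b Y = 1 1 1;
# every other cell of the Karnaugh map has become a don't care.
.i 3
.o 1
.ilb X b Y
.ob f
.type fd
111 1
0-- -
1-0 -
10- -
.e
```

Given this, the minimizer can cover the lone 1 with the universal cube, which is the f = 1 result found by hand above.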

So this is a really big idea.

So here's a summary of where we are in this implicit don't care story.

Don't cares are implicit in the Boolean network model.

They arise from the graph structure of the multilevel Boolean network itself.

Implicit don't cares are powerful.

Gonna put an exclamation point by that.

They can greatly help simplify the two-level SOP structure of any node.

But implicit don't cares require computational work to go find.

For this example we just stared at the logic to find the don't care patterns.

It is clear that we cannot do that in general.

There's got to be an algorithm and that's what we're going to do next.

We need algorithms to do this automatically.

So let's go look at how we can use all the nice computational Boolean algebra that

we learned at the beginning part of the course to actually pull these don't cares out.