So, welcome to Lecture 2.

We're going to start talking about computational Boolean algebra.

Now, that's an interesting set of words.

I'm assuming that everybody who's decided to take this class knows something about Boolean algebra and can do things like manipulate the equations by hand, and maybe use Karnaugh maps to simplify things.

But that's a far cry from being able to write a computer program that deals with

something like a Boolean equation as a data structure manipulated by an operator.

So in this first set of lectures on computational Boolean algebra, Lecture

2.1, our first topic is how do we take complicated Boolean things apart into

simpler pieces. So we're going to introduce the

foundational tools for doing that. So here's Lecture 2.1.

This is where we're going to start talking about Boolean algebra from a computational

angle. Computational Boolean algebra basics.

So what do I mean by computational Boolean algebra?

Look, by way of background, I'm assuming that you've done some Boolean algebra.

You've done hand manipulations of these algebraic equations.

You've done Karnaugh maps to simplify little Boolean equations.

I've got one showing on the right. But look, this is just not sufficient for real designs, and here is a concrete way of illustrating that. Let's suppose that I actually told you that I have a function of 40 variables, and I'd like you to optimize it with a Karnaugh map. Do the math: it's got about a trillion squares. You could in fact fit that on an American-style football field, which is 100 by 50 yards, by the way. But every square in the Karnaugh map would be about 60 by 60 microns, so that's 60 millionths of a meter. That is just really not happening; there has just got to be a better way. So what we need is a computational approach, an algorithmic set of strategies for Boolean stuff. We've got to be able to think of Boolean objects as data structures and operators, and that's just a really new thing.

So what are we going to study? We're going to study three things. We're going to study decomposition strategies, because the way you do something that's really complicated is by taking it apart into simpler pieces, solving what you can on the simple pieces, and then putting them back together. So there's a set of advanced concepts and terms you need to be able to do this. We're also going to look at some computational strategies, ways to think about Boolean functions that let them be manipulated by programs; that's a big thing. And then we're going to look at a couple of small but really interesting applications, because when you have new tools, there are some new and interesting things to do.

And I find, and this may be a little bit surprising to you, that it's interesting to go back to calculus as an analogy here. One of the things you probably learned somewhere in high school is that you can take complex functions, for example something like e to the x, and express them in simpler pieces. Alright, so suppose that you have powers of x: 1, x, x squared, x cubed, and so on. You can glue those things together, and you discover that e to the x is 1 plus x plus x squared over 2 factorial plus x cubed over 3 factorial, and on you go. And that was pretty cool.

And in calculus you go a little further; we tell you there's a general formula, the Taylor series expansion, and I've got it written below. You know, it's kind of like the e to the x expansion, it actually is that, but it's the general case. And it involves first and second and third derivatives of the function, evaluated at an appropriate set of places. And if you take a little bit more math, you're going to find there's a whole bunch of other ways of taking complicated functions apart into simpler pieces. For example, in engineering, if it's a periodic function, you can use a Fourier series.

series. Here's a great big new question.

Is there anything like this for Boolean functions?

Can I take a Boolean function apart and put it back together in simpler pieces in

a way that lets me do something useful? And the answer is yes.

Yes, exclamation point. It's called the Shannon expansion, and it's very famous. It's so famous that it's even got its own Wikipedia page. And so this is one of the wonderful inventions of Claude Shannon, an incredibly famous individual. This is the person who invented information theory, much of the mathematical infrastructure for modern communication. This is the person who invented, or at least coined, the word "bit" for binary digit. A very famous guy. And the Shannon expansion is a very simple

kind of idea. So, we're going to start by defining something new. We're going to define a new function, which sounds a little strange: what if we take the existing function we've got, this f of x shown above, and set one of its variables to a constant value? Right, so we treat this variable xi as a constant, and we're just going to either set the value of xi to be a one or set the value of xi to be a zero. That's pretty easy to do, so let's just do a couple of examples. If I've got this little function F of x, y, z here and I set x equal to 1, let's just look: what do we get? Alright, so x is 1, so I'm going to get a y; then x is a 1, so I'm going to get a z bar; and then x is a 1, so the x bar turns into a zero. And I believe I get y plus z bar. Okay. So that's the function, and I could simplify that a little bit more. If I take y equals zero and do the same thing: okay, the first term goes away, I get a zero. In the second term I get an x z bar, and the third term just goes away. So I get x z bar. Alright?
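(A quick sketch in Python, not from the lecture: this cofactors the example by just fixing a variable. The function here is an assumption reconstructed from the terms read out above, f = xy + x·z' + x'·y·z.)

```python
from itertools import product

# Hypothetical reconstruction of the lecture's example:
#   f(x, y, z) = x*y + x*(NOT z) + (NOT x)*y*z
def f(x, y, z):
    return (x and y) or (x and not z) or ((not x) and y and z)

# A cofactor just fixes one variable to a constant, giving a new
# function of the remaining variables.
def f_x1(y, z):        # x set to 1: should simplify to y + z'
    return f(1, y, z)

def f_y0(x, z):        # y set to 0: should simplify to x*z'
    return f(x, 0, z)

# Exhaustively confirm the simplified forms on all remaining inputs.
for y, z in product([0, 1], repeat=2):
    assert bool(f_x1(y, z)) == bool(y or not z)
for x, z in product([0, 1], repeat=2):
    assert bool(f_y0(x, z)) == bool(x and not z)
```

Note that `f_x1` and `f_y0` take one fewer argument than `f`: the variable that was set to a constant is simply gone.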

So that's what I get. Now, here is something I'm going to say a whole bunch of times until you remember: it's a new function, right?

But it no longer depends on this variable. Okay?

Once you set xi to a constant, there are no more xis in this function.

Makes sense, right? They're constants, the variable went away.

So these things are spectacularly useful, so they have a name.

They're called the Shannon cofactors. Right?

So it's the Shannon cofactor with respect to a variable.

And there's a couple of ways you write this.

If you take the function and you set the value to be a 1, right, then the most standard way of writing this would be f with xi as a subscript. Right, and we refer to that as the positive cofactor. And if you set xi to be 0, we refer to that as the negative cofactor. And to be honest, I recognize it's a little bit sloppy, but everybody does it: sometimes we're just going to write this as f of xi equals 1 or f of xi equals 0, ignoring all the other variables, which is mathematically a little sloppy. But it's just easier to type. So that's what they are, that's what they're called, and that's how we write them. Why are these useful functions to get from f? Well, the big reason is that this is a

very famous theorem, it's called the Shannon expansion theorem.

If I give you any function f of n variables and I pick any of its input variables xi, that function can be represented in this very simple way. The simpler pieces that you decompose the function into are the positive cofactor and the negative cofactor. So you have xi, the variable, ANDed with the positive cofactor, plus xi bar ANDed with the negative cofactor. So take the positive form of the variable, AND it with the positive cofactor, and sum in the negative form of the variable ANDed with the negative cofactor.
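(A minimal Python sketch of the theorem, using a hypothetical example function; any f would do. It checks f = x·f(x=1) + x'·f(x=0) by brute force over all inputs.)

```python
from itertools import product

# Hypothetical example function of three variables.
def f(x, y, z):
    return (x and y) or ((not x) and z)

# Shannon expansion around x: f = x * f(x=1)  +  x' * f(x=0).
# Verify the identity on every input combination.
for x, y, z in product([0, 1], repeat=3):
    expansion = (x and f(1, y, z)) or ((not x) and f(0, y, z))
    assert bool(f(x, y, z)) == bool(expansion)
```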

And you can always take a function apart in just that very simple way, and it's amazingly easy to prove. A proof, or not exactly a proof, more just a case study: let's just let xi be a 1 here, okay? Well, then what happens?

So, you know, we've got the value of the function, right, which has a bunch of variables in it, but with xi equals 1, right? And we're claiming that that's supposed to be 1 times the positive cofactor, plus 1 bar, which is a 0, times the negative cofactor. All right: that second term just goes away, and this thing just turns into f of xi equals 1, which I said was kind of a sloppy way of writing the positive cofactor. And that's true, because that's just the definition of this thing; there's just nothing else going on there. And if I were to let xi equal 0, I'd get basically the same kind of derivation: I'd discover that f with xi set to 0 is equal to the negative cofactor, f of xi equals 0. Right?

So it's a very simple kind of an idea. It's a deceptively simple kind of an idea,

but it's an amazingly powerful idea. It's one that we're going to use all over

the place. Here is just another way of viewing what the Shannon expansion is and what it does. And I'll just remind you of what a mux is, right? This is a multiplexer. Right? So a multiplexer is just a little piece of hardware. Right? It's got two inputs.

We're going to call them 0 and 1. And another input SEL, for select.

Right, and we're going to put the variable x on there.

And suppose that a goes on the 0 input and b goes on the 1 input. What this means is that if x is a 0, you send a to the output, and if x is a 1, you send b to the output, right? So we can write that as a Boolean expression: if x is a 0, I should send out a AND x bar, because that's going to be a; but if x is a 1, I should send out b AND x, because that's going to be b, right?

So that's what a multiplexer does. And the Shannon expansion theorem is nothing more than saying: this up here is the negative cofactor, the function F with a zero replacing that input, and down here is the positive cofactor, F with a one replacing that input. And the thing to be very careful about is that all the other inputs are the same: the same inputs go into both copies of F. Right, this thing says, oh look, this is just x bar ANDed with f of x equals 0, plus x ANDed with f of x equals 1, and that's just the Shannon expansion theorem.

So I find that when you just look at this thing as a piece of hardware, it sometimes makes more sense for people.
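(The hardware view in a minimal Python sketch, with a hypothetical example function: feed the negative cofactor to the mux's 0 input, the positive cofactor to the 1 input, put x on the select line, and the output is f itself.)

```python
from itertools import product

# A 2-to-1 multiplexer: sel = 0 passes a, sel = 1 passes b.
def mux(a, b, sel):
    return ((not sel) and a) or (sel and b)

# Hypothetical example function to take apart.
def f(x, y, z):
    return (x and y) or ((not x) and z)

# Negative cofactor f(x=0) on the 0 input, positive cofactor f(x=1)
# on the 1 input, x on select: the mux output reproduces f exactly.
for x, y, z in product([0, 1], repeat=3):
    assert bool(mux(f(0, y, z), f(1, y, z), x)) == bool(f(x, y, z))
```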

Now obviously you can do this on more than one variable, and in fact we're going to want to do this on more than one variable, no surprise. So suppose I now have a function with, let's say, four variables x, y, z, w, and I've already done the Shannon expansion around x, alright? So this is x ANDed with F of x equals 1, plus x bar ANDed with F of x equals 0.

Each of those things in the yellow box is just a function. Okay? It's just another function; there's nothing particularly interesting happening there. We could expand it again, right? So take, for example, this f of x equals 1. Why don't we expand that around y?

Right? Well, then that's just y ANDed with the cofactor where x is 1 but now y is also 1, plus y bar ANDed with the cofactor where x is still 1 but y is now a 0. Right? And I could do the other one as well, the negative x cofactor, f of x equals 0. If I chose to expand that around y, I'd get y ANDed with the cofactor where x stays a zero but now y is a one, plus y bar ANDed with the cofactor where x stays a zero and now y is also a zero. And so this is kind of obvious: if you just take these two things in the red boxes and you substitute them back up into the yellow boxes on top, right, what you would actually get is the four-product version of the Shannon expansion, the Shannon expansion in two variables,

right? So what you would get, in this particular

case, is this. You would say F of x, y, z, w is equal to x y times F of x equals 1, y equals 1, plus x y bar times F of x equals 1, y equals 0. You can see the pattern here: the variable values match the polarities on the products. Plus x bar y times F of x equals 0, y equals 1, plus x bar y bar times F of x equals 0, y equals 0.

So, when you do the Shannon expansion around one variable, you chop the function up into two pieces. When you do the Shannon expansion around two variables, you chop it up into four pieces, and it's just what you think: if you do the Shannon expansion around three variables, you chop it up into eight pieces. You get all eight of the products, with all eight polarities of, for example, x and y and z, and all eight of the constant values across the cofactors; you OR them all together and you can represent the function.
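(The two-variable case in a minimal Python sketch, on a hypothetical four-variable function: four products, one per polarity of x and y, each ANDed with the matching cofactor.)

```python
from itertools import product

# Hypothetical example function of four variables.
def f(x, y, z, w):
    return (x and y) or (z and not w)

# Two-variable Shannon expansion around x and y: the polarities in
# each product match the constants plugged into the cofactor.
for x, y, z, w in product([0, 1], repeat=4):
    expansion = ((x and y and f(1, 1, z, w)) or
                 (x and (not y) and f(1, 0, z, w)) or
                 ((not x) and y and f(0, 1, z, w)) or
                 ((not x) and (not y) and f(0, 0, z, w)))
    assert bool(expansion) == bool(f(x, y, z, w))
```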

These things also have some notation; they're also called cofactors. It's not so easy to call them positive or negative cofactors anymore, but we just write them the same way, right? We write them, for example, as f with the subscript xi, xj bar, meaning xi is 1 and xj is zero. However you like to write it; or write it just as I did on the previous slide, a little sloppy but very easy to write: xi equals 1, xj equals 0. Sometimes it's just easier to type. And this is just the same thing that I wrote on the previous slide: x y ANDed with F sub x y, plus x bar y ANDed with F sub x bar y, plus dot dot dot for the other two terms.

That's the four product version, the four cofactor version, the two variable version

of the Shannon expansion. And again, I need you to remember this: each of the cofactors is a function, it's not a number. So it's not like calculus here. It's a function, and if I take a function like f sub x y, alright, where x is a 1 and y is a 1, it's a Boolean function of all the variables that are left. But there are no x's and no y's in this function, because they have been replaced by constants. So that's how cofactors work.

Now, let's go see what we can do with them.