In this lecture I'm going to talk about our first approach to modeling people, which is known as the rational actor model. Now, the rational actor model assumes that people are, well, rational: we make optimal choices. It comes under a lot of criticism, increasingly so, because people are dissatisfied with some of the results it produces. Nevertheless, I'm going to argue that this is a really useful way to think about people, especially when you're constructing models. So what I'm going to do in this lecture is describe how the rational actor model works, first in the context of decisions, and then in the context of games, which are strategic interactions where my choice depends on your choice. After I do that, I'll give an example where it sort of breaks down, and then talk about why I think the rational actor model is a really useful thing to have in your pocket, a useful way to analyze situations. Okay? So let's get started. So how does the rational actor model work?

Well, what you do is assume that people have some sort of objective. That objective could be any one of a variety of things, but there is some goal or purpose that somebody has, or a group has, or a firm has. Given that objective, we assume that you make optimal choices, that you optimize. Again, that's a strong assumption, but it's what the approach assumes: there is an objective, and people optimize with respect to it. So how does that work? Well, let's suppose it's a firm. If you're a firm, you might want to maximize profits. Or you might want to maximize market share. Or you might want to maximize total revenue. Those are all things that a firm might want to do. Whatever your objective is, what we assume in the rational actor model is that you do exactly that: you make the choice that maximizes that goal.

Now, if you're a person, you might care about maximizing your own utility, making yourself as happy as possible. Or, if you're altruistic, you might care not only about yourself but about other people as well. The presumption, though, is that whatever your objective is, you make optimal choices to satisfy that objective, to do as well as you can possibly do. If you're a political candidate, your objective might be getting as many votes as possible. So that's your goal, that's your objective function: get votes. And the rational actor model assumes that you take the action, make the choice, that gets you as many votes as possible. Okay. So where can you apply this? Where can you apply the

rational actor model? Well, let's take the simple case of a firm, and suppose that what it wants to do is maximize, let's say, revenue instead of profits. We can write revenue as price times quantity. And let's suppose that if the quantity is Q, the price will be 50 minus Q. Now, why would this make sense? Well, if Q is ten, then I've only produced ten of these things, there aren't many to go around, and maybe I can charge $40 apiece for them, so the price would equal 40. But if I produce more of them, say twenty, then there's more to go around, and that's going to cause the price to fall, down to 30. So in the first case I'd get a revenue of 10 times 40, which is 400. In the second case I'd get 20 times 30, which is 600. So the question is: what Q do I choose to maximize my total revenue? If you think about it, my total revenue is Q times (50 minus Q), and the optimal thing to do is to make those two factors equal, so Q equals 25. Then I get 25 times 25, which is 625. So my optimal choice is Q equals 25. What the rational actor model assumes is that the firm wants to maximize revenue, that's the objective function, and given that, it chooses the quantity that maximizes it: Q equals 25.
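The firm's problem is simple enough to check with a few lines of code. This is just a sketch of the arithmetic above, brute-forcing over integer quantities rather than doing any calculus:

```python
# Revenue for the firm above: the price is 50 minus Q, so revenue is Q * (50 - Q).
def revenue(q):
    return q * (50 - q)

# Brute-force search over integer quantities from 0 to 50.
best_q = max(range(51), key=revenue)

print(best_q)           # 25
print(revenue(best_q))  # 625
```

The same function reproduces the two cases from the lecture: revenue(10) is 400 and revenue(20) is 600, both below the 625 you get at Q equal to 25.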

So where can you apply this? You can apply it just about anywhere. Think about investment decisions: I've got some objective, which might be to maximize the value of my portfolio, or to give me some sort of nest egg to retire on, and I'm going to make the choices that maximize that. Or think of my purchases, or your purchases: when you go to the grocery store, or when you're thinking about buying furniture for your house, you could assume you've got some objective function and you make choices that are optimal given that objective. Even with something like education level, you'd ask: how many years of school should I get? Should I just get a bachelor's degree? Should I get a master's? Should I get a PhD? Well, you could assume you've got some objective function, which might involve how much you care about income, or what sort of life you want to lead. Is it a life of the mind, or is it physical labor? And you choose how much education to get, given your objective. You can even apply this model to how I vote, because my objective could be for certain policies to be implemented. So I look at the candidates and figure out which candidate is likely to implement the policies that I want. Now, I probably also want to figure out whether that candidate is likely to win: I don't want to vote for someone who espouses my preferences but has no chance of winning. So what I do is choose the candidate who is most likely to win and who also takes positions like the ones I prefer.
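One way to formalize that voting decision, and this scoring rule is my own illustration rather than anything from the lecture, is to score each candidate by their chance of winning times how well their positions match mine, then vote for the highest score. The candidates and numbers below are hypothetical:

```python
# Hypothetical candidates: (name, probability of winning, alignment with my policies, 0 to 1).
candidates = [
    ("A", 0.05, 0.95),  # espouses my preferences but has no real chance of winning
    ("B", 0.50, 0.60),  # decent alignment and a real chance of winning
    ("C", 0.45, 0.20),  # likely contender, but I dislike the positions
]

# Score a candidate by chance of winning times policy alignment.
def score(candidate):
    name, p_win, alignment = candidate
    return p_win * alignment

my_vote = max(candidates, key=score)[0]
print(my_vote)  # "B": 0.30 beats A's 0.0475 and C's 0.09
```

Candidate A, the one who matches my preferences but can't win, loses out exactly as described above.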

Here's an issue, though, that I want to bring up when we assume rationality. People often think rationality means selfishness. That's not true, so let me give you an example. Suppose I'm walking down the street with a friend and I find $100. So there's me, and there's my friend. One possibility is I can just say, you know what, this is so cool, I just found $100. I can put it in my pocket and give my friend nothing. That would be rational if my objective function were just me, if I cared only about myself. But it could be that this is a really good friend and I care a lot about my friend. So when I find the 100 bucks, I say, hey, wait, this is great, I just found 100 bucks. I walk into the nearest store, ask them to give me two 50s, and I give one of the 50s to my friend, because I care a lot about my friend. So there's nothing intrinsic to the rationality assumption that implies selfishness. Again, selfishness just means my objective function is me. This is how we put it in the framework.

All I care about is me: my happiness, my income, my wealth. Altruistic preferences would mean that my objective includes other people as well, so I care not only about my own happiness but about the happiness of others. And I can do all the same mathematics. So here's an example, just like the price and quantity example, involving an altruistic person. Suppose someone has an income of $40,000 and has to decide how much to consume and how much to donate. Their objective function is the square root of their consumption times the square root of their donation. So they want to figure out: how much do I donate, and how much do I consume, if this is my goal? Well, this is just a mathematical problem. Working in thousands of dollars, my donation is 40 minus whatever I consume, so the objective is the square root of C times the square root of (40 minus C). I can bring everything under one square root and get the square root of C times (40 minus C). Looking at it this way, I realize I want to make C times (40 minus C) as big as possible, and the way to do that is to choose C equal to twenty, so that the donation D also equals twenty. So the optimal thing to do here is to consume twenty and donate twenty, splitting my income evenly between consumption and donation. That's rational. It's also incredibly altruistic.

I could be irrationally altruistic and consume less or more than this, and I could also be irrational and selfish. But the point is that rationality in no way assumes selfishness. You can be rational and altruistic, and you can be irrational and altruistic. Now, I want to move on to something that's a little more complicated. I want to make a distinction between a decision and a game. The previous example was a decision: I had to decide how much to consume and how much to donate, and in a decision, my payoff, what I get, depends only on what I do. In a game, my payoff also depends on what other people do. This is where it gets tricky, because for me to decide what I'm going to do in a game, I need to think about what the other person is going to do, and so I need a model of the other person. Oftentimes a really good model is to assume the other person is rational, and that's a lot of how game theory works. A lot of game theory assumes the other person is rational, and that allows you to figure out what you're going to do. So here's an example. Suppose there are two people; call them person one and person two. What follows is what's called a normal form game, laid out as a payoff table. I'll explain this in a second.

So person one can decide whether to stay home or go into the city on a Saturday, as can person two. If person one stays home and person two stays home, person one gets a payoff of one. If person one stays home and person two goes to the city, person one still gets a payoff of one. So if person one stays home, their payoff is one either way. And if person one goes to the city, their payoff is two either way. One if they stay home, two if they go to the city. Person two is a more complicated person. If person two stays home, their payoff is also one. But if they go to the city, their payoff depends on what person one does. If person two goes to the city and person one stays home, person two gets a payoff of zero, because person two is lonely; it's no fun to go to the city alone, at least for person two. But if person two goes to the city and person one also goes to the city, person two gets a payoff of four. Look at person two's choice here. This is hard.
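To keep those payoffs straight, here's a sketch that encodes the game; the payoff numbers come from the lecture, while the table layout and the helper function are my own:

```python
# Payoffs indexed by (person one's action, person two's action);
# each entry is (person one's payoff, person two's payoff).
payoffs = {
    ("home", "home"): (1, 1),
    ("home", "city"): (1, 0),
    ("city", "home"): (2, 1),
    ("city", "city"): (2, 4),
}

actions = ["home", "city"]

# Person two's best response to each thing person one might do.
def best_response_for_two(one_action):
    return max(actions, key=lambda a2: payoffs[(one_action, a2)][1])

print(best_response_for_two("home"))  # "home": 1 beats 0
print(best_response_for_two("city"))  # "city": 4 beats 1
```

Person two's best move flips depending on what person one does, which is exactly what makes this a game rather than a decision.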

Person two is trying to decide: do I stay home or go to the city? Well, if person one is going to stay home, then I should stay home, because one is bigger than zero. But if person one goes to the city, then I should go to the city, because four is bigger than one; it would be really fun to go to the city with my friend. So for person two to figure out what to do, person two has to know what person one is going to do. Here's where an assumption of rationality is really useful. If person two says, "I have no idea what person one is going to do, I'm clueless," then person two can't figure out what to do. But if person two says, "I think person one is rational," then person two can reason: if person one is rational, they get a payoff of two from going to the city and a payoff of one from staying home, so I bet they're going to the city. So person two thinks person one is rational, concludes person one is going to the city, therefore goes to the city too, and gets that great payoff of four.

Okay, so when you think about decisions you have to make in the real world, you often need some model of what other people will do, and oftentimes a decent model is to assume the other person is rational, that they're going to do the rational thing. Let's do another example. That was an example of what we call a normal form game; this next one is an extensive form game. Extensive form games are sometimes called game trees, and you draw the actions sequentially. So here there's a green person and a blue person. The green person goes first and has to decide: do I go this way, in which case we both get payoffs of zero, or do we move down here, in which case the blue person gets to move? So the green person has to think: what do I expect the blue person to do? If the game gets passed down to the blue person, the blue person can move over here, in which case the blue person gets two and the green person gets two. Or the blue person can go straight down, in which case the blue person gets three and the green person gets minus three. So if the green person assumes the blue person is rational, the green person reasons: three is bigger than two, so the blue person is going to move down. Well then, the green person says, even though we could both get two, I'd actually end up with minus three, so I'm going to go the other way and take the zero. So again, by making a rationality assumption about the blue person, the green person can figure out what to do.
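That reasoning is backward induction: solve the last mover's choice first, then work backward. Here's a sketch using the lecture's payoffs; the move labels "out", "continue", "across", and "down" are my own naming, not from the lecture:

```python
# The blue person's two options if the green person continues,
# with each player's resulting payoff.
blue_options = {
    "across": {"green": 2, "blue": 2},
    "down":   {"green": -3, "blue": 3},
}

# Step 1: a rational blue person picks whatever maximizes blue's own payoff.
blue_move = max(blue_options, key=lambda m: blue_options[m]["blue"])

# Step 2: green compares taking the payoff of 0 up front against what
# green would actually get once blue plays that rational move.
green_if_continue = blue_options[blue_move]["green"]
green_move = "continue" if green_if_continue > 0 else "out"

print(blue_move)   # "down": 3 beats 2
print(green_move)  # "out": 0 beats -3
```

The pair (2, 2) never gets reached, even though both players would prefer it to what green actually takes, which is what makes this little tree interesting.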

Okay, so when would we expect to see rationality? It seems like a strong assumption. The first case is when the stakes are really large. If you're just buying lunch somewhere, maybe you follow some rule of thumb. Or if you're trying to decide exactly how many bagels to buy at the bagel store, maybe you just pick a dozen or so. But if you think about buying a house, or buying a car, or deciding where to go to college, or whether to go to college at all, those are large-stakes decisions, and in those situations it's probably the case that you come fairly close to being rational. Okay, when else? When the decision is repeated. There have been a lot of experiments on whether or not people are rational, and what we often find is that the first time somebody does something, remember the Monty Hall problem with the three doors we did, they often don't get it right. But the more they do it, the more they learn, and we get closer and closer to being optimal.

The third case is when you have groups of people making decisions. Now, groups can get led astray: you can get groupthink, terrible choices, escalation of biases, that sort of thing. But typically, if you bring in more people, you're less likely to make an irrational decision. That's why, when we're making large-stakes choices, we often go and ask friends, family, and other people we respect, so that we're not making those decisions alone; there's some sort of group of us making the decision. And the last case is when the choice is easy. One reason we often make rational choices is that many choices are easy to make. If somebody asks whether you'd rather have $20 or $10, you choose $20. If someone asks whether you'd rather do less work or more work, you'd typically say you prefer less work. So then: if rationality is often too complicated, and if we observe that people don't always act rationally, why make the assumption? Well, one of my

advisors, Roger Myerson, won a Nobel Prize for mechanism design, which is, in a way, a branch of game theory. And Roger makes the following compelling argument: rational behavior is an incredibly important benchmark, probably the most important benchmark, when you think about modeling people. Why? Well, first off, it's unique. Most of the time, not always, but most of the time, the rational choice is going to be unique.

Think about the case of the firm deciding how much quantity to produce to maximize revenue, or the person deciding how much to donate to charity. There's a unique answer, so the model gives you a definite, testable prediction: this is what rational behavior is. The second thing is that it's easy to solve for. Even though being rational is hard in practice, once we write down the mathematical equations, it's often very easy to use mathematics to find the optimal point, to find what, within the model, someone should do. Contrast this with irrational behavior. Suppose I write down some model and say people are irrational. Well, now I've got two problems. One is that it's not unique: there could be a thousand ways to be irrational, so I have no real prediction coming from the model. The second is that it may be really hard to figure out what exactly this person is going to do in this context, once I start taking into account all sorts of psychological and contextual influences. It's often just easier to say: here's their objective function, let's assume they optimize. Another reason rationality is an incredibly

important benchmark is that people learn. We talked about experiments showing that, over time, people get things right. Well, if over time behavior moves toward the rationality assumption, then the rationality assumption isn't a bad place to start, and you can say it's where we expect the system to go over time. And then, last, it can be the case that even if people make mistakes, if there's no bias one way or the other in those mistakes, the mistakes may well cancel out, and what you're left with is something that looks pretty close to rational behavior. Some people spend too much, some people spend too little, and on average you get something that looks close to rational.
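That cancellation argument is easy to illustrate. In this sketch, which is my own illustration rather than anything from the lecture, everyone aims at the optimal consumption of 20 from the donation example but makes an unbiased random error:

```python
import random

random.seed(0)  # make the sketch reproducible

OPTIMAL = 20  # the rational consumption level from the donation example

# Each person's choice is the optimum plus a zero-mean error:
# some spend too much, some too little, with no systematic lean.
choices = [OPTIMAL + random.uniform(-5, 5) for _ in range(100_000)]

average = sum(choices) / len(choices)
print(average)  # very close to 20: the individual mistakes cancel out
```

No individual in this population is exactly rational, but the population average is almost indistinguishable from the rational benchmark, which is the point being made above.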

Okay, so what have we seen? We've seen that the rational actor model works from the following set of assumptions: you assume there's some sort of objective function, and then you assume people optimize given that objective. It could be firms, it could be people, whatever you want it to be. Is that a strong assumption? Yes, but it's a really powerful benchmark. Now, one of the things we've found from doing a lot of experiments, psychologists, economists, all sorts of scientists, is that there are places where people systematically deviate from rationality. That's what we're going to look at next: some specific biases where the rationality assumption consistently seems not to hold. There will be cases where it does hold and cases where it consistently doesn't. Nevertheless, even when you think it isn't going to hold, it can still be useful to think through your model assuming rational behavior, to get that benchmark, to see what rational people would do. That way, when you're actually looking at the evidence, you can see exactly how far from rational the behavior really is. Okay, thank you.