Hi. In this lecture I want to continue our discussion of the prisoner's dilemma, and I want to focus on how we get cooperation in the prisoner's dilemma. I'm going to highlight seven ways that scholars have identified through which cooperation can emerge in a prisoner's dilemma, even though it's in no one's individual interest to cooperate. Now remember, the prisoner's dilemma looks as follows: we've got two players, player one and player two, and each one has two actions; they can cooperate or they can defect. It's in our collective interest to have us both cooperate: we both get a payoff of four. But individually, because six is bigger than four and two is bigger than zero, it's always in each player's interest to defect. And if we both defect, we both get payoffs of two, which is worse than if we both cooperate. So individual interests don't line up with collective interests: the individual incentives point us toward defecting, but collectively we'd like to cooperate. So how do we get that cooperation? To analyze that, I want to

move to a somewhat simpler model. I'm going to assume that each person just has an action they can take, and that action has some costs and some benefits. Specifically, if I cooperate, that has a cost to me of C, but it has a benefit of B to the person I'm playing with. I'm further going to assume that their benefit is larger than my cost. So socially, we'd like me to cooperate, because the other person's benefit is larger than my cost. Individually, I'd like to not cooperate, because my cost is positive. So this captures the essence of a prisoner's dilemma, right? Individually, I'd rather not cooperate. Socially, everyone would prefer that I do cooperate. So in this simpler setting, I want to talk

about the different ways in which we can get cooperation. I want to start by

talking about some work by Martin Nowak. Now Martin has this wonderful book called

Super Cooperators, where he goes in much more deta ils about these different

mechanisms. So the language I'm gonna use comes from his book. The first way in

which we can get cooperation is something like a prisoner's dilemma is through

repetition. Now Nowak refers to this as direct reciprocity. What does he mean by

that? What he means is, we're gonna play this game many times. So, if we're gonna

play this game many times, I can recognize maybe it's in my interest to cooperate

now, because if we meet next time, we'll cooperate in the future. So my colleague,

Bob Axle rod, has a very simple strategy that can, that can induce cooperation and

produce [inaudible] called tit for tat. We both start out cooperating, and as long as

the other person keeps cooperating, we cooperate. If that person ever defects,

then I defect. And this very simple strategy can keep us both cooperating

provided we meet often enough. And that's the essence of direct reciprocity. Let's

see why. Let p be the probability that we meet again, and normalize our payoff to zero if we deviate. Now, what's my payoff if I cooperate? Well, if I cooperate, it's going to cost me C. However, there's some probability p of meeting you later, and if we meet later and you cooperate with me, I'll get a payoff of B. So my payoff isn't just minus C; it's minus C plus p times B, the benefit if we meet again. If that ends up being positive, I should cooperate. We can rewrite that condition as follows: if the probability of meeting again is bigger than C over B, then cooperation should emerge in the prisoner's dilemma. Let me give an example from my life that

explains how this works. I used to live in Los Angeles, which is a huge city (Pasadena, actually). Then my wife and I moved to Iowa City. One of the first days I was in Iowa, I was in the grocery store, buying just a couple of items. The woman in front of me had a cart full of food, and she said to me, why don't you go ahead? Now I was shocked, because no one in L.A. ever let me jump ahead of them in line at the grocery store, no matter how much stuff they had in their carts. But the reason she did this is not that people in Iowa are intrinsically nicer than people in L.A.; it's that she knew she was likely to meet me again, because Iowa City is a small town. Let's see how that works. It's just direct reciprocity. So let's

suppose that the benefit to me of jumping ahead of her was ten, because she had that cart full of food, and that the cost to her was only two, because I only had a few items. So the ratio of cost to benefit is one over five. Now the question is: what's her likelihood of meeting me again? In a place like Los Angeles, even if we shop at the same grocery store, it could be maybe one in a thousand; it's not very big. But in a town like Iowa City, there might be a 50 percent chance she's going to see me again; it's not a very big town. So, given that she's likely to see me again, in Iowa City she's going to cooperate, and in L.A. she's not. In Iowa City there's a greater likelihood of direct reciprocity, and direct reciprocity leads to cooperation. What's another method? Reputation. Now

Nowak calls this indirect reciprocity. The way reputation works is as follows. Instead of us directly meeting again, maybe I get to know the woman who was in front of me in the grocery store, and I tell other people how nice she is. So she gets a reputation. Now, instead of the probability that we meet again, let q be the probability that her reputation gets out. The cost to her of letting me go ahead is still C, but her benefit is q, the probability that her reputation becomes known, times B: if she's known to be a nice person, other people will cooperate with her, because they know she's going to cooperate with them. So you create a sort of virtuous cycle of people cooperating with one another. And again we get the same sort of inequality: as long as the probability of the reputation becoming known is bigger than that ratio C over B, we're going to get cooperation. So notice this subtle

difference. In direct reciprocity, I'm cooperating because I'm going to meet you again, and I think I'm going to get a payoff from you. In indirect reciprocity, I'm hoping to get a good reputation. I'm hoping that person will spread far and wide how cooperative I am, and then when somebody else meets me, they'll say: oh, there's Scott, he's cooperative; I'll cooperate with him because he's such a nice person. So through indirect reciprocity we can induce cooperation. Here's a third way: network reciprocity. Let's suppose that

we've got a set of people arranged in a network, and we want to ask: will they cooperate with one another? Is it in their interest to cooperate? I'm going to consider a regular graph; remember from our discussion of networks, a regular graph is one where everybody has the same number of neighbors. Say each person has K neighbors. What we're going to see is that if K is less than B over C, that same ratio again, then we're likely to get cooperation. Let's see why. In this setting, I'm going to make a different assumption about behavior: I'm going to assume that people are in networks and decide what behavior to follow based on how successful their neighbors are. So let's think of a simple network where the benefit of having someone cooperate with you is five, the cost of cooperating is two, and each person is connected to two people. Red is going to denote defectors, and green is going to denote cooperators. Imagine this long line of people. If

you're a defector right here, surrounded by two defectors, your payoff is going to be zero, because you're neither cooperating nor is anybody who's playing against you cooperating; your payoff is just a flat zero. If you're over here, a cooperator playing with cooperators, your payoff is going to be six. Why is that? You're going to get plus five from each of the two people playing with you, but you're paying two for cooperating with each of them, and that gives you a payoff of six. So now we have to think about this person sitting here in the center, on the edge between the defectors and the cooperators. What are they going to do? Well, they're going to look at the defector to their left and say: this person isn't cooperating with anybody, so it's not costing them anything, and I'm cooperating with them, so they're going to get a payoff of five. The person to my right is getting a payoff of six, as we talked about before.
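The arithmetic in this line-network example can be checked with a tiny sketch (B = 5 and C = 2 as in the lecture; the `payoff` helper and its name are mine, not the lecture's):

```python
B, C = 5, 2  # benefit received from a cooperating neighbor, cost of cooperating

def payoff(i_cooperate, neighbors_cooperate):
    """Payoff of one node: B from each cooperating neighbor, minus C for
    each neighbor I cooperate with (a cooperator cooperates with all
    of its neighbors; a defector pays nothing)."""
    received = B * sum(neighbors_cooperate)
    paid = C * len(neighbors_cooperate) if i_cooperate else 0
    return received - paid

print(payoff(False, [False, False]))  # defector among defectors: 0
print(payoff(True,  [True,  True]))   # cooperator among cooperators: 10 - 4 = 6
print(payoff(False, [False, True]))   # the boundary defector: 5
```

These are exactly the three numbers in the story: zero, six, and five.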

So when you look at this person in the center, they're deciding: if I defect, I might only get five, but the cooperator next to me is getting six. So their impression is going to be that cooperating is better than defecting, and they're going to cooperate. Let's change the payoffs. Let's suppose that the cost of

cooperating is three. Well now, the payoffs to the defectors are unchanged, because they're not cooperating with anybody. But the payoffs to the cooperators are going to fall by two: before they were six, and now they're going to be four. Let's see why that's true. Remember, they're cooperating with two people, and two people are cooperating with them. That means they're getting two benefits of five, but paying two costs of three, and adding that up gives four. So now, when this person in the center looks to their right and their left, they're going to see that the defectors look like they're doing better, and this person is going to switch and defect. Now, we can do more

elaborate models. Let's suppose we now have K = 4, so each person is playing against four people. Again, if this person is defecting and all their friends are defecting, their payoff is going to be zero. And if we look at someone over here who's cooperating, and all four of their friends are cooperating, we can figure out their payoff as well: they're going to get four times five, which is twenty, minus four times two, which is eight, so their payoff is twelve. Now, if I look at this person here on the boundary, their payoff is seven. But when they look at the cooperators they know, they say: wow, these people are getting twelve. And when they look at the one defector they know, they see that person is getting fifteen. So they're going to defect. What's interesting here is that the benefits are five and the costs are two, and this person wants to defect, whereas previously, with the same benefits of five and costs of two, we were less connected, with only two neighbors, and people wanted to cooperate. You see, as you get more connected, you have incentives to defect.
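That flip between two neighbors and four can be summarized in a few lines (a sketch using the lecture's numbers, B = 5 and C = 2; the function names are mine):

```python
B, C = 5, 2  # benefit 5, cost 2, so B/C = 2.5

def full_cooperator(k):
    """Cooperator surrounded by k cooperators: receives k*B, pays k*C."""
    return k * (B - C)

def boundary_defector(k):
    """Defector whose k neighbors include k-1 cooperators:
    receives (k-1)*B and pays nothing."""
    return (k - 1) * B

for k in (2, 4):
    print(k, full_cooperator(k), boundary_defector(k))
# k = 2: cooperating (6) beats defecting (5)
# k = 4: defecting (15) beats cooperating (12)
```

So with the same B and C, the comparison flips as soon as k climbs past B/C.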

Let's see why we get this condition, K less than B over C. The way to think about it is to compare a cooperator in the midst of cooperators with a defector sticking out on the boundary. In general you've got K neighbors. Suppose you're a cooperator surrounded by cooperators. What's your payoff going to be? It's going to be K times the quantity B minus C: everybody cooperates with you, so you're getting K times B, but you're cooperating with everybody else, so you're losing K times C. Now suppose you're a boundary defector, somebody who's defecting but surrounded by cooperators. Then your payoff is going to be K minus one, the number of neighbors still cooperating with you, times B. So when is cooperation better? When is K times (B minus C) bigger than (K minus one) times B? If we just do the math, we get that B over C has to be bigger than K, and that's the inequality we wanted: K has to be less than B over C. So very simple mathematics explains what's necessary for cooperation. Now,

here's an interesting tension. When you think about reputation, you'd like a really dense network, because reputations are more likely to spread. When you think about this network reciprocity story, you'd like a less dense network, because there's less of an incentive to defect. So whether you want a rich network, with lots of connections, high degree, and a high clustering coefficient, or whether you'd like a sparse network, depends on the mechanism you're using to get cooperation.
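Putting the two conditions side by side makes that tension concrete (this is just the lecture's inequalities evaluated at its own example numbers; the function names are mine):

```python
def reciprocity_pays(p, B, C):
    """Direct/indirect reciprocity: cooperate when the probability of
    meeting again (or of the reputation getting out) exceeds C/B."""
    return p > C / B

def network_cooperation_survives(k, B, C):
    """Network reciprocity: cooperation survives when degree k < B/C."""
    return k < B / C

# Grocery-store numbers: B = 10, C = 2, so the threshold C/B is 0.2.
print(reciprocity_pays(0.5, 10, 2))    # Iowa City odds: True
print(reciprocity_pays(0.001, 10, 2))  # Los Angeles odds: False

# Line-network numbers: B = 5, C = 2, so B/C = 2.5.
print(network_cooperation_survives(2, 5, 2))  # two neighbors: True
print(network_cooperation_survives(4, 5, 2))  # four neighbors: False
```

Reputation wants more connections; network reciprocity wants fewer.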

If you're relying on reputation, you want lots of clusters and lots of connections. If you're relying on network reciprocity, you'd prefer the network to be sparser. Next: group selection. Group selection refers to the idea that selection can operate on groups of people as opposed to individuals, and so groups of cooperators can win out. Here's the idea. Suppose you've got two groups of people, a red group and a blue group, and each group has within itself some percentage of cooperators. Let's suppose the red group has 80 percent cooperators, and the blue group has only 50 percent cooperators. Now let's suppose the red group and the blue group go to war. Who's likely to win? The red group, because they've got more cooperators, and over time they've benefited more: they probably have more food, better technology, all sorts of stuff. So when you think of going to war, groups of cooperators are likely to beat groups of defectors. What's going to happen is that even though defectors do better within a group, when those groups go to war against each other, the groups with more cooperators are likely to win. So by selection at the group level, if there's competition between the groups, and as long as it's frequent enough, you can actually get a force toward cooperation. Last, you have kin selection. In kin selection, the idea is

that different members of a species have different amounts of relatedness. If someone is my brother, or my offspring, or my second cousin, I may actually care about their benefit. So formally, we have some measure of relatedness, R. For a child, that relatedness would genetically be one half. So suppose I could do something that benefits my child ten and only costs me two. If I were purely selfish, I wouldn't do it, because my cost is positive. But if I take into account genetic relatedness, I weight my child's benefit by one half and ask: is five bigger than two? It is, so I cooperate. This particular model has been used a lot in ecology, because in some species, like ants and bees, R is really, really high, and it's not surprising that within those species you see lots of cooperation. Okay. Those were the five general ways; now let's talk about two ways we can get cooperation

in human societies. The first one is laws and prohibitions: you can just make things illegal. For example, it might be in my interest to talk on my cell phone while I'm driving, but it's not in society's interest, because it increases the probability that somebody else gets injured. So we pass laws saying it's not legal to talk on your cell phone while driving. Another thing we can do is create incentives. When I lived in Madison, Wisconsin, it would often snow a lot, and there was a law saying that 24 hours after the snow had stopped, you had to have your sidewalk shoveled. Now, the cost of shoveling my sidewalk was high for me, but other people would benefit, and they would benefit more than it cost me to shovel. To induce me to do it, the city basically said: if you don't shovel, you're going to have to pay a big cost, a fine of $100. And that fear of paying the $100 made me shovel my walk. Simple incentives: it wasn't illegal to leave my walk unshoveled; I was just going to have to pay a fine if I didn't. Okay. So we've seen a

whole bunch of ways in which we can get cooperation in a prisoner's dilemma. It can be repetition, direct reciprocity. It could be reputation, indirect reciprocity. It can be a network effect. It can be group selection, where groups fight against each other, and the groups that cooperate are likely to win. It can be kin selection, where I cooperate with people who are like me. And finally, we can have laws that just prohibit things that aren't good, and we can have incentive structures, where we pay people to cooperate when they'd naturally be inclined to defect. Okay, so that's the prisoner's dilemma, and how we can solve it. But that's a simple two-by-two interaction. Where we want to go next is larger prisoner's dilemmas, where there are lots of players; these are sometimes called collective action problems. Okay? Let's move on... Thank you.
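As a small coda (a sketch of mine, not part of the lecture): the direct-reciprocity condition from earlier, cooperate when minus C plus p times B is positive, i.e. p > C/B, can be checked by simulation with the grocery-store numbers B = 10 and C = 2:

```python
import random

random.seed(1)
B, C = 10, 2  # threshold: cooperate when p > C/B = 0.2

def average_gain(p, trials=200_000):
    """Monte Carlo estimate of the payoff to cooperating once:
    pay C now; with probability p we meet again and B comes back."""
    total = 0.0
    for _ in range(trials):
        total += -C + (B if random.random() < p else 0.0)
    return total / trials

print(average_gain(0.5) > 0)    # Iowa City odds: cooperating pays
print(average_gain(0.001) > 0)  # Los Angeles odds: it doesn't
```

The averages settle near the lecture's expected values, minus C plus p times B: about 3 when p is one half, and about minus 2 when p is one in a thousand.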