0:00

Okay folks, so we're back, and let's have a look at one other game with a continuum of actions, so people can take a whole range of different actions, not just zero or one. This is going to be one with complements, strategic complements, and it's a linear quadratic model, so it's got a very simple structure to the payoffs which allows for a simple, closed-form solution. This comes out of a paper by Ballester, Calvó-Armengol, and Zenou in 2006.

The structure of the game is that the payoffs are very simple and tractable. In particular, think of a given individual taking their action xi, while other people are taking actions x minus i, and let's let xi be greater than or equal to zero, so I'm taking some real-valued action. In the utility that people get, we can see why it's called linear quadratic: the utility is increasing in my own action, just linearly, and quadratically there's going to be a cost, so eventually I don't want to take too much action, because I'm going to pay for that in terms of xi squared. But there's also the strategic complement aspect: I look at my different friends, and I weight them, so I have some weight of i on j, and what I get is some product of our actions. So, if other individuals are taking really high actions, that gives me an incentive to take higher actions. I get a payoff, a bonus, from taking higher actions when other people take high actions, okay?
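In symbols, the payoff being described here is the standard linear quadratic form; the exact normalization, including the 1/2 on the cost term, is an assumption, chosen to be consistent with the derivative calculation that comes next:

```latex
u_i(x_i, x_{-i}) \;=\; a\,x_i \;-\; \frac{b}{2}\,x_i^2 \;+\; \sum_{j} w_{ij}\, x_i\, x_j,
\qquad a, b > 0,\; w_{ij} \ge 0 .
```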

The full model that they have also allows for some global substitutes and so forth, but let's focus in on the essential aspect of the model, which is the linear quadratic aspect: I get a positive payoff from my direct action, I pay a cost which is quadratic, and then I get a bonus in terms of what other individuals are doing, via the strategic complementarities.

Okay, the nice thing about this quadratic form is that it's going to be easy to figure out what my best action is given what other people are doing. And then solving for Nash equilibrium in this world is fairly easy: we'll be able to find a set of actions such that everybody is best responding to everybody else, and we'll be able to solve for that as a function of the network in a really clean and simple form.

Okay, so we've got this payoff, and we want to figure out the best xi in response to what other people are doing. When I say x minus i, that's the vector of the actions for everybody besides i. To maximize this function with respect to xi, just take the derivative and set it equal to zero: take dui/dxi, set that equal to zero, and that gives you the maximizer.

And the first-order condition is going to be both necessary and sufficient in this case, since the objective is concave in xi. When we take the derivative, we get a minus b times xi (the two comes down and cancels the two), plus the sum over j of wij times xj, and that has to equal zero. So xi has a nice form: you weight things by a over b, and you have something which is positively responsive to the amount of activity that your neighbors take. The more activity your neighbors take, the more you want to take, okay. You're getting benefits proportional to a, benefits proportional to how much action your neighbors are taking, and you pay a cost relative to b; so the benefits in the numerator over the cost modulate exactly what xi is. Okay, so we've got a best response: what I should do in response to everybody else.
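Written out, assuming a cost term of (b/2) xi squared so that the 2 cancels as described, the first-order condition and the resulting best response are:

```latex
\frac{\partial u_i}{\partial x_i} \;=\; a \;-\; b\,x_i \;+\; \sum_j w_{ij}\,x_j \;=\; 0
\quad\Longrightarrow\quad
x_i \;=\; \frac{a}{b} \;+\; \sum_j \frac{w_{ij}}{b}\,x_j .
```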

An equilibrium is then going to solve this set of equations simultaneously, so that everybody is taking a best response to everybody else. To solve for all of these simultaneously, write the actions down as a vector, x1 through xn; each is a function of everybody else's actions and the weights on what everyone else is doing. You can rewrite this in vector form as x = alpha + Gx, right? So we've got our x1 through xn, and that's equal to a vector where everybody has the entry a over b, a over b, and so on, plus a matrix times the actions.

5:10

This matrix G has, as its ij-th entry, wij over b. So you can rewrite the system in this form, and what that says is that this is now an easily solved equation: a linear equation in terms of x and G, where the matrix we're working with, in terms of the network, is just these wij's divided through by b, the relative cost of the action, okay?

So we've got a nice simple form here, and if we want to solve it, we can rewrite it by substituting the expression in for x, and then doing that repeatedly. What you get is alpha plus G times (alpha plus G times (alpha plus ...)), and so forth. If you look at that, you can write it as a sum of G to the k, for k greater than or equal to zero, times this alpha vector.

Okay, so one way to solve for x is to say it's equal to this infinite sum. What that means is that the powers of G are going to have to converge when summed; otherwise you get an expression which explodes, and the equilibrium is not well defined.

Just in terms of understanding the equilibrium structure, we're getting a feedback here, right? The more action my friends take, the more action I want to take, and so forth. For that to work well, the weights I put on other people have to be small enough relative to the cost, the b, so that I don't want to take an infinite action. That's what makes sure this thing converges: remember, the entries here are the relative weights compared to the b's, and if those are small enough then this thing will converge; if they get too big, then there won't be any solution.

Another way to solve this is just to write it as (I minus G) times x equals alpha. That means x is equal to (I minus G) inverse times alpha, if this thing is invertible; and invertibility here is the same condition you need for the infinite sum to converge, okay? So we have a very nice solution: we can find x directly as a function of the parameters of the game, the a's and b's, and this G matrix, which is built from the wij's and the b's. So we have a very simple game, and it ends up giving a nice solution where we can calculate the actions of every player as a function of the network structure and the payoff structure. Now, if a is equal to zero, then we end up with x = Gx, so the solution is just an eigenvector calculation: x has to be an eigenvector of G with a unit eigenvalue.
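As a quick numerical sketch (the network and the parameters a and b here are illustrative, not taken from the lecture's slides), we can check that solving (I - G) x = alpha directly agrees with the infinite-sum solution whenever the powers of G die out:

```python
import numpy as np

# Illustrative 4-node line network with 0/1 links; a and b are made up.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
a, b = 1.0, 3.0
n = len(W)

G = W / b                   # G_ij = w_ij / b
alpha = np.full(n, a / b)   # the stand-alone action a/b for everyone

# The sum over G^k converges exactly when the largest eigenvalue of G
# is below 1 in magnitude -- the invertibility condition from the lecture.
assert max(abs(np.linalg.eigvals(G))) < 1

# Solve (I - G) x = alpha directly...
x = np.linalg.solve(np.eye(n) - G, alpha)

# ...and compare with a truncated version of sum_{k>=0} G^k alpha.
x_series = sum(np.linalg.matrix_power(G, k) @ alpha for k in range(200))
print(np.allclose(x, x_series))  # True
```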

Okay, so what's nice about this model? The actions are related to the network structure: higher neighbors' actions mean a higher own action, and a higher own action means higher neighbors' actions, so we get these feedbacks. In order for a solution to exist, we need b to be large enough, and/or the wij's to be small enough, so that this actually converges; but once it does, there is a very nice prediction. And what's interesting is that this relates back to our centrality measures. So let's have a look at that.

So we've got our solution, and we can write x in either of these forms. Recall that Bonacich centrality looked like a very similar calculation: it looked like counting walks of different lengths from i to different j's, and then summing over all possible path lengths according to some weight. That's exactly the calculation we're doing here. Alternatively, we wrote Bonacich centrality as (I minus G) inverse times G times the vector of ones.

9:58

So, in fact, what we can say is that the action any individual takes in one of these linear quadratic games of complementarities is proportional to their Bonacich centrality. So: higher Bonacich centrality, higher actions, okay?
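We can check the proportionality claim numerically; the network and the parameters a and b below are illustrative (they are not the example from the slides):

```python
import numpy as np

# Illustrative 4-node line network; a and b are made-up parameters.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
a, b = 2.0, 4.0
n = len(W)
G = W / b
I = np.eye(n)

# Equilibrium actions: x = (I - G)^{-1} (a/b) 1
x = np.linalg.solve(I - G, np.full(n, a / b))

# Bonacich centrality with weight matrix G: (I - G)^{-1} G 1
bonacich = np.linalg.solve(I - G, G @ np.ones(n))

# Actions are the stand-alone level a/b scaled up by (1 + centrality),
# so higher Bonacich centrality means a higher equilibrium action.
print(np.allclose(x, (a / b) * (1 + bonacich)))  # True
print(x[1] > x[0])                               # True: middle nodes act more
```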

Everybody takes an action of a over b to begin with, which is just what they would do in isolation, with no network. Then the extra network effect adds in these complementarities, and how much extra action they take depends on their Bonacich centrality in the network. So we get a natural feedback from complementarities: the actions relate to the total feedback, and centrality, which tells us the relative number of weighted influences from one node to another, captures those complementarities.

And why is that working? Again, these centrality measures capture how much influence I get from other people, and from their friends, and so forth. That's exactly what's happening here: how much does their action influence my friends' actions, which then influence my action, and what do the feedbacks look like? Okay, so we've got this nice solution, and the beauty of their model is that you end up with a very simple expression for x.

This scales with a over b, which just multiplies everything, so we can rescale and eliminate that. So, taking Gij equal to wij over b, let's think of a simple world where you're either connected to an individual or not. Then, effectively, the main things are who you're connected to and the size of b, and that will give us a calculation; and you can directly estimate these things. For instance, here is one network for which they did these calculations, and you can do it in different settings.

You can do it in different settings. so you know, depending on whether, what B

is, if B is 10, that's sort of relatively high cost to taking actions.

Then, what do you get? You get that, a person in the center

position takes an action of 1.75. This person takes an action of 1.88.

This takes 1.72. These people are all, right, this is

going to be a 1.72, 1.88, 1.88. So, depending on how many neighbors you

have and how central you are in this case, the highest action ends up being

for these individuals in, in this position.

If you rescale b and change it to a different level, you get slightly different numbers; here, you can redo that for b equals 5. If you lower the cost, people's actions go up, and they more than double. It's more than doubling because you're getting a feedback: everybody wants to put in a higher action, but that means their neighbors want to take higher actions, and so the increase compounds. If you hadn't had the neighbor feedback, just cutting the cost in half would have exactly doubled the action; with the feedback, we get an extra effect. And indeed, for this particular example, you still get a similar structure, but much higher actions. So, what's nice about this model is that it

gives us predictions of exactly who's going to take which actions as a function of their position in the network. And now we've got something which begins to give us some feeling for why Bonacich centrality might be an interesting centrality measure: it comes out of, and gives us some idea of, the feedback from complementarities, in games where strategic complementarities can be important.
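The more-than-doubling effect from halving the cost can be seen on a toy network (again illustrative, not the slide's network or its exact numbers):

```python
import numpy as np

# Toy 4-node line network; "a" is a made-up benefit parameter.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
a = 1.0

def equilibrium(b):
    """Equilibrium actions x = (I - W/b)^{-1} (a/b) 1 for cost parameter b."""
    n = len(W)
    return np.linalg.solve(np.eye(n) - W / b, np.full(n, a / b))

x_high, x_low = equilibrium(10.0), equilibrium(5.0)

# In isolation x = a/b, so halving b would exactly double the action;
# with the neighbor feedback, every player's action more than doubles.
print((x_low / x_high > 2).all())  # True
```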

Okay, so that takes us through another game with this kind of feedback. The nice part about this is that it allows one to do calculations in terms of a simple network measure. It's going to be more difficult if we want to add in a lot of heterogeneity on nodes and have different nodes have different preferences, but we can enrich these models in ways that then allow us to take them to data. And indeed, people have been starting to work with these models and to do analyses of what predicted behavior looks like as a function of the network, and then actually seeing whether that gives us some insight into what's happening in different settings.