0:00

Okay folks, so we're back, and let's have a look at one other game with a continuum of actions, so people can take a whole series of different actions, not just zero or one. And this is going to be one with complements, strategic complements, and it's a linear-quadratic model, so it's got a very simple structure to the payoffs which allows for a simple, closed-form solution. This comes out of a paper by Ballester, Calvó-Armengol, and Zenou in 2006, and the structure of the game, the payoffs, are very simple and tractable.

So, in particular, think of a given individual taking their action, x_i, while other people are taking actions x_{-i}, and let's let x_i be greater than or equal to zero, so I'm taking some real-valued action. And in the utility that people get, we can see why it's called linear-quadratic: it's going to be increasing in my own action, just linearly, and there's going to be a quadratic cost, so eventually I don't want to take too much action, because I'm going to pay for that in terms of x_i squared. But there's also the strategic complement aspect. What I also do is I look at different friends, and I weight them, so I have some weight w_ij that i puts on j, and what I get is some product of our actions. So, if other individuals are taking really high actions, then that gives me an incentive to take higher actions. I get a payoff, a bonus, from taking higher actions when other people take high actions, okay?

So, the full model that they have also allows for some global substitutes, and so forth, but let's focus in on this essential aspect of the model, which is the linear-quadratic aspect: I get a positive payoff from my direct action, I pay some cost which is quadratic, and then I get a bonus in terms of what other individuals are doing, the strategic complements.
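To make the payoff concrete, here is a minimal sketch in Python, assuming a payoff of the form u_i = a·x_i − (b/2)·x_i² + Σ_j w_ij·x_i·x_j; the function name and the toy two-player weight matrix are ours:

```python
import numpy as np

def utility(i, x, w, a, b):
    """Payoff to player i given action profile x and weight matrix w (w[i, i] = 0)."""
    own = a * x[i] - 0.5 * b * x[i] ** 2   # linear benefit, quadratic cost
    complement = x[i] * (w[i] @ x)         # bonus from neighbors' actions
    return own + complement

# Two players who put weight 0.5 on each other, both taking action 1:
w = np.array([[0.0, 0.5],
              [0.5, 0.0]])
x = np.array([1.0, 1.0])
u0 = utility(0, x, w, a=1.0, b=2.0)   # 1 - 1 + 0.5 = 0.5
```

The complementarity is visible directly: raising the other player's action raises the cross term x_i·x_j and so raises my payoff from any given action.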

Okay, so the nice thing about this quadratic form is that it's going to be easy to figure out what my best action is given what other people are doing. And then solving for Nash equilibrium in this world is fairly easy: we'll be able to find a set of actions such that everybody is best responding to everybody else, and we'll be able to solve that as a function of the network in a really clean and simple form.

Okay, so we've got this payoff, and we want to figure out the best x_i in response to what other people are doing. When I say x_{-i}, this is the vector of actions for everybody else besides i. So if you want to maximize this function with respect to x_i, just take the derivative, du_i/dx_i, set it equal to zero, and that gives you the maximizer.

And the first-order condition is going to be necessary and sufficient in this case. So when we go through and solve this, looking at the first-order condition, taking the derivative we get a minus b times x_i — right, the two comes down and cancels the two — plus the sum over j of w_ij times x_j, and that has to be equal to zero. So x_i has a nice form: you get a/b plus something which is positively responsive to the amount of activity that your neighbors take. So the more activity your neighbors take, the more you want to take, okay. You're getting benefits proportional to a, and benefits proportional to how much action your neighbors are taking, and you pay a cost relative to b. So we get the benefits in the numerator over the cost, and that modulates exactly what the right x_i is. Okay, so we've got a best response: what I should do in response to everybody else.
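As a sketch, the first-order condition a − b·x_i + Σ_j w_ij·x_j = 0 rearranges into the best response below; the function name and the toy numbers are ours:

```python
import numpy as np

# Best response from a - b*x_i + sum_j w[i,j]*x_j = 0,
# rearranged to x_i = a/b + sum_j (w[i,j]/b)*x_j:
def best_response(i, x, w, a, b):
    """Player i's optimal action against the profile x (w[i, i] is zero)."""
    return a / b + (w[i] @ x) / b

# Two players weighting each other by 0.5; against x = (1, 1) with a=1, b=2:
w = np.array([[0.0, 0.5],
              [0.5, 0.0]])
x = np.array([1.0, 1.0])
br0 = best_response(0, x, w, a=1.0, b=2.0)   # 0.5 + 0.25 = 0.75
```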

And then an equilibrium is going to solve this set of equations simultaneously, so that everybody is taking the best-response action to everybody else. So, to solve for all of these simultaneously, if we write this down as a vector now, x_1 to x_n, each entry is a function of everybody else's actions and the weights on what everyone else is doing. You can rewrite this as a vector equation: x equals alpha plus G times x, right? So we've got our x_1 to x_n, and that's equal to a vector where everybody's taking action a/b — that's alpha, with a/b in every entry — plus a matrix G times the actions.

Â 5:10

Where this matrix G has, in its ij-th entry, w_ij over b, right. So you can rewrite this in this form, and what that says is that this is now an easily solved equation: you've got a linear equation in terms of x and G, where the matrix we're working with, in terms of the network, is just these w_ij's, but divided through by b, which is the relative cost of the action, okay?

So we've got a nice simple form here, and if we want to solve this, you know, you could rewrite it: x is equal to this expression, so substitute the expression back in for x, and do that repeatedly. What you're going to get is alpha, plus G times alpha, plus G squared times alpha, and so forth. And if you look at that, you can write it as a sum of G to the k, for k greater than or equal to zero, times this alpha vector.

Okay, so one way to solve for x is to say it's equal to this infinite sum. And what that means is that this sum over powers of G is going to have to converge; otherwise you're going to get an expression which explodes, and the equilibrium is not going to be well defined.

Now, just in terms of understanding the equilibrium structure, we're getting a feedback here, right? The more action my friends take, the more action I want to take, and so forth. And for that to work well, it has to be that the weights I put on other people are small enough relative to the cost, the b, that I don't want to take an infinite action. That's what makes this thing converge: remember, these entries are the weights relative to b, and if those are small enough then the series will converge; if they get too big, then there won't be any solution.

Now, another way to solve this is just to write it as I minus G, times x, equals alpha. And that means x is equal to I minus G, inverse, times alpha, if this thing's invertible — and this being invertible is the same condition that you need for the series to converge, okay. So we have a very nice solution: we can find x directly as a function of the parameters of the game, the a and b, and this G matrix, which is the w_ij's and the b.
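The same toy line network solved directly, as a sketch (using numpy's linear solver rather than forming the inverse explicitly):

```python
import numpy as np

# Same three-player line network as a toy example:
a, b = 1.0, 2.0
w = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
G = w / b
alpha = np.full(3, a / b)

# Solve (I - G) x = alpha, i.e. x = (I - G)^{-1} alpha:
x = np.linalg.solve(np.eye(3) - G, alpha)
# x = [5/7, 6/7, 5/7], matching the infinite-sum calculation.
```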

So we have a very simple game, and it ends up giving a nice solution where we can calculate the actions of every player as a function of the network structure and the payoff structure. Now, if a is equal to zero, then we end up with x equals Gx, so any nonzero solution is a unit eigenvector of G, and we end up with a solution which is just an eigenvector calculation.

Okay, so what's nice about this model? The actions are related to the network structure: higher neighbors' actions mean a higher own action, and a higher own action means higher neighbors' actions, so we get these feedbacks. In order for a solution, we need b to be large enough, or the w_ij's to be small enough, so that this actually converges; but once it does, there is a very nice prediction. And what's interesting is that this relates back to our centrality measures. So let's have a look at this.

So we've got our solution, and we can write x in either of these forms. Recall that Bonacich centrality looked like a calculation that was very similar to this: it looked like counting paths of different lengths from i to different j's, and then summing over all possible path lengths according to some weight. That's exactly the calculation we're doing here. And alternatively, we wrote Bonacich centrality as I minus G, to the minus one, times G, times the vector of ones.

Â 9:58

So in fact, what we can say is that the action any individual takes in one of these linear-quadratic games with complementarities is something which is proportional to their Bonacich centrality. So, higher Bonacich centrality, higher actions, okay.
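As a sketch, with centrality written as (I − G)⁻¹G·1 as above, the equilibrium works out to x = (a/b)·(1 + centrality): the stand-alone action a/b, plus a network bonus scaled by each player's centrality. The three-player line network is again our own toy example:

```python
import numpy as np

# Same three-player line network (toy example):
a, b = 1.0, 2.0
w = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
G = w / b
ones = np.ones(3)

centrality = np.linalg.solve(np.eye(3) - G, G @ ones)   # (I-G)^{-1} G 1
x = np.linalg.solve(np.eye(3) - G, (a / b) * ones)      # equilibrium actions

# Equilibrium = stand-alone action times (1 + Bonacich centrality),
# since (I-G)^{-1} 1 = 1 + (I-G)^{-1} G 1:
assert np.allclose(x, (a / b) * (ones + centrality))
```

The more central middle player gets the larger network bonus, so higher centrality does translate into a higher action.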

So everybody takes an action of a/b to begin with, which is just what they would do in isolation with no network, and then the extra network effect adds in these complementarities. How much extra action they take depends on their Bonacich centrality in the network. So we get a natural feedback from complementarities: the actions relate to the total feedback, and centrality, which tells us the relative number of weighted influences from one node to another, captures the complementarities.

And why is that working? Again, you know, these centrality measures capture, sort of, how much influence I get from other people, and from their friends, and so forth. That's exactly what's happening here: how much does their action influence my friends' actions, which then influence my action, and what do the feedbacks look like?

Okay, so we've got this nice solution. The beauty of their model is that you end up with a very simple expression for x. This scales with a/b, which just multiplies everywhere, so we can rescale and eliminate it. So, if we think about G_ij equal to w_ij over b, let's think of a simple world where you're either connected to an individual or not. Then, effectively, the main things are, you know, who you're connected to and what the size of b is, and that will give us a calculation, and you can directly estimate these things. So, for instance, here's one network for which they did these calculations, and you can do it in different settings.

Â You can do it in different settings. so you know, depending on whether, what B

Â is, if B is 10, that's sort of relatively high cost to taking actions.

Â Then, what do you get? You get that, a person in the center

Â position takes an action of 1.75. This person takes an action of 1.88.

Â This takes 1.72. These people are all, right, this is

Â going to be a 1.72, 1.88, 1.88. So, depending on how many neighbors you

Â have and how central you are in this case, the highest action ends up being

Â for these individuals in, in this position.

If you rescale the b and change it to a different level, you get slightly different numbers; basically, you know, you can redo that for b equals five. So if you lower the cost, people's actions go up, and they more than double. And it's more than doubling because you're getting a feedback: everybody wants to put in a higher action, but that means their neighbors want to take higher actions, so there's an even greater increase. If you had been doing this without the neighbor feedback, just cutting the cost in half would have exactly doubled the action; now, with the feedback, we get an extra effect. And indeed, for this particular example, you're still going to get, you know, a similar structure, but much higher actions.
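A sketch of that feedback effect on a toy line network of our own: halving b more than doubles every equilibrium action, because lowering the cost also scales up G = w/b and so strengthens the complementarity.

```python
import numpy as np

# Equilibrium actions x = (I - G)^{-1} alpha with G = w/b, alpha = (a/b) 1:
def equilibrium(a, b, w):
    n = len(w)
    G = w / b
    return np.linalg.solve(np.eye(n) - G, np.full(n, a / b))

# Toy three-player line network with unit weights (our own example):
w = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

x_b10 = equilibrium(1.0, 10.0, w)   # high cost, b = 10
x_b5 = equilibrium(1.0, 5.0, w)     # half the cost, b = 5

# With no network (w = 0), halving b would exactly double a/b; with the
# complementarity feedback, every action more than doubles:
assert np.all(x_b5 > 2 * x_b10)
```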

Â gives us predictions of exactly who's going to take which actions as a function

Â of their position in the network. And now we've got something which begins

Â to give us some feeling for why Bonacich's centrality might be an

Â interesting centrality measure. Its coming out, and its givings us some

Â idea of what the feedback is and complementarities, and it gains a

Â strategic complementarities that can be important.

Okay, so that takes us through another game with this kind of feedback. Now, you know, the nice part about this is that it allows one to do calculations in terms of a simple network measure. It's going to be more difficult if we wanted to add in a lot of heterogeneity across nodes and have different nodes have different preferences, but we can enrich these models in ways that allow us to take them to data. And indeed, people have been starting to work with these models and to do analyses of what predicted behavior looks like as a function of the network, and then actually seeing whether that gives us some insight into what's happening in different settings.
