0:00

Hello folks, and we're back again. I want to talk a little more about dynamics and network formation, and in particular about adding a little noise to the dynamic process, which turns out to have some nice properties. So let's have a look: we're going to talk about stochastic evolution of networks.

We'll start with a concept called an improving path. This comes out of work I did with Alison Watts, so it's Jackson and Watts 2002. The idea is that we've got a sequence of adjacent networks, just as in the process we talked about before: you add a link if it benefits both players, at least one strictly, and you delete a link if either person benefits from its deletion.

But what we're going to do now, in looking at these paths, is noise them up a bit. First, let's make sure we've got the concept of improving paths down, with a simple example: everybody disconnected has a value of zero. We'll use a diagram with the different possible networks, and keep track of what the improving deviations would be. From the empty network you can think of adding one link between any two of the players; then you can add another link, so you've got two links; or go on to the complete network.

The payoffs tell us that players get zero from the empty network; a single link in isolation is not worthwhile, you only get a value of minus one; once you form two links, it's valuable; and three links is the most valuable. The arrows represent the changes that would be value-improving for people. So if you were in a situation with one link, you could benefit either by severing that link and going back to zero or, if the possibility of adding a new link arose, by adding either of the links that weren't there, which would also be improving. So there'd be three different ways out of that situation, all of them improving. That's the notion of improvements, and the arrows point in the directions of improvements.

The idea of an improving path is that there are paths in this setting where the people involved do better along each step. An improving path from the one-link network to the complete network would be one that passes through adding one link and then adding a second link. If we follow these arrows, we've got a series of paths where each choice is an improvement. There are also improving paths leading in the other direction: for many of these networks, you can have an improving path leading down to the empty network. So in this situation there are actually two Pairwise Stable networks, and in fact they're both Pairwise Nash Stable networks: the end points, the empty and full networks, are stable, and there are improving paths leading from the other networks to one or the other of these, depending on which situation you're in. Okay, so that's the concept of improving paths.
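The three-player example can be checked mechanically. Here is a rough sketch in Python; the payoff numbers for the two- and three-link networks (1 and 2 per connected player) are assumptions, since the lecture only says "valuable" and "most valuable", while the 0 and minus-one payoffs come from the example. The `improving_moves` function implements the add/delete rule above, and the networks with no improving move are exactly the pairwise stable ones.

```python
from itertools import combinations

# Three players; a network is a frozenset of links (unordered pairs).
PLAYERS = [0, 1, 2]
ALL_LINKS = [frozenset(p) for p in combinations(PLAYERS, 2)]

def utility(i, g):
    # Assumed payoffs: 0 if isolated or the network is empty, -1 for a
    # linked player in a one-link network, and (hypothetically) 1 and 2
    # per connected player in two- and three-link networks.
    degree = sum(1 for link in g if i in link)
    if degree == 0:
        return 0
    return {1: -1, 2: 1, 3: 2}[len(g)]

def improving_moves(g):
    # Adjacent networks reachable by one value-improving change:
    # add if both weakly gain and at least one strictly; delete if
    # either player strictly gains from the deletion.
    moves = []
    for link in ALL_LINKS:
        i, j = tuple(link)
        if link not in g:
            g2 = g | {link}
            gains = [utility(i, g2) - utility(i, g),
                     utility(j, g2) - utility(j, g)]
            if min(gains) >= 0 and max(gains) > 0:
                moves.append(g2)
        else:
            g2 = g - {link}
            if (utility(i, g2) > utility(i, g) or
                    utility(j, g2) > utility(j, g)):
                moves.append(g2)
    return moves

all_networks = [frozenset(c) for r in range(4)
                for c in combinations(ALL_LINKS, r)]
pairwise_stable = [g for g in all_networks if not improving_moves(g)]
print([len(g) for g in pairwise_stable])  # prints [0, 3]: empty and complete
```

Under these assumed payoffs, the only networks with no outgoing arrow are the empty and the complete network, matching the diagram.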

So we've got two Pairwise Nash Stable networks, these end points, right? Both are Pairwise Nash Stable, and there's a question: is one of them more realistic as an outcome than the other? In fact, they're both strict equilibria: at the empty network, nobody would want to add a link, since it's strictly costly to do so; and at the complete network, deleting a link is strictly costly for anybody. So just looking directly at these concepts, even adding strictness, isn't going to give us predictability.

So what we're going to do is add some trembles, or errors, to these improving paths. We'll start at some network and, with equal probability, look at any one of its links, just as we did before. We'll add that link if it's not present and both people want to add it, and we'll delete it if it is there and somebody benefits from deleting it. But the key thing is that we're going to reverse that decision with some probability epsilon. So with probability epsilon greater than zero, for whatever reason, somebody makes a mistake: when they want to add a link, they don't; when they want to delete a link, they don't; a link that's beneficial gets deleted; or a link that's not beneficial gets added. So some link is recognized, people want to do one thing with it, either keep it, get rid of it, or add it, and the decision is reversed. Whatever decision is made, it's reversed with some probability epsilon, and we'll think of this as a small probability. So there's some chance that people make errors, but most of the time they'll make the improving choice.
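A quick way to see this process in action is to simulate it. This is a sketch rather than the lecture's own code: the payoff values 1 and 2 per connected player in the two- and three-link networks are assumptions, and `step` implements the rule just described: recognize one link uniformly at random, make the improving decision on it, then reverse that decision with probability epsilon.

```python
import random
from itertools import combinations

PLAYERS = [0, 1, 2]
ALL_LINKS = [frozenset(p) for p in combinations(PLAYERS, 2)]

def utility(i, g):
    # Assumed payoffs: 0 if isolated, -1 per linked player in a one-link
    # network, hypothetically 1 / 2 per connected player at two / three links.
    degree = sum(1 for link in g if i in link)
    if degree == 0:
        return 0
    return {1: -1, 2: 1, 3: 2}[len(g)]

def step(g, eps, rng):
    # Recognize one of the three links uniformly, make the improving
    # decision on it, then reverse that decision with probability eps.
    link = rng.choice(ALL_LINKS)
    i, j = tuple(link)
    if link not in g:
        g2 = g | {link}
        gains = [utility(i, g2) - utility(i, g),
                 utility(j, g2) - utility(j, g)]
        want_change = min(gains) >= 0 and max(gains) > 0
    else:
        g2 = g - {link}
        want_change = (utility(i, g2) > utility(i, g) or
                       utility(j, g2) > utility(j, g))
    if rng.random() < eps:  # the tremble
        want_change = not want_change
    return g2 if want_change else g

rng = random.Random(0)
g, steps, time_at = frozenset(), 200_000, [0, 0, 0, 0]
for _ in range(steps):
    g = step(g, 0.01, rng)
    time_at[len(g)] += 1
print([round(t / steps, 3) for t in time_at])  # most time at three links
```

Even though the run starts at the empty network, which is itself pairwise stable, a long simulation spends the overwhelming majority of its time at the complete network.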

So what this does is add trembles, or errors, to improving paths. As long as there are errors, there's always a chance that you keep moving around, so you're never going to stay anywhere forever, and you can get from any network to any other network through a series of errors and improvements. So this is a finite-state process: there's a finite set of possible networks you could be in. It's irreducible, in the sense that from any one you can get to any other one, and it's aperiodic: you can move in odd cycles or even cycles. So this is a very nicely behaved Markov chain, one we know a lot about in terms of theory. It's going to have a nice distribution over how much time you'll spend at each state.

So, given epsilon, we'll have an idea of how much time you would spend at each network in this process over time: there's a steady-state distribution which gives us the long-run distribution of how much time you'll spend at each of these networks. Let's look at this process in more detail. Once we've noised up our improving paths, suppose we're at the empty network. Now, with probability one third, any one of the three links could be recognized. If we recognize a link, the players don't want to add it, but with probability epsilon it gets added anyway. So there's a probability of one third epsilon of recognizing and accidentally adding each particular link, meaning each of the three one-link networks is reached with probability one third epsilon. So from a network with zero links there's a total probability of epsilon that you go from zero links to one link, and a probability of one minus epsilon that you just stay there.

And so forth. If we're at the network with three links, there's a probability epsilon that you go from three links to two, and a probability of one minus epsilon that you go from three back to three. From one of the two-link networks, there's a one third times one minus epsilon chance that you go to the complete network, a one third times epsilon chance that you delete a link you wanted to keep, a one third times epsilon chance of the other mistake, and so forth. So for each of these we can figure out the probability that you move from a network with zero links to one with one link, from one link to two, and so on. Instead of keeping track of all of the networks, let's just keep track of them in terms of how many links they have: zero links, one link, two links, three links. What's the probability that we move between those? Well, you can keep track of that: the probability that you move from any one to any other one. For instance, from a network with zero links you can go to one with zero links or one link, but you can't move directly to two links. If you have one link, there's a one third times one minus epsilon chance that the link you recognize sends you back to zero, an epsilon chance that you stay where you are, and a two thirds times one minus epsilon chance that you recognize one of the links you want to add and go to a two-link network, and so forth. So this matrix keeps track of the probability that, if you currently have zero, one, two, or three links, you transition to each state in the next period.
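That transition matrix can be written out explicitly. This is a reconstruction from the counting above, with the (1 − ε) factors made explicit; rows and columns are link counts 0 through 3, and the steady state is the left unit eigenvector of the matrix, normalized to a probability vector.

```python
import numpy as np

def transition_matrix(eps):
    # P[r, c] = probability of moving from r links to c links in one
    # period, for the 3-player example with tremble probability eps.
    return np.array([
        [1 - eps,        eps,          0,                  0],
        [(1 - eps) / 3,  eps,          2 * (1 - eps) / 3,  0],
        [0,              2 * eps / 3,  (2 - eps) / 3,      (1 - eps) / 3],
        [0,              0,            eps,                1 - eps],
    ])

def steady_state(P):
    # Left unit eigenvector of P (eigenvalue 1), as a probability vector.
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

for eps in (0.1, 0.01, 0.001):
    print(eps, np.round(steady_state(transition_matrix(eps)), 4))
```

As epsilon shrinks, the printed distributions pile more and more of their mass on the last entry, the three-link network.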

9:06

That's a well-defined Markov chain. Given this, you can figure out the probability of being in each state: how much time you spend in state zero, state one, state two, and state three. What's the average frequency of periods you spend in each of these? That's a well-defined Markov chain problem: you look for the steady-state distribution, which in this case happens to be the left-hand unit eigenvector of this matrix. If you don't remember Markov chains, or haven't seen them before, you can find detailed treatments on the internet; I also have some pages in my book that go through the basics of Markov chains, and there are lots of other sources. The important thing is that once we've got this process, it's a well-defined dynamic process, and we can figure out how much time you're going to spend in each state. When you look at the fraction of time spent in a zero-link network, a one-link network, a two-link network, and a three-link network, this is the answer: the unique steady-state distribution, found by computing the left-hand unit eigenvector of that matrix. So what's the interesting thing about this?

Well, the interesting thing is that we had two Pairwise Nash Stable networks: the empty network and the complete network. And if you look at the amount of time you spend in each of these as epsilon gets very, very small, as epsilon goes to zero, the time spent at the complete network approaches one.

10:53

The terms for the other networks all approach zero. So this process is going to spend more and more of its time in the three-link network.
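This limit can also be computed exactly. Because the chain only ever moves between adjacent link counts, it's a birth-death chain, and the steady state follows from detailed balance: π(k+1)/π(k) equals the up-probability over the down-probability. The transition probabilities below are my reconstruction of the ones discussed above; a sketch in exact rational arithmetic:

```python
from fractions import Fraction

def steady_state_exact(eps):
    # Detailed balance for the birth-death chain on link counts 0..3:
    # pi[k+1] / pi[k] = P(k -> k+1) / P(k+1 -> k).
    e = Fraction(eps)
    up = [e, 2 * (1 - e) / 3, (1 - e) / 3]    # P(0->1), P(1->2), P(2->3)
    down = [(1 - e) / 3, 2 * e / 3, e]        # P(1->0), P(2->1), P(3->2)
    pi = [Fraction(1)]
    for u, d in zip(up, down):
        pi.append(pi[-1] * u / d)
    total = sum(pi)
    return [p / total for p in pi]

for eps in (Fraction(1, 10), Fraction(1, 100), Fraction(1, 1000)):
    pi = steady_state_exact(eps)
    print(eps, [round(float(p), 4) for p in pi])
```

The mass on the three-link network works out to (1 − ε)/ε relative to the empty network, so it goes to one as ε goes to zero, which is exactly the "approaches one" claim.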

So even though both of these were Pairwise Nash Stable, this dynamic process ends up spending, as epsilon becomes very small, most of its time at the three-link network. You can see that by looking at the limit of this unique steady-state distribution. This is what's known as stochastic stability in the game theory literature. It's a technique for refining equilibria that has become quite powerful.

Alison and I used it to develop methods for working through dynamics in networks. Here it yields a prediction of which network is going to be visited most often, and stayed at most of the time, as epsilon becomes quite small.

One thing to notice is that placing probability one on the complete network in the limit is not the same as the steady states of the limit process. If you look at the process with zero errors in it, you could have ended up at either the empty network or the full network. So the set of steady states of the limit process, where you have no noise, is different from the limit of the processes with little bits of noise. The little bits of noise are very important in making predictions. Now let's go through the intuition for why we spend most of our time at the complete network and very little at the empty network, even though both are Pairwise Nash Stable. Think about trembling: how do we leave each state?

Suppose we're at the network with three links. We can make an error; if we make an error and bounce to one of the two-link networks, which happens with probability epsilon, the process most likely comes right back. You would need two errors in a row before reaching a network which then naturally has a tendency to lead to the empty network, the other Pairwise Nash Stable network. In contrast, one error at the empty network leads you to something which then leads naturally to the complete network. So when you account for errors, this stochastic stability notion says the full network is the more stable one: it's harder to get away from, while it takes only one error to get away from the empty network. What this does is bring in what are known as basins of attraction. When you look at these dynamic processes, the complete network has the larger basin of attraction: from any of the intermediate networks you can get back to it, while you can only reach the empty network from a smaller set of states.

As you look at this, the process more naturally gives you the complete network than a smaller one. So this is a powerful tool for looking at dynamics. With only three nodes and a relatively small number of networks we can solve for this explicitly; as you enrich these kinds of processes they get more complicated, but you can begin to analyze them by simulation. The idea, again, to emphasize: you need more errors to leave the basin of attraction of the complete network than to leave the basin of attraction of the empty network. More generally, to solve these things you need to keep track of the basins of attraction of many states.

There's a theorem by Freidlin and Wentzell from the 1980s on Markov chains that allows one to leverage this and do calculations of how these things work. More generally, this kind of process is going to select some of the networks.
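The Freidlin-Wentzell approach counts, for each transition between stable states, the minimum number of errors ("resistance") needed. A minimal sketch for this example, working on the link-count states, where each add/delete move costs 0 if it's the improving decision and 1 if it requires a tremble (the improving moves are the ones from the diagram discussed earlier):

```python
import heapq

# States are link counts 0..3. Each edge is one add/delete decision:
# cost 0 if that change is improving, cost 1 if it needs a tremble.
# Improving moves in the example: 1 -> 0, 1 -> 2, and 2 -> 3.
EDGES = {
    0: [(1, 1)],          # adding a first link requires an error
    1: [(0, 0), (2, 0)],  # deleting or adding is improving
    2: [(3, 0), (1, 1)],  # adding is improving; deleting is an error
    3: [(2, 1)],          # deleting is an error
}

def resistance(src, dst):
    # Dijkstra: fewest trembles needed on any path from src to dst.
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, s = heapq.heappop(pq)
        if s == dst:
            return d
        if d > dist.get(s, float("inf")):
            continue
        for t, c in EDGES[s]:
            if d + c < dist.get(t, float("inf")):
                dist[t] = d + c
                heapq.heappush(pq, (d + c, t))
    return float("inf")

print(resistance(3, 0))  # 2: two errors to fall from full to empty
print(resistance(0, 3))  # 1: one error is enough to climb to full
```

This matches the basin argument: stochastic stability selects the state whose basin takes the most errors to escape, here the complete network.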

It turns things into a Markov chain, and we know a lot about those; even when it's difficult to get analytic solutions, you can simulate these things. Okay, so what we've done is go beyond pairwise stability to look at some dynamics and stochastic stability. There are other things in the literature that we're not going to touch on much: existence questions; forward-looking behavior, another interesting question we won't have time for. We talked a little bit about transfers, and there are other, richer models of transfers and bargaining to form networks. So there's a series of topics which are important in understanding modeling, and you can get references for them from the reference list here, but I'm not going to spend a whole lot of time on them in this course.