Okay, so now we're going to talk a little bit about random networks and enriching them to start to capture some properties that we actually observe in real networks, and we'll talk about small worlds.

And before I get into that, let me just put in context where we are in terms of the course. Again, we're talking about network formation now, random network models. We had a whole series of properties that we observed in real networks, stylized facts that sometimes emerge in different contexts, and we might begin to think about what kinds of models we would need to match them. Simple Erdős–Rényi networks gave us some insight into why average path lengths might be small.

But one thing they didn't do very well was give an account of clustering. We saw that a lot of the social networks one observes actually have clustering coefficients which look significantly different from what would have happened if we were just working with an Erdős–Rényi random network. So somehow those models are missing features that are actually going on.

So when we look at enriching models, one thing we might want to do is start to generate some clustering. How can it be that we get clustering which is non-trivial? That's going to be one thing we might be interested in. The other thing we saw was that the degree distributions didn't necessarily match the Erdős–Rényi random graphs: we saw these fat tails. Can we generate models that have different tails? Can we generate more flexible, more general classes of models to fit data?

So that's where we're going, and in particular we're going to start by asking something about clustering. Then we'll move on to enriching the model to become richer and richer, and try to fit more things as we go along.

Okay, so an early paper, by Watts and Strogatz in 1998, was basically asking the question: how do we match these two different features of networks, small average path length together with high clustering, in one model?

One important observation is that the Erdős–Rényi model misses clustering because the clustering coefficient is on the order of p, which is going to have to go to 0 unless the average degree becomes very, very large. And, you know, with billions of people in the world, it's not as if we each have billions of neighbors; we have hundreds or thousands of neighbors. So the idea is that we need something which captures the high clustering and yet has a small p.

And so their idea was the following. They started with what's known as a ring lattice, a very particular structure of a network, and then randomly picked some links and rewired them, okay? The idea is that by starting with this lattice structure, you start with a network which has very high clustering but also has a high diameter. And then you just change a few links in it. As you begin to change these links, you actually don't need to change many links in order to get a very low diameter. A few randomly placed links will shrink the diameter of a graph very quickly.

And the average path length as well. And if you don't rewire too many, you sort of keep this high clustering. So let's have a peek at that, and then we can talk a little bit more about some of the conclusions from it.
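
That claim is easy to check numerically. Here's a pure-Python sketch (the sizes n = 200, k = 2, and the count of 10 shortcuts are illustrative; it adds shortcut links on top of the lattice rather than rewiring, a common variant of the construction):

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring lattice: each node linked to its k nearest neighbors on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length, via a BFS from every node."""
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

def add_shortcuts(adj, m, seed=1):
    """Add m extra links between uniformly random non-adjacent pairs."""
    rng = random.Random(seed)
    nodes = list(adj)
    added = 0
    while added < m:
        u, v = rng.sample(nodes, 2)
        if v not in adj[u]:
            adj[u].add(v)
            adj[v].add(u)
            added += 1
    return adj

lattice = ring_lattice(200, 2)
before = avg_path_length(lattice)
after = avg_path_length(add_shortcuts(lattice, 10))
print(before, after)  # the average path length drops sharply
```

With 200 nodes the lattice's average path length starts around 25, and just ten random shortcuts pull it down substantially, while only 10 of the 400 lattice links' worth of clustering structure is disturbed.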

So here's just a picture I drew of a set of nodes: 25 nodes in a ring lattice. Initially, each node is connected to its two immediate neighbors on either side. So if we look here we have node 1, and it's connected to 2 and 3, and also connected to 25 and 24, right? So it's got these connections going here and here, and then each one of the nodes, in terms of the red links, has connections to its nearest neighbors. So each node is connected to its two neighbors on each side.

What that does, in terms of this original lattice, is give you very high clustering. Node 1 is connected to both 2 and 3, and 2 and 3 are connected to each other; 1 is connected to 2 and 25, and 2 and 25 are connected to each other; and so forth. So the clustering is high when you start.
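
You can verify that directly. A small sketch for the 25-node ring lattice in the picture (nodes numbered from 0 here, so node 0 plays the role of node 1):

```python
from itertools import combinations

# Build the 25-node ring lattice with k = 2 neighbors on each side.
n, k = 25, 2
adj = {i: set() for i in range(n)}
for i in range(n):
    for d in range(1, k + 1):
        adj[i].add((i + d) % n)
        adj[(i + d) % n].add(i)

# Node 0's neighbors are 1, 2, 23, 24; count the linked pairs among them.
nbrs = adj[0]
pairs = list(combinations(nbrs, 2))
linked = sum(1 for u, v in pairs if v in adj[u])
print(linked, "of", len(pairs), "neighbor pairs linked")  # 3 of 6 -> clustering 0.5
```

More generally, a ring lattice with k neighbors on each side has clustering 3(k-1)/(2(2k-1)), which is 0.5 at k = 2 and stays bounded away from zero as n grows, in contrast to the Erdős–Rényi clustering of order p.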

And then what you can do, and actually what I've done in this picture, is not rewire the links but just add a few random links. So let's just stick a few random links into the network. And so what happens is, without these random links here, if you wanted to get from node 1 to node 15, your path length would be quite far, right? You'd have to go sort of marching around the circle. Your path length, especially if you expanded this to be a much larger graph, would be quite far.