Hi folks. So, we're back again and let's talk a little bit

more about solving this SIS model and getting

explicit expressions for the infection rates

when there exists a positive steady state infection rate.

And so, if you recall when we were looking at solutions,

we could find the steady state in terms of theta, which is

the fraction of people that you would meet randomly who are infected.

Then it was given by θ = Σ P(d) λθd^2 / [(λθd + 1) E[d]], and we can simplify this

by dividing each side by θ, and we end up with 1 = Σ P(d) λd^2 / [(λθd + 1) E[d]].

And so solving this for the theta would then give us a solution.

So, let's first look at

a regular network example, where everybody has exactly the same degree.

Then P(d) is just going to put weight on

some particular degree which is just the expected degree.

And so then things simplify and we end up with

1 = Σ P(d) λd^2 / [(λθd + 1) E[d]].

So, that's going to simplify and we'll end up with an expression which looks like this,

1 = λE[d] / (λθE[d] + 1).

So, just plugging in that everybody

has the same degree, which is just the expected degree,

then we can get rid of this Σ P(d),

the d^2 is going to become E[d]^2,

and one factor of E[d] cancels against

the E[d] in the denominator, and we end up with this expression here, okay?

So, when we solve that we end up with a very simple expression.

Now, we can rearrange this in terms of theta and we

end up with θ = 1 - 1/(λE[d]).

So, if everybody had the same degree then we can solve explicitly for what

the steady state expression is going to be for the infection rate.

And we notice now that this is increasing in λE[d], and in order for this thing to

be at least zero we need λE[d]

to be greater than or equal to one,

and strictly greater than one for it to be positive.

So, that was what we found in regular networks and

then this thing scales up with lambda and with

the expected degree, so it's increasing in the level of λE[d],

which is effectively just the infection rate.

So this is one where it goes back to the original model we looked at where we just had

random meetings and this is just increasing the rate of random meetings

and things are just proportional to the lambda parameter.
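The closed form above is easy to check numerically. This is a minimal sketch, not from the lecture: `theta_regular` is a hypothetical helper that returns the steady state θ = 1 - 1/(λE[d]) for a regular network, or zero below the threshold λE[d] = 1.

```python
# Sketch: steady-state infection rate in a regular network (SIS model),
# using the closed form theta = 1 - 1/(lambda * E[d]) derived above.
def theta_regular(lam, expected_degree):
    """Steady-state neighbor infection rate; 0 below the threshold."""
    x = lam * expected_degree
    return max(0.0, 1.0 - 1.0 / x) if x > 0 else 0.0

# Below the threshold lambda*E[d] = 1 the only steady state is zero infection.
print(theta_regular(0.1, 5))   # lambda*E[d] = 0.5 -> 0.0
print(theta_regular(0.5, 5))   # lambda*E[d] = 2.5 -> 0.6
```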

Okay. So, let's have a look at a more interesting degree distribution,

one where we have this power law,

and if you plug in the power law

and integrate this out

then you'll end up with an expression that you can solve for theta.

And, in that case if you want to go through and verify,

you can just integrate this and then solve for

theta or you can take my word for it and theta

will come out to be θ = 1 / (λ(e^(1/λ) - 1)).

So, what do we end up with?

We end up with theta having an expression which

depends on lambda and we can plot that out.

So, if you plot that function out,

so we're just looking at this function right here,

and plotting this function as a function of lambda.

So, how does it vary with lambda?

And what we see is it's very rapidly increasing.

So, as lambda increases we get a very rapid increase and then eventually it asymptotes.

It can't go above one,

but we're getting a very high neighbor infection rate

as lambda increases because then we've got these very high degree nodes.

They become infected, they infect others and so forth,

and as lambda is increasing,

we get a very rapid infection increase.
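To see that rapid rise and the asymptote at one, you can evaluate the power-law expression above directly. A small sketch, assuming the solution θ = 1/(λ(e^(1/λ) - 1)) just stated; `theta_power_law` is a hypothetical name:

```python
import math

# Sketch: theta(lambda) for the power-law degree distribution,
# theta = 1 / (lambda * (e^(1/lambda) - 1)).
def theta_power_law(lam):
    # expm1(x) computes e^x - 1 accurately
    return 1.0 / (lam * math.expm1(1.0 / lam))

# theta rises quickly in lambda and approaches 1 from below.
for lam in [0.5, 1.0, 2.0, 5.0]:
    print(f"lambda={lam}: theta={theta_power_law(lam):.3f}")
```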

Okay, what can we say about how these things change with the degree distribution?

So, if we want to do comparisons and say okay,

if we go from a regular network to a power law, or from regular to an Erdos-Renyi sort of graph,

how is that going to change?

And one way we can do that is we can look at this expression

and ask how this right hand side changes with P(d).

Right, because remember the way that we're solving this,

we look at theta here,

we have this right hand side,

which is H(θ) and we're looking for

the solution to this thing and if we can say that H(θ) goes up,

right, so if we do something that changes H(θ) in a way that goes up then

that's going to move the solution to this equation upwards.

So, for any kind of comparative static where we're making changes

to the distribution in a way that

increases this overall expression on the right hand side, so that each theta gets

a higher value, we can say something about what the resulting change is in theta.

So, let's see what we can say about how this right hand side moves.
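For a general degree distribution you usually can't solve this equation in closed form, but since the right hand side is decreasing in θ, bisection works. A sketch, not from the lecture: `solve_theta` is a hypothetical helper, and `P` is a dict mapping degree to probability.

```python
# Sketch: numerically solve the steady-state condition
#   1 = sum_d P(d) * lam * d^2 / ((lam*theta*d + 1) * E[d])
# by bisection on theta in (0, 1].
def solve_theta(P, lam, tol=1e-10):
    Ed = sum(d * p for d, p in P.items())            # E[d]
    def rhs(theta):                                   # right-hand side above
        return sum(p * lam * d * d / ((lam * theta * d + 1) * Ed)
                   for d, p in P.items())
    if rhs(0.0) <= 1.0:                               # below threshold: theta = 0
        return 0.0
    lo, hi = 0.0, 1.0                                 # rhs(1) < 1, so a root exists
    while hi - lo > tol:                              # rhs is decreasing in theta
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rhs(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Regular network check: matches the closed form 1 - 1/(lam*E[d]).
print(solve_theta({5: 1.0}, 0.5))   # ~0.6
```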

Okay. So first thing,

if you look at this function,

what we're doing is we're weighting it by

different degrees and then we've got some function

here that we're taking an expectation over with respect to the different degrees.

And, what can we say about this function right here?

How does it behave?

And one thing we notice,

so we're taking expectations with respect to degrees,

this thing is increasing in degrees, okay.

So higher degree nodes are going to have

higher relative expected infection rates and basically that's what we're getting here.

This is, remember, our old ρ(d), and so,

this thing is going to tend to be higher for higher degree nodes.

So, more connected nodes are going to tend to have more contact,

they're going to tend to be more infected, so this overall function is increasing

in d. So that tells us that any distribution which

puts more weight on higher degree nodes

is going to have relatively higher infection rates.

That's going to move this whole function up.

That's going to give us a higher solution and a higher steady state.

Okay. So one thing we can say is that if we take a distribution and then shift it,

so that we put more weight on higher degree nodes,

that's known as first order stochastic dominance

when you are comparing two distributions.

So, if we have two distributions and we move towards

higher degree weights, then that right hand side of this thing,

this H(θ), is going to increase, so at

every theta we're basically going to have higher infection rates.

That's going to lead to a higher steady state.

Okay. So, basically putting weights on

higher degree nodes is going to increase infection.

Okay. So, that's relatively straightforward and

basically we're just shifting the weight towards higher degree nodes.

So when we do that, we end up shifting this H function

higher at every theta, and that takes us from the steady state with P. So,

if we go to P prime,

which increases the weight on higher degree nodes,

we're going to end up with an increased steady state

so the theta that solves this is going to be higher.
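The claim that a first order stochastic dominance shift raises the right hand side at every theta can be checked numerically for particular distributions. A sketch with hypothetical example distributions; `rhs` evaluates Σ P(d) λθd^2 / ((λθd + 1) E[d]):

```python
# Sketch: a first-order stochastic dominance shift raises H(theta)
# pointwise. P_prime shifts weight from degree 2 toward degree 6.
def rhs(P, lam, theta):
    Ed = sum(d * p for d, p in P.items())   # E[d]
    return sum(p * lam * theta * d * d / ((lam * theta * d + 1) * Ed)
               for d, p in P.items())

P       = {2: 0.5, 6: 0.5}   # hypothetical baseline, mean degree 4
P_prime = {2: 0.2, 6: 0.8}   # FOSD shift: more weight on high degree

lam = 0.3
for theta in [0.2, 0.5, 0.8]:
    assert rhs(P_prime, lam, theta) > rhs(P, lam, theta)
print("H(theta) is higher under P' at every theta checked")
```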

Okay. So, let's take another look at this.

So that just says that we are shifting weight towards

higher degree nodes in a very well-defined sense,

this notion of first order stochastic dominance.

And just to, sort of,

give you a feeling for first order stochastic dominance,

if you're dealing with a frequency distribution

where you've got different degrees down here,

say one, two, three, and so on,

so you've got some degree distribution.

First order stochastic dominance shifts are ones where we're

essentially moving the distribution to the right so we're putting

more weights on higher degree nodes and that's

what's moving us up and having us have more interactions,

higher infection rates, everybody gets a higher steady state of infection.

So, you know,

when you look at the world and you have increased travel or

increased contact with people,

you're going to have increased spread of things like the flu and other diseases.

In this particular model,

it's something you can catch repeatedly, and increased contacts,

an increased number of contacts per individual, are going to

increase the steady state infection rate.

Okay, that's fairly intuitive, fairly simple.

Let's do a little more nuanced calculation now and again we're looking at this function,

this ρ(d) function,

so the infection rate for different degrees.

And if we look at that function,

it's also a function which is convex in d, okay.

So if you look at this function,

it's d squared over something which is linear in d, and this

is actually a convex function in d. So, in fact,

when you look at what this function inside the expectation looks like,

it is not only increasing,

this function is increasing and convex, okay.
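You can check the increasing-and-convex claim numerically by looking at first and second differences of that d-squared-over-linear term. A sketch, with hypothetical parameter values:

```python
# Sketch: check that f(d) = lam*theta*d^2 / (lam*theta*d + 1)
# is increasing and convex in d (positive first and second differences).
lam, theta = 0.3, 0.5   # hypothetical values
f = lambda d: lam * theta * d * d / (lam * theta * d + 1)

for d in range(1, 10):
    assert f(d + 1) > f(d)                         # increasing
    assert f(d + 1) - 2 * f(d) + f(d - 1) > 0      # convex (second difference)
print("f is increasing and convex on the range checked")
```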

So, that tells us, okay, first of all,

if we put more weight on higher degrees we're going to end up with

higher values coming out of this.

Right, so here is degree on one axis,

here is this function of d,

and as we put weight on higher degrees we're getting higher values.

That was what we just showed, but also, even if you took a mean preserving spread:

suppose instead of putting all your weight on some particular value E[d],

so we start with a regular network, and instead we spread it out,

so that now we have half our weight on something lower,

half our weight on something higher,

but we move these equal distances.

If we keep the same mean then when we take the expectation over the higher and lower,

we're going to end up with a higher expected value than what we started with.

Okay. So, the idea is if you're taking an expectation of

a convex function and you do a mean preserving spread,

so you move some weight higher and some weight lower,

when you're moving lower, well,

the function falls off at a decreasing rate,

but when you're moving higher, it rises at an increasing rate.

So the convexity of this function means that the expectation is higher.

So, if you take a mean preserving spread,

so if you start with some P and you

take some expectation with respect to P of some function,

in this case our ρ(d),

and now you take a mean preserving spread, P prime,

and you take the same expectation of ρ(d). If this is a mean preserving spread,

so you've kept the same mean but you've spread out and put more weight on the extremes,

then what you're going to end up with is a higher expectation

and that's going to lead then to a higher value of the right hand side here, okay.

So this is a form of what's known as

second order stochastic dominance, where you fix the mean.

So, taking mean preserving spreads on convex functions gets you a higher expectation.

So, since this is a convex function,

we can say that a mean preserving spread is also

going to increase things, and this is why:

even though you're losing some degree on some nodes,

you're increasing it on other nodes,

and the fact that those are hubs actually increases the expectation overall.

So, if P prime is a mean preserving spread of P,

then the right hand side increases at every theta and so what happens?

Well, it increases everywhere.

We end up with a higher steady state, okay.
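The Jensen's-inequality step can be checked the same way: under a mean preserving spread of the degrees, the expectation of this convex term goes up, and the E[d] in the denominator is unchanged since the mean is fixed. A sketch with hypothetical numbers:

```python
# Sketch: Jensen's inequality for the convex f(d) = lam*theta*d^2/(lam*theta*d + 1).
# A mean-preserving spread of the degree distribution raises E[f(d)].
lam, theta = 0.3, 0.5        # hypothetical values
f = lambda d: lam * theta * d * d / (lam * theta * d + 1)

P       = {4: 1.0}           # regular network, mean degree 4
P_prime = {2: 0.5, 6: 0.5}   # mean-preserving spread, same mean 4

E  = sum(p * f(d) for d, p in P.items())
E2 = sum(p * f(d) for d, p in P_prime.items())
print(E2 > E)   # True: the spread raises the expectation
```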

So, either way that we went through things: with a mean preserving spread,

we get more high degree nodes and more low degree nodes,

but the higher degree nodes are more prone to infection,

and neighbors are more likely to be high degree.

So, either first order stochastic dominance or mean preserving spreads,

both of those lead to increases in the infection rate.

So here we are now able to say something about

the degree distributions of interactions and how infection rates depend on them,

so it's a nice model in terms of allowing us to be

able to do these kinds of calculations.