0:17

In the second part of the lecture, we're going to discuss practical aspects.

How can we solve stochastic mathematical models?

In the first part,

we talked about what we meant by a stochastic mathematical model.

And we talked about how conceptually

one has to think of things in terms of probabilities of reactions occurring,

rather than rates at which reactions occur.

Now we want to look at it from a practical point of view.

How do we solve stochastic mathematical models?

0:43

I'm going to introduce a couple of different approaches for

solving stochastic mathematical models.

First, a simple, what you might call a brute-force method for

solving these stochastic models.

I call this a stochastic Euler's method, because it's analogous to what we saw with

Euler's method for solving ordinary differential equations.

And then we want to introduce a more sophisticated one

which is known as Gillespie's algorithm.

And then we're going to finish this lecture with a description of Gillespie's algorithm.

1:14

And we're going to show some MATLAB code in order to solve these stochastic models.

And the application that we're going to use is stochastic gating of

the Hodgkin-Huxley potassium channel.

In the previous lectures on the Hodgkin-Huxley model of

the neuronal action potential, we discussed in detail

how those investigators developed their models for the potassium channel.

And here we're going to show how one can implement a stochastic version of that model, one that incorporates the random opening and closing of individual channels.

1:49

We'll start by reviewing one of the most important concepts that we discussed in

the last lecture.

Remember, we talked about the fact that the key in stochastic models is to convert from the rates at which reactions occur

into probabilities that reactions might occur.

A rate, which we are used to thinking of with ordinary differential equation models, is expressed as the number of transitions per unit time, for instance, reactions per second.

Now for stochastic models, we need to think in terms of probabilities.

Meaning, what's the chance that a reaction is going to occur

in a given amount of time?

2:21

And in the last lecture we went through this procedure by which one can convert

from rates into probabilities.

And we're not going to go over this in detail again, but we had two steps here,

one is to convert from concentration per time to molecules per time.

And then the second step would be to convert from molecules per time

into a probability that an event will occur in a given time interval, and

this procedure here is going to be essential

in our development of stochastic mathematical models.
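As a reminder of how that conversion works, here is a minimal Python sketch of the two steps; the rate, volume, and time step values are made up for illustration:

```python
# Two-step conversion from a reaction rate to a per-time-step probability.
# All numbers here are made up for illustration.
N_A = 6.022e23            # Avogadro's number, molecules per mole

rate_conc = 1e-9          # rate in mol/(L*s): concentration per unit time
volume = 1e-15            # reaction volume in liters

# Step 1: concentration per time -> molecules per time
rate_molecules = rate_conc * volume * N_A     # molecules per second

# Step 2: molecules per time -> probability of an event in a small interval dt
dt = 1e-6                 # time step in seconds
p_reaction = rate_molecules * dt              # must come out much less than 1

print(rate_molecules, p_reaction)
```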

2:49

The other important concept I want to review goes back to our lectures on the Hodgkin-Huxley model of the action potential, the ordinary differential equation model, which is how Hodgkin and Huxley developed their model of potassium conductance, which then led to their model of potassium current.

3:16

And one of the things we discussed when we talked about

the Hodgkin-Huxley model is that those investigators noticed that changing

the voltage changes both the steady-state level of the potassium conductance, and

the rate at which it goes up.

In other words, it goes up faster at plus 60 millivolts than it does at

3:35

these other voltages.

They also noticed that the time course of the potassium conductance increase is similar to an exponential raised to a power, and

they specifically thought it looked like the fourth power.

So this is what happens: this blue curve is what you get with an exponential raised to the fourth power.

4:00

And Hodgkin and Huxley deduced that these facts suggested the following model, where they introduced a variable n, a variable between 0 and 1, representing the fraction of particles that are in a permissive state.

And they said that the potassium conductance was proportional to n raised to the fourth power.

In other words, you had four particles, and all four of them had to be in the permissive state in order for the membrane to allow potassium current through.

So, in their model potassium conductance was equal to some maximum

potassium conductance times n raised to the fourth power.

And they simulated the transitions between the non-permissive

state 1 minus n and the permissive state n, with these variables alpha and

beta, where in general alpha and beta are functions of voltage.
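Since alpha and beta at a fixed voltage are just constants, this gating equation can be integrated with Euler's method in a few lines. Here is a minimal Python sketch; the alpha, beta, and maximal conductance values are illustrative, not Hodgkin and Huxley's fitted parameters:

```python
# Euler integration of Hodgkin and Huxley's gating equation
#   dn/dt = alpha * (1 - n) - beta * n,  with  g_K = g_K_max * n**4.
# alpha, beta, and g_K_max are illustrative constants, not the fitted values.
alpha, beta = 0.5, 0.1    # per-millisecond rates at some fixed voltage
g_K_max = 36.0            # maximal potassium conductance (illustrative)

n = 0.0                   # no particles permissive at the start
dt = 0.01                 # ms
for _ in range(int(50 / dt)):                 # integrate for 50 ms
    n += dt * (alpha * (1 - n) - beta * n)

n_inf = alpha / (alpha + beta)                # analytical steady state
print(n, n_inf, g_K_max * n ** 4)             # n should settle near n_inf
```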

4:54

And then remember what I said just a minute ago: your variable n, the gating variable, because it represents a fraction of particles in the permissive state, will always be some number between 0 and 1.

This is just to review what we learned before about the potassium conductance model from Hodgkin and Huxley, so that we can introduce a stochastic

model for the gating of a single potassium channel.

5:32

It looks something like this, which looks pretty complicated, but

as we'll see it's not nearly as complicated as you might think.

So you can't just say that the channel can be closed or open.

We have to posit several different closed states here which we call C0,

closed 0, closed 1, closed 2, closed 3, and then the open state.

5:55

Why do we have to introduce all these states here?

Well, the idea here, and the reason we number them as we did 0, 1,

2, and 3, is that this state over here, C0,

is when we have zero out of four of our particles in the permissive state.

This is for one out of four permissive.

This is for two out of four permissive, et cetera, and then as we just discussed in

the previous slide, when all four particles are in the permissive state then

that is when the channel opens, and that is a consequence of this equation here,

where the conductance is proportional to n raised to the fourth power.

6:30

In other words, you have four gates on your potassium channel.

All four of them have to move into the permissive state in order for

the channel to open.

Now, what are the rates that govern the transition from C0 to C1,

from C1 to C2, et cetera?

6:48

It's actually something that we can get just based

on knowing how each particle transitions from permissive to non-permissive.

This is what we had in the deterministic Hodgkin-Huxley model, where the transition

from the non-permissive state, 1 minus n, to the permissive state, n, occurs at a rate alpha, and the transition in the other direction occurs at a rate beta.

7:09

So, what that tells us is that, when we're transitioning from C0 to C1,

the overall rate is 4 times alpha.

The reason we have a 4 in front of it is because when zero out of four are permissive, that means there are four particles that might be able to transition.

That's why we multiply this one by 4.

Here there are three particles that might be able to transition.

So it's 3 alpha, and then 2 alpha, and

then 1 alpha, and then it's the opposite going back in the other direction.

7:36

When all four particles are in the permissive state, then you have four

particles that could possibly transition back into the non-permissive state.

So that's why we multiply beta by 4, and then over here we multiply beta by 1.
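These propensities are easy to tabulate. A small Python sketch, with made-up alpha and beta values, that numbers the states by how many particles are permissive:

```python
# Transition propensities for the five-state scheme C0, C1, C2, C3, O.
# State k = number of permissive particles, so state 4 is the open state.
# Forward rate out of state k is (4 - k)*alpha; backward rate is k*beta.
alpha, beta = 0.5, 0.1    # illustrative rates (really functions of voltage)

def propensities(k):
    """Return (forward, backward) transition propensities for state k."""
    return (4 - k) * alpha, k * beta

for k in range(5):
    print(k, propensities(k))   # C0: (4*alpha, 0) ... O: (0, 4*beta)
```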

Now that we've established that this is our model of the channel, converting this

into a stochastic gating model in this case is relatively straightforward.

If our model of a single potassium channel looks like this,

as we just discussed in the last slide, we can write down a simple procedure for

stochastically simulating channel gating that looks like this.

8:18

At any given time, the channel's going to be in a particular state.

Let's just assume for the sake of argument that,

at this moment, the channel is in state C1.

Extending this to all the other states is straightforward but, at any given time,

you're going to be in a particular state.

Let's say that we're in C1.

8:36

What you do in your algorithm next is you compute the probabilities that you are going to transition out of that state.

Now the probability that you're going to go

from C1 to C2 is equal to 3 times alpha times your time step.

Remember, this is what we discussed in the previous lecture: the probability that you're going to have a transition in a particular amount of time is whatever that propensity is times your time step.

So, the probability you're going to go from C1 to C2 is dt times 3 alpha.

The probability that you're going to go from C1 to C0 is equal to dt times beta.

9:14

Then you pick some uniformly distributed random numbers.

If a uniformly distributed random number is less than the transition probability,

then the channel is going to change states.

In other words, if you pick a random number that's less than dt times 3 alpha,

that's going to give you a transition from C1 to C2.

If you pick a uniformly distributed random number that's less than dt times beta,

you're going to transition from C1 to C0.

Now you might be saying to yourself, well, what if you pick random numbers that tell you it's going to transition both to C2 and to C0?

That's obviously impossible for both of those to occur in dt,

and that tells you something about what sort of time step you have to choose for

an algorithm like this.

You need to choose dt very, very small, so that the probability of both transitions occurring becomes vanishingly small.

And so I think the rule of thumb that a lot of people use for these sorts of algorithms is that you want the probability of any single transition occurring to be much less than 1%.

Therefore, the probability that multiple transitions will be predicted to occur

is very, very, very small.

10:23

Once you do this, you update the state if necessary.

So, you pick a random number.

You say, okay, maybe I'm going to have a transition from C1 to C2, or maybe I'm going to have a transition from C1 to C0.

Most of the time I'm not going to have either transition.

10:39

You update it if you need to, you record your results, and

then you move on to the next time step.

This is actually a rather straightforward algorithm for

simulating the stochastic gating of this potassium channel.
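The whole procedure can be sketched in a few lines. The lecture shows MATLAB code; this is a minimal Python sketch with made-up, voltage-independent alpha and beta values, and it compares one uniform draw against disjoint intervals rather than drawing a separate random number per transition, which avoids ever predicting two transitions at once:

```python
import random

# Stochastic Euler's method for the five-state potassium channel (a sketch;
# alpha, beta, and all numbers here are illustrative, not the lecture's values).
alpha, beta = 0.5, 0.1     # per-millisecond transition rates, held constant
dt = 0.004                 # ms; small enough that every transition probability << 1

def step(state, rng=random.random):
    """Advance one time step; state = number of permissive gates (0 to 4)."""
    p_up = (4 - state) * alpha * dt    # probability of gaining a permissive gate
    p_down = state * beta * dt         # probability of losing one
    r = rng()                          # one uniform random number in [0, 1)
    if r < p_up:
        return state + 1
    elif r < p_up + p_down:            # disjoint interval for the downward move
        return state - 1
    return state                       # most time steps: no transition at all

random.seed(1)
state = 0                              # start fully closed, in C0
open_time = 0.0
for _ in range(int(50 / dt)):          # simulate a 50 ms voltage step
    state = step(state)
    if state == 4:                     # state 4 is the open state
        open_time += dt
print(open_time / 50)                  # fraction of time the channel was open
```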

11:01

We're going to simulate a voltage clamp step from minus 60 millivolts

to 0 millivolts.

Remember, we discussed these sorts of experiments in our

lectures on the Hodgkin-Huxley model where you hold the axonal membrane

at minus 60 millivolts, and then you depolarize to 0 millivolts, and

the potassium channels will open.

In this case it's not enough just to run the simulation once.

You have to run it multiple times, over and over again, because when you run the simulation on one channel you'll see the single channel opening and closing randomly.

And this is what the output looks like when you run multiple trials with

a single channel.

I think the depolarization occurs right around here maybe after one millisecond or

so, and sometimes there's a long delay before it opens,

other times there's a short delay.

You can also see that how long it's open for is randomly distributed.

Sometimes it's open for a long time, sometimes it's only open for

a short time, and occasionally you might not get any openings at all in this time interval; that's what we see with this cyan one here.

So what each one of these traces is showing is either the closed state, which is the lower level, or the open state, which is the upper level. And by convention we denote this as little i K; it was big I K for the current through all the potassium channels, and little i K tells us the current through one channel. And we can see that individual channels in this case are opening and closing at random.

12:24

We can also compare the results of this model with what we get

from the deterministic or ordinary differential equation model.

If we average 100 of these voltage clamp steps,

we get something that looks like this, an increase in

12:38

potassium current as a function of time, with a lot of noise on it.

And the noise reflects the randomness, the fact that the channels open and close at random, but this looks very much like what we see with the ordinary differential equation model solution, where the current goes up smoothly as a function of time.

So as our number of trials with a single channel gets bigger, and bigger, and bigger, our stochastic solution is going to approach our deterministic solution, and it has to, because we're essentially simulating the same model here.

It's just that in one case we're incorporating some randomness, and in the other case we're not.

13:33

But one important aspect of this very simple algorithm

is that during most intervals, no transitions occur.

In the examples I showed on the last slide, of the potassium channel that was

opening and closing stochastically, the time step I used was four microseconds.

And during most of these four-microsecond intervals, if the channel starts in a particular state, at the end of that four-microsecond interval it's going to remain in that state.

13:58

So then we can ask, well, what if, instead of calculating for each time step whether or not a random event occurs, we instead computed the interval until the next random event?

Now, to think about how one might implement an algorithm like this, we need to remember some things from probability theory.

14:18

If stochastic transitions occur on average n times per second then

the average interval between transitions is going to be 1 over n seconds.

In other words, if our transitions are occurring on average ten times per second, then our average interval is going to be one-tenth of a second, or 100 milliseconds, between events, and that's what's required to get ten transitions on average per second.

Furthermore, we also know that these intervals are going to be exponentially

distributed, and what I mean by that is if you collected lots of intervals,

one after the other, after the other, after the other and

you plotted a histogram of these intervals, it would look like this,

where very, very long intervals would become more and more rare.

Short intervals would be more common and the mean value you would have

would be equal to 1 over n, and it would be indicated something like this.

And the histogram, going from the shorter intervals to the longer intervals, would follow the shape of an exponential.

And so the intervals would be exponentially distributed, with a mean value of 1 over n in this case.
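You can check this relationship numerically. A short Python sketch, drawing intervals from an exponential distribution with an assumed rate of n = 10 events per second:

```python
import random

# If events occur on average n times per second, the intervals between events
# are exponentially distributed with mean 1/n seconds.
random.seed(0)
n = 10.0                                    # average events per second
intervals = [random.expovariate(n) for _ in range(100000)]

mean_interval = sum(intervals) / len(intervals)
print(mean_interval)                        # close to 1/n = 0.1 seconds
```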

15:39

Why do we call this Gillespie's algorithm?

Well, as you can probably guess,

it's because it was invented by a man named Gillespie.

And this is probably still the most commonly used stochastic algorithm for

solving these sorts of systems.

It comes from this paper here, published all the way back in 1977.

That's a mark that you've published something very influential, an important piece of work, when people are still talking about it all these years down the line.

So, Daniel T. Gillespie published this paper in 1977,

in the Journal of Physical Chemistry, that described this algorithm that people still commonly use these days.

Gillespie's algorithm is similar to the stochastic Euler's method that we discussed before. It's a little bit more complex, but in return for this complexity it's a lot faster.

Gillespie's algorithm involves selecting two random variables at each time step.

And I'll go through the process, so we can see how this works, and how this algorithm can be quite efficient.

16:39

First thing you do is you compute the overall propensity for

any reaction to occur.

We're going to go back to that same example with the stochastic gating of

the potassium channel, and we're going to assume that

at the current time step the potassium channel is in state C1.

16:54

And if it's in state C1, there are two transitions that are possible, right?

It can either go from C1 to C2, which has a propensity of 3 alpha, or

it can go from C1 back to C0, which has a propensity of beta.

So the overall propensity is the sum of these two, 3 alpha plus beta.

The next thing you do with Gillespie's algorithm is you select a random number from an exponential distribution with a mean of 1 over the propensity.

And what this random number from the exponential distribution tells you

is the interval until the next reaction occurs.

17:30

And then the third thing you have to do is select another random number.

In this case it's a uniformly distributed random number which we're

going to call little r, and little r will tell you which reaction occurs.

So in this case, you start in state C1, you can either move to C2 or you can

move to C0 and we discussed the overall propensity for a reaction to occur.

We select the interval that tells us when the next reaction is going to occur,

but we don't know if it's going to be moving from left to right, or

moving from right to left in this case.

The second uniformly-distributed random number, little r,

tells us which reaction occurs.

And in this case, it depends on the relative

magnitudes of this propensity beta and this propensity 3 alpha.

So if little r is less than beta over beta plus 3 alpha,

that tells us we have a C1 to C0 transition.

On the other hand if it's between this ratio, beta over

beta plus 3 alpha, and 1, that tells us we have a C1 to C2 transition.

Then the fourth thing we do is we update the system state and repeat, moving on to the next iteration of Gillespie's algorithm.
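Putting the four steps together, here is a minimal Python sketch of Gillespie's algorithm for the five-state channel; alpha and beta are made-up constants rather than the voltage-dependent Hodgkin-Huxley rates:

```python
import random

# Gillespie's algorithm for the five-state potassium channel (a sketch; alpha
# and beta are made-up constants, not the voltage-dependent rates).
alpha, beta = 0.5, 0.1                 # per-millisecond rates

random.seed(2)
t, state = 0.0, 1                      # start in C1 for concreteness
t_end = 50.0                           # simulate 50 ms
while t < t_end:
    # 1) overall propensity for any reaction to occur from the current state
    a_up = (4 - state) * alpha         # propensity to gain a permissive gate
    a_down = state * beta              # propensity to lose one
    a_total = a_up + a_down
    # 2) exponential random interval until the next reaction, mean 1/a_total
    t += random.expovariate(a_total)
    # 3) uniform random number r picks which reaction, by relative propensity
    r = random.random()
    if r < a_down / a_total:
        state -= 1                     # e.g. the C1-to-C0 transition
    else:
        state += 1                     # e.g. the C1-to-C2 transition
    # 4) the state is updated, and the loop repeats for the next iteration
print(state)                           # channel state (0 through 4) at the end
```

Note that, unlike the stochastic Euler's method, no time step is wasted on intervals where nothing happens: each pass through the loop jumps straight to the next reaction.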

18:43

In summary then, what we've seen is that stochastic mathematical

models are simulated with algorithms that generate random numbers, and these random

numbers tell you whether a reaction occurs or when the reaction occurs.