0:31

At the very beginning of our journey, we learned about neurons, synapses and brain

regions. This was in week number one, when we did

our neuroscience review. Adrian then told you about a class of

descriptive models known as neural encoding models.

And you learned about the spike triggered average as well as covariance analysis.

And you also learned about the Poisson model of spiking, which describes how neurons fire in a stochastic manner. In the following weeks, we covered neural decoding methods, which allowed you to discriminate between stimuli based on neural activity, as well as decode stimuli from populations of neurons. And we learned about information theory

and how it's related to neural coding. In the previous week, we shifted gears and got into mechanistic models; in particular, we looked at single-neuron models. We covered concepts such as the RC circuit model of a membrane, as well as the famous Hodgkin-Huxley model of how the action potential is generated in neurons.

And we ended with simplified neuron models, such as the integrate-and-fire model of a neuron. This leads us to the question of how we can model neurons that are connected to each other in networks.

1:49

So how do neurons connect to form networks? You know the answer: they use synapses. In particular, we are going to focus on chemical synapses, because they're the most common type of synapse found in the brain. What do these chemical synapses do?

Well, as you know, a spike arrives from the first neuron, which we're going to call the presynaptic neuron; the other is the postsynaptic neuron. The spike causes a chemical to be released into the space known as the synaptic cleft, and this chemical in turn binds to receptors on the postsynaptic membrane. That, in turn, causes either an increase or a decrease in the membrane potential of the postsynaptic neuron. How does that happen?

Let's first review what happens in the case of an excitatory synapse.

So in the case of an excitatory synapse, when you have an input spike you get the

neurotransmitter release. In this case it would be glutamate, which

binds to receptors in the post-synaptic membrane, and that in turn causes ion

channels to open. So you could have ion channels that open,

which in turn cause positive ions such as sodium to come inside the cell.

And that in turn is going to cause a depolarization, which basically means you have an increase in the local membrane potential of the neuron, and that excites the cell.

On the other hand, in the case of an inhibitory synapse, you have the input

spike releasing neurotransmitter into the synaptic cleft, and this could be a neurotransmitter such as GABA, acting on GABA_A or GABA_B receptors.

And this binds to receptors, again, in the post-synaptic membrane.

And that in turn causes some ion channels to open, and this could result in either chloride coming in, or positive ions such as potassium leaving the cell. That in turn causes a hyperpolarization, or a decrease in the local membrane potential, indicated by these negative signs over here. And so that's the effect of an inhibitory

synapse. Now, what we want to do is

computationally model the effects of a synapse on the membrane potential V of a

neuron. So, here's a cartoon of what we want to

do. Here's a synapse.

We would like to model the effects of input spikes, as they are transmitted by the synapse, on the membrane potential V of a neuron.

So, how do we do that? I'll let you think about that for a

couple of seconds. Let's start by looking at the RC circuit

model of the membrane, which you heard about in last week's lecture.

As you recall, we were modeling the membrane in terms of a resistance and a capacitance. Here's the membrane voltage; as you recall, there is a net negative charge on the inside compared to the outside of the membrane. And we were also allowing for some current I_e to be injected into this ball, which approximates a neuron. Now here's the circuit diagram for the

same situation and you have both the membrane capacitance and the membrane

resistance shown here, along with the equilibrium potential of the neuron

denoted by E_L. Now, how do we model such a circuit?

Well, if you go back to your physics class in high school, you will recall that the charge held by a capacitor is given by Q = CV; in this case, Q = c_m V, where c_m is the membrane capacitance and V is the voltage across it. Now, if we take the derivative of this equation with respect to time, dQ/dt is nothing but the current coming into the cell, and that is given by c_m dV/dt. Now, this equation, c_m dV/dt = i, can be written in this particular manner by using the fact that we have an input current I_e divided by the area A, which gives the input current per unit area, as well as the current due to the leakage of ions. This is the current, due to ion pumps if you recall, that maintains the equilibrium potential E_L. The equilibrium potential E_L, if you recall, is around minus 70 millivolts, also called the resting potential of the neuron. Now, given this equation, in which c_m dV/dt equals the current coming into the cell.

You can now multiply both sides by the resistance r_m, also called the specific membrane resistance (and the little c_m is the specific membrane capacitance). The equation that you'll get then looks something like this: τ_m dV/dt = -(V - E_L) + I_e R_m. So now what we have is the product r_m c_m, which is called the membrane time constant, τ_m; that in turn is also equal to the total membrane resistance times the total membrane capacitance, big R_m times big C_m. They are related to each other in this particular manner by the surface area of the cell. And so this equation here describes

how the membrane behaves as a function of time as you inject some input current

into the cell. Now what is this equation really telling

us about the membrane? Well, here is time, and here is the voltage as a function of time. And so if you start out at some particular value, let's say at equilibrium, so that is given by E_L.

7:59

Then, if you inject some current into this neuron, this equation tells you that the voltage is going to rise to some particular level, and it will stabilize there as long as you are injecting the same current. That value is the steady-state value, V_ss, at some particular level here. And V_ss, the steady-state voltage, is equal to whatever value we get when we set dV/dt equal to 0. So let's set dV/dt = 0: what you get is -(V - E_L) + I_e R_m = 0, and if you solve for V, you get V_ss = E_L + I_e R_m as the voltage that the cell converges to; that's the steady-state voltage of the cell. Now, if you turn off the input current, setting it equal to 0, then what are you going to get? Well, you're going to get an exponential decay back to the equilibrium potential, E_L, of the cell. The membrane time constant τ_m plays an important role in determining how quickly the cell reacts to changes in the input. So, for example, if τ_m is very large,

then the cell will take a long time to converge to the steady-state value.

And, similarly, when you turn off the input, it will take a long time to

converge back to the equilibrium potential.

On the other hand, if you have a small time constant for the membrane, then the

cell will react quickly to inputs, and it'll converge quickly to the steady

state value. And when you turn that input off, it'll

quickly converge back to the equilibrium potential.
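To make this concrete, the membrane equation can be integrated numerically. Here is a minimal sketch using Euler's method; the function name, the pulse timing, and the parameter values (τ_m = 20 ms, a 10 mV drive) are illustrative assumptions, not values from the lecture.

```python
import numpy as np

def simulate_rc_membrane(input_drive, tau_m=20.0, E_L=-70.0, dt=0.1, T=200.0):
    """Euler integration of tau_m dV/dt = -(V - E_L) + I_e*R_m.
    input_drive(t) should return the product I_e*R_m (in mV) at time t (ms)."""
    t = np.arange(0.0, T, dt)
    V = np.empty_like(t)
    V[0] = E_L                        # start the membrane at rest
    for i in range(1, len(t)):
        dV = (-(V[i - 1] - E_L) + input_drive(t[i - 1])) / tau_m
        V[i] = V[i - 1] + dt * dV
    return t, V

# A 10 mV current step from 20 ms to 120 ms: V relaxes toward
# V_ss = E_L + I_e*R_m = -60 mV, then decays exponentially back
# toward E_L = -70 mV once the input is switched off.
t, V = simulate_rc_membrane(lambda t: 10.0 if 20.0 <= t < 120.0 else 0.0)
```

Making τ_m larger slows both the rise toward V_ss and the decay back to rest, exactly as described above.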

It might be fun to make an analogy here. When you wake up in the morning, you

might find yourself a bit sluggish, and a bit slow to react to new inputs; that's when we could say you have a large time constant. But after you've had your first few cups

of morning coffee, you might find yourself alert and fast.

And one could then say that you have changed your time constant to a tiny value. Okay, so perhaps that analogy was a bit corny. Well, in any case, how do we model the effects of a synapse on the membrane potential V, now that we know how to model the membrane potential using the RC circuit model?

10:39

So what do synapses do? We know that synapses release neurotransmitters, which in turn cause ion channels to open or close, and that in turn changes the membrane potential of the postsynaptic cell. So what we really need to do is to be able to model the opening and closing of ion channels on the membrane.

So given that we have a model of the membrane potential, how do we model the

opening and closing of ion channels? Well, here's a hint: remember the Hodgkin-Huxley model? In the Hodgkin-Huxley model, you had to model the opening and closing of potassium and sodium channels, and you did that by adding additional conductances for those channels.

So can you do something similar for synapses, which in effect also open and

close certain channels? The answer as you might have guessed is

yes, we can model the effects of a synapse on the membrane potential by

using a synaptic conductance. And that is given by g s.

And the other component of the synapse model, besides the conductance g s, is

the reversal potential or the equilibrium potential of the synapse.

And so here is the equation again: we have τ_m dV/dt equals, first, the leak term, as in the previous slides, but then here is the new term, the input coming in from the synapse. So we have the term corresponding to the difference between the current voltage and the equilibrium potential of the synapse, multiplied by the conductance, which is going to change as a function of the inputs being received by the synapse. And finally, of course, we have the input current, which is optional, so if we have an input current, we can model that by adding this additional term. So the important point here is the following.

For the synapse model, we have these two components, g_s and E_s. For an excitatory synapse, you can imagine that E_s is going to be a value higher than the equilibrium potential of the cell, which is going to excite the cell. On the other hand, for an inhibitory synapse, E_s is going to be a value lower than the equilibrium potential, and that in turn is going to decrease the membrane potential.
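As a quick sanity check on the signs in this equation, one can evaluate its right-hand side at rest for the two cases. The helper below is a hypothetical sketch; the lumped, dimensionless g_s·r_m term and the numeric values are assumptions for illustration.

```python
def dV_dt(V, g_s_rm, E_s, tau_m=20.0, E_L=-70.0, Ie_Rm=0.0):
    """Right-hand side of tau_m dV/dt = -(V - E_L) - g_s*r_m*(V - E_s) + I_e*R_m.
    g_s_rm is the (dimensionless) product of synaptic conductance and
    specific membrane resistance; E_s is the synaptic reversal potential (mV)."""
    return (-(V - E_L) - g_s_rm * (V - E_s) + Ie_Rm) / tau_m

# At rest (V = -70 mV), an excitatory synapse (E_s = 0 mV, above rest)
# depolarizes the cell, while an inhibitory one (E_s = -80 mV, below rest)
# hyperpolarizes it.
excitatory = dV_dt(-70.0, g_s_rm=0.5, E_s=0.0)    # positive: V rises
inhibitory = dV_dt(-70.0, g_s_rm=0.5, E_s=-80.0)  # negative: V falls
```

Whether the synaptic term pushes V up or down depends only on whether E_s sits above or below the current membrane potential.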

So how does the synaptic conductance g_s change as a function of the inputs received by the synapse? So you could have these spikes coming in,

and that in turn is going to change the synaptic conductance.

So how do we model the effects of input spikes on the synaptic conductance?

Here's the equation for the synaptic conductance.

It's a product of three different factors which together capture the function of

the synapse. The first factor, g max, is the maximum

conductance associated with that particular synapse.

And that for example is associated with the number of channels that one might

find on the postsynaptic neuron: the more channels there are, the larger the value of g_max. The second term, P_release, is the

probability of release of neurotransmitter, given that you have an

input spike. So once you have an input spike, what is

the probability that neurotransmitters are going to be released into the

synaptic cleft. And the last term, P_s, is the probability of the postsynaptic channels opening: what is the probability that these channels on the postsynaptic side are going to be open, given that neurotransmitter has been released? And that in turn also corresponds to the fraction of channels that are open at any point in time.

16:27

Now, maybe you thought that the differential equation for P_s looked a bit intimidating, or maybe you thought it was a bit confusing, but here's what P_s really looks like as a function of time, given a spike. On the y axis we are plotting P_s, which has been normalized to have a maximum value of 1, and on the x axis we have time, measured in milliseconds. What we are showing is biological data from three different kinds of synapses: the AMPA synapse, the GABA_A synapse, and the NMDA synapse.

What you'll notice is that for the AMPA synapse, the way P_s behaves can be modeled quite well by an exponential function, which we're calling K(t). On the other hand, for the GABA_A and NMDA synapses, the way P_s behaves is fit better by something called the alpha function, which has a peak that occurs after the spike. So there's some amount of delay before the peak occurs, and that's captured by the alpha function, as shown down here. This is the equation for the alpha function, and it has a parameter τ_peak, which allows you to fit the data by shifting the peak relative to the time the input spike occurred. So the spike occurs at time zero, and the peak might occur slightly later, as determined by τ_peak.
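Both kernel shapes can be written down directly. Below is a sketch of the exponential and alpha functions, each normalized to a peak of 1; the 5 ms time constants are placeholder values for illustration, not fits to the data on the slide.

```python
import numpy as np

def exp_kernel(t, tau=5.0):
    """Exponential kernel K(t) = exp(-t/tau) for t >= 0 (AMPA-like P_s):
    jumps to its maximum at the spike time and then decays."""
    tt = np.maximum(t, 0.0)           # clamp so negative t can't overflow exp
    return np.where(t >= 0, np.exp(-tt / tau), 0.0)

def alpha_kernel(t, tau_peak=5.0):
    """Alpha function (t/tau_peak) * exp(1 - t/tau_peak) for t >= 0
    (GABA_A / NMDA-like P_s): rises from 0 and reaches its maximum
    value of 1 at t = tau_peak, i.e. slightly after the spike."""
    tt = np.maximum(t, 0.0)
    return np.where(t >= 0, (tt / tau_peak) * np.exp(1.0 - tt / tau_peak), 0.0)

t = np.arange(0.0, 50.0, 0.1)
# The exponential peaks at the spike time itself; the alpha function
# peaks tau_peak milliseconds later, modeling the delayed rise.
```

Plotting both kernels over `t` reproduces the qualitative difference between the AMPA curve and the GABA_A/NMDA curves described above.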

18:25

We can characterize the input spike train in terms of what is known as the neural response function, ρ(t). That's given by a summation over all the times at which a spike occurred: a sum over all i of δ(t - t_i). So this is basically the delta function: every time you have a spike, you put in a delta function, which is essentially an infinite pulse at the location of that spike. Now, why would you really want to do that?

Well, it turns out that when we do the integral for the filtering, it's quite convenient to have the spike train as one of these summations of delta functions. So basically, this is a technical detail; don't get too worried about it right now.

So suppose that we have a spike train and we would like to model the effect of all

the spikes on this particular neuron. How do we do that? Well, let's first establish what kind of synapse this particular synapse is. Suppose it's something like an AMPA synapse, as we discussed in the previous slide. The AMPA synapse behaves as if it were an exponential function, so we have something that looks like this: this is K(t) as a function of time. And so this can be used as a filter to capture the effect of an input spike on the postsynaptic neuron. So, now we have a filter.

So here is the filtering equation that models how the synaptic conductance changes on the postsynaptic side. Basically, what we are saying is that g_s(t), the synaptic conductance at time t, is essentially nothing but the maximum conductance times a summation of all of these exponential functions added together; and if you like integrals, the summation can be written as this linear filtering equation. And here is your favorite function, ρ(t), the neural response function, where you have these delta functions summed up at the locations where you have spikes.

Now, if you're still confused about this, there's actually a very easy way to interpret the summation, or this integral. So, here is the spike train, and here's what the synaptic conductance g_s is going to look like. Every time you have a spike, you put in one of your K functions, your synaptic filter. Then, when you have another input spike, such as this one, you simply add a copy of the synaptic filter, and you do so for each input spike. And so you're going to get a synaptic conductance that looks something like this. So this is what g_s(t) looks like for this particular input spike train. So that wasn't really too hard, was it?
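This sum-of-kernels picture is straightforward to code. Here is a sketch that builds g_s(t) by adding one exponential (AMPA-like) kernel per spike; the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def synaptic_conductance(spike_times, g_max=1.0, tau=5.0, dt=0.1, T=100.0):
    """g_s(t) = g_max * sum_i K(t - t_i): every spike at time t_i adds
    one copy of the exponential kernel K(t) = exp(-t/tau)."""
    t = np.arange(0.0, T, dt)
    g = np.zeros_like(t)
    for t_i in spike_times:
        after = t >= t_i                      # the kernel is zero before the spike
        g[after] += g_max * np.exp(-(t[after] - t_i) / tau)
    return t, g

# Two closely spaced spikes (10 and 12 ms) summate: just after the second
# spike the conductance exceeds g_max, then decays; the isolated spike
# at 40 ms just adds a single kernel.
t, g = synaptic_conductance([10.0, 12.0, 40.0])
```

Plotting `g` against `t` reproduces the picture described above: one filter copy per spike, with overlapping copies adding up.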

The moral of the story here, of course, is: don't be too intimidated by these types of complex equations. So are you ready now to put everything

that you have learned so far to create a network model?

Now let's do it. Here is a simple example: let's just take two neurons, neuron 1 and neuron 2, and let's connect them together with excitatory synapses. Neuron 1 connects to neuron 2 with this excitatory synapse, and neuron 2 connects to neuron 1 with this excitatory synapse. Now, each of these neurons is governed by our favorite equation; here is the equation for how the membrane potential changes as a function of time.

Here's the time constant for the membrane.

And we're going to model these two neurons as Integrate-and-Fire neurons.

So, this is something you heard about in Adrian's lecture in a previous week.

And so the integrate-and-fire neuron essentially models the membrane potential, and then, when a particular threshold is reached (here's the threshold), the neuron has a spike. So the neuron spikes here and is then reset back to a particular value; the particular value in this case is minus 80 millivolts.

And the synapses are going to be modeled as Alpha synapses.

So you're going to use an alpha function which, essentially, as we saw before, peaks just slightly after zero and then decays back down to zero.

And so, we're going to first look at what happens if we model excitatory synapses, so neuron one excites neuron two and neuron two excites neuron one.

And here is what the behavior of the network looks like for these two neurons

when they're exciting each other. So you can see that neuron one fires

first in this case and then neuron two fires after and so on.

So they basically alternate firing from one to the other.

Now, what will happen if we change the synapses from excitatory to inhibitory? Here's something surprising that happens. We can change the synapses to inhibitory by changing the equilibrium potential, also called the reversal potential of the synapse, to minus 80 millivolts. That's less than the resting potential of the neuron, which is minus 70 millivolts. You can see that when you change the synapses to be inhibitory, we get synchrony, which means the two neurons start firing at the same times; they synchronize with each other. That's a really interesting property that people have been studying in certain brain regions.

Here's an example where a simple model of just two neurons, either exciting each other or inhibiting each other, gives rise to some interesting behaviors that might be of relevance to people trying to model particular circuits in the brain.
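The two-neuron network described above can be sketched in a few lines. The code below couples two integrate-and-fire neurons through conductance-based synapses; to keep it short it uses an exponential synaptic kernel rather than the lecture's alpha function, and all numeric parameters (threshold, reset, drive, coupling strength) are illustrative assumptions. Setting E_s to 0 mV makes the coupling excitatory; setting it to minus 80 mV makes it inhibitory.

```python
import numpy as np

def two_neuron_network(E_s, T=500.0, dt=0.1, tau_m=20.0, E_L=-70.0,
                       V_th=-54.0, V_reset=-80.0, g_jump=0.5,
                       tau_s=10.0, drive=20.0):
    """Two integrate-and-fire neurons, each driven by a constant input
    (drive, in mV, standing in for I_e*R_m) and coupled through a
    conductance-based synapse with reversal potential E_s (mV).
    Each spike in one neuron kicks up the synaptic conductance seen by
    the OTHER neuron; the conductance then decays exponentially."""
    n_steps = int(T / dt)
    V = np.array([E_L + 5.0, E_L])   # slightly different initial conditions
    g = np.zeros(2)                  # g_s * r_m onto each neuron (dimensionless)
    spikes = ([], [])
    decay = np.exp(-dt / tau_s)
    for i in range(n_steps):
        g *= decay                   # conductances decay between spikes
        dV = (-(V - E_L) - g * (V - E_s) + drive) / tau_m
        V = V + dt * dV
        for k in (0, 1):
            if V[k] >= V_th:         # threshold crossed: spike and reset
                spikes[k].append(i * dt)
                V[k] = V_reset
                g[1 - k] += g_jump   # kick the conductance onto the other neuron
    return spikes

exc = two_neuron_network(E_s=0.0)    # mutually excitatory coupling
inh = two_neuron_network(E_s=-80.0)  # mutually inhibitory coupling
```

Comparing the two spike-time lists is then one way to look for the alternating versus synchronous firing patterns discussed above, though the exact pattern will depend on these parameter choices.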

Okay, great. That wraps up this particular lecture segment. In the next lecture, we will look at how we can go from spiking

networks to networks based on firing rates.

And this, as we'll see, makes it much easier to simulate large networks of

neurons. So, until then, goodbye and ciao.