Hello. We are now onto the fifth lecture on mathematical models of action potentials. This lecture is a little more conceptual. Rather than getting into the details of data, or into the details of how you build a mathematical model or how you simulate with one, the theme is something we covered a little with the cell cycle model: the idea of phenomenology versus mechanism. We talked about the difference between phenomenological representations of processes and mechanistic representations of processes back in the cell cycle lectures, and now we're going to talk about this some more. The question we want to ask in this case is: is the Hodgkin-Huxley model a mechanistic model, or is it a phenomenological model? When we discuss this, we'll see that some aspects of the Hodgkin-Huxley model are clearly mechanistic, but other parts of it appear to be more phenomenological. I think that's true of a lot of models: the answer is both. There are mechanistic aspects and phenomenological aspects. Then we're going to ask this question: if you know what the mechanism is, can a phenomenological model still be useful? What we discussed in the cell cycle lectures is that models often evolve such that they start out phenomenological, and then, when more biological details are obtained, more mechanistic representations of those processes are used. So you might think it never goes in the other direction. We're going to discuss an example where it went the other direction, the Fitzhugh-Nagumo model. What I want to argue is that, usually, it goes in one direction: phenomenological representations begin when you don't know the details of the processes, and those usually become more mechanistic over time. But moving in the other direction can, in some cases, also be useful. Sometimes phenomenological models are extremely important and extremely useful even when the mechanism is well known, and that's the argument I want to make when I discuss the Fitzhugh-Nagumo model in this context. When we talk about phenomenology versus mechanism, one question we can ask, in the context of everything we just learned, is: is the Hodgkin-Huxley model a mechanistic model or a phenomenological model? Personally, I think the answer is both. This may be somewhat in the eye of the beholder, and other people may have a different opinion, but I'll explain why I think the Hodgkin-Huxley model is both. There is clearly a mechanistic aspect in that Hodgkin and Huxley separated the total ionic current across the membrane into a sodium current and a potassium current, and in the previous lectures we discussed how they did this. This aspect of the Hodgkin-Huxley model is clearly mechanistic. But other parts of the Hodgkin-Huxley model were more phenomenological. One example is the functions describing alpha as a function of voltage and beta as a function of voltage. We talked about how these are derived directly from the data, but we didn't talk much about what the actual equations look like. The equations look something like this: beta as a function of voltage is 4 times e raised to the power of minus (V + 60) divided by 20, that is, β(V) = 4·exp(−(V + 60)/20).
There's no physical basis in the Hodgkin-Huxley model for using an exponential function here, and numbers such as 4, 60, or 20 are chosen simply to fit the data. What we discussed was that the infinity values and the time constants came directly from the data, and once you had the infinity values and the time constants, you were able to come up with good estimates of beta as a function of voltage and alpha as a function of voltage. That's in terms of what the data show. But in terms of what actual function you choose to fit those data points, that part is more phenomenological, and Hodgkin and Huxley just picked whatever worked to fit their alphas and betas. In this respect, I think the model is phenomenological, although, as I just discussed, it's clearly mechanistic in the way that it separates ionic current into a sodium component and a potassium component. Then there's another interesting example we can think about with the Hodgkin-Huxley model, and this is when phenomenology begets mechanism. This is what we discussed in the context of the cell cycle model: sometimes you start with a phenomenological representation, and over time, when you learn more, it becomes more mechanistic. A great example of this in the Hodgkin-Huxley model is the four-particle model. Why is n raised to the 4th power here in describing the potassium conductance and the potassium current? As we discussed in some of the previous lectures, they chose a four-particle model based on curve fitting. But now we know the structure of these ion channels, and these potassium channels in particular are made up of tetramers. So now we have a rigorous physical basis for the fact that this is n raised to the 4th power. At the time, this was something they chose just based on curve fitting. This is an example of how a representation in a mathematical model can start out phenomenological, and then, over time, when more details are learned about the biology, one can see that that phenomenological representation actually has a mechanistic basis. So this is a great example of when phenomenology begets mechanism. What are the examples we've seen when phenomenology begets mechanism? First, in the Hodgkin-Huxley model, as we just discussed, the choice of four particles to describe the changes in potassium conductance was based on curve fitting. They observed that the rising phase of potassium conductance as a function of time had a delay, whereas the falling phase did not. This is why they hypothesized that the rising phase looked like an exponential raised to the 4th power: to open the channel, all four particles have to shift, which can explain why the rising phase has a delay, whereas to close the channel only one particle out of four has to move back, which is why you wouldn't have a delay in the falling phase, as the sketch after this paragraph illustrates. So this was based on curve fitting, but now we know that it results from the tetrameric ion channel structure, where the potassium channel is a tetramer: one, two, three, four. Each of these subunits has one alpha helix, the S4 segment, that has charges in it, and these four S4 segments have to move to get the channel to open.
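To see why raising a single exponential gating variable to the 4th power gives a delayed rise but an undelayed fall, here is a small illustrative sketch in MATLAB, the language used for the homework. The single-gate time courses below are the standard exponential solutions of the Hodgkin-Huxley gating equation; the time constant and the step protocol are made-up values chosen only to show the shapes, not values from the lecture.

```matlab
% Hedged illustration: a single gate n(t) versus the four-particle form n(t)^4.
% Parameter values below are arbitrary, chosen only to show the shapes.
tau = 2;                 % gate time constant (ms), illustrative value
t   = 0:0.05:15;         % time axis (ms)

% Depolarizing step: n relaxes from 0 toward 1
n_up = 1 - exp(-t/tau);
% Repolarizing step: n relaxes from 1 back toward 0
n_dn = exp(-t/tau);

subplot(2,1,1), plot(t, n_up, t, n_up.^4)    % n^4 rises with a sigmoidal delay
title('Rising phase'), legend('n', 'n^4')
subplot(2,1,2), plot(t, n_dn, t, n_dn.^4)    % n^4 falls with no delay (faster, in fact)
title('Falling phase'), legend('n', 'n^4')
```

The rising curve of n^4 starts with zero slope, which is the delay Hodgkin and Huxley saw in the potassium conductance data, while the falling curve of n^4 decays immediately, in fact faster than n itself.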
So this is a case where you go from curve fitting, a phenomenological representation, in the days of Hodgkin and Huxley 60 years ago, to now, where we know there is a physical basis for it. The other example we saw of phenomenology begetting mechanism was in the Novak and Tyson model. Remember that intermediate enzyme was included in the model just so that they could account for the delay between the activation of MPF and the activation of the anaphase-promoting complex. Intermediate enzyme was something that they hypothesized and put into the model; they didn't know what it was. Now we know what the gene is, or what the protein is, that intermediate enzyme corresponds to. I think these are great successes of models: something is put into the model as a way to explain the data in quantitative terms, and then, later, that something turns out to have a biological basis. This is one of the strengths of models, something that, when a model is well built and well designed, can be one of its great strengths. Getting back now to the theme of phenomenology versus mechanism, we just discussed how models can sometimes start out as phenomenological and then become mechanistic as more biological details are acquired. What about the other direction? What about when models start out mechanistic and become phenomenological? We're going to discuss a somewhat extreme case here: the Fitzhugh-Nagumo model. This model produces an action potential here, voltage as a function of time, and this is a sub-threshold stimulus that fails to induce an action potential. The model consists of just these two equations: the change in voltage with respect to time, and the change in W with respect to time, where V is a voltage-like variable, not explicitly voltage, and W is a recovery variable. This was published in a paper by Dr. Richard Fitzhugh in 1961, in Biophysical Journal. There are many different manifestations of the Fitzhugh-Nagumo model, so if you look up the original paper, the equations will look slightly different from these. I picked these particular equations because they're very easy to implement and they illustrate the same points as any of the other representations of the Fitzhugh-Nagumo model. What you notice is that this paper was published in 1961. The Hodgkin-Huxley model was published in 1952, so Fitzhugh certainly knew all about the Hodgkin-Huxley model, and other investigators knew all about the Hodgkin-Huxley model, over the course of those nine years. What I want to argue is that, even though the Hodgkin-Huxley model was already in the public domain and already in the literature, the Fitzhugh-Nagumo model was also extremely important and extremely valuable. Let's think about how and why the Fitzhugh-Nagumo model could have been valuable. This is an abstract and very clearly phenomenological model. When we discussed Hodgkin-Huxley, we had to argue that parts of it were mechanistic and other parts were phenomenological. This one, I think, we can conclude is entirely phenomenological. These are the two equations, dV/dt and dW/dt, and the model has only two variables. From what we discussed with Hodgkin-Huxley about what's actually going on mechanistically, in terms of potassium conductance and sodium conductance, you're certainly going to need more than two variables to describe that.
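To make the contrast concrete, here is a minimal MATLAB sketch of simulating a two-variable Fitzhugh-Nagumo model. The lecture does not spell out its exact dW/dt equation or constants, so this sketch uses the classic FitzHugh parameterization, dV/dt = V − V³/3 − W + I and dW/dt = 0.08·(V + 0.7 − 0.8·W); the numbers 0.08, 0.7, and 0.8 and the stimulus sizes are assumptions, not the lecture's values, but the qualitative behavior is the same.

```matlab
% Hedged sketch: classic FitzHugh parameterization (constants are assumptions).
fhn = @(t, y, I) [ y(1) - y(1)^3/3 - y(2) + I ;        % dV/dt, voltage-like variable
                   0.08*(y(1) + 0.7 - 0.8*y(2)) ];     % dW/dt, recovery variable

rest = [-1.2; -0.62];                                  % approximate resting fixed point
[t1, y1] = ode45(@(t,y) fhn(t, y, 0), [0 100], rest + [0.3; 0]);  % small instantaneous V step
[t2, y2] = ode45(@(t,y) fhn(t, y, 0), [0 100], rest + [1.5; 0]);  % large instantaneous V step

plot(t1, y1(:,1), t2, y2(:,1))                         % V versus time for both stimuli
xlabel('Time'), ylabel('V'), legend('small stimulus', 'large stimulus')
```

With these assumed values, the small step relaxes straight back to rest, while the large step traces out an action-potential-like excursion; the entire model is two lines of right-hand side, which is the point.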
This model doesn't include any ionic currents. It includes this minus I here as a stimulus current, but other than that, in terms of how the membrane evolves autonomously after you give the stimulus current, there's nothing in here that's an ionic current. And this recovery variable W is not related to any specific biological process. When we talked about recovery in the Hodgkin-Huxley model, we discussed how the refractory period results primarily from recovery of the sodium current from the inactivation process, that is, recovery of the h gate. In contrast, the model here has a recovery variable, but it's not linked to any particular biological process. And as I discussed on the last slide, this was published nine years after Hodgkin-Huxley. So, if you already have a representation that's more mechanistic, can going back to a really abstract, phenomenological representation have any value whatsoever? What I want to argue is that, in this case, it did have a lot of value. The Fitzhugh-Nagumo model is a very famous model. The original paper by Fitzhugh has been cited something like 18,000 times. It's a very well respected, well known model, and it has proven to be very valuable, both for simulation and for our understanding. How can this be, when it came after the more mechanistic representation? This is the question we want to address. Why would anyone care about a two-variable phenomenological model when a more mechanistic, four-variable model, in some ways a much better model, already exists? Why would someone want to work with a more abstract version? Well, one answer is that this occurred in the pre-digital era. These days, as we've seen, and as you've seen yourself doing your homework, it's really easy to write down a set of differential equations, implement them, and run simulations using a programming language such as MATLAB. That's because we have very powerful computers here in the 21st century. Working back in the 1950s and 60s, they didn't have such powerful computers. In fact, Hodgkin and Huxley solved a lot of their equations using a calculator, and it took them a really, really long time to do this. So back in those days, back in the 50s and 60s, there was value in simplifying; there was value in going from a four-variable model to a two-variable model. The reason this is called the Fitzhugh-Nagumo model is not because Fitzhugh and Nagumo published a paper together describing the model, as was the case with Hodgkin and Huxley. What happened, and this is the historical lesson, is that Fitzhugh published the paper we just discussed on the last slide, and then a year later Nagumo and coworkers published another paper. The two models were very similar, and therefore the model is described as the Fitzhugh-Nagumo model. And Nagumo's model, in fact, wasn't just a set of equations that could be written down and then solved on a computer. He actually implemented it as a physical circuit, a box, using a device called a tunnel diode. A tunnel diode has a cubic current-voltage relation, similar to the voltage-to-the-third-power term you may remember in the Fitzhugh-Nagumo equations. So these tunnel diodes have a very unusual relationship between the current they pass and the voltage you apply across them.
And so, in those days, Nagumo's representation was actually a box where you could turn the dials and get your output. In those days, when it was really difficult to implement these models on digital computers and you didn't have powerful computers, there was great value in having something that you could solve in a box. So this was one of the reasons the Fitzhugh-Nagumo model was very valuable. There was, however, another reason the Fitzhugh-Nagumo model grew to be so important, and I think that had to do with the fact that they went specifically to two variables. I think if Fitzhugh and Nagumo had gone from a 20-variable model to an eight-variable model, it would have helped a lot in terms of computing power and computing speed, but it wouldn't have had the same impact. So what's so special about going specifically to a two-variable model? Well, we've already seen this in our lectures on dynamical systems. When you have two variables, you can plot things in the phase plane. You can plot one variable on one axis and the other variable on the other axis, and you can visualize how the system moves, visualize trajectories, and look at things like fixed points and analyze them graphically. That's exactly what Fitzhugh did. He plotted nullclines. You can calculate the voltage nullcline, W equals V minus V cubed minus I, and you can calculate the W nullcline with this equation here, and then you can plot them. The black line here represents the voltage nullcline, and the red line represents the W nullcline. Once you have your nullclines plotted in the phase plane, as we discussed in the lectures on dynamical systems, you can apply arrows: you can say in this quadrant the system is moving this way, up here it's moving up and to the left, here it's moving down and to the left, et cetera. Those are the things Fitzhugh did: he asked, where is my fixed point, when is it going to be stable, when is it going to be unstable, which way is the system going to move, et cetera. And I think this, even more than the computing-speed issue, is why the Fitzhugh-Nagumo model proved to be so influential. It's because it was specifically two variables, and because things could be plotted in the phase plane. This allowed for more conceptual insight using these graphical methods: conceptual insight into processes such as sub-threshold stimuli, super-threshold stimuli, et cetera. Let's look at how we visualize various processes in the Fitzhugh-Nagumo model, and how plotting nullclines, looking at things in the phase plane, and the ideas of dynamical systems can help us to understand them. First, let's look at what happens with an electrical stimulus. We're going to represent an electrical stimulus here as an instantaneous increase in the variable V. So here are our nullclines: the voltage nullcline, all the points where dV/dt equals 0, and the W nullcline, all the points where dW/dt equals 0. If we have a small increase in V, then we move from our fixed point here to this point, and if we run a simulation, we see that it goes back to the stable fixed point. We can plot this as a function of time, where the voltage goes up and then the voltage comes back down. What happens when we have a larger increase in V? Well, here we move from this point all the way over to here.
And now what happens is that our system moves like this, so that instead of voltage decreasing after the stimulus, voltage increases. How do we know whether voltage is going to decrease or increase after the stimulus? That's determined by whether we've crossed the nullcline. Remember, the black nullcline here is the set of points where dV/dt equals 0. So at this point here, after we release the stimulus, voltage will just decrease; at this point, because we've crossed the nullcline, voltage is going to increase. So plotting nullclines helped Fitzhugh get insight into where the threshold occurs, and why, with a small increase in V, you return to the fixed point, or, in physiological terms, you return to around minus 60 millivolts, whereas in this case, voltage continues to increase, eventually W increases as well, and then you come back to the fixed point after the voltage decreases. In this case, when Fitzhugh ran a simulation with his equations, he saw something that looked like an action potential. So, by reducing from four variables, in the case of Hodgkin-Huxley, to two variables, in the case of Fitzhugh-Nagumo, we're still able to represent things like: this is a sub-threshold stimulus, and this is a super-threshold stimulus that induces an action potential. One of the other analyses Fitzhugh did that was really cool was to ask: what happens when we have constant current injection? How does that change the stability of our fixed point? Constant current injection, in this case, corresponds to a negative value of I, and if we look at the equation for our V nullcline over here, we see W equals V minus V cubed minus I. So if I is a negative number, then you subtract a negative number, which is like adding to it, so your voltage nullcline is going to occur at higher values of W. In other words, it's going to shift the voltage nullcline up. And this is what we see when we have constant current injection. In this case, I equals minus 7. The black line here, the V nullcline, has shifted up compared to where it was in our previous example. What happens when we shift the nullcline up like this is that this fixed point now becomes unstable. And when this fixed point becomes unstable, what do you have? Instead of a stable fixed point, you have a stable limit cycle, and what this model is going to do is continually go around this trajectory, again and again and again. If you plot voltage as a function of time, rather than voltage and W together, you'd see something that looks like this: one action potential, another one, another one, another one. And that occurs because our fixed point has now become unstable. So, in other words, what Fitzhugh was able to conclude, in the context of dynamical systems, is that repetitive action potentials with constant current injection, something that can be observed experimentally, are equivalent, in dynamical systems terms, to conversion from a stable fixed point to an unstable fixed point; and when you have an unstable fixed point, you have a stable limit cycle. This is analogous to what we saw before with the Bier model of yeast glycolysis, where we could have either a stable fixed point or a stable limit cycle. This is a stable limit cycle, and the stable limit cycle comes about because the voltage nullcline shifts up; when the voltage nullcline shifts up, what used to be a stable fixed point becomes an unstable fixed point.
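Here is a hedged sketch of the phase-plane picture just described, again using the classic FitzHugh parameterization from the earlier sketch rather than the lecture's exact equations. Because the parameterization differs, the current value that destabilizes the fixed point also differs: I = 0.5 is assumed here instead of the I = −7 quoted above, and the sign convention is the classic one, in which a positive injected current shifts the V nullcline up.

```matlab
% Hedged sketch: nullclines plus a trajectory, classic FitzHugh form assumed.
I = 0.5;                                   % assumed constant injected current
v = linspace(-2.5, 2.5, 400);
plot(v, v - v.^3/3 + I, 'k', ...           % V nullcline: all points where dV/dt = 0
     v, (v + 0.7)/0.8,  'r'), hold on      % W nullcline: all points where dW/dt = 0

fhn = @(t, y) [ y(1) - y(1)^3/3 - y(2) + I ;
                0.08*(y(1) + 0.7 - 0.8*y(2)) ];
[~, y] = ode45(fhn, [0 300], [-1.2; -0.62]);
plot(y(:,1), y(:,2), 'b'), hold off        % trajectory settles onto the stable limit cycle
xlabel('V'), ylabel('W')
```

Setting I back to 0 in the same script shifts the V nullcline back down; the fixed point is then stable again, and the trajectory simply sits at rest instead of circulating.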
To summarize, what we've seen is that the Hodgkin-Huxley model, and this is true of most mathematical models, contains a mixture of some mechanistic elements and some phenomenological elements. We've also seen that when a phenomenological representation is later found to have a rigorous mechanistic basis, this can be considered a great success of the model. We saw this in the case of the cell cycle model, with intermediate enzyme, and we saw this in the case of the Hodgkin-Huxley model, where the four particles describing the changes in potassium conductance were later shown to have an actual physical basis. However, I don't want to leave you with the impression that things always go in that direction. Models usually start off as phenomenological and then become more mechanistic over time, but even when the mechanism is known, phenomenological representations can nonetheless be useful, because they often provide very general, very abstract insight into phenomena. One prominent example of this is the Fitzhugh-Nagumo model. So remember that even though the Fitzhugh-Nagumo model came after the Hodgkin-Huxley model, and even though it was an abstract and phenomenological representation, it was nonetheless very significant and very important for the general insight that it provided.