We are now ready to discuss the important subject of parameter extraction; in other words, how we choose values for the parameters that are part of a compact model. Parameter extraction is the matching of the model to measured data. We use software optimizers to pick the parameter values so that this happens, and so that the error between the predictions of the model and the measurements is minimized. As an example here, the dots are measurements and the solid lines are a model. You can see that the matching is pretty good. Nevertheless, we have to be careful as to whether we can judge matching just by looking at such plots; I will comment on this shortly. A model should of course predict the current accurately as a function of voltages, but this is not enough. It should also predict the small-signal parameters accurately for any combination of biases (we will be talking about small-signal parameters later on in this course), and it should also predict correctly the functional dependence of quantities on other quantities. Again, we'll talk about this later on. Both of these are important, especially in analog circuit design. So parameter extraction generally proceeds along the following steps. We start with a physically correct compact model whose input parameters have been chosen carefully; they are independent parameters, with no interference between them to the extent possible. Then we identify certain geometries (for example, certain ranges of W and L) and current-voltage regions where some parameters have a dominant effect. You can understand that if a model has, let's say, 200 parameters, to some extent all of them affect all regions. You want to find regions where one or two parameters have a dominant effect, so you can at least extract the right values of those parameters from those regions. So you extract those values, and then you repeat for other geometries or other bias regions to get the other parameters.
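To make the staged procedure concrete, here is a minimal sketch in Python. It uses a toy square-law transistor model, not a real compact model, and the parameter names K, VT, and LAM are hypothetical: VT and K are first extracted from a bias region where they dominate, LAM from a second region, and K is then refined once LAM is known.

```python
import numpy as np

# Toy square-law model standing in for a real compact model (an assumption
# for illustration): I = 0.5*K*(VGS - VT)^2 * (1 + LAM*VDS) in saturation.
def model(vgs, vds, K, VT, LAM):
    return 0.5 * K * np.maximum(vgs - VT, 0.0) ** 2 * (1.0 + LAM * vds)

# Synthetic "measurements" generated from known true values.
K_true, VT_true, LAM_true = 2.0e-4, 0.45, 0.05
vgs = np.linspace(0.6, 1.2, 7)
vds_lo, vds_hi = 0.9, 1.8                 # two saturation bias points
i_lo = model(vgs, vds_lo, K_true, VT_true, LAM_true)
i_hi = model(vgs, vds_hi, K_true, VT_true, LAM_true)

# Step 1: at fixed VDS, sqrt(I) is linear in VGS, so VT and K dominate here.
slope, intercept = np.polyfit(vgs, np.sqrt(i_lo), 1)
VT_est = -intercept / slope               # exact for this toy model
K_est = 2.0 * slope ** 2                  # ignores the (1 + LAM*VDS) factor for now

# Step 2: a second bias region where LAM dominates -- the ratio of currents
# at two VDS values depends only on LAM.
r = np.mean(i_hi / i_lo)
LAM_est = (r - 1.0) / (vds_hi - r * vds_lo)

# Step 3: refinement -- revisit K now that LAM is known.
K_est = 2.0 * slope ** 2 / (1.0 + LAM_est * vds_lo)
```

In a real extraction, this last step would be a full multi-parameter optimization over all regions rather than a closed-form correction, but the staging idea is the same.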
And once you finish, because of course there is some interaction between the parameters as I already mentioned, you do a global optimization; in other words, you fine-tune all parameters by looking at the entire model over all relevant ranges. One important requirement is the following: the final values of the parameters you end up with should be physically meaningful. So, for example, if you are extracting the substrate doping concentration, it should be more or less what you expected it to be, not something totally different just so that it can make up for the lack of appropriate modeling of certain effects. What are the error criteria we use in order to judge the matching of a model? First of all, the most important thing is to match the current with respect to the voltages. Here I show you the current versus VDS; the dots are measurements, the broken line is a model, and it appears that it matches pretty well. Now, as model parameter values are varied in order to minimize the error, we need something specific that we call the error. So you can take the model prediction for the jth point, where j ranges from 1 to k, calculate its difference from the actual measurement at that point, and normalize by that measured value. Then square this, so that if this error happens to be positive for one point and negative for another, the two errors do not cancel each other. Then you add up all of these squared relative errors, possibly weighting them with weighting factors w_j, so that you can pay special attention to certain regions. For example, if you are interested in having very good accuracy in the saturation region, w_j for points in the saturation region might be chosen larger than the corresponding weighting factors in the non-saturation region. Then you add all of this up and you end up with a total squared error, counting all of these points.
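As a sketch, the error sum just described, squared relative errors with optional weighting factors w_j, can be written as follows (the specific data values and the choice of which points count as "saturation" are made up for illustration):

```python
import numpy as np

def total_sq_error(i_meas, i_model, w=None):
    """Sum over points j of w_j * ((I_model,j - I_meas,j) / I_meas,j)^2."""
    i_meas = np.asarray(i_meas, float)
    i_model = np.asarray(i_model, float)
    w = np.ones_like(i_meas) if w is None else np.asarray(w, float)
    rel = (i_model - i_meas) / i_meas    # signed relative error per point
    return float(np.sum(w * rel ** 2))   # squaring prevents cancellation

# Weighting the (hypothetically) saturation-region points more heavily:
i_meas = [1.0, 2.0, 3.0, 4.0]
i_model = [1.1, 1.9, 3.0, 4.2]
E = total_sq_error(i_meas, i_model, w=[1, 1, 2, 2])
```

An optimizer would then vary the model parameters to drive this quantity down.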
Now if you divide by the number of points, k in this case, you end up with the mean square error, and if you take the square root of that, you end up with the root-mean-square error, or RMS error. So you can use the RMS error to adjust the model: you give this criterion to your optimizer and ask it to choose parameter values so as to minimize the RMS error. So let's say we have done that, we have adjusted the parameters, and the model is as shown by the broken line; it looks pretty good. However, think of the following. Let's say you are in the saturation region. In the saturation region, the slope is the so-called small-signal output conductance, which is of key importance in analog design. Now, although you matched the current pretty well, that does not mean you matched the slope well. In fact, you may already see that the broken line has a smaller slope than what would correspond to the measured points. So if you now plot the slope, both for the measured points and for the model, you may end up with something like this: this is the slope that corresponds to the measurements (assuming you have plenty of points to calculate the slope from), and this is the slope that corresponds to the model. How can they be so different? Because the slope here is very small; let's say it is epsilon for the measurements and epsilon over two for the model. They are both very small, and you cannot see the difference in the current plot, but once you plot the slope itself, it becomes very apparent that this excellent-looking model is actually pretty bad at predicting the output conductance. So you calculate the error of the output conductance the same way we did for the current: you end up with another error, which takes the predicted value of the output conductance minus the actual value, normalized, squared for the same regions, and weighted.
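Here is a sketch of the RMS error together with the output-conductance check just described, using numerical differentiation of tabulated I-VDS points. The data are made up precisely so that the current matches to within a fraction of a percent while the slope is off by a factor of two, reproducing the epsilon versus epsilon-over-two situation:

```python
import numpy as np

def rms_error(x_meas, x_model):
    """Root-mean-square relative error between measured and modeled values."""
    x_meas = np.asarray(x_meas, float)
    rel = (np.asarray(x_model, float) - x_meas) / x_meas
    return float(np.sqrt(np.mean(rel ** 2)))

# Saturation-region I-VDS data: the measurement has slope eps, the model
# has eps/2, yet the two currents nearly coincide over this VDS range.
vds = np.linspace(1.0, 2.0, 11)
i_meas = 1.0 + 0.010 * vds
i_model = 1.005 + 0.005 * vds

# Output conductance gds = dI/dVDS, estimated from the tabulated points.
gds_meas = np.gradient(i_meas, vds)
gds_model = np.gradient(i_model, vds)

err_current = rms_error(i_meas, i_model)    # tiny: the I-V plots would overlap
err_gds = rms_error(gds_meas, gds_model)    # large: the slope is off by 2x
```

The current error comes out below half a percent while the conductance error is 50%, which is exactly why the current plot alone can be misleading.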
So in fact, both current and small-signal parameter errors should be taken into account in the minimization, and there are ways of doing that. Now I would like to warn you against blind optimization, and I will give you an extreme example; it is extreme on purpose, to really illustrate the problem. We know that in strong inversion we have a parameter we call phi_0, which is two times the Fermi potential plus delta phi, which is a few tenths of a volt. Let's say that instead of using this, a model uses the classic assumption that phi_0 is equal to 2 phi_F. Now phi_F, the Fermi potential, is given by this expression; this is the thermal voltage, this is the substrate doping concentration NA, and this is the intrinsic carrier concentration. You could make this phi_0 equal to the correct one if you artificially make phi_F large. And how do you make it large? You can make NA large enough to make phi_F such that 2 phi_F is the same as what you expect. But to do that, you have to allow NA to be a free parameter, and the optimizer must then be allowed to adjust NA to give you the correct value of phi_0. This is a very, very bad idea. Why? Because the optimizer will have to give an artificially high value to NA just to make up for the fact that you did not do a good job in your expression for phi_0. In fact, if delta phi is only 100 millivolts, you need about a 600% higher NA in order to match the correct phi_0. And once you have an artificially high substrate doping, you are going to get artificially high capacitances, and things like that. In other words, you may think that by adjusting NA you corrected the problem here, but this problem will hit you somewhere else. So it is always a bad idea to do blind optimization. Rather, you have to start from correct, physically meaningful expressions, then optimize the parameters in them, and the final values you end up with for those parameters should make physical sense. I will continue with parameter extraction in the next video.
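A quick sanity check of that 600% figure, using phi_F = phi_t * ln(NA/ni) with representative room-temperature values (the specific doping and ni values are assumptions for illustration):

```python
import numpy as np

phi_t = 0.02585     # thermal voltage kT/q at ~300 K, in volts
ni = 1.0e10         # intrinsic carrier concentration of silicon, cm^-3
NA = 1.0e17         # assumed substrate doping concentration, cm^-3

phi_F = phi_t * np.log(NA / ni)    # Fermi potential
delta_phi = 0.1                    # the extra term in phi_0 = 2*phi_F + delta_phi

# Doping the optimizer would need so that 2*phi_F alone reaches 2*phi_F + delta_phi:
NA_fudged = ni * np.exp((phi_F + delta_phi / 2.0) / phi_t)
increase = NA_fudged / NA - 1.0    # fractional increase in NA
```

Note that the ratio NA_fudged/NA depends only on delta_phi and the thermal voltage, which is why an error of just 100 mV in phi_0 translates into roughly a sevenfold (about 600% higher) doping, regardless of the starting NA.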