[MUSIC] Hello, we're now going to take a look at final validation of the QI effort. We talked about baseline data earlier. Baseline data are used to measure current-state performance, and they're an indicator of the extent of the problem. One of the first things you want to do when you obtain these data is perform a bit of a sniff test: does this data make sense? Sometimes the best way to do that is to show it to a few of the stakeholders in the process, who might have an even better sense than you of whether the data do, indeed, make sense. Data points should be graphed in time-series order, so you can determine things like variation, trends, and shifts over time.

One of the key things you need to do is identify your gap. The gap equals goal performance minus baseline performance. How far away are you when you collect that data? That can help you establish what the goal should really be. The size of the gap is, therefore, very important. It might indicate that your goal is set too low or too high; you might be a little too lofty in your aspirations of fixing a problem, or you might be a little wimpy on that goal. That gap will tell you. It might also indicate that the problem wasn't as significant as you first thought. A lot of times we get into some subjectivity, and whatever happened recently is perceived to be a real problem; when we get the data, it might reveal that it wasn't the problem we thought.

So let's look at an example. This is what we call an individuals chart, and it's showing MRI scanner idle time. We all know we want our MRIs going all the time, so idle time is not a good thing; in this case, smaller is better. On the left-hand axis, we're looking at the minutes that the scanner was idle. You can see it's been hanging in there at about 175 minutes or so, on average, through, I'd say, June or so, and then all of a sudden we start to see a number of data points increasing in a row. One could look at that and say, my gosh, we have a problem, we need to work on this, we need to get that scanner's idle time back where it's supposed to be. However, let's capture a little more data, and now what do we see? For whatever reason, we had a spike in that idle time, but then, look, it seemed to go back down, maybe a little higher than previous, but basically it's mostly in the range of normal variation. We had those two points up top that are lit in red, and that means there's some kind of special cause there. So maybe we just want to understand what that special cause was and address those points, rather than commit an entire improvement effort to fixing what may not be a huge problem.

One thing I've noticed: I used to work in the world of manufacturing, and we used control charts quite frequently. When I got to the hospital setting, I asked many quality folks in various hospitals whether they use run charts or control charts. Typically they use either line charts or run charts, which are very useful, to say the least. In the case of a run chart, you're basically just capturing the data, placing a point, and literally connecting the points with a line. It's going to show you, over time, that things are moving one way or the other. The good news is it's simple and can be done very, very easily. The bad news is it really doesn't identify extremes, or at least statistical extremes.
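As a quick illustration, here's a minimal sketch of a run chart in Python, assuming matplotlib is available; the MRI idle-time numbers are made up. You capture the data, place a point for each value in time-series order, and connect the points:

```python
# A minimal run chart: plot the points in time-series order and
# connect them with a line. The idle-time numbers are made up.
import matplotlib.pyplot as plt

idle_minutes = [172, 178, 169, 181, 175, 174, 183, 171, 177, 180]

plt.plot(range(1, len(idle_minutes) + 1), idle_minutes, marker="o")
plt.xlabel("Day")
plt.ylabel("Scanner idle time (minutes)")
plt.title("MRI scanner idle time (run chart)")
plt.show()
```

Notice there are no statistical limits on this plot, which is exactly the weakness we're about to address.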
So let's compare that to the control chart. It's very similar, only it also provides statistical information, including the stability of the process. It can be done for various data types, including attribute and variables data. The cons are that it usually requires some software, or large data sets. So if we look at this, we can determine that something indeed did happen with those two red points over time, and it was statistically different, because they're above the upper control limit.

Let's talk a little more about that. Every single process contains two types of variation. Common cause variation is always happening: we all know that shift and drift happen, that variability happens over time. However, sometimes there's a special cause, something unusual or different that can happen, and it might be good or bad. Say, for example, that I'm plotting my weight over time. There's variability in my weight, I will guarantee it. But what typically happens somewhere around late November through late December is my weight seems to go up, and yet right after the first of the year, my weight tends to go down. The normal shift and drift would be common cause variation: I might gain a pound one week, lose a pound the next, something like that. But then something unusual happens during those holiday times: I tend to eat, and perhaps drink, a little bit more than usual, and that's a direct contributor to my weight going up. That would be a special cause.

The key is we want to look for variability by things like shift, provider, day, collector, etc. When you start diving deep into your data, you might find that the variability follows the day of the week. For example, discharges from our ICUs: we have some significant issues with patients being delayed in the ICU because they can't get a ride, they can't go home. We find that 80% of our problem happens on weekends. That's the variability; that's the special cause variation we're looking for. So maybe now, instead of looking at the entire problem in terms of variation across all days of the week, we focus on Saturday and Sunday.

I talked about statistical variation, and there's a lot of debate over what constitutes a trend, for example, or what constitutes a shift. So let's go through the three rules that are most widely used. When you do a control chart, we call this the voice of the process: the mean of all the data points is calculated, and then the magic happens: we calculate three standard deviations above and three standard deviations below the mean, as a measure of variability around the mean. Any point that goes outside those limits is statistically different, i.e., it's special cause variation. Another form of special cause variation is what we call a trend. Again, there's a lot of conversation about what constitutes a trend, but statistically it's six or more points continuously increasing or decreasing in a row. That means you have a statistical trend going on; the chance of that happening by chance alone is very, very small. The last thing we're going to look at is a shift, where you have nine or more consecutive points on the same side of the mean. So again, you have to look at your data and determine: is this good or bad? Is bigger better? Is smaller better? Or is staying within a predictable range best?
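Here's a minimal sketch of the control limit calculation in plain Python, using made-up idle-time data with two deliberate spikes standing in for those red points. One note: the transcript describes the limits as three standard deviations, and for an individuals chart the standard deviation is conventionally estimated from the average moving range (mR-bar divided by 1.128) rather than computed directly; that's what this sketch does:

```python
# A minimal individuals-chart calculation with made-up idle-time data.
idle_minutes = [172, 178, 169, 181, 175, 174, 183, 171, 230, 228,
                188, 176, 173, 180, 177, 183, 179, 182, 177, 180]

mean = sum(idle_minutes) / len(idle_minutes)

# Estimate sigma from the average moving range (the usual
# individuals-chart approach) rather than the raw standard deviation.
moving_ranges = [abs(b - a) for a, b in zip(idle_minutes, idle_minutes[1:])]
sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128

ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

# Any point outside the limits is statistically different: a special cause.
special = [(day, x) for day, x in enumerate(idle_minutes, start=1)
           if x > ucl or x < lcl]

print(f"mean={mean:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
print("special cause points (day, minutes):", special)
```

Running this flags the two spike days as special causes, since they fall above the upper control limit.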
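The trend and shift rules can be sketched the same way; the run lengths of six and nine come straight from the transcript, though be aware that different references quote slightly different counts:

```python
# Trend: six or more points continuously increasing or decreasing in a row.
def has_trend(data, run=6):
    for start in range(len(data) - run + 1):
        window = data[start:start + run]
        rising = all(a < b for a, b in zip(window, window[1:]))
        falling = all(a > b for a, b in zip(window, window[1:]))
        if rising or falling:
            return True
    return False

# Shift: nine or more consecutive points on the same side of the mean.
def has_shift(data, center, run=9):
    streak, prev_side = 0, None
    for x in data:
        if x > center:
            side = "above"
        elif x < center:
            side = "below"
        else:
            side = None  # a point exactly on the mean breaks the streak
        if side is None:
            streak = 0
        elif side == prev_side:
            streak += 1
        else:
            streak = 1
        prev_side = side
        if streak >= run:
            return True
    return False

print(has_trend([1, 2, 3, 4, 5, 6, 2]))          # True: six rising points
print(has_shift([5] * 4 + [9] * 9, center=7.0))  # True: nine points above 7
```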
The next thing we're going to look at is capability. Control charts are only looking at the stability of your process, the variability of your process. Capability gets at how well we're meeting, or exceeding, our customers' requirements. It can be measured in lots of different ways, but I like the simplest way, which is basically percentage good or percentage bad; you determine which way you want to express it. So look at the balls going into the soccer net there. If you're familiar with soccer, the idea is to get the ball in the goal. We have ten soccer balls there, seven of which are in the net, which means three are out of the net. So three would have been unacceptable, and seven are acceptable. We could say we're 70% good, or we're 30% bad; you get to choose. Are we matching our customers' requirements? Who sets the width of the net? Well, in this case, the customer would be the soccer association or something like that; they get to determine what's good and what's bad. In our world, the patient, for the most part, determines what's good and what's bad. We determine what they're looking for, set our goal accordingly, and measure our success toward that goal.

There are four components to final validation. The first is establishing the problem statement. The second is coming up with a metric that ties to the problem; it's a measurement of the extent of the problem. The third is the goal, in terms of whether we're getting better or worse than the baseline data. And the fourth is, if you can easily get the data, do a test on that data and run it by the stakeholders. The stakeholders, remember SIPOC, are the S and the C, the suppliers and the customers. If the data are difficult to obtain, just do one through three, and worry about the data later. But if you can possibly collect the data right off, you're better off.
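Finally, here's a minimal sketch tying the percentage-good capability measure to the gap calculation from earlier, using the soccer-ball numbers above and a purely hypothetical 90%-good goal:

```python
# Capability as percentage good (or bad), plus the gap to the goal.
# The 90% goal is hypothetical; the counts are from the soccer example.
total_shots = 10
acceptable = 7                    # balls in the net
unacceptable = total_shots - acceptable

percent_good = 100 * acceptable / total_shots    # 70% good
percent_bad = 100 * unacceptable / total_shots   # 30% bad

goal = 90.0                       # goal performance, in percent good
baseline = percent_good           # baseline performance
gap = goal - baseline             # gap = goal minus baseline

print(f"{percent_good:.0f}% good / {percent_bad:.0f}% bad")
print(f"gap to goal: {gap:.0f} percentage points")
```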