So, we've covered the four challenges that form the heart of this module: the four challenges of dealing with talent analytics and drawing correct inferences from the data you're crunching. We have one special topic we wanna cover before we drop into the prescriptions. The special topic is tests and algorithms, which are receiving a great deal of attention lately. For example, the cover story of Time magazine this summer was How High Is Your XQ? XQ is intended to be your personality quotient, and it comes from the rise of firms promising to help companies hire more effectively by giving tests to potential hires. A real cottage industry has grown up around this, and it's related to big data, to the increase in computing power, and to the never-ending quest for a better way to hire employees. Matthew's going to talk about this separately in the hiring module, but I want to say a few quick words about it, especially because it's related to other news that's been covered lately. Another front-page story, this time in the New York Times, asked: can an algorithm hire better than a human? Again, something that's arisen in recent years. We'll leave the deeper treatment to the hiring module, but here are a few comments on these very important and relatively new topics.

The first is, let's understand some of the pros and cons, because there are undoubtedly pros, but there are also cons. Clearly, you can process more efficiently: through these tests and algorithms you can process many more candidates, and you can aggregate them much more easily than doing so manually or intuitively. So processing efficiency is a plus, and so is broad search. Because you've got this efficiency, you can bring many more people into the filter, into the funnel you're trying to process. And this can be very helpful. In general, we think people search too narrowly; they're too confident in their ability to identify the right candidates, so broad search is a good thing. Finally, done well, this is unbiased. The machines don't have the kinds of biases we worry about: the human stereotypes and self-fulfilling prophecies we talked about in our previous segment. Now, that's once they're programmed. The programming itself can be biased, or the historical data they're based on can be biased. But the machines themselves are unbiased, and this can be a real advantage for us.

Now the downsides, and this is important; this is what gets lost in the glossy articles, or with the many salespeople pushing these new tools. One is that they're hyper-focused. These algorithms and tests will do exactly what they're programmed to do and nothing else. They don't have the sense to balance things the way humans would. If you tell them you want X, they're gonna focus exclusively on X, and if you just forgot to put Y in the algorithm, you're not gonna get any Y; they deliver exactly what you asked for, and this can be dangerous. It can be very powerful, but it can be very dangerous. It's not that they don't work well enough; the problem is they work too well. It's a very sharp tool for us to be working with. Second, most of these tools have relatively low explanatory power. Meaning, there might be a little signal, they might be picking up on something, but in most cases they're not explaining a lot of the variance downstream. They're not helping us really understand that much of what's going on. So we might draw on them, but you don't wanna put too much weight on them if they're not really explaining much of the downstream variance.
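To make "low explanatory power" concrete, here is a minimal sketch, not anything from the lecture itself, using hypothetical simulated data: a screening test can carry a real, detectable signal and still explain only a tiny share of the variance in later performance. The variable names and the effect size are assumptions for illustration only.

```python
# A minimal sketch of low explanatory power. Everything here is hypothetical:
# imagine a history of applicants' test scores and their later performance
# ratings. We measure the correlation and report R^2, the share of
# downstream variance the test actually explains.
import numpy as np

rng = np.random.default_rng(0)

n = 2000
test_score = rng.normal(size=n)           # hypothetical screening-test scores
# Performance is mostly driven by things the test never sees (the noise term),
# with only a small contribution from the test itself.
performance = 0.15 * test_score + rng.normal(size=n)

r = np.corrcoef(test_score, performance)[0, 1]  # Pearson correlation
print(f"correlation r = {r:.3f}")   # a real, nonzero signal...
print(f"R^2           = {r**2:.3f}")  # ...explaining only ~2% of the variance
```

Run on data like this, the test is "working" in the sense of a genuine correlation, yet roughly 98 percent of the downstream variance is left unexplained, which is exactly why you don't want to put too much weight on any one such tool.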
Some prescriptions follow, because by all means we need to be developing these new tools, and we need to be taking advantage of the good ones out there, but they need to come with some cautions.

One: do the science. Do the rigorous testing and, importantly, identify what works best in your setting. Very few of these tests or algorithms are general; very few are gonna work everywhere. Even something as fundamental as GPA, which you can think of as an algorithm even though it's been widely available for a long time: its applicability varies across organizations. I was at a conference a few months ago where someone from Google stood up and said, we've crunched the numbers, and we've established once and for all that GPA is not a valid predictor of performance inside our firm. Interesting, right? Very interesting. But then Goldman Sachs stands up and says, we've crunched the numbers, and we've established definitively that GPA is a valid predictor of performance inside our firm. Same measure, different environments, different validity. You've got to run the numbers inside your own organization, and take with a big grain of salt anything somebody tells you about the generality of a particular test or a particular algorithm.

Second: provide human oversight. These are very sharp tools, and they need to be carefully used. And sadly, I speak from experience here: you can design a very good algorithm, and then once you start using it, you start sawing off edges of the furniture here and there. You need human oversight. You need humans to be the ones doing the programming, humans testing these things, and, very importantly, humans error-checking. You've gotta make sure the output on the other end makes sense and broadly corroborates what you expect. Until you've got a lot of experience with a particular algorithm, you've gotta be careful.

Finally, use multiple tools. This is one of the best prescriptions we can give. It licenses sampling these new tools, it licenses using them, so long as they're used in conjunction with many other tools. So by all means, take advantage of new technology. Bring the new tests in, drop them into the mix, see what they relate to. Just don't put too much stock in any one of them, any one test or any one algorithm, until you've really proven it out. In general, with performance evaluation and talent evaluation, we want as many diverse signals as possible. These tools bring in some new signals from some new places, and we want that diversity. But they should only be a complement to the other, more traditional measures.
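The last two prescriptions, validate in your own setting and lean on multiple diverse signals, can be sketched together in a few lines. This is a hypothetical illustration, not a vetted hiring method: the signal names, the simulated data, the validity threshold, and the equal-weight composite are all assumptions made for the example.

```python
# A sketch of two prescriptions at once: check each hiring tool against your
# own historical outcomes ("run the numbers inside your organization"), then
# combine the tools that hold up rather than leaning on any single one.
import numpy as np

def zscore(x):
    return (x - x.mean()) / x.std()

def inhouse_validity(signal, performance):
    """Correlation of a hiring signal with later performance in *your* data."""
    return np.corrcoef(signal, performance)[0, 1]

def composite_score(signals, performance, min_r=0.10):
    """Keep only signals that validate in-house; average their z-scores."""
    kept = {name: zscore(s) for name, s in signals.items()
            if abs(inhouse_validity(s, performance)) >= min_r}
    if not kept:
        raise ValueError("no signal validated in-house; don't automate this")
    # Equal weights keep the composite simple and diverse: any one noisy
    # tool gets diluted instead of dominating the decision.
    return sum(kept.values()) / len(kept), sorted(kept)

# Hypothetical historical data: past hires' signals and later performance.
rng = np.random.default_rng(1)
n = 500
perf = rng.normal(size=n)
signals = {
    "gpa":              0.20 * perf + rng.normal(size=n),  # may validate here...
    "personality_test": 0.05 * perf + rng.normal(size=n),  # ...this one may not
    "work_sample":      0.30 * perf + rng.normal(size=n),
}
score, used = composite_score(signals, perf)
print("signals kept after in-house validation:", used)
```

Note that the same `signals` dictionary run against a different organization's `perf` data could keep a different subset, which is the Google versus Goldman point: validity lives in your environment, not in the test.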