So far you've learned a lot about what ML can do. You've also seen a few examples of ML being used in today's technology. In this module we're going to shift our focus to an important topic in machine learning: what it means to use ML responsibly and ethically. To illustrate what I mean, let's begin with a brief exercise. [MUSIC] I want you to close your eyes and picture a shoe. Try it, just for a moment. All right, open your eyes. Did you picture this? How about this one? And what about this one? We may not even know why, but each of us is biased toward one shoe over the others. Now imagine you wanted to train an ML model to recognize a shoe. Depending on the data you feed the ML model, you can end up exposing the model to your own bias.

Now, liking one shoe over another may not seem so bad. However, when you take into account the human lives that can be affected by ML, it's important to consider where bias shows up. In this context, bias refers to a disproportionate weight in favor of or against an idea or thing, usually in a way that is unrepresentative, prejudicial, or unfair.

With that, here are the remaining topics I'll cover in this module. I'll introduce Google's AI Principles and how they're used to promote ethical and fair ML practices. Next, I'll cover some common types of human bias that can manifest themselves throughout an ML project. Then we'll look at how to evaluate an ML model for fairness. And finally, I'll close the module by presenting the second hands-on lab: inspecting a dataset for bias. Check out the next video to learn more.
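To preview the idea behind that lab, here is a minimal sketch (not the lab's actual code) of one simple check you could run on a labeled image dataset: counting how many examples each class has. The `labels` list and the shoe categories are hypothetical stand-ins for whatever annotations your dataset provides.

```python
from collections import Counter

# Hypothetical labels for a shoe-image dataset; in practice these would
# come from your dataset's metadata or annotation files.
labels = (
    ["sneaker"] * 800 +
    ["high_heel"] * 150 +
    ["work_boot"] * 50
)

counts = Counter(labels)
total = len(labels)

# Report each class's share of the data; a heavily skewed distribution
# means the model will see far more of one shoe type than the others.
for shoe_type, count in counts.most_common():
    print(f"{shoe_type}: {count} examples ({count / total:.1%})")
```

If one class dominates like this, a model trained on the data is likely to perform better on that class than on the others, which is exactly the kind of skew the lab asks you to look for.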