You've heard about orthogonalization, how to set up your dev and test sets, human-level performance as a proxy for Bayes error, and how to estimate your avoidable bias and variance. Let's pull it all together into a set of guidelines for how to improve the performance of your learning algorithm.

I think getting a supervised learning algorithm to work well fundamentally means assuming you can do two things. First, you can fit the training set pretty well, which you can think of as roughly saying that you can achieve low avoidable bias. Second, doing well on the training set generalizes pretty well to the dev set or the test set, which is sort of saying that variance is not too bad. In the spirit of orthogonalization, there is one set of knobs you can use to fix avoidable bias issues, such as training a bigger network or training longer, and a separate set of knobs you can use to address variance problems, such as regularization or getting more training data.

So, to summarize the process we've seen in the last several videos: if you want to improve the performance of your machine learning system, I would recommend looking at the difference between your training error and your proxy for Bayes error, which gives you a sense of the avoidable bias. In other words, how much better do you think you should be trying to do on your training set? Then look at the difference between your dev error and your training error as an estimate of how much of a variance problem you have. In other words, how hard should you be working to make your performance generalize from the training set to the dev set, which the algorithm wasn't trained on explicitly?

To whatever extent you want to reduce avoidable bias, I would try tactics like training a bigger model, so you can just do better on your training set, or training longer, or using a better optimization algorithm, such as momentum or RMSprop, or a better algorithm like Adam. Another thing you could try is to find a better neural network architecture or a better set of hyperparameters. This could include everything from changing the activation function to changing the number of layers or hidden units (although if you do that, it would be in the direction of increasing the model size), to trying out other model architectures, such as recurrent neural networks and convolutional neural networks, which we'll see in later courses. Whether a new neural network architecture will fit your training set better is sometimes hard to tell in advance, but sometimes you can get much better results with a better architecture.

Next, to the extent that you find variance is a problem, some of the techniques you could try include the following: you can get more data, because more data to train on could help you generalize better to dev set data that your algorithm didn't see; you could try regularization, which includes things like L2 regularization, dropout, or data augmentation, which we talked about in the previous course; or, once again, you can try a neural network architecture or hyperparameter search to see if that helps you find an architecture better suited for your problem.

I think this notion of avoidable bias and variance is one of those things that's easy to learn but tough to master.
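To make the diagnostic concrete, here is a minimal sketch (not from the lecture) of the two comparisons described above, written in Python. The error values and variable names (human_level_error, train_error, dev_error) are illustrative placeholders; in practice you would measure these on your own training and dev sets.

```python
# Minimal sketch of the bias/variance diagnostic described above.
# The error values are illustrative placeholders; measure your own,
# using human-level performance as a proxy for Bayes error.

human_level_error = 0.01   # proxy for Bayes error
train_error = 0.08
dev_error = 0.10

# How much better you could plausibly do on the training set.
avoidable_bias = train_error - human_level_error
# How well training performance generalizes to the dev set.
variance = dev_error - train_error

print(f"avoidable bias: {avoidable_bias:.3f}, variance: {variance:.3f}")

if avoidable_bias > variance:
    # Bias-reduction tactics: bigger model, train longer, better optimizer
    # (momentum, RMSprop, Adam), or a different architecture / hyperparameters.
    print("Focus on reducing avoidable bias.")
else:
    # Variance-reduction tactics: more data, L2 regularization, dropout,
    # data augmentation, or architecture / hyperparameter search.
    print("Focus on reducing variance.")
```

The point of the comparison is simply to decide which family of knobs to reach for first, in keeping with the orthogonalization idea.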
But if you're able to systematically apply the concepts from this week's videos, you will actually be much more efficient, much more systematic, and much more strategic than a lot of machine learning teams in how you go about improving the performance of your machine learning system. This week's homework will let you practice and exercise your understanding of these concepts further. Best of luck with this week's homework, and I look forward to seeing you in next week's videos.