In this exercise, you will examine the concept of overfitting, where a model fails to generalize its performance to unseen data such as a separate validation set. You will also learn how to recognize overfitting using visualization tools and how to apply dropout to prevent it. Here, you will use a simple convolutional network on the CIFAR-10 dataset. After training the model, you should notice in the logs that, after around epoch 15, the model begins to overfit: even though the cost on the training set continues to decrease, the validation loss flattens out and even increases slightly. You can visualize these effects using the code provided. To correct this overfitting, you will introduce dropout layers into your model, which randomly silence a subset of units for each minibatch and greatly help reduce overfitting. Add dropout to your model as indicated in the exercise, and compare the resulting plot with the plot for the model without dropout. You should notice that the validation loss, shown in blue, is now shifted downwards compared to the previous figure, and the model reaches a better validation performance.
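
The sketch below is not the exercise's provided code, but a minimal illustration of the workflow described above, assuming PyTorch and torchvision: a small CNN for CIFAR-10 with a dropout layer, a training loop that records the training and validation losses per epoch, and a plot of both curves. The architecture, dropout rate, optimizer, and other hyperparameters are illustrative assumptions, not values specified by the exercise.

```python
# Minimal sketch (assumed PyTorch/torchvision setup, not the exercise's code):
# a small CNN for CIFAR-10 with dropout, plus a train/validation loss plot
# so the overfitting behaviour described above can be observed.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, random_split
import torchvision
import torchvision.transforms as T
import matplotlib.pyplot as plt

class SmallCNN(nn.Module):
    def __init__(self, use_dropout=True, p=0.5):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc1 = nn.Linear(64 * 8 * 8, 256)
        self.fc2 = nn.Linear(256, 10)
        # Dropout randomly zeroes a fraction p of activations on each minibatch.
        self.drop = nn.Dropout(p) if use_dropout else nn.Identity()

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32x32 -> 16x16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 16x16 -> 8x8
        x = torch.flatten(x, 1)
        x = self.drop(F.relu(self.fc1(x)))
        return self.fc2(x)

def run(num_epochs=30, use_dropout=True):
    tfm = T.ToTensor()
    data = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=tfm)
    train_set, val_set = random_split(data, [45000, 5000])  # hold out a validation split
    train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=256)

    model = SmallCNN(use_dropout=use_dropout)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    train_losses, val_losses = [], []

    for epoch in range(num_epochs):
        model.train()                       # dropout active during training
        total, n = 0.0, 0
        for x, y in train_loader:
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item() * x.size(0)
            n += x.size(0)
        train_losses.append(total / n)

        model.eval()                        # dropout disabled at evaluation time
        total, n = 0.0, 0
        with torch.no_grad():
            for x, y in val_loader:
                total += F.cross_entropy(model(x), y).item() * x.size(0)
                n += x.size(0)
        val_losses.append(total / n)
        print(f"epoch {epoch + 1}: train {train_losses[-1]:.3f}, val {val_losses[-1]:.3f}")

    # Without dropout, the training loss keeps falling while the validation
    # loss (blue) typically flattens and creeps back up after roughly 15 epochs.
    plt.plot(train_losses, label="training loss", color="orange")
    plt.plot(val_losses, label="validation loss", color="blue")
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.legend()
    plt.show()

if __name__ == "__main__":
    run(use_dropout=True)   # set use_dropout=False to reproduce the overfitting run
```

Running the script once with `use_dropout=False` and once with `use_dropout=True` lets you compare the two loss plots, as the exercise asks; only the `nn.Dropout` layer differs between the two runs.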