In this video, we will show how to build a deep neural network in PyTorch using nn.ModuleList. This will allow you to automate the construction of a neural network with an arbitrary number of layers. We could add more and more layers to our network manually, but this is time consuming and labor intensive, so we will use the ModuleList class from the torch.nn package to automate the process.

Let's construct our neural network model. We create a list called Layers. The first element of the list is the feature size; in this case, two. The second element is the number of neurons in the first hidden layer; in this case, three. The third element is the number of neurons in the second hidden layer, which is four. The fourth element is the number of classes in the output layer, which is three in this case.

We use the list Layers to construct our deep neural network. In the constructor, we create a ModuleList object, and then loop through the list Layers, taking two consecutive elements at each iteration. On the first iteration, the variable input_size takes the first of the two consecutive elements; this corresponds to the input size of our first hidden layer. The variable output_size takes the second of the two elements; this is the number of neurons in the first hidden layer. Since its value is three, the first hidden layer has three neurons, each with an input dimension of two. The second hidden layer of our deep neural network is constructed in the second iteration of the for loop. Now input_size takes the second element of the list Layers, which is the number of neurons in the previous layer.
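The constructor described above can be sketched as follows. The class name Net and the exact variable names are assumptions, but the pairing of consecutive elements from Layers matches the walkthrough:

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, Layers):
        super(Net, self).__init__()
        # ModuleList registers each layer so its parameters are trainable
        self.hidden = nn.ModuleList()
        # Pair each element with the next one: for Layers = [2, 3, 4, 3]
        # this yields (2, 3), (3, 4), (4, 3)
        for input_size, output_size in zip(Layers, Layers[1:]):
            self.hidden.append(nn.Linear(input_size, output_size))

model = Net([2, 3, 4, 3])
```

With Layers = [2, 3, 4, 3], the first Linear layer maps 2 features to 3 neurons, the second maps 3 to 4, and the output layer maps 4 to the 3 classes.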
The third element of the list Layers is the number of neurons in the second hidden layer, i.e. we have four neurons, each with an input dimension of three, given by the variable input_size. We repeat this process to construct the output layer, using the third element of the list as the input size and the fourth element as the output size. The output size is the number of neurons in the output layer, which equals the number of classes; the input size is the input dimension for each of those neurons.

For the forward function, we iterate through the layers of the neural network, repeatedly applying a linear layer and then an activation function. L is the number of layers in our neural network. We loop through each layer along with the index of the layer. We apply the linear transform and then the activation; here we use the ReLU activation function, which often performs well in deep networks. We do this until we get to the last layer. Since we are performing multiclass classification, for the last layer L we apply only the linear layer. As we have three classes in the output, the output layer has three neurons.

The training procedure is similar to that of the previous section. In the lab we will use the following dataset. We can try different combinations of neurons and numbers of layers to see which combination gives the best performance. Now let's see some methods to improve the performance of these networks.
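The forward pass described above can be sketched as follows, assuming the same hypothetical Net class. ReLU is applied after every hidden layer, and the final layer produces raw logits for the multiclass output:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, Layers):
        super(Net, self).__init__()
        self.hidden = nn.ModuleList()
        for input_size, output_size in zip(Layers, Layers[1:]):
            self.hidden.append(nn.Linear(input_size, output_size))

    def forward(self, x):
        L = len(self.hidden)
        for l, linear in enumerate(self.hidden):
            if l < L - 1:
                # Hidden layers: linear transform followed by ReLU
                x = torch.relu(linear(x))
            else:
                # Last layer: linear only, returning logits for the classes
                x = linear(x)
        return x

model = Net([2, 3, 4, 3])
yhat = model(torch.randn(5, 2))  # a batch of 5 samples with 2 features each
```

The logits from the last layer would typically be passed to nn.CrossEntropyLoss during training, which applies the softmax internally.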