Welcome back. In the last session,
we discussed the
autograd.Variable element of PyTorch
and how to create and preserve the computational graph.
We've also briefly mentioned the CUDA functionality of PyTorch.
Today, we're going to create a Linear Model with PyTorch.
So let us start.
So first, we import the required libraries,
starting with the torch library.
As you remember, the torch library is the Tensor library of PyTorch.
Then we import the nn module,
the neural network module, and then we import Variable from torch.autograd.
As you remember, autograd.Variable is the main building block of
PyTorch's computational graph, and finally we import numpy.
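For reference, the import cell likely looks something like this; a minimal sketch, since the exact cell isn't shown in the transcript:

    import torch                          # the Tensor library of PyTorch
    import torch.nn as nn                 # the neural network module
    from torch.autograd import Variable   # building block of the computational graph
    import numpy as np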
So let us execute the next cell.
What we do here is the following.
We are creating some data for our model.
So, first of all, we're creating x;
x is just the numbers from 0 to 19.
Here, we use a nice feature of Python,
which is a list comprehension.
In the next step,
we are converting this x list to
a numpy array and then we reshape it to be a column vector.
And if you look at the print,
you see what we have created.
So we have created the 20 elements of x, from 0 to 19,
and then we have reshaped it to the shape (20, 1).
It's a column vector.
Almost the same thing with y,
but y is now a linear function of x:
every element of y is five multiplied by x[i], or by the index i,
which is the same in this case, plus two.
And then we are creating a numpy array out of it, of type float32,
and then we have reshaped it to be a column vector.
Let us have a look at the print out.
This is the linear combination,
actually a linear function, and the printout is again
this numpy array,
a (20, 1) numpy array.
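Putting that together, the data cell probably looks like this; the names x_train and y_train are assumptions based on how the data is used later:

    x = [i for i in range(20)]            # a list comprehension: 0 .. 19
    x_train = np.array(x, dtype=np.float32).reshape(-1, 1)  # column vector, shape (20, 1)

    y = [5 * i + 2 for i in x]            # the linear function y = 5x + 2
    y_train = np.array(y, dtype=np.float32).reshape(-1, 1)  # column vector, shape (20, 1)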
Now, the important point:
we're creating a linear model with PyTorch,
and every model in PyTorch is created as a class.
So you have to create a class.
It is pretty straightforward and simple, especially for those of you who already have
some experience in programming, especially in object-oriented programming,
like in Java, C#, or Objective-C.
You are creating classes.
A class is actually a blueprint of an object;
an object is a specific instance of a class.
So if you create a class,
you create a template for an object,
and the object is the acting instance.
So now, we're creating a class,
which we are calling LinearRegressor.
The name you can choose freely, whatever you like.
But here's the important point:
we are inheriting this class from the Module class
of the neural network package, nn.
This is the important point:
it must always inherit from Module;
every PyTorch model class inherits from nn.Module.
Next, we are defining the __init__() method.
__init__() is equivalent to
a constructor in other programming languages like Java or C#.
Here, we don't use the class name but __init__;
that is just the Python flavor of doing it.
The constructor will initialize the class
with the values which we are passing here.
So we are passing here the instance of the class itself, self,
then we are passing the input dimension,
the input data dimension, and here
we are passing the output data dimension.
And then we call the super class;
super refers to the class from which we are inheriting.
We are inheriting from Module, so calling super gives us the Module class.
Here, we are calling the __init__() method of the superclass,
__init__() meaning its constructor.
And here, in the next line of code,
we are creating a linear object,
a linear layer from the nn package.
We don't want to implement every nitty-gritty detail ourselves;
we just reuse the ready-made layer from
nn, which already defines how a linear model works.
And we are storing this in our variable,
which is called linear, self.linear.
And this class has only one method,
which is called forward.
To this method we are passing the instance of the class
itself, and we are passing the x data.
So in here, we see again
a call to self.linear, and we return what the linear function computes.
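Reconstructed from this description, the class is probably something like:

    class LinearRegressor(nn.Module):
        def __init__(self, input_dim, output_dim):
            super(LinearRegressor, self).__init__()         # call the Module constructor
            self.linear = nn.Linear(input_dim, output_dim)  # reuse nn's linear layer

        def forward(self, x):
            return self.linear(x)                           # compute the linear function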
That's it, actually, and for every model which you are going to create with PyTorch,
you will create a very similar structure.
So you always create a class
which inherits from nn.Module, and in
the constructor you
define what you are actually going to do in this particular model.
So in this case,
it's a linear model, but there are a lot of other building blocks
which you can take from the nn module.
For example,
you see here Conv1d, Conv2d, and so on,
even GRU, the Gated Recurrent Unit,
if you want to create an RNN,
a Recurrent Neural Network, with GRU or LSTM cells.
For example, you can also create an LSTM itself.
It's very, very granular.
You can create a lot of things, but the structure remains always the same:
you have __init__, and you have forward.
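For example, swapping the layer in the constructor would be all it takes to use one of those other building blocks; these lines are hypothetical illustrations, not part of our model:

    self.conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3)  # a 2D convolution
    self.gru = nn.GRU(input_size=10, hidden_size=20)                      # a Gated Recurrent Unit
    self.lstm = nn.LSTM(input_size=10, hidden_size=20)                    # an LSTM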
So now, we're defining the input dimension;
as you see, we need the input dimension if we want to use this class.
The input dimension is one:
we have one column vector with 20 elements,
so it has one column, and the output dimension is also one.
Here, we instantiate this class,
meaning we create an instance of the class, a running object of the class,
and we pass the input dimension and the output dimension.
These we have defined in our constructor.
We defined that we are passing self,
but self will be passed automatically by Python from the class itself;
the input dimension and output dimension, however,
we have to pass ourselves.
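In code, this step likely reads:

    input_dim = 1
    output_dim = 1
    model = LinearRegressor(input_dim, output_dim)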
So let us execute the cell.
You'll see what model is:
model is a LinearRegressor,
which inherits from Module
and has a Linear layer inside,
with input feature dimension one
and output feature dimension one.
So this is the main building block of our regressor.
Now, we need to specify the loss and optimizer functions,
as in any other deep learning framework.
We specify the loss function as MSELoss, Mean Squared Error loss.
This is also a class, which is stored in
the neural network module, nn, and we specify the optimizer as SGD, Stochastic Gradient Descent.
This is in the torch.optim package.
And here, we have to pass two arguments, which are the model parameters, from the model
which we have created already,
and we also have to pass the learning rate,
which is here 0.001.
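A sketch of that cell; the name criterion for the loss function is an assumption:

    criterion = nn.MSELoss()                                   # Mean Squared Error loss
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001)  # Stochastic Gradient Descent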
Let us print this out.
Execute and print out.
So we have created optimizer,
and we have created loss function.
We can also print loss function.
Okay, not so much information: MSELoss.
It's okay.
So we have created both.
Now, we are coming to the training.
Here, we only have to specify the number of epochs,
which I have set pretty high, at 500,
but our data is very small,
so it will be very, very fast.
And then, we create a for loop over the range of epochs, so 500 epochs.
Now, we are coming to the inner structure of this for loop,
and this is very important.
This is also pretty much the same for any other PyTorch model.
So first of all,
we increment epoch, and then we are creating the inputs.
The inputs we have to convert from a numpy array to torch,
which is done here with
torch.from_numpy, and we are passing the training data.
This is the data which we are passing to our autograd Variable.
As you can remember, the autograd Variable is required to
build the computational graph, which we need to compute gradients later.
So we pass the data to the Variable,
and we get our inputs.
Then, we have to reset the gradients to zero.
So every epoch, we reset the gradients of the optimizer.
In the next step, we do the forward pass.
So we are actually predicting.
We already have our model.
We have instantiated our model,
which is here, and using this model,
we can already predict.
So we are predicting:
we are passing the inputs,
and we are getting the predicted outputs.
The next step is computing the loss.
So now, we can compute the loss.
We know what the real outputs are:
here, we are passing y_train.
We have defined and specified y_train,
we know what it is, and here
we just convert it from a numpy array to a torch tensor and wrap it in a Variable;
these are the real outputs.
And here, we have the predicted outputs, and
now we apply the loss function:
we pass the loss function
the predicted outputs and the real outputs.
And then we compute the loss,
then we call backward on the loss to compute
the gradients, and then we call optimizer.step().
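The whole training loop, then, is roughly this; a sketch assuming the x_train, y_train, and criterion names from above:

    epochs = 500
    for epoch in range(epochs):
        epoch += 1
        inputs = Variable(torch.from_numpy(x_train))    # wrap the training data for the graph
        targets = Variable(torch.from_numpy(y_train))   # the real outputs

        optimizer.zero_grad()               # reset the gradients every epoch
        outputs = model(inputs)             # forward pass: predict
        loss = criterion(outputs, targets)  # compare predictions with the real outputs
        loss.backward()                     # compute the gradients
        optimizer.step()                    # update the model parameters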
This is actually the skeleton of every single model training in PyTorch.
You will always run the same steps, even if you want to create the most,
most complicated neural network;
it will be the same.
PyTorch uses this same optimization technique to optimize
even the simplest models, like a linear model or a logistic model.
It's the same. It will always run the same steps
and actually apply the same technique of optimization.
And then you have to print out the results yourself.
Actually, in PyTorch, you will not have
the convenient methods you have in
Keras, where you just specify what you would like to be printed out,
and Keras prints out everything for you.
Here, you just have to write a small line of code to see the output,
but this is the price for this flexibility.
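That small line could be as simple as the following; in the Variable-era PyTorch shown here, loss.data[0] was the usual way to read the scalar loss value:

    print('epoch {}, loss {}'.format(epoch, loss.data[0]))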
So you have here complete flexibility
in how you build your computational graph.
You can decide at runtime what will be in the graph.
You can specify whether to run the tensors on CUDA, on the GPU, or on the CPU.
You can specify everything, even the sizes of the tensors,
whatever you like, at runtime, and you can change it at runtime.
And this is great for
flexibility, and my opinion is that the price which you have to pay
for it, writing some boilerplate code, is not too much.
So let us execute this
for loop and train our model.
It's very fast.
It's already trained.
So we have 500 epochs, and you see that the loss decreases continuously;
it decreases, and then it settles at around 0.2,
and actually, we have very good accuracy here.
We can actually be proud of ourselves.
It's a very simple model, of course,
but if you would like to create something more complex,
you will actually use the same structure as we have discussed here.
Okay. This is the last session in our introductory session cycle on PyTorch.
I hope you have enjoyed it, and see you next time. Bye bye.