As the first thing, in the next cell we clear the default graph by using this function. Then we create three nodes: two variables, x and y, and a constant a. They are initialized with the values three, four, and two, respectively. Now, let's check that our created nodes are indeed on the default graph. Very well, they are. As a side remark, I want to show you that nodes can also be created on some other graph. Here's an example of a node x2 that is created on a graph called graph. If we evaluate these statements, this is what we see. Now, let's evaluate this cell and see that the nodes we just created are not yet initialized. In the next cell, we define a function f equal to x squared plus y plus a. Let's evaluate the cell. If we now print the value of f, we see that it's a Tensor of type Add. There is no value of f yet, which is an example of lazy evaluation in TensorFlow. Now, everything is ready to run our first graph in TensorFlow. To this end, we need to start a TensorFlow session. After we open a session, we have to initialize our variables, here and here, and then run our function. Please note that there is no need to initialize constants in TensorFlow. After a session has ended, we should manually close it using this command. Let's run this to see the results. This is the result. There is a more convenient way to close the session automatically, using the with construction, as shown in this cell. Finally, the whole code can be made even shorter if we introduce a new node on the graph here that takes care of the initialization of all variables at once. And here is how we use it. So if we run this cell, we again see the same result. To check what node was created by TensorFlow, we can type init here. This is the result. Our next illustration shows the lifecycle of a node value. When you create a node, it only gets a value during the execution phase, when you run a TensorFlow session.
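The steps described above can be sketched as follows. This is a minimal reconstruction, not the notebook's exact code, and it assumes TensorFlow 2.x with its v1 compatibility module so the graph-and-session workflow from the video still runs; the names x, y, a, f, and init follow the transcript.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()   # restore graph-mode (lazy) evaluation

tf.reset_default_graph()       # start from a clean default graph

x = tf.Variable(3, name="x")   # variable nodes with initial values 3 and 4
y = tf.Variable(4, name="y")
a = tf.constant(2, name="a")   # constants need no initialization

f = x * x + y + a              # no value yet: f is just a node of type Add

init = tf.global_variables_initializer()  # one node that initializes all variables

with tf.Session() as sess:     # the with-block closes the session automatically
    init.run()
    result = f.eval()          # running the graph: 3*3 + 4 + 2 = 15

print(result)
```

Outside the with-block, f has no value again; only running a session gives nodes their values.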
Let's consider this example. Here we create a node for a constant w equal to three, and then we create three nodes x, y, and z. Let's assume that we want to compute the values of y and z. One way to do it would be as shown in the cell. In this case, the TensorFlow graph would be traversed twice to compute the values of y and z independently of each other, even though y and z both use the same value of x. It's important to remember that all node values are dropped between different runs of the graph. The only exception to this rule are variables, which start their life when their initializer is run and end it when the session is closed. So after we run the cell, we can check the state of node x again, and we see that it is again uninitialized. Also note that this code evaluates both w and x twice, computing y and z in two separate traversals of the graph. This can be done more efficiently by telling TensorFlow to do all calculations in one pass over the graph. The syntax for this is shown in this cell. This is how we do it. If we evaluate it, we get the same result. Next, I want to demonstrate the working of reverse-mode autodiff in TensorFlow. Let's consider the function shown here. It's an exponent of an exponent of an exponent. This function is actually pretty similar to loss functions that are implemented via neural networks, so this example might be useful for your understanding of how neural networks work in TensorFlow. Let's see how we implement this function. It's quite straightforward, as shown in this cell. Here, we first compute the output of the first layer, then use it as an input to calculate the output of the second layer, and finally use that output to produce the value of the function. We return all three of these values as the output of the function. Let's run the cell. We can also put some syntactic sugar on top of this and define each layer within its own name scope, as shown in this cell.
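The two ways of fetching y and z can be sketched like this. This is a hedged reconstruction: the constant value 3 comes from the transcript, but the exact expressions for x, y, and z are my own illustrative choices, again using the v1 compatibility module.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
tf.reset_default_graph()

w = tf.constant(3)   # constant node, as in the transcript
x = w + 2            # illustrative expressions (not the notebook's exact ones)
y = x + 5
z = x * 3

with tf.Session() as sess:
    # Two separate runs: the graph is traversed twice, and x is recomputed
    # for each fetch, because node values are dropped between runs.
    y_val = sess.run(y)
    z_val = sess.run(z)

    # One pass over the graph: x is evaluated once and reused for both fetches.
    y_val2, z_val2 = sess.run([y, z])

print(y_val, z_val)    # 10 15
print(y_val2, z_val2)  # 10 15
```

Passing a list of fetches to a single sess.run call is the more efficient pattern whenever the fetched nodes share intermediate values.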
And this will be useful later for visualization of the TensorFlow graph. Now, let's specify a point at which we want to compute the derivative. I want to take a point where all intercepts equal zero and all slopes equal one. Let's do it here, and let's verify what we got. That's the right point. Next, we compute all derivatives analytically, which is presented here. You can go through the details of this calculation on your own with the notebook that accompanies this video. But for now, let's see how TensorFlow computes these derivatives. So we clear the graph to start from scratch again, create two variable nodes, w and x, both of floating type, and then call our function and compute the values of all the layers: f2, f1, and f0. The next line is the key line. Here we define a node for the gradients of the outer function f2 with respect to all parameters w of the function. So let's evaluate it and see the tensor that it returns. The evaluation here is done simply by calling tf.gradients with two arguments: f2, the name of the function, and the variables with respect to which we want to compute the gradients. Now, let's move to the next cell, and here is where we run our TensorFlow graph. We can run it twice to compute first the function values and then the values of the gradients, using the code commented out here, or we can do it in one run using the same syntax that was shown to you before. Let's run the cell, and boom. We have the gradients: six numbers, matching the number of free parameters, and you can check manually that these are the correct numbers, using the formulas that were given above. Now, we check that after the session is over, the function again returns an uninitialized tensor. And finally, I want to show you how we can visualize the TensorFlow graph. There are two ways to do this: either in a Jupyter notebook or using TensorBoard.
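The nested-exponential example and the tf.gradients call can be sketched as below. The exact parameterization in the notebook is not fully shown in the video, so this is an assumption: each "layer" here has the form exp(intercept + slope * input), giving six free parameters in w, with all intercepts zero and all slopes one at the evaluation point. The tf.gradients usage itself is exactly as described.

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
tf.reset_default_graph()   # clear the graph to start from scratch

x = tf.Variable(1.0, dtype=tf.float32, name="x")
# six free parameters: (intercept, slope) per layer; intercepts 0, slopes 1
w = tf.Variable(np.array([0., 1., 0., 1., 0., 1.], dtype=np.float32), name="w")

def fun(x, w):
    f0 = tf.exp(w[0] + w[1] * x)    # first layer
    f1 = tf.exp(w[2] + w[3] * f0)   # second layer, fed by the first
    f2 = tf.exp(w[4] + w[5] * f1)   # outer function: exp of exp of exp
    return f2, f1, f0

f2, f1, f0 = fun(x, w)
grads = tf.gradients(f2, [w])       # node for d f2 / d w (reverse-mode autodiff)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # one run computes both the function value and the gradients
    f_val, g_val = sess.run([f2, grads])

print(f_val)
print(g_val[0])   # six gradient numbers, one per free parameter
```

Running sess.run with both fetches in one list computes the function and its gradients in a single pass over the graph, as in the earlier example.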
Here, I'll show you the first method. The code for this is given in this cell; it's code that I borrowed from Géron's book. So, let's run this and move to the next cell, where we show the graph. Let's run it, and boom. We got our TensorFlow graph for our code. Now, you can see why we introduced these name scopes in our definition of the function. TensorFlow puts all name scopes in separate boxes like these ones, and we can click on them and see the ops inside. Later in our demos, I will show you how to run TensorBoard to visualize the graph, as well as how to see the performance of different training algorithms on it. This is what we're going to see next.
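The second method, TensorBoard, can be sketched as follows. This is a minimal illustration rather than the demo's actual code: the log directory name tf_logs and the small name-scoped graph are my own placeholders.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
tf.reset_default_graph()

# name scopes become collapsible boxes in the rendered graph
with tf.name_scope("layer_0"):
    x = tf.Variable(1.0, name="x")
    f0 = tf.exp(x)

# write the graph definition to an event file that TensorBoard can read
writer = tf.summary.FileWriter("tf_logs", tf.get_default_graph())
writer.close()
# then, from a shell:  tensorboard --logdir tf_logs
```

Opening the address TensorBoard prints and switching to its Graphs tab shows the same boxed name scopes as the in-notebook visualization.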