You've finished three of the four courses of this specialization on data and deployment. Welcome to this final course on advanced deployment scenarios. In this course, you're going to learn about several advanced topics. Which advanced topics? We had so many to choose from. We're going to start with TensorFlow Serving, so you can learn how to serve your models over HTTP and HTTPS. Then we'll look at TensorFlow Hub, a repository of models that you can take models from, using an entire model or using transfer learning to take layers from it. Then there's TensorBoard, and a new part of TensorBoard called TensorBoard.dev, where you can actually deploy the metadata about your model and get back a URL that you can share, so other people can look at your model, inspect it, and maybe help you debug it. Finally, we'll end with the one that I'm really most excited about, and that's federated learning: when you have your model deployed in the wild, how do you then effectively do federated learning from it? So we'll start with TensorFlow Serving, then TensorFlow Hub, TensorBoard, and federated learning.

In this first week, we'll start with TensorFlow Serving. You've probably found that after you've trained a machine learning or deep learning model, there are sometimes so many steps: take the model, package it up, host it on a cloud server, maintain that cloud-hosted server, and stand up an API so that you or someone else can call your model to get predictions back. So anything that TensorFlow provides to make all those steps easier just makes life easier for developers.

Exactly, and that's the whole idea behind this, and then also model versioning. As you retrain your model and save it out into a new directory, you can serve from that one; you can have multiple models and handle your model versioning like that. We try to make that as easy as possible for the developer.

So you can deploy something to a cloud-hosted server, and when you version it, push a new model and have that just work, without too much messing around with saving models into different directories and copying and pasting them into the right place.

Exactly, or taking a server offline to update it, those kinds of things. The goal is really to reduce the friction for developers so that they can have a serving infrastructure.

Great. So you've learned a lot about how to train deep learning models. With the TensorFlow Serving infrastructure, you'll be able to easily push a model that you've trained to cloud hosting infrastructure and have it ready to accept HTTP requests, so that you or someone else can make queries to your model and get predictions out. To learn how to do this, let's go on to the next video.
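Before you do, here's a quick sketch to make the Serving idea concrete. Once a model is running behind TensorFlow Serving, querying it over HTTP is a single POST to its REST endpoint. The model name `my_model`, the four-value input, and the local server on the default REST port 8501 are all placeholder assumptions for illustration.

```python
import json
import requests

# TensorFlow Serving exposes a REST endpoint of the form:
#   http://<host>:8501/v1/models/<model_name>:predict
url = "http://localhost:8501/v1/models/my_model:predict"

# A batch with one example; the inner list must match the model's input shape.
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}

response = requests.post(url, data=json.dumps(payload))
print(response.json()["predictions"])
```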
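The versioning workflow mentioned above relies on TensorFlow Serving's directory convention: it watches a base model directory for numbered subdirectories and, by default, loads and serves the highest version number it finds. A minimal sketch, with a hypothetical model and hypothetical export paths:

```python
import tensorflow as tf

# A stand-in model just for the export; any trained Keras model works the same.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# TensorFlow Serving treats numeric subdirectories as versions:
#   models/my_model/1/  first export
#   models/my_model/2/  a retrained model; Serving switches to it automatically
tf.saved_model.save(model, "models/my_model/1")

# ...retrain the model, then export it under the next version number:
tf.saved_model.save(model, "models/my_model/2")
```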
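The TensorFlow Hub workflow from earlier, pulling a pre-trained model and reusing its layers through transfer learning, looks roughly like this in Keras. The MobileNet V2 feature-vector module is just one example from tfhub.dev, and the ten-class head is an assumption for illustration; a frozen feature extractor plus a new classification head is the standard transfer-learning setup.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Reuse a pre-trained feature extractor from TensorFlow Hub as a Keras layer.
# trainable=False freezes its weights, so only the new head gets trained.
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
    input_shape=(224, 224, 3),
    trainable=False,
)

model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(10, activation="softmax"),  # new task-specific head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```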
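And the TensorBoard.dev step, uploading your experiment logs so that you get back a shareable URL, comes down to a single command-line call; the `./logs` directory and the experiment name here are placeholders:

```bash
tensorboard dev upload --logdir ./logs --name "My experiment"
```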