Optimize TensorFlow Models For Deployment with TensorRT

4.7 (10 ratings)
Offered by Coursera Project Network
In this Guided Project, you will:

Optimize TensorFlow models using TensorRT (TF-TRT)

Use TF-TRT to optimize several deep learning models at FP32, FP16, and INT8 precision

Observe how tuning TF-TRT parameters affects performance and inference throughput

1.5 hours
Intermediate
No download needed
Split-screen video
English
Desktop only

This is a hands-on, guided project on optimizing your TensorFlow models for inference with NVIDIA's TensorRT. By the end of this 1.5-hour project, you will be able to optimize TensorFlow models using the TensorFlow integration of NVIDIA's TensorRT (TF-TRT), use TF-TRT to optimize several deep learning models at FP32, FP16, and INT8 precision, and observe how tuning TF-TRT parameters affects performance and inference throughput.

Prerequisites: In order to successfully complete this project, you should be competent in Python programming, understand deep learning and what inference is, and have experience building deep learning models in TensorFlow and its Keras API.

Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
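
As a taste of the workflow described above, here is a minimal sketch of a TF-TRT FP32 conversion using TensorFlow's `TrtGraphConverterV2` API. The SavedModel directory names and the workspace size below are illustrative assumptions, not values prescribed by the course.

```python
# Minimal TF-TRT FP32 conversion sketch; directory names are assumptions.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Start from the default conversion parameters and request FP32 precision.
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP32,
    max_workspace_size_bytes=8_000_000_000)  # illustrative GPU workspace budget

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='inceptionv3_saved_model',  # assumed input SavedModel
    conversion_params=conversion_params)

converter.convert()  # replace supported subgraphs with TF-TRT ops
converter.save(output_saved_model_dir='inceptionv3_saved_model_TFTRT_FP32')
```

Converting to FP16 is the same call with `precision_mode=trt.TrtPrecisionMode.FP16`; lower precision generally trades a small amount of accuracy for higher inference throughput.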

Skills you will develop

Deep Learning · NVIDIA TensorRT (TF-TRT) · Python Programming · TensorFlow · Keras

Learn step-by-step

In a video that plays in a split-screen with your work area, your instructor will walk you through these steps:

  1. Introduction and Project Overview

  2. Set Up Your TensorFlow and TensorRT Runtime

  3. Load the Data and Pre-trained InceptionV3 Model

  4. Create Batched Input

  5. Load the TensorFlow SavedModel

  6. Get Baseline for Prediction Throughput and Accuracy

  7. Convert a TensorFlow SavedModel into a TF-TRT Float32 Graph

  8. Benchmark TF-TRT Float32

  9. Convert to TF-TRT Float16 and Benchmark

  10. Convert to TF-TRT INT8 (see the calibration and benchmarking sketch after this list)
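
The last steps in the outline, INT8 conversion and benchmarking, follow the same converter API but add a calibration input function and a timing loop. The sketch below assumes a `batched_input` tensor of preprocessed InceptionV3-sized images; the batch size, directory names, and iteration counts are illustrative assumptions rather than values from the course.

```python
# Sketch: TF-TRT INT8 conversion with calibration, then a simple throughput benchmark.
import time

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
from tensorflow.python.saved_model import tag_constants

# Placeholder calibration batch; in the project this would hold real preprocessed images.
batched_input = tf.zeros((32, 299, 299, 3), dtype=tf.float32)

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.INT8,
    max_workspace_size_bytes=8_000_000_000,
    use_calibration=True)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='inceptionv3_saved_model',  # assumed input SavedModel
    conversion_params=params)

def calibration_input_fn():
    # Yield representative batches so TF-TRT can choose INT8 quantization ranges.
    yield (batched_input,)

converter.convert(calibration_input_fn=calibration_input_fn)
converter.save(output_saved_model_dir='inceptionv3_saved_model_TFTRT_INT8')

# Reload the converted SavedModel and measure prediction throughput.
loaded = tf.saved_model.load('inceptionv3_saved_model_TFTRT_INT8',
                             tags=[tag_constants.SERVING])
infer = loaded.signatures['serving_default']

for _ in range(10):          # warm-up runs so engine building is not timed
    infer(batched_input)

n_iters = 100
start = time.time()
for _ in range(n_iters):
    infer(batched_input)
elapsed = time.time() - start
print(f'Throughput: {n_iters * int(batched_input.shape[0]) / elapsed:.1f} images/s')
```

Comparing the measured throughput against the FP32 and FP16 results is how the project shows the effect of the precision mode and other converter parameters.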

How Guided Projects work

Your workspace is a cloud desktop right in your browser; no download is required.

In a split-screen video, your instructor guides you through each step.

Frequently asked questions

Still have questions? Visit the Learner Help Center.