Image Compression and Generation using Variational Autoencoders in Python

4.7 (68 ratings)
Offered by Coursera Project Network
3,157 already enrolled
In this Guided Project, you will learn:

How to preprocess and prepare data for vision tasks using PyTorch

What a variational autoencoder is and how to train one

How to compress, reconstruct, and generate new images using a generative model

90 minutes
Intermediate
No download needed
Split-screen video
English
Desktop only

In this 1-hour long project, you will be introduced to the variational autoencoder. We will discuss some basic theory behind this model, then move on to creating a machine learning project based on this architecture. Our data comprises 60,000 characters from a dataset of fonts. We will train a variational autoencoder capable of compressing this character font data from 2,500 dimensions down to 32 dimensions; the same model will then be able to reconstruct its original input with high fidelity. The true advantage of the variational autoencoder is its ability to create new outputs drawn from distributions that closely follow its training data: we can output characters in brand-new fonts. Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
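As a rough sketch of what such an architecture looks like in PyTorch: the 2,500-to-32 compression below comes from the description above, while the hidden width, activations, and layer count are assumptions, not the course's exact model.

```python
# Minimal VAE sketch: encode 2,500-dim font glyphs to a 32-dim latent space,
# then decode back. Hidden size and activations are illustrative assumptions.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, input_dim=2500, latent_dim=32, hidden_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),  # pixels in [0, 1]
        )

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

model = VAE()
x = torch.rand(4, 2500)       # a batch of 4 flattened 50x50 glyph images
recon, mu, logvar = model(x)  # recon: (4, 2500), mu and logvar: (4, 32)
```

Because the decoder only needs `z`, generating a brand-new glyph is just `model.decoder(torch.randn(1, 32))`.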

Skills you will develop

  • Image Compression
  • Machine Learning
  • Vision

Learn step-by-step

In a video that plays in a split-screen with your work area, your instructor will guide you through each of these steps:

  1. An introduction to the variational autoencoder and our project

  2. Dataset visualization and preprocessing

  3. Dataset split into training and validation sets

  4. Use data loaders to handle memory overload

  5. Create VAE architecture

  6. Create training loop for VAE

  7. Results of our model and a short introduction to other potential projects using a VAE
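Steps 3 through 6 above can be sketched as follows. Here the dataset is random stand-in data (not the course's font dataset), and `TinyVAE` is a deliberately minimal stand-in model; the split sizes, batch size, and learning rate are illustrative assumptions.

```python
# Sketch of steps 3-6: train/validation split, data loaders, and a VAE
# training loop optimizing the ELBO (reconstruction loss + KL divergence).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset, random_split

class TinyVAE(nn.Module):
    def __init__(self, d_in=2500, d_z=32):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)  # outputs [mu, logvar] stacked
        self.dec = nn.Linear(d_z, d_in)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return torch.sigmoid(self.dec(z)), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence from the standard normal prior.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

data = torch.rand(256, 2500)  # stand-in for the flattened font glyphs
train_set, val_set = random_split(TensorDataset(data), [200, 56])
loader = DataLoader(train_set, batch_size=32, shuffle=True)  # batches avoid
                                                             # loading all data at once
model = TinyVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(2):  # a couple of epochs is enough for the sketch
    for (x,) in loader:
        opt.zero_grad()
        recon, mu, logvar = model(x)
        loss = elbo_loss(recon, x, mu, logvar)
        loss.backward()
        opt.step()
```

The same loop, run without gradient tracking over a loader for `val_set`, gives the validation loss used to monitor overfitting.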

How Guided Projects work

Your workspace is a cloud desktop right in your browser, no download required

In a split-screen video, your instructor guides you step-by-step

Instructor

Reviews

Top reviews from IMAGE COMPRESSION AND GENERATION USING VARIATIONAL AUTOENCODERS IN PYTHON


Frequently asked questions

More questions? Visit the Learner Help Center.