Case Study: Predicting Housing Prices


A course from the University of Washington

Machine Learning: Regression

3,589 ratings


From the lesson

Ridge Regression

You have examined how the performance of a model varies with increasing model complexity, and can describe the potential pitfall of complex models becoming overfit to the training data. In this module, you will explore a very simple, but extremely effective technique for automatically coping with this issue. This method is called "ridge regression". You start out with a complex model, but now fit the model in a manner that not only incorporates a measure of fit to the training data, but also a term that biases the solution away from overfitted functions. To this end, you will explore symptoms of overfitted functions and use this to define a quantitative measure to use in your revised optimization objective. You will derive both a closed-form and gradient descent algorithm for fitting the ridge regression objective; these forms are small modifications from the original algorithms you derived for multiple regression. To select the strength of the bias away from overfitting, you will explore a general-purpose method called "cross validation". You will implement both cross-validation and gradient descent to fit a ridge regression model and select the regularization constant.
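The closed-form ridge solution mentioned in the module description can be sketched in a few lines of NumPy. This is a minimal illustration, not the course's own code: the synthetic data, the penalty value, and the decision to penalize every coefficient (including any intercept column) are assumptions made for the example.

```python
import numpy as np

def ridge_closed_form(X, y, l2_penalty):
    """Closed-form ridge solution: w = (X^T X + lambda * I)^(-1) X^T y.
    In practice the intercept is often left unpenalized; here all
    coefficients are penalized to keep the sketch short."""
    n_features = X.shape[1]
    A = X.T @ X + l2_penalty * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Tiny synthetic regression problem (invented for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

w_ols = ridge_closed_form(X, y, 0.0)    # lambda = 0 recovers least squares
w_ridge = ridge_closed_form(X, y, 10.0) # larger lambda shrinks coefficients
```

With `l2_penalty = 0` this reduces to the ordinary least-squares solution from the multiple regression module; as the penalty grows, the coefficient vector shrinks toward zero.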

- Emily Fox, Amazon Professor of Machine Learning, Statistics

- Carlos Guestrin, Amazon Professor of Machine Learning, Computer Science and Engineering

[MUSIC]

In the last module, we talked about the potential for

high complexity models to become overfit to the data.

And we also discussed this idea of a bias-variance tradeoff.

Where high complexity models could have very low bias, but high variance.

Whereas low complexity models have high bias, but low variance.

And we said that we wanted to trade off between bias and

variance to get to that sweet spot of having good predictive performance.

And in this module, what we're gonna do is talk about a way to automatically balance

between bias and variance using something called ridge regression.

So let's recall this issue of overfitting in the context of polynomial regression.

And remember, this is our polynomial regression model.

And if we assume we have some low order of polynomial that we're fitting to our data,

we might get a fit that looks like the following.

This is just a quadratic fit to the data.

But once we get to a much higher order polynomial,

we can get these really wild fits to our training observations.
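The contrast between a quadratic fit and a wild high-order fit can be reproduced with a quick NumPy experiment. This is a hedged sketch, not the lecture's demo: the data points, noise level, and degrees are invented for illustration.

```python
import numpy as np

# Noisy data generated from a true quadratic (invented for this sketch).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 12)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.05 * rng.normal(size=x.size)

def train_mse(degree):
    """Fit a polynomial of the given degree and return its training MSE."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    return np.mean(residuals**2)

err_quadratic = train_mse(2)   # matches the true model's complexity
err_high = train_mse(9)        # high-order fit chases the noise
```

The high-order polynomial drives the training error far below the quadratic's, exactly the "very well tuned to the training observations" behavior described here, even though it generalizes worse.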

Again, this is an instance of a high variance model.

But we refer to this model or this fit as being overfit.

Because it is very, very well tuned to our training observations, but

it doesn't generalize well to other observations we might see.

So, previously we had discussed a very formal notion of what it means for

a model to be overfit.

Namely, a model is overfit if its training error is lower than that of

another model whose true error is actually smaller.

Okay, hopefully you remember that from the last module.

But a question we have now is, is there some type of quantitative measure

that's indicative of when a model is overfit?

And to see this, let's look at the following demo,

where what we're going to show is that when models become overfit,

the estimated coefficients of those models tend to become really,

really, really large in magnitude.
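The symptom described here, coefficient magnitudes blowing up as a model overfits, is easy to observe numerically. The following is a small sketch under assumed data (a noisy sine curve and invented degrees), not the lecture's actual demo.

```python
import numpy as np

# Noisy samples from a sine curve (invented for this illustration).
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)

def max_abs_coeff(degree):
    """Largest coefficient magnitude of a degree-`degree` polynomial fit."""
    return np.max(np.abs(np.polyfit(x, y, degree)))

low_degree_magnitude = max_abs_coeff(3)    # modest coefficients
high_degree_magnitude = max_abs_coeff(12)  # coefficients blow up
```

Printing the two magnitudes shows the high-degree fit's coefficients are orders of magnitude larger, which is the quantitative overfitting signal that ridge regression will penalize in this module.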

[MUSIC]