SPEAKER: Let's play a game.
Close your eyes and picture a shoe.
OK.
Did anyone picture this?
This?
How about this?
We may not even know why, but each of us
is biased toward one shoe over the others.
Now, imagine that you're trying to teach a computer
to recognize a shoe.
You may end up exposing it to your own bias.
That's how bias happens in machine learning.
But first, what is machine learning?
Well, it's used in a lot of technology we use today.
Machine learning helps us get from place to place,
gives us suggestions, translates stuff, even
understands what you say to it.
How does it work?
With traditional programming, people
hand-code the solution to a problem, step by step.
With machine learning, computers learn the solution
by finding patterns in data, so it's
easy to think there's no human bias in that.
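To make that contrast concrete, here's a minimal sketch -- not from the video, with an invented "is this a shoe?" task and scikit-learn standing in for the learning step:

    from sklearn.tree import DecisionTreeClassifier

    # Traditional programming: a person hand-codes the rule, step by step.
    def is_shoe_hand_coded(has_sole, has_laces):
        # The rule reflects the author's own idea of a shoe.
        return has_sole and has_laces

    # Machine learning: the computer finds the pattern in labeled examples.
    # Each row is [has_sole, has_laces]; the labels say whether it's a shoe,
    # and whoever wrote those labels decided what counts as one.
    examples = [[1, 1], [1, 0], [0, 1], [0, 0]]
    labels = [1, 1, 0, 0]

    model = DecisionTreeClassifier().fit(examples, labels)
    print(model.predict([[1, 0]]))  # [1]: a laceless shoe the hand-coded rule rejects

Either way, a person's choices -- the rule in one case, the labels in the other -- end up inside the system.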
But just because something is based on data
doesn't automatically make it neutral.
Even with good intentions, it's impossible to separate
ourselves from our own human biases,
so those biases become part of the technology
we create in many different ways.
There's interaction bias, like this recent game
where people were asked to draw shoes for the computer.
Most people drew ones like this.
So as more people interacted with the game,
the computer learned from what they drew,
and it didn't even recognize these.
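As a rough sketch of that feedback loop, with all numbers invented: if the recognizer only keeps shapes that players draw often enough, rarer shoes drop out entirely.

    from collections import Counter

    # Simulated player input: most people drew sneakers.
    player_drawings = ["sneaker"] * 40 + ["boot"] * 8 + ["heel"] * 2

    # A recognizer that only learns shapes it has seen at least 5 times.
    counts = Counter(player_drawings)
    known_shapes = {shape for shape, n in counts.items() if n >= 5}

    print(known_shapes)            # {'sneaker', 'boot'}
    print("heel" in known_shapes)  # False -- the computer doesn't recognize heels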
Latent bias-- for example, if you're training a computer
on what a physicist looks like, and you're using pictures
of past physicists, your algorithm
will end up with a latent bias skewing towards men.
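A back-of-the-envelope version of that skew, with invented counts:

    # Invented counts: a training set built from pictures of past physicists.
    physicist_photos = {"men": 95, "women": 5}
    other_photos = {"men": 50, "women": 50}

    # A model that learns from these frequencies will tie "physicist" to "man",
    # even though gender has nothing to do with physics.
    for group in ("men", "women"):
        total = physicist_photos[group] + other_photos[group]
        print(group, round(physicist_photos[group] / total, 2))
    # men 0.66
    # women 0.09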
And selection bias-- say you're training a model
to recognize faces.
Whether you grab images from the internet or your own photo
library, are you making sure to select
photos that represent everyone?
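One way to act on that question, sketched here with hypothetical metadata fields, is to audit the distribution of the photos you selected before training:

    from collections import Counter

    # Hypothetical metadata for the photos selected for training.
    selected_photos = [
        {"file": "img001.jpg", "skin_tone": "dark", "age_group": "adult"},
        {"file": "img002.jpg", "skin_tone": "light", "age_group": "adult"},
        {"file": "img003.jpg", "skin_tone": "light", "age_group": "child"},
        {"file": "img004.jpg", "skin_tone": "light", "age_group": "adult"},
    ]

    # Count who is actually represented; big imbalances are a warning sign.
    for attribute in ("skin_tone", "age_group"):
        print(attribute, dict(Counter(photo[attribute] for photo in selected_photos)))
    # skin_tone {'dark': 1, 'light': 3}
    # age_group {'adult': 3, 'child': 1}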
Since some of our most advanced products use machine learning,
we've been working to prevent that technology
from perpetuating negative human bias--