Before managing complete applications with Kubernetes and Container Engine,
you'll want to know how containers work and what advantages they provide.
If you were to build an online retail application with user sign-in,
inventory management, billing, and shipping,
you could break your application into these modules,
each requiring a number of
software instances and running on a number of machines to scale.
Each module may have its own hardware and software requirements to
run and scale properly, such as operating system,
RAM, CPU, disk, networking,
and even software dependencies.
Instead of writing these requirements in code,
you can declare them outside your code in configuration files,
and package your code together with its dependencies into containers for easy management.
This module steps back
and describes how applications evolved into containers.
We explain why it's important to run
isolated containers on virtualized operating systems.
This also lays the groundwork for you to set up
continuous delivery of your applications into different environments like development,
staging, and production, and into new configurations.
It also allows you to dig deeper into the details of
your applications as they become more sophisticated.
Welcome. My name is Raul Lozano.
This is an introduction to containers and Docker.
Let's get started.
Looking back at the old days,
applications were built on individual bare-metal servers:
you usually had to install the hardware,
then an OS or a kernel,
then all of the dependencies on top of that,
and after all of that you installed the application code.
Even after doing all that,
it usually became really hard to
keep the application dependencies and everything in sync.
So, VMware came up with a way to virtualize your hardware,
in other words, to decouple your hardware from your kernel,
from your dependencies, and from your application.
They developed a hardware hypervisor layer
that decoupled everything above the hardware,
including the application itself.
That was one level of abstraction.
But we had a problem back then.
The problem is that you cannot install
multiple versions of the same application into one single VM,
one single virtual machine.
If you do that, you get dependency errors,
conflicts, and so forth.
So, the answer back in the old days, again, was to create more virtual machines.
So yes, sometimes you would have this scenario here
where you have many,
many virtual machines, several probably running
the same version of the same application,
simply because you could not run them all on
one virtual machine due to the dependency conflicts.
So, it was costly, inefficient,
and a poor use of hardware as well.
Finally, what we see right now is a different level of abstraction.
With the VM and the hypervisor,
we have decoupled the hardware layer at the bottom, as you see on the right-hand side;
then, with the kernel and a container runtime, the container
is decoupled completely from the dependencies of any one application.
The good thing about this:
it's portable, it's very efficient,
and what's inside has no dependencies on the host's installed software or on the hardware itself.
Why do developers like this?
Well, a couple of reasons.
First of all, you can develop across development, test, and production environments at the same time.
You can also run this on bare metal,
on virtual machines, or in the cloud.
Containers can be packaged and moved around quickly, which speeds up development,
enables agile creation and deployment,
and supports continuous integration and delivery.
You also get a single artifact to copy, with shared layers stored only once.
This provides a path to microservices: introspectable,
isolated, and elastic, all at the same time.
A quick look at history.
Back in 2004, we had limited isolation: yes, you had these applications,
but they were all running in separate VMs, or separate groups,
or separate entities themselves,
whatever the hypervisor technology was.
In 2006, Google created a very,
very important technology called control groups, or cgroups.
Control groups limit and isolate the resources, such as CPU and memory,
used by groups of processes; container runtimes use them
to manage these applications and keep them consistent.
In 2013, Docker became really,
really popular; everybody's been using Docker since,
and as you can see,
Docker really provides that layer again:
the Dockerfile provides that layer of
decoupling between the application and the base OS itself.
And on top of that,
all that you see is a container layer.
And at the left here,
you will see a Dockerfile;
it's just a declaration of the base image,
the dependencies, and the command the runtime executes, very simple.
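As a minimal sketch, a Dockerfile like the one on the slide might look like this (the base image, file names, and install step here are illustrative assumptions, not the exact slide contents):

    # Hypothetical Dockerfile: declares the base OS, the dependencies,
    # the application code, and the command the runtime executes.
    FROM ubuntu:22.04                                   # base OS layer
    RUN apt-get update && apt-get install -y python3    # dependencies
    COPY app.py /app/app.py                             # application code
    CMD ["python3", "/app/app.py"]                      # start command

You would then build and run it with commands such as docker build -t myapp . followed by docker run myapp.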
Containers really promote small, shared images.
That's the whole key, the, let's say,
Google magic, if you want to call it that, behind containers.
They factor out the OS, which is the bulkiest part,
and they share a lot of the dependencies across containers,
so each image carries little beyond the application itself.
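To make the shared-image idea concrete, here is a hedged sketch (the image and file names are illustrative): two services built from the same base image share that base layer on disk, so only the thin, app-specific layers are stored separately.

    # service-a/Dockerfile
    FROM python:3.12-slim            # shared base layer, stored once on the host
    COPY service_a.py /app/main.py   # thin layer unique to service A
    CMD ["python", "/app/main.py"]

    # service-b/Dockerfile
    FROM python:3.12-slim            # same base layer is reused, not duplicated
    COPY service_b.py /app/main.py   # thin layer unique to service B
    CMD ["python", "/app/main.py"]

Because Docker stores layers content-addressably, the python:3.12-slim layers are downloaded and stored only once, however many images build on top of them.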