Thursday, November 9, 2017

Windows Server Containers

What is a container?

The most important question to ask here at the outset is a simple one: what is a container? It is important to truly understand what the technology is and how it differs from other technologies like hardware virtualization.

A container is another form of virtualization, but one that is focused at the operating system (OS) layer. It is geared toward deploying and running applications without requiring a full virtual machine (VM) for each application. In hardware virtualization, a hypervisor implements this “virtualization” layer, whereas with containers a container engine performs that role.
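To make that concrete, here is a minimal sketch using the Docker SDK for Python (docker-py): the request to run an application goes to the container engine on the host rather than to a hypervisor. The image tag and command are illustrative assumptions, not anything prescribed above.

```python
# Minimal sketch, assuming a Windows container host with Docker installed and
# the docker-py package available. The image tag below is an illustrative choice.
import docker

client = docker.from_env()  # connects to the local container engine (Docker)

# The engine starts the container directly on the host's shared kernel;
# no hypervisor or separate VM is involved for this container type.
output = client.containers.run(
    "mcr.microsoft.com/windows/nanoserver:1809",
    command="cmd /c echo Hello from a Windows Server Container",
    remove=True,  # discard the container once the command exits
)
print(output.decode())
```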

Each application runs in an isolated memory space but shares the underlying kernel from the host machine/VM. The host machine (whether it’s physical or virtual) regulates the container so that it doesn’t consume all of the available resources.
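The sketch below, again using the Docker SDK for Python, shows one way the host can regulate what a container may consume; the memory cap, CPU quota, and image tag are illustrative values, not recommendations.

```python
# Sketch of resource regulation, assuming docker-py on a Windows container
# host. The 256 MB memory cap, the one-CPU quota, and the image tag are
# illustrative assumptions.
import docker

client = docker.from_env()

container = client.containers.run(
    "mcr.microsoft.com/windows/nanoserver:1809",
    command="cmd /c ping -t localhost",  # a stand-in long-running process
    detach=True,
    mem_limit="256m",         # the container cannot consume more than 256 MB
    nano_cpus=1_000_000_000,  # roughly one CPU core, in units of 1e-9 CPUs
)

print(container.short_id, "is running with capped resources")
container.stop()
container.remove()
```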

Each container has all of the necessary binaries to support the application running within it. Container hosts can run many different applications, fully isolated, at any one time. In a modern datacenter, containers can achieve a greater density per host than VMs because the footprint of a container is considerably smaller than that of a VM.


Containers versus VMs

In many ways, containers are incorrectly thought of as VMs. A VM is an independent operating system with its own memory, CPU, and so on, whereas a container only appears to be one. A container, for example, shares the kernel of the host machine, whereas a VM has its own kernel and doesn’t see the host machine at all.

Because VMs have been around for a long time, and even today we generally deploy applications to them, we are forced to ask the question: why choose a container over a VM, or vice versa?

VMs offer flexibility to an enterprise and allow it to run any type of application within them. They can be assigned resources based on the need and demand, and they tend to lend themselves well to the scale-up approach when you need to improve performance.

VMs require individual management, which generally increases the associated overhead of running an application. On the same note, a VM will take up a predefined set of resources while it is running. Thus, the number of VMs that you can store on a single host is directly related to the size of the VMs. This can have additional effects with respect to the cost of an IT environment.

Containers, conversely, are very flexible and naturally have a much smaller footprint than a VM. They don’t have the same management footprint, because you no longer need to manage a full OS, thus reducing potential costs associated with the container lifecycle. As mentioned earlier, this allows for greater density per host and makes it possible for you to scale out far more efficiently than you can with a VM deployment.

Isolation is a key aspect for containers; it provides separation for each application that you want to run within a container. The isolation essentially gives the application its own view of the OS from the perspective of memory, CPU and file system, among other things. The application can perform any operation in this “bubble,” even delete what it thinks is the OS without affecting any other containers that are running.
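The following sketch illustrates that “bubble” with the Docker SDK for Python: a file written to what the first container sees as its OS drive is invisible to a second container started from the same image. The image tag and the file path are illustrative assumptions.

```python
# Sketch of file-system isolation, assuming docker-py on a Windows container
# host. The image tag and the C:\scratch.txt path are illustrative.
import docker

client = docker.from_env()
image = "mcr.microsoft.com/windows/nanoserver:1809"

# The first container writes a file to what it sees as the OS drive.
first = client.containers.run(
    image,
    command=["cmd", "/c", "echo scratch > C:\\scratch.txt & dir C:\\scratch.txt"],
    remove=True,
)

# A second container from the same image has its own view and cannot see it.
second = client.containers.run(
    image,
    command=["cmd", "/c", "if exist C:\\scratch.txt (echo found) else (echo not found)"],
    remove=True,
)

print(first.decode())   # the file exists inside the first container's bubble
print(second.decode())  # prints "not found": the second container is unaffected
```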

However, containers do lock you in to a type of OS, which in turn locks you in to the type of applications you might be able to use. This is not necessarily a bad thing, because it could simplify costs in terms of licensing and support. Containers also don’t maintain state; you are required to maintain state outside of the container runtime. Again, this is not necessarily a bad thing, but an assessment on a per-application basis is required to see if this is achievable for the application.
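One common way to keep state outside the container runtime is to write it to storage that the container merely mounts. The sketch below, using the Docker SDK for Python, bind-mounts a host folder into the container; the host path, in-container path, and image tag are illustrative assumptions.

```python
# Sketch of externalized state, assuming docker-py on a Windows container
# host. The host folder C:\containerdata, the in-container path C:\state, and
# the image tag are illustrative assumptions.
import docker

client = docker.from_env()

client.containers.run(
    "mcr.microsoft.com/windows/nanoserver:1809",
    command=["cmd", "/c", "echo order-42 >> C:\\state\\orders.log"],
    remove=True,  # the container itself is throwaway...
    volumes={
        "C:\\containerdata": {    # ...but this host folder outlives it
            "bind": "C:\\state",  # the path the application writes to
            "mode": "rw",
        }
    },
)
# C:\containerdata\orders.log remains on the host after the container is gone;
# in production the same idea applies to a file share or an external database.
```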

Containers also give an enterprise a migration path toward making its applications cloud-native without a major replatforming effort. They also make it easier to migrate applications between clouds. For example, if you containerize your application by using Docker and Windows as the host OS, as long as you can find a provider that uses Docker and Windows as the host OS, you can run your application.

Containers also require that the applications you intend to containerize be noninteractive (no user interface, or UI) or service-based. This might initially put people off, but, again, it is not a bad thing; it will, of course, require some planning and an understanding of how your application is deployed and operates. It is crucial to understand that containers are best suited for cloud-native scenarios. These scenarios eliminate a large amount of unnecessary code and usually break an application into lots of small functional pieces. These pieces have a very specific job and, because of this, are very performant. With this in mind, we can begin to see why containers are best geared toward headless apps.

So, what should you choose for an enterprise today? It very much depends on your needs. For example, if you are looking toward the future and want to be cloud-ready, containers are a better choice. They fit the DevOps model for rapid development and deployment, and allow for greater agility when moving between infrastructures, be it public or private cloud. If you want to provide additional isolation so that applications cannot interfere with one another, containers are a good fit.

Containers are a natural evolution or transition from a waterfall-based development cycle with an N-tier architecture, and they help enterprises get ready for and shift into the mindset of cloud-based Agile architectures.


Container types

There are two main types of containers available today:
  • Windows Server Containers
  • Windows Server Containers with Hyper-V Isolation
Although this eBook doesn’t cover Linux, container technology exists for that platform and operates much like the Windows Server Containers, except with the Linux OS.

Windows Server Containers share a kernel with the container host and all running containers. They provide isolation for the application through process and namespace isolation technology, which gives the application its own view of the OS and binaries and enforces protection from other containers.

Windows Server Containers with Hyper-V Isolation provide an isolated kernel experience through a utility VM on the host. This increases the security of the container because the isolation mechanisms are enforced at the hardware level instead of only at the OS level.

Both container technologies operate with the same API surface and can be controlled via Docker today. This simplifies management of a containerized estate.
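As a rough illustration of that shared API surface, the sketch below uses the Docker SDK for Python to run the same image first as a Windows Server Container (process isolation) and then with Hyper-V isolation; only the isolation option changes. The image tag is an illustrative assumption.

```python
# Sketch of the common API surface, assuming docker-py on a Windows container
# host that supports both isolation modes. The image tag is illustrative.
import docker

client = docker.from_env()
image = "mcr.microsoft.com/windows/nanoserver:1809"

for isolation in ("process", "hyperv"):
    output = client.containers.run(
        image,
        command=f"cmd /c echo running with {isolation} isolation",
        isolation=isolation,  # the only option that differs between the modes
        remove=True,
    )
    print(output.decode().strip())
```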


Source: Introduction to Windows Containers, John McCabe, Michael Friis
