Docker is the most popular and widely used container platform.
Virtual machines (VMs) are increasingly used by businesses. A VM is an operating system or application environment installed on software that imitates dedicated hardware. It gives the user the same experience as a physical machine, with several advantages.
In particular, it is possible to run multiple OS environments on the same machine while isolating them from each other. Virtualization can also reduce costs within a business by cutting the number of physical machines required.
Energy needs are reduced as well, and backups and restores are simplified.
However, virtual machine hypervisors rely on hardware emulation and therefore require a lot of computing power. To remedy this problem, many firms are turning to containers, and by extension to Docker.
Before approaching Docker, it is essential to recall what a container image is: a lightweight, standalone package of software that includes everything needed to run an application: code, runtime, system tools, libraries, and settings. Container images can be used to run Linux or Windows applications.
Containers are therefore close to virtual machines, but with one significant advantage. While virtualization involves running several complete operating systems on a single machine, containers share the host's operating system kernel and isolate the application processes from the rest of the system.
To put it simply, rather than virtualizing the hardware as a hypervisor does, the container virtualizes the operating system. It is therefore significantly more efficient than a hypervisor in terms of system resource consumption. Concretely, it is possible to run roughly four to six times as many application instances with containers as with virtual machines such as Xen or KVM on the same hardware.
Docker is an open source software platform for creating, deploying, and managing containerized applications on a shared operating system.
The services or functions of the application, along with its libraries, configuration files, dependencies, and other components, are grouped inside the container. Each running container shares the services of the underlying operating system.
Originally created for the Linux platform, Docker now also works with other operating systems such as Microsoft Windows and Apple macOS. There are also platform versions designed for Amazon Web Services and Microsoft Azure.
The containerization platform is based on seven main components. The Docker Engine is the client-server tool on which the container technology rests, supporting container-based application creation tasks.
The Engine creates a server-side daemon process that hosts images, containers, networks, and storage volumes. It also provides a client-side CLI that allows users to interact with the daemon through the platform's API.
Container images are built from Dockerfiles, text files listing the instructions needed to assemble them. The Docker Compose component allows you to define multi-container applications and run them together. Docker Hub is a SaaS service that lets users publish and share container images through a common library.
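To give a feel for what such a Dockerfile looks like, here is a minimal sketch for a hypothetical Node.js application; the base image tag, file names, and port are illustrative assumptions, not taken from the article:

```dockerfile
# Start from an official base image (tag is illustrative)
FROM node:18-alpine
# Set the working directory inside the image
WORKDIR /app
# Copy dependency manifests first to benefit from layer caching
COPY package*.json ./
RUN npm install
# Copy the application source into the image
COPY . .
# Document the port the app listens on and define the start command
EXPOSE 3000
CMD ["node", "server.js"]
```

An image would then be built with `docker build -t myapp .` and started with `docker run -p 3000:3000 myapp`.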
The Docker Engine's swarm mode supports load balancing across clusters: the resources of multiple hosts can be pooled to act as a single system, letting users scale container deployments quickly.
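As a rough sketch of how Compose and swarm mode fit together, the file below describes a hypothetical two-service application; the service names and images are illustrative assumptions:

```yaml
version: "3.8"
services:
  web:
    image: nginx:alpine        # illustrative front-end container
    ports:
      - "80:80"
    deploy:
      replicas: 3              # honored when deployed to a swarm cluster
  db:
    image: postgres:15-alpine  # illustrative database container
    environment:
      POSTGRES_PASSWORD: example
```

`docker compose up` runs it on a single host, while `docker swarm init` followed by `docker stack deploy -c docker-compose.yml mystack` spreads the replicas across a cluster.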
The Docker platform has many advantages. It allows you to quickly compose, create, deploy, and scale containers on Docker hosts. It also offers a high degree of portability, allowing users to register and share containers across a wide variety of hosts in public and private environments.
Compared to virtual machines, Docker has several further advantages: it makes it possible to develop applications more efficiently, using fewer resources, and to deploy them faster.
However, it also has several disadvantages. It can be difficult to manage a large number of containers efficiently at the same time, and security can be a problem.
Containers are isolated but share the same operating system kernel, so an attack or security breach at the OS level can compromise all the containers. To minimize this risk, some companies run their containers inside virtual machines.
Docker is not the only container platform on the market, but it remains the most widely used. Its main competitor is CoreOS rkt, a tool known above all for its security, including SELinux support. Other major platforms include Canonical LXD and Virtuozzo OpenVZ, the oldest container platform.
There is also an ecosystem of tools that work with the platform for tasks such as clustering or container management.
One example is Kubernetes, the open source container orchestration tool created by Google.
Version 1.0 of Docker was launched in June 2014 to make containers easier to use, and the platform quickly won over many companies.
Today, according to Docker's creators, more than 3.5 million applications have been containerized with the technology, and more than 37 billion containerized applications have been downloaded.
Similarly, according to the Datadog cloud monitoring service, 18.8% of its users had adopted the platform by 2017.
For its part, RightScale estimates that adoption of the platform in the cloud industry rose from 35% in 2017 to 49% in 2018.
Giants like Oracle and Microsoft have adopted it, as have almost all the cloud companies.
According to 451 Research, the rise of Docker is not about to stop: these analysts estimate that the container market will explode by 2021.
Revenues would be multiplied by four at an annual growth rate of 35%, from $749 million in 2016 to $3.4 billion in 2021.
Traditional virtualization makes it possible, via a hypervisor, to simulate one or more physical machines and run them as virtual machines (VMs) on a server or terminal.
These VMs themselves embed an OS on which their applications are executed. This is not the case for the container: the container calls the OS of its host machine directly to make its system calls and execute its applications.
Docker containers in Linux format exploit a Linux kernel feature called LXC (Linux Containers). In Windows Server format, they rely on an equivalent building block called Windows Server Containers.
The Docker Engine standardizes these building blocks through APIs in order to run applications in standard containers, which are then portable from one server to another.
Since a container does not ship an OS, unlike a virtual machine, it is much lighter: it does not need to boot a second system to run its applications.
The result is a much faster launch, and also the ability to migrate a container more easily from one physical machine to another thanks to its small footprint.
Another advantage: Docker containers, because of their lightness, are portable from cloud to cloud. The only condition is that the clouds involved are optimized to host them.
And this is now the case for the main ones: Amazon Web Services, Microsoft Azure, Google Cloud Platform, OVH… What does that mean? Well, a Docker container, with its applications, can easily move from one cloud to another.
First, Docker speeds up deployments. Why? Because Docker containers are light: switching from a development or test environment to a production environment can be done almost in one click, which is not the case for heavier VMs.
Because the intermediate VM disappears, developers also get an application stack closer to that of the production environment, which automatically means fewer unpleasant surprises at release time.
Docker also makes it possible to design more agile test architectures, where each test container holds one building block of the application (database, language runtime, components…). To test a new version of a block, you simply swap the container. Finally, on the continuous deployment side, Docker is interesting because it makes it possible to limit updates to the delta of the container that actually needs to change.
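The idea of swapping one "brick" to test a new version can be sketched with a Compose override file; the service name and image tags below are hypothetical:

```yaml
# docker-compose.test.yml: overrides only the database "brick"
# while leaving the rest of the stack untouched (tags are illustrative)
services:
  db:
    image: postgres:16    # candidate version under test
```

Running `docker compose -f docker-compose.yml -f docker-compose.test.yml up` would start the stack with only the database container replaced, leaving every other component as defined in the base file.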
Thanks to Docker, it is possible to containerize an application with a layer of containers isolating each of its components: this is the concept of microservice architecture.
Because of their lightness, these component containers can each draw on exactly the machine resources they require. To achieve the same result, virtualization tools need a pool of inactive VMs provisioned in advance.
With Docker, no pool is needed, since a container can boot in a few seconds. But Docker's promise goes further: because Docker containers are portable from one infrastructure to another, it becomes possible to implement application mirroring and load balancing between clouds, disaster recovery or business continuity plans spanning clouds, or even to move a project to another cloud provider entirely.
To facilitate the management of complex architectures, Docker has built a containers-as-a-service platform. Called Docker Enterprise Edition (Docker EE), it includes the main tools needed to deploy, manage, secure, and monitor such environments.
Docker also acquired the Tutum cloud platform at the end of 2015: a SaaS environment designed to drive the deployment of containerized applications on various public clouds (Microsoft Azure, DigitalOcean, Amazon Web Services, and IBM SoftLayer).
On the cluster management side, Docker EE integrates both Swarm, its in-house orchestration engine, and Kubernetes, which is none other than the main alternative to it. Kubernetes comes from an open source project initiated by Google.
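As a hedged sketch of what orchestrating a container with Kubernetes looks like, the manifest below keeps three replicas of a container running; the names and image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine  # illustrative container image
          ports:
            - containerPort: 80
```

`kubectl apply -f deployment.yaml` would hand this desired state to the cluster, which then schedules and supervises the containers.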
But on the infrastructure automation front, the San Francisco company intends to go even further. With this in mind, in early 2016 it acquired the start-up Conductant, which made a name for itself developing Apache Aurora, a clustering system designed to manage applications reaching hundreds of millions of users.
In early 2017, it also acquired the French startup Infinit, publisher of a distributed, cross-platform storage technology.
Yes. Notably, IBM published a performance comparison between Docker and KVM in 2014. Its conclusion is unambiguous: Docker equals or exceeds the performance of this open source virtualization technology in every case tested.
For Big Blue, the speed of Docker containers is similar to that of bare-metal servers. By eliminating the resource-hungry virtualization layer, Docker would reduce RAM consumption by a factor of 4 to 30.
Another study, published in August 2017 by the IT research department of Lund University in Sweden, compares the performance of containers with that of VMware machines. It also concludes in favor of Docker.
Initially limited to Linux, Docker has since been ported to Windows Server. The fact remains that containers created on Linux cannot run natively on a Windows server, and vice versa.
This is the major limitation of Docker, and its main difference from traditional virtualization: a virtual machine running Linux can indeed run on a Windows server, and vice versa.
The answer is yes. Docker has long provided tools that let developers manipulate containers and test container architectures on their own computers.
In April 2017, the publisher also announced an open source toolkit, called LinuxKit, designed to assemble a Linux distribution from system components embedded in containers.
The idea is to offer a modular architecture for building a custom distribution limited to only the system processes the applications need. The advantages? Since each component is containerized, it can be maintained independently.
The architecture also makes it possible to minimize the number of processes, which reduces both the weight of the OS and its attack surface: a solution Docker presents as ideal for connected objects. The LinuxKit project was launched in connection with the Linux Foundation and several market players (ARM, HPE, IBM, Intel, and Microsoft).
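To give a feel for the modular approach, here is a rough sketch of a LinuxKit build file. The component names follow the project's published examples, but the image tags are placeholders rather than real references:

```yaml
# Illustrative LinuxKit YAML: each system component is a container image
kernel:
  image: linuxkit/kernel:<tag>   # placeholder tag
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:<tag>          # minimal init, placeholder tag
  - linuxkit/runc:<tag>          # container runtime, placeholder tag
onboot:
  - name: dhcpcd                 # one-shot container run at boot
    image: linuxkit/dhcpcd:<tag>
services:
  - name: getty                  # long-running system service
    image: linuxkit/getty:<tag>
```

The `linuxkit build` tool would turn such a file into a bootable image containing only these processes, which is how the OS footprint and attack surface stay small.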
Docker has released a dozen components under the Apache license, covering the main capabilities needed to run a containerized architecture: network management, storage, security… Among them is containerd,
a central building block of Docker's technology, since it handles container execution.
Given how critical this component is for the standardization of container offerings, Docker transferred its rights to an independent organization, the Cloud Native Computing Foundation.
In April 2017, the publisher went a step further by launching Moby: an open source framework designed to build container systems. It includes some 80 open source components: Docker's own (containerd, LinuxKit, SwarmKit…) but also other projects (Redis, NGINX…).
Moby is meant to be a participatory project through which all stakeholders in the Docker community building container-based solutions can share building blocks.
No. The unikernel sits halfway between classic server virtualization and the container. While traditional virtualization embeds the server's entire OS in the virtual machine, a unikernel embeds in the VM only the system libraries needed to run the application it contains.
The core of the OS remains outside the machine. Unlike a Docker container, a unikernel therefore carries part of the OS inside the VM.
Among its main advantages over the container, a unikernel makes it possible to tailor the system layer embedded in the VM to the specific needs of the application being executed.
In January 2016, Docker acquired Unikernel Systems, a start-up specializing in unikernels, with the aim of offering an alternative to its containers.
The Docker user community is becoming significant, although its size remains difficult to evaluate, and plenty of documentation on this open source technology is available on the web.
On Stack Overflow alone, several thousand pages are devoted to it. The company also provides users with a service called Docker Hub, designed to allow the exchange and building of pre-configured Docker containers.
Hosting more than 460,000 container images (Ubuntu, WordPress, MySQL, Node.js…), this space is also integrated with GitHub.
Docker also markets an on-premise version of Docker Hub (Docker Hub Enterprise).
Finally, Docker has launched an online application store, with the goal of offering publishers a commercial channel for distributing their applications as containers.
If you need to run as many applications as possible on a minimum of servers, use containers, keeping in mind that you will need to keep an eye on the systems running them as long as container security is not fully locked down.
If you need to run multiple applications on servers and/or support a wide variety of operating systems, it is best to opt for virtual machines. And if security is the number one priority for your business, virtual machines should also be preferred.
In the real world, you will likely use both containers and virtual machines, in the cloud and in your data centers. The economies of scale that containers provide cannot be ignored; at the same time, virtual machines retain their benefits.
As container technology matures, "the VM/container association will be the nirvana of cloud portability," says Thorsten von Eicken, CTO of RightScale, a company specializing in cloud platform management. We are not there yet, but that is where we are headed.