Linux Containers and the Future Cloud
Docker is an open-source project that automates the creation and deployment of containers. It was first released in March 2013 under the Apache License Version 2.0. It started as an internal project at dotCloud, a Platform-as-a-Service (PaaS) company since renamed Docker Inc. The initial prototype was written in Python; later the whole project was rewritten in Go, a programming language first developed at Google. In September 2013, Red Hat announced that it would collaborate with Docker Inc. on Red Hat Enterprise Linux and the Red Hat OpenShift platform. Docker requires Linux kernel 3.8 or above; on RHEL systems, Docker runs on the 2.6.32 kernel, as the necessary patches have been backported.
Docker utilizes the LXC toolkit and, as such, is currently available only for Linux. It runs on distributions like Ubuntu 12.04 and 13.04; Fedora 19 and 20; RHEL 6.5 and above; and on cloud platforms like Amazon EC2, Google Compute Engine and Rackspace.
Docker images can be stored on a public repository and can be downloaded with the docker pull command—for example, docker pull busybox. To display the images available on your host, you can use the docker images command. You can narrow the command to a specific type of image (fedora, for example) with docker images fedora.
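The pull-and-list workflow described above can be sketched as a short session (only the busybox and fedora names come from the text; everything else is ordinary docker CLI usage):

```shell
# Download the busybox image from the public registry
docker pull busybox

# List all images available on this host
docker images

# Narrow the listing to fedora images only
docker images fedora
```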
On Fedora, running a Fedora Docker container is simple: after installing the docker-io package, you simply start the docker service with systemctl start docker, and then you can start a Fedora Docker container with docker run -i -t fedora.
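On a Fedora host, the steps above look roughly like this (assuming the docker-io package is already installed and you have root privileges; the trailing /bin/bash is an explicit shell command added here for clarity, not part of the text's command):

```shell
# Start the docker service (systemd)
sudo systemctl start docker

# Launch an interactive Fedora container with a shell
sudo docker run -i -t fedora /bin/bash
```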
Docker has git-like capabilities for handling containers. Changes you make in a container are lost if you destroy the container, unless you commit your changes (much like you do in git) with docker commit <containerId> <containerName/containerTag>. These images can be uploaded to a public registry, and they are available for downloading by anyone who wants to download them. Alternatively, you can set up a private Docker registry.
Docker is able to create snapshots using the kernel device mapper feature. In earlier versions, before Docker 0.7, this was done using AUFS (a union filesystem). Docker 0.7 added "storage plugins", so people can switch between device mapper and AUFS (if their kernel supports it), which allows Docker to run on RHEL releases that do not support AUFS.
You can create images by running commands manually and committing the
resulting container, but you also can describe them with a Dockerfile.
Just like a Makefile will compile code into a binary
executable, a Dockerfile will build a ready-to-run container image from
simple instructions. The command to build an image from a Dockerfile is
docker build. There is a tutorial about Dockerfiles and
their command syntax on the Docker Web site.
For example, the following short Dockerfile is for installing the
iperf package for a Fedora image:
FROM fedora
MAINTAINER Rami Rosen
RUN yum install -y iperf
You can upload and store your images for free on the Docker public index. Just like with GitHub, storing public images is free and just requires you to register an account.
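Building the Dockerfile above and uploading the result could be sketched like this (the myuser/fedora-iperf tag is a hypothetical example name):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myuser/fedora-iperf .

# Sanity-check the result by running iperf inside a throwaway container
docker run -i -t myuser/fedora-iperf iperf --help

# Upload the image to the public index (requires a registered account)
docker push myuser/fedora-iperf
```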
The Checkpoint/Restore Feature
The CRIU (Checkpoint/Restore In Userspace) project is implemented mostly in userspace, with more than 100 small patches scattered in the kernel to support it. There were several earlier attempts to implement Checkpoint/Restore solely in kernel space, some of them by the OpenVZ project; the kernel community rejected all of them, though, as they were too complex.
The Checkpoint/Restore feature enables saving a process state in several image files and restoring this process from the point at which it was frozen, on the same host or on a different host at a later time. This process also can be an LXC container. The image files are created using Google's protocol buffer (PB) format. The Checkpoint/Restore feature enables performing maintenance tasks, such as upgrading a kernel or hardware maintenance on that host after checkpointing its applications to persistent storage. Later on, the applications are restored on that host.
Another feature that is very important in HPC is load balancing using live migration. The Checkpoint/Restore feature also can be used for creating incremental snapshots, which can be used after a crash occurs. As mentioned earlier, some kernel patches were needed for supporting CRIU; here are some of them:
- A new system call named kcmp() was added; it compares two processes to determine if they share a kernel resource.
- A socket monitoring interface called sock_diag was added to UNIX sockets in order to be able to find the peer of a UNIX domain socket. Before this change, the ss tool, which relied on parsing of /proc entries, did not show this information.
- A TCP connection repair mode was added.
- A procfs entry was added (/proc/PID/map_files).
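A side effect of the sock_diag work mentioned above is that a modern ss can display UNIX domain socket information, including peer data it previously could not show; a quick way to see this (assuming iproute2 is installed):

```shell
# List UNIX domain sockets with owning processes; peer information
# is retrieved via the sock_diag interface rather than /proc parsing
ss -x -p
```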
Let's look at a simple example of using the criu tool. First, you should check whether your kernel supports Checkpoint/Restore, with criu check --ms. Look for a "Looks good." response at the end of the output.
Basically, checkpointing is done by:
criu dump -t <pid>
You can specify a folder where the process state files will be saved by adding -D <folder>.
You can restore with criu restore <pid>.
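A full checkpoint/restore round trip might be sketched as follows (the PID 1234 and the /tmp/ckpt image directory are hypothetical; criu generally must run as root, and exact option syntax varies across criu versions):

```shell
# Verify kernel support first
criu check --ms

# Checkpoint process 1234, writing the image files into /tmp/ckpt
criu dump -t 1234 -D /tmp/ckpt

# Later (possibly on a different host), restore from the saved images
criu restore -D /tmp/ckpt
```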
In this article, I've described what Linux-based containers are, and I briefly explained the underlying cgroups and namespaces kernel features. I have discussed some Linux-based container projects, focusing on the promising and popular LXC project. I also looked at the LXC-based Docker engine, which provides an easy and convenient way to create and deploy LXC containers. Several hands-on examples showed how simple it is to configure, manage and deploy LXC containers with the userspace LXC tools and the Docker tools.
Due to the advantages of the LXC and Docker open-source projects, and the convenient, simple tools they provide to create, deploy and configure LXC containers, as described in this article, we presumably will see more and more cloud infrastructures integrating LXC containers instead of virtual machines in the near future. However, as explained in this article, solutions like Xen or KVM have several advantages over Linux-based containers and still are needed, so they probably will not disappear from the cloud infrastructure in the next few years.
Thanks to Jérôme Petazzoni from Docker Inc. and to Michael H. Warfield for reviewing this article.
Google Containers: https://github.com/google/lmctfy
Docker Public Registry: https://index.docker.io