Linux Containers and the Future Cloud
Docker is an open-source project that automates the creation and deployment of containers. It was first released in March 2013 under the Apache License Version 2.0. It started as an internal project at a Platform-as-a-Service (PaaS) company then called dotCloud, now Docker Inc. The initial prototype was written in Python; later the whole project was rewritten in Go, a programming language first developed at Google. In September 2013, Red Hat announced that it would collaborate with Docker Inc. on Red Hat Enterprise Linux and the Red Hat OpenShift platform. Docker requires Linux kernel 3.8 (or above); on RHEL systems, however, Docker runs on the 2.6.32 kernel, as the necessary patches have been backported.
Docker utilizes the LXC toolkit and, as such, is currently available only for Linux. It runs on distributions like Ubuntu 12.04 and 13.04; Fedora 19 and 20; RHEL 6.5 and above; and on cloud platforms like Amazon EC2, Google Compute Engine and Rackspace.
Docker images can be stored in a public repository and downloaded with the docker pull command—for example, docker pull busybox. To display the images available on your host, you can use the docker images command. You can narrow the output to a specific type of image (fedora, for example) with docker images fedora.
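The image-handling commands just described can be sketched as a short shell session (the image names here are examples):

```shell
# Download a small image from the public registry
docker pull busybox

# List all images available on this host
docker images

# Narrow the listing to a single repository
docker images fedora
```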
On Fedora, running a Fedora Docker container is simple: after installing the docker-io package, you start the docker service with systemctl start docker, and then you can start a Fedora Docker container with docker run -i -t fedora.
Docker has git-like capabilities for handling containers. Changes you make in a container are lost if you destroy the container, unless you commit your changes (much like you do in git) with docker commit <containerId> <containerName/containerTag>. These images can be uploaded to a public registry, where they are available for anyone to download. Alternatively, you can set up a private Docker registry.
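The commit-and-upload cycle described above might look like the following sketch (the image name myuser/fedora-dev is hypothetical, and pushing assumes a registry account):

```shell
# Start an interactive Fedora container and make some changes
docker run -i -t fedora /bin/bash
# ...inside the container: install packages, edit files, then exit

# Find the ID of the stopped container
docker ps -a

# Commit the container's filesystem changes as a new image
docker commit <containerId> myuser/fedora-dev

# Upload the new image to the registry
docker push myuser/fedora-dev
```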
Docker is able to create snapshots using the kernel device mapper feature. In earlier versions, before Docker 0.7, this was done using AUFS (a union filesystem). Docker 0.7 added "storage plugins", so users can switch between device mapper and AUFS (if their kernel supports it), which allows Docker to run on RHEL releases that do not support AUFS.
You can create images by running commands manually and committing the
resulting container, but you also can describe them with a Dockerfile.
Just like a Makefile will compile code into a binary
executable, a Dockerfile will build a ready-to-run container image from
simple instructions. The command to build an image from a Dockerfile is
docker build. There is a tutorial about Dockerfiles and
their command syntax on the Docker Web site.
For example, the following short Dockerfile is for installing the
iperf package for a Fedora image:
FROM fedora
MAINTAINER Rami Rosen
RUN yum install -y iperf
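Building and running an image from this Dockerfile might look like the following (the tag fedora/iperf is an example):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t fedora/iperf .

# Run iperf in server mode inside the new container
docker run fedora/iperf iperf -s
```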
You can upload and store your images for free on the Docker public index. Just like with GitHub, storing public images is free and just requires you to register an account.
The Checkpoint/Restore Feature
The CRIU (Checkpoint/Restore In Userspace) project is implemented mostly in userspace, with more than 100 small patches scattered in the kernel to support it. There were several attempts to implement Checkpoint/Restore solely in kernel space, some of them by the OpenVZ project. The kernel community rejected all of them, though, as they were too complex.
The Checkpoint/Restore feature enables saving a process state in several image files and restoring this process from the point at which it was frozen, on the same host or on a different host at a later time. This process also can be an LXC container. The image files are created using Google's protocol buffer (PB) format. The Checkpoint/Restore feature enables performing maintenance tasks, such as upgrading a kernel or hardware maintenance on that host after checkpointing its applications to persistent storage. Later on, the applications are restored on that host.
Another feature that is very important in HPC is load balancing using live migration. The Checkpoint/Restore feature also can be used for creating incremental snapshots, which can be used after a crash occurs. As mentioned earlier, some kernel patches were needed for supporting CRIU; here are some of them:
- A new system call named kcmp() was added; it compares two processes to determine if they share a kernel resource.
- A socket monitoring interface called sock_diag was added to UNIX sockets in order to be able to find the peer of a UNIX domain socket. Before this change, the ss tool, which relied on parsing /proc entries, did not show this information.
- A TCP connection repair mode was added.
- A new procfs entry was added (/proc/PID/map_files).
Let's look at a simple example of using the criu tool. First, you should check whether your kernel supports Checkpoint/Restore, with criu check --ms; look for a positive result in its output. Basically, checkpointing is done by criu dump -t <pid>. You can specify a folder where the process state files will be saved by adding -D <folder>. You can restore with criu restore <pid>.
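A complete checkpoint/restore cycle for a simple process might look like this sketch (the image directory path and the --shell-job flag, needed for processes started from a shell, are assumptions):

```shell
# Verify kernel support for Checkpoint/Restore
criu check --ms

# Checkpoint the process, writing its image files into /tmp/ckpt
mkdir -p /tmp/ckpt
criu dump -t <pid> -D /tmp/ckpt --shell-job

# Later, restore the process from the saved image files
criu restore -D /tmp/ckpt --shell-job
```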
In this article, I've described what Linux-based containers are, and I briefly explained the underlying cgroups and namespaces kernel features. I have discussed some Linux-based container projects, focusing on the promising and popular LXC project. I also looked at the LXC-based Docker engine, which provides an easy and convenient way to create and deploy LXC containers. Several hands-on examples showed how simple it is to configure, manage and deploy LXC containers with the userspace LXC tools and the Docker tools.
Given the advantages of the LXC and Docker open-source projects, and the convenient, simple tools they provide to create, deploy and configure LXC containers, as described in this article, we presumably will see more and more cloud infrastructures integrating LXC containers instead of virtual machines in the near future. However, as explained in this article, solutions like Xen or KVM have several advantages over Linux-based containers and still are needed, so they probably will not disappear from the cloud infrastructure in the next few years.
Thanks to Jérôme Petazzoni from Docker Inc. and to Michael H. Warfield for reviewing this article.
Resources
Google Containers: https://github.com/google/lmctfy
Docker Public Registry: https://index.docker.io