Infinite BusyBox with systemd

systemd Service Files

You will need to call on the host PID 1 (systemd) directly to launch your container in an automated manner, potentially at boot. To do this, you need to create a service file.

Because there is a dearth of clear discussion on moving inittab and service functions into systemd, I'll cover all the basic uses before creating a service file for the container.

Start by configuring a telnet server. The telnet protocol is not secure, as it transmits passwords in clear text. Don't practice these examples on a production server or with sensitive information or accounts.

Classical telnetd is launched by the inetd superserver, both of which are implemented by BusyBox. Let's configure inetd for telnet on port 12323. Run the following as root on the host:


echo '12323 stream tcp nowait root /home/nifty/bin/telnetd telnetd -i -l /home/nifty/bin/login' >> /etc/inetd.conf

After the configuring above, if you manually launch the inetd contained in BusyBox, you will be able to telnet to port 12323. Note that the V7 platform does not include a telnet client by default, so you either can install it with yum or use the BusyBox client (which the example below will do). Unless you open up port 12323 on your firewall, you will have to telnet to localhost.
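
A manual test might look something like the following (run as root on the host; the paths match the BusyBox installation built earlier in this article):


# BusyBox inetd reads /etc/inetd.conf by default and forks into the background
/home/nifty/bin/inetd

# Connect with the BusyBox telnet client and log in as a normal user
/home/nifty/bin/telnet localhost 12323

# Stop the manually launched inetd when you are done testing
pkill inetd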

Make sure any inetd that you started is shut down before proceeding to create an inetd service file below:


echo '[Unit]
Description=busybox inetd
#After=network-online.target
Wants=network-online.target

[Service]
#ExecStartPre=
#ExecStopPost=
#Environment=GZIP=-9

#OPTION 1
ExecStart=/home/nifty/bin/inetd -f
Type=simple
KillMode=process

#OPTION 2
#ExecStart=/home/nifty/bin/inetd
#Type=forking

#Restart=always
#User=root
#Group=root

[Install]
WantedBy=multi-user.target' > /etc/systemd/system/inetd.service

systemctl start inetd.service

After starting the inetd service above, you can check the status of the dæmon:


[root@localhost ~]# systemctl status inetd.service
inetd.service - busybox inetd
   Loaded: loaded (/etc/systemd/system/inetd.service; disabled)
   Active: active (running) since Sun 2014-11-16 12:21:29 CST; 28s ago
 Main PID: 3375 (inetd)
   CGroup: /system.slice/inetd.service
           └─3375 /home/nifty/bin/inetd -f

Nov 16 12:21:29 localhost.localdomain systemd[1]: Started busybox inetd.

Try opening a telnet session from a different console:

/home/nifty/bin/telnet localhost 12323

You should be presented with a login prompt:


Entering character mode
Escape character is '^]'.

S
Kernel 3.10.0-123.9.3.el7.x86_64 on an x86_64
localhost.localdomain login: jdoe
Password:

Checking the status again, you see information about the connection and the session activity:


[root@localhost ~]# systemctl status inetd.service
inetd.service - busybox inetd
   Loaded: loaded (/etc/systemd/system/inetd.service; disabled)
   Active: active (running) since Sun 2014-11-16 12:34:04 CST; 7min ago
 Main PID: 3927 (inetd)
   CGroup: /system.slice/inetd.service
           ├─3927 /home/nifty/bin/inetd -f
           ├─4076 telnetd -i -l /home/nifty/bin/login
           └─4077 -bash

You can learn more about systemd service files with the man 5 systemd.service command.

There is an important point to make here: you have started inetd with the "-f Run in foreground" option. This is not how inetd is normally started; the option is most commonly used for debugging. However, if you were starting inetd from a classical inittab entry, -f would be useful in conjunction with "respawn". Without -f, inetd immediately forks into the background, and attempting to respawn a forking dæmon will launch new copies of it over and over. With -f, you can configure init to relaunch inetd should it die.
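
For reference, a classical inittab "respawn" entry for this dæmon might have looked something like the line below (illustrative only; the id field and runlevels are assumptions, not part of this article's container configuration):


in:2345:respawn:/home/nifty/bin/inetd -f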

Another important point is stopping the service. With a foreground dæmon and the KillMode=process setting in the service file, the child telnetd services are not killed when the service is stopped. This is not the normal, default behavior for a systemd service, where all the children will be killed.

To see this mass kill behavior, comment out the OPTION 1 settings in the service file (/etc/systemd/system/inetd.service), and enable the default settings in OPTION 2. Then execute:


systemctl stop inetd.service
systemctl daemon-reload
systemctl start inetd.service

Launch another telnet session, then stop the service. When you do, your telnet sessions will all be cut with "Connection closed by foreign host." In short, the default behavior of systemd is to kill all the children of a service when it is stopped (or when its main process exits).

The KillMode=process setting can be used with the forking version of inetd, but the "-f Run in foreground" in the first option is more specific and, thus, safer.
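
If you did want to combine the forking dæmon with KillMode=process, a minimal sketch of the relevant lines is below (these lines would replace OPTION 1 and OPTION 2 in the unit file above; this is an illustration, not the configuration used in the rest of this article):


#OPTION 2, forking dæmon that leaves child sessions running on stop
ExecStart=/home/nifty/bin/inetd
Type=forking
KillMode=process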

You can learn more about the KillMode option with the man 5 systemd.kill command.

Note also that the systemctl status output included the word "disabled". This indicates that the service will not be started at boot. Run systemctl enable on the service to set it to launch at boot (systemctl disable will undo this).
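
For example, to set the inetd service to start at boot, and then to undo it:


systemctl enable inetd.service
systemctl disable inetd.service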

Take note of the commented options above. You may set environment variables for your service (the example here suggests a gzip compression level), specify a non-root user/group, and define commands to be executed before the service starts or after it is halted. These capabilities are beyond the direct features offered by the classical inittab.
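
As a hedged illustration only (these particular values are assumptions, not settings used elsewhere in this article), the commented directives might be filled in like this:


#Environment=GZIP=-9
#ExecStartPre=/usr/bin/logger "inetd starting"
#ExecStopPost=/usr/bin/logger "inetd stopped"
#User=nobody
#Group=nobody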

Of course, systemd is capable of spawning telnet servers directly, allowing you to dispense with inetd altogether. Run the following as root on the host to configure systemd for BusyBox telnetd:


systemctl stop inetd.service

echo '[Unit]
Description=mytelnet

[Socket]
ListenStream=12323
Accept=yes

[Install]
WantedBy=sockets.target' > /etc/systemd/system/mytelnet.socket

echo '[Unit]
Description=mytelnet

[Service]
ExecStart=-/home/nifty/bin/telnetd telnetd -i -l /home/nifty/bin/login
StandardInput=socket' > /etc/systemd/system/mytelnet@.service

systemctl start mytelnet.socket

Some notes about inetd-style services:

  • The socket is started, rather than the service, when inetd-style services are launched. Similarly, it is the socket that you enable to set the service to launch at boot (see the quick check after this list).

  • The @ character in the service file indicates this is an "instantiated" service. They are used when a number of similar services are launched with a single service file (getty being the prime example—they also work well for Oracle database instances).

  • The - prefix above in the path to the telnet server indicates that systemd should not pay attention to the status code returned by the process.

  • In the client telnet sessions, the command cat /proc/self/cgroup will return detailed connection information for the IP addresses involved.
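
As a quick check of all of this, something like the following (run on the host, and then inside a connected telnet session) should show the listening socket, any per-connection instantiated services, and the cgroup details mentioned above:


# On the host:
systemctl status mytelnet.socket
systemctl list-units 'mytelnet@*'

# Inside a connected telnet session:
cat /proc/self/cgroup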

At this point, I have returned from my long-winded tangent, so now let's build a service file for the container. Run the following as root on the host:


echo '[Unit]
Description=nifty container

[Service]
ExecStart=/usr/bin/systemd-nspawn -bD /home/nifty
KillMode=process' > /etc/systemd/system/nifty.service

Be sure that you have shut down any other instances of the nifty container. You can optionally disable the console getty by commenting out or removing the first line of /home/nifty/etc/inittab.
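
One way to comment out that line (assuming the getty entry created earlier is still the first line of the container's inittab):


sed -i '1 s/^/#/' /home/nifty/etc/inittab

Then use PID 1 to launch your container directly: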


systemctl start nifty.service

If you check the status of the service, you will see the same level of information that you previously saw on the console:


[root@localhost ~]# systemctl status nifty.service
nifty.service - nifty container
   Loaded: loaded (/etc/systemd/system/nifty.service; static)
   Active: active (running) since Sun 2014-11-16 14:06:21 CST; 31s ago
 Main PID: 5881 (systemd-nspawn)
   CGroup: /system.slice/nifty.service
           └─5881 /usr/bin/systemd-nspawn -bD /home/nifty

Nov 16 14:06:21 localhost.localdomain systemd[1]: Starting nifty container...
Nov 16 14:06:21 localhost.localdomain systemd[1]: Started nifty container.
Nov 16 14:06:26 localhost.localdomain systemd-nspawn[5881]: Spawning namespace container on /home/nifty (console is /dev/pts/4).
Nov 16 14:06:26 localhost.localdomain systemd-nspawn[5881]: Init process in the container running as PID 5883.

Memory and Disk Consumption

BusyBox is a big program, and if you are running several containers that each have their own copy, you will waste both memory and disk space.

It is possible to share the "text" segment of BusyBox in memory among all running copies, but only if they are running from the same inode on the same filesystem. The text segment is the read-only, compiled code of a program, and you can see its size like this:


[root@localhost ~]# size /home/busybox-x86_64 
   text	   data	    bss	    dec	    hex	filename
 942326	  29772	  19440	 991538	  f2132	/home/busybox-x86_64

If you want to conserve the memory used by BusyBox, one way would be to create a common /cbin that you attach to all containers as a read-only bind mount (as you did previously with lib64), and reset all the links in /bin to the new location. The root user could do this:


systemctl stop nifty.service

mkdir /home/cbin
mv /home/nifty/bin/busybox-x86_64 /home/cbin
mv /home/nifty/bin/dropbearmulti-x86_64 /home/cbin
cd /
ln -s home/cbin cbin
cd /home/nifty/bin
for x in *; do if [ -h "$x" ]; then rm -f "$x"; fi; done
/cbin/busybox-x86_64 --list | awk '{print "ln -s /cbin/busybox-x86_64 " $0}' | sh
ln -s /cbin/dropbearmulti-x86_64 dropbear
ln -s /cbin/dropbearmulti-x86_64 ssh
ln -s /cbin/dropbearmulti-x86_64 scp
ln -s /cbin/dropbearmulti-x86_64 dropbearkey
ln -s /cbin/dropbearmulti-x86_64 dropbearconvert
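
An optional, quick check that the relinking worked; every entry in the container's /bin should now be a symlink pointing into /cbin:


ls -l /home/nifty/bin | head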

You also could arrange to bind-mount the zoneinfo directory, saving a little more disk space in the container (and giving the container patches for time zone data in the bargain):


cd /home/nifty/usr/share
rm -rf zoneinfo

Then the service file is modified to bind /cbin and /usr/share/zoneinfo (note the altered syntax for sharing /cbin below, when the paths differ between host and container):


echo '[Unit]
Description=nifty container

[Service]
ExecStart=/usr/bin/systemd-nspawn -bD /home/nifty --bind-ro=/home/cbin:/cbin --bind-ro=/usr/share/zoneinfo
KillMode=process' > /etc/systemd/system/nifty.service

systemctl daemon-reload

systemctl start nifty.service

Now any container using the BusyBox binary from /cbin will share the same inode. All versions of the BusyBox utilities running in those containers will share the same text segment in memory.
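
If you want to confirm the sharing, comparing inodes is one approach. A hypothetical check (assuming the BusyBox stat applet is among the links created above) might look like this; both commands should report the same inode number:


# On the host:
stat -c '%i %n' /home/cbin/busybox-x86_64

# Inside a running container (on the console, or over a dropbear ssh session):
stat -c '%i %n' /cbin/busybox-x86_64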

______________________

Charles Fisher has an electrical engineering degree from the University of Iowa and works as a systems and database administrator for a Fortune 500 mining and manufacturing corporation.