Practical Tiny Core in the Fire Service

I'm sure many of you have at least heard of Tiny Core Linux—legends of how small it is, how little it takes to run a system with it, and how it has even been ported to run on the Raspberry Pi. It's an esoteric, minimalist distribution. There was a very good September 2011 write-up about it in Linux Journal by Joey Bernard.

I came to Tiny Core Linux after a protracted quest to find a good solution to a problem I've had. I am a Firefighter and EMT with the Bushkill Fire Company in Pennsylvania. Being an all-volunteer fire department while also being the primary service provider for our entire coverage area poses some unique challenges. When a dispatch comes through, fire engines and rescue apparatuses are expected to get out the door and on the road, quick.

Part of the logistical acrobatics that we perform every call is figuring out who is even coming. Different personnel bring different skills, skill levels, responsibilities and capabilities. Some are specialists in operating particular pieces of equipment, while others may bring specific know-how in techniques. Depending on the call, officers like to know who they have arriving as they plan out the best approach to tackle the given emergency. It's nice to know, downright imperative to know, that your best vehicle extrication technicians are responding to a motor vehicle accident involving two cars with a possible entrapment.

Technology has been at the forefront of this ongoing struggle in emergency services to solve a plethora of problems. A few companies have released Web apps designed for the fire service to tackle the "who's coming" problem. These systems consist of a Web page that acts as a dashboard and is displayed in the station. Each responder dials a number that registers his or her name as responding to the call. Officers then can assess what their manpower is like and decide quickly whether they need more resources, simply by looking up at the monitor in the station. These apps do more than that, but this is the crux of what they are designed for.

It sounds fantastic, and indeed it is an enormous help and resource. Being the go-to IT guy at my firehouse, I had fallen into the enviable position of making this system work for us. The trivial solution is, of course, just to fire up Windows with IE and let the monitor sit there—in fact, the vendor suggested this to me. I suppose when you're trying to sell something as easy to use, that's what you do. That solution, needless to say, was unsatisfactory. From a budgetary standpoint, I was encouraged to keep costs down. The first and easiest decision was to use Linux; that alone shaved off the cost of a Windows license.

The initial requirements and constraints became as follows:

  1. Have a low-power computer to run the Web browser—the smaller the better.

  2. The monitor must be 32" or larger.

  3. The computer and monitor are in different rooms.

  4. The building is on a generator backup, and the system must be able to endure the minimal power loss between the time the main power goes out and the generator activates.

  5. The monitor needs to be mounted 10–12 feet up so it is easily visible to everyone entering the building.

I went ahead and purchased a mini PC for about $160 (a Zotac Mag in my case). Given the distance requirement, with the monitor being in a different room from the computer, I decided to use an HDMI connection with an active range extender (Cat 5 Ethernet to HDMI extender). The monitor is a 32" 720p LCD TV.

My first instinct was to grab the latest Ubuntu and install it, and that's exactly what I did. At the time, perhaps two or three years ago, that was Ubuntu 11. I naively installed it, set up the wireless and made Firefox load on startup. I also set Firefox to save the session so it always booted with the same session open. Making sure to turn off screen blanking was important as well. Everything was working great, or so I thought. As time went by, I started encountering problems I hadn't thought of. Let me go through a few of them here.

The first and most glaring problem was what happens when the Internet connection goes down. Ubuntu's Wi-Fi management is built for a desktop environment, and it performs well there. However, for what is essentially a kiosk, it has some drawbacks; like most things designed for a desktop, it still requires some degree of interaction. The most succinct way to put it is that it was just too much operating system for my needs. I didn't need Unity; I didn't need a compositing window system; I didn't need to be badgered about updates; I didn't need a fancy packaging system, and I didn't need to hunt down where each setting I wanted to change lived. My approach was just wrong. I found myself solving problems by dismantling the operating system. I'm sure many of you have done this as well.

Finally, it was corruptible: the setup could change if you wanted it to. That is fantastic for a desktop, because it means I can customize it however I want. Conversely, it is terrible for a kiosk, because it means everyone who gets their hands on it, even with good intent, can change settings and give you that much more of a headache when you need it back exactly how you originally set it up. This too happened. A fellow firefighter would come in and wonder if the page was refreshing correctly, grab the keyboard and mouse, and the next thing I knew, the browser was set to start up with a sports ticker. This is simply the natural consequence of having a publicly facing system.

After constantly fixing these small issues, I'd had it. I decided there must be a better way, so I started from scratch and began my search for a better-suited distro. The few I considered were Vector Linux, Puppy Linux, Damn Small Linux and SliTaz. Each of these is an amazing distro, and this is by no means an exposition on what each is capable of. This is just my account of what I did.

I finally settled on the reality that I would make a distro that does what I need to do—no more and no less. In other words, I wanted Just Enough Operating System (JeOS). Eventually, I settled on Tiny Core Linux. It lets me do just that.

I probably should digress for a second, lest I offend the Tiny Core experts out there. Tiny Core Linux should not be thought of as a distribution, but as a set of tools for building your Linux system however you see fit. I needed just enough to get this particular job done.

Tiny Core is available on its Web site in three flavors. Core, the smallest of the three, is just 9MB. Core provides a command-line interface. TinyCore provides a basic FLTK/FLWM GUI, and finally, CorePlus provides a choice of seven different window managers, Wi-Fi support, remastering tools and support for non-US keyboards.

I installed CorePlus onto a USB drive and fired it up. I got to understand how Tiny Core works, and make no mistake, there is a pretty steep learning curve. For my purposes, it was more than worth it. Besides, who doesn't like to learn new things? One of the most important things about Tiny Core Linux is that it is non-corruptible. I can set it up exactly how I see fit, and it always will boot to that state. Nothing is saved. Tiny Core boots and runs entirely in a RAM disk. It opens the image file you create, loads it into memory and runs. Whether you boot it from a hard drive or USB drive, it simply loads the image file with all the programs, settings, files and so on that you built in to it, straight into memory. Tiny Core uses the concepts of extensions to install applications. There is an excellent write up on the Tiny Core Web site explaining extensions. For more intricacies on Tiny Core, there is also the excellent Linux Journal article I mentioned earlier, which I suggest you look at.

With a better idea of how Tiny Core Linux worked, I decided it was the best option for me. So, I got started setting up the system how I wanted:

  • Choose an X server: Tiny Core defaults to Xvesa. I didn't need anything fancy but decided to go with Xorg for ease of configuration. HDTVs come in two resolutions, 1920x1080 (1080p) and 1280x720 (720p), so configuration wasn't too big an issue. My display was a 720p model.

  • Choose a window manager: among the choices available, I chose Joe's Window Manager. Go with whatever is comfortable and suits your needs.

  • Pick my extensions: I went with the following: Firefox, as my chosen browser; lxrandr, for configuring resolutions under Xorg; wifi, which pulls in all the required Wi-Fi libraries and firmware as dependencies; and Xorg, as stated previously.

  • Optional: wicd, a graphical configuration tool for Wi-Fi.
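On the resolution point above: lxrandr is a graphical front end to xrandr, so a display profile ultimately reduces to a one-line script. Here is a sketch, where HDMI-1 is an assumed output name—check `xrandr -q` for the real one on your hardware:

```shell
# Write a 720p "display profile" script (to /tmp for illustration).
# HDMI-1 is an assumed output name; verify yours with `xrandr -q`.
cat > /tmp/profile-720p.sh <<'EOF'
#!/bin/sh
xrandr --output HDMI-1 --mode 1280x720
EOF
sh -n /tmp/profile-720p.sh && echo "profile OK"
```

A matching 1080p profile would differ only in the `--mode` argument.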

Really, the only way to tailor your system completely is by trial and error. You live, you learn, and you become better. Once I had the extensions I wanted, I chose to have them loaded inside the initrd rather than as apps loaded on boot. The Tiny Core Wiki does a great job of explaining the differences. I chose this because there aren't too many extensions to load, and they all fit in the RAM of the machine I'm using. This also frees me from doing excessive writes to a Flash drive should the system be running on one, or from even requiring a hard drive. The system I'm using has a 160GB hard drive that I installed. It also saves on power usage.
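For comparison, the other route keeps extensions in a tce directory on the boot medium and loads whatever the onboot.lst file names at each startup. A sketch of such a list follows; the exact .tcz names vary by Tiny Core release, so treat them as illustrative:

```shell
# Illustrative onboot.lst, written to /tmp here; on a real install it
# sits in the tce directory of the boot medium (e.g. /mnt/sda1/tce/).
# Extensions listed here are loaded automatically at every boot.
cat > /tmp/onboot.lst <<'EOF'
Xorg-7.7.tcz
jwm.tcz
wifi.tcz
firefox.tcz
lxrandr.tcz
EOF
cat /tmp/onboot.lst
```

Extensions not listed can still be loaded on demand later with `tce-load`.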

I was lucky enough to have an older laptop I could experiment with as I tailored my system. I also used VirtualBox on my Ubuntu box. With all my extensions chosen, I remastered and ended up with a system that fits in less than 70MB. You can get as pedantic as you want with this, but I needed to get it working quickly; although admittedly, I will go back one day and see how small I can get it!

The system booted, and everything came up fine. Granted, this came after about four remasterings. Like I said, it's a learning process, and you tweak as you go. Now came the work of making it do what I wanted it to do on boot. The first issue was that I needed Wi-Fi. I used the wifi utility to connect to a Wi-Fi network. The command-line utility provided by Tiny Core's wifi extension saves the SSID and password settings you supply in a wifi.db file.

Figure 1. The wifi.sh script is called in this file. Here I am loading the Wi-Fi settings from /opt/wifi.db. After it connects, the boot process continues.

Figure 2. An example wifi.db—notice the SSID first and password, in plain text, second.
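Since the password sits in that file in plain text, it's worth knowing exactly what's in it. Here is a quick sketch with made-up credentials; the two-field, tab-separated layout follows Figure 2, though your wifi.db may carry an additional field for the encryption type:

```shell
# Build a throwaway wifi.db like the one in Figure 2
# ("StationNet" and "secret123" are invented values).
printf 'StationNet\tsecret123\n' > /tmp/wifi.db

# Plain-text credentials deserve tight permissions.
chmod 600 /tmp/wifi.db

# Split the tab-separated fields back out to verify the format.
awk -F'\t' '{print "SSID: " $1; print "Pass: " $2}' /tmp/wifi.db
# -> SSID: StationNet
# -> Pass: secret123
```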

Once I had saved my settings, I set Wi-Fi to connect automatically at boot time. Tiny Core has two places for loading commands and scripts upon startup, bootsync.sh and bootlocal.sh, both in /opt. There is a subtle but important difference between the two. bootsync.sh, as the name might imply, isn't asynchronous; its commands are run, and they block the boot process until they are finished. bootlocal.sh is run from bootsync.sh in the background. I needed the Wi-Fi to be loaded before anything attempted to use the Internet. It would be catastrophic if the browser loaded and tried to load the page while the Wi-Fi script still was attempting to connect. One very important gotcha to note is that Tiny Core's wifi.sh script will take down the WLAN interface if it does not connect. This confused me at first, making me think I had a kernel module issue. Alas, I didn't, so I saved the proper Wi-Fi credentials, and then I was up and running once again.
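A minimal sketch of that ordering follows, written to /tmp here so it can be inspected anywhere; on the kiosk this content belongs in /opt/bootsync.sh, and the `-a` auto-connect flag is an assumption worth checking against your wifi extension's usage text:

```shell
# Sketch of /opt/bootsync.sh (written to /tmp for illustration).
cat > /tmp/bootsync.sh <<'EOF'
#!/bin/sh
# Commands here run synchronously: boot blocks until each returns.
# Connect using credentials saved in wifi.db before anything needs
# the network ("-a" auto-connect flag assumed).
wifi.sh -a > /tmp/wifi.log 2>&1
# bootlocal.sh is backgrounded, so everything in it runs asynchronously.
/opt/bootlocal.sh &
EOF
sh -n /tmp/bootsync.sh && echo "syntax OK"
```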

The next operating-system-specific configuration to deal with was the DPMS and screensaver settings—after all, I couldn't have the screen blanking every 15 minutes on a station display. There are a few ways to disable auto-blank and sleep. This is what I did: I put the following into the ~/.xsession file (being X-specific options, that's where they belong):

xset s off
xset s noblank
xset -dpms

The s off option shuts off the screensaver functionality; noblank tells it not to blank the screen, and finally, -dpms tells Xorg to disable DPMS Energy Star features.

I'm almost there. The final, albeit big, piece of this puzzle was to have the browser load on startup with the correct settings. To do so, I put a script in ~/.X.d. I simply made a file named firefox and put firefox & inside it. It doesn't need to be executable, so there's no need to play with its permissions. Now the browser loads on startup. I could write an entire article on configuring browsers, but here is a rundown of how it went.
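Creating that startup hook is a one-liner (the file name firefox is arbitrary; anything dropped into ~/.X.d is read when X starts):

```shell
# Anything in ~/.X.d is picked up at X startup, so no execute bit is
# needed; the trailing "&" keeps the session from blocking on Firefox.
mkdir -p "$HOME/.X.d"
echo 'firefox &' > "$HOME/.X.d/firefox"
cat "$HOME/.X.d/firefox"   # -> firefox &
```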

I used the Session Manager extension for Firefox to save the session I wanted. Session Manager has settings for auto-loading a session; tweak those how you see fit. I also made sure Session Manager would not overwrite my session should something go awry. In addition to Session Manager, I installed the following:

  • FF Fullscreen: starts browser in full screen.

  • Reload Every: refreshes the page at specified intervals.

  • Memory Restart: automatically restarts the browser when its memory usage reaches a threshold.

After setting up the browser just how I wanted, it was time to save my settings. I've already said that Tiny Core is incorruptible; what I mean by this is that it starts in the same state every time. Your settings, and anything else you've done, are not saved on shutdown. Everything resides in RAM, and when the system boots up again, it simply decompresses its image file straight into memory. So what do you do when you need to save settings? Tiny Core lets you save any changes you make to the filesystem in the form of a backup. The backup is simply a tarball of whatever you specify, and Tiny Core can be configured to restore this file on each boot. Tiny Core includes a backup utility, which creates a tarball containing anything you list in /opt/.filetool.lst and excluding anything in /opt/.xfiletool.lst. My include list consists of:

  • home

  • opt

My home and opt directories are both included in the backup, but I don't need everything in them. My exclusion list consists of:

  • Cache

  • cache

  • .cache

  • XUL.mfasl

  • XPC.mfasl

  • mnt

  • ./adobe/Flash_Player/AssetCache

  • .macromedia/Flash_Player

  • .opera/opcache

  • .opera/cache4

  • .Xauthority

  • .wmx

  • *.iso

Everything here is a default, with the exception of *.iso. That entry is there because the remastering process can create a bootable ISO for you, and I don't want it included in my backup. Make sure any setting you want saved is in a file included in the backup. In my case, the Firefox extensions and settings are all contained in .mozilla in the home directory. You even can do a dry run to see what files would be saved given the rules you provided. Once I was satisfied, I ran a backup, and my resulting backup file, mydata.tgz, was around 800KB.

Figure 3. The backup utility is where you create your .tgz backup file. The dry run option is always a good idea so you know exactly what it's going to do.
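The two control files are recreated here in /tmp for illustration; on the kiosk itself they live in /opt, and `filetool.sh -b` is the command-line counterpart of the GUI backup shown in Figure 3:

```shell
# Include list: back up the whole home and opt directories.
printf 'home\nopt\n' > /tmp/.filetool.lst

# Exclusion list (a few sample entries from the defaults above).
printf 'mnt\n.cache\n*.iso\n' > /tmp/.xfiletool.lst

# On the Tiny Core box itself, the backup is then a single command:
#   filetool.sh -b        # writes mydata.tgz
cat /tmp/.filetool.lst
```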

Now that I had my backup file, I was ready to remaster. I remastered using the included Ezremaster utility, but there are other ways to remaster (a quick trip to the Tiny Core documentation will show them to you). During the remaster process, I chose my extensions (as outlined previously) and the boot codes I wanted.

Tiny Core has a number of boot codes; let me review some important ones here. You can define the locations of the home directory and the opt directory. Because the systems I was installing Tiny Core on have hard drives, I specified the boot codes as follows:

opt=sda1 home=sda1

There is a norestore option listed and described in the documentation that tells Tiny Core not to restore any backups. Because I wanted to have my exact settings restored on each boot, I did not use that option. During the Ezremaster process, I specified my backup file. I also told the Ezremaster process to create an ISO file for me.
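These boot codes end up on the kernel command line, so with a syslinux/extlinux-style bootloader the stanza might look like the sketch below. The label and the kernel and initrd paths are assumptions that vary by install, and the restore= code (which points Tiny Core at the partition holding the backup tarball) is optional when the backup sits in the default location:

```
LABEL kiosk
KERNEL /boot/vmlinuz
APPEND initrd=/boot/core.gz quiet opt=sda1 home=sda1 restore=sda1
```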

Figure 4. Choosing what to load and the best way to load it will be a big part of your project. If need be, you can create your own extensions.

Figure 5. The Ezremaster utility makes it very easy to do your own remaster. This was a huge time-saver in my project.

With everything remastered, I loaded my new Tiny Core build onto a USB drive. It booted perfectly. Now with a working copy of what I needed, I went ahead and installed it on our mini PC. The installation process involves running the Tiny Core installer. I didn't need any particular partitioning scheme, so I just used the entire drive. I specified sda1 as my opt and home, as previously stated. Once the system was installed, I placed my backup tarball on the newly created filesystem. Then, I removed my USB thumbdrive and gave it a reboot. I now had my custom Tiny Core Linux display kiosk fully working. Every reboot provided me with a clean environment tailored to my specifications.

The time spent learning how Tiny Core works was well worth the outcome. There have been no problems, and the system is rock-solid. Even if the power goes out, it starts up with our page loaded, displaying dispatch information and who is arriving, should a call come in. It needs no administration, because the settings do not change. It is Just Enough Operating System for me!

Figure 6. This is our display board, showing our call, unit and arrival information.

Figure 7. Our display at the Firehouse: LED TV with an active HDMI-to-Ethernet extender—all up and running our custom Tiny Core Linux!

Figure 8. This is a forward view of our bays. When entering the station, crews getting into these apparatuses are able to view the display (not pictured).

Here are some things I'd like to improve upon in my next version:

  • Move settings over from a backup file to an extension. The Tiny Core documents outline this procedure, and I can't wait to try it. This should streamline the process of loading settings and user-created files.

  • Create display profiles for 1080p and 720p. With two profiles available for both TV types, they can be selected easily should Xorg not be able to auto-detect the resolution of a particular display.

  • Remote monitoring—install something like monit to enable monitoring.

  • Create an extension with initial setup scripts. The scripts would set information, such as Wi-Fi credentials, monitor types and so on, and handle persistence for these settings.

  • Integrate with the UPS. Install the NUT (Network UPS Tools) package for interfacing with the UPS, although this might be a needless complication. I'm sure someone will have a use for it.

Tiny Core Linux has saved us money, saved me some sanity and keeps our fellow firefighters informed and ready to perform the job. Firefighters must be ready 24/7, so technology should help us accomplish that goal and not get in our way. Linux has proved to be more than capable, and I will be making changes and improvements as I see fit. I hope you try your hand at it as well and let us know how it goes.
