Linux on Track
The software comprises several main components: the data acquisition program, a watchdog, a taper (the tape-writing job), a GPS monitor and device drivers. These components are described in the following subsections.
Except between two and four o'clock in the morning, the data acquisition is active and digitizes data. Because the data acquisition is synchronous with wheel rotation, the data rate depends on the train's speed. At 300kph the wheel rotates at about 29Hz, resulting in a data rate of roughly 85kB/s (nearly one megabyte per kilometer, covered in about twelve seconds), which has to be streamed to disk without loss.
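As a sanity check, the data rate can be derived from the article's own figures (nearly one megabyte per kilometer, 345 wheel rotations per kilometer); the function below is only a back-of-the-envelope sketch:

```c
/* Figures taken from the article: each file of nearly one megabyte
 * covers one kilometer of track, i.e., 345 wheel rotations.  The
 * one-megabyte figure is approximate, so the result is, too. */
#define BYTES_PER_KM     1.0e6
#define ROTATIONS_PER_KM 345.0

/* Sustained data rate in bytes per second at a given wheel
 * rotation frequency. */
double data_rate(double wheel_hz)
{
    return (BYTES_PER_KM / ROTATIONS_PER_KM) * wheel_hz;
}
```

At 29Hz this comes out near 84kB/s, the sustained rate the program must stream to disk at top speed.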
Every 345 rotations of the wheel (about one kilometer), the hardware is reset to trigger again on the next zero-degree marker of the resolver. At that time the data acquisition program fetches the most recent information from the GPS, writes it to file, closes the file and opens a fresh one. Each file covers one kilometer of track and is nearly one megabyte in size. This approach was chosen for several reasons:
- One megabyte and one kilometer are convenient sizes to handle with data analysis software.
- Synchronising every kilometer makes sure that losing individual events from the resolver due to noise will not spoil all data for the rest of the day.
- One kilometer was determined to be a useful checkpoint interval at which to record GPS information.
The files are not created anew each day but are overwritten, for efficiency reasons. After a power failure, it is almost impossible to tell how much of a file is new and how much is left over from the day before; therefore, a partly written file has to be thrown away. Losing up to one kilometer of data is a reasonable tradeoff between the number of files and the amount of data lost.
Of course, there is nothing magic about one kilometer. Two kilometers or one half kilometer would probably have worked equally well.
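The per-kilometer file cycling might be sketched as follows; the file-naming scheme and the gps_info argument are invented details, not the project's actual code:

```c
#include <stdio.h>

/* Files are reused from day to day; opening with "w" truncates
 * yesterday's copy (hypothetical naming scheme). */
FILE *open_km_file(int km)
{
    char name[32];
    snprintf(name, sizeof name, "km%05d.dat", km);
    return fopen(name, "w");
}

/* Called after every 345 rotations: record the latest GPS fix,
 * close the finished file and start a fresh one. */
void end_of_kilometer(FILE **fp, int *km, const char *gps_info)
{
    fprintf(*fp, "%s\n", gps_info);
    fclose(*fp);
    *km += 1;
    *fp = open_km_file(*km);
}
```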
While reading data from the devices, the data acquisition program also monitors the wheel's rotational speed to check whether the train is travelling above 60kph. Below that threshold, data is considered to be of no interest and is thrown away; in particular, the file currently being written is reset and reused as soon as the speed rises above the threshold again. This can discard up to one kilometer's worth of data recorded above 60kph, but since the threshold of 60kph is a rough guess anyway, no harm is done by losing some data recorded at speeds slightly above it. Typical travelling speeds of the ICE are, depending on track type, 100kph, 160kph, 250kph and 280kph, and only those speeds were of major interest in the project.
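Since 345 rotations correspond to one kilometer, the wheel covers 1000/345 meters per rotation, and the speed check reduces to a one-line conversion. The function names below are invented for illustration:

```c
/* 345 rotations per kilometer (from the article) gives the
 * distance covered per rotation; 3.6 converts m/s to km/h. */
#define METRES_PER_ROTATION (1000.0 / 345.0)

double speed_kph(double wheel_hz)
{
    return wheel_hz * METRES_PER_ROTATION * 3.6;
}

/* Data recorded below 60kph is thrown away. */
int above_threshold(double wheel_hz)
{
    return speed_kph(wheel_hz) >= 60.0;
}
```

Reassuringly, speed_kph(29.0) comes out near 300kph, matching the figure quoted earlier.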
The data acquisition program is rather simple; most of it handles read and write errors. Since device drivers were implemented for the RTI-860 as well as for the ADCO, digitizing is as easy as opening a file and reading from it. The only point requiring even minimal thought was that the data rates of the two drivers are not identical: reading the same amount of data from both devices on every pass through the main loop would soon fill up one of the drivers' buffers. A general solution in such cases is the select() system call; in the given case, however, the exact ratio between the two data rates was known, and the amount of data read from each driver in every read call was chosen accordingly.
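A minimal sketch of such a fixed-ratio read loop follows. The device paths, the 4:1 rate ratio and the chunk size are made up for illustration; the point is only that each pass reads amounts proportional to the two data rates, so neither driver's buffer fills up:

```c
#include <fcntl.h>
#include <unistd.h>

enum { CHUNK = 4096, RATIO = 4 };  /* assume one device delivers 4x the rate */

int acquire(const char *rti_dev, const char *adco_dev, int out_fd, int passes)
{
    char buf[RATIO * CHUNK];
    int rti = open(rti_dev, O_RDONLY);
    int adco = open(adco_dev, O_RDONLY);

    if (rti < 0 || adco < 0) {
        if (rti >= 0) close(rti);
        if (adco >= 0) close(adco);
        return -1;
    }
    for (int i = 0; i < passes; i++) {
        /* Read RATIO chunks from the fast device for every one chunk
         * from the slow one, keeping both driver buffers drained. */
        if (read(rti, buf, sizeof buf) != (ssize_t)sizeof buf ||
            write(out_fd, buf, sizeof buf) != (ssize_t)sizeof buf)
            goto fail;
        if (read(adco, buf, CHUNK) != CHUNK ||
            write(out_fd, buf, CHUNK) != CHUNK)
            goto fail;
    }
    close(rti);
    close(adco);
    return 0;
fail:
    close(rti);
    close(adco);
    return -1;
}
```

The real program's error handling is more elaborate; here any short read or write simply aborts the loop.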
At two o'clock in the morning the data acquisition process stops recording so as not to interfere with other work done at that time. First, a cron job reboots the system as a preventive measure against memory leaks; although none were observed, rebooting costs nothing and does no harm. After the boot, the acquired data is written to tape by a script, started as another cron job, which ultimately calls tar.
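The nightly sequence might look like the following crontab fragment; the times match the article's description, but the script name and exact minutes are invented:

```
# Illustrative /etc/crontab entries -- script name is hypothetical.
0  2 * * *  root  /sbin/shutdown -r now         # preventive reboot at 02:00
20 2 * * *  root  /usr/local/bin/write-tape.sh  # tar the day's data to DAT
```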
A minor nuisance was that it is almost impossible to find out how much space has been used on the tape when the DAT drive's internal compression is enabled. Assuming the compression ratio is about the same every day, it would probably have been possible to put two days' worth, or 1.5GB, of uncompressed data onto a 2GB tape. Since the A/D converter delivers only 12 bits, which are stored as 16-bit values, compression to 1.125GB should be trivial. Another 12% reduction is probably possible because most of the time the digitized signals do not cover the full 12 bits.
During the rest of the day, i.e., outside the two-to-four-o'clock window, another cron job runs every ten minutes. As a safeguard against as-yet-unknown bugs that might crash the data-acquisition program, a watchdog checks whether the data-acquisition process is still in the process table; if it is, the watchdog assumes it is doing something useful, and if it is not, the system is rebooted. As of this writing the watchdog has yet to prove its utility, since no such incident has been found in the log files.
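The liveness check itself is small. The article's watchdog scans the process table; the variant below instead assumes the data-acquisition program records its pid in a pidfile, which is an invented detail:

```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Signal 0 checks for the process's existence without
 * actually delivering a signal. */
int process_alive(pid_t pid)
{
    return kill(pid, 0) == 0 || errno == EPERM;
}

/* Hypothetical watchdog body: if the pidfile is missing, unreadable
 * or names a dead process, reboot as the article describes. */
void watchdog(const char *pidfile)
{
    FILE *f = fopen(pidfile, "r");
    long pid;

    if (!f || fscanf(f, "%ld", &pid) != 1 || !process_alive((pid_t)pid))
        system("/sbin/shutdown -r now");
    if (f)
        fclose(f);
}
```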