In-Vehicle Data Logging

by Stuart Warren

When we hear "black box recorder" or "flight recorder", it's usually in a news piece where authorities are piecing together the events leading up to an aviation accident. Most of the time, though, flight recorders provide important information for maintenance activities and can assist the manufacturer in designing new aircraft.

Figure 1. Looks Like a Black Box to Me

While flight recorders are specifically designed for the aviation industry, their function can be applied to other industries to assist with maintenance or product development. The company I work for, BTR Automotive-Drivetrain Systems, designs and manufactures automatic transmissions in Australia for Ford, SsangYong and Maserati. Considering the product is controlled by seven solenoids, the software required to control the transmission is surprisingly complex. As is often the case with embedded software, most of the complexity lies in dealing with component failures in a safe and timely manner. The software is bench tested and also tested in-vehicle where the driving conditions are targeted for the new feature under test. We also accumulate kilometers on the software with no specific driving conditions in mind. This testing can range from 5,000 to 100,000 km on different vehicles. For this testing it's useful to log the driving conditions for later analysis--just like a flight recorder.

So far I've only discussed the software testing side of things, but the data logging is equally useful for the mechanical engineers in the R&D department. During development, transmissions are instrumented with pressure and temperature transducers, drive shafts with strain gauges and the vehicle itself with accelerometers and a road-speed sensor. These signals are used in conjunction with the sensors already in the car--engine, output shaft and wheel speeds, throttle and torque signals, etc. The signals may be available as analog, pulse or digital inputs or via RS-232, diagnostic links or the vehicle's internal communication network.

Considering the vehicle under test could be accumulating 100,000 km during a test, there's a lot of data to log. This logging has to be reliable; it shouldn't depend on the driver logging in to a laptop, starting the logging software and shutting it down at the end of the driving cycle.

Surprisingly, there didn't seem to be anything commercially available that would meet our requirements. The problem was how to log data from different sources, each time stamped from the same clock. Previously, we were using CANalyzer software on NT laptops to log CAN data, but this required driver intervention and seemed to work only half the time. We also had no way of logging other signals with the CAN data using a common time-stamp source. It looks like there are a lot of systems that cover one area particularly well but do not play well with other systems. The main problem appears to be the underlying operating system. To have an accurate time stamp, you need an operating system that can respond to an interrupt quickly (say, within 50 µs). Windows and Linux can't guarantee the interrupt response time because that isn't what they're designed to do. The alternative is to have the interface card do the time stamping. This is okay except where signals are coming in from different interfaces. With each interface using its own time-stamp clock and using proprietary drivers, how do you synchronize them? We'll come back to this.

What's CAN? Ever wanted to network your car? Your vehicle's manufacturer may have beaten you to it. Controller Area Network (CAN) was developed by Bosch to provide a way of sharing information between electronic modules in the car. For example, engine speed is typically used by the engine control module, instrument cluster and transmission control module. Without using a network, each controller needs to connect to the engine-speed sensor, or the one module must read the signal and provide a buffered signal to the other modules. Using a network like CAN cuts a lot of cable out of the wiring harness by sharing information between modules. It also allows more complex control of the components in your car. Thanks to CAN, we can log most of the signals we need by logging all of the CAN traffic.

As a starting point, we designed a system to log CAN, RS-232 and 16 analog inputs to DDS4 tape. It should be possible to put it in a vehicle and then forget about it. Our solution was to run Linux on an embedded PC comprising the Advantech PCM-9570/S with 256MB RAM, 16MB Compact Flash, a 600MHz PIII, SCSI-2, 10/100-T Ethernet and 4 × RS-232; an Advantech PCM-3680 PC/104 Dual CAN Interface; a Real Time Devices DM6420HR 500kHz 16-Channel Analog Input Card; a Sony SDT1000 DDS4 Tape Drive; and a Seetron 16 × 2 backlit LCD display.

The DDS4 drive provides 20GB of storage with the main advantage being that the tapes can be mailed back to the R&D section for analysis without recalling the vehicle.

As mentioned earlier, one of the requirements was being able to time stamp all data sources from the one clock. To do this we used FSMLabs' real-time Linux patch. This allows us to respond to interrupts rapidly (within ~20 µs), read the Pentium's time-stamp counter and then read the data from the hardware. Finally, the data is put into a FIFO to be read by the user-space logging process. Using a modified version of Heinz Haeberle's real-time CAN driver, we were able to log CAN data as simply as

cat /dev/can > ./can_log

Better still, let's compress the data on the fly:

cat /dev/can | gzip -9 > ./can_log.gz

The next problem was how to have a system we could forget about. The system had to be able to cope with power loss without having to e2fsck the next time we booted up. To do this we ran Linux from a RAM disk, something that single-floppy distributions do all the time. We based the software on MiniRTL V2.3, one of the many floppy Linux distributions around. MiniRTL had the advantage of already including the FSMLabs' real-time patch applied to the kernel, so it was mostly right from the beginning. The only problem was the older libc that it used. Red Hat 5.1 uses the same vintage of libc, so whenever we needed additional utilities, we lifted them from a Red Hat 5.1 box. By starting with a small footprint, we were able to replace the hard drive with a 16MB Compact Flash card (of which only 2MB is used). In removing the hard drive, we removed the component most likely to fail under continued vibration.

To get the tape drive going we had to drop in the SCSI kernel module and the mt utility for positioning the tape. Interfacing to the tape drive was new territory for us, but once we found the right tool for the job (mt), it was dead easy. (If you enjoy controlling things with your computer, mt can amuse you for hours, especially when you consider how long it takes to erase a tape.) In the end we were happily archiving to tape with

tar -cf /dev/ntape *
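
For reference, positioning the tape with mt is as simple as the rest of it. These are standard mt commands, using the same no-rewind device as above:

mt -f /dev/ntape status     # report drive and tape status
mt -f /dev/ntape rewind     # back to the beginning of the tape
mt -f /dev/ntape eod        # skip to the end of recorded data before appending
mt -f /dev/ntape fsf 3      # skip forward over three archives (file marks)
mt -f /dev/ntape erase      # erase the tape -- go and make a coffee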

Being a headless box, the logger doesn't have much way of providing feedback to the user. To address this, the BTRA Black Box runs mini_httpd, a tiny web server supplied with the MiniRTL Project. We use this to allow the user to configure CAN bitrates and will use it to configure the analog channels in the future. As you'd expect, the box also runs ftpd and telnetd.
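
Starting the web server is another one-liner. The port, document root and CGI pattern below are illustrative rather than our exact configuration:

mini_httpd -p 80 -d /var/www -c '*.cgi'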

On the front of the box is a 16 × 2 LCD from Seetron. This connects directly to a spare serial port and allows us to provide status to the user. Once again, communicating with the LCD was a no-brainer:

echo -n -e "\015\016 BTRA Black Box\012  Ver 0.0.1" > /dev/ttyS0

The \015 and \016 are for clearing the display and turning the backlight on.
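
The only other step is making sure the serial port is set up to suit the display. Something like the following does it; the 9600 bps rate here is an example--use whatever rate the display is jumpered for:

stty -F /dev/ttyS0 9600 raw -echo    # match the LCD's bit rate, no line-discipline surprises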

Even though the majority of the functionality has been demonstrated in the three examples above, the real work was in the scripts that buffer the data, stop and restart the logging, and determine when the vehicle has stopped. We could easily have streamed the data directly to the tape drive but wanted to minimize the operation of the tape drive, preferably using it only when the vehicle was stationary. This required us to buffer the data in RAM until it was safe to write to the tape drive.

Figure 3 outlines the arrangement we use to buffer the data. The Black Box uses two 64MB Minix RAM disks. While one is logging data, the other is archived to tape. Listing 1 [available at ftp://ftp.linuxjournal.com/pub/elj/listings/issue05/4813.tgz] shows the script used to coordinate the use of the buffers. When the buffer currently logging data is close to capacity, all logging sources are killed and then restarted, this time pointing to the new buffer. The archiving script is started to tar the old buffer to tape. A short README file is placed in the buffer listing the start and end times of the logging, as well as any error messages that occurred.
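
Listing 1 is the real thing; the following is only a rough sketch of the idea, with made-up paths, a made-up 90% threshold and a single CAN logger standing in for all of the logging sources:

#!/bin/sh
# Rough sketch only -- the production script (Listing 1) also waits for the
# vehicle to be stationary before touching the tape drive and writes a
# fuller README.

BUF1=/daq/buf1      # first 64MB RAM disk
BUF2=/daq/buf2      # second 64MB RAM disk
CURRENT=$BUF1

start_logger() {
    gzip -9 -c < /dev/can > "$1/can_log.gz" &
    LOG_PID=$!
}

start_logger $CURRENT
while true; do
    sleep 10
    USED=`df $CURRENT | awk 'NR==2 { sub("%", "", $5); print $5 }'`
    [ "$USED" -le 90 ] && continue

    kill $LOG_PID                       # stop logging into the full buffer;
                                        # CAN data piles up in the real-time FIFO meanwhile
    date > $CURRENT/README              # note when this buffer was closed
    if [ "$CURRENT" = "$BUF1" ]; then NEXT=$BUF2; else NEXT=$BUF1; fi
    start_logger $NEXT                  # carry on logging into the other buffer
    ( cd $CURRENT && tar -cf /dev/ntape . && rm -rf ./* ) &    # archive in the background
    CURRENT=$NEXT
done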

Figure 2. More I/O than You Can Shake a Fistful of Serial Cables At

Figure 3. Block Diagram of How the Logger Software Works

Normally killing the logging tasks would mean we lose data. In our case, the data is coming from real-time tasks. RTLinux uses FIFOs as one way of communicating between real-time tasks and the Linux kernel and user space. Where you see /dev/can in Figure 2, it's actually a symlink to /dev/rtf20. That's RTF for real-time FIFO. Real-time and non-real-time tasks communicate with each other using either shared memory or real-time FIFOs. For a data-logging application like ours, the FIFOs accumulate the data until the logging task reads it (or the FIFO overflows). Thanks to the FIFOs, no data is lost when changing buffers.
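
Concretely, the only plumbing needed on the Linux side is the symlink; the user-space logger never knows it is reading a real-time FIFO:

ln -s /dev/rtf20 /dev/can     # the CAN driver's real-time FIFO
cat /dev/can > ./can_log      # the logger simply drains the FIFO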

The Vehicle Logger was successfully used at the IDIADA vehicle test facility in Spain earlier this year. Over twelve months of vehicle testing, we had been detecting a spurious fault in the system that would occur once every several thousand kilometers. As always happens, the fault never occurred when BTRA personnel visited the facility. In March the logger was fitted to the test vehicle. Three weeks and 1.3GB of gzipped CAN data later, the fault was recorded and the tape sent back to us for analysis. The analysis allowed us to pinpoint the cause of the fault and led to a better understanding of the environment in which the transmission operates.

How do we analyze the data? So far it's done manually, which can take a long time. Fortunately, the 64MB chunks are manageable, and the relevant archive is easily extracted from the tape using the file's creation date. We have written several utilities to manipulate the CAN data, allowing us to use CANalyzer to parse it. Unfortunately, this is not easily put in a batch file. It also doesn't handle data from sources other than CAN. We want to be able to put a tape in a machine and have it process the data overnight, generating a report of the types of shifts, oil-temperature profiles, infeasible vehicle signals, snippets of CAN data where faults were logged, etc. Unfortunately, there doesn't seem to be much available under the GPL that we can build on. As a starting point, we're working on a CAN data-processing library that allows us to associate callback routines with each CAN message.
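
Pulling a particular 64MB chunk back off the tape goes something like this; the archive number here is made up--in practice it comes from the README dates:

mt -f /dev/ntape rewind
mt -f /dev/ntape fsf 42       # skip forward to the archive of interest
tar -xvf /dev/ntape           # extract that buffer's README and gzipped logs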

The Vehicle Logger scripts were first written in an airport lounge en route to IDIADA, Spain--they were a bit of a mess. In the lead-up to writing this article, the scripts were rewritten to make the system easier to configure for logging other data sources. Work is underway to make each data source available from a TCP/IP server, enabling the data to be streamed to several destinations (see Figure 4). We can still log the data locally using netcat:

nc localhost 4201 > /daq/buf1

If you've never used netcat, you should. In ten words or less, you can pipe stdio over your network. Doesn't sound all that useful? It's just the thing for redirecting your serial port over the network. With netcat and an RS-232/IR interface, you can network your VCR with one command line.
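
As a taste, the classic netcat (the -l -p listening syntax below; other nc variants differ) will serve a serial port over the network in one line. The port number and hostname are made up:

nc -l -p 4300 < /dev/ttyS1 > /dev/ttyS1    # on the box with the serial device

nc blackbox 4300                           # from any other machine on the network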

Back to the Vehicle Logger. With the data server arrangement, the data can be streamed to a notebook connected via an Ethernet crossover cable, where a LabVIEW or DASYLab application can be used to view the data in real time. In this way, the Vehicle Logger becomes a data acquisition hub--all of the data acquisition is done by the one device, which then distributes the data to the clients. The server is based on a queue with multiple readers; it drops a client's connection if that client has been overtaken by the head of the queue. The server can optionally queue data from different sources, hence the name: multiple input multiple output (MIMO).

Figure 4. In the future, the data collection will run via a server, allowing the data to be sent to different clients.

Looking to the future, the project has the potential to be extended to incorporate remote dial-in for accessing vehicle diagnostics and reprogramming the transmission control module. It would be nice to implement a standard protocol for distributing the data. National Instruments uses a protocol called DataSocket, although there doesn't seem to be much public information discussing the protocol. The nice thing about DataSocket is its HTTP-style addressing. For example, to read engine speed from the Vehicle Logger, you would use:

dstp://edgar.transmissions.albury.com/engine_speed

The server then streams the time-stamped data to the client. This then leaves LabVIEW to do what it does best--pretty graphics.

Stuart Warren (stuart@rachandstu.com) is an electrical engineer working for BTR Automotive in Sydney, Australia. He likes to spend his spare time rogaining and bushwalking.
