Robocar: Unmanned Ground Robotics
Two networked computers provide the brains of Robocar and control its sensors and actuators. Debian Linux version 1.2 is installed on both machines.
The first of these, Highlab, is a Pentium 166MHz with 16MB of RAM and a 1GB disk. The three boards in Highlab for sensor and actuator control are:
An ML16-P analog and digital I/O card made by Industrial Computer Source. The ML16-P is a low-quality, low-cost real-world interface for the ISA bus. It has sixteen 8-bit ADCs (analog to digital converters), two 8-bit DACs (digital to analog converters), eight digital output lines, eight digital input lines, and three 16-bit counter timers. We use this card for PWM motor control, e-stop, reverse and head-lamp relay toggling.
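As an illustration of how a counter/timer can drive PWM motor control, here is a small sketch of converting a desired duty cycle into on/off counts for a 16-bit timer. This is only illustrative; the function name and the actual register programming of the ML16-P are assumptions, not the card's real API.

```python
def pwm_counts(duty, period_counts=65535):
    """Convert a duty cycle in [0, 1] into (on, off) counts for a
    16-bit counter/timer. Illustrative only; programming the real
    ML16-P timers involves card-specific registers not shown here."""
    duty = min(max(duty, 0.0), 1.0)   # clamp to a legal duty cycle
    on = round(duty * period_counts)
    return on, period_counts - on
```

For example, a 25% duty cycle over a 1000-count period yields 250 counts on and 750 counts off.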
A CAN-PC card made by OmniTech for communicating to their CAN devices (the encoder wheel for speed sensing and the big servo for steering).
Two Matrox Meteor cards used for vision.
Highlab makes the high-level decisions and controls all of the actuators. It also performs vision and speed sensing.
Flea, the second of the two computers on Robocar, is a PC/104 stack. The PC/104 is an embeddable implementation of the common PC/AT architecture. It consists of small (90 by 96 mm) cards which stack together. A PC/104 uses ISA compatible hardware, although the connectors and pin-outs are different. Any software that runs on a regular desktop machine will also run on a PC/104. Its greatest advantage over a desktop machine, besides its compact size, is its greatly reduced power consumption. For more information on the PC/104 standard, see http://www.controlled.com/pc104/
Flea consists of several modules: a motherboard (the CoreModule/486-II from Ampro), an IDE floppy controller (the MiniModule/FI from Ampro), a digital I/O card (the Onyx-MM from Diamond Systems) and an Ethernet card (the MiniModule/Ethernet-II from Ampro). It has 16MB of memory and runs with a single 20MB solid-state IDE drive (the SDIBT-20 from Sandisk).
Since Flea has no video card, it uses a serial terminal as its console. We needed to patch the kernel to gain this ability, as it is not part of the normal kernel distribution. The serial console patch can be located at ftp://ftp.cistron.nl/pub/os/linux/kernel/patches/v2.0/linux-2.0.20-serial-cons-kmon.diff
The Onyx-MM features 48 digital I/O lines, 3 16-bit counter/timers, 3 PC/104 bus interrupt lines and an on-board 4MHz clock oscillator. Flea controls the scanning sonar's servo with this card. Sebastian Kuzminsky's Linux driver for this card can be found at ftp://ftp.cs.colorado.edu/users/kuzminsk/
Flea's task is simple; it turns the servo, pings the sonar and listens for the response. When it has a complete sweep of the arc in front of the robot, it processes and sends the information to Highlab.
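Flea's sweep could be sketched roughly as follows. The `ping` callable stands in for the real servo-turn/sonar-ping/listen cycle driven through the Onyx-MM card; the function name and the angular range and step are assumptions for illustration, not values from the actual system.

```python
def sweep(ping, start_deg=-90, stop_deg=90, step_deg=5):
    """Sweep the sonar across the arc in front of the robot.

    `ping` is a hypothetical callable that turns the servo to the
    given angle (degrees), pings the sonar and returns the measured
    range in meters. Returns the full sweep as (angle, range) pairs,
    ready to be processed and sent to Highlab."""
    readings = []
    angle = start_deg
    while angle <= stop_deg:
        readings.append((angle, ping(angle)))
        angle += step_deg
    return readings
```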
This year's software, running under the Linux OS, is significantly improved from last year's, which ran under MS-DOS. Although the MS-DOS system worked fine (we won third, first and fifth place in the previous three years), it was extremely difficult to expand, ugly and monolithic. As soon as Sebastian finished developing Linux drivers for all our unsupported equipment, we completely removed any and all traces of MS-DOS from our systems and rewrote the code from scratch.
Functionality has been modularized into two types of programs: a single arbitrator, which makes the decisions and controls the car, and sensors, which provide information about the world to the arbitrator. Sensors are derived from a skeleton sensor and are easy to create: you write only the code that generates a suggestion and interfaces to the hardware, then link against the sensor library. The arbitrator and the sensors use a common configuration library which makes it easy to parse configuration information from the command line and configuration files.
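The skeleton-sensor idea can be sketched as a base class from which a concrete sensor derives. All names here are hypothetical; the real system presumably links C code against a sensor library rather than subclassing in Python.

```python
class SkeletonSensor:
    """Sketch of the skeleton sensor a new sensor derives from
    (hypothetical API; not the project's actual sensor library)."""
    def __init__(self, rate_hz=10.0):
        self.rate_hz = rate_hz  # suggestion rate; the arbitrator may change it
        self.config = {"rate_hz": rate_hz}

    def read_hardware(self):
        raise NotImplementedError  # derived sensor's hardware interface

    def make_suggestion(self):
        raise NotImplementedError  # derived sensor's suggestion logic

class SpeedSensor(SkeletonSensor):
    """Example derivation: reports the current speed, as read from a
    wheel-encoder callback (standing in for the CAN encoder wheel)."""
    def __init__(self, read_encoder_mps):
        super().__init__()
        self.read_hardware = read_encoder_mps

    def make_suggestion(self):
        return {"type": "speed", "value": self.read_hardware()}
```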
Since the sensors and the arbitrator can run on any machine on the Robocar network, it is simple to add and remove computers to and from the system as needed. The arbitrator spawns sensors at startup using rsh. A simple command protocol allows communication between the sensors and the arbitrator over the network. The arbitrator can get and set a sensor's configuration, get a single suggestion from a sensor, set a sensor's suggestion rate and kill a sensor. Acknowledgments from the sensors are necessary, since we are using unreliable UDP (User Datagram Protocol) as our networking protocol.
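A command protocol over UDP needs framing plus a sequence number so acknowledgments can be matched to requests. The wire format below (one command byte, a 16-bit sequence number and a 16-bit payload length, in network byte order) is an invented example of such a scheme, not the project's actual protocol.

```python
import struct

# Hypothetical wire format: 1-byte command, 2-byte sequence number,
# 2-byte payload length, then the payload itself.
HEADER = struct.Struct("!BHH")

CMD_GET_CONFIG, CMD_SET_RATE, CMD_KILL, CMD_ACK = range(4)

def pack_command(cmd, seq, payload=b""):
    """Build one command datagram."""
    return HEADER.pack(cmd, seq, len(payload)) + payload

def unpack_command(datagram):
    """Split a datagram back into (command, sequence, payload)."""
    cmd, seq, length = HEADER.unpack_from(datagram)
    return cmd, seq, datagram[HEADER.size:HEADER.size + length]

def is_ack_for(datagram, seq):
    """Since UDP is unreliable, the arbitrator resends a command
    until it sees an acknowledgment carrying the same sequence number."""
    cmd, got_seq, _ = unpack_command(datagram)
    return cmd == CMD_ACK and got_seq == seq
```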
Sensors generate several types of suggestions for the arbitrator: an occupancy grid, the current speed and (for Kevin's research only) a heading. An occupancy grid is simply a grid-based representation of information about the world. Our occupancy grids are 6 meters wide and 3 meters deep, with ten grid points per meter, and the car is centered at the bottom of the grid. Each grid point can be marked with one of three values: good (it is okay for the car to move to that spot), bad (the car should avoid that position) and unknown. Not all sensors provide occupancy grids; those that do are looking only for specific types of “badness”—track boundaries (vision sensors) and obstacles (sonar sensor). In the future, we will probably allow sensors to use weights of badness instead of a single value, so that the arbitrator can better choose between two “not-so-good” paths. Sensors send suggestions to the arbitrator via UDP as fast as they can, at a specified rate or on demand. Suggestions are not acknowledged by the arbitrator and can get dropped if the network gets bogged down; this protects the arbitrator from sensors that send suggestions too fast. Time stamps on the suggestions let the arbitrator know how old each suggestion is.
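The dimensions above (6 m wide, 3 m deep, ten points per meter, car at bottom center) pin down the grid completely; only the coordinate convention and function names below are assumptions for illustration.

```python
# 6 m wide x 3 m deep at 10 grid points per meter -> 60 x 30 cells.
WIDTH_M, DEPTH_M, RES = 6.0, 3.0, 10
COLS, ROWS = int(WIDTH_M * RES), int(DEPTH_M * RES)

UNKNOWN, GOOD, BAD = 0, 1, 2  # the three values a grid point may hold

def make_grid():
    """Fresh occupancy grid, everything unknown."""
    return [[UNKNOWN] * COLS for _ in range(ROWS)]

def mark(grid, x_m, y_m, value):
    """Mark a world point in the robot frame (assumed convention:
    x in meters to the right of the car, y in meters ahead of it,
    car centered at the bottom edge of the grid)."""
    col = int((x_m + WIDTH_M / 2) * RES)
    row = int(y_m * RES)
    if 0 <= col < COLS and 0 <= row < ROWS:  # ignore points off the grid
        grid[row][col] = value
```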
The user can configure and debug sensors and the arbitrator from nice menus displayed using curses library routines. The arbitrator itself may wish to configure the sensors; for example, it may wish to alter the suggestion rate for a particular sensor or to change the type of filtering done by a sensor.
After spawning the sensors, the arbitrator waits for each sensor to connect to it and then gathers configuration information from all of the sensors for later use and display. Finally, it falls into a loop. Within the loop, the arbitrator selects from all of the sensor file descriptors and standard input to gather suggestions from the sensors and commands from the user. Using the suggestions, the arbitrator makes a navigation decision and actuates.
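One pass of that select loop could look like the sketch below. It assumes plain file descriptors carrying one suggestion datagram each; the real arbitrator also watches standard input for user commands and then makes its navigation decision from what it read.

```python
import os
import select

def gather_once(fds, timeout=0.1):
    """One pass of the arbitrator's loop: select() over all sensor
    file descriptors, then read one suggestion from each descriptor
    that is ready. Returns a map of fd -> raw suggestion bytes."""
    readable, _, _ = select.select(fds, [], [], timeout)
    return {fd: os.read(fd, 4096) for fd in readable}
```

A pipe stands in nicely for a sensor connection when trying this out: write a suggestion into one end and `gather_once` picks it up from the other.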