To test the vehicle extensively under different traffic conditions, road environments and weather, a 2000km journey was undertaken from June 1 through June 6, 1998. During this test, ARGO drove autonomously along the Italian highway network, passing through flat areas and hilly regions, including viaducts and tunnels. The Italian road network is particularly suited to such an extensive test, since it is characterized by quickly varying road scenarios, changing weather conditions and, generally, a fair amount of traffic. The tour took place on highways and freeways, but the system also proved to work on sufficiently structured rural roads with no intersections.
During the journey, in addition to the normal tasks of data acquisition and processing for automatic driving, the system logged the most significant data, such as speed, steering-wheel position, lane changes, and user interventions and commands, and it dumped the whole status of the system (images included) whenever it had difficulty detecting the road lane reliably.
This data was processed off-line after the end of the tour to compute overall system performance figures, such as the percentage of automatic driving, and to analyze unexpected situations. The ability to reprocess the same images, starting from a given system status, made it possible to reproduce the conditions in which a fault was detected and to find a way of solving it. At the end of the tour, the system log contained more than 1200MB of raw data, while during the whole tour the system processed about 1,500,000 images (each 768 x 288 pixels), totalling about 330GB of input data.
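Those figures are easy to check. Assuming one byte per pixel for 8-bit grayscale images (an assumption, though it is the only one consistent with the stated total), a couple of lines of Python reproduce the input-data volume:

```python
# Sanity check on the tour's input-data figure: about 1,500,000
# frames at 768 x 288 pixels, assuming one byte per grayscale pixel.
frames = 1_500_000
width, height = 768, 288

total_bytes = frames * width * height
print(f"{total_bytes / 1e9:.0f} GB")   # prints "332 GB", matching "about 330GB"
```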
During the tour, the ARGO vehicle broadcast a live video stream to the Internet: two GSM cellular modems connected the vehicle to the Vision Laboratory of the Dipartimento di Ingegneria dell'Informazione in Parma and were used to transfer both up-to-date news on the test's progression and images acquired by a camera installed in the driving cabin, demonstrating the automatic driving. Attesting to the high level of interest among the scientific community, the mass media and the general public, the web site http://MilleMiglia.CE.UniPR.IT/ was visited more than 350,000 times during the tour, and more than 3000MB of information was transferred, with a peak of 16,000 visits per hour during the first day of the tour.
The main problem experienced during the tour concerned image acquisition. One aim of the project is the development of a sufficiently low-cost system to ease its integration into a large number of vehicles, so the use of low-cost acquisition devices was a clear starting point. In particular, videophone cameras (small-sized sensors at an average cost of $100 US each) were installed. Although these cameras have high sensitivity even in low-light conditions (at night), a sudden change in the illumination of the scene (for example, at the entrance to or exit from a tunnel) causes a degradation in image quality; they were designed for applications characterized by constant illumination, such as video-telephony. Because the cameras' automatic gain control is slow, at the exit from a tunnel the acquired images are completely saturated for a period of about 100 to 200ms, and their analysis becomes impossible.
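As an illustration of how such frames might be handled, the following Python sketch flags saturated images and holds the previous lane estimate until the gain control recovers; the threshold values and the hold-last-estimate policy are assumptions for illustration, not ARGO's actual recovery logic:

```python
import numpy as np

# Illustrative thresholds (assumptions, not ARGO's documented values).
SATURATION_LEVEL = 250     # 8-bit pixel value treated as "blown out"
SATURATED_FRACTION = 0.9   # fraction of blown-out pixels that flags a frame

def frame_is_saturated(frame: np.ndarray) -> bool:
    """True when most pixels are saturated, as just after a tunnel exit."""
    return float(np.mean(frame >= SATURATION_LEVEL)) > SATURATED_FRACTION

def track_lane(frame: np.ndarray, last_estimate, detect):
    """Run lane detection, but hold the previous estimate on saturated frames."""
    if frame_is_saturated(frame):
        return last_estimate    # coast through the ~100-200ms of saturation
    return detect(frame)        # 'detect' stands in for the real lane detector
```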
On the other hand, the design of the processing system proved to be appropriate for automatic driving of the vehicle. Moreover, current technology provides processing systems that are definitely more powerful than the one installed on ARGO: a commercial PC with a 200MHz Pentium processor and 32MB of memory. On such a system, enhanced by a video frame grabber able to acquire two stereo images simultaneously (with 768x576 pixel resolution), the GOLD system processes up to 25 pairs of stereo frames per second and provides the control signals for autonomous steering every 40ms. (This is equivalent to roughly one refinement of the steering-wheel position for every meter when the vehicle is driving at 100km/h.) Obviously, the processing speed influences the maximum safe vehicle speed: the higher the processing speed, the higher the maximum vehicle speed.
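The parenthetical claim follows directly from the stated figures, as a few lines of Python confirm:

```python
# Verify the claim: a control signal every 40ms at 100km/h is roughly
# one steering refinement per meter of travel.
speed_kmh = 100.0
control_period_s = 0.040            # 25 stereo pairs per second

speed_ms = speed_kmh * 1000 / 3600  # about 27.8 m/s
print(f"{speed_ms * control_period_s:.2f} m per update")  # prints "1.11 m per update"
```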
Different weather conditions, and in particular different light conditions, demonstrated the robustness of both the approach and the image processing algorithms. In fact, the system was always able to extract the information needed for the navigation task even in critical light conditions: with the sun in front of the cameras, high or low on the horizon; during the evening as well as during the day; with high or low contrast. At night, the system's behavior improves thanks to the absence of sunlight reflections and shadows, while the area of interest is constantly illuminated by the vehicle headlights.
Finally, the system proved to be surprisingly robust despite the high temperatures measured during the tour. In some cases, the external temperature reached 35 degrees Celsius, and the system continued to work reliably even with no air conditioning.