Linux and the Next Generation Internet
It would be inappropriate, particularly in any discussion of open-source development, to neglect related efforts and efforts that have contributed to the system described in this article. Two Linux-based Diffserv projects we find especially interesting and mature are underway at the University of Karlsruhe (see Resources 13) and the University of Kansas (see Resources 14). Many of the conclusions and insights made available through these projects correspond with our own observations, and both are excellent sources of further information on Diffserv and differentiated services under Linux. We highly recommend them to the interested reader.
In particular, we want to call attention to the differences between the demonstration environment described in this article and the DiffSpec tool under construction at the University of Kansas. The Diffserv approach to resource allocation for each class of service very explicitly requires external intervention in the form of what has been called a “bandwidth broker” (BB). The DiffSpec tool entails a much grander system concept than the demonstration environment discussed here. For example, DiffSpec includes an API for managing queue/class/filter combinations, CORBA-based system calls for automated configuration of DS parameters, and a general web-based user interface to the Linux traffic-control capabilities.
In contrast, for the purpose of our demonstration environment, we unwittingly followed a “separation of powers” philosophy well known to students of political science. We carefully segregated the “service-level definition” (or “legislative branch”) functions of the BB into a manually crafted, static database of allowable configurations. At the same time, we placed the “network instantiation” (or “executive branch”) functions of the BB into a cleverly distributed arrangement of ipchains rules. In this fashion, we are able to reconfigure the entire network instantly to one of several predefined “looks”, either through an operator's input or by automated means. This approach may be scalable in some contexts, and it may provide for convenient “governance” of network resources, but it was not specifically intended for mass consumption.
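The article does not reproduce its actual rule sets, but as a rough illustration, the “executive branch” could consist of ipchains rules (Linux 2.2-era) that rewrite the TOS byte of outbound traffic so a Diffserv edge router can classify it. The addresses, protocols and TOS values below are hypothetical placeholders, not the project's real configuration:

```shell
# Hypothetical "look" 1: traffic from a latency-sensitive host
# gets the Minimize-Delay TOS bit (-t ANDmask XORmask).
ipchains -A output -p udp -s 10.0.1.5 -t 0x01 0x10

# Hypothetical "look" 2: the bulk-transfer subnet gets
# Maximize-Throughput instead.
ipchains -A output -p tcp -s 10.0.2.0/24 -t 0x01 0x08

# Switching the network to another predefined "look" is then
# just a matter of flushing and reloading the output chain.
ipchains -F output
```

Because each “look” is a short, self-contained rule set, an operator (or a script) can swap the entire network policy with a flush and a reload.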
Additionally, the structure of the Karlsruhe Diffserv implementation seems to be somewhat different from the implementation maintained by Werner Almesberger at the Swiss Federal Institute of Technology in Lausanne. For our project, we used Almesberger's distribution, so we don't have specific experience with the Karlsruhe distribution or the differences between the two implementations.
In reviewing the architecture and explicit results provided by the K.I.D.S. project (see Resources 4), we agree with their conclusions regarding the strengths and weaknesses of Diffserv. In particular, through the use of our “before-after” scenario for configuring a Diffserv domain, we have corroborated these factors first-hand in the context of our “real applications”.
We also agree with Metz, who states (see Resources 1) that “In the long run it will most likely be a combination (of technologies) that will enable the Internet to offer QoS.” When the long run materializes, we're confident that Linux will be a part of the solution, because QoS in the Internet is definitely “where you want to go tomorrow”.
Michael Stricklen (email@example.com) is a research assistant at the UAB Center for Telecommunications Education and Research. He enjoys tinkering with Linux projects and networking gear. When not in front of a computer, Michael would prefer to be riding a snowboard.
Bob Cummings (firstname.lastname@example.org) is a research assistant at the UAB Center for Telecommunications Education and Research. In his spare time, he likes to spend time with friends, play RPGs and hone his Perl skills.
Stan McClellan (email@example.com) is a Linux enthusiast who happens to be employed in the UAB Dept. of Electrical and Computer Engineering. When he isn't “playing Professor” by hustling research money or teaching classes, he can be found wandering around, pestering students for “interesting results”.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
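The find-plus-grep combination mentioned above can be sketched in one line; the search string "ERROR" and the starting directory are hypothetical examples:

```shell
# Search every .log file under a directory tree for a pattern.
# The directory and the pattern "ERROR" are placeholders.
logdir=/home
find "$logdir" -type f -name '*.log' -exec grep -H 'ERROR' {} +
```

`grep -H` prefixes each match with its file name, and `-exec ... +` batches the files into as few grep invocations as possible, which is noticeably faster than spawning one grep per file.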
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
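For readers who haven't worked with cron directly, a classic crontab entry is just five time fields followed by a command; the scripts below are hypothetical examples:

```shell
# crontab format: minute hour day-of-month month day-of-week command
0 2 * * *      /usr/local/bin/nightly-backup.sh   # every day at 02:00
*/5 * * * 1-5  /usr/local/bin/poll-queue.sh       # every 5 min, Mon-Fri
```

This simplicity is cron's strength, and also its limit: there is no built-in notion of job dependencies, retries or cross-machine coordination, which is precisely the gap the webinar examines.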
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It considers neither the total cost of ownership nor the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide