What Price High-Performance I/O?
If you have been around the PC industry for a while, you most likely remember that the ISA architecture (or AT bus, as it was initially called) was recognized as a poor answer for high-performance I/O because of its speed limitations and rather weak interrupt structure. Enter IBM with its Micro Channel Architecture. All you had to do was pay IBM money, and you could use the design. Some manufacturers bought in, but it soon flopped, because what the industry really wanted was an open standard. In fact, the AT bus became ISA (Industry Standard Architecture) in response to IBM's proprietary approach.
However, ISA still wasn't the answer. The first issue, bus speed, was addressed with the PCI bus, which, being wider and faster, has satisfied the bandwidth requirements of I/O-hungry systems. But as the speed of everything has increased, so has the number of interrupts that must be dealt with.
The best solution is to have intelligent peripherals that don't have to interrupt the CPU as often in order to carry out their tasks. The most common example today is the buffered UART (universal asynchronous receiver/transmitter) found on serial cards. Another is the intelligent serial card, which accepts hundreds or thousands of characters in a single DMA (direct memory access) transfer and then takes care of the character-by-character transfer itself.
Each intelligent I/O card requires a driver for system communications. More accurately, a driver is needed for each operating system that wishes to talk to the card; thus, manufacturers must invest in support of each system. This investment means that most vendors tend to support only the most popular operating systems.
It was recently (meaning a few hours ago) called to my attention that there is an organization called the I2O Special Interest Group that is addressing this problem. Here are a few quotes from their web page:
The objective is to provide an open, standards-based approach ... and provide a framework for the rapid development of a new generation of portable, intelligent I/O solutions.
The I2O model provides an ideal environment for creating drivers that are portable across multiple operating systems and host platforms.
The I2O model is intended to provide a unifying approach to device driver design...
They also pose the question, “Do you see the Unix vendors supporting I2O in their future releases?” and answer it by stating, “SCO is a SIG member and has indicated it will support I2O in future releases of its OS. The SIG welcomes all other Unix vendors to join as well.”
All of this rhetoric sounds like we are all friends, and we will all interoperate happily ever after. However, there does seem to be a catch.
In one of those nice answers about compatibility, we see our first clue that there is a potential problem: “The SIG is set up so that only members and their licensees can design with the specification,....” Even more to the point: “The I2O Specification...is an agreement about the intellectual content and the terms and conditions for how the Specification can be used. Therefore, to make the Specification available to non-members a non-disclosure agreement must be executed.”
I have attempted to contact them for clarification but, so far, they have not returned either my phone call or e-mail.
I'm sure all Linux folks are familiar with the non-disclosure problem. Non-disclosure is why Diamond video boards weren't supported until Diamond changed their mind, and why Linux for the Mac didn't exist for years. To put it another way, you can't comply with both the GPL and a non-disclosure agreement.
We need to fight, not let someone who claims to be creating an open standard get away with a standard that is "open to anyone except free software". The first organization to take action is Software in the Public Interest, the same folks who bring us Debian Linux. I have just received a draft of a proposal for an Open Hardware Certification Program, in which vendors will make a set of promises about the availability of documentation for programming the device-driver interface of the specific hardware device.
The idea here is that while the program will not guarantee a device driver is available for a specific device and operating system, it does guarantee that anyone who wants to write one can get the information necessary to do so.
I am sure there will be more on this topic on the Usenet newsgroups, on the web and in the press. If you are a vendor, contact SPI (http://www.debian.org/ will point you in the right direction) for more information on their certification program. If you are a potentially unhappy consumer, check out http://www.io2sig.com/ and let the SIG members know what you think about the exclusion of free software from their open standard and about SPI's effort for real open hardware. Finally, watch the LJ web pages for news on what is happening in this important battle.
Phil Hughes is the publisher of Linux Journal. He can be reached via e-mail at firstname.lastname@example.org.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful ones, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality is why UNIX system administrators always seem to have the right tool for the job.
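As a concrete illustration of stringing tools together, the find-plus-grep combination just described can be written in a couple of lines of shell. The directory layout and the search string "ERROR" below are made-up placeholders standing in for the /home tree and the entry you care about:

```shell
# Build a tiny demo tree standing in for /home in the example above.
mkdir -p /tmp/demo/logs
printf 'startup ok\nERROR: disk full\n' > /tmp/demo/logs/app.log
printf 'all quiet\n' > /tmp/demo/logs/quiet.log

# find locates every regular file ending in .log; xargs hands the
# filenames to grep, and grep -l prints only the names of files
# that actually contain a match.
find /tmp/demo -type f -name '*.log' | xargs grep -l 'ERROR'
# prints /tmp/demo/logs/app.log
```

Swapping /tmp/demo for /home (and running with suitable permissions) gives exactly the tool the paragraph describes.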
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
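For reference, the kind of work cron handles well is a simple time-based job. A minimal crontab sketch might look like the following; the script path is hypothetical, and the five leading fields are minute, hour, day of month, month, and day of week:

```shell
# m   h   dom mon dow   command
# Run a nightly maintenance script at 2:30 a.m. every day.
30    2   *   *   *     /usr/local/bin/rotate-logs.sh
```

Where cron starts to fall short, as the webinar discusses, is in anything beyond fixed schedules: job dependencies, failure handling, and cross-machine coordination.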
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Google's SwiftShader Released
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Interview with Patrick Volkerding
- Managing Linux Using Puppet
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Non-Linux FOSS: Caffeine!
- SuperTuxKart 0.9.2 Released
- Tech Tip: Really Simple HTTP Server with Python
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide