Impressions of LinuxWorld August 2004
The theme of last week's LinuxWorld Conference and Expo in San Francisco was Linux for the Enterprise, a fact made visually obvious the moment one stepped into the exhibition hall at the Moscone Center. Full of huge booths and more than 150 vendors, LinuxWorld attracted all the big names in computing. Buzz about Linux in a meaningful business sense was everywhere.
In the weeks leading up to the show, more than two dozen public relations firms approached me to schedule meetings. Previously, three to six PR firms would contact me prior to the show, so the change was noticeable this year. I made two days available during which I would meet with vendors, and both days were booked completely. In this report, I discuss what I learned while doing some of these interviews.
So you know where I am coming from, I offer this caveat: I have no problem with either open-source or closed-source projects that use Linux. The market will decide which it prefers. Also, being practical, I know how difficult it is to create a business model based solely on writing open-source software. Many of the most successful companies in the Linux space are packagers, gathering open-source tools into user-friendly packages. Most of us are more than happy to pay for this packaging--simply look at the major distributions. Also, any successful business must get its message out as quickly and efficiently as possible. PR firms and interviews help do that. The trick for interviewers is to maintain balance and objectivity when speaking to these folks.
On the second day of the conference, I was able to attend two of the technical presentations. The first was a panel led by Andrew Morton, the 2.6 kernel maintainer. Other panel members represented various commercial distributions. First off, I was surprised by the number of attendees at this presentation. I had expected only 20-50 people to attend this highly specialized session, but more than 200 people showed up.
The panel revealed a number of interesting facts: about 1,000 contributors worked on 2.6, offering more than 38,000 changes, and about 20 people produced the vast majority of those changes. The Linux Test Project developed one of the major testing tools. Security patches typically are sent out three or four days after the first report is received. Quality assurance is different for Linux; updates and patches are put out as soon as possible, resulting in "rolling releases" that offer quicker fixes and error detection.
Currently, there is no C++ in the kernel. An audience member asked if Linux is in danger of fragmenting the way the *nixes did. The consensus amongst the panel members was that economic incentives would prevent it. The cost of development and maintenance of proprietary kernel code is prohibitive.
The second session I attended was led by Calvin Austin, Sun's Specification Lead for the J2SE 1.5 release of Java. Again, I was surprised by the attendance. Given Java's popularity, I expected 150-200 people, but only about 60 attended. Calvin called this release the most significant update since 1998. It incorporates over 100 new language features and faster performance. In light of the release of machines such as those using the AMD Opteron, Sun has implemented full 64-bit code that permits the 32-bit JVM to work perfectly in the 64-bit version of Linux. I don't recall any performance figures, but he did discuss some of the major language enhancements. These include metadata facilities and a simplified generics syntax, very much like that used in C++. You can find more information on Sun's Web site (see Resources).
I met with an eclectic mix of vendors, some of which I discuss below. One constant, however, was evident in what most of them were trying to do: abstract away the limitations of the hardware and software layers. For example, VMware was trying to abstract away the limits of the machine you are running on; AMD was trying to eliminate the difference between 32-bit and 64-bit computing; Veritas was trying to abstract away the differences in filesystems; Trolltech was abstracting away differences in GUIs; and PyX was trying to eliminate the differences in dealing with local and remote filesystem devices.
In meeting with VMware, the most interesting idea we discussed was VMotion, the ability to set up groups of virtual servers and migrate work from one virtual server to another transparently. This is particularly powerful when one of the machines goes down. At the moment, 95% of all VMware hosts are Windows machines. It seems we have some work to do here.
All of you probably are aware of the buzz surrounding the AMD Opteron. AMD's Pat Patia discussed the Opteron's bus concept, which uses the HyperTransport open standard to eliminate the bottleneck inherent in the North-Bridge architecture. Because of the bus architecture, AMD is looking to produce 8-way SMPs. With all this power, we started talking about heat. AMD has started shipping processors rated at 30 watts for the same performance as the older 55-watt processors. We aren't there yet, but we may be close to eliminating the need for fans.
As some of you know, I do industry trend analysis. For this reason, I met with Scott Melland, the CEO of Dice. I have used Dice for years in much of my analysis. Dice is the only large-scale job site that provides exact figures about job searches and does not take shortcuts with the usual "X+ openings available", where X is some arbitrary number. Here are some interesting facts from Dice that you might like to know: two-thirds of people looking at Dice are still employed; unlike job searches for other OSes, Linux certification currently is not required but experience is; and the top five skills in demand are expertise in Java, C/C++, Oracle, Unix and SAP. The Dice Web site has a lot of other facts that you might find interesting (see Resources).
Another interesting company is Black Duck Consulting, which specializes in risk mitigation. For us techies, that means that the company examines large volumes of code for commonality with open-source and proprietary code. It then compares licenses covering these segments of code and tells the firm what is compatible and what is not. From there it is a business decision. Doug Levin, the company CEO, and I discussed valuation for a company that uses open source against one that produces its own code. The factors that come into play are too many to cover here, but it is interesting to try answering the question, "Is a program more or less valuable if it uses third-party software, whether it is open source or proprietary?" Unfortunately, not all companies would agree with my assessment.
Dan Frye of IBM had some interesting remarks, such as IBM now considers Linux to be a Tier 1 OS. All its server brands can run Linux. IBM also feels that the market wants Linux because of its reliability, performance, cost and open-source technology. When benchmarking its Power line of processors, IBM found that Linux achieved the best throughput and scaled the best of all the OSes tested. Dan also contends that programmers working for corporations interested in the success of Linux develop the majority of code contributed to Linux. If true, this is quite a change from three years ago. I also viewed some of IBM's hardware. The 1U systems are really elegant; no dangling wires here.
Ranajit Nevatia from Veritas also had some interesting things to say. We discussed the Veritas File System, which abstracts the filesystem across hardware platforms, and the Veritas Logical Volume Manager, which is shareable for clustering. At my last job with a competitor of Veritas, we used some of its software for dealing with snapshot technology, so I know its software is pervasive in the storage management field. Unfortunately, it is closed source. However, Linux has had an impact on Veritas, and the company has had to simplify its pricing policies and lower its prices. Given the work that is going on with Linux filesystems, I wonder how long VFS and its LVM will remain closed?
I use Qt with Python for a number of projects, so I was pleased to be able to meet with the founders of Trolltech, Eirik Chambe-Eng and Haavard Nord. Qt is an excellent platform-independent GUI development environment and runtime library. It is the basis for KDE and many other applications. Of Qt's more than 4,000 customers, 72% now are targeting Linux. Eirik and Haavard see the next big application area as embedded Linux for phones. Because Qt is Unicode compliant, it can be used in China, and China has become the main developer of phones. Even Motorola is doing most of its phone work there.
Astaro is a locked-down secure Linux distribution with a built-in firewall, VPN, virus protection, intrusion detection, spam protection and surf protection. It is a neat install-and-forget package to set up as a frontend for an enterprise's network. Although any competent Linux administrator can do this on his or her own, packaging is the secret here, and many companies happily pay for an easy-to-administer security frontend.
The most interesting visual demonstration I saw at the show was a Sony PlayStation 2 running Linux and PyX iSCSI drivers and playing two different DVD movies simultaneously. No, I don't normally watch two movies at the same time, but the smoothness of the video experience was terrific. PyX uses the iSCSI protocol and proprietary drivers with full error recovery for block-level transport to accomplish this trick. The advantage of iSCSI is the ability to treat remote devices as if they were local. Normally, I would not review a proprietary solution, but its input and output streams are open-source standards. The main use of this technology is for disaster recovery. Using iSCSI, it is possible to write data to both local and remote devices in parallel, all transparently.
ActiveState has a really nice IDE, Komodo, that supports dynamic languages, including Python, Perl, PHP, Tcl and XSLT. It includes inspectors, project management, a programming editor and a powerful debugger. Interestingly, Komodo is built on top of the Mozilla framework; I did not know that Mozilla's framework was that sophisticated. ActiveState also is a terrific resource: whenever I Google a question about Python, more often than not the answer turns up on the ActiveState Web site.
Finally, a number of companies displaying at the show currently offer, or soon will offer, large-scale migration tools for transitioning everything from Windows to Linux, including registry settings and file format conversions. At the moment, there does not seem to be a real leader in this field, but it has tremendous potential for capturing significant market share for Linux.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each of them for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
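The find-plus-grep combination described above can be sketched in a few lines of shell. The directory layout and the "ERROR" search string here are illustrative, not from the article; a temporary sandbox stands in for /home so the example is reproducible:

```shell
# Build a small, self-contained sandbox with two user directories.
dir=$(mktemp -d)
mkdir -p "$dir/alice" "$dir/bob"
printf 'INFO start\nERROR disk full\n' > "$dir/alice/app.log"
printf 'INFO start\nINFO done\n'       > "$dir/bob/app.log"

# The classic tool combination: find locates every .log file,
# and grep -l prints only the files that contain the pattern.
find "$dir" -name '*.log' -type f -exec grep -l 'ERROR' {} +
```

Swapping grep -l for plain grep would print the matching lines themselves rather than just the filenames.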
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
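For readers who have not looked at cron in a while, a traditional crontab entry is a single line of five time fields plus a command. The script and log paths below are hypothetical, purely for illustration:

```shell
# Fields: minute hour day-of-month month day-of-week command
# Run a nightly cleanup job at 02:30, appending all output to a log.
30 2 * * * /usr/local/bin/nightly-cleanup.sh >> /var/log/cleanup.log 2>&1
```

The webinar's question is whether entries like this still scale once you need dependencies between jobs, centralized monitoring and failure handling across many machines.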
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide