Linux Expo 1999
Red Hat proved once again that they can put on a good show for the Linux community. Bigger and better than ever, Linux Expo doubled in size again and attracted top speakers such as Dr. Peter Braam of Carnegie Mellon University and Dr. Theodore Ts'o of MIT, who now works for VA Linux Systems. Big business was there too, represented by such companies as IBM, Hewlett Packard and SGI (formerly Silicon Graphics), as well as the usual Linux vendors, such as SuSE, Caldera, VA Linux Systems, Enhanced Software Technologies, Cygnus and many others.
I talked to Dave McAllister of SGI about their involvement in Linux and Open Source and found SGI to be much more committed to this community than I would have suspected. They released their most robust and scalable file system, XFS, to the community in an effort to help Linux reach what McAllister called “Enterprise level”. This move was certainly applauded by everyone I talked to at the show.
One of the most exciting announcements before the show was O'Reilly and HP's sourceXchange.com web site. I attended a discussion about this site, which is designed to get needed open source software written by matching sponsors, who pay for the code they need, with developers, who write it and then release it to the public. This is an idea whose time has come, as another group has started a web site for the same purpose: CoSource.com, from a couple of independents, Bernie Thompson and Norman Jacobowitz, who write for LJ. It's obvious that Bernie, Norman and O'Reilly are committed to the community and wish to drive open source development, but I was a bit suspicious of HP. When I asked about HP's motives for involvement in this project, Wayne Caccamo told me HP felt the project was inevitable, wanted to take a leadership role and wanted to “ingratiate” themselves with the Open Source community; talk about honesty! After that remark, I was ready to believe anything. I'm looking forward to seeing how both these sites work out. (For more on this subject, see Bernie Thompson's article in this issue, “Market Making in the Bazaar”.)
There were the usual fun things to do, such as a chili pepper sauce contest and a paintball contest that once more pitted vi against Emacs. Again vi won, proving either that it is the best editor available or that its advocates are the best shots. More than one group bought blocks of tickets to a local showing of Star Wars: The Phantom Menace. The ALS (Atlanta Linux Showcase) group invited me to go along with them. Fun movie.
I especially enjoyed my booth time talking to current and future readers and authors. In particular, it was a pleasure to finally meet Alan Cox and Telsa Gwynne.
Alpha Processor, Inc., a Samsung company, announced they were joining Linux International; Richard Payne and Guy Ludden presented a check to Jon “maddog” Hall. I got the picture and then took several others of Jon, including one with a people-size Tux, who was roaming the show floor.
Compared to LinuxWorld, Linux Expo came across as more polished, more “we've done this before successfully”. LinuxWorld had a lot of glitz, with electricity and energy filling the air, that just wasn't there at Linux Expo. I think this mostly had to do with the fact that it wasn't the first time for these guys; the experience showed. The speakers all liked Linux Expo better, as the Expo paid their travel expenses while LinuxWorld left them to get there on their own. LinuxWorld had more people and more vendors, but it also had the advantage of being in Silicon Valley.
Evan Leibovitch described the Expo as “the show where Linux lost its innocence” due to two unpleasant situations that arose. One was Pacific HiTech being kicked out for passing out t-shirts without buying booth space. The other was Linuxcare's use of the Red Hat trademark, without permission, on a poster parodying a Palm Pilot ad. No matter which side you took on these incidents, the calling of lawyers certainly signals the “end of innocence”.
The show was definitely a success. I talked to Bob Young on the last day, and he certainly seemed pleased with how it had turned out. See my interview with Bob in this issue. For vendor announcements, see “UpFront”.