- LJ Index, September 2008
- Linux on the Desktop? Who Cares?
- New LinuxJournal.com Mobile
- Eclipse Ganymede
- New Top-Level Domains on the Way
- What They're Using: Christian Einfeldt, Producer, the Digital Tipping Point
- Adios Windows 9x
- diff -u: What's New in Kernel Development
- They Said It
LJ Index, September 2008
1. Number of directories in kernel 2.6.26: 1,417
2. Number of files in kernel 2.6.26: 23,810
3. Number of lines in kernel 2.6.26: 9,257,383
4. Number of directories in gcc 4.4: 3,563
5. Number of files in gcc 4.4: 58,264
6. Number of lines in gcc 4.4: 10,187,740
7. Number of directories in KDE 4.0: 7,515
8. Number of files in KDE 4.0: 100,688
9. Number of lines in KDE 4.0: 25,325,252
10. Number of directories in GNOME 2.23: 573
11. Number of files in GNOME 2.23: 8,278
12. Number of lines in GNOME 2.23: 4,780,168
13. Number of directories in X Window System 7.3: 1,023
14. Number of files in X Window System 7.3: 14,976
15. Number of lines in X Window System 7.3: 21,674,310
16. Number of directories in Eclipse 3.4: 297,500
17. Number of files in Eclipse 3.4: 912,309
18. Number of lines in Eclipse 3.4: 94,187,895
19. Number of dollars in the US National Debt: 9,388,297,685,583
20. Dollars earned per line by open-source developers if the US Debt had been used to fund these projects: 56,756
Sources: 1–18: wc -l
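As a sanity check, item 20 follows directly from the numbers above: divide the national debt (item 19) by the combined line counts of the six projects (items 3, 6, 9, 12, 15 and 18). The short script below verifies the arithmetic; the `count_tree` helper is only a plausible reconstruction of how such counts could be gathered, since the editors tell us only that wc -l was involved.

```python
import os

# Items 3, 6, 9, 12, 15 and 18: lines of code per project,
# copied verbatim from the index above.
line_counts = {
    "kernel": 9_257_383,
    "gcc": 10_187_740,
    "KDE": 25_325_252,
    "GNOME": 4_780_168,
    "X Window System": 21_674_310,
    "Eclipse": 94_187_895,
}

US_DEBT = 9_388_297_685_583  # item 19, in dollars

total_lines = sum(line_counts.values())
dollars_per_line = US_DEBT // total_lines

print(total_lines)       # 165412748
print(dollars_per_line)  # 56756 -- matches item 20


def count_tree(root):
    """One plausible way to reproduce the per-project counts:
    walk an unpacked source tree, tallying directories and files,
    and counting newlines the way wc -l does."""
    dirs = files = lines = 0
    for dirpath, dirnames, filenames in os.walk(root):
        dirs += len(dirnames)
        files += len(filenames)
        for name in filenames:
            try:
                with open(os.path.join(dirpath, name), "rb") as f:
                    lines += f.read().count(b"\n")
            except OSError:
                pass  # skip unreadable files, as a shell pipeline would
    return dirs, files, lines
```

Run `count_tree("/path/to/linux-2.6.26")` against an unpacked tarball to compare against items 1–3.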
Linux on the Desktop? Who Cares?
Every so often, you read on Slashdot, Digg or some other techie news site that Linux is finally ready for the desktop. It's finally to the point that any end user could sit down at a computer and happily compute away. The applications are sufficiently sanitized and Windows-like that even the average Joe can use them. I think, however, that it's fair to say most of our previous conceptions of “ready for the desktop” are moot points.
The only folks who are still up in arms over whether Linux ever will be ready are the same folks who have been talking about it for years. New users really don't care. I don't say that arbitrarily; I say that because I work in a school, and I see the current generation of computer users. They don't care if they use a Mac, a PC or a Linux machine. Most don't even notice the difference. In an unofficial, random sampling of college and high-school students, here's what they need from a computer:
Firefox (really, by name—cool, eh?).
A way to play music (iTunes often is mentioned, but not insisted upon).
Microsoft Office.
And, that's it. The last point bummed me out a bit, so I asked more probing questions. It turns out that Microsoft Office has become the common name for an office suite—much like Kleenex became the name for facial tissue. For almost everyone I asked, OpenOffice.org or even Google Docs (in a pinch) is the same thing. In fact, some weren't really sure why I'd ask such a thing, because “aren't they all the same?”
Some people want a specific type of computer for tasks like video production or gaming, but they aren't the overwhelming majority anymore. Everyone wants or needs a computer now, and the general population doesn't seem to care much about what operating system it's running.
My suspicion is that Web 2.0 and mobile (smartphone) technology are doing more to help Linux than anything else in history. It's not because Linux is better at such things; it's because the world is moving to the Web. The vehicle to get there is becoming less and less important.
The good news is that now Linux finally can take over the world, and most people won't even notice!
New LinuxJournal.com Mobile
We are all very excited to let you know that LinuxJournal.com is now optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device.
We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it.
Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you.
Eclipse Ganymede
The latest version of Eclipse, version 3.4, aka Ganymede, should be available by the time you read this. If you've never looked at Eclipse and you work with multiple programming languages or multiple platforms, take some time to try Eclipse.
Be prepared. Eclipse is a large, complex tool, and you won't grok it if you invest only 15 minutes. In addition to being large and complex, Eclipse's roots are at IBM, and it's big in the Java world, so there's a bit of “Blue-Speak” and “Enterprise-Speak” to deal with at times (and, of course, XML).
Most IDEs come with built-in “support” for lots of programming languages, although for many of them, support means little more than colorizing your code. Eclipse is a bit different. It comes with built-in support for few languages, or none at all, depending on the version you download; support is provided via Eclipse plugins. And normally, that support means more than colorized code: you get something that understands your language. It can show you an outline of the functions and data in your code, help you refactor code, show where something is defined, and integrate with the language's debugger.
Eclipse is not without its annoyances. Perhaps the most annoying is that it's only an IDE and not a text editor. Of course it edits text, but it's not a general-purpose text editor. If you want to open a file that's not part of a project, it's a bit cumbersome. There's no filesystem browser, and the open dialog doesn't remember the directory that you used last time. And, if you don't have a plugin for the file type you open, you don't get any code colorizing. So, you often end up using Eclipse for your “projects” but then using another text editor to look at files that aren't part of your project.
If you develop only C++ applications for KDE on Linux, or only XXX applications for YYY on ZZZ, there might be a better IDE than Eclipse. However, if you use multiple languages and/or multiple systems, and you want to use only a single IDE, there's no better IDE than Eclipse. And, even if you use only one language on one system, Eclipse sets the bar pretty high.
New Top-Level Domains on the Way
In late June 2008, ICANN accepted a proposal to relax restrictions on the top-level domain namespace and, in the process, opened up the possibility for thousands of new domains.
Currently, there are only 21 top-level domains, such as .com, .org or .info, and around 240 active country-code domains, such as .us, .de and .uk. The proposed plan would allow any organization or person to apply for a customized top-level domain.
For example, New York City could operate the .nyc domain for addresses, such as brooklyn.nyc, penn-station.nyc or www.central-park.nyc. “It's a massive increase in the 'real estate' of the Internet”, said Dr Paul Twomey, President and CEO of ICANN. The .com registry is by far the most crowded at this point, with 71 million registered domains. For comparison, the second (.de) and third (.net) most popular registries have only 11.2 million and 10.6 million domains, respectively.
Before you rush to register your new top-level domain, you may want to check your bank account first. ICANN is expected to charge a minimum of $100,000 for the right to operate your own top-level domain, provided you qualify. Applicants must prove that they have a “business plan and technical capacity”. There is hope that this measure will help keep domain squatters out of the top-level namespace.
ICANN also has a process in place to deal with controversial submissions, as stated on icann.org: “Offensive names will be subject to an objection-based process based on public morality and order. This process will be conducted by an international arbitration body utilizing criteria drawing on provisions in a number of international treaties. ICANN will not be the decision maker on these objections.”
Applications for new names will be available in the second quarter of 2009.
Yes, it is true, ICANN HAZ MORE DOMAINS.
What They're Using: Christian Einfeldt, Producer, the Digital Tipping Point
I have six basic different uses for free, open-source software: 1) my law office practice; 2) managing and editing video for the Digital Tipping Point Project; 3) running a 25-seat Edubuntu lab at a public middle school as a volunteer in San Francisco; 4) placing ACCRC.org Linux computers in classrooms; 5) giving out ACCRC.org Ubuntu computers to friends, neighbors and the children who attend that school; and 6) supporting San Francisco's Tech Connect program by demonstrating Linux boxes at events for nonprofits and low-income individuals.
For my law practice, I use whatever cast-off computer I happen to have available at the moment from the other computers that I give out to students, friends or family. I generally can find a P4 computer with about 512MB of RAM, and I just copy my data from one machine to an external hard drive and then back onto the new machine. It really varies depending on the needs of the students, friends and neighbors I am helping. It's all part of a constant flow of equipment through my office. For a while, I was using OpenSUSE, but I switched to plain-old, brown GNOME Ubuntu, simply because most of the sysadmins who help me prefer plain-old brown.
For the Digital Tipping Point video project, I am using three machines. They all have the same “last name”, so to speak, as they are all members of the “Beast” family. The least muscular is the Server Beast (sb), with two single-core AMD processors at about 1GHz each, running on a Tyan 2460 motherboard and 750GB of storage on two internal hard drives (built by San Francisco Linux consultant Holden Aust). This machine has an added card with both USB 2.0 and IEEE 1394 ports. It's called the Server Beast because it was formerly a server owned by a law firm. I use it either for capturing video from my Sony tape deck, compressing the video, uploading the video to the Internet Archive's Digital Tipping Point Video Collection (www.archive.org/details/digitaltippingpoint) or for doing rough video editing with Kino, such as the 4:57 minute proof-of-concept video for the Digital Tipping Point Project (www.archive.org/details/proof_of_concept_four_mins.mpg).
Next up in the Beast family is the Render Beast (rb, also built by Holden Aust). It has a Gigabyte-brand GA-MA69GM-S2H motherboard with an AMD Athlon 64 4200+ chip and 4GB of RAM. This machine so far has been used mostly for the same basic things as the Server Beast, but it's much faster. It also has 1.5TB of internal hard drive storage.
Finally, the newest addition to the family is the TeraByte Beast (tbb, built by San Francisco Bay Area Linux consultant Daniel Gimpelevich and Holden Aust), with a Gigabyte-brand GA-MA790FX-DS5 motherboard with an AMD Athlon 64 4200+ chip and 4GB of RAM. This machine's claim to fame (at least at Beast family gatherings) is that it has 16 one-terabyte drives, for a total of 16TB. It's primarily used for storing video, although it occasionally is pressed into service to do the same things as its Beast brothers.
The public middle school's Edubuntu lab has three machines running various flavors of Ubuntu (built by ZaReason, Inc., a Berkeley-based computer retailer that sells only Linux-powered computers). There are two video-ready machines, each with an Intel Core 2 Duo E6300 and 2GB of RAM. Each machine also has a 500GB SATA drive. These are used by the students for watching video and listening to music, as well as practicing photo editing in The GIMP. The teachers have not yet put together a video-editing course, as they still are learning how to use video editing under Cinelerra and Kino. Let's keep our fingers crossed for next year.
ZaReason also built the Edubuntu thin-client server, which is a Pentium D 940 with 2GB of RAM and a 320GB hard drive. That machine supports 23 thin clients and is used by the students every day except Friday for on-line research and composing essays and sending them to their teachers via e-mail. The students also are taught to do presentations, which they deliver in front of their science and social studies classes. For their essays and presentations, they use Google Docs, which now has a presentation element (OpenOffice.org was choking the server). As a nice little bonus, Microsoft paid for all of the ZaReason boxes—a result of California's antitrust settlement (linux.slashdot.org/article.pl?sid=07/10/11/1446254).
With the help of Andrew Fife and Tom Belote of Untangle.com (a networking security company) and Linux expert Drew Hess, we will be turning the Edubuntu thin-client lab into an Edubuntu hybrid client network running the programs locally but serving up the files from the Zareason.com server. The thin clients were choking the server when audio or video was attempted, so we are shifting some of the work to the clients next year.
James Burgett, who runs the Alameda County Computer Resource Center (ACCRC.org) has been a really generous donor of equipment for the public middle school I am supporting with free, open-source software. James gave the school an initial donation of 30 HP P4 Ubuntu machines with 256MB of RAM. Some of those boxes were given to students, and some were used in the Edubuntu lab. Other boxes were placed in classrooms, where the students use the machines for the same purposes as in the lab.
James Burgett (also of Untangle.com) and Andrew Fife organized a massive installfest (lwn.net/Articles/273770) at the school and four other locations in the San Francisco Bay Area (untangle.com/index.php?option=com_content&task=view&id=393&Itemid=139) on March 1, 2008. That installfest allowed me to give neighbors and friends some of the machines I had scrounged for the school, by replacing those machines with newer machines from the ACCRC.org - Untangle installfest. Also, many of the new machines were given out to students, many of whom have no computers at home. ACCRC.org and Untangle.com are planning another massive installfest (untangle.com/index.php?option=com_content&task=view&id=351&Itemid=139) for LinuxWorld Expo in August 2008 in San Francisco.
Finally, the St. Anthony Foundation of San Francisco has loaned me seven Dell GX 150 machines with 256MB of RAM, which I use to support Kari Gray in her work with the City and County of San Francisco's Tech Connect Project to introduce low-income people to technology. A video of an event at St. Anthony's Foundation in San Francisco's skid row is available at (news.cnet.com/Tenderloin-Tech-Day/1606-2_3-6223419.html?part=rss&tag=2547-1_3-0-20&subj=news).
Adios Windows 9x
The upcoming release of Cygwin version 1.7 will be dropping support for Windows 9x (Windows 95, Windows 98 and Windows Me). If you're lucky enough never to have to use Windows, Cygwin probably seems like a waste of effort. But, if you're not so lucky, Cygwin is what keeps you sane.
Cygwin is a Linux-like environment that runs on Windows. It provides you with a command-line environment with most of the tools you've come to know and love using Linux. It even provides a number of Linux dæmons that can run as Windows' services, most notably an SSH dæmon.
There also is a port of the X Window System called Cygwin/X, but it appears to have been without a maintainer for a few years. Given that most of the major open-source GUI toolkits now support Windows, lack of the X Window System may not be a huge stumbling block.
Cygwin was started in 1995 by Steve Chamberlain, an engineer working for Cygnus (later absorbed by Red Hat). The earliest mailing list references on the Web are in early 1997, by which time it appears to have been in a functional state.
If you understand programming on Windows and on Linux, and you need some mental exercise, try to figure how you'd implement fork() on Windows. If you want to cheat, check out cygwin/fork.cc in the Cygwin CVS.
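To see why that exercise is hard, it helps to look at what fork() actually promises. The few lines below use Python's os.fork (a thin wrapper over the POSIX call, so they run on Linux but not on native Windows) to show that parent and child each get a private copy of the entire address space. This is only an illustration of the semantics Cygwin must emulate on top of CreateProcess, not a sketch of Cygwin's actual implementation, which lives in the fork.cc file mentioned above.

```python
import os

# After fork(), parent and child each own a private copy of the
# whole address space; Windows has no native equivalent.
counter = 100

pid = os.fork()
if pid == 0:
    # Child process: this change is invisible to the parent.
    counter += 1
    os._exit(counter)  # smuggle the child's value out via the exit status

# Parent process: reap the child and read its exit status.
_, status = os.waitpid(pid, 0)
child_counter = os.WEXITSTATUS(status)

print(counter)        # 100 -- the parent's copy is untouched
print(child_counter)  # 101 -- the child saw its own copy
```

On Windows, CreateProcess always starts a fresh address space from an executable image, so Cygwin has to recreate this copy-everything behavior by hand, which is exactly what makes the exercise instructive.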
We can all imagine a better world, one where our favorite operating system is ubiquitous, but imagine a world without Cygwin. If you have to use Windows now and then, that would be a scary world indeed.
Get it at cygwin.com.
diff -u: What's New in Kernel Development
There's an interesting new project, the Kernel Library Project, that aims to port Linux kernel features, such as the virtual filesystem, into a generic library that would work on any other operating system. Octavian Purdila, Stefania Costache and Lucian Adrian Grijincu have been working on this, and it could make it a lot easier to run Linux software anywhere else a user might want to run it. If you find this interesting, they're looking for volunteers to help out.
Mark Lord, Tejun Heo and a variety of others have been keeping Serial ATA good and solid. At the moment, they are focusing on fixing, or at least working around, all stability issues. In some cases, they've been making very small speed sacrifices in order to make sure that certain rare problems don't come up at all. At some point, they plan to revamp some of the code, in order to solve the problems and improve speed, but that will require more invasive changes. For the moment, they simply want to make sure that absolutely nothing can go wrong for users. Kudos to them for keeping up that discipline. As everyone knows, it's much more fun to throw caution to the wind and just build lots of new features.
Believe it or not, there still are plenty of people using 2.4 in the world. I'm sure they all wish they could upgrade to 2.6, and the kernel developers wish that too, but undoubtedly, there are reasons why their entire corporate infrastructure and all their products would break if they upgraded to 2.6. And for those users, Willy Tarreau has just come out with a new 2.4 release, which includes a small number of key security fixes. Willy encourages all 2.4 users to upgrade to it.
David Woodhouse and Paul Gortmaker now are officially in charge of embedded systems. The idea of having a maintainer for a general kernel concept like embedded systems is fairly new, and it creates some ambiguity for people submitting patches. Do they submit patches to the maintainer of the specific hardware driver or to the embedded system maintainers? In practice, it's likely that this won't be a real concern, and folks will get used to cc-ing whomever they should on their e-mail messages.
Another potential problem with having an overarching embedded system maintainer is that such a person might become hypnotized by the idea of reducing size at any cost, as Andi Kleen has pointed out. But, David has reassured him and everyone else, that size reduction is only one part of supporting embedded devices, and that the new maintainers plan to keep a broad outlook, making sure their changes are good for everyone (or at least not harmful to larger systems or to the kernel sources themselves).
One of David and Paul's main hopes (and Andrew Morton's as well, since the whole thing was his idea to begin with) is that companies designing embedded devices will work with David and Paul to create a better dialogue between that class of companies and the kernel developers.
Adrian Bunk has submitted a patch to remove the final PCI OSS driver from the kernel. The Trident 4DWave/SIS 7018 PCI Audio Core has been on Adrian's hit list for a very long time, but Muli Ben-Yehuda always has resisted. Now that Muli has moved on to other projects, and an ALSA driver exists that works for the exact same hardware, Adrian's patience has paid off. OSS finally is fully out of the kernel.
UBIFS seems to be on a relatively fast track into the main kernel tree. The new Flash filesystem is likely to go into Linux-Next for a while, and from there, it should feed relatively automatically into Linus Torvalds' tree at the next merge window. Artem Bityutskiy set the wheels in motion with a formal request to Stephen Rothwell. Christoph Hellwig had a lot of feedback on the code for Artem, and it came out that NFS would be very difficult for UBIFS to support without significant code revisions. Artem was surprised to learn about that, and admitted that yes, probably the initial version of UBIFS in Linus' tree would not support NFS. This doesn't seem to bother anyone, and in any case, Artem already is working on some ideas to fix the problems around NFS support. It does seem as though UBIFS will soon be part of the official kernel releases.
Recently, there was a fairly significant effort to eliminate the BKL (Big Kernel Lock) by replacing it with semaphores. This is an excellent goal, with all kinds of speed implications for regular users, but unfortunately, the particular implementation had some speed problems of its own that led Linus Torvalds eventually to undo the change entirely. This fairly severe step was prompted partly by the speed issues of the semaphore solution and partly by the sense that there must be a better solution out there.
Everyone, including Linus, wants to get rid of the BKL. But, doing this is very hard. The BKL has various qualities that are difficult to implement in any of the available alternative locking methods, and it also has some subtleties that make it hard to determine whether a given alternate implementation is doing the right thing or not.
Ingo Molnar, therefore, has decided to cut through the morass, with a partial solution that will make the full solution much more manageable. He plans first of all to extract all the BKL code out of the core kernel and into an isolated part of the source tree, where it can one day be replaced entirely, without requiring any subtle changes to core code. Eventually, he hopes to push each occurrence of the BKL into the relevant subsystem code, where it could be replaced with cleaner subsystem locks, which in turn could be eliminated in a more normal and familiar way.
With Ingo on the job, and Linus taking an active part, a lot of other big-time hackers have piled on, and there is no doubt that very significant locking changes are in store for the kernel. What does this mean for regular users? Probably a snappier, speedier kernel in the relatively near future.
They Said It
Not everything worth doing is worth doing well.
—Tom West, from The Soul of a New Machine by Tracy Kidder, 1981
Technology has the shelf life of a banana.
—Scott McNealy
Never trust a computer you can't throw out a window.
—Steve Wozniak
Computers are useless. They can only give you answers.
—Pablo Picasso
In the long run, paying for Wi-Fi in your hotel will be like paying to use the toilet or the heater. You won't. Meanwhile, it would be nice if it were easy, cheap, good, or at least two out of those three.
First, it [Microsoft] “embraces” the wonderfulness of open source; then it “extends” open source through deals like the one it signed with Novell, effectively adding software patents to the free software mix; and then, one day, it “extinguishes” it by changing the terms of the licences it grants.
—Glyn Moody on Microsoft's old embrace, extend and extinguish cha-cha, www.linuxjournal.com/content/should-we-boycott-microsoft-can-we
Like the Presidential campaign, it's not who is most experienced or most viral or any of that. Rather, it's who's left after the least are gone. All the religious arguments—closed versus open in particular—are left in the dust by our desire to live as much in the future as we can.
—Steve Gillmor on the iPhone, gesturelab.com/?p=111
How much marketing fakery do you willingly accept, and how much do you want to know about? Does the vegetarian really want to know that they didn't wash the pot at the restaurant and a few molecules of chicken broth are in that soup?
As long as you have one person to talk to, you have a community. And I think way too many people are looking at how many Twitter followers they have, or how many RSS people they're having following them and that's a mistake. You need to embrace your community no matter how big or small—I mean, everyone started off real small.
—Gary Vaynerchuk, garyvaynerchuk.com/2008/06/05/when-do-you-know-you-have-a-community