This utility surveys your system and extracts enough information that you should be able to rebuild the system almost exactly as it was. That's probably more detail than I'd be comfortable putting on a web site, but it's great to print out and keep in your system's notebook (your systems do have notebooks, right?). Requires: BASH and standard UNIX tools.
—David A. Bandel
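A minimal sketch of the kind of snapshot such a utility might take, using only standard UNIX tools (the script name and the particular fields gathered here are illustrative assumptions; the actual utility collects far more detail):

```shell
#!/bin/sh
# sysnap.sh -- hypothetical sketch: dump basic system facts to stdout,
# suitable for printing and filing in the system's notebook.

echo "== Kernel =="
uname -srm                      # kernel name, release, architecture

echo "== CPU =="
grep -m1 'model name' /proc/cpuinfo 2>/dev/null  # Linux-specific source

echo "== Memory =="
grep MemTotal /proc/meminfo 2>/dev/null

echo "== Filesystems =="
df -h                           # mounted filesystems and usage

echo "== Mount options =="
mount                           # how each filesystem is mounted
```

Redirecting the output to a file (`sh sysnap.sh > snapshot.txt`) gives a dated record that can be diffed against later runs to see what changed.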
Shortly after the kernel's Halloween feature-freeze, Guillaume Boissiere decided to put together some statistics on the incorporation of features into the 2.5 tree. He examined almost the entire history of the 2.5 development cycle, starting in early 2002. He created seven possible status categories for any given feature: planning, started, alpha, beta, ready for inclusion, pending inclusion and fully merged. His first progress chart (Figure 1) shows the percentage of features in each category. The second progress chart (Figure 2) shows the actual number of features in each category, changing over time. Without making any claim to complete accuracy, the graphs are interesting, if for no other reason than to observe how seriously most developers took the drive toward feature-freeze. Note also the hump of work done over the summer, followed by a complete end to new feature planning. That hump of activity corresponds roughly to the time when the decision was made to freeze by November.
To facilitate the movement from feature-freeze to an actual 2.6 (or 3.0) release, the Open Source Development Lab (OSDL) donated labor and equipment to maintain a Bugzilla bug-tracking database for the Linux kernel at bugzilla.kernel.org. Support for this was initially strong among developers, but it tapered off a bit when big guns, like David S. Miller, found that duplicate entries and frivolous reports made the system, at least in its original form, more trouble than it was worth. No one wanted to give up on it, however, and a concerted effort seems to be underway to bring the bug database to a usable state.
In more debugging news, Linus Torvalds indicated for the first time that he might be willing to accept patches into the kernel to support a kernel-based debugger. Traditionally, Linus' stance has been that real programmers debug from source files. While he never actually explained the reason for his change of policy, he now seems to think that a kernel debugger running across a network would be a good feature to let into the kernel. Don't look for it in the next stable series, however, as he was careful to make this statement after the feature-freeze had passed.
A new read-only compressed filesystem, along the lines of cramFS, emerged in late October and targets the 2.4 kernel. SquashFS claims to be faster and to produce tighter compression than either zisoFS or cramFS. The author, Phillip Lougher, wanted to address some of the limitations of other compressed filesystems, particularly in the areas of maximum file size, maximum filesystem size and maximum block size.
And speaking of filesystems, does anyone remember xiaFS? In 1993 it was regarded, along with ext2fs, as a serious contender for world domination. In fact, the two filesystems leapt into public use within a few weeks of one another. For a while it even looked as though xiaFS had taken the lead. By 1994, however, it had essentially dropped off the map, and a few years later it was actually dropped from the official kernel tree. In 2000, Linus remarked that it would be fun to have it back. Finally, just after the Halloween freeze, Carl-Daniel Hailfinger asked if this offer was still good. Linus said sure, and even offered to make an exception to the feature-freeze if Carl could deliver the patches.