- LJ Index, June 2010
- Maintaining Your System from the Command Line
- diff -u: What's New in Kernel Development
- Non-Linux FOSS
- Dual Booting, Not Just for Windows Users
- One-Eyed, One-Horned, Flying Purple...Ubuntu?
- Create BillyBobBuntu with Reconstructor
- They Said It
- Save Your Favorite Articles
LJ Index, June 2010
1. Millions of developers in the world: 15.2
2. Number of lines of code produced per developer per day: 10
3. Millions of lines of code produced per year by all developers: 31,616.0
4. Millions of lines of code produced per minute by all developers: 0.32
5. Millions of lines of code in kernel version 2.6.32: 12.99
6. Minutes required to rewrite the Linux kernel if all developers pitched in: 41
7. Millions of lines of code in the average Linux distro: 204.50
8. Hours required to rewrite the average Linux distro if all developers pitched in: 10.6
9. Number of the top 10 fastest computers in the world that run Linux: 10
10. Number of the top 10 fastest computers in the world that run UNIX: 0
11. Number of the top 10 fastest computers in the world that run Microsoft Windows: 0
12. Number of the top 10 fastest computers in the world built by Cray: 2
13. Number of the top 10 fastest computers in the world built by IBM: 4
14. Number of the top 10 fastest computers in the world built by Sun: 2
15. Number of the top 10 fastest computers in the world built by SGI: 1
16. Number of the top 10 fastest computers in the world built by NUDT (China): 1
17. Teraflop speed of world's fastest computer (Cray Jaguar at ORNL): 1,750
18. Terabytes of memory in the world's fastest computer: 362
19. Petabytes of disk storage in the world's fastest computer: 10
20. Number of Opteron processor cores in the fastest computer in the world: 224,256
1: Evans Data
2: Frederick P. Brooks in “The Mythical Man Month”
3: #1 * #2 * 208 (208 working days/year)
4: #1 * #2 / 8 / 60 (8-hour workday)
6: #5 / #4
7: Linux Foundation
8: #7 / #4 / 60
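Given the sources above, the derived figures can be re-checked with a quick shell calculation (awk handles the floating-point math; the inputs are items 1, 2, 5 and 7):

```shell
# Re-derive LJ Index items 3, 4, 6 and 8 from items 1, 2, 5 and 7
per_year=$(awk 'BEGIN { printf "%.1f", 15.2 * 10 * 208 }')     # item 3: Mlines/year
per_min=$(awk 'BEGIN { printf "%.2f", 15.2 * 10 / 8 / 60 }')   # item 4: Mlines/minute
kernel_min=$(awk 'BEGIN { printf "%.0f", 12.99 / 0.32 }')      # item 6: minutes for the kernel
distro_hr=$(awk 'BEGIN { printf "%.1f", 204.50 / 0.32 / 60 }') # item 8: hours for a distro
                                                               # (rounding puts this near the printed 10.6)
echo "$per_year $per_min $kernel_min $distro_hr"
```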
Maintaining Your System from the Command Line
Many Linux distributions use some form of packaging system to organize applications installed on a system. A formal packaging system lets you install, remove and, in general, maintain your software in a controlled and coherent way. The three main packaging systems that most distributions currently use are the Debian deb package, the Red Hat rpm package and the Slackware pkg package. They all have graphical utilities to interact with the packaging system, but what if you want to deal with the system on the command line? What if you're running a server or accessing a remote machine through SSH and don't want to deal with the overhead of X11? Let's look at how to do this for Debian-based systems.
First, you probably will want to install some software. The preferred way to do this is with the apt-get utility. apt-get is aware of the chain of dependencies between packages. If you want to install stellarium, simply run apt-get install stellarium, which downloads the relevant package file and all of its dependencies from a repository. What if you don't know the exact package name? Use the dpkg-query utility to query the package management system. So, if you know the package name has “kde” in it, you can list all the matching packages with dpkg-query -l "*kde*". Remember, quote any search strings that have an asterisk (*), so you don't inadvertently make the shell try to expand them.
This works great for software available in the given repository. But, what if you want something not available? If you have a .deb file available for download, you can download it and install it manually. After downloading the file, install it by running dpkg -i file_to_install.deb.
dpkg works with the deb packaging system at a lower level than apt-get. With it, you can install, remove and maintain individual packages. If you have a group of packages to install, you might want to add the relevant repository to your list so that apt-get knows about it. The list of repositories is stored in the configuration file /etc/apt/sources.list. Each line has the form:
deb http://us.archive.ubuntu.com/ubuntu/ karmic main restricted
The first field tells apt-get what is available at this repository: deb is for binary packages and deb-src is for source packages. The second field is the URL to the repository (here, the Ubuntu repository). The third field is the repository name (in this case, the repository for Ubuntu's karmic version). The last fields are the sections from which to install packages. This example looks at the main and restricted sections when trying to install applications or resolve dependencies.
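Because the fields are whitespace-separated, a sources.list entry can be pulled apart with the shell itself; this sketch uses the sample line above:

```shell
# Split the sample sources.list line into its fields
line='deb http://us.archive.ubuntu.com/ubuntu/ karmic main restricted'
set -- $line                   # split on whitespace into $1, $2, ...
type=$1 url=$2 suite=$3
shift 3
components=$*                  # everything left is the section list
echo "type=$type suite=$suite sections=$components"
```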
Now that you have installed some applications, you probably want to maintain and keep them updated, because every piece of software will have bugs or security issues that come to light over time. Developers always are releasing new versions to fix those issues and updating the relevant packages in the repositories. To update the list of software and versions on your system, run apt-get update. Once you've updated the list, tell apt-get to install the updates with apt-get upgrade. If you want a list of what is about to be upgraded, add the -u option: apt-get upgrade -u.
Sometimes, when a new version for a package comes out (like when a distribution releases a new version), the dependencies for said package might change too. In such cases, a straight upgrade might be confused, so use apt-get dist-upgrade. This command tries to deal with these changes in dependencies intelligently, adding and removing packages as necessary.
What if you've installed a package just to try it out and don't want it anymore? Remove a package with apt-get remove stellarium. This removes all the files installed as part of the stellarium package, but it leaves any configuration files intact and also doesn't deal with any extra packages installed because stellarium depended on them. If you want to remove a package completely, including all configuration files, run apt-get purge stellarium.
Installing and removing all this software can result in space-wasting cruft accumulating on your system. To recover some space, run apt-get autoclean. This removes the package .deb files from the local cache for packages that no longer can be downloaded (mostly useless packages). If you want to clean out the local cache completely and recover more space, run apt-get clean.
Although remove and purge will remove a package, what can you do about any dependencies installed for this package? If you run apt-get autoremove, you can uninstall all packages that were installed as dependencies for other packages and aren't needed anymore.
Another way of finding packages that are no longer needed is with the deborphan utility. First, you need to install it, with apt-get install deborphan. (Most distributions don't install it by default.) Once installed, running it with no command-line options gives a list of all packages in the libs and oldlibs sections that have no dependencies. Because no other package depends on those packages, you safely can use apt-get to remove or purge them. If you want to look in all sections, use the -a option. If you're trying to save space, ask deborphan to print out the installed sizes for these orphan packages by using the -z option. Then, you can sort them with deborphan -z -a | sort -n, which gives a list of packages you can safely uninstall, sorted by installed size from smallest to largest.
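To see what that final pipeline does to the output, here is a sketch with invented package names and sizes (deborphan -z prints the installed size in kilobytes before each package name):

```shell
# Simulated 'deborphan -z -a' output piped through 'sort -n'
# (package names and sizes below are made up for illustration)
orphans='5120 libexample-old1
96 libtiny-compat2
1024 libmid-legacy3'
sorted=$(printf '%s\n' "$orphans" | sort -n)   # numeric sort on the size column
first=$(printf '%s\n' "$sorted" | head -n 1)   # smallest orphan first
echo "$sorted"
```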
Each of the tools discussed above has many other options that you should research in the relevant man pages. Also, Red Hat-based systems have equivalent commands to help you manage rpm files.
diff -u: What's New in Kernel Development
Paul E. McKenney has worked up a patch to include a more precise version number in the config data, so if you're running a kernel built from a git repository, you'll be able to identify the source tree precisely, even if it's in between officially released versions. In this case, the version number will look something like 2.6.33-01836-g90a6501. Isn't it beautiful? His code actually went through numerous revisions to make sure it derived the version number in a safe way that wouldn't cause other scripts to choke and to give users the option of setting environment variables to control whether full version information should be included.
Dave Young has posted patches to change the patch submission documentation to list Gmail as no longer useful for sending patches. In the past, Gmail apparently could be made to send patches cleanly by jumping through a couple of hoops, but that's no longer the case. Gmail converts tabs to spaces, automatically wraps long lines and base64-encodes messages that contain non-ASCII characters. Any one of those behaviors is enough to corrupt a patch file. It is possible to configure Firefox to edit the e-mail with an external editor, and in the past, Gmail would send the edited text instead of using its own editor. But with the introduction of the line-wrapping feature, Gmail apparently wraps lines even when an external editor is used. The documentation used to explain the workaround involving the external editor, but Dave's patch now simply lists the various issues and states that Gmail shouldn't be used for sending patches to the linux-kernel mailing list.
Eric W. Biederman has changed the way /dev/console is created. The old way was to wait until the filesystem containing the /dev directory had been mounted and then mount /dev/console there. The problem with that approach is that if you ever want to unmount the filesystem, you can run into trouble if /dev/console is still open. Eric's patch makes /dev/console part of rootfs—still in the same location, still called /dev/console, but part of rootfs instead of whatever filesystem you choose to mount for your running system. A very few power users may have to adjust the way they do things slightly as a result of this patch. Everyone else should notice nothing at all, except perhaps that certain problems that used to crop up no longer do.
Christine Caulfield has marked herself as no longer maintaining the DECnet networking layer and has marked that code as orphaned instead of maintained. With the decnet mailing list totally silent, her theory is that the only users are running older kernels and are happy with it as is. The DECnet networking protocols originally were used in the 1970s to connect PDP-11s. They were published as open standards, paving the way for Linux's DECnet implementation decades later.
Non-Linux FOSS
Whether you think making each program have its own installer is a bug or a feature, in the Windows world, it's the norm. So, if you're porting open-source code to Windows, at some point, you have to think about creating an installer.
Inno Setup is a free and open-source installer for Windows programs. It's been around since 1997 and is written in Delphi Pascal. Inno Setup is driven by a script you provide, which tells it how to build the installer for your program. The script is much like an INI file: you provide simple name/value pairs that drive the creation of the installer. For more complex scenarios, Inno Setup includes its own built-in Pascal scripting engine for writing real “code” sections in the script.
Inno Setup has a long list of supported features: support for 64-bit applications, customizable setup types, integrated uncompressing of installed files, creation of shortcuts, creation of registry entries, running programs before/during/after the install, password protection, digital signing and much more. See the Web site (www.jrsoftware.org/isinfo.php) for more information.
Inno Setup runs on all modern versions of Windows. It creates an uninstaller as well as an installer and packages it all up in a single EXE for easy distribution. At the time of this writing, Inno Setup is at version 5.3.8, released February 18, 2010.
Dual Booting, Not Just for Windows Users
This is LJ's Distribution issue, and it seems fair to note that programs like GRUB aren't only for those of us with one foot in the Windows world. Did you know you can run Fedora and Ubuntu on the same machine? Did you know you can run Fedora 10, Fedora 12, Ubuntu 8.04, Ubuntu 9.10, Slackware and Linux Mint all on the same machine?
One of the many great things about Linux is that it makes multiple installs simple! During the install process, carve off a hunk of hard drive, and most distributions happily will honor and respect your existing GRUB install. So if you can't decide which distribution you want to try, install them all! (Okay, if you install 20 distributions on one computer, you may start to run into problems keeping them straight!)
One-Eyed, One-Horned, Flying Purple...Ubuntu?
With the latest iteration of its Linux distribution, Canonical has changed its branding a bit. Although we might all speculate why it has moved on from its traditional brown themes, sadly the reality often is less exciting than speculation. True, the rebranding is due to years of planning, research and marketing decisions, but I suspect a strong underlying set of reasons:
- UPS already had cornered the brown market.
- Ubuntu's “Human” theme limited its interplanetary domination strategy.
- Mark Shuttleworth heard enough “scat” jokes as they pertain to the color brown.
- The color brown would clash with the upcoming orange overtones of the 10.10 version of Ubuntu, Marauding Marmaduke.
All joking aside, the rebranding is a refreshing new look for Ubuntu. Whether it will have any effect on the marketability of Canonical's flagship product remains to be seen. For those of us who were just about browned-out though, I think it's safe to say, “Bring on the purple!”
Create BillyBobBuntu with Reconstructor
One glance at DistroWatch will prove that Linux users like to roll their own distributions. Heck, there's even a distribution called Linux From Scratch, which you'd think would just be called Linux! If you have been itching to roll your own distribution but feared it was too complicated, Reconstructor (www.reconstructor.org) might be exactly what you need.
I've written about Reconstructor before on the Linux Journal Web site (www.linuxjournal.com/content/reconstructor-when-you-lose-your-restore-cd), and more recently, Ross Larson wrote a follow-up on how the project has progressed (www.linuxjournal.com/content/howto-customized-live-dvds-reconstructors-web-ui). One interesting new feature is that you can build your own distribution from a Web-based distro builder. Surfing over to build.reconstructor.org (and creating an account) allows you to build a custom Linux distribution and then download it.
I do have one request: please don't start a new Linux distribution to compete with all the others. We already have plenty!
They Said It
We live in a society exquisitely dependent on science and technology, in which hardly anyone knows anything about science and technology.
—Carl Sagan
The most overlooked advantage to owning a computer is that if they foul up, there's no law against whacking them around a little.
Any science or technology which is sufficiently advanced is indistinguishable from magic.
—Arthur C. Clarke
Any technology that is distinguishable from magic is not sufficiently advanced.
—Gregory Benford
Microsoft once made the mistake of broad-brushing Linux as an intellectual property quagmire. It made Microsoft headlines, but few friends: lawyers didn't believe it, customers didn't want to hear it, and competitors dared it to sue.
Years later, Microsoft still hasn't sued, but instead plods away at convincing the world, one patent cross-licensing agreement at a time, that everyone, everywhere owes it money for alleged violations of its IP in Linux.
—Matt Asay, Chief Operating Officer at Canonical
A year spent in artificial intelligence is enough to make one believe in God.
—Alan J. Perlis
Save Your Favorite Articles
Did you know you can save your favorite LinuxJournal.com articles to reference later? Just click “Mark this as a favorite” at the bottom of any post, and you'll see it on your user profile. When you click your favorites tab, you can search your favorites for easy reference. Now, you can keep track of all the useful articles you come across on LinuxJournal.com in a sort of recipe box. Visit any author or reader profiles to see their favorite articles as well. We hope this makes it easier for you to recall specific info on the site. I'd love to hear how this feature is working for you, so feel free to drop me a line at email@example.com. See you on-line!