- LJ Index, March 2010
- Stupid tar Tricks
- Non-Linux FOSS
- diff -u: What's New in Kernel Development
- IRC, Still the Best Support Around
- They Said It
- Linux Journal Insider
- Become a Ninja on LinuxJournal.com
- Sync Your Life
LJ Index, March 2010
1. Number of times “Ubuntu” was used in LinuxJournal.com posts during 2009: 329
2. Number of times “openSUSE” or “SUSE” was used: 68
3. Number of times “Debian” was used: 116
4. Number of times “Red Hat” was used: 58
5. Number of times “Fedora” was used: 49
6. Number of times “CentOS” was used: 2
7. Number of times “Gentoo” was used: 13
8. Number of times “Ubuntu” was used in Linux Journal print articles during 2009: 343
9. Number of times “Debian” was used: 88
10. Number of times “openSUSE” or “SUSE” was used: 47
11. Number of times “Red Hat” was used: 49
12. Number of times “Fedora” was used: 51
13. Number of times “CentOS” was used: 17
14. Number of times “Gentoo” was used: 8
15. Number of Linux Journal issues with a Programming or Development focus (1994–2009): 14
16. Number of Linux Journal issues with a System Administration focus: 11
17. Number of Linux Journal issues with a Security focus: 10
18. Number of Linux Journal issues with an Embedded focus: 7
19. Number of Linux Journal issues with an Ultimate Linux Box focus: 5
20. Number of Linux Journal issues with a Desktop focus: 4
Stupid tar Tricks
One of the most common programs on Linux systems for packaging files is the venerable tar. tar is short for tape archive, and originally, it archived your files to a tape device. Now, you're more likely to use a file for your archive. To work with a tarfile, use the command-line option -f <filename>. To create a new tarfile, add the option -c; to extract files from a tarfile, use -x instead. You also can compress the resulting tarfile via two methods: the -j option for bzip2, or the -z option for gzip.
Instead of using a tarfile, you can output your tarfile to stdout or input your tarfile from stdin by using a hyphen (-). With these options, you can tar up a directory and all of its subdirectories by using:
tar cf archive.tar dir
Then, extract it in another directory with:
tar xf archive.tar
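The compression switches mentioned above slot into the same commands. Here's a quick sketch (the file and directory names are purely illustrative):

```shell
# Create some sample data to archive
mkdir -p dir/sub
echo "hello" > dir/sub/notes.txt

# -z adds gzip compression when creating the archive
tar czf archive.tar.gz dir

# The same flag decompresses on extraction; unpack into a fresh directory
mkdir unpacked
(cd unpacked && tar xzf ../archive.tar.gz)
```

Swapping -z for -j gives you a bzip2-compressed archive instead.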
When creating a tarfile, you can assign a volume name with the option -V <name>. You can move an entire directory structure with tar by executing:
tar cf - dir1 | (cd dir2; tar xf -)
You can go even farther and move an entire directory structure over the network by executing:
tar cf - dir1 | ssh remote_host "( cd /path/to/dir2; tar xf - )"
GNU tar includes an option that lets you skip the cd part, -C /path/to/dest. You also can interact with tarfiles over the network by including a host part in the tarfile name. For example:
tar cvf username@remotehost:/path/to/dest/archive.tar dir1
This is done by using rsh as the communication mechanism. If you want to use something else, like ssh, use the command-line option --rsh-command CMD. Sometimes, you also may need to give the path to the rmt executable on the remote host. On some hosts, it won't be in the default location /usr/sbin/rmt. So, all together, this would look like:
tar -c -v --rsh-command ssh --rmt-command /sbin/rmt -f username@host:/path/to/dest/archive.tar dir1
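The -C shortcut works locally, too. This sketch mirrors the earlier directory-moving pipeline without the cd subshell (directory names are made up for the example):

```shell
# Prepare a source tree and an empty destination
mkdir -p dir1 dir2
echo "moved" > dir1/file.txt

# -C tells the receiving tar to change directory before extracting,
# replacing the explicit (cd dir2; tar xf -) subshell
tar cf - dir1 | tar xf - -C dir2
```

The same -C works on the remote end of the ssh pipeline shown above.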
Although tar originally used to write its archive to a tape drive, it can be used to write to any device. For example, if you want to get a dump of your current filesystem to a secondary hard drive, use:
tar -cvzf /dev/hdd /
Of course, you need to run the above command as root. If you are writing your tarfile to a device that is too small, you can tell tar to do a multivolume archive with the -M option. For those of you who are old enough to remember floppy disks, you can back up your home directory to a series of floppy disks by executing:
tar -cvMf /dev/fd0 $HOME
If you are doing backups, you may want to preserve the file permissions. You can do this with the -p option. If you have symlinked files on your filesystem, you can dereference the symlinks with the -h option. This tells tar actually to dump the file that the symlink points to, not just the symlink.
Along the same lines, if you have several filesystems mounted, you can tell tar to stick to only one filesystem with the option -l. Hopefully, this gives you lots of ideas for ways to archive your files.
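Putting the backup-oriented options together might look like the sketch below (all file and directory names are placeholders). One caveat: in current GNU tar, the stay-on-one-filesystem behavior is spelled --one-file-system, because the short -l was later reassigned.

```shell
# Build a directory containing a symlink to a file outside it
mkdir -p src
echo "data" > target.txt
ln -s ../target.txt src/link.txt

# -p preserves permissions; -h stores the file a symlink points to
# rather than the symlink itself
tar -cphf backup.tar src

# Restore; with -h the link comes back as a regular file
mkdir restored
(cd restored && tar xpf ../backup.tar)
```

Adding --one-file-system to the create step keeps tar from wandering into other mounted filesystems under the directory you're archiving.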
Non-Linux FOSS
If you're the paranoid type and you're still stuck using Windows, you need to get Eraser. Eraser is an open-source security tool for Windows that makes sure deleted files are completely overwritten before they are removed.
Many of us assume that when we delete a file, it's gone, but that's rarely the case. Most deletions merely mark the file as deleted and recycle that area of the disk; nothing is written to it until the space is needed by a new file or by the growth of an existing one.
Even after the area is overwritten, there are some among us (and you know who you are) who believe that, by careful analysis of the magnetic fields on the disk surface, one could reconstruct deleted data.
Even if you don't buy into the black helicopter scenario, there's no doubt that, at least for a time, your deleted files may still be accessible. That's where Eraser comes in. It overwrites the file with other data before it deletes the file. And, that's not all. Eraser not only overwrites the disk area used by the file, it also actually gets out a knife and scrapes off that part of the disk surface that contained the file (just kidding about that last part).
Eraser runs on all modern versions of Windows from Windows 95 on. It includes its own user interface as well as Windows Explorer extensions. The current version is 5.8.7 and is available from eraser.heidi.ie.
diff -u: What's New in Kernel Development
Sam Ravnborg has handed off kbuild maintainership to Anibal Monsalve and Michal Marek. They hadn't planned it this way—both of them just volunteered to take over, and Michal suggested a co-maintainership. A pile of big-time kernel folks thanked Sam for doing all the work he did on it. He certainly is handing off a very robust and reliable element of the kernel. We'll see what direction Anibal and Michal take it now.
In spite of Linus Torvalds' proclamation that there was room only for a single process scheduler in the kernel, that doesn't stop people from wanting to roll their own and use it. Pankaj Parakh is one of these, and Peter Williams has been working on reviving the CPU plugin scheduler project on his own (to be renamed CPU_PISCH). In fact, the two of them may now be working on that together. They're also each naturally designing their own schedulers to plug in to the kernel, once they get CPU_PISCH ready. There really are a lot of schedulers out there, partly because it's a really cool, challenging programming project, and partly because it's so fraught with deep magic that it would be difficult for everyone to agree on the best ways for any scheduler to behave.
LogFS, after much struggle, is now headed for inclusion in the main kernel. The main roadblock was that its on-disk format was in flux, and including it in the kernel during that kind of change would create support nightmares inside the kernel, because users of each fluctuation of the disk format still would need to access their data, long after LogFS had settled on a final format for itself. There were other issues as well, but that was the main one. Jörn Engel recently submitted the updated LogFS to the Linux-Next tree, which typically means something will be headed in a fairly standardized way up into the official tree. Not that it's impossible for something to stall out between Linux-Next and the official kernel, but it's not the usual case.
A new development policy is in the works, allowing subsystem maintainers to migrate drivers they don't like into the staging tree. The staging tree is just a relatively new directory in the official kernel, where drivers that are not ready for true inclusion can hang out and be available to users. It's a way to get lots of testing for things that are still up and coming. Typically, the path is from outside the kernel, to the staging tree, to a proper place in the kernel sources. The proposal is to reverse that direction at the discretion of the subsystem maintainers. There are plenty of pros and cons, and the debate is fairly heated. Undoubtedly, the policy's true algorithm will settle down over time, but something along those lines does seem to have Linus Torvalds' support and the support of a lot of other folks. It's definitely something that's going to be worked out in the field, rather than made perfect beforehand. We can look forward to angry complaints from driver maintainers, and probably some users, until the kinks are worked out.
John Hawley took advantage of a momentary lull on master.kernel.org, due to it being nighttime in Japan during the kernel summit this past year, and upgraded the operating system on that server. He reported that the upgrade went fairly well, with a reboot and a six-hour configuration effort. At the time he reported on it, there were only a few remaining glitches left to iron out.
LTTng (Linux Trace Toolkit Next Generation) version 0.164 will be released under some additional licenses. It'll still have the GPL, but now some of the code will also be released under the LGPL, and some will be released under the BSD license. Mathieu Desnoyers made the announcement and also said he was still waiting for IBM to give its permission to include its contributions in the relicensing effort.
Michael Cree and Matt Turner have joined forces in common frustration at the large number (more than a dozen) of unimplemented system calls on the Alpha architecture. They plan to work together to implement them, once they can figure out some of the tricky technical issues standing in the way.
IRC, Still the Best Support Around
If you haven't gotten our subtle hints during the past year or so, IRC certainly is not dead. It really is the best way to get knowledgeable support from the folks who know best. There are a few caveats, however, that may not be obvious to people new to this old-school chat protocol.
Get a Good Client
If you just want to stop into the #linuxjournal channel for some quick banality, a Web-based client like the one at linuxjournal.com/irc is fine. You can drop in, request a !coffee from JustinBot, and chitchat with fellow geeks. If you're looking for something a bit more useful for the long haul, a native client makes more sense. Many people (myself included) like X-Chat. There are plenty of other options, like the command-line-only Irssi, but X-Chat offers a nice balance between features and usability.
If you look back at Kyle Rankin's Hack and / articles from the past year or so, you'll find easy ways to integrate your entire lifestyle into IRC. Kyle does everything from chatting to twittering inside his terminal window, and he shows us all how to do the same.
The opposite approach, which is actually what I do, is to add IRC as another instant-messaging protocol on my IM client. Although Kopete and Empathy may be slick-looking for instant messaging, neither comes close to Pidgin's elegance with IRC. Check out my video tech tip on how to set up IRC inside Pidgin if that makes more sense to the way you work during the day: www.linuxjournal.com/video/irc-chats-pidgin.
Every channel you visit will have a different “personality” to it. The #linuxjournal channel on Freenode, for example, is really a goofy, easy-going channel full of geeks having fun. If you come visit us and say “Garble bargle, loopity loo”, no one will find you odd. In fact, you'll fit in quite nicely. On other channels, specifically channels where developers hang out related to a specific application, the atmosphere might be a bit more stuffy. My suggestion: hang out in a room for a while before you post questions. There may be links in the channel pointing to FAQs or information about how to conduct yourself without making anyone angry.
IRC is the sort of thing most geeks leave running but don't monitor constantly. If you pose a question, but don't get a response for a while, just wait. If you have a question for a specific person, typing his or her name in the channel often will alert the person (I have Pidgin set up to do that, and many folks do the same with their IRC clients). And finally, don't forget, it's a community. If you see a question you can answer, do it!
They Said It
All scenarios likely to result from Oracle's acquisition of the [MySQL] copyrights, whatever Oracle's business intentions may be, are tolerable from the point of view of securing the freedom of the codebase.
—Eben Moglen, Columbia University Law School professor and director of the Software Freedom Law Center
The real problem is not whether machines think but whether men do.
—B. F. Skinner
There are three roads to ruin: women, gambling and technicians. The most pleasant is with women, the quickest is with gambling, but the surest is with technicians.
—Georges Pompidou
Technology is dominated by two types of people: those who understand what they do not manage, and those who manage what they do not understand.
—Archibald Putt
Fine print: All prices are final, there are no bogus fees and unfees. Period. Only SIP devices that have already been created can be connected to sip.callwithus.com to make calls. Please ensure you only use devices approved by you (please do not try and connect using two tin cans and a piece of string, as we do not yet support this, but we may support this in the future—the work is in progress and preliminary results are positive). Callwithus.com monthly subscription charge of $0 must be paid in advance and does not include tax of $0, which also must be paid in advance. You will be billed an activation fee of $0 plus tax and this must be paid in advance. Calls made incur tax at the rate of 0% each month and must be paid in advance. On cancellation of the service you will be charged a one-time disconnection charge of $0. Additional features such as caller ID with name on incoming calls will be billed at the additional rate of $0 per call. All **YOUR** rights reserved.
—The “Fine Print” from callwithus.com
Linux Journal Insider
If you're the type of Linux Journal reader who waits by your mailbox every month, setting up a tent to sleep in and taking days off work as you anxiously await the new issue, quite frankly, we want you to seek medical attention. If you're just a Linux Journal fan who would like to hear about the issue as it rolls off the presses, but before it is actually in your hands, we've got a special treat for you.
This year, Kyle Rankin and I are putting together a monthly podcast called “Linux Journal Insider”, where we give you the ins and outs of the upcoming issue. We discuss the issue focus, read letters to the editor and usually manage to throw in a few bad puns and cheesy jokes along the way. Swing by the Web site (www.linuxjournal.com) and look for the RSS feed. It's fun for us to talk about the issue we've been working on, and hopefully it's fun for you to hear about it!
Become a Ninja on LinuxJournal.com
Okay, so it is probably slightly more complicated than that, but a few visits to LinuxJournal.com certainly will help put you on the path to ninja status. I can only assume that you are reading the awesome collection of goodness that is this month's Linux Journal with the intention of becoming a Linux ninja or adding to your collection of ninja weapons. I am here to help you on that quest. You see, although there is ample ammunition here in your hands, a few essential bits of arsenal inventory are lurking on-line at LinuxJournal.com. In particular, the entire HOW-TOs section is full of such gems. One of my favorites from the vault is Shawn Powers' video tech tip on command-line substitution: “Forgetting Sudo (we've all done it)” at www.linuxjournal.com/video/forgetting-sudo-weve-all-done-it.
You never know when you'll happen upon a pirate in a dark alley and be very glad you spent some time on LinuxJournal.com. Oh, it'll happen. Trust me.
Sync Your Life
For those of us lucky enough to use Linux on all of our computers, Canonical's Ubuntu One is a great way to keep files in sync between computers. Unfortunately, most of us are stuck using other operating systems throughout the day. We all have our own ways of managing such things, but I thought a glimpse into my “world of sync” might help others synchronize their lives.
At home, I have a centralized file server, and at work, I have the same thing. But, sometimes I want to access documents regardless of my location—like from a coffee shop during lunch. For my word processing and spreadsheet files, along with a handful of other commonly used documents (Linux Journal digital PDFs come to mind), I use Dropbox. It is a cross-platform, free program that allows you to sync many computers in real time. The free version is limited to a gig or two, but for basic documents, it's perfect (www.dropbox.com).
I use Firefox on every operating system, but even if you are forced to use Internet Explorer, Safari or Google's Chrome browser, Xmarks syncs your bookmarks quite nicely between different browsers on different platforms. The service is free and works very well. I can't imagine life without Xmarks (www.xmarks.com).
Contacts and Calendars
Love it or hate it, Google has infiltrated every operating system rather effectively. I use a plethora of applications to keep my different devices (laptops, desktops, phones, PDAs) in sync with contacts and calendars, but they all are based on Google. My favorite feature is that in a pinch, I can access everything from a Web browser. A quick search for “google sync” brings up many options, most free, that should get you a consistent contact and calendar base across any platform.
This is starting to feel like a Google ad, so I'll stop with this one. Google Voice is the way I consolidate all my phone numbers. I like having a single number that I can give freely and then filter incoming calls however I want. Again a free solution, Google Voice offers features I'd likely pay for, although I'm certainly not complaining at the price.
So, there you have it. I currently have two cell phones, a Skype Wi-Fi phone, Magic Jack, home landline, work landline, three Linux laptops, one Windows laptop, one Apple laptop, three desktops at home, three desktops at work and enough media-playing devices in my house to open a movie theater. If I didn't sync some of my services, I'd go more insane than I already am!