I just finished reading the article on Puppy Linux [LJ, April 2008]. I'm glad to see you introduce this distribution to your readers. I discovered PL about a year and a half ago. Every year, my wife and I travel for about six months, usually in our RV. During the 2007 travel period, I used PL exclusively to access the Internet safely. I found no reason to look at any other distribution. I wholeheartedly recommend it to anyone who wants the flexibility and security of using an operating system on a Flash drive. My version of PL includes Firefox, OpenOffice.org and The GIMP.
I am not sure it was clear from the article, but because the PL OS is loaded
anew into the computer memory at each bootup and runs from that memory, any
possible corruption of the OS by an on-line attack probably would last
only until the computer is turned off. Next boot, fresh OS.
In the March 2008 issue of LJ, in the “Desktop Must-Haves” article, author Dan Sawyer seemed to have forgotten about gThumb as a photo importer and organizer for the GNOME desktop environment. It supports photo import (using the PTP protocol) and slideshows, and it provides a limited array of image-manipulation tasks (balance, contrast, transformation, crop, red-eye removal and so on). It is pretty much standard with many GNOME installations, and yet he didn't mention it, favoring the rather “controversial” F-Spot (controversial due to Mono and its status regarding things such as Windows.Forms).
Generally, I like native applications better (due to the look and feel), but I
do agree with Mr Sawyer regarding all the applications he reviews in this
article, with the only exception being gThumb, which I think deserved to be mentioned.
Gian Paolo Mureddu
Dan Sawyer replies: Quite honestly, Linux is a big software universe, and I'd not run into gThumb before I got your letter (it did not, alas, come standard with any of my GNOME installations). I haven't had time to do a proper assessment yet, but it looks very promising. Thanks for the recommendation!
As for the controversiality of Mono, I make it a point to stay as far away as possible from the infighting between the various licensing and project camps. Although I certainly have opinions on which toolkits work best consistently, when it comes down to it, I care about the functionality. If that functionality comes from a Mono codebase or an (until recently proprietary) Java codebase rather than a less controversial toolkit, and it saves to data formats that are easily translatable and/or universally readable, then I have no quarrel with it.
Thank you for the letter. I'm pleased you liked the article!
This letter is not related directly to LJ, but as LJ is a magazine involved with Internet security issues, I think the following reflections could be considered by readers, and the editors may want to include an article or discussion on this topic in the near future.
I am a professor at a university. I do research and I teach. I've used the Internet since my old student days, when we FTPed, Telneted, fingered and so forth. Those were free days, free as in speech, free as in open source, open as it was the Internet. But, then came the “worms”, and we closed the doors. Later, we encrypted everything we sent, and built “walls of fire” and “military zones”. Now, we filter everything that comes into or out of our nets—sometimes on security grounds, sometimes to reduce traffic jams, and sometimes because of copyright infringements.
In the past few years, the troubles created by these “policies” have greatly affected our work. Big institutions have created rules that close their doors without regard for who might be affected. Sometimes we cannot even send e-mail to a colleague because our domain (which can be as general as .xy!!) is on a blacklist.
The most ridiculous extreme occurred last week. I advise students in different institutions, and we interchange information, data and archives. At one of these institutions, the SSH port was moved to a number greater than 1024; at the other, all ports above 1024 were closed, even for client connections. These measures were taken without notifying the users. The result was time wasted trying to discover why what we always had done (until recently) no longer worked, time wasted adapting to the new situation, and time wasted in unfruitful discussions with the system managers.
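For what it's worth, reaching an SSH daemon that has been moved to a nonstandard port is a one-flag affair. A minimal sketch follows; the port number 2222 and the hostname are hypothetical, not from the institutions described above:

```shell
# Connect to a hypothetical host whose sshd listens on port 2222
# rather than the default port 22:
ssh -p 2222 user@shell.example.edu

# Or record it once in ~/.ssh/config, so a plain
# "ssh shell.example.edu" keeps working afterward:
#
#   Host shell.example.edu
#       Port 2222
```

Of course, none of this helps when the sysadmins change the port without telling anyone, which was the real complaint.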
The freedom to filter packets today is amazingly broad, and the Internet gradually is becoming a mess of entangled knots instead of a fluid traffic Net.
We need standards—standards for security policies. We need to convince security managers that the best security measure is simply to unplug from the Net, or better yet, to switch off the computer! But this trivial solution, as usual, is of interest to nobody, not even to them. I can (barely) do research without the Internet, but they would lose their jobs without it.
Security policies should be discussed with the end users who are, at last, the
reason we have the Internet.
Guillermo Giménez de Castro
Dave Taylor's article on Parallels and VMware Fusion was a welcome sight [“Running Ubuntu as a Virtual OS in Mac OS X” in the May 2008 issue of LJ]. I run Ubuntu 7.04 Server in Fusion on my MacBook, and it works great as a portable server environment. I also can rely on the Ubuntu software repository and get all the advantages of the Open Source world without cluttering up my Mac OS X install. Hopefully, the Linux in Fusion user base will grow over time, and VMware will implement more of the power-user features into its product. I would love to see a headless option that doesn't involve force-quitting the Fusion UI.
Are there any plans for more detailed articles in the future? Fusion in
particular has some options (like port forwarding) that can be
enabled only through config file editing.
I really enjoyed this article. However, I did notice three things that I don't really agree with.
First and foremost to me is the statement in the first paragraph that Mac OS X is a Linux distro. This is wrong. Mac OS X is based on Darwin, which is a BSD variant. BSD is not Linux and vice versa. They are totally separate codebases, although there has been some cross-pollination.
Second, calling X11 “a tightly integrated version of the popular Linux windowing system” is a bit off-base. X11 is a UNIX windowing system, which originally was developed at MIT long before Linux ever was envisioned. The paragraph is not really wrong, it's just a bit misleading—at least as I read it.
Third, in the fifth paragraph, the author states, “Free operating
systems (that is, anything but Microsoft Windows)....” There are many
nonfree operating systems for Intel machines. Examples include OS/2 (okay, it is now
dead), DR-DOS (also dead), Pick (not dead, but has a rather small market
share; an integrated OS/DBMS system) and Sun's Solaris (the commercial one).
On non-Intel machines, most OSes are not at all free, such as z/VM,
z/VSE, z/TPF, z/OS on IBM's “mainframe” System z, AIX on IBM's
System p and
i5/OS on IBM's System i. You may have noticed that I know a bit about
IBM machines. I've worked on them, although not for IBM, since the
Dave Taylor replies: Oh jeez, sometimes I don't know how these gremlins get into the computer and mess up my perfectly written articles. I mean, really, I might have accidentally said that in my original piece as submitted, but it's clearly wrong and I know it! Mac's Darwin roots are NEXTSTEP, which itself was based on Mach 2.5 and 4.3BSD. Heck, I contributed to 4.3BSD! As you point out, X11 comes from the MIT Athena Project, and was released years before Linux was even a dream. Mea culpa on both of 'em.
You gotta cut me some slack on the comment about other nonfree operating systems for the Intel architecture, however. I was trying to be a bit wry and sarcastic in my commentary. Of course, there are many commercial operating systems that, outside of illegal P2P copies, are licensed and tightly monitored, including the systems you mention and many more.
Suffice to say, we let a few gaffes slip through and apologize for any confusion they caused. Glad you enjoyed the article. We'll get our facts straight next time, I promise.
I'm still not sure how Dave Taylor positions his column in Linux Journal. Perhaps it's meant as a column for the pros—some kind of “who finds the bugs I smuggled in” game. Surely it can't be for beginners, who'd get frustrated by all the code that does not work the way the text leads you to believe.
In his May 2008 column, Dave wants to give us advice on error handling and making scripts bulletproof, again without checking his own code snippets for errors.
The 2>&1 >/dev/null output redirection will not work as described, because first STDERR is redirected to wherever STDOUT is (currently still) wired, and then STDOUT is sent to data nirvana, but the redirected STDERR will not follow suit. The >&1 redirection does not mean “pass it on to STDOUT” but rather “rewire yourself to where STDOUT points right now”. There are several possibilities to do it right; the most often used is >/dev/null 2>&1. This works because first STDOUT is plugged in to the “data store with endless capacity”, and only then is STDERR told to put its hose into the same place.
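The difference is easy to demonstrate in a short sketch. The path below is hypothetical; any command that writes to STDERR will do:

```shell
#!/bin/sh
# With "2>&1 >/dev/null", STDERR is first pointed at wherever STDOUT
# currently goes (here, the command substitution), and only then is
# STDOUT sent to /dev/null -- so the error message still escapes:
wrong=$(ls /no/such/path 2>&1 >/dev/null)

# With ">/dev/null 2>&1", STDOUT is silenced first, and STDERR is
# then pointed at the same place -- both streams vanish:
right=$(ls /no/such/path >/dev/null 2>&1)

echo "wrong order captured: $wrong"
echo "right order captured: $right"
```

Running it shows the first variable holding the ls error message and the second one empty.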
Dave Taylor replies: Jeez, must be gremlins-attack day or something. Yeah, you're right that the order of metacharacters in that particular line is wrong. Thanks for pointing it out!
I just got a chance to read the May 2008 issue of LJ, and I wanted to write with respect to the article “Customizing Linux Live CDs, Part I”. It is a nice article and covers techniques similar to those I used long ago when remastering Knoppix (I remastered only if I needed something beyond the knoppix.sh injection model). However, as the article discusses Debian-based distributions, I think it only fair to mention Debian Live, which I and many others use to make live CDs of Debian. With Debian Live, making a custom live CD is far easier than the remastering described in the article. I think it would be worth LJ readers' time (those doing remastering, that is) to take a look at Debian Live:
Debian Live: debian-live.alioth.debian.org
Debian Live Download Server: live.debian.net
Debian Live Wiki: wiki.debian.org/DebianLive
Debian Live IRC: channel #debian-live on irc.oftc.net
Mick Bauer replies: On the one hand, this series is intentionally Ubuntu-centric, and for Ubuntu fans, being able to customize one's favorite distro is worth learning a little command-line voodoo. It's also, I think, a good way to illustrate how to use compressed loopback filesystems.
But, you're right. I'd be remiss if I didn't at least mention a simpler way to achieve a similar thing! So, in Part III of this article [see page XX], I mention Debian Live and cite the link to their Wiki (which includes ample links to downloads and so forth). Thanks for bringing it to my attention.
Mick Bauer's “Customizing Linux Live CDs, Part I” (LJ, May 2008) was a great article, and the timing was perfect (for me...and it's all about me, right?).
A buddy and I have been playing around with bootable USB sticks using different distros. Ideally, we want a fully functional desktop OS that we literally can take with us anywhere. There are lots of apps we want that are not on the live CD. Since you (and pendrivelinux) have done the heavy lifting for us, setting up the remastered Ubuntu USB stick was a breeze. We're not quite done tweaking yet, but our current image is approaching 1.4GB. The final version will live on a 4GB stick, but a valuable side benefit is that the 2GB Flash drive I'm using for testing will be passed around the office for people to give Linux a test-drive.
So, with a stroke of the pen, you've not only provided tremendous value for my subscription dollars, but you've also increased the ranks of Linux users!
Oh, I also like the line numbering scheme you used for your scripts.
Mick Bauer replies: Wow, what a thoughtful, gratifying message! It gave me a boost just as I was wondering if and how I'll make deadline for the next issue. It makes a difference, being reminded that people actually do find this stuff to be useful. (Usually, I just hear about the parts I get wrong!)
The article on podcasting by Dan Sawyer in the May 2008 issue of LJ was of particular interest to me, and it confirmed that recording calls using Skype on Linux is a nontrivial issue. (I interview genre authors on my podcast, Radio Free Bliss.)
However, to say that “[t]here are a number of packages [that hijack the DSP with a middleware layer] that'll do this—for a fee—on Windows and Mac” is not strictly true. Driven from Linux, I use PowerGramo with Skype on Windows, the basic (and very functional) version of which is free. I've had no problems using it.
As to why I choose to use Skype: well, most nontechnical people know the
Skype name much better than they know Gizmo. And, for every ten people
I've asked who have Skype, there are none who have Gizmo. It would be
very arrogant of me to demand that my guests sign up for a completely new
service, all for the sake of one 45-minute conversation. So, even though
my main machine is Linux, running Mandriva, I keep a Windows machine
around for podcast purposes. I fear, especially among the less technical,
that it's going to be a Skype-Win world for the foreseeable future.
Dan Sawyer replies: Thanks for the correction and the additional information. I too tend to do my Skyping on Windows, even though I actually record the calls on Linux. I do this because all my Linux boxen are 64-bit systems, and Skype, as yet, doesn't particularly play nice with 64-bit. Plus, running it on an emulation layer can get a bit twitchy. One of these days, it'll come out for 64-bit distros. Until then, I'll be using my Windows machine as a conference-call PBX.
Such is life, sometimes.
Having read “Go Green, Save Green with Linux” in the April 2008 issue of LJ, I got red. James Gray spouting “our fragile planet's inability to support an SUV-lifestyle” is nonsense.
The planet will adapt. If the planet doesn't like what man is doing, then it will wipe him out. The human race is just a blink in time for this planet. It is a selfish attitude of personal survival that drives this fascist mindset.
“Mother Nature's Mayday” is a farce, or a skillfully exploited situation. It is just a humanistic perspective applied to generate a human emotional response. “Mother Nature” has no qualms, pangs of conscience or compunction about life and death.
The bottom line of this article is about the “bottom line”. People are
frustrated with wasting money on inefficient products.
James Gray replies: Thank you for your reply. I appreciate your reading the article and value your feedback.
Your point about the Earth “caring” whether humans survive or not is well taken. In the grand scheme of things, we are merely one small part of a resilient and dynamic natural system that chooses its victims indiscriminately.
On the other hand, I also hope you will accept my writing “Nature's Mayday calls” for the metaphor that it is. Here, my intent was to illustrate how the planet is giving us clear feedback that our actions are causing drastic and perhaps permanent change to natural systems. Furthermore, although you appear to believe that humans should just act however they will and face the consequences, I personally feel that we humans have a moral obligation to treat our Earth home with utmost respect. I think it is in our enlightened self-interest to protect not only those natural systems that sustain us, but also to not adversely affect the results of billions of years of wondrous evolution.
Evolutionary biologists say that a sense of morality is hard-wired into our genes. I am surprised you would lump me together with Hitler simply for writing that my own moral compass leads me to convince others that better natural resource management is a positive thing.
Finally, though you dispute my point about the Earth's inability to support an SUV lifestyle for billions, please note that this assertion has been proven empirically in several studies. There are simply not enough resources for all six billion-plus humans to enjoy our level of material consumption. Please contact me if you would like to receive more information about these studies. Thanks again for your feedback.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
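The find-plus-grep combination described above is a one-liner. Here is a minimal sketch using a throwaway directory and a hypothetical search term, "ERROR", in place of /home and a real log entry:

```shell
#!/bin/sh
# Build a small demo tree with two .log files.
dir=$(mktemp -d)
echo "ERROR: disk full" > "$dir/app.log"
echo "all systems go"   > "$dir/ok.log"

# List every .log file under $dir that contains the entry, much as
# one would with "find /home -name '*.log' ..." on a real system:
find "$dir" -name '*.log' -exec grep -l 'ERROR' {} +

rm -r "$dir"
```

Only app.log is printed; grep -l names matching files rather than matching lines, which keeps the output pipeline-friendly.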
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
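As a point of reference, cron's whole interface is the five-field crontab line. The script path and schedule below are hypothetical:

```shell
# m   h   dom mon dow   command
  30  2   *   *   *     /usr/local/bin/nightly-backup.sh >>/var/log/backup.log 2>&1
```

Anything beyond "run this command at these times", such as job dependencies, retries or cross-host coordination, is exactly where cron starts to feel limiting.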
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Tech Tip: Really Simple HTTP Server with Python
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Rogue Wave Software's Zend Server
- Doing for User Space What We Did for Kernel Space
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide