I tripped across Shawn Powers' video titled “An Open Video to
HP” on YouTube, and
it occurred to me that the market share that Windows enjoys is actually very
misleading in that there are a lot of Linux people who buy machines that
come pre-installed with Windows and then toss out the Windows. That's what
I did, and I know of others. So my point is (and I'm sure you probably
thought of this already) that the Windows market share may not be as big
as companies like HP are being led to believe. It would be nice if
companies could be forced by law to sell machines without a pre-installed
OS anywhere they market their machines.
I think you have a very good point. Sadly, I think OEM manufacturers get a significant kickback from the “crapware” they pre-install with Windows. My guess is that offsets the price of Windows for the OEM manufacturers, so they have little motivation to sell them without Windows. You're absolutely correct though; I have many computers with Windows license stickers on them that are running Linux. The numbers are probably skewed greatly regarding the installed base.—Ed.
In his December 2008 article “Samba Security, Part II”, Mick Bauer wrestles uneasily with sudo: “Note the sudo, necessary for Ubuntu. On other distributions, su to root...and omit the sudo that [begins each line]....” I've seen similar laments in other forums.
On systems like Ubuntu and Mac OS X, to avoid typing exhaustion and disruption to normal trains of thought, I “su to root” with sudo su.
I haven't read Linux Journal for a while. Perhaps I'm missing something.
Mick Bauer replies: If my writing style was awkward in this case, I apologize, but in fact, I'm quite comfortable with Ubuntu's requiring sudo for privileged commands. Habitually using root shells (including, I'm afraid, via sudo su) is a good way to make mistakes with an avoidably severe impact.
The inconvenience of having to precede individual commands with sudo is significantly offset by the fact that if you issue several in a row within a short period of time, you'll be prompted for your password only for the first command in the sequence.
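That grace period is also tunable. A minimal sudoers sketch (the 15-minute value is purely illustrative; the file must be edited via visudo):

```
# /etc/sudoers -- extend sudo's password grace period to 15 minutes
# (always edit this file with visudo, never directly)
Defaults timestamp_timeout=15
```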
So again, I'd be the last to “lament” about this. On the contrary, I think the Ubuntu team has made a very sensible design choice with its sudo policy!
When is Linux Journal going to change its name to Ubuntu Journal? For about two years now, I've seen a gradual migration from covering Linux in general to covering Ubuntu specifically. It's all well and good that most, if not all, of your writers use Ubuntu, but the rest of the community uses different distributions. I, for one, use OpenSUSE and have for well over five years. In fact, according to distrowatch.org, the second largest distribution in terms of “registered users” is OpenSUSE, and yet most of the mention I've been able to find regarding it feels like an afterthought.
I have no interest in switching to Ubuntu, Debian or any such distro. Why then do I have to feel like a secondary target in any article I read within Linux Journal? Worse yet, there are sidebars that seem to ignore completely the fact that other distros exist (see Mick Bauer's sidebar about regenerating the smb.conf file in Ubuntu/Debian in the December 2008 issue).
Perhaps it is time to find another source of Linux information—one
that pertains to Linux in general and not what one magazine thinks I should use.
I understand your frustration. One of the difficulties with producing content that is beneficial to most people is that the procedures vary so widely from distribution to distribution. I'm guilty of using Ubuntu as an example often too. Sure, part of it is because it's the most popular distribution right now, but for me, it's also the one with which I'm most familiar.
We have had discussions internally about trying to make our content as distro-neutral as possible, so perhaps you'll see at least a slight shift in future issues. At least one of our staff members is a die-hard OpenSUSE fan, so you're certainly not alone. Thanks for the comment; it's important to be reminded of such things.—Ed.
I could not believe my eyes when I received my [February 2009] copy of Linux
Journal and caught sight of the cover. I wanted to ask it, “Is that a penguin in your
pocket or are you really happy to see me?” Going for a different
demographic? I am not insulted, but I almost choked on my coffee I was
laughing so hard!
Bill Childers replies: They say the camera can add ten pounds. Well, just like in First Life, cameras in Second Life can make objects appear larger than they are.
As usual, Mick Bauer's article, “Secured Remote Desktop/Application Sessions” in the September 2008 issue was overall excellent. If only I could have read it about three years ago, it would've saved me a lot of time researching all this stuff myself.
I noticed only one important detail that wasn't addressed. When using a graphical environment provided by a distant Linux or UNIX box, one frequently has performance issues, as the X window protocol isn't very compact. RFB is a lot better, but there's still a lot of data to transfer, and it's not compressed.
Of course, because none of it is compressed, there's a fairly simple solution: tell the ssh process we're tunneling through to compress the data stream, by giving it a -C command-line argument. This may not be needed when remotely administering your home Linux box from your laptop, hard-wired to your home gigabit Ethernet, or even when using your 802.11n wireless network. But when you're in the US and your server is in Australia (yes, I've done this), or even if you're just managing a server on the opposite coast of the US, the cost of compressing and uncompressing your data packets is going to be a lot less than the cost of getting the uncompressed data across that pipe.
For the advanced user, one can modify the gzip compression level using the GZIP environment variable. In my experience, -9 works best on very fast machines and intercontinental packets (when I was managing that GUI-only application in Australia, the difference between -8 and -9 actually was noticeable). On the other hand, unless you have a really slow link, when talking to the data center in the same building you're in, you will probably get the best speed from -1, if compression is even a net win.
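For what it's worth, the -C switch can also be made permanent per host in ~/.ssh/config, so you don't have to remember it each time. A sketch, with the host names as placeholders:

```
# ~/.ssh/config -- always compress traffic to the distant box
Host sydney
    HostName server.example.au
    Compression yes
```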
Thanks also for your recent articles on Samba security [see Mick's Paranoid
Penguin column in the November 2008, December 2008, January 2009 and
February 2009 issues for the Samba articles]. About four
months ago, my wife's boss gave her a Windows box for home use. As a
result, I had a sudden interest in offering some Windows services from
my home Linux server, and your series was very timely.
Mick replies: Thanks so much for your kind words and your important compression tips! You're right, I completely overlooked the possibility of needing compression, which is so easily achieved with SSH and GZIP.
In the article “When Disaster Strikes: Hard Drive Crashes”,
Kyle Rankin advises, as a
last resort when fsck can't get your files back, using strings to find your
text data. Before doing that, I would suggest you try the great photorec tool
(www.cgsecurity.org/wiki/PhotoRec). It originally was written to get
photos back from dead Flash cards by looking for JPEG headers, but it now can
identify hundreds of different file types on various filesystems.
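For comparison, the strings fallback the article describes amounts to something like the following sketch (the image path and search phrase are placeholders; run it against a disk image, not a mounted disk):

```shell
# Pull printable text out of a raw disk image and keep the lines
# matching a phrase you remember from the lost file
strings /path/to/disk.img | grep -F 'quarterly report' > recovered.txt
```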
Kyle Rankin replies: Thanks for the tip!
Regarding the “Slice and Dice PDF” Tech Tip in the February 2009 issue of LJ [page 40], I would like to point out that PDF slicing and more can be done using pdftk, without converting to PS and back to PDF. To do the same operation as the example in the tech tip, you need to issue the command:
pdftk afile.pdf cat 11-14 output file-p11-14.pdf
I think this is a little easier.
I've been an LJ reader on and off since 1996. I've had my current subscription for the past few years now, and I'm noticing with dismay the steady decline in technical articles on Linux internals. My favourite column used to be Kernel Korner. My current favourite is, perhaps unsurprisingly, the woefully short “diff -u”. As tracking Linux core development is becoming more of a full-time job, those of us who can't afford the requisite time investment have to rely ever more on sources like LJ to avoid reaching the point where our systems are big black boxes to which we sacrifice the occasional goat in the hope that it'll appease the binary powers that be. For the sake of all those goats, would you consider carrying more articles akin to LWN's “Kernel Development” section (currently my only reliable source of good technical Linux news)? It's not that I think browser comparisons, reviews of the latest desktops' new features and so on are a waste of ink, just that the information is readily available elsewhere on-line for those who seek it, whereas with core Linux topics, not so much. I'm asking for a more balanced magazine, equally suited to the new multimedia-savvy, Web 2.0-type users who don't know (or care) what a bootloader is, as it is to the vim + gcc + xterm users who don't know (or care) how to access Twitter's newest features using the foo API. I realise this is generally easier said than done.
Thanks, and much respect for your dedication to the cause for all these years.
Thanks for your letter. It's a constant challenge to balance between articles that appeal to our super-techie crowd, and those that benefit the more desktop-oriented users. Because Linux is really beginning to show itself in less niche environments (Netbooks, mobile devices and so on), we do need to make sure those folks feel Linux Journal is for them too. That said, we'll make sure our hard-core geeks don't get left behind. You'll probably see some variance between issues depending on the focus for that month, but we'll keep trying to balance our content so it appeals to our entire readership. Be sure to check out our upcoming Kernel Capers issue (August 2009).—Ed.
Regarding Dave Taylor's “Counting Words and Letters” article in the March 2009 issue: there are some options to tr that can be used to simplify Dave's script:
cat file.txt | tr '[:upper:]' '[:lower:]' | tr -cs '[:alpha:]' '\n' | sort | uniq -c | sort -nr | head
tr accepts the '\n' argument. Also, the complement and squeeze options
replace two calls to tr and one to grep.
Plus, note that this eliminates counting spaces, which erroneously shows up
as the second most-popular word in Dave's script.
Have a photo you'd like to share with LJ readers? Send your submission to email@example.com. If we run yours in the magazine, we'll send you a free T-shirt.
Practical Task Scheduling Deployment
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
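That log-searching combination might look something like this sketch (the path and pattern are only illustrative):

```shell
# find locates every .log file under /home; grep then searches
# each one, printing the filename alongside every matching line
find /home -name '*.log' -exec grep -H 'ERROR' {} +
```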
Cron traditionally has been considered another such a tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high-availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.