I tripped across Shawn Powers' video titled “An Open Video to
HP” on YouTube, and
it occurred to me that the market share that Windows enjoys is actually very
misleading in that there are a lot of Linux people who buy machines that
come pre-installed with Windows and then toss out the Windows. That's what
I did, and I know of others. So my point is (and I'm sure you probably
thought of this already) that the Windows market share may not be as big
as companies like HP are being led to believe. It would be nice if
companies could be forced by law to sell machines without a pre-installed
OS anywhere they market their machines.
I think you have a very good point. Sadly, I think OEM manufacturers get a significant kickback from the “crapware” they pre-install with Windows. My guess is that offsets the price of Windows for the OEM manufacturers, so they have little motivation to sell them without Windows. You're absolutely correct though; I have many computers with Windows license stickers on them that are running Linux. The numbers are probably skewed greatly regarding the installed base.—Ed.
In his December 2008 article “Samba Security, Part II”, Mick Bauer wrestles uneasily with sudo: “Note the sudo, necessary for Ubuntu. On other distributions, su to root...and omit the sudo that [begins each line]....” I've seen similar laments in other forums.
On systems like Ubuntu and Mac OS X, to avoid typing exhaustion and disruption to normal trains of thought, I “su to root” with:
I haven't read Linux Journal for a while. Perhaps I'm missing something.
Mick Bauer replies: If my writing style was awkward in this case, I apologize, but in fact, I'm quite comfortable with Ubuntu's requiring sudo for privileged commands. Habitually using root shells (including, I'm afraid, via sudo su) is a good way to make mistakes with an avoidably severe impact.
The inconvenience of having to precede individual commands with sudo is significantly offset by the fact that if you issue several in a row within a short period of time, you'll be prompted for your password only for the first command in the sequence.
So again, I'd be the last to “lament” about this. On the contrary, I think the Ubuntu team has made a very sensible design choice with its sudo policy!
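For readers who do find the repeated prompts intrusive, the grace period Mick describes is tunable. A sudoers fragment along these lines adjusts it (the 15-minute value is purely illustrative; always edit with visudo):

```
# /etc/sudoers fragment -- cache credentials for 15 minutes
# (sudo's shipped default is 5 minutes):
Defaults    timestamp_timeout=15
```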
When is Linux Journal going to change its name to Ubuntu Journal? For about two years now, I've seen a gradual migration from covering Linux in general to covering Ubuntu specifically. It's all well and good that most, if not all, of your writers use Ubuntu, but the rest of the community uses different distributions. I, for one, use OpenSUSE and have for well over five years. In fact, according to distrowatch.org, the second largest distribution in terms of “registered users” is OpenSUSE, and yet most of the mention I've been able to find regarding it feels like an afterthought.
I have no interest in switching to Ubuntu, Debian or any such distro. Why then do I have to feel like a secondary target in any article I read within Linux Journal? Worse yet, there are sidebars that seem to ignore completely the fact that other distros exist (see Mick Bauer's sidebar about regenerating the smb.conf file in Ubuntu/Debian in the December 2008 issue).
Perhaps it is time to find another source of Linux information—one
that pertains to Linux in general and not what one magazine thinks I should use.
I understand your frustration. One of the difficulties with producing content that benefits most people is that procedures vary so widely from distribution to distribution. I'm guilty of often using Ubuntu as an example too. Sure, part of it is because it's the most popular distribution right now, but for me, it's also the one with which I'm most familiar.
We have had discussions internally about trying to make our content as distro-neutral as possible, so perhaps you'll see at least a slight shift in future issues. At least one of our staff members is a die-hard OpenSUSE fan, so you're certainly not alone. Thanks for the comment; it's important to be reminded of such things.—Ed.
I could not believe my eyes when I received my [February 2009] copy of Linux
Journal and caught sight of the cover. I wanted to ask it, “Is that a penguin in your
pocket or are you really happy to see me?” Going for a different
demographic? I am not insulted, but I almost choked on my coffee I was
laughing so hard!
Bill Childers replies: They say the camera can add ten pounds. Well, just like in First Life, cameras in Second Life can make objects appear larger than they are.
As usual, Mick Bauer's article, “Secured Remote Desktop/Application Sessions” in the September 2008 issue was excellent overall. If only I could have read it about three years ago, it would've saved me a lot of time researching all this stuff myself.
I noticed only one important detail that wasn't addressed. When using a graphical environment provided by a distant Linux or UNIX box, one frequently has performance issues, as the X window protocol isn't very compact. RFB is a lot better, but there's still a lot of data to transfer, and it's not compressed.
Of course, because none of it is compressed, there's a fairly simple solution: tell the ssh process we're tunneling through to compress the data stream, by giving it a -C command-line argument. This may not be needed when remotely administering your home Linux box from your laptop, hard-wired to your home gigabit Ethernet, or even when using your 802.11n wireless network. But when you're in the US and your server is in Australia (yes, I've done this), or even if you're just managing a server on the opposite coast of the US, the cost of compressing and uncompressing your data packets is going to be a lot less than the cost of getting the uncompressed data across that pipe.
Advanced users can modify the gzip compression level using the GZIP environment variable. In my experience, -9 works best on very fast machines and intercontinental packets (when I was managing that GUI-only application in Australia, the difference between -8 and -9 actually was noticeable). On the other hand, unless you have a really slow link, when talking to the data center in the same building you're in, you will probably get the best speed from -1, if compression is even a net win.
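The writer's advice can be sketched as follows; the hostname is a placeholder, and the local gzip comparison merely illustrates the level trade-off being described (ssh -C itself compresses with zlib internally):

```shell
# Tunnel a remote X session (or a VNC port-forward) with
# ssh-level compression; remote.example.com is a made-up host:
#   ssh -C -X user@remote.example.com

# The level trade-off can be seen locally with gzip itself.
# Generate some highly compressible sample text:
yes "the quick brown fox jumps over the lazy dog" | head -n 5000 > sample.txt

# -1 favours speed, -9 favours ratio; on repetitive data like
# this, -9 should produce output no larger than -1 does:
gzip -1 -c sample.txt | wc -c
gzip -9 -c sample.txt | wc -c

rm -f sample.txt
```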
Thanks also for your recent articles on Samba security [see Mick's Paranoid
Penguin column in the November 2008, December 2008, January 2009 and
February 2009 issues for the Samba articles]. About four
months ago, my wife's boss gave her a Windows box for home use. As a
result, I had a sudden interest in offering some Windows services from
my home Linux server, and your series was very timely.
Mick replies: Thanks so much for your kind words and your important compression tips! You're right, I completely overlooked the possibility of needing compression, which is so easily achieved with SSH and GZIP.
In the article “When Disaster Strikes: Hard Drive Crashes”,
Kyle Rankin advises, as a
last resort when fsck can't get your files back, using strings to find your
text data. Before doing that, I would suggest you try the great photorec tool
(www.cgsecurity.org/wiki/PhotoRec). It originally was written to get
photos back from dead Flash cards by looking for JPEG headers, but it now can
identify hundreds of different file types on various filesystems.
Kyle Rankin replies: Thanks for the tip!
Regarding the “Slice and Dice PDF” Tech Tip in the February 2009 issue of LJ [page 40], I would like to point out that PDF slicing and more can be done using pdftk, without converting to PS and back to PDF. To do the same operation as the example in the tech tip, you need to issue the command:
pdftk afile.pdf cat 11-14 output file-p11-14.pdf
I think this is a little easier.
I've been an LJ reader on and off since 1996. I've had my current subscription for the past few years now, and I'm noticing with dismay the steady decline in technical articles on Linux internals. My favourite column used to be Kernel Korner. My current favourite is, perhaps unsurprisingly, the woefully short “diff -u”. As tracking Linux core development becomes more of a full-time job, those of us who can't afford the requisite time investment have to rely ever more on sources like LJ to avoid reaching the point where our systems are big black boxes to which we sacrifice the occasional goat in the hope that it'll appease the binary powers that be. For the sake of all those goats, would you consider carrying more articles akin to LWN's “Kernel Development” section (currently my only reliable source of good technical Linux news)? It's not that I think browser comparisons, reviews of the latest desktops' new features and so on are a waste of ink; it's just that that information is readily available elsewhere on-line for those who seek it, whereas with core Linux topics, not so much. I'm asking for a more balanced magazine, one equally suited to the new multimedia-savvy, Web 2.0-type users who don't know (or care) what a bootloader is, and to the vim + gcc + xterm users who don't know (or care) how to access Twitter's newest features using the foo API. I realise this is generally easier said than done.
Thanks, and much respect for your dedication to the cause for all these
years.
Thanks for your letter. It's a constant challenge to balance between articles that appeal to our super-techie crowd, and those that benefit the more desktop-oriented users. Because Linux is really beginning to show itself in less niche environments (Netbooks, mobile devices and so on), we do need to make sure those folks feel Linux Journal is for them too. That said, we'll make sure our hard-core geeks don't get left behind. You'll probably see some variance between issues depending on the focus for that month, but we'll keep trying to balance our content so it appeals to our entire readership. Be sure to check out our upcoming Kernel Capers issue (August 2009).—Ed.
Regarding Dave Taylor's “Counting Words and Letters” article in the March 2009 issue: there are some options to tr that can be used to simplify Dave's script:
cat *.txt | tr '[:upper:]' '[:lower:]' | tr -cs '[:alpha:]' '\n' | sort | uniq -c | sort -nr | head
tr accepts the '\n' argument. Also, the complement and squeeze options
replace two calls to tr and one to grep.
Plus, note that this eliminates counting spaces, which erroneously shows up
as the second most-popular word in Dave's script.
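The pipeline above can be exercised on a small inline sample (the sentence here is made up for illustration):

```shell
# Count word frequencies, case-insensitively, in a sample sentence:
printf 'The cat and the dog and the cat\n' |
    tr '[:upper:]' '[:lower:]' |   # fold everything to lowercase
    tr -cs '[:alpha:]' '\n' |      # non-letters -> newlines, squeezed
    sort | uniq -c | sort -nr | head
# "the" (3 occurrences) sorts to the top.
```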
Have a photo you'd like to share with LJ readers? Send your submission to email@example.com. If we run yours in the magazine, we'll send you a free T-shirt.