UpFront

by Various

LJ Index, September 2010

1. Millions of active .com domain names: 87.7

2. Millions of active .net domain names: 13.1

3. Millions of active .org domain names: 8.5

4. Millions of active .info domain names: 6.4

5. Millions of active .biz domain names: 2.1

6. Millions of active .us domain names: 1.7

7. Thousands of new .com domains registered per day: 51.6

8. Thousands of new .net domains registered per day: 7.6

9. Thousands of new .org domains registered per day: 7.1

10. Thousands of new .info domains registered per day: 9.9

11. Thousands of new .biz domains registered per day: 2.2

12. Thousands of new .us domains registered per day: 2.3

13. Percent of registered domains managed by top domain registrar (GoDaddy): 30.3

14. Percent of registered domains managed by 2nd top domain registrar (Enom): 8.3

15. Percent of registered domains managed by 3rd top domain registrar (TuCows): 6.7

16. Percent of registered domains managed by 4th top domain registrar (Network Solutions): 5.7

17. Millions of domains in country with largest number of domains (US): 71.4

18. Millions of domains in country with 2nd largest number of domains (Germany): 6.2

19. Millions of domains in country with 3rd largest number of domains (UK): 4.2

20. Millions of domains in country with 4th largest number of domains (China): 3.9

1–12: domaintools.com

13–20: webhosting.info

Don't Buy Candy from the Car Salesman

If you want to find double-chocolate truffle, chances are you would shop at a place that specializes in making candy. Sure, the used-car salesman might have a jar of cheap candies he's giving away, but for serious chocolate-lovers, nothing compares to confections made by experts. The same thing is true with computer equipment. No, there aren't free jars of server blades at the used-car lot, but when you buy hardware or software, you want to buy it from someone who specializes in your operating system—in our case, Linux.

Over at the Linux Journal Web site, our very own Joe Krack keeps a handy database of vendors, systems and even sales promotions from Linux-friendly companies. They are vendors we've personally worked with, and they offer deals unique to Linux Journal readers. It's not a big list of advertisements; rather, it's a big list of products from companies we trust. Check it out over at www.linuxjournal.com/buyersguide. To be fair, I wouldn't recommend buying candy from them either. The “chips” they sell aren't chocolate chips.

The Web on the Console

Most people think “graphical interfaces” when they think of surfing the Web. And, under X11, there are lots of great programs, like Firefox or Chrome. But, the console isn't the wasteland it might seem. Lots of utilities are available for surfing the Web and also for downloading or uploading content.

Let's say you want to surf the Web and find some content. The first utility to look at is also one of the oldest, the venerable Lynx. Lynx actually was my first Web browser, running on a machine that couldn't handle X11. In its most basic form, you simply run it on the command line and give it a filename or a URL. So, if you wanted to hit Google, you would run:

lynx http://www.google.com

Lynx then asks you whether you want to accept a cookie Google is trying to set. Once you either accept or reject the cookie, Lynx loads the Web page and renders it. As you will no doubt notice, there are no images. But, all the links and the text box for entering search queries are there. You can navigate from link to link with the arrow keys. Because the layout is very simple and text-based, items are in very different locations on the screen from what you would see when using a graphical browser.

Several Lynx options are handy to know. You can pass more than one URL when you launch Lynx; it adds all of them to the session history, then renders and displays the last one. When you loaded Google above, Lynx asked whether to accept a cookie. Most sites these days use cookies, and you may not want to be prompted about every one, so use the -accept_all_cookies option to suppress those prompts. You can turn Web pages into readable text with the -dump option, which writes Lynx's rendered output to standard out, letting you save pages to a file for later viewing. Finally, you can choose which key mappings to use with the -vikeys or -emacskeys options, so the shortcut keys match your editor of choice.
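
For example, a minimal sketch that combines these options to save a readable copy of a page without cookie prompts (www.site.com and page.txt are placeholders here) might look like this:

lynx -accept_all_cookies -dump http://www.site.com > page.txt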

Lynx does have a few issues. It has a hard time with HTML table rendering, and it doesn't handle frames. So, let's look at the Links browser. Links not only works in text mode on the command line, but it also can be compiled to use a graphics display. The graphics systems supported include X11, SVGA and framebuffer. You can select one of these graphics interfaces with the option -g. Links also can write the rendered Web pages to standard output with the -dump option. If you need to use a proxy, tell Links which to use with the option -http-proxy host:port. Links also is able to deal with buggy Web servers. Several Web servers claim to be compliant with a particular HTTP version but aren't. To compensate for this, use the -http-bugs.* options. For example, -http-bugs.http10 1 forces Links to use HTTP 1.0, even when a server claims to support HTTP 1.1.
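
As a sketch of how these fit together (the proxy host and port are placeholders), dumping a page through a proxy while forcing HTTP 1.0 for a buggy server might look like this:

links -http-proxy proxy.example.com:8080 -http-bugs.http10 1 -dump http://www.site.com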

If you are looking for a strictly text replacement for the venerable Lynx, there is ELinks. ELinks supports colors, table rendering, frames, background downloading and tabbed browsing. One possibly useful option is -anonymous 1. This option disables local file browsing and downloads, among other things. Another interesting option is -lookup. When you use this, ELinks prints out all the resolved IP addresses for a given domain name.
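
For example, to print the addresses a domain resolves to, and then to browse the site in a locked-down session (the site name is a placeholder), you might run:

elinks -lookup www.site.com
elinks -anonymous 1 http://www.site.com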

Now that you can look at Web content from the command line, how can you interact with the Web? What I really mean is, how do you upload and download from the Web? Say you want an off-line copy of some content from the Web, so you can read it at your leisure by the lake where you don't have Internet access. You can use curl to do that. curl can transfer data to or from a server on the Internet using HTTP, FTP, SFTP and even LDAP. It can do things like HTTP POST, SSL connections and cookies. You can specify form name/value pairs so that the Web server thinks you are submitting a form by using the option -F name=value. One really interesting option is the ability to use multiple URLs through ranges. For example, you can specify multiple hosts with:

curl http://site.{one,two,three}.com

which hits all three sites. You can go through alphanumeric ranges with square brackets. The command:

curl http://www.site.com/text[1-10].html

downloads the files text1.html to text10.html.
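
Note that curl writes what it downloads to standard output by default. As a sketch, you could save each page in that range under its own name, or fill in the form fields mentioned earlier (the file names, field names and URLs here are hypothetical):

curl -o 'text#1.html' http://www.site.com/text[1-10].html
curl -F name=joe -F email=joe@site.com http://www.site.com/form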

What if you want a copy of an entire site for off-line browsing? The wget tool can help here. In this case, you likely will want to use the command:

wget -k -r -p http://www.site.com

The -r option recurses through the site's links starting at http://www.site.com/index.html. The -k option rewrites the downloaded files so that links from page to page are all relative, allowing you to navigate correctly through the downloaded pages. The -p option downloads all extra content on the page, such as images. This way, you can get a mirror of a site on your desktop. wget also handles proxies, cookies and HTTP authentication, along with many other conditions.
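
For a site behind basic HTTP authentication, for example, a sketch of the same mirroring command might look like this (the user name and password are placeholders):

wget --http-user=joe --http-password=secret -k -r -p http://www.site.com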

If you're uploading content to the Web, use wput. wput pushes content up to an FTP server, with an interface much like wget's.
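
A minimal sketch, assuming you have an FTP account on the server (the user name, password and paths here are placeholders):

wput index.html ftp://joe:secret@ftp.site.com/public_html/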

Now you should be able to interact with the Internet without ever having to use a graphical interface—yet another reason to keep you on the command line.

diff -u: What's New in Kernel Development

Some new documentation is available for SysFS and libudev. Alan Ott couldn't find the docs he wanted, so he wrote some of his own and put them up at www.signal11.us/oss/udev. The relationship between those tools is that SysFS presents a filesystem interface to view kernel and hardware status and to edit configuration options, while libudev presents a C library to track SysFS's changes and to modify the various configuration options. Through the years, a lot of attempts have been made to create a consistent interface to present hardware and kernel options to the user. SysFS is one of the most recent, and it seems to be the one that is gradually taking over from ProcFS, ioctls and all the older mechanisms.
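
As a quick illustration of the filesystem side, on a machine with, say, an eth0 network interface, you can read device state straight out of SysFS with ordinary tools:

cat /sys/class/net/eth0/operstate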

Jesse Barnes has been trying to give kernel panics a better chance of presenting screen output when the user has been running the X Window System. Typically, running X means a panic just won't produce visible output, which in turn means that a meaningful bug report is much harder to create. Jesse's code improves the situation in some cases, but if X has disabled the display, his patch still won't cause panic output to appear on the screen. There was not much immediate interest displayed in his patches, which could mean that Linus Torvalds and David S. Miller just haven't had a chance to look at them yet, or it could mean they think he's on the wrong track. It's cool that someone's looking into better panic output though.

The old GCC 3.x compiler is having more and more trouble compiling the Linux kernel, and folks like H. Peter Anvin are getting less and less enthused about fixing all those problems as they turn up. Recently, there was talk about just dumping support for that compiler, at least for the x86 platform. It seems clear that very few people still use GCC 3.x to compile current Linux releases, although there probably are some. But, even as one group of developers moved more in the direction of deprecating GCC 3.x and eventually abandoning support for it, another group of developers seemed to gain interest in preserving support for GCC 3.x. As Eric Dumazet put it, if there's no significant technical reason to drop support, the mere fact that GCC 3.x is “old” isn't a good enough reason, especially as the work involved in maintaining support isn't that extreme.

TmpFS has a speed issue, because if multiple threads try to access a mounted TmpFS filesystem, they run into so much lock contention that the filesystem slows down considerably. Tim Chen and various other folks implemented a “token jar” to handle the lock contention in TmpFS and saw 270% speed increases on some of their tests. Andi Kleen liked the patches and said their token-jar implementation might also be useful elsewhere in the kernel. So, it looks like upcoming kernels will include a much faster TmpFS.

Non-Linux FOSS

Hardly a day goes by without the need to compress or uncompress something. And, plenty of compression/decompression programs exist, but if you like the idea of using the same tool on multiple platforms and you like open source, you should consider PeaZip. PeaZip runs on Windows and Linux, and because it's written with Free Pascal and Lazarus, the Linux version comes in both a GTK2 (GNOME) flavor and a Qt (KDE) flavor.

PeaZip can create the standard types of compressed files/archives: ZIP, GZ and 7Z. Plus, it creates a few that you don't often see in a desktop GUI tool: BZ2 and TAR. In addition, it creates ARC, PAQ/ZPAQ, PEA, QUAD/BALZ and UPX files/archives. On the decompression side, PeaZip goes ballistic and handles (currently) 123 different archive/file types. In other words, if it's compressed/archived, it's unlikely that PeaZip won't be able to deal with it.

PeaZip is hosted on SourceForge at peazip.sourceforge.net. There are installers for Windows (32- and 64-bit). There also are RPMs and DEBs for Linux/GTK2 or Linux/Qt. And, if you're a Lazarus type, you can grab the source. PeaZip also provides localizations for dozens of other languages.

PeaZip Zipping PeaZip

LUG Startup Kit

I live in a very remote area, and the closest active LUG is a several-hour drive away. I figure my situation isn't unique, so while I begin to form a LUG in my area, I thought it would be nice to share some quick tips I've gathered about doing so (mostly from Kyle Rankin, a friend and president of the North Bay Linux Users Group, www.nblug.org). Here's my quick list of things to gather when forming a Linux Users Group:

  • People: this might seem obvious, but in all the preparations, it's easy to forget that you need at least a half-dozen people or so who are willing to show up regularly.

  • Regular time and place: most LUGs meet monthly. There is no rule about this, but monthly meetings strike a good balance, making the LUG feel dedicated without occurring so often that finding speakers (my next bullet point) becomes difficult. If possible, keep a standard meeting place as well. That way, if people miss a meeting, they don't show up at the wrong place next time.

  • A reason to come: socializing is great, but having a speaker, a demonstration, a Skype interview or anything unique to gatherings is essential. Why would we leave the comfort of our La-Z-Boy recliners when we could just banter in an IRC channel? Make it worthwhile to put on pants.

  • Refreshments: this might be just water, or it might be water and coffee. Perhaps you have donuts. The important thing is for people to have something to hold in their hands, especially during the socialization time of the meeting. Many of us are introverts, and standing in a room full of other introverts is difficult. Put a cup of coffee in people's hands, however, and they have something to do. They're no longer standing awkwardly; they're drinking coffee. Trust me, it helps.

Really, that's about it. If you are starting the LUG, you'll likely need to stand up and talk for a few minutes to welcome everyone and introduce your special guest/video/event. After an hour or hour and a half (try to stick to your scheduled time), you can adjourn the meeting. You're done; you started a LUG.

From there, many other options exist. Most LUGs have a Web site with information about their meetings. Some LUGers go to a bar after the meeting is over. (Many people don't drink alcohol, so don't make it part of your LUG meeting to go for drinks.) Some LUGs host installfests, hackfests or gaming parties. There really aren't any rules for what your group should do. It's a rather open concept. And, if you're in Northern Michigan any time after fall 2010, check out NOMLUG (www.nomlug.org). Hopefully, we'll be meeting regularly by then!

LJ Store's Featured Product of the Month: Linux Odyssey T-Shirt

Our very own Shawn Powers models our favorite T-shirt.

  • Front reads: I'm sorry Mr. Gates I'm afraid I can't do that.

  • Back reads: 2010 A Linux Odyssey (with the Linux Journal logo).

  • Regular price: $19.95.

  • Sale price: $10.00.

  • Coupon code: bluesteel.

  • Sale ends September 30, 2010.

Are You a Longtime LJ Subscriber?

The 200th issue of Linux Journal is rapidly approaching, and we'd like to take this opportunity for everyone to learn a bit more about some of the people who've helped make LJ possible for so many years. If you're a longtime subscriber, please send a message to ljeditor@linuxjournal.com by September 10, 2010, and include the following information (we reserve the right to print your responses):

  • How long you've been a subscriber.

  • Why you subscribe to LJ and/or what you like most about LJ.

  • A brief bio of yourself.

  • A photo of yourself.

  • Your postal address. (Your address will not be published or used for any purpose other than to send you a T-shirt if you win.)

  • Your shirt size.

We will randomly select ten participating subscribers and send the “winners” a free T-shirt.

Cloudy Tech Tips

One of the things I love about the Linux community is how willing everyone is to share knowledge. Whether it's a software suggestion, a hardware review or just a quick tech tip, we love to share. Most of the tech tips we receive here at Linux Journal are command-line tips. Those are great, but what about all those Web tools you might use? Just because a Web-based tip works on more than just Linux doesn't mean it's not a great tip.

For example, I love the STEEP.IT Web site. Like the old tea-timer applications we have used in the past, this simple Web-based application helps me get perfect green tea instead of over-steeped yuckiness. Just visit steep.it/green, and the counter starts for a perfect cup of green tea.

Do you have any handy tech tips you'd like to share? They can be Web-based, GUI-based, command-line-based, or even just tips like, “don't eat yellow snow”. Send your tech tips to techtips@linuxjournal.com, and if we print your submission in the magazine, we'll send you a free T-shirt! Be warned, however, it's very unlikely we'll print any tips about eating colored snow.

They Said It

Their management made some very bad decisions that damaged their business and allowed us to buy them for a bargain price....The underlying engineering teams are so good, but the direction they got was so astonishingly bad that even they couldn't succeed. Really great blogs do not take the place of great microprocessors. Great blogs do not replace great software. Lots and lots of blogs does [sic] not replace lots and lots of sales.

—Larry Ellison talking about Sun Microsystems

Those days are dead and gone and the eulogy was delivered by Perl.

—Rob Pike in a Slashdot interview responding to a question related to having one tool do one job well.

Eternity is a very long time, especially towards the end.

—Woody Allen

Computers are magnificent tools for the realization of our dreams, but no machine can replace the human spark of spirit, compassion, love, and understanding.

—Louis Gerstner, CEO, IBM

Get your feet off my desk, get out of here, you stink, and we're not going to buy your product.

—Joe Keenan, President of Atari, in 1976 responding to Steve Jobs' offer to sell him rights to the new personal computer he and Steve Wozniak developed.

To be a nemesis, you have to actively try to destroy something, don't you? Really, I'm not out to destroy Microsoft. That will just be a completely unintentional side effect.

—Linus Torvalds

Windows is just DOS in drag.

—Anonymous

LinuxJournal.com—Under the Hood

Since this issue focuses on Web development, I'd like to take the opportunity to provide a glimpse behind the scenes at LinuxJournal.com. Many of you already know that LinuxJournal.com is largely made possible by the Drupal platform, aka my favorite open-source project. Many also have asked to learn more about the specifics of our Drupal setup, and to that end, I'd like to share some of my favorite Drupal modules that power LinuxJournal.com:

  1. Views: this one is pretty obvious to anyone who has used Drupal at all, but stating the obvious never hurts anyone. Views is the absolutely essential query-building module. In my opinion, you cannot build a Drupal site without this module (drupal.org/project/views).

  2. CCK: again, I state the obvious, but the content construction kit allows you to add fields to content, allowing you to build custom types of content for almost any data imaginable. I can't imagine a Drupal world without CCK (drupal.org/project/cck).

  3. Flag: the Flag module is brilliant in its simplicity—a simple yes or no, on or off, 1 or 0 to almost anything on your Drupal site. The possibilities are endless (drupal.org/project/flag).

  4. Views Attach: what could be better than attaching a list of data to a user or individual piece of content? In my experience, the answer to “How do I display ____?” frequently involves Views Attach. I highly recommend giving this little module a spin to see if it works for you (drupal.org/project/views_attach).

  5. Mollom: last, but certainly not least, Mollom is the module that keeps me relatively sane. Spam makes LinuxJournal.com less cool and must, therefore, be destroyed. I hate spam. Mollom gets rid of spam. Thanks, Mollom (drupal.org/project/mollom)!
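
If you want to try these modules on your own Drupal site, here's a minimal sketch for grabbing all five at once, assuming you have the Drush command-line tool installed (the project names match the URLs above):

drush dl views cck flag views_attach mollom

From there, each module still needs to be enabled on your site's module administration page.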

These five modules are some of my favorites, but there are lots more where they came from, so I hope you'll visit LinuxJournal.com to read more about these and other great modules, as well as other tidbits from my adventures in Drupaling. Just go to LinuxJournal.com and search “Drupal”. See you there!
