Google vs. AllTheWeb
There used to be a debate about which search engine was best. And maybe there still is, but we haven't been hearing much about it because Google is pretty much it. Even Yahoo uses Google. The situation is typified by these remarks posted by Jason Kottke the other day at Kottke.org: "Google has been down for most of the day (for me, at least), so I had to use, ugh, Altavista to search for something earlier. It's the first time I'd used something other than Google in more than a year, and it took me about 3 times as long as normal to find what I was looking for. Google is useful enough that I would pay a $5-8 subscription fee per month for access to it. Google is the default command-line interface to the Web...and well worth paying for."
Now there's a pull-quote for you: "default command-line interface to the Web". And maybe that's what we should expect from a well-funded runaway hack by Linux weenies (who nonetheless have a policy of patenting their software).
When you're the de facto portal for searching everything on the Web, you don't need to do a lot of PR. So Google doesn't. But they're certainly glad to share information when asked, which is what happened when I asked Google's VP of Corporate Communications, Cindy McCaffrey, for a few up-to-date facts about the company. Here's some of what she gave me:
Data centers: 4
Linux computers: >10,000
Searches per day: >150 million
Index of Web pages: >1.6 billion
Image base: >330 million
Usenet messages: >650 million (going back >5yrs)
Language subsets in the index: 28
International domain sites: 23
PDFs: >22 million
Included in searches by file type: wk1, wk2, wk3, wk4, wk5, wki, wks, wku, mw, xls, ppt, doc, wps, wdb, wri, rtf, ans, txt
They also have maps, phone directories, dictionary definitions, Web page translation... the list just keeps growing.
Fast Search and Transfer ASA is a Norwegian company with offices in the US and elsewhere. Their original and persistent goal has been to build the world's largest and deepest search engine. Early on they partnered with Dell and Lycos, which ultimately employed FAST engines for searching the Web, images, multimedia and everything else.
And now FAST has rebranded its site as "AllTheWeb", with the tagline "all the web. all the time". And they're doing some aggressive PR. Normally I resist that kind of thing, but I've been warming to these Norwegian guys ever since I started hearing from them, mostly because they believe they should be no less legit to the community than Google. Their engines run on FreeBSD and were developed on FreeBSD and Linux machines. In fact, FAST's first engine, FTPsearch, was developed under the GPL. You can still download the GPL version of that software at ftp://ftpsearch.ntnu.no/pub/ftpsearch/. Search results are served by Apache and PHP.
I was also told that some of the same folks were involved in PHP's development for a long time, and that many of FAST's R&D people in Norway come from one UNIX-oriented computer club at the university in Trondheim. It's called "Programvareverkstedet," or PVV.
Whether it's merit, PR or both, AllTheWeb.com is clearly getting some mojo going. A few days ago Kevin Elliot at About.com wrote, "for searches related to news and current events, it blows the conventional wisdom about Google right out of the water". There's more positive spin at SearchDay, Pandia, Research Buzz and the company's own press release list.
I just ran a quick test of the two services, comparing the raw result counts each returned for a handful of queries. The last query was "Geeks on the Half Shell".
That last one was a real test, because it referred to a real piece that's been up on both the old and the new LJ site since November 7.
So here's a PR lesson for the AllTheWeb folks. If you're going to send out press releases to editors bragging about how fast you crawl news sites, at least crawl the ones you're pitching.
That said, I've been an AllTheWeb user since it started, and I still use their image searches as much as I use Google's. If you're in heavy search mode, combine them with AND logic, not OR: use both.
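That AND-logic habit is easy to script. Here's a minimal sketch that builds the same quoted-phrase query as a URL for each engine, so you can open both results pages side by side. The endpoints and query-parameter names (`q` for Google, `query` for AllTheWeb) are assumptions based on each site's search form at the time; check the forms yourself for the real ones.

```python
from urllib.parse import urlencode

# Assumed search endpoints and parameter names -- verify against
# each engine's own search form before relying on these.
ENGINES = {
    "Google": ("http://www.google.com/search", "q"),
    "AllTheWeb": ("http://www.alltheweb.com/search", "query"),
}

def search_urls(phrase):
    """Build one search URL per engine for the same exact phrase."""
    urls = {}
    for name, (base, param) in ENGINES.items():
        # Wrap the phrase in double quotes so each engine treats it
        # as an exact match; urlencode handles the URL escaping.
        urls[name] = base + "?" + urlencode({param: '"%s"' % phrase})
    return urls

urls = search_urls("Geeks on the Half Shell")
for name, url in urls.items():
    print(name, url)
```

From there it's one step to feed both URLs to a browser, or to fetch both pages and compare the reported result counts.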
Doc Searls is Senior Editor of Linux Journal.