After only a few months of operation, NeoPets.com, a web site built on a Red Hat Linux/Apache platform, is already turning a profit, recording billions of page views monthly. Targeting youths aged 20 or younger, the site enables users to create and care for their own personal virtual critter known as a “NeoPet”. It also boasts a series of constantly changing “universes” complete with games, stories, contests and entertainment. According to recent figures from PC Data Online, NeoPets attracts 2.1 billion page views and 2.3 million unique users each month, each of whom stays for an average of seven hours 48 minutes, making this the stickiest site on the Web.
Based on August numbers, NeoPets ranks higher in page views than Excite, Lycos and Amazon. What's more, it engenders far more loyalty (termed stickiness) among users. The average AOL user, for example, visits for 35 minutes a month, while Yahoo users spend three hours 22 minutes. In the Gen Y market, NeoPets' total of seven hours 48 minutes trounces the competition, with ten times the page views of Disney.
Initially created in a college dorm with a “launch campaign” that consisted of sending a couple of e-mails to other virtual pet sites, the site chalked up 200 sign-ups on its first day and was soon scoring as many as 200,000 page views a day. A management and technical team was then formed to create the corporate platform needed to help NeoPets expand. They added more staff and moved its servers to Pixelgate, a Westlake Village, California-based web hosting and internet services company. “After being off-line for several days, we surpassed 600,000 page views within three days of getting back on-line,” said NeoPets Chairman and CEO Doug Dohring.
The company increased the number of Apache/Linux boxes from two to five, using single-CPU P3-600s as image servers and dual P3-600s as web servers, each with 512MB to 1GB of RAM. Continual load growth eventually pushed NeoPets' MySQL database technology to the limit. By this time, NeoPets was handling up to ten million page views a day. Reorganization again became a necessity.
The company secured the services of Web Zone Inc. of Santa Clara, California and Broomfield, Colorado-based Level 3, a multinational Tier One provider with hosting facilities in Los Angeles. This provided enough bandwidth to deal comfortably with anticipated traffic volumes. NeoPets then added yet more staff and purchased about 50 Red Hat/Apache web and image servers, two more MySQL Servers and a Sun server to run an Oracle database. Once the Oracle conversion was completed, page views soared to over 40 million a day.
The current NeoPets architecture comprises a Red Hat 6.2 and Apache front end, with a Solaris and Oracle back end. At the same time, MySQL is still used for a wide range of database operations.
Despite the introduction of Oracle, NeoPets remains one of the larger users of Apache on the Web. Though Oracle was brought in to provide a heavy-duty database, NeoPets believes that open source ultimately offers better quality and greater product reliability, and it remains committed to further expanding the robustness and capacity of PHP, Apache and MySQL as an alternative to Oracle.
“We are looking for people who can modify these open-source applications and take them to a new plateau,” said CTO Bill McCaffrey. “If we involve the right people, we believe that we can take these applications to the point where they can be used for even the largest sites on the Web.”
In anticipation of another summertime boom in site usage, NeoPets is planning to add many more web developers and open-source programmers, as well as system administrators and IT support staff.
Open source is a fine development model, but with the obvious exception of Eric Raymond it kind of sucks at PR.
Okay, let's qualify that. There are some fine companies that get mileage out of open source as a virtue, but as an editor I can tell you that there are too darn few pure, .org-type open-source projects with a PR department (we suspect that number is zero), or with much PR instinct, by which I mean they bother editors like me with interesting information about what they're up to. Sure, we get flamed to a cinder when we neglect to mention the obvious, such as early last year when we wrongly reported that Borland's InterBase was about to become the first open-source database project, earning the outrage of some PostgreSQL folks (though surprisingly few, considering). But there isn't much outreach by the growing assortment of nuts-and-bolts open-source projects that simply make something handy that a lot of others can use.
Take proxy caching, which is very handy if you've got a lot of traffic to manage—but not much of a conversation starter except for those who (for professional or other reasons) obsess about it.
As it happens there are more than a few obsessives out there, and one of them (I forget who) told me that Squid (http://www.squid-cache.org/) is the cat's pajamas of open-source proxy servers. Well, it seems there are a pile of proprietary (presumably closed-source, certainly not free) proxy servers in the world. You can get them from Lucent, Novell, IBM, Cisco, Microsoft and the other usual suspects. Their prices run from zero to six figures. Squid is at the bottom of that range. As their FAQ puts it, “You can download Squid via FTP from the primary FTP site or one of the many worldwide mirror sites. Many sushi bars also have Squid.”
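For readers who've never looked inside a cache box, a minimal squid.conf sketch gives a feel for how little it takes to get started; the 192.168.0.0/16 network and cache size here are hypothetical placeholders, not anything the Squid team recommends:

```shell
# squid.conf sketch (assumed values, adjust for your site)

# Listen for proxy requests on the conventional port
http_port 3128

# A 100MB on-disk cache using the default 16/256 directory layout
cache_dir ufs /var/spool/squid 100 16 256

# Let only the local network through, deny everyone else
acl localnet src 192.168.0.0/16
http_access allow localnet
http_access deny all
```

Point browsers at port 3128 and Squid quietly starts absorbing repeat fetches; everything else is tuning.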
The product is competitive—literally. A group called IRCache holds frequent bake-offs (which they now call cache-offs) using Web Polygraph (http://www.polygraph.ircache.net/), a benchmarking tool developed by the National Science Foundation and a bunch of those same usual suspects. The results (also on the IRCache site) for each bake/cache-off run through many pages, many tables and many graphs. Squid leads in some places and lags in others, but it runs in the thick of every race.
Perhaps the most telling results come from this level-5 post from Matthew P. Barnson on Slashdot last year:
I can personally say that the three I've had experience with, Novell's ICS caches (which comprised ten of the twenty entrants), Network Appliance's NetCache, and Squid (on Solaris, in our case) all rock. Squid 2.3-stable1 was a dream to compile, install, and configure.
When we contacted him directly, he added this about Squid: “As an outgrowth of the Harvest Project, this venerable, free-software proxy cache sets the benchmark by which all other caches are measured.... For the price, Squid kicks some serious butt!” He also has kind words for another open-source project:
Apache web server was not specifically mentioned in the bake-off, but in my experience is extremely popular for caching services because the same server that can serve your web pages from your dorm room can also speed up your web surfing.
So let's raise a glass of sake to the Squid team and invite all the other open-source and free-software developers who envy this kind of coverage to let us know what they're up to.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
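That find-plus-grep combination is a one-liner in practice. The sketch below builds a throwaway /tmp/logdemo tree as a stand-in for the /home example, since the exact entry being searched for is up to you:

```shell
# Set up a small directory tree with two sample log files
# (hypothetical stand-in for the article's /home example)
mkdir -p /tmp/logdemo/user1 /tmp/logdemo/user2
echo "ERROR: disk full" > /tmp/logdemo/user1/app.log
echo "all quiet here"   > /tmp/logdemo/user2/app.log

# The erector-set move: find locates every .log file, and grep -l
# prints only the names of the files containing the entry we want
find /tmp/logdemo -name '*.log' -exec grep -l 'ERROR' {} +
```

The same pattern works with xargs instead of -exec; the point is that each tool does one job and the shell glues them together.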
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
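For the baseline being questioned here, classic cron scheduling is just a crontab(5) table; the script paths below are hypothetical:

```shell
# crontab entries: minute, hour, day-of-month, month, day-of-week,
# then the command to run (paths here are hypothetical examples)

# Rotate logs at 2:30 a.m. every day
30 2 * * * /usr/local/bin/rotate-logs.sh

# Run a report at five past the hour, 9 a.m. to 5 p.m., Monday-Friday
5 9-17 * * 1-5 /usr/local/bin/hourly-report.sh
```

What a table like this can't express (dependencies between jobs, retries, coordination across hosts) is where the "beyond cron" question begins.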
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Google's SwiftShader Released
- Interview with Patrick Volkerding
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Tech Tip: Really Simple HTTP Server with Python
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- SuperTuxKart 0.9.2 Released
- Returning Values from Bash Functions
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide