After only a few months of operation, NeoPets.com, a web site built on a Red Hat Linux/Apache platform, is already turning a profit, recording billions of page views monthly. Targeting youths aged 20 or younger, the site enables users to create and care for their own personal virtual critter known as a “NeoPet”. It also boasts a series of constantly changing “universes” complete with games, stories, contests and entertainment. According to recent figures from PC Data Online, NeoPets attracts 2.1 billion page views and 2.3 million unique users each month, each of whom stays for an average of 7 hours 48 minutes, making it the stickiest site on the Web.
Based on August numbers, NeoPets ranks higher in page views than Excite, Lycos and Amazon. What's more, it engenders far more loyalty (termed stickiness) among users. The average AOL user, for example, visits for 35 minutes a month, while Yahoo users spend three hours 22 minutes. In the Gen Y market, NeoPets' total of seven hours 48 minutes trounces the competition, with ten times the page views of Disney.
Initially created in a college dorm with a “launch campaign” that consisted of sending a couple of e-mails to other virtual pet sites, the site chalked up 200 sign-ups on its first day and was soon scoring as many as 200,000 page views a day. A management and technical team was then formed to create the corporate platform needed to help NeoPets expand. They added more staff and moved its servers to Pixelgate, a Westlake Village, California-based web hosting and internet services company. “After being off-line for several days, we surpassed 600,000 page views within three days of getting back on-line,” said NeoPets Chairman and CEO Doug Dohring.
The company increased the number of Apache/Linux boxes from two to five, using single-CPU P3-600s as image servers and dual P3-600s as web servers, each with 512MB to 1GB of RAM. Continual load growth eventually pushed NeoPets' MySQL database to its limit. By this time, NeoPets was serving up to ten million page views a day, and reorganization again became a necessity.
The company secured the services of Web Zone Inc. of Santa Clara, California and Broomfield, Colorado-based Level 3, a multinational Tier One provider with hosting facilities in Los Angeles. This provided enough bandwidth to deal comfortably with anticipated traffic volumes. NeoPets then added yet more staff and purchased about 50 Red Hat/Apache web and image servers, two more MySQL servers and a Sun server to run an Oracle database. Once the Oracle conversion was completed, page views soared to over 40 million a day.
The current NeoPets architecture comprises a Red Hat 6.2 and Apache front end, with a Solaris and Oracle back end. At the same time, MySQL is still used for a wide range of database operations.
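A front end split between dedicated image servers and web servers, as described above, was typically expressed in Apache 1.3-era configuration. The fragment below is a minimal sketch of what a dedicated image server's httpd.conf might contain; the hostname, paths and cache lifetime are illustrative assumptions, not NeoPets' actual configuration:

```apache
# httpd.conf fragment for a dedicated static-image server
# (hostname and paths are hypothetical examples)
ServerName images.example.com
DocumentRoot /var/www/images

# Static images benefit from persistent connections...
KeepAlive On
MaxKeepAliveRequests 100

# ...and from long client-side caching via mod_expires,
# which keeps repeat requests off the servers entirely
ExpiresActive On
ExpiresDefault "access plus 7 days"
```

Keeping images on separate, single-CPU boxes like this lets the heavier dual-CPU web servers spend their cycles on dynamic PHP pages rather than on shoveling static files.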
Despite the introduction of Oracle, NeoPets remains one of the larger users of Apache on the Web. Though Oracle had to be brought in to provide a heavy-duty database, NeoPets believes that open source ultimately offers better quality and greater product reliability, and it remains committed to expanding the robustness and capacity of PHP, Apache and MySQL as an alternative to Oracle.
“We are looking for people who can modify these open-source applications and take them to a new plateau,” said CTO Bill McCaffrey. “If we involve the right people, we believe that we can take these applications to the point where they can be used for even the largest sites on the Web.”
In anticipation of another summertime boom in site usage, NeoPets is planning to add many more web developers and open-source programmers, as well as system administrators and IT support staff.
Open source is a fine development model, but with the obvious exception of Eric Raymond it kind of sucks at PR.
Okay, let's qualify that. There are some fine companies that get mileage out of open source as a virtue, but as an editor I can tell you that there are too darn few pure .org-type open-source projects with a PR department (we suspect that number is zero), or with much PR instinct, by which I mean they bother editors like me with interesting information about what they're up to. Sure, we get flamed to a cinder when we neglect to mention the obvious, such as early last year when we wrongly reported that Borland's InterBase was about to become the first open-source database project, earning the outrage of some PostgreSQL folks (though surprisingly few, considering). But there isn't much outreach by the growing assortment of nuts-and-bolts open-source projects that simply make something handy that a lot of others can use.
Take proxy caching, which is very handy if you've got a lot of traffic to manage—but not much of a conversation starter except for those who (for professional or other reasons) obsess about it.
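For the unobsessed, the core idea is simple enough to sketch in a few lines: keep a copy of each response, keyed by URL, and serve that copy again until it goes stale. The toy Python illustration below shows just that kernel; a real cache like Squid also honors Cache-Control headers, revalidates with the origin server and manages on-disk storage:

```python
import time

class ToyCache:
    """Toy in-memory cache: fetch a URL once, reuse the copy until it expires."""

    def __init__(self, fetch, ttl=60):
        self.fetch = fetch      # function that retrieves a URL from the origin
        self.ttl = ttl          # seconds an entry stays fresh
        self.store = {}         # url -> (body, expiry time)
        self.hits = self.misses = 0

    def get(self, url):
        entry = self.store.get(url)
        if entry and entry[1] > time.time():
            self.hits += 1      # fresh copy on hand: no trip to the origin
            return entry[0]
        self.misses += 1        # missing or stale: fetch and re-cache
        body = self.fetch(url)
        self.store[url] = (body, time.time() + self.ttl)
        return body

# The second request for the same URL never touches the origin.
cache = ToyCache(fetch=lambda url: "contents of " + url)
cache.get("http://www.example.com/")
cache.get("http://www.example.com/")
print(cache.hits, cache.misses)   # -> 1 1
```

Multiply that hit counter by a few million users fetching the same popular pages, and you can see why a busy network puts a cache between its users and the Web.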
As it happens there are more than a few obsessives out there, and one of them (I forget who) told me that Squid (http://www.squid-cache.org/) is the cat's pajamas of open-source proxy servers. Well, it seems there are a pile of proprietary (presumably closed-source, certainly not free) proxy servers in the world. You can get them from Lucent, Novell, IBM, Cisco, Microsoft and the other usual suspects. Their prices run from zero to six figures. Squid is at the bottom of that range. As their FAQ puts it, “You can download Squid via FTP from the primary FTP site or one of the many worldwide mirror sites. Many sushi bars also have Squid.”
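Once downloaded and built, getting a basic Squid 2.x cache going mostly comes down to a handful of lines in squid.conf. The fragment below is a minimal sketch; the port is Squid's usual default, but the cache sizes and network range are illustrative values you would tune for your own site:

```
# squid.conf fragment -- sizes and addresses are illustrative
http_port 3128                                # port clients point their browsers at
cache_mem 64 MB                               # memory reserved for hot objects
cache_dir ufs /var/spool/squid 1000 16 256    # ~1GB on-disk cache, 16x256 subdirs

# Only let our own network use the cache
acl localnet src 192.168.0.0/16
http_access allow localnet
http_access deny all
```

With that in place, browsers configured to use port 3128 on the cache box get their popular pages served locally instead of across the wire.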
The product is competitive—literally. A group called IRCache holds frequent bake-offs (which they now call cache-offs) using Web Polygraph (http://www.polygraph.ircache.net/), a benchmarking tool developed by the National Science Foundation and a bunch of those same usual suspects. The results (also on the IRCache site) for each bake/cache-off run through many pages, many tables and many graphs. Squid leads in some places and lags in others, but it runs in the thick of every race.
Perhaps the most telling results come from this level-5 post from Matthew P. Barnson on Slashdot last year:
I can personally say that the three I've had experience with, Novell's ICS caches (which comprised ten of the twenty entrants), Network Appliance's NetCache, and Squid (on Solaris, in our case) all rock. Squid 2.3-stable1 was a dream to compile, install, and configure.
When we contacted him directly, he added this about Squid: “As an outgrowth of the Harvest Project, this venerable, free-software proxy cache sets the benchmark by which all other caches are measured.... For the price, Squid kicks some serious butt!” He also has kind words for another open-source project:
Apache web server was not specifically mentioned in the bake-off, but in my experience is extremely popular for caching services because the same server that can serve your web pages from your dorm room can also speed up your web surfing.
So let's raise a glass of sake to the Squid team and invite all the other open-source and free-software developers who envy this kind of coverage to let us know what they're up to.
Getting Started with DevOps - Including New Data on IT Performance from Puppet Labs 2015 State of DevOps Report
August 27, 2015
12:00 PM CDT
DevOps represents a profound change from the way most IT departments have traditionally worked: from siloed teams and high-anxiety releases to everyone collaborating on uneventful and more frequent releases of higher-quality code. It doesn't matter how large or small an organization is, or even whether it's historically slow moving or risk averse — there are ways to adopt DevOps sanely, and get measurable results in just weeks.
Free to Linux Journal readers. Register now!
- Django Models and Migrations
- Hacking a Safe with Bash
- Secure Server Deployments in Hostile Territory, Part II
- Huge Package Overhaul for Debian and Ubuntu
- Home Automation with Raspberry Pi
- The Controversy Behind Canonical's Intellectual Property Policy
- Shashlik - a Tasty New Android Simulator
- Embed Linux in Monitoring and Control Systems
- KDE Reveals Plasma Mobile
- diff -u: What's New in Kernel Development