What Does "Fast" Mean?
Good news! One of my clients is launching a new marketing campaign, which we expect will make the business even more successful than before.
Bad news! This means our Web application, which has existed for some time on a fairly simple infrastructure, and which has handled a steadily growing number of users, now (we hope) will need to deal with a massive spike in users.
The big question is this: can our servers handle the load we expect? Indeed, what load can we expect? And, what happens if we need to crank up even more capacity?
So in this article, I walk through some of the basic points having to do with Web scalability, describing a few of the key things to keep in mind. Next month, I'll take a deeper dive into these ideas and discuss some of the techniques you can use to improve the speed—or apparent speed—of your applications.
Many of my clients are companies that need a Web application, but aren't familiar with the ways in which the Web works. A common question for them to ask me is "We have many thousands of users every month. Can the server handle that many people?" When I explain that users consume server resources only when they actively are making an HTTP request, their understanding begins to improve. A company with 10,000 visitors a month doesn't need to worry about 10,000 simultaneous visitors; they likely will have some periods of time with a few dozen and other periods of time with absolutely none. Thus, scaling up their infrastructure to handle 10,000 simultaneous users would be foolish.
At the same time, there are times—such as after launching an advertising campaign or being mentioned on a TV show—that you indeed will have a huge spike in traffic. Companies that advertise during the Super Bowl not only expect to get millions of viewers, they also expect to have many of those people visit their Web sites after (or during) watching the ads. This means normal assumptions for scaling no longer are applicable.
This is one of the reasons why Amazon's EC2 has become so popular. If you can treat a server as a commodity, paying for it by the hour and spinning servers up and down as necessary, you can solve this scaling problem. As traffic rises, you add more servers. As it falls, you remove them.
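The scale-up, scale-down policy described above can be sketched in a few lines. This is a minimal illustration, not EC2's actual API; the capacity-per-server and budget numbers are assumptions made up for the example:

```python
import math

def desired_server_count(requests_per_second, capacity_per_server=200,
                         min_servers=2, max_servers=20):
    """How many servers we'd want for the current load.

    capacity_per_server, min_servers and max_servers are illustrative
    assumptions; min_servers keeps a floor for availability, and
    max_servers is the budget ceiling discussed below.
    """
    needed = math.ceil(requests_per_second / capacity_per_server)
    return max(min_servers, min(max_servers, needed))
```

With these numbers, 50 requests/second stays at the two-server floor, 1,500 requests/second calls for eight servers, and any load past 4,000 requests/second hits the twenty-server cap.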
But of course, life is more complicated than that. First and foremost, every system has bottlenecks that can't just be wished away by auto-scaling. For example, if it turns out that your database can't handle a large load and you have only a single database server, auto-scaling your Web servers may exacerbate the problem, rather than solve it.
Second, although it's nice to imagine infinite budgets for auto-scaling servers, it's probably a bit more realistic to think not just about increasing the number of servers, but also about making each individual server more efficient. If there are ways to improve the efficiency of your code, that's often a good place to work on scaling, before throwing (virtual) hardware at the problem.
Third, if you're in charge of a site's technical infrastructure, your answer to the "how many people can we serve simultaneously" question probably should not be "it's infinite, assuming an infinite budget". The technical staff might like that answer, but the company's CFO might have a bit of an issue with giving the IT department a blank check.
What Is Speed?
Many non-technical people will say "I want to have a fast Web site." From a technical perspective, however, that's not a very useful statement: it doesn't distinguish among the different types of speed, consider the multiple layers involved in a modern Web application, or account for the crunch that comes when many people surge onto the site at once.
So, let's consider the many different parts of a Web application and how each of them might affect the speed.
Bandwidth
It's true that networks can have different speeds. In general, people describe this in terms of bandwidth, which doesn't really mean that the electrons (or photons) are moving through the wires (or fibers or air) any faster, but rather that more of them are pushing through, in parallel, at a time. You can think of bandwidth as a straw through which you're trying to drink your favorite cold beverage. Two straws will allow you to drink twice as much at the same time and, thus, drink more quickly, even if the speed with which liquid flows through each straw is the same.
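The straw analogy translates directly into arithmetic: for the same payload, doubling the bandwidth halves the transfer time, even though each individual bit travels no faster. A quick sketch, with made-up page and link sizes:

```python
def transfer_seconds(payload_bytes, bytes_per_second):
    """Time to push a payload through a link of the given bandwidth."""
    return payload_bytes / bytes_per_second

# A 10 MB page through one "straw" (1 MB/s) vs. two (2 MB/s):
one_straw = transfer_seconds(10_000_000, 1_000_000)   # 10 seconds
two_straws = transfer_seconds(10_000_000, 2_000_000)  # 5 seconds
```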
One of the potential problems with using shared servers, and with using a virtual machine on shared hardware, is that the network capacity is being divided among the many users. Think of what would happen if several people were to share your drinking straw from the previous example. Sure, the overall straw might be the same size, but each individual gets less than the full bandwidth. You don't need a virtual machine to see such effects either—just try to run several network-intensive applications on the same computer, and you'll quickly find that they are competing for network resources.
The upshot here is that you want to maximize the bandwidth available to your server. This means having your own server—even if it's a VM, you probably don't want it sharing resources with other VMs—and putting different services on different computers.
Latency
This term also has to do with speed, but in a different way from pure bandwidth. Let's say you want to transfer data between two servers on different networks, so you connect those networks with a short, high-speed wire. You would say that such a network has high bandwidth and also low latency, since the signals travel between the two quickly and with little delay.
Let's now replace that high-speed wire with a satellite link. Suddenly, because it takes time to send the data from one network to another, you have increased your latency while keeping your bandwidth identical. The bandwidth hasn't changed, but each round trip takes much longer, so loading each page now will take significantly longer. One of the major considerations of a Web application is latency, both of the networks on which the server is running and of the application itself. If it takes several seconds for a server to reply, you can say that the application has high latency. This not only frustrates users (who have to wait for a response from the server), but it also means that a larger number of processes are running on the server at the same time, consuming resources. Thus, reducing latency in a Web application is in the best interest of both users and the company.
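The difference between the two kinds of "fast" can be captured in a simple model: total wait is a fixed round-trip latency plus the payload divided by bandwidth. The link figures below are illustrative (a geostationary satellite round trip really is on the order of half a second):

```python
def total_seconds(payload_bytes, bytes_per_second, round_trip_latency_s):
    """Rough time to fetch a payload: one round trip plus transfer time."""
    return round_trip_latency_s + payload_bytes / bytes_per_second

# Same 10 MB/s bandwidth, very different latency:
fiber = total_seconds(1_000_000, 10_000_000, 0.02)      # about 0.12 s
satellite = total_seconds(1_000_000, 10_000_000, 0.5)   # about 0.6 s
```

For small payloads like Web pages, latency dominates: the satellite fetch takes five times as long despite identical bandwidth.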
Client-Side Wait Time
Much of the time users spend waiting happens after the server has responded, while the browser downloads, parses and renders the page's assets. This means you need to think about performance in new and different ways from what you might have before. It's not enough to push all of the files to the user's browser or to indicate from which sites the user's browser can retrieve them. You also need to think about where in the page they are loaded. A <script> tag at the top of the page can have very different performance characteristics from one at the bottom of the page, since browsers interpret and render tags from top to bottom.
As if all of that weren't enough, we are increasingly in an era of rich client-side Web applications. Regardless of whether you're using something as simple as Backbone or as complex as Ember.js, you are writing software that will be executing inside the user's browser. For its first two decades, the Web was highly biased in favor of the server, which also made it easier to scale, profile, debug and improve programs. But, now that programs are running inside different browsers, on users' computers, there is much more to think and worry about.
What Does This All Mean?
Web development used to seem so simple. You get a domain, set up a server, slap together some software, and you're in business. And indeed, you often still can do that today. But, if you're expecting to get lots of visitors at once, you need to understand the different types of "fast" that you'll need to consider, measure and then optimize.
Next time, I'll dig deeper into each of these types of speed, looking at specific parts of your software that can affect each of them. I'll give some specific suggestions for how you can identify such issues, as well as solve them, particularly if you have some low-hanging fruit.
Reuven M. Lerner, Linux Journal Senior Columnist, a longtime Web developer, consultant and trainer, is completing his PhD in learning sciences at Northwestern University.