Quick Takes - Coyote Point Equalizer E550si Load Balancer
Providing fault tolerance as well as the ability to scale beyond the capacity of a single server, load balancers are practically a necessity for any commercial site. Loads on a given Web site can fluctuate by several orders of magnitude (five or six, in the case of sites like Victoria's Secret or the World Cup Soccer site), and thousands of dollars a minute may be lost if the site is unavailable. Being able to spread the load across many servers, and to ensure that users can still connect even if one or more physical servers fail or stop responding, is therefore crucial.
The latest load balancer available from Coyote Point Systems is the Equalizer E550si, a 1u (1.75"-high) appliance that offers 20 10/100/1000 ports, all the load-balancing features necessary to set up a sophisticated Web farm or other type of virtual cluster, and excellent performance, at a cost of $10,995 US.
You may be asking yourself, “Why do I need a load balancer?” Or, “Why should I pay that much for something I can get for free?” In its simplest form, load balancing simply distributes requests as they come in to one of several back-end servers in a virtual cluster, sharing the load equally among all the servers in a round-robin scheme. A DNS server can do this by mapping several IP addresses to the same host name, for instance:
www.store.com   192.168.0.10
www.store.com   192.168.0.11
www.store.com   192.168.0.12
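In BIND-style zone-file syntax, such a mapping is simply several A records for the same name; by default, BIND rotates the order of the returned set, producing round-robin behavior (the name and addresses here are illustrative, not from the review):

```
; three A records for the same host name; most resolvers/servers
; rotate the returned order, yielding DNS round-robin
www     IN  A   192.168.0.10
www     IN  A   192.168.0.11
www     IN  A   192.168.0.12
```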
The problem with using a DNS server in this fashion is that requests are distributed to each server in turn, whether or not that server is actually available, and regardless of how heavily loaded each of the servers is. Also, the first address in the list may be cached more often across the Web, resulting in higher loads on that server. Finally, many applications, such as e-commerce, can break unless a client is connected to the same server throughout its session, and there's no way to ensure this with a DNS round-robin setup.
Apache and Tomcat also can balance loads across a cluster of Apache and Tomcat servers, using a specialized Tomcat Worker instance. This type of load balancing is somewhat more sophisticated, allowing for checks to ensure that a host is available and adding more sophisticated algorithms than simple round-robin—for instance, allowing new requests to be sent to the least heavily loaded server. This type of load balancing can enable persistent sessions, so that a client can be directed to the same server for the duration of the session. However, this method will not work with other Web servers and will take some fairly specialized knowledge to set up and maintain.
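With the mod_jk connector, for instance, a two-node balanced worker might be declared in workers.properties roughly as follows (worker names and host addresses are illustrative, not from the review):

```
# Two Tomcat back ends reached over AJP, fronted by one "lb" worker
worker.list=balancer

worker.node1.type=ajp13
worker.node1.host=192.168.0.10
worker.node1.port=8009
worker.node1.lbfactor=1

worker.node2.type=ajp13
worker.node2.host=192.168.0.11
worker.node2.port=8009
worker.node2.lbfactor=1

# The lb worker distributes requests and can pin sessions to a node
worker.balancer.type=lb
worker.balancer.balance_workers=node1,node2
worker.balancer.sticky_session=1
```

The sticky_session setting is what provides the persistent sessions described above, at the cost of tying this approach to Apache and Tomcat.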
There also are open-source load balancers, such as Ultra Monkey, which can offer sophisticated load-balancing algorithms, persistent sessions, health checking, failover to a backup load balancer and more. These can be installed on any Linux server and simply need one or two NICs to begin creating a virtual cluster.
So, why buy a $10,995 box when you can set up a server for a few hundred?
First, performance. With a carefully tuned OS and 20 gigabit ports available, the Equalizer can handle millions of concurrent users and levels of traffic that a single-processor server with two standard NICs can't hope to match.
Second, ease of use. The Equalizer comes with a very simple and straightforward Web-based GUI that any network admin can use to create an enterprise-class load-balanced cluster.
Third, the Equalizer can be used with any IP-based application, not only HTTP/HTTPS. It supports DNS, WAP, RADIUS, SMTP, POP, IMAP, NNTP, FTP and streaming media, as well as most other UDP- and TCP/IP-based protocols. It also can handle Active Server Pages, as well as Java application servers, and pretty much any kind of SQL back-end database server.
The Equalizer also offers an optional SSL acceleration card that provides SSL encoding/decoding, which can reduce server loads quite substantially, and multiple Equalizers can be networked together to provide geographic load balancing. This allows you to set up several geographically separate Web sites that all serve the same URL, so that even if an entire data center is off-line, the others can continue to service users. The geographic load-balancing software, Envoy, can determine which data center will respond the fastest to any given client and send that client to the site that will give it the best service.
Setting up the Equalizer is a simple matter of performing the initial network configuration via serial terminal, then logging in to the system via the browser interface to configure one or more virtual clusters. Setting up a cluster is easily done by filling in the IP addresses of the servers in the cluster and making a few selections from drop-down boxes.
The major choices are the method of load balancing and the type of cluster. The load-balancing options are round-robin, static weight (set percentages of the total number of connections given to each server), adaptive, fastest response, least connections or server agent. Adaptive should be the default in most cases, as it combines the fastest response and least connections to provide very even server loads under most conditions. The type of cluster can be HTTP, HTTPS or any designated TCP/IP port range desired. Once a cluster is set up, you can be as granular as you like about creating persistent sessions, logging, reporting, monitoring services and servers to ensure availability, error handling or even automatically adding additional servers to a cluster as load increases. The default settings generally will be the optimal ones, but your ability to customize things is limited only by your ability to script actions.
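Of those algorithms, least connections is the easiest to illustrate: send each new request to whichever server currently has the fewest active connections. A toy version in Python (server addresses and connection counts are hypothetical; the Equalizer's actual implementation is, of course, proprietary):

```python
# Toy least-connections chooser: pick the back-end server with the
# fewest active connections; ties go to the first server listed.
def pick_least_connections(active):
    """active: dict mapping server address -> current connection count."""
    return min(active, key=active.get)

servers = {"192.168.0.10": 12, "192.168.0.11": 3, "192.168.0.12": 7}
print(pick_least_connections(servers))  # prints 192.168.0.11
```

An adaptive policy like the Equalizer's would additionally weigh response times, not just connection counts.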
For example, you can ping a server to ensure hardware connectivity, but you also can send a query via any text-based request/response protocol—not merely HTTP, but something like a Telnet-based SQL command—and verify that the response is valid. This means you can ensure that specific services are available on each member of a cluster, rather than just confirming that the network interface is operational. You can route traffic to a cluster based on rules written in standard POSIX.2 regular expressions. You could specify a rule that directs all traffic coming from a specific set of IP addresses to one cluster, and all other traffic to another, or match IP ranges assigned to specific countries to localize a Web site in other languages.
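The routing rules work like a first-match table. A rough illustration, using Python's re module as a stand-in for the Equalizer's POSIX.2 expressions (the address ranges and cluster names are hypothetical):

```python
import re

# Toy first-match rule table: the first pattern that matches the
# client's IP address decides which cluster serves the request.
RULES = [
    (re.compile(r"^10\.1\."), "cluster-fr"),    # e.g. a localized site
    (re.compile(r".*"), "cluster-default"),     # catch-all rule
]

def route(client_ip):
    for pattern, cluster in RULES:
        if pattern.match(client_ip):
            return cluster

print(route("10.1.4.7"))     # prints cluster-fr
print(route("192.168.0.5"))  # prints cluster-default
```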
The Equalizer can automatically place cookies in the HTTP stream returned to a client so that it can identify a specific client and ensure that all traffic for that session comes to the same server. In addition, you can run scripts when a condition is met. For instance, you could define a rule that sends an e-mail if average loads on the cluster exceed 70% or even add additional servers to a cluster when loads are high.
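The cookie mechanism amounts to remembering the first routing decision. A minimal sketch of the idea in Python (the cookie name, server addresses and round-robin fallback are all illustrative, not the appliance's actual behavior):

```python
# Toy cookie-based persistence: on a client's first request, choose a
# server and record it in a cookie; later requests carrying the cookie
# are sent back to the same server.
import itertools

SERVERS = ["192.168.0.10", "192.168.0.11"]
_round_robin = itertools.cycle(SERVERS)

def pick_server(cookies):
    server = cookies.get("EQ_SERVER")
    if server in SERVERS:
        return server, cookies               # sticky: reuse the same server
    server = next(_round_robin)              # new session: round-robin choice
    return server, {**cookies, "EQ_SERVER": server}

first, cookies = pick_server({})             # new client, cookie gets set
again, _ = pick_server(cookies)              # same client, same server
assert first == again
```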
Although there are load-balancing solutions that are less expensive than the Equalizer E550si (and many that are more expensive), the mix of high performance, ease of use and programmability is hard to beat.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
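That find-plus-grep combination can be sketched in a few lines of Python as well (the directory, file pattern and search term here are illustrative):

```python
# Rough equivalent of: find /home -name '*.log' -exec grep 'entry' {} +
from pathlib import Path

def search_logs(root, needle):
    """Return (path, line) pairs for matching lines in *.log files under root."""
    hits = []
    for path in sorted(Path(root).rglob("*.log")):
        for line in path.read_text(errors="replace").splitlines():
            if needle in line:
                hits.append((str(path), line))
    return hits
```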
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide