Simplicity and Performance: JavaScript on the Server

The award for the hottest new server-side Web development language goes to...JavaScript! JavaScript, the language of the browser since the early days, is now the hottest language for server-side development as well. See why JavaScript is the language for developing quick, easy and incredibly powerful server-side applications.

For years, Douglas Crockford, the high priest of JavaScript (JS), has claimed that it is a powerful, flexible language suited to a multitude of tasks, especially if you can separate it from the ugly browser-side piece that is the Document Object Model, or DOM. Because of the browser, JavaScript is the most popular programming language around by number of users. Job sites dice.com and monster.com post more jobs for JavaScript than for any other language, except Java. Of course, if JavaScript runs in a browser, or anywhere, it must have an engine. Those engines have been around since the earliest JS-capable browsers, and they have been available as separate standalone entities for several years. Thus, the potential for running JS on its own has always existed. However, JavaScript has always lacked two critical elements that would make it worthwhile to run on the server side.

Free Trade Agreement

The first missing piece was a common set of libraries. Quite simply, because JS was so focused on the browser, it was missing basic I/O libraries, such as file reading and writing, network port creation and listening, and other elements that can be found in any decent standalone language. Ruby includes them natively; Java includes them in its java.io and java.net packages. Running JavaScript alone, able to process text and data structures but unable to communicate with the outside world, was rather useless. Over the years, several attempts have been made to create some form of JS I/O and Net packages, mostly wrapped around native C calls if the JS engine was written in C (such as SpiderMonkey), or around java.io and java.net calls if the JS engine was written in Java (for example, Rhino).

This began to change in early 2009 with the creation of the CommonJS Project (which, for some mystical reason, stands for Common JavaScript). CommonJS unified these efforts under a common namespace with JS-specific semantics, and it included a package-inclusion system to boot.

Using Rhino as an example, you could read from a file using:

defineClass("File");
var f = new File("myfile.txt"), line;
while ((line = f.readLine()) !== null) {
   // do some processing
}

// this example slightly modified and simplified 
// from the Mozilla Rhino site

As you can see, this is not file processing in JavaScript; it is file processing in Java! All I have done is open the Java API to JavaScript. That is great if you really intend to program in Java, but it's of limited help if you are trying to write pure JS, especially if your engine is not Java-based.

With CommonJS, there emerged a standard, JavaScript-native interface for including a package, for example an I/O or HTTP package, and for defining much of the standard functionality. Under the covers, the implementation may be C, Java, Erlang or Gobbledygook. All that matters is that the interface presented to the developer is platform-agnostic and portable from one interpreter to another.
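To give a flavor of the difference, here is a minimal sketch of the same file-reading task written against the CommonJS-style fs module as provided by Node.JS (discussed below); the filename myfile.txt is simply the placeholder reused from the Rhino example:

var fs = require('fs');   // CommonJS-style package inclusion

// read the whole file as UTF-8 text; under the covers the
// implementation is native code, but the interface is pure JS
var contents = fs.readFileSync('myfile.txt', 'utf8');

contents.split('\n').forEach(function (line) {
    // do some processing on each line
});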

The Missing Node

The second missing piece was a server, similar to Tomcat/Jetty for Java or Mongrel/Thin for Ruby, that provides a real environment, includes the necessary modules and is easy to use. Most important, it needed to take advantage of JavaScript's strengths, rather than attempt to copy a system that works for Java or Ruby. The real breakthrough was Ryan Dahl's Node.JS. Ryan combined Google's high-performance V8 engine, JavaScript's natural asynchronous semantics, a module system and the basic modules to create a server that suits JavaScript to a tee.

Most Web servers have a primary process that receives each new request. The server then either forks a new process to handle that specific request while the parent listens for more requests, or it creates a new thread to do the same, essentially the same approach if somewhat more efficient. The problem with processes or threads is threefold. First, they require significant resources (memory and CPU) for a small amount of differing code. Second, these threads often block on activities such as filesystem or network access, tying up precious resources. Finally, threads and processes require context switches in the CPU. As good as modern operating systems are, context switches are still expensive.

The alternative, which is gaining in popularity, is event-driven, asynchronous callbacks. In an event model, everything runs in one thread, and no request gets a thread of its own. Rather, each request registers a callback that is invoked when an event occurs, such as a new connection request. Several products already take advantage of the event-driven model. Nginx is a Web server with CPU utilization characteristics similar to those of the dominant Apache, but its memory usage stays constant no matter how many simultaneous requests it serves. The same model has been brought to Ruby with EventMachine.
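To make the callback style concrete, here is a minimal sketch of an echo server written with Node.JS's net module; everything runs in a single thread, and the port number 8124 is an arbitrary choice:

var net = require('net');

// this callback is invoked once for each new connection event;
// no process or thread is forked
var server = net.createServer(function (socket) {
    // and this one runs each time data arrives on that connection
    socket.on('data', function (chunk) {
        socket.write(chunk);   // echo the data back to the client
    });
});

server.listen(8124, function () {
    console.log('echo server listening on port 8124');
});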

As anyone who has programmed in JavaScript, and especially in asynchronous AJAX, knows, JS is extremely well suited to event-driven programming. Node.JS brilliantly combines packaging and an asynchronous event-driven model with a first-rate JS engine to create an incredibly lightweight, easy-to-use yet powerful server-side engine. Node has been in existence for less than two years and was first released to the world at large only at the end of May 2009, yet it has seen widespread adoption and has served as a catalyst for many other frameworks and projects. Quite simply, Node changes the way we write high-performance server-side nodes (pun intended) and opens up a whole new vista.

The rest of this article explores installing Node and creating two sample applications. One is the classic “hello world”, a starting point for every programming example, and the other is a simple static file Web server. More complex applications, Node-based development frameworks, package managers for Node, available hosting environments and how to host your own Node environment will be subjects for future articles.
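As a taste of what is to come, here is a minimal sketch of the kind of “hello world” server the article goes on to build; the port number 8000 is an arbitrary choice:

var http = require('http');

// a single callback handles every request event
http.createServer(function (request, response) {
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end('Hello, world!\n');
}).listen(8000);

console.log('Server running at http://localhost:8000/');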
