JavaScript All the Way Down

There is a well-known story about a scientist who gave a talk about the Earth and its place in the solar system. At the end of the talk, a woman challenged him with "That's rubbish; the Earth is really like a flat dish, supported on the back of a turtle." The scientist smiled and asked, "But what's the turtle standing on?", to which the woman, sidestepping the logical trap, answered, "It's very simple: it's turtles all the way down!" No matter the verity of the anecdote, the identity of the scientist (Bertrand Russell and William James are sometimes mentioned), or even whether they were turtles or tortoises, today we may apply a similar solution to Web development, with "JavaScript all the way down".

If you are going to develop a Web site, for client-side development, you could opt for Java applets, ActiveX controls, Adobe Flash animations and, of course, plain JavaScript. On the other hand, for server-side coding, you could go with C# (.Net), Java, Perl, PHP and more, running on servers, such as Apache, Internet Information Server, Nginx, Tomcat and the like. Currently, JavaScript allows you to do away with most of this and use a single programming language on both the client and the server sides, even with a JavaScript-based server. This way of working has even produced a totally JavaScript-oriented acronym along the lines of the old LAMP (Linux+Apache+MySQL+PHP) one: MEAN, which stands for MongoDB (a NoSQL database you can access with JavaScript), Express (a Node.js module to structure your server-side code), Angular.JS (Google's Web development framework for client-side code) and Node.js.

In this article, I cover several JavaScript tools for writing, testing and deploying Web applications, so you can consider whether you want to give a "JavaScript all the way down" Web stack a try.

What's in a Name?

JavaScript originally was developed at Netscape in 1995, first under the name Mocha, and then as LiveScript. Soon (after Netscape and Sun got together; nowadays, it's the Mozilla Foundation that manages the language) it was renamed JavaScript to ride the popularity wave, despite having nothing to do with Java. In 1997, it became an industry standard under a fourth name, ECMAScript. The most common current version of JavaScript is 5.1, dated June 2011, and version 6 is on its way. (However, if you want to use the more modern features, but your browser won't support them, take a look at the Traceur compiler, which compiles version 6 code down to version 5 level.)

Some companies produced dialects of the language, such as Microsoft, which developed JScript (renamed to avoid legal problems) and Adobe, which created ActionScript for use with Flash.

There are several other derivative languages (which actually compile to JavaScript for execution), such as the more concise CoffeeScript, Microsoft's TypeScript or Google's most recent AtScript (JavaScript plus Annotations), which was developed for the Angular.JS project. The asm.js project even uses a JavaScript subset as a target language for efficient compilers for other languages. Those are many different names for a single concept!

Why JavaScript?

Although stacks like LAMP or its Java, Ruby or .Net peers do power many Web applications today, using a single language both for client- and server-side development has several advantages, and companies like Groupon, LinkedIn, Netflix, PayPal and Walmart, among many more, are proof of it.

Modern Web development is split between client-side and server-side (or front-end and back-end) coding, and the best balance between them is more easily attained if your developers can work on both sides with the same ease. Of course, plenty of developers are familiar with all the languages needed for both sides of coding, but in any case, it's quite probable that they will be more productive at one end or the other.

Many tools are available for JavaScript (building, testing, deploying and more), and you'll be able to use them for all components in your system (Figure 1). So, by going with the same single set of tools, your experienced JavaScript developers will be able to play both sides, and you'll have fewer problems getting the needed programmers for your company.

Figure 1. JavaScript can be used everywhere, on the client and the server sides.

Of course, being able to use a single language isn't the single key point. In the "old days" (just a few years ago!), JavaScript lived exclusively in browsers, which read and interpreted its source code. (Okay, if you want to be precise, that's not exactly true; Netscape Enterprise Server ran server-side JavaScript code, but it wasn't widely adopted.) About five years ago, when Firefox and Chrome started competing seriously with (by then) the most popular Internet Explorer, new JavaScript engines were developed, separated from the layout engines that actually drew the HTML pages seen on browsers. Given the rising popularity of AJAX-based applications, which required more processing power on the client side, a competition to provide the fastest JavaScript started, and it hasn't stopped yet. With the higher performance achieved, it became possible to use JavaScript more widely (Table 1).

Table 1. The Current Browsers and Their JavaScript Engines

Browser JavaScript Engine
Chrome V8
Firefox SpiderMonkey
Opera Carakan
Safari Nitro

Some of these engines apply advanced techniques to get the most speed and power. For example, V8 compiles JavaScript to native machine code before executing it (this is called JIT, Just In Time compilation, and it's done on the run instead of pre-translating the whole program as is traditional with compilers) and also applies several optimization and caching techniques for even higher throughput. SpiderMonkey includes IonMonkey, which also is capable of compiling JavaScript code to object code, although working in a more traditional way. So, accepting that modern JavaScript engines have enough power to do whatever you may need, let's now start a review of the Web stack with a server that wouldn't have existed if it weren't for that high-level language performance: Node.js.

Node.js: a New Kind of Server

Node.js (or plain Node, as it's usually called) is a Web server, itself largely written in JavaScript, which uses that language for all scripting. It originally was developed to simplify developing real-time Web sites with push capabilities—so instead of all communications being client-originated, the server might start a connection with a client by itself. Node can work with lots of live connections, because it's very lightweight in terms of requirements. There are two key concepts to Node: it runs a single process (instead of many), and all I/O (database queries, file accesses and so on) is implemented in a non-blocking, asynchronous way.

Let's go a little deeper and further examine the main difference between Node and more traditional servers like Apache. Whenever Apache receives a request, it starts a new, separate thread that uses RAM of its own and CPU processing power. (If too many threads are running, the request may have to wait a bit longer until it can be started.) When the thread produces its answer, the thread is done. The maximum number of possible threads depends on the average RAM requirements for a thread; it might be a few thousand at the same time, although numbers vary depending on server size (Figure 2).

Figure 2. Apache and traditional Web servers run a separate thread for each request.

On the other hand, Node runs a single thread. Whenever a request is received, it is processed as soon as possible, and it will run continuously until some I/O is required. Then, while the code waits for the I/O results to be available, Node will be able to process other waiting requests (Figure 3). Because all requests are served by a single process, the possible number of running requests rises, and there have been experiments with more than one million concurrent connections—not shabby at all! This shows that an ideal use case for Node is having server processes that are light in CPU processing, but high on I/O. This will allow more requests to run at the same time; CPU-intensive server processes would block all other waiting requests and cause a sharp drop in throughput.

Figure 3. Node runs a single thread for all requests.
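The non-blocking style can be sketched in a few lines of plain JavaScript (a conceptual illustration only; setImmediate stands in for a real asynchronous I/O call, such as a database query):

```javascript
var log = [];

log.push("request received");

// Simulate non-blocking I/O: the callback is merely queued,
// and the single thread moves on immediately.
setImmediate(function() {
    log.push("I/O finished");   // runs later, on the same thread
});

log.push("free to serve other requests");

// At this point, log holds only the two synchronous entries;
// the I/O callback hasn't run yet.
```

No thread ever sits blocked waiting for the I/O: the main thread keeps serving other requests, and the callback runs once the event loop gets to it.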

A great asset of Node is that there are many available modules (an estimate ran in the thousands) that help you get to production more quickly. Though I obviously can't list all of them, you probably should consider some of the modules listed in Table 2.

Table 2. Some widely used Node.js modules that will help your development and operation.

Module Description
async Simplifies asynchronous work, a possible alternative to promises.
cluster Improves concurrency in multicore systems by forking worker processes. (For further scalability, you also could set up a reverse proxy and run several Node.js instances, but that goes beyond the objective of this article.)
connect Works with "middleware" for common tasks, such as error handling, logging, serving static files and more.
ejs, handlebars or jade Templating engines.
express A minimal Web framework—the E in MEAN.
forever A command-line tool that will keep your server up, restarting if needed after a crash or other problem.
mongoose, cradle, sequelize Database ORMs: for MongoDB, for CouchDB and for relational databases such as MySQL, respectively.
passport Authentication middleware, which can work with OAuth providers, such as Facebook, Twitter, Google and more.
request or superagent HTTP clients, quite useful for interacting with RESTful APIs.
underscore or lodash Tools for functional programming and for extending the JavaScript core objects.

Of course, there are some caveats when using Node.js. An obvious one is that no process should do heavy computations, which would "choke" Node's single processing thread. If such a process is needed, it should be done by an external process (you might want to consider using a message queue for this) so as not to block other requests. Also, care must be taken with error processing. An unhandled exception might eventually crash the whole server, which wouldn't bode well for your site. On the other hand, having a large community of users and plenty of fully available, production-level, tested code already on hand can save you quite a bit of development time and let you set up a modern, fast server environment.

Planning and Organizing Your Application

When starting out with a new project, you could set up your code from zero and program everything from scratch, but several frameworks can help you with much of the work and provide clear structure and organization to your Web application. Choosing the right framework will have an important impact on your development time, on your testing and on the maintainability of your site. Of course, there is no single answer to the question "What framework is best?", and new frameworks appear almost on a daily basis, so I'm just going with three of the top solutions that are available today: AngularJS, Backbone and Ember. Basically, all of these frameworks are available under permissive licenses and give you a head start on developing modern SPAs (single-page applications). For the server side, several packages (such as Sails, to give just one example) work with all frameworks.

AngularJS (or Angular.JS or just plain Angular—take your pick) was developed in 2009 by Google, and its current version is 1.3.4, dated November 2014. The framework is based on the idea that declarative programming is best for interfaces (and imperative programming for the business logic), so it extends HTML with custom tag attributes that are used to bind input and output data to a JavaScript model. In this fashion, programmers don't have to manipulate the Web page directly, because it is updated automatically. Angular also focuses on testing, because the difficulty of automatic testing heavily depends upon the code structure. Note that Angular is the A in MEAN, so there are some other frameworks that expand on it, such as MEAN.IO or MEAN.JS.

Backbone is a lighter, leaner framework, dating from 2010, which uses a RESTful JSON interface to update the server side automatically. (Fun fact: Backbone was created by Jeremy Ashkenas, who also developed CoffeeScript; see the "What's in a Name?" sidebar.) In terms of community size, it's second only to Angular, and in code size, it's by far the smallest one. Backbone doesn't include a templating engine of its own, but it works fine with Underscore's templating, and given that this library is included by default, it is a simple choice to make. It's considered to be less "opinionated" than other frameworks and to have a quite shallow learning curve, which means that you'll be able to start working quickly. A deficiency is that Backbone lacks two-way data binding, so you'll have to write code to update the view whenever the model changes and vice versa. Also, you'll probably be manipulating the Web page directly, which will make your code harder to unit test.
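To make the model-to-view wiring concrete, here is a sketch in plain JavaScript (no Backbone required; the Model constructor and its on and set methods are made-up stand-ins for the kind of event-driven updates Backbone's models provide):

```javascript
// A bare-bones observable model: the view subscribes once,
// and every subsequent change notifies it.
function Model(attrs) {
    this.attrs = attrs;
    this.listeners = [];
}
Model.prototype.on = function(fn) {
    this.listeners.push(fn);
};
Model.prototype.set = function(key, value) {
    this.attrs[key] = value;
    this.listeners.forEach(function(fn) { fn(key, value); });
};

var user = new Model({ name: "fkereki" });
var viewText = "";

user.on(function(key, value) {      // "render" on every change
    viewText = "Name: " + value;
});

user.set("name", "federico");       // the "view" updates itself
```

This is exactly the hand-written glue that frameworks with two-way binding (such as Angular or Ember) generate for you.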

Finally, Ember probably is harder to learn than the other frameworks, but it rewards the coder with higher performance. It favors "convention over configuration", which likely will make Ruby on Rails or Symfony users feel right at home. It integrates easily with a RESTful server side, using JSON for communication. Ember includes Handlebars (see Table 2) for templating and provides two-way updates. A negative point is the usage of <script> tags for markers, in order to keep templates up to date with the model. If you try to debug a running application, you'll find plenty of unexpected elements!

Simplify and Empower Your Coding

It's a sure bet that your application will need to work with HTML, handle all kinds of events and do AJAX calls to connect with the server. This should be reasonably easy—although it might be plenty of work—but even today, browsers do not have exactly the same features, so without help, you might have to resort to browser-specific detection techniques so your code will adapt and work everywhere. Modern application users have grown accustomed to working with different events (tap, double tap, long tap, drag and drop, and more), and you should be able to include that kind of processing in your code, possibly with appropriate animations. Finally, connecting to a server is a must, so you'll be using AJAX functions all the time, and it shouldn't be a painful experience.

The most probable candidate library to help you with all these functions is jQuery. Arguably, it's the most popular JavaScript library in use today, employed at more than 60% of the most visited Web sites. jQuery provides tools for navigating your application's Web document, handles events with ease, applies animations and uses AJAX (Listing 1). Its current version is 2.1.1 (or 1.11.1, if you want to support older browsers), and it weighs in at only around 32K. Some frameworks (Angular, for example) even will use it if available.

Listing 1. A simple jQuery example, showing how to process events, access the page and use AJAX.

var myButtonId = "#processButton";
$(myButtonId).click(function(e) {		 // when clicked...
    $(myButtonId).attr("disabled", "disabled");	 // disable button
    $.get("my/own/services", function(data) {	 // call server service
	window.alert("This came back: " + data); // show what it returns
	$(myButtonId).removeAttr("disabled");	 // re-enable the button
    });
});

Other somewhat less used possibilities could be Prototype (current version 1.7.2), MooTools (version 1.5.1) or Dojo Toolkit (version 1.10). One of the key selling points of all these libraries is the abstraction of the differences between browsers, so you can write your code without worrying whether it will run on this or that browser. You probably should take a look at all of them to find which one best fits your programming style.

Also, there's one more kind of library you may want. Callbacks are familiar to JavaScript programmers who need them for AJAX calls, but when programming for Node, there certainly will be plenty of them! You should be looking at "promises", a way of programming that will make callback programming more readable and save you from "callback hell"—a situation in which you need a callback, and that callback also needs a callback, which also needs one and so on, making code really hard to follow. See Listing 2, which also shows the growing indentation that your code will need. I'm omitting error-processing code, which would make the example even messier!

Listing 2. Callback hell happens when callbacks include callbacks, which include callbacks and so on.

function nurseryRhyme(...) {
  ..., function eeny(...) {
    ..., function meeny(...) {
      ..., function miny(...) {
        ..., function moe(...) {
          ...
        });
      });
    });
  });
}

The behavior of promises is standardized through the "Promises/A+" open specification. Several packages provide promises (jQuery and Dojo already include some support for them), and in general, they even can interact, processing each other's promises. A promise is an object that represents the future value of a (usually asynchronous) operation. You can process this value through the promise .then(...) method and handle exceptions with its .catch(...) method. Promises can be chained, and a promise can produce a new promise, the value of which will be processed in the next .then(...). With this style, the callback hell example of Listing 2 would be converted into more understandable code; see Listing 3. Code, instead of being more and more indented, stays aligned to the left. Callbacks still are being (internally) used, but your code doesn't explicitly work with them. Error handling is also simpler; you simply would add appropriate .catch(...) calls.

Listing 3. Using promises produces far more legible code.

nurseryRhyme(...)
  .then(function eeny(...) {...})
  .then(function meeny(...) {...})
  .then(function miny(...) {...})
  .then(function moe(...) {...});

You also can build promises out of more promises—for example, a service might need the results of three different callbacks before producing an answer. In this case, you could build a new single promise out of the three individual promises and specify that the new one will be fulfilled only when the other three have been fulfilled. There also are other constructs that let you fulfill a promise when a given number (possibly just one) of "sub-promises" have been fulfilled. See the Resources section for several possible libraries you might want to try.
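As a sketch of this (using the built-in Promise object now found in modern engines; the Q or bluebird libraries offer the same pattern, and getUser, getOrders and getBalance are hypothetical asynchronous services), Promise.all builds one promise out of three:

```javascript
// Three hypothetical asynchronous services, each returning a promise
function getUser()    { return Promise.resolve({ name: "fkereki" }); }
function getOrders()  { return Promise.resolve([1001, 1002]); }
function getBalance() { return Promise.resolve(220.75); }

// A single new promise, fulfilled only when all three are fulfilled
var combined = Promise.all([getUser(), getOrders(), getBalance()])
    .then(function(results) {
        var user = results[0], orders = results[1], balance = results[2];
        return user.name + " has " + orders.length +
               " orders and a balance of " + balance;
    });

combined.then(function(answer) {
    console.log(answer); // "fkereki has 2 orders and a balance of 220.75"
});
```

If any one of the three promises were rejected, the combined promise would reject too, and a single .catch(...) at the end of the chain would handle it.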

I have commented on several tools you might use to write your application, so now let's consider the final steps: building the application, testing it and eventually deploying it for operation.

Testing Your Application

No matter whether you program on your own or as part of a large development group, testing your code is a basic need, and doing it in an automated way is a must. Several frameworks can help you with this, such as Intern, Jasmine or Mocha (see Resources). In essence, they are really similar. You define "suites", each of which runs one or more "test cases", which check that your code performs some specific function. To test results and see if they satisfy your expectations, you write "assertions", which basically are conditions that must be satisfied (see Listing 4 for a simple example). You can run test suites as part of the build process (which I explain below) to see if anything was broken before attempting to deploy the newer version of your code.

Listing 4. Suites usually include several test cases.

describe("Prime numbers tests", function() {
  it("Test prime numbers", function() {
    expect(isPrime(2)).to.be.true();
  });
  it("Test non-prime numbers", function() {
    expect(isPrime(4)).to.be.false();	// just for variety!
  });
});

Tests can be written in "fluent" style, using many matchers (see Listing 5 for some examples). Several libraries provide different ways to write your tests, including Chai, Unit.js, Should.js and Expect.js; check them out to decide which one suits you best.

Listing 5. Some examples of the many available matchers you can use to write assertions.

expect(someFunction(...)).to.be.false();     // or .true(), .null(),
                                             // .undefined(), .empty()
expect(someFunction(...)).to.not.equal(33);  // also .above(33),
                                             // .below(33)
expect(someObject(...)).to.have.property("key", 22);

If you want to run tests that involve a browser, PhantomJS and Zombie provide a fake Web environment, so you can run tests with greater speed than using tools like Selenium, which would be more appropriate for final acceptance tests.

Listing 6. A sample test with Zombie (using promises, by the way) requires no actual browser.

var browser = require("zombie").create();
browser.localhost("", 3000);
browser.visit("/")
  .then(function() { // when loaded, enter data and click
    return browser
      .fill("User", "fkereki")
      .fill("Password", "...")
      .pressButton("Log in");
  })
  .done(function() { // page loaded
    browser.assert.text("#greeting", "Hi, fkereki!");
  });

A Slew of DDs!

Modern agile development processes usually emphasize very short cycles, based on writing tests for code yet unwritten, and then actually writing the desired code, the tests being both a check that the code works as desired and a sort of specification in itself. This process is called TDD (Test-Driven Development), and it usually leads to modularized and flexible code, which also is easier to understand, because the tests help with understanding. BDD (Behavior-Driven Development) is a process based on TDD, which even specifies requirements in a form quite similar to the matchers mentioned in this article. Yet another "DD" is ATDD (Acceptance Test-Driven Development), which highlights the idea of writing the (automated) acceptance tests even before programmers start coding.
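As a minimal sketch of the TDD cycle (plain console.assert calls stand in for a real testing framework, and isPrime is just an example function), the tests are written first and double as the specification:

```javascript
// Step 1: the tests, written first, act as the specification...
function testIsPrime() {
    console.assert(isPrime(2) === true, "2 is prime");
    console.assert(isPrime(7) === true, "7 is prime");
    console.assert(isPrime(1) === false, "1 is not prime");
    console.assert(isPrime(9) === false, "9 is not prime");
}

// Step 2: ...and only then is the code written to satisfy them.
function isPrime(n) {
    if (n < 2) {
        return false;
    }
    for (var d = 2; d * d <= n; d++) {
        if (n % d === 0) {
            return false;
        }
    }
    return true;
}

testIsPrime();  // no output means all assertions passed
```

In a real project, the same cycle would run inside Jasmine or Mocha, wired into the build so a failing test stops the deployment.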

Building and Deploying

Whenever your code is ready for deployment, you almost certainly will have to do several repetitive tasks, and you'd better automate them. Of course, you could go with classic tools like make or Apache's ant, but keeping to the "JavaScript all the way down" idea, let's look at a pair of tools, Grunt and Gulp, which work well.

Grunt can be installed with npm. Do sudo npm install -g grunt-cli, but this isn't enough; you'll have to prepare a gruntfile to let it know what it should do. Basically, you require a package.json file that describes the packages your system requires and a Gruntfile.js file that describes the tasks you need. Tasks may have subtasks of their own, and you may choose to run the whole task or a specific subtask. For each task, you will define (in JavaScript, of course) what needs to be done (Listing 7). Running grunt with no parameters will run a default (if given) task or the whole gamut of tasks.

Listing 7. A Sample Grunt File

module.exports = function(grunt) {
  grunt.initConfig({
    concat: {
      options: {
        separator: ';',
        stripBanners: true
      },
      dist: {
        src: ['src/**/*.js'],
        dest: 'dist/build.js'
      }
    },
    uglify: {
      dist: {
        files: { 'dist/build.min.js': ['<%= concat.dist.dest %>'] }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  grunt.registerTask('default', ['concat', 'uglify']);
};

Gulp is somewhat simpler to set up (in fact, it was created to simplify Grunt's configuration files), and it depends on what its authors call "code-over-configuration". Gulp works in "stream" or "pipeline" fashion, along the lines of Linux's command line, but with JavaScript plugins. Each plugin takes one input and produces one output, which automatically is fed to the next plugin in the queue. This is simpler to understand and set up, and it even may be faster for tasks involving several steps. On the other hand, being a newer project implies a smaller community of users and fewer available plugins, although both situations are likely to improve in the near future.
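The pipeline concept itself can be sketched in plain JavaScript (a conceptual illustration only, not actual Gulp code; in a real gulpfile, each step would be a plugin, such as gulp-concat or gulp-uglify, working on file streams):

```javascript
// Each "plugin" takes one input and produces one output...
function concatenate(files)  { return files.join(";\n"); }
function stripComments(code) { return code.replace(/\/\/.*$/gm, ""); }
function minify(code)        { return code.replace(/\s+/g, " ").trim(); }

// ...and the output of each step automatically feeds the next,
// pipeline-style.
function pipeline(input, steps) {
    return steps.reduce(function(data, step) { return step(data); }, input);
}

var build = pipeline(
    ["var a = 1; // first", "var b = 2;"],
    [concatenate, stripComments, minify]
);
// build is now "var a = 1; var b = 2;"
```

Each step stays small and single-purpose, and reordering or adding steps is just a matter of editing the array, which is the essence of Gulp's appeal.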

You can use them either from within a development environment (think Eclipse or NetBeans, for example), from the command line or as "watchers", setting them up to monitor specific files or directories and run certain tasks whenever changes are detected to streamline your development process further in a completely automatic way. You can set up things so that templates will be processed, code will be minified, Sass or LESS styles will be converted into pure CSS, and the resulting files will be moved to the server, wherever it is appropriate for them. Both tools have their fans, and you should try your hand at both to decide which you prefer.

Getting and Updating Packages

Because modern systems depend on lots of packages (frameworks, libraries, styles, utilities and whatnot), getting all of them and, even worse, keeping them updated, can become a chore. There are two tools for this: Node's own npm (mainly for server-side packages, although it can work for client-side code too) and Twitter's bower (more geared to the client-side parts). The former deals mainly with Node packages and will let you keep your server updated based on a configuration file. The latter, on the other hand, can install and update all kinds of front-end components (that is, not only JavaScript files) your Web applications might need, also based on separate configuration metadata files.

Usage for both utilities is the same; just substitute bower for npm, and you're done. Typing npm search some.package can help you find a given package. Typing npm install some.package will install it, and adding a --save option will update the appropriate configuration file, so future npm update commands will get everything up to date. In a pinch, npm also can be used as a replacement for bower, although then you'll possibly want to look at browserify to organize your code and prepare your Web application for deployment. Give it a look just in case.
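For reference, the configuration file npm works with might look as follows (a sketch; the package name and version ranges are just placeholders), and a plain npm install will then fetch everything listed:

```json
{
  "name": "my-mean-app",
  "version": "0.0.1",
  "dependencies": {
    "express": "4.x",
    "mongoose": "3.x"
  },
  "devDependencies": {
    "grunt": "0.4.x"
  }
}
```

The dependencies section covers what the application needs to run, while devDependencies lists build-and-test tooling; --save and --save-dev write into the corresponding section for you.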


Modern fast JavaScript engines, plus the availability of plenty of specific tools to help you structure, test or deploy your systems, make it possible to create Web applications with "JavaScript all the way down", helping your developers be more productive and giving them the possibility of working on both the client and server sides with the same tools they already are proficient with. For modern development, you certainly should give this a thought.


Resources

Keep up to date with JavaScript releases, features and more at the Mozilla Developer Network site.

Get the Traceur compiler from its project site. CoffeeScript and TypeScript each have official sites of their own, and you can read a draft version of the AtScript Primer online. Finally, for more details on asm.js, see its specification site.

Common client-side frameworks include AngularJS, Backbone and Ember, among many others. With Backbone, you also might consider Chaplin or Marionette for large-scale JavaScript Web applications. MEAN.JS, MEAN.IO and Sails all have sites of their own.

You certainly should take a look at libraries like jQuery, MooTools, Dojo or Prototype.

Use "promises" to simplify callback work in Node; you can choose among Q, when, bluebird or kew (actually, an optimized subset of Q), among many more. The standard "Promises/A+" documentation is available online. An alternative to promises could be async.

For server-side package management, use npm. For client-side packages, either add browserify, or get bower.

Working with CSS is simpler with Sass or {less}; note that the latter can be installed with npm.

Use testing frameworks, such as Intern, Jasmine or Mocha. Chai, Should.js, Expect.js and Unit.js are complete assertion libraries with different interfaces to suit your preferences. (Unit.js actually includes Should.js and Expect.js to give you more freedom in choosing your preferred assertion writing style.) PhantomJS and Zombie.js allow you to run your tests without using an actual browser, for higher speeds, while Selenium is preferred for actual acceptance tests.

Deploy your systems with Grunt or Gulp.
