System Administration of the IBM Watson Supercomputer

System administrators at the USENIX LISA 2011 conference in Boston in December got to hear Michael Perrone's presentation "What Is Watson?" (LISA, by the way, is a great system administration conference.)

Michael Perrone is the Manager of Multicore Computing at the IBM T.J. Watson Research Center. The entire presentation (slides, video and MP3) is available on the USENIX Web site, and if you really want to understand how Watson works under the hood, take an hour to listen to Michael's talk (and the sysadmin Q&A at the end).

I approached Michael after his talk and asked whether a sysadmin on his team would be willing to answer some questions about handling Watson's system administration. After a brief introduction to Watson, I include our conversation below.

What Is Watson?

In a nutshell, Watson is an impressive demonstration of the current state of the art in artificial intelligence: a computer's ability to answer questions posed in natural language (text or speech) correctly.

Watson came out of the IBM DeepQA Project and is an application of DeepQA tuned specifically to Jeopardy (a US TV trivia game show). The "QA" in DeepQA stands for Question Answering, which means the computer can answer your questions, spoken in a human language (starting with English). The "Deep" in DeepQA means the computer is able to analyze deeply enough to handle natural language text and speech successfully. Because natural language is unstructured, deep analysis is required to interpret it correctly.

It demonstrates (in a popular format) a computer's capability to interface with us using natural language, to "understand" and answer questions correctly by quickly searching a vast sea of data and correctly picking out the vital facts that answer the question.

Watson is thousands of algorithms running on thousands of cores using terabytes of memory, driving teraflops of CPU operations to deliver an answer to a natural language question in less than five seconds. It is an exciting feat of technology, and it's just a taste of what's to come.

IBM's goal for the DeepQA Project is to drive automatic Question Answering technology to a point where it clearly and consistently rivals the best human performance.

Watson's Vital Statistics
  • 90 IBM Power 750 servers (plus additional I/O, network and cluster controller nodes).

  • 80 trillion operations per second (teraflops).

  • Watson's corpus size was 400 terabytes of data—encyclopedias, databases and so on. Watson was disconnected from the Internet. Everything it knows about the world came from the corpus.

  • Average time to handle a question: three seconds.

  • 2880 POWER7 cores (3.555GHz chip), four threads per core.

  • 500GB per sec on-chip bandwidth (between the cores on a chip).

  • 10Gb Ethernet network.

  • 15TB of RAM.

  • 20TB of disk, clustered. (Watson built its semantic Web from the 400TB corpus. It keeps the semantic Web, but not the corpus.)

  • Runs IBM DeepQA software, which has open-source components: Apache Hadoop distributed filesystem and Apache UIMA for natural language processing.

  • SUSE Linux.

  • One full-time sysadmin on staff.

  • Ten compute racks, 80kW of power, 20 tons of cooling (for comparison, a human has one brain, which fits in a shoebox, can run on a tuna-fish sandwich and can be cooled with a handheld paper fan).

How Does Watson Work?

First, Watson develops a semantic net. Watson takes a large volume of text (the corpus) and parses it with natural language processing to create "syntactic frames" (subject→verb→object). It then uses the syntactic frames to create "semantic frames", each of which carries a probability. Here are some examples of semantic frames:

  • Inventors patent inventions (.8).

  • Fluid is a liquid (.6).

  • Liquid is a fluid (.5).

Why isn't the probability 1 in any of these examples? Because of phrases like "I speak English fluently", where a fluid-derived word carries no liquid meaning. Usages like that tend to skew the numbers.
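A semantic frame with a confidence score can be modeled as a weighted triple. Here is a minimal sketch of that idea (the data structure and names are illustrative, not Watson's actual internals):

```python
# Minimal sketch of semantic frames: (subject, relation, object) triples,
# each carrying a confidence score rather than a hard truth value.
semantic_net = {
    ("inventors", "patent", "inventions"): 0.8,
    ("fluid", "is_a", "liquid"): 0.6,
    ("liquid", "is_a", "fluid"): 0.5,
}

def confidence(subject, relation, obj):
    """Return the net's confidence that a frame holds, or 0.0 if unseen."""
    return semantic_net.get((subject, relation, obj), 0.0)

print(confidence("fluid", "is_a", "liquid"))  # 0.6
```

Note the asymmetry between ("fluid", "is_a", "liquid") and its reverse: the confidence attached to a frame depends on how often the corpus supports that particular direction of the relation.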

To answer questions, Watson uses a Massively Parallel Probabilistic Evidence-Based Architecture. It uses the evidence from its semantic net to analyze the hypotheses it builds up to answer the question. You should watch the video of Michael's presentation and look at the slides, as there is really too much under the hood to present in a short article, but in a nutshell, Watson generates a huge number of hypotheses (potential answers) and uses evidence from its semantic net to assign probabilities to them, then picks the most likely answer.
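The generate-score-rank loop described above can be sketched in a few lines. This is a toy illustration of the shape of the pipeline only: the scorers, weights and data below are invented for the example, and Watson's real system uses hundreds of learned, weighted scorers rather than a simple average.

```python
# Sketch of evidence-based answer ranking: candidate answers are scored by
# several independent evidence scorers, the scores are combined, and the
# highest-confidence candidate wins. All scores here are illustrative.

def type_scorer(question, candidate):
    # Does the candidate match the answer type the question asks for?
    return 0.9 if candidate == "Thomas Edison" else 0.4

def keyword_scorer(question, candidate):
    # How strongly does supporting text tie the candidate to the question?
    return 0.8 if "Edison" in candidate else 0.3

SCORERS = [type_scorer, keyword_scorer]

def rank(question, candidates):
    def combined(candidate):
        scores = [scorer(question, candidate) for scorer in SCORERS]
        return sum(scores) / len(scores)  # Watson learned weights instead
    return max(candidates, key=combined)

best = rank("Who patented the phonograph?",
            ["Thomas Edison", "Alexander Bell", "Nikola Tesla"])
print(best)  # Thomas Edison
```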

There are many algorithms at play in Watson. Watson even can learn from its mistakes and change its Jeopardy strategy.

Watson Is Built on Open Source

Watson is built on the Apache UIMA framework, uses Apache Hadoop, runs on Linux, and uses xCAT and Ganglia for configuration management and monitoring—all open-source tools.

Interview with Eddie Epstein on System Administration of the Watson Supercomputer

Eddie Epstein is the IBM researcher responsible for scaling out Watson's computation over thousands of compute cores in order to achieve the speed needed to be competitive in a live Jeopardy game. For the past seven years, Eddie managed the IBM team doing ongoing development of Apache UIMA. Eddie was kind enough to answer my questions about system administration of the Watson cluster.

AT: Why did you decide to use Linux?

EE: The project started with x86-based blades, and the researchers responsible for admin were very familiar with Linux.

AT: What configuration management tools did you use? How did you handle updating the Watson software on thousands of Linux servers?

EE: We had only hundreds of servers. The servers ranged from 4- to 32-core machines. We started with CSM to manage OS installs, then switched to xCAT.

Configuration Management of the Watson Cluster

CSM is IBM's proprietary Cluster Systems Management software. It is intended to simplify administration of a cluster and includes parallel execution capability for high-volume pushes. IBM describes it as follows:

[CSM is] designed for simple, low-cost management of distributed and clustered IBM Power Systems in technical and commercial computing environments. CSM, included with the IBM Power Systems high-performance computer solutions, dramatically simplifies administration of a cluster by providing management from a single point-of-control.... In addition to providing all the key functions for administration and maintenance of typical distributed systems, CSM is designed to deliver the parallel execution required to manage clustered computing environments effectively.

xCAT also originated at IBM. It was open-sourced in 2007. The xCAT Project slogan is "Extreme Cloud Administration Toolkit", and its logo is a cat skull and crossbones. The project's site describes it as follows:

  • Provision operating systems on physical or virtual machines: SLES10 SP2 and higher, SLES 11 (incl. SP1), RHEL5.x, RHEL 6, CentOS4.x, CentOS5.x, SL 5.5, Fedora 8-14, AIX 6.1, 7.1 (all available technology levels), Windows 2008, Windows 7, VMware, KVM, PowerVM and zVM.

  • Scripted install, stateless, satellite, iSCSI or cloning.

  • Remotely manage systems: integrated lights-out management, remote console, and distributed shell support.

  • Quickly set up and control management node services: DNS, HTTP, DHCP and TFTP.

xCAT offers complete and ideal management for HPC clusters, render farms, grids, WebFarms, on-line gaming infrastructure, clouds, data centers, and whatever tomorrow's buzzwords may be. It is agile, extendible and based on years of system administration best practices and experience.

xCAT grew out of a need to rapidly provision IBM x86-based machines and has been under active development since 1999; more than a decade on, it continues to evolve.

AT: xCAT sounds like an installation system rather than a change management system. Did you use an SSH-based "push" model to push out changes to your systems?

EE: xCAT has very powerful push features, including a multithreaded push that interacts with different machines in parallel. It handles OS patches, upgrades and more.
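The multithreaded push Eddie describes follows a familiar fan-out pattern: run the same command against many machines in parallel and collect the results. Here is a minimal sketch of that pattern; the host names are invented, and the remote call is stubbed with a local `echo` (a real push would invoke something like `ssh <host> <command>` instead):

```python
# Sketch of a parallel "push": fan the same command out to many hosts
# using a thread pool, gathering each host's output.
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["node001", "node002", "node003"]  # illustrative host names

def push(host, command):
    # Real version would run: ["ssh", host, command]
    result = subprocess.run(["echo", f"{host}: {command}"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def push_all(hosts, command, workers=32):
    # Threads suffice here because each worker just waits on a subprocess.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda h: push(h, command), hosts))

for line in push_all(HOSTS, "uptime"):
    print(line)
```

With hundreds of servers, the win over a serial loop is substantial: slow or unreachable hosts no longer hold up everyone behind them.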

AT: What monitoring tool did you use and why? Did you have any cool visual models of Watson's physical or logical activity?

EE: The project used a home-grown cluster management system for development activities, which had its own monitor; it also incorporated Ganglia. This tool was the basis for managing about 1,500 cores.

The Watson game-playing system used UIMA-AS with a simple SSH-based process launcher. The emphasis there was on measuring every aspect of runtime performance in order to reduce the overall latency. Visualization of performance data was then done after the fact. UIMA-AS managed the work on thousands of cores.
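The emphasis on measuring every aspect of runtime performance can be illustrated with a per-stage timer. This is a generic instrumentation sketch, not Watson's actual tooling; the pipeline stages below are invented stand-ins:

```python
# Sketch of per-stage latency measurement: wrap each pipeline stage with
# a timer so end-to-end latency can be broken down after the fact.
import time
from functools import wraps

timings = {}  # stage name -> list of elapsed times in seconds

def timed(stage):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                timings.setdefault(stage, []).append(
                    time.perf_counter() - start)
        return wrapper
    return decorator

@timed("parse")
def parse(question):
    return question.lower().split()

@timed("answer")
def answer(tokens):
    return tokens[-1]

answer(parse("What is Watson"))
for stage, samples in timings.items():
    avg = sum(samples) / len(samples)
    print(f"{stage}: {avg:.6f}s avg over {len(samples)} call(s)")
```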

What Is UIMA-AS?

UIMA (Unstructured Information Management Architecture) is the open-source technology framework underpinning Watson. It is a framework for analyzing a sea of data to discover vital facts: the computer takes unstructured data as input, turns it into structured data, and then analyzes and works with the structured data to produce useful results.

The analysis is "multi-modal", which means many algorithms are employed, and many kinds of algorithms. For example, Watson had one group of algorithms for generating hypotheses, using geo-spatial reasoning, temporal reasoning (drawing on its historical database), a pun engine and so on, and another group of algorithms for scoring and pruning the hypotheses to find the most likely answer.

In a nutshell, this is Massively Parallel Probabilistic Evidence-Based Architecture. (The evidence comes from Watson's 400TB corpus of data.)

The "AS" stands for Asynchronous Scaleout, and it's a scaling framework for UIMA—a way to run UIMA on modern, highly parallel cores, to benefit from the continuing advance in technology. UIMA brings "thinking computers" a giant step closer.

To understand unstructured information, first let's look at structured information. Computers speak with each other using structured information. Sticking to structured information makes it easier to extract meaning from data. HTML and XML are examples of structured information. So is a CSV file. Structured information standards are maintained by OASIS.
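The difference is easy to see with a concrete example. In a CSV file, the format itself tells the program what each field means, so extraction is mechanical; the same fact stated in free text requires language analysis. (The data below is illustrative.)

```python
# Structured information: the layout carries the meaning, so a standard
# parser can pull out any field directly.
import csv
import io

structured = "name,invention,year\nThomas Edison,phonograph,1877\n"
rows = list(csv.DictReader(io.StringIO(structured)))
print(rows[0]["invention"])  # phonograph

# The same fact as unstructured information: a program must analyze the
# language itself to recover who invented what, and when.
unstructured = "In 1877, Thomas Edison invented the phonograph."
```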

Unstructured information is much more fluid and free-form. Human communication uses unstructured information. Until UIMA, computers had been largely unable to make sense of unstructured information. Examples of unstructured information include audio (music), e-mails, medical records, technical reports, blogs, books and speech.

UIMA was originally an internal IBM Research project. It is a framework for creating applications that do deep analysis of natural human language text and speech.

In Watson, UIMA managed the work on nearly 3,000 cores. Incidentally, Watson could run on a single core, but it would take six hours to answer a question. With 3,000 cores, that time is cut to 2–6 seconds. Watson really takes advantage of massively parallel architecture to speed up its processing.
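It is worth checking what those figures imply about scaling. Six hours of single-core work spread perfectly over 3,000 cores would ideally finish in about 7 seconds, which is close to the observed 2–6 second range, so the scale-out is near-linear. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope scaling check for the figures quoted above:
# six hours of single-core work spread over ~3,000 cores.
single_core_seconds = 6 * 60 * 60  # 21,600 seconds
cores = 3000

# Ideal (perfectly parallel) time: total work divided by core count.
ideal_parallel_seconds = single_core_seconds / cores
print(ideal_parallel_seconds)  # 7.2
```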

AT: What were the most useful system administration tools for you in handling Watson and why?

EE: clusterSSH was quite useful. That and simple shell scripts with SSH did most of the work.

AT: How did you handle upgrading Watson software? SSH in, shut down the service, update the package, start the service? Or?

EE: Right, the Watson application is just restarted to pick up changes.

AT: How did you handle packaging of Watson software?

EE: The Watson game player was never packaged up to be delivered elsewhere.

AT: How many sysadmins do you have handling how many servers? You mentioned there were hundreds of operating system instances—could you be more specific? (How many humans and how many servers?) Is there actually a dedicated system administration staff, or do some of the researchers wear the system administrator hat along with their researcher duties?

EE: We have on the order of 800 OS instances. After four years, we finally hired a sysadmin; before that, it was a part-time job for each of three researchers with root access.

AT: Regarding your monitoring system, how did you output the system status?

EE: We are not a production shop. If the cluster has a problem, only our colleagues complain.

What's Next?

IBM wants to make DeepQA useful, not just entertaining. Possible fields of application include healthcare, life sciences, tech support, enterprise knowledge management and business intelligence, government, improved information sharing and security.


Resources

IBM's Watson Site: "What is Watson?", "Building Watson" and "Watson for a Smarter Planet"

IBM's DeepQA Project

Eddie Epstein's IBM Researcher Profile

Wikipedia Article on Watson

Apache UIMA
