The Crystal Experiment
Imagine this: in a 27-kilometer circular pipe running through a tunnel drilled more than 100 meters underground, two beams of a few billion protons each are accelerated and made to collide head-on at a total energy in excess of 14,000 times the proton rest mass, generating a small big bang in which hundreds and hundreds of newly created particles are violently projected in all directions. Sifting through these, thousands of physicists from all around the world will try to find a few new particles whose existence, according to modern theories, would give new insights into the deepest symmetries of the universe and possibly explain the origin of mass itself.
This almost-science-fiction scenario is more or less what will happen near Geneva, Switzerland, at CERN (see Resources), the European Organization for Nuclear Research, when the Large Hadron Collider (LHC) starts operating in the year 2005. The instruments scientists will use to observe these very high-energy interactions are two huge and extremely complex particle detectors, code-named ATLAS and CMS, each weighing over 10,000 tons and positioned around the points where the protons will collide.
Our experimental physics group is now involved in a multi-disciplinary R&D project (see Resources) related to the construction of one of the two detectors, CMS (Compact Muon Solenoid). In particular, we are studying the characteristics of a new crystal, lead tungstate (PbWO4, or PWO), which emits visible light when hit by a particle. About 100,000 small PWO bars (Figure 1) will make up the part of the CMS detector called the “electromagnetic calorimeter”, which will measure the energy of all the electrons and photons created in the collisions.
Figure 2. The dark chamber of our experimental bench: crystals to be measured are inserted here. The rail on the top moves a small radioactive source along the crystal (here wrapped in aluminum foil) and the produced light is collected by the phototube on the left.
In our laboratory, located in the Physics Department of the University “La Sapienza” in Rome, Italy, we spent the past two years setting up a full experimental bench to measure all the relevant properties of this crystal. The PWO crystals are inserted into a dark chamber (Figure 2), and a small radioactive source is used to excite them so that we can measure the small quantities of light produced. Instruments used on the bench include light detectors, temperature probes, analog-to-digital converters (ADC), high-voltage power supplies, and step motors (Figure 3). To interconnect and control most of these instruments and to allow a digital readout of the data, we used the somewhat old (but perfectly adequate for our needs) CAMAC standard.
Figure 3. The electric signal coming from the phototube is fed into a CAMAC-based DAQ chain which amplifies and digitizes it before sending it to our computer. The photo shows all the instruments involved in the operation.
One of the problems we had to face when the project began at the end of 1995 was how to connect the data acquisition (DAQ) chain to a computer system for data collection without exceeding the limited resources of our budget. One possibility was to use an old ISA-bus-based CAMAC controller board available from past experiments. This was a CAEN A151 board released in 1990, a low-level device which nonetheless guaranteed the speed we needed. We then bought an off-the-shelf 100 MHz Pentium PC to handle all the communications. The problem was how to use it: CAEN only provided a very old MS-DOS software driver which, of course, hardly suited our needs, as a single-user, single-tasking operating system could not easily fit into our UNIX-based environment.
One of us (E.L.) was using Linux at the time on his PC at home, where he had come to appreciate Linux's stability and the possibilities offered by the complete availability of the source code. The idea of using such a system in our lab had several appealing features. First, Linux would give us a very reliable and efficient operating system: the fraction of CPU time spent in user programs is quite large compared with the time used by the kernel, and there is complete control of priorities and resource sharing among processes. This feature is of great importance when the timing requirements of the DAQ program are strict (though not so strict as to require a true real-time system): data acquisition can be given maximum priority over any other task that may be running on the same computer, such as monitor programs or user shells.
Moreover, we had access to a large UNIX cluster of HP workstations which we could use for data analysis. With Linux, with all the facilities typical of a UNIX OS and the GNU compilers, the data acquisition system could be smoothly integrated with this cluster. Porting scripts and programs would be straightforward, and TCP/IP-based tools (NFS, FTP) would permit automatic data transfer among the systems. Also, X-based graphical interfaces would permit remote monitoring of ongoing DAQ sessions, not only from our offices, located a few hundred meters from the lab, but also from remote locations such as CERN.
The multi-user environment would allow personalized access to data and programs, e.g., granting some users permission to start a DAQ session but not to modify the software or interfere with ongoing DAQ sessions.
Last but not least, the entire system would be completely free under the GNU General Public License, including compilers, development tools, GUIs and all the other freely available goodies that come with Linux.
All these advantages were quite clear in our minds, but exploiting Linux still depended on being able to use our old CAMAC controller board. It is here that Linux proved its great potential as the operating system of choice for our lab.