The Crystal Experiment
Imagine this: in a 27-kilometer circular pipe running through a tunnel drilled over 100 meters underground, two beams of a few billion protons each are accelerated to an energy in excess of 14,000 times their own rest mass and collide head-on, generating a small big bang in which hundreds and hundreds of newly created particles are violently projected in all directions. Sifting through these collisions, thousands of physicists from all around the world will try to find a few new particles whose existence, according to modern theories, would give new insight into the deepest symmetries of the universe and possibly explain the origin of mass itself.
This almost science-fiction scenario is more or less what will happen near Geneva, Switzerland, at CERN (see Resources), the European Center for Nuclear Research, when the Large Hadron Collider (LHC) starts operations in the year 2005. The instruments the scientists will use to observe these very high-energy interactions are two huge and extremely complex particle detectors, code-named ATLAS and CMS, each weighing over 10,000 tons, positioned around the points where the protons will collide.
Our experimental physics group is now involved in a multi-disciplinary R&D project (see Resources) related to the construction of one of the two detectors, CMS (Compact Muon Solenoid). In particular, we are studying the characteristics of a new crystal, lead tungstate (PWO), which emits visible light when hit by a particle. About 100,000 small PWO bars (Figure 1) will make up the part of the CMS detector called the “electromagnetic calorimeter”, which will measure the energy of all the electrons and photons created in the collisions.
Figure 2. The dark chamber of our experimental bench: crystals to be measured are inserted here. The rail on the top moves a small radioactive source along the crystal (here wrapped in aluminum foil) and the produced light is collected by the phototube on the left.
In our laboratory, located in the Physics Department of the University “La Sapienza” in Rome, Italy, we spent the past two years setting up a full experimental bench to measure all the interesting properties of this crystal. The PWO crystals are inserted into a dark chamber (Figure 2), and a small radioactive source is used to excite them so that we can measure the small quantities of light produced. Instruments used on the bench include light detectors, temperature probes, analog-to-digital converters (ADCs), high-voltage power supplies and step motors (Figure 3). To interconnect and control most of these instruments and to allow a digital readout of the data, we used the somewhat dated (but perfectly suited to our needs) CAMAC standard.
Figure 3. The electric signal coming from the phototube is fed into a CAMAC-based DAQ chain which amplifies and digitizes it before sending it to our computer. The photo shows all the instruments involved in the operation.
One of the problems we had to face when the project began at the end of 1995 was how to connect the data acquisition (DAQ) chain to a computer system for data collection without exceeding our limited budget. One possibility was to use an old ISA-bus-based CAMAC controller board available from past experiments. This was a CAEN A151 board released in 1990, a low-level device which nonetheless guaranteed the speed we needed. We then bought an off-the-shelf 100MHz Pentium PC to handle all the communications. The problem was how to use it. CAEN provided only a very old MS-DOS software driver which, of course, hardly suited our needs, as that single-user, single-tasking operating system could not easily fit into our UNIX-based environment.
One of us (E.L.) was using Linux at the time on his PC at home, where he had come to appreciate Linux's stability and the possibilities offered by the complete availability of the source code. The idea of using such a system in our lab presented several appealing features. First, Linux would give us a very reliable and efficient operating system. The fraction of CPU time spent in user programs is quite large with respect to the time used by the kernel, and there is complete control over priorities and resource sharing among processes. This feature is of great importance when the timing requirements of the DAQ program are strict (but not so strict as to require a true real-time system): data acquisition can be given maximum priority over any other task that may be running on the same computer, such as monitor programs or user shells.
Moreover, we had access to a large UNIX cluster composed of HP workstations which we could use for data analysis. Using Linux, with all the facilities typical of a UNIX OS and the GNU compilers, the data acquisition system could be smoothly integrated with this cluster. Porting scripts and programs would be straightforward, and the TCP/IP-based tools (NFS, FTP) would permit automatic data transfer among the systems. Also, the use of X-based graphical interfaces would permit remote monitoring of ongoing DAQ sessions, not only from our offices, located a few hundred meters from the lab, but also from remote locations such as CERN.
The multi-user environment would allow personalized access to data and programs, e.g., granting some users permission to start a DAQ session but not to modify the software, and preventing users from interfering with ongoing DAQ sessions.
Last but not least, the entire system would be completely free under the GNU General Public License, including compilers, development tools, GUIs and all the other goodies that come with Linux.
All these advantages were quite clear in our minds, but exploiting Linux still depended on being able to use our old CAMAC controller board. It is here that Linux demonstrated its full potential as the operating system of choice for our lab.