The Crystal Experiment

by Emanuele Leonardi

Imagine this: in a 27-kilometer-long circular pipe running along a tunnel drilled over 100 meters underground, two beams of a few billion protons are accelerated and made to collide head-on at a combined energy in excess of 14,000 times the proton's own mass, generating a small big bang where hundreds and hundreds of newly created particles are violently projected in all directions. Sifting through these, thousands of physicists from all around the world will try to find a few new particles whose existence, according to modern theories, would give new insights into the deepest symmetries of the universe and possibly explain the origin of mass itself.

This almost science-fiction scenario is more or less what will happen near Geneva, Switzerland, at CERN (see Resources [1]), the European Organization for Nuclear Research, when the Large Hadron Collider (LHC) starts its operations in the year 2005. The instruments the scientists will use to observe these very high-energy interactions are two huge and extremely complex particle detectors, code-named ATLAS and CMS, each weighing over 10,000 tons, positioned around the point where the protons will collide.

Figure 1. This photo shows one of the roughly 100,000 lead tungstate scintillating crystals that will be used in the electromagnetic calorimeter of the CMS experiment.

Our experimental physics group is now involved in a multi-disciplinary R&D project (see Resources [2]) related to the construction of one of the two detectors, CMS (Compact Muon Solenoid). In particular, we are studying the characteristics of a new crystal, lead tungstate (PbWO4, or PWO for short), which, when hit by a particle, emits visible light. About 100,000 small PWO bars (Figure 1) will make up the part of the CMS detector called the “electromagnetic calorimeter”, which will measure the energy of all the electrons and photons created in the collisions.

Figure 2. The dark chamber of our experimental bench: crystals to be measured are inserted here. The rail on top moves a small radioactive source along the crystal (here wrapped in aluminum foil), and the light produced is collected by the phototube on the left.

In our laboratory, located in the Physics Department of the University “La Sapienza” in Rome, Italy, we spent the past two years setting up a full experimental bench to measure all the interesting properties of this crystal. The PWO crystals are inserted into a dark chamber (Figure 2), and a small radioactive source is used to excite them so that we can measure the small quantities of light produced. Instruments used on the bench include light detectors, temperature probes, analog-to-digital converters (ADC), high-voltage power supplies, and step motors (Figure 3). To interconnect and control most of these instruments and to allow digital readout of the data, we used the somewhat old (but perfectly adequate for our needs) CAMAC standard.

Figure 3. The electric signal coming from the phototube is fed into a CAMAC-based DAQ chain which amplifies and digitizes it before sending it to our computer. The photo shows all the instruments involved in the operation.

One of the problems we had to face when the project began at the end of 1995 was how to connect the data acquisition (DAQ) chain to a computer system for data collection without exceeding our limited budget. One possibility was to use an old ISA-bus CAMAC controller board left over from past experiments. This was a CAEN A151 board released in 1990, a low-level device which nonetheless guaranteed the speed we needed. We then bought an off-the-shelf 100 MHz Pentium PC to handle all the communications. The problem was how to use it. CAEN only provided a very old MS-DOS software driver which, of course, hardly suited our needs, as a single-user, single-tasking operating system could not easily fit into our UNIX-based environment.

Enter Linux

One of us (E.L.) was using Linux at the time on his PC at home, where he had come to appreciate Linux's stability and the possibilities offered by the complete availability of the source code. The idea of using such a system in our lab presented several appealing features. First, Linux would give us a very reliable and efficient operating system. The fraction of CPU time spent in user programs is quite large with respect to the time used by the kernel, and there is complete control over priorities and resource sharing among processes. This feature is of great importance when the timing requirements of the DAQ program are strict (though not so strict as to require a true real-time system): data acquisition can be given maximum priority over any other task that may be running on the same computer, such as monitor programs or user shells.

Moreover, we had access to a large UNIX cluster of HP workstations which we could use for data analysis. Using Linux, with all the facilities typical of a UNIX OS and the GNU compilers, the data acquisition system could be smoothly integrated with this cluster. Porting scripts and programs would be straightforward, and TCP/IP-based tools (NFS, FTP) would permit automatic data transfer among the systems. Also, X-based graphical interfaces would permit remote monitoring of ongoing DAQ sessions, not only from our offices, located a few hundred meters from the lab, but also from remote locations such as CERN.

The multi-user environment would allow personalized access to data and programs, e.g., granting some users permission to start a DAQ session but not to modify the software or interfere with ongoing DAQ sessions.

Last but not least, the entire system would be completely free under the GNU license, including compilers, development tools, GUIs and all the public domain goodies that come with Linux.

All these advantages were quite clear in our minds, but exploiting Linux still depended on being able to use our old CAMAC controller board. It is here that Linux proved its great potential as the operating system of choice for our lab.

The CAMAC Device Driver

Our CAMAC controller consisted of a board which, once inserted into an ISA slot of a computer, could connect to as many as seven different CAMAC crates, each containing up to 22 different specialized devices connected to measuring instruments.

This board was mapped onto a known set of ISA-bus memory addresses through which a user could send commands to each individual instrument and retrieve the responses. UNIX permits access to physical memory addresses only at the kernel level: it was clear that we needed a set of software routines dedicated to interacting with the DAQ board, usually called a device driver, to be inserted into the Linux kernel.

Though we were aware of the existence of kernel-level device drivers, none of us knew exactly how to write one. (All this happened a few months before the appearance of the good article by Alessandro Rubini. See Resources [3].) We decided to ask for help on the comp.os.linux.hardware newsgroup, and less than 24 hours later we were contacted by Ole Streicher, a German researcher who sent us the source code of a device driver he had written for a different CAMAC controller (see Resources [4]). Adapting it to our board was a matter of a couple of days, and then experimental data were happily flowing in and out of our DAQ system: the Linux option was finally open.

The ability to dynamically load and unload modules in the kernel, a feature which had been introduced in Linux only a few months before, was of great help in the driver development phase.
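
To give a feeling for what such a driver involves, here is a heavily simplified sketch of the classic character-device skeleton. Everything in it is hypothetical: the major number, the I/O base address and the one-command-word protocol are ours for illustration only, the real register layout comes from the CAEN A151 documentation, and for readability we use the kernel interfaces of kernels more recent than the early ones we actually developed on.

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/uaccess.h>
#include <linux/io.h>

#define CAMAC_MAJOR  60       /* hypothetical major number     */
#define CAMAC_IOBASE 0x300    /* hypothetical ISA base address */

/* Reading the device file fetches one response word from the board. */
static ssize_t camac_read(struct file *file, char __user *buf,
                          size_t count, loff_t *ppos)
{
    u16 word = inw(CAMAC_IOBASE);

    if (count < sizeof(word))
        return -EINVAL;
    if (copy_to_user(buf, &word, sizeof(word)))
        return -EFAULT;
    return sizeof(word);
}

/* Writing to the device file sends one command word to the board. */
static ssize_t camac_write(struct file *file, const char __user *buf,
                           size_t count, loff_t *ppos)
{
    u16 word;

    if (count < sizeof(word))
        return -EINVAL;
    if (copy_from_user(&word, buf, sizeof(word)))
        return -EFAULT;
    outw(word, CAMAC_IOBASE);
    return sizeof(word);
}

static const struct file_operations camac_fops = {
    .owner = THIS_MODULE,
    .read  = camac_read,
    .write = camac_write,
};

static int __init camac_init(void)
{
    return register_chrdev(CAMAC_MAJOR, "camac", &camac_fops);
}

static void __exit camac_exit(void)
{
    unregister_chrdev(CAMAC_MAJOR, "camac");
}

module_init(camac_init);
module_exit(camac_exit);
MODULE_LICENSE("GPL");

A module like this can be loaded with insmod, exercised, fixed and loaded again with rmmod and insmod, with no reboot in between: this is exactly what made the development cycle so fast.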

DAQ Control and Monitoring Programs

From the user's point of view, the CAMAC system was now visible as a set of simple device files, one for each crate, which could be opened, closed, written to and read from. We also provided an interface library to hide the low-level details of the CAMAC operations and facilitate code writing; a sketch of the pattern is shown below. The availability of both the gcc C compiler and the f2c FORTRAN-to-C converter allowed us to provide both a C and a FORTRAN version of this interface library, so that our colleagues could write their own DAQ programs in either language.
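
A minimal user program might look like the following sketch. The device file name, the helper camac_cnaf() and the bit packing of the command word are hypothetical (the real encoding is fixed by the A151 and by our library), but the open/write/read sequence is exactly how the driver is used. In CAMAC terminology, each operation addresses a station N and subaddress A with a function code F, where codes 0 through 7 are reads.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Execute one CAMAC command: station N, subaddress A, function code F.
 * The bit packing of the command word is hypothetical.  Function codes
 * 0-7 are reads, so for those we also fetch the data word that follows.
 */
static int camac_cnaf(int fd, int n, int a, int f, uint32_t *data)
{
    uint16_t cmd = (n << 9) | (a << 5) | f;

    if (write(fd, &cmd, sizeof(cmd)) != sizeof(cmd))
        return -1;
    if (f < 8 && read(fd, data, sizeof(*data)) != sizeof(*data))
        return -1;
    return 0;
}

int main(void)
{
    uint32_t adc_value;
    int fd = open("/dev/camac0", O_RDWR);   /* crate 0 */

    if (fd < 0) {
        perror("open /dev/camac0");
        return 1;
    }

    /* F(0) A(0) N(5): read channel 0 of an ADC sitting in station 5. */
    if (camac_cnaf(fd, 5, 0, 0, &adc_value) == 0)
        printf("ADC channel 0: %lu counts\n", (unsigned long)adc_value);

    close(fd);
    return 0;
}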

Using this library, we wrote the main DAQ program, which was able to automatically set the run conditions, control the movement of the radioactive source via a serial link to a step motor, send light pulses to calibrate the light sensors, and collect and analyze on the fly the data coming from the DAQ system. To write the user interface, we used the Tcl/Tk package (see Resources [5]): all the program controls appeared in a graphical window which could be opened on any X display (Figure 4).

Figure 4. This is a snapshot of our PC screen during an actual DAQ run. In the center you can see the Tcl/Tk-based control interface, while the window on the left shows the data collected during the run. The histogram, updated in real time during the run, is created using the HBOOK and HPLOT packages of the CERN libraries.
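
The heart of such a program is a simple loop: wait until the ADC has finished a conversion, read the value, and accumulate it into a histogram. The sketch below shows the shape of that loop under the same hypothetical conventions as the previous listing, with a plain C array standing in for an HBOOK histogram. F(8) is the standard CAMAC test-LAM (Look-At-Me) function and F(2) the standard read-and-clear, but the way the Q response travels back through the driver is controller-specific and is simplified here, as is the assumption of a 12-bit ADC.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define NBINS 256                   /* 12-bit ADC folded into 256 bins */

static unsigned long histo[NBINS];  /* stands in for an HBOOK histogram */

/* camac_cnaf() exactly as in the previous listing. */
static int camac_cnaf(int fd, int n, int a, int f, uint32_t *data)
{
    uint16_t cmd = (n << 9) | (a << 5) | f;

    if (write(fd, &cmd, sizeof(cmd)) != sizeof(cmd))
        return -1;
    if (f < 8 && read(fd, data, sizeof(*data)) != sizeof(*data))
        return -1;
    return 0;
}

/*
 * Test the Look-At-Me line of station n: 1 means a conversion is done.
 * Pretending the Q response arrives as an ordinary data word is a
 * simplification made for this sketch.
 */
static int camac_test_lam(int fd, int n)
{
    uint16_t cmd = (n << 9) | 8;    /* F(8), subaddress 0 */
    uint32_t q = 0;

    if (write(fd, &cmd, sizeof(cmd)) != sizeof(cmd))
        return -1;
    if (read(fd, &q, sizeof(q)) != sizeof(q))
        return -1;
    return q != 0;
}

int main(void)
{
    unsigned long collected = 0, nevents = 10000;
    uint32_t value;
    int fd = open("/dev/camac0", O_RDWR);
    int bin;

    if (fd < 0) {
        perror("open /dev/camac0");
        return 1;
    }

    while (collected < nevents) {
        if (camac_test_lam(fd, 5) != 1)
            continue;                        /* conversion not done yet */
        if (camac_cnaf(fd, 5, 0, 2, &value)) /* F(2): read and clear */
            break;
        if (value < 4096) {                  /* assuming a 12-bit ADC */
            histo[value >> 4]++;             /* 4096 / 16 = 256 bins */
            collected++;
        }
    }

    for (bin = 0; bin < NBINS; bin++)        /* dump the spectrum */
        printf("%3d %lu\n", bin, histo[bin]);

    close(fd);
    return 0;
}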

In parallel with the DAQ program, we wrote a program to monitor the status of the data acquisition and of some important parameters such as the number of events collected, the event rate and average values. Using CERNLIB (see Resources [6]), a package of scientific libraries developed at CERN, freely available along with its source code and widely used in the high-energy physics community, we interfaced the monitor program to a simple analysis facility. This allowed us to access interesting information and perform some preliminary analysis even while data taking was in progress (Figure 4).

Figure 5. The authors of this article in their natural environment. On the left, with an arm on the DAQ PC, is Emanuele Leonardi and on the right is Giovanni Organtini.

Performance

An important factor for a DAQ system is its timing performance. If the controlling software is too slow, data may be lost, and the time required to collect a useful amount of data can grow to an unacceptable level.

We found the only time-limiting factor in our system was the conversion time of the ADC board; the operating system could easily keep pace with the DAQ task, even while running several other user tasks. This is very important, as this year our bench will move from a prototype level with a single active DAQ chain to an industrial-strength production facility where multiple measurements will proceed in parallel in order to quickly handle all of the many thousands of crystals needed for the CMS experiment.

In practice, we measured the time to execute a single CAMAC operation to be on the order of 10 microseconds: large with respect to the 1.5-microsecond minimum CAMAC operation time, but very good for an inexpensive board such as the CAEN A151, and much lower than the ADC response time of 110 microseconds. In other words, even allowing a few CAMAC cycles per event, the 110-microsecond conversion dominates the readout time, so it is the ADC, not the operating system, that sets the ceiling on the event rate.

Conclusion

Thanks to the introduction of Linux in our lab, we were able to build a complete data acquisition and monitoring system using an off-the-shelf Pentium PC and a low-cost CAMAC board.

The system has been performing flawlessly since the beginning of 1996, and the data collected have been used to study the properties of PWO crystals, which will be used in the CMS experiment at CERN.

The key points in using Linux were the availability of the kernel code and the enthusiasm and technical knowledge of the Linux community, which enabled us to create a personalized device driver for our data acquisition system. The standard UNIX tools and the GNU compilers guaranteed perfect integration with the existing machines and immediate acceptance of the system by all the physicists in our group.

As soon as we started to show our work, we were invited to several congresses dedicated to computing for high-energy physics and data acquisition systems around Europe (the PCaPAC'96 Workshop in Hamburg, Germany, the ESONE'97 Workshop at CERN and the CHEP'97 Conference in Berlin, Germany). Everywhere we went, we met many other Linux enthusiasts working on related projects; the interest of the high-energy physics community in Linux is very high indeed.

We now plan to use this same system for a larger automatic bench which, over the next six years, will measure the properties of the tens of thousands of crystals needed to build the electromagnetic calorimeter of the CMS experiment.

For those interested in our work, an archive containing the latest version of the device driver code and the interface libraries can be found on our FTP site at ftp://ftpl3.roma1.infn.it/pub/linux/local/.

Resources

Emanuele Leonardi got his Ph.D. in physics in 1997 at the University “La Sapienza” in Rome. He is now working as a technology researcher for the National Institute of Nuclear Physics in Rome.

Giovanni Organtini got his Ph.D. in physics in 1995 at the University “La Sapienza” in Rome. He is now a physics researcher at the University RomaTRE in Rome.

Both authors worked in the L3 experiment at CERN where they published several physics papers and are now collaborating on the CMS experiment R&D phase.
