Automated Imaging Microscope System
The Aging Research Centre (ARC) is developing an Automated Imaging Microscope System (AIMS) to enable researchers to view the three-dimensional cellular organization of a region of tissue from a series of microscope slides. AIMS employs a computer-controlled microscope that moves a microscope slide along the X and Y axes, so that the field of view can be stepped in equal increments. The microscope also has an autofocus mechanism. A CCD camera captures images, one field of view at a time. At 400× magnification, a typical slide contains 25,000 fields of view, of which perhaps 7,000 contain something of interest. From the thousands of individually captured images, the system reconstructs a very high-resolution image of the entire slide.
AIMS analyzes each of the thousands of images, identifying the cell types it was instructed to find. For each such cell, it stores the following attributes: the X, Y and Z coordinates; the type, amount and color of each stain in the cell; and the cell type.
When AIMS has processed multiple layers of a tissue block, the system will be able to reconstruct a three-dimensional map of the cells in that block. In an experiment where there are two groups of animals and the researcher wishes to compare tissues from the two groups, AIMS will be able to compare three-dimensional structures as well as cell counts.
The system has two very different functions: acquisition and analysis. When we designed the system, we started on both ends simultaneously, attempting to analyze images captured from other sources while working toward capturing our own data. Because we want the system to be used by biology researchers, not UNIX gurus, a decent user interface is one of the primary goals. We chose Tcl/Tk for the ease with which a GUI can be developed, and its easily modifiable prototypes have proved useful. Various commands, written in C but with Tcl/Tk interfaces, control the physical devices.
Physically, the system consists of an optical microscope, a CCD camera and three stepper motors hooked to the parallel port. The stepper motors move a microscope slide around the stage under computer control; in effect, we have a 60,000dpi scanner.
The stepper motors are driven by Darlington-pair transistors (see http://www.doc.ic.ac.uk/~ih/doc/stepper/ for a discussion of stepper-motor control). This allows fine control, at the expense of hefty timing requirements. For comparison, the Linux kernel steps floppy-drive motors every three to eight milliseconds; empirically, given the load on our motors, we can sometimes move more than twice a second. We've hooked the motors to the parallel port, which has eight data lines and four control lines usable for output. This is just sufficient to control three stepper motors at four lines per motor; if we needed more motors, we would have to use a different type of controller. Figure 1 shows the microscope and its focus mechanism. Figure 2 shows the circuit driving the motors: two ULN2003A chips powered by a spare computer power supply.
We need to do a lot of moving, so speed is very important. Using nanosleep seems the simplest, if not the best, alternative. Combined with real-time priority, it makes the motors move with a nice smooth hum. The other alternative, using the real-time clock at 2,048Hz, doesn't allow as precise control over speed. The major problem with the nanosleep approach is the way it handles short delays: it busy-waits, preventing any other task from running. RTLinux does seem like a better solution, although we haven't investigated it yet. The older usleep call is a poor choice, as it has 10ms granularity.
The greatest problem is figuring out how fast to move each motor. The speed a stepper motor can sustain depends on the load placed on it, and that load varies with the friction in the stage: a speed that works at one location may not work at all at another. Empirical experimentation seems necessary.
We can move in three dimensions: not only can we view the entire slide, we can also change the focus. A jury-rigged autofocus seems to work—move up and down, and pick the image with the most detail, on the assumption that the most detailed image is the best focused. Technically, “detail” is measured by a “busyness” function: for each pixel, find the difference in intensity between it and each of its neighbors, then sum the absolute values of those differences.