Linux and Samba in a Federal Lab
Linux and Samba recently answered the needs of the Army Research Lab (ARL) at Adelphi, Maryland. Our branch does state-of-the-art research into a specific type of laser and amasses large amounts of data during the performance testing of these devices. We were able to connect our test equipment over the network to a Samba server. The twist to this approach is that our configuration makes it appear to users that they are accessing the data through the branch's NT file server. I'll explain the setup in detail, but the main trick is creating a network shortcut on the NT box that points to the Samba share while making the Linux box invisible on the network. Figure 1 depicts the setup of the network.
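To give a flavor of the Samba side of the trick, here is a minimal smb.conf sketch; the workgroup, share name and path are assumptions for illustration, not our exact configuration:

    [global]
        workgroup = BRANCHDOM      # assumption: the branch's NT domain
        security = user
        # Lose every browse-master election so the box never
        # advertises itself in Network Neighborhood
        local master = no
        preferred master = no
        domain master = no
        os level = 0

    [vcseldata]                    # hypothetical share name
        path = /data               # hypothetical data partition
        browseable = no            # keep the share out of browse lists
        writable = yes

The network shortcut on the NT box is then just an ordinary shortcut whose target is the share's UNC path, such as \\sambabox\vcseldata (names assumed here). Users who browse the NT server see the shortcut, double-click it and land on the Samba share without the Linux box ever showing up in a browse list. You can go further and not run nmbd at all, so the machine makes no NetBIOS announcements, as long as its name still resolves through DNS or an LMHOSTS entry.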
Our branch develops extremely small lasers called VCSELs (vertical-cavity, surface-emitting lasers), which fall under the general category of photonics research. We easily can put more than 60 lasers into a square millimeter, and the full wafer containing the lasers can be three inches in diameter, so we can have thousands of devices on a single wafer. Figure 2 shows a picture of a typical VCSEL. The main tests we run to characterize the performance of each VCSEL are called ILV curves, for current, light and voltage. Basically, we see how much light comes out for the power that was put in. Most of the analysis software lives on the users' desktop machines, so they need to be able to access the raw data from there. Users are creatures of habit: getting data pertinent to the branch has historically meant going to the NT server, and we did not want to make them go somewhere else. Instead, we tried to make everything transparent, so it appears as though they are getting the data from the NT server. To force users to go through the NT box, we make the Linux box invisible on the network, and we rely on the security of the NT box to authenticate users accessing the data.
Two pieces of equipment are key to characterizing the VCSELs. The first is the probe station, which is basically just a microscope with some tiny probes and a light meter. The probes apply power to the device, and we measure the optical power produced with the light meter. The second is a 4155B parameter analyzer from Agilent, programmed to sweep the current level and measure the voltage and light. It can be controlled in two main ways: from the front panel or over the GPIB interface. Granted, GPIB is a popular scientific interface and would let us run fancier tests by controlling the test setup with a computer as well as collect the data, but our controlling computer is about five feet down the lab bench and cannot be moved closer, which makes it difficult to start a test while the probes are in place. Fortunately, our main test is simple to program through the front panel. Our test routine is to position the probes while looking through the eyepiece of the microscope, reach up carefully to push the test button on the parameter analyzer, and then save the data. Figure 3 shows the lab hardware.
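Had the bench layout allowed it, driving the analyzer over GPIB is straightforward from a script. The following Python sketch uses the PyVISA library; the GPIB address is an assumption, and the 4155B's instrument-specific sweep commands are deliberately left out:

    import pyvisa

    # Open the analyzer on the GPIB bus; address 17 is an assumption.
    rm = pyvisa.ResourceManager()
    analyzer = rm.open_resource("GPIB0::17::INSTR")

    # *IDN? is the standard IEEE 488.2 identification query.
    print(analyzer.query("*IDN?"))

    # Programming and triggering the ILV sweep would use the 4155B's
    # own command set, which is not shown here.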
After we get a clean run, we need to save the data. The 4155B has three ways to save data: GPIB, floppy and TCP/IP. Since we aren't controlling the analyzer over GPIB, that's not an option. The floppy drive takes 3.5" disks, but they fill up quickly and you have to walk around with them; since we work in several lab areas, it's not unheard of to have to backtrack to recover a temporarily misplaced disk. The answer we put together works because of the TCP/IP support.
The parameter analyzer supports TCP/IP, specifically NFS. You can even ping the analyzer, and since it's registered in the lab's DNS, you can do so by IP address or by name. We were able to put together a Linux box out of obsolete or broken equipment; literally, we pulled parts of three computers into one. It didn't cost the government anything, and it fills the need. For the installation, the newest distribution we had that the P-133 hardware would support was Red Hat 6.2, so we installed that and hardened it with Bastille and the latest patches. Additionally, we turned off all unnecessary services and added SSH. We sliced the hard drive space carefully and ended up with about 1.5GB of space for data. Total install and configuration time was three hours.
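On the Linux side, making a directory available to the analyzer takes one line in /etc/exports; the path and the analyzer's hostname below are assumptions rather than our exact entry:

    # /etc/exports -- path and client hostname are assumptions
    /data    analyzer.lab.example.com(rw)

After editing the file, run exportfs -a (or restart the NFS services on Red Hat 6.2) and check the result with showmount -e. The 4155B is then pointed at the server's name or IP address and the exported directory, and every test run lands directly on the 1.5GB data partition that Samba shares out.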