Linux in the Real World
The United States Army Publications and Printing Command (USAPPC) is, as implied by its name, that part of the Army charged with the creation, publication, and distribution of all Army publications. This can include everything from simple forms up to complete technical manuals. The Command uses CD-ROMs to distribute a list of some 40,000 publications to the field units, who use them as a basis for submitting orders for publications. These CDs are mailed out quarterly, but the process of getting the CDs cut and distributed can take as long as four or five months. See the problem? By the time some customers get their copy of the CD, it has already been replaced by the next one, and can potentially contain obsolete listings. How could this dilemma be solved? Linux, of course!
An ideal platform for distributing or ordering publications is the World Wide Web. The Command already has a TCP/IP link to the Internet; all it would take is a system to run as the server. The Command had looked into putting up a web server in the summer of 1995 but decided against it because they were told it would cost $60,000 to implement. This is where I became involved. After being informed of the situation in November 1995, I asked if I could attempt to set up a web server on one of the Pentiums the Command had. After being told that, due to budget constraints, no money could be spent on this project, I was given a PC. Now for the fun stuff.
Like most Linux enthusiasts, I have a number of CDs with various versions of Linux distributions at home. I brought them in and set to work. First, I installed the Fall release of Slackware 3.0. After figuring out the type of Ethernet card installed (nothing like blind guessing), the installation went smooth as silk. But there were some minor problems running 3.0, so I dropped back to version 2.3 for my base. I installed and configured the entire system in half a day. (It was easy after doing it hundreds of times on my home system.)
Now to find an HTTP server. I looked at the usual choices: NCSA, CERN, Apache... All are good programs, but I ended up going with WN. WN is a fast, flexible HTTPD that has built-in search and image-map capabilities as well as very strong security. It can be found at ftp://ftp.acns.nwu.edu/pub/wn/. Further information may be found at WN's home page: hopf.math.nwu.edu/. The main reasons for this choice were its easy installation and the built-in search engine, which would be perfect for what was needed. Once I had the system up and working, I started building the pages.
It turned out that constructing the web site was more effort than loading the OS. My first task was to set up an ordering system for publications. Since the application to process orders was on the mainframe, I set up a form that takes the necessary input and saves it to a file. Then, every night, a cron job sends the contents of this file to the mainframe using NCFTP. This way, the current system—with all its editing and security checks—can be used, and the procedures for submitting orders by e-mail and paper that were already in place did not have to change.
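A minimal sketch of that nightly hand-off might look like the following. The function name, spool path, mainframe host, and credentials are all invented for illustration; the article does not publish the actual script.

```shell
# send_orders: if the file the web form appends to is non-empty,
# push it to the mainframe and truncate it on success.
# $1 = orders spool file, $2 = transfer command (e.g. an ncftpput invocation)
send_orders() {
    orders=$1
    put_cmd=$2
    if [ -s "$orders" ]; then
        # Only clear the spool file if the transfer succeeded,
        # so a failed night's orders are retried the next run.
        $put_cmd "$orders" && : > "$orders"
    fi
}

# Hypothetical crontab entry running the transfer nightly at 1:30 a.m.:
#   30 1 * * * send_orders /var/spool/orders.txt \
#       "ncftpput -u webxfer mainframe.example.mil /incoming"
```

Keeping the transfer command as a parameter also makes the logic easy to test with a local copy command standing in for ncftp.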
Next came the task of putting the contents of the publications CD-ROM on the web site. Using the program that came with the CD, I generated extract files for all the different types of publications. This totaled 7 files of over 39 MB. I put these on the server, and using the built-in search capabilities of WN, created a form to view and/or search the files for user-defined strings.
Once this was working, I started work on having a job run on the mainframe that would extract the publications data in the correct format from the original source that was used to make the CDs. This can be retrieved via NCFTP as often as needed so the current data is always available on the web site.
Right now, I'm working with the section that produces forms in electronic format. These forms, which are in Perform Pro and Formflow formats, are also distributed to customers on CD-ROM. I am currently building a page where customers can search and download the forms they need using FTP. This should be working by the time this article is published.
Future plans for this system include linking it up with a dial-up BBS so that customers without direct Net access will be able to access the ordering and search systems with the data shared between the BBS and the web site. From there, who knows? If you'd like to see what has been done on the USAPPC site, the address is www-usappc.hoffman.army.mil.
Because of this system, the cost savings for publications ordering and distribution will be quite large. All this was made possible by Linux; without Linux, there would be no USAPPC web site at all.
And I'd still be hacking away at JCL.
Joe Klemmer (firstname.lastname@example.org) is a 33-year-old civilian Information Systems employee of the US Army, and has worked for them for over 10 years. A follower of Linux since version 0.12, he enjoys giving away Linux CDs to spread the faith. Other than Linux, his passions include his wife, Joy, and their four ferrets and six finches (as of this writing).
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
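As a sketch of that combination (the directory and search string here are just examples), find can hand every matching file to grep in a single pipeline:

```shell
# List every .log file under a directory tree, then search each one
# for the string "ERROR", printing the file name with each matching line.
logdir=${LOGDIR:-/home}
find "$logdir" -type f -name '*.log' -exec grep -H 'ERROR' {} + 2>/dev/null \
    || true  # grep's "no matches" exit status is not an error here
```

The `-exec ... +` form batches many file names into each grep invocation, which is considerably faster than spawning one grep per file.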
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, nor the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.