UNIX: Old School
I have been called “nostalgic beyond my years” by some, and I suppose that is accurate. I was born in 1976 and have always had a voracious appetite for early minicomputer and mainframe history. I believe recorded history itself is the single most important innovation of human existence. We humans seem to have a hard-wired compulsion to record, pass on and learn from the mistakes and successes of those before us. Open-source software is the natural evolution of this concept applied to computer technology. Under the Open Source philosophy, we are all free to learn from the wealth of software created by the masses that came before us. By examining the evolution of a project, we can learn from the mistakes of others and, perhaps most important, copy verbatim from their successes. By harnessing this freely available history, as well as unfettered cooperation, we advance the common good.
Recently, companies have begun to loosen their grip on their early computing “intellectual property”. Although some have not fully embraced open source, these sometimes small, token gestures offer us a wealth of knowledge. In this article, I focus on how we can explore early operating system history by running “historic” UNIX releases on our very own Linux boxes using a simulator. The SCO Group (Yes, “them”, previously Caldera, Inc.) claims current ownership of early UNIXes and has released them under an “Ancient Unix” license, which allows for noncommercial use. I focus here on the UNIX V5 release, because it is the earliest available. UNIX V6, V7 and various early BSD releases are also available. If you plan on trying out any of these OSes, examine the licenses included with each before booting them up.
Stranger in a Strange Land: the UNIX V5 User Environment
The UNIX V5 system provided in the disk image is rather stark and unfriendly compared to modern, lush UNIX/Linux systems. Here are a few pointers to get you started:
sh is the shell. It's only 858 lines of C; don't expect it to work like bash.
Use chdir to change the default directory.
Backspace and arrow keys rarely work.
ed is the text editor; see en.wikipedia.org/wiki/Ed.
bas is a BASIC interpreter.
fc is a FORTRAN interpreter.
cc is the C compiler.
Source code is in /usr/source.
There are not many files, so use find / -print to see what else is included.
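To get a feel for the environment, here is what a minimal edit-compile-run session might look like on V5 (the # is the root shell prompt; ed gives almost no feedback, and the byte count it prints after the w command is illustrative here):

```
# ed
a
main() {
        printf("hello, world\n");
}
.
w hello.c
38
q
# cc hello.c
# a.out
hello, world
```

Note the rhythm: ed starts silently, a enters append mode, a lone . ends it, w writes the file, and cc produces a.out with no messages at all when compilation succeeds.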
In order to explore these OSes, we need to be able to run them on commonly available computing hardware. Luckily, we have simulators for this purpose. One of the most popular, because of its quality and depth of support, is SIMH, available from the SIMH Web site (see the on-line Resources). SIMH runs on every popular *nix OS, as well as Microsoft Windows, and is capable of simulating a wide range of early computer systems, including Digital Equipment Corp.'s PDP and VAX systems, the MITS Altair, early IBM systems and many more. Among the most historically significant are DEC's PDP systems, the machines on which UNIX was born.
SIMH is a ground-up system simulator; it simulates the CPU, memory, firmware and devices of a number of early computer systems. This means that original distributed software can run unmodified on these simulated systems. SIMH successfully simulates devices such as disks, tape drives, printers and networking devices. This means that not only can we run these historic systems, but we can communicate and transfer data to and from them using modern technologies and protocols. A great deal of thanks is owed to the contributors of SIMH. Their decision to contribute and release under open source furthers all our understanding of our history and guarantees that this history will always be free.
Download the latest SIMH release (V3.4-0 at the time of this writing), then compile and install it. If you want to use Ethernet emulation, you may need to upgrade the libpcap library bundled with your OS, as most currently distributed versions are too old. The SIMH installation documents explain how to do this; you can skip this step if you're not going to use networking support on your simulated machines. Compiling can be done as any user and is as simple as:
$ mkdir simh
$ cd simh
$ unzip /path/to/simhv34-0.zip
$ mkdir BIN                  # Note: all caps
$ gmake USE_NETWORK=1 all    # Include USE_NETWORK=1 only if your pcap lib is up to date
(compilation chatter omitted)
$ ls -l ./BIN/
total 11624
-rwxrwxr-x 1 matt matt  301959 Jul 16 18:45 altair
-rwxrwxr-x 1 matt matt  482274 Jul 16 18:45 altairz80
-rwxrwxr-x 1 matt matt  529317 Jul 16 18:44 eclipse
-rwxrwxr-x 1 matt matt  297590 Jul 16 18:45 gri
-rwxrwxr-x 1 matt matt  375737 Jul 16 18:44 h316
-rwxrwxr-x 1 matt matt  577678 Jul 16 18:44 hp2100
-rwxrwxr-x 1 matt matt  355225 Jul 16 18:44 i1401
-rwxrwxr-x 1 matt matt  381672 Jul 16 18:45 i1620
-rwxrwxr-x 1 matt matt  441079 Jul 16 18:46 ibm1130
-rwxrwxr-x 1 matt matt  502037 Jul 16 18:46 id16
-rwxrwxr-x 1 matt matt  508378 Jul 16 18:46 id32
-rwxrwxr-x 1 matt matt  294614 Jul 16 18:46 lgp
-rwxrwxr-x 1 matt matt  434940 Jul 16 18:44 nova
-rwxrwxr-x 1 matt matt  345034 Jul 16 18:41 pdp1
-rwxrwxr-x 1 matt matt  752055 Jul 16 18:43 pdp10
-rwxrwxr-x 1 matt matt 1055376 Jul 16 18:43 pdp11
-rwxrwxr-x 1 matt matt  474153 Jul 16 18:42 pdp15
-rwxrwxr-x 1 matt matt  459203 Jul 16 18:41 pdp4
-rwxrwxr-x 1 matt matt  460363 Jul 16 18:41 pdp7
-rwxrwxr-x 1 matt matt  499473 Jul 16 18:42 pdp8
-rwxrwxr-x 1 matt matt  467662 Jul 16 18:42 pdp9
-rwxrwxr-x 1 matt matt  352233 Jul 16 18:45 s3
-rwxrwxr-x 1 matt matt  429312 Jul 16 18:46 sds
-rwxrwxr-x 1 matt matt  982694 Jul 16 18:43 vax
This builds all possible system simulators. Each simulator becomes a separate binary in the ./BIN/ directory. SIMH can be run as any normal user, but if you want to use Ethernet network simulation, you need to execute it as root (under UNIX) to allow libpcap access to the Ethernet device.
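As a sketch of what comes next, a SIMH configuration for booting UNIX V5 on a simulated PDP-11 might look like the following (the file name and disk-image name here are hypothetical; consult the documentation bundled with the Ancient UNIX disk images for the exact boot procedure):

```
; uv5.ini -- invoke as: ./BIN/pdp11 uv5.ini
set cpu 11/40               ; UNIX V5 targets PDP-11/40-class machines
attach rk0 unix_v5_rk.dsk   ; RK05 disk image from the Ancient UNIX distribution
boot rk0                    ; boot from the attached disk
```

With the stock V5 image, you then type unix at the @ boot prompt to load the kernel and log in as root.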