Letter to Bob: Configuring an Intel Linux System
I have finished configuring your Intel Linux system for you. I think you will be highly pleased with the resulting capabilities.
First, I split the disk space up into a small amount for Windows for Workgroups and WNT, with the rest of the space devoted to Linux. I did this so that you could do a triple-boot, booting either Windows for Workgroups, WNT or Linux. I know you like to experiment with those other operating systems, and when you have visitors from Redmond you like to at least pretend you run their operating systems from time to time, so having the triple-boot capability is handy. This also gives you access to their new 32-bit applications, as well as ensuring that their old 16-bit applications work without a hiccup.
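Just so you can tinker with it yourself: the triple-boot is driven by LILO, and a minimal sketch of the /etc/lilo.conf looks something like the following. (The device names /dev/hda1 and /dev/hda3 are illustrative assumptions, not a record of your actual partition layout.)

```
# /etc/lilo.conf -- sketch only; partition names are assumed
boot=/dev/hda          # install LILO in the master boot record
prompt                 # show the boot: prompt at startup
timeout=50             # wait 5 seconds, then boot the default entry

image=/boot/vmlinuz    # the Linux kernel
    label=linux
    root=/dev/hda3     # assumed Linux root partition
    read-only

other=/dev/hda1        # assumed DOS/Windows partition
    label=win          # type "win" at the boot: prompt
```

Remember to run /sbin/lilo after any change so the boot map gets rewritten.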
I continued with a basic installation of Red Hat Linux (Red Hat, Inc., Durham, NC), installing the FVWM-95 window manager, which gives a Windows 95 “look and feel”. This will also help with surprise visits from your Washington friends. I also added the Accelerated-X server and the Common Desktop Environment (CDE) desktop from Xi Graphics, both of which are available from WorkGroup Solutions of Aurora, Colorado. Using these products, you can run those really great 3D modeling programs on your Digital UNIX server and display them on your Linux box using the optional OpenGL extensions. You also get exactly the same look and feel as your Digital UNIX desktop on your Linux system. I am sure this will make you more comfortable with Linux, since CDE is used in your existing environments.
From time to time I know you like to run breadboard simulation applications that run only on a Mac, just to keep your hand in electronic design. I have installed a Macintosh emulator called “Executor 2” from ARDI (Albuquerque, NM) on the Linux box, and tied it into CDE so you can easily launch it. Likewise, for those dull moments you can execute Wabi (available from Caldera, Inc.) and run solitaire there, as well as a whole bunch of other Windows 3.1 applications, which you can launch directly off the W3.1 part of the disk, since Linux can mount MS-DOS file systems. And Bob, you will be amused to find that the “disk copy” function under W3.1 actually works faster with Wabi than it does running native on the hardware. In fact, a lot of the Windows programs running under Wabi seem to execute faster than they do in native mode. Perhaps that is only my perception (I do admit a bias), but it could also be that Wabi takes advantage of Linux's buffer cache in its input/output operations, as well as Linux's virtual address space and virtual memory protections.
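In case you ever reinstall: making that W3.1 slice visible to Linux is just one msdos line in /etc/fstab. A sketch follows, where /dev/hda1 and the /dosc mount point are my assumptions about the layout, not your actual configuration:

```
# /etc/fstab fragment -- sketch; device and mount point are assumed
/dev/hda1   /dosc   msdos   defaults   0 0
```

After a one-time mkdir /dosc, a plain `mount /dosc` (or simply the next reboot) puts the Windows files under /dosc, where Wabi can get at them.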
Speaking of faster, I would not have done all of this work if I could not give you a better environment than you had before, and with the addition of two more tools, I believe I have accomplished that goal. Of course, I realized you still wanted to take advantage of the power and high availability of your 64-bit Digital UNIX system, with its huge address spaces, failover capabilities and over 5,000 commercial applications available. On the other hand, I know you want to minimize the network traffic over the Internet and create a seamless environment in which you get the most compute power on your desk for the least amount of money. Therefore, I enlisted the aid of Empress Software, Inc. of Markham, Ontario, Canada and Platform Computing of Toronto, Canada. Empress has a distributed relational database that runs on 1,500 Unix platforms, and when it can take advantage of a 64-bit environment (as in the case of Digital UNIX), it does so. This is particularly important (as you know) when you are trying to process large unstructured binary objects. In addition, the Empress RDBMS can attach to an Oracle Parallel Server database and extract information, bringing it directly back to your Linux system. This allows you to intermix SQL calls, with some of the data coming from the Empress database and some from the Oracle database, while only the pertinent data comes back to your Linux system, with the least possible impact on the network. Sure beats grepping those terabyte files over NFS, doesn't it?
Finally, the use of Platform Computing's Load Sharing Facility (LSF) has me really excited. You know Digital UNIX has clustering capability with really fast recovery from a variety of failures, but Platform Computing has developed software that allows you to have a cluster in a heterogeneous environment. Supported on almost every Unix platform (and even Bill's WNT), it allows you to execute, from your Linux system, a program on whichever system is best able to run it, with the output of the application coming right back to your Linux system as if it were executing locally. So, for example, if you were executing GIMP or Emacs, you would probably be executing it on your desktop Linux system, but if you wished to execute PV-WAVE (which runs on both platforms), LSF might transparently run it on the Digital UNIX platform to take advantage of the Alpha's superior floating-point capabilities. Or, if you wished to execute that specialized accounting package which runs only on Digital UNIX, you would simply type in the application's name; LSF realizes it works only on Digital UNIX, finds the least-loaded Digital UNIX system (taking into account CPU, I/O, memory constraints, etc.) to run it on, and sends the output back to your system. And the accountants will be happy to know LSF can also do that batch processing they have been asking for. LSF truly makes use of the network as an extension of your computer.
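To make that concrete, a typical LSF session from the Linux box might look like the sketch below. The lsrun and bsub commands are LSF's standard interactive and batch front ends; the application names, file names and queue name are only illustrative assumptions on my part:

```
$ lsrun pvwave bigmodel.pro    # LSF picks the best-suited, least-loaded host
                               # (likely the Alpha) and runs PV-WAVE there;
                               # the output comes back as if it ran locally
$ bsub -q night acct_close     # submit the accountants' run to an assumed
                               # "night" batch queue for later execution
```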
So Bob, I hope you enjoy the Linux system I have created for your desktop. Whether you choose CDE or Windows 95 as the “Look and Feel” of Linux, you will still be able to have access to these features, since all of these applications run both on Intel Linux and Digital UNIX, creating that seamless environment you keep demanding.
I will have a little time next week, so I will be happy to duplicate this environment for your Hi-Note Ultra II laptop.
Please say “Hello” to Mrs. Palmer for me, and thank her for the cookies.
Sincerely,

Jon “maddog” Hall
Executive Director, Linux International
80 Amherst St.
Amherst, NH 03031-3032 U.S.A.