Porting Linux to the DEC Alpha: Infrastructure
My initial experiments in compiling some of the Linux code on an Alpha system used the Digital Unix compiler and tools. While this was successful and allowed me to do some early prototyping work, the freeware criterion required that we build the kernel using a freeware compiler and tool set. The GNU C compiler was the obvious choice; it is used by Intel Linux, and no other freeware compiler comes close to its functionality and sophistication.
Although gcc can be configured to generate Alpha code, the version available from the FSF only understands the 64-bit Digital Unix programming model. While I know a few things about compilers, I'm no expert. I attempted to modify the gcc machine description files for Alpha to generate 32-bit pointers and longs, with disastrous results. It turns out that small changes to machine descriptions can have far-reaching consequences, and I had neither the time nor the inclination to stare at the machine description until I achieved enlightenment.
Fortunately, I did not have to. Another project at Digital had paid Cygnus Support to produce a version of gcc that could generate 32-bit Alpha code, and I was able to use this for my Linux work. The first version I received implemented only a 32-bit programming model; 64-bit quantities were not available for computation. This drove certain early design decisions. Later on, 64-bit “long long” and “double” datatypes were added, which allowed me to revisit and simplify a number of areas where I needed 64-bit quantities for machine and PALcode interfacing.
I built the compiler and tool suite as cross-compilers on both Digital Unix and Intel Linux and tested both extensively. I did quite a bit of development work both at the office on various Digital Unix systems, and at home on my personal 486-based Linux system.
When I began the Linux/Alpha project, I decided to do my initial debugging not on Alpha hardware but on an Alpha instruction simulator. The simulator, called “ISP”, provided much greater control over instruction execution than I could get in hardware. It also provided some support functionality that I would otherwise have had to add to the kernel (for example, in ISP I could set and catch breakpoints without needing a breakpoint handler in the kernel). In addition, since I had the source code for ISP, I could insert custom code to trap strange conditions as needed.
The version of ISP that I was using included its own versions of the SRM console and Digital Unix PALcode, so I was able to debug my interfaces to the console and PALcode with reasonable assurance that the same code should work unmodified on the real hardware.
While ISP is extremely slow compared with real Alpha hardware, its performance was acceptable for initial debug. In fact, on a 486DX2/66 Linux system, ISP was able to boot up to the shell prompt in under three minutes.
Once the cross-development and execution environment was in place, I started working on a bootstrap loader. The design of the bootstrap loader was dictated in part by the mechanics of booting from disk via the SRM console. When the user issues the boot command to the SRM console, the console first reads in the initial sector of the specified boot device. Two fields in this sector specify the block offset and block count of an initial bootstrap program. The console then reads this bootstrap program into memory beginning at virtual address 0x20000000 and jumps to that address.
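For concreteness, the boot sector can be pictured as 64 quadwords, with the fields the console cares about packed at the end. The sketch below is from memory; the structure name, field names, and exact quadword positions are illustrative rather than definitive.

    /* Sketch of the 512-byte SRM boot sector, viewed as 64 quadwords.
     * Quadword positions and field names are illustrative, not a spec. */
    typedef unsigned long long u64;    /* 64-bit quadword ("long long" support) */

    struct srm_bootblock {
        u64 reserved[60];    /* quadwords 0-59: not interpreted by the console    */
        u64 block_count;     /* quadword 60: bootstrap size, in 512-byte blocks   */
        u64 block_offset;    /* quadword 61: starting disk block of the bootstrap */
        u64 flags;           /* quadword 62: normally zero                        */
        u64 checksum;        /* quadword 63: 64-bit sum of quadwords 0 through 62 */
    };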
While it would be theoretically possible to simply read in the entire kernel this way, it would be impractical for two reasons. First, 0x20000000 is in the middle of user memory space. The kernel could not remain there; it would have to be relocated to a more convenient address. Second, the kernel is large; since the bootstrap program loaded by the SRM must be contiguous, booting this way would tend to preclude such things as loading the kernel from a file system.
For these reasons, the preferred method of loading an operating system via the SRM console is to have the console load a small “bootloader” program; this, in turn, can use the console callback functions to load the operating system itself from disk. Conceptually, the bootloader is rather simple; it sets up the kernel virtual memory space, reads in the kernel, and jumps to it.
The bootloader was developed in stages. The first version simply assumed that the kernel image was concatenated to the end of the bootloader image. The bootloader would examine the boot sector to determine where the kernel started, and it would read the COFF header of the kernel to determine how large it was.
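The size calculation itself is just arithmetic on the a.out header fields. The fragment below shows the idea; the header is abbreviated and the field names follow the usual ECOFF conventions (tsize, dsize, bsize), so take it as a sketch rather than the actual bootloader code.

    /* Abbreviated a.out header -- only the fields the bootloader needs.
     * The real ECOFF header has many more fields; this is illustrative. */
    struct aout_header {
        unsigned long long tsize;    /* size of the text segment            */
        unsigned long long dsize;    /* size of initialized data            */
        unsigned long long bsize;    /* size of zero-filled (bss) data      */
    };

    /* Bytes actually present in the image on disk. */
    static unsigned long long kernel_file_size(struct aout_header *hdr)
    {
        return hdr->tsize + hdr->dsize;
    }

    /* Memory needed once the kernel is loaded: text + data + bss. */
    static unsigned long long kernel_memory_size(struct aout_header *hdr)
    {
        return hdr->tsize + hdr->dsize + hdr->bsize;
    }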
The second major update of the bootloader added the ability to read files from an ext2 file system. This way, both the bootloader and the Linux kernel itself were regular files. The bootloader had to be installed by a special program (e2writeboot) which created a contiguous file on the ext2 file system and which wrote the extents of the bootloader file to the boot block. Nevertheless, this approach added greater flexibility as it made updating the bootloader and the Linux kernel much easier.
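Conceptually, the last thing such an installer has to do is record the bootloader's location in the boot block and recompute the checksum. A sketch of that step, assuming the quadword layout pictured earlier, might look like this (the positions are assumptions, not a specification):

    /* Sketch: record the bootloader's extent in the boot block and fix
     * the checksum.  Quadword positions 60-63 are assumptions from memory. */
    static void set_bootblock(unsigned long long bb[64],
                              unsigned long long start_block,
                              unsigned long long num_blocks)
    {
        unsigned long long sum = 0;
        int i;

        bb[60] = num_blocks;     /* size of the bootstrap, in 512-byte blocks */
        bb[61] = start_block;    /* first disk block of the bootstrap         */
        bb[62] = 0;              /* flags                                     */

        for (i = 0; i < 63; i++) /* checksum = 64-bit sum of quadwords 0-62   */
            sum += bb[i];
        bb[63] = sum;
    }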
The final major update to the bootloader, contributed by David Mosberger-Tang, added the ability to unpack a compressed Linux kernel image. Not only did this save disk space, it made loading much faster as well.
Next month, we will cover porting the kernel.
Jim Paradis works as a Principal Software Engineer for Digital Equipment Corporation as a member of the Alpha Migration Tools group. Ever since a mainframe system administrator yelled at him in college, he's wanted to have a multiuser, multitasking operating system on his own desktop. To this end, he has tried nearly every Unix variant ever produced for PCs, including PCNX, System V, Minix, BSD, and Linux. Needless to say, he likes Linux best. Jim currently lives in Worcester, Massachusetts, with his wife, eleven cats, and a house forever under renovation.