Porting Linux to the DEC Alpha: Infrastructure

by Jim Paradis

Porting an operating system is not trivial. Operating systems are large, complex, asynchronous software systems whose behavior is not always deterministic. In addition, there are numerous development tools, such as compilers, debuggers, and libraries, that programmers generally take for granted but which are not present at the start of the porting project. The porting team must implement these tools and other pieces of infrastructure before the porting work itself can begin.

This article is the first of three describing one such porting effort by a small team of programmers at Digital Equipment Corporation. Our goal was to port the Linux operating system to the Digital Alpha family of microprocessors. These articles concentrate on the initial proof-of-concept port that we did. Although much of our early work has been superseded by Linus Torvalds' own portability work for 1.2, our tale vividly illustrates the type and scale of the tasks involved in an operating system port.

What Got Us into All This?

The article by Jon Hall on page 29 describes many of the business-case justifications for our involvement in the Linux porting effort. I will describe the actual events that led to my starting work on the Linux port.

First, some background: I work for the Alpha Migration Tools Group, which is an engineering development group within Digital Semiconductor. We were initially chartered near the beginning of the Alpha project to develop automated methods for migrating Digital customers' legacy applications to Alpha-based systems. Our first product was VEST, which translated VAX/VMS binary executables into binaries that could be executed on OpenVMS Alpha. This was soon followed by MX, which translated MIPS Ultrix executables into executables that run on Alpha systems under Digital Unix. Since then, our charter has expanded into other areas of “enabling technology” (technology which enables users to move to Alpha). In addition to producing translators and emulators, we have supplied technology to third-party vendors, and we have participated in the development of compilers and assemblers for Alpha.

Our involvement in Linux began at the end of 1993, when we realized that there was no entry-level operating system for Alpha-based systems. While OpenVMS, Digital Unix, and Windows NT were all solid, powerful operating systems in their own right, they were too resource-hungry to run on bare-bones system configurations. In many cases, the smallest usable configuration of a particular system cost at least several thousand US dollars more than the smallest possible configuration. We decided that to compete on the low end with PC-clone systems, we needed to make the lowest-priced system configurations usable. After investigating various alternatives, we decided that Linux had the best combination of price (free), performance (excellent), and support (thousands of eager and competent hackers worldwide, with third-party commercial support starting to appear as well).

Project Goals

When putting together the proposal to do the port, I set forth the following goals for the Linux/Alpha project:

  • Price: Linux/Alpha would continue to be free software. All code developed by Digital for Linux/Alpha would be distributed free of charge according to the GNU General Public License. In addition, all tools used to build Linux/Alpha would also be free.

  • Resource stinginess: Linux/Alpha would be able to run on base configurations of PC-class Alpha systems. My goal was to be able to run in text mode in 8MB of memory and with X-Windows in 16MB. In addition, a completely functional Linux/Alpha system should be able to fit, with room to spare, on a 340MB hard disk.

  • Performance: Linux/Alpha's performance should be comparable to Digital Unix.

  • Compatibility: Linux/Alpha should be source-code-compatible with existing Linux applications.

  • Schedule: We wanted to be able to show a working port as quickly as possible.

Design Decisions

The above criteria drove several of the design decisions we made regarding Linux/Alpha. To meet the schedule criterion, we decided to “freeze” our initial code base at the Linux 1.0 level and work from there, not incorporating later changes unless we needed a bug fix. This would minimize perturbations to the code stream (a necessity when you're reaching in and changing virtually the whole universe), and would eliminate the schedule drain of constantly catching up to the latest release. We reasoned that once we got a working kernel, we could then make use of what we had learned to catch up to the most current version.

The schedule criterion also drove our decision to make our initial port a 32-bit (as opposed to a 64-bit) implementation. The major difference between the two involves the C programming model used. Intel Linux uses a “32-bit” model in which ints, longs, and pointers are all 32 bits. Digital Unix uses a “64-bit” model in which ints are still 32 bits while longs and pointers are 64 bits. At Digital, we have encountered a lot of C code that treats ints, longs, and pointers interchangeably. Code like this might fortuitously work in a 32-bit programming model, but it may produce incorrect results in a 64-bit model. We decided to do a 32-bit initial port to minimize the number of such problems. We felt that limiting longs and pointers to 32 bits would not unduly hamper existing code, and that by the time new applications appeared that required larger datatypes, a 64-bit Linux implementation would be available.
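To make the hazard concrete, here is a small example of our own (not code from the port). It works by luck under the 32-bit model and truncates the pointer under the 64-bit model:

    #include <stdio.h>

    int main(void)
    {
        char buf[] = "hello";
        int cookie = (int)(long)buf;    /* pointer stored in an int: the
                                           high 32 bits are lost when
                                           pointers are 64 bits wide */
        char *q = (char *)(long)cookie; /* may no longer point at buf */
        printf("%s\n", q);              /* fine in a 32-bit model;
                                           undefined in a 64-bit model */
        return 0;
    }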

We also decided, in the interests of expediency, to use the existing PALcode support for Digital Unix rather than write our own. The Digital Unix PALcode was reasonably well-suited to other Unix implementations, it was readily available, and it had already been extremely well-tested. Using the Digital Unix PALcode in turn required that we use the “SRM” console firmware. The SRM firmware contained device drivers that could be used by Linux via callback functions. While these console callback drivers were extremely slow and had to be run with all interrupts turned off, they did allow us to concentrate on other areas of the Linux port and defer the work on device drivers.
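As a rough sketch of what driving a device through the console looks like, the firmware exports a single dispatch entry point through which the callback services are invoked. The code below is illustrative only; the PUTS service code and calling convention are modeled loosely on the console interface later used by Linux/Alpha, not taken from the original port:

    #define CCB_PUTS 0x02   /* console "write string" service code */

    /* The console's dispatch routine; its address is found through a
       structure the firmware hands to the loaded program. */
    extern long dispatch(long code, ...);

    static void console_puts(const char *s, long len)
    {
        /* Callback drivers must run with all interrupts disabled. */
        dispatch(CCB_PUTS, 0 /* console unit */, s, len);
    }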

Some design decisions were driven by differences in execution environment between Intel and Alpha. On Intel, the kernel virtual memory space is mapped one-to-one with system physical memory space. Because of the potential collision with user virtual memory, Intel Linux uses segment registers to keep the address spaces separate. In kernel mode, the CS, DS, and SS segments point to kernel virtual memory space, while the FS segment points to user virtual memory space. This is why there are routines in the kernel such as put_fs_byte(), put_fs_word(), and put_fs_long(); this is how data is transferred between kernel space and user space on Intel Linux implementations.
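For reference, these helpers are implemented with an FS segment override; the following is approximately how the Linux 1.0 headers defined the byte-sized pair:

    static inline void put_fs_byte(char val, char *addr)
    {
        __asm__ ("movb %0,%%fs:%1" : : "iq" (val), "m" (*addr));
    }

    static inline unsigned char get_fs_byte(const char *addr)
    {
        unsigned char _v;
        __asm__ ("movb %%fs:%1,%0" : "=q" (_v) : "m" (*addr));
        return _v;
    }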

Since Alpha does not have segmentation, we needed some other mechanism to ensure that user and kernel address spaces did not collide. One way would be to have only one address space mapped at a time. But this makes data transfer between user and kernel space cumbersome, and it can exact a performance penalty as well: on systems that do not implement address space identifiers, reusing the same virtual address range for kernel space and user space requires that the translation buffer (sometimes called a translation lookaside buffer, or TLB, a special CPU cache that considerably speeds up virtual memory address lookups) be invalidated for that range on every transition between user and kernel mode. This could conceivably cause multiple translation buffer misses on every system call, timer tick, or device interrupt.

The other way to avoid address space collisions between user and kernel is to partition the address space, assigning specified address ranges to specified purposes. This is the approach taken for the 32-bit Linux/Alpha port. It is simple, it does not require wholesale translation buffer invalidation for every entry to kernel mode, and it makes data transfer between user and kernel an utterly trivial copy.

Designing the address space layout required attention to certain other constraints. First, no address could be greater than 0x7fffffff, because of Alpha's treatment of 32-bit quantities in 64-bit registers. When one issues an LDL (Load Sign-Extended Longword) instruction, the 32-bit quantity that is loaded is sign-extended into the 64-bit register. Therefore, loading the address 0x81234560 into R0 would result in R0 containing 0xffffffff81234560. Attempting to dereference this pointer would result in a memory fault. There are techniques for double-mapping such problematic addresses, but we decided that we did not need the additional complications for a proof-of-concept port. Therefore, we simply limited virtual addresses to 31 bits.
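The sign extension is easy to demonstrate in portable C (this example is ours):

    #include <stdio.h>

    int main(void)
    {
        /* Simulate LDL: a 32-bit load is sign-extended to 64 bits. */
        long long r0 = (int)0x81234560u;
        printf("R0 = 0x%016llx\n", (unsigned long long)r0);
        /* Prints 0xffffffff81234560; dereferencing such a "pointer"
           on Alpha faults, hence the 31-bit limit. */
        return 0;
    }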

The other consideration was that we needed an area that was mapped one-to-one with system physical memory. We did not want to simply use the low 256MB (for instance), because we wanted to be able to place user programs at low addresses. Instead, we chose an area of high memory for this purpose and made the physical address equal to the virtual address minus a constant. This area is referred to below as the “mini-KSEG”.

Once all the constraints were considered, we ended up with a system virtual memory layout as follows:

0x00000000--0x3fffffff        User
0x40000000--0x5fffffff        Unused
0x60000000--0x6fffffff        Kernel VM
0x70000000--0x7bffffff        mini-KSEG (1:1 with physical memory)
0x7c000000--0x7fffffff        Kernel code, data, stack
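Expressed as C constants, with the mini-KSEG translation alongside, the layout looks roughly like this (the names are ours, not those of the original source tree):

    #define USER_START    0x00000000UL
    #define KERNEL_VM     0x60000000UL
    #define MINI_KSEG     0x70000000UL  /* virtual base of the 1:1 region */
    #define KERNEL_START  0x7c000000UL

    /* mini-KSEG: physical address = virtual address - MINI_KSEG.
       Valid only for addresses inside the mini-KSEG region. */
    static inline unsigned long virt_to_phys(unsigned long vaddr)
    {
        return vaddr - MINI_KSEG;
    }

    static inline unsigned long phys_to_virt(unsigned long paddr)
    {
        return paddr + MINI_KSEG;
    }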

Finally, I had to decide how heavily I would modify the code base to accomplish the port. I felt that I did not have the latitude to make wholesale changes and rearrangements of the code the way Linus did for the 1.1.x to 1.2.x transition. To do so would cause my code to diverge further and further from the mainstream code base, which would adversely affect its acceptance among the Linux community.

I decided to keep the original Intel code 100% intact, so one could conceivably still build an Intel kernel from my code base. The Alpha code would be either additions to or replacements for the Intel code base. Areas that needed to be changed would be set off via conditional compilation. Sometimes this required me to swallow my pride and devise a less clean Alpha-specific version of an algorithm to correspond to a less clean Intel-specific version when I really would rather have implemented a clean, generalized algorithm that could accommodate both. Fortunately, Linus implemented clean, generalized algorithms for all of us when he did his portability work for Linux 1.1.x and Linux 1.2.x.
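In practice this meant fragments along the following lines, with the Alpha replacement set off beside the untouched Intel code (the function and fragment here are hypothetical, for illustration only):

    static int fetch_user_buffer(char *to, const char *from, int n)
    {
    #ifdef __alpha__
        /* Alpha version: user and kernel share one partitioned address
           space, so fetching user data is a plain memory copy. */
        memcpy(to, from, n);
    #else
        /* Original Intel version, left intact: copy through the FS
           segment using the kernel's segment helpers. */
        memcpy_fromfs(to, from, n);
    #endif
        return n;
    }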

The Compiler Suite

My initial experiments in compiling some of the Linux code on an Alpha system used the Digital Unix compiler and tools. While this was successful and allowed me to do some early prototyping work, the freeware criterion required that we build the kernel using a freeware compiler and tool set. The GNU C compiler was the obvious choice; it is used by Intel Linux, and no other freeware compiler comes close to its functionality and sophistication.

Although gcc can be configured to generate Alpha code, the version available from FSF only understands the 64-bit Digital Unix programming model. While I know a few things about compilers, I'm no expert. I attempted to modify the gcc machine description files for Alpha to generate 32-bit pointers and longs, with disastrous results. It turns out that small changes to machine descriptions can have far-reaching consequences, and I had neither the time nor the inclination to stare at the machine description until I achieved enlightenment.

Fortunately, I did not have to. Another project at Digital had paid Cygnus Support to produce a version of gcc that could generate 32-bit Alpha code, and I was able to use this for my Linux work. The first version I received implemented only a 32-bit programming model; 64-bit quantities were not available for computation. This drove certain early design decisions. Later on, 64-bit “long long” and “double” datatypes were added, which allowed me to revisit and simplify a number of areas where I needed 64-bit quantities for machine and PALcode interfacing.

I built the compiler and tool suite as cross-compilers on both Digital Unix and Intel Linux and tested both extensively. I did quite a bit of development work both at the office on various Digital Unix systems, and at home on my personal 486-based Linux system.

The Simulator

When I began the Linux/Alpha project, I decided to do my initial debugging not on Alpha hardware but on an Alpha instruction simulator. The simulator, called “ISP”, provided much greater control over instruction execution than I could get on hardware. It also provided some support functionality that I would otherwise have had to add to the kernel (for example, in ISP I could set and catch breakpoints without needing a breakpoint handler in the kernel). In addition, since I had the source code for ISP, I could insert custom code to trap strange conditions as needed.

The version of ISP that I was using included its own versions of the SRM console and Digital Unix PALcode, so I was able to debug my interfaces to the console and PALcode with reasonable assurance that the same code should work unmodified on the real hardware.

While ISP is extremely slow compared with real Alpha hardware, its performance was acceptable for initial debugging. In fact, on a 486DX2/66 Linux system, ISP was able to boot the kernel to the shell prompt in under three minutes.

The Bootloader

Once the cross-development and execution environment was in place, I started working on a bootstrap loader. The design of the bootstrap loader was dictated in part by the mechanics of booting from disk via the SRM console. When the user issues the boot command to the SRM console, the console first reads in the initial sector of the specified boot device. Two fields in this sector specify the block offset and block count of an initial bootstrap program. The console then reads this bootstrap program into memory beginning at virtual address 0x20000000 and jumps to that address.
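Laid out as a C structure, the boot sector looks something like the following. The field offsets follow the SRM boot-block convention used by later Linux/Alpha boot tools; treat the details as illustrative rather than authoritative:

    struct boot_block {
        unsigned char reserved[480]; /* label, text, padding */
        unsigned long count;         /* bootstrap size in 512-byte blocks */
        unsigned long start;         /* block offset of the bootstrap */
        unsigned long flags;
        unsigned long checksum;      /* sum of the first 63 quadwords */
    };  /* unsigned long is 64 bits here, so this fills one 512-byte sector */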

While it would be theoretically possible to simply read in the entire kernel this way, it is impractical for two reasons. First, 0x20000000 is in the middle of user memory space. The kernel could not remain there; it would have to be relocated to a more convenient address. Second, the kernel is large; since the bootstrap program loaded by the SRM must be contiguous, booting this way would tend to preclude such things as loading the kernel from a file system.

For these reasons, the preferred method of loading an operating system via the SRM console is to have the console load a small “bootloader” program; this, in turn, can use the console callback functions to load the operating system itself from disk. Conceptually, the bootloader is rather simple; it sets up the kernel virtual memory space, reads in the kernel, and jumps to it.
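In outline, the bootloader's whole job fits in a few lines. Every name below is hypothetical (KERNEL_START is from the earlier layout sketch) and error handling is omitted:

    extern void setup_kernel_page_tables(void);
    extern void load_kernel_image(unsigned long dest);

    void boot_main(void)
    {
        setup_kernel_page_tables();        /* build the kernel VM map */
        load_kernel_image(KERNEL_START);   /* read the kernel in via the
                                              console callback drivers */
        ((void (*)(void))KERNEL_START)();  /* jump to the kernel entry */
    }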

The bootloader was developed in stages. The first version simply assumed that the kernel image was concatenated to the end of the bootloader image. The bootloader would examine the boot sector to determine where the kernel started, and it would read the COFF header of the kernel to determine how large it was.
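Sizing the kernel meant looking at the optional (“a.out”) header within the COFF image. The sketch below uses the conventional COFF field names; the exact Alpha ECOFF layout differed in detail, so treat this as illustrative:

    struct aouthdr {
        short magic;
        short vstamp;
        long  tsize;    /* text segment size      */
        long  dsize;    /* initialized data size  */
        long  bsize;    /* zero-filled (BSS) size */
        /* ... entry point and segment bases follow ... */
    };

    static long kernel_image_blocks(const struct aouthdr *ah)
    {
        /* Only text and data are read from disk; BSS is just zeroed. */
        return (ah->tsize + ah->dsize + 511) / 512;
    }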

The second major update of the bootloader added the ability to read files from an ext2 file system. This way, both the bootloader and the Linux kernel itself were regular files. The bootloader had to be installed by a special program (e2writeboot), which created a contiguous file on the ext2 file system and wrote the extents of the bootloader file to the boot block. Nevertheless, this approach added flexibility, since it made updating both the bootloader and the Linux kernel much easier.

The final major update to the bootloader, contributed by David Mosberger-Tang, added the ability to unpack a compressed Linux kernel image. Not only did this save disk space, it made loading much faster as well.

Next month, we will cover porting the kernel.

Jim Paradis works as a Principal Software Engineer for Digital Equipment Corporation as a member of the Alpha Migration Tools group. Ever since a mainframe system administrator yelled at him in college, he's wanted to have a multiuser, multitasking operating system on his own desktop. To this end, he has tried nearly every Unix variant ever produced for PCs, including PCNX, System V, Minix, BSD, and Linux. Needless to say, he likes Linux best. Jim currently lives in Worcester, Massachusetts, with his wife, eleven cats, and a house forever under renovation.
