More than a year ago I was hired by Linux NetworX to work on LinuxBIOS, and I've been on a steep learning curve ever since. After working on LinuxBIOS, I can say without question that kernel code is high-level code and that C is a high-level language.
When a microprocessor powers up, it starts executing instructions located in a ROM chip. These initial instructions are responsible for initializing the hardware (especially enabling RAM) and loading an operating system. The implementations and interfaces to this functionality vary from machine to machine, but its basic responsibility remains the same.
On the Alpha platform the microprocessor reads an entire serial ROM, referred to as the SROM, into the instruction cache and begins executing code. The code in the SROM initializes the processor and memory and loads the SRM from a Flash EEPROM. The SRM then loads the PALcode (essentially the real kernel on the Alpha), initializes a little more hardware and loads an operating system. Since the firmware is split into two pieces, the SRM can be upgraded or even replaced. In fact, the initial design of the Alpha architecture specified that there would be different firmware (at the SRM level) for each operating system.
The x86 microprocessor begins executing code in 16-bit mode at 16 bytes short of the end of the address space, with the CS register pointing 64K below the end of the address space. On the 8086 this is at address 0xf000:0xfff0 == 0xffff0, just below 1MB. On the 80286 the address is 0xfffff0, just below 16MB. And on the 80386 and above, the address is 0xfffffff0, just below 4GB. For the 286 and later Intel processors, the value in CS is not one that you can ever load again. To compensate for this, the hardware maps the ROM chip at both 0xffff0000 and 0xf0000.
Unlike the Alpha, x86 processors fetch instructions one at a time from the ROM chip. As ROM chips are essentially ISA devices, this has some interesting consequences. The first is that until some form of caching is enabled, the code runs quite slowly. The second is that the chipset must come up mostly enabled, because the usual path to the ROM chip runs from the CPU to the Northbridge, across the PCI bus to the Southbridge, and over the ISA bus to the ROM. When working with a known good board, this second fact makes debugging the initial code much easier.
The standard PC BIOS has the responsibilities of initializing the hardware, loading an operating system and providing a variety of services (mostly in the form of minimal device drivers) after an operating system has loaded.
The SPARC and PowerPC architectures have standardized firmware, known as OpenBoot, Open Firmware or the now-defunct IEEE 1275 standard. This Forth-based firmware sits at close to the same level that the SRM does on the Alpha. Several things make Open Firmware unique: it runs on multiple processor and machine architectures; it uses a Forth-based byte code, so the binaries are processor-independent; and it does most of its system initialization from this Forth-based byte code.
The Itanium/IA64 architecture uses the EFI firmware and is more architecture-dependent than Open Firmware because its drivers are either IA32 or IA64 code. In scope it appears to be even more ambitious; EFI includes an IP stack and some filesystem drivers. As with Open Firmware, the early hardware initialization stage is not specified.
The requirements the Linux kernel places on firmware are minimal. The Linux kernel drives the hardware directly and does not use the BIOS. Since the Linux kernel does not use the BIOS, most of the hardware initialization and services a traditional BIOS performs are overkill. Linux is not alone in this respect; I don't know of a modern operating system that doesn't follow this trend. Modern operating systems require only basic system initialization services. The extra device drivers and system features that firmware like EFI, Open Firmware or even a PC BIOS provides are unnecessary except to help load the operating system. Since these services are not necessary, the LinuxBIOS code does not provide them.
The LinuxBIOS code is sufficient to load a standalone program encoded as an ELF executable from a Flash ROM. A standalone program can be an operating system kernel like Linux, but most standalone programs are hardware diagnostics or boot loaders (e.g., Memtest86, Etherboot and RedBoot). LinuxBIOS is expected to be paired with a standalone boot loader in order to load the operating system.
The original idea of LinuxBIOS was to load the Linux kernel from the ROM and build a boot loader on top of that. The boot loader nbc implements this idea, loading a Linux kernel or a standalone program over the network and booting from Linux using the kexec kernel patch. This solution works fine when 512KB of ROM (or more) is available. Unfortunately, most standard motherboards shipping today have only 256KB of ROM. For the x86 platform it is nearly impossible to get a usable Linux kernel under 360KB.
Various strategies have been developed to address systems limited by the amount of available ROM. Some of these strategies include Tiara, which appears to be a complete firmware and boot loader for the SiS630 chipset; Etherboot, which has been ported to work under LinuxBIOS; RedBoot, which runs under LinuxBIOS but is not yet usable; and some hacks on LinuxBIOS itself.
Alpha firmware requires a standalone program to be familiar with the motherboard it is running on, which can be problematic. While having this level of familiarity is nice, supporting a new motherboard can be extremely difficult because of the number of pieces of software that must be updated. With LinuxBIOS we do our best to avoid that problem.
We start with the traditional x86 approach: initialize the Super-IO chips to working and expected values (i.e., serial ports at their expected legacy address, IRQ, etc.) and then provide IRQ routing tables and mptables for SMP.
For the long term we need a way to describe the hardware that Plug-and-Play enumeration cannot find. Such software lists what hardware is present and configures which resources the hardware will use or, at a minimum, lists which resources each device uses. The solution I am working on is a table of devices with information about how they are connected to each other on the motherboard. The table will list devices that do not participate in any kind of Plug-and-Play enumeration and will give enough information that IRQ routing can be handled. The idea also seems to fit in well with the struct device tree planned for the 2.5 kernel. ACPI appears to offer an alternative solution, but it seems PC-centric, and its interpreted byte codes seem unnecessary and even dangerous.