Constructing Red Hat Enterprise Linux 4
SELinux stands for Security-Enhanced Linux. Details of SELinux have been presented in prior Linux Journal articles (see Resources).
At its core, SELinux consists of a set of low-level primitives that provide fine-grained access control. Prior to the advent of SELinux, the Linux security model was a rather all-or-nothing approach, in that the two common cases were general unprivileged user applications and privileged applications. The privileged applications typically consisted of system services such as bind, Apache, MySQL, Postgres, ntpd, syslogd, snmpd and squid. The historical downside to having all-powerful system services is that if one of them was compromised by a virus attack or other exploit, the entire system could then become compromised.
SELinux provides a means of tightly restricting the capabilities of user applications and system services to a strict need-to-know authorization. For example, it sets access control on the Apache Web server (httpd) to limit the set of files and directories it is able to modify. Additionally, Apache is strictly limited in what other applications it is capable of executing. In this manner, if Apache is attacked, the damage that can occur is well contained. In fact, SELinux's containment is so effective that one of Red Hat's developers, Russell Coker, has set up a Fedora system where he provides the root password and invites anyone to see if they can inflict damage to the system.
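This need-to-know model is expressed in the SELinux policy language as explicit allow rules between a process domain and the object types it may touch. The fragment below is an illustrative sketch in that style; the type names (httpd_t, httpd_sys_content_t, httpd_log_t) follow common targeted-policy conventions but should not be taken as the exact rules shipped in Red Hat Enterprise Linux v.4:

```
# Illustrative SELinux type-enforcement rules for Apache.
# Processes in the httpd_t domain may read web content...
allow httpd_t httpd_sys_content_t:file { getattr open read };
# ...and append to their own log files,
allow httpd_t httpd_log_t:file { getattr append };
# but no rule grants httpd_t write access to web content or the
# right to execute arbitrary programs; anything not explicitly
# allowed is denied by default.
```

The key design point is the default-deny stance: a compromised httpd process can only perform the operations the policy enumerates, no matter what privileges the attacker attempts to exercise.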
What is most significant about Red Hat Enterprise Linux v.4's SELinux implementation is that it makes RHEL v.4 the first widely adopted commercial operating system to integrate such fine-grained security into its newest release. Historically, fully featured secure operating systems have been relegated to obscure forks of mainstream products, which typically lagged a year or two behind the corresponding new releases.
The implementation of SELinux got its tentacles into virtually all areas of the distribution. This included:
Implementation of policies for the core system services.
Providing default policies for all RPM packages we provide.
Installer and system management utilities to enable end users to define access domains of their own.
Kernel support throughout a range of subsystems.
There were many challenges in the implementation of SELinux. On the kernel front, there was a real risk that the core SELinux primitives would not be accepted into the upstream 2.6 Linux kernel. James Morris valiantly completed the implementation and garnered the required upstream consensus. On the user-level package front, the introduction of SELinux required a specific or default policy to be constructed for each package. Naturally, this was at times a bumpy process as we sorted out which files should be writable and other details.
Minor implementation glitches could wreak havoc across the entire distribution, which also made SELinux the initial scapegoat for virtually all problems. Dan Walsh was a true workhorse in poring through this onslaught of issues.
“Upstream, Upstream, Upstream”—this became the mantra among our kernel team throughout the entire construction of Red Hat Enterprise Linux v.4. The reason is that every change by which Red Hat's kernel diverges from the upstream community kernel at kernel.org becomes a liability, for the following reasons:
Peer review—all patches incorporated upstream undergo a rigorous peer review process.
Testing—thousands of users worldwide from hundreds of companies routinely run and test upstream kernels.
Maintenance burden—the closer we are to upstream kernels, the more efficient we can be about pulling fixes back into the maintenance streams for shipping products.
Next release—getting fixes and features into upstream means that we don't have to re-add the feature manually into future releases.
These principles are core to the value of true community open-source development. As testament to Red Hat's active participation in the upstream Linux kernel community, through the course of 2.6 development more patches were accepted from Red Hat kernel developers than from any other company. During the past year, more than 4,100 patches from Red Hat employees were integrated into the upstream 2.6 kernel. In contrast, other companies boast that their offering contains the most patches on top of the community kernel. An interesting statistic is that currently, more than 80% of all kernel patches originate from kernel developers employed explicitly to do such development. The kernel has become mostly a professional employment endeavor, not a hobbyist project.
Red Hat's developers were highly active in upstream 2.6 development. Some of the areas of involvement included:
Virtual Memory (VM) management.
SELinux and other security features.
IDE and USB.
Logical Volume Manager (LVM).
Hardware and driver support.
Arjan van de Ven and Dave Jones, Red Hat Enterprise Linux v.4 kernel pool maintainers, integrated kernel contributions from our collective internal kernel development team.
They frequently rebased our trees against the latest upstream kernels as well as integrated additional bug fixes, performance tunings, hardware platform support and feature additions. This is truly a monumental effort given that we simultaneously support seven architectures (x86; x86_64, covering AMD64 and Intel® EM64T; Itanium2; IBM Power in 32- and 64-bit variants; and mainframe in 31- and 64-bit variants) from a single codebase.
Initially, it was painful for Arjan to be beating everyone over the head to ensure that all patches were accepted upstream prior to incorporating them into our pool. Through his vigilance, the entire team became conditioned to working upstream first. In the short term, it involves more effort on the part of the developer to work both internal to Red Hat as well as upstream. However, in the long term, as described above, the benefits are considerable.