Constructing Red Hat Enterprise Linux 4
In April 2004, Red Hat conducted a global company meeting in Raleigh, North Carolina. The entire company was invited. One of the strongest impressions I took from this meeting was how truly worldwide Red Hat is. It seemed as though there were as many non-US team members as US members. In addition to the US, development is conducted in Australia, Canada, Germany, the Czech Republic, the UK, Japan, India and Brazil.
Not all development is conducted within the offices of Red Hat. Through Fedora's worldwide legions of contributors, we invite broader participation. We actively contribute to, and draw from, a great diversity of community open-source projects. Again, this substantially broadens the circle of participation. In many ways, this inclusive process makes Red Hat feel like a trusted steward of the community, forming a distribution representing the best and brightest technology. This is a privilege we do not take for granted, as we know it needs to be continuously earned every day. This makes both Red Hat Enterprise Linux and Fedora truly distributions “by the people, for the people”.
Red Hat Enterprise Linux v.4 is supported in 15 different languages. These translations are all performed as an integral part of the development cycle. Consequently, the translation process doesn't lag the release or introduce forks in the development trees. We have a team of “translation elves” located in Australia who magically do their work at an opposite phase of the clock from headquarters. This results in a nearly real-time translation that tracks development changes. Additionally, there are many contributors to Fedora who are actively involved in internationalization activities.
There are several ways in which Red Hat improved upon our development methodology over the course of Red Hat Enterprise Linux v.4's construction. Interestingly, the main theme of these improvements has been to stick to core, proven Linux open-source development practices. Although we subscribed to these practices previously, we paid closer attention this time around to the following:
Upstream—doing all our development in an open community manner. We don't sit on our technology for competitive advantage, springing it on the world as late as possible.
Customer/user involvement—through a combination of Fedora and increased “early and often” releasing of beta versions through the development cycle, we are able to get huge volumes of invaluable feedback (both good and bad).
Partner involvement—on-site partner developers have augmented our ability to address features, bugs and incremental testing.
Avoiding feature creep—putting a clamp on the introduction of late-breaking features in order to allow stabilization.
We are all extremely grateful for the steady guiding influence of Donald Fischer, who did an outstanding job as overall product manager and release manager. He was at once a diplomat, innovator, bookkeeper and go-to guy. Hats off to “the Donald”.
Red Hat is truly a restless place to be. It seems that no sooner have we shipped one release than we are already behind on the next one. This is because, in addition to new release development, we also support prior releases for a seven-year interval. So, for example, here's the list of releases concurrently in development now:
Fedora Core 4 (FC4).
Red Hat Enterprise Linux v.2.1 Update 7.
Red Hat Enterprise Linux v.3 Update 5.
Red Hat Enterprise Linux v.4 Update 1.
Red Hat Enterprise Linux v.5.
Numerous new technologies in prerelease stages, targeted at various upstream and internal release delivery vehicles.
Never a dull moment, and we wouldn't have it any other way!
Resources for this article: /article/8204.
Tim Burke is the director of Kernel Development at Red Hat. This team is responsible for the core kernel portion of Red Hat Enterprise Linux and Fedora. Prior to becoming a manager, Tim earned an honest living developing Linux high-availability cluster solutions and UNIX kernel technology. When not juggling bugs, features and schedules, he enjoys running, rock climbing, bicycling and paintball.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
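The tool-chaining described above can be sketched in a few lines of shell. This is a minimal, self-contained example; it builds a throwaway sample directory so it can run anywhere, whereas in practice you would point find at a real tree such as /home, and the "refused" search string is purely illustrative.

```shell
# Build a small sample tree so the pipeline is reproducible;
# substitute a real directory (e.g. /home) in actual use.
dir=$(mktemp -d)
echo "connection refused" > "$dir/app.log"
echo "all good" > "$dir/notes.txt"

# find selects only the .log files; grep -H searches each one,
# printing the filename alongside every matching line.
find "$dir" -name '*.log' -exec grep -H 'refused' {} +
```

Because find hands the matching filenames directly to grep, the two tools combine into exactly the "find all .log files and search each one" tool mentioned above, without either program knowing about the other.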
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
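For readers who haven't used cron directly, a crontab entry is a single line of five time fields (minute, hour, day of month, month, day of week) followed by a command. The script path below is hypothetical, purely for illustration:

```
# min hour dom mon dow  command
  30  2    *   *   *    /usr/local/bin/rotate-logs.sh
```

This runs the (hypothetical) rotate-logs.sh script at 2:30 AM every day; any output the job produces is mailed to the owning user. The simplicity of this format is cron's strength, and also the root of the question the webinar asks: dependencies between jobs, retries and cross-machine scheduling all fall outside it.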
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Stunnel Security for Oracle
- SourceClear Open
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Google's SwiftShader Released
- Non-Linux FOSS: Caffeine!
- Parsing an RSS News Feed with a Bash Script
- Doing for User Space What We Did for Kernel Space
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide