Mageia Trudging on to Release
The Mageia project is moving on toward its initial alpha, now expected sometime in January. The team has been busy setting up infrastructure, organizing development and administrative teams, and choosing a permanent logo.
At the beginning of November the Mageia project had many of the necessary elements almost in place. These included a build server, website and wiki hosting, a Code of Conduct, development and management teams, and a roadmap. The build server is based on Mandriva One and is nearly complete. PLF is temporarily hosting some online resources, and Zarb.org is hosting the mailing lists until a move to Gandi is completed. Packaging, artwork, distribution development, translation, design, QA, and other teams were organized. At that time, an alpha was planned for December.
Mageia is making some further headway according to a recent blog post. The many logo entries have been short-listed and a final decision is expected any day now. There were so many nice entries that this is bound to be a very difficult task.
Earlier in the week the Mageia.org association was created and registered. This will allow Mageia to collect and distribute funds necessary to develop the distribution. Anne Nicolas was appointed President, Arnaud Patard is the new Secretary, and the Treasurer is Damien Lallement. Monthly reports will be published for those interested in the financial details. Report logs of the Founders' Weekly meetings will also be published and each team will have their own public communication channels as well.
Discussions are ongoing concerning the repository directory structure, and Subversion repositories are being implemented. Main mirrors will start with three media directories: core, nonfree, and tainted. Each will have five subdirectories: release, updates, updates_testing, backports, and backports_testing. Importing from Mandriva will proceed in a logical order: developers will start with the base system, compiler, and RPM tools; X will come next, followed by the desktop environments, before moving on to the remaining software. Removing any code encumbered by licensing restrictions is a top priority.
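For readers who want to picture the planned mirror tree, here is a minimal shell sketch that builds the three media directories and five subdirectories named in the post. The `mirror/media` path prefix is an assumption for illustration only; the actual layout on Mageia's mirrors may differ.

```shell
#!/bin/sh
# Sketch of the planned Mageia mirror layout (directory names from the
# announcement; the "mirror/media" prefix is a hypothetical example).
for media in core nonfree tainted; do
    for sub in release updates updates_testing backports backports_testing; do
        mkdir -p "mirror/media/$media/$sub"
    done
done
# List the resulting tree: 3 media directories x 5 subdirectories each.
find mirror/media -mindepth 2 -maxdepth 2 -type d | sort
```

Running this creates fifteen directories in total, one for each media/subdirectory combination.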
The most significant tidbit for anxious testers is that the December alpha has been pushed back to sometime in January with the first release still on schedule for March.
Susan Linton is a Linux writer and the owner of tuxmachines.org.