Red Hat Summit: Overview and Reflections
Last week in New Orleans, Red Hat held its first annual conference, the Red Hat Summit. I've used Red Hat Linux for quite a few years, so this seemed like a good opportunity to meet some of the Red Hat people and learn more about the operating system and related software. Although the conference's newness showed at times, it was a good meeting overall, with many interesting presentations that made it worth attending. In this article, I give an overview of the conference and conclude with some reflections on Linux prompted by several of the presentations.
Over 700 participants attended the Red Hat Summit--not the thousands LinuxWorld Expo can brag about, but a respectable showing all the same for Red Hat's first conference. At the registration desk, attendees were given a neat bag worth keeping, a hat and an impressive booklet describing the speakers and sessions. The registration package looked like it went over budget; I can't imagine it being as nice at future conferences. Each night, a reception or party was offered with live music, an open bar and plenty of food. The breakfasts and lunches were all part of the admission price and were excellent as well. The parties and meals were paid for in part by corporate sponsors, such as IBM, HP and AMD. Throw in the talks and the hotel room, which was part of the registration fee, and there certainly was good value in the price of admission.
One complaint that many of us had, though, is that the conference didn't provide users with wireless Internet access. We had access initially, but the staff realized it inadvertently had left the network open, and by late morning of the first day, they had locked us out. Many of us complained, but it did no good; the staff's retort was that we should use the hotel's wireless network. Unfortunately, that network wasn't free, it was down much of the time and it wasn't available in the meeting rooms where the conference sessions were held. Maybe I'm spoiled, but I find it difficult to dedicate several days to a conference and thereby forgo all of my other work. It's also useful to be able to pull up Web pages and download software discussed by speakers at the various sessions. Hopefully, next year the conference staff will change its attitude on this point.
The conference started off each morning with an opening keynote address by a top person from Red Hat, immediately followed by a partner keynote from an executive of one of Red Hat's partners. The introduction was capped off with a visionary keynote from a member of the community. The executive talks were interesting from a business perspective, but the visionary keynotes were much more interesting for general attendees. The staging, lighting and videos were spectacular, by the way: a highly professional crew orchestrated the keynote talks. My only complaint about the organization of the keynotes was that they rolled from one to the next without a break. For some, though, this may have been a good thing, in that we were able to listen to three presentations in a row without having to get up.
The first day's keynote address was given by Red Hat's CEO, Matt Szulik. He talked about the future of open-source and free software and how we're at the beginning of a new revolution. He finished off his talk by donning a choir robe and joining in with some gospel singers who sang about being misunderstood. Following Szulik, the partner keynote was presented by Martin Fink, Vice President of Linux at Hewlett-Packard. Fink gave a business analysis of the open-source market. The visionary keynote of the day came from John Buckman of Magnatune, who spoke about the music-download industry. At a press conference on the first day, Red Hat announced the Red Hat Directory Server and the Fedora Directory Server, both of which are based on the Netscape Directory Server that Red Hat acquired last year. According to one of the pamphlets, it is "an LDAP server that centralizes application settings, user profiles, group data, policies, and access control information into a network-based registry." Red Hat intends to release the related software as open source under the GPL fairly soon.
On the second day, the keynote lineup began with Michael Tiemann, VP for Open-Source Affairs at Red Hat, who talked about how this century belongs to open source rather than to closed-source companies. For the partner keynote, Irving Wladawsky-Berger, VP of Technical Strategy and Innovation at IBM, spoke in the same vein. Then, Greg Stein of the Apache Software Foundation gave an interesting talk on open source, Apache and the activities at the Foundation. He wasn't originally on the schedule, but he made a great fill-in speaker and should be asked to speak again at next year's conference.
The third and final morning offered keynotes from Mark Webbink, Deputy General Counsel of Red Hat; Richard Wirt, VP of Intel; and Bruce Mau, CEO of Bruce Mau Design. It was odd having a lawyer speak at a software conference, but Webbink was the right person to explain Red Hat's plan to give Fedora more independence, among other things. That plan includes handing over the copyrights of Red Hat code to the community spin-off. Webbink also announced that Red Hat is creating an organization called the Software Patent Commons, which will work toward the sharing of software patents. Red Hat has opposed software patents, but legal actions on the part of Microsoft have made it necessary to take patents seriously.