Linux in Government: California Air Resources Board's Secrets Revealed
In a recent article, I criticized an evaluation of the Linux desktop made by a member of California's Air Resources Board (ARB). Knowing how public-sector vendors work, I wanted to head off a possible sabotage of California's open-source initiative based on an evaluation of a non-enterprise Linux desktop. That prompted some correspondence between Bill Welty, the CIO of ARB, and me, which led to a conference call with key members of his staff.
In speaking with Bill, I discovered a model state agency that has taken advantage of Linux and open-source software extensively for over a decade. The team believes ARB is first in the country in air quality management and first in the state in open-source IT solutions. When I first spoke to Welty, he immediately pointed out that his team is responsible for the agency's IT success. He points to Bill Fell, Harry Ng and Narci Gonzales as the proponents, visionaries and programmers who make open-source systems work at ARB.
The California ARB has documented both the effectiveness and the cost savings of open-source software, proving that the open-source model saves money; provides comparable or better performance than proprietary software; offers reliability, flexibility and freedom from licensing hassles and violations; and provides support options from a rich variety of suppliers and user groups. As Bill states, "Management tends to believe that not all great or elegant solutions, IT or otherwise, need to be expensive, must come pre-packaged or shrink-wrapped or include every bell and whistle. The goal is to facilitate and enhance individual productivity, albeit at a reasonable cost."
Bill also speaks openly about keeping his agency in control of its software. Open source, he states, gives ARB control over both upgrades and source code, and it lets users get at their data without being forced to keep pace with proprietary upgrade cycles.
In the same breath, Bill brings up Metcalfe's law, formulated by Robert Metcalfe of Xerox PARC, who also is known as the father of Ethernet. Metcalfe's law holds that the value of a network is proportional to the square of the number of its nodes. The ARB team sees that value because its Internet sites are organized and supported organically; as every employee contributes what he or she knows, the value of the organization's shared knowledge grows far faster than its head count.
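To put rough numbers on the law as Bill cites it: if a network's value V grows as V(n) ∝ n², then a site with 10 contributors has a relative value of 10² = 100, while one with 100 contributors is worth 100² = 10,000. Multiplying the contributors by ten multiplies the value of the shared resource by a hundred, which is exactly the payoff the ARB team attributes to letting everyone add what they know.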
At ARB, every employee can contribute to the Web sites every day. Bill's team calls the process organic because its sites refresh from the bottom up. Bill Fell says, "We've kept our model open: anyone can contribute. We have at least one page for every program, and we empower the program staff to work in [its] own best interest to keep the pages up-to-date."
ARB's first use of open-source products was to address the delivery of information. Welty describes the process like this:
This [made] sense, because the Internet was developed to support the exchange of information between disparate systems worldwide. We addressed the development of systems using open-source products, such as the LAMP suite.
We joined the Internet movement in 1991. Working with Teale Data Center, we built the Ethernet infrastructure needed to connect our air quality modelers with the San Diego SuperComputer. We also introduced Internet-based e-mail to the Board, using products such as Eudora and Pegasus. Those were primal days of the Internet. Some of you will recall using Internet search engines with names like Gopher, Archie, Veronica; names taken from Archie comic books.
Our World Wide Web services program began in 1994, when the Web sported only 50 servers. Today, there are over 35 million.
In 1995, ARB purchased a Red Hat distribution for $50 to provide proxy services that protected its NT 3.51 servers from falling prey to crackers. The agency subsequently deployed Linux to run its mailing list server, FTP server, network DNS and an Internet search engine.
In 2000, ARB began developing Web-based applications using Linux as the OS, Apache as the Web server and PHP as the scripting language. According to Bill, "Harry Ng has initiated nearly all [open-source] programs using LAMP."
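ARB has not published its application code, but for readers who have never seen the stack in action, a minimal sketch of a LAMP page may help: Apache hands a request to PHP, PHP queries MySQL and emits HTML. Everything below is hypothetical; the database, table and column names are invented for illustration, and the code uses PHP's modern PDO interface rather than the mysql_* calls common in 2000.

    <?php
    // A minimal LAMP-style page: Apache invokes PHP, PHP queries MySQL and
    // renders HTML. All database, table and column names are hypothetical.
    $db = new PDO('mysql:host=localhost;dbname=airquality', 'www_readonly', 'secret');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // Pull the ten most recent ozone readings (hypothetical schema).
    $rows = $db->query(
        'SELECT site_name, reading_ppm, measured_at
           FROM ozone_readings
          ORDER BY measured_at DESC
          LIMIT 10'
    );

    echo "<h1>Latest Ozone Readings</h1>\n<ul>\n";
    foreach ($rows as $row) {
        printf("<li>%s: %s ppm at %s</li>\n",
            htmlspecialchars($row['site_name']),
            htmlspecialchars($row['reading_ppm']),
            htmlspecialchars($row['measured_at']));
    }
    echo "</ul>\n";
    ?>

The specifics are invented; the point the ARB team makes is about the stack rather than any single script: every layer, from the kernel to the scripting language, can be inspected, patched and upgraded on the agency's own schedule.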
ARB migrated its NT Web servers to Linux in 2000. According to Bill, the net result of IT's efforts was measurable cost savings. He also claims that the team greatly increased its understanding of Internet systems and benefited from inexpensive redundancy, systems reliability, freedom from vendor licensing strategies and increased control over operations. In a presentation he gave, Welty said, "Our experiences confirmed what the trade magazines had been saying about these open-source products."