Linux in Government: Technical Aspects of The Emergency Response Network System
Last week's column was about Linux acceptance in a critical emergency role at the Department of Homeland Security (DHS). Specifically, that article discussed YHD Software Inc., a company started by Jo Balderas and her son, Mike, and its efforts to provide Emergency Response Network systems (ERN) currently being used by "the Federal Bureau of Investigation, the Department of Public Safety and the Department of Homeland Security".
This week, I met with Mike Balderas, who does the critical programming and manages development and support for YHD Software. In the following interview, Mike explains more of the technical aspects of the ERN project.
Michael David Balderas began programming seriously at age 20 in Ft. Worth, Texas. Prior to that, he worked for a national ISP in technical services. He became interested in open-source software because he had an insatiable desire to learn and wanted to see the code. Today, he's leading a team building one of the critical pieces of infrastructure responsible for our nation's security. If you met him, you quickly would recognize that he's more than capable of handling the task.
Tom Adelstein: Can you give us a thumbnail of how ERN began?
Mike Balderas: The FBI's ERN system originally began as a request for a simple e-mail list server. We compiled and converted the user list [from] several different data sources the FBI provided. We quickly found that the majority of the contact information on record was outdated.
This led to the deployment of another system, in which we forced the users to authenticate and required them to update their contact information. Through that attrition, the outdated records fell out of the data stores.
Over time, our client needed to find classes of individuals based on demographic information. This allowed the client to target specific members in the system and contact only those who fit a profile, or to contact them in priority order. Instead of setting up an infinite number of individual e-mail list servers with the potential for replication, we used existing technology we had developed for other clients to address this issue.
After partnering with Twenty First Century Communications, we added voice and fax capabilities and ERN grew from an e-mail listserver into a fully functional contact solution with the ability to vet the audience, assign them levels of authority and responsibility and contact them at any level in between.
Those are the technical origins.
TA: What prompted you to use LAMP?
MB: I chose the technology for several reasons. Economically, it was the most feasible solution to perform the task, as it cost next to nothing for the development tools and I could use commodity hardware. Support for the components of LAMP (Linux, Apache, MySQL, PHP) is easily obtained and accessible on the Web via newsgroups, HOWTOs, e-mail lists and user groups.
TA: Back in early 2001, I had not heard of LAMP. How did you know about it, and how did you string it together?
MB: I guess you could say we pioneered the use of LAMP. But, back then, I never had heard it called LAMP. I had been using UNIX derivatives for several years.
I supported FreeBSD and later tried Mandrake in the early stages of this project. I had a subscription to CDROM.com's services for both projects, and I would compile the latest and greatest flavors when they came out. FreeBSD's lack of hardware support was a serious drawback for me at the time. So, I focused primarily on Mandrake for the project. I still supported FreeBSD with the hopes that it eventually would catch up.
Then, I found Red Hat. Red Hat hit my radar about the time we did our first production run of ERN. Red Hat caught my eye for many reasons, and we have stuck [with] it to this day. We deploy our system on Red Hat Enterprise Linux 3.0.
TA: What about the other components?
MB: Apache had a large following and plenty of market visibility. I found it easy to use and easy to configure and maintain. When people find issues, the Apache team deploys patches rapidly and resolves them quickly; many eyes are on the Apache code. So, Apache was a given.
MySQL seemed like a fledgling at the time, not that well known to me. My first experience and interaction with MySQL came from its bundling as a package in Red Hat 7.0. I had set up and maintained a few PostgreSQL systems, but the ease of use and functionality of MySQL got my attention. It does what we need and scales.
Then, I chose PHP. At the time, I wanted a tag language to work with our database. MySQL was something new to me, and PHP seemed like a good alternative to other server-side scripting languages. With its tight integration with MySQL's APIs, its easy-to-use syntax and its Apache module capabilities, I tried it, liked it, and it became an easy choice.
TA: I understand why you started with Red Hat Linux and understand why you stuck with it. I'm sure you looked at other packages, and I even heard you have been approached to change platforms. Is there something else about Red Hat that you favor?
MB: Like I said, Red Hat popped onto the map when we started putting our first production system on-line. Red Hat's documentation seemed useful to me. I especially liked the introduction to its package management system; that made it easy for me to understand.
For developers, the ability to install, update and maintain packages and dependencies easily reduces development time. I can find all the packages I need, and they install in seconds. That reduces my effort and the cost of doing business. I also trust them to give me good packages or fix them instantly.
Red Hat dropped our maintenance cost significantly. That leaves more time for developing products and less time spent maintaining the underlying OS. With the release of the Red Hat Network platform for maintaining OS and package updates, as well as its alerts and notifications of new bug, security and performance updates, we spend less time trying to keep up with the latest security holes and fixes. We have more time to focus on inventing, providing for change requests and helping fulfill the client's requirements and needs.
TA: I like PHP myself and have my own reasons for why I like it. Can you provide any additional insight into the way you utilize PHP today that is different from when you first started using it?
MB: In the early days, PHP was used primarily for simple things, such as echoing the console date back to the client's browser and minimal database connections for content. We considered PHP to be the best way to go for our solution for many reasons. Aside from the inherent MySQL interoperability already mentioned, the user base and support groups for [PHP] are huge.
We have found that, although it may take some time, we can meet all client demands using PHP. The language is structured and logically ordered, and its support for Perl regular expressions, the MySQL API set and PEAR [PHP's equivalent to CPAN] makes it highly extensible and adaptive.
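The Perl-compatible regular expression support Balderas mentions lives in PHP's preg_* functions. A minimal sketch of parsing a contact record; the record layout and identifier format here are hypothetical, for illustration only:

```php
<?php
// The preg_* family gives PHP Perl-compatible regular expressions.
// This record format is invented for the example.
$record = 'SA-04417, Dallas Field Office, updated 2004-11-02';

// Pull a badge-style identifier anchored at the start of the line.
if (preg_match('/^([A-Z]{2}-\d{5})/', $record, $m)) {
    echo "id: {$m[1]}\n";      // prints "id: SA-04417"
}

// Extract the date components with capture groups.
if (preg_match('/(\d{4})-(\d{2})-(\d{2})/', $record, $d)) {
    echo "year: {$d[1]}\n";    // prints "year: 2004"
}
```

The capture groups land in the third argument as an array, with the full match at index 0 and each parenthesized group after it.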
We use PHP on the front end of our solution to initiate connectivity to the database, obtain information and present it to the user, and to write files and build e-mail on the fly. We also use PHP console scripts, much like one would use a Perl script or shell script, to do a lot of our high-demand, resource-intensive functions.
Many people, when they think of PHP, think Web only, in the sense that it is great for preprocessing HTML and presenting data formatted in a certain way. But PHP also has a strong console engine and can do almost anything any other shell scripting language can, and then some.
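The console use Balderas describes runs through the PHP CLI binary rather than Apache. A sketch of that pattern, filtering a contact list by priority the way ERN targets a class of recipients; the contact data and field names are hypothetical:

```php
#!/usr/bin/env php
<?php
// A console script run with the PHP CLI binary rather than through
// a Web server. The contacts below are invented for the example;
// ERN's real data would come from its MySQL store.
$contacts = array(
    array('name' => 'A. Agent',   'email' => 'agent@example.gov',   'priority' => 1),
    array('name' => 'B. Officer', 'email' => 'officer@example.gov', 'priority' => 2),
);

// Keep only the high-priority contacts, analogous to targeting
// members who fit a profile.
$targets = array_filter($contacts, function ($c) {
    return $c['priority'] === 1;
});

foreach ($targets as $c) {
    // In a real run this would hand off to the mailer; here we
    // simply print the generated address line.
    printf("To: %s <%s>\n", $c['name'], $c['email']);
}
```

Run from a shell as `php script.php` (or directly via the shebang line), this behaves like any other scripting-language tool, with no Web server involved.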
I also think that the large community involvement provides a lot of innovation. I forget the number of Web service applications it supports, but it is the most-used language on the Web, used even more than Perl. Now, that's something we didn't see when we started using it. Today it does everything.
TA: Some people question the use of MySQL, but you seem to favor it in this application. Why?
MB: We initially chose to give it a chance because it was bundled with Red Hat 7. All the necessary documentation is at your fingertips on-line. With the introduction of InnoDB and transactions, I felt it had matured to the point that it was reliable and still cost effective. MySQL doesn't require a full-time database administrator like Oracle or some of the others do.
MySQL also is backed by a commercial company. We've had good service from them, and I can point to them when someone asks about the technology we use. That may not seem important to small developers, but when you work in highly sensitive areas in law enforcement, homeland security and the like, it's critical.
TA: That leads to the next question. Working with government clients can be demanding. What do you like best about working for the FBI, DHS and law enforcement?
MB: I would have to say the challenge of it all. Although we all believe the government knows what it knows when it knows it, the sad fact of the matter is that's not always true. As the 9/11 Commission report shows, some barriers exist to communication and information sharing between the departments related to intelligence and enforcement. That gives me a field in which to use technology to solve important problems.
I love the challenge of building an interoperable, cross-agency solution that allows the distribution and collaboration of information. Initially, the FBI came to us saying it wanted an e-mail listserver. We showed the FBI what more could be done, and [the project] has snowballed since.
TA: What technical challenges do you see the most often?
MB: Technical challenges are minimal in this line of business. The real challenges lie in getting exactly what the clients (FBI, DHS, law enforcement) have in their heads into development. Sometimes, we're able to get specifications right away. Sometimes, we start working on something and that leads to a new train of thought.
More than technical challenges, such as making one component talk to another, the bigger challenge is getting requirements, specifications and business rules down on paper. Our history of doing that gives us a kind of edge.
TA: As your product begins to move out from the local office of the FBI and Homeland Security, have you had to make programming modifications? What modifications are required to hook the program into a national network?
MB: We have had to make minimal programming modifications related to the migration. The system was designed from the ground up to be modular and scalable. Due to the modular structure and the underlying layout of the system, multiple systems can plug into a national implementation with little to no modification.
[The move] has been more of a usability challenge due to the fact that most law enforcement and government agents have certain expectations about the look and feel of user screens. We have had to find a common ground among the different internal methods of operations among all the agencies and organizations represented and make the product reflect [that commonality].
TA: What technical advice do you have for Linux startups wanting to provide Web service applications to the public sector?
MB: Get to know your clients and the way they think. Each and every sector has a different way of doing things. Think the way they think, and see the world the way they see the world. Don't try to shove a solution down someone's throat. Don't expect them to know technical terms or be technically inclined. You're there to solve problems. Find out what those problems are and make it easy on them.
Tom Adelstein lives in Dallas, Texas, with his wife, Yvonne, and works as a Linux and open-source software consultant locally and nationally. He's the co-author of the upcoming book Exploring the JDS Linux Desktop, published by O'Reilly and Associates. Tom has written numerous articles on Linux technical and marketing issues as a guest editor for a variety of publications. His latest venture has him working as the webmaster of JDSHelp.org.