Linux in Government: Technical Aspects of The Emergency Response Network System
Last week's column was about Linux acceptance in a critical emergency role at the Department of Homeland Security (DHS). Specifically, that article discussed YHD Software Inc., a company started by Jo Balderas and her son, Mike, and its efforts to provide Emergency Response Network systems (ERN) currently being used by "the Federal Bureau of Investigation, the Department of Public Safety and the Department of Homeland Security".
This week, I met with Mike Balderas, who does the critical programming and manages development and support for YHD Software. In the following interview, Mike explains more of the technical aspects of the ERN project.
Michael David Balderas began programming seriously at age 20 in Ft. Worth, Texas. Prior to that, he worked for a national ISP in technical services. He became interested in open-source software because he had an insatiable desire to learn and wanted to see the code. Today, he's leading a team building one of the critical pieces of infrastructure responsible for our nation's security. If you met him, you quickly would recognize that he's more than capable of handling the task.
Tom Adelstein: Can you give us a thumbnail of how ERN began?
Mike Balderas: The FBI's ERN system originally began as a request for a simple e-mail list server. We compiled and converted the user list [from] several different data sources the FBI provided. We quickly found that the majority of the contact information on record was outdated.
This led to the deployment of another system, in which we required users to authenticate and update their contact information. Through attrition, the outdated data stores became irrelevant.
Over time, our client needed to find classes of individuals based on demographic information. This allowed the client to target specific members in the system and contact only those who fit a profile, or to contact them in priority order. Instead of setting up an infinite number of individual e-mail list servers, with the potential for replication, we used existing technology we had developed for other clients to address this issue.
After partnering with Twenty First Century Communications, we added voice and fax capabilities and ERN grew from an e-mail listserver into a fully functional contact solution with the ability to vet the audience, assign them levels of authority and responsibility and contact them at any level in between.
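The profile-and-priority targeting Balderas describes can be sketched roughly as follows. This is a hypothetical illustration in Python, not ERN code (the actual system is PHP/MySQL), and every name and field here is invented:

```python
# Hypothetical sketch of profile-based contact targeting (not ERN code).
# ERN itself is a PHP/MySQL system; this only illustrates the idea of
# selecting contacts that match a demographic profile, ordered by priority.

contacts = [
    {"name": "A. Agent", "agency": "FBI", "region": "TX", "priority": 1},
    {"name": "B. Officer", "agency": "DPS", "region": "TX", "priority": 3},
    {"name": "C. Analyst", "agency": "DHS", "region": "OK", "priority": 2},
]

def select_targets(contacts, profile):
    """Return contacts matching every key/value in profile, highest priority first."""
    matches = [c for c in contacts
               if all(c.get(k) == v for k, v in profile.items())]
    return sorted(matches, key=lambda c: c["priority"])

targets = select_targets(contacts, {"region": "TX"})
print([c["name"] for c in targets])  # → ['A. Agent', 'B. Officer']
```

The point of a scheme like this is that one contact store serves any number of audiences, rather than maintaining a separate list server per audience.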
Those are the technical origins.
TA: What prompted you to use LAMP?
MB: I chose the technology for several reasons. Economically, it was the most feasible solution to perform the task, as it cost next to nothing for the development tools and I could use commodity hardware. Support for the components of LAMP (Linux, Apache, MySQL, PHP) is easily obtained and accessible on the Web via newsgroups, HOWTOs, e-mail lists and user groups.
TA: Back in early 2001, I had not heard of LAMP. How did you know about it, and how did you string it together?
MB: I guess you could say we pioneered the use of LAMP. But back then, I had never heard it called LAMP. I had been using UNIX derivatives for several years.
I supported FreeBSD and later tried Mandrake in the early stages of this project. I had a subscription to CDROM.com's services for both projects, and I would compile the latest and greatest flavors when they came out. FreeBSD's lack of hardware support was a serious drawback for me at the time. So, I focused primarily on Mandrake for the project. I still supported FreeBSD with the hopes that it eventually would catch up.
Then, I found Red Hat. Red Hat hit my radar about the time we did our first production run of ERN. Red Hat caught my eye for many reasons, and we have stuck [with] it to this day. We deploy our system on Red Hat Enterprise Linux 3.0.
TA: What about the other components?
MB: Apache had a large following and plenty of market visibility. I found it easy to use, configure and maintain. When people find issues, the Apache team deploys patches rapidly and resolves them quickly. Many eyes are on Apache code. So, Apache was a given.
MySQL seemed like a fledgling at the time, not that well known to me. My first experience and interaction with MySQL came from its bundling as a package in Red Hat 7.0. I had set up and maintained a few PostgreSQL systems, but the ease of use and functionality of MySQL got my attention. It does what we need and scales.
Then, I chose PHP. At the time, I wanted a tag language to work with our database. MySQL was something new to me, and PHP seemed like a good alternative to other server-side scripting languages. Its tight integration with MySQL's APIs and its easy-to-use syntax appealed to me, and I liked the Apache module capabilities. I tried it, liked it, and it became an easy choice.
TA: I understand why you started with Red Hat Linux and understand why you stuck with it. I'm sure you looked at other packages, and I even heard you have been approached to change platforms. Is there something else about Red Hat that you favor?
MB: Like I said, Red Hat popped onto the map when we started putting our first production system on-line. Red Hat's documentation seemed useful to me. I especially liked their introduction to their package management system. That made it easy for me to understand.
For developers, the ability to install, update and maintain packages and dependencies easily reduces development time. I can find all the packages I need, and they install in seconds. That reduces my effort and the cost of doing business. I also trust them to give me good packages or fix them instantly.
Red Hat dropped our maintenance cost significantly, leaving more time for product development and less time spent maintaining the underlying OS. With the release of the Red Hat Network platform for maintaining OS and package updates, as well as alerts and notifications of new bug, security and performance updates, we spend less time chasing the latest security holes and fixes and more time inventing, handling change requests and fulfilling the client's requirements and needs.
TA: I like PHP myself and have my own reasons for why I like it. Can you provide any additional insight into the way you utilize PHP today that is different from when you first started using it?
MB: In the early days, PHP was used primarily for simple things, such as echoing the console date back to the client's browser and minimal database connections for content. We considered PHP to be the best way to go for our solution for many reasons. Aside from the inherent MySQL interoperability already mentioned, the user base and support groups for [PHP] are huge.
We have found that, although it may take some time, we can meet all client demands using PHP. The language supports structured, ordered logic, and its support for Perl regular expressions, the MySQL API set and PEAR [PHP's equivalent to CPAN] makes it highly extensible and adaptable.
We use PHP on the front end of our solution to initiate connectivity to the database, obtain information and present it to the user, write files on the fly as well as build e-mail on the fly. We also use PHP console scripts, much like one would use a Perl script or shell script, to do a lot of our high-demand, resource-intensive functions.
When many people think of PHP, they think Web only, in the sense that it is great for preprocessing HTML and presenting data formatted in a certain way. PHP also has a strong console engine and can do almost anything any other shell scripting language can, and then some.
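The dual web/console usage Balderas describes, one codebase serving both a browser front end and cron-driven batch jobs, is a common pattern regardless of language. A minimal sketch in Python (ERN itself does this in PHP; every name below is invented):

```python
# Hypothetical sketch (invented names): one core function shared by a
# web-facing entry point and a console/cron entry point, mirroring the
# way ERN reuses the same PHP logic in both contexts.

def build_alert_body(event, recipients):
    """Core logic: format an alert message; usable from web or console."""
    return "ALERT: {} -> {} recipients".format(event, len(recipients))

def web_handler(event):
    # Called by the web front end; returns markup for the browser.
    return "<p>{}</p>".format(build_alert_body(event, ["a@x", "b@x"]))

def console_job(event):
    # Called from cron; prints plain text, like a Perl or shell script would.
    print(build_alert_body(event, ["a@x", "b@x"]))

console_job("system test")  # prints: ALERT: system test -> 2 recipients
```

Keeping the resource-intensive work in console entry points, as the interview notes, lets the high-demand functions run outside the web server entirely.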
I also think that the large community involvement provides a lot of innovation. I forget the number of Web service applications it supports, but it is the most-used language on the Web. It's used even more than Perl. Now, that's something we didn't see when we started using it. Today it does everything.
TA: Some people question the use of MySQL, but you seem to favor it in this application. Why?
MB: We initially chose to give it a chance because it was bundled with Red Hat 7. All the necessary documentation is at your fingertips on-line. With the introduction of InnoDB and transactions, I felt it had matured to the point that it was reliable and still cost effective. MySQL doesn't require a full-time database administrator like Oracle or some of the others do.
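The InnoDB transactions Balderas cites are what gave MySQL atomic commit and rollback (in MySQL itself, `START TRANSACTION` ... `COMMIT` on an InnoDB table). The concept can be demonstrated self-containedly with Python's sqlite3; the table and column names here are invented, not from ERN:

```python
import sqlite3

# Concept demo of transactional semantics (the feature InnoDB brought to
# MySQL), shown with sqlite3 so the example is self-contained.
# Table and column names are invented, not from ERN.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, verified INTEGER)")

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("INSERT INTO contacts VALUES ('A. Agent', 1)")
        raise RuntimeError("simulated failure mid-batch")
except RuntimeError:
    pass

# The failed batch was rolled back, so no partial data was written.
count = conn.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]
print(count)  # → 0
```

For a contact system that must never hold half-updated records, all-or-nothing writes like this are exactly the reliability threshold Balderas says InnoDB crossed.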
MySQL also is backed by a commercial company. We've had good service from them, and I can point to them when someone asks about the technology we use. That may not seem important to small developers, but when you work in highly sensitive areas in law enforcement, homeland security and the like, it's critical.
TA: That leads to the next question. Working with government clients can be demanding. What do you like best about working for the FBI, DHS and law enforcement?
MB: I would have to say the challenge of it all. Although we all believe the government knows what it knows when it knows it, the sad fact of the matter is that's not always true. As the 9/11 Commission report shows, some barriers exist to communication and information sharing between the departments related to intelligence and enforcement. That gives me a field in which to use technology to solve important problems.
I love the challenge of building an interoperable, cross-agency solution that allows the distribution and collaboration of information. Initially, the FBI came to us saying it wanted an e-mail listserver. We showed the FBI what more could be done, and [the project] has snowballed since.
TA: What technical challenges do you see the most often?
MB: Technical challenges are minimal in this line of business. The real challenges lie in getting exactly what the clients (FBI, DHS, law enforcement) have in their heads into development. Sometimes, we're able to get specifications right away. Sometimes, we start working on something and that leads to a new train of thought.
More than technical challenges, such as making one component talk to another component, getting requirements, specifications and business rules down on paper can be the bigger challenges. Our history of doing that gives us a kind of edge.
TA: As your product begins to move out from the local office of the FBI and Homeland Security, have you had to make programming modifications? What modifications are required to hook the program into a national network?
MB: We have had to make minimal programming modifications related to the migration. The system was designed from the ground up to be modular and scalable. Due to the modular structure and the underlying layout of the system, multiple systems can plug into a national implementation with little to no modification.
[The move] has been more of a usability challenge due to the fact that most law enforcement and government agents have certain expectations about the look and feel of user screens. We have had to find a common ground among the different internal methods of operations among all the agencies and organizations represented and make the product reflect [that commonality].
TA: What technical advice do you have for Linux startups wanting to provide Web service applications to the public sector?
MB: Get to know your clients and the way they think. Each and every sector has a different way of doing things. Think the way they think, and see the world the way they see it. Don't try to shove a solution down someone's throat. Don't expect them to know technical terms or be technically inclined. You're there to solve problems. Find out what those problems are and make it easy on them.
Tom Adelstein lives in Dallas, Texas, with his wife, Yvonne, and works as a Linux and open-source software consultant locally and nationally. He's the co-author of the upcoming book Exploring the JDS Linux Desktop, published by O'Reilly and Associates. Tom has written numerous articles on Linux technical and marketing issues as a guest editor for a variety of publications. His latest venture has him working as the webmaster of JDSHelp.org.