A Conversation with Alfredo Delgado of Inalambrica.net
In October 2001, while I was in Costa Rica, I met with a number of Linux advocates. Some were in the public sector and others in private industry. One private company that bases its whole product on Linux is Inalambrica.net. I interviewed the company's CTO for an article in Linux Journal, but I also saw some interesting work they were doing that I felt would interest ELJ readers.
This article is based on a conversation I had with Alfredo Delgado (Alf), who joined Inalambrica on January 1, 2001, and is in charge of their systems design and integration. Inalambrica.net needed an embedded Linux system on which to base their product, and they came up with what I see as a unique solution.
ELJ: Briefly describe the products that Inalambrica is producing.
Alf: Our goal is to build a cheap and easy-to-use network management appliance (internet access and bandwidth control), with several connectivity options, aimed at SOHO and small-to-midsized businesses and networks. The appliance has a web interface to handle everything from configuration to reports to system maintenance (package upgrades, new hardware, etc.).
We are not a hardware company, so our main product is the software to drive such an appliance, not the appliance itself. The idea is to be able to send out the software (our own distro and its modular web interface) to distributors and even, sometime in the not-so-distant future, directly to customers.
Ultimately we'd like to send out just a CompactFlash with an IDE interface and a list of compatible hardware, but it will take some time for us to get to that point.
ELJ: Let's get some background. Before you joined Inalambrica, what were they using as the Linux base for their products?
Alf: Handcrafted machines with Debian or Slackware.
ELJ: What were the shortcomings of this base?
Alf: Lack of standardization, bloat and time-consuming installations. Both are great distros, but they were not well tailored to the task at hand, especially as the number of machines started growing: the customizations to the base distros piled up, the different connectivity options spawned different sets of package, configuration and interface options, and in general, everything became bigger, slower and more complex.
ELJ: You decided to use a real database (by that, I mean a general-purpose SQL-based product) to manage the distribution. Before you picked this, did you have other ideas?
Alf: Yes, but I scrapped them very early in the process. Our first idea was just to hatchet, chisel and mold Slackware to our purposes; thus, our first test machines were cut-down Slackwares with a lot of extra packages, and the web interface had to be tailored for every installation. We'd gained some standardization, killed a bit of bloat and improved our installation time, but we were far from happy with the results. As soon as customers started to spring up in several countries, with very different hardware and connectivity needs, we realized we were going to face huge support issues very soon if we did not find a better (more flexible and automatic) way to do things.
ELJ: You elected to use PostgreSQL as the database. Most people would consider this rather heavy. Why PostgreSQL rather than, for example, MySQL?
Alf: Our web interface uses PostgreSQL extensively. There wasn't much sense in having two separate DBMSes. Also, I've always been a PostgreSQL guy, so I was already on the right side of the debate. I use referential integrity, subselects and other features regularly, and they were not supported on MySQL the last time I checked.
ELJ: Describe the structure of the database you have built. That is, what is in the tables?
Alf: Every package has its own database. The base installer holds the DBMS, of course, and the main database holds package and file information (installed packages, versions, dependencies, reference counts for files and directories, upgrade history, etc.).
Each new package creates a database with the package's name to hold configuration options, interface information, etc. Each package maintainer is responsible for the respective database.
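To make the structure concrete, here is a minimal sketch of what the main database's schema might look like, emitted as the SQL one would feed to psql. The table and column names are my own illustrative guesses; only the kinds of information tracked (packages, versions, dependencies, file reference counts) come from the interview.

```shell
# Hypothetical sketch of the main package database. Table and column
# names are illustrative, not Inalambrica's actual schema.
emit_main_schema() {
cat <<'EOF'
CREATE TABLE package (
    name      text PRIMARY KEY,
    version   text NOT NULL,
    installed timestamp DEFAULT now()
);
CREATE TABLE dependency (
    package  text REFERENCES package(name),
    requires text NOT NULL
);
CREATE TABLE file (
    path     text PRIMARY KEY,
    refcount integer NOT NULL DEFAULT 1  -- how many packages use this file
);
EOF
}
emit_main_schema
```

A real installer would pipe output like this into psql once the base system has brought the database up.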
ELJ: You say, "Every package has its own database." Is that really a database or just a new table?
Alf: Let's take the base networking package as an example. The package is called network, so it does a CREATE DATABASE network in PostgreSQL. This database holds several tables: 1) device, which holds physical interface information; 2) interface, which holds logical interface information (IP addresses, netmasks and data rates, for instance); 3) host, which holds host entries for DNS use with DNRD; and 4) nservers, which holds nameserver addresses, also for DNRD.
There are also historical tables to keep track of configuration changes, although I have yet to finish rollback and roll-forward on this.
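As a sketch, the network package's database might be created with SQL along these lines. The table names come from the interview, but every column is a hypothetical illustration.

```shell
# Hypothetical SQL for the network package's database. Table names are
# from the interview; columns are guesses for illustration only.
emit_network_schema() {
cat <<'EOF'
CREATE DATABASE network;
\c network
CREATE TABLE device    (name text PRIMARY KEY, driver text);  -- physical interfaces
CREATE TABLE interface (device text, address inet,
                        netmask inet, rate integer);          -- logical interfaces
CREATE TABLE host      (name text, address inet);             -- host entries for DNRD
CREATE TABLE nservers  (address inet);                        -- nameservers for DNRD
EOF
}
emit_network_schema
```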
ELJ: How have you built the glue? That is, the programs that you use to put packages into the database and manage the contents of a distribution?
Alf: Right now everything is shell scripts and a lot of documentation on standards for the package maintainers. The idea is to stick to the KISS principle, and with all the developers in the same building this is not too hard to do. All the complex issues actually are interface-related and are separated from the base installer, configuration files and databases by one level of abstraction: the configuration, administration and control scripts in each system package.
ELJ: Describe the install process.
Alf: The process is boot, partition, format, install the base, fire up the database, install packages and reboot. There's nothing more to it. Our package installer takes care of the main database, and each package handles its own database.
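The sequence Alf describes can be sketched as a shell script. Every step here is a stub that only announces itself, and the function names and package list are hypothetical.

```shell
#!/bin/sh
# Stub sketch of the install sequence: partition, format, install the
# base, fire up the database, install packages, reboot.
# Step names and the package list are hypothetical.
set -e
partition_disk() { echo "partitioning target disk"; }
format_disk()    { echo "formatting partitions"; }
install_base()   { echo "copying base system"; }
start_db()       { echo "starting PostgreSQL"; }
install_pkgs()   { for p in network dnrd webui; do echo "installing $p"; done; }

partition_disk
format_disk
install_base
start_db
install_pkgs
echo "rebooting"
```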
ELJ: How much user interaction is required?
Alf: Short answer: Pop the disk in and lie still until I'm done. After the installation process finishes, all interaction with the user is through the web interface.
ELJ: How long does it take?
Alf: That depends a bit on hardware, how many partitions the installer has been instructed (hard-coded) to build, how many of these will be encrypted, which packages are on the installer, etc. My test installation takes about eight minutes on a 750MHz PIII with 128MB RAM, a 128MB SanDisk CompactFlash and a 20GB hard disk, with everything but the /boot partition on ReiserFS.
ELJ: What is a recovery like?
Alf: Hell to program. Can't say much more at this time, as it's what I'm working on right now.
ELJ: When do you expect to have it completed?
Alf: Late November/early December 2001. I'm working on the rollback (taking the system to a prior state) and roll-forward (redoing changes from a certain state on) features, and running into nice problems like hardware failure notification, database corruption and so on. Designing solutions (and interfaces) for each one of them is what's slowing me down.
ELJ: What future functionality do you intend to add? How does it take advantage of the database?
Alf: I'd talk of planned rather than future functionality: rollback and roll-forward of the installation (upgrades, configuration checkpoints, access levels), many more report generators, auto-upgrade. Since the package system uses the database to keep track of things (all the way to the "who's using this file?" level), I guess it's safe to state that the database is what allows these functions to be implemented. There are, of course, other ways to implement them, but databases are remarkably well suited to handling information.
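The "who's using this file?" bookkeeping is classic reference counting. Here is a self-contained sketch in which a flat file stands in for the PostgreSQL table; the paths and counts are invented for illustration.

```shell
#!/bin/sh
# Reference-counting sketch: a flat file stands in for the PostgreSQL
# table of per-file reference counts. Paths and counts are invented.
REFS=$(mktemp)
printf '%s\n' '/usr/bin/dnrd 2' '/etc/dnrd.conf 1' > "$REFS"

# Simulate one package releasing its claim on a file and report
# whether the file could then be deleted.
release_file() {
    path=$1
    count=$(awk -v p="$path" '$1 == p { print $2 }' "$REFS")
    count=$((count - 1))
    if [ "$count" -le 0 ]; then
        echo "$path: refcount 0, safe to remove"
    else
        echo "$path: still used by $count package(s)"
    fi
}

release_file /usr/bin/dnrd     # still used by 1 package(s)
release_file /etc/dnrd.conf    # refcount 0, safe to remove
```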
Something I'd really love to work on in the future is using the database directly from each package, as opposed to having it as a backup for regular configuration and log files. I envision a system where users are not in /etc/passwd, but rather stored in a database, with all their files being BLOBs. But that's a long-term project.
ELJ: Inalambrica's products are proprietary in nature but based on GPLed software. Please explain the relationship between Inalambrica and the Open Source community.
Alf: Most of the software we use is open source under some license or other, a lot of it GPLed. Our work has consisted mainly of combining the pieces in new and interesting ways and, of course, writing interfaces.
Our relationship to the Open Source community is mainly through our local LUG, where many of us are active members. Our contract with Inalambrica provides each of us technicians with two hours per day to work on open-source projects. We've used it mainly to give time back to our local Linux community in the form of e-mail support and our IRC channels at http://www.openprojects.net/.
ELJ: With the "two hours a day to give back to the community", it sounds like that is mostly in the form of support. Is that correct, or is there some other software project that has happened or is happening?
Alf: Support and project coordination, mostly--events like Conquered, our booth at Compuexpo. Our InstallFests take a lot of time to set up, and our community hours are used quite lavishly for these purposes.
On the software side of things, we are working on a weather-station monitoring program to be GPLed. I also help out with some PHP and database work on the secondary sections of our LUG's web site.
ELJ: Open source?
Alf: Our interfaces are proprietary and, right now, so is our distro. We plan on releasing a version of our distro as soon as we are better established in the market. Meanwhile, any modifications we make to open-sourced products are, of course, to be released back to the community in accordance with the licensing requirements. Right now this includes only some patches to work with encrypted filesystems, which will probably be ready (and available for download) by January 2002.
ELJ: Thanks for taking the time to talk to us about the project.
Phil Hughes is publisher of Embedded Linux Journal.