A Conversation with Alfredo Delgado of Inalambrica.net
In October 2001, while I was in Costa Rica, I met with a number of Linux advocates. Some were in the public sector and others in private industry. One private company that bases its whole product on Linux is Inalambrica.net. I interviewed their CTO for an article in Linux Journal but also saw some interesting work they were doing that I felt would be of interest to ELJ readers.
This article is based on a conversation I had with Alfredo Delgado (Alf) who came to Inalambrica on January 1, 2001 and is in charge of their systems design and integration. Inalambrica.net needed an embedded Linux system on which to base their product, and they came up with what I see as a unique solution.
ELJ: Briefly describe the products that Inalambrica is producing.
Alf: Our goal is to build a cheap and easy-to-use network management appliance (internet access and bandwidth control), with several connectivity options, aimed at SOHO and small-to-midsized businesses and networks. The appliance has a web interface to handle everything from configuration to reports to system maintenance (package upgrades, new hardware, etc.).
Not being a hardware company, our main product is the software to drive such an appliance, not the appliance itself. The idea is to be able to send out the software (our own distro and its modular web interface) to distributors and even, sometime in the not-so-distant future, directly to customers.
Ultimately we'd like to send out just a CompactFlash with an IDE interface and a list of compatible hardware, but it will take some time for us to get to that point.
ELJ: Let's get some background. Prior to your joining Inalambrica, what were they using as the Linux base for their products?
Alf: Handcrafted machines with Debian or Slackware.
ELJ: What were the shortcomings of this base?
Alf: Lack of standardization, bloat and time-consuming installations. Both are great distros, but they were not well tailored to the task at hand, especially as the number of machines started growing: the customizations to the base distros piled up, the different connectivity options spawned different sets of package, configuration and interface options, and in general, everything became bigger, slower and more complex.
ELJ: You decided to use a real (by that, I mean a general-purpose SQL-based product) database to manage the distribution. Before you picked this, did you have other ideas?
Alf: Yes. But I scrapped them very early in the process. Our first idea was just to hatchet, chisel and mold Slackware to our purposes; thus, our first test machines were cut-down Slackwares with a lot of extra packages, and the web interface had to be tailored for every installation. We'd gained some standardization, killed a bit of bloat and improved our installation time, but we were far from happy with the results. As soon as customers started to spring up in several countries, with very different hardware and connectivity needs, we realized we were going to face huge support issues very soon if we did not find a better (more flexible and automatic) way to do things.
ELJ: You elected to use PostgreSQL as the database. Most people would consider this rather heavy. Why PostgreSQL rather than, for example, MySQL?
Alf: Our web interface uses PostgreSQL extensively. There wasn't much sense in having two separate DBMSes. Also, I've always been a PostgreSQL guy, so I was already on the right side of the debate. I use referential integrity, subselects and other features regularly, and they were not supported on MySQL the last time I checked.
ELJ: Describe the structure of the database you have built. That is, what is in the tables?
Alf: Every package has its own database. The base installer holds the DBMS system, of course, and the main database holds package and file information (installed packages, versions, dependencies, reference counts for files and directories, upgrade history, etc.).
Each new package creates a database with the package's name to hold configuration options, interface information, etc. Each package maintainer is responsible for the respective database.
ELJ: You say, "Every package has its own database." Is that really a database or just a new table?
Alf: Let's take the base networking package as an example. The package is called network, so it CREATEs a DATABASE named network in PostgreSQL. This database holds several tables: 1) device, which holds physical interface information; 2) interface, which holds logical interface information (IP addresses, netmasks and data rates, for instance); 3) host, which holds host entries for DNS use with DNRD; and 4) nservers, which holds nameserver addresses, also for DNRD.
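A rough sketch of that schema in SQL helps make the layout concrete. The database and table names come from the interview; the column names and types below are my guesses from the description, not the product's actual definitions:

```sql
-- Each package gets its own database, named after the package.
CREATE DATABASE network;
-- (connect to the network database, then create its tables:)

CREATE TABLE device (          -- physical interface information
    name   text PRIMARY KEY,   -- e.g. eth0
    driver text
);

CREATE TABLE interface (       -- logical interface information
    device    text REFERENCES device(name),
    ip_addr   inet,
    netmask   inet,
    data_rate integer          -- for bandwidth control
);

CREATE TABLE host (            -- host entries served to DNRD
    hostname text PRIMARY KEY,
    ip_addr  inet
);

CREATE TABLE nservers (        -- upstream nameservers, also for DNRD
    ip_addr inet PRIMARY KEY
);
```

The REFERENCES clause in interface is an example of the referential integrity Alf mentions as a reason for choosing PostgreSQL.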
There are also historical tables to keep track of configuration changes, although I have yet to finish rollback and roll-forward on this.
ELJ: How have you built the glue? That is, the programs that you use to put packages into the database and manage the contents of a distribution?
Alf: Right now everything is shell scripts and a lot of documentation on standards for the package maintainers. The idea is to stick to the KISS principle, and with all the developers in the same building this is not too hard to do. All the complex issues actually are interface-related and are separated from the base installer, configuration files and databases by one level of abstraction: the configuration, administration and control scripts in each system package.
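As a rough illustration of that kind of glue, here is a dry-run sketch of a package-registration script. The register_package function, table layout and package-version arguments are all my assumptions, not Inalambrica's actual code; the SQL is echoed rather than piped to psql so the flow is visible:

```shell
#!/bin/sh
# Hypothetical installer glue: record a package in the main database,
# then give the package a database of its own to manage.
# In a real run the generated SQL would be piped to psql.
register_package() {
    pkg="$1"
    ver="$2"
    # Bookkeeping in the main database (installed packages, versions, ...)
    echo "INSERT INTO packages (name, version) VALUES ('$pkg', '$ver');"
    # Each package gets a database with its own name for its config tables.
    echo "CREATE DATABASE $pkg;"
}

register_package network 1.0
```

Keeping the script this thin is what the KISS approach buys: the complexity lives in each package's own configuration, administration and control scripts, one level of abstraction away.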
ELJ: Describe the install process.
Alf: The process is boot, partition, format, install the base, fire up the database, install packages and reboot. There's nothing more to it. Our package installer takes care of the main database, and each package handles its own database.
ELJ: How much user interaction is required?
Alf: Short answer: Pop the disk in and lie still until I'm done. After the installation process finishes, all interaction with the user is through the web interface.
ELJ: How long does it take?
Alf: That depends a bit on hardware, how many partitions the installer has been instructed (hard-coded) to build, how many of these will be encrypted, which packages are on the installer, etc. My test installation takes about eight minutes on a 750MHz PIII with 128MB RAM, a 128MB SanDisk CompactFlash and a 20GB hard disk, with everything but the /boot partition on ReiserFS.
ELJ: What is a recovery like?
Alf: Hell to program. I can't say much more at this time, as it's what I'm working on right now.
ELJ: When do you expect to have it completed?
Alf: Late November/early December 2001. I'm working on the rollback (taking the system to a prior state) and roll-forward (redoing changes from a certain state on) features, and running into nice problems like hardware failure notification, database corruption and so on. Designing solutions (and interfaces) for each one of them is what's slowing me down.
ELJ: What future functionality do you intend to add? How does it take advantage of the database?
Alf: I'd talk of planned rather than future functionality: rollback and roll-forward of the installation (upgrades, configuration checkpoints, access levels), many more report generators, auto-upgrade. Since the package system uses the database to keep track of things (all the way to the "who's using this file?" level), I guess it's safe to state that the database is what allows these functions to be implemented. There are, of course, other ways to implement them, but databases are remarkably well suited to handling information.
Something I'd really love to work on in the future is using the database directly from each package, as opposed to having it as a backup for regular configuration and log files. I envision a system where users are not in /etc/passwd, but rather stored in a database, with all their files being BLOBs. But that's a long-term project.
ELJ: Inalambrica's products are proprietary in nature but based on GPLed software. Please explain the relationship between Inalambrica and the Open Source community.
Alf: Most of the software we use is open source under some license or other, a lot of it GPLed. Our work has consisted, mainly, in combining the pieces in new and interesting ways and, of course, writing interfaces.
Our relationship to the Open Source community is mainly through our local LUG, where many of us are active members. Our contract with Inalambrica provides each one of us technicians with two hours per day to work on open-source projects. We've used it mainly to give time back to our local Linux community in the form of e-mail support and our IRC channels at http://www.openprojects.net/.
ELJ: With the "two hours a day to give back to the community", it sounds like that is mostly in the form of support. Is that correct, or is there some other software project that has happened or is happening?
Alf: Support and project coordination, mostly--events like Conquered, our booth at Compuexpo. Our InstallFests take a lot of time to set up, and our community hours are used quite lavishly for these purposes.
On the software side of things, we are working on a weather-station monitoring program to be GPLed. I also help out with some PHP and database work on the non-main parts of our LUG's web site.
ELJ: Open source?
Alf: Our interfaces are proprietary and, right now, so is our distro. We plan on releasing a version of our distro as soon as we are better established in the market. Meanwhile, any modifications we make to open-sourced products are, of course, to be released back to the community in accordance with the licensing requirements. Right now this includes only some patches to work with encrypted filesystems, which will probably be ready (and available for download) by January 2002.
ELJ: Thanks for taking the time to talk to us about the project.
Phil Hughes is publisher of Embedded Linux Journal.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
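The .log example above can be strung together in a single line. The /tmp/demo tree here is fabricated purely to make the example self-contained:

```shell
# Build a tiny stand-in for /home so the example is self-contained.
mkdir -p /tmp/demo/home/alice /tmp/demo/home/bob
echo "ERROR: disk full" > /tmp/demo/home/alice/app.log
echo "all quiet today" > /tmp/demo/home/bob/app.log

# find locates every .log file; grep -l names the ones containing the entry.
find /tmp/demo/home -name '*.log' -exec grep -l 'ERROR' {} +
# prints /tmp/demo/home/alice/app.log
```

Each tool does its one job (find walks the tree, grep searches the contents), and the -exec ... + form hands find's results to grep in a single batch.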
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide