Linux WAN Routers
People sometimes tell me that GPL software, because it's free, does not have the quality to be part of a production environment. This is like saying that only authors with publishing contracts write good poetry. When I hear this, I always have to wonder if these people have ever even used GPL software or know how much they depend upon it every time they browse the Web or receive e-mail from the Internet.
There are some very distinct advantages in source code availability for network administrators. As the Internet continues to evolve, along with protocols, security measures and resource conservation techniques, routers will have to keep up.
Five years ago, the shortcomings of IPv4 and the need for protocol encryption and encapsulation might have been far-fetched ideas, suited only for the minds of IETF gurus. Today you deal with them each time you use IP-masquerading or PPTP. Because of its openness and its rich tool set, Linux makes an ideal platform for developing and testing these sorts of protocol extensions (e.g., IPv6 and ENSKIP). As a network administrator running Linux, you will often have these tools available sooner than they appear in commercial implementations. And they will be written by someone who not only wants to see the software work, but also uses the software himself; not by someone trying to meet a coding deadline. Because Linux is not in commercial competition, the focus is on interoperability, not on proprietary protocol extensions. Looking forward, it is difficult to say what we will be running in the year 2005. I do, however, feel certain that developing these tools in a robust, open operating system must be substantially easier than developing for proprietary architectures with more limited tools and support. Therefore, I feel better supported.
Good support and vendor stability are essential aspects of any large IT investment. I never have to worry that Linux is going to go out of business or be purchased by another company and then discontinued. As for WAN routing hardware for Linux systems, I have the feeling that it will be around as long as there is a market for it. If my communications hardware vendor does ever go out of business, I'm not left hung out to dry as I would be with a traditional router. I have the source code for the drivers and can continue to adapt and enhance it for as long as doing so is functional and economically feasible.
Just because you paid for support does not mean that you will get it. Arguments to the effect that “we cannot use Linux because we cannot get support” are flawed. Typically, they are made by a management that does not believe employees are capable of performing their jobs. The largest percentage of my experience with vendor support can be categorized in one of these ways:
Completely wasted time trying to explain the problem to someone who has no idea what I'm talking about.
“Have you tried our latest patch/reloading the software?”
“It sounds like it has nothing to do with our system/software.”
Troubleshooting IT problems can be difficult and time-consuming, and no one can afford to staff their help desk with their top programmers and troubleshooters. Since you will have to troubleshoot the majority of your problems yourself anyway, why pay for support?
My firm has offices in the United States, Europe and Asia. As an international company, WAN connections are an important part of the infrastructure. Because of the time zone differences between our sites, it is critical to have a stable routing platform; midnight in one location is high noon in another, so maintenance windows are small. Just like any other company, we are conscious of costs. To meet these goals, we use Linux/Sangoma routers for:
512Kbps link to the Internet
56Kbps backup link to the Internet
We intend to deploy three more Linux/Sangoma frame-relay routers this year. In addition, we use Linux as a LAN router, a server platform for all of the standard TCP/IP services (DNS, FTP, HTTP, packet-filtering, IP-masquerading, proxying, SMTP, NTP, NNTP, etc.) and, of course, as a desktop.
The actual configuration of our Linux frame-relay router is Debian GNU/Linux (version 1.2) running on a 486/66 with 8MB RAM, an 850MB IDE hard drive, a Sangoma WANPIPE S508 router card and a SpellCaster DataCommute ISDN card. The ISDN card is used as a backup, in case the frame-relay link fails. This system had been up for over 160 days before it was rebooted by a sadly mistaken NT administrator trying to log into another system that shares the same keyboard and monitor.
If you're wondering why I went to the trouble to write an article about using Linux as a router, maybe the following anecdote will help explain it.
Once upon a time, our Internet link was connected with a BigName router. One day, this router decided to die. In total, it took about an hour to get a technician from BigName on the phone; we whiled away the time scrambling around looking for our support ID, wading through the “press six if you'd like to use our fax-back server” menus, waiting on hold and fending off frantic users. After a short discussion about my abilities to configure a terminal program (peppered with a few curt remarks of my own about what sort of idiot cable was needed to access the console), the technician decided that we needed a new motherboard. Since we had paid copiously for our support contract, a new board was to arrive the next day. We informed our users of the situation and eagerly awaited our package. A package did arrive promptly the next day by the promised time. However, much to our dismay, we had received a new power-supply and case—no motherboard.
Now we were in trouble. BigName was going to send us our part, but that meant at least another 24 hours of downtime. Based on our experience with the Linux frame-relay router, we decided to try our spare Sangoma S508 card for this link. We had Linux loaded and the software configured in about an hour. We started the WANPIPE software and nothing happened. Using the ppipemon utility that comes with the Sangoma product, we were able to tell that the link was failing in the LCP negotiation phase. That is, our router was talking to the ISP's router, but they could not mutually agree on an operating parameter set for the link. It is fortunate that we had these tools. Our ISP was telling us that they were quite certain that we had no routing hardware whatsoever attached to the line. This despite the fact that we could tell them the exact data streams we were receiving from their router.
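The LCP negotiation phase mentioned above is defined in RFC 1661: each peer sends Configure-Request packets listing the link options it wants, and the other side acknowledges, NAKs or rejects them. As a rough illustration of what a trace tool like ppipemon is decoding (this is a generic sketch, not the actual WANPIPE output format), here is a minimal parser for an LCP packet:

```python
import struct

# LCP packet codes, per RFC 1661
LCP_CODES = {1: "Configure-Request", 2: "Configure-Ack", 3: "Configure-Nak",
             4: "Configure-Reject", 5: "Terminate-Request", 6: "Terminate-Ack"}

# Common LCP configuration option types
LCP_OPTIONS = {1: "MRU", 2: "ACCM", 3: "Auth-Protocol",
               5: "Magic-Number", 7: "PFC", 8: "ACFC"}

def parse_lcp(packet):
    """Decode an LCP packet: 1-byte code, 1-byte id, 2-byte length, options."""
    code, ident, length = struct.unpack("!BBH", packet[:4])
    options = []
    pos = 4
    while pos < length:
        otype, olen = packet[pos], packet[pos + 1]
        data = packet[pos + 2:pos + olen]
        options.append((LCP_OPTIONS.get(otype, "type %d" % otype), data.hex()))
        pos += olen
    return LCP_CODES.get(code, "code %d" % code), ident, options

# A Configure-Request asking for MRU 1500 and announcing a magic number
pkt = bytes([1, 0x23, 0, 14,                    # Configure-Request, id 0x23, len 14
             1, 4, 0x05, 0xDC,                  # option 1 (MRU) = 1500
             5, 6, 0xDE, 0xAD, 0xBE, 0xEF])     # option 5 (Magic-Number)
print(parse_lcp(pkt))
# -> ('Configure-Request', 35, [('MRU', '05dc'), ('Magic-Number', 'deadbeef')])
```

Being able to read the negotiation at this level is precisely what let us prove to the ISP that our router was alive and talking, even while the link refused to come up.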
In desperation, we called Sangoma to see if they were familiar with this sort of behavior. They were not, but offered to look at the output of a data trace. We collected a few seconds of the failing negotiation sequence and mailed this to Sangoma. Less than four hours later, I received a call from an engineer at Sangoma who told me there was a nebulous portion of the PPP RFC which had been implemented by our ISP's port multiplexor. Best of all, Sangoma had already placed a patch on their FTP server. Fifteen minutes later we were up and running. Although the motherboard did arrive from BigName, we have never gone back. This router sits in storage as a backup to our backup. In looking back at the sequence of events, I am impressed by the following:
We were better equipped with tools to troubleshoot problems than our ISP. Maybe we were just more motivated, but I have to question the integrity of either the technician or the tools when I am interrupted while listing the sequence of LCP packets with “Are you sure the router is powered on and attached to the CSU/DSU?”
We were able to get a patch in less than a day.
We were able to turn an outage of at least 48 hours into less than 30, and it would have been even less than that if we had been quicker to consider using the Linux router. (In a production environment that strives to have 99.5% availability, you have 43.8 hours a year for maintenance and downtime.)
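The 43.8-hour figure is just the complement of the availability target applied to an 8,760-hour year. A quick sketch of the arithmetic, for a few common targets:

```python
# Annual downtime budget implied by an availability target
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(availability_pct):
    """Hours per year the system may be down at a given availability."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for target in (99.5, 99.9, 99.99):
    print("%.2f%% -> %.1f h/year" % (target, downtime_hours(target)))
# 99.50% -> 43.8 h/year
# 99.90% -> 8.8 h/year
# 99.99% -> 0.9 h/year
```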