Upgrading Linux Over the Internet
As a business offering software internationalization services, we operate a small office in western Massachusetts as well as a small sister company in Taipei, Taiwan. We also need to support a distributed software development environment for engineers working remotely. While our bandwidth demands are not great, we do need reliable e-mail, web, news and FTP services. The connection exists primarily to give the office access to the Internet, but it must also be available 24 hours a day, seven days a week for external access by remote users.
The network consists of a variety of UNIX workstations and PCs running Windows 95 and Windows NT, used for software development and support as well as the usual office applications. We use Linux running on Intel-based PCs as our network servers because it provides one of the most cost-effective small business network server solutions. Since the network is not directly connected to the Internet, it uses a private class C network number from the 192.168.*.* block reserved for private networks.
Our internal Linux server, with a Pentium 133MHz processor, an Adaptec 2940 SCSI card, a bunch of SCSI drives and a 4mm DAT tape drive, provides the backbone of our computing infrastructure. As a mail hub, it provides POP3 and SMTP support for mail-client applications running on the network. By running Samba, it acts as a network file server for Windows-based PCs. Finally, it provides name resolution services using BIND.
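Samba's file serving is driven by its smb.conf file. A minimal sketch of a share for the Windows PCs might look like the following; the workgroup name, share name and path here are illustrative, not our actual configuration:

```
[global]
   workgroup = OURGROUP         ; illustrative workgroup name
   security = user              ; require a valid username and password

[projects]
   comment = Shared project files
   path = /home/projects        ; illustrative path on the server
   read only = no
   browseable = yes
```

With a stanza like this in place, the share appears in the Network Neighborhood of the Windows machines once they authenticate against the server.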
The second Linux server in this “dynamic duo”, an old 486 PC with a 500MB hard disk and a monochrome monitor, is our external gateway machine. It connects us to the Internet through a persistent PPP connection with a static IP address over a 28.8K dial-up phone line to a local Internet service provider. This machine also acts as a dial-in server and as a firewall. It provides an e-mail relay and spam filter to and from the internal mail hub, and it runs HTTP, FTP and NNTP proxy services so internal users can reach these Internet resources.
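A persistent dial-up link like this is typically set up through pppd's options file (paired with a chat script to dial the modem). A hedged sketch of the relevant options follows; the serial device and speed are placeholders, not our actual settings:

```
# /etc/ppp/options -- illustrative fragment
/dev/ttyS1 38400      # serial port and line speed (placeholders)
crtscts               # use hardware flow control on the modem line
defaultroute          # send Internet-bound traffic over the PPP link
persist               # redial and reconnect if the line drops
```

The persist option is what keeps a nominally dial-up line behaving like a permanent connection: pppd re-establishes the link whenever it goes down.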
Both Linux machines were running Debian version 1.3. On an Internet firewall machine, you want to have precise control over what software is loaded on the machine. You want the minimum necessary to do the job, no more. Since the machine was to be remotely administered, it was even more important that it be easy to upgrade individual packages as necessary without having to do cold installs for new OS versions. Debian's dselect/dpkg system of handling software packages is ideal in this situation. We could easily select the software required to run the system, knowing that all prerequisite packages were included. Plus, Debian's large collection of software packages included almost everything we needed in its convenient dpkg format.
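For remote administration, individual packages can be inspected, installed and removed with dpkg alone, without a full dselect session. The commands below are an illustrative transcript; the package names and version numbers are examples, not a record of our actual session:

```
dpkg -l 'ssh*'             # list matching packages and their install status
dpkg -i ssh_1.2.20-1.deb   # install or upgrade a single package file
dpkg --remove talkd        # remove a package, keeping its config files
dpkg --purge talkd         # remove a package and its config files
```

Being able to add or remove one package at a time over a remote login, with dependencies checked for us, is exactly what made Debian attractive for an unattended gateway.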
Debian Linux can be downloaded for free from http://www.debian.org/ or a host of mirror sites. In our case, we purchased a CD-ROM from Linux Software Labs, which also made it easy to add a contribution to the Debian project, whose work we greatly appreciate.
The Taipei office used a Linux gateway to connect to the Internet, but the configuration was quite different. The Taiwanese ISP had issued us a block of class C addresses and advertised a route to it. The gateway machine was running a publicly accessible FTP server, HTTP server and mail hub, as well as being the primary public name server for our domain in Taiwan—all using a very old version of Caldera Linux.
When the operating systems on the server machines in the U.S. office were upgraded to take advantage of the new features in Linux 2.0, it seemed an ideal time to upgrade the systems in Taiwan, as well as reconfigure the network to more closely match the one in the U.S. office.
While the project seemed straightforward enough, the problem was that the work had to be done from ten thousand miles away across the Pacific Ocean using the Internet.
Upgrading Linux boxes remotely, especially across the ocean, requires some advance planning. Some of the issues we had to deal with were:
Which Linux to use?
What were the security concerns?
Were we going to set up both a private and public side network?
Our answer to the first question was to stay with Debian Linux version 1.3. It was the same version we were running in Massachusetts, so we could essentially install a copy of what was on the U.S. system, reconfigure it for the different names and addresses in Taiwan, and be all set.
Since the upgrade was to be done across the Internet, security was a major concern. We needed a secure connection from the U.S. to Taiwan so that logins and passwords would not be revealed to Internet eavesdroppers and Ethernet sniffers; thus, we chose the Secure Shell (SSH) package. Due to U.S. export restrictions, we could not just upload the software from Massachusetts, so we downloaded the source for the SSH package from a Taiwanese FTP site to the Linux machine in the Taipei office. We then compiled and installed it, so the install/upgrade could proceed in a secure fashion.
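The build itself followed the usual source-installation steps of the era. This is an illustrative sequence, with the version number and hostname made up for the example:

```
# Build and install SSH from source on the Taipei gateway
# (version number and hostname are illustrative)
gzip -dc ssh-1.2.20.tar.gz | tar xf -
cd ssh-1.2.20
./configure
make
make install        # installs sshd, ssh and scp under /usr/local

# start the daemon, then log in securely from the U.S. office:
sshd
ssh -l admin gateway.example.com.tw
```

Once sshd was running on the Taipei gateway, every subsequent login, file copy and administrative command for the upgrade traveled over an encrypted channel.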
While our U.S. setup is required to service only an internal network, our Taiwanese operation decided they needed to set up an area to allow public Web and FTP access. To do this without compromising security for the internal network, things had to be set up a bit differently.
Taiwan's block of class C addresses, assigned by its ISP, had been used by both the internal machines and the firewall. We designed a new layout in which these addresses form a publicly accessible network for the public HTTP and FTP servers. The rest of the machines were connected to a private network, once again using addresses from the 192.168.*.* block as in the U.S. office. The firewall machine was then given a second Ethernet interface, so that one interface served the publicly accessible network, alongside the outside PPP connection, and the other served the private network. We then used the IP firewalling capabilities of the Linux kernel to keep network traffic where it belonged.
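With a 2.0 kernel, the filtering between these interfaces is configured with ipfwadm. The following is a minimal sketch of forwarding rules for a layout like ours, not our actual rule set; the public network number and interface name are hypothetical:

```
# Default policy: refuse to forward anything not explicitly allowed
ipfwadm -F -p deny

# Forward traffic between the public servers and the outside world
# (203.0.113.0/24 stands in for the ISP-assigned class C block)
ipfwadm -F -a accept -S 203.0.113.0/24 -D 0.0.0.0/0
ipfwadm -F -a accept -S 0.0.0.0/0 -D 203.0.113.0/24

# Never forward private 192.168.*.* addresses out the PPP link
ipfwadm -F -a deny -S 192.168.1.0/24 -D 0.0.0.0/0 -W ppp0
```

The internal machines instead reach the Internet through the proxy and relay services running on the gateway itself, so their private addresses never need to appear on the outside.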