Linux in the Real World
When I came to SSC (publishers of Linux Journal), I was told the first thing I had to do was learn the computer system. Having never been exposed to Unix, I set out to discover as much as I could. Coming from an MS-Windows environment, I had a lot to learn. The more I learned about the system we use, the more questions I asked. Here is what I found out.
The first thing I noticed was the multi-tasking capabilities of Linux (I'm not even going to get into Win95). Everyone at SSC has a Linux system (workstation) at their desk, which they log into every morning. In addition, there are two non-Linux systems in the office: a Windows for Workgroups system used for graphics and magazine layout and a Unix System V, Release 4.2 system used to run the Progress database, which has not yet been ported to Linux.
Once logged onto their local system, users can perform tasks locally (such as reading electronic mail) or access the other computers via rlogin, telnet, ftp, and so on.
All of the workstations are linked via a thin Ethernet backbone, except for a few which are connected via twisted pair Ethernet to a twisted pair hub, which is then connected to the thinnet backbone.
The main backbone ends at the Orion Firewall System, which sits between the internal network and a second, externally visible network. That external network connects to the Internet through a Xyplex router and a CSU/DSU (also known as a “digital modem”) over a T1 Frame Relay connection to our Internet Service Provider, Alternate Access, Inc. Our Web server, www.ssc.com, also sits on this externally visible network, outside our firewall.
There is a third network in the office. The Windows for Workgroups (WfW) machine is on it, along with one Linux system that is also connected to the regular internal network. This setup keeps the heavy traffic between the WfW system and the Linux system (which drives the Imagesetter) from bogging down the main network.
Often, in a multi-user environment like ours, every computer has its own password file, local to that system. If users want to change their password, they have to log into every system individually to make the change office-wide. All of our systems instead use NIS (Network Information Service), which manages the password and group files, access permissions, and host address information centrally on a single server. NIS distributes a single master password file to all the systems transparently, and since the network runs NFS (the Network File System), files can be accessed between systems easily. This is easier for both the user and the administrator.
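On a typical Linux NIS client of that era, the local password file simply ends with a “+” entry that splices in the central NIS map. A minimal sketch (the accounts and paths here are illustrative, not SSC's actual files):

```
# /etc/passwd on an NIS client: local accounts first,
# then a compat entry that pulls in the central NIS passwd map
root:x:0:0:root:/root:/bin/bash
+::::::
```

With an entry like this in place, a user runs yppasswd once, and the change reaches every workstation as soon as the updated map is pushed from the master server.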
SSC uses sendmail as its mail daemon to monitor and manage the delivery of all electronic mail messages. Sendmail is the de facto standard Mail Transfer Agent (MTA) for most complicated networks. Although it is not easy to configure, it is the most configurable and most flexible of all the mail daemons available. It determines whether each e-mail address is local or remote, delivers local e-mail locally, and sends remote e-mail to remote systems via the Internet (using SMTP, the Simple Mail Transfer Protocol) or UUCP (described later).
All outgoing mail at SSC is routed through a single workstation for delivery via sendmail. This centralizes the e-mail system so there is only one log file, one daemon, one thing to break and one thing to fix. Incoming mail is queued on the Web server outside the Orion firewall system by smap, a secure mail queue program. The smap client implements a minimal version of SMTP: it acts like a normal mail daemon, accepting messages over the network and writing them to disk, where its companion, smapd, later picks them up and hands them to sendmail for delivery to the real internal hub. Like anonymous ftp, smap is designed to run under chroot, and it also runs as a non-privileged process, avoiding the security risks of privileged mailers listening on the network. Sendmail still runs, but only when it's told to, instead of all the time.
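As a sketch of how such a setup is typically wired together (the paths, user ID, and spool directory here are assumptions, not SSC's actual configuration), smap is started from inetd in place of sendmail, with its behavior controlled by the firewall toolkit's permission table:

```
# /etc/inetd.conf -- hand port 25 to smap instead of sendmail
smtp  stream  tcp  nowait  root  /usr/local/etc/smap  smap

# netperm-table -- run unprivileged, chroot'ed into the spool area
smap, smapd:  userid 32767
smap, smapd:  directory /var/spool/smap
smapd:        executable /usr/local/etc/smapd
smapd:        sendmail /usr/sbin/sendmail
```

The key point of the design is visible in the second fragment: both programs drop privileges and are confined to the spool directory, so a compromise of the network-facing listener never exposes sendmail's privileges directly.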
UUCP (Unix-to-Unix copy) mail is also forwarded to the sendmail hub via smap. Sendmail then separates the regular SMTP mail from the UUCP mail: the SMTP mail is delivered locally, and the UUCP mail is spooled in directories where the UUCP system can find it. Since the modems that deliver the spooled UUCP mail to local recipients are on the Web server, which is on the externally visible network, these files are transferred from the internal system to the Web server with tar and scp, a secure version of the rcp (remote copy) command.
tar (tape archive) and scp (secure copy) are used every three hours to transfer the mail automatically to the Web server. The mail is then deleted from the local workstation to avoid duplication.
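A minimal sketch of how such a periodic transfer might be scheduled (the spool paths and target directory are illustrative, not SSC's actual settings; only the hostname www.ssc.com comes from the article):

```
# /etc/crontab sketch -- every three hours, bundle the spooled UUCP
# mail, copy it securely to the Web server, and delete the originals
# only if the copy succeeded, so no mail is lost on failure
0 */3 * * * root cd /var/spool/uucp-out && tar cf /tmp/uucp.tar . && scp /tmp/uucp.tar www.ssc.com:/var/spool/uucp-in/ && rm -f /var/spool/uucp-out/*
```

Chaining the steps with && is the important detail: the local spool is cleared only after both the archive and the secure copy have completed without error.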
By handling mail this way, only one machine, the Web server, needs to have a modem and access/exposure to the outside world, and the line doesn't need to remain open.
PPP (Point-to-Point Protocol) service on the Web server gives employees remote dial-in access outside the firewall. Users then access the internal network (and their own desktop workstations, if they wish) using ssh, a secure shell that encrypts all the data sent between the Web server and the internal workstation being accessed. Employees can also use ssh to get from their home computers to the Web server, ensuring a completely secure line, or telnet if they don't have ssh on their home machines.
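An employee's session might look like the following sketch (the internal hostname is hypothetical; the article only names www.ssc.com):

```
home$ ssh www.ssc.com     # first encrypted hop, from home to the Web server
www$  ssh desk            # second hop, through the firewall to the desktop
desk$                     # now working on the internal workstation
```

Every link in the chain is encrypted, so even the dial-in PPP leg never carries internal traffic in the clear.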
All the user home directories at SSC, as well as the local binaries directory, are shared via NFS (Network File System) between all workstations. With the current system, every time users want to read files in their home directory, the files must be transferred across the network to their computers. Soon we will be moving all the user directories from the office file server to each user's own workstation for the following reasons:
Transferring files across the network is much slower than transferring them from the local hard drive, so file access is slower.
Also, the network has a limited amount of bandwidth (the amount of information it can carry at one time), and eliminating unnecessary traffic will speed up the network.
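For comparison, the current shared setup amounts to a pair of configuration fragments like these (the server name and mount options are illustrative):

```
# /etc/exports on the file server: share home dirs and local binaries
/home       (rw)
/usr/local  (ro)

# /etc/fstab on each workstation: mount them at boot over NFS
fileserver:/home       /home       nfs  defaults  0 0
fileserver:/usr/local  /usr/local  nfs  ro        0 0
```

Moving /home onto each workstation removes the first fstab line (and its traffic) entirely, while the read-only binaries directory can remain shared, since it changes rarely.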