Linux in the Real World
When I came to SSC (publishers of Linux Journal), I was told the first thing I had to do was learn the computer system. Having never been exposed to Unix, I set out to discover as much as I could. Coming from an MS-Windows environment, I had a lot to learn. The more I learned about the system we use, the more questions I asked. Here is what I found out.
The first thing I noticed was the multi-tasking capabilities of Linux (I'm not even going to get into Win95). Everyone at SSC has a Linux system (workstation) at their desk, which they log into every morning. In addition, there are two non-Linux systems in the office: a Windows for Workgroups system used for graphics and magazine layout and a Unix System V, Release 4.2 system used to run the Progress database, which has not yet been ported to Linux.
Once logged onto their local system, users can perform tasks locally (such as reading electronic mail) or access the other computers via rlogin, telnet, ftp, and so on.
All of the workstations are linked via a thin Ethernet backbone, except for a few which are connected via twisted pair Ethernet to a twisted pair hub, which is then connected to the thinnet backbone.
The main backbone ends at the Orion Firewall System that sits between the internal network and a second, externally visible network that connects to the Internet through a Xyplex router and a CSU/DSU (also known as a “digital modem”) over a T1 Frame Relay connection to our Internet Service Provider, Alternate Access, Inc. Our Web server, www.ssc.com, is also on this externally visible network, outside our firewall.
There is a third network in the office. The Windows for Workgroups (WfW) machine is on this network, along with one Linux system that is also connected to the regular internal network. This setup keeps the large amount of traffic between the WfW system and the Linux system (which drives the Imagesetter) from bogging down the main network.
Often, in a multi-user environment like ours, each computer has its own password file, local to that system. If users want to change their password, they have to log into every system individually to make the change office-wide. Instead, all of our systems use NIS (Network Information Service) to centrally manage the password and group files, access permissions, and host address information on a single server. NIS distributes a single master password file to all the systems transparently. Since the network also runs NFS (the Network File System), files can be shared between systems easily. This is easier for both the user and the administrator.
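The effect of NIS and NFS on a workstation can be seen in a short sketch. The user name, map names, and server name below are purely illustrative assumptions, not SSC's actual configuration:

```shell
# Query the central NIS maps instead of the local files
# ("jsmith" is a hypothetical user):
ypmatch jsmith passwd     # one password entry, served network-wide
ypcat hosts               # host address information, also from the NIS server

# NFS makes shared directories appear local; a typical /etc/fstab
# line on each workstation might read (server name assumed):
#   fileserver:/home   /home   nfs   defaults   0 0
```

With entries like these, a password changed once on the NIS master takes effect everywhere, and a home directory mounted from the file server looks the same on every desk.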
SSC uses sendmail as its mail daemon to monitor and manage the delivery of all electronic mail messages. Sendmail is the de facto standard Mail Transfer Agent (MTA) for most complicated networks. Although it is not easy to configure, it is the most configurable and most flexible of all the mail daemons available. It determines whether each e-mail address is local or remote, delivers local e-mail locally, and sends remote e-mail to remote systems via the Internet (using SMTP, the Simple Mail Transfer Protocol) or UUCP (described later).
All outgoing mail at SSC is routed through a single workstation for delivery via sendmail. This centralizes the e-mail system so there is only one log file, one daemon, one thing to break and fix. Incoming mail is queued on the Web server outside the Orion firewall system with smap, a secure mail queue program. Smap acts like a normal mail daemon and queues mail. Then smap calls sendmail to process the queued mail, sending it to the real internal hub for local delivery. The smap client implements a minimal version of SMTP, accepting messages from over the network and writing them to the disk for future delivery by smapd. Like anonymous ftp, smap is designed to run under chroot, except it also runs as a non-privileged process, to overcome the potential security risks presented by privileged mailers running where they can be accessed over a network. Sendmail still runs, but only when it's told to, instead of all the time.
UUCP (Unix-to-Unix Copy) mail is also forwarded to the sendmail hub via smap. Sendmail then sorts the regular SMTP mail from the UUCP-delivered mail. The SMTP mail is delivered locally, and the UUCP mail is spooled in directories where the UUCP system can find it. Since the modems which deliver the spooled UUCP mail to local recipients are on the Web server, which is on the externally visible network, these files are transferred from the internal system to the Web server with tar and scp, a secure version of the rcp (remote copy) command.
tar (tape archive) and scp (secure copy) are used every three hours to transfer the mail automatically to the Web server. The mail is then deleted from the local workstation to avoid duplication.
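A minimal sketch of that three-hour schedule as a crontab entry; the spool paths and destination directory are assumptions, since the article doesn't give them:

```shell
# Every three hours: bundle the spooled UUCP mail with tar, copy the
# archive to the Web server with scp, then delete the local copies to
# avoid duplication. All paths here are hypothetical.
0 */3 * * * tar cf /tmp/uucpmail.tar /var/spool/uucp-queue && scp /tmp/uucpmail.tar www.ssc.com:/var/spool/uucp-in/ && rm -f /var/spool/uucp-queue/*
```

Chaining the three commands with `&&` means the local mail is deleted only after the archive has been built and copied successfully.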
By handling mail this way, only one machine, the Web server, needs to have a modem and access/exposure to the outside world, and the line doesn't need to remain open.
PPP (Point-to-Point Protocol) service on the Web server allows employees remote dial-in access from outside the firewall. Users then access the internal network (and their own desktop workstations, if they wish) using ssh, a secure shell that encrypts all the data sent between the Web server and the internal workstation being accessed. Employees can also use ssh to get from their home computers to the Web server, ensuring a completely secure line, or telnet if they don't have ssh on their home machines.
All the user home directories at SSC, as well as the local binaries directory, are shared via NFS (Network File System) between all workstations. With the current system, every time users want to read files in their home directory, the files must be transferred across the network to their computers. Soon we will be moving all the user directories from the office file server to each user's own workstation for the following reasons:
Transferring files across the network is much slower than reading them from a local hard drive, so moving the files will make file access faster.
Also, the network has a limited amount of bandwidth (the amount of information it can carry at one time), and eliminating this unnecessary traffic will speed up the network for everything else.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
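That find-plus-grep combination can be written in one line. The directory tree and log contents below are made up for illustration (using /tmp/logdemo rather than a real /home):

```shell
# Build a small sample tree with two log files.
mkdir -p /tmp/logdemo/alice /tmp/logdemo/bob
echo "ERROR: disk full" > /tmp/logdemo/alice/app.log
echo "all quiet" > /tmp/logdemo/bob/app.log

# Find every .log file under the tree, and have grep -l print the
# names of only those files containing the entry "ERROR".
find /tmp/logdemo -name '*.log' -exec grep -l 'ERROR' {} \;
```

Only /tmp/logdemo/alice/app.log is printed, since grep's -l flag lists matching files rather than matching lines.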
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide