Linux In The Real World: Linux Serving IKEA
It started by coincidence early last year. We had repeated problems with a system that sent files to a business partner using a leased line and VMS-based Kermit. A consultant suggested that we install a PC with UUCP as a replacement. We agreed, the PC arrived, and guess what operating system I found on the disk. Right, it was Linux 1.0, our—and my—first experience with a PC-based Unix system.
For those who don't know IKEA, here is an overview of my company. IKEA is a retailer of furniture and home interior stuff. We have sites in nearly all European countries, the US and Canada, and the Far East, including Australia. These countries are grouped into organizational units. Our unit, Northern Europe (at the moment of this writing), is responsible for managing the stores, warehouses and offices in 7 European countries. Our headquarters is located in Helsingborg, in southern Sweden, and this is where I work.
From here, there are network connections to all “our” countries, as well as to the other IKEA branches. We also have a couple of DEC Alpha systems as central “mainframes” for common database applications. These systems are critical for our survival (and so are the network connections to these systems).
Like many other companies, we are rapidly evolving our IT structure toward more distributed systems. This means that communication lines that previously could be cut for a day or more without any serious problems are now becoming absolutely essential for business.
In late 1994 and early 1995, we at IKEA Northern Europe restructured our European network to have faster links, new routers, LANs, etc. From being a more or less pure DECNET network, we converted more and more to TCP/IP, since this is the common denominator with our sister organizations worldwide. While we planned for this big task, we also found that we had to change our IP addressing and naming in order to avoid havoc. Our decision was to have a tree-structured DNS [footnote:Domain Name System] system with the root at our head office in Helsingborg, Sweden, and secondary DNS servers at each site. The DNS servers would serve approximately 75 VMS systems, 20 AIX systems, and hundreds of PCs and NT servers.
One option was to use an existing VMS or AIX system as primary server, but since we demand 100% uptime, this was not very attractive. Not that VMS or AIX systems are unreliable, but they are used for other tasks, which means that they could be down for many unrelated reasons. We saw that we needed a dedicated system to avoid problems.
This was one year after our first Linux system; since I had been running Linux both at home and at work for over a year and was convinced of its usefulness and stability, I suggested Linux. My boss listened to my arguments, and agreed to give it a try.
We bought two 486/33 machines with 8 MB RAM each, and I started to install 1.1.91 on a rainy day in March. After two days, we had BIND 4.9.3 [footnote:The nameserver program] up and running, with a second PC as a backup system should the first PC fail. After running internally for a week, we decided to start the transformation. The whole tech department spent a weekend changing more than 300 TCP/IP systems all over Europe. On Monday morning, we had over 50 secondary DNS servers (mostly VMS systems running TCPware) getting their information from CYGNUS, the primary server PC. CYGNUS was also serving approximately 200 IP systems at our headquarters from day one.
For redundancy, all headquarters systems have both “primaries” in their resolver file. Empirical tests (pulling out the cable) have shown that this works flawlessly. Also, if CYGNUS breaks completely, we can replace it with VIRGO, the second PC. To make this a bit easier, we rcp all DNS, RCS and passwd files via cron at regular intervals. All CYGNUS-specific files (rc.inet and such) are also stored safely on VIRGO, so it should be fairly easy to switch identities if needed.
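As a rough sketch, the cron-driven copying might look like the following crontab entries on CYGNUS (the paths, times and the RCS directory location are assumptions for illustration, not taken from the actual setup):

```
# Hypothetical crontab on CYGNUS: at 02:00 every night, copy the
# zone files, the RCS history and the passwd file over to VIRGO
0 2 * * * rcp /etc/named.* /etc/passwd virgo:/etc/
5 2 * * * rcp -r /var/named/RCS virgo:/var/named/
```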
As I mentioned, we use GNU RCS to maintain our DNS files. We have set up one master file for all IP systems and one reverse-translation file for each country. Everyone allowed to edit these files has a private account, which gives us a good overview of who did what and the ability to reverse their edits in case something goes wrong.
IKEA is using the RFC-recommended address 10.x.x.x internally. Each country has its earlier DECNET area as the second number, so for example, Belgium has 10.14.x.x since they were in DECNET area 14. The third part represents the location, so Antwerp has 10.14.5.x. This leaves 253 possible hosts for each LAN or site, and enables us to use class C subnet masking for all countries. Again, the exception is our headquarters, which has more than 253 hosts. We have allocated a special “area”, 62, and use a class B subnet mask at headquarters.
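To make the scheme concrete, here is how a few example addresses break down under this plan (Antwerp and the headquarters "area" are from the text; the netmasks follow from the class C versus class B split described):

```
10.14.0.0     Belgium        (second octet = old DECNET area 14)
10.14.5.0     Antwerp LAN    (third octet = location), netmask 255.255.255.0
10.62.0.0     Headquarters   ("area" 62),              netmask 255.255.0.0
```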
The master file is called named.neurope, and the reverse files are called named.xx, where xx is the ISO code for each country (be, gb, dk, se, nl, and no). We also have a reverse file for our headquarters, named.hbg, since this is the single largest “domain”. As an example, Figure 1 contains an edited extract from this file.
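Figure 1 itself is not reproduced here, but entries in a BIND 4 master file and a per-country reverse file of this kind typically look like the following sketch (the host names, the addresses and the ikea.com domain are invented for illustration):

```
; named.neurope -- forward entries (illustrative)
cygnus       IN  A    10.62.1.1
virgo        IN  A    10.62.1.2
antwerp-gw   IN  A    10.14.5.1

; named.be -- reverse entries for Belgium (illustrative)
1.5.14.10.in-addr.arpa.   IN  PTR  antwerp-gw.ikea.com.
```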
The cache (named.ca) file has entries for our central DNS system (where the Internet connection is) and for our sister organizations' primary DNS servers.
The boot (named.boot) file has a “forwarders” record which routes all unknown lookups to our Internet-connected DNS server, as well as a record that states that we are primary server for our organization.
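In BIND 4 syntax, a named.boot along those lines might read as follows (the domain name, file names and forwarder address are assumptions, not the real configuration):

```
; /etc/named.boot on CYGNUS (illustrative)
directory    /etc/named
primary      ikea.com              named.neurope
primary      14.10.in-addr.arpa    named.be
cache        .                     named.ca
forwarders   10.62.1.10
```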
Our secondary servers (40 VMS hosts) have corresponding files with pointers to CYGNUS so they know where to “zone-transfer” files and updates from. These updates take place at boot and every fourth hour.
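The four-hour refresh cycle is controlled by the SOA record at the top of the master file; a sketch of such a record follows (the serial and the other timer values are illustrative, and only the 14400-second refresh corresponds to the four-hour interval mentioned above):

```
@  IN  SOA  cygnus.ikea.com. hostmaster.ikea.com. (
       1996032601  ; serial, date-based
       14400       ; refresh: secondaries poll every 4 hours
       3600        ; retry after 1 hour on a failed transfer
       604800      ; expire after 1 week without contact
       86400 )     ; minimum TTL: 1 day
```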
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to always seem to have the right tool for the job.
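For instance, the .log example just mentioned can be strung together in a single command; the demo directory and the search string "ERROR" below are invented for illustration:

```shell
# Set up a tiny demo tree (stands in for /home)
mkdir -p /tmp/demo_home/user1
echo "ERROR: disk full" > /tmp/demo_home/user1/app.log
echo "all fine here" > /tmp/demo_home/user1/notes.txt

# Find every .log file and search each one for "ERROR";
# -H prefixes each match with the file it came from
find /tmp/demo_home -name '*.log' -exec grep -H 'ERROR' {} +
# prints: /tmp/demo_home/user1/app.log:ERROR: disk full
```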
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide