WIX: a Distributed Internet Exchange
Wellington, the capital city of New Zealand, has one of the oldest, and possibly the largest, distributed Internet exchanges in the world. It is built on top of the Citylink public LAN infrastructure. In this article, we look at how it all started and the part Linux plays in its day-to-day operation.
Back in the 1980s, Richard Naylor, then IT manager for the Wellington City Council, was stuck with a common but difficult problem—an over-utilized VAX cluster at one data centre and an under-utilized one at another centre across town. He came up with an idea that was unique for the time: run a fibre optic cable between the two council buildings and share processing resources that way. The idea was so new that a jointer had to be flown down from Auckland to do the splices on the now-ancient slotted core cable.
Naylor set up a 10Mb/s Ethernet connection using DECbridges, and the network itself ran DECnet over 10BASE5 with a little 10BASE2. Terminals comprised most of the load at the time, as PCs weren't yet networked. The overall idea worked, and the system was upgraded to 10BASE-T in 1989, with IP added.
In the early 1990s, Naylor's idea caught the attention of then Mayor Fran Wilde, who was intrigued by what Naylor and colleague Charles Bagnall had been up to in what was called “their spare time”. Mayor Wilde had attended a local secondary school production and noticed that it was being streamed live on the Internet by an off-duty Naylor.
After talking with Naylor and Bagnall, Wilde came to understand what was possible. Their design and the use of fibre could be used to provide a broadband infrastructure for the city, and that plan became a key part of a 25-year strategy for Wellington City.
Soon after, 17 investors, including the council, came up with $5,000 each, and three drums of fibre optic cable were run from one end of Wellington City to the other. The cable was run along overhead trolley bus electric support cables during November and December 1996. The primary aim was to provide an infrastructure to enable greater growth within the local business community.
In the first few years, Citylink expanded at a rate of 100% each year—doubling the number of connected buildings. At one point, the team connected 50 new buildings in ten weeks.
In 1997, Naylor left the council and helped set up Citylink as a separate company. The first customers were government departments and financial businesses, as central government is located in Wellington. Later customers included publishers and IT companies. ISPs were there from the start too, with many using the fibre infrastructure as a way of providing genuine broadband to city customers at a low cost. I should note here that Citylink does not consider domestic 1Mb/s and 2Mb/s connections to be broadband; it prefers to start at 10Mb/s. Citylink can provide 10Mb/s, 100Mb/s or 1,000Mb/s connections anywhere on the fibre infrastructure.
Nearly ten years after the initial cable was positioned, around 50 kilometres (30 miles) of cable now exists within the central business district of Wellington. More than 300 buildings are connected.
The Citylink fibre network originally was interconnected with a bunch of oddball switches and hubs from various vendors. Over time, these have been phased out and replaced with Cisco switches, mainly 35xx and 29xx, on a gigabit core.
Citylink now offers two major services, dark fibre and Ethernet. Dark fibre services allow point-to-point connectivity between buildings. Each customer has sole use of his or her own strand of fibre and can connect whatever gear is required on either end. Dark fibre runs up to 1Gb/s at present. The Ethernet services are the most widely used and allow customers to connect to a city-wide “shared Ethernet”.
INL, a newspaper publisher, was the first Citylink customer. The company used dark fibre to link its corporate office to its Wellington office, and it used Ethernet to link back to the Wellington City Council offices. From there, a microwave link to Victoria University of Wellington provided access to the public Internet. At that point, Citylink was involved in providing only basic layer 2, Ethernet infrastructure.
The Internet exchange runs on top of the fibre Ethernet infrastructure. Before we look at this in detail, let's briefly run through a few routing basics. A router on a network receives packets of data, each with its own destination address. The router checks its internal list of address prefixes—the routing table—to see if there is a route or path to that destination, choosing the most specific (longest-prefix) match when more than one applies. If there is a match, the packet is forwarded to the corresponding next hop. This might be the final destination, or it could be a gateway—another router that repeats the process and sends the packet on once again.
A traditional Internet exchange has participants connecting to a single location, and each participant has a router. One side of the router is connected to all of the other routers by way of a common Ethernet backbone. The other side is connected to the participant's own network infrastructure (Figure 1).
Each participant has a list of IP addresses that represent the networks it can access. Each router has its own IP address on the common backbone. These addresses often are private to the outside world and act as gateways to each participant's network.
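To make the lookup concrete, here is a minimal sketch of how a routing table at such an exchange might be modelled. The prefixes and gateway addresses below are invented for illustration (drawn from the documentation-reserved ranges), not Citylink's real routes; the point is simply that each destination network a peer advertises maps to that peer's address on the shared backbone, and the router picks the longest-prefix match.

```python
import ipaddress

# Hypothetical routing table: destination network -> next-hop gateway
# on the common Ethernet backbone. All addresses are illustrative.
routing_table = {
    ipaddress.ip_network("192.0.2.0/24"):    "203.0.113.1",    # peer A's gateway
    ipaddress.ip_network("198.51.100.0/24"): "203.0.113.2",    # peer B's gateway
    ipaddress.ip_network("0.0.0.0/0"):       "203.0.113.254",  # default route
}

def next_hop(dest: str) -> str:
    """Return the gateway for the most specific (longest-prefix) match."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("192.0.2.42"))  # matches peer A's /24
print(next_hop("8.8.8.8"))     # no specific route, falls through to default
```

A real exchange router learns these entries dynamically (typically via BGP sessions with each peer) rather than from a static dictionary, but the lookup logic is the same.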
As a point of interest, the first Citylink routers in general use were based on standard PC components with LS-120 floppy or disk-on-chip boot mechanisms running the Linux Router Project (LRP). These were deployed only for customers using wireless access points.