Last summer, the investment company I work for decided to build a disaster recovery site. This location, 40 miles from New York City, would mirror our downtown Manhattan operation. We decided to use Linux as much as possible for this project, for the following reasons:
We are primarily a Linux operation already, so we could draw on our existing experience.
We could customize the configuration as much as we wanted, because everything would be open source.
We hoped Linux-based solutions would be less expensive than alternatives such as Cisco's.
In this article, I focus on our use of Linux in wide-area network (WAN) routers. I define a WAN router as a system that connects to both wide-area links, such as T1 or T3 lines, and local-area networks, such as 100baseT, and forwards packets between the two networks.
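On Linux, the core of such a router is simply enabling packet forwarding between the WAN and LAN interfaces and setting up routes. A minimal sketch follows; the interface names and addresses are hypothetical, not our actual configuration:

```shell
# Enable IPv4 packet forwarding (add net.ipv4.ip_forward = 1 to
# /etc/sysctl.conf to make this persist across reboots).
sysctl -w net.ipv4.ip_forward=1

# Hypothetical names: hs0 is the WAN card's interface, eth0 the 100baseT LAN.
# Address each side of the router, then route the remote LAN over the WAN link.
ip addr add 10.0.0.1/30 dev hs0
ip addr add 192.168.10.1/24 dev eth0
ip route add 192.168.20.0/24 via 10.0.0.2
```

With forwarding on and routes in place, the kernel itself moves packets between the T1/T3 side and the Ethernet side; no user-space routing daemon is strictly required for a static setup like this.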
We purchased dedicated connections because this is a disaster recovery site and we need the connections to be as reliable as possible. Based on our calculations, one T3 (45Mb/sec) and four T1 (1.544Mb/sec each) lines would provide sufficient bandwidth for our operations. Ultimately, we decided to use the T3 link as the primary connection and leave the T1s as a bonded 5.7Mb/sec backup link.
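One straightforward way to express a primary/backup arrangement like this on a Linux router is with route metrics, so traffic prefers the T3 and falls back to the bonded T1 path. A sketch, with hypothetical device names and addresses:

```shell
# Hypothetical names: hs0 = T3 interface, bond0 = the bonded T1 bundle.
# The lower-metric route over the T3 is preferred; if that route is
# withdrawn, the higher-metric route over the T1 bundle takes over.
ip route add 192.168.20.0/24 via 10.1.1.2 dev hs0   metric 10
ip route add 192.168.20.0/24 via 10.2.2.2 dev bond0 metric 100
```

Note that a static route is not withdrawn automatically when the far end of a circuit dies, so in practice some form of link monitoring is needed to pull the T3 route and let the backup path carry traffic.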
The choice of WAN connectivity determined our network design. For redundancy, we installed two WAN routers at each site. The routers are identical and contain hardware to connect to both the T1 and T3 links. With the use of splitter hardware, we hoped to connect all the WAN links to all the routers, as shown in Figure 1. However, that design ultimately turned out to be extremely difficult to implement, due to technical issues I discuss below.
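On the LAN side, a common way to make a pair of routers like this transparent to the hosts behind them is VRRP, for example with keepalived: both routers share a virtual gateway address, and the backup takes it over if the master fails. A minimal configuration sketch (all names and addresses are illustrative, not from our setup):

```
vrrp_instance LAN_GW {
    state MASTER          # the peer router runs state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100          # peer uses a lower priority, e.g. 50
    advert_int 1
    virtual_ipaddress {
        192.168.10.254    # shared gateway address for LAN hosts
    }
}
```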
In addition to the WAN links, we also connected the remote site to the Internet through the hosting company backbone. We operated on the principle that more connectivity was better, and this turned out to be useful when we were designing the network. There's nothing like accidentally bringing down your T3 with a mistyped command to make you appreciate a back door to your routers over the Internet.
Our space for servers at the hosting company was limited to one standard rack. This put space at a premium, because we needed to install a lot of servers. Thus, we decided to use 1U systems for the WAN routers. This was a difficult decision to make, as hardware options are limited in that form factor. In retrospect, it would have been much easier to use 2U systems for the WAN routers.
The next step was the selection of T1 and T3 interface cards. The main choice here is whether to use a card with an integrated channel service unit/data service unit (CSU/DSU) that connects directly to the incoming WAN circuit or a card with a high-speed serial connection along with a standalone CSU/DSU. Given our space constraints, an integrated card made the most sense. In previous WAN installations, we used Cisco 2620 router boxes with T1 cards installed. However, that was not appropriate for this project because we wanted to connect multiple T1 and T3 lines.
After much searching, the only vendor we found that could supply both T3 and multiport T1 cards was SBE, Inc. The market for these cards is small and the number of vendors is limited. My suggestion for finding WAN cards is to start asking tech support a lot of questions and see how they respond. Also, carefully look over the driver and hardware specs before committing to a particular vendor.
With the T3 and four T1 cards from SBE, we would require a system with two free full-height, half-length PCI slots. We decided on Tyan S5102 motherboards with a single Pentium 4 Xeon 2.4GHz CPU. For memory, we used 256MB of ECC RAM for maximum reliability.
To cut down on the chance of system failure, we used Flash-based IDE devices. We found a device from SimpleTech that connects and operates like a conventional hard drive. We decided on a 256MB device as we thought that would be enough room for Fedora Core 1 to operate.
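Flash cells tolerate a limited number of write cycles, so it pays to keep routine writes off the device. One common approach is to mount the root filesystem with noatime and move volatile directories to tmpfs; an illustrative /etc/fstab sketch (device names and sizes are hypothetical):

```
# /etc/fstab sketch -- illustrative only.
# noatime avoids a flash write on every file read.
/dev/hda1   /          ext3    defaults,noatime   1 1
# Volatile data lives in RAM; note that anything here is lost on reboot,
# so logs worth keeping should also go to a remote syslog server.
none        /tmp       tmpfs   size=32m           0 0
none        /var/log   tmpfs   size=16m           0 0
```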
The complete computer systems (minus the WAN cards) were purchased from a white box system supplier. This proved troublesome, though, as the supplier was not able to produce four completely identical systems. The systems had variations in CPU fan manufacturers and memory speeds.
One area where the system supplier was helpful was in finding the right case. Only one of the numerous system vendors I contacted could supply a motherboard and case combination that could hold two full-height PCI cards. We had hoped to use a stock system from a supplier such as Dell or IBM, but none of the big names could give us a system that matched all our criteria.