Multilink PPP: One Big Virtual WAN Pipe
Network management is a little like alchemy: take a dash or two of ISDN, add some frame relay, throw in a couple of routers, mix them all together, and somehow, some way, the result is bandwidth gold.
Of course, the formula for creating fully interoperable networks is much more complicated. Fortunately, network managers do have access to some tools that can make bandwidth magic a little easier to perform. Two of the most important elements in the technology bag of tricks are the point-to-point protocol (PPP) and its follow-up, the multilink point-to-point protocol (MLPPP).
PPP, a product of the Internet Engineering Task Force (IETF), is the de facto WAN link protocol for connecting clients and servers and for interconnecting routers to form enterprise networks. PPP's main advantage is that, unlike other data link layer protocols, it achieves interoperability between devices by negotiating different configuration options, including link quality, link authentication and network protocols.
Over the years, the IETF has made some significant changes to PPP. But as its name states, PPP is intended for simple point-to-point connections. Now that the enterprise network infrastructure is moving rapidly to digital switched services such as ISDN, frame relay and ATM, PPP is in need of even more changes.
Enter MLPPP, known in IETF circles as RFC (Request for Comments) 1717. MLPPP takes advantage of the ability of switched WAN services to open multiple virtual connections between devices to give users extra bandwidth as needed. With MLPPP, routers and other access devices can combine multiple PPP links connected to various WAN services into one logical data pipe.
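To combine several links into one pipe, MLPPP splits each PPP frame into fragments, prefixes each fragment with a small header carrying begin/end flag bits and a sequence number, and sprays the fragments across the member links for reassembly at the far end. The sketch below models the short (12-bit sequence number) fragment header described in the multilink RFC; it is a simplified illustration, not a full implementation.

```python
import struct

def pack_fragment(seq, payload, first=False, last=False):
    """Build an MLPPP fragment using the short header format:
    a B (begin) bit, an E (end) bit, two reserved zero bits,
    and a 12-bit sequence number, followed by the payload."""
    hdr = (0x8000 if first else 0) | (0x4000 if last else 0) | (seq & 0x0FFF)
    return struct.pack("!H", hdr) + payload

def unpack_fragment(frag):
    """Split a fragment back into its header fields and payload."""
    (hdr,) = struct.unpack("!H", frag[:2])
    return {
        "begin": bool(hdr & 0x8000),
        "end": bool(hdr & 0x4000),
        "seq": hdr & 0x0FFF,
        "payload": frag[2:],
    }

# Split one PPP frame across two member links of the bundle,
# then reassemble in sequence-number order at the receiver.
frame = b"example network-layer packet"
f1 = pack_fragment(0, frame[:14], first=True)
f2 = pack_fragment(1, frame[14:], last=True)
parts = sorted((unpack_fragment(f) for f in (f1, f2)), key=lambda d: d["seq"])
reassembled = b"".join(p["payload"] for p in parts)
```

Because the sequence number spans the whole bundle rather than any one link, the receiver can restore ordering even when fragments arrive over circuits of different speeds.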
The IETF formally approved the MLPPP specification last November. Makers of ISDN routers and access devices have already started using MLPPP to bundle 64Kbps ISDN B channels to deliver more bandwidth. MLPPP also allows network managers to aggregate WAN circuits of different types without requiring major configuration changes to existing router internetworks.
Because MLPPP works over any switched WAN service, it has a wide range of potential uses (see “PPP Plus”). Network managers could deploy MLPPP-equipped devices to create a technology-independent enterprise framework in which the actual WAN services linking two devices would be invisible to end users. Under this model, WAN devices would negotiate bandwidth rules between two directly connected peers, using whatever type of service was available. New digital WAN services such as ATM (asynchronous transfer mode) could be added to the network mix as needed, without making the existing network infrastructure obsolete.
Although it is usually considered a single entity, PPP is actually a group of protocols that together provide an extensive set of network connectivity services. The PPP suite is based on four key design principles: negotiation of configuration options, multi-protocol support, protocol extensibility and WAN service independence.
Negotiation of configuration options: This refers to PPP's ability to establish throughput requirements between two directly connected end systems. In an enterprise network, end systems often differ in terms of buffer requirements, packet-size limits and network protocol-support lists. The physical link that interconnects any two end systems could vary from a low-speed analog line to a high-speed digital connection with varying degrees of line quality.
To cope with all these possibilities, PPP has a suite of standard default settings to handle all common configurations. To establish a link, two communicating devices attempt to use these default settings to find a common ground. Each end of the PPP link describes its capabilities and requirements; the settings are negotiated between the two sides for each option at the link level. These options include data encapsulation formats, packet sizes, link quality and authentication.
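The give-and-take described above can be modeled as a simple request/response exchange: one side sends its desired options, and the peer either acknowledges them or rejects them with suggested values, which the requester then adopts and resends. The following toy sketch (the option name `mru` and the limit values are illustrative assumptions, not part of the PPP specification text) shows the flavor of the negotiation for a maximum-receive-unit option:

```python
def peer_respond(request, limits):
    """Toy model of PPP option negotiation: return ('Configure-Ack', options)
    if every requested option falls within this peer's limits, otherwise
    ('Configure-Nak', hints) with a suggested value for each rejected option."""
    hints = {}
    for opt, val in request.items():
        lo, hi = limits.get(opt, (val, val))
        if not lo <= val <= hi:
            hints[opt] = min(max(val, lo), hi)  # nearest acceptable value
    if not hints:
        return "Configure-Ack", dict(request)
    return "Configure-Nak", hints

request = {"mru": 1500}              # requester proposes a 1500-byte MRU
limits = {"mru": (64, 1492)}         # peer's buffers top out at 1492 bytes
code, result = peer_respond(request, limits)
if code == "Configure-Nak":
    request.update(result)           # adopt the peer's suggestion and retry
    code, result = peer_respond(request, limits)
```

After the retry, both sides have converged on an MRU of 1492 and the link can come up with settings each end can actually support.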
The protocol that negotiates all these options is known as the link control protocol (LCP). The protocol that negotiates the network protocols to be multiplexed over a PPP link is called the network control protocol (NCP); there can be many NCP data streams over a single PPP link. Although PPP's configuration negotiation options also allow end systems to set link peer authentication (a security function) and data compression options, PPP does not dictate the actual algorithms used for security or compression. For security, PPP defines PAP (password authentication protocol) and CHAP (challenge handshake authentication protocol) as common standard authentication methods that may be negotiated, but it also lets users add new authentication algorithms. The same holds true for compression.
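Of the two standard authentication methods, CHAP is the stronger because the shared secret never crosses the link: the authenticator sends a random challenge, and the peer replies with an MD5 digest computed over the message identifier, the secret and the challenge (per RFC 1994). A minimal sketch, with a hypothetical shared secret:

```python
import hashlib
import os

def chap_response(identifier, secret, challenge):
    """CHAP response per RFC 1994: MD5 over the one-byte packet
    identifier, the shared secret, and the challenge value."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Authenticator side: issue a random challenge, then verify the reply.
ident = 1
secret = b"shared-secret"            # hypothetical secret known to both ends
challenge = os.urandom(16)

reply = chap_response(ident, secret, challenge)      # computed by the peer
expected = chap_response(ident, secret, challenge)   # computed locally
# reply == expected, so the peer knows the secret; send CHAP Success
```

Because the challenge changes every time, a captured reply cannot be replayed on a later session, which is the key advantage over PAP's cleartext password exchange.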
Multi-protocol support: PPP's ability to handle multiple network-layer protocols was one of the chief reasons it became a de facto standard. Unlike the serial line IP protocol (SLIP), an earlier serial-link scheme that carries only IP datagrams, PPP works with a range of packet formats including IP, Novell IPX, AppleTalk, DECnet, XNS, Banyan Vines and OSI. Each network-layer protocol is separately configured by the appropriate NCP.
Protocol extensibility: Over the years, the IETF has extended PPP through a number of additional RFCs that define features such as common authentication services, encryption capabilities and compression algorithms. For example, with many WAN technologies, compression algorithms are chosen according to the quality of the link. Different technologies use different compression schemes, introducing multiple layers of compression and decompression into the network. Running PPP compression at the NCP level removes these considerations and uses fewer system resources.
WAN service independence: The initial version of PPP was built expressly to run over HDLC (high-level data link control) networks. Since then, the IETF has added RFCs that enable PPP to work with every major WAN service now in use including ISDN, frame relay, X.25, Sonet and synchronous/asynchronous HDLC framing.
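On asynchronous serial links, PPP's HDLC-style framing marks each frame with a flag byte (0x7E) and escapes any flag or escape bytes that appear in the payload by prefixing 0x7D and flipping bit 5, as specified in the octet-stuffing rules of RFC 1662. The sketch below shows the core transformation; it deliberately omits the FCS checksum and the negotiable control-character map a real implementation would also handle.

```python
FLAG, ESC, XOR = 0x7E, 0x7D, 0x20

def stuff(payload):
    """Escape any flag or escape octets in the payload, then wrap
    the result in flag bytes to delimit the frame on the wire."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ XOR])
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def unstuff(frame):
    """Strip the enclosing flags and undo the escaping."""
    out, escape = bytearray(), False
    for b in frame[1:-1]:
        if escape:
            out.append(b ^ XOR)
            escape = False
        elif b == ESC:
            escape = True
        else:
            out.append(b)
    return bytes(out)

data = bytes([0x01, 0x7E, 0x7D, 0x02])   # payload containing both magic bytes
wire = stuff(data)
recovered = unstuff(wire)
```

Because the flag byte can never appear inside a stuffed frame body, the receiver can resynchronize on frame boundaries after any line error simply by scanning for the next 0x7E.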