Linux and the Next Generation Internet
A goal of our demonstration environment, in addition to concisely demonstrating the effect of differentiated services, was to prove that the queuing mechanisms within the Linux Diffserv implementation were robust enough to enforce various SLAs throughout our Diffserv domain. As shown in Figure 1, the domain was composed of three routers (one core router, two leaf routers), two Litton CAMVision-2 MPEG-2 codecs (up to 15Mbps) or two Vbrick MPEG-1 codecs (up to 3Mbps), two client workstations, one web server and one network management workstation (NMS).
In the figure, the classification of traffic is performed by the leaf routers “obiwan” and “nimitz”, and the core router “quigon” is configured for the corresponding DSCP-based forwarding and queuing. The traffic streams are color-coded to correspond to particular types of PHBs (blue=BE, red=EF and so on). Notice from the figure that the link between quigon and nimitz is 10Mbps Ethernet and is consistently oversubscribed with multiservice traffic; this is precisely the situation where differentiation between SLAs is critical. To make sure an instantaneous change between SLAs was clearly visible to the casual observer, we used the MPEG video stream as well as some interactive, web-based streaming media (RealAudio, RealVideo, etc.).
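For reference, the two PHBs used here correspond to well-known DSCP code points (EF = 46 per the Expedited Forwarding RFCs, BE = 0). The short Python sketch below, which is ours rather than part of the article's listings, shows how a code point maps into the IP TOS byte that the classifier matches on:

```python
# Standard DSCP code points for the two PHBs used in the demo.
# These values are well-known constants, not taken from the article.
DSCP = {
    "BE": 0b000000,  # best-effort, default forwarding
    "EF": 0b101110,  # expedited forwarding (46 decimal)
}

def dscp_to_tos(dscp):
    """The DSCP occupies the upper six bits of the old IP TOS byte."""
    return dscp << 2

print(hex(dscp_to_tos(DSCP["EF"])))  # prints 0xb8
```

An EF-marked packet therefore carries 0xb8 in its TOS byte, which is the value a leaf router writes and the core router keys its queuing on.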
As shown in Table 1 and Figure 1, we were able to configure several service levels with our approach, each of which was available via a single mouse click. Note that the values and configurations shown in Table 1 and Figure 1 reflect a particular set of SLAs which used only BE and EF traffic classes. When the user clicks on the desired SLA icon, the value from the HTML form field is passed to the web server via an HTTP POST operation. The form values are passed via CGI to a Perl script that processes the POST, then reconfigures each router in the domain. The routers are contacted one by one, and the SLA chosen by the administrator is invoked. Sample Perl pseudocode for the client portion of router control is shown in Listing 4, and the server portion is shown in Listing 5. As can be seen from the Perl client code in Listing 4, the NMS (or other web server) can easily pass the “current SLA” to all routers in the domain based on input from the network manager. This “control channel” interface was protected in all network configurations by a high-priority, low-rate queuing configuration, shown as the black line in Figure 1.
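The actual client code is Perl (Listing 4, not reproduced here). Purely to illustrate the control flow, the following Python sketch does the same job: take the SLA chosen on the form and POST it to each router in turn. The router names come from the article, but the CGI path and form field name are assumptions for illustration.

```python
import urllib.parse
import urllib.request

# Router hostnames from the demo domain; the endpoint path and the
# "sla" field name are illustrative assumptions, not the demo's values.
ROUTERS = ["obiwan", "nimitz", "quigon"]

def sla_request(router, sla):
    """Build the HTTP POST that tells one router which SLA to enforce."""
    data = urllib.parse.urlencode({"sla": sla}).encode()
    return urllib.request.Request(
        f"http://{router}/cgi-bin/set-sla", data=data, method="POST")

def push_sla(sla, opener=urllib.request.urlopen):
    # Contact the routers one by one, as the original Perl client does.
    for router in ROUTERS:
        opener(sla_request(router, sla))
```

The injectable `opener` is just a convenience for testing the loop without a live network; the original script simply shelled out its requests sequentially.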
To provide positive feedback to the user at the NMS, the web interface is refreshed for the administrator while each router begins its unique network setup. Each Diffserv-enabled router in the domain receives the desired SLA and must set up its rules accordingly, depending on its position within the domain and the collection of statically defined SLAs. This is done dynamically via a system call to ipchains-restore according to the new SLA; when the ipchains-restore command finishes, the network setup is complete. The Perl pseudocode for this operation is shown in Listing 5 for a typical core router. As our system is defined, we maintain what is essentially a simple “database” of network/SLA configurations as pre-stored ipchains mappings.
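The server side is likewise Perl in the article (Listing 5). As a minimal sketch of the same step, assuming one pre-stored ipchains rule file per SLA (the directory and naming scheme below are hypothetical), a router-side handler reduces to:

```python
import shlex
import subprocess
from pathlib import Path

# Assumed layout: one pre-stored ipchains dump per SLA, e.g.
# /etc/diffserv/gold.rules -- path and naming are illustrative only.
RULES_DIR = Path("/etc/diffserv")

def restore_command(sla):
    """Shell command that loads the pre-stored rule set for one SLA.

    ipchains-restore reads a previously saved rule set on stdin.
    """
    rules = RULES_DIR / f"{sla}.rules"
    return f"ipchains-restore < {shlex.quote(str(rules))}"

def apply_sla(sla, run=subprocess.run):
    # When the restore finishes, this router's setup is complete.
    return run(restore_command(sla), shell=True, check=True)
```

Keeping one saved rule set per SLA is what makes the switch a single system call: the “database” of configurations is just the directory of dump files.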
To attempt to simulate some typical end-user traffic in addition to the constant MPEG stream, we used a number of FTP downloads, some streaming audio/video sources and a small flood ping throughout the network. Due to the interactive nature of our demonstration environment, these network-based data sources were also available “on demand” from a web-based GUI.
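The background load amounts to a handful of standard commands launched on demand. A sketch of how the GUI's back end might assemble them (hosts and URLs are placeholders, not the demo's actual values):

```python
# Illustrative only: the commands are the standard tools named in the
# text; the target host and FTP URL are placeholder assumptions.
def traffic_commands(target_host, ftp_url):
    """Command lines for the on-demand background traffic sources."""
    return [
        ["ping", "-f", target_host],  # small flood ping (requires root)
        ["wget", "-q", ftp_url],      # bulk FTP download
    ]
```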