Kernel Korner - Analysis of the HTB Queuing Discipline

Can Linux do Quality of Service in a way that both offers high throughput and does not exceed the defined bandwidth? Here's a thorough test.

The Hierarchical Token Buckets (HTB) queuing discipline, part of the Linux set of traffic control functions, is a mechanism that provides QoS capabilities and is useful for fine-tuning TCP traffic flow. This article offers a brief overview of queuing discipline components and describes the results of several preliminary performance tests. Several configuration scenarios were set up within a Linux environment, and an Ixia device was used to generate traffic. This testing demonstrated that throughput accuracy can be tuned and that the delivered bandwidth stays within 2Mbit/s of the configured limit. The test results demonstrated the performance and accuracy of the HTB queuing algorithms and revealed methods for improving traffic management.

The traffic control mechanism comes into play after an IP packet is queued for transmit on an output interface but before the packet actually is transmitted by the driver. Figure 1 shows where traffic control decisions are made in relation to packet transmission on the physical Ethernet and transport-layer protocols, such as UDP and TCP.

Figure 1. The kernel makes traffic control decisions after the packet is queued for transmit.

The traffic control kernel functionality, as implemented in Linux by Alexey Kuznetsov, includes four main components: queuing disciplines, classes of service, filters and policing.

Queuing disciplines are software mechanisms that define the algorithms used for treating queued IP packets. Each network device is associated with a queuing discipline, and a typical queuing discipline uses the FIFO algorithm to control the queued packets. The packets are stored in the order received and are dequeued as fast as the device associated with the queue can send them. Linux currently supports various queuing disciplines and provides ways to add new disciplines.
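As a concrete illustration, the default discipline can be replaced explicitly with a packet-limited FIFO queue using the tc tool described later in this article. The interface name eth0 and the 100-packet limit are arbitrary values chosen for this sketch:

# Attach a simple packet-limited FIFO as the root queuing discipline of eth0
tc qdisc add dev eth0 root pfifo limit 100

# List the queuing disciplines currently attached to eth0
tc qdisc show dev eth0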

A detailed description of queuing algorithms can be found on the Internet at “Iproute2+tc Notes” (see the on-line Resources). The HTB discipline uses the TBF algorithm to control the packets queued for each defined class of service associated with it. The TBF algorithm provides traffic policing and traffic-shaping capabilities. A detailed description of the TBF algorithm can be found in the Cisco IOS Quality of Service Solutions Configuration Guide (see “Policing and Shaping Overview”).
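For reference, a stand-alone TBF queuing discipline can be attached directly to an interface. The rate, burst and latency values below are arbitrary example numbers, not settings taken from the tests described in this article:

# Shape all traffic leaving eth0 to 2Mbit/s, allow bursts of up to 10KB
# and drop packets that would wait longer than 70ms in the queue
tc qdisc add dev eth0 root tbf rate 2mbit burst 10kb latency 70ms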

A class of service defines policing rules, such as maximum bandwidth or maximum burst, and it uses the queuing discipline to enforce those rules. A queuing discipline and a class are tied together: rules defined by a class must be associated with a predefined queue. In most cases, every class owns one queuing discipline, but it also is possible for several classes to share the same queue. In most cases when queuing packets, the policing components of a specific class discard packets that exceed a certain rate (see “Policing and Shaping Overview”).
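The following sketch shows how class-level rules such as maximum bandwidth and burst are expressed, using hypothetical rate and ceiling values on eth0; each leaf class keeps its default FIFO queue:

# Root HTB queuing discipline; unclassified traffic falls into class 1:20
tc qdisc add dev eth0 root handle 1: htb default 20

# Two classes with different rules: a guaranteed rate (rate),
# a borrowing limit (ceil) and a maximum burst size (burst)
tc class add dev eth0 parent 1: classid 1:10 htb rate 4mbit ceil 6mbit burst 15k
tc class add dev eth0 parent 1: classid 1:20 htb rate 2mbit ceil 2mbit burst 15k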

Filters define the rules the queuing discipline uses to decide which class incoming packets should be assigned to. Every filter has an assigned priority, and the filters are sorted in ascending order based on their priorities. When a queuing discipline has a packet to queue, it tries to match the packet against the defined filters, checking each filter in the list in turn, starting with the one assigned the highest priority. Each class or queuing discipline can have one or more filters associated with it.
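Building on the hypothetical classes sketched above, u32 filters with different priorities can steer traffic into them; the match values are illustrative only:

# Priority 1: traffic to TCP/UDP port 80 goes to class 1:10
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 80 0xffff flowid 1:10

# Priority 2: everything from the 10.1.1.0/24 subnet goes to class 1:20
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 \
    match ip src 10.1.1.0/24 flowid 1:20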

Policing components make sure that traffic does not exceed the defined bandwidth. Policing decisions are made based on the filter and the class-defined rules. Figure 2 shows the relationship among all the components in the Linux traffic control mechanism.
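A filter also can carry an explicit policing action, so that matching traffic is dropped once it exceeds a given rate. The numbers below are again only an example of the syntax, not a recommended setting:

# Police ICMP traffic (IP protocol 1) to 1Mbit/s; packets above that rate are dropped
tc filter add dev eth0 parent 1: protocol ip prio 3 u32 \
    match ip protocol 1 0xff \
    police rate 1mbit burst 10k drop flowid 1:20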

Figure 2. Linux traffic control mechanisms include queuing disciplines, classes, filters and policing.

TC Tool and HTB Definitions

TC is a user-level program that creates queues, classes and filters and associates them with an output interface (see “tc—Linux QoS control tool” in Resources). The filters can be set up based on the routing table, u32 classifiers and TOS classifiers. The program uses netlink sockets in order to communicate with the kernel's networking system functions. Table 1 lists the three main functions and their corresponding TC commands. See the HTB Linux Queuing Discipline User Guide for details regarding TC command options.

Table 1. TC Tool Functions and Commands

TC Command    Function
tc qdisc      Create a queuing discipline.
tc filter     Create a filter.
tc class      Create a class.
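Beyond the three creation commands listed in Table 1, the tc tool also can display the configured objects and their statistics, which is useful for verifying a setup such as the sketches shown in this article (eth0 is again just an example interface):

# Show queuing disciplines, classes and filters configured on eth0;
# the -s flag adds byte/packet counters and drop statistics
tc -s qdisc show dev eth0
tc -s class show dev eth0
tc filter show dev eth0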

The HTB mechanism offers one way to control the use of the outbound bandwidth on a given link. To use the HTB facility, HTB must be specified as the type of both the queuing discipline and its classes. HTB shapes the traffic based on the TBF algorithm, which does not depend on the underlying bandwidth. Only the root queuing discipline should be defined as an HTB type; all the other class instances use the default FIFO queue. The queuing process always starts at the root level and then, based on rules, decides which class should receive the data. The tree of classes is traversed until a matched leaf class is found (see “Hierarchical Token Bucket Theory”).
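Putting the pieces together, a minimal HTB hierarchy might look like the following sketch. The interface, rates, class IDs and filter matches are assumptions made for illustration and do not reproduce the configurations used in the article's measurements:

# Root HTB queuing discipline; unmatched traffic is sent to class 1:30
tc qdisc add dev eth0 root handle 1: htb default 30

# Parent class caps the total outbound bandwidth at 10Mbit/s
tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit ceil 10mbit

# Leaf classes receive guaranteed rates and may borrow from the parent
# up to their ceilings; each leaf keeps the default FIFO queue
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 6mbit ceil 10mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 3mbit ceil 10mbit
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 1mbit ceil 10mbit

# Classify traffic into the leaves; anything unmatched falls into 1:30
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 80 0xffff flowid 1:10
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 \
    match ip src 10.1.1.0/24 flowid 1:20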
