Queueing in the Linux Network Stack

Starvation and Latency

Despite its necessity and benefits, the queue between the IP stack and the hardware introduces two problems: starvation and latency.

If the NIC driver wakes to pull packets off the queue for transmission and the queue is empty, the hardware will miss a transmission opportunity, thereby reducing the throughput of the system. This is referred to as starvation. Note that an empty queue when the system does not have anything to transmit is not starvation—this is normal. The complication associated with avoiding starvation is that the IP stack filling the queue and the hardware driver draining the queue run asynchronously. Worse, the duration between fill or drain events varies with the load on the system and external conditions, such as the network interface's physical medium. For example, on a busy system, the IP stack will get fewer opportunities to add packets to the queue, which increases the chances that the hardware will drain the queue before more packets are queued. For this reason, it is advantageous to have a very large queue to reduce the probability of starvation and ensure high throughput.

Although a large queue is necessary for a busy system to maintain high throughput, it has the downside of introducing a large amount of latency.

Figure 4 shows a driver queue that is almost full with TCP segments for a single high-bandwidth, bulk traffic flow (blue). Queued last is a packet from a VoIP or gaming flow (yellow). Interactive applications like VoIP or gaming typically emit small packets at fixed intervals that are latency-sensitive, while a high-bandwidth data transfer generates a higher packet rate and larger packets. This higher packet rate can fill the queue between interactive packets, causing the transmission of the interactive packet to be delayed.

Figure 4. Interactive Packet (Yellow) behind Bulk Flow Packets (Blue)

To illustrate this behaviour further, consider a scenario based on the following assumptions:

  • A network interface that is capable of transmitting at 5 Mbit/sec or 5,000,000 bits/sec.

  • Each packet from the bulk flow is 1,500 bytes or 12,000 bits.

  • Each packet from the interactive flow is 500 bytes.

  • The depth of the queue is 128 descriptors.

  • There are 127 bulk data packets and one interactive packet queued last.

Given the above assumptions, the time required to drain the 127 bulk packets and create a transmission opportunity for the interactive packet is (127 * 12,000) / 5,000,000 = 0.3048 seconds (roughly 305 milliseconds for those who think of latency in terms of ping results). This amount of latency is well beyond what is acceptable for interactive applications, and it does not even represent the complete round-trip time; it is only the time required to transmit the packets queued before the interactive one. As described earlier, the packets in the driver queue can be larger than 1,500 bytes if TSO, UFO or GSO are enabled, which makes the latency problem correspondingly worse.
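The drain-time arithmetic above can be checked with a few lines of Python. All of the figures come straight from the assumptions listed; nothing here is measured:

```python
# Drain-time estimate for the scenario above: 127 bulk packets
# queued ahead of one interactive packet on a 5 Mbit/sec link.

LINK_RATE_BPS = 5_000_000      # 5 Mbit/sec
BULK_PACKET_BITS = 1_500 * 8   # each 1,500-byte bulk packet is 12,000 bits
QUEUED_BULK_PACKETS = 127      # bulk packets ahead of the interactive one

# Time until the interactive packet gets a transmission opportunity.
drain_seconds = (QUEUED_BULK_PACKETS * BULK_PACKET_BITS) / LINK_RATE_BPS
print(f"{drain_seconds * 1000:.1f} ms")  # → 304.8 ms
```

Substituting a larger per-packet size (for example, a 64 KB TSO segment) into BULK_PACKET_BITS shows how aggregation inflates this delay.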

Large latencies introduced by oversized, unmanaged queues are known as Bufferbloat (http://en.wikipedia.org/wiki/Bufferbloat). For a more detailed explanation of this phenomenon, see the Resources for this article.

As the above discussion illustrates, choosing the correct size for the driver queue is a Goldilocks problem—it can't be too small, or throughput suffers; it can't be too big, or latency suffers.

Byte Queue Limits (BQL)

Byte Queue Limits (BQL) is a new feature in recent Linux kernels (3.3 and later) that attempts to solve the problem of driver queue sizing automatically. It does this by adding a layer that enables and disables queueing to the driver queue based on a calculation of the minimum queue size required to avoid starvation under the current system conditions. Recall from earlier that the smaller the amount of queued data, the lower the maximum latency experienced by queued packets.

It is key to understand that the actual size of the driver queue is not changed by BQL. Rather, BQL calculates a limit of how much data (in bytes) can be queued at the current time. Any bytes over this limit must be held or dropped by the layers above the driver queue.

A real-world example may help provide a sense of how much BQL affects the amount of data that can be queued. On one of the author's servers, the driver queue size defaults to 256 descriptors. Since the Ethernet MTU is 1,500 bytes, this means up to 256 * 1,500 = 384,000 bytes can be queued to the driver queue (TSO, GSO and so forth are disabled, or this would be much higher). However, the limit value calculated by BQL is 3,012 bytes. As you can see, BQL greatly constrains the amount of data that can be queued.
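The byte-limit idea can be sketched as a toy model in Python. To be clear, this is an illustration of the concept only, not the kernel's actual algorithm: real BQL also adapts the limit over time based on observed transmit completions, whereas the limit here is fixed. The class name and the 3,012-byte figure (taken from the example above) are just for illustration:

```python
from collections import deque

class ByteLimitedQueue:
    """Toy model of a BQL-style byte limit in front of a driver queue.

    The driver queue's descriptor capacity is unchanged; enqueue is
    simply refused once admitting the packet would exceed the byte
    limit, so excess packets stay in the layer above (the QDisc).
    """

    def __init__(self, limit_bytes):
        self.limit_bytes = limit_bytes
        self.queued_bytes = 0
        self.queue = deque()

    def try_enqueue(self, packet_bytes):
        if self.queued_bytes + packet_bytes > self.limit_bytes:
            return False  # held back by the layer above
        self.queue.append(packet_bytes)
        self.queued_bytes += packet_bytes
        return True

    def dequeue(self):
        # Hardware drains a packet; bytes are credited back to the limit.
        packet_bytes = self.queue.popleft()
        self.queued_bytes -= packet_bytes
        return packet_bytes

# With the 3,012-byte limit from the example, only two 1,500-byte
# packets fit; the third must wait in the QDisc.
q = ByteLimitedQueue(limit_bytes=3_012)
print([q.try_enqueue(1_500) for _ in range(3)])  # → [True, True, False]
```

On a real system, the current limit can be inspected per transmit queue under /sys/class/net/<interface>/queues/tx-<n>/byte_queue_limits/.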

BQL reduces network latency by limiting the amount of data in the driver queue to the minimum required to avoid starvation. It also has the important side effect of moving the point where most packets are queued from the driver queue, which is a simple FIFO, to the queueing discipline (QDisc) layer, which is capable of implementing much more complicated queueing strategies.


Dan Siemon is a longtime Linux user and former network admin who now spends most of his time doing business stuff.

