Network Buffers and Memory Management
Ethernet is probably the most common physical interface type a network driver has to handle. The kernel provides a set of general-purpose Ethernet support routines that such drivers can use.
eth_header() is the standard Ethernet handler for the dev->hard_header routine, and can be used in any Ethernet driver. Combined with eth_rebuild_header() for the rebuild routine, it provides all the ARP lookup required to put Ethernet headers on IP packets.
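As a minimal sketch (assuming the 2.0-era struct device hooks named above; mydev_probe() is purely illustrative), a driver can install these helpers by hand, although ether_setup() normally does it as part of filling in the Ethernet defaults:

    #include <linux/netdevice.h>
    #include <linux/etherdevice.h>

    /* Illustrative probe routine: ether_setup() installs the generic
     * Ethernet defaults, including the header hooks discussed above. */
    static int mydev_probe(struct device *dev)
    {
        ether_setup(dev);
        /* Broadly equivalent to setting the hooks directly:
         *   dev->hard_header    = eth_header;
         *   dev->rebuild_header = eth_rebuild_header;
         */
        return 0;
    }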
The eth_type_trans() routine expects to be fed a raw Ethernet packet. It analyses the headers and sets skb->pkt_type and skb->mac itself, as well as returning the suggested value for skb->protocol. This routine is normally called from the Ethernet driver's receive interrupt handler to classify packets.
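A hedged sketch of such a receive path (mydev_rx() and its arguments are invented for illustration; the sk_buff helpers are the ones described elsewhere in this article):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <linux/etherdevice.h>
    #include <linux/string.h>

    /* Illustrative receive handler: copy the raw frame into a fresh
     * sk_buff, let eth_type_trans() classify it, then queue it for
     * the protocol layers. */
    static void mydev_rx(struct device *dev, unsigned char *data, int len)
    {
        struct sk_buff *skb = dev_alloc_skb(len + 2);
        if (skb == NULL)
            return;                        /* out of memory: drop the frame */
        skb->dev = dev;
        skb_reserve(skb, 2);               /* longword-align the IP header */
        memcpy(skb_put(skb, len), data, len);
        skb->protocol = eth_type_trans(skb, dev);
        netif_rx(skb);                     /* hand the packet to the stack */
    }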
eth_copy_and_sum(), the final Ethernet support routine, is internally quite complex, but it offers significant performance improvements for memory-mapped cards. It copies and checksums data from the card into an sk_buff in a single pass. This single pass through memory almost eliminates the cost of the checksum computation and improves IP throughput.
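For a memory-mapped card, the memcpy() in the sketch above is replaced by eth_copy_and_sum(). The fragment below assumes the 2.0-era signature eth_copy_and_sum(skb, src, len, base); rx_window is an invented name for a pointer into the card's shared receive memory:

    /* Illustrative fragment: copy and checksum in one pass straight
     * out of board memory. */
    struct sk_buff *skb = dev_alloc_skb(len + 2);
    if (skb != NULL) {
        skb->dev = dev;
        skb_reserve(skb, 2);
        eth_copy_and_sum(skb, rx_window, len, 0);
        skb_put(skb, len);                 /* account for the copied bytes */
        skb->protocol = eth_type_trans(skb, dev);
        netif_rx(skb);
    }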
Alan Cox has been working on Linux since version 0.95, when he installed it in order to do further work on the AberMUD game. He now manages the Linux Networking, SMP, and Linux/8086 projects and hasn't done any work on AberMUD since November 1993.