Linux IPv6: Which One to Deploy?
The Linux kernel has its own IPv6 implementation. However, as mentioned previously, the TAHI Project results showed that it did not fare as well as other implementations. This is not a big surprise, given that no major development activity has happened on it for a while. The original author of the Linux kernel IPv6 code, Pedro Roque, has left the community; since then, Alexey Kuznetsov and others have made a number of enhancements over the years, but nothing amounting to major development. The USAGI Project has submitted some small fixes, but complete integration of its work with the kernel stack remains an open issue.
At this point, we already had two IPv6 test networks in our lab: one with Linux nodes running the USAGI IPv6 stack and the other with Linux nodes running the kernel IPv6 stack. Much work had gone into the setup and into solving routing and tunneling issues, but the question remained: which implementation should we adopt?
To answer this question objectively, Ericsson Research (Budapest) performed conformance tests on the latest version of the official Linux kernel at the time (2.4.5) and on the USAGI IPv6 implementation (based on kernel 2.4.0). The tests were based on the University of New Hampshire InterOperability Lab IPv6 Test Description document (see Resources).
The result of each test case can be one of three verdicts:
Pass: the implementation passes the test.
Fail: the implementation fails the test.
Inc: the verdict is inconclusive when we cannot decide whether the implementation is capable of passing the test. For example, if a test consists of three request/reply sequences and we get no answer to the tester's second request, the verdict is inconclusive.
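As a rough sketch of how such a three-way verdict might be assigned, the function below classifies a test run from the per-request outcomes. This is our illustration of the rule described above, not the actual UNH-IOL test procedure.

```python
# Hypothetical verdict classifier for a conformance test run.
# replies: one entry per request in the test sequence:
#   'ok'  - the expected reply was received
#   'bad' - a wrong reply was received
#   None  - no answer was received
def verdict(replies):
    if any(r == 'bad' for r in replies):
        return 'Fail'   # a wrong reply is a definite failure
    if any(r is None for r in replies):
        return 'Inc'    # a missing answer leaves us unable to decide
    return 'Pass'       # every request got the expected reply

print(verdict(['ok', 'ok', 'ok']))   # → Pass
print(verdict(['ok', None, 'ok']))   # → Inc (no answer to the second request)
```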
The Conformance Lab conducted four types of testing: basic specification, address autoconfiguration, redirect and neighbor discovery. Below, we explain these tests and present the results.
Basic specification: this series of tests covers the base specification for IPv6, which defines the basic IPv6 header and the initially defined IPv6 extension headers and options. It also covers packet-size issues, the semantics of flow labels and traffic classes and the effects of IPv6 on upper-layer protocols (see Figure 1).
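To make the base specification concrete: the fixed IPv6 header is exactly 40 bytes, and its fields can be unpacked in a few lines. The sketch below (our illustration, following RFC 2460) parses the version, traffic class, flow label and remaining fields from a raw packet:

```python
import struct

def parse_ipv6_header(pkt: bytes) -> dict:
    """Parse the 40-byte fixed IPv6 header (RFC 2460)."""
    # First 8 bytes: 32-bit version/traffic-class/flow-label word,
    # 16-bit payload length, 8-bit next header, 8-bit hop limit.
    vtf, payload_len, next_hdr, hop_limit = struct.unpack_from('!IHBB', pkt)
    return {
        'version':       vtf >> 28,          # always 6
        'traffic_class': (vtf >> 20) & 0xFF,
        'flow_label':    vtf & 0xFFFFF,      # 20 bits
        'payload_len':   payload_len,
        'next_header':   next_hdr,           # e.g. 58 = ICMPv6
        'hop_limit':     hop_limit,
        'src':           pkt[8:24],          # 128-bit source address
        'dst':           pkt[24:40],         # 128-bit destination address
    }
```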
Address autoconfiguration: these tests cover address autoconfiguration for IPv6. They are designed to verify conformance with the IPv6 stateless address autoconfiguration specification (see Figure 2).
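The core of stateless autoconfiguration on Ethernet is deriving the interface identifier from the 48-bit MAC address (the modified EUI-64 format of RFC 2464): the universal/local bit is flipped and the bytes ff:fe are inserted in the middle. A minimal illustration of how a host forms its link-local address:

```python
def link_local_from_mac(mac: str) -> str:
    """Derive the link-local address a host autoconfigures from its
    48-bit Ethernet MAC, using modified EUI-64 (RFC 2464).
    Note: zero-group compression is not applied here."""
    b = bytes(int(x, 16) for x in mac.split(':'))
    # Flip the universal/local bit of the first byte, insert ff:fe.
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b'\xff\xfe' + b[3:6]
    groups = [f'{eui[i] << 8 | eui[i + 1]:x}' for i in range(0, 8, 2)]
    return 'fe80::' + ':'.join(groups)

print(link_local_from_mac('00:a0:24:ab:cd:ef'))  # → fe80::2a0:24ff:feab:cdef
```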
Redirect: the redirect tests cover the redirect function of the neighbor discovery specification for IPv6. Redirect messages are sent by routers to redirect a host to a better first-hop router for a specific destination or to inform hosts that a destination is in fact a neighbor, i.e., on-link (see Figure 3).
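For illustration, the Redirect message body that these tests exercise (ICMPv6 type 137 in the neighbor discovery specification, RFC 2461) carries the target (the better first hop, or the destination itself when it is on-link) and the redirected destination. A sketch of assembling one, with the checksum left at zero on the assumption that it is filled in later (on Linux, the kernel computes it for raw ICMPv6 sockets):

```python
import struct

def build_redirect(target: bytes, dest: bytes) -> bytes:
    """Assemble an ICMPv6 Redirect message body (RFC 2461, section 4.5),
    without options. target/dest are 128-bit addresses in network order."""
    assert len(target) == 16 and len(dest) == 16
    ICMPV6_REDIRECT = 137
    # type, code, checksum (0, filled in later), 32 reserved bits
    hdr = struct.pack('!BBHI', ICMPV6_REDIRECT, 0, 0, 0)
    return hdr + target + dest
```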
Neighbor discovery: these tests cover the neighbor discovery specification for IPv6. The neighbor discovery protocol is used by nodes (hosts and routers) to determine the link-layer addresses of neighbors known to reside on attached links and to quickly purge cached values that become invalid. Hosts also use neighbor discovery to find neighboring routers willing to forward packets on their behalf. Finally, nodes use the protocol to keep track of which neighbors are reachable and which are not; when a router or the path to a router fails, a host actively searches for functioning alternates (see Figures 4 and 5).
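One small but central piece of address resolution in neighbor discovery is the solicited-node multicast group: a Neighbor Solicitation is sent to ff02::1:ff00:0/104 combined with the low-order 24 bits of the target address (RFC 2373), so only a handful of nodes need to process it. A sketch of the computation:

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """Solicited-node multicast group for a unicast address (RFC 2373):
    the prefix ff02::1:ff00:0/104 plus the low-order 24 bits of addr."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address('ff02::1:ff00:0'))
    return str(ipaddress.IPv6Address(base | low24))

print(solicited_node('fe80::2a0:24ff:feab:cdef'))  # → ff02::1:ffab:cdef
```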
Based on these results, we can see that the USAGI implementation outperformed the Linux kernel implementation: it passed more tests, failed fewer tests and had fewer inconclusive cases.