Linux IPv6: Which One to Deploy?
The Linux kernel has its own IPv6 implementation. However, as mentioned previously, the TAHI Project results showed that it did not fare as well as other implementations. This is not a big surprise, given that no major development activity had happened for a while. The original author of the Linux kernel IPv6 code, Pedro Roque, left the community; since then, Alexey Kuznetsov and others have made quite a few enhancements over the years, but nothing amounting to major development. The USAGI Project has submitted some small fixes, but a complete integration of its work with the kernel stack remains an open issue.
At this point, we already had two IPv6 test networks in our lab: one with Linux nodes running the USAGI IPv6 stack and the other with Linux nodes running the kernel IPv6 stack. Much work went into performing the setup and solving routing and tunneling issues. The question remained, however, which implementation to adopt.
To answer this question objectively, Ericsson Research (Budapest) performed conformance tests on the latest version of the official Linux kernel at that time (2.4.5) and on the USAGI IPv6 implementation (based on 2.4.0). The tests were based on the University of New Hampshire InterOperability Lab IPv6 Test Description document (see Resources).
The result of each test case can be:
Pass: the implementation passes the test.
Fail: the implementation fails the test.
Inc: the verdict is inconclusive when we cannot decide whether the implementation is capable of passing the test. For example, if a test consists of three request/reply sequences and we get no answer to the tester's second request, the verdict is inconclusive.
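The three-way verdict can be sketched as a small helper. This is purely illustrative (the `Verdict` enum and `judge` function are our own sketch, not part of the UNH test suite): a missing reply forces an inconclusive verdict regardless of the other exchanges.

```python
from enum import Enum

class Verdict(Enum):
    PASS = "Pass"
    FAIL = "Fail"
    INC = "Inc"   # inconclusive

def judge(replies):
    """Assign a verdict to one test case from its request/reply exchanges.

    `replies` holds one entry per request sent by the tester:
    True (expected reply seen), False (wrong reply), or None (no reply).
    """
    if any(r is None for r in replies):
        return Verdict.INC   # a missing answer makes the whole run inconclusive
    return Verdict.PASS if all(replies) else Verdict.FAIL
```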
The Conformance Lab conducted four types of testing: basic specification, address autoconfiguration, redirect and neighbor discovery. Below, we explain these tests and present the results.
Basic specification: this series of tests covers the base specification for IPv6. The base specification specifies the basic IPv6 header and the initially defined IPv6 extension headers and options. It also discusses packet-size issues, the semantics of flow labels and traffic classes and the effects of IPv6 on upper-layer protocols (see Figure 1).
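To make the "basic IPv6 header" concrete, here is a minimal parser for the 40-byte fixed header defined in the base specification (RFC 2460). The function name and returned dictionary are our own illustration; the field layout follows the RFC.

```python
import struct

def parse_ipv6_header(pkt: bytes) -> dict:
    """Parse the 40-byte fixed IPv6 header (RFC 2460 layout)."""
    if len(pkt) < 40:
        raise ValueError("IPv6 fixed header is 40 bytes")
    # First 4 bytes: version (4 bits) | traffic class (8) | flow label (20)
    vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", pkt[:8])
    return {
        "version": vtf >> 28,               # should always be 6
        "traffic_class": (vtf >> 20) & 0xFF,
        "flow_label": vtf & 0xFFFFF,
        "payload_length": payload_len,      # length of everything after the header
        "next_header": next_header,         # e.g. 58 = ICMPv6, or an extension header
        "hop_limit": hop_limit,
        "src": pkt[8:24],                   # 128-bit source address
        "dst": pkt[24:40],                  # 128-bit destination address
    }
```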
Address autoconfiguration: these tests cover address autoconfiguration for IPv6. They are designed to verify conformance with the IPv6 stateless address autoconfiguration specification (see Figure 2).
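One step the autoconfiguration tests exercise is deriving a link-local address from the interface identifier. As a sketch (the helper name is ours), this is how a host forms its fe80::/64 address from an Ethernet MAC using the modified EUI-64 procedure:

```python
import ipaddress

def mac_to_link_local(mac: str) -> ipaddress.IPv6Address:
    """Derive the fe80::/64 link-local address from a MAC address using
    modified EUI-64, as stateless autoconfiguration does on Ethernet."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                  # flip the universal/local bit
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # insert ff:fe in the middle
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64)
```

After forming the address, the host must still run duplicate address detection before assigning it, which is also covered by this test group.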
Redirect: the redirect tests cover the redirect function of the neighbor discovery specification for IPv6. Redirect messages are sent by routers to redirect a host to a better first-hop router for a specific destination or to inform hosts that a destination is in fact a neighbor, i.e., on-link (see Figure 3).
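The wire format a conformance tester generates for these cases is small. As an illustration (the builder function is ours; the layout follows the neighbor discovery specification), an ICMPv6 Redirect is type 137 followed by a target address, i.e. the better first hop, and the redirected destination address:

```python
import ipaddress
import struct

def build_redirect(target: str, destination: str) -> bytes:
    """Build the body of an ICMPv6 Redirect (type 137, code 0) message.

    The checksum is left zero here; it is computed over the IPv6
    pseudo-header before transmission. The target is the better first-hop
    router (or the destination itself when the destination is on-link)."""
    header = struct.pack("!BBHI", 137, 0, 0, 0)  # type, code, checksum, reserved
    return (header
            + ipaddress.IPv6Address(target).packed
            + ipaddress.IPv6Address(destination).packed)
```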
Neighbor discovery: these tests cover the neighbor discovery specification for IPv6. The neighbor discovery protocol is used by nodes (hosts and routers) to determine the link-layer addresses of neighbors known to reside on attached links, as well as to quickly purge cached values that become invalid. Hosts also use neighbor discovery to find neighboring routers that are willing to forward packets on their behalf. Finally, nodes use the protocol actively to keep track of which neighbors are reachable and which are not. When a router or the path to a router fails, a host actively searches for functioning alternates (see Figures 4 and 5).
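The reachability tracking just described is a state machine over each neighbor cache entry. The toy class below (our own sketch; timers are modeled as explicit events rather than real clocks) walks through the transitions the neighbor discovery tests exercise:

```python
from enum import Enum, auto

class NUDState(Enum):
    """Neighbor unreachability detection states from the ND specification."""
    INCOMPLETE = auto()   # address resolution in progress
    REACHABLE = auto()    # recently confirmed reachable
    STALE = auto()        # unconfirmed; no traffic pending
    DELAY = auto()        # traffic sent; waiting before probing
    PROBE = auto()        # actively sending Neighbor Solicitations

class NeighborEntry:
    """Toy neighbor-cache entry illustrating the NUD transitions."""
    def __init__(self):
        self.state = NUDState.INCOMPLETE
        self.lladdr = None

    def solicited_na(self, lladdr):
        """A solicited Neighbor Advertisement confirms reachability."""
        self.lladdr = lladdr
        self.state = NUDState.REACHABLE

    def reachable_timer_expired(self):
        if self.state == NUDState.REACHABLE:
            self.state = NUDState.STALE

    def packet_sent(self):
        """Sending traffic to a STALE neighbor starts the DELAY timer."""
        if self.state == NUDState.STALE:
            self.state = NUDState.DELAY

    def delay_timer_expired(self):
        """Still unconfirmed after the delay: start unicast probing."""
        if self.state == NUDState.DELAY:
            self.state = NUDState.PROBE
```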
Based on these results, we can see that the USAGI implementation fared better than the Linux kernel implementation: it passed more tests, failed fewer tests and had fewer inconclusive cases.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
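In the shell, that combined tool is the one-liner `find /home -name '*.log' -exec grep 'ERROR' {} +`. For illustration, the same idea can be sketched in a few lines of Python (the `grep_logs` helper and its parameters are our own example):

```python
import pathlib

def grep_logs(root="/home", needle="ERROR"):
    """Find every *.log file under `root` and return (path, line)
    pairs for each line containing `needle` -- a Python take on
    stringing `find` and `grep` together."""
    hits = []
    for path in pathlib.Path(root).rglob("*.log"):
        try:
            for line in path.read_text(errors="replace").splitlines():
                if needle in line:
                    hits.append((path, line))
        except OSError:
            continue   # skip unreadable files, as grep -s would
    return hits
```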
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, nor the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide