CIDR: A Prescription for Shortness of Address Space
CIDR, Classless Inter-Domain Routing, allows you to maximize use of the limited address space available under the current implementation of the Internet Protocol version 4 (IPv4). Even if you have never configured a computer for network communications before, after reading this article you should have a good understanding of how IP addresses are structured and allocated.
CIDR is the current trend in routing and has been for over three years. This concept was introduced in 1993 to alleviate the shortage of Internet Protocol (IP) addresses until the next generation (IP version 6—IPv6, aka IPng for IP next generation) arrives.
Currently in testing, IPng will significantly expand the IP address space by several orders of magnitude. IPng will also come with its own security enhancements. Those desiring to participate in the future today may have the opportunity to do so, since Linux has kernel-level support for IPng. Until IPng is deployed on a wide scale, making the best use of what we have is what CIDR is all about.
To help you understand why we need CIDR at all, let's journey back in time to the beginning of this decade. IPv4, the protocol used by computers to find each other on a network, was in use then, but there really weren't many connections to the Internet or machines needing Internet connections. In fact, a good number of systems still relied on uucp, the UNIX to UNIX copy protocol, where machines “called” each other at predetermined times and exchanged e-mail traffic. At that time, the IP-address pool seemed unlimited. That was also about the time Mosaic, the first web browser, appeared.
Those who consider themselves well-versed in “classful” routing may wish to skip ahead to the next section. Computers understand base 2 numbers (ones and zeroes), and humans understand base 10 (0-9), so engineers worked out a compromise to give computers numbers while keeping it simple for use by humans. All computers on the Internet have a unique IP address which can be represented by a string of ones and zeroes. If that string is divided up into four sets of eight (octets), you get four numbers with a range from 0 (eight zeroes) to 255 (eight ones), which are arranged in the form XXX.XXX.XXX.XXX. This arrangement is called “dotted decimal notation” and makes understanding the significance of each unique IP address a little easier for us humans. These addresses were then further broken down into arbitrary “classes” A-D. Looking at the first half of the first octet:
Class A = 0-127 (first bit 0)
Class B = 128-191 (first bits 10)
Class C = 192-223 (first bits 110)
Class D = 224-239 (first bits 1110)
The positions beginning from the left represent 128, 64, 32 and 16—see Table 1. Furthermore, Class A uses only the first number as the network number, e.g., 10.XXX.XXX.XXX; Class B uses the first two numbers as the network number, e.g., 172.32.XXX.XXX; Class C uses the first three numbers as the network number, e.g., 192.168.1.XXX; Class D is reserved for multicast, and the remaining addresses (Class E) are reserved for experimental use. An address can be thought of as having network and host portions, represented here by numbers and XXXs respectively. For a Class C address, the network portion consists of the first three octets, with the host portion as the final octet.
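Since the class of an address can be read directly off its first octet, the rule is easy to express in a few lines of code. The following is a minimal Python sketch (the `address_class` helper and the example addresses are illustrative, not part of any standard library) that classifies a dotted-decimal address by the ranges listed above:

```python
def address_class(dotted: str) -> str:
    """Return the classful category of a dotted-decimal IPv4 address."""
    first = int(dotted.split(".")[0])
    if first <= 127:
        return "A"   # first bit 0
    if first <= 191:
        return "B"   # first bits 10
    if first <= 223:
        return "C"   # first bits 110
    return "D/E"     # multicast or experimental

print(address_class("10.0.0.1"))     # A
print(address_class("172.32.1.1"))   # B
print(address_class("192.168.1.1"))  # C
```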
The following networking concepts must be understood before going further. Note that the “definitions” I provide here are meant to aid understanding of the basic concepts used in this article; they are not the formal definitions of the terms.
host address: A unique address assigned to a communications device in a computer. If a computer has multiple communications devices (e.g., Ethernet cards or modems), each of these devices will have its own unique address. This means that a host (computer or router) can be multi-homed, i.e., have multiple IP addresses. This can also be artificially created by assigning different IP addresses to the same device (called IP aliasing).
network address: The base (lowest) address assigned to a network segment, as determined by its netmask. This address identifies the subnet itself and is not assigned to any host. For example, on the Class C network that extends from 192.168.1.0 to 192.168.1.255, the network address would be 192.168.1.0.
broadcast address: The upper address assigned to a network segment. In the example above, this address would be 192.168.1.255.
netmask: A mask in which all the high-order bits (the network portion of the address) are ones in base 2 and all the low-order bits (the host portion) are zeroes. For the example above, this mask would be 255.255.255.0.
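The three definitions above can be verified with a little bit arithmetic: ANDing an address with the netmask keeps only the network bits, and ORing the result with the inverted mask sets all host bits to one. A short Python sketch (the host 192.168.1.42 is an arbitrary address chosen for illustration):

```python
import ipaddress

# Convert the example host address and netmask to 32-bit integers.
ip = int(ipaddress.IPv4Address("192.168.1.42"))
mask = int(ipaddress.IPv4Address("255.255.255.0"))

network = ip & mask                          # keep only the network bits
broadcast = network | (~mask & 0xFFFFFFFF)   # set every host bit to one

print(ipaddress.IPv4Address(network))    # 192.168.1.0
print(ipaddress.IPv4Address(broadcast))  # 192.168.1.255
```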
With this introduction to IP addressing, and remembering that a decade ago almost no PCs participated in networking, it is easy to see why during the 1980s IPv4 seemed to have an endless supply of addresses, even though not all addresses could be assigned. The 32-bit address space holds roughly 4.3 billion raw addresses, but because classful allocation hands them out in large fixed-size blocks, the usable maximum is far lower: perhaps 500 million in theory, and even 100 million is extremely optimistic and insufficient for today.
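The arithmetic behind those numbers is worth seeing once. This is only a rough sketch (it ignores reserved and special-purpose ranges), but it shows how coarse the classful block sizes are: an organization with, say, 300 hosts must take a whole Class B and waste more than 99% of it.

```python
# Total raw IPv4 address space: 32 bits.
total = 2 ** 32
print(total)  # 4294967296

# Host addresses per network in each class (before subtracting
# the network and broadcast addresses):
class_a_hosts = 2 ** 24  # one octet of network number
class_b_hosts = 2 ** 16  # two octets of network number
class_c_hosts = 2 ** 8   # three octets of network number
print(class_a_hosts, class_b_hosts, class_c_hosts)  # 16777216 65536 256
```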
Before leaving this section, I'd like to describe an experiment. This experiment will not work properly if performed in an environment with machines using only the Microsoft Windows IP stack, since its implementation is broken, or at least doesn't follow the rules everyone else plays by. Therefore, you will need to be on a UNIX or Linux machine with other UNIX or Linux boxes on your network. Type the following command:
ping -c 1 192.168.1.255
substituting your own network's broadcast address for 192.168.1.255 (the broadcast address of the example network above). Note that some ping implementations, including recent Linux versions, require the -b flag before they will ping a broadcast address.
What you will see in response is every UNIX box answering back with its IP address, and each reply following the first one will have (DUP!) next to it, indicating it is a duplicate reply. The -c 1 argument tells ping to send only one ping packet. The number of replies received will depend on how many (non-MS) machines you have on the network. If this is performed from an MS Windows machine (95 or NT), you will receive a reply from the local machine only.
What is the point of this little demonstration? If you change the netmask on a machine, say from 255.255.255.0 to 255.255.0.0 thereby changing its network and broadcast addresses, even though nothing else changed (i.e., it still has the same IP address and is still connected to the network the same way) it will cease talking to its neighbors. In other words, this machine is now on another network and will require a gateway to talk to the other machines on the local net (all bets are off for the Microsoft machines).
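Python's standard ipaddress module makes the effect easy to see: the same host address under two different netmasks lands on two different networks, each with its own broadcast address. (The host 192.168.1.5 is an arbitrary address chosen for the demonstration.)

```python
import ipaddress

# One host address, two netmasks.
a = ipaddress.ip_interface("192.168.1.5/255.255.255.0")
b = ipaddress.ip_interface("192.168.1.5/255.255.0.0")

print(a.network)                    # 192.168.1.0/24
print(b.network)                    # 192.168.0.0/16
print(a.network.broadcast_address)  # 192.168.1.255
print(b.network.broadcast_address)  # 192.168.255.255
```

Nothing about the wire changed, yet the host's idea of which addresses are "local" did, which is why it stops talking to its old neighbors without a gateway.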