Multicast: From Theory to Practice
In order to play with multicast, your GNU/Linux box needs special configuration. Your kernel must be compiled with IP: multicasting enabled. This adds support for IGMP (Internet Group Management Protocol), which is needed to send and receive multicast traffic. If you keep on playing with multicast, it is quite likely you will need to use your box as a multicast router, as many routers still do not support multicasting. In that case, check the Multicast HOWTO for several additional compile options which must be enabled (i.e., say YES). You will also need the mrouted application, a daemon which instructs the kernel on how to forward multicast datagrams when acting as a multicast router (mrouter).
Finally, you need to set a default route for outgoing multicast datagrams. Assuming the eth0 network interface is to act as that outgoing route (your application can instruct the kernel to send its datagrams using a different interface if needed), you'll need to use:
route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
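You can then check from the shell that everything is in place (output formats vary by kernel and distribution):

```shell
# After 'route add', the 224.0.0.0 entry (netmask 240.0.0.0, the
# whole multicast block) appears in the routing table, visible with
# 'route -n' or 'ip route show'.
# Multicast group memberships live in /proc/net/igmp; groups are
# shown as little-endian hex (010000E0 is 224.0.0.1, the all-hosts
# group, which every multicast-capable interface joins automatically).
cat /proc/net/igmp
```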
Now that multicast is defined and your hosts are set up, I will explain how to write multicast applications while developing one. Its aim is to be both a didactic and useful tool. The reader needs a basic background in network programming using the sockets API. UNIX Network Programming by W. Richard Stevens, Internetworking with TCP/IP Vol. 3 by Douglas E. Comer and the setsockopt man page are helpful references.
The idea for the application in Listing 1 came from a popular TV commercial in Spain: a little boy takes his father's mobile telephone, starts calling numbers randomly and saying: “Hi, I'm Edu. Merry Christmas!” His father gulps when he discovers it and, of course, the lesson is how cheap this company's mobile phone calls are (in Europe, local calls are quite expensive).
Our program (see Listing 1) will do the same thing: it will send to the multicast group and port, passed as command-line arguments, the string “Hi, I'm name_of_machine. Merry Christmas!” along with the time to live (TTL) of the message. The program is short and simple, but it is also quite useful. I have used it several times when configuring multicast networks. You can run it on all your machines to see whether they are sending and/or receiving traffic. The TTL is very handy when using multicast routers and/or tunnels, as it makes it easy to determine the lowest TTL needed to reach a given destination.
The first lines of the program are the usual include statements. I tried to add comments to point out which functions and/or data structures need them. In the main function, variable definition and basic initializations are done in lines 27 to 44. Later, we use a dedicated socket for sending (send_s) and another for receiving (recv_s). These sockets must be SOCK_DGRAM (UDP), as TCP does not support the multicast paradigm.
When multicast was implemented, the sockets layer was extended a bit to support it. That support came via the setsockopt/getsockopt system calls.
Three of the five new optnames (see the setsockopt man page) were intended for use when sending data: IP_MULTICAST_LOOP, IP_MULTICAST_TTL and IP_MULTICAST_IF. They are all at the IPPROTO_IP level.
If IP_MULTICAST_LOOP is set, all multicast packets sent from this socket will be looped back internally by the kernel. This way, the rest of the applications waiting to receive traffic for this group will see it just as if it had been received by the network card. We are not interested in that behavior for our application, so it is disabled in lines 65 to 69. By default, loopback is enabled.
The TTL field of the IP header plays a primary role in multicasting. Its original purpose, preventing packets from looping forever due to routing errors, is kept, but it gains a new one: the field also acts as a threshold, a delimiter that keeps multicast packets from being forwarded without control across the Internet. You can establish frontiers by specifying that a multicast packet will cross your multicast router only if its TTL field is greater than a particular value. This way, you can multicast a conference while restricting its scope to your LAN (TTL of 1), your local site (TTL<32), your country (TTL<64), or leaving it unrestricted in scope (TTL<256). Our test program lets you specify the TTL on the command line, then sets it using the IP_MULTICAST_TTL option. If none is specified, a TTL of 1 is assumed (see lines 52 to 62). If you are using multicast tunnels, or your applications are separated by multicast routers, you can run the program on both ends, increasing the value of the TTL field until the two programs “see” each other. This way, you can easily discover the minimum TTL necessary for your applications to communicate.
If not otherwise specified, outgoing multicast datagrams are sent following the default multicast route set by the system administrator. If this is not what you want, you can specify another output interface for that socket. Our sample program is quite simple and does not need this feature, so we did not use the IP_MULTICAST_IF option. Instead, we let the kernel choose the correct route. If you need it, write code such as:
struct in_addr interface_addr;
setsockopt(socket, IPPROTO_IP, IP_MULTICAST_IF,
           &interface_addr, sizeof(interface_addr));
filling the interface_addr structure with a suitable value. If later you want to revert to the original behavior, just call setsockopt again using INADDR_ANY as the interface field.