Linux Teleconferencing: Improving the Wireless Network
The effects of the wireless channel and the response of the compression algorithms can be modeled using a small collection of hardware and software: a Linux box, a pair of H.323 traffic generators and some freely available libraries. The setup required to simulate all of this is simple: three computers, a pair of speakers and microphones, a few extra Ethernet cards and two crossover cables. The software is just as minimal, and when used in conjunction with the setup shown in Figure 2, it yields a particularly easily reproducible system.
The OpenH323 project is an excellent source of the software components required for generating packets that comply with the standard. Two alternatives are available at the web site (http://www.openh323.org/) for both Linux and Windows platforms. For those of you who have Windows already, NetMeeting, commonly bundled with the operating system, offers yet another H.323-compliant multimedia engine.
Several important scenarios need to be factored into any implementation that simulates the channel, including:
Startup—establishing the session requires a great deal of robustness so that the compressor/decompressor pair can establish context, as described in the previously mentioned strategies.
Handoff—handoffs are a function of how fast the mobile device is moving, and the number of packets dropped follows a Poisson distribution with parameter nine. This behavior is considered graceful in that third-generation networks allow the session to survive the handover without having to restart.
Deep fade—these are the enemies of wireless communications and are caused by the hostile nature of spitting bits out on radio waves. Deep fades are typically attributed to momentary shadowing of the radio signal as well as the detrimental effect of interference experienced in congested areas. These are currently the major limiting factors, and the ones under the academic microscope, in the operation of all third-generation cellular networks.
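The handoff scenario above can be sketched with a few lines of code. The following is a minimal Python illustration (not from the article's listings) that draws per-handoff packet losses from a Poisson distribution with parameter nine using Knuth's sampling method; the function names are my own.

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one sample from a Poisson(lam) distribution via Knuth's method:
    multiply uniforms until the product drops below exp(-lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def handoff_losses(n_handoffs, lam=9, seed=0):
    """Packets dropped in each of n_handoffs simulated handovers."""
    rng = random.Random(seed)
    return [poisson_sample(lam, rng) for _ in range(n_handoffs)]

losses = handoff_losses(10000)
print(sum(losses) / len(losses))  # sample mean, close to the parameter 9
```

Each handover of a simulated session would then discard that many consecutive packets, which is what makes the survival of the session, rather than a restart, the interesting test case.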
Having installed the Linux machine in the middle of the network, we delve into the TCP/IP stack so that we can create the adaptation functions in a modular, nonblocking input/output, real-time, client/server fashion. In effect, we stop the packet flow by setting up rules with ipchains and using the libpcap interface to bring the stopped packets up to user space for analysis and, ultimately, compression/decompression. Having physically reproduced the architecture of Figure 2, it is a trivial matter to establish an uninterrupted teleconference session whose packets are forced through your Linux machine. IP forwarding must be enabled for this to function, particularly since we are going to stop the flow of packets with ipchains. To verify that IP forwarding is indeed enabled, type:
cat /proc/sys/net/ipv4/ip_forward

The result is equal to one if we are ready to forward packets; if it reads zero, enable forwarding with echo 1 > /proc/sys/net/ipv4/ip_forward. The next step is to verify we can stop and restart the stream without killing the session. This means TCP packets must be uninterrupted; at this point, we need only concentrate on IP/UDP/RTP packets. The following commands will stop and restart the stream:
ipchains -P forward DENY
ipchains -P forward ACCEPT

The -P option sets the chain's default policy and is indiscriminate of protocol; it will stop ICMP, TCP and UDP packets alike.
Now that we can play with the stream, we can be selective about which packets we transmit and how we transmit them.
The link layer (LL) is where we want to pick up our packets in order to retain all the IP fields. The packets are received by the networking stack and queued in a linked list of sk_buff structures, where they are serviced automatically by the bottom-half software interrupts of the kernel in ip_input.c, ip_forward.c and ip_output.c. For a more in-depth treatment of how socket buffers are managed in memory, see Alan Cox's “Network Buffers and Memory Management” (Linux Journal, October 1996). Most user-space programs interface with the networking stack via the Berkeley Packet Filter (BPF) or INET sockets. For security purposes, these socket interfaces were not designed to delve down to the Ethernet or device/physical layer (PL). A compromise is reached by opening a raw socket that retains the IP fields by interacting directly with the IP layer of the stack. Although reading packets in their raw form is supported by libpcap, transmitting them is only feasible through modifications to libpcap itself. The definitive text on the TCP/IP networking stack is UNIX Network Programming, Vol. 1, by W. Richard Stevens. For a more Linux-specific treatment of the subject, the interested reader is referred to David A. Rusling's “Chapter 10, Networks” in The Linux Kernel, which can be found at www.linuxhq.com/guides/TLK/net/net.html.
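To illustrate what retaining all the IP fields buys us, the sketch below unpacks the fixed 20-byte IPv4 header that a raw read hands us, using Python rather than the article's Perl; the sample header bytes are fabricated purely for the example.

```python
import socket
import struct

def parse_ip_header(packet):
    """Unpack the fixed 20-byte IPv4 header (RFC 791 layout)."""
    fields = struct.unpack('!BBHHHBBH4s4s', packet[:20])
    return {
        'version': fields[0] >> 4,
        'ihl': fields[0] & 0x0F,        # header length in 32-bit words
        'total_length': fields[2],
        'id': fields[3],                # the field our gateway will track
        'ttl': fields[5],
        'protocol': fields[6],          # 17 for the UDP/RTP voice packets
        'src': socket.inet_ntoa(fields[8]),
        'dst': socket.inet_ntoa(fields[9]),
    }

# A fabricated header: IPv4, id 0x1234, TTL 64, UDP, 10.0.0.1 -> 10.0.0.2
sample = struct.pack('!BBHHHBBH4s4s', 0x45, 0, 48, 0x1234, 0, 64, 17, 0,
                     socket.inet_aton('10.0.0.1'), socket.inet_aton('10.0.0.2'))
hdr = parse_ip_header(sample)
print(hdr['id'], hdr['protocol'], hdr['src'])
```

A BPF or INET socket would never show us these bytes intact, which is precisely why the raw-socket compromise matters for header compression work.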
A succinct way of circumventing the changes to either the kernel or libpcap needed to pass the raw packets in full, to and from the Ethernet device, is found in the form of a Perl5 CPAN package, namely RawIP (http://quake.skif.net/RawIP/). Figure 3 is a diagrammatic representation that maps the Linux TCP/IP stack and the code in the kernel (2.2.x) responsible for dealing with the packet flow to be compressed.
Listing 1 [at LJ's ftp site] is a rudimentary Perl5 means of picking up the stopped packets, outputting the contents of the IP id field and, in turn, passing them on to their final destination. The source and destination IP addresses are the only parameters required. The script discriminates by protocol, so we are now able to concentrate on just the voice or even, as it turns out, the video packets.
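Listing 1 itself lives at LJ's ftp site; as a rough Python illustration of the same filtering idea (not the listing's actual Perl code), the sketch below keeps only UDP packets by inspecting the protocol byte at offset 9 of the IPv4 header and collects each packet's id field, the value Listing 1 dumps.

```python
import struct

UDP = 17  # IP protocol number for UDP, which carries the RTP voice frames

def is_udp(packet):
    """The protocol number sits at byte offset 9 of the IPv4 header."""
    return packet[9] == UDP

def dump_ids(packets):
    """Collect the IP id field (bytes 4-5, network order) of every
    UDP packet, in the spirit of Listing 1's id_dump output."""
    return [struct.unpack('!H', p[4:6])[0] for p in packets if is_udp(p)]

def fake_header(ident, proto):
    """Fabricate a minimal 20-byte IPv4 header for demonstration."""
    return struct.pack('!BBHHHBBH4s4s', 0x45, 0, 40, ident, 0, 64, proto, 0,
                       b'\x0a\x00\x00\x01', b'\x0a\x00\x00\x02')

packets = [fake_header(1, 17), fake_header(2, 6), fake_header(3, 17)]
print(dump_ids(packets))  # the TCP packet (id 2) is filtered out
```

In the live gateway the packets come from the stopped ipchains flow rather than fabricated headers, but the per-protocol selection is the same single-byte test.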
The script creates a text file, id_dump.txt, that gives a handle on the IP id field during any given session. Extending this to the other fields by quoting them in the script is the first stage in creating a state machine that implements any one of the proposals submitted to the IETF working group. Listing 2 uses the Math::Random Perl5 CPAN package to introduce an average packet (or, in the case of VoIP, frame) error rate of 20% according to a uniform distribution. The effects are immediately distinguishable when it is used in conjunction with the gateway of Listing 1, which now begins to take the form of a high-level wireless channel simulator. The justification for deeming this a sufficient means of corrupting the stream is twofold. A very precise 3G wideband code-division multiple access modulating channel model with multiple Rayleigh fading paths is freely available for download from w3.antd.nist.gov/wctg/3G/3G.html. Its use, while highly recommended, can have a tremendous impact on the real-time nature of the session in the absence of a Beowulf cluster. Furthermore, the added delay undermines the subjective assessment of performance that engineers often rely upon in judging the perceived quality of voice transmission.
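The drop logic of Listing 2 amounts to a single uniform draw per packet. Here is a minimal Python sketch of that idea (the real listing uses Perl's Math::Random; the function name here is my own):

```python
import random

def should_drop(rng, error_rate=0.20):
    """Drop a packet with the given probability, uniformly at random,
    mimicking the 20% average frame error rate of Listing 2."""
    return rng.random() < error_rate

# Seeded so the experiment is repeatable from run to run.
rng = random.Random(42)
n = 100000
dropped = sum(should_drop(rng) for _ in range(n))
print(dropped / n)  # empirical drop rate, close to 0.20
```

Wiring this test in front of the gateway's transmit path is all it takes to turn the forwarding script into a crude but audible channel impairment.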
Now that you are armed with a broadly applicable framework, the next step would be to apply the work to the analysis of video packets and their resilience to our hostile channel, or to pack all of this into an embedded system to create a low-cost teleconferencing tool.