Distance Education Using Linux and the MBone
The recording and playback of DETA sessions is not entirely without problems. The recording system works by capturing all the network packets received at a client station during the initial live session, encoding them along with a time stamp and writing them to disk. On playback, the time stamps are used to deliver an “identical” stream of network packets that exactly mimics the original session and preserves the synchronization of the audio, video and whiteboard. The quality of the recording (or of the live session) can be affected by packet loss due to network congestion, processor loading or other sources. Generally speaking, the loss of a single packet does only minimal damage to the audio or video recording. A missed syllable during playback is easily filled in from the context, and a missed video frame causes only a momentary pause or distortion.

This is often not the case for the whiteboard. One of the ways faculty use the whiteboard is to load pre-prepared PostScript slides containing figures or equations, then use the whiteboard's mark-up capability to provide additional information and lead the class through a discussion of the presented material. A typical PostScript image, even when compressed, may be 10 to 50KB. To stream it to the client systems, the image is broken into multiple network packets by the server and reassembled at the client prior to rendering. If even a single packet is lost in transit, the reassembled image will be incomplete, the rendering will fail, and the entire image will be absent from the client whiteboard.

The whiteboard protocol contains a feature for dealing with this problem during a live session. Each network packet is numbered, so every client can identify when a packet is missing. That client can then send a request to all the other session participants asking for the packet to be retransmitted, and any client that has the requested packet can respond.
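The core record-and-replay idea — frame each captured packet with its offset from the start of the session, then re-deliver packets with the same inter-packet gaps — can be sketched as follows. The framing used here (a 4-byte millisecond offset and a 2-byte length per packet) is a made-up illustration, not the actual mVCR layout:

```python
import struct

def pack_record(offset_ms: int, packet: bytes) -> bytes:
    """Frame one captured packet: 4-byte ms offset, 2-byte length, payload.
    (Hypothetical framing for illustration; the real mVCR wrapping differs.)"""
    return struct.pack("!IH", offset_ms, len(packet)) + packet

def iter_records(blob: bytes):
    """Yield (offset_ms, packet) pairs from a recording blob."""
    pos = 0
    while pos + 6 <= len(blob):
        offset_ms, length = struct.unpack_from("!IH", blob, pos)
        pos += 6
        yield offset_ms, blob[pos:pos + length]
        pos += length

def replay(blob: bytes, send, sleep):
    """Deliver packets so the original inter-packet timing is preserved.
    `send` transmits a packet; `sleep` waits (e.g. time.sleep)."""
    prev_ms = 0
    for offset_ms, packet in iter_records(blob):
        sleep((offset_ms - prev_ms) / 1000.0)
        send(packet)
        prev_ms = offset_ms
```

In a real recorder the `send` callback would write to the multicast socket and the offsets would come from the capture clock; the point of the sketch is only that playback timing is reconstructed entirely from the stored time stamps.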
This late-delivery mechanism works well for the whiteboard because it is less sensitive to synchronization than the audio and video streams. Unfortunately, the mVCR recording tool is passive: it simply records the packets that are received, without requesting retransmission of missing packets. Whenever a PostScript image is received with any packets missing, it will be incomplete in the recording and will not appear during playback. This can significantly degrade the quality of the learning experience, since students viewing the playback are unable to see the prepared material.
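The live-session repair relies on each client noticing gaps in the packet sequence numbers it has received and asking its peers for the missing ones. A minimal sketch of that gap detection (the function name is ours, and the real wb protocol is considerably more involved):

```python
def missing_packets(received_seqs):
    """Return the sequence numbers a live client would ask its peers to
    retransmit, given the set of sequence numbers seen so far.
    Illustrative only; this is not the actual wb repair protocol."""
    if not received_seqs:
        return []
    # Any number between the lowest and highest seen, but not itself
    # seen, corresponds to a packet that was lost in transit.
    expected = range(min(received_seqs), max(received_seqs) + 1)
    return [s for s in expected if s not in received_seqs]
```

A passive recorder like mVCR could in principle run the same check and issue repair requests itself, which is exactly what it does not do — hence the need for the off-line repair tool described next.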
To overcome this difficulty, a repair tool was developed: fix_wb_recording.pl. This Perl tool parses through the whiteboard recording, locates any PostScript images with missing data and replaces them with complete images. Parsing the recording requires detailed knowledge of both the mVCR recording format and the wb data format. Neither of these is well documented, but sufficient information is available on the Web. The mVCR recording format is described by Peter Parnes at www.cdt.luth.se/~peppar/progs/mMOD/doc/fileformat.txt; it consists simply of a header containing basic setup information (multicast address and port number) followed by any number of wrapped wb data packets. The wb data format has never been published, but Lars Rasmusson has posted a reverse-engineered description of it at www.it.kth.se/~d90-lra/wb-proto.html. Although this description is known to be incomplete and incorrect in some details, it was sufficiently accurate for the purpose of repairing the mVCR recordings. Within the wb data format, a PostScript image appears as a sequence (not necessarily consecutive) of draw commands that encode the PostScript data and issue the command for the client to render the complete image.
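The header-plus-wrapped-packets structure makes the recording straightforward to walk. The sketch below uses an invented layout (4-byte multicast address, 2-byte port, then length-prefixed packets) purely to illustrate the parsing loop; the actual field layout is the one in Parnes's fileformat.txt:

```python
import socket
import struct

def parse_recording(blob: bytes):
    """Split a recording into (multicast address, port, wrapped packets).

    The layout here is an assumption for illustration only: a 6-byte
    header (4-byte address, 2-byte port) followed by 2-byte
    length-prefixed wb data packets. Consult the real mVCR format
    description before relying on any of these offsets.
    """
    addr = socket.inet_ntoa(blob[0:4])
    (port,) = struct.unpack("!H", blob[4:6])
    packets = []
    pos = 6
    while pos + 2 <= len(blob):
        (length,) = struct.unpack_from("!H", blob, pos)
        pos += 2
        packets.append(blob[pos:pos + length])
        pos += length
    return addr, port, packets
```

Once the wrapped wb packets are extracted this way, the repair tool can examine each one for the draw commands that carry PostScript data.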
Whenever the parsing routine encounters a PostScript image in the recording, it opens each PostScript file that was used in the original session and searches for a matching data block. Since most instructors prepare all of their PostScript images using a single software tool, the images often contain large sections that are identical. It is therefore usually necessary to check several recorded data blocks before one is found that produces a unique match with one of the original images. Once a unique match is found, each draw command associated with the recorded image is deleted, and new draw commands are inserted containing all of the data blocks of the complete image along with an associated render-image command. After a recording has been completed, DETA gives the user the option of post-processing the recording with the repair tool. If this option is chosen, DETA sends the actual PostScript slides to the VCR server and runs the fix_wb_recording.pl tool. Once this was implemented, problems with the quality of the recorded sessions dropped to nearly zero.
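The unique-match search described above can be sketched in a few lines. This is an illustration of the matching logic only, not the actual Perl implementation; the function name and the dict-of-bytes representation of the original slides are our own:

```python
def find_unique_source(recorded_blocks, originals):
    """Identify which original PostScript file a damaged recorded image
    came from.

    `recorded_blocks` is the sequence of data blocks recovered from the
    recording; `originals` maps filename -> full file contents (bytes).
    Because slides produced by one tool share large identical sections,
    an early block may match several files; the search continues until
    some block narrows the match to exactly one file.
    """
    for block in recorded_blocks:
        matches = [name for name, data in originals.items() if block in data]
        if len(matches) == 1:
            return matches[0]
    return None  # ambiguous or no match; the image cannot be repaired
```

With the source file identified, the repair step is mechanical: drop the damaged image's draw commands from the recording and insert fresh ones carrying every data block of the complete file, followed by the render command.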