Open Source in MPEG
Readers may wonder why a standard was needed at all if the coding algorithm is implemented in software. Shouldn't it suffice to download the code that decodes whatever algorithm was used to produce the bitstream you are interested in?
In the early days of MPEG-4's development this question was asked very often, but today, with the ever-expanding use of MP3, the benefits of having a standard are easier to appreciate. A playback device is not necessarily connected to the network: it may sit on a broadcast channel, or be a stand-alone or portable device. Devices may use many different CPUs, for which developing separate playback code could be too costly; the hardware may use an ASIC for audio-visual decoding that cannot be upgraded; or it may have been designed with just the amount of RAM that the standard algorithm requires. In other words, it is simpler to build business opportunities on a common standard than to struggle with incompatibilities everywhere.
Lastly, it should be kept in mind that compression coding is not a transparent operation: in general, the lower the bitrate, the more the quality suffers, and transcoding from one algorithm to another may simply produce garbage. The idea that compression technology keeps improving indefinitely is also a myth. Only now, after many years, is MPEG re-issuing a call for proposals for video compression technologies, because there is a feeling that there may be something worth considering. For audio compression, MPEG is still at the stage of issuing a call for evidence, because the group is not convinced this is an area currently worth pursuing.
The very size of the standard has transformed the development of the reference software into a huge undertaking. It is therefore interesting to see how such a project was managed. These are the most important features:
The condition was set that every component of the standard, both normative (decoder) and informative (encoder), had to be implemented in software. For any proposal to be accepted and adopted, its source code had to be made available and the copyright released to ISO.
For each portion of the standard, a manager of the code was appointed: a representative of Microsoft and MoMuSys for video in C++ and C respectively, Fraunhofer for natural audio, MIT for Structured Audio, ETRI for Text-to-Speech interface, Optibase for the so-called “Core” (the code portion on which all media decoders and other components plug in), Apple for the so-called MPEG-4 File Format, etc.
A manager of experiments was also appointed for each portion of the standard; this manager integrated the code of accepted tools into the existing code base.
Unlike traditional open-source software projects, only MPEG members could participate in the project. Discussions, however, were usually held (and the practice still continues) on e-mail reflectors that are open to non-MPEG members.
MPEG is a place where new ideas are continuously forged. One idea was generated by the fact that while the reference code is intended to be “reference” (normative or informative as the case may be), it is not intended to be efficient. Therefore, since December 1999, MPEG has been working on a new part of MPEG-4 that will contain optimized code (e.g., optimized ways to search for motion vectors, a computationally expensive part of the standard). Any implementer can take this code and use it free of copyright. The condition has been set, however, that such optimized code should not require patents. A second idea, launched in October 2000, led to the decision to develop an MPEG-4 “reference hardware description”. It is expected that this will further promote the use of MPEG-4 as the basic multimedia infrastructure in both software and hardware.
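To illustrate why motion-vector search is singled out as a computationally expensive target for optimization, here is a minimal sketch of exhaustive block-matching motion estimation. This is not code from the MPEG reference software; the function name, block size, and search radius are illustrative assumptions. The cost grows with the square of the search radius for every block, which is precisely what optimized search strategies try to reduce.

```python
def full_search_mv(ref, cur, by, bx, block=8, radius=4):
    """Exhaustive block matching (illustrative, not the MPEG reference code).

    Finds the displacement (dy, dx) within +/-radius that minimizes the
    sum of absolute differences (SAD) between the block of the current
    frame at (by, bx) and a candidate block in the reference frame.
    Frames are plain 2-D lists of pixel values.
    """
    h, w = len(ref), len(ref[0])
    best, best_sad = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # candidate block would fall outside the frame
            # SAD over the whole block: block*block operations per candidate
            sad = sum(abs(cur[by + i][bx + j] - ref[y + i][x + j])
                      for i in range(block) for j in range(block))
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad
```

Even this toy version performs (2·radius+1)² SAD evaluations of 64 pixel differences each, for every 8×8 block of every frame; real encoders use much larger search windows, which is why faster (logarithmic, diamond, etc.) search patterns are worth standardizing as optimized reference code.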
The text of the so-called “copyright disclaimer” that is found on all MPEG-4 software modules is given below.
This software module was originally developed by <First Name 1> <Last Name 1> (<Company Name 1>) and edited by <First Name 2> <Last Name 2> (<Company Name 2>), <First Name 3> <Last Name 3> (<Company Name 3>), in the course of development of the <MPEG standard>. This software module is an implementation of a part of one or more <MPEG standard> tools as specified by the <MPEG standard>. ISO/IEC gives users of the <MPEG standard> free license to this software module or modifications thereof for use in hardware or software products claiming conformance to the <MPEG standard>. Those intending to use this software module in hardware or software products are advised that its use may infringe existing patents. The original developer of this software module and his/her company, the subsequent editors and their companies, and ISO/IEC have no liability for use of this software module or modifications thereof. Copyright is not released for non-<MPEG standard>-conforming products. <Company Name 1> retains full right to use the code for its own purpose, assign or donate the code to a third party and to inhibit third parties from using the code for non-<MPEG standard>-conforming products. This copyright notice must be included in all copies or derivative works. Copyright © 199_.