Open Source in MPEG
My work experience has been in a telecommunications research establishment. The telecommunications industry used to be characterized by considerable innovation in the network infrastructure, where no investment was spared, and by reluctance to invest in terminal equipment. This was in part because terminals were alien to its culture (even though the more enlightened individuals were aware that without new digital terminals there would not be much need for network innovation), and in part because the terminal was technically and legally outside its competence. The attitude was “Let the manufacturing industry do the job of developing terminals.” Unfortunately, the telecommunications manufacturing industry, accustomed to being pampered and fearing fewer orders from the telcos based on solid CCITT standards, had no desire to invest in something based on the whims of end users it did not understand. The consumer electronics industry, which knew end users better and was accustomed to making business decisions based on its own judgment of a product's validity, still considered telecommunications terminals outside its interest. This explains why, at the end of the 1980s, there was virtually no end-user equipment based on compression technologies, with the exception of facsimile. To make cheap and small terminals, one would have needed ASICs (Application-Specific Integrated Circuits) capable of performing the sophisticated signal-processing functions needed by compression algorithms.
I saw the attempts being made by Philips and RCA in those years to store digital video on CDs for interactive applications (called CD-i and DVI, respectively) as an opportunity to ride a mass market of video compression chips that could also be used for video communication devices. What was required was to replace the laborious and unpredictable “survival-of-the-fittest” market approach of the consumer electronics world with a regular standardization process.
So MPEG started in January 1988, with the mandate extended a few months later to cover audio compression and the functions needed to multiplex and synchronize the two streams (called “systems”). In four years the first standard, MPEG-1, was developed. Interestingly, neither of the two original target applications (interactive CD and digital audio broadcasting) is currently a large user of the standard, and video communication has not become very popular either. On the other hand, MPEG-1 is used by tens of millions of Video CD and MP3 players. One feature of MPEG-1 is remarkable: it was the first audio-visual standard that made full use of computer simulation for its development. The laboratory at which I worked had taken part in the development of a 1.5-2 Mbit/s videoconference codec built with three 12U racks and minimal support from computer simulation. Even more significant for its future implications was the fact that MPEG-1, a standard in five parts, has a software implementation that appears as part 5 of the standard (ISO/IEC 11172-5).
In July 1990, MPEG started its second project, MPEG-2. While MPEG-1 was a very focused standard for well-identified products, MPEG-2 addressed a problem everybody had an interest in: how to convert the 50-year-old analogue television system to a digital, compressed form in such a way that the needs of all possible application domains were supported. This was achieved by developing two system layers. One, the MPEG-2 Transport Stream (TS), was designed for the error-prone environments (cable, satellite and terrestrial) of the transmission application domains. The other, the MPEG-2 Program Stream (PS), was designed to be software-friendly and was used for DVD. The idea was that MPEG-2 would become the common infrastructure for digital television, something that has indeed been achieved if one considers that at any given moment more bits are carried by MPEG-2 TS than by IP. The title of the standard, “Generic Coding of Moving Pictures and Associated Audio”, formally conveyed this intention. By the time MPEG-2 was approved (November 1994), the first examples of real-time MPEG-1 decoding on popular programmable machines had been demonstrated. This was, if there had been any need for it, an incentive to continue the practice of providing reference software for the new standard (ISO/IEC 13818-5).
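To give a concrete sense of the Transport Stream's error-resilience design, a TS carries data in small, fixed-size 188-byte packets, each starting with a sync byte and a 4-byte header, so that a transmission error corrupts only a small, easily resynchronized unit. The following is a minimal illustrative sketch (not part of the standard's reference software) of parsing that header; the field layout follows ISO/IEC 13818-1.

```python
# Minimal sketch: parse the 4-byte header of one MPEG-2 TS packet.
# TS packets are a fixed 188 bytes so that errors on cable, satellite
# or terrestrial links corrupt little data and resync is cheap.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47  # every TS packet starts with this value

def parse_ts_header(packet: bytes) -> dict:
    """Extract the main header fields from a single 188-byte TS packet."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid TS packet")
    return {
        # Set by the demodulator when an uncorrectable error was detected:
        "transport_error": bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),
        # 13-bit Packet Identifier, selecting the elementary stream:
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],
        # 4-bit counter used by receivers to detect lost packets:
        "continuity_counter": packet[3] & 0x0F,
    }

# Example: a null (stuffing) packet, PID 0x1FFF, padded to 188 bytes.
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
print(parse_ts_header(null_packet)["pid"])  # 8191 (0x1FFF, the null PID)
```

The fixed packet size and the continuity counter are what make the TS suitable for error-prone broadcast channels, in contrast to the variable-length packets of the software-friendly Program Stream.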
In July 1993, MPEG started its third project, MPEG-4. Its first goal is reflected in the original title of the project, “very low bitrate audio-visual coding”. Even though no specific mass-market applications were in sight, many sensed that the digitization of narrowband analogue channels, such as the telephone access network (the Internet was not yet a mass phenomenon), would provide interesting opportunities to carry video and audio at bitrates well below 1 Mbit/s, roughly the lowest bitrate supported by MPEG-1 and MPEG-2. For that bitrate range it was clear that, unlike the earlier MPEG standards, a decoder could very well be implemented on a programmable device. It was possible that there would eventually be more software-based than hardware-based implementations of the standard. This is why the reference software, part 5 of MPEG-4 (ISO/IEC 14496-5), has the same normative status as the traditional text-based descriptions of the other parts of MPEG-4.
MPEG-4 became a comprehensive standard as signaled by its current title, “coding of audio-visual objects”. The standard supports the coded representation of individual audio-visual objects whose composition in space and time is signaled to the receiver. The different objects making up a scene can even be of different origins: natural and synthetic.
This does not mean, however, that a particular implementation of the standard is necessarily “complex”. An application developer may choose, among the many profiles (dedicated subsets of the full MPEG-4 toolset), the one best suited to the application at hand. For all these reasons, it is expected that MPEG-4 will become the infrastructure on top of which the currently disjointed world of multimedia will flourish.