Advanced Video Coding on Linux
The impact of H.264 on the world of digital video compression is growing. Companies such as Apple are already switching wholeheartedly to it. As part of the MPEG-4 standard (part 10), H.264 is now a part of both the HD-DVD and Blu-ray specifications for high-definition DVD. And for good reason—H.264 can encode video using very low bitrates while maintaining an incredibly high perceived quality.
Of particular interest are the low-bitrate possibilities this video codec provides. Luckily for those who run Linux, the H.264 codec (also known as the Advanced Video Codec, or AVC) has a successful and effective open-source implementation known as x264. In fact, the x264 Project won the Doom9 2005 codec comparison test (see the on-line Resources). x264 continues to make progress and improvements, and it remains an active project. So let's take advantage of what it offers us: an extremely high-quality AVC encoding tool that can be used right away for DVD and home-movie backups, for creating video clips to stream over the Web, or simply for experimenting with the latest video encoding technology.
The balance of this article focuses on the basic steps involved in creating standard .mp4 files that contain H.264 video coupled with AAC audio (Advanced Audio Coding, also an MPEG standard). The vagaries and subtle corners of hard-core video encoding are beyond the scope of this discussion. But hopefully, this introduction will encourage you to explore the topic further.
Because both AVC and AAC are now MPEG standards, it stands to reason that many tools (commercial and otherwise) already support them. For example, Apple's QuickTime natively plays the video files we will be creating. And, MPlayer, the well-known and successful open-source media player, also supports .mp4 playback.
Creating a standards-compliant video file involves three basic steps: the creation of the encoded video, the creation of the encoded audio and the combination of those two things. Here are the software tools we need:
MPlayer (includes mencoder; CVS version 060109 or higher)
faac 1.24 or higher
MP4Box (part of gpac 0.4.0 or higher)
x264 (compiled with gpac support)
Our goal is to produce a low-bitrate video file suitable for posting on the Web. It will be a small file, but the quality will be exceptional compared with a higher-bitrate XviD encoding. Our source video will be a home movie clip called max.dv, which is a nine-second raw DV file captured directly from a digital video camera.
Let's process the audio first, as it is a pretty straightforward operation. The idea is first to have MPlayer dump the raw pcm audio directly from our video source:
mplayer -ao pcm -vc null -vo null max.dv
This produces a file called audiodump.wav; the video portion of the source file is ignored. Now, encode the WAV file to AAC:
faac --mpeg-vers 4 audiodump.wav
The --mpeg-vers parameter specifies the MPEG version. We now have the audio portion of our work finished and can listen to audiodump.aac by playing it with MPlayer.
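As a quick sanity check, the encoded stream can be auditioned directly:

```shell
# Play back the encoded AAC to verify the audio survived the conversion:
mplayer audiodump.aac
```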
When it comes to encoding the video, we are faced with several options. The highest quality encodes can be made only by using multiple passes. We actually process the source video twice (or more) in order to allow the encoder to pick the best possible distribution of bits across the destination file. Using multiple passes also enables us to pinpoint the bitrate and resulting file size of the output. However, encoding with an AVC encoder, such as x264, is very processor-intensive and thus can run pretty slowly, so we may not want to sit through a lengthy multipass encoding. Instead, we could run the encoding with one pass. This still will produce outstanding results, but never as good as a multipass encode. We also give up the possibility of targeting the resulting file size and bitrate. It all depends on what is most important to you, time or quality.
Fortunately, x264 provides a good middle ground. An option exists to specify a Constant Rate Factor (or Constant Quality), which instructs x264 to take into account the differences between high- and low-motion scenes. Because your eye loses details in high-motion scenes anyway, x264 uses fewer bits in those spots so that it can allocate them elsewhere, resulting in a much improved overall visual quality. This mode allows the highest quality possible without using multiple passes, which is a great time saver. The cost in using this mode, however, is in giving up the ability to determine the final file size and bitrate. Although this is possible with multiple passes, we would be forced to double the encoding time. So for our example, let's stick with one pass, utilizing the Constant Rate Factor feature (--crf) for greatly improved quality. Good values of the Constant Rate Factor range between approximately 18 and 26 (where a lower value produces higher quality but larger file sizes). Your needs in terms of size vs. time vs. quality may be different, however. If so, you should investigate multipass mode further to gain more control.
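If hitting an exact bitrate and file size matters more to you than encoding time, a two-pass x264 invocation might look like the following sketch. The target bitrate, stats file name and output names here are illustrative, and because a FIFO can only be read once, the raw YUV input would have to be regenerated (by re-running mencoder) before each pass:

```shell
# First pass: gather statistics only, discarding the video output.
x264 --pass 1 --bitrate 250 --stats max.stats --fps 23.976 \
    -o /dev/null tmp.fifo.yuv 720x480
# Second pass: use the statistics to distribute bits optimally
# while hitting the 250kbit/s target.
x264 --pass 2 --bitrate 250 --stats max.stats --fps 23.976 \
    -o max-video.mp4 tmp.fifo.yuv 720x480
```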
The x264 encoder accepts only raw YUV 4:2:0 input, so we pipe the output of mencoder directly into x264 through a named pipe:
mkfifo tmp.fifo.yuv
mencoder -vf format=i420 -nosound -ovc raw -of rawvideo \
 -ofps 23.976 -o tmp.fifo.yuv max.dv 2>&1 > /dev/null &
x264 -o max-video.mp4 --fps 23.976 --crf 26 --progress \
 tmp.fifo.yuv 720x480
rm tmp.fifo.yuv
As you can see, we must specify the framerate (--fps); otherwise, x264 will not know what is being fed into it. The same goes for the width and height of the incoming raw video, given as the final argument. Encoding in this way uses the x264 default encoding parameters, which are quite good, but we can make a few improvements. In particular, we can improve some of the encoding strategies it uses without sacrificing too much in the way of extra encoding time. x264 accepts a large number of tunable parameters, all geared toward improving the quality of the resulting output in some way. However, some options are more expensive, time- and processor-wise, than others. And, some options can sacrifice compatibility with certain media players, notably QuickTime. In order to remain compatible with the existing install base of QuickTime users, we need to keep a few things in mind.
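The third step from the overview, combining the encoded video and audio into a single standards-compliant .mp4 file, can be sketched with MP4Box. The output file name here is illustrative, and the exact flags vary slightly between gpac versions:

```shell
# Mux the H.264 video and AAC audio into one .mp4 container:
MP4Box -add max-video.mp4 -add audiodump.aac max-final.mp4
```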