DVD Transcoding with Linux Metacomputing

A Condor high-throughput DVD transcoding system for Linux.
Parallel Video Transcoding

Once the video partitioning stage is done, the video data chunks are submitted as Condor transcoding jobs. These jobs run in the Condor Vanilla universe, because they load the DivX library dynamically, which rules out the static relinking against Condor's libraries that the Standard universe requires.
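
As a rough illustration, a Vanilla-universe transcoding job can be described with a submit file along the following lines; the executable name, paths, arguments and job count are hypothetical, not the actual values used by the prototype:

    # Hypothetical submit description for one cluster of transcoding jobs.
    universe              = vanilla
    executable            = transcode_chunk
    arguments             = --chunk $(Process) --vob-dir /mnt/dvd --out-dir /mnt/out
    should_transfer_files = NO    # input and output folders are shared by all machines
    log                   = transcode.log
    output                = chunk_$(Process).out
    error                 = chunk_$(Process).err
    queue 16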

To transcode a data chunk, each transcoder reads the data directly from the source VOBs and writes its output to a shared output folder. Read/write operations are performed on a frame-by-frame basis: a transcoder reads a frame, transcodes it and writes the result back. This strategy yields better performance than shipping whole chunks to the workers, transcoding them on worker-local filesystems and returning the complete transcoded chunks to the server for joining. All computers use NFS to share both the input VOB files and the output folder. Once the parallel transcoding stage finishes, the transcoded results form a set of independent files, which are concatenated at the master to produce the final DivX movie. Table 2 presents the results.
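
The following Python sketch outlines the idea; open_mpeg2_source and open_divx_sink are placeholder wrappers standing in for the actual MPEG-2 decoding and DivX encoding calls, which are not reproduced here:

    import shutil

    # Sketch of the per-frame worker loop (wrapper names are placeholders;
    # the prototype drives the DivX library directly).
    def transcode_chunk(vob_path, first_frame, n_frames, out_path):
        src = open_mpeg2_source(vob_path, seek_to=first_frame)  # read from the shared VOBs
        dst = open_divx_sink(out_path)                          # write to the shared output folder
        for _ in range(n_frames):
            frame = src.read_frame()        # read one frame
            dst.write(dst.encode(frame))    # transcode it and write the result back
        src.close()
        dst.close()

    # At the master, the independently transcoded chunk files are simply
    # concatenated, in playback order, into the final movie file.
    def join_chunks(chunk_paths, movie_path):
        with open(movie_path, "wb") as movie:
            for path in chunk_paths:
                with open(path, "rb") as chunk:
                    shutil.copyfileobj(chunk, movie)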

We tested the two load-balancing strategies, Small-Chunks and Master-Worker. The test movie was All about My Mother, with a running time of 1 hour and 37 minutes and an original size of 2.94GB. The tuples in the comp column of Table 2 are formed by the first letters of the names of the test-bed computers; for example, g refers to gigabyte and t refers to titan. A - symbol indicates that the computer was not used in that particular test. The chunk size in Small-Chunks was set to 60MB. Video preprocessing time is not included, because it was negligible in all cases.
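
To make the two policies concrete, here is a minimal sketch of how the work could be split; the per-worker frame rates in the Master-Worker case would come from the training stage discussed below, and all names are illustrative rather than the prototype's actual code:

    # Small-Chunks: the movie is cut into fixed-size pieces that idle
    # workers pull on demand, so balancing is only approximate.
    CHUNK_SIZE = 60 * 1024 * 1024   # 60MB, the value used in the tests

    def small_chunks(total_bytes, chunk_size=CHUNK_SIZE):
        return [(offset, min(chunk_size, total_bytes - offset))
                for offset in range(0, total_bytes, chunk_size)]

    # Master-Worker: each worker gets a single chunk whose size is
    # proportional to the throughput it showed during the training stage.
    def master_worker(total_bytes, worker_fps):
        total_fps = sum(worker_fps.values())
        return {name: int(total_bytes * fps / total_fps)
                for name, fps in worker_fps.items()}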

Table 2. Computational Results (t = time, Fps = frames per second)

Several conclusions can be drawn from Table 2. First, according to the Fps column, load balancing is better with Master-Worker than with Small-Chunks. The difference is small, but it tends to grow as the number of machines increases. In general, parallelization increases transcoding performance, which is most evident when a second powerful machine is added (compare [g----] with [gk---]). The impact of adding low-end machines one by one is low (compare [gk---] with [gk--b], and [gk--b] with [gkntb]). However, the combined impact of all the low-end machines is noticeable, especially when starting from a single powerful machine (compare [g----] with [g-ntb]).

To evaluate the behavior of the prototype further, we compared it with two popular transcoding tools, Mencoder and FlaskMpeg. Table 3 shows the results. The speed of the monoprocessor version of our prototype lies between FlaskMpeg's and Mencoder's. Regarding output size, in the worst case (Small-Chunks), our prototype delivers a DivX movie that is only 2.6% larger than FlaskMpeg's output. Indeed, the global compression rates achieved by Small-Chunks (24.67%) and FlaskMpeg (24.05%) are similar (24.67/24.05 ≈ 1.026, which is where the 2.6% figure comes from), and the difference is not relevant once the processing speedup is taken into consideration. It is important to note that FlaskMpeg uses DivX codec v. 5.0.5 Pro, which was not available for Linux at the time this article was written; compression performance may therefore be even closer when the Linux version becomes available.

Table 3. Comparative Results, State-of-the-Art Transcoding Applications

Finally, Figures 1 and 2 show the throughput of the individual machines when the prototype runs with Small-Chunks and Master-Worker load balancing, respectively. The computers do not finish their assignments at exactly the same time. This is to be expected, though: Small-Chunks load balancing is only approximate, and Master-Worker job sizes are assigned according to the results of a training stage, which is representative but not exact.

Figure 1. Individual Computer Throughput, Small-Chunks

Figure 2. Individual Computer Throughput, Master-Worker

Conclusion

In this article, we have presented a Condor high-throughput DVD transcoding system for Linux. Our results indicate that metacomputing-oriented parallel transcoding is of practical interest and can achieve noticeable improvements over existing monoprocessor Windows tools.

Judging from the statistics of our case study, pure Master-Worker produces better results than Small-Chunks, but the difference is minimal and seems irrelevant in practice.

