The Home-Produced Movie Revolution
Will independent movie production grow in the garden fertilized by lousy broadband service? That's the question on the floor.
Let's start with the gear.
The Golden Age of Mass Production began after World War II, when Allied industries defeated Axis industries, and after which winners and losers both industrialized out the wazoo. Among the most significant industrial trends in the '50s and '60s was the branching of consumer electronics off the stout trunk of professional gear. Low-end gear from Brownie cameras to home stereos to transistor radios equipped the consuming classes with abilities that could often mimic, though not quite rival, production qualities still available only to professionals.
This era began to end when personal computing stripped the gears of professional computing. It almost finished ending once the Net became ubiquitous in the world's industrial regions and dial-up became the retrograde exception for Net connection from homes. It will finish ending when professional-grade photo and video gear becomes available at consumer-grade prices, and the only distinctions that matter will be intrinsic to the gear rather than to the customer.
The next era, the one in which the bulk of producers will emerge from a mass market formerly filled only with consumers, will begin when video customers realize they can produce higher-definition video than what they can get over their cable and satellite connections. That will happen quickest for customers who buy 1920 x 1080 screens to take full advantage of their new 1920 x 1080 camcorders, while spending under $2000 for both.
Add in cheap storage on zero-DRM Linux-based NAS (network attached storage) boxes, and we're home free. Literally.
We're getting close now.
Consider the Sony HDR-HC1 HDV camcorder. It's been out since last summer. Listed at around $2,000, it's selling on the Web's street (Froogle) for $949 and up. The HDR-HC1 will shoot at 1080i, which is 1080 vertical pixels of resolution. The "i" stands for interlaced, meaning each scan covers every other line. The next step up is 1080p, with the "p" standing for progressive scanning, in which each scan covers the whole frame. True, 1080p is better than 1080i, but both are far better than the 720 dots (or, in the old TV parlance, lines) of vertical resolution you get with most of the plasma and LCD screens they sell at the big box stores.
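To make the interlaced-vs.-progressive distinction concrete, here's a toy sketch (illustration only, not broadcast code) of how an interlaced scan splits one 1080-line frame into two half-resolution fields, while a progressive scan delivers every line on every pass:

```python
# Toy illustration: interlaced scanning splits one 1080-line frame
# into two fields of 540 lines each; progressive scanning covers
# all 1080 lines in a single pass.

LINES = 1080
frame = list(range(LINES))       # line numbers 0..1079

# Interlaced: each pass covers every other line.
field_odd = frame[0::2]          # lines 0, 2, 4, ...
field_even = frame[1::2]         # lines 1, 3, 5, ...

# Progressive: one pass covers the whole frame.
progressive = frame

print(len(field_odd), len(field_even))   # 540 540
print(len(progressive))                  # 1080
```

That's why 1080i motion can show combing artifacts when the two fields, captured a fraction of a second apart, are woven back together.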
On the display side, I saw lots of 1080i and 1080p screens at CES in January, and talked to makers who said the prices of both would be under $1000 at discounters by Christmas. Some are already out there, if you do some digging.
Production is still a gating factor. LinuxDevCenter has an excellent interview with a developer of Cinelerra, an open-source package modestly described by its makers as a "50,000 watt flamethrower of multimedia editing power". That interview ran over two years ago, and Cinelerra is just one package.
Meanwhile, there will be plenty of amateurs doing high-def production on new Macs and PCs. The "content" will be there. Will most of it suck? Probably. But so what? Some of it won't. Considering how many producers we'll have in the world, the raw amount of good source material will go straight up. Nearly all of it will be personal. And persons will be doing the watching.
For a hint of the growth curve here, look at how many high-quality photos are showing up on Flickr. Project that onto video. Then onto film, and the death of the distinction between the two. Steve Weiderman writes,
George Lucas used a prototype Sony-Panavision 1080p/24 camera to shoot several scenes in the "Phantom Menace" Star Wars feature. Nobody is saying which scenes they are, but people in the know will tell you not to look for video artifacts, look for the really nice scenes. The video scenes were printed to film and intercut with the rest of the film material. The success of that test drove the decision to shoot the next two installments of the Star Wars series completely in the 1080p/24 HDTV format, not in film.
Now, what about distribution?
The problem with cable and satellite carriage is that they have to carry more and more channels, all running 24 x 7 x 365 at whatever bandwidth can be managed. They'll make room for a few 1080i channels, but how much will those be compressed and how will they look? Home users showing stuff off their NAS (or equivalent) boxes won't have to compress what they show. Or at least not as much.
I've been talking with Jim Thompson about the situation. Jim is an expert at too many things to name, as well as a Unix/Linux/etc hacker of long and substantive standing. Here are a few nuggets he passes along:
- Wikipedia has an excellent rundown of the technologies and choices involved in its Digital Cinematography entry
- Check out the Kinetta 4K Film Recorder: http://www.kinetta.com/home.php
- Panasonic's HVX200 is about $9k and shoots in 1080p 24 (that's 24 frames per second, or fps).
- JVC has a 1080p 24 camera as well.
- Sony's HDC-X300 camera is a 3-CCD 1080p unit and costs $15K for the body alone. "Someone is sure to clone this (as a single-CCD unit) and introduce it for $5K," Jim says.
- North American video runs at 29.97 fps. Elsewhere it runs at 25 fps (don't ask). Film for the world's theatres still runs at 24 fps, so the 1080p/24 format maps nicely. There are issues around converting back to NTSC or PAL video, but let's not go there. Look forward.
- Transport bandwidth is the big issue. Jim: "Current 720p and 1080i cameras output video at about 1.5 Gigabits per second, but 1080p would roughly double that to 3 Gbps. To convert that into a standard 19.4 Megabit per second channel for transmission across a cable network, there's a whole set of other technologies that have to be accomplished in between there. 1080i barely fits for some types of content. For almost anything 'live' (sports, concerts, specials), where you can't do off-line compression, you're going to see artifacts."
- A factor: Mark Cuban's 2929 Entertainment, which owns all the Landmark "art" theatres.
- Check out VirtualDub. It's a video capture/processing utility for 32-bit Windows platforms (95/98/ME/NT4/2000/XP), licensed under the GPL. "It lacks the editing power of a general-purpose editor such as Adobe Premiere, but is streamlined for fast linear operations over video. It has batch-processing capabilities for processing large numbers of files and can be extended with third-party video filters." It's aimed toward AVI files, although it can read (not write) MPEG-1 and handle sets of BMP images. Also, "the author groks linux, but has had bad experiences." Maybe some readers can correct that, or carry the project forward on a penguin platform.
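Jim's bandwidth figures are easy to sanity-check with back-of-the-envelope arithmetic. This sketch assumes 24 bits per pixel (8 bits per color channel); real camera links use 10-bit, chroma-subsampled video plus blanking overhead, so treat these as rough estimates rather than exact SDI rates:

```python
# Rough uncompressed-bitrate check on the numbers above.
# Assumption: 24 bits/pixel; real links differ in bit depth and
# subsampling, so these are order-of-magnitude estimates.

def raw_bitrate_gbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed video bitrate in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

print(round(raw_bitrate_gbps(1920, 1080, 30), 2))  # ~1.49: 1080i, 30 frames/s
print(round(raw_bitrate_gbps(1920, 1080, 60), 2))  # ~2.99: 1080p, 60 frames/s

# Fitting the 1080i stream into a 19.4 Mbps broadcast channel
# implies a compression ratio on the order of 75:1.
print(round(raw_bitrate_gbps(1920, 1080, 30) * 1000 / 19.4))  # ~77
```

Compression ratios that steep are why heavily multiplexed "HD" channels can look soft, and why live material, which can't be compressed off-line, shows artifacts first.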
Back to broadband. On both cable and DSL, it's barely growing and still not symmetrical. Offsite storage and backup won't be good businesses until we get symmetrical service to the home, which will happen fastest with phoneline-based (DSL) providers looking to compete aggressively with cable. I doubt this will come from the big phone companies, which can't think outside the regulatory box where they're busy fighting the cable companies. Instead it will have to come from the smaller phone companies, of which there are still a few.
The cable companies, unfortunately, are pickled in asymmetricality. Proof is in the DOCSIS (Data Over Cable Service Interface Specification) pudding. Here's one excerpt from the DOCSIS 2.0 spec:
In the downstream direction, the cable system is assumed to have a passband with a lower edge between 50 and 54 MHz and an upper edge that is implementation-dependent but is typically in the range of 300 to 864 MHz. Within that passband, NTSC analog television signals in 6-MHz channels are assumed to be present on the standard, HRC or IRC frequency plans of [EIA-S542], as well as other narrowband and wideband digital signals. In the upstream direction, the cable system may have a subsplit (5-30 MHz) or extended subsplit (5-40 or 5-42 MHz) passband. NTSC analog television signals in 6-MHz channels may be present, as well as other signals.
Translation: cable Internet service is, by standard, something asymmetrical packed around the edges of TV service to which it is subordinate in importance. Glenn Fleishman in Wi-Fi Networking News recently said, "DOCSIS 3.0 has more upstream capability, but it's highly asymmetrical".
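The asymmetry is visible right in the passband numbers from the DOCSIS excerpt above. A quick sketch, using the spec's typical 864 MHz downstream upper edge and the 42 MHz extended-subsplit upstream ceiling:

```python
# Spectrum asymmetry implied by the DOCSIS 2.0 passbands quoted above.
# Assumptions: 864 MHz downstream upper edge (the spec's typical high
# end) and the 5-42 MHz extended-subsplit upstream.

down_lo, down_hi = 54, 864     # MHz, downstream passband
up_lo, up_hi = 5, 42           # MHz, upstream passband

downstream = down_hi - down_lo  # 810 MHz of spectrum
upstream = up_hi - up_lo        # 37 MHz of spectrum

print(downstream, upstream)          # 810 37
print(round(downstream / upstream))  # ~22:1 in favor of downstream
```

And the real skew is worse than 22:1, since most of that downstream spectrum is carrying TV channels, not data.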
Bottom line: the fattest pipes will be local inside homes. Sharing across the Net will be slow but supported in a store-and-forward way, especially using BitTorrent. Which is good enough to route around all the carriers that still think consumers aren't producers.
And thus homes will become production laboratories for next-generation videos and movies.
Doc Searls is Senior Editor of Linux Journal