Localizing the Broadband Battle
If "all politics is local", as Tip O'Niell famously said, can't we say the same about all business? If so, maybe we should start walking our Net Neutrality talk on our own main streets.
"Congress shaping telecom law in private", reads the headline in the Austin American-Statesman. "While most conference negotiations are closed to public view, lobbyists continue to influence the members and their staffers", the story says.
On May 1st, Susan Crawford commented on Senator Stevens' telecom bill draft, which presumably forms the base material for whatever legislators and lobbyists are cooking up. She wrote, "It's 135 pages long, and the first Title is: 'War on Terrorism.'"
On May 3, Harold Feld wrote, "Well, the Stevens Bill would not just limit FCC authority, it would eliminate it altogether. A dream for the telcos, cable cos and my opposite numbers at Progress and Freedom Foundation, a nightmare for the rest of us." Among many other thoughts, he adds,
Stevens goes further than the House Bill (COPE) in stripping the FCC of authority. The House Bill stripped the FCC of any power to create network neutrality rules, limiting the FCC to enforcing its rather vague Broadband Policy Statement through case-by-case complaints. (The four principles: consumers should be able to access all legal content, consumers should be able to use all legal applications and services of their choice, consumers should be able to attach any device to the network that won't harm the network, and consumers should be able to enjoy competition.) COPE also appears to endorse charging third parties for premium access to subscribers (what I call Whitacre tiering, after AT&T CEO Ed Whitacre, who first popularized the concept).
Apparently, this is just too much gosh-darn government interference and heavy-handed regulation for Senator Stevens. The Stevens Bill removes the ability of the FCC to even adjudicate complaints about violations of the "four principles" contained in the Broadband Policy Statement. In other words, if a broadband ISP tells you sorry, no using Vonage, but you can use our VoIP product, or no attaching an Apple wireless router to our network because we have an exclusive deal with Cisco to use only Linksys, Senator Stevens thinks that's just fine.
And the Stevens Bill is just one item on the growing docket of Stuff To Consider when mulling the matter of Net Neutrality.
I've been one of the voices engaged in the fight for Net Neutrality -- or at least for some of the concepts it represents. Saving the Net, Net Neutrality vs. Net Neutering and Imagining the Maximum Net all took a pro-Neutrality stand.
Net Neutrality basically says the Net's packetized goods are inherently "neutral": the nature of the Net itself does not favor one source of bits over another. It just delivers the goods. In David Isenberg's immortal words, the Net is "stupid" in this respect. Like the Earth's gravity, Neutrality serves an equally simple (and "stupid") purpose for everything it supports. Tim Berners-Lee made the same point in a recent post:
Twenty-seven years ago, the inventors of the Internet designed an architecture which was simple and general. Any computer could send a packet to any other computer. The network did not look inside packets. It is the cleanness of that design, and the strict independence of the layers, which allowed the Internet to grow and be useful. It allowed the hardware and transmission technology supporting the Internet to evolve through a thousandfold increase in speed, yet still run the same applications. It allowed new Internet applications to be introduced and to evolve independently.
When, seventeen years ago, I designed the Web, I did not have to ask anyone's permission. The new application rolled out over the existing Internet without modifying it. I tried then, and many people still work very hard, to make the Web technology, in turn, a universal, neutral platform. It must not discriminate against particular hardware, software, underlying network, language, culture, disability, or against particular types of data.
Anyone can build a new application on the Web, without asking me, or Vint Cerf, or their ISP, or their cable company, or their operating system provider, or their government, or their hardware vendor.
1. Vint Cerf, Bob Kahn and colleagues
2. TCP and IP
3. I did have to ask for port 80 for HTTP
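Tim's layering point is easy to see in code. Here's a minimal sketch: a brand-new, made-up application protocol (a hypothetical "GREET/1.0", purely illustrative) runs over ordinary TCP sockets, and nothing in the network between client and server has to change or even know about it.

```python
import socket
import threading

# A toy application-layer protocol ("GREET/1.0") layered on top of TCP.
# The network just carries packets; it never looks inside them, so a new
# protocol like this one deploys without anyone's permission.

def server(listener):
    conn, _ = listener.accept()
    request = conn.recv(1024).decode()
    if request.startswith("HELLO"):
        conn.sendall(b"GREET/1.0 200 Hi there\n")
    conn.close()

listener = socket.socket()            # plain TCP socket
listener.bind(("127.0.0.1", 0))       # any free port; no registry required
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"HELLO GREET/1.0\n")
reply = client.recv(1024).decode()
client.close()
t.join()
listener.close()

print(reply.strip())  # GREET/1.0 200 Hi there
```

The Web rolled out the same way: HTTP was just another protocol riding on TCP/IP, invisible to the routers underneath.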
I found that post through Richard Bennett, who characterizes it as "flying off to socialist Neverland". Richard, like Tim, is a techie. To say they disagree about Net Neutrality, however, would be a severe understatement. Their political positions are polar opposites.
The big issue here is that the choices that need to be made between good practices and bad are very hard to make in legislation, which tends to be more like an ax than a scalpel. Anti-competitive practices are hard to identify until we have actual markets in which to measure them. So at this point it seems that the prudent thing is to ban only the most egregious abuses in law, and wait and see what really comes to pass as the new IMS networks are rolled out.
In a competitive marketplace, the government usually does not require that vendors treat all customers and all suppliers alike for all purposes. Very often such differences in treatment in a competitive marketplace reflect economic efficiencies that result in cost savings, and these cost savings enhance overall consumer welfare. Avoiding broad prohibitions on such differential treatment gives operators the freedom and flexibility to invest with confidence in new facilities and innovative services consumers may value.
Net neutrality (formerly known as the end-to-end principle) means that the people who provide connections to the Internet don't get to favor some bits over others. This principle is not only under attack, it's about to be regulated out of existence.
On the one hand, it is good for geeks to get interested in how politics can screw up something they value. Larry Lessig has been urging this loudly ever since his famous Free Culture speech at OSCon in the summer of 2002. On the other hand, it may not be good to re-cast a technical issue as a political one, for both technical and political reasons. Though we may be An Army of Davids (as right-leaning and Neutrality-favoring law professor and superblogger Glenn Reynolds calls us in his book by that title), the Goliaths still own the votes. Which is why Net Neutrality is losing in Congress. Jonathan Peterson sums up the prospects:
The reality is that this is a battle that we are going to lose. The telcos are going to be allowed to implement special carriage pricing to pass to content and service providers - perhaps the Supreme Court will strike it down, perhaps not. But just as no one burned down Washington DC when the decisions that made our cellular infrastructure and services fall so far behind were made, no one will burn down DC as our internet goes the same way. (Which doesn't mean that you shouldn't go to Savetheinternet.org and sign their Save Network Neutrality Petition).
So it's time to put on a strategic planning hat and start figuring out what a post-network neutrality world will look like. Only companies with deep pockets will pay the fees for fat content.
I've read that YouTube is burning through $1M/month in hosting fees. That can't continue in a rational world, even without bandwidth surcharges from ISPs. This means that Google and Yahoo will be able to afford to host amateur video content, but most of the other players will die or be purchased by the big guys for their content.
Google and Yahoo are great companies, but an end of network neutrality actually helps them out by locking out new competitors who won't get the best rates for fat pipe carriage. That's a deal with the Devil that's hard to ignore.
To pass, Net Neutrality needs bipartisan support. Toward that end, it probably hasn't helped to have Moveon.org, a partisan organization on the left, come out with a petition to save what it calls "the Internet's First Amendment". Partisanship breeds sportscasts in the media. So, predictably, Net Neutrality became what CNET called "a hotly contested Democratic bid to enshrine extensive Net neutrality regulations in the law books", when it failed in House committee by a 34-22 vote, mostly along partisan lines.
Predictably, the carriers are reframing Net Neutrality as a way for government to mess with business. NETcompetition.org slickly leverages the cable industry's ample lobbying and public relations muscle. Former Clinton White House spokesman Mike McCurry, now in the employ of the telcos, makes a formidable anti-neutrality spokesman.
Still, more Net Neutrality bills keep getting floated, including one co-sponsored by Olympia Snowe, a Republican senator from Maine.
Yet when I met with tech wonks at the Personal Democracy Forum in New York last week, there was general agreement that all these bills, including Stevens', were forms of political posturing (one wonk told me Stevens' bill was made huge just to assure that it wouldn't be read, and would just sit there, doing nothing), and that the best hope for Net Neutrality at this point is that nothing at all happens in Congress. This was also, the wonks agreed, the way to bet. At least for now.
So. What next?
In Comparative Broadband Ideas, Susan Crawford says there's a simple reason why the U.S. is falling farther and farther behind in broadband access, while Korea and Japan lead the way:
The primary reason that Japan and Korea do so much better than the U.S. on any measurement of broadband (availability, penetration, price, speed) is that there is fierce competition in the market for broadband internet access in these countries.
Here in the U.S. access is controlled by monopolies and duopolies. Here in Santa Barbara only one carrier, Cox Communications (a cable company), reaches nearly all the homes and businesses in town. One reason we moved here in 2001 was that Cox's offering was far better than the lousy 100Kb IDSL we were getting at our old house in Silicon Valley. Since then Cox has improved services in a few ways, but in others has cut back. There is some competition from Verizon, which now offers faster upstream speeds at lower prices than Cox, but not for the whole town. Where I live the best Verizon offers is "Up to 768 Kbps/128 Kbps". My Cox connection at the house we just vacated was last measured at 4.371Mb down and 331Kb up (via DSL Reports). At our new house next door, the Cox connection is better: 5Mb down and 768Kb up. At the new place I'm paying $64 or so for 10Mb down and 1Mb up (less if you get it with cable TV, which we don't), but the service guy on the phone told me that level of service actually still isn't deployed in Santa Barbara, which is one of Cox's hind tits, apparently. Still, it's an improvement over what we had for the last three years or more.
Meanwhile in Japan and Korea customers are getting 100Mb service for a fraction of what I pay to Cox. And I have no choice: Cox has to be my provider. Nothing else beats them. Yet.
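To put those numbers in perspective, here's a rough back-of-the-envelope comparison. The speeds come from the paragraphs above; the 700MB file size is just an illustrative choice, not from the article.

```python
# Rough download-time comparison: ~5 Mb/s (my Cox connection) vs.
# ~100 Mb/s (typical Japan/Korea service). Illustrative numbers only.
FILE_MB = 700  # e.g., a CD-sized video file

def minutes(mbps):
    """Minutes to move FILE_MB megabytes at the given megabits/second."""
    return FILE_MB * 8 / mbps / 60

print(f"5 Mb/s:   {minutes(5):.0f} minutes")    # about 19 minutes
print(f"100 Mb/s: {minutes(100):.1f} minutes")  # under a minute
```

Same file, same Internet; the only difference is who gets to compete for your business.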
Competition is the key. Broadband markets need to be opened. Susan Crawford again:
There are three routes towards increasing competition in broadband access: (1) "local loop unbundling," which means requiring the incumbent to physically open its facilities to new entrants, who then find new ways to provide services to end-customers; (2) "wholesale access," which means requiring the incumbent to sell a wholesale broadband access product to all comers; and (3) encouraging other kinds of broadband access ("facilities-based competition"), which means helping new entrants have their own networks without having to deal with the incumbents at all.
I vote for #3. This is what we have in Utah with UTOPIA, where a consortium of 14 cities built out fiber infrastructure that they're wholesaling back to the incumbents who didn't want to make the effort. Loma Linda, CA is mandating 5-15Mbps to premises. Other efforts are going ahead in Burlington, VT, Lafayette, LA and many other localities. Why? People want it. Save Muni Wireless reported last summer,
After the passage of a law in Louisiana requiring a public referendum for municipal broadband, voters in Lafayette approved a $125 million fiber-to-the-home project by a 62% to 38% margin.
Yet here in Santa Barbara a Cox official told me a few months back that too few people are interested in better broadband. This was after a meeting of a local broadband coalition (of which I am a member), where customer after customer talked about their need for exactly that. At another meeting a Cox representative said she didn't "see the problem", adding that customers could get all the fiber they want, if they'll just pay for it. When pressed on costs, estimates ran up to $50,000. Per site.
Of course, the carriers will fight the municipalities (and the companies that the municipalities grant rights to string fiber on poles and pull fiber through buried conduits). Read the Lafayette Pro Fiber Blog for a running account of the fight between citizens (and municipalities on behalf of citizens) and carrier-controlled state legislators.
But with citizens' backing, there isn't much they can do. We might not be able to work around Congress, or even all the state legislatures. But we can work locally to find solutions that work for both vendors and customers. We need to enlist the participation of independent companies that are accustomed to real competition in real markets, and are not just inhabitants of what Bob Frankston calls "The Regulatorium".
In the long run, that's the only way.
Doc Searls is Senior Editor of Linux Journal