I’ve written about network neutrality before. It’s a difficult topic, both because it has so many facets and because there are vehement opinions on all sides of it. Before I left the Internet Architecture Board, I started the process of setting up a network neutrality talk as the technical topic of the plenary session at the upcoming Stockholm IETF meeting — my IAB colleague Marcelo Bagnulo has taken over the planning for that, and is getting a great program lined up.
Writer and BoingBoing editor Cory Doctorow has long been an outspoken advocate of net neutrality, and he’s recently written a technology article in The Guardian, opining that failure to ensure a full, open, neutral Internet will likely block innovative new applications and services.
The article is a good one; read it.
I agree with most of what Cory says, but there’s one section, one point, with which I have to argue. I’ll quote it here, in full, though it’s a longer quote than I’d usually take:
Finally, there’s the question of metered billing for ISP customers. The logic goes like this: “You have a 20Mb/s connection, but if you use that connection as though it were unmetered, you will saturate our bandwidth and everyone will suffer.” ISPs like to claim that their caps are “fair” and that the majority of users fit comfortably beneath them, and that only a tiny fraction of extraordinary bandwidth hogs reach the ceiling.
The reality is that network usage follows a standard statistical distribution, the “Pareto Distribution,” a power-law curve in which the most active users are exponentially more active than the next-most-active group, who are exponentially more active than the next group, and so on. This means that even if you kick off the 2% at the far right-hand side of the curve, the new top 2% will continue to be exponentially more active than the remainder. Think of it this way: there will always be a group of users in the “top 2%” of bandwidth consumption. If you kick those users off, the next-most-active group will then be at the top. You can’t have a population that doesn’t have a ninety-eighth percentile.
But the real problem of per-usage billing is that no one — not even the most experienced internet user — can determine in advance how much bandwidth they’re about to consume before they consume it. Before you clicked on this article, you had no way of knowing how many bytes your computer would consume before clicking on it. And now that you’ve clicked on it, chances are that you still don’t know how many bytes you’ve consumed. Imagine if a restaurant billed you by the number of air-molecules you displaced during your meal, or if your phone-bills varied on the total number of syllables you uttered at 2dB or higher.
Even ISPs aren’t good at figuring this stuff out. Users have no intuition about their bandwidth consumption and precious little control over it.
Metering usage discourages experimentation. If you don’t know whether your next click will cost you 10p or £2, you will become very conservative about your clicks. Just look at the old AOL, which charged by the minute for access, and saw that very few punters were willing to poke around the many offerings its partners had assembled on its platform. Rather, these people logged in for as short a period as possible and logged off when they were done, always hearing the clock ticking away in the background as they worked.
This is good news for incumbents who have already established their value propositions for their customers, but it’s a death sentence for anything new emerging on the net.
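As an aside, Cory’s statistical claim is easy to check for yourself. Here’s a minimal Python sketch; the Pareto shape parameter (1.2) and the population of 100,000 subscribers are arbitrary choices for illustration, not measurements of any real network:

```python
import random

random.seed(42)  # make the run repeatable

# Simulate monthly usage for 100,000 subscribers with a heavy-tailed
# Pareto distribution (shape alpha = 1.2, chosen arbitrarily).
usage = sorted(random.paretovariate(1.2) for _ in range(100_000))

def top_share(values, fraction=0.02):
    """Fraction of total usage consumed by the top `fraction` of users
    (values must be sorted in ascending order)."""
    cutoff = int(len(values) * (1 - fraction))
    return sum(values[cutoff:]) / sum(values)

print(f"Top 2% share of all traffic: {top_share(usage):.0%}")

# "Kick off" the top 2% and measure again: the remaining population is
# still heavy-tailed, so a new top 2% emerges with an outsized share.
remaining = usage[:int(len(usage) * 0.98)]
print(f"Top 2% share after removing them: {top_share(remaining):.0%}")
```

Run it and you’ll see that even after the original heaviest users are removed, the new top 2% still account for a disproportionate share of the remaining traffic; trimming a heavy tail doesn’t make it go away.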
I can easily counter Cory’s analogies with ones of removing speed limits from roads, of elevators that must have maximum capacities, or of the shower that suddenly alters its water flow when someone flushes a toilet elsewhere in the house. But, as usually happens with analogies, these only match the situation so far before they break down. The fact is that no analogy really describes the Internet, with all its unique aspects. And no one is really proposing to charge for every click, every packet, every megabyte. The proposals — at least those currently on the table — are to define thresholds beyond which a higher (and more costly) level of service is required.
It’s true that when you click on a web page, you don’t know whether what will come back represents a kilobyte of plain text, a megabyte of images, or even several megabytes of high-resolution pictures. To be sure, you could even wind up on a silly web site that constantly refreshes a 2-megabyte image for as long as you stay on the page, eating up your bandwidth as you read.
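Even asking the server ahead of time only goes so far. Here’s a minimal Python sketch that issues an HTTP HEAD request and reports the advertised Content-Length; the URL is just an example, and many servers simply don’t report a size at all for dynamically generated pages:

```python
import urllib.request

def advertised_size(url: str):
    """Ask a server, via HTTP HEAD, how large a resource claims to be.
    Returns the Content-Length header as a string, or None if the
    server doesn't report one (common for dynamic pages)."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.headers.get("Content-Length")

size = advertised_size("https://www.theguardian.com/")
print(size if size else "server did not report a size")
```

But per-click byte counts aren’t really what this debate is about.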
What it’s really about is looking at people who spend their Internet time downloading TV shows and movies, say, and who take up a lot of bandwidth in doing it. It’s certainly true that ISPs need to consider that usage in their capacity planning, and need extra capacity to support a significant amount of it. The ISPs think the users who drive that demand should be the ones paying for it, and that’s a defensible point.
A large part of Cory’s point, though, is that forthcoming innovations on the Internet might be — will likely be — data-intensive applications, perhaps even rivalling media downloads for data rates. Allowing companies to charge based on data transfer, the argument goes, will keep these applications from getting off the ground. And that’s certainly something to be concerned about. The question is: what’s the right way to handle it?
We have ample precedent for usage fees. When you plug in a new appliance — put in a new refrigerator, install an air conditioning system for the first time, or get a newer, faster, top-of-the-line computer — you pay for the electricity it uses. And, let’s be realistic, you don’t know how much that will cost. In these days of concern about energy usage, many U.S. appliances have Energy Star stickers, and there are similar programs elsewhere. Still, I think most people aren’t sure at all about how much it’ll cost to run that air conditioner until they’ve done it for a month and seen their electricity bill. And if you run your computer all night to download television programs and movies, you get a similar effect: a high-end computer can cost $50 or more per month to run, if you live in an area where electricity is particularly expensive.
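(Rough arithmetic on that last claim, assuming a machine that draws about 400 watts around the clock: 0.4 kW × 24 hours × 30 days ≈ 288 kWh in a month, which at a steep rate of 17¢ per kWh comes to about $49.)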
And it’s not just the cost of using more electricity billed at your flat rate. Many utility companies charge a higher rate to heavier users. Your cellular phone service, at least in the U.S., usually has some usage level included, with a fairly dear rate per minute if you exceed that. Can we say, outright, that it’s not fair to use the same model for Internet usage?
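To make that threshold model concrete, here’s a minimal Python sketch of how such a bill might be computed; the flat fee, the 250GB allowance, and the overage rate are hypothetical numbers, not any real provider’s plan:

```python
def monthly_bill(gb_used: float,
                 base_fee: float = 30.00,      # flat monthly fee (hypothetical)
                 included_gb: float = 250.0,   # usage allowance (hypothetical)
                 overage_per_gb: float = 1.50  # rate beyond the allowance
                 ) -> float:
    """Threshold billing: a flat fee covers an allowance, and only
    usage beyond the threshold is metered."""
    overage = max(0.0, gb_used - included_gb)
    return base_fee + overage * overage_per_gb

print(monthly_bill(40))   # light user: 30.0 (never touches the meter)
print(monthly_bill(400))  # heavy user: 30 + 150 * 1.50 = 255.0
```

Under this kind of plan, the typical user never sees a meter at all; only the heaviest users pay more.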
To be sure, we have yet to sort out the business model for charging for Internet usage. But we can see what does and doesn’t work: Cory points out that AOL’s meter-by-the-minute plan was a failure, and AOL learned it could do better business by changing how it billed. The thing is, back when we used the network mostly to log in and check email, the by-the-minute model worked well enough. As our use of the network changed, the cost model changed with it.
And so it will likely be with the next Internet innovations. The “utility companies” — the service providers — are right to say that it’s not their responsibility to “ensure that the Googles of tomorrow attain liftoff from the garages in which they are born.” On the other hand, we do have government oversight of utility companies, and some government oversight of ISPs’ rates would not be a bad thing.
The bottom line, though, shows up at the companies’ bottom lines: it costs them more to be capable of moving more data around for more users, and it’s not wrong for them to charge more for users who put a greater load on their systems.